https://en.wikipedia.org/wiki?curid=6928
Colette
Sidonie-Gabrielle Colette (28 January 1873 – 3 August 1954), known as Colette or Colette Willy, was a French author and woman of letters. She was also a mime, actress, and journalist. Colette is best known in the English-speaking world for her 1944 novella "Gigi", which was the basis for the 1958 film and the 1973 stage production of the same name. Her short story collection "The Tendrils of the Vine" is also famous in France. Early life. Sidonie-Gabrielle Colette was born on 28 January 1873 in the village of Saint-Sauveur-en-Puisaye in the department of Yonne, Burgundy. Her father, Captain Jules-Joseph Colette (1829–1905), was a war hero. He was a Zouave of the Saint-Cyr military school who had lost a leg in the Second Italian War of Independence. He was awarded a post as tax collector in the village of Saint-Sauveur-en-Puisaye, where his children were born. His wife, Adèle Eugénie Sidonie, "née" Landoy (1835–1912), was nicknamed "Sido". Colette's great-grandfather, Robert Landois, was a wealthy Martinican mulatto who settled in Charleville in 1787. In an arranged first marriage, to Jules Robineau-Duclos, Colette's mother had two children: Juliette (1860–1908) and Achille (1863–1913). After her remarriage, to Captain Colette, she had two more children: Leopold (1866–1940) and Sidonie-Gabrielle. Colette attended a public school from the ages of 6 to 17. The family was initially well off, but poor financial management substantially reduced their income. Career. In 1893, Colette married Henry Gauthier-Villars (1859–1931), an author and publisher 14 years her senior, who used the pen name "Willy". Her first four novels – the four Claudine stories: "Claudine à l'école" (1900), "Claudine à Paris" (1901), "Claudine en ménage" (1902), and "Claudine s'en va" (1903) – appeared under his name. (The four are published in English as "Claudine at School", "Claudine in Paris", "Claudine Married", and "Claudine and Annie".) The novels chart the coming of age and young adulthood of their titular heroine, Claudine, from an unconventional 15-year-old in a Burgundian village to a doyenne of the literary salons of turn-of-the-century Paris. The story they tell is semi-autobiographical, although Claudine, unlike Colette, is motherless. The marriage to Gauthier-Villars allowed Colette to devote her time to writing. She later said she would never have become a writer if it had not been for Willy. One of the most notorious libertines in Paris, Willy introduced her to avant-garde intellectual and artistic circles and encouraged her lesbian dalliances. It was also he who chose the titillating subject matter of the Claudine novels: "the secondary myth of Sappho... the girls' school or convent ruled by a seductive female teacher." Willy "locked her [Colette] in her room until she produced enough pages to suit him." Post-divorce. Colette and Willy separated in 1906, although their divorce was not final until 1910. Colette had no access to the sizable earnings of the Claudine books – the copyright belonged to Willy – and until 1912 she conducted a stage career in music halls across France, sometimes playing Claudine in sketches from her own novels, earning barely enough to survive and often going hungry and ill. To make ends meet, she turned more seriously to journalism in the 1910s. Around this time she also became an avid amateur photographer.
This period of her life is recalled in "La Vagabonde" (1910), which deals with women's independence in a male society, a theme to which she would regularly return in future works. During these years she embarked on a series of relationships with other women, notably with Natalie Clifford Barney and with Mathilde de Morny, the Marquise de Belbeuf ("Max"), with whom she sometimes shared the stage. On 3 January 1907, an onstage kiss between Max and Colette in a pantomime entitled "Rêve d'Égypte" caused a near-riot, and as a result, they were no longer able to live together openly, although their relationship continued for another five years. In 1912, Colette married Henry de Jouvenel, the editor of "Le Matin". A daughter, Colette de Jouvenel, nicknamed "Bel-Gazou", was born to them in 1913. 1920s and 1930s. In 1920, Colette published "Chéri", portraying love between an older woman and a much younger man. Chéri is the lover of Léa, a wealthy courtesan; Léa is devastated when Chéri marries a girl his own age and delighted when he returns to her, but after one final night together, she sends him away again. Colette's marriage to Jouvenel ended in divorce in 1924, due partly to his infidelities and partly to her affair with her 16-year-old stepson, Bertrand de Jouvenel. In 1925, she met Maurice Goudeket, who became her final husband; the couple stayed together until her death. Colette was by then an established writer ("The Vagabond" had received three votes for the prestigious "Prix Goncourt"). The decades of the 1920s and 1930s were her most productive and innovative period. Set mostly in Burgundy or Paris during the Belle Époque, her work focused on married life and sexuality. It was frequently quasi-autobiographical: "Chéri" (1920) and "Le Blé en Herbe" (1923) both deal with love between an aging woman and a very young man, a situation reflecting her relationship with Bertrand de Jouvenel and with her third husband, Goudeket, who was 16 years her junior. "La Naissance du Jour" (1928) is her explicit criticism of the conventional lives of women, expressed through a meditation on age and the renunciation of love by the character of her mother, Sido. By this time Colette was frequently acclaimed as France's greatest woman writer. "It... has no plot, and yet tells of three lives all that should be known", wrote Janet Flanner of "Sido" (1929). "Once again, and at greater length than usual, she has been hailed for her genius, humanities and perfect prose by those literary journals which years ago... lifted nothing at all in her direction except the finger of scorn." During the 1920s she was associated with the Jewish-Algerian writer Elissa Rhaïs, who adopted a Muslim persona to market her novels. Last years, 1940–1954. Colette was 67 years old when France was occupied by the Germans. She remained in Paris, in her apartment in the Palais-Royal. Her husband Maurice Goudeket, who was Jewish, was arrested by the Gestapo in December 1941, and although he was released after seven weeks through the intervention of the French wife of the German ambassador, Colette lived through the rest of the war years with the anxiety of a possible second arrest. During the Occupation she produced two volumes of memoirs, "Journal à Rebours" (1941) and "De ma Fenêtre" (1942); the two were issued in English in 1975 as "Looking Backwards". She wrote lifestyle articles for several pro-Nazi newspapers. These, and her novel "Julie de Carneilhan" (1941), contain many anti-Semitic slurs. 
In 1944, Colette published what became her most famous work, "Gigi", which tells the story of the 16-year-old Gilberte ("Gigi") Alvar. Born into a family of demimondaines, Gigi is trained as a courtesan to captivate a wealthy lover but defies the tradition by marrying him instead. In 1949 it was made into a French film starring Danièle Delorme and Gaby Morlay, then in 1951 adapted for the stage with the then-unknown Audrey Hepburn (picked by Colette personally) in the title role. The 1958 Hollywood musical movie, starring Leslie Caron and Louis Jourdan, with a screenplay by Alan Jay Lerner and a score by Lerner and Frederick Loewe, won the Academy Award for Best Picture. In the postwar years, Colette became a famous public figure. She had become crippled by arthritis and was cared for by Goudeket, who supervised the preparation of her "Œuvres Complètes" (1948–1950). She continued to write during those years and published "L'Etoile Vesper" (1946) and "Le Fanal Bleu" (1949), in which she reflected on the problems of a writer whose inspiration is primarily autobiographical. She was nominated by Claude Farrère for the Nobel Prize in Literature in 1948. Journalism. Colette's first pieces of journalism (1895–1900) were written in collaboration with her husband, Gauthier-Villars—music reviews for "La Cocarde", a daily founded by Maurice Barres and a series of pieces for La Fronde. Following her divorce from Gauthier-Villars in 1910, she wrote independently for a wide variety of publications, gaining considerable renown for her articles covering social trends, theater, fashion, and film, as well as crime reporting. In December 1910, Colette agreed to write a regular column in the Paris daily, Le Matin—at first under a pseudonym, then as "Colette Willy." One of her editors was Henry de Jouvenel, whom she married in 1912. By 1912, Colette had taught herself to be a reporter: "You have to see and not invent, you have to touch, not imagine .. because, when you see the sheets [at a crime scene] drenched in fresh blood, they are a color you could never invent." In 1914, Colette was named Le Matin's literary editor. Colette's separation from Jouvenel in 1923 forced her to sever ties with Le Matin. Over the next three decades her articles appeared in over two dozen publications, including Vogue, Le Figaro, and Paris-Soir. During the German Occupation of France, Colette continued contributing to daily and weekly publications, a number of them collaborationist and pro-Nazi, including Le Petit Parisien, which became pro-Vichy after January 1941, and La Gerbe, a pro-Nazi weekly. Though her articles were not political in nature, Colette was sharply criticized at the time for lending her prestige to these publications and implicitly accommodating herself to the Vichy regime. Her 26 November 1942 article, "Ma Bourgogne Pauvre" ("My Poor Burgundy"), has been singled out by some historians as tacitly accepting some ultra-nationalist goals that hardline Vichyist writers espoused. After 1945, her journalism was sporadic, and her final pieces were more personal essays than reported stories. Over the course of her writing career, Colette published over 1200 articles for newspapers, magazines, and journals. Death and legacy. Upon her death, on 3 August 1954, she was refused a religious funeral by the Catholic Church on account of her divorces, but given a state funeral, the first French woman of letters to be granted the honour, and interred in Père-Lachaise cemetery. 
Colette was elected to the Belgian Royal Academy (1935) and the Académie Goncourt (1945; she became its president in 1949), and was made a Chevalier (1920) and Grand Officer (1953) of the Légion d'honneur. Colette's numerous biographers have proposed widely differing interpretations of her life and work over the decades. Initially considered a limited if talented novelist (despite the outspoken admiration in her lifetime of figures such as André Gide and Henry de Montherlant), she has been increasingly recognised as an important voice in women's writing. Before Colette's death, Katherine Anne Porter wrote in "The New York Times" that Colette "is the greatest living French writer of fiction; and that she was while Gide and Proust still lived." Singer-songwriter Rosanne Cash paid tribute to the writer in the song "The Summer I Read Colette", on her 1996 album "10 Song Demo". Truman Capote wrote an essay in 1970 about meeting her, called "The White Rose". It tells how, when she saw him admiring a paperweight on a table (the "white rose" of the title), she insisted he take it; Capote initially refused the gift, but "...when I protested that I couldn't accept as a present something she so clearly adored, [she replied] 'My dear, really there is no point in giving a gift unless one also treasures it oneself.'" "Womanhouse" (30 January – 28 February 1972) was a feminist art installation and performance space organized by Judy Chicago and Miriam Schapiro, co-founders of the California Institute of the Arts (CalArts) Feminist Art Program, and was the first public exhibition of art centered upon female empowerment. One of the rooms in it, Leah's Room by Karen LeCocq and Nancy Youdelman, was based on Colette's "Chéri". LeCocq and Youdelman borrowed an antique dressing table and rug, made lace curtains, and covered the bed with satin and lace to create the effect of a boudoir. They filled the closet with old-looking clothes and veiled hats, and wallpapered the walls to add a feeling of nostalgia. LeCocq sat at the dressing table dressed in a nineteenth-century-style costume as Léa, studiously applying make-up over and over and then removing it, replicating the character's attempts to save her fading beauty. "Lucette Stranded on the Island" by Julia Holter, from her 2015 album "Have You in My Wilderness", is based on a minor character from Colette's short story "Chance Acquaintances". In the 1991 film "Becoming Colette", Colette is played by the French actress Mathilda May. In the 2018 film "Colette", the title character is played by Keira Knightley. Both films focus on Colette's life in her twenties, her marriage to her first husband, and the publication of her first novels under his name.
https://en.wikipedia.org/wiki?curid=6932
Charles Alston
Charles Henry "Spinky" Alston (November 28, 1907 – April 27, 1977) was an American painter, sculptor, illustrator, muralist and teacher who lived and worked in the New York City neighborhood of Harlem. Alston was active in the Harlem Renaissance; Alston was the first African-American supervisor for the Works Progress Administration's Federal Art Project. Alston designed and painted murals at the Harlem Hospital and the Golden State Mutual Life Insurance Building. In 1990, Alston's bust of Martin Luther King Jr. became the first image of an African American displayed at the White House. Personal life. Early life. Charles Henry Alston was born on November 28, 1907, in Charlotte, North Carolina, to Reverend Primus Priss Alston and Anna Elizabeth (Miller) Alston, as the youngest of five children. Three survived past infancy: Charles, his older sister Rousmaniere and his older brother Wendell. His father had been born into slavery in 1851 in Pittsboro, North Carolina. After the Civil War, he gained an education and graduated from St. Augustine's College in Raleigh. He became a prominent minister and founder of St. Michael's Episcopal Church, with an African-American congregation. The senior Alston was described as a "race man": an African American who dedicated his skills to the furtherance of the Black race. Reverend Alston met his wife when she was a student at his school. Charles was nicknamed "Spinky" by his father, and kept the nickname as an adult. In 1910, when Charles was three, his father died suddenly of a cerebral hemorrhage. Locals described his father as the "Booker T. Washington of Charlotte". In 1913, Anna Alston married Harry Bearden, Romare Bearden's uncle, making Charles and Romare cousins. The two Bearden families lived across the street from each other; the friendship between Romare and Charles would last a lifetime. As a child, Alston was inspired by his older brother Wendell's drawings of trains and cars, which the young artist copied. Alston also played with clay, creating a sculpture of North Carolina. As an adult he reflected on his memories of sculpting with clay as a child: "I'd get buckets of it and put it through strainers and make things out of it. I think that's the first art experience I remember, making things." His mother was a skilled embroiderer and took up painting at the age of 75. His father was also good at drawing, having wooed Alston's mother Anna with small sketches in the medians of letters he wrote her. In 1915, the Bearden/Alston family moved to New York, as many African-American families did during the Great Migration. Alston's step-father, Henry Bearden, left before his wife and children in order to get work. He secured a job overseeing elevator operations and the newsstand staff at the Bretton Hotel on the Upper West Side. The family lived in Harlem and was considered middle-class. During the Great Depression, the people of Harlem suffered economically. The "stoic strength" seen within the community was later expressed in Charles' fine art. At Public School 179 in Manhattan, the boy's artistic abilities were recognized and he was asked to draw all of the school posters during his years there. In 1917, Harry and Anna Bearden had a daughter together, Aida C. Bearden, who would later marry operatic baritone Lawrence Whisonant. Higher education. Alston graduated from DeWitt Clinton High School, where he was nominated for academic excellence and was the art editor of the school's magazine, "The Magpie". 
He was a member of Arista (the National Honor Society) and also studied drawing and anatomy at the Saturday school of the National Academy of Art. In high school he was given his first oil paints and learned about his aunt Bessye Bearden's art salons, which stars like Duke Ellington and Langston Hughes attended. After graduating in 1925, he attended Columbia University, turning down a scholarship to the Yale School of Fine Arts. Alston entered the pre-architectural program but lost interest after realizing what difficulties many African-American architects had in the field. After also taking classes in pre-med, he decided that math, physics and chemistry "was not just my bag", and he entered the fine arts program. During his time at Columbia, Alston joined Alpha Phi Alpha, worked on the university's "Columbia Daily Spectator", and drew cartoons for the school's magazine "Jester". He also explored Harlem restaurants and clubs, where his love for jazz and black music would be fostered. In 1929, he graduated and received the Arthur Wesley Dow fellowship to study at Teachers College, where he obtained his master's degree in 1931. Later life. During 1942 and 1943, Alston was stationed with the army at Fort Huachuca in Arizona. While working on a mural project at Harlem Hospital, he met Myra Adele Logan, then a surgical intern at the hospital. They were married on April 8, 1944. Their home, which included his studio, was on Edgecombe Avenue near Highbridge Park. The couple lived close to family; at their frequent gatherings Alston enjoyed cooking and Myra played piano. During the 1940s Alston also took occasional art classes, studying under Alexander Kostellow. On April 27, 1977, Alston died after a long bout with cancer, just months after his wife died from lung cancer. His memorial service was held at St. Martin's Episcopal Church in New York City, on May 21, 1977. Professional career. While obtaining his master's degree, Alston was the boys' work director at the Utopia Children's House, started by James Lesesne Wells. He also began teaching at the Harlem Community Art Center, founded by Augusta Savage in the basement of what is now the Schomburg Center for Research in Black Culture. Alston's teaching style was influenced by the work of John Dewey, Arthur Wesley Dow, and Thomas Munro. During this period, Alston began to teach the 10-year-old Jacob Lawrence, whom he strongly influenced. Alston was introduced to African art by the poet Alain Locke. In the late 1920s, Alston joined Bearden and other black artists who refused to exhibit in William E. Harmon Foundation shows, which featured all-black artists in their traveling exhibits. Alston and his friends thought the exhibits were curated for a white audience, a form of segregation which the men protested. They did not want to be set aside but to be exhibited on the same level as art peers of every skin color. In 1938, the Rosenwald Fund provided money for Alston to travel to the South, which was his first return there since leaving as a child. His travel with Giles Hubert, an inspector for the Farm Security Administration, gave him access to certain situations, and he photographed many aspects of rural life. These photographs served as the basis for a series of genre portraits depicting southern black life. In 1940, he completed "Tobacco Farmer", the portrait of a young black farmer in white overalls and a blue shirt with a youthful yet serious look upon his face, sitting in front of the landscape and buildings he works on and in.
That same year Alston received a second round of funding from the Rosenwald Fund to travel South, and he spent extended time at Atlanta University. During the 1930s and early 1940s, Alston created illustrations for magazines such as "Fortune", "Mademoiselle", "The New Yorker", "Melody Maker" and others. He also designed album covers for artists such as Duke Ellington and Coleman Hawkins, as well as book covers for Eudora Welty and Langston Hughes. Alston became staff artist at the Office of War Information and Public Relations in 1940, creating drawings of notable African Americans. These images were used in over 200 black newspapers across the country by the government to "foster goodwill with the black citizenry." Alston left commercial work to focus on his own artwork, and in 1950 he became the first African-American instructor at the Art Students League, where he remained on faculty until 1971. In 1950, his "Painting" was exhibited at the Metropolitan Museum of Art, and it was one of the few pieces purchased by the museum. He landed his first solo exhibition in 1953 at the John Heller Gallery, which represented artists such as Roy Lichtenstein. He exhibited there five times from 1953 to 1958. In 1956, Alston became the first African-American instructor at the Museum of Modern Art, where he taught for a year before going to Belgium on behalf of MoMA and the United States Department of State. He coordinated the children's community center at Expo 58. In 1958, he was awarded a grant from, and was elected a member of, the American Academy of Arts and Letters. In 1963, Alston co-founded Spiral with his cousin Romare Bearden and Hale Woodruff. Spiral served as a collective of conversation and artistic exploration for a large group of artists who "addressed how black artists should relate to American society in a time of segregation." Artists and arts supporters such as Emma Amos, Perry Ferguson and Merton Simpson gathered for Spiral. This group served as the 1960s version of the 306 Group. Alston was described as an "intellectual activist", and in 1968 he spoke at Columbia about his activism. In the mid-1960s, Spiral organized an exhibition of black and white artworks, but the exhibition was never officially sponsored by the group, due to internal disagreements. In 1968, Alston received a presidential appointment from Lyndon Johnson to the National Council of Culture and the Arts. Mayor John Lindsay appointed him to the New York City Art Commission in 1969. In 1973, he was made full professor at City College of New York, where he had taught since 1968. In 1975, he was awarded the first Distinguished Alumni Award from Teachers College. The Art Students League created a 21-year merit scholarship in 1977 in Alston's name, commemorating each year of his tenure. Painting a person and a culture. Alston shared studio space with Henry Bannarn at 306 W. 141st Street, which served as an open space for artists, photographers, musicians, writers and the like. Other artists held studio space at 306, such as Jacob Lawrence, Addison Bates and his brother Leon. During this time Alston founded the Harlem Artists Guild with Savage and Elba Lightfoot to work toward equality in Works Progress Administration art programs in New York. During the early years of 306, Alston focused on mastering portraiture. His early works such as "Portrait of a Man" (1929) show Alston's detailed and realistic style, depicted through pastels and charcoals and inspired by the style of Winold Reiss.
In his "Girl in a Red Dress" (1934) and "The Blue Shirt" (1935), Alston used modern and innovative techniques for his portraits of young individuals in Harlem. "Blue Shirt" is thought to be a portrait of Jacob Lawrence. During this time he also created "Man Seated with Travel Bag" (c. 1938–40), showing the seedy and bleak environment, contrasting with work like the racially charged "Vaudeville" (c. 1930) and its caricature style of a man in blackface. Inspired by his trip south, Alston began his "family series" in the 1940s. Intensity and angularity come through in the faces of the youth in his portraits "Untitled (Portrait of a Girl)" and "Untitled (Portrait of a Boy)". These works also show the influence that African sculpture had on his portraiture, with "Portrait of a Boy" showing more cubist features. Later family portraits show Alston's exploration of religious symbolism, color, form and space. His family group portraits are often faceless, which Alston states is the way that white America views blacks. Paintings such as "Family" (1955) show a woman seated and a man standing with two children – the parents seem almost solemn while the children are described as hopeful and with a use of color made famous by Cézanne. In "Family Group" (c. 1950) Alston's use of gray and ochre tones brings together the parents and son as if one with geometric patterns connecting them together as if a puzzle. The simplicity of the look, style and emotion upon the family is reflective and probably inspired by Alston's trip south. His work during this time has been described as being "characterized by his reductive use of form combined with a sun-hued" palette. During this time he also started to experiment with ink and wash painting, which is seen in work such as "Portrait of a Woman" (1955), as well as creating portraits to illustrate the music surrounding him in Harlem. "Blues Singer #4" shows a female singer on stage with a white flower on her shoulder and a bold red dress. "Girl in a Red Dress" is thought to be Bessie Smith, whom he drew many times when she was recording and performing. Jazz was an important influence in Alston's work and social life, which he expressed in such works as "Jazz" (1950) and "Harlem at Night". The 1960s civil rights movement influenced his work deeply, and he made artworks expressing feelings related to inequality and race relations in the United States. One of his few religious artworks was "Christ Head" (1960), which had an angular "Modiglianiesque" portrait of Jesus Christ. Seven years later he created "You never really meant it, did you, Mr. Charlie?" which, in a similar style as "Christ Head", shows a black man standing against a red sky "looking as frustrated as any individual can look", according to Alston. Modernism. Experimenting with the use of negative space and organic forms in the late 1940s, by the mid-1950s Alston began creating notably modernist style paintings. "Woman with Flowers" (1949) has been described as a tribute to Modigliani. "Ceremonial" (1950) shows that he was influenced by African art. Untitled works during this era show his use of color overlay, using muted colors to create simple layered abstracts of still lifes. "Symbol" (1953) relates to Picasso's "Guernica", which was a favorite work of Alston's. His final work of the 1950s, "Walking", was inspired by the Montgomery bus boycott. It is taken to represent "the surge of energy among African Americans to organize in their struggle for full equality." 
Alston is quoted as saying, "The idea of a march was growing...It was in the air...and this painting just came. I called it "Walking" on purpose. It wasn't the militancy that you saw later. It was a very definite walk-not going back, no hesitation." Black and white. The civil rights movement of the 1960s was a major influence on Alston. In the late 1950s, he began working in black and white, which he continued up until the mid-1960s, and the period is considered one of his most powerful. Some of the works are simple abstracts of black ink on white paper, similar to a Rorschach test. "Untitled" (c. 1960s) shows a boxing match, with an attempt to express the drama of the fight through few brushstrokes. Alston worked with oil-on-Masonite during this period as well, using impasto, cream, and ochre to create a moody cave-like artwork. "Black and White #1" (1959) is one of Alston's more "monumental" works. Gray, white and black come together to fight for space on an abstract canvas, in a softer form than the more harsh Franz Kline. Alston continued to explore the relationship between monochromatic hues throughout the series which Wardlaw describes as "some of the most profoundly beautiful works of twentieth-century American art." Murals. Charles Alston's early mural work was inspired by the work of Aaron Douglas, Diego Rivera and José Clemente Orozco. He met Orozco when they did mural work in New York. In 1943, Alston was elected to the board of directors of the National Society of Mural Painters. He created murals for the Harlem Hospital, Golden State Mutual, American Museum of Natural History, Public School 154, the Bronx Family and Criminal Court, and the Abraham Lincoln High School in Brooklyn, New York. Harlem Hospital Murals. Originally hired as an easel painter, in 1935 Alston became the first African-American supervisor to work for the Works Progress Administration's Federal Art Project (FAP) in New York. This was his first mural. At this time he was awarded Works Progress Administration Project Number 1262 – an opportunity to oversee a group of artists creating murals and to supervise their painting for the Harlem Hospital. It was the first government commission ever awarded to African-American artists, who included Beauford Delaney, Seabrook Powell and Vertis Hayes. He also had the chance to create and paint his own contribution to the collection: "Magic in Medicine" and "Modern Medicine". These paintings were part of a diptych completed in 1936 depicting the history of medicine in the African-American community and Beauford Delaney served as assistant. When creating the murals, Alston was inspired by the work of Aaron Douglas, who a year earlier had created the public art piece "Aspects of Negro Life" for the New York Public Library. He had researched traditional African culture, including traditional African medicine. "Magic in Medicine", which depicts African culture and holistic healing, is considered one of "America's first public scenes of Africa". All of the mural sketches submitted were accepted by the FAP; however, hospital superintendent Lawrence T. Dermody and commissioner of hospitals S.S. Goldwater rejected four proposals, due to what they said was an excessive amount of African-American representation in the works. The artists fought their response, writing letters to gain support. Four years later they succeeded in gaining the right to complete the murals. 
The sketches for "Magic in Medicine" and "Modern Medicine" were exhibited in the Museum of Modern Art's "New Horizons in American Art". Condition. Alston's murals were hung in the Women's Pavilion of the hospital over uncapped radiators, which caused the paintings to deteriorate from the steam. Plans failed to recap the radiators. In 1959, Alston estimated, in a letter to the New York State Department of Public Works, that the conservation would cost $1,500 but the funds were never acquired. In 1968, after the assassination of Martin Luther King Jr., Alston was asked to create another mural for the hospital, to be placed in a pavilion named after the slain civil rights movement leader. It was to be titled "Man Emerging from the Darkness of Poverty and Ignorance into the Light of a Better World." One year after Alston's death in 1977, a group of artists and historians, including the renowned painter and collagist Romare Bearden and art historian Greta Berman, together with administrators from the hospital, and from the New York City Art Commission, examined the murals, and presented a proposal for their restoration to then-mayor Ed Koch. The request was approved, and conservator Alan Farancz set to work in 1979, rescuing the murals from further decay. Many years passed, and the murals began to deteriorate again – especially the Alston works, which continued to suffer effects from the radiators. In 1991, the Municipal Art Society's Adopt-a-Mural program was launched, and the Harlem Hospital murals were chosen for further restoration (Greta Berman. Personal experience). A grant from Alston's sister Rousmaniere Wilson and step-sister Aida Bearden Winters assisted in completing a restoration of the works in 1993. In 2005, Harlem Hospital announced a $2 million project to conserve Alston's murals and three other pieces in the original commissioned project as part of a $225 million hospital expansion. Golden State Mutual murals. In the late 1940s, Alston became involved in a mural project commissioned by Golden State Mutual Life Insurance Company, which asked the artists to create work related to African-American contributions to the settling of California. Alston worked with Hale Woodruff on the murals in a large studio space in New York; they used ladders to reach the upper parts of the canvas. The artworks, which are considered "priceless contributions to American narrative art", consist of two panels: "Exploration and Colonization" by Alston and "Settlement and Development" by Woodruff. Alston's piece covers the period of 1527 to 1850. Images of mountain man James Beckwourth, Biddy Mason, and William Leidesdorff are portrayed in the well-detailed historical mural. Both artists kept in contact with African Americans on the West Coast during creation of the murals, which influenced their content and depictions. The murals were unveiled in 1949, and have been on display in the lobby of the Golden State Mutual Headquarters. Due to economic downturn in the early 21st century, Golden State was forced to sell their entire art collection to ward off its mounting debts. As of spring 2011 the National Museum of African American History and Culture had offered $750,000 to purchase the artworks. This generated controversy, as the artworks have been estimated to be worth at least $5 million. Supporters tried to protect the murals by gaining city landmark protections by the Los Angeles Conservancy. 
The state of California had declined philanthropic proposals to keep the murals in their original location, and the Smithsonian Institution withdrew its offer. The disposition of the murals was the subject of a court case over jurisdiction, which remained unresolved in the spring of 2011. The matter was resolved later in 2011, when the Golden State Mutual Life Insurance building was added to the National Register of Historic Places. The building was purchased by Community Impact Development, a partnership formed to provide a new home for the South Central Los Angeles Regional Center, an agency that provides services to people with developmental disabilities. The building was renovated in 2015. The murals remain in the lobby. Sculpture. Alston also created sculptures. "Head of a Woman" (1957) shows his shift toward a "reductive and modern approach to sculpture...where facial features were suggested rather than fully formulated in three dimensions". In 1970, Alston was commissioned by the Community Church of New York to create a bust of Martin Luther King Jr. for $5,000, with only five copies produced. In 1990, Alston's bronze bust of Martin Luther King Jr. (1970) became the first image of an African American to be displayed in the White House. When Barack Obama became the first black president in 2009, he brought the bust of Martin Luther King Jr. into the Oval Office, replacing a bust of Winston Churchill. This marked the first time an image of an African American was displayed in the president's work quarters. Furthermore, the bust became a prominent work seen in official portraits of visiting dignitaries. A second copy of the famous Martin Luther King Jr. bust is now displayed in Washington for the public to view up close. World War II propaganda. Scholars have theorized that during World War II the black press strove to appeal to black readers while also appeasing the U.S. government by supporting the war. For the U.S. Office of War Information, Alston produced over one hundred propagandistic illustrations that supported the national position on the war. Simultaneously, the cartoons were targeted at a black audience, designed exclusively for publication in the weekly black newspapers to address specific, controversial issues in the black community. Reception. Art critic Emily Genauer stated that Alston "refused to be pigeonholed", referring to the varied exploration in his artwork. Patron Lemoine Deleaver Pierce said of Alston's work: "Never thought of as an innovative artist, Alston generally ignored popular art trends and violated many mainstream art conventions; he produced abstract and figurative paintings often simultaneously, refusing to be stylistically consistent, and during his 40-year career he worked prolifically and unapologetically in both commercial and fine art." Romare Bearden described Alston as "...one of the most versatile artists whose enormous skill led him to a diversity of styles..." Bearden also described the professionalism and impact that Alston had on Harlem and the African-American community: "[Alston] was a consummate artist and a voice in the development of African American art who never doubted the excellence of all people's sensitivity and creative ability. During his long professional career, Alston significantly enriched the cultural life of Harlem. In a profound sense, he was a man who built bridges between Black artists in varying fields, and between other Americans."
Writer June Jordan described Alston as "an American artist of first magnitude, and he is a Black American artist of undisturbed integrity."
https://en.wikipedia.org/wiki?curid=6933
Chromatin
Chromatin is a complex of DNA and protein found in eukaryotic cells. Its primary function is to package long DNA molecules into more compact, denser structures. This prevents the strands from becoming tangled and also plays important roles in reinforcing the DNA during cell division, preventing DNA damage, and regulating gene expression and DNA replication. During mitosis and meiosis, chromatin facilitates proper segregation of the chromosomes in anaphase; the characteristic shapes of chromosomes visible during this stage are the result of DNA being coiled into highly condensed chromatin. The primary protein components of chromatin are histones. An octamer of two copies each of four core histones (histone H2A, H2B, H3, and H4) binds to DNA and functions as an "anchor" around which the strands are wound. In general, there are three levels of chromatin organization: DNA wound around histone octamers to form nucleosomes ("beads on a string"); nucleosome arrays folded into a compact fibre of roughly 30 nm; and higher-order packing of that fibre into the highly condensed metaphase chromosome. Many organisms, however, do not follow this organization scheme. For example, spermatozoa and avian red blood cells have more tightly packed chromatin than most eukaryotic cells, and trypanosomatid protozoa do not condense their chromatin into visible chromosomes at all. Prokaryotic cells have entirely different structures for organizing their DNA (the prokaryotic chromosome equivalent is called a genophore and is localized within the nucleoid region). The overall structure of the chromatin network further depends on the stage of the cell cycle. During interphase, the chromatin is structurally loose to allow access to RNA and DNA polymerases that transcribe and replicate the DNA. The local structure of chromatin during interphase depends on the specific genes present in the DNA. Regions of DNA containing genes which are actively transcribed ("turned on") are less tightly compacted and closely associated with RNA polymerases in a structure known as euchromatin, while regions containing inactive genes ("turned off") are generally more condensed and associated with structural proteins in heterochromatin. Epigenetic modification of the structural proteins in chromatin via methylation and acetylation also alters local chromatin structure and therefore gene expression. Chromatin structure is still only partially understood and remains an active area of research in molecular biology. Dynamic chromatin structure and hierarchy. Chromatin undergoes various structural changes during the cell cycle. Histone proteins are the basic packers and arrangers of chromatin and can be modified by various post-translational modifications to alter chromatin packing (histone modification). Most modifications occur on histone tails. The positively charged histone cores only partially counteract the negative charge of the DNA phosphate backbone, resulting in a net negative charge for the overall structure. This charge imbalance within the polymer causes electrostatic repulsion between neighboring chromatin regions and promotes interactions with positively charged proteins, molecules, and cations. As these modifications occur, the electrostatic environment surrounding the chromatin changes, and the level of chromatin compaction is altered. The consequences in terms of chromatin accessibility and compaction depend both on the modified amino acid and the type of modification. For example, histone acetylation results in loosening and increased accessibility of chromatin for replication and transcription.
Lysine trimethylation can either lead to increased transcriptional activity (trimethylation of histone H3 lysine 4) or transcriptional repression and chromatin compaction (trimethylation of histone H3 lysine 9 or lysine 27). Several studies suggested that different modifications could occur simultaneously. For example, it was proposed that a bivalent structure (with trimethylation of both lysine 4 and 27 on histone H3) is involved in early mammalian development. Another study tested the role of acetylation of histone H4 at lysine 16 in chromatin structure and found that homogeneous acetylation inhibited 30 nm chromatin fibre formation and blocked ATP-dependent remodeling. This single modification changed the dynamics of the chromatin, which shows that acetylation of H4 at K16 is vital for the proper intra- and inter-functionality of chromatin structure. Polycomb-group proteins play a role in regulating genes through modulation of chromatin structure. For additional information, see Chromatin variant, Histone modifications in chromatin regulation, and RNA polymerase control by chromatin structure. Structure of DNA. In nature, DNA can form three structures: A-, B-, and Z-DNA. A- and B-DNA are very similar, forming right-handed helices, whereas Z-DNA is a left-handed helix with a zig-zag phosphate backbone. Z-DNA is thought to play a specific role in chromatin structure and transcription because of the properties of the junction between B- and Z-DNA. At the junction of B- and Z-DNA, one pair of bases is flipped out from normal bonding. These junctions play a dual role: they are sites of recognition for many proteins and act as a sink for the torsional stress generated by RNA polymerase or nucleosome binding. DNA stores information as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The order, or sequence, of these bases encodes the information used to build and regulate an organism. A pairs with T and C pairs with G to form base pairs; each base is also joined to a sugar and a phosphate molecule, forming a nucleotide, and the nucleotides are arranged in two long strands that coil around each other as a double helix. In eukaryotes, this DNA is contained within the cell nucleus, where it carries the hereditary information of the cell, and the two strands are held together by hydrogen bonds between their nitrogenous bases. Nucleosomes and beads-on-a-string. The basic repeat element of chromatin is the nucleosome, interconnected by sections of linker DNA, a far shorter arrangement than pure DNA in solution. In addition to the core histones, a linker histone, H1, contacts the exit/entry of the DNA strand on the nucleosome. The nucleosome core particle, together with histone H1, is known as a chromatosome. Nucleosomes, with about 20 to 60 base pairs of linker DNA, can form, under non-physiological conditions, an approximately 11 nm "beads-on-a-string" fibre. The nucleosomes bind DNA non-specifically, as required by their function in general DNA packaging. There are, however, strong DNA sequence preferences that govern nucleosome positioning. This is due primarily to the varying physical properties of different DNA sequences: for instance, adenine (A) and thymine (T) are more favorably compressed into the inner minor groove.
This means nucleosomes can bind preferentially at one position approximately every 10 base pairs (the helical repeat of DNA), where the DNA is rotated to maximise the number of A and T bases that will lie in the inner minor groove (see nucleic acid structure). 30-nm chromatin fiber in mitosis. With the addition of H1, during mitosis the beads-on-a-string structure can coil into a 30 nm-diameter helical structure known as the 30 nm fibre or filament. The precise structure of the chromatin fiber in the cell is not known in detail. This level of chromatin structure is thought to be the form of heterochromatin, which contains mostly transcriptionally silent genes. Electron microscopy studies have demonstrated that the 30 nm fiber is highly dynamic, such that it unfolds into a 10 nm beads-on-a-string fiber when traversed by an RNA polymerase engaged in transcription. The existing models commonly accept that the nucleosomes lie perpendicular to the axis of the fibre, with linker histones arranged internally. A stable 30 nm fibre relies on the regular positioning of nucleosomes along DNA. Linker DNA is relatively resistant to bending and rotation. This makes the length of linker DNA critical to the stability of the fibre, requiring nucleosomes to be separated by lengths that permit rotation and folding into the required orientation without excessive stress to the DNA. In this view, different lengths of the linker DNA should produce different folding topologies of the chromatin fiber. Recent theoretical work, based on electron-microscopy images of reconstituted fibers, supports this view. DNA loops. The beads-on-a-string chromatin structure has a tendency to form loops. These loops allow interactions between different regions of DNA by bringing them closer to each other, which increases the efficiency of gene interactions. This process is dynamic, with loops forming and disappearing. The loops are regulated by two main elements: the ring-shaped cohesin complex, which extrudes the loops, and the DNA-binding protein CTCF, whose binding sites mark loop boundaries. There are many other elements involved; for example, Jpx regulates the binding sites of CTCF molecules along the DNA fiber. Spatial organization of chromatin in the cell nucleus. The spatial arrangement of the chromatin within the nucleus is not random: specific regions of the chromatin can be found in certain territories. Examples of such territories are the lamina-associated domains (LADs) and the topologically associating domains (TADs), which are bound together by protein complexes. Currently, polymer models such as the Strings & Binders Switch (SBS) model and the Dynamic Loop (DL) model are used to describe the folding of chromatin within the nucleus. The arrangement of chromatin within the nucleus may also play a role in the nuclear response to mechanical stress and in restoring the nuclear membrane after deformation. When chromatin is condensed, the nucleus becomes more rigid. When chromatin is decondensed, the nucleus becomes more elastic, with less force exerted on the inner nuclear membrane. This observation sheds light on other possible cellular functions of chromatin organization outside of genomic regulation. Chromatin and bursts of transcription. Chromatin and its interactions with enzymes have been researched, and the conclusion is that chromatin is a relevant and important factor in gene expression. Vincent G. Allfrey, a professor at Rockefeller University, stated that RNA synthesis is related to histone acetylation. The lysine residues at the ends of the histones are positively charged.
The acetylation of these tails would make the chromatin ends neutral, allowing for DNA access. When the chromatin decondenses, the DNA is open to entry of molecular machinery. Fluctuations between open and closed chromatin may contribute to the discontinuity of transcription, or transcriptional bursting. Other factors are probably involved, such as the association and dissociation of transcription factor complexes with chromatin. Specifically, RNA polymerase and transcriptional proteins have been shown to congregate into droplets via phase separation, and recent studies have suggested that 10 nm chromatin demonstrates liquid-like behavior, increasing the targetability of genomic DNA. The interactions between linker histones and disordered tail regions act as an electrostatic glue that organizes large-scale chromatin into a dynamic, liquid-like domain. Decreased chromatin compaction comes with increased chromatin mobility and easier transcriptional access to DNA. The phenomenon, as opposed to simple probabilistic models of transcription, can account for the high variability in gene expression occurring between cells in isogenic populations. Alternative chromatin organizations. During metazoan spermiogenesis, the spermatid's chromatin is remodeled into a more tightly packaged, widened, almost crystal-like structure. This process is associated with the cessation of transcription and involves nuclear protein exchange. The histones are mostly displaced, and replaced by protamines (small, arginine-rich proteins). It is proposed that in yeast, regions devoid of histones become very fragile after transcription; HMO1, an HMG-box protein, helps in stabilizing nucleosome-free chromatin. Chromatin and DNA repair. A variety of internal and external agents can cause DNA damage in cells. Many factors influence how the repair route is selected, including the cell cycle phase and the chromatin segment where the break occurred. In terms of initiating repair at 5' DNA ends, p53-binding protein 1 (53BP1) and BRCA1 are important protein components that influence the choice of double-strand break repair pathway. The 53BP1 complex attaches to chromatin near DNA breaks and activates downstream factors such as Rap1-Interacting Factor 1 (RIF1) and shieldin, which protect DNA ends against nucleolytic destruction. DNA damage occurs in the context of chromatin, and the constantly changing chromatin environment has a large effect on the repair process. To allow access to and repair of damaged DNA, the chromatin surrounding the lesion is modified, largely through the addition of chemical groups, namely phosphate, acetyl, and one or more methyl groups, to histone residues; these modifications regulate the access of repair proteins to the DNA. The damaged region is then repaired by processing the damaged bases and resynthesizing the affected stretch of DNA. To maintain genomic integrity, double-strand breaks are repaired by homologous recombination or by the classical non-homologous end joining pathway. The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow the critical cellular process of DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process.
Chromatin relaxation occurs rapidly at the site of DNA damage. This process is initiated by the protein PARP1, which starts to appear at DNA damage sites in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. Next, the chromatin remodeler Alc1 quickly attaches to the product of PARP1 and arrives at the DNA damage within 10 seconds of the damage. About half of the maximum chromatin relaxation, presumably due to the action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11 to initiate DNA repair within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA damage occurrence. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 minutes. Chromatin and knots. It has been a puzzle how decondensed interphase chromosomes remain essentially unknotted. The natural expectation is that, in the presence of type II DNA topoisomerases that permit passages of double-stranded DNA regions through each other, all chromosomes should reach the state of topological equilibrium. The topological equilibrium in highly crowded interphase chromosomes forming chromosome territories would result in the formation of highly knotted chromatin fibres. However, Chromosome Conformation Capture (3C) methods revealed that the decay of contacts with genomic distance in interphase chromosomes is practically the same as in the crumpled globule state that is formed when long polymers condense without the formation of any knots. To remove knots from highly crowded chromatin, one would need an active process that not only provides the energy to move the system from the state of topological equilibrium but also guides topoisomerase-mediated passages in such a way that knots are efficiently removed instead of being made even more complex. It has been shown that the process of chromatin-loop extrusion is ideally suited to actively unknot chromatin fibres in interphase chromosomes. Chromatin: alternative definitions. The term, introduced by Walther Flemming, has multiple meanings. The first definition allows for "chromatins" to be defined in other domains of life, like bacteria and archaea, using any DNA-binding proteins that condense the molecule. These proteins are usually referred to as nucleoid-associated proteins (NAPs); examples include AsnC/LrpC with HU. In addition, some archaea do produce nucleosomes from proteins homologous to eukaryotic histones. Chromatin remodeling: chromatin remodeling can result from covalent modifications of histones or from remodeling complexes that physically reposition, move, or remove nucleosomes. A study by Sanosaka et al. (2022) reported that the chromatin remodeler CHD7 regulates cell type-specific gene expression in human neural crest cells.
https://en.wikipedia.org/wiki?curid=6934
Condition number
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given $f(x) = y$, one is solving for $x$, and thus the condition number of the (local) inverse must be used. The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables), there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called "backward stability"; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms. As a rule of thumb, if the condition number $\kappa(A) = 10^k$, then up to $k$ digits of accuracy may be lost on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy). Matrices. For example, the condition number associated with the linear equation $Ax = b$ gives a bound on how inaccurate the solution $x$ will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution $x$ will change with respect to a change in $b$. Thus, if the condition number is large, even a small error in $b$ may cause a large error in $x$. On the other hand, if the condition number is small, then the error in $x$ will not be much bigger than the error in $b$. The condition number is defined more precisely to be the maximum ratio of the relative error in $x$ to the relative error in $b$. Let $e$ be the error in $b$. Assuming that $A$ is a nonsingular matrix, the error in the solution $A^{-1}b$ is $A^{-1}e$.
The ratio of the relative error in the solution to the relative error in "b" is formula_4 The maximum value (for nonzero "b" and "e") is then seen to be the product of the two operator norms as follows: formula_5 The same definition is used for any consistent norm, i.e. one that satisfies formula_6 When the condition number is exactly one (which can only happen if "A" is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If formula_7 is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the "L"2 norm and typically denoted as formula_8), then formula_9 where formula_10 and formula_11 are maximal and minimal singular values of formula_12 respectively. Hence: The condition number with respect to "L"2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If formula_7 is the matrix norm induced by the formula_21 (vector) norm and formula_12 is lower triangular non-singular (i.e. formula_23 for all formula_24), then formula_25 recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a "non-linear algebra", for example when approximating irrational and transcendental functions or numbers with numerical methods). If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, it can be defined as formula_26, where formula_27 is the Moore-Penrose pseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations. Nonlinear. Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. 
The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the problem as an overall condition number, while in other cases the condition number at a particular point is of more interest. One variable. The "absolute" condition number of a differentiable function formula_28 in one variable is the absolute value of the derivative of the function: formula_29 The "relative" condition number of formula_28 as a function is formula_31. Evaluated at a point formula_32, this is formula_33 Note that this is the absolute value of the elasticity of a function in economics. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of formula_28, which is formula_35, and the logarithmic derivative of formula_32, which is formula_37, yielding a ratio of formula_38. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative formula_39 scaled by the value of formula_28. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change formula_41 in formula_32, the relative change in formula_32 is formula_44, while the relative change in formula_45 is formula_46. Taking the ratio yields formula_47 The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative. A few important ones: for the exponential function the relative condition number is the absolute value of formula_32, for the natural logarithm it is the reciprocal of the absolute value of the logarithm (and therefore large near 1), and for the square root it is the constant 1/2. Several variables. Condition numbers can be defined for any function formula_28 mapping its data from some domain (e.g. an formula_49-tuple of real numbers formula_32) into some codomain (e.g. an formula_51-tuple of real numbers formula_45), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues. The condition number of formula_28 at a point formula_32 (specifically, its relative condition number) is then defined to be the maximum ratio of the fractional change in formula_45 to any fractional change in formula_32, in the limit where the change formula_57 in formula_32 becomes infinitesimally small: formula_59 where formula_7 is a norm on the domain/codomain of formula_28. If formula_28 is differentiable, this is equivalent to: formula_63 expressed in terms of the Jacobian matrix of partial derivatives of formula_28 at formula_32, with formula_66 being the induced norm on the matrix.
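The definitions above can be checked with a short numerical experiment. The following is a minimal sketch in Python with NumPy; the particular matrix, right-hand side, perturbation and test functions, as well as the helper relative_condition, are illustrative assumptions chosen for this example rather than anything prescribed by the text.

import numpy as np

# Condition number of a matrix: for the 2-norm,
# kappa(A) = ||A|| * ||A^{-1}|| = sigma_max / sigma_min.
A = np.array([[1.0, 2.0],
              [2.0, 4.001]])           # nearly singular, hence ill-conditioned

singular_values = np.linalg.svd(A, compute_uv=False)
kappa_svd = singular_values.max() / singular_values.min()
kappa_np = np.linalg.cond(A, 2)        # NumPy's built-in 2-norm condition number
print(f"kappa_2(A): {kappa_svd:.3e} (from SVD), {kappa_np:.3e} (np.linalg.cond)")

# Error amplification in Ax = b: a small relative perturbation of b can
# produce a relative error in x up to roughly kappa(A) times larger.
b = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)

e = 1e-8 * np.array([1.0, -1.0])       # tiny perturbation of the right-hand side
x_pert = np.linalg.solve(A, b + e)

rel_err_b = np.linalg.norm(e) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(f"relative error in b: {rel_err_b:.2e}")
print(f"relative error in x: {rel_err_x:.2e} (bound: {kappa_np * rel_err_b:.2e})")

# Relative condition number of a scalar function f at x: |x * f'(x) / f(x)|.
def relative_condition(f, fprime, x):
    return abs(x * fprime(x) / f(x))

print(relative_condition(np.exp, np.exp, 10.0))                      # exp:  |x| -> 10
print(relative_condition(np.log, lambda x: 1.0 / x, 1.001))          # log:  1/|ln x|, large near 1
print(relative_condition(np.sqrt, lambda x: 0.5 / np.sqrt(x), 4.0))  # sqrt: constant 1/2

Under these assumptions, the relative error in x stays within the bound given by the condition number times the relative error in b, and the scalar calls reproduce the elementary-function condition numbers noted above.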
6936
337066
https://en.wikipedia.org/wiki?curid=6936
Cheddar cheese
Cheddar cheese (or simply cheddar) is a natural cheese that is relatively hard, off-white (or orange if colourings such as annatto are added), and sometimes sharp-tasting. It originates from the English village of Cheddar in Somerset, South West England. Cheddar is produced all over the world, and "cheddar cheese" has no Protected Designation of Origin (PDO). In 2007, the name West Country Farmhouse Cheddar was registered in the European Union and (after Brexit) the United Kingdom, defined as cheddar produced from local milk within Somerset, Dorset, Devon and Cornwall and manufactured using traditional methods. Protected Geographical Indication (PGI) was registered for Orkney Scottish Island Cheddar in 2013 in the EU, which also applies under UK law. Globally, the style and quality of cheeses labelled as cheddar varies greatly, with some processed cheeses packaged as "cheddar". Cheeses similar to Red Leicester are sometimes marketed as "red cheddar". Cheddar cheese is the most popular cheese in the UK, accounting for 51% of the country's £1.9 billion annual cheese market. It is the second-most popular cheese in the United States behind mozzarella, with an average annual consumption of per capita. The United States produced approximately of cheddar in 2014, and the UK produced in 2008. History. Cheddar cheese originates from the village of Cheddar in Somerset, southwest England. Cheddar Gorge on the edge of the village contains a number of caves, which provided the ideal humidity and steady temperature for maturing the cheese. Cheddar traditionally had to be made within of Wells Cathedral. The 19th-century Somerset dairyman Joseph Harding was central to the modernisation and standardisation of cheddar. For his technical innovations, promotion of dairy hygiene, and volunteer dissemination of modern cheese-making techniques, Harding has been dubbed "the father of cheddar". Harding introduced new equipment to the process of cheese-making, including his "revolving breaker" for curd cutting; the revolving breaker saved much manual effort in the cheese-making process. The "Joseph Harding method" was the first modern system for cheddar production based upon scientific principles. Harding stated that cheddar cheese is "not made in the field, nor in the byre, nor even in the cow, it is made in the dairy". Together, Joseph Harding and his wife introduced cheddar in Scotland and North America, while his sons Henry and William Harding were responsible for introducing cheddar cheese production to Australia and facilitating the establishment of the cheese industry in New Zealand, respectively. During the Second World War and for nearly a decade thereafter, most of the milk in Britain was used to make a single kind of cheese nicknamed "government cheddar" as part of the war economy and rationing. As a result, almost all other cheese production in the country was wiped out. Before the First World War, more than 3,500 cheese producers were in Britain; fewer than 100 remained after the Second World War. According to a United States Department of Agriculture researcher, cheddar is the world's most popular cheese and is the most studied type of cheese in scientific publications. Process. During the manufacture of cheddar, the curds and whey are separated using rennet, an enzyme complex normally produced from the stomachs of newborn calves, while in vegetarian or kosher cheeses, bacterial, yeast or mould-derived chymosin is used. 
"Cheddaring" refers to an additional step in the production of cheddar cheese where, after heating, the curd is kneaded with salt, cut into cubes to drain the whey, and then stacked and turned. Strong, extra-mature cheddar, sometimes called vintage, needs to be matured for 15 months or more. The cheese is kept at a constant temperature, often requiring special facilities. As with other hard cheese varieties produced worldwide, caves provide an ideal environment for maturing cheese; still, today, some cheddar is matured in the caves at Wookey Hole and Cheddar Gorge. Additionally, some versions of cheddar are smoked. Character. The ideal quality of the original Somerset cheddar was described by Joseph Harding in 1864 as "close and firm in texture, yet mellow in character or quality; it is rich with a tendency to melt in the mouth, the flavour full and fine, approaching to that of a hazelnut". Cheddar made in the classical way tends to have a sharp, pungent flavour, often slightly earthy. The "sharpness" of cheddar is associated with the levels of bitter peptides in the cheese. This bitterness has been found to be significant to the overall perception of the aged cheddar flavour. The texture is firm, with farmhouse traditional cheddar being slightly crumbly; it should also, if mature, contain large cheese crystals consisting of calcium lactate – often precipitated when matured for times longer than six months. Cheddar can be a deep to pale yellow (off-white) colour, or a yellow-orange colour when certain plant extracts are added, such as beet juice. One commonly used spice is annatto, extracted from seeds of the tropical achiote tree. Originally added to simulate the colour of high-quality milk from grass-fed Jersey and Guernsey cows, annatto may also impart a sweet, nutty flavour. The largest producer of cheddar cheese in the United States, Kraft, uses a combination of annatto and oleoresin paprika, an extract of the lipophilic (oily) portion of paprika. Cheddar was sometimes (and still can be found) packaged in black wax, but was more commonly packaged in larded cloth, which was impermeable to contaminants, but still allowed the cheese to "breathe". Original-cheddar designation. The Slow Food Movement has created a cheddar presidium, arguing that only three cheeses should be called "original cheddar". Their specifications, which go further than the "West Country Farmhouse Cheddar" PDO, require that cheddar be made in Somerset and with traditional methods, such as using raw milk, traditional animal rennet, and a cloth wrapping. International production. Australia. As of 2013, cheddar accounts for over 55% of the Australian cheese market, with average annual consumption around per person. Cheddar is so commonly found that the name is rarely used: instead, cheddar is sold by strength alone as e.g. "mild", "tasty" or "sharp". Canada. Following a wheat midge outbreak in Canada in the mid-19th century, farmers in Ontario began to convert to dairy farming in large numbers, and cheddar cheese became their main exportable product, even being exported to England. By the turn of the 20th century, 1,242 cheddar factories were in Ontario, and cheddar had become Canada's second-largest export after timber. Cheddar exports totalled in 1904, but by 2012, Canada was a net importer of cheese. James L. Kraft grew up on a dairy farm in Ontario, before moving to Chicago. 
According to the writer Sarah Champman, "Although we cannot wholly lay the decline of cheese craft in Canada at the feet of James Lewis Kraft, it did correspond with the rise of Kraft’s processed cheese empire." Most Canadian cheddar is produced in the provinces of Québec (40.8%) and Ontario (36%), though other provinces produce some and a number of smaller artisanal producers also exist. The annual production is 120,000 tons. Canadian cheddar cheese soup is a featured dish at the Canada Pavilion at Epcot in Walt Disney World. The percentage of butterfat or milk fat must be labelled with the words "milk fat" or the abbreviations "B.F." or "M.F." New Zealand. Most of the cheddar produced in New Zealand is factory-made, although some is handmade by artisan cheesemakers. Factory-made cheddar is generally sold relatively young within New Zealand, but the Anchor dairy company ships New Zealand cheddars to the UK, where the blocks mature for another year or so. United Kingdom. Only one producer of the cheese is now based in the village of Cheddar, the Cheddar Gorge Cheese Co. The name "cheddar" is not protected under European Union or UK law, though the name "West Country Farmhouse Cheddar" has an EU and (following Brexit) a UK protected designation of origin (PDO) registration, and may only be produced in Somerset, Devon, Dorset and Cornwall, using milk sourced from those counties. Cheddar is usually sold as mild, medium, mature, extra mature or vintage. Cheddar produced in Orkney is registered as an EU protected geographical indication under the name "Orkney Scottish Island Cheddar". This protection highlights the use of traditional methods, passed down through generations since 1946, and its uniqueness in comparison to other cheddar cheeses. "West Country Farmhouse Cheddar" is also protected outside the UK and the EU as a geographical indication in China, Georgia, Iceland, Japan, Moldova, Montenegro, Norway, Serbia, Switzerland and Ukraine. Furthermore, a Protected Geographical Indication (PGI) was registered for "Orkney Scottish Island Cheddar" in 2013 in the EU, which also applies under UK law. It is protected as a geographical indication in Iceland, Montenegro, Norway and Serbia. United States. The state of Wisconsin produces the most cheddar cheese in the United States; other centres of production include California, Idaho, New York, Vermont, Oregon, Texas, and Oklahoma. It is sold in several varieties, namely mild, medium, sharp, extra sharp, New York style, white, and Vermont. New York–style cheddar is particularly sharp and acidic, but tends to be somewhat softer than the milder-tasting varieties. Cheddar that does not contain annatto is frequently labelled "white cheddar" or "Vermont cheddar", regardless of whether it was actually produced there. Vermont's three creameries produce cheddar cheeses – Cabot Creamery, which produces the 16-month-old "Private Stock Cheddar"; the Grafton Village Cheese Company; and Shelburne Farms. Some processed cheeses or "cheese foods" are called "cheddar flavored". Examples include Easy Cheese, a cheese food packaged in a pressurised spray can, and packs of square, sliced, individually wrapped "process cheese", which is sometimes also pasteurised. Cheddar is one of several products used by the United States Department of Agriculture to track the status of America's overall dairy industry; reports are issued weekly detailing prices and production quantities. Records. U.S.
President Andrew Jackson once held an open house party at the White House at which he served a block of cheddar. The White House is said to have smelled of cheese for weeks. The real-life event was mentioned several times in "The West Wing", with the White House staff participating in "Big Block of Cheese Day", a fictional workday on which White House Chief of Staff Leo McGarry encourages his staff to meet with fringe special interest groups that normally would not get attention from the White House. A cheese of was produced in Ingersoll, Ontario, in 1866 and exhibited in New York and Britain; it was described in the poem "Ode on the Mammoth Cheese Weighing over 7,000 Pounds" by Canadian poet James McIntyre. In 1893, farmers from the town of Perth, Ontario, produced the "mammoth cheese", which weighed for the Chicago World's Fair. It was to be exhibited at the Canadian display, but the mammoth cheese fell through the floor and was placed on a reinforced concrete floor in the Agricultural Building. It received the most journalistic attention at the fair and was awarded the bronze medal. A larger, Wisconsin cheese of was made for the 1964 New York World's Fair. A cheese this size would use the equivalent of the daily milk production of 16,000 cows. Oregon members of the Federation of American Cheese-makers created the largest cheddar in 1989. The cheese weighed . In 2012, Wisconsin cheese shop owner Edward Zahn discovered and sold a batch of unintentionally aged cheddar up to 40 years old, possibly "the oldest collection of cheese ever assembled and sold to the public". The old cheese has extensive crystallization on the outside and is "creamier and overwhelmingly sharp" on the inside.
6938
1300131559
https://en.wikipedia.org/wiki?curid=6938
Classical order
An order in architecture is a certain assemblage of parts subject to uniform established proportions, regulated by the office that each part has to perform. Coming down to the present from Ancient Greek and Ancient Roman civilization, the architectural orders are the styles of classical architecture, each distinguished by its proportions and characteristic profiles and details, and most readily recognizable by the type of column employed. The three orders of architecture—the Doric, Ionic, and Corinthian—originated in Greece. To these the Romans added, in practice if not in name, the Tuscan, which they made simpler than Doric, and the Composite, which was more ornamental than the Corinthian. The architectural order of a classical building is akin to the mode or key of classical music; the grammar or rhetoric of a written composition. It is established by certain "modules" like the intervals of music, and it raises certain expectations in an audience attuned to its language. Whereas the orders were essentially structural in Ancient Greek architecture, which made little use of the arch until its late period, in Roman architecture where the arch was often dominant, the orders became increasingly decorative elements except in porticos and similar uses. Columns shrank into half-columns emerging from walls or turned into pilasters. This treatment continued after the conscious and "correct" use of the orders, initially following exclusively Roman models, returned in the Italian Renaissance. Greek Revival architecture, inspired by increasing knowledge of Greek originals, returned to more authentic models, including ones from relatively early periods. Elements. Each style has distinctive capitals at the top of columns and horizontal entablatures which it supports, while the rest of the building does not in itself vary between the orders. The column shaft and base also varies with the order, and is sometimes articulated with vertical concave grooves known as fluting. The shaft is wider at the bottom than at the top, because its entasis, beginning a third of the way up, imperceptibly makes the column slightly more slender at the top, although some Doric columns, especially early Greek ones, are visibly "flared", with straight profiles that narrow going up the shaft. The capital rests on the shaft. It has a load-bearing function, which concentrates the weight of the entablature on the supportive column, but it primarily serves an aesthetic purpose. The necking is the continuation of the shaft, but is visually separated by one or many grooves. The echinus lies atop the necking. It is a circular block that bulges outwards towards the top to support the abacus, which is a square or shaped block that in turn supports the entablature. The entablature consists of three horizontal layers, all of which are visually separated from each other using moldings or bands. In Roman and post-Renaissance work, the entablature may be carried from column to column in the form of an arch that springs from the column that bears its weight, retaining its divisions and sculptural enrichment, if any. There are names for all the many parts of the orders. Measurement. The heights of columns are calculated in terms of a ratio between the diameter of the shaft at its base and the height of the column. 
A Doric column can be described as seven diameters high, an Ionic column as eight diameters high, and a Corinthian column nine diameters high, although the actual ratios used vary considerably in both ancient and revived examples, but still keeping to the trend of increasing slimness between the orders. Sometimes this is phrased as "lower diameters high", to establish which part of the shaft has been measured. Greek orders. There are three distinct orders in Ancient Greek architecture: Doric, Ionic, and Corinthian. These three were adopted by the Romans, who modified their capitals. The Roman adoption of the Greek orders took place in the 1st century BC. The three ancient Greek orders have since been consistently used in European Neoclassical architecture. Sometimes the Doric order is considered the earliest order, but there is no evidence to support this. Rather, the Doric and Ionic orders seem to have appeared at around the same time, the Ionic in eastern Greece and the Doric in the west and mainland. Both the Doric and the Ionic order appear to have originated in wood. The Temple of Hera in Olympia is the oldest well-preserved temple of Doric architecture. It was built just after 600 BC. The Doric order later spread across Greece and into Sicily, where it was the chief order for monumental architecture for 800 years. Early Greeks were no doubt aware of the use of stone columns with bases and capitals in ancient Egyptian architecture, and that of other Near Eastern cultures, although there they were mostly used in interiors, rather than as a dominant feature of all or part of exteriors, in the Greek style. Doric order. The Doric order originated on the mainland and western Greece. It is the simplest of the orders, characterized by short, organized, heavy columns with plain, round capitals (tops) and no base. With a height that is only four to eight times its diameter, the columns are the most squat of all orders. The shaft of the Doric order is channeled with 20 flutes. The capital consists of a necking or annulet, which is a simple ring. The echinus is convex, or circular cushion like stone, and the abacus is a square slab of stone. Above the capital is a square abacus connecting the capital to the entablature. The entablature is divided into three horizontal registers, the lower part of which is either smooth or divided by horizontal lines. The upper half is distinctive for the Doric order. The frieze of the Doric entablature is divided into triglyphs and metopes. A triglyph is a unit consisting of three vertical bands which are separated by grooves. Metopes are the plain or carved reliefs between two triglyphs. The Greek forms of the Doric order come without an individual base. They instead are placed directly on the stylobate. Later forms, however, came with the conventional base consisting of a plinth and a torus. The Roman versions of the Doric order have smaller proportions. As a result, they appear lighter than the Greek orders. Ionic order. The Ionic order came from eastern Greece, where its origins are entwined with the similar but little known Aeolic order. It is distinguished by slender, fluted pillars with a large base and two opposed volutes (also called "scrolls") in the echinus of the capital. The echinus itself is decorated with an egg-and-dart motif. The Ionic shaft comes with four more flutes than the Doric counterpart (totalling 24). The Ionic base has two convex moldings called "tori", which are separated by a scotia. 
The Ionic order is also marked by an entasis, a curved tapering in the column shaft. A column of the Ionic order is nine times as tall as its lower diameter. The shaft itself is eight diameters high. The architrave of the entablature commonly consists of three stepped bands ("fasciae"). The frieze lacks the triglyphs and metopes of the Doric order; it sometimes carries a continuous ornament, such as carved figures, instead. Corinthian order. The Corinthian order is the most elaborate of the Greek orders, characterized by a slender fluted column having an ornate capital decorated with two rows of acanthus leaves and four scrolls. The shaft of the Corinthian order has 24 flutes. The column is commonly ten diameters high. The Roman writer Vitruvius credited the invention of the Corinthian order to Callimachus, a Greek sculptor of the 5th century BC. The oldest known building built according to this order is the Choragic Monument of Lysicrates in Athens, constructed from 335 to 334 BC. The Corinthian order was raised to canonical rank by the writings of Vitruvius in the 1st century BC. Roman orders. The Romans adapted all the Greek orders and also developed two orders of their own, basically modifications of Greek orders. However, it was not until the Renaissance that these were named and formalized as the Tuscan and Composite, respectively the plainest and most ornate of the orders. The Romans also invented the superposed order. In a superposed order, successive stories of a building use different orders. The heaviest orders were at the bottom, whilst the lightest came at the top. This means that the Doric order was the order of the ground floor, the Ionic order was used for the middle story, while the Corinthian or the Composite order was used for the top story. The Giant order was invented by architects in the Renaissance. The Giant order is characterized by columns that extend the height of two or more stories. Tuscan order. The Tuscan order has a very plain design, with a plain shaft and a simple capital, base, and frieze. It is a simplified adaptation of the Greeks' Doric order. The Tuscan order is characterized by an unfluted shaft and a capital that consists of only an echinus and an abacus. In proportions it is similar to the Doric order, but overall it is significantly plainer. The column is normally seven diameters high. Compared to the other orders, the Tuscan order looks the most solid. Composite order. The Composite order is a mixed order, combining the volutes of the Ionic with the acanthus leaves of the Corinthian order. Until the Renaissance it was not ranked as a separate order; instead it was considered a late Roman form of the Corinthian order. The column of the Composite order is typically ten diameters high. Historical development. The Renaissance period saw renewed interest in the literary sources of the ancient cultures of Greece and Rome, and the fertile development of a new architecture based on classical principles. The treatise by the Roman theoretician, architect and engineer Vitruvius is the only architectural writing that survived from Antiquity. Effectively rediscovered in the 15th century, Vitruvius came to be regarded as the ultimate authority on architecture. However, in his text the word "order" is not to be found. To describe the four species of columns (he mentions only the Tuscan, Doric, Ionic and Corinthian) he uses, in fact, various words such as "genus" (gender), "mos" (habit, fashion, manner) and "opera" (work).
The term "order", as well as the idea of redefining the "canon" started circulating in Rome, at the beginning of the 16th century, probably during the studies of Vitruvius' text conducted and shared by Peruzzi, Raphael, and Sangallo. Ever since, the definition of the "canon" has been a collective endeavor that involved several generations of European architects, from Renaissance and Baroque periods, basing their theories both on the study of Vitruvius' writings and the observation of Roman ruins (the Greek ruins became available only after Greek Independence, 1821–1823). What was added were rules for the use of the Architectural Orders, and the exact proportions of them in minute detail. Commentary on the appropriateness of the orders for temples devoted to particular deities (Vitruvius I.2.5) were elaborated by Renaissance theorists, with Doric characterized as bold and manly, Ionic as matronly, and Corinthian as maidenly. Vignola defining the concept of "order". Following the examples of Vitruvius and the five books of the "Regole generali di architettura sopra le cinque maniere de gli edifici" by Sebastiano Serlio published from 1537 onwards, Giacomo Barozzi da Vignola produced an architecture rule book that was not only more practical than the previous two treatises, but also was systematically and consistently adopting, for the first time, the term 'order' to define each of the five different species of columns inherited from antiquity. A first publication of the various plates, as separate sheets, appeared in Rome in 1562, with the title: "Regola delli cinque ordini d'architettura" ("Canon of the Five Orders of Architecture"). As David Watkin has pointed out, Vignola's book "was to have an astonishing publishing history of over 500 editions in 400 years in ten languages, Italian, Dutch, English, Flemish, French, German, Portuguese, Russian, Spanish, Swedish, during which it became perhaps the most influential book of all times". The book consisted simply of an introduction followed by 32 annotated plates, highlighting the proportional system with all the minute details of the Five Architectural Orders. According to Christof Thoenes, the main expert of Renaissance architectural treatises, "in accordance with Vitruvius's example, Vignola chose a "module" equal to a half-diameter which is the base of the system. All the other measurements are expressed in fractions or in multiples of this module. The result is an arithmetical model, and with its help each order, harmoniously proportioned, can easily be adapted to any given height, of a façade or an interior. From this point of view, Vignola's Regola is a remarkable intellectual achievement". In America, "The American Builder's Companion", written in the early 19th century by the architect Asher Benjamin, influenced many builders in the eastern states, particularly those who developed what became known as the Federal style. The last American re-interpretation of Vignola's "Regola", was edited in 1904 by William Robert Ware. The break from the classical mode came first with the Gothic Revival architecture, then the development of modernism during the 19th century. The Bauhaus promoted pure functionalism, stripped of ornament considered superfluous, and that has become one of the defining characteristics of modern architecture. There are some exceptions. Postmodernism introduced an ironic use of the orders as a cultural reference, divorced from the strict rules of composition. 
On the other hand, a number of practitioners, such as Quinlan Terry in England and Michael Dwyer, Richard Sammons, and Duncan Stroik in the United States, continue the classical tradition and use the classical orders in their work. Nonce orders. Several orders, usually based upon the composite order and only varying in the design of the capitals, have been invented under the inspiration of specific occasions, but have not been used again. They are termed "nonce orders" by analogy to "nonce words"; several examples follow below. These nonce orders all express the "speaking architecture" ("architecture parlante") that was taught in the Paris courses, most explicitly by Étienne-Louis Boullée, in which sculptural details of classical architecture could be enlisted to speak symbolically, the better to express the purpose of the structure and enrich its visual meaning with specific appropriateness. This idea was taken up strongly in the training of Beaux-Arts architecture. French order. The Hall of Mirrors in the Palace of Versailles contains pilasters with bronze capitals in the "French order". Designed by Charles Le Brun, the capitals display the national emblems of the Kingdom of France: the royal sun between two Gallic roosters above a fleur-de-lis. British orders. Robert Adam's brother James was in Rome in 1762, drawing antiquities under the direction of Clérisseau; he invented a "British order" and published an engraving of it. In its capital, the heraldic lion and unicorn take the place of the Composite's volutes, a Byzantine or Romanesque conception, but expressed in terms of neoclassical realism. Adam's ink-and-wash rendering with red highlighting is at the Avery Library, Columbia University. In 1789 George Dance invented an Ammonite order, a variant of Ionic substituting volutes in the form of fossil ammonites, for John Boydell's Shakespeare Gallery in Pall Mall, London. An adaptation of the Corinthian order by William Donthorne that used turnip leaves and mangelwurzel is termed the Agricultural order. Sir Edwin Lutyens, who from 1912 laid out New Delhi as the new seat of government for the British Empire in India, designed a Delhi order, with a capital displaying a band of vertical ridges and with bells hanging at each corner as a replacement for volutes. His design for the new city's central palace, Viceroy's House, now the Presidential residence Rashtrapati Bhavan, was a thorough integration of elements of Indian architecture into a building of classical forms and proportions, and made use of the order throughout. The Delhi order reappears in some later Lutyens buildings, including Campion Hall, Oxford. American orders. In the United States, Benjamin Latrobe, the architect of the Capitol building in Washington, DC, designed a series of botanical American orders. Most famous is the Corinthian order substituting ears of corn and their husks for the acanthus leaves, which was executed by Giuseppe Franzoni and used in the small domed vestibule of the Senate. Only this vestibule survived the Burning of Washington in 1814 nearly intact. With peace restored, Latrobe designed an American order that substituted tobacco leaves for the acanthus, of which he sent a sketch to Thomas Jefferson in a letter of 5 November 1816. He was encouraged to send a model of it, which remains at Monticello. In the 1830s Alexander Jackson Davis admired it enough to make a drawing of it.
In 1809 Latrobe invented a second American order, employing magnolia flowers constrained within the profile of classical mouldings, as his drawing demonstrates. It was intended for "the Upper Columns in the Gallery of the Entrance of the Chamber of the Senate".
6941
7611264
https://en.wikipedia.org/wiki?curid=6941
Colin Kapp
Derek Ivor Colin Kapp (3 April 1928 – 3 August 2007), known as Colin Kapp, was a British science fiction author best known for his stories about the Unorthodox Engineers. As an electronic engineer, he began his career with Mullard Electronics, then specialised in electroplating techniques, eventually becoming a freelance consultant engineer. He was born in Southwark, south London, on 3 April 1928, to John L. F. Kapp and Annie M. A. (née Towner). Works. Short stories. Unorthodox Engineers. Collected in "The Unorthodox Engineers" (1979)
6942
27015025
https://en.wikipedia.org/wiki?curid=6942
Catherine of Aragon
Catherine of Aragon (also spelt as Katherine; 16 December 1485 – 7 January 1536) was Queen of England as the first wife of King Henry VIII from their marriage on 11 June 1509 until its annulment on 23 May 1533. She had previously been Princess of Wales while married to Henry's elder brother, Arthur, Prince of Wales, for a short period before his death. Catherine was born at the Archbishop's Palace of Alcalá de Henares, and was the youngest child of Isabella I of Castile and Ferdinand II of Aragon. She was three years old when she was betrothed to Arthur, the eldest son of Henry VII of England. They married in 1501, but Arthur died five months later. Catherine spent years in limbo, and during this time, she held the position of ambassador of the Aragonese crown to England in 1507, the first known female ambassador in European history. She married Henry VIII shortly after his accession in 1509. For six months in 1513, she served as regent of England while Henry was in France. During that time the English defeated a Scottish invasion at the Battle of Flodden, an event in which Catherine played an important part with an emotional speech about courage and patriotism. By 1526, Henry was infatuated with Anne Boleyn and dissatisfied that his marriage to Catherine had produced no surviving sons, leaving their daughter Mary as heir presumptive at a time when there was no established precedent for a woman on the throne. He sought to have their marriage annulled, setting in motion a chain of events that led to England's schism with the Catholic Church. When Pope Clement VII refused to annul the marriage, Henry defied him by assuming supremacy over religious matters in England. In 1533, their marriage was consequently declared invalid and Henry married Anne on the judgement of clergy in England, without reference to the pope. Catherine refused to accept Henry as supreme head of the Church in England and considered herself the King's rightful wife and queen, attracting much popular sympathy. Despite this, Henry acknowledged her only as dowager princess of Wales. After being banished from court by Henry, Catherine lived out the remainder of her life at Kimbolton Castle, dying there in January 1536 of cancer. The English people held Catherine in high esteem, and her death set off tremendous mourning. Her daughter Mary became the first undisputed English queen regnant in 1553. Catherine commissioned "The Education of a Christian Woman" by Juan Luis Vives, who dedicated the book, controversial at the time, to the Queen in 1523. Such was Catherine's impression on people that even her adversary Thomas Cromwell said of her, "If not for her sex, she could have defied all the heroes of History." She successfully appealed for the lives of the rebels involved in the Evil May Day, for the sake of their families, and also won widespread admiration by starting an extensive programme for the relief of the poor. Catherine was a patron of Renaissance humanism and a friend of the great scholars Erasmus of Rotterdam and Thomas More. Early life. Catherine was born at the Archbishop's Palace of Alcalá de Henares near Madrid, in the early hours of 16 December 1485. She was the youngest surviving child of King Ferdinand II of Aragon and Queen Isabella I of Castile. Her siblings were Joanna, Queen of Castile and of Aragon, Isabella, Queen of Portugal, John, Prince of Asturias, and Maria, Queen of Portugal.
Catherine was quite short in stature with long red hair, wide blue eyes, a round face, and a fair complexion. She was descended, on her maternal side, from the House of Lancaster, an English royal house; her great-grandmother Catherine of Lancaster, after whom she was named, and her great-great-grandmother Philippa of Lancaster were both daughters of John of Gaunt and granddaughters of Edward III of England. Consequently, she was third cousin of her father-in-law, Henry VII of England, and fourth cousin of her mother-in-law Elizabeth of York. Catherine was educated by a tutor, Alessandro Geraldini, who was a clerk in Holy Orders. She studied arithmetic, canon and civil law, classical literature, genealogy and heraldry, history, philosophy, religion, and theology. She had a strong religious upbringing and developed her Roman Catholic faith that would play a major role in later life. She learned to speak, read and write in Castilian Spanish and Latin, and spoke French and Greek. Erasmus later said that Catherine "loved good literature which she had studied with success since childhood". She had been given lessons in domestic skills, such as cooking, embroidery, lace-making, needlepoint, sewing, spinning, and weaving and was also taught music, dancing, drawing, as well as being carefully educated in good manners and court etiquette. At an early age, Catherine was considered a suitable wife for Arthur, Prince of Wales, heir apparent to the English throne, due to the English ancestry she inherited from her mother. Theoretically, by means of her mother, Catherine had a stronger legitimate claim to the English throne than King Henry VII himself through the first two wives of John of Gaunt, 1st Duke of Lancaster: Blanche of Lancaster and Constance of Castile. In contrast, Henry VII was the descendant of Gaunt's third marriage to Katherine Swynford, whose children were born out of wedlock and only legitimised after the death of Constance and the marriage of John to Katherine. The children of John and Katherine, while legitimised, were barred from inheriting the English throne, a stricture that was ignored in later generations. Because of Henry's descent through illegitimate children barred from succession to the English throne, the Tudor monarchy was not accepted by all European kingdoms. At the time, the House of Trastámara was the most prestigious in Europe, due to the rule of the Catholic Monarchs, so the alliance of Catherine and Arthur validated the House of Tudor in the eyes of European royalty and strengthened the Tudor claim to the English throne via Catherine of Aragon's ancestry. It would have given a male heir an indisputable claim to the throne. The two were married by proxy on 19 May 1499 and corresponded in Latin until Arthur turned fifteen, when it was decided that they were old enough to begin their conjugal life. Catherine was accompanied to England by the following ambassadors: Diego Fernández de Córdoba y Mendoza, 3rd Count of Cabra; Alonso de Fonseca y Acevedo, Archbishop of Santiago de Compostela; and Antonio de Rojas Manrique, Bishop of Mallorca. Her Spanish retinue, including Francisco Felipe, was supervised by her duenna, Elvira Manuel. At first it was thought Catherine's ship would arrive at Gravesend. A number of English gentlewomen were appointed to be ready to welcome her on arrival in October 1501. They were to escort Catherine in a flotilla of barges on the Thames to the Tower of London. As wife and widow of Arthur. 
Then-15-year-old Catherine departed from A Coruña on 17 August 1501 and met Arthur on 4 November at Dogmersfield in Hampshire. Little is known about their first impressions of each other, but Arthur did write to his parents-in-law that he would be "a true and loving husband" and told his parents that he was immensely happy to "behold the face of his lovely bride". The couple had corresponded in Latin, but found that they could not understand each other's spoken conversation, because they had learned different Latin pronunciations. Ten days later, on 14 November 1501, they were married at Old St. Paul's Cathedral, both 15 years old. A dowry of 200,000 ducats had been agreed, and half was paid shortly after the marriage. It was noted that Catherine and her Spanish ladies in waiting were dressed in Spanish style at her arrival and at the wedding. Once married, Arthur was sent to Ludlow Castle on the borders of Wales to preside over the Council of Wales and the Marches, as was his duty as Prince of Wales, and his bride accompanied him. A few months later, they both became ill, possibly with the sweating sickness, which was sweeping the area. Arthur died on 2 April 1502; 16-year-old Catherine recovered to find herself a widow. At this point, Henry VII faced the challenge of avoiding the obligation to return her 200,000-ducat dowry, half of which he had not yet received, to her father, as required by her marriage contract should she return home. Following the death of Queen Elizabeth in February 1503, there were rumours of a potential marriage between Catherine and King Henry; such rumours were, however, unsubstantiated. It was agreed that Catherine would marry Henry VII's second son, Henry, Duke of York, who was five years younger than she was. The death of Catherine's mother, however, meant that her "value" in the marriage market decreased. Castile was a much larger kingdom than Aragon, and it was inherited by Catherine's elder sister, Joanna. Ostensibly, the marriage was delayed until Henry was old enough, but Ferdinand II procrastinated so much over payment of the remainder of Catherine's dowry that it became doubtful that the marriage would take place. She lived as a virtual prisoner at Durham House in London. Some of the letters she wrote to her father complaining of her treatment have survived. In one of these letters she tells him that "I choose what I believe, and say nothing. For I am not as simple as I may seem." She had little money and struggled to cope, as she had to support her ladies-in-waiting as well as herself. In 1507 she served as the Spanish ambassador to England, the first female ambassador in European history. While Henry VII and his counsellors expected her to be easily manipulated, Catherine went on to prove them wrong. Marriage to Arthur's brother depended on the Pope granting a dispensation because canon law forbade a man to marry his brother's widow. Catherine testified that her marriage to Arthur was never consummated as, also according to canon law, a marriage could be dissolved if it was not consummated. Queen of England. Wedding. Catherine's second wedding took place on 11 June 1509, seven years after Prince Arthur's death. She married Henry VIII, who had only just acceded to the throne, in a private ceremony in the church of the Observant Friars outside Greenwich Palace. She was 23 years of age. Coronation. On Saturday 23 June 1509, the traditional eve-of-coronation procession to Westminster Abbey was greeted by a large and enthusiastic crowd. 
As was the custom, the couple spent the night before their coronation at the Tower of London. On Midsummer's Day, Sunday 24 June 1509, Henry VIII and Catherine were anointed and crowned together by the Archbishop of Canterbury at a lavish ceremony at Westminster Abbey. The coronation was followed by a banquet in Westminster Hall. Many new Knights of the Bath were created in honour of the coronation. In the month that followed, many social occasions presented the new Queen to the English public. She made a fine impression and was well received by the people of England. Influence. On 11 June 1513, Henry appointed Catherine Regent in England with the titles "Governor of the Realm and Captain General", while he went to France on a military campaign. When Louis d'Orléans, Duke of Longueville, was captured at Thérouanne, Henry sent him to stay in Catherine's household. She wrote to Wolsey that she and her council would prefer the Duke to stay in the Tower of London as the Scots were "so busy as they now be" and she added her prayers for "God to sende us as good lukke against the Scotts, as the King hath ther." The war with Scotland occupied her subjects, and she was "horrible busy with making standards, banners, and badges" at Richmond Palace. Catherine wrote to towns, including Gloucester, asking them to send muster lists of men able to serve as soldiers. The Scots invaded and on 3 September 1513, she ordered Thomas Lovell to raise an army in the midland counties. Catherine was issued with banners at Richmond on 8 September, and rode north in full armour to address the troops, despite being heavily pregnant at the time. Her fine speech was reported to the historian Peter Martyr d'Anghiera in Valladolid within a fortnight. Although an Italian newsletter said she was north of London when news of the victory at the Battle of Flodden Field reached her, she was near Buckingham. From Woburn Abbey, she sent a letter to Henry along with a piece of the bloodied coat of King James IV of Scotland, who died in the battle, for Henry to use as a banner at the siege of Tournai. Catherine's religious dedication increased as she became older, as did her interest in academics. She continued to broaden her knowledge and provide training for her daughter, Mary. Education among women became fashionable, partly because of Catherine's influence, and she donated large sums of money to several colleges. Henry, however, still considered a male heir essential. The Tudor dynasty was new, and its legitimacy might still be tested. In 1520, Catherine's nephew, the Holy Roman Emperor Charles V, paid a state visit to England, and she urged Henry to enter an alliance with Charles rather than with France. Immediately after his departure, she accompanied Henry to France on the celebrated visit to Francis I, the Field of the Cloth of Gold. Within two years, war was declared against France and the Emperor was once again welcome in England, where plans were afoot to betroth him to Catherine's daughter Mary. The King's great matter. In 1525, Henry VIII became enamoured of Anne Boleyn, a lady-in-waiting to Queen Catherine; Anne was between ten and seventeen years younger than Henry, being born between 1501 and 1507. Henry began pursuing her; Catherine was no longer able to bear children by this time. Henry began to believe that his marriage was cursed and sought confirmation from the Bible, which he interpreted to say that if a man marries his brother's wife, the couple will be childless.
Even if her marriage to Arthur had not been consummated (and Catherine would insist to her dying day that she had come to Henry's bed a virgin), Henry's interpretation of that biblical passage meant that their marriage had been wrong in the eyes of God. Whether the pope at the time of Henry and Catherine's marriage had the right to overrule Henry's claimed scriptural impediment would become a hot topic in Henry's campaign to wrest an annulment from the present Pope. It is possible that the idea of annulment had been suggested to Henry much earlier than this, and is highly probable that it was motivated by his desire for a son. Before Henry's father ascended the throne, England was beset by civil warfare over rival claims to the English crown, and Henry may have wanted to avoid a similar uncertainty over the succession. It soon became the one absorbing object of Henry's desires to secure an annulment. Catherine was defiant when it was suggested that she quietly retire to a nunnery, saying: "God never called me to a nunnery. I am the King's true and legitimate wife." He set his hopes upon an appeal to the Holy See, acting independently of Cardinal Thomas Wolsey, whom he told nothing of his plans. William Knight, the King's secretary, was sent to Pope Clement VII to sue for an annulment, on the grounds that the dispensing bull of Pope Julius II was obtained by false pretenses. As the pope was, at that time, the prisoner of Catherine's nephew Emperor Charles V following the Sack of Rome in May 1527, Knight had difficulty in obtaining access to him. In the end, Henry's envoy had to return without accomplishing much. Henry now had no choice but to put this great matter into the hands of Wolsey, who did all he could to secure a decision in Henry's favour. Both the Pope and Martin Luther raised the possibility that Henry have two wives, not to re-introduce polygamy generally, but "to preserve the royal dignity of Catherine and Mary". Wolsey went so far as to convene an ecclesiastical court in England with a representative of the Pope presiding, and Henry and Catherine herself in attendance. The Pope had no intention of allowing a decision to be reached in England, and his legate was recalled. (How far the Pope was influenced by Charles V is difficult to say, but it is clear Henry saw that the Pope was unlikely to annul his marriage to the Emperor's aunt.) The Pope forbade Henry to marry again before a decision was given in Rome. Wolsey had failed and was dismissed from public office in 1529. Wolsey then began a secret plot to have Anne Boleyn forced into exile and began communicating with the Pope to that end. When this was discovered, Henry ordered Wolsey's arrest and, had he not been terminally ill and died in 1530, he might have been executed for treason. A year later, Catherine was banished from court, and her old rooms were given to Anne Boleyn. Catherine wrote in a letter to Charles V in 1531: My tribulations are so great, my life so disturbed by the plans daily invented to further the King's wicked intention, the surprises which the King gives me, with certain persons of his council, are so mortal, and my treatment is what God knows, that it is enough to shorten ten lives, much more mine. When Archbishop of Canterbury William Warham died, the Boleyn family's chaplain, Thomas Cranmer, was appointed to the vacant position. When Henry decided to annul his marriage to Catherine, John Fisher became her most trusted counsellor and one of her chief supporters. 
He appeared in the legates' court on her behalf, where he shocked people with the directness of his language, and by declaring that, like John the Baptist, he was ready to die on behalf of the indissolubility of marriage. Henry was so enraged by this that he wrote a long Latin address to the legates in answer to Fisher's speech. Fisher's copy of this still exists, with his manuscript annotations in the margin which show how little he feared Henry's anger. The removal of the cause to Rome ended Fisher's role in the matter, but Henry never forgave him. Other people who supported Catherine's case included Thomas More; Henry's own sister Mary Tudor, Queen of France; María de Salinas; Holy Roman Emperor Charles V; Pope Paul III; and Protestant Reformers Martin Luther and William Tyndale. Banishment and death. Upon returning to Dover from a meeting with King Francis I of France in Calais, Henry married Anne Boleyn in a secret ceremony. Some sources speculate that Anne was already pregnant at the time (and Henry did not want to risk a son being born illegitimate) but others testify that Anne (who had seen her sister Mary Boleyn taken up as the King's mistress and summarily cast aside) refused to sleep with Henry until they were married. Henry defended the lawfulness of their union by pointing out that Catherine had previously been married. If she and Arthur had consummated their marriage, Henry by canon law had the right to remarry. On 23 May 1533, Cranmer, sitting in judgement at a special court convened at Dunstable Priory to rule on the validity of Henry's marriage to Catherine, declared the marriage unlawful, even though Catherine had testified that she and Arthur had never had physical relations. Five days later, on 28 May 1533, Cranmer ruled that Henry and Anne's marriage was valid. Until the end of her life, Catherine would refer to herself as Henry's only lawful wedded wife and England's only rightful queen, and her servants continued to address her as such. Henry refused her the right to any title but "Dowager Princess of Wales" in recognition of her position as his brother's widow. Catherine went to live at The More Castle, Hertfordshire, late in 1531. After that, she was successively moved to the Royal Palace of Hatfield, Hertfordshire (May to September 1532), Elsyng Palace, Enfield (September 1532 to February 1533), Ampthill Castle, Bedfordshire (February to July 1533) and Buckden Towers, Cambridgeshire (July 1533 to May 1534). She was then finally transferred to Kimbolton Castle, Cambridgeshire, where she confined herself to one room, which she left only to attend Mass, dressed only in the hair shirt of the Franciscans, and fasted continuously. While she was permitted to receive occasional visitors, she was forbidden to see her daughter Mary. They were also forbidden to communicate in writing, but sympathisers discreetly conveyed letters between the two. Henry offered both mother and daughter better quarters and permission to see each other if they would acknowledge Anne Boleyn as the new queen; both refused. In late December 1535, sensing her death was near, Catherine made her will, and wrote to her nephew, the Emperor Charles V, asking him to protect her daughter. It has been claimed that she then penned one final letter to Henry. The authenticity of the letter itself has been questioned, but not Catherine's attitude in its wording, which has been reported with variations in different sources. Catherine died at Kimbolton Castle on 7 January 1536.
The following day, news of her death reached the King. At the time there were rumours that she had been poisoned, possibly by Gregory di Casale. According to the chronicler Edward Hall, Anne Boleyn wore yellow for the mourning, which has been interpreted in various ways; Polydore Vergil interpreted this to mean that Anne did not mourn. Chapuys reported that it was King Henry who decked himself in yellow, celebrating the news and making a great show of his and Anne's daughter, Elizabeth, to his courtiers. This was seen as distasteful and vulgar by many. Another theory is that the dressing in yellow was out of respect for Catherine, as yellow was said to be the Spanish colour of mourning. Certainly, later in the day it is reported that Henry and Anne both individually and privately wept for her death. On the day of Catherine's funeral, Anne Boleyn miscarried a male child. Rumours then circulated that Catherine had been poisoned by Anne or Henry, or both. The rumours arose after the apparent discovery, during her embalming, of a black growth on her heart that might have been caused by poisoning. Modern medical experts agree that her heart's discolouration was due not to poisoning but to cancer, something which was not understood at the time. Catherine was buried in Peterborough Cathedral with the ceremony due to her position as a Dowager Princess of Wales, and not a queen. Henry did not attend the funeral and forbade Mary to attend. Faith. Catherine was a member of the Third Order of Saint Francis and she was punctilious in her religious obligations in the Order, integrating without demur her necessary duties as queen with her personal piety. After the annulment, she was quoted as saying: "I would rather be a poor beggar's wife and be sure of heaven, than queen of all the world and stand in doubt thereof by reason of my own consent." The outward celebration of saints and holy relics formed no major part of her personal devotions, which she rather expressed in the Mass, prayer, confession and penance. Privately, however, she was aware of what she identified as the shortcomings of the papacy and church officialdom. Her doubts about church improprieties certainly did not extend so far as to support the allegations of corruption made public by Martin Luther in Wittenberg in 1517, which were soon to have such far-reaching consequences in initiating the Protestant Reformation. In 1523 Alfonso de Villa Sancta, a learned friar of the Observant (reform) branch of the Friars Minor and friend of the King's old advisor Erasmus, dedicated to the queen his book "De Liberio Arbitrio adversus Melanchthonem", which denounced Philip Melanchthon, a supporter of Luther. Acting as her confessor, he was able to nominate her for the title of "Defender of the Faith" for denying Luther's arguments. Appearance. In her youth, Catherine was described as "the most beautiful creature in the world", with "nothing lacking in her that the most beautiful girl should have". Thomas More and Lord Herbert would reflect later in her lifetime that, in regard to her appearance, "there were few women who could compete with the Queen [Catherine] in her prime". Legacy, memory and historiography. The controversial book "The Education of a Christian Woman" by Juan Luis Vives, which claimed women have the right to an education, was commissioned by and dedicated to her. 
Such was Catherine's impression on people, that even her enemy, Thomas Cromwell, said of her "If not for her sex, she could have defied all the heroes of History." She successfully appealed for the lives of the rebels involved in the Evil May Day for the sake of their families. Furthermore, Catherine won widespread admiration by starting an extensive programme for the relief of the poor. She was also a patron of Renaissance humanism, and a friend of the great scholars Erasmus of Rotterdam and Saint Thomas More. Some saw her as a martyr. In the reign of her daughter Mary I of England, her marriage to Henry VIII was declared "good and valid". Her daughter Queen Mary also had several portraits commissioned of Catherine, and it would not by any means be the last time she was painted. After her death, numerous portraits were painted of her, particularly of her speech at the Legatine Trial, a moment accurately rendered in Shakespeare's play about Henry VIII. Her tomb in Peterborough Cathedral can be seen and there is hardly ever a time when it is not decorated with flowers or pomegranates, her heraldic symbol. It bears the title "Katharine Queen of England". In the 20th century, George V's wife, Mary of Teck, had her grave upgraded and added banners there denoting Catherine as a queen of England. Every year at Peterborough Cathedral a service is held in her memory. There are processions, prayers and various events in the Cathedral including processions to Catherine's grave in which candles, pomegranates, flowers and other offerings are placed on her grave. The Spanish Ambassador to the United Kingdom attended the commemoration of the 470th anniversary of her death. During the 2010 service a rendition of Catherine of Aragon's speech before the Legatine court was read by Jane Lapotaire. There is a statue of her in her birthplace of Alcalá de Henares, as a young woman holding a book and a rose. Catherine has remained a popular biographical subject to the present day. The American historian Garrett Mattingly was the author of a popular biography "Katherine of Aragon" in 1942. In 1966, Catherine and her many supporters at court were the subjects of "Catherine of Aragon and her Friends", a biography by John E. Paul. In 1967, Mary M. Luke wrote the first book of her Tudor trilogy, "Catherine the Queen" which portrayed her and the tumultuous era of English history through which she lived. In recent years, the historian Alison Weir covered her life extensively in her biography "The Six Wives of Henry VIII", first published in 1991. Antonia Fraser did the same in her own 1992 biography of the same title; as did the British historian David Starkey in his 2003 book "Six Wives: The Queens of Henry VIII". Giles Tremlett's biography, "Catherine of Aragon: The Spanish Queen of Henry VIII", came out in 2010, and Julia Fox's dual biography, "Sister Queens: The Noble, Tragic Lives of Katherine of Aragon and Juana, Queen of Castile", came out in 2011. Spelling of her name. Her baptismal name was "Catalina", but "Katherine" was soon the accepted form in England after her marriage to Arthur. Catherine herself signed her name "Katherine", "Katherina", "Katharine" and sometimes "Katharina". In a letter to her, Arthur, her husband, addressed her as "Princess Katerine". Her daughter Queen Mary I called her "Quene Kateryn", in her will. Rarely were names, particularly first names, written in an exact manner during the sixteenth century and it is evident from Catherine's own letters that different variations were used. 
Loveknots built into his various palaces by her husband, Henry VIII, display the initials "H & K", as do other items belonging to Henry and Catherine, including gold goblets, a gold salt cellar, basins of gold, and candlesticks. Her tomb in Peterborough Cathedral is marked "Katharine Queen of England".
6943
28481209
https://en.wikipedia.org/wiki?curid=6943
Cathode ray
Cathode rays are streams of electrons observed in discharge tubes. If an evacuated glass tube is equipped with two electrodes and a voltage is applied, glass behind the positive electrode is observed to glow, due to electrons emitted from the cathode (the electrode connected to the negative terminal of the voltage supply). They were first observed in 1859 by German physicist Julius Plücker and Johann Wilhelm Hittorf, and were named in 1876 by Eugen Goldstein "Kathodenstrahlen", or cathode rays. In 1897, British physicist J. J. Thomson showed that cathode rays were composed of a previously unknown negatively charged particle, which was later named the "electron". Cathode-ray tubes (CRTs) use a focused beam of electrons deflected by electric or magnetic fields to render an image on a screen. Description. Cathode rays are so named because they are emitted by the negative electrode, or cathode, in a vacuum tube. To release electrons into the tube, they first must be detached from the atoms of the cathode. In the early experimental cold cathode vacuum tubes in which cathode rays were discovered, called Crookes tubes, this was done by using a high electrical potential of thousands of volts between the anode and the cathode to ionize the residual gas atoms in the tube. The positive ions were accelerated by the electric field toward the cathode, and when they collided with it they knocked electrons out of its surface; these were the cathode rays. Modern vacuum tubes use thermionic emission, in which the cathode is made of a thin wire filament which is heated by a separate electric current passing through it. The increased random heat motion of the filament knocks electrons out of the surface of the filament, into the evacuated space of the tube. Since the electrons have a negative charge, they are repelled by the negative cathode and attracted to the positive anode. They travel in parallel lines through the empty tube. The voltage applied between the electrodes accelerates these low mass particles to high velocities. Cathode rays are invisible, but their presence was first detected in these Crookes tubes when they struck the glass wall of the tube, exciting the atoms of the glass and causing them to emit light, a glow called fluorescence. Researchers noticed that objects placed in the tube in front of the cathode could cast a shadow on the glowing wall, and realized that something must be traveling in straight lines from the cathode. After the electrons strike the back of the tube they make their way to the anode, then travel through the anode wire through the power supply and back through the cathode wire to the cathode, so cathode rays carry electric current through the tube. The current in a beam of cathode rays through a vacuum tube can be controlled by passing it through a metal screen of wires (a grid) between cathode and anode, to which a small negative voltage is applied. The electric field of the wires deflects some of the electrons, preventing them from reaching the anode. The amount of current that gets through to the anode depends on the voltage on the grid. Thus, a small voltage on the grid can be made to control a much larger voltage on the anode. This is the principle used in vacuum tubes to amplify electrical signals. The triode vacuum tube developed between 1907 and 1914 was the first electronic device that could amplify, and is still used in some applications such as radio transmitters. 
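The acceleration described above can be quantified with a simple energy balance (a textbook relation, not a figure taken from any particular tube): an electron of charge e falling through a potential difference V between the electrodes gains kinetic energy

\[ eV = \tfrac{1}{2} m_e v^{2} \quad\Longrightarrow\quad v = \sqrt{2eV/m_e} . \]

For an accelerating voltage of 1 kV this already gives v ≈ 1.9 × 10⁷ m/s, a few percent of the speed of light, which is why even modest tube voltages produce very fast electrons.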
High speed beams of cathode rays can also be steered and manipulated by electric fields created by additional metal plates in the tube to which voltage is applied, or magnetic fields created by coils of wire (electromagnets). These are used in cathode-ray tubes, found in televisions and computer monitors, and in electron microscopes. History. After the invention of the vacuum pump in 1654 by Otto von Guericke, physicists began to experiment with passing high voltage electricity through rarefied air. In 1705, it was noted that electrostatic generator sparks travel a longer distance through low pressure air than through atmospheric pressure air. Gas discharge tubes. In 1838, Michael Faraday applied a high voltage between two metal electrodes at either end of a glass tube that had been partially evacuated of air, and noticed a strange light arc with its beginning at the cathode (negative electrode) and its end at the anode (positive electrode). In 1857, German physicist and glassblower Heinrich Geissler sucked even more air out with an improved pump, to a pressure of around 10⁻³ atm, and found that, instead of an arc, a glow filled the tube. The voltage applied between the two electrodes of the tubes, generated by an induction coil, was anywhere between a few kilovolts and 100 kV. These were called Geissler tubes, similar to today's neon signs. The explanation of these effects was that the high voltage accelerated free electrons and electrically charged atoms (ions) naturally present in the air of the tube. At low pressure, there was enough space between the gas atoms that the electrons could accelerate to high enough speeds that when they struck an atom they knocked electrons off of it, creating more positive ions and free electrons, which went on to create more ions and electrons in a chain reaction, known as a glow discharge. The positive ions were attracted to the cathode, and when they struck it they knocked more electrons out of it, which were attracted toward the anode. Thus the ionized air was electrically conductive and an electric current flowed through the tube. Geissler tubes had enough air in them that the electrons could only travel a tiny distance before colliding with an atom. The electrons in these tubes moved in a slow diffusion process, never gaining much speed, so these tubes didn't produce cathode rays. Instead, they produced a colorful glow discharge (as in a modern neon light), caused when the electrons struck gas atoms, exciting their orbital electrons to higher energy levels. The electrons released this energy as light. This process is called fluorescence. Cathode rays. By the 1870s, British physicist William Crookes and others were able to evacuate tubes to a lower pressure, below 10⁻⁶ atm. These were called Crookes tubes. Faraday had been the first to notice a dark space just in front of the cathode, where there was no luminescence. This came to be called the "cathode dark space", "Faraday dark space" or "Crookes dark space". Crookes found that as he pumped more air out of the tubes, the Faraday dark space spread down the tube from the cathode toward the anode, until the tube was totally dark. But at the anode (positive) end of the tube, the glass of the tube itself began to glow. What was happening was that as more air was pumped from the tube, the electrons knocked out of the cathode when positive ions struck it could travel farther, on average, before they struck a gas atom. 
By the time the tube was dark, most of the electrons could travel in straight lines from the cathode to the anode end of the tube without a collision. With no obstructions, these low mass particles were accelerated to high velocities by the voltage between the electrodes. These were the cathode rays. When they reached the anode end of the tube, they were traveling so fast that, although they were attracted to it, they often flew past the anode and struck the back wall of the tube. When they struck atoms in the glass wall, they excited their orbital electrons to higher energy levels. When the electrons returned to their original energy level, they released the energy as light, causing the glass to fluoresce, usually a greenish or bluish color. Later researchers painted the inside back wall with fluorescent chemicals such as zinc sulfide, to make the glow more visible. Cathode rays themselves are invisible, but this accidental fluorescence allowed researchers to notice that objects in the tube in front of the cathode, such as the anode, cast sharp-edged shadows on the glowing back wall. In 1869, German physicist Johann Hittorf was first to realize that something must be traveling in straight lines from the cathode to cast the shadows. Eugen Goldstein named them "cathode rays" (German "Kathodenstrahlen"). Discovery of the electron. At this time, atoms were the smallest particles known, and were believed to be indivisible. What carried electric currents was a mystery. During the last quarter of the 19th century, many historic experiments were done with Crookes tubes to determine what cathode rays were. There were two theories. Crookes and Arthur Schuster believed they were particles of "radiant matter," that is, electrically charged atoms. German scientists Eilhard Wiedemann, Heinrich Hertz and Goldstein believed they were "aether waves", some new form of electromagnetic radiation, and were separate from what carried the electric current through the tube. The debate was resolved in 1897 when J. J. Thomson measured the charge-to-mass ratio of cathode rays, showing they were made of particles around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first "subatomic" particle to be discovered, which he originally called "corpuscle" but which was later named "electron", after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical with particles given off by photoelectric and radioactive materials. It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge of the atom. Thomson was given the 1906 Nobel Prize in Physics for this work. Philipp Lenard also contributed a great deal to cathode-ray theory, winning the Nobel Prize in 1905 for his research on cathode rays and their properties. 
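Thomson's measurement is usually reconstructed along the following lines (a modern textbook sketch rather than a quotation of his own derivation). With crossed electric and magnetic fields adjusted so that the beam passes undeflected, the particle speed is v = E/B; with the electric field switched off, the magnetic field alone bends the beam into an arc of radius r, giving

\[ \frac{e}{m_e} = \frac{v}{rB} = \frac{E}{rB^{2}} \approx 1.76 \times 10^{11}\ \mathrm{C/kg} , \]

about 1800 times the charge-to-mass ratio of a hydrogen ion, which is the basis of the mass comparison quoted above (assuming the particle carries a charge of the same magnitude as the hydrogen ion's).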
Vacuum tubes. The gas ionization (or cold cathode) method of producing cathode rays used in Crookes tubes was unreliable, because it depended on the pressure of the residual air in the tube. Over time, the air was absorbed by the walls of the tube, and it stopped working. A more reliable and controllable method of producing cathode rays was investigated by Hittorf and Goldstein, and rediscovered by Thomas Edison in 1880. A cathode made of a wire filament heated red hot by a separate current passing through it would release electrons into the tube by a process called thermionic emission. The first true electronic vacuum tubes, invented in 1904 by John Ambrose Fleming, used this hot cathode technique, and they superseded Crookes tubes. These tubes didn't need gas in them to work, so they were evacuated to a lower pressure, around 10⁻⁹ atm (10⁻⁴ Pa). The ionization method of creating cathode rays used in Crookes tubes is today only used in a few specialized gas discharge tubes such as krytrons. In 1906, Lee De Forest found that a small voltage on a grid of metal wires between the cathode and anode could control a current in a beam of cathode rays passing through a vacuum tube. His invention, called the triode, was the first device that could amplify electric signals, and revolutionized electrical technology, creating the new field of "electronics". Vacuum tubes made radio and television broadcasting possible, as well as radar, talking movies, audio recording, and long-distance telephone service, and were the foundation of consumer electronic devices until the 1960s, when the transistor brought the era of vacuum tubes to a close. Cathode rays are now usually called electron beams. The technology of manipulating electron beams pioneered in these early tubes was applied practically in the design of vacuum tubes, particularly in the invention of the cathode-ray tube (CRT) by Ferdinand Braun in 1897, which was used in television sets and oscilloscopes. Today, electron beams are employed in sophisticated devices such as electron microscopes, electron beam lithography and particle accelerators. Properties of cathode rays and the experiments that revealed them. During the last quarter of the 19th century dozens of historic experiments were conducted to try to find out what cathode rays were. There were two theories: British scientists Crookes and Cromwell Varley believed they were particles of 'radiant matter', that is, electrically charged atoms. German researchers E. Wiedemann, Heinrich Hertz, and Eugen Goldstein believed they were 'aether vibrations', some new form of electromagnetic waves, and were separate from what carried the current through the tube. The debate continued until J. J. Thomson measured the charge-to-mass ratio of cathode rays, proving that they consisted of a previously unknown negatively charged subatomic particle, which he called a 'corpuscle' but which was later renamed the 'electron'. Straight line motion. Julius Plücker in 1869 built a tube with an anode shaped like a Maltese Cross facing the cathode. It was hinged, so it could fold down against the floor of the tube. When the tube was turned on, the cathode rays cast a sharp cross-shaped shadow on the fluorescence on the back face of the tube, showing that the rays moved in straight lines. This fluorescence was used as an argument that cathode rays were electromagnetic waves, since the only thing known to cause fluorescence at the time was ultraviolet light. After a while the fluorescence would get 'tired' and the glow would decrease. If the cross was folded down out of the path of the rays, it no longer cast a shadow, and the previously shadowed area would fluoresce more strongly than the area around it. Perpendicular emission. Eugen Goldstein in 1876 found that cathode rays were always emitted perpendicular to the cathode's surface. If the cathode was a flat plate, the rays were shot out in straight lines perpendicular to the plane of the plate. 
This was evidence that they were particles, because a luminous object, like a red hot metal plate, emits light in all directions, while a charged particle will be repelled by the cathode in a perpendicular direction. Cathode rays heat matter which they strike. If the electrode was made in the form of a concave spherical dish, the cathode rays would be focused to a spot in front of the dish. This could be used to heat samples to a high temperature. Electrostatic deflection. A cathode ray's path can be deflected by an electric field. Heinrich Hertz built a tube with a second pair of metal plates to either side of the cathode ray beam, a crude CRT. If the cathode rays were charged particles, their path should be bent by the electric field created when a voltage was applied to the plates, causing the spot of light where the rays hit to move sideways. He did not find any bending, but it was later determined that his tube was insufficiently evacuated, causing accumulations of surface charge which masked the electric field. Later Arthur Schuster repeated the experiment with a higher vacuum. He found that the rays were attracted toward a positively charged plate and repelled by a negative one, bending the beam. This was evidence they were negatively charged, and therefore not electromagnetic waves. Magnetic deflection. The rays' path can also be deflected by a magnetic field. Crookes put a magnet across the neck of the tube, so that the North pole was on one side of the beam and the South pole was on the other, and the beam travelled through the magnetic field between them. The beam was bent down, perpendicular to the magnetic field. To reveal the path of the beam, Crookes invented a tube with a cardboard screen with a phosphor coating down the length of the tube, at a slight angle so the electrons would strike the phosphor along its length, making a glowing line on the screen. The line could be seen to bend up or down in a transverse magnetic field. This effect (now attributed to the Lorentz force) was similar to the behavior of electric currents in an electric motor, and showed that the cathode rays responded to magnetic fields in the same way as currents in wires. Both electric and magnetic deflection were evidence for the particle theory, because electric and magnetic fields have no effect on a beam of light waves in vacuum. Paddlewheel. Crookes put a tiny vaned turbine or paddlewheel in the path of the cathode rays, and found that it rotated when the rays hit it. The paddlewheel turned in a direction away from the cathode side of the tube, suggesting that the force of the cathode rays striking the paddles was causing the rotation. Crookes concluded at the time that this showed that cathode rays had momentum, so the rays were likely matter particles. However, later it was concluded that the paddle wheel turned not due to the momentum of the particles (or electrons) hitting the paddle wheel but due to the radiometric effect. When the rays hit the paddle surface they heated it, and the heat caused the gas next to it to expand, pushing the paddle. This was proven in 1903 by J. J. Thomson, who calculated that the momentum of the electrons hitting the paddle wheel would only be sufficient to turn the wheel one revolution per minute. All this experiment really showed was that cathode rays were able to heat surfaces. 
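For reference, both deflection experiments are described by what is now written as the Lorentz force on a charge q moving with velocity v (given here in its modern form, not as anything Hertz or Crookes wrote down):

\[ \vec{F} = q\vec{E} + q\,\vec{v} \times \vec{B} . \]

In a purely magnetic field the force stays perpendicular to the velocity, so the beam follows a circular arc of radius r = m v / (|q| B); a light wave, carrying no charge, would be left untouched by either field, which is why these deflections weighed so heavily in favor of the particle theory.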
Negative electric charge. Jean-Baptiste Perrin wanted to determine whether the cathode rays actually carried negative charge, or whether they just accompanied the charge carriers, as the Germans thought. In 1895 he constructed a tube with a 'catcher', a closed aluminum cylinder with a small hole in the end facing the cathode, to collect the cathode rays. The catcher was attached to an electroscope to measure its charge. The electroscope showed a negative charge, proving that cathode rays really carry negative electricity. Anode rays. Goldstein found in 1886 that if the cathode is made with small holes in it, streams of a faint luminous glow will be seen issuing from the holes on the back side of the cathode, facing away from the anode. It was found that in an electric field these anode rays bend in the opposite direction from cathode rays, toward a negatively charged plate, indicating that they carry a positive charge. These were the positive ions which were attracted to the cathode, and created the cathode rays. They were named "canal rays" ("Kanalstrahlen") by Goldstein. Spectral shift. Eugen Goldstein thought he had figured out a method of measuring the speed of cathode rays. If the glow discharge seen in the gas of Crookes tubes was produced by the moving cathode rays, the light radiated from them in the direction they were moving, down the tube, would be shifted in frequency due to the Doppler effect. This could be detected with a spectroscope because the emission line spectrum would be shifted. He built a tube shaped like an "L", with a spectroscope pointed through the glass of the elbow down one of the arms. He measured the spectrum of the glow when the spectroscope was pointed toward the cathode end, then switched the power supply connections so the cathode became the anode and the electrons were moving in the other direction, and again observed the spectrum looking for a shift. He did not find one, which he calculated meant that the rays were traveling very slowly. It was later recognized that the glow in Crookes tubes is emitted from gas atoms hit by the electrons, not the electrons themselves. Since the atoms are thousands of times more massive than the electrons, they move much more slowly, accounting for the lack of Doppler shift. Lenard window. Philipp Lenard wanted to see if cathode rays could pass out of the Crookes tube into the air. He built a tube with a "window" in the glass envelope made of aluminum foil just thick enough to hold the atmospheric pressure out (later called a "Lenard window") facing the cathode so the cathode rays would hit it. He found that something did come through. Holding a fluorescent screen up to the window caused it to fluoresce, even though no light reached it. A photographic plate held up to it would be darkened, even though it was not exposed to light. The effect had only a very short range in air. He measured the ability of cathode rays to penetrate sheets of material, and found they could penetrate much farther than moving atoms could. Since atoms were the smallest particles known at the time, this was first taken as evidence that cathode rays were waves. Later it was realized that electrons were much smaller than atoms, accounting for their greater penetration ability. Lenard was awarded the Nobel Prize in Physics in 1905 for his work. Wave-particle duality. Louis de Broglie later (1924) suggested in his doctoral dissertation that electrons are like photons and can act as waves. 
The wave-like behaviour of cathode rays was later directly demonstrated using reflection from a nickel surface by Davisson and Germer, and transmission through celluloid thin films and later metal films by George Paget Thomson and Alexander Reid in 1927. (Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident and is rarely mentioned.)
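De Broglie's relation, applied to a cathode-ray electron accelerated through a potential V (non-relativistic form, given here as an illustration), is

\[ \lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m_e e V}} , \]

which for accelerating voltages of order 100 V, as used in the Davisson–Germer experiment, gives a wavelength on the order of 0.1 nm, comparable to the spacing of atoms in a nickel crystal; this is why the crystal could act as a diffraction grating for the beam.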
6944
24565488
https://en.wikipedia.org/wiki?curid=6944
Cathode
A cathode is the electrode from which a conventional current leaves a polarized electrical device such as a lead-acid battery. This definition can be recalled by using the mnemonic "CCD" for "Cathode Current Departs". Conventional current describes the direction in which positive charges move. Electrons, which are the carriers of current in most electrical systems, have a negative electrical charge, so the movement of electrons is "opposite" to that of the conventional current flow: this means that electrons flow "into" the device's cathode from the external circuit. For example, the end of a household battery marked with a + (plus) is the cathode. The electrode through which conventional current flows the other way, into the device, is termed an anode. Charge flow. Conventional current flows from cathode to anode outside the cell or device (with electrons moving in the opposite direction), regardless of the cell or device type and operating mode. Cathode polarity with respect to the anode can be positive or negative depending on how the device is being operated. Inside a device or a cell, positively charged cations always move towards the cathode and negatively charged anions move towards the anode, although cathode polarity depends on the device type, and can even vary according to the operating mode. Whether the cathode is negatively polarized (such as recharging a battery) or positively polarized (such as a battery in use), the cathode will draw electrons into it from outside, as well as attract positively charged cations from inside. A battery or galvanic cell in use has a cathode that is the positive terminal since that is where conventional current flows out of the device. This outward current is carried internally by positive ions moving from the electrolyte to the positive cathode (chemical energy is responsible for this "uphill" motion). It is continued externally by electrons moving into the battery which constitutes positive current flowing outwards. For example, the Daniell galvanic cell's copper electrode is the positive terminal and the cathode. A battery that is recharging or an electrolytic cell performing electrolysis has its cathode as the negative terminal, from which current exits the device and returns to the external generator as charge enters the battery/ cell. For example, reversing the current direction in a Daniell galvanic cell converts it into an electrolytic cell where the copper electrode is the positive terminal and also the anode. In a diode, the cathode is the negative terminal at the pointed end of the arrow symbol, where current flows out of the device. Note: electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes (including cathode-ray tubes) it is the negative terminal where electrons enter the device from the external circuit and proceed into the tube's near-vacuum, constituting a positive current flowing out of the device. Etymology. The word was coined in 1834 from the Greek κάθοδος ("kathodos"), 'descent' or 'way down', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. 
In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the cathode is where the current leaves the electrolyte, on the West side: ""kata" downwards, "'odos" a way; the way which the sun sets". The use of 'West' to mean the 'out' direction (actually 'out' → 'West' → 'sunset' → 'down', i.e. 'out of view') may appear unnecessarily contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "exode" (the doorway where the current exits). His motivation for changing it to something meaning 'the West electrode' (other candidates had been "westode", "occiode" and "dysiode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the West electrode would not have been the 'way out' any more. Therefore, "exode" would have become inappropriate, whereas "cathode" meaning 'West electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the cathode's function any more, but more importantly because, as we now know, the Earth's magnetic field direction on which the "cathode" term is based is subject to reversals whereas the current direction convention on which the "exode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember, and more durably technically correct (although historically false), etymology has been suggested: cathode, from the Greek "kathodos", 'way down', 'the way (down) into the cell (or other device) for electrons'. In chemistry. In chemistry, a cathode is the electrode of an electrochemical cell at which reduction occurs. The cathode can be negative like when the cell is electrolytic (where electrical energy provided to the cell is being used for decomposing chemical compounds); or positive as when the cell is galvanic (where chemical reactions are used for generating electrical energy). The cathode supplies electrons to the positively charged cations which flow to it from the electrolyte (even if the cell is galvanic, i.e., when the cathode is positive and therefore would be expected to repel the positively charged cations; this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems in a galvanic cell). The cathodic current, in electrochemistry, is the flow of electrons from the cathode interface to a species in solution. 
The anodic current is the flow of electrons into the anode from a species in solution. Electrolytic cell. In an electrolytic cell, the cathode is where the negative polarity is applied to drive the cell. Common results of reduction at the cathode are hydrogen gas or pure metal from metal ions. When discussing the relative reducing power of two redox agents, the couple for generating the more reducing species is said to be more "cathodic" with respect to the more easily reduced reagent. Galvanic cell. In a galvanic cell, the cathode is where the positive pole is connected to allow the circuit to be completed: as the anode of the galvanic cell gives off electrons, they return from the circuit into the cell through the cathode. Electroplating metal cathode (electrolysis). When metal ions are reduced from ionic solution, they form a pure metal surface on the cathode. Items to be plated with pure metal are attached to and become part of the cathode in the electrolytic solution. In electronics. Vacuum tubes. In a vacuum tube or electronic vacuum system, the cathode is usually a metal surface, often with an oxide coating that much improves electron emission, which is heated by a filament and emits free electrons into the evacuated space. In some cases the bare filament acts as the cathode. Since the electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it; this is called the "work function" of the metal. Cathodes are induced to emit electrons by several mechanisms, and can be divided into two types, hot and cold, described below. Hot cathode. A hot cathode is a cathode that is heated by a filament to produce electrons by thermionic emission. The filament is a thin wire of a refractory metal like tungsten heated red-hot by an electric current passing through it. Before the advent of transistors in the 1960s, virtually all electronic equipment used hot-cathode vacuum tubes. Today hot cathodes are used in vacuum tubes in radio transmitters and microwave ovens, to produce the electron beams in older cathode-ray tube (CRT) type televisions and computer monitors, in x-ray generators, electron microscopes, and fluorescent tubes. There are two types of hot cathodes: directly heated, in which the filament itself is the cathode, and indirectly heated, in which the filament heats a separate cathode surface. In order to improve electron emission, cathodes are treated with chemicals, usually compounds of metals with a low work function. Treated cathodes require less surface area, lower temperatures and less power to supply the same cathode current. The untreated tungsten filaments used in early tubes (called "bright emitters") had to be heated white-hot to produce sufficient thermionic emission for use, while modern coated cathodes produce far more electrons at a given temperature and so can operate at much lower temperatures. Cold cathode. This is a cathode that is not heated by a filament. They may emit electrons by field electron emission, and in gas-filled tubes by secondary emission. Some examples are electrodes in neon lights, cold-cathode fluorescent lamps (CCFLs) used as backlights in laptops, thyratron tubes, and Crookes tubes. They do not necessarily operate at room temperature; in some devices the cathode is heated by the electron current flowing through it to a temperature at which thermionic emission occurs. For example, in some fluorescent tubes a momentary high voltage is applied to the electrodes to start the current through the tube; after starting, the electrodes are heated enough by the current to keep emitting electrons to sustain the discharge. 
Cold cathodes may also emit electrons by photoelectric emission. These are often called "photocathodes" and are used in phototubes in scientific instruments and in image intensifier tubes used in night vision goggles. Diodes. In a semiconductor diode, the cathode is the N-doped layer of the p–n junction, with a high density of free electrons due to doping and an equal density of fixed positive charges, which are the dopants that have been thermally ionized. In the anode, the converse applies: it features a high density of free "holes" and consequently fixed negative dopants which have captured an electron (hence the origin of the holes). When P- and N-doped layers are created adjacent to each other, diffusion ensures that electrons flow from high to low density areas, that is, from the N to the P side. They leave behind the fixed positively charged dopants near the junction. Similarly, holes diffuse from P to N, leaving behind fixed negative ionised dopants near the junction. These layers of fixed positive and negative charges are collectively known as the depletion layer, because they are depleted of free electrons and holes. The depletion layer at the junction is the origin of the diode's rectifying properties. This is due to the resulting internal field and corresponding potential barrier, which inhibit current flow under reverse applied bias (which increases the internal depletion-layer field). Conversely, current is allowed to flow under forward applied bias, where the applied bias reduces the built-in potential barrier. Electrons which diffuse from the cathode into the P-doped layer, or anode, become what are termed "minority carriers" and tend to recombine there with the majority carriers, which are holes, on a timescale characteristic of the material, known as the p-type minority carrier lifetime. Similarly, holes diffusing into the N-doped layer become minority carriers and tend to recombine with electrons. In equilibrium, with no applied bias, thermally assisted diffusion of electrons and holes in opposite directions across the depletion layer ensures a zero net current, with electrons flowing from cathode to anode and recombining, and holes flowing from anode to cathode across the junction or depletion layer and recombining. Like a typical diode, there is a fixed anode and cathode in a Zener diode, but it will conduct current in the reverse direction (electrons flow from anode to cathode) if its breakdown voltage or "Zener voltage" is exceeded.
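The rectifying behaviour described above is commonly summarised by the ideal-diode (Shockley) equation, quoted here as a standard textbook relation rather than a description of any particular device:

\[ I = I_S \left( e^{\,V/(nV_T)} - 1 \right), \qquad V_T = \frac{kT}{q} \approx 26\ \mathrm{mV\ at\ room\ temperature} , \]

where I_S is the saturation current and n an ideality factor near 1. Forward bias (V > 0, conventional current entering at the anode and leaving at the cathode) makes the exponential term dominate, while reverse bias leaves only the tiny leakage −I_S, until breakdown mechanisms such as the Zener effect mentioned above take over.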
6945
29098485
https://en.wikipedia.org/wiki?curid=6945
Chrominance
Chrominance ("chroma" or "C" for short) is the signal used in video systems to convey the color information of the picture (see YUV color model), separately from the accompanying luma signal (or Y' for short). Chrominance is usually represented as two color-difference components: U = B′ − Y′ (blue − luma) and V = R′ − Y′ (red − luma). Each of these different components may have scale factors and offsets applied to it, as specified by the applicable video standard. In composite video signals, the U and V signals modulate a color subcarrier signal, and the result is referred to as the chrominance signal; the phase and amplitude of this modulated chrominance signal correspond approximately to the hue and saturation of the color. In digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance components are digital sample values. Separating RGB color signals into luma and chrominance allows the bandwidth of each to be determined separately. Typically, the chrominance bandwidth is reduced in analog composite video by reducing the bandwidth of a modulated color subcarrier, and in digital systems by chroma subsampling. History. The idea of transmitting a color television signal with distinct luma and chrominance components originated with Georges Valensi, who patented the idea in 1938. Valensi's patent application described: The use of two channels, one transmitting the predominating color (signal T), and the other the mean brilliance (signal t) output from a single television transmitter to be received not only by color television receivers provided with the necessary more expensive equipment, but also by the ordinary type of television receiver which is more numerous and less expensive and which reproduces the pictures in black and white only. Previous schemes for color television systems, which were incompatible with existing monochrome receivers, transmitted RGB signals in various ways. Television standards. In analog television, chrominance is encoded into a video signal using a subcarrier frequency. Depending on the video standard, the chrominance subcarrier may be either quadrature-amplitude-modulated (NTSC and PAL) or frequency-modulated (SECAM). In the PAL system, the color subcarrier is 4.43 MHz above the video carrier, while in the NTSC system it is 3.58 MHz above the video carrier. The NTSC and PAL standards are the most commonly used, although there are other video standards that employ different subcarrier frequencies. For example, PAL-M (Brazil) uses a 3.58 MHz subcarrier, and SECAM uses two different frequencies, 4.250 MHz and 4.40625 MHz above the video carrier. The presence of chrominance in a video signal is indicated by a color burst signal transmitted on the back porch, just after horizontal synchronization and before each line of video starts. If the color burst signal were visible on a television screen, it would appear as a vertical strip of a very dark olive color. In NTSC and PAL, hue is represented by a phase shift of the chrominance signal relative to the color burst, while saturation is determined by the amplitude of the subcarrier. In SECAM (R′ − Y′) and (B′ − Y′) signals are transmitted alternately and phase does not matter. Chrominance is represented by the U-V color plane in PAL and SECAM video signals, and by the I-Q color plane in NTSC. Digital systems. Digital video and digital still photography systems sometimes use a luma/chroma decomposition for improved compression. 
For example, when an ordinary RGB digital image is compressed via the JPEG standard, the RGB color space is first converted (by a rotation matrix) to a YCbCr color space, because the three components in that space have less correlation redundancy and because the chrominance components can then be subsampled by a factor of 2 or 4 to further compress the image. On decompression, the Y′CbCr space is rotated back to RGB.
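A minimal sketch of this luma/chroma separation and 4:2:0 subsampling, assuming the full-range JFIF/BT.601 coefficients commonly used with JPEG; the function names and the use of NumPy are illustrative, not taken from any particular codec:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float array with values in [0, 255] to Y', Cb, Cr.
    Coefficients are the full-range JFIF/BT.601 values (an assumption here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def subsample_420(chan):
    """Average 2x2 blocks, halving the chroma resolution in both directions."""
    h, w = chan.shape
    chan = chan[:h - h % 2, :w - w % 2]
    return chan.reshape(chan.shape[0] // 2, 2, chan.shape[1] // 2, 2).mean(axis=(1, 3))

# Tiny demonstration on random pixel data.
rgb = np.random.randint(0, 256, (4, 6, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_sub, cr_sub = subsample_420(cb), subsample_420(cr)
print(y.shape, cb_sub.shape, cr_sub.shape)   # (4, 6) (2, 3) (2, 3)
```

Real encoders add chroma siting, range clamping and integer arithmetic on top of this, but the separate-then-subsample structure is the same.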
6947
217538
https://en.wikipedia.org/wiki?curid=6947
Campus
A campus traditionally refers to the land and buildings of a college or university. This will often include libraries, lecture halls, student centers and, for residential universities, residence halls and dining halls. By extension, a corporate campus is a collection of buildings and grounds that belong to a company, particularly in the technology sector. Examples include Bell Labs, the Googleplex and Apple Park. Etymology. Campus comes from the Latin "campus", meaning "field", and was first used in the academic sense at Princeton University in 1774. At Princeton, the word referred to a large open space on the college grounds; similarly at the University of South Carolina it was used by 1826 to describe the open square (of around 10 acres) between the college buildings. By the end of the 19th century, the term was used widely at US colleges to refer to the grounds of the college, but it was not until the 20th century that it expanded to include the buildings as well. History. The tradition of a campus began with the medieval European universities, where the students and teachers lived and worked together in a cloistered environment. The notion of the importance of the setting to academic life later migrated to America, and early colonial educational institutions were based on the Scottish and English collegiate system. The campus evolved from the cloistered model in Europe to a diverse set of independent styles in the United States. Early colonial colleges were all built in proprietary styles, with some contained in single buildings, such as the campus of Princeton University, or arranged in a version of the cloister reflecting American values, such as Harvard's. Both the campus designs and the architecture of colleges throughout the country have evolved in response to trends in the broader world, with most representing several different contemporary and historical styles and arrangements. In 1922, a lecture by Patrick Abercrombie at the British Town Planning Institute contrasted the American campus with the style of Oxbridge colleges, saying: "generally with us the park-like garden and trees are to one side of the college buildings, in contrast with the formally enclosed quad with its clipt grass. In the Campus method the departments of the university are scattered about a park and are actually among the trees." However, he did also note that Trinity College Dublin had "what is called elsewhere a Campus" on its site in central Dublin, and that William Wilkins had "attempt[ed] an English Campus" on the site of Downing College, Cambridge. The first true campus universities in Britain were not established until the late 1940s, with the University of Reading moving to its Whiteknights campus in 1947, University College Swansea (now Swansea University) moving to its Singleton Park campus in 1948 and the University College of North Staffordshire (now the University of Keele) being established on the Keele Hall estate in 1949. Uses. Office buildings. In the early 1990s the term began to be used to describe a company's office building complex, most notably when Apple's Infinite Loop campus was first built, which at the time was used exclusively for research and development. The Microsoft Campus in Redmond, Washington, is another example of this usage, although it was built in the 1980s, before the term was applied to company property. In the 21st century, hospitals and even airports sometimes use the term to describe the territory of their respective facilities. Universities. 
The word "campus" has also been applied to European universities, although some such institutions (in particular, "ancient" universities such as Bologna, Padua, Oxford and Cambridge) are characterized by ownership of individual buildings in university town-like urban settings rather than sprawling park-like lawns in which buildings are placed. World Heritage campuses. A number of university campuses or parts of campuses have been recognised as World Heritage Sites by UNESCO for their outstanding universal value. These include:
6948
18872885
https://en.wikipedia.org/wiki?curid=6948
Crossbow
A crossbow is a ranged weapon using an elastic launching device consisting of a bow-like assembly called a "prod", mounted horizontally on a main frame called a "tiller", which is hand-held in a similar fashion to the stock of a long gun. Crossbows shoot arrow-like projectiles called "bolts" or "quarrels". A person who shoots a crossbow is called a "crossbowman", an "arbalister" or an "arbalist" (after the arbalest, a European crossbow variant used during the 12th century). Crossbows and bows use the same elastic launch principles, but differ in that an archer using a bow must draw and shoot in a quick and smooth motion with limited or no time for aiming, while a crossbow's design allows it to be spanned and cocked ready for use at a later time, affording its user ample time to aim. When shooting bows, the archer must fully perform the draw, holding the string and arrow using various techniques while pulling it back with arm and back muscles, and then either immediately shoot instinctively without a period of aiming, or hold that form while aiming. Both approaches demand considerable physical strength with bows suitable for warfare, though this is easier using lighter draw-weight hunting bows. As such, their accurate and sustained use in warfare takes much practice. Crossbows avoid these problems by having trigger-released cocking mechanisms that maintain the tension on the string once it has been spanned – drawn – into its ready-to-shoot position, allowing these weapons to be carried cocked and ready and affording their users time to aim them. This also allows them to be readied by someone assisting their users, so multiple crossbows can be used one after the other while others reload and ready them. Crossbows are spanned into their cocked positions using a number of techniques and devices, some of which are mechanical and employ gear and pulley arrangements – levers, belt hooks, pulleys, windlasses and cranequins – to overcome very high draw weights. These potentially achieve better precision and enable effective use by less familiarised and trained personnel, whereas the simple and composite warbows of, for example, the English and the steppe nomads require years of training, practice and familiarisation. These advantages are somewhat offset by the longer time needed to reload a crossbow for further shots, since the crossbows with the highest draw weights require sophisticated systems of gears and pulleys that are slow and rather awkward to employ on the battlefield. Medieval crossbows were also very inefficient: their shot stroke, from the string lock to the release point of the bolt, was short, and their steel prods and heavy strings moved relatively slowly, so that despite draw weights far greater than those of bows they transferred energy to the bolt inefficiently. Modern materials and crossbow designs overcome these shortcomings. The earliest known crossbows were invented in ancient China in the first millennium BC and brought about a major shift in the role of projectile weaponry in wars, especially during Qin's unification wars and later the Han campaigns against northern nomads and western states. The medieval European crossbow was called by many names, including "crossbow" itself; most of these names derived from the word "ballista", an ancient Greek torsion siege engine similar in appearance but different in design principle. 
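The efficiency point made above can be put in rough quantitative terms with a back-of-the-envelope estimate (an idealization that assumes the draw force rises linearly over the power stroke, not a measurement of any historical weapon): the energy stored when spanning is approximately

\[ E \approx \tfrac{1}{2} F_{\text{draw}} \, L_{\text{stroke}} , \]

so a draw weight several times that of a warbow, acting over a power stroke of only a few inches, need not store more energy than the warbow drawn through a full arm's length, and only part of that stored energy ever reaches the bolt.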
In modern times, firearms have largely supplanted bows and crossbows as weapons of war, but crossbows remain widely used for competitive shooting sports and hunting, and for relatively silent shooting. Terminology. A crossbowman is sometimes called an "arbalist", or historically an "arbalister". "Arrow", "bolt" and "quarrel" are all suitable terms for crossbow projectiles, as was "vire" historically. The "lath", also called the "prod", is the bow of the crossbow. According to W. F. Peterson, "prod" came into usage in the 19th century as a result of mistranslating "rodd" in a 16th-century list of crossbow effects. The "stock" (a modern term derived from the equivalent concept in firearms) is the wooden body on which the bow is mounted, although the medieval "tiller" is also used. The "lock" refers to the release mechanism, including the string, sears, trigger lever, and housing. Construction. A crossbow is essentially a bow mounted on an elongated frame (called a tiller or stock) with a built-in mechanism that holds the drawn bow string, as well as a trigger mechanism, which is used to release the string. Chinese vertical trigger lock. The Chinese trigger was a mechanism typically composed of three cast bronze pieces housed inside a hollow bronze enclosure. The entire mechanism was then dropped into a carved slot within the tiller and secured by two bronze rods. The string catch (nut) is shaped like a "J" because it usually has a tall erect rear spine that protrudes above the housing, which serves the function of both a cocking lever (by pushing the drawn string onto it) and a primitive rear sight. It is held stationary against tension by the second piece, which is shaped like a flattened "C" and acts as the sear. The sear cannot move, as it is trapped by the third piece, i.e. the actual trigger blade, which hangs vertically below the enclosure and catches the sear via a notch. The two bearing surfaces between the three trigger pieces each offer a mechanical advantage, allowing significant draw weights to be handled with a much smaller pull weight. During shooting, the user would hold the crossbow at eye level by a vertical handle and aim along the arrow, using the sighting spine for elevation, similar to how a modern rifleman shoots with iron sights. When the trigger blade is pulled, its notch disengages from the sear and allows the latter to drop downwards, which in turn frees the nut to pivot forward and release the bowstring. European rolling nut lock. The earliest European designs featured a transverse slot in the top surface of the frame, down into which the string was placed. To shoot this design, a vertical rod is thrust up through a hole in the bottom of the notch, forcing the string out. This rod is usually attached perpendicular to a rear-facing lever called a "tickler". A later design implemented a rolling cylindrical pawl called a "nut" to retain the string. This nut has a perpendicular centre slot for the bolt, and an intersecting axial slot for the string, along with a lower face or slot against which the internal trigger sits. They often also have some form of strengthening internal "sear" or trigger face, usually of metal. These "roller nuts" were either free-floating in their close-fitting hole across the stock, tied in with a binding of sinew or other strong cording; or mounted on a metal axle or pins. Removable or integral plates of wood, ivory, or metal on the sides of the stock kept the nut in place laterally. Nuts were made of antler, bone, or metal. 
Bows could be kept taut and ready to shoot for some time with little physical strain, allowing crossbowmen to aim better without fatiguing. Bow. Chinese crossbow prods were made of composite material from the start. European crossbows from the 10th to 12th centuries used wood for the bow, also called the "prod" or "lath", which tended to be ash or yew. Composite bows started appearing in Europe during the 13th century and could be made from layers of different material, often wood, horn, and sinew glued together and bound with animal tendon. These composite bows, made of several layers, are much stronger and more efficient in releasing energy than simple wooden bows. As steel became more widely available in Europe around the 14th century, steel prods came into use. Traditionally, the prod was often lashed to the stock with rope, whipcord, or other strong cording. This binding is called the "bridle". Spanning mechanism. The Chinese used winches for large crossbows mounted on fortifications or wagons, known as "bedded crossbows" (床弩). Winches may have been used for handheld crossbows during the Han dynasty (202 BC – 9 AD, 25–220 AD), but there is only one known depiction of it. The 11th century Chinese military text "Wujing Zongyao" mentions types of crossbows using winch mechanisms, but it is not known if these were actually handheld crossbows or mounted crossbows. Another drawing method involved the shooters sitting on the ground and using the combined strength of leg, waist, back and arm muscles to help span much heavier crossbows, which were aptly called "waist-spun crossbows" (腰張弩). During the medieval era, both Chinese and European crossbows used stirrups as well as belt hooks. In the 13th century, European crossbows started using winches, and from the 14th century they employed an assortment of spanning mechanisms such as winch pulleys, cord pulleys, gaffles (such as gaffe levers, goat's foot levers, and rarer internal lever-action mechanisms), cranequins, and even screws. Variants. The smallest crossbows are pistol crossbows. Others are simple long stocks with the crossbow mounted on them. These could be shot from under the arm. The next step in development was stocks of the shape that would later be used for firearms, which allowed better aiming. The arbalest was a heavy crossbow that required special systems, such as windlasses, for drawing back its string. For siege warfare, the size of crossbows was further increased to hurl large projectiles, such as rocks, at fortifications. These siege crossbows needed a massive base frame and powerful windlass devices. Projectiles. The arrow-like projectiles of a crossbow are called bolts or quarrels. These are usually much shorter than arrows but can be several times heavier. There is an optimum weight for bolts to achieve maximum kinetic energy, which varies depending on the strength and characteristics of the crossbow, but most could pass through common mail. Crossbow bolts can be fitted with a variety of heads, some with sickle-shaped heads to cut rope or rigging; but the most common today is a four-sided point called a quarrel. A highly specialized type of bolt is employed to collect blubber biopsy samples used in biology research. Even relatively small differences in bolt weight can have a considerable impact on its flight trajectory and drop. Bullet-shooting crossbows are modified crossbows that use bullets or stones as projectiles. Accessories. The ancient Chinese crossbow often included a metal (i.e. bronze or steel) grid serving as iron sights. 
Modern crossbow sights often use similar technology to modern firearm sights, such as red dot sights and telescopic sights. Many crossbow scopes feature multiple crosshairs to compensate for the significant effects of gravity over different ranges. In most cases, a newly bought crossbow will need to be sighted in for accurate shooting. A major cause of the sound of shooting a crossbow is vibration of various components. Crossbow silencers are multiple components placed on high-vibration parts, such as the string and limbs, to dampen vibration and suppress the sound of loosing the bolt. History. China. In terms of archaeological evidence, crossbow locks made of cast bronze have been found in China. They have also been found in Tombs 3 and 12 at Qufu, Shandong, previously the capital of Lu, and date to the 6th century BC. Bronze crossbow bolts dating from the mid-5th century BC have been found at a Chu burial site in Yutaishan, Jiangling County, Hubei Province. Other early finds of crossbows were discovered in Tomb 138 at Saobatang, Hunan Province, and date to the mid-4th century BC. It is possible that these early crossbows used spherical pellets for ammunition. A Western Han mathematician and music theorist, Jing Fang (78–37 BC), compared the moon to the shape of a round crossbow bullet. The "Zhuangzi" also mentions crossbow bullets. The earliest Chinese documents mentioning a crossbow were texts from the 4th to 3rd centuries BC attributed to the followers of Mozi. This source refers to the use of a giant crossbow between the 6th and 5th centuries BC, corresponding to the late Spring and Autumn period. Sun Tzu's "The Art of War" (first appearance dated between 500 BC and 300 BC) refers to the characteristics and use of crossbows in chapters 5 and 12 respectively, and compares a drawn crossbow to "might". The "Huainanzi" advises its readers not to use crossbows in marshland where the surface is soft and it is hard to arm the crossbow with the foot. The "Records of the Grand Historian", completed in 94 BC, mentions that Sun Bin defeated Pang Juan by ambushing him with a battalion of crossbowmen at the Battle of Maling in 342 BC. The "Book of Han", finished in 111 AD, lists two military treatises on crossbows. Handheld crossbows with complex bronze trigger mechanisms have also been found with the Terracotta Army in the tomb of Qin Shi Huang (r. 221–210 BC) that are similar to specimens from the subsequent Han dynasty (202 BC–220 AD). Crossbowmen of the Qin and Han dynasties learned drill formations, some were even mounted as charioteers and cavalry units, and Han dynasty writers attributed the success of numerous battles against the Xiongnu and Western Regions city-states to massed crossbow volleys. The bronze triggers were designed in such a way that they were able to store a large amount of energy within the bow when drawn, yet could be shot with little resistance and recoil when the trigger was pulled. The trigger nut also had a long vertical spine that could be used like a primitive rear sight for elevation adjustment, which allowed precision shooting over longer distances. The Qin and Han dynasty-era crossbow was also an early example of a modular design, as the bronze trigger components were mass-produced with relatively precise tolerances so that the parts were interchangeable between different crossbows. The trigger mechanism from one crossbow could be installed in another simply by dropping it into a tiller slot of the same specifications and securing it with dowel pins. 
Some crossbow designs were also found to be fitted with bronze buttplates and trigger guards. It is clear from surviving inventory lists in Gansu and Xinjiang that the crossbow was greatly favored by the Han dynasty. For example, in one batch of slips there are only two mentions of bows, but thirty mentions of crossbows. Crossbows were mass-produced in state armories, with designs improving as time went on, such as the use of a mulberry wood stock and brass. By 1068 AD, during the Song dynasty, such crossbows could pierce a tree at 140 paces. Crossbows were used in numbers as large as 50,000 starting from the Qin dynasty and upwards of several hundred thousand during the Han. According to one authority, the crossbow had become "nothing less than the standard weapon of the Han armies" by the second century BC. Han soldiers were required to arm a crossbow of a prescribed minimum draw weight to qualify as entry-level crossbowmen, while it was claimed that a few elite troops were capable of arming far heavier crossbows by the hands-and-feet method. After the Han dynasty, the crossbow lost favor during the Six Dynasties, until it experienced a mild resurgence during the Tang dynasty, under which the ideal expeditionary army of 20,000 included 2,200 archers and 2,000 crossbowmen. Li Jing and Li Quan prescribed that 20 percent of the infantry be armed with crossbows. During the Song dynasty, the crossbow received a huge upsurge in military usage and often outnumbered the bow two to one. During this time period, a stirrup was added for ease of loading. The Song government attempted to restrict the public use of crossbows and sought ways to keep both body armor and crossbows out of civilian ownership. Despite the ban on certain types of crossbows, the weapon experienced an upsurge in civilian usage as both a hunting weapon and pastime. The "romantic young people from rich families, and others who had nothing particular to do" formed crossbow-shooting clubs as a way to pass time. Military crossbows were armed by treading, that is, by placing the feet on the bow stave and drawing the string using the arms and back muscles. During the Song dynasty, stirrups were added for ease of drawing and to mitigate damage to the bow. Alternatively, the bow could also be drawn by a belt claw attached to the waist, but this was done lying down, as was the case for all large crossbows. Winch-drawing was used for the large mounted crossbows, but evidence for its use in Chinese hand-crossbows is scant. Southeast Asia. Around the third century BC, King An Dương of Âu Lạc (spanning modern-day northern Vietnam and parts of southern China) commissioned a man named Cao Lỗ (or Cao Thông) to construct a crossbow, and christened it the "Saintly Crossbow of the Supernaturally Luminous Golden Claw" ("nỏ thần"), which could kill 300 men in one shot. According to historian Keith Taylor, the crossbow, along with the word for it, seems to have been introduced into China from Austroasiatic peoples in the south around the fourth century BC. However, this is contradicted by crossbow locks found in ancient Chinese Zhou dynasty tombs dating to the 600s BC. In 315 AD, Nu Wen taught the Chams how to build fortifications and use crossbows. The Chams would later give crossbows to the Chinese as presents on at least one occasion. The technology for crossbows with more than one prod was transferred from the Chinese to Champa, which used it in its invasion of the Khmer Empire's Angkor in 1177. 
When the Chams sacked Angkor they used the Chinese siege crossbow. The Chinese taught the Chams how to use crossbows and mounted archery in 1171. The Khmer also had double-bow crossbows mounted on elephants, which Michel Jacq-Hergoualc'h suggests were elements of Cham mercenaries in Jayavarman VII's army. The native Montagnards of Vietnam's Central Highlands were also known to have used crossbows, both as a tool for hunting and later as an effective weapon against the Viet Cong during the Vietnam War. Montagnard fighters armed with crossbows proved a highly valuable asset to the US Special Forces operating in Vietnam, and it was not uncommon for the Green Berets to integrate Montagnard crossbowmen into their strike teams. Ancient Greece. The earliest crossbow-like weapons in Europe probably emerged around the late 5th century BC when the "gastraphetes", an ancient Greek crossbow, appeared. The name means "belly-bow"; the concave withdrawal rest at one end of the stock was placed against the belly of the operator, who could press against it to withdraw the slider before attaching a string to the trigger and loading the bolt; this allowed it to store more energy than Greek bows. The device was described by the Greek author Heron of Alexandria in his "Belopoeica" ("On Catapult-making"), which draws on an earlier account by his compatriot, the engineer Ctesibius (fl. 285–222 BC). According to Heron, the "gastraphetes" was the forerunner of the later catapult, which places its invention some unknown time prior to 399 BC. The "gastraphetes" was a crossbow mounted on a stock divided into a lower and an upper section. The lower was a case fixed to the bow, and the upper was a slider with the same dimensions as the case. It was used in the Siege of Motya, a key Carthaginian stronghold in Sicily, in 397 BC, as described in the 1st century AD by Heron of Alexandria in his "Belopoeica". A crossbow machine, the oxybeles, was in use from 375 BC to around 340 BC, when the torsion principle replaced the tension crossbow mechanism. Other arrow-shooting machines such as the larger ballista and smaller scorpio from around 338 BC are torsion catapults and are not considered crossbows. Arrow-shooting machines ("katapeltai") are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. Arrow-shooting machines in action are reported from Philip II's siege of Perinthos in Thrace in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, presumably to house anti-personnel arrow shooters, as in Aigosthena. Ancient Rome. The late 4th century author Vegetius, in his "De Re Militari", describes "arcubalistarii" (crossbowmen) working together with archers and artillerymen. However, it is disputed whether arcuballistas were crossbows or torsion-powered weapons. The idea that the arcuballista was a crossbow rests on Vegetius referring to it separately from the "manuballista", which was torsion powered. Therefore, if the arcuballista was not like the manuballista, it may have been a crossbow. According to Vegetius these were well-known devices and hence he did not describe them in depth. 
Joseph Needham argued against the existence of Roman crossbowmen. On the other hand, Arrian's earlier "Ars Tactica", from about 136 AD, also mentions 'missiles shot not from a bow but from a machine', and that this machine was used on horseback while in full gallop. It is presumed that this was a crossbow. The only pictorial evidence of Roman arcuballistas comes from sculptural reliefs in Roman Gaul depicting them in hunting scenes. These are aesthetically similar to both the Greek and Chinese crossbows, but it is not clear what kind of release mechanism they used. Archaeological evidence suggests they were similar to the rolling nut mechanism of medieval Europe. Medieval Europe. There are essentially no references to the crossbow in Europe from the 5th until the 10th century. There is, however, a depiction of a crossbow as a hunting weapon on four Pictish stones from early medieval Scotland (6th to 9th centuries): St. Vigeans no. 1, Glenferness, Shandwick, and Meigle. The crossbow reappeared in 947 as a French weapon during the siege of Senlis and again in 984 at the siege of Verdun. Crossbows were used at the Battle of Hastings in 1066, and by the 12th century they had become common battlefield weapons. The earliest extant European crossbow remains were found at Lake Paladru and date to the 11th century. The crossbow superseded hand bows in many European armies during the 12th century, except in England, where the longbow was more popular. Later crossbows (sometimes referred to as arbalests), utilizing all-steel prods, were able to achieve power close (and sometimes superior) to longbows, but were more expensive to produce and slower to reload because they required the aid of mechanical devices such as the cranequin or windlass to draw back their extremely heavy bows. Usually these could shoot only two bolts per minute versus twelve or more with a skilled archer, often necessitating the use of a pavise (shield) to protect the operator from enemy fire. Along with polearm weapons made from farming equipment, the crossbow was also a weapon of choice for insurgent peasants such as the Taborites. Genoese crossbowmen were famous mercenaries hired throughout medieval Europe, while the crossbow also played an important role in anti-personnel defense of ships. Crossbows were eventually replaced in warfare by gunpowder weapons. Early hand cannons had slower rates of fire and much worse accuracy than contemporary crossbows, but the arquebus (which proliferated in the mid to late 15th century) matched their rate of fire while being far more powerful. The Battle of Cerignola in 1503 was won by Spain largely through the use of matchlock arquebuses, marking the first time a major battle had been won through the use of hand-held firearms. Later, similar competing tactics would feature harquebusiers or musketeers in formation with pikemen, pitted against cavalry firing pistols or carbines. While the military crossbow had largely been supplanted by firearms on the battlefield by 1525, the sporting crossbow in various forms remained a popular hunting weapon in Europe until the eighteenth century. The accuracy of late 15th-century crossbows compares well with modern handguns, based on records of shooting competitions in German cities. Crossbows saw irregular use throughout the rest of the 16th century; for example, Maria Pita's husband was killed by a crossbowman of the English Armada in 1589. Islamic world. There are no references to crossbows in Islamic texts earlier than the 14th century. 
Arabs in general were averse to the crossbow and considered it a foreign weapon. They called it "qaus al-rijl" (foot-drawn bow), "qaus al-zanbūrak" (bolt bow) and "qaus al-faranjīyah" (Frankish bow). Although Muslims did have crossbows, there seems to have been a split between eastern and western types. Muslims in Spain used the typical European trigger, while eastern Muslim crossbows had a more complex trigger mechanism. Mamluk cavalry used crossbows. Elsewhere and later. Oyumi were ancient Japanese artillery pieces that first appeared in the seventh century (during the Asuka period). According to Japanese records, the Oyumi was different from the handheld crossbow also in use during the same time period. A quote from a seventh-century source suggests that the Oyumi may have been able to fire multiple arrows at once: "the Oyumi were lined up and fired at random, the arrows fell like rain". A ninth-century Japanese artisan named Shimaki no Fubito claimed to have improved on a version of the weapon used by the Chinese; his version could rotate and fire projectiles in multiple directions. The last recorded use of the Oyumi was in 1189. In West and Central Africa, crossbows served as a scouting weapon and for hunting, with African slaves bringing this technology to natives in the Americas. In the Southern United States, the crossbow was used for hunting and warfare when firearms or gunpowder were unavailable because of economic hardship or isolation. In northern North America, light hunting crossbows were traditionally used by the Inuit. These are technologically similar to the African-derived crossbows, but have a different route of influence. Spanish conquistadors continued to use crossbows in the Americas long after they were replaced on European battlefields by firearms. Only in the 1570s did firearms become completely dominant among the Spanish in the Americas. The French and the British used a crossbow-like Sauterelle (French for grasshopper) in World War I. It was lighter and more portable than the Leach Trench Catapult, but less powerful, and could throw an F1 grenade or Mills bomb. The Sauterelle replaced the Leach Catapult in British service and was in turn replaced in 1916 by the 2-inch Medium Trench Mortar and Stokes mortar. Early in the war, actual crossbows were pressed into service in small numbers by both French and German troops to launch grenades. A range of crossbows were developed by the Allied powers during the Second World War for assassinations and covert operations, but none appear to have ever been used in the field. A small number of crossbows were built and used by Australian forces in the New Guinea campaign. Modern use. Hunting, leisure, and science. Crossbows are used for shooting sports and bowhunting in modern archery and for taking blubber biopsy samples in scientific research. In some countries such as Canada, they may be less heavily regulated than firearms, and thus more popular for hunting; some jurisdictions have bow- and/or crossbow-only seasons. Military and paramilitary. Crossbows are no longer used in battles, but they are still used in some military applications. For example, there is an undated photograph of Peruvian soldiers equipped with crossbows and rope to establish a zip-line in difficult terrain. In Brazil, the CIGS (Jungle Warfare Training Center) also trains soldiers in the use of crossbows. In the United States, SAA International Ltd manufactures a crossbow-launched version of the U.S. 
Army type-classified Launched Grapnel Hook (LGH), among other mine countermeasure solutions designed for the Middle Eastern theatre. It was evaluated as successful in Cambodia and Bosnia. It is used to probe for and detonate tripwire-initiated mines and booby traps at a distance. The concept is similar to the LGH device originally fired from a rifle, as a plastic retrieval line is attached. Reusable up to 20 times, the line can be reeled back in without exposing the user. The device is of particular use in tactical situations where noise discipline is important. In Europe, Barnett International sold crossbows to Serbian forces which, according to "The Guardian", were later used "in ambushes and as a counter-sniper weapon" against the Kosovo Liberation Army during the Kosovo War in the areas of Pec and Djakovica in southwestern Kosovo. Whitehall launched an investigation, though the Department of Trade and Industry established that, not being "on the military list", crossbows were not covered by export restrictions. Paul Beaver of Jane's Defence Publications commented that, "They are not only a silent killer, they also have a psychological effect". On 15 February 2008, Serbian Minister of Defence Dragan Sutanovac was pictured testing a Barnett crossbow during a public exercise of the Serbian Army's Special Forces in Nis, south of Belgrade. Special forces in both Greece and Turkey also continue to employ the crossbow, and Spain's Green Berets still use it as well. In Asia, some Chinese armed forces use crossbows, including the Snow Leopard Commando Unit of the People's Armed Police and the People's Liberation Army. One reason for this is the crossbow's ability to stop persons carrying explosives without risk of causing detonation. During the Xinjiang riots of July 2009, crossbows were used by security forces to suppress rioters. The Indian Navy's Marine Commando Force was equipped until the late 1980s with crossbows with cyanide-tipped bolts as an alternative to suppressed handguns. Comparison to conventional bows. With a crossbow, archers could release a draw force far in excess of what they could have handled with a bow. Furthermore, the crossbow could hold the tension indefinitely, whereas even the strongest longbowman could only hold a drawn bow for a short time. The ease of use of a crossbow allows it to be used effectively with little training, while other types of bows take far more skill to shoot accurately. The disadvantages are the greater weight and the clumsiness of reloading compared to a bow, as well as the slower rate of shooting and the lower efficiency of the acceleration system; on the other hand, a crossbow suffers less from elastic hysteresis, making it a more accurate weapon. Medieval European crossbows had a much smaller draw length than bows, so that for the same energy to be imparted to the projectile the crossbow had to have a much higher draw weight. A direct comparison between a fast hand-drawn replica crossbow and a longbow shows a 6:10 rate of shooting, or a 4:9 rate within 30 seconds, with otherwise comparable weapons. Legislation. Today, the crossbow often has a complicated legal status due to the possibility of lethal use and its similarities to both firearms and bows. While some jurisdictions treat crossbows in the same way as firearms, many others do not require any sort of license to own a crossbow. The legality of using a crossbow for hunting varies widely in different jurisdictions.
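The draw-length argument in the comparison above can be made concrete with a rough calculation. The sketch below uses the linear-spring idealization (stored energy of roughly half the draw force times the powerstroke); the specific force and powerstroke figures are illustrative assumptions, not historical measurements.

```python
# Rough comparison of stored energy in a longbow versus a crossbow,
# using the linear-spring idealization E ~= 0.5 * F * d.
# All figures are illustrative assumptions, not historical data.

def stored_energy(draw_force_n: float, powerstroke_m: float) -> float:
    """Approximate energy (in joules) stored at full draw."""
    return 0.5 * draw_force_n * powerstroke_m

longbow_energy = stored_energy(draw_force_n=670, powerstroke_m=0.65)  # ~150 lbf over ~65 cm
crossbow_stroke = 0.15                                                # ~15 cm powerstroke

# Draw force a crossbow would need in order to store the same energy:
required_force = 2 * longbow_energy / crossbow_stroke

print(f"Longbow stored energy: {longbow_energy:.0f} J")
print(f"Crossbow draw force for equal energy: {required_force:.0f} N "
      f"(~{required_force / 4.448:.0f} lbf)")
```

Under these assumptions the crossbow needs a draw force several times that of the longbow to match its stored energy, which is consistent with the heavy steel prods and mechanical spanning aids described earlier.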
6949
27823944
https://en.wikipedia.org/wiki?curid=6949
Carbamazepine
Carbamazepine, sold under the brand name Tegretol among others, is an anticonvulsant medication used in the treatment of epilepsy and neuropathic pain. It is used as an adjunctive treatment in schizophrenia along with other medications and as a second-line agent in bipolar disorder. Carbamazepine appears to work as well as phenytoin and valproate for focal and generalized seizures. It is not effective for absence or myoclonic seizures. Carbamazepine was discovered in 1953 by Swiss chemist Walter Schindler. It was first marketed in 1962. It is available as a generic medication. It is on the World Health Organization's List of Essential Medicines. In 2020, it was the 185th most commonly prescribed medication in the United States, with more than 2 million prescriptions. Photoswitchable analogues of carbamazepine have been developed to control its pharmacological activity locally and on demand using light (photopharmacology), with the purpose of reducing the adverse systemic effects of the drug. One of these light-regulated compounds (carbadiazocine, based on a bridged azobenzene or diazocine) has been shown to produce analgesia with noninvasive illumination "in vivo" in a rat model of neuropathic pain. Medical uses. Carbamazepine is typically used for the treatment of seizure disorders and neuropathic pain. It is used off-label as a second-line treatment for bipolar disorder and in combination with an antipsychotic in some cases of schizophrenia when treatment with a conventional antipsychotic alone has failed. However, evidence does not support its usage for schizophrenia. It is not effective for absence seizures or myoclonic seizures. Although carbamazepine may have a similar effectiveness (as measured by people continuing to use a medication) and efficacy (as measured by the medicine reducing seizure recurrence and improving remission) when compared to phenytoin and valproate, choice of medication should be evaluated on an individual basis, as further research is needed to determine which medication is most helpful for people with new-onset seizures. In the United States, carbamazepine is indicated for the treatment of epilepsy (including partial seizures, generalized tonic-clonic seizures and mixed seizures) and trigeminal neuralgia. Carbamazepine is the only medication approved by the Food and Drug Administration for the treatment of trigeminal neuralgia. As of 2014, a controlled-release formulation was available, for which there is tentative evidence of fewer side effects and unclear evidence as to whether there is a difference in efficacy. It has also been shown to improve symptoms of "typewriter tinnitus", a type of tinnitus caused by neurovascular compression of the cochleovestibular nerve. Adverse effects. In the US, the label for carbamazepine contains several safety warnings. Common adverse effects may include drowsiness, dizziness, headaches and migraines, ataxia, nausea, vomiting, and/or constipation. Alcohol use while taking carbamazepine may lead to enhanced depression of the central nervous system. Less common side effects may include an increased risk of seizures in people with mixed seizure disorders, abnormal heart rhythms, and blurry or double vision. Also, rare case reports of an auditory side effect have been made, whereby patients perceive sounds about a semitone lower than previously; this unusual side effect is usually not noticed by most people, and disappears after the person stops taking carbamazepine. Pharmacogenetics. 
Serious skin reactions such as Stevens–Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) due to carbamazepine therapy are more common in people with a particular human leukocyte antigen gene variant (allele), HLA-B*1502. Odds ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions to carbamazepine, such as the DRESS form of severe cutaneous reactions, among Japanese, Chinese, Korean, and European populations. It is suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting area of HLA-B*1502, triggering a lasting activation signal in CD8+ T cells and thus resulting in widespread cytotoxic reactions such as SJS/TEN. Interactions. Carbamazepine has a potential for drug interactions. Drugs that decrease the breakdown of carbamazepine or otherwise increase its levels include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when it is administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid and valnoctamide cause a build-up of the active metabolite, prolonging the effects of carbamazepine and delaying its excretion. Carbamazepine, as an inducer of cytochrome P450 enzymes, may increase the clearance of many drugs, decreasing their concentration in the blood to subtherapeutic levels and reducing their desired effects. Drugs that are more rapidly metabolized with carbamazepine include warfarin, lamotrigine, phenytoin, theophylline, valproic acid, many benzodiazepines, and methadone. Carbamazepine also increases the metabolism of the hormones in birth control pills and can reduce their effectiveness, potentially leading to unexpected pregnancies. Pharmacology. Mechanism of action. Carbamazepine is a sodium channel blocker. It binds preferentially to voltage-gated sodium channels in their inactive conformation, which prevents repetitive and sustained firing of an action potential. Carbamazepine has effects on serotonin systems, but the relevance to its antiseizure effects is uncertain. There is evidence that it is a serotonin releasing agent and possibly even a serotonin reuptake inhibitor. It has been suggested that carbamazepine can also block voltage-gated calcium channels, which will reduce neurotransmitter release. Pharmacokinetics. Carbamazepine is absorbed relatively slowly but almost completely after administration by mouth. The highest concentrations in the blood plasma are reached after 4 to 24 hours depending on the dosage form. Slow-release tablets result in about 15% lower absorption and 25% lower peak plasma concentrations than ordinary tablets, as well as less fluctuation in concentration, but not in significantly lower minimum concentrations. 
In the circulation, carbamazepine itself comprises 20 to 30% of total residues. The remainder is in the form of metabolites; 70 to 80% of residues are bound to plasma proteins. Concentrations in breast milk are 25 to 60% of those in the blood plasma. Carbamazepine is metabolized, mainly by CYP3A4, to carbamazepine-10,11-epoxide, which is itself pharmacologically active and contributes to the drug's anticonvulsant effects. The epoxide is then inactivated by microsomal epoxide hydrolase (mEH) to carbamazepine-"trans"-10,11-diol and further to its glucuronides. Other metabolites include various hydroxyl derivatives and carbamazepine-"N"-glucuronide. The plasma half-life is about 35 to 40 hours when carbamazepine is given as a single dose, but carbamazepine is a strong inducer of liver enzymes, and the plasma half-life shortens to about 12 to 17 hours when it is given repeatedly. The half-life can be further shortened to 9–10 hours by other enzyme inducers such as phenytoin or phenobarbital. About 70% is excreted via the urine, almost exclusively in the form of metabolites, and 30% via the faeces. History. Carbamazepine was discovered by chemist Walter Schindler at J.R. Geigy AG (now part of Novartis) in Basel, Switzerland, in 1953. It was first marketed as a drug to treat epilepsy in Switzerland in 1963 under the brand name Tegretol; its use for trigeminal neuralgia (formerly known as tic douloureux) was introduced at the same time. It has been used as an anticonvulsant and antiepileptic in the United Kingdom since 1965, and has been approved in the United States since 1968. Carbamazepine was studied for bipolar disorder throughout the 1970s. Society and culture. Environmental impact. Carbamazepine and its bio-transformation products have been detected in wastewater treatment plant effluent and in streams receiving treated wastewater. Field and laboratory studies have been conducted to understand the accumulation of carbamazepine in food plants grown in soil treated with sludge; results vary with respect to the concentrations of carbamazepine present in the sludge and the concentrations of sludge in the soil. Taking into account only studies that used concentrations commonly found in the environment, a 2014 review concluded that "the accumulation of carbamazepine into plants grown in soil amended with biosolids poses a "de minimis" risk to human health according to the approach." Brand names. Carbamazepine is available worldwide under many brand names including Tegretol.
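As a rough illustration of the autoinduction effect described in the pharmacokinetics section, the sketch below applies simple first-order (exponential) elimination to the half-lives quoted above; the chosen half-life values within the quoted ranges and the 24-hour time point are arbitrary assumptions for illustration, not dosing guidance.

```python
# Fraction of a carbamazepine dose remaining under simple first-order elimination,
# C(t) = C0 * 0.5 ** (t / t_half). Half-lives are taken from the ranges quoted above;
# the 24 h time point is an arbitrary illustration, not dosing guidance.

def fraction_remaining(hours: float, half_life_h: float) -> float:
    """Remaining fraction of the initial amount after `hours`, given a half-life."""
    return 0.5 ** (hours / half_life_h)

single_dose_half_life = 36.0   # within the ~35-40 h range after a single dose
induced_half_life = 14.0       # within the ~12-17 h range after repeated dosing

for label, t_half in [("single dose", single_dose_half_life),
                      ("repeated dosing", induced_half_life)]:
    print(f"{label:>15}: about {fraction_remaining(24, t_half):.0%} of the dose remains after 24 h")
```

Under these assumptions, roughly 63% of a dose would remain after 24 hours before induction, but only about 30% once autoinduction has shortened the half-life.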
6955
20542576
https://en.wikipedia.org/wiki?curid=6955
Chalcedonian Definition
The Chalcedonian Definition (also called the Chalcedonian Creed or the Definition of Chalcedon) is the declaration of the dyophysitism of Christ's nature, adopted at the Council of Chalcedon in AD 451. Chalcedon was an early centre of Christianity located in Asia Minor. The council was the fourth of the ecumenical councils that are accepted by Chalcedonian churches, which include the Catholic and Orthodox churches. It was the first council not to be recognised by any Oriental Orthodox church; for this reason these churches may be classified as Non-Chalcedonian. Context. The Council of Chalcedon was summoned to consider the Christological question in light of the "one-nature" view of Christ proposed by Eutyches, archimandrite at Constantinople, which prevailed at the Second Council of Ephesus in 449, sometimes referred to as the "Robber Synod". The Council first solemnly ratified the Nicene Creed adopted in 325 and that creed as amended by the First Council of Constantinople in 381. It also confirmed the authority of two synodical letters of Cyril of Alexandria and the letter of Pope Leo I to Flavian of Constantinople. Content. The full text of the definition reaffirms the decisions of the Council of Ephesus, the pre-eminence of the Creed of Nicaea (325) and the further definitions of the Council of Constantinople (381). The key section, emphasizing the double nature of Christ (human and divine), declares that he is to be acknowledged in two natures, "unconfusedly, unchangeably, indivisibly, inseparably". The Definition implicitly addressed a number of popular heretical beliefs. The reference to "co-essential with the Father" is directed at Arianism; "co-essential with us" is directed at Apollinarianism; "Two Natures unconfusedly, unchangeably" refutes Eutychianism; and "indivisibly, inseparably" and "Theotokos" are directed against Nestorianism. Oriental Orthodox dissent. The Chalcedonian Definition was written amid controversy between the Western and Eastern churches over the meaning of the Incarnation (see Christology). The Western church readily accepted the creed, but some Eastern churches did not. Political disturbances prevented the Armenian bishops from attending. Even though Chalcedon reaffirmed the Third Council's condemnation of Nestorius, the Non-Chalcedonians always suspected that the Chalcedonian Definition tended towards Nestorianism. This was in part because of the restoration of a number of bishops deposed at the Second Council of Ephesus, bishops who had previously indicated what appeared to be support of Nestorian positions. The Coptic Church of Alexandria dissented, holding to Cyril of Alexandria's preferred formula for the oneness of Christ's nature in the incarnation of God the Word as "out of two natures". Cyril's language is not consistent, and he may have countenanced the view that it is possible to contemplate in theory two natures after the incarnation, but the Church of Alexandria felt that the Definition should have stated that Christ be acknowledged "out of two natures" rather than "in two natures". The definition states that Christ is "acknowledged in two natures", which "come together into one person and one hypostasis". The formal definition of "two natures" in Christ was understood by the critics of the council at the time, and is understood by many historians and theologians today, to side with western and Antiochene Christology and to diverge from the teaching of Cyril of Alexandria, who always stressed that Christ is "one". However, a modern analysis of the sources of the creed (by A. 
de Halleux, in "Revue Theologique de Louvain" 7, 1976) and a reading of the acts, or proceedings, of the council show that the bishops considered Cyril the great authority and that even the language of "two natures" derives from him. This miaphysite position, historically characterised by Chalcedonian followers as "monophysitism", though this is denied by the dissenters, formed the basis for the distinction of the Coptic Church of Egypt and Ethiopia and the "Jacobite" churches of Syria, and the Armenian Apostolic Church (see Oriental Orthodoxy) from other churches.
6956
1299271759
https://en.wikipedia.org/wiki?curid=6956
Conservation law
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all. A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. From Noether's theorem, every differentiable symmetry leads to a local conservation law. Other conserved quantities can exist as well. Conservation laws as fundamental laws of nature. Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning "local" conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a "differentiable" symmetry of the Universe. For example, the local conservation of energy follows from the uniformity of time, and the local conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known. Exact laws. Physical conservation laws arising from symmetry in this way are said to be exact laws, or more precisely, have never been proven to be violated. Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however, being a discrete symmetry, Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined. Approximate laws. 
There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Global and local conservation laws. The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point "A" and simultaneously disappear from another separate point "B". For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at "A" and disappearance of the energy at "B" are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; either the energy at "A" will appear "before" or "after" the energy at "B" disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or "flux", of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous "local" changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a "local conservation" law. Local conservation also implies global conservation; that is, the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a "continuity equation", which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general. Differential forms. In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge $q$ is $\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,$ where $\nabla\cdot$ is the divergence operator, $\rho$ is the density of $q$ (amount per unit volume), $\mathbf{j}$ is the flux of $q$ (amount crossing a unit area in unit time), and $t$ is time. If we assume that the motion $\mathbf{u}$ of the charge is a continuous function of position and time, then $\mathbf{j} = \rho \mathbf{u}$, so that $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0.$ In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation: $y_t + A(y)\, y_x = 0,$ where the dependent variable $y$ is called the "density" of a "conserved quantity", $A(y)$ is called the "current Jacobian", and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case: $y_t + A(y)\, y_x = s$ is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable $y$ is called a "nonconserved quantity", and the inhomogeneous term $s$ is the "source", or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system. 
In one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the "advection" form: $y_t + a(y)\, y_x = 0,$ where the dependent variable $y(x,t)$ is called the density of the "conserved" (scalar) quantity, and $a(y)$ is called the current coefficient, usually corresponding to the partial derivative, with respect to the conserved quantity, of a current density $j(y)$ of the conserved quantity: $a(y) = j_y(y).$ In this case, since the chain rule applies: $j_x = j_y(y)\, y_x = a(y)\, y_x,$ the conservation equation can be put into the current density form: $y_t + j_x(y) = 0.$ In a space with more than one dimension the former definition can be extended to an equation that can be put into the form: $y_t + \mathbf{a}(y) \cdot \nabla y = 0,$ where the conserved quantity is $y(\mathbf{r},t)$, $\cdot$ denotes the scalar product, $\nabla$ is the nabla operator, here indicating a gradient, and $\mathbf{a}(y)$ is a vector of current coefficients, analogously corresponding to the divergence of a vector current density $\mathbf{j}(y)$ associated with the conserved quantity: $y_t + \nabla \cdot \mathbf{j}(y) = 0.$ This is the case for the continuity equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0.$ Here the conserved quantity is the mass, with density $\rho(\mathbf{r},t)$ and current density $\rho\mathbf{u}$, identical to the momentum density, while $\mathbf{u}(\mathbf{r},t)$ is the flow velocity. In the general case a conservation equation can also be a system of this kind of equations (a vector equation) in the form: $\mathbf{y}_t + \mathbf{A}(\mathbf{y}) \cdot \nabla \mathbf{y} = \mathbf{0},$ where $\mathbf{y}$ is called the "conserved" (vector) quantity, $\nabla\mathbf{y}$ is its gradient, $\mathbf{0}$ is the zero vector, and $\mathbf{A}(\mathbf{y})$ is called the Jacobian of the current density. In fact, as in the former scalar case, in the vector case $\mathbf{A}(\mathbf{y})$ usually corresponds to the Jacobian of a current density matrix $\mathbf{J}(\mathbf{y})$: $\mathbf{A}(\mathbf{y}) = \mathbf{J}_{\mathbf{y}}(\mathbf{y}),$ and the conservation equation can be put into the form: $\mathbf{y}_t + \nabla \cdot \mathbf{J}(\mathbf{y}) = \mathbf{0}.$ For example, this is the case for the Euler equations (fluid dynamics). In the simple incompressible case they are: $\nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \nabla s = \mathbf{0},$ where $\mathbf{u}$ is the flow velocity vector and $s$ is the specific pressure (pressure divided by density). It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively: $\mathbf{y} = \begin{pmatrix} 1 \\ \mathbf{u} \end{pmatrix}, \qquad \mathbf{J} = \begin{pmatrix} \mathbf{u} \\ \mathbf{u} \otimes \mathbf{u} + s\,\mathbf{I} \end{pmatrix},$ where $\otimes$ denotes the outer product. Integral and weak forms. Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to the weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space: $y_t + j_x(y) = 0,$ and by using Green's theorem, the integral form is: $\oint_{\partial\Omega} \left[ y\, dx - j(y)\, dt \right] = 0.$ In a similar fashion, for the scalar multidimensional space, the integral form is: $\oint_{\partial\Omega} \left[ y\, d^N r - j(y)\, dt \right] = 0,$ where the line integration is performed along the boundary of the domain, in an anticlockwise manner. Moreover, by defining a test function $\varphi(\mathbf{r},t)$ continuously differentiable both in time and space with compact support, the weak form can be obtained by pivoting on the initial condition. In 1-D space it is: $\int_0^\infty \int_{-\infty}^{\infty} \left[ \varphi_t\, y + \varphi_x\, j(y) \right] dx\, dt + \int_{-\infty}^{\infty} \varphi(x,0)\, y(x,0)\, dx = 0.$ In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives.
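To make the current-density form concrete, here is a minimal numerical sketch: a first-order upwind finite-volume scheme for the 1-D scalar conservation law $y_t + j_x(y) = 0$ with a linear flux $j(y) = a\,y$. The grid size, flux coefficient, and initial profile are arbitrary illustrative choices. Because each cell exchanges fluxes only with its neighbours, the discrete total $\sum_i y_i\,\Delta x$ is conserved to round-off on a periodic domain, mirroring the integral form above.

```python
import numpy as np

# First-order upwind finite-volume scheme for y_t + j(y)_x = 0 with j(y) = a*y, a > 0.
# Illustrative parameters; periodic boundaries keep the discrete total exactly conserved.
a = 1.0                      # constant current coefficient, j(y) = a*y
nx, L = 200, 1.0             # number of cells and domain length
dx = L / nx
dt = 0.5 * dx / a            # CFL number 0.5 for stability
x = (np.arange(nx) + 0.5) * dx
y = np.exp(-200.0 * (x - 0.3) ** 2)   # initial density profile (arbitrary)

initial_total = y.sum() * dx
for _ in range(400):
    flux = a * y                        # upwind flux at each cell's right face
    # Conservative update: y_i <- y_i - dt/dx * (F_{i+1/2} - F_{i-1/2})
    y = y - dt / dx * (flux - np.roll(flux, 1))

print(f"total before: {initial_total:.6f}  after: {y.sum() * dx:.6f}")
```

The same telescoping-flux property is what makes finite-volume discretizations the standard numerical treatment of conservation laws, including systems such as the Euler equations mentioned above.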
6960
1296239823
https://en.wikipedia.org/wiki?curid=6960
Car Talk
Car Talk is a metonym for the humorous work of "Click and Clack, the Tappet Brothers", Tom and Ray Magliozzi, on automobile repair. Originally, "Car Talk" was a radio show that ran on National Public Radio (NPR) from 1977 until October 2012, when the Magliozzi brothers retired. Since their retirement, the oeuvre has grown to include a website and a podcast of reruns that is currently hosted by Apple Podcasts, NPR Podcasts, and Stitcher. The "Car Talk" radio show was honored with a Peabody Award in 1992, and the Magliozzis were both inducted into the National Radio Hall of Fame in 2014 and the Automotive Hall of Fame in 2018. Premise. "Car Talk" was presented in the form of a call-in radio show: listeners called in with questions related to motor vehicle maintenance and repair. Most of the advice sought was diagnostic, with callers describing symptoms and demonstrating sounds of an ailing vehicle while the Magliozzis attempted to identify the malfunction over the telephone and give advice on how to fix it. While the hosts peppered their call-in sessions with jokes directed at both the caller and themselves, the Magliozzis were usually able to arrive at a diagnosis. However, when they were stumped, they offered an answer anyway, one they claimed was "unencumbered by the thought process", the official motto of the show. Edited reruns are carried on XM Satellite Radio (now Sirius XM) via both the Public Radio and NPR Now channels. The "Car Talk" theme music is "Dawggy Mountain Breakdown" by bluegrass artist David Grisman. Call-in procedure. Throughout the program, listeners were encouraged to dial the toll-free telephone number, 1-888-CAR-TALK (1-888-227-8255), which connected to a 24-hour answering service. Although the approximately 2,000 queries received each week were screened by the "Car Talk" staff, the questions were unknown to the Magliozzis in advance as "that would entail researching the right answer, which is what? ... Work." Features. The show originally consisted of two segments with a break in between but was changed to three segments. After the shift to the three-segment format, it became a running joke to refer to the last segment as "the third half" of the program. The show opened with a short comedy segment, typically jokes sent in by listeners, followed by eight call-in sessions. The hosts ran a contest called the "Puzzler", in which a riddle, sometimes car-related, was presented. The answer to the previous week's "Puzzler" was given at the beginning of the "second half" of the show, and a new "Puzzler" was given at the start of the "third half". The hosts instructed listeners to write answers addressed to "Puzzler Tower" on some non-existent or expensive object, such as a "$26 bill" or an advanced DSLR camera. This gag started as a suggestion that the answers be written "on the back of a $20 bill". A running gag concerned Tom's inability to remember the previous week's "Puzzler" without heavy prompting from Ray. During a tribute show following Tom's death in 2014 due to complications of Alzheimer's disease, Ray joked, "I guess he wasn't joking about not being able to remember the puzzler all those years." For each puzzler, one correct answer was chosen at random, with the winner receiving a $26 gift certificate to the "Car Talk" store, referred to as the "Shameless Commerce Division". It was originally $25, but was increased for inflation after a few years. 
Originally, the winner received a specific item from the store, but it soon changed to a gift certificate to allow the winner to choose the item they wanted (though Tom often made an item suggestion). A recurring feature was "Stump the Chumps," in which the hosts revisited a caller from a previous show to determine the accuracy and the effect, if any, of their advice. A similar feature began in May 2001, "Where Are They Now, Tommy?" It began with a comical musical theme with a sputtering, backfiring car engine and a horn as a backdrop. Tom then announced who the previous caller was, followed by a short replay of the essence of the previous call, preceded and followed by harp music often used in other audiovisual media to indicate recalling and returning from a dream. The hosts then greeted the previous caller, confirmed that they had not spoken since their previous appearance and asked them if there had been any influences on the answer they were about to relate, such as arcane bribes by the NPR staff. The repair story was then discussed, followed by a fanfare and applause if the Tappet Brothers' diagnosis was correct, or a wah-wah-wah music piece mixed with a car starter operated by a weak battery (an engine which wouldn't start) if the diagnosis was wrong. The hosts then thanked the caller for their return appearance. The brothers also had an official Animal-Vehicle Biologist and Wildlife Guru named Kieran Lindsey. She answered questions like "How do I remove a snake from my car?" and offered advice on how those living in cities and suburbs could reconnect with wildlife. They also would sometimes rely on Harvard University professors Wolfgang Rueckner and Jim E. Davis for questions concerning physics and chemistry, respectively. There were numerous appearances from NPR personalities, including Bob Edwards, Susan Stamberg, Scott Simon, Ray Suarez, Will Shortz, Sylvia Poggioli, and commentator and author Daniel Pinkwater. On one occasion, the show featured Martha Stewart as an in-studio guest, whom the Magliozzis twice during the segment referred to as "Margaret". Celebrities and public figures were featured as "callers" as well, including Geena Davis, Ashley Judd, Morley Safer, Gordon Elliott, former Major League Baseball pitcher Bill Lee, journalist Farhad Manjoo, and astronaut John M. Grunsfeld. Space program calls. Astronaut and engineer John Grunsfeld called into the show during Space Shuttle mission STS-81 in January 1997, in which "Atlantis" docked to the Mir space station. In this call he complained about the performance of his serial-numbered, Rockwell-manufactured "government van". To wit, it would run very loud and rough for about two minutes, quieter and smoother for another six and a half, and then the engine would stop with a jolt. He went on to state that the brakes of the vehicle, when applied, would glow red-hot, and that the vehicle's odometer displayed "about 60 million miles". This created some consternation for the hosts, until they noticed the audio of Grunsfeld's voice, being relayed from Mir via TDRS satellite, sounded similar to that of Tom Hanks in the then-recent film Apollo 13, after which they realized the call was from space and the government van in question was, in fact, the Space Shuttle. In addition to the on-orbit call, the Brothers once received a call asking advice on winterizing an electric car. When they asked what kind of car, the caller stated it was a "kit car", a $400 million "kit car". 
It was a joke call from NASA's Jet Propulsion Laboratory concerning the preparation of the Mars "Opportunity" rover for the oncoming Martian winter, during which temperatures drop to several hundred degrees below freezing. Click and Clack have also been featured in editorial cartoons, including one where a befuddled NASA engineer called them to ask how to fix the Space Shuttle. Humor. Humor and wisecracking pervaded the program. Tom and Ray are known for their self-deprecating humor, often joking about the supposedly poor quality of their advice and the show in general. They also commented at the end of each show: "Well, it's happened again—you've wasted another perfectly good hour listening to "Car Talk"." The phrase "our fair city" was introduced during a puzzler segment on the radio show. Ray presented a puzzler involving a well-dressed man who referred to a city as "your fair city." Tom found the phrase amusing and began using it humorously to refer to Cambridge, Massachusetts, where the show was based. This playful reference quickly became a running joke on the show, with the hosts frequently referring to Cambridge as "our fair city" in subsequent episodes. In another episode, Ray mentioned Cambridge, Massachusetts, at which point Tom reverently interjected with a tone of civic pride, "Our fair city". Ray also invariably mocked "Cambridge, MA" by pronouncing the "MA", the United States Postal Service's two-letter abbreviation for Massachusetts, as a word. This also became a running gag. Preceding each break in the show, one of the hosts led up to the network identification with a humorous take on a disgusted reaction of some usually famous person to hearing that identification. The full line went along the pattern of, for example, "And even though Roger Clemens stabs his radio with a syringe whenever he hears "us" say it, this is NPR: National Public Radio" (later just "... this is NPR"). At one point in the show, often after the break, Ray usually stated, "Support for this show is provided by," followed by an absurd fundraiser. The ending credits of the show started with thanks to the colorfully nicknamed actual staffers: producer Doug "the subway fugitive, not a slave to fashion, bongo boy frogman" Berman; John "Bugsy" Lawlor, "just back from the ...", each week a different eating event with rhyming foodstuff names; David "Calves of Belleville" Greene; Catherine "Frau Blücher" Fenollosa, whose name caused a horse to neigh and gallop (an allusion to a running gag in the movie "Young Frankenstein"); and Carly "High Voltage" Nix, among others. Following the real staff was a lengthy list of pun-filled fictional staffers and sponsors such as statistician Marge Innovera ("margin of error"), customer care representative Haywood Jabuzoff ("Hey, would ya buzz off"), meteorologist Claudio Vernight ("cloudy overnight"), optometric firm C. F. Eye Care ("see if I care"), Russian chauffeur Picov Andropov ("pick up and drop off"), Leo Tolstoy biographer Warren Peace ("War and Peace"), hygiene officer and chief of the Tokyo office Oteka Shawa ("oh, take a shower"), Swedish snowboard instructor Soren Derkeister ("sore in the keister"), law firm Dewey, Cheetham & Howe ("Do we cheat 'em? And how!"), Greek tailor Euripides Eumenades ("You rip-a these, you mend-a these"), cloakroom attendant Mahatma Coate ("My hat, my coat"), seat cushion tester Mike Easter ("my keister") and many, many others, usually concluding with Erasmus B. 
Dragon ("Her ass must be draggin), whose job title varied, but who was often said to be head of the show's working mothers' support group. They sometimes advised that "our chief counsel from the law firm of Dewey, Cheetham, & Howe is Hugh Louis Dewey, known to a group of people in Harvard Square as Huey Louie Dewey." (Huey, Louie, and Dewey were the juvenile nephews being raised by Donald Duck in "Walt Disney's Comics and Stories".) Guest accommodations were provided by The Horseshoe Road Inn ("the horse you rode in"). At the end of the show, Ray warns the audience, "Don't drive like my brother!" to which Tom replies, "And don't drive like "my" brother!" The original tag line was "Don't drive like a knucklehead!" There were variations such as, "Don't drive like my brother ..." "And don't drive like his brother!" and "Don't drive like my sister ..." "And don't drive like "my" sister!" The tagline was heard in the Pixar film "Cars", in which Tom and Ray voiced anthropomorphized vehicles (Rusty and Dusty Rust-eze, respectively a 1963 Dodge Dart and 1963 Dodge A100 van, as Lightning McQueen's racing sponsors) with personalities similar to their own on-air personae. Tom notoriously once owned a "convertible, green with large areas of rust!" Dodge Dart, known jokingly on the program by the faux-elegant name "Dartre". History. In 1977, radio station WBUR-FM in Boston scheduled a panel of local car mechanics to discuss car repairs on one of its programs, but only Tom Magliozzi showed up. He did so well that he was asked to return as a guest, and he invited his younger brother Ray (who was actually more of a car repair expert) to join him. The brothers were soon asked to host their own radio show on WBUR, which they continued to do every week. In 1986, NPR decided to distribute their show nationally. In 1989, the brothers started a newspaper column "Click and Clack Talk Cars" which, like the radio show, mixed serious advice with humor. King Features distributes the column. Ray Magliozzi continues to write the column, retitled "Car Talk", after his brother's death in 2014, knowing he would have wanted the advice and humor to continue. In 1992, "Car Talk" won a Peabody Award, saying "Each week, master mechanics Tom and Ray Magliozzi provide useful information about preserving and protecting our cars. But the real core of this program is what it tells us about human mechanics ... The insight and laughter provided by Messrs. Magliozzi, in conjunction with their producer Doug Berman, provide a weekly mental tune-up for a vast and ever-growing public radio audience." In 2005, Tom and Ray Magliozzi founded the Car Talk Vehicle Donation Program, "as a way to give back to the stations that were our friends and partners for decades — and whose programs we listen to every day." Since the Car Talk Vehicle Donation Program was founded, over 40,000 vehicles have been donated to support local NPR stations and programs, with over $40 million donated. Approximately 70% of the proceeds generated go directly toward funding local NPR affiliates and programs. As of 2012, it had 3.3 million listeners each week, on about 660 stations. On June 8, 2012, the brothers announced that they would no longer broadcast new episodes as of October. Executive producer Doug Berman said the best material from 25 years of past shows would be used to put together "repurposed" shows for NPR to broadcast. Berman estimated the archives contain enough for eight years' worth of material before anything would have to be repeated. 
Ray Magliozzi, however, would occasionally record new taglines and sponsor announcements that were aired at the end of the show. The show was inducted into the National Radio Hall of Fame in 2014. Ray Magliozzi hosted a special "Car Talk" memorial episode for his brother Tom after he died in November 2014. The "Best of Car Talk" episodes ended their weekly broadcast on NPR on September 30, 2017, although past episodes remained available online and via podcast. 120 of the 400 stations intended to continue airing the show. NPR announced that one option for the time slot would be its new news-talk program "It's Been a Minute". On June 11, 2021, it was announced that radio distribution of "Car Talk" would officially end on October 1, 2021, and that NPR would begin distribution of a twice-weekly podcast, 35–40 minutes in length, featuring early episodes of the show in sequential order. Hosts. The Magliozzis were long-time auto mechanics. Ray Magliozzi has a Bachelor of Science degree in humanities and science from MIT, while Tom had a Bachelor of Science degree in economics from MIT, an MBA from Northeastern University, and a DBA from the Boston University School of Management. The Magliozzis operated a do-it-yourself garage together in the 1970s, which became more of a conventional repair shop in the 1980s. Ray continued to have a hand in the day-to-day operations of the shop for years, while his brother Tom semi-retired, often joking on "Car Talk" about his distaste for doing "actual work". The show's offices were located near their shop at the corner of JFK Street and Brattle Street in Harvard Square, marked as "Dewey, Cheetham & Howe", the imaginary law firm to which they referred on-air. DC&H doubled as the business name of Tappet Brothers Associates, the corporation established to manage the business end of "Car Talk". Initially a joke, the company was incorporated after the show expanded from a single station to national syndication. The two were commencement speakers at MIT in 1999. Executive producer Doug Berman said in 2012, "The guys are culturally right up there with Mark Twain and the Marx Brothers. They will stand the test of time. People will still be enjoying them years from now. They're that good." Tom Magliozzi died on November 3, 2014, at age 77, due to complications from Alzheimer's disease. Adaptations. The show was the inspiration for the short-lived "The George Wendt Show", which briefly aired on CBS in the 1994–1995 season as a mid-season replacement. In July 2007, PBS announced that it had green-lit an animated adaptation of "Car Talk", to air in prime time in 2008. The show, titled "Click and Clack's As the Wrench Turns", is based on the adventures of the fictional "Click and Clack" brothers' garage at "Car Talk Plaza". The ten episodes aired in July and August 2008. "Car Talk: The Musical!!!" was written and directed by Wesley Savick, and composed by Michael Wartofsky. The adaptation was presented by Suffolk University, and opened on March 31, 2011, at the Modern Theatre in Boston, Massachusetts. The play was not officially endorsed by the Magliozzis, but they participated in the production, lending their voices to a central puppet character named "The Wizard of Cahs".
6962
7903804
https://en.wikipedia.org/wiki?curid=6962
Council of Chalcedon
The Council of Chalcedon was the fourth ecumenical council of the Christian Church. It was convoked by the Roman emperor Marcian. The council convened in the city of Chalcedon, Bithynia (modern-day Kadıköy, Istanbul, Turkey) from 8 October to 1 November 451. The council was attended by over 520 bishops or their representatives, making it the largest and best-documented of the first seven ecumenical councils. The principal purpose of the council was to re-assert the teachings of the ecumenical Council of Ephesus against the teachings of Eutyches and Nestorius, doctrines which viewed Christ's divine and human natures as separate (Nestorianism) or viewed Christ as solely divine (monophysitism). Agenda. The council's ruling marked a significant turning point in the Christological debates, but it also generated heated disagreement between the council and the Oriental Orthodox Church, which did not accept its conduct or proceedings. This disagreement would later cause the Oriental Orthodox Churches and the Chalcedonian churches to schism, and led to the council being regarded as "Chalcedon, the Ominous" by the Miaphysites. The council's other responsibilities included dealing with issues of ecclesiastical discipline and jurisdiction, and approving statements of belief such as the Creed of Nicaea (325), the Creed of Constantinople (381, subsequently known as the Nicene Creed), two letters of St. Cyril of Alexandria against Nestorius, and the Tome of Pope Leo I. The Church of the East, whose Christology may be called "non-Ephesine" because it did not accept the Council of Ephesus, finally ratified the Council of Chalcedon at the Synod of Mar Aba I in 544. Through the 1994 Common Christological Declaration between the Chalcedonian Catholic Church and the Nestorian Assyrian Church of the East, the two churches accepted and confessed the same doctrine of Christology. Background. In 325, the first ecumenical council (First Council of Nicaea) determined that Jesus Christ was God, "consubstantial" with the Father, and rejected the Arian contention that Jesus was a created being. This was reaffirmed at the First Council of Constantinople (381) and the First Council of Ephesus (431). Eutychian controversy. About two years after Cyril of Alexandria's death in 444, an aged monk from Constantinople named Eutyches began teaching a subtle variation on the traditional Christology in an attempt to stop what he saw as a new outbreak of Nestorianism. He claimed to be a faithful follower of Cyril's teaching, which was declared orthodox in the Union of 433. Cyril had taught that "There is only one "physis", since it is the Incarnation, of God the Word." Pope Leo I argued that, owing to potential ambiguity between the various Greek terms and their Latin equivalents, in addition to the energy and imprudence with which he asserted his opinions, Eutyches was misunderstood, and many believed that he was advocating Docetism, a sort of reversal of Arianism – where Arius had denied the consubstantial divinity of Jesus, Eutyches seemed to be denying that Jesus was fully human. Pope Leo I wrote that Eutyches' error seemed to be more from a lack of skill than from malice. Eutyches had been accusing various personages of covert Nestorianism. In November 448, Flavian, Bishop of Constantinople, held a local synod regarding a point of discipline connected with the province of Sardis. 
At the end of the session of this synod, one of those inculpated, Eusebius, Bishop of Dorylaeum, brought a countercharge of heresy against the archimandrite. Eusebius demanded that Eutyches be removed from office. Flavian preferred that the bishop and the archimandrite sort out their differences, but as his suggestion went unheeded, Eutyches was summoned to clarify his position regarding the nature of Christ. Eventually Eutyches reluctantly appeared, but his position was considered to be theologically unsophisticated, and the synod, finding his answers unresponsive, condemned and exiled him. Flavian sent a full account to Pope Leo I. Although it had been accidentally delayed, Leo wrote a compendious explanation of the whole doctrine involved and sent it to Flavian as a formal and authoritative decision of the question. Eutyches appealed against the decision, labeling Flavian a Nestorian, and received the support of Pope Dioscorus I of Alexandria. John Anthony McGuckin sees an "innate rivalry" between the Sees of Alexandria and Constantinople. Dioscorus, imitating his predecessors in assuming a primacy over Constantinople, held his own synod which annulled the sentence of Flavian, and absolved Eutyches after he claimed to have repented. Latrocinium of Ephesus. Through the influence of the court official Chrysaphius, godson of Eutyches, the competing claims between the Patriarchs of Constantinople and Alexandria led Emperor Theodosius II to call a council, which was held in Ephesus in 449 with Dioscorus presiding. Pope Leo sent four legates to represent him and expressed his regret that the shortness of the notice must prevent the presence of any other bishop of the West. He provided his legates, one of whom died en route, with a letter addressed to Flavian explaining Rome's position in the controversy. Leo's letter, now known as Leo's Tome, confessed that Christ had two natures, and was not of or from two natures. On August 8, 449, the Second Council of Ephesus began its first session. The Acts of the first session of this synod were read at the Council of Chalcedon in 451 and are thus preserved; the remainder of the Acts (the first session being wanting) are known through a Syriac translation by a Miaphysite monk, written in the year 535 and published from a manuscript in the British Museum. Nonetheless, there are somewhat different interpretations as to what actually transpired. The question before the council, by order of the emperor, was whether Flavian, in a synod held by him at Constantinople in November 448, had justly deposed and excommunicated the Archimandrite Eutyches for refusing to admit two natures in Christ. Dioscorus began the council by banning all members of the November 448 synod which had deposed Eutyches from sitting as judges. He then introduced Eutyches, who publicly professed that while Christ had two natures before the incarnation, the two natures had merged to form a single nature after the incarnation. Of the 130 assembled bishops, 111 voted to rehabilitate Eutyches. Throughout these proceedings, Hilary (one of the papal legates) repeatedly called for the reading of Leo's Tome, but was ignored. The Oriental Orthodox Church has a very different account of the Second Council of Ephesus. According to this account, Pope Dioscorus requested that the reading of Leo's Tome be deferred, as it was not seen as necessary to start with and could be read later; this was seen as a rebuke to the representatives of the Church of Rome, since the Tome was not read from the start. 
Not wanting the letter to be read aloud for this reason, Dioscorus was then forced to depose Flavian of Constantinople and Eusebius of Dorylaeum on the grounds that they taught the Word as having two distinct hypostases, in contrast to Cyril's teachings. According to Chalcedonian accounts, when Flavian and Hilary objected, Dioscorus called for a mob to enter the church, which assaulted Flavian as he clung to the altar; other accounts blame one monk, Barsauma, and still others blame Dioscorus himself. Flavian died three days later. Other sources, however, implicate Empress Pulcheria and Anatolius of Constantinople, as it would not be realistic for a murder witnessed by many bishops and patriarchs at a major council to go unmentioned, and Flavian himself wrote letters to Leo "after" the council that make no mention of such events. The papal legates refused to attend the second session, at which several more bishops were deposed, including Ibas of Edessa, Irenaeus of Tyre, Domnus of Antioch, and Theodoret. Dioscorus then had Cyril of Alexandria's Twelve Anathemas declared orthodox with the intent of condemning any confession other than Cyril's one-nature formula. According to a letter to the Empress Pulcheria collected among the letters of Leo I, Hilary apologized for not delivering to her the pope's letter after the synod, but owing to Dioscorus, who tried to hinder his going either to Rome or to Constantinople, he had great difficulty in making his escape in order to bring to the pontiff the news of the result of the council. Hilary, who later became pope and dedicated an oratory in the Lateran Basilica in thanks for his life, managed to escape from Constantinople and brought news of the council to Leo, who immediately dubbed it a "synod of robbers" (Latrocinium) and refused to accept its pronouncements. Claims that bishops had been forced to approve the council's actions were challenged by Pope Dioscorus and the Egyptian bishops at Chalcedon. Convocation and session. The situation continued to deteriorate, with Leo demanding the convocation of a new council and Emperor Theodosius II refusing to budge, all the while appointing bishops in agreement with Dioscorus. All this changed dramatically with the Emperor's death and the elevation of Marcian to the imperial throne. To resolve the simmering tensions, Marcian announced his intention to hold a new council to set aside the 449 Second Council of Ephesus, which was named the "Latrocinium" or "Robber Council" by Pope Leo. Pulcheria, the sister of Theodosius, may have influenced this decision, or even made the convening of a council a requirement during her negotiations with Aspar, the magister militum, to marry Marcian. Leo had pressed for the council to take place in Italy, but Emperor Marcian instead called for it to convene at Chalcedon, because it was closer to Constantinople and would thus allow him to respond quickly to any events along the Danube, which was being raided by the Huns under Attila. The council opened on 8 October 451. Marcian had the bishops deposed by Dioscorus returned to their dioceses and had the body of Flavian brought to the capital to be buried honorably. The Emperor asked Leo to preside over the council, but Leo again chose to send legates in his place. This time, Bishops Paschasinus of Lilybaeum and Julian of Cos and two priests, Boniface and Basil, represented the Western church at the council. 
The council was attended by about 520 bishops or their representatives and was the largest and best-documented of the first seven ecumenical councils. All the sessions were held in the church of St. Euphemia, Martyr, outside the city and directly opposite Constantinople. As to the number of sessions held by the Council of Chalcedon, there is a great discrepancy among the various texts of the Acts, as well as among the ancient historians of the council; either the respective manuscripts were incomplete, or the historians passed over in silence several sessions held for secondary purposes. According to the deacon Rusticus, there were in all sixteen sessions; this division is commonly accepted by scholars, including Karl Josef von Hefele, historian of the councils. If all the separate meetings were counted, there would be twenty-one sessions; several of these meetings, however, are considered supplementary to preceding sessions. Paschasinus refused to give Dioscorus (who had excommunicated Leo leading up to the council) a seat at the council, and as a result Dioscorus was moved to the nave of the church. Paschasinus further ordered the reinstatement of Theodoret and that he be given a seat, but this move caused such an uproar among the council fathers that Theodoret also sat in the nave, though he was given a vote in the proceedings, which began with a trial of Dioscorus. Ibas of Edessa was also declared to be orthodox according to the contents of his letter, which was read into the minutes of the Council of Chalcedon. Marcian wished to bring proceedings to a speedy end, and asked the council to make a pronouncement on the doctrine of the Incarnation before continuing the trial. The council fathers, however, felt that no new creed was necessary, and that the doctrine had been laid out clearly in Leo's Tome. They were also hesitant to write a new creed, as the First Council of Ephesus had forbidden the composition or use of any new creed. Aetius, deacon of Constantinople, then read Cyril's letter to Nestorius, and a second letter to John of Antioch. The bishops responded, "We all so believe: Pope Leo thus believes ... we all thus believe. As Cyril so believe we, all of us: eternal be the memory of Cyril: as the epistles of Cyril teach such is our mind, such has been our faith: such is our faith: this is the mind of Archbishop Leo, so he believes, so he has written." Beronician, clerk of the consistory, then read from a book handed to him by Aetius the synodical letter of Leo to Flavian (Leo's Tome). After the reading of the letter, the bishops cried out: "This is the faith of the fathers, this is the faith of the Apostles. So we all believe, thus the orthodox believe. ... Peter has spoken thus through Leo. So taught the Apostles. Piously and truly did Leo teach, so taught Cyril. Everlasting be the memory of Cyril. Leo and Cyril taught the same thing, ... This is the true faith ... This is the faith of the fathers. Why were not these things read at Ephesus?" However, during the reading of Leo's Tome, three passages were challenged as being potentially Nestorian, and their orthodoxy was defended by using the writings of Cyril. Due to such concerns, the council decided to adjourn and appoint a special committee to investigate the orthodoxy of Leo's Tome, judging it by the standard of Cyril's Twelve Chapters, as some of the bishops present had raised concerns about their compatibility. This committee was headed by Anatolius, Patriarch of Constantinople, and was given five days to study the matter carefully. 
The committee unanimously decided in favor of the orthodoxy of Leo, determining that what he said was compatible with the teaching of Cyril. A number of other bishops also entered statements to the effect that they believed that Leo's Tome was not in contradiction with the teaching of Cyril. The council continued with Dioscorus' trial, but he refused to appear before the assembly; historical accounts from the Oriental Orthodox Church note that Dioscorus was being held in solitary confinement. As a result, he was condemned, though by an underwhelming number of bishops (more than half of those present for the previous sessions did not attend his condemnation), and all of his decrees were declared null. Empress Pulcheria (Marcian's wife) told Dioscorus, "In my father's time, there was a man who was stubborn (referring to St. John Chrysostom) and you are aware of what was made of him", to which Dioscorus famously responded, "And you may recall that your mother prayed at his tomb, as she was bleeding of sickness". Pulcheria is said to have slapped Dioscorus in the face, breaking some of his teeth, and ordered the guards to confine him, which they did, pulling out his beard hair. Dioscorus is said to have put the teeth and hair in a box and sent them back to his Church in Alexandria, noting "this is the fruit of my faith." Marcian responded by exiling Dioscorus. All of the bishops were then asked to sign their assent to the Tome, but a group of thirteen Egyptians refused, saying that they would assent to "the traditional faith". As a result, the Emperor's commissioners decided that a "credo" would indeed be necessary and presented a text to the fathers. No consensus was reached. Paschasinus threatened to return to Rome to reassemble the council in Italy. Marcian agreed, saying that if a clause were not added to the "credo", the bishops would have to relocate. The committee then sat in the oratory of the most holy martyr Euphemia and afterwards reported a definition of faith which, while teaching the same doctrine, was not the Tome of Leo. Although it could be reconciled with Cyril's Formula of Reunion, it was not compatible in its wording with Cyril's Twelve Anathemas. In particular, the third anathema reads: "If anyone divides in the one Christ the hypostases after the union, joining them only by a conjunction of dignity or authority or power, and not rather by a coming together in a union by nature, let him be anathema." This appeared to some to be incompatible with Leo's definition of two natures hypostatically joined. However, the council would determine (with the exception of 13 Egyptian bishops) that this was an issue of wording and not of doctrine; a committee of bishops appointed to study the orthodoxy of the Tome using Cyril's letters (which included the twelve anathemas) as their criteria unanimously determined it to be orthodox, and the council, with few exceptions, supported this. It approved the creed of Nicaea (325), the creed of Constantinople (381; subsequently known as the Nicene Creed), two letters of Cyril against Nestorius, which insisted on the unity of divine and human persons in Christ, and the Tome of Pope Leo I confirming two distinct natures in Christ. Acceptance. The dogmatic definitions of the council are recognized as normative by the Eastern Orthodox and Catholic Churches, as well as by certain other Western Churches; also, most Protestants agree that the council's teachings regarding the Trinity and the Incarnation are orthodox doctrine which must be adhered to. 
The council, however, is rejected by the Oriental Orthodox Churches, which teach instead that "The Lord Jesus Christ is God the Incarnate Word. He possesses the perfect Godhead and the perfect manhood. His fully divine nature is united with His fully human nature yet without mixing, blending or alteration." The Oriental Orthodox contend that this teaching has been misunderstood as monophysitism, an appellation with which they strongly disagree; nevertheless, they refuse to accept the decrees of the council. Many Anglicans and most Protestants consider it to be the last authoritative ecumenical council. These churches, along with Martin Luther, hold that both conscience and scripture preempt doctrinal councils and generally agree that the conclusions of later councils were unsupported by or contradictory to scripture. Results. The Council of Chalcedon issued the Chalcedonian Definition, which repudiated the notion of a single nature in Christ and declared that he has two natures in one person and hypostasis. It also insisted on the completeness of his two natures: Godhead and manhood. The council also issued 27 disciplinary canons governing church administration and authority. In a further decree, later known as canon 28, the bishops declared that the See of Constantinople (New Rome) had patriarchal status with "equal privileges" to the See of Rome. No reference was made in canon 28 to the bishops of Rome or Constantinople deriving their authority from being successors to Peter or Andrew respectively; instead, the stated reason in the actual text of the canon for granting these sees their status was the importance of their cities as major cities of the empire of the time. Confession of Chalcedon. The Confession of Chalcedon provides a clear statement on the two natures of Christ, human and divine. The full text of the definition reaffirms the decisions of the Council of Ephesus and the pre-eminence of the Creed of Nicaea (325). It also canonises as authoritative two of Cyril of Alexandria's letters and the Tome of Leo, written against Eutyches and sent to Archbishop Flavian of Constantinople in 449. Canons. The work of the council was completed by a series of 30 disciplinary canons. Canon 28 grants equal privileges to Constantinople as to Rome because Constantinople is the New Rome, as renewed by canon 36 of the Quinisext Council. Pope Leo declared canon 28 null and void and approved only the canons of the council pertaining to faith. Initially, the council indicated its understanding that Pope Leo's ratification was necessary for the canon to be binding, writing, "we have made still another enactment which we have deemed necessary for the maintenance of good order and discipline, and we are persuaded that your Holiness will approve and confirm our decree. ... We are confident you will shed upon the Church of Constantinople a ray of that Apostolic splendor which you possess, for you have ever cherished this church, and you are not at all niggardly in imparting your riches to your children. ... Vouchsafe then, most Holy and most Blessed Father, to accept what we have done in your name, and in a friendly spirit. For your legates have made a violent stand against it, desiring, no doubt, that this good deed should proceed, in the first instance, from your provident hand. 
But we, wishing to gratify the pious Christian emperors, and the illustrious Senate, and the capital of the empire, have judged that an Ecumenical Council was the fittest occasion for effecting this measure. Hence we have made bold to confirm the privileges of the aforementioned city (tharresantes ekurosamen) as if your holiness had taken the initiative, for we know how tenderly you love your children, and we feel that in honoring the child we have honored its parent. ... We have informed you of everything with a view of proving our sincerity, and of obtaining for our labors your confirmation and consent." Following Leo's rejection of the canon, Bishop Anatolius of Constantinople conceded, "Even so, the whole force of confirmation of the acts was reserved for the authority of Your Blessedness. Therefore, let Your Holiness know for certain that I did nothing to further the matter, knowing always that I held myself bound to avoid the lusts of pride and covetousness." However, the Canon has since been viewed as valid by the Eastern Orthodox Church. According to some ancient Greek collections, canons 29 and 30 are attributed to the council: canon 29, which states that an unworthy bishop cannot be demoted but can be removed, is an extract from the minutes of the 19th session; canon 30, which grants the Egyptians time to consider their rejection of Leo's "Tome", is an extract from the minutes of the fourth session. In all likelihood an official record of the proceedings was made either during the council itself or shortly afterwards. The assembled bishops informed the pope that a copy of all the "Acta" would be transmitted to him; in March 453, Pope Leo commissioned Julian of Cos, then at Constantinople, to make a collection of all the Acts and translate them into Latin. Most of the documents, chiefly the minutes of the sessions, were written in Greek; others, e.g. the imperial letters, were issued in both languages; others, again, e.g. the papal letters, were written in Latin. Eventually nearly all of them were translated into both languages. The status of the sees of Constantinople and Jerusalem. The status of Jerusalem. The metropolitan of Jerusalem was given independence from the metropolitan of Antioch and from any other higher-ranking bishop, given what is now known as autocephaly, in the council's seventh session whose "Decree on the Jurisdiction of Jerusalem and Antioch" contains: "the bishop of Jerusalem, or rather the most holy Church which is under him, shall have under his own power the three Palestines". This led to Jerusalem becoming a patriarchate, one of the five patriarchates known as the pentarchy, when the title of "patriarch" was created in 531 by Justinian. The Oxford Dictionary of the Christian Church, s.v. "patriarch (ecclesiastical)", also calls it "a title dating from the 6th century, for the bishops of the five great sees of Christendom". Merriam-Webster's Encyclopedia of World Religions, says: "Five patriarchates, collectively called the pentarchy, were the first to be recognized by the legislation of the emperor Justinian (reigned 527–565)". The status of Constantinople. In a canon of disputed validity, the Council of Chalcedon also elevated the See of Constantinople to a position "second in eminence and power to the Bishop of Rome". The Council of Nicaea in 325 had noted that the Sees of Rome, Alexandria and Antioch should have primacy over other, lesser dioceses. 
At the time, the See of Constantinople was not yet of ecclesiastical prominence, but its proximity to the Imperial court gave rise to its importance. The Council of Constantinople in 381 modified the situation somewhat by placing Constantinople second in honor, above Alexandria and Antioch, stating in Canon III that "the bishop of Constantinople ... shall have the prerogative of honor after the bishop of Rome; because Constantinople is New Rome". In the early 5th century, this status was challenged by the bishops of Alexandria, but the Council of Chalcedon confirmed it in Canon XXVIII. In making their case, the council fathers argued that tradition had accorded "honor" to the see of older Rome because it was the first imperial city. Accordingly, "moved by the same purposes" the fathers "apportioned equal prerogatives to the most holy see of new Rome" because "the city which is honored by the imperial power and senate and enjoying privileges equaling older imperial Rome should also be elevated to her level in ecclesiastical affairs and take second place after her". The framework for allocating ecclesiastical authority advocated by the council fathers mirrored the allocation of imperial authority in the later period of the Roman Empire. The Eastern position could be characterized as being political in nature, as opposed to a doctrinal view. In practice, all Christians East and West addressed the papacy as the See of Peter and Paul or the Apostolic See rather than the See of the Imperial Capital. Rome understands this to indicate that its precedence has always come from its direct lineage from the apostles Peter and Paul rather than its association with Imperial authority. After the passage of canon 28, Rome filed a protest against the reduction of honor given to Antioch and Alexandria. However, fearing that withholding Rome's approval would be interpreted as a rejection of the entire council, in 453 the pope confirmed the council's canons while declaring the 28th null and void. This position would later change, with the canon eventually being accepted in 1215 at the Fourth Council of the Lateran. Consequences: Chalcedonian Schism. The near-immediate result of the council was a major schism. The bishops who were uneasy with the language of Pope Leo's Tome repudiated the council, saying that the acceptance of two "physes" was tantamount to Nestorianism. Dioscorus of Alexandria advocated miaphysitism and had dominated the Council of Ephesus. A schism occurred, whereby the churches that rejected Chalcedon in favor of Ephesus broke off from the rest of the Eastern Church, the most significant among them being the Church of Alexandria, today known as the Coptic Orthodox Church. The rise of so-called monophysitism in the East (as it was branded by the West) was incorrectly believed to be led by the Copts of Egypt. This must be regarded as the outward expression of the growing nationalist trends in that province against the gradual intensification of Byzantine imperialism, soon to reach its consummation during the reign of Emperor Justinian. However, the Coptic Orthodox Church is miaphysite, meaning that it believes Jesus Christ to be both fully human and fully divine in one person, without mingling, confusion or alteration. In every liturgy to this day, the Copts recite "Christ's divinity parted not from His humanity, not for a single moment nor a twinkling of an eye". 
In Egypt, opponents of the council vastly outnumbered its adherents, with some 30,000 Greeks of Chalcedonian persuasion ranged against roughly five million Coptic non-Chalcedonians. A significant effect on the Orthodox Christians in Egypt was a series of persecutions by the Roman (later Byzantine) empire, which forced followers of the Oriental Orthodox Church to claim allegiance to Leo's Tome and Chalcedon. This led to the martyrdom, persecution and death of thousands of Egyptian saints and bishops until the Arab conquest of Egypt. As a result, the Council of Chalcedon is referred to as "Chalcedon, the Ominous" among Coptic Egyptians, because it led to Christians persecuting other Christians for the first time in history. Coptic Orthodox Christians continue to distinguish themselves from followers of Chalcedon to this day. Although the theological differences are seen as limited (if not non-existent), it is politics, the subsequent persecutions and the power struggles of the Roman Empire that may have led to the schism, or at least contributed significantly to amplifying it through the centuries. The divisions in the Church weakened the Byzantine Empire's eastern provinces and helped ease the subsequent Sassanian and Arab invasions. Justinian I attempted to bring those monks who still rejected the decision of the Council of Chalcedon into communion with the Chalcedonian church. The exact time of this event is unknown, but it is believed to have been between 535 and 548. Abraham of Farshut was summoned to Constantinople and chose to bring with him four monks. Upon arrival, Justinian summoned them and informed them that they could either accept the decision of the council or lose their positions. Abraham refused to entertain the idea. Theodora tried to persuade Justinian to change his mind, seemingly to no avail. Abraham himself stated in a letter to his monks that he preferred to remain in exile rather than subscribe to a faith contrary to that of Athanasius. They were not alone: the non-Chalcedonian churches constitute Oriental Orthodoxy, with the Church of Alexandria as their primus inter pares. Only in recent years has a degree of rapprochement between Chalcedonian Christians and the Oriental Orthodox been seen. Oriental Orthodox view. Many Oriental Orthodox theologians, saints and modern clergy have affirmed that there is a real difference between the two confessions of faith. Modern Coptic Orthodox Christians profess to be Miaphysites. This is reflected in the daily liturgical prayer "Christ's divinity parted not from His humanity, not for a single moment nor a twinkling of an eye". Liturgical commemorations. The Eastern Orthodox Church commemorates the "Holy Fathers of the 4th Ecumenical Council, who assembled in Chalcedon" on the Sunday on or after July 13; in some places (e.g., Russia), however, that date is instead a feast of the Fathers of the First Six Ecumenical Councils. For both of the above, complete propers have been composed and are found in the Menaion. For the former, "The Office of the 630 Holy and God-bearing Fathers of the 4th ... Summoned against the Monophysites Eftyches and Dioskoros" (despite the latter being miaphysite rather than monophysite) was composed in the middle of the 14th century by Patriarch Philotheus I of Constantinople. This contains numerous hymns expounding the council's teaching, commemorating its leaders, whom it praises and whose prayers it implores, and naming its opponents pejoratively, e.g., "Come let us clearly reject the errors of ... 
but praise in divine songs the fourth council of pious fathers." For the latter, the propers are titled "We Commemorate Six Holy Ecumenical Councils". This repeatedly damns those anathematized by the councils with such rhetoric as "Christ-smashing deception enslaved Nestorius" and "mindless Arius and ... is tormented in the fires of Gehenna", while the fathers of the councils are praised and the dogmas of the councils are expounded in the hymns therein.
6963
3632083
https://en.wikipedia.org/wiki?curid=6963
Canadian football
Canadian football, or simply football, is a sport in Canada in which two teams of 12 players each compete on a field 110 yards long and 65 yards wide, attempting to advance a pointed oval-shaped ball into the opposing team's end zone. Canadian and American football have shared origins and are closely related, but have some major differences. Canadian football is played with three downs, goalposts at the front of the end zone, and twelve players on each side; American football has four downs, goalposts at the back of the end zone, and eleven players on each side. Canadian football is also played on a wider and longer field, with deeper end zones. Rugby football, from which Canadian football developed, was brought to Canada by British immigrants, possibly as early as 1824, and was first recorded there in the early 1860s. Both the Canadian Football League (CFL), the sport's top professional league, and Football Canada, the governing body for amateur play, trace their roots to 1880 and the founding of the Canadian Rugby Football Union. The CFL is the most popular and only major professional Canadian football league. Its championship game, the Grey Cup, is one of Canada's biggest sporting events, attracting a large television audience. Canadian football is also played at the high school, junior, collegiate, and semi-professional levels: the Canadian Junior Football League and Quebec Junior Football League are for players aged 18–22, post-secondary institutions compete in U Sports football for the Vanier Cup, and senior amateurs play in leagues such as the Alberta Football League. The Canadian Football Hall of Fame is in Hamilton, Ontario. History. The first documented football match was a practice game played on November 9, 1861, at University College, University of Toronto (just west of Queen's Park). One of the participants in the game involving University of Toronto students was Sir William Mulock, later chancellor of the school. A football club was formed at the university soon afterward, although its rules of play at this stage are unclear. The first written account of a game played was on October 15, 1862, on the Montreal Cricket Grounds. It was between the First Battalion Grenadier Guards and the Second Battalion Scots Fusilier Guards, and resulted in a win for the Grenadier Guards by 3 goals and 2 rouges to nothing. In 1864, at Trinity College, Toronto, F. Barlow Cumberland, Frederick A. Bethune, and Christopher Gwynn, one of the founders of Milton, Massachusetts, devised rules based on rugby football. The game gradually gained a following, with the Hamilton Football Club (later the Hamilton Tiger-Cats) formed on November 3, 1869. The Montreal Football Club was formed on April 8, 1872, the Toronto Argonaut Football Club on October 4, 1873, and the Ottawa Football Club (later the Ottawa Rough Riders) on September 20, 1876. Of those clubs, only the Toronto club is still in continuous operation today. This rugby-football soon became popular at Montreal's McGill University, which challenged Harvard University to a two-game series in 1874, played using a hybrid of English rugby devised by McGill. The first attempt to establish a proper governing body and to adopt the then-current set of rugby rules was the Foot Ball Association of Canada, organized on March 24, 1873. It was followed by the Canadian Rugby Football Union (CRFU), founded June 12, 1880, which included teams from Ontario and Quebec. 
Later, both the Ontario Rugby Football Union and the Quebec Rugby Football Union (ORFU and QRFU respectively) were formed (January 1883), followed by the Interprovincial Rugby Football Union (1907) and the Western Interprovincial Football Union (1936) (IRFU and WIFU respectively). The CRFU reorganized into an umbrella organization, forming the Canadian Rugby Union (CRU), in 1891. The immediate forerunner to the current Canadian Football League was established in 1956 when the IRFU and WIFU formed an umbrella organization, the Canadian Football Council (CFC). In 1958, the CFC left the CRU to become the "Canadian Football League" (CFL). The Burnside rules, which closely resembled the American football rules developed by Walter Camp and which were incorporated by the ORFU in 1903, were an effort to distinguish the game from a more rugby-oriented style of play. The Burnside rules reduced teams to 12 men per side, introduced the snap-back system, required the offensive team to gain 10 yards on three downs, eliminated the throw-in from the sidelines, allowed only six men on the line, made all goals by kicking worth two points, and required the opposition to line up 10 yards from the defenders on all kicks. The rules were an attempt to standardize play throughout the country. The CIRFU, QRFU, and CRU refused to adopt the new rules at first. Forward passes were not allowed in the Canadian game until 1929, and touchdowns, which had been worth five points, were increased to six points in 1956, in both cases several decades after the Americans had adopted the same changes. The primary differences between the Canadian and American games stem from rule changes that the American side of the border adopted but the Canadian side did not: originally, both sides had three downs, goalposts on the goal lines, and unlimited forward motion, but the American side later modified these rules. Field width was an exception not based on American rules, as the Canadian game was played on wider fields, in stadiums that were not as narrow as American stadiums. The Grey Cup was established in 1909 when it was donated by Albert Grey, 4th Earl Grey, Governor General of Canada, as the trophy for the Rugby Football Championship of Canada, contested by teams under the CRU. Initially an amateur competition, it eventually became dominated by professional teams in the 1940s and early 1950s. The ORFU, the last amateur organization to compete for the trophy, withdrew from competition after the 1954 season. The move ushered in the modern era of Canadian professional football, culminating in the formation of the present-day Canadian Football League in 1958. Canadian football has mostly been confined to Canada, with the United States being the only other country to have hosted high-level Canadian football games. The CFL's controversial "South Division", as it would come to be officially known, placed CFL teams in the United States playing under Canadian rules in 1995. The expansion was aborted after three years; the Baltimore Stallions were the most successful of the numerous American teams to play in the CFL, winning the 83rd Grey Cup. Continuing financial losses, a lack of proper Canadian football venues, a pervasive belief that the American teams were simply pawns to provide the struggling Canadian teams with expansion fee revenue, and the return of the NFL to Baltimore prompted the end of Canadian football on the American side of the border. 
The CFL hosted Touchdown Atlantic games in Nova Scotia in 2005 and in New Brunswick in 2010, 2011, and 2013. In 2013, Newfoundland and Labrador became the last province to establish football at the minor league level, with teams playing on the Avalon Peninsula and in Labrador City. The province, however, has yet to host a college or CFL game. Prince Edward Island, the smallest of the provinces, has also never hosted a CFL game. On 13 February 2023, the International Federation of American Football (IFAF) and Football Canada announced in a joint statement that the Canadian Amateur Football Rulebook would be an accepted rules code for international play, but would not be used for world championships or world championship qualification. "As Football Canada continues to work with IFAF, I believe this opens the door for international friendlies and tournaments to be staged in Canada employing the infrastructure communities have invested in for our sport from coast to coast," Football Canada president and IFAF General Secretary Jim Mullin said in the joint statement. League play. Canadian football is played at several levels in Canada; the top league is the professional nine-team Canadian Football League (CFL). The CFL regular season begins in June, and playoffs for the Grey Cup are completed by late November. In cities with outdoor stadiums such as Edmonton, Winnipeg, Calgary, and Regina, low temperatures and icy field conditions can seriously affect the outcome of a game. Amateur football is governed by Football Canada. At the university level, 27 teams play in four conferences under the auspices of U Sports; the U Sports champion is awarded the Vanier Cup. Junior football is played by many players after high school, before they join the university ranks. There are 19 junior teams in three conferences in the Canadian Junior Football League competing for the Canadian Bowl. The Quebec Junior Football League includes teams from Ontario and Quebec who battle for the Manson Cup. Semi-professional leagues have grown in recent years, with the Alberta Football League becoming especially popular. The Northern Football Conference, formed in Ontario in 1954, has also surged in popularity among former college players who do not continue on to professional football. The Ontario champion plays against the Alberta champion for the "National Championship". The Canadian Major Football League is the governing body for the semi-professional game. Women's football has gained attention in recent years in Canada. The first Canadian women's league to begin operations was the Maritime Women's Football League, in 2004. The largest women's league is the Western Women's Canadian Football League. Field. The Canadian football field is 150 yards long and 65 yards wide, within which the goal areas (end zones) are 20 yards deep and the goal lines are 110 yards apart. Weighted pylons are placed on the inside corner of the intersections of the goal lines and end lines. Including the end zones, the total area of the field is 9,750 square yards. At each goal line is a set of goalposts, which consist of two "uprights" joined by a crossbar 10 feet above the goal line. The goalposts may be H-shaped (both posts fixed in the ground), although in the higher-calibre competitions the tuning-fork design (supported by a single curved post behind the goal line, so that each upright starts 10 feet above the ground) is preferred. 
The sides of the field are marked by white sidelines, the goal line is marked in white or yellow, and white lines are drawn laterally across the field every 5 yards from the goal line. These lateral lines are called "yard lines" and are often marked with the distance in yards from, and an arrow pointed toward, the nearest goal line. Prior to the early 1980s, arrows were not used and all yard lines (in both multiples of 5 and 10) were usually marked with the distance to the goal line, including the goal line itself, which was marked with either a "0" or "00"; in most stadiums today, only the yard markers in multiples of 10 are marked with numbers, with the goal line sometimes being marked with a "G". The centre (55-yard) line usually is marked with a "C" (or, more rarely, with a "55"). "Hash marks" are painted in white, parallel to the yardage lines, at one-yard intervals, 24 yards from the sidelines under amateur rules, but 28 yards from the sidelines in the CFL. On fields that have a surrounding running track, such as Molson Stadium and many universities, the end zones are often cut off in the corners to accommodate the track. Until 1986, the end zones were 25 yards deep, giving the field an overall length of 160 yards, and a correspondingly larger cutoff could be required at the corners. The first field to feature the shorter 20-yard end zone was Vancouver's BC Place (home of the BC Lions), which opened in 1983. Such shortened end zones were particularly common among U.S.-based teams during the CFL's American expansion, as few American stadiums were able to accommodate the much longer and noticeably wider CFL field. The end zones at Toronto's BMO Field are only 18 yards deep instead of 20 yards. Gameplay. Teams advance across the field through the execution of quick, distinct plays, which involve the possession of a brown, prolate spheroid ball with ends tapered to a point. The ball has two one-inch-wide white stripes. Start of play. At the beginning of a match, an official tosses a coin and allows the captain of the visiting team to call heads or tails. The captain of the team winning the coin toss is given the option of having first choice, or of deferring first choice to the other captain. The captain making first choice may choose either a) to kick off or receive the kick at the beginning of the half, or b) which direction of the field to play in. The remaining choice is given to the opposing captain. Before the resumption of play in the second half, the captain that did not have first choice in the first half is given first choice. Teams usually choose to defer, so it is typical for the team that wins the coin toss to kick to begin the first half and receive to begin the second. Play begins at the start of each half with one team place-kicking the ball from its own end of the field: the 35-yard line in the CFL, the 45-yard line in amateur play. Both teams then attempt to catch the ball. The player who recovers the ball may run while holding the ball, or throw the ball laterally to a teammate. Stoppage of play. Play stops when the ball carrier's knee, elbow, or any other body part aside from the feet and hands is forced to the ground (a "tackle"); when a forward pass is not caught on the fly (during a scrimmage); when a touchdown or a field goal is scored; when the ball leaves the playing area by any means (being carried, thrown, or fumbled out of bounds); or when the ball carrier is in a standing position but can no longer move forward (called forward progress). If no score has been made, the next play starts from "scrimmage". Scrimmage. 
Before scrimmage, an official places the ball at the spot where the previous play ended, but no nearer than 24 yards from the sideline or 1 yard from the goal line. The line parallel to the goal line passing through the ball (running from sideline to sideline for the length of the ball) is referred to as the line of scrimmage. This line is similar to a "no-man's land"; players must stay on their respective sides of this line until the play has begun again. For a scrimmage to be valid, the team in possession of the football must have seven players, excluding the quarterback, within one yard of the line of scrimmage. The defending team must stay a yard or more back from the line of scrimmage. On the field at the beginning of a play are two teams of 12 (not 11 as in American football). The team in possession of the ball is the offence and the team defending is referred to as the defence. Play begins with a backwards pass through the legs (the snap) by a member of the offensive team to another member of the offensive team. This is usually the quarterback or punter, but a "direct snap" to a running back is also not uncommon. If the quarterback or punter receives the ball, he may then run with it, hand it off or pass it laterally to a teammate, throw a forward pass from behind the line of scrimmage, or kick it. Each play constitutes a "down". The offence must advance the ball at least ten yards towards the opponents' goal line within three downs or forfeit the ball to their opponents. Once ten yards have been gained, the offence gains a new set of three downs (rather than the four downs given in American football). Downs do not accumulate: if the offensive team gains 10 yards on its first play, it loses the other two downs and is granted another set of three. If a team fails to gain ten yards in two downs, it usually punts the ball on third down or tries to kick a field goal, depending on its position on the field. The team may, however, use its third down in an attempt to advance the ball and gain a cumulative 10 yards. Change in possession. The ball changes possession when the offence fails to gain ten yards over three downs, when a kicked ball is caught or recovered by the receiving team, when a forward pass is intercepted, when a fumble is recovered by the opposing team, or after a score. Rules of contact. There are many rules governing contact in this type of football. The only player on the field who may be legally tackled is the player currently in possession of the football (the ball carrier). On a passing play, a receiver, that is to say, an offensive player sent down the field to receive a pass, may not be interfered with (have his motion impeded, be blocked, etc.) unless he is within five yards of the line of scrimmage. Prior to a pass that goes beyond the line of scrimmage, a defender may not be impeded more than one yard past that line. Otherwise, any player may block another player's passage, so long as he does not hold or trip the player he intends to block. The kicker may not be contacted after the kick but before his kicking leg returns to the ground (this rule is not enforced upon a player who has blocked a kick). The quarterback may not be hit or tackled after throwing the ball; before that point, while he is in the pocket (i.e. behind the offensive line), he may not be hit below the knees or above the shoulders. Infractions and penalties. Infractions of the rules are punished with "penalties", typically a loss of yardage of 5, 10 or 15 yards against the penalized team. 
Minor violations such as "offside" (a player from either side encroaching into the scrimmage zone before the play starts) are penalized five yards, more serious penalties (such as holding) are penalized 10 yards, and severe violations of the rules (such as face-masking [grabbing the face mask attached to a player's helmet]) are typically penalized 15 yards. Depending on the penalty, the penalty yardage may be assessed from the original line of scrimmage, from where the violation occurred (for example, for a pass interference infraction), or from where the ball ended after the play. Penalties on the offence may, or may not, result in a loss of down; penalties on the defence may result in a first down being automatically awarded to the offence. For particularly severe conduct, the game official(s) may eject players (ejected players may be substituted for), or in exceptional cases, declare the game over and award victory to one side or the other. Penalties do not affect the yard line which the offence must reach to gain a first down (unless the penalty results in a first down being awarded); if a penalty against the defence results in the first down yardage being attained, then the offence is awarded a first down. If the defence is penalized on a two-point convert attempt and the offence chooses to attempt the play again, the offence must attempt another two-point convert; it cannot change to a one-point attempt. Conversely, the offence can attempt a two-point convert following a defensive penalty on a one-point attempt. Penalties may occur before a play starts (such as offside), during the play (such as holding), or in a dead-ball situation (such as unsportsmanlike conduct). Penalties never result in a score for the offence. For example, a point-of-foul infraction committed by the defence in its end zone is not ruled a touchdown, but instead advances the ball to the one-yard line with an automatic first down. For a distance penalty, if the yardage is greater than half the distance to the goal line, then the ball is advanced half the distance to the goal line, though only up to the one-yard line (unlike in American football, in Canadian football no scrimmage may start inside either one-yard line). If the original penalty yardage would have resulted in a first down or moved the ball past the goal line, a first down is awarded. In most cases, the non-penalized team will have the option of "declining" the penalty, in which case the results of the previous play stand as if the penalty had not been called. One notable exception to this rule is if the kicking team on a third-down punt play is penalized before the kick occurs: the receiving team may not decline the penalty and take over on downs. After the kick is made, change of possession occurs and subsequent penalties are assessed against either the spot where the ball is caught, or the runback. Kicking. Canadian football distinguishes four ways of kicking the ball: the place kick, the drop kick, the punt, and the open-field kick of a loose ball. On any kicking play, all onside players (the kicker, and teammates behind the kicker at the time of the kick) may recover and advance the ball. Players on the kicking team who are not onside may not approach within five yards of the ball until it has been touched by the receiving team, or by an onside teammate. Scoring. The methods of scoring are the touchdown (worth six points), the field goal (three points), the safety (two points), the single or "rouge" (one point), and the convert attempted after a touchdown (one point for a kicked convert, or two points for running or passing the ball into the end zone). Officially, the single is called a "rouge" (French for "red") but is often referred to as a single. 
The exact derivation of the term is unknown, but it has been thought that in early Canadian football the scoring of a single was signalled with a red flag. A "rouge" is also a method of scoring in the Eton field game, which dates from at least 1815. Resumption of play. Resumption of play following a score is conducted under procedures which vary with the type of score. Game timing. The game consists of two 30-minute halves, each of which is divided into two 15-minute quarters. The clock counts down from 15:00 in each quarter. Timing rules change when there are three minutes remaining in a half. A short two-minute break occurs after the end of the first and third quarters (with a longer 15-minute break at halftime), and the two teams then change goals. In the first 27 minutes of a half, the clock stops in certain situations when the ball becomes dead. The clock starts again when the referee determines the ball is ready for scrimmage, except after team time-outs (where the clock starts at the snap), after a time count foul (at the snap) and after kickoffs (where the clock starts not at the kick but when the ball is first touched after the kick). In the last three minutes of a half, the clock stops whenever the ball becomes dead. On kickoffs, the clock starts when the ball is first touched after the kick. On scrimmages, when the clock starts depends on what ended the previous play: it generally starts when the ball is ready for scrimmage, but in some cases it starts only at the snap. During the last three minutes of a half, the penalty for failure to place the ball in play within the 20-second play clock, known as a "time count violation" (this foul is known as "delay of game" in American football), is dramatically different from during the first 27 minutes. Instead of the penalty being 5 yards with the down repeated, the base penalty (except during convert attempts) becomes loss of down on first or second down, and 10 yards on third down with the down repeated. In addition, the referee can give possession to the defence for repeated deliberate time count violations on third down. The clock does not run during convert attempts in the last three minutes of a half. If the 15 minutes of a quarter expire while the ball is live, the quarter is extended until the ball becomes dead. If a quarter's time expires while the ball is dead, the quarter is extended for one more scrimmage. A quarter cannot end while a penalty is pending: after the penalty yardage is applied, the quarter is extended one scrimmage. The non-penalized team has the option to "decline" any penalty it considers disadvantageous, so a losing team cannot indefinitely prolong a game by repeatedly committing infractions. Overtime. In the CFL, if the game is tied at the end of regulation play, then each team is given an equal number of offensive possessions to break the tie. A coin toss is held to determine which team will take possession first; the first team scrimmages the ball at the opponent's 35-yard line and conducts a series of downs until it scores or loses possession. Starting with the 2010 season, if the team scores a touchdown it is required to attempt a two-point conversion. The other team then scrimmages the ball at the opponent's 35-yard line and has the same opportunity to score. After the teams have completed their possessions, if one team is ahead, then it is declared the winner; otherwise, the two teams each get another chance to score, scrimmaging from the other 35-yard line. 
After this second round, if there is still no winner, during the regular season the game ends as a tie. In a playoff game, the teams continue to attempt to score from alternating 35-yard lines, until one team is leading after both have had an equal number of possessions. In U Sports football, for the Uteck Bowl, Mitchell Bowl, and Vanier Cup, the same overtime procedure is followed until there is a winner. Officials and fouls. Officials are responsible for enforcing game rules and monitoring the clock. All officials carry a whistle and wear black-and-white striped shirts and black caps except for the referee, whose cap is white. Each carries a weighted orange flag that is thrown to the ground to signal that a foul has been called. An official who spots multiple fouls will throw their cap as a secondary signal. The seven officials (of a standard seven-man crew; lower levels of play up to the university level use fewer officials) on the field are each tasked with a different set of responsibilities: Another set of officials, the chain crew, is responsible for moving the chains. The chains, consisting of two large sticks with a 10-yard-long chain between them, are used to measure for a first down. The chain crew stays on the sidelines during the game, but if requested by the officials they will briefly bring the chains on to the field to measure. The chain crew works under the direction of the head linesman and will typically consist of at least three people: two members of the chain crew each hold one of the two sticks, while a third holds the down marker. The down marker, a large stick with a dial on it, is flipped after each play to indicate the current down and is typically moved to the approximate spot of the ball. The chain crew system has been used for over 100 years and is considered to be an accurate measure of distance, rarely subject to criticism from either side. Severe weather. In the CFL, a game must be delayed if lightning strikes within of the stadium or for other severe weather conditions, or if dangerous weather is anticipated. In the regular season, if play has not resumed after 1 hour and at least half of the third quarter has been completed, the score stands as final; this happened for the first time on August 9, 2019, when a Saskatchewan–Montreal game was stopped late in the third quarter, with the score at the time of stoppage, in favor of Saskatchewan, standing as final. If the stoppage is earlier in the game, or if it is a playoff or Grey Cup game, play may be stopped for up to 3 hours and then resume. After 3 hours of stoppage, play is terminated at least for the day. A playoff or Grey Cup game must then be resumed the following day at the point where it left off. In the regular season, if a game is stopped for 3 hours and one team is leading by at least a certain amount, then that team is awarded a win. The size of lead required is 21, 17, or 13 points, depending on whether the stoppage is in the first, second, or third quarter respectively. If neither team is leading by that much and they are not scheduled to play again in the season, the game is declared a tie. If a regular-season game is stopped for 3 hours and neither team is leading by the required amount to be awarded a win, but the two teams are scheduled to play again later in the season, then the stopped game is decided by a "two-possession shootout" procedure held before the later game is started. 
The procedure is generally similar to overtime in the CFL, with two major exceptions: each team must play exactly two possessions regardless of what happens; and while the score from the stopped game is not added to the shootout score, it is used instead to determine the yard line where each team starts its possessions, so the team that was leading still has an advantage. Positions. The positions in Canadian football have evolved throughout the years, and are not officially defined in the rules. However, there are still several standard positions, as outlined below. Offence. The offence must have at least seven players lined up along the line of scrimmage on every play. The players on either end (usually the wide receivers) are eligible to receive forward passes, and may be in motion along the line of scrimmage prior to the snap. The other players on the line of scrimmage (usually the offensive linemen) are ineligible to receive forward passes, and once they are in position, they may not move until the play begins. Offensive positions fit into three general categories: Offensive linemen. The primary roles of the offensive linemen (or "down linemen") are to protect the quarterback so that he can pass, and to help block on running plays. Offensive linemen generally do not run with the ball (unless they recover it on a fumble) or receive a handoff or lateral pass, but there is no rule against it. Offensive linemen include the following positions: Centre: Snaps the ball to the quarterback to initiate play. The most important pass blocker on pass plays. Calls offensive line plays. Left/right guards: Stand to the left and right of the centre. Help protect the quarterback. Usually very good run blockers, opening holes up the middle for runners. Left/right tackles: Stand on the ends of the offensive line. These are usually the biggest players on the line. Usually very good pass blockers. Backs. Backs are behind the linemen at the start of play. They may run with the ball, and receive handoffs, laterals, and forward passes. They may also be in motion before the play starts. Backs include the following positions: Quarterback: Generally, the leader of the offence. Calls all plays to teammates, receives the ball from the snap, and initiates the offensive play, usually by passing the ball to a receiver, handing the ball off to another back, or running the ball himself. Fullback: Has multiple roles including pass protection, receiving, and blocking for the running back. Sometimes carries the ball, usually in short-yardage situations. Running back (or "tailback"): As the name implies, the main runner on the team. Also receives passes sometimes, and blocks on pass plays. Receivers. Receivers may start the play either on or behind the line of scrimmage. They may run with the ball, and receive handoffs, laterals, and forward passes. Receivers include the following positions: Wide receiver: Lines up on the line of scrimmage, usually at a distance from the centre. Runs a given route to catch a pass and gain yardage. Slotback: Lines up behind the line of scrimmage, between the wide receiver and the tackle. May begin running towards the line of scrimmage before the snap. Runs a given route to catch a pass and gain yardage. Defence. The rules do not constrain how the defence may arrange itself, other than the requirement that they must remain one yard behind the line of scrimmage until the play starts. 
Defensive positions fit into three general categories: Left/right defensive tackles: Try to get past the offensive line, or to open holes in the offensive line for linebackers to rush the quarterback. Nose tackle: A defensive tackle that lines up directly across from the centre. Left/right defensive ends: The main rushing linemen. Rush the quarterback and try to stop runners behind the line of scrimmage. Middle linebacker: Starts the play across from the centre, about 3–4 yards away. Generally, the leader of the defence. Calls plays for linemen and linebackers. Weak-side linebacker: Lines up on the short side of the field, and can drop back into pass coverage, or contain a run. Strong-side linebacker: Lines up on the long side of the field, and usually focuses on stopping the runner. Cornerback: Covers one of the wide receivers on most plays. Defensive halfback: Covers one of the slotbacks and helps contain runs from going to that side of the field. Safety: Covers the back of the field, usually in the centre, acting as the last line of defence. Occasionally rushes the quarterback or stops the runner. Special teams. Special teams are generally used on kicking plays, which include kickoffs, punts, field goal attempts, and extra point attempts. Special teams include the following positions: Long snapper: Snaps the ball for a punt, field goal attempt, or extra point attempt. Holder: Receives the snap on field goal attempts and extra point attempts. Places the ball in position and holds it for the kicker. This position is generally filled by a reserve quarterback, but occasionally the starting quarterback or punter will fill in as holder. Kicker: Performs kickoffs. Kicks field goal attempts and extra point attempts. Punter: Punts the ball, usually on third down. Returner: On kickoffs, punts, and missed field goals, returns the ball as far down the field as possible. Typically, a fast, agile runner.
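The penalty-enforcement and severe-weather provisions described earlier in this article reduce to simple arithmetic. The following Python sketch is illustrative only; the function and parameter names are invented for this example and are not taken from any CFL rulebook. It encodes the half-the-distance limit, the one-yard-line floor, the automatic first down for the original penalty yardage, and the 21/17/13-point thresholds for a stopped regular-season game.

```python
def enforce_distance_penalty(penalty_yards, distance_to_goal, yards_to_gain=None):
    """Yards actually marched off for a distance penalty, plus any automatic first down.

    penalty_yards    -- nominal penalty (5, 10 or 15 yards).
    distance_to_goal -- yards from the enforcement spot to the penalized team's goal line.
    yards_to_gain    -- offence's remaining yards for a first down; supply this only when
                        the defence is the penalized team.
    """
    if penalty_yards > distance_to_goal / 2:
        # Half the distance to the goal line, but the ball may not be placed
        # inside the one-yard line.
        yards_marched = min(distance_to_goal / 2, distance_to_goal - 1)
    else:
        yards_marched = penalty_yards

    automatic_first_down = False
    if yards_to_gain is not None:
        # The *original* penalty yardage decides whether a first down is awarded,
        # even when the actual advance was limited to half the distance.
        automatic_first_down = (penalty_yards >= yards_to_gain
                                or penalty_yards >= distance_to_goal)
    return yards_marched, automatic_first_down


def stopped_game_outcome(quarter, lead, rematch_scheduled):
    """Outcome of a regular-season game stopped for 3 hours because of weather.

    Applies to stoppages in the first three quarters; a stoppage after at least
    half of the third quarter is completed is instead governed by the
    score-stands-as-final rule described above.
    """
    required_lead = {1: 21, 2: 17, 3: 13}[quarter]
    if lead >= required_lead:
        return "win awarded to the leading team"
    if rematch_scheduled:
        return "decided by a two-possession shootout before the later meeting"
    return "declared a tie"
```

For example, a 10-yard penalty enforced from the 8-yard line moves the ball only 4 yards under this sketch, but still awards a first down if the defence was the penalized team.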
6966
18872885
https://en.wikipedia.org/wiki?curid=6966
Chinese calendar
The Chinese calendar, as the name suggests, is a lunisolar calendar created by or commonly used by the Chinese people. While this description is generally accurate, it is not a definitive or complete description. A total of 102 calendars have been officially recorded in classical historical texts. In addition, many more calendars were created privately, and still others were built by peoples who adapted Chinese cultural practices, such as the Koreans, Japanese, and Vietnamese, over the course of a long history. A Chinese calendar consists of twelve months, each aligned with the phases of the moon, along with an intercalary month inserted as needed to keep the calendar in sync with the seasons. It also features twenty-four solar terms, which track the position of the sun and are closely related to climate patterns. Among these, the winter solstice is the most significant reference point and must occur in the eleventh month of the year. Each month contains either twenty-nine or thirty days. The sexagenary cycle for each day runs continuously over thousands of years and serves to pinpoint a specific day amid the many variations in the calendar. In addition, there are many other cycles attached to the calendar that determine the appropriateness of particular days, guiding decisions on what is considered auspicious or inauspicious for different types of activities. The variety of calendars arises from deviations in algorithms and assumptions about inputs. The Chinese calendar is location-sensitive, meaning that calculations based on different locations, such as Peking and Nanking, can yield different results. This has even led to occasions where the Mid-Autumn Festival was celebrated on different days in mainland China and Hong Kong, as in 1978, because some almanacs were still based on the old imperial rules. The sun and moon do not move at a constant speed across the sky. While ancient Chinese astronomers were aware of this fact, it was simpler to create a calendar using average values. There were repeated disputes over this issue, and as measurement techniques improved over time, so did the precision of the algorithms. The driving force behind all these variations has been the pursuit of a more accurate description and prediction of natural phenomena. The calendar during imperial times was regarded as sacred and mysterious. Rulers, with their mandate from Heaven, worked tirelessly to create an accurate calendar capable of predicting climate patterns and astronomical phenomena, which were crucial to all aspects of life, especially agriculture, fishing, and hunting. This, in turn, helped maintain their authority and secure an advantage over rivals. In imperial times, only the rulers had the authority to announce a calendar. An illegal calendar could be considered a serious offence, often punishable by death. Early calendars were also lunisolar, but they were less stable due to their reliance on direct observation. Over time, increasingly refined methods for predicting lunar and solar cycles were developed, eventually reaching maturity around 104 BC, when the Taichu Calendar (太初曆), the "genesis" calendar, was introduced during the Han dynasty. This calendar laid the foundation for subsequent calendars, with its principles being followed by calendar experts for over two thousand years. 
Over centuries, the calendar was refined through advancements in astronomy and horology, with dynasties introducing variations to improve accuracy and meet cultural or political needs. Improving accuracy has its downsides. The solar terms, which correspond to solar positions, are now calculated from the predicted location of the sun, making them far more irregular than under a simple average model. In practice, solar terms do not need to be that precise, because the climate does not change overnight. The introduction of the leap second to the Chinese calendar is somewhat excessive, as it makes future predictions more challenging. This is particularly true since a leap second is typically announced only six months in advance, which can complicate the determination of which day a new moon or solar term falls on, especially when it occurs close to midnight. While modern China primarily adopts the Gregorian calendar for official purposes, the traditional calendar remains culturally significant, influencing festivals and cultural practices and determining the timing of Chinese New Year, with traditions like the twelve animals of the Chinese zodiac still widely observed. The winter solstice serves as another New Year, a tradition inherited from ancient China. Beyond China, it has shaped other East Asian calendars, including the Korean, Vietnamese, and Japanese lunisolar systems, each adapting the same lunisolar principles while integrating local customs and terminology. The sexagenary cycle, a repeating system of Heavenly Stems and Earthly Branches, is used to mark years, months, and days. Before adopting their current names, the Heavenly Stems were known as the "Ten Suns" (十日), and research suggests they are a remnant of an ancient solar calendar. Epochs, or fixed starting points for year counting, have played an essential role in the Chinese calendar's structure. Some epochs are based on historical figures, such as the inauguration of the Yellow Emperor (Huangdi), while others marked the rise of dynasties or significant political shifts. This system allowed for the numbering of years based on regnal eras, with the start of a ruler's reign often resetting the count. The Chinese calendar also tracks time in smaller units, including months, days, double-hours, hours, and quarters. These timekeeping methods have influenced broader fields of horology, with some principles, such as precise time subdivisions, still evident in modern scientific timekeeping. The continued use of the calendar today highlights its enduring cultural, historical, and scientific significance. Etymology. The name of the calendar in Chinese is , which was represented by earlier character variants () and ultimately derived from an ancient form (). The ancient form of the character consists of two stalks of rice plant (), arranged in parallel. This character represents order in space and also order in time. As its meaning became more complex, the modern dedicated character () was created to represent the meaning of calendar. Maintaining a correct calendar was important to the authority of rulers, as it was perceived as a way to measure a ruler's ability. For example, someone seen as a competent ruler would foresee the coming of seasons and prepare accordingly. This understanding was also relevant in predicting anomalies of the Earth and celestial bodies, such as lunar and solar eclipses. 
The significant relationship between authority and timekeeping helps to explain why there are 102 calendars in Chinese history, each attempting to predict the correct courses of the sun, moon, and stars and to mark auspicious and inauspicious times. Each calendar is named as  and recorded in a dedicated calendar section in the history books of different eras. The last one of the imperial era was . A ruler would issue an almanac before the commencement of each year. There were private almanac issuers, usually illegal, when a ruler lost control of some territories. There are various Chinese terms for the calendar including: Various modern Chinese calendar names resulted from the struggle between the introduction of the Gregorian calendar by the government and the preservation of customs by the public in the era of the Republic of China. The government wanted to abolish the Chinese calendar to force everyone to use the Gregorian calendar, and even abolished the Chinese New Year, but faced great opposition. The public needed the astronomical Chinese calendar to do things at the proper time, for example farming and fishing; a wide spectrum of festivals and customary observances were also based on the calendar. The government finally compromised and rebranded it as the agricultural calendar in 1947, relegating the calendar to merely agricultural use. After the end of the imperial era, some almanacs were based upon the algorithm of the last imperial calendar, calculated for the longitude of Peking. Such almanacs were issued under the name "universal book" , or under the Cantonese name , transcribed as Tung Shing. Later these almanacs moved to a new calculation based on the location of the Purple Mountain Observatory, at longitude 120°E. Year-numbering systems. Eras. Ancient China numbered years from an emperor's ascension to the throne or his declaration of a new era name. The first recorded reign title was , from 140 BCE; the last reign title was , from 1908 CE. The era system was abolished in 1912, after which the current or Republican era was used. Epochs. An epoch is a point in time chosen as the origin of a particular calendar era, thus serving as a reference point from which subsequent time or dates are measured. The use of epochs in the Chinese calendar system allows for a chronological starting point from which subsequent dates can be numbered continuously. Various epochs have been used. Similarly, nomenclature similar to that of the Christian era has occasionally been used: No reference date is universally accepted. The most popular is that of the Gregorian calendar (). During the 17th century, the Jesuit missionaries tried to determine the epochal year of the Chinese calendar. In his  (published in Munich in 1658), Martino Martini (1614–1661) dated the Yellow Emperor's ascension at 2697 BCE and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BCE). Philippe Couplet's 1686 "Chronological table of Chinese monarchs" () gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BCE and omits his predecessors Fuxi and Shennong as "too legendary to include". Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. 
Jiangsu province counted 1905 as the year 4396 (using a year 1 of 2491 BCE, and implying that 2025 CE is 4516), and the newspaper "Ming Pao" () reckoned 1905 as 4603 (using a year 1 of 2698 BCE, and implying that 2025 CE is 4723). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar (), with year 1 as the birth of the emperor (which he determined as 2711 BCE, implying that 2025 CE is 4736). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Taoists later adopted the Yellow Emperor Calendar and named it the Tao Calendar (). On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January corresponded to day 14 of the 11th month of year 4609, assuming a year 1 of 2698 BCE, making 2025 CE year 4723. Many overseas Chinese communities like San Francisco's Chinatown adopted the change. The modern Chinese standard calendar uses the epoch of the Gregorian calendar, which is on 1 January of the year 1 CE. History. The Chinese calendar system has a long history, which has traditionally been associated with specific dynastic periods. Various individual calendar types have been developed with different names. In terms of historical development, some of the calendar variations are associated with dynastic changes, along a spectrum running from prehistorical and mythological times through well-attested historical dynastic periods. Many individuals have been associated with the development of the Chinese calendar, including researchers into the underlying astronomy; the development of instruments of observation was also historically important. Influences from India, the Islamic world, and the Jesuits also became significant. Solar calendars. The traditional Chinese lunisolar calendar was developed between 771 BCE and 476 BCE, during the Spring and Autumn period of the Eastern Zhou dynasty. Solar calendars were used before the Zhou dynasty period, along with the basic sexagenary system. One version of the solar calendar is the five-elements (or phases) calendar (), which derives from the Wu Xing. A 365-day year was divided into five phases of 72 days, with each phase preceded by an intercalary day associated with the claimed beginning of the following 72-day period of domination by the next Wu Xing element; thus, the five phases each begin with a governing-element day (), followed by a 72-day period characterized by the ruling element. Years began on a day and a 72-day wood phase, followed by a day and a 72-day fire phase; a day and a 72-day earth phase; a day and a 72-day metal phase, and a day followed by a water phase. Each phase consisted of two three-week months, making each year ten months long. Other days were tracked using the Yellow River Map ("He Tu"). Another version is a four-quarters calendar (, or ). The weeks were ten days long, with one month consisting of three weeks. A year had 12 months, with a ten-day week intercalated in summer as needed to keep up with the tropical year. The 10 Heavenly Stems and 12 Earthly Branches were used to mark days. A third version is the balanced calendar (). A year was 365.25 days, and a month was 29.5 days. After every 16th month, a half-month was intercalated. According to oracle bone records, the Shang dynasty calendar ( BCE) was a balanced calendar with 12 to 14 months in a year; the month after the winter solstice was . 
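The competing epoch reckonings described above differ only in which Gregorian year is chosen as year 1, so converting between them is simple arithmetic. The following sketch is hypothetical (the function name and the small table are assumptions assembled from the figures quoted above) and ignores the refinement that dates before Chinese New Year belong to the previous era year.

```python
# Year 1 of each reckoning, expressed as a BCE year (no year zero is used).
EPOCH_YEAR_ONE_BCE = {
    "Jiangsu province (1905 = 4396)": 2491,
    "Ming Pao (1905 = 4603)": 2698,
    "Liu Shipei Yellow Emperor Calendar": 2711,
}

def era_year(gregorian_year_ce, year_one_bce):
    """Era-year number containing a given CE year.

    Because there is no year zero, 1 BCE is immediately followed by 1 CE,
    so the conversion is simply an addition.
    Note: dates falling before Chinese New Year actually belong to the
    previous era year; this sketch ignores that refinement.
    """
    return year_one_bce + gregorian_year_ce

# Reproduces the figures quoted in the text:
assert era_year(1905, 2491) == 4396
assert era_year(1905, 2698) == 4603
assert era_year(2025, 2698) == 4723
assert era_year(2025, 2711) == 4736
```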
A solar calendar called the Tung Shing, the "Yellow Calendar" or "Imperial Calendar" (both alluding to the Yellow Emperor), continued to see use as an almanac and agricultural guide throughout Chinese history. Lunisolar calendars by dynasty. Lunisolar calendars involve correlations of the cycles of the sun (solar) and the moon (lunar). Zhou dynasty. The first lunisolar calendar was the "Zhou" calendar (), introduced under the Zhou dynasty (1046 BCE – 256 BCE). This calendar sets the beginning of the year at the day of the new moon before the winter solstice. Competing Warring states calendars. Several competing lunisolar calendars were introduced as Zhou devolved into the Warring States, especially by states fighting Zhou control during the Warring States period (perhaps 475 BCE – 221 BCE). From the Warring States period (ending in 221 BCE), six especially significant calendar systems are known to have begun to be developed. Later in history, modern names for these six ancient calendars were developed: "Huangdi", "Yin", "Zhou", "Xia", "Zhuanxu", and "Lu". Modern historical knowledge and records are limited for the earlier calendars. These calendars are known as the "six ancient calendars" (), or quarter-remainder calendars (), since all calculate a year as 365¼ days long. Months begin on the day of the new moon, and a year has 12 or 13 months. Intercalary months (a 13th month) are added to the end of the year. The state of Lu issued its own "Lu calendar" (). The state of Jin issued the "Xia calendar" () with a year beginning on the day of the new moon nearest the March equinox. The state of Qin issued the "Zhuanxu calendar" (), with a year beginning on the day of the new moon nearest the winter solstice. The "Qiang" and "Dai calendars" are modern versions of the Zhuanxu calendar, used by highland peoples. The Song state's "Yin calendar" () began its year on the day of the new moon after the winter solstice. Qin and early Han dynasties. After Qin Shi Huang unified China under the Qin dynasty in 221 BCE, the "Qin calendar" () was introduced. It followed most of the rules governing the Zhuanxu calendar, but the month order was that of the Xia calendar; the year began with month 10 and ended with month 9, analogous to a Gregorian calendar beginning in October and ending in September. The intercalary month, known as "the second" , was placed at the end of the year. The Qin calendar was used going into the Han dynasty. Han dynasty Tàichū calendar. Emperor Wu of Han introduced reforms in the seventh of the eleven named eras of his reign, , 104 BCE – 101 BCE. His calendar () defined a solar year as days (365;06:00:14.035), and the lunisolar month as days (29;12:44:44.444). Since under these values 19 solar years correspond to exactly 235 lunisolar months, the 19-year cycle used for the 7 additional intercalary months was taken as exact, not as an approximation. This calendar introduced the 24 solar terms, dividing the year into 24 equal parts of 15° each. Solar terms were paired, with the 12 combined periods known as "climate terms". The first solar term of the period was known as a pre-climate (), and the second was a mid-climate (). Months were named for the mid-climate to which they were closest, and a month without a mid-climate was an intercalary month. The Taichu calendar established a framework for traditional calendars, with later calendars adding to the basic formula. Northern and Southern Dynasties Dàmíng calendar. 
The "Dàmíng calendar" (), created in the Northern and Southern Dynasties by Zu Chongzhi (429 CE – 500 CE), introduced the equinoxes. Tang dynasty Wùyín Yuán calendar. The use of syzygy to determine the lunisolar month was first described in the Tang dynasty Wùyín Yuán calendar (). Yuan dynasty Shòushí calendar. The Yuan dynasty "Shòushí calendar" () used spherical trigonometry to find the length of the tropical year. The calendar had a 365.2425-day year, identical to the Gregorian calendar. Ming and Qing Shíxiàn calendar. From 1645 to 1913 the Shíxiàn, or "Chongzhen calendar", was in use. During the late Ming dynasty, the Chinese emperor appointed Xu Guangqi in 1629 to lead the Shixian calendar reform. Assisted by Jesuits, he translated Western astronomical works and introduced new concepts, such as those of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Tycho Brahe; however, the new calendar was not released before the end of the dynasty. In the early Qing dynasty, Johann Adam Schall von Bell submitted to the Shunzhi Emperor the calendar that had been prepared under the lead of Xu Guangqi. The Qing government issued it as the Shíxiàn (seasonal) calendar. In this calendar, the solar terms are 15° each along the ecliptic and it can be used as a solar calendar. However, the length of a climate term near the perihelion is less than 30 days, so a single month may contain two mid-climate terms. The calendar changed the mid-climate-term rule to "decide the month in sequence, except the intercalary month." The present "traditional calendar" follows the Shíxiàn calendar, except: Modern Chinese calendar. Although the Chinese calendar lost its place as the country's official calendar at the beginning of the 20th century, its use has continued. The "Republic of China Calendar" published by the Beiyang government of the Republic of China still listed the dates of the Chinese calendar in addition to the Gregorian calendar. In 1929, the Nationalist government tried to ban the traditional Chinese calendar. The " Calendar" published by the government no longer listed the dates of the Chinese calendar. However, Chinese people were used to the traditional calendar and many traditional customs were based on the Chinese calendar. The ban failed and was lifted in 1934. The latest Chinese calendar is the "New Edition of , revised edition", edited by the Purple Mountain Observatory, People's Republic of China. In China, the modern calendar is defined by the Chinese national standard GB/T 33661–2017, "Calculation and Promulgation of the Chinese Calendar", issued by the Standardization Administration of China on 12 May 2017. Although modern-day China uses the Gregorian calendar, the traditional Chinese calendar governs holidays, such as the Chinese New Year and Lantern Festival, in both China and overseas Chinese communities. It also provides the traditional Chinese nomenclature of dates within a year which people use to select auspicious days for weddings, funerals, moving or starting a business. The evening state-run news program "Xinwen Lianbo" in the People's Republic of China continues to announce the months and dates in both the Gregorian and the traditional lunisolar calendar. To optimize the Chinese calendar, astronomers have proposed a number of changes. Kao Ping-tse (; 1888–1970), a Chinese astronomer who co-founded the Purple Mountain Observatory, proposed that month numbers be calculated before the new moon and that solar terms be rounded to the day. 
Since the intercalary month is determined by the first month without a mid-climate, and the mid-climate time varies by time zone, countries that adopted the calendar but calculate it with their own time can arrive at dates that differ from those in China. Contributions from Chinese astronomy. The Chinese calendar has been a development involving much observation and calculation of the apparent movements of the Sun, Moon, planets, and stars, as observed from Earth. Many Chinese astronomers have contributed to the development of the Chinese calendar. Many were of the scholarly or "shi" class (), including writers of history, such as Sima Qian. Notable Chinese astronomers who have contributed to the development of the calendar include Gan De, Shi Shen, and Zu Chongzhi. Early technological developments aiding the calendar include the gnomon. Later technological developments useful to the calendar system include the naming, numbering and mapping of the sky, the development of analog computational devices such as the armillary sphere and the water clock, and the establishment of observatories. Phenology. Early calendar systems, including the Chinese calendar, often were closely tied to natural phenomena. Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). The plum-rains season (), the rainy season in late spring and early summer, begins on the first day after "Mangzhong" () and ends on the first day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first day after the summer solstice. The first () is 10 days long. The mid- () is 10 or 20 days long. The last () is 10 days long, counted from the first day after the beginning of autumn. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). In traditional Chinese culture, "nine" represents infinity and is also the number of "Yang". According to one belief, the ninefold accumulation of "Yang" gradually reduces the "Yin", until finally the weather becomes warm. Names of months. Lunisolar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Though the numbered month names are often applied to the corresponding month numbers in the Gregorian calendar, they are not interchangeable with the Gregorian months when referring to Chinese calendar dates. Horology. Horology, or chronometry, refers to the measurement of time. In the context of the Chinese calendar, horology involves the definition and mathematical measurement of elements such as the observable astronomical movements or events associated with days, months, years, hours, and so on. These measurements are based upon objective, observable phenomena. Calendar accuracy depends upon the accuracy and precision of those measurements. The Chinese calendar is lunisolar, similar to the Hindu, Hebrew and ancient Babylonian calendars. In this case the calendar is based in part on objective, observable phenomena and in part on mathematical analysis correlating those phenomena. 
Lunisolar calendars especially attempt to correlate the solar and lunar cycles, but other considerations can be agricultural, seasonal or phenological, religious, or even political. Basic horologic definitions include that days begin and end at midnight, and months begin on the day of the new moon. Years start on the second (or third) new moon after the winter solstice. Solar terms govern the beginning, middle, and end of each month. A sexagenary cycle, comprising the heavenly stems () and the earthly branches (), is used as identification alongside each year and month, including intercalary months or leap months. Months are also annotated as either long ( for months with 30 days) or short ( for months with 29 days). There are also other elements of the traditional Chinese calendar. Day. Days are Sun-oriented, based upon divisions of the solar year. A day () is considered both traditionally and currently to be the time from one midnight to the next. Traditionally days (including the night-time portion) were divided into 12 double-hours, and in modern times the 24-hour system has become more standard. Week. As early as the Bronze Age Xia dynasty, days were grouped into nine- or ten-day weeks known as . Months consisted of three . The first 10 days were the early (), the middle 10 the mid (), and the last nine (or 10) days were the late (). Japan adopted this pattern, with 10-day weeks known as . In Korea, they were known as "sun" (). The structure of led to public holidays every five or ten days. Officials of the Han dynasty were legally required to rest every five days (twice a , or 5–6 times a month). The name of these breaks became . Grouping days into sets of ten is still used today in referring to specific natural events. "Three Fu" (), a 29–30-day period which is the hottest of the year, reflects its three- length. After the winter solstice, nine sets of nine days were counted to calculate the end of winter. Seven-day week and 28-day cycle. The seven-day week was adopted from the Hellenistic system by the 4th century CE, although its method of transmission into China is unclear. It was again transmitted to China in the 8th century by Manichaeans via Kangju (a Central Asian kingdom near Samarkand), where a variant of the Sogdian language was spoken. The names derive from the five classical planets along with the Sun and Moon, a total of seven celestial bodies highly visible in the sky, rendered in Chinese as 七曜. At that time, people created simple handwritten almanacs, where Sunday was marked with the character 密. The seven-day week later fell out of favour for a long time, only to be revived when Christianity gained a foothold in China; it was later made mandatory by the government. In between, a 28-day cycle system was used, borrowing from the Twenty-Eight Mansions system. Originally, these mansions tracked the moon's position against the stars in the sky, much like the sun and the zodiac, and became part of the Chinese constellations. However, in this context, the 28-day cycle had no connection to astronomy and was used purely for fortune-telling. This information was documented and is still referenced in the "Tung Shing", a Chinese almanac. When Westerners introduced the seven-day week system to China, whether for religious, business, or colonial reasons, both the Chinese and the Westerners found the 28-day cycle useful. Sunday, for instance, was written as "星房虛昴," indicating the corresponding four days on the 28-day cycle, as easily found in the almanac. 
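Because 28 = 7 × 4, each day of the seven-day week corresponds to four of the Twenty-Eight Mansions, which is how an almanac entry such as 星房虛昴 encodes Sunday. The short sketch below illustrates that folding; the traditional mansion order and the luminary pairing shown here are the conventional ones and are supplied as assumptions for illustration, not taken from the text above.

```python
# The 28 mansions in their traditional order, starting from Horn (角).
MANSIONS = list("角亢氐房心尾箕斗牛女虛危室壁奎婁胃昴畢觜參井鬼柳星張翼軫")

# Each mansion is traditionally paired with one of the seven luminaries,
# cycling every seven mansions: Wood, Metal, Earth, Sun, Moon, Fire, Water.
LUMINARIES = ["Wood (Jupiter)", "Metal (Venus)", "Earth (Saturn)",
              "Sun", "Moon", "Fire (Mars)", "Water (Mercury)"]

def mansions_for_luminary(luminary_index):
    """Return the four mansions sharing a luminary (0 = Wood, 3 = Sun, ...)."""
    return [m for i, m in enumerate(MANSIONS) if i % 7 == luminary_index]

# The four Sun mansions are 房, 虛, 昴 and 星 -- the characters that make up
# the almanac notation 星房虛昴 for Sunday quoted above.
print(mansions_for_luminary(3))  # ['房', '虛', '昴', '星']
```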
Following the calendrical reforms in China during the era of the Republic of China, a period marked by both rejection and integration, the seven-day week system became the most widely used, aligning with the Western world. Month. Months are Moon oriented. A "Month" () is the time from one new moon to the next. These synodic months are about days long. This includes the "Date" (), when a day occurs in the month: days are numbered in sequence from 1 to 29 (or 30). A "Calendar month" () is when a month occurs within a year; some months may be repeated. Months are defined by the time between new moons, which averages approximately days. There is no specified length of any particular Chinese month, so the first month could have 29 days (short month, ) in some years and 30 days (long month, ) in other years. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used its own time zone to calculate the Vietnamese calendar and South Vietnam used Beijing time to calculate theirs, North Vietnam began the Tết holiday on 29 January at 23:29 while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive. Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Different eras used different systems to determine the length of each month. The synodic month of the Taichu calendar was days long. The 7th-century Tang-dynasty Wùyín Yuán calendar was the first to determine month length by the synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction. The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character "Chū" (); "Chūyī" () is the first day of the month, and "Chūshí" () the 10th. Days 11 to 20 are written as regular Chinese numerals; "Shíwǔ" () is the 15th day of the month, and "Èrshí" () the 20th. Days 21 to 29 are written with the character "Niàn" () before the characters one through nine; "Niànsān" (), for example, is the 23rd day of the month. Day 30 (when applicable) is written as the numeral "Sānshí" (). Year. A year () is based upon the time of one revolution of Earth around the Sun, rounded to whole days. Traditionally, the year is measured from the first day of spring (lunisolar year) or the winter solstice (solar year). A 12-month year using this system has about 354 days, which would drift significantly from the tropical year. To correct this, traditional Chinese calendars insert a 13-month year approximately once every three years. The 13-month version has the same long and short months alternating, but adds a 30-day leap month (). Years with 12 months are called common years, and 13-month years are known as long years. A solar year is astronomically about days. A lunisolar calendar year is either 353–355 or 383–385 days long. 
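The day-numbering convention just described (a "Chū" prefix for days 1–10, plain numerals for days 11–20, "Niàn" plus a digit for days 21–29, and "Sānshí" for day 30) is mechanical enough to express as a short function. The sketch below is illustrative only; the characters 初, 廿 and the numerals are supplied here as the standard forms of the romanizations given above.

```python
DIGITS = "一二三四五六七八九"  # Chinese numerals 1..9

def chinese_numeral(n):
    """Plain Chinese numeral for 1..30 (sufficient for day numbers)."""
    if n <= 10:
        return DIGITS[n - 1] if n < 10 else "十"
    if n < 20:
        return "十" + DIGITS[n - 11]
    if n == 20:
        return "二十"
    if n < 30:
        return "二十" + DIGITS[n - 21]
    return "三十"

def day_of_month_name(day):
    """Two-character style name of a lunisolar day of the month (1..30)."""
    if 1 <= day <= 10:
        return "初" + chinese_numeral(day)   # e.g. 初一 (Chūyī), 初十 (Chūshí)
    if 11 <= day <= 20:
        return chinese_numeral(day)           # e.g. 十五 (Shíwǔ), 二十 (Èrshí)
    if 21 <= day <= 29:
        return "廿" + DIGITS[day - 21]        # e.g. 廿三 (Niànsān)
    if day == 30:
        return "三十"                          # Sānshí
    raise ValueError("day must be 1..30")
```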
The lunisolar calendar () year usually begins on the new moon closest to Lichun, the first day of spring. This is typically the second and sometimes third new moon after the winter solstice. The lunisolar year begins with the first spring month, (), and ends with the last winter month, (). All other months are named for their number in the month order. See below on the timing of the Chinese New Year. Solar year and solar terms. The solar year (), the time between winter solstices, is divided into 24 solar terms known as . Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. Pairs of solar terms are referred to as climate terms. The first solar term in a pair is the "pre-climate" (), and the second is the "mid-climate" (). The are considered "major terms", while the are deemed "minor terms". The solar terms Qingming (around 5 April) and Dongzhi (around 22 December) are both celebrated events in China. The solar year ("suì", ) begins on the December solstice and proceeds through the 24 solar terms. Since the speed of the Sun's apparent motion along the ecliptic is variable, the time between major terms/mid-climates is not fixed. This variation in time between major terms results in different solar year lengths. There are generally 11 or 12 complete lunisolar months, plus two incomplete lunisolar months around the winter solstice, in a solar year. The complete lunisolar months are numbered from 0 to 10, and the incomplete lunisolar month is considered the 11th month. If there are 12 complete (and one incomplete) lunisolar months within a solar year, it is known as a leap year (a year possessing an intercalary month). Different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BCE Tàichū calendar is (365.25016) days. A solar year of the 13th-century Shòushí calendar is (365.2425) days, identical to the Gregorian calendar. The additional .00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years. If there are 12 complete lunisolar months within a solar year, the first lunisolar month that does not contain any mid-climate is designated the leap, or intercalary, month. Leap months are numbered with , the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called , or "intercalary sixth month" () and written as "6i" or "6+". The following intercalary month (in 2020, after month four) was called () and written "4i" or "4+". Planets. The movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (sometimes known as the seven luminaries) are the references for calendar calculations. Stars. Big Dipper. The Big Dipper is the celestial compass, and its handle's direction indicates the season and month. 3 Enclosures and 28 Mansions. The stars are divided into Three Enclosures and 28 Mansions according to their location in the sky relative to Ursa Minor, at the center. Each mansion is named with a character describing the shape of its principal asterism. The Three Enclosures are Purple Forbidden (), Supreme Palace (), and Heavenly Market (). The eastern mansions are . Southern mansions are . Western mansions are . Northern mansions are . The moon moves through about one lunar mansion per day, so the 28 mansions were also used to count days. 
In the Tang dynasty, Yuan Tiangang () matched the 28 mansions, seven luminaries and yearly animal signs to yield combinations such as "horn-wood-flood dragon" (). List of lunar mansions. The names and determinative stars of the mansions are: Sexagenary system. Several coding systems are used to avoid ambiguity. The Heavenly Stems form a decimal system. The Earthly Branches, a duodecimal system, mark double hours ( or ) and climatic terms. The 12 characters progress from the first day with the same branch as the month (first day () of ; first day () of ), and count the days of the month. Years, months, days of the month, and hours could traditionally be numbered using the terminology of the Chinese sexagenary cycle. The stem-branch system is sexagesimal: the Heavenly Stems and Earthly Branches combine to make up 60 stem-branches, which mark days and years. The five Wu Xing elements are assigned to each stem, branch, or stem-branch. For example, the year from 12 February 2021 to 31 January 2022 was a year () of 12 months or 354 days. The 60 stem-branches have been used to mark the year since the Shang dynasty (1600 BCE – 1046 BCE). Astrologers knew that the orbital period of Jupiter is about 12×361 = 4332 days, which they divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths. Current naming conventions use numbers as the month names, although each month is also associated with one of the twelve Earthly Branches. Correspondences with Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Historically, the Chinese numbered the days of the month with the 60 stem-branches: Fortune-tellers identify the heavenly stem and earthly branch corresponding to a particular day, to its month, and to its year in order to determine the Four Pillars of Destiny associated with it; the Tung Shing, also referred to as the Chinese Almanac of the year or the Huangli, which contains the essential information concerning Chinese astrology, is the most convenient publication to consult for this. Days rotate through a sexagenary cycle marked by the coordination of heavenly stems and earthly branches; hence the Four Pillars of Destiny are also referred to as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its heavenly stem and another for its earthly branch. Since Huangli days are sexagenary, their order is quite independent of their numeric order in each month and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac). Arriving at the Four Pillars of Destiny of a given date therefore requires painstaking calculation, which rarely outpaces the convenience of simply consulting the Huangli by looking up its Gregorian date. The Tang dynasty used the Earthly Branches to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice. China has used the Western hour-minute-second system to divide the day since the Qing dynasty. Several systems were in use historically; systems using multiples of twelve and ten were popular, since they could be easily counted and aligned with the Heavenly Stems and Earthly Branches. Age reckoning. In modern China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese "Sui" calendar. A child is considered one year old at birth. 
After each Chinese New Year, one year is added to their traditional age. Their age therefore is the number of Chinese calendar years in which they have lived. Due to the potential for confusion, the age of infants is often given in months instead of years. After the Gregorian calendar was introduced in China, the Chinese traditional age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" (). In Hong Kong, they are known as "hui ling" (虛齡) and "sut ling" (實齡) respectively. Holidays. Various traditional and religious holidays shared by communities throughout the world use the Chinese (lunisolar) calendar: Chinese New Year. The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence is variable from year to year. The invariant between years is that the winter solstice, Dongzhi, is required to be in the eleventh month of the year. This means that Chinese New Year will fall on the second new moon after the previous winter solstice, unless there is a leap month 11 or 12 in the previous year. This rule is accurate; however, there are two other mostly (but not completely) accurate rules that are commonly stated: It has been found that Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls on or before 31 January, then it moves forward in the next year by either 18, 19, or 20 days. Holidays with the same day and same month. The Chinese New Year (known as the Spring Festival/ in China) is on the first day of the first month and was traditionally called the Yuan Dan () or Zheng Ri (). In Vietnam it is known as Tết Nguyên Đán (). Traditionally it was the most important holiday of the year. It is an official holiday in China, including the Hong Kong, Macau, and Taiwan regions, and in Vietnam, Korea, the Philippines, Malaysia, Singapore, Indonesia, and Mauritius. It is also a public holiday in Thailand's Narathiwat, Pattani, Yala and Satun provinces, and is an official public school holiday in New York City. The Double Third Festival is on the third day of the third month. The Dragon Boat Festival, or the Duanwu Festival (), is on the fifth day of the fifth month and is an official holiday in China, including the Hong Kong, Macau, and Taiwan regions. The Qixi Festival () is celebrated on the evening of the seventh day of the seventh month. The Double Ninth Festival () is celebrated on the ninth day of the ninth month. Full moon holidays (holidays on the fifteenth day). The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao () or Shang Yuan Festival (). The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. The Xia Yuan Festival is celebrated on the fifteenth day of the tenth month. Celebrations of the twelfth month. The Laba Festival is on the eighth day of the twelfth month. It is the enlightenment day of Sakyamuni Buddha and in Vietnam is known as . The Kitchen God Festival is celebrated on the twenty-third day of the twelfth month in northern regions of China and on the twenty-fourth day of the twelfth month in southern regions of China. Chinese New Year's Eve is also known as the Chuxi Festival and is celebrated on the evening of the last day of the traditional Chinese calendar. It is celebrated wherever the traditional Chinese calendar is observed. Celebrations of solar-term holidays. 
The Qingming Festival () is celebrated on the fifteenth day after the Spring Equinox. The Dongzhi Festival (), or Winter Solstice, is also celebrated. Religious holidays based on the Chinese calendar. East Asian Mahayana, Daoist, and some Cao Dai holidays and/or vegetarian observances are based on the traditional Chinese calendar. Celebrations in Japan. Many of the above holidays of the traditional Chinese calendar are also celebrated in Japan, but since the Meiji era they have been observed on the similarly numbered dates of the Gregorian calendar. Double celebrations due to intercalary months. When there is a corresponding intercalary month, the holidays may be celebrated twice. For example, in the hypothetical situation in which there is an additional intercalary seventh month, the Zhong Yuan Festival will be celebrated in the seventh month, followed by another celebration in the intercalary seventh month. The next such occasion will be in 2033, the first since the calendar reform of 1645. Similar calendars. Like Chinese characters, variants of the Chinese calendar have been used in different parts of the Sinosphere throughout history: this includes Vietnam, Korea, Singapore, Japan and Ryukyu, Mongolia, and elsewhere. Outlying areas of China. Calendars of ethnic groups in the mountains and plateaus of southwestern China and the grasslands of northern China are based on local phenology and on the algorithms of traditional calendars of different periods, particularly those of the Tang and pre-Qin eras. Non-Chinese areas. Korea, Vietnam, and the Ryukyu Islands adopted the Chinese calendar. In the respective regions, the Chinese calendar has been adapted into the Korean, Vietnamese, and Ryukyuan calendars, with the main difference from the Chinese calendar being the use of different meridians due to geography, leading to some astronomical events — and calendar events based on them — falling on different dates. The traditional Japanese calendar was also derived from the Chinese calendar (based on a Japanese meridian), but Japan abolished its official use in 1873 after Meiji Restoration reforms. Calendars in Mongolia and Tibet have absorbed elements of the traditional Chinese calendar but are not direct descendants of it.
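The month-numbering and intercalation rules described above (months begin at new moons, the month containing the winter solstice is month 11, and in a suì of thirteen months the first month without a mid-climate becomes the intercalary month) can be outlined in code. The sketch below is a simplified illustration, not an implementation of GB/T 33661-2017; it assumes the astronomical inputs (new-moon dates and mid-climate dates, already converted to local civil days from an ephemeris) are supplied by the caller, and the function name is hypothetical.

```python
def label_months(new_moons, mid_climates):
    """Label the lunisolar months of one sui (winter solstice to winter solstice).

    new_moons    -- sorted civil dates of new moons, from the month containing one
                    winter solstice up to and including the month containing the
                    next; month i runs from new_moons[i] up to new_moons[i + 1].
    mid_climates -- sorted civil dates of the mid-climates (major terms) in that span.
    Returns labels such as "11", "12", "1", ..., with "leap N" for the
    intercalary month.
    """
    month_count = len(new_moons) - 1           # 12 in a common sui, 13 in a leap sui
    leap_needed = month_count == 13
    labels, number, leap_used = [], 11, False  # the first month is month 11

    for i in range(month_count):
        start, end = new_moons[i], new_moons[i + 1]
        has_mid_climate = any(start <= t < end for t in mid_climates)
        if leap_needed and not leap_used and not has_mid_climate and i > 0:
            # The leap month repeats the number of the month it follows.
            labels.append(f"leap {labels[-1]}")
            leap_used = True
        else:
            labels.append(str(number))
            number = number % 12 + 1
    return labels
```

Under this numbering, Chinese New Year is simply the first day of the month labelled "1", consistent with the second-new-moon rule stated in the Holidays section above.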
6968
764407
https://en.wikipedia.org/wiki?curid=6968
Customer relationship management
Customer relationship management (CRM) is a strategic process that organizations use to manage, analyze, and improve their interactions with customers. By leveraging data-driven insights, CRM helps businesses optimize communication, enhance customer satisfaction, and drive sustainable growth. CRM systems compile data from a range of different communication channels, including a company's website, telephone (often paired with a softphone), email, live chat, marketing materials and, more recently, social media. They allow businesses to learn more about their target audiences and how to better cater to their needs, thus retaining customers and driving sales growth. CRM may be used with past, present or potential customers. The concepts, procedures, and rules that a corporation follows when communicating with its consumers are referred to as CRM. This complete connection covers direct contact with customers, such as sales and service-related operations, as well as forecasting and the analysis of consumer patterns and behaviours, from the perspective of the company. The global customer relationship management market size is projected to grow from $101.41 billion in 2024 to $262.74 billion by 2032, at a compound annual growth rate (CAGR) of 12.6%. History. The concept of customer relationship management started in the early 1970s, when customer satisfaction was evaluated using annual surveys or by front-line staff asking customers directly. At that time, businesses had to rely on standalone mainframe systems to automate sales, and the technology of the time allowed them only to categorize customers in spreadsheets and lists. One of the best-known precursors of modern-day CRM is the Farley File. Developed by Franklin Roosevelt's campaign manager, James Farley, the Farley File was a comprehensive set of records detailing political and personal facts about people FDR and Farley met or were supposed to meet. Using it, FDR impressed the people he met with his "recall" of facts about their families and what they were doing professionally and politically. In 1982, Kate and Robert D. Kestenbaum introduced the concept of database marketing, namely applying statistical methods to analyze and gather customer data. By 1986, Pat Sullivan and Mike Muhney had released a customer evaluation system called ACT! based on the principle of a digital Rolodex, which offered a contact management service for the first time. The trend was followed by numerous companies and independent developers trying to maximize lead potential, including Tom Siebel of Siebel Systems, who designed the first CRM product, Siebel Customer Relationship Management, in 1993. In order to compete with these new and quickly growing stand-alone CRM solutions, established enterprise resource planning (ERP) software companies like Oracle, Zoho Corporation, SAP, Peoplesoft (an Oracle subsidiary as of 2005) and Navision started extending their sales, distribution and customer service capabilities with embedded CRM modules. This included embedding sales force automation or extended customer service (e.g. inquiry, activity management) as CRM features in their ERP. Customer relationship management was popularized in 1997 due to the work of Siebel, Gartner, and IBM. Between 1997 and 2000, leading CRM products were enriched with shipping and marketing capabilities. Siebel introduced the first mobile CRM app called Siebel Sales Handheld in 1999. 
The idea of a stand-alone, cloud-hosted customer base was soon adopted by other leading providers at the time, including PeopleSoft (acquired by Oracle), Oracle, SAP and Salesforce.com. The first open-source CRM system was developed by SugarCRM in 2004. During this period, CRM was rapidly migrating to the cloud, as a result of which it became accessible to sole entrepreneurs and small teams. This increase in accessibility generated a huge wave of price reduction. Around 2009, developers began considering the options to profit from social media's momentum and designed tools to help companies become accessible on all users' favourite networks. Many startups at the time benefited from this trend to provide exclusively social CRM solutions, including Base and Nutshell. The same year, Gartner organized and held the first Customer Relationship Management Summit, and summarized the features systems should offer to be classified as CRM solutions. In 2013 and 2014, most of the popular CRM products were linked to business intelligence systems and communication software to improve corporate communication and end-users' experience. The leading trend is to replace standardized CRM solutions with industry-specific ones, or to make them customizable enough to meet the needs of every business. In November 2016, Forrester released a report where it "identified the nine most significant CRM suites from eight prominent vendors". Types. Strategic. Strategic CRM concentrates upon the development of a customer-centric business culture. The focus of a business on being customer-centric (in design and implementation of their CRM strategy) will translate into an improved customer lifetime value (CLV). Operational. The primary goal of CRM systems is integration and automation of sales, marketing, and customer support. Therefore, these systems typically provide a dashboard that gives an overall view of the three functions in a single customer view: one page for each customer that a company may have. The dashboard may provide client information, past sales, previous marketing efforts, and more, summarizing all of the relationships between the customer and the firm. Operational CRM is made up of three main components: sales force automation, marketing automation, and service automation. Analytical. The role of analytical CRM systems is to analyze customer data collected through multiple sources and present it so that business managers can make more informed decisions. Analytical CRM systems use techniques such as data mining, correlation, and pattern recognition to analyze customer data. These analytics help improve customer service by finding small problems which can be solved, perhaps by marketing to different parts of a consumer audience differently. For example, through the analysis of a customer base's buying behavior, a company might see that this customer base has not been buying a lot of products recently. After reviewing this data, the company might decide to market to this subset of consumers differently, to best communicate how its products might benefit this group specifically. Collaborative. The third primary aim of CRM systems is to incorporate external stakeholders such as suppliers, vendors, and distributors, and share customer information across groups/departments and organizations. For example, feedback can be collected from technical support calls, which could help provide direction for marketing products and services to that particular customer in the future. Customer data platform. 
A customer data platform (CDP) is a computer system used by marketing departments that assembles data about individual people from various sources into one database, with which other software systems can interact. About twenty companies were selling such systems at one point, and revenue for them was around US$300 million. Components. The main components of CRM are building and managing customer relationships through marketing, observing relationships as they mature through distinct phases, managing these relationships at each stage and recognizing that the distribution of the value of a relationship to the firm is not homogeneous. When building and managing customer relationships through marketing, firms might benefit from using a variety of tools to support organizational design, incentive schemes, customer structures, and more to optimize the reach of their marketing campaigns. Through the acknowledgment of the distinct phases of CRM, businesses will be able to benefit from seeing the interaction of multiple relationships as connected transactions. The final factor of CRM highlights the importance of CRM through accounting for the profitability of customer relationships. By studying the particular spending habits of customers, a firm may be able to dedicate different resources and amounts of attention to different types of consumers. Relational intelligence, which is the awareness of the variety of relationships a customer can have with a firm and the ability of the firm to reinforce or change those connections, is an important component of the main phases of CRM. Companies may be good at capturing demographic data, such as gender, age, income, and education, and connecting them with purchasing information to categorize customers into profitability tiers, but this is only a firm's industrial view of customer relationships. A lack of relational intelligence is a sign that firms still see customers as resources that can be used for up-sell or cross-sell opportunities, rather than people looking for interesting and personalized interactions. Effect on customer satisfaction. Customer satisfaction has important implications for the economic performance of firms because it has the ability to increase customer loyalty and usage behavior and reduce customer complaints and the likelihood of customer defection. The implementation of a CRM approach is likely to affect customer satisfaction and customer knowledge for a variety of different reasons. Firstly, firms can customize their offerings for each customer. By accumulating information across customer interactions and processing this information to discover hidden patterns, CRM applications help firms customize their offerings to suit the individual tastes of their customers. This customization enhances the perceived quality of products and services from a customer's viewpoint, and because the perceived quality is a determinant of customer satisfaction, it follows that CRM applications indirectly affect customer satisfaction. CRM applications also enable firms to provide timely, accurate processing of customer orders and requests and the ongoing management of customer accounts. For example, Piccoli and Applegate discuss how Wyndham uses IT tools to deliver a consistent service experience across its various properties to a customer. Both an improved ability to customize and reduced variability of the consumption experience enhance perceived quality, which in turn positively affects customer satisfaction. 
CRM applications also help firms manage customer relationships more effectively across the stages of relationship initiation, maintenance, and termination. Customer benefits. With CRM systems, customers are served better in day-to-day processes. With more reliable information, their demand for self-service from companies will decrease. If there is less need to interact with the company for different problems, then the customer satisfaction level is expected to increase. These central benefits of CRM will be connected hypothetically to the three kinds of equity, which are relationship, value, and brand, and in the end to customer equity. Eight benefits were recognized to provide value drivers. Examples. Research has found that a 5% increase in customer retention boosts lifetime customer profits by 50% on average across multiple industries, as well as a boost of up to 90% within specific industries such as insurance. Companies that have mastered customer relationship strategies have the most successful CRM programs. For example, MBNA Europe has had a 75% annual profit growth since 1995. The firm heavily invests in screening potential cardholders. Once proper clients are identified, the firm retains 97% of its profitable customers. They implement CRM by marketing the right products to the right customers. The firm's customers' card usage is 52% above the industry norm, and the average expenditure is 30% more per transaction. Also 10% of their account holders ask for more information on cross-sale products. Amazon has also seen successes through its customer proposition. The firm implemented personal greetings, collaborative filtering, and more for the customer. They also used CRM training for the employees, which helped the company see up to 80% of customers become repeat customers. Customer profile. A customer profile is a detailed description of any particular classification of customer which is created to represent the typical users of a product or service. Customer profiling is a method of understanding customers in terms of demographics, behaviour and lifestyle. It is used to help make customer-focused decisions without confusing the scope of the project with personal opinion. Overall profiling is gathering information that sums up consumption habits so far and projects them into the future so that they can be grouped for marketing and advertising purposes. Customer or consumer profiles are the essence of the data that is collected alongside core data (name, address, company) and processed through customer analytics methods, essentially a type of profiling. The three basic methods of customer profiling are the psychographic approach, the consumer typology approach, and the consumer characteristics approach. These customer profiling methods help a business design itself around who its customers are and make better customer-centered decisions. Improving CRM. Consultants hold that it is important for companies to establish strong CRM systems to improve their relational intelligence. According to this argument, a company must recognize that people have many different types of relationships with different brands. One research study analyzed relationships between consumers in China, Germany, Spain, and the United States, with over 200 brands in 11 industries including airlines, cars, and media. This information is valuable as it provides demographic, behavioral, and value-based customer segmentation. These types of relationships can be both positive and negative. 
Some customers view themselves as friends of the brands, while others as enemies, and some are mixed with a love-hate relationship with the brand. Some relationships are distant, intimate, or anything in between. Data analysis. Managers must understand the different reasons for the types of relationships, and provide the customer with what they are looking for. Companies can collect this information by using surveys, interviews, and more, with current customers. Companies must also improve the relational intelligence of their CRM systems. Companies store and receive huge amounts of data through emails, online chat sessions, phone calls, and more. Many companies do not properly make use of this great amount of data, however. All of these are signs of what types of relationships the customer wants with the firm, and therefore companies may consider investing more time and effort in building out their relational intelligence. Companies can use data mining technologies and web searches to understand relational signals. Social media such as social networking sites, blogs, and forums can also be used to collect and analyze information. Understanding the customer and capturing this data allows companies to convert customers' signals into information and knowledge that the firm can use to understand a potential customer's desired relations with a brand. Employee training. Many firms have also implemented training programs to teach employees how to recognize and create strong customer-brand relationships. Other employees have also been trained in social psychology and the social sciences to help bolster customer relationships. Customer service representatives must be trained to value customer relationships and trained to understand existing customer profiles. Even the finance and legal departments should understand how to manage and build relationships with customers. In practice. Call centers. Contact centre CRM providers are popular for small and mid-market businesses. These systems codify the interactions between the company and customers by using analytics and key performance indicators to give the users information on where to focus their marketing and customer service. This allows agents to have access to a caller's history to provide personalized customer communication. The intention is to maximize average revenue per user, decrease churn rate and decrease idle and unproductive contact with the customers. Growing in popularity is the idea of gamifying, or using game design elements and game principles in a non-game environment such as customer service environments. The gamification of customer service environments includes providing elements found in games like rewards and bonus points to customer service representatives as a method of feedback for a job well done. Gamification tools can motivate agents by tapping into their desire for rewards, recognition, achievements, and competition. Contact-center automation. Contact-center automation, CCA, the practice of having an integrated system that coordinates contacts between an organization and the public, is designed to reduce the repetitive and tedious parts of a contact center agent's job. Automation prevents this by having pre-recorded audio messages that help customers solve their problems. 
For example, an automated contact center may be able to re-route a customer through a series of commands asking him or her to select a certain number to speak with a particular contact center agent who specializes in the field in which the customer has a question. Software tools can also integrate with the agent's desktop tools to handle customer questions and requests. This also saves employees' time. Social media. Social CRM involves the use of social media and technology to engage and learn from consumers. Because the public, especially young people, are increasingly using social networking sites, companies use these sites to draw attention to their products, services and brands, with the aim of building up customer relationships to increase demand. With the increase in the use of social media platforms, integrating CRM with the help of social media can potentially be a quicker and more cost-friendly process. Some CRM systems integrate social media sites like Twitter, LinkedIn, and Facebook to track and communicate with customers. These customers also share their own opinions and experiences with a company's products and services, giving these firms more insight. Therefore, these firms can both share their own opinions and track the opinions of their customers. Enterprise feedback management software platforms combine internal survey data with trends identified through social media to allow businesses to make more accurate decisions on which products to supply. Location-based services. CRM systems can also include technologies that create geographic marketing campaigns. The systems take in information based on a customer's physical location and sometimes integrate it with popular location-based GPS applications. They can be used for networking or contact management as well, to help increase sales based on location. Business-to-business transactions. Despite the general notion that CRM systems were created for customer-centric businesses, they can also be applied to B2B environments to streamline and improve customer management conditions. For the best level of CRM operation in a B2B environment, the software must be personalized and delivered at individual levels. The main differences between business-to-consumer (B2C) and business-to-business CRM systems concern aspects like sizing of contact databases and length of relationships. Market trends. Social networking. At the Gartner CRM Summit 2010, challenges like "system tries to capture data from social networking traffic like Twitter, handles Facebook page addresses or other online social networking sites" were discussed, and solutions were provided that would help in bringing in more clientele. The era of the "social customer" refers to the use of social media by customers. Mobile. Some CRM systems are equipped with mobile capabilities, making information accessible to remote sales staff. Cloud computing and SaaS. Many CRM vendors offer subscription-based web tools (cloud computing) and SaaS. Salesforce.com was the first company to provide enterprise applications through a web browser, and has maintained its leadership position. Over the years, the number of SaaS providers has grown, with CRM being the leading category in 2024. Traditional providers moved into the cloud-based market via acquisitions of smaller providers: Oracle purchased RightNow in October 2011, and Taleo and Eloqua in 2012; SAP acquired SuccessFactors in December 2011 and NetSuite acquired Verenia in 2022. Sales and sales force automation. 
Sales forces also play an important role in CRM, as maximizing sales effectiveness and increasing sales productivity is a driving force behind the adoption of CRM software. Some of the top CRM trends identified in 2021 include focusing on customer service automation such as chatbots, hyper-personalization based on customer data and insights, and the use of unified CRM systems. CRM vendors support sales productivity with different products, such as tools that measure the effectiveness of ads that appear in 3D video games. Pharmaceutical companies were some of the first investors in sales force automation (SFA) and some are on their third- or fourth-generation implementations. However, until recently, the deployments did not extend beyond SFA, limiting their scope and interest to Gartner analysts. Vendor relationship management. Another related development is vendor relationship management (VRM), which provides tools and services that allow customers to manage their individual relationships with vendors. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of "CRM" Magazine. Customer success. Another trend worth noting is the rise of Customer Success as a discipline within companies. More and more companies establish Customer Success teams as separate from the traditional Sales team and task them with managing existing customer relations. This trend fuels demand for additional capabilities for a more holistic understanding of customer health, an area in which many existing vendors in the space are limited. As a result, a growing number of new entrants enter the market while existing vendors add capabilities in this area to their suites. AI and predictive analytics. In 2017, artificial intelligence and predictive analytics were identified as the newest trends in CRM. Criticism. Companies face large challenges when trying to implement CRM systems. Consumer companies frequently manage their customer relationships haphazardly and unprofitably. They may not effectively or adequately use their connections with their customers, due to misunderstandings or misinterpretations of a CRM system's analysis. Clients may be treated like an exchange party, rather than a unique individual, sometimes because there is no bridge between the CRM data and the CRM analysis output. Many studies show that customers are frequently frustrated by a company's inability to meet their relationship expectations, and on the other side, companies do not always know how to translate the data they have gained from CRM software into a feasible action plan. In 2003, a Gartner report estimated that more than $2 billion had been spent on software that was not being used. According to CSO Insights, less than 40 percent of 1,275 participating companies had end-user adoption rates above 90 percent. Many corporations only use CRM systems on a partial or fragmented basis. In a 2007 survey from the UK, four-fifths of senior executives reported that their biggest challenge was getting their staff to use the systems they had installed. Forty-three percent of respondents said they use less than half the functionality of their existing systems. However, market research regarding consumers' preferences may increase the adoption of CRM among developing countries' consumers. 
Collection of customer data such as personally identifiable information must strictly obey customer privacy laws, a requirement that often entails extra expenditures on legal support. Part of the paradox with CRM stems from the challenge of determining exactly what CRM is and what it can do for a company. The CRM paradox, also referred to as the "dark side of CRM", may entail favoritism and differential treatment of some customers. This can happen because a business prioritizes customers who are more profitable, more relationship-orientated or tend to have increased loyalty to the company. Although focusing on such customers by itself is not a bad thing, it can leave other customers feeling left out and alienated, potentially decreasing profits. CRM technologies can easily become ineffective if there is no proper management or they are not implemented correctly. The data sets must also be connected, distributed, and organized properly so that the users can access the information that they need quickly and easily. Research studies also show that customers are increasingly becoming dissatisfied with contact center experiences due to lags and wait times. They also request and demand multiple channels of communication with a company, and these channels must transfer information seamlessly. Therefore, it is increasingly important for companies to deliver a cross-channel customer experience that can be both consistent and reliable.
6970
33450425
https://en.wikipedia.org/wiki?curid=6970
Chuck-a-luck
Chuck-a-luck, also known as birdcage, or sweat rag, is a game of chance played with three dice. It is derived from grand hazard and both can be considered a variant of sic bo, which is a popular casino game, although chuck-a-luck is more of a carnival game than a true casino game. The game is sometimes used as a fundraiser for charity. Rules. Chuck-a-luck is played with three standard six-sided, numbered dice that are kept in a device shaped somewhat like an hourglass which resembles a wire-frame bird cage and pivots about its centre. The dealer rotates the cage end over end, with the dice landing on the bottom. Wagers are placed based on possible combinations that can appear on the three dice. The possible wagers are usually fewer than the wagers that are possible in sic bo and, in that sense, chuck-a-luck can be considered to be a simpler game. In the simplest variant, bettors place stakes on a board with six numbered spaces, labelled 1 through 6, inclusive. They receive a 1:1 payout if the number bet on appears once, a 2:1 payout if the number appears twice, and a 3:1 payout if the number is rolled all 3 times. In this respect, the basic game is identical to Crown and Anchor, but with numbered dice instead of symbols. Additional wagers, with their associated odds, are also commonly seen. House advantage or edge. Chuck-a-luck is a game of chance. On average, the players are expected to lose more than they win. The casino's advantage (house advantage or house edge) is greater than most other casino games and can be much greater for certain wagers. According to John Scarne, "habitual gamblers stay away from Chuck-a-Luck because they know how little chance they have against such a high [house edge]. They call Chuck-a-Luck 'the champ chump's game'." For the single die bet, there are 216 (6 × 6 × 6) possible outcomes for a throw of three dice. A specific number appears on exactly one die in 75 of those outcomes, on exactly two dice in 15 outcomes, and on all three dice in just 1 outcome; it does not appear at all in the remaining 125 outcomes. At payouts of 1 to 1, 2 to 1 and 10 to 1 respectively for each of these types of outcome, the expected loss as a percentage of the stake wagered is: 1 - ((75/216) × 2 + (15/216) × 3 + (1/216) × 11) = 4.6% At more disadvantageous payouts of 1 to 1, 2 to 1 and 3 to 1, the expected loss as a percentage of the stake wagered is: 1 - ((75/216) × 2 + (15/216) × 3 + (1/216) × 4) = 7.9% If the payouts are adjusted to 1 to 1, 3 to 1 and 5 to 1 respectively, the expected loss as a percentage is: 1 - ((75/216) × 2 + (15/216) × 4 + (1/216) × 6) = 0% Commercially organised gambling games almost always have a house advantage which acts as a fee for the privilege of being allowed to play the game, so the last scenario would represent a payout system used for a home game, where players take turns in the role of banker/casino. A short computational check of these figures appears at the end of this entry. In popular culture. There is a reference to chuck-a-luck in the Abbott and Costello film "Hold That Ghost". In Fritz Lang's 1952 film, "Rancho Notorious", chuck-a-luck is the name of the ranch run by Altar Keane (played by Marlene Dietrich) where outlaws hide from the law. Chuck-a-luck is featured in the lyrics to the theme song and in some plot points. The game is played by Lazar in the James Bond movie "The Man with the Golden Gun". The game is played by Freddie Rumsen in "Mad Men" season 2 episode 9, "Six-Month Leave". In "Dragonfly in Amber" the character Claire Randall describes the activity inside an inn as having several soldiers playing chuck-a-luck on the floor, along with a dog sleeping by the fire, and smelling strongly of hops.
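The house-edge percentages quoted above can be checked by brute-force enumeration of the 216 possible rolls. The following Python sketch is illustrative only and is not drawn from the article's sources; it models the single-number bet described above, with each payout schedule given as the multiples of the stake paid when the chosen number appears once, twice or three times.
```python
from itertools import product

def house_edge(payouts):
    """Expected loss per unit staked on a single-number bet,
    given payouts (as multiples of the stake) for the chosen
    number appearing exactly 1, 2 or 3 times."""
    total_return = 0.0
    outcomes = list(product(range(1, 7), repeat=3))  # all 216 possible rolls
    for roll in outcomes:
        matches = roll.count(1)  # by symmetry, track the number 1
        if matches:
            # winning bet: the stake is returned plus the payout multiple
            total_return += 1 + payouts[matches - 1]
    return 1 - total_return / len(outcomes)

# The three payout schedules discussed in the house-edge section above
for schedule in [(1, 2, 10), (1, 2, 3), (1, 3, 5)]:
    print(schedule, f"{house_edge(schedule):.1%}")
# Prints roughly 4.6%, 7.9% and 0.0% respectively, matching the figures above.
```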
6972
33145
https://en.wikipedia.org/wiki?curid=6972
Chipmunk
Chipmunks are small, striped rodents of the subtribe Tamiina. Chipmunks are found in North America, with the exception of the Siberian chipmunk, which is found primarily in Asia. Taxonomy and systematics. Chipmunks are classified into four genera: "Tamias", of which the eastern chipmunk ("T. striatus") is the only living member; "Eutamias", of which the Siberian chipmunk ("E. sibiricus") is the only living member; "Nototamias", which consists of three extinct species; and "Neotamias", which includes the 23 remaining, mostly western North American, species. These classifications were treated as subgenera due to the chipmunks' morphological similarities. As a result, most taxonomies over the twentieth century have placed the chipmunks into a single genus. Joseph C. Moore reclassified chipmunks to form a subtribe Tamiina in a 1959 study, and this classification has been supported by studies of mitochondrial DNA. The common name originally may have been spelled "chitmunk", from the native Odawa (Ottawa) word "jidmoonh", meaning "red squirrel" ("cf." Ojibwe "ajidamoo"). The earliest form cited in the "Oxford English Dictionary" is "chipmonk", from 1842. Other early forms include "chipmuck" and "chipminck", and in the 1830s they were also referred to as "chip squirrels", probably in reference to the sound they make. In the mid-19th century, John James Audubon and his sons included a lithograph of the chipmunk in their "Viviparous Quadrupeds of North America", calling it the "chipping squirrel [or] hackee". Chipmunks have also been referred to as "ground squirrels" (although the name "ground squirrel" may refer to other squirrels, such as those of the genus "Spermophilus"). Diet. Chipmunks have an omnivorous diet primarily consisting of seeds, nuts and other fruits, and buds. They also commonly eat grass, shoots, and many other forms of plant matter, as well as fungi, insects and other arthropods, small frogs, worms, and bird eggs. They will also occasionally eat newly hatched baby birds. Around humans, chipmunks can eat cultivated grains and vegetables, and other plants from farms and gardens, so they are sometimes considered pests. Chipmunks mostly forage on the ground, but they climb trees to obtain nuts such as hazelnuts and acorns. At the beginning of autumn, many species of chipmunk begin to stockpile nonperishable foods for winter. They mostly cache their foods in a larder in their burrows and remain in their nests until spring, unlike some other species, which make multiple small caches of food. Cheek pouches allow chipmunks to carry food items to their burrows for either storage or consumption. Ecology and life history. Eastern chipmunks, the largest of the chipmunks, mate in early spring and again in early summer, producing litters of four or five young twice each year. Western chipmunks breed only once a year. The young emerge from the burrow after about six weeks and strike out on their own within the next two weeks. These small mammals fulfill several important functions in forest ecosystems. Their activities harvesting and hoarding tree seeds play a crucial role in seedling establishment. They consume many different kinds of fungi, including those involved in symbiotic mycorrhizal associations with trees, and are a vector for dispersal of the spores of subterranean sporocarps (truffles) in some regions. Chipmunks construct extensive burrows with several well-concealed entrances. 
The sleeping quarters are kept clear of shells, and feces are stored in refuse tunnels. The eastern chipmunk hibernates in the winter, while western chipmunks do not, relying instead on the stores in their burrows. Chipmunks play an important role as prey for various predatory mammals and birds but are also opportunistic predators themselves, particularly with regard to bird eggs and nestlings, as in the case of eastern chipmunks and mountain bluebirds ("Sialia currucoides"). Chipmunks typically live about three years, although some have been observed living to nine years in captivity. Chipmunks are diurnal. In captivity, they are said to sleep for an average of about 15 hours a day. It is thought that mammals which can sleep in hiding, such as rodents and bats, tend to sleep longer than those that must remain on alert. Genera. Genus "Eutamias" Genus "Tamias" Genus "Neotamias" Genus "Nototamias" †
6974
28481209
https://en.wikipedia.org/wiki?curid=6974
Computer music
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century. History. Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship that has been noted since the Ancient Greeks described the "harmony of the spheres". Musical melodies were first generated by the computer originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. There were newspaper reports from America and England (early and recently) that computers may have played music earlier, but thorough research has debunked these stories as there is no evidence to support the newspaper reports (some of which were speculative). Research has shown that people "speculated" about computers playing music, possibly because computers would make noises, but there is no evidence that they did it. The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built by Trevor Pearcey and Maston Beard in the late 1940s. Mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies from the very early 1950s. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded, but it has been accurately reconstructed. In 1951 it publicly played the "Colonel Bogey March" of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice, as Max Mathews did, which is current computer-music practice. The first music to be performed in England was a performance of the British National Anthem that was programmed by Christopher Strachey on the Ferranti Mark 1, late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognized as the earliest recording of a computer to play music as the CSIRAC music was never recorded. This recording can be heard at the Manchester University site. Researchers at the University of Canterbury, Christchurch declicked and restored this recording in 2016 and the results may be heard on SoundCloud. Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Amongst other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the "Illiac Suite" for string quartet. Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in "Science". 
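To make the idea of digital sound synthesis concrete, the sketch below computes the raw samples of a single decaying tone and writes them to a WAV file, in the spirit of early MUSIC-style sample-by-sample computation (done offline rather than in real time). This is a modern illustration, not a reconstruction of MUSIC I or any system named here; the frequency, duration, sample rate and output file name are arbitrary choices for the example.
```python
import math
import struct
import wave

SAMPLE_RATE = 44100          # samples per second (arbitrary modern choice)
DURATION = 2.0               # length of the tone, in seconds
FREQ = 440.0                 # concert A, chosen only as an example

frames = bytearray()
n_samples = int(SAMPLE_RATE * DURATION)
for n in range(n_samples):
    t = n / SAMPLE_RATE
    envelope = 1.0 - t / DURATION                     # simple linear decay
    sample = envelope * math.sin(2 * math.pi * FREQ * t)
    frames += struct.pack('<h', int(sample * 32767))  # 16-bit signed PCM

with wave.open('tone.wav', 'wb') as wav:
    wav.setnchannels(1)          # mono
    wav.setsampwidth(2)          # 2 bytes = 16 bits per sample
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```
Richer timbres are obtained by replacing the single sine term with more elaborate formulas (additive partials, or the frequency-modulation technique discussed later in this article), but the basic loop of computing one sample value at a time is the same.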
The first professional composer to work with digital synthesis was James Tenney, who created a series of digitally synthesized and/or algorithmically composed pieces at Bell Labs using Mathews' MUSIC III system, beginning with "Analog #1 (Noise Study)" (1961). After Tenney left Bell Labs in 1964, he was replaced by composer Jean-Claude Risset, who conducted research on the synthesis of instrumental timbres and composed "Computer Suite from Little Boy" (1968). Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did operate in real time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days, on multimillion-dollar computers, to generate a few minutes of music. One way around this was to use a 'hybrid system' of digital control of an analog synthesiser; early examples of this were Max Mathews' GROOVE system (1969) and MUSYS by Peter Zinovieff (1969). Until then, computers had been used only partially for musical research into the substance and form of sound (convincing examples are those of Hiller and Isaacson in Urbana, Illinois, US; Iannis Xenakis in Paris; and Pietro Grossi in Florence, Italy). In May 1967 the first experiments in computer music in Italy were carried out by the "S 2F M studio" in Florence in collaboration with "General Electric Information Systems" Italy. An "Olivetti-General Electric GE 115" (Olivetti S.p.A.) was used by Grossi as a "performer": three programmes were prepared for these experiments. The programmes were written by Ferruccio Zulian and used by Pietro Grossi for playing Bach, Paganini, and Webern works and for studying new sound structures. John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis, eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983. Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound. In Japan. 
In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes. By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible. Advances. Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception. Research. There is considerable activity in the field of computer music as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer and electronic music study and research, including the CCRMA (Center of Computer Research in Music and Acoustic, Stanford, USA), ICMA (International Computer Music Association), C4DM (Centre for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world. Music composed and performed by computers. Later, composers such as Gottfried Michael Koenig and Iannis Xenakis had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalization of his own serial composition practice. This is not exactly similar to Xenakis' work, as he used mathematical abstractions and examined how far he could explore these musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s. In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then "manually" worked out into harmonic compositions "Eine kleine Mathmusik I" and "Eine kleine Mathmusik II", performed by computer. Computer-generated scores for performance by human players. Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope, whose computer programs analyze works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell. 
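As a minimal illustration of the general score-generation idea described in this section (a program computing note data that a human performer could then read), the sketch below generates a short melody as a constrained random walk over scale degrees. It is a generic, hypothetical example: the scale, durations and step weights are arbitrary choices for the illustration and are not taken from Koenig's Project 1 or 2, Tangian's algorithm, or Cope's software.
```python
import random

# A one-octave C major scale and a few note lengths; these are arbitrary
# illustrative parameters, not taken from any system named in this article.
SCALE = ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5']
DURATIONS = ['quarter', 'eighth', 'half']

def generate_melody(length=16, seed=None):
    """Random walk over scale degrees: small steps are weighted more
    heavily than leaps, which keeps the resulting line singable."""
    rng = random.Random(seed)
    degree = rng.randrange(len(SCALE))
    melody = []
    for _ in range(length):
        step = rng.choices([-2, -1, 0, 1, 2], weights=[1, 4, 2, 4, 1])[0]
        degree = min(max(degree + step, 0), len(SCALE) - 1)
        melody.append((SCALE[degree], rng.choice(DURATIONS)))
    return melody

if __name__ == '__main__':
    # Print a note list that could be transcribed into staff notation by hand.
    for pitch, duration in generate_melody(seed=1):
        print(pitch, duration)
```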
Melomics, a research project from the University of Málaga (Spain), developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed its first full album, also named "Iamus", which "New Scientist" described as "the first major work composed by a computer and performed by a full orchestra". The group has also developed an API for developers to utilize the technology, and makes its music available on its website. Computer-aided algorithmic composition. Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label "computer-aided composition" lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label "algorithmic composition" is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design. Machine improvisation. Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection. This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples. Statistical style modeling. Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's "Illiac Suite for String Quartet" (1957) and Xenakis' uses of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, string searching and more. Style mixing is possible by blending models derived from several musical sources, with the first style mixing done by S. Dubnov in a piece called NTrope Suite using a Jensen-Shannon joint source model. Later, the use of the factor oracle algorithm (basically, a "factor oracle" is a finite state automaton constructed in linear time and space in an incremental fashion) was adopted for music by Assayag and Dubnov and became the basis for several systems that use stylistic re-injection. Implementations. 
The first implementation of statistical style modeling was the LZify method in Open Music, followed by the Continuator system, which implemented interactive machine improvisation that interpreted the LZ incremental parsing in terms of Markov models and used it for real-time style modeling, developed by François Pachet at Sony CSL Paris in 2002. A MATLAB implementation of the factor oracle machine improvisation can be found as part of the Computer Audition toolbox. There is also an NTCC implementation of the factor oracle machine improvisation. OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the "OMax Brothers") in the Ircam Music Representations group. One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values to a discrete alphabet. This problem was solved in the Variable Markov Oracle (VMO), available as a Python implementation, using an information rate criterion for finding the optimal or most informative representation. Use of artificial intelligence. The use of artificial intelligence to generate new melodies, cover pre-existing music, and clone artists' voices is a recent phenomenon that has been reported to disrupt the music industry. Live coding. Live coding (sometimes known as 'interactive programming', 'on-the-fly programming', 'just in time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.
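The statistical style-modeling approach described above can be illustrated with a toy first-order Markov model over pitches: the model is "trained" on an existing sequence and then recombines its transitions to improvise new material in a vaguely similar style. This sketch is purely illustrative and far simpler than the Continuator, factor oracle or VMO systems mentioned in this article; the training phrase and all parameter choices are invented for the example.
```python
import random
from collections import defaultdict

def train(sequence):
    """Count first-order transitions between successive symbols."""
    transitions = defaultdict(list)
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current].append(nxt)
    return transitions

def improvise(transitions, start, length=12, seed=None):
    """Walk the transition table, jumping to a random state at dead ends."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        choices = transitions.get(output[-1])
        if not choices:                      # dead end: restart anywhere
            choices = list(transitions.keys())
        output.append(rng.choice(choices))
    return output

# An invented training phrase (pitch names only; rhythm is ignored here).
phrase = ['C', 'E', 'G', 'E', 'F', 'D', 'G', 'C',
          'E', 'G', 'A', 'G', 'F', 'E', 'D', 'C']
model = train(phrase)
print(improvise(model, start='C', seed=7))
```
Real systems replace this single-step pitch table with richer representations (variable-length contexts, factor oracles over audio features, joint pitch-rhythm symbols), but the recombination principle is the same.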
6978
28481209
https://en.wikipedia.org/wiki?curid=6978
Concept
A concept is an abstract idea that serves as a foundation for more concrete principles, thoughts, and beliefs. Concepts play an important role in all aspects of cognition. As such, concepts are studied within such disciplines as linguistics, psychology, and philosophy, and these disciplines are interested in the logical and psychological structure of concepts, and how they are put together to form thoughts and sentences. The study of concepts has served as an important flagship of an emerging interdisciplinary approach, cognitive science. In contemporary philosophy, three understandings of a concept prevail: concepts as mental representations, concepts as abilities peculiar to cognitive agents, and concepts as abstract objects such as Fregean senses. Concepts are classified into a hierarchy, higher levels of which are termed "superordinate" and lower levels termed "subordinate". Additionally, there is the "basic" or "middle" level at which people will most readily categorize a concept. For example, a basic-level concept would be "chair", with its superordinate, "furniture", and its subordinate, "easy chair". Concepts may be exact or inexact. When the mind makes a generalization such as the concept of "tree", it extracts similarities from numerous examples; the simplification enables higher-level thinking. A concept is instantiated (reified) by all of its actual or potential instances, whether these are things in the real world or other ideas. Concepts are studied as components of human cognition in the cognitive science disciplines of linguistics, psychology, and philosophy, where an ongoing debate asks whether all cognition must occur through concepts. Concepts are regularly formalized in mathematics, computer science, databases and artificial intelligence. Examples of specific high-level conceptual classes in these fields include classes, schema or categories. In informal use, the word "concept" can refer to any idea. Ontology of concepts. A central question in the study of concepts is the question of what they "are". Philosophers construe this question as one about the ontology of concepts—what kind of things they are. The ontology of concepts determines the answer to other questions, such as how to integrate concepts into a wider theory of the mind, what functions are allowed or disallowed by a concept's ontology, etc. There are two main views of the ontology of concepts: (1) concepts are abstract objects, and (2) concepts are mental representations. Concepts as mental representations. The psychological view of concepts. Within the framework of the representational theory of mind, the structural position of concepts can be understood as follows: concepts serve as the building blocks of what are called "mental representations" (colloquially understood as "ideas in the mind"). Mental representations, in turn, are the building blocks of what are called "propositional attitudes" (colloquially understood as the stances or perspectives we take towards ideas, be they "believing", "doubting", "wondering", "accepting", etc.). And these propositional attitudes, in turn, are the building blocks of our understanding of thoughts that populate everyday life, as well as folk psychology. In this way, we have an analysis that ties our common everyday understanding of thoughts down to the scientific and philosophical understanding of concepts. The physicalist view of concepts. In a physicalist theory of mind, a concept is a mental representation, which the brain uses to denote a class of things in the world. This is to say that it is literally a symbol or group of symbols together made from the physical material of the brain. 
Concepts are mental representations that allow us to draw appropriate inferences about the type of entities we encounter in our everyday lives. Concepts do not encompass all mental representations, but are merely a subset of them. The use of concepts is necessary to cognitive processes such as categorization, memory, decision making, learning, and inference. Concepts are thought to be stored in long term cortical memory, in contrast to episodic memory of the particular objects and events which they abstract, which are stored in hippocampus. Evidence for this separation comes from hippocampal damaged patients such as patient HM. The abstraction from the day's hippocampal events and objects into cortical concepts is often considered to be the computation underlying (some stages of) sleep and dreaming. Many people (beginning with Aristotle) report memories of dreams which appear to mix the day's events with analogous or related historical concepts and memories, and suggest that they were being sorted or organized into more abstract concepts. ("Sort" is itself another word for concept, and "sorting" thus means to organize into concepts.) Concepts as abstract objects. The semantic view of concepts suggests that concepts are abstract objects. In this view, concepts are abstract objects of a category out of a human's mind rather than some mental representations. There is debate as to the relationship between concepts and natural language. However, it is necessary at least to begin by understanding that the concept "dog" is philosophically distinct from the things in the world grouped by this concept—or the reference class or extension. Concepts that can be equated to a single word are called "lexical concepts". The study of concepts and conceptual structure falls into the disciplines of linguistics, philosophy, psychology, and cognitive science. In the simplest terms, a concept is a name or label that regards or treats an abstraction as if it had concrete or material existence, such as a person, a place, or a thing. It may represent a natural object that exists in the real world like a tree, an animal, a stone, etc. It may also name an artificial (man-made) object like a chair, computer, house, etc. Abstract ideas and knowledge domains such as freedom, equality, science, happiness, etc., are also symbolized by concepts. A concept is merely a symbol, a representation of the abstraction. The word is not to be mistaken for the thing. For example, the word "moon" (a concept) is not the large, bright, shape-changing object up in the sky, but only "represents" that celestial object. Concepts are created (named) to describe, explain and capture reality as it is known and understood. "A priori" concepts and "a posteriori" concepts. Kant maintained the view that human minds possess not only empirical or "a posteriori" concepts, but also pure or "a priori" concepts. Instead of being abstracted from individual perceptions, like empirical concepts, they originate in the mind itself. He called these concepts categories, in the sense of the word that means predicate, attribute, characteristic, or quality. But these pure categories are predicates of things "in general", not of a particular thing. According to Kant, there are twelve categories that constitute the understanding of phenomenal objects. Each category is that one predicate which is common to multiple empirical concepts. 
In order to explain how an "a priori" concept can relate to individual phenomena, in a manner analogous to an "a posteriori" concept, Kant employed the technical concept of the schema. He held that the account of the concept as an abstraction of experience is only partly correct. He called those concepts that result from abstraction "a posteriori concepts" (meaning concepts that arise out of experience). An empirical or an "a posteriori" concept is a general representation ("Vorstellung") or non-specific thought of that which is common to several specific perceived objects (Logic §1, Note 1) A concept is a common feature or characteristic. Kant investigated the way that empirical "a posteriori" concepts are created. Embodied content. In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience. The mechanism of transformation is structural mapping, in which properties of two or more source domains are selectively mapped onto a blended space (Fauconnier & Turner, 1995; see conceptual blending). A common class of blends are metaphors. This theory contrasts with the rationalist view that concepts are perceptions (or "recollections", in Plato's term) of an independently existing world of ideas, in that it denies the existence of any such realm. It also contrasts with the empiricist view that concepts are abstract generalizations of individual experiences, because the contingent and bodily experience is preserved in a concept, and not abstracted away. While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation. Realist universal concepts. Platonist views of the mind construe concepts as abstract objects. Plato was the starkest proponent of the realist thesis of universal concepts. By his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say, this form of realism was tied deeply with Plato's ontological projects. This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts. Sense and reference. Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status. Concepts in calculus. According to Carl Benjamin Boyer, in the introduction to his "The History of the Calculus and its Conceptual Development", concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own. 
For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained. Notable theories on the structure of concepts. Classical theory. The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s. The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. Features entailed by the definition of a concept must be both "necessary" and jointly "sufficient" for membership in the class of things covered by a particular concept. A feature is considered necessary if every member of the denoted class has that feature. A set of features is considered sufficient if having all the parts required by the definition entails membership in the class. For example, the classic example "bachelor" is said to be defined by "unmarried" and "man". An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the "law of the excluded middle", which means that there are no partial members of a class: you are either in or out. The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class. In fact, for many years it was one of the major activities in philosophy—concept analysis. Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class of a concept. For example, Shoemaker's classic "Time Without Change" explored whether the concept of the flow of time can include flows where no changes take place, though change is usually taken as a definition of time. Arguments against the classical theory. Given that most later theories of concepts were born out of the rejection of some or all of the classical theory, it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers such as Wittgenstein and Rosch argued against the classical theory, advancing six primary arguments against it. Prototype theory. Prototype theory came out of problems with the classical view of conceptual structure. Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess. Wittgenstein, Rosch, Mervis, Brent Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory. Wittgenstein describes the relationship between members of a class as "family resemblances". 
There need not be any necessary conditions for membership; a dog can still be a dog with only three legs. This view is particularly supported by psychological experimental evidence for prototypicality effects. Participants willingly and consistently rate objects in categories like 'vegetable' or 'furniture' as more or less typical of that class. It seems that our categories are fuzzy psychologically, and so this structure has explanatory power. We can judge an item's membership of the referent class of a concept by comparing it to the typical member—the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities. Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category. Lech, Gunturkun, and Suchan explain that categorization involves many areas of the brain. Some of these are: visual association areas, prefrontal cortex, basal ganglia, and temporal lobe. The Prototype perspective is proposed as an alternative view to the Classical approach. While the Classical theory requires an all-or-nothing membership in a group, prototypes allow for more fuzzy boundaries and are characterized by attributes. Lakoff stresses that experience and cognition are critical to the function of language, and Labov's experiment found that the function of an artifact contributed to what people categorized it as. For example, a container holding mashed potatoes versus tea swayed people toward classifying them as a bowl and a cup, respectively. This experiment also illuminated the optimal dimensions of what the prototype for "cup" is. Prototypes also deal with the essence of things and to what extent they belong to a category. There have been a number of experiments dealing with questionnaires asking participants to rate something according to the extent to which it belongs to a category. This question is contradictory to the Classical Theory because something is either a member of a category or is not. This type of problem is paralleled in other areas of linguistics such as phonology, with an illogical question such as "is /i/ or /o/ a better vowel?" The Classical approach and Aristotelian categories may be a better descriptor in some cases. Theory-theory. Theory-theory is a reaction to the previous two theories and develops them further. This theory postulates that categorization by concepts is something like scientific theorizing. Concepts are not learned in isolation, but rather are learned as a part of our experiences with the world around us. In this sense, concepts' structure relies on their relationships to other concepts as mandated by a particular mental theory about the state of the world. How this is supposed to work is a little less clear than in the previous two theories, but is still a prominent and notable theory. This is supposed to explain some of the issues of ignorance and error that come up in prototype and classical theories, as concepts that are structured around each other seem to account for errors such as whale as a fish (this misconception came from an incorrect theory about what a whale is like, combining with our theory of what a fish is). When we learn that a whale is not a fish, we are recognizing that whales don't in fact fit the theory we had about what makes something a fish. 
Theory-theory also postulates that people's theories about the world are what inform their conceptual knowledge of the world. Therefore, analysing people's theories can offer insights into their concepts. In this sense, "theory" means an individual's mental explanation rather than scientific fact. This theory criticizes classical and prototype theory as relying too much on similarities and using them as a sufficient constraint. It suggests that theories or mental understandings contribute more to determining group membership than weighted similarities do, and that a cohesive category is formed more by what makes sense to the perceiver. Weights assigned to features have been shown by Tversky to fluctuate and vary depending on context and experimental task. For this reason, similarities between members may be collateral rather than causal. Methodology of conceptualisation. Regarding conceptualisation, the conventional approach, influenced by Giovanni Sartori, treats concepts as precise categories with defined attributes. It uses a "ladder of abstraction" to adjust a concept's generality for clear classification and analytical rigour. In contrast, interpretive approaches view concepts as fluid products of language and social context. These methods analyse how language and a researcher's own positionality shape conceptual meaning in practice. Methodological debates in the social sciences have addressed how researchers should refine concepts that are misaligned with empirical reality. Emerging methodologies have tried to bridge these traditions. For instance, Knott and Alejandro developed a structured four-step process to guide researchers through reconceptualisation when an existing concept is misaligned with their observations. Their approach integrates the conventional need for clarity with a reflexive attention to context to give concepts greater analytical leverage. Specifically, the process involves mapping the attributes of the original concept against the nuances of the empirical case to pinpoint misalignments and systematically build a revised concept. Ideasthesia. According to the theory of ideasthesia (or "sensing concepts"), activation of a concept may be the main mechanism responsible for the creation of phenomenal experiences. Therefore, understanding how the brain processes concepts may be central to solving the mystery of how conscious experiences (or qualia) emerge within a physical system, for example the sour taste of lemon. This question is also known as the hard problem of consciousness. Research on ideasthesia emerged from research on synesthesia, where it was noted that a synesthetic experience first requires activation of a concept of the inducer. Later research expanded these results into everyday perception. There is ongoing discussion about which theory of concepts is most effective. Another theory is that of semantic pointers, which use perceptual and motor representations that function like symbols. Etymology. The term "concept" is traced back to 1554–60 (Latin "conceptum" – "something conceived").
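The operational difference between the classical and prototype accounts described above can be made concrete in a short computational sketch. The feature sets, weights, and similarity threshold used here are hypothetical values chosen purely for illustration; they are not drawn from the experimental literature.

```python
# Illustrative contrast between classical (definitional) and prototype-based
# categorization. All feature sets, weights, and the threshold are invented
# examples, not values from the cognitive-science literature.

BACHELOR_DEFINITION = {"unmarried", "man"}  # necessary and jointly sufficient features

def is_member_classical(item_features, definition):
    """Classical theory: membership is all-or-nothing.

    An item belongs to the category if and only if it has every feature
    in the definition (law of the excluded middle: no partial members).
    """
    return definition.issubset(item_features)

# Prototype theory: a category is represented by a central exemplar, and
# membership is graded by weighted similarity to that prototype.
BIRD_PROTOTYPE = {"flies": 0.3, "has_feathers": 0.3, "lays_eggs": 0.2, "sings": 0.2}

def similarity_to_prototype(item_features, prototype_weights):
    """Weighted feature overlap with the prototype (0.0 to 1.0)."""
    return sum(w for f, w in prototype_weights.items() if f in item_features)

def is_member_prototype(item_features, prototype_weights, threshold=0.5):
    """An item counts as a member if it is 'similar enough' to the prototype."""
    return similarity_to_prototype(item_features, prototype_weights) >= threshold

if __name__ == "__main__":
    # Classical check: both defining features present -> member; otherwise not.
    print(is_member_classical({"unmarried", "man", "tall"}, BACHELOR_DEFINITION))  # True
    print(is_member_classical({"man", "tall"}, BACHELOR_DEFINITION))               # False

    # Prototype check: a robin is judged more typical than a penguin,
    # yet the penguin can still clear the membership threshold.
    robin = {"flies", "has_feathers", "lays_eggs", "sings"}
    penguin = {"has_feathers", "lays_eggs"}
    print(similarity_to_prototype(robin, BIRD_PROTOTYPE))    # roughly 1.0 (highly typical)
    print(similarity_to_prototype(penguin, BIRD_PROTOTYPE))  # 0.5 (borderline)
    print(is_member_prototype(penguin, BIRD_PROTOTYPE))      # True (at the threshold)
```

The classical check admits no degrees of membership, while the graded similarity score mirrors the prototypicality ratings reported in the experiments discussed above.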
6979
27823944
https://en.wikipedia.org/wiki?curid=6979
Cell Cycle
Cell Cycle is a biweekly peer-reviewed scientific journal covering all aspects of cell biology. It was established in 2002. Originally published bimonthly, it is now published biweekly. Abstracting and indexing. The journal is abstracted and indexed in several major bibliographic databases. According to the "Journal Citation Reports", the journal has a 5-year impact factor of 7.7; its 2024 CiteScore is 7.9.
6982
1301470298
https://en.wikipedia.org/wiki?curid=6982
List of classical music competitions
European classical music has long relied on music competitions to provide a public forum that identifies the strongest players and contributes to the establishment of their professional careers. This is a list of current competitions in classical music, with each competition and reference link given only once. Many competitions cover a range of categories; in these cases they are listed under "General/mixed". Competitions with age restrictions are listed under "Young musicians".
6984
31612503
https://en.wikipedia.org/wiki?curid=6984
Colin Powell
Colin Luther Powell (April 5, 1937 – October 18, 2021) was an American diplomat and Army officer who was the 65th United States secretary of state from 2001 to 2005. He was the first African-American to hold the office. He was the 15th national security advisor from 1987 to 1989, and the 12th chairman of the Joint Chiefs of Staff from 1989 to 1993. Powell was born in New York City in 1937 to parents who immigrated from Jamaica. He was raised in the South Bronx and educated in the New York City public schools, earning a bachelor's degree in geology from the City College of New York. He joined the Reserve Officers' Training Corps while at City College and was commissioned as a second lieutenant on graduating in 1958. He was a professional soldier for 35 years, holding many command and staff positions and rising to the rank of four-star general. He was commander of the U.S. Army Forces Command in 1989. Powell's last military assignment, from October 1989 to September 1993, was as Joint Chiefs of Staff chairman, the highest military position in the United States Department of Defense. During this time, he oversaw twenty-eight crises, including the invasion of Panama in 1989 and Operation Desert Storm in the Persian Gulf War against Iraq in 1990–1991. He formulated the Powell Doctrine, which limits American military action unless it satisfies criteria regarding American national security interests, overwhelming force, and widespread public support. He served as secretary of state under Republican president George W. Bush. As secretary of state, Powell gave a presentation to the United Nations Security Council regarding the rationale for the Iraq War, but he later admitted that the speech contained substantial inaccuracies. He resigned after Bush was reelected in 2004. In 1995, Powell wrote his autobiography, "My American Journey", and in retirement another book, "It Worked for Me: In Life and Leadership" (2012). He pursued a career as a public speaker, addressing audiences across the country and abroad. Before his appointment as Secretary of State he chaired America's Promise. In the 2016 United States presidential election, Powell, who was not a candidate, received three electoral votes from Washington state for the office of President of the United States. He won numerous U.S. and foreign military awards and decorations. His other awards included the Purple Heart, the Presidential Medal of Freedom (twice), the Congressional Gold Medal, the Presidential Citizens Medal, and the Secretary's Distinguished Service Award. Powell died from complications of COVID-19 in 2021, while being treated for a form of blood cancer that damaged his immune system. Early life and education. Colin Luther Powell was born on April 5, 1937, in Harlem, a neighborhood in the New York City borough of Manhattan. He was born to Jamaican immigrants Maud Ariel (née McKoy) and Luther Theophilus Powell. His parents were both of mixed African, Irish, and Scottish ancestry. Luther worked as a shipping clerk and Maud as a seamstress. Powell was raised in the South Bronx and attended the now closed Morris High School, from which he graduated in 1954. While at school, Powell worked at a local baby furniture store, where he picked up Yiddish from the Eastern European Jewish shopkeepers and some of the customers. He also served as a Shabbos goy, helping Orthodox families with needed tasks on the Sabbath. He received a bachelor of science degree in geology from the City College of New York in 1958 and said that he was a "C average" student.
While at CCNY, Powell shifted his study focus to the Reserve Officers' Training Corps (ROTC) and became a "straight A student" in it; he held the distinction of being the first chairman of the Joint Chiefs of Staff to have attained his commission through the ROTC. Powell also graduated from George Washington University with an MBA in 1971 and was awarded an honorary doctor of public service in 1990. Military career. Powell was a professional soldier for thirty-five years, holding a variety of command and staff positions and rising to the rank of general. Training. While attending the City College of New York, Powell joined the Reserve Officers' Training Corps (ROTC). He described the experience as one of the happiest of his life. As a cadet, Powell joined the Pershing Rifles, the ROTC fraternal organization and drill team begun by General John Pershing. Early career. Upon graduation, he received a commission as an Army second lieutenant; at this time, the Army was newly desegregated. He underwent training in the state of Georgia, where he was refused service in bars and restaurants because of the color of his skin. After attending basic training at Fort Benning, Powell was assigned to the 48th Infantry, in West Germany, as a platoon leader. From 1960 to 1962, he served as group liaison officer, company executive officer, and commander of Company A, 1st Battle Group, 4th Infantry, 2nd Infantry Brigade, 5th Infantry Division (Mechanized) at Fort Devens, Massachusetts. Vietnam War. Captain Powell served a tour in Vietnam as a South Vietnamese Army (ARVN) advisor from 1962 to 1963. While on patrol in a Viet Cong-held area, he was wounded by stepping on a punji stake and was awarded a Purple Heart. The resulting infection made it difficult for him to walk and caused his foot to swell for a short time, shortening his first tour. Powell returned to Vietnam as a major in 1968, serving as assistant chief of staff of operations for the 23rd (Americal) Infantry Division. During the second tour in Vietnam he was decorated with the Soldier's Medal for bravery after he survived a helicopter crash and single-handedly rescued three others, including division commander Major General Charles M. Gettys, from the burning wreckage. My Lai massacre inquiry. Powell was charged with investigating a detailed letter by 11th Light Infantry Brigade soldier Tom Glen, which backed up rumored allegations of the 1968 My Lai massacre. Powell wrote: "In direct refutation of this portrayal is the fact that relations between Americal soldiers and the Vietnamese people are excellent". Powell's assessment would later be described as whitewashing the news of the massacre, details of which remained undisclosed to the public for some time. In May 2004, Powell said to television and radio host Larry King, "I was in a unit that was responsible for My Lai. I got there after My Lai happened. So, in war, these sorts of horrible things happen every now and again, but they are still to be deplored". After the Vietnam War. When he returned to the U.S. from Vietnam in 1971, Powell earned a Master of Business Administration degree from George Washington University in Washington, D.C. He later held a White House Fellowship under President Richard Nixon from 1972 to 1973. During 1975–1976 he attended the National War College, Washington, D.C. In his autobiography, "My American Journey", Powell named several officers he served under who inspired and mentored him.
As a lieutenant colonel commanding 1st Battalion, 32nd Infantry, 2nd Infantry Division in South Korea, Powell was very close to his division commander, Major General Henry "Gunfighter" Emerson, whom he regarded as one of the most caring officers he ever met. Emerson insisted his troops train at night to fight a possible North Korean attack, and made them repeatedly watch the television film "Brian's Song" to promote racial harmony. Powell always professed that what set Emerson apart was his great love of his soldiers and concern for their welfare. After a race riot occurred, in which African-American soldiers almost killed a white officer, Powell was charged by Emerson to crack down on black militants; Powell's efforts led to the discharge of one soldier, and other efforts to reduce racial tensions. During 1976–1977 he commanded the 2nd Brigade of the 101st Airborne Division. Powell subsequently served as the junior military assistant to deputy secretaries of defense Charles Duncan and Graham Claytor, receiving a promotion to brigadier general on 1 June 1979. At the ceremony, he received from Secretary Harold Brown's protocol officer, Stuart Purviance, a framed quotation by President Abraham Lincoln. The quote was "I can make a brigadier general in five minutes. But it's not so easy to replace one hundred ten horses". Taped to the back of the frame was an envelope with instructions that it not be opened for ten years. When Powell opened the note in 1989, after he had become Chairman of the Joint Chiefs of Staff, he read Purviance's prediction that Powell would become Chief of Staff of the United States Army. Powell wrote that he kept the Lincoln quote as a reminder to remain humble despite his rank and position. National Security Advisor and other advisory roles. Powell retained his role as the now-senior military assistant into the presidency of Ronald Reagan, serving under Claytor's successor as deputy secretary of defense, Frank Carlucci. Powell and Carlucci formed a close friendship, referring to each by first names in private, as Powell refused any sort of first-name basis in an official capacity. It was on Powell's advice that newly-elected President Ronald Reagan presented Roy Benavidez the Medal of Honor; Benavidez had received the Distinguished Service Cross, which his commander argued should be upgraded, but Army officials believed there was no living eyewitness to testify to Benavidez's heroism. A soldier who had been present during the action in question learned in July 1980 of the effort to upgrade Benavidez's medal and provided the necessary sworn statement; the upgrade to the Medal of Honor was approved in December 1980. Powell also declined an offer from Secretary of the Army John O. Marsh Jr. to be his under secretary due to his reluctance to assume a political appointment; James R. Ambrose was selected instead. Intent on attaining a division command, Powell petitioned Carlucci and Army chief of staff Edward C. Meyer for reassignment away from the Pentagon, with Meyer appointing Powell as assistant division commander for operations and training of the 4th Infantry Division at Fort Carson, Colorado under Major General John W. Hudachek. After he left Fort Carson, Powell became the senior military assistant to Secretary of Defense Caspar Weinberger, whom he assisted during the 1983 invasion of Grenada and the 1986 airstrike on Libya. 
Under Weinberger, Powell was also involved in the unlawful transfer of U.S.-made TOW anti-tank missiles and Hawk anti-aircraft missiles from Israel to Iran as part of the criminal conspiracy that would later become known as the Iran–Contra affair. In November 1985, Powell solicited and delivered to Weinberger a legal assessment that the transfer of Hawk missiles to Israel or Iran, without Congressional notification, would be "a clear violation" of the law. Despite this, thousands of TOW missiles and hundreds of Hawk missiles and spare parts were transferred from Israel to Iran until the venture was exposed in a Lebanese magazine, "Ash-Shiraa", in November 1986. According to Iran-Contra Independent Counsel Lawrence E. Walsh, when questioned by Congress, Powell "had given incomplete answers" concerning notes withheld by Weinberger, and the activities of Powell and others in concealing the notes "seemed corrupt enough to meet the new, poorly defined test of obstruction". Following his resignation as Secretary of Defense, Weinberger was indicted on five felony charges, including one count of obstruction of Congress for concealing the notes. Powell was never indicted by the Independent Counsel in connection with the Iran-Contra affair. In 1986, Powell took over the command of V Corps in Frankfurt, Germany, from Robert Lewis "Sam" Wetzel. The next year, he served as United States Deputy National Security Advisor, under Frank Carlucci. Following the Iran–Contra scandal, Powell became, at the age of 49, Ronald Reagan's National Security Advisor, serving from 1987 to 1989 while retaining his Army commission as a lieutenant general. He helped negotiate a number of arms treaties with Mikhail Gorbachev, the leader of the Soviet Union. In April 1989, after his tenure with the National Security Council, Powell was promoted to four-star general under President George H. W. Bush and briefly served as the Commander in Chief, Forces Command (FORSCOM), headquartered at Fort McPherson, Georgia, overseeing all active U.S. Army regulars, U.S. Army Reserve, and National Guard units in the Continental U.S., Hawaii, and Puerto Rico. He became the third general since World War II to reach four-star rank without ever serving as a division commander, joining Dwight D. Eisenhower and Alexander Haig. Later that year, President George H. W. Bush selected him as Chairman of the Joint Chiefs of Staff. Chairman of the Joint Chiefs of Staff. Powell's last military assignment, from 1 October 1989 to 30 September 1993, was as the 12th chairman of the Joint Chiefs of Staff, the highest military position in the Department of Defense. At age 52, he became the youngest officer, and first Afro-Caribbean American, to serve in this position. Powell was also the first JCS chair who received his commission through ROTC. During this time, Powell oversaw responses to 28 crises, including the invasion of Panama in 1989 to remove General Manuel Noriega from power and Operation Desert Storm in the 1991 Persian Gulf War. During these events, Powell earned the nickname "the reluctant warrior" – although Powell himself disputed this label, and spoke in favor of the first Bush administration's Gulf War policies. As a military strategist, Powell advocated an approach to military conflicts that maximizes the potential for success and minimizes casualties. A component of this approach is the use of overwhelming force, which he applied to Operation Desert Storm in 1991. His approach has been dubbed the Powell Doctrine.
Powell continued as chairman of the JCS into the Clinton presidency. However, as a realist, he considered himself a bad fit for an administration largely made up of liberal internationalists. He clashed with then-U.S. ambassador to the United Nations Madeleine Albright over the Bosnian crisis, as he opposed any military intervention that did not involve U.S. interests. Powell also regularly clashed with Secretary of Defense Leslie Aspin, whom he was initially hesitant to support after Aspin was nominated by President Clinton. During a lunch meeting between Powell and Aspin in preparation of Operation Gothic Serpent, Aspin was more focused on eating salad than listening and paying attention to Powell's presentation on military operations. The incident caused Powell to grow more irritated towards Aspin and led to his early resignation on 30 September 1993. Powell was succeeded temporarily by Vice Chairman of the Joint Chiefs of Staff Admiral David E. Jeremiah, who took the position as Acting Chairman of the Joint Chiefs of Staff. Soon after Powell's resignation, on 3–4 October 1993, the Battle of Mogadishu, the aim of which was to capture Somali warlord Mohamed Farrah Aidid, was initiated and ended in disaster. Powell later defended Aspin, saying in part that he could not fault Aspin for Aspin's decision to remove a Lockheed AC-130 from the list of armaments requested for the operation. Powell took an early resignation from his tenure as Chairman of the Joint Chiefs of Staff on 30 September 1993. The following year President Clinton sent newly retired Powell, together with former president Jimmy Carter and Senator Sam Nunn, to visit Haiti in an effort to persuade General Raoul Cédras and the ruling junta to abdicate in favor of former Haitian President Aristide, under the threat of an imminent US invasion to remove them by force. Powell's status as a retired general was well known and respected in Haiti and was held to be instrumental in persuading Gen. Cédras. During his chairmanship of the JCS, there was discussion of awarding Powell a fifth star, granting him the rank of General of the Army. But even in the wake of public and Congressional pressure to do so, Clinton-Gore presidential transition team staffers decided against it. Potential presidential candidate. Powell's experience in military matters made him a very popular figure with both American political parties. Many Democrats admired his moderate stance on military matters, while many Republicans saw him as a great asset associated with the successes of past Republican administrations. Put forth as a potential Democratic vice presidential nominee in the 1992 U.S. presidential election or even potentially replacing Vice President Dan Quayle as the Republican vice presidential nominee, Powell eventually declared himself a Republican and began to campaign for Republican candidates in 1995. He was touted as a possible opponent of Bill Clinton in the 1996 U.S. presidential election, possibly capitalizing on a split conservative vote in Iowa and even leading New Hampshire polls for the GOP nomination, but Powell declined, citing a lack of passion for politics. Powell defeated Clinton 50–38 in a hypothetical match-up proposed to voters in the exit polls conducted on Election Day. Despite not standing in the race, Powell won the Republican New Hampshire Vice-Presidential primary on write-in votes. In 1997, Powell founded America's Promise with the objective of helping children from all socioeconomic sectors. 
That same year saw the establishment of The Colin L. Powell Center for Leadership and Service. The mission of the center is to "prepare new generations of publicly engaged leaders from populations previously underrepresented in public service and policy circles, to build a strong culture of civic engagement at City College, and to mobilize campus resources to meet pressing community needs and serve the public good". Powell was mentioned as a potential candidate in the 2000 U.S. presidential election, but again decided against running. Once Texas governor George W. Bush secured the Republican nomination, Powell endorsed him for president and spoke at the 2000 Republican National Convention. Bush won the general election and appointed Powell as secretary of state in 2001. In the electoral college vote count of 2016, Powell received three votes for president from faithless electors from the state of Washington. Secretary of State (2001–2005). President-elect George W. Bush named Powell as his nominee to be secretary of state in a ceremony at his ranch in Crawford, Texas on 16 December 2000. This made Powell the first person to formally accept a Cabinet post in the Bush administration, as well as the first black United States secretary of state. As secretary of state, Powell was perceived as moderate. Powell was unanimously confirmed by the United States Senate by voice vote on 20 January 2001, and ceremonially sworn in on 26 January. Over the course of his tenure he traveled less than any other U.S. Secretary of State in thirty years. This is partly attributed to a letter from former diplomat George F. Kennan, who advised Powell to focus on his duties as the president's principal foreign policy advisor and avoid trips that risked undercutting the duties of the ambassadors. On September 11, 2001, Powell was in Lima, Peru, meeting with president Alejandro Toledo and attending a meeting of foreign ministers of the Organization of American States. After the terror attacks that day, Powell's role became critically important in managing the United States' relationships with foreign countries to secure a stable coalition in the War on Terrorism. Powell's diplomatic skills led to immediate consensus, and the Inter-American Democratic Charter was approved by acclamation on September 11, 2001. The charter is regarded as one of the most comprehensive inter-American documents, created to promote and strengthen democratic ideas, practices, and culture among the states of the Americas. 2003 U.S. invasion of Iraq. Powell came under fire for his role in building the case for the 2003 invasion of Iraq. A 2004 report by the Iraq Survey Group concluded that the evidence that Powell offered to support the allegation that the Iraqi government possessed weapons of mass destruction (WMDs) was inaccurate. As early as December 2000, on the day Powell was nominated to be secretary of state, he told the press "Saddam is sitting on a failed regime that is not going to be around in a few years time". In a press statement on 24 February 2001, Powell had said that sanctions against Iraq had prevented the development of any weapons of mass destruction by Saddam Hussein. Powell favored involving the international community in the invasion, as opposed to a unilateral approach. Powell's chief role was to garner international support for a multi-national coalition to mount the invasion. To this end, Powell addressed a plenary session of the United Nations Security Council on 5 February 2003, to argue in favor of military action.
Citing numerous anonymous Iraqi defectors, Powell asserted that "there can be no doubt that Saddam Hussein has biological weapons and the capability to rapidly produce more, many more". Powell also stated that there was "no doubt in my mind" that Saddam was working to obtain key components to produce nuclear weapons. Powell stated that he gave his speech to the UN on "four days' notice". Britain's "Channel 4 News" reported soon afterwards that a British intelligence dossier that Powell had referred to as a "fine paper" during his presentation had been based on old material and plagiarized an essay by American graduate student Ibrahim al-Marashi. A Senate report on intelligence failures would later detail the intense debate that went on behind the scenes on what to include in Powell's speech. State Department analysts had found dozens of factual problems in drafts of the speech. Some of the claims were taken out, but others were left in, such as claims based on the yellowcake forgery. The administration came under fire for having acted on faulty intelligence, particularly that which was single-sourced to the informant known as Curveball. Powell later recounted how Vice President Dick Cheney had joked with him before he gave the speech, telling him, "You've got high poll ratings; you can afford to lose a few points". Powell's longtime aide-de-camp and Chief of Staff from 1989 to 2003, Colonel Lawrence Wilkerson, later characterized Cheney's view of Powell's mission as to "go up there and sell it, and we'll have moved forward a peg or two. Fall on your damn sword and kill yourself, and I'll be happy, too". In September 2005, Powell was asked about the speech during an interview with Barbara Walters and responded that it was a "blot" on his record. He went on to say, "It will always be a part of my record. It was painful. It's painful now". Wilkerson later said that he inadvertently participated in a hoax on the American people in preparing Powell's erroneous testimony before the United Nations Security Council. As recounted in "Soldier: The Life of Colin Powell", in 2001 before 9/11, Richard A. Clarke, a National Security Council holdover from the Clinton administration, pushed the new Bush administration for action against al-Qaeda in Afghanistan, a move opposed by Paul Wolfowitz who advocated for the creation of a "U.S.-protected, opposition-run 'liberated' enclave around the southern Iraqi city of Basra". Powell referred to Wolfowitz and other top members of Donald Rumsfeld's staff "as the 'JINSA crowd,' " in reference to the pro-Israel Jewish Institute for National Security Affairs. Again invoking "the JINSA crowd" Powell also attributed the decision to go to war in Iraq in 2003 to the neoconservative belief that regime change in Baghdad "was a first and necessary stop on the road to peace in Jerusalem". A review of "Soldier" by Tim Rutten criticized Powell's remarks as a "blot on his record", accusing Powell of slandering "neoconservatives in the Defense Department – nearly all of them Jews" with "old and wholly unmeritorious allegations of dual loyalty". A 2007 article about fears that Jewish groups "will be accused of driving America into a war with the regime in Tehran" cited the DeYoung biography and quoted JINSA's then-executive director, Thomas Neumann, as "surprised" Powell "would single out a Jewish group when naming those who supported the war". Neumann said, "I am not accusing Powell of anything, but these are words that the antisemites will use in the future". 
Once Saddam Hussein had been deposed, Powell's role was once again to establish a working international coalition, this time to assist in the rebuilding of post-war Iraq. On 13 September 2004, Powell testified before the Senate Governmental Affairs Committee, acknowledging that the sources who provided much of the information in his February 2003 UN presentation were "wrong" and that it was "unlikely" that any stockpiles of WMDs would be found. Claiming that he was unaware that some intelligence officials questioned the information prior to his presentation, Powell pushed for reform in the intelligence community, including the creation of a national intelligence director who would assure that "what one person knew, everyone else knew". Other foreign policy issues. Additionally, Powell was critical of other aspects of U.S. foreign policy in the past, such as its support for the 1973 Chilean coup d'état that deposed the democratically elected president Salvador Allende in favor of Augusto Pinochet. In two separate interviews in 2003, Powell commented on the 1973 events. In one, he stated: "I can't justify or explain the actions and decisions that were made at that time. It was a different time. There was a great deal of concern about communism in this part of the world. Communism was a threat to the democracies in this part of the world. It was a threat to the United States". In the other interview, he simply stated: "With respect to your earlier comment about Chile in the 1970s and what happened with Mr. Allende, it is not a part of American history that we're proud of." In the Hainan Island incident of 1 April 2001, a United States EP-3 surveillance aircraft collided mid-air with a Chinese Shenyang J-8 jet fighter over the South China Sea. While somewhat ambiguous, Powell's expression of "very sorry" was accepted as sufficient for the formal apology that China had sought. The incident was nonetheless a serious flare-up in United States-China relations, created negative feelings towards the United States among the Chinese public, and heightened feelings of Chinese nationalism. In September 2004, Powell described the killings in Darfur as "genocide", thus becoming the first cabinet member to apply the term to events in an ongoing conflict. In November the president "forced Powell to resign", according to Walter LaFeber. Powell announced his resignation as Secretary of State on 15 November 2004, shortly after Bush was reelected. Bush's desire for Powell to resign was communicated to Powell via a phone call by Bush's chief of staff, Andrew Card. The following day, Bush nominated National Security Advisor Condoleezza Rice as Powell's successor. In mid-November, Powell stated that he had seen new evidence suggesting that Iran was adapting missiles for a nuclear delivery system. The accusation came at the same time as the settlement of an agreement between Iran, the IAEA, and the European Union. Although biographer Jeffrey J. Matthews is highly critical of how Powell misled the United Nations Security Council regarding weapons of mass destruction in Iraq, he credits Powell with a series of achievements at the State Department. These include restoring the morale of demoralized professional diplomats, leadership of the international HIV/AIDS initiative, resolving a crisis with China, and blocking efforts to tie Saddam Hussein to the 9/11 attacks on the United States. Life after diplomatic service.
After retiring from the role of Secretary of State, Powell returned to private life. In April 2005, he was privately telephoned by Republican senators Lincoln Chafee and Chuck Hagel, at which time Powell expressed reservations and mixed views about the nomination of John Bolton as ambassador to the United Nations, but refrained from advising the senators to oppose Bolton (Powell had clashed with Bolton during Bush's first term). The decision was viewed as potentially dealing significant damage to Bolton's chances of confirmation. Bolton was put into the position via a recess appointment because of the strong opposition in the Senate. On 28 April 2005, an opinion piece in "The Guardian" by Sidney Blumenthal (a former top aide to President Bill Clinton) claimed that Powell was in fact "conducting a campaign" against Bolton because of the acrimonious battles they had had while working together, which among other things had resulted in Powell cutting Bolton out of talks with Iran and Libya after complaints about Bolton's involvement from the British. Blumenthal added that "The foreign relations committee has discovered that Bolton made a highly unusual request and gained access to 10 intercepts by the National Security Agency. Staff members on the committee believe that Bolton was probably spying on Powell, his senior advisors and other officials reporting to him on diplomatic initiatives that Bolton opposed". In September 2005, Powell criticized the response to Hurricane Katrina, and said thousands of people were not properly protected because they were poor, rather than because they were black. On 5 January 2006, he participated in a meeting at the White House of former Secretaries of Defense and State to discuss United States foreign policy with Bush administration officials. In September 2006, Powell sided with more moderate Senate Republicans in supporting more rights for detainees and opposing President Bush's terrorism bill. He backed Senators John Warner, John McCain, and Lindsey Graham in their statement that U.S. military and intelligence personnel in future wars would suffer for abuses committed in 2006 by the U.S. in the name of fighting terrorism. Powell stated that "The world is beginning to doubt the moral basis of our fight against terrorism". In 2007, he joined the board of directors of Steve Case's new company Revolution Health. Powell also served on the Council on Foreign Relations board of directors. In 2008, Powell served as a spokesperson for National Mentoring Month, a campaign held each January to recruit volunteer mentors for at-risk youth. Soon after Barack Obama's 2008 election, Powell was mentioned as a possible cabinet member. He was not nominated. In September 2009, Powell advised President Obama against surging U.S. forces in Afghanistan. The president announced the surge the following December. In 2010, Powell joined the Smithsonian advisory council. He and his wife, Alma Powell, were founding donors supporting the museum's capital campaign and Living History campaign. He was an advocate for the National Museum of African American History and Culture. In March 2014, Salesforce.com announced that Powell had joined its board of directors. Political positions. From his early political career through his tenure with the Joint Chiefs of Staff, Powell was an independent. Powell was a moderate Republican from 1995 until 2021.
In 2021, Powell renounced his affiliation with the Republican Party following the storming of the United States Capitol on 6 January. The attack moved Powell to call for President Trump's resignation, noting: "I wish he would do what Nixon did and just step down. Somebody ought to go up to him and it's over". Powell also accused Trump of attempting to "overthrow the government" and said that Trump's false claims of a stolen election were "dangerous for our democracy". Powell was pro-choice regarding abortion, and expressed some support for an assault weapons ban. He stated in his autobiography that he supported affirmative action that levels the playing field, without giving a leg up to undeserving persons because of racial issues. Powell originally suggested the don't ask, don't tell policy to President Clinton, though he later supported its repeal as proposed by Robert Gates and Admiral Mike Mullen in January 2010, saying "circumstances had changed". Powell gained attention in 2004 when, in a conversation with British Foreign Secretary Jack Straw, he reportedly referred to neoconservatives within the Bush administration as "fucking crazies". In a September 2006 letter to John McCain, Powell expressed opposition to President Bush's push for military tribunals of those formerly and currently classified as enemy combatants. Specifically, he objected to the effort in Congress to "redefine Common Article 3 of the Geneva Convention". He also asserted: "The world is beginning to doubt the moral basis of our fight against terrorism". Defending the Iraq War. At the 2007 Aspen Ideas Festival in Colorado, Powell stated that he had spent two and a half hours explaining to President Bush "the consequences of going into an Arab country and becoming the occupiers". During this discussion, he insisted that the U.S. appeal to the United Nations first, but if diplomacy failed, he would support the invasion: "I also had to say to him that you are the President, you will have to make the ultimate judgment, and if the judgment is this isn't working and we don't think it is going to solve the problem, then if military action is undertaken I'm with you, I support you". In a 2008 interview on CNN, Powell reiterated his support for the 2003 decision to invade Iraq in the context of his endorsement of Barack Obama, stating: "My role has been very, very straightforward. I wanted to avoid a war. The president [Bush] agreed with me. We tried to do that. We couldn't get it through the U.N. and when the president made the decision, I supported that decision. And I've never blinked from that. I've never said I didn't support a decision to go to war". Powell's position on the Iraq War troop surge of 2007 was less consistent. In December 2006, he expressed skepticism that the strategy would work and doubts about whether the U.S. military had enough troops to carry it out successfully. He stated: "I am not persuaded that another surge of troops into Baghdad for the purposes of suppressing this communitarian violence, this civil war, will work". Following his endorsement of Barack Obama in October 2008, however, Powell praised General David Petraeus and U.S. troops, as well as the Iraqi government, concluding that "it's starting to turn around". By mid-2009, he had concluded that a surge of U.S. forces in Iraq should have come sooner, perhaps in late 2003. Endorsement of Barack Obama.
Powell donated the maximum allowable amount to John McCain's campaign in the summer of 2007, and in early 2008 his name was mentioned as a possible running mate for McCain in the 2008 U.S. presidential election. McCain won the Republican presidential nomination, but the Democrats nominated the first black candidate, Senator Barack Obama of Illinois. On 19 October 2008, Powell announced his endorsement of Obama during a "Meet the Press" interview, citing "his ability to inspire, because of the inclusive nature of his campaign, because he is reaching out all across America, because of who he is and his rhetorical abilities", in addition to his "style and substance". He additionally referred to Obama as a "transformational figure". Powell further questioned McCain's judgment in appointing Sarah Palin as the vice presidential candidate, stating that despite the fact that she is admired, "now that we have had a chance to watch her for some seven weeks, I don't believe she's ready to be president of the United States, which is the job of the vice president". He said that Obama's choice for vice president, Joe Biden, was ready to be president. He also added that he was "troubled" by the "false intimations that Obama was Muslim". Powell stated that "[Obama] is a Christian – he's always been a Christian... But the really right answer is, what if he is? Is there something wrong with being a Muslim in this country? The answer's no, that's not America". Powell then mentioned Kareem Rashad Sultan Khan, a Muslim American soldier in the U.S. Army who served and died in the Iraq War. He later stated, "Over the last seven weeks, the approach of the Republican Party has become narrower and narrower [...] I look at these kind of approaches to the campaign, and they trouble me". Powell concluded his Sunday morning talk show comments with "It isn't easy for me to disappoint Sen. McCain in the way that I have this morning, and I regret that [...] I think we need a transformational figure. I think we need a president who is a generational change and that's why I'm supporting Barack Obama, not out of any lack of respect or admiration for Sen. John McCain". Later, in a 12 December 2008 CNN interview with Fareed Zakaria, Powell reiterated his belief that during the last few months of the campaign, Palin pushed the Republican party further to the right and had a polarizing impact on it. When asked on "Meet the Press" why he was still a Republican, he said, "I'm still a Republican. And I think the Republican Party needs me more than the Democratic Party needs me. And you can be a Republican and still feel strongly about issues such as immigration, and improving our education system, and doing something about some of the social problems that exist in our society and our country. I don't think there's anything inconsistent with this". Views on the Obama administration. In a July 2009 CNN interview with John King, Powell expressed concern over President Obama increasing the size of the federal government and the size of the federal budget deficit. In September 2010, he criticized the Obama administration for not focusing "like a razor blade" on the economy and job creation. Powell reiterated that Obama was a "transformational figure". In a video that aired on CNN.com in November 2011, Colin Powell said in reference to Barack Obama, "many of his decisions have been quite sound. The financial system was put back on a stable basis".
On 25 October 2012, 12 days before the presidential election, he gave his endorsement to President Obama for re-election during a broadcast of "CBS This Morning". He considered the administration to have had success and achieved progress in foreign and domestic policy arenas. As additional reasons for his endorsement, Powell cited the changing positions and perceived lack of thoughtfulness of Mitt Romney on foreign affairs, and a concern for the validity of Romney's economic plans. In an interview with ABC's Diane Sawyer and George Stephanopoulos during ABC's coverage of President Obama's second inauguration, Powell criticized members of the Republican Party who spread "things that demonize the president". He called on GOP leaders to publicly denounce such talk. 2016 e-mail leaks and criticism of Donald Trump. Powell was vocal about the state of the Republican Party. Speaking at a Washington Ideas forum in 2015, he warned the audience that the Republican Party had begun a move to the fringe right, lessening the chances of a Republican presidency in the future. On Republican presidential candidate Donald Trump's statements regarding immigrants, Powell noted there were many immigrants working in Trump hotels. Powell denounced the "nastiness" of the 2016 Republican primaries. He compared the race to reality television, and said the campaign had gone "into the mud". Powell accused the Hillary Clinton campaign of trying to pin her email controversy on him. Speaking to "People" magazine, Powell said "she was using [the private email server] for a year before I sent her a memo telling her what I did". On 13 September 2016, emails were obtained that revealed Powell's private communications regarding both Donald Trump and Hillary Clinton. Powell privately reiterated his comments regarding Clinton's email scandal, writing, "I have told Hillary's minions repeatedly that they are making a mistake trying to drag me in, yet they still try", and complaining that "Hillary's mafia keeps trying to suck me into it". In another email, Powell said she should have told everyone what she did "two years ago", and said that she has not "been covering herself with glory". Writing on the 2012 Benghazi attack controversy surrounding Clinton, Powell said to then U.S. Ambassador Susan Rice, "Benghazi is a stupid witch hunt". Commenting on Clinton in a general sense, he mused that "Everything HRC touches she kind of screws up with hubris", and "I would rather not have to vote for her, although she is a friend I respect". Powell publicly endorsed Clinton on 25 October 2016, "because I think she's qualified, and the other gentleman is not qualified". In private emails, Powell called Donald Trump a "national disgrace" with "no sense of shame". He wrote of Trump's role in the birther movement, which he called "racist". He suggested the media ignore Trump: "To go on and call him an idiot just emboldens him". The emails were obtained by the media as the result of a hack. Despite not running in the 2016 federal elections, Powell received three electoral votes for president from faithless electors in Washington who had pledged to vote for Clinton, coming in third overall. After Barack Obama, he was the second black person to receive electoral votes in a presidential election. Views on the Trump administration. In an interview in October 2019, Powell warned that the GOP needed to "get a grip" and put the country before their party, standing up to president Trump rather than worrying about political fallout. 
He said: "When they see things that are not right, they need to say something about it because our foreign policy is in shambles right now, in my humble judgment, and I see things happening that are hard to understand". On 7 June 2020, Powell announced he would be voting for former vice president Joe Biden in the 2020 United States presidential election. In August, Powell delivered a speech in support of Biden's candidacy at the 2020 Democratic National Convention. In January 2021, after the Capitol building was attacked by Trump supporters, Powell told CNN: "I can no longer call myself a fellow Republican". Personal life and death. Powell married Alma Johnson on 25 August 1962. Their son, Michael Powell, was the chairman of the Federal Communications Commission (FCC) from 2001 to 2005. Their daughter is actress Linda Powell. As a hobby, Powell restored old Volvo and Saab automobiles. In 2013, he faced questions about his relationship with the Romanian diplomat Corina Crețu, after a hacked AOL email account had been made public. He acknowledged a "very personal" email relationship but denied further involvement. He was an Episcopalian. On 18 October 2021, Powell, who was being treated for multiple myeloma, died at Walter Reed National Military Medical Center of complications from COVID-19 at the age of 84. He had been vaccinated, but his myeloma compromised his immune system; he also had early-stage Parkinson's disease. President Joe Biden and four of the five living former presidents issued statements calling Powell an American hero. Former president Donald Trump released a statement saying "He made plenty of mistakes, but anyway, may he rest in peace!" and referred to him as a "classic RINO".<ref name="Garrison_10/18/2021"></ref> Present at the funeral service at the Washington National Cathedral were President Biden and former presidents Barack Obama and George W. Bush, along with First Lady Jill Biden and former first ladies Michelle Obama, Laura Bush, and Hillary Clinton (also representing her husband, former president Bill Clinton, who was unable to attend following treatment for sepsis) as well as many other dignitaries. Powell is buried at Arlington National Cemetery in Section 60, Grave 11917. Civilian awards and honors. Powell's civilian awards include two Presidential Medals of Freedom (the second with distinction), the Congressional Gold Medal, and the Ronald Reagan Freedom Award.
6985
7903804
https://en.wikipedia.org/wiki?curid=6985
Chlorophyll
Chlorophyll is any of several related green pigments found in cyanobacteria and in the chloroplasts of algae and plants. Its name is derived from the Greek words "khloros" ("pale green") and "phyllon" ("leaf"). Chlorophyll allows plants to absorb energy from light. These pigments are involved in oxygenic photosynthesis, as opposed to bacteriochlorophylls, related molecules found only in bacteria and involved in anoxygenic photosynthesis. Chlorophylls absorb light most strongly in the blue portion of the electromagnetic spectrum as well as the red portion. Conversely, they are poor absorbers of the green and near-green portions of the spectrum. Hence chlorophyll-containing tissues appear green because green light, diffusively reflected by structures like cell walls, is less absorbed. Two types of chlorophyll exist in the photosystems of green plants: chlorophyll "a" and "b". History. Chlorophyll was first isolated and named by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817. The presence of magnesium in chlorophyll was discovered in 1906, and was the first detection of that element in living tissue. After initial work done by German chemist Richard Willstätter spanning from 1905 to 1915, the general structure of chlorophyll "a" was elucidated by Hans Fischer in 1940. By 1960, when most of the stereochemistry of chlorophyll "a" was known, Robert Burns Woodward published a total synthesis of the molecule. In 1967, the last remaining stereochemical elucidation was completed by Ian Fleming, and in 1990 Woodward and co-authors published an updated synthesis. In 2010, chlorophyll "f" was announced to be present in cyanobacteria and other oxygenic microorganisms that form stromatolites; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll "a" were deduced based on NMR, optical and mass spectra. Photosynthesis. Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light. Chlorophyll molecules are arranged in and around photosystems that are embedded in the thylakoid membranes of chloroplasts. In these complexes, chlorophyll serves three functions: absorbing light, transferring that light energy within the photosystem, and carrying out the charge separation at the reaction centre. The two currently accepted photosystem units are photosystem I and photosystem II, which have their own distinct reaction centres, named P700 and P680, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them. The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high energy electron to a series of molecular intermediates called an electron transport chain. The charged reaction center of chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates. This reaction is how photosynthetic organisms such as plants produce O2 gas, and is the source for practically all the O2 in Earth's atmosphere.
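For reference, the water-oxidation step described above can be summarized with the standard textbook stoichiometry (a general statement about oxygenic photosynthesis, not a figure taken from any particular source cited in this article); four successive charge-separation events at P680 supply the four oxidizing equivalents required:

```latex
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
```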
Photosystem I typically works in series with Photosystem II; thus the P700+ of Photosystem I is usually reduced as it accepts the electron, via many intermediates in the thylakoid membrane, by electrons coming, ultimately, from Photosystem II. Electron transfer reactions in the thylakoid membranes are complex, however, and the source of electrons used to reduce P700+ can vary. The electron flow produced by the reaction center chlorophyll pigments is used to pump H+ ions across the thylakoid membrane, setting up a proton-motive force, a chemiosmotic potential used mainly in the production of ATP (stored chemical energy) or to reduce NADP+ to NADPH. NADPH is a universal agent used to reduce CO2 into sugars as well as other biosynthetic reactions. Reaction center chlorophyll–protein complexes are capable of directly absorbing light and performing charge separation events without the assistance of other chlorophyll pigments, but the probability of that happening under a given light intensity is small. Thus, the other chlorophylls in the photosystem and antenna pigment proteins all cooperatively absorb and funnel light energy to the reaction center. Besides chlorophyll "a", there are other pigments, called accessory pigments, which occur in these pigment–protein antenna complexes. Chemical structure. Several chlorophylls are known. All are defined as derivatives of the parent chlorin by the presence of a fifth, ketone-containing ring beyond the four pyrrole-like rings. Most chlorophylls are classified as chlorins, which are reduced relatives of porphyrins (found in hemoglobin). They share a common biosynthetic pathway with porphyrins, including the precursor uroporphyrinogen III. Unlike hemes, which contain iron bound to the N4 center, most chlorophylls bind magnesium. The axial ligands attached to the Mg2+ center are often omitted for clarity. Appended to the chlorin ring are various side chains, usually including a long phytyl chain. The most widely distributed form in terrestrial plants is chlorophyll "a". Chlorophyll "a" has a methyl group in place of the formyl group found in chlorophyll "b". This difference affects the absorption spectrum, allowing plants to absorb a greater portion of visible light. The known chlorophylls differ in the substituents attached to this ring system. Chlorophyll "e" is reserved for a pigment that was extracted from algae in 1966 but has not been chemically described. Besides the lettered chlorophylls, a wide variety of sidechain modifications to the chlorophyll structures are known in the wild. For example, "Prochlorococcus", a cyanobacterium, uses 8-vinyl Chl "a" and "b". Measurement of chlorophyll content. Chlorophylls can be extracted from the protein into organic solvents. In this way, the concentration of chlorophyll within a leaf can be estimated. Methods also exist to separate chlorophyll "a" and chlorophyll "b". In diethyl ether, chlorophyll "a" has approximate absorbance maxima of 430 nm and 662 nm, while chlorophyll "b" has approximate maxima of 453 nm and 642 nm. The absorption peaks of chlorophyll "a" are at 465 nm and 665 nm. Chlorophyll "a" fluoresces at 673 nm (maximum) and 726 nm. The peak molar absorption coefficient of chlorophyll "a" exceeds 100,000 M−1 cm−1, which is among the highest for small-molecule organic compounds.
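As a rough illustration of how such an absorbance reading can be turned into a concentration estimate, the Beer–Lambert law (A = εcl) can be applied at the red absorption maximum. The sketch below uses assumed example values: the extinction coefficient is simply taken as 100,000 M−1 cm−1 (the article states only that the peak coefficient exceeds this) and the absorbance reading is invented; real assays rely on published solvent-specific coefficients and corrections for overlapping chlorophyll "b" absorption.

```python
# Minimal Beer-Lambert estimate of chlorophyll a concentration in an extract.
# All numeric values here are assumed examples, not a validated assay protocol.

EPSILON = 1.0e5       # molar absorption coefficient at the red peak, M^-1 cm^-1 (assumed)
PATH_LENGTH_CM = 1.0  # standard 1 cm cuvette

def chlorophyll_molarity(absorbance, epsilon=EPSILON, path_cm=PATH_LENGTH_CM):
    """Return molar concentration c from A = epsilon * c * l."""
    return absorbance / (epsilon * path_cm)

if __name__ == "__main__":
    a_red_peak = 0.75                        # hypothetical absorbance near 664 nm
    c_molar = chlorophyll_molarity(a_red_peak)
    c_mg_per_litre = c_molar * 893.5 * 1000  # chlorophyll a molar mass is roughly 893.5 g/mol
    print(f"approx. {c_molar:.1e} M, i.e. about {c_mg_per_litre:.1f} mg per litre of extract")
```

Converting such an extract concentration to a per-leaf-area value (e.g. mg m−2, as used in the fluorescence studies discussed next) additionally requires knowing the extracted leaf area and the extract volume.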
In 90% acetone-water, the peak absorption wavelengths of chlorophyll "a" are 430 nm and 664 nm; peaks for chlorophyll "b" are 460 nm and 647 nm; peaks for chlorophyll "c1" are 442 nm and 630 nm; peaks for chlorophyll "c2" are 444 nm and 630 nm; peaks for chlorophyll "d" are 401 nm, 455 nm and 696 nm. Ratio fluorescence emission can be used to measure chlorophyll content. By exciting chlorophyll "a" fluorescence at a lower wavelength, the ratio of chlorophyll fluorescence emission at approximately 735 nm and 700 nm can provide a linear relationship of chlorophyll content when compared with chemical testing. The ratio "F"735/"F"700 provided a correlation value of "r"2 = 0.96 compared with chemical testing in the range from 41 mg m−2 up to 675 mg m−2. Gitelson also developed a formula for direct readout of chlorophyll content in mg m−2. The formula provided a reliable method of measuring chlorophyll content from 41 mg m−2 up to 675 mg m−2 with a correlation "r"2 value of 0.95. Also, the chlorophyll concentration can be estimated by measuring the light transmittance through the plant leaves. The assessment of leaf chlorophyll content using optical sensors such as Dualex and SPAD allows researchers to perform real-time and non-destructive measurements. Research shows that these methods have a positive correlation with laboratory measurements of chlorophyll. Biosynthesis. In some plants, chlorophyll is derived from glutamate and is synthesised along a branched biosynthetic pathway that is shared with heme and siroheme. Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll "a": chlorophyllide "a" + phytyl diphosphate → chlorophyll "a" + diphosphate. This conversion forms an ester of the carboxylic acid group in chlorophyllide "a" with the 20-carbon diterpene alcohol phytol. Chlorophyll "b" is made by the same enzyme acting on chlorophyllide "b". The same is known for chlorophyll "d" and "f", both made from corresponding chlorophyllides ultimately made from chlorophyllide "a". In angiosperm plants, the later steps in the biosynthetic pathway are light-dependent. Such plants are pale (etiolated) if grown in darkness. Non-vascular plants and green algae have an additional light-independent enzyme and grow green even in darkness. Chlorophyll is bound to proteins. Protochlorophyllide, one of the biosynthetic intermediates, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming free radicals, which can be toxic to the plant. Hence, plants regulate the amount of this chlorophyll precursor. In angiosperms, this regulation is achieved at the step of aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthesis pathway. Plants that are fed ALA accumulate high and toxic levels of protochlorophyllide; so do the mutants with a damaged regulatory system. Senescence and the chlorophyll cycle. The process of plant senescence involves the degradation of chlorophyll: for example, the enzyme chlorophyllase hydrolyses the phytyl sidechain to reverse the reaction in which chlorophylls are biosynthesised from chlorophyllide "a" or "b". Since chlorophyllide "a" can be converted to chlorophyllide "b" and the latter can be re-esterified to chlorophyll "b", these processes allow cycling between chlorophylls "a" and "b". Moreover, chlorophyll "b" can be directly reduced back to chlorophyll "a", completing the cycle.
In the later stages of senescence, chlorophyllides are converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs). These compounds have also been identified in ripening fruits, and they give characteristic autumn colours to deciduous plants. Distribution. Chlorophyll maps provided by NASA show milligrams of chlorophyll per cubic meter of seawater for each month from 2002 to 2024. Areas with very low chlorophyll indicate very low numbers of phytoplankton, while areas with high chlorophyll concentrations indicate that many phytoplankton were growing. The observations come from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite; data cannot be collected over land or where sea ice, polar darkness, or clouds obscure the ocean. The highest chlorophyll concentrations, indicating abundant surface-dwelling phytoplankton, occur in cold polar waters or in places where ocean currents bring cold water to the surface, such as around the equator and along the shores of continents. It is not the cold water itself that stimulates the phytoplankton. Instead, the cool temperatures are often a sign that the water has welled up to the surface from deeper in the ocean, carrying nutrients that have built up over time. In polar waters, nutrients accumulate in surface waters during the dark winter months when plants cannot grow. When sunlight returns in the spring and summer, the plants flourish in high concentrations. Uses. Culinary. Synthetic chlorophyll is registered as a food additive colorant, and its E number is E140. Chefs use chlorophyll to color a variety of foods and beverages green, such as pasta and spirits. Absinthe gains its green color naturally from the chlorophyll introduced through the large variety of herbs used in its production. Chlorophyll is not soluble in water, so it is first mixed with a small quantity of vegetable oil to obtain the desired solution. In marketing. In the years 1950–1953 in particular, chlorophyll was used as a marketing tool to promote toothpaste, sanitary towels, soap and other products. This was based on claims, stemming from research by F. Howard Westcott in the 1940s, that chlorophyll acts as an odor blocker; the commercial value of this attribute in advertising led many companies to create brands containing the compound. However, it was soon determined that the hype surrounding chlorophyll was not warranted, and the underlying research may even have been a hoax. As a result, brands rapidly discontinued its use. In the 2020s, chlorophyll again became the subject of unsubstantiated medical claims, as social media influencers promoted its use in the form of "chlorophyll water", for example.
6986
43007828
https://en.wikipedia.org/wiki?curid=6986
Carotene
The term carotene (also carotin, from the Latin "carota", "carrot") is used for many related unsaturated hydrocarbon substances having the formula C40Hx, which are synthesized by plants but in general cannot be made by animals (with the exception of some aphids and spider mites, which acquired the synthesizing genes from fungi). Carotenes are pigments important for photosynthesis. They contain no oxygen atoms. They absorb ultraviolet, violet, and blue light and scatter orange or red light and, in low concentrations, yellow light. Carotenes are responsible for the orange colour of the carrot, after which this class of chemicals is named, and for the colours of many other fruits, vegetables and fungi (for example, sweet potatoes, chanterelles and orange cantaloupe melon). Carotenes are also responsible for the orange (but not all of the yellow) colours in dry foliage. They also (in lower concentrations) impart the yellow coloration to milk-fat and butter. Omnivorous animal species which are relatively poor converters of coloured dietary carotenoids to colourless retinoids, such as humans and chickens, have yellow-coloured body fat, as a result of the carotenoid retention from the vegetable portion of their diet. Carotenes contribute to photosynthesis by transmitting the light energy they absorb to chlorophyll. They also protect plant tissues by helping to absorb the energy from singlet oxygen, an excited form of the oxygen molecule O2 which is formed during photosynthesis. β-Carotene is composed of two retinyl groups, and is broken down in the mucosa of the human small intestine by β-carotene 15,15'-monooxygenase to retinal, a form of vitamin A. β-Carotene can be stored in the liver and body fat and converted to retinal as needed, thus making it a form of vitamin A for humans and some other mammals. The carotenes α-carotene and γ-carotene, due to their single retinyl group (β-ionone ring), also have some vitamin A activity (though less than β-carotene), as does the xanthophyll carotenoid β-cryptoxanthin. All other carotenoids, including lycopene, have no beta-ring and thus no vitamin A activity (although they may have antioxidant activity and thus biological activity in other ways). Animal species differ greatly in their ability to convert retinyl (beta-ionone) containing carotenoids to retinals. Carnivores in general are poor converters of dietary ionone-containing carotenoids. Pure carnivores such as ferrets lack β-carotene 15,15'-monooxygenase and cannot convert any carotenoids to retinals at all (resulting in carotenes not being a form of vitamin A for this species), while cats can convert a trace of β-carotene to retinol, although the amount is totally insufficient for meeting their daily retinol needs. Molecular structure. Carotenes are polyunsaturated hydrocarbons containing 40 carbon atoms per molecule, variable numbers of hydrogen atoms, and no other elements. Some carotenes are terminated by rings, on one or both ends of the molecule. All are coloured, due to the presence of conjugated double bonds. Carotenes are tetraterpenes, meaning that they are derived from eight 5-carbon isoprene units (or four 10-carbon terpene units). Carotenes are found in plants in two primary forms designated by characters from the Greek alphabet: alpha-carotene (α-carotene) and beta-carotene (β-carotene). Gamma-, delta-, epsilon-, and zeta-carotene (γ, δ, ε, and ζ-carotene) also exist.
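The hydrocarbon formulas quoted in this article allow a quick consistency check of the ring-and-double-bond description above. The short Python sketch below is an illustration, not part of the source text: it computes the degree of unsaturation of a pure hydrocarbon CnHm and applies it to β-carotene, C40H56 (the composition given in the History section), whose 13 degrees of unsaturation correspond to its 11 conjugated carbon-carbon double bonds plus the two terminal rings.

    # Degree-of-unsaturation check for a pure-hydrocarbon carotene, CnHm:
    #   DoU = (2n + 2 - m) / 2
    # For beta-carotene, C40H56, this gives 13, consistent with the usual
    # description: 11 conjugated C=C double bonds plus two terminal rings.
    # (Illustrative helper; the function name is ours, not from the source.)

    def degree_of_unsaturation(n_carbon: int, n_hydrogen: int) -> int:
        return (2 * n_carbon + 2 - n_hydrogen) // 2

    if __name__ == "__main__":
        dou = degree_of_unsaturation(40, 56)  # beta-carotene, C40H56
        print(f"C40H56 -> {dou} degrees of unsaturation")  # prints 13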
Since they are hydrocarbons, and therefore contain no oxygen, carotenes are fat-soluble and insoluble in water (in contrast with other carotenoids, the xanthophylls, which contain oxygen and thus are less chemically hydrophobic). History. The discovery of carotene from carrot juice is credited to Heinrich Wilhelm Ferdinand Wackenroder, a finding made during a search for antihelminthics, which he published in 1831. He obtained it in small ruby-red flakes soluble in ether, which when dissolved in fats gave "a beautiful yellow colour". William Christopher Zeise recognised its hydrocarbon nature in 1847, but his analyses gave him a composition of C5H8. It was Léon-Albert Arnaud in 1886 who confirmed its hydrocarbon nature and gave the formula C26H38, whose carbon-to-hydrogen ratio is close to that of the true composition, C40H56. Adolf Lieben, in studies of the colouring matter in corpora lutea also published in 1886, first came across carotenoids in animal tissue but did not recognise the nature of the pigment. Johann Ludwig Wilhelm Thudichum, in 1868–1869, after stereoscopic spectral examination, applied the term 'luteine' (lutein) to this class of yellow crystallizable substances found in animals and plants. Richard Martin Willstätter, who gained the Nobel Prize in Chemistry in 1915, mainly for his work on chlorophyll, assigned the composition C40H56, distinguishing it from the similar but oxygenated xanthophyll, C40H56O2. Working with Heinrich Escher in 1910, Willstätter isolated lycopene from tomatoes and showed it to be an isomer of carotene. Later work by Escher also differentiated the 'luteal' pigments in egg yolk from the carotenes in cow corpus luteum. Dietary sources. Carotenes occur in notable amounts in many orange and green fruits and vegetables, such as carrots, sweet potatoes and cantaloupe melon. Absorption from these foods is enhanced if they are eaten with fats, as carotenes are fat soluble, and if the food is cooked for a few minutes until the plant cell wall splits and the color is released into any liquid. 12 μg of dietary β-carotene supplies the equivalent of 1 μg of retinol, and 24 μg of α-carotene or β-cryptoxanthin provides the equivalent of 1 μg of retinol (a worked conversion is sketched below). Forms of carotene. The two primary isomers of carotene, α-carotene and β-carotene, differ in the position of a double bond (and thus a hydrogen) in the cyclic group at one end of the molecule. β-Carotene is the more common form and can be found in yellow, orange, and green leafy fruits and vegetables. As a rule of thumb, the greater the intensity of the orange colour of the fruit or vegetable, the more β-carotene it contains. Carotene protects plant cells against the destructive effects of ultraviolet light. β-Carotene is an antioxidant. β-Carotene and physiology. β-Carotene and cancer. An article on the American Cancer Society website says that the Cancer Research Campaign has called for warning labels on β-carotene supplements to caution smokers that such supplements may increase the risk of lung cancer. The New England Journal of Medicine published an article in 1994 about a trial which examined the relationship between daily supplementation of β-carotene and vitamin E (α-tocopherol) and the incidence of lung cancer. The study was done using supplements, and the researchers were aware of the epidemiological correlation between carotenoid-rich fruits and vegetables and lower lung cancer rates. The research concluded that no reduction in lung cancer was found in the participants using these supplements, and furthermore, these supplements may, in fact, have harmful effects.
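Returning to the dietary equivalence figures quoted under Dietary sources above, the following short Python sketch shows how such a conversion is typically applied in practice. It is illustrative only and is not from the source: the function name is ours, the example intakes are hypothetical, and the only numbers carried over from the text are the 12:1 and 24:1 microgram ratios.

    # Worked example of the retinol-equivalence figures quoted in the text:
    # 12 ug of dietary beta-carotene ~ 1 ug retinol; 24 ug of alpha-carotene
    # or beta-cryptoxanthin ~ 1 ug retinol. Not a dietary recommendation.

    UG_CAROTENOID_PER_UG_RETINOL = {
        "beta_carotene": 12.0,
        "alpha_carotene": 24.0,
        "beta_cryptoxanthin": 24.0,
    }

    def retinol_equivalents_ug(intake_ug: dict) -> float:
        """Convert microgram intakes of provitamin A carotenoids to ug retinol."""
        return sum(amount / UG_CAROTENOID_PER_UG_RETINOL[name]
                   for name, amount in intake_ug.items())

    if __name__ == "__main__":
        # Hypothetical intake: 600 ug beta-carotene plus 120 ug alpha-carotene
        example = {"beta_carotene": 600.0, "alpha_carotene": 120.0}
        print(f"{retinol_equivalents_ug(example):.0f} ug retinol equivalents")  # 55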
The Journal of the National Cancer Institute and The New England Journal of Medicine published articles in 1996 about a trial that aimed to determine whether vitamin A (in the form of retinyl palmitate) and β-carotene (at about 30 mg/day, which is 10 times the Reference Daily Intake) supplements had any beneficial effects in preventing cancer. The results indicated an "increased" risk of lung and prostate cancers for the participants who consumed the β-carotene supplement and who had lung irritation from smoking or asbestos exposure, causing the trial to be stopped early. A review of all randomized controlled trials in the scientific literature by the Cochrane Collaboration, published in "JAMA" in 2007, found that synthetic β-carotene "increased" mortality by 1–8% (relative risk 1.05, 95% confidence interval 1.01–1.08). However, this meta-analysis included two large studies of smokers, so it is not clear that the results apply to the general population. The review only studied the influence of synthetic antioxidants, and its results should not be translated to potential effects of fruits and vegetables. β-Carotene and photosensitivity. Oral β-carotene is prescribed to people suffering from erythropoietic protoporphyria. It provides them some relief from photosensitivity. Carotenemia. Carotenemia or hypercarotenemia is excess carotene, but unlike excess vitamin A, carotene is non-toxic. Although hypercarotenemia is not particularly dangerous, it can lead to an orange discolouration of the skin (carotenodermia), but not of the conjunctiva of the eyes (thus easily distinguishing it visually from jaundice). It is most commonly associated with consumption of an abundance of carrots, but it also can be a medical sign of more dangerous conditions. Production. Carotenes are produced in the same general manner as other terpenoids and terpenes, i.e. by coupling, cyclization, and oxygenation reactions of isoprene derivatives. Lycopene is the key precursor to the carotenoids; it is formed by coupling of geranylgeranyl pyrophosphate and geranyllinalyl pyrophosphate. Most of the world's synthetic supply of carotene comes from a manufacturing complex located in Freeport, Texas, and owned by DSM. The other major supplier, BASF, also uses a chemical process to produce β-carotene. Together these suppliers account for about 85% of the β-carotene on the market. In Spain, Vitatene produces natural β-carotene from the fungus "Blakeslea trispora", as does DSM, though in much smaller amounts than its synthetic β-carotene operation. In Australia, organic β-carotene is produced by Aquacarotene Limited from the dried marine alga "Dunaliella salina" grown in harvesting ponds situated in Karratha, Western Australia. BASF Australia is also producing β-carotene from microalgae grown at two sites in Australia that are the world's largest algae farms. In Portugal, the industrial biotechnology company Biotrend is producing natural all-"trans"-β-carotene from a non-genetically modified bacterium of the genus "Sphingomonas" isolated from soil. Carotenes are also found in palm oil, corn, and the milk of dairy cows, causing cow's milk to be light yellow depending on the feed of the cattle and the amount of fat in the milk (high-fat milks, such as those produced by Guernsey cows, tend to be yellower because their fat content causes them to contain more carotene). Carotenes are also found in some species of termites, where they apparently have been picked up from the diet of the insects. Synthesis.
There are currently two commonly used methods of total synthesis of β-carotene. The first was developed by BASF and is based on the Wittig reaction, with Wittig himself as patent holder. The second is a Grignard reaction, elaborated by Hoffmann-La Roche from the original synthesis of Inhoffen et al. They are both symmetrical; the BASF synthesis is C20 + C20, and the Hoffmann-La Roche synthesis is C19 + C2 + C19. Nomenclature. Carotenes are carotenoids containing no oxygen. Carotenoids containing some oxygen are known as xanthophylls. The two ends of the β-carotene molecule are structurally identical and are called β-rings. Specifically, the group of nine carbon atoms at each end forms a β-ring. The α-carotene molecule has a β-ring at one end; the other end is called an ε-ring. There is no such thing as an "α-ring". These and similar names for the ends of the carotenoid molecules form the basis of a systematic naming scheme in which the two end groups of a molecule are specified by Greek-letter prefixes. ζ-Carotene is the biosynthetic precursor of neurosporene, which is the precursor of lycopene, which, in turn, is the precursor of the carotenes α through ε. Food additive. Carotene is used to colour products such as juice, cakes, desserts, butter and margarine. It is approved for use as a food additive in the EU (listed as additive E160a), in Australia and New Zealand (listed as 160a), and in the US.
6988
7903804
https://en.wikipedia.org/wiki?curid=6988
Cyclic adenosine monophosphate
Cyclic adenosine monophosphate (cAMP, cyclic AMP, or 3',5'-cyclic adenosine monophosphate) is a second messenger, or cellular signal occurring within cells, that is important in many biological processes. cAMP is a derivative of adenosine triphosphate (ATP) and is used for intracellular signal transduction in many different organisms, conveying the cAMP-dependent pathway. History. Earl Sutherland of Vanderbilt University won a Nobel Prize in Physiology or Medicine in 1971 "for his discoveries concerning the mechanisms of the action of hormones", especially epinephrine, via second messengers (such as cyclic adenosine monophosphate, cyclic AMP). Synthesis. The synthesis of cAMP is stimulated by trophic hormones that bind to receptors on the cell surface. cAMP levels reach a maximum within minutes and decrease gradually over an hour in cultured cells (a minimal kinetic sketch of this rise and decay is given below). Cyclic AMP is synthesized from ATP by adenylyl cyclase, which is located on the inner side of the plasma membrane and anchored at various locations in the interior of the cell. Adenylyl cyclase is activated by a range of signaling molecules through the activation of adenylyl cyclase stimulatory G (Gs)-protein-coupled receptors, and is inhibited by agonists of adenylyl cyclase inhibitory G (Gi)-protein-coupled receptors. Liver adenylyl cyclase responds more strongly to glucagon, and muscle adenylyl cyclase responds more strongly to adrenaline. cAMP decomposition into AMP is catalyzed by the enzyme phosphodiesterase. Functions. cAMP is a second messenger, used for intracellular signal transduction, such as transferring into cells the effects of hormones like glucagon and adrenaline, which cannot pass through the plasma membrane. It is also involved in the activation of protein kinases. In addition, cAMP binds to and regulates the function of ion channels such as the HCN channels and a few other cyclic nucleotide-binding proteins such as Epac1 and RAPGEF2. Role in eukaryotic cells. cAMP and its associated kinases function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism. In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units. Cyclic AMP binds to specific locations on the regulatory units of the protein kinase and causes dissociation between the regulatory and catalytic subunits, thus enabling those catalytic units to phosphorylate substrate proteins. The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP; several classes of protein kinases, including protein kinase C, are not cAMP-dependent. Further effects mainly depend on cAMP-dependent protein kinase, and they vary based on the type of cell. Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone. However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one.
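As noted under Synthesis above, cAMP is produced by adenylyl cyclase during hormonal stimulation and degraded by phosphodiesterase, so its level rises to a peak within minutes and then declines over roughly an hour. The minimal Python sketch below illustrates that production-and-decay behaviour with a simple first-order model; it is not from the source, and the rate constants and stimulus duration are hypothetical placeholders chosen only so the simulated time course resembles the one described for cultured cells.

    # Illustrative kinetics of the cAMP rise and decay described in the text:
    # adenylyl cyclase produces cAMP at rate k_syn while a stimulus lasts, and
    # phosphodiesterase degrades it with first-order rate constant k_deg:
    #   d[cAMP]/dt = k_syn(t) - k_deg * [cAMP]
    # All numbers below are hypothetical, chosen only for a plausible shape.

    def simulate_camp(minutes=90.0, dt=0.1, k_syn=1.0, k_deg=0.08, stimulus_end=15.0):
        camp, trace = 0.0, []
        for i in range(int(minutes / dt)):
            t = i * dt
            synthesis = k_syn if t < stimulus_end else 0.0  # hormone stimulus ends
            camp += (synthesis - k_deg * camp) * dt          # simple Euler step
            trace.append((t, camp))
        return trace

    if __name__ == "__main__":
        trace = simulate_camp()
        peak_t, peak = max(trace, key=lambda point: point[1])
        t_60, level_60 = trace[600]  # index 600 corresponds to t = 60 min for dt = 0.1
        print(f"peak {peak:.2f} a.u. at ~{peak_t:.0f} min; level at {t_60:.0f} min: {level_60:.2f}")

With these placeholder constants the level peaks at about 15 minutes and has largely decayed by 60 minutes, which is the qualitative behaviour the article describes; real cells of course involve receptor dynamics and feedback not captured here.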
In 1998, a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered. These are termed exchange proteins activated by cAMP (Epac), and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins, such as Rap1. Additional role of secreted cAMP in social amoebae. In the species "Dictyostelium discoideum", cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of a regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at centers of territories. Role in bacteria. In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylyl cyclase, as a side-effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP), also called CAP (catabolite gene activator protein), forms a complex with cAMP and is thereby activated to bind to DNA. CRP-cAMP increases expression of a large number of genes, including some encoding enzymes that can supply energy independently of glucose. cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein. The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind and start transcription, increasing the rate of lac operon transcription. With a high glucose concentration, the cAMP concentration decreases, and CRP disengages from the lac operon. Pathology. Since cyclic AMP is a second messenger and plays a vital role in cell signalling, it has been implicated in various disorders, including but not restricted to those described below. Role in human carcinoma. Some research has suggested that a deregulation of cAMP pathways and an aberrant activation of cAMP-controlled genes is linked to the growth of some cancers. Role in prefrontal cortex disorders. Research suggests that cAMP affects the function of higher-order thinking in the prefrontal cortex through its regulation of ion channels called hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. HCN channels open when exposed to cAMP. Once an HCN channel is open, the electrical activity within the neuron is disrupted and the cell becomes less responsive; this interferes with the function of the prefrontal cortex in working memory tasks. Inhibition of cAMP has been observed to improve spatial working memory. cAMP is also involved in activation of the trigeminocervical system, leading to neurogenic inflammation and causing migraine. Role in infectious disease agents' pathogenesis. Disrupted functioning of cAMP signalling has been noted as one of the mechanisms of several bacterial exotoxins, which can be subgrouped into two distinct categories. Uses.
Forskolin is commonly used as a tool in biochemistry to raise levels of cAMP in the study and research of cell physiology.
6991
48523215
https://en.wikipedia.org/wiki?curid=6991
Cimabue
Giovanni Cimabue (died 1302), also known as Cenni di Pepo or Cenni di Pepi, was an Italian painter and designer of mosaics from Florence. Although heavily influenced by Byzantine models, Cimabue is generally regarded as one of the first great Italian painters to break from the Italo-Byzantine style. Compared with the norms of medieval art, his works have more lifelike figural proportions and a more sophisticated use of shading to suggest volume. According to the Italian painter and historian Giorgio Vasari, Cimabue was the teacher of Giotto, the first great artist of the Italian Proto-Renaissance. However, many scholars today tend to discount Vasari's claim by citing earlier sources that suggest otherwise. Life. Little is known about Cimabue's early life. One source that recounts his career is Vasari's "Lives of the Most Excellent Painters, Sculptors, and Architects", but its accuracy is uncertain. He was born in Florence and died in Pisa. Hayden Maginnis speculates that he could have trained in Florence under masters who were culturally connected to Byzantine art. The art historian Pietro Toesca attributed the "Crucifixion" in the church of San Domenico in Arezzo to Cimabue, dating it to around 1270, which makes it the earliest known attributed work that departs from the Byzantine style. Cimabue's Christ is bent, and the clothes have the golden striations that were introduced by Coppo di Marcovaldo. Around 1272, Cimabue is documented as being present in Rome, and a little later he made another "Crucifix" for the Florentine church of Santa Croce. Now restored, having been damaged by the 1966 Arno River flood, the work was larger and more advanced than the one in Arezzo, with traces of naturalism perhaps inspired by the works of Nicola Pisano. According to Vasari, Cimabue, while travelling from Florence to Vespignano, came upon the 10-year-old Giotto (c. 1277) drawing his sheep with a rough rock upon a smooth stone. He asked if Giotto would like to come and stay with him, which the child accepted with his father's permission. Vasari elaborates that during Giotto's apprenticeship, he allegedly painted a fly on the nose of a portrait Cimabue was working on; the teacher attempted to sweep the fly away several times before he understood his pupil's prank. Many scholars now discount Vasari's claim that he took Giotto as his pupil, citing earlier sources that suggest otherwise. Around 1280, Cimabue painted the "Maestà", originally displayed in the church of San Francesco at Pisa, but now at the Louvre. This work established a style that was followed subsequently by numerous artists, including Duccio di Buoninsegna in his "Rucellai Madonna" (in the past wrongly attributed to Cimabue) as well as Giotto. Other works from the period, which were said to have heavily influenced Giotto, include a "Flagellation" (Frick Collection), mosaics for the Baptistery of Florence (now largely restored), the "Maestà" at Santa Maria dei Servi in Bologna and the "Madonna" in the Pinacoteca of Castelfiorentino. A workshop painting, perhaps assignable to a slightly later period, is the "Maestà with Saints Francis and Dominic" now in the Uffizi. During the pontificate of Pope Nicholas IV, the first Franciscan pope, Cimabue worked in Assisi. At Assisi, in the transept of the Lower Basilica of San Francesco, he created a fresco named "Madonna with Child Enthroned, Four Angels and St Francis".
The left portion of this fresco is lost, but it may have shown St Anthony of Padua (the authorship of the painting has recently been disputed for technical and stylistic reasons). Cimabue was subsequently commissioned to decorate the apse and the transept of the Upper Basilica of Assisi, during the same period in which Roman artists were decorating the nave. The cycle he created there comprises scenes from the Gospels and from the lives of the Virgin Mary, St Peter and St Paul. The paintings are now in poor condition because of oxidation of the brighter colours used by the artist. The "Maestà of Santa Trinita", dated to c. 1290–1300, which was originally painted for the church of Santa Trinita in Florence, is now in the Uffizi Gallery. The softer expression of the characters suggests that it was influenced by Giotto, who was by then already active as a painter. Cimabue spent the last period of his life, 1301 to 1302, in Pisa. There, he was commissioned to finish a mosaic of Christ Enthroned, originally begun by Maestro Francesco, in the apse of the city's cathedral. Cimabue was to create the part of the mosaic depicting St John the Evangelist, which remains the sole surviving work documented as being by the artist. Cimabue died around 1302. Character. According to Vasari, quoting a contemporary of Cimabue, "Cimabue of Florence was a painter who lived during the author's own time, a nobler man than anyone knew but he was as a result so haughty and proud that if someone pointed out to him any mistake or defect in his work, or if he had noted any himself ... he would immediately destroy the work, no matter how precious it might be." The nickname Cimabue translates as "bull-head" but also possibly as "one who crushes the views of others", from the Italian verb "cimare", meaning "to top", "to shear", and "to blunt". The conclusion for the second meaning is drawn from similar commentaries on Dante, who was also known "for being contemptuous of criticism". Legacy. History has long regarded Cimabue as the last of an era that was overshadowed by the Italian Renaissance. As early as 1543, Vasari wrote of Cimabue, "Cimabue was, in one sense, the principal cause of the renewal of painting," with the qualification that, "Giotto truly eclipsed Cimabue's fame just as a great light eclipses a much smaller one." In Dante's "Divine Comedy". In Canto XI of his "Purgatorio", Dante laments the quick loss of public interest in Cimabue in the face of Giotto's revolution in art. Cimabue himself does not appear in "Purgatorio" but is mentioned by Oderisi, who is also repenting for his pride. The artist serves to represent the fleeting nature of fame in contrast with the enduring God: "O vanity of human powers, how briefly lasts the crowning green of glory, unless an age of darkness follows! / In painting Cimabue thought he held the field but now it's Giotto has the cry, so that the other's fame is dimmed." Market. On 27 October 2019, "The Mocking of Christ" was sold for €24m (£20m; $26.6m), a price the auctioneers described as a new world record for a medieval painting. The picture had been located in the kitchen of a home in northern France, and its owner had been unaware of its value. List of works. Around a dozen works are securely attributed to Cimabue, with several less secure attributions. None are signed or dated.
6997
28481209
https://en.wikipedia.org/wiki?curid=6997
Corporatocracy
Corporatocracy or corpocracy is an economic, political and judicial system controlled or influenced by business corporations or corporate interests. The concept has been used in explanations of bank bailouts, excessive pay for CEOs, and the exploitation of national treasuries, people, and natural resources. It has been used by critics of globalization, sometimes in conjunction with criticism of the World Bank or unfair lending practices, as well as criticism of free trade agreements. Corporate rule is also a common theme in dystopian science-fiction media. Forms of corporatocracy. Corporatocracy can manifest in different forms, varying according to the degree of involvement of corporations in the political and social sphere. Use of corporatocracy and similar ideas. Historian Howard Zinn argues that during the Gilded Age in the United States, the U.S. government was acting exactly as Karl Marx described capitalist states: "pretending neutrality to maintain order, but serving the interests of the rich". According to economist Joseph Stiglitz, there has been a severe increase in the market power of corporations, largely due to U.S. antitrust laws being weakened by neoliberal reforms, leading to growing income inequality and a generally underperforming economy. He states that to improve the economy, it is necessary to decrease the influence of money on U.S. politics. In his 1956 book "The Power Elite", sociologist C. Wright Mills stated that together with the military and political establishment, leaders of the biggest corporations form a "power elite" that is in control of the U.S. Economist Jeffrey Sachs described the United States as a corporatocracy in "The Price of Civilization" (2011). He suggested that it arose from four trends: weak national parties and strong political representation of individual districts, the large U.S. military establishment after World War II, large corporations using money to finance election campaigns, and globalization tilting the balance of power away from workers. In 2013, economist Edmund Phelps criticized the economic system of the U.S. and other Western countries in recent decades as being what he calls "the new corporatism", which he characterizes as a system in which the state is far too involved in the economy and is tasked with "protecting everyone against everyone else", but at the same time big companies have a great deal of influence on the government, with lobbyists' suggestions being "welcome, especially if they come with bribes". Corporate influence on politics in the United States. Corruption. During the Gilded Age in the United States, corruption was rampant, as business leaders spent significant amounts of money ensuring that government did not regulate their activities. Corporate influence on legislation. Corporations have a significant influence on the regulations and regulators that monitor them. For example, Senator Elizabeth Warren stated in December 2014 that an omnibus spending bill required to fund the government was modified late in the process to weaken banking regulations. The modification made it easier to allow taxpayer-funded bailouts of banking "swaps entities", which the Dodd-Frank banking regulations prohibited. She singled out Citigroup, one of the largest banks, which had a role in modifying the legislation. She also stated that both Wall Street bankers and members of the government who had formerly worked on Wall Street stopped bi-partisan legislation that would have broken up the largest banks.
She repeated President Theodore Roosevelt's warnings regarding powerful corporate entities that threatened the "very foundations of Democracy". In a 2015 interview, former President Jimmy Carter stated that the United States is now "an oligarchy with unlimited political bribery" due to the "Citizens United v. FEC" ruling, which effectively removed limits on donations to political candidates. Wall Street spent a record $2 billion trying to influence the 2016 United States elections. Joel Bakan, a University of British Columbia law professor, is the author of the award-winning book "The Corporation: The Pathological Pursuit of Profit and Power". Perceived symptoms of corporatocracy in the United States. Share of income. With regard to income inequality, the 2014 income analysis of University of California, Berkeley economist Emmanuel Saez confirms that relative growth of income and wealth is not occurring among small and mid-sized entrepreneurs and business owners (who generally populate the lower half of the top one percent in income), but instead only among the top 0.1 percent of the income distribution, who earn $2,000,000 or more every year. Corporate power can also increase income inequality. Nobel Prize-winning economist Joseph Stiglitz wrote in May 2011: "Much of today's inequality is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever. The government lent money to financial institutions at close to zero percent interest and provided generous bailouts on favorable terms when all else failed. Regulators turned a blind eye to a lack of transparency and to conflicts of interest." Stiglitz stated that the top 1% got nearly "one-quarter" of the income and own approximately 40% of the wealth. Measured relative to GDP, total compensation and its component wages and salaries have been declining since 1970. This indicates a shift in income from labor (persons who derive income from hourly wages and salaries) to capital (persons who derive income via ownership of businesses, land, and assets). Larry Summers estimated in 2007 that the lower 80% of families were receiving $664 billion less income than they would be with a 1979 income distribution, or approximately $7,000 per family. Not receiving this income may have led many families to increase their debt burden, a significant factor in the 2007–2009 subprime mortgage crisis, as highly leveraged homeowners suffered a much larger reduction in their net worth during the crisis. Further, since lower-income families tend to spend relatively more of their income than higher-income families, shifting more of the income to wealthier families may slow economic growth. Effective corporate tax rates. Some large U.S. corporations have used a strategy called tax inversion to change their headquarters to a non-U.S. country to reduce their tax liability. About 46 companies have reincorporated in low-tax countries since 1982, including 15 since 2012. Six more also planned to do so in 2015. Stock buybacks versus wage increases. One indication of increasing corporate power was the removal of restrictions on corporations' ability to buy back stock, contributing to increased income inequality. Writing in the "Harvard Business Review" in September 2014, William Lazonick blamed record corporate stock buybacks for reduced investment in the economy and a corresponding impact on prosperity and income inequality.
Between 2003 and 2012, the 449 companies in the S&P 500 used 54% of their earnings ($2.4 trillion) to buy back their own stock. An additional 37% was paid to stockholders as dividends. Together, these were 91% of profits. This left little for investment in productive capabilities or higher income for employees, shifting more income to capital rather than labor. He blamed executive compensation arrangements, which are heavily based on stock options, stock awards, and bonuses tied to meeting earnings per share (EPS) targets; EPS increases as the number of outstanding shares decreases. Legal restrictions on buybacks were greatly eased in the early 1980s. He advocates changing these incentives to limit buybacks. In the 12 months to March 31, 2014, S&P 500 companies increased their stock buyback payouts by 29% year on year, to $534.9 billion. U.S. companies were projected to increase buybacks to $701 billion in 2015, according to Goldman Sachs, an 18% increase over 2014. For scale, annual non-residential fixed investment (a proxy for business investment and a major GDP component) was estimated to be about $2.1 trillion for 2014. Industry concentration. Brid Brennan of the Transnational Institute stated that the concentration of corporations increases their influence over government: "It's not just their size, their enormous wealth and assets that make the TNCs [transnational corporations] dangerous to democracy. It's also their concentration, their capacity to influence, and often infiltrate, governments and their ability to act as a genuine international social class in order to defend their commercial interests against the common good. It is such decision-making power as well as the power to impose deregulation over the past 30 years, resulting in changes to national constitutions, and to national and international legislation which has created the environment for corporate crime and impunity." Brennan concludes that this concentration of power leads in turn to further concentration of income and wealth. An example of such industry concentration is in banking. The top five U.S. banks had approximately 30% of U.S. banking assets in 1998; this rose to 45% by 2008 and to 48% by 2010, before falling to 47% in 2011. "The Economist" also stated that an increasingly profitable corporate financial and banking sector caused Gini coefficients to rise in the U.S. since 1980: "Financial services' share of GDP in America doubled to 8% between 1980 and 2000; over the same period their profits rose from about 10% to 35% of total corporate profits, before collapsing in 2007–09. Bankers are being paid more, too. In America the compensation of workers in financial services was similar to average compensation until 1980. Now it is twice that average." Mass incarceration. Several scholars have linked mass incarceration of the poor in the United States with the rise of neoliberalism. Sociologist Loïc Wacquant and Marxist economic geographer David Harvey have argued that the criminalization of poverty and mass incarceration is a neoliberal policy for dealing with social instability among economically marginalized populations.
According to Wacquant, this situation follows the implementation of other neoliberal policies, which have allowed for the retrenchment of the social welfare state and the rise of punitive workfare, whilst increasing gentrification of urban areas, privatization of public functions, the shrinking of collective protections for the working class via economic deregulation, and the rise of underpaid, precarious wage labor. By contrast, the state is extremely lenient in dealing with those in the upper echelons of society, in particular when it comes to economic crimes of the upper class and corporations such as fraud, embezzlement, insider trading, credit and insurance fraud, money laundering and violation of commerce and labor codes. According to Wacquant, neoliberalism does not shrink government, but instead sets up a "centaur state" with little governmental oversight for those at the top and strict control of those at the bottom. Austerity. In his 2014 book, Mark Blyth claims that austerity not only fails to stimulate growth, but effectively passes the burden of debt down to the working classes. As such, many academics, such as Andrew Gamble, view austerity in Britain less as an economic necessity and more as a tool of statecraft, driven by ideology rather than economic requirements. A study published in "The BMJ" in November 2017 found the Conservative government's austerity programme had been linked to approximately 120,000 deaths since 2010; however, this was disputed, for example on the grounds that it was an observational study which did not show cause and effect. Further studies claim adverse effects of austerity on population health, including an increase in the mortality rate among pensioners that has been linked to unprecedented reductions in income support, an increase in suicides and in the prescription of antidepressants for patients with mental health issues, and an increase in violence, self-harm, and suicide in prisons. Clara E. Mattei, assistant professor of economics at the New School for Social Research, posits that austerity is less a means to "fix the economy" and more an ideological weapon of class oppression wielded by economic and political elites in order to suppress revolts and unrest by the working class public and close off any alternatives to the capitalist system. She traces the origins of modern austerity to post-World War I Britain and Italy, when it served as a "powerful counteroffensive" to rising working-class agitation and anti-capitalist sentiment. In this, she quotes British economist G. D. H. Cole writing on the British response to the economic downturn of 1921: "The big working-class offensive had been successfully stalled off; and British capitalism, though threatened with economic adversity, felt itself once more safely in the saddle and well able to cope, both industrially and politically, with any attempt that might still be made from the labour side to unseat it."
6999
22620462
https://en.wikipedia.org/wiki?curid=6999
Culture of Canada
The culture of Canada embodies the artistic, culinary, literary, humorous, musical, political and social elements that are representative of Canadians. Throughout Canada's history, its culture has been influenced firstly by its indigenous cultures, and later by European culture and traditions, mostly by the British and French. Over time, elements of the cultures of Canada's immigrant populations have become incorporated to form a Canadian cultural mosaic. Certain segments of Canada's population have, to varying extents, also been influenced by American culture due to shared language (in English-speaking Canada), significant media penetration, and geographic proximity. Canada is often characterized as being "very progressive, diverse, and multicultural". Canada's federal government has often been described as the instigator of multicultural ideology because of its public emphasis on the social importance of immigration. Canada's culture draws from its broad range of constituent nationalities, and policies that promote a just society are constitutionally protected. Canadian policies such as the legality of abortion, euthanasia, same-sex marriage, and cannabis; an emphasis on cultural diversity; significant immigration; the abolition of capital punishment; publicly funded health care; higher and more progressive taxation; efforts to eliminate poverty; and strict gun control are social indicators of the country's political and cultural values. Canadians view the country's institutions of health care, military peacekeeping, the national park system, and the "Canadian Charter of Rights and Freedoms" as integral to their national identity. The Canadian government has influenced culture with programs, laws and institutions. It has created crown corporations to promote Canadian culture through media, such as the Canadian Broadcasting Corporation (CBC) and the National Film Board of Canada (NFB), and promotes many events which it considers to promote Canadian traditions. It has also tried to protect Canadian culture by setting legal minimums on Canadian content in many media using bodies like the Canadian Radio-television and Telecommunications Commission (CRTC). Cultural components. History. Influences. For thousands of years, Canada has been inhabited by indigenous peoples from a variety of different cultures and of several major linguistic groupings. Although not without conflict and bloodshed, early European interactions with First Nations and Inuit populations in what is now Canada were relatively peaceful. First Nations and Métis peoples played a critical part in the development of European colonies in Canada, particularly for their role in assisting European coureurs des bois and voyageurs in the exploration of the continent during the North American fur trade. Over the course of three centuries, countless North American Indigenous words, inventions, concepts, and games have become an everyday part of Canadian language and use. Many places in Canada, both natural features and human habitations, use indigenous names. The name "Canada" itself derives from the St. Lawrence Huron-Iroquoian word "Kanata", meaning "village" or "settlement". The name of Canada's capital city, Ottawa, comes from the Algonquin language term "adawe", meaning "to trade". In the 17th century, French colonists settled New France in Acadia, in the present-day Maritimes, and in "Canada", along the St. Lawrence River in present-day Quebec and Ontario. These regions were under French control from 1534 to 1763.
However, the British conquered Acadia in 1710 and "Canada" in 1760. The British were able to deport most of the Acadians, but they were unable to deport the Canadiens of "Canada", because the Canadiens severely outnumbered the British forces. The British therefore had to make deals with the Canadiens and hope they would one day become assimilated. The American Revolution, from 1775 to 1783, provoked the migration of 40,000 to 50,000 United Empire Loyalists from the Thirteen Colonies to the newly conquered British lands, which brought American influences to Canada for the first time. Following the War of 1812, many Scottish and English people settled in Upper Canada and Lower Canada. Many Irish people fleeing the Great Famine also arrived between 1845 and 1852. The Canadian Forces and overall civilian participation in the First World War and Second World War helped to foster Canadian nationalism; however, in 1917 and 1944, conscription crises highlighted the considerable rift along ethnic lines between Anglophones and Francophones. As a result of the First and Second World Wars, the Government of Canada became more assertive and less deferential to British authority. Until the 1940s, Canada was often described as "binational", with the two components being the cultural, linguistic and political identities of English Canadians and of French Canadians. Legislative restrictions on immigration (such as the Continuous journey regulation and the "Chinese Immigration Act") that had favoured British, American and other European immigrants (such as Dutch, German, Italian, Polish, Swedish and Ukrainian) were amended during the 1960s, resulting in an influx of people of many different ethnicities. By the end of the 20th century, immigrants were increasingly Chinese, Indian, Vietnamese, Jamaican, Filipino, Lebanese, Pakistani and Haitian. By the 21st century Canada had thirty-four ethnic groups with at least one hundred thousand members each, of which eleven have over 1,000,000 people, and numerous others are represented in smaller numbers. About 16.2% of the population self-identify as a visible minority. Development of popular culture. Themes and symbols of pioneers, trappers, and traders played an important part in the early development of Canadian culture. Modern Canadian culture, as it is understood today, can be traced to its time period of westward expansion and nation building. Contributing factors include Canada's unique geography, climate, and cultural makeup. Being a cold country with long winter nights for most of the year, Canada developed certain unique leisure activities during this period, including ice hockey and the embrace of the summer indigenous game of lacrosse. By the 19th century, Canadians came to believe themselves possessed of a unique "northern character," due to the long, harsh winters that only those of hardy body and mind could survive. This hardiness was claimed as a Canadian trait, and sports that reflected this, such as snowshoeing and cross-country skiing, were asserted as characteristically Canadian. During this period, the churches tried to influence leisure activities by preaching against drinking and scheduling annual revivals and weekly club activities. In a society in which most middle-class families now owned a harmonium or piano, and standard education included at least the rudiments of music, the result was often an original song.
Such stirrings frequently occurred in response to noteworthy events, and few local or national excitements were allowed to pass without some musical comment. By the 1930s, radio played a major role in uniting Canadians behind their local or regional teams. Rural areas were especially influenced by sports coverage and the propagation of national myths. Outside the sports and music arena, Canadians expressed a national character of being hard working, peaceful, orderly and polite. Political culture. Cultural legislation. French Canada's early development was relatively cohesive during the 17th and 18th centuries, and this was preserved by the Quebec Act 1774, which allowed Roman Catholics to hold offices and practice their faith. The "Constitution Act, 1867" was thought to meet the growing calls for Canadian autonomy while avoiding the overly strong decentralization that contributed to the Civil War in the United States. The compromises reached during this time between the English- and French-speaking Fathers of Confederation set Canada on a path to bilingualism, which in turn contributed to an acceptance of diversity. The English and French languages have had limited constitutional protection since 1867 and full official status since 1969. Section 133 of the Constitution Act, 1867 (BNA Act) guarantees that both languages may be used in the Parliament of Canada. Canada adopted its first "Official Languages Act" in 1969, giving English and French equal status in the government of Canada. Doing so makes them "official" languages, having preferred status in law over all other languages used in Canada. Prior to the advent of the "Canadian Bill of Rights" in 1960 and its successor, the "Canadian Charter of Rights and Freedoms" in 1982, the laws of Canada did not provide much in the way of civil rights, and this issue was typically of limited concern to the courts. Since the 1960s, Canada has placed emphasis on equality and inclusiveness for all people. Multiculturalism in Canada was adopted as the official policy of the Canadian government and is enshrined in Section 27 of the Canadian Charter of Rights and Freedoms. In 1995, the Supreme Court of Canada ruled in "Egan v. Canada" that sexual orientation should be "read in" to Section Fifteen of the Canadian Charter of Rights and Freedoms, a part of the Constitution of Canada guaranteeing equal rights to all Canadians. Following a series of decisions by provincial courts and the Supreme Court of Canada, on July 20, 2005, the "Civil Marriage Act" (Bill C-38) became law, legalizing same-sex marriage in Canada. Furthermore, sexual orientation was included as a protected status in the human-rights laws of the federal government and of all provinces and territories. Contemporary politics. Canadian governments at the federal level have a tradition of liberalism, and govern with a moderate, centrist political ideology. Canada's egalitarian approach to governance, emphasizing social justice and multiculturalism, is based on selective immigration, social integration, and suppression of far-right politics, and it has wide public and political support. Peace, order, and good government are constitutional goals of the Canadian government. Canada has a multi-party system in which many of its legislative customs derive from the unwritten conventions of and precedents set by the Westminster parliament of the United Kingdom. The country has been dominated by two parties, the centre-left Liberal Party of Canada and the centre-right Conservative Party of Canada.
The historically predominant Liberals position themselves at the centre of the political scale, with the Conservatives sitting on the right and the New Democratic Party occupying the left. Smaller parties like the Quebec nationalist Bloc Québécois and the Green Party of Canada have also been able to exert their influence over the political process through representation at the federal level. Nationalism and protectionism. In general, Canadian nationalists are concerned about the protection of Canadian sovereignty and loyalty to the Canadian state, placing them in the civic nationalist category. It has likewise often been suggested that anti-Americanism plays a prominent role in Canadian nationalist ideologies. A unified, bi-cultural, tolerant and sovereign Canada remains an ideological inspiration to many Canadian nationalists. By contrast, Québécois nationalism centres on support for maintaining French Canadian culture, and many Québécois nationalists were supporters of the Quebec sovereignty movement during the late 20th century. Cultural protectionism in Canada has, since the mid-20th century, taken the form of conscious, interventionist attempts on the part of various Canadian governments to promote Canadian cultural production. Sharing a large border and (for the majority) a common language with the United States, and being exposed to massive diffusions of American media, makes it difficult for Canada to preserve its own culture rather than being assimilated into American culture. While Canada tries to maintain its cultural differences, it also must balance this with responsibility in trade arrangements such as the General Agreement on Tariffs and Trade (GATT) and the United States–Mexico–Canada Agreement (USMCA). Foreign relations. The notion of peacekeeping is deeply embedded in Canadian culture and is a distinguishing feature that Canadians feel sets their foreign policy apart from that of its closest ally, the United States. Canada's foreign policy of peacekeeping, peace enforcement, peacemaking, and peacebuilding has been intertwined with its tendency to pursue multilateral and international solutions since the end of World War II. Canada's central role in the development of peacekeeping in the mid-1950s gave it credibility and established it as a country fighting for the "common good" of all nations. Canada has since been engaged with the United Nations, NATO and the European Union (EU) in promoting its middle-power status into an active role in world affairs. Canada has long been reluctant to participate in military operations that are not sanctioned by the United Nations, such as the Vietnam War or the 2003 invasion of Iraq. Canada has participated in US-led, UN-sanctioned operations such as the first Gulf War, in Afghanistan and Libya. The country also participates with its NATO allies in UN-sanctioned missions, such as the Kosovo Conflict and in Haiti. Values. Canadian values are the perceived commonly shared ethical and human values of Canadians. Canadians generally value freedom and individuality, often making personal decisions based on family interests rather than collective Canadian identity. Tolerance and sensitivity hold significant importance in Canada's multicultural society, as do politeness and fairness. A majority of Canadians shared the values of human rights, respect for the law and gender equality.
Universal access to publicly funded health services "is often considered by Canadians as a fundamental value that ensures national health care insurance for everyone wherever they live in the country." The major political parties have claimed explicitly that they uphold Canadian values, but use generalities to specify them. Historian Ian MacKay argues that, thanks to the long-term political impact of "Rebels, Reds, and Radicals" and allied leftist political elements, "egalitarianism, social equality, and peace... are now often simply referred to...as 'Canadian values.'" The Canadian Charter of Rights and Freedoms was intended to be a source for Canadian values and national unity, a point the 15th prime minister, Pierre Trudeau, discussed in his "Memoirs". Numerous scholars, beginning in the 1940s with American sociologist Seymour Martin Lipset, have tried to identify, measure and compare these values with those of other countries, especially the United States. However, there are critics who say that such a task is practically impossible. Denis Stairs, a professor of political science at Dalhousie University, links the concept of Canadian values with nationalism: "[Canadians typically]...believe, in particular, that they subscribe to a distinctive set of values – 'Canadian' values – and that those values are special in the sense of being unusually virtuous." Identity. Canada's large geographic size, the presence of a significant number of indigenous peoples, the conquest of one European linguistic population by another and a relatively open immigration policy have led to an extremely diverse society. As a result, the issue of Canadian identity remains under scrutiny. Canada has constitutional protection for policies that promote multiculturalism rather than cultural assimilation or a single national myth. In Quebec, cultural identity is strong, and many commentators speak of a French Canadian culture as distinguished from English Canadian culture. However, as a whole, Canada is, in theory, a cultural mosaic: a collection of several regional and ethnic subcultures. As Professor Alan Cairns noted about the "Canadian Charter of Rights and Freedoms", "the initial federal government premise was on developing a pan-Canadian identity". Pierre Trudeau himself later wrote in his "Memoirs" (1993) that "Canada itself" could now be defined as a "society where all people are equal and where they share some fundamental values based upon freedom", and that all Canadians could identify with the values of liberty and equality. Political philosopher Charles Blattberg suggests that Canada is a "multinational country": all Canadians are members of Canada as a civic or political community, a community of citizens, and this is a community that contains many other kinds within it. These include not only communities of ethnic, regional, religious, and civic (the provincial and municipal governments) sorts, but also national communities, which often include or overlap with many of the other kinds. Journalist and author Richard Gwyn has suggested that "tolerance" has replaced "loyalty" as the touchstone of Canadian identity. Journalist and professor Andrew Cohen wrote about Canadian identity in 2007, and Canada's 15th prime minister, Pierre Trudeau, also commented on the question of uniformity. In 2015, Prime Minister Justin Trudeau defined the country as the world's first postnational state: "There is no core identity, no mainstream in Canada".
The question of Canadian identity was traditionally dominated by three fundamental themes: first, the often conflicted relations between English Canadians and French Canadians stemming from the French Canadian imperative for cultural and linguistic survival; secondly, the generally close ties between English Canadians and the British Empire, resulting in a gradual political process towards complete independence from the imperial power; and finally, the close proximity of English-speaking Canadians to the United States. Much of the debate over contemporary Canadian identity is argued in political terms, and casts Canada as a country defined by its government policies, which are thought to reflect deeper cultural values. In 2013, nearly nine in ten (87%) Canadians were proud to identify as Canadian, with over half (61%) saying they were very proud. The highest pride levels were for Canadian history (70%), the armed forces (64%), the health care system (64%), and the Constitution (63%). However, pride in Canada's political influence was lower, at 46%. Outside Quebec, pride ranged from 91% in British Columbia to 94% in Prince Edward Island, while 70% of Quebec residents felt proud. Seniors and women showed the most pride, especially among first- and second-generation immigrants, who valued both Canadian identity and achievements. Inter-provincial interactions. Western alienation is the notion that the western provinces have historically been alienated, and in extreme cases excluded, from mainstream Canadian political affairs in favour of Eastern Canada or, more specifically, the central provinces. Western alienation claims that these latter two are politically represented, and economically favoured, more significantly than the former, which has given rise to the sentiment of alienation among many western Canadians. Likewise, the Quebec sovereignty movement, which led to the Québécois nation and the province of Quebec being recognized as a "distinct society" within Canada, highlights the sharp divisions between the Anglophone and Francophone populations. Though more than half of Canadians live in just two provinces (Ontario and Quebec), each province is largely self-contained due to provincial economic self-sufficiency. Only 15 percent of Canadians live in a different province from where they were born, and only 10 percent go to another province for university. This has long been the case, and stands in sharp contrast to the United States, where internal mobility is much higher: for example, 30 percent of Americans live in a different state from where they were born, and 30 percent go away for university. Scott Gilmore in "Maclean's" argues that "Canada is a nation of strangers", in the sense that for most individuals, the rest of Canada outside their province is little-known. Another factor is the cost of internal travel. Intra-Canadian airfares are high—it is cheaper and more common to visit the United States than to visit another province. Gilmore argues that the mutual isolation makes it difficult to muster national responses to major national issues. Humour. Canadian humour is an integral part of the Canadian identity. There are several traditions in Canadian humour in both English and French. While these traditions are distinct and at times very different, there are common themes that relate to Canadians' shared history and geopolitical situation in the Western Hemisphere and the world. Various trends can be noted in Canadian comedy. 
One trend is the portrayal of a "typical" Canadian family in an ongoing radio or television series. Other trends include outright absurdity, and political and cultural satire. Irony, parody, satire, and self-deprecation are arguably the primary characteristics of Canadian humour. The beginnings of Canadian national radio comedy date to the late 1930s with the debut of "The Happy Gang", a long-running weekly variety show that was regularly sprinkled with corny jokes in between tunes. Canadian television comedy begins with Wayne and Shuster, a sketch comedy duo who performed as a comedy team during the Second World War and moved their act to radio in 1946 before moving on to television. "Second City Television", otherwise known as "SCTV", "Royal Canadian Air Farce", "This Hour Has 22 Minutes", "The Kids in the Hall", "Trailer Park Boys", "Corner Gas" and more recently "Schitt's Creek" are regarded as television shows that were very influential on the development of Canadian humour. Canadian comedians have had great success in the film industry and are amongst the most recognized in the world. Humber College in Toronto and the École nationale de l'humour in Montreal offer post-secondary programmes in comedy writing and performance. Montreal is also home to the bilingual (English and French) Just for Laughs festival and to the Just for Laughs Museum, a bilingual, international museum of comedy. Canada has a national television channel, The Comedy Network, devoted to comedy. Many Canadian cities feature comedy clubs and showcases, most notably The Second City branch in Toronto (originally housed at The Old Fire Hall) and the Yuk Yuk's national chain. The Canadian Comedy Awards were founded in 1999 by the Canadian Comedy Foundation for Excellence, a not-for-profit organization. Symbols. Predominant symbols of Canada include the maple leaf, beaver, and the Canadian horse. Many official symbols of the country such as the Flag of Canada have been changed or modified over the past few decades to Canadianize them and de-emphasise or remove references to the United Kingdom. Other prominent symbols include the sports of hockey and lacrosse, the Canada goose, the Royal Canadian Mounted Police, the Canadian Rockies, and more recently the totem pole and Inuksuk; material items such as Canadian beer, maple syrup, tuques, canoes, Nanaimo bars, butter tarts and the Quebec dish of poutine have also been defined as uniquely Canadian. Symbols of the Canadian monarchy continue to be featured in, for example, the Arms of Canada, the armed forces, and the prefix His Majesty's Canadian Ship. The designation "Royal" remains for institutions as varied as the Royal Canadian Armed Forces, Royal Canadian Mounted Police and the Royal Winnipeg Ballet. Arts. Visual arts. Indigenous artists were producing art in the territory that is now called Canada for thousands of years prior to the arrival of European settler colonists and the eventual establishment of Canada as a nation state. Like the peoples that produced them, indigenous art traditions spanned territories that extended across the current national boundaries between Canada and the United States. The majority of indigenous artworks preserved in museum collections date from the period after European contact and show evidence of the creative adoption and adaptation of European trade goods such as metal and glass beads. Canadian sculpture has been enriched by walrus ivory, muskox horn, caribou antler and soapstone carvings by Inuit artists. 
These carvings show objects and activities from the daily life, myths and legends of the Inuit. Since the 1950s, Inuit art has been the traditional gift given to foreign dignitaries by the Canadian government. The works of most early Canadian painters followed European trends. During the mid-19th century, Cornelius Krieghoff, a Dutch-born artist in Quebec, painted scenes of the life of the "habitants" (French-Canadian farmers). At about the same time, the Canadian artist Paul Kane painted pictures of indigenous life in western Canada. A group of landscape painters called the Group of Seven developed the first distinctly Canadian style of painting, inspired by the works of the legendary landscape painter Tom Thomson. All these artists painted large, brilliantly coloured scenes of the Canadian wilderness. Since the 1930s, Canadian painters have developed a wide range of highly individual styles. Emily Carr became famous for her paintings of totem poles in British Columbia. Other noted painters have included the landscape artist David Milne, the painters Jean-Paul Riopelle, Harold Town and Charles Carson and multi-media artist Michael Snow. The abstract art group Painters Eleven, particularly the artists William Ronald and Jack Bush, also had an important impact on modern art in Canada. Government support has played a vital role in their development, enabling visual exposure through publications and periodicals featuring Canadian art, as has the establishment of numerous art schools and colleges across the country. Literature. Canadian literature is often divided into French- and English-language literatures, which are rooted in the literary traditions of France and Britain, respectively. Canada's early literature, whether written in English or French, often reflects the Canadian perspective on nature, frontier life, and Canada's position in the world, for example, the poetry of Bliss Carman or the memoirs of Susanna Moodie and Catherine Parr Traill. These themes, and Canada's literary history, inform the writing of successive generations of Canadian authors, from Leonard Cohen to Margaret Atwood. By the mid-20th century, Canadian writers were exploring national themes for Canadian readers. Authors were trying to find a distinctly Canadian voice, rather than merely emulating British or American writers. Canadian identity is closely tied to its literature. The question of national identity recurs as a theme in much of Canada's literature, from Hugh MacLennan's "Two Solitudes" (1945) to Alistair MacLeod's "No Great Mischief" (1999). Canadian literature is often categorized by region or province; by the socio-cultural origins of the author (for example, Acadians, indigenous peoples, LGBT, and Irish Canadians); and by literary period, such as "Canadian postmoderns" or "Canadian Poets Between the Wars". Canadian authors have accumulated numerous international awards. In 1992, Michael Ondaatje became the first Canadian to win the Booker Prize for "The English Patient". Margaret Atwood won the Booker in 2000 for "The Blind Assassin" and Yann Martel won it in 2002 for "Life of Pi". Carol Shields's "The Stone Diaries" won the Governor General's Award in Canada in 1993, the 1995 Pulitzer Prize for Fiction, and the 1994 National Book Critics Circle Award. In 2013, Alice Munro was the first Canadian to be awarded the Nobel Prize in Literature for her work as a "master of the modern short story". 
Munro is also a recipient of the Man Booker International Prize for her lifetime body of work, and a three-time winner of Canada's Governor General's Award for fiction. Theatre. Canada has had a thriving stage theatre scene since the late 1800s. Theatre festivals draw many tourists in the summer months, especially the Stratford Shakespeare Festival in Stratford, Ontario, and the Shaw Festival in Niagara-on-the-Lake, Ontario. The Famous People Players are just one of many touring companies that have developed an international reputation. Canada also hosts one of the largest fringe festivals, the Edmonton International Fringe Festival. Canada's largest cities host a variety of modern and historical venues. The Toronto Theatre District is Canada's largest, as well as being the third largest English-speaking theatre district in the world. In addition to original Canadian works, shows from the West End and Broadway frequently tour in Toronto. Toronto's Theatre District includes the venerable Roy Thomson Hall; the Princess of Wales Theatre; the Tim Sims Playhouse; The Second City; the Canon Theatre; the Panasonic Theatre; the Royal Alexandra Theatre; historic Massey Hall; and the city's new opera house, the Sony Centre for the Performing Arts. Toronto's Theatre District also includes the Theatre Museum Canada. Montreal's theatre district ("Quartier des Spectacles") is the scene of performances that are mainly French-language, although the city also boasts a lively anglophone theatre scene, such as the Centaur Theatre. Large French theatres in the city include Théâtre Saint-Denis and Théâtre du Nouveau Monde. Vancouver is host to, among others, the Vancouver Fringe Festival, the Arts Club Theatre Company, Carousel Theatre, Bard on the Beach, Theatre Under the Stars and Studio 58. Calgary is home to Theatre Calgary, a mainstream regional theatre; Alberta Theatre Projects, a major centre for new play development in Canada; the Calgary Animated Objects Society; and One Yellow Rabbit, a touring company. There are three major theatre venues in Ottawa; the Ottawa Little Theatre, originally called the Ottawa Drama League at its inception in 1913, is the longest-running community theatre company in Ottawa. Since 1969, Ottawa has been the home of the National Arts Centre, a major performing-arts venue that houses four stages and is home to the National Arts Centre Orchestra, the Ottawa Symphony Orchestra and Opera Lyra Ottawa. Established in 1975, the Great Canadian Theatre Company specializes in the production of Canadian plays at a local level. Television. Canadian television, especially supported by the Canadian Broadcasting Corporation, is the home of a variety of locally produced shows. French-language television, like French Canadian film, is buffered from excessive American influence by the fact of language, and likewise supports a host of home-grown productions. The success of French-language domestic television in Canada often exceeds that of its English-language counterpart. In recent years, nationalism has been used to promote products on television. The "I Am Canadian" campaign by Molson beer, most notably the commercial featuring Joe Canadian, fused domestically brewed beer with nationalism. Canada's television industry is expanding rapidly as a site for Hollywood productions. Since the 1980s, Canada, and Vancouver in particular, has become known as Hollywood North. The American TV series "Queer as Folk" was filmed in Toronto. 
Canadian producers have been very successful in the field of science fiction since the mid-1990s, with such shows as "The X-Files", "Stargate SG-1", "", the new "Battlestar Galactica", "My Babysitter's a Vampire", "Smallville", and "The Outer Limits" all filmed in Vancouver. The CRTC's Canadian content regulations dictate that a certain percentage of a domestic broadcaster's transmission time must include content that is produced by Canadians, or covers Canadian subjects. These regulations also apply to US cable television channels such as MTV and the Discovery Channel, which have local versions of their channels available on Canadian cable networks. Similarly, BBC Canada, while showing primarily BBC shows from the United Kingdom, also carries Canadian output. Film. A number of Canadian pioneers in early Hollywood significantly contributed to the creation of the motion picture industry in the early days of the 20th century. Over the years, many Canadians have made enormous contributions to the American entertainment industry, although they are frequently not recognized as Canadians. Canada has developed a vigorous film industry that has produced a variety of well-known films and actors. In fact, this eclipsing may sometimes be credited for the bizarre and innovative directions of some works, such as auteurs Atom Egoyan ("The Sweet Hereafter", 1997) and David Cronenberg ("The Fly", "Naked Lunch", "A History of Violence") and the "avant-garde" work of Michael Snow and Jack Chambers. Also, the distinct French-Canadian society permits the work of directors such as Denys Arcand and Denis Villeneuve, while First Nations cinema includes the likes of "". At the 76th Academy Awards, Arcand's "The Barbarian Invasions" became Canada's first film to win the Academy Award for Best Foreign Language Film. The National Film Board of Canada is a public agency that produces and distributes films and other audiovisual works which reflect Canada to Canadians and the rest of the world. Canada has produced many popular documentaries such as "The Corporation", "Nanook of the North", "Final Offer", and "". The Toronto International Film Festival (TIFF) is considered by many to be one of the most prominent film festivals for Western cinema. It is the premier film festival in North America from which the Oscars race begins. Music. The music of Canada has reflected the multi-cultural influences that have shaped the country. Indigenous peoples, the French, and the British have all made historical contributions to the musical heritage of Canada. The country has produced its own composers, musicians and ensembles since the mid-1600s. From the 17th century onward, Canada has developed a music infrastructure that includes church halls; chamber halls; conservatories; academies; performing arts centres; record companies; radio stations, and television music-video channels. Canadian music has since been heavily influenced by American culture because of the two countries' proximity and the migration between them. Canadian rock has had a considerable impact on the development of modern popular music and many of its most popular subgenres. Patriotic music in Canada dates back over 200 years as a distinct category from British patriotism, preceding the first legal steps to independence by over 50 years. The earliest known song, "The Bold Canadian", was written in 1812. 
The national anthem of Canada, "O Canada", adopted in 1980, was originally commissioned by the Lieutenant Governor of Quebec, the Honourable Théodore Robitaille, for the 1880 Saint-Jean-Baptiste Day ceremony. Calixa Lavallée wrote the music, which was a setting of a patriotic poem composed by the poet and judge Sir Adolphe-Basile Routhier. The text was originally only in French, before English lyrics were written in 1906. Music broadcasting in the country is regulated by the Canadian Radio-television and Telecommunications Commission (CRTC). The Canadian Academy of Recording Arts and Sciences presents Canada's music industry awards, the Juno Awards, which were first awarded in a ceremony during the summer of 1970. Media. Canada's media is highly autonomous, uncensored, diverse, and very regionalized. The "Broadcasting Act" declares "the system should serve to safeguard, enrich, and strengthen the cultural, political, social, and economic fabric of Canada". Canada has a well-developed media sector, but its cultural output—particularly in English films, television shows, and magazines—is often overshadowed by imports from the United States and the United Kingdom. As a result, the preservation of a distinctly Canadian culture is supported by federal government programs, laws, and institutions such as the Canadian Broadcasting Corporation (CBC), the National Film Board of Canada (NFB), and the Canadian Radio-television and Telecommunications Commission (CRTC). Canadian mass media, both print and digital, and in both official languages, is largely dominated by a "handful of corporations". The largest of these corporations is the country's national public broadcaster, the Canadian Broadcasting Corporation, which also plays a significant role in producing domestic cultural content, operating its own radio and TV networks in both English and French. In addition to the CBC, some provincial governments offer their own public educational TV broadcast services as well, such as TVOntario and Télé-Québec. Non-news media content in Canada, including film and television, is influenced both by local creators as well as by imports from the United States, the United Kingdom, Australia, and France. In an effort to reduce the amount of foreign-made media, government interventions in television broadcasting can include both regulation of content and public financing. Canadian tax laws limit foreign competition in magazine advertising. Sports. Sports in Canada consist of a variety of games. Although there are many contests that Canadians value, the most common are ice hockey, box lacrosse, Canadian football, basketball, soccer, curling and ringette. All but curling and soccer are considered domestic sports as they were either invented by Canadians or trace their roots to Canada. Ice hockey, referred to as simply "hockey", is Canada's most prevalent winter sport, its most popular spectator sport, and its most successful sport in international competition. It is Canada's official national winter sport. Lacrosse, a sport with indigenous origins, is Canada's oldest and official summer sport. Canadian football is Canada's second most popular spectator sport, and the Canadian Football League's annual championship, the Grey Cup, is the country's largest annual sports event. 
While other sports have a larger spectator base, association football, known in Canada as "soccer" in both English and French, has the most registered players of any team sport in Canada, and is the most played sport across all demographics, including ethnic origins, ages and genders. Professional teams exist in many cities in Canada – with a trio of teams in North America's top pro league, Major League Soccer – and international soccer competitions such as the FIFA World Cup, UEFA Euro and the UEFA Champions League attract some of the biggest audiences in Canada. Other popular team sports include curling, street hockey, rugby league, rugby union, softball and Ultimate frisbee. Popular individual sports include auto racing, boxing, karate, kickboxing, hunting, sport shooting, fishing, cycling, golf, hiking, horse racing, ice skating, skiing, snowboarding, swimming, triathlon, disc golf, water sports, and several forms of wrestling. As a country with a generally cool climate, Canada has enjoyed greater success at the Winter Olympics than at the Summer Olympics, although significant regional variations in climate allow for a wide variety of both team and individual sports. Great achievements in Canadian sports are recognized by Canada's Sports Hall of Fame, while the Lou Marsh Trophy is awarded annually to Canada's top athlete by a panel of journalists. There are numerous other Sports Halls of Fame in Canada. Cuisine. Canadian cuisine varies widely depending on the region. The former Canadian prime minister Joe Clark has been paraphrased as noting: "Canada has a cuisine of cuisines. Not a stew pot, but a smorgasbord." While there are considerable overlaps between Canadian food and the rest of the cuisine in North America, many unique dishes (or versions of certain dishes) are found and available only in the country. Common contenders for the Canadian national food include poutine and butter tarts. Other popular Canadian-made foods include the indigenous fried bread bannock, French tourtière, Kraft Dinner, ketchup chips, date squares, Nanaimo bars, back bacon, the Caesar cocktail and many more. The Canadian province of Quebec is the birthplace and world's largest producer of maple syrup. The Montreal-style bagel and Montreal-style smoked meat are both food items originally developed by Jewish communities living in Quebec. The three earliest cuisines of Canada have First Nations, English, and French roots. The indigenous populations of Canada often have their own traditional cuisines. The cuisines of English Canada are closely related to British and American cuisine. Finally, the traditional cuisines of French Canada have evolved from 16th-century French cuisine because of the tough conditions of colonial life and the winter provisions of the coureurs des bois. Subsequent waves of immigration in the 18th and 19th centuries from Central, Southern, and Eastern Europe, and later from Asia, Africa and the Caribbean, further shaped the regional cuisines. Public opinion data. A 2022 web survey by the Association for Canadian Studies found that an absolute majority of respondents in all provinces except Alberta disagreed with the statement that "there is only one Canadian culture". Most respondents did not choose what music to listen to based on whether or not the artist was Canadian. 
While half of Quebeckers and more than one third of respondents in the rest of Canada agreed with the statement "I worry about preserving my culture", 60% of respondents also agreed that "If a Canadian artist is good enough, they will become discovered without the need for specific Canadian content rules". Forty-six percent of respondents had no favourite Canadian musical artist. Rock, pop, and country music were the most popular genres of music, with fan bases above twenty percent in all age categories, but with hip-hop also appealing to more than twenty percent in the youngest cohort (18–35 years old). Film genre preferences were largely the same across age categories, with comedies and action films the most popular, except that only one percent of older people (>55 years old) were fans of animated movies compared to eleven percent of young adults, while older adults showed a strong preference for dramas compared to younger people. Three out of four respondents could not name a single Canadian visual artist, living or dead. Outside views. In a 2002 interview with the "Globe and Mail", the Aga Khan, the 49th Imam of the Ismaili Muslims, described Canada as "the most successful pluralist society on the face of our globe", citing it as "a model for the world". A 2007 poll ranked Canada as the country with the most positive influence in the world. 28,000 people in 27 countries were asked to rate 12 countries as having either a positive or a negative worldwide influence. Canada's overall influence rating topped the list, with 54 per cent of respondents rating it mostly positive and only 14 per cent mostly negative. A global opinion poll for the BBC saw Canada ranked the second most positively viewed nation in the world (behind Germany) in 2013 and 2014. The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often compared as sibling countries, and the perceptions that arise from this oft-held contrast have gone on to shape the advertised worldwide identities of both nations: the United States is seen as the rebellious child of the British Crown, forged in the fires of violent revolution; Canada is the calmer offspring of the United Kingdom, known for a more relaxed national demeanour.
7000
49835332
https://en.wikipedia.org/wiki?curid=7000
List of companies of Canada
Canada is a country in the northern part of North America. Canada is the world's eighth-largest economy, with a nominal GDP of approximately US$2.2 trillion. It is a member of the Organisation for Economic Co-operation and Development (OECD) and the Group of Seven (G7), and is one of the world's top ten trading nations, with a highly globalized economy. Canada is a mixed economy, ranking above the US and most western European nations on The Heritage Foundation's index of economic freedom, and experiencing a relatively low level of income disparity. The country's average household disposable income per capita is over US$23,900, higher than the OECD average. Furthermore, the Toronto Stock Exchange is the seventh-largest stock exchange in the world by market capitalization, listing over 1,500 companies with a combined market capitalization of over US$2 trillion. For further information on the types of business entities in this country and their abbreviations, see "Business entities in Canada". Largest firms. This list shows firms in the Fortune Global 500, which ranks firms by total revenues reported before March 31, 2022. Only the top five firms (if available) are included as a sample. Notable firms. This list includes notable companies with primary headquarters located in the country. The industry and sector follow the Industry Classification Benchmark taxonomy. Organizations which have ceased operations are included and noted as defunct.
7003
4626
https://en.wikipedia.org/wiki?curid=7003
Cauchy distribution
The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution formula_1 is the distribution of the x-intercept of a ray issuing from formula_2 with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero. The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. It is one of the few stable distributions with a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution. Definitions. Here are the most important constructions. Rotational symmetry. If one stands in front of a line and kicks a ball at a uniformly distributed random angle towards the line, then the distribution of the point where the ball hits the line is a Cauchy distribution. For example, consider a point at formula_3 in the x-y plane, and select a line passing through the point, with its direction (angle with the formula_4-axis) chosen uniformly (between −180° and 0°) at random. The intersection of the line with the x-axis follows a Cauchy distribution with location formula_5 and scale formula_6. This definition gives a simple way to sample from the standard Cauchy distribution. Let formula_7 be a sample from a uniform distribution from formula_8; then we can generate a sample formula_4 from the standard Cauchy distribution using formula_10 When formula_11 and formula_12 are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio formula_13 has the standard Cauchy distribution. More generally, if formula_14 is a rotationally symmetric distribution on the plane, then the ratio formula_13 has the standard Cauchy distribution. Probability density function (PDF). The Cauchy distribution is the probability distribution with the following probability density function (PDF) formula_16 where formula_5 is the location parameter, specifying the location of the peak of the distribution, and formula_6 is the scale parameter which specifies the half-width at half-maximum (HWHM); alternatively formula_19 is the full width at half maximum (FWHM). formula_6 is also equal to half the interquartile range and is sometimes called the probable error. This function is also known as a Lorentzian function, and is an example of a nascent delta function, and therefore approaches a Dirac delta function in the limit as formula_21. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining this Dirac delta function. Properties of PDF. The maximum value or amplitude of the Cauchy PDF is formula_22, located at formula_23. 
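As a brief, hedged illustration of the sampling constructions just described (the uniform-angle construction and the ratio of two independent standard normals), the following Python sketch draws standard Cauchy samples and checks location and scale using the median and half the interquartile range; the function names are illustrative, not part of any standard library.

import numpy as np

rng = np.random.default_rng(0)

def cauchy_from_angle(n, x0=0.0, gamma=1.0):
    # Rotational-symmetry construction: a uniform angle pushed through
    # the tangent yields a Cauchy variate with location x0 and scale gamma.
    u = rng.uniform(0.0, 1.0, size=n)
    return x0 + gamma * np.tan(np.pi * (u - 0.5))

def cauchy_from_normal_ratio(n):
    # Ratio of two independent standard normals is standard Cauchy.
    return rng.standard_normal(n) / rng.standard_normal(n)

# The mean and variance are undefined, so check location and scale with
# quantiles: the median and half the interquartile range.
x = cauchy_from_angle(100_000)
q1, med, q3 = np.percentile(x, [25, 50, 75])
print(med, (q3 - q1) / 2)  # close to 0 and 1 for the standard case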
It is sometimes convenient to express the PDF in terms of the complex parameter formula_24 formula_25 The special case when formula_26 and formula_27 is called the standard Cauchy distribution with the probability density function formula_28 In physics, a three-parameter Lorentzian function is often used: formula_29 formula_30 Note that formula_31 is a monotone function in formula_6 and that the solution formula_6 must satisfy formula_34 Solving just for formula_5 requires solving a polynomial of degree formula_36, and solving just for formula_37 requires solving a polynomial of degree formula_38. Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating formula_5 using the sample median is only about 81% as asymptotically efficient as estimating formula_5 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of formula_5 as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for formula_5. The shape can be estimated using the median of absolute values, since for location 0 Cauchy variables formula_43 we have formula_44, the shape parameter. Related distributions. Lévy measure. The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter formula_61 is given, for formula_62 by: formula_63 where formula_64 and formula_65 can be expressed explicitly. In the case formula_66 of the Cauchy distribution, one has formula_67. This last representation is a consequence of the formula formula_68 Multivariate Cauchy distribution. A random vector formula_69 is said to have the multivariate Cauchy distribution if every linear combination of its components formula_70 has a Cauchy distribution. That is, for any constant vector formula_71, the random variable formula_72 should have a univariate Cauchy distribution. The characteristic function of a multivariate Cauchy distribution is given by: formula_73 where formula_74 and formula_75 are real functions with formula_74 a homogeneous function of degree one and formula_75 a positive homogeneous function of degree one. More formally: formula_78 for all formula_79. An example of a bivariate Cauchy distribution can be given by: formula_80 Note that in this example, even though the covariance between formula_4 and formula_82 is 0, formula_4 and formula_82 are not statistically independent. We can also write this formula for a complex variable. Then the probability density function of complex Cauchy is: formula_85 The properties of the multidimensional Cauchy distribution are then special cases of the multivariate Student distribution. Occurrence and applications. Relativistic Breit–Wigner distribution. In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution. History. A function with the form of the density function of the Cauchy distribution was studied geometrically by Fermat in 1659, and later was known as the witch of Agnesi, after Maria Gaetana Agnesi included it as an example in her 1748 calculus textbook. 
Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. Poisson noted that if the mean of observations following such a distribution were taken, the standard deviation did not converge to any finite number. As such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.
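Since the maximum likelihood estimates discussed above generally require a numerical solver, the following Python sketch shows one plausible way to fit the location and scale parameters, using the sample median and half the interquartile range as robust starting values (rather than the middle-24% truncated mean mentioned above) and a general-purpose optimizer instead of Newton's method; it is a sketch under those assumptions, not a reference implementation.

import numpy as np
from scipy.optimize import minimize

def fit_cauchy_mle(x):
    # Fit location x0 and scale gamma by minimizing the negative
    # log-likelihood of the Cauchy density 1 / (pi * gamma * (1 + z^2)).
    x = np.asarray(x, dtype=float)

    def neg_log_likelihood(params):
        x0, log_gamma = params          # optimize log(gamma) so gamma stays positive
        gamma = np.exp(log_gamma)
        z = (x - x0) / gamma
        return np.sum(np.log(np.pi * gamma * (1.0 + z ** 2)))

    q1, med, q3 = np.percentile(x, [25, 50, 75])
    start = np.array([med, np.log((q3 - q1) / 2.0)])   # robust initial guesses
    result = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    return result.x[0], float(np.exp(result.x[1]))     # (location, scale)

# Example: recover parameters from synthetic Cauchy data.
rng = np.random.default_rng(1)
sample = 2.0 + 0.5 * np.tan(np.pi * (rng.uniform(size=5000) - 0.5))
print(fit_cauchy_mle(sample))   # roughly (2.0, 0.5)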
7011
44645973
https://en.wikipedia.org/wiki?curid=7011
Control engineering
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world. The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems. Overview. Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem. Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a proportional–integral–derivative controller (PID controller) system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved. Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. 
A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors. History. Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788. In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes. Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today. Mathematical modelling. David Quinn Mayne, (1930–2024) was among the early developers of a rigorous mathematical method for analysing Model predictive control algorithms (MPC). 
It is currently used in tens of thousands of applications and is a core part of the advanced control technology offered by hundreds of process control producers. MPC's major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. His work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives. Education. At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses are taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles in control engineering. Other engineering disciplines also overlap with control engineering as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist: for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in control engineering, as well as the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy and the Department of Control and Automation Engineering at the Istanbul Technical University. Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domain, which requires a thorough background in elementary mathematics and the Laplace transform, called classical control theory. In linear control, the student does frequency and time domain analysis. Digital control and nonlinear control courses require the Z transformation and algebra respectively, and could be said to complete a basic control education. Careers. A control engineer's career starts with a bachelor's degree and can continue through the college process. Control engineer degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a "Control Engineering" survey, most of the people who answered were control engineers in various forms of their own career. There are not very many careers that are classified as "control engineer"; most of them are specific careers that bear a small resemblance to the overarching career of control engineering. Most of the jobs involve process engineering, production or maintenance; they are some variation of control engineering. A majority of the control engineers that took the survey in 2019 are system or product designers, or even control or instrument engineers. Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. 
Some places that hire Control Engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control Engineers can possibly earn $66k annually from Lockheed Martin Corp. They can also earn up to $96k annually from General Motors Corporation. Process Control Engineers, typically found in Refineries and Specialty Chemical plants, can earn upwards of $90k annually. In India, control System Engineering is provided at different levels with a diploma, graduation and postgraduation. These programs require the candidate to have chosen physics, chemistry and mathematics for their secondary schooling or relevant bachelor's degree for postgraduate studies. Recent advancement. Originally, control engineering was all about continuous systems. Development of computer control tools posed a requirement of discrete control system engineering because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent to Laplace transform in the discrete domain is the Z-transform. Today, many of the control systems are computer controlled and they consist of both digital and analog components. Therefore, at the design stage either: The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers. Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design or CAD which has been made possible by evolutionary computation. CAD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme. Resilient control systems extend the traditional focus of addressing only planned disturbances to frameworks and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.
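As a rough illustration of the feedback ideas discussed in this article (a discrete-time PID loop of the cruise-control kind mentioned earlier), the following Python sketch regulates the speed of a crude first-order vehicle model; the plant parameters, gains, and function name are illustrative assumptions rather than a real automotive implementation.

def simulate_cruise_control(setpoint=25.0, dt=0.1, steps=600,
                            kp=400.0, ki=40.0, kd=20.0):
    # Crude vehicle model: mass * dv/dt = force - drag * v
    mass, drag = 1200.0, 5.0        # assumed mass (kg) and drag coefficient (N·s/m)
    speed = 0.0                     # measured output fed back each step (m/s)
    integral, prev_error = 0.0, 0.0

    for _ in range(steps):
        error = setpoint - speed                              # feedback error
        derivative = (error - prev_error) / dt
        force = kp * error + ki * integral + kd * derivative  # PID control law
        saturated = max(min(force, 4000.0), -4000.0)          # actuator limits
        if saturated == force:
            integral += error * dt  # simple anti-windup: integrate only when unsaturated
        speed += (saturated - drag * speed) / mass * dt       # plant update (Euler step)
        prev_error = error

    return speed

print(simulate_cruise_control())   # settles near the 25 m/s setpoint

The derivative term damps the response, the integral term removes the steady-state error left by drag, and the saturation plus conditional integration keeps the integrator from winding up while the actuator is at its limit.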
7012
6209078
https://en.wikipedia.org/wiki?curid=7012
Chagas disease
Chagas disease, also known as American trypanosomiasis, is a tropical parasitic disease caused by "Trypanosoma cruzi". It is spread mostly by insects in the subfamily Triatominae, known as "kissing bugs". The symptoms change throughout the infection. In the early stage, symptoms are typically either not present or mild and may include fever, swollen lymph nodes, headaches, or swelling at the site of the bite. After four to eight weeks, untreated individuals enter the chronic phase of disease, which in most cases does not result in further symptoms. Up to 45% of people with chronic infections develop heart disease 10–30 years after the initial illness, which can lead to heart failure. Digestive complications, including an enlarged esophagus or an enlarged colon, may also occur in up to 21% of people, and up to 10% of people may experience nerve damage. "T. cruzi" is commonly spread to humans and other mammals by the kissing bug's bite wound and the bug's infected feces. The disease may also be spread through blood transfusion, organ transplantation, consuming food or drink contaminated with the parasites, and vertical transmission (from a mother to her baby). Diagnosis of early disease is by finding the parasite in the blood using a microscope or detecting its DNA by polymerase chain reaction. Chronic disease is diagnosed by finding antibodies for "T. cruzi" in the blood. Prevention focuses on eliminating kissing bugs and avoiding their bites. This may involve the use of insecticides or bed-nets. Other preventive efforts include screening blood used for transfusions. Early infections are treatable with the medications benznidazole or nifurtimox, which usually cure the disease if given shortly after the person is infected, but become less effective the longer a person has had Chagas disease. When used in chronic disease, medication may delay or prevent the development of end-stage symptoms. Benznidazole and nifurtimox often cause side effects, including skin disorders, digestive system irritation, and neurological symptoms, which can result in treatment being discontinued. New drugs for Chagas disease are under development, and while experimental vaccines have been studied in animal models, a human vaccine has not been developed. It is estimated that 6.5 million people, mostly in Mexico, Central America and South America, have Chagas disease as of 2019, resulting in approximately 9,490 annual deaths. Most people with the disease are poor, and most do not realize they are infected. Large-scale population migrations have carried Chagas disease to new regions, which include the United States and many European countries. The disease affects more than 150 types of animals. The disease was first described in 1909 by Brazilian physician Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease. Signs and symptoms. Chagas disease occurs in two stages: an acute stage, which develops one to two weeks after the insect bite, and a chronic stage, which develops over many years. The acute stage is often symptom-free. When present, the symptoms are typically minor and not specific to any particular disease. Signs and symptoms include fever, malaise, headache, and enlargement of the liver, spleen, and lymph nodes. Sometimes, people develop a swollen nodule at the site of infection, which is called "Romaña's sign" if it is on the eyelid, or a "chagoma" if it is elsewhere on the skin. 
In rare cases (less than 1–5%), infected individuals develop severe acute disease, which can involve inflammation of the heart muscle, fluid accumulation around the heart, and inflammation of the brain and surrounding tissues, and may be life-threatening. The acute phase typically lasts four to eight weeks and resolves without treatment. Unless treated with antiparasitic drugs, individuals remain infected with "T. cruzi" after recovering from the acute phase. Most chronic infections are asymptomatic, which is referred to as "indeterminate" chronic Chagas disease. However, over decades with the disease, approximately 30–40% of people develop organ dysfunction ("determinate" chronic Chagas disease), which most often affects the heart or digestive system. The most common long-term manifestation is heart disease, which occurs in 14–45% of people with chronic Chagas disease. People with Chagas heart disease often experience heart palpitations, and sometimes fainting, due to irregular heart function. By electrocardiogram, people with Chagas heart disease most frequently have arrhythmias. As the disease progresses, the heart's ventricles become enlarged (dilated cardiomyopathy), which reduces its ability to pump blood. In many cases, the first sign of Chagas heart disease is heart failure, thromboembolism, or chest pain associated with abnormalities in the microvasculature. Also common in chronic Chagas disease is damage to the digestive system, which affects 10–21% of people. Enlargement of the esophagus or colon are the most common digestive issues. Those with an enlarged esophagus often experience pain (odynophagia) or trouble swallowing (dysphagia), acid reflux, cough, and weight loss. Individuals with an enlarged colon often experience constipation, and may develop severe blockage of the intestine or its blood supply. Up to 10% of chronically infected individuals develop nerve damage that can result in numbness and altered reflexes or movement. While chronic disease typically develops over decades, some individuals with Chagas disease (less than 10%) progress to heart damage directly after acute disease. Signs and symptoms differ for people infected with "T. cruzi" through less common routes. People infected through ingestion of parasites tend to develop severe disease within three weeks of consumption, with symptoms including fever, vomiting, shortness of breath, cough, and pain in the chest, abdomen, and muscles. Those infected congenitally typically have few to no symptoms, but can have mild non-specific symptoms, or severe symptoms such as jaundice, respiratory distress, and heart problems. People infected through organ transplant or blood transfusion tend to have symptoms similar to those of vector-borne disease, but the symptoms may not manifest for anywhere from a week to five months. Chronically infected individuals who become immunosuppressed due to HIV infection can have particularly severe and distinct disease, most commonly characterized by inflammation in the brain and surrounding tissue or brain abscesses. Symptoms vary widely based on the size and location of brain abscesses, but typically include fever, headaches, seizures, loss of sensation, or other neurological issues that indicate particular sites of nervous system damage. Occasionally, these individuals also experience acute heart inflammation, skin lesions, and disease of the stomach, intestine, or peritoneum. Cause. 
Chagas disease is caused by infection with the protozoan parasite "T. cruzi", which is typically introduced into humans through the bite of triatomine bugs, also called "kissing bugs". When the insect defecates at the bite site, motile forms called trypomastigotes enter the bloodstream and invade various host cells. Inside a host cell, the parasite transforms into a replicative form called an amastigote, which undergoes several rounds of replication. The replicated amastigotes transform back into trypomastigotes, which burst the host cell and are released into the bloodstream. Trypomastigotes then disseminate throughout the body to various tissues, where they invade cells and replicate. Over many years, cycles of parasite replication and immune response can severely damage these tissues, particularly the heart and digestive tract. Transmission. "T. cruzi" can be transmitted by various triatomine bugs in the genera "Triatoma", "Panstrongylus", and "Rhodnius". The primary vectors for human infection are the species of triatomine bugs that inhabit human dwellings, namely "Triatoma infestans", "Rhodnius prolixus", "Triatoma dimidiata" and "Panstrongylus megistus". These insects are known by a number of local names, including "vinchuca" in Argentina, Bolivia, Chile and Paraguay, "barbeiro" (the barber) in Brazil, "pito" in Colombia, "chinche" in Central America, and "chipo" in Venezuela. The bugs tend to feed at night, preferring moist surfaces near the eyes or mouth. A triatomine bug can become infected with "T. cruzi" when it feeds on an infected host. "T. cruzi" replicates in the insect's intestinal tract and is shed in the bug's feces. When an infected triatomine feeds, it pierces the skin and takes in a blood meal, defecating at the same time to make room for the new meal. The bite is typically painless, but causes itching. Scratching at the bite introduces the "T. cruzi"-laden feces into the bite wound, initiating infection. In addition to classical vector spread, Chagas disease can be transmitted through the consumption of food or drink contaminated with triatomine insects or their feces. Since heating or drying kills the parasites, drinks and especially fruit juices are the most frequent source of infection. This oral route of transmission has been implicated in several outbreaks, where it led to unusually severe symptoms, likely due to infection with a higher parasite load than from the bite of a triatomine bug—a single crushed triatomine in a food or beverage harboring "T. cruzi" can contain about 600,000 metacyclic trypomastigotes, while triatomine fecal matter contains 3,000–4,000 parasites per μL. "T. cruzi" can be transmitted independently of the triatomine bug during blood transfusion, following organ transplantation, or across the placenta during pregnancy. Transfusion with the blood of an infected donor infects the recipient 10–25% of the time. To prevent this, blood donations are screened for "T. cruzi" in many countries with endemic Chagas disease, as well as the United States. Similarly, transplantation of solid organs from an infected donor can transmit "T. cruzi" to the recipient. This is especially true for heart transplant, which transmits "T. cruzi" 75–100% of the time, and less so for transplantation of the liver (0–29%) or a kidney (0–19%). An infected mother can pass "T. cruzi" to her child through the placenta; this occurs in up to 15% of births by infected mothers. As of 2019, 22.5% of new infections occurred through congenital transmission. Pathophysiology. 
In the acute phase of the disease, signs and symptoms are caused directly by the replication of "T. cruzi" and the immune system's response to it. During this phase, "T. cruzi" can be found in various tissues throughout the body and circulating in the blood. During the initial weeks of infection, parasite replication is brought under control by the production of antibodies and activation of the host's inflammatory response, particularly cells that target intracellular pathogens such as NK cells and macrophages, driven by inflammation-signaling molecules like TNF-α and IFN-γ. During chronic Chagas disease, long-term organ damage develops over the years due to continued replication of the parasite and damage from the immune system. Early in the course of the disease, "T. cruzi" is found frequently in the striated muscle fibers of the heart. As disease progresses, the heart becomes generally enlarged, with substantial regions of cardiac muscle fiber replaced by scar tissue and fat. Areas of active inflammation are scattered throughout the heart, with each housing inflammatory immune cells, typically macrophages and T cells. Late in the disease, parasites are rarely detected in the heart, and may be present at only very low levels. In the heart, colon, and esophagus, chronic disease leads to a massive loss of nerve endings. In the heart, this may contribute to arrhythmias and other cardiac dysfunction. In the colon and esophagus, loss of nervous system control is the major driver of organ dysfunction. Loss of nerves impairs the movement of food through the digestive tract, which can lead to blockage of the esophagus or colon and restriction of their blood supply. The parasite can insert kinetoplast DNA into host cells, an example of horizontal gene transfer. Vertical inheritance of the inserted kDNA has been demonstrated in rabbits and birds. In chickens, offspring carrying inserted kDNA show symptoms of disease despite carrying no live trypanosomes. In 2010, integrated kDNA was found to be vertically transmitted in five human families. Diagnosis. The presence of "T. cruzi" in the blood is diagnostic of Chagas disease. During the acute phase of infection, it can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites; or by preparation of thin and thick blood smears stained with Giemsa, for direct visualization of parasites. Blood smear examination detects parasites in 34–85% of cases. The sensitivity increases if techniques such as microhematocrit centrifugation are used to concentrate the blood. On microscopic examination of stained blood smears, trypomastigotes appear as S or U-shaped organisms with a flagellum connected to the body by an undulating membrane. A nucleus and a smaller structure called a kinetoplast are visible inside the parasite's body; the kinetoplast of "T. cruzi" is relatively large, which helps to distinguish it from other species of trypanosomes that infect humans. Alternatively, "T. cruzi" DNA can be detected by polymerase chain reaction (PCR). In acute and congenital Chagas disease, PCR is more sensitive than microscopy, and it is more reliable than antibody-based tests for the diagnosis of congenital disease because it is not affected by the transfer of antibodies against "T. cruzi" from a mother to her baby (passive immunity). PCR is also used to monitor "T. cruzi" levels in organ transplant recipients and immunosuppressed people, which allows infection or reactivation to be detected at an early stage. 
In chronic Chagas disease, the concentration of parasites in the blood is too low to be reliably detected by microscopy or PCR, so the diagnosis is usually made using serological tests, which detect immunoglobulin G antibodies against "T. cruzi" in the blood. Two positive serology results, using different test methods, are required to confirm the diagnosis. If the test results are inconclusive, additional testing methods such as Western blot can be used. Various rapid diagnostic tests for Chagas disease are available. These tests are easily transported and can be performed by people without special training. They are useful for screening large numbers of people and testing people who cannot access healthcare facilities, but their sensitivity is relatively low, and it is recommended that a second method is used to confirm a positive result. "T. cruzi" parasites can be grown from blood samples by blood culture, xenodiagnosis, or by inoculating animals with the person's blood. In the blood culture method, the person's red blood cells are separated from the plasma and added to a specialized growth medium to encourage multiplication of the parasite. It can take up to six months to obtain the result. Xenodiagnosis involves feeding the blood to triatomine insects, and then examining their feces for the parasite 30 to 60 days later. These methods are not routinely used, as they are slow and have low sensitivity. Prevention. Efforts to prevent Chagas disease have largely focused on vector control to limit exposure to triatomine bugs. Insecticide-spraying programs have been the mainstay of vector control, consisting of spraying homes and the surrounding areas with residual insecticides. This was originally done with organochlorine, organophosphate, and carbamate insecticides, which were supplanted in the 1980s with pyrethroids. These programs have drastically reduced transmission in Brazil and Chile, and eliminated major vectors from certain regions: "Triatoma infestans" from Brazil, Chile, Uruguay, and parts of Peru and Paraguay, as well as "Rhodnius prolixus" from Central America. Vector control in some regions has been hindered by the development of insecticide resistance among triatomine bugs. In response, vector control programs have implemented alternative insecticides (e.g. fenitrothion and bendiocarb in Argentina and Bolivia), treatment of domesticated animals (which are also fed on by triatomine bugs) with pesticides, pesticide-impregnated paints, and other experimental approaches. In areas with triatomine bugs, transmission of "T. cruzi" can be prevented by sleeping under bed nets and by housing improvements that prevent triatomine bugs from colonizing houses. Blood transfusion was formerly the second-most common mode of transmission for Chagas disease. "T. cruzi" can survive in refrigerated stored blood, and can survive freezing and thawing, allowing it to persist in whole blood, packed red blood cells, granulocytes, cryoprecipitate, and platelets. The development and implementation of blood bank screening tests have dramatically reduced the risk of infection during a blood transfusion. Nearly all blood donations in Latin American countries undergo Chagas screening. Widespread screening is also common in non-endemic nations with significant populations of immigrants from endemic areas, including the United Kingdom (implemented in 1999), Spain (2005), the United States (2007), France and Sweden (2009), Switzerland (2012), and Belgium (2013). 
Serological tests, typically ELISAs, are used to detect antibodies against "T. cruzi" proteins in donor blood. Other modes of transmission have been targeted by Chagas disease prevention programs. Treating "T. cruzi"-infected mothers during pregnancy reduces the risk of congenital transmission of the infection. To this end, many countries in Latin America have implemented routine screening of pregnant women and infants for "T. cruzi" infection, and the World Health Organization recommends screening all children born to infected mothers to prevent congenital infection from developing into chronic disease. Similarly to blood transfusions, many countries with endemic Chagas disease screen organs for transplantation with serological tests. There is no vaccine against Chagas disease. Several experimental vaccines have been tested in animals infected with "T. cruzi" and were able to reduce parasite numbers in the blood and heart, but no vaccine candidates had undergone clinical trials in humans as of 2016. Management. Chagas disease is managed using antiparasitic drugs to eliminate "T. cruzi" from the body, and symptomatic treatment to address the effects of the infection. As of 2018, benznidazole and nifurtimox were the antiparasitic drugs of choice for treating Chagas disease, though benznidazole is the only drug available in most of Latin America. For either drug, treatment typically consists of two to three oral doses per day for 60 to 90 days. Antiparasitic treatment is most effective early in the course of infection: it eliminates "T. cruzi" from 50 to 80% of people in the acute phase (WHO: "nearly 100%"), but only 20–60% of those in the chronic phase. Treatment of chronic disease is more effective in children than in adults, and the cure rate for congenital disease approaches 100% if treated in the first year of life. Antiparasitic treatment can also slow the progression of the disease and reduce the possibility of congenital transmission. Elimination of "T. cruzi" does not cure the cardiac and gastrointestinal damage caused by chronic Chagas disease, so these conditions must be treated separately. Antiparasitic treatment is not recommended for people who have already developed dilated cardiomyopathy. Benznidazole is usually considered the first-line treatment because it has milder adverse effects than nifurtimox, and its efficacy is better understood. Both benznidazole and nifurtimox have common side effects that can result in treatment being discontinued. The most common side effects of benznidazole are skin rash, digestive problems, decreased appetite, weakness, headache, and sleeping problems. These side effects can sometimes be treated with antihistamines or corticosteroids, and are generally reversed when treatment is stopped. However, benznidazole is discontinued in up to 29% of cases. Nifurtimox has more frequent side effects, affecting up to 97.5% of individuals taking the drug. The most common side effects are loss of appetite, weight loss, nausea and vomiting, and various neurological disorders including mood changes, insomnia, paresthesia and peripheral neuropathy. Treatment is discontinued in up to 75% of cases. Both drugs are contraindicated for use in pregnant women and people with liver or kidney failure. As of 2019, resistance to these drugs has been reported. Complications. In the chronic stage, treatment involves managing the clinical manifestations of the disease. The treatment of Chagas cardiomyopathy is similar to that of other forms of heart disease. 
Beta blockers and ACE inhibitors may be prescribed, but some people with Chagas disease may not be able to take the standard dose of these drugs because they have low blood pressure or a low heart rate. To manage irregular heartbeats, people may be prescribed anti-arrhythmic drugs such as amiodarone, or have a pacemaker implanted. Blood thinners may be used to prevent thromboembolism and stroke. Chronic heart disease caused by untreated "T. cruzi" infection is a common reason for heart transplantation surgery. Because transplant recipients take immunosuppressive drugs to prevent organ rejection, they are monitored using PCR to detect reactivation of the disease. People with Chagas disease who undergo heart transplantation have higher survival rates than the average heart transplant recipient. Mild gastrointestinal disease may be treated symptomatically, such as by using laxatives for constipation or taking a prokinetic drug like metoclopramide before meals to relieve esophageal symptoms. Surgery to sever the muscles of the lower esophageal sphincter (cardiomyotomy) may be performed in more severe cases of esophageal disease, and surgical removal of the affected part of the organ may be required for advanced megacolon and megaesophagus. Epidemiology. In 2019, an estimated 6.5 million people worldwide had Chagas disease, with approximately 173,000 new infections and 9,490 deaths each year. The disease resulted in a global annual economic burden estimated at US$7.2 billion in 2013, 86% of which is borne by endemic countries. Chagas disease results in the loss of over 800,000 disability-adjusted life years each year. The endemic area of Chagas disease stretches from the southern United States to northern Chile and Argentina, with Bolivia (6.1%), Argentina (3.6%), and Paraguay (2.1%) exhibiting the highest prevalence of the disease. Within continental Latin America, Chagas disease is endemic to 21 countries: Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, Uruguay, and Venezuela. In endemic areas, due largely to vector control efforts and screening of blood donations, annual infections and deaths have fallen by 67% and more than 73% respectively from their peaks in the 1980s to 2010. Transmission by insect vector and blood transfusion has been completely interrupted in Uruguay (1997), Chile (1999), and Brazil (2006), and in Argentina, vectorial transmission had been interrupted in 13 of the 19 endemic provinces as of 2001. During Venezuela's humanitarian crisis, vectorial transmission has begun occurring in areas where it had previously been interrupted, and Chagas disease seroprevalence rates have increased. Transmission rates have also risen in the Gran Chaco region due to insecticide resistance and in the Amazon basin due to oral transmission. While the rate of vector-transmitted Chagas disease has declined throughout most of Latin America, the rate of orally transmitted disease has risen, possibly due to increasing urbanization and deforestation bringing people into closer contact with triatomines and altering the distribution of triatomine species. Orally transmitted Chagas disease is of particular concern in Venezuela, where 16 outbreaks have been recorded between 2007 and 2018. Chagas exists in two different ecological zones. In the Southern Cone region, the main vector lives in and around human homes. 
In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where "T. cruzi" also circulates in wild and domestic animals. "T. cruzi" commonly infects more than 100 species of mammals across Latin America including opossums ("Didelphis" spp.), armadillos, marmosets, bats, various rodents and dogs, all of which can be infected by the vectors or orally by eating triatomine bugs and other infected animals. For entomophagous animals, this is a common mode of transmission. "Didelphis" spp. are unique in that they do not require the triatomine for transmission, completing the life cycle through their own urine and feces. Veterinary transmission also occurs vertically through the placenta, as well as through blood transfusion and organ transplants. Non-endemic countries. Though Chagas is traditionally considered a disease of rural Latin America, international migration has dispersed those with the disease to numerous non-endemic countries, primarily in North America and Europe. As of 2020, approximately 300,000 infected people are living in the United States, and in 2018 it was estimated that 30,000 to 40,000 people in the United States had Chagas cardiomyopathy. The vast majority of cases in the United States occur in immigrants from Latin America, but local transmission is possible. Eleven triatomine species are native to the United States, and some southern states have persistent cycles of disease transmission between insect vectors and animal reservoirs, which include woodrats, possums, raccoons, armadillos and skunks. However, locally acquired infection is very rare: only 28 cases were documented from 1955 to 2015. As of 2013, the cost of treatment in the United States was estimated to be US$900 million annually (global cost $7 billion), which included hospitalization and medical devices such as pacemakers. Chagas disease affected approximately 68,000 to 123,000 people in Europe as of 2019. Spain, which has a high rate of immigration from Latin America, has the highest prevalence of the disease. It is estimated that 50,000 to 70,000 people in Spain are living with Chagas disease, accounting for the majority of European cases. The prevalence varies widely within European countries due to differing immigration patterns. Italy has the second highest prevalence, followed by the Netherlands, the United Kingdom, and Germany. History. "T. cruzi" likely circulated in South American mammals long before the arrival of humans on the continent. "T. cruzi" has been detected in ancient human remains across South America, from a 9000-year-old Chinchorro mummy in the Atacama Desert, to remains of various ages in Minas Gerais, to an 1100-year-old mummy as far north as the Chihuahuan Desert near the Rio Grande. Many early written accounts describe symptoms consistent with Chagas disease, with early descriptions of the disease sometimes attributed to Miguel Diaz Pimenta (1707), (1735), and Theodoro J. H. Langgaard (1842). The formal description of Chagas disease was made by Carlos Chagas in 1909 after examining a two-year-old girl with fever, swollen lymph nodes, and an enlarged spleen and liver. Upon examination of her blood, Chagas saw trypanosomes identical to those he had recently identified from the hindgut of triatomine bugs and named "Trypanosoma cruzi" in honor of his mentor, Brazilian physician Oswaldo Cruz. 
He sent infected triatomine bugs to Cruz in Rio de Janeiro, who showed that the bite of an infected triatomine could transmit "T. cruzi" to marmoset monkeys as well. In just two years, 1908 and 1909, Chagas published descriptions of the disease, the organism that caused it, and the insect vector required for infection. Almost immediately thereafter, at the suggestion of Miguel Couto, the disease was widely referred to as "Chagas disease". Chagas' discovery brought him national and international renown, but in highlighting the inadequacies of the Brazilian government's response to the disease, Chagas attracted criticism to himself and to the disease that bore his name, stifling research on his discovery and likely frustrating his nomination for the Nobel Prize in 1921. In the 1930s, Salvador Mazza rekindled Chagas disease research, describing over a thousand cases in Argentina's Chaco Province. In Argentina, the disease is known as "mal de Chagas-Mazza" in his honor. Serological tests for Chagas disease were introduced in the 1940s, demonstrating that infection with "T. cruzi" was widespread across Latin America. This, combined with successes eliminating the malaria vector through insecticide use, spurred the creation of public health campaigns focused on treating houses with insecticides to eradicate triatomine bugs. The 1950s saw the discovery that treating blood with crystal violet could eradicate the parasite, leading to its widespread use in transfusion screening programs in Latin America. Large-scale control programs began to take form in the 1960s, first in São Paulo, then various locations in Argentina, then national-level programs across Latin America. These programs received a major boost in the 1980s with the introduction of pyrethroid insecticides, which did not leave stains or odors after application and were longer-lasting and more cost-effective. Regional bodies dedicated to controlling Chagas disease arose through support of the Pan American Health Organization, with the Initiative of the Southern Cone for the Elimination of Chagas Diseases launching in 1991, followed by the Initiative of the Andean countries (1997), Initiative of the Central American countries (1997), and the Initiative of the Amazon countries (2004). Research. Treatments. Fexinidazole, an antiparasitic drug approved for treating African trypanosomiasis, has shown activity against Chagas disease in animal models. As of 2019, it is undergoing phase II clinical trials for chronic Chagas disease in Spain. Other drug candidates include GNF6702, a proteasome inhibitor that is effective against Chagas disease in mice and is undergoing preliminary toxicity studies, and AN4169, which has had promising results in animal models. Several experimental vaccines have been tested in animals. In addition to subunit vaccines, some approaches have involved vaccination with attenuated parasites or organisms that express some of the same antigens as "T. cruzi" but do not cause human disease, such as "Trypanosoma rangeli" or "Phytomonas serpens". DNA vaccination has also been explored. As of 2019, vaccine research has mainly been limited to small animal models. Diagnostic tests. As of 2018, standard diagnostic tests for Chagas disease were limited in their ability to measure the effectiveness of antiparasitic treatment, as serological tests may remain positive for years after "T. cruzi" is eliminated from the body, and PCR may give false-negative results when the parasite concentration in the blood is low. 
Several potential biomarkers of treatment response are under investigation, such as immunoassays against specific antigens, flow cytometry testing to detect antibodies against different life stages of "T. cruzi", and markers of physiological changes caused by the parasite, such as alterations in coagulation and lipid metabolism. Another research area is the use of biomarkers to predict the progression of chronic disease. Serum levels of tumor necrosis factor alpha, brain and atrial natriuretic peptide, and angiotensin-converting enzyme 2 have been studied as indicators of the prognosis of Chagas cardiomyopathy. "T. cruzi" shed acute-phase antigen (SAPA), which can be detected in blood using ELISA or Western blot, has been used as an indicator of early acute and congenital infection. An assay for "T. cruzi" antigens in urine has been developed to diagnose congenital disease.
Christiaan Barnard
Christiaan Neethling Barnard (8 November 1922 – 2 September 2001) was a South African cardiac surgeon who performed the world's first human-to-human heart transplant operation. On 3 December 1967, Barnard transplanted the heart of accident victim Denise Darvall into the chest of 54-year-old Louis Washkansky, who regained full consciousness and was able to talk easily with his wife, before dying 18 days later of pneumonia, largely brought on by the anti-rejection drugs that suppressed his immune system. Barnard had told Mr. and Mrs. Washkansky that the operation had an 80% chance of success, an assessment which has been criticised as misleading. Barnard's second transplant patient, Philip Blaiberg, whose operation was performed at the beginning of 1968, returned home from the hospital and lived for a year and a half. Born in Beaufort West, Cape Province, Barnard studied medicine and practised for several years in his native South Africa. As a young doctor experimenting on dogs, Barnard developed a remedy for the infant defect of intestinal atresia. His technique saved the lives of ten babies in Cape Town and was adopted by surgeons in Britain and the United States. In 1955, he travelled to the United States and was initially assigned further gastrointestinal work by Owen Harding Wangensteen at the University of Minnesota. He was introduced to the heart-lung machine, and Barnard was allowed to transfer to the service run by open heart surgery pioneer Walt Lillehei. Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at the Groote Schuur Hospital, Cape Town. He retired as head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after rheumatoid arthritis in his hands ended his surgical career. He became interested in anti-aging research, and in 1986 his reputation suffered when he promoted Glycel, an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. During his remaining years, he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world. He died in 2001 at the age of 78 after an asthma attack. Early life. Barnard was born on 8 November 1922 and grew up in Beaufort West, Cape Province, Union of South Africa. His father, Adam Barnard, was a minister in the Dutch Reformed Church. One of his four brothers, Abraham, was a "blue baby" who died of a heart problem at the age of three (Barnard would later guess that it was tetralogy of Fallot). The family also experienced the loss of a daughter who was stillborn and who had been the fraternal twin of Barnard's older brother Johannes, who was twelve years older than Christiaan. Barnard matriculated from the Beaufort West High School in 1940, and went to study medicine at the University of Cape Town Medical School, where he obtained his MB ChB in 1945. His father served as a missionary to mixed-race people. His mother, the former Maria Elisabeth de Swart, instilled in the surviving brothers the belief that they could do anything they set their minds to. Career. Barnard did his internship and residency at the Groote Schuur Hospital in Cape Town, after which he worked as a general practitioner in Ceres, a rural town in the Cape Province. In 1951, he returned to Cape Town where he worked at the City Hospital as a Senior Resident Medical Officer, and in the Department of Medicine at Groote Schuur as a registrar. 
He completed his master's degree, receiving Master of Medicine in 1953 from the University of Cape Town. In the same year he obtained a doctorate in medicine (MD) from the same university for a dissertation titled "The treatment of tuberculous meningitis". Soon after qualifying as a doctor, Barnard performed experiments on dogs while investigating intestinal atresia, congenital, life-threatening obstructions in the intestines. He followed a medical hunch that this was caused by inadequate blood flow to the fetus. After nine months and forty-three attempts, Barnard was able to reproduce this condition in a fetus puppy by tying off some of the blood supply to a puppy's intestines and then placing the animal back in the womb, after which it was born some two weeks later, with the condition of intestinal atresia. He was also able to cure the condition by removing the piece of intestine with inadequate blood supply. The mistake of previous surgeons had been attempting to reconnect ends of intestine which themselves still had inadequate blood supply. To be successful, it was typically necessary to remove between 15 and 20 centimeters of intestine (6 to 8 inches). Jannie Louw used this innovation in a clinical setting, and Barnard's method saved the lives of ten babies in Cape Town. This technique was also adapted by surgeons in Britain and the US. In addition, Barnard analyzed 259 cases of tubercular meningitis. Owen Wangensteen at the University of Minnesota in the United States had been impressed by the work of Alan Thal, a young South African doctor working in Minnesota. Wangensteen asked the Groote Schuur Head of Medicine John Brock if he might recommend any similarly talented South Africans, and Brock recommended Barnard. In December 1955, Barnard travelled to Minneapolis, Minnesota to begin a two-year scholarship under Chief of Surgery Wangensteen, who assigned Barnard more work on the intestines, which Barnard accepted even though he wanted to move onto something new. Simply by luck, whenever Barnard needed a break from this work, he could wander across the hall and talk with Vince Gott who ran the lab for open-heart surgery pioneer Walt Lillehei. Gott had begun to develop a technique of running blood backwards through the veins of the heart so Lillehei could more easily operate on the aortic valve (McRae writes, "It was the type of inspired thinking that entranced Barnard"). In March 1956, Gott asked Barnard to help him run the heart-lung machine for an operation. Shortly thereafter, Wangensteen agreed to let Barnard switch to Lillehei's service. It was during this time that Barnard became acquainted with fellow future heart transplantation surgeon Norman Shumway. Barnard also became friendly with Gil Campbell, who had demonstrated that a dog's lung could be used to oxygenate blood during open-heart surgery. (The year before Barnard arrived, Lillehei and Campbell had used this procedure for twenty minutes during surgery on a 13-year-old boy with ventricular septal defect, and the boy had made a full recovery.) Barnard and Campbell met regularly for early breakfast. In 1958, Barnard received a Master of Science in Surgery for a thesis titled "The aortic valve – problems in the fabrication and testing of a prosthetic valve". The same year he was awarded a Ph.D. for his dissertation titled "The aetiology of congenital intestinal atresia". Barnard described the two years he spent in the United States as "the most fascinating time in my life." 
Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at Groote Schuur hospital, as well as holding a joint post at the University of Cape Town. He was promoted to full-time lecturer and Director of Surgical Research at the University of Cape Town. In 1960, he flew to Moscow in order to meet Vladimir Demikhov, a top expert on organ transplants (later he credited Demikhov's accomplishment saying that "if there is a father of heart and lung transplantation then Demikhov certainly deserves this title.") In 1961 he was appointed Head of the Division of Cardiothoracic Surgery at the teaching hospitals of the University of Cape Town. He rose to associate professor in the Department of Surgery at the University of Cape Town in 1962. Barnard's younger brother Marius, who also studied medicine, eventually became Barnard's right-hand man at the department of Cardiac Surgery. Over time, Barnard became known as a brilliant surgeon with many contributions to the treatment of cardiac diseases, such as the Tetralogy of Fallot and Ebstein's anomaly. He was promoted to Professor of Surgical Science in the Department of Surgery at the University of Cape Town in 1972. In 1981, Barnard became a founding member of the World Cultural Council. Among the recognition he received over the years, he was named Professor Emeritus in 1984. Historical context. Following the first-ever successful kidney transplant in 1953, in the United States, Barnard performed South Africa's second kidney transplant in October 1967, the first having been done in Johannesburg the previous year. On 23 January 1964, James Hardy at the University of Mississippi Medical Center in Jackson, Mississippi, performed the world's first heart transplant and world's first cardiac xenotransplant by transplanting the heart of a chimpanzee into a desperately ill and dying man. This heart did beat in the patient's chest for approximately 60 to 90 minutes. The patient, Boyd Rush, died without regaining consciousness. Barnard had experimentally transplanted forty-eight hearts into dogs, which was about a fifth the number that Adrian Kantrowitz had performed at Maimonides Medical Center in New York and about a sixth the number Norman Shumway had performed at Stanford University in California. Barnard had no dogs which had survived longer than ten days, unlike Kantrowitz and Shumway who had had dogs survive for more than a year. With the availability of new breakthroughs introduced by several pioneers, also including Richard Lower at the Medical College of Virginia, several surgical teams were in a position to prepare for a human heart transplant. Barnard had a patient willing to undergo the procedure, but as with other surgeons, he needed a suitable donor. During the Apartheid era in South Africa, non-white persons and citizens were not given equal opportunities in the medical professions. At Groote Schuur Hospital, Hamilton Naki was an informally taught surgeon. He started out as a gardener and cleaner. One day he was asked to help out with an experiment on a giraffe. From this modest beginning, Naki became principal lab technician and taught hundreds of surgeons, and assisted with Barnard's organ transplant program. Barnard said, "Hamilton Naki had better technical skills than I did. He was a better craftsman than me, especially when it came to stitching, and had very good hands in the theatre". 
A popular myth, propagated principally by a widely discredited documentary film called "Hidden Heart" and an erroneous newspaper article, maintains incorrectly that Naki was present during the Washkansky transplant. First human-to-human heart transplant. Barnard performed the world's first human-to-human heart transplant operation in the early morning hours of Sunday 3 December 1967. Louis Washkansky, a 54-year-old grocer who was suffering from diabetes and incurable heart disease, was the patient. Barnard was assisted by his brother Marius Barnard, as well as a team of thirty staff members. The operation lasted approximately five hours. Barnard stated to Washkansky and his wife Ann Washkansky that the transplant had an 80% chance of success. This has been criticised by the ethicists Peter Singer and Helga Kuhse as making claims for chances of success to the patient and family which were "unfounded" and "misleading". Barnard later wrote, "For a dying man it is not a difficult decision because he knows he is at the end. If a lion chases you to the bank of a river filled with crocodiles, you will leap into the water, convinced you have a chance to swim to the other side." The donor heart came from a young woman, Denise Darvall, who had been rendered brain dead in an accident on 2 December 1967, while crossing a street in Cape Town. On examination at Groote Schuur hospital, Darvall had two serious fractures in her skull, with no electrical activity in her brain detected, and no sign of pain when ice water was poured into her ear. Coert Venter and Bertie Bosman requested permission from Darvall's father for Denise's heart to be used in the transplant attempt. The afternoon before his first transplant, Barnard dozed at his home while listening to music. When he awoke, he decided to modify Shumway and Lower's technique. Instead of cutting straight across the back of the atrial chambers of the donor heart, he would avoid damage to the septum and instead cut two small holes for the venae cavae and pulmonary veins. Prior to the transplant, rather than wait for Darvall's heart to stop beating, at his brother Marius Barnard's urging, Christiaan had injected potassium into her heart to paralyse it and render her technically dead by the whole-body standard. Twenty years later, Marius Barnard recounted, "Chris stood there for a few moments, watching, then stood back and said, 'It works.'" Washkansky survived the operation and lived for 18 days; he died from pneumonia, possibly due to the immunosuppressive drugs he was taking. Additional heart transplants. Barnard and his patient received worldwide publicity. A 2017 BBC retrospective article described the occasion as one where "Journalists and film crews flooded into Cape Town's Groote Schuur Hospital, soon making Barnard and Washkansky household names." Barnard himself was described as "charismatic" and "photogenic" while initial reports labeled the operation as "successful" despite the death of Washkansky 18 days later. Worldwide, approximately 100 transplants were performed by various doctors during 1968. However, only a third of these patients lived longer than three months. Many medical centers stopped performing transplants. In fact, a US National Institutes of Health publication states, "Within several years, only Shumway's team at Stanford was attempting transplants." Barnard's second transplant operation was conducted on 2 January 1968, and the patient, Philip Blaiberg, survived for 19 months. 
Blaiberg's heart was donated by Clive Haupt, a 24-year-old black man who suffered a stroke, inciting controversy (especially in the African-American press) during the time of South African apartheid. Dirk van Zyl, who received a new heart in 1971, was the longest-lived recipient, surviving over 23 years. Between December 1967 and November 1974 at Groote Schuur Hospital in Cape Town, South Africa, ten heart transplants were performed, as well as a heart and lung transplant in 1971. Of these ten patients, four lived longer than 18 months, with two of these four becoming long-term survivors. One patient, Dorothy Fischer, lived for over thirteen years and another for over twenty-four years. Full recovery of donor heart function often takes place over hours or days, during which time considerable damage can occur. Other deaths to patients can occur from preexisting conditions. For example, in pulmonary hypertension the patient's right ventricle has often adapted to the higher pressure over time and, although diseased and hypertrophied, is often capable of maintaining circulation to the lungs. Barnard designed the idea of the heterotopic (or "piggy back" transplant) in which the patient's diseased heart is left in place while the donor heart is added, essentially forming a "double heart". Barnard performed the first such heterotopic heart transplant in 1974. From November 1974 through December 1983, 49 consecutive heterotopic heart transplants on 43 patients were performed at Groote Schuur. The survival rate for patients at one year was over 60%, as compared to less than 40% with standard transplants, and the survival rate at five years was over 36% as compared to less than 20% with standard transplants. Many surgeons gave up cardiac transplantation due to poor results, often due to rejection of the transplanted heart by the patient's immune system. Barnard persisted until the advent of cyclosporine, an effective immunosuppressive drug, which helped revive the operation throughout the world. He also attempted xenotransplantation in two human patients, utilizing a baboon heart and chimpanzee heart, respectively. Public life. Barnard was an outspoken opponent of South Africa's laws of apartheid, and was not afraid to criticise his nation's government, although he had to temper his remarks to some extent to travel abroad. Rather than leaving his homeland, he used his fame to campaign for a change in the law. Christiaan's brother, Marius Barnard, went into politics, and was elected to the legislature from the Progressive Federal Party. Barnard later stated that the reason he never won the Nobel Prize in Physiology or Medicine was probably because he was a "white South African". Shortly before his visit to Kenya in 1978, the following was written about his views regarding race relations in South Africa; "While he believes in the participation of Africans in the political process of South Africa, he is opposed to a one-man-one-vote system in South Africa". In answering a hypothetical question on how he would solve the race problem were he a "benevolent dictator in South Africa", Barnard stated the following in a long interview at the Weekly Review: The interview ended with the following summary from he himself; "I often say that, like King Lear, South Africa is a country more sinned against than sinning." Personal life. Barnard's first marriage was to Aletta Gertruida Louw, a nurse, whom he married in 1948 while practising medicine in Ceres. 
The couple had two children: Deirdre (born 1950) and Andre (1951–1984). International fame took a toll on his family, and in 1969, Barnard and his wife divorced. In 1970, he married heiress Barbara Zoellner when she was 19, the same age as his son, and they had two children: Frederick (born 1972) and Christiaan Jr. (born 1974). He divorced Zoellner in 1982. Barnard married for a third time in 1988 to Karin Setzkorn, a young model. They also had two children, Armin (born 1989) and Lara (born 1997). This last marriage also ended in divorce in 2000. Barnard described in his autobiography "The Second Life" a one-night extramarital affair with Italian film star Gina Lollobrigida that occurred in January 1968. During that visit to Rome he was received in audience by Pope Paul VI. In October 2016, US Congresswoman Ann McLane Kuster stated that Barnard sexually assaulted her when she was 23 years old. According to Kuster, Barnard attempted to grope her under her skirt while she was seated at a business luncheon with US Representative Pete McCloskey, for whom she worked at the time. Retirement. Barnard retired as Head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after developing rheumatoid arthritis in his hands, which ended his surgical career. He had struggled with arthritis since 1956, when it was diagnosed during his postgraduate work in the United States. After retirement, he spent two years as the Scientist-In-Residence at the Oklahoma Transplantation Institute in the United States and as an acting consultant for various institutions. He had by this time become very interested in anti-aging research, and his reputation suffered in 1986 when he promoted "Glycel", an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. He also spent time as a research advisor to the Clinique la Prairie, in Switzerland, where the controversial "rejuvenation therapy" was practised. Barnard divided the remainder of his years between Austria, where he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world, and his game farm in Beaufort West, South Africa. In his later years, he had basal-cell carcinoma (skin cancer) on his face, for which he was treated in Parow, South Africa. Death. Christiaan Barnard died on 2 September 2001, while on holiday in Paphos, Cyprus. Early reports stated that he had died of a heart attack, but an autopsy showed his death was caused by a severe asthma attack. Books. Barnard wrote two autobiographies. His first book, "One Life", was published in 1969 and sold copies worldwide. Some of the proceeds were used to set up the Chris Barnard Fund for research into heart disease and heart transplants in Cape Town. His second autobiography, "The Second Life", was published in 1993, eight years before his death. Apart from his autobiographies, Barnard wrote books including:
Concubinage
Concubinage is an interpersonal and sexual relationship between two people in which the couple does not want to, or cannot, enter into a full marriage. Concubinage and marriage are often regarded as similar, but mutually exclusive. During the early stages of European colonialism, administrators often encouraged European men to practice concubinage to discourage them from paying prostitutes for sex (which could spread venereal disease) and from homosexuality. Colonial administrators also believed that having an intimate relationship with a native woman would enhance white men's understanding of native culture and would provide them with essential domestic labor. The latter was critical, as it meant white men did not require wives from the metropole, hence did not require a family wage. Colonial administrators eventually discouraged the practice when these liaisons resulted in offspring who threatened colonial rule by producing a mixed race class. This political threat eventually prompted colonial administrators to encourage white women to travel to the colonies, where they contributed to the colonial project, while at the same time contributing to domesticity and the separation of public and private spheres. In China, until the 20th century, concubinage was a formal and institutionalized practice that upheld concubines' rights and obligations. A concubine could be freeborn or of slave origin, and her experience could vary tremendously according to her master's whim. During the Mongol conquests, both foreign royals and captured women were taken as concubines. Concubinage was also common in Meiji Japan as a status symbol. Many Middle Eastern societies used concubinage for reproduction. The practice of a barren wife giving her husband a slave as a concubine is recorded in the Code of Hammurabi. The children of such relationships would be regarded as legitimate. Such concubinage was also widely practiced in the premodern Muslim world, and many of the rulers of the Abbasid Caliphate and the Ottoman Empire were born out of such relationships. Throughout Africa, from Egypt to South Africa, slave concubinage resulted in racially mixed populations. The practice declined as a result of the abolition of slavery. In ancient Rome, the practice of "concubinatus" was a monogamous relationship that was an alternative to marriage, usually because of the woman's lesser social status. Widowed or divorced men often took a "concubina", the Latin term from which the English "concubine" is derived, rather than remarrying, so as to avoid complications of inheritance. After the Christianization of the Roman Empire, Christian emperors improved the status of the concubine by granting concubines and their children the sorts of property and inheritance rights usually reserved for wives. In European colonies and American slave plantations, single and married men entered into long-term sexual relationships with local women. In the Dutch East Indies, concubinage between Dutch men and local women created the mixed-race Eurasian Indo community. In India, Anglo-Indians were a result of marriages and concubinage between European men and Indian women. In the Judeo-Christian-Islamic world, the term "concubine" has almost exclusively been applied to women, although a cohabiting male may also be called a concubine. In the 21st century, "concubinage" is used in some Western countries as a gender-neutral legal term to refer to cohabitation (including cohabitation between same-sex partners). Etymology and usage. 
The English terms "concubine" and "concubinage" appeared in the 14th century, deriving from Latin terms in Roman society and law. The term concubine, meaning "a paramour, a woman who cohabits with a man without being married to him", comes from the Latin "concubina" (f.) and "concubinus" (m.), terms that in Roman law meant "one who lives unmarried with a married man or woman". The Latin terms are derived from the verb "concumbere", "to lie with, to lie together, to cohabit", an assimilation of "com-", a prefix meaning "with, together", and "cubare", meaning "to lie down". Concubine is a term used widely in historical and academic literature, and its meaning varies considerably depending on the context. In the twenty-first century, it typically refers explicitly to extramarital affection, "either to a mistress or to a sex slave", without the same emphasis on the cohabiting aspect of the original meaning. Concubinage emerged as an English term in the late 14th century to mean the "state of being a concubine; act or practice of cohabiting in intimacy without legal marriage", and was derived from Latin by means of Old French, where the term may in turn have been derived from the Latin "concubinatus", an institution in ancient Rome that meant "a permanent cohabitation between persons to whose marriage there were no legal obstacles". It has also been described more plainly as a long-term sexual relationship between a man and a woman who are not legally married. In pre-modern to modern law, concubinage has been used in certain jurisdictions to describe cohabitation, and in France it was formalized in 1999 as the French equivalent of a civil union. The US legal system also used to use the term in reference to cohabitation, but the term never evolved further and is now considered outdated. Characteristics. Forms of concubinage have existed in all cultures, though the prevalence of the practice and the rights and expectations of the persons involved have varied considerably, as have the rights of the offspring born from such relationships, a concubine's legal and social status, their role within a household and society's perceptions of the institution. A relationship of concubinage could take place voluntarily, with the parties involved agreeing not to enter into marriage, or involuntarily (i.e. through slavery). In slave-owning societies, most concubines were slaves, also called "slave-concubines". This institutionalization of concubinage with female slaves dates back to Babylonian times, and has been practiced in patriarchal cultures throughout history. Whatever the status and rights of the persons involved, they were typically inferior to those of a legitimate spouse, often with the rights of inheritance being limited or excluded. Concubinage and marriage are often regarded as similar but mutually exclusive. In the past, a couple may not have been able to marry because of differences in social class, ethnicity or religion, or a man might want to avoid the legal and financial complications of marriage. Practical impediments or social disincentives for a couple to marry could include differences in social rank, an existing marriage and laws against bigamy, religious or professional prohibitions, or a lack of recognition by the appropriate authorities. The concubine in a concubinage tended to have a lower social status than the married party or home owner, and this was often the reason why concubinage was preferred to marriage. A concubine could be an "alien" in a society that did not recognize marriages between foreigners and citizens. 
Alternatively, they might be a slave, or person from a poor family interested in a union with a man from the nobility. In other cases, some social groups were forbidden to marry, such as Roman soldiers, and concubinage served as a viable alternative to marriage. In polygynous situations, the number of concubines there were permitted within an individual concubinage arrangement has varied greatly. In Roman law, where monogamy was expected, the relationship was identical (and alternative) to marriage except for the lack of "marital affection" from both or one of the parties, which conferred rights related to property, inheritance and social rank. By contrast, in parts of Asia and the Middle East, powerful men kept as many concubines as they could financially support. Some royal households had thousands of concubines. In such cases concubinage served as a status symbol and for the production of sons. In societies that accepted polygyny, there were advantages to having a concubine over a mistress, as children from a concubine were legitimate, while children from a mistress would be considered "bastards". Categorization. Scholars have made attempts to categorize patterns of concubinage practiced in the world. "The International Encyclopedia of Anthropology" gives four distinct forms of concubinage: Junius P. Rodriguez gives three cultural patterns of concubinage: Asian, Islamic and European. Antiquity. Mesopotamia. In Mesopotamia, it was customary for a sterile wife to give her husband a slave as a concubine to bear children. The status of such concubines was ambiguous; they normally could not be sold but they remained the slave of the wife. However, in the late Babylonian period, there are reports that concubines could be sold. In general, marriage was monogamous. "If after two or three years of marriage the wife had not given birth to any children, the husband was allowed to buy a slave (who could also be chosen by the wife) in order to produce heirs. This woman, however, remained a slave and never gained the status of a second wife." In the Middle Assyrian Period, the main wife ("assatu") wore a veil in the street, as could a concubine ("esirtu") if she were accompanying the main wife, or if she were married. "If a man veils his concubine in public, by declaring 'she is my wife,' this woman shall be his wife." It was illegal for unmarried women, prostitutes and slave women to wear a veil in the street. "The children of a concubine were lower in rank than the descendants of a wife, but they could inherit if the marriage of the latter remained childless." Ancient Egypt. While most Ancient Egyptians were monogamous, a male pharaoh would have had other, lesser wives and concubines in addition to the Great Royal Wife. This arrangement would allow the pharaoh to enter into diplomatic marriages with the daughters of allies, as was the custom of ancient kings. Concubinage was a common occupation for women in ancient Egypt, especially for talented women. A request for forty concubines by Amenhotep III (c. 1386–1353 BC) to a man named Milkilu, Prince of Gezer states:"Behold, I have sent you Hanya, the commissioner of the archers, with merchandise in order to have beautiful concubines, i.e. weavers. Silver, gold, garments, all sort of precious stones, chairs of ebony, as well as all good things, worth 160 deben. In total: forty concubines—the price of every concubine is forty of silver. Therefore, send very beautiful concubines without blemish." 
– "(Lewis, 146)"Concubines would be kept in the pharaoh's harem. Amenhotep III kept his concubines in his palace at Malkata, which was one of the most opulent in the history of Egypt. The king was considered to be deserving of many women as long as he cared for his Great Royal Wife as well. Ancient Greece. In Ancient Greece, the practice of keeping a concubine ( "pallakís") was common among the upper classes, and they were for the most part women who were slaves or foreigners, but occasional free born based on family arrangements (typically from poor families). Children produced by slaves remained slaves and those by non-slave concubines varied over time; sometimes they had the possibility of citizenship. The law prescribed that a man could kill another man caught attempting a relationship with his concubine. By the mid fourth century, concubines could inherit property, but, like wives, they were treated as sexual property. While references to the sexual exploitation of maidservants appear in literature, it was considered disgraceful for a man to keep such women under the same roof as his wife. Apollodorus of Acharnae said that "hetaera" were concubines when they had a permanent relationship with a single man, but nonetheless used the two terms interchangeably. Ancient Rome. "Concubinatus" was a monogamous union recognized socially and to some extent legally as an alternative to marriage in the Roman Empire. Concubinage was practiced most often in couples when one partner, almost always the man, belonged to a higher social rank, especially the senatorial order, who were penalized for marrying below their class. The female partner was a "concubina"; the term "concubinus" is used of men mainly in a same-sex union or to deprecate a relationship in which the woman was dominant. The use of the term "concubina" in epitaphs for family memorials indicates that the role was socially acceptable. A man was not allowed to have both a "concubina" and a wife "(uxor)" at the same time, but a single tombstone might list multiple wives or "concubinae" serially. By contrast, the pejorative "paelex" referred to a concubine who was a sexual rival to a wife—in early Rome, most often a war captive and hence unwillingly—and by late antiquity was loosely equivalent to "prostitute". However, in Latin literature "concubinae" are often disparaged as slaves kept as sexual luxuries in the literal sense of "bedmate". The distinction is that the use of an enslaved woman was not "concubinatus" in the legal sense, which might involve a signed document, though even an informal concubine had some legal protections that placed her among the more privileged slaves of the household. Concubines occupied an entire chapter, now fragmentary, in the 6th-century compilation of Roman law known as the "Digest", but "concubinatus" was never a fully realized legal institution. It evolved in ad hoc response to Augustan moral legislation that criminalized some forms of adultery and other consensual sexual behaviors among freeborn people "(ingenui)" outside marriage. Even Roman legal experts had trouble parsing the various forms of marriage, the status of a "concubina", and whether an extramarital sexual relationship was adultery or permissible pleasure-seeking with a prostitute, professional entertainer, or slave. Roman emperors not infrequently took a "concubina", often a freedwoman, rather than remarrying after the death of their wife to avoid the legal complications pertaining to succession and inheritance. 
Caenis, the freedwoman and secretary of Antonia Minor, was Vespasian's wife "in all but name", according to Suetonius, until her death in AD 74. Roman manumission law also allowed a slave-owner to free the slave and enter into "concubinatus" or a regular marriage. Epitaphs indicate that both partners in "concubinatus" might also be freedpersons, for reasons that are not entirely clear. A slave lacked the legal personhood to marry under Roman law or to contract "concubinatus", but the heterosexual union of two slaves, or a freedperson and a slave, might be recognized as an intention to marry when both partners gained the legal status that permitted them to do so. In this quasi-marital union, called "contubernium", children seem often to have been desired, in contrast to "concubinatus", in which children more often were viewed as complications and there was no intention to marry. Asia. Concubinage was highly popular before the early 20th century all over East Asia. The main functions of concubinage for men was for pleasure and producing additional heirs, whereas for women the relationship could provide financial security. Children of concubines had lower rights in account to inheritance, which was regulated by the Dishu system. In China and the Muslim world, the concubine of a king could achieve power, especially if her son also became a monarch. China. In China, successful men often had concubines until the practice was outlawed when the Chinese Communist Party came to power in 1949. The standard Chinese term translated as "concubine" was "qiè" , a term that has been used since ancient times. Concubinage resembled marriage in that concubines were recognized sexual partners of a man and were expected to bear children for him. Unofficial concubines () were of lower status, and their children were considered illegitimate. The English term concubine is also used for what the Chinese refer to as "pínfēi" (), or "consorts of emperors", an official position often carrying a very high rank. In premodern China it was illegal and socially disreputable for a man to have more than one wife at a time, but it was acceptable to have concubines. From the earliest times wealthy men purchased concubines and added them to their household in addition to their wife. The purchase of concubines was similar to the purchase of slaves, but concubines had a higher social status. In the earliest records a man could have as many concubines as he could afford to purchase. From the Eastern Han period (AD 25–220) onward, the number of concubines a man could have was limited by law. The higher rank and the more noble identity a man possessed, the more concubines he was permitted to have. A concubine's treatment and situation was variable and was influenced by the social status of the male to whom she was attached, as well as the attitude of his wife. In the "Book of Rites" chapter on "The Pattern of the Family" () it says, "If there were betrothal rites, she became a wife; and if she went without these, a concubine." Wives brought a dowry to a relationship, but concubines did not. A concubinage relationship could be entered into without the ceremonies used in marriages, and neither remarriage nor a return to her natal home in widowhood were allowed to a concubine. There are early records of concubines allegedly being buried alive with their masters to "keep them company in the afterlife". 
Women in concubinage (妾) were treated as inferior and were expected to be subservient to any wife under traditional Chinese marriage (if there was one). The position of the concubine was generally inferior to that of the wife. Although a concubine could produce heirs, her children would be inferior in social status to a wife's children, although they were of higher status than illegitimate children. The child of a concubine had to show filial duty to two women, their biological mother and their legal mother—the wife of their father. After the death of a concubine, her sons would make an offering to her, but these offerings were not continued by the concubine's grandsons, who only made offerings to their grandfather's wife. Until the Song dynasty (960–1276), it was considered a serious breach of social ethics to promote a concubine to a wife. During the Qing dynasty (1644–1911), the status of concubines improved. It became permissible to promote a concubine to wife if the original wife had died and the concubine was the mother of the only surviving sons. Moreover, the prohibition against forcing a widow to remarry was extended to widowed concubines. During this period tablets for concubine-mothers seem to have been more commonly placed in family ancestral altars, and genealogies of some lineages listed concubine-mothers. Many of the concubines of the emperors of the Qing dynasty were freeborn women from prominent families. Concubines of men of lower social status could be either freeborn or slave. Imperial concubines, kept by emperors in the Forbidden City, had different ranks and were traditionally guarded by eunuchs to ensure that they could not be impregnated by anyone but the emperor. In Ming China (1368–1644) there was an official system to select concubines for the emperor. The age of the candidates ranged mainly from 14 to 16. Virtues, behavior, character, appearance and body condition were the selection criteria. Despite the limitations imposed on Chinese concubines, there are several examples in history and literature of concubines who achieved great power and influence. Lady Yehenara, otherwise known as Empress Dowager Cixi, was one of the most successful concubines in Chinese history. Cixi first entered the court as a concubine of the Xianfeng Emperor and gave birth to his only surviving son, who later became the Tongzhi Emperor. She eventually became the "de facto" ruler of Qing China for 47 years after her husband's death. Concubinage is also examined in one of the Four Great Classical Novels, "Dream of the Red Chamber" (believed to be a semi-autobiographical account of author Cao Xueqin's family life). Three generations of the Jia family are supported by one notable concubine of the emperor, Jia Yuanchun, the full elder sister of the male protagonist Jia Baoyu. In contrast, their younger half-siblings by concubine Zhao, Jia Tanchun and Jia Huan, develop distorted personalities because they are the children of a concubine. Emperors' concubines and harems are emphasized in 21st-century romantic novels written for female readers and set in ancient times. As a plot element, the children of concubines are depicted with a status far inferior to what it was in actual history. The "zhai dou" (residential intrigue) and "gong dou" (harem intrigue) genres show concubines and wives, as well as their children, scheming secretly to gain power. "Empresses in the Palace", a "gong dou" novel and TV drama, has had great success in 21st-century China. 
Hong Kong officially abolished the Great Qing Legal Code in 1971, thereby making concubinage illegal. Casino magnate Stanley Ho of Macau took his "second wife" as his official concubine in 1957, while his "third and fourth wives" retain no official status. Mongols. Polygyny and concubinage were very common in Mongol society, especially for powerful Mongol men. Genghis Khan, Ögedei Khan, Jochi, Tolui, and Kublai Khan (among others) all had many wives and concubines. Genghis Khan frequently acquired wives and concubines from empires and societies that he had conquered; these women were often princesses or queens who had been taken captive or gifted to him. Genghis Khan's most famous concubine was Möge Khatun, who, according to the Persian historian Ata-Malik Juvayni, was "given to Chinggis Khan by a chief of the Bakrin tribe, and he loved her very much." After Genghis Khan died, Möge Khatun became a wife of Ögedei Khan. Ögedei also favored her as a wife, and she frequently accompanied him on his hunting expeditions. Japan. Before monogamy was legally imposed in the Meiji period, concubinage was common among the nobility. Its purpose was to ensure male heirs. For example, the son of an Imperial concubine often had a chance of becoming emperor. Yanagihara Naruko, a high-ranking concubine of Emperor Meiji, gave birth to Emperor Taishō, who was later legally adopted by Empress Haruko, Emperor Meiji's formal wife. Even among merchant families, concubinage was occasionally used to ensure heirs. Asako Hirooka, an entrepreneur who was the daughter of a concubine, worked hard to help her husband's family survive after the Meiji Restoration. She lost her fertility giving birth to her only daughter, Kameko, so her husband—with whom she got along well—took Asako's maid-servant as a concubine and fathered three daughters and a son with her. Kameko, as the child of the formal wife, married a noble man and matrilineally carried on the family name. A samurai could take concubines, but their backgrounds were checked by higher-ranked samurai. In many cases, taking a concubine was akin to a marriage. Kidnapping a concubine, although common in fiction, would have been shameful, if not criminal. If the concubine was a commoner, a messenger was sent with betrothal money or a note for exemption of tax to ask for her parents' acceptance. Even though the woman would not be a legal wife, a situation normally considered a demotion, many wealthy merchants believed that being the concubine of a samurai was superior to being the legal wife of a commoner. When a merchant's daughter married a samurai, her family's money erased the samurai's debts, and the samurai's social status improved the standing of the merchant family. If a samurai's commoner concubine gave birth to a son, the son could inherit his father's social status. Concubines sometimes wielded significant influence. Nene, wife of Toyotomi Hideyoshi, was known to overrule her husband's decisions at times, and Yodo-dono, his concubine, became the "de facto" master of Osaka castle and the Toyotomi clan after Hideyoshi's death. Korea. Joseon monarchs had a harem which contained concubines of different ranks. Empress Myeongseong managed to have sons, preventing the sons of concubines from gaining power. Children of concubines were often regarded as inferior in marriage arrangements; a concubine's daughter could not marry a wife-born son of the same class. 
For example, Jang Nok-su, a concubine-born daughter of a mayor, was initially married to a slave-servant and later became a high-ranking concubine of Yeonsangun. The Joseon dynasty, established in 1392, debated whether the children of a free parent and a slave parent should be considered free or slave. The child of a scholar-official father and a slave-concubine mother was always free, although the child could not occupy government positions. India. In Hindu society, concubinage was practiced with women with whom marriage was undesirable, such as a woman from an upper caste or a Brahmin woman. Children born of concubinage followed the caste categorization of the mother. Polygamy and concubinage prevailed in ancient India among rulers and kings. Before Indian independence, Bhil women in Gujarat served as concubines for the Koli landlords. In medieval Rajasthan, the ruling Rajput families often kept certain women called "paswan", "khawaas", or "pardayat". These women were kept by the ruler if their beauty had impressed him, but without formal marriage. Sometimes they were given rights to income collected from a particular village, as queens were. Their children were socially accepted but did not receive a share in the ruling family's property and married others of the same status as themselves. Concubinage was practiced in elite Rajput households between the 16th and 20th centuries. Female slave-servants or slave-performers could be elevated to the rank of concubine (called "khavas", "pavas") if a ruler found them attractive. The entry into concubinage was marked by a ritual, but one that differed from the rituals marking marriage. Rajputs often took concubines from the Jat, Gujjar, Ahir, and Muslim communities, but did not take concubines from the untouchable castes and refrained from taking Charans, Brahmins, and other Rajputs. There are instances of wives eloping with their Rajput lovers and becoming their concubines. Europe. Vikings. Polygyny occurred among Vikings, and rich and powerful Viking men could have more than one wife as well as concubines. Vikings competed with one another for access to the marriage market. Viking men could capture women and make them into their wives or concubines. Concubinage for Vikings was connected to slavery; the Vikings took both free women and slaves as concubines. Researchers have suggested that Vikings may have originally started sailing and raiding due to a need to seek out women from foreign lands. There are theories that polygynous relationships in Viking society could have led to a shortage of eligible women for the average male; polygyny increases male–male competition in society because it creates a pool of unmarried men willing to engage in risky status-elevating and sex-seeking behaviors. Thus, the average Viking man could have been forced to perform riskier actions to gain wealth and power to be able to find suitable women. This idea was expressed in the 11th century by the historian Dudo of Saint-Quentin in his semi-imaginary "History of The Normans". The Annals of Ulster describe such raptio, stating that in 821 the Vikings plundered an Irish village and "carried off a great number of women into captivity". 
People taken captive during the Viking raids across Eastern Europe could be sold to Moorish Spain via the Dublin slave trade or transported to Hedeby or Brännö, and from there via the Volga trade route to present-day Russia, where Slavic slaves and furs were sold to Muslim merchants in exchange for Arab silver "dirham" and silk, which have been found in Birka, Wollin and Dublin. Initially this trade route between Europe and the Abbasid Caliphate passed via the Khazar Kaghanate, but from the early 10th century onward it went via Volga Bulgaria and from there by caravan to Khwarazm and the Samanid slave market in Central Asia, and finally via Iran to the Abbasid Caliphate in the Middle East, where there was a great market for slave girls as concubines. Early Christianity and Feudalism. The Christian morals developed by Patristic writers largely promoted marriage as the only form of union between men and women. Both Saint Augustine and Saint Jerome strongly condemned the institution of concubinage. Emperor Justinian, in his great sixth-century code, the Corpus Iuris Civilis, granted to concubines and their children the sorts of property and inheritance rights usually reserved for wives. He brought the institution of "concubinatus" closer to marriage, but he also repeated the Christian injunction that concubinage must be permanent and monogamous. The two views, Christian condemnation and secular continuity with the Roman legal system, continued to be in conflict throughout the entire Middle Ages, until in the 14th and 15th centuries the Church outlawed concubinage in the territories under its control. Middle East. In the historic Muslim Arab world, "concubine" ("surriyya") referred to a female slave ("jāriya"), whether Muslim or non-Muslim, with whom her master engaged in sexual intercourse in addition to her rendering household or other services. Such relationships were common in pre-Islamic Arabia and other pre-existing cultures of the wider region. Islam introduced legal restrictions and discipline to concubinage and encouraged manumission. Islam furthermore endorsed educating female slaves (instructing them in Islam) and freeing or marrying them if they embraced Islam, abandoning polytheism or unbelief. Acknowledged children of concubines were generally declared legitimate, with or without wedlock, and the mother of a free child was considered free upon the death of her male enslaver. There is evidence that concubines had a higher rank than female slaves. Abu Hanifa and others argued for practices of modesty toward the concubine, recommending that she be established in the home, that her chastity be protected, and that she not be misused for sale or shared with friends or kin. While scholars exhorted masters to treat their slaves equally, a master was allowed to show favoritism towards a concubine. Islamic scholars have disagreed on the exact interpretation: some hold that sexual intercourse with concubines is permitted only after marrying them, as Islam forbids sexual intercourse outside of marriage. Some scholars recommended holding a wedding banquet ("walima") to celebrate the concubinage relationship; however, this is not required in the teachings of Islam and is rather the preferred opinion of certain Islamic scholars. Even the Arabic term for concubine, "surriyya", may have been derived from "sarat", meaning "eminence", indicating the concubine's higher status over other female slaves. 
The Qur'an does not use the word "surriyya", but instead uses the expression "Ma malakat aymanukum" (that which your right hands own), which occurs 15 times in the book. Sayyid Abul Ala Maududi explains that "two categories of women have been excluded from the general command of guarding the private parts: (a) wives, (b) women who are legally in one's possession". Some contend that concubinage was a pre-Islamic custom that was allowed to be practiced under Islam, and that a man could take a Jewish or other non-Muslim woman as a concubine and was encouraged to marry her after teaching her, instructing her well and then giving her freedom. In the traditions of the Abrahamic religions, Abraham had a concubine named Hagar, who was originally a slave of his wife Sarah. The story of Hagar would affect how concubinage was perceived in early Islamic history. Sikainga writes that one rationale for concubinage in Islam was that "it satisfied the sexual desire of the female slaves and thereby prevented the spread of immorality in the Muslim community." Most Islamic schools of thought restricted concubinage to a relationship where the female slave was required to be monogamous to her master (though the master's monogamy to her was not required), but according to Sikainga, in reality this was not always practiced, and female slaves were targeted by other men of the master's household. These opinions of Sikainga are controversial and contested. In ancient times, two sources of concubines were permitted under an Islamic regime. Primarily, non-Muslim women taken as prisoners of war were made concubines, as happened after the Battle of the Trench and in numerous later caliphates. Masters were encouraged to manumit slave women who rejected their initial faith and converted to Islam, or to bring them into formal marriage. The expansion of various Muslim dynasties resulted in acquisitions of concubines through purchase from the slave trade, gifts from other rulers, and captives of war. Keeping a large number of concubines became a symbol of status. Almost all Abbasid caliphs were born to concubines. The custom of keeping concubines was common in all Islamic dynasties until the abolition of slavery in the 20th century. Similarly, the sultans of the Ottoman empire were often the sons of concubines. As a result, some individual concubines came to exercise a degree of influence over Ottoman politics. Some concubines developed social networks and accumulated personal wealth, both of which allowed them to rise in social status. The practice declined with the abolition of slavery, which began in the 19th century; in the Arabian Peninsula it persisted until slavery was banned in Saudi Arabia in 1962 and in Oman in 1970. Ottoman sultans appear to have preferred concubinage to marriage, and for a time all royal children were born of concubines. The consorts of Ottoman sultans were often neither Turkish nor Muslim by birth. Leslie Peirce argues that this was because a concubine would not have the political leverage that would be possessed by a princess or a daughter of the local elite. Ottoman sultans also appear to have had only one son with each concubine; that is, once a concubine gave birth to a son, the sultan would no longer have intercourse with her. This limited the power of each son. New World. When slavery became institutionalized in Colonial America, white men, whether or not they were married, sometimes took enslaved women as concubines; children of such unions remained slaves. 
In the various European colonies in the Caribbean, white planters took black and mulatto concubines, owing to the shortage of white women. The children of such unions were sometimes freed from slavery and even inherited from their father, though this was not the case for the majority of children born of such unions. These relationships appear to have been socially accepted in the colony of Jamaica and even attracted European emigrants to the island. Brazil. In colonial Brazil, men were expected to marry women who were equal to them in status and wealth. Alternatively, some men practiced concubinage, an extra-marital sexual relationship. This sort of relationship was condemned by the Catholic Church, and the Council of Trent threatened those who engaged in it with excommunication. Concubines included both female slaves and former slaves. One reason for taking non-white women as concubines was that free white men outnumbered free white women, although marriage between races was not illegal. New France. Some French settlers in New France were recorded as keeping native women as "concubines", sometimes while being married to a white woman. This was particularly common in Louisiana, but was discouraged by the clergy. United States. Relationships with slaves in the United States and the Confederacy were sometimes euphemistically referred to as concubinary. Ranging from lifelong arrangements to single or serial sexual visitations, these relationships with enslaved people illustrate a radical power imbalance between a human being owned as chattel and the legal owner of that person. While personal ownership of slaves was enshrined in the law, an enslaved person had no legal power over their own legal personhood, control of which was held by another entity; therefore, a slave could never give real and legal consent in any aspect of their life. The inability to give any kind of consent when enslaved was due in part to the slave master's ability to legally coerce acts and declarations, including those of affection, attraction, and consent, through rewards and punishments. Legally, however, the concept of chattel slavery in the United States and the Confederate States defined and enforced in law the ownership of a slave's legal personhood, meaning that the proxy for legal consent lay with the slave's master, who was the sole source of legal consent to the bodily integrity and all efforts of that slave, except as regulated or limited by law. With slavery recognized as a crime against humanity in United States law, as well as in customary international law, the legal basis of slavery has been repudiated for all time, as have any rights which owner-rapists had to exercise proxy consent, sexual or otherwise, for their slaves. Free men in the United States sometimes took female slaves in relationships which they referred to as concubinage, although marriage between the races was prohibited by law in the colonies and the later United States. Many colonies and states also had laws against miscegenation or any interracial relations. From 1662 the Colony of Virginia, followed by others, incorporated into law the principle that children took their mother's status, i.e., the principle of "partus sequitur ventrem". This led to generations of multiracial slaves, some of whom were otherwise considered legally white (one-eighth or less African, equivalent to a great-grandparent) before the American Civil War. 
In some cases, men had long-term relationships with enslaved women, giving them and their mixed-race children freedom and providing their children with apprenticeships, education and transfer of capital. A relationship between Thomas Jefferson and Sally Hemings is an example of this. Such arrangements were more prevalent in the American South during the antebellum period. Plaçage. In Louisiana and former French territories, a formalized system of concubinage called "plaçage" developed. European men took enslaved or free women of color as mistresses after making arrangements to give them a dowry, house or other transfer of property, and sometimes, if they were enslaved, offering freedom and education for their children. A third class of free people of color developed, especially in New Orleans. Many became educated, artisans and property owners. French-speaking and practicing Catholicism, these women combined French and African-American culture and created an elite between those of European descent and the slaves. Today, descendants of the free people of color are generally called Louisiana Creole people. In Judaism. In Judaism, a concubine is a marital companion of inferior status to a wife. Among the Israelites, men commonly acknowledged their concubines, and such women enjoyed the same rights in the house as legitimate wives. Ancient Judaism. The term concubine did not necessarily refer to women after the first wife. A man could have many wives and concubines. Legally, any children born to a concubine were considered to be the children of the wife she was under. The concubine may not have commanded the exact amount of respect as the wife. In the Levitical rules on sexual relations, the Hebrew word that is commonly translated as "wife" is distinct from the Hebrew word that means "concubine". However, on at least one other occasion the term is used to refer to a woman who is not a wife specifically, the handmaiden of Jacob's wife. In the Levitical code, sexual intercourse between a man and a wife of a different man was forbidden and punishable by death for both persons involved. Since it was regarded as the highest blessing to have many children, wives often gave their maids to their husbands if they were barren, as in the case of Rachel and Bilhah. The children of the concubine often had equal rights with those of the wife; for example, King Abimelech was the son of Gideon and his concubine. Later biblical figures, such as Gideon and Solomon, had concubines in addition to many childbearing wives. For example, the Books of Kings say that Solomon had 700 wives and 300 concubines. The account of the unnamed Levite in Judges 19–20 shows that the taking of concubines was not the exclusive preserve of kings or patriarchs in Israel during the time of the Judges, and that the rape of a concubine was completely unacceptable to the Israelite nation and led to a civil war. In the story, the Levite appears to be an ordinary member of the tribe, whose concubine was a woman from Bethlehem in Judah. This woman was unfaithful, and eventually abandoned him to return to her paternal household. However, after four months, the Levite, referred to as her husband, decided to travel to her father's house to persuade his concubine to return. She is amenable to returning with him, and the father-in-law is very welcoming. The father-in-law convinces the Levite to remain several additional days, until the party leaves behind schedule in the late evening. 
The group passes up a nearby non-Israelite town to arrive very late in the city of Gibeah, which is in the land of the Benjaminites. The group sits around the town square, waiting for a local to invite them in for the evening, as was the custom for travelers. A local old man invites them to stay in his home, extending them guest right by washing their feet and offering them food. A band of wicked townsmen attacks the house and demands that the host send out the Levite man so they can rape him. The host offers to send out his virgin daughter as well as the Levite's concubine for them to rape, to avoid breaking guest right towards the Levite. Eventually, to ensure his own safety and that of his host, the Levite gives the men his concubine, who is raped and abused through the night, until she is left collapsed against the front door at dawn. The Levite thus chose to save himself from rape at the expense of his concubine. In the morning, the Levite finds her when he tries to leave. When she fails to respond to her husband's order to get up (possibly because she is dead, although the language is unclear), the Levite places her on his donkey and continues home. Once home, he dismembers her body and distributes the 12 parts throughout the nation of Israel. The Israelites gather to learn why they were sent such grisly gifts, and are told by the Levite of the sadistic rape of his concubine. The crime is considered outrageous by the Israelite tribesmen, who then wreak total retribution on the men of Gibeah, as well as on the surrounding tribe of Benjamin when it supports the men of Gibeah, killing them without mercy and burning all their towns. The inhabitants of the town of Jabesh Gilead are then slaughtered as a punishment for not joining the 11 tribes in their war against the Benjaminites, and their 400 unmarried daughters are given in forced marriage to the 600 Benjaminite survivors. Finally, the 200 Benjaminite survivors who still have no wives are allowed by the other tribes to seize wives in a mass marriage by abduction. Medieval and modern Judaism. In Judaism, concubines are referred to by the Hebrew term pilegesh. The term is a loanword from Ancient Greek "pallakís", meaning "a mistress staying in house". According to the Babylonian Talmud, the difference between a concubine and a legitimate wife was that the latter received a ketubah and her marriage ("nissu'in") was preceded by an erusin ("formal betrothal"), which was not the case for a concubine. One opinion in the Jerusalem Talmud argues that the concubine should also receive a "marriage contract", but without a clause specifying a divorce settlement. According to Rashi, "wives with kiddushin and ketubbah, concubines with kiddushin but without ketubbah"; this reading is from the Jerusalem Talmud. Certain Jewish thinkers, such as Maimonides, believed that concubines were strictly reserved for royal leadership and thus that a commoner may not have a concubine. Indeed, such thinkers argued that commoners may not engage in any type of sexual relations outside of a marriage. Maimonides was not the first Jewish thinker to criticise concubinage. For example, Leviticus Rabbah severely condemns the custom. Other Jewish thinkers, such as Nahmanides, Samuel ben Uri Shraga Phoebus, and Jacob Emden, strongly objected to the idea that concubines should be forbidden. Despite these prohibitions, concubinage remained widespread among Jewish households of the Ottoman empire and resembled the practice in Muslim households. 
In the Hebrew of the contemporary State of Israel, "pilegesh" is often used as the equivalent of the English word "mistress"—i.e., the female partner in extramarital relations—regardless of legal recognition. Attempts have been initiated to popularise "pilegesh" as a form of premarital, non-marital or extramarital relationship (which, according to the perspective of the enacting person(s), is permitted by Jewish law). Concubinage and slavery. In some contexts, the institution of concubinage diverged from free quasi-marital cohabitation to the extent that free women were forbidden from entering concubinage and the institution was reserved for slaves. This type of concubinage was practiced in patriarchal cultures throughout history. Many societies automatically freed the concubine after she had a child. Among societies that did not legally require the manumission of concubines, it was usually done anyway. In slave-owning societies, most concubines were slaves, but not all. The feature of concubinage that made it attractive to certain men was that the concubine was dependent on the man; she could be sold or punished at the master's will. According to Orlando Patterson, slaves taken as concubines would have had a higher level of material comfort than the slaves used in agriculture or in mining.
7017
45200258
https://en.wikipedia.org/wiki?curid=7017
Central Plaza (Hong Kong)
Central Plaza is a 78-storey skyscraper at 18 Harbour Road, in Wan Chai on Hong Kong Island in Hong Kong. Completed in August 1992, it is the third tallest tower in the city after 2 International Finance Centre (2 IFC) in Central and the International Commerce Centre in West Kowloon. It was the tallest building in Asia from 1992 to 1996, when Shun Hing Square was completed in the neighbouring city of Shenzhen. Central Plaza surpassed the Bank of China Tower as the tallest building in Hong Kong and remained so until the completion of 2 IFC. Central Plaza was also the tallest reinforced concrete building in the world, until it was surpassed by CITIC Plaza in Guangzhou in 1996. The building uses a triangular floor plan. On the top of the tower is a four-bar neon clock that indicates the time by displaying different colours for 15-minute periods, blinking at the change of the quarter (a simplified sketch of this scheme is given below). An anemometer is installed on the tip of the building's mast, at above sea level. The mast has a height of . Central Plaza also houses the world's highest church inside a skyscraper, Sky City Church. History. The land upon which Central Plaza sits was reclaimed from Victoria Harbour in the 1970s. The site was auctioned off by the Hong Kong Government at City Hall Theatre on 25 January 1989. It was sold for a record HK$3.35 billion to a joint venture called "Cheer City Properties", owned 50 per cent by Sun Hung Kai Properties and 50 per cent by fellow real estate conglomerate Sino Land and their major shareholder, the Ng Teng Fong family. A third developer, Ryoden Development, joined the consortium afterwards. Ryoden Development disposed of its 5% interest in exchange for 190,790 square feet of office space in New Kowloon Plaza from Sun Hung Kai in 1995. The first major tenant to sign a lease was the Provisional Airport Authority, which on 2 August 1991 agreed to lease the 24th to 26th floors. A topping-out ceremony, presided over by Sir David Ford, was held on 9 April 1992. 
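The quarter-hour colour clock described above lends itself to a small illustration. The following is a minimal sketch only; the colour palette and the blink duration are assumptions made for the example, since the text states merely that the four neon bars show a different colour for each 15-minute period and blink at the change of the quarter.
```python
# Hypothetical sketch of the quarter-hour neon clock described above.
# The palette and blink window are assumptions; the article says only that
# a different colour is shown for each 15-minute period and that the
# display blinks when the quarter changes.

QUARTER_COLOURS = ["red", "white", "purple", "yellow"]  # assumed palette

def bar_colour(minute: int) -> str:
    """Colour shown by the four bars for a given minute of the hour."""
    return QUARTER_COLOURS[(minute % 60) // 15]

def is_blinking(minute: int, second: int, blink_window_s: int = 30) -> bool:
    """True while the display blinks just after a quarter change."""
    return minute % 15 == 0 and second < blink_window_s

print(bar_colour(47))      # fourth quarter of the hour -> "yellow"
print(is_blinking(45, 5))  # quarter has just changed -> True
```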
From an architectural point of view, this arrangement provides better floor area utilisation, offering an internal column-free office area with a clear depth of and an overall usable floor area efficiency of 81%. Nonetheless, the triangular building plan causes the air handling unit (AHU) room in the internal core to also assume a triangular configuration. With only limited space, this makes the adoption of a standard AHU infeasible. Furthermore, all air-conditioning ducting, electrical trunking and piping gathered inside the core area has to be squeezed into a very narrow and congested corridor ceiling void. Super high-rise building. As the building is situated opposite the Hong Kong Convention & Exhibition Centre, the best way to maximise the building's harbour views without obstruction by the neighbouring high-rise buildings was to design it tall enough to clear the height of those buildings. However, designing tall buildings brings several difficulties in structural and building services design, for example excessive static pressure in water systems, high line voltage drop, and long vertical transportation distances (resulting in long waits for elevators). All these problems can increase the capital cost of the building systems and impair the safe operation of the building. Maximum clear ceiling height. As a general practice, achieving a clear height of , and a floor-to-floor height of would be required. However, due to the high wind load in Hong Kong for such a tall high-rise building, every additional metre of building height would increase the structural cost by more than HK$1 million (HK$304,800 per ft). Therefore, a comprehensive study was conducted and finally a floor height of was adopted. From this measure alone, the estimated construction cost saving across the 58 office floors was around HK$30 million (a rough check of these figures is sketched below). Yet at the same time, a maximum ceiling height of in the office area could still be achieved with careful coordination and dedicated integration. Steel structure vs. reinforced concrete. Steel structures are more commonly adopted in high-rise buildings. In the original scheme, an externally cross-braced framed tube was applied, with primary and secondary beams carrying metal decking with a reinforced concrete slab. The core was also composed of steelwork, designed to carry vertical loads only. Later, after a financial review, the developer decided to reduce the height of the superstructure by increasing the size of the floor plate so as to reduce the complex architectural requirements of the tower base, which meant a high-strength concrete solution became possible. In the final scheme, columns at centres and floor edge beams were used to replace the large steel corner columns. Because climbing-form and table-form construction methods and efficient construction management were used on this project, the reinforced concrete structure took no longer to build than the steel structure would have. Most attractively, the reinforced concrete scheme saved an estimated HK$230 million compared with the steel scheme. Hence the reinforced concrete structure was adopted, and Central Plaza is now one of the tallest reinforced concrete buildings in the world. In the reinforced concrete scheme, the core has a similar arrangement to the steel scheme, and the wind shear is taken out from the core at the lowest basement level and transferred to the perimeter diaphragm walls. 
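The figures quoted under "Maximum clear ceiling height" can be cross-checked with a rough calculation. Only the 58 office floors, the more-than-HK$1-million-per-metre figure and the approximate HK$30 million saving come from the article; the per-floor height reduction used below is a hypothetical value chosen for illustration, since the adopted floor-to-floor dimension is not given in the text above.
```python
# Rough check of the height-cost trade-off described above. The 0.5 m
# per-floor reduction is an assumed value for illustration; the 58 floors,
# the >HK$1 million per metre of height and the ~HK$30 million saving are
# the figures quoted in the text.

OFFICE_FLOORS = 58
COST_PER_METRE_HKD = 1_000_000          # extra structural cost per metre of building height
assumed_reduction_per_floor_m = 0.5     # hypothetical reduction in floor-to-floor height

total_height_saved_m = OFFICE_FLOORS * assumed_reduction_per_floor_m
estimated_saving_hkd = total_height_saved_m * COST_PER_METRE_HKD

print(f"Total height saved: {total_height_saved_m:.0f} m")             # 29 m
print(f"Estimated structural saving: HK${estimated_saving_hkd:,.0f}")  # HK$29,000,000
# A reduction of roughly half a metre per floor is consistent with the
# ~HK$30 million saving quoted for the 58 office floors.
```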
In order to reduce large shear reversals in the core walls in the basement and at the top of the tower base, the floor slabs and beams at the ground floor, basement levels 1 and 2, and the 5th and 6th floors are separated horizontally from the core walls. Another advantage of the reinforced concrete structure is its flexibility: using the table-form system, changes in structural layout, member sizes and height could be accommodated according to site conditions. Trivia. This skyscraper was visited in the seventh leg of the reality TV show "The Amazing Race 2", which described Central Plaza as "the tallest building in Hong Kong" (despite this being inaccurate). Although contestants were told to reach the top floor, the actual task was performed on the 46th floor.
7018
7903804
https://en.wikipedia.org/wiki?curid=7018
Caravaggio
Michelangelo Merisi da Caravaggio (also Michele Angelo Merigi or Amerighi da Caravaggio; 29 September 1571 – 18 July 1610), known mononymously as Caravaggio, was an Italian painter active in Rome for most of his artistic life. During the final four years of his life, he moved between Naples, Malta, and Sicily. His paintings have been characterized by art critics as combining a realistic observation of the human state, both physical and emotional, with a dramatic use of lighting, which had a formative influence on Baroque painting. Caravaggio employed close physical observation with a dramatic use of chiaroscuro that came to be known as tenebrism. He made the technique a dominant stylistic element, transfixing subjects in bright shafts of light and darkening shadows. Caravaggio vividly expressed crucial moments and scenes, often featuring violent struggles, torture, and death. He worked rapidly with live models, preferring to forgo drawings and work directly onto the canvas. His inspiring effect on the new Baroque style that emerged from Mannerism was profound. His influence can be seen directly or indirectly in the work of Peter Paul Rubens, Jusepe de Ribera, Gian Lorenzo Bernini, and Rembrandt. Artists heavily under his influence were called the "Caravaggisti" (or "Caravagesques"), as well as tenebrists or "tenebrosi" ("shadowists"). Caravaggio trained as a painter in Milan before moving to Rome when he was in his twenties. He developed a considerable name as an artist and as a violent, touchy and provocative man. He killed Ranuccio Tommasoni in a brawl, which led to a death sentence for murder and forced him to flee to Naples. There he again established himself as one of the most prominent Italian painters of his generation. He travelled to Malta and on to Sicily in 1607 and pursued a papal pardon for his sentence. In 1609, he returned to Naples, where he was involved in a violent clash; his face was disfigured, and rumours of his death circulated. Questions about his mental state arose from his erratic and bizarre behavior. He died in 1610 under uncertain circumstances while on his way from Naples to Rome. Reports stated that he died of a fever, but suggestions have been made that he was murdered or that he died of lead poisoning. Caravaggio's innovations inspired Baroque painting, but the latter incorporated the drama of his chiaroscuro without the psychological realism. The style evolved and fashions changed, and Caravaggio fell out of favour. In the 20th century, interest in his work revived, and his importance to the development of Western art was reevaluated. The 20th-century art historian stated: "What begins in the work of Caravaggio is, quite simply, modern painting." Biography. Early life (1571–1592). Caravaggio (Michelangelo Merisi or Amerighi) was born in Milan, where his father, Fermo (Fermo Merixio), was a household administrator and architect-decorator to the marquess of Caravaggio, a town to the east of Milan and south of Bergamo. In 1576 the family moved to Caravaggio to escape a plague that ravaged Milan, and Caravaggio's father and grandfather both died there on the same day in 1577. It is assumed that the artist grew up in Caravaggio, but his family kept up connections with the Sforzas and the powerful Colonna family, who were allied by marriage with the Sforzas and destined to play a major role later in Caravaggio's life. Caravaggio's mother had to raise all of her five children in poverty. 
She died in 1584, the same year he began his four-year apprenticeship to the Milanese painter Simone Peterzano, described in the contract of apprenticeship as a pupil of Titian. Caravaggio appears to have stayed in the Milan-Caravaggio area after his apprenticeship ended, but it is possible that he visited Venice and saw the works of Giorgione, whom Federico Zuccari later accused him of imitating, and Titian. He would also have become familiar with the art treasures of Milan, including Leonardo da Vinci's "Last Supper", and with the regional Lombard art, a style that valued simplicity and attention to naturalistic detail and was closer to the naturalism of Germany than to the stylised formality and grandeur of Roman Mannerism. Beginnings in Rome (1592/95–1600). Following his initial training under Simone Peterzano, in 1592, Caravaggio left Milan for Rome in flight after "certain quarrels" and the wounding of a police officer. The young artist arrived in Rome "naked and extremely needy... without fixed address and without provision... short of money." During this period, he stayed with the miserly Pandolfo Pucci, known as "monsignor Insalata". A few months later he was performing hack-work for the highly successful Giuseppe Cesari, Pope Clement VIII's favourite artist, "painting flowers and fruit" in his factory-like workshop. In Rome, there was a demand for paintings to fill the many huge new churches and palaces being built at the time. It was also a period when the Church was searching for a stylistic alternative to Mannerism in religious art that was tasked to counter the threat of Protestantism. Caravaggio's innovation was a radical naturalism that combined close physical observation with a dramatic, even theatrical, use of chiaroscuro that came to be known as tenebrism (the shift from light to dark with little intermediate value). Known works from this period include the small "Boy Peeling a Fruit" (his earliest known painting), "Boy with a Basket of Fruit", and "Young Sick Bacchus", supposedly a self-portrait done during convalescence from a serious illness that ended his employment with Cesari. All three demonstrate the physical particularity for which Caravaggio was to become renowned: the fruit-basket-boy's produce has been analyzed by a professor of horticulture, who was able to identify individual cultivars right down to "...a large fig leaf with a prominent fungal scorch lesion resembling anthracnose ("Glomerella cingulata")." Caravaggio left Cesari, determined to make his own way after a heated argument. At this point he forged some extremely important friendships, with the painter Prospero Orsi, the architect Onorio Longhi, and the sixteen-year-old Sicilian artist Mario Minniti. Orsi, established in the profession, introduced him to influential collectors; Longhi, more balefully, introduced him to the world of Roman street brawls. Minniti served Caravaggio as a model and, years later, would be instrumental in helping him to obtain important commissions in Sicily. Ostensibly, the first archival reference to Caravaggio in a contemporary document from Rome is the listing of his name, with that of Prospero Orsi as his partner, as an 'assistant' in a procession in October 1594 in honour of St. Luke. The earliest informative account of his life in the city is a court transcript dated 11 July 1597, when Caravaggio and Prospero Orsi were witnesses to a crime near San Luigi de' Francesi. 
"The Fortune Teller", his first composition with more than one figure, shows a boy, likely Minniti, having his palm read by a Romani girl, who is stealthily removing his ring as she strokes his hand. The theme was quite new for Rome and proved immensely influential over the next century and beyond. However, at the time, Caravaggio sold it for practically nothing. "The Cardsharps"—showing another naïve youth of privilege falling victim to card cheats—is even more psychologically complex and perhaps Caravaggio's first true masterpiece. Like "The Fortune Teller", it was immensely popular, and over 50 copies survived. More importantly, it attracted the patronage of Cardinal Francesco Maria del Monte, one of the leading connoisseurs in Rome. For del Monte and his wealthy art-loving circle, Caravaggio executed a number of intimate chamber-pieces—"The Musicians", "The Lute Player", a tipsy "Bacchus", and an allegorical but realistic "Boy Bitten by a Lizard"—featuring Minniti and other adolescent models. Caravaggio's first paintings on religious themes returned to realism and the emergence of remarkable spirituality. The first of these was the "Penitent Magdalene", showing Mary Magdalene at the moment when she has turned from her life as a courtesan and sits weeping on the floor, her jewels scattered around her. "It seemed not a religious painting at all ... a girl sitting on a low wooden stool drying her hair ... Where was the repentance ... suffering ... promise of salvation?" It was understated, in the Lombard manner, not histrionic in the Roman manner of the time. It was followed by others in the same style: "Saint Catherine"; "Martha and Mary Magdalene"; "Judith Beheading Holofernes"; "Sacrifice of Isaac"; "Saint Francis of Assisi in Ecstasy"; and "Rest on the Flight into Egypt". These works, while viewed by a comparatively limited circle, increased Caravaggio's fame with both connoisseurs and his fellow artists. But a true reputation would depend on public commissions, for which it was necessary to look to the Church. Already evident was the intense realism or naturalism for which Caravaggio is now famous. He preferred to paint his subjects as the eye sees them, with all their natural flaws and defects, instead of as idealised creations. This allowed a full display of his virtuosic talents. This shift from accepted standard practice and the classical idealism of Michelangelo was very controversial at the time. Caravaggio also dispensed with the lengthy preparations for a painting that were traditional in central Italy at the time. Instead, he preferred the Venetian practice of working in oils directly from the subject—half-length figures and still life. "Supper at Emmaus", from , is a characteristic work of this period demonstrating his virtuoso talent. "Most famous painter in Rome" (1600–1606). In 1599, presumably through the influence of del Monte, Caravaggio was contracted to decorate the Contarelli Chapel in the church of San Luigi dei Francesi. The two works making up the commission, "The Martyrdom of Saint Matthew" and "The Calling of Saint Matthew", delivered in 1600, were an immediate sensation. Thereafter he never lacked commissions or patrons. Caravaggio's tenebrism (a heightened chiaroscuro) brought high drama to his subjects, while his acutely observed realism brought a new level of emotional intensity. Opinion among his artist peers was polarized. 
Some denounced him for various perceived failings, notably his insistence on painting from life, without drawings, but for the most part he was hailed as a great artistic visionary: "The painters then in Rome were greatly taken by this novelty, and the young ones particularly gathered around him, praised him as the unique imitator of nature, and looked on his work as miracles." Caravaggio went on to secure a string of prestigious commissions for religious works featuring violent struggles, grotesque decapitations, torture, and death. Most notable and technically masterful among them were "The Incredulity of Saint Thomas" (circa 1601) and "The Taking of Christ" (circa 1602), the latter only rediscovered in the 1990s in Dublin after remaining unrecognized for two centuries. For the most part, each new painting increased his fame, but a few were rejected by the various bodies for whom they were intended, at least in their original forms, and had to be re-painted or find new buyers. The essence of the problem was that while Caravaggio's dramatic intensity was appreciated, his realism was seen by some as unacceptably vulgar. His first version of "Saint Matthew and the Angel", featuring the saint as a bald peasant with dirty legs attended by a lightly clad over-familiar boy-angel, was rejected and a second version had to be painted as "The Inspiration of Saint Matthew". Similarly, "The Conversion of Saint Paul" was rejected, and while another version of the same subject, the "Conversion on the Way to Damascus", was accepted, it featured the saint's horse's haunches far more prominently than the saint himself, prompting this exchange between the artist and an exasperated official of Santa Maria del Popolo: "Why have you put a horse in the middle, and Saint Paul on the ground?" "Because!" "Is the horse God?" "No, but he stands in God's light!" The aristocratic collector Ciriaco Mattei, brother of Cardinal Girolamo Mattei, who was friends with Cardinal Francesco Maria Bourbon Del Monte, commissioned from Caravaggio a series of works for the city palace he shared with his brother: "The Supper at Emmaus", 1601 (National Gallery, London); "The Incredulity of Saint Thomas" ("ecclesiastical version"; private collection, Florence); "The Incredulity of Saint Thomas", 1601 ("secular version"; Sanssouci Palace, Potsdam); "John the Baptist with the Ram", 1602 (Capitoline Museums, Rome); and "The Taking of Christ", 1602 (National Gallery of Ireland, Dublin). The second version of "The Taking of Christ", which was looted from the Odessa Museum in 2008 and recovered in 2010, is believed by some experts to be a contemporary copy. "The Incredulity of Saint Thomas" is one of the most famous paintings by Caravaggio, circa 1601–1602. It entered the Prussian Royal Collection, survived the Second World War unscathed, and can be viewed in the Palais in Sanssouci, Potsdam. The painting depicts the episode that led to the term "Doubting Thomas"—in art history formally known as "The Incredulity of Saint Thomas"—which has been frequently depicted and used to make various theological statements in Christian art since at least the 5th century. According to the Gospel of John, Thomas the Apostle missed one of Jesus' appearances to the apostles after his resurrection and said, "Unless I see the marks of the nails in his hands, and put my finger where the nails were, and put my hand into his side, I will not believe it." A week later, Jesus appeared and told Thomas to touch him and stop doubting. 
Then Jesus said, "Because you have seen me, you have believed; blessed are those who have not seen and yet have believed." The painting shows in a demonstrative gesture how the doubting apostle puts his finger into Christ's side wound, the latter guiding his hand. The unbeliever is depicted like a peasant, dressed in a robe torn at the shoulder and with dirt under his fingernails. The composition of the picture is designed in such a way that the viewer is directly involved in the event and also feels its intensity. Other works included "Entombment", the "Madonna di Loreto" ("Madonna of the Pilgrims"), the "Grooms' Madonna", and "Death of the Virgin". The history of these last two paintings illustrates the reception given to some of Caravaggio's art and the times in which he lived. The "Grooms' Madonna", also known as "Madonna dei palafrenieri", painted for a small altar in Saint Peter's Basilica in Rome, remained there for just two days and was then removed. A cardinal's secretary wrote: "In this painting, there are but vulgarity, sacrilege, impiousness and disgust...One would say it is a work made by a painter that can paint well, but of a dark spirit, and who has been for a lot of time far from God, from His adoration, and from any good thought..." "Death of the Virgin", commissioned in 1601 by a wealthy jurist for his private chapel in the new Carmelite church of Santa Maria della Scala, was rejected by the Carmelites in 1606. Caravaggio's contemporary Giulio Mancini records that it was rejected because Caravaggio had used a well-known prostitute as his model for the Virgin. Giovanni Baglione, another contemporary, tells that it was due to Mary's bare legs—a matter of decorum in either case. Caravaggio scholar John Gash suggests that the problem for the Carmelites may have been theological rather than aesthetic, in that Caravaggio's version fails to assert the doctrine of the Assumption of Mary, the idea that the Mother of God did not die in any ordinary sense but was assumed into Heaven. The replacement altarpiece commissioned (from one of Caravaggio's most able followers, Carlo Saraceni), showed the Virgin not dead, as Caravaggio had painted her, but seated and dying; and even this was rejected, and replaced with a work showing the Virgin not dying, but ascending into Heaven with choirs of angels. In any case, the rejection did not mean that Caravaggio or his paintings were out of favour. "Death of the Virgin" was no sooner taken out of the church than it was purchased by the Duke of Mantua, on the advice of Rubens, and later acquired by Charles I of England before entering the French royal collection in 1671. One secular piece from these years is "Amor Vincit Omnia", in English also called "Amor Victorious", painted in 1602 for Vincenzo Giustiniani, a member of del Monte's circle. The model was named in a memoir of the early 17th century as "Cecco", the diminutive for Francesco. He is possibly Francesco Boneri, identified with an artist active in the period 1610–1625 and known as Cecco del Caravaggio ('Caravaggio's Cecco'), carrying a bow and arrows and trampling symbols of the warlike and peaceful arts and sciences underfoot. He is unclothed, and it is difficult to accept this grinning urchin as the Roman god Cupid—as difficult as it was to accept Caravaggio's other semi-clad adolescents as the various angels he painted in his canvases, wearing much the same stage-prop wings. 
The point, however, is the intense yet ambiguous reality of the work: it is simultaneously Cupid and Cecco, as Caravaggio's Virgins were simultaneously the Mother of Christ and the Roman courtesans who modeled for them. Legal problems and flight from Rome (1606). Caravaggio led a tumultuous life. He was notorious for brawling, even in a time and place when such behavior was commonplace, and the transcripts of his police records and trial proceedings fill many pages. Bellori claims that around 1590–1592, Caravaggio, already well known for brawling with gangs of young men, committed a murder which forced him to flee from Milan, first to Venice and then to Rome. On 28 November 1600, while living at the Palazzo Madama with his patron Cardinal Del Monte, Caravaggio beat nobleman Girolamo Stampa da Montepulciano, a guest of the cardinal, with a club, resulting in an official complaint to the police. Episodes of brawling, violence, and tumult grew more and more frequent. Caravaggio was often arrested and jailed at Tor di Nona. After his release from jail in 1601, Caravaggio returned to paint first "The Taking of Christ" and then "Amor Vincit Omnia". In 1603, he was arrested again, this time for the defamation of another painter, Giovanni Baglione, who sued Caravaggio and his followers Orazio Gentileschi and Onorio Longhi for writing offensive poems about him. The French ambassador intervened, and Caravaggio was transferred to house arrest after a month in jail in Tor di Nona. Between May and October 1604, Caravaggio was arrested several times for possession of illegal weapons and for insulting the city guards. He was also sued by a tavern waiter for having thrown a plate of artichokes in his face. An early published notice on Caravaggio, dating from 1604 and describing his lifestyle three years previously, recounts that "after a fortnight's work he will swagger about for a month or two with a sword at his side and a servant following him, from one ball-court to the next, ever ready to engage in a fight or an argument, so that it is most awkward to get along with him." In 1605, Caravaggio was forced to flee to Genoa for three weeks after seriously injuring Mariano Pasqualone di Accumoli, a notary, in a dispute over Lena, Caravaggio's model and lover. The notary reported having been attacked on 29 July with a sword, causing a severe head injury. Caravaggio's patrons intervened and managed to cover up the incident. Upon his return to Rome, Caravaggio was sued by his landlady Prudenzia Bruni for not having paid his rent. Out of spite, Caravaggio threw rocks through her window at night and was sued again. In November, Caravaggio was hospitalized for an injury which he claimed he had caused himself by falling on his own sword. On 29 May 1606, Caravaggio killed a young man, possibly unintentionally, resulting in his fleeing Rome with a death sentence hanging over him. Ranuccio Tomassoni was a gangster from a wealthy family. The two had argued many times, often ending in blows. The circumstances are unclear, whether a brawl or a duel with swords at Campo Marzio, but the killing may have been unintentional. Many rumours circulated at the time as to the cause of the fight. Several contemporary "avvisi" referred to a quarrel over a gambling debt and a pallacorda game, a sort of tennis, and this explanation has become established in the popular imagination. 
Other rumours, however, claimed that the duel stemmed from jealousy over Fillide Melandroni, a well-known Roman prostitute who had modeled for him in several important paintings; Tomassoni was her pimp. According to such rumours, Caravaggio castrated Tomassoni with his sword before deliberately killing him, with other versions claiming that Tomassoni's death had been caused accidentally during the castration. The duel may have had a political dimension, as Tomassoni's family was notoriously pro-Spanish, whereas Caravaggio was a client of the French ambassador. Caravaggio's patrons had hitherto been able to shield him from any serious consequences of his frequent duels and brawling, but Tomassoni's wealthy family was outraged by his death and demanded justice. Caravaggio's patrons were unable to protect him. Caravaggio was sentenced to beheading for murder, and an open bounty was decreed, enabling anyone who recognized him to carry out the sentence legally. Caravaggio's paintings began, obsessively, to depict severed heads, often his own, at this time. Modern accounts are to be found in Peter Robb's "M" and Helen Langdon's "Caravaggio: A Life". A theory relating the death to Renaissance notions of honour and symbolic wounding has been advanced by art historian Andrew Graham-Dixon. Whatever the details, the matter was serious enough that Caravaggio was forced to flee Rome. He moved just south of the city, then to Naples, Malta, and Sicily. Exile and death (1606–1610). Naples. Following the death of Tomassoni, Caravaggio fled first to the estates of the Colonna family south of Rome and then on to Naples, where Costanza Colonna Sforza, widow of Francesco Sforza, in whose husband's household Caravaggio's father had held a position, maintained a palace. In Naples, outside the jurisdiction of the Roman authorities and protected by the Colonna family, the most famous painter in Rome became the most famous in Naples. His connections with the Colonnas led to a stream of important church commissions, including the "Madonna of the Rosary", and "The Seven Works of Mercy". "The Seven Works of Mercy" depicts the seven corporal works of mercy as a set of compassionate acts concerning the material needs of others. The painting was made for and is still housed in the church of Pio Monte della Misericordia in Naples. Caravaggio combined all seven works of mercy in one composition, which became the church's altarpiece. Alessandro Giardino has also established the connection between the iconography of "The Seven Works of Mercy" and the cultural, scientific and philosophical circles of the painting's commissioners. Malta. Despite his success in Naples, after only a few months in the city Caravaggio left for Hospitaller Malta, the headquarters of the Knights of Malta. Fabrizio Sforza Colonna, Costanza's son, was a Knight of Malta and general of the Order's galleys. He appears to have facilitated Caravaggio's arrival on the island in 1607 (and his escape the next year). Caravaggio presumably hoped that the patronage of Alof de Wignacourt, Grand Master of the Knights of Saint John, could help him secure a pardon for Tomassoni's death. Wignacourt was so impressed at having the artist as official painter to the Order that he inducted him as a Knight, and the early biographer Bellori records that the artist was well pleased with his success. Wignacourt reportedly gifted some slaves to Caravaggio in recognition for his services. 
Major works from his Malta period include the "Beheading of Saint John the Baptist" (his largest ever work, and the only painting to which he put his signature) and "Saint Jerome Writing" (both housed in Saint John's Co-Cathedral, Valletta, Malta), as well as a "Portrait of Alof de Wignacourt and his Page" and portraits of other leading Knights. According to Andrea Pomella, "The Beheading of Saint John the Baptist" is widely considered "one of the most important works in Western painting." Completed in 1608, the painting had been commissioned by the Knights of Malta as an altarpiece and was the largest altarpiece Caravaggio ever painted. It still hangs in St. John's Co-Cathedral, for which it was commissioned and where Caravaggio himself was inducted and briefly served as a knight. Yet, by late August 1608, he was arrested and imprisoned, likely the result of yet another brawl, this time with an aristocratic knight, during which the door of a house was battered down and the knight seriously wounded. Caravaggio was imprisoned by the Knights at Valletta, but he managed to escape. By December, he had been expelled from the Order "as a foul and rotten member", a formal phrase used in all such cases. Sicily. Caravaggio made his way to Sicily where he met his old friend Mario Minniti, who was now married and living in Syracuse. Together they set off on what amounted to a triumphal tour from Syracuse to Messina and, maybe, on to the island capital, Palermo. In Syracuse and Messina Caravaggio continued to win prestigious and well-paid commissions. Among other works from this period are "Burial of St. Lucy", "The Raising of Lazarus", and "Adoration of the Shepherds". His style continued to evolve, now showing friezes of figures isolated against vast empty backgrounds. "His great Sicilian altarpieces isolate their shadowy, pitifully poor figures in vast areas of darkness; they suggest the desperate fears and frailty of man, and at the same time convey, with a new yet desolate tenderness, the beauty of humility and of the meek, who shall inherit the earth." Contemporary reports depict a man whose behaviour was becoming increasingly bizarre, which included sleeping fully armed and in his clothes, ripping up a painting at a slight word of criticism, and mocking local painters. Caravaggio displayed bizarre behaviour from very early in his career. Mancini describes him as "extremely crazy", a letter from Del Monte notes his strangeness, and Minniti's 1724 biographer says that Mario left Caravaggio because of his behaviour. The strangeness seems to have increased after Malta. Susinno's early-18th-century "Le vite de' pittori Messinesi" ("Lives of the Painters of Messina") provides several colourful anecdotes of Caravaggio's erratic behaviour in Sicily, and these are reproduced in modern full-length biographies such as Langdon and Robb. Bellori writes of Caravaggio's "fear" driving him from city to city across the island and finally, "feeling that it was no longer safe to remain", back to Naples. Baglione says Caravaggio was being "chased by his enemy", but like Bellori does not say who this enemy was. Return to Naples. After only nine months in Sicily, Caravaggio returned to Naples in the late summer of 1609. According to his earliest biographer, he was being pursued by enemies while in Sicily and felt it safest to place himself under the protection of the Colonnas until he could secure his pardon from the pope (now Paul V) and return to Rome. 
In Naples he painted "The Denial of Saint Peter", a final "John the Baptist (Borghese)", and his last picture, "The Martyrdom of Saint Ursula". His style continued to evolve—Saint Ursula is caught in a moment of highest action and drama, as the arrow fired by the king of the Huns strikes her in the breast, unlike earlier paintings that had all the immobility of the posed models. The brushwork was also much freer and more impressionistic. In October 1609, he was involved in a violent clash, an attempt on his life, perhaps ambushed by men in the pay of the knight he had wounded in Malta or some other faction of the Order. His face was seriously disfigured and rumours circulated in Rome that he was dead. He painted a "Salome with the Head of John the Baptist (Madrid)", showing his own head on a platter, and sent it to Wignacourt as a plea for forgiveness. Perhaps at this time, he also painted a "David with the Head of Goliath", showing the young David with a strangely sorrowful expression gazing at the severed head of the giant, which is again Caravaggio. This painting he may have sent to his patron, the unscrupulous art-loving Cardinal Scipione Borghese, nephew of the pope, who had the power to grant or withhold pardons. Caravaggio hoped Borghese could mediate a pardon in exchange for works by the artist. News from Rome encouraged Caravaggio, and in the summer of 1610, he took a boat northwards to receive the pardon, which seemed imminent thanks to his powerful Roman friends. With him were three last paintings, the gifts for Cardinal Scipione. What happened next is the subject of much confusion and conjecture, shrouded in mystery. The bare facts seem to be that on 28 July, an anonymous "avviso" (private newsletter) from Rome to the ducal court of Urbino reported that Caravaggio was dead. Three days later, another "avviso" said that he had died of fever on his way from Naples to Rome. A poet friend of the artist later gave 18 July as the date of death, and a recent researcher claims to have discovered a death notice showing that the artist died on that day of a fever in Porto Ercole, near Grosseto in Tuscany. Death. Caravaggio had a fever at the time of his death, and what killed him was a matter of controversy and rumour at the time and has been a matter of historical debate and study since. Contemporary rumours held that either the Tomassoni family or the Knights had him killed in revenge. Historians long thought he died of syphilis. Some have said he had malaria, or possibly brucellosis from unpasteurised dairy. Some scholars have argued that Caravaggio was actually attacked and killed by the same "enemies" that had been pursuing him since he fled Malta, possibly Wignacourt or factions of the Knights. Caravaggio's remains were buried in Porto Ercole's San Sebastiano cemetery, which closed in 1956, and then moved to St. Erasmus cemetery, where, in 2010, archaeologists conducted a year-long investigation of remains found in three crypts. Using DNA, carbon dating, and other methods, they believe with a high degree of confidence that they have identified the remains of Caravaggio. Initial tests suggested Caravaggio might have died of lead poisoning—paints used at the time contained high amounts of lead salts, and Caravaggio is known to have indulged in the kind of violent behavior that lead poisoning can cause. Later research concluded he died as the result of a wound sustained in a brawl in Naples, specifically from sepsis caused by Staphylococcus aureus. 
Vatican documents released in 2002 support the theory that the wealthy Tomassoni family had him hunted down and killed as a vendetta for Caravaggio's murder of gangster Ranuccio Tomassoni, in a botched attempt at castration after a duel over the affections of model Fillide Melandroni. Sexuality. Since the 1970s, art scholars and historians have debated the inferences of homoeroticism in Caravaggio's works as a way to better understand the man. Caravaggio never married and had no known children, and Howard Hibbard observed the absence of erotic female figures in the artist's oeuvre: "In his entire career he did not paint a single female nude", and the cabinet-pieces from the Del Monte period are replete with "full-lipped, languorous boys ... who seem to solicit the onlooker with their offers of fruit, wine, flowers—and themselves", suggesting an erotic interest in the male form. The model for "Amor vincit omnia", Cecco del Caravaggio, lived with the artist in Rome and stayed with him even after he was obliged to leave the city in 1606. The two may have been lovers. A connection with a certain Lena is mentioned in a 1605 court deposition by Pasqualone, where she is described as "Michelangelo's girl". According to G. B. Passeri, this 'Lena' was Caravaggio's model for the "Madonna di Loreto"; and according to Catherine Puglisi, 'Lena' may have been the same person as the courtesan Maddalena di Paolo Antognetti, who named Caravaggio as an "intimate friend" by her own testimony in 1604. Caravaggio was also rumoured to be madly in love with Fillide Melandroni, a well-known Roman prostitute who modeled for him in several important paintings. Caravaggio's sexuality was also the subject of early speculation due to claims about the artist made by Honoré Gabriel Riqueti, comte de Mirabeau. Writing in 1783, Mirabeau contrasted the personal life of Caravaggio directly with the writings of St Paul in the Book of Romans, arguing that "Romans" excessively practiced sodomy or homosexuality. "The Holy Mother Catholic Church teachings on morality" contains the Latin phrase "Et fœminæ eorum immutaverunt naturalem usum in eum usum qui est contra naturam." ("and their women changed their natural habit to that which is against nature"). The phrase, according to Mirabeau, entered Caravaggio's thoughts, and he claimed that such an "abomination" could be witnessed through a particular painting housed at the Museum of the Grand Duke of Tuscany—featuring a rosary of a blasphemous nature, in which a circle of thirty men ("turpiter ligati") are intertwined in embrace and presented in unbridled composition. Mirabeau notes that the affectionate nature of Caravaggio's depiction reflects the voluptuous glow of the artist's sexuality. By the late nineteenth century, Sir Richard Francis Burton identified the painting as Caravaggio's painting of St. Rosario. Burton also identified both St. Rosario and this painting with the practices of Tiberius mentioned by Seneca the Younger. The survival status and location of Caravaggio's painting are unknown. No such painting appears in his or his school's catalogues. Aside from the paintings, evidence also comes from the libel trial brought against Caravaggio by Giovanni Baglione in 1603. 
Baglione accused Caravaggio and his friends of writing and distributing scurrilous doggerel attacking him; the pamphlets, according to Baglione's friend and witness Mao Salini, had been distributed by a certain Giovanni Battista, a "bardassa", or boy prostitute, shared by Caravaggio and his friend Onorio Longhi. Caravaggio denied knowing any young boy of that name, and the allegation was not followed up. Baglione's painting of "Divine Love" has also been seen as a visual accusation of sodomy against Caravaggio. Such accusations were damaging and dangerous as sodomy was a capital crime at the time. Even though the authorities were unlikely to investigate such a well-connected person as Caravaggio, "Once an artist had been smeared as a pederast, his work was smeared too." Francesco Susino in his later biography additionally relates the story of how the artist was chased by a schoolmaster in Sicily for spending too long gazing at the boys in his care. Susino presents it as a misunderstanding, but some authors have speculated that Caravaggio may indeed have been seeking sex with the boys, using the incident to explain some of his paintings which they believe to be homoerotic. The art historian Andrew Graham-Dixon has summarised the debate: A lot has been made of Caravaggio's presumed homosexuality, which has in more than one previous account of his life been presented as the single key that explains everything, both the power of his art and the misfortunes of his life. There is no absolute proof of it, only strong circumstantial evidence and much rumour. The balance of probability suggests that Caravaggio did indeed have sexual relations with men. But he certainly had female lovers. Throughout the years that he spent in Rome, he kept close company with a number of prostitutes. The truth is that Caravaggio was as uneasy in his relationships as he was in most other aspects of life. He likely slept with men. He did sleep with women. He settled with no one... [but] the idea that he was an early martyr to the drives of an unconventional sexuality is an anachronistic fiction. "Washington Post" art critic Philip Kennicott has taken issue with what he regarded as Graham-Dixon's minimizing of Caravaggio's homosexuality: There was a fussiness to the tone whenever a scholar or curator was forced to grapple with transgressive sexuality, and you can still find it even in relatively recent histories, including Andrew Graham-Dixon's 2010 biography of Caravaggio, which acknowledges only that "he likely slept with men". The author notes the artist's fluid sexual desires but gives some of Caravaggio's most explicitly homoerotic paintings tortured readings to keep them safely in the category of mere "ambiguity". As an artist. The birth of Baroque. Caravaggio "put the oscuro (shadows) into chiaroscuro". Chiaroscuro was practised long before he came on the scene, but it was Caravaggio who made the technique a dominant stylistic element, darkening the shadows and transfixing the subject in a blinding shaft of light. With this came the acute observation of physical and psychological reality that formed the ground both for his immense popularity and for his frequent problems with his religious commissions. He worked at great speed, from live models, scoring basic guides directly onto the canvas with the end of the brush handle; very few of Caravaggio's drawings appear to have survived, and it is likely that he preferred to work directly on the canvas, an unusual approach at the time. 
His models were basic to his realism; some have been identified, including Mario Minniti and Francesco Boneri, both fellow artists, Minniti appearing as various figures in the early secular works, the young Boneri as a succession of angels, Baptists and Davids in the later canvasses. His female models include Fillide Melandroni, Anna Bianchini, and Maddalena Antognetti (the "Lena" mentioned in court documents of the "artichoke" case as Caravaggio's concubine), all well-known prostitutes, who appear as female religious figures including the Virgin and various saints. Caravaggio himself appears in several paintings, his final self-portrait being as the witness on the far right to the "Martyrdom of Saint Ursula". Caravaggio had a noteworthy ability to express in one scene of unsurpassed vividness the passing of a crucial moment. "The Supper at Emmaus" depicts the recognition of Christ by his disciples: a moment before he is a fellow traveller, mourning the passing of the Messiah, as he never ceases to be to the innkeeper's eyes; the second after, he is the Saviour. In "The Calling of St Matthew", the hand of the Saint points to himself as if he were saying, "who, me?", while his eyes, fixed upon the figure of Christ, have already said, "Yes, I will follow you". With "The Resurrection of Lazarus", he goes a step further, giving a glimpse of the actual physical process of resurrection. The body of Lazarus is still in the throes of rigor mortis, but his hand, facing and recognising that of Christ, is alive. Other major Baroque artists would travel the same path, for example Bernini, fascinated with themes from Ovid's "Metamorphoses". The Caravaggisti. The installation of the St. Matthew paintings in the Contarelli Chapel had an immediate impact among the younger artists in Rome, and Caravaggism became the cutting edge for every ambitious young painter. The first Caravaggisti included Orazio Gentileschi and Giovanni Baglione. Baglione's Caravaggio phase was short-lived; Caravaggio later accused him of plagiarism and the two were involved in a long feud. Baglione went on to write the first biography of Caravaggio. In the next generation of Caravaggisti, there were Carlo Saraceni, Bartolomeo Manfredi and Orazio Borgianni. Gentileschi, despite being considerably older, was the only one of these artists to live much beyond 1620 and ended up as a court painter to Charles I of England. His daughter Artemisia Gentileschi was also stylistically close to Caravaggio and one of the most gifted of the movement. However, in Rome and Italy, it was not Caravaggio, but the influence of his rival Annibale Carracci, blending elements from the High Renaissance and Lombard realism, that ultimately triumphed. Caravaggio's brief stay in Naples produced a notable school of Neapolitan Caravaggisti, including Battistello Caracciolo and Carlo Sellitto. The Caravaggisti movement there ended with a terrible outbreak of plague in 1656, but the Spanish connection—Naples was a possession of Spain—was instrumental in forming the important Spanish branch of his influence. Rubens was likely one of the first Flemish artists to be influenced by Caravaggio whose work he got to know during his stay in Rome in 1601. He later painted a copy (or rather an interpretation) of Caravaggio's "Entombment of Christ" and recommended his patron, the Duke of Mantua, purchase "Death of the Virgin" (Louvre). 
Although some of this interest in Caravaggio is reflected in the drawings Rubens made during his Italian residence, it was only after his return to Antwerp in 1608 that his works showed openly Caravaggesque traits, as in a painting of 1608–1609 now in the Courtauld Institute of Art and another of 1618–1619 in the Mauritshuis. However, the influence of Caravaggio on Rubens' work would be less important than that of Raphael, Correggio, Barocci and the Venetians. Flemish artists influenced by Rubens, such as Jacob Jordaens, Pieter van Mol, Gaspar de Crayer and Willem Jacob Herreyns, also used the stark realism and strong contrasts of light and shadow common to the Caravaggesque style. A number of Catholic artists from Utrecht, including Hendrick ter Brugghen, Gerrit van Honthorst and Dirck van Baburen, travelled to Rome in the first decades of the 17th century. There they became profoundly influenced by the work of Caravaggio and his followers. On their return to Utrecht, their Caravaggesque paintings inspired a short-lived but influential flowering of work indebted, in style and subject matter, to Caravaggio and his Italian followers. This style of painting was later referred to as Utrecht Caravaggism. In the following generation of Dutch artists the effects of Caravaggio, although attenuated, are to be seen in the work of Vermeer and Rembrandt, neither of whom visited Italy. Death and rebirth of a reputation. Caravaggio's innovations inspired the Baroque, but the Baroque took the drama of his chiaroscuro without the psychological realism. While he directly influenced the style of the artists mentioned above, and, at a distance, the Frenchmen Georges de La Tour and Simon Vouet, and the Spaniard Jusepe de Ribera, within a few decades his works were being ascribed to less scandalous artists, or simply overlooked. The Baroque, to which he contributed so much, had evolved, and fashions had changed, but perhaps more pertinently, Caravaggio never established a workshop as the Carracci did and thus had no school to spread his techniques. Nor did he ever set out his underlying philosophical approach to art, the psychological realism that may only be deduced from his surviving work. Thus his reputation was doubly vulnerable to the unsympathetic critiques of his earliest biographers, Giovanni Baglione, a rival painter with a vendetta, and the influential 17th-century critic Gian Pietro Bellori, who had not known him but was under the influence of the earlier Giovanni Battista Agucchi and Bellori's friend Poussin, in preferring the "classical-idealistic" tradition of the Bolognese school led by the Carracci. Baglione, his first biographer, played a considerable part in creating the legend of Caravaggio's unstable and violent character, as well as his inability to draw. In the 1920s, art critic Roberto Longhi brought Caravaggio's name once more to the foreground and placed him in the European tradition: "Ribera, Vermeer, La Tour and Rembrandt could never have existed without him. And the art of Delacroix, Courbet and Manet would have been utterly different". The influential Bernard Berenson agreed: "With the exception of Michelangelo, no other Italian painter exercised so great an influence." Epitaph. Caravaggio's epitaph was composed by his friend Marzio Milesi. He was commemorated on the front of the "Banca d'Italia" 100,000-lire banknote in the 1980s and '90s (before Italy switched to the euro), with the back showing his "Basket of Fruit". Oeuvre. 
There is disagreement as to the size of Caravaggio's oeuvre, with counts as low as 40 and as high as 80. In his monograph of 1983, the Caravaggio scholar Alfred Moir wrote, "The forty-eight color plates in this book include almost all of the surviving works accepted by every Caravaggio expert as autograph, and even the least demanding would add fewer than a dozen more", but there have been some generally accepted additions since then. One, "The Calling of Saints Peter and Andrew", was in 2006 authenticated and restored; it had been in storage in Hampton Court, mislabeled as a copy. Richard Francis Burton writes of a "picture of St. Rosario (in the museum of the Grand Duke of Tuscany), showing a circle of thirty men "turpiter ligati"" ("lewdly banded"), which is not known to have survived. The rejected version of "Saint Matthew and the Angel", intended for the Contarelli Chapel in San Luigi dei Francesi in Rome, was destroyed during the bombing of Dresden, though black and white photographs of the work exist. In June 2011 it was announced that a previously unknown Caravaggio painting of Saint Augustine dating to about 1600 had been discovered in a private collection in Britain. Called a "significant discovery", the painting had never been published and is thought to have been commissioned by Vincenzo Giustiniani, a patron of the painter in Rome. A painting depicting "Judith Beheading Holofernes" was allegedly discovered in an attic in Toulouse in 2014. In April 2016 the expert and art dealer to whom the work was shown announced that this was a long-lost painting by the hand of Caravaggio himself. Until then, the lost Caravaggio had been known only through a presumed copy by the Flemish painter Louis Finson, who had shared a studio with Caravaggio in Naples. The French government imposed an export ban on the newly discovered painting while tests were carried out to establish whether it was an authentic painting by Caravaggio. In February 2019 it was announced that the painting would be sold at auction after the Louvre had turned down the opportunity to purchase it for €100 million. In the end, however, the painting was sold privately to the American billionaire hedge fund manager J. Tomilson Hill. The art-historical world is not united over the attribution: the dealer who sold the painting has promoted its authenticity, supported by art historians who were given privileged access to it, while others remain unconvinced, mainly on stylistic and qualitative grounds. Some art historians believe it may be a work by Louis Finson himself. In April 2021 a minor work believed to be from the circle of a Spanish follower of Caravaggio, Jusepe de Ribera, was withdrawn from sale at the Madrid auction house Ansorena when the Museo del Prado alerted the Ministry of Culture, which placed a preemptive export ban on the painting. The painting has been in the Pérez de Castro family since 1823, when it was exchanged for another work from the Real Academia de San Fernando. It had been listed as "Ecce-Hommo con dos saiones de Carabaggio" before the attribution was later lost or changed to the circle of Ribera. Stylistic evidence, as well as the similarity of the models to those in other Caravaggio works, has convinced some experts that the painting is the original Caravaggio 'Ecce Homo' for the 1605 Massimo Massimi commission. The attribution to Caravaggio is disputed by other experts. 
The painting is now undergoing restoration by Colnaghi, which will also be handling the future sale of the work. Theft. In October 1969, two thieves entered the Oratory of Saint Lawrence in Palermo, Sicily, and stole Caravaggio's "Nativity with St. Francis and St. Lawrence" from its frame. Experts estimated its value at $20 million. Following the theft, Italian police set up an art theft task force with the specific aim of re-acquiring lost and stolen artworks. Since the creation of this task force, many leads have been followed regarding the "Nativity". Former Italian mafia members have stated that "Nativity with St. Francis and St. Lawrence" was stolen by the Sicilian Mafia and displayed at important mafia gatherings. Former mafia members have said that the "Nativity" was damaged and has since been destroyed. The whereabouts of the painting are still unknown. A reproduction currently hangs in its place in the Oratory of San Lorenzo. In December 1984, "Saint Jerome Writing" (Caravaggio, Valletta) was stolen from the St. John's Co-Cathedral, Malta. The canvas was cut out of the frame. The painting was recovered two years later, following negotiations between the thieves and Fr. Marius J. Zerafa, then the Director of Museums in Malta. A full account of the theft and successful recovery has been recorded by Fr. Marius J. Zerafa in his book "Caravaggio Diaries". In popular culture. Caravaggio's work has been widely influential in late-20th-century American gay culture, with frequent references to male sexual imagery in paintings such as "The Musicians" and "Amor Victorious". Several poems written by Thom Gunn were responses to specific Caravaggio paintings, and British filmmaker Derek Jarman made a critically applauded biopic entitled "Caravaggio" in 1986. Another biopic, "L'Ombra di Caravaggio" ("Caravaggio's Shadow"), directed by Michele Placido and starring Riccardo Scamarcio, was released in 2022. Some of Caravaggio's work has been used in the discography of rapper Westside Gunn, specifically on the albums "Pray for Paris" and "And Then You Pray for Me". Caravaggio was prominently featured as a motif in Steven Zaillian's Netflix series "Ripley", based on Patricia Highsmith's book "The Talented Mr. Ripley". The murder of Ranuccio is also depicted. Caravaggio is portrayed by Daniele Rienzo. Contemporary analysis and Modernist revisitation. Caravaggio's baroque art was rediscovered as self-consciously political in the 1999 book "Quoting Caravaggio" by Mieke Bal. In 2013, a touring Caravaggio exhibition called "Burst of Light: Caravaggio and His Legacy" opened in the Wadsworth Atheneum Museum of Art in Hartford, Connecticut. The show included five paintings by the master, among them "Saint John the Baptist in the Wilderness" (1604–1605) and "Martha and Mary Magdalene" (c. 1598). The exhibition also travelled to France and to Los Angeles, California. Other Baroque artists, including Georges de La Tour, Orazio Gentileschi, Carlo Saraceni, and the Spaniards Diego Velázquez and Francisco de Zurbarán, were also included in the exhibitions. References. Primary sources. The main primary sources for Caravaggio's life have all been reprinted in Howard Hibbard's "Caravaggio" and in the appendices to Catherine Puglisi's "Caravaggio".
7019
38777489
https://en.wikipedia.org/wiki?curid=7019
Jean Siméon Chardin
Jean Siméon Chardin (November 2, 1699 – December 6, 1779) was an 18th-century French painter. He is considered a master of still life, and is also noted for his genre paintings which depict kitchen maids, children, and domestic activities. Carefully balanced composition, soft diffusion of light, and granular impasto characterize his work. Life. Chardin was born in Paris, the son of a cabinetmaker, and rarely left the city. He lived on the Left Bank near Saint-Sulpice until 1757, when Louis XV granted him a studio and living quarters in the Louvre. In 1723 Chardin entered into a marriage contract with Marguerite Saintard, though the couple did not marry until 1731. He served apprenticeships with the history painters Pierre-Jacques Cazes and Noël-Nicolas Coypel, and in 1724 became a master in the Académie de Saint-Luc. According to one nineteenth-century writer, at a time when it was hard for unknown painters to come to the attention of the Royal Academy, he first found notice by displaying a painting at the "small Corpus Christi" (held eight days after the regular one) on the Place Dauphine (by the Pont Neuf). Van Loo, passing by in 1720, bought it and later assisted the young painter. Upon presentation of "The Ray" and "The Buffet" in 1728, he was admitted to the Académie Royale de Peinture et de Sculpture. The following year he ceded his position in the Académie de Saint-Luc. He made a modest living by "produc[ing] paintings in the various genres at whatever price his customers chose to pay him", and by such work as the restoration of the frescoes at the Galerie François I at Fontainebleau in 1731. In November 1731 his son Jean-Pierre was baptized, and a daughter, Marguerite-Agnès, was baptized in 1733. In 1735 his wife Marguerite died, and within two years Marguerite-Agnès had died as well. Beginning in 1737 Chardin exhibited regularly at the Salon. He would prove to be a "dedicated academician", regularly attending meetings for fifty years, and functioning successively as counsellor, treasurer, and secretary, overseeing in 1761 the installation of Salon exhibitions. Chardin's work gained popularity through reproductive engravings of his genre paintings (made by artists such as François-Bernard Lépicié and P.-L. Surugue), which brought Chardin income in the form of "what would now be called royalties". In 1744 he entered his second marriage, this time to Françoise-Marguerite Pouget. The union brought a substantial improvement in Chardin's financial circumstances. In 1745 a daughter, Angélique-Françoise, was born, but she died in 1746. In 1752 Chardin was granted a pension of 500 livres by Louis XV. In 1756 Chardin returned to the subject of the still life. At the Salon of 1759 he exhibited nine paintings; it was the first Salon to be commented upon by Denis Diderot, who would prove to be a great admirer and public champion of Chardin's work. Beginning in 1761, his responsibilities on behalf of the Salon, simultaneously arranging the exhibitions and acting as treasurer, resulted in a diminution of productivity in painting, and the showing of 'replicas' of previous works. In 1763 his services to the Académie were acknowledged with an extra 200 livres in pension. In 1765 he was unanimously elected associate member of the Académie des Sciences, Belles-Lettres et Arts of Rouen, but there is no evidence that he left Paris to accept the honor. By 1770 Chardin was the 'Premier peintre du roi', and his pension of 1,400 livres was the highest in the academy. 
In the 1770s his eyesight weakened and he took to painting in pastels, a medium in which he executed portraits of his wife and himself (see "Self-portrait" at top right). His works in pastels are now highly valued. In 1772 Chardin's son, also a painter, drowned in Venice, a probable suicide. The artist's last known oil painting was dated 1776; his final Salon participation was in 1779, and featured several pastel studies. Gravely ill by November of that year, he died in Paris on December 6, at the age of 80. Work. Chardin worked very slowly and painted only slightly more than 200 pictures (about four a year) in total. Chardin's work had little in common with the Rococo painting that dominated French art in the 18th century. At a time when history painting was considered the supreme classification for public art, Chardin's subjects of choice were viewed as minor categories. He favored simple yet beautifully textured still lifes, and sensitively handled domestic interiors and genre paintings. Simple, even stark, paintings of common household items ("Still Life with a Smoker's Box") and an uncanny ability to portray children's innocence in an unsentimental manner ("Boy with a Top" [right]) nevertheless found an appreciative audience in his time, and account for his timeless appeal. Largely self-taught, Chardin was greatly influenced by the realism and subject matter of the 17th-century Low Country masters. Despite his unconventional portrayal of the ascendant bourgeoisie, early support came from patrons in the French aristocracy, including Louis XV. Though his popularity rested initially on paintings of animals and fruit, by the 1730s he introduced kitchen utensils into his work ("The Copper Cistern", , Louvre). Soon figures populated his scenes as well, supposedly in response to a portrait painter who challenged him to take up the genre. "Woman Sealing a Letter" (ca. 1733), which may have been his first attempt, was followed by half-length compositions of children saying grace, as in "Le Bénédicité", and kitchen maids in moments of reflection. These humble scenes deal with simple, everyday activities, yet they also have functioned as a source of documentary information about a level of French society not hitherto considered a worthy subject for painting. The pictures are noteworthy for their formal structure and pictorial harmony. Chardin said about painting, "Who said one paints with colors? One "employs" colors, but one paints with "feeling"." A child playing was a favourite subject of Chardin. He depicted an adolescent building a house of cards on at least four occasions. The version at Waddesdon Manor is the most elaborate. Scenes such as these derived from 17th-century Netherlandish vanitas works, which bore messages about the transitory nature of human life and the worthlessness of material ambitions, but Chardin's also display a delight in the ephemeral phases of childhood for their own sake. Chardin frequently painted replicas of his compositions—especially his genre paintings, nearly all of which exist in multiple versions which in many cases are virtually indistinguishable. Beginning with "The Governess" (1739, in the National Gallery of Canada, Ottawa), Chardin shifted his attention from working-class subjects to slightly more spacious scenes of bourgeois life. Chardin's extant paintings, which number about 200, are in many major museums, including the Louvre. Influence. Chardin's influence on the art of the modern era was wide-ranging and has been well-documented. 
Édouard Manet's half-length "Boy Blowing Bubbles" and the still lifes of Paul Cézanne are equally indebted to their predecessor. He was one of Henri Matisse's most admired painters; as an art student Matisse made copies of four Chardin paintings in the Louvre. Chaïm Soutine's still lifes looked to Chardin for inspiration, as did the paintings of Georges Braque, and later, Giorgio Morandi. In 1999 Lucian Freud painted and etched several copies after "The Young Schoolmistress" (National Gallery, London). Marcel Proust, in the chapter "How to open your eyes?" from "In Search of Lost Time" ("À la recherche du temps perdu"), describes a melancholic young man sitting at his simple breakfast table. The only comfort he finds is in the imaginary ideas of beauty depicted in the great masterpieces of the Louvre, materializing fancy palaces, rich princes, and the like. The author tells the young man to follow him to another section of the Louvre where the pictures of Chardin are. There he would see the beauty in still life at home and in everyday activities like peeling turnips.
7021
44217690
https://en.wikipedia.org/wiki?curid=7021
Crookes radiometer
The Crookes radiometer (also known as a light mill) consists of an airtight glass bulb containing a partial vacuum, with a set of vanes which are mounted on a spindle inside. The vanes rotate when exposed to light, with faster rotation for more intense light, providing a quantitative measurement of electromagnetic radiation intensity. The reason for the rotation was a cause of much scientific debate in the ten years following the invention of the device, but the currently accepted explanation was published in 1879. Today the device is mainly used in physics education as a demonstration of a heat engine run by light energy. It was invented in 1873 by the chemist Sir William Crookes as the by-product of some chemical research. In the course of very accurate quantitative chemical work, he was weighing samples in a partially evacuated chamber to reduce the effect of air currents, and noticed the weighings were disturbed when sunlight shone on the balance. Investigating this effect, he created the device named after him. It is still manufactured and sold as an educational aid or for curiosity. General description. The radiometer is made from a glass bulb from which much of the air has been removed to form a partial vacuum. Inside the bulb, on a low-friction spindle, is a rotor with several (usually four) vertical lightweight vanes spaced equally around the axis. The vanes are polished or white on one side and black on the other. When exposed to sunlight, artificial light, or infrared radiation (even the heat of a hand nearby can be enough), the vanes turn with no apparent motive power, the dark sides retreating from the radiation source and the light sides advancing. Cooling the outside of the radiometer rapidly causes rotation in the opposite direction. Effect observations. The effect begins to be observed at partial vacuum pressures of several hundred pascals (a few torr), reaches a peak at somewhat lower pressures, and disappears once the vacuum becomes very hard. At these very high vacuums the effect of photon radiation pressure on the vanes can be observed in very sensitive apparatus (see Nichols radiometer), but this is insufficient to cause rotation. Origin of the name. The prefix "radio-" in the title originates from the combining form of Latin "radius", a ray: here it refers to electromagnetic radiation. A Crookes radiometer, consistent with the suffix "-meter" in its title, can provide a quantitative measurement of electromagnetic radiation intensity. This can be done, for example, by visual means (e.g., a spinning slotted disk, which functions as a simple stroboscope) without interfering with the measurement itself. Thermodynamic explanation. Movement with absorption. When a radiant energy source is directed at a Crookes radiometer, the radiometer becomes a heat engine. The operation of a heat engine is based on a difference in temperature that is converted to a mechanical output. In this case, the black side of the vane becomes hotter than the other side, as radiant energy from a light source warms the black side by absorption faster than the silver or white side. The internal air molecules are heated up when they touch the black side of the vane. The warmer side of the vane is subjected to a force which moves it forward. The internal temperature rises as the black vanes impart heat to the air molecules, but the molecules are cooled again when they touch the bulb's glass surface, which is at ambient temperature. 
This heat loss through the glass keeps the internal bulb temperature steady with the result that the two sides of the vanes develop a temperature difference. The white or silver side of the vanes are slightly warmer than the internal air temperature but cooler than the black side, as some heat conducts through the vane from the black side. The two sides of each vane must be thermally insulated to some degree so that the polished or white side does not immediately reach the temperature of the black side. If the vanes are made of metal, then the black or white paint can be the insulation. The glass stays much closer to ambient temperature than the temperature reached by the black side of the vanes. The external air helps conduct heat away from the glass. The air pressure inside the bulb needs to strike a balance between too low and too high. A strong vacuum inside the bulb does not permit motion, because there are not enough air molecules to cause the air currents that propel the vanes and transfer heat to the outside before both sides of each vane reach thermal equilibrium by heat conduction through the vane material. High inside pressure inhibits motion because the temperature differences are not enough to push the vanes through the higher concentration of air: there is too much air resistance for "eddy currents" to occur, and any slight air movement caused by the temperature difference is damped by the higher pressure before the currents can "wrap around" to the other side. Movement with radiation. When the radiometer is heated in the absence of a light source, it turns in the forward direction (i.e. black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass. If the glass is cooled quickly in the absence of a strong light source by putting ice on the glass or placing it in the freezer with the door almost closed, it turns backwards (i.e. the silver sides trail). This demonstrates radiation from the black sides of the vanes rather than absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day. Explanations for the force on the vanes. Over the years, there have been many attempts to explain how a Crookes radiometer works: Incorrect theories. Crookes incorrectly suggested that the force was due to the pressure of light. This theory was originally supported by James Clerk Maxwell, who had predicted this force. This explanation is still often seen in leaflets packaged with the device. The first experiment to test this theory was done by Arthur Schuster in 1876, who observed that there was a force on the glass bulb of the Crookes radiometer that was in the opposite direction to the rotation of the vanes. This showed that the force turning the vanes was generated inside the radiometer. 
If light pressure were the cause of the rotation, then the better the vacuum in the bulb, the less air resistance to movement, and the faster the vanes should spin. In 1901, with a better vacuum pump, Pyotr Lebedev showed that in fact, the radiometer only works when there is low-pressure gas in the bulb, and the vanes stay motionless in a hard vacuum. Finally, if light pressure were the motive force, the radiometer would spin in the opposite direction, as the photons on the shiny side being reflected would deposit more momentum than on the black side, where the photons are absorbed. This results from conservation of momentum – the momentum of the reflected photon exiting on the light side must be matched by a reaction on the vane that reflected it. The actual pressure exerted by light is far too small to move these vanes, but can be measured with devices such as the Nichols radiometer. It is in fact possible to make the radiometer spin in the opposite direction by either heating it or putting it in a cold environment (like a freezer) in absence of light, when black sides become cooler than the white ones due to the thermal radiation. Another incorrect theory was that the heat on the dark side was causing the material to outgas, which pushed the radiometer around. This was later effectively disproved by both Schuster's experiments (1876) and Lebedev's (1901) Partially correct theory. A partial explanation is that gas molecules hitting the warmer side of the vane will pick up some of the heat, bouncing off the vane with increased speed. Giving the molecule this extra boost effectively means that a minute pressure is exerted on the vane. The imbalance of this effect between the warmer black side and the cooler silver side means the net pressure on the vane is equivalent to a push on the black side and as a result the vanes spin round with the black side trailing. The problem with this idea is that while the faster moving molecules produce more force, they also do a better job of stopping other molecules from reaching the vane, so the net force on the vane should be the same. The greater temperature causes a decrease in local density which results in the same force on both sides. Years after this explanation was dismissed, Albert Einstein showed that the two pressures do not cancel out exactly at the edges of the vanes because of the temperature difference there. The force predicted by Einstein would be enough to move the vanes, but not fast enough. Currently accepted theory. The currently accepted theory was formulated by Osborne Reynolds, who theorized that thermal transpiration was the cause of the motion. Reynolds found that if a porous plate is kept hotter on one side than the other, the interactions between gas molecules and the plates are such that gas will flow through from the cooler to the hotter side. The vanes of a typical Crookes radiometer are not porous, but the space past their edges behaves like the pores in Reynolds's plate. As gas moves from the cooler to the hotter side, the pressure on the hotter side increases. When the plate is fixed, the pressure on the hotter side increases until the ratio of pressures between the sides equals the square root of the ratio of absolute temperatures. Because the plates in a radiometer are not fixed, the pressure difference from cooler to hotter side causes the vane to move. The cooler (white) side moves forward, pushed by the higher pressure behind it. 
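The square-root relationship lends itself to a quick numerical illustration. The following is a minimal sketch, assuming purely illustrative vane temperatures and bulb pressure rather than measured values for any real radiometer, of the limiting pressure ratio that Reynolds's thermal-transpiration argument predicts for a fixed plate:

```python
import math

# Minimal sketch of Reynolds's thermal-transpiration limit. All numbers below are
# illustrative assumptions, not measurements of any particular radiometer.
T_cold = 300.0   # K, assumed temperature near the polished/white face
T_hot = 330.0    # K, assumed temperature near the blackened face
p_cold = 10.0    # Pa, assumed bulb pressure on the cold side (rough-vacuum regime)

# For a fixed porous plate, gas creeps from the cold side to the hot side until
# p_hot / p_cold = sqrt(T_hot / T_cold).
p_hot = p_cold * math.sqrt(T_hot / T_cold)

print(f"limiting pressure ratio p_hot/p_cold = {p_hot / p_cold:.4f}")
print(f"corresponding pressure difference    = {p_hot - p_cold:.3f} Pa")

# A radiometer vane is not fixed, so this difference is never fully established;
# instead it acts on the vane, which moves with the cooler (white) side leading.
```

Under these assumed figures, even a modest 30 K difference between the faces corresponds to a pressure imbalance of a few tenths of a pascal at rough-vacuum conditions; because the vanes are free to move, the full ratio is never reached and the imbalance drives the rotation instead.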
From a molecular point of view, the vane moves due to the tangential force of the rarefied gas colliding differently with the edges of the vane between the hot and cold sides. The Reynolds paper went unpublished for a while because it was refereed by Maxwell, who then published a paper of his own, which contained a critique of the mathematics in Reynolds's unpublished paper. Maxwell died that year and the Royal Society refused to publish Reynolds's critique of Maxwell's rebuttal to Reynolds's unpublished paper, as it was felt that this would be an inappropriate argument when one of the people involved had already died. All-black light mill. To rotate, a light mill does not have to be coated with different colors across each vane. In 2009, researchers at the University of Texas, Austin created a monocolored light mill which has four curved vanes; each vane forms a convex and a concave surface. The light mill is uniformly coated by gold nanocrystals, which are a strong light absorber. Upon exposure, due to a geometric effect, the convex side of the vane receives more photon energy than the concave side does, and subsequently the gas molecules receive more heat from the convex side than from the concave side. At rough vacuum, this asymmetric heating effect generates a net gas movement across each vane, from the concave side to the convex side, as shown by the researchers' direct simulation Monte Carlo modeling. The gas movement causes the light mill to rotate with the concave side moving forward, due to Newton's third law. This monocolored design promotes the fabrication of micrometer- or nanometer-scaled light mills, as it is difficult to pattern materials of distinct optical properties within a very narrow, three-dimensional space. Horizontal vane light mill. The thermal creep from the hot side of a vane to the cold side has been demonstrated in a mill with horizontal vanes that have a two-tone surface with a black half and a white half. This design is called a Hettner radiometer. This radiometer's angular speed was found to be limited by the behavior of the drag force due to the gas in the vessel more than by the behavior of the thermal creep force. This design does not experience the Einstein effect because the faces are parallel to the temperature gradient. Nanoscale light mill. In 2010 researchers at the University of California, Berkeley, succeeded in building a nanoscale light mill that works on an entirely different principle to the Crookes radiometer. A gold light mill, only 100 nanometers in diameter, was built and illuminated by tuned laser light. The possibility of doing this had been suggested by the Princeton physicist Richard Beth in 1936. The torque was greatly enhanced by the resonant coupling of the incident light to plasmonic waves in the gold structure. Practical applications. The radiometric effect has rarely been used for practical applications. In 2001 Marcel Bétrisey made two clocks (Le Chronolithe and Conti) powered by light. Each had four mica vanes on its pendulum and bulb lamps mounted outside the glass dome pointing at them. A pendulum one metre long beats seconds; the two lamps, placed on either side, lit up alternately, "pushing" the four-kilogram pendulum on each swing. As there was a vacuum inside, the accuracy was of the order of two seconds per month. Radiometers are now commonly sold worldwide as novelty ornaments, needing no batteries, only light, to get the vanes to turn. 
They come in various forms, such as the one pictured, and are often used in science museums to illustrate "radiation pressure" – a scientific principle that they do not in fact demonstrate.
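Since these ornaments are so often mis-explained as radiation-pressure demonstrations, a rough order-of-magnitude check makes the point. The sketch below uses an assumed irradiance for bright sunlight (about 1000 W/m², an illustrative figure only) to show how small the photon pressure on the vanes actually is, and that it would in any case favour the wrong direction of rotation:

```python
# Order-of-magnitude check on radiation pressure for a novelty light mill.
# The irradiance is an assumed illustrative value for bright sunlight.
c = 299_792_458.0      # speed of light, m/s
irradiance = 1000.0    # W/m^2, assumed illumination at the vanes

p_black = irradiance / c       # absorbing (black) face: momentum of photons absorbed
p_shiny = 2 * irradiance / c   # reflecting face: photons bounce back, twice the momentum transfer

print(f"radiation pressure on the black face : {p_black * 1e6:.2f} micropascal")
print(f"radiation pressure on the shiny face : {p_shiny * 1e6:.2f} micropascal")
print(f"imbalance between the two faces      : {(p_shiny - p_black) * 1e6:.2f} micropascal")

# The shiny face receives the larger push, so pure radiation pressure would spin
# the mill with the black faces advancing toward the light, the opposite of the
# rotation actually observed, and the forces are only a few millionths of a pascal.
```

Forces at this micropascal scale can be detected only with a torsion balance in a hard vacuum, as in the Nichols radiometer mentioned earlier; in the rough vacuum where a light mill actually spins they are swamped by the thermal effects described above.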
7022
1819986
https://en.wikipedia.org/wiki?curid=7022
Cold Chisel
Cold Chisel are an Australian pub rock band, formed in Adelaide in 1973 by mainstay members Ian Moss on guitar and vocals, Steve Prestwich on drums, Les Kaczmarek on bass and Don Walker on piano and keyboards. They were soon joined by Jimmy Barnes on lead vocals and, in 1975, Phil Small became their bass guitarist. The group disbanded in late 1983 but subsequently re-formed several times. Musicologist Ian McFarlane wrote that they became "one of Australia's best-loved groups" as well as "one of the best live bands", fusing "a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook." Eight of their studio albums have reached the Australian top five: "Breakfast at Sweethearts" (February 1979), "East" (June 1980), "Circus Animals" (March 1982, No. 1), "Twentieth Century" (April 1984, No. 1), "The Last Wave of Summer" (October 1998, No. 1), "No Plans" (April 2012), "The Perfect Crime" (October 2015) and "Blood Moon" (December 2019, No. 1). They have achieved six number-one albums on the ARIA Charts, the latest being their 2024 compilation "50 Years – The Best Of". Their top-10 singles are "Cheap Wine" (1980), "Forever Now" (1982), "Hands Out of My Pocket" (1994) and "The Things I Love in You" (1998). At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. In 2001 the Australasian Performing Right Association (APRA) listed their single "Khe Sanh" (May 1978) at No. 8 on its list of the all-time best Australian songs. "Circus Animals" was listed at No. 4 in the book "100 Best Australian Albums" (October 2010), while "East" appeared at No. 53. They won The Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016. Cold Chisel's popularity is almost entirely confined to Australia and New Zealand, with their songs and musicianship highlighting working class life. Their early bass guitarist (1973–75), Les Kaczmarek, died in December 2008; Steve Prestwich died of a brain tumour in January 2011. History. 1973–1978: Beginnings. Cold Chisel originally formed as Orange in Adelaide in 1973 as a heavy metal band with Ted Broniecki on keyboards, Les Kaczmarek on bass guitar, Ian Moss on guitar and vocals, Steve Prestwich on drums and Don Walker on piano. Their early sets included cover versions of Free and Deep Purple material. Broniecki left by September 1973 and seventeen-year-old singer Jimmy Barnes – called Jim Barnes during their initial career – joined in December. The group changed its name several times, often for every live performance, before choosing "Cold Chisel" after an early Don Walker song of that title, and the name stuck. Barnes' relationship with the others was volatile: he often came to blows with Prestwich and left the band several times. During these periods Moss would handle vocals until Barnes returned. Walker emerged as the group's primary songwriter and spent 1974 in Armidale, completing his studies in quantum mechanics. Barnes' older brother, John Swan, was a member of Cold Chisel around this time, providing backing vocals and percussion. After several violent incidents, including beating up a roadie, he was fired. In mid-1975 Barnes left to join Fraternity as Bon Scott's replacement on lead vocals, alongside Swan on drums and vocals. Kaczmarek left Cold Chisel during 1975 and was replaced by Phil Small on bass guitar. In November of that year, without Barnes, they recorded their early demos. 
In May 1976 Cold Chisel relocated to Melbourne, but "frustrated by their lack of progress," they moved on to Sydney in early 1977. In May 1977, Barnes told his fellow members that he would leave again. From July he joined Feather for a few weeks, on co-lead vocals with Swan – they were a Sydney-based hard rock group, which had evolved from Blackfeather. A farewell performance for Cold Chisel, with Barnes aboard, went so well that the singer changed his mind and returned. In the following month the Warner Music Group signed the group. 1978–1979: "Cold Chisel" and "Breakfast at Sweethearts". In the early months of 1978 Cold Chisel recorded their self-titled debut album with their manager and producer, Peter Walker (ex-Bakery). All tracks were written by Don Walker, except "Juliet", where Barnes composed its melody and Walker the lyrics. "Cold Chisel" was released in April and included guest studio musicians: Dave Blight on harmonica (who became a regular on-stage guest) and saxophonists Joe Camilleri and Wilbur Wilde (from Jo Jo Zep & The Falcons). Australian musicologist Ian McFarlane described how, "[it] failed to capture the band's renowned live firepower, despite the presence of such crowd favourites as 'Khe Sanh', 'Home and Broken Hearted' and 'One Long Day'." It reached the top 40 on the Kent Music Report and was certified gold. In May 1978, "Khe Sanh" was released as their debut single but it was declared too offensive for commercial radio due to the sexual implication of the lyrics, e.g. "Their legs were often open/But their minds were always closed." However, it was played regularly on Sydney youth radio station Double J, which was not subject to the restrictions as it was part of the Australian Broadcasting Corporation (ABC). Another ABC program, "Countdown"s producers asked them to change the lyric but they refused. Despite such setbacks, "Khe Sanh" reached No. 41 on the Kent Music Report singles chart. It became Cold Chisel's signature tune and was popular among their fans. They later remixed the track, with re-recorded vocals, for inclusion on the international version of their third album, "East" (June 1980). The band's next release was a live five-track extended play, "You're Thirteen, You're Beautiful, and You're Mine", in November 1978. McFarlane observed, "It captured the band in its favoured element, fired by raucous versions of Walker's 'Merry-Go-Round' and Chip Taylor's 'Wild Thing'." It was recorded at the Regent Theatre, Sydney in 1977, when they had Midnight Oil as one of the support acts. Australian writer Ed Nimmervoll described a typical performance by Cold Chisel: "Everybody was talking about them anyway, drawn by the songs, and Jim Barnes' presence on stage, crouched, sweating, as he roared his vocals into the microphone at the top of his lungs." The EP peaked at No. 35 on the Kent Music Report Singles Chart. "Merry Go Round" was re-recorded for their second studio album, "Breakfast at Sweethearts" (February 1979). This was recorded between July 1978 and January 1979 with producer Richard Batchens, who had previously worked with Richard Clapton, Sherbet and Blackfeather. Batchens smoothed out the band's rough edges and attempted to give their songs a sophisticated sound. With regards to this approach, the band were unsatisfied with the finished product. It peaked at No. 4 and was the top-selling album in Australia by a locally based artist for that year; it was certified platinum. 
The majority of its tracks were written by Walker; Barnes and Walker co-wrote the lead single, "Goodbye (Astrid, Goodbye)" (September 1978), and Moss contributed to "Dresden". "Goodbye (Astrid, Goodbye)" became a live favourite, and was covered by U2 during Australian tours in the 1980s. 1979–1980: "East". Cold Chisel had gained national chart success and a growing fan base without significant commercial radio airplay. The members developed reputations for wild behaviour, particularly Barnes, who claimed to have had sex with over 1000 women and who consumed more than a bottle of vodka each night while performing. In late 1979, severing their relationship with Batchens, Cold Chisel chose Mark Opitz to produce the next single, "Choirgirl" (November). It is a Walker composition dealing with a young woman's experience with abortion. Despite the subject matter it reached No. 14. "Choirgirl" paved the way for the group's third studio album, "East" (June 1980), with Opitz producing. Recorded over two months in early 1980, "East" reached No. 2 and was the second-highest-selling album by an Australian artist for that year. "The Australian Women's Weekly"s Gregg Flynn noted, "[they are] one of the few Australian bands in which each member is capable of writing hit songs." Despite the continued dominance of Walker, the other members contributed more tracks to the band's repertoire, and this was their first album to feature songs written by every member. McFarlane described it as, "a confident, fully realised work of tremendous scope." Nimmervoll explained how, "This time everything fell into place, the sound, the songs, the playing... "East" was a triumph. [The group] were now the undisputed No. 1 rock band in Australia." The album ranged from straight-ahead rock tracks ("Standing on the Outside", "My Turn to Cry") to rockabilly-flavoured work-outs ("Rising Sun", written about Barnes' relationship with his then-girlfriend Jane Mahoney), pop-laced love songs ("My Baby" by Phil Small, featuring Joe Camilleri on saxophone) and a poignant piano ballad about prison life, "Four Walls". The cover art showed Barnes reclining in a bathtub wearing a kamikaze bandanna in a room littered with junk and was inspired by Jacques-Louis David's 1793 painting "The Death of Marat". The Ian Moss-penned "Never Before" was chosen as the first song to air on the ABC's youth radio station, Triple J, when it switched to the FM band that year. Supporting the release of "East", Cold Chisel embarked on the Youth in Asia Tour from May 1980, which took its name from a lyric in "Star Hotel". In late 1980, the Aboriginal rock reggae band No Fixed Address supported the band on its Summer Offensive tour to the east coast, with the final concert on 20 December at the University of Adelaide. 1981–1982: "Swingshift" to "Circus Animals". The Youth in Asia Tour performances were used for Cold Chisel's double live album, "Swingshift" (March 1981). Nimmervoll declared, "[the group] rammed what they were all about with [this album]." In March 1981 the band won seven categories at the "Countdown"/"TV Week" pop music awards for 1980: Best Australian Album, Most Outstanding Achievement, Best Recorded Song Writer, Best Australian Producer, Best Australian Record Cover Design, Most Popular Group and Most Popular Record.
They attended the ceremony at the Sydney Entertainment Centre and were due to perform; however, as a protest against a TV magazine's involvement, they refused to accept any trophy and finished the night with "My Turn to Cry". After one verse and chorus, they smashed up the set and left the stage. "Swingshift" debuted at No. 1, demonstrating their status as the highest-selling local act. With a slightly different track listing, "East" was issued in the United States and they undertook their first US tour in mid-1981. Ahead of the tour they had issued "My Baby" for the North American market and it reached the top 40 on "Billboard"s Mainstream Rock chart. They were generally popular as a live act there, but the US branch of their label did little to promote the album. According to Barnes' biographer, Toby Creswell, at one point they were ushered into an office to listen to the US master tape to find it had substantial hiss and other ambient noise, which made it almost unreleasable. Nevertheless, the album reached the lower region of the "Billboard" 200 in July. The group were booed off stage after a lacklustre performance opening for Ted Nugent in Dayton, Ohio, in May 1981. Other support slots they took were for Cheap Trick, Joe Walsh, Heart and the Marshall Tucker Band. European audiences were more accepting of the Australian band and they developed a fan base in Germany. In August 1981 Cold Chisel began work on a fourth studio album, "Circus Animals" (March 1982), again with Opitz producing. To launch the album, the band performed under a circus tent at Wentworth Park in Sydney and toured heavily once more, including a show in Darwin that attracted more than 10 percent of the city's population. It peaked at No. 1 in both Australia and on the Official New Zealand Music Chart. In October 2010 it was listed at No. 4 in the book "100 Best Australian Albums" by music journalists Creswell, Craig Mathieson and John O'Donnell. Its lead single, "You Got Nothing I Want" (November 1981), is an aggressive Barnes-penned hard rock track, which attacked the US music industry for its handling of the band on their recent tour. The song caused problems for Barnes when he later attempted to break into the US market as a solo performer; senior music executives there continued to hold it against him. Like its predecessor, "Circus Animals" contained songs of contrasting styles, with harder-edged tracks like "Bow River" and "Hound Dog" beside more expansive ballads such as the next two singles, "Forever Now" (March 1982) and "When the War Is Over" (August), both written by Prestwich. "Forever Now" is their highest-charting single in two Australasian markets: No. 4 on the Kent Music Report Singles Chart and No. 2 on the Official New Zealand Music Chart. "When the War Is Over" is the most-covered Cold Chisel track – Uriah Heep included a version on their 1989 album, "Raging Silence"; John Farnham recorded it while he and Prestwich were members of Little River Band in the mid-1980s and again for his 1988 solo album, "Age of Reason". The song was also a No. 1 hit for former "Australian Idol" contestant Cosima De Vito in 2004 and was performed by Bobby Flynn during that show's 2006 season. "Forever Now" was covered, as a country waltz, by Australian band the Reels. 1983: Break-up. Success outside Australasia continued to elude Cold Chisel and friction grew between the members. According to McFarlane, "[the] failed attempts to break into the American market represented a major blow...
[their] earthy, high-energy rock was overlooked." In early 1983 they toured Germany, but the shows went so badly that, during one show midway through the tour, Walker up-ended his keyboard and stormed off stage. After returning to Australia, Prestwich was fired and replaced by Ray Arnott, formerly of the 1970s progressive rockers Spectrum and country rockers the Dingoes. After this, Barnes requested a large advance from management; now married with a young child, he had been left almost broke by reckless spending. His request was refused as there was a standing arrangement that any advance to one band member had to be paid to all the others. After a meeting on 17 August, during which Barnes quit the band, it was decided that the group would split up. A farewell concert series, The Last Stand, was planned and a final studio album, "Twentieth Century" (February 1984), was recorded. Prestwich returned for that tour, which began in October. Before the last four scheduled shows in Sydney, Barnes lost his voice and those dates were postponed to mid-December. The band's final performances were at the Sydney Entertainment Centre from 12 to 15 December 1983 – ten years since their first live appearance as Cold Chisel in Adelaide – and the group then disbanded. The Sydney shows formed the basis of a concert film, "The Last Stand" (July 1984), which became the biggest-selling cinema-released concert documentary by an Australian band to that time. Other recordings from the tour were used on a live album, "" (1984); the title is a reference to the pseudonym the group occasionally used when playing warm-up shows before tours. Some were also used as B-sides for a three-CD singles package, "Three Big XXX Hits", issued ahead of the release of their 1994 compilation album, "Teenage Love". During breaks in the tour, "Twentieth Century" was recorded. It was a fragmentary process, spread across various studios and sessions as the individual members often refused to work together – both Arnott (on ten tracks) and Prestwich (on three tracks) are credited as drummers. The album reached No. 1 and provided the singles "Saturday Night" (March 1984) and "Flame Trees" (August), both of which remain radio staples. "Flame Trees", co-written by Prestwich and Walker, took its title from the BBC series "The Flame Trees of Thika", although it was lyrically inspired by Walker's hometown of Grafton. Barnes later recorded an acoustic version for his 1993 solo album, "Flesh and Wood", and it was also covered by Sarah Blasko in 2006. 1984–1996: Aftermath and ARIA Hall of Fame. Barnes launched his solo career in January 1984; it has provided nine Australian number-one studio albums and an array of hit singles, including the chart-topping "Too Much Ain't Enough Love". He has recorded with INXS, Tina Turner, Joe Cocker and John Farnham, and has become one of the country's most popular male rock singers. Prestwich joined Little River Band in 1984 and appeared on the albums "Playing to Win" and "No Reins", before departing in 1986 to join Farnham's touring band. Moss, Small and Walker took extended breaks from music. Small maintained a low profile as a member of a variety of minor groups: Pound, the Earls of Duke and the Outsiders. Walker formed Catfish in 1988, ostensibly a solo band with a variable membership, which included Moss, Charlie Owen and Dave Blight at times. Catfish's recordings during this phase achieved little commercial success.
During 1988 and 1989 Walker wrote several tracks for Moss, including the singles "Tucker's Daughter" (November 1988) and "Telephone Booth" (June 1989), which appeared on Moss' debut solo album, "Matchbook" (August 1989). Both the album and "Tucker's Daughter" peaked at No. 1. Moss won five trophies at the ARIA Music Awards of 1990. His other solo albums met with less chart or award success. Throughout the 1980s and most of the 1990s, Cold Chisel were courted to re-form but refused, at one point reportedly turning down a $5 million offer to play a single show in each of the major Australian state capitals. Moss and Walker often collaborated on projects; neither worked with Barnes until Walker wrote "Stone Cold" for the singer's sixth studio album, "Heat" (October 1993). The pair recorded an acoustic version for "Flesh and Wood" (December). Thanks primarily to continued radio airplay and Barnes' solo success, Cold Chisel's legacy remained solidly intact. By the early 1990s the group had surpassed 3 million album sales, most of them sold since 1983. The 1991 compilation album, "Chisel", was re-issued and re-packaged several times, once with the long-deleted 1978 EP as a bonus disc and a second time in 2001 as a double album. The "Last Stand" soundtrack album was finally released in 1992. In 1994 "Teenage Love", an album made up entirely of previously unreleased demo and rare live recordings, was released and provided three singles. 1997–2010: Reunited. Cold Chisel reunited in October 1997, with the line-up of Barnes, Moss, Prestwich, Small and Walker. They recorded their sixth studio album, "The Last Wave of Summer" (October 1998), from February to July with the band members co-producing. They supported it with a national tour. The album debuted at No. 1 on the ARIA Albums Chart. In 2003 they re-grouped for the Ringside Tour and again in 2005 to perform at a benefit for the victims of the Boxing Day tsunami at the Myer Music Bowl in Melbourne. Founding bass guitarist Les Kaczmarek died of liver failure on 5 December 2008, aged 53. Walker described him as "a wonderful and beguiling man in every respect." On 10 September 2009 Cold Chisel announced they would re-form for a one-off performance at the Sydney 500 V8 Supercars event on 5 December. The band performed at Stadium Australia to the largest crowd of its career, with more than 45,000 fans in attendance. They played a single live show in 2010, at the Deniliquin ute muster in October. In December Moss confirmed that Cold Chisel were working on new material for an album. 2011–2019: Death of Steve Prestwich & "The Perfect Crime". In January 2011 Steve Prestwich was diagnosed with a brain tumour; he underwent surgery on 14 January but never regained consciousness and died two days later, aged 56. All six of Cold Chisel's studio albums were re-released in digital and CD formats in mid-2011. Three digital-only albums were released – "Never Before", "Besides" and "Covered" – as well as a new compilation album, "The Best of Cold Chisel: All for You", which peaked at No. 2 on the ARIA Charts. The thirty-date Light the Nitro Tour was announced in July along with the news that former Divinyls and Catfish drummer Charley Drayton had replaced Prestwich. Most shows on the tour sold out within days and new dates were later announced for early 2012. "No Plans", their seventh studio album, was released in April 2012, with Kevin Shirley producing; it peaked at No. 2.
"The Australian"s Stephen Fitzpatrick rated it as four-and-a-half out of five and found its lead track, "All for You", "speaks of redemption; of a man's ability to make something of himself through love." The track "I Got Things to Do" was written and sung by Prestwich, which Fitzpatrick described as "the bittersweet finale", a song that had "a vocal track the other band members did not know existed until after [Prestwich's] death." Midway through 2012 they embarked on a short UK tour and played with Soundgarden and Mars Volta at Hard Rock Calling at London's Hyde Park. The group's eighth studio album, "The Perfect Crime", appeared in October 2015, again with Shirley producing, which peaked at No. 2. Martin Boulton of "The Sydney Morning Herald" rated it at four out of five stars and explained that the album does what Cold Chisel always does: "work incredibly hard, not take any shortcuts and play the hell out of the songs." The album, Boulton writes, "delves further back to their rock'n'roll roots with chief songwriter [Walker] carving up the keys, guitarist [Moss] both gritty and sublime and the [Small/Drayton] engine room firing on every cylinder. Barnes' voice sounds worn, wonderful and better than ever." The band's latest album, "Blood Moon", was released in December 2019. The album debuted at No. 1 on the ARIA Album Chart, the band's fifth to reach the top. Half of the songs had lyrics written by Barnes and music by Walker, a new combination for Cold Chisel, with Barnes noting his increased confidence after writing two autobiographies. 2024: 50th Anniversary Tour. On 29 May 2024, Cold Chisel announced 'The 50th Anniversary Tour', beginning in Armidale on 5 October 2024 and ending in the band's hometown of Adelaide on 17 November 2024. However, Jimmy Barnes' wife Jane subsequently posted on X.com that further tour dates including New Zealand would be announced later. Musical style and lyrical themes. McFarlane described Cold Chisel's early career in his "Encyclopedia of Australian Rock and Pop" (1999): "after ten years on the road, [they] called it a day. Not that the band split up for want of success; by that stage [they] had built up a reputation previously uncharted in Australian rock history. By virtue of the profound effect the band's music had on the many thousands of fans who witnessed its awesome power, Cold Chisel remains one of Australia's best-loved groups. As one of the best live bands of its day, [they] fused a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook." "The Canberra Times" Luis Feliu, in July 1978, observed, "This is not just another Australian rock band, no mediocrity here, and their honest, hard-working approach looks like paying off." He further wrote, "the range of styles tackled and done convincingly, from hard rock to blues, boogie, rhythm and blues, is where the appeal lies." Influences from blues and early rock n' roll was broadly apparent, fostered by the love of those styles by Moss, Barnes and Walker. Small and Prestwich contributed strong pop sensibilities. This allowed volatile rock songs like "You Got Nothing I Want" and "Merry-Go-Round" to stand beside thoughtful ballads like "Choirgirl", pop-flavoured love songs like "My Baby" and caustic political statements like "Star Hotel", an attack on the late-1970s government of Malcolm Fraser, inspired by the Star Hotel riot in Newcastle. 
The songs were not overtly political but rather observations of everyday life within Australian society and culture, which the members, with their various backgrounds (Moss was from Alice Springs, Walker grew up in rural New South Wales, Barnes and Prestwich were working-class immigrants from the UK), were well placed to provide. Cold Chisel's songs were about distinctly Australian experiences, a factor often cited as a major reason for the band's lack of international appeal. "Saturday Night" and "Breakfast at Sweethearts" were observations of the urban experience of Sydney's Kings Cross district where Walker lived for many years. "Misfits", which featured on the B-side to "My Baby", was about homeless kids in the suburbs surrounding Sydney. Songs like "Shipping Steel" and "Standing on the Outside" were working-class anthems and many others featured characters trapped in mundane, everyday existences, yearning for the good times of the past ("Flame Trees") or for something better from life ("Bow River"). Recognition. At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. While repackages and compilations accounted for much of the band's ongoing sales, 1994's "Teenage Love" provided two top-ten singles. When the group finally re-formed in 1998 the resultant album was also a major hit and the follow-up tour sold out almost immediately. In 2001 the Australasian Performing Right Association (APRA) listed their single "Khe Sanh" (May 1978) at No. 8 on its list of the all-time best Australian songs. Cold Chisel were one of the first Australian acts to become the subject of a major tribute album. In 2007, "Standing on the Outside: The Songs of Cold Chisel" was released, featuring a collection of the band's songs as performed by artists including The Living End, Evermore, Something for Kate, Pete Murray, Katie Noonan, You Am I, Paul Kelly, Alex Lloyd, Thirsty Merc and Ben Lee, many of whom were children when Cold Chisel first disbanded and some, like the members of Evermore, had not even been born. "Circus Animals" was listed at No. 4 in the book "100 Best Australian Albums" (October 2010), while "East" appeared at No. 53. They won The Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016. In March 2021, a previously unnamed lane off Burnett Street (off Currie Street) in the Adelaide central business district, near where the band had its first residency in the 1970s, was officially named Cold Chisel Lane. On one of its walls, there is a mural by Adelaide artist James Dodd, inspired by the band. Members. Current members Current touring musicians Former members Former touring musicians Awards and nominations. APRA Awards. The APRA Awards have been presented annually since 1982 by the Australasian Performing Right Association (APRA), "honouring composers and songwriters". ARIA Music Awards. The ARIA Music Awards are an annual awards ceremony that recognises excellence, innovation, and achievement across all genres of Australian music; they commenced in 1987. Cold Chisel were inducted into the Hall of Fame in 1993. Helpmann Awards. The Helpmann Awards are an awards show celebrating live entertainment and performing arts in Australia, presented by industry group "Live Performance Australia" since 2001. South Australian Music Awards. The South Australian Music Awards are annual awards that exist to recognise, promote and celebrate excellence in the South Australian contemporary music industry. They commenced in 2012.
The South Australian Music Hall of Fame celebrates the careers of successful music industry personalities. TV Week / Countdown Awards. "Countdown" was an Australian pop music TV series on national broadcaster ABC-TV from 1974 to 1987; it presented music awards from 1979 to 1987, initially in conjunction with the magazine "TV Week". The TV Week / Countdown Awards were a combination of popular-voted and peer-voted awards.
Confederate States of America
The Confederate States of America (CSA), also known as the Confederate States (C.S.), the Confederacy, or the South, was an unrecognized breakaway republic in the Southern United States from 1861 to 1865. It comprised eleven U.S. states that declared secession: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, Tennessee, and North Carolina. These states fought against the United States during the American Civil War. With Abraham Lincoln's election as President of the United States in 1860, eleven southern states believed their slavery-dependent plantation economies were threatened, and seven initially seceded from the United States. The Confederacy was formed on February 8, 1861, by South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas. They adopted a new constitution establishing a confederation government of "sovereign and independent states". The federal government in Washington D.C. and states under its control were known as the Union. The Civil War began in April 1861, when South Carolina's militia attacked Fort Sumter. Four slave states of the Upper South—Virginia, Arkansas, Tennessee, and North Carolina—then seceded and joined the Confederacy. In February 1862, Confederate States Army leaders installed a centralized federal government in Richmond, Virginia, and enacted the first Confederate draft on April 16, 1862. By 1865, the Confederacy's federal government had dissolved into chaos, and the Confederate States Congress adjourned on March 18, effectively ceasing to exist as a legislative body. After four years of heavy fighting, most Confederate land and naval forces either surrendered or otherwise ceased hostilities by May 1865. The most significant capitulation was Confederate general Robert E. Lee's surrender on April 9, after which any doubt about the war's outcome or the Confederacy's survival was extinguished. Confederate President Jefferson Davis's administration declared the Confederacy dissolved on May 5. After the war, during the Reconstruction era, the Confederate states were readmitted to Congress after each ratified the 13th Amendment to the U.S. Constitution, which outlawed slavery, "except as a punishment for crime". Lost Cause mythology, an idealized view of the Confederacy valiantly fighting for a just cause, emerged in the decades after the war among former Confederate generals and politicians, and in organizations such as the United Daughters of the Confederacy and the Sons of Confederate Veterans. Intense periods of Lost Cause activity developed around the turn of the 20th century and during the civil rights movement of the 1950s and 60s in reaction to growing support for racial equality. Advocates sought to ensure future generations of Southern whites would continue to support white supremacist policies such as the Jim Crow laws through activities such as building Confederate monuments and influencing the authors of textbooks. The modern display of the Confederate battle flag primarily started during the 1948 presidential election, when it was used by the pro-segregationist and white supremacist Dixiecrat Party. Origins. Historians who address the origins of the American Civil War agree that the preservation of the institution of slavery was the principal aim of the eleven Southern states (seven states before the war and four states after its onset) that declared their secession from the United States (the Union) and united to form the Confederacy.
While historians in the 21st century agree on the centrality of slavery, they disagree on which aspects of this conflict (ideological, economic, political, or social) were most important, and on the North's reasons for refusing to allow the Southern states to secede. Proponents of the pseudo-historical Lost Cause ideology have denied that slavery was the principal cause of the secession, a view that has been disproven by overwhelming historical evidence, notably some of the seceding states' own secession documents. The principal political battle leading to secession was over whether slavery would be permitted to expand into the Western territories destined to become states. Initially Congress had admitted new states into the Union in pairs, one slave and one free. This had kept a sectional balance in the Senate but not in the House of Representatives, as free states outstripped slave states in numbers of eligible voters. Thus, by the mid-19th century, the free-versus-slave status of the new territories was a critical issue, both for the North, where anti-slavery sentiment had grown, and for the South, where the fear of slavery's abolition had grown. Another factor leading to secession and the formation of the Confederacy was the development of white Southern nationalism in the preceding decades. The primary reason for the North to reject secession was to preserve the Union, a cause based on American nationalism. Abraham Lincoln won the 1860 presidential election. His victory triggered declarations of secession by seven slave states of the Deep South, all of whose riverfront or coastal economies were based on cotton that was cultivated by slave labor. They formed the Confederate States after Lincoln was elected in November 1860 but before he took office in March 1861. Nationalists in the North and "Unionists" in the South refused to accept the declarations of secession. No foreign government ever recognized the Confederacy. The U.S. government, under President James Buchanan, refused to relinquish its forts in territory claimed by the Confederacy. The war began on April 12, 1861, when Confederate forces bombarded the Union's Fort Sumter, in the harbor of Charleston, South Carolina. Background factors in the run-up to the war were partisan politics, abolitionism, nullification versus secession, Southern and Northern nationalism, expansionism, economics, and modernization in the antebellum period. "While slavery and its various and multifaceted discontents were the primary cause of disunion, it was disunion itself that sparked the war." Historian David M. Potter wrote: "The problem for Americans who, in the age of Lincoln, wanted slaves to be free was not simply that southerners wanted the opposite, but that they themselves cherished a conflicting value: they wanted the Constitution, which protected slavery, to be honored, and the Union, which was a fellowship with slaveholders, to be preserved. Thus they were committed to values that could not logically be reconciled." Secession. The first secession state conventions from the Deep South sent representatives to the Montgomery Convention in Alabama on February 4, 1861. A provisional government was established. The new provisional Confederate President Jefferson Davis issued a call for 100,000 men from the states' militias to defend the newly formed Confederacy. All federal property was seized, including gold bullion and coining dies at the U.S. mints.
The Confederate capital was moved from Montgomery to Richmond, Virginia, in May 1861. On February 22, 1862, Davis was inaugurated as president with a term of six years. The Confederate administration pursued a policy of national territorial integrity, continuing earlier state efforts in 1860–1861 to remove U.S. government presence. This included taking possession of U.S. courts, custom houses, post offices, and most notably, arsenals and forts. After the Confederate attack and capture of Fort Sumter in April 1861, Lincoln called up 75,000 of the states' militia to muster under his command. The stated purpose was to re-occupy U.S. properties throughout the South, as the U.S. Congress had not authorized their abandonment. The resistance at Fort Sumter signaled his change of policy from that of the Buchanan Administration. Lincoln's response ignited a firestorm of emotion. The people of both North and South demanded war, with soldiers rushing to their colors in the hundreds of thousands. Secessionists argued that the United States Constitution was a contract among sovereign states that could be abandoned without consultation and that each state had a right to secede. After intense debates and statewide votes, seven Deep South states passed secession ordinances by February 1861, while secession efforts failed in the other eight slave states. The Confederacy expanded in May–July 1861 (with Virginia, Arkansas, Tennessee, North Carolina), and disintegrated in April–May 1865. It was formed by delegations from seven slave states of the Lower South that had proclaimed their secession. After the fighting began in April, four additional slave states seceded and were admitted. Later, two slave states (Missouri and Kentucky) and two territories were given seats in the Confederate Congress. Its establishment flowed from and deepened Southern nationalism, which prepared men to fight for "The Southern Cause". This "Cause" included support for states' rights, tariff policy, and internal improvements, but above all, cultural and financial dependence on the South's slavery-based economy. The convergence of race and slavery, politics, and economics raised South-related policy questions to the status of moral questions over way of life, merging love of things Southern and hatred of things Northern. As the war approached, political parties split, and national churches and interstate families divided along sectional lines. According to historian John M. Coski: Following South Carolina's unanimous 1860 secession vote, no other Southern states considered the question until 1861; when they did, none had a unanimous vote. All had residents who cast significant numbers of Unionist votes. Voting to remain in the Union did not necessarily mean that individuals sympathized with the North. Once fighting began, many who voted to remain in the Union accepted the majority decision and supported the Confederacy. Many writers have evaluated the Civil War as an American tragedy—a "Brothers' War", pitting "brother against brother, father against son, kin against kin of every degree". States. Initially, some secessionists hoped for a peaceful departure. Moderates in the Confederate Constitutional Convention included a provision against importation of slaves from Africa to appeal to the Upper South. Non-slave states might join, but the radicals secured a two-thirds requirement in both houses of Congress to accept them. Seven states declared their secession from the United States before Lincoln took office on March 4, 1861.
After the Confederate attack on Fort Sumter on April 12, 1861, and Lincoln's subsequent call for troops, four more states declared their secession. Kentucky declared neutrality, but after Confederate troops moved in, the state legislature asked for Union troops to drive them out. Delegates from 68 Kentucky counties were sent to the Russellville Convention, which signed an Ordinance of Secession. Kentucky was admitted into the Confederacy on December 10, 1861, with Bowling Green as its first capital. Early in the war, the Confederacy controlled more than half of Kentucky but largely lost control in 1862. The splinter Confederate government of Kentucky relocated to accompany western Confederate armies and never controlled the state population after 1862. By the end of the war, 90,000 Kentuckians had fought for the Union, compared to 35,000 for the Confederacy. In Missouri, a constitutional convention was approved and delegates elected. The convention rejected secession 89–1 on March 19, 1861. The governor maneuvered to take control of the St. Louis Arsenal and restrict federal military movements. This led to a confrontation, and in June federal forces drove him and the General Assembly from Jefferson City. The executive committee of the convention called the members together in July, declared the state offices vacant, and appointed a Unionist interim state government. The exiled governor called a rump session of the former General Assembly together in Neosho and, on October 31, 1861, it passed an ordinance of secession. The Confederate government of Missouri effectively controlled only southern Missouri early in the war. It had its capital at Neosho, then Cassville, before being driven out of the state. For the remainder of the war, it operated as a government in exile at Marshall, Texas. Not having seceded, neither Kentucky nor Missouri was declared in rebellion in Lincoln's Emancipation Proclamation. The Confederacy recognized the pro-Confederate claimants in Kentucky (December 10, 1861) and Missouri (November 28, 1861) and laid claim to those states, granting them congressional representation and adding two stars to the Confederate flag. Voting for the representatives was done mostly by Confederate soldiers from Kentucky and Missouri. Some Southern Unionists blamed Lincoln's call for troops as the precipitating event for the second wave of secessions. Historian James McPherson argues such claims have "a self-serving quality" and regards them as misleading; historian Daniel W. Crofts disagrees with McPherson. The order of secession resolutions and dates is: 1. South Carolina (December 20, 1860) 2. Mississippi (January 9, 1861) 3. Florida (January 10) 4. Alabama (January 11) 5. Georgia (January 19) 6. Louisiana (January 26) 7. Texas (February 1; referendum February 23) * Inauguration of President Lincoln, March 4 * Bombardment of Fort Sumter (April 12) and President Lincoln's call-up (April 15) 8. Virginia (April 17; referendum May 23, 1861) 9. Arkansas (May 6) 10. Tennessee (May 7; referendum June 8) 11. North Carolina (May 20) In Virginia, the populous counties along the Ohio and Pennsylvania borders rejected the Confederacy. Unionists held a convention in Wheeling in June 1861, establishing a "restored government" with a rump legislature, but sentiment in the region remained deeply divided. In the 50 counties that would make up the state of West Virginia, voters from 24 counties had voted for disunion in Virginia's May 23 referendum on the ordinance of secession.
In the 1860 election "Constitutional Democrat" Breckenridge had outpolled "Constitutional Unionist" Bell in the 50 counties by 1,900 votes, 44% to 42%. The counties simultaneously supplied over 20,000 soldiers to each side of the conflict. Representatives for most counties were seated in both state legislatures at Wheeling and at Richmond for the duration of the war. Attempts to secede from the Confederacy by counties in East Tennessee were checked by martial law. Although slaveholding Delaware and Maryland did not secede, citizens exhibited divided loyalties. Regiments of Marylanders fought in Lee's Army of Northern Virginia. Overall, 24,000 men from Maryland joined Confederate forces, compared to 63,000 who joined Union forces. Delaware never produced a full regiment for the Confederacy, but neither did it emancipate slaves as did Missouri and West Virginia. District of Columbia citizens made no attempts to secede and through the war, referendums sponsored by Lincoln approved compensated emancipation and slave confiscation from "disloyal citizens". Territories. Citizens at Mesilla and Tucson in the southern part of New Mexico Territory formed a secession convention, which voted to join the Confederacy on March 16, 1861, and appointed Dr. Lewis S. Owings as the new territorial governor. They won the Battle of Mesilla and established a territorial government with Mesilla serving as its capital. The Confederacy proclaimed the Confederate Arizona Territory on February 14, 1862, north to the 34th parallel. Marcus H. MacWillie served in both Confederate Congresses as Arizona's delegate. In 1862, the Confederate New Mexico campaign to take the northern half of the U.S. territory failed and the Confederate territorial government in exile relocated to San Antonio, Texas. Confederate supporters in the trans-Mississippi west claimed portions of the Indian Territory after the US evacuated the federal forts and installations. Over half of the American Indian troops participating in the War from the Indian Territory supported the Confederacy. On July 12, 1861, the Confederate government signed a treaty with both the Choctaw and Chickasaw Indian nations. After several battles, Union armies took control of the territory. The Indian Territory never formally joined the Confederacy, but did receive representation in the Congress. Many Indians from the Territory were integrated into regular Confederate Army units. After 1863, the tribal governments sent representatives to the Confederate Congress: Elias Cornelius Boudinot representing the Cherokee and Samuel Benton Callahan representing the Seminole and Creek. The Cherokee Nation aligned with the Confederacy. They practiced and supported slavery, opposed abolition, and feared their lands would be seized by the Union. After the war, the Indian territory was disestablished, their black slaves were freed, and the tribes lost some of their lands. Capitals. Montgomery, Alabama, served as capital of the Confederate States from February 4 until May 29, 1861, in the Alabama State Capitol. Six states created the Confederacy there on February 8, 1861. The Texas delegation was seated at the time, so it is counted in the "original seven" states of the Confederacy; it had no roll call vote until after its referendum made secession "operative". The Permanent Constitution was adopted there on March 12, 1861. The permanent capital provided for in the Confederate Constitution called for a state cession of a 100 square mile district to the central government. 
Atlanta, which had not yet supplanted Milledgeville, Georgia, as its state capital, put in a bid noting its central location and rail connections, as did Opelika, Alabama, noting its strategically interior situation, rail connections and deposits of coal and iron. Richmond, Virginia, was chosen for the interim capital at the Virginia State Capitol. The move was used by Vice President Stephens and others to encourage other border states to follow Virginia into the Confederacy. In the political moment it was a show of "defiance and strength". The war for Southern independence was surely to be fought in Virginia, but Virginia also had the largest Southern military-aged white population, with infrastructure, resources, and supplies. The Davis Administration's policy was that "It must be held at all hazards." The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held there. As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress argued for moving the capital from Richmond. At the approach of federal armies in mid-1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress to session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate further south. Little came of these plans before Lee's surrender. Davis and most of his cabinet fled to Danville, Virginia, which served as their headquarters for eight days. Diplomacy. During its four years, the Confederacy asserted its independence and appointed dozens of diplomatic agents abroad. None were recognized by a foreign government. The US government regarded the Southern states as being in rebellion or insurrection and so refused any formal recognition of their status. The US government never declared war on those "kindred and countrymen" in the Confederacy but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861. It called for troops to recapture forts and suppress what Lincoln later called an "insurrection and rebellion". Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war predominantly governed military relationships on both sides of uniformed conflict. Once war with the United States began, the Confederacy pinned its hopes for survival on military intervention by the UK or France. The Confederate government sent James M. Mason to London and John Slidell to Paris. On their way in 1861, their ship, the "Trent", was intercepted by the U.S. Navy, which took the two envoys to Boston, an international episode known as the "Trent" Affair. The diplomats were eventually released and continued their voyage. However, their mission was unsuccessful; historians judge their diplomacy as poor. Neither secured diplomatic recognition for the Confederacy, much less military assistance. The Confederates who had believed that "cotton is king", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources.
The United Kingdom took pride in leading the end of the transatlantic enslavement of Africans; by 1833, the Royal Navy patrolled Middle Passage waters to prevent additional slave ships from reaching the Western Hemisphere. It was in London that the first World Anti-Slavery Convention had been held in 1840. Black abolitionist speakers toured England, Scotland, and Ireland, exposing the reality of America's chattel slavery and rebutting the Confederate position that blacks were "unintellectual, timid, and dependent", and "not equal to the white man...the superior race." Frederick Douglass, Henry Highland Garnet, Sarah Parker Remond, her brother Charles Lenox Remond, James W. C. Pennington, Martin Delany, Samuel Ringgold Ward, and William G. Allen all spent years in Britain, where fugitive slaves were safe and, as Allen said, there was an "absence of prejudice against color. Here the colored man feels himself among friends, and not among enemies". Most British public opinion was against slavery, with Liverpool seen as the primary base of Southern support. Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston, showed interest in recognition of the Confederacy or at least mediation of the war. Chancellor of the Exchequer William Gladstone attempted unsuccessfully to convince Palmerston to intervene. By September 1862 the Union victory at the Battle of Antietam, Lincoln's preliminary Emancipation Proclamation and abolitionist opposition in Britain put an end to these possibilities. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain shipments, the end of British exports to the U.S., and seizure of billions of pounds invested in American securities. War would have meant higher taxes in Britain, another invasion of Canada, and attacks on the British merchant fleet. In mid-1862, fears of a race war (like the Haitian Revolution of 1791–1804) led the British to consider intervention for humanitarian reasons. John Slidell, the Confederate States emissary to the French Empire, succeeded in negotiating a loan of $15,000,000 from Erlanger and other French capitalists for ironclad warships and military supplies. The British government did allow the construction of blockade runners in Britain; most were owned and operated by British financiers and shipowners, and a few were owned and operated by the Confederacy. The British investors' goal was to acquire highly profitable cotton. Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. Those nations recognized the Union and Confederate sides as belligerents. In 1863, the Confederacy expelled European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. The Confederacy appointed Ambrose Dudley Mann as special agent to the Holy See in September 1863, but the Holy See never released a statement supporting or recognizing the Confederacy. In November 1863, Mann met Pope Pius IX and received a letter supposedly addressed "to the Illustrious and Honorable Jefferson Davis, President of the Confederate States of America"; Mann had mistranslated the address.
In his report to Richmond, Mann claimed a great diplomatic achievement for himself, but Confederate Secretary of State Judah P. Benjamin told Mann it was "a mere inferential recognition, unconnected with political action or the regular establishment of diplomatic relations" and thus did not assign it the weight of formal recognition. Nevertheless, the Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers to assess whether there had been a "de facto" establishment of independence. These observers included Arthur Lyon Fremantle of the British Coldstream Guards, who entered the Confederacy via Mexico, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian Army. European travelers visited and wrote accounts for publication. Importantly, in 1862 the Frenchman Charles Girard's "Seven months in the rebel states during the North American War" testified that "this government ... is no longer a trial government ... but really a normal government, the expression of popular will". Fremantle went on to record his impressions in his book "Three Months in the Southern States". French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make a "direct proposition" to Britain for joint recognition. The Emperor made the same assurance to British members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament supporting joint Anglo-French recognition of the Confederacy. "Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure." Following the disasters at Vicksburg and Gettysburg in July 1863, the Confederates "suffered a severe loss of confidence in themselves" and withdrew into an interior defensive position. By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F. Kenner to Europe with a message that the war was fought solely for "the vindication of our rights to self-government and independence" and that "no sacrifice is too great, save that of honor". The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. European leaders all saw that the Confederacy was on the verge of defeat. The Confederacy's biggest foreign policy successes were with Brazil and Cuba, but these had little military import. Brazil represented the "peoples most identical to us in Institutions", in which slavery remained legal until the 1880s and the abolitionist movement was small. Confederate ships were welcome in Brazilian ports. After the war, Brazil was the primary destination of those Southerners who wanted to continue living in a slave society, where, as one immigrant remarked, "Confederado" slaves were cheap. The Captain-General of Cuba declared in writing that Confederate ships were welcome, and would be protected in Cuban ports. Historians speculate that if the Confederacy had achieved independence, it probably would have tried to acquire Cuba as a base of expansion. At war. Motivations of soldiers. Most soldiers who joined Confederate national or state military units joined voluntarily. Perman (2010) says historians are of two minds on why millions of soldiers seemed so eager to fight, suffer and die over four years. Military strategy. Civil War historian E.
Merton Coulter wrote that for those who would secure its independence, "The Confederacy was unfortunate in its failure to work out a general strategy for the whole war". Aggressive strategy called for offensive force concentration. Defensive strategy sought dispersal to meet demands of locally minded governors. The controlling philosophy evolved into a combination of "dispersal with a defensive concentration around Richmond". The Davis administration considered the war purely defensive, a "simple demand that the people of the United States would cease to war upon us". Historian James M. McPherson is a critic of Lee's offensive strategy: "Lee pursued a faulty military strategy that ensured Confederate defeat". As the Confederate government lost control of territory in campaign after campaign, it was said that "the vast size of the Confederacy would make its conquest impossible". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South: heat exhaustion, sunstroke, and endemic diseases such as malaria and typhoid. Early in the war, both sides believed that one great battle would decide the conflict; the Confederates won a surprise victory at the First Battle of Bull Run, also known as First Manassas (the name used by Confederate forces). It drove the Confederate people "insane with joy"; the public demanded a forward movement to capture Washington, relocate the Confederate capital there, and admit Maryland to the Confederacy. A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion into Maryland, halted at the Battle of Antietam, in October 1862 generals proposed concentrating forces from state commands to re-invade the north. Nothing came of it. Again in mid-1863, during his incursion into Pennsylvania, Lee requested of Davis that Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign. The eleven states of the Confederacy were outnumbered by the North about four-to-one in military manpower. The Confederacy was overmatched far more in military equipment, industrial facilities, railroads for transport, and wagons supplying the front. Confederates slowed the Yankee invaders, at heavy cost to the Southern infrastructure. The Confederates burned bridges, laid land mines in the roads, and made harbors, inlets and inland waterways unusable with sunken mines (called "torpedoes" at the time). Coulter reports: The Confederacy relied on external sources for war materials. The first came from trade with the enemy. "Vast amounts of war supplies" came through Kentucky, and thereafter, western armies were "to a very considerable extent" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of "outstanding importance". On April 17, President Davis called on privateer raiders, the "militia of the sea", to wage war on U.S. seaborne commerce. Despite noteworthy effort, over the course of the war the Confederacy proved unable to match the Union in ships and seamanship, materials and marine construction.
An inescapable obstacle to success in the warfare of mass armies was the Confederacy's lack of manpower and of sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–63, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, "More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy." Armed forces. The armed forces of the Confederacy comprised three branches: Army, Navy and Marine Corps. On February 28, 1861, the Provisional Confederate Congress established a provisional volunteer army and gave control over military operations and authority for mustering state forces and volunteers to the newly chosen Confederate president, Jefferson Davis. On March 1, 1861, on behalf of the Confederate government, Davis assumed control of the military situation at Charleston, South Carolina, where South Carolina state militia besieged Fort Sumter in Charleston harbor, held by a small U.S. Army garrison. By March 1861, the Provisional Confederate Congress expanded the provisional forces and established a more permanent Confederate States Army. The total number of men who served in the Confederate Army is unknowable due to incomplete and destroyed Confederate records, but estimates are between 750,000 and 1,000,000 troops. This does not include an unknown number of slaves pressed into army tasks, such as the construction of fortifications and defenses or driving wagons. Confederate casualty figures also are incomplete and unreliable, estimated at 94,000 killed or mortally wounded, 164,000 deaths from disease, and between 26,000 and 31,000 deaths in Union prison camps. One incomplete estimate is 194,026. The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and were appointed to senior positions. Many had served in the Mexican–American War (including Robert E. Lee and Jefferson Davis), but others, such as Leonidas Polk (who graduated from West Point but did not serve in the Army), had little or no experience. The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks. Although no Army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia, in 1863, but no midshipmen graduated before the Confederacy's end. Most soldiers were white males aged between 16 and 28; half were 23 or older by 1861. The Confederate Army was permitted to disband for two months in early 1862 after its short-term enlistments expired. The majority of those in uniform would not re-enlist after their one-year commitment; thus, on April 16, 1862, the Confederate Congress imposed the first mass conscription on North American territory. (A year later, on March 3, 1863, the United States Congress passed the Enrollment Act.)
Rather than a universal draft, the first program was a selective one with physical, religious, professional, and industrial exemptions. These became narrower as the war progressed. Initially substitutes were permitted, but by December 1863 these were disallowed. In September 1862 the age limit was increased from 35 to 45, and by February 1864 all men under 18 and over 45 were conscripted to form a reserve for state defense inside state borders. By March 1864, the Superintendent of Conscription reported that all across the Confederacy, every officer in constituted authority, man and woman, "engaged in opposing the enrolling officer in the execution of his duties". Although conscription was challenged in the state courts, the Confederate state supreme courts routinely rejected the legal challenges. Many thousands of slaves served as personal servants to their owner, or were hired as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for "local defense, not combat". Depleted by casualties and desertions, the military suffered chronic manpower shortages. In early 1865, the Confederate Congress, influenced by the public support of General Lee, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused "to guarantee the freedom of black volunteers". No more than two hundred black combat troops were ever raised. Raising troops. The immediate onset of war meant that it was fought by the "Provisional" or "Volunteer Army". State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large "Provisional" armies answering only to Davis. When filling the Confederate government's call for 100,000 men, officials turned away another 200,000 by accepting only those who enlisted "for the duration" or twelve-month volunteers who brought their own arms or horses. It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers. Efficiency in the lower officers was "greater than could have been reasonably expected". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was governor-appointed or elected by the enlisted men of the unit. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available. Anticipating the need for more "duration" men, in January 1862 Congress provided for company-level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress allowed Davis to require numbers of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws. The veteran Confederate army of early 1862 was mostly twelve-month volunteers with terms about to expire. Enlisted reorganization elections disintegrated the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a "rapid and widespread" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants. In early 1862, the popular press suggested the Confederacy required a million men under arms.
But veteran soldiers were not re-enlisting, and earlier secessionist volunteers did not reappear to serve in the war. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards. Conscription. The Confederacy passed the first American law of national conscription on April 16, 1862. White males of the Confederate States aged 18 to 35 were declared members of the Confederate army for three years, and all men already enlisted had their terms extended to three years. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts; in September those aged 35 to 45 also became conscripts. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals who had benefited earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, with those under eighteen and over forty-five limited to in-state duty. Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, industry, the ministry and teaching, as well as the physically unfit. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers. Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law, which specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedaling six months later, Congress provided that overseers under 45 could be exempted only if they had held the occupation before the first Conscription Act. The number of officials exempted under state authority and appointed through governors' patronage expanded significantly. The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, placing detail authority in President Davis. Because the shame of conscription was greater than that of a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another, putting nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as far as state Governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50-year-old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops. "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. Men who had lost an arm or a leg were retained for home guard service. Ultimately, conscription was a failure, and its main value was in goading men to volunteer. 
The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment." Victories: 1861. The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston. In January, President James Buchanan had attempted to resupply the garrison with the steamship "Star of the West", but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that without Confederate resistance to the resupply there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender. Following Sumter, Lincoln directed states to provide 75,000 militiamen for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories, at Big Bethel (Bethel Church, Virginia), at First Bull Run (First Manassas) in Virginia in July, and at Wilson's Creek (Oak Hills) in Missouri in August. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri. Both North and South began training up armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September and no serious Confederate advance in western Virginia occurred until the next year. Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the slaves abandoned by their owners. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy the territory. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862. Incursions: 1862. The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. 
To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections. Much of northwestern Virginia was under Federal control. In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of a Confederate counterattack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where, at the naval Battle of Memphis, the Confederate River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured on April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base. Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments in every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". British firms such as John Fraser and Company and S. Isaac, Campbell & Company developed small fleets of blockade runners, while the Ordnance Department secured its own blockade runners for dedicated munitions cargoes. During the Civil War fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade in March, the ironclad CSS "Virginia" was forced into port and burned by the Confederates during their retreat. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade. Attempts were made by Commodore Josiah Tattnall III's ironclads from Savannah in 1862 with the CSS "Atlanta". Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports. They were converted into commerce-raiding cruisers, and manned by their British crews. In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring. In an attempt to seize the initiative, reprovision, protect farms in mid-growing season and influence U.S. 
Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of only 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September, Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements Loring abandoned his position, and by November the region was back under Federal control. Anaconda: 1863–1864. The failed Middle Tennessee campaign ended on January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), in which both sides lost the largest percentage of casualties they suffered during the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory in April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay. Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters: Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy." September and November left Confederates yielding Chattanooga, Tennessee, the gateway to the lower South. For the remainder of the war, fighting was restricted inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg. In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whose recruits were British. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates by 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864 the ironclad CSS "Albemarle" engaged Union gunboats for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater. Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook. Collapse: 1865. 
The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. A Union amphibious assault captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack. The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia—North Carolina, central Alabama—Florida, and Texas, the latter two areas remaining unoccupied less because of any notion of resistance than because Federal forces had little interest in occupying them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis' capital. The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy. The CSS "Stonewall" sailed from Europe to break the Union blockade in March; on making Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured on May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last Confederate military unit, the commerce raider CSS "Shenandoah", surrendered on November 6, 1865, in Liverpool. Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed "organized southern military resistance". The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. Jefferson Davis' own assessment in 1890 was, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States." Government and politics. Constitution. In February 1861, Southern leaders met in Montgomery, Alabama, to adopt their first constitution, establishing a confederation of "sovereign and independent states" and guaranteeing states the right to a republican form of government. Prior to adopting the first Confederate constitution, the independent states were sovereign republics. A second Confederate constitution, written in March 1861, sought to replace the confederation with a federal government; much of this constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery, including provisions for the recognition and protection of slavery in any territory of the Confederacy. 
It maintained the ban on international slave-trading, though it made the ban's application explicit to "Negroes of the African race" in contrast to the U.S. Constitution's reference to "such Persons as any of the States now existing shall think proper to admit". It protected the existing internal trade of slaves among slaveholding states. In certain areas, the second Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S. Constitution of the time did, but in other areas, the states lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state for funding internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue). State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point. The second Confederate Constitution was adopted on February 22, 1862, one year into the American Civil War, and did not specifically include a provision allowing states to secede; the Preamble spoke of each state "acting in its sovereign and independent character" but also of the formation of a "permanent federal government". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled, with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied States the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the secular language of the United States Constitution, the Confederate Constitution overtly asked God's blessing ("... invoking the favor and guidance of Almighty God ..."). Some historians have referred to the Confederacy as a form of Herrenvolk democracy. Executive. The Montgomery Convention to establish the Confederacy and its executive met on February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were "provisional", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0. Jefferson Davis was elected provisional president. His U.S. Senate resignation speech had greatly impressed listeners with its clear rationale for secession and its plea for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected, he assumed the office of Provisional President. Three candidates for provisional Vice President were under consideration the night before the February 9 election. 
All were from Georgia, and the various delegations, meeting in different places, determined that two of them would not do, so Alexander H. Stephens was unanimously elected provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18. Davis and Stephens were elected president and vice president, unopposed, on November 6, 1861. They were inaugurated on February 22, 1862. Coulter stated, "No president of the U.S. ever had a more difficult task." Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their "Revolution", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In Davis' inauguration speech, he explained that the Confederacy was not a French-like revolution, but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress. The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors. The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds votes required in the U.S. Congress. In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, as the Confederacy was defeated before the completion of his term. Legislative. The only two "formal, national, functioning, civilian administrative bodies" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. The Provisional Confederate Congress was a unicameral assembly; each state received one vote. The Permanent Confederate Congress was elected and began its first session on February 18, 1862. The Permanent Congress for the Confederacy followed the United States forms with a bicameral legislature. The Senate had two members per state, twenty-six Senators in all. The House numbered 106 representatives, apportioned by free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865. The political influence of the civilian vote, the soldier vote and appointed representatives reflected the divisions of political geography of a diverse South. These in turn changed over time relative to Union occupation and disruption, the war's impact on the local economy, and the course of the war. Without political parties, key candidate identification related to adopting secession before or after Lincoln's call for volunteers to retake Federal property. Previous party affiliation played a part in voter selection, predominantly secessionist Democrat or unionist Whig. The absence of political parties made individual roll call voting all the more important, as the Confederate "freedom of roll-call voting [was] unprecedented in American legislative history." 
Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy, including impressment of slaves, goods and scorched earth, and (4) support of the Jefferson Davis administration in its foreign affairs and negotiating peace. For the first year, the unicameral Provisional Confederate Congress functioned as the Confederacy's legislative branch. Judicial. The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the "Supreme Court of the Confederate States". Thus, the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government. Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same US Federal District Judges were appointed as Confederate States District Judges. Confederate district courts began reopening in early 1861, handling many of the same types of cases as before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers. Post office. The Confederacy established the Confederate Post Office for mail delivery. One of the first undertakings in establishing the office was Jefferson Davis's appointment of John H. Reagan as Postmaster General in 1861. Writing in 1906, historian Walter Flavius McCaleb praised Reagan's "energy and intelligence... in a degree scarcely matched by any of his associates". When the war began, the US Post Office briefly delivered mail from the secessionist states. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing US postage was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the Confederacy to the U.S. was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was also inspected before being sent on. With the chaos of the war, a working postal system was more important than ever for the Confederacy. The Civil War had divided family members and friends, and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men who were away serving in an army. Mail delivery was also important for the Confederacy for a myriad of business and military reasons. 
Because of the Union blockade, basic supplies were always in demand and so getting mailed correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War is "Prisoner of War mail" and "Blockade mail", as these items were often involved with a variety of military and other wartime activities. The postal history of the Confederacy, along with its surviving mail, has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded. Civil liberties. The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Economy. Slaves. Across the South, widespread rumors that the slaves were planning insurrection caused panic. Patrols were stepped up. The slaves did become increasingly independent and resistant to punishment, but historians agree there were no insurrections. Many slaves became spies for the North, and large numbers ran away to federal lines. According to the 1860 United States census, about 31% of free households in the eleven states that would join the Confederacy owned slaves. The 11 states that seceded had the highest proportion of slaves in their populations; slaves represented 39% of their total population. The proportions ranged from a majority in South Carolina (57.2%) and Mississippi (55.2%) to about a quarter in Tennessee (24.8%). Lincoln's Emancipation Proclamation on January 1, 1863, legally freed three million slaves in designated areas of the Confederacy. The long-term effect was that the Confederacy could not preserve the institution of slavery and lost the use of the core element of its plantation labor force. Over 200,000 freed slaves were hired by the federal army as teamsters, cooks, launderers and laborers, and eventually as soldiers. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their slaves as far as possible out of reach of the Union army. Though the concept was promoted within certain circles of the Union hierarchy during and immediately following the war, no program of reparations for freed slaves was ever attempted. Unlike other Western countries, such as Britain and France, the U.S. government never paid compensation to Southern slave owners for their "lost property". The only place where compensated emancipation was carried out was the District of Columbia. Political economy. The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. The South supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, as cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by non-southerners. 
The plantations that enslaved over three million black people were the principal source of wealth. Most were concentrated in "black belt" plantation areas (because few white families in the poor regions owned slaves). For decades, there had been widespread fear of slave revolts. During the war, extra men were assigned to "home guard" patrol duty and governors sought to keep militia units at home for protection. Historian William Barney reports, "no major slave revolts erupted during the Civil War." Nevertheless, slaves took the opportunity to enlarge their sphere of independence, and when Union forces were nearby, many ran off to join them. Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was top-heavy income distribution. Mass production requires mass markets, and slaves living in small cabins, using self-made tools and outfitted each year with one suit of work clothes of inferior fabric, did not generate the consumer demand to sustain local manufactures of any description in the way that a mechanized family farm of free labor in the North did. The Southern economy was "pre-capitalist" in that slaves were put to work in the largest revenue-producing enterprises, not free labor markets. That labor system as practiced in the American South encompassed paternalism, whether abusive or indulgent, and that meant labor-management considerations apart from productivity. Approximately 85% of both the Northern and Southern white populations lived on family farms; both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was pre-capitalist in its overwhelming reliance on the agriculture of cash crops to produce wealth, while the great majority of farmers fed themselves and supplied a small local market. Southern cities and industries grew faster than ever before, but the thrust of the rest of the country's exponential growth was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a "great distance", as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets. A third element of the pre-capitalist Southern economy was its cultural setting. White southerners did not adopt the work ethic, nor the habits of thrift, that marked the rest of the country. The South had access to the tools of capitalism, but it did not adopt the culture of capitalism. The Southern Cause as a national economy in the Confederacy was grounded in "slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations". National production. The Confederacy started its existence as an agrarian economy that exported cotton and, to a lesser extent, tobacco and sugarcane to a world market. Local food production included grain, hogs, cattle, and vegetables. The cash came from exports, but the Southern people spontaneously stopped exports in early 1861 to hasten the impact of "King Cotton", a failed strategy to coerce international support for the Confederacy through its cotton exports. When the blockade was announced, commercial shipping practically ended (because the ships could not get insurance), and only a trickle of supplies came via blockade runners. 
The cutoff of exports was an economic disaster for the South, rendering useless its most valuable properties: its plantations and their enslaved workers. Many planters kept growing cotton, which piled up everywhere, but most turned to food production. All across the region, the lack of repair and maintenance wasted away the physical assets. The eleven states had produced $155 million in manufactured goods in 1860, chiefly from local gristmills, lumber, processed tobacco, cotton goods and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, which were never under Confederate control. The government did set up munitions factories in the Deep South. Combined with captured munitions and those coming via blockade runners, these factories kept the armies minimally supplied with weapons. The soldiers suffered from reduced rations, lack of medicines, and the growing shortages of uniforms, shoes and boots. Shortages were much worse for civilians, and the prices of necessities steadily rose. The Confederacy adopted a 15 percent tariff and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution by centralization and standardization, but it was too little too late, as its economy was systematically strangled by blockade and raids. Transportation systems. In peacetime, the South's extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had developed as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. Railroads tied plantation areas to the nearest river or seaport and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines. At the onset of the Civil War the South had a disjointed rail network plagued by changes in track gauge and a lack of interchange. Locomotives and freight cars had fixed axles and could not use tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons for transport to the connecting railroad station, where it had to await freight cars and a locomotive before proceeding. Centers requiring off-loading included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Because of this design limitation, the relatively primitive railroads of the Confederacy were unable to overcome the Union naval blockade of the South's crucial intra-coastal and river routes. The Confederacy had no plan to expand, protect or encourage its railroads. 
Southerners' refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. In the early years of the war the Confederate government took a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the "de facto" control of the military. In contrast, the U.S. Congress had authorized military administration of Union-controlled railroad and telegraph systems in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Confederate armies successfully reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865. In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment, and raids on both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and rolling stock wore out through heavy use. Horses and mules. The Confederate army experienced a persistent shortage of horses and mules and requisitioned them with dubious promissory notes given to local farmers and breeders. Union forces paid in real money and found ready sellers in the South. Both armies needed horses for cavalry and for artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the invading Union forces had a policy of shooting all the local horses and mules that they did not need, in order to keep them out of Confederate hands. The Confederate armies and farmers experienced a growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months. Financial instruments. Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, with a total face value of $1.5 billion. Much of it was signed by Treasurer Edward C. Elmore. Inflation became rampant as the paper money depreciated and eventually became worthless. The state governments and some localities printed their own paper money, adding to the runaway inflation. The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. After the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. Confederate politicians were worried about angering the general population with hard taxes. A tax increase might have disillusioned many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the southern states throughout the rest of the war. 
By April 1863, for example, the cost of flour in Richmond had risen to $100 a barrel and housewives were rioting. The Confederate government took over the three national mints in its territory: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861 all of these facilities produced small amounts of gold coinage, and the latter produced half dollars as well. A lack of silver and gold precluded further coinage. The Confederacy apparently also experimented with issuing one cent coins, although only 12 were produced by a jeweler in Philadelphia, who was afraid to send them to the South. Like the half dollars, copies were later made as souvenirs. US coinage was hoarded and did not have any general circulation. U.S. coinage was admitted as legal tender up to $10, as were British sovereigns, French Napoleons and Spanish and Mexican doubloons, at a fixed rate of exchange. Confederate money was paper and postage stamps. Food shortages and riots. By mid-1861, the Union naval blockade virtually shut down the export of cotton and the import of manufactured goods. Food that formerly came overland was cut off. As women were the ones who remained at home, they had to make do with the lack of food and supplies. They cut back on purchases, used old materials, and planted more flax and peas to provide clothing and food. They used ersatz substitutes when possible. The households were severely hurt by inflation in the cost of everyday items like flour, and by the shortages of food, fodder for the animals, and medical supplies for the wounded. State governments requested that planters grow less cotton and more food, but most refused. When cotton prices soared in Europe, expectations were that Europe would soon intervene to break the blockade and make the planters rich, but Europe remained neutral. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess. But food shortages only worsened, especially in the towns. The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1863, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food. As wives and widows of soldiers, they were hurt by the inadequate welfare system. Devastation by 1865. By the end of the war deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Every Confederate state was affected, but most of the war was fought in Virginia and Tennessee, while Texas and Florida saw the least military action. Much of the damage was caused by direct military action, but most was caused by lack of repairs and upkeep, and by deliberately using up resources. Historians have recently estimated how much of the devastation was caused by military action. Paul Paskoff calculates that Union military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, undoubtedly some people had fled to safer areas, so the exact population exposed to war is unknown. The eleven Confederate States in the 1860 United States census had 297 towns and cities with 835,000 people; of these, 162 with 681,000 people were at one point occupied by Union forces. 
Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14 percent of the urban South. Historians have not estimated what their actual population was when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1 percent of the Confederacy's 1860 population. In addition, 45 court houses were burned (out of 830). The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, it had diminished by 40 percent and was worth just $48 million. Many old tools had broken through heavy use; new tools were rarely available, and even repairs were difficult. The economic losses affected everyone. Most banks and insurance companies had gone bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. Most debts were also left behind. Most farms were intact but had lost their horses, mules, and cattle. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby. The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but also a depleted stock of material resources. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The rebuilding took years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. Effect on women and families. More than 250,000 Confederate soldiers died during the war. Some widows abandoned their family farms and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an "old maid" was an embarrassment to the woman and her family, but after the war, it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the "New Woman" emerged – she was self-sufficient and independent, and stood in sharp contrast to the "Southern Belle" of antebellum lore. National flags. The first official flag of the Confederate States of America—called the "Stars and Bars"—originally had seven stars, representing the first seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run (First Manassas), it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate "Battle Flag" was designed for use by troops in the field. Known as the "Southern Cross", it gave rise to many variations of the original square configuration. Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. 
This new standard—known as the "Stainless Banner"—consisted of a lengthened white field area with a Battle Flag canton. This flag, too, had its problems when used in military operations as, on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end. The "Confederate Flag" has a color scheme similar to that of the most common Battle Flag design, but is rectangular, not square. The "Confederate Flag" is a highly recognizable symbol of the South in the United States today and continues to be a controversial icon. Southern Unionism. Unionism—opposition to the Confederacy—was strong in certain areas within the Confederate States. Southern Unionists were widespread in the mountain regions of Appalachia and the Ozarks. Unionists, led by Parson Brownlow and Senator Andrew Johnson, took control of East Tennessee in 1863. Unionists also attempted control over western Virginia, but never effectively held more than half of the counties that formed the new state of West Virginia. Union forces captured parts of coastal North Carolina, and at first were largely welcomed by local unionists. The occupiers came to be perceived as oppressive, callous, radical and favorable to Freedmen. Occupiers pillaged, freed slaves, and evicted those who refused to swear loyalty oaths to the Union. In Texas, local officials harassed and murdered Unionists. Draft resistance was widespread, especially among Texans of German or Mexican descent, many of the latter leaving for Mexico. Confederate officials attempted to hunt down and kill potential draftees who had gone into hiding. Over 4,000 suspected Unionists were imprisoned in the Confederate States without trial. Up to 100,000 men living in states under Confederate control served in the Union Army or pro-Union guerrilla groups. Although Southern Unionists came from all classes, most differed socially, culturally, and economically from the region's dominant pre-war planter class. Geography. Region and climate. The Confederate States of America claimed a long coastline, and thus a large part of its territory lay on the seacoast, with level and often sandy or marshy ground. Most of the interior portion consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The southern reaches of the Mississippi River bisected the country, and the western half was often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas. Much of the area had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps to semi-arid steppes and arid deserts. The subtropical climate made winters mild but allowed infectious diseases to flourish; on both sides more soldiers died from disease than were killed in combat. Demographics. Population. The 1860 United States census gives a picture of the population for the areas that had joined the Confederacy. The population numbers exclude non-assimilated Indian tribes. In 1860, the areas that later formed the eleven Confederate states (and including the future West Virginia) had 132,760 (2%) free blacks. Males made up 49% of the total population and females 51%. Rural and urban population. The CSA was overwhelmingly rural. 
Few towns had populations of more than 1,000—the typical county seat had a population under 500. Of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Religion. The CSA was overwhelmingly Protestant. Both free and enslaved populations identified with evangelical Protestantism. Baptists and Methodists together formed majorities of both the white and the slave population, becoming the Black church. Freedom of religion and separation of church and state were fully ensured by Confederate laws. Church attendance was very high and chaplains played a major role in the Army. Most large denominations experienced a North–South split in the prewar era on the issue of slavery. The creation of a new country necessitated independent structures. For example, the Presbyterian Church in the United States split, with much of the new leadership provided by Joseph Ruggles Wilson. Baptists and Methodists both broke off from their Northern coreligionists over the slavery issue, forming the Southern Baptist Convention and the Methodist Episcopal Church, South. Elites in the southeast favored the Protestant Episcopal Church in the Confederate States of America, which had reluctantly split from the Episcopal Church in 1861. Other elites were Presbyterians belonging to the 1861-founded Presbyterian Church in the United States. Catholics included an Irish working-class element in coastal cities and an old French element in southern Louisiana. The southern churches met the shortage of Army chaplains by sending missionaries. One result was wave after wave of revivals in the Army. Legacy and assessment. Amnesty and treason issue. When the war ended, over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in granting them. He issued a general amnesty to all Confederate participants in the "late Civil War" in 1868. Congress passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act in May 1872 lifting those restrictions. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet, and no treason trials were ever held. An acquittal of Davis would have been humiliating for the government. Davis was indicted for treason but never tried; he was released from prison on bail in May 1867. The amnesty of December 25, 1868, eliminated any possibility of Davis standing trial for treason. Henry Wirz, the commandant of a notorious prisoner-of-war camp near Andersonville, Georgia, was convicted by a military court of charges related to cruelty and conspiracy, and executed on November 10, 1865. The U.S. government began a decade-long process known as Reconstruction which attempted to resolve the political and constitutional issues of the Civil War. The priorities were to guarantee that Confederate nationalism and slavery were ended; to ratify and enforce the Thirteenth Amendment, which outlawed slavery; the Fourteenth, which guaranteed dual U.S. 
and state citizenship to all native-born residents, regardless of race; the Fifteenth, which made it illegal to deny the right to vote because of race; and to repeal each state's ordinance of secession. The Compromise of 1877 ended Reconstruction in the former Confederate states. Federal troops were withdrawn. The war left the entire region economically devastated by military action, ruined infrastructure, and exhausted resources. Still dependent on an agricultural economy and resisting investment in infrastructure, the region remained dominated by the planter elite into the next century. Democrat-dominated legislatures passed new constitutions and amendments to exclude most blacks and many poor whites. This exclusion and a weakened Republican Party remained the norm until the Voting Rights Act of 1965. The Solid South of the early 20th century did not achieve national levels of prosperity until long after World War II. Supreme Court rulings. In "Texas v. White" (1869), the Supreme Court ruled by a 5–3 majority that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States of America. The Court held that the Constitution did not permit a state to unilaterally secede. In declaring that no state could leave the Union, "except through revolution or through consent of the States", it was "explicitly repudiating the position of the Confederate states that the United States was a voluntary compact between sovereign states". In "Sprott v. United States" (1874), the Supreme Court ruled 8–1 to reaffirm its conclusion in "White" and held that the Confederacy's "foundation was treason" and its "single purpose, so long as it lasted, was to make that treason successful." Theories regarding downfall. Historian Frank Lawrence Owsley argued that the Confederacy "died of states' rights". The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty. The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the "essence of military despotism". Roger Lowenstein argued that the Confederacy's failure to raise adequate revenue led to hyperinflation and left it unable to win a war of attrition, despite the prowess of military leaders such as Robert E. Lee. Though political differences existed within the Confederacy, no national political parties were formed because they were seen as illegitimate. "Anti-partyism became an article of political faith." Without a system of political parties building alternate sets of national leaders, electoral protests tended to be narrowly state-based, "negative, carping and petty". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, the lack of a functioning two-party system caused "real and direct damage" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration. The enemies of President Davis proposed that the Confederacy "died of Davis". He was unfavorably compared to George Washington by critics such as Edward Alfred Pollard, editor of the most influential newspaper in the Confederacy, the "Daily Richmond Examiner". 
Beyond the early honeymoon period, Davis was never popular. Ellis Merton Coulter, viewed by historians as a Confederate apologist, argued that Davis was unable to mobilize Confederate nationalism effectively in support of his government, and especially failed to appeal to the small farmers who made up the bulk of the population. Davis failed to build a network of supporters who would speak up when he came under criticism, and he repeatedly alienated governors and other state-based leaders by demanding centralized control of the war effort.
7025
7903804
https://en.wikipedia.org/wiki?curid=7025
Cranberry
Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus "Vaccinium". Cranberries are low, creeping shrubs or vines up to long and in height; they have slender stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but has an acidic taste. In Britain, "cranberry" may refer to the native species "Vaccinium oxycoccos", while in North America, "cranberry" may refer to "Vaccinium macrocarpon". "Vaccinium oxycoccos" is cultivated in central and northern Europe, while "V. macrocarpon" is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, "Oxycoccus" is regarded as a genus in its own right. Cranberries can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere. In 2020, the U.S., Canada, and Chile accounted for 97% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the U.S. and Canada, and at Christmas dinner in the United Kingdom. Description and species. Cranberries are low, creeping shrubs or vines up to long and in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It has an acidic taste which usually overwhelms its sweetness. There are 4–5 species of cranberry, classified by subgenus: Similar species. Cranberries are related to bilberries, blueberries, and huckleberries, all in "Vaccinium" subgenus "Vaccinium". These differ in having bell-shaped flowers, petals that are not reflexed, and woodier stems, forming taller shrubs. Etymology. The name "cranberry" derives from the Middle Low German "kraanbere" (English translation, "craneberry"), first named as "cranberry" in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word, cranberry, to represent the expanding flower, stem, calyx, and petals resembling the neck, head, and bill of a crane. The traditional English name for the plant more common in Europe, "Vaccinium oxycoccos", , originated from plants with small red berries found growing in fen (marsh) lands of England. Cultivation. American Revolutionary War veteran Henry Hall first cultivated cranberries in the Cape Cod town of Dennis around 1816. In the 1820s, Hall was shipping cranberries to New York City and Boston from which shipments were also sent to Europe. In 1843, Eli Howes planted his own crop of cranberries on Cape Cod, using the "Howes" variety. In 1847, Cyrus Cahoon planted a crop of "Early Black" variety near Pleasant Lake, Harwich, Massachusetts. By 1900, were under cultivation in the New England region. In 2021, the total output of cranberries harvested in the United States was , with Wisconsin as the largest state producer (59% of total), followed by Massachusetts, New Jersey, and Oregon. Cranberries have had two major breeding events. 
The first occurred in the 1920s, with aims to create a crop that was more insect-resistant, specifically to blunt-nosed leafhopper ("Limotettix vaccini"), the vector of cranberry false blossom disease. This resulted in cultivars such as "Stevens" and "Franklin". As such, cultivars like "Howes" tend to be more susceptible to insects than "Stevens". However, with the introduction of many broad-spectrum pesticides in the 1940s and 1950s, breeders eventually stopped breeding for pest resistance. Instead, beginning in the 1980s, cranberries were bred for high-yielding varieties, leading to cultivars such as "Crimson Queen" and "Mullica Queen". Many of these varieties were spearheaded and bred by Dr. Nicholi Vorsa of Rutgers University. In more recent years, there have been heavier restrictions on pesticides due to environmental safety concerns, leading to a greater emphasis on high yield, high resistance varieties. Geography and bog method. Historically, cranberry beds were constructed in wetlands. Today's cranberry beds are constructed in upland areas with a shallow water table. The topsoil is scraped off to form dykes around the bed perimeter. Clean sand is hauled in and spread to a depth of . The surface is laser leveled flat to provide even drainage. Beds are frequently drained with socked tile in addition to the perimeter ditch. In addition to making it possible to hold water, the dykes allow equipment to service the beds without driving on the vines. Irrigation equipment is installed in the bed to provide irrigation for vine growth and for spring and autumn frost protection. A common misconception about cranberry production is that the beds remain flooded throughout the year. During the growing season cranberry beds are not flooded, but are irrigated regularly to maintain soil moisture. Beds are flooded in the autumn to facilitate harvest and again during the winter to protect against low temperatures. In cold climates like Wisconsin, New England, and eastern Canada, the winter flood typically freezes into ice, while in warmer climates the water remains liquid. When ice forms on the beds, trucks can be driven onto the ice to spread a thin layer of sand to control pests and rejuvenate the vines. Sanding is done every three to five years. Propagation. Cranberry vines are propagated by moving vines from an established bed. The vines are spread on the surface of the sand of the new bed and pushed into the sand with a blunt disk. The vines are watered frequently during the first few weeks until roots form and new shoots grow. Beds are given frequent, light application of nitrogen fertilizer during the first year. The cost of renovating cranberry beds is estimated to be between . Ripening and harvest. Cranberries are harvested in the fall when the fruit takes on its distinctive deep red color, and most ideally after the first frost. This is usually in September through the first part of November. Berries that receive sun turn a deep red when fully ripe, while those that do not fully mature are a pale pink or white color. To harvest cranberries, the beds are flooded with of water above the vines. A harvester is driven through the beds to remove the fruit from the vines. For the past 50 years, water reel type harvesters have been used. Harvested cranberries float in the water and can be corralled into a corner of the bed and conveyed or pumped from the bed. From the farm, cranberries are taken to receiving stations where they are cleaned, sorted, and stored prior to packaging or processing. 
While cranberries are harvested when they take on their deep red color, they can also be harvested beforehand when they are still white, which is how white cranberry juice is made. Yields are lower on beds harvested early and the early flooding tends to damage vines, but not severely. Vines can also be trained through dry picking to help avoid damage in subsequent harvests. Although most cranberries are wet-picked as described above, 5–10% of the US crop is still dry-picked. This entails higher labor costs and lower yield, but dry-picked berries are less bruised and can be sold as fresh fruit instead of having to be immediately frozen or processed. Originally performed with two-handed comb scoops, dry picking is today accomplished by motorized, walk-behind harvesters which must be small enough to traverse beds without damaging the vines. Cranberries for fresh market are stored in shallow bins or boxes with perforated or slatted bottoms, which deter decay by allowing air to circulate. Because harvest occurs in late autumn, cranberries for fresh market are frequently stored in thick walled barns without mechanical refrigeration. Temperatures are regulated by opening and closing vents in the barn as needed. Cranberries destined for processing are usually frozen in bulk containers shortly after arriving at a receiving station. Diseases. Diseases of cranberry include: Insect Pests. Probably due to the high phenolics and plant defenses, in addition to the harsh environments that cranberries are grown under (acid, sandy soils that get flooded every year), a majority of insect pests associated with cranberries are native to the cranberry's home range of North America. The top studied insect pests of cranberries include: All four of these top studied insect pests are direct pests, eating the berries. Other well studied cranberry pests include: As more and more pesticides become banned due to environmental concern, there are increased resurgences of secondary pests. Production. In 2022, world production of cranberry was 582,924 tonnes, with the United States and Canada together accounting for 99% of the total. Wisconsin (59% of US production) and Quebec (60% of Canadian production) are two of the largest producers of cranberries in the two countries. Cranberries are also a major commercial crop in Massachusetts, New Jersey, Oregon, and Washington, as well as in the Canadian province of British Columbia (33% of Canadian production). Possible safety concerns. The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include potential for nausea, and increasing stomach inflammation, sugar intake or kidney stone formation. Uses. Nutrition. Raw cranberries are 87% water, 12% carbohydrates, and contain negligible protein and fat (table). In a 100 gram reference amount, raw cranberries supply 46 calories and moderate levels of vitamin C, dietary fiber, and the essential dietary mineral manganese, each with more than 10% of its Daily Value. Other micronutrients have low content (table). Dried cranberries are commonly processed with up to 10 times their natural sugar content. The drying process also eliminates vitamin C content. History. In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. 
Calling the red berries "sasemineash", the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made reference to Native Americans using cranberries, the earliest known reference to American cranberries. In James Rosier's book "The Land of Virginia" there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book "A Key into the Language of America" described cranberries, referring to them as "bearberries" because bears ate them. In 1648, preacher John Eliot was quoted in Thomas Shepard's book "Clear Sunshine of the Gospel" with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appeared with a recipe for cranberry sauce. In 1667, New Englanders sent to King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the pine tree shilling minted by John Hull. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book "New England Rarities Discovered", author John Josselyn described cranberries, writing: Sauce for the Pilgrims, cranberry or bearberry, is a small trayling plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of hot diseases. The Indians and English use them much, boyling them with sugar for sauce to eat with their meat; and it is a delicate sauce, especially with roasted mutton. Some make tarts with them as with gooseberries. "The Compleat Cook's Guide", published in 1683, made reference to cranberry juice. In 1703, cranberries were served at the Harvard University commencement dinner. In 1787, James Madison wrote Thomas Jefferson in France for background information on constitutional government to use at the Constitutional Convention. Jefferson sent back a number of books on the subject and in return asked for a gift of apples, pecans and cranberries. William Aiton, a Scottish botanist, included an entry for the cranberry in volume II of his 1789 work "Hortus Kewensis". He notes that "Vaccinium macrocarpon" (American cranberry) was cultivated by James Gordon in 1760. In 1796, cranberries were served at the first celebration of the landing of the Pilgrims, and Amelia Simmons (an American orphan) wrote a book entitled "American Cookery" which contained a recipe for cranberry tarts. 
At four teaspoons of sugar per 100 grams (one teaspoon per ounce), cranberry juice cocktail is more highly sweetened than even soda drinks that have been linked to obesity. Cranberries are usually cooked into a compote or jelly, known as cranberry sauce. Such preparations are traditionally served with roast turkey, as a staple of Thanksgiving (both in Canada and in the United States) as well as English dinners. The berry is also used in baking (muffins, scones, cakes and breads). In baking it is often combined with orange or orange zest. Less commonly, cranberries are used to add tartness to savory dishes such as soups and stews. Fresh cranberries can be frozen at home, and will keep up to nine months; they can be used directly in recipes without thawing. There are several alcoholic cocktails, including the cosmopolitan, that include cranberry juice. Urinary tract infections. A 2023 Cochrane systematic review of 50 studies concluded there is evidence that consuming cranberry products (such as juice or capsules) is effective for reducing the risk of urinary tract infections (UTIs) in women with recurrent UTIs, in children, and in people susceptible to UTIs following clinical interventions; there was little evidence of effect in elderly people, those with urination disorders or pregnant women. When the quality of meta-analyses on the efficacy of consuming cranberry products for preventing or treating UTIs is examined with the weaker evidence that is available, large variation and uncertainty of effects are seen, resulting from inconsistencies of clinical research design and inadequate numbers of subjects. In 2014, the European Food Safety Authority reviewed the evidence for one brand of cranberry extract and concluded that a cause and effect relationship had not been established between cranberry consumption and reduced risk of UTIs. A 2022 review of international urology guidelines on UTI found that most clinical organizations felt the evidence for use of cranberry products to inhibit UTIs was conflicting, unconvincing or weak. Research. Phytochemicals. Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols – including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. Potential is limited by poor absorption and rapid excretion. Cranberry juice contains a high molecular weight non-dialyzable material that is under research for its potential to affect formation of plaque by "Streptococcus mutans" pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation. Extract quality. Problems may arise from the lack of validated methods for quantifying A-type proanthocyanidins (PAC) extracted from cranberries. For instance, PAC extract quality and content can be assessed using different methods including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as by regional origin, ripeness at time of harvest and post-harvest processing. 
Assessments show that quality varies greatly from one commercial PAC extract product to another. Marketing and economics. United States. Cranberry sales in the United States have traditionally been associated with the holidays of Thanksgiving and Christmas. Unlike in most other countries, cranberry cultivation in the U.S. has developed on a large scale. American cranberry growers have a long history of cooperative marketing. As early as 1904, John Gaynor, a Wisconsin grower, and A.U. Chaney, a fruit broker from Des Moines, Iowa, organized Wisconsin growers into a cooperative called the Wisconsin Cranberry Sales Company to receive a uniform price from buyers. Growers in New Jersey and Massachusetts were also organized into cooperatives, creating the National Fruit Exchange that marketed fruit under the Eatmor brand. The success of cooperative marketing almost led to its failure. With consistently high prices, area and production doubled between 1903 and 1917, and prices then fell. With surplus cranberries and changing American households, some enterprising growers began canning cranberries that were below-grade for fresh market. Competition between canners was fierce because profits were thin. The Ocean Spray cooperative was established in 1930 through a merger of three primary processing companies: Ocean Spray Preserving Company, Makepeace Preserving Co, and Cranberry Products Co. The new company was called Cranberry Canners, Inc. and used the Ocean Spray label on its products. Since the new company represented over 90% of the market, it would have been illegal under American antitrust laws had attorney John Quarles not found an exemption for agricultural cooperatives. About 65% of the North American industry belongs to the Ocean Spray cooperative. In 1958, Morris April Brothers—who produced Eatmor brand cranberry sauce in Tuckahoe, New Jersey—brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, just in time for the Great Cranberry Scare: on 9 November 1959, Secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 cranberry crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that they could not be completely dependent on the holiday market for their products; they had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry-apple juice blends were introduced, followed by other juice blends. Prices and production increased steadily during the 1980s and 1990s. Prices peaked at about $65.00 per barrel in 1996, then fell to $18.00 per barrel in 2001. The cause for the precipitous drop was classic oversupply. Production had outpaced consumption, leading to substantial inventory in freezers or as concentrate. Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products as well as a number of small handlers and processors. Cranberry Marketing Committee. 
The Cranberry Marketing Committee is an organization that was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good quality product. The order has been renewed and modified slightly over the years. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production. The Cranberry Marketing Committee, based in Wareham, Massachusetts, represents more than 1,100 cranberry growers and 60 cranberry handlers across Massachusetts, Rhode Island, Connecticut, New Jersey, Wisconsin, Michigan, Minnesota, Oregon, Washington and New York (Long Island). The authority for the actions taken by the Cranberry Marketing Committee is provided in Chapter IX, Title 7, Code of Federal Regulations, which is called the Federal Cranberry Marketing Order. The Order is part of the Agricultural Marketing Agreement Act of 1937, identifying cranberries as a commodity good that can be regulated by Congress. The Federal Cranberry Marketing Order has been altered over the years to expand the Cranberry Marketing Committee's ability to develop projects in the United States and around the world. The Cranberry Marketing Committee currently runs promotional programs in the United States, China, India, Mexico, Pan-Europe, and South Korea. International trade. The European Union was the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, U.S. cranberry exports to China grew exponentially, making China the second-largest importer, reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers.
7030
1275691747
https://en.wikipedia.org/wiki?curid=7030
Code coverage
In software engineering, code coverage, also called test coverage, is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high code coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite. Code coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in "Communications of the ACM", in 1963. Coverage criteria. To measure what percentage of code has been executed by a test suite, one or more "coverage criteria" are used. These are usually defined as rules or requirements that a test suite must satisfy. Basic coverage criteria. There are a number of coverage criteria, the main ones being function coverage, statement coverage, branch coverage, and condition coverage. For example, consider the following C function: int foo (int x, int y) { int z = 0; if ((x > 0) && (y > 0)) { z = x; } return z; } Assume this function is a part of some bigger program and this program was run with some test suite. In programming languages that do not perform short-circuit evaluation, condition coverage does not necessarily imply branch coverage. For example, consider the following Pascal code fragment: if a and b then. Condition coverage can be satisfied by two tests, for example (a=true, b=false) and (a=false, b=true). However, this set of tests does not satisfy branch coverage, since neither case makes the if condition true. Fault injection may be necessary to ensure that all conditions and branches of exception-handling code have adequate coverage during testing. Modified condition/decision coverage. A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken on all possible outcomes at least once. In this context, the decision is a Boolean expression comprising conditions and zero or more Boolean operators. This definition is not the same as branch coverage; however, the term "decision coverage" is sometimes used as a synonym for it. Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (such as avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently. For example, consider the following code: if (a or b) and c then. The condition/decision criteria can be satisfied by two tests, for example (a=true, b=true, c=true) and (a=false, b=false, c=false). However, that set of tests will not satisfy modified condition/decision coverage, since in the first test the value of 'b' and in the second test the value of 'c' would not influence the output. So a larger test set, such as (a=false, b=false, c=true), (a=true, b=false, c=true), (a=false, b=true, c=true) and (a=false, b=true, c=false), is needed to satisfy MC/DC. Multiple condition coverage. This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section would require eight tests, one for each of the 2^3 possible combinations of a, b and c.
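To make the distinctions above concrete, the following is a minimal, self-contained sketch (the specific test inputs are illustrative choices, not taken from the sources cited in this article) showing how successive calls to the foo function from the text satisfy progressively stronger criteria:

#include <assert.h>

/* Function repeated from the text above. */
int foo(int x, int y) {
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}

int main(void) {
    /* foo(1, 1) executes every statement in foo (statement coverage)
       but only the "true" outcome of the decision, so branch coverage
       is incomplete. */
    assert(foo(1, 1) == 1);

    /* Adding a call for which the decision is false completes branch
       coverage: both outcomes of the if have now been taken. */
    assert(foo(0, 1) == 0);

    /* Condition coverage also requires each individual condition,
       (x > 0) and (y > 0), to evaluate to both true and false.  In
       foo(0, 1) short-circuit evaluation skipped (y > 0), so a third
       call is needed to make (y > 0) false. */
    assert(foo(1, 0) == 0);

    return 0;
}

Calling foo at least once also satisfies function coverage, since the function has a single point of entry and exit.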
The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each possible parameter value may result in a bug. Testing only one of these could result in 100% code coverage as each line is covered, but as only one of seven options are tested, there is only 14.2% PVC. Other coverage criteria. There are further coverage criteria, which are used less often: Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage. For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are up to negotiation between supplier and customer. However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons (cf.) Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing". Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch. Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of formula_1 decisions in it can have up to formula_2 paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve the halting problem). Basis path testing is for instance a method of achieving complete branch coverage without achieving complete path coverage. Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes. In practice. The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests. In implementing test coverage policies within a software development environment, one must consider the following: Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions. Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. 
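As one concrete way of carrying out the instrumented build-and-measure cycle just described (a sketch assuming the GNU toolchain; the program, the file name demo.c and the test inputs are hypothetical), coverage data can be collected with gcc and gcov:

/* demo.c -- one possible measurement workflow:
 *
 *   gcc --coverage -O0 -o demo demo.c   (build with instrumentation)
 *   ./demo                              (run the tests; writes demo.gcda)
 *   gcov demo.c                         (writes demo.c.gcov with per-line counts)
 *
 * In the .gcov report, executed lines show their execution counts and
 * lines that were never executed are marked with "#####", pointing at
 * code that the current tests do not exercise.
 */
#include <stdio.h>

static const char *grade(int score) {
    if (score >= 90) return "A";
    if (score >= 75) return "B";
    return "C";   /* never reached by the calls below, so it will be
                     flagged as uncovered in the report */
}

int main(void) {
    printf("%s\n", grade(95));
    printf("%s\n", grade(80));
    /* A call such as grade(10) is deliberately missing; adding it would
       raise the statement and branch coverage of grade() to 100%. */
    return 0;
}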
Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage. Generally, test coverage tools incur computation and logging in addition to the actual program, thereby slowing down the application, so typically this analysis is not done in production. As one might expect, there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing. There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similar real-time-sensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code. Most professional software developers use C1 and C2 coverage. C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. Statement coverage would also cover function coverage with entry and exit, loop, path, state flow, control flow and data flow coverage. With these methods, it is possible to achieve nearly 100% code coverage in most software projects. Usage in industry. Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by the Federal Aviation Administration (FAA) are documented in DO-178B and DO-178C. Test coverage is also a requirement in part 6 of the automotive safety standard ISO 26262 "Road Vehicles - Functional Safety".
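The difference between the two metrics (C1 and C2) can be seen on a toy example; the helper function below is invented for illustration, and its statement and branch counts are simply those of this particular function:

#include <stdio.h>

/* Invented example: four executable statements, one decision with two
   branch outcomes. */
static int clamp_positive(int v) {
    int out = v;          /* statement 1 */
    if (v < 0) {          /* statement 2, decision with 2 outcomes */
        out = 0;          /* statement 3 */
    }
    return out;           /* statement 4 */
}

int main(void) {
    /* A suite consisting only of clamp_positive(-5) executes all four
       statements (100% statement coverage, C1) but takes only the
       "true" outcome of the decision, i.e. 1 of 2 branches (50%
       branch coverage, C2).  Adding clamp_positive(7) brings branch
       coverage to 100%, which is why equal percentages of different
       criteria are not directly comparable. */
    printf("%d %d\n", clamp_positive(-5), clamp_positive(7));
    return 0;
}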
7033
8936939
https://en.wikipedia.org/wiki?curid=7033
Caitlin Clarke
Caitlin Clarke (born Katherine Anne Clarke; May 3, 1952 – September 9, 2004) was an American actress best known for her roles as Valerian in the 1981 fantasy film "Dragonslayer" and Charlotte Cardoza in the 1998–1999 Broadway musical "Titanic". Early life and education. Clarke was born in Pittsburgh, the oldest of five sisters, the youngest of whom is Victoria Clarke. Her family moved to Sewickley when she was ten. Clarke received her B.A. in theater arts from Mount Holyoke College in 1974 and her M.F.A. from the Yale School of Drama in 1978. During her final year at Yale, Clarke performed with the Yale Repertory Theater in such plays as "Tales from the Vienna Woods". Career. Clarke starred in the 1981 fantasy film "Dragonslayer". After appearing in three Broadway plays in 1985, Clarke moved to Los Angeles for several years as a film and television actress. In 1986, she appeared in the film "Crocodile Dundee" as Simone, a friendly prostitute. That same year, Clarke appeared in "The Equalizer" episode "Torn" as Jessie Moore, who asks McCall for protection from an abusive ex-husband; her daughter Laura was played by nine-year-old Melissa Joan Hart. She returned to theater in the early 1990s, and to Broadway as Charlotte Cardoza in "Titanic". From 1997 to 2000, Clarke had a recurring role on "Law & Order" as Defense Attorney Linda Walsh. Personal life and death. Clarke was diagnosed with ovarian cancer in 2000. She returned to Pittsburgh to teach theater at the University of Pittsburgh and at the Pittsburgh Musical Theater's Rauh Conservatory as well as to perform in Pittsburgh theatre until her death on September 9, 2004.
7034
47283813
https://en.wikipedia.org/wiki?curid=7034
Cruiser
A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several operational roles from search-and-destroy to ocean escort to sea denial. The term "cruiser", which has been in use for several hundred years, has changed its meaning over time. During the Age of Sail, the term "cruising" referred to certain kinds of missions—independent scouting, commerce protection, or raiding—usually fulfilled by frigates or sloops-of-war, which functioned as the "cruising warships" of a fleet. In the middle of the 19th century, "cruiser" came to be a classification of the ships intended for cruising distant waters, for commerce raiding, and for scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big (although not as powerful or as well-armored) as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships. By the early 20th century, after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre; the 1930 London Naval Treaty then divided cruisers into two types: heavy cruisers, with guns larger than 6.1 inches (up to 8 inches), and light cruisers, with guns of 6.1 inches or less. Each type was limited in total and individual tonnage, which shaped cruiser design until the collapse of the treaty system just prior to the start of World War II. Some variations on the Treaty cruiser design included the German "pocket battleships", which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American , which was a scaled-up heavy cruiser design designated as a "cruiser-killer". In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful type of surface combatant (as opposed to the aerial warfare role of aircraft carriers). The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task-forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification) primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early "Charles F. Adams" guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War the line between cruisers and destroyers had blurred, with cruisers using destroyer hulls but receiving the cruiser designation due to their enhanced missions and combat systems. Only two countries operated active-duty vessels formally classed as cruisers: the United States and Russia. 
These cruisers are primarily armed with guided missiles, with the exceptions of the aircraft cruiser . was the last gun cruiser in service, serving with the Peruvian Navy until 2017. Nevertheless, other classes in addition to the above may be considered cruisers due to differing classification systems. The US/NATO system includes the Type 055 from China and the "Kirov" and "Slava" from Russia. International Institute for Strategic Studies' "The Military Balance" defines a cruiser as a surface combatant displacing at least 9750 tonnes; with respect to vessels in service as of the early 2020s it includes the Type 055, the "Sejong the Great" from South Korea, the "Atago" and "Maya" from Japan and the Flight III "Arleigh Burke", "Ticonderoga" and "Zumwalt" from the US. Early history. The term "cruiser" or "cruizer" was first commonly used in the 17th century to refer to an independent warship. "Cruiser" meant the purpose or mission of a ship, rather than a category of vessel. However, the term was nonetheless used to mean a smaller, faster warship suitable for such a role. In the 17th century, the ship of the line was generally too large, inflexible, and expensive to be dispatched on long-range missions (for instance, to the Americas), and too strategically important to be put at risk of fouling and foundering by continual patrol duties. The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy—and later French and Spanish navies—subsequently caught up in terms of their numbers and deployment. The British Cruiser and Convoy Acts were an attempt by mercantile interests in Parliament to focus the Navy on commerce defence and raiding with cruisers, rather than the more scarce and expensive ships of the line. During the 18th century the frigate became the preeminent type of cruiser. A frigate was a small, fast, long range, lightly armed (single gun-deck) ship used for scouting, carrying dispatches, and disrupting enemy trade. The other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well. Steam cruisers. During the 19th century, navies began to use steam power for their fleets. The 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. Navies were both building steam frigates with very long hulls and a heavy gun armament, for instance or . The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally to serve in the line of battle. In spite of their great speed, they would have been wasted in a cruising role. The French constructed a number of smaller ironclads for overseas cruising duties, starting with the , commissioned 1865. These "station ironclads" were the beginning of the development of the armored cruisers, a type of ironclad specifically for the traditional cruiser missions of fast, independent raiding and patrol. The first true armored cruiser was the Russian , completed in 1874, and followed by the British a few years later. Until the 1890s armored cruisers were still built with masts for a full sailing rig, to enable them to operate far from friendly coaling stations. Unarmored cruising warships, built out of wood, iron, steel or a combination of those materials, remained popular until towards the end of the 19th century. 
The ironclad's armor often meant that they were limited to short range under steam, and many ironclads were unsuited to long-range missions or for work in distant colonies. The unarmored cruiser—often a screw sloop or screw frigate—could continue in this role. Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between , a modern British cruiser, and the Peruvian monitor "Huáscar". Even though the Peruvian vessel was obsolete by the time of the encounter, it stood up well to roughly 50 hits from British shells. Steel cruisers. In the 1880s, naval engineers began to use steel as a material for construction and armament. A steel cruiser could be lighter and faster than one built of iron or wood. The "Jeune Ecole" school of naval doctrine suggested that a fleet of fast unprotected steel cruisers were ideal for commerce raiding, while the torpedo boat would be able to destroy an enemy battleship fleet. Steel also offered the cruiser a way of acquiring the protection needed to survive in combat. Steel armor was considerably stronger, for the same weight, than iron. By putting a relatively thin layer of steel armor above the vital parts of the ship, and by placing the coal bunkers where they might stop shellfire, a useful degree of protection could be achieved without slowing the ship too much. Protected cruisers generally had an armored deck with sloped sides, providing similar protection to a light armored belt at less weight and expense. The first protected cruiser was the Chilean ship "Esmeralda", launched in 1883. Produced by a shipyard at Elswick, in Britain, owned by Armstrong, she inspired a group of protected cruisers produced in the same yard and known as the "Elswick cruisers". Her forecastle, poop deck and the wooden board deck had been removed, replaced with an armored deck. "Esmeralda"s armament consisted of fore and aft 10-inch (25.4 cm) guns and 6-inch (15.2 cm) guns in the midships positions. It could reach a speed of , and was propelled by steam alone. It also had a displacement of less than 3,000 tons. During the two following decades, this cruiser type came to be the inspiration for combining heavy artillery, high speed and low displacement. Torpedo cruisers. The torpedo cruiser (known in the Royal Navy as the torpedo gunboat) was a smaller unarmored cruiser, which emerged in the 1880s–1890s. These ships could reach speeds up to and were armed with medium to small calibre guns as well as torpedoes. These ships were tasked with guard and reconnaissance duties, to repeat signals and all other fleet duties for which smaller vessels were suited. These ships could also function as flagships of torpedo boat flotillas. After the 1900s, these ships were usually traded for faster ships with better sea going qualities. Pre-dreadnought armored cruisers. Steel also affected the construction and role of armored cruisers. Steel meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s and early 1900s greatly resembled the battleships of the day; they tended to carry slightly smaller main armament ( rather than 12-inch) and have somewhat thinner armor in exchange for a faster speed (perhaps rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred. 
Early 20th century. Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them "a miser's hoard of useless junk" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type. Battle cruisers. The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship; the brainchild of British admiral Jackie Fisher. He believed that to ensure British naval dominance in its overseas colonial possessions, a fleet of large, fast, powerfully armed vessels which would be able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority was needed. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the "battlecruiser", and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to "choose their range" (to the enemy) with superior speed and only engage the enemy at long range. When engaged at moderate ranges, the lack of protection combined with unsafe ammunition handling practices became tragic with the loss of three of them at the Battle of Jutland. Germany and eventually Japan followed suit to build these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much larger propulsion plants. Light cruisers. At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. By the British , the first of which was launched in 1909, it was possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet. Flotilla leaders. Some light cruisers were built specifically to act as the leaders of flotillas of destroyers. Coastguard cruisers. These vessels were essentially large coastal patrol boats armed with multiple light guns. One such warship was "Grivița" of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns. Auxiliary cruisers. The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in their long-range lines or provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships. 
Some large liners were armed in the same way. In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and the early part of the Second, they were used as convoy escorts by the British. World War I. Cruisers were one of the workhorse types of warship during World War I. By the time of World War I, cruisers had developed rapidly and improved significantly in quality, with displacement reaching 3,000–4,000 tons, speeds of 25–30 knots, and main gun calibres of 127–152 mm. Mid-20th century. Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as "treaty cruisers". The London Naval Treaty in 1930 then formalised the distinction between these "heavy" cruisers and light cruisers: a "heavy" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period of the major powers building 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the , launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing "super-heavy" armor-piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun s in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with in 1937. 
Thus, beginning with launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the launched in 1925, every Japanese heavy cruiser was armed with torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed "Long Lance" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of at , compared with the US Mark 15 torpedo with at . The Mark 15 had a maximum range of at , still well below the "Long Lance". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943, thus the Allies faced a great threat they were not aware of in 1942. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers. Heavy cruisers continued in use until after World War II, with some converted to guided-missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War. German pocket battleships. The German was a series of three "Panzerschiffe" ("armored ships"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. (The similar Swedish "Panzerschiffe" were tactically used as centers of battlefleets and not as cruisers.) They were deployed by Nazi Germany in support of the German interests in the Spanish Civil War. Panzerschiff "Admiral Graf Spee" represented Germany in the 1937 Coronation Fleet Review. The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty compliant 10,000 ton cruisers. And although their displacement and scale of armor protection were that of a heavy cruiser, their main armament was heavier than the guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships. The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS "Hood", HMS "Repulse" and HMS "Renown" were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, "Deutschland"-class ships continued to be called "pocket battleships" in the popular press. Large cruiser. The American represented the supersized cruiser design. 
Due to the German pocket battleships, the , and rumored Japanese "super cruisers", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the "Alaska"s were intended to be "cruiser-killers". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled a scaled-up heavy cruiser design. Their hull classification symbol of CB (cruiser, big) reflected this. Anti-aircraft cruisers. A precursor to the anti-aircraft cruiser was the Romanian British-built protected cruiser "Elisabeta". After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire. The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed and . Torpedo tubes and low-angle guns were removed from these World War I light cruisers and replaced with ten high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers. A tactical shortcoming was recognised after completing six additional conversions of s. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers. The first purpose built anti-aircraft cruiser was the British , completed in 1940–42. The US Navy's cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both "Dido" and "Atlanta" cruisers initially carried torpedo tubes; the "Atlanta" cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949. The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: , completed in 1948; , completed in 1949; two s, completed in 1947; two s, completed in 1953; , completed in 1955; , completed in 1959; and , and , all completed between 1959 and 1961. Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided-missile cruiser (CAG/CLG/CG/CGN). World War II. Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf. In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the "Mogami" and es as heavy cruisers by replacing their triple turrets with twin turrets. 
Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers and were converted to torpedo cruisers with four guns and forty torpedo tubes. In 1944 "Kitakami" was further converted to carry up to eight "Kaiten" human torpedoes in place of ordinary torpedoes. Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. Heavy cruisers displaced 20,000–30,000 tons, made 32–34 knots, had an endurance of more than 10,000 nautical miles, and carried armor 127–203 mm thick. Heavy cruisers were equipped with eight or nine guns with a range of more than 20 nautical miles. They were mainly used to attack enemy surface ships and shore-based targets. In addition, there were 10–16 secondary guns with a caliber of less than . Also, dozens of automatic anti-aircraft guns were installed to fight aircraft and small vessels such as torpedo boats. For example, in World War II, American Alaska-class cruisers were more than 30,000 tons, equipped with nine guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces, which dominated the world's oceans until the rise of naval air power. After the signing of the Washington Naval Treaty in 1922, the tonnage and quantity of battleships, aircraft carriers and cruisers were severely restricted. In order not to violate the treaty, countries began to develop light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches). In addition, they were equipped with 8–12 secondary guns under 127 mm (5 in) and dozens of small-caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen guns. The Japanese "Mogami" class was built to this treaty's limit; the Americans and British also built similar ships. However, in 1939 the "Mogami"s were refitted as heavy cruisers with ten guns. 1939 to Pearl Harbor. In December 1939, three British cruisers engaged the German "pocket battleship" "Admiral Graf Spee" (which was on a commerce raiding mission) in the Battle of the River Plate; "Admiral Graf Spee" then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating capital ships were in the area, the British led "Admiral Graf Spee"s captain, who was low on ammunition, to believe he faced a hopeless situation and to order his ship scuttled. On 8 June 1940 the German capital ships and , classed as battleships but with large cruiser armament, sank the aircraft carrier with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as "pocket battleship", see above) conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans. On 27 May 1941, attempted to finish off the German battleship with torpedoes, probably causing the Germans to scuttle the ship. "Bismarck" (accompanied by the heavy cruiser ) previously sank the battlecruiser and damaged the battleship with gunfire in the Battle of the Denmark Strait. 
On 19 November 1941 sank in a mutually fatal engagement with the German raider "Kormoran" in the Indian Ocean near Western Australia. Atlantic, Mediterranean, and Indian Ocean operations 1942–1944. Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak. In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser "Admiral Scheer", failed due to multiple German warships grounding, but air and submarine attacks sank 2/3 of the convoy's ships. In August 1942 "Admiral Scheer" conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success. On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers ( and ) and two destroyers were in the area. Two heavy cruisers (one the "pocket battleship" "Lützow"), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes. On 26 December 1943 the German capital ship "Scharnhorst" was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship , accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved . "Scharnhorst"s sister "Gneisenau", damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six guns instead of nine guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway. Pearl Harbor through Dutch East Indies campaign. The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS "Prince of Wales" and the battlecruiser were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944. 
Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs. Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the "Kongō" class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked. From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers. Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of—the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) "treaty cruisers"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic. Dutch East Indies campaign. Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. 
In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was the damaged . Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced. Guadalcanal campaign. After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces. Battle of Savo Island. On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands. Along with their superior torpedoes, the opening Japanese gunfire was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway. Battle of the Eastern Solomons. On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the "Tokyo Express" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night. Battle of Cape Esperance. The Battle of Cape Esperance occurred on the night of 11–12 October 1942. 
A Tokyo Express mission was underway for Guadalcanal at the same time as a separate cruiser-destroyer bombardment group loaded with high-explosive shells for bombarding Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action. The bombardment force was sighted at close range and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of alone. The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th. Battle of the Santa Cruz Islands. The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943. Naval Battle of Guadalcanal. The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high-explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion. 
The destroyer closed with the battleship , firing all torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including "Laffey", with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, , and Espiritu Santo found the damaged battleship and two destroyers in the area. The battleship ("Hiei") was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged , and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November, one () torpedoed by a Japanese submarine, and the other sank on the way to repairs. "Juneau"s loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers. The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship ) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as "Kirishima" had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage. The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had fighter cover from the carrier , six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action. On the night of 14–15 November a Japanese force of "Kirishima", two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships ( and ) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting "Enterprise", but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on "Kirishima", the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, "South Dakota" spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, "deaf, dumb, blind, and impotent". 
"Washington" went undetected by the Japanese for most of the battle, but withheld shooting to avoid "friendly fire" until "South Dakota" was illuminated by Japanese fire, then rapidly set "Kirishima" ablaze with a jammed rudder and other damage. "Washington", finally spotted by the Japanese, then headed for the Russell Islands to hopefully draw the Japanese away from Guadalcanal and "South Dakota", and was successful in evading several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. "Kirishima" sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed. Battle of Tassafaronga The Battle of Tassafaronga took place on the night of 30 November – 1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire which later sank, but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off of two of them. It was significant that these two were not lost to Long Lance hits as happened in previous battles; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal. Another attempt on 3 December dropped 1,500 drums of supplies near Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This was on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of 1,200 drums dropped off the island were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943. Post-Guadalcanal. After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and isolating Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. 
Despite their tactical victory in this battle, the Japanese subsequently used a different route, on which they were more vulnerable to destroyer and PT boat attacks. The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle; it is likely that Allied airborne radar had progressed far enough to allow night operations. The Americans had four of the new cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were plagued by collisions, shells that failed to explode, and mutual skill in dodging torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover. After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations. From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the "Marianas Turkey Shoot" by the US Navy. Leyte Gulf. The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to dislodge the American invasion of the Philippines in October 1944. The two actions at this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait. Battle of Surigao Strait. The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of and , one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force guarding the strait, known as the 7th Fleet Support Force, was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The only advantage to the Japanese was that most of the Allied battleships and cruisers were loaded mainly with high-explosive shells, although a significant number of armor-piercing shells were also loaded. The lead Japanese force evaded the PT boats' torpedoes, but was hit hard by the destroyers' torpedoes, losing a battleship. 
Then they encountered the battleship and cruiser guns. Only one destroyer survived. The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force found itself in a "crossing the T" position; this was the last battle in which that occurred, though it was not a planned maneuver. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American veteran of Surigao Strait, , was transferred to Argentina in 1951 as , becoming most famous for being sunk by in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II. Battle off Samar. At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as "Taffy 3" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were /38 caliber guns, while the Japanese had , , and guns. Aircraft from six additional escort carriers also participated for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including "Yamato", six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing "Yamato"s sister . Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which "kamikaze" attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and maneuvering to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to think they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours before at Surigao Strait, and did not hear that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it hadn't already, the Japanese withdrew, eventually losing three heavy cruisers sunk with three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged, thus losing over one-third of their engaged force, with nearly all the remainder damaged. Wartime cruiser production. 
The US built cruisers in quantity through the end of the war, notably 14 heavy cruisers and 27 "Cleveland"-class light cruisers, along with eight "Atlanta"-class anti-aircraft cruisers. The "Cleveland" class was the largest cruiser class ever built in number of ships completed, with nine additional "Cleveland"s completed as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous s being built. Losing four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six guns each. Losing 20 cruisers in 1940–42, the British completed no heavy cruisers, thirteen light cruisers ( and classes), and sixteen anti-aircraft cruisers ("Dido" class) during the war. Late 20th century. The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships. Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack; but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines. US cruiser development. The US Navy was aware of the potential missile threat as soon as World War II ended, and had considerable related experience due to Japanese "kamikaze" attacks in that war. The initial response was to upgrade the light AA armament of new cruisers from 40 mm and 20 mm weapons to twin 3-inch (76 mm)/50 caliber gun mounts. For the longer term, it was thought that gun systems would be inadequate to deal with the missile threat, and by the mid-1950s three naval SAM systems were developed: Talos (long range), Terrier (medium range), and Tartar (short range). Talos and Terrier were nuclear-capable and this allowed their use in anti-ship or shore bombardment roles in the event of nuclear war. Chief of Naval Operations Admiral Arleigh Burke is credited with speeding the development of these systems. Terrier was initially deployed on two converted "Baltimore"-class cruisers (CAG), with conversions completed in 1955–56. Further conversions of six "Cleveland"-class cruisers (CLG) ( and classes), redesign of the as guided-missile "frigates" (DLG), and development of the DDGs resulted in the completion of numerous additional guided-missile ships deploying all three systems in 1959–1962. 
Also completed during this period was the nuclear-powered , with two Terrier launchers and one Talos launcher, plus an ASROC anti-submarine launcher the World War II conversions lacked. The converted World War II cruisers up to this point retained one or two main battery turrets for shore bombardment. However, in 1962–1964 three additional "Baltimore" and cruisers were more extensively converted as the . These had two Talos and two Tartar launchers plus ASROC and two 5-inch (127 mm) guns for self-defense, and were primarily built to get greater numbers of Talos launchers deployed. Of all these types, only the "Farragut" DLGs were selected as the design basis for further production, although their successors were significantly larger (5,670 tons standard versus 4,150 tons standard) due to a second Terrier launcher and greater endurance. An economical crew size compared with World War II conversions was probably a factor, as the "Leahy"s required a crew of only 377 versus 1,200 for the "Cleveland"-class conversions. Through 1980, the ten "Farragut"s were joined by four additional classes and two one-off ships for a total of 36 guided-missile frigates, eight of them nuclear-powered (DLGN). In 1975 the "Farragut"s were reclassified as guided-missile destroyers (DDG) due to their small size, and the remaining DLG/DLGN ships became guided-missile cruisers (CG/CGN). The World War II conversions were gradually retired between 1970 and 1980; the Talos missile was withdrawn in 1980 as a cost-saving measure and the "Albany"s were decommissioned. "Long Beach" had her Talos launcher removed in a refit shortly thereafter; the deck space was used for Harpoon missiles. Around this time the Terrier ships were upgraded with the RIM-67 Standard ER missile. The guided-missile frigates and cruisers served in the Cold War and the Vietnam War; off Vietnam they performed shore bombardment and shot down enemy aircraft or, as Positive Identification Radar Advisory Zone (PIRAZ) ships, guided fighters to intercept enemy aircraft. By 1995 the former guided-missile frigates were replaced by the s and s. The U.S. Navy's guided-missile cruisers were built upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification). As the U.S. Navy's strike role was centered around aircraft carriers, cruisers were primarily designed to provide air defense while often adding anti-submarine capabilities. The U.S. cruisers built in the 1960s and 1970s were larger, often nuclear-powered for extended endurance in escorting nuclear-powered fleet carriers, and carried longer-range surface-to-air missiles (SAMs) than the early "Charles F. Adams" guided-missile destroyers that were tasked with the short-range air defense role. These U.S. cruisers were a major contrast to their contemporaries, the Soviet "rocket cruisers", which were armed with large numbers of anti-ship cruise missiles (ASCMs) as part of the combat doctrine of saturation attack, though in the early 1980s the U.S. Navy retrofitted some of its existing cruisers to carry a small number of Harpoon anti-ship missiles and Tomahawk cruise missiles. The line between U.S. Navy cruisers and destroyers blurred with the . Although originally designed for anti-submarine warfare, a "Spruance" destroyer was comparable in size to existing U.S. cruisers and had the advantage of an enclosed hangar (with space for up to two medium-lift helicopters), a considerable improvement over the basic aviation facilities of earlier cruisers. 
The "Spruance" hull design was used as the basis for two classes; the which had comparable anti-air capabilities to cruisers at the time, and then the DDG-47-class destroyers which were redesignated as the "Ticonderoga"-class guided-missile cruisers to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff. In addition, 24 members of the "Spruance" class were upgraded with the vertical launch system (VLS) for Tomahawk cruise missiles due to its modular hull design, along with the similarly VLS-equipped "Ticonderoga" class, these ships had anti-surface strike capabilities beyond the 1960s–1970s cruisers that received Tomahawk armored-box launchers as part of the New Threat Upgrade. Like the "Ticonderoga" ships with VLS, the "Arleigh Burke" and , despite being classified as destroyers, actually have much heavier anti-surface armament than previous U.S. ships classified as cruisers. Following the American example, three smaller light cruisers of other NATO countries were rearmed with anti-aircraft missiles installed in place of their aft armament: the Dutch "De Zeven Provinciën", the Italian "Giuseppe Garibaldi", and the French "Colbert". Only the French ship, rebuilt last in 1972, also received Exocet anti-ship missile launchers and domestically produced Masurca anti-aircraft missiles. The others received American Terrier missiles, with "Garibaldi" uniquely among surface ships also being armed with Polaris strategic missile launchers, although these were never actually carried. In the Soviet Navy, only one cruiser, "Dzerzhinsky", of Project 68bis, was similarly rearmed with anti-aircraft missiles. The M-2 missiles used on it, adapted from the land-based S-75, proved ineffective as a naval system, and further conversions were abandoned. Another cruiser of this project, "Admiral Nakhimov", was used for testing anti-ship missiles but never entered service in this role. The British considered converting older cruisers to guided-missile cruisers with the Seaslug system but ultimately did not proceed. Several other classical cruisers from various countries were rearmed with short-range anti-aircraft systems requiring fewer modifications, such as Seacat or Osa-M, but since these were intended only for self-defense, they are not considered guided-missile cruisers (e.g., the Soviet "Zhdanov" and "Admiral Senyavin" of Project 68U). The Peruvian light cruiser "Almirante Grau" (formerly the Dutch "De Ruyter") was rearmed with eight Otomat anti-ship missiles at the end of the 20th century, but these did not constitute its primary armament. US Navy "cruiser gap". Prior to the introduction of the "Ticonderoga"s, the US Navy used odd naming conventions that left its fleet seemingly without many cruisers, although a number of their ships were cruisers in all but name. From the 1950s to the 1970s, US Navy cruisers were large vessels equipped with heavy, specialized missiles (mostly surface-to-air, but for several years including the Regulus nuclear cruise missile) for wide-ranging combat against land-based and sea-based targets. Naming conventions changed, and some guided-missile cruisers were classified as frigates or destroyers during certain periods or at the construction stage. All save one—USS "Long Beach"—were converted from World War II cruisers of the "Oregon City", "Baltimore" and "Cleveland" classes. 
"Long Beach" was also the last cruiser built with a World War II-era cruiser style hull (characterized by a long lean hull); later new-build cruisers were actually converted frigates (DLG/CG , , and the "Leahy", , , and classes) or uprated destroyers (the DDG/CG "Ticonderoga" class was built on a "Spruance"-class destroyer hull). Literature sometimes considers ships as cruisers even if they are not officially classified as such, primarily larger representatives of the Soviet large anti-submarine ship class, which had no equivalent in global classification. Ultimately, after the 1975 classification reform in the US, larger ships were called cruisers, slightly smaller and weaker fleet escorts were called destroyers, and smaller ships for ocean escort and anti-submarine warfare were called frigates. However, the size and qualitative differences between them and destroyers were vague and arbitrary. With the development of destroyers, this distinction has blurred even further (for example, the American "Arleigh Burke"-class destroyers, complementing the "Ticonderoga"-class cruisers as the core of US Navy air defense, have displacements up to 9,700 tons and nearly equal combat capabilities, carrying the Aegis system and similar missiles, albeit in smaller numbers; similarly for Japanese destroyers). Frigates under this scheme were almost as large as the cruisers and optimized for anti-aircraft warfare, although they were capable anti-surface warfare combatants as well. In the late 1960s, the US government perceived a "cruiser gap"—at the time, the US Navy possessed six ships designated as cruisers, compared to 19 for the Soviet Union, even though the USN had 21 ships designated as frigates with equal or superior capabilities to the Soviet cruisers at the time. Because of this, in 1975 the Navy performed a massive redesignation of its forces: Also, a series of Patrol Frigates of the , originally designated PFG, were redesignated into the FFG line. The cruiser-destroyer-frigate realignment and the deletion of the Ocean Escort type brought the US Navy's ship designations into line with the rest of the world's, eliminating confusion with foreign navies. In 1980, the Navy's then-building DDG-47-class destroyers were redesignated as cruisers ("Ticonderoga" guided-missile cruisers) to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff. Soviet cruiser development. In the Soviet Navy, cruisers formed the basis of combat groups. In the immediate post-war era it built a fleet of gun-armed light cruisers, but replaced these beginning in the early 1960s with large ships called "rocket cruisers", carrying large numbers of anti-ship cruise missiles (ASCMs) and anti-aircraft missiles. The Soviet combat doctrine of saturation attack meant that their cruisers (as well as destroyers and even missile boats) mounted multiple missiles in large container/launch tube housings and carried far more ASCMs than their NATO counterparts, while NATO combatants instead used individually smaller and lighter missiles (while appearing under-armed when compared to Soviet ships). In 1962–1965 the four s entered service; these had launchers for eight long-range SS-N-3 Shaddock ASCMs with a full set of reloads; these had a range of up to with mid-course guidance. The four more modest s, with launchers for four SS-N-3 ASCMs and no reloads, entered service in 1967–69. 
In 1969–79 Soviet cruiser numbers more than tripled with ten s and seven s entering service. These had launchers for eight large-diameter missiles whose purpose was initially unclear to NATO. The missile was the SS-N-14 Silex, an over/under rocket-delivered heavyweight torpedo primarily for the anti-submarine role, but capable of anti-surface action with a range of up to . Soviet doctrine had shifted; powerful anti-submarine vessels (these were designated "Large Anti-Submarine Ships", but were listed as cruisers in most references) were needed to destroy NATO submarines to allow Soviet ballistic missile submarines to get within range of the United States in the event of nuclear war. By this time Long Range Aviation and the Soviet submarine force could deploy numerous ASCMs. Doctrine later shifted back to overwhelming carrier group defenses with ASCMs, with the "Slava" and "Kirov" classes. After the dissolution of the Soviet Union, the Russian cruiser "Moskva" of Project 1164 became the flagship of the Black Sea Fleet and in 2022 participated in the invasion of Ukraine, shelling and blockading the coast, but was subsequently sunk by anti-ship missiles. Current cruisers. The end of the Cold War and the subsequent reduction of military rivalry led to significant reductions in naval forces. This reduction was more pronounced in the Soviet Navy, which was mostly taken over by Russia. Faced with severe financial difficulties, Russia was forced to decommission most of its ships in the 1990s or send them for extended overhauls. The most recent Soviet/Russian rocket cruisers, the four s, were built in the 1970s and 1980s. One of the "Kirov" class is in refit and two are being scrapped, with the in active service. Russia also operates two s and one "Admiral Kuznetsov"-class carrier which is officially designated as a cruiser, specifically a "heavy aviation cruiser", due to her complement of 12 P-700 Granit supersonic AShMs. In 2022, the cruiser "Moskva" of Project 1164 sank after being hit by a Ukrainian missile. Currently, the "Kirov"-class heavy missile cruisers are used for command purposes, as "Pyotr Velikiy" is the flagship of the Northern Fleet. However, their air defense capabilities are still powerful, as shown by the array of point defense missiles they carry, from 44 OSA-MA missiles to 196 9K311 Tor missiles. For longer-range targets, the S-300 is used. For closer-range targets, AK-630 or Kashtan CIWSs are used. Aside from that, "Kirov"s have 20 P-700 Granit missiles for anti-ship warfare. For target acquisition beyond the radar horizon, three helicopters can be used. Besides a vast array of armament, "Kirov"-class cruisers are also outfitted with many sensors and communications equipment, allowing them to lead the fleet. The United States Navy has centered on the aircraft carrier since World War II. The "Ticonderoga"-class cruisers, built in the 1980s, were originally designed and designated as a class of destroyer, intended to provide very powerful air defense in these carrier-centered fleets. As of 2020, the US Navy still had 22 of its newest "Ticonderoga"-class cruisers in service. These ships were continuously upgraded, enhancing their value and versatility. Some were equipped with ballistic missile defense capabilities (Aegis BMD system). However, no new cruisers of this class were being built. In the 21st century, there were design efforts for futuristic large cruisers provisionally designated as CG(X), but the program was canceled in 2010 due to budget constraints. 
Formally, only the aforementioned ships are classified as cruisers globally. The latest American futuristic large destroyers of the "Zumwalt" class, despite their displacement of approximately 16,000 tons and armament with two large-caliber (155 mm) guns traditionally associated with cruisers, are classified as destroyers. Literature often emphasizes that these ships are essentially large cruisers. Similarly, Japanese large missile destroyers of the "Kongō" class, with a displacement of 9,485 tons and equipped with the Aegis system (derived from the "Arleigh Burke"-class destroyers), are sometimes referred to as cruisers. Their improved versions, the "Atago" and "Maya" classes, exceed 10,000 tons. Japan, for political reasons, does not use the term "cruiser" or even "destroyer", formally classifying these ships as missile escorts with hull numbers prefixed by DDG, corresponding to guided-missile destroyers. These Japanese destroyers also provide ballistic missile defense. Outside the US and Soviet navies, new cruisers were rare following World War II. Most navies use guided-missile destroyers for fleet air defense, and destroyers and frigates for cruise missiles. The need to operate in task forces has led most navies to change to fleets designed around ships dedicated to a single role, anti-submarine or anti-aircraft typically, and the large "generalist" ship has disappeared from most forces. The United States Navy and the Russian Navy are the only remaining navies which operate active duty ships formally classed as cruisers. Italy used until 2003 (decommissioned in 2006) and the aircraft cruiser until 2024; France operated a single helicopter cruiser until May 2010, , for training purposes only. While Type 055 of the Chinese Navy is classified as a cruiser by the U.S. Department of Defense, the Chinese consider it a guided-missile destroyer. In the years since the launch of in 1981, the class has received a number of upgrades that have dramatically improved its members' capabilities for anti-submarine and land attack (using the Tomahawk missile). Like their Soviet counterparts, the modern "Ticonderoga"s can also be used as the basis for an entire battle group. Their cruiser designation was almost certainly deserved when first built, as their sensors and combat management systems enable them to act as flagships for a surface warship flotilla if no carrier is present, but newer ships rated as destroyers and also equipped with Aegis approach them very closely in capability, and once more blur the line between the two classes. Aircraft cruisers. From time to time, some navies have experimented with aircraft-carrying cruisers. One example is the Swedish . Another was the Japanese "Mogami", which was converted to carry a large floatplane group in 1942. Another variant is the "helicopter cruiser". The further development of helicopter cruisers led to the creation of ships formally classified only as cruisers but significantly larger and effectively light aircraft carriers. In the Soviet Union, a series of unusual hybrid ships of Project 1143 ("Kiev" class) were built in the late 1970s and early 1980s. Initially classified as anti-submarine cruisers, they were ultimately designated as "heavy aircraft cruisers". These ships combined the architecture of cruisers and aircraft carriers and were armed with long-range anti-ship and anti-aircraft missiles along with a deck for vertical take-off and landing aircraft. Their full displacement of approximately 43,000 tons is typical for aircraft carriers. 
Since they embarked several helicopters, their primary mission was also anti-submarine warfare. The last example in service was the Soviet Navy's , whose last unit was converted to a pure aircraft carrier and sold to India as . The Russian Navy's is nominally designated as an aviation cruiser but otherwise resembles a standard medium aircraft carrier, albeit with a surface-to-surface missile battery. The Royal Navy's aircraft-carrying and the Italian Navy's aircraft-carrying vessels were originally designated 'through-deck cruisers', but have since been designated as small aircraft carriers (although the 'C' in the pennant for "Giuseppe Garibaldi" indicated it retained some status as an aircraft-carrying cruiser). It was armed with missiles, but these were short-range self-defense missiles (anti-aircraft Aspide and anti-ship Otomat) and did not match the significance of its aviation capabilities. Similarly, the Japan Maritime Self-Defense Force's "helicopter destroyers" are really more along the lines of helicopter cruisers in function and aircraft complement, but due to the Treaty of San Francisco, must be designated as destroyers. One cruiser alternative studied in the late 1980s by the United States was variously entitled a Mission Essential Unit (MEU) or CG V/STOL. In a return to the thinking behind the independent-operations cruiser-carriers of the 1930s and the Soviet "Kiev" class, the ship was to be fitted with a hangar, elevators, and a flight deck. The mission systems were Aegis, SQS-53 sonar, 12 SV-22 ASW aircraft and 200 VLS cells. The resulting ship would have had a waterline length of 700 feet, a waterline beam of 97 feet, and a displacement of about 25,000 tons. Other features included an integrated electric drive and advanced computer systems, both stand-alone and networked. It was part of the U.S. Navy's "Revolution at Sea" effort. The project was curtailed by the sudden end of the Cold War and its aftermath; otherwise the first of class would likely have been ordered in the early 1990s. Strike cruisers. An alternative development path for guided-missile cruisers was represented by ships armed with heavy long-range anti-ship missiles, primarily developed in the Soviet Union with a focus on combating aircraft carriers. Starting in 1962, four ships of Project 58 (NATO designation: "Kynda") entered service. They were armed with eight P-35 missile launchers with a range of 250 km and a twin launcher for M-1 Volna anti-aircraft missiles. With a moderate full displacement of 5,350 tons, they were initially intended to be classified as destroyers but ultimately entered service as guided-missile cruisers. During this period, designs for larger cruisers, such as Project 64 and the nuclear-powered Project 63 (with 24 anti-ship missiles), were also developed. However, their construction was abandoned due to high costs and to their vulnerability to air attack given the shortcomings of available anti-aircraft missiles. The next type built comprised four ships of Project 1134 (NATO designation: "Kresta I") with a displacement of 7,500 tons, equipped with four P-35 anti-ship missile launchers and two Volna anti-aircraft missile launchers. These were transitional types with lesser strike capabilities and were initially classified as large anti-submarine ships but were reclassified as guided-missile cruisers in 1977. 
In the 1980s, before the dissolution of the Soviet Union, only three guided-missile cruisers of the new generation Project 1164 ("Slava" class) with a full displacement of 11,300 tons were completed out of a longer planned series. They carried 16 Bazalt anti-ship missile launchers and eight vertical launchers for long-range Fort anti-aircraft missiles. The pinnacle of development for cruisers designed to engage surface ships, while also protecting fleet formations from aircraft and submarines, was the four large nuclear-powered cruisers of Project 1144 ("Kirov" class) from the 1980s. These were officially classified as "heavy nuclear guided-missile cruisers". With a full displacement of up to 25,000 tons, they were armed with 20 Granit heavy anti-ship missile launchers, 12 vertical launchers for long-range Fort anti-aircraft missiles, and short-range missiles. For anti-submarine warfare, they were equipped with rocket-torpedo launchers and three helicopters, and their crew numbered up to 744 people. In English-language literature, they are sometimes referred to as "battlecruisers", although this designation lacks official justification. The ship "Muntenia", with a displacement of 5,790 tons, was constructed and built in Romania in the 1980s. It was initially somewhat ambitiously designated as a light helicopter cruiser but was reclassified as a destroyer in 1990, along with a name change. The ship and its classification reflected the ambitions of dictator Nicolae Ceaușescu amid limited industrial capabilities. It carried eight Soviet P-20M medium-range anti-ship missiles but lacked anti-aircraft missile armament and was equipped with two light helicopters without means for long-range anti-submarine warfare. Operators. Few cruisers are still operational in the world's navies. Those that remain in service today are: The following is laid up: The following are classified as destroyers by their respective operators, but, due to their size and capabilities, are considered to be cruisers by some, all having full load displacements of at least 10,000 tons: Museum ships. As of 2019, several decommissioned cruisers have been saved from scrapping and exist worldwide as museum ships. They are:
7037
7903804
https://en.wikipedia.org/wiki?curid=7037
Chlamydia
Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium "Chlamydia trachomatis". Most people who are infected have no symptoms. When symptoms do appear, they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world. Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium "Chlamydia trachomatis" only occurs in humans. Diagnosis is often by screening, which is recommended yearly in sexually active women under the age of 25, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas. Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics, with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment, people should be tested again after three months. Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States, about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015, infections resulted in about 200 deaths. The word "chlamydia" is from the Greek , meaning 'cloak'. Signs and symptoms. Genital disease. Women. Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, oral, or manual sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the "silent epidemic", as at least 70% of genital "C. 
trachomatis" infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend all women attending for emergency contraceptive are offered chlamydia testing, with studies showing up to 9% of women aged under 25 years had chlamydia. Men. In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility if not treated. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis. Eye disease. Trachoma is a chronic conjunctivitis caused by "Chlamydia trachomatis". It was once the leading cause of blindness worldwide, but its role diminished from 15% of blindness cases by trachoma in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aimed (unsuccessfully) for the global elimination of trachoma by 2020 (GET 2020 initiative). The updated World Health Assembly neglected tropical diseases road map (2021–2030) sets 2030 as the new timeline for global elimination. Joints. Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though is more common in men. Infants. As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)). Other conditions. A different serovar of "Chlamydia trachomatis" is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body. Transmission. 
Chlamydia can be transmitted during vaginal, anal, oral, or manual sex or direct contact with infected tissue such as conjunctiva. Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to. Recent research using droplet digital PCR and viability assays found evidence of high-viability "C. trachomatis" in the gastrointestinal tract of women who abstained from receptive anal intercourse. Rectal "C. trachomatis" appeared independent of cervical infection—with distinct MLST types detected in rectal versus endocervical samples—suggesting persistent gastrointestinal colonization likely acquired through prior vaginorectal or oral routes, rather than direct anal exposure. Pathophysiology. Chlamydia bacteria have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for chlamydia bacteria since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear within a year, 80% within two years, and 90% within three years. The starved chlamydia bacteria can enter a persistent growth state where they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve. There is debate as to whether persistence has relevance: some believe that persistent chlamydia bacteria are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state. Diagnosis. The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and the DNA strand displacement amplification (SDA) now are the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of sampling from a cervical swab or by urine specimen. In women seeking treatment in a sexually transmitted infection clinic where a urine test is negative, a subsequent cervical swab has been estimated to be positive in approximately 2% of the time. At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens. Because of improved test accuracy, ease of specimen management, convenience in specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving falsely-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. 
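As a rough illustration of how the sensitivity and specificity figures quoted above translate into predictive values, the following sketch applies them to an assumed screening prevalence; the 5% prevalence value and the code itself are illustrative assumptions, not figures or methods from the sources cited here.

```python
# Illustrative sketch: converting NAAT sensitivity/specificity into predictive
# values at an assumed prevalence. The 5% prevalence is a made-up example value.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence                # true positives per person screened
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.90, specificity=0.99, prevalence=0.05)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # roughly 83% and 99.5% for these inputs
```

At low prevalence even a highly specific test produces a noticeable share of false positives, which is part of the rationale for targeting screening at higher-risk groups.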
Other methods also exist, including: ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture. Swab samples collected at home appear to lead to similar numbers of patients being treated as samples collected in a clinic. The implications for cure rates, reinfection, partner management, and safety are unknown. Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and non-pregnant women because of high false-negative rates. Prevention. Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Screening. For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening based on age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women. In the United Kingdom the National Health Service (NHS) aims to: Treatment. "C. trachomatis" infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin, or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose), but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin. An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner. Following treatment, people should be tested again after three months to check for reinfection. A test of cure may be falsely positive because of the limitations of NAAT in a bacterial (rather than viral) context, since targeted genetic material may persist in the absence of viable organisms. Epidemiology. Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths. In the United States about 1.6 million cases were reported in 2016. 
The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK. Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed.
7038
27823944
https://en.wikipedia.org/wiki?curid=7038
Candidiasis
Candidiasis is a fungal infection due to any species of the genus "Candida" (a yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers, among other symptoms. Finally, candidiasis of the esophagus is an important risk factor for contracting esophageal cancer in individuals with achalasia. More than 20 types of "Candida" may cause infection, with "Candida albicans" being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include breastfeeding, recent antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system. Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventively, and concomitantly with medications known to precipitate infections. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors. Signs and symptoms. Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. 
In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the sex organs (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection. In immunocompromised individuals, "Candida" infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting. Mouth. Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and in the throat. Irritation may also occur, causing discomfort when swallowing. Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks. Genitals. Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex. Skin. Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin. Invasive infection. Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmitted infection. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases. Neurological symptoms. Systemic candidiasis can affect the central nervous system causing a variety of neurological symptoms, with a presentation similar to meningitis. Causes. "Candida" yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. "Candida" requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic "Candida" infections. Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic therapy, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. 
Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by "Candida" species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases. "C. albicans" was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of "Candida" cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections. In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon. Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production. Vaginal candidiasis can cause congenital candidiasis in newborns. Diagnosis. In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection. Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection were found to have such an infection, while most had either bacterial vaginosis or a mixed-type infection. Diagnosis of a yeast infection is confirmed either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the "Candida" cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many "Candida" species. For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms. Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. 
The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter. Classification. Candidiasis may be divided into these types: Prevention. A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics help to prevent future infections. This includes either as pills or as yogurt. Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis. Treatment. Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals. The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for "Candida" infections that involve different "Candida" species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks. Localized infection. Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers topical miconazole is the most effective treatment for treating candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary. Vaginal yeast infections are typically treated with topical antifungal agents. Penile yeast infections are also treated with antifungal agents, but while an internal treatment may be used (such as a pessary) for vaginal yeast infections, only external treatments – such as a cream – can be recommended for penile treatment. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole is recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. "C. 
albicans" can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS who are often treated with multiple courses of fluconazole for recurrent oral infections. For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for seven days instead of a shorter duration. For vaginal yeast infections, many complementary treatments are proposed, however a number have side effects. No benefit from probiotics has been found for active infections. Blood-borne infection. Candidemia occurs when any Candida species infects the blood. Its treatment typically consists of oral or intravenous antifungal medications. Examples include intravenous fluconazole or an echinocandin such as caspofungin may be used. Amphotericin B is another option. Prognosis. In hospitalized patients who develop candidemia, age is an important prognostic factor. Mortality following candidemia is 50% in patients aged ≥75 years and 24% in patients aged <75 years. Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops. Epidemiology. Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives. Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis. Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States. The incidence of bloodstream candida in intensive care units varies widely between countries. History. Descriptions of what sounds like oral thrush go back to the time of Hippocrates "circa" 460–370 BCE. The first description of a fungus as the causative agent of an oropharyngeal and oesophageal candidosis was by Bernhard von Langenbeck in 1839. Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same. With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin. The colloquial term "thrush" is of unknown origin but may stem from an unrecorded Old English word "*þrusc" or from a Scandinavian root. The term is not related to the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. 
"Candida" is also pronounced differently; in American English, the stress is on the "i", whereas in British English the stress is on the first syllable. The genus "Candida" and species "C. albicans" were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include "Mycotorula" and "Torulopsis". The species has also been known in the past as "Monilia albicans" and "Oidium albicans". The current classification is "nomen conservandum", which means the name is authorized for use by the International Botanical Congress (IBC). The genus "Candida" includes about 150 different species. However, only a few are known to cause human infections. "C. albicans" is the most significant pathogenic species. Other species pathogenic in humans include "C. auris", "C. tropicalis", "C. parapsilosis", "C. dubliniensis", and "C. lusitaniae". The name "Candida" was proposed by Berkhout. It is from the Latin word "toga candida", referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet "albicans" also comes from Latin, "albicare" meaning "to whiten". These names refer to the generally white appearance of "Candida" species when cultured. Alternative medicine. A 2005 publication noted that "a large pseudoscientific cult" has developed around the topic of "Candida", with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called "Candidiasis hypersensitivity". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis. Research. High level "Candida" colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease. There has been an increase in resistance to antifungals worldwide over the past 30–40 years.
7039
35936988
https://en.wikipedia.org/wiki?curid=7039
Control theory
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any "delay", "overshoot", or "steady-state error" and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the "error" signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research. History. Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled "On Governors". A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. 
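To make the feedback loop described at the start of this article concrete (the SP - PV error driving a corrective control action), here is a minimal discrete-time sketch with a proportional controller acting on an assumed first-order plant; the plant model, gain, and time constant are illustrative choices, not values prescribed by the text.

```python
# Minimal sketch of a feedback loop: a proportional controller drives an
# assumed first-order plant toward a set point. All numbers are illustrative.

dt = 0.1          # time step (s)
tau = 2.0         # assumed plant time constant (s)
kp = 3.0          # proportional gain (chosen for illustration)
sp = 1.0          # set point
pv = 0.0          # process variable, starting at rest

for step in range(100):
    error = sp - pv               # the SP - PV error signal
    u = kp * error                # control action (proportional only)
    # assumed first-order plant: d(pv)/dt = (u - pv) / tau, Euler-integrated
    pv += dt * (u - pv) / tau
    if step % 20 == 0:
        print(f"t={step*dt:4.1f}s  pv={pv:.3f}")

# pv settles near kp/(kp+1) = 0.75 of the set point: proportional action alone
# leaves a steady-state error, which integral action would remove.
```

With proportional action alone the process variable settles short of the set point, illustrating the steady-state error mentioned earlier; integral action (the "I" in PID) is the usual remedy.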
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Linear and nonlinear control theory. The field of control theory can be divided into two branches: Analysis techniques – frequency domain and time domain. Mathematical techniques for analyzing and designing control systems fall into two different categories: In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space. System interfacing. Control systems can be divided into different categories depending on the number of inputs and outputs. Classical SISO system design. The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. 
The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a lead or lag filter. The ultimate goal is to meet requirements typically provided in the time domain as the step response, or at times in the frequency domain as the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation that includes a dynamic model of the system under control coupled with the compensation model. Modern MIMO system design. Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. Topics in control theory. Stability. The "stability" of a general dynamical system with no input can be described with Lyapunov stability criteria. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. Mathematically, this means that for a causal linear system to be stable, all of the poles of its transfer function must have negative real parts, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the complex poles of the transfer function reside in the open left half of the complex plane for continuous-time systems, or strictly inside the unit circle for discrete-time systems. The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions. The continuous Laplace transform is in Cartesian coordinates, where the "x" axis is the real axis, and the discrete Z-transform is in circular coordinates, where the "ρ" axis is the real axis. When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous-time case) or a modulus equal to one (in the discrete-time case). 
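The pole-location conditions above are easy to check numerically. The sketch below finds the poles of a transfer function from its denominator coefficients and tests them against the left-half-plane and unit-circle criteria; the example polynomials are made up for illustration.

```python
# Sketch: numerical check of the stability conditions described above.
# The polynomial coefficients are made-up examples, not systems from the text.
import numpy as np

def continuous_stable(den_coeffs):
    """Stable if every pole (root of the denominator) has negative real part."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))

def discrete_stable(den_coeffs):
    """Stable if every pole lies strictly inside the unit circle."""
    poles = np.roots(den_coeffs)
    return bool(np.all(np.abs(poles) < 1))

# s^2 + 3s + 2 has poles at -1 and -2: stable in continuous time.
print(continuous_stable([1, 3, 2]))    # True
# z - 0.5 has a pole at 0.5 (inside the unit circle): stable in discrete time.
print(discrete_stable([1, -0.5]))      # True
# z - 1.5 has a pole at 1.5 (outside the unit circle): not BIBO stable.
print(discrete_stable([1, -1.5]))      # False
```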
If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous-time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero. If a system in question has an impulse response of x[n] = 0.5^n u[n], then the Z-transform (see this example) is given by X(z) = 1 / (1 - 0.5 z^(-1)), which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is "inside" the unit circle. However, if the impulse response were x[n] = 1.5^n u[n], then the Z-transform would be X(z) = 1 / (1 - 1.5 z^(-1)), which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one. Numerous tools exist for the analysis of the poles of a system. These include graphical methods such as the root locus, Bode plots, and Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll. Controllability and observability. Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed "stabilizable". Observability instead is related to the possibility of "observing", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors (see the numerical sketch below). Control specification. Several different control strategies have been devised over the years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. 
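Returning briefly to controllability and observability, a standard way to test both for a linear state-space model is the Kalman rank criterion; the sketch below applies it to an arbitrary pair of example matrices (the numerical values are illustrative and are not taken from the text).

```python
# Sketch: Kalman rank tests for controllability and observability of a small
# state-space model (A, B, C). The matrices are arbitrary illustrative values.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# controllability matrix [B, AB, ..., A^(n-1)B]; full rank means controllable
controllability = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# observability matrix [C; CA; ...; CA^(n-1)]; full rank means observable
observability = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(controllability) == n)  # True here
print("observable:  ", np.linalg.matrix_rank(observability) == n)    # True here
```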
The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desirable to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < -λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0. Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see below). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI). Model identification and robustness. A control system must always have some robustness property. A robust controller is one whose properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible. The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that m d²x(t)/dt² = -K x(t) - B dx(t)/dt. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system whose true parameter values are away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example if the robot's arm releases a weight, the controller will adjust itself accordingly in order to ensure the correct performance. Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). 
I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties. A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. System classifications. Linear systems control. For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design. Nonlinear systems control. Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states. Decentralized systems control. When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. Deterministic and stochastic systems control. A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. Main control strategies. Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. People in systems and control. Many active and historical figures made significant contribution to control theory including
7042
34669967
https://en.wikipedia.org/wiki?curid=7042
Joint cracking
Joint cracking is the manipulation of joints to produce a sound and related "popping" sensation. It is sometimes performed by physical therapists, chiropractors, and osteopaths pursuing a variety of outcomes. The cracking mechanism and the resulting sound is caused by dissolved gas (nitrogen gas) cavitation bubbles suddenly collapsing inside the joints. This happens when the joint cavity is stretched beyond its normal size. The pressure inside the joint cavity drops and the dissolved gas suddenly comes out of solution and takes gaseous form which makes a distinct popping noise. To be able to crack the same knuckle again requires waiting about 20 minutes before the bubbles dissolve back into the synovial fluid and will be able to form again. It is possible for voluntary joint cracking by an individual to be considered as part of the obsessive–compulsive disorders spectrum. Causes. For many decades, the physical mechanism that causes the cracking sound as a result of bending, twisting, or compressing joints was uncertain. Suggested causes included: There were several hypotheses to explain the cracking of joints. Synovial fluid cavitation has some evidence to support it. When a spinal manipulation is performed, the applied force separates the articular surfaces of a fully encapsulated synovial joint, which in turn creates a reduction in pressure within the joint cavity. In this low-pressure environment, some of the gases that are dissolved in the synovial fluid (which are naturally found in all bodily fluids) leave the solution, making a bubble, or cavity (tribonucleation), which rapidly collapses upon itself, resulting in a "clicking" sound. The contents of the resultant gas bubble are thought to be mainly carbon dioxide, oxygen and nitrogen. The effects of this process will remain for a period of time known as the "refractory period", during which the joint cannot be "re-cracked", which lasts about 20 minutes, while the gases are slowly reabsorbed into the synovial fluid. There is some evidence that ligament laxity may be associated with an increased tendency to cavitate. In 2015, research showed that bubbles remained in the fluid after cracking, suggesting that the cracking sound was produced when the bubble within the joint was formed, not when it collapsed. In 2018, a team in France created a mathematical simulation of what happens in a joint just before it cracks. The team concluded that the sound is caused by bubbles' collapse, and bubbles observed in the fluid are the result of a partial collapse. Due to the theoretical basis and lack of physical experimentation, the scientific community is still not fully convinced of this conclusion. The snapping of tendons or scar tissue over a prominence (as in snapping hip syndrome) can also generate a loud snapping or popping sound. Relation to arthritis. The common old wives' tale that cracking one's knuckles causes arthritis is without scientific evidence. A study published in 2011 examined the hand radiographs of 215 people (aged 50 to 89). It compared the joints of those who regularly cracked their knuckles to those who did not. The study concluded that knuckle-cracking did not cause hand osteoarthritis, no matter how many years or how often a person cracked their knuckles. This early study has been criticized for not taking into consideration the possibility of confounding factors, such as whether the ability to crack one's knuckles is associated with impaired hand functioning rather than being a cause of it. 
The medical doctor Donald Unger cracked the knuckles of his left hand every day for more than sixty years, but he did not crack the knuckles of his right hand. No arthritis or other ailments formed in either hand, and for this he was awarded the satirical Ig Nobel Prize in Medicine in 2009.
7043
764861
https://en.wikipedia.org/wiki?curid=7043
Chemical formula
Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O ("ratio" 1:2:1), while its molecular formula is C6H12O6 ("number of atoms" 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish. Structural formula. In addition to indicating the number of atoms of each element in a molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure. The two diagrams show two molecules which are structural isomers of each other, since they both have the same molecular formula, but they have different structural formulas as shown. Condensed formula. The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule. A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them. A triple bond may be expressed with three lines or three pairs of dots, and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond. Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, "n"-butane: CH3CH2CH2CH3. Chemical names in answer to limitations of chemical formulae. The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. 
The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond ("cis" or "Z") or on the opposite sides from each other ("trans" or "E"). As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems. Polymers in condensed formulae. For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3 is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter "n" may be used to indicate this formula: CH3(CH2)nCH3. Ions in condensed formulae. For ions, the charge on a particular atom may be denoted with a right-hand superscript, for example Na+ or Cl−. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, SO4^2−. Here + and − are used in place of +1 and −1, respectively. For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]^2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in hexamminecobalt(III) chloride, [Co(NH3)6]Cl3. Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3. This is strictly optional; a chemical formula is valid with or without ionization information, and hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl3 or Co(NH3)6Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent. Isotopes. Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is . Also a study involving stable isotope ratios might include the molecule . A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, for dioxygen, and for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly. Trapped atoms. The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. 
For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]^3−, an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms. This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and as visually suggesting the structure of an endohedral fullerene. Non-stoichiometric chemical formulae. Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where "x" is normally much less than 1. General forms for organic compounds. A chemical formula used for a series of compounds that differ from each other by a constant unit is called a "general formula". It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n+1OH ("n" ≥ 1), giving the homologs methanol, ethanol, propanol for 1 ≤ "n" ≤ 3. Hill system. The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically. By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order. The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds. A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br"). The following example formulae are written using the Hill system, and listed in Hill order:
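As a rough illustration (a sketch with assumed example formulae, not the article's own list), the following Python fragment counts the atoms in a few condensed formulae, expanding parenthesised repeating groups as multipliers, then renders each in Hill notation and collates the set into Hill system order:

import re
from collections import Counter

TOKEN = re.compile(r"([A-Z][a-z]?)(\d*)|(\()|(\))(\d*)")

def atom_counts(formula):
    # Count atoms in a condensed formula, expanding parenthesised groups
    # such as the repeating unit in CH3(CH2)50CH3.
    stack = [Counter()]
    for element, count, open_paren, close_paren, multiplier in TOKEN.findall(formula):
        if element:
            stack[-1][element] += int(count or 1)
        elif open_paren:
            stack.append(Counter())                # start a new group
        elif close_paren:
            group = stack.pop()
            for el, n in group.items():            # apply the repeat count after ")"
                stack[-1][el] += n * int(multiplier or 1)
    return stack[0]

def hill_sequence(counts):
    # Order (element, count) pairs per the Hill system: carbon first, then hydrogen,
    # then the remaining elements alphabetically; all alphabetical if no carbon.
    counts = dict(counts)
    ordered = []
    if "C" in counts:
        ordered.append(("C", counts.pop("C")))
        if "H" in counts:
            ordered.append(("H", counts.pop("H")))
    ordered.extend(sorted(counts.items()))
    return ordered

def hill_formula(formula):
    return "".join(f"{el}{n if n > 1 else ''}" for el, n in hill_sequence(atom_counts(formula)))

examples = ["CH3(CH2)2CH3", "CCl4", "H2O", "BrI", "CH3CH2OH"]
for f in sorted(examples, key=lambda s: hill_sequence(atom_counts(s))):
    print(hill_formula(f))
# Prints the formulae collated in Hill order: BrI, CCl4, C2H6O, C4H10, H2O

Sorting on the ordered (element, count) pairs approximates the lexicographic-style collation described above, with a single-letter symbol such as B sorting before two-letter symbols such as Be or Br.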
7044
47922545
https://en.wikipedia.org/wiki?curid=7044
Beetle
Beetles are insects that form the order Coleoptera (), in the superorder Holometabola. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described arthropods and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. However, the number of beetle species is challenged by the number of species in dipterans (flies) and hymenopterans (wasps). Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops. Some others also have unusual characteristics, such as fireflies, which use a light-emitting organ for mating and communication purposes. Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are holometabolans, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage. Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pest species include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, the mountain pine beetle, and many others. Most beetles, however, do not cause economic damage and some, such as numerous species of lady beetles, are beneficial by helping to control insect pests. The scientific study of beetles is known as coleopterology. Etymology. The name of the taxonomic order, Coleoptera, comes from the Greek "koleopteros" (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from "koleos", sheath, and "pteron", wing. The English name beetle comes from the Old English word "bitela", little biter, related to "bītan" (to bite), leading to Middle English "betylle". 
Another Old English name for beetle is "ċeafor", chafer, used in names such as cockchafer, from the Proto-Germanic *"kebrô" ("beetle"; compare German "Käfer", Dutch "kever", Afrikaans "kewer"). Distribution and diversity. Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all arthropod species so far described, and about 25% of all animal species. A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a "surprisingly narrow range" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3 million), and extrapolations based on body size by year of description (1.7 to 2.1 million). This immense diversity led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Christian God from the works of His Creation, "An inordinate fondness for beetles". However, the ranking of beetles as most diverse has been challenged. Multiple studies posit that Diptera (flies) and/or Hymenoptera (sawflies, wasps, ants and bees) may have more species. Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots, and even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae. The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, "Goliathus goliatus", which can attain a mass of at least and a length of . Adult male goliath beetles are the heaviest beetles in the adult stage, weighing and measuring up to . Adult elephant beetles, "Megasoma elephas" and "Megasoma actaeon", often reach and . The longest beetle is the Hercules beetle "Dynastes hercules", with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle and the smallest free-living insect is the featherwing beetle "Scydosella musawasensis", which may measure as little as 325 μm in length. Evolution. Late Paleozoic and Triassic. The oldest known beetle is "Coleopsis", from the earliest Permian (Asselian) of Germany, around 295 million years ago. Early beetles from the Permian, which are collectively grouped into the "Protocoleoptera", are thought to have been xylophagous (wood eating) and wood boring. Fossils from this time have been found in Siberia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America made in the Wellington Formation of Oklahoma were published in 2005 and 2008. The earliest members of modern beetle lineages appeared during the Late Permian. In the Permian–Triassic extinction event at the end of the Permian, most "protocoleopteran" lineages became extinct. Beetle diversity did not recover to pre-extinction levels until the Middle Triassic. Jurassic. 
During the Jurassic (), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase of the number of beetle families during the Cretaceous does not correlate with the increase of the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship. There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin. Cretaceous. The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species where both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. A 2020 review of the palaeoecological interpretations of fossil beetles from Cretaceous ambers has suggested that saproxylicity was the most common feeding strategy, with fungivorous species in particular appearing to dominate. Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as overlying Santana formation; the latter was near the equator at that time. 
In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia. Cenozoic. Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species. It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes. Phylogeny. The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders. Adephaga contains about 10 families of largely predatory beetles, including ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs). Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus "Sphaerius". The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible. The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian. Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). 
The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya). External morphology. Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually located recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other holometabolan groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures. Head. The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few Longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (as a single central ocellus in Dermestidae), some rove beetles (Omaliinae), and the Derodontidae. Beetle antennae are primarily organs of sensory perception and can detect motion, odor and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defense. In the cerambycid "Onychocerus albitarsis", the antennae have venom injecting structures used in defense, which is unique among arthropods. 
Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather-like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arise between the eye and the mandibles, and in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts; the first part is called the scape and the second part is the pedicel. The other segments are jointly called the flagellum. Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species. Thorax. The thorax is segmented into two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulate from the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen. Legs. The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. In aquatic beetles, including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap. Wings. The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. 
The elytra are usually hard shell-like structures which must be raised to allow the hindwings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft wing beetles include the net-winged beetle "Calopteron discrepans", which has brittle wings that rupture easily in order to release chemicals for defense. Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold ("jugum") of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight. Abdomen. The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for breathing and respiration, called a spiracle, composing three different segmented sclerites: the tergum, pleura, and the sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes. Anatomy and physiology. Digestive system. The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, that varies in dimensions between species, with a large amount of cecum, and the hindgut, with varying lengths. There are typically four to six Malpighian tubules. Nervous system. The nervous system in beetles contains all the types found in insects, varying between different species, from three thoracic and seven or eight abdominal ganglia which can be distinguished to that in which all the thoracic and abdominal ganglia are fused to form a composite structure. Respiratory system. Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse. Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. 
The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out faster than it diffuses in. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble. Circulatory system. Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the hemocoel. It has paired inlets or "ostia" at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel and out through the anterior cavity in the head. Specialized organs. Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light. Tympanal organs or hearing organs, which consist of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus "Cicindela" (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation. Reproduction and development. Beetles are members of the superorder Holometabola, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa sometimes is called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin. Mating. Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. 
Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis and others use pheromones from organic compounds, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae), which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialog before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced. Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In "Eupompha", the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behavior may be different amongst dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle ("Chrysolina graminis") is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom. In another example, the intromittent organ of male thistle tortoise beetles is a long, tube-like structure called the flagellum, which is thin and curved. When not in use, the flagellum is stored inside the abdomen of the male and can extend out to be longer than the male when needed. During mating, this organ bends to the complex shape of the female reproductive organ, which includes a coiled duct that the male must penetrate with the organ. Furthermore, these physical properties of the thistle tortoise beetle have been studied because the ability of a thin, flexible structure to harden without buckling or rupturing is mechanically challenging and may have important implications for the development of microscopic catheters in modern medicine. Competition can play a part in the mating rituals of species such as burying beetles ("Nicrophorus"), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg. Life cycle. Egg. Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs. A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection. Larva. The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. 
Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads, distinguishable thoracic and abdominal segments (usually the tenth, though sometimes the eighth or ninth). Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs. All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle "Epicauta vittata" (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva—the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring. The larval period can vary widely. A fungus feeding staphylinid "Phanerota fasciata" undergoes three moults in 3.2 days at room temperature while "Anisotoma" sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days and possibly represents the fastest growing beetles. Dermestid beetles, "Trogoderma inclusum" can remain in an extended larval state under unfavourable conditions, even reducing their size between moults. A larva is reported to have survived for 3.5 years in an enclosed container. Pupa and adult. As with all holometabolans, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). 
In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed as obtect pupae). Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult "Eburia quadrigeminata" (Cerambycidae), while "Buprestis aurulenta" and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items. Behaviour. Locomotion. The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. Some beetle species (many Cetoniinae; some Scarabaeinae, Curculionidae and Buprestidae) fly with the elytra closed, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species "Coccinella septempunctata" and "Harmonia axyridis" using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m. Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks. Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive. Communication. Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles are able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked. Parental care. Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle "Bledius spectabilis" lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle "Dicheirotrichus gustavii" and from the parasitoidal wasp "Barycnemis blediator", which kills some 15% of the larvae. Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury small animal carcass to serve as a food resource for their young and build a brood chamber around it. 
The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them. Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. Most species of beetles do not display parental care behaviors after the eggs have been laid. Subsociality, where females guard their offspring, is well-documented in two families of Chrysomelidae, Cassidinae and Chrysomelinae. Eusociality. Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil "Austroplatypus incompertus". This Australian species lives in horizontal networks of tunnels, in the heartwood of "Eucalyptus" trees. It is one of more than 300 species of wood-boring ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults. Feeding. Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi: some beetles have yeasts living in their guts, including some yeasts previously undiscovered anywhere else. Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark, some feed on wood while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores. Ecology. Anti-predator adaptations. Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour. Camouflage. 
Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various colored scales or hairs cause beetles such as the avocado weevil "Heilipus apiatus" to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate. Mimicry and aposematism. Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, where bright or contrasting coloration warn off predators; many beetles and other insects mimic these chemically protected species. Chemical defense is important in some species, usually being advertised by bright aposematic colors. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defenses may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses. Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, "Anthia") employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland is made of two containing chambers, one for hydroquinones and hydrogen peroxide, the other holding hydrogen peroxide and catalase enzymes. These chemicals mix and result in an explosive ejection, reaching a temperature of around , with the breakdown of hydroquinone to hydrogen, oxygen, and quinone. The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators. Other defenses. Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation. Parasitism. A few species of beetles are ectoparasitic on mammals. One such species, "Platypsyllus castoris", parasitises beavers ("Castor" spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. 
Others are kleptoparasites of other invertebrates, such as the small hive beetle ("Aethina tumida") that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding off, and eventually killing, their hosts. Pollination. Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important in some parts of the world such as semiarid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa. Mutualism. Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the weevils and the fungus both benefit. The beetles cannot eat the wood due to toxins, and use their relationship with fungi to help overcome the defenses of the host tree in order to provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved. Tolerance of extreme environments. About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism to tide over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months) adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle "Pterostichus brevicornis" showed that the body fat levels of adults were highest in autumn with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration. All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle "Pityogenes chalcographus" can survive whilst overwintering beneath tree bark; the Alaskan beetle "Cucujus clavipes puniceus" is able to withstand ; its larvae may survive . 
At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to the survival of beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by "Cucujus clavipes" can be survived through their deliberate dehydration in conjunction with the antifreeze proteins. This concentrates the antifreezes several fold. The hemolymph of the mealworm beetle "Tenebrio molitor" contains several antifreeze proteins. The Alaskan beetle "Upis ceramboides" can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol, threitol. Conversely, desert-dwelling beetles are adapted to tolerate high temperatures. For example, the tenebrionid beetle "Onymacris rugatipennis" can withstand . Tiger beetles in hot, sandy areas are often whitish (for example, "Habroscelimorpha dorsalis"), to reflect more heat than a darker color would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed. The fogstand beetle of the Namib Desert, "Stenocara gracilipes", is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards its mouthparts. Similar adaptations are found in several other Namib desert beetles such as "Onymacris unguicularis". Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from flooding, but larvae and pupae often cannot. Adults of "Cicindela togata" are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may have been sustained by switching to anaerobic metabolic pathways or by reducing metabolic rate. Anoxia tolerance in the adult carabid beetle "Pelophila borealis" was tested in laboratory conditions and it was found that they could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C. Migration. Many beetle species undertake annual mass movements which are termed migrations. These include the pollen beetle "Meligethes aeneus" and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of mountain pine beetle ("Dendroctonus ponderosae") in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare. Relationship to humans. In ancient cultures. Several species of dung beetle, especially the sacred scarab, "Scarabaeus sacer", were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt.
The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century "Moralia". The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell. Pliny the Elder discusses beetles in his "Natural History", describing the stag beetle: "Some insects, for the preservation of their wings, are covered with (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant. As pests. About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US, traveling per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year. The bark beetle, elm leaf beetle and the Asian longhorned beetle ("Anoplophora glabripennis") are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America. Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, "Leptinotarsa decemlineata", is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles. The death watch beetle, "Xestobium rufovillosum" (Ptinidae), is a serious pest of older wooden buildings in Europe. 
It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction. Other pests include the coconut hispine beetle, "Brontispa longissima", which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada. As beneficial resources. Beetles can be beneficial to human economies by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus "Zygogramma" is native to North America but has been used to control "Parthenium hysterophorus" in India and "Ambrosia artemisiifolia" in Russia. Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as "Musca vetustissima" and "Haematobia exigua", which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of "Musca vetustissima", following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles, such as "Euoniticellus intermedius", save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces. The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue. As food and medicine. Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments. As biodiversity indicators. Due to their habitat specificity, many species of beetles have been suggested as suitable indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity.
They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. Depending on the habitat, many other groups, such as rove beetles in human-modified habitats, dung beetles in savannas, and saproxylic beetles in forests, have been suggested as potential indicator species. In art and adornment. Many beetles have durable elytra that have been used as a material in art, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus "Zopherus" are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the extremely hard elytra and sedentary habits of the genus. In entertainment. Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male "Xylotrupes" rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve the gambling of both money and property. In South Korea the Dytiscidae species "Cybister tripunctatus" is used in a roulette-like game. Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the "hugu" weevil "Rhynchophorus ferrugineus" as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle. As pets. Some species of beetle are kept as pets, for example, diving beetles (Dytiscidae) may be kept in a domestic fresh water tank. In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles. As things to collect. Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book "The Malay Archipelago", including 2,000 species new to science. As inspiration for technologies. Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle ("Stenocara gracilipes") has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall. Living beetles have been used as cyborgs.
A Defense Advanced Research Projects Agency-funded project implanted electrodes into "Mecynorhina torquata" beetles, allowing them to be remotely controlled via radio receivers carried on their backs, as a proof of concept for surveillance work. Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of "Mecynorhina torquata" as well as graded turning, backward walking and feedback control of "Zophobas morio". Research published in 2020 sought to create a robotic camera backpack for beetles. Miniature cameras weighing 248 mg were attached to live beetles of the tenebrionid genera "Asbolus" and "Eleodes". The cameras filmed over a 60° range for up to 6 hours. In conservation. Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to affect beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened while others are already feared extinct. Island species tend to be more susceptible, as in the case of "Helictopleurus undatus" of Madagascar, which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, "Lucanus cervus", and tiger beetles (Cicindelidae). In Japan the Genji firefly, "Luciola cruciata", is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, "Cicindis horni".
Concorde
Concorde () is a retired Anglo-French supersonic airliner jointly developed and manufactured by Sud Aviation and the British Aircraft Corporation (BAC). Studies began in 1954 and a UK–France treaty followed in 1962, as the programme cost was estimated at £70 million (£ in ). Construction of six prototypes began in February 1965, with the first flight from Toulouse on 2 March 1969. The market forecast was 350 aircraft, with manufacturers receiving up to 100 options from major airlines. On 9 October 1975, it received its French certificate of airworthiness, and from the UK CAA on 5 December. Concorde is a tailless aircraft design with a narrow fuselage permitting four-abreast seating for 92 to 128 passengers, an ogival delta wing, and a droop nose for landing visibility. It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed. Constructed from aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner had transatlantic range while supercruising at twice the speed of sound for 75% of the distance. Delays and cost overruns pushed costs to £1.5–2.1 billion in 1976, (£– in ). Concorde entered service on 21 January 1976 with Air France from Paris-Roissy and British Airways from London Heathrow. Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977. Air France and British Airways remained the sole customers with seven airframes each, for a total production of 20. Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only. Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built. On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The surviving aircraft were retired in 2003, 27 years after commercial operations had begun. Eighteen of the 20 aircraft built are preserved and are on display in Europe and North America. Development. Early studies. In the early 1950s, Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study supersonic transport (SST). The group met in February 1954 and delivered their first report in April 1955. Robert T. Jones' work at NACA had demonstrated that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin, trapezoidal wings such as those seen on the control surfaces of many missiles, or aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730. This short wingspan produced little lift at low speed, resulting in long take-off runs and high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways, and to provide the fuel needed, "some horribly large aeroplanes" resulted. Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics. Slender deltas. 
Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta". The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift. This had been noticed by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that the effect could be used to improve low-speed performance. Küchemann and Weber's papers changed the entire nature of supersonic design. The delta had already been used on aircraft, but these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance, but also have reasonable take-off and landing speeds using vortex generation. The aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low-speed handling qualities of such a design. Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the birth of the Concorde project. Supersonic Transport Aircraft Committee. On 1 October 1956, the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), to develop a practical SST design and find industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test-bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft demonstrated safe control at speeds as low as , about one-third that of the F-104 Starfighter. STAC stated that an SST would have economic performance similar to existing subsonic types. Lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. The aircraft would need more thrust than a subsonic design of the same size. Although they would use more fuel in cruise, they would be able to fly more revenue-earning flights in a given time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs. STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. 
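The committee's argument that higher utilisation would offset higher fuel burn can be illustrated with rough numbers. The sketch below is illustrative only: the sector length, block speeds, turnaround time and operating day are assumptions chosen for simplicity, not figures from the STAC report.

```python
# Rough illustration of the STAC utilisation argument: a faster aircraft flies
# more revenue sectors per day, so fewer airframes are needed on a route.
# All values are assumptions for illustration, not STAC figures.

ROUTE_NM = 3000        # one-way sector length in nautical miles (assumed)
TURNAROUND_H = 2.0     # ground time per sector in hours (assumed)
OPERATING_DAY_H = 18   # usable operating hours per day (assumed)

def sectors_per_day(block_speed_kt: float) -> float:
    """Number of sectors one aircraft can fly per day at a given block speed."""
    sector_time_h = ROUTE_NM / block_speed_kt + TURNAROUND_H
    return OPERATING_DAY_H / sector_time_h

subsonic = sectors_per_day(480)     # assumed block speed of a subsonic jet, knots
supersonic = sectors_per_day(1150)  # assumed block speed of a Mach 2 SST, knots

print(f"Subsonic jet: {subsonic:.1f} sectors per day")
print(f"Mach 2 SST:   {supersonic:.1f} sectors per day")
# For equal seat counts, fleet size scales inversely with sectors per day.
print(f"SST fleet needed, as a fraction of the subsonic fleet: {subsonic / supersonic:.2f}")
```

On these assumed numbers the supersonic aircraft flies nearly twice as many sectors per day, which is the sense in which fewer aircraft would be needed to serve a route; as noted above, the argument only held while fuel remained a small share of operating costs.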
Morgan suggested that the US was already involved in a similar project, and that if the UK failed to respond, it would be locked out of an airliner market that he believed would be dominated by SST aircraft. In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed, shorter-range category. Both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation. Ogee planform selected. Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes - the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had advantages and disadvantages. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs. Generally, the wing's centre of pressure (CP, or "lift point") should be close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, the CG commonly moves fore or aft. With a normal wing design, this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore. To test the new wing, NASA assisted the team by modifying a Douglas F5D Skylancer to mimic the wing selection. In 1965, the NASA test aircraft successfully tested the wing, and found that it reduced landing speeds noticeably over the standard delta wing. NASA also ran simulations at Ames that showed the aircraft would exhibit a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training. Partnership with Sud Aviation. France had its own SST plans. In the late 1950s, the government requested designs from the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet-powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board. As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. 
Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work. France had no modern large jet engines and had already decided to buy a British design (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of heat-resistant metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air heats the metal so much that it begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Everyone involved agreed that Küchemann's ogee-shaped wing was the right one. The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. Common components could be used in both designs, with the shorter-range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively redesigned. The teams continued to meet in 1961, and by this time it was clear that the two aircraft would be very similar in spite of different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More-powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines. Cabinet response, treaty. While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft told the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner. When the STAC plans were presented to the UK cabinet, the economic considerations were considered highly questionable, especially as these were based on development costs, now estimated to be , which were repeatedly overrun in the industry. The Treasury Ministry presented a negative view, suggesting that the project in no way would have any positive financial returns for the government, especially because "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider" the cost "to turn out much too low." This led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. Their report in October stated that any direct positive economic outcome would be unlikely, but that the project should still be considered because everyone else was going supersonic, and they were concerned they would be locked out of future markets. The project apparently would not be likely to significantly affect other, more important, research efforts. 
At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies, and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle vetoed the UK's entry into the European Community in a speech on 25 January 1963. Naming. At Charles de Gaulle's January 1963 press conference, the aircraft was first called "Concorde". The name was suggested by the 18-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name "Concorde" is from the French word "concorde" (), which has an English equivalent, "concord". Both words mean "agreement", "harmony", or "union". The name was changed to "Concord" by Harold Macmillan in response to a perceived slight by de Gaulle. At the French roll-out in Toulouse in late 1967, the British Minister of Technology, Tony Benn, announced that he would change the spelling back to "Concorde". This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a letter from a Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!" In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde". Sales efforts. Advertisements for Concorde during the late 1960s, placed in publications such as "Aviation Week & Space Technology", predicted a market for 350 aircraft by 1980. The new consortium intended to produce one long-range and one short-range version, but prospective customers showed no interest in the short-range version, and it was later dropped. Concorde's costs spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £ million in ). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events also dampened Concorde's sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made airlines cautious about aircraft with high fuel consumption, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. Carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. A trend in favour of cheaper airline tickets also caused airlines such as Qantas to question Concorde's market suitability.
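The efficiency gap can be restated as fuel burned per passenger over a single transatlantic sector. The sketch below uses the passenger-miles-per-gallon figures quoted above; the sector length of roughly 3,500 statute miles for London to New York is an assumption used only to scale the comparison.

```python
# Fuel burned per passenger over an assumed ~3,500-statute-mile sector,
# using the passenger miles per US gallon (pm/g) figures quoted above.
SECTOR_MILES = 3500  # assumed London-New York distance in statute miles

pm_per_gallon = {
    "Concorde": 15.8,
    "Boeing 707": 33.3,
    "Boeing 747": 46.4,
    "McDonnell Douglas DC-10": 53.6,
}

for aircraft, pmpg in pm_per_gallon.items():
    gallons_per_passenger = SECTOR_MILES / pmpg
    print(f"{aircraft:<24} {gallons_per_passenger:6.0f} US gallons per passenger")
```

On these figures Concorde burned roughly three times as much fuel per passenger as a Boeing 747 over the same distance, which is why the 1973 oil crisis bore so heavily on its sales prospects.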
During the early 2000s, "Flight International" described Concorde as being "one of aerospace's most ambitious but commercially flawed projects". The consortium received orders (non-binding options) for more than 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six aircraft each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines. Testing. The design work was supported by a research programme studying the flight characteristics of slender (low-aspect-ratio) delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance. Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at Dallas/Fort Worth Regional Airport to mark the airport's opening. Concorde had initially held a great deal of customer interest, but the project was hit by order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues of supersonic aircraft (the sonic boom, take-off noise and pollution) had produced a change in the public opinion of SSTs. By 1976, the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits. The US government cut federal funding for the Boeing 2707, its supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers. Programme cost. The original programme cost estimate was £70 million in 1962 (£ in ). After cost overruns and delays the programme eventually cost between £1.5 and £2.1 billion in 1976 (£ – in ). This cost was the main reason the production run was much smaller than expected. Design. General features. Concorde is an ogival delta-winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber.
It has an unusual tailless configuration for a commercial aircraft, as does the Tupolev Tu-144. Concorde was the first airliner to have a fly-by-wire flight-control system (in this case, analogue); the avionics system Concorde used was unique because Concorde was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy. Concorde pioneered a number of technologies, both for high speed and optimisation of flight and for weight-saving and enhanced performance. Powerplant. A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag (but would be studied for future SSTs). Olympus turbojet technology was already available for development to meet the design requirements. Rolls-Royce proposed developing the RB.169 to power Concorde during its initial design phase, but developing a wholly new engine for a single aircraft would have been extremely costly, so the existing BSEL Olympus Mk 320 turbojet engine, which was already flying in the BAC TSR-2 supersonic strike bomber prototype, was chosen instead. Boundary layer management in the podded installation was put forward as simpler with only an inlet cone; however, Dr. Seddon of the RAE favoured a more integrated buried installation. One concern of placing two or more engines behind a single intake was that an intake failure could lead to a double or triple engine failure. While a ducted fan would reduce noise compared to a turbojet, its larger cross-section also incurred more drag. Acoustics specialists were confident that a turbojet's noise could be reduced, and SNECMA made advances in silencer design during the programme. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but was not pursued. By 1974, the spade silencers which projected into the exhaust were reported to be ineffective but "entry-into-service aircraft are likely to meet their noise guarantees". The powerplant configuration selected for Concorde highlighted airfield noise, boundary layer management and interactions between adjacent engines, and the requirement that the powerplant, at Mach 2, tolerate pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6, which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Situated behind the wing leading edge, the engine intake had a wing boundary layer ahead of it. Two-thirds of the boundary layer was diverted, and the remaining third, which entered the intake, did not adversely affect intake efficiency except during pushovers, when the boundary layer thickened and caused surging. Wind tunnel testing helped define leading-edge modifications ahead of the intakes, which solved the problem. Each engine had its own intake and the nacelles were paired with a splitter plate between them to minimise the chance of one powerplant influencing the other.
Only above was an engine surge likely to affect the adjacent engine. The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high-pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake of air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor for intake control. It was the first use of a digital processor with full authority control of an essential system in a passenger aircraft. It was developed by BAC's Electronics and Space Systems division after the analogue AICUs (developed by Ultra Electronics) fitted to the prototype aircraft were found to lack sufficient accuracy. Ultra Electronics also developed Concorde's thrust-by-wire engine control system. Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure. Concorde used reheat (afterburners) only at take-off and to pass through the transonic speed range, between Mach 0.95 and 1.7. Heating problems. Kinetic heating from the high speed boundary layer caused the skin to heat up during supersonic flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Apart from the engine bay, the hottest part of any supersonic aircraft's structure is the nose, due to aerodynamic heating. Hiduminium R.R. 58, an aluminium alloy, was used throughout the aircraft because it was relatively cheap and easy to work with. The highest temperature it could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of cooling and heating during a flight, first cooling down as it gained altitude at subsonic speed, then heating up accelerating to cruise speed, finally cooling again when descending and slowing down before heating again in low altitude air before landing. 
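The scale of this kinetic heating can be estimated from the standard stagnation-temperature relation for compressible flow. The figures below assume a standard-atmosphere stratospheric temperature of 216.65 K and the ratio of specific heats of air (γ ≈ 1.4); they are an approximation only, since real skin temperatures sit somewhat below the stagnation value and vary over the airframe:

\[
T_0 = T_\infty \left(1 + \frac{\gamma - 1}{2} M^2\right)
\approx 216.65\ \mathrm{K} \times \left(1 + 0.2 \times 2.02^2\right)
\approx 393\ \mathrm{K} \approx 120\ ^{\circ}\mathrm{C}.
\]

Boundary-layer recovery effects bring the actual skin temperature a little below this stagnation value, which is consistent with the temperature limit of the aluminium alloy described above setting the Mach 2.02 ceiling.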
These repeated thermal cycles had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated and then cooled a full-size section of the wing, with samples of metal taken periodically for testing. The airframe was designed for a life of 45,000 flying hours. As the fuselage heated up, it expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft making their final supersonic flight before retirement, the flight engineers placed their caps in this expanded gap, so that the caps were wedged in place when the airframe cooled and shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight a visor was used to keep high temperature air from flowing over the cockpit skin. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure. The white finish reduced the skin temperature by . Structural issues. At high speeds, large aerodynamic forces were applied to the aircraft during turns, causing distortion of the aircraft's structure. There were also concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by varying the ratio between the inboard and outboard elevon deflections with speed, including at supersonic speeds. Only the innermost elevons, attached to the stiffest area of the wings, were used at higher speeds. The narrow fuselage flexed, which was apparent to rear passengers looking along the length of the cabin. When any aircraft passes the critical Mach number of its airframe, the centre of pressure shifts rearwards. This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The wings were designed to reduce this, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds this would have increased drag, which would have been unacceptable. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range. To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of powerplants which were efficient at twice the speed of sound, a slender fuselage with high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. Only a modest payload could be carried, and the aircraft was trimmed without deflecting the control surfaces, to avoid the drag this would have incurred. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. It would have had higher-thrust engines with noise-reducing features and no environmentally objectionable afterburner. Preliminary design studies showed that an engine with a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593 could be produced. This would have given additional range and a greater payload, making new commercial routes possible.
This was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns. Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travels would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below . Cabin pressurisation. Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below . A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft and noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. Flight characteristics. While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of and an average cruise speed of , more than twice the speed of conventional aircraft. With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient "cruise-climb" flight profile following take-off. 
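The logic of the cruise-climb profile follows from the standard Breguet range equation for jet aircraft; this is a textbook relation rather than Concorde-specific performance data:

\[
R = \frac{a M}{c_T} \cdot \frac{L}{D} \cdot \ln\frac{W_\mathrm{initial}}{W_\mathrm{final}},
\]

where \(a\) is the local speed of sound, \(M\) the Mach number, \(c_T\) the thrust-specific fuel consumption, \(L/D\) the lift-to-drag ratio, and \(W\) the aircraft weight. Range is maximised by holding \(M\) and \(L/D\) (and therefore the lift coefficient) constant; as fuel burns off and weight falls, a constant lift coefficient at constant Mach can only be maintained in progressively thinner air, so the aircraft drifts slowly upward through its cleared altitude block rather than holding a single flight level.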
The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but it allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the backside of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage. Because of the way Concorde's delta-wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during the development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed but due to their great height also needed to contract in length telescopically before swinging to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water from being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The high take-off speed of required Concorde to have upgraded brakes. Like most airliners, Concorde has anti-skid braking to prevent the tyres from losing traction when the brakes are applied. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. The use of carbon over equivalent steel brakes provided a weight-saving of . Each wheel has multiple discs which are cooled by electric fans. Wheel sensors include brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length; the shortest runway Concorde ever landed on carrying commercial passengers was Cardiff Airport. Concorde G-AXDN (101) made its final landing at Duxford Aerodrome on 20 August 1977, which had a runway length of just at the time. This was the last aircraft to land at Duxford before the runway was shortened later that year. Droop nose. Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch from being streamlined to reduce drag and achieve optimal aerodynamic efficiency during flight, to not obstructing the pilot's view during taxi, take-off, and landing operations. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to being lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. 
Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage due to collision with ground vehicles, and then raised fully before engine shutdown to prevent pooling of internal condensation within the radome seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available, and thus requiring alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used in the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, needed to endure temperatures in excess of at supersonic flight, were developed by Triplex. Operational history. First flights and routes flown. Concorde began scheduled flights with British Airways and Air France on 21 January 1976. Concorde operated on various routes, including London–Bahrain, London–New York, London–Miami, and London–Barbados (with British Airways), and Paris–Dakar–Rio de Janeiro, Paris–Azores–Caracas, Paris–New York, and Paris–Washington (with Air France), but faced challenges such as bans and low profitability. Later, British Airways repositioned Concorde as a super-premium service and it then became profitable. Retirement. In 2003, Air France and British Airways announced the retirement of Concorde, due to rising maintenance costs, low passenger numbers following the 25 July 2000 crash, and the slump in air travel following the September 11 attacks. Air France flew its last commercial flight on 30 May 2003 with British Airways retiring its Concorde fleet on 24 October 2003. Accidents and incidents. Air France Flight 4590. On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets. According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse. Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths, but there had been two prior non-fatal accidents and a rate of tyre damage 30 times higher than subsonic airliners from 1995 to 2000. 
Safety improvements made after the crash included more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. In a flight of 3 hours 20 minutes over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and then returned to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations. The first flight with passengers after the 2000 grounding landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers. Other accidents and incidents. On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, Australia, suffered a structural failure at supersonic speed. As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period before the accident due to moisture seepage past the rivets in the rudder. Production staff had not followed proper procedures during an earlier modification of the rudder; the procedures were difficult to adhere to. The aircraft was repaired and returned to service. On 21 March 1992, G-BOAB while flying British Airways Flight 001 from London to New York, also suffered a structural failure at supersonic speed. While cruising at Mach 2, at approximately , the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine. The crew shut down the No 2 engine and made a successful landing in New York, noting that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had separated from the structure of the rudder, which led to most of the upper rudder detaching in flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure of the rudder, leading to it breaking up in flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident, the severity of the effect of these repair materials on the structure and skin of the rudder was not appreciated. 
The 2010 trial of Continental Airlines over the crash of Flight 4590 established that between 1976 and the accident there had been 57 tyre failures involving Concordes during take-off. These included a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54, in which a tyre blowout pierced the aircraft's fuel tank, damaged one of the left engines and electrical cables, and cost the aircraft two of its hydraulic systems. Aircraft on display. Twenty Concorde aircraft were built: two prototypes, two pre-production aircraft, two development aircraft and 14 production aircraft for commercial service. With the exception of two of the production aircraft, all are preserved, mostly in museums. One aircraft was scrapped in 1994, and another was destroyed in the Air France Flight 4590 crash in 2000. Comparable aircraft. Tu-144. Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. Soviet espionage efforts allegedly stole Concorde blueprints to assist in the design of the Tu-144. As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech of Sud Aviation attributed this to two things: a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too high a bypass ratio, which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler wing design, and required braking parachutes to land. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 programme was cancelled by the Soviet government on 1 July 1983. SST and others. The main competing designs for the US government-funded supersonic transport (SST) were the swing-wing Boeing 2707 and the compound-delta-wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and the Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971, having spent more than $1 billion without any aircraft being built. Impact. Environmental. Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. 
Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director stated, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them." Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation to the ozone layer at the stratospheric altitudes it cruised. It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport. Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990. Public perception. Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts. As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage. The aircraft was usually referred to by the British as simply "Concorde". In France it was known as "le Concorde" due to "le", the definite article, used in French grammar to introduce the name of a ship or aircraft, and the capital being used to distinguish a proper name from a common noun of the same spelling. In French, the common noun "concorde" means "agreement, harmony, or peace". 
Concorde's pilots, and British Airways in official publications, often referred to Concorde, in both the singular and the plural, as "she" or "her". In 2006, 37 years after its first test flight, Concorde was announced as the winner of the Great British Design Quest organised by the BBC (through "The Culture Show") and the Design Museum. A total of 212,000 votes were cast, with Concorde beating other British design icons such as the Mini, the mini skirt, the Jaguar E-Type, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire. Special missions. French and British heads of state and government flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as the French flagship aircraft on foreign visits. Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair took Concorde on charter flights such as the Queen's trips to Barbados on her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989. Concorde sometimes made special flights for demonstrations and air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire, Mobutu Sese Seko, on multiple occasions), for advertising companies (including the firm OKI), for Olympic torch relays (the 1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and the total solar eclipse of 11 August 1999. Records. The fastest transatlantic airliner flight was made by British Airways Concorde G-BOAD from New York JFK to London Heathrow on 7 February 1996: aided by a 175 mph (282 km/h) tailwind, it completed the crossing in 2 hours, 52 minutes and 59 seconds from take-off to touchdown. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops. Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD, which circumnavigated the world from Lisbon, Portugal, in 32 hours 49 minutes and 3 seconds, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain. The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Because of the restrictions on supersonic overflights within the US, the Canadian authorities granted permission for the majority of the journey to be flown supersonically over sparsely populated Canadian territory.
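For a rough sense of what the 1996 record time implies, the short Python sketch below converts it into an average ground speed. The great-circle distance of about 5,540 km between JFK and Heathrow is an assumption used here for illustration, not a figure from the text, and the actual track flown would have been somewhat longer.

def average_speed_kmh(distance_km, hours, minutes, seconds):
    # Elapsed time in hours, then simple distance over time.
    elapsed_h = hours + minutes / 60 + seconds / 3600
    return distance_km / elapsed_h

# Assumed great-circle distance JFK-Heathrow: ~5,540 km (illustrative only).
print(round(average_speed_kmh(5540, 2, 52, 59)))  # ~1,922 km/h averaged over the whole flight

The averaged figure comes out well below the roughly 2,150 km/h of a Mach 2 cruise because it includes the subsonic climb-out and approach segments.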
7053
2387872
https://en.wikipedia.org/wiki?curid=7053
Cannon
A cannon (plural either "cannons" or "cannon") is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder ("black powder") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. A cannon is a type of heavy artillery weapon. The word "cannon" is derived from several languages, in which the original definition can usually be translated as "tube", "cane", or "reed". The earliest known depiction of cannons may have appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons does not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannons in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the end of the 14th century, cannons were widespread throughout Eurasia. Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons. In 1464 a cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, when cannon broke down the walls of Constantinople, the capital of the Eastern Roman Empire, and with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s. In the modern era, the term "cannon" has fallen into decline, replaced by "guns" or "artillery", if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons. Etymology and terminology. The word "cannon" is derived from the Old Italian word "cannone", meaning "large tube", which came from the Latin "canna", in turn originating from the Greek "kanna" ("reed"), and then generalised to mean any hollow tube-like object. The word has been used to refer to a gun since 1326 in Italy and 1418 in England. The plural forms "cannons" and "cannon" are both correct. Early history. East Asia. The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short-ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm. Projectiles such as iron scraps or porcelain shards, mixed together with the gunpowder ("co-viative"), were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal. The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. 
However, only the Xanadu gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is in length and weighs . The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also considered by some to be the oldest firearm, since it was unearthed near the area where the "History of Yuan" reports a battle took place involving hand cannons. According to the "History of Yuan", in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan. Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and Renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century. References to cannons proliferated throughout China in the following centuries. Cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called "The Iron Cannon Affair" describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once." By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city because of its garrison's use of cannon; however, the Ming themselves would later use cannon in the thousands, during the siege of Suzhou in 1366. The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: "Pao"). During the Ming dynasty cannons were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries cannon-armed Chinese ships also travelled throughout Southeast Asia. Cannon appeared in Đại Việt by 1390 at the latest. The first western cannon to be introduced were breech-loaders in the early 16th century; the Chinese began producing these themselves by 1523 and improved on them by incorporating composite metal construction. Japan did not acquire cannon until 1510, when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 siege of Pyongyang, 40,000 Ming troops deployed a variety of cannons against Japanese troops. Despite their defensive advantage and the use of the arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–1598), the Ming–Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin. According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619 the city was armed with large cannon with cannonballs weighing more than . Western Europe. Outside of China, the earliest texts to mention gunpowder are Roger Bacon's "Opus Majus" (1267) and "Opus Tertium", in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, dated to 1247, contained an encrypted formula for gunpowder hidden in the text. 
These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke. There is a record of a gun in Europe dating to 1322 that was discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as ("Concerning the Majesty, Wisdom, and Prudence of Kings"), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, in another work of de Milemete's, . On 11 February of that same year, the Signoria of Florence appointed two officers to obtain and ammunition for the town's defense. In the following year a document from the Turin area recorded a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead". A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using man-portable gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark, however more evidence in this area may be forthcoming in the future. The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania in southern Sweden. It dates from the early-mid 14th century, and is currently in the Swedish History Museum in Stockholm. Early cannons in Europe often shot arrows and were known by an assortment of names such as , , "ribaldis", and . The "ribaldis", which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls". Similar cannon were also used at the siege of Calais (1346–47), although it was not until the 1380s that the "ribaudekin" clearly became mounted on wheels. The Battle of Crecy which pitted the English against the French in 1346 featured the early use of cannon which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by their cannon would panic the advancing horses along with killing the knights atop them. Early cannons could also be used for more than simply killing men and scaring horses. English cannon were used defensively in 1346 during the siege of Breteuil to launch fire onto an advancing siege tower. In this way, cannons could be used to burn down siege equipment before it reached the fortifications. The use of cannons to shoot fire could also be used offensively as another battle involved the setting of a castle ablaze with similar methods. The particular incendiary used in these projectiles was most likely a gunpowder mixture. 
This is one area where early Chinese and European cannons share a similarity as both were possibly used to shoot fire. Another aspect of early European cannons is that they were rather small, dwarfed by the bombards, which would come later. In fact, it is possible that the cannons used at Crécy were capable of being moved rather quickly as there is an anonymous chronicle that notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannons would eventually give way to larger, wall-breaching guns by the end of the 1300s. Islamic world. There is no clear consensus on when the cannon first appeared in the Islamic world, with dates ranging from 1260 to the mid-14th century. The cannon may have appeared in the Islamic world in the late 13th century, with Ibn Khaldun in the 14th century stating that cannons were used in the Maghreb region of North Africa in 1274, and other Arabic military treatises in the 14th century referring to the use of cannon by Mamluk forces in 1260 and 1303, and by Muslim forces at the 1324 siege of Huesca in Spain. However, some scholars do not accept these early dates. While the date of its first appearance is not entirely clear, the general consensus among most historians is that there is no doubt the Mamluk forces were using cannon by 1342. Other accounts may have also mentioned the use of cannon in the early 14th century. An Arabic text dating to 1320–1350 describes a type of gunpowder weapon called a which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some scholars consider this a hand cannon while others dispute this claim. The Nasrid army besieging Elche in 1331 made use of "iron pellets shot with fire". According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was "the first cannon in history" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Al-Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannons being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term , dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in middle eastern sources to the 1360s. Gabor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic World are vague with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive. Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid siege of Sijilmassa in 1274 occurs as follows: "[The Sultan] installed siege engines ... and gunpowder engines ..., which project small balls of iron. 
These balls are ejected from a chamber ... placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary and was written a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians, including Ágoston and Peter Purton, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period because late medieval Arabic texts used the same word for gunpowder, naft, as they did for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon. The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannons could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of . Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book "De obsidione Scodrensi" (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448, despite the presence of European cannon in the former case. The similar Dardanelles Guns (named for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–1809). These were cast in bronze in two parts: the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving the gun. Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century. While there is evidence of cannons in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired from allies in Europe. By 1443, Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece, most likely a cannon, being made by an Iranian. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common than in Europe. Eastern Europe. Documentary evidence of cannons in Russia does not appear until 1382, and they were used only in sieges, often by the defenders. It was not until 1475, when Ivan III established the first Russian cannon foundry in Moscow, that Russians began to produce cannons natively. The earliest surviving cannon from Russia dates to 1485. Later, large cannons known as bombards, ranging from three to five feet in length, were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling stones weighing as much as . 
Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon long and of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby". The largest of their cannons was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one". Southeast Asia. Cannons were introduced to the Javanese Majapahit Empire when Kublai Khan's Mongol-Chinese army under the leadership of Ike Mese sought to invade Java in 1293. "History of Yuan" mentioned that the Mongol used a weapon called "p'ao" against Daha forces. This weapon is interpreted differently by researchers, it may be a trebuchet that throws thunderclap bombs, firearms, cannons, or rockets. It is possible that the gunpowder weapons carried by the Mongol–Chinese troops amounted to more than one type. Thomas Stamford Raffles wrote in "The History of Java" that in 1247 saka (1325 AD), cannons were widely used in Java especially by the Majapahit. It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under "Mahapatih" (prime minister) Gajah Mada (in office 1331–1364) utilized gunpowder technology obtained from Yuan dynasty for use in naval fleet. Mongol-Chinese gunpowder technology of Yuan dynasty resulted in eastern-style cetbang which is similar to Chinese cannon. Swivel guns however, only developed in the archipelago because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. This weapon seems to be cannon and gun of Ottoman tradition, for example the prangi, which is a breech-loading swivel gun. A new type of cetbang, called the western-style cetbang, was derived from the Turkish prangi. Just like prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershots (a large number of small bullets). Cannons derived from western-style cetbang can be found in Nusantara, among others were lantaka and lela. Most lantakas were made of bronze and the earliest ones were breech-loaded. There is a trend toward muzzle-loading weapons during colonial times. When the Portuguese came to the archipelago, they referred to the breech-loading swivel gun as "berço", while the Spaniards call it . A pole gun () was recorded as being used by Java in 1413. Duarte Barbosa c. 1514 said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannon ( or ), long muskets, (arquebus), (hand cannon), Greek fire, guns (cannon), and other fireworks. Every place was considered excellent in casting artillery, and in the knowledge of using it. 
In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, with lengths of between . Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the 14th century firearms were also used by the Trần dynasty. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages, collected from the decomposition of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' "The History of Java" (1817), the purest sulfur was supplied from a crater of a mountain near the straits of Bali. Africa. In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. With cannons imported from Arabia and the wider Islamic world, the Adalites, led by Ahmed ibn Ibrahim al-Ghazi, were the first African power to introduce cannon warfare to the African continent. Later, as the Portuguese Empire entered the war, it supplied the Abyssinians with cannons and trained them in their use, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons. Offensive and defensive use. While previous smaller guns could burn down structures with fire, larger and more powerful cannons forced engineers to develop stronger castle walls to withstand enemy attacks. Cannons were not only offensive weapons; fortifications soon mounted them as defensive instruments. In India, the fort of Raichur had gun ports built into its walls to accommodate the use of defensive cannons. In "The Art of War", Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, and that this conflicted with the more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannons were also difficult to move around in mountainous regions; offensives conducted with such weapons would often be unsuccessful in areas such as Iran. Modern history. Early modern. By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding in length, and could weigh up to . Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. 
Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. Instead of the finely ground powder used by the first bombards, powder was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly. The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape which attempted to force any advance towards it directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well. By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In "The Art of War", Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe. Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. 
Mortars were useful for sieges, as they could hit targets behind walls or other defences. The mortar found more use with the Dutch, who learnt to shoot bombs filled with powder from it. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse. Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled. In England, cannons were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book "The Art of Gunnery". Believing that war was as much a science as an art, he focused his explanations on triangulation, arithmetic, theoretical mathematics, and cartography, as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of "A Treatise on Artificial Fire-Works"). Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant". 
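As a simplified illustration of why the elevation angle set with a gunner's quadrant governs range, the Python sketch below uses the idealised drag-free (vacuum) projectile formula R = v^2 * sin(2*theta) / g. This is only a teaching model: real smoothbore ballistics were dominated by air resistance and irregular shot, which is why period gunners relied on range tables and trial shots rather than a formula. The 400 m/s muzzle velocity is an arbitrary assumption.

import math

def vacuum_range_m(muzzle_velocity_ms, elevation_deg, g=9.81):
    # Idealised range over level ground, ignoring air resistance entirely.
    theta = math.radians(elevation_deg)
    return muzzle_velocity_ms ** 2 * math.sin(2 * theta) / g

for elevation in (5, 15, 30, 45):
    print(elevation, "deg ->", round(vacuum_range_m(400, elevation)), "m (no drag)")
# Range increases with elevation up to 45 degrees in this idealised model,
# which is the basic relationship the quadrant let a gunner exploit.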
Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork. In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft". Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs. 18th and 19th centuries. The lower tier of 17th-century English ships of the line were usually equipped with demi-cannons, guns that fired a solid shot, and could weigh up to . Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of , and could dismast even the largest ships at close range. Full cannon fired a shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks. The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third to a quarter of the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over . The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed. Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. 
When Danican's poorly trained men attacked, on 13 Vendémiaire 1795 (5 October in the calendar used in France at the time), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when 66 guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties to the Russian forces, whose losses numbered over 20,000 killed and wounded, in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers, when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire. In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote "Marshall's Practical Marine Gunnery". The book was used by cannon operators in the United States Navy throughout the 19th century. It listed all the types of cannons and instructions. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War. Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over . Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively lightweight, and range of . The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power than earlier weapons. 
The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak". Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; "The Times" reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships." The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term "cannon fodder", first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in "Henry IV, Part 1". 20th and 21st centuries. Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy". When referring to cannons, the term "gun" is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above. By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were more suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. 
The German army had the advantage here as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This calibre gun was used by the Germans against Paris and could hit targets more than away. The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare, and in anti-air guns, both the British and Americans feared unexploded proximity fuses would be reverse engineered, leading to them limiting their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, when they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43 and many variations, entered service with the Wehrmacht, and was used as both a tank main gun, and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges. Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat. The tendency to create larger calibre cannons during the World Wars has reversed since. The United States Army, for example, sought a lighter, more versatile howitzer, to replace their aging pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. 
Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past, and, in some cases, having been replaced by cruise missiles. However, the 's planned armament included the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The warhead, which weighed , had a circular error of probability of , and was mounted on a rocket, to increase the effective range to , further than that of the Paris Gun. The AGS's barrels would be water-cooled, and fire 10 rounds per minute, per gun. The combined firepower from both turrets would give a "Zumwalt"-class destroyer the firepower equivalent to 12 conventional M198 howitzers. The reason for the re-integration of cannons as a main armament in United States Navy ships was because satellite-guided munitions fired from a gun would be less expensive than a cruise missile but have a similar guidance capability. Autocannon. Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically or greater since World War II and are usually capable of using explosive ammunition even if it is not always used. Machine guns in contrast are usually too small to use explosive ammunition; such ammunition is additionally banned in international conflict for the parties to the Saint Petersburg Declaration of 1868. Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of several thousand rounds per minute. The fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute. Autocannons are often found in aircraft, where they replaced machine guns and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns. Aircraft use. The first documented installation of a cannon firing explosive shells on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannons; however, their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1 with a single 37 mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round. The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works "COW 37 mm gun", was installed in an aircraft. 
However, the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep angle upwards in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later. During this period autocannons became available and several fighters of the German and the Imperial Japanese Navy Air Service were fitted with 20 mm cannons. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was a some debate in the RAF as to whether the greater number of possible rounds being fired from a machine gun, or a smaller number of explosive rounds from a cannon was preferable. Improvements during the war in regards to rate of fire allowed the cannon to displace the machine gun almost entirely. The cannon was more effective against armour so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four instead of the six .50 Browning machine guns for US aircraft or eight to twelve M1919 Browning machine guns on earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannons, as with machine guns, were either fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either), or mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannons to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so-installed , derived from a German colloquialism for jazz music— means "off-key". Preceding the Vietnam War the high speeds aircraft were attaining and availability of missiles led to a move to omit cannon due to the belief that they would be useless in a dogfight, but combat experience during the Vietnam War showed conclusively that, despite advances in missiles, there was still a need for cannon. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30mm GAU-8/A Avenger Gatling-type rotary cannon mounted on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannons ranging up to 40 mm. Both are used in the close air support role. Composition. Cannons in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though obviously diminishing recoil through increasing the overall mass of the cannon means decreased mobility. Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. 
Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure; cast-iron cannon are less expensive and more durable generally than bronze and withstand being fired more times without deteriorating. However, cast-iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate. The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—to be used they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech. The following terms refer to the components or aspects of a classical western cannon (c. 1850) as illustrated here. In what follows, the words "near", "close", and "behind" will refer to those parts towards the thick, closed end of the piece, and "far", "front", "in front of", and "before" to the thinner, open end. Solid spaces. The main body of a cannon consists of three basic extensions: the foremost and the longest is called the "chase", the middle portion is the "reinforce", and the closest and briefest portion is the "cascabel" or "cascable". The chase is simply the entire conical part of the cannon in front of the "reinforce". It is the longest portion of the cannon, and includes the following elements: To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage (in this case meaning that the bore is designed slightly wider than the cannonball) allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball. Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a shot, as distinct from a demi-cannon – , culverin – , or demi-culverin – . "Gun" in this context specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term "cannon" is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II. Operation. In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets. 
Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot. Fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range. Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds. During the Napoleonic Wars, a British gun team consisted of five gunners to aim it, clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole, to prevent a draught that might fan a flame. The charge loaded, the fourth would prick the bagged charge through the vent hole, and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century. When a cannon had to be abandoned such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking". A gun was said to be "honeycombed" when the surface of the bore had cavities, or holes in it, caused by corrosion or casting defects. Legislation. In the United States, muzzleloading cannons made before 1899 (and replicas) that are unable to fire fixed ammunition are considered antiques. They are not subject to the Gun Control Act of 1968 or National Firearms Act of 1934. They may be subject to local rules in some jurisdictions, however. Deceptive imitations. Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them, rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception. In popular culture. Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples is Pyotr Ilyich Tchaikovsky's "1812 Overture". The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording of the Minnesota Orchestra. Cannon fire is also frequently used in presentations of the "1812" on American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974. 
The hard rock band AC/DC used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. A recording of that song has accompanied the firing of an authentic reproduction of a M1857 12-pounder Napoleon during Columbus Blue Jackets goal celebrations at Nationwide Arena since opening night of the 2007–08 season. The cannon is the focal point of the team's alternate logo on its third jerseys. Cannons have been fired in touchdown celebrations by several American football teams including the San Diego Chargers. The Pittsburgh Steelers used one only during the 1962 campaign but discontinued it after Buddy Dial was startled by inadvertently running face-first into the cannon's smoky discharge in a 42–27 loss to the Dallas Cowboys. Restoration. Cannon recovered from the sea are often extensively damaged from exposure to salt water; electrolytic reduction treatment is required to forestall corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. Cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the cannon from attracting dust.
7056
7903804
https://en.wikipedia.org/wiki?curid=7056
Computer mouse
A computer mouse (plural mice; also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer. The first public demonstration of a mouse controlling a computer system was done by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the Y. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input. Etymology. The earliest known written use of the term "mouse" in reference to a computer pointing device is in Bill English's July 1965 publication, "Computer-Aided Display Control". This likely originated from its resemblance to the shape and size of a mouse, with the cord resembling its tail. The popularity of wireless mice without cords makes the resemblance less obvious. According to Roger Bates, a hardware designer under English, the term also came about because the cursor on the screen was, for an unknown reason, referred to as "CAT" and was seen by the team as if it would be chasing the new desktop device. The plural for the small rodent is always "mice" in modern usage. The plural for a computer mouse is either "mice" or "mouses" according to most dictionaries, with "mice" being more common. The first recorded plural usage is "mice"; the online "Oxford Dictionaries" cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968. History. Stationary trackballs. The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what they called a "roller ball" for this purpose. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret. Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952. DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. 
When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project. Engelbart's first "mouse". Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013. By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On 14 November 1963, he first recorded his thoughts in his personal notebook about something he initially called a "bug", which is a "3-point" form could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard". In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the "mouse" as early models had a cord attached to the rear part of the device which looked like a tail, and in turn, resembled the common mouse. According to Roger Bates, a hardware designer under English, another reason for choosing this name was because the cursor on the screen was also referred to as "CAT" at this time. As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect. Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a bulky device (pictured) used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, 3-button mouse for about a year. First rolling-ball mouse. 
On 2 October 1968, three years after Engelbart's prototype but more than two months before his public demo, a mouse device named "" (German for "Trackball control") was shown in a sales brochure by the German company AEG-Telefunken as an optional input device for the SIG 100 vector graphics terminal, part of the system around their process computer TR 86 and the main frame. Based on an even earlier trackball device, the mouse device had been developed by the company in 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer distance telex lines with 50 baud. Weighing , the device with a total height of about came in a diameter hemispherical injection-molded thermoplastic casing featuring one central push button. As noted above, the device was based on an earlier trackball-like device (also named ') that was embedded into radar flight control desks. This trackball had been originally developed by a team led by at Telefunken for the German ' (Federal Air Traffic Control). It was part of the corresponding workstation system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of "reversing" the existing trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added costs of per piece to the already up to 20-million DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technische Universität Berlin, University of Stuttgart and Konstanz. Several mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum, two others survived in a museum at the University of Stuttgart, two in Hamburg, the one from Aachen at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Anecdotal reports claim that Telefunken's attempt to patent the device was rejected by the German Patent Office due to lack of inventiveness. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a device named "Touchinput-" ("touch input device") based on a conductively coated glass screen. First mice on personal computers and workstations. The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to use a mouse. 
Alan Kay designed the 16-by-16 mouse cursor icon with its left edge vertical and right edge 45-degrees so it displays well on the bitmap.Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse shipped as a part of a computer and intended for personal computer navigation came with the Xerox 8010 Star in 1981. By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. The Microsoft Mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985. Aftermarket mice were offered, from the mid 1980s, for many 8-bit home computers, the like of the Commodore 1351 being offered for the Commodore 64 and 128, as was the NEOS Mouse that was also offered for the MSX range, while the AMX Mouse was offered for the Acorn BBC Micro and Electron, Sinclair ZX Spectrum, and Amstrad CPC lines. Operation. A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer. The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook and clicking while the cursor points at this icon might cause a text editing program to open the file in a window. Different ways of operating the mouse cause specific things to happen in the GUI: Gestures. Gestural interfaces have become an integral part of modern computing, allowing users to interact with their devices in a more intuitive and natural way. In addition to traditional pointing-and-clicking actions, users can now employ gestural inputs to issue commands or perform specific actions. These stylized motions of the mouse cursor, known as "gestures", have the potential to enhance user experience and streamline workflow. To illustrate the concept of gestural interfaces, let's consider a drawing program as an example. In this scenario, a user can employ a gesture to delete a shape on the canvas. 
By rapidly moving the mouse cursor in an "x" motion over the shape, the user can trigger the command to delete the selected shape. This gesture-based interaction enables users to perform actions quickly and efficiently without relying solely on traditional input methods. While gestural interfaces offer a more immersive and interactive user experience, they also present challenges. One of the primary difficulties lies in the requirement of finer motor control from users. Gestures demand precise movements, which can be more challenging for individuals with limited dexterity or those who are new to this mode of interaction. However, despite these challenges, gestural interfaces have gained popularity due to their ability to simplify complex tasks and improve efficiency. Several gestural conventions have become widely adopted, making them more accessible to users. One such convention is the drag and drop gesture, which has become pervasive across various applications and platforms. The drag and drop gesture is a fundamental gestural convention that enables users to manipulate objects on the screen seamlessly. It involves a series of actions performed by the user: This gesture allows users to transfer or rearrange objects effortlessly. For instance, a user can drag and drop a picture representing a file onto an image of a trash can, indicating the intention to delete the file. This intuitive and visual approach to interaction has become synonymous with organizing digital content and simplifying file management tasks. In addition to the drag and drop gesture, several other semantic gestures have emerged as standard conventions within the gestural interface paradigm. These gestures serve specific purposes and contribute to a more intuitive user experience. Some of the notable semantic gestures include: These standard semantic gestures, along with the drag and drop convention, form the building blocks of gestural interfaces, allowing users to interact with digital content using intuitive and natural movements. Specific uses. At the end of 20th century, digitizer mice (puck) with magnifying glass was used with AutoCAD for the digitizations of blueprints. Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often modally chord many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect. When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. 
For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button. Types. Mechanical mice. The German company Telefunken published on their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC. The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required. The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen. The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product. Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. 
This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984. Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example. Optical and laser mice. Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light. The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected. Inertial and gyroscopic mice. Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture. 3D mice. A 3D mouse is a computer input device for viewport interaction with at least three degrees of freedom (DoF), e.g. in 3D computer graphics software for manipulating virtual objects, navigating in the viewport, defining camera paths, posing, and desktop motion capture. 3D mice can also be used as spatial controllers for video game interaction, e.g. SpaceOrb 360. To perform such different tasks the used transfer function and the device stiffness are essential for efficient interaction. Transfer function. 
The virtual motion is connected to the 3D mouse control handle via a transfer function. Position control means that the virtual position and orientation is proportional to the mouse handle's deflection whereas velocity control means that translation and rotation velocity of the controlled object is proportional to the handle deflection. A further essential property of a transfer function is its interaction metaphor: Ware and Osborne performed an experiment investigating these metaphors whereby it was shown that there is no single best metaphor. For manipulation tasks, the object-in-hand metaphor was superior, whereas for navigation tasks the camera-in-hand metaphor was superior. Device stiffness. Zhai used and the following three categories for device stiffness: Isotonic 3D mice. Logitech 3D Mouse (1990) was the first ultrasonic mouse and is an example of an isotonic 3D mouse having six degrees of freedom (6DoF). Isotonic devices have also been developed with less than 6DoF, e.g. the Inspector at Technical University of Denmark (5DoF input). Other examples of isotonic 3D mice are motion controllers, i.e. is a type of game controller that typically uses accelerometers to track motion. Motion tracking systems are also used for motion capture e.g. in the film industry, although that these tracking systems are not 3D mice in a strict sense, because motion capture only means recording 3D motion and not 3D interaction. Isometric 3D mice. Early 3D mice for velocity control were almost ideally isometric, e.g. SpaceBall 1003, 2003, 3003, and a device developed at Deutsches Zentrum für Luft und Raumfahrt (DLR), cf. US patent US4589810A. Elastic 3D mice. At DLR an elastic 6DoF sensor was developed that was used in Logitech's SpaceMouse and in the products of 3DConnexion. SpaceBall 4000 FLX has a maximum deflection of approximately at a maximum force of approximately 10N, that is, a stiffness of approximately . SpaceMouse has a maximum deflection of at a maximum force of , that is, a stiffness of approximately . Taking this development further, the softly elastic Sundinlabs SpaceCat was developed. SpaceCat has a maximum translational deflection of approximately and maximum rotational deflection of approximately 30° at a maximum force less than 2N, that is, a stiffness of approximately . With SpaceCat Sundin and Fjeld reviewed five comparative experiments performed with different device stiffness and transfer functions and performed a further study comparing 6DoF softly elastic position control with 6DoF stiffly elastic velocity control in a positioning task. They concluded that for positioning tasks position control is to be preferred over velocity control. They could further conjecture the following two types of preferred 3D mouse usage: 3DConnexion's 3D mice have been commercially successful over decades. They are used in combination with the conventional mouse for CAD. The Space Mouse is used to orient the target object or change the viewpoint with the non-dominant hand, whereas the dominant hand operates the computer mouse for conventional CAD GUI operation. This is a kind of space-multiplexed input where the 6 DoF input device acts as a graspable user interface that is always connected to the view port. Force feedback. With force feedback the device stiffness can dynamically be adapted to the task just performed by the user, e.g. performing positioning tasks with less stiffness than navigation tasks. Tactile mice. 
In 2000, Logitech introduced a "tactile mouse" known as the "iFeel Mouse" developed by Immersion Corporation that contained a small actuator to enable the mouse to generate simulated physical sensations. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. To surf the internet by touch-enabled mouse was first developed in 1996 and first implemented commercially by the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed. Pucks. Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice. Ergonomic mice. As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort. When holding a typical mouse, the ulna and radius bones on the arm are crossed. Some designs attempt to place the palm more vertically, so the bones take more natural parallel position. Increasing mouse height and angling the mouse topcase can improve wrist posture without negatively affecting performance. Some limit wrist movement, encouraging arm movement instead, that may be less precise but more optimal from the health point of view. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However such optimizations make the mouse right or left hand specific, making more problematic to change the tired hand. "Time" has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before." Another solution is a pointing bar device. The so-called "roller bar mouse" is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility. Gaming mice. These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for game mice, especially those designed for use in real-time strategy games such as "StarCraft", or in multiplayer online battle arena games such as League of Legends to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. DPI and CPI are the same values that refer to the mouse's sensitivity. DPI is a misnomer used in the gaming world, and many manufacturers use it to refer to CPI, counts per inch. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is also an important factor in gaming mouse, as extended gameplay times may render further use of the mouse to be uncomfortable. 
Some mice have been designed to have adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip: Connectivity and communication protocols. To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses. While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer. Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys". Early mice. In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled. The 1985 Sun-3 workstations would ship with a ball based, bus mouse, connected via an 3 pin mini din socket. Sun later replacing the ball for an optical mechanism dependent on a patterned, reflective, metallic mouse mat, with their type M4 mouse. The earliest mass-market mice, such as the original Macintosh, Amiga, and Atari ST mice used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer. The 1987 Acorn Archimedes line kept the quadrature-encoded mice of the 68000 computers, and the aftermarket mice sold for 8-bit home computers, like the AMX Mouse, but opted for its own propriety 9 pin mini din connector. Serial interface and protocol. Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation (MSC) version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode. Apple Desktop Bus. In 1986 Apple first implemented the Apple Desktop Bus allowing the daisy chaining of up to 16 devices, including mice and other devices on the same bus with no configuration whatsoever. 
Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998 when Apple's iMac line of computers joined the industry-wide switch to using USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005. PS/2 interface and protocol. With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI style full sized DIN 41524 connector. In default mode (called "stream mode") a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes, with the following format: Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors. A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five). Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound based tracking which gave 3D input to a few millimeters accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin. USB. Almost all wired mice today use USB and the USB human interface device class for communication. Cordless or wireless. Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port. Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some "nano receivers" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove. Operating system support. MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2. 
Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice. Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling. Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support. Multiple-mouse systems. Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around. Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices. Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces. Windows also has full support for multiple input/mouse configurations for multi-user environments. Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available. The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen. , Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X leaving it relegated to custom software usage. There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications. Buttons. Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound. Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button sits located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software. Scrolling. 
Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse. Speed. Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter. The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI)the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, operating system and application software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting. Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response. Mousepads. Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance. The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist. Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). 
Mousepads. Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice, starting with the steel roller ball mouse, have required a mousepad for optimal performance. The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist. Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass. Some mice also come with small "pads" attached to the bottom surface, also called mouse feet or mouse skates, that help the user slide the mouse smoothly across surfaces. In the marketplace. Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use. The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS). The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse. Use in games. The device often functions as an interface for PC-based computer games and sometimes for video game consoles. The Classic Mac OS Desk Accessory "Puzzle" in 1984 was the first game designed specifically for a mouse. First-person shooters. FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs. Many players of shooting games prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as "flick-shotting") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition, the stick has a finite range of travel; if the player is already moving the view at a non-zero speed, the stick's existing displacement leaves little room to increase the rate of camera movement further. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements, all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters. Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device.
Depending on how deeply hardcoded this misbehavior is, internal user patches or external third-party software may be able to fix it. Individual game engines also have their own sensitivity scales, which often prevents a player from carrying an existing sensitivity setting from one game to another and getting the same physical distance per 360-degree turn. FPS players therefore commonly use a sensitivity converter to translate rotational movement correctly between different mice and between different games; the conversion can also be calculated by hand, but doing so is more time-consuming, so a converter is usually faster and easier.
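The arithmetic behind such a converter can be sketched as follows. This is a generic illustration under the common assumption that a game turns the view by a fixed number of degrees per mouse count, scaled by the in-game sensitivity; the yaw values and CPI figures used here are illustrative placeholders, not measurements of any particular game or mouse.

```python
# Sketch of sensitivity-conversion arithmetic. Assumption: view rotation per mouse
# count = yaw_deg_per_count * in-game sensitivity, which holds for many but not all
# engines. All numeric values below are illustrative placeholders.

def cm_per_360(cpi, sensitivity, yaw_deg_per_count):
    """Physical mouse travel (in cm) needed for a full 360-degree turn."""
    counts_needed = 360.0 / (yaw_deg_per_count * sensitivity)  # counts for one revolution
    return counts_needed / cpi * 2.54                          # counts -> inches -> cm

def convert_sensitivity(old_sens, old_yaw, old_cpi, new_yaw, new_cpi):
    """Sensitivity in the target game that preserves the same cm-per-360."""
    return old_sens * (old_yaw * old_cpi) / (new_yaw * new_cpi)

if __name__ == "__main__":
    # Placeholder setup: 800 CPI mouse, sensitivity 2.0, 0.022 degrees per count.
    print(round(cm_per_360(800, 2.0, 0.022), 2), "cm per 360 in the source game")
    new_sens = convert_sensitivity(2.0, 0.022, 800, new_yaw=0.018, new_cpi=1600)
    print(round(new_sens, 3), "equivalent sensitivity in the target game")
```

Keeping the centimetres-per-360 figure constant is what lets a player switch mice or games without retraining their muscle memory.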
Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse. The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to "aim down the weapon sights". In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer. Players can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). Most first-person shooter games also allow additional functions to be assigned to the extra buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward, and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with, the typical WASD configuration. Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early player technique, circle strafing, involved strafing continuously around an opponent in a circle, with the opponent at the center, while aiming and shooting at them. Players could achieve this by holding down a key for strafing while continuously aiming the mouse toward the opponent. Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse bungees are typically used with gaming mice because they eliminate the annoyance of a trailing cable. Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control configuration. After id Software's commercial hit "Doom", which did not support vertical aiming, competitor Bungie's "Marathon" became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users regard as non-inverted (by default, moving the mouse forward resulted in looking down). Soon after, id Software released "Quake", which introduced the invert feature as users know it. Home consoles. In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. A mouse was also released for the Nintendo 64, although it was only released in Japan. The 1992 game "Mario Paint" in particular used the mouse's capabilities, as did its Japanese-only successor "Mario Artist" on the N64 for its 64DD disk drive peripheral in 1999. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse for the PlayStation console, included one along with the Linux for PlayStation 2 kit, and allows owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this feature implemented in a later software update, and this support was retained on its successor, the Wii U. Microsoft's Xbox line of game consoles (which used operating systems based on modified versions of Windows NT) also had universal mouse support using USB.
7059
27015025
https://en.wikipedia.org/wiki?curid=7059
Civil defense
Civil defense or civil protection is an effort to protect the citizens of a state (generally non-combatants) from human-made and natural disasters. It uses the principles of emergency management: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. Civil-defense structures became widespread after authorities recognised the threats posed by nuclear weapons. Since the end of the Cold War, the focus of civil defense has largely shifted from responding to military attack to dealing with emergencies and disasters in general. The new concept is characterised by a number of terms, each of which has its own specific shade of meaning, such as "crisis management", "emergency management", "emergency preparedness", "contingency planning", "civil contingency", "civil aid" and "civil protection". Some countries treat civil defense as a key part of defense in general. For example, total defence refers to the commitment of a wide range of national resources to defense, including the protection of all aspects of civilian life. History. Origins. United Kingdom. The advent of civil defense was stimulated by the experience of the bombing of civilian areas during the First World War. The bombing of the United Kingdom began on 19 January 1915 when German zeppelins dropped bombs on the Great Yarmouth area, killing six people. German bombing operations of the First World War were surprisingly effective, especially after the Gotha bombers surpassed the zeppelins. The most devastating raids inflicted 121 casualties for each ton of bombs dropped; this figure was then used as a basis for predictions. After the war, attention was turned toward civil defense in the event of war, and the Air Raid Precautions Committee (ARP) was established in 1924 to investigate ways for ensuring the protection of civilians from the danger of air-raids. The Committee produced figures estimating that in London there would be 9,000 casualties in the first two days and then a continuing rate of 17,500 casualties a week. These rates were thought conservative. It was believed that there would be "total chaos and panic" and hysterical neurosis as the people of London would try to flee the city. To control the population harsh measures were proposed: bringing London under almost military control, and physically cordoning off the city with 120,000 troops to force people back to work. A different government department proposed setting up camps for refugees for a few days before sending them back to London. A special government department, the Civil Defence Service, was established by the Home Office in 1935. Its remit included the pre-existing ARP as well as wardens, firemen (initially the Auxiliary Fire Service (AFS) and latterly the National Fire Service (NFS)), fire watchers, rescue, first aid post, stretcher party and industry. Over 1.9 million people served within the CD; nearly 2,400 died from enemy action. The organization of civil defense was the responsibility of the local authority. Volunteers were ascribed to different units depending on experience or training. Each local civil defense service was divided into several sections. Wardens were responsible for local reconnaissance and reporting, and leadership, organization, guidance and control of the general public. 
Wardens would also advise survivors of the locations of rest and food centers, and other welfare facilities. Rescue Parties were required to assess and then access bombed-out buildings and retrieve injured or dead people. In addition they would turn off gas, electricity and water supplies, and repair or pull down unsteady buildings. Medical services, including First Aid Parties, provided on the spot medical assistance. The expected stream of information that would be generated during an attack was handled by 'Report and Control' teams. A local headquarters would have an ARP controller who would direct rescue, first aid and decontamination teams to the scenes of reported bombing. If local services were deemed insufficient to deal with the incident then the controller could request assistance from surrounding boroughs. Fire Guards were responsible for a designated area/building and required to monitor the fall of incendiary bombs and pass on news of any fires that had broken out to the NFS. They could deal with an individual magnesium alloy ("Elektron") incendiary bomb by dousing it with buckets of sand or water or by smothering. Additionally, 'Gas Decontamination Teams' kitted out with gas-tight and waterproof protective clothing were to deal with any gas attacks. They were trained to decontaminate buildings, roads, rail and other material that had been contaminated by liquid or jelly gases. Little progress was made over the issue of air-raid shelters, because of the apparently irreconcilable conflict between the need to send the public underground for shelter and the need to keep them above ground for protection against gas attacks. In February 1936 the Home Secretary appointed a technical Committee on Structural Precautions against Air Attack. During the Munich crisis, local authorities dug trenches to provide shelter. After the crisis, the British Government decided to make these a permanent feature, with a standard design of precast concrete trench lining. They also decided to issue the Anderson shelter free to poorer households and to provide steel props to create shelters in suitable basements. During the Second World War, the ARP was responsible for the issuing of gas masks, pre-fabricated air-raid shelters (such as Anderson shelters, as well as Morrison shelters), the upkeep of local public shelters, and the maintenance of the blackout. The ARP also helped rescue people after air raids and other attacks, and some women became ARP Ambulance Attendants whose job was to help administer first aid to casualties, search for survivors, and in many grim instances, help recover bodies, sometimes those of their own colleagues. As the war progressed, the military effectiveness of Germany's aerial bombardment was very limited. Thanks to the Luftwaffe's shifting aims, the strength of British air defenses, the use of early warning radar in combination with the Royal Observer Corps, and the life-saving actions of local civil defense units, the aerial "Blitz" during the Battle of Britain failed to break the morale of the British people, destroy the Royal Air Force or significantly hinder British industrial production. Despite a significant investment in civil and military defense, British civilian losses during the Blitz were higher than in most strategic bombing campaigns throughout the war. 
For example, there were 14,000-20,000 UK civilian fatalities during the Battle of Britain, a relatively high number considering that the Luftwaffe dropped only an estimated 30,000 tons of ordnance during the battle. This works out to roughly 0.47-0.67 civilian fatalities per ton of bombs dropped, admittedly lower than the earlier prediction of 121 casualties per ton. By comparison, Allied strategic bombing of Germany during the war proved slightly less lethal per ton than what was observed in the UK, with an estimated 400,000-600,000 German civilian fatalities for approximately 1.35 million tons of bombs dropped on Germany, a resulting rate of roughly 0.30-0.44 civilian fatalities per ton of bombs dropped. United States. In the United States, the Office of Civilian Defense was established in May 1941 to coordinate civilian defense efforts. It coordinated with the Department of the Army and established groups similar to the British ARP. One of these groups that still exists today is the Civil Air Patrol, which was originally created as a civilian auxiliary to the Army. The CAP was created on December 1, 1941, with the main civil defense mission of search and rescue. The CAP also sank two Axis submarines and provided aerial reconnaissance for Allied and neutral merchant ships. In 1946, the Civil Air Patrol was barred from combat by Public Law 79-476. The CAP then received its current mission: search and rescue for downed aircraft. When the Air Force was created in 1947, the Civil Air Patrol became the auxiliary of the Air Force. The Coast Guard Auxiliary performs a similar role in support of the U.S. Coast Guard. Like the Civil Air Patrol, the Coast Guard Auxiliary was established in the run-up to World War II. Auxiliarists were sometimes armed during the war, and extensively participated in port security operations. After the war, the Auxiliary shifted its focus to promoting boating safety and assisting the Coast Guard in performing search and rescue and marine safety and environmental protection. In the United States a federal civil defense program existed under Public Law 920 of the 81st Congress, as amended, from 1951 to 1994. That statutory scheme was made so-called all-hazards by Public Law 103–160 in 1993 and largely repealed by Public Law 103–337 in 1994. Parts now appear in Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 100-707 [1988 as amended]. The term "emergency preparedness" was largely codified by that repeal and amendment. See 42 USC Sections 5101 and following. Post–World War II. In most of the states of the North Atlantic Treaty Organization, such as the United States, the United Kingdom and West Germany, as well as the Soviet Bloc, and especially in the neutral countries, such as Switzerland and Sweden, many civil defense practices took place during the 1950s and 1960s to prepare for the aftermath of a nuclear war, which seemed quite likely at that time. In the United Kingdom, the Civil Defence Service was disbanded in 1945, followed by the ARP in 1946. With the onset of growing tensions between East and West, the service was revived in 1949 as the Civil Defence Corps. As a civilian volunteer organization, it was tasked to take control in the aftermath of a major national emergency, principally envisaged as being a Cold War nuclear attack. Although under the authority of the Home Office, with a centralized administrative establishment, the corps was administered locally by Corps Authorities.
In general every county was a Corps Authority, as were most county boroughs in England and Wales and large burghs in Scotland. Each division was divided into several sections, including the Headquarters, Intelligence and Operations, Scientific and Reconnaissance, Warden & Rescue, Ambulance and First Aid and Welfare. In 1954 Coventry City Council caused international controversy when it announced plans to disband its Civil Defence committee because the councillors had decided that hydrogen bombs meant that there could be no recovery from a nuclear attack. The British government opposed such a move and held a provocative Civil Defence exercise on the streets of Coventry which Labour council members protested against. The government also decided to implement its own committee at the city's cost until the council reinstituted its committee. In the United States, the sheer power of nuclear weapons and the perceived likelihood of such an attack precipitated a greater response than had yet been required of civil defense. Civil defense, previously considered an important and commonsense step, became divisive and controversial in the charged atmosphere of the Cold War. In 1950, the National Security Resources Board created a 162-page document outlining a model civil defense structure for the U.S. Called the "Blue Book" by civil defense professionals in reference to its solid blue cover, it was the template for legislation and organization for the next 40 years. Perhaps the most memorable aspect of the Cold War civil defense effort was the educational effort made or promoted by the government. In "Duck and Cover", Bert the Turtle advocated that children "duck and cover" when they "see the flash." Booklets such as "Survival Under Atomic Attack", "Fallout Protection" and "Nuclear War Survival Skills" were also commonplace. The transcribed radio program Stars for Defense combined hit music with civil defense advice. Government institutes created public service announcements including children's songs and distributed them to radio stations to educate the public in case of nuclear attack. The US President Kennedy (1961–63) launched an ambitious effort to install fallout shelters throughout the United States. These shelters would not protect against the blast and heat effects of nuclear weapons, but would provide some protection against the radiation effects that would last for weeks and even affect areas distant from a nuclear explosion. In order for most of these preparations to be effective, there had to be some degree of warning. In 1951, CONELRAD (Control of Electromagnetic Radiation) was established. Under the system, a few primary stations would be alerted of an emergency and would broadcast an alert. All broadcast stations throughout the country would be constantly listening to an upstream station and repeat the message, thus passing it from station to station. In a once classified US war game analysis, looking at varying levels of war escalation, warning and pre-emptive attacks in the late 1950s early 1960s, it was estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, the cost of a full-scale civil defense program was regarded as less effective in cost-benefit analysis than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was increasing their nuclear stockpile, the efficacy of both would follow a diminishing returns trend. 
Contrary to the largely noncommittal approach taken in NATO, with its stops and starts in civil defense depending on the whims of each newly elected government, the military strategy in the comparatively more ideologically consistent USSR held that, amongst other things, a winnable nuclear war was possible. To this effect the Soviets planned to minimize, as far as possible, the effects of nuclear weapon strikes on their territory, and therefore devoted considerably more thought to civil defense preparations than the U.S. did, with defense plans that have been assessed to be far more effective than those in the U.S. Soviet Civil Defense Troops played the main role in the massive disaster relief operation following the 1986 Chernobyl nuclear accident. Defense Troop reservists were officially mobilized (as in a case of war) from throughout the USSR to join the Chernobyl task force, which was formed on the basis of the Kyiv Civil Defense Brigade. The task force performed some high-risk tasks including, after the failure of their robotic machinery, the manual removal of highly radioactive debris. Many of their personnel were later decorated with medals for their work at containing the release of radiation into the environment, and a number of the 56 deaths from the accident were Civil Defense troops. In Western countries, strong civil defense policies were never properly implemented, because they were fundamentally at odds with the doctrine of "mutual assured destruction" (MAD) by making provisions for survivors. It was also considered that a full-fledged total defense would not have been worth the very large expense. For whatever reason, the public saw efforts at civil defense as fundamentally ineffective against the powerful destructive forces of nuclear weapons, and therefore a waste of time and money, although detailed scientific research programs did underlie the much-mocked government civil defense pamphlets of the 1950s and 1960s. The Civil Defence Corps was stood down in Great Britain in 1968 due to the financial crisis of the mid-1960s. Its neighbors, however, remained committed to Civil Defence, namely the Isle of Man Civil Defence Corps and Civil Defence Ireland. In the United States, the various civil defense agencies were replaced with the Federal Emergency Management Agency (FEMA) in 1979. In 2002 this became part of the Department of Homeland Security. The focus was shifted from nuclear war to an "all-hazards" approach of comprehensive emergency management. Natural disasters and the emergence of new threats such as terrorism have caused attention to be focused away from traditional civil defense and into new forms of civil protection such as emergency management and homeland security. Today. Many countries maintain a national Civil Defence Corps, usually having a wide brief for assisting in large-scale civil emergencies such as flood, earthquake, invasion, or civil disorder. After the September 11 attacks in 2001, the concept of civil defense in the United States was revisited under the umbrella term of homeland security and all-hazards emergency management. In Europe, the triangle CD logo continues to be widely used. Created in 1939 by Charles Coiner of the N. W. Ayer Advertising Agency, it was used throughout World War II and the Cold War era. In the U.S., 2006 saw the retirement of the old triangle logo, to be replaced with a stylised "EM" (for emergency management). A reference to the old CD logo (without the red CD letters) can be seen above the eagle's head in the FEMA seal.
The name and logo continue to be used by Hawaii State Civil Defense and Guam Homeland Security/Office of Civil Defense. The term "civil protection" is currently widely used within the European Union to refer to government-approved systems and resources tasked with protecting the non-combat population, primarily in the event of natural and technological disasters. For example, the EU's humanitarian aid policy director on the Ebola Crisis, Florika Fink-Hooijer, said that civil protection requires "not just more resources, but first and foremost better governance of the resources that are available including better synergies between humanitarian aid and civil protection". In recent years there has been increased emphasis on preparedness for technological disasters resulting from terrorist attack. Within EU countries the term "crisis management" emphasizes the political and security dimension rather than measures to satisfy the immediate needs of the population. In Australia, civil defense is the responsibility of the volunteer-based State Emergency Service. In most former Soviet countries civil defense is the responsibility of governmental ministries, such as Russia's Ministry of Emergency Situations. Importance. Relatively small investments in preparation can speed up recovery by months or years and thereby prevent millions of deaths by hunger, cold and disease. According to human capital theory in economics, a country's population is more valuable than all of the land, factories and other assets that it possesses. People rebuild a country after its destruction, and it is therefore important for the economic security of a country that it protect its people. According to psychology, it is important for people to feel as though they are in control of their own destiny, and preparing for uncertainty via civil defense may help to achieve this. In the United States, the federal civil defense program was authorized by statute and ran from 1951 to 1994. Originally authorized by Public Law 920 of the 81st Congress, it was repealed by Public Law 103–337 in 1994. Small portions of that statutory scheme were incorporated into the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Public Law 100–707), which partly superseded, partly amended, and partly supplemented the Disaster Relief Act of 1974 (Public Law 93-288). In the portions of the civil defense statute incorporated into the Stafford Act, the primary modification was to use the term "emergency preparedness" wherever the term "civil defense" had previously appeared in the statutory language. An important concept initiated by President Jimmy Carter was the so-called "Crisis Relocation Program" administered as part of the federal civil defense program. That effort largely lapsed under President Ronald Reagan, who discontinued the Carter initiative because of opposition from areas potentially hosting the relocated population. Threat assessment. Threats to civilians and civilian life include NBC (nuclear, biological, and chemical) warfare and other hazards grouped under the more modern term CBRN (chemical, biological, radiological and nuclear). Threat assessment involves studying each threat so that preventative measures can be built into civilian life. Stages. Mitigation. Mitigation is the process of actively preventing war or the release of nuclear weapons. It includes policy analysis, diplomacy, political measures, nuclear disarmament and more military responses such as a National Missile Defense and air defense artillery.
In the case of counter-terrorism, mitigation would include diplomacy, intelligence gathering and direct action against terrorist groups. Mitigation may also be reflected in long-term planning such as the design of the interstate highway system and the placement of military bases further away from populated areas. Preparation. Preparation consists of building blast shelters and pre-positioning information, supplies, and emergency infrastructure. For example, most larger cities in the U.S. now have underground emergency operations centers that can perform civil defense coordination. FEMA also has many underground facilities for the same purpose, located near major railheads such as the ones in Denton, Texas and Mount Weather, Virginia. Other measures would include continual government inventories of grain silos, the Strategic National Stockpile, the uncapping of the Strategic Petroleum Reserve, the dispersal of lorry-transportable bridges, water purification, mobile refineries, mobile decontamination facilities, mobile general and special purpose disaster mortuary facilities such as Disaster Mortuary Operational Response Team (DMORT) and DMORT-WMD, and other aids such as temporary housing to speed civil recovery. On an individual scale, one means of preparation for exposure to nuclear fallout is to obtain potassium iodide (KI) tablets as a safety measure to protect the human thyroid gland from the uptake of dangerous radioactive iodine. Another measure is to cover the nose, mouth and eyes with a piece of cloth and sunglasses to protect against alpha particles, which are only an internal hazard. Preparation also includes measures taken to support and supplement efforts at the national, regional and local levels with regard to disaster prevention, the readiness of those responsible for civil protection, and intervention in the event of disaster, as well as the sharing of information among those involved. Response. Response consists first of warning civilians so they can enter fallout shelters and protect assets. Staffing a response is always a problem in a civil defense emergency. After an attack, conventional full-time emergency services are dramatically overloaded, with conventional firefighting response times often exceeding several days. Some capability is maintained by local and state agencies, and an emergency reserve is provided by specialized military units, especially civil affairs, Military Police, Judge Advocates and combat engineers. However, the traditional response to massed attack on civilian population centers is to maintain a mass-trained force of volunteer emergency workers. Studies in World War II showed that lightly trained (40 hours or less) civilians in organised teams can perform up to 95% of emergency activities when trained, liaised and supported by local government. In this plan, the populace rescues itself from most situations, and provides information to a central office to prioritize professional emergency services. In the 1990s, this concept was revived by the Los Angeles Fire Department to cope with civil emergencies such as earthquakes. The program was widely adopted, providing standard terms for organization. In the U.S., this is now official federal policy, and it is implemented by community emergency response teams, under the Department of Homeland Security, which certifies training programs by local governments, and registers "certified disaster service workers" who complete such training. Recovery. Recovery consists of rebuilding damaged infrastructure, buildings and production.
The recovery phase is the longest and ultimately most expensive phase. Once the immediate "crisis" has passed, cooperation fades away and recovery efforts are often politicized or seen as economic opportunities. Preparation for recovery can be very helpful. If mitigating resources are dispersed before the attack, cascades of social failures can be prevented. One hedge against bridge damage in riverine cities is to subsidize a "tourist ferry" that performs scenic cruises on the river. When a bridge is down, the ferry takes up the load. Civil defense organizations. Civil Defense is also the name of a number of organizations around the world dedicated to protecting civilians from military attacks, as well as to providing rescue services after natural and human-made disasters alike. Worldwide protection is managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). In a few countries, such as Jordan and Singapore (see Singapore Civil Defence Force), civil defense is essentially the same organization as the fire brigade. In most countries, however, civil defense is a government-managed, volunteer-staffed organization, separate from the fire brigade and the ambulance service. As the threat of the Cold War eased, a number of such civil defense organizations were disbanded or mothballed (as in the case of the Royal Observer Corps in the United Kingdom and the United States civil defense), while others have shifted their focus to providing rescue services after natural disasters (as with the State Emergency Service in Australian states). However, the ideals of Civil Defense have been brought back in the United States under FEMA's Citizen Corps and Community Emergency Response Team (CERT). In the United Kingdom, Civil Defence work is carried out by emergency responders under the Civil Contingencies Act 2004, with assistance from voluntary groups such as RAYNET, Search and Rescue Teams and 4x4 Response. In Ireland, the Civil Defence is still very much an active organization and is occasionally called upon for its Auxiliary Fire Service and ambulance/rescue services when emergencies such as flash flooding occur and require additional manpower. The organization has units of trained firemen and medical responders based in key areas around the country.
7060
27823944
https://en.wikipedia.org/wiki?curid=7060
Chymotrypsin
Chymotrypsin (chymotrypsins A and B, alpha-chymar ophth, avazyme, chymar, chymotest, enzeon, quimar, quimotrase, alpha-chymar, alpha-chymotrypsin A, alpha-chymotrypsin) is a digestive enzyme component of pancreatic juice acting in the duodenum, where it performs proteolysis, the breakdown of proteins and polypeptides. Chymotrypsin preferentially cleaves peptide amide bonds where the side chain of the amino acid N-terminal to the scissile amide bond (the P1 position) is a large hydrophobic amino acid (tyrosine, tryptophan, or phenylalanine). These amino acids contain an aromatic ring in their side chain that fits into a hydrophobic pocket (the S1 position) of the enzyme. It is activated in the presence of trypsin. The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine at the P1 position. Structurally, it is the archetype for its superfamily, the PA clan of proteases. Activation. Chymotrypsin is synthesized in the pancreas. Its precursor is chymotrypsinogen. Trypsin activates chymotrypsinogen by cleaving the peptide bond between Arg15 and Ile16, producing π-chymotrypsin. In turn, the amino group (–NH3+) of the Ile16 residue interacts with the side chain of Asp194, producing the "oxyanion hole" and the hydrophobic "S1 pocket". Moreover, chymotrypsin induces its own activation by cleaving at positions 14–15, 146–147, and 148–149, producing α-chymotrypsin (which is more active and stable than π-chymotrypsin). The resulting molecule consists of three polypeptide chains interconnected by disulfide bonds. Mechanism of action and kinetics. "In vivo", chymotrypsin is a proteolytic enzyme (serine protease) acting in the digestive systems of many organisms. It facilitates the cleavage of peptide bonds by a hydrolysis reaction which, despite being thermodynamically favorable, occurs extremely slowly in the absence of a catalyst. The main substrates of chymotrypsin are peptide bonds in which the amino acid N-terminal to the bond is a tryptophan, tyrosine, phenylalanine, or leucine. Like many proteases, chymotrypsin also hydrolyses amide bonds "in vitro", a virtue that enabled the use of substrate analogs such as N-acetyl-L-phenylalanine p-nitrophenyl amide for enzyme assays. Chymotrypsin cleaves peptide bonds by attacking the unreactive carbonyl group with a powerful nucleophile, the serine 195 residue located in the active site of the enzyme, which briefly becomes covalently bonded to the substrate, forming an enzyme-substrate intermediate. Along with histidine 57 and aspartic acid 102, this serine residue constitutes the catalytic triad of the active site. These findings rely on inhibition assays and the study of the kinetics of cleavage of the aforementioned substrate, exploiting the fact that the enzyme-substrate intermediate "p"-nitrophenolate has a yellow colour, enabling measurement of its concentration by measuring light absorbance at 410 nm. Chymotrypsin catalysis of the hydrolysis of a protein substrate is performed in two steps. First, the nucleophilicity of Ser-195 is enhanced by general-base catalysis in which the proton of the serine hydroxyl group is transferred to the imidazole moiety of His-57 during its attack on the electron-deficient carbonyl carbon of the protein-substrate main chain (the k1 step).
This occurs via the concerted action of the three amino-acid residues in the catalytic triad. The buildup of negative charge on the resultant tetrahedral intermediate is stabilized in the enzyme's active site's oxyanion hole, by formation of two hydrogen bonds to adjacent main-chain amide hydrogens. The His-57 imidazolium moiety formed in the k1 step is a general acid catalyst for the k-1 reaction. However, evidence for similar general-acid catalysis of the k2 reaction (Tet2) has been controverted; apparently water provides a proton to the amine leaving group. Breakdown of Tet1 (via k3) generates an acyl enzyme, which is hydrolyzed with His-57 acting as a general base (kH2O) in the formation of a tetrahedral intermediate, which breaks down to regenerate the serine hydroxyl moiety, as well as the protein fragment with the newly formed carboxyl terminus. Uses. Medical uses. Chymotrypsin has been used during cataract surgery. It was marketed under the brand name Zolyse.
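The colorimetric assay mentioned above reduces to a short calculation: absorbance at 410 nm is converted to product concentration with the Beer–Lambert law, and the early, roughly linear part of the progress curve gives an initial rate. The following sketch is generic, not a published protocol; the extinction coefficient, path length and absorbance readings are assumed, illustrative values.

```python
# Sketch of turning absorbance readings at 410 nm into an initial reaction rate,
# as in the chromogenic chymotrypsin assays described above.
# EPSILON is an assumed, illustrative molar extinction coefficient for the yellow
# product; PATH_CM is the cuvette path length. A real assay would use calibrated values.

EPSILON = 1.8e4   # L mol^-1 cm^-1 (illustrative placeholder)
PATH_CM = 1.0     # standard 1 cm cuvette

def concentration(absorbance):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l)."""
    return absorbance / (EPSILON * PATH_CM)

def initial_rate(times_s, absorbances):
    """Estimate product formation rate (mol/L/s) from the first and last readings,
    assuming the early part of the progress curve is approximately linear."""
    dc = concentration(absorbances[-1]) - concentration(absorbances[0])
    dt = times_s[-1] - times_s[0]
    return dc / dt

if __name__ == "__main__":
    times = [0, 30, 60, 90]          # seconds (made-up readings)
    a410 = [0.02, 0.11, 0.20, 0.29]  # absorbance at 410 nm (made-up readings)
    print(f"initial rate ~ {initial_rate(times, a410):.2e} mol L^-1 s^-1")
```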
7061
27823944
https://en.wikipedia.org/wiki?curid=7061
Community emergency response team
In the United States, Community Emergency Response Team (CERT) can refer either to the FEMA-backed program that trains volunteers in basic disaster response or to a local team of volunteers organized under that program. Sometimes programs and organizations take different names, such as neighborhood emergency response team (NERT), or neighborhood emergency team (NET). The concept of civilian auxiliaries is similar to civil defense, which has a longer history. The CERT concept differs because it includes nonmilitary emergencies, and is coordinated with all levels of emergency authorities, local to national, via an overarching incident command system. In 2022, the CERT program moved under FEMA's community preparedness umbrella along with the Youth Preparedness Council. Organization. A local government agency, often a fire department, police department, or emergency management agency, agrees to sponsor CERT within its jurisdiction. The sponsoring agency liaises with, deploys and may train or supervise the training of CERT members. Many sponsoring agencies employ a full-time community-service person as liaison to the CERT members. In some communities, the liaison is a volunteer and CERT member. As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grows, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with ICS principles. This follows the ICS principle of span of control until the ideal distribution is achieved: one or more teams are formed in each neighborhood within a community. A "teen community emergency response team" ("teen CERT"), or "student emergency response team" ("SERT"), can be formed from any group of teens. A teen CERT can be formed as a school club, service organization, venturing crew, or explorer post, or the training can be added to a school's graduation curriculum. Some CERTs form a club or service corporation, and recruit volunteers to perform training on behalf of the sponsoring agency. This reduces the financial and human resource burden on the sponsoring agency. When not responding to disasters or large emergencies, CERTs may carry out other activities in support of their communities and sponsoring agencies. Some sponsoring agencies use state and federal grants to purchase response tools and equipment for their members and teams (subject to Stafford Act limitations). Most CERTs also acquire their own supplies, tools, and equipment. As community members, CERTs are aware of the specific needs of their community, and equip the teams accordingly. Response. The basic idea is to use CERT to perform the large number of tasks needed in emergencies. This frees highly trained professional responders for more technical tasks. Much of CERT training concerns the ICS and organization, so CERT members fit easily into larger command structures. A team member may self-activate (self-deploy) when their own neighborhood is affected by disaster, when an incident takes place at their current location (e.g., home, work, school, or church), or if an accident occurs in front of them. They should not travel to or respond to an incident they merely hear about unless told to do so by their team or sponsoring agency (as specified in chapters 1 and 6 of the basic CERT training). An effort is made to report their response status to the sponsoring agency. A self-activated team will size up the losses in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment.
They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene. Teams in neighborhoods not affected by disaster may be deployed or activated by the sponsoring agency. The sponsoring agency may communicate with neighborhood CERT leaders through an organic communication team. In some areas the communications may be by amateur radio, FRS, GMRS or MURS radio, dedicated telephone or fire-alarm networks. In other areas, relays of bicycle-equipped runners can effectively carry messages between the teams and the local emergency operations center. The sponsoring agency may activate and dispatch teams in order to gather or respond to intelligence about an incident. Teams may be dispatched to affected neighborhoods, or organized to support operations. CERT members may augment support staff at an ICS or emergency operations center. Additional teams may also be created to guard a morgue, locate supplies and food, convey messages to and from other CERTs and local authorities, and perform other duties on an as-needed basis as identified by the team leader. In the short term, CERTs perform data gathering (especially to locate mass casualties requiring professional response, or situations requiring professional rescues), simple fire-fighting tasks (for example, small fires, turning off gas), light search and rescue, damage evaluation of structures, triage and first aid. In the longer term, CERTs may assist in the evacuation of residents, or assist with setting up a neighborhood shelter. While responding, CERT members are temporary volunteer government workers. In some areas, such as California, Hawaii and Kansas, registered, activated CERT members are eligible for workers' compensation for on-the-job injuries during declared disasters. Member roles. The Federal Emergency Management Agency (FEMA) recommends that a standard team comprise at least ten people, each assigned a specific role. Because every CERT member in a community receives the same core instruction, any team member has the training necessary to assume any of these roles. This is important during a disaster response because not all members of a regular team may be available to respond. Hasty teams may be formed from whichever members are responding at the time. Additionally, members may need to adjust team roles due to stress, fatigue, injury, or other circumstances. Training. While state and local jurisdictions will implement training in the manner that best suits the community, FEMA's National CERT Program has an established curriculum. Jurisdictions may augment the training, but are strongly encouraged to deliver the entire core content. The CERT core curriculum for the basic course is composed of nine units of instruction. CERT training emphasizes safely "doing the most good for the most people as quickly as possible" when responding to a disaster. For this reason, cardiopulmonary resuscitation (CPR) training is not included in the core curriculum, as it is time- and responder-intensive in a mass-casualty incident. However, many jurisdictions encourage or require CERT members to obtain CPR training. Many CERT programs provide or encourage members to take additional first aid training. Some CERT members may also take training to become a certified first responder or emergency medical technician.
Many CERT programs also provide training in amateur radio operation, shelter operations, flood response, community relations, mass care, the ICS, and the National Incident Management System (NIMS). Each unit of CERT training is ideally delivered by professional responders or other experts in the field addressed by the unit. This is done to help build unity between CERT members and responders, keep the attention of students, and help the professional response organizations be comfortable with the training which CERT members receive. Each course of instruction is ideally facilitated by one or more instructors certified in the CERT curriculum by the state or sponsoring agency. Facilitating instructors provide continuity between units and help ensure that the CERT core curriculum is being delivered successfully. Facilitating instructors also perform set-up and tear-down of the classroom, provide instructional materials for the course, record student attendance, and handle other tasks which help the professional responders deliver their units as efficiently as possible. CERT training is provided free to interested members of the community, and is delivered in a group classroom setting. People may complete the training without obligation to join a CERT. Citizen Corps grant funds can be used to print and provide each student with a printed manual. Some sponsoring agencies use Citizen Corps grant funds to purchase disaster response tool kits. These kits are offered as an incentive to join a CERT, and must be returned to the sponsoring agency when members resign from CERT. Some sponsoring agencies require a criminal background check of all trainees before allowing them to participate in a CERT. For example, the city of Albuquerque, New Mexico, requires all volunteers to pass a background check, while the city of Austin, Texas, does not require a background check to take part in training classes, but requires members to undergo a background check in order to receive a CERT badge and directly assist first responders during an activation of the emergency operations center. However, most programs do not require a criminal background check in order to participate. The CERT curriculum (including the "Train-the-Trainer" and program manager courses) was updated in 2019 to reflect feedback from instructors across the nation. FEMA Position Qualification System. In 2021, FEMA published Position Qualification System standards for CERT programs. Programs that choose to participate must have CERT members complete a position task book every two years.
7063
4519234
https://en.wikipedia.org/wiki?curid=7063
Catapult
A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms which allow the catapult to launch a projectile such as rocks, cannon balls, or debris. In ancient warfare, the catapult was regarded as one of the most powerful pieces of heavy weaponry. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship. The earliest catapults date to at least the 7th century BC, with King Uzziah of Judah recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in the Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel, a type of traction trebuchet and catapult, appeared in ancient China. Early uses were also attributed to Ajatashatru of Magadha in his 5th century BC war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC. Etymology. The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek καταπέλτης ("katapeltēs"), itself from κατά ("kata"), "downwards" and πάλλω ("pallō"), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India, where they were used by the Magadhan King Ajatashatru around the early to mid 5th century BC. Greek and Roman catapults. The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC) described the invention of a mechanical arrow-firing catapult ("katapeltikon") by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events. The introduction of crossbows, however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the "gastraphetes", which could store more energy than the Greek bows. A detailed description of the "gastraphetes", or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise "Belopoeica". A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the "gastraphetes", which he credits to Zopyros, an engineer from southern Italy. Zopyros has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Miletus between 421 BC and 401 BC.
The bows of these machines already featured a winched pull-back system and could apparently throw two missiles at once. Philo of Byzantium provides probably the most detailed account of the establishment of a theory of belopoietics ("belos" = "projectile"; "poietike" = "(art) of making") circa 200 BC. The central principle of this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises. From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow-firing machines ("katapaltai") are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The later entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow-firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges. The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships. Medieval catapults. Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, or diseased carcasses or garbage could be catapulted over the walls. Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (AD 885–6) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure. Several types of catapult nevertheless remained in widespread use throughout the Middle Ages. Modern use. Military. The last large-scale military use of catapults was during the trench warfare of World War I.
During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars. The SPBG (Silent Projector of Bottles and Grenades) was a Soviet proposal for an anti-tank weapon that launched grenades from a spring-loaded shuttle. Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. In 2024, during the Gaza war, a trebuchet built on the private initiative of an IDF reserve unit was used to throw firebrands over the border into Lebanon, in order to set fire to the undergrowth that offered camouflage to Hezbollah fighters. Toys, sports, entertainment. In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the United States. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting. In the 1990s and early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers, first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air. The practice has been discontinued due to a fatality at the Water Park. There had previously been an injury when the trebuchet was in use on private property. In both cases the participants were hurt or killed when they failed to land on the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs. Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has been replaced by flywheels and later linear motors. "Pumpkin chunking" is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon). Smuggling. In January 2011, a homemade catapult was discovered that was used to smuggle cannabis into the United States from Mexico. The machine was found near the border fence with bales of cannabis ready to launch.
7066
25511559
https://en.wikipedia.org/wiki?curid=7066
Cinquain
Cinquain is a class of poetic forms that employ a 5-line pattern. Earlier used to describe any five-line form, it now refers to one of several forms that are defined by specific rules and guidelines. American cinquain. The modern form, known as the American cinquain, is inspired by Japanese haiku and tanka and is akin in spirit to that of the Imagists. In her 1915 collection titled "Verse", published a year after her death, Adelaide Crapsey included 28 cinquains. Crapsey's American cinquain form developed in two stages. The first, fundamental form is a stanza of five lines of accentual verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses. Then Crapsey decided to make the criterion a stanza of five lines of accentual-syllabic verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses and 2, 4, 6, 8, and 2 syllables (a schematic check of this syllable pattern is sketched at the end of this entry). Iambic feet were meant to be the standard for the cinquain, which made the dual criteria match perfectly. Some resource materials define classic cinquains as solely iambic, but that is not necessarily so. In contrast to the Eastern forms upon which she based them, Crapsey always titled her cinquains, effectively utilizing the title as a sixth line. Crapsey's cinquain depends on strict structure and intense physical imagery to communicate a mood or feeling. The form is illustrated by Crapsey's "November Night":

Listen...
With faint dry sound,
Like steps of passing ghosts,
The leaves, frost-crisp'd, break from the trees
And fall.

The Scottish poet William Soutar also wrote over one hundred American cinquains (he labelled them "epigrams") between 1933 and 1940. Cinquain variations. The Crapsey cinquain has subsequently seen a number of variations by modern poets, including: Didactic cinquain. The didactic cinquain is closely related to the Crapsey cinquain. It is an informal cinquain widely taught in elementary schools and has been featured in, and popularized by, children's media resources, including "Junie B. Jones" and PBS Kids. This form is also embraced by young adults and older poets for its expressive simplicity. The prescriptions of this type of cinquain refer to word count, not syllables and stresses. Ordinarily, the first line is a one-word title, the subject of the poem; the second line is a pair of adjectives describing that title; the third line is a three-word phrase that gives more information about the subject (often a list of three gerunds); the fourth line consists of four words describing feelings related to that subject; and the fifth line is a single-word synonym or other reference for the subject from line one. For example:

Snow
Silent, white
Dancing, falling, drifting
Covering everything it touches
Blanket
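As a schematic illustration of the pattern described above, the short sketch below checks a sequence of per-line syllable counts against the 2-4-6-8-2 scheme of the Crapsey cinquain. Reliable automatic syllable counting in English is hard, so the counts are supplied by hand; the function name and the example counts (those of "November Night") are illustrative assumptions rather than part of any established tool.

# Minimal sketch: check that five hand-counted syllable totals follow the
# 2-4-6-8-2 pattern of the Crapsey cinquain (the 1-2-3-4-1 stress pattern
# is analogous but is not checked here).
CRAPSEY_SYLLABLES = (2, 4, 6, 8, 2)

def is_crapsey_cinquain(syllable_counts):
    # syllable_counts: a sequence of five integers, one per line of the poem.
    return tuple(syllable_counts) == CRAPSEY_SYLLABLES

# "November Night" scans as 2, 4, 6, 8 and 2 syllables.
print(is_crapsey_cinquain([2, 4, 6, 8, 2]))   # True
print(is_crapsey_cinquain([2, 4, 6, 8, 4]))   # False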
7067
42944403
https://en.wikipedia.org/wiki?curid=7067
Cook Islands
The Cook Islands is an island country in Polynesia, part of Oceania, in the South Pacific Ocean. It consists of 15 islands whose total land area is approximately . The Cook Islands Exclusive Economic Zone (EEZ) covers of ocean. Avarua is its capital. The Cook Islands is self-governing while in free association with New Zealand. Since the start of the 21st century, the Cook Islands conducts its own independent foreign and defence policy, and also has its own customs regulations. Like most members of the Pacific Islands Forum, it has no armed forces, but the Cook Islands Police Service owns a Guardian Class Patrol Boat, , provided by Australia, for policing its waters. In recent decades, the Cook Islands has adopted an increasingly assertive and distinct foreign policy, and a Cook Islander, Henry Puna, served as Secretary General of the Pacific Islands Forum from 2021 to 2024. Most Cook Islanders have New Zealand citizenship, plus the status of Cook Islands nationals, which is not given to other New Zealand citizens. The Cook Islands has been an active member of the Pacific Community, formerly the South Pacific Commission, since 1980. The Cook Islands' main population centres are on the island of Rarotonga (10,863 in 2021), also the location of Rarotonga International Airport, the main international gateway to the country. The census of 2021 put the total population at 14,987. There is also a larger population of Cook Islanders in New Zealand and Australia: in the 2018 New Zealand census, 80,532 people said they were Cook Islanders, or of Cook Islands descent. The last Australian census recorded 28,000 Cook Islanders living in Australia, many with Australian citizenship. With over 168,000 visitors to the islands in 2018, tourism is the country's main industry and leading element of its economy, ahead of offshore banking, pearls, and marine and fruit exports. Etymology. The Cook Islands comprise 15 islands that have had individual names in indigenous languages including Cook Islands Māori and Pukapukan throughout the time they have been inhabited. The first name given by Europeans was "Gente Hermosa" (beautiful people) by Spanish explorers to Rakahanga in 1606. The islands as a whole are named after the English captain and explorer James Cook, who visited during the 1770s and named Manuae "Hervey Island" after Augustus Hervey, 3rd Earl of Bristol. The southern island group became known as the "Hervey Islands" after this. In the 1820s, Russian Admiral Adam Johann von Krusenstern referred to the southern islands as the "Cook Islands" in his "Atlas de l'Ocean Pacifique". The entire territory (including the northern island group) was not known as the "Cook Islands" until after its annexation by New Zealand in the early 20th century. In 1901, the New Zealand parliament passed the "Cook and other Islands Government Act", demonstrating that the name "Cook Islands" only referred to some of the islands. This situation had changed by the passage of the "Cook Islands Act 1915", which defined the Cooks' area and included all presently included islands. The islands' official name in Cook Islands Māori is "Kūki 'Āirani", a transliteration of the English name. History. The Cook Islands were first settled around AD 1000 by Polynesian people who are thought to have migrated from Tahiti, an island to the northeast of the main island of Rarotonga. 
The first European contact with the islands took place in 1595 when the Spanish navigator Álvaro de Mendaña de Neira sighted the island of Pukapuka, which he named "San Bernardo" (Saint Bernard). Pedro Fernandes de Queirós, a Portuguese captain at the service of the Spanish Crown, made the first European landing in the islands when he set foot on Rakahanga in 1606, calling the island "Gente Hermosa" (Beautiful People). British explorer and naval officer Captain James Cook arrived in 1773 and again in 1777, giving the island of Manuae the name "Hervey Island". The "Hervey Islands" later came to be applied to the entire southern group. The name "Cook Islands", in honour of Cook, first appeared on a Russian naval chart published by Adam Johann von Krusenstern in the 1820s. In 1813 John Williams, a missionary on the colonial brig "Endeavour" (not the same ship as Cook's) made the first recorded European sighting of Rarotonga. The first recorded landing on Rarotonga by Europeans was in 1814 by the "Cumberland"; trouble broke out between the sailors and the Islanders and many were killed on both sides. The islands saw no more Europeans until English missionaries arrived in 1821. Christianity quickly took hold in the culture and many islanders are Christians today. The islands were a popular stop in the 19th century for whaling ships from the United States, Britain and Australia. They visited, from at least 1826, to obtain water, food, and firewood. Their favourite islands were Rarotonga, Aitutaki, Mangaia and Penrhyn. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. On 6 September 1900, the islanders' leaders presented a petition asking that the islands (including Niue "if possible") should be annexed as British territory. On 8 and 9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people. A British Proclamation was issued, stating that the cessions were accepted and the islands declared parts of Her Britannic Majesty's dominions. However, it did not include Aitutaki. Even though the inhabitants regarded themselves as British subjects, the Crown's title was unclear until the island was formally annexed by that Proclamation. In 1901 the islands were included within the boundaries of the Colony of New Zealand by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time. The Cook Islands responded to the call for service when World War I began, immediately sending five contingents, close to 500 men, to the war. The island's young men volunteered at the outbreak of the war to reinforce the Māori Contingents and the Australian and New Zealand Mounted Rifles. A Patriotic Fund was set up very quickly, raising funds to support the war effort. The Cook Islanders were trained at Narrow Neck Camp in Devonport, and the first recruits departed on 13 October 1915 on the SS "Te Anau". The ship arrived in Egypt just as the New Zealand units were about to be transferred to the Western Front. In September 1916, the Pioneer Battalion, a combination of Cook Islanders, Māori and Pakeha soldiers, saw heavy action in the Allied attack on Flers, the first battle of the Somme. 
Three Cook Islanders from this first contingent died from enemy action and at least ten died of disease as they struggled to adapt to the conditions in Europe. The 2nd and 3rd Cook Island Contingents were part of the Sinai-Palestine campaign, first in a logistical role for the Australian and New Zealand Mounted Rifles at their Moascar base and later in ammunition supply for the Royal Artillery. After the war, the men returned to New Zealand amid the outbreak of the influenza epidemic; this, along with other European diseases, meant that a large number did not survive, dying in New Zealand or after their return home over the coming years. When the British Nationality and New Zealand Citizenship Act 1948 came into effect on 1 January 1949, Cook Islanders who were British subjects automatically gained New Zealand citizenship. The islands remained a New Zealand dependent territory until the New Zealand Government decided to grant them self-governing status. On 4 August 1965, a constitution was promulgated. The first Monday in August is celebrated each year as Constitution Day. Albert Henry of the Cook Islands Party was elected as the first Premier and was knighted by Queen Elizabeth II. Henry led the nation until 1978, when he was accused of vote-rigging and resigned. He was stripped of his knighthood in 1979. He was succeeded by Tom Davis of the Democratic Party, who held that position until March 1983. On 13 July 2017, the Cook Islands established Marae Moana, making it the world's largest protected area by size. In March 2019, it was reported that the Cook Islands had plans to change its name and remove the reference to Captain James Cook in favour of "a title that reflects its 'Polynesian nature'". It was later reported in May 2019 that the proposed name change had been poorly received by the Cook Islands diaspora. As a compromise, it was decided that the English name of the islands would not be altered, but that a new Cook Islands Māori name would be adopted to replace the current name, a transliteration from English. Discussions over the name continued in 2020. Geography. The Cook Islands are in the South Pacific Ocean, north-east of New Zealand, between American Samoa and French Polynesia. There are 15 major islands spread over of ocean, divided into two distinct groups: the Southern Cook Islands, and the Northern Cook Islands of coral atolls. The islands were formed by volcanic activity; the northern group is older and consists of six atolls, which are sunken volcanoes topped by coral growth. The climate is moderate to tropical. The Cook Islands consist of 15 islands and two reefs. From December to March, the Cook Islands are in the path of tropical cyclones, the most notable of which were the cyclones Martin and Percy. Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests. Note: The table is ordered from north to south. Population figures from the 2021 census. Biodiversity. The national flower of the Cook Islands is the "tiare māori" or "tiale māoli" (Penrhyn, Nassau, Pukapuka). The Cook Islands have a large non-native population of ship rat and "kiore toka" (Polynesian rat). The rats have dramatically reduced the bird population on the islands. In April 2007, 27 Kuhl's lorikeets were re-introduced to Atiu from Rimatara. Fossil and oral traditions indicate that the species was formerly on at least five islands of the southern group.
Excessive exploitation for its red feathers is the most likely reason for the species' extinction in the Cook Islands. The islands' surrounding waters are the home of the peppermint angelfish. Although they are common, the difficulty of harvesting them makes them one of the most expensive marine aquarium fish, with a price of US$30,000. Politics and foreign relations. The Cook Islands are a representative democracy with a parliamentary system in an associated state relationship with New Zealand. Executive power is exercised by the government, with the Prime Minister as head of government. Legislative power is vested in both the government and the Parliament of the Cook Islands. While the country is de jure unicameral, there are two legislative bodies, with the House of Ariki acting as a "de facto" upper house. There is a multi-party system. The judiciary is independent of the executive and the legislature. The head of state is the monarch of New Zealand, who is represented in the Cook Islands by the King's Representative. The islands are self-governing in "free association" with New Zealand. Under the Cook Islands constitution, New Zealand cannot pass laws for the Cook Islands. Rarotonga has its own foreign service and diplomatic network. Cook Islands nationals have the right to become citizens of New Zealand and can receive New Zealand government services when in New Zealand, but the reverse is not true; New Zealand citizens are not Cook Islands nationals. Despite this, the Cook Islands had diplomatic relations in its own name with 52 other countries. The Cook Islands is not a United Nations member state, but, along with Niue, has had its "full treaty-making capacity" recognised by the United Nations Secretariat, and is a full member of the World Health Organization (WHO), UNESCO, the International Civil Aviation Organization, the International Maritime Organization and the UN Food and Agriculture Organization, all UN specialized agencies, and is an associate member of the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) and a member of the Assembly of States Parties of the International Criminal Court. On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing any American claims to Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990 the Cook Islands and France signed a treaty that delimited the boundary between the Cook Islands and French Polynesia. In late August 2012, United States Secretary of State Hillary Clinton visited the islands. In 2017, the Cook Islands signed the UN Treaty on the Prohibition of Nuclear Weapons. On 25 September 2023, the Cook Islands and the United States of America established diplomatic relations under the leadership of Prime Minister Mark Brown at a ceremony in Washington, DC. In 2024, the Cook Islands' efforts to join the Commonwealth of Nations as a full member were "ongoing", but the government was unable to secure an invitation to attend the 2024 Commonwealth Heads of Government Meeting in Samoa. In 2025, Cook Islands prime minister Mark Brown stated that the UN confirmed that the Cook Islands did not meet the requirements for UN membership, and foreign minister Tingika Elikana stated that any future decision to join the UN would require a referendum and a reevaluation of the relationship with New Zealand.
Brown also confirmed that at the Commonwealth of Nations, the Cook Islands is considered to be represented by the Realm of New Zealand, meaning that it would not have its own separate representation unless it becomes fully sovereign. Additionally, in response to a push to introduce Cook Islander passports and agreements made with China, a spokesperson for New Zealand foreign minister Winston Peters stated, "Unlike Samoa, Tonga and Tuvalu, the Cook Islands is not a fully independent and sovereign state", unless its status and relationship with New Zealand are changed by referendum. Defence and police. The Cook Islands Police Service polices its own waters, and shares responsibility for defence with New Zealand, in consultation with the Cook Islands Government and at its request. The total offshore EEZ is about 2 million square kilometres. Vessels of the Royal New Zealand Navy can be employed for this task. These naval forces may also be supported by Royal New Zealand Air Force aircraft, including P-8 Poseidons. However, these forces are limited in size and in 2023 were described by the Government as "not in a fit state" to respond to regional challenges. New Zealand's subsequently announced "Defence Policy and Strategy Statement" noted that shaping the security environment, "focusing in particular on supporting security in and for the Pacific", would receive enhanced attention. The Cook Islands Police Service is the police force of the Cook Islands. The Maritime Wing of the Police Service exercises sovereignty over the nation's EEZ. Vessels have included a patrol boat commissioned in May 1989, which received a refit in 2015 but was withdrawn from service and replaced by a larger and more capable vessel that entered service in 2022. The Cook Islands has its own customs regulations. Human rights. Formerly, male homosexuality was "de jure" illegal in the Cook Islands and was punishable by a maximum term of seven years' imprisonment; however, the law was never enforced. In 2023, legislation was passed which legalised homosexuality. Administrative subdivisions. There are island councils, each headed by a mayor, on all of the inhabited outer islands (Outer Islands Local Government Act 1987 with amendments up to 2004, and Palmerston Island Local Government Act 1993) except Nassau, which is governed by Pukapuka (Suwarrow, with only one caretaker living on the island, also governed by Pukapuka, is not counted with the inhabited islands in this context). Three vaka councils headed by mayors were established on Rarotonga by the Rarotonga Local Government Act 1997, then abolished in February 2008, despite much controversy. On the lowest level, there are village committees. Nassau, which is governed by Pukapuka, has an island committee (Nassau Island Committee), which advises the Pukapuka Island Council on matters concerning its own island. Demographics. Births and deaths. Religion. In the Cook Islands, the Church is separate from the state, and most of the population is Christian. Various Protestant groups account for 62.8% of the believers, the most followed denomination being the Cook Islands Christian Church with 49.1%. Other Protestant Christian groups include Seventh-day Adventists 7.9%, Assemblies of God 3.7% and the Apostolic Church 2.1%. The main non-Protestant group is the Catholic Church, with 17% of the population. The Church of Jesus Christ of Latter-day Saints makes up 4.4%. "None" or "unspecified" account for 15.6% of the population. Economy. The economy is strongly affected by geography.
It is isolated from foreign markets, and has some inadequate infrastructure; it lacks major natural resources except for significant seabed critical minerals, has limited manufacturing and suffers moderately from natural disasters. Tourism provides the economic base that makes up approximately 67.5% of GDP. Additionally, the economy is supported by foreign aid, largely from New Zealand. China has also contributed foreign aid, which has resulted in, among other projects, the Police Headquarters building. The Cook Islands is expanding its agriculture, mining and fishing sectors, with varying success. Since approximately 1989, the Cook Islands have become a location specialising in so-called asset protection trusts, by which investors shelter assets from the reach of creditors and legal authorities. According to "The New York Times", the Cooks have "laws devised to protect foreigners' assets from legal claims in their home countries", which were apparently crafted specifically to thwart the long arm of American justice; creditors must travel to the Cook Islands and argue their cases under Cooks law, often at prohibitive expense. Unlike other foreign jurisdictions such as the British Virgin Islands, the Cayman Islands and Switzerland, the Cooks "generally disregard foreign court orders" and do not require that bank accounts, real estate, or other assets protected from scrutiny (it is illegal to disclose names or any information about Cooks trusts) be physically located within the archipelago. Taxes on trusts and trust employees account for some 8% of the Cook Islands economy, behind tourism but ahead of fishing. In recent years, the Cook Islands has gained a reputation as a debtor paradise, through the enactment of legislation that permits debtors to shield their property from the claims of creditors. Since 2023 the Executive Director of Cook Islands Bank has been Jennifer Henry (nee Matheson). In 2019, the Cook Islands passed the Sea Bed Minerals (SBM) Act to manage the seabed minerals located in the Exclusive Economic Zone surrounding the islands. In 2022, the SBMA granted three exploration licenses for polymetallic nodules to three private companies, including one co-owned by the government. In 2025, the Cook Islands announced that it had signed a seabed mineral exploration agreement with China. Infrastructure. There are eleven airports in the Cook Islands, including one with a paved runway, Rarotonga International Airport, served by five passenger airlines. Culture. Language. The languages of the Cook Islands include English, Cook Islands Māori (or "Rarotongan"), and Pukapukan. Dialects of Cook Islands Māori include Penrhyn; Rakahanga-Manihiki; the Ngaputoru dialect of Atiu, Mitiaro, and Mauke; the Aitutaki dialect; and the Mangaian dialect. Cook Islands Māori and its dialectic variants are closely related to both Tahitian and to New Zealand Māori. Pukapukan is considered closely related to the Samoan language. English and Cook Islands Māori are official languages of the Cook Islands, per the Te Reo Maori Act. The legal definition of Cook Islands Māori includes Pukapukan. Art. Traditional arts. Woodcarving is a common art form in the Cook Islands. The proximity of islands in the southern group helped produce a homogeneous style of carving but that had special developments in each island. Rarotonga is known for its fisherman's gods and staff-gods, Atiu for its wooden seats, Mitiaro, Mauke and Atiu for mace and slab gods and Mangaia for its ceremonial adzes. 
Most of the original wood carvings were either spirited away by early European collectors or were burned in large numbers by missionaries. Today, carving is no longer the major art form with the same spiritual and cultural emphasis given to it by the Maori in New Zealand. However, there are continual efforts to interest young people in their heritage and some good work is being turned out under the guidance of older carvers. Atiu, in particular, has a strong tradition of crafts both in carving and local fibre arts such as tapa. Mangaia is the source of many fine adzes carved in a distinctive, idiosyncratic style with the so-called double-k design. Mangaia also produces food pounders carved from the heavy calcite found in its extensive limestone caves. The outer islands produce traditional weaving of mats, basketware and hats. Particularly fine examples of rito hats are worn by women to church. They are made from the uncurled immature fibre of the coconut palm and are of very high quality. The Polynesian equivalent of Panama hats, they are highly valued and are keenly sought by Polynesian visitors from Tahiti. Often, they are decorated with hatbands made of minuscule pupu shells that are painted and stitched on by hand. Although pupu are found on other islands, the collection and use of them in decorative work has become a speciality of Mangaia. The weaving of rito is a speciality of the northern islands, Manihiki, Rakahanga and Penrhyn. A major art form in the Cook Islands is tivaevae. This is, in essence, the art of handmade Island scenery patchwork quilts. Introduced by the wives of missionaries in the 19th century, the craft grew into a communal activity, which is probably one of the main reasons for its popularity. Contemporary art. The Cook Islands has produced internationally recognised contemporary artists, especially in the main island of Rarotonga. Artists include painter (and photographer) Mahiriki Tangaroa, sculptors Eruera (Ted) Nia (originally a film maker) and master carver Mike Tavioni, painter (and Polynesian tattoo enthusiast) Upoko'ina Ian George, Aitutakian-born painter Tim Manavaroa Buchanan, Loretta Reynolds, Judith Kunzlé, Joan Gragg, Kay George (who is also known for her fabric designs), Apii Rongo, Varu Samuel, and multi-media, installation and community-project artist Ani O'Neill, all of whom currently live on the main island of Rarotonga. Atiuan-based Andrea Eimke is an artist who works in the medium of tapa and other textiles, and also co-authored the book 'Tivaivai – The Social Fabric of the Cook Islands' with British academic Susanne Kuechler. Many of these artists have studied at university art schools in New Zealand and continue to enjoy close links with the New Zealand art scene. New Zealand-based Cook Islander artists include Michel Tuffery, print-maker David Teata, Richard Shortland Cooper, Nina Oberg Humphries, Sylvia Marsters and Jim Vivieaere. Bergman Gallery (formerly BCA Gallery) is the main commercial dealer gallery in the Cook Islands, situated in the main island of Rarotonga, and represents Cook Islands artists such as Sylvia Marsters, Mahiriki Tangaroa, Nina Oberg Humphries, Joan Gragg and Tungane Broadbent. The Art Studio Gallery in Arorangi, formerly run by Ian George and Kay George, is now Beluga Cafe. There is also Gallery Tavioni and Vananga, run by Mike Tavioni, and the Cook Islands National Museum also exhibits art. Music.
Music in the Cook Islands is varied, with Christian songs being quite popular, but traditional dancing and songs in Cook Islands Maori and Pukapukan remain popular. Sport. The Cook Islands have competed at the Summer Olympic Games since 1988, without winning a medal. Rugby league is the most popular sport and the national sport of the country. Newspapers. Newspapers in the Cook Islands are usually published in English with some articles in Cook Islands Māori. The "Cook Islands News" has been published since 1945, although it was owned by the government until 1989. Former newspapers include Te Akatauira, which was published from 1978 to 1980.
7068
1437349
https://en.wikipedia.org/wiki?curid=7068
History of the Cook Islands
The Cook Islands are named after Captain James Cook, who visited the islands in 1773 and 1777, although Spanish navigator Alvaro de Mendaña was the first European to reach the islands in 1595. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. By 1900, the islands were annexed as British territory. In 1901, the islands were included within the boundaries of the Colony of New Zealand. The Cook Islands contain 15 islands in the group spread over a vast area in the South Pacific. The majority of islands are low coral atolls in the Northern Group, with Rarotonga, a volcanic island in the Southern Group, as the main administration and government centre. The main Cook Islands language is Rarotongan Māori. There are some variations in dialect in the 'outer' islands. Early settlers of the Cooks. It is thought that the Cook Islands may have been settled between the years 900-1200 CE. Early settlements suggest that the settlers migrated from Tahiti, to the northeast of the Cooks. The Cook Islands continue to hold important connections with Tahiti, and this is generally found in the two countries' culture, tradition and language. It is also thought that the early settlers were true Tahitians, who landed in Rarotonga (Takitumu district). There are notable historic epics of great warriors who travel between the two nations for a wide variety of reasons. The purpose of these missions is still unclear but recent research indicates that large to small groups often fled their island due to local wars being forced upon them. For each group to travel and to survive, they would normally rely on a warrior to lead them. Outstanding warriors are still mentioned in the countries' traditions and stories. These arrivals are evidenced by an older road in Toi, the "Ara Metua", which runs around most of Rarotonga, and is believed to be at least 1200 years old. This 29 km long, paved road is a considerable achievement of ancient engineering, possibly unsurpassed elsewhere in Polynesia. The islands of Manihiki and Rakahanga trace their origins to the arrival of Toa Nui, a warrior from the Puaikura tribe of Rarotonga, and Tepaeru, a high-ranking woman from the Takitumu or Te-Au-O-Tonga tribes of Rarotonga. Tongareva was settled by an ancestor from Rakahanga called Mahuta and an Aitutaki Ariki & Chief Taruia, and possibly a group from Tahiti. The remainder of the northern islands, Pukapuka (Te Ulu O Te Watu) was probably settled by expeditions from Samoa. Early European contact. Spanish ships visited the islands in the 16th century; the first written record of contact between Europeans and the native inhabitants of the Cook Islands came with the sighting of Pukapuka by Spanish sailor Álvaro de Mendaña in 1595, who called it "San Bernardo" (Saint Bernard). Portuguese-Spaniard Pedro Fernández de Quirós made the first recorded European landing in the islands when he set foot on Rakahanga in 1606, calling it "Gente Hermosa" (Beautiful People). British navigator Captain James Cook arrived in 1773 and 1777. Cook named the islands the 'Hervey Islands' to honour a British Lord of the Admiralty. Half a century later, the Russian Baltic German Admiral Adam Johann von Krusenstern published the "Atlas de l'Ocean Pacifique", in which he renamed the islands the Cook Islands to honour Cook. Captain Cook navigated and mapped much of the group. 
Surprisingly, Cook never sighted the largest island, Rarotonga, and the only island that he personally set foot on was the tiny, uninhabited Palmerston Atoll. The first recorded landing by Europeans on Rarotonga was in 1814 by the "Cumberland"; trouble broke out between the sailors and the Islanders and many were killed on both sides. The islands saw no more Europeans until missionaries arrived from England in 1821. Christianity quickly took hold in the culture and remains the predominant religion today. In 1823, Captain John Dibbs of the colonial barque "Endeavour" made the first official sighting of the island Rarotonga. The "Endeavour" was transporting Rev. John Williams on a missionary voyage to the islands. Brutal Peruvian slave traders, known as blackbirders, took a terrible toll on the islands of the Northern Group in 1862 and 1863. At first, the traders may have genuinely operated as labour recruiters, but they quickly turned to subterfuge and outright kidnapping to round up their human cargo. The Cook Islands was not the only island group visited by the traders, but Penrhyn Atoll was their first port of call and it has been estimated that three-quarters of the population was taken to Callao, Peru. Rakahanga and Pukapuka also suffered tremendous losses. British protectorate. The Cook Islands became a British protectorate in 1888, due largely to community fears that France might occupy the territory as it had Tahiti. On 6 September 1900, the leading islanders presented a petition asking that the islands (including Niue "if possible") should be annexed as British territory. On 8–9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people, and a British proclamation issued at the same time accepted the cessions, the islands being declared parts of Her Britannic Majesty's dominions. These instruments did not include Aitutaki. It appears that, though the inhabitants regarded themselves as British subjects, the Crown's title was uncertain, and the island was formally annexed by Proclamation dated 9 October 1900. The islands were included within the boundaries of the Colony of New Zealand in 1901 by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time. Recent history. In 1962 New Zealand asked the Cook Islands legislature to vote on four options for the future: independence, self-government, integration into New Zealand, or integration into a larger Polynesian federation. The legislature decided upon self-government. Following elections in 1965, the Cook Islands transitioned to become a self-governing territory in free association with New Zealand. This arrangement left the Cook Islands politically independent, but officially remaining under New Zealand sovereignty. This political transition was approved by the United Nations. Despite this status change, the islands remained financially dependent on New Zealand, and New Zealand believed that a failure of the free association agreement would lead to integration rather than full independence. New Zealand is tasked with overseeing the country's foreign relations and defense. The Cook Islands, Niue, and New Zealand (with its territories: Tokelau and the Ross Dependency) make up the Realm of New Zealand. 
After achieving autonomy in 1965, the Cook Islands elected Albert Henry of the Cook Islands Party as their first Prime Minister. He led the country until 1978 when he was accused of vote-rigging. He was succeeded by Tom Davis of the Democratic Party. On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing the US claim to the islands of Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990, the Cook Islands signed a treaty with France which delimited the maritime boundary between the Cook Islands and French Polynesia. On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. "Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers," chief Makea Vakatini Joseph Ariki explained. The "Cook Islands Herald" suggested that the "ariki" were attempting thereby to regain some of their traditional prestige or "mana". Prime Minister Jim Marurai described the take-over move as "ill-founded and nonsensical". By June 23, the situation appeared to have normalised, with members of the House of Ariki accepting to return to their regular duties. Timeline. 900 - first People arrive to the islands 1595 — Spaniard Álvaro de Mendaña de Neira is the first European to sight the islands. 1606 — Portuguese-Spaniard Pedro Fernández de Quirós makes the first recorded European landing in the islands when he sets foot on Rakahanga. 1773 — Captain James Cook explores the islands and names them the Hervey Islands. Fifty years later they are renamed in his honour by Russian Admiral Adam Johann von Krusenstern. 1821 — English and Tahitian missionaries land in Aitutaki, become the first non-Polynesian settlers. 1823 — English missionary John Williams lands in Rarotonga, converting Makea Pori Ariki to Christianity. 1858 — The Cook Islands become united as a state, the Kingdom of Rarotonga. 1862 — Peruvian slave traders take a terrible toll on the islands of Penrhyn, Rakahanga and Pukapuka in 1862 and 1863. 1888 — Cook Islands are proclaimed a British protectorate and a single federal parliament is established. 1900 — The Cook Islands are ceded to the United Kingdom as British territory, except for Aitutaki which was annexed by the United Kingdom at the same time. 1901 — The boundaries of the Colony of New Zealand are extended by the United Kingdom to include the Cook Islands. 1924 — The All Black "Invincibles" stop in Rarotonga on their way to the United Kingdom and play a friendly match against a scratch Rarotongan team. 1946 — Legislative Council is established. For the first time since 1912, the territory has direct representation. 1957 — Legislative Council is reorganized as the Legislative Assembly. 1965 — The Cook Islands become a self-governing territory in free association with New Zealand. Albert Henry, leader of the Cook Islands Party, is elected as the territory's first prime minister. 1974 — Albert Henry is knighted by Queen Elizabeth II 1979 — Sir Albert Henry is found guilty of electoral fraud and stripped of his premiership and his knighthood. Tom Davis becomes Premier. 1980 — Cook Islands – United States Maritime Boundary Treaty establishes the Cook Islands – American Samoa boundary 1981 — Constitution is amended. 
Legislative Assembly is renamed Parliament, which grows from 22 to 24 seats, and the parliamentary term is extended from four to five years. Tom Davis is knighted. 1984 — The country's first coalition government, between Sir Thomas and Geoffrey Henry, is signed in the lead up to hosting regional Mini Games in 1985. Shifting coalitions saw ten years of political instability. At one stage, all but two MPs were in government. 1985 — Rarotonga Treaty is opened for signing in the Cook Islands, creating a nuclear-free zone in the South Pacific. 1986 — In January 1986, following the rift between New Zealand and the US in respect of the ANZUS security arrangements Prime Minister Tom Davis declared the Cook Islands a neutral country, because he considered that New Zealand (which has control over the islands' defence and foreign policy) was no longer in a position to defend the islands. The proclamation of neutrality meant that the Cook Islands would not enter into a military relationship with any foreign power, and, in particular, would prohibit visits by US warships. Visits by US naval vessels were allowed to resume by Henry's Government. 1990 — Cook Islands – France Maritime Delimitation Agreement establishes the Cook Islands–French Polynesia boundary 1991 — The Cook Islands signed a treaty of friendship and co-operation with France, covering economic development, trade and surveillance of the islands' EEZ. The establishment of closer relations with France was widely regarded as an expression of the Cook Islands' Government's dissatisfaction with existing arrangements with New Zealand which was no longer in a position to defend the Cook Islands. 1995 — The French Government resumed its programme of nuclear-weapons testing at Mururoa Atoll in September 1995 upsetting the Cook Islands. New Prime Minister Geoffrey Henry was fiercely critical of the decision and dispatched a "vaka" (traditional voyaging canoe) with a crew of Cook Islands' traditional warriors to protest near the test site. The tests were concluded in January 1996 and a moratorium was placed on future testing by the French government. 1997 — Full diplomatic relations established with the People's Republic of China. 1997 — In November, Cyclone Martin in Manihiki kills at least six people; 80% of buildings are damaged and the black pearl industry suffered severe losses. 1999 — A second era of political instability begins, starting with five different coalitions in less than nine months, and at least as many since then. 2000 — Full diplomatic relations concluded with France. 2002 — Prime Minister Terepai Maoate is ousted from government following second vote of no-confidence in his leadership. 2004 — Prime Minister Robert Woonton visits China; Chinese Premier Wen Jiabao grants $16 million in development aid. 2006 — Parliamentary elections held. The Democratic Party keeps majority of seats in parliament, but is unable to command a majority for confidence, forcing a coalition with breakaway MPs who left, then rejoined the "Demos". 2008 — Pacific Island nations imposed a series of measures aimed at halting overfishing.
7069
11015197
https://en.wikipedia.org/wiki?curid=7069
Geography of the Cook Islands
The Cook Islands can be divided into two groups: the Southern Cook Islands and the Northern Cook Islands. The country is located in Oceania, in the South Pacific Ocean, about halfway between Hawaii and New Zealand. From December through to March, the Cook Islands are in the path of tropical cyclones, the most notable of which were cyclones Martin (1997) and Percy (2005). Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests. Islands and reefs. Table. Note: The table is ordered from north to south. Population figures from the 2016 census.
Area:
* Total:
* Land: 236 km2
* Water: 0 km2
* About 1.3 times the size of Washington, DC
Maritime claims:
* Territorial sea:
* Continental shelf: or to the edge of the continental margin
* Exclusive economic zone:
Climate: Tropical; moderated by trade winds; a dry season from April to November and a more humid season from December to March
Terrain: Low coral atolls in north; volcanic, hilly islands in south
Elevation extremes:
* Lowest point: Pacific Ocean 0 m
* Highest point: Te Manga
Natural resources: coconuts, fresh water
Land use:
* Arable land: 4.17%
* Permanent crops: 4.17%
* Other: 91.67% (2012 est.)
Natural hazards: Typhoons (November to March); tsunamis (year-round)
Time zone: UTC -10 (GMT -10)
Rarotonga
Environment - international agreements:
* Party to: Biodiversity, Climate Change-Kyoto Protocol, Desertification, Hazardous Wastes, Law of the Sea, Ozone Layer Protection
7070
49447023
https://en.wikipedia.org/wiki?curid=7070
Demographics of the Cook Islands
Demographic features of the population of the Cook Islands include population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. Population. A census is carried out every five years in the Cook Islands. The last census was carried out in 2021 and the next census will be carried out in 2026. Ethnic groups. The indigenous Polynesian people of the Cook Islands are known as Cook Islands Māori. These include speakers of Cook Islands Māori language, closely related to Tahitian and New Zealand Māori, who form the majority of the population and inhabit the southern islands including Rarotonga; and also the people of Pukapuka, who speak a language more closely related to Samoan. Cook Islanders of non-indigenous descent include other Pacific Island peoples, Papa'a (Europeans), and those of Asian descent. Religion. The Cook Islands are majority-Protestant, with almost half the population being members of the Reformed Cook Islands Christian Church. Other Protestant denominations include Seventh-day Adventists, Assemblies of God and the Apostolic Church (the latter two being Pentecostal denominations). The largest non-Protestant denomination are Roman Catholics, followed by the Church of Jesus Christ of Latter-day Saints. Non-Christian faiths including Hinduism, Buddhism and Islam have small followings primarily by non-indigenous inhabitants.
7071
28564
https://en.wikipedia.org/wiki?curid=7071
Politics of the Cook Islands
The politics of the Cook Islands takes place in a framework of a parliamentary representative democracy within a constitutional monarchy. The monarch of New Zealand, represented in the Cook Islands by the King or Queen's Representative, is the head of state; the prime minister is the head of government of a multi-party system. The nation is self-governing and fully responsible for its internal and foreign affairs; it has run its own foreign and defence policy since 2001. Executive power is exercised by the government, while legislative power is vested in both the government and the islands' parliament. The judiciary is independent of the executive and the legislatures. Constitution. The Constitution of the Cook Islands took effect on 4 August 1965, when the Cook Islands became a self-governing state in free association with New Zealand. The anniversary of these events in 1965 is commemorated annually on Constitution Day, with week long activities known as "Te Maeva Nui Celebrations" locally. Executive. Ten years of rule by the Cook Islands Party (CIP) came to an end 18 November 1999 with the resignation of Prime Minister Joe Williams. Williams had led a minority government since October 1999 when the New Alliance Party (NAP) left the government coalition and joined the main opposition Democratic Party (DAP). On 18 November 1999, DAP leader Dr. Terepai Maoate was sworn in as prime minister. He was succeeded by his co-partisan Robert Woonton. When Dr Woonton lost his seat in the 2004 elections, Jim Marurai took over. In the 2010 elections, the CIP regained power and Henry Puna was sworn in as prime minister on 30 November 2010. His Deputy, Mark Brown, succeeded Puna in 2020, when Puna was elected Secretary General of the Pacific Islands Forum. Prime Minister Mark Brown was reelected in 2022 with an increased majority Legislature. The Parliament of the Cook Islands has 24 members, elected for a five-year term in single-seat constituencies. There is also a House of Ariki, composed of chiefs, which has a purely advisory role. The Koutu Nui is a similar organization consisting of sub-chiefs. It was established by an amendment in 1972 of the 1966 House of Ariki Act. On 13 June 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. "Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers," chief Makea Vakatini Joseph Ariki explained. The "Cook Islands Herald" suggested that the "ariki" were attempting thereby to regain some of their traditional prestige or "mana". Prime Minister Jim Marurai described the take-over move as "ill-founded and nonsensical". By 23 June, the situation appeared to have normalised, with members of the House of Ariki accepting to return to their regular duties. Judiciary. The judiciary is established by part IV of the Constitution, and consists of the High Court of the Cook Islands and the Cook Islands Court of Appeal. The Judicial Committee of the Privy Council serves as the final court of appeal. Judges are appointed by the King's Representative on the advice of the Executive Council as given by the Chief Justice and the Minister of Justice. Non-resident Judges are appointed for a three-year term; other Judges are appointed for life. 
Judges may be removed from office by the King's Representative on the recommendation of an investigative tribunal and only for inability to perform their office, or for misbehaviour. With regard to the legal profession, Iaveta Taunga o Te Tini Short was the first Cook Islander to establish a law practice in 1968. He would later become a Cabinet Minister (1978) and High Commissioner for the Cook Islands (1985). Recent political history. The 1999 election produced a hung Parliament. Cook Islands Party leader Geoffrey Henry remained prime minister, but was replaced after a month by Joe Williams following a coalition realignment. A further realignment three months later saw Williams replaced by Democratic Party leader Terepai Maoate. A third realignment in 2002 saw Maoate replaced mid-term by his deputy Robert Woonton, who governed with the backing of the CIP. The Democratic Party won a majority in the 2004 election, but Woonton lost his seat and was replaced by Jim Marurai. In 2005 Marurai left the Democrats due to internal disputes, founding his own Cook Islands First Party. He continued to govern with the support of the CIP, but in 2005 returned to the Democrats. The loss of several by-elections forced a snap election in 2006, which produced a solid majority for the Democrats and saw Marurai continue as prime minister. In December 2009, Marurai sacked his Deputy Prime Minister, Terepai Maoate, sparking a mass resignation of Democratic Party cabinet members. He and new Deputy Prime Minister Robert Wigmore were subsequently expelled from the Democratic Party. Marurai appointed three junior members of the Democratic Party to Cabinet, but on 31 December 2009 the party withdrew its support.
7072
28564
https://en.wikipedia.org/wiki?curid=7072
Economy of the Cook Islands
The economy of the Cook Islands is based mainly on tourism, with minor exports made up of tropical and citrus fruit. Manufacturing activities are limited to fruit-processing, clothing and handicrafts. As in many other South Pacific nations, the Cook Islands' economy is hindered by the country's isolation from foreign markets, lack of natural resources aside from fish, periodic devastation from natural disasters, and inadequate infrastructure. Trade deficits are made up for by remittances from emigrants and by foreign aid, overwhelmingly from New Zealand. Efforts to exploit tourism potential, encourage offshore banking, and expand the mining and fishing industries have been partially successful in stimulating investment and growth. Banking and finance. Banks in the Cook Islands are regulated under the "Banking Act 2011". Banks must be licensed and are supervised by the Cook Islands Financial Supervisory Commission. The Cook Islands developed an offshore financial services industry in the early 1980s. Allegations that New Zealand-based companies were using it as a tax haven led to the Winebox Inquiry in New Zealand in the 1990s, and in 2000 it was listed as a tax haven by the OECD. In 2002 it was delisted after it agreed to fiscal transparency and to exchange tax information. Allegations of being a tax haven re-emerged in 2013 following the International Consortium of Investigative Journalists Offshore Leaks. Trusts incorporated in the Cook Islands are used to provide anonymity and asset-protection. The Cook Islands also featured in the Panama Papers, Paradise Papers, and Pandora Papers financial leaks. The Bank of the Cook Islands was created in 2001 by merging the Cook Islands Development Bank and the Cook Islands Savings Bank. Economist Vaine Nooana-Arioka has been executive director of the Bank of the Cook Islands since 2008. Mining. In 2019, the Cook Islands passed the Sea Bed Minerals (SBM) Act. In 2022, the Cook Islands Seabed Minerals Authority (SBMA) granted three exploration licenses for polymetallic nodules within its Exclusive Economic Zone to private companies Cobalt (CIC) Limited, Moana Minerals Limited and Cook Islands Investment Company (CIIC) Seabed Resources Limited, which is co-owned by the Cook Islands Government. The licenses expire in 2027. In 2025, the Cook Islands announced that it had signed a five-year agreement with China focused on exploration and research into seabed minerals. Telecommunications. Telecom Cook Islands Ltd (TCI) is the sole provider of telecommunications in the Cook Islands. It is a private company owned by Spark New Zealand Ltd (60%) and the Cook Islands Government (40%). In operation since July 1991, TCI provides local, national and international telecommunications as well as internet access on all islands except Suwarrow. Communications to Suwarrow are via HF radio. Statistics.
GDP: purchasing power parity - $183.2 million (2005 est.)
GDP - composition by sector:
* Agriculture: 78.9%
* Industry: 9.6%
* Services: 75.3% (2000)
Population below poverty line: 28.4% of the population lives below the national poverty line (2016 statistics, Asian Development Bank)
Household income or consumption by percentage share:
* Lowest 10%: NA%
* Highest 10%: NA%
4.2% (2024 est.)
Labor force: 6,820 (2001)
Labor force - by occupation: agriculture 29%, industry 15%, services 56% (1995)
Budget:
* Revenues: $70.95 million
* Expenditures: $69.05 million; including capital expenditures of $5.744 million (FY00/01 est.)
Industries: fruit processing, tourism, fishing, clothing, handicrafts
Industrial production growth rate: 1% (2002)
Electricity - production: 28 GW·h (2003)
Electricity - production by source:
* Fossil fuel: 100%
* Hydro: 0%
* Nuclear: 0%
* Other: 0% (2001)
Electricity - consumption: 34.46 GW·h (2005 est.)
Electricity - exports: 0 kW·h (2003)
Electricity - imports: 0 kW·h (2003)
Agriculture - products: copra, citrus, pineapples, tomatoes, beans, pawpaws, bananas, yams, taro, coffee, pigs, poultry
Exports: $5.222 million (2005)
Exports - commodities: copra, papayas, fresh and canned citrus fruit, coffee; fish; pearls and pearl shells; clothing
Exports - partners: Australia 34%, Japan 27%, New Zealand 25%, US 8% (2004)
Imports: $81.04 million (2005)
Imports - commodities: foodstuffs, textiles, fuels, timber, capital goods
Imports - partners: New Zealand 61%, Fiji 19%, US 9%, Australia 6%, Japan 2% (2004)
Debt - external: $141 million (1996 est.)
Economic aid - recipient: $13.1 million (1995); note - New Zealand furnishes the greater part
Currency: 1 New Zealand dollar (NZ$) = 100 cents
Exchange rates: New Zealand dollars (NZ$) per US$1 - 1.4203 (2005), 1.9451 (January 2000), 1.8886 (1999), 1.8632 (1998), 1.5083 (1997), 1.4543 (1996), 1.5235 (1995)
Fiscal year: 1 April–31 March
7073
82835
https://en.wikipedia.org/wiki?curid=7073
Telecommunications in the Cook Islands
Like most countries and territories in Oceania, telecommunications in the Cook Islands is limited by its isolation and low population, with only one major television broadcasting station and six radio stations. However, most residents have a main line or mobile phone. Its telecommunications are mainly provided by Telecom Cook Islands, which is currently working with O3b Networks, Ltd. for faster Internet connection. In February 2015 the former owner of Telecom Cook Islands Ltd., Spark New Zealand, sold its 60% interest for approximately NZD 23 million (US$17.3 million) to Teleraro Limited. Telephone. In July 2012, there were about 7,500 main line telephones, which covers about 98% of the country's population. There were approximately 7,800 mobile phones in 2009. Telecom Cook Islands, owned by Spark New Zealand, is the islands' main telephone system and offers international direct dialling, Internet, email, fax, and Telex. The individual islands are connected by a combination of satellite earth stations, microwave systems, and very high frequency and high frequency radiotelephone; within the islands, service is provided by small exchanges connected to subscribers by open wire, cable, and fibre-optic cable. For international communication, they rely on the satellite earth station Intelsat. In 2003, the largest island, Rarotonga, gained a GSM/GPRS mobile data service on GSM 900; by 2013, 3G UMTS 900 with HSPA+ had been introduced, covering 98% of Rarotonga. In March 2017, 4G+ was launched in Rarotonga on LTE700 (B28A) and LTE1800 (B3). Aitutaki had GSM/GPRS mobile data service on GSM 900 from 2006 to 2013, with 3G UMTS 900 and HSPA+ introduced in 2014; in March 2017, 4G+ was also launched in Aitutaki on LTE700 (B28A). Mobile service in the rest of the outer islands (Pa Enua) was established in 2007, with GSM 900 coverage in the Southern Group (Pa Enua Tonga) on Mangaia (the three villages of Oneroa, Ivirua and Tamarua), Atiu, Mauke, Mitiaro and Palmerston, and in the Northern Group (Pa Enua Tokerau) on Nassau, Pukapuka, Rakahanga, Manihiki (the two villages of Tukao and Tauhunu) and Penrhyn (the two villages of Omoka and Tetautua). The Cook Islands uses the country calling code +682. Broadcasting. There are six radio stations in the Cook Islands, with one reaching all islands. There were 14,000 radios. Cook Islands Television broadcasts from Rarotonga, providing a mix of local news and overseas-sourced programs. There were 4,000 television sets. Internet infrastructure and connectivity. History. The internet was first set up in the Cook Islands in 1995 by Casinos of the South Pacific (also the first iGaming license in the country). Donald Wright and his nephew Darren Wright set up a 256K connection in Telecom Cook Islands facilities, connected to Telecom New Zealand. The Cook Islands are one of the birthplaces of the iGaming industry. There were 6,000 Internet users in 2009 and 3,562 Internet hosts as of 2012. The country code top-level domain for the Cook Islands is .ck. In June 2010, Telecom Cook Islands partnered with O3b Networks, Ltd. to provide faster Internet connection to the Cook Islands. On 25 June 2013 the O3b satellite constellation was launched from an Arianespace Soyuz ST-B rocket in French Guiana. The medium Earth orbit satellite orbits at and uses the Ka band. It has a latency of about 100 milliseconds because it is much closer to Earth than standard geostationary satellites, whose latencies can be over 600 milliseconds.
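The latency figures just quoted follow largely from propagation distance. The sketch below assumes nominal altitudes of roughly 8,062 km for the O3b medium-Earth-orbit constellation and 35,786 km for a geostationary satellite (neither figure is given in this article) and estimates the round-trip propagation delay of a simple bent-pipe link, ignoring processing, queuing and terrestrial delays.

# Rough propagation-delay estimate for a bent-pipe satellite link, assuming the
# satellite is directly overhead (slant range = altitude). Altitudes are assumed
# values, not taken from the article.
C_M_PER_S = 299_792_458.0   # speed of light

def round_trip_delay_ms(altitude_km):
    # Four traversals of the space segment: user -> satellite -> gateway and back.
    return 4 * (altitude_km * 1000.0) / C_M_PER_S * 1000.0

print(round(round_trip_delay_ms(8_062)))    # ~108 ms for an O3b-style MEO orbit
print(round(round_trip_delay_ms(35_786)))   # ~477 ms floor for GEO, before other delays

The roughly 100 millisecond figure quoted for O3b and the several-hundred-millisecond latencies typical of geostationary links are therefore consistent with simple geometry.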
Although the initial launch consisted of 4 satellites, as many as 20 may be launched eventually to serve various areas with little or no optical fibre service, the first of which is the Cook Islands. In December 2015, Alcatel-Lucent and Bluesky Pacific Group announced that they would build the Moana Cable system connecting New Zealand to Hawaii with a single fibre pair branching off to the Cook Islands. The Moana Cable was expected to be completed in 2018. Digital transformation. The challenges. As a small island developing state (SIDS), the Cook Islands faces a unique set of challenges in digital transformation. One is that the country is heavily reliant on international support and cooperation to develop and fund its ICT improvement projects. Supplier monopoly, connectivity, physical infrastructure and access. Until 2019, Telecom Cook Islands (TCI) was the sole provider of internet, mobile and fixed telephone communications for the country. Internet was provided via satellite, which was costly to the government and gave an unreliable connection, especially to the outer islands. The passing of the Competition and Regulatory Authority (CRA) Act 2019 and the Telecommunications Act 2019 provided the opportunity for new competitors to join the market. It also provides for subsidising the provision of telecommunications to areas or customer groups which cannot reasonably be served on a commercial basis. Funding. The passing of the CRA and Telecommunications acts paved the way for the Cook Islands Government to establish the state-owned enterprise Avaroa Cable Limited (ACL). This was made possible through funding from the New Zealand Aid Programme and the Asian Development Bank. Through its engagement with the United Nations (UN), the Cook Islands is influenced by initiatives such as the UN's Sustainable Development Goals (SDGs). By using digital transformation as a platform to work towards these goals, the Cook Islands is able to access international funding to support its digital transformation initiatives. Knowledge. During research and development phases, the Cook Islands was able to tap into knowledge and expertise through the United Nations Development Programme (UNDP), including its digital readiness assessment tool and, subsequently, its digital transformation framework. Digital transformation initiatives. Manatua One Polynesia Fibre Cable. In July 2020 the Cook Islands were connected to the Manatua One Polynesia Fibre Cable, which links the Cook Islands, Niue, Samoa and Tahiti. The cable has landing points at Rarotonga and Aitutaki. Then in September 2020 Avaroa Cable Limited and Vodafone Cook Islands signed a partnership for use of the Manatua One Polynesia Fibre Cable, making Avaroa Cable Limited the first company to be awarded a telecommunications licence under the new Cook Islands Competition and Regulatory Authority Act 2019. Upgrade to the National ICT Network. In December 2021, the contract for the upgrade of the Cook Islands government network infrastructure was awarded to Wellington-based IT company Aiscorp. The project is ongoing and is intended to deliver lasting benefits for the government and the people of the Cook Islands as they work towards making the centralised system available to the whole of government. Individual government agencies and state-owned enterprises are continuing with the development and implementation of digital plans. National ICT policy 2023-27. With facilitation support from the Asian Development Bank, the latest version of the Cook Islands National ICT policy was launched.
Digital access to Cook Islands Legislation. On 24 June 2024, in partnership with LexisNexis, a global leader in legal publishing, the Cook Islands government launched a new website, Laws Of The Cook Islands - Te Au Ture O Te Kuki Airani, with features including a comprehensive legal database, a user-friendly interface and regular updates, with the aim of increasing transparency. National strategy launch. After years of research, analysis and development, with financial support from international bodies such as the United Nations (UN), the New Zealand Agency for International Development (NZAID) and the Asian Development Bank (ADB), the Cook Islands celebrated a key milestone in its digital transformation journey in February 2024 with the launch of its first ever National Digital Strategy 2024 to 2030. The vision for the Cook Islands is 'A digitally empowered and inclusive Cook Islands, where technology enhances all lives, fosters innovation, drives economic growth and prosperity, improves social services, and protects our unique culture and environment – while building a shared identity for our island home.' Cyber security policy 2024. With the successive rollout of the above initiatives, the Cook Islands needed to set clear foundations for tackling online harm and cybercrime, securing the information and data held by its government and agencies, ensuring the safety of its people and protecting its critical national infrastructure. This resulted in the development of its first cyber security policy. Online visa and permit application system. On Monday 6 January 2025, the Ministry of Foreign Affairs and Immigration (MFAI) announced that the new online visa and permit application system was live. Developed in conjunction with the United Nations Conference on Trade and Development (UNCTAD), the platform offers the thousands of people who visit each year a more efficient and effective service.
7074
28564
https://en.wikipedia.org/wiki?curid=7074
Transport in the Cook Islands
This article lists transport in the Cook Islands. Road transport. Traffic drives on the left side of the road. The maximum speed limit is 50 km/h, with a limit of 30 km/h in some areas. On the main island of Rarotonga, there are no traffic lights and just two roundabouts. Buses operate clockwise and anti-clockwise services around the island's coastal ring-road. Road safety is poor. In 2011, the Cook Islands had the world's second-highest rate of road deaths per capita. In 2018, crashes neared a record high, with speeding, alcohol and careless behaviour being the main causes. Motor scooters are a common form of transport, but for many years there was no requirement to wear helmets, making them a common cause of death and injury. Legislation requiring helmets was passed in 2007, but scrapped in early 2008 before it came into force. In 2016, a law was passed requiring visitors and riders aged 16 to 25 to wear helmets, but it was widely flouted. In March 2020 the Cook Islands parliament again legislated for compulsory helmets to be worn from June 26, but implementation was delayed until July 31, and then until September 30. Roads: * Total: 295 km (2018) * Paved: 207 km (2018) * Unpaved: 88 km (2018) Rail transport. The Cook Islands has no effective rail transport. Rarotonga had a 170-metre tourist railway on private property, the Rarotonga Steam Railway, but it is no longer in working condition. Water transport. The Cook Islands have a long history of sea transport. The islands were colonised from Tahiti, and in turn colonised New Zealand in ocean-going waka. In the late nineteenth century, following European contact, the islands had a significant fleet of schooners, which they used to travel between islands and to trade with Tahiti and New Zealand. In 1899, locally owned shipping carried 10% of all international trade to the islands, and 66% of all trade carried by sail. Indigenous-owned shipping was driven out of business following New Zealand's acquisition of the islands, replaced by government-owned vessels, New Zealand trading companies, and the steamships of the Union Steamship Company. International shipping is provided by Pacific Forum Line and Matson, Inc. (as EXCIL shipping). Only the port of Avatiu can handle containers, with ships unloading at Aitutaki using lighters. There are two inter-island shipping companies: Taio Shipping, operating two vessels, and Cook Islands Towage, operating one. In the past, shipping interruptions have led to shortages of imported goods and fuel, and electricity blackouts on the outer islands. Shipping has frequently been subsidised to ensure service. In 2019 the Cook Islands government announced that it would acquire a dedicated cargo ship for the outer islands after Cook Islands Towage's barge was sold. It subsequently delayed the purchase pending the development of a Cook Islands Shipping Roadmap, and issued a tender for a Pa Enua Shipping Charter. The Cook Islands operates an open ship registry and has been placed on the Paris Memorandum of Understanding on Port State Control Black List as a flag of convenience. Ships registered in the Cook Islands have been used to smuggle oil from Iran in defiance of international sanctions. In February 2021 two ships were removed from the shipping register for concealing their movements by turning off their Automatic Identification System. In April 2022 the motoryacht "Tango" owned by sanctioned Russian oligarch Viktor Vekselberg was seized in Spain. Maritime Cook Islands claimed that no other sanctioned vessels were on its registry.
In July 2022 two yachts owned by sanctioned oligarch Roman Abramovich were reflagged as Cook Islands vessels, allowing them to escape arrest in Antigua and Barbuda. In 2024 Maritime Cook Islands deflagged 12 tankers for violating sanctions against Russia and Iran. It denied that it had become a haven for Russia's "dark fleet" of sanctions-evaders. Ports and harbours. The smaller islands have passages through their reefs, but these are unsuitable for large vessels. Registered merchant fleet: * total: 205 * by type: bulk carrier 21, container ship 3, general cargo 85, oil tanker 33, other 63 (2019) * country comparison to the world: 65 Air transport. One airline, Air Rarotonga, is based in the country. It flies domestically and to Tahiti in French Polynesia. Three foreign airlines also provide international service. There is one international airport, Rarotonga International Airport. Eight airports provide local or charter services. Only the Rarotonga and Aitutaki airports are paved. Airports: 11 (2013). Airports with paved runways: * Total: 1 (2019) * 1,524 to 2,437 m: 1 Airports with unpaved runways: * Total: 10 (2013) * 1,524 to 2,437 m: 2 (2013) * 914 to 1,523 m: 7 (2013) * Under 914 m: 1 (2013)
7077
48016325
https://en.wikipedia.org/wiki?curid=7077
Computer file
A computer file is a collection of data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so too can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet. Different types of computer files are designed for different purposes. A file may be designed to store a written message, a document, a spreadsheet, an image, a video, a program, or any wide variety of other kinds of data. Certain files can store multiple data types at once. By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times. Files are typically organized in a file system, which tracks file locations on the disk and enables user access. Etymology. The word "file" derives from the Latin "filum" ("a thread, string"). "File" was used in the context of computer storage as early as January 1940. In "Punched Card Methods in Scientific Computation", W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a "file of punched cards"." In February 1950, in a Radio Corporation of America (RCA) advertisement in "Popular Science" magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted, among other things, information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, , by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. File contents. On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data. On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file ( in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. 
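As a concrete illustration of metadata kept outside the file format but linked to the file, the following Python sketch sets and reads an extended attribute. It assumes a Linux file system with xattr support (such as ext4); the file name and attribute value are invented for the example, and os.setxattr and os.getxattr are available on Linux only.

# A file's own bytes, plus a piece of metadata stored by the file system
# as an extended attribute rather than inside the file format itself.
import os

path = "notes.txt"                        # hypothetical file name
with open(path, "w", encoding="utf-8") as f:
    f.write("hello")                      # the file's contents

os.setxattr(path, "user.comment", b"draft, not for publication")
print(os.listxattr(path))                 # ['user.comment']
print(os.getxattr(path, "user.comment"))  # b'draft, not for publication'

Copying such a file to a file system or archive format that does not carry extended attributes silently drops the attribute, which is exactly the kind of metadata loss discussed below.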
All those methods, however, are more susceptible to loss of metadata than container and archive file formats. File size. At any instant in time, a file has a specific size, normally expressed as a number of bytes, that indicates how much storage is occupied by the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files). The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file; these files can be newly created files that have not yet had any data written to them, may serve as some kind of flag in the file system, or may be accidents (the results of aborted disk operations). For example, the file to which the link points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with a character special file, which is also a file but whose size is not meaningful. Organization of data in a file. Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names. In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file. Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of the archive file are to lower the number of files for easier transfer, to reduce storage usage, or just to organize outdated files. The archive file must often be unpacked before its contents can be used. File operations.
Programs can perform a number of basic operations on files: files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or via a command-line interface (CLI). In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user-space is simply through its filename (instead of its inode). For example, rm "filename" will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's memory space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data. File moves within a file system complete almost immediately because the data content does not need to be rewritten. Only the paths need to be changed. Moving methods. There are two distinct implementations of file moves. When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after being transferred, while other software deletes all files at once only after every file has been transferred. With the codice_1 command for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: codice_2), while the latter method is used when selecting entire directories (example: codice_3). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method when using the Media Transfer Protocol. The former method (individual deletion from source) has the benefit that space is released from the source device or partition soon after the transfer has begun, that is, as soon as the first file has finished transferring. With the latter method, space is only freed after the transfer of the entire selection has finished. If an incomplete file transfer with the latter method is aborted unexpectedly, perhaps due to an unexpected power-off, system halt or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file. With the individual deletion method, the file moving software also does not need to cumulatively keep track of all files finished transferring for the case that a user manually aborts the file transfer. A file manager using the latter (afterwards deletion) method will only have to delete the files from the source directory that have already finished transferring. Identifying and organizing. In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself.
In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can identify the name of the link with the file itself, but this is a false analogue, especially where there exists more than one link to the same file. Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must be typically unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type that means a directory can contain an identical name for more than one type of object such as a directory and a file. In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names. Any string of characters may be a well-formed name for a file or a link depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of Unicode letters or Unicode digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity. The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case. Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one "master folder" (or "root folder" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have a name). The use of folders makes it easier to organize files in a logical way. When a computer allows the use of folders, each file and folder has not only a name of its own, but also a path, which identifies the folder or folders in which a file or folder resides. 
In the path, some sort of special character—such as a slash—is used to separate the file and folder names. For example, in the illustration shown in this article, the path uniquely identifies a file called in a folder called , which in turn is contained in a folder called . The folder and file names are separated by slashes in this example; the topmost or root folder has no name, and so the path begins with a slash (if the root folder had a name, it would precede this first slash). Many computer systems use extensions in file names to help identify what they contain, also known as the file type. On Windows computers, extensions consist of a dot (period) at the end of a file name, followed by a few letters to identify the type of file. An extension of identifies a text file; a extension identifies any type of document or documentation, commonly in the Microsoft Word file format; and so on. Even when extensions are used in a computer system, the degree to which the computer system recognizes and heeds them can vary; in some systems, they are required, while in other systems, they are completely ignored if they are presented. Protection. Many modern computer systems provide methods for protecting files against accidental and deliberate damage. Computers that allow for multiple users implement file permissions to control who may or may not modify, delete, or create files and folders. For example, a given user may be granted only permission to read a file or folder, but not to modify or delete it; or a user may be given permission to read and modify files or folders, but not to execute them. Permissions may also be used to allow only certain users to see the contents of a file or folder. Permissions protect against unauthorized tampering or destruction of information in files, and keep private information confidential from unauthorized users. Another protection mechanism implemented in many computers is a "read-only flag." When this flag is turned on for a file (which can be accomplished by a computer program or by a human user), the file can be examined, but it cannot be modified. This flag is useful for critical information that must not be modified or erased, such as special files that are used only by internal parts of the computer system. Some systems also include a "hidden flag" to make certain files invisible; this flag is used by the computer system to hide essential system files that users should not alter. Storage. Any file that has any useful purpose must have some physical manifestation. That is, a file (an abstract concept) in a real computer system must have a real physical analogue if it is to exist at all. In physical terms, most computer files are stored on some type of data storage device. For example, most operating systems store files on a hard disk. Hard disks have been the ubiquitous form of non-volatile storage since the early 1960s. Where files contain only temporary information, they may be stored in RAM. Computer files can be also stored on other media in some cases, such as magnetic tapes, compact discs, Digital Versatile Discs, Zip drives, USB flash drives, etc. The use of solid state drives is also beginning to rival the hard disk drive. In Unix-like operating systems, many files have no associated physical storage device. Examples are and most files under directories , and . These are virtual files: they exist as objects within the operating system kernel. 
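The size, type and permission properties discussed above can be inspected through ordinary system calls. The sketch below is a minimal example using Python's os and stat modules and assumes a Unix-like system; the specific paths (/etc/hostname, /dev/null, /proc/uptime, /tmp) are common on Linux but are assumptions chosen for illustration.

# Asking the operating system what kind of object a path names,
# how large it is, and whether the owner may write to it.
import os, stat

for path in ("/etc/hostname", "/dev/null", "/proc/uptime", "/tmp"):
    st = os.stat(path)
    if stat.S_ISREG(st.st_mode):
        kind = "regular file"
    elif stat.S_ISDIR(st.st_mode):
        kind = "directory"
    elif stat.S_ISCHR(st.st_mode):
        kind = "character special file (size not meaningful)"
    else:
        kind = "other"
    writable = bool(st.st_mode & stat.S_IWUSR)  # a crude read-only check
    print(path, kind, st.st_size, "bytes", "owner-writable:", writable)

# Virtual files such as those under /proc report a size of 0 yet still
# return data when read, because the kernel generates their contents
# on demand.
with open("/proc/uptime") as f:
    print(f.read().strip())

On such a system, /dev/null appears as a character special file whose reported size carries no meaning, while /proc/uptime is a zero-byte virtual file that nonetheless yields data when read, matching the points made in the file size and storage discussions above.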
As seen by a running user program, files are usually represented either by a file control block or by a file handle. A file control block (FCB) is an area of memory which is manipulated to establish a filename etc. and then passed to the operating system as a parameter; it was used by older IBM operating systems and early PC operating systems including CP/M and early versions of MS-DOS. A file handle is generally either an opaque data type or an integer; it was introduced in around 1961 by the ALGOL-based Burroughs MCP running on the Burroughs B5000 but is now ubiquitous. File corruption. When a file is said to be corrupted, it is because its contents have been saved to the computer in such a way that they cannot be properly read, either by a human or by software. Depending on the extent of the damage, the original file can sometimes be recovered, or at least partially understood. A file may be created corrupt, or it may be corrupted at a later point through overwriting. There are many ways by which a file can become corrupted. Most commonly, the issue happens in the process of writing the file to a disk. For example, if an image-editing program unexpectedly crashes while saving an image, that file may be corrupted because the program could not save its entirety. The program itself might warn the user that there was an error, allowing for another attempt at saving the file. Some other examples of reasons for which files become corrupted include: File corruption is typically unintentional; however, it may be done intentionally as an act of deception so that a student or employee can receive an extension on their deadline. There are services that provide on demand file corruption, which essentially fill a given file with random data so that it cannot be opened or read yet still seems legitimate. One of the most effective countermeasures for unintentional file corruption is backing up important files. In the event of an important file becoming corrupted, the user can simply replace it with the backed up version. Backup. When computer files contain information that is extremely important, a "back-up" process is used to protect against disasters that might destroy the files. Backing up files simply means making copies of the files in a separate location so that they can be restored if something happens to the computer, or if they are deleted accidentally. There are many ways to back up files. Most computer systems provide utility programs to assist in the back-up process, which can become very time-consuming if there are many files to safeguard. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk, but if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location. The grandfather-father-son backup method automatically makes three back-ups; the grandfather file is the oldest copy of the file and the son is the current copy. File systems and file managers. The way a computer organizes, names, stores and manipulates files is globally referred to as its "file system." Most computers have at least one file system. Some computers allow the use of several different file systems. 
For instance, on newer MS Windows computers, the older FAT-type file systems of MS-DOS and old versions of Windows are supported, in addition to the NTFS file system that is the normal file system for recent versions of Windows. Each system has its own advantages and disadvantages. Standard FAT allows only eight-character file names (plus a three-character extension) with no spaces, for example, whereas NTFS allows much longer names that can contain spaces. You can call a file "" in NTFS, but in FAT you would be restricted to something like (unless you were using VFAT, a FAT extension allowing long file names). File manager programs are utility programs that allow users to manipulate files directly. They allow you to move, create, delete and rename files and folders, although they do not actually allow you to read the contents of a file or store information in it. Every computer system provides at least one file-manager program for its native file system. For example, File Explorer (formerly Windows Explorer) is commonly used in Microsoft Windows operating systems, and Nautilus is common under several distributions of Linux.
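To make the earlier discussion of file operations and moving methods concrete, here is a minimal Python sketch of creating, reading, moving and deleting a file. It assumes a Unix-like system for the /tmp path, and the file and directory names are invented; os.rename only renames within a single file system, while shutil.move falls back to copying the data and then deleting the source when the destination lies on a different file system or partition.

# Basic operations on a file, and the two kinds of "move".
import os, shutil

with open("report.txt", "w", encoding="utf-8") as f:   # create and write
    f.write("quarterly figures\n")
with open("report.txt", "r", encoding="utf-8") as f:   # open and read
    print(f.read())

os.makedirs("archive", exist_ok=True)

# Within one file system a move is just a rename: no data is copied,
# so it completes almost immediately regardless of file size.
os.rename("report.txt", "archive/report.txt")

# Across file systems or partitions, shutil.move() copies the data and
# then deletes the source, the "copy then delete" behaviour described
# under "Moving methods"; here /tmp may or may not be a separate
# file system.
shutil.move("archive/report.txt", "/tmp/report.txt")

os.remove("/tmp/report.txt")                           # delete
os.rmdir("archive")

A file manager that deletes each source file as soon as it has been copied releases space early but must cope with a partially moved directory if the operation is interrupted, which is the trade-off described in the moving methods discussion.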
7079
5558684
https://en.wikipedia.org/wiki?curid=7079
CID
CID may refer to: Film. Several Indian films have the mention of "Criminal Investigation Department" (as "C.I.D.") in their title:
7080
7903804
https://en.wikipedia.org/wiki?curid=7080
Christian Doppler
Christian Andreas Doppler (; ; 29 November 1803 – 17 March 1853) was an Austrian mathematician and physicist. He formulated the principle – now known as the Doppler effect – that the observed frequency of a wave depends on the relative speed of the source and the observer. Biography. Early life and education. Doppler was born in Salzburg (today Austria) in 1803. Doppler was the second son of Johann Evangelist Doppler and Theresia Seeleuthner (Doppler). Doppler's father, Johann Doppler, was a third-generation stone mason in Salzburg. As a young boy, Doppler showed promise for his family's trade. However, due to his weak health, Doppler's father encouraged him instead to pursue a career in business. Doppler started elementary education at the age of 13. After completion, he moved on to secondary education at a school in Linz. Doppler's proficiency in mathematics was discovered by Simon Stampfer, a mathematician in Salzburg. Upon his recommendation, Doppler took a break from high school to attend the Polytechnic Institute in Vienna in 1822. Doppler returned to Salzburg in 1825 to finish his secondary education. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the University of Vienna and Imperial–Royal Polytechnic Institute (now TU Wien). In 1829, he was chosen for an assistant position to Professor Adam Von Burg at the Polytechnic Institute of Vienna, where he continued his studies. In 1835, he decided to immigrate to the United States to pursue a position in academia. Before departing for the United States, Doppler was offered a teaching position at a state-operated high school in Prague, which convinced him to stay in Europe. Shortly after, in 1837 he was appointed as an associate professor of math and geometry at the Prague Polytechnic Institute (now Czech Technical University in Prague). He received a full professorship position in 1841. Family. In 1836, Doppler married Mathilde Sturm, the daughter of goldsmith Franz Sturm. Doppler and Mathilde had five children together. Their first child was Mathilde Doppler who was born in 1837. Doppler's second child, Ludwig Doppler was born in 1838. Two years later, in 1840 Adolf Doppler was born. Doppler's fourth child, Bertha Doppler was born in 1843. Their last child Hermann was born in 1845. Development of the Doppler effect. In 1842, at the age of 38, Doppler gave a lecture to the Royal Bohemian Society of Sciences and subsequently published "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" ("On the coloured light of the binary stars and some other stars of the heavens"). In this work, Doppler postulated his principle (later named the Doppler effect) that the observed frequency of a wave depends on the relative speed of the source and the observer, and he later tried to use this concept to explain the visible colours of binary stars (this hypothesis was later proven wrong). Doppler also incorrectly believed that if a star were to exceed 136,000 kilometers per second in radial velocity, then it would not be visible to the human eye. Later life. Doppler continued working as a professor at the Prague Polytechnic, publishing over 50 articles on mathematics, physics and astronomy, but in 1847 he left Prague for the professorship of mathematics, physics, and mechanics at the Academy of Mines and Forests (its successor is the University of Miskolc) in Selmecbánya (then Kingdom of Hungary, now Banská Štiavnica Slovakia). 
Doppler's research was interrupted by the Hungarian Revolution of 1848. In 1849, he fled to Vienna and in 1850 was appointed head of the Institute for Experimental Physics at the University of Vienna. While there, Doppler, along with Franz Unger, influenced the development of young Gregor Mendel, the founding father of genetics, who was a student at the University of Vienna from 1851 to 1853. Death. Doppler died on 17 March 1853 at age 49 from a pulmonary disease in Venice (at that time part of the Austrian Empire). His tomb is in the San Michele cemetery on the Venetian island of San Michele. Full name. Some confusion exists about Doppler's full name. Doppler referred to himself as Christian Doppler. The records of his birth and baptism stated Christian "Andreas" Doppler. Doppler's middle name is shared by his great-great-grandfather Andreas Doppler. Forty years after Doppler's death the misnomer "Johann" Christian Doppler was introduced by the astronomer Julius Scheiner. Scheiner's mistake has since been copied by many.
7081
47081136
https://en.wikipedia.org/wiki?curid=7081
Clerihew
A clerihew is a whimsical, four-line biographical poem of a type invented by Edmund Clerihew Bentley. The first line is the name of the poem's subject, usually a famous person, and the remainder puts the subject in an absurd light or reveals something unknown or spurious about the subject. The rhyme scheme is AABB, and the rhymes are often forced. The line length and metre are irregular. Bentley invented the clerihew in school and then popularized it in books. One of his best known is this (1905): Form. A clerihew has the following properties: Clerihews are not satirical or abusive, but they target famous individuals and reposition them in an absurd, anachronistic or commonplace setting, often giving them an over-simplified and slightly garbled description. Practitioners. The form was invented by and is named after Edmund Clerihew Bentley. When he was a 16-year-old pupil at St Paul's School in London, the lines of his first clerihew, about Humphry Davy, came into his head during a science class. Together with his schoolfriends, he filled a notebook with examples. The first known use of the word in print dates from 1928. Bentley published three volumes of his own clerihews: "Biography for Beginners" (1905), published as "edited by E. Clerihew"; "More Biography" (1929); and "Baseless Biography" (1939), a compilation of clerihews originally published in "Punch" illustrated by the author's son Nicolas Bentley. G. K. Chesterton, a friend of Bentley, was also a practitioner of the clerihew and one of the sources of its popularity. Chesterton provided verses and illustrations for the original schoolboy notebook and illustrated "Biography for Beginners". Other serious authors also produced clerihews, including W. H. Auden, and it remains a popular humorous form among other writers and the general public. Among contemporary writers, the satirist Craig Brown has made considerable use of the clerihew in his columns for "The Daily Telegraph". Examples. Bentley's first clerihew, published in 1905, was written about Sir Humphry Davy: The original poem had the second line "Was not fond of gravy"; but the published version has "Abominated gravy". Other clerihews by Bentley include: and W. H. Auden's "Academic Graffiti" (1971) includes: Satirical magazine "Private Eye" noted Auden's work and responded: A second stanza aimed a jibe at Auden's publisher, Faber and Faber. Alan Turing, one of the founders of computing, was the subject of a clerihew written by the pupils of his "alma mater", Sherborne School in England: A clerihew appreciated by chemists is cited in "Dark Sun" by Richard Rhodes, and regards the inventor of the thermos bottle (or Dewar flask): The version in "Biography for Beginners" says "condense" rather than "liquefy". "Dark Sun" also features a clerihew about the German-British physicist and Soviet nuclear spy Klaus Fuchs: In 1983, "Games" magazine ran a contest titled "Do You Clerihew?" The winning entry was: Other uses of the form. The clerihew form has also occasionally been used for non-biographical verses. Bentley opened his 1905 "Biography for Beginners" with an example, entitled "Introductory Remarks", on the theme of biography itself: The third edition of the same work, published in 1925, includes a "Preface to the New Edition" in 11 stanzas, each in clerihew form. One stanza runs:
7085
7903804
https://en.wikipedia.org/wiki?curid=7085
Civil war
A civil war is a war between organized groups within the same state (or country). The aim of one side may be to take control of the country or a region, to achieve independence for a region, or to change government policies. The term is a calque of Latin which was used to refer to the various civil wars of the Roman Republic in the 1st century BC. "Civil" here means "of/related to citizens", a civil war being a war between the citizenry, rather than with an outsider. Most modern civil wars involve intervention by outside powers. According to Patrick M. Regan in his book "Civil Wars and Foreign Powers" (2000) about two thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention. A civil war is often a high-intensity conflict, often involving regular armed forces, that is sustained, organized and large-scale. Civil wars may result in large numbers of casualties and the consumption of significant resources. Civil wars since the end of World War II have lasted on average just over four years, a dramatic rise from the one-and-a-half-year average of the 1900–1944 period. While the rate of emergence of new civil wars has been relatively steady since the mid-19th century, the increasing length of those wars has resulted in increasing numbers of wars ongoing at any one time. For example, there were no more than five civil wars underway simultaneously in the first half of the 20th century while there were over 20 concurrent civil wars close to the end of the Cold War. Since 1945, civil wars have resulted in the deaths of over 25 million people, as well as the forced displacement of millions more. Civil wars have further resulted in economic collapse; Somalia, Burma (Myanmar), Uganda and Angola are examples of nations that were considered to have had promising futures before being engulfed in civil wars. Formal classification. James Fearon, a scholar of civil wars at Stanford University, defines a civil war as "a violent conflict within a country fought by organized groups that aim to take power at the center or in a region, or to change government policies". Ann Hironaka further specifies that one side of a civil war is the state. Stathis Kalyvas defines civil war as "armed combat taking place within the boundaries of a recognized sovereign entity between parties that are subject to a common authority at the outset of the hostilities." The intensity at which a civil disturbance becomes a civil war is contested by academics. Some political scientists define a civil war as having more than 1,000 casualties, while others further specify that at least 100 must come from each side. The Correlates of War, a dataset widely used by scholars of conflict, classifies civil wars as having over 1,000 war-related casualties per year of conflict. This rate is a small fraction of the millions killed in the Second Sudanese Civil War and Cambodian Civil War, for example, but excludes several highly publicized conflicts, such as The Troubles of Northern Ireland and the struggle of the African National Congress in Apartheid-era South Africa. Based on the 1,000-casualties-per-year criterion, there were 213 civil wars from 1816 to 1997, 104 of which occurred from 1944 to 1997. If one uses the less-stringent 1,000 casualties total criterion, there were over 90 civil wars between 1945 and 2007, with 20 ongoing civil wars as of 2007. 
The Geneva Conventions do not specifically define the term "civil war"; nevertheless, they do outline the responsibilities of parties in "armed conflict not of an international character". This includes civil wars; however, no specific definition of civil war is provided in the text of the Conventions. Nevertheless, the International Committee of the Red Cross has sought to provide some clarification through its commentaries on the Geneva Conventions, noting that the Conventions are "so general, so vague, that many of the delegations feared that it might be taken to cover any act committed by force of arms". Accordingly, the commentaries provide for different 'conditions' on which the application of the Geneva Convention would depend; the commentary, however, points out that these should not be interpreted as rigid conditions. The conditions listed by the ICRC in its commentary are as follows: Causes. According to a 2017 review study of civil war research, there are three prominent explanations for civil war: greed-based explanations which center on individuals' desire to maximize their profits, grievance-based explanations which center on conflict as a response to socioeconomic or political injustice, and opportunity-based explanations which center on factors that make it easier to engage in violent mobilization. According to the study, the most influential explanation for civil war onset is the opportunity-based explanation by James Fearon and David Laitin in their 2003 American Political Science Review article. Greed. Scholars investigating the cause of civil war are attracted by two opposing theories, greed versus grievance. Roughly stated: are conflicts caused by differences of ethnicity, religion or other social affiliation, or do conflicts begin because it is in the economic best interests of individuals and groups to start them? Scholarly analysis supports the conclusion that economic and structural factors are more important than those of identity in predicting occurrences of civil war. A comprehensive study of civil war was carried out by a team from the World Bank in the early 21st century. The study framework, which came to be called the Collier–Hoeffler Model, examined 78 five-year increments when civil war occurred from 1960 to 1999, as well as 1,167 five-year increments of "no civil war" for comparison, and subjected the data set to regression analysis to see the effect of various factors. The factors that were shown to have a statistically significant effect on the chance that a civil war would occur in any given five-year period were: A high proportion of primary commodities in national exports significantly increases the risk of a conflict. A country at "peak danger", with commodities comprising 32% of gross domestic product, has a 22% risk of falling into civil war in a given five-year period, while a country with no primary commodity exports has a 1% risk. When disaggregated, only petroleum and non-petroleum groupings showed different results: a country with relatively low levels of dependence on petroleum exports is at slightly less risk, while a high level of dependence on oil as an export results in slightly more risk of a civil war than national dependence on another primary commodity. 
The authors of the study interpreted this as being the result of the ease by which primary commodities may be extorted or captured compared to other forms of wealth; for example, it is easy to capture and control the output of a gold mine or oil field compared to a sector of garment manufacturing or hospitality services. A second source of finance is national diasporas, which can fund rebellions and insurgencies from abroad. The study found that statistically switching the size of a country's diaspora from the smallest found in the study to the largest resulted in a sixfold increase in the chance of a civil war. Higher male secondary school enrollment, per capita income and economic growth rate all had significant effects on reducing the chance of civil war. Specifically, a male secondary school enrollment 10% above the average reduced the chance of a conflict by about 3%, while a growth rate 1% higher than the study average resulted in a decline in the chance of a civil war of about 1%. The study interpreted these three factors as proxies for earnings forgone by rebellion, and therefore that lower forgone earnings encourage rebellion. Phrased another way: young males (who make up the vast majority of combatants in civil wars) are less likely to join a rebellion if they are getting an education or have a comfortable salary, and can reasonably assume that they will prosper in the future. Low per capita income has also been proposed as a cause for grievance, prompting armed rebellion. However, for this to be true, one would expect economic inequality to also be a significant factor in rebellions, which it is not. The study therefore concluded that the economic model of opportunity cost better explained the findings. Grievance. Most proxies for "grievance"—the theory that civil wars begin because of issues of identity, rather than economics—were statistically insignificant, including economic equality, political rights, ethnic polarization and religious fractionalization. Only ethnic dominance, the case where the largest ethnic group comprises a majority of the population, increased the risk of civil war. A country characterized by ethnic dominance has nearly twice the chance of a civil war. However, the combined effects of ethnic and religious fractionalization, i.e. the greater chance that any two randomly chosen people will be from separate ethnic or religious groups, the less chance of a civil war, were also significant and positive, as long as the country avoided ethnic dominance. The study interpreted this as stating that minority groups are more likely to rebel if they feel that they are being dominated, but that rebellions are more likely to occur the more homogeneous the population and thus more cohesive the rebels. These two factors may thus be seen as mitigating each other in many cases. Criticism of the "greed versus grievance" theory. David Keen, a professor at the Development Studies Institute at the London School of Economics is one of the major critics of greed vs. grievance theory, defined primarily by Paul Collier, and argues the point that a conflict, although he cannot define it, cannot be pinpointed to simply one motive. He believes that conflicts are much more complex and thus should not be analyzed through simplified methods. He disagrees with the quantitative research methods of Collier and believes a stronger emphasis should be put on personal data and human perspective of the people in conflict. 
Beyond Keen, several other authors have introduced works that either disprove greed vs. grievance theory with empirical data, or dismiss its ultimate conclusion. Authors such as Cristina Bodea and Ibrahim Elbadawi, who co-wrote the entry, "Riots, coups and civil war: Revisiting the greed and grievance debate", argue that empirical data can disprove many of the proponents of greed theory and make the idea "irrelevant". They examine a myriad of factors and conclude that too many factors come into play with conflict, which cannot be confined to simply greed or grievance. Anthony Vinci makes a strong argument that "fungible concept of power and the primary motivation of survival provide superior explanations of armed group motivation and, more broadly, the conduct of internal conflicts". Opportunities. James Fearon and David Laitin find that ethnic and religious diversity does not make civil war more likely. They instead find that factors that make it easier for rebels to recruit foot soldiers and sustain insurgencies, such as "poverty—which marks financially & bureaucratically weak states and also favors rebel recruitment—political instability, rough terrain, and large populations" make civil wars more likely. Such research finds that civil wars happen because the state is weak; both authoritarian and democratic states can be stable if they have the financial and military capacity to put down rebellions. Critical Responses to Fearon and Laitin. Some scholars, such as Lars-Erik Cederman of the Center for Security Studies (CSS) at the Swiss Federal Institute of Technology, have criticized the data used by Fearon and Laitin to determine ethnic and religious diversity. In his 2007 paper "Beyond Fractionalization: Mapping Ethnicity onto Nationalist Insurgencies", Cederman argues that the ethno-linguistic fractionalization index (ELF) used by Fearon, Laitin and other political scientists is flawed. ELF, Cederman states, measures diversity on a country's population-wide level and makes no attempt to determine the number of ethnic groups in relation to what role they play in the power of the state and its military. Cederman believes it makes little sense to test hypotheses relating national ethnic diversity to civil war outbreak without any explicit reference to how many different ethnic groups actually hold power in the state. This suggests that ethnic, linguistic and religious cleavages can matter, depending on the extent to which the various groups have ability and influence to mobilize on either side of a forming conflict. Themes explored in Cederman's later work criticizing the use of ethnic fractionalization measures as input variables to predict civil war outbreak relate to these indices not accounting for the geographical distribution of ethnic groups within countries, as this can affect their access to regional resources and commodities, which in turn can lead to conflict. A third theme explored by Cederman is that ethnolinguistic fractionalization does not quantify the extent to which there is pre-existing economic inequality between ethnic groups within countries. In a 2011 article, Cederman and fellow researchers describe finding that “in highly unequal societies, both rich and poor groups fight more often than those groups whose wealth lies closer to the country average”, going against the opportunity-based explanation for civil war outbreak. 
Michael Bleaney, Professor of International Economics at the University of Nottingham, published a 2009 paper titled "Incidence, Onset and Duration of Civil Wars: A Review of the Evidence", which tested numerous variables for their relationship to civil war outbreak with different datasets, including that utilized by Fearon and Laitin. Bleaney concluded that neither ethnoreligious diversity, as measured by fractionalization, nor another variable, ethnic polarization, defined as the extent to which individuals in a population are distributed across different ethnic groups, were "a sufficient measure of diversity as it affects the probability of conflict." Other causes. Bargaining problems. In a state torn by civil war, the contesting powers often do not have the ability to commit or the trust to believe in the other side's commitment to put an end to war. When considering a peace agreement, the involved parties are aware of the high incentives to withdraw once one of them has taken an action that weakens their military, political or economical power. Commitment problems may deter a lasting peace agreement as the powers in question are aware that neither of them is able to commit to their end of the bargain in the future. States are often unable to escape conflict traps (recurring civil war conflicts) due to the lack of strong political and legal institutions that motivate bargaining, settle disputes, and enforce peace settlements. Governance. Political scientist Barbara F. Walter suggests that most contemporary civil wars are actually repeats of earlier civil wars that often arise when leaders are not accountable to the public, when there is poor public participation in politics, and when there is a lack of transparency of information between the executives and the public. Walter argues that when these issues are properly reversed, they act as political and legal restraints on executive power forcing the established government to better serve the people. Additionally, these political and legal restraints create a standardized avenue to influence government and increase the commitment credibility of established peace treaties. It is the strength of a nation's institutionalization and good governance—not the presence of democracy nor the poverty level—that is the number one indicator of the chance of a repeat civil war, according to Walter. Military advantage. High levels of population dispersion and, to a lesser extent, the presence of mountainous terrain, increased the chance of conflict. Both of these factors favor rebels, as a population dispersed outward toward the borders is harder to control than one concentrated in a central region, while mountains offer terrain where rebels can seek sanctuary. Rough terrain was highlighted as one of the more important factors in a 2006 systematic review. Population size. The various factors contributing to the risk of civil war rise increase with population size. The risk of a civil war rises approximately proportionately with the size of a country's population. Poverty. There is a correlation between poverty and civil war, but the causality (which causes the other) is unclear. Some studies have found that in regions with lower income per capita, the likelihood of civil war is greater. Economists Simeon Djankov and Marta Reynal-Querol argue that the correlation is spurious, and that lower income and heightened conflict are instead products of other phenomena. 
In contrast, a study by Alex Braithwaite and colleagues showed systematic evidence of "a causal arrow running from poverty to conflict". Inequality. While there is a supposed negative correlation between absolute welfare levels and the probability of civil war outbreak, relative deprivation may actually be a more pertinent possible cause. Historically, higher inequality levels led to higher civil war probability. Since colonial rule and population size are also known to increase civil war risk, one may conclude that "the discontent of the colonized, caused by the creation of borders across tribal lines and bad treatment by the colonizers" is one important cause of civil conflicts. Time. The more time that has elapsed since the last civil war, the less likely it is that a conflict will recur. The study had two possible explanations for this: one opportunity-based and the other grievance-based. The elapsed time may represent the depreciation of whatever capital the rebellion was fought over and thus increase the opportunity cost of restarting the conflict. Alternatively, elapsed time may represent the gradual process of healing of old hatreds. The study found that the presence of a diaspora substantially reduced the positive effect of time, as the funding from diasporas offsets the depreciation of rebellion-specific capital. Evolutionary psychologist Satoshi Kanazawa has argued that an important cause of intergroup conflict may be the relative availability of women of reproductive age. He found that polygyny greatly increased the frequency of civil wars but not interstate wars. Gleditsch et al. did not find a relationship between ethnic groups practicing polygyny and an increased frequency of civil wars, but found that nations with legal polygamy may have more civil wars. They argued that misogyny is a better explanation than polygyny. They found that increased women's rights were associated with fewer civil wars and that legal polygamy had no effect after women's rights were controlled for. Political scholar Elisabeth Wood from Yale University offers yet another rationale for why civilians rebel and/or support civil war. Through her studies of the Salvadoran Civil War, Wood finds that traditional explanations of greed and grievance are not sufficient to explain the emergence of that insurgent movement. Instead, she argues that "emotional engagements" and "moral commitments" are the main reasons why thousands of civilians, most of them from poor and rural backgrounds, joined or supported the Farabundo Martí National Liberation Front, despite individually facing both high risks and virtually no foreseeable gains. Wood also attributes participation in the civil war to the value that insurgents assigned to changing social relations in El Salvador, an experience she defines as the "pleasure of agency". Duration and effects. Ann Hironaka, author of "Neverending Wars", divides the modern history of civil wars into the pre-19th century, 19th century to early 20th century, and late 20th century. In 19th-century Europe, the length of civil wars fell significantly, largely due to the nature of the conflicts as battles for the power center of the state, the strength of centralized governments, and the normally quick and decisive intervention by other states to support the government. Following World War II, the duration of civil wars grew past the norm of the pre-19th century, largely due to the weakness of the many postcolonial states and the intervention by major powers on both sides of conflict. 
The most obvious commonality among civil wars is that they occur in fragile states. In the 19th and early 20th centuries. Civil wars in the 19th century and in the early 20th century tended to be short; civil wars between 1900 and 1944 lasted on average one and a half years. The state itself formed the obvious center of authority in the majority of cases, and the civil wars were thus fought for control of the state. This meant that whoever had control of the capital and the military could normally crush resistance. A rebellion that failed to quickly seize the capital and control of the military normally found itself doomed to rapid destruction. For example, the fighting associated with the 1871 Paris Commune occurred almost entirely in Paris, and ended quickly once the military sided with the government at Versailles and conquered Paris. The power of non-state actors resulted in a lower value placed on sovereignty in the 18th and 19th centuries, which further reduced the number of civil wars. For example, the pirates of the Barbary Coast were recognized as "de facto" states because of their military power. The Barbary pirates thus had no need to rebel against the Ottoman Empire – their nominal state government – to gain recognition of their sovereignty. Conversely, states such as Virginia and Massachusetts in the United States of America did not have sovereign status, but had significant political and economic independence coupled with weak federal control, reducing the incentive to secede. The two major global ideologies, monarchism and democracy, led to several civil wars. However, a bi-polar world, divided between the two ideologies, did not develop, largely due to the dominance of monarchists through most of the period. The monarchists would thus normally intervene in other countries to stop democratic movements taking control and forming democratic governments, which were seen by monarchists as being both dangerous and unpredictable. The Great Powers (defined in the 1815 Congress of Vienna as the United Kingdom, Habsburg Austria, Prussia, France, and Russia) would frequently coordinate interventions in other nations' civil wars, nearly always on the side of the incumbent government. Given the military strength of the Great Powers, these interventions nearly always proved decisive and quickly ended the civil wars. There were several exceptions to the general rule of quick civil wars during this period. The American Civil War (1861–1865) was unusual for at least two reasons: it was fought around regional identities as well as political ideologies, and it ended through a war of attrition, rather than with a decisive battle over control of the capital, as was the norm. The Spanish Civil War (1936–1939) proved exceptional because "both" sides in the struggle received support from intervening great powers: Germany, Italy, and Portugal supported opposition leader Francisco Franco, while France and the Soviet Union supported the government (see proxy war). Since 1945. In the 1990s, about twenty civil wars were occurring concurrently during an average year, a rate about ten times the historical average since the 19th century. However, the rate of new civil wars had not increased appreciably; the drastic rise in the number of ongoing wars after World War II was a result of the tripling of the average duration of civil wars to over four years. 
This increase was a result of the increased number of states, the fragility of states formed after 1945, the decline in interstate war, and the Cold War rivalry. Following World War II, the major European powers divested themselves of their colonies at an increasing rate: the number of ex-colonial states jumped from about 30 to almost 120 after the war. The rate of state formation leveled off in the 1980s, at which point few colonies remained. More states also meant more states in which long civil wars could occur. Hironaka statistically measures the impact of the increased number of ex-colonial states as increasing the post-World War II incidence of civil wars by +165% over the pre-1945 number. While the new ex-colonial states appeared to follow the blueprint of the idealized state—centralized government, territory enclosed by defined borders, and citizenry with defined rights—as well as accessories such as a national flag, an anthem, a seat at the United Nations and an official economic policy, they were in actuality far weaker than the Western states they were modeled after. In Western states, the structure of governments closely matched states' actual capabilities, which had been arduously developed over centuries. The development of strong administrative structures, in particular those related to extraction of taxes, is closely associated with the intense warfare between predatory European states in the 17th and 18th centuries, or in Charles Tilly's famous formulation: "War made the state and the state made war". For example, the formation of the modern states of Germany and Italy in the 19th century is closely associated with the wars of expansion and consolidation led by Prussia and Sardinia-Piedmont, respectively. The Western process of forming effective and impersonal bureaucracies, developing efficient tax systems, and integrating national territory continued into the 20th century. Nevertheless, Western states that survived into the latter half of the 20th century were considered "strong" for the simple reason that they had managed to develop the institutional structures and military capability required to survive predation by their fellow states. In sharp contrast, decolonization was an entirely different process of state formation. Most imperial powers had not foreseen a need to prepare their colonies for independence; for example, Britain had given limited self-rule to India and Sri Lanka while treating British Somaliland as little more than a trading post, all major decisions for French colonies were made in Paris, and Belgium prohibited any self-government until it suddenly granted independence to its colonies in 1960. Like Western states of previous centuries, the new ex-colonies lacked autonomous bureaucracies, which would make decisions based on the benefit to society as a whole, rather than respond to corruption and nepotism to favor a particular interest group. In such a situation, factions manipulate the state to benefit themselves or, alternatively, state leaders use the bureaucracy to further their own self-interest. The lack of credible governance was compounded by the fact that most colonies were economic loss-makers at independence, lacking both a productive economic base and a taxation system to effectively extract resources from economic activity. Among the rare states that were profitable at decolonization was India; scholars credibly argue that Uganda, Malaysia and Angola may also be counted in this group. 
Nor did imperial powers make territorial integration a priority, and they may have discouraged nascent nationalism as a danger to their rule. Many newly independent states thus found themselves impoverished, with minimal administrative capacity in a fragmented society, while faced with the expectation of immediately meeting the demands of a modern state. Such states are considered "weak" or "fragile". The "strong"-"weak" categorization is not the same as "Western"-"non-Western", as some Latin American states like Argentina and Brazil and Middle Eastern states like Egypt and Israel are considered to have "strong" administrative structures and economic infrastructure. Historically, the international community would have targeted weak states for territorial absorption or colonial domination or, alternatively, such states would fragment into pieces small enough to be effectively administered and secured by a local power. However, international norms towards sovereignty changed in the wake of World War II in ways that support and maintain the existence of weak states. Weak states are given "de jure" sovereignty equal to that of other states, even when they do not have "de facto" sovereignty or control of their own territory, including the privileges of international diplomatic recognition and an equal vote in the United Nations. Further, the international community offers development aid to weak states, which helps maintain the facade of a functioning modern state by giving the appearance that the state is capable of fulfilling its implied responsibilities of control and order. The formation of a strong international law regime and norms against territorial aggression is strongly associated with the dramatic drop in the number of interstate wars, though it has also been attributed to the effect of the Cold War or to the changing nature of economic development. Consequently, military aggression that results in territorial annexation became increasingly likely to prompt international condemnation, diplomatic censure, a reduction in international aid or the introduction of economic sanctions, or, as in the case of the 1990 invasion of Kuwait by Iraq, international military intervention to reverse the territorial aggression. Similarly, the international community has largely refused to recognize secessionist regions, while keeping some secessionist self-declared states such as Somaliland in diplomatic recognition limbo. While there is not a large body of academic work examining the relationship, Hironaka's statistical study found a correlation that suggests that every major international anti-secessionist declaration increased the number of ongoing civil wars by +10%, or a total of +114% from 1945 to 1997. The diplomatic and legal protection given by the international community, as well as economic support to weak governments and discouragement of secession, thus had the unintended effect of encouraging civil wars. Interventions by outside powers. There has been an enormous amount of international intervention in civil wars since 1945 that, some have argued, served to extend wars. According to Patrick M. Regan in his book "Civil Wars and Foreign Powers" (2000), about two-thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention, with the United States intervening in 35 of these conflicts. While intervention has been practiced for as long as the international system has existed, its nature changed substantially. 
It became common for both the state and the opposition group to receive foreign support, allowing wars to continue well past the point when domestic resources had been exhausted. Superpowers, such as the European great powers, had always felt no compunction about intervening in civil wars that affected their interests, while distant regional powers such as the United States could declare the interventionist Monroe Doctrine of 1823 for events in its Central American "backyard". However, the large population of weak states after 1945 allowed intervention by former colonial powers, regional powers and neighboring states that themselves often had scarce resources. Effectiveness of intervention. The effectiveness of intervention is widely debated, in part because the data suffers from selection bias; as Fortna has argued, peacekeepers select themselves into difficult cases. When controlling for this effect, Fortna holds that peacekeeping is resoundingly successful in shortening wars. However, other scholars disagree. Knaus and Stewart are extremely skeptical as to the effectiveness of interventions, holding that they can only work when they are performed with extreme caution and sensitivity to context, a strategy they label 'principled incrementalism'. Few interventions, for them, have demonstrated such an approach. Other scholars offer more specific criticisms; Dube and Naidu, for instance, show that US military aid, a less conventional form of intervention, seems to be siphoned off to paramilitaries, thus exacerbating violence. Weinstein holds more generally that interventions might disrupt processes of 'autonomous recovery' whereby civil war contributes to state-building. On average, a civil war with interstate intervention was 300% longer than those without. When disaggregated, a civil war with intervention on only one side is 156% longer, while when intervention occurs on both sides the average civil war is longer by an additional 92%. If one of the intervening states was a superpower, a civil war is a further 72% longer; a conflict such as the Angolan Civil War, in which there was two-sided foreign intervention, including by a superpower (actually, two superpowers in the case of Angola), would be 538% longer on average than a civil war without any international intervention. Effect of the Cold War. The Cold War (1947–1991) provided a global network of material and ideological support that often helped perpetuate civil wars, which were mainly fought in weak ex-colonial states rather than the relatively strong states that were aligned with the Warsaw Pact and North Atlantic Treaty Organization. In some cases, superpowers would superimpose Cold War ideology onto local conflicts, while in others local actors using Cold War ideology would attract the attention of a superpower to obtain support. A notable example is the Greek Civil War (1946–1949), which erupted shortly after the end of World War II. This conflict saw the communist-dominated Democratic Army of Greece, supported by Yugoslavia and the Soviet Union, opposing the Kingdom of Greece, which was backed by the United Kingdom and the United States under the Truman Doctrine and the Marshall Plan. Using a separate statistical evaluation from the one used above for interventions, civil wars that included pro- or anti-communist forces lasted 141% longer than the average non-Cold War conflict, while a Cold War civil war that attracted superpower intervention resulted in wars typically lasting over three times as long as other civil wars. 
Conversely, the end of the Cold War, marked by the fall of the Berlin Wall in 1989, resulted in a 92% reduction in the duration of Cold War civil wars or, phrased another way, a roughly ten-fold increase in the rate of resolution of Cold War civil wars. Lengthy Cold War-associated civil conflicts that ground to a halt include the wars of Guatemala (1960–1996), El Salvador (1979–1991) and Nicaragua (1970–1990). Post-2003. According to Barbara F. Walter, post-2003 civil wars are different from previous civil wars in that most are situated in Muslim-majority countries; most of the rebel groups espouse radical Islamist ideas and goals; and most of these radical groups pursue transnational rather than national aims. She attributes this shift to changes in information technology, especially the advent of Web 2.0 in the early 2000s. Effects. Civil wars often have severe economic consequences: two studies estimate that each year of civil war reduces a country's GDP growth by about 2%. Civil war also has a regional effect, reducing the GDP growth of neighboring countries. Civil wars also have the potential to lock the country in a conflict trap, where each conflict increases the likelihood of future conflict.
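The equivalence stated above between a 92% reduction in duration and a roughly ten-fold increase in the rate of resolution is a matter of simple arithmetic; as a hedged illustration (assuming the resolution rate is treated as inversely proportional to average duration, which the source does not spell out):

\[
\text{new duration} = (1 - 0.92)\times\text{old duration}
\quad\Longrightarrow\quad
\frac{\text{new resolution rate}}{\text{old resolution rate}} \approx \frac{1}{0.08} = 12.5,
\]

which is on the order of the ten-fold figure quoted in the text.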
7088
7903804
https://en.wikipedia.org/wiki?curid=7088
List of cryptographers
This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries. Modern. See also: for a more exhaustive list.
7089
49529255
https://en.wikipedia.org/wiki?curid=7089
Chocolate
Chocolate is a food made from roasted and ground cocoa beans that can be a liquid, solid, or paste, consumed on its own or used to flavor other foods. Cocoa beans are the processed seeds of the cacao tree ("Theobroma cacao"). They are usually fermented to develop the flavor, then dried, cleaned, and roasted. The shell is removed to reveal nibs, which are ground to chocolate liquor: unadulterated chocolate in rough form. The liquor can be processed to separate its two components, cocoa solids and cocoa butter, or shaped and sold as unsweetened baking chocolate. By adding sugar, sweetened chocolates are produced, which can be sold simply as dark chocolate, or, with the addition of milk, can be made into milk chocolate. Making milk chocolate with cocoa butter and without cocoa solids produces white chocolate. Chocolate is one of the most popular food types and flavors in the world, and many foodstuffs involving chocolate exist, particularly desserts, including ice creams, cakes, mousse, and cookies. Many candies are filled with or coated with sweetened chocolate. Chocolate bars, either made of solid chocolate or other ingredients coated in chocolate, are eaten as snacks. Gifts of chocolate molded into different shapes (such as eggs, hearts, and coins) are traditional on certain Western holidays, including Christmas, Easter, Valentine's Day, and Hanukkah. Chocolate is also used in cold and hot beverages, such as chocolate milk, hot chocolate and chocolate liqueur. The cacao tree was first used as a source for food in what is today Ecuador at least 5,300 years ago. Mesoamerican civilizations widely consumed cacao beverages, and in the 16th century, one of these beverages, chocolate, was introduced to Europe. Until the 19th century, chocolate was a drink consumed by the societal elite. Thereafter, changes in technology and cocoa production led to chocolate becoming a solid, mass-consumed food. Today, the cocoa beans for most chocolate are produced in West African countries, particularly Ivory Coast and Ghana, which contribute about 60% of the world's cocoa supply. The presence of child labor, particularly child slavery and trafficking, in cocoa bean production in these countries has received significant media attention. Etymology. "Chocolate" is a Spanish loanword, first recorded in English in 1604, and in Spanish in 1579. However, the word's origins beyond this are contentious. Despite a popular belief that "chocolate" derives from the Nahuatl word , early texts documenting the Nahuatl word for chocolate drink use a different term, "," meaning "cacao water". Several alternatives have therefore been proposed. In one, chocolate is derived from the hypothetical Nahuatl word , meaning "bitter drink". Scholars Michael and Sophie Coe consider this unlikely, saying that there is no clear reason why the 'sh' sound represented by 'x' would change to 'ch', or why an 'l' would be added. Another theory suggests that "chocolate" comes from "chocolatl", meaning 'hot water' in a Mayan language. However, there is no evidence of the form 'chocol' being used to mean hot. Despite the uncertainty about its Nahuatl origin, there is some agreement that chocolate likely derives from the Nawat word "chikola:tl". However, whether "chikola:tl" means 'cacao-beater' (referring to whisking cocoa to create foam) is contested, as the meaning of "chico" is unknown. According to anthropologist Kathryn Sampeck, chocolate originally referred to one cacao beverage among many, which included annatto and was made in what is today Guatemala. 
According to Sampeck, it became the generic word for cacao beverages , when the Izalcos from that area were the most notable producers of cacao. History. Evidence for the domestication of the cacao tree exists as early as 5300 BP in South America, in present-day southeast Ecuador by the Mayo-Chinchipe culture, before it was introduced to Mesoamerica. It is unknown when chocolate was first consumed as opposed to other cacao-based drinks, and there is evidence the Olmecs, the earliest known major Mesoamerican civilization, fermented the sweet pulp surrounding the cacao beans into an alcoholic beverage. Chocolate was extremely important to several Mesoamerican societies, and cacao was considered a gift from the gods by the Mayans and the Aztecs. The cocoa bean was used as a currency across civilizations and was used in ceremonies, as a tribute to leaders and gods and as a medicine. Chocolate in Mesoamerica was a bitter drink, flavored with additives such as vanilla, earflower and chili, and was capped with a dark brown foam created by pouring the liquid from a height between containers. While Spanish conquistador Hernán Cortés may have been the first European to encounter chocolate when he observed it in the court of Moctezuma II in 1520, it proved to be an acquired taste, and it took until 1585 for the first official recording of a shipment of cocoa beans to Europe. Chocolate was believed to be an aphrodisiac and medicine, and spread across Europe in the 17th century, sweetened, served warm and flavored with familiar spices. It was initially primarily consumed by the elite, with expensive cocoa supplied by colonial plantations in the Americas. In the 18th century, it was considered southern European, aristocratic and Catholic and was still produced in much the same way as it had been by the Aztecs. Starting in the 18th century, chocolate production was improved. In the 19th century, engine-powered milling was developed, and in 1828, Coenraad Johannes van Houten received a patent for a process making Dutch cocoa. This removed cocoa butter from chocolate liquor (the product of milling), and permitted large-scale production of chocolate. Other developments in the 19th century, including the "melanger" (a mixing machine), modern milk chocolate, and the conching process, which makes chocolate smoother and changes its flavor, meant that a worker in 1890 could produce fifty times more chocolate with the same labor than before the Industrial Revolution, and chocolate became a food to be eaten rather than drunk. As production moved from the Americas to Asia and Africa, mass markets in Western nations for chocolate opened up. In the early 20th century, British chocolate producers including Cadbury and Fry's faced controversy over the labor conditions in the Portuguese cacao industry in Africa. A 1908 report by a Cadbury agent described conditions as "de facto slavery." While conditions somewhat improved with a boycott by chocolate makers, slave labor among African cacao growers again gained public attention in the early 21st century. During the 20th century, chocolate production further developed, with the introduction of the tempering technique to improve the snap and gloss of chocolate and the addition of lecithin to improve texture and consistency. White and couverture chocolate were developed in the 20th century and the bean-to-bar trade model began. Types. Several types of chocolate can be distinguished. 
Pure, unsweetened chocolate, often called "baking chocolate", contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, which combines chocolate with sugar. Eating chocolate. The traditional types of chocolate are dark, milk and white. All of them contain cocoa butter, which is the ingredient defining the physical properties of chocolate (consistency and melting temperature). Plain (or dark) chocolate, as its name suggests, is a form of chocolate that is similar to pure cocoa liquor, although it is usually made with a slightly higher proportion of cocoa butter. It is simply defined by its cocoa percentage. In milk chocolate, the non-fat cocoa solids are partly or mostly replaced by milk solids. In white chocolate, they are all replaced by milk solids, hence its ivory color. Other forms of eating chocolate exist; these include raw chocolate (made with unroasted beans) and ruby chocolate. An additional popular form of eating chocolate, gianduja, is made by incorporating nut paste (typically hazelnut) into the chocolate paste. Other types. Other types of chocolate are used in baking and confectionery. These include baking chocolate (often unsweetened), couverture chocolate (used for coating), compound chocolate (a lower-cost alternative) and modeling chocolate. Modeling chocolate is a chocolate paste made by melting chocolate and combining it with corn syrup, glucose syrup, or golden syrup. Cacao. Chocolate is made from cocoa beans, the dried and often fermented seeds of the cacao tree ("Theobroma cacao"), a small evergreen tree native to South America. The most common genotype originated in the Amazon basin, and was gradually transported by humans throughout South and Central America. Early forms of another genotype have also been found in what is now Venezuela. The scientific name, "Theobroma", means "food of the gods". The fruit, called a cocoa pod, is ovoid, long and wide, ripening yellow to orange, and weighing about when ripe. Cacao trees are small, understory trees that need rich, well-drained soils. They naturally grow within 20° of either side of the equator because they need about 2000 mm of rainfall a year, and temperatures in the range of . Cacao trees cannot tolerate a temperature lower than . The genome of the cacao tree was sequenced in 2010. Traditionally, cacao was understood to be divided into three varieties: Criollo, Forastero, and Trinitario. New genetic research has not found a genetic backing for this division, and has instead identified eleven genetic clusters. Processing. Cocoa pods are harvested by cutting them from the tree using a machete, or by knocking them off the tree using a stick. Pods are harvested when they are ripe, as beans in unripe pods have a low cocoa butter content, or low sugar content, impacting the ultimate flavor. Cocoa beans. The beans (which are sterile within their pods) and their surrounding pulp are removed from the pods and placed in piles or bins to ferment. Micro-organisms, present naturally in the environment, ferment the seeds. Yeasts produce ethanol, lactic acid bacteria produce lactic acid, and acetic acid bacteria produce acetic acid. The fermentation process, which takes up to seven days, produces several flavor precursors that eventually provide the chocolate taste. After fermentation, the beans are dried to prevent mold growth. Where the weather permits it, this is done by spreading the beans out in the sun for five to seven days. 
The dried beans are then transported to a chocolate manufacturing facility. The beans are cleaned (removing twigs, stones, and other debris), roasted, and graded. Next, the shell of each bean is removed to extract the nib. From nibs to chocolate. The nibs are then ground, producing chocolate liquor. The liquor can be further processed into cocoa solids and cocoa butter. The penultimate process is called conching. A conche is a container filled with metal beads, which act as grinders. The refined and blended chocolate mass is kept in a liquid state by frictional heat. Before conching, chocolate has an uneven and gritty texture. The conching process produces cocoa and sugar particles smaller than the tongue can detect (typically around 20 μm) and reduces rough edges, hence the smooth feel in the mouth. The length of the conching process determines the final smoothness and quality of the chocolate. After the process is complete, the chocolate mass is stored in tanks heated to about until final processing. After conching, chocolate is tempered to crystallize a small amount of fat, allowing the remaining fats to crystallize with an overall gloss. After chocolate has been tempered, it is molded into different shapes, including chocolate bars and chocolate chips. Storage. Chocolate is very sensitive to temperature and humidity. Ideal storage temperatures are between , with a relative humidity of less than 50%. If refrigerated or frozen without containment, chocolate can absorb enough moisture to cause a whitish discoloration, the result of fat or sugar crystals rising to the surface. Various types of "blooming" effects can occur if chocolate is stored or served improperly. Chocolate bloom is caused by storage temperature fluctuating or exceeding , while sugar bloom is caused by temperature below or excess humidity. A fat bloom can be distinguished by touch; it disappears if the surface of affected chocolate is lightly rubbed. Although visually unappealing, chocolate suffering from bloom is safe for consumption and its taste is unaffected. Bloom can be reversed by retempering the chocolate or by using it in any application that requires melting the chocolate. Chocolate is generally stored away from other foods, as it can absorb aromas. To avoid this, chocolate is packed or wrapped, then stored in darkness, in ideal humidity and temperature conditions. Health effects. Nutrition. One hundred grams of milk chocolate supplies 540 calories. It is 59% carbohydrates (52% as sugar and 3% as dietary fiber), 30% fat and 8% protein. Approximately 65% of the fat in milk chocolate is saturated, mainly palmitic acid and stearic acid, while the predominant unsaturated fat is oleic acid. One hundred grams of milk chocolate is an "excellent source" (over 19% of the Daily Value, DV) of riboflavin, vitamin B12 and the dietary minerals manganese, phosphorus and zinc. Chocolate is a "good source" (10–19% DV) of calcium, magnesium and iron. Phytochemicals. Chocolate contains polyphenols, especially flavan-3-ols (catechins) and smaller amounts of other flavonoids. It also contains alkaloids, such as theobromine, phenethylamine, and caffeine, which are under study for their potential effects in the body. Heavy metals. It is unlikely that chocolate consumption in small amounts causes lead poisoning. Some studies have shown that lead may bind to cocoa shells, and contamination may occur during the manufacturing process. 
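As a rough cross-check of the milk chocolate nutrition figures above, the stated energy content can be approximated from the macronutrient percentages using the standard Atwater factors (about 4 kcal per gram of carbohydrate or protein and 9 kcal per gram of fat); the sketch below is illustrative only, and the per-100 g values are taken from the text rather than from any nutrition database.

```python
# Back-of-the-envelope check of the milk chocolate figures quoted above.
carbs_g, fat_g, protein_g = 59, 30, 8  # grams per 100 g, as stated in the text

# Atwater factors: ~4 kcal/g for carbohydrate and protein, ~9 kcal/g for fat.
kcal_estimate = carbs_g * 4 + fat_g * 9 + protein_g * 4
print(kcal_estimate)  # 538, close to the ~540 kcal per 100 g cited above
```

The small remaining gap is expected, since fiber and other minor components contribute slightly different energy values than the rounded Atwater factors assume.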
One study showed the mean lead level in milk chocolate candy bars was 0.027 μg lead per gram of candy; another study found that some chocolate purchased at U.S. supermarkets contained up to 0.965 μg per gram, close to the international (voluntary) standard limit for lead in cocoa powder or beans, which is 1 μg of lead per gram. In 2006, the U.S. FDA lowered to one-fifth the amount of lead permissible in candy, but compliance is only voluntary. Studies concluded that "children, who are big consumers of chocolates, may be at risk of exceeding the daily limit of lead, [as] one 10 g cube of dark chocolate may contain as much as 20% of the daily lead oral limit. Moreover chocolate may not be the only source of lead in their nutrition" and "chocolate might be a significant source of cadmium and lead ingestion, particularly for children." According to a 2005 study, the average lead concentration of cocoa beans is ≤ 0.5 ng/g, which is one of the lowest reported values for a natural food. However, during cultivation and production, chocolate may absorb lead from the environment (such as in atmospheric emissions of now unused leaded gasoline). The European Food Safety Authority recommended a tolerable weekly intake for cadmium of 2.5 micrograms per kg of body weight for Europeans, and indicated that chocolate products accounted for about 4% of dietary cadmium exposure across all foods eaten. Maximum levels for baby foods and chocolate/cocoa products were established under Commission Regulation (EU) No 488/2014. California's Proposition 65 (1986) requires a warning label on chocolate products having more than 4.1 μg of cadmium per daily serving of a single product. Caffeine. One tablespoonful (5 grams) of dry unsweetened cocoa powder has 12.1 mg of caffeine and a 25-g single serving of dark chocolate has 22.4 mg of caffeine. This is much less than the amount found in coffee, of which a single 7 oz. (200 ml) serving may contain 80–175 mg of caffeine, though studies have shown psychoactive effects in caffeine doses as low as 9 mg, and a dose as low as 12.5 mg was shown to have effects on cognitive performance. Theobromine and oxalate. Chocolate may be a factor for heartburn in some people because one of its constituents, theobromine, may affect the esophageal sphincter muscle in a way that permits stomach acids to enter the esophagus. Theobromine poisoning is an overdosage reaction to the bitter alkaloid, which happens more frequently in domestic animals than in humans. However, daily intake of 50–100 g cocoa (0.8–1.5 g theobromine) by humans has been associated with sweating, trembling, and severe headache. Chocolate and cocoa contain moderate to high amounts of oxalate, which may increase the risk of kidney stones. Non-human animals. In sufficient amounts, the theobromine found in chocolate is toxic to animals such as cats, dogs, horses, parrots, and small rodents because they are unable to metabolize the chemical effectively. If animals are fed chocolate, the theobromine may remain in the circulation for up to 20 hours, possibly causing epileptic seizures, heart attacks, internal bleeding, and eventually death. Medical treatment performed by a veterinarian involves inducing vomiting within two hours of ingestion and administration of benzodiazepines or barbiturates for seizures, antiarrhythmics for heart arrhythmias, and fluid diuresis. 
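To make the dose figures in the following paragraph concrete, here is a minimal sketch of the kind of calculation a reader might do. The roughly 1.3 g of baker's chocolate per kilogram of body weight threshold is the Merck Veterinary Manual figure cited below, the example dog weight is arbitrary, and the function name is purely illustrative; this is not veterinary guidance.

```python
# Illustrative estimate of the amount of baker's chocolate associated with the
# onset of toxicity symptoms in dogs, using the ~1.3 g/kg figure cited below.
def symptom_threshold_grams(dog_weight_kg: float, grams_per_kg: float = 1.3) -> float:
    """Approximate mass of baker's chocolate (grams) at which symptoms may appear."""
    return dog_weight_kg * grams_per_kg

print(symptom_threshold_grams(20.0))  # ~26 g for an example 20 kg dog
```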
A typical dog will normally experience great intestinal distress after eating less than of dark chocolate, but will not necessarily experience bradycardia or tachycardia unless it eats at least a half a kilogram (1.1 lb) of milk chocolate. Dark chocolate has 2 to 5 times more theobromine and thus is more dangerous to dogs. According to the Merck Veterinary Manual, approximately 1.3 grams of baker's chocolate per kilogram of a dog's body weight (0.02 oz/lb) is sufficient to cause symptoms of toxicity. For example, a typical baker's chocolate bar would be enough to bring about symptoms in a dog. In the 20th century, there were reports that mulch made from cocoa bean shells can be dangerous to dogs and livestock. Research. Commonly consumed chocolate is high in fat and sugar, which are associated with an increased risk for obesity when chocolate is consumed in excess. Overall evidence is insufficient to determine the relationship between chocolate consumption and acne. Various studies point not to chocolate, but to the high glycemic nature of certain foods, like sugar, corn syrup, and other simple carbohydrates, as potential causes of acne, along with other possible dietary factors. Food, including chocolate, is not typically viewed as addictive. Some people, however, may want or crave chocolate, leading to the self-descriptive term "chocoholic". According to some popular myths, chocolate is a mood enhancer, such as by increasing sex drive or stimulating cognition, but there is little scientific evidence that such effects are consistent among all chocolate consumers. If mood improvement from eating chocolate occurs, there is not enough research to indicate whether it results from the favorable flavor or from the stimulant effects of its constituents, such as caffeine, theobromine, or their parent molecule, methylxanthine. A 2019 review reported that chocolate consumption does not improve depressive mood. Reviews support a short-term effect of lowering blood pressure by consuming cocoa products, but there is no evidence of long-term cardiovascular health benefit. Chocolate and cocoa are under preliminary research to determine if consumption affects the risk of certain cardiovascular diseases or enhances cognitive abilities. While daily consumption of cocoa flavanols (minimum dose of 200 mg) appears to benefit platelet and vascular function, there is no good evidence to indicate an effect on heart attacks or strokes. Research has also shown that consuming dark chocolate does not substantially affect blood pressure. Labeling. Some manufacturers provide the percentage of chocolate in a finished chocolate confection as a label quoting the percentage of "cocoa" or "cacao". This refers to the combined percentage of both cocoa solids and cocoa butter in the bar, not just the percentage of cocoa solids. The Belgian AMBAO certification mark indicates that no non-cocoa vegetable fats have been used in making the chocolate. A long-standing dispute between Britain on one side and Belgium and France on the other over British use of vegetable fats in chocolate ended in 2000 with the adoption of new standards which permitted the use of up to five percent vegetable fats in clearly labelled products. Chocolates that are organic or fair trade certified carry labels accordingly. Legal definitions. In the US, the Food and Drug Administration does not allow a product to be referred to as "chocolate" if the product contains certain ingredients. 
In the EU a product can be sold as chocolate if it contains up to 5% vegetable oil, and must be labeled as "family milk chocolate" rather than "milk chocolate" if it contains 20% milk. According to Canadian Food and Drug Regulations, a "chocolate product" is a food product that is sourced from at least one "cocoa product" and contains at least one of the following: "chocolate, bittersweet chocolate, semi-sweet chocolate, dark chocolate, sweet chocolate, milk chocolate, or white chocolate". A "cocoa product" is defined as a food product that is sourced from cocoa beans and contains "cocoa nibs, cocoa liquor, cocoa mass, unsweetened chocolate, bitter chocolate, chocolate liquor, cocoa, low-fat cocoa, cocoa powder, or low-fat cocoa powder". Industry. Chocolate is a steadily growing, US$50 billion-a-year worldwide business as of 2009. As of 2006, Europe accounted for 45% of the world's chocolate revenue, and the US spent $20 billion in 2013. Big Chocolate is a grouping of major international chocolate companies in Europe and the US. In 2004, Mars and Hershey's alone accounted for two-thirds of US production. As of 2007, roughly two-thirds of the world's cocoa was produced in West Africa, with 43% sourced from Ivory Coast, which commonly used child labor. That year, some 50 million people around the world depended on cocoa as a source of livelihood. In the UK, most chocolatiers purchase their chocolate from chocolate makers, to melt, mold and package to their own design. As of 2012, the Ivory Coast was the largest producer of cocoa in the world. The two main jobs associated with creating chocolate candy are those of chocolate maker and chocolatier. Chocolate makers use harvested cocoa beans and other ingredients to produce couverture chocolate (covering). Chocolatiers use the finished couverture to make chocolate candies (bars, truffles, etc.). Production costs can be decreased by reducing cocoa solids content or by substituting cocoa butter with another fat. Cocoa growers object to allowing the resulting food to be called "chocolate", due to the risk of lower demand for their crops. Manufacturers. Chocolate manufacturers produce a range of products from chocolate bars to fudge. Large manufacturers of chocolate products include Cadbury, Ferrero, Guylian, The Hershey Company, Lindt & Sprüngli, Mars, Incorporated, Milka, Neuhaus and Suchard. Guylian is best known for its chocolate sea shells; Cadbury for its Dairy Milk and Creme Egg. The Hershey Company, the largest chocolate manufacturer in North America, produces the Hershey Bar and Hershey's Kisses. Mars Incorporated, a large privately owned U.S. corporation, produces Mars Bar, Milky Way, M&M's, Twix, and Snickers. Lindt is known for its truffle balls and gold foil-wrapped Easter bunnies. Food conglomerates Nestlé SA and Mondelēz both have chocolate brands. Nestlé acquired Rowntree's in 1988 and now markets chocolates under its own brand, including Smarties (a chocolate candy) and Kit Kat (a chocolate bar); Kraft Foods, through its 1990 acquisition of Jacobs Suchard, now owns Milka and Suchard. Fry's, Trebor Bassett and the fair trade brand Green & Black's also belong to the group. Child labor in cocoa harvesting. The widespread use of children in cocoa production is controversial, not only for the concerns about child labor and exploitation, but also because, according to a 2002 estimate, up to 12,000 of the 200,000 children then working in the Ivory Coast cocoa industry may have been victims of trafficking or slavery. 
Most attention on this subject has focused on West Africa, which collectively supplies 69 percent of the world's cocoa, and the Ivory Coast in particular, which supplies 35 percent of the world's cocoa. Thirty percent of children under age 15 in sub-Saharan Africa are child laborers, mostly in agricultural activities including cocoa farming. Major chocolate producers, such as Nestlé, buy cocoa at commodities exchanges where Ivorian cocoa is mixed with other cocoa. As of 2017, approximately 2.1 million children in Ghana and Ivory Coast were involved in farming cocoa, carrying heavy loads, clearing forests, and being exposed to pesticides. As of 2018, a 3-year pilot program – conducted by Nestlé with 26,000 farmers mostly located in Ivory Coast – observed a 51% decrease in the number of children doing hazardous jobs in cocoa farming. The US Department of Labor formed the Child Labor Cocoa Coordinating Group as a public-private partnership with the governments of Ghana and Ivory Coast to address child labor practices in the cocoa industry. The International Cocoa Initiative involving major cocoa manufacturers established the Child Labor Monitoring and Remediation System to monitor thousands of farms in Ghana and Ivory Coast for child labor conditions, but the program reached less than 20% of the child laborers. In April 2018, the Cocoa Barometer report stated: "Not a single company or government is anywhere near reaching the sector-wide objective of the elimination of child labor, and not even near their commitments of a 70% reduction of child labor by 2020". They cited persistent poverty, the absence of schools, increasing world cocoa demand, more intensive farming of cocoa, and continued exploitation of child labor. Fair trade. In the 2000s, some chocolate producers began to engage in fair trade initiatives, to address concerns about the low pay of cocoa laborers in developing countries. Traditionally, Africa and other developing countries received low prices for their exported commodities such as cocoa, which caused poverty. Fairtrade seeks to establish a system of direct trade from developing countries to counteract this. One solution for fair labor practices is for farmers to become part of an agricultural cooperative. Cooperatives pay farmers a fair price for their cocoa so farmers have enough money for food, clothes, and school fees. One of the main tenets of fair trade is that farmers receive a fair price, but this does not mean that the larger amount of money paid for fair trade cocoa goes directly to the farmers. The effectiveness of fair trade has been questioned. In a 2014 article, "The Economist" stated that workers on fair trade farms have a lower standard of living than on similar farms outside the fair trade system, based on a study of tea and coffee farmers in Uganda and Ethiopia. Usage and consumption. Chocolate is sold in chocolate bars, which come in dark chocolate, milk chocolate and white chocolate varieties. Some bars that are mostly chocolate have other ingredients blended into the chocolate, such as nuts, raisins, or crisped rice. Chocolate is used as an ingredient in a huge variety of bars, which typically contain various confectionery ingredients (e.g., nougat, wafers, caramel, nuts, etc.) which are coated in chocolate. Chocolate is used as a flavoring product in many desserts, such as chocolate cakes, chocolate brownies, chocolate mousse and chocolate chip cookies. 
Numerous types of candy and snacks contain chocolate, either as a filling (e.g., M&M's) or as a coating (e.g., chocolate-coated raisins or chocolate-coated peanuts). Some non-alcoholic beverages contain chocolate, such as chocolate milk, hot chocolate, chocolate milkshakes and tejate. Some alcoholic liqueurs are flavored with chocolate, such as chocolate liqueur and crème de cacao. Chocolate is a popular flavor of ice cream and pudding, and chocolate sauce is commonly added as a topping on ice cream sundaes. The caffè mocha is an espresso beverage containing chocolate. Eating experience. The experience of eating chocolate varies with the ingredients used. More sugary chocolates have a flavor that is more immediately apparent, while chocolates with higher cocoa percentages have flavors that take longer to be perceived but stay on the palate for longer. These chocolates with more cocoa are increasingly bitter. Society and culture. Chocolate is perceived to be different things at different times, including a sweet treat, a luxury product, a consumer good and a mood enhancer, the latter reputation in part driven by marketing. Chocolate is a popular metaphor for the black racial category. It has connotations of transgression and sexuality and is gendered as feminine. In the US there is a cultural practice of women consuming chocolate in secret, alone and with other women. Children use chocolate as a euphemism for feces. Chocolate is popularly understood to have "exotic" origins. In China, chocolate is considered "heaty" and avoided in hot weather. Chocolate is associated with festivals such as Easter, when molded chocolate rabbits and eggs are traditionally given in Christian communities, and Hanukkah, when chocolate coins are given in Jewish communities. Chocolate hearts and chocolate in heart-shaped boxes are popular on Valentine's Day and are often presented along with flowers and a greeting card. Boxes of filled chocolates quickly became associated with the holiday. Chocolate is an acceptable gift on other holidays and on occasions such as birthdays. Many confectioners make holiday-specific chocolate candies. Chocolate Easter eggs or rabbits and Santa Claus figures are two examples. Such confections can be solid, hollow, or filled with sweets or fondant. In 1964, Roald Dahl published a children's novel titled "Charlie and the Chocolate Factory". The novel centers on a poor boy named Charlie Bucket who takes a tour through the greatest chocolate factory in the world, owned by the eccentric Willy Wonka. Two film adaptations of the novel were produced: "Willy Wonka & the Chocolate Factory" (1971) and "Charlie and the Chocolate Factory" (2005). A third adaptation, an origin prequel film titled "Wonka", was released in 2023. "Chocolat", a 1999 novel by Joanne Harris, was adapted for the film "Chocolat", which was released a year later. Some artists have utilized chocolate in their art; Dieter Roth was influential in this, beginning with his 1960s works casting human and animal figures in chocolate, which used the chocolate's inevitable decay to comment on contemporary attitudes towards the permanence of museum displays. Other works have played on the audience's ability to consume displayed chocolate, encouraged in Sonja Alhäuser's "Exhibition Basics" (2001) and painfully disallowed in Edward Ruscha's "Chocolate Room" (1970). 
In the 1980s and 90s, performance artists Karen Finley and Janine Antoni used chocolate's popular cultural associations with excrement and consumption, and with desirability, respectively, to comment on the status of women in society. Flavors. Mint chocolate (or chocolate mint) is a popular flavor of chocolate, made by adding a mint flavoring, such as peppermint, spearmint, or crème de menthe, to chocolate. Mint chocolate can be found in a wide variety of confectionery items, such as candy, mints, cookies, mint chocolate chip ice cream, hot chocolate, and others. In addition, it is marketed in a non-edible format as cosmetics. Depending on the ingredients and the process used, mint chocolate can give off a very distinctive mint fragrance. The chocolate component can be milk chocolate, regular dark chocolate, or white chocolate; due to this, mint chocolate has no one specific flavor, and so each chocolate-plus-flavor combination can be unique. The U.S. National Confectioners Association lists February 19 as "Chocolate Mint Day".
7100
4904587
https://en.wikipedia.org/wiki?curid=7100
Cornet
The cornet is a brass instrument similar to the trumpet but distinguished from it by its conical bore, more compact shape, and mellower tone quality. The most common cornet is a transposing instrument in B♭. There is also a soprano cornet in E♭ and cornets in A and C. All are unrelated to the Renaissance and early Baroque cornett. History. The cornet was derived from the posthorn by applying valves to it in the 1820s. Early cornets used Stölzel valves; by the 1830s, Parisian makers were using the improved Périnet piston valves. Cornets first appeared as separate instrumental parts in 19th-century French compositions. The instrument could not have been developed without the improvement of piston valves by Silesian horn players Friedrich Blühmel (or Blümel) and Heinrich Stölzel in the early 19th century. These two instrument makers almost simultaneously invented valves, though it is likely that Blühmel was the inventor, while Stölzel developed a practical instrument. They were jointly granted a patent for a period of ten years. François Périnet received a patent in 1838 for an improved valve, which became the model for modern brass instrument piston valves. The first notable virtuoso player was Jean-Baptiste Arban, who studied the cornet extensively and published , commonly referred to as the "Arban method", in 1864. Up until the early 20th century, the trumpet and cornet co-existed in musical ensembles; symphonic repertoire often involves separate parts for trumpet and cornet. As several instrument builders made improvements to both instruments, they started to look and sound more alike. The modern-day cornet is used in brass bands, concert bands, and in specific orchestral repertoire that requires a more mellow sound. The name "cornet" derives from the French "corne", meaning "horn", itself from Latin "cornu". While not musically related, instruments of the Zink family (which includes serpents) are named "cornetto" or "cornett" in modern English, to distinguish them from the valved cornet described here. The 11th edition of the "Encyclopædia Britannica" referred to serpents as "old wooden cornets". The Roman/Etruscan cornu (or simply "horn") is the linguistic ancestor of these. It is a predecessor of the post horn, from which the cornet evolved, and was used like a bugle to signal orders on the battlefield. Relationship to trumpet. The cornet's valves allowed for melodic playing throughout the instrument's register. Trumpets were slower to adopt the new valve technology, so for 100 years or more, composers often wrote separate parts for trumpet and cornet. The trumpet would play fanfare-like passages, while the cornet played more melodic ones. The modern trumpet has valves that allow it to play the same notes and fingerings as the cornet. Cornets and trumpets made in a given key (usually the key of B♭) play at the same pitch, and the technique for playing the instruments is nearly identical. However, cornets and trumpets are not entirely interchangeable, as they differ in timbre. Also available, but usually seen only in the brass band, is an E♭ soprano model, pitched a fourth above the standard B♭. Unlike the trumpet, which has a cylindrical bore up to the bell section, the tubing of the cornet has a mostly conical bore, starting very narrow at the mouthpiece and gradually widening towards the bell. Cornets following the 1913 patent of E. A. Couturier can have a continuously conical bore. 
This shape is primarily responsible for the instrument's characteristic warm, mellow tone, which can be distinguished from the more penetrating sound of the trumpet. The conical bore of the cornet also makes it more agile than the trumpet when playing fast passages, but correct pitching is often less assured. The cornet is often preferred for young beginners as it is easier to hold, with its centre of gravity much closer to the player. The cornet mouthpiece has a shorter and narrower shank than that of a trumpet, so it can fit the cornet's smaller mouthpiece receiver. The cup size is often deeper than that of a trumpet mouthpiece. One variety is the short-model traditional cornet, also known as a "Shepherd's Crook" shaped model. These are most often large-bore instruments with a rich mellow sound. There is also a long-model, or "American-wrap" cornet, often with a smaller bore and a brighter sound, which is produced in a variety of different tubing wraps and is closer to a trumpet in appearance. The Shepherd's Crook model is preferred by cornet traditionalists. The long-model cornet is generally used in concert bands in the United States and has found little following in British-style brass and concert bands. A third, and relatively rare variety—distinct from the "American-wrap" cornet—is the "long cornet", which was produced in the mid-20th century by C. G. Conn and F. E. Olds and is visually nearly indistinguishable from a trumpet, except that it has a receiver fashioned to accept cornet mouthpieces. Echo cornet. The echo cornet has been called an obsolete variant. It has a mute chamber (or echo chamber) mounted to the side, acting as a second bell when the fourth valve is pressed. The second bell has a sound similar to that of a Harmon mute and is typically used to play echo phrases, whereby the player imitates the sound from the primary bell using the echo chamber. Playing technique. Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates ("buzzes") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture, or embouchure, and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible. Without valves, the player could produce only a harmonic series of notes, like those played by the bugle and other "natural" brass instruments. These notes are far apart for most of the instrument's range, making diatonic and chromatic playing impossible, except in the extreme high register. The valves change the length of the vibrating column and provide the cornet with the ability to play chromatically. Ensembles with cornets. Brass band. British brass bands consist only of brass instruments and a percussion section. The cornet is the leading melodic instrument in this ensemble; trumpets are never used. The ensemble consists of about thirty musicians, including nine B♭ cornets and one E♭ cornet (soprano cornet). In the UK, companies such as Besson and Boosey & Hawkes specialized in instruments for brass bands. In America, 19th-century manufacturers such as Graves and Company, Hall and Quinby, E. G. Wright, and the Boston Musical Instrument Manufactury made instruments for this ensemble. Concert band. 
The cornet features in the British-style concert band, and early American concert band pieces, particularly those written or transcribed before 1960, often feature distinct, separate parts for trumpets and cornets. Cornet parts are rarely included in later American pieces, however, and they are replaced in modern American bands by the trumpet. This slight difference in instrumentation derives from the British concert band's heritage in military bands, where the highest brass instrument is always the cornet. There are usually four to six B♭ cornets present in a British concert band, but no E♭ instrument, as this role is taken by the E♭ clarinet. Fanfareorkest. Fanfareorkesten ("fanfare orchestras"), found only in the Netherlands, Belgium, northern France, and Lithuania, use the complete saxhorn family of instruments. The standard instrumentation includes both the cornet and the trumpet; however, in recent decades, the cornet has largely been replaced by the trumpet. Jazz ensemble. In old-style jazz bands, the cornet was preferred to the trumpet, but from the swing era onwards, it has been largely replaced by the louder, more piercing trumpet. Likewise, the cornet has been largely phased out of big bands by a growing taste for louder and more aggressive instruments, especially since the advent of bebop in the post-World War II era. Jazz pioneer Buddy Bolden played the cornet, and Louis Armstrong started off on the instrument, but his switch to the trumpet is often credited with the beginning of the trumpet's dominance in jazz. Cornetists such as Bubber Miley and Rex Stewart contributed substantially to the Duke Ellington Orchestra's early sound. Other influential jazz cornetists include Freddie Keppard, King Oliver, Bix Beiderbecke, Ruby Braff, Bobby Hackett, and Nat Adderley. Notable performances on cornet by players generally associated with the trumpet include Freddie Hubbard's on "Empyrean Isles", by Herbie Hancock, and Don Cherry's on "The Shape of Jazz to Come", by Ornette Coleman. The band Tuba Skinny is led by cornetist Shaye Cohn. Symphony orchestra. Soon after its invention, the cornet was introduced into the symphony orchestra, supplementing the trumpets. The use of valves meant they could play a full chromatic scale in contrast with trumpets, which were still restricted to the harmonic series. In addition, their tone was found to unify the horn and trumpet sections. Hector Berlioz was the first significant composer to use them in these ways, and his orchestral works often use pairs of both trumpets and cornets, the latter playing more of the melodic lines. In his "Symphonie fantastique" (1830), he added a counter-melody for a solo cornet in the second movement. Cornets continued to be used, particularly in French compositions, well after the valve trumpet was common. They blended well with other instruments and were held to be better suited to certain types of melody. Tchaikovsky used them effectively this way in his "Capriccio Italien" (1880). From the early 20th century, the cornet and trumpet combination was still favored by some composers, including Edward Elgar and Igor Stravinsky, but tended to be used for occasions when the composer wanted the specific mellower and more agile sound. 
The sounds of the cornet and trumpet have grown closer together over time, and the former is now rarely used as an ensemble instrument: in the first version of his ballet "Petrushka" (1911), Stravinsky gives a celebrated solo to the cornet; in the 1946 revision, he removed cornets from the orchestration and instead assigned the solo to the trumpet.
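The harmonic-series and valve arithmetic described under Playing technique above can be made concrete with a short calculation. The following Python sketch is a minimal illustration under stated assumptions rather than sourced content: it assumes a nominal B♭ cornet fundamental of roughly 116.5 Hz, equal-tempered semitones (a frequency ratio of 2^(1/12) per semitone), and the conventional valve tunings in which the second valve lowers the open pitch by about one semitone, the first by two, and the third by three.

# Illustrative sketch (assumptions as stated above): open-horn harmonics and
# valve combinations for a B-flat cornet.

SEMITONE = 2 ** (1 / 12)          # equal-tempered semitone ratio
FUNDAMENTAL_HZ = 116.54           # assumed pedal B-flat2 of a B-flat cornet

def harmonic_series(fundamental_hz, count=8):
    """Valveless notes: integer multiples of the fundamental."""
    return [n * fundamental_hz for n in range(1, count + 1)]

def valved_pitch(open_hz, valves):
    """Lower an open-horn pitch by the semitones added by the engaged valves."""
    semitones_down = {1: 2, 2: 1, 3: 3}   # nominal effect of each valve
    total = sum(semitones_down[v] for v in valves)
    return open_hz / (SEMITONE ** total)

if __name__ == "__main__":
    # Without valves, only these widely spaced notes are available.
    print([round(f, 1) for f in harmonic_series(FUNDAMENTAL_HZ)])
    # The valve combinations fill in the semitones below each harmonic; for
    # example, the 4th harmonic (B-flat4, about 466 Hz) lowered by valves 1+2
    # (three semitones) gives roughly G4 (about 392 Hz).
    print(round(valved_pitch(4 * FUNDAMENTAL_HZ, (1, 2)), 1))

Run as written, the sketch prints the first eight open-horn frequencies followed by approximately 392.0, showing why valveless playing is confined to widely spaced harmonics and how the valve combinations supply the missing chromatic notes.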
7104
47221383
https://en.wikipedia.org/wiki?curid=7104
Cotton Mather
Cotton Mather (February 12, 1663 – February 13, 1728) was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, then part of the Massachusetts Bay Colony, where he preached for the rest of his life. He has been referred to as the "first American Evangelical". A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor of New England appointed by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book "Wonders of the Invisible World" (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his "Magnalia Christi Americana" (1702). Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710. A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization. He also researched the variolation method of inoculation as a means of preventing smallpox contagion, which he learned about from an African slave whom he owned, Onesimus. He dispatched multiple reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox caused violent controversy in Boston during the outbreak of 1721. Scientist and United States Founding Father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book "Bonifacius", or "Essays to Do Good" (1710) as a major influence on his life. Early life and education. Cotton Mather was born in 1663 in the city of Boston, the capital of the Massachusetts Bay Colony, to the Rev. Increase Mather and his wife Maria "née" Cotton. His grandfathers were Richard Mather and John Cotton, both of them prominent Puritan ministers who had played major roles in the establishment and growth of the Massachusetts colony. Richard Mather was a graduate of the University of Oxford and John Cotton a graduate of the University of Cambridge. Increase Mather was a graduate of Harvard College and Trinity College Dublin, and served as the minister of Boston's original North Church (not to be confused with the Anglican Old North Church of Paul Revere fame). This was one of the two principal Congregationalist churches in the city, the other being the First Church established by John Winthrop. Cotton Mather was therefore born into one of the most influential and intellectually distinguished families in colonial New England and seemed destined to follow his father and grandfathers into the Puritan clergy. 
Cotton entered Harvard College, in the neighboring town of Cambridge, in 1674. Aged only eleven and a half, he remains the youngest student ever admitted to that institution. At around this time, Cotton began to be afflicted by stuttering, a speech disorder that he would struggle to overcome throughout the rest of his life. Bullied by the older students and fearing that his stutter would make him unsuitable as a preacher, Cotton withdrew temporarily from the college, continuing his education at home. He also took an interest in medicine and considered the possibility of pursuing a career as a physician rather than as a religious minister. Cotton eventually returned to Harvard and received his Bachelor of Arts degree in 1678, followed by a Master of Arts degree in 1681, the same year his father became Harvard President. At Harvard, Cotton studied Hebrew and the sciences. After completing his education, Cotton joined his father's church as assistant pastor. In 1685, Cotton was ordained and assumed full responsibilities as co-pastor of the church. Father and son continued to share responsibility for the care of the congregation until the death of Increase in 1723. Cotton would die less than five years after his father, and he therefore spent most of his career in the shadow of the respected and formidable Increase. When Increase Mather became president of Harvard in 1692, he exercised considerable influence on the politics of the Massachusetts colony. Despite Cotton's efforts, he never became quite as influential as his father. One of the most public displays of their strained relationship emerged during the Salem witch trials, which Increase Mather reportedly did not support. Cotton did surpass his father's output as a writer, producing nearly 400 works. Personal life. Cotton Mather married Abigail Phillips, daughter of Colonel John Phillips of Charlestown, on May 4, 1686, when Cotton was twenty-three and Abigail was not quite sixteen years old. They had eight children. Abigail died of smallpox in 1702, having previously suffered a miscarriage. He married widow Elizabeth Hubbard in 1703. As in his first marriage, he was happily married to a very religious and emotionally stable woman. They had six children. Elizabeth, the couple's newborn twins, and a two-year-old daughter, Jerusha, all succumbed to a measles epidemic in 1713. On July 5, 1715, Mather married widow Lydia Lee George. Her daughter Katherine, wife of Nathan Howell, became a widow shortly after Lydia married Mather and she came to live with the newly married couple. Also living in the Mather household at that time were Mather's children Abigail (21), Hannah (18), Elizabeth (11), and Samuel (9). Initially, Mather wrote in his journal how lovely he found his wife and how much he enjoyed their discussions about scripture. Within a few years of their marriage, Lydia was subject to rages which left Mather humiliated and depressed. They clashed over Mather's piety and his mishandling of Nathan Howell's estate. He began to call her deranged. She left him for ten days, returning when she learned that Mather's son Increase was lost at sea. Lydia nursed him through illnesses, the last of which lasted five weeks and ended with his death on February 13, 1728. Of the children that Mather had with Abigail and Elizabeth, only Hannah and Samuel survived him. He did not have any children with Lydia. Revolt of 1689. 
On May 14, 1686, ten days after Cotton Mather's marriage to Abigail Phillips, Edward Randolph disembarked in Boston bearing letters patent from King James II of England that revoked the Charter of the Massachusetts Bay Company and commissioned Randolph to reorganize the colonial government. James's intention was to curb Massachusetts's religious separatism by incorporating the colony into a larger Dominion of New England, without an elected legislature and under a governor who would serve at the pleasure of the Crown. Later that year, the King appointed Sir Edmund Andros as governor of that new Dominion. This was a direct attack upon the Puritan religious and social orders that the Mathers represented, as well as upon the local autonomy of Massachusetts. The colonists were particularly outraged when Andros declared that all grants of land made in the name of the old Massachusetts Bay Company were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony. The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty in the English throne, triggered the so-called Glorious Revolution in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of "scandalous libel", but the warrant was overruled by Wait Winthrop. According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the "Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergymen, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House. In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter. This charter united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston. Involvement with the Salem witch trials. 
Cotton Mather's reputation, in his own day as well as in the historiography and popular culture of subsequent generations, has been very adversely affected by his association with the events surrounding the Salem witch trials of 1692–1693. As a consequence of those trials, nineteen people were executed by hanging for practicing witchcraft and one was pressed to death for refusing to enter a plea before the court. Although Mather had no official role in the legal proceedings, he wrote the book "Wonders of the Invisible World", which appeared in 1693 with the endorsement of William Stoughton, the Lieutenant Governor of Massachusetts and chief judge of the Salem witch trials. Mather's book constitutes the most detailed written defense of the conduct of those trials. Mather's role in drumming up and sustaining the witch hysteria behind those proceedings was denounced by Robert Calef in his book "More Wonders of the Invisible World", published in 1700. In the 19th century, Nathaniel Hawthorne called Mather "the chief agent of the mischief" at Salem. More recently, historians such as Jan Stievermann of the Heidelberg Center for American Studies have tended to downplay Mather's role in the events at Salem. Prelude: The Goodwin case. In 1689, Mather published "Memorable Providences, Relating to Witchcrafts and Possessions", based on his study of events surrounding the affliction of the children of a Boston mason named John Goodwin. Those afflictions had begun after Goodwin's eldest daughter confronted a washerwoman whom she suspected of stealing some of the family's linen. In response to this, the washerwoman's mother, Ann Glover, verbally insulted the Goodwin girl, who soon began to suffer from hysterical fits that later afflicted the three other Goodwin children as well. Glover was an Irish Catholic widow who could understand English but spoke only Gaelic. Interrogated by the magistrates, she admitted that she tormented her enemies by stroking certain images or dolls with her finger wetted with spittle. After she was sentenced to death for witchcraft, Mather visited her in prison and interrogated her through an interpreter. Before her execution, Glover warned that her death would not bring relief to the Goodwin children, as she was not the one responsible for their torments. Indeed, after Glover was hanged the children's afflictions increased. Mather documented these events and attempted to de-possess the "Haunted Children" by prayer and fasting. He also took the eldest Goodwin child, Martha, into his own home, where she lived for several weeks. Eventually, the afflictions ceased and Martha was admitted into Mather's church. The publication of Mather's "Memorable Providences" attracted attention on both sides of the Atlantic, including from the eminent English Puritan Richard Baxter. In his book, Mather argued that since there are witches and devils, there are "immortal souls". He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits, believed that people who confessed to using witchcraft were sane, and warned against all magical practices due to their diabolical connections. 
Mather's contemporary Robert Calef would later accuse Mather of laying the groundwork, with his "Memorable Providences", for the witchcraft hysteria that gripped Salem three years later. Similar views on Mather's responsibility for the climate of hysteria over witchcraft that led to the Salem trials were repeated by later commentators, such as the politician and historian Charles W. Upham in the 19th century. Preparation for the Salem trials. When the accusations of witchcraft arose in Salem Village in 1692, Cotton Mather was incapacitated by a serious illness, which he attributed to overwork. He suggested that the afflicted girls be separated and offered to take six of them into his home, as he had done previously with Martha Goodwin. That offer was not accepted. In May of that year, Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, appointed a special "Court of Oyer and Terminer" to try the cases of witchcraft in Salem. The chief judge of that court was Phips's lieutenant governor, William Stoughton. Stoughton had close ties to the Mathers and had been recommended as Governor Phips's lieutenant by Increase Mather. Another of the judges in the new court, John Richards, requested that Cotton Mather accompany him to Salem, but Mather refused due to his ill health. Instead, Mather wrote a long letter to Richards in which he gave his advice on the impending trials. In that letter, Mather states that witches guilty of the most grievous crimes should be executed, but that witches convicted of lesser offenses deserve more lenient punishment. He also wrote that the identification and conviction of all witches should be undertaken with the greatest caution and warned against the use of spectral evidence (i.e., testimony that the specter of the accused had tormented a victim) on the grounds that devils could assume the form of innocent and even virtuous people. Under English law, spectral evidence had been admissible in witchcraft trials for a century before the events in Salem, and it would remain admissible until 1712. There was, however, debate among experts as to how much weight should be given to such testimonies. Response to the trials. On June 10, 1692, Bridget Bishop, the thrice-married owner of an unlicensed tavern, was hanged after being convicted and sentenced by the Court of Oyer and Terminer, based largely on spectral evidence. A group of twelve Puritan ministers issued a statement, drawn up by Cotton Mather and presented to Governor Phips and his council a few days later, entitled "The Return of Several Ministers". In that document, Mather criticized the court's reliance on spectral evidence and recommended that it adopt a more cautious procedure. However, he ended the document with a statement defending the continued prosecution of witchcraft according to the "Direction given by the Laws of God, and the wholesome Statutes of the English Nation". Robert Calef would later criticize Mather's intervention in "The Return of Several Ministers" as "perfectly ambidexter, giving a great or greater encouragement to proceed in those dark methods, than cautions against them." On August 4, Cotton Mather preached a sermon before his North Church congregation on the text of Revelation 12:12: "Woe to the Inhabitants of the Earth, and of the Sea; for the Devil is come down unto you, having great Wrath; because he knoweth, that he hath but a short time." 
In the sermon, Mather claimed that the witches "have associated themselves to do no less a thing than to destroy the Kingdom of our Lord Jesus Christ, in these parts of the World." Although he did not intervene in any of the trials, there are some testimonies that Mather was present at the executions that were carried out in Salem on August 19. According to Mather's contemporary critic Robert Calef, the crowd was disturbed by George Burroughs's eloquent declarations of innocence from the scaffold and by his recitation of the Lord's Prayer, of which witches were commonly believed to be incapable. Calef claimed that, after Burroughs had been hanged, Mather addressed the crowd from horseback to justify the execution and calm the onlookers. As public discontent with the witch trials grew in the summer of 1692, threatening civil unrest, Cotton Mather felt compelled to defend the responsible authorities. On September 2, 1692, after eleven people had been executed as witches, Cotton Mather wrote a letter to Judge Stoughton congratulating him on "extinguishing of as wonderful a piece of devilism as has been seen in the world". As the opposition to the witch trials was bringing them to a halt, Mather wrote "Wonders of the Invisible World", a defense of the trials that carried Stoughton's official approval. Post-trials. Mather's "Wonders" did little to appease the growing clamor against the Salem witch trials. At around the same time that the book began to circulate in manuscript form, Governor Phips decided to restrict greatly the use of spectral evidence, thus raising a high barrier against further convictions. The Court of Oyer and Terminer was dismissed on October 29. A new court convened in January 1693 to hear the remaining cases, almost all of which ended in acquittal. In May, Governor Phips issued a general pardon, bringing the witch trials to an end. The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. Mather appears to have remained convinced that genuine witches had been executed in Salem and he never publicly expressed regrets over his role in those events. Robert Calef, an otherwise obscure Boston merchant, published "More Wonders of the Invisible World" in 1700, bitterly attacking Cotton Mather over his role in the events of 1692. In the words of 20th-century historian Samuel Eliot Morison, "Robert Calef tied a tin can to Cotton Mather which has rattled and banged through the pages of superficial and popular historians". Intellectual historian Reiner Smolinski, an expert on the writings of Cotton Mather, found it "deplorable that Mather's reputation is still overshadowed by the specter of Salem witchcraft." Historical and theological writings. Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was "Magnalia Christi Americana" (which may be translated as "The Glorious Works of Christ in America"), subtitled "The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books." Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693 and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. 
The "Magnalia" includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches, including those of Hannah Duston and Hannah Swarton. According to Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer, Silverman argues that, although Mather glorifies New England's Puritan past, in the "Magnalia" he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a Transatlantic Protestant Christianity that included, in addition to Mather's own Congregationalists, also Presbyterians, Baptists, and low church Anglicans. In 1693 Mather also began work on a grand intellectual project that he titled "Biblia Americana", which sought to provide a commentary and interpretation of the Christian Bible in light of "all of the Learning in the World". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project "looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity." Mather could not find a publisher for the "Biblia Americana", which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2023, seven of the ten volumes have appeared in print. Conflict with Governor Dudley. In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689, and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts. Although the Mathers (to whom he was related by marriage), continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702. Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett. In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Outmaneuvered by Dudley, this political rivalry left Mather increasingly isolated at a time when Massachusetts society was steadily moving away from the Puritan tradition that Mather represented. 
Relationship with Harvard and Yale. Cotton Mather was a fellow of Harvard College from 1690 to 1702, and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of "rector" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen. Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president. Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving the Puritan orthodoxy in New England. In 1718, Cotton convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation. Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of their own number, the Rev. Benjamin Colman, an old rival of Mather. When Colman refused it, the presidency went finally to the Rev. Benjamin Wadsworth. Advocacy for smallpox inoculation. The practice of smallpox inoculation (as distinguished from the later practice of vaccination) was developed possibly in 8th-century India or 10th-century China and by the 17th century had reached Turkey. It was also practiced in western Africa, but it is not known when it started there. Inoculation or, rather, variolation, involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic in 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation. Early New England. Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. 
Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a "pesthouse". In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the "Philosophical Transactions". Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again. By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared. Smallpox returned on April 22 of that year, when HMS "Seahorse" arrived from the West Indies carrying smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June, the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, a number of residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission. The death toll reached 101 in September, and the Selectmen, powerless to stop it, "severely limited the length of time funeral bells could toll." As one response, legislators delegated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families. On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths. Inoculation debate. Boylston and Mather's inoculation crusade "raised a horrid Clamour" among the people of Boston. Both Boylston and Mather were "Object[s] of their Fury; their furious Obloquies and Invectives", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again. "The New-England Courant" published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams from Boston, who posted that "several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful", to those put forth in a pamphlet by Dr. 
William Douglass of Boston, entitled "The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox" (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home. Medical opposition. Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease nor an antipathy toward one, but the creation of one. For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal. Like those of most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted the Gospel passage in which Jesus said: "It is not the healthy who need a doctor, but the sick." William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians, and not meddle in areas where they lacked expertise. According to Douglass, smallpox inoculation was "a medical experiment of consequence," one not to be undertaken lightly. He believed that not all learned individuals were qualified to doctor others, and while ministers took on several roles in the early years of the colony, including that of caring for the sick, they were now expected to stay out of state and civil affairs. Douglass felt that inoculation caused more deaths than it prevented. The only reason Mather had had success in it, he said, was because Mather had used it on children, who are naturally more resilient. Douglass vowed to always speak out against "the wickedness of spreading infection". Speak out he did: "The battle between these two prestigious adversaries [Douglass and Mather] lasted far longer than the epidemic itself, and the literature accompanying the controversy was both vast and venomous." Puritan resistance. Generally, Puritan pastors favored the inoculation experiments. Increase Mather, Cotton's father, was joined by prominent pastors Benjamin Colman and William Cooper in openly propagating the use of inoculations. "One of the classic assumptions of the Puritan mind was that the will of God was to be discerned in nature as well as in revelation." Nevertheless, Williams questioned whether the smallpox "is not one of the strange works of God; and whether inoculation of it be not a fighting with the most High." He also asked his readers if the smallpox epidemic may have been given to them by God as "punishment for sin," and warned that attempting to shield themselves from God's fury (via inoculation) would only serve to "provoke him more". Puritans found meaning in affliction, and they did not yet know why God was showing them disfavor through smallpox. Not to address their errant ways before attempting a cure could set them back in their "errand". Many Puritans believed that creating a wound and inserting poison was doing violence and therefore was antithetical to the healing art. They grappled with adhering to the Ten Commandments, with being proper church members and good caring neighbors. 
The apparent contradiction between harming or murdering a neighbor through inoculation and the Sixth Commandment—"thou shalt not kill"—seemed insoluble and hence stood as one of the main objections against the procedure. Williams maintained that because the subject of inoculation could not be found in the Bible, it was not the will of God, and therefore "unlawful." He explained that inoculation violated The Golden Rule, because if one neighbor voluntarily infected another with disease, he was not doing unto others as he would have done to him. With the Bible as the Puritans' source for all decision-making, lack of scriptural evidence concerned many, and Williams vocally scorned Mather for not being able to reference an inoculation edict directly from the Bible. Inoculation defended. With the smallpox epidemic catching speed and racking up a staggering death toll, a solution to the crisis was becoming more urgently needed by the day. The use of quarantine and various other efforts, such as balancing the body's humors, did not slow the spread of the disease. As news rolled in from town to town and correspondence arrived from overseas, reports of horrific stories of suffering and loss due to smallpox stirred mass panic among the people. "By circa 1700, smallpox had become among the most devastating of epidemic diseases circulating in the Atlantic world." Mather strongly challenged the perception that inoculation was against the will of God and argued the procedure was not outside of Puritan principles. He wrote that "whether a Christian may not employ this Medicine (let the matter of it be what it will) and humbly give Thanks to God's good Providence in discovering of it to a miserable World; and humbly look up to His Good Providence (as we do in the use of any other Medicine) It may seem strange, that any wise Christian cannot answer it. And how strangely do Men that call themselves Physicians betray their Anatomy, and their Philosophy, as well as their Divinity in their invectives against this Practice?" The Puritan minister began to embrace the sentiment that smallpox was an inevitability for anyone, both the good and the wicked, yet God had provided them with the means to save themselves. Mather reported that, from his view, "none that have used it ever died of the Small Pox, tho at the same time, it were so malignant, that at least half the People died, that were infected With it in the Common way." While Mather was experimenting with the procedure, prominent Puritan pastors Benjamin Colman and William Cooper expressed public and theological support for them. The practice of smallpox inoculation was eventually accepted by the general population due to first-hand experiences and personal relationships. Although many were initially wary of the concept, it was because people were able to witness the procedure's consistently positive results, within their own community of ordinary citizens, that it became widely utilized and supported. One important change in the practice after 1721 was regulated quarantine of inoculees. The aftermath. Although Mather and Boylston were able to demonstrate the efficacy of the practice, the debate over inoculation would continue even beyond the epidemic of 1721–22. After overcoming considerable difficulty and achieving notable success, Boylston traveled to London in 1725, where he published his results and was elected to the Royal Society in 1726, with Mather formally receiving the honor two years prior. Other scientific work. 
In 1716, Mather used different varieties of maize ("Indian corn") to conduct one of the first recorded experiments on plant hybridization. He described the results in a letter to his friend James Petiver. In his "Curiosa Americana" (1712–1724) collection, Mather also announced that flowering plants reproduce sexually, an observation that later became the basis of the Linnaean system of plant classification. Mather may also have been the first to develop the concept of genetic dominance, which later would underpin Mendelian genetics. In 1713, the Secretary of the Royal Society of London, naturalist Richard Waller, informed Mather that he had been elected as a fellow of the Society. Mather was the eighth colonial American to join that learned body, with the first having been John Winthrop the Younger in 1662. During the controversies surrounding Mather's smallpox inoculation campaign of 1721, his adversaries questioned that credential on the grounds that Mather's name did not figure in the published lists of the Society's members. At the time, the Society responded that those published lists included only members who had been inducted in person and who were therefore entitled to vote in the Society's yearly elections. In May 1723, Mather's correspondent John Woodward discovered that, although Mather had been duly nominated in 1713, approved by the council, and informed by Waller of his election at that time, due to an oversight the nomination had not in fact been voted upon by the full assembly of fellows or the vote had not been recorded. After Woodward informed the Society of the situation, the members proceeded to elect Mather by a formal vote. Mather's enthusiasm for experimental science was strongly influenced by his reading of Robert Boyle's work. Mather was a significant popularizer of the new scientific knowledge and promoted Copernican heliocentrism in some of his sermons. He also argued against the spontaneous generation of life and compiled a medical manual titled "The Angel of Bethesda" that he hoped would assist people who were unable to procure the services of a physician, but which went unpublished in Mather's lifetime. This was the only comprehensive medical work written in colonial English-speaking America. Although much of what Mather included in that manual consisted of folk remedies now regarded as unscientific or superstitious, some of them are still valid, including smallpox inoculation and the use of citrus juice to treat scurvy. Mather also outlined an early form of germ theory and discussed psychogenic diseases, while recommending hygiene, physical exercise, temperate diet, and avoidance of tobacco smoking. In his later years, Mather also promoted the professionalization of scientific research in America. He presented a Boston tradesman named Grafton Feveryear with the barometer that Feveryear used to make the first quantitative meteorological observations in New England, which he communicated to the Royal Society in 1727. Mather also sponsored Isaac Greenwood, a Harvard graduate and member of Mather's church, who travelled to London and collaborated with the Royal Society's curator of experiments, John Theophilus Desaguliers. Greenwood later became the first Hollis professor of mathematics and natural philosophy at Harvard, and may well have been the first American to practice science professionally. Slavery and racial attitudes. Cotton Mather's household included both free servants and a number of slaves who performed domestic chores. 
Surviving records indicate that, over the course of his lifetime, Mather owned at least three, and probably more, slaves. Like the majority of Christians at the time, but unlike his political rival Judge Samuel Sewall, Mather was never an abolitionist, although he did publicly denounce what he regarded as the illegal and inhuman aspects of the burgeoning Atlantic slave trade. Concerned about the New England sailors enslaved in Africa since the 1680s and 1690s, in 1698 Mather wrote them his "Pastoral Letter to the Captives", consoling them, and expressing hope that “your slavery to the monsters of Africa will be but short.” On the return of some survivors of African slavery in 1703, Mather published "The History of What the Goodness of God has done for the Captives, lately delivered out of Barbary," wherein he lamented the death of multiple American slaves, the length of their captivity—which he described as between 7 and 19 years,—the harsh conditions of their bondage, and celebrated their refusal to convert to Islam, unlike others who did. In his book "The Negro Christianized" (1706), Mather insisted that slaveholders should treat their black slaves humanely and instruct them in Christianity with a view to promoting their salvation. Mather received black members of his congregation in his home and he paid a schoolteacher to instruct local black people in reading. Mather consistently held that black Africans were "of one Blood" with the rest of mankind and that blacks and whites would meet as equals in Heaven. After a number of black people carried out arson attacks in Boston in 1723, Mather asked the outraged white Bostonians whether the black population had been "always treated according to the Rules of Humanity? Are they treated as those, that are of one Blood with us, and those who have Immortal Souls in them, and are not mere Beasts of Burden?" Mather advocated the Christianization of black slaves both on religious grounds and as tending to make them more patient and faithful servants of their masters. In "The Negro Christianized", Mather argued against the opinion of Richard Baxter that a Christian could not enslave another baptized Christian. The African slave Onesimus, from whom Mather first learned about smallpox inoculation, had been purchased for him as a gift by his congregation in 1706. Despite his efforts, Mather was unable to convert Onesimus to Christianity and finally manumitted him in 1716. Sermons against pirates and piracy. Throughout his career Mather was also keen to minister to convicted pirates. He produced a number of pamphlets and sermons concerning piracy, including "Faithful Warnings to prevent Fearful Judgments"; "Instructions to the Living, from the Condition of the Dead"; "The Converted Sinner… A Sermon Preached in Boston, May 31, 1724, In the Hearing and at the Desire of certain Pirates"; "A Brief Discourse occasioned by a Tragical Spectacle of a Number of Miserables under Sentence of Death for Piracy"; "Useful Remarks. An Essay upon Remarkables in the Way of Wicked Men" and "The Vial Poured Out Upon the Sea". His father Increase had preached at the trial of Dutch pirate Peter Roderigo; Cotton Mather in turn preached at the trials and sometimes executions of pirate Captains (or the crews of) William Fly, John Quelch, Samuel Bellamy, William Kidd, Charles Harris, and John Phillips. 
He also ministered to Thomas Hawkins, Thomas Pound, and William Coward; having been convicted of piracy, they were jailed alongside "Mary Glover the Irish Catholic witch," daughter of witch "Goody" Ann Glover, at whose trial Mather had also preached. In his conversations with William Fly and his crew Mather scolded them: "You have something within you, that will compell you to confess, That the Things which you have done, are most Unreasonable and Abominable. The Robberies and Piracies, you have committed, you can say nothing to Justify them. … It is a most hideous Article in the Heap of Guilt lying on you, that an Horrible Murder is charged upon you; There is a cry of Blood going up to Heaven against you." Death and place of burial. Cotton Mather was twice widowed, and only two of his 15 children survived him. He died on the day after his 65th birthday and was buried on Copp's Hill Burying Ground, in Boston's North End. Works. Mather was a prolific writer and industrious in having his works printed, including a vast number of his sermons. "Pillars of Salt". Mather's first published sermon, printed in 1686, concerned the execution of James Morgan, convicted of murder. Thirteen years later, Mather published the sermon in a compilation, along with other similar works, called "Pillars of Salt". "Magnalia Christi Americana". "Magnalia Christi Americana", considered Mather's greatest work, was published in 1702, when he was 39. The book includes several biographies of saints and describes the process of the New England settlement. In this context "saints" does not refer to the canonized saints of the Catholic church, but to those Puritan divines about whom Mather is writing. It comprises seven books in total, including "Pietas in Patriam: The life of His Excellency Sir William Phips", originally published anonymously in London in 1697. Although it is one of Mather's best-known works, some have openly criticized it, labeling it as hard to follow and understand, and poorly paced and organized. However, other critics have praised Mather's work, citing it as one of the best efforts at properly documenting the establishment of America and growth of the people. "The Christian Philosopher". In 1721, Mather published "The Christian Philosopher", the first systematic book on science published in America. Mather attempted to show how Newtonian science and religion were in harmony. It was in part based on Robert Boyle's "The Christian Virtuoso" (1690). Mather took inspiration from "Hayy ibn Yaqdhan", by the 12th-century Islamic philosopher Abu Bakr Ibn Tufail. Mather's short treatise on the Lord's Supper was later translated by his cousin Josiah Cotton. In popular culture. Comic books. Marvel Comics features a supervillain named Cotton Mather, alias 'Witch-Slayer', who is an enemy of Spider-Man. He first appears in the 1972 comic 'Marvel Team-Up' issue #41 and appears in subsequent issues until issue #45. Music. The rock band Cotton Mather is named after Mather. The Handsome Family's 2006 album "Last Days of Wonder" is named in reference to Mather's 1693 book "Wonders of the Invisible World", which lyricist Rennie Sparks found intriguing because of what she called its "madness brimming under the surface of things." Radio. Howard da Silva portrayed Mather in "Burn, Witch, Burn," a December 15, 1975 episode of the CBS Radio Mystery Theater. Literature. One of the stories in Richard Brautigan's collection "Revenge of the Lawn" is called "1692 Cotton Mather Newsreel". 
In "Burned: A Daughters of Salem Novel", a 2023 young adult novel by Kellie O'Neill, the character "Vivienne Mathers" is a descendant of Cotton Mathers. Mather is mentioned in several of the New England horror stories of H.P. Lovecraft, such as "The Picture in the House," "The Unnamable" and "Pickman's Model." Television. Seth Gabel portrays Cotton Mather in the TV series "Salem", which aired from 2014 to 2017.
7105
196446
https://en.wikipedia.org/wiki?curid=7105
Cordwainer Smith
Paul Myron Anthony Linebarger (July 11, 1913 – August 6, 1966), known by his pen-name Cordwainer Smith, was an American author of science fiction. He was an officer in the US Army, a noted scholar of East Asia, and an expert in psychological warfare. He was one of science fiction's more influential authors despite an early death at the age of 53. Biography. Early life and education. Linebarger's father, Paul Myron Wentworth Linebarger, was a lawyer, working as a judge in the Philippines. There he met Chinese nationalist Sun Yat-sen to whom he became an advisor. Linebarger's father sent his wife to give birth in Milwaukee, Wisconsin, so that their child would be eligible to become president of the United States. Sun Yat-sen, who was considered the father of Chinese nationalism, became Linebarger's godfather. His childhood was unsettled as his father moved the family to a succession of places in Asia, Europe, and the United States. He was sometimes sent to boarding schools for safety. In all, Linebarger attended more than 30 schools. In 1919, while at a boarding school in Hawaii, he was blinded in his right eye, which was replaced by a glass eye. The vision in his remaining eye was impaired by infection. Linebarger was familiar with English, German, and Chinese by adulthood. At the age of 23, he received a PhD in political science from Johns Hopkins University. Career. From 1937 to 1946, Linebarger held a faculty appointment at Duke University, where he began producing highly regarded works on Far Eastern affairs. While retaining his professorship at Duke after the beginning of World War II, Linebarger began serving as a second lieutenant of the United States Army, where he was involved in the creation of the Office of War Information and the Operation Planning and Intelligence Board. He also helped to organize the army's first psychological warfare section. In 1943, he was sent to China to coordinate military intelligence operations. When he later pursued his interest in China, Linebarger became a close confidant of Chiang Kai-shek. By the end of the war, he had risen to the rank of major. In 1947, Linebarger moved to the Johns Hopkins University's School of Advanced International Studies in Washington, DC, where he served as Professor of Asiatic Studies. He used his experiences in the war to write the book "Psychological Warfare" (1948), regarded by many in the field as a classic text. Linebarger eventually rose to the rank of colonel in the reserves. He was recalled to advise the British forces during the Malayan Emergency and the U.S. Eighth Army during the Korean War. Though he sometimes called himself a "visitor to small wars", he refrained from becoming involved in the Vietnam War, but he did some work for the Central Intelligence Agency (CIA). In 1969, CIA officer Miles Copeland Jr. wrote that Linebarger was "perhaps the leading practitioner of 'black' and 'gray' propaganda in the Western world". According to Joseph Burkholder Smith, a former CIA operative, Linebarger conducted classes in psychological warfare for CIA agents at his home in Washington, under cover of his position at Johns Hopkins University. He traveled extensively and became a member of the Foreign Policy Association, and he was called upon to advise President John F. Kennedy. Marriage and family. In 1936, Linebarger married Margaret Snow. They had a daughter in 1942 and another in 1947. They divorced in 1949. In 1950, Linebarger married his second wife Genevieve Collins; they had no children. 
They remained married until his death from a heart attack in 1966; he died at Johns Hopkins University Medical Center in Baltimore, Maryland, at the age of 53. Linebarger had expressed a wish to retire to Australia, which he had visited while traveling. He is buried in Arlington National Cemetery, Section 35, Grave Number 4712. His widow, Genevieve Collins Linebarger, was interred with him on November 16, 1981. Case history debate. Linebarger is rumored to have been Kirk Allen, the fantasy-haunted subject of "The Jet-Propelled Couch," a chapter in psychologist Robert M. Lindner's best-selling collection "The Fifty-Minute Hour" (1954). According to Cordwainer Smith scholar Alan C. Elms, this speculation first reached print in Brian Aldiss's science fiction history, "Billion Year Spree" (1973); Aldiss, in turn, claimed to have received the information from science fiction fan and scholar Leon Stover. More recently, both Elms and librarian Lee Weinstein have gathered circumstantial evidence to support the case for Linebarger's being Allen, but both concede there is no direct proof that Linebarger was ever a patient of Lindner's or that he suffered from a disorder similar to that of Kirk Allen. Literary style. Frederik Pohl commented on the distinctive imaginary universe of Linebarger's fiction. Linebarger's identity as "Cordwainer Smith" was a secret until his death. "Cordwainer" is an archaic word for "a worker in cordwain or cordovan leather; a shoemaker", and a "smith" is "one who works in iron or other metals; esp. a blacksmith or farrier"; these are two kinds of skilled workers using traditional materials. Linebarger also used the literary pseudonyms "Carmichael Smith" (for the political thriller "Atomsk"), "Anthony Bearden" (for poetry) and "Felix C. Forrest" (for the novels "Ria" and "Carola"). Some of Smith's stories are written in a narrative style closer to traditional Chinese stories than to most English-language fiction, and reminiscent of the Genji tales by the Japanese writer Lady Murasaki. His total science fiction output is relatively small, because of his time-consuming profession and his early death. Smith's works consist of two parts: stories set in the universe of the Instrumentality of Mankind, and a smaller number of unrelated tales. Linebarger's cultural links with China are partially expressed by the pseudonym Felix C. Forrest, which he used in addition to Cordwainer Smith. His godfather Sun Yat-sen suggested that Linebarger adopt the Chinese name "Lin Bai-lo", which may be roughly translated as "Forest of Incandescent Bliss"; "felix" is Latin for "happy". In later years, Linebarger proudly wore a tie with the Chinese characters for this name embroidered on it. An expert in psychological warfare, Linebarger was fascinated by the newly developing fields of psychology and psychiatry. He used many concepts from these fields in his fiction, which often contains religious overtones or motifs, particularly with characters who have no control over their actions. James B. Jordan argued for the importance of Anglicanism in Smith's work as far back as 1949. But Linebarger's daughter Rosana Hart has indicated that he did not become Anglican until 1950 and was not strongly interested in religion until later. In his introduction to the collection "The Rediscovery of Man", James A. Mann notes that Linebarger became more devout starting around 1960 and expressed this change in his writing. Linebarger's works are sometimes included in analyses of Christianity in fiction, along with the works of authors such as C. S. Lewis and J.R.R. Tolkien. 
Most of Smith's stories are set in the distant future, between 4,000 and 14,000 years after the twentieth century. In this future, after the Ancient Wars devastate Earth, humans—ruled by the Instrumentality of Mankind—rebuild and expand to the stars in the Second Age of Space (around 6000 AD). Over the next few thousand years, humanity spreads to thousands of worlds, and human life becomes safe but sterile, as robots and the animal-derived Underpeople take over many human jobs, and humans themselves are genetically programmed as embryos to perform specified duties. Toward the end of this period, the Instrumentality attempts to revive old cultures and languages in a process known as the Rediscovery of Man, where humans emerge from their mundane utopia and Underpeople are liberated from slavery. For years, Linebarger kept a pocket notebook that he filled with ideas about the Instrumentality and additional stories for the series. But while in a small boat on a lake or bay during the mid-1960s, he leaned over the side, and the notebook fell out of his breast pocket into the water, where it was permanently lost. Another story claims that he accidentally left the notebook in a restaurant in Rhodes in 1965. With the notebook gone, he felt empty of ideas, so he decided to start a new series that was an allegory of Middle Eastern politics. Smith's stories describe a long future history of Earth. One setting is a postapocalyptic landscape with walled cities, defended by agents of the Instrumentality; another setting is a sterile utopia, in which freedom can be found only far below the surface of the planet, in long-forgotten and buried anthropogenic strata. These features may place Smith's works within the Dying Earth subgenre of science fiction, but his stories are ultimately more optimistic and distinctive. Smith's most celebrated short story is the first one that he published, "Scanners Live in Vain"; this led many of its earliest readers to assume that Cordwainer Smith was a new pen name for an established giant of the genre. It was selected as one of the best science fiction short stories of the pre-Nebula Award period by the Science Fiction and Fantasy Writers of America, appearing in "The Science Fiction Hall of Fame Volume One, 1929-1964". "The Ballad of Lost C'Mell" was similarly honored, appearing in "The Science Fiction Hall of Fame, Volume Two". After "Scanners Live in Vain", Smith's next story did not appear for several years; but from 1955 until his death in 1966, his stories appeared regularly, most often in "Galaxy Science Fiction". His universe featured creations such as the following: Works. Fiction. Short stories. Titles marked with an asterisk * are independent stories not related to the Instrumentality universe.
7110
49627532
https://en.wikipedia.org/wiki?curid=7110
CSS (disambiguation)
CSS, or Cascading Style Sheets, is a language used to describe the style of document presentations in web development. CSS may also refer to:
7118
11009441
https://en.wikipedia.org/wiki?curid=7118
Churnsike Lodge
Churnsike Lodge is an early Victorian hunting lodge situated in the parish of Greystead, west Northumberland, England. Constructed in 1850 by the Charlton family, descendants of the noted Border Reiver family of the English Middle March, the lodge formed part of the extensive Hesleyside estate, located some 10 miles from Hesleyside Hall itself. The lodge consisted of the main house, a stable block, hunting-dog kennels and a gamekeeper's bothy; when the property was acquired by the Chesters Estate in 1887, the 'Cairnsyke' estate comprised several thousand acres of moorland, much of which was managed to support shooting of the formerly populous black grouse. Although much of this land has now reverted to fellside or has been otherwise managed as part of the commercial timber plantations of Kielder Forest, areas of heather moorland persist, dotted with remnants of the shooting butts. It is with reference to these fells that the 1887 sale catalogue described the estate as being the "Finest grouse moor in the Kingdom". Historically, the Lodge was home to the Irthing Head and Kielder hounds, regionally renowned and headed by the locally famed fox hunter William Dodd. Dodd and his hounds are repeatedly referenced in the traditional Northumbrian ballads of James Armstrong's 'Wanny Blossoms'. Having fallen into ruin by the 1980s, the property passed into the care of the Forestry Commission and was slated for demolition, as were many properties in the area, until it was privately purchased. The former gamekeeper's bothy now serves as a holiday home.
7119
2308770
https://en.wikipedia.org/wiki?curid=7119
William Kidd
William Kidd (c. 1645 – 23 May 1701), also known as Captain William Kidd or simply Captain Kidd, was a Scottish privateer. Conflicting accounts exist regarding his early life, but he was likely born in Dundee and later settled in New York City. By 1690, Kidd had become a highly successful privateer, commissioned to protect English interests in the Thirteen Colonies in North America and the West Indies. In 1695, Kidd received a royal commission from the Earl of Bellomont, the governor of New York, Massachusetts Bay and New Hampshire, to hunt down pirates and enemy French ships in the Indian Ocean. He received a letter of marque and set sail on a new ship, "Adventure Galley", the following year. On his voyage he failed to find many targets, lost much of his crew and faced threats of mutiny. In 1698, Kidd captured his greatest prize, the 400-ton "Quedagh Merchant", a ship hired by Armenian merchants and captained by an Englishman. The political climate in England had turned against him, however, and he was denounced as a pirate. Bellomont engineered Kidd's arrest upon his return to Boston and sent him to stand trial in London. He was found guilty and hanged in 1701. Kidd was romanticised after his death and his exploits became a popular subject of pirate-themed works of fiction. The belief that he had left buried treasure contributed significantly to his legend, which inspired numerous treasure hunts in the following centuries. Life and career. Early life and education. Kidd was born in Dundee, Scotland prior to 15 October 1645. While claims have been made of alternative birthplaces, including Greenock and Belfast, he said himself he came from Dundee in a testimony given by Kidd to the High Court of Admiralty in 1695. There have also been records of his baptism taking place in Dundee. A local society supported the family financially after the death of the father. The myth that his "father was thought to have been a Church of Scotland minister" has been discounted, insofar as there is no mention of the name in comprehensive Church of Scotland records for the period. Others still hold the contrary view. Early voyages. As a young man, Kidd settled in New York City, which the English had taken over from the Dutch. There he befriended many prominent colonial citizens, including three governors. Some accounts suggest that he served as a seaman's apprentice on a pirate ship during this time, before beginning his more famous seagoing exploits as a privateer. By 1689, Kidd was a member of a French–English pirate crew sailing the Caribbean under Captain Jean Fantin. During one of their voyages, Kidd and other crew members mutinied, ousting the captain and sailing to the British colony of Nevis. There they renamed the ship "Blessed William", and Kidd became captain either as a result of election by the ship's crew, or by appointment of Christopher Codrington, governor of the island of Nevis. Kidd was an experienced leader and sailor by that time, and the "Blessed William" became part of Codrington's small fleet assembled to defend Nevis from the French, with whom the English were at war. The governor did not pay the sailors for their defensive service, telling them instead to take their pay from the French. Kidd and his men attacked the French island of Marie-Galante, destroying its only town and looting the area, and gathering around 2,000 pounds sterling. 
Later, during the War of the Grand Alliance, on commissions from the provinces of New York and Massachusetts Bay, Kidd captured an enemy privateer off the New England coast. Shortly afterwards, he was awarded £150 for successful privateering in the Caribbean. One year later, Captain Robert Culliford, a notorious pirate, stole Kidd's ship while he was ashore at Antigua in the West Indies. In New York City, Kidd was active in financially supporting the construction of Trinity Church, New York. On 16 May 1691, Kidd married Sarah Bradley Cox Oort, who was still in her early twenties. She had already been twice widowed and was one of the wealthiest women in New York, based on an inheritance from her first husband. Preparing his expedition. On 11 December 1695, Richard Coote, 1st Earl of Bellomont, who was governing New York, Massachusetts, and New Hampshire, asked the "trusty and well beloved Captain Kidd" to attack Thomas Tew, John Ireland, Thomas Wake, William Maze, and all others who associated themselves with pirates, along with any enemy French ships. His request had the weight of the Crown behind it, and Kidd would have been considered disloyal, carrying much social stigma, to refuse Bellomont. This request preceded the voyage that contributed to Kidd's reputation as a pirate and marked his image in history and folklore. Four-fifths of the cost for the 1696 venture was paid by noble lords, who were among the most powerful men in England: the Earl of Orford, the Baron of Romney, the Duke of Shrewsbury, and Sir John Somers. Kidd was presented with a letter of marque, signed personally by King William III of England, which authorised him as a privateer. This letter reserved 10% of the loot for the Crown, and Henry Gilbert's "The Book of Pirates" suggests that the King fronted some of the money for the voyage himself. Kidd and his acquaintance Colonel Robert Livingston orchestrated the whole plan; they sought additional funding from merchant Sir Richard Blackham. Kidd also had to sell his ship "Antigua" to raise funds. The new ship, "Adventure Galley", was well suited to the task of catching pirates, weighing over 284 tons burthen and equipped with 34 cannon, oars, and 150 men. The oars were a key advantage, as they enabled "Adventure Galley" to manoeuvre in a battle when the winds had calmed and other ships were dead in the water. Kidd took pride in personally selecting the crew, choosing only those whom he deemed to be the best and most loyal officers. Because of Kidd's refusal to salute, the Navy vessel's captain retaliated by pressing much of Kidd's crew into naval service, despite the captain's strong protests and the general exclusion of privateer crew from such action. Short-handed, Kidd sailed for New York City, capturing a French vessel en route (which was legal under the terms of his commission). To make up for the lack of officers, Kidd picked up replacement crew in New York, the vast majority of whom were known and hardened criminals, some likely former pirates. Among Kidd's officers was quartermaster Hendrick van der Heul. The quartermaster was considered "second in command" to the captain in pirate culture of this era. It is not clear, however, if Van der Heul exercised this degree of responsibility because Kidd was authorised as a privateer. Van der Heul may have been African or of Dutch descent. A contemporary source describes him as a "small black Man". 
If Van der Heul was of African ancestry, he would be considered the highest-ranking black pirate or privateer so far identified. Van der Heul later became a master's mate on a merchant vessel and was never convicted of piracy. Hunting for Pirates. In September 1696, Kidd weighed anchor and set course for the Cape of Good Hope in southern Africa. A third of his crew died on the Comoros due to an outbreak of cholera, the brand-new ship developed many leaks, and he failed to find the pirates whom he expected to encounter off Madagascar. With his ambitious enterprise failing, Kidd became desperate to cover its costs. Yet he failed to attack several ships when given a chance, including a Dutchman and a New York privateer. Both were out of bounds of his commission. The latter would have been considered out of bounds because New York was part of the territories of the Crown, and Kidd was authorised in part by the New York governor. Some of the crew deserted Kidd the next time that "Adventure Galley" anchored offshore. Those who decided to stay on made constant open threats of mutiny. Kidd killed one of his own crewmen on 30 October 1697. Kidd's gunner William Moore was on deck sharpening a chisel when a Dutch ship appeared. Moore urged Kidd to attack the Dutchman, an act that would have been considered piratical, since the nation was not at war with England, but also certain to anger Dutch-born King William. Kidd refused, calling Moore a lousy dog. Moore retorted, "If I am a lousy dog, you have made me so; you have brought me to ruin and many more." Kidd reportedly dropped an ironbound bucket on Moore, fracturing his skull. Moore died the following day. Seventeenth-century English admiralty law allowed captains great leeway in using violence against their crew, but killing was not permitted. Kidd said to his ship's surgeon that he had "good friends in England, that will bring me off for that". Accusations of piracy. Escaped prisoners told stories of being hoisted up by the arms and "drubbed" (thrashed) with a drawn cutlass by Kidd. On one occasion, crew members sacked the trading ship "Mary" and tortured several of its crew members while Kidd and the other captain, Thomas Parker, conversed privately in Kidd's cabin. Kidd was declared a pirate very early in his voyage by a Royal Navy officer, to whom he had promised "thirty men or so". Kidd sailed away during the night to preserve his crew, rather than subject them to Royal Navy impressment. The letter of marque was intended to protect a privateer's crew from such impressment. On 30 January 1698, Kidd raised French colours and took his greatest prize, the 400-ton "Quedagh Merchant", an Indian ship hired by Armenian merchants. It was loaded with satins, muslins, gold, silver, and a variety of East Indian merchandise, as well as extremely valuable silks. The captain of "Quedagh Merchant" was an Englishman named Wright, who had purchased passes from the French East India Company promising him the protection of the French Crown. When news of his capture of this ship reached England, however, officials classified Kidd as a pirate. Various naval commanders were ordered to "pursue and seize the said Kidd and his accomplices" for the "notorious piracies" they had committed. Kidd kept the French sea passes of the "Quedagh Merchant", as well as the vessel itself. British admiralty and vice-admiralty courts (especially in North America) previously had often winked at privateers' excesses amounting to piracy. 
Kidd might have hoped that the passes would provide the legal fig leaf that would allow him to keep "Quedagh Merchant" and her cargo. Renaming the seized merchantman as "Adventure Prize", he set sail for Madagascar. On 1 April 1698, Kidd reached Madagascar. After meeting privately with trader Tempest Rogers (who would later be accused of trading and selling Kidd's looted East India goods), he found the first pirate of his voyage, Robert Culliford (the same man who had stolen Kidd's ship at Antigua years before) and his crew aboard "Mocha Frigate". Two contradictory accounts exist of how Kidd proceeded. According to "A General History of the Pyrates", published more than 25 years after the event by an author whose identity is disputed by historians, Kidd made peaceful overtures to Culliford: he "drank their Captain's health", swearing that "he was in every respect their Brother", and gave Culliford "a Present of an Anchor and some Guns". This account appears to be based on the testimony of Kidd's crewmen Joseph Palmer and Robert Bradinham at his trial. The other version was presented by Richard Zacks in his 2002 book "The Pirate Hunter: The True Story of Captain Kidd". According to Zacks, Kidd was unaware that Culliford had only about 20 crew with him, and felt ill-manned and ill-equipped to take "Mocha Frigate" until his two prize ships and crews arrived. He decided to leave Culliford alone until these reinforcements arrived. After "Adventure Prize" and "Rouparelle" reached port, Kidd ordered his crew to attack Culliford's "Mocha Frigate". However, his crew refused to attack Culliford and threatened instead to shoot Kidd. Zacks does not refer to any source for his version of events. Both accounts agree that most of Kidd's men abandoned him for Culliford. Only 13 remained with "Adventure Galley". Deciding to return home, Kidd left the "Adventure Galley" behind, ordering her to be burnt because she had become worm-eaten and leaky. Before burning the ship, he salvaged every last scrap of metal, such as hinges. With the loyal remnant of his crew, he returned to the Caribbean aboard the "Adventure Prize", stopping first at St. Augustine's Bay for repairs. Some of his crew later returned to North America on their own as passengers aboard Giles Shelley's ship "Nassau". The 1698 Act of Grace, which offered a royal pardon to pirates in the Indian Ocean, specifically exempted Kidd (and Henry Every) from receiving a pardon, in Kidd's case due to his association with prominent Whig statesmen. Kidd became aware both that he was wanted and that he could not make use of the Act of Grace upon his arrival in Anguilla, his first port of call since St. Augustine's Bay. Trial and execution. Prior to returning to New York City, Kidd knew that he was wanted as a pirate and that several English men-of-war were searching for him. Realising that his ship the "Adventure Prize" was a marked vessel, he cached it in the Caribbean Sea, sold off his remaining plundered goods through pirate and fence William Burke, and continued towards New York aboard a sloop. He deposited some of his treasure on Gardiners Island, hoping to use his knowledge of its location as a bargaining tool. Kidd landed in Oyster Bay to avoid mutinous crew who had gathered in New York City. To avoid them, Kidd sailed around the eastern tip of Long Island, and doubled back along the Sound to Oyster Bay. He felt this was a safer passage than the highly trafficked Narrows between Staten Island and Brooklyn. 
New York Governor Bellomont, also an investor, was away in Boston, Massachusetts. Aware of the accusations against Kidd, Bellomont was afraid of being implicated in piracy himself and believed that presenting Kidd to England in chains was his best chance to survive. He lured Kidd into Boston with false promises of clemency, and ordered him arrested on 6 July 1699. Kidd was placed in Stone Prison, spending most of the time in solitary confinement. His wife, Sarah, was also arrested and imprisoned. They were separated and she never saw him again. The conditions of Kidd's imprisonment were extremely harsh, and were said to have driven him at least temporarily insane. By then, Bellomont had turned against Kidd and other pirates, writing that the inhabitants of Long Island were "a lawless and unruly people" protecting pirates who had "settled among them". The civil government had changed and the new Tory ministry hoped to use Kidd as a tool to discredit the Whigs who had backed him, but Kidd refused to name names, naively confident his patrons would reward his loyalty by interceding on his behalf. There is speculation that he could have been spared had he talked. Finding Kidd politically useless, the Tory leaders sent him to stand trial before the High Court of Admiralty in London, for the charges of piracy on high seas and the murder of William Moore. Whilst awaiting trial, Kidd was confined in the infamous Newgate Prison, regarded even by the standards of the day as a disgusting hellhole, and was held there for almost two years before his trial even began. Kidd had two lawyers to assist in his defence. However, the money that the Admiralty had set aside for his defence was misplaced until right before the trial's start, and he had no legal counsel until the morning that the trial started and had time for just one brief consultation with them before it began. He was shocked to learn at his trial that he was charged with murder. He was found guilty on all charges (murder and five counts of piracy) and sentenced to death. He was hanged in a public execution on 23 May 1701, at Execution Dock, Wapping, in London. He had to be hanged twice. On the first attempt, the hangman's rope broke and Kidd survived. Although some in the crowd called for Kidd's release, claiming the breaking of the rope was a sign from God, Kidd was hanged again minutes later, and died. His body was gibbeted over the River Thames at Tilbury Point, as a warning to future would-be pirates, for three years. Kidd's remains were either buried in the riverbank near where he was executed or more probably taken for public dissection by surgeons, a common fate for executed persons (e.g. Hogarth's Tom Nero). Of Kidd's associates, Gabriel Loffe, Able Owens, and Hugh Parrot were also convicted of piracy. They were pardoned just prior to hanging at Execution Dock. Robert Lamley, William Jenkins and Richard Barleycorn were released. Kidd's Whig backers were embarrassed by his trial. Far from rewarding his loyalty, they participated in the effort to convict him by depriving him of the money and information which might have provided him with some legal defence. In particular, the two sets of French passes he had kept were missing at his trial. These passes (and others dated 1700) resurfaced in the early 20th century, misfiled with other government papers in a London building. These passes confirm Kidd's version of events, and call the extent of his guilt as a pirate into question. 
A broadside song, "Captain Kidd's Farewell to the Seas, or, the Famous Pirate's Lament", was printed shortly after his execution. It popularised the common belief that Kidd had confessed to the charges. Mythology and legend. The belief that Kidd had left buried treasure contributed greatly to the growth of his legend. The 1701 broadside song "Captain Kid's Farewell to the Seas, or, the Famous Pirate's Lament" lists "Two hundred bars of gold, and rix dollars manifold, we seized uncontrolled". It also inspired numerous treasure hunts conducted on Oak Island in Nova Scotia; in Suffolk County, Long Island in New York where Gardiners Island is located; Charles Island in Milford, Connecticut; the Thimble Islands in Connecticut and Cockenoe Island in Westport, Connecticut. Kidd was also alleged to have buried treasure on the Rahway River in New Jersey across the Arthur Kill from Staten Island. Captain Kidd did bury a small cache of treasure on Gardiners Island off the eastern coast of Long Island, in a spot known as Cherry Tree Field. Governor Bellomont reportedly had it found and sent to England to be used as evidence against Kidd in his trial. Some time in the 1690s, Kidd visited Block Island where he was supplied with provisions by Mrs. Mercy (Sands) Raymond, daughter of the mariner James Sands. It was said that before he departed, Kidd asked Mrs. Raymond to hold out her apron, which he then filled with gold and jewels as payment for her hospitality. After her husband Joshua Raymond died, Mercy moved with her family to northern New London, Connecticut (later Montville), where she purchased much land. The Raymond family was said by family acquaintances to have been "enriched by the apron". On Grand Manan in the Bay of Fundy, as early as 1875, there were searches on the west side of the island for treasure allegedly buried by Kidd during his time as a privateer. For nearly 200 years, this remote area of the island has been called "Money Cove". In 1983, Cork Graham and Richard Knight searched for Captain Kidd's buried treasure off the Vietnamese island of Phú Quốc. Knight and Graham were caught, convicted of illegally landing on Vietnamese territory, and each assessed a $10,000 fine. They were imprisoned for 11 months until they paid the fine. "Quedagh Merchant" found. For years, treasure hunters tried to locate the "Quedagh Merchant". It was reported on 13 December 2007 that "wreckage of a pirate ship abandoned by Captain Kidd in the 17th century has been found by divers in shallow waters off the Dominican Republic". The waters in which the ship was found were less than ten feet deep and lay just off Catalina Island, to the south of La Romana on the Dominican coast. The ship is believed to be "the remains of the "Quedagh Merchant"". Charles Beeker, the director of Academic Diving and Underwater Science Programs in Indiana University (Bloomington)'s School of Health, Physical Education, and Recreation, was one of the experts leading the Indiana University diving team. He said that it was "remarkable that the wreck has remained undiscovered all these years given its location", and that the ship had been the subject of so many prior failed searches. Captain Kidd's cannon, an artefact from the shipwreck, was added to a permanent exhibit at The Children's Museum of Indianapolis in 2011. False find. In May 2015, an ingot expected to be silver was found in a wreck off the coast of Île Sainte-Marie in Madagascar by a team led by marine archaeologist Barry Clifford. 
It was believed to be part of Kidd's treasure. Clifford gave the booty to Hery Rajaonarimampianina, President of Madagascar. But, in July 2015, a UNESCO scientific and technical advisory body reported that testing showed the ingot consisted of 95% lead, and speculated that the wreck in question was a broken part of the Sainte-Marie port constructions.
7120
27823944
https://en.wikipedia.org/wiki?curid=7120
Calreticulin
Calreticulin, also known as calregulin, CRP55, CaBP3, calsequestrin-like protein, and endoplasmic reticulum resident protein 60 (ERp60), is a protein that in humans is encoded by the "CALR" gene. Calreticulin is a multifunctional soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering them inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein. The term "Mobilferrin" is considered to be the same as calreticulin by some sources. Function. Calreticulin binds to misfolded proteins and prevents them from being exported from the endoplasmic reticulum to the Golgi apparatus. A similar quality-control molecular chaperone, calnexin, performs the same service for soluble proteins as calreticulin does; however, calnexin itself is a membrane-bound protein. Both proteins, calnexin and calreticulin, have the function of binding to oligosaccharides containing terminal glucose residues, thereby targeting them for degradation. The ability of calreticulin and calnexin to bind carbohydrates associates them with the lectin protein family. In normal cellular function, trimming of glucose residues off the core oligosaccharide added during N-linked glycosylation is a part of protein processing. If "overseer" enzymes detect that a protein is misfolded, glucose residues are re-added to it within the rER so that calreticulin or calnexin can bind the protein again and prevent it from proceeding to the Golgi. This leads these aberrantly folded proteins down a path whereby they are targeted for degradation. Studies on transgenic mice reveal that calreticulin is a cardiac embryonic gene that is essential during development. Calreticulin and calnexin are also integral in the production of MHC class I proteins. As newly synthesized MHC class I α-chains enter the endoplasmic reticulum, calnexin binds to them, retaining them in a partly folded state. After the β2-microglobulin binds to the peptide-loading complex (PLC), calreticulin (along with ERp57) takes over the job of chaperoning the MHC class I protein while tapasin links the complex to the transporter associated with antigen processing (TAP) complex. This association prepares the MHC class I molecule to bind an antigen for presentation on the cell surface. Transcription regulation. Calreticulin is also found in the nucleus, suggesting that it may have a role in transcription regulation. Calreticulin binds to the synthetic peptide KLGFFKR, which is almost identical to an amino acid sequence in the DNA-binding domain of the superfamily of nuclear receptors. The amino terminus of calreticulin interacts with the DNA-binding domain of the glucocorticoid receptor and prevents the receptor from binding to its specific glucocorticoid response element. Calreticulin can inhibit the binding of androgen receptor to its hormone-responsive DNA element and can inhibit androgen receptor and retinoic acid receptor transcriptional activities in vivo, as well as retinoic acid-induced neuronal differentiation. Thus, calreticulin can act as an important modulator of the regulation of gene transcription by nuclear hormone receptors. Clinical significance. Calreticulin binds to antibodies in certain sera of systemic lupus and Sjögren's syndrome patients that contain anti-Ro/SSA antibodies. 
Systemic lupus erythematosus is associated with increased autoantibody titers against calreticulin, but calreticulin is not a Ro/SS-A antigen. Earlier papers referred to calreticulin as a Ro/SS-A antigen, but this was later disproven. Increased autoantibody titers against human calreticulin, of both the IgG and IgM classes, are found in infants with complete congenital heart block. In 2013, two groups detected calreticulin mutations in a majority of JAK2-negative/MPL-negative patients with essential thrombocythemia and primary myelofibrosis, which makes "CALR" mutations the second most common in myeloproliferative neoplasms. All mutations (insertions or deletions) affected the last exon, generating a reading-frame shift in the resulting protein that creates a novel terminal peptide and causes loss of the endoplasmic reticulum KDEL retention signal. Role in cancer. Calreticulin (CRT) is expressed in many cancer cells and plays a role in promoting the engulfment of hazardous cancerous cells by macrophages. The reason most of these cells are not destroyed is the presence of another signalling molecule, CD47, which blocks the effect of CRT. Hence antibodies that block CD47 might be useful as a cancer treatment. In mouse models of myeloid leukemia and non-Hodgkin lymphoma, anti-CD47 antibodies were effective in clearing cancer cells while normal cells were unaffected. Interactions. Calreticulin has been shown to interact with perforin and NK2 homeobox 1.
7122
1798972
https://en.wikipedia.org/wiki?curid=7122
Crannog
A crannog is typically a partially or entirely artificial island, usually constructed in lakes, bogs and estuarine waters of Ireland, Scotland, and Wales. Unlike the prehistoric pile dwellings around the Alps, which were built on shores and not inundated until later, crannogs were built in the water, thus forming artificial islands. Humans have inhabited crannogs for over five millennia, from the European Neolithic Period to as late as the 17th/early-18th centuries. In Scotland there is no convincing evidence in the archaeological record of their use in the Early or Middle Bronze Age or in the Norse period. The radiocarbon dating obtained from key sites such as Oakbank and Redcastle indicates at a 95.4 per cent confidence level that they date to the Late Bronze Age to Early Iron Age. The date ranges fall "after" around 800 BC and so could be considered Late Bronze Age by only the narrowest of margins. Some crannogs apparently involved free-standing wooden structures, as at Loch Tay, although more commonly they are composed of brush, stone or timber mounds that can be revetted with timber piles. In areas such as the Outer Hebrides of Scotland, timber was unavailable from the Neolithic era onwards. As a result, crannogs made completely of stone and supporting drystone architecture are common there. Etymology and uncertain meanings. The Irish word derives from an Old Irish term that referred to a wooden structure or vessel, stemming from "crann", which means "tree", suffixed with "-óg", a diminutive ending ultimately borrowed from Welsh. The suffix "-óg" is sometimes misunderstood by non-native Irish-speakers as "óg", which is a separate word that means "young". This misunderstanding leads to a folk etymology whereby "crannóg" is misanalysed as "crann óg", which is pronounced differently and means "a young tree". The modern sense of the term first appears sometime around the 12th century; its popularity spread in the medieval period along with the terms "isle", "ylle", "inis", "eilean" or "oileán". There is some confusion over what the term "crannog" originally referred to: the structure atop the island or the island itself. The additional meanings of the Irish word can be variously related as 'structure/piece of wood', including 'crow's nest', 'pulpit', or 'driver's box on a coach'; 'vessel/box/chest' more generally; and 'wooden pin'. The Scottish Gaelic form has the additional meanings of 'pulpit' and 'churn'. Thus, there is no real consensus on what the term "crannog" actually implies, although the modern adoption in the English language broadly refers to a partially or completely artificial islet that saw use from the prehistoric to the Post-Medieval period in Ireland and Scotland. Location. Crannogs are widespread in Ireland, with an estimated 1,200 examples, while Scotland has 389 sites officially listed as such. The actual number in Scotland varies considerably depending on definition—between about 350 and 500, due to the use of the term "island dun" for well over one hundred Hebridean examples—a distinction that has created a divide between mainland Scottish crannog and Hebridean islet settlement studies. Previously unknown crannogs in Scotland and Ireland are still being found as underwater surveys continue to investigate loch beds for completely submerged examples. The largest concentrations of crannogs in Ireland are found in the Drumlin Belt of the Midlands, North and Northwest. 
In Scotland, crannogs are mostly found on the western coast, with high concentrations in Argyll and Dumfries and Galloway. In reality, the Western Isles contain the highest density of lake-settlements in Scotland, yet they are recognised under varying terms besides "crannog". One lone Welsh example exists at Llangorse Lake, probably a product of Irish influence. In Ireland, crannogs were most prevalent in Connacht and Ulster; where they were built on bogs and small lakes such as Lough Conn and Lough Gara, while being less frequent on larger lakes such as Lough Erne, or rivers such as the Shannon. Today, crannogs typically appear as small, circular islets, often in diameter, covered in dense vegetation due to their inaccessibility to grazing livestock. Reconstructed Irish crannógs are located at Craggaunowen, County Clare, in the Irish National Heritage Park, County Wexford and at Castle Espie, County Down. In Scotland there are reconstructions at the "Scottish Crannog Centre" at Loch Tay, Perthshire; this centre offers guided tours and hands-on activities, including wool-spinning, wood-turning and making fire, holds events to celebrate wild cooking and crafts, and hosts yearly Midsummer, Lughnasadh and Samhain festivals. Types and problems with definition. Crannogs took on many different forms and methods of construction based on what was available in the immediate landscape. The classic image of a prehistoric crannog stems from both post-medieval illustrations and highly influential excavations, such as Milton Loch in Scotland by C. M. Piggot after World War II. The Milton Loch interpretation is of a small islet surrounded or defined at its edges by timber piles and a gangway, topped by a typical Iron Age roundhouse. The choice of a small islet as a home may seem odd today, yet waterways were the main channels for both communication and travel until the 19th century in much of Ireland and, especially, Highland Scotland. Crannogs are traditionally interpreted as simple prehistorical farmsteads. They are also interpreted as boltholes in times of danger, as status symbols with limited access, and as inherited locations of power that imply a sense of legitimacy and ancestry towards ownership of the surrounding landscape. A strict definition of a crannog, which has long been debated, requires the use of timber. Sites in the Western Isles do not satisfy this criterion, although their inhabitants shared the common habit of living on water. If not classed as "true" crannogs, small occupied islets (often at least partially artificial in nature) may be referred to as "island duns". Rather confusingly, 22 islet-based sites are classified as "proper" crannogs due to differing interpretations of inspectors or excavators who drew up field reports. Hebridean island dwellings or crannogs were commonly built on both natural and artificial islets, usually reached by a stone causeway. The visible structural remains are traditionally interpreted as duns or, in more recent terminology, as "Atlantic roundhouses". This terminology has recently become popular when describing the entire range of robust, drystone structures that existed in later prehistoric Atlantic Scotland. The majority of crannog excavations were, by modern standards, poorly conducted in the late 19th and early 20th centuries by early antiquarians, or were purely accidental finds as lochs were drained during the improvements to increase usable farmland or pasture. 
In some early digs, labourers hauled away tons of materials, with little regard to anything that was not of immediate economic value. Conversely, the vast majority of early attempts at proper excavation failed to accurately measure or record stratigraphy, thereby failing to provide a secure context for artefact finds. Thus only extremely limited interpretations are possible. Preservation and conservation techniques for waterlogged materials such as logboats or structural material were all but non-existent, and a number of extremely important finds were destroyed as a result; in some instances, they were even dried out for firewood. From about 1900 to the late 1940s there was very little crannog excavation in Scotland, while some important and highly influential contributions were made in Ireland. In contrast, relatively few crannogs have been excavated since the Second World War. This number has steadily grown, especially since the early 1980s, and may soon surpass prewar totals. The overwhelming majority of crannogs show multiple phases of occupation and re-use, often extending over centuries. Thus the re-occupiers may have viewed crannogs as a legacy that was alive in local tradition and memory. Crannog reoccupation is important and significant, especially in the many instances of crannogs built near natural islets, which were often completely unused. This long chronology of use has been verified by both radiocarbon dating and more precisely by dendrochronology. Interpretations of crannog function have not been static; instead they appear to have changed in both the archaeological and historic records. Rather than the simple domestic residences of prehistory, the medieval crannogs were increasingly seen as strongholds of the upper class or regional political players, such as the Gaelic chieftains of the O'Boylans and McMahons in County Monaghan and the Kingdom of Airgíalla, until the 17th century. In Scotland, the medieval and post-medieval use of crannogs is also documented into the early 18th century. Whether this increase in status is real, or just a by-product of increasingly complex material assemblages, remains to be convincingly validated. History. The earliest-known constructed crannog is the completely artificial Neolithic islet of Eilean Dòmhnuill, Loch Olabhat on North Uist in Scotland. Eilean Domhnuill has produced radiocarbon dates ranging from 3650 to 2500 BC. Irish crannogs appear in middle Bronze Age layers at Ballinderry (1200–600 BC). Recent radiocarbon dating of worked timber found in Loch Bhorghastail on the Isle of Lewis has produced evidence of crannogs as old as 3380–3630 BC. Prior to the Bronze Age, the existence of artificial island settlement in Ireland is not as clear. While lakeside settlements are evident in Ireland from 4500 BC, these settlements are not crannogs, as they were not intended to be islands. Despite having a lengthy chronology, their use was not at all consistent or unchanging. Crannog construction and occupation was at its peak in Scotland from about 800 BC to AD 200. Not surprisingly, crannogs have useful defensive properties, although there appears to be more significance to prehistoric use than simple defense, as very few weapons or evidence for destruction appear in excavations of prehistoric crannogs. 
In Ireland, crannogs were at their zenith during the Early Historic period, when they were the homes and retreats of kings, lords, prosperous farmers and, occasionally, socially marginalised groups, such as monastic hermits or metalsmiths who could work in isolation. Despite scholarly concepts supporting a strict Early Historic evolution, Irish excavations are increasingly uncovering examples that date from the "missing" Iron Age in Ireland. Construction. The construction techniques for a crannog (prehistoric or otherwise) are as varied as the multitude of finished forms that make up the archaeological record. Island settlement in Scotland and Ireland is manifest through the entire range of possibilities ranging from entirely natural, small islets to completely artificial islets, therefore definitions remain contentious. For crannogs in the strict sense, typically the construction effort began on a shallow reef or rise in the lochbed. When timber was available, many crannogs were surrounded by a circle of wooden piles, with axe-sharpened bases that were driven into the bottom, forming a circular enclosure that helped to retain the main mound and prevent erosion. The piles could also be joined by mortise and tenon, or large holes cut to carefully accept specially shaped timbers designed to interlock and provide structural rigidity. On other examples, interior surfaces were built up with any mixture of clay, peat, stone, timber or brush – whatever was available. In some instances, more than one structure was built on crannogs. In other types of crannogs, builders and occupants added large stones to the waterline of small natural islets, extending and enlarging them over successive phases of renewal. Larger crannogs could be occupied by extended families or communal groups, and access was either by logboats or coracles. Evidence for timber or stone causeways exists on a large number of crannogs. The causeways may have been slightly submerged; this has been interpreted as a device to make access difficult but may also be a result of loch level fluctuations over the ensuing centuries or millennia. Organic remains are often found in excellent condition on these water-logged sites. The bones of cattle, deer, and swine have been found in excavated crannogs, while remains of wooden utensils and even dairy products have been completely preserved for several millennia. Fire and reconstruction. In June 2021, the Loch Tay Crannog was seriously damaged in a fire but funding was given to repair the structure, and conserve the museum materials retained. The UNESCO Chair in Refugee Integration through Languages and the Arts, Alison Phipps of Glasgow University and African artist Tawona Sithole considered its future and its impact as a symbol of common human history and 'potent ways of healing' including restarting the creative weaving with Soay sheep wool in 'a thousand touches'.
7123
29596807
https://en.wikipedia.org/wiki?curid=7123
Calendar date
A calendar date is a reference to a particular day, represented within a calendar system, enabling a specific day to be unambiguously identified. Simple math can be performed between dates; commonly, the number of days between two dates may be calculated, e.g., the 25th of a month is ten days after the 15th of the same month. The date of a particular event depends on the time zone used to record it. For example, the air attack on Pearl Harbor that began at 7:48 a.m. local Hawaiian time (HST) on 7 December 1941 is equally recorded as having happened on 8 December at 3:18 a.m. Japan Standard Time (JST). A particular day may be assigned a different nominal date according to the calendar used. The de facto standard for recording dates worldwide is the Gregorian calendar, the world's most widely used civil calendar. Many cultures use religious calendars such as the Gregorian (Western Christendom, AD), the Julian calendar (Eastern Christendom, AD), Hebrew calendar (Judaism, AM), the Hijri calendars (Islam, AH), or any other of the many calendars used around the world. Regnal calendars (that record a date in terms of years since the beginning of the monarch's reign) are also used in some places, for particular purposes. In most calendar systems, the date consists of three parts: the (numbered) "day of the month", the "month", and the (numbered) "year". There may also be additional parts, such as the "day of the week". Years are counted from a particular starting point called the "epoch", with "era" referring to the span of time since that epoch. A date without the year may also be referred to as a "date" or "calendar date" (a day and month given without the year). As such, it is either shorthand for the current year, or else it defines the day of an annual event such as a birthday on 31 May or Christmas on 25 December. Date format. There is a large variety of date formats in use, which differ in the order of the date components (e.g. 31/05/2006, 05/31/2006, 2006/05/31), in the component separators (e.g. 31.05.2006, 31/05/2006, 31-05-2006), in whether leading zeros are included (e.g. 31/5/2006 vs. 31/05/2006), in whether all four digits of the year are written (e.g. 31.05.2006 vs. 31.05.06), and in whether the month is represented in Arabic or Roman numerals or by name (e.g. 31.05.2006, 31.V.2006 vs. 31 May 2006). These variations all use the sample date of 31 May 2006. Gregorian, day–month–year (DMY). This little-endian sequence is used by a majority of the world and is the form preferred by the United Nations when writing the full date in official documents. This date format originates from the custom of writing the date as "the Nth day of [month] in the year of our Lord [year]" in Western religious and legal documents. The format has shortened over time but the order of the elements has remained constant. The following examples use the date of 9 November 2006: 09/11/2006, 09.11.2006 or 9 November 2006. (With the years 2000–2009, care must be taken that a two-digit year is not misread as 1900–1909 or another similar decade.) In the dotted form, the dots function as ordinal points. Gregorian, year–month–day (YMD). In this format, the most significant data item is written before lesser data items, i.e. the year before the month before the day. It is consistent with the big-endianness of the Hindu–Arabic numeral system, which progresses from the highest to the lowest order of magnitude. That is, with this format, textual ordering and chronological ordering are identical. This form is standard in East Asia, Iran, Lithuania, Hungary, and Sweden, and in some other countries to a limited extent. 
Examples for the 9th of November 2003 include 2003-11-09 and 2003 November 9. The format also extends naturally to big-endian clock time: 2003 November 9, 18h 14m 12s; 2003/11/9 18:14:12; or (ISO 8601) 2003-11-09T18:14:12. Gregorian, month–day–year (MDY). This sequence is used primarily in the Philippines and the United States. It is also used to varying extents in Canada (though never in Quebec). This date format was commonly used alongside the little-endian form in the United Kingdom until the mid-20th century and can be found in both defunct and modern print media such as the "London Gazette" and "The Times", respectively. This format was also commonly used by several English-language print media in many former British colonies and was one of two formats commonly used in India during the British Raj era until the mid-20th century. Modern style guides recommend avoiding the use of the ordinal (e.g. 1st, 2nd, 3rd, 4th) form of numbers when the day follows the month (July 4 or July 4, 2024), and that format is not included in ISO standards. The ordinal was common in the past and is still sometimes used ([the] 4th [of] July or July 4th). Gregorian, year–day–month (YDM). This date format is used in Kazakhstan, Latvia, Nepal, and Turkmenistan. According to the official rules of documenting dates by governmental authorities, the long date format in Kazakh is written in the year–day–month order, e.g. 2006 5 April. However, both Latvia and Kazakhstan use the day-month-year format (DD.MM.YYYY or DD/MM/YYYY) for all-numeric dates. Standards. There are several standards that specify date formats: Difficulties. Many numerical forms can create confusion when used in international correspondence, particularly when abbreviating the year to its final two digits, with no context. For example, "07/08/06" could refer to either 7 August 2006 or July 8, 2006 (or 1906, or the sixth year of any century), or 2007 August 6. The YYYY-MM-DD date format of ISO 8601, as well as other international standards, has been adopted for many applications for reasons including reducing transnational ambiguity and simplifying machine processing. An early U.S. Federal Information Processing Standard recommended 2-digit years. This is now widely recognized as extremely problematic, because of the year 2000 problem. Some U.S. government agencies now use ISO 8601 with 4-digit years. When transitioning from one calendar or date notation to another, a format that includes both styles may be developed; for example, Old Style and New Style dates in the transition from the Julian to the Gregorian calendar. Advantages for ordering in sequence. One of the advantages of using the ISO 8601 date format is that the lexicographical order (ASCIIbetical) of the representations is equivalent to the chronological order of the dates, assuming that all dates are in the same time zone. Thus dates can be sorted using simple string comparison algorithms, and indeed by any left to right collation. For example: 2003-02-28 (28 February 2003) sorts before 2006-03-01 (1 March 2006), which sorts before 2015-01-30 (30 January 2015). The YYYY-MM-DD layout is the only common format that can provide this. Sorting other date representations involves some parsing of the date strings. This also works when a time in 24-hour format is included after the date, as long as all times are understood to be in the same time zone. 
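To illustrate the sorting advantage just described, here is a minimal Python sketch; the sample dates are the ones quoted above, and the DD/MM/YYYY comparison is an added illustration rather than part of the original text.

```python
from datetime import datetime

# ISO 8601 (YYYY-MM-DD) sample dates, as used in the example above.
iso_dates = ["2015-01-30", "2003-02-28", "2006-03-01"]

# Plain string sorting yields chronological order because the most
# significant field (the year) comes first and every field has a fixed width.
assert sorted(iso_dates) == ["2003-02-28", "2006-03-01", "2015-01-30"]

# The same dates in DD/MM/YYYY form must be parsed before they sort correctly;
# sorting them as bare strings would compare the day field first.
dmy_dates = ["30/01/2015", "28/02/2003", "01/03/2006"]
chronological = sorted(dmy_dates, key=lambda s: datetime.strptime(s, "%d/%m/%Y"))
print(chronological)  # ['28/02/2003', '01/03/2006', '30/01/2015']
```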
ISO 8601 is used widely where concise, human-readable yet easily computable and unambiguous dates are required, although many applications store dates internally as UNIX time and only convert to ISO 8601 for display. All modern computer operating systems record date information for files separately from their names, allowing the user to choose which display format they prefer and have files sorted by date irrespective of the files' names. Specialized usage. Day and year only. The U.S. military sometimes uses a system, known to them as the "Julian date format", which indicates the year and the actual day out of the 365 days of the year (and thus a designation of the month would not be needed). For example, "11 December 1999" can be written in some contexts as "1999345" or "99345", for the 345th day of 1999. This system is most often used in US military logistics since it simplifies the process of calculating estimated shipping and arrival dates. For example: say a tank engine takes an estimated 35 days to ship by sea from the US to South Korea. If the engine is sent on 06104 (Friday, 14 April 2006), it should arrive on 06139 (Friday, 19 May). Outside of the US military and some US government agencies, including the Internal Revenue Service, this format is usually referred to as "ordinal date", rather than "Julian date". Such ordinal date formats are also used by many computer programs (especially those for mainframe systems). Using a three-digit Julian day number saves one byte of computer storage over a two-digit month plus two-digit day, for example, "January 17" is 017 in Julian versus 0117 in month-day format. OS/390 and its successor, z/OS, display dates in yy.ddd format for most operations. UNIX time stores time as a number of seconds since the beginning of the Unix epoch (1970-01-01). Another "ordinal" date system ("ordinal" in the sense of advancing in value by one as the date advances by one day) is in common use in astronomical calculations and referencing and uses the same name as this "logistics" system. The continuity of the representation, regardless of the time of year being considered, is highly useful to both groups of specialists. The astronomers describe their system as also being a "Julian date" system. Week number used. Companies in Europe often use year, week number, and day for planning purposes. So, for example, an event in a project can be scheduled for a given week (week 43), a given day of a week (Monday, week 43) or, if the year needs to be indicated, the year and week (the year 2006, week 43; i.e., Monday 23 October – Sunday 29 October 2006). Both the ordinal-date and week-number conventions are illustrated in the short code sketch at the end of this article. An ISO week-numbering year has 52 or 53 full weeks. That is 364 or 371 days instead of the conventional Gregorian year of 365 or 366 days. These 53-week years occur in all years that begin on a Thursday, and in leap years that begin on a Wednesday. The extra week is sometimes referred to as a 'leap week', although ISO 8601 does not use this term. Expressing dates in spoken English. In English-language usage outside North America (mostly in Anglophone Europe and some countries in Australasia), full dates are written as "7 December 1941" (or "7th December 1941") and spoken as "the seventh of December, nineteen forty-one" (exceedingly common usage of "the" and "of"), with the occasional usage of "December 7, 1941" ("December the seventh, nineteen forty-one"). In common with most continental European usage, however, all-numeric dates are invariably ordered dd/mm/yyyy. 
In Canada and the United States, the usual written form is "December 7, 1941", spoken as "December seventh, nineteen forty-one" or colloquially "December the seventh, nineteen forty-one". Ordinal numerals, however, are not always used when writing and pronouncing dates, and "December seven, nineteen forty-one" is also an accepted pronunciation of the date written "December 7, 1941". A notable exception to this rule is the Fourth of July (U.S. Independence Day).
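As a rough illustration of the ordinal-date ("Julian date format") and ISO week-number conventions described earlier, the following Python sketch uses only the standard library; the dates are the same examples quoted in the text, and the exact printed output is incidental.

```python
from datetime import date, timedelta

d = date(1999, 12, 11)           # 11 December 1999, the example used above
print(d.strftime("%Y%j"))        # '1999345' – four-digit year plus ordinal day
print(d.strftime("%y%j"))        # '99345'   – two-digit-year variant

# Estimating an arrival date is simple arithmetic on dates.
sent = date(2006, 4, 14)         # ordinal day 104 of 2006
arrival = sent + timedelta(days=35)
print(arrival.strftime("%y%j"))  # '06139' (Friday, 19 May 2006)

# ISO week-numbering year, week number and weekday (Monday = 1).
print(date(2006, 10, 23).isocalendar())  # ISO year 2006, week 43, weekday 1
```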
7124
28481209
https://en.wikipedia.org/wiki?curid=7124
Cist
In archeology, a cist (also kist) or cist grave is a small stone-built coffin-like box or ossuary used to hold the bodies of the dead. In some ways, it is similar to the deeper shaft tomb. Examples occur across Europe and in the Middle East. A cist may formerly have been associated with other monuments, perhaps under a cairn or a long barrow. Several cists are sometimes found close together within the same cairn or barrow. Ornaments have often been found within an excavated cist, indicating the wealth or prominence of the interred individual. This old word is preserved in the Nordic languages, where the corresponding Swedish, Danish and Norwegian forms are the words for a funerary coffin. In English the term is related to "cistern" and to "chest".
7125
57939
https://en.wikipedia.org/wiki?curid=7125
Center (group theory)
In abstract algebra, the center of a group G is the set of elements that commute with every element of G. It is denoted Z(G), from German "Zentrum", meaning "center". In set-builder notation, Z(G) = {z ∈ G : zg = gz for all g ∈ G}. The center is a normal subgroup, Z(G) ⊴ G, and also a characteristic subgroup, but is not necessarily fully characteristic. The quotient group, G/Z(G), is isomorphic to the inner automorphism group, Inn(G). A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element. The elements of the center are central elements. As a subgroup. The center of G is always a subgroup of G. In particular, it contains the identity element, it is closed under products, and it is closed under inverses. Furthermore, the center of G is always an abelian and normal subgroup of G. Since all elements of Z(G) commute, it is closed under conjugation. A group homomorphism f: G → H might not restrict to a homomorphism between the centers of G and H. The image elements f(z) commute with the image f(G), but they need not commute with all of H unless f is surjective. Thus the center mapping G ↦ Z(G) is not a functor between the categories Grp and Ab, since it does not induce a map of arrows. Conjugacy classes and centralizers. By definition, an element g is central whenever its conjugacy class contains only the element itself; i.e. Cl(g) = {g}. The center is the intersection of all the centralizers of elements of G: Z(G) = ⋂_{g ∈ G} C_G(g). As centralizers are subgroups, this again shows that the center is a subgroup. Conjugation. Consider the map f: G → Aut(G), from G to the automorphism group of G, defined by f(g) = φ_g, where φ_g is the automorphism of G defined by φ_g(h) = ghg⁻¹. The function f is a group homomorphism, its kernel is precisely the center of G, and its image is called the inner automorphism group of G, denoted Inn(G). By the first isomorphism theorem we get G/Z(G) ≅ Inn(G). The cokernel of this map is the group Out(G) of outer automorphisms, and these form the exact sequence 1 → Z(G) → G → Aut(G) → Out(G) → 1. Higher centers. Quotienting out by the center of a group yields a sequence of groups called the upper central series: G_0 = G, G_1 = G_0/Z(G_0), G_2 = G_1/Z(G_1), and so on. The kernel of the map G → G_i is the ith center of G (second center, third center, etc.), denoted Z_i(G). Concretely, the (i+1)-st center comprises the elements that commute with all elements of G up to an element of the ith center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter. The ascending chain of subgroups 1 ≤ Z(G) ≤ Z_2(G) ≤ ⋯ stabilizes at i (equivalently, Z_i(G) = Z_{i+1}(G)) if and only if G_i is centerless.
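Because the formulas above had to be restored from the surrounding prose, the main definitions are restated here in LaTeX for clarity: the center as an intersection of centralizers, the exact sequence arising from conjugation, and the upper central series. This is a restatement of standard definitions, not material beyond what the article already says.

```latex
\[
  Z(G) \;=\; \{\, z \in G \mid zg = gz \ \text{for all } g \in G \,\}
        \;=\; \bigcap_{g \in G} C_G(g)
\]
\[
  1 \longrightarrow Z(G) \longrightarrow G \longrightarrow
  \operatorname{Aut}(G) \longrightarrow \operatorname{Out}(G) \longrightarrow 1
\]
\[
  G_0 = G, \qquad G_{i+1} = G_i / Z(G_i), \qquad
  Z_i(G) = \ker\!\bigl(G \to G_i\bigr)
\]
```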
7129
372290
https://en.wikipedia.org/wiki?curid=7129
Commonwealth of England
The Commonwealth of England was the political structure during the period from 1649 to 1660 when the Kingdom of England, later along with Ireland and Scotland, were governed as a republic after the end of the Second English Civil War and the trial and execution of Charles I. The republic's existence was declared through "An Act declaring England to be a Commonwealth", adopted by the Rump Parliament on 19 May 1649. Power in the early Commonwealth was vested primarily in the Parliament and a Council of State. During the period, fighting continued, particularly in Ireland and Scotland, between the parliamentary forces and those opposed to them, in the Cromwellian conquest of Ireland and the Anglo-Scottish war of 1650–1652. In 1653, after dissolution of the Rump Parliament, the Army Council adopted the Instrument of Government, by which Oliver Cromwell was made Lord Protector of a united "Commonwealth of England, Scotland and Ireland", inaugurating the period now usually known as the Protectorate. After Cromwell's death, and following a brief period of rule under his son, Richard Cromwell, the Protectorate Parliament was dissolved in 1659 and the Rump Parliament recalled, starting a process that led to the restoration of the monarchy in 1660. The term Commonwealth is sometimes used for the whole of 1649 to 1660 – called by some the Interregnum – although for other historians, the use of the term is limited to the years prior to Cromwell's formal assumption of power in 1653. In retrospect, the period of republican rule for England was a failure in the short term. During the 11-year period, no stable government was established to rule the English state for longer than a few months at a time. Several administrative structures were tried, and several Parliaments called and seated, but little in the way of meaningful, lasting legislation was passed. The only force keeping it together was the personality of Oliver Cromwell, who exerted control through the military by way of the "Grandees", being the Major-Generals and other senior military leaders of the New Model Army. Not only did Cromwell's regime crumble into near anarchy upon his death and the brief administration of his son, but the monarchy he overthrew was restored in 1660, and its first act was officially to erase all traces of any constitutional reforms of the Republican period. Still, the memory of the Parliamentarian cause, dubbed the Good Old Cause by the soldiers of the New Model Army, lingered on. The Commonwealth period is better remembered for the military success of Thomas Fairfax, Oliver Cromwell, and the New Model Army. Besides resounding victories in the English Civil War, the reformed Navy under the command of Robert Blake defeated the Dutch in the First Anglo-Dutch War which marked the first step towards England's naval supremacy. In Ireland, the Commonwealth period is remembered for Cromwell's conquest of Ireland, which continued and completed the policies of the Tudor and Stuart periods. 1649–1653. Rump Parliament. The Rump was created by Pride's Purge of those members of the Long Parliament who did not support the political position of the Grandees in the New Model Army. Just before and after the execution of King Charles I on 30 January 1649, the Rump passed a number of acts of Parliament creating the legal basis for the republic. With the abolition of the monarchy, Privy Council and the House of Lords, it had unchecked executive and legislative power. 
The English Council of State, which replaced the Privy Council, took over many of the executive functions of the monarchy. It was selected by the Rump, and most of its members were MPs. However, the Rump depended on the support of the Army with which it had a very uneasy relationship. After the execution of Charles I, the House of Commons abolished the monarchy and the House of Lords. It declared the people of England "and of all the Dominions and Territories thereunto belonging" to be henceforth under the governance of a "Commonwealth", effectively a republic. Structure. In Pride's Purge, all members of parliament (including most of the political Presbyterians) who would not accept the need to bring the King to trial had been removed. Thus the Rump never had more than two hundred members (less than half the number of the Commons in the original Long Parliament). They included: supporters of religious independents who did not want an established church and some of whom had sympathies with the Levellers; Presbyterians who were willing to countenance the trial and execution of the King; and later admissions, such as formerly excluded MPs who were prepared to denounce the Newport Treaty negotiations with the King. Most Rumpers were gentry, though there was a higher proportion of lesser gentry and lawyers than in previous parliaments. Less than one-quarter of them were regicides. This left the Rump as basically a conservative body whose vested interests in the existing land ownership and legal systems made it unlikely to want to reform them. Issues and achievements. For the first two years of the Commonwealth, the Rump faced economic depression and the risk of invasion from Scotland and Ireland. By 1653 Cromwell and the Army had largely eliminated these threats. There were many disagreements amongst factions of the Rump. Some wanted a republic, but others favoured retaining some type of monarchical government. Most of England's traditional ruling classes regarded the Rump as an illegal government made up of regicides and upstarts. However, they were also aware that the Rump might be all that stood in the way of an outright military dictatorship. High taxes, mainly to pay the Army, were resented by the gentry. Limited reforms were enough to antagonise the ruling class but not enough to satisfy the radicals. Despite its unpopularity, the Rump was a link with the old constitution and helped to settle England down and make it secure after the biggest upheaval in its history. By 1653, France and Spain had recognised England's new government. Reforms. Though the Church of England was retained, episcopacy was suppressed and the Act of Uniformity 1558 was repealed in September 1650. Mainly on the insistence of the Army, many independent churches were tolerated, although everyone still had to pay tithes to the established church. Some small improvements were made to law and court procedure; for example, all court proceedings were now conducted in English rather than in Law French or Latin. However, there were no widespread reforms of the common law. This would have upset the gentry, who regarded the common law as reinforcing their status and property rights. The Rump passed many restrictive laws to regulate people's moral behaviour, such as closing down theatres and requiring strict observance of Sunday. Laws were also passed banning the celebration of Easter and Christmas. This antagonised most of the gentry. Dismissal. 
Cromwell, aided by Thomas Harrison, forcibly dismissed the Rump on 20 April 1653, for reasons that are unclear. Theories are that he feared the Rump was trying to perpetuate itself as the government, or that the Rump was preparing for an election which could return an anti-Commonwealth majority. Many former members of the Rump continued to regard themselves as England's only legitimate constitutional authority. The Rump had not agreed to its own dissolution; its members' constitutional argument that the dissolution was unlawful rested on the Act to which Charles had assented on 11 May 1641, which prohibited the dissolution of Parliament without its own consent. On that view, the Long Parliament had never lawfully ended, and the entire Commonwealth was simply the latter years of the Long Parliament. Barebone's Parliament, July–December 1653. The dissolution of the Rump was followed by a short period in which Cromwell and the Army ruled alone. Nobody had the constitutional authority to call an election, but Cromwell did not want to impose a military dictatorship. Instead, he ruled through a "nominated assembly" which he believed would be easy for the Army to control since Army officers did the nominating. Barebone's Parliament was opposed by former Rumpers and ridiculed by many of the gentry as an assembly of inferior people. In fact, over 110 of its 140 members were lesser gentry or of higher social status; an exception was Praise-God Barebone, a Baptist merchant after whom the Assembly got its derogatory nickname. Many were well educated. The assembly reflected the range of views of the officers who nominated it. The Radicals (approximately 40) included a hard core of Fifth Monarchists who wanted to be rid of Common Law and any state control of religion. The Moderates (approximately 60) wanted some improvements within the existing system and might move to either the radical or conservative side depending on the issue. The Conservatives (approximately 40) wanted to keep the "status quo", since common law protected the interests of the gentry, and tithes and advowsons were valuable property. Cromwell saw Barebone's Parliament as a temporary legislative body which he hoped would produce reforms and develop a constitution for the Commonwealth. However, members were divided over key issues, only 25 had previous parliamentary experience, and although many had some legal training, there were no qualified lawyers. Cromwell seems to have expected this group of amateurs to produce reform without management or direction. When the radicals mustered enough support to defeat a bill which would have preserved the "status quo" in religion, the conservatives, together with many moderates, surrendered their authority back to Cromwell, who sent soldiers to clear the rest of the Assembly. Barebone's Parliament was over. Protectorate, 1653–1659. Throughout 1653, Cromwell and the Army slowly dismantled the machinery of the Commonwealth state. The English Council of State, which had assumed the executive function formerly held by the King and his Privy Council, was forcibly dissolved by Cromwell on 20 April, and in its place a new council, filled with Cromwell's own chosen men, was installed. Three days after Barebone's Parliament dissolved itself, the Instrument of Government was adopted by Cromwell's council and a new state structure, now known historically as The Protectorate, was given its shape.
This new constitution granted Cromwell sweeping powers as Lord Protector, an office which ironically had much the same role and powers as the King had under the monarchy, a fact not lost on Cromwell's critics. On 12 April 1654, under the terms of the Tender of Union, the "Ordinance for uniting Scotland into one Commonwealth with England" was issued by the Lord Protector and proclaimed in Scotland by the military governor of Scotland, General George Monck, 1st Duke of Albemarle. The ordinance declared that "the people of Scotland should be united with the people of England into one Commonwealth and under one Government" and decreed that a new "Arms of the Commonwealth", incorporating the Saltire, should be placed on "all the public seals, seals of office, and seals of bodies civil or corporate, in Scotland" as "a badge of this Union". First Protectorate Parliament. Cromwell and his Council of State spent the first several months of 1654 preparing for the First Protectorate Parliament by drawing up a set of 84 bills for consideration. The Parliament was freely elected (as free as such elections could be in the 17th century); as a result, it was filled with a wide range of political interests and accomplished none of its goals. It passed none of Cromwell's proposed bills, and he dissolved it as soon as the law allowed. Rule of the Major-Generals and Second Protectorate Parliament. Having decided that Parliament was not an efficient means of getting his policies enacted, Cromwell instituted a system of direct military rule of England during a period known as the Rule of the Major-Generals; all of England was divided into ten regions, each governed directly by one of Cromwell's Major-Generals, who were given sweeping powers to collect taxes and enforce the peace. The Major-Generals were highly unpopular, a fact they themselves recognised, and many of them urged Cromwell to call another Parliament to give his rule legitimacy. Unlike the prior Parliament, which had been open to all eligible males in the Commonwealth, the new elections specifically excluded Catholics and Royalists from running or voting; as a result, the Parliament was stocked with members who were more in line with Cromwell's own politics. The first major bill to be brought up for debate was the Militia Bill, which was ultimately voted down by the House. As a result, the Major-Generals lost the authority to collect the taxes that supported their regimes, and the Rule of the Major-Generals came to an end. The second piece of major legislation was the passage of the Humble Petition and Advice, a sweeping constitutional reform which had two purposes. The first was to reserve for Parliament certain rights, such as a fixed three-year term (which the Lord Protector was required to abide by), and to reserve for Parliament the sole right of taxation. The second, as a concession to Cromwell, was to make the Lord Protector a hereditary position and to convert the title to a formal constitutional Kingship. Cromwell refused the title of King, but accepted the rest of the legislation, which was passed in final form on 25 May 1657. A second session of the Parliament met in 1658; it allowed previously excluded MPs (who had been barred from taking their seats because of Catholic or Royalist leanings) to take their seats. However, this made the Parliament far less compliant with the wishes of Cromwell and the Major-Generals; it accomplished little in the way of a legislative agenda and was dissolved after a few months.
Richard Cromwell and the Third Protectorate Parliament. On the death of Oliver Cromwell in 1658, his son, Richard Cromwell, inherited the title, Lord Protector. Richard had never served in the Army, which meant he lost control over the Major-Generals that had been the source of his own father's power. The Third Protectorate Parliament was summoned in late 1658 and was seated on 27 January 1659. Its first act was to confirm Richard's role as Lord Protector, which it did by a sizeable, but not overwhelming, majority. Quickly, however, it became apparent that Richard had no control over the Army and divisions quickly developed in the Parliament. One faction called for a recall of the Rump Parliament and a return to the constitution of the Commonwealth, while another preferred the existing constitution. As the parties grew increasingly quarrelsome, Richard dissolved it. He was quickly removed from power, and the remaining Army leadership recalled the Rump Parliament, setting the stage for the return of the Monarchy a year later. 1659–1660. After the Grandees in the New Model Army removed Richard, they reinstalled the Rump Parliament in May 1659. Charles Fleetwood was appointed a member of the Committee of Safety and of the Council of State, and one of the seven commissioners for the army. On 9 June he was nominated lord-general (commander-in-chief) of the army. However, his power was undermined in parliament, which chose to disregard the army's authority in a similar fashion to the pre–Civil War parliament. On 12 October 1659 the Commons cashiered General John Lambert and other officers, and installed Fleetwood as chief of a military council under the authority of the Speaker. The next day Lambert ordered that the doors of the House be shut and the members kept out. On 26 October a "Committee of Safety" was appointed, of which Fleetwood and Lambert were members. Lambert was appointed major-general of all the forces in England and Scotland, Fleetwood being general. Lambert was now sent, by the Committee of Safety, with a large force to meet George Monck, who was in command of the English forces in Scotland, and either negotiate with him or force him to come to terms. It was into this atmosphere that General George Monck marched south with his army from Scotland. Lambert's army began to desert him, and he returned to London almost alone. On 21 February 1660, Monck reinstated the Presbyterian members of the Long Parliament "secluded" by Pride, so that they could prepare legislation for a new parliament. Fleetwood was deprived of his command and ordered to appear before parliament to answer for his conduct. On 3 March Lambert was sent to the Tower, from which he escaped a month later. Lambert tried to rekindle the civil war in favour of the Commonwealth by issuing a proclamation calling on all supporters of the "Good Old Cause" to rally on the battlefield of Edgehill. However, he was recaptured by Colonel Richard Ingoldsby, a regicide who hoped to win a pardon by handing Lambert over to the new regime. The Long Parliament dissolved itself on 16 March. On 4 April 1660, in response to a secret message sent by Monck, Charles II issued the Declaration of Breda, which made known the conditions of his acceptance of the crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April. On 8 May it proclaimed that King Charles II had been the lawful monarch since the execution of Charles I in January 1649. Charles returned from exile on 23 May. 
He entered London on 29 May, his birthday. To celebrate "his Majesty's Return to his Parliament" 29 May was made a public holiday, popularly known as Oak Apple Day. He was crowned at Westminster Abbey on 23 April 1661.
7131
27823944
https://en.wikipedia.org/wiki?curid=7131
Charles Evers
James Charles Evers (September 11, 1922July 22, 2020) was an American civil rights activist, businessman, radio personality, and politician. Evers was known for his role in the civil rights movement along with his younger brother Medgar Evers. After serving in World War II, Evers began his career as a disc jockey at WHOC in Philadelphia, Mississippi. In 1954, he was made the National Association for the Advancement of Colored People (NAACP) State Voter Registration chairman. After his brother's assassination in 1963, Evers took over his position as field director of the NAACP in Mississippi. In this role, he organized and led many demonstrations for the rights of African Americans. In 1969, Evers was named "Man of the Year" by the NAACP. On June 3, 1969, Evers was elected in Fayette, Mississippi, as the first African-American mayor of a biracial town in Mississippi since the Reconstruction era, following passage of the Voting Rights Act of 1965 which enforced constitutional rights for citizens. At the time of Evers's election as mayor, the town of Fayette had a population of 1,600 of which 75% was African-American and almost 25% white; the white officers on the Fayette city police "resigned rather than work under a black administration," according to the Associated Press. Evers told reporters "I guess we will just have to operate with an all-black police department for the present. But I am still looking for some whites to join us in helping Fayette grow." Evers then outlawed the carrying of firearms within city limits. He ran for governor in 1971 and the United States Senate in 1978, both times as an independent candidate. In 1989, Evers was defeated for re-election after serving sixteen years as mayor. In his later life, he became a Republican, endorsing Ronald Reagan in 1980, and more recently Donald Trump in 2016. This diversity in party affiliations throughout his life was reflected in his fostering of friendships with people from a variety of backgrounds, as well as his advising of politicians from across the political spectrum. After his political career ended, he returned to radio and hosted his own show, "Let's Talk". In 2017, Evers was inducted into the National Rhythm & Blues Hall of Fame for his contributions to the music industry. Early life and education. Charles Evers was born in Decatur, Mississippi, on September 11, 1922, to James Evers, a laborer, and Jesse Wright Evers, a maid. He was the eldest of four children; Medgar Evers was his younger brother. He attended segregated public schools, which were typically underfunded in Mississippi following the exclusion of African Americans from the political system by disenfranchisement after 1890. Evers graduated from Alcorn State University in Lorman, Mississippi. Career. Business activities. During World War II, Charles and Medgar Evers both served in the United States Army. Charles fell in love with a Philippine woman while stationed overseas. He could not marry her and bring her home to Mississippi because the state's constitution prohibited interracial marriages. During the war he established a brothel in Quezon City which catered to American servicemen. After serving a year of reserve duty following the Korean War, he settled in Philadelphia, Mississippi. In 1949, he began working as a disc jockey at WHOC, making him the first black disc jockey in the state. By the early 1950s, he was managing a hotel, cab company, and burial insurance business in the town. 
He had a cafe in Philadelphia and influenced over two hundred black citizens to pay their poll tax. Forced to leave due to local white hostility in 1956, he moved to Chicago. Low on money, he began working as a meatpacker in stockyards during the day and as an attendant for the men's restroom at the Conrad Hilton Hotel at nights. He also began pimping and ran a numbers game, taking $500 a week from the latter. He gained enough money to purchase several bars, bootlegged liquor, and sold jukeboxes. Civil rights activism. In Mississippi about 1951, brothers Charles and Medgar Evers grew interested in African freedom movements. They were interested in Jomo Kenyatta and the rise of the Kikuyu tribal resistance to colonialism in Kenya, known as the Mau Mau uprising as it moved to open violence. Along with his brother, Charles became active in the Regional Council of Negro Leadership (RCNL), a civil rights organization that promoted self-help and business ownership. He also helped his brother with black voter registration drives. Between 1952 and 1955, Evers often spoke at the RCNL's annual conferences in Mound Bayou, a town founded by freedmen, on such issues as voting rights. His brother Medgar continued to be involved in civil rights, becoming field secretary and head of the National Association for the Advancement of Colored People (NAACP) in Mississippi. While working in Chicago he sent money to him, not specifying the source. On June 12, 1963, Byron De La Beckwith, a member of a Ku Klux Klan chapter, fatally shot Evers's brother, Medgar, in Mississippi as he arrived home from work. Medgar died at the hospital in Jackson. Charles learned of his brother's death several hours later and flew to Jackson the following morning. Deeply upset by the assassination, he heavily involved himself in the planning of his brother's funeral. He decided to relocate to Mississippi to carry on his brother's work. Journalist Jason Berry, who later worked for Charles, said, "I think he wanted to be a better person. I think Medgar's death was a cathartic experience." A decade after his death, Evers and blues musician B.B. King created the Medgar Evers Homecoming Festival, an annual three-day event held the first week of June in Mississippi. Over the opposition of more establishment figures in the National Association for the Advancement of Colored People (NAACP) such as Roy Wilkins, Evers took over his brother's post as head of the NAACP in Mississippi. Wilkins never managed a friendly relationship with Evers, and Medgar's widow, Myrlie, also disapproved of Charles' replacing him. A staunch believer in racial integration, he distrusted what he viewed as the militancy and separatism of the Student Nonviolent Coordinating Committee and the Mississippi Freedom Democratic Party, a black-dominated breakaway of the segregationist Mississippi Democratic Party. In 1965 he launched a series of successful black boycotts in southwestern Mississippi which partnered with the Natchez Deacons for Defense and Justice, which won concessions from the Natchez authorities and ratified his unconventional boycott methods. Often accompanied by a group of 65 male followers, he would pressure local blacks in small towns to avoid stores under boycott and directly challenge white business leaders. He also led a voter registration campaign. He coordinated his efforts from the small town of Fayette in Jefferson County. Fayette was a small, economically depressed town of about 2,500 people. 
About three-fourths of the population was black, and they had long been socially and economically subordinate to the white minority. Evers moved the NAACP's Mississippi field office from Jackson to Fayette to take advantage of the potential of the black majority and achieve political influence in Jefferson and two adjacent counties. He explained, "My feeling is that Negroes gotta control somewhere in America, and we've dropped anchor in these counties. We are going to control these three counties in the next ten years. There is no question about it." With his voter registration drives having made Fayette's number of black registered voters double the size of the white electorate, Evers helped elect a black man to the local school board in 1966. He also established the Medgar Evers Community Center at the outskirts of town, which served as a center for registration efforts, grocery store, restaurant, and dance hall. By early 1968 he had established a network of local NAACP branches in the region. The president of each branch served as Evers' deputies, and he attended all of their meetings. That year he made a bid for the open seat of the 3rd congressional district in the U.S. House of Representatives, facing six white opponents in the Democratic primary. Though low on funds, he led in the primary with a plurality of the votes. The Mississippi Legislature responded by passing a law mandating a runoff primary in the event of no absolute majority in the initial contest, which Evers lost. He also supported Robert F. Kennedy's 1968 presidential campaign, serving as co-director of his Mississippi campaign organization, and was with Kennedy in Los Angeles when he was assassinated. Mayor of Fayette. In May 1969, Evers ran for the office of Mayor of Fayette and defeated white incumbent R. J. Allen, 386 votes to 255. This made him the first black mayor of a biracial Mississippi town (unlike the all-black Mound Bayou) since Reconstruction. Evers' election as mayor had great symbolic significance statewide and attracted national attention. The NAACP named Evers their 1969 Man of the Year. Evers popularized the slogan, "Hands that picked cotton can now pick the mayor." The local white community was bitter about his victory, but he became intensively popular among Mississippi's blacks. To celebrate his victory, he hosted an inaugural ball in Natchez, which was widely attended by black Mississippians, reporters from around the country, and prominent national liberals including Ramsey Clark, Ted Sorensen, Whitney Young, Julian Bond, Shirley MacLaine, and Paul O'Dwyer. The white-dominated school board refused to let Evers swear-in on property under their jurisdiction, so he took his oath of office in a parking lot. Evers appointed a black police force and several black staff members. He also benefitted from an influx of young, white liberal volunteers who wanted to assist a civil rights leader. Many ended up leaving after growing disillusioned with Evers' pursuit of personal financial success and domineering leadership style. Evers sought to make Fayette an upstanding community and a symbolic refuge for black people. Repulsed by the behavior of poor blacks in the town, he ordered the police force to enforce a 25-mile per hour speed limit on local roads, banned cursing in public, and cracked down on truancy. He also prohibited the carrying of firearms in town but kept a gun on himself. He quickly responded to concerns from poor blacks while making white businessmen wait outside of his office. 
Rhetorically, he would vacillate between messages of racial conciliation and statements of hostility. Fayette's white population remained bitter about Evers' victory. Many avoided the city hall where they used to socialize and "The Fayette Chronicle" regularly criticized him. He argued with the county board of supervisors over his plan to erect busts of his brother, Martin Luther King Jr., and the Kennedys on the courthouse square. He told the press, "They're cooperating because they haven't blown my head off. This is Mississippi." In September 1969, a Klansman drove into Fayette with a collection of weapons, intending to assassinate Evers. A white resident tipped off the mayor and the Klansman was arrested. The Klansman defended his motives by saying, "I am a Mississippi white man". Evers' moralistic style began to create discontent; in early 1970, most of Fayette's police department resigned, saying the mayor had treated them "like dogs". Evers complained that local blacks were "jealous" of him. As the judge in the municipal court, he personally issued fines for infractions such as cursing in public. He regularly ignored the input of the town board of aldermen, and town employee Charles Ramberg reported that he said he would fire municipal workers who would not vote for him. During Evers' tenure, Fayette benefitted from several federal grants, and ITT Inc. built an assembly plant in the town, but the region's economy largely remained depressed. By 1981, Jefferson County had the highest unemployment rate in the state. Whites' perception that Evers was venal and self-interested persisted and began to spread among the black community. This problem ballooned when in 1974 the Internal Revenue Service arranged for him to be indicted for tax evasion by failing to report $156,000 in income he garnered in the late 1960s. Prosecutors further accused him of depositing town funds in a personal bank account. His attorney told the court that Evers had indeed concealed the income, but argued that the charge was invalid since this had been done before the late 1960s, as the indictment specified. The case resulted in a mistrial, but Evers' reputation permanently suffered. In the late 1970s he used a $5,300 federal grant to renovate a building he owned which he leased to a federal day care program, and used some of the employees for personal business. Evers served many terms as mayor of Fayette. Admired by some, he alienated others with his inflexible stands on various issues. Evers did not like to share or delegate power. Evers lost the Democratic primary for mayor in 1981 to Kennie Middleton. Four years later, Evers defeated Middleton in the primaries and won back the office of mayor. In 1989, Evers lost the nomination once again to political rival Kennie Middleton. In his response to the defeat, Evers accepted, said he was tired, and that: "Twenty years is enough. I'm tired of being out front. Let someone else be out front." 1971 gubernatorial campaign. Evers began mulling the possibility of a campaign for the office of governor in 1969. He decided to enter the 1971 gubernatorial election as an independent, kicking off his campaign with a rally in Decatur. He later explained his reason for launching the bid, saying, "I ran for governor because if someone doesn't start running, there will never be a black man or a black woman governor of the state of Mississippi." 
He endorsed white segregationist Jimmy Swan in the Democratic primary, reasoning that if Swan won the nomination, moderate whites would be more inclined to vote for himself in the general election. He campaigned on a platform of reduced taxes—particularly for lower property taxes on the elderly, improved healthcare, and legalizing gambling along the Gulf Coast. Low on money, his candidacy was largely funded by the sale of campaign buttons and copies of his recently published autobiography. His campaign staff was largely young and inexperienced and lacked organization. Evers' rallies drew large crowds of blacks. "The Clarion-Ledger", a leading Mississippian conservative newspaper, largely ignored his campaign. To gain attention, he unexpectedly gatecrashed the annual Fisherman's Rodeo in Pascagoula and stopped and spoke to people on the streets of Jackson during their morning commute. Police departments in rural towns were often horrified by the arrival of his campaign caravan. A total of 269 other black candidates were running for office in Mississippi that year, and many of them complained that Evers was self-absorbed and hoarding resources, despite his slim chances of winning. Evers did little to support them. In the general election, Evers faced Democratic nominee Bill Waller and independent segregationist Thomas Pickens Brady. Waller and Evers were personally acquainted with one another, as Waller had prosecuted Beckwith for the murder of Medgar. Despite the fears of public observers, the campaign was largely devoid of overt racist appeals and Evers and Waller avoided negative tactics. Though about 40 percent of the Mississippi electorate in 1971 was black, Evers only secured about 22 percent of the total vote; Waller won with 601,222 votes to Evers' 172,762 and Brady's 6,653. The night of the election, Evers shook the hands of Waller supporters in Jackson and then went to a local television station where his opponent was delivering a victory speech. Learning that Evers had arrived, Waller's nervous aides hurried the governor-elect to his car. Evers approached the car shortly before its departure and told Waller, "I just wanted to congratulate you." Waller replied, "Whaddya say, Charlie?" and his wife leaned over and shook Evers' hand. Later political career. In 1978, Evers ran as an independent for the U.S. Senate seat vacated by Democrat James Eastland. He finished in third place behind his opponents, Democrat Maurice Dantin and Republican Thad Cochran. He received 24 percent of the vote, likely siphoning off African-American votes that would have otherwise gone to Dantin. Cochran won the election with a plurality of 45 percent of the vote. With the shift in white voters moving into the Republican Party in the state (and the rest of the South), Cochran was continuously re-elected to his Senate seat. After his failed Senate race, Evers briefly switched political parties and became a Republican. In 1983, Evers ran as an independent for governor of Mississippi but lost to the Democrat Bill Allain. Republican Leon Bramlett of Clarksdale, also known as a college All-American football player, finished second with 39 percent of the vote. Evers endorsed Ronald Reagan for President of the United States during the 1980 United States presidential election. Evers later attracted controversy for his support of judicial nominee Charles W. Pickering, a Republican, who was nominated by President George H. W. Bush for a seat on the U.S. Court of Appeals. 
Evers criticized the NAACP and other organizations for opposing Pickering, as he said the candidate had a record of supporting the civil rights movement in Mississippi. Evers befriended a range of people from sharecroppers to presidents. He was an informal adviser to politicians as diverse as Lyndon B. Johnson, George C. Wallace, Ronald Reagan and Robert F. Kennedy. Evers severely criticized such national leaders as Roy Wilkins, Stokely Carmichael, H. Rap Brown and Louis Farrakhan over various issues. Evers had been a member of the Republican Party for 30 years when he spoke warmly of the 2008 election of Barack Obama as the first black President of the United States. During the 2016 presidential election, Evers supported Donald Trump's presidential campaign. Books. Evers wrote two autobiographies or memoirs: "Evers" (1971), written with Grace Halsell and self-published; and "Have No Fear," written with Andrew Szanton and published by John Wiley & Sons (1997). Personal life. Evers was briefly married to Christine Evers; the marriage ended in annulment. In 1951, Evers married Nannie L. Magee, with whom he had four daughters. The couple divorced in June 1974. Evers lived in Brandon, Mississippi, and served as station manager of WMPR 90.1 FM in Jackson. On July 22, 2020, Evers died in Brandon at age 97. Media portrayal. Evers was portrayed by Bill Cobbs in the film "Ghosts of Mississippi" (1996).
7143
1297765147
https://en.wikipedia.org/wiki?curid=7143
Code-division multiple access
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code). CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range. It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T, UScellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911. It can be also used as a channel or medium access technology, like ALOHA for example or as a permanent pilot/signalling channel to allow users to synchronize their local oscillators to a common system frequency, thereby also estimating the channel parameters permanently. In these schemes, the message is modulated on a longer spreading sequence, consisting of several chips (0s and 1s). Due to their very advantageous auto- and crosscorrelation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length of typically 8 to 32). For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme like (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equipping them with accurate ranging capabilities, which is difficult without CDMA). Other schemes use subcarriers based on binary offset carrier modulation (BOC modulation), which is inspired by Manchester codes and enable a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers. History. The technology of code-division multiple access channels has long been known. United States. In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950. Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab. Soviet Union. In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev. It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station. 
The LK-1 weighed 3 kg, had an operating range of 20–30 km, and offered 20–30 hours of battery life. The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of the mobile phone. This phone weighed 0.5 kg. To serve more customers, Kupriyanovich proposed a device which he called a "correlator." In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed . It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 Altai service was used in 30 USSR cities. Steps in CDMA modulation. CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code in the time domain that has a narrow ambiguity function in the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration of T_b (symbol period) is XORed with the code signal with pulse duration of T_c (chip period). (Note: bandwidth is proportional to 1/T, where T = bit time.) Therefore, the bandwidth of the data signal is 1/T_b and the bandwidth of the spread-spectrum signal is 1/T_c. Since T_c is much smaller than T_b, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T_b/T_c is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station. Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is achieved by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference. An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division).
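As a rough sketch of the XOR spreading step described above (an illustration only, not the procedure of any particular standard): each data bit is held for one symbol period T_b and XORed with a faster chip sequence, so the spreading factor is simply the number of chips per bit; the chip pattern below is an arbitrary illustrative choice:

```python
# Minimal direct-sequence spreading by XOR (illustrative values only).
data_bits = [1, 0, 1, 1]
pn_code = [1, 0, 1, 1, 0, 1, 0, 0]   # one arbitrary chip pattern, repeated every bit period
chips_per_bit = len(pn_code)         # spreading factor = T_b / T_c

def spread(bits, code):
    """XOR each data bit with every chip of the code (the bit is held for one symbol period)."""
    return [b ^ c for b in bits for c in code]

def despread(chip_stream, code):
    """XOR with the same code again and majority-vote over each bit period."""
    n = len(code)
    bits = []
    for i in range(0, len(chip_stream), n):
        recovered = [chip_stream[i + j] ^ code[j] for j in range(n)]
        bits.append(1 if sum(recovered) > n // 2 else 0)
    return bits

tx = spread(data_bits, pn_code)            # chip-rate stream, 8x the data rate here
print(chips_per_bit)                       # 8, the processing gain in this toy example
print(despread(tx, pn_code) == data_bits)  # True
```

The transmitted stream occupies roughly chips_per_bit times the bandwidth of the data, which is the point of the T_b/T_c ratio discussed above.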
CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate. In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes). Code-division multiplexing (synchronous CDMA). The digital modulation method is analogous to that used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry. Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string "1011" is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = ("a", "b") and v = ("c", "d"), then their dot product u·v = "ac" + "bd"). If the dot product is zero, the two vectors are said to be "orthogonal" to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then a·b = 0 and: a·(a + b) = a·a, since a·a + a·b = a·a + 0; a·(−a + b) = −a·a; b·(a + b) = b·b; and b·(−a + b) = b·b. Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded. Example. Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the "code", "chip code", or "chipping code". In the interest of brevity, the rest of this example uses codes v with only two bits. Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = ("v"0, "v"1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, −v, v, v) = ("v"0, "v"1, −"v"0, −"v"1, "v"0, "v"1, "v"0, "v"1) = (1, −1, −1, 1, 1, −1, 1, −1). For the purposes of this article, we call this constructed vector the "transmitted vector". Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component. If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps: Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal (1, −1, −1, 1, 1, −1, 1, −1) + (−1, −1, −1, −1, 1, 1, 1, 1) = (0, −2, −2, 0, 2, 0, 2, 0). This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another: Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example: Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver: When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data. Asynchronous CDMA. When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in "asynchronous" CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in "multiple access interference" (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users. All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. 
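Returning to the synchronous example worked through above, the short Python sketch below (an illustration only, not receiver code from any standard) reproduces its numbers: the two-chip codes (1, −1) and (1, 1), the summed raw signal, and the recovery of each sender's data by correlation and thresholding:

```python
# Synchronous CDMA with orthogonal chip codes, reproducing the worked example above.
def encode(bits, code):
    """1 -> +code, 0 -> -code, concatenated into the transmitted vector."""
    out = []
    for b in bits:
        out.extend(code if b == 1 else [-c for c in code])
    return out

def decode(received, code):
    """Correlate each code-length block with the code and threshold at zero."""
    n = len(code)
    bits = []
    for i in range(0, len(received), n):
        corr = sum(received[i + j] * code[j] for j in range(n))
        bits.append(1 if corr > 0 else 0)   # a correlation of exactly 0 would mean "nothing sent"
    return bits

code0, data0 = [1, -1], [1, 0, 1, 1]
code1, data1 = [1, 1],  [0, 0, 1, 1]

# The two transmitted vectors add in the air to form the interference pattern.
raw = [a + b for a, b in zip(encode(data0, code0), encode(data1, code1))]
print(raw)                 # [0, -2, -2, 0, 2, 0, 2, 0]
print(decode(raw, code0))  # [1, 0, 1, 1]
print(decode(raw, code1))  # [0, 0, 1, 1]
```

Because the two codes are orthogonal, correlating the summed signal with either code recovers that sender's data exactly, which is the property the dot-product identities above express.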
Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor. Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power. In 2019, schemes to precisely estimate the required length of the codes in dependence of Doppler and delay characteristics have been developed. Soon after, machine learning based techniques that generate sequences of a desired length and spreading properties have been published as well. These are highly competitive with the classic Gold and Welch sequences. These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables. Advantages of asynchronous CDMA over other techniques. Efficient practical utilization of the fixed frequency spectrum. In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. Flexible allocation of resources. Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. 
In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2"N" users that only talk half of the time, then 2"N" users can be accommodated with the same "average" bit error probability as "N" users that talk all of the time. The key difference here is that the bit error probability for "N" users talking all of the time is constant, whereas it is a "random" quantity (with the same mean) for 2"N" users talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are "N" time slots in a TDMA system and 2"N" users that talk half of the time, then half of the time there will be more than "N" users needing to use more than "N" time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system. Spread-spectrum characteristics of CDMA. Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal. CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome. 
Another reason CDMA is resistant to multipath interference is that the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored (this correlation behaviour is sketched numerically below). Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlator tuned to the path delay of the strongest signal. Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell. Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable by the user and provides a more reliable and higher-quality signal. Collaborative CDMA. A novel collaborative multi-user transmission and detection scheme called collaborative CDMA has been investigated for the uplink; it exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in MAI-limited environments. The authors show that it is possible to achieve this increase with low complexity and good bit-error-rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The collaborative multi-user receiver consists of two stages: a group multi-user detection (MUD) stage to suppress the MAI between the groups, and a low-complexity maximum-likelihood detection stage to jointly recover the co-spread users' data using a minimum Euclidean-distance measure and the users' channel-gain coefficients. 
An enhanced CDMA version known as interleave-division multiple access (IDMA) uses orthogonal interleaving as the only means of user separation, in place of the signature sequences used in conventional CDMA systems.
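Returning to the multipath-rejection property described above, the autocorrelation behaviour of a pseudo-random chip sequence can be illustrated with a minimal sketch. The example below uses a random ±1 sequence as a stand-in for a real PN, Gold or m-sequence, and the sequence length is an arbitrary assumption; it simply shows the sharp correlation peak at zero lag and the near-zero correlation at offsets of one chip or more, which is what a rake receiver's per-path correlators rely on.

```python
# Minimal sketch: a pseudo-random +/-1 chip sequence correlates strongly with
# itself only at zero lag; copies delayed by one or more chips (multipath
# echoes) look almost uncorrelated, so each rake finger can lock onto one path.
import numpy as np

rng = np.random.default_rng(2)
chips = rng.choice([-1, 1], 1023)        # stand-in for a PN/Gold-like sequence

def circular_autocorrelation(seq, lag):
    """Normalized correlation of the sequence with a circularly delayed copy."""
    return np.dot(seq, np.roll(seq, lag)) / len(seq)

for lag in range(5):
    print(f"lag {lag} chips: {circular_autocorrelation(chips, lag):+.3f}")
```

Running this prints a value of +1.000 at zero lag and values close to zero at the other lags, mirroring the statement that a multipath component delayed by at least one chip appears as just another low-level interferer.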
7144
28481209
https://en.wikipedia.org/wiki?curid=7144
Internet filter
An Internet filter is software that restricts or controls the content an Internet user is able to access, especially when utilized to restrict material delivered over the Internet via the Web, email, or other means. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (see Internet censorship), or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by an individual user to their own computer. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some filter software includes time-control functions that empower parents to set the amount of time that a child may spend accessing the Internet, playing games, or engaging in other computer activities. Terminology. The term "content control" is used on occasion by CNN, "Playboy" magazine, the "San Francisco Chronicle", and "The New York Times". However, several other terms, including "content filtering software", "web content filter", "filtering proxy servers", "secure web gateways", "censorware", "content security and control", "web filtering software", "content-censoring software", and "content-blocking software", are often used. "Nannyware" has also been used in both product marketing and by the media. Industry research company Gartner uses "secure web gateway" (SWG) to describe the market segment. Companies that make products that selectively block Web sites do not refer to these products as censorware, and prefer terms such as "Internet filter" or "URL filter"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, the term "parental control software" is also used. Some products log all sites that a user accesses and rate them based on content type for reporting to an "accountability partner" of the person's choosing, and the term accountability software is used. Internet filters, parental control software, and/or accountability software may also be combined into one product. Those critical of such software, however, use the term "censorware" freely: consider the Censorware Project, for example. The use of the term "censorware" in editorials criticizing makers of such software is widespread and covers many different varieties and applications: Xeni Jardin used the term in a 9 March 2006 editorial in "The New York Times" when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district. In general, outside of editorial pages as described above, traditional newspapers do not use the term "censorware" in their reporting, preferring instead to use less overtly controversial terms such as "content filter", "content control", or "web filtering"; "The New York Times" and "The Wall Street Journal" both appear to follow this practice. On the other hand, Web-based newspapers such as CNET use the term in both editorial and journalistic contexts, for example "Windows Live to Get Censorware." Types of filtering. 
Filters can be implemented in many different ways: by software on a personal computer, or via network infrastructure such as proxy servers, DNS servers, or firewalls that provide Internet access. No solution provides complete coverage, so most companies deploy a mix of technologies to achieve the proper content control in line with their policies. Browser-based content filtering is the most lightweight solution and is implemented via a third-party browser extension. E-mail filters act on information contained in the mail body, in the mail headers such as sender and subject, and in e-mail attachments to classify, accept, or reject messages. Bayesian filters, a type of statistical filter, are commonly used, and both client- and server-based filters are available. Client-side filters are installed as software on each computer where filtering is required. This type of filter can typically be managed, disabled or uninstalled by anyone who has administrator-level privileges on the system; a DNS-based client-side filter can be implemented by setting up a DNS sinkhole, such as Pi-hole. Content-limited (or filtered) ISPs are Internet service providers that offer access to only a set portion of Internet content on an opt-in or a mandatory basis. Anyone who subscribes to this type of service is subject to restrictions. This type of filter can be used to implement government, regulatory or parental control over subscribers. Network-based filtering is implemented at the transport layer as a transparent proxy, or at the application layer as a web proxy. Filtering software may include data loss prevention functionality to filter outbound as well as inbound information. All users are subject to the access policy defined by the institution. The filtering can be customized, so a school district's high school library can have a different filtering profile than the district's junior high school library. DNS-based filtering is implemented at the DNS layer and attempts to prevent lookups for domains that do not fit within a set of policies (either parental control or company rules); a minimal sketch of this decision logic is given below. Multiple free public DNS services offer filtering options as part of their services. DNS sinkholes such as Pi-hole can also be used for this purpose, though client-side only. Many search engines, such as Google and Bing, offer users the option of turning on a safety filter. When this safety filter is activated, it filters out the inappropriate links from all of the search results. If users know the actual URL of a website that features explicit or adult content, they have the ability to access that content without using a search engine. Some providers offer child-oriented versions of their engines that permit only child-friendly websites. Reasons for filtering. The Internet does not intrinsically provide content blocking, and therefore there is much content on the Internet that is considered unsuitable for children, given that much content is certified as suitable for adults only, e.g. 18-rated games and movies. Internet service providers (ISPs) that block material containing pornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material including adware, spam, computer viruses, worms, trojan horses, and spyware. 
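The DNS-based filtering described above amounts to a simple decision rule: if a queried name, or any of its parent domains, is on a blocklist, the resolver answers with a sinkhole address instead of the real one. The following Python sketch is purely illustrative (the blocklist entries, the sinkhole address and the stubbed upstream resolver are all hypothetical, and no real DNS library is used); it is not the implementation of Pi-hole or of any other product.

```python
# Minimal sketch of DNS-based filtering: block a query if the name or any
# parent domain is on the blocklist, otherwise hand it to an upstream resolver.
BLOCKLIST = {"ads.example", "tracker.example.net"}   # hypothetical entries
SINKHOLE_ADDRESS = "0.0.0.0"                         # a dead-end answer

def parent_domains(name: str):
    """Yield 'a.b.c', 'b.c', 'c' for a query name 'a.b.c'."""
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        yield ".".join(labels[i:])

def resolve(name: str, upstream_lookup) -> str:
    if any(domain in BLOCKLIST for domain in parent_domains(name)):
        return SINKHOLE_ADDRESS              # blocked: client gets a dead-end answer
    return upstream_lookup(name)             # allowed: forward to the real resolver

# Stubbed upstream resolver for demonstration only:
print(resolve("banner.ads.example", lambda n: "203.0.113.7"))   # -> 0.0.0.0
print(resolve("news.example.org", lambda n: "203.0.113.7"))     # -> 203.0.113.7
```

The same subdomain-matching rule is what lets one blocklist entry cover every host under a blocked domain.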
Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as "self-censorship" or "accountability software". These are often promoted by religious media and at religious gatherings. Criticism. Filtering errors. Overblocking. Utilizing a filter that is overly zealous at filtering content, or that mislabels content not intended to be censored, can result in overblocking, or over-censoring. Overblocking can filter out material that should be acceptable under the filtering policy in effect; for example, health-related information may unintentionally be filtered along with pornography-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting overblocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University. Another example was the filtering of the Horniman Museum. Overblocking may also encourage users to bypass the filter entirely. Underblocking. Whenever new information is uploaded to the Internet, filters can underblock, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place. Morality and opinion. Many would not be satisfied with a government filtering viewpoints on moral or political issues, arguing that such filtering could become support for propaganda. Many would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing the users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced internet censorship. (See section below) Legal actions. In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment. In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 the Supreme Court ruled in their favor. Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation, was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary. In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol" license agreement. They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets. Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. 
The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid. The Motion Picture Association successfully obtained a UK ruling requiring ISPs to use content-control software to prevent copyright infringement by their subscribers. Religious, anti-religious, and political censorship. Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites (including the Web site of the Vatican), many political sites, and homosexuality-related sites. "X-Stop" was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle. CYBERsitter blocks out sites like the National Organization for Women. Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use. Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company, has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel, and gay-rights web sites, such as glaad.org. Content labeling. Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA), now part of the Family Online Safety Institute, developed a content rating system for online content providers. Using an online questionnaire, a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer-readable digest of this description that can then be used by content filtering software to block or allow that site. ICRA labels come in a variety of formats. These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor. ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content. The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use it. Like the ICRA label, the RTA label is free. Both labels are recognized by a wide variety of content-control software. The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, "mature" and "adult", making the specification extremely simple. Use in public libraries. Australia. The Australian Internet Safety Advisory Body has information about "practical advice on Internet safety, parental control and filters for the protection of children, students and families" that also includes public libraries. 
NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering. The Australian Government has introduced legislation that requires ISPs to "restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia", which was due to commence from 20 January 2008 and became known as Cleanfeed. Cleanfeed is a proposed mandatory ISP-level content filtration system. It was proposed by the Beazley-led Australian Labor Party opposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by the Rudd ALP government, and initial tests in Tasmania have produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by the EFA and gaining irregular mainstream media attention, with a majority of Australians reportedly "strongly against" its implementation. Criticisms include its expense, inaccuracy (it will be impossible to ensure only illegal sites are blocked) and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights. Another major point of criticism has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while using the Internet, achieving the exact opposite effect. Cleanfeed is a responsibility of Senator Conroy's portfolio. Denmark. In Denmark, the stated policy is to "prevent inappropriate Internet sites from being accessed from children's libraries across Denmark". "'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press-release of the Danish Ministry of Culture." United States. The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request. 
Many legal scholars believe that a number of legal cases, in particular "Reno v. American Civil Liberties Union", established that the use of content-control software in libraries is a violation of the First Amendment. In the June 2003 case "United States v. American Library Association", however, the Supreme Court found the Children's Internet Protection Act (CIPA) constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled, without having to explain the reasons for their request. The plurality decision left open a future "as-applied" Constitutional challenge, however. In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter. In May 2010, the Washington State Supreme Court provided an opinion after it was asked to rule on a question certified by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought." The case returned to federal court. In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software. Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user. Bypassing filters. Content filtering in general can "be bypassed entirely by tech-savvy individuals." Blocking content on a device "[will not]…guarantee that users won't eventually be able to find a way around the filter." Content providers may change URLs or IP addresses to circumvent filtering. Individuals with technical expertise may use a different method by employing multiple domains or URLs that direct to a shared IP address where restricted content is present. This strategy does not circumvent IP packet filtering, but it can evade DNS poisoning and web proxies. Additionally, perpetrators may use mirrored websites that avoid filters. Some software may be bypassed successfully by using alternative protocols such as FTP, telnet or HTTPS, by conducting searches in a different language, or by using a proxy server or a circumventor such as Psiphon. Cached web pages returned by Google or other search engines may also bypass some controls. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager, or in Mac OS X using Force Quit or Activity Monitor. 
Numerous workarounds and counters to workarounds from content-control software creators exist. Google services are often blocked by filters, but these may most often be bypassed by using "https://" in place of "http://", since content filtering software is not able to interpret content under secure connections (in this case SSL). An encrypted VPN can be used as a means of bypassing content control software, especially if the content control software is installed on an Internet gateway or firewall. Other ways to bypass a content control filter include translation sites and establishing a remote connection with an uncensored device. Products and services. Some ISPs offer parental control options. Some offer security software which includes parental controls. Mac OS X v10.4 offers parental controls for several applications (Mail, Finder, iChat, Safari & Dictionary). Microsoft's Windows Vista operating system also includes content-control software. Content filtering technology exists in two major forms: application gateway or packet inspection. For HTTP access, the application gateway is called a web-proxy or just a proxy. Such web-proxies can inspect both the initial request and the returned web page using arbitrarily complex rules and will not return any part of the page to the requester until a decision is made. In addition they can make substitutions in whole or for any part of the returned result. Packet inspection filters do not initially interfere with the connection to the server but inspect the data in the connection as it goes past; at some point the filter may decide that the connection is to be filtered, and it will then disconnect it by injecting a TCP reset or similar forged packet. The two techniques can be used together, with the packet filter monitoring a link until it sees an HTTP connection starting to an IP address that has content that needs filtering. The packet filter then redirects the connection to the web-proxy, which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system; a minimal sketch of this combined approach is given below. There are constraints to IP-level packet filtering, as it may render all web content associated with a particular IP address inaccessible. This may result in the unintentional blocking of legitimate sites that share the same IP address or domain. For instance, university websites commonly employ multiple domains under one IP address. Moreover, IP-level packet filtering can be circumvented by using a distinct IP address for certain content while still being linked to the same domain or server. Gateway-based content control software may be more difficult to bypass than desktop software, as the user does not have physical access to the filtering device. However, many of the techniques in the Bypassing filters section still work.
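The combined gateway arrangement described above can be reduced to two decision functions: a cheap packet-level check that passes most traffic untouched and diverts only suspect destinations, and a more expensive proxy-level check on the diverted requests. The Python sketch below is purely illustrative; the IP addresses, the keyword rules and the string return values are hypothetical stand-ins rather than any real product's behaviour.

```python
# Minimal sketch of a hybrid gateway filter: a packet-level check diverts only
# connections to suspect IP addresses through a web proxy, where a URL-level
# rule decides whether to serve or block the page.
SUSPECT_IPS = {"198.51.100.23"}                  # hypothetical "needs inspection" list
BLOCKED_URL_KEYWORDS = ("casino", "malware")     # hypothetical proxy rules

def packet_filter(dest_ip: str) -> str:
    """Packet-level decision: pass traffic straight through or divert it to the proxy."""
    return "divert-to-proxy" if dest_ip in SUSPECT_IPS else "pass"

def web_proxy(url: str) -> str:
    """Application-level decision: block the request or fetch the page."""
    return "block" if any(word in url.lower() for word in BLOCKED_URL_KEYWORDS) else "fetch"

def gateway(dest_ip: str, url: str) -> str:
    if packet_filter(dest_ip) == "pass":
        return "fetch"                           # most connections never touch the proxy
    return web_proxy(url)                        # only diverted connections pay the proxy cost

print(gateway("203.0.113.7", "http://example.org/news"))            # -> fetch
print(gateway("198.51.100.23", "http://example.net/casino/spin"))   # -> block
```

Keeping the expensive URL inspection off the common path is the cost saving the text refers to.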
7145
45070860
https://en.wikipedia.org/wiki?curid=7145
Chambered cairn
A chambered cairn is a burial monument, usually constructed during the Neolithic, consisting of a sizeable (usually stone) chamber around and over which a cairn of stones was constructed. Some chambered cairns are also passage-graves. They are found throughout Britain and Ireland, with the largest number in Scotland. Typically, the chamber is larger than a cist, and will contain a larger number of interments, which are either excarnated bones or inhumations. Most were situated near a settlement, and served as that community's "graveyard". Scotland. Background. During the early Neolithic (4000–3300 BC), architectural forms are highly regionalised, with timber and earth monuments predominating in the east and stone-chambered cairns in the west. During the later Neolithic (3300–2500 BC), massive circular enclosures and the use of grooved ware and Unstan ware pottery emerge. Scotland has a particularly large number of chambered cairns; they are found in various different types described below. Along with the excavations of settlements such as Skara Brae, Links of Noltland, Barnhouse, Rinyo and Balfarg and the complex site at Ness of Brodgar, these cairns provide important clues to the character of civilization in Scotland in the Neolithic. However, the increasing use of cropmarks to identify Neolithic sites in lowland areas has tended to diminish the relative prominence of these cairns. In the early phases, bones of numerous bodies are often found together, and it has been argued that this suggests that in death, at least, the status of individuals was played down. During the late Neolithic, henge sites were constructed and single burials became more commonplace; by the Bronze Age, it is possible that even where chambered cairns were still being built they had become the burial places of prominent individuals rather than of communities as a whole. Clyde-Carlingford court cairns. The Clyde or Clyde-Carlingford type are principally found in northern and western Ireland and southwestern Scotland. They were first identified as a separate group in the Firth of Clyde region, hence the name. Over 100 have been identified in Scotland alone. Lacking a significant passage, they are a form of gallery grave. The burial chamber is normally located at one end of a rectangular or trapezoidal cairn, while a roofless, semi-circular forecourt at the entrance provided access from the outside (although the entrance itself was often blocked), and gives this type of chambered cairn its alternate name of court tomb or court cairn. These forecourts are typically fronted by large stones, and it is thought the area in front of the cairn was used for public rituals of some kind. The chambers were created from large stones set on end, roofed with large flat stones and often sub-divided by slabs into small compartments. They are generally considered to be the earliest in Scotland. Examples include Cairn Holy I and Cairn Holy II near Newton Stewart, a cairn at Port Charlotte, Islay, which dates to 3900–4000 BC, and Monamore, or Meallach's Grave, Arran, which may date from the early fifth millennium BC. Excavations at the Mid Gleniron cairns near Cairnholy revealed a multi-period construction which shed light on the development of this class of chambered cairn. Orkney-Cromarty. The Orkney-Cromarty group is by far the largest and most diverse. It has been subdivided into Yarrows, Camster and Cromarty subtypes, but the differences are extremely subtle. 
The design is of dividing slabs at either side of a rectangular chamber, separating it into compartments or stalls. The number of these compartments ranges from 4 in the earliest examples to over 24 in an extreme example on Orkney. The actual shape of the cairn varies from simple circular designs to elaborate 'forecourts' protruding from each end, creating what look like small amphitheatres. It is likely that these are the result of cultural influences from mainland Europe, as they are similar to designs found in France and Spain. Examples include Midhowe on Rousay, and the Unstan Chambered Cairn and Wideford Hill chambered cairn from the Orkney Mainland, both of which date from the mid-4th millennium BC and were probably in use over long periods of time. When Unstan was excavated in 1884, grave goods were found that gave their name to Unstan ware pottery. Blackhammer cairn on Rousay is another example, dating from the 3rd millennium BC. The Grey Cairns of Camster in Caithness are examples of this type from mainland Scotland. The Tomb of the Eagles on South Ronaldsay is a stalled cairn that shows some similarities with the later Maeshowe type. It was in use for 800 years or more, and numerous bird bones were found here, predominantly of the white-tailed sea eagle. Maeshowe. The Maeshowe group, named after the famous Orkney monument, is among the most elaborate. They appear relatively late, and only in Orkney, and it is not clear why the use of cairns continued in the north when their construction had largely ceased elsewhere in Scotland. They consist of a central chamber from which small compartments lead off, into which burials would be placed. The central chambers are tall and steep-sided and have corbelled roofing faced with high quality stone. In addition to Maeshowe itself, which was constructed c. 2700 BC, there are various other examples from the Orkney Mainland. These include the Quanterness chambered cairn (3250 BC), in which the remains of 157 individuals were found when it was excavated in the 1970s; Cuween Hill near Finstown, which was found to contain the bones of men, dogs and oxen; and Wideford Hill chambered cairn, which dates from 2000 BC. Examples from elsewhere in Orkney are the Vinquoy and Huntersquoy chambered cairns, both found on the north end of the island of Eday, and Quoyness on Sanday, constructed about 2900 BC and surrounded by an arc of Bronze Age mounds. The central chamber of Holm of Papa Westray South cairn is over 20 metres long. Bookan. The Bookan type is named after a cairn found to the north-west of the Ring of Brodgar in Orkney, which is now a dilapidated oval mound, about 16 metres in diameter. Excavations in 1861 indicated a rectangular central chamber surrounded by five smaller chambers. Because of the structure's unusual design, it was originally presumed to be an early form. However, later interpretations and further excavation work in 2002 suggested that it has more in common with the later Maeshowe type than with the stalled Orkney-Cromarty cairns. Huntersquoy chambered cairn on Eday is a double-storeyed Orkney–Cromarty type cairn with a Bookan-type lower chamber. Shetland. The Shetland or Zetland group are relatively small passage graves that are round or heel-shaped in outline. The whole chamber is cross- or trefoil-shaped and there are no smaller individual compartments. 
An example is to be found on the uninhabited island of Vementry on the north side of the West Mainland, where it appears that the cairn may have originally been circular and its distinctive heel shape added as a secondary development, a process repeated elsewhere in Shetland. This probably served to make the cairn more distinctive and the forecourt area more defined. Hebridean. Like the Shetland cairns, the Hebridean group appears relatively late in the Neolithic. They are largely found in the Outer Hebrides, although a mixture of cairn types are found here. These passage graves are usually larger than the Shetland type and are round or have funnel-shaped forecourts, although a few are long cairns – perhaps originally circular but with later tails added. They often have a polygonal chamber and a short passage to one end of the cairn. The Rubha an Dùnain peninsula on the island of Skye provides an example from the 2nd or 3rd millennium BC. Barpa Langass on North Uist is the best preserved chambered cairn in the Hebrides. Bargrennan. Bargrennan chambered cairns are a class of passage graves found only in south-west Scotland, in western Dumfries and Galloway and southern Ayrshire. As well as being structurally different from the nearby Clyde cairns, Bargrennan cairns are distinguished by their siting and distribution; they are found in upland, inland areas of Galloway and Ayrshire. Bronze Age. In addition to the increasing prominence of individual burials, during the Bronze Age regional differences in architecture in Scotland became more pronounced. The Clava cairns date from this period, with about 50 cairns of this type in the Inverness area. Corrimony chambered cairn near Drumnadrochit is an example dated to 2000 BC or older. The only surviving evidence of burial was a stain indicating the presence of a single body. The cairn is surrounded by a circle of 11 standing stones. The cairns at Balnuaran of Clava are of a similar date. The largest of the three is the north-east cairn, which was partially reconstructed in the 19th century, and the central cairn may have been used as a funeral pyre. Glebe cairn in Kilmartin Glen in Argyll dates from 1700 BC and has two stone cists inside, in one of which a jet necklace was found during 19th-century excavations. There are numerous prehistoric sites in the vicinity, including Nether Largie North cairn, which was entirely removed and rebuilt during excavations in 1930. Wales. Chambered long cairns. There are 18 Scheduled Ancient Monuments listed:
7147
69423
https://en.wikipedia.org/wiki?curid=7147
Canadian whisky
Canadian whisky is a type of whisky produced in Canada. Most Canadian whiskies are blended whiskies consisting of a mix of different whisky styles, and are typically lighter and smoother than other whisky styles. While most Canadian whiskies are based predominantly on corn-based whiskies, Canadian distillers have long included rye-based whisky for flavour; in Canadian law the terms "Canadian whisky" and "rye whisky" are interchangeable and refer to the same product, even when it is made with only a small amount of rye spirits. Characteristics. Historically, in Canada, corn-based whisky that had some rye grain added to the mash bill to give it more flavour came to be called "rye". The regulations under Canada's "Food and Drugs Act" stipulate the minimum conditions that must be met in order to label a product as "Canadian Whisky" or "Canadian Rye Whisky" (or "Rye Whisky"); these are also upheld internationally through geographical indication agreements. These regulations state that whisky must "be mashed, distilled and aged in Canada", "be aged in small wood vessels for not less than three years", "contain not less than 40 per cent alcohol by volume" and "may contain caramel and flavouring". Within these parameters Canadian whiskies can vary considerably, especially with the allowance of "flavouring", though the additional requirement that they "possess the aroma, taste and character generally attributed to Canadian whisky" can act as a limiting factor. Canadian whiskies are most typically blends of whiskies made from a single grain, principally corn and rye, but also sometimes wheat or barley. Mash bills of multiple grains may also be used for some flavouring whiskies. The availability of inexpensive American corn, with its higher proportion of usable starches relative to other cereal grains, has led it to be most typically used to create base whiskies to which flavouring whiskies are blended in. Exceptions to this include the Highwood Distillery, which specializes in using wheat, and Alberta Distillers, which developed its own proprietary yeast strain that specializes in fermenting rye. The flavouring whiskies are most typically rye whiskies, blended into the product to add most of its flavour and aroma. While Canadian whisky may be labelled as a "rye whisky", this blending technique only necessitates a small percentage (such as 10%) of rye to create the flavour, whereas much more rye would be required if it were added to a mash bill alongside the more readily distilled corn. The base whiskies are distilled to between 180 and 190 proof, which results in few congener by-products (such as fusel alcohols, aldehydes and esters) and creates a lighter taste. By comparison, an American whisky distilled any higher than 160 proof is labelled as "light whiskey". The flavouring whiskies are distilled to a lower proof so that they retain more of the grain's flavour. The relative lightness created by the use of base whiskies makes Canadian whisky useful for mixing into cocktails and highballs. The minimum three-year aging in small wood barrels applies to all whiskies used in the blend. As the regulations do not limit the specific type of wood that must be used, a variety of flavours can be achieved by blending whiskies aged in different types of barrels. In addition to new wood barrels, charred or uncharred, flavour can be added by aging whiskies in previously used bourbon or fortified wine barrels for different lengths of time. History. 
In the 18th and early 19th centuries, gristmills distilled surplus grains to avoid spoilage. Most of these early whiskies would have been rough, mostly unaged wheat whisky. Distilling methods and technologies were brought to Canada by American and European immigrants with experience in distilling wheat and rye. This early whisky from improvised stills, often made with the grains closest to spoilage, was produced at various, uncontrolled proofs and was consumed, unaged, by the local market. While most distilling capacity was taken up producing rum, a result of Atlantic Canada's position in the British sugar trade, the first commercial-scale production of whisky in Canada began in 1801 when John Molson purchased a copper pot still, previously used to produce rum, in Montreal. With his son Thomas Molson, and eventually partner James Morton, the Molsons operated a distillery in Montreal and Kingston and were the first in Canada to export whisky, benefiting from the Napoleonic Wars' disruption of supplies of French wine and brandy to England. Gooderham and Worts began producing whisky in 1837 in Toronto as a side business to their wheat milling, but surpassed Molson's production by the 1850s as the firm expanded its operations with a new distillery in what would become the Distillery District. Henry Corby started distilling whisky as a side business from his gristmill in 1859 in what became known as Corbyville, and Joseph Seagram began working in his father-in-law's Waterloo flour mill and distillery in 1864, which he would eventually purchase in 1883. Meanwhile, Americans Hiram Walker and J.P. Wiser moved to Canada: Walker to Windsor in 1858 to open a flour mill and distillery, and Wiser to Prescott in 1857 to work at his uncle's distillery, where he introduced a rye whisky and was successful enough to buy the distillery five years later. The disruption of the American Civil War created an export opportunity for Canadian-made whiskies, and their quality, particularly that of the whiskies from Walker and Wiser, who had already begun the practice of aging their whiskies, sustained that market even after post-war tariffs were introduced. In the 1880s, Canada's National Policy placed high tariffs on foreign alcoholic products as whisky began to be sold in bottles, and the federal government instituted a bottled-in-bond program that provided certification of the time a whisky spent aging and allowed deferral of taxes for that period, which encouraged aging. In 1890 Canada became the first country to enact an aging law for whiskies, requiring them to be aged at least two years. The growing temperance movement culminated in prohibition in 1916, and distilleries had to either specialize in the export market or switch to alternative products, like industrial alcohols, which were in demand in support of the war effort. With the deferred revenue and storage costs of the Aging Law acting as a barrier to new entrants and the market reduced by prohibition, consolidation of the Canadian whisky industry had begun. Henry Corby Jr. modernized and expanded upon his father's distillery and sold it, in 1905, to businessman Mortimer Davis, who also purchased the Wiser distillery, in 1918, from the heirs of J.P. Wiser. Davis's salesman Harry Hatch spent time promoting the Corby and Wiser brands and developing a distribution network in the United States, which held together as Canadian prohibition ended and American prohibition began. 
After falling out with Davis, Hatch purchased the struggling Gooderham and Worts in 1923 and switched out Davis's whisky for his own. Hatch was successful enough to be able to also purchase the Walker distillery, and the popular Canadian Club brand, from Hiram Walker's grandsons in 1926. While American prohibition created risk and instability in the Canadian whisky industry, some benefited from purchasing unused American distillation equipment and from sales to exporters (nominally to foreign countries like Saint Pierre and Miquelon, though actually to bootleggers supplying the United States). Along with Hatch, the Bronfman family was able to profit from making whisky destined for the US during prohibition, and was able to open a distillery in LaSalle, Quebec, and merge the family company, in 1928, with Seagram's, which had struggled with transitioning to the prohibition marketplace. Samuel Bronfman became president of the company and, with his dominant personality, began a strategy of increasing their capacity and aging whiskies in anticipation of the end of prohibition. When that did occur, in 1933, Seagram's was in a position to quickly expand; they purchased The British Columbia Distilling Company from the Riefel family in 1935, as well as several American distilleries, and introduced new brands, one of them being Crown Royal, in 1939, which would eventually become one of the best-selling Canadian whiskies. While some capacity was switched to producing industrial alcohols in support of the country's World War II efforts, the industry expanded again after the war until the 1980s. In 1945, Schenley Industries purchased one of those industrial alcohol distilleries in Valleyfield, Quebec, and repurposed several defunct American whiskey brands, like Golden Wedding, Old Fine Copper, and, starting in 1972, Gibson's Finest. Seeking to secure their supply of Canadian whisky, Barton Brands also built a new distillery in Collingwood, Ontario, in 1967, where they would produce Canadian Mist, though they sold the distillery and brand only four years later to Brown–Forman. As proximity to the shipping routes (by rail and boat) to the US became less important, large distilleries were established in Alberta and Manitoba. Five years after starting to experiment with whiskies in their Toronto gin distillery, W. & A. Gilbey Ltd. created the Black Velvet blend in 1951, which was so successful that a new distillery in Lethbridge, Alberta, was constructed in 1973 to produce it. Also in the west, a Calgary-based business group recruited the Riefels from British Columbia to oversee their Alberta Distillers operations in 1948. The company became an innovator in the practice of bulk shipping whiskies to the United States for bottling, and the success of its Windsor Canadian brand (produced in Alberta but bottled in the United States) led National Distillers Limited to purchase Alberta Distillers, in 1964, to secure its supply chain. More Alberta investors founded the Highwood Distillery in 1974 in High River, Alberta, which specialized in wheat-based whiskies. Seagram's opened a large, new plant in Gimli, Manitoba, in 1969, which would eventually replace their Waterloo and LaSalle distilleries. In British Columbia, Ernie Potter, who had been producing fruit liqueurs from alcohols distilled at Alberta Distillers, built his own whisky distillery in Langley in 1958 and produced the Potter's and Century brands of whisky. 
Hiram Walker built the Okanagan Distillery in Winfield, British Columbia, in 1970 with the intention of producing Canadian Club, but the plant was redirected to fulfill contracts to produce whiskies for Suntory before being closed in 1995. After decades of expansion, a shift in consumer preferences towards white spirits (such as vodka) in the American market resulted in an excess supply of Canadian whiskies. While this allowed the whiskies to be aged longer, the unexpected storage costs and deferred revenue strained individual companies. With the distillers seeking investors and multinational corporations seeking value brands, a series of acquisitions and mergers occurred. Alberta Distillers was bought in 1987 by Fortune Brands, which would go on to become part of Suntory Global Spirits. Hiram Walker was sold in 1987 to Allied Lyons, which Pernod Ricard took over in 2006, with Fortune Brands acquiring the Canadian Club brand. Grand Metropolitan had purchased Black Velvet in 1972 but sold the brand in 1999 to Constellation Brands, which in turn sold it to Heaven Hill in 2019. Schenley was acquired in 1990 by United Distillers, which would go on to become part of Diageo, though Gibson's Finest was sold to William Grant & Sons in 2001. Seagram's was sold in 2000 to Vivendi, which in turn sold its various brands and distilleries to Pernod Ricard and Diageo. Highwood would purchase Potter's in 2006. Despite the consolidation, the Kittling Ridge Distillery in Grimsby, Ontario, began to produce the Forty Creek brand, though it was sold to the Campari Group in 2014. Later, the Sazerac Company would purchase the brands Seagram's VO, Canadian 83 and Five Star from Diageo in 2018. Illicit export to the United States. Canadian whisky featured prominently in rum-running into the U.S. during Prohibition. Hiram Walker's distillery in Windsor, Ontario, directly across the Detroit River and the international boundary between Canada and the United States, easily served bootleggers using small, fast smuggling boats. Distilleries and brands. The following is a listing of distilleries presently producing Canadian whiskies: Alberta. There are several distilleries based in Alberta. Alberta Distillers was established in 1946 in Calgary, Alberta. The distillery was purchased in 1987 by Fortune Brands, which became Beam Suntory in 2011 and Suntory Global Spirits in 2024. The distillery uses a specific strain of yeast, which it developed, that specializes in fermenting rye. While the distillery exports much of its whisky for bottling in other countries, it also produces the brands Alberta Premium, Alberta Springs, Windsor Canadian, Tangle Ridge, and Canadian Club Chairman's Select. Black Velvet Distillery (formerly the Palliser Distillery) was established in 1973 in Lethbridge, Alberta, and has been owned by Heaven Hill since 2019. It produces the Black Velvet brand, mostly shipped in bulk for bottling in America, with some bottled onsite for Canadians. It also makes Danfield's and the Schenley's Golden Wedding and OFC labels. Highwood Distillery (formerly the Sunnyvale Distillery) was established in 1974 in High River, Alberta. It specializes in using wheat in its base whiskies. This distillery also produces vodka, rum, gin and liqueurs. Brands of Canadian whisky produced at the Highwood Distillery include Centennial, Century, Ninety, and Potter's. They also produce White Owl whisky, which is charcoal-filtered to remove the colouring introduced by aging in wood barrels. Manitoba. 
Gimli Distillery was established in 1968 in Gimli, Manitoba, to produce Seagram brands, and was acquired by Diageo in 2001. The Gimli Distillery is responsible for producing Crown Royal, the best-selling Canadian whisky in the world, with 7 million cases shipped in 2017. It also supplies some of the whisky used in Seagram's VO and other blends. Ontario. Distilleries were established in Ontario during the mid-19th century, with Gooderham and Worts beginning operations in Toronto's Distillery District in the 1830s. Distilleries continued to operate from the Distillery District until 1990, when the area was reoriented towards commercial and residential development. Other former distilleries in the province include one in Corbyville, which hosted a distillery operated by Corby Spirit and Wine. A distillery in Waterloo was operated by Seagram to produce Crown Royal until 1992, although the company still maintains a blending and bottling plant in Amherstburg. Presently, there are several major distilleries based in Ontario. The oldest functioning distillery in Ontario is the Hiram Walker Distillery, established in 1858 in Windsor, Ontario, but modernized and expanded upon several times since. The distillery is owned by Pernod Ricard and operated by Corby Spirit and Wine, of which Pernod has a controlling share. Brands produced at the Walker Distillery include Lot 40, Pike Creek, Gooderham and Worts, Hiram Walker's Special Old, Corby's Royal Reserve, and J.P. Wiser's brands. Most of its capacity is used for contract production of the Suntory Global Spirits brand (and former Hiram Walker brand) Canadian Club, in addition to generic Canadian whisky that is exported in bulk and bottled under various labels in other countries. Canadian Mist Distillery was established in 1967 in Collingwood, Ontario; the distillery is owned by the Sazerac Company and primarily produces the Canadian Mist brand for export. The distillery also produces whiskies used in the Collingwood brand, introduced in 2011, and the Bearface brand, introduced in 2018. Kittling Ridge Distillery was established in 1992 with an associated winery in Grimsby, Ontario; its first whiskies came to market in 2002. The distillery was purchased in 2014 by the Campari Group. The distillery produces the Forty Creek brand. Quebec. Old Montreal Distillery was established in 1929 as a Corby Spirit and Wine distillery; it was acquired by the Sazerac Company in 2011 and modernized in 2018. It produces Sazerac brands and has taken over bottling of Caribou Crossing. Valleyfield Distillery (formerly the Schenley Distillery) was established in 1945 in a former brewery in Salaberry-de-Valleyfield, Quebec, near Montreal; the distillery has been owned by Diageo since 2008. Seagram's VO is bottled here with flavouring whisky from the Gimli Distillery. Otherwise, the Valleyfield Distillery specializes in producing base whiskies distilled from corn for other Diageo products.
7148
38455
https://en.wikipedia.org/wiki?curid=7148
Collective noun
In linguistics, a collective noun is a word referring to a collection of things taken as a whole. Most collective nouns in everyday speech are not specific to one kind of thing. For example, the collective noun "group" can be applied to people ("a group of people"), or dogs ("a group of dogs"), or objects ("a group of stones"). Some collective nouns are specific to one kind of thing, especially terms of venery, which identify groups of specific animals. For example, "pride" as a term of venery always refers to lions, never to dogs or cows. Other examples come from popular culture, such as a group of owls, which is called a "parliament". Different forms of English handle verb agreement with collective count nouns differently. For example, users of British English generally accept that collective nouns take either singular or plural verb forms depending on context and the metonymic shift that it implies, while in some other forms of English the verb agreement is less flexible. Derivation. Morphological derivation accounts for many collective words, and various languages have common affixes for denoting collective nouns. Because derivation is a slower and less productive word-formation process than the more overtly syntactical morphological methods, there are fewer collectives formed this way. As with all derived words, derivational collectives often differ semantically from the original words, acquiring new connotations and even new denotations. Affixes. Proto-Indo-European. Early Proto-Indo-European used the suffix *eh₂ to form collective nouns, which evolved into, among others, the Latin neuter plural ending -a, as in "datum/data". Late Proto-Indo-European used the ending *t, which evolved into the English ending -th, as in "young/youth". English. The English endings "-age" and "-ade" often signify a collective. Sometimes, the relationship is easily recognizable: "baggage, drainage, blockade". Though the etymology is plain to see, the derived words take on a distinct meaning. This is a productive ending, as evidenced by the recent coinage "signage". German. German uses the prefix "ge-" to create collectives. The root word often undergoes umlaut and suffixation as well as receiving the "ge-" prefix. Nearly all nouns created in that way are of neuter gender: There are also several endings that can be used to create collectives, such as "welt" and "masse". Dutch. Dutch has a similar pattern but sometimes uses the (unproductive) circumfix "ge- -te": Swedish. The following Swedish example has different words in the collective form and in the individual form: Esperanto. Esperanto uses the collective infix -"ar"- to produce a large number of derived words: Metonymic merging of grammatical number. Two examples of collective nouns are "team" and "government", which are both words referring to groups of (usually) people. Both "team" and "government" are "countable" nouns (consider: "one team", "two teams", "most teams"; "one government", "two governments", "many governments"). Agreement in different forms of English. Confusion often stems from the way that different forms of English handle agreement with collective nouns, specifically whether or not to use the collective singular: the singular verb form with a collective noun. The plural verb forms are often used in British English with the singular forms of these countable nouns (e.g., "The team "have" finished the project."). 
Conversely, in the English language as a whole, singular verb forms can often be used with nouns ending in "-s" that were once considered plural (e.g., "Physics "is" my favorite academic subject"). This apparent "number mismatch" is a natural and logical feature of human language, and its mechanism is a subtle metonymic shift in the concepts underlying the words. In British English, it is generally accepted that collective nouns can take either singular or plural verb forms depending on the context and the metonymic shift that it implies. For example, "the team "is" in the dressing room" ("formal agreement") refers to "the team" as an ensemble, while "the team "are" fighting among themselves" ("notional agreement") refers to "the team" as individuals. That is also the British English practice with names of countries and cities in sports contexts (e.g., "Newcastle "have" won the competition."). In American English, collective nouns almost always take singular verb forms (formal agreement). In cases where a metonymic shift would be revealed nearby, the whole sentence should be recast to avoid the metonymy. (For example, "The team are fighting among themselves" may become "the team "members" are fighting among themselves" or simply "the team is infighting".) Collective proper nouns are usually taken as singular ("Apple is expected to release a new phone this year"), unless the plural is explicit in the proper noun itself, in which case it is taken as plural ("The Green Bay Packers are scheduled to play the Minnesota Vikings this weekend"). Further examples of collective proper nouns taking singular verbs include "General Motors is once again the world's largest producer of vehicles", "Texas Instruments is a large producer of electronics here", "British Airways is an airline company in Europe", and "American Telephone & Telegraph is a telecommunications company in North America". Such names might look plural, but they are not. Examples of metonymic shift. A good example of such a metonymic shift in the singular-to-plural direction (which exclusively takes place in British English) is the following sentence: "The team have finished the project." In that sentence, the underlying thought is of the individual members of the team working together to finish the project. Their accomplishment is collective, and the emphasis is not on their individual identities, but they are still discrete individuals; the word choice "team have" manages to convey both their collective and discrete identities simultaneously. Collective nouns that have a singular form but take a plural verb form are called collective plurals. An example of such a metonymic shift in the plural-to-singular direction is the following sentence: "Mathematics is my favorite academic subject". The word "mathematics" may have originally been plural in concept, referring to mathematical endeavors, but metonymic shift (the shift in concept from "the endeavors" to "the whole set of endeavors") produced the usage of "mathematics" as a singular entity taking singular verb forms. (A true mass-noun sense of "mathematics" followed naturally.) Nominally singular pronouns can be collective nouns taking plural verb forms, according to the same rules that apply to other collective nouns. For example, it is correct usage in both British English and American English to say: "None are so fallible as those who are sure they're right." In that case, the plural verb is used because the context for "none" suggests more than one thing or person.
This also applies to the use of an adjective as a collective noun: "The British are coming!"; "The poor will always be with you." Other examples include: This does not, however, affect the tense later in the sentence: Abbreviations provide other "exceptions" in American usage concerning plurals: When only the name is plural but not the object, place, or person: Terms of venery. The tradition of using "terms of venery" or "nouns of assembly", collective nouns that are specific to certain kinds of animals, stems from an English hunting tradition of the Late Middle Ages. The fashion of a consciously developed hunting language came to England from France. It was marked by an extensive proliferation of specialist vocabulary, applying different names to the same feature in different animals. The elements can be shown to have already been part of French and English hunting terminology by the beginning of the 14th century. In the course of the 14th century, it became a courtly fashion to extend the vocabulary, and by the 15th century, the tendency had reached exaggerated and even satirical proportions. Other synonyms for "terms of venery" include "company nouns", "gatherations", and "agminals". "The Treatise", written by Walter of Bibbesworth in the mid-1200s, is the earliest source for collective nouns of animals in any European vernacular (and also the earliest source for animal noises). The "Venerie" of Twiti (early 14th century) distinguished three types of droppings of animals, and three different terms for herds of animals. Gaston Phoebus (14th century) had five terms for droppings of animals, which were extended to seven in the "Master of the Game" (early 15th century). The focus on collective terms for groups of animals emerged in the later 15th century. Thus, a list of collective nouns in Egerton MS 1995, dated to under the heading of "termis of venery &c.", extends to 70 items, and the list in the "Book of Saint Albans" (1486) runs to 164 items, many of which, even though introduced by "the compaynys of beestys and fowlys", relate not to venery, but to human groups and professions and are humorous, such as "a Doctryne of doctoris", "a Sentence of Juges", "a Fightyng of beggers", "an uncredibilite of Cocoldis", "a Melody of harpers", "a Gagle of women", "a Disworship of Scottis", etc. The "Book of Saint Albans" became very popular during the 16th century and was reprinted frequently. Gervase Markham edited and commented on the list in his "The Gentleman's Academie", in 1595. The book's popularity had the effect of perpetuating many of these terms as part of the Standard English lexicon even if they were originally meant to be humorous and have long ceased to have any practical application. Even in their original context of medieval venery, the terms were of the nature of kennings, intended as a mark of erudition of the gentlemen able to use them correctly rather than for practical communication. The popularity of the terms in the modern period has resulted in the addition of numerous lighthearted, humorous, or facetious collective nouns.
7158
27823944
https://en.wikipedia.org/wiki?curid=7158
Carat (mass)
The carat (ct) is a unit of mass equal to , which is used for measuring gemstones and pearls. The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 "points" of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations. In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g). The ANSI X.12 EDI standard abbreviation for the carat is CD. Etymology. First attested in English in the mid-15th century, the word "carat" comes from Italian "carato", which comes from Arabic ("qīrāṭ"; قيراط), in turn borrowed from Greek "kerátion" κεράτιον 'carob seed', a diminutive of "keras" 'horn'. It was a unit of weight, equal to 1/1728 (1/12³) of a pound (see Mina (unit)). History. Carob seeds have been used throughout history to measure jewelry, because it was believed that there was little variance in their mass distribution. However, this belief was inaccurate, as their mass varies about as much as that of seeds of other species. In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds. Standardization. An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams (exactly one-fifth of a gram) had often been suggested in various countries; it was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897. Historical definitions. UK Board of Trade. In the United Kingdom, the original Board of Trade carat was exactly grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly grains (~3.168 grains = ~205 mg). Despite it being a non-metric unit, a number of metric countries have used this unit for its limited range of application. The Board of Trade carat was divisible into four "diamond grains", but measurements were typically made in multiples of carat. Refiners' carats. There were also two varieties of "refiners' carats" once used in the United Kingdom: the pound carat and the ounce carat. The pound troy was divisible into 24 "pound carats" of 240 grains troy each; the pound carat was divisible into four "pound grains" of 60 grains troy each; and the pound grain was divisible into four "pound quarters" of 15 grains troy each. Likewise, the ounce troy was divisible into 24 "ounce carats" of 20 grains troy each; the ounce carat was divisible into four "ounce grains" of 5 grains troy each; and the ounce grain was divisible into four "ounce quarters" of grains troy each. Greco-Roman. The "solidus" was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called "solidus" was exactly 1 Roman pound, and that the weight of 1 "solidus" was 24 "siliquae". The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 "siliqua" was approximately 189 mg.
The Greeks had a similar unit of the same value. Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's "Measures, Weights and Moneys of All Nations" gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20 pennyweight each.
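The unit relationships described above reduce to simple arithmetic. The following short Python sketch is an illustration only (the constant and function names are hypothetical, not part of any standard); it encodes the metric carat and point values and reproduces the approximate siliqua mass implied by the Roman-pound estimate given above.

METRIC_CARAT_G = 0.2              # metric carat: exactly 200 mg = 0.2 g
POINT_G = METRIC_CARAT_G / 100    # 100 points per carat, so 2 mg each

ROMAN_POUND_G = 327.45            # commonly cited estimate for the Roman pound
SOLIDI_PER_POUND = 72             # 72 solidus coins weighed one Roman pound
SILIQUAE_PER_SOLIDUS = 24         # 1 solidus = 24 siliquae

def carats_to_grams(carats: float) -> float:
    """Convert metric carats to grams (1 ct = 0.2 g)."""
    return carats * METRIC_CARAT_G

def siliqua_mass_mg(roman_pound_g: float = ROMAN_POUND_G) -> float:
    """Mass of one siliqua in milligrams, derived from the Roman pound."""
    solidus_g = roman_pound_g / SOLIDI_PER_POUND
    return solidus_g / SILIQUAE_PER_SOLIDUS * 1000

print(POINT_G * 1000)             # 2.0 mg per point
print(carats_to_grams(100))       # 20.0 g, the 100-carat 'paragon' threshold
print(round(siliqua_mass_mg()))   # ~189 mg, matching the figure given above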
7160
60840
https://en.wikipedia.org/wiki?curid=7160
European Conference of Postal and Telecommunications Administrations
The European Conference of Postal and Telecommunications Administrations (CEPT) was established on 26 June 1959 by nineteen European states in Montreux, Switzerland, as a coordinating body for European state telecommunications and postal organizations. The acronym comes from the French version of its name, "Conférence européenne des administrations des postes et des télécommunications". CEPT was responsible for the creation of the European Telecommunications Standards Institute (ETSI) in 1988. Organization. CEPT is organised into three main components: The organisation is based on the concept of a postal, telegraph and telephone (PTT) service entity. Member countries. "As of March 2022: 46 countries." Albania, Andorra, Austria, Azerbaijan, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Moldova, Monaco, Montenegro, Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, San Marino, Serbia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, Vatican City. The Russian Federation and Belarus memberships were suspended indefinitely on 17 March 2022.
7162
7903804
https://en.wikipedia.org/wiki?curid=7162
Tramlink
Tramlink, previously Croydon Tramlink and currently branded as London Trams, is a light rail tram system serving Croydon and surrounding areas in South London, England. It is the first operational tram system serving the London region since 1952. Tramlink is presently managed by London Trams, a body within Transport for London (TfL), and has been operated by FirstGroup since 2017. It is one of two light rail networks in Greater London, the other being the Docklands Light Railway. Tramlink is the fourth-busiest light rail network in the UK, behind the Docklands Light Railway, Manchester Metrolink and Tyne and Wear Metro. Studies for the delivery of a modern-day tram system in Croydon began in the 1960s and detailed planning was performed in the 1980s. Approval of the scheme was received in 1990 and, following a competitive tender process, construction and initial operation of the tramway were undertaken by "Tramtrack Croydon" (TC) via a 99-year Private Finance Initiative (PFI) contract. The official opening of Tramlink took place on 10 May 2000; by the end of the year three routes were operational. The network consists of 39 stops along of track, on a mixture of street track shared with other traffic, dedicated track in public roads, and off-street track consisting of new rights-of-way, former railway lines, and one right-of-way where the Tramlink track runs parallel to a third rail-electrified Network Rail line. The network's lines coincide in central Croydon, with eastern termini at Beckenham Junction, Elmers End and New Addington, and a western terminus at Wimbledon, where there is an interchange for London Underground. Since its original opening, the tram network has been expanded and additional rolling stock has been purchased. During 2008, TfL took over Tramlink operations, ending the PFI and making the company a subsidiary of TfL. Additional rolling stock was introduced during the early 2010s. Furthermore, numerous extensions to the network have been discussed, the most recent of which is the Sutton Link, a proposed extension connecting Sutton to Colliers Wood. The Sutton Link was paused in 2020 until funding can be secured. In the 2020s, TfL began work to order new trams for the system. History. Inception. In the first half of the 20th century, Croydon had many tramlines, but these were all closed: the first to go was the Addiscombe – East Croydon station route through George Street to Cherry Orchard Road, in 1927, and the last were the Purley – Embankment and Croydon (Coombe Road) – Thornton Heath routes, closed in April 1951. In the spring of 1950, however, the Mayor presented the Highways Committee with the concept of running trams between East Croydon station and the new estate being constructed at New Addington. This was based on the fact that the Feltham cars used in Croydon were going to Leeds to serve their new estates on reserved tracks. During 1962, a private study carried out with assistance from BR engineers showed how easily the West Croydon – Wimbledon train service could be converted to tram operation while avoiding conflict between trams and trains. These two concepts were combined in a joint LRTL/TLRS proposal: New Addington to Wimbledon every 15 minutes via East and West Croydon and Mitcham, plus New Addington to Tattenham Corner every 15 minutes via East and West Croydon, Sutton and Epsom Downs. A branch into Forestdale to give an overlap service from Sutton was also included.
During the 1970s, several BR directors and up-and-coming managers were aware of the advantages. Chris Green, upon becoming managing director of Network South East, published plans in 1987 that expanded the concept to take in the Tattenham Corner and Caterham branches and to provide a service from Croydon to Lewisham via Addiscombe and Hayes. Following on from the opening of the DLR, a small group working under Tony Ridley, then managing director of London Transport, investigated the potential for further light rail in London. The report 'Light Rail for London', written by engineer David Catling and transport planner Jon Willis, looked at a number of possible schemes, including conversion of the East London Line. However, a light rail network focussed on Croydon, involving the conversion of existing heavy rail routes, was the most promising. The London Borough of Croydon wanted to improve access to the town centre without further road building and also to improve access to the LCC-built New Addington estate. Furthermore, road traffic in Croydon expanded considerably during the 1980s, and planners were keen to meet the recorded growth in demand in the area with public transport. The project was developed by a small team in LT headed by Scott McIntosh, and in Croydon by Jill Lucas. The scheme was accepted in principle in February 1990 by Croydon Council, which worked with what was then London Regional Transport (LRT) to propose Tramlink to Parliament. The Croydon Tramlink Act 1994 resulted, which gave LRT the power to build and run Tramlink. Construction. Both the delivery and the operation of the tramway were awarded via a competitive tender process. During November 1995, it was announced that four consortia were shortlisted to build, operate and maintain Tramlink: In May 1996, Tramtrack Croydon (TC) was awarded a 99-year Private Finance Initiative (PFI) contract to design, build, operate and maintain Tramlink. The equity partners in TC were Amey (50%), Royal Bank of Scotland (20%), 3i (20%) and Sir Robert McAlpine, with Bombardier Transportation contracted to build and maintain the trams and FirstGroup to operate the service. TC retained the revenue generated by Tramlink, and LRT had to pay compensation to TC for any changes to the fares and ticketing policy introduced later. The concession agreement with TC was signed in November 1996, allowing construction to begin. Construction work started in January 1997, with an expected opening in November 1999. The first tram was delivered in October 1998 to the new depot at Therapia Lane, and testing on sections of the Wimbledon line began shortly afterwards. Part of its track is the original route of the Surrey Iron Railway, which opened in 1803. Opening. The official opening of Tramlink took place on 10 May 2000, when route 3 from Croydon to New Addington opened to the public. Route 2 from Croydon to Beckenham Junction followed on 23 May 2000, and route 1 from Elmers End to Wimbledon opened a week later on 30 May 2000. It was the first modern tram project in London, with low-floor trams and low platforms allowing accessibility for all. The new trams were numbered from 2530, following on from the last of the old trams withdrawn in the early 1950s. Buyout by Transport for London. In March 2008, TfL announced that it had reached agreement to buy TC for £98 million. The purchase was finalised on 28 June 2008.
The background to this purchase relates to the requirement that TfL (which took over from London Regional Transport in 2000) compensate TC for the consequences of any changes to the fares and ticketing policy introduced since 1996. In 2007, that payment was £4 million, with the rate increasing annually. Despite this change, FirstGroup continues to operate the service. During October 2008, TfL introduced a new livery, using the blue, white and green of the routes on TfL maps, to distinguish the trams from buses operating in the area. The colour of the cars was changed to green, and the brand name was changed from Croydon Tramlink to simply Tramlink. The rebranding work was completed in early 2009. Additional stop and trams. Centrale tram stop, in Tamworth Road on the one-way central loop, opened on 10 December 2005, increasing journey times slightly. As turnround times were already quite tight, this raised the issue of buying an extra tram to maintain punctuality. Partly for this reason, but also to take into account the planned restructuring of services (subsequently introduced in July 2006), TfL issued tenders for a new tram. However, nothing resulted from this. In January 2011, TfL opened a tender for the supply of ten new or second-hand trams from the end of summer 2011, for use between Therapia Lane and Elmers End. On 18 August 2011, TfL announced that Stadler Rail had won a £16.3 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahn trams for delivery in 2015, for use on the Wimbledon to Croydon link, an order later increased to six. This brought the total Variobahn fleet up to ten in 2015, and to twelve in 2016 when the final two trams were delivered. Current network. Stops. There are 39 stops, with 38 opened in the initial phase and Centrale tram stop added on 10 December 2005. Most stops are long. The tram stops have low platforms, above rail level, virtually level with the doors. This level access from platform to tram allows wheelchairs, prams, pushchairs and the elderly to board easily with no steps. In street sections, the stop is integrated with the pavement. All platforms are wider than . Tramlink uses some former main-line stations on the Wimbledon–West Croydon and Elmers End–Coombe Lane stretches of line. The railway platforms have been demolished and rebuilt to Tramlink specifications, except at Elmers End and Wimbledon, where the track level was raised to meet the higher main-line platforms to enable cross-platform interchange. Stops are unstaffed; their automated ticket machines are no longer in use, as TfL has made the trams cashless. In general, access between the platforms involves crossing the tracks by pedestrian level crossing. Stops also feature CCTV, a Passenger Help Point, a Passenger Information Display (PID), litter bins, a noticeboard and lamp-posts, and most also have seats and a shelter. The PIDs display the destinations and expected arrival times of the next two trams. They can also display other messages from the controllers, such as information on delays or warnings against placing rubbish or other objects on the track. Routes. Tramlink has been shown on the principal tube map since 1 June 2016, having previously appeared only on the "London Connections" map.
When Tramlink first opened it had three routes: Line 1 (yellow) from Wimbledon to Elmers End, Line 2 (red) from Croydon to Beckenham Junction, and Line 3 (green) from Croydon to New Addington. On 23 July 2006, the network was restructured, with Route 1 from Elmers End to Croydon, Route 2 from Beckenham Junction to Croydon and Route 3 from New Addington to Wimbledon. On 25 June 2012, Route 4 from Therapia Lane to Elmers End was introduced. On 4 April 2016, Route 4 was extended from Therapia Lane to Wimbledon. On 25 February 2018, the network and timetables were restructured again for more even and reliable services. As part of this change, trams would no longer display route numbers on their dot matrix destination screens. This resulted in three routes: Additionally, the first two trams from New Addington would run to Wimbledon. Overall, this meant two fewer trams per hour leaving Elmers End, a 25% decrease in capacity there and a 14% decrease in the Addiscombe area. However, it also regulated waiting times in this area and on the Wimbledon branch to every five minutes, from every two to seven minutes previously. Former lines reused. Tramlink makes use of a number of National Rail lines, either running parallel to franchised services or, in some cases, running on previously abandoned railway corridors. Between Birkbeck and Beckenham Junction, Tramlink uses the Crystal Palace line, running on a single track alongside the track carrying Southern rail services. The National Rail track had been singled some years earlier. From Elmers End to Woodside, Tramlink follows the former Addiscombe Line. At Woodside, the old station buildings stand disused, and the original platforms have been replaced by accessible low platforms. Tramlink then follows the former Woodside and South Croydon Railway (W&SCR) to reach the current Addiscombe tram stop, adjacent to the site of the demolished Bingham Road railway station. It continues along the former railway route to near Sandilands, where Tramlink curves sharply towards Sandilands tram stop. Another route from Sandilands tram stop curves sharply on to the W&SCR before passing through the Park Hill (or Sandilands) tunnels to the site of Coombe Road station, after which it curves away across Lloyd Park. Between Wimbledon station and Wandle Park, Tramlink follows the former West Croydon to Wimbledon Line, which was first opened in 1855 and closed on 31 May 1997 to allow for conversion into Tramlink. Within this section, from near Phipps Bridge to near Reeves Corner, Tramlink follows the Surrey Iron Railway, giving Tramlink a claim to one of the world's oldest railway alignments. Beyond Wandle Park, a Victorian footbridge beside Waddon New Road was dismantled to make way for the flyover over the West Croydon to Sutton railway line. The footbridge has been re-erected at Corfe Castle station on the Swanage Railway (although some evidence suggests that this was a similar footbridge removed from the site of Merton Park railway station). Feeder buses. Bus routes T31, T32 and T33 used to connect with Tramlink at the New Addington, Fieldway and Addington Village stops. T31 and T32 no longer run, and T33 has been renumbered as 433. Onboard announcements. The onboard announcements are read by BBC News reader (and tram enthusiast) Nicholas Owen. The announcement pattern is, for example: "This tram is for Wimbledon; the next stop will be Merton Park". Rolling stock. Current fleet. Tramlink currently uses 35 trams. In summary: Bombardier CR4000.
The original fleet comprised 24 articulated low-floor Bombardier Flexity Swift CR4000 trams built in Vienna, numbered from 2530 and continuing from 2529, the highest-numbered tram on London's former tram network, which closed in 1952. The original livery was red and white. One (2550) was painted in FirstGroup white, blue and pink livery. During 2006, the CR4000 fleet was refreshed, with the bus-style destination roller blinds being replaced with a digital dot-matrix display. Between 2008 and 2009 the fleet was repainted externally in the new green livery, and the interiors were refurbished with new flooring, seat covers retrimmed in a new moquette and stanchions repainted from yellow to green. One tram (2551) has been permanently withdrawn, having been significantly damaged in the Croydon tram derailment of 9 November 2016. In 2007, tram 2535 was named after Stephen Parascandolo, a well-known tram enthusiast. Croydon Variobahn. In January 2011, Tramtrack Croydon invited tenders for the supply of ten new or second-hand trams, and on 18 August 2011, TfL announced that Stadler Rail had won a £16.3 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service during 2012. In August 2013, TfL ordered an additional four Variobahn trams for delivery in 2015, an order that was later increased to six. This brought the total Variobahn fleet up to ten in 2015, and to twelve in 2016 when the final two trams were delivered. Ancillary vehicles. Engineers' vehicles used in Tramlink construction were hired for that purpose. In November 2006, Tramlink purchased five second-hand engineering vehicles from Deutsche Bahn. These were two engineers' trams (numbered 058 and 059 in Tramlink service) and three 4-wheel wagons (numbered 060, 061, and 062). Service tram 058 and trailer 061 were both sold to the National Tramway Museum in 2010. Future fleet. In the 2020s, TfL began work to replace the CR4000 tram fleet, which is approaching the end of its life and becoming increasingly unreliable. In June 2023, one-fifth of the CR4000 fleet was temporarily withdrawn due to issues with the trams' wheels. In January 2024, Tramtrack Croydon invited tenders for a base order of 24 new trams, with an option for 16 more and a 30-year technical support contract, costed at £385 million. In September 2024, TfL announced that four manufacturers (Alstom, Construcciones y Auxiliar de Ferrocarriles, Hitachi Rail and Stadler Rail Valencia) had been invited to place bids. The new fleet is intended to replace the CR4000 trams, which are reaching the end of their design life. Fares and ticketing. TfL Bus & Tram Passes are valid on Tramlink, as are Travelcards that include any of zones 3, 4, 5 and 6. Pay-as-you-go Oyster card fares are the same as on London Buses, although special fares may apply when using Tramlink feeder buses. When using Oyster cards, passengers must touch in on the platform before boarding the tram. Special arrangements apply at Wimbledon station, where the Tramlink stop is within the National Rail and London Underground station. Tramlink passengers must therefore touch in at the station entry barriers and then again at the Tramlink platform to inform the system that no mainline/LUL rail journey has been made. Contactless payment cards can also be used to pay for fares in the same manner as Oyster cards. Ticket machines were withdrawn on 16 July 2018. Corporate affairs. Ownership and structure.
The service was created as a result of the Croydon Tramlink Act 1994, which received Royal Assent on 21 July 1994, a Private Bill jointly promoted by London Regional Transport (the predecessor of Transport for London (TfL)) and Croydon London Borough Council. Following a competitive tender, a consortium company, Tramtrack Croydon Limited (incorporated in 1995), was awarded a 99-year concession to build and run the system. On 17 March 2008, it was announced that TfL would take over Tramlink in exchange for £98 million. Since 28 June 2008, the company has been a subsidiary of TfL. Tramlink is currently operated by Tram Operations Ltd (TOL), a subsidiary of FirstGroup, which has a contract to operate the service until 2030. TOL provides the drivers and management to operate the service; the infrastructure and trams are owned and maintained by a TfL subsidiary. Business trends. The key available trends in recent years for Tramlink are (years ending 31 March): Activities in the financial year 2020/21 were severely reduced by the impact of the coronavirus pandemic. Passenger numbers. Detailed passenger journeys since Tramlink commenced operations in May 2000 were: Proposals for extensions. Numerous extensions to the network have been discussed or proposed over the years, involving varying degrees of support and investigative effort. During 2002, as part of The Mayor's Transport Strategy for London, a number of proposed extensions were identified, including to Sutton from Wimbledon or Mitcham; to Crystal Palace; to Colliers Wood/Tooting; and along the A23. The Strategy said that "extensions to the network could, in principle, be developed at relatively modest cost where there is potential demand..." and sought initial views on the viability of a number of extensions by summer 2002. In 2006, in a TfL consultation on an extension to Crystal Palace, three options were presented: on-street, off-street and a mixture of the two. After the consultation, the off-street option was favoured, to include Crystal Palace Station and Crystal Palace Parade. TfL stated in 2008 that, due to lack of funding, the plans for this extension would not be taken forward. They were revived shortly after Boris Johnson's re-election as Mayor in May 2012, but six months later they were cancelled again. During November 2014, a 15-year plan, Trams 2030, called for upgrades to increase capacity on the network in line with an expected increase in ridership to 60 million passengers by 2031 (although the passenger numbers at the time (2013/14: 31.2 million) have not been exceeded since, as at 2019). The upgrades were to improve reliability, support regeneration in the Croydon metropolitan centre, and future-proof the network for Crossrail 2, a potential Bakerloo line extension, and extensions to the tram network itself to a wide variety of destinations. The plans involved dual-tracking across the network and introducing diverting loops on either side of Croydon, allowing for a higher frequency of trams on all four branches without increasing congestion in central Croydon. The £737 million investment was to be funded by the Croydon Growth Zone, the TfL Business Plan, housing levies, the respective boroughs, and the affected developers. All the various developments, if implemented, could theoretically require an increase in the fleet from 30 to up to 80 trams (depending on whether longer trams or coupled trams are used).
As such, an increase in depot and stabling capacity would also be required; enlargement of the current Therapia Lane site, as well as sites near the Elmers End and Harrington Road tram stops, was shortlisted. Sutton Link. In July 2013, then-Mayor Boris Johnson affirmed that there was a reasonable business case for Tramlink to cover the Wimbledon – Sutton corridor, which might also include a loop via St Helier Hospital and an extension to The Royal Marsden Hospital. In 2014, a proposed £320M scheme for a new line to connect Wimbledon to Sutton via Morden was brought to consultation jointly by the London Boroughs of Merton and Sutton. Although £100M from TfL was initially secured in the draft 2016/17 budget, this was subsequently reallocated. In 2018, TfL opened a consultation on proposals for a connection to Sutton, with three route options: from South Wimbledon, from Colliers Wood (both having an option of a bus rapid transit route or a tram line) or from Wimbledon (only as a tram line). In February 2020, following the consultation, TfL announced their preference for a north–south tramway between Colliers Wood and Sutton town centre, with a projected cost of £425M, expressing their support for "Route Option 2 (Colliers Wood – Sutton) operated as a tram service ... assuming we are successful in securing funding to deliver the project". On 24 July 2020, during the COVID-19 pandemic, work on the project was put on hold, as TfL could not find sufficient funding for it to continue; TfL said it was pausing development work on the scheme "as the transport case is poor and there remains a significant funding gap". Andy Byford, London's Transport Commissioner, said that this involved making 'difficult choices' about which projects can be funded. During 2023, Sutton's council leader Ruth Dombey advocated for the project and urged TfL and the mayor's office to provide fair and adequate funding, especially in light of the ULEZ charge. However, London Mayor Sadiq Khan dismissed the project as inadequate and pointed to the £440M funding shortfall. On 21 April 2023, Khan faced criticism from Sutton MP Paul Scully for the delayed Sutton tram extension and for implementing the Ultra Low Emission Zone charge without sufficient public transport alternatives; Khan defended the delay by citing the £440M funding gap. In December 2023, TfL stated that further progress will depend on funding agreements with other stakeholders, such as local councils, the Department for Transport and central Government, and that the Sutton Link is currently the only extension being considered. Rival proposals have included new bus routes.