3878
48978085
https://en.wikipedia.org/wiki?curid=3878
Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments, and the interpretation of the results. History. Biostatistics and genetics. Biostatistical modeling forms an important part of numerous modern biological theories. Genetics has, since its beginning, used statistical concepts to understand observed experimental results. Some geneticists even contributed statistical advances, developing new methods and tools. Gregor Mendel started genetics research by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model, with fractions of heredity coming from each ancestor and composing an infinite series. He called this the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance comes exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis. Resolving these differences also made it possible to define the concept of population genetics and brought together genetics and evolution. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology. These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled. In parallel to this overall development, the pioneering work of D'Arcy Thompson in "On Growth and Form" also helped to add quantitative discipline to biological study. Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results that are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining." Research planning. Any research in the life sciences is proposed to answer a scientific question. To answer this question with high certainty, accurate results are needed. The correct definition of the main hypothesis and the research plan will reduce errors in the decisions made when interpreting a phenomenon. 
The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and the costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control. Research question. The research question defines the objective of a study. The research will be guided by the question, so it needs to be concise, while at the same time focusing on interesting and novel topics that may improve science and knowledge in the field. An exhaustive literature review may be necessary to define the way the scientific question is asked, so that the research adds value to the scientific community. Hypothesis definition. Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposal is called the null hypothesis (H0) and is usually based on established knowledge about the topic or an obvious occurrence of the phenomenon, supported by a thorough literature review. We can say it is the standard expected answer for the data under the situation being tested. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. In either case, the hypothesis is grounded in the research question and its expected and unexpected answers. As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects on the animals' metabolism (H1: μ1 ≠ μ2). The hypothesis is defined by the researcher, according to his or her interest in answering the main question. Besides that, there can be more than one alternative hypothesis; it can posit not only differences across observed parameters, but also their degree of difference ("i.e." higher or lower). Sampling. Usually, a study aims to understand the effect of a phenomenon on a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time. In biostatistics, this concept is extended to a variety of collections that can be studied: a population is not only the individuals, but also the total of one specific component of their organisms, such as the whole genome, all the sperm cells of an animal, or the total leaf area of a plant. It is usually not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population in order to make posterior inferences about that population. The sample should therefore capture as much of the population's variability as possible. The sample size is determined by several factors, ranging from the scope of the research to the resources available. In clinical research, the trial type, such as non-inferiority, equivalence, or superiority, is key in determining sample size, as the sketch below illustrates.
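Sample-size reasoning of this kind is routinely automated with power calculations. The following is a minimal Python sketch for the two-diet mouse example above (H0: μ1 = μ2); the effect size, significance level, power and simulated measurements are illustrative assumptions, not values from the text.

# A minimal sketch: sample size and a two-sample t-test for the
# two-diet mouse example (H0: mu1 == mu2 vs H1: mu1 != mu2).
# Effect size, alpha, power, and the data are illustrative.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Sample size needed per group to detect a standardized effect
# (Cohen's d) of 0.8 at alpha = 0.05 with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.8,
                                          alpha=0.05, power=0.80)
print(f"mice needed per diet group: {np.ceil(n_per_group):.0f}")

# Hypothetical metabolic measurements for the two diets.
rng = np.random.default_rng(seed=42)
diet_a = rng.normal(loc=10.0, scale=2.0, size=26)
diet_b = rng.normal(loc=11.5, scale=2.0, size=26)

# Two-sample t-test; reject H0 if p < alpha.
t_stat, p_value = stats.ttest_ind(diet_a, diet_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

For an effect size of 0.8 the calculation returns roughly 26 animals per group, which is why the simulated samples above use that size.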
Experimental design. Experimental designs sustain those basic principles of experimental statistics. There are three basic experimental designs for randomly allocating treatments to all plots of the experiment: the completely randomized design, the randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names of "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimate during inference. In clinical studies, the samples are usually smaller than in other biological studies, and in most cases the environmental effect can be controlled or measured. It is common to use randomized controlled clinical trials, whose results are usually compared with those of observational study designs such as case–control or cohort studies. Data collection. Data collection methods must be considered in research planning, because they strongly influence the sample size and experimental design. Data collection varies according to the type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering the presence or intensity of disease and using score criteria to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information with instruments. In agriculture and biology studies, yield data and its components can be obtained by metric measurements. However, pest and disease injuries in plants are obtained by observation, considering score scales for levels of damage. In genetic studies especially, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments and make it possible to evaluate many plots in less time than human-only methods of data collection. Finally, all collected data of interest must be stored in an organized data frame for further analysis. Analysis and data interpretation. Descriptive tools. Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms, and scatter plots. Measures of central tendency and variability can also be very useful for describing an overview of the data. Some examples follow. Frequency tables. One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of the data. Frequency can be absolute, representing the number of times that a given value appears, $n_i$, or relative, obtained by dividing the absolute frequency by the total number of observations, $f_i = n_i / N$. In the next example, we have the number of genes in ten operons of the same organism; a code sketch of such a table appears after the bar chart discussion below. Line graph. Line graphs represent the variation of a value over another metric, such as time. In general, values are represented on the vertical axis, while the time variation is represented on the horizontal axis. Bar chart. A bar chart is a graph that shows categorical data as bars with heights (vertical bars) or widths (horizontal bars) proportional to the values they represent. Bar charts provide an image that could also be represented in a tabular format.
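As a minimal illustration of the frequency table for the operon example, the sketch below tabulates gene counts for ten operons (the counts are invented for illustration) and computes absolute and relative frequencies with pandas, along with the central-tendency measures discussed below.

# A minimal sketch: absolute and relative frequencies for the
# number of genes in ten operons. The counts are hypothetical.
import pandas as pd

genes_per_operon = [3, 4, 3, 5, 4, 3, 2, 4, 3, 5]

absolute = pd.Series(genes_per_operon).value_counts().sort_index()
relative = absolute / absolute.sum()

freq_table = pd.DataFrame({"absolute": absolute, "relative": relative})
freq_table.index.name = "genes"
print(freq_table)

# Mean, median, and mode of the same data (discussed below).
s = pd.Series(genes_per_operon)
print(s.mean(), s.median(), s.mode().tolist())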
In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016; the sharp fall in December 2016 reflects the impact of the Zika virus outbreak on the birth rate in Brazil. Histograms. The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson. Scatter plot. A scatter plot is a mathematical diagram that uses Cartesian coordinates to display the values of a dataset. A scatter plot shows the data as a set of points, with the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis. They are also called scatter graphs, scatter charts, scattergrams, or scatter diagrams. Mean. The arithmetic mean is the sum of a collection of values ($x_1, x_2, \ldots, x_n$) divided by the number of items in the collection ($n$): $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$. Median. The median is the value in the middle of a dataset. Mode. The mode is the value of a set of data that appears most often. Box plot. A box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the lines (whiskers), and the box represents the interquartile range (IQR), which spans the 25th to 75th percentiles of the data. Outliers may be plotted as circles. Correlation coefficients. Although correlations between two different kinds of data can be suggested by graphs, such as a scatter plot, it is necessary to validate them through numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association. Pearson correlation coefficient. The Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by "ρ" (rho) for the population and "r" for the sample, assumes values between −1 and 1, where "ρ" = 1 represents a perfect positive correlation, "ρ" = −1 represents a perfect negative correlation, and "ρ" = 0 means no linear correlation. Inferential statistics. Inferential statistics is used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters that describe the population of interest, but, since the data are limited, it is necessary to use a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences. Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as set out in the "Research planning" section. Authors have defined four steps for carrying out such tests. A confidence interval is a range of values that contains the true parameter value with a given level of confidence. The first step is to compute the best unbiased estimate of the population parameter. The upper limit of the interval is obtained by adding to this estimate the product of the standard error of the mean and the critical value corresponding to the confidence level; the lower limit is obtained by subtracting the same product. Statistical considerations. Power and statistical error. When testing a hypothesis, two types of statistical error are possible: type I error and type II error. The significance level, denoted by α, is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β, and the statistical power of the test is 1 − β.
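The confidence-interval construction just described (estimate plus or minus the critical value times the standard error) can be sketched in a few lines with scipy; the data and the 95% level are illustrative assumptions.

# A minimal sketch: a 95% confidence interval for a mean,
# built as estimate +/- critical value * standard error.
# The data are illustrative.
import numpy as np
from scipy import stats

x = np.array([9.1, 10.3, 9.8, 11.0, 10.5, 9.9, 10.8, 10.1])
n = len(x)
xbar = x.mean()                      # best unbiased estimate of the mean
sem = x.std(ddof=1) / np.sqrt(n)     # standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)    # critical value for 95%
lower, upper = xbar - t_crit * sem, xbar + t_crit * sem
print(f"95% CI: ({lower:.2f}, {upper:.2f})")

# Equivalent one-liner:
print(stats.t.interval(0.95, df=n - 1, loc=xbar, scale=sem))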
p-value. The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for calling results significant. If p is less than α, the null hypothesis (H0) is rejected. Multiple testing. In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and a strategy is needed to account for this. It is commonly addressed by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with α = α*/m. This ensures that the familywise error rate across all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most a chosen threshold q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives. Mis-specification and robustness checks. The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification. Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification. Model selection criteria. Model selection criteria choose the model that best approximates the true model. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are examples of asymptotically efficient criteria. Developments and big data. Recent developments have had a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale and the ability to perform much more complex analyses using computational techniques. This comes from developments in areas such as sequencing technologies, bioinformatics and machine learning (machine learning in bioinformatics). Use in high-throughput data. New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.
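The Bonferroni and FDR corrections discussed above are each a single call in statsmodels; the p-values below are invented for illustration.

# A minimal sketch: Bonferroni and Benjamini-Hochberg (FDR)
# corrections applied to a set of illustrative p-values.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]

# Bonferroni: compare each test with alpha* / m.
reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05,
                                          method="bonferroni")
# Benjamini-Hochberg: controls the expected false discovery rate.
reject_fdr, p_fdr, _, _ = multipletests(pvals, alpha=0.05,
                                        method="fdr_bh")

print("Bonferroni rejections:", reject_bonf.sum())  # most stringent
print("FDR rejections:", reject_fdr.sum())          # more discoveries

On inputs like these, the Benjamini–Hochberg procedure typically rejects at least as many hypotheses as Bonferroni, matching the point above that FDR control is less conservative.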
Multicollinearity often occurs in high-throughput biostatistical settings. Due to the high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high-dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2-values despite very low predictive power of the statistical model. These classical statistical techniques (especially least squares linear regression) were developed for low-dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n ≫ p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set. Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: it is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach. Bioinformatics advances in databases, data mining, and biological interpretation. The development of biological databases enables the storage and management of biological data, with the possibility of ensuring access for users around the world. They are useful for researchers depositing data, for retrieving information and files (raw or processed) originating from other experiments, and for indexing scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to knowledge on gene characterization and pathways (KEGG), and to descriptions of gene function classified by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are broader in the sense that they store information about an organism or group of organisms. An example of a database directed at just one organism, but containing much data about it, is the "Arabidopsis thaliana" genetic and molecular database, TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, some databases are interconnected to exchange and share information; a major initiative of this kind was the International Nucleotide Sequence Database Collaboration (INSDC), which relates data from DDBJ, EMBL-EBI, and NCBI. 
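Dimension reduction in the n < p regime described above can be sketched with scikit-learn's PCA; the simulated expression matrix, sample sizes, and number of latent factors are illustrative assumptions.

# A minimal sketch: principal component analysis on a simulated
# high-dimensional dataset (n = 50 samples, p = 2000 "genes"),
# the n < p regime discussed above. All data are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
n_samples, n_genes = 50, 2000

# Build correlated predictors: a few latent factors drive many genes.
latent = rng.normal(size=(n_samples, 5))          # 5 hidden factors
loadings = rng.normal(size=(5, n_genes))
expression = latent @ loadings + 0.1 * rng.normal(size=(n_samples, n_genes))

pca = PCA(n_components=10)
scores = pca.fit_transform(expression)            # reduced representation

print(scores.shape)                                # (50, 10)
print(pca.explained_variance_ratio_[:5].round(3))  # first PCs dominate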
Nowadays, the increasing size and complexity of molecular datasets has led to the use of powerful statistical methods provided by computer science algorithms developed in the machine learning field. Therefore, data mining and machine learning allow the detection of patterns in data with a complex structure, such as biological data, using methods of supervised and unsupervised learning, regression, cluster detection and association rule mining, among others. To name some of them: self-organizing maps and "k"-means are examples of clustering algorithms, while neural networks and support vector machines are examples of common machine learning algorithms. Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, through data generation and analysis, to the biological interpretation of the results. Use of computationally intensive methods. On the other hand, the advent of modern computer technology and relatively cheap computing resources has enabled computer-intensive biostatistical methods like bootstrapping and other re-sampling methods. In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that they can be drawn and interpreted even with a basic understanding of mathematics and statistics. Random forests have thus been used for clinical decision support systems. Applications. Public health. Public health applications include epidemiology, health services research, nutrition, environmental health and health care policy and management. In these medical contexts, it is important to consider the design and analysis of clinical trials. One example is the assessment of the severity state of a patient with a prognosis of the outcome of a disease. With new technologies and genetics knowledge, biostatistics is now also used for systems medicine, which aims at a more personalized medicine. This requires integrating data from different sources, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new omics technologies. Quantitative genetics. Quantitative genetics is the study of population genetics and statistical genetics in order to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs became feasible with the use of molecular markers and the measurement of traits in populations, but their mapping requires a population obtained from an experimental cross, like an F2 or recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a linkage-based genetic map has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.
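The marker–trait test underlying these methods can be sketched in its simplest form as a single-marker regression scan (a much-reduced stand-in for interval mapping); the marker matrix, causal locus, and effect size below are all simulated for illustration.

# A minimal sketch: single-marker QTL scan on simulated data.
# Phenotype is regressed on each marker; small p-values flag
# putative QTL regions. All values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_individuals, n_markers = 200, 100

# Genotypes coded 0/1/2 (e.g., an F2 cross); marker 42 is causal.
genotypes = rng.integers(0, 3, size=(n_individuals, n_markers))
phenotype = 2.0 * genotypes[:, 42] + rng.normal(size=n_individuals)

p_values = np.array([
    stats.linregress(genotypes[:, m], phenotype).pvalue
    for m in range(n_markers)
])
print("most significant marker:", p_values.argmin())  # expect 42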
However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large offspring. Furthermore, allele diversity is restricted to individuals originating from contrasting parents, which limits studies of allele diversity when we have a panel of individuals representing a natural population. For this reason, the genome-wide association study (GWAS) was proposed in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between alleles at different loci. It was leveraged by the development of high-throughput SNP genotyping. In animal and plant breeding, the use of markers in selection, mainly molecular ones, contributed to the development of marker-assisted selection. While QTL mapping is limited by resolution, GWAS does not have enough power to detect rare variants of small effect, which are also influenced by the environment. Hence, the concept of genomic selection (GS) arose, in order to use all molecular markers in selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population, and to develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals that have been genotyped but not phenotyped, called the testing population. This kind of study can also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model. Expression data. Studies of the differential expression of genes from RNA-Seq data, as with RT-qPCR and microarrays, demand comparisons between conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, and with randomization and blocking when necessary. In RNA-Seq, the quantification of expression uses the information from mapped reads, summarized over some genetic unit, such as the exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better described by other distributions. The first distribution used was the Poisson, but it underestimates the sampling error, leading to false positives. Currently, biological variation is accounted for by methods that estimate the dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance and, as the number of genes is high, multiple testing correction has to be considered. Other examples of analyses of genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.
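The negative binomial test just described can be sketched for a single gene with statsmodels; the counts and the fixed dispersion are illustrative assumptions (dedicated packages such as DESeq2 or edgeR instead estimate the dispersion from the full dataset).

# A minimal sketch: testing one gene for differential expression
# between two conditions with a negative binomial GLM.
# Counts and the fixed dispersion (alpha) are illustrative;
# dedicated tools estimate the dispersion from all genes.
import numpy as np
import statsmodels.api as sm

counts = np.array([110, 95, 120, 230, 260, 215])   # 3 control, 3 treated
condition = np.array([0, 0, 0, 1, 1, 1])
design = sm.add_constant(condition)                # intercept + condition

model = sm.GLM(counts, design,
               family=sm.families.NegativeBinomial(alpha=0.1))
result = model.fit()
print(result.params)      # log fold change is the condition coefficient
print(result.pvalues[1])  # p-value for the condition effect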
Tools. There are many tools that can be used to perform statistical analysis on biological data. Most of them are also useful in other areas of knowledge, covering a large number of applications. Scope and training programs. Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics. In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments often host theoretical/methodological research that is less common in biostatistics programs, and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics, and biological areas other than medicine.
3912
48626325
https://en.wikipedia.org/wiki?curid=3912
List of major biblical figures
The Bible is a collection of canonical sacred texts of Judaism and Christianity. Different religious groups include different books within their canons, in different orders, and sometimes divide or combine books, or incorporate additional material into canonical books. Christian Bibles range from the sixty-six books of the Protestant canon to the eighty-one books of the Ethiopian Orthodox Church canon. Hebrew Bible. Tribes of Israel. According to the Book of Genesis, the Israelites were descendants of the sons of Jacob, who was renamed Israel after wrestling with an angel. His twelve male children became the ancestors of the Twelve Tribes of Israel. New Testament. Apostles of Jesus. The Thirteen: Others:
3914
1301414901
https://en.wikipedia.org/wiki?curid=3914
British & Irish Lions
The British & Irish Lions is a rugby union team selected from players eligible for the national teams of England, Ireland, Scotland, and Wales. The Lions are a test side and most often select players who have already played for their national team, although they can pick uncapped players who are eligible for any of the four unions. The team tours every four years, with destinations rotating among Australia, New Zealand and South Africa in order. The most recent test series, in 2021 against South Africa, was won 2–1 by South Africa. From 1888 onwards, combined British rugby sides toured the Southern Hemisphere. The first tour was a commercial venture, undertaken without official backing. The six subsequent visits enjoyed a growing degree of support from the authorities, before the 1910 South Africa tour, which was the first tour representative of the four Home Unions. In 1949 the four Home Unions formally created a Tours Committee and, for the first time, every player of the 1950 Lions squad had played internationally before the tour. The 1950s tours saw high win rates in provincial games, but the test series were typically lost or drawn. The series wins in 1971 (New Zealand) and 1974 (South Africa) interrupted this pattern. The last tour of the amateur age took place in 1993. The Lions have also played occasional matches in the Northern Hemisphere, either as one-off exhibitions or before a Southern Hemisphere tour. Naming and symbols. Name. The Shaw and Shrewsbury team first played in 1888 and is considered the precursor of the British & Irish Lions. It was then primarily English in composition but also contained players from Scotland and Wales. Later the team used the name British Isles. On their 1950 tour of New Zealand and Australia they officially adopted the name British Lions, a nickname first used by British and South African journalists on the 1924 South African tour, after the lion emblem on the players' ties (the emblem on their jerseys had been dropped in favour of the four-quartered badge with the symbols of the four represented unions). When the team first emerged in the 19th century, the United Kingdom of Great Britain and Ireland was a single state. The team continued after the Irish Free State was set up in 1922, but was still known as the British Lions or British Isles. The name "British & Irish Lions" has been used since the 2001 tour of Australia. The team is often referred to simply as the Lions. Anthem. As the Lions represent four rugby unions, which cover two sovereign states, they do not currently have a national anthem. For the 1989 tour, the British national anthem "God Save the Queen" was used. For the 2005 tour to New Zealand, the Lions management commissioned a song, "The Power of Four", but it met with little support among Lions fans at the matches and has not been used since. Colours and strip. For more than half a century, the Lions have worn a red jersey that sports the amalgamated crests of the four unions. Prior to 1950 the strip went through a number of significantly different formats. Unsanctioned tours. In 1888, the promoter of the first expedition to Australia and New Zealand, Arthur Shrewsbury, demanded "something that would be good material and yet take them by storm out here". The result was a jersey in thick red, white and blue hoops, worn above white shorts and dark socks. The tours to South Africa in 1891 and 1896 retained the red, white and blue theme, but this time as red and white hooped jerseys with dark blue shorts and socks. 
The 1899 trip to Australia saw a reversion to red, white and blue jerseys, but with the blue used in thick hoops and the red and white in thin bands. The shorts remained blue, as did the socks, although a white flash was added to the latter. The one-off test in 1999 between England and Australia, played to commemorate Australia's first test against Reverend Matthew Mullineux's British side, saw England wear an updated version of this jersey. In 1903, the South Africa tour followed on from the 1896 tour, with red and white hooped jerseys. The slight differences were that the red hoops were slightly thicker than the white (the opposite was true in 1896), and the white flash on the socks introduced in 1899 was partially retained. The Australia tour of 1904 saw exactly the same kit as in 1899. In 1908, with the Scottish and Irish unions not taking part, the Anglo-Welsh side sported red jerseys with a thick white band on tour to Australia and New Zealand. Blue shorts were retained, but the socks were for the first time red, with a white flash. Blue jerseys, the Lions named and the crest adopted. The Scots were once again involved in Tom Smyth's 1910 team to South Africa. Thus, dark blue jerseys were introduced with white shorts and the red socks of 1908. The jerseys also had a single lion-rampant crest. The 1924 tour returned to South Africa, retaining the blue jerseys but now with shorts to match. It is the 1924 tour that is credited as being the first in which the team were referred to as "the Lions", the irony being that it was on this tour that the single lion-rampant crest was replaced with the forerunner of the four-quartered badge, bearing the symbols of the four represented unions, that is still worn today. Although the lion had been dropped from the jersey, the players had worn the lion motif on their ties as they arrived in South Africa, which led to the press and public referring to them as "the Lions". The unofficial 1927 Argentina tour used the same kit and badge, and three heraldic lions returned as the jersey badge in 1930. This was the tour to New Zealand on which the tourists' now standard blue jerseys caused some controversy. The convention in rugby is for the home side to accommodate its guests when there is a clash of kit. The New Zealand side, by then already synonymous with the appellation "All Blacks", had an all-black kit that clashed with the Lions' blue. After much reluctance and debate, New Zealand agreed to change for the Tests and played in all white for the first time. On the 1930 tour a delegation led by the Irish lock George Beamish expressed their displeasure at the fact that, while the blue of Scotland, white of England and red of Wales were represented in the strip, there was no green for Ireland. A green flash was added to the socks, which from 1938 became a green turnover (although on blue socks, thus eliminating red from the kit), and that has remained a feature of the strip ever since. In 1936, the four-quartered badge returned for the tour to Argentina and has remained on the kits ever since, but other than that the strip remained the same. Red jerseys. The adoption of the red jersey happened on the 1950 tour. A return to New Zealand was accompanied by a desire to avoid the controversy of 1930, and so red replaced blue for the jersey, resulting in the kit that is still worn today: the combination of red jersey, white shorts, and green and blue socks, representing the four unions. 
The only additions to the strip since 1950 began appearing in 1993, with kit suppliers' logos in prominent positions. Umbro had in 1989 asked for "maximum brand exposure whenever possible", but this did not affect the kit's appearance. Since then, Nike, Adidas and Canterbury have had more overt branding on the shirts, with sponsors Scottish Provident (1997), (2001), Zurich (2005), HSBC (2009 and 2013), Standard Life Investments (2017), Vodafone (2021) and Howden (2025). History. 1888–1909. The earliest tours date back to 1888, when a 21-man squad visited Australia and New Zealand. The squad drew players from England, Scotland and Wales, though English players predominated. The 35-match tour of the two host nations included no tests, but the side played provincial, city and academic sides, winning 27 matches. They played 19 games of Australian rules football against prominent clubs in Victoria and South Australia, winning six and drawing one of these (see Australian rules football in England). The first tour, although unsanctioned by rugby bodies, established the concept of Northern Hemisphere sporting sides touring the Southern Hemisphere. Three years after the first tour, the Western Province union invited rugby bodies in Britain to tour South Africa. Some saw the 1891 team – the first sanctioned by the Rugby Football Union – as the England national team, though others referred to it as "the British Isles". The tourists played a total of twenty matches, three of them tests against a representative South Africa side (South Africa did not yet exist as a political unit in 1891), winning all three. In a notable event of the tour, the touring side presented the Currie Cup to Griqualand West, the province they thought had produced the best performance on the tour. Five years later a British Isles side returned to South Africa. They played one extra match on this tour, making a total of 21 games, including four tests against South Africa, of which the British Isles won three. The squad had a notable Irish orientation, with the Ireland national team contributing six players to the 21-man squad. In 1899 the British Isles touring side returned to Australia for the first time since the unofficial tour of 1888. The squad of 23 for the first time had players from each of the home nations. The team again participated in 21 matches, playing state teams as well as northern Queensland and Victorian sides. A four-test series took place against Australia, the tourists winning three of the four. The team returned via Hawaii and Canada, playing additional games en route. Four years later, in 1903, the British Isles team returned to South Africa. The opening performance of the side proved disappointing from the tourists' point of view, with defeats in its opening three matches by Western Province sides in Cape Town. From then on the team experienced mixed results, though more wins than losses. The side lost the test series to South Africa, drawing twice, with the South Africans winning the decider 8–0. No more than twelve months passed before the British Isles team ventured to Australia and New Zealand in 1904. The tourists devastated the Australian teams, winning every single game. Australia also lost all three tests to the visitors, even being held scoreless in two of the three games. 
Though the New Zealand leg of the tour was short in comparison with the Australian leg, the British Isles experienced considerable difficulty across the Tasman after whitewashing the Australians. The team managed two early wins before losing the test to New Zealand, winning only one more game and drawing once. Despite their difficulties in New Zealand, the tour proved a raging success on the field for the British Isles. In 1908, another tour took place to Australia and New Zealand. In a reversal of previous practice, the planners allocated more matches to New Zealand than to Australia: perhaps the strength of the New Zealand teams and the heavy defeats of all Australian teams on the previous tour influenced this decision. Some commentators thought that this tour hoped to reach out to rugby communities in Australia, as rugby league (infamously) started in Australia in 1908. The Anglo-Welsh side (the Irish and Scottish unions did not participate) performed well in all the non-test matches, but drew a test against New Zealand and lost the other two. 1910–1949. Visits that took place before the 1910 South Africa tour (the first selected by a committee from the four Home Unions) had enjoyed a growing degree of support from the authorities, although only one of them included representatives of all four nations. The 1910 tour to South Africa marked the official beginning of British Isles rugby tours: the inaugural tour operating under all four unions. The team performed moderately against the non-test teams, claiming victories in just over half their matches, and the test series went to South Africa, who won two of the three games. A side managed by Oxford University — supposedly the England rugby team, but actually including three Scottish players — toured Argentina at the time: the people of Argentina termed it the "Combined British". The next British Isles tour did not take place until 1924, again in South Africa. The team, led by Ronald Cove-Smith, struggled with injuries and lost three of the four test matches, drawing the other 3–3. In total, 21 games were played, with the touring side winning 9, drawing 3 and losing 9. In 1927 a short, nine-game series took place in Argentina, with the British Isles winning all nine encounters, and the tour was a financial success for Argentine rugby. The Lions returned to New Zealand in 1930 with some success. The Lions won all of their games that did not have test status except for the matches against Auckland, Wellington and Canterbury, but they lost three of their four test matches against New Zealand, winning the first test 6–3. The side also visited Australia, losing a test but winning five of the six non-test games. In 1936 the British Isles visited Argentina for the third time, winning all ten of their matches and conceding only nine points in the whole tour. Two years later, in 1938, the British Isles toured South Africa, winning more than half of their non-test matches. Despite having lost the test series to South Africa by game three, they won the final test. It was on this tour that they were named "the Lions" by their then captain, Sam Walker. 1950–1969. The first post-war tour went to New Zealand and Australia in 1950. The Lions, sporting newly redesigned jerseys and displaying a fresh style of play, managed to win 22 and draw one of 29 matches over the two nations. The Lions won the opening four fixtures before losing to Otago and Southland, but succeeded in holding New Zealand to a 9–9 draw. 
The Lions performed well in the remaining All Black tests, though they lost all three, and the team did not lose another non-test match in the New Zealand leg of the tour. The Lions won all their games in Australia except for their final fixture against a New South Wales XV in Newcastle. They won both tests against Australia, in Brisbane, Queensland and in Sydney. In 1955 the Lions toured South Africa and left with another imposing record: one draw and 19 wins from the 25 fixtures. The four-test series against South Africa, a thrilling affair, ended in a draw. The 1959 tour to Australia and New Zealand was once again a very successful tour for the Lions, who lost only six of their 35 fixtures. The Lions easily won both tests against Australia and lost the first three tests against New Zealand, but did find victory (9–6) in the final test. After the glittering decade of the 1950s, the first tour of the 1960s proved not nearly as successful as previous ones. On the 1962 tour to South Africa the Lions still won 16 of their 25 games, but they did not fare well against the Springboks, losing three of the four tests. For the 1966 tour to Australia and New Zealand, John Robins became the first Lions coach, and the trip started off very well for the Lions, who stormed through Australia, winning five non-tests and drawing one, and defeating Australia in two tests. The Lions experienced mixed results during the New Zealand leg of the tour, as well as losing all of the tests against New Zealand. The Lions also played a test against Canada on their way home, winning 19 to 8 in Toronto. The 1968 tour of South Africa saw the Lions win 15 of their 16 provincial matches, but the team lost three tests against the Springboks and drew one. 1970–1979. The 1970s saw a renaissance for the Lions. The 1971 British Lions tour to New Zealand and Australia, centred around the skilled Welsh half-back pairing of Gareth Edwards and Barry John, secured a series win over New Zealand. The tour started with a loss to Queensland but proceeded to storm through the next provincial fixtures, winning 11 games in a row. The Lions then went on to defeat New Zealand in Dunedin. The Lions lost only one match on the rest of the tour and won the test series against New Zealand, winning and drawing the last two games to take the series two wins to one. The 1974 British Lions tour to South Africa featured one of the best-known and most successful Lions teams. Apartheid concerns meant some players declined the tour. Nonetheless, led by the esteemed Irish forward Willie John McBride, the tour went through 22 games unbeaten and triumphed 3–0 (with one drawn) in the test series. The series was marked by considerable violence. The management of the Lions concluded that the Springboks dominated their opponents with physical aggression. At that time, test match referees came from the home nation, substitutions took place only if a doctor found a player unable to continue, and there were no video cameras or sideline officials to prevent violent play. The Lions decided "to get their retaliation in first" with the infamous "99 call". The Lions postulated that a South African referee would probably not send off all of the Lions if they all retaliated against "blatant thuggery". Famous video footage of the 'battle of Boet Erasmus Stadium' shows J. P. R. Williams running over half of the pitch and launching himself at Van Heerden after such a call. 
The 1977 British Lions tour to New Zealand saw the Lions drop only one non-test out of 21 games, a loss to a Universities side. The team did not win the test series, though, winning one game but losing the other three. In August 1977 the British Lions made a stopover in Fiji on the way home from their tour of New Zealand. Fiji beat them 25–21 at Buckhurst Park, Suva. 1980–1989. The Lions toured South Africa in 1980 and completed a flawless non-test record, winning 14 out of 14 matches. The Lions lost the first three tests to South Africa, winning only the last one, once the Springboks had already secured the series. The 1983 tour to New Zealand saw the team successful in the non-test games, winning all but two, but whitewashed in the test series against New Zealand. A tour to South Africa by the Lions was anticipated in 1986, but the invitation for the Lions to tour was never accepted because of controversy surrounding apartheid, and the tour did not go ahead. The Lions did not return to South Africa until 1997, after the apartheid era. A Lions team was selected in April 1986 for the International Rugby Board centenary match against 'The Rest'. The team was organised by the Four Home Unions Committee and the players were given the status of official British Lions. The Lions tour to Australia in 1989 was a shorter affair, comprising only 12 matches. The tour was very successful for the Lions, who won all eight non-test matches and took the test series against Australia two to one. 1990–1999. The tour to New Zealand in 1993 was the last of the amateur era. The Lions won six and lost four non-test matches, and lost the test series 2–1. The tour to South Africa in 1997 was a success for the Lions, who completed the tour with only two losses and won the test series 2–1. 2000–2009. In 2001, the ten-game tour to Australia saw the Wallabies win the test series 2–1. This series saw the first award of the Tom Richards Trophy. In the Lions' 2005 tour to New Zealand, coached by Clive Woodward, the Lions won seven games against provincial teams, were defeated by the New Zealand Maori team, and suffered heavy defeats in all three tests. In 2009, the Lions toured South Africa. There they faced the World Cup winners South Africa, with Ian McGeechan leading a coaching team including Warren Gatland, Shaun Edwards and Rob Howley. The Lions were captained by Irish lock Paul O'Connell. The initial Lions selection consisted of fourteen Irish players, thirteen Welsh, eight English and two Scots in the 37-man squad. In the first test on 20 June, they lost 26–21, and they lost the series with a 28–25 defeat in a tightly fought second test at Loftus Versfeld on 27 June. The Lions won the third test 28–9 at Ellis Park, and the series finished 2–1 to South Africa. 2010–2019. During June 2013 the British & Irish Lions toured Australia. Former Scotland and Lions full-back Andy Irvine had been appointed as tour manager in 2010. Wales head coach Warren Gatland was the Lions' head coach, and their tour captain was Sam Warburton. The tour started in Hong Kong with a match against the Barbarians before moving on to Australia for the main tour, featuring six provincial matches and three tests. The Lions won all but one of the non-test matches, losing to the Brumbies 14–12 on 18 June. The first test followed shortly after, with the Lions going one up over Australia, winning 23–21. Australia had a chance to take the win in the final moments of the game, but Kurtley Beale missed a penalty and the Lions held on. 
The Wallabies levelled the series in the second test, winning 16–15; the Lions would have had a chance to steal the win but for a missed penalty by Leigh Halfpenny. With tour captain Warburton out injured, Alun Wyn Jones took over the captaincy for the final test in Sydney. The Lions won the final test 41–16, a record win, to earn their first series victory since 1997 and their first over Australia since 1989. Following his winning tour of Australia in 2013, Warren Gatland was reappointed as Lions head coach for the tour to New Zealand in June and July 2017. In April 2016, it was announced that the side would again be captained by Sam Warburton. The touring schedule included 10 games: an opening game against the Provincial Barbarians, challenge matches against all five of New Zealand's Super Rugby sides, a match against the Māori All Blacks and three tests against New Zealand. The Lions defeated the Provincial Barbarians in the first game of the tour, before being beaten by the Blues three days later. The team recovered to beat the Crusaders, but this was followed by another midweek loss, this time against the Highlanders. The Lions then faced the Māori All Blacks, winning comfortably to restore optimism, and followed up with their first midweek victory of the tour against the Chiefs. On 24 June, the Lions, captained by Peter O'Mahony, faced New Zealand at Eden Park in the first test and were beaten 30–15. This was followed by the final midweek game of the tour, a draw against the Hurricanes. For the second test, Gatland recalled Warburton to the starting team as captain. At Wellington Regional Stadium, the Lions beat a 14-man New Zealand side 24–21 after Sonny Bill Williams was red-carded in the 24th minute for a shoulder charge on Anthony Watson. This tied the series going into the final game and ended New Zealand's 47-game winning run at home. In the final test at Eden Park the following week, the teams were tied at 15 points apiece with 78 minutes gone. Romain Poite signalled a penalty to New Zealand for an offside infringement after Ken Owens received the ball in front of his teammate Liam Williams, giving New Zealand the opportunity to kick for goal and potentially win the series. Poite, however, decided to downgrade the penalty to a free-kick after discussing it with assistant referee Jérôme Garcès and Lions captain Sam Warburton. The match finished as a draw and the series was tied. 2020–present. Warren Gatland was Lions head coach again for the tour to South Africa in 2021. In December 2019, the Lions' test venues were announced, but the tour was significantly disrupted by the COVID-19 pandemic, and all the games were played behind closed doors. South Africa won the test series by two games to one. In the deciding third test, Morne Steyn again kicked a late penalty to win the series. In 2024, it was announced that Andy Farrell would succeed Gatland as the Lions head coach. A women's Lions team was established in 2024, with their inaugural tour, to New Zealand, to take place in 2027. On the 2025 tour to Australia, the first test saw Ireland equal their record for most starters, with 8, while Wales did not have a player in the matchday squad for the first time since 1896. The Lions went on to win the first test 27–19. Overall test match record. Overall test series results. Tours. Format. The Lions now regularly tour three Southern Hemisphere countries: Australia, South Africa and New Zealand. 
They also toured Argentina three times before the Second World War. Since 1989, tours have been held every four years. The most recent tour was to South Africa in 2021. In a break with tradition, the 2005 tour of New Zealand was preceded by a "home" fixture against Argentina at the Millennium Stadium in Cardiff on 23 May 2005. It finished in a 25–25 draw. A similar fixture was held against Japan before the 2021 tour of South Africa, at Murrayfield, with the Lions winning 28–10. On tour, games take place against local provinces, clubs or representative sides, as well as test matches against the host's national team. The Lions and their predecessor teams have also played games against other nearby countries on tour. For example, they played Rhodesia (now Zimbabwe) in 1910, 1924, 1938, 1955, 1962, 1968 and 1974 during their tours to South Africa. They were also beaten by Fiji on their 1977 tour to New Zealand. In addition, they visited pre-independence Namibia (then South West Africa) in 1955, 1962, 1968 and 1974. There have also been games in other countries on the way home. These include games in Canada in 1959 and 1966, in East Africa (then mostly Kenya, held in Nairobi), and an unofficial game against Ceylon (now Sri Lanka) in 1950. Lions non-tour and home matches. The Lions have played a number of other matches against international opposition. With the exception of the matches against Argentina in 2005 and Japan in 2021, which were preparation matches for Lions tours, these matches have been one-offs to mark special occasions. The Lions played an unofficial international match in 1955 at Cardiff Arms Park against a Welsh XV to mark the 75th anniversary of the Welsh Rugby Union. The Lions won 20–17 but did not include all the big names of the 1955 tour, such as Tony O'Reilly, Jeff Butterfield, Phil Davies, Dickie Jeeps, Bryn Meredith and Jim Greenwood. In 1977, the Lions played their first official home game, against the Barbarians, as a charity fund-raiser held as part of the Queen's silver jubilee celebrations. The Barbarians line-up featured J. P. R. Williams, Gerald Davies, Gareth Edwards, Jean-Pierre Rives and Jean-Claude Skrela. The Lions included 13 of the team who had played in the fourth test against New Zealand three weeks before and won 23–14. In 1986, the Lions' planned tour to South Africa was cancelled for political reasons, because of apartheid in South Africa. A match was organised against The Rest as a celebration to mark the International Rugby Board's centenary. The Lions lost 15–7. In 1989, the Lions played against France in Paris. The game formed part of the celebrations of the bicentennial of the French Revolution. The Lions, captained by Rob Andrew, won 29–27. In 1990, a Four Home Unions team played against the Rest of Europe in a match to raise money for the rebuilding of Romania following the overthrow of Nicolae Ceaușescu in December 1989. The team used the Lions' logo, while the Rest of Europe played under the symbol of the Romanian Rugby Federation. "Players in bold are still active at international level." "Only matches against full international sides are listed." Player records. Most caps. "Updated 7 August 2021" Most points. "Updated 31 July 2021" Most tries. "Updated 31 July 2021"
Bass guitar
The bass guitar, also known as the electric bass guitar, electric bass, or simply the bass, is the lowest-pitched member of the guitar family. It is similar in appearance and construction to an electric guitar but with a longer neck and scale length. The electric bass guitar most commonly has four strings, though five- and six-stringed models are also built. Since the mid-1950s, the bass guitar has often replaced the double bass in popular music due to its lighter weight, smaller size and easier portability, most models' inclusion of frets for easier intonation, and electromagnetic pickups for amplification. The bass guitar is usually tuned the same as the double bass, corresponding to pitches one octave lower than the four lowest-pitched strings of a guitar (typically E, A, D, and G). It is played with the fingers and thumb or with a pick. Because the electric bass guitar is acoustically a quiet instrument, it requires external amplification, generally via electromagnetic or piezo-electric pickups. It can also be used with direct input boxes, audio interfaces, mixing consoles, computers, or bass-effects processors which offer headphone jacks. Terminology. The "New Grove Dictionary of Music and Musicians" refers to this instrument as an "Electric bass guitar, usually with four heavy strings tuned E1–A1–D2–G2." It also defines "bass" as "Bass (iv). A contraction of Double bass or Electric bass guitar." "Mottola's Cyclopedic Dictionary of Lutherie Terms" begins its definition of the instrument as "A bass guitar that produces sound primarily with the aid of electronic devices." According to some authors, the proper term is "electric bass". Common names for the instrument are "bass guitar", "electric bass guitar", "electric bass", and simply "bass", and some authors claim that these names are historically accurate. A bass guitar whose neck lacks frets is termed a fretless bass. Scale. The scale of a bass is defined as the length of the vibrating strings between the nut and the bridge saddles. On a modern 4-string bass guitar, 30" (76 cm) or less is considered short scale, 32" (81 cm) medium scale, 34" (86 cm) standard scale and 35" (89 cm) long scale. The double bass is "acoustically imperfect", like the viola. For a double bass to be acoustically perfect, its body would have to be twice the size of a cello's, rendering it practically unplayable, so the double bass is made smaller to keep it playable. The electric bass, with its pickups and amplifier, sidesteps this compromise by allowing the low notes to be amplified electronically. Pickup. Bass pickups are attached to the body of the guitar and located beneath the strings. They are responsible for converting the vibrations of the strings into an analogous electrical voltage, sent as input to an instrument amplifier. Strings. Bass guitar strings are composed of a core and winding. The core is a wire which runs through the center of the string and is made of steel, nickel, or an alloy. The winding is a smaller-gauge wire wrapped around the core. Bass guitar strings vary by the material and cross-sectional shape of the winding. Common string variants include roundwound, flatwound, halfwound (groundwound), coated, tapewound and taperwound strings. Roundwound and flatwound strings feature windings with circular and rounded-square cross-sections, respectively, with half-round strings being a hybrid between the two. Coated strings have their surface coated with a synthetic layer, while tapewound strings feature a metal core with a plastic winding. 
Taperwound strings have a tapered end where the exposed core sits on the bridge saddle without windings. The choice of winding has considerable impact on the sound of the instrument, with certain winding styles often being preferred for certain musical genres. History. 1930s. In the 1930s, musician and inventor Paul Tutmarc of Seattle, Washington, developed the first electric bass guitar in its modern form, a fretted four-string instrument designed to be played horizontally in a position similar to a standard guitar. The 1935 sales catalog for Tutmarc's company Audiovox featured the "Model 736 Bass Fiddle", a solid-body electric bass guitar with four strings, a scale length, and a single pickup. Around 100 were made during this period. Audiovox also sold their "Model 236" bass amplifier. 1950s. In the 1950s, Leo Fender and George Fullerton developed the first mass-produced electric bass guitar. The Fender Electric Instrument Manufacturing Company began producing the Precision Bass, or P-Bass, in October 1951. The design featured a simple uncontoured "slab" body and a single-coil pickup, both features similar to those of the Telecaster. By 1957, the Precision Bass began to resemble the Fender Stratocaster, with the body edges beveled for comfort and the pickup changed to a split-coil design with two separate halves. The Fender Bass was a revolutionary instrument for working musicians. In comparison to the upright bass, the bass guitar could be easily transported. When amplified, the bass guitar was also much less prone than acoustic basses to audio feedback. The addition of frets enabled bassists to play in tune more easily than on upright basses (the "precision" of the Fender model) and allowed guitarists to more easily adapt to the new instrument. In 1953, Monk Montgomery became the first bassist to tour with the Fender bass, in Lionel Hampton's postwar big band. Montgomery was also possibly the first to record with the electric bass, in a session on July 2, 1953, with the Art Farmer Septet. Roy Johnson (with Lionel Hampton) and Shifty Henry (with Louis Jordan and His Tympany Five) were other early Fender bass pioneers. Bill Black, who played with Elvis Presley, and James Jamerson switched from upright bass to the Fender Precision Bass around 1957. The bass guitar was intended to appeal to guitarists as well as upright bass players, and many early pioneers of the instrument, such as Joe Osborn and Paul McCartney, were originally guitarists. Also in 1953, Gibson released the first short-scale violin-shaped electric bass, the EB-1, with an extendable end pin so a bassist could play it upright or horizontally. In 1958, Gibson released the maple arched-top EB-2, described in the Gibson catalog as a "hollow-body electric bass that features a Bass/Baritone pushbutton for two different tonal characteristics". In 1959, these were followed by the more conventional-looking EB-0 Bass. The EB-0 was very similar to a Gibson SG in appearance (although the earliest examples have a slab-sided body shape closer to that of the double-cutaway Les Paul Special). The Fender and Gibson versions used bolt-on and set necks, respectively. Several other companies also began manufacturing bass guitars during the 1950s: Kay Musical Instrument Company began production of the K162 in 1952, and in 1956, at the German trade fair "Musikmesse Frankfurt", the distinctive Höfner 500/1 viola-shaped bass first appeared, constructed using violin techniques by Walter Höfner, a second-generation violin luthier. 
Due to its use by Paul McCartney, it became known as the "Beatle bass". In 1957, Rickenbacker introduced the model 4000, the first bass to feature a neck-through-body design in which the neck is part of the body wood. The Burns London Supersound was introduced in 1958. 1960s. With the explosion in popularity of rock music in the 1960s, many more manufacturers began making electric basses, including Yamaha, Teisco and Guyatone. Introduced in 1960, the Fender Jazz Bass, initially known as the "Deluxe Bass", used a body design known as an offset waist, first seen on the Jazzmaster guitar, in an effort to improve comfort while playing seated. The Jazz Bass, or J-Bass, features two single-coil pickups. To provide an instrument closer to the Gibson scale than the Jazz and Precision, Fender produced the Mustang Bass, a 30" (76 cm) scale-length instrument. The Fender VI, a 6-string bass, was tuned one octave lower than standard guitar tuning. It was released in 1961, and was briefly favored by Jack Bruce of Cream. Gibson introduced its short-scale EB-3 in 1961, also used by Bruce. The EB-3 had a "mini-humbucker" at the bridge position. Gibson basses tended to be instruments with a shorter 30.5" scale length than the Precision; Gibson did not produce a 34"-scale bass until 1963, with the release of the Thunderbird. The first commercial fretless bass guitar was the Ampeg AUB-1, introduced in 1966. In the late 1960s, eight-string basses, with four octave-paired courses (similar to a 12-string guitar), were introduced, such as the Hagström H8. 1970s. In 1972, Alembic established what became known as "boutique" or "high-end" electric bass guitars. These expensive, custom-tailored instruments, as used by Phil Lesh, Jack Casady, and Stanley Clarke, featured unique designs, premium hand-finished wood bodies, and innovative construction techniques such as multi-laminate neck-through-body construction and graphite necks. Alembic also pioneered the use of onboard electronics for pre-amplification and equalization. Active electronics increase the output of the instrument and allow more options for controlling tonal flexibility, giving the player the ability to amplify as well as to attenuate certain frequency ranges while improving the overall frequency response (including more low-register and high-register sounds). In 1974, Music Man Instruments, founded by Tom Walker, Forrest White and Leo Fender, introduced the StingRay, the first widely produced bass with active (powered) electronics built into the instrument; 1976 saw the UK company Wal begin production of their own range of active basses. Basses with active electronics can include a preamplifier and knobs for boosting and cutting the low and high frequencies. In the mid-1970s, five-string basses, with a very low "B" string, were introduced. In 1975, bassist Anthony Jackson commissioned luthier Carl Thompson to build a six-string bass tuned (low to high) B0, E1, A1, D2, G2, C3, adding a low B string and a high C string.
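The tuning relationships discussed above follow directly from twelve-tone equal temperament, in which each semitone step multiplies frequency by the twelfth root of two. The following Python sketch is illustrative only and is not from the article; the MIDI note numbers and the A4 = 440 Hz reference are standard conventions assumed for the example. It shows that each bass string sounds exactly one octave below, that is, at half the frequency of, the corresponding guitar string:

# Minimal sketch: equal-temperament frequencies for the string tunings
# discussed above, using the standard A4 = 440 Hz reference.
# The MIDI note numbers are conventions of this illustration, not of the source.

A4_MIDI, A4_HZ = 69, 440.0

def frequency(midi_note: int) -> float:
    """Frequency in Hz of a MIDI note under 12-tone equal temperament."""
    return A4_HZ * 2 ** ((midi_note - A4_MIDI) / 12)

# Four-string bass (E1 A1 D2 G2) vs. the four lowest guitar strings (E2 A2 D3 G3).
bass   = {"E1": 28, "A1": 33, "D2": 38, "G2": 43}
guitar = {"E2": 40, "A2": 45, "D3": 50, "G3": 55}

for (b_name, b_midi), (g_name, g_midi) in zip(bass.items(), guitar.items()):
    ratio = frequency(g_midi) / frequency(b_midi)
    print(f"{b_name}: {frequency(b_midi):6.2f} Hz  "
          f"{g_name}: {frequency(g_midi):6.2f} Hz  ratio: {ratio:.2f}")

# Extended-range strings mentioned above: low B0 and high C3.
print(f"B0: {frequency(23):.2f} Hz, C3: {frequency(48):.2f} Hz")

Running this gives approximately 41.2 Hz for E1 against 82.4 Hz for the guitar's low E2, a ratio of exactly 2, and about 30.9 Hz for the low B0 string of the five- and six-string instruments described above.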
Basketball
Basketball is a team sport in which two teams, most commonly of five players each, opposing one another on a rectangular court, compete with the primary objective of shooting a basketball (approximately 9.4 inches (24 cm) in diameter) through the defender's hoop (a basket 18 inches (46 cm) in diameter mounted 10 feet (3.05 m) high to a backboard at each end of the court), while preventing the opposing team from shooting through their own hoop. A field goal is worth two points, unless made from behind the three-point line, when it is worth three. After a foul, timed play stops and the player fouled or designated to shoot a technical foul is given one, two or three one-point free throws. The team with the most points at the end of the game wins, but if regulation play expires with the score tied, an additional period of play (overtime) is mandated. Players advance the ball by bouncing it while walking or running (dribbling) or by passing it to a teammate, both of which require considerable skill. On offense, players may use a variety of shots: the layup, the jump shot, or a dunk. On defense, they may steal the ball from a dribbler, intercept passes, or block shots; either offense or defense may collect a rebound, that is, a missed shot that bounces off the rim or backboard. It is a violation to lift or drag one's pivot foot without dribbling the ball, to carry it, or to hold the ball with both hands then resume dribbling. The five players on each side fall into five playing positions. The tallest player is usually the center, the second-tallest and strongest is the power forward, a slightly shorter but more agile player is the small forward, and the shortest players or the best ball handlers are the shooting guard and the point guard, who implement the coach's game plan by managing the execution of offensive and defensive plays (player positioning). Informally, players may play three-on-three, two-on-two, and one-on-one. Invented in 1891 by Canadian-American gym teacher James Naismith in Springfield, Massachusetts, in the United States, basketball has evolved to become one of the world's most popular and widely viewed sports. The National Basketball Association (NBA) is the most significant professional basketball league in the world in terms of popularity, salaries, talent, and level of competition (drawing most of its talent from U.S. college basketball). Outside North America, the top clubs from national leagues qualify for continental championships such as the EuroLeague and the Basketball Champions League Americas. The FIBA Basketball World Cup and Men's Olympic Basketball Tournament are the major international events of the sport and attract top national teams from around the world. Each continent hosts regional competitions for national teams, like EuroBasket and FIBA AmeriCup. The FIBA Women's Basketball World Cup and women's Olympic basketball tournament feature top national teams from continental championships. The main North American league is the WNBA (the NCAA Women's Division I Basketball Championship is also popular), whereas the strongest European clubs participate in the EuroLeague Women. History. Early history. A game similar to basketball is mentioned in a 1591 book published in Frankfurt am Main that reports on the lifestyles and customs of coastal North American residents (the German title translates as "Truthful Depictions of the Savages"): "Among other things, a game of skill is described in which balls must be thrown against a target woven from twigs, mounted high on a pole. 
There is a small reward for the player if the target is hit." Creation. In December 1891, James Naismith, a Canadian-American professor of physical education and instructor at the International Young Men's Christian Association Training School (now Springfield College) in Springfield, Massachusetts, was trying to keep his gym class active on a rainy day. He sought a vigorous indoor game to keep his students occupied and at proper levels of fitness during the long New England winters. After rejecting other ideas as either too rough or poorly suited to walled-in gymnasiums, he invented a new game in which players would pass a ball to teammates and try to score points by tossing the ball into a basket mounted on a wall. Naismith wrote the basic rules and nailed a peach basket onto an elevated track. Naismith initially set up the peach basket with its bottom intact, which meant that the ball had to be retrieved manually after each "basket" or point scored. This quickly proved tedious, so Naismith removed the bottom of the basket to allow the balls to be poked out with a long dowel after each scored basket. Shortly after, Senda Berenson, instructor of physical culture at the nearby Smith College, went to Naismith to learn more about the game. Fascinated by the new sport and the values it could teach, she started to organize games with her pupils, following adjusted rules. The first official women's interinstitutional game was played barely 11 months later, between the University of California and Miss Head's School. In 1899, a committee was established at the Conference of Physical Training in Springfield to draw up general rules for women's basketball. Thus, the sport quickly spread throughout America's schools, colleges and universities, with uniform rules for both sexes. Basketball was originally played with a soccer ball. These round balls from "association football" were made, at the time, with a set of laces to close off the hole needed for inserting the inflatable bladder after the other sewn-together segments of the ball's cover had been flipped outside-in. These laces could cause bounce passes and dribbling to be unpredictable. Eventually a lace-free ball construction method was invented, and this change to the game was endorsed by Naismith (whereas in American football, the lace construction proved to be advantageous for gripping and remains to this day). The first balls made specifically for basketball were brown, and it was only in the late 1950s that Tony Hinkle, searching for a ball that would be more visible to players and spectators alike, introduced the orange ball that is now in common use. Dribbling was not part of the original game except for the "bounce pass" to teammates. Passing the ball was the primary means of ball movement. Dribbling was eventually introduced but limited by the asymmetric shape of early balls. Dribbling was common by 1896, with a rule against the double dribble by 1898. The peach baskets were used until 1906, when they were finally replaced by metal hoops with backboards. A further change was soon made, so that the ball merely passed through. Whenever a person got the ball in the basket, their team would gain a point. Whichever team got the most points won the game. The baskets were originally nailed to the mezzanine balcony of the playing court, but this proved impractical when spectators in the balcony began to interfere with shots. The backboard was introduced to prevent this interference; it had the additional effect of allowing rebound shots. 
Naismith's handwritten diaries, discovered by his granddaughter in early 2006, indicate that he was nervous about the new game he had invented, which incorporated rules from a children's game called duck on a rock, as many games had failed before it. Frank Mahan, one of the players from the original first game, approached Naismith after the Christmas break, in early 1892, asking him what he intended to call his new game. Naismith replied that he had not thought of it because he had been focused on just getting the game started. Mahan suggested that it be called "Naismith ball", at which he laughed, saying that a name like that would kill any game. Mahan then said, "Why not call it basketball?" Naismith replied, "We have a basket and a ball, and it seems to me that would be a good name for it." The first official game was played in the YMCA gymnasium in Albany, New York, on January 20, 1892, with nine players. The game ended at 1–0; the shot was made from 25 feet (7.6 m), on a court just half the size of a present-day Streetball or National Basketball Association (NBA) court. At the time, soccer was being played with 10 to a team (which was increased to 11). When winter weather got too icy to play soccer, teams were taken indoors, and it was convenient to have them split in half and play basketball with five on each side. By 1897–98, teams of five became standard. College basketball. Basketball's early adherents were dispatched to YMCAs throughout the United States, and it quickly spread through the United States and Canada. By 1895, it was well established at several women's high schools. While the YMCA was responsible for initially developing and spreading the game, within a decade it discouraged the new sport, as rough play and rowdy crowds began to detract from the YMCA's primary mission. However, other amateur sports clubs, colleges, and professional clubs quickly filled the void. In the years before World War I, the Amateur Athletic Union and the Intercollegiate Athletic Association of the United States (forerunner of the NCAA) vied for control over the rules for the game. The first pro league, the National Basketball League, was formed in 1898 to protect players from exploitation and to promote a less rough game. This league only lasted five years. James Naismith was instrumental in establishing college basketball. His colleague C. O. Beamis fielded the first college basketball team just a year after the Springfield YMCA game, at Geneva College in suburban Pittsburgh. Naismith himself later coached at the University of Kansas for six years, before handing the reins to renowned coach Forrest "Phog" Allen. Naismith's disciple Amos Alonzo Stagg brought basketball to the University of Chicago, while Adolph Rupp, a student of Naismith's at Kansas, enjoyed great success as coach at the University of Kentucky. On February 9, 1895, the first intercollegiate 5-on-5 game was played at Hamline University between Hamline and the School of Agriculture, which was affiliated with the University of Minnesota. The School of Agriculture won in a 9–3 game. In 1901, colleges, including the University of Chicago, Columbia University, Cornell University, Dartmouth College, the University of Minnesota, the U.S. Naval Academy, the University of Colorado and Yale University, began sponsoring men's games. In 1905, frequent injuries on the football field prompted President Theodore Roosevelt to suggest that colleges form a governing body, resulting in the creation of the Intercollegiate Athletic Association of the United States (IAAUS). 
In 1910, that body changed its name to the National Collegiate Athletic Association (NCAA). The first Canadian interuniversity basketball game was played at the YMCA in Kingston, Ontario, on February 6, 1904, when McGill University (Naismith's alma mater) visited Queen's University. McGill won 9–7 in overtime; the score was 7–7 at the end of regulation play, and a ten-minute overtime period settled the outcome. A good turnout of spectators watched the game. The first men's national championship tournament, the National Association of Intercollegiate Basketball tournament, which still exists as the National Association of Intercollegiate Athletics (NAIA) tournament, was organized in 1937. The first national championship for NCAA teams, the National Invitation Tournament (NIT) in New York, was organized in 1938; the NCAA national tournament began one year later. College basketball was rocked by gambling scandals from 1948 to 1951, when dozens of players from top teams were implicated in game-fixing and point shaving. Partially spurred by an association with cheating, the NIT lost support to the NCAA tournament. High school basketball. Before widespread school district consolidation, most American high schools were far smaller than their present-day counterparts. During the first decades of the 20th century, basketball quickly became the ideal interscholastic sport due to its modest equipment and personnel requirements. In the days before widespread television coverage of professional and college sports, the popularity of high school basketball was unrivaled in many parts of America. Perhaps the most legendary of high school teams was Indiana's Franklin Wonder Five, which took the nation by storm during the 1920s, dominating Indiana basketball and earning national recognition. Today virtually every high school in the United States fields a basketball team in varsity competition. Basketball's popularity remains high, both in rural areas, where teams often carry the identification of the entire community, and at some larger schools known for their basketball teams, where many players go on to participate at higher levels of competition after graduation. In the 2016–17 season, 980,673 boys and girls represented their schools in interscholastic basketball competition, according to the National Federation of State High School Associations. The states of Illinois, Indiana and Kentucky are particularly well known for their residents' devotion to high school basketball, commonly called Hoosier Hysteria in Indiana; the critically acclaimed film "Hoosiers" shows high school basketball's depth of meaning to these communities. There is currently no tournament to determine a national high school champion. The most serious effort was the National Interscholastic Basketball Tournament, held at the University of Chicago from 1917 to 1930. The event was organized by Amos Alonzo Stagg and sent invitations to state champion teams. The tournament started out as a mostly Midwestern affair but grew; in 1929 it had 29 state champions. Faced with opposition from the National Federation of State High School Associations and the North Central Association of Colleges and Schools, which threatened the participating schools with the loss of their accreditation, the last tournament was held in 1930. The organizations said they were concerned that the tournament was being used to recruit professional players from the prep ranks. The tournament did not invite minority schools or private/parochial schools. 
The National Catholic Interscholastic Basketball Tournament ran from 1924 to 1941 at Loyola University. The National Catholic Invitational Basketball Tournament, held from 1954 to 1978, was played at a series of venues, including Catholic University, Georgetown and George Mason. The National Interscholastic Basketball Tournament for Black High Schools was held from 1929 to 1942 at Hampton Institute. The National Invitational Interscholastic Basketball Tournament was held from 1941 to 1967, starting out at Tuskegee Institute. Following a pause during World War II, it resumed at Tennessee State College in Nashville. The basis for the tournament dwindled after 1954, when "Brown v. Board of Education" began the integration of schools. The last tournaments were held at Alabama State College from 1964 to 1967. Professional basketball. Teams abounded throughout the 1920s. There were hundreds of men's professional basketball teams in towns and cities all over the United States, and little organization of the professional game. Players jumped from team to team, and teams played in armories and smoky dance halls. Leagues came and went. Barnstorming squads such as the Original Celtics and two all-African American teams, the New York Renaissance Five ("Rens") and the (still existing) Harlem Globetrotters, played up to two hundred games a year on their national tours. In 1946, the Basketball Association of America (BAA) was formed. The first game was played in Toronto, Ontario, Canada, between the Toronto Huskies and New York Knickerbockers on November 1, 1946. Three seasons later, in 1949, the BAA merged with the National Basketball League (NBL) to form the National Basketball Association (NBA). By the 1950s, basketball had become a major college sport, thus paving the way for a growth of interest in professional basketball. In 1959, a basketball hall of fame was founded in Springfield, Massachusetts, site of the first game. Its rosters include the names of great players, coaches, referees and other people who have contributed significantly to the development of the game. An upstart organization, the American Basketball Association, emerged in 1967 and briefly threatened the NBA's dominance until the ABA-NBA merger in 1976. Today the NBA is the top professional basketball league in the world in terms of popularity, salaries, talent, and level of competition. The NBA has featured many famous players, including George Mikan, the first dominating "big man"; ball-handling wizard Bob Cousy and defensive genius Bill Russell of the Boston Celtics; charismatic center Wilt Chamberlain, who originally played for the barnstorming Harlem Globetrotters; all-around stars Oscar Robertson and Jerry West; more recent big men Kareem Abdul-Jabbar, Shaquille O'Neal, Hakeem Olajuwon and Karl Malone; playmakers John Stockton, Isiah Thomas and Steve Nash; crowd-pleasing forwards Julius Erving and Charles Barkley; European stars Dirk Nowitzki, Pau Gasol, Nikola Jokić and Tony Parker; Latin American stars such as Manu Ginóbili; more recent superstars Allen Iverson, Kobe Bryant, Tim Duncan, LeBron James, Stephen Curry and Giannis Antetokounmpo; and the three players who many credit with ushering the professional game to its highest level of popularity during the 1980s and 1990s: Larry Bird, Earvin "Magic" Johnson, and Michael Jordan. 
In 2001, the NBA formed a developmental league, the National Basketball Development League (later known as the NBA D-League and then the NBA G League after a branding deal with Gatorade). As of the 2023–24 season, the G League has 31 teams. International basketball. FIBA (International Basketball Federation) was formed in 1932 by eight founding nations: Argentina, Czechoslovakia, Greece, Italy, Latvia, Portugal, Romania and Switzerland. At this time, the organization only oversaw amateur players. Its acronym, derived from the French "Fédération Internationale de Basket-ball Amateur", was thus "FIBA". Men's basketball was first included at the Berlin 1936 Summer Olympics, although a demonstration tournament had been held in 1904. The United States defeated Canada in the first final, played outdoors. This competition has usually been dominated by the United States, whose team has won all but three titles. The first of these losses came in a controversial final game in Munich in 1972 against the Soviet Union, in which the final seconds of the game were replayed three times until the Soviet Union finally came out on top. In 1950, the first FIBA World Championship for men, now known as the FIBA Basketball World Cup, was held in Argentina. Three years later, the first FIBA World Championship for women, now known as the FIBA Women's Basketball World Cup, was held in Chile. Women's basketball was added to the Olympics in 1976, which were held in Montreal, Quebec, Canada, with teams such as the Soviet Union, Brazil and Australia rivaling the American squads. In 1989, FIBA allowed professional NBA players to participate in the Olympics for the first time. Prior to the 1992 Summer Olympics, only European and South American teams were allowed to field professionals in the Olympics. The United States' dominance continued with the introduction of the original Dream Team. In the 2004 Athens Olympics, the United States suffered its first Olympic loss while using professional players, falling to Puerto Rico (in a 19-point loss) and Lithuania in group games, and being eliminated in the semifinals by Argentina. It eventually won the bronze medal, defeating Lithuania, and finished behind Argentina and Italy. The Redeem Team won gold at the 2008 Olympics, and the B-Team won gold at the 2010 FIBA World Championship in Turkey despite featuring no players from the 2008 squad. The United States continued its dominance as it won gold at the 2012 Olympics, the 2014 FIBA World Cup and the 2016 Olympics. Worldwide, basketball tournaments are held for boys and girls of all age levels. The global popularity of the sport is reflected in the nationalities represented in the NBA. Players from all six inhabited continents currently play in the NBA. Top international players began coming into the NBA in the mid-1990s, including Croatians Dražen Petrović and Toni Kukoč, Serbian Vlade Divac, Lithuanians Arvydas Sabonis and Šarūnas Marčiulionis, Dutchman Rik Smits and German Detlef Schrempf. In the Philippines, the Philippine Basketball Association's first game was played on April 9, 1975, at the Araneta Coliseum in Cubao, Quezon City. It was founded as a "rebellion" of several teams from the now-defunct Manila Industrial and Commercial Athletic Association (MICAA), which was tightly controlled by the Basketball Association of the Philippines (now defunct), the then-FIBA-recognized national association. Nine teams from the MICAA participated in the league's first season. 
The NBL is Australia's pre-eminent men's professional basketball league. The league commenced in 1979, playing a winter season (April–September), and did so until the completion of the 20th season in 1998. The 1998–99 season, which commenced only months later, was the first season after the shift to the current summer season format (October–April). This shift was an attempt to avoid competing directly against Australia's various football codes. The league features eight teams from around Australia and one in New Zealand. A few players, including Luc Longley, Andrew Gaze, Shane Heal, Chris Anstey and Andrew Bogut, made it big internationally, becoming poster figures for the sport in Australia. The Women's National Basketball League began in 1981. Women's basketball. Women began to play basketball in the fall of 1892 at Smith College, where Senda Berenson, substitute director of the newly opened gymnasium and physical education teacher, introduced the game after modifying the rules for women. Shortly after Berenson was hired at Smith, she visited Naismith to learn more about the game. Fascinated by the new sport and the values it could teach, she introduced the game as a class exercise, and teams were soon organized. The first women's collegiate basketball game was played on March 21, 1893, when her Smith freshmen and sophomores played against one another. The first official women's interinstitutional game was played later that year between the University of California and the Miss Head's School. In 1899, a committee was established at the Conference of Physical Training in Springfield to draw up general rules for women's basketball. These rules, designed by Berenson, were published in 1899. In 1902, Berenson became the editor of A. G. Spalding's first Women's Basketball Guide. The same year, women of Mount Holyoke and Sophie Newcomb College (coached by Clara Gregory Baer) began playing basketball. By 1895, the game had spread to colleges across the country, including Wellesley, Vassar, and Bryn Mawr. The first intercollegiate women's game was on April 4, 1896. Stanford women played Berkeley, 9-on-9, ending in a 2–1 Stanford victory. Women's basketball development was more structured than that for men in the early years. In 1905, the executive committee on Basket Ball Rules (National Women's Basketball Committee) was created by the American Physical Education Association. These rules called for six to nine players per team and 11 officials. The International Women's Sports Federation (founded 1924) included a women's basketball competition, and 37 women's high school varsity basketball or state tournaments were being held by 1925. In 1926, the Amateur Athletic Union backed the first national women's basketball championship, complete with men's rules. The Edmonton Grads, a touring Canadian women's team based in Edmonton, Alberta, operated between 1915 and 1940. The Grads toured all over North America and were exceptionally successful. They posted a record of 522 wins and only 20 losses over that span, as they met any team that wanted to challenge them, funding their tours from gate receipts. The Grads also shone on several exhibition trips to Europe, and won four consecutive exhibition tournaments at the Olympics, in 1924, 1928, 1932, and 1936; however, women's basketball was not an official Olympic sport until 1976. The Grads' players were unpaid and had to remain single. The Grads' style focused on team play, without overly emphasizing the skills of individual players. The first women's AAU All-America team was chosen in 1929. 
Women's industrial leagues sprang up throughout the United States, producing famous athletes, including Babe Didrikson of the Golden Cyclones, and the All American Red Heads Team, which competed against men's teams using men's rules. By 1938, the women's national championship had changed from a three-court game to a two-court game with six players per team. The NBA-backed Women's National Basketball Association (WNBA) began in 1997. Though the league has had shaky attendance figures, several marquee players (Lisa Leslie, Diana Taurasi, and Candace Parker among others) have helped its popularity and level of competition. Other professional women's basketball leagues in the United States, such as the American Basketball League (1996–98), have folded in part because of the popularity of the WNBA. The WNBA has been looked at by many as a niche league. However, the league has recently taken steps forward. In June 2007, the WNBA signed a contract extension with ESPN. The new television deal ran from 2009 to 2016. Along with this deal came the first-ever rights fees to be paid to a women's professional sports league. Over the eight years of the contract, "millions and millions of dollars" were "dispersed to the league's teams." In a March 12, 2009, article, NBA commissioner David Stern said that in the bad economy, "the NBA is far less profitable than the WNBA. We're losing a lot of money among a large number of teams. We're budgeting the WNBA to break even this year." Rules and regulations. Measurements and time limits discussed in this section often vary among tournaments and organizations; international and NBA rules are used in this section. The object of the game is to outscore one's opponents by throwing the ball through the opponents' basket from above while preventing the opponents from doing so on their own. An attempt to score in this way is called a shot. A successful shot is worth two points, or three points if it is taken from beyond the three-point arc, which is 6.75 metres (22 ft 2 in) from the basket in international games and 23 feet 9 inches (7.24 m) in NBA games. A one-point shot can be earned when shooting from the foul line after a foul is made. After a team has scored from a field goal or free throw, play is resumed with a throw-in awarded to the non-scoring team, taken from a point beyond the endline of the court where the points were scored. Playing regulations. Games are played in four quarters of 10 (FIBA) or 12 minutes (NBA). College men's games use two 20-minute halves, college women's games use 10-minute quarters, and most United States high school varsity games use 8-minute quarters; however, this varies from state to state. 15 minutes are allowed for a half-time break under FIBA, NBA, and NCAA rules, and 10 minutes in United States high schools. Overtime periods are five minutes in length, except in high school, where they are four minutes. Teams exchange baskets for the second half. The time allowed is actual playing time; the clock is stopped while the play is not active. Therefore, games generally take much longer to complete than the allotted game time, typically about two hours. Five players from each team may be on the court at one time. Substitutions are unlimited but can only be done when play is stopped. Teams also have a coach, who oversees the development and strategies of the team, and other team personnel such as assistant coaches, managers, statisticians, doctors and trainers. 
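As a quick illustration of the period structures just described, the following Python sketch computes the playing time on the clock for a game. It is a hypothetical helper, not an official implementation; the rule-set names and table are assumptions drawn from the figures above.

# Illustrative sketch of regulation playing time per rule set, using the
# figures given in the paragraph above. The "rule set" names are labels
# chosen for this example only.

RULESETS = {
    # name: (number_of_periods, minutes_per_period, overtime_minutes)
    "FIBA":           (4, 10, 5),
    "NBA":            (4, 12, 5),
    "NCAA men":       (2, 20, 5),
    "US high school": (4, 8, 4),
}

def playing_minutes(ruleset: str, overtimes: int = 0) -> int:
    """Playing minutes on the game clock (not elapsed real time)."""
    periods, minutes, overtime_minutes = RULESETS[ruleset]
    return periods * minutes + overtimes * overtime_minutes

for name in RULESETS:
    print(f"{name}: {playing_minutes(name)} min regulation, "
          f"{playing_minutes(name, overtimes=1)} min with one overtime")

Note that, as stated above, elapsed real time is far longer, typically about two hours, because the clock stops whenever play is not active.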
For both men's and women's teams, a standard uniform consists of a pair of shorts and a jersey with a clearly visible number, unique within the team, printed on both the front and back. Players wear high-top sneakers that provide extra ankle support. Typically, team names, players' names and, outside of North America, sponsors are printed on the uniforms. A limited number of time-outs, clock stoppages requested by a coach (or sometimes mandated in the NBA) for a short meeting with the players, are allowed. They generally last no longer than one minute (100 seconds in the NBA) unless, for televised games, a commercial break is needed. The game is controlled by the officials, consisting of the referee (referred to as the crew chief in the NBA), one or two umpires (referred to as referees in the NBA) and the table officials. For college, the NBA, and many high schools, there are a total of three referees on the court. The table officials are responsible for keeping track of each team's scoring, timekeeping, individual and team fouls, player substitutions, the team possession arrow, and the shot clock. Equipment. The only essential equipment in a basketball game is the ball and the court: a flat, rectangular surface with baskets at opposite ends. Competitive levels require the use of more equipment such as clocks, score sheets, scoreboards, alternating possession arrows, and whistle-operated stop-clock systems. A regulation basketball court in international games is 28 metres (92 ft) long and 15 metres (49 ft) wide. In the NBA and NCAA the court is 94 by 50 feet (28.7 by 15.2 m). Most courts have wood flooring, usually constructed from maple planks running in the same direction as the longer court dimension. The name and logo of the home team is usually painted on or around the center circle. The basket is a steel rim 18 inches (46 cm) in diameter with an attached net, affixed to a backboard that measures 6 by 3.5 feet (1.8 by 1.1 m); one basket is at each end of the court. The white outlined box on the backboard is 18 inches (46 cm) high and 2 feet (61 cm) wide. At almost all levels of competition, the top of the rim is exactly 10 feet (3.05 m) above the court and 4 feet (1.22 m) inside the baseline. While variation is possible in the dimensions of the court and backboard, it is considered important for the basket to be of the correct height – a rim that is off by just a few inches can have an adverse effect on shooting. The net must "check the ball momentarily as it passes through the basket" to aid the visual confirmation that the ball went through. The act of checking the ball has the further advantage of slowing down the ball so the rebound does not go as far. The size of the basketball is also regulated. For men, the official ball is 29.5 inches (75 cm) in circumference (size 7, or a "295 ball") and weighs 22 ounces (624 g). If women are playing, the official basketball size is 28.5 inches (72 cm) in circumference (size 6, or a "285 ball") with a weight of 20 ounces (567 g). In 3x3, a formalized version of the halfcourt 3-on-3 game, a dedicated ball with the circumference of a size 6 ball but the weight of a size 7 ball is used in all competitions (men's, women's, and mixed teams). Violations. The ball may be advanced toward the basket by being shot, passed between players, thrown, tapped, rolled or dribbled (bouncing the ball while running). The ball must stay within the court; the last team to touch the ball before it travels out of bounds forfeits possession. The ball is out of bounds if it touches a boundary line, or touches any player or object that is out of bounds. There are limits placed on the steps a player may take without dribbling, which commonly results in an infraction known as traveling. 
Nor may a player stop their dribble and then resume dribbling. A dribble that touches both hands is considered stopping the dribble, giving this infraction the name double dribble. Within a dribble, the player cannot carry the ball by placing their hand on the bottom of the ball; doing so is known as carrying the ball. A team, once having established ball control in the front half of their court, may not return the ball to the backcourt and be the first to touch it. A violation of these rules results in loss of possession. The ball may not be kicked, nor be struck with the fist. For the offense, a violation of these rules results in loss of possession; for the defense, most leagues reset the shot clock and the offensive team is given possession of the ball out of bounds. There are limits imposed on the time taken before progressing the ball past halfway (8 seconds in FIBA and the NBA; 10 seconds in NCAA and high school for both sexes), before attempting a shot (24 seconds in FIBA, the NBA, and U Sports (Canadian universities) play for both sexes, and 30 seconds in NCAA play for both sexes), holding the ball while closely guarded (5 seconds), and remaining in the restricted area known as the free-throw lane (or the "key") (3 seconds). These rules are designed to promote more offense. There are also limits on how players may block an opponent's field goal attempt or help a teammate's field goal attempt. Goaltending is a defender's touching of a ball that is on a downward flight toward the basket, while the related violation of basket interference is the touching of a ball that is on the rim or above the basket, or by a player reaching through the basket from below. Goaltending and basket interference committed by a defender result in awarding the basket to the offense, while basket interference committed by an offensive player results in cancelling the basket if one is scored. The defense gains possession in all cases of goaltending or basket interference. Fouls. An attempt to unfairly disadvantage an opponent through certain types of physical contact is illegal and is called a personal foul. These are most commonly committed by defensive players; however, they can be committed by offensive players as well. Players who are fouled either receive the ball to pass inbounds again, or receive one or more free throws if they are fouled in the act of shooting, depending on whether the shot was successful. One point is awarded for making a free throw, which is attempted from a line 15 feet (4.6 m) from the basket. The referee is responsible for judging whether contact is illegal, sometimes resulting in controversy. The calling of fouls can vary between games, leagues and referees. There is a second category of fouls, called technical fouls, which may be charged for various rules violations, including failure to properly record a player in the scorebook or unsportsmanlike conduct. These infractions result in one or two free throws, which may be taken by any of the five players on the court at the time. Repeated incidents can result in disqualification. A blatant foul involving physical contact that is either excessive or unnecessary is called an intentional foul (flagrant foul in the NBA). In FIBA and NCAA women's basketball, a foul resulting in ejection is called a disqualifying foul, while in leagues other than the NBA, such a foul is referred to as flagrant. 
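The free-throw arithmetic just described, and spelled out further in the next paragraph, can be captured in a small sketch. The following Python function is a simplified illustration of the common rule (free throws equal to the value of a missed attempted shot, one extra free throw after a made one); it is not a complete encoding of any league's rulebook and ignores technical and flagrant fouls.

# Simplified sketch of free throws awarded on a shooting foul, following the
# rules summarized above and elaborated in the next paragraph.

def free_throws_awarded(shot_value: int, shot_made: bool) -> int:
    """Free throws for a player fouled in the act of shooting.

    shot_value: 2 for a regular field goal attempt, 3 from beyond the arc.
    """
    if shot_made:
        return 1          # basket counts, plus one free throw (an "and one")
    return shot_value     # missed shot: free throws equal to its value

assert free_throws_awarded(2, shot_made=False) == 2
assert free_throws_awarded(3, shot_made=False) == 3
assert free_throws_awarded(2, shot_made=True) == 1   # a "three-point play"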
If a team exceeds a certain limit of team fouls in a given period (quarter or half) – four for NBA, NCAA women's, and international games – the opposing team is awarded one or two free throws on all subsequent non-shooting fouls for that period, the number depending on the league. In the US college men's game and high school games for both sexes, if a team reaches 7 fouls in a half, the opposing team is awarded one free throw, along with a second shot if the first is made. This is called shooting "one-and-one". If a team exceeds 10 fouls in the half, the opposing team is awarded two free throws on all subsequent fouls for the half. When a team shoots foul shots, the opponents may not interfere with the shooter, nor may they try to regain possession until the last or potentially last free throw is in the air. After a team has committed a specified number of fouls, the other team is said to be "in the bonus". On scoreboards, this is usually signified with an indicator light reading "Bonus" or "Penalty" with an illuminated directional arrow or dot indicating that team is to receive free throws when fouled by the opposing team. (Some scoreboards also indicate the number of fouls committed.) If a team misses the first shot of a two-shot situation, the opposing team must wait for the completion of the second shot before attempting to reclaim possession of the ball and continuing play. If a player is fouled while attempting a shot and the shot is unsuccessful, the player is awarded a number of free throws equal to the value of the attempted shot. A player fouled while attempting a regular two-point shot thus receives two shots, and a player fouled while attempting a three-point shot receives three shots. If a player is fouled while attempting a shot and the shot is successful, typically the player will be awarded one additional free throw for one point. In combination with a regular shot, this is called a "three-point play" or "four-point play" (or more colloquially, an "and one") because of the basket made at the time of the foul (2 or 3 points) and the additional free throw (1 point). Common techniques and practices. Positions. Although the rules do not specify any positions whatsoever, they have evolved as part of basketball. During the early years of basketball's evolution, two guards, two forwards, and one center were used. In more recent times specific positions evolved, but the current trend, advocated by many top coaches including Mike Krzyzewski, is towards positionless basketball, where big players are free to shoot from outside and dribble if their skill allows it. Popular descriptions of positions include: Point guard (often called the "1"): usually the fastest player on the team, organizes the team's offense by controlling the ball and making sure that it gets to the right player at the right time. Shooting guard (the "2"): creates a high volume of shots on offense, mainly long-ranged; and guards the opponent's best perimeter player on defense. Small forward (the "3"): often primarily responsible for scoring points via cuts to the basket and dribble penetration; on defense seeks rebounds and steals, but sometimes plays more actively. Power forward (the "4"): plays offensively often with their back to the basket; on defense, plays under the basket (in a zone defense) or against the opposing power forward (in man-to-man defense). Center (the "5"): uses height and size to score (on offense), to protect the basket closely (on defense), or to rebound. 
The above descriptions are flexible. For most teams today, the shooting guard and small forward have very similar responsibilities and are often called the wings, as are the power forward and center, who are often called post players. While most teams describe two players as guards, two as forwards, and one as a center, on some occasions teams choose to call them by different designations. Strategy. There are two main defensive strategies: "zone defense" and "man-to-man defense". In a zone defense, each player is assigned to guard a specific area of the court. Zone defenses often allow the defense to double team the ball, a maneuver known as a trap. In a man-to-man defense, each defensive player guards a specific opponent. Offensive plays are more varied, normally involving planned passes and movement by players without the ball. A quick movement by an offensive player without the ball to gain an advantageous position is known as a "cut". A legal attempt by an offensive player to stop an opponent from guarding a teammate, by standing in the defender's way such that the teammate cuts next to him, is a "screen" or "pick". The two plays are combined in the "pick and roll", in which a player sets a pick and then "rolls" away from the pick towards the basket. Screens and cuts are very important in offensive plays; they allow the quick passes and teamwork which can lead to a successful basket. Teams almost always have several offensive plays planned to ensure their movement is not predictable. On court, the point guard is usually responsible for indicating which play will occur. Shooting. Shooting is the act of attempting to score points by throwing the ball through the basket, with methods varying by player and situation. Typically, a player squares up, with both feet facing the basket. A player will rest the ball on the fingertips of the dominant hand (the shooting arm) slightly above the head, with the other hand supporting the side of the ball. The ball is usually shot by jumping (though not always) and extending the shooting arm. The shooting arm, fully extended with the wrist fully bent, is held stationary for a moment following the release of the ball, known as a "follow-through". Players often try to put a steady backspin on the ball to absorb its impact with the rim. The ideal trajectory of the shot is somewhat controversial, but generally a proper arc is recommended. Players may shoot directly into the basket or may use the backboard to redirect the ball into the basket. The two most common shots that use the above described setup are the "set shot" and the "jump shot". Both are preceded by a crouching action which preloads the muscles and increases the power of the shot. In a set shot, the shooter straightens up and throws from a standing position with neither foot leaving the floor; this is typically used for free throws. For a jump shot, the throw is taken in mid-air with the ball being released near the top of the jump. This provides much greater power and range, and it also allows the player to elevate over the defender. Failure to release the ball before the feet return to the floor is considered a traveling violation. Another common shot is called the "layup". This shot requires the player to be in motion toward the basket, and to "lay" the ball "up" and into the basket, typically off the backboard (the backboard-free, underhand version is called a "finger roll"). 
The most crowd-pleasing, and typically highest-percentage, shot is the "slam dunk", in which the player jumps very high and throws the ball downward through the basket while touching it. Another shot that is less common than the layup is the "circus shot". The circus shot is a low-percentage shot that is flipped, heaved, scooped, or flung toward the hoop while the shooter is off-balance, airborne, falling down or facing away from the basket. A back-shot is a shot taken when the player is facing away from the basket, and may be shot with the dominant hand or with both hands; such shots have a very low chance of success. A shot that misses both the rim and the backboard completely is referred to as an "air ball". A particularly bad shot, or one that only hits the backboard, is jocularly called a brick. The "hang time" is the length of time a player stays in the air after jumping, whether to make a slam dunk, layup or jump shot. Rebounding. The objective of rebounding is to successfully gain possession of the basketball after a missed field goal or free throw, as it rebounds from the hoop or backboard. This plays a major role in the game, as most possessions end when a team misses a shot. There are two categories of rebounds: offensive rebounds, in which the ball is recovered by the offensive side and does not change possession, and defensive rebounds, in which the defending team gains possession of the loose ball. The majority of rebounds are defensive, as the team on defense tends to be in better position to recover missed shots; for example, about 75% of rebounds in the NBA are defensive. Passing. A pass is a method of moving the ball between players. Most passes are accompanied by a step forward to increase power and are followed through with the hands to ensure accuracy. A staple pass is the "chest pass". The ball is passed directly from the passer's chest to the receiver's chest. A proper chest pass involves an outward snap of the thumbs to add velocity and leaves the defense little time to react. Another type of pass is the "bounce pass". Here, the passer bounces the ball crisply about two-thirds of the way from his own chest to the receiver. The ball strikes the court and bounces up toward the receiver. The bounce pass takes longer to complete than the chest pass, but it is also harder for the opposing team to intercept (kicking the ball deliberately is a violation). Thus, players often use the bounce pass in crowded moments, or to pass around a defender. The "overhead pass" is used to pass the ball over a defender; the ball is released while over the passer's head. The "outlet pass" is the first pass made after a team gets a defensive rebound. The crucial aspect of any good pass is that it is difficult to intercept. Good passers can pass the ball with great accuracy, and they know exactly where each of their teammates prefers to receive the ball. A special way of doing this is passing the ball without looking at the receiving teammate. This is called a "no-look pass". Another advanced style of passing is the "behind-the-back pass", which, as the description implies, involves throwing the ball behind the passer's back to a teammate. Although some players can perform such a pass effectively, many coaches discourage no-look or behind-the-back passes, believing them to be difficult to control and more likely to result in turnovers or violations. Dribbling. 
Dribbling is the act of bouncing the ball continuously with one hand and is a requirement for a player to take steps with the ball. To dribble, a player pushes the ball down towards the ground with the fingertips rather than patting it; this ensures greater control. When dribbling past an opponent, the dribbler should dribble with the hand farthest from the opponent, making it more difficult for the defensive player to get to the ball. It is therefore important for a player to be able to dribble competently with both hands. Good dribblers (or "ball handlers") tend to keep their dribbling hand low to the ground, reducing the distance of travel of the ball from the floor to the hand and making it more difficult for the defender to "steal" the ball. Good ball handlers frequently dribble behind their backs, between their legs, and switch directions suddenly, making a less predictable dribbling pattern that is more difficult to defend against. This is called a crossover, which is the most effective way to move past defenders while dribbling. A skilled player can dribble without watching the ball, using the dribbling motion or peripheral vision to keep track of the ball's location. By not having to focus on the ball, a player can look for teammates or scoring opportunities, as well as avoid the danger of having the ball stolen away. Blocking. A block is performed when, after a shot is attempted, a defender succeeds in altering the shot by touching the ball. In almost all variants of play, it is illegal to touch the ball after it is in the downward path of its arc; this is known as "goaltending". It is also illegal under NBA and men's NCAA rules to block a shot after it has touched the backboard, or when any part of the ball is directly above the rim. Under international rules it is illegal to block a shot that is in the downward path of its arc or one that has touched the backboard until the ball has hit the rim. After the ball hits the rim, it is again legal to touch it, though a touch at that point is no longer counted as a block. To block a shot, a player has to be able to reach a point higher than where the shot is released. Thus, height can be an advantage in blocking. Players who are taller and playing the power forward or center positions generally record more blocks than players who are shorter and playing the guard positions. However, with good timing and a sufficiently high vertical leap, even shorter players can be effective shot blockers. Height. At the professional level, most male players are above 6 feet 3 inches (1.91 m) and most women above 5 feet 7 inches (1.70 m). Guards, for whom physical coordination and ball-handling skills are crucial, tend to be the smallest players. Almost all forwards in the top men's pro leagues are 6 ft 6 in (1.98 m) or taller. Most centers are over 6 feet 10 inches (2.08 m) tall. According to a survey given to all NBA teams, the average height of all NBA players is just under 6 ft 7 in (2.01 m), with the average weight being close to 222 pounds (101 kg). The tallest players ever in the NBA were Manute Bol and Gheorghe Mureșan, who were both 7 ft 7 in (2.31 m) tall. At 7 ft 2 in (2.18 m), Margo Dydek was the tallest player in the history of the WNBA. The shortest player ever to play in the NBA is Muggsy Bogues at 5 ft 3 in (1.60 m). Other average-height or relatively short players have thrived at the pro level, including Anthony "Spud" Webb, who was 5 ft 7 in (1.70 m) tall but had a 42-inch (1.1 m) vertical leap, giving him significant height when jumping, and Temeka Johnson, who won the WNBA Rookie of the Year Award and a championship with the Phoenix Mercury while standing only 5 ft 6 in (1.68 m). 
While shorter players are often at a disadvantage in certain aspects of the game, their ability to navigate quickly through crowded areas of the court and steal the ball by reaching low are strengths. Many prospects exaggerate their height while in high school or college to make themselves more appealing to coaches and scouts, who prefer taller players. Charles Barkley stated: "I've been measured at 6–5, 6–4¾. But I started in college at 6–6." Sam Smith, a former writer for the "Chicago Tribune", said: "We sort of know the heights, because after camp, the sheet comes out. But you use that height, and the player gets mad. And then you hear from his agent. Or you file your story with the right height, and the copy desk changes it because they have the 'official' N.B.A. media guide, which is wrong. So you sort of go along with the joke." Since the 2019–20 NBA season, the heights of NBA players have been recorded definitively by measuring players with their shoes off. Variations and similar games. Variations of basketball are activities based on the game of basketball, using common basketball skills and equipment (primarily the ball and basket). Some variations have only superficial rule changes, while others are distinct games with varying degrees of influence from basketball. Other variations include children's games, contests or activities meant to help players reinforce skills. An earlier version of basketball, played primarily by women and girls, was six-on-six basketball. Horseball is a game played on horseback where a ball is handled and points are scored by shooting it through a high net (approximately 1.5 m × 1.5 m). The sport is like a combination of polo, rugby, and basketball. There is even a form played on donkeys, known as donkey basketball, which has attracted criticism from animal rights groups. Half-court. Perhaps the single most common variation of basketball is the half-court game, played in informal settings without referees or strict rules. Only one basket is used, and the ball must be "taken back" or "cleared" – passed or dribbled outside the three-point line each time possession of the ball changes from one team to the other. Half-court games require less cardiovascular stamina, since players need not run back and forth a full court. Half-court play raises the number of players that can use a court or, conversely, can be played if there is an insufficient number to form full 5-on-5 teams. Half-court basketball is usually played 1-on-1, 2-on-2 or 3-on-3. The last of these variations gradually gained official recognition as 3x3, originally known as FIBA 33. It was first tested at the 2007 Asian Indoor Games in Macau, and the first official tournaments were held at the 2009 Asian Youth Games and the 2010 Youth Olympics, both in Singapore. The first FIBA 3x3 Youth World Championships were held in Rimini, Italy in 2011, with the first FIBA 3x3 World Championships for senior teams following a year later in Athens. 3x3 was eventually added to the Olympic program, making its debut at the 2020 Summer Olympics in Tokyo. In the summer of 2017, the BIG3 basketball league, a professional 3x3 half-court basketball league featuring former NBA players, began play. The BIG3 features several rule variants, including a four-point field goal. Other variations. Variations of basketball with their own page or subsection include: Spin-offs from basketball that are now separate sports include: Social forms of basketball.
Basketball as a social and communal sport features environments, rules and demographics different from those seen in professional and televised basketball. Recreational basketball. Basketball is played widely as an extracurricular, intramural or amateur sport in schools and colleges. Notable institutions of recreational basketball include: Fantasy basketball. Fantasy basketball was popularized during the 1990s by ESPN Fantasy Sports, NBA.com, and Yahoo! Fantasy Sports. On the model of fantasy baseball and football, players create fictional teams, select professional basketball players to "play" on these teams through a mock draft or trades, then calculate points based on the players' real-world performance.
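The point calculation at the heart of fantasy play is simple arithmetic over box-score statistics. A minimal sketch in Python follows, with invented scoring weights; real leagues each define their own rules, so everything below is illustrative only.

SCORING = {"points": 1.0, "rebounds": 1.2, "assists": 1.5,
           "steals": 3.0, "blocks": 3.0, "turnovers": -1.0}

def fantasy_points(stat_line):
    # Weight each real-world statistic and sum; unknown stats score zero.
    return sum(SCORING.get(stat, 0.0) * value
               for stat, value in stat_line.items())

game = {"points": 27, "rebounds": 8, "assists": 6, "turnovers": 3}
print(fantasy_points(game))  # 27.0 + 9.6 + 9.0 - 3.0 = 42.6

A full fantasy league then sums such per-game scores over each fictional team's roster for the scoring period.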
3928
48535834
https://en.wikipedia.org/wiki?curid=3928
Ball
A ball is a round object (usually spherical, but sometimes ovoid) with several uses. It is used in ball games, where the play of the game follows the state of the ball as it is hit, kicked or thrown by players. Balls can also be used for simpler activities, such as catch or juggling. Balls made from hard-wearing materials are used in engineering applications to provide very low friction bearings, known as ball bearings. Black-powder weapons use stone and metal balls as projectiles. Although many types of balls are today made from rubber, this form was unknown outside the Americas until after the voyages of Columbus. The Spanish were the first Europeans to see the bouncing rubber balls (although solid and not inflated) which were employed most notably in the Mesoamerican ballgame. Balls used in various sports in other parts of the world prior to Columbus were made from other materials such as animal bladders or skins, stuffed with various materials. As balls are one of the most familiar spherical objects to humans, the word "ball" may refer to or describe spherical or near-spherical objects. "Ball" is also sometimes used metaphorically to denote something spherical or spheroid, e.g., armadillos and human beings curl up into a ball, and a fist can be made into a ball. Etymology. The first known use of the word "ball" in English in the sense of a globular body that is played with was in 1205 in "Layamon's Brut, or Chronicle of Britain", in a phrase translated as "Some of them drove balls far across the fields." The word came from the Middle English "bal" (inflected as "ball-e, -es"), in turn from Old Norse "böllr" (compare Old Swedish "baller", and Swedish "boll") from Proto-Germanic "ballu-z" (whence probably Middle High German "bal, ball-es", Middle Dutch "bal"), a cognate with Old High German "ballo, pallo", Middle High German "balle" from Proto-Germanic "*ballon" (weak masculine), and Old High German "ballâ, pallâ", Middle High German "balle", Proto-Germanic "*ballôn" (weak feminine). No Old English cognate of any of these is known. (The hypothetical corresponding forms in Old English would have been "beallu, -a, -e"; compare "bealluc, ballock".) If "ball-" was native in Germanic, it may have been a cognate with the Latin "foll-is" in the sense of a "thing blown up or inflated." In the later Middle English spelling "balle" the word coincided graphically with the French "balle" "ball" and "bale", which has hence been erroneously assumed to be its source. French "balle" (but not "boule") is itself assumed to be of Germanic origin, however. In Ancient Greek the word πάλλα ("palla") for "ball" is attested besides the word σφαίρα ("sfaíra"), "sphere". History. Some form of game with a ball is found portrayed on Egyptian monuments. In Homer, Nausicaa was playing at ball with her maidens when Odysseus first saw her in the land of the Phaeacians (Od. vi. 100). And Halios and Laodamas performed before Alcinous and Odysseus with ball play, accompanied with dancing (Od. viii. 370). The most ancient balls in Eurasia have been discovered in Karasahr, China and are 3000 years old. They were made of hair-filled leather. Ancient Greeks. Among the ancient Greeks, games with balls (σφαῖραι) were regarded as a useful subsidiary to the more violent athletic exercises, as a means of keeping the body supple, and rendering it graceful, but were generally left to boys and girls. Of regular rules for the playing of ball games, little trace remains, if there were any such.
The names in Greek for various forms, which have come down to us in such works as the Ὀνομαστικόν of Julius Pollux, imply little or nothing of such rules; thus, ἀπόρραξις ("aporraxis") only means the putting of the ball on the ground with the open hand; οὐρανία ("ourania"), the flinging of the ball in the air to be caught by two or more players; φαινίνδα ("phaininda") would seem to be a game of catch played by two or more, where feinting is used as a test of quickness and skill. Pollux (i. x. 104) mentions a game called episkyros (ἐπίσκυρος), which has often been looked on as the origin of football. It seems to have been played by two sides, arranged in lines; how far there was any form of "goal" seems uncertain. It was impossible to produce a ball that was perfectly spherical; children usually made their own balls by inflating pig's bladders and heating them in the ashes of a fire to make them rounder, although Plato (fl. 420s BC – 340s BC) described "balls which have leather coverings in twelve pieces". Ancient Romans. Among the Romans, ball games were looked upon as an adjunct to the bath, and were graduated to the age and health of the bathers; usually a place (sphaeristerium) was set apart for them in the baths (thermae). There appear to have been three types or sizes of ball: the pila, or small ball, used in catching games; the paganica, a heavy ball stuffed with feathers; and the follis, a leather ball filled with air, the largest of the three. This was struck from player to player, who wore a kind of gauntlet on the arm. There was a game known as trigon, played by three players standing in the form of a triangle and played with the follis, and also one known as harpastum, which seems to imply a "scrimmage" among several players for the ball. These games are known to us through the Romans, though the names are Greek. Modern ball games. The various modern games played with a ball or balls and subject to rules are treated under their various names, such as polo, cricket, football, etc. Physics. In sports, many modern balls are pressurized. Some are pressurized at the factory (e.g. tennis, squash) and others are pressurized by users (e.g. volleyball, basketball, football). Almost all pressurized balls gradually leak air. If the ball is factory pressurized, there is usually a rule about whether the ball retains sufficient pressure to remain playable. Depressurized balls lack bounce and are often termed "dead". In extreme cases, a dead ball becomes flaccid. If the ball is pressurized by its users, there are generally rules about how the ball is pressurized before the match, and when (or whether) the ball can be repressurized or replaced. Due to the ideal gas law, ball pressure is a function of temperature, generally tracking ambient conditions. Softer balls that are struck hard (especially squash balls) increase in temperature due to inelastic collision. By contrast, in certain sports the ball is solid, some of uniform material (e.g. most hockey variations, lacrosse) and others of different layered materials (e.g. baseball, cricket). Finally, some sports use hollow balls (e.g. sepak takraw, pickleball, floorball). In outdoor sports, wet balls play differently than dry balls. In indoor sports, balls may become damp due to hand sweat. Any form of humidity or dampness will affect a ball's surface friction, which will alter a player's ability to impart spin on the ball. The action required to apply spin to a ball is governed by the physics of angular momentum.
Spinning balls travelling through air (technically a fluid) will experience the Magnus effect, which can produce lateral deflections in addition to the normal up-down curvature induced by a combination of wind resistance and gravity.
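The dependence of ball pressure on temperature mentioned above follows from the ideal gas law at roughly constant volume: absolute pressure scales with absolute temperature. A minimal sketch in Python, with illustrative numbers rather than any sport's official values:

def pressure_at(p_gauge_psi, t_from_c, t_to_c, p_atm_psi=14.7):
    # The ideal gas law applies to absolute pressure, so convert from
    # gauge pressure, scale by the ratio of absolute temperatures,
    # then convert back to gauge pressure.
    t_from_k = t_from_c + 273.15
    t_to_k = t_to_c + 273.15
    p_abs = p_gauge_psi + p_atm_psi
    return p_abs * (t_to_k / t_from_k) - p_atm_psi

# A ball inflated to 8 psi indoors at 22 degrees C, taken outside at 5 degrees C:
print(round(pressure_at(8.0, 22.0, 5.0), 2))  # about 6.69 psi

This is why a ball inflated indoors can feel noticeably softer on a cold court.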
3931
35498457
https://en.wikipedia.org/wiki?curid=3931
Binary relation
In mathematics, a binary relation associates some elements of one set called the "domain" with some elements of another set (possibly the same) called the "codomain". Precisely, a binary relation over sets X and Y is a set of ordered pairs (x, y), where x is an element of X and y is an element of Y. It encodes the common concept of relation: an element x is "related" to an element y, if and only if the pair (x, y) belongs to the set of ordered pairs that defines the binary relation. An example of a binary relation is the "divides" relation over the set of prime numbers P and the set of integers ℤ, in which each prime p is related to each integer z that is a multiple of p, but not to an integer that is not a multiple of p. In this relation, for instance, the prime number 2 is related to numbers such as −4, 0, 6, 10, but not to 1 or 9, just as the prime number 3 is related to 0, 6, and 9, but not to 4 or 13. A binary relation is called a homogeneous relation when X = Y. A binary relation is also called a "heterogeneous relation" when it is not necessary that X = Y. Binary relations, and especially homogeneous relations, are used in many branches of mathematics to model a wide variety of concepts, including, among others, orders, graphs, and equivalences. A function may be defined as a binary relation that meets additional constraints. Binary relations are also heavily used in computer science. A binary relation over sets X and Y can be identified with an element of the power set of the Cartesian product X × Y. Since a power set is a lattice for set inclusion (⊆), relations can be manipulated using set operations (union, intersection, and complementation) and the algebra of sets. In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox. A binary relation is the most studied special case n = 2 of an n-ary relation over sets X₁, …, Xₙ, which is a subset of the Cartesian product X₁ × ⋯ × Xₙ. Definition. Given sets X and Y, the Cartesian product X × Y is defined as {(x, y) : x ∈ X and y ∈ Y}, and its elements are called "ordered pairs". A binary relation R over sets X and Y is a subset of X × Y. The set X is called the domain or set of departure of R, and the set Y the codomain or set of destination of R. In order to specify the choices of the sets X and Y, some authors define a binary relation or correspondence as an ordered triple (X, Y, G), where G is a subset of X × Y called the graph of the binary relation. The statement (x, y) ∈ R reads "x is R-related to y" and is denoted by xRy. The domain of definition or active domain of R is the set of all x such that xRy for at least one y. The "codomain of definition", active codomain, image or range of R is the set of all y such that xRy for at least one x. The field of R is the union of its domain of definition and its codomain of definition. When X = Y, a binary relation is called a homogeneous relation (or endorelation).
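As a concrete illustration of these definitions, the sketch below (in Python, with names chosen for illustration) represents the "divides" relation from the introduction as a set of ordered pairs and computes its domain of definition, range, and field:

# The "divides" relation between some primes and a range of integers,
# stored as a set of ordered pairs (p, z) with p dividing z.
P = {2, 3, 5, 7}                      # domain: some primes
Z = set(range(-4, 14))                # codomain: some integers

R = {(p, z) for p in P for z in Z if z % p == 0}

assert (2, 6) in R and (2, 9) not in R    # 2 divides 6 but not 9
assert (3, 9) in R and (3, 13) not in R   # 3 divides 9 but not 13

# Domain of definition, range, and field of R:
domain_of_definition = {p for (p, z) in R}
rng = {z for (p, z) in R}
field = domain_of_definition | rng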
To emphasize the fact that X and Y are allowed to be different, a binary relation is also called a heterogeneous relation. The prefix "hetero" is from the Greek ἕτερος ("heteros", "other, another, different"). A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-like symmetry of a homogeneous relation on a set where X = Y. Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning as heterogeneous or rectangular, i.e. as relations where the normal case is that they are relations between different sets." The terms "correspondence", dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product X × Y without reference to X and Y, and reserve the term "correspondence" for a binary relation with reference to X and Y. In a binary relation, the order of the elements is important; if x ≠ y then yRx can be true or false independently of xRy. For example, 3 divides 9, but 9 does not divide 3. Operations. Union. If R and S are binary relations over sets X and Y then R ∪ S = {(x, y) : xRy or xSy} is the union relation of R and S over X and Y. The identity element is the empty relation. For example, ≤ is the union of < and =, and ≥ is the union of > and =. Intersection. If R and S are binary relations over sets X and Y then R ∩ S = {(x, y) : xRy and xSy} is the intersection relation of R and S over X and Y. The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2". Composition. If R is a binary relation over sets X and Y, and S is a binary relation over sets Y and Z, then S ∘ R = {(x, z) : there exists y ∈ Y such that xRy and ySz} (also denoted by R; S) is the composition relation of R and S over X and Z. The identity element is the identity relation. The order of R and S in the notation S ∘ R used here agrees with the standard notational order for composition of functions. For example, the composition (is parent of); (is mother of) yields (is maternal grandparent of), while the composition (is mother of); (is parent of) yields (is grandmother of). For the former case, if x is the parent of y and y is the mother of z, then x is the maternal grandparent of z. Converse. If R is a binary relation over sets X and Y then Rᵀ = {(y, x) : xRy} is the converse relation, also called inverse relation, of R over Y and X. For example, = is the converse of itself, as is ≠, and < and > are each other's converse, as are ≤ and ≥. A binary relation is equal to its converse if and only if it is symmetric.
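Each of these operations is a one-liner when relations are stored as sets of pairs. A minimal sketch, using toy "parent of" data invented for illustration:

# Toy relations: each is a set of ordered pairs.
R = {("alice", "bob")}      # "is parent of"
S = {("bob", "carol")}      # "is parent of", continued

def compose(R, S):
    # S composed with R: all (x, z) with xRy and ySz for some y.
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    # The converse of R: all (y, x) with xRy.
    return {(y, x) for (x, y) in R}

print(compose(R, S))    # {('alice', 'carol')} - "is grandparent of" here
print(converse(R))      # {('bob', 'alice')} - "is child of"
print(R | S, R & S)     # union and intersection are plain set operations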
Complement. If R is a binary relation over sets X and Y then R̄ = {(x, y) : not xRy} (also denoted by ¬R) is the complementary relation of R over X and Y. For example, = and ≠ are each other's complement, as are ⊆ and ⊈, ⊇ and ⊉, ∈ and ∉, and, for total orders, also < and ≥, and > and ≤. The complement of the converse relation Rᵀ is the converse of the complement: ¬(Rᵀ) = (¬R)ᵀ. If X = Y, the complement has further properties; for example, the complement of a symmetric relation is symmetric. Restriction. If R is a binary homogeneous relation over a set X and S is a subset of X then R|S = {(x, y) : xRy and x ∈ S and y ∈ S} is the restriction relation of R to S over X. If R is a binary relation over sets X and Y and if S is a subset of X then R|S = {(x, y) : xRy and x ∈ S} is the left-restriction relation of R to S over X and Y. If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "x is parent of y" to females yields the relation "x is mother of the woman y"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother. Also, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation ≤ is that every non-empty subset S ⊆ ℝ with an upper bound in ℝ has a least upper bound (also called supremum) in ℝ. However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation ≤ to the rational numbers. A binary relation R over sets X and Y is said to be contained in a relation S over X and Y, written R ⊆ S, if R is a subset of S, that is, for all x ∈ X and y ∈ Y, if xRy, then xSy. If R is contained in S and S is contained in R, then R and S are called equal, written R = S. If R is contained in S but S is not contained in R, then R is said to be smaller than S, written R ⊊ S. For example, on the rational numbers, the relation > is smaller than ≥, and equal to the composition > ∘ >. Matrix representation. Binary relations over sets X and Y can be represented algebraically by logical matrices indexed by X and Y with entries in the Boolean semiring (addition corresponds to OR and multiplication to AND), where matrix addition corresponds to union of relations, matrix multiplication corresponds to composition of relations (of a relation over X and Y and a relation over Y and Z), the Hadamard product corresponds to intersection of relations, the zero matrix corresponds to the empty relation, and the matrix of ones corresponds to the universal relation. Homogeneous relations (when X = Y) form a matrix semiring (indeed, a matrix semialgebra over the Boolean semiring), where the identity matrix corresponds to the identity relation.
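The logical-matrix view can be sketched directly: a relation becomes a 0/1 matrix, and composition becomes Boolean matrix multiplication (OR of ANDs). A small illustration with invented matrices:

# R is a relation over a 2-element X and 3-element Y; S over Y and a 2-element Z.
R = [[1, 0, 1],
     [0, 1, 0]]
S = [[1, 0],
     [0, 1],
     [1, 1]]

def bool_matmul(A, B):
    # Boolean product: entry (i, k) is OR over j of (A[i][j] AND B[j][k]).
    return [[int(any(A[i][j] and B[j][k] for j in range(len(B))))
             for k in range(len(B[0]))]
            for i in range(len(A))]

print(bool_matmul(R, S))  # [[1, 1], [0, 1]] - the composition over X and Z

Elementwise AND of two such matrices likewise gives the intersection, matching the correspondences listed above.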
Types of binary relations. Some important types of binary relations R over sets X and Y concern uniqueness properties (such as a relation being injective or functional), totality properties (such as being total or surjective; these are only definable if the domain X and codomain Y are specified), and combinations of the two (such as being a one-to-one correspondence); further types arise if relations over proper classes are allowed. Sets versus classes. Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of axiomatic set theory. For example, to model the general concept of "equality" as a binary relation =, take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory. In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" set A that contains all the objects of interest, and work with the restriction =_A instead of =. Similarly, the "subset of" relation ⊆ needs to be restricted to have domain and codomain P(A) (the power set of a specific set A): the resulting set relation can be denoted by ⊆_A. Also, the "member of" relation needs to be restricted to have domain A and codomain P(A) to obtain a binary relation ∈_A that is a set. Bertrand Russell has shown that assuming ∈ to be defined over all sets leads to a contradiction in naive set theory, see "Russell's paradox". Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple (X, Y, G), as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.) With this definition one can for instance define a binary relation over every set and its power set. Homogeneous relation. A homogeneous relation over a set X is a binary relation over X and itself, i.e. it is a subset of the Cartesian product X × X. It is also simply called a (binary) relation over X. A homogeneous relation R over a set X may be identified with a directed simple graph permitting loops, where X is the vertex set and R is the edge set (there is an edge from a vertex x to a vertex y if and only if xRy). The set of all homogeneous relations ℬ(X) over a set X is the power set 2^(X×X), which is a Boolean algebra augmented with the involution of mapping of a relation to its converse relation. Considering composition of relations as a binary operation on ℬ(X), it forms a semigroup with involution.
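Treating a homogeneous relation as an edge set makes properties such as those discussed next easy to test mechanically. A small sketch with illustrative data:

X = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2)}   # vertex set X, edge set R

def is_reflexive(R, X):
    # Every element must be related to itself.
    return all((x, x) in R for x in X)

def is_symmetric(R):
    # Every edge must have its reverse.
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    # Whenever xRy and yRw, we need xRw.
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

print(is_reflexive(R, X), is_symmetric(R), is_transitive(R))  # True False True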
Some important properties that a homogeneous relation R over a set X may have include reflexivity, irreflexivity, symmetry, antisymmetry, asymmetry, transitivity, and connectedness. A partial order is a relation that is reflexive, antisymmetric, and transitive. A strict partial order is a relation that is irreflexive, asymmetric, and transitive. A total order is a relation that is reflexive, antisymmetric, transitive and connected. A strict total order is a relation that is irreflexive, asymmetric, transitive and connected. An equivalence relation is a relation that is reflexive, symmetric, and transitive. For example, "x divides y" is a partial, but not a total order on the natural numbers ℕ, "x < y" is a strict total order on ℕ, and "x is parallel to y" is an equivalence relation on the set of all lines in the Euclidean plane. All operations defined in the section Operations also apply to homogeneous relations. Beyond that, a homogeneous relation over a set X may be subjected to closure operations such as the reflexive closure and the transitive closure. Calculus of relations. Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion R ⊆ S, meaning that aRb implies aSb, sets the scene in a lattice of relations. But since R ⊆ S holds exactly when R ∩ S = R, the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according to Schröder rules provides a calculus to work in the power set of X × Y. In contrast to homogeneous relations, the composition of relations operation is only a partial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The objects of the category Rel are sets, and the relation-morphisms compose as required in a category. Induced concept lattice. Binary relations have been described through their induced concept lattices. A concept C ⊂ R satisfies two properties: its logical matrix is the outer product of two logical vectors, and it is maximal, being contained in no other such outer product. For a given relation R ⊆ X × Y, the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion ⊑ forming a preorder. The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices". The decomposition is R = f E gᵀ, where f and g are functions, called mappings or left-total, functional relations in this context. The "induced concept lattice is isomorphic to the cut completion of the partial order E that belongs to the minimal decomposition (f, g, E) of the relation R." Particular cases are considered below: E total order corresponds to Ferrers type, and E identity corresponds to difunctional, a generalization of equivalence relation on a set. Relations may be ranked by the Schein rank, which counts the number of concepts necessary to cover a relation. Structural analysis of relations with concepts provides an approach for data mining. Particular relations. Difunctional. The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set Z of indicators.
The partitioning relation R = F Gᵀ is a composition of relations using functional relations F ⊆ X × Z and G ⊆ Y × Z. Jacques Riguet named these relations difunctional since the composition F Gᵀ involves functional relations, commonly called "partial functions". In 1950 Riguet showed that such relations satisfy the inclusion R Rᵀ R ⊆ R. In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal. More formally, a relation R on X × Y is difunctional if and only if it can be written as the union of Cartesian products Aᵢ × Bᵢ, where the Aᵢ are a partition of a subset of X and the Bᵢ likewise a partition of a subset of Y. Using the notation xR = {y : xRy}, a difunctional relation can also be characterized as a relation R such that wherever x₁R and x₂R have a non-empty intersection, these two sets coincide; formally, x₁R ∩ x₂R ≠ ∅ implies x₁R = x₂R. In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management." Furthermore, difunctional relations are fundamental in the study of bisimulations. In the context of homogeneous relations, a partial equivalence relation is difunctional. Ferrers type. A strict order on a set is a homogeneous relation arising in order theory. In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general. The logical matrix of such a relation has rows which finish with a sequence of ones; thus the dots of a Ferrers diagram are changed to ones and aligned on the right in the matrix. An algebraic statement required for a Ferrers type relation R is R R̄ᵀ R ⊆ R. If any one of the relations R, R̄, Rᵀ is of Ferrers type, then all of them are. Contact. Suppose B is the power set of A, the set of all subsets of A. Then a relation g is a contact relation if it satisfies three defining properties. The set membership relation, ε = "is an element of", satisfies these properties, so ε is a contact relation. The notion of a general contact relation was introduced by Georg Aumann in 1970. In terms of the calculus of relations, sufficient conditions for a contact relation can be expressed using the converse ∋ of set membership (∈). Preorder R\R. Every relation R generates a preorder R\R, which is the left residual. In terms of converse and complements, R\R = ¬(Rᵀ R̄), where ¬ denotes complementation and R̄ = ¬R. Forming the diagonal of Rᵀ R̄, the corresponding row of Rᵀ and column of R̄ will be of opposite logical values, so the diagonal is all zeros. Then I ⊆ R\R, so that R\R is a reflexive relation. To show transitivity, one requires that (R\R)(R\R) ⊆ R\R. Recall that R\R is the largest relation Q such that R Q ⊆ R. Then R(R\R) ⊆ R, and R(R\R)(R\R) ⊆ R (repeat); equivalently Rᵀ R̄ ⊆ ¬((R\R)(R\R)) (Schröder's rule); equivalently (R\R)(R\R) ⊆ ¬(Rᵀ R̄) (complementation); equivalently (R\R)(R\R) ⊆ R\R (definition). The inclusion relation Ω on the power set of U can be obtained in this way from the membership relation ∈ on subsets of U: Ω = ¬(∋ ∉) = ∈\∈. Fringe of a relation.
Given a relation R, its fringe is the sub-relation defined as fringe(R) = R ∩ ¬(R R̄ᵀ R). When R is a partial identity relation, difunctional, or a block diagonal relation, then fringe(R) = R. Otherwise the fringe operator selects a boundary sub-relation described in terms of its logical matrix: fringe(R) is the side diagonal if R is an upper right triangular linear order or strict order; fringe(R) is the block fringe if R is irreflexive (R ⊆ ¬I, with I the identity relation) or upper right block triangular; fringe(R) is a sequence of boundary rectangles when R is of Ferrers type. On the other hand, fringe(R) = ∅ when R is a dense, linear, strict order. Mathematical heaps. Given two sets A and B, the set of binary relations between them, ℬ(A, B), can be equipped with a ternary operation [a, b, c] = a bᵀ c, where bᵀ denotes the converse relation of b. In 1953 Viktor Wagner used properties of this ternary operation to define semiheaps, heaps, and generalized heaps. The contrast of heterogeneous and homogeneous relations is highlighted by these definitions:
3933
1300856293
https://en.wikipedia.org/wiki?curid=3933
Braille
Braille is a tactile writing system used by blind or visually impaired people. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser. For blind readers, braille is an independent writing system, rather than a code of printed orthography. Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era. Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English braille there are three levels: "uncontracted", a letter-by-letter transcription used for basic literacy; "contracted", an addition of abbreviations and contractions used as a space-saving mechanism; and "grade 3", various non-standardized personal stenographies that are less commonly used. In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word space. Dot configurations can be used to represent a letter, digit, punctuation mark, or even a word. Early braille education is crucial to literacy, education and employment among the blind. Despite the evolution of new technologies, including screen reader software that reads information aloud, braille provides blind people with access to spelling, punctuation and other aspects of written language less accessible through audio alone. While some have suggested that audio-based technologies will decrease the need for braille, technological advancements such as braille displays have continued to make braille more accessible and available. Braille users highlight that braille remains as essential as print is to the sighted. History. Braille was based on a tactile code, now known as night writing, developed by Charles Barbier. (The name "night writing" was later given to it when it was considered as a means for soldiers to communicate silently at night and without a light source, but Barbier's writings do not use this term and suggest that it was originally designed as a simpler form of writing and for the visually impaired.) In Barbier's system, sets of 12 embossed dots were used to encode 36 different sounds. Braille identified three major defects of the code: first, the symbols represented phonetic sounds and not letters of the alphabet; thus the code was unable to render the orthography of the words.
Second, the 12-dot symbols could not easily fit beneath the pad of the reading finger. This required the reading finger to move in order to perceive the whole symbol, which slowed the reading process. (This was because Barbier's system was based only on the number of dots in each of two 6-dot columns, not the pattern of the dots.) Third, the code did not include symbols for numerals or punctuation. Braille's solution was to use 6-dot cells and to assign a specific pattern to each letter of the alphabet. Braille also developed symbols for representing numerals and punctuation. At first, braille was a one-to-one transliteration of the French alphabet, but soon various abbreviations (contractions) and even logograms were developed, creating a system much more like shorthand. Today, there are braille codes for over 133 languages. In English, some variations in the braille codes have traditionally existed among English-speaking countries. In 1991, work to standardize the braille codes used in the English-speaking world began. Unified English Braille (UEB) has been adopted in all seven member countries of the International Council on English Braille (ICEB) as well as Nigeria. Derivation. Braille is derived from the Latin alphabet, albeit indirectly. In Braille's original system, the dot patterns were assigned to letters according to their position within the alphabetic order of the French alphabet of the time, with accented letters and "w" sorted at the end. Unlike print, which consists of mostly arbitrary symbols, the braille alphabet follows a logical sequence. The first ten letters of the alphabet, "a"–"j", use only the upper four dot positions. These stand for the ten digits "1"–"9" and "0" in an alphabetic numeral system similar to Greek numerals (as well as derivations of it, including Hebrew numerals, Cyrillic numerals, Abjad numerals, also Hebrew gematria and Greek isopsephy). Though the dots are assigned in no obvious order, the cells with the fewest dots are assigned to the first three letters (and lowest digits), "abc" = "123", and to the three vowels in this part of the alphabet, "aei", whereas the even digits "4", "6", "8", "0" are right angles. The next ten letters, "k"–"t", are identical to "a"–"j" respectively, apart from the addition of a dot at position 3 (the lower left corner of the cell). The next ten letters (the next "decade") are the same again, but with dots at both position 3 and position 6 (the bottom row of the cell). Here "w" was left out, as it was not part of the official French alphabet in Braille's time; the French order of the decade was "u v x y z ç é à è ù". The next ten letters, ending in "w", are the same again, except that for this series position 6 (the lower right corner of the cell) is used without a dot at position 3. In French braille these are the letters "â ê î ô û ë ï ü œ w". "W" had been tacked onto the end of 39 letters of the French alphabet to accommodate English. The "a"–"j" series shifted down by one dot space is used for punctuation. Letters "a" and "c", which only use dots in the top row, were shifted two places for the apostrophe and hyphen. (These are also the decade diacritics of the second and third decades.)
In addition, there are ten patterns that are based on the first two letters with their dots shifted to the right; these were assigned to non-French letters ("ì ä ò"), or serve non-letter functions: superscript (in English the accent mark), currency prefix, capital (in English the decimal point), number sign, emphasis mark, and symbol prefix. The first four decades are similar in that the numeric sequence is extended by adding the decade dots, whereas in the fifth decade it is extended by shifting it downward. Originally there had been nine decades. The fifth through ninth used dashes as well as dots, but they proved to be impractical to distinguish by touch under normal conditions and were soon abandoned. From the beginning, these additional decades could be substituted with what we now know as the number sign applied to the earlier decades, though that only caught on for the digits (the old 5th decade being replaced by the number sign applied to the 1st decade). The dash occupying the top row of the original sixth decade was simply omitted, producing the modern fifth decade. (See 1829 braille.) Assignment. Historically, there have been three principles in assigning the values of a linear script (print) to braille: using Louis Braille's original French letter values; reassigning the braille letters according to the sort order of the print alphabet being transcribed; and reassigning the letters to improve the efficiency of writing in braille. Under international consensus, most braille alphabets follow the French sorting order for the 26 letters of the basic Latin alphabet, and there have been attempts at unifying the letters beyond these 26 (see international braille), though differences remain, for example, in German Braille. This unification avoids the chaos of each nation reordering the braille code to match the sorting order of its print alphabet, as happened in Algerian Braille, where braille codes were numerically reassigned to match the order of the Arabic alphabet and bear little relation to the values used in other countries (compare modern Arabic Braille, which uses the French sorting order), and as happened in an early American version of English Braille, where the letters "w", "x", "y", "z" were reassigned to match English alphabetical order. A convention sometimes seen for letters beyond the basic 26 is to exploit the physical symmetry of braille patterns iconically, for example, by assigning a reversed "n" to "ñ" or an inverted "s" to "sh". (See Hungarian Braille and Bharati Braille, which do this to some extent.) A third principle was to assign braille codes according to frequency, with the simplest patterns (quickest ones to write with a stylus) assigned to the most frequent letters of the alphabet. Such frequency-based alphabets were used in Germany and the United States in the 19th century (see American Braille), but with the invention of the braille typewriter their advantage disappeared, and none are attested in modern use; they had the disadvantage that the resulting small number of dots in a text interfered with following the alignment of the letters, and consequently made texts more difficult to read than Braille's more arbitrary letter assignment. Finally, there are braille scripts that do not order the codes numerically at all, such as Japanese Braille and Korean Braille, which are based on more abstract principles of syllable composition.
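The decade construction described above is regular enough to generate mechanically. A sketch in Python, representing each cell as a set of dot numbers; the a–j patterns are the standard ones, and the derivation rule is the one given in the text:

FIRST_DECADE = {                       # a-j use only dots 1, 2, 4, 5
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}

def decade(extra_dots):
    # Derive a later decade by adding the given lower dots to each of a-j.
    return [dots | extra_dots for dots in FIRST_DECADE.values()]

second = decade({3})       # k-t: a-j plus dot 3
third = decade({3, 6})     # u v x y z and the French accented letters
fourth = decade({6})       # ends in w: j (dots 2-4-5) plus dot 6

assert sorted(fourth[-1]) == [2, 4, 5, 6]   # w is dots 2-4-5-6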
Texts are sometimes written in a script of eight dots per cell rather than six, enabling them to encode a greater number of symbols. (See Gardner–Salinas braille codes.) Luxembourgish Braille has adopted eight-dot cells for general use; for example, accented letters take the unaccented versions plus dot 8. Form. Braille was the first writing system with binary encoding. The system as devised by Braille consists of two parts: a character encoding that maps the characters of the French alphabet to tuples of six bits (the dots), and the physical representation of those characters with raised dots in a braille cell. Within an individual cell, the dot positions are arranged in two columns of three positions. A raised dot can appear in any of the six positions, producing 64 (2⁶) possible patterns, including one in which there are no raised dots. For reference purposes, a pattern is commonly described by listing the positions where dots are raised, the positions being universally numbered, from top to bottom, as 1 to 3 on the left and 4 to 6 on the right. For example, dot pattern 1-3-4 describes a cell with three dots raised, at the top and bottom in the left column and at the top of the right column: that is, the letter "m". The lines of horizontal braille text are separated by a space, much like visible printed text, so that the dots of one line can be differentiated from the braille text above and below. Different assignments of braille codes (or code pages) are used to map the character sets of different printed scripts to the six-bit cells. Braille assignments have also been created for mathematical and musical notation. However, because the six-dot braille cell allows only 64 (2⁶) patterns, including space, the characters of a braille script commonly have multiple values, depending on their context. That is, character mapping between print and braille is not one-to-one. For example, the character ⠙ corresponds in print to both the letter "d" and the digit "4". In addition to simple encoding, many braille alphabets use contractions to reduce the size of braille texts and to increase reading speed. (See Contracted braille.) Writing braille. Braille may be produced by hand using a slate and stylus, in which each dot is created from the back of the page, writing in mirror image, or it may be produced on a braille typewriter or Perkins Brailler, or an electronic Brailler or braille notetaker. Braille users with access to smartphones may also activate the on-screen braille input keyboard, to type braille symbols onto their device by placing their fingers onto the screen according to the dot configuration of the symbols they wish to form. These symbols are automatically translated into print on the screen. The different tools that exist for writing braille allow the braille user to select the method that is best for a given task. For example, the slate and stylus is a portable writing tool, much like the pen and paper for the sighted. Errors can be erased using a braille eraser or can be overwritten with all six dots (⠿). "Interpoint" refers to braille printing that is offset, so that the paper can be embossed on both sides, with the dots on one side appearing between the divots that form the dots on the other. Using a computer or other electronic device, braille may be produced with a braille embosser (printer) or a refreshable braille display (screen). Eight-dot braille. Braille has been extended to an 8-dot code, particularly for use with braille embossers and refreshable braille displays. In 8-dot braille the additional dots are added at the bottom of the cell, giving a matrix 4 dots high by 2 dots wide.
The additional dots are given the numbers 7 (for the lower-left dot) and 8 (for the lower-right dot). Eight-dot braille has the advantages that the casing of each letter is coded in the cell and that every printable ASCII character can be encoded in a single cell. All 256 (2⁸) possible combinations of 8 dots are encoded by the Unicode standard. Braille with six dots is frequently stored as Braille ASCII. Letters. The first 25 braille letters, up through the first half of the 3rd decade, transcribe "a"–"z" (skipping "w"). In English Braille, the rest of that decade is rounded out with the ligatures "and", "for", "of", "the", and "with". Omitting dot 3 from these forms the 4th decade, the ligatures "ch", "gh", "sh", "th", "wh", "ed", "er", "ou", "ow" and the letter "w". Formatting. Various formatting marks affect the values of the letters that follow them; they have no direct equivalent in print. The most important in English Braille are the capital sign and the number sign: the letter "a" preceded by the capital sign (⠠⠁) is read as capital "A", and preceded by the number sign (⠼⠁) as the digit "1". Punctuation. Among the basic punctuation marks in English Braille, a single character serves as both the question mark and the opening quotation mark; its reading depends on whether it occurs before a word or after. A single character is likewise used for both opening and closing parentheses; its placement relative to spaces and other characters determines its interpretation. Punctuation varies from language to language. For example, French Braille uses a different character for its question mark and swaps the quotation marks and parentheses relative to English Braille; it uses one character for both the period and the decimal point, and the character that serves as the English decimal point to mark capitalization. Contractions. Braille contractions are words and affixes that are shortened so that they take up fewer cells. In English Braille, for example, the word "afternoon" is written with just three letters, ⠁⠋⠝ ("afn"), much like stenoscript. There are also several abbreviation marks that create what are effectively logograms. The most common of these is dot 5, which combines with the first letter of words. With the letter "m", the resulting word is "mother". There are also ligatures ("contracted" letters), which are single letters in braille but correspond to more than one letter in print. The letter "and", for example, is used to write words with the sequence "a-n-d" in them, such as "grand". Page dimensions. Most braille embossers support between 34 and 40 cells per line, and 25 lines per page. A manually operated Perkins braille typewriter supports a maximum of 42 cells per line (its margins are adjustable), and typical paper allows 25 lines per page. A large interlining Stainsby has 36 cells per line and 18 lines per page. An A4-sized Marburg braille frame, which allows interpoint braille (dots on both sides of the page, offset so they do not interfere with each other), has 30 cells per line and 27 lines per page. Braille writing machine. A braille writing machine is a typewriter with six keys that allows the user to write braille on a regular hard copy page. The first braille typewriter to gain general acceptance was invented by Frank Haven Hall (Superintendent of the Illinois School for the Blind), and was presented to the public in 1892. The Stainsby Brailler, developed by Henry Stainsby in 1903, is a mechanical writer with a sliding carriage that moves over an aluminium plate as it embosses braille characters. An improved version was introduced around 1933. In 1951 David Abraham, a woodworking teacher at the Perkins School for the Blind, produced a more advanced braille typewriter, the Perkins Brailler.
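The Unicode encoding mentioned above is systematic: the Braille Patterns block begins at U+2800, and dot n (1–8) sets bit n−1 of the code point offset, so any of the 256 dot combinations maps to a single character. A minimal sketch:

def dots_to_char(dots):
    # Dot n (1-8) sets bit n-1; the Braille Patterns block starts at U+2800.
    offset = 0
    for d in dots:
        offset |= 1 << (d - 1)
    return chr(0x2800 + offset)

print(dots_to_char({1, 3, 4}))     # ⠍ - the letter "m" (dots 1-3-4)
print(dots_to_char({3, 4, 5, 6}))  # ⠼ - the number sign
print(dots_to_char(set()))         # the empty cell, used as a word space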
Braille printers or embossers were produced in the 1950s. In 1960 Robert Mann, a teacher at MIT, wrote DOTSYS, software that allowed automatic braille translation, and another group created an embossing device called the "M.I.T. Braillemboss". The Mitre Corporation team of Robert Gildea, Jonathan Millen, Reid Gerhart and Joseph Sullivan (now president of Duxbury Systems) developed DOTSYS III, the first braille translator written in a portable programming language. DOTSYS III was developed for the Atlanta Public Schools as a public domain program. In 1991 Ernest Bate developed the Mountbatten Brailler, an electronic machine used to type braille on braille paper, with a number of additional features such as word processing, audio feedback and embossing. This version was improved in 2008 with a quiet writer that had an erase key. In 2011 David S. Morgan produced the first SMART Brailler machine, with an added text-to-speech function and the ability to digitally capture data entered. Braille reading. Braille is traditionally read in hardcopy form, such as with paper books written in braille, documents produced in paper braille (such as restaurant menus), and braille labels or public signage. It can also be read on a refreshable braille display, either as a stand-alone electronic device or connected to a computer or smartphone. Refreshable braille displays convert what is visually shown on a computer or smartphone screen into braille through a series of pins that rise and fall to form braille symbols. Currently more than 1% of all printed books have been translated into hardcopy braille. The fastest braille readers apply a light touch and read braille with two hands, although reading braille with one hand is also possible. Although the finger can read only one braille character at a time, the brain chunks braille at a higher level, processing words a digraph, root or suffix at a time. The processing largely takes place in the visual cortex. Literacy. Children who are blind miss out on fundamental parts of early and advanced education if not provided with the necessary tools, such as access to educational materials in braille. Children who are blind or visually impaired can begin learning foundational braille skills from a very young age to become fluent braille readers as they get older. Sighted children are naturally exposed to written language on signs, on TV and in the books they see. Blind children require the same early exposure to literacy, through access to braille-rich environments and opportunities to explore the world around them. Print-braille books, for example, present text in both print and braille and can be read by sighted parents to blind children (and vice versa), allowing blind children to develop an early love for reading even before formal reading instruction begins. Adults who experience sight loss later in life, or who did not have the opportunity to learn braille when they were younger, can also learn it. In most cases, adults who learn braille were already literate in print before vision loss, so instruction focuses more on developing the tactile and motor skills needed to read braille. While different countries publish statistics on how many readers in a given organization request braille, these numbers provide only a partial picture of braille literacy: the data does not survey the entire population of braille readers, and does not always include readers who are no longer in the school system (adults) or readers who request electronic braille materials.
Therefore, there are currently no reliable statistics on braille literacy rates, as described in a publication in the "Journal of Visual Impairment and Blindness". Regardless of the precise percentage of braille readers, there is consensus that braille should be provided to all those who benefit from it. Numerous factors influence access to braille literacy, including school budget constraints, technology advancements such as screen-reader software, access to qualified instruction, and different philosophical views over how blind children should be educated. In the US, a key turning point for braille literacy was the passage of the Rehabilitation Act of 1973, an act of Congress that moved thousands of children from specialized schools for the blind into mainstream public schools. Because only a small percentage of public schools could afford to train and hire braille-qualified teachers, braille literacy declined after the law took effect. Literacy rates have since improved slightly, in part because of pressure from consumers and advocacy groups that has led 27 states to pass legislation mandating that children who are legally blind be given the opportunity to learn braille. In 1998 there were 57,425 legally blind students registered in the United States, but only 10% (5,461) of them used braille as their primary reading medium. Early braille education is crucial to literacy for a blind or low-vision child. A study conducted in the state of Washington found that people who learned braille at an early age did just as well as, if not better than, their sighted peers in several areas, including vocabulary and comprehension. In a preliminary study of adults, evaluating the correlation between literacy skills and employment, it was found that 44% of the participants who had learned to read in braille were unemployed, compared to a 77% unemployment rate among those who had learned to read using print. Currently, among the estimated 85,000 blind adults in the United States, 90% of those who are braille-literate are employed. Among adults who do not know braille, only 33% are employed. These figures indicate that braille reading proficiency provides an essential skill set that allows blind or low-vision children to compete with their sighted peers in a school environment and later in life as they enter the workforce. Regardless of the specific percentage of braille readers, proponents point out the importance of increasing access to braille for all those who can benefit from it. Braille transcription. Although it is possible to transcribe print by simply substituting the equivalent braille character for its printed equivalent, in English such a character-by-character transcription (known as "uncontracted braille") is typically used only by beginners or those who engage in short reading tasks (such as reading household labels). Braille characters are much larger than their printed equivalents, and the standard page has room for only 25 lines of 43 characters. To reduce space and increase reading speed, most braille alphabets and orthographies use ligatures, abbreviations, and contractions. Virtually all English braille books in hardcopy (paper) format are transcribed in contracted braille: the Library of Congress's "Instruction Manual for Braille Transcribing" runs to over 300 pages, and braille transcribers must pass certification tests.
Uncontracted braille was previously known as grade 1 braille, and contracted braille was previously known as grade 2 braille. Uncontracted braille is a direct transliteration of print words (one-to-one correspondence); hence, the word "about" would contain all the same letters in uncontracted braille as it does in inkprint. Contracted braille includes short forms to save space; hence, for example, the letters "ab" when standing alone represent the word "about" in English contracted braille. In English, some braille users learn only uncontracted braille, particularly if braille is being used for shorter reading tasks such as reading household labels. However, those who plan to use braille for educational and employment purposes and longer reading texts often go on to contracted braille. The system of contractions in English Braille begins with a set of 23 words contracted to single characters. Thus the word "but" is contracted to the single letter "b", "can" to "c", "do" to "d", and so on. Even this simple rule creates issues requiring special cases; for example, "d" is, specifically, an abbreviation of the verb "do"; the noun "do" representing the note of the musical scale is a different word and must be spelled out. Portions of words may be contracted, and many rules govern this process. For example, the character with dots 2-3-5 (the letter "f" lowered in the braille cell) stands for "ff" when used in the middle of a word. At the beginning of a word, this same character stands for the word "to"; the character is written in braille with no space following it. (This contraction was removed in the Unified English Braille Code.) At the end of a word, the same character represents an exclamation point. Some pairs of contractions are more similar to each other than their print equivalents are. For example, the contraction , meaning "letter", differs from , meaning "little", only by one dot in the second letter: "little", "letter". This causes greater confusion between the braille spellings of these words and can hinder the learning process of contracted braille. The contraction rules take into account the linguistic structure of the word; thus, contractions are generally not to be used when their use would alter the usual braille form of a base word to which a prefix or suffix has been added. Some portions of the transcription rules are not fully codified and rely on the judgment of the transcriber. Thus, when the contraction rules permit the same word to be contracted in more than one way, preference is given to "the contraction that more nearly approximates correct pronunciation". "Grade 3 braille" is a variety of non-standardized systems that include many additional shorthand-like contractions. They are not used for publication, but by individuals for their personal convenience. Braille translation software. When people produce braille, this is called braille transcription; when computer software produces braille, the software is called a braille translator. Braille translation software exists to handle almost all of the common languages of the world and many technical areas, such as mathematics (mathematical notation), for example WIMATS, music (musical notation), and tactile graphics. Braille reading techniques. Since braille is one of the few writing systems where tactile perception is used, as opposed to visual perception, a braille reader must develop new skills. One skill important for braille readers is the ability to apply smooth and even pressure when running one's fingers along the words. 
There are many different styles and techniques used for the understanding and development of braille, though a study by B. F. Holland suggests that no specific technique is superior to any other. Another study, by Lowenfield & Abel, shows that braille can be read "the fastest and best... by students who read using the index fingers of both hands". Another important reading skill emphasized in this study is to finish reading the end of a line with the right hand while simultaneously finding the beginning of the next line with the left hand. International uniformity. When braille was first adapted to languages other than French, many schemes were adopted, including mapping the native alphabet to the alphabetical order of French – e.g. in English W, which was not in the French alphabet at the time, is mapped to braille X, X to Y, Y to Z, and Z to the first French-accented letter – or completely rearranging the alphabet such that common letters are represented by the simplest braille patterns. This state of affairs greatly hindered mutual intelligibility. In 1878, the International Congress on Work for the Blind, held in Paris, proposed an international braille standard, where braille codes for different languages and scripts would be based, not on the order of a particular alphabet, but on phonetic correspondence and transliteration to Latin. This unified braille has been applied to the languages of India and Africa, Arabic, Vietnamese, Hebrew, Russian, and Armenian, as well as nearly all Latin-script languages. In Greek, for example, γ (g) is written as Latin "g", despite the fact that it has the alphabetic position of "c"; Hebrew ב (b), the second letter of the alphabet and cognate with the Latin letter "b", is sometimes pronounced /b/ and sometimes /v/, and is written "b" or "v" accordingly; Russian ц (ts) is written as "c", which is the usual letter for /ts/ in those Slavic languages that use the Latin alphabet; and Arabic ف (f) is written as "f", despite being historically "p" and occurring in that part of the Arabic alphabet (between historic "o" and "q"). Other braille conventions. Besides the simple mapping of a language's alphabetical order onto the original French order, other systems for assigning values to braille patterns are also followed. Some braille alphabets start with unified braille and then diverge significantly based on the phonology of the target language, while others diverge even further. In the various Chinese systems, traditional braille values are used for initial consonants and the simple vowels. In both Mandarin and Cantonese Braille, however, characters have different readings depending on whether they are placed in syllable-initial (onset) or syllable-final (rime) position. For instance, the cell for Latin "k", , represents Cantonese "k" ("g" in Yale and other modern romanizations) when initial, but "aak" when final, while Latin "j", , represents Cantonese initial "j" but final "oei". Novel systems of braille mapping include Korean, which adopts separate syllable-initial and syllable-final forms for its consonants, explicitly grouping braille cells into syllabic groups in the same way as hangul. Japanese, meanwhile, combines independent vowel dot patterns and modifier consonant dot patterns into a single braille cell – an abugida representation of each Japanese mora. Uses. 
Braille is read by people who are blind, deafblind or who have low vision, both those born with a visual impairment and those who experience sight loss later in life. Braille may also be used by print-impaired people who, although fully sighted, are unable to read print due to a physical disability. Even individuals with low vision may find that they benefit from braille, depending on their level of vision or the context (for example, when lighting or colour contrast is poor). Braille is used for both short and long reading tasks. Examples of short reading tasks include braille labels for identifying household items (or cards in a wallet), reading elevator buttons, and accessing phone numbers, recipes, grocery lists and other personal notes. Examples of longer reading tasks include using braille to access educational materials, novels and magazines. People with access to a refreshable braille display can also use braille for reading email and ebooks, browsing the internet and accessing other electronic documents. It is also possible to adapt or purchase playing cards and board games in braille. In India there are instances where parliament acts have been published in braille, such as "The Right to Information Act". Sylheti Braille is used in Northeast India. In Canada, passenger safety information in braille and tactile seat row markers are required aboard planes, trains, large ferries, and interprovincial buses pursuant to the Canadian Transportation Agency's regulations. In the United States, the Americans with Disabilities Act of 1990 requires various building signage to be in braille. In the United Kingdom, medicines are required to have the name of the medicine in braille on the labeling. Currency. The current series of Canadian banknotes has a tactile feature consisting of raised dots that indicate the denomination, allowing bills to be easily identified by blind or low-vision people. It does not use standard braille numbers to identify the value. Instead, the denomination is indicated by the number of full braille cells, which can be counted easily by braille readers and non-braille readers alike. Mexican bank notes, Australian bank notes, Indian rupee notes, Israeli new shekel notes and Russian ruble notes also have special raised symbols to make them identifiable by persons who are blind or have low vision. Euro coins were designed in cooperation with organisations representing blind people, and as a result they incorporate many features allowing them to be distinguished by touch alone. In addition, their visual appearance is designed to make them easy to tell apart for persons who cannot read the inscriptions on the coins. "A good design for the blind and partially sighted is a good design for everybody" was the principle behind the cooperation of the European Central Bank and the European Blind Union during the design phase of the first series of euro banknotes in the 1990s. As a result, the design of the first euro banknotes included several characteristics which help both the blind and partially sighted to use the notes confidently. Australia introduced a tactile feature onto its five-dollar banknote in 2016. In the United Kingdom, the front of the £10 polymer note (the side with raised print) has two clusters of raised dots in the top left hand corner, and the £20 note has three. This tactile feature helps blind and partially sighted people identify the value of the note. 
In 2003 the US Mint introduced the commemorative Alabama State Quarter, which recognized state daughter Helen Keller on the obverse and included the name Helen Keller in both English script and braille. This appears to be the first known use of braille on US coinage, though it is not standard on all coins of this type. Unicode. The braille set was added to the Unicode Standard in version 3.0 (1999). Most braille embossers and refreshable braille displays do not use the Unicode code points, but instead reuse the 8-bit code points that are assigned to standard ASCII for braille ASCII. (Thus, for simple material, the same bitstream may be interpreted equally as visual letter forms for sighted readers or their exact semantic equivalent in tactile patterns for blind readers. However, some codes have quite different tactile versus visual interpretations, and most are not even defined in braille ASCII.) Some embossers have proprietary control codes for 8-dot braille or for full graphics mode, where dots may be placed anywhere on the page without leaving any space between braille cells so that continuous lines can be drawn in diagrams, but these are rarely used and are not standard. The Unicode standard encodes 6-dot and 8-dot braille glyphs according to their binary appearance, rather than following their assigned numeric order. Dot 1 corresponds to the least significant bit of the low byte of the Unicode scalar value, and dot 8 to the high bit of that byte (see the code sketch at the end of this article). The Unicode block for braille is U+2800 ... U+28FF. The mapping of patterns to characters etc. is language dependent: even for English, for example, see American Braille and English Braille. Observation. Every year on 4 January, World Braille Day is observed internationally to commemorate the birth of Louis Braille and to recognize his efforts. Although the event is not considered a public holiday, it has been recognized by the United Nations as an official day of celebration since 2019. The 200th anniversary of the launch of braille was celebrated in 2024. In the UK, the Royal National Institute of Blind People held celebrations from September 2024 until August 2025. Braille devices. There is a variety of contemporary electronic devices that operate in braille and serve the needs of blind people, such as refreshable braille displays and braille e-books, which use different technologies for transmitting graphic information of different types (pictures, maps, graphs, texts, etc.).
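The dot-to-bit layout described in the Unicode section above can be made concrete with a short sketch. The following is a minimal illustration in Python; the function names are our own, and only the block base U+2800 and the bit layout (dot "n" at bit "n" − 1 of the offset) come from the standard itself.

# Sketch: converting between raised-dot numbers (1-8) and Unicode
# braille patterns. Dot n corresponds to bit (n - 1) of the offset
# from the block base U+2800, so dot 1 is the least significant bit
# and dot 8 the highest bit of that byte.
BRAILLE_BASE = 0x2800

def dots_to_char(dots):
    """Return the Unicode braille character with the given raised dots."""
    offset = 0
    for dot in dots:
        if not 1 <= dot <= 8:
            raise ValueError("braille dots are numbered 1 to 8")
        offset |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + offset)

def char_to_dots(ch):
    """Return the sorted list of raised dots in a braille character."""
    offset = ord(ch) - BRAILLE_BASE
    if not 0 <= offset <= 0xFF:
        raise ValueError("not a Unicode braille pattern")
    return [n for n in range(1, 9) if offset & (1 << (n - 1))]

# Dots 1-3-4-5 give offset 1 + 4 + 8 + 16 = 0x1D, i.e. U+281D,
# the cell used for the letter "n" in English Braille.
assert dots_to_char([1, 3, 4, 5]) == "\u281d"
assert char_to_dots("\u281d") == [1, 3, 4, 5]

As the article notes, this mapping fixes only the glyphs; which letter or contraction a given pattern represents still depends on the braille code of the language in question.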
3936
609725
https://en.wikipedia.org/wiki?curid=3936
Bastille Day
Bastille Day is the common name given in English-speaking countries to the national day of France, which is celebrated on 14 July each year. It is referred to, both legally and commonly, as "le 14 juillet" ("the 14th of July") in French, though "la fête nationale" is also used in the press. The French National Day is the anniversary of the Storming of the Bastille on 14 July 1789, a major event of the French Revolution, as well as of the Fête de la Fédération that celebrated the unity of the French people on 14 July 1790. Celebrations are held throughout France. One that has been reported as "the oldest and largest military parade in Europe" is held on 14 July on the Champs-Élysées in Paris in front of the President of France, along with other French officials and foreign guests. History. In 1789, tensions rose in France between reformist and conservative factions as the country struggled to resolve an economic crisis. In May, the Estates General legislative assembly was revived, but members of the Third Estate broke ranks, declaring themselves to be the National Assembly of the country, and on 20 June vowed to write a constitution for the kingdom. On 11 July, Jacques Necker, the finance minister of Louis XVI, who was sympathetic to the Third Estate, was dismissed by the King, provoking an angry reaction among Parisians. Crowds formed, fearful of an attack by the royal army or by foreign regiments of mercenaries in the King's service, and sought to arm themselves. Early on 14 July, a crowd besieged the Hôtel des Invalides for the firearms, muskets, and cannons stored in its cellars. That same day, another crowd stormed the Bastille, a fortress-prison in Paris that had historically held people jailed on the basis of "lettres de cachet" (literally "signet letters"), arbitrary royal indictments that could not be appealed and did not indicate the reason for the imprisonment, and which was believed to hold a cache of ammunition and gunpowder. As it happened, at the time of the attack the Bastille held only seven inmates, none of great political significance. The crowd was eventually reinforced by the mutinous Régiment des Gardes Françaises ("Regiment of French Guards"), whose usual role was to protect public buildings. They proved a fair match for the fort's defenders, and Governor de Launay, the commander of the Bastille, capitulated and opened the gates to avoid a mutual massacre. According to the official documents, about 200 attackers and just one defender died before the capitulation. However, possibly because of a misunderstanding, fighting resumed. In this second round of fighting, de Launay and seven other defenders were killed, as was Jacques de Flesselles, the "prévôt des marchands" ("provost of the merchants"), the elected head of the city's guilds, who under the French monarchy had the responsibilities of a present-day mayor. Shortly after the storming of the Bastille, late in the evening of 4 August, after a very stormy session of the "Assemblée constituante", feudalism was abolished. On 26 August, the Declaration of the Rights of Man and of the Citizen ("Déclaration des Droits de l'Homme et du Citoyen") was proclaimed. "Fête de la Fédération". As early as 1789, the year of the storming of the Bastille, preliminary designs for a national festival were underway. These designs were intended to strengthen the country's national identity through the celebration of the events of 14 July 1789. 
One of the first designs was proposed by Clément Gonchon, a French textile worker, who presented his design for a festival celebrating the anniversary of the storming of the Bastille to the French city administration and the public on 9 December 1789. There were other proposals and unofficial celebrations of 14 July 1789, but the official festival sponsored by the National Assembly was called the Fête de la Fédération. The "Fête de la Fédération" on 14 July 1790 was a celebration of the unity of the French nation during the French Revolution. The aim of this celebration, one year after the storming of the Bastille, was to symbolize peace. The event took place on the Champ de Mars, which at the time was located far outside Paris. The work needed to transform the Champ de Mars into a suitable location for the celebration was behind schedule. On the day recalled as the Journée des brouettes ("The Day of the Wheelbarrow"), thousands of Parisian citizens gathered together to finish the construction needed for the celebration. On the day of the festival, the National Guard assembled and proceeded along the boulevard du Temple in the pouring rain, and were met by an estimated 260,000 Parisian citizens at the Champ de Mars. A mass was celebrated by Talleyrand, bishop of Autun. The popular General Lafayette, as captain of the National Guard of Paris and a confidant of the king, took his oath to the constitution, followed by King Louis XVI. After the end of the official celebration, the day ended in a huge four-day popular feast, and people celebrated with fireworks, as well as fine wine and running nude through the streets in order to display their freedom. Origin of the current celebration. On 30 June 1878, a feast was officially arranged in Paris to honour the French Republic (the event was commemorated in a painting by Claude Monet). On 14 July 1879, there was another feast, with a semi-official aspect. The day's events included a reception in the Chamber of Deputies, organised and presided over by Léon Gambetta, a military review at Longchamp, and a Republican Feast in the Pré Catelan. All throughout France, "Le Figaro" wrote, "people feasted much to honour the storming of the Bastille". In 1880, the government of the Third Republic wanted to revive the 14 July festival. The campaign for the reinstatement of the festival was sponsored by the notable politician Léon Gambetta and the scholar Henri Baudrillant. On 21 May 1880, Benjamin Raspail proposed a law, signed by sixty-four members of government, to have "the Republic adopt 14 July as the day of an annual national festival". There were many disputes over which date should be remembered as the national holiday, including 4 August (the commemoration of the end of the feudal system), 5 May (when the Estates-General first assembled), 27 July (the fall of Robespierre), and 21 January (the date of Louis XVI's execution). The government decided that the date of the holiday would be 14 July, but that was still somewhat problematic. The events of 14 July 1789 were illegal under the previous government, which contradicted the Third Republic's need to establish legal legitimacy. French politicians also did not want the sole foundation of their national holiday to be rooted in a day of bloodshed and class hatred, as the day of the storming of the Bastille was. 
Instead, they based the holiday on both the Fête de la Fédération, the festival of national unity held on 14 July 1790, and the storming of the Bastille of 14 July 1789. The Assembly voted in favor of the proposal on 21 May and 8 June. The law was approved on 27 and 29 June. The celebration was made official on 6 July 1880. In the debate leading up to the adoption of the holiday, Senator Henri Martin, who wrote the National Day law, addressed the chamber on 29 June 1880: Bastille Day military parade. The Bastille Day military parade is the French military parade that has been held in the morning, every year in Paris, since 1880. While previously held elsewhere within or near the capital city, since 1918 it has been held on the Champs-Élysées, with the participation of the Allies as represented in the Versailles Peace Conference, with the exception of the period of German occupation from 1940 to 1944 (when the ceremony took place in London under the command of General Charles de Gaulle) and of 2020, when the COVID-19 pandemic forced its cancellation. The parade passes down the Champs-Élysées from the Arc de Triomphe to the Place de la Concorde, where the President of the French Republic, his government and foreign ambassadors to France stand. This is a popular event in France, broadcast on French TV, and is the oldest and largest regular military parade in Europe. Smaller military parades are held in French garrison towns, including Toulon and Belfort, with local troops. Bastille Day celebrations in other countries. Belgium. Liège has celebrated Bastille Day each year since the end of the First World War, as Liège was decorated by the Légion d'Honneur for its unexpected resistance during the Battle of Liège. The city also hosts a fireworks show outside of Congress Hall. In Liège, celebrations of Bastille Day have been known to be bigger than the celebrations of the Belgian national holiday. Around 35,000 people gather to celebrate Bastille Day. There is a traditional festival dance of the French consul that draws large crowds, and many unofficial events over the city celebrate the relationship between France and the city of Liège. Canada. Vancouver, British Columbia, holds a celebration featuring exhibits, food, and entertainment. The Toronto Bastille Day festival is also celebrated in Toronto, Ontario. The festival is organized by the French-Canadian community in Toronto and sponsored by the Consulate General of France in Toronto. The celebration includes music, performances, sport competitions, and a French market. At the end of the festival, there is also a traditional French . Czech Republic. Since 2008, Prague has hosted a French market "" ("Fourteenth of July Market") offering traditional French food and wine as well as music. The market takes place on Kampa Island, usually between 11 and 14 July. It has also served to mark the handover of the EU presidency from France to the Czech Republic. Traditional selections of French produce, including cheese, wine, meat, bread and pastries, are provided by the market. Throughout the event, live music is played in the evenings, with lanterns lighting up the square at night. Denmark. The amusement park Tivoli celebrates Bastille Day. Hungary. Budapest's two-day celebration is sponsored by the Institut de France. The festival is hosted along the Danube River, with streets filled with music and dancing. 
There are also local markets dedicated to French foods and wine, mixed with some traditional Hungarian specialties. At the end of the celebration, a fireworks show is held on the river banks. India. Bastille Day is celebrated with great festivity in Pondicherry, a former French colony. Ireland. The Embassy of France in Ireland organizes several events around Dublin, Cork and Limerick for Bastille Day, including evenings of French music and tastings of French food. Many members of the French community in Ireland take part in the festivities. Events in Dublin include live entertainment, speciality menus of French cuisine, and screenings of popular French films. New Zealand. The Auckland suburb of Remuera hosts an annual French-themed Bastille Day street festival. Visitors enjoy mimes, dancers and music, as well as French foods and drinks. The budding relationship between the two countries, marked by the establishment of a Maori garden in France and an exchange of their analyses of cave art, resulted in the creation of an official reception at the Residence of France. There is also an event in Wellington for the French community held at the Residence of France. South Africa. Franschhoek's weekend festival has been celebrated since 1993. (Franschhoek, or 'French Corner,' is situated in the Western Cape.) As South Africa's gourmet capital, Franschhoek provides French food, wine and other entertainment throughout the festival. The French Consulate in South Africa also celebrates the national holiday with a party for the French community. Activities also include dressing up in different items of French clothing. French Polynesia. France annexed a large portion of what is now French Polynesia during the colonial era. Under French rule, Tahitians were permitted to participate in sport, singing, and dancing competitions one day a year: Bastille Day. The single day of celebration evolved into the major Heiva i Tahiti festival in Papeete, Tahiti, where traditional events such as canoe races, tattooing, and fire walks are held. The singing and dancing competitions continue, with music composed with traditional instruments such as the nasal flute and ukulele. United Kingdom. Within the UK, London has a large French contingent and celebrates Bastille Day at various locations across the city, including Battersea Park, Camden Town and Kentish Town. Live entertainment is performed at Canary Wharf, with weeklong performances of French theatre at the Lion and Unicorn Theatre in Kentish Town. Restaurants feature cabarets and special menus across the city, and other celebrations include garden parties and sports tournaments. There is also a large event at the Bankside and Borough Market, where there is live music, street performers, and traditional French games. United States. More than 20 cities across the United States conduct annual celebrations of Bastille Day. The different cities celebrate with many French staples such as food, music, games, and sometimes the recreation of famous French landmarks. Baltimore, Maryland, has a large Bastille Day celebration each year at Petit Louis in the Roland Park area of Baltimore. Boston has a celebration annually, hosted by the French Cultural Center for 40 years. The street festival occurs in Boston's Back Bay neighborhood, near the Cultural Center's headquarters. The celebration includes francophone musical performers, dancing, and French cuisine. 
New York City has numerous Bastille Day celebrations each July, including "Bastille Day on 60th Street", hosted by the French Institute Alliance Française between Fifth and Lexington Avenues on the Upper East Side of Manhattan, Bastille Day on Smith Street in Brooklyn, and Bastille Day in Tribeca. There is also the annual Bastille Day Ball, held since 1924. Philadelphia's Bastille Day, held at Eastern State Penitentiary, involved Marie Antoinette throwing locally manufactured Tastykakes at the Parisian militia, as well as a re-enactment of the storming of the Bastille; this Philadelphia tradition ended in 2018. In Newport, Rhode Island, the annual Bastille Day celebration is organized by the local chapter of the Alliance Française. It takes place at King Park in Newport at the monument memorializing the accomplishments of General Comte de Rochambeau, whose 6,000 to 7,000 French troops landed in Newport on 11 July 1780. Their assistance in the defeat of the English in the War of Independence is well documented and is often cited as evidence of the special relationship between France and the United States. In Washington, D.C., food, music, and auction events are sponsored by the Embassy of France. There is also a French festival within the city, where families can meet period entertainment groups set during the time of the French Revolution. Restaurants host parties serving traditional French food. In Dallas, Texas, the Bastille Day celebration, "Bastille On Bishop", began in 2010 and is held annually in the Bishop Arts District of the North Oak Cliff neighborhood, southwest of downtown just across the Trinity River. Dallas' French roots are tied to the short-lived socialist utopian community La Réunion, formed in 1855 and incorporated into the City of Dallas in 1860. Miami's celebration is organized by "French & Famous" in partnership with the French American Chamber of Commerce, the Union des Français de l'Etranger and many French brands. The event gathers over 1,000 attendees to celebrate "La Fête Nationale". The location and theme change every year. In 2017, the theme was "Guinguette Party", which attracted 1,200 francophiles to The River Yacht Club. New Orleans, Louisiana, has multiple celebrations, the largest in the historic French Quarter. In Austin, Texas, the Alliance Française d'Austin usually conducts a family-friendly Bastille Day party at the French Legation, the home of the French representative to the Republic of Texas from 1841 to 1845. Chicago, Illinois, has hosted a variety of Bastille Day celebrations in a number of locations in the city, including Navy Pier and Oz Park. The recent incarnations have been sponsored in part by the Chicago branch of the French-American Chamber of Commerce and by the French Consulate-General in Chicago. Milwaukee's four-day street festival begins with a "Storming of the Bastille" featuring a 43-foot replica of the Eiffel Tower. Minneapolis, Minnesota, has a celebration with wine, French food, pastries, a flea market, circus performers and bands. Also in the Twin Cities area, the local chapter of the Alliance Française has for years hosted an annual event at varying locations, with a competition for the "Best Baguette of the Twin Cities". Montgomery, Ohio, has a celebration with wine, beer, local restaurants' fare, pastries, games and bands. St. Louis, Missouri, has annual festivals in the Soulard neighborhood, the former French village of Carondelet, Missouri, and in the Benton Park neighborhood. 
The Chatillon-DeMenil Mansion in the Benton Park neighborhood holds an annual Bastille Day festival with reenactments of the beheading of Marie Antoinette and Louis XVI, traditional dancing, and artillery demonstrations. Carondelet also began hosting an annual saloon crawl to celebrate Bastille Day in 2017. The Soulard neighborhood in St. Louis, Missouri, celebrates its unique French heritage with special events, including a parade that honors the peasants who rejected the monarchy. The parade includes a 'gathering of the mob,' a walking and golf cart parade, and a mock beheading of the King and Queen. Portland, Oregon, has celebrated Bastille Day with crowds of up to 8,000 at public festivals in various public parks since 2001. The event is coordinated by the Alliance Française of Portland. Seattle's Bastille Day celebration, held at the Seattle Center, involves performances, picnics, wine and shopping. Sacramento, California, conducts annual "waiter races" in the midtown restaurant and shopping district, with a street festival.
3940
7903804
https://en.wikipedia.org/wiki?curid=3940
Blowfish (cipher)
Blowfish is a symmetric-key block cipher, designed in 1993 by Bruce Schneier and included in many cipher suites and encryption products. Blowfish provides a good encryption rate in software, and no effective cryptanalysis of it has been found to date; however, it is recommended that Blowfish not be used to encrypt files larger than 4 GB, for which Twofish should be used instead. Blowfish has a 64-bit block size and is therefore potentially vulnerable to Sweet32 birthday attacks. Schneier designed Blowfish as a general-purpose algorithm, intended as an alternative to the aging DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered by patents, or were commercial or government secrets. Schneier has stated that "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in the public domain, and can be freely used by anyone." Notable features of the design include key-dependent S-boxes and a highly complex key schedule. The algorithm. Blowfish has a 64-bit block size and a variable key length from 32 bits up to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. In structure it resembles CAST-128, which uses fixed S-boxes. The adjacent diagram shows Blowfish's encryption routine. Each line represents 32 bits. There are five subkey arrays: one 18-entry P-array (denoted as K in the diagram, to avoid confusion with the plaintext) and four 256-entry S-boxes (S0, S1, S2 and S3). Every round "r" consists of 4 actions: first, XOR the left half (L) of the data with the "r"-th P-array entry; second, use the XORed data as input to Blowfish's F-function; third, XOR the F-function's output with the right half (R) of the data; and fourth, swap L and R. The F-function splits the 32-bit input into four 8-bit quarters and uses the quarters as input to the S-boxes. The S-boxes accept 8-bit input and produce 32-bit output. The outputs are added modulo 2^32 and XORed to produce the final 32-bit output (see image in the upper right corner). After the 16th round, undo the last swap, and XOR L with K18 and R with K17 (output whitening). Decryption is exactly the same as encryption, except that P1, P2, ..., P18 are used in the reverse order. This is not so obvious because xor is commutative and associative. A common misconception is to use the inverse order of encryption as the decryption algorithm (i.e. first XORing P17 and P18 to the ciphertext block, then using the P-entries in reverse order). Blowfish's key schedule starts by initializing the P-array and S-boxes with values derived from the hexadecimal digits of pi, which contain no obvious pattern (see nothing up my sleeve number). The secret key is then XORed with all the P-entries in order, byte by byte, cycling the key if necessary. A 64-bit all-zero block is then encrypted with the algorithm as it stands. The resultant ciphertext replaces P1 and P2. The same ciphertext is then encrypted again with the new subkeys, and the new ciphertext replaces P3 and P4. This continues, replacing the entire P-array and all the S-box entries. In all, the Blowfish encryption algorithm will run 521 times to generate all the subkeys; about 4 KB of data is processed. Because the P-array is 576 bits long, and the key bytes are XORed through all these 576 bits during the initialization, many implementations support key sizes up to 576 bits. The reason for that is a discrepancy between the original Blowfish description, which uses 448-bit keys, and its reference implementation, which uses 576-bit keys. The test vectors for verifying third-party implementations were also produced with 576-bit keys. 
When asked which Blowfish version is the correct one, Bruce Schneier answered: "The test vectors should be used to determine the one true Blowfish". Another opinion is that the 448-bit limit is present to ensure that every bit of every subkey depends on every bit of the key, as the last four values of the P-array don't affect every bit of the ciphertext. This point should be taken into consideration for implementations with a different number of rounds, as even though it increases security against an exhaustive attack, it weakens the security guaranteed by the algorithm. And given the slow initialization of the cipher with each change of key, it is granted a natural protection against brute-force attacks, which doesn't really justify key sizes longer than 448 bits. Blowfish in pseudocode.

P[18]         // P-array of 18 elements
S[4][256]     // S-boxes: 4 arrays of 256 elements

function f(x):
    // Calculates a function f on a 32-bit input x, using S-boxes and bit manipulation
    high_byte   := (x shifted right by 24 bits)
    second_byte := (x shifted right by 16 bits) AND 0xff
    third_byte  := (x shifted right by 8 bits) AND 0xff
    low_byte    := x AND 0xff
    h := S[0][high_byte] + S[1][second_byte]
    return (h XOR S[2][third_byte]) + S[3][low_byte]

procedure blowfish_encrypt(L, R):
    // Encrypts two 32-bit halves L and R using the P-array and function f over 16 rounds
    for round := 0 to 15:
        L := L XOR P[round]
        R := f(L) XOR R
        swap values of L and R
    swap values of L and R      // undo the last swap
    R := R XOR P[16]
    L := L XOR P[17]

procedure blowfish_decrypt(L, R):
    // Decrypts two 32-bit halves L and R using the P-array and function f over 16 rounds in reverse
    for round := 17 down to 2:
        L := L XOR P[round]
        R := f(L) XOR R
        swap values of L and R
    swap values of L and R      // undo the last swap
    R := R XOR P[1]
    L := L XOR P[0]

// Initializes the P-array and S-boxes using the provided key, followed by key expansion

// Initialize the P-array with the key values
key_position := 0
for i := 0 to 17:
    k := 0
    for j := 0 to 3:
        k := (k shifted left by 8 bits) OR key[key_position]
        key_position := (key_position + 1) mod key_length
    P[i] := P[i] XOR k

// Blowfish key expansion (521 iterations)
L := 0, R := 0
for i := 0 to 17 by 2:
    blowfish_encrypt(L, R)
    P[i] := L
    P[i + 1] := R

// Fill the S-boxes by encrypting L and R
for i := 0 to 3:
    for j := 0 to 255 by 2:
        blowfish_encrypt(L, R)
        S[i][j] := L
        S[i][j + 1] := R

Blowfish in practice. Blowfish is a fast block cipher, except when changing keys. Each new key requires the pre-processing equivalent of encrypting about 4 kilobytes of text, which is very slow compared to other block ciphers. This prevents its use in certain applications, but is not a problem in others. Blowfish must be initialized with a key. It is good practice to have this key hashed with a hash function before use. In one application Blowfish's slow key changing is actually a benefit: the password-hashing method (crypt $2, i.e. bcrypt) used in OpenBSD uses an algorithm derived from Blowfish that makes use of the slow key schedule; the idea is that the extra computational effort required gives protection against dictionary attacks. "See" key stretching. Blowfish has a memory footprint of just over 4 kilobytes of RAM. This constraint is not a problem even for older desktop and laptop computers, though it does prevent use in the smallest embedded systems such as early smartcards. Blowfish was one of the first secure block ciphers not subject to any patents and therefore freely available for anyone to use. This benefit has contributed to its popularity in cryptographic software. 
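To make the "in practice" discussion concrete, here is a minimal usage sketch in Python using the third-party PyCryptodome package; the package choice, variable names, and CBC mode are illustrative assumptions rather than anything this article prescribes, and for the block-size reasons discussed below a modern 128-bit-block cipher such as AES or Twofish is generally preferable.

# Sketch: Blowfish encryption/decryption in CBC mode with PyCryptodome
# (pip install pycryptodome). Keys may be 4 to 56 bytes (32 to 448 bits).
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)     # a 128-bit key, within Blowfish's range
plaintext = b"Attack at dawn"

cipher = Blowfish.new(key, Blowfish.MODE_CBC)
# Blowfish's block size is 8 bytes, so pad the message to a multiple of it.
ciphertext = cipher.encrypt(pad(plaintext, Blowfish.block_size))
iv = cipher.iv                 # random IV chosen by the library

decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=iv)
recovered = unpad(decipher.decrypt(ciphertext), Blowfish.block_size)
assert recovered == plaintext

Note that each call to Blowfish.new pays the expensive key schedule described above, which is exactly the property that bcrypt exploits.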
bcrypt is a password-hashing function which, combined with a variable number of iterations (work "cost"), exploits the expensive key setup phase of Blowfish to increase the workload and duration of hash calculations, further reducing threats from brute-force attacks. bcrypt is also the name of a cross-platform file encryption utility developed in 2002 that implements Blowfish. Weakness and successors. Blowfish's use of a 64-bit block size (as opposed to e.g. AES's 128-bit block size) makes it vulnerable to birthday attacks, particularly in contexts like HTTPS. In 2016, the SWEET32 attack demonstrated how to leverage birthday attacks to perform plaintext recovery (i.e. decrypting ciphertext) against ciphers with a 64-bit block size. The GnuPG project recommends that Blowfish not be used to encrypt files larger than 4 GB due to its small block size. A reduced-round variant of Blowfish is known to be susceptible to known-plaintext attacks on reflectively weak keys. Full Blowfish implementations use 16 rounds of encryption and are not susceptible to this attack. Bruce Schneier has recommended migrating to his Blowfish successor, Twofish. Blowfish2 was released in 2005, developed by Alexander Pukall. It has exactly the same design but has twice as many S-tables and uses 64-bit integers instead of 32-bit integers. It no longer works on 64-bit blocks but on 128-bit blocks like AES. Blowfish2 is used, for example, in FreePascal.
3942
57939
https://en.wikipedia.org/wiki?curid=3942
Bijection
In mathematics, a bijection, bijective function, or one-to-one correspondence is a function between two sets such that each element of the second set (the codomain) is the image of exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set. A function is bijective if it is invertible; that is, a function $f \colon X \to Y$ is bijective if and only if there is a function $g \colon Y \to X$, the "inverse" of $f$, such that each of the two ways of composing the two functions produces an identity function: $g(f(x)) = x$ for each $x$ in $X$ and $f(g(y)) = y$ for each $y$ in $Y$. For example, "multiplication by two" defines a bijection from the integers to the even numbers, which has "division by two" as its inverse function. A function is bijective if and only if it is both injective (or "one-to-one")—meaning that each element in the codomain is mapped from at most one element of the domain—and surjective (or "onto")—meaning that each element of the codomain is mapped from at least one element of the domain. The term "one-to-one correspondence" must not be confused with "one-to-one function", which means injective but not necessarily surjective. The elementary operation of counting establishes a bijection from some finite set to the first natural numbers (1, 2, 3, ...), up to the number of elements in the counted set. It follows that two finite sets have the same number of elements if and only if there exists a bijection between them. More generally, two sets are said to have the same cardinal number if there exists a bijection between them. A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group. Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature. Definition. For a binary relation pairing elements of set "X" with elements of set "Y" to be a bijection, four properties must hold: (1) each element of "X" must be paired with at least one element of "Y"; (2) no element of "X" may be paired with more than one element of "Y"; (3) each element of "Y" must be paired with at least one element of "X"; and (4) no element of "Y" may be paired with more than one element of "X". Satisfying properties (1) and (2) means that a pairing is a function with domain "X". It is more common to see properties (1) and (2) written as a single statement: Every element of "X" is paired with exactly one element of "Y". Functions which satisfy property (3) are said to be "onto "Y" " and are called surjections (or "surjective functions"). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or "injective functions"). With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto". Examples. Batting line-up of a baseball or cricket team. Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The set "X" will be the players on the team (of size nine in the case of baseball) and the set "Y" will be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. 
Property (3) says that for each position in the order, there is some player batting in that position, and property (4) states that two or more players are never batting in the same position in the list. Seats and students of a classroom. In a classroom there are a certain number of seats. A group of students enter the room and the instructor asks them to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that: every student was in a seat (there was no one standing); no student was in more than one seat; every seat had someone sitting there (there were no empty seats); and no seat had more than one student in it. The instructor was able to conclude that there were just as many seats as there were students, without having to count either set. Inverses. A bijection "f" with domain "X" (indicated by "f": "X → Y" in functional notation) also defines a converse relation starting in "Y" and going to "X" (by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not, "in general", yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain "Y". Moreover, properties (1) and (2) then say that this inverse "function" is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection. Stated in concise mathematical notation, a function "f": "X → Y" is bijective if and only if it satisfies the condition: for every "y" in "Y" there is a unique "x" in "X" with "y" = "f"("x"). Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position. Composition. The composition $g \circ f$ of two bijections "f": "X → Y" and "g": "Y → Z" is a bijection, whose inverse is given by $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$. Conversely, if the composition $g \circ f$ of two functions is bijective, it only follows that "f" is injective and "g" is surjective. Cardinality. If "X" and "Y" are finite sets, then there exists a bijection between the two sets "X" and "Y" if and only if "X" and "Y" have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets. Category theory. Bijections are precisely the isomorphisms in the category "Set" of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the category "Grp" of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are "group isomorphisms" which are bijective homomorphisms. Generalization to partial functions. The notion of one-to-one correspondence generalizes to partial functions, where they are called "partial bijections", although partial bijections are only required to be injective. 
The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup. Another way of defining the same notion is to say that a partial bijection from "A" to "B" is any relation "R" (which turns out to be a partial function) with the property that "R" is the graph of a bijection "f": "A′" → "B′", where "A′" is a subset of "A" and "B′" is a subset of "B". When the partial bijection is on the same set, it is sometimes called a "one-to-one partial transformation". An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane. References. This topic is a basic concept in set theory and can be found in any text which includes an introduction to set theory. Almost all texts that deal with an introduction to writing proofs will include a section on set theory, so the topic may be found in any such text.
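For finite sets, the defining properties above can be checked mechanically and the inverse constructed directly. The following is a minimal sketch in Python; the function names and the dictionary representation are illustrative choices, not standard notation.

# Sketch: testing whether a finite function (given as a dict from each
# domain element to its image) is a bijection onto a codomain, and
# building its inverse when it is.

def is_bijection(f, codomain):
    """True iff f is injective and surjective onto the given codomain."""
    images = list(f.values())
    injective = len(set(images)) == len(images)   # no two inputs share an image
    surjective = set(images) == set(codomain)     # every codomain element is hit
    return injective and surjective

def inverse(f, codomain):
    """Return the inverse of a bijection f as a dict, else raise ValueError."""
    if not is_bijection(f, codomain):
        raise ValueError("f is not a bijection onto the given codomain")
    return {y: x for x, y in f.items()}

# "Multiplication by two" from {0, 1, 2, 3} onto the evens {0, 2, 4, 6}
# is a bijection, and its inverse is "division by two".
double = {n: 2 * n for n in range(4)}
assert is_bijection(double, {0, 2, 4, 6})
halve = inverse(double, {0, 2, 4, 6})
assert all(halve[double[n]] == n for n in range(4))   # g(f(x)) = x for all x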
3943
48546149
https://en.wikipedia.org/wiki?curid=3943
Binary function
In mathematics, a binary function (also called bivariate function, or function of two variables) is a function that takes two inputs. Precisely stated, a function $f$ is binary if there exist sets $X$, $Y$, $Z$ such that $f \colon X \times Y \to Z$ where $X \times Y$ is the Cartesian product of $X$ and $Y$. Alternative definitions. Set-theoretically, a binary function can be represented as a subset of the Cartesian product $X \times Y \times Z$, where $(x, y, z)$ belongs to the subset if and only if $f(x, y) = z$. Conversely, a subset $R \subseteq X \times Y \times Z$ defines a binary function if and only if for any $x \in X$ and $y \in Y$, there exists a unique $z \in Z$ such that $(x, y, z)$ belongs to $R$. $f(x, y)$ is then defined to be this $z$. Alternatively, a binary function may be interpreted as simply a function from $X \times Y$ to $Z$. Even when thought of this way, however, one generally writes $f(x, y)$ instead of $f((x, y))$. Examples. Division of whole numbers can be thought of as a function. If $\mathbb{Z}$ is the set of integers, $\mathbb{N}^+$ is the set of natural numbers (except for zero), and $\mathbb{Q}$ is the set of rational numbers, then division is a binary function $f \colon \mathbb{Z} \times \mathbb{N}^+ \to \mathbb{Q}$. In a vector space "V" over a field "F", scalar multiplication is a binary function. A scalar "a" ∈ "F" is combined with a vector "v" ∈ "V" to produce a new vector "av" ∈ "V". Another example is that of inner products, or more generally functions of the form $(x, y) \mapsto x^{\mathsf{T}} A y$, where $x$, $y$ are real-valued vectors of appropriate size and $A$ is a matrix. If $A$ is a positive definite matrix, this yields an inner product. Functions of two real variables. Functions whose domain is a subset of $\mathbb{R}^2$ are often also called functions of two variables, even if their domain does not form a rectangle and is thus not the Cartesian product of two sets. Restrictions to ordinary functions. In turn, one can also derive ordinary functions of one variable from a binary function. Given any element $x \in X$, there is a function $f_x$, or $f(x, \cdot)$, from $Y$ to $Z$, given by $f_x(y) := f(x, y)$. Similarly, given any element $y \in Y$, there is a function $f^y$, or $f(\cdot, y)$, from $X$ to $Z$, given by $f^y(x) := f(x, y)$. In computer science, this identification between a function from $X \times Y$ to $Z$ and a function from $X$ to $Z^Y$, where $Z^Y$ is the set of all functions from $Y$ to $Z$, is called "currying" (see the sketch below). Generalisations. The various concepts relating to functions can also be generalised to binary functions. For example, the division example above is "surjective" (or "onto") because every rational number may be expressed as a quotient of an integer and a natural number. This example is "injective" in each input separately, because the functions $f_x$ and $f^y$ are always injective. However, it is not injective in both variables simultaneously, because (for example) $f(2, 4) = f(1, 2)$. One can also consider "partial" binary functions, which may be defined only for certain values of the inputs. For example, the division example above may also be interpreted as a partial binary function from Z and N to Q, where N is the set of all natural numbers, including zero. But this function is undefined when the second input is zero. A binary operation is a binary function where the sets "X", "Y", and "Z" are all equal; binary operations are often used to define algebraic structures. 
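The currying identification described above is easy to demonstrate in code. Here is a minimal sketch in Python; the helper names are illustrative, and the division example mirrors the binary function discussed earlier.

# Sketch: currying a binary function f: X x Y -> Z into a function
# X -> (Y -> Z), and deriving the one-variable restrictions f_x and f^y.
from fractions import Fraction

def curry(f):
    """Turn a two-argument function into a chain of one-argument functions."""
    def curried(x):
        return lambda y: f(x, y)
    return curried

def divide(x, y):
    # The division example: a binary function from Z x (N \ {0}) to Q.
    return Fraction(x, y)

curried_divide = curry(divide)
assert divide(3, 4) == Fraction(3, 4)
assert curried_divide(3)(4) == Fraction(3, 4)     # same value via currying

# f_x fixes the first argument: with x = 1 this is the map y -> 1/y.
f_1 = curried_divide(1)
assert f_1(5) == Fraction(1, 5)

# The non-injectivity in both variables noted above: f(2, 4) = f(1, 2).
assert divide(2, 4) == divide(1, 2)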
In linear algebra, a bilinear transformation is a binary function where the sets "X", "Y", and "Z" are all vector spaces and the derived functions $f_x$ and $f^y$ are all linear transformations. A bilinear transformation, like any binary function, can be interpreted as a function from "X" × "Y" to "Z", but this function in general won't be linear. However, the bilinear transformation can also be interpreted as a single linear transformation from the tensor product $X \otimes Y$ to "Z". Generalisations to ternary and other functions. The concept of binary function generalises to "ternary" (or "3-ary") "function", "quaternary" (or "4-ary") "function", or more generally to "n-ary function" for any natural number "n". A "0-ary function" to "Z" is simply given by an element of "Z". One can also define an "A-ary function" where "A" is any set; there is one input for each element of "A". Category theory. In category theory, "n"-ary functions generalise to "n"-ary morphisms in a multicategory. The interpretation of an "n"-ary morphism as an ordinary morphism whose domain is some sort of product of the domains of the original "n"-ary morphism will work in a monoidal category. The construction of the derived morphisms of one variable will work in a closed monoidal category. The category of sets is closed monoidal, but so is the category of vector spaces, giving the notion of bilinear transformation above.
3947
6060594
https://en.wikipedia.org/wiki?curid=3947
Blue Velvet (film)
Blue Velvet is a 1986 American neo-noir mystery thriller film written and directed by David Lynch. Blending psychological horror with film noir, the film stars Kyle MacLachlan, Isabella Rossellini, Dennis Hopper, and Laura Dern, and is named after the 1951 song of the same name. The film follows a college student who returns to his hometown and discovers a severed human ear in a field, which leads him to uncover a criminal conspiracy involving a troubled nightclub singer. The screenplay of "Blue Velvet" had been passed around multiple times in the late 1970s and early 1980s, with several major studios declining it due to its strong sexual and violent content. After the failure of his 1984 film "Dune", Lynch made attempts at developing a more "personal story", somewhat characteristic of the surrealist style displayed in his first film "Eraserhead" (1977). The independent studio De Laurentiis Entertainment Group, owned at the time by Italian film producer Dino De Laurentiis, agreed to finance and produce the film. "Blue Velvet" initially received a divided critical response, with many stating that its explicit content served little artistic purpose. Nevertheless, the film earned Lynch his second nomination for the Academy Award for Best Director, and received the year's Best Film and Best Director prizes from the National Society of Film Critics. It came to achieve cult status. As an example of a director casting against the norm, it was credited with revitalizing Hopper's career and with providing Rossellini with a dramatic outlet beyond her previous work as a fashion model and a cosmetics spokeswoman. In the years since, the film has been re-evaluated, and it is now widely regarded as one of Lynch's major works and one of the greatest films of the 1980s. Publications including "Sight & Sound", "Time", "Entertainment Weekly" and "BBC Magazine" have ranked it among the greatest American films of all time. In 2008, it was chosen by the American Film Institute as one of the ten greatest American mystery films. Plot. College student Jeffrey Beaumont returns to his suburban hometown of Lumberton, North Carolina, after his father, Tom, has a near-fatal attack from a medical condition. Walking home from the hospital, Jeffrey cuts through a vacant lot and discovers a severed human ear, which he takes to police detective John Williams. Williams's daughter Sandy tells Jeffrey that the ear somehow relates to a lounge singer named Dorothy Vallens. Intrigued, Jeffrey enters her apartment by posing as an exterminator. While there, he steals a spare key as she is distracted by a man in a distinctive yellow sport coat, whom Jeffrey nicknames the "Yellow Man". Jeffrey and Sandy attend Dorothy's nightclub act, in which she sings "Blue Velvet", and leave early so Jeffrey can break into her apartment. Dorothy returns home and undresses; she finds Jeffrey hiding in a closet and forces him to strip at knifepoint, but he retreats to the closet when Frank Booth, a psychopathic gangster and drug lord, arrives and interrupts their encounter. Frank beats and rapes Dorothy while inhaling gas from a canister, alternating between fits of sobbing and violent rage. After Frank leaves, Jeffrey sneaks away and seeks comfort from Sandy. Surmising that Frank has abducted Dorothy's husband Don and son Donnie to force her into sexual slavery, Jeffrey suspects that Frank cut off Don's ear to intimidate her into submission. 
While continuing to see Sandy, Jeffrey enters into a sadomasochistic relationship with Dorothy, in which she encourages him to hit her. Jeffrey sees Frank attending Dorothy's show and later observes him selling drugs and meeting with the Yellow Man. Jeffrey then sees the Yellow Man meeting with a "well-dressed man". When Frank catches Jeffrey leaving Dorothy's apartment, he abducts them and takes them to the lair of Ben, a criminal associate holding Don and Donnie hostage. Frank permits Dorothy to see her family and forces Jeffrey to watch Ben perform an impromptu lip-sync of Roy Orbison's "In Dreams", which moves Frank to tears. Afterwards, he and his gang take Jeffrey and Dorothy on a high-speed joyride to a sawmill yard, where he again attempts to sexually abuse Dorothy. When Jeffrey intervenes and punches him in the face, an enraged Frank and his gang pull him out of the car. Replaying the tape of "In Dreams", Frank smears lipstick on his face and violently kisses Jeffrey. Frank then has Jeffrey restrained and beats him unconscious, while Dorothy pleads for Frank to stop. Jeffrey awakens the next morning, bruised and bloodied. While visiting the police station, Jeffrey discovers that the Yellow Man is Detective Williams's partner Tom Gordon, who has been murdering Frank's rival drug dealers and stealing confiscated narcotics from the evidence room for Frank to sell. After Jeffrey and Sandy declare their love for each other at a party, they are pursued by a car which they assume belongs to Frank; as they arrive at Jeffrey's home, Sandy realizes the driver is her ex-boyfriend, Mike. After Mike threatens to beat Jeffrey for stealing his girlfriend, Dorothy appears on Jeffrey's porch naked, beaten, and confused. Mike backs down as Jeffrey and Sandy whisk Dorothy to Sandy's house to summon medical attention. When Dorothy calls Jeffrey "my secret lover", a distraught Sandy slaps him for cheating on her. Jeffrey asks Sandy to tell her father everything, and Detective Williams then leads a police raid on Frank's headquarters, killing Frank's men. Jeffrey returns alone to Dorothy's apartment, where he discovers Don dead and Gordon mortally wounded. As Jeffrey leaves the apartment, the "Well-Dressed Man" arrives, sees Jeffrey in the stairs, and chases him back inside. Jeffrey uses Gordon's walkie-talkie to say he is in the bedroom before hiding in the closet. The "Well-Dressed Man" arrives at the apartment, and Jeffrey observes that he is actually Frank in disguise. Jeffrey kills Frank with Gordon's gun when Frank opens the closet door. Moments later, Sandy and Detective Williams arrive. Some time later, Jeffrey and Sandy have continued their relationship, Tom Beaumont has recovered, and Dorothy has been reunited with her son. Production. Origin. The film's story originated from three ideas that crystallized in the filmmaker's mind over a period of time starting as early as 1973. The first idea was only "a feeling" and the title, as Lynch told "Cineaste" in 1987. The second idea was an image of a severed human ear lying in a field. "I don't know why it had to be an ear. Except it needed to be an opening of a part of the body, a hole into something else ... The ear sits on the head and goes right into the mind so it felt perfect," Lynch remarked in a 1986 interview with "The New York Times". The third idea was Bobby Vinton's rendition of "Blue Velvet" and "the mood that came with that song – a mood, a time, and things that were of that time." 
The scene in which Dorothy appears naked outside was inspired by a real-life experience Lynch had during childhood, when he and his brother saw a naked woman walking down a neighborhood street at night. The experience was so traumatic to the young Lynch that it made him cry, and he never forgot it. After completing "The Elephant Man" (1980), Lynch met producer Richard Roth over coffee. Roth had read and enjoyed Lynch's "Ronnie Rocket" script, but did not think it was something he wanted to produce. He asked Lynch if the filmmaker had any other scripts, but the director only had ideas. "I told him I had always wanted to sneak into a girl's room to watch her into the night and that, maybe, at one point or another, I would see something that would be the clue to a murder mystery. Roth loved the idea and asked me to write a treatment. I went home and thought of the ear in the field." Production was announced in August 1984. Lynch wrote two more drafts before he was satisfied with the script of the film. The problem with them, Lynch has said, was that "there was maybe all the unpleasantness in the film but nothing else. A lot was not there. And so it went away for a while." Conditions at this point were ideal for Lynch's film: he had made a deal with Dino De Laurentiis that gave him complete artistic freedom and final cut privileges, with the stipulation that the filmmaker take a cut in his salary and work with a budget of only $6 million. This deal meant that "Blue Velvet" was the smallest film on De Laurentiis's slate. Consequently, Lynch would be left mostly unsupervised during production. "After "Dune" I was down so far that anything was up! So it was just a euphoria. And when you work with that kind of feeling, you can take chances. You can experiment." Casting. The cast of "Blue Velvet" included several then-relatively unknown actors. Lynch met Isabella Rossellini at a restaurant and offered her the role of Dorothy Vallens; Helen Mirren had been his first choice for the part. Rossellini had gained some exposure before the film for her Lancôme ads in the early 1980s and for being the daughter of actress Ingrid Bergman and director Roberto Rossellini. After completion of the film, during test screenings, ICM Partners—the agency representing Rossellini—immediately dropped her as a client. Furthermore, the nuns at the school in Rome that Rossellini had attended in her youth called to say they were praying for her. Kyle MacLachlan had played the central role in Lynch's critical and commercial failure "Dune" (1984), a science fiction epic based on the novel of the same name. MacLachlan later became a recurring collaborator with Lynch, who remarked: "Kyle plays innocents who are interested in the mysteries of life. He's the person you trust enough to go into a strange world with." Val Kilmer was offered a role in the film, but he turned it down as he felt it was too "graphic" for him, a decision he later regretted. Brad Dourif and Dean Stockwell also rejoined Lynch from "Dune". Dennis Hopper was the best-known actor in the film, having directed and starred in "Easy Rider" (1969). Hopper—said to be Lynch's third choice (Michael Ironside has stated that Frank was written with him in mind)—accepted the role, reportedly having exclaimed, "I've got to play Frank! I am Frank!" Harry Dean Stanton and Steven Berkoff both turned down the role of Frank because of the violent content in the film. 
Laura Dern, then 18 years old, was cast as Sandy after several already-successful actresses turned the role down, among them Molly Ringwald. Shooting. Principal photography of "Blue Velvet" began in August 1985 and was completed in November. The film was shot at EUE/Screen Gems studio in Wilmington, North Carolina, which also provided the exterior scenes of Lumberton. The scene with a raped and battered Dorothy proved particularly challenging. Several townspeople arrived with picnic baskets and rugs to watch the filming, against the wishes of Rossellini and Lynch. The production nevertheless continued filming as normal, and when Lynch yelled cut, the townspeople left. As a result, police told Lynch that the production was no longer permitted to shoot in any public areas of Wilmington. The Carolina Apartments in downtown Wilmington served as Dorothy's apartment building, with the adjacent Kenan fountain featured prominently in many shots. The building is also where the noted artist Claude Howell was born and died. The apartment building stands today, and the Kenan fountain was refurbished in 2020 after sustaining heavy damage during Hurricane Florence. Editing. Lynch's original rough cut ran for approximately four hours. He was contractually obligated by De Laurentiis to deliver a two-hour movie, and cut many small subplots and character scenes. He also made cuts at the request of the MPAA. For example, when Frank slaps Dorothy after the first rape scene, the audience was supposed to see Frank actually hitting her. Instead, the film cuts away to Jeffrey in the closet, wincing at what he has just seen. This cut was made to satisfy the MPAA's concerns about violence, though Lynch thought that the change made the scene more disturbing. In 2011, Lynch announced that footage from the deleted scenes, long thought lost, had been discovered. The material was subsequently included on the Blu-ray Disc release of the film. Among the deleted footage was Megan Mullally as Jeffrey's college sweetheart Louise Wertham, whose entire role was cut from the theatrical release. The final cut of the film runs at just over two hours. Distribution. Because the material was completely different from anything that would be considered mainstream at the time, De Laurentiis Entertainment Group's marketing employees were unsure of how to promote the film, or even whether it would be promoted at all; it was not until the film's positive reception at various film festivals that they began to promote it. Interpretations. Despite "Blue Velvet"'s initial appearance as a mystery, the film operates on a number of thematic levels. The film owes a large debt to 1950s film noir, containing and exploring such conventions as the femme fatale (Dorothy Vallens), a seemingly unstoppable villain (Frank Booth) and the questionable moral outlook of the hero (Jeffrey Beaumont), as well as its unusual use of shadowy, sometimes dark cinematography. "Blue Velvet" establishes Lynch's famous "askew vision" and introduces several common elements of his work, some of which would later become his trademarks, including distorted characters, a polarized world and debilitating damage to the skull or brain. Perhaps the most significant Lynchian trademark in the film is the unearthing of a dark underbelly in a seemingly idealized small town; Jeffrey even proclaims in the film that he is "seeing something that was always hidden". 
Lynch's characteristic symbols and motifs have become well known, and his particular style, displayed at length for the first time in "Blue Velvet", has been written about extensively using descriptions like "dreamlike", "ultraweird", "dark", and "oddball". Red curtains also appear in key scenes, specifically in Dorothy's apartment and at the night club where she sings, and have since become a Lynch trademark. The film has been compared to Alfred Hitchcock's "Psycho" (1960) because of its stark treatment of evil and mental illness. The premise of both films is curiosity, leading to an investigation that draws the lead characters into a hidden, voyeuristic underworld of crime. The film's thematic framework harks back to Edgar Allan Poe, Henry James and early gothic fiction, as well as films such as "Shadow of a Doubt" (1943) and "The Night of the Hunter" (1955) and the entire notion of film noir. Lynch has called it a "film about things that are hidden - within a small city and within people." Feminist psychoanalytic film theorist Laura Mulvey argues that "Blue Velvet" establishes a metaphorical Oedipal family - "the child", Jeffrey Beaumont, and his "parents", Frank Booth and Dorothy Vallens - in part through deliberate references to film noir. Michael Atkinson claims that the resulting violence in the film can be read as symbolic of domestic violence within real families. He reads Jeffrey as an innocent youth who is both horrified by the violence inflicted by Frank and tempted by it as the means of possessing Dorothy for himself. Atkinson takes a Freudian approach to the film, considering it to be an expression of the traumatised innocence which characterises Lynch's work. He states, "Dorothy represents the sexual force of the mother [figure] because she is forbidden and because she becomes the object of the unhealthy, infantile impulses at work in Jeffrey's subconscious." Symbolism. Symbolism is used heavily in "Blue Velvet". The most consistent symbolism in the film is an insect motif introduced at the end of the first scene, when the camera zooms in on a well-kept suburban lawn until it unearths a swarming underground nest of bugs. This is generally recognized as a metaphor for the seedy underworld that Jeffrey will soon discover under the surface of his own suburban paradise. The severed ear he finds is being overrun by black ants. The bug motif is recurrent throughout the film, most notably in the bug-like gas mask that Frank wears and Jeffrey's exterminator disguise. One of Frank's accomplices is also consistently identified through the yellow jacket he wears, possibly referencing the wasp of the same name. Finally, a robin eating a bug on a fence becomes a topic of discussion in the last scene of the film. The severed ear that Jeffrey discovers is also a key symbolic element, leading Jeffrey into danger. Indeed, just as Jeffrey's troubles begin, the audience is treated to a nightmarish sequence in which the camera zooms into the canal of the severed, decomposing ear. After the danger has subsided at the end of the film, the ear-canal closeup is repeated, this time zooming out from a healthy, clean ear. Soundtrack. The "Blue Velvet" soundtrack was supervised by Angelo Badalamenti (who makes a brief cameo appearance as the pianist at the Slow Club where Dorothy performs). The soundtrack makes heavy use of vintage pop songs, such as Bobby Vinton's "Blue Velvet" and Roy Orbison's "In Dreams", juxtaposed with an orchestral score. 
During filming, Lynch placed speakers on set and in the streets and played Shostakovich to set the mood he wanted to convey. The score alludes to Shostakovich's 15th Symphony, which Lynch had been listening to regularly while writing the screenplay. Lynch had originally opted to use "Song to the Siren" by This Mortal Coil during the scene in which Sandy and Jeffrey share a dance; however, he could not obtain the rights to the song at the time. He would go on to use it in "Lost Highway" eleven years later. "Entertainment Weekly" ranked the "Blue Velvet" soundtrack 100th on its list of the "100 Greatest Film Soundtracks". Critic John Alexander wrote, "the haunting soundtrack accompanies the title credits, then weaves through the narrative, accentuating the noir mood of the film." Lynch worked with composer Angelo Badalamenti for the first time on this film and asked him to write a score that had to be "like Shostakovich, be very Russian, but make it the most beautiful thing but make it dark and a little bit scary." Badalamenti's success with "Blue Velvet" would lead him to contribute to all of Lynch's future full-length films until "Inland Empire", as well as the cult television program "Twin Peaks". Also included in the sound team was long-time Lynch collaborator Alan Splet, a sound editor and designer who had won an Academy Award for his work on "The Black Stallion" (1979) and been nominated for "Never Cry Wolf" (1983). Reception. Box office. "Blue Velvet" premiered in competition at the Montreal World Film Festival in August 1986 and screened at the Toronto Festival of Festivals on September 12, 1986, reaching the United States a few days later. It debuted commercially in both countries on September 19, 1986, opening in 98 theatres across the United States. In its opening weekend, the film grossed a total of $789,409. It eventually expanded to another 15 theatres, and in the US and Canada grossed a total of $8,551,228. "Blue Velvet" was met with uproar from audiences, with lines forming around city blocks in New York City and Los Angeles. There were reports of mass walkouts and refund demands during its opening week. At a Chicago screening, a man fainted and had to have his pacemaker checked; once it had been checked, he returned to the cinema to see the ending. At a Los Angeles cinema, two strangers fell into a heated disagreement, which they resolved in order to return to the theatre. Critical reception. "Blue Velvet" was released to a very polarized reception in the United States. The critics who did praise the film were often vociferous. "The New York Times" critic Janet Maslin directed much praise toward the performances of Hopper and Rossellini: "Mr. Hopper and Miss Rossellini are so far outside the bounds of ordinary acting here that their performances are best understood in terms of sheer lack of inhibition; both give themselves entirely over to the material, which seems to be exactly what's called for." She called it "an instant cult classic", concluding that "Blue Velvet" "is as fascinating as it is freakish" and "confirms Mr. Lynch's stature as an innovator, a superb technician, and someone best not encountered in a dark alley." Sheila Benson of the "Los Angeles Times" called the film "the most brilliantly disturbing film ever to have its roots in small-town American life," describing it as "shocking, visionary, rapturously controlled". Film critic Gene Siskel included "Blue Velvet" on his list of the best films of 1986, placing it fifth. 
Peter Travers, film critic for "Rolling Stone", named it the best film of the 1980s and referred to it as an "American masterpiece". Upon its initial release, Woody Allen and Martin Scorsese called "Blue Velvet" the best film of the year. On the other hand, Paul Attanasio of "The Washington Post" said "the film showcases a visual stylist utterly in command of his talents" and that Angelo Badalamenti "contributes an extraordinary score, slipping seamlessly from slinky jazz to violin figures to the romantic sweep of a classic Hollywood score," but stated that Lynch "isn't interested in communicating, he's interested in parading his personality. The movie doesn't progress or deepen, it just gets weirder, and to no good end." A general criticism from US critics was "Blue Velvet"s approach to sexuality and violence. They asserted that this detracted from the film's seriousness as a work of art, and some condemned the film as pornographic. One of its detractors, Roger Ebert, stated that the large amount of "jokey small-town satire" in the film made it impossible to take its themes seriously. Ebert praised Rossellini's performance as "convincing and courageous" but criticized how she was depicted in the film, even accusing David Lynch of misogyny: "degraded, slapped around, humiliated and undressed in front of the camera. And when you ask an actress to endure those experiences, you should keep your side of the bargain by putting her in an important film." While Ebert in later years came to consider Lynch a great filmmaker, his negative view of "Blue Velvet" remained unchanged after he revisited it in the 21st century. The film is now widely considered a masterpiece and holds an approval score of 91% on Rotten Tomatoes based on 138 reviews, with an average rating of 8.2/10. The website's critical consensus states: "If audiences walk away from this subversive, surreal shocker not fully understanding the story, they might also walk away with a deeper perception of the potential of film storytelling." The film also has a score of 75 out of 100 on Metacritic based on 15 critics, indicating "generally favorable" reviews. Looking back in his "Guardian/Observer" review, critic Philip French wrote, "The film is wearing well and has attained a classic status without becoming respectable or losing its sense of danger." Mark Kermode walked out on the film and gave the film a poor review upon its release, but revised his view of the film over time. In 2016, he remarked, "as a film critic, it taught me that when a film really gets under your skin and really provokes a visceral reaction, you have to be very careful about assessing it ... I didn't walk out on "Blue Velvet" because it was a bad film. I walked out on it because it was a really good film. The point was at the time I wasn't good enough for it." Accolades. David Lynch was nominated for the Academy Award for Best Director and the Golden Globe Award for Best Screenplay for his work on the film. Dennis Hopper was nominated for the Golden Globe Award for Best Supporting Actor – Motion Picture for his performance, while Isabella Rossellini won the Independent Spirit Award for Best Female Lead for her performance. Lynch won Best Director and Hopper won Best Supporting Actor at the Los Angeles Film Critics Association awards in 1987. That same year, the film received four National Society of Film Critics awards: Best Film, Best Director (Lynch), Best Cinematography (Frederick Elmes), and Best Supporting Actor (Hopper). Home media. 
"Blue Velvet" was released on VHS and LaserDisc by Karl-Lorimar Home Video in 1987 and re-issued by Warner Home Video in 1991. After that, it was released on DVD in 2000 and 2002 by MGM Home Entertainment. The film made its Blu-ray debut on November 8, 2011, with a special 25th-anniversary edition featuring never-before-seen deleted scenes. On May 28, 2019, the film was re-released on Blu-ray by the Criterion Collection, featuring a 4K digital restoration, the original stereo soundtrack and other special features, including a behind-the-scenes documentary titled "Blue Velvet Revisited". Criterion later released the film as a 4K Ultra HD Blu-ray/Blu-Ray combo pack on June 25, 2024. Legacy. Although it initially gained a relatively small theatrical audience in North America and was met with controversy over its artistic merit, "Blue Velvet" soon became the center of a "national firestorm" in 1986, and over time became regarded as an American classic. In the late 1980s, and early 1990s, after its release on videotape, the film became a widely recognized cult film, for its dark depiction of a suburban America. With its many VHS, LaserDisc and DVD releases, the film reached broader American audiences. It marked David Lynch's entry into the Hollywood mainstream and Dennis Hopper's comeback. Hopper's performance as Frank Booth has itself left an imprint on popular culture, with countless tributes, cultural references and parodies. The film's success also helped Hollywood address previously censored issues, as "Psycho" (1960) had. "Blue Velvet" has been frequently compared to that ground-breaking film. It has become one of the most significant, well-recognized films of its era, spawning countless imitations and parodies in media. The film's dark, stylish and erotic production design has served as a benchmark for a number of films, parodies and even Lynch's own later work, notably "Twin Peaks" (1990–91), and "Mulholland Drive" (2001). Peter Travers of "Rolling Stone" magazine cited it as one of the most "influential American films", as did Michael Atkinson, who dedicated a book to the film's themes and motifs. "Blue Velvet" now frequently appears in various critical assessments of all-time great films, also ranked as one of the greatest films of the 1980s, one of the best examples of American surrealism and one of the finest examples of David Lynch's work. In a poll of 54 American critics ranking the "most outstanding films of the decade", "Blue Velvet" was placed fourth, behind "Raging Bull" (1980), "E.T. the Extra-Terrestrial" (1982) and the German film "Wings of Desire" (1987). An "Entertainment Weekly" book special released in 1999 ranked "Blue Velvet" 37th of the greatest films of all time. The film was ranked by "The Guardian" in its list of the 100 Greatest Films. "Film Four" ranked it on their list of 100 Greatest Films. In a 2007 poll of the online film community held by "Variety", "Blue Velvet" came in at the 95th-greatest film of all time. "Total Film" ranked "Blue Velvet" as one of the all-time best films in both a critics' list and a public poll, in 2006 and 2007, respectively. In December 2002, a UK film critics' poll in "Sight & Sound" ranked the film fifth on their list of the 10 Best Films of the Last 25 Years. In a special "Entertainment Weekly" issue, 100 new film classics were chosen from 1983 to 2008: "Blue Velvet" was ranked at fourth. 
In addition to "Blue Velvet" various "all-time greatest films" rankings, the American Film Institute has awarded the film three honors in its lists: 96th on "100 Years ... 100 Thrills" in 2001, selecting cinema's most thrilling moments and ranked Frank Booth 36th of the 50 greatest villains in "100 Years ... 100 Heroes and Villains" in 2003. In June 2008, the AFI revealed its "ten Top Ten"—the best ten films in ten "classic" American film genres—after polling over 1,500 people from the creative community. "Blue Velvet" was acknowledged as the eighth best film in the mystery genre. "Premiere" magazine listed Frank Booth, played by Dennis Hopper, as the 54th on its list of 'The 100 Greatest Movie Characters of All Time', calling him one of "the most monstrously funny creations in cinema history". The film was ranked 84th on Bravo Television's four-hour program "100 Scariest Movie Moments" (2004). It is frequently sampled musically and an array of bands and solo artists have taken their names and inspiration from the film. In August 2012, "Sight & Sound" unveiled their latest list of the 250 greatest films of all time, with "Blue Velvet" ranking at 69th. "Blue Velvet" was also nominated for the following AFI lists: Inspired by the film, pop singer Lana Del Rey recorded a cover version of "Blue Velvet" in 2012. Used to endorse clothing line H&M, a music video accompanied the track and aired as a television commercial. Set in post-war America, the video drew influence from Lynch and "Blue Velvet". In the video, Del Rey plays the role of Dorothy Vallens, performing a private concert similar to the scene where Ben (Dean Stockwell) pantomimes "In Dreams" for Frank Booth. Del Rey's version, however, has her lip-syncing "Blue Velvet" when a little person dressed as Frank Sinatra approaches and unplugs a hidden Victrola, revealing Del Rey as a fraud. When Lynch heard of the music video, he praised it, telling "Artinfo": "Lana Del Rey, she's got some fantastic charisma and—this is a very interesting thing—it's like she's born out of another time. She's got something that's very appealing to people. And I didn't know she was influenced by me!"
Binary operation
In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two. More specifically, a binary operation on a set is a binary function that maps every pair of elements of the set to an element of the set. Examples include the familiar arithmetic operations of addition, subtraction and multiplication, and set operations such as union, intersection and relative complement. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups. A binary function that involves several sets is sometimes also called a "binary operation". For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and the scalar product takes two vectors to produce a scalar. Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces. Terminology. More precisely, a binary operation on a set $S$ is a mapping of the elements of the Cartesian product $S \times S$ to $S$: $f \colon S \times S \to S$. If $f$ is not a function but a partial function, then $f$ is called a partial binary operation. For instance, division is a partial binary operation on the set of all real numbers, because one cannot divide by zero: $a / 0$ is undefined for every real number $a$. In both model theory and classical universal algebra, binary operations are required to be defined on all elements of $S \times S$. However, partial algebras generalize universal algebras to allow partial operations. Sometimes, especially in computer science, the term binary operation is used for any binary function. Properties and examples. Typical examples of binary operations are the addition ($+$) and multiplication ($\times$) of numbers and matrices, as well as composition of functions on a single set. For instance, addition is a binary operation on the set of real numbers, since the sum of two real numbers is a real number; addition is likewise a binary operation on the set of natural numbers; and addition of $2 \times 2$ real matrices is a binary operation on the set of such matrices, since the sum of two such matrices is again a $2 \times 2$ real matrix. Many binary operations of interest in both algebra and formal logic are commutative, satisfying $a * b = b * a$ for all elements $a$ and $b$ in $S$, or associative, satisfying $(a * b) * c = a * (b * c)$ for all $a$, $b$, and $c$ in $S$. Many also have identity elements and inverse elements. The first three examples above are commutative, and all of the above examples are associative. On the set of real numbers $\mathbb{R}$, subtraction, that is, $f(a, b) = a - b$, is a binary operation which is not commutative since, in general, $a - b \neq b - a$. It is also not associative, since, in general, $(a - b) - c \neq a - (b - c)$; for instance, $(1 - 2) - 3 = -4$ but $1 - (2 - 3) = 2$. On the set of natural numbers $\mathbb{N}$, the binary operation exponentiation, $f(a, b) = a^{b}$, is not commutative since, in general, $a^{b} \neq b^{a}$ (cf. the equation $x^{y} = y^{x}$), and is also not associative since, in general, $(a^{b})^{c} \neq a^{(b^{c})}$. For instance, with $a = 2$, $b = 3$, and $c = 2$, $a^{(b^{c})} = 2^{(3^{2})} = 2^{9} = 512$, but $(a^{b})^{c} = (2^{3})^{2} = 8^{2} = 64$. By changing the set $\mathbb{N}$ to the set of integers $\mathbb{Z}$, this binary operation becomes a partial binary operation, since it is now undefined when $a = 0$ and $b$ is any negative integer. For either set, this operation has a "right identity" (which is $1$) since $a^{1} = a$ for all $a$ in the set, but no "identity" (two-sided identity), since $1^{b} \neq b$ in general. Division ($\div$), a partial binary operation on the set of real or rational numbers, is not commutative or associative. 
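These commutativity and associativity claims are easy to spot-check numerically. The sketch below is our own illustration in Python (not part of the article or its sources); note that testing a finite sample of elements can only ever disprove a law, never prove it for an infinite set such as $\mathbb{N}$ or $\mathbb{R}$.

```python
from itertools import product

def is_commutative(op, elements):
    """Check op(a, b) == op(b, a) over all sampled pairs."""
    return all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

def is_associative(op, elements):
    """Check op(op(a, b), c) == op(a, op(b, c)) over all sampled triples."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

sample = range(1, 5)  # a small sample of natural numbers

print(is_commutative(lambda a, b: a + b, sample))   # True: addition commutes
print(is_commutative(lambda a, b: a - b, sample))   # False: 1 - 2 != 2 - 1
print(is_associative(lambda a, b: a - b, sample))   # False: (1 - 2) - 3 != 1 - (2 - 3)
print(is_associative(lambda a, b: a ** b, sample))  # False: (2 ** 3) ** 2 != 2 ** (3 ** 2)
```

A passing check on a sample is only evidence; the failing checks, however, are conclusive counterexamples.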
Tetration ($\uparrow\uparrow$), as a binary operation on the natural numbers, is not commutative or associative and has no identity element. Notation. Binary operations are often written using infix notation such as $a * b$, $a + b$, $a \cdot b$ or (by juxtaposition with no symbol) $ab$, rather than by functional notation of the form $f(a, b)$. Powers are usually also written without an operator, but with the second argument as a superscript. Binary operations are sometimes written using prefix or (more frequently) postfix notation, both of which dispense with parentheses. They are also called, respectively, Polish notation ($* \, a \, b$) and reverse Polish notation ($a \, b \, *$). Binary operations as ternary relations. A binary operation $*$ on a set $S$ may be viewed as a ternary relation on $S$, that is, the set of triples $(a, b, a * b)$ in $S \times S \times S$ for all $a$ and $b$ in $S$. Other binary operations. Binary functions whose arguments come from two different sets are also sometimes loosely called binary operations. For example, scalar multiplication in linear algebra maps $K \times S$ to $S$; here $K$ is a field and $S$ is a vector space over that field. Also, the dot product of two vectors maps $S \times S$ to $K$, where $K$ is a field and $S$ is a vector space over $K$. It depends on the author whether such functions are considered binary operations.
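To make the ternary-relation view concrete, the following minimal sketch (again our own illustration) encodes addition modulo 3 on the set $S = \{0, 1, 2\}$ as a set of triples and recovers the operation from it:

```python
# A binary operation viewed as a ternary relation: addition modulo 3 on
# S = {0, 1, 2}, encoded as the set of triples (a, b, a * b).
S = {0, 1, 2}
relation = {(a, b, (a + b) % 3) for a in S for b in S}

def apply_op(rel, a, b):
    """Recover a * b from the relation; exactly one triple matches each (a, b)."""
    (result,) = {c for (x, y, c) in rel if (x, y) == (a, b)}
    return result

print(apply_op(relation, 2, 2))  # prints 1, since (2 + 2) mod 3 = 1
```

Because the operation is a function, each pair $(a, b)$ appears in exactly one triple, which is what lets `apply_op` unpack a single result.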
Bagpipes
Bagpipes are a woodwind instrument using enclosed reeds fed from a constant reservoir of air in the form of a bag. The Great Highland bagpipes are well known, but people have played bagpipes for centuries throughout large parts of Europe, Northern Africa, Western Asia, around the Persian Gulf and northern parts of South Asia. The term "bagpipe" is equally correct in the singular or the plural, though pipers usually refer to the bagpipes as "the pipes", "a set of pipes" or "a stand of pipes". Bagpipes belong to the aerophone group of instruments, since their sound is produced by vibrating air. Construction. A set of bagpipes minimally consists of an air supply, a bag, a chanter, and usually at least one drone. Many bagpipes have more than one drone (and, sometimes, more than one chanter) in various combinations, held in place in stocks—sockets that fasten the various pipes to the bag. Air supply. The most common method of supplying air to the bag is through blowing into a blowpipe or blowstick. In some pipes the player must cover the tip of the blowpipe with the tongue while inhaling, in order to prevent unwanted deflation of the bag, but most blowpipes have a non-return valve that eliminates this need. In recent times, many devices have been developed that help to create a clean airflow to the pipes and to collect condensation. The use of a bellows to supply air is an innovation dating from the 16th or 17th century. In these pipes, sometimes called "cauld wind pipes", air is not heated or moistened by the player's breathing, so bellows-driven bagpipes can use more refined or delicate reeds. Such pipes include the Irish uilleann pipes; the border or Lowland pipes, Scottish smallpipes, Northumbrian smallpipes and pastoral pipes in Britain; the musette de cour, the musette bechonnet and the cabrette in France; and the dudy, koziol bialy, and koziol czarny in Poland. Bag. The bag is an airtight reservoir that holds air and regulates its flow via arm pressure, allowing the player to maintain continuous, even sound. The player keeps the bag inflated by blowing air into it through a blowpipe or by pumping air into it with a bellows. Materials used for bags vary widely, but the most common are the skins of local animals such as goats, dogs, sheep, and cows. More recently, bags made of synthetic materials including Gore-Tex have become much more common. Some synthetic bags have zips that allow the player to fit a more effective moisture trap to the inside of the bag. However, synthetic bags still carry a risk of colonisation by fungal spores, and the associated danger of lung infection if they are not kept clean, even if they otherwise require less cleaning than bags made from natural materials. Bags cut from larger materials are usually saddle-stitched with an extra strip folded over the seam and stitched (for skin bags) or glued (for synthetic bags) to reduce leaks. Holes are then cut to accommodate the stocks. In the case of bags made from largely intact animal skins, the stocks are typically tied into the points where the limbs and the head joined the body of the whole animal, a construction technique common in Central Europe. Different regions have different ways of treating the hide. The simplest methods involve just the use of salt, while more complex treatments involve milk, flour, and the removal of fur. The hide is normally turned inside out so that the fur is on the inside of the bag, as this helps to reduce the effect of moisture buildup within the bag. 
Chanter. The chanter is the melody pipe, played with two hands. All bagpipes have at least one chanter; some pipes have two chanters, particularly those in North Africa, in the Balkans, and in Southwest Asia. A chanter can be bored internally so that the inside walls are parallel (or "cylindrical") for its full length, or it can be bored in a conical shape. Popular woods include boxwood, cornel, plum or other fruit woods. The chanter is usually open-ended, so there is no easy way for the player to stop the pipe from sounding. Thus most bagpipes share a constant legato sound with no rests in the music. Primarily because of this inability to stop playing, technical movements are made to break up notes and to create the illusion of articulation and accents. Because of their importance, these embellishments (or "ornaments") are often highly technical systems specific to each bagpipe, and take many years of study to master. A few bagpipes (such as the musette de cour, the uilleann pipes, the Northumbrian smallpipes, the piva and the left chanter of the surdelina) have closed ends or stop the end on the player's leg, so that when the player "closes" (covers all the holes), the chanter becomes silent. A practice chanter is a chanter without bag or drones, fitted with a much quieter reed, allowing a player to practice the instrument quietly and with no variables other than playing the chanter. The term "chanter" is derived from the Latin "cantare", "to sing", much like the modern French verb "chanter". A distinctive feature of the gaida's chanter (which it shares with a number of other Eastern European bagpipes) is the "flea-hole" (also known as a "mumbler", "voicer" or "marmorka"), which is covered by the index finger of the left hand. The flea-hole is smaller than the rest and usually consists of a small tube made of metal or a chicken or duck feather. Uncovering the flea-hole raises any note played by a half step, and it is used in creating the musical ornamentation that gives Balkan music its unique character. Some types of gaida can have a double-bored chanter, such as the Serbian three-voiced gajde. It has eight fingerholes: the top four are covered by the thumb and the first three fingers of the left hand, then the four fingers of the right hand cover the remaining four holes. Chanter reed. The note from the chanter is produced by a reed installed at its top. The reed may be a single reed (with one vibrating tongue) or a double reed (of two pieces that vibrate against each other). Double reeds are used with both conical- and parallel-bored chanters, while single reeds are generally (although not exclusively) limited to parallel-bored chanters. In general, double-reed chanters are found in pipes of Western Europe, while single-reed chanters appear in most other regions. They are made from reed ("Arundo donax" or "Phragmites"), bamboo, or elder. A more modern variant combines a reed body made of cotton phenolic laminate (HGW2082) with a clarinet reed cut to size to fit that body. Such reeds produce a louder sound and are less sensitive to changes in humidity and temperature. Drone. Most bagpipes have at least one drone, a pipe that generally is not fingered but rather produces a constant harmonizing note throughout play (usually the tonic note of the chanter). Exceptions are generally those pipes that have a double-chanter instead. 
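The flea-hole's half-step effect, together with the octave and fifth relationships used for drone tuning described in the next section, comes down to simple frequency arithmetic. The sketch below is our own illustration, assuming equal temperament for the semitone and just ratios for the octave and fifth; real instruments are tuned by ear, vary by tradition, and the tonic value here is purely illustrative.

```python
# Interval arithmetic for pitch. In equal temperament, raising a note by one
# half step multiplies its frequency by the twelfth root of two.
SEMITONE = 2 ** (1 / 12)

def raise_half_step(freq_hz: float) -> float:
    """Frequency after a flea-hole-style half-step rise."""
    return freq_hz * SEMITONE

tonic = 440.0                     # an illustrative chanter tonic, in Hz
print(raise_half_step(tonic))     # ~466.16 Hz: one semitone above the tonic
print(tonic / 2)                  # 220.0 Hz: one octave below (a typical tenor drone)
print(tonic / 4)                  # 110.0 Hz: two octaves below (a typical bass drone)
print(tonic * 3 / 2 / 2)          # 330.0 Hz: the fifth of the chanter, an octave down
```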
A drone is most commonly a cylindrically bored tube with a single reed, although drones with double reeds exist. The drone is generally designed in two or more parts with a sliding joint so that the pitch of the drone can be adjusted. Depending on the type of pipes, the drones may lie over the shoulder, across the arm opposite the bag, or may run parallel to the chanter. Some drones have a tuning screw, which effectively alters the length of the drone by opening a hole, allowing the drone to be tuned to two or more distinct pitches. The tuning screw may also shut off the drone altogether. In most types of pipes with one drone, it is pitched two octaves below the tonic of the chanter. Additional drones often add the octave below and then a drone consonant with the fifth of the chanter. History. Possible ancient origins. The evidence for bagpipes prior to the 13th century AD is still uncertain, but several textual and visual clues have been suggested. The "Oxford History of Music" identifies a possible sculpture of bagpipes on a Hittite slab at Euyuk in Anatolia, dated to 1000 BC. Another interpretation of this sculpture suggests that it instead depicts a pan flute played along with a friction drum. Several authors identify the ancient Greek "askaulos" (ἀσκός "askos" – wine-skin, αὐλός "aulos" – reed pipe) with the bagpipe. In the 2nd century AD, Suetonius described the Roman emperor Nero as a player of the "tibia utricularis". Dio Chrysostom wrote in the 1st century of a contemporary sovereign (possibly Nero) who could play a pipe (tibia, Roman reedpipes similar to Greek and Etruscan instruments) with his mouth as well as by tucking a bladder beneath his armpit. Vereno suggests that such instruments, rather than being seen as an independent class, were understood as variants on mouth-blown instruments that used a bag as an alternative blowing aid, and that it was not until drones were added in the European Medieval era that bagpipes were seen as a distinct class. Spread and development in Europe. In the early part of the second millennium, representations of bagpipes began to appear with frequency in Western European art and iconography. The Cantigas de Santa Maria, written in Galician-Portuguese and compiled in Castile in the mid-13th century, depicts several types of bagpipes. Several illustrations of bagpipes also appear in the "Chronique dite de Baudoin d'Avesnes", a 13th-century manuscript of northern French origin. Although evidence of bagpipes in the British Isles prior to the 14th century is contested, they are explicitly mentioned in "The Canterbury Tales" (written around 1380). Bagpipes were also frequent subjects for carvers of wooden choir stalls in the late 15th and early 16th century throughout Europe, sometimes with animal musicians. Actual specimens of bagpipes from before the 18th century are extremely rare; however, a substantial number of paintings, carvings, engravings, and manuscript illuminations survive. These artefacts are clear evidence that bagpipes varied widely throughout Europe, and even within individual regions. Many examples of early folk bagpipes in continental Europe can be found in the paintings of Brueghel, Teniers, Jordaens, and Durer. The earliest known artefact identified as a part of a bagpipe is a chanter found in 1985 at Rostock, Germany, that has been dated to the late 14th century or the first quarter of the 15th century. 
The first clear reference to the use of the Scottish Highland bagpipes is from a French history that mentions their use at the Battle of Pinkie in 1547. George Buchanan (1506–82) claimed that bagpipes had replaced the trumpet on the battlefield. This period saw the creation of the "ceòl mór" (great music) of the bagpipe, which reflected its martial origins, with battle tunes, marches, gatherings, salutes and laments. The Highlands of the early 17th century saw the development of piping families including the MacCrimmons, MacArthurs, MacGregors, and the Mackays of Gairloch. The earliest Irish mention of the bagpipe is in 1206, approximately thirty years after the Anglo-Norman invasion; another mention attributes their use to Irish troops in Henry VIII's siege of Boulogne. Illustrations in the 1581 book "The Image of Irelande" by John Derricke clearly depict a bagpiper. Derricke's illustrations are considered to be reasonably faithful depictions of the attire and equipment of the English and Irish population of the 16th century. The "Battell" sequence from "My Ladye Nevells Booke" (1591) by William Byrd, which probably alludes to the Irish wars of 1578, contains a piece entitled "The bagpipe: & the drone". In 1760, the first serious study of the Scottish Highland bagpipe and its music was attempted in Joseph MacDonald's "Compleat Theory". A manuscript from the 1730s by a William Dixon of Northumberland contains music that fits the border pipes, a nine-note bellows-blown bagpipe with a chanter similar to that of the modern Great Highland bagpipe. However, the music in Dixon's manuscript varied greatly from modern Highland bagpipe tunes, consisting mostly of extended variation sets of common dance tunes. Some of the tunes in the Dixon manuscript correspond to those found in the early 19th-century manuscript sources of Northumbrian smallpipe tunes, notably the rare book of 50 tunes, many with variations, by John Peacock. As Western classical music developed, both in terms of musical sophistication and instrumental technology, bagpipes in many regions fell out of favour because of their limited range and function. This triggered a long, slow decline that continued, in most cases, into the 20th century. Extensive and well-documented collections of traditional bagpipes may be found at the Metropolitan Museum of Art in New York City, the International Bagpipe Museum in Gijón, Spain, the Pitt Rivers Museum in Oxford, England, the Morpeth Chantry Bagpipe Museum in Northumberland, and the Musical Instrument Museum in Phoenix, Arizona. The International Bagpipe Festival is held every two years in Strakonice, Czech Republic. Recent history. During the 19th and 20th centuries, as a result of the participation of Scottish regiments in the expansion of the British Empire, the bagpipes became well known worldwide. This surge in the bagpipes' popularity was boosted by the large numbers of Allied pipers who served in World War I and World War II. It coincided with a decline in the popularity of many traditional forms of bagpipe throughout Europe, which began to be displaced by instruments from the classical tradition and later by the gramophone and radio. As pipers were easily identifiable, combat losses were high, estimated at one thousand in World War I. A front-line role was prohibited following the high losses at the Second Battle of El Alamein (1942), though a few later instances occurred. 
In the United Kingdom and Commonwealth nations such as Canada, New Zealand and Australia, the Great Highland bagpipe is commonly used in the military and is often played during formal ceremonies. Foreign militaries patterned after the British Army have also adopted the Highland bagpipe, including those of Uganda, Sudan, India, Pakistan, Sri Lanka, Jordan, and Oman. Many police and fire services in Scotland, Canada, Australia, New Zealand, Hong Kong, and the United States have also adopted the tradition of fielding pipe bands. In recent years, often driven by revivals of native folk music and dance, many types of bagpipes have enjoyed a resurgence in popularity and, in many cases, instruments that had fallen into obscurity have become extremely popular. In Brittany, the Great Highland bagpipe and the concept of the pipe band were appropriated to create a Breton interpretation known as the bagad. The pipe-band idiom has also been adopted and applied to the Galician gaita. Bagpipes have often been used in various films depicting moments from Scottish and Irish history; the film "Braveheart" and the theatrical show "Riverdance" have served to make the uilleann pipes more commonly known. Bagpipes are sometimes played at formal events at Commonwealth universities, particularly in Canada. Because of Scottish influences on the sport of curling, bagpipes are also the official instrument of the World Curling Federation and are commonly played during a ceremonial procession of teams before major curling championships. Bagpipe making was once a craft that produced instruments in many distinctive local and traditional styles. Today, the world's largest producer of the instrument is Pakistan, where the industry was worth $6.8 million in 2010. In the late 20th century, various models of electronic bagpipes were invented. The first custom-built MIDI bagpipes were developed by the Asturian piper known as Hevia (José Ángel Hevia Velasco). Astronaut Kjell N. Lindgren is thought to be the first person to play the bagpipes in outer space, having played "Amazing Grace" in tribute to the late research scientist Victor Hurst aboard the International Space Station in November 2015. Traditionally, one of the purposes of the bagpipe was to provide music for dancing. This role has diminished with the growth of dance bands and recordings and the decline of traditional dance. In turn, many types of pipes have developed a performance-led tradition, though much modern bagpipe music rooted in the dance tradition remains suitable for dancing. Modern usage. Types of bagpipes. Numerous types of bagpipes are today widely spread across Europe, the Middle East and North Africa, as well as through much of the former British Empire. The name bagpipe has almost become synonymous with its best-known form, the Great Highland bagpipe, overshadowing the great number and variety of traditional forms of bagpipe. 
Despite the decline of these other types of pipes over the last few centuries, in recent years many of them have seen a resurgence or revival as musicians have sought them out. For example, the Irish piping tradition, which by the mid-20th century had declined to a handful of master players, is today alive, well, and flourishing, a situation similar to that of the Asturian gaita, the Galician gaita, the Portuguese gaita transmontana, the Aragonese gaita de boto, Northumbrian smallpipes, the Breton biniou, the Balkan gaida, the Romanian cimpoi, the Black Sea tulum, the Scottish smallpipes and pastoral pipes, as well as other varieties. Bulgaria has the Kaba gaida, a large bagpipe of the Rhodope mountains with a hexagonal and rounded drone, often described as a deep-sounding gaida, and the Dzhura gaida, with a straight conical drone and a higher pitch. The Macedonian gaida is structurally between a kaba and a dzhura gaida and is described as a medium-pitched gaida. In Southeastern and Eastern Europe, bagpipes known as gaida are found in many countries under a variety of closely related local names. In Tunisia, the bagpipe is known by the name "mezwed". It is used in the Tunisian pop music genre, also called mezwed, that is named after the instrument. Usage in non-traditional music. Since the 1960s, bagpipes have also made appearances in other forms of music, including rock, metal, jazz, hip-hop, punk, and classical music, for example with Paul McCartney's "Mull of Kintyre", AC/DC's "It's a Long Way to the Top (If You Wanna Rock 'n' Roll)", and Peter Maxwell Davies's composition "An Orkney Wedding, with Sunrise". Publications. Periodicals. "Periodicals covering specific types of bagpipes are addressed in the article for that bagpipe"
Bedrock Records
Bedrock Records is an English record label for trance, progressive house and techno started by John Digweed. Its name comes from a long-running and successful club night held in Hastings and also at Heaven nightclub, London – both also called Bedrock. Bedrock Records has released many singles from artists such as Astro & Glyde, Brancaccio & Aisher, Steve Lawler, Shmuel Flash, Steve Porter, Sahar Z, Guy J, Henry Saiz, Stelios Vassiloudis, Electric Rescue, The Japanese Popstars and Jerry Bonham. Bedrock is also the production moniker used by Digweed and Nick Muir. Bedrock has had several imprints: Bedrock Breaks, B_Rock and Black (Bedrock). It currently operates Bedrock Digital, as well as an imprint called Lost & Found run by Guy J. The first Bedrock album, compiled and mixed by John Digweed, was released in 1999 and contained several tracks signed to the Bedrock label. In 2018, Digweed marked the 20th anniversary of the label with the release of "Bedrock XX".
Biochemistry
Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry became successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis that allows biological molecules to give rise to the processes that occur within living cells and between cells, which in turn relates greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and the diagnosis and control of disease: the discipline of biotechnology. History. At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some have argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, "Animal chemistry, or, Organic chemistry in its applications to physiology and pathology", which presented a chemical theory of metabolism, or even earlier to the 18th-century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. 
Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ("Biochemie" in German) as a synonym for physiological chemistry in the foreword to the first issue of "Zeitschrift für Physiologische Chemie" (Journal of Physiological Chemistry), where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg, however, is often cited as having coined the word in 1903, while some credit it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy, as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression. Starting materials: the chemical elements of life. Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminium and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need only ultra-small amounts). Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). 
In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more. Biomolecules. The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble into larger complexes, often needed for biological activity. Carbohydrates. Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell-to-cell interactions and communications. The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where "n" is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between an acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring are rare. Two monosaccharides can be joined by a glycosidic or ester bond into a "disaccharide" through a dehydration reaction during which a molecule of water is released. The reverse reaction, in which the glycosidic bond of a disaccharide is broken into two monosaccharides, is termed "hydrolysis". The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance. When a few (around three to six) monosaccharides are joined, the result is called an "oligosaccharide" ("oligo-" meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined together form a polysaccharide. They can be joined in one long linear chain, or they may be branched. 
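The 1:2:1 ratio behind the generalized formula CnH2nOn is easy to check mechanically. The following is a minimal sketch of our own (not from the article); it only parses plain formulas with explicit C, H and O counts, so it is an illustration rather than a general chemistry tool.

```python
import re

def fits_monosaccharide_pattern(formula: str) -> bool:
    """Check whether a formula matches CnH2nOn with n >= 3.

    Handles only formulas written as C<digits>H<digits>O<digits>.
    """
    match = re.fullmatch(r"C(\d+)H(\d+)O(\d+)", formula)
    if not match:
        return False
    c, h, o = (int(g) for g in match.groups())
    return c >= 3 and h == 2 * c and o == c

print(fits_monosaccharide_pattern("C6H12O6"))  # True: glucose and fructose
print(fits_monosaccharide_pattern("C5H10O4"))  # False: deoxyribose is a deoxy
                                               # sugar, one oxygen short of the ratio
```

That glucose and fructose share the formula C6H12O6 while behaving differently also illustrates why the ratio alone does not identify a sugar: the two are isomers.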
Lipids. Lipids comprise a diverse range of molecules and are to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid. Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain). Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below. Lipids are an integral part of our daily diet. Most oils and milk products used for cooking and eating, like butter, cheese and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome). Proteins. Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R".
The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a "dipeptide", and short stretches of amino acids (usually, fewer than thirty) are called "peptides" or polypeptides. Longer stretches merit the title "proteins". As an example, the important blood serum protein albumin contains 585 amino acid residues. Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be "extremely" selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains: two heavy chains are linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain. The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called "substrates"; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
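The rate-enhancement figures quoted above are consistent with each other, as a few lines of arithmetic show. The snippet below is purely an illustrative check of the numbers in the text (the constant names are ours):

```python
# A 10^11-fold speed-up should turn a ~3,000-year uncatalyzed reaction
# into one that completes in under a second, as the text states.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

uncatalyzed = 3000 * SECONDS_PER_YEAR  # ~9.5e10 seconds
speedup = 1e11                         # rate enhancement quoted in the text

catalyzed = uncatalyzed / speedup
print(f"{catalyzed:.2f} s")  # ~0.95 s, i.e. less than a second
```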
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit. Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids. If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein. A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle. In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
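The classification of the twenty standard amino acids given above can be encoded directly. The following sketch is illustrative only; the lists are taken from the preceding paragraph, and the label "conditionally essential" is our shorthand for arginine and histidine, which growing animals cannot make in sufficient amounts:

```python
# Dietary classification of the 20 standard amino acids for mammals,
# as described in the text.
ESSENTIAL = {
    "isoleucine", "leucine", "lysine", "methionine",
    "phenylalanine", "threonine", "tryptophan", "valine",
}
NONESSENTIAL = {
    "alanine", "asparagine", "aspartate", "cysteine", "glutamate",
    "glutamine", "glycine", "proline", "serine", "tyrosine",
}
CONDITIONALLY_ESSENTIAL = {"arginine", "histidine"}

def classify(amino_acid: str) -> str:
    """Return the dietary classification of a standard amino acid."""
    name = amino_acid.lower()
    if name in ESSENTIAL:
        return "essential"
    if name in CONDITIONALLY_ESSENTIAL:
        return "conditionally essential"
    if name in NONESSENTIAL:
        return "nonessential"
    raise ValueError(f"not one of the 20 standard amino acids: {amino_acid}")

print(classify("Lysine"))     # essential
print(classify("Histidine"))  # conditionally essential
```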
Nucleic acids. "Nucleic acid" is the generic name for a family of biopolymers, so called because of their prevalence in cellular nuclei. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. Adenine–thymine (and adenine–uracil) pairs are held together by two hydrogen bonds, while cytosine–guanine pairs are held together by three. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
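The base-pairing rules just described lend themselves to a compact code sketch. The snippet below is a minimal illustration of complementarity (function and table names are ours, and reverse-complementing is deliberately omitted to keep it short):

```python
# Base-by-base complement under the pairing rules stated above:
# A pairs with T (or U in RNA); C pairs with G.
DNA_PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIRS = {"A": "U", "U": "A", "C": "G", "G": "C"}

def complement(sequence: str, pairs: dict) -> str:
    """Return the base-by-base complement of a nucleic acid sequence."""
    return "".join(pairs[base] for base in sequence)

print(complement("GATTACA", DNA_PAIRS))  # CTAATGT
print(complement("GAUUACA", RNA_PAIRS))  # CUAAUGU
```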
Metabolism. Carbohydrates as energy source. Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides. Glycolysis (anaerobic). Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents, converting NAD+ (nicotinamide adenine dinucleotide: oxidized form) to NADH (nicotinamide adenine dinucleotide: reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway. Aerobic. In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
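The ATP bookkeeping in the preceding paragraphs can be reproduced with simple arithmetic. The sketch below uses only the article's own figures; the per-carrier conversion factors (3 ATP per NADH, 2 per reduced quinone) are implied by the stated totals rather than given explicitly:

```python
# ATP yield per glucose, using the counts stated in the text.
ATP_PER_NADH = 3     # implied by "24 from the 8 NADH"
ATP_PER_QUINOL = 2   # implied by "4 from the 2 quinols"

nadh = 8             # 2 from pyruvate -> acetyl-CoA, 6 from the citric acid cycle
quinols = 2          # via FADH2 in the citric acid cycle
substrate_level = 2 + 2  # 2 ATP from glycolysis + 2 from the citrate cycle

oxidative = nadh * ATP_PER_NADH + quinols * ATP_PER_QUINOL
total = oxidative + substrate_level
print(oxidative, total)  # 28 32, matching the totals in the text
```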
Gluconeogenesis. In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. Gluconeogenesis is the generation of glucose from non-carbohydrate sources, such as fat and protein. It only happens when glycogen supplies in the liver are worn out. The pathway is a crucial reversal of glycolysis from pyruvate to glucose and can use many sources, such as amino acids, glycerol and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose using this process. Gluconeogenesis is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathways of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle. Relationship to other "molecular-scale" biological sciences. Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, and genetics studies their heredity, which happens to be carried by their genome.
Badminton
Badminton is a racquet sport in which players use racquets to hit a shuttlecock across a net. Although it may be played with larger teams, the most common forms of the game are "singles" (with one player per side) and "doubles" (with two players per side). Badminton is often played as a casual outdoor activity in a yard or on a beach; professional games are played on a rectangular indoor court. Points are scored by striking the shuttlecock with the racquet and landing it within the other team's half of the court, within the set boundaries. Each side may only strike the shuttlecock once before it passes over the net. Play ends once the shuttlecock has struck the floor or ground, or if a fault has been called by the umpire, service judge, or (in their absence) the opposing side. The shuttlecock is a feathered or (in informal matches) plastic projectile that flies differently from the balls used in many other sports. In particular, the feathers create much higher drag, causing the shuttlecock to decelerate more rapidly. Shuttlecocks also have a high top speed compared to the balls in other racquet sports, making badminton the fastest racquet sport in the world. The flight of the shuttlecock gives the sport its distinctive nature, and in certain languages the sport is named by reference to this feature (e.g., German "Federball", literally "feather-ball"). The game developed in British India from the earlier game of battledore and shuttlecock. European play came to be dominated by Denmark but the game has become very popular in Asia. In 1992, badminton debuted as a Summer Olympic sport with four events: men's singles, women's singles, men's doubles, and women's doubles; mixed doubles was added four years later. At high levels of play, the sport demands excellent fitness: players require aerobic stamina, agility, strength, speed, and precision. It is also a technical sport, requiring good motor coordination and the development of sophisticated racquet movements involving much greater flexibility in the wrist than some other racquet sports. History. Games employing shuttlecocks have been played for centuries across Eurasia, but the modern game of badminton developed in the mid-19th century among the expatriate officers of British India as a variant of the earlier game of battledore and shuttlecock. ("Battledore" was an older term for "racquet".) Its exact origin remains obscure. The name derives from the Duke of Beaufort's Badminton House in Gloucestershire, but why or when remains unclear. As early as 1860, a London toy dealer named Isaac Spratt published a booklet entitled "Badminton Battledore – A New Game", but no copy is known to have survived. An 1863 article in "The Cornhill Magazine" describes badminton as "battledore and shuttlecock played with sides, across a string suspended some five feet from the ground". The game originally developed in India among the British expatriates, where it was very popular by the 1870s. Ball badminton, a form of the game played with a wool ball instead of a shuttlecock, was being played in Thanjavur as early as the 1850s and was at first played interchangeably with badminton by the British, the woollen ball being preferred in windy or wet weather. Early on, the game was also known as Poona or Poonah after the garrison town of Poona (Pune), where it was particularly popular and where the first rules for the game were drawn up in 1873. By 1875, officers returning home had started a badminton club in Folkestone.
Initially, the sport was played with sides ranging from 1 to 4 players, but it was quickly established that games between two or four competitors worked best. The shuttlecocks were coated with India rubber and, in outdoor play, sometimes weighted with lead. Although the depth of the net was of no consequence, it was preferred that it should reach the ground. The sport was played under the Pune rules until 1887, when J. H. E. Hart of the Bath Badminton Club drew up revised regulations. In 1890, Hart and Bagnel Wild again revised the rules. The Badminton Association of England (BAE) published these rules in 1893 and officially launched the sport at a house called "Dunbar" in Portsmouth on 13 September. The BAE started the first badminton competition, the All England Open Badminton Championships for gentlemen's doubles, ladies' doubles, and mixed doubles, in 1899. Singles competitions were added in 1900 and an England–Ireland championship match appeared in 1904. England, Scotland, Wales, Canada, Denmark, France, Ireland, the Netherlands, and New Zealand were the founding members of the International Badminton Federation in 1934, now known as the Badminton World Federation. India joined as an affiliate in 1936. The BWF now governs international badminton. Although initiated in England, competitive men's badminton has traditionally been dominated in Europe by Denmark. Worldwide, Asian nations have become dominant in international competition. China, Denmark, Indonesia, Malaysia, India, South Korea, Taiwan (playing as 'Chinese Taipei') and Japan are the nations which have consistently produced world-class players in the past few decades, with China being the greatest force in men's and women's competition recently. Great Britain, where the rules of the modern game were codified, is not among the top powers in the sport, but has had significant Olympic and World success in doubles play, especially mixed doubles. The game has also become a popular backyard sport in the United States. Rules. The following information is a simplified summary of badminton rules based on the BWF Statutes publication, "Laws of Badminton". Court. The court is rectangular and divided into halves by a net. Courts are usually marked for both singles and doubles play, although badminton rules permit a court to be marked for singles only. The doubles court is wider than the singles court, but both are of the same length. The exception, which often causes confusion to newer players, is that the doubles court has a shorter serve-length dimension. The full width of the court is 6.1 metres (20 ft), and in singles this width is reduced to 5.18 metres (17 ft). The full length of the court is 13.4 metres (44 ft). The service courts are marked by a centre line dividing the width of the court, by a short service line at a distance of 1.98 metres (6 ft 6 in) from the net, and by the outer side and back boundaries. In doubles, the service court is also marked by a long service line, which is 0.76 metres (2 ft 6 in) from the back boundary. The net is 1.55 metres (5 ft 1 in) high at the edges and 1.524 metres (5 ft) high in the centre. The net posts are placed over the doubles sidelines, even when singles is played. The minimum height for the ceiling above the court is not mentioned in the Laws of Badminton. Nonetheless, a badminton court will not be suitable if the ceiling is likely to be hit on a high serve. Serving. When the server serves, the shuttlecock must pass over the short service line on the opponents' court or it will count as a fault.
The server and receiver must remain within their service courts, without touching the boundary lines, until the server strikes the shuttlecock. The other two players may stand wherever they wish, so long as they do not block the vision of the server or receiver. At the start of the rally, the server and receiver stand in diagonally opposite "service courts" (see court dimensions). The server hits the shuttlecock so that it would land in the receiver's service court. This is similar to tennis, except that in badminton the whole shuttle must be below 1.15 metres from the surface of the court at the instant it is struck by the server's racquet, the shuttlecock is not allowed to bounce, and the players stand inside their service courts, unlike in tennis. When the serving side loses a rally, the serve immediately passes to their opponent(s) (this differs from the old system where sometimes the serve passes to the doubles partner for what is known as a "second serve"). In singles, the server stands in their right service court when their score is even, and in their left service court when their score is odd. In doubles, if the serving side wins a rally, the same player continues to serve, but they change service courts so that they serve to a different opponent each time. If the opponents win the rally and their new score is even, the player in the right service court serves; if odd, the player in the left service court serves. The players' service courts are determined by their positions at the start of the previous rally, not by where they were standing at the end of the rally. A consequence of this system is that each time a side regains the service, the server will be the player who did "not" serve last time. Scoring. Each game is played to 21 points, with players scoring a point by winning a rally. This differs from the old system in which players may only win a point on their serve and each game is to 15 points. A match is the best of three games: a player or pair must win two games (of 21 points each) to win the match. If the score ties at 20–20, then the game continues until one side gains a two-point lead (such as 24–22), except when there is a tie at 29–29, in which case the game goes to a golden point at 30. Whoever scores this point wins the game. At the start of a match, the shuttlecock is cast and the side towards which the shuttlecock is pointing serves first. Alternatively, a coin may be tossed, with the winners choosing whether to serve or receive first, or choosing which end of the court to occupy first, and their opponents making the remaining choice. In subsequent games, the winners of the previous game serve first. For the first rally of any doubles game, the serving pair may decide who serves and the receiving pair may decide who receives. The players change ends at the start of the second game; if the match reaches a third game, they change ends both at the start of the game and when the leading player's or pair's score reaches 11 points. A new scoring system is being trialled by the BWF, in which the 21x3 scoring system may be replaced with 15x3; the move has been controversial among badminton players.
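The rally-scoring, golden-point, and singles service-court rules above are simple enough to express in a few lines of code. The sketch below is illustrative only (the function and variable names are ours, not from the Laws of Badminton):

```python
# Has a game been won under the 21-point rally-scoring rules?
# First to 21 wins, but a two-point lead is required after 20-20,
# and a 29-29 tie is decided by a golden point at 30.
def game_winner(score_a: int, score_b: int):
    """Return 'A' or 'B' if the game is over, else None."""
    for winner, hi, lo in (("A", score_a, score_b), ("B", score_b, score_a)):
        if hi == 30 and lo == 29:      # golden point after 29-29
            return winner
        if hi >= 21 and hi - lo >= 2:  # normal win: 21+ points with a 2-point lead
            return winner
    return None

def service_court(server_score: int) -> str:
    """In singles, the server serves from the right court on even scores."""
    return "right" if server_score % 2 == 0 else "left"

assert game_winner(21, 15) == "A"
assert game_winner(20, 21) is None   # must win by two after 20-20
assert game_winner(22, 24) == "B"
assert game_winner(29, 30) == "B"    # golden point
assert service_court(0) == "right" and service_court(7) == "left"
```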
Lets. If a let is called, the rally is stopped and replayed with no change to the score. Lets may occur because of some unexpected disturbance, such as a shuttlecock landing on the court after being hit there by players on an adjacent court, or, in small halls, the shuttle touching an overhead rail, which can be classed as a let. If the receiver is not ready when the service is delivered, a let shall be called; yet, if the receiver attempts to return the shuttlecock, the receiver shall be judged to have been ready. Equipment. Badminton rules restrict the design and size of racquets and shuttlecocks. Racquets. Badminton racquets are lightweight, with top quality racquets weighing between about 70 and 95 grams, not including grip or strings. They are composed of many different materials ranging from carbon fibre composite (graphite reinforced plastic) to solid steel, which may be augmented by a variety of materials. Carbon fibre has an excellent strength to weight ratio, is stiff, and gives excellent kinetic energy transfer. Before the adoption of carbon fibre composite, racquets were made of light metals such as aluminium. Earlier still, racquets were made of wood. Cheap racquets are still often made of metals such as steel, but wooden racquets are no longer manufactured for the ordinary market, because of their excessive mass and cost. Nowadays, nanomaterials such as carbon nanotubes and fullerenes are added to racquets, giving them greater durability. There is a wide variety of racquet designs, although the laws limit the racquet size and shape. Different racquets have playing characteristics that appeal to different players. The traditional oval head shape is still available, but an isometric head shape is increasingly common in new racquets. Strings. Badminton racquet strings are thin, high-performing strings with thicknesses ranging from about 0.62 to 0.73 mm. Thicker strings are more durable, but many players prefer the feel of thinner strings. String tension is normally in the range of 80 to 160 N (18 to 36 lbf). Recreational players generally string at lower tensions than professionals, typically between 80 and 110 N (18 and 25 lbf). Professionals string between about 110 and 160 N (25 and 36 lbf). Some string manufacturers measure the thickness of their strings under tension, so they are actually thicker than specified when slack: Ashaway Micropower is actually 0.7 mm, while Yonex BG-66 is about 0.72 mm. It is often argued that high string tensions improve control, whereas low string tensions increase power. The arguments for this generally rely on crude mechanical reasoning, such as claiming that a lower tension string bed is more bouncy and therefore provides more power. This is, in fact, incorrect, for a higher string tension can cause the shuttle to slide off the racquet and hence make it harder to hit a shot accurately. An alternative view suggests that the optimum tension for power depends on the player: the faster and more accurately a player can swing their racquet, the higher the tension for maximum power. Neither view has been subjected to a rigorous mechanical analysis, nor is there clear evidence in favour of one or the other. The most effective way for a player to find a good string tension is to experiment. Grip. The choice of grip allows a player to increase the thickness of their racquet handle and choose a comfortable surface to hold. A player may build up the handle with one or several grips before applying the final layer. Players may choose between a variety of grip materials. The most common choices are PU synthetic grips or towelling grips.
Grip choice is a matter of personal preference. Players often find that sweat becomes a problem; in this case, a drying agent may be applied to the grip or hands, sweatbands may be used, the player may choose another grip material or change their grip more frequently. There are two main types of grip: "replacement" grips and "overgrips". Replacement grips are thicker and are often used to increase the size of the handle. Overgrips are thinner (less than 1 mm), and are often used as the final layer. Many players, however, prefer to use replacement grips as the final layer. Towelling grips are always replacement grips. Replacement grips have an adhesive backing, whereas overgrips have only a small patch of adhesive at the start of the tape and must be applied under tension; overgrips are more convenient for players who change grips frequently, because they may be removed more rapidly without damaging the underlying material. Shuttlecock. A shuttlecock (often abbreviated to "shuttle"; also called a "birdie") is a high-drag projectile, with an open conical shape: the cone is formed from sixteen overlapping feathers embedded into a rounded cork base. The cork is covered with thin leather or synthetic material. Synthetic shuttles are often used by recreational players to reduce their costs, as feathered shuttles break easily. These nylon shuttles may be constructed with either a natural cork or synthetic foam base and a plastic skirt. Badminton rules also provide a procedure for testing a shuttlecock for the correct speed. Shoes. Badminton shoes are lightweight with soles of rubber or similar high-grip, non-marking materials, similar to tennis shoes. Compared to running shoes, badminton shoes have little lateral support. High levels of lateral support are useful for activities where lateral motion is undesirable and unexpected. Badminton, however, requires powerful lateral movements. A highly built-up lateral support will not be able to protect the foot in badminton; instead, it will encourage catastrophic collapse at the point where the shoe's support fails, and the player's ankles are not ready for the sudden loading, which can cause sprains. For this reason, players should choose badminton shoes rather than general trainers or running shoes, because proper badminton shoes will have a very thin sole, lower a person's centre of gravity, and therefore result in fewer injuries. Outfits. The Badminton World Federation and Octagon developed a rule that female badminton players must wear dresses or skirts "to ensure attractive presentation", but although it was included in the official rulebook in 2011, it was dropped before it was supposed to go into effect in 2012. Technique. Strokes. Badminton offers a wide variety of basic strokes, and players require a high level of skill to perform all of them effectively. All strokes can be played either "forehand" or "backhand". A player's forehand side is the same side as their playing hand: for a right-handed player, the forehand side is their right side and the backhand side is their left side. Forehand strokes are hit with the front of the hand leading (like hitting with the palm), whereas backhand strokes are hit with the back of the hand leading (like hitting with the knuckles). Players frequently play certain strokes on the forehand side with a backhand hitting action, and vice versa.
In the forecourt and midcourt, most strokes can be played equally effectively on either the forehand or backhand side; but in the rear court, players will attempt to play as many strokes as possible on their forehands, often preferring to play a "round-the-head" forehand overhead (a forehand "on the backhand side") rather than attempt a backhand overhead. Playing a backhand overhead has two main disadvantages. First, the player must turn their back to their opponents, restricting their view of them and the court. Second, backhand overheads cannot be hit with as much power as forehands: the hitting action is limited by the shoulder joint, which permits a much greater range of movement for a forehand overhead than for a backhand. The "backhand clear" is considered by most players and coaches to be the most difficult basic stroke in the game, since the precise technique is needed in order to muster enough power for the shuttlecock to travel the full length of the court. For the same reason, "backhand smashes" tend to be weak. Position of the shuttlecock and receiving player. The choice of stroke depends on how near the shuttlecock is to the net, whether it is above net height, and where an opponent is currently positioned: players have much better attacking options if they can reach the shuttlecock well above net height, especially if it is also close to the net. In the forecourt, a high shuttlecock will be met with a "net kill", hitting it steeply downwards and attempting to win the rally immediately. This is why it is best to drop the shuttlecock just over the net in this situation. In the midcourt, a high shuttlecock will usually be met with a powerful "smash", also hitting downwards and hoping for an outright winner or a weak reply. Athletic "jump smashes", where players jump upwards for a steeper smash angle, are a common and spectacular element of elite men's doubles play. In the rearcourt, players strive to hit the shuttlecock while it is still above them, rather than allowing it to drop lower. This "overhead" hitting allows them to play smashes, "clears" (hitting the shuttlecock high and to the back of the opponents' court), and "drop shots" (hitting the shuttlecock softly so that it falls sharply downwards into the opponents' forecourt). If the shuttlecock has dropped lower, then a smash is impossible and a full-length, high clear is difficult. Vertical position of the shuttlecock. When the shuttlecock is well below net height, players have no choice but to hit upwards. "Lifts", where the shuttlecock is hit upwards to the back of the opponents' court, can be played from all parts of the court. If a player does not lift, their only remaining option is to push the shuttlecock softly back to the net: in the forecourt, this is called a "net shot"; in the midcourt or rear court, it is often called a "push" or "block". When the shuttlecock is near to net height, players can hit "drives", which travel flat and rapidly over the net into the opponents' rear midcourt and rear court. Pushes may also be hit flatter, placing the shuttlecock into the front midcourt. Drives and pushes may be played from the midcourt or forecourt, and are most often used in doubles: they are an attempt to regain the attack, rather than choosing to lift the shuttlecock and defend against smashes. After a successful drive or push, the opponents will often be forced to lift the shuttlecock. Spin. 
Balls may be spun to alter their bounce (for example, topspin and backspin in tennis) or trajectory, and players may slice the ball (strike it with an angled racquet face) to produce such spin. The shuttlecock is not allowed to bounce, but slicing the shuttlecock does have applications in badminton. (See Basic strokes for an explanation of technical terms.) Due to the way that its feathers overlap, a shuttlecock also has a slight natural spin about its axis of rotational symmetry. The spin is in a counter-clockwise direction as seen from above when dropping a shuttlecock. This natural spin affects certain strokes: a tumbling net shot is more effective if the slicing action is from right to left, rather than from left to right. Biomechanics. Badminton biomechanics have not been the subject of extensive scientific study, but some studies confirm the minor role of the wrist in power generation and indicate that the major contributions to power come from internal and external rotations of the upper and lower arm. Recent guides to the sport thus emphasize forearm rotation rather than wrist movements. The feathers impart substantial drag, causing the shuttlecock to decelerate greatly over distance. The shuttlecock is also extremely aerodynamically stable: regardless of initial orientation, it will turn to fly cork-first and remain in the cork-first orientation. One consequence of the shuttlecock's drag is that it requires considerable power to hit it the full length of the court, which is not the case for most racquet sports. The drag also influences the flight path of a lifted ("lobbed") shuttlecock: the parabola of its flight is heavily skewed so that it falls at a steeper angle than it rises. With very high serves, the shuttlecock may even fall vertically. Other factors. When defending against a smash, players have three basic options: lift, block, or drive. In singles, a block to the net is the most common reply. In doubles, a lift is the safest option but it usually allows the opponents to continue smashing; blocks and drives are counter-attacking strokes but may be intercepted by the smasher's partner. Many players use a backhand hitting action for returning smashes on both the forehand and backhand sides because backhands are more effective than forehands at covering smashes directed to the body. Hard shots directed towards the body are difficult to defend. The service is restricted by the Laws and presents its own array of stroke choices. Unlike in tennis, the server's racquet must be pointing in a downward direction to deliver the serve so normally the shuttle must be hit upwards to pass over the net. The server can choose a "low serve" into the forecourt (like a push), or a lift to the back of the service court, or a flat "drive serve". Lifted serves may be either "high serves", where the shuttlecock is lifted so high that it falls almost vertically at the back of the court, or "flick serves", where the shuttlecock is lifted to a lesser height but falls sooner. Deception. Once players have mastered these basic strokes, they can hit the shuttlecock from and to any part of the court, powerfully and softly as required. Beyond the basics, however, badminton offers rich potential for advanced stroke skills that provide a competitive advantage. 
Because badminton players have to cover a short distance as quickly as possible, the purpose of many advanced strokes is to deceive the opponent, so that either they are tricked into believing that a different stroke is being played, or they are forced to delay their movement until they actually see the shuttle's direction. "Deception" in badminton is often used in both of these senses. When a player is genuinely deceived, they will often lose the point immediately because they cannot change their direction quickly enough to reach the shuttlecock. Experienced players will be aware of the trick and cautious not to move too early, but the attempted deception is still useful because it forces the opponent to delay their movement slightly. Against weaker players whose intended strokes are obvious, an experienced player may move before the shuttlecock has been hit, anticipating the stroke to gain an advantage. "Slicing" and using a "shortened hitting action" are the two main technical devices that facilitate deception. Slicing involves hitting the shuttlecock with an angled racquet face, causing it to travel in a different direction than suggested by the body or arm movement. Slicing also causes the shuttlecock to travel more slowly than the arm movement suggests. For example, a good crosscourt "sliced drop shot" will use a hitting action that suggests a straight clear or a smash, deceiving the opponent about both the power and direction of the shuttlecock. A more sophisticated slicing action involves brushing the strings around the shuttlecock during the hit, in order to make the shuttlecock spin. This can be used to improve the shuttle's trajectory, by making it dip more rapidly as it passes the net; for example, a sliced low serve can travel slightly faster than a normal low serve, yet land on the same spot. Spinning the shuttlecock is also used to create "spinning net shots" (also called "tumbling net shots"), in which the shuttlecock turns over itself several times (tumbles) before stabilizing; sometimes the shuttlecock remains inverted instead of tumbling. The main advantage of a spinning net shot is that the opponent will be unwilling to address the shuttlecock until it has stopped tumbling, since hitting the feathers will result in an unpredictable stroke. Spinning net shots are especially important for high-level singles players. The lightness of modern racquets allows players to use a very short hitting action for many strokes, thereby maintaining the option to hit a powerful or a soft stroke until the last possible moment. For example, a singles player may hold their racquet ready for a net shot, but then flick the shuttlecock to the back instead with a shallow lift when they notice the opponent has moved before the actual shot was played. A shallow lift takes less time to reach the ground and, as mentioned above, a rally is over when the shuttlecock touches the ground. This makes the opponent's task of covering the whole court much more difficult than if the lift was hit higher and with a bigger, obvious swing. A short hitting action is not only useful for deception: it also allows the player to hit powerful strokes when they have no time for a big arm swing. A big arm swing is also usually not advised in badminton, because bigger swings make it more difficult to recover for the next shot in fast exchanges. The use of grip tightening is crucial to these techniques, and is often described as "finger power".
Elite players develop finger power to the extent that they can hit some power strokes, such as net kills, with only a tiny racquet swing. It is also possible to reverse this style of deception, by suggesting a powerful stroke before slowing down the hitting action to play a soft stroke. In general, this latter style of deception is more common in the rear court (for example, drop shots disguised as smashes), whereas the former style is more common in the forecourt and midcourt (for example, lifts disguised as net shots). Deception is not limited to slicing and short hitting actions. Players may also use "double motion", where they make an initial racquet movement in one direction before withdrawing the racquet to hit in another direction. Players will often do this to send opponents in the wrong direction. The racquet movement is typically used to suggest a straight angle but then play the stroke crosscourt, or vice versa. "Triple motion" is also possible, but this is very rare in actual play. An alternative to double motion is to use a "racquet head fake", where the initial motion is continued but the racquet is turned during the hit. This produces a smaller change in direction but does not require as much time. Injuries. In badminton, cramps, usually in the arms and legs, are very common. Elbow and leg pain are also common, due to the fast movements involved. A notable incident in badminton is the death of Zhang Zhijie, who collapsed on court and died of cardiac arrest. Another notable incident is the major knee injury of Carolina Marin, who landed awkwardly on her already surgically repaired right knee, breaking it; she had taken the first game 21–14 and was forced to retire in the second game at 10–6. Strategy. To win in badminton, players need to employ a wide variety of strokes in the right situations. These range from powerful jumping smashes to delicate tumbling net returns. Often rallies finish with a smash, but setting up the smash requires subtler strokes. For example, a net shot can force the opponent to lift the shuttlecock, which gives an opportunity to smash. If the net shot is tight and tumbling, then the opponent's lift will not reach the back of the court, which makes the subsequent smash much harder to return. Deception is also important: played properly, it helps players regain time and tricks the opponent. Expert players prepare for many different strokes that look identical and use slicing to deceive their opponents about the speed or direction of the stroke. If an opponent tries to anticipate the stroke, they may move in the wrong direction and may be unable to change their body momentum in time to reach the shuttlecock. Singles. Since one person needs to cover the entire court, singles tactics are based on forcing the opponent to move as much as possible; this means that singles strokes are normally directed to the corners of the court. Players exploit the length of the court by combining lifts and clears with drop shots and net shots. Smashing tends to be less prominent in singles than in doubles because the smasher has no partner to follow up their effort and is thus vulnerable to a skillfully placed return. Moreover, frequent smashing can be exhausting in singles, where the conservation of a player's energy is at a premium. However, players with strong smashes will sometimes use the shot to create openings, and players commonly smash weak returns to try to end rallies.
In singles, players will often start the rally with a forehand high serve or with a flick serve. Low serves are also used frequently, either forehand or backhand. Drive serves are rare. At high levels of play, singles demand extraordinary fitness. Singles is a game of patient positional manoeuvring, unlike the all-out aggression of doubles. Doubles. Both pairs will try to gain and maintain the attack, smashing downwards when the opportunity arises. Whenever possible, a pair will adopt an ideal attacking formation with one player hitting down from the rear court, and their partner in the midcourt intercepting all smash returns except the lift. If the rear court attacker plays a drop shot, their partner will move into the forecourt to threaten the net reply. If a pair cannot hit downwards, they will use flat strokes in an attempt to gain the attack. If a pair is forced to lift or clear the shuttlecock, then they must defend: they will adopt a side-by-side position in the rear midcourt, to cover the full width of their court against the opponents' smashes. In doubles, players generally smash to the middle ground between two players in order to take advantage of confusion and clashes. At high levels of play, the backhand serve has become popular to the extent that forehand serves have become fairly rare. The straight low serve is used most frequently, in an attempt to prevent the opponents gaining the attack immediately. Flick serves are used to prevent the opponent from anticipating the low serve and attacking it decisively. At high levels of play, doubles rallies are extremely fast. Men's doubles are the most aggressive form of badminton, with a high proportion of powerful jump smashes and very quick reflex exchanges. Because of this, spectator interest is sometimes greater for men's doubles than for singles. Mixed doubles. In mixed doubles, both pairs typically try to maintain an attacking formation with the woman at the front and the man at the back. This is because the male players are usually substantially stronger, and can, therefore, produce smashes that are more powerful. As a result, mixed doubles require greater tactical awareness and subtler positional play. Clever opponents will try to reverse the ideal position, by forcing the woman towards the back or the man towards the front. In order to protect against this danger, mixed players must be careful and systematic in their shot selection. At high levels of play, the formations will generally be more flexible: the top women players are capable of playing powerfully from the back-court, and will happily do so if required. When the opportunity arises, however, the pair will switch back to the standard mixed attacking position, with the woman in front and the man at the back. Organization. Governing bodies. The Badminton World Federation (BWF) is the internationally recognized governing body of the sport, responsible for the regulation of tournaments and the upholding of fair play. Five regional confederations are associated with the BWF; other bodies are unaffiliated or minor in comparison. Competitions. The BWF organizes several international competitions, including the Thomas Cup, the premier men's international team event first held in 1948–1949, and the Uber Cup, the women's equivalent first held in 1956–1957. The competitions now take place once every two years. More than 50 national teams compete in qualifying tournaments within continental confederations for a place in the finals.
The final tournament involves 12 teams, following an increase from eight teams in 2004. It was further increased to 16 teams in 2012. The Sudirman Cup, a gender-mixed international team event held once every two years, began in 1989. Teams are divided into seven levels based on the performance of each country. To win the tournament, a country must perform well across all five disciplines (men's doubles and singles, women's doubles and singles, and mixed doubles). Like association football (soccer), it features a promotion and relegation system at every level. However, the system was last used in 2009 and teams competing will now be grouped by world rankings. Badminton was a demonstration event at the 1972 and 1988 Summer Olympics. It became an official Summer Olympic sport at the Barcelona Olympics in 1992 and its gold medals now generally rate as the sport's most coveted prizes for individual players. In the BWF World Championships, first held in 1977, currently only the highest-ranked 64 players in the world, and a maximum of four from each country, can participate in any category, so it is not an "open" format. In both the BWF World and the Olympic competitions, restrictions on the number of participants from any one country have caused some controversy, because they result in excluding some world elite level players from the strongest badminton nations. The Thomas, Uber, and Sudirman Cups, the Olympics, and the BWF World (and World Junior Championships), are all categorized as level one tournaments. At the start of 2007, the BWF introduced a new tournament structure for the highest level tournaments aside from those in level one: the BWF Super Series. This "level two" tournament series is a circuit for the world's elite players, staging twelve open tournaments around the world with 32 players (half the previous limit). The players collect points that determine whether they can play in the Super Series Finals held at the year-end. Among the tournaments in this series is the venerable All-England Championships, first held in 1900, which was once considered the unofficial world championships of the sport. Level three tournaments consist of Grand Prix Gold and Grand Prix events. Top players can collect world ranking points that enable them to play in the BWF Super Series open tournaments. These include the regional competitions in Asia (Badminton Asia Championships) and Europe (European Badminton Championships), which produce the world's best players, as well as the Pan America Badminton Championships. The level four tournaments, known as International Challenge, International Series, and Future Series, encourage participation by junior players. Comparison with tennis. Badminton is frequently compared to tennis. Statistics such as smash speed prompt badminton enthusiasts to make comparisons that can be contentious. For example, it is often claimed that badminton is the fastest racquet sport. Although badminton holds the record for the fastest initial speed of a racquet sports projectile, the shuttlecock decelerates substantially faster than other projectiles such as tennis balls. This claim must in turn be qualified by consideration of the distance over which the shuttlecock travels: a smashed shuttlecock travels a shorter distance than a tennis ball during a serve.
While fans of badminton and tennis often claim that their sport is the more physically demanding, such comparisons are difficult to make objectively because of the differing demands of the games. No formal study currently exists evaluating the physical condition of the players or the demands of gameplay. Badminton and tennis techniques differ substantially. The lightness of the shuttlecock and of badminton racquets allows badminton players to make use of the wrist and fingers much more than tennis players; in tennis, the wrist is normally held stable, and playing with a mobile wrist may lead to injury. For the same reasons, badminton players can generate power from a short racquet swing: for some strokes such as net kills, an elite player's swing may be only a few centimetres. For strokes that require more power, a longer swing will typically be used, but the badminton racquet swing will rarely be as long as a typical tennis swing.
Baroque
The Baroque is a Western style of architecture, music, dance, painting, sculpture, poetry, and other arts that flourished from the early 17th century until the 1750s. It followed Renaissance art and Mannerism and preceded the Rococo (in the past often referred to as "late Baroque") and Neoclassical styles. It was encouraged by the Catholic Church as a means to counter the simplicity and austerity of Protestant architecture, art, and music, though Lutheran Baroque art developed in parts of Europe as well. The Baroque style used contrast, movement, exuberant detail, deep color, grandeur, and surprise to achieve a sense of awe. The style began at the start of the 17th century in Rome, then spread rapidly to the rest of Italy, France, Spain, and Portugal, then to Austria, southern Germany, Poland and Russia. By the 1730s, it had evolved into an even more flamboyant style, called "rocaille" or "Rococo", which appeared in France and Central Europe until the mid to late 18th century. In the territories of the Spanish and Portuguese Empires including the Iberian Peninsula, it continued, together with new styles, until the first decade of the 19th century. In the decorative arts, the style employs plentiful and intricate ornamentation. The departure from Renaissance classicism has its own ways in each country. But a general feature is that everywhere the starting point is the ornamental elements introduced by the Renaissance. The classical repertoire is crowded, dense, overlapping, loaded, in order to provoke shock effects. New motifs introduced by Baroque are: the cartouche, trophies and weapons, baskets of fruit or flowers, and others, made in marquetry, stucco, or carved. Origin of the word. The English word "baroque" comes directly from the French. Some scholars state that the French word originated from the Portuguese term "barroco" ('a flawed pearl'), pointing to the Latin "verruca" ('wart'), or to a word with a Romance suffix common in pre-Roman Iberia. Other sources suggest a Medieval Latin term used in logic, "baroco", as the most likely source. In the 16th century the Medieval Latin word moved beyond scholastic logic and came into use to characterise anything that seemed absurdly complex. The French philosopher Michel de Montaigne (1533–1592) helped to give the term the meaning 'bizarre, uselessly complicated'. Other early sources associate the word with magic, complexity, confusion, and excess. The word "baroque" was also associated with irregular pearls before the 18th century. The French "baroque" and Portuguese "barroco" were terms often associated with jewelry. An example from 1531 uses the term to describe pearls in an inventory of Charles V of France's treasures. Later, the word appears in a 1694 edition of the "Dictionnaire de l'Académie française", which describes "baroque" as "only used for pearls that are imperfectly round." A 1728 Portuguese dictionary similarly describes "barroco" as relating to a "coarse and uneven pearl". An alternative derivation of the word "baroque" points to the name of the Italian painter Federico Barocci (1528–1612). In the 18th century the term began to be used to describe music, and not in a flattering way. In an anonymous satirical review of the première of Jean-Philippe Rameau's "Hippolyte et Aricie" in October 1733, which was printed in the "Mercure de France" in May 1734, the critic wrote that the novelty in this opera was "du barocque", complaining that the music lacked coherent melody, was unsparing with dissonances, constantly changed key and meter, and speedily ran through every compositional device. In 1762 the "Dictionnaire de l'Académie française" recorded that the term could figuratively describe something "irregular, bizarre or unequal".
Jean-Jacques Rousseau, who was a musician and composer as well as a philosopher, wrote in the "Encyclopédie" in 1768: "Baroque music is that in which the harmony is confused, and loaded with modulations and dissonances. The singing is harsh and unnatural, the intonation difficult, and the movement limited. It appears that the term comes from the word 'baroco' used by logicians." In 1788 Quatremère de Quincy defined the term in the "Encyclopédie Méthodique" as "an architectural style that is highly adorned and tormented". The French terms "style baroque" and "musique baroque" appeared in the "Dictionnaire de l'Académie française" in 1835. By the mid-19th century, art critics and historians had adopted the term "baroque" as a way to ridicule post-Renaissance art. This was the sense of the word as used in 1855 by the leading art historian Jacob Burckhardt, who wrote that baroque artists "despised and abused detail" because they lacked "respect for tradition". In 1888 the art historian Heinrich Wölfflin published the first serious academic work on the style, "Renaissance und Barock", which described the differences between the painting, sculpture, and architecture of the Renaissance and the Baroque. Architecture: origins and characteristics. The Baroque style of architecture was a result of doctrines adopted by the Catholic Church at the Council of Trent in 1545–1563, in response to the Protestant Reformation. The first phase of the Counter-Reformation had imposed a severe, academic style on religious architecture, which had appealed to intellectuals but not the mass of churchgoers. The Council of Trent decided instead to appeal to a more popular audience, and declared that the arts should communicate religious themes with direct and emotional involvement. Similarly, Lutheran Baroque art developed as a confessional marker of identity, in response to the Great Iconoclasm of Calvinists. Baroque churches were designed with a large central space, where the worshippers could be close to the altar, with a dome or cupola high overhead, allowing light to illuminate the church below. The dome was one of the central symbolic features of Baroque architecture, illustrating the union between the heavens and the earth. The inside of the cupola was lavishly decorated with paintings of angels and saints, and with stucco statuettes of angels, giving the impression to those below of looking up at heaven. Another feature of Baroque churches is the "quadratura": trompe-l'œil paintings on the ceiling in stucco frames, either real or painted, crowded with paintings of saints and angels and connected by architectural details with the balustrades and consoles. "Quadratura" paintings of Atlantes below the cornices appear to be supporting the ceiling of the church. Unlike the painted ceilings of Michelangelo in the Sistine Chapel, which combined different scenes, each with its own perspective, to be looked at one at a time, the Baroque ceiling paintings were carefully created so the viewer on the floor of the church would see the entire ceiling in correct perspective, as if the figures were real. The interiors of Baroque churches became more and more ornate in the High Baroque, and focused around the altar, usually placed under the dome. The most celebrated decorative works of the High Baroque are the Chair of Saint Peter (1647–1653) and St. Peter's Baldachin (1623–1634), both by Gian Lorenzo Bernini, in St. Peter's Basilica in Rome. The Baldachin of St.
Peter is an example of the balance of opposites in Baroque art: the gigantic proportions of the piece with the apparent lightness of the canopy, and the contrast between the solid twisted columns, bronze, gold and marble of the piece and the flowing draperies of the angels on the canopy. The Dresden Frauenkirche serves as a prominent example of Lutheran Baroque art; it was completed in 1743 after being commissioned by the Lutheran city council of Dresden and was "compared by eighteenth-century observers to St Peter's in Rome". The twisted column in the interior of churches is one of the signature features of the Baroque. It gives both a sense of motion and also a dramatic new way of reflecting light. The cartouche was another characteristic feature of Baroque decoration. These were large plaques carved of marble or stone, usually oval and with a rounded surface, which carried images or text in gilded letters, and were placed as interior decoration or above the doorways of buildings, delivering messages to those below. They showed a wide variety of invention, and were found in all types of buildings, from cathedrals and palaces to small chapels. Baroque architects sometimes used forced perspective to create illusions. For the Palazzo Spada in Rome, Francesco Borromini used columns of diminishing size, a narrowing floor and a miniature statue in the garden beyond to create the illusion that a passageway was thirty meters long, when it was actually only seven meters long. A statue at the end of the passage appears to be life-size, though it is only sixty centimeters high. Borromini designed the illusion with the assistance of a mathematician. Italian Baroque. The first building in Rome to have a Baroque façade was the Church of the Gesù in 1584; it was plain by later Baroque standards, but marked a break with the traditional Renaissance façades that preceded it. The interior of this church remained very austere until the High Baroque, when it was lavishly ornamented. In Rome in 1605, Paul V became the first of a series of popes who commissioned basilicas and church buildings designed to inspire emotion and awe through a proliferation of forms and a richness of colours and dramatic effects. Among the most influential monuments of the Early Baroque were the façade of St. Peter's Basilica (1606–1619), and the new nave and loggia which connected the façade to Michelangelo's dome in the earlier church. The new design created a dramatic contrast between the soaring dome and the disproportionately wide façade, and the contrast on the façade itself between the Doric columns and the great mass of the portico. In the mid to late 17th century the style reached its peak, later termed the High Baroque. Many monumental works were commissioned by Popes Urban VIII and Alexander VII. The sculptor and architect Gian Lorenzo Bernini designed a new quadruple colonnade around St. Peter's Square (1656–1667). The three galleries of columns in a giant ellipse balance the oversize dome and give the church and square a unity and the feeling of a giant theatre. Another major innovator of the Italian High Baroque was Francesco Borromini, whose major work was the Church of San Carlo alle Quattro Fontane, or Saint Charles of the Four Fountains (1634–1646). The sense of movement is given not by the decoration, but by the walls themselves, which undulate, and by concave and convex elements, including an oval tower and balcony inserted into a concave traverse.
The interior was equally revolutionary; the main space of the church was oval, beneath an oval dome. Painted ceilings, crowded with angels and saints and trompe-l'œil architectural effects, were an important feature of the Italian High Baroque. Major works included "The Entry of Saint Ignatius into Paradise" by Andrea Pozzo (1685–1695) in the Sant'Ignazio Church, Rome, and "The Triumph of the Name of Jesus" by Giovanni Battista Gaulli in the Church of the Gesù in Rome (1669–1683), which featured figures spilling out of the picture frame and dramatic oblique lighting and light-dark contrasts. The style spread quickly from Rome to other regions of Italy: it appeared in Venice in the church of Santa Maria della Salute (1631–1687) by Baldassare Longhena, a highly original octagonal form crowned with an enormous cupola. It appeared also in Turin, notably in the Chapel of the Holy Shroud (1668–1694) by Guarino Guarini. The style also began to be used in palaces; Guarini designed the Palazzo Carignano in Turin, while Longhena designed the Ca' Rezzonico on the Grand Canal (1657), finished by Giorgio Massari and decorated with paintings by Giovanni Battista Tiepolo. A series of massive earthquakes in Sicily required the rebuilding of most of the island's cities, several of which were rebuilt in the exuberant late Baroque or Rococo style. Spanish Baroque. The Catholic Church in Spain, and particularly the Jesuits, were the driving force of Spanish Baroque architecture. The first major work in this style was the San Isidro Chapel in Madrid, begun in 1643 by Pedro de la Torre. It contrasted an extreme richness of ornament on the exterior with simplicity in the interior, divided into multiple spaces and using effects of light to create a sense of mystery. The Santiago de Compostela Cathedral was modernized with a series of Baroque additions beginning at the end of the 17th century, starting with a highly ornate bell tower (1680), then flanked by two even taller and more ornate towers, called the "Obradoiro", added between 1738 and 1750 by Fernando de Casas Novoa. Another landmark of the Spanish Baroque is the chapel tower of the Palace of San Telmo in Seville by Leonardo de Figueroa. Granada had only been conquered from the Moors in the 15th century, and had its own distinct variety of Baroque. The painter, sculptor and architect Alonso Cano designed the Baroque interior of Granada Cathedral between 1652 and his death in 1657. It features dramatic contrasts between the massive white columns and the gold decor. The most ornamental and lavishly decorated architecture of the Spanish Baroque is called the Churrigueresque style, named after the brothers Churriguera, who worked primarily in Salamanca and Madrid. Their works include the buildings on Salamanca's main square, the Plaza Mayor (1729). This highly ornamental Baroque style was influential in many churches and cathedrals built by the Spanish in the Americas. Other notable Spanish Baroque architects of the late Baroque include Pedro de Ribera, a pupil of Churriguera, who designed the Real Hospicio de San Fernando in Madrid, and Narciso Tomé, who designed the celebrated El Transparente altarpiece at Toledo Cathedral (1729–1732), which gives the illusion, in certain light, of floating upwards. The architects of the Spanish Baroque had an effect far beyond Spain; their work was highly influential in the churches built in the Spanish colonies in Latin America and the Philippines.
The church built by the Jesuits for the College of San Francisco Javier in Tepotzotlán, with its ornate Baroque façade and tower, is a good example. Central Europe. From 1680 to 1750, many highly ornate cathedrals, abbeys, and pilgrimage churches were built in Central Europe, Austria, Bohemia and southwestern Poland. Some were in the Rococo style, a distinct, more flamboyant and asymmetric style which emerged from the Baroque, then replaced it in Central Europe in the first half of the 18th century, until it was replaced in turn by classicism. The princes of the multitude of states in that region also chose Baroque or Rococo for their palaces and residences, and often used Italian-trained architects to construct them. A notable example is the St. Nicholas Church (Malá Strana) in Prague (1704–1755), built by Christoph Dientzenhofer and his son Kilian Ignaz Dientzenhofer. Decoration covers all the walls of the interior of the church. The altar is placed in the nave beneath the central dome and surrounded by chapels; light comes down from the dome above and from the surrounding chapels. The altar is entirely surrounded by arches, columns, curved balustrades and pilasters of coloured stone, which are richly decorated with statuary, creating a deliberate confusion between the real architecture and the decoration. The architecture is transformed into a theatre of light, colour and movement. In Poland, the Italian-inspired Polish Baroque lasted from the early 17th to the mid-18th century and emphasised richness of detail and colour. The first Baroque building in present-day Poland, and probably one of the most recognizable, is the Saints Peter and Paul Church, Kraków, designed by Giovanni Battista Trevano. Sigismund's Column in Warsaw, erected in 1644, was the world's first secular Baroque monument built in the form of a column. The palatial residence style was exemplified by the Wilanów Palace, constructed between 1677 and 1696. The most renowned Baroque architect active in Poland was the Dutchman Tylman van Gameren; his notable works include Warsaw's St. Kazimierz Church and Krasiński Palace, the Church of St. Anne, Kraków, and Branicki Palace, Białystok. However, the most celebrated work of the Polish Baroque is the Poznań Fara Church, with details by Pompeo Ferrari. After the Thirty Years' War, under the agreements of the Peace of Westphalia, two unique Baroque wattle-and-daub structures were built: the Church of Peace in Jawor and the Holy Trinity Church of Peace in Świdnica, the latter the largest wooden Baroque temple in Europe. German Baroque. The many states within the Holy Roman Empire on the territory of today's Germany all looked to represent themselves with impressive Baroque buildings. Notable architects included Johann Bernhard Fischer von Erlach and Lukas von Hildebrandt in Austria, Dominikus Zimmermann in Bavaria, Balthasar Neumann in Brühl, and Matthäus Daniel Pöppelmann in Dresden. In Prussia, Frederick II was inspired by the Grand Trianon of the Palace of Versailles, and used it as the model for his summer residence, Sanssouci, in Potsdam, designed for him by Georg Wenzeslaus von Knobelsdorff (1745–1747). Another work of Baroque palace architecture is the Zwinger in Dresden, the former orangery of the palace of the electors of Saxony in the 18th century. One of the best examples of a Rococo church is the Basilika Vierzehnheiligen, or Basilica of the Fourteen Holy Helpers, a pilgrimage church located near the town of Bad Staffelstein near Bamberg, in Bavaria, southern Germany.
The Basilica was designed by Balthasar Neumann and was constructed between 1743 and 1772, its plan a series of interlocking circles around a central oval, with the altar placed in the exact centre of the church. The interior of this church illustrates the summit of Rococo decoration. Another notable example of the style is the Pilgrimage Church of Wies (German: Wieskirche). It was designed by the brothers J. B. and Dominikus Zimmermann. It is located in the foothills of the Alps, in the municipality of Steingaden in the Weilheim-Schongau district, Bavaria, Germany. Construction took place between 1745 and 1754, and the interior was decorated with frescoes and with stuccowork in the tradition of the Wessobrunner School. It is now a UNESCO World Heritage Site. French Baroque. Baroque in France developed quite differently from the ornate and dramatic local versions of the Baroque in Italy, Spain and the rest of Europe. It appears severe, more detached and restrained by comparison, anticipating Neoclassicism and the architecture of the Enlightenment. Unlike Italian buildings, French Baroque buildings have no broken pediments or curvilinear façades. Even religious buildings avoided the intense spatial drama one finds in the work of Borromini. The style is closely associated with the works built for Louis XIV (reigned 1643–1715), and because of this, it is also known as the Louis XIV style. Louis XIV invited the master of the Baroque, Bernini, to submit a design for the new east wing of the Louvre, but rejected it in favor of a more classical design by Claude Perrault and Louis Le Vau. The main architects of the style included François Mansart (1598–1666), Pierre Le Muet (Church of Val-de-Grâce, 1645–1665) and Louis Le Vau (Vaux-le-Vicomte, 1657–1661). Mansart was the first architect to introduce Baroque styling, principally the frequent use of an applied order and heavy rustication, into the French architectural vocabulary. The mansard roof was not invented by Mansart, but it has become associated with him, as he used it frequently. The major royal project of the period was the expansion of the Palace of Versailles, begun in 1661 by Le Vau with decoration by the painter Charles Le Brun. The gardens were designed by André Le Nôtre specifically to complement and amplify the architecture. The Galerie des Glaces (Hall of Mirrors), the centerpiece of the château, with paintings by Le Brun, was constructed between 1678 and 1686. Jules Hardouin-Mansart completed the Grand Trianon in 1687. The chapel, designed by Robert de Cotte, was finished in 1710. Following the death of Louis XIV, Louis XV added the more intimate Petit Trianon and the highly ornate theatre. The fountains in the gardens were designed to be seen from the interior, and to add to the dramatic effect. The palace was admired and copied by other monarchs of Europe, particularly Peter the Great of Russia, who visited Versailles early in the reign of Louis XV, and built his own version at Peterhof Palace near Saint Petersburg between 1705 and 1725. Portuguese Baroque. Baroque architecture in Portugal lasted about two centuries (the late seventeenth century and the eighteenth century). The reigns of John V and Joseph I saw increased imports of gold and diamonds, in a period called royal absolutism, which allowed the Portuguese Baroque to flourish. Baroque architecture in Portugal enjoys a special situation and a different timeline from the rest of Europe.
It was conditioned by several political, artistic, and economic factors that gave rise to several phases and different kinds of outside influence, resulting in a unique blend, often misunderstood by those looking for Italian art, who find instead specific forms and character that give it a uniquely Portuguese variety. Another key factor was the existence of Jesuit architecture, also called the "plain style" (Estilo Chão or Estilo Plano), which, as the name suggests, is plainer and somewhat austere. The buildings are single-room basilicas with a deep main chapel, lateral chapels (with small doors for communication), no interior or exterior decoration, and a simple portal and windows. It was a practical design, allowing it to be built throughout the empire with minor adjustments, and prepared to be decorated later, when economic resources became available. In fact, the first Portuguese Baroque does not lack buildings, because the "plain style" was easy to transform by means of decoration (painting, tiling, etc.), turning empty areas into lavish, elaborate Baroque scenery. The same could be applied to the exterior. It was subsequently easy to adapt the building to the taste of the time and place and to add new features and details: practical and economical. With more inhabitants and better economic resources, the north, particularly the areas of Porto and Braga, witnessed an architectural renewal, visible in the large list of churches, convents and palaces built by the aristocracy. Porto is the city of the Baroque in Portugal; its historical centre is part of the UNESCO World Heritage List. Many of the Baroque works in the historical area of the city and beyond belong to Nicolau Nasoni, an Italian architect living in Portugal, who designed original buildings with scenographic placement, such as the church and tower of Clérigos, the loggia of the Porto Cathedral, the church of Misericórdia, the Palace of São João Novo, the Palace of Freixo, and the Episcopal Palace (Portuguese: "Paço Episcopal do Porto"), along with many others. Russian Baroque. The debut of Russian Baroque, or Petrine Baroque, followed a long visit of Peter the Great to western Europe in 1697–1698, where he visited the Châteaux of Fontainebleau and Versailles as well as other architectural monuments. He decided, on his return to Russia, to construct similar monuments in St. Petersburg, which became the new capital of Russia in 1712. Early major monuments in the Petrine Baroque include the Peter and Paul Cathedral and the Menshikov Palace. During the reigns of Anna and Elizabeth, Russian architecture was dominated by the luxurious Baroque style of the Italian-born Francesco Bartolomeo Rastrelli, which developed into the Elizabethan Baroque. Rastrelli's signature buildings include the Winter Palace, the Catherine Palace and the Smolny Cathedral. Other distinctive monuments of the Elizabethan Baroque are the bell tower of the Troitse-Sergiyeva Lavra and the Red Gate. In Moscow, Naryshkin Baroque became widespread, especially in the architecture of Eastern Orthodox churches in the late 17th century. It was a combination of western European Baroque with traditional Russian folk styles. Baroque in the Spanish and Portuguese Colonial Americas.
Due to the colonization of the Americas by European countries, the Baroque naturally moved to the New World, finding especially favorable ground in the regions dominated by Spain and Portugal, both countries being centralized and irreducibly Catholic monarchies, by extension subject to Rome and adherents of the Baroque Counter-Reformation. European artists migrated to America and founded schools there, and, along with the widespread penetration of Catholic missionaries, many of whom were skilled artists, created a multiform Baroque often influenced by popular taste. Criollo and indigenous artisans did much to give this Baroque its unique features. The main centres of American Baroque that are still standing are (in this order) Mexico, Peru, Brazil, Cuba, Ecuador, Colombia, Bolivia, Guatemala, Nicaragua, Puerto Rico and Panama. Of particular note is the so-called "Missionary Baroque", developed in the framework of the Spanish reductions, indigenous settlements organized by Spanish Catholic missionaries, in areas extending from Mexico and the southwestern portions of the present-day United States as far south as Argentina and Chile, in order to convert the inhabitants to the Christian faith and acculturate them to Western life. It formed a hybrid Baroque influenced by Native culture, in which Criollos and many indigenous artisans and musicians flourished, some of them literate and of great ability and talent. Missionaries' accounts often repeat that Western art, especially music, had a hypnotic impact on forest dwellers, and the images of saints were viewed as having great powers. Many natives were converted, and a new form of devotion was created, of passionate intensity, laden with mysticism, superstition, and theatricality, which delighted in festive masses, sacred concerts, and mysteries. Colonial Baroque architecture in Spanish America is characterized by profuse decoration (the portal of La Profesa Church, Mexico City; façades covered with Puebla-style azulejos, as in the Church of San Francisco Acatepec in San Andrés Cholula and the Convent Church of San Francisco, Puebla), which would be intensified in the so-called Churrigueresque style (the façade of the Tabernacle of the Mexico City Metropolitan Cathedral, by Lorenzo Rodríguez; the Church of San Francisco Javier, Tepotzotlán; the Church of Santa Prisca de Taxco). In Peru, the constructions, mostly developed in the cities of Lima, Cusco, Arequipa and Trujillo from 1650 onward, show original characteristics that were advanced even in comparison with the European Baroque, such as the use of cushioned walls and Solomonic columns (Iglesia de la Compañía de Jesús, Cusco; Basilica and Convent of San Francisco, Lima). Other notable examples include the Metropolitan Cathedral of Sucre in Bolivia; the Cathedral Basilica of Esquipulas in Guatemala; Tegucigalpa Cathedral in Honduras; León Cathedral in Nicaragua; the Church of la Compañía de Jesús, Quito, Ecuador; the Church of San Ignacio, Bogotá, Colombia; the Caracas Cathedral in Venezuela; the Cabildo of Buenos Aires in Argentina; the Church of Santo Domingo in Santiago, Chile; and Havana Cathedral in Cuba. Also worth remembering is the quality of the churches of the Spanish Jesuit missions in Bolivia, the Spanish Jesuit missions in Paraguay, the Spanish missions in Mexico and the Spanish Franciscan missions in California.
In Brazil, as in the mother country, Portugal, the architecture has a certain Italian influence, usually of a Borrominesque type, as can be seen in the Co-Cathedral of Recife (1784) and the Church of Nossa Senhora da Glória do Outeiro in Rio de Janeiro (1739). In the region of Minas Gerais, the work of Aleijadinho stands out: he was the author of a group of churches notable for their curved planimetry, façades with concave-convex dynamic effects and a plastic treatment of all architectural elements (Church of São Francisco de Assis, Ouro Preto, 1765–1788). Baroque in the Spanish and Portuguese Colonial Asia. In the Portuguese colonies of India (Goa, Daman and Diu) an architectural style of Baroque forms mixed with Hindu elements flourished, such as the Se Cathedral and the Basilica of Bom Jesus of Goa, which houses the tomb of St. Francis Xavier. The set of churches and convents of Goa was declared a World Heritage Site in 1986. In the Philippines, which was a Spanish colony for over three centuries, a large number of Baroque constructions are preserved. Four of these churches, as well as the Baroque and Neoclassical city of Vigan, are UNESCO World Heritage Sites; and although they lack formal classification, the Walled City of Manila and the city of Tayabas both contain a significant extent of Spanish-Baroque-era architecture. Echoes in Wallachia and Moldavia. The Baroque is a Western style, born in Italy. Through the commercial and cultural relationships of Italians with countries of the Balkan Peninsula, including Moldavia and Wallachia, Baroque influences arrived in Eastern Europe. These influences were not very strong, since they appear mainly in architecture and stone-sculpted ornament, and are intensely mixed with details taken from Byzantine and Islamic art. Before and after the fall of the Byzantine Empire, all the art of Wallachia and Moldavia was primarily influenced by that of Constantinople. Until the end of the 16th century, with few modifications, the plans of churches and monasteries, the murals, and the ornaments carved in stone remained the same as before. From a period starting with the reigns of Matei Basarab (1632–1654) and Vasile Lupu (1634–1653), which coincided with the popularization of the Italian Baroque, new ornaments were added, and the style of religious furniture changed. This was no accident: decorative elements and principles were brought from Italy, through Venice or through the Dalmatian regions, and were adopted by architects and craftsmen from the east. The window and door frames, the "pisanie" with dedication, the tombstones, the columns and railings, and part of the bronze, silver or wooden furniture came to play a more important role than before. They had existed before too, inspired by the Byzantine tradition, but they now gained a more realistic look, showing delicate floral motifs. Relief carving became more accentuated, gaining volume and consistency; before this period, reliefs from Wallachia and Moldavia, like those from the East, had only two planes, a small distance apart, one at the surface and the other in depth. Big flowers, perhaps roses, peonies or thistles, and thick leaves of acanthus or a similar plant twist around columns or surround doors and windows. Columns and railings were places where the Baroque had a strong influence, and capitals were decorated with foliage more richly than before.
Columns often have twisting shafts, a local reinterpretation of the Solomonic column. Elaborate railings decorated with rinceaux are placed between these columns; some of those at the Mogoșoaia Palace are also decorated with dolphins. Cartouches are also used sometimes, mostly on tombstones, as on that of Constantin Brâncoveanu. This movement is known as the Brâncovenesc style, after Constantin Brâncoveanu (1654–1714), a ruler of Wallachia whose reign is strongly associated with this kind of architecture and design. The style persisted through the 18th century and into part of the 19th. Many of the churches and residences erected by the boyars and voivodes of these periods are Brâncovenesc. Although Baroque influences can clearly be seen, the Brâncovenesc style takes much more inspiration from the local tradition. As the 18th century passed, with the reigns of the Phanariots (members of prominent Greek families of Phanar, Istanbul) in Wallachia and Moldavia, Baroque influences came from Istanbul too. Such influences had arrived before, during the 17th century, but under the Phanariots more of the Western Baroque motifs that reached the Ottoman Empire found their final destination in present-day Romania. In Moldavia, Baroque elements came from Russia too, where the influence of Italian art was strong. Painting. Baroque painters worked deliberately to set themselves apart from the painters of the Renaissance and the Mannerism period after it. In their palette, they used intense and warm colours, and particularly made use of the primary colours red, blue and yellow, frequently putting all three in close proximity. They avoided the even lighting of Renaissance painting and used strong contrasts of light and darkness on certain parts of the picture to direct attention to the central actions or figures. In their composition, they avoided the tranquil scenes of Renaissance paintings, and chose the moments of the greatest movement and drama. Unlike the tranquil faces of Renaissance paintings, the faces in Baroque paintings clearly expressed their emotions. They often used asymmetry, with action occurring away from the centre of the picture, and created axes that were neither vertical nor horizontal, but slanting to the left or right, giving a sense of instability and movement. They enhanced this impression of movement by having the costumes of the personages blown by the wind, or moved by their own gestures. The overall impressions were movement, emotion and drama. Another essential element of Baroque painting was allegory; every painting told a story and had a message, often encrypted in symbols and allegorical characters, which an educated viewer was expected to know and read. Early evidence of Italian Baroque ideas in painting occurred in Bologna, where Annibale Carracci, Agostino Carracci and Ludovico Carracci sought to return the visual arts to the ordered Classicism of the Renaissance. Their art, however, also incorporated ideas central to the Counter-Reformation; these included intense emotion and religious imagery that appealed more to the heart than to the intellect. Another influential painter of the Baroque era was Michelangelo Merisi da Caravaggio. His realistic approach to the human figure, painted directly from life and dramatically spotlit against a dark background, shocked his contemporaries and opened a new chapter in the history of painting.
Other major painters associated closely with the Baroque style include Artemisia Gentileschi, Elisabetta Sirani, Giovanna Garzoni, Guido Reni, Domenichino, Andrea Pozzo, and Paolo de Matteis in Italy; Francisco de Zurbarán, Bartolomé Esteban Murillo and Diego Velázquez in Spain; Adam Elsheimer in Germany; and Nicolas Poussin, Simon Vouet, Georges de La Tour and Claude Lorrain in France (though Poussin and Lorrain spent most of their working lives in Italy). Poussin and de La Tour adopted a "classical" Baroque style, with less focus on emotion and greater attention to the line of the figures in the painting than to colour. Peter Paul Rubens was the most important painter of the Flemish Baroque style. Rubens' highly charged compositions reference erudite aspects of classical and Christian history. His unique and immensely popular Baroque style emphasised movement, colour, and sensuality, which followed the immediate, dramatic artistic style promoted in the Counter-Reformation. Rubens specialized in making altarpieces, portraits, landscapes, and history paintings of mythological and allegorical subjects. One important domain of Baroque painting was "quadratura", or painting in trompe-l'œil, which literally "fooled the eye". These works were usually painted on the stucco of ceilings or upper walls and balustrades, and gave those on the ground looking up the impression that they were seeing the heavens populated with crowds of angels, saints and other heavenly figures, set against painted skies and imaginary architecture. In Italy, artists often collaborated with architects on interior decoration; Pietro da Cortona was one of the painters of the 17th century who employed this illusionist way of painting. Among his most important commissions were the frescoes he painted for the Palazzo Barberini (1633–39), to glorify the reign of Pope Urban VIII. Pietro da Cortona's compositions were the largest decorative frescoes executed in Rome since the work of Michelangelo at the Sistine Chapel. François Boucher was an important figure in the more delicate French Rococo style, which appeared during the late Baroque period. He designed tapestries, carpets and theatre decoration as well as painting. His work was extremely popular with Madame de Pompadour, the mistress of King Louis XV. His paintings featured mythological, romantic, and mildly erotic themes. Hispanic Americas. In the Hispanic Americas, the first influences were from Sevillan Tenebrism, mainly from Zurbarán, some of whose works are still preserved in Mexico and Peru, as can be seen in the work of the Mexicans José Juárez and Sebastián López de Arteaga, and the Bolivian Melchor Pérez de Holguín. The Cusco School of painting arose after the arrival of the Italian painter Bernardo Bitti in 1583, who introduced Mannerism to the Americas. Notable was the work of Luis de Riaño, a disciple of the Italian Angelino Medoro and author of the murals of the Church of San Pedro, Andahuaylillas. Also notable were the Indian (Quechua) painters Diego Quispe Tito and Basilio Santa Cruz Pumacallao, as well as Marcos Zapata, author of the fifty large canvases that cover the high arches of Cusco Cathedral. In Ecuador, the Quito School was formed, mainly represented by the mestizo Miguel de Santiago and the criollo Nicolás Javier de Goríbar. In the 18th century, sculptural altarpieces began to be replaced by paintings, notably developing Baroque painting in the Americas.
Similarly, the demand for civil works, mainly portraits of the aristocratic classes and the ecclesiastical hierarchy, grew. The main influence was that of Murillo, and in some cases, as in the criollo Cristóbal de Villalpando, that of Juan de Valdés Leal. The painting of this era has a more sentimental tone, with sweet and softer shapes. Its proponents include Gregorio Vasquez de Arce y Ceballos in Colombia, and Juan Rodríguez Juárez and Miguel Cabrera in Mexico. Sculpture. The dominant figure in Baroque sculpture was Gian Lorenzo Bernini. Under the patronage of Pope Urban VIII, he made a remarkable series of monumental statues of saints and figures whose faces and gestures vividly expressed their emotions, as well as portrait busts of exceptional realism, and highly decorative works for the Vatican such as the imposing Chair of St. Peter beneath the dome in St. Peter's Basilica. In addition, he designed fountains with monumental groups of sculpture to decorate the major squares of Rome. Baroque sculpture was inspired by ancient Roman statuary, particularly by the famous first-century CE statue of "Laocoön and His Sons", which was unearthed in 1506 and put on display in the gallery of the Vatican. When he visited Paris in 1665, Bernini addressed the students at the academy of painting and sculpture. He advised the students to work from classical models, rather than from nature. He told the students, "When I had trouble with my first statue, I consulted the "Antinous" like an oracle." That "Antinous" statue is known today as the Hermes of the Museo Pio-Clementino. Notable late French Baroque sculptors included Étienne Maurice Falconet and Jean-Baptiste Pigalle. Pigalle was commissioned by Frederick the Great to make statues for Frederick's own version of Versailles at Sanssouci in Potsdam, Germany. Falconet also received an important foreign commission, creating the famous "Bronze Horseman" statue of Peter the Great found in St. Petersburg. In Spain, the sculptor Francisco Salzillo worked exclusively on religious themes, using polychromed wood. Some of the finest Baroque sculptural craftsmanship was found in the gilded stucco altars of churches of the Spanish colonies of the New World, made by local craftsmen; examples include the Chapel del Rosario, Puebla (Mexico), 1724–1731. Furniture. The main motifs used are: horns of plenty, festoons, baby angels, lion heads holding a metal ring in their mouths, female faces surrounded by garlands, oval cartouches, acanthus leaves, classical columns, caryatids, pediments and other elements of Classical architecture sculpted on some parts of pieces of furniture, baskets with fruits or flowers, shells, armour and trophies, heads of Apollo or Bacchus, and C-shaped volutes. During the first period of the reign of Louis XIV, furniture followed the previous Louis XIII style, and was massive and profusely decorated with sculpture and gilding. After 1680, thanks in large part to the furniture designer André-Charles Boulle, a more original and delicate style appeared, sometimes known as Boulle work. It was based on the inlay of ebony and other rare woods, a technique first used in Florence in the 15th century, which was refined and developed by Boulle and others working for Louis XIV. Furniture was inlaid with plaques of ebony, copper, and exotic woods of different colors. New and often enduring types of furniture appeared; the commode, with two to four drawers, replaced the old "coffre", or chest.
The "canapé", or sofa, appeared, in the form of a combination of two or three armchairs. New kinds of armchairs appeared, including the "fauteuil en confessionale" or "Confessional armchair", which had padded cushions ions on either side of the back of the chair. The console table also made its first appearance; it was designed to be placed against a wall. Another new type of furniture was the "table à gibier", a marble-topped table for holding dishes. Early varieties of the desk appeared; the Mazarin desk had a central section set back, placed between two columns of drawers, with four feet on each column. Music. The term "Baroque" is also used to designate the style of music composed during a period that overlaps with that of Baroque art. The first uses of the term 'baroque' for music were criticisms. In an anonymous, satirical review of the première in October 1733 of Jean-Philippe Rameau's "Hippolyte et Aricie," printed in the "Mercure de France" in May 1734, the critic implied that the novelty of this opera was "du barocque," complaining that the music lacked coherent melody, was filled with unremitting dissonances, constantly changed key and meter, and speedily ran through every compositional device. Jean-Jacques Rousseau, who was a musician and noted composer as well as philosopher, made a very similar observation in 1768 in the famous "Encyclopédie" of Denis Diderot: "Baroque music is that in which the harmony is confused, and loaded with modulations and dissonances. The singing is harsh and unnatural, the intonation difficult, and the movement limited. It appears that term comes from the word 'baroco' used by logicians." Common use of the term for the music of the period began only in 1919, by Curt Sachs, and it was not until 1940 that it was first used in English in an article published by Manfred Bukofzer. The baroque was a period of musical experimentation and innovation which explains the amount of ornaments and improvisation performed by the musicians. New forms were invented, including the concerto and sinfonia. Opera was born in Italy at the end of the 16th century (with Jacopo Peri's mostly lost "Dafne", produced in Florence in 1598) and soon spread through the rest of Europe: Louis XIV created the first Royal Academy of Music. In 1669 the poet Pierre Perrin opened an academy of opera in Paris, the first opera theatre in France open to the public, and premiered "Pomone", the first grand opera in French, with music by Robert Cambert, with five acts, elaborate stage machinery, and a ballet. Heinrich Schütz in Germany, Jean-Baptiste Lully in France, and Henry Purcell in England all helped to establish their national traditions in the 17th century. Several new instruments, including the piano, were introduced during this period. The invention of the piano is credited to Bartolomeo Cristofori (1655–1731) of Padua, Italy, who was employed by Ferdinando de' Medici, Grand Prince of Tuscany, as the Keeper of the Instruments. Cristofori named the instrument "un cimbalo di cipresso di piano e forte" ("a keyboard of cypress with soft and loud"), abbreviated over time as "pianoforte", "fortepiano", and later, simply, piano. Dance. The classical ballet also originated in the Baroque era. The style of court dance was brought to France by Marie de' Medici, and in the beginning the members of the court themselves were the dancers. Louis XIV himself performed in public in several ballets. In March 1662, the Académie Royale de Danse, was founded by the King. 
It was the first professional dance school and company, and set the standards and vocabulary for ballet throughout Europe during the period. Literary theory. Heinrich Wölfflin was the first to transfer the term Baroque to literature. The key concepts of Baroque literary theory, such as "conceit" ("concetto"), "wit" ("acutezza", "ingegno"), and "wonder" ("meraviglia"), were not fully developed in literary theory until the publication of Emanuele Tesauro's "Il Cannocchiale aristotelico" (The Aristotelian Telescope) in 1654. This seminal treatise, inspired by Giambattista Marino's epic "Adone" and the work of the Spanish Jesuit philosopher Baltasar Gracián, developed a theory of metaphor as a universal language of images and as a supreme intellectual act, at once an artifice and an epistemologically privileged mode of access to truth. Dramaturgy of Central Europe in the Baroque. Walter Benjamin's study of the Baroque, "The Origin of German Tragic Drama", is a notoriously difficult but much-admired standard work on the period. Nominally a study of Baroque drama, it in fact ranges over an extraordinarily diverse and even arcane body of material, focusing its attention on Central Europe and on Germany in particular, though Austrians of the Holy Roman Empire, and even Spaniards under the Habsburg Emperor Ferdinand, are sometimes mentioned. A major theme of the work is Benjamin's mapping of the way in which the period arose in reaction to the collectively traumatic violence of the Thirty Years' War, a war in which virtually all of Europe participated at the bloody climax of the Reformation, though it was fought more or less exclusively in the Holy Roman Empire, with all the major powers (with the exception of England and Russia, which nevertheless became embroiled or were affected in various ways) sending their armies to meet in battle on that terrain. For Benjamin, the almost pathological-seeming (or at any rate historically aberrant and intense) elaboration of detail, the tendency toward recursive involution, and the horror vacui quality of the era's cultural production arose as a psychic defense against, or digressive suppression of, terror and anomie in the absence of the symbolically transcendent authority long embodied in the institutions and ritual forms of the Western Church in Rome, following the collapse of its continental supremacy in administration and social control. This process has sometimes been called the "dismemberment of Christendom" or, more positively, the birth of modernity and thus of the hegemony of capitalism, as discussed by Max Weber and others, including Hugh Trevor-Roper in "The Crisis of the Seventeenth Century" and his better-known monograph on the European witch craze. Theatre. The Baroque period was a golden age for theatre in France and Spain; playwrights included Corneille, Racine and Molière in France, and Lope de Vega and Pedro Calderón de la Barca in Spain. During the Baroque period, the art and style of the theatre evolved rapidly, alongside the development of opera and of ballet.
The design of newer and larger theatres, the invention and use of more elaborate machinery, and the wider use of the proscenium arch, which framed the stage and hid the machinery from the audience, encouraged more scenic effects and spectacle. The Baroque had a Catholic and conservative character in Spain, following an Italian literary model during the Renaissance. The Hispanic Baroque theatre aimed for a public content with an ideal reality that manifested three fundamental sentiments: Catholic religion, monarchist and national pride, and honour originating from the chivalric, knightly world. Two periods are known in the Baroque Spanish theatre, with the division occurring in 1630. The first period is represented chiefly by Lope de Vega, but also by Tirso de Molina, Gaspar Aguilar, Guillén de Castro, Antonio Mira de Amescua, Luis Vélez de Guevara, Juan Ruiz de Alarcón, Diego Jiménez de Enciso, Luis Belmonte Bermúdez, Felipe Godínez, Luis Quiñones de Benavente and Juan Pérez de Montalbán. Many of these figures attended "academias literarias" (literary academies), including the famous Medrano Academy founded by Sebastián Francisco de Medrano. The second period is represented by Pedro Calderón de la Barca and fellow dramatists Antonio Hurtado de Mendoza, Álvaro Cubillo de Aragón, Jerónimo de Cáncer, Francisco de Rojas Zorrilla, Juan de Matos Fragoso, Antonio Coello y Ochoa, Agustín Moreto, and Francisco Bances Candamo. These classifications are loose, because each author had his own style and only occasionally adhered to the formula established by Lope; it may even be that Lope's "manner" was more liberal and less structured than Calderón's. Lope de Vega introduced through his "Arte nuevo de hacer comedias en este tiempo" (1609) the "new comedy". He established a new dramatic formula that broke the three Aristotelian unities of the Italian school of poetry (action, time, and place) as well as a fourth, stylistic unity, mixing tragic and comic elements and showing different types of verse and stanza according to what is represented. Although Lope had a great knowledge of the plastic arts, he did not use it during the major part of his career, either in theatre or in scenography; Lope's comedy granted a secondary role to the visual aspects of theatrical representation. Tirso de Molina, Lope de Vega, and Calderón were the most important playwrights in Golden Age Spain. Their works, known for their subtle intelligence and profound comprehension of a person's humanity, could be considered a bridge between Lope's primitive comedy and the more elaborate comedy of Calderón. Tirso de Molina is best known for two works, "The Convicted Suspicions" and "The Trickster of Seville", one of the first versions of the Don Juan myth. Upon his arrival in Madrid, Cosimo Lotti brought to the Spanish court the most advanced theatrical techniques of Europe. His techniques and mechanical knowledge were applied in palace exhibitions called "Fiestas" and in lavish exhibitions on rivers or artificial fountains called "Naumaquias". He was in charge of styling the Gardens of Buen Retiro, of Zarzuela and of Aranjuez, and of the construction of the theatrical building of the Coliseo del Buen Retiro. Lope's formula began to give way with the foundation of the palace theatre and the birth of new concepts that launched the careers of playwrights such as Calderón de la Barca.
While maintaining the principal innovations of the New Lopesian Comedy, Calderón's style marked many differences, with a great deal of constructive care and attention to internal structure. Calderón's work is characterized by formal perfection and a highly lyrical and symbolic language. The liberty, vitality and openness of Lope gave way to Calderón's intellectual reflection and formal precision. His comedy reflected his ideological and doctrinal intentions above passion and action, and his work on autos sacramentales achieved high rank. The genre of the comedia is political, multi-artistic and in a sense hybrid: the poetic text was interwoven with media and resources originating from architecture, music and painting, freeing itself from the illusionism of the Lopesian comedy, which had been built on the absence of scenery and on dialogue that carried the action. The best-known German playwright was Andreas Gryphius, who used the Jesuit model of the Dutchman Joost van den Vondel and Pierre Corneille. There was also Johannes Velten, who combined the traditions of the English comedians and the commedia dell'arte with the classic theatre of Corneille and Molière. His touring company was perhaps the most significant and important of the 17th century. The foremost Italian Baroque tragedian was Federico Della Valle. His literary activity is summed up by the four plays that he wrote for the courtly theatre: the tragicomedy "Adelonda di Frigia" (1595) and especially his three tragedies, "Judith" (1627), "Esther" (1627) and "La reina di Scotia" (1628). Della Valle had many imitators and followers who combined in their works Baroque taste and the didactic aims of the Jesuits (Francesco Sforza Pallavicino, Girolamo Graziani, etc.). In the Tsardom of Russia, the development of the Russian version of the Baroque took shape only in the second half of the 17th century, primarily due to the initiative of Tsar Alexis of Russia, who wanted to open a court theatre in 1672. Its director and dramatist was Johann Gottfried Gregorii, a German-Russian Lutheran pastor, who wrote, in particular, a 10-hour play, "The Action of Artaxerxes". The dramaturgy of Symeon of Polotsk and Demetrius of Rostov became a key contribution to the Russian Baroque. Spanish colonial Americas. Following the pattern set in Spain, at the end of the 16th century the companies of actors, essentially itinerant, began to professionalize. With professionalization came regulation and censorship: as in Europe, the theatre oscillated between tolerance, and even government protection, and rejection (with exceptions) or persecution by the Church. The theatre was useful to the authorities as an instrument for disseminating the desired behaviour and models, respect for the social order and the monarchy, and as a school of religious dogma. The "corrales" were administered for the benefit of hospitals, which shared in the proceeds of the performances. The itinerant companies (or companies "of the league"), which brought theatre on improvised open-air stages to regions that had no fixed venues, required a viceregal licence to work, the price of which ("pinción") was destined for alms and pious works. For companies that worked stably in the capitals and major cities, one of their main sources of income was participation in the festivities of Corpus Christi, which provided them with not only economic benefits but also recognition and social prestige.
Performances in the viceregal palace and the mansions of the aristocracy, where companies presented both the comedies of their repertoire and special productions with great lighting effects, scenery, and staging, were also an important source of well-paid and prestigious work. Born in the Viceroyalty of New Spain but later settled in Spain, Juan Ruiz de Alarcón is the most prominent figure in the Baroque theatre of New Spain. Despite his accommodation to Lope de Vega's new comedy, his "marked secularism", his discretion and restraint, and a keen capacity for "psychological penetration" have been noted as distinctive features of Alarcón compared with his Spanish contemporaries. Noteworthy among his works is "La verdad sospechosa", a comedy of characters that reflected his constant moralizing purpose. The dramatic production of Sor Juana Inés de la Cruz places her as the second figure of the Spanish-American Baroque theatre. Worth mentioning among her works are the auto sacramental "El divino Narciso" and the comedy "Los empeños de una casa". Gardens. The Baroque garden, also known as the "jardin à la française" or French formal garden, first appeared in Rome in the 16th century, and then most famously in France in the 17th century, in the gardens of Vaux-le-Vicomte and the Palace of Versailles. Baroque gardens were built by kings and princes in Germany, the Netherlands, Austria, Spain, Poland, Italy and Russia until the mid-18th century, when they began to be remade into the more natural English landscape garden. The purpose of the Baroque garden was to illustrate the power of man over nature, and the glory of its builder. Baroque gardens were laid out in geometric patterns, like the rooms of a house. They were usually best seen from the outside and looking down, either from a château or a terrace. The elements of a Baroque garden included parterres of flower beds or low hedges trimmed into ornate Baroque designs, and straight lanes and alleys of gravel which divided and crisscrossed the garden. Terraces, ramps, staircases and cascades were placed where there were differences of elevation, and provided viewing points. Circular or rectangular ponds or basins of water were the settings for fountains and statues. Bosquets, carefully trimmed groves or lines of identical trees, gave the appearance of walls of greenery and were backdrops for statues. On the edges, the gardens usually had pavilions, orangeries and other structures where visitors could take shelter from the sun or rain. Baroque gardens required enormous numbers of gardeners, continual trimming, and abundant water. In the later part of the Baroque period, the formal elements began to be replaced with more natural features, including winding paths, groves of varied trees left to grow untrimmed, rustic architecture and picturesque structures, such as Roman temples or Chinese pagodas, as well as "secret gardens" on the edges of the main garden, filled with greenery, where visitors could read or have quiet conversations. By the mid-18th century most of the Baroque gardens were partially or entirely transformed into variations of the English landscape garden. Besides Versailles and Vaux-le-Vicomte, celebrated Baroque gardens still retaining much of their original appearance include the Royal Palace of Caserta near Naples; Nymphenburg Palace and Augustusburg and Falkenlust Palaces, Brühl, in Germany; Het Loo Palace in the Netherlands; the Belvedere Palace in Vienna; the Royal Palace of La Granja de San Ildefonso in Spain; and Peterhof Palace in St.
Petersburg, Russia. Urban planning and design. From the 16th through the 19th century, European cities witnessed a large change in urban design and planning principles that reshaped their landscapes and built environment. Rome, Paris, and other major cities were transformed to accommodate growing populations through improvements in housing, transportation, and public services. Throughout this time, the Baroque style was in full swing, and the influence of its elaborate, dramatic, and artistic architectural style extended into the urban fabric through what is known as Baroque urban planning. The experience of living and walking in the cities was meant to complement the emotions of the Baroque style. This style of planning often embraced displays of the wealth and strength of the ruling powers, and important buildings served as the visual and symbolic centres of the cities. The replanning of the city of Rome under the rule of Pope Sixtus V revived and expanded the city in the 16th century. Many grand piazzas and squares were added as public spaces to contribute to the dramatic effect of the Baroque style. The piazzas featured fountains and other decorative features to embody the emotions of the time. An important factor in Baroque-style planning was to connect churches, government structures, and piazzas together in a refined network of axes. This allowed the important landmarks of the Catholic Church to become the focal points of the city. More characteristics of Baroque urban planning are embodied in Barcelona. The Eixample district, designed by Ildefons Cerdà, showcases wide avenues in a grid system with a few diagonal boulevards. The intersections are unique, with octagonal blocks which provide the streets with great visibility and light. Many works in this district come from the architect Antoni Gaudí, who displays a unique style. At the centre of the Eixample district design is the Sagrada Família by Gaudí, which holds great significance for the city. Posterity. Transition to Rococo. The Rococo is the final stage of the Baroque, and in many ways took the Baroque's fundamental qualities of illusion and drama to their logical extremes. Beginning in France as a reaction against the heavy Baroque grandeur of Louis XIV's court at the Palace of Versailles, the Rococo movement became associated particularly with the powerful Madame de Pompadour (1721–1764), the mistress of the new king, Louis XV (1710–1774). Because of this, the style was also known as "Pompadour". Although it is highly associated with the reign of Louis XV, the style did not first appear in that period: multiple works from the last years of Louis XIV's reign are examples of early Rococo. The name of the movement derives from the French "rocaille", or pebble, and refers to the stones and shells that decorate the interiors of grottoes, as similar shell forms became a common feature in Rococo design. It began as a design and decorative arts style, and was characterized by elegant flowing shapes. Architecture followed, and then painting and sculpture. The French painter with whom the term Rococo is most often associated is Jean-Antoine Watteau, whose pastoral scenes, or "fêtes galantes", dominate the early part of the 18th century. There are multiple similarities between Rococo and Baroque. Both styles insist on monumental forms, and so use continuous spaces, double columns or pilasters, and luxurious materials (including gilded elements). There are also noticeable differences. Rococo designs freed themselves from the adherence to symmetry that had dominated architecture and design since the Renaissance.
Many small objects, like ink pots or porcelain figures, but also some ornaments, are asymmetrical. This goes hand in hand with the fact that most Rococo ornamentation consisted of interpretations of foliage and sea shells, rather than the Classical ornaments inherited from the Renaissance that characterize the Baroque. Another key difference is the fact that, since the Baroque is the main cultural manifestation of the spirit of the Counter-Reformation, it is most often associated with ecclesiastical architecture. In contrast, the Rococo is mainly associated with palaces and domestic architecture. In Paris, the popularity of the Rococo coincided with the emergence of the salon as a new type of social gathering, the venues for which were often decorated in this style. Rococo rooms were typically smaller than their Baroque counterparts, reflecting a movement towards domestic intimacy. Colours also reflect this change, moving from the earthy tones of Caravaggio's paintings and the red-marble and gilded interiors of the reign of Louis XIV to the relaxed pastels, pale blue, Pompadour pink, and white of the France of Louis XV and Madame de Pompadour. As with colours, there was also a transition in painting and sculpture from serious, dramatic, and moralistic subjects to lighthearted and joyful themes. One last difference between Baroque and Rococo is the interest that 18th-century aristocrats had in East Asia. Chinoiserie was a style in fine art, architecture and design, popular during the 18th century, that was heavily inspired by Chinese art, but also by Rococo at the same time. Because travel to China and other Far Eastern countries was difficult at the time, those lands remained mysterious to most Westerners, and European imaginations were fuelled by perceptions of Asia as a place of wealth and luxury; consequently, patrons from emperors to merchants vied with each other in adorning their living quarters with Asian goods and decorating them in Asian styles. Where Asian objects were hard to obtain, European craftsmen and painters stepped up to fill the demand, creating a blend of Rococo forms and Asian figures, motifs and techniques. Aside from European recreations of objects in East Asian style, Chinese lacquerware was reused in multiple ways. European aristocrats decorated a handful of palace rooms entirely with Chinese lacquer panels used as wall panelling. Because of its appearance, black lacquer was popular for Western men's studies. The panels used were usually glossy and black, made in the Henan province of China. They were made of multiple layers of lacquer, then incised with motifs in-filled with colour and gold. Chinese, and also Japanese, lacquer panels were used by some 18th-century European cabinetmakers for making furniture as well. To produce such pieces, Asian screens were dismantled and used to veneer European-made furniture. Condemnation and academic rediscovery. The pioneering German art historian and archaeologist Johann Joachim Winckelmann also condemned the baroque style, and praised what he saw as the superior values of classical art and architecture. By the 19th century, Baroque was a target for ridicule and criticism. The neoclassical critic Francesco Milizia wrote: "Borromini in architecture, Bernini in sculpture, Pietro da Cortona in painting...are a plague on good taste, which infected a large number of artists." In the 19th century, criticism went even further; the British critic John Ruskin declared that baroque sculpture was not only bad, but also morally corrupt. 
The Swiss-born art historian Heinrich Wölfflin (1864–1945) started the rehabilitation of the word Baroque in his "Renaissance und Barock" (1888); Wölfflin identified the Baroque as "movement imported into mass", an art antithetic to Renaissance art. He did not make the distinctions between Mannerism and Baroque that modern writers do, and he ignored the later phase, the academic Baroque that lasted into the 18th century. Baroque art and architecture became fashionable in the interwar period, and have largely remained in critical favor. The term "Baroque" may still be used, often pejoratively, to describe works of art, craft, or design that are thought to have excessive ornamentation or complexity of line. At the same time, "baroque" has become an accepted term for various trends in Roman art and Roman architecture in the 2nd and 3rd centuries AD, which display some of the same characteristics as the later Baroque. Revivals and influence through eclecticism. Though highly criticized, the Baroque would later be a source of inspiration for artists, architects and designers during the 19th century through Romanticism, a movement that developed in the 18th century and reached its peak in the 19th. It was characterized by its emphasis on emotion and individualism, as well as glorification of the past and nature, preferring the medieval to the classical. A mix of literary, religious, and political factors prompted late-18th and 19th century British architects and designers to look back to the Middle Ages for inspiration. Romanticism is the reason the 19th century is best known as the century of revivals. In France, Romanticism was not the key factor that led to the revival of Gothic architecture and design. During the French Revolution, monuments and buildings associated with the Ancien Régime (Old Regime) were vandalized. Because of this, an archaeologist, Alexandre Lenoir, was appointed curator of the Petits-Augustins depot, where sculptures, statues and tombs removed from churches, abbeys and convents had been transported. He organized the Museum of French Monuments (1795–1816), and was the first to revive the taste for the art of the Middle Ages, a taste that progressed slowly and flourished a quarter of a century later. This taste and revival of medieval art led to the revival of other periods, including the Baroque and Rococo. Revivalism started with themes first from the Middle Ages, then, towards the end of the reign of Louis Philippe I (1830–1848), from the Renaissance. Baroque and Rococo inspiration was more popular during the reign of Napoleon III (1852–1870), and continued later, after the fall of the Second French Empire. Whereas architects and designers in England saw the Gothic as a national style, in France the Rococo was seen as one of the most representative national movements. The French felt much more connected to the styles of the Ancien Régime and Napoleon's Empire than to the medieval or Renaissance past, even though Gothic architecture had first appeared in France, not England. The revivalism of the 19th century led in time to eclecticism (the mixing of elements of different styles). Because architects often revived Classical styles, most Eclectic buildings and designs have a distinctive look. Besides pure revivals, the Baroque was also one of the main sources of inspiration for eclecticism. The coupled column and the giant order, two elements widely used in the Baroque, are often present in 19th- and early-20th-century buildings of this kind. Eclecticism was not limited to architecture. 
Many designs from the Second Empire style (1848–1870) have elements taken from different styles. Little furniture from the period escaped its three most prevalent historicist influences, which are sometimes kept distinct and sometimes combined: the Renaissance, Louis XV (Rococo), and Louis XVI styles. Revivals and inspiration also came at times from the Baroque, as with remakes of and arabesques imitating Boulle marquetry, and from other styles such as the Gothic, the Renaissance, or English Regency. The Belle Époque was a period that began around 1871–1880 and ended with the outbreak of World War I in 1914. It was characterized by optimism, regional peace, economic prosperity, colonial expansion, and technological, scientific, and cultural innovations. Eclecticism reached its peak in this period, with Beaux Arts architecture. The style takes its name from the École des Beaux-Arts in Paris, where it developed and where many of the main exponents of the style studied. Buildings in this style often feature Ionic columns with their volutes set on the corner (like those found in French Baroque), a rusticated basement level, overall simplicity set off by highly detailed passages, arched doors, and an arch above the entrance like that of the Petit Palais in Paris. The style aimed for a Baroque opulence through lavishly decorated monumental structures that evoked Louis XIV's Versailles. In Belle Époque design, all furniture of the past was admired, including, perhaps contrary to expectations, the Second Empire style (the style of the preceding period), which remained popular until 1900. In the years around 1900, there was a gigantic recapitulation of styles of all countries in all preceding periods. Everything from Chinese to Spanish models, from Boulle to Gothic, found its way into furniture production, but some styles were more appreciated than others. The High Middle Ages and the early Renaissance were especially prized. Exoticism of every stripe and exuberant Rococo designs were also favoured. Baroque revivals and influence faded and finally disappeared with Art Deco, a style created around 1910 as a collective effort by multiple French designers to make a new modern style. It was obscure before WW1, but became very popular during the interwar period, being heavily associated with the 1920s and the 1930s. The movement was a blend of multiple characteristics taken from Modernist currents of the 1900s and the 1910s, like the Vienna Secession, Cubism, Fauvism, Primitivism, Suprematism, Constructivism, Futurism, De Stijl, and Expressionism. Besides Modernism, elements taken from styles popular during the Belle Époque, like Rococo Revival, Neoclassicism, or the neo-Louis XVI style, are also present in Art Deco. The proportions, volumes and structure of pre-WW1 Beaux Arts architecture are present in early Art Deco buildings of the 1910s and 1920s. Elements taken from the Baroque are quite rare, architects and designers preferring the Louis XVI style. At the end of the interwar period, the rise in popularity of the International Style, characterized by the complete absence of ornamentation, led to the abandonment of Baroque influence and revivals. Many International Style architects and designers, as well as Modernist artists, criticized the Baroque for its extravagance and what they saw as "excess". Ironically, this came just as critical appreciation of the original Baroque was reviving strongly. 
Postmodern appreciation and reinterpretations. Appreciation for the Baroque reappeared with the rise of Postmodernism, a movement that questioned Modernism (the status quo after WW2), promoted the inclusion of elements of historic styles in new designs, and encouraged appreciation for the pre-Modernist past. Specific references to the Baroque are rare, since Postmodernism often included highly simplified elements that were 'quotations' of Classicism in general, like pediments or columns. More references to the Baroque are found in Versace ceramic ware and fashion, decorated with maximalist acanthus rinceaux very similar to those found in Italian Baroque ornament plates and in Boulle work, and also to those found on Empire objects, especially textiles, from the reign of Napoleon I.
3959
5229428
https://en.wikipedia.org/wiki?curid=3959
Boolean algebra (structure)
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle. History. The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, "The Mathematical Analysis of Logic", published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, "The Laws of Thought", published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 "Vorlesungen" of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 "Universal Algebra". Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 "Lattice Theory". In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. Definition. A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold: a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity); a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity); a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption); a ∨ 0 = a and a ∧ 1 = a (identity); a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity); a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements). Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties). A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be "distinct" elements in order to exclude this case.) It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice. 
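As an illustration (ours, not part of the original article), the axioms above can be verified mechanically for the two-element Boolean algebra by brute force, taking meet as logical AND, join as logical OR, and complement as negation; all identifiers below are illustrative:

from itertools import product

meet = lambda a, b: a & b          # interpret ∧ as logical AND on {0, 1}
join = lambda a, b: a | b          # interpret ∨ as logical OR
comp = lambda a: 1 - a             # interpret ¬ as negation

def axioms_hold(a, b, c):
    return all([
        join(a, join(b, c)) == join(join(a, b), c),            # associativity
        meet(a, meet(b, c)) == meet(meet(a, b), c),
        join(a, b) == join(b, a),                              # commutativity
        meet(a, b) == meet(b, a),
        join(a, meet(a, b)) == a,                              # absorption
        meet(a, join(a, b)) == a,
        join(a, 0) == a, meet(a, 1) == a,                      # identity
        join(a, meet(b, c)) == meet(join(a, b), join(a, c)),   # distributivity
        meet(a, join(b, c)) == join(meet(a, b), meet(a, c)),
        join(a, comp(a)) == 1, meet(a, comp(a)) == 0,          # complements
    ])

assert all(axioms_hold(a, b, c) for a, b, c in product((0, 1), repeat=3))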
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual. Examples. The simplest non-trivial Boolean algebra is the two-element Boolean algebra, whose only elements are 0 and 1. * It has applications in logic, interpreting 0 as "false", 1 as "true", ∧ as "and", ∨ as "or", and ¬ as "not". Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent. * The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression. * The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables). This can for example be used to show that the following laws ("consensus theorems") are generally valid in all Boolean algebras: (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) = (a ∨ b) ∧ (¬a ∨ c) and (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) = (a ∧ b) ∨ (¬a ∧ c). * After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms. * formula_1 becomes a Boolean algebra when its operations are defined appropriately. Homomorphisms and isomorphisms. A "homomorphism" between two Boolean algebras A and B is a function f : A → B such that for all a, b in A: f(a ∨ b) = f(a) ∨ f(b), f(a ∧ b) = f(a) ∧ f(b), f(0) = 0, f(1) = 1. It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices. An "isomorphism" between two Boolean algebras A and B is a homomorphism f : A → B with an inverse homomorphism, that is, a homomorphism g : B → A such that the composition g ∘ f is the identity function on A, and the composition f ∘ g is the identity function on B. A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective. Boolean rings. Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings. Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y. Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent; in fact the categories are isomorphic. Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring. 
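A short sketch (our illustration, under the definitions just given) of the Boolean algebra/Boolean ring correspondence in the two-element case, where ring addition becomes XOR and ring multiplication becomes AND:

from itertools import product

# From algebra to ring: a + b := (a ∧ ¬b) ∨ (b ∧ ¬a), a · b := a ∧ b.
ring_add = lambda a, b: (a & (1 - b)) | (b & (1 - a))   # symmetric difference / XOR
ring_mul = lambda a, b: a & b

# Back from ring to algebra: a ∨ b := a + b + (a · b), a ∧ b := a · b.
join = lambda a, b: ring_add(ring_add(a, b), ring_mul(a, b))
meet = ring_mul

for a, b in product((0, 1), repeat=2):
    assert ring_mul(a, a) == a        # the defining Boolean-ring property
    assert join(a, b) == (a | b)      # the round trip recovers the original join
    assert meet(a, b) == (a & b)      # ...and the original meet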
Going beyond Hsiang's algorithm, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions. Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving. Ideals and filters. An "ideal" of the Boolean algebra A is a nonempty subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called "prime" if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a in A we have that a ∧ ¬a = 0 is in I, and then if I is prime we have a in I or ¬a in I for every a in A. An ideal I of A is called "maximal" if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a is not in I and ¬a is not in I, then I ∪ {a} or I ∪ {¬a} is contained in another proper ideal J. Hence, such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal and maximal ideal in the Boolean ring A. The dual of an "ideal" is a "filter". A "filter" of the Boolean algebra A is a nonempty subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a "maximal" (or "prime") "ideal" in a Boolean algebra is an "ultrafilter". Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement "every filter in a Boolean algebra can be extended to an ultrafilter" is called the "ultrafilter lemma" and cannot be proven in Zermelo–Fraenkel set theory (ZF), if ZF is consistent. Within ZF, the ultrafilter lemma is strictly weaker than the axiom of choice. The ultrafilter lemma has many equivalent formulations: "every Boolean algebra has an ultrafilter", "every ideal in a Boolean algebra can be extended to a prime ideal", etc. Representations. It can be shown that every "finite" Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two. Stone's celebrated "representation theorem for Boolean algebras" states that "every" Boolean algebra is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space. Axiomatics. The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematician Alfred North Whitehead in 1898. It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0. In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the associativity laws (see box). He also proved that these axioms are independent of each other. In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws: (1) commutativity: x + y = y + x; (2) associativity: (x + y) + z = x + (y + z); (3) the Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x. Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit (4) the Robbins equation n(n(x + y) + n(x + n(y))) = x, do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a "Robbins algebra", the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. 
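Since an equation holds in every Boolean algebra exactly when it holds in the two-element one, a brute-force check (our sketch; names are illustrative) confirms that the Huntington and Robbins equations both hold there. This only shows that every Boolean algebra is a Robbins algebra; the converse is the hard direction, whose resolution is described next:

from itertools import product

plus = lambda x, y: x | y   # interpret + as join on {0, 1}
n = lambda x: 1 - x         # interpret n as complement

for x, y in product((0, 1), repeat=2):
    assert plus(n(plus(n(x), y)), n(plus(n(x), n(y)))) == x   # Huntington equation
    assert n(plus(n(plus(x, y)), n(plus(x, n(y))))) == x      # Robbins equation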
In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer program EQP he designed. For a simplification of McCune's proof, see Dahn (1998). Further work has been done to reduce the number of axioms; see Minimal axioms for Boolean algebra. Generalizations. Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice if it has a smallest element 0 and, for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a ∖ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, ∖, 0) is a "generalized Boolean algebra", while (B, ∨, 0) is a "generalized Boolean semilattice". Generalized Boolean lattices are exactly the ideals of Boolean lattices. A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed linear subspaces for separable Hilbert spaces.
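As a small illustration (ours) of a generalized Boolean algebra: the finite subsets of an infinite set have a least element (the empty set) and relative complements, but no greatest element, matching the definition above:

a = frozenset({1, 2, 3})
b = frozenset({1, 2, 3, 4, 5})
assert a <= b                  # a ≤ b in the subset order
x = b - a                      # the relative complement
assert a & x == frozenset()    # a ∧ x = 0
assert a | x == b              # a ∨ x = b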
3960
49907020
https://en.wikipedia.org/wiki?curid=3960
Bank of Italy
The Bank of Italy (Italian: "Banca d'Italia", informally referred to as "Bankitalia") is the national central bank for Italy within the Eurosystem. It was the Italian central bank from 1893 to 1998, issuing the lira. Since 2014, it has also been Italy's national competent authority within European Banking Supervision. It is located in Palazzo Koch, via Nazionale, Rome. History. The institution was established in 1893 from the combination of three major banks in Italy (after the Banca Romana scandal). The new central bank became the sole issuer of banknotes in 1926. Until 1928 it was directed by a general manager; since then it has been led by a governor, elected by an internal commission of managers and appointed by decree of the President of the Italian Republic for a term of seven years. In 1863 a crisis in the world money market created panic and a rush to bank counters to exchange banknotes for metallic currency. The Italian government responded in 1866 by making paper money fiat currency and legal tender. The government was thus accused of favouring the issuing banks, and a long debate, called the "banking question", arose about the advisability of having one issuer or several. The Minghetti-Finali law of 1873 established a mandatory consortium among the six existing issuing institutions: the National Bank of the Kingdom of Italy, Banca Nazionale Toscana, Banca Toscana di Credito, Banca Romana, Banco di Napoli, and Banco di Sicilia; but the measure proved insufficient. Following the Banca Romana scandal, the reorganization of the issuing institutions became necessary. Establishment. Law no. 449 of 10 August 1893 of the Giolitti I government established the Bank of Italy through the merger of four banks: the National Bank in the Kingdom of Italy (formerly Banca Nazionale in the Sardinian States), the Banca Nazionale Toscana, and the Banca Toscana di Credito for the Industries and Commerce of Italy, together with the assumption of the liquidation of Banca Romana. Through a complex series of mergers between these banks, the current Bank of Italy was formed. Several banking families and historical partners (Bombrini, Bastogi, Balduino) supported the operation. The institute enjoyed the issuing privilege (together with the Banks of Naples and Sicily); it also acted as a "bank of banks" through the rediscounting of bills, but did not have supervisory powers over other banks. The bank remained a private limited company and was headed by a director. From 1900 to 1928 the director was Bonaldo Stringher, who gave the Bank the role of manager of Italian monetary policy and lender of last resort, bringing it closer to a modern central bank. In particular, he understood that a central bank cannot aim at maximizing profit (which would be achieved by printing ever more paper money) but must instead aim at price stability. In 1907, the Bank of Italy coordinated the rescue of the Italian Banking Company, a major lender of FIAT, an operation that ended with the absorption of the bank in crisis into the Italian Discount Bank. In 1911 the central bank organized a consortium to rescue the steel companies (Acciaierie di Terni, Ilva and others) of which the Bank of Italy was a direct creditor, financing the operation partly through the issue of banknotes. 
In 1912 the credit institute for cooperation, with social purposes, was established, led by the Bank of Italy with participation from public bodies, savings banks, Monte dei Paschi di Siena, the Cassa di Previdenza, and the Credit Institution for the Cooperatives of Milan. In 1929 the institute was transformed by its director Arturo Osio into the Banca Nazionale del Lavoro. In 1913 the Subsidy Consortium was established, led by the Bank of Italy with participation from the Banks of Naples and Sicily, some savings banks, Monte dei Paschi di Siena and the San Paolo Bank of Turin. In 1922 the Consortium saved Ansaldo and took control of it, and in 1923 it did the same with Banco di Roma. Also in 1913, Francesco Saverio Nitti drew up a bill that would have entrusted the Bank of Italy with the supervision of other banks, but the private banks managed to prevent its approval. In 1914 the Bank of Italy assisted the Banco di Roma, which had to write down its capital due to losses reported in its activities in the eastern Mediterranean. After the First World War, in 1921, it was again the Bank of Italy that led the consortium that managed the liquidation of the Italian Discount Bank and once again saved the Banco di Roma from crisis. Banking Law of 1926 and aftermath. In 1926, the fascist government of Benito Mussolini undertook a major reorganization of the monetary system, partly motivated by the desire to assert the state's dominance over the private credit system and especially the two Milanese investment banks, Banca Commerciale Italiana and Credito Italiano. The context of the new legislation was also affected by the ongoing turmoil affecting banks affiliated with the Christian-Democratic Italian People's Party, such as "Credito Nazionale". R.D.L. 812 granted the Bank of Italy an exclusive right to issue currency, terminating the prior issuance privileges of the Banco di Napoli and Banco di Sicilia and abrogating Royal Decree no. 204. Subsequently, R.D.L. 1820 entrusted the Bank of Italy with the task of supervising savings banks. Also in 1926, the Subsidy Consortium was reorganized as Istituto Liquidazioni, still under the control of the central bank. In 1933 it would be separated from the bank and absorbed by the newly established Istituto per la Ricostruzione Industriale. In 1928 the Bank was reorganized: the general manager was joined by a governor with greater powers. While banks generally were in very poor condition, the Banca Nazionale del Lavoro of the self-styled socialist Arturo Osio took over eleven Catholic banks in 1929 and, in 1932, the Banca Agricola Italiana, which had financed Gualino's SNIA Viscosa. Banks and the economy of the 1930s. Italy in the 1930s had an agricultural economy with a small number of industrial families who relied on subcontracting to local suppliers: a myriad of small family-run businesses, not international in scope, whose survival depended on the large industrial groups, themselves linked to the commercial banks. The savings from agriculture flowed into the rural banks, the popular banks and the cooperative credit institutions, which financed provincial crafts, small businesses and construction. The job of the banks was to match their customers' short-term investment horizon with the long-term investments of the large groups (rediscounting). National banks turned to local banks that held large stocks of deposits for smaller, low-risk loans. 
The Cassa Depositi e Prestiti channelled postal savings to local authorities, public institutions and infrastructure projects, which were a way of absorbing mass unemployment through a vast program of public works. The ideological basis of the law was that savings are a matter of national interest and must be protected by the State, a principle later also enshrined in the Republican Constitution and given concrete form above all in the law establishing the interbank guarantee fund and in the policy of public bailouts. The banking legislation of 1936-1938 established a banking supervisory agency (IDREC), chaired by the Bank of Italy's governor. The bank no longer had the right to lend to individuals but only to other banks, as a lender of last resort. Finally, it had the power to require other banks to deposit a portion of their available funds with the central bank itself; by varying this share, the Bank of Italy could tighten or loosen credit. The law established certain minimum capital and management requirements necessary to guarantee risk management, stability and operational continuity: minimum capital, a minimum ratio between loans and deposits, credit limits, and provisions for compulsory reserves. IRI and the war. After the "defenestration" of Bonaldo Stringher, Alberto Beneduce took over; he was forced to retire in 1936 after a "heart attack" during a meeting at the Bank for International Settlements in Basel. Both conceived the duty of the banks towards the public interest of the country as that of the subject that collects savings in order to lend them to entrepreneurs, as a tool for development and growth. The process was to be led by a "circulation bank", which would increase the speed of circulation of money in the real economy. The Central Bank supported the fascist monetary policy of defending the stability of the Italian lira (known as "Quota 90") through the reduction of discounts and advances, and financed the enormous expenses of the wars of the 1930s and 1940s through the unlimited issuing of money (and the "inflation tax", not progressive with income), as Hjalmar Schacht did in Germany under Hitler. Operationally, the government issued and sold debt securities to finance military spending, and the military industry reinvested its government profits in the purchase of such bonds as a "de facto" advance on future orders, fueling a closed financial circuit. In simple terms, this was something like the ECB issuing money and lending it to private banks who keep it in their current accounts with the ECB. This mechanism was called the "capital circuit". The printing of banknotes and the scarcity of consumer goods created an overabundance of money that poured into bank deposits, allowing a new expansion of credit, which was directed in favour of the same economic sectors, given that the state paid the banks higher interest on the BOTs than it paid savers. The absorption of savings into investments in fixed capital had already taken place in the First World War, and industries were working with existing production capacity. Without consumption and investment, only the public spending of the state remained. The war could thus begin with a modest tax levy and with inflation within normal limits in the first months, before the black market and ration cards. The situation reflected the conflict of interest between the state as entrepreneur and the state as banker, albeit in the name of a higher ideological purpose. 
In 1938, the government decreed for itself the power to directly appoint the presidents and vice-presidents of the boards of directors of banks. Beneduce planned to have a public bank take over the long-term credit of large companies, financed with bonds of equal duration, for public works, energy, and industry. After their departure, the Central Bank maintained a low-profile monetary policy, consistent with the directives of fascism. IRI operated differently, in agreement with the Italian banks and industries that supported fascism. The banks renounced the option of converting the debts into shares (or a law to that effect), preferring not to take direct ownership of the industrial groups. The groups transferred their bank debts to IRI, which became the new owner in exchange for shares (at book value, not always the same as market value), until it held control of the property and therefore of management. The debt of the IRI rose to nine and a half billion lire at the time, two-thirds of which was repaid during the war, because it was drastically diluted by inflation, which lowers the real weight of debts until the accounting entries are cancelled, but which also halved the purchasing power of small savers. The remaining debt was paid off by 1953. The IRI in turn owed the Bank of Italy five billion lire: the State issued bonds for IRI for one and a half billion, "sterilizing" the debt, which was to be repaid with "annuity" interest accruing until 1971. The change of constitutional order and of currency (with its conversion exchange rate), together with inflation, meant that IRI (and the industries) paid the Bank of Italy less than a third of the sum. After the armistice of 8 September, the German authorities demanded the delivery of the gold reserve. 173 tons of gold were transferred first to the Milan office, and then to Fortezza; all trace of it was subsequently lost. In the 1960s, the public debt increased and so did inflation. Governor Guido Carli pursued a credit-crunch policy to stop inflation, particularly in 1964. In general, the Bank of Italy played an important political role under his governorship. Other credit crunches were implemented between 1969 and 1970, due to the flight of capital abroad, and in 1974 as a result of the oil crisis. In March 1979 the governor of the Bank of Italy, Paolo Baffi, and the deputy director in charge of supervision, Mario Sarcinelli, were accused by the Rome public prosecutor of private interest in official acts and of personal aiding and abetting. Sarcinelli was arrested, and released from prison only after being suspended from duties relating to supervision, while Baffi avoided prison due to his age. In 1981 the two were fully acquitted. The suspicion subsequently emerged that the indictment had been sought by P2 to prevent the Bank of Italy from supervising Roberto Calvi's Banco Ambrosiano. The postwar period. The post-war inflation, also due to the Am-lire, was fought with the credit squeeze sought by governor Luigi Einaudi, which was obtained through the compulsory reserve on deposits. In particular, the instrument of banks' compulsory reserves at the central bank was used, introduced in 1926 but never really applied before. In 1948 the governor was given the task of regulating the money supply and deciding the discount rate. 
The universal banks were the ones that had gained the most from war and inflation (under the authorization regime of the Interministerial Credit Committee), with the greatest growth in deposits. Along with the recovery came speculative stocks and capital flight abroad. Credit limits were no longer tied to equity, as equity figures had been completely distorted by inflation. The squeeze on lending, the liquidity crisis and the Einaudian deflation pushed operators to finance themselves by placing stocks on the market and repatriating capital, thus blocking the rise in prices, and by resorting to self-financing (even at the cost of not distributing profits), aided by the fact that inflation had made it possible to quickly amortize fixed assets whose book value was by then merely nominal. During the years of the Reconstruction, governor Donato Menichella managed currency issuance in a gradual and balanced way: he did not implement expansionary manoeuvres to encourage growth but was careful to avoid creating credit crunches. In this, he was helped by the low public debt. His monetary policy program was stability for development. A part of the available bank savings was channelled annually to the Treasury to cover the budget deficit of the current year, and during his tenure the state's annual deficit never rose above 1% of GDP, a situation that lasted until 1964. In July 1981, a "divorce" between the State (Ministry of the Treasury) and its central bank was initiated by decision of the then Treasury Minister Beniamino Andreatta. From that moment on, the institute was no longer required to purchase the bonds that the government was unable to place on the market, thus ending the monetization of the Italian public debt that it had carried out from the Second World War up to that moment. This decision was opposed by the Minister of Finance Rino Formica, who would have liked the Bank of Italy to be required to repurchase at least a portion of these securities, and from the summer of 1982 a series of intra-government verbal clashes between the two ministers, known as the "wives' quarrel", ensued, followed a few months later by the fall of the second Spadolini government. The divorce between the Ministry of the Treasury and the Bank of Italy is still considered by economic doctrine a major factor in the stabilization of inflation (which went from over 20% in 1980 to less than 5% in the following years) and a central prerequisite for guaranteeing the full independence of the technical monetary-policy body (the central bank) from choices related to fiscal policy (the responsibility of the government), but also a factor contributing considerably to the growth of the Italian public debt. Law no. 82 of 7 February 1992, proposed by the then Minister of the Treasury Guido Carli, clarified that the decision on the discount rate was the exclusive competence of the governor and no longer had to be agreed in concert with the Minister of the Treasury (the previous presidential decree was brought into line with the new law by the Presidential Decree of 18 July). The euro and the 2006 reform. Legislative Decree no. 43 of 10 March 1998 removed the Bank of Italy from management by the Italian government, sanctioning its membership of the European System of Central Banks. From that date, the quantity of currency in circulation was decided autonomously by the central bank. With the introduction of the euro on 1 January 1999, the Bank thus lost the function of presiding over national monetary policy. 
This function has since been exercised collectively by the Governing Council of the European Central Bank, which also includes the Governor of the Bank of Italy. On 13 June 1999 the Senate of the Republic, during the XIII Legislature, discussed bill no. 4083, "Rules on the ownership of the Bank of Italy and on the criteria for appointing the Board of Governors of the Bank of Italy". The bill would have had the state acquire all the shares of the institute, but it was never approved. On 4 January 2004, the weekly "Famiglia Cristiana" reported, for the first time, the list of participants in the capital of the Bank of Italy with their respective shares. The source was a Mediobanca Research & Studies dossier directed by the researcher Fulvio Coltorti, who, by working backwards through the balance sheets of banks, insurance companies and institutions and noting the holdings that indicated a share in the capital of the Bank of Italy, managed to reconstruct a large part of the list of participants in the country's highest financial institution. On 20 September 2005, the list of shareholders was officially made available by the Bank of Italy; until then it had been considered confidential. On 19 December 2005, after intense press campaigns and criticism of his actions in the context of the Bancopoli scandal, Governor Antonio Fazio resigned. A few days later Mario Draghi was appointed in his place, taking office on 16 January 2006. Law no. 262 of 28 December 2005, as part of various measures to protect savings, introduced for the first time a limit on the term of the governor and the members of the directorate. It also dealt with (article 19, paragraph 10) the issue of ownership of the capital of the Bank of Italy, providing for the redefinition of the Bank's shareholding structure by means of a government regulation to be issued within three years of the law's entry into force. This regulation was to govern the methods of transferring the shares held by "subjects other than the State or other public bodies". The delegation made by law 262/2005 therefore expired without the regulation being issued, but the right of the current participants to ownership of their shares is in any case safeguarded by a provision of the Bank's Statute. On the basis of law 262/2005, Mario Draghi became the first governor appointed for a term of six years, renewable once for a further six years. Missions and organization. Missions. Since responsibility for monetary and exchange-rate policy was shifted in 1998 to the European Central Bank, within the European institutional framework, the bank implements the ECB's decisions, issues euro banknotes and withdraws and destroys worn notes. Its main function has thus become banking and financial supervision. The objective is to ensure the stability and efficiency of the system and compliance with rules and regulations; the bank pursues it through secondary legislation, controls and cooperation with governmental authorities. Following a reform in 2005, which was prompted by takeover scandals, the bank has lost exclusive antitrust authority in the credit sector, which is now shared with the Italian Competition Authority. Other functions include market supervision, oversight of the payment system and provision of settlement services, the State treasury service, the Central Credit Register, economic analysis and institutional consultancy. As of 2021, the Bank of Italy owned 2,451.8 tonnes of gold, the third-largest gold reserve in the world. Governing bodies. 
The bank's governing bodies are the General Meeting of Shareholders, the board of directors, the governor, the director general and three deputy directors-general; the last five constitute the directorate. The general meeting takes place yearly for the purpose of approving the accounts and appointing the auditors. The board of directors has administrative powers and is chaired by the governor (or by the director-general in his absence). Following a reform in 2005, the governor lost exclusive responsibility for decisions of external relevance (i.e. banking and financial supervision), which has been transferred to the directorate (acting by majority vote). The director-general is responsible for the day-to-day administration of the bank and acts as governor when the latter is absent. The board of auditors assesses the bank's administration and its compliance with the law, regulations and statute. Appointment. The directorate's term of office lasts six years and is renewable once. The appointment of the governor is the responsibility of the government, after consulting the board of directors, and is formalized by a decree of the President of the Republic. The board of directors is elected by the shareholders according to the bank's statute. On 25 October 2011, Silvio Berlusconi nominated Ignazio Visco to be the bank's new governor, replacing Mario Draghi when he left to become president of the European Central Bank in November. Currency and coinage. Italy has a long history of different coinage types, spanning thousands of years. Since Italy was for centuries divided into many historic states, these all had different coinage systems; when the country became unified in 1861, the Italian lira came into use and remained the currency until 2002. The term originates from "libra", the largest unit of the Carolingian monetary system used in Western Europe and elsewhere from the 8th to the 20th century. The Italian lira was introduced by the Napoleonic Kingdom of Italy in 1807 at par with the French franc, and was subsequently adopted by the different states that would eventually form the Kingdom of Italy in 1861. It was subdivided into 100 "centesimi" (singular: "centesimo"), which means "hundredths" or "cents". The lira was also the currency of the Albanian Kingdom from 1941 to 1943. There was no standard sign or abbreviation for the Italian lira. The abbreviations "Lit." (standing for "Lira italiana") and "L." (standing for "Lira") and the signs ₤ or £ were all accepted representations of the currency. Banks and financial institutions, including the Bank of Italy, often used "Lit.", and this was regarded internationally as the abbreviation for the Italian lira. Handwritten documents and signs at market stalls would often use "£" or "₤", while coins used "L." Italian postage stamps mostly used the word in full, but some (such as the 1975 monuments series) used "L." The name of the currency could also be written in full as a prefix or a suffix (e.g. Lire 100,000 or 100,000 lire). The ISO 4217 currency code for the lira was "ITL". The Italian lira was the official unit of currency in Italy until 1 January 1999, when it was replaced by the euro (euro coins and notes were not introduced until 2002). Old lira-denominated currency ceased to be legal tender on 28 February 2002. The conversion rate is 1,936.27 lire to the euro. All lira banknotes in use immediately before the introduction of the euro, as were all post-WW2 coins, remained exchangeable for euros in all branches of the Bank of Italy until 28 February 2012. Shareholders. 
Banca d'Italia had 300,000 shares, each with a nominal value of €25,000. Originally scattered among the banks of Italy, the shares have become concentrated as a result of bank mergers since the 1990s; a number are also held by pension and social security institutions. The statute of the bank provides that a minimum of 54% of profits goes to the Italian government, and only a maximum of 6% of profits may be distributed as dividends according to share ratios. Even so, the Bank of Italy stands out among central banks in the Eurosystem as having no state ownership (the National Bank of Belgium and the Bank of Greece have mixed ownership). As of early 2024, the 15 largest shareholders represented slightly over half of the bank's equity, namely UniCredit (5.0 percent), (4.9 percent), (4.9 percent), (4.9 percent), Intesa Sanpaolo (4.9 percent), (3.7 percent), BPER Banca (3.3 percent), ICCREA Banca (3.1 percent), Generali Italia (3.0 percent), the National Institute for Social Security (3.0 percent), Istituto nazionale per l'assicurazione contro gli infortuni sul lavoro (3.0 percent), (3.0 percent), Cassa di Risparmio di Asti (3.0 percent), Banca Nazionale del Lavoro (2.8 percent), and Crédit Agricole Italia (2.8 percent). The remaining 49 percent were dispersed among 157 shareholders, mainly banks and banking foundations.
3963
7436027
https://en.wikipedia.org/wiki?curid=3963
Beachcomber (pen name)
Beachcomber is a "nom de plume" that has been used by several journalists writing a long-running humorous column in the "Daily Express". It was originated in 1917 by Major John Bernard Arbuthnot MVO as his signature on the column, titled 'By the Way'. The name Beachcomber was then passed to D. B. Wyndham Lewis in 1919 and, in turn, to J. B. Morton, who wrote the column until 1975. It was later revived by William Hartston, the column's current author. "By the Way" column. "By the Way" was originally a column in "The Globe", consisting of unsigned humorous pieces; P. G. Wodehouse was assistant editor of the column from August 1903 and editor from August 1904 to May 1909, during which time he was assisted by Herbert Westbrook. After the "Globe"'s closure, it was re-established as a society news column in the "Daily Express" from 1917, initially written by social correspondent Major John Arbuthnot, who invented the name "Beachcomber". After Arbuthnot was promoted to deputy editor, it was taken over sometime in 1919 by Wyndham-Lewis, who reinvented it as an outlet for his wit and humour. It was then passed to Morton during 1924, though it is likely there was a period when the two overlapped. Morton wrote the column until 1975; it was revived in January 1996 and continues today, written by William Hartston. The column is unsigned except by "Beachcomber", and it was not publicly known that Morton or Wyndham-Lewis wrote it until the 1930s. The name is mainly associated with Morton, who has been credited as an influence by Spike Milligan amongst others. Morton introduced the recurring characters and serial stories that were a major feature of the column during his 51-year run. The format of the column was a random assortment of small, otherwise unconnected paragraphs, which could be on almost any subject. Morton's other interest, France, was occasionally represented by epic tales of his rambling walks through the French countryside. These were not intended as humour. "By the Way" was popular with its readership, which is one reason it lasted so long, though its style and randomness could be off-putting and its humour something of an acquired taste. Oddly, one of the column's greatest opponents was the "Express" newspaper's owner, Lord Beaverbrook, who repeatedly had to be assured that the column was indeed funny. A prominent critic was George Orwell, who frequently referred to Morton in his essays and diaries as "A Catholic Apologist" and accused him of being "silly-clever", in line with his criticisms of G. K. Chesterton, Hilaire Belloc, Ronald Knox and Wyndham-Lewis. "By the Way" was one of the few features kept continuously running in the often seriously reduced "Daily Express" throughout World War II, when Morton's lampooning of Hitler, including the British invention of bracerot to make the Nazis' trousers fall down at inopportune moments, was regarded as valuable for morale. The column appeared daily until 1965, when it was changed to weekly. It was cancelled in 1975 and revived as a daily piece in January 1996. It continues to the present day in much the same format, but is now entitled "Beachcomber" rather than "By the Way". Other media. The Will Hay film "Boys Will Be Boys" (1935) was set at Morton's Narkover school. According to Spike Milligan, the columns were an influence on the comedic style of his radio series "The Goon Show". In 1969, Milligan based a BBC television series named "The World of Beachcomber" on the columns. 
A small selection was issued on a 1971 LP, and a 2-cassette set of the series' soundtrack was made available in the late 1990s. In 1989, BBC Radio 4 broadcast the first of three series based on Morton's work. These featured Richard Ingrams as Beachcomber, John Wells as Prodnose, Patricia Routledge and John Sessions, with the compilations prepared by Mike Barfield. Series 1 was also made available as a 2-cassette set.
3965
5278126
https://en.wikipedia.org/wiki?curid=3965
Bill Joy
William Nelson Joy (born November 8, 1954) is an American computer engineer and venture capitalist. He co-founded Sun Microsystems in 1982 along with Scott McNealy, Vinod Khosla, and Andy Bechtolsheim, and served as Chief Scientist and CTO at the company until 2003. He played an integral role in the early development of BSD UNIX while a graduate student at Berkeley, and he is the original author of the vi text editor. He also wrote the 2000 essay "Why The Future Doesn't Need Us", in which he expressed deep concerns over the development of modern technologies. Joy was elected a member of the National Academy of Engineering (1999) for contributions to operating systems and networking software. Early career. Joy was born in the Detroit suburb of Farmington Hills, Michigan, to William Joy, a school vice-principal and counselor, and Ruth Joy. He earned a Bachelor of Science in electrical engineering from the University of Michigan and a Master of Science in electrical engineering and computer science from the University of California, Berkeley, in 1979. While a graduate student at Berkeley, he worked for Bob Fabry's Computer Systems Research Group (CSRG) on the Berkeley Software Distribution (BSD) version of the Unix operating system. He initially worked on a Pascal compiler left at Berkeley by Ken Thompson, who had been visiting the university when Joy had just started his graduate work. He later moved on to improving the Unix kernel, and also handled BSD distributions. Some of his most notable contributions were the ex and vi editors and the C shell. Joy's prowess as a computer programmer is legendary, with an oft-told anecdote that he wrote the vi editor in a weekend. Joy denies this assertion. A few of his other accomplishments have also sometimes been exaggerated; Eric Schmidt, CEO of Novell at the time, inaccurately reported during an interview in PBS's documentary "Nerds 2.0.1" that Joy had personally rewritten the BSD kernel in a weekend. In 1980, he also wrote codice_1, about which Rob Pike and Brian W. Kernighan wrote that it went against the Unix philosophy. According to a "Salon" article, during the early 1980s DARPA had contracted the company Bolt, Beranek and Newman (BBN) to add TCP/IP to Berkeley UNIX. Joy had been instructed to plug BBN's stack into Berkeley Unix, but he refused to do so, as he had a low opinion of BBN's TCP/IP, and instead wrote his own high-performance TCP/IP stack. John Gage has recounted this version of events; Rob Gurwitz, who was working at BBN at the time, disputes it. Sun Microsystems. In 1982, after the firm had been operating for six months, Joy was brought in with full co-founder status as Sun Microsystems' sixteenth employee. At Sun, Joy was an inspiration for the development of NFS, the SPARC microprocessors, the Java programming language, Jini/JavaSpaces, and JXTA. In 1986, Joy was awarded a Grace Murray Hopper Award by the ACM for his work on the Berkeley UNIX Operating System. On September 9, 2003, Sun announced Joy was leaving the company and that he "is taking time to consider his next move and has no definite plans". Post-Sun activities. In 1999, Joy co-founded a venture capital firm, HighBAR Ventures, with two Sun colleagues, Andy Bechtolsheim and Roy Thiele-Sardiña. In January 2005 he was named a partner in the venture capital firm Kleiner Perkins. There, Joy has made several investments in green energy industries, even though he does not have any credentials in the field. 
He once said, "My method is to look at something that seems like a good idea and assume it's true". In 2011, he was inducted as a Fellow of the Computer History Museum for his work on the Berkeley Software Distribution (BSD) Unix system and the co-founding of Sun Microsystems. Technology concerns. In 2000, Joy gained notoriety with the publication of his article in "Wired" magazine, "Why The Future Doesn't Need Us", in which he declared, in what some have described as a "neo-Luddite" position, that he was convinced that growing advances in genetic engineering and nanotechnology would bring risks to humanity. He argued that intelligent robots would replace humanity, at the very least in intellectual and social dominance, in the relatively near future. He supports and promotes the idea of abandoning GNR (genetics, nanotechnology, and robotics) technologies, rather than entering an arms race between negative uses of the technology and defenses against those negative uses (good nano-machines patrolling and defending against grey-goo "bad" nano-machines). This stance of broad relinquishment was criticized by technologists such as the technological-singularity thinker Ray Kurzweil, who instead advocates fine-grained relinquishment and ethical guidelines. Joy was also criticized by "The American Spectator", which characterized his essay as a (possibly unwitting) rationale for statism. A bar-room discussion of these technologies with Ray Kurzweil started to set Joy's thinking along this path. He states in his essay that during the conversation, he was surprised that other serious scientists considered such possibilities likely, and even more astounded at what he felt was a lack of consideration of the contingencies. After bringing the subject up with a few more acquaintances, he states that he was further alarmed: although many people considered these futures possible or probable, very few of them seemed to share as serious a concern about the dangers as he did. This concern led to his in-depth examination of the issue and of the positions of others in the scientific community on it, and eventually to his current activities regarding it. Despite this, he is a venture capitalist, investing in technology companies. He has also raised a specialty venture fund to address the dangers of pandemic diseases, such as H5N1 avian influenza and biological weapons. Joy's law. Of management. In his 2013 book "Makers", author Chris Anderson credited Joy with establishing "Joy's law" based on a quip: "No matter who you are, most of the smartest people work for someone else [other than you]." His argument was that companies use an inefficient process by not hiring the best employees, only those they are able to hire. His "law" was a continuation of Friedrich Hayek's "The Use of Knowledge in Society" and warned that the competition outside of a company would always have the potential to be greater than the company itself. Of computing. Joy devised a formula in 1983, also called "Joy's law", stating that peak computer speed doubles each year and is thus given by a simple function of time. Specifically, S = 2^(Y − 1984), in which S is the peak computer speed attained during year Y, expressed in MIPS.
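A quick tabulation of Joy's law of computing as stated above (a sketch; the helper name is ours, not from the source):

def peak_mips(year: int) -> float:
    # S = 2^(Y - 1984): peak speed in MIPS doubles every year
    return 2.0 ** (year - 1984)

for year in (1984, 1990, 2000):
    print(year, peak_mips(year), "MIPS")   # 1 MIPS in 1984, 64 in 1990, 65536 in 2000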
3967
6036800
https://en.wikipedia.org/wiki?curid=3967
Bandwidth (signal processing)
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in units of hertz (symbol Hz). It may refer more specifically to two subcategories: "Passband bandwidth" is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. "Baseband bandwidth" is equal to the upper cutoff frequency of a low-pass filter or baseband signal, which includes a zero frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy, and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Overview. Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition refers to the frequencies beyond which performance is degraded. In the case of frequency response, degradation could, for example, mean more than 3 dB below the maximum value, or it could mean below a certain absolute value. As with any definition of the "width" of a function, many definitions are suitable for different purposes. In the context of, for example, the sampling theorem and Nyquist sampling rate, bandwidth typically refers to baseband bandwidth. In the context of Nyquist symbol rate or Shannon–Hartley channel capacity for communication systems, it refers to passband bandwidth. The Rayleigh bandwidth of a simple radar pulse is defined as the inverse of its duration. For example, a one-microsecond pulse has a Rayleigh bandwidth of one megahertz. The essential bandwidth is defined as the portion of a signal spectrum in the frequency domain which contains most of the energy of the signal. "x" dB bandwidth. In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density (in W/Hz or V²/Hz) is nonzero or above a small threshold value. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is, the point where the spectral density is half its maximum value (or the spectral amplitude, in √(W/Hz) or V/√Hz, is 70.7% of its maximum). This figure, with a lower threshold value, can be used in calculations of the lowest sampling rate that will satisfy the sampling theorem.
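The 70.7% amplitude figure quoted above is simply the half-power condition restated; a short derivation from standard definitions (not taken from the source text): since power is proportional to the square of amplitude, the half-power point satisfies

    \[ |H|^2 = \tfrac{1}{2}\,|H|_{\max}^2 \quad\Longrightarrow\quad |H| = \frac{|H|_{\max}}{\sqrt{2}} \approx 0.707\,|H|_{\max}, \qquad 10\log_{10}\!\tfrac{1}{2} \approx -3.01\ \mathrm{dB}. \]

The rounded value −3 dB is the origin of the "3 dB point" label used throughout this section.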
The 3 dB bandwidth is also used to denote system bandwidth, for example in filter or communication channel systems. To say that a system has a certain bandwidth means that the system can process signals with that range of frequencies, or that the system reduces the bandwidth of a white noise input to that bandwidth. The 3 dB bandwidth of an electronic filter or communication channel is the part of the system's frequency response that lies within 3 dB of the response at its peak, which, in the passband filter case, is typically at or near its center frequency, and in the low-pass filter case is at or near its cutoff frequency. If the maximum gain is 0 dB, the 3 dB bandwidth is the frequency range where attenuation is less than 3 dB. 3 dB attenuation is also where power is half its maximum. This same "half-power gain" convention is also used in spectral width, and more generally for the extent of functions as full width at half maximum (FWHM). In electronic filter design, a filter specification may require that within the filter passband, the gain is nominally 0 dB with a small variation, for example within the ±1 dB interval. In the stopband(s), the required attenuation in decibels is above a certain level, for example >100 dB. In a transition band the gain is not specified. In this case, the filter bandwidth corresponds to the passband width, which in this example is the 1 dB bandwidth. If the filter shows amplitude ripple within the passband, the "x" dB point refers to the point where the gain is "x" dB below the nominal passband gain rather than "x" dB below the maximum gain. In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops 3 dB below peak. In communication systems, in calculations of the Shannon–Hartley channel capacity, bandwidth refers to the 3 dB bandwidth. In calculations of the maximum symbol rate, the Nyquist sampling rate, and the maximum bit rate according to Hartley's law, the bandwidth refers to the frequency range within which the gain is non-zero. The fact that in equivalent baseband models of communication systems the signal spectrum consists of both negative and positive frequencies can lead to confusion about bandwidth, since such models are sometimes described only by the positive half, and one will occasionally see expressions such as B = 2W, where B is the total bandwidth (i.e. the maximum passband bandwidth of the carrier-modulated RF signal and the minimum passband bandwidth of the physical passband channel), and W is the positive bandwidth (the baseband bandwidth of the equivalent channel model). For instance, the baseband model of the signal would require a low-pass filter with cutoff frequency of at least W to stay intact, and the physical passband channel would require a passband filter of at least B to stay intact. As a simple illustration, a baseband audio signal with W = 5 kHz occupies B = 2W = 10 kHz of passband bandwidth when amplitude-modulated onto a radio carrier. Relative bandwidth. The absolute bandwidth is not always the most appropriate or useful measure of bandwidth. For instance, in the field of antennas, constructing an antenna to meet a specified absolute bandwidth is easier at a higher frequency than at a lower frequency. For this reason, bandwidth is often quoted relative to the frequency of operation, which gives a better indication of the structure and sophistication needed for the circuit or device under consideration. There are two different measures of relative bandwidth in common use: "fractional bandwidth" (B_F) and "ratio bandwidth" (B_R).
In the following, the absolute bandwidth is defined as B = Δf = f_H − f_L, where f_H and f_L are the upper and lower frequency limits respectively of the band in question. Fractional bandwidth. Fractional bandwidth is defined as the absolute bandwidth divided by the center frequency f_C: B_F = Δf / f_C. The center frequency is usually defined as the arithmetic mean of the upper and lower frequencies, so that f_C = (f_H + f_L)/2 and B_F = 2(f_H − f_L)/(f_H + f_L). However, the center frequency is sometimes defined as the geometric mean of the upper and lower frequencies, f_C = √(f_H f_L), giving B_F = (f_H − f_L)/√(f_H f_L). While the geometric mean is more rarely used than the arithmetic mean (and the latter can be assumed if not stated explicitly), the former is considered more mathematically rigorous: it more properly reflects the logarithmic relationship of fractional bandwidth with increasing frequency. For narrowband applications, there is only a marginal difference between the two definitions; the geometric-mean version is inconsequentially larger. For wideband applications they diverge substantially, with the arithmetic-mean version approaching 2 in the limit and the geometric-mean version approaching infinity. Fractional bandwidth is sometimes expressed as a percentage of the center frequency (percent bandwidth, %B): %B = 100 · Δf / f_C. Ratio bandwidth. Ratio bandwidth is defined as the ratio of the upper and lower limits of the band: B_R = f_H / f_L. Ratio bandwidth may be notated as B_R:1. The relationship between ratio bandwidth and fractional bandwidth is given by B_F = 2(B_R − 1)/(B_R + 1) and B_R = (2 + B_F)/(2 − B_F). Percent bandwidth is a less meaningful measure in wideband applications. A percent bandwidth of 100% corresponds to a ratio bandwidth of 3:1. All higher ratios up to infinity are compressed into the range 100–200%. Ratio bandwidth is often expressed in octaves (i.e., as a frequency level) for wideband applications. An octave is a frequency ratio of 2:1, leading to this expression for the number of octaves: log₂(B_R). Noise equivalent bandwidth. The noise equivalent bandwidth (or equivalent noise bandwidth, ENBW) of a system with frequency response H(f) is the bandwidth of an ideal filter with rectangular frequency response, centered on the system's central frequency, that passes the same average power as H(f) when both systems are excited with a white noise source. The value of the noise equivalent bandwidth depends on the ideal filter reference gain used. Typically, this gain equals the value of |H(f)| at the center frequency, but it can also equal the peak value of |H(f)|. The noise equivalent bandwidth B_N can be calculated in the frequency domain using H(f) or in the time domain by exploiting Parseval's theorem with the system impulse response h(t). If H(f) is a lowpass system with zero central frequency and the filter reference gain is referred to this frequency, then: B_N = (1 / |H(0)|²) ∫₀^∞ |H(f)|² df. The same expression can be applied to bandpass systems by substituting the equivalent baseband frequency response for H(f). The noise equivalent bandwidth is widely used to simplify the analysis of telecommunication systems in the presence of noise. Photonics. In photonics, the term "bandwidth" carries a variety of meanings depending on the context. A related concept is the spectral linewidth of the radiation emitted by excited atoms.
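To make the relative-bandwidth and noise-bandwidth formulas above concrete, here is a minimal Python sketch; the 4–6 GHz band edges and the 1 kHz low-pass cutoff are illustrative values invented for this example, not taken from the text:

    import math

    f_H, f_L = 6.0e9, 4.0e9              # upper/lower band edges (illustrative)
    B = f_H - f_L                        # absolute bandwidth

    f_C_arith = (f_H + f_L) / 2          # arithmetic-mean center frequency
    f_C_geo = math.sqrt(f_H * f_L)       # geometric-mean center frequency

    B_F = B / f_C_arith                  # fractional bandwidth (arithmetic)
    B_R = f_H / f_L                      # ratio bandwidth
    octaves = math.log2(B_R)             # ratio bandwidth expressed in octaves

    # Consistency check with the conversion formula B_F = 2(B_R - 1)/(B_R + 1)
    assert math.isclose(B_F, 2 * (B_R - 1) / (B_R + 1))

    print(f"B = {B/1e9:.1f} GHz, B_F = {B_F:.3f} ({100*B_F:.0f}%), "
          f"B_R = {B_R:.2f}:1 = {octaves:.3f} octaves")

    # Noise equivalent bandwidth of a one-pole low-pass, |H(f)|^2 = 1/(1 + (f/f_c)^2):
    # the integral in the definition evaluates in closed form to B_N = (pi/2) * f_c.
    f_c = 1.0e3
    print(f"ENBW of a one-pole low-pass with f_c = {f_c:.0f} Hz: {math.pi/2*f_c:.1f} Hz")

The assert simply verifies that the arithmetic-mean fractional bandwidth agrees with the ratio-bandwidth conversion formula quoted above, as it must algebraically; the final lines illustrate that the ENBW of a one-pole low-pass exceeds its 3 dB bandwidth by a factor of π/2.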
3968
49996668
https://en.wikipedia.org/wiki?curid=3968
Bodhisattva
In Buddhism, a bodhisattva is a person who has attained, or is striving towards, "bodhi" ('awakening', 'enlightenment') or Buddhahood. Often, the term specifically refers to a person who forgoes or delays personal nirvana or "bodhi" in order to compassionately help other individuals reach Buddhahood. In the Early Buddhist schools, as well as modern Theravāda Buddhism, bodhisattva (or bodhisatta) refers to someone who has made a resolution to become a Buddha and has also received a confirmation or prediction from a living Buddha that this will come to pass. In Theravāda Buddhism, the bodhisattva is mainly seen as an exceptional and rare individual. Only a few select individuals are ultimately able to become bodhisattvas, such as Maitreya. In Mahāyāna Buddhism, a bodhisattva refers to anyone who has generated "bodhicitta", a spontaneous wish and compassionate mind to attain Buddhahood for the benefit of all sentient beings. Mahayana bodhisattvas are spiritually heroic persons who work to attain awakening and are driven by a great compassion ("mahākaruṇā"). These beings are exemplified by important spiritual qualities such as the "four divine abodes" ("brahmavihāras") of loving-kindness ("maitrī"), compassion ("karuṇā"), empathetic joy ("muditā") and equanimity ("upekṣā"), as well as the various bodhisattva "perfections" ("pāramitās") which include "prajñāpāramitā" ("transcendent knowledge" or "perfection of wisdom") and skillful means ("upāya"). Mahāyāna Buddhism generally understands the bodhisattva path as being open to everyone, and Mahāyāna Buddhists encourage all individuals to become bodhisattvas. Spiritually advanced bodhisattvas such as Avalokiteshvara, Maitreya, and Manjushri are also widely venerated across the Mahāyāna Buddhist world and are believed to possess great magical power, which they employ to help all living beings. Etymology. Bodhisattva is a combination of two Sanskrit words: Bodhi (बोधि), meaning "awakening" or "enlightenment", and Sattva (सत्त्व), meaning "being", as pertaining to a person who has achieved, or is striving towards, "bodhi" ('awakening' or 'enlightenment') or Buddhahood. Early Buddhism. In pre-sectarian Buddhism, the term "bodhisatta" is used in the early texts to refer to Gautama Buddha in his previous lives and as a young man in his last life, when he was working towards liberation. In the early Buddhist discourses, the Buddha regularly uses the phrase "when I was an unawakened Bodhisatta" to describe his experiences before his attainment of awakening. The early texts which discuss the period before the Buddha's awakening mainly focus on his spiritual development. According to Bhikkhu Analayo, most of these passages focus on three main themes: "the bodhisattva's overcoming of unwholesome states of mind, his development of mental tranquillity, and the growth of his insight." Other early sources like the "Acchariyabbhutadhamma-sutta" (MN 123, and its Chinese parallel in Madhyama-āgama 32) discuss the marvelous qualities of the bodhisattva Gautama in his previous life in Tuṣita heaven. The Pali text focuses on how the bodhisattva was endowed with mindfulness and clear comprehension while living in Tuṣita, while the Chinese source states that his lifespan, appearance, and glory were greater than those of all the devas (gods). These sources also discuss various miracles which accompanied the bodhisattva's conception and birth, most famously, his taking seven steps and proclaiming that this was his last life.
The Chinese source (titled "Discourse on Marvellous Qualities") also states that while living as a monk under the Buddha Kāśyapa he "made his initial vow to [realize] Buddhahood [while] practicing the holy life." Another early source that discusses the qualities of bodhisattvas is the "Mahāpadāna sutta." This text discusses bodhisattva qualities in the context of six previous Buddhas who lived long ago, such as Buddha Vipaśyī. Yet another important element of the bodhisattva doctrine, the prediction of someone's future Buddhahood, is found in another Chinese early Buddhist text, the "Discourse on an Explanation about the Past" (MĀ 66). In this discourse, a monk named Maitreya aspires to become a Buddha in the future, and the Buddha then predicts that Maitreya will become a Buddha in the future. Other discourses found in the "Ekottarika-āgama" present the "bodhisattva Maitreya" as an example figure (EĀ 20.6 and EĀ 42.6), and one sutra in this collection also discusses how the Buddha taught the bodhisattva path of the six perfections to Maitreya (EĀ 27.5). 'Bodhisatta' may also connote a being who is "bound for enlightenment", in other words, a person whose aim is to become fully enlightened. In the Pāli canon, the Bodhisatta (bodhisattva) is also described as someone who is still subject to birth, illness, death, sorrow, defilement, and delusion. According to the Theravāda monk Bhikkhu Bodhi, while all the Buddhist traditions agree that to attain Buddhahood one must "make a deliberate resolution" and fulfill the spiritual perfections (pāramīs or pāramitās) as a bodhisattva, the actual bodhisattva path is not taught in the earliest strata of Buddhist texts such as the Pali Nikayas (and their counterparts such as the Chinese Āgamas), which instead focus on the ideal of the arahant. The oldest known story about how Gautama Buddha becomes a bodhisattva is the story of his encounter with the previous Buddha, Dīpankara. During this encounter, a previous incarnation of Gautama, variously named Sumedha, Megha, or Sumati, offers five blue lotuses and spreads out his hair or entire body for Dīpankara to walk on, resolving to one day become a Buddha. Dīpankara then confirms that they will attain Buddhahood. Early Buddhist authors saw this story as indicating that the making of a resolution ("abhinīhāra") in the presence of a living Buddha and his prediction/confirmation ("vyākaraṇa") of one's future Buddhahood was necessary to become a bodhisattva. According to Drewes, "all known models of the path to Buddhahood developed from this basic understanding." Stories and teachings on the bodhisattva ideal are found in the various Jataka tale sources, which mainly focus on stories of the past lives of Sakyamuni. Among the non-Mahayana Nikaya schools, the Jataka literature was likely the main genre that contained bodhisattva teachings. These stories had certainly become an important part of popular Buddhism by the time of the carving of the Bharhut Stupa railings (c. 125–100 BCE), which contain depictions of around thirty Jataka tales. Thus, it is possible that the bodhisattva ideal was popularized through the telling of Jatakas. Jataka tales contain numerous stories which focus on the past life deeds of Sakyamuni when he was a bodhisattva. These deeds generally express bodhisattva qualities and practices (such as compassion, the six perfections, and supernatural power) in dramatic ways, and include numerous acts of self-sacrifice.
Apart from Jataka stories related to Sakyamuni, the idea that Metteya (Maitreya), who currently resides in Tuṣita, would become the future Buddha and that this had been predicted by the Buddha Sakyamuni was also an early doctrine related to the bodhisattva ideal. It first appears in the "Cakkavattisihanadasutta". According to A. L. Basham, it is also possible that some of the Ashokan edicts reveal knowledge of the bodhisattva ideal. Basham even argues that Ashoka may have considered himself a bodhisattva, as one edict states that he "set out for sambodhi." Nikāya schools. By the time that the Buddhist tradition had developed into various competing sects, the idea of the bodhisattva vehicle (Sanskrit: "bodhisattvayana") as a distinct (and superior) path from that of the arhat and solitary buddha was widespread among all the major non-Mahayana Buddhist traditions or Nikaya schools, including Theravāda, Sarvāstivāda and Mahāsāṃghika. The doctrine is found, for example, in 2nd century CE sources like the "Avadānaśataka" and the "Divyāvadāna." The bodhisattvayana was referred to by other names such as "vehicle of the perfections" ("pāramitāyāna"), "bodhisatva dharma", "bodhisatva training", and "vehicle of perfect Buddhahood". According to various sources, some of the Nikaya schools (such as the Dharmaguptaka and some of the Mahasamghika sects) transmitted a collection of texts on bodhisattvas alongside the Tripitaka, which they termed "Bodhisattva Piṭaka" or "Vaipulya (Extensive) Piṭaka". None of these have survived. Har Dayal attributes the historical development of the bodhisattva ideal to "the growth of bhakti (devotion, faith, love) and the idealisation and spiritualisation of the Buddha." The North Indian Sarvāstivāda school held that it took Gautama three "incalculable aeons" ("asaṃkhyeyas") and ninety-one aeons ("kalpas") to become a Buddha after his resolution ("praṇidhāna") in front of a past Buddha. During the first incalculable aeon he is said to have encountered and served 75,000 Buddhas, and 76,000 in the second, after which he received his first prediction ("vyākaraṇa") of future Buddhahood from Dīpankara, meaning that he could no longer fall back from the path to Buddhahood. For Sarvāstivāda, the first two incalculable aeons are a period of time in which a bodhisattva may still fall away and regress from the path. At the end of the second incalculable aeon, they encounter a buddha and receive their prediction, at which point they are certain to achieve Buddhahood. Thus, the presence of a living Buddha is also necessary for Sarvāstivāda. The "Mahāvibhāṣā" explains that its discussion of the bodhisattva path is partly meant "to stop those who are in fact not bodhisattvas from giving rise to the self-conceit that they are." However, for Sarvāstivāda, one is not technically a bodhisattva until the end of the third incalculable aeon, after which one begins to perform the actions which lead to the manifestation of the marks of a great person. The "Mahāvastu" of the Mahāsāṃghika-Lokottaravādins presents various ideas regarding the school's conception of the bodhisattva ideal. According to this text, bodhisattva Gautama had already reached a level of dispassion at the time of Buddha Dīpaṃkara many aeons ago, and he is also said to have attained the perfection of wisdom countless aeons ago. The "Mahāvastu" also presents four stages or courses ("caryās") of the bodhisattva path without giving specific time frames (though it is said to take various incalculable aeons).
This set of four phases of the path is also found in other sources, including the Gandhari "Many-Buddhas Sūtra" (*"Bahubudha gasutra") and the Chinese "Fó běnxíng jí jīng" (佛本行 集經, Taisho vol. 3, no. 190, pp. 669a1–672a11). The four "caryās" (Gandhari: "caria") are the natural course ("prakṛti-caryā"), the course of the resolution ("praṇidhāna-caryā"), the course of conforming to the resolution ("anuloma-caryā"), and the course of non-reversal ("anivartana-caryā"). Theravāda. The bodhisattva ideal is also found in southern Buddhist sources, like the Theravāda school's "Buddhavaṃsa" (1st-2nd century BCE), which explains how Gautama, after making a resolution ("abhinīhāra") and receiving his prediction ("vyākaraṇa") of future Buddhahood from past Buddha Dīpaṃkara, became certain ("dhuva") to attain Buddhahood. Gautama then took four incalculable aeons and a hundred thousand shorter "kalpas" (aeons) to reach Buddhahood. Several sources in the Pali Canon depict the idea that there are multiple Buddhas and that there will be many future Buddhas, all of which must train as bodhisattas. Non-canonical Theravada Jataka literature also teaches about bodhisattvas and the bodhisattva path. The worship of bodhisattvas like Metteya, Saman and Natha (Avalokiteśvara) can also be found in Theravada Buddhism. By the time of the great scholar Buddhaghosa (5th century CE), orthodox Theravāda held the standard Indian Buddhist view that there were three main spiritual paths within Buddhism: the way of the Buddhas ("buddhayāna") i.e. the bodhisatta path; the way of the individual Buddhas ("paccekabuddhayāna"); and the way of the disciples ("sāvakayāna"). The Sri Lankan commentator Dhammapāla (6th century CE) wrote a commentary on the "Cariyāpiṭaka", a text which focuses on the bodhisattva path and on the ten perfections of a bodhisatta. Dhammapāla's commentary notes that to become a bodhisattva one must make a valid resolution in front of a living Buddha. The Buddha then must provide a prediction ("vyākaraṇa") which confirms that one is irreversible ("anivattana") from the attainment of Buddhahood. The "Nidānakathā", as well as the "Buddhavaṃsa" and "Cariyāpiṭaka" commentaries, make this explicit by stating that one cannot use a substitute (such as a Bodhi tree, Buddha statue or Stupa) for the presence of a living Buddha, since only a Buddha has the knowledge for making a reliable prediction. This is the generally accepted view maintained in orthodox Theravada today. According to Theravāda commentators like Dhammapāla as well as the "Suttanipāta" commentary, there are three types of bodhisattvas: those in whom wisdom predominates ("paññādhika"), those in whom faith predominates ("saddhādhika"), and those in whom energy predominates ("vīriyādhika"). According to modern Theravada authors, meeting a Buddha is needed to truly make someone a bodhisattva because any other resolution to attain Buddhahood may easily be forgotten or abandoned during the aeons ahead. The Burmese monk Ledi Sayadaw (1846–1923) explains that though it is easy to make vows for future Buddhahood by oneself, it is very difficult to maintain the necessary conduct and views during periods when the Dharma has disappeared from the world. One will easily fall back during such periods, and this is why one is not truly a full bodhisattva until one receives recognition from a living Buddha. Because of this, it was and remains a common practice in Theravada to attempt to establish the necessary conditions to meet the future Buddha Maitreya and thus receive a prediction from him. Medieval Theravada literature and inscriptions report the aspirations of monks, kings and ministers to meet Maitreya for this purpose.
Modern figures such as Anagarika Dharmapala (1864–1933) and U Nu (1907–1995) both sought to receive a prediction from a Buddha in the future and believed meritorious actions done for the good of Buddhism would help in their endeavor to become bodhisattvas in the future. Over time, the term came to be applied to other figures besides Gautama Buddha in Theravada lands, possibly due to the influence of Mahayana. The Theravada Abhayagiri tradition of Sri Lanka practiced Mahayana Buddhism and was very influential until the 12th century. Kings of Sri Lanka were often described as bodhisattvas, starting at least as early as Sirisanghabodhi (r. 247–249), who was renowned for his compassion, took vows for the welfare of the citizens, and was regarded as a mahāsatta (Sanskrit: "mahāsattva"), an epithet used almost exclusively in Mahayana Buddhism. Many other Sri Lankan kings from the 3rd until the 15th century were also described as bodhisattas, and their royal duties were sometimes clearly associated with the practice of the ten pāramitās. In some cases, they explicitly claimed to have received predictions of Buddhahood in past lives. Popular Buddhist figures have also been seen as bodhisattvas in Theravada Buddhist lands. Shanta Ratnayaka notes that Anagarika Dharmapala, Asarapasarana Saranarikara Sangharaja, and Hikkaduwe Sri Sumamgala "are often called bodhisattvas". Buddhaghosa was also traditionally considered to be a reincarnation of Maitreya. Paul Williams writes that some modern Theravada meditation masters in Thailand are popularly regarded as bodhisattvas. Various modern figures of esoteric Theravada traditions (such as the weizzās of Burma) have also claimed to be bodhisattvas. Theravada bhikkhu and scholar Walpola Rahula writes that the bodhisattva ideal has traditionally been held to be higher than the state of a "śrāvaka" not only in Mahayana but also in Theravada. Rahula writes "the fact is that both the Theravada and the Mahayana unanimously accept the Bodhisattva ideal as the highest...Although the Theravada holds that anybody can be a Bodhisattva, it does not stipulate or insist that all must be Bodhisattva which is considered not practical." He also quotes the 10th century king of Sri Lanka, Mahinda IV (956–972 CE), who had the words inscribed "none but the bodhisattvas will become kings of a prosperous Lanka," among other examples. Jeffrey Samuels echoes this perspective, noting that while in Mahayana Buddhism the bodhisattva path is held to be universal and for everyone, in Theravada it is "reserved for and appropriated by certain exceptional people." Mahāyāna. Early Mahāyāna. Mahāyāna Buddhism (often also called "Bodhisattvayāna", "Bodhisattva Vehicle") is based principally upon the path of a bodhisattva. This path was seen as higher and nobler than becoming an arhat or a solitary Buddha. Dayal notes that Sanskrit sources generally depict the bodhisattva path as reaching a higher goal (i.e. "anuttara-samyak-sambodhi") than the goal of the path of the "disciples" (śrāvakas), which is the nirvana attained by arhats. For example, the "Lotus Sutra" states: To the sravakas, he preached the doctrine which is associated with the four Noble Truths and leads to Dependent Origination. It aims at transcending birth, old age, disease, death, sorrow, lamentation, pain, distress of mind and weariness; and it ends in nirvana.
But, to the great being, the bodhisattva, he preached the doctrine, which is associated with the six perfections and which ends in the Knowledge of the Omniscient One after the attainment of the supreme and perfect bodhi. According to Peter Skilling, the Mahayana movement began when "at an uncertain point, let us say in the first century BCE, groups of monks, nuns, and lay-followers began to devote themselves exclusively to the Bodhisatva vehicle." These Mahayanists universalized the bodhisattvayana as a path which was open to everyone and which was taught for all beings to follow. This was in contrast to the Nikaya schools, which held that the bodhisattva path was only for a rare set of individuals. Indian Mahayanists preserved and promoted a set of texts called Vaipulya ("Extensive") sutras (later called Mahayana sutras). Mahayana sources like the "Lotus Sutra" also claim that arhats that have reached nirvana have not truly finished their spiritual quest, for they still have not attained the superior goal of sambodhi (Buddhahood) and thus must continue to strive until they reach this goal. The "Aṣṭasāhasrikā Prajñāpāramitā Sūtra", one of the earliest known Mahayana texts, contains a simple and brief definition for the term "bodhisattva", which is also the earliest known Mahāyāna definition. This definition is given as the following: "Because he has bodhi as his aim, a bodhisattva-mahāsattva is so called." Mahayana sutras also depict the bodhisattva as a being who, because they want to reach Buddhahood for the sake of all beings, is more loving and compassionate than the sravaka (who only wishes to end their own suffering). Thus, another major difference between the bodhisattva and the arhat is that the bodhisattva practices the path for the good of others ("par-ārtha"), due to their bodhicitta, while the sravakas do so for their own good ("sv-ārtha") and thus do not have bodhicitta (which is compassionately focused on others). Mahayana bodhisattvas were not just abstract models for Buddhist practice, but also developed as distinct figures which were venerated by Indian Buddhists. These included figures like Manjushri and Avalokiteshvara, which are personifications of the basic virtues of wisdom and compassion respectively and are the two most important bodhisattvas in Mahayana. The development of bodhisattva devotion parallels the development of the Hindu bhakti movement. Indeed, Dayal sees the development of Indian bodhisattva cults as a Buddhist reaction to the growth of bhakti-centered religion in India, which helped to popularize and reinvigorate Indian Buddhism. Some Mahayana sutras promoted another revolutionary doctrinal turn, claiming that the three vehicles of the "Śrāvakayāna", "Pratyekabuddhayāna" and the "Bodhisattvayāna" were really just one vehicle ("ekayana"). This is most famously promoted in the "Lotus Sūtra", which claims that the very idea of three separate vehicles is just an "upaya", a skillful device invented by the Buddha to get beings of various abilities on the path. But ultimately, it will be revealed to them that there is only one vehicle, the "ekayana", which ends in Buddhahood. Mature scholastic Mahāyāna. Classical Indian Mahayanists held that the only sutras which teach the bodhisattva vehicle are the Mahayana sutras. Thus, Nagarjuna writes "the subjects based on the deeds of Bodhisattvas were not mentioned in [non-Mahāyāna] sūtras."
They also held that the bodhisattva path was superior to the śrāvaka vehicle, and so the bodhisattva vehicle is the "great vehicle" (mahayana) due to its greater aspiration to save others, while the śrāvaka vehicle is the "small" or "inferior" vehicle (hinayana). Thus, Asanga argues in his "Mahāyānasūtrālaṃkāra" that the two vehicles differ in numerous ways, such as intention, teaching, employment (i.e., means), support, and the time that it takes to reach the goal. Over time, Mahayana Buddhists developed mature systematized doctrines about the bodhisattva. The authors of the various Madhyamaka treatises often presented the view of the "ekayana", and thus held that all beings can become bodhisattvas. The texts and sutras associated with the Yogacara school developed a different theory of three separate "gotras" (families, lineages) that inherently predisposed a person to either the vehicle of the "arhat", "pratyekabuddha" or "samyak-saṃbuddha" (fully self-awakened one). For the Yogacarins then, only some beings (those who have the "bodhisattva lineage") can enter the bodhisattva path. In East Asian Buddhism, the view of the one vehicle ("ekayana"), which holds that all Buddhist teachings are really part of a single path, is the standard view. The term bodhisattva was also used in a broader sense by later authors. According to the eighth-century Mahāyāna philosopher Haribhadra, the term "bodhisattva" can refer to those who follow any of the three vehicles, since all are working towards "bodhi". Therefore, the specific term for a Mahāyāna bodhisattva is "bodhisattva mahāsattva" ("great being" bodhisattva). According to Atiśa's 11th century "Bodhipathapradīpa", the central defining feature of a Mahāyāna bodhisattva is the universal aspiration to end suffering for all sentient beings, which is termed "bodhicitta" (the mind set on awakening). The bodhisattva doctrine went through a significant transformation during the development of Buddhist tantra, also known as Vajrayana. This movement developed new ideas and texts which introduced new bodhisattvas and re-interpreted old ones in new forms, developed elaborate mandalas for them, and introduced new practices which made use of mantras, mudras and other tantric elements. Entering the bodhisattva path. According to David Drewes, "Mahayana sutras unanimously depict the path beginning with the first arising of the thought of becoming a Buddha ("prathamacittotpāda"), or the initial arising of "bodhicitta", typically aeons before one first receives a Buddha's prediction, and apply the term bodhisattva from this point." The "Ten Stages Sutra", for example, explains that the arising of bodhicitta is the first step in the bodhisattva's career. Thus, the arising of bodhicitta, the compassionate mind aimed at awakening for the sake of all beings, is a central defining element of the bodhisattva path. Another key element of the bodhisattva path is the concept of a bodhisattva's "praṇidhāna", which can mean a resolution, resolve, vow, prayer, wish, aspiration or determination. This more general idea of an earnest wish or solemn resolve, which is closely connected with bodhicitta (and is the cause and result of bodhicitta), eventually developed into the idea that bodhisattvas take certain formulaic "bodhisattva vows." One of the earliest of these formulas states: We having crossed (the stream of samsara), may we help living beings to cross! We being liberated, may we liberate others! We being comforted, may we comfort others!
We being finally released, may we release others! Other sutras contain longer and more complex formulas, such as the ten vows found in the "Ten Stages Sutra". Mahayana sources also discuss the importance of a Buddha's prediction ("vyākaraṇa") of a bodhisattva's future Buddhahood. This is seen as an important step along the bodhisattva path. Later Mahayana Buddhists also developed specific rituals and devotional acts which helped to develop various preliminary qualities, such as faith, worship, prayer, and confession, that lead to the arising of "bodhicitta". These elements, which constitute a kind of preliminary preparation for bodhicitta, are found in the "seven part worship" ("saptāṅgavidhi", "saptāṇgapūjā" or "saptavidhā anuttarapūjā"). This ritual form is visible in the works of Shantideva (8th century) and includes paying homage to the Buddhas, worshipping them with offerings, confessing one's faults, rejoicing in the merit of others, requesting the Buddhas to teach, beseeching the Buddhas not to pass into final nirvana, and dedicating one's merit to all beings. After these preliminaries have been accomplished, the aspirant is seen as being ready to give rise to bodhicitta, often through the recitation of a bodhisattva vow. Contemporary Mahāyāna Buddhism encourages everyone to give rise to bodhicitta and ceremonially take bodhisattva vows. With these vows and precepts, one makes the promise to work for the complete enlightenment of all sentient beings by practicing the transcendent virtues or paramitas. In Mahāyāna, bodhisattvas are often not Buddhist monastics but lay practitioners. Bodhisattva conduct (caryā). After a being has entered the path by giving rise to bodhicitta, they must make effort in the practice or conduct ("caryā") of the bodhisattvas, which includes all the duties, virtues and practices that bodhisattvas must accomplish to attain Buddhahood. An important early Mahayana source for the practice of the bodhisattva is the "Bodhisattvapiṭaka sūtra", a major sutra found in the "Mahāratnakūṭa" collection which was widely cited by various sources. According to Ulrich Pagel, this text is "one of the longest works on the bodhisattva in Mahayana literature" and thus provides extensive information on the topic of bodhisattva training, especially the perfections ("pāramitā"). Pagel also argues that this text was quite influential on later Mahayana writings which discuss the bodhisattva and thus was "of fundamental importance to the evolution of the bodhisattva doctrine." Other sutras in the "Mahāratnakūṭa" collection are also important sources for the bodhisattva path. According to Pagel, the basic outline of the bodhisattva practice in the "Bodhisattvapiṭaka" is given in a passage which states "the path to enlightenment comprises benevolence towards all sentient beings, striving after the perfections and compliance with the means of conversion." This path begins with contemplating the failures of samsara, developing faith in the Buddha, giving rise to bodhicitta and practicing the four immeasurables. It then proceeds through all six perfections and finally discusses the four means of converting sentient beings ("saṃgrahavastu"). The path is presented through prose exposition, mnemonic lists ("matrka") and also through Jataka narratives. Using this general framework, the "Bodhisattvapiṭaka" incorporates discussions related to other practices including super knowledge ("abhijñā"), learning, 'skill' ("kauśalya"), accumulation of merit ("puṇyasaṃbhāra"), the thirty-seven factors of awakening ("bodhipakṣadharmas"), perfect mental quietude ("śamatha") and insight ("vipaśyanā").
Later Mahayana treatises ("śāstras") like the "Bodhisattvabhumi" and the "Mahāyānasūtrālamkāra" provide systematic schemas of bodhisattva practices built around the perfections. The first six perfections ("pāramitās") are the most significant and popular set of bodhisattva virtues, and thus they serve as a central framework for bodhisattva practice. They are the most widely taught and commented upon virtues throughout the history of Mahayana Buddhist literature and feature prominently in major Sanskrit sources such as the "Bodhisattvabhumi", the "Mahāyānasūtrālamkāra", the "King of Samadhis Sutra" and the "Ten Stages Sutra". They are extolled and praised by these sources as "the great oceans of all the bright virtues and auspicious principles" ("Bodhisattvabhumi") and "the Teacher, the Way and the Light...the Refuge and the Shelter, the Support and the Sanctuary" ("Aṣṭasāhasrikā"). While many Mahayana sources discuss the bodhisattva's training in ethical discipline ("śīla") in classic Buddhist terms, over time there also developed specific sets of ethical precepts for bodhisattvas (Skt. "bodhisattva-śīla"). These various sets of precepts are usually taken by bodhisattva aspirants (lay and ordained monastics) along with classic Buddhist pratimoksha precepts. However, in some Japanese Buddhist traditions, monastics rely solely on the bodhisattva precepts. The perfection of wisdom ("prajñāpāramitā") is generally seen as the most important and primary of the perfections, without which all the others fall short. Thus, the "Madhyamakavatara" (6:2) states that wisdom leads the other perfections as a man with eyes leads the blind. This perfect or transcendent wisdom has various qualities, such as being non-attached ("asakti"), non-conceptual and non-dual ("advaya") and signless ("animitta"). It is generally understood as a kind of insight into the true nature of all phenomena ("dharmas"), which in Mahayana sutras is widely described as emptiness ("shunyatā"). Another key virtue which the bodhisattva must develop is great compassion ("mahā-karuṇā"), a vast sense of care aimed at ending the suffering of all sentient beings. This great compassion is the ethical foundation of the bodhisattva, and it is also an applied aspect of their bodhicitta. Great compassion must also be closely joined with the perfection of wisdom, which reveals that all the beings that the bodhisattva strives to save are ultimately empty of self ("anātman") and lack inherent existence ("niḥsvabhāva"). Due to the bodhisattva's compassionate wish to save all beings, they develop innumerable skillful means or strategies ("upaya") with which to teach and guide different kinds of beings with all sorts of different inclinations and tendencies. Another key virtue for the bodhisattva is mindfulness ("smṛti"), which Dayal calls "the sine qua non of moral progress for a bodhisattva." Mindfulness is widely emphasized by Buddhist authors and Sanskrit sources, and it appears four times in the list of 37 "bodhipakṣadharmas". According to the "Aṣṭasāhasrikā", a bodhisattva must never lose mindfulness so as not to be confused or distracted. The "Mahāyānasūtrālamkāra" states that mindfulness is the principal asset of a bodhisattva, while both Asvaghosa and Shantideva state that without mindfulness, a bodhisattva will be helpless and uncontrolled (like a mad elephant) and will not succeed in conquering the mental afflictions. Length and nature of the path.
Just as with non-Mahayana sources, Mahayana sutras generally depict the bodhisattva path as a long path that takes many lifetimes across many aeons. Some sutras state that a beginner bodhisattva could take anywhere from 3 to 22 countless aeons ("mahāsaṃkhyeya kalpas") to become a Buddha. The "Mahāyānasaṃgraha" of Asanga states that the bodhisattva must cultivate the six paramitas for three incalculable aeons ("kalpāsaṃkhyeya"). Shantideva meanwhile states that bodhisattvas must practice each perfection for sixty aeons or kalpas and also declares that a bodhisattva must practice the path for an "inconceivable" ("acintya") number of kalpas. Thus, the bodhisattva path could take many billions upon billions of years to complete. Later developments in Indian and Asian Mahayana Buddhism (especially in Vajrayana or tantric Buddhism) led to the idea that certain methods and practices could substantially shorten the path (and even lead to Buddhahood in a single lifetime). In Pure Land Buddhism, an aspirant might go to a Buddha's pure land or buddha-field ("buddhakṣetra"), like Sukhavati, where they can study the path directly with a Buddha. This could significantly shorten the length of the path, or at least make it more bearable. East Asian Pure Land Buddhist traditions, such as Jōdo-shū and Jōdo Shinshū, hold the view that realizing Buddhahood through the long bodhisattva path of the perfections is no longer practical in the current age (which is understood as a degenerate age called "mappo"). Thus, they rely on the salvific power of Amitabha to bring Buddhist practitioners to the pure land of Sukhavati, where they will better be able to practice the path. This view is rejected by other schools such as Tendai, Shingon and Zen. The founders of Tendai and Shingon, Saicho and Kukai, held that anyone who practiced the path properly could reach awakening in this very lifetime. Buddhist schools like Tiantai, Huayan, Chan and the various Vajrayāna traditions maintain that they teach ways to attain Buddhahood within one lifetime. Some early depictions of the bodhisattva path in texts such as the "Ugraparipṛcchā Sūtra" describe it as an arduous, difficult monastic path suited only for the few, which is nevertheless the most glorious path one can take. Three kinds of bodhisattvas are mentioned: the forest, city, and monastery bodhisattvas, with forest dwelling promoted as a superior, even necessary path in sutras such as the "Ugraparipṛcchā" and the "Samadhiraja" sutras. The early "Rastrapalapariprccha sutra" also promotes a solitary life of meditation in the forests, far away from the distractions of the householder life. The "Rastrapala" is also highly critical of monks living in monasteries and in cities, who are seen as not practicing meditation and morality. The "Ratnagunasamcayagatha" also says the bodhisattva should undertake ascetic practices ("dhūtaguṇa"), "wander freely without a home", practice the paramitas and train under a guru in order to perfect his meditation practice and realization of "prajñaparamita". The twelve "dhūtaguṇas" are also promoted by the "King of Samadhis Sutra", the "Ten Stages Sutra" and Shantideva. Some scholars have used these texts to argue for "the forest hypothesis", the theory that the initial bodhisattva ideal was associated with a strict forest asceticism.
But other scholars point out that many other Mahayana sutras do not promote this ideal, and instead teach "easy" practices like memorizing, reciting, teaching and copying Mahayana sutras, as well as meditating on Buddhas and bodhisattvas (and reciting or chanting their names). Ulrich Pagel also notes that in numerous sutras found in the "Mahāratnakūṭa" collection, the bodhisattva ideal is placed "firmly within the reach of non-celibate layfolk." Nirvana. Related to the different views on the different types of "yanas" or vehicles is the question of a bodhisattva's relationship to nirvāṇa. In the various Mahāyāna texts, two theories can be discerned. One view is the idea that a bodhisattva must postpone their awakening until full Buddhahood is attained (at which point one ceases to be reborn, which is the classical view of nirvāṇa). This view is promoted in some sutras like the "Pañcavimsatisahasrika-prajñaparamita-sutra". The idea is also found in the "Laṅkāvatāra Sūtra", which mentions that bodhisattvas take the following vow: "I shall not enter into final nirvana before all beings have been liberated." Likewise, the "Śikṣāsamuccaya" states "I must lead all beings to Liberation. I will stay here till the end, even for the sake of one living soul." The second theory is the idea that there are two kinds of nirvāṇa, the nirvāṇa of an arhat and a superior type of nirvāṇa called "apratiṣṭhita" ("non-abiding") that allows a Buddha to remain engaged in the samsaric realms without being affected by them. This attainment was understood as a kind of non-dual state in which one is neither limited to samsara nor nirvana. A being who has reached this kind of nirvana is not restricted from manifesting in the samsaric realms, and yet they remain fully detached from the defilements found in these realms (and thus they can help others). This doctrine of non-abiding nirvana developed in the Yogacara school. As noted by Paul Williams, the idea of "apratiṣṭhita nirvāṇa" may have taken some time to develop and is not obvious in some of the early Mahāyāna literature; therefore, while earlier sutras may sometimes speak of "postponement", later texts saw no need to postpone the "superior" "apratiṣṭhita nirvāṇa". In this Yogacara model, the bodhisattva definitely rejects and avoids the liberation of the "śravaka" and "pratyekabuddha", described in Mahāyāna literature as either inferior or "hina" (as in Asaṅga's fourth century "Yogācārabhūmi") or as ultimately false or illusory (as in the "Lotus Sūtra"). That a bodhisattva has the option to pursue such a lesser path, but instead chooses the long path towards Buddhahood, is one of the five criteria for one to be considered a bodhisattva. The other four are: being human, being a man, making a vow to become a Buddha in the presence of a previous Buddha, and receiving a prophecy from that Buddha. Over time, a more varied analysis of bodhisattva careers developed, focused on one's motivation. This can be seen in the Tibetan Buddhist teaching on three types of motivation for generating bodhicitta. According to Patrul Rinpoche's 19th-century "Words of My Perfect Teacher" ("Kun bzang bla ma'i gzhal lung"), a bodhisattva might be motivated in one of three ways: like a king, who first attains Buddhahood and then helps the beings under his care; like a boatman, who arrives at the far shore of awakening together with the beings he ferries; or like a shepherd, who drives all beings ahead of him and seeks to bring them to Buddhahood before himself. These three are not types of people, but rather types of motivation.
According to Patrul Rinpoche, the third quality of intention is most noble, though the mode by which Buddhahood occurs is the first; that is, it is only possible to teach others the path to enlightenment once one has attained enlightenment oneself. Bodhisattva stages. According to James B. Apple, if one studies the earliest textual materials which discuss the bodhisattva path (which includes the translations of Lokakshema and the Gandharan manuscripts), "one finds four key stages that are demarcated throughout this early textual material that constitute the most basic elements in the path of a bodhisattva". Similar schemas of stages appear in individual Mahāyāna sources. According to Drewes, the "Aṣṭasāhasrikā Prajñāpāramitā Sūtra" divides the bodhisattva path into three main stages. The first stage is that of bodhisattvas who "first set out in the vehicle" ("prathamayānasaṃprasthita"), then there is the "irreversible" ("avinivartanīya") stage, and finally the third, "bound by one more birth" ("ekajātipratibaddha"), as in, destined to become a Buddha in the next life. Lamotte also mentions four similar stages of the bodhisattva career which are found in the "Dazhidulun" translated by Kumarajiva: (1) "Prathamacittotpādika" ("who produces the mind of Bodhi for the first time"), (2) "Ṣaṭpāramitācaryāpratipanna" ("devoted to the practice of the six perfections"), (3) "Avinivartanīya" (non-regression), (4) "Ekajātipratibaddha" ("separated by only one lifetime from buddhahood"). Drewes notes that Mahāyāna sūtras mainly depict a bodhisattva's first arising of bodhicitta as occurring in the presence of a Buddha. Furthermore, according to Drewes, most Mahāyāna sūtras "never encourage anyone to become a bodhisattva or present any ritual or other means of doing so." In a similar manner to the nikāya sources, Mahāyāna sūtras also see new bodhisattvas as likely to regress, while seeing irreversible bodhisattvas as quite rare. Thus, according to Drewes, "the "Aṣṭasāhasrikā", for instance, states that as many bodhisattvas as there are grains of sand in the Ganges turn back from the pursuit of Buddhahood and that out of innumerable beings who give rise to bodhicitta and progress toward Buddhahood, only one or two will reach the point of becoming irreversible." Drewes also adds that early texts like the "Aṣṭasāhasrikā" treat bodhisattvas who are beginners ("ādikarmika") or "not long set out in the [great] vehicle" with scorn, describing them as "blind", "unintelligent", "lazy" and "weak". Early Mahayana works identify them with those who reject Mahayana or who abandon Mahayana, and they are seen as likely to become "śrāvakas" (those on the "arhat" path). Rather than encouraging them to become bodhisattvas, what early Mahayana sutras like the "Aṣṭa" do is to help individuals determine if they have already received a prediction in a past life, or if they are close to this point. The "Aṣṭa" provides a variety of methods, including forms of ritual or divination, methods dealing with dreams and various tests, especially tests based on one's reaction to the hearing of the content in the "Aṣṭasāhasrikā" itself. The text states that encountering and accepting its teachings mean one is close to being given a prediction and that if one does not "shrink back, cower or despair" from the text, but "firmly believes it", one is either irreversible or is close to this stage.
Many other Mahayana sutras, such as the "Akṣobhyavyūha", "Vimalakīrtinirdeśa", "Sukhāvatīvyūha", and the "Śūraṃgamasamādhi Sūtra", present textual approaches to determine one's status as an advanced bodhisattva. These mainly depend on a person's attitude towards listening to, believing, preaching, proclaiming, copying or memorizing and reciting the sutra, as well as practicing the sutra's teachings. According to Drewes, this claim that merely having faith in Mahāyāna sūtras meant that one was an advanced bodhisattva was a departure from previous Nikaya views about bodhisattvas. It created new groups of Buddhists who accepted each other's bodhisattva status. Some Mahayana texts are more open with their bodhisattva doctrine. The "Lotus Sutra" famously assures large numbers of people that they will certainly achieve Buddhahood, with few requirements (other than hearing and accepting the "Lotus Sutra" itself). Avaivartika (non-retrogression). The term "avaivartika" refers to the stage in Buddhist practice where a practitioner reaches a point of irreversibility, ensuring that they will not regress in their spiritual progress. Alternative Sanskrit forms include "avivartika", "avinivartanīya" and "avaivartyabhūmi". Attaining this state guarantees that the practitioner remains steadfast on the path to enlightenment and will not abandon their aspirations or regress to a lower stage of realization. Within the framework of the bodhisattva path, various Buddhist scriptures identify different stages at which non-retrogression is attained. Some sources associate it with the path of preparation ("prayogamārga"), where a bodhisattva solidifies their commitment and will no longer turn back to pursue the path of an arhat. Others link it to the first "bhūmi" (stage) of the bodhisattva path or, in later systematic presentations, to the eighth "bhūmi", after which full Buddhahood becomes inevitable. The concept of "avaivartika" appears in early Mahāyāna texts such as the "Mahāprajñāpāramitāśāstra", which distinguishes between bodhisattvas who are prone to regression ("vaivartika") and those who are not ("avaivartika"). True bodhisattvas are those who have transcended the possibility of falling back, while those who remain susceptible to regression are considered bodhisattvas only in a nominal sense. The "Aṣṭasāhasrikā Prajñāpāramitā Sūtra", particularly in its early Chinese translation by Lokakṣema, emphasizes "avaivartika" as a pivotal attainment. It describes how the bodhisattva, upon reaching the state of "anutpattikadharmakṣānti" (the realization of the unborn nature of phenomena), becomes irreversible in their journey toward complete enlightenment. Unlike later Mahāyāna texts, which integrate this stage within the structured "bhūmi" system, Lokakṣema's version presents it more fluidly, portraying the "avaivartin" as one of a few key categories of bodhisattvas. In Pure Land traditions, rebirth in Amitābha Buddha's Pure Land ("Sukhāvatī") is equated with entering the stage of non-retrogression. It is believed that those who attain birth in Sukhāvatī are assured of progressing toward enlightenment without the risk of falling back into lower states of existence. The attainment of "avaivartika" is often associated with the bodhisattva's ability to inspire and lead countless beings toward liberation. Some texts suggest that a bodhisattva's non-retrogression is linked to prior predictions ("vyākaraṇa") made by past Buddhas, affirming their inevitable attainment of supreme enlightenment.
Moreover, while later traditions integrate skillful means ("upāyakauśalya") as a defining trait of the "avaivartin", early texts such as Lokakṣema's "Aṣṭa" emphasize avoiding complacency in meditative absorption, which could lead to an arhat-like state rather than the full Buddhahood sought by bodhisattvas. Bhūmis (stages). According to various Mahāyāna sources, on the way to becoming a Buddha, a bodhisattva proceeds through various stages ("bhūmis") of spiritual progress. The term "bhūmi" means "earth" or "place" and figuratively can mean "ground, plane, stage, level; state of consciousness". There are various lists of bhumis, the most common being a list of ten found in the "Daśabhūmikasūtra" (but there are also lists of seven stages as well as lists which have more than 10 stages). The "Daśabhūmikasūtra" lists the following ten stages: (1) Pramuditā (the joyous), (2) Vimalā (the stainless), (3) Prabhākarī (the luminous), (4) Arciṣmatī (the radiant), (5) Sudurjayā (the difficult to conquer), (6) Abhimukhī (the one facing forward), (7) Dūraṃgamā (the far-going), (8) Acalā (the immovable), (9) Sādhumatī (the one of good intelligence), and (10) Dharmameghā (the cloud of Dharma). In some sources, these ten stages are correlated with a different schema of the Buddhist path called the five paths, which is derived from Vaibhasika Abhidharma sources. The "Śūraṅgama Sūtra" recognizes 57 stages. Various Vajrayāna schools recognize additional grounds (varying from 3 to 10 further stages), mostly 6 more grounds with variant descriptions. A bodhisattva above the 7th ground is called a "mahāsattva". Some bodhisattvas such as Samantabhadra are also said to have already attained Buddhahood. Sōtō Zen. As part of the Sōtō Zen school of Mahāyāna, Dōgen Zenji described Four Exemplary Acts of a Bodhisattva: giving ("fuse"), loving speech ("aigo"), beneficial action ("rigyō"), and cooperation or identification with others ("dōji"). Mahayana bodhisattvas. Buddhists (especially Mahayanists) venerate several bodhisattvas (such as Maitreya, Manjushri and Avalokiteshvara) who are seen as highly spiritually advanced (having attained the tenth bhumi) and thus possessing immense magical power. According to Lewis Lancaster, these "celestial" or "heavenly" bodhisattvas are seen as "either the manifestations of a Buddha or they are beings who possess the power of producing many bodies through great feats of magical transformation." The religious devotion to these bodhisattvas probably first developed in north India, and they are widely depicted in Gandharan and Kashmiri art. In Asian art, they are typically depicted as princes and princesses, with royal robes and jewellery (since they are the princes of the Dharma). In Buddhist art, a bodhisattva is often described as a beautiful figure with a serene expression and graceful manner. This is probably in accordance with the description of Prince Siddhārtha Gautama as a bodhisattva. The depiction of bodhisattvas in Buddhist art around the world aspires to express the bodhisattva's qualities such as loving-kindness ("metta"), compassion ("karuna"), empathetic joy ("mudita") and equanimity ("upekkha"). Literature which glorifies such bodhisattvas and recounts their various miracles remains very popular in Asia. One example of such a work of literature is "More Records of Kuan-shih-yin's Responsive Manifestations" by Lu Kao (459–532), which was very influential in China. In Tibetan Buddhism, the "Maṇi Kambum" is a similarly influential text (a revealed text, or terma) which focuses on Chenrezig (Avalokiteshvara, who is seen as the country's patron bodhisattva) and his miraculous activities in Tibet. These celestial bodhisattvas like Avalokiteshvara (Guanyin) are also seen as compassionate savior figures, constantly working for the good of all beings. The Avalokiteshvara chapter of the "Lotus Sutra" even states that calling Avalokiteshvara to mind can help save someone from natural disasters, demons, and other calamities.
It is also supposed to protect one from the afflictions (lust, anger and ignorance). Bodhisattvas can also transform themselves into whatever physical form is useful for helping sentient beings (a god, a bird, a male or female, even a Buddha). Because of this, bodhisattvas are seen as beings that one can pray to for aid and consolation amid the sufferings of everyday life, as well as for guidance on the path to enlightenment. Thus, the great translator Xuanzang is said to have constantly prayed to Avalokiteshvara for protection on his long journey to India. Eight main Bodhisattvas. In the later Indian Vajrayana tradition, there arose a popular grouping of eight bodhisattvas known as the "Eight Great Bodhisattvas", or "Eight Close Sons" (Skt. "aṣṭa utaputra"; Tib. "nyewé sé gyé"), who are seen as the most important Mahayana bodhisattvas and appear in numerous esoteric mandalas (e.g. the Garbhadhatu mandala). These same "Eight Great Bodhisattvas" (Chn. "Bādà Púsà", Jp. "Hachi Daibosatsu") also appear in East Asian Esoteric Buddhist sources, such as "The Sutra on the Maṇḍalas of the Eight Great Bodhisattvas" (八大菩薩曼荼羅經), translated by Amoghavajra in the 8th century and by Faxian in the 10th century. While there are numerous lists of Eight Great Bodhisattvas, the most widespread or "standard" listing is: Mañjuśrī, Avalokiteśvara, Vajrapāṇi, Maitreya, Kṣitigarbha, Ākāśagarbha, Sarvanivāraṇaviṣkambhin and Samantabhadra. Female bodhisattvas. The bodhisattva Prajñāpāramitā-devi is a female personification of the perfection of wisdom and the "Prajñāpāramitā sutras". She became an important figure, widely depicted in Indian Buddhist art. Guanyin (Jp: Kannon), a female form of Avalokiteshvara, is the most widely revered bodhisattva in East Asian Buddhism, generally depicted as a motherly figure. Guanyin is venerated in various other forms and manifestations, including Cundī, Cintāmaṇicakra, Hayagriva, the Eleven-Headed Thousand-Armed Guanyin and Guanyin of the Southern Seas, among others. Gender-variant representations of some bodhisattvas, most notably Avalokiteśvara, have prompted discussion regarding the nature of a bodhisattva's appearance. Chan master Sheng Yen has stated that Mahāsattvas such as Avalokiteśvara (known as Guanyin in Chinese) are androgynous (Ch. 中性; pinyin: "zhōngxìng"), which accounts for their ability to manifest in masculine and feminine forms of various degrees. In Tibetan Buddhism, Tara or Jetsun Dölma ("rje btsun sgrol ma") is the most important female bodhisattva. Numerous Mahayana sutras feature female bodhisattvas as main characters and discuss their life, teachings and future Buddhahood. These include "The Questions of the Girl Vimalaśraddhā" (Tohoku Kangyur, Toh 84), "The Questions of Vimaladattā" (Toh 77), "The Lion's Roar of Śrīmālādevī" (Toh 92), "The Inquiry of Lokadhara" (Toh 174), "The Sūtra of Aśokadattā's Prophecy" (Toh 76), "The Questions of Vimalaprabhā" (Toh 168), "The Sūtra of Kṣemavatī's Prophecy" (Toh 192), "The Questions of the Girl Sumati" (Toh 74), "The Questions of Gaṅgottara" (Toh 75), "The Questions of an Old Lady" (Toh 171), "The Miraculous Play of Mañjuśrī" (Toh 96), and "The Sūtra of the Girl Candrottarā's Prophecy" (Toh 191). Popular figures. Over time, numerous historical Buddhist figures also came to be seen as bodhisattvas in their own right, deserving of devotion. For example, an extensive hagiography developed around Nagarjuna, the Indian founder of the madhyamaka school of philosophy. Followers of Tibetan Buddhism consider the Dalai Lamas and the Karmapas to be emanations of Chenrezig, the Bodhisattva of Compassion. 
Various Japanese Buddhist schools consider their founding figures, such as Kukai and Nichiren, to be bodhisattvas. In Chinese Buddhism, various historical figures have been called bodhisattvas. Furthermore, various Hindu deities are considered to be bodhisattvas in Mahayana Buddhist sources. For example, in the "Kāraṇḍavyūhasūtra", Vishnu, Shiva, Brahma and Saraswati are said to be bodhisattvas, all emanations of Avalokiteshvara. Deities like Saraswati (Chinese: "Biàncáitiān", 辯才天, Japanese: Benzaiten) and Shiva (C: "Dàzìzàitiān", 大自在天; J: Daikokuten) are still venerated as bodhisattva devas and dharmapalas (guardian deities) in East Asian Buddhism. Both figures are closely connected with Avalokiteshvara. In a similar manner, the Hindu deity Harihara is called a bodhisattva in the famed "Nīlakaṇṭha Dhāraṇī," which states: "O Effulgence, World-Transcendent, come, oh Hari, the great bodhisattva." The empress Wu Zetian of the Tang dynasty was the only woman to rule China as emperor in her own right. She used the growing popularity of Esoteric Buddhism in China for her own political ends, ruling under the title of "Holy Emperor" and claiming to be a bodhisattva. Though she was not the only ruler to have made such a claim, the political utility of her claim, coupled with her apparent sincerity, makes her a notable example. She built several temples, contributed to the completion of the Longmen Caves, and patronised Buddhism over Confucianism and Daoism. She became one of China's most influential rulers. Others. Other important bodhisattvas in Mahayana Buddhism include: Fierce bodhisattvas. While bodhisattvas tend to be depicted as conventionally beautiful, there are instances of their manifestation as fierce and monstrous-looking beings. A notable example is Guanyin's manifestation as a preta named "Flaming Face". This trope is commonly employed among the Wisdom Kings, among whom Mahāmāyūrī Vidyārājñī stands out with a feminine title and benevolent expression. In some depictions, her mount takes on a wrathful appearance. This variation is also found among images of Vajrapani. In Tibetan Buddhism, fierce manifestations (Tibetan: "trowo") of the major bodhisattvas are quite common, and they often act as protector deities. Sacred places. The place of a bodhisattva's earthly deeds, such as the achievement of enlightenment or the acts of Dharma, is known as a "bodhimaṇḍa" (place of awakening), and may be a site of pilgrimage. Many temples and monasteries are famous as bodhimaṇḍas. Perhaps the most famous bodhimaṇḍa of all is the Bodhi Tree under which Śākyamuni achieved Buddhahood. There are also sacred places of awakening for bodhisattvas located throughout the Buddhist world. Mount Potalaka, a sacred mountain in India, is traditionally held to be Avalokiteshvara's bodhimaṇḍa. In Chinese Buddhism, there are four mountains that are regarded as bodhimaṇḍas for bodhisattvas, with each site having major monasteries and being popular for pilgrimages by both monastics and laypeople. These four sacred places are: Mount Wutai (the bodhimaṇḍa of Mañjuśrī), Mount Emei (Samantabhadra), Mount Putuo (Avalokiteśvara/Guanyin) and Mount Jiuhua (Kṣitigarbha). In Theravada Buddhism. While the veneration of bodhisattvas is much more widespread and popular in the Mahayana Buddhist world, it is also found in Theravada Buddhist regions. Bodhisattvas who are venerated in Theravada lands include Natha Deviyo (Avalokiteshvara), Metteya (Maitreya), Upulvan (i.e. Vishnu), Saman (Samantabhadra) and Pattini. The veneration of some of these figures may have been influenced by Mahayana Buddhism. 
These figures are also understood as devas that have converted to Buddhism and have sworn to protect it. The recounting of Jataka tales, which discuss the bodhisattva deeds of Gautama before his awakening (i.e. during his past lives as a bodhisatta), also remains a popular practice. Etymology. The etymology of the Indic terms bodhisattva and bodhisatta is not fully understood. The term bodhi is uncontroversial and means "awakening" or "enlightenment" (from the root "budh-"). The second part of the compound has many possible meanings or derivations, including: "sattva" in the sense of "being", giving "a being (oriented towards) awakening"; "sattva" in the sense of "spirit, courage or strength", giving "one whose strength is directed towards bodhi"; or the Middle Indic "satta" understood as Sanskrit "sakta" ("attached to, intent on"), giving "one who is attached to, or intent on, awakening".
3969
49858626
https://en.wikipedia.org/wiki?curid=3969
Buckingham Palace
Buckingham Palace is a royal residence in London, and the administrative headquarters of the monarch of the United Kingdom. Located in the City of Westminster, the palace is often at the centre of state occasions and royal hospitality. It has been a focal point for the British people at times of national rejoicing and mourning. Originally known as Buckingham House, the building at the core of today's palace was a large townhouse built for the Duke of Buckingham and Normanby in 1703 on a site that had been in private ownership for at least 150 years. It was acquired by George III in 1761 as a private residence for Queen Charlotte and became known as The Queen's House. During the 19th century it was enlarged by architects John Nash and Edward Blore, who constructed three wings around a central courtyard. Buckingham Palace became the London residence of the British monarch on the accession of Queen Victoria in 1837. The last major structural additions were made in the late 19th and early 20th centuries, including the East Front, which contains the balcony on which the royal family traditionally appears to greet crowds. A German bomb destroyed the palace chapel during the Second World War; the King's Gallery was built on the site and opened to the public in 1962 to exhibit works of art from the Royal Collection. The original early-19th-century interior designs, many of which survive, include widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Charles Long. King Edward VII oversaw a partial redecoration in a Belle Époque cream and gold colour scheme. Many smaller reception rooms are furnished in the Chinese regency style with furniture and fittings brought from the Royal Pavilion at Brighton and from Carlton House. The palace has 775 rooms, and the garden is the largest private garden in London. The state rooms, used for official and state entertaining, are open to the public each year for most of August and September and on some days in winter and spring. History. Pre-1624. In the Middle Ages, the site of the future palace formed part of the Manor of Ebury (also called Eia). The marshy ground was watered by the river Tyburn, which still flows below the courtyard and south wing of the palace. Where the river was fordable (at Cow Ford), the village of Eye Cross grew. Ownership of the site changed hands many times; owners included Edward the Confessor and Edith of Wessex in late Saxon times, and, after the Norman Conquest, William the Conqueror. William gave the site to Geoffrey de Mandeville, who bequeathed it to the monks of Westminster Abbey. In 1531, Henry VIII acquired the Hospital of St James, which became St James's Palace, from Eton College, and in 1536 he took the Manor of Ebury from Westminster Abbey. These transfers brought the site of Buckingham Palace back into royal hands for the first time since William the Conqueror had given it away almost 500 years earlier. Various owners leased it from royal landlords, and the freehold was the subject of frenzied speculation during the 17th century. By then, the old village of Eye Cross had long since fallen into decay, and the area was mostly wasteland. Needing money, James VI and I sold off part of the Crown freehold but retained part of the site on which he established a mulberry garden for the production of silk. (This is at the north-west corner of today's palace.) Clement Walker in "Anarchia Anglicana" (1649) refers to "new-erected sodoms and spintries at the Mulberry Garden at S. 
James's"; this suggests it may have been a place of debauchery. Eventually, in the late 17th century, the freehold was inherited from the property tycoon Hugh Audley by the great heiress Mary Davies. First houses on the site (1624–1761). Possibly the first house erected within the site was that of William Blake, around 1624. The next owner was George Goring, 1st Earl of Norwich, who from 1633 extended Blake's house, which came to be known as Goring House, and developed much of today's garden, then known as Goring Great Garden. He did not, however, obtain the freehold interest in the mulberry garden. Unbeknown to Goring, in 1640 the document "failed to pass the Great Seal before Charles I fled London, which it needed to do for legal execution". It was this critical omission that would help the British royal family regain the freehold under George III. When the improvident Goring defaulted on his rents, Henry Bennet, 1st Earl of Arlington, was able to purchase the lease of Goring House, and he was occupying it when it burned down in 1674, following which he constructed Arlington House on the site – the location of the southern wing of today's palace – the next year. In 1698, John Sheffield acquired the lease. He later became the first Duke of Buckingham and Normanby. Buckingham House was built for Sheffield in 1703 to the design of William Winde. The style chosen was of a large, three-floored central block with two smaller flanking service wings. It was eventually sold by Buckingham's illegitimate son, Charles Sheffield, in 1761 to George III for £21,000. Sheffield's leasehold on the mulberry garden site, the freehold of which was still owned by the royal family, was due to expire in 1774. From Queen's House to palace (1761–1837). Under the new royal ownership, the building was originally intended as a private retreat for Queen Charlotte, and was accordingly known as The Queen's House. Remodelling of the structure began in 1762. In 1775, an Act of Parliament settled the property on Queen Charlotte, in exchange for her rights to nearby Old Somerset House, and 14 of her 15 children were born there. Some furnishings were transferred from Carlton House, and others had been bought in France after the French Revolution of 1789. While St James's Palace remained the official and ceremonial royal residence, the name "Buckingham Palace" was used from at least 1791. After his accession to the throne in 1820, George IV continued the renovation, intending to create a small, comfortable home. However, in 1826, while the work was in progress, the King decided to modify the house into a palace with the help of his architect John Nash. The façade was designed with George IV's preference for French neoclassical architecture in mind. The cost of the renovations grew dramatically, and by 1829 the extravagance of Nash's designs had resulted in his removal as the architect. On the death of George IV in 1830, his younger brother William IV hired Edward Blore to finish the work. William never moved into the palace, preferring Clarence House, which had been built to his specifications and which had been his London home while he was heir presumptive. After the Palace of Westminster was destroyed by fire in 1834, he offered to convert Buckingham Palace into a new Houses of Parliament, but his offer was declined. Queen Victoria (1837–1901). Buckingham Palace became the principal royal residence in 1837, on the accession of Queen Victoria, who was the first monarch to reside there. 
While the state rooms were a riot of gilt and colour, the necessities of the new palace were somewhat less luxurious. The chimneys were said to smoke so much that the fires had to be allowed to die down, and consequently the palace was often cold. Ventilation was so bad that the interior smelled, and when it was decided to install gas lamps, there was a serious worry about the build-up of gas on the lower floors. It was also said that the staff were lax and lazy and the palace was dirty. Following the Queen's marriage in 1840, her husband, Prince Albert, concerned himself with a reorganisation of the household offices and staff, and with addressing the design faults of the palace. By the end of 1840, all the problems had been rectified. However, the builders were to return within a decade. By 1847, the couple found the palace too small for court life and their growing family, and a new wing, designed by Edward Blore, was built by Thomas Cubitt, enclosing the central quadrangle. The work, carried out from 1847 to 1849, was paid for by the sale of Brighton Pavilion in 1850. The large East Front, facing The Mall, is today the "public face" of Buckingham Palace and contains the balcony from which the royal family acknowledge the crowds on momentous occasions and after the annual Trooping the Colour. The ballroom wing and a further suite of state rooms were also built in this period, designed by Nash's student James Pennethorne. Before Prince Albert's death, in addition to royal ceremonies, investitures and presentations, Buckingham Palace was frequently the scene of lavish costume balls and musical entertainments. The most celebrated contemporary musicians entertained there; for example, Felix Mendelssohn is known to have played there on three occasions, and Johann Strauss II and his orchestra played there when in England. Widowed in 1861, the grief-stricken Queen withdrew from public life and left Buckingham Palace to live at Windsor Castle, Balmoral Castle and Osborne House. For many years the palace was seldom used, even neglected. In 1864, a note was found pinned to the fence, saying: "These commanding premises to be let or sold, in consequence of the late occupant's declining business." Eventually, public opinion persuaded the Queen to return to London, though even then she preferred to live elsewhere whenever possible. Court functions were still held at Windsor Castle, presided over by the sombre Queen habitually dressed in mourning black, while Buckingham Palace remained shuttered for most of the year. Early 20th century (1901–1945). In 1901, the new king, Edward VII, began redecorating the palace. He and his wife, Queen Alexandra, had always been at the forefront of London high society, and their friends, known as "the Marlborough House Set", were considered to be the most eminent and fashionable of the age. Buckingham Palace—the Ballroom, Grand Entrance, Marble Hall, Grand Staircase, vestibules and galleries were redecorated in the Belle Époque cream and gold colour scheme they retain today—once again became a setting for entertaining on a majestic scale, though some felt that Edward's heavy redecorations were at odds with Nash's original work. The last major building work took place during the reign of George V when, in 1913, Aston Webb redesigned Blore's 1850 East Front to resemble in part Giacomo Leoni's Lyme Park in Cheshire. 
This new refaced principal façade (of Portland stone) was designed to be the backdrop to the Victoria Memorial, a large memorial statue of Queen Victoria created by the sculptor Thomas Brock, erected outside the main gates on a surround constructed by the architect Aston Webb. George V, who had succeeded Edward VII in 1910, had a more serious personality than his father; greater emphasis was now placed on official entertainment and royal duties than on lavish parties. He arranged a series of command performances featuring jazz musicians such as the Original Dixieland Jazz Band (1919; the first jazz performance for a head of state), Sidney Bechet and Louis Armstrong (1932), which earned the palace a nomination in 2009 for a (Kind of) Blue Plaque by the Brecon Jazz Festival as one of the venues making the greatest contribution to jazz music in the United Kingdom. During the First World War, which lasted from 1914 until 1918, the palace escaped unscathed. Its more valuable contents were evacuated to Windsor, but the royal family remained in residence. The King imposed rationing at the palace, much to the dismay of his guests and household. To the King's later regret, David Lloyd George persuaded him to go further and ostentatiously lock the wine cellars and refrain from alcohol, to set a good example to the supposedly inebriated working class. The workers continued to imbibe, and the King was left unhappy at his enforced abstinence. George V's wife, Queen Mary, was a connoisseur of the arts and took a keen interest in the Royal Collection of furniture and art, both restoring and adding to it. Queen Mary also had many new fixtures and fittings installed, such as the pair of marble Empire-style chimneypieces by Benjamin Vulliamy, dating from 1810, in the ground floor Bow Room, the huge low room at the centre of the garden façade. Queen Mary was also responsible for the decoration of the Blue Drawing Room. This room, previously known as the South Drawing Room, has a ceiling designed by Nash, coffered with huge gilt console brackets. In 1938, the northwest pavilion, designed by Nash as a conservatory, was converted into a swimming pool. Second World War. During the Second World War, which broke out in 1939, the palace was bombed nine times. The most serious and publicised incident destroyed the palace chapel in 1940. One bomb fell in the palace quadrangle while George VI and Queen Elizabeth (the future Queen Mother) were in the palace; many windows were blown in and the chapel was destroyed. The King and Queen were filmed inspecting their bombed home, and the newsreel footage was shown in cinemas throughout the United Kingdom to show the common suffering of rich and poor. It was at this time the Queen famously declared: "I'm glad we have been bombed. Now I can look the East End in the face". On 15 September 1940, known as Battle of Britain Day, an RAF pilot, Ray Holmes of No. 504 Squadron, rammed a German Dornier Do 17 bomber he believed was going to bomb the palace. Holmes had run out of ammunition to shoot down the bomber and made the quick decision to ram it. He bailed out, and the bomber crashed into the forecourt of London Victoria station. Its engine was later exhibited at the Imperial War Museum in London. Holmes became a King's Messenger after the war and died at the age of 90 in 2005. On VE Day—8 May 1945—the palace was the centre of British celebrations. 
The King, the Queen, Princess Elizabeth (the future queen) and Princess Margaret appeared on the balcony, with the palace's blacked-out windows behind them, to cheers from a vast crowd in The Mall. The damaged palace was carefully restored after the war by John Mowlem & Co. Mid-20th century to present day. Many of the palace's contents are part of the Royal Collection; they can, on occasion, be viewed by the public at the King's Gallery, near the Royal Mews. The purpose-built gallery opened in 1962 and displays a changing selection of items from the collection. It occupies the site of the chapel that was destroyed in the Second World War. The palace was designated a Grade I listed building in 1970. Since 1993, its state rooms have been open to the public during August and September and on some dates throughout the year. The money raised in entry fees was originally put towards the rebuilding of Windsor Castle after the 1992 fire devastated many of its state rooms. In the year to 31 March 2017, 580,000 people visited the palace, and 154,000 visited the gallery. In 2004, the palace attempted to claim money from the community energy fund to heat the building, but the claim was rejected for fear of public backlash. The palace also has a history of racial discrimination in staffing. In 1968, Charles Tryon, 2nd Baron Tryon, acting as treasurer to Queen Elizabeth II, sought to exempt Buckingham Palace from full application of the Race Relations Act 1968. He stated that the palace did not hire people of colour for clerical jobs, only as domestic servants. He arranged with civil servants for an exemption that meant that complaints of racism against the royal household would be sent directly to the Home Secretary and kept out of the legal system. The palace, like Windsor Castle, is owned by the reigning monarch in right of the Crown. Occupied royal palaces are not part of the Crown Estate, nor are they the monarch's personal property, unlike Sandringham House and Balmoral Castle. The Government of the United Kingdom is responsible for maintaining the palace in exchange for the profits made by the Crown Estate. In 2015, the State Dining Room was closed for a year and a half because its ceiling had become potentially dangerous. A 10-year schedule of maintenance work, including new plumbing, wiring, boilers and radiators, and the installation of solar panels on the roof, has been estimated to cost £369 million and was approved by the prime minister in November 2016. It will be funded by a temporary increase in the Sovereign Grant paid from the income of the Crown Estate and is intended to extend the building's working life by at least 50 years. In 2017, the House of Commons backed funding for the project by 464 votes to 56. Buckingham Palace is a symbol and home of the British monarchy, an art gallery and a tourist attraction. Behind the gilded railings and gates that were completed by the Bromsgrove Guild in 1911 lies Webb's famous façade, which was described in a book published by the Royal Collection Trust as looking "like everybody's idea of a palace". It was not only a weekday home of Elizabeth II and Prince Philip but also the London residence and office of the Duke of York until 2023. Prince Edward, Duke of Edinburgh and Sophie, Duchess of Edinburgh continue to have a private apartment in the palace for use when they are in London. The palace also houses their offices, as well as those of the Princess Royal and Princess Alexandra, and is the workplace of more than 800 people. 
Charles III and Queen Camilla live at Clarence House while restoration work continues, although they conduct official business at Buckingham Palace, including audiences and receptions. Every year, some 50,000 invited guests are entertained at garden parties, receptions, audiences and banquets. Three garden parties are held in the summer, usually in July. The forecourt of Buckingham Palace is used for the Changing of the Guard, a major ceremony and tourist attraction (daily from April to July; every other day in other months). Interior. The front of the palace measures 108 metres across by 120 metres deep by 24 metres high, and the building contains over 77,000 square metres of floorspace. There are 775 rooms, including 188 staff bedrooms, 92 offices, 78 bathrooms, 52 principal bedrooms and 19 state rooms. It also has a post office, cinema, swimming pool, doctor's surgery, and jeweller's workshop. The royal family occupy a small suite of private rooms in the north wing. Principal rooms. The principal rooms are contained on the first-floor "piano nobile" behind the west-facing garden façade at the rear of the palace. The centre of this ornate suite of state rooms is the Music Room, its large bow the dominant feature of the façade. Flanking the Music Room are the Blue and the White Drawing Rooms. At the centre of the suite, serving as a corridor to link the state rooms, is the top-lit Picture Gallery. The Gallery is hung with numerous works including some by Rembrandt, van Dyck, Rubens and Vermeer; other rooms leading from the Picture Gallery are the Throne Room and the Green Drawing Room. The Green Drawing Room serves as a huge anteroom to the Throne Room, and is part of the ceremonial route to the throne from the Guard Room at the top of the Grand Staircase. The Guard Room contains white marble statues of Queen Victoria and Prince Albert, in Roman costume, set in a tribune lined with tapestries. These very formal rooms are used only for ceremonial and official entertaining but are open to the public every summer. Semi-state apartments. Directly underneath the state apartments are the less grand semi-state apartments. Opening from the Marble Hall, these rooms are used for less formal entertaining, such as luncheon parties and private audiences. At the centre of this floor is the Bow Room, through which thousands of guests pass annually to the monarch's garden parties. When paying a state visit to Britain, foreign heads of state are usually entertained by the monarch at Buckingham Palace. They are allocated an extensive suite of rooms known as the Belgian Suite, situated at the foot of the Minister's Staircase, on the ground floor of the west-facing Garden Wing. Some of the rooms are named and decorated for particular visitors, such as the 1844 Room, decorated in that year for the state visit of Nicholas I of Russia, and the 1855 Room, in honour of the visit of Napoleon III of France. The former is a sitting room that also serves as an audience room and is often used for personal investitures. Narrow corridors link the rooms of the suite; one of them is given extra height and perspective by saucer domes designed by Nash in the style of Soane. A second corridor in the suite has Gothic-influenced cross-over vaulting. The suite was named after Leopold I of Belgium, uncle of Queen Victoria and Prince Albert. In 1936, the suite briefly became the private apartments of the palace when Edward VIII occupied them. 
The original early-19th-century interior designs, many of which still survive, included widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Charles Long. Edward VII oversaw a partial redecoration in a Belle Époque cream and gold colour scheme. East wing. Between 1847 and 1850, when Blore was building the new east wing, the Brighton Pavilion was once again plundered of its fittings. As a result, many of the rooms in the new wing have a distinctly oriental atmosphere. The red and blue Chinese Luncheon Room is made up of parts of the Brighton Banqueting and Music Rooms with a large oriental chimneypiece designed by Robert Jones and sculpted by Richard Westmacott. It was formerly in the Music Room at the Brighton Pavilion. The ornate clock, known as the Kylin Clock, was made in Jingdezhen, Jiangxi Province, China, in the second half of the 18th century; it has a later movement by Benjamin Vulliamy, circa 1820. The Yellow Drawing Room has hand-painted Chinese wallpaper supplied in 1817 for the Brighton Saloon, and a chimneypiece which is a European vision of a Chinese chimneypiece. It has nodding mandarins in niches and fearsome winged dragons, designed by Robert Jones. At the centre of this wing is the famous balcony, with the Centre Room behind its glass doors. This is a Chinese-style saloon enhanced by Queen Mary, who, working with the designer Charles Allom, created a more "binding" Chinese theme in the late 1920s, although the lacquer doors were brought from Brighton in 1873. Running the length of the "piano nobile" of the east wing, along the eastern side of the quadrangle, is the Great Gallery, modestly known as the Principal Corridor. It has mirrored doors and mirrored cross walls reflecting porcelain pagodas and other oriental furniture from Brighton. The Chinese Luncheon Room and Yellow Drawing Room are situated at each end of this gallery, with the Centre Room in between. Court ceremonies. Investitures for the awarding of honours (which include the conferring of knighthoods by dubbing with a sword) usually take place in the palace's Throne Room. Investitures are conducted by the King or another senior member of the royal family: a military band plays in the musicians' gallery as recipients receive their honours, watched by their families and friends. State banquets take place in the Ballroom, built in 1854. At 36.6 metres long, 18 metres wide and 13.5 metres high, it is the largest room in the palace; at one end of the room is a throne dais (beneath a giant, domed velvet canopy, known as a "shamiana" or baldachin, that was used at the Delhi Durbar in 1911). State banquets are formal dinners held on the first evening of a state visit by a foreign head of state. On these occasions, for up to 170 guests in formal "white tie and decorations", including tiaras, the dining table is laid with the Grand Service, a collection of silver-gilt plate made in 1811 for the Prince of Wales, later George IV. The largest and most formal reception at Buckingham Palace takes place every November, when the King entertains members of the diplomatic corps. On this grand occasion, all the state rooms are in use, as the royal family proceed through them, beginning at the great north doors of the Picture Gallery. As Nash had envisaged, all the large, double-mirrored doors stand open, reflecting the numerous crystal chandeliers and sconces, creating a deliberate optical illusion of space and light. Smaller ceremonies such as the reception of new ambassadors take place in the "1844 Room". 
Here too, the King holds small lunch parties, and often meetings of the Privy Council. Larger lunch parties often take place in the curved and domed Music Room or the State Dining Room. Since the bombing of the palace chapel in World War II, royal christenings have sometimes taken place in the Music Room. Queen Elizabeth II's first three children were all baptised there. On all formal occasions, the ceremonies are attended by the Yeomen of the Guard, in their historic uniforms, and other officers of the court such as the Lord Chamberlain. Former ceremonial. Court dress. Formerly, men not wearing military uniform wore knee breeches of 18th-century design. Women's evening dress included trains and tiaras or feathers in their hair (often both). The dress code governing formal court uniform and dress has progressively relaxed. After the First World War, when Queen Mary wished to follow fashion by raising her skirts a few inches from the ground, she asked a lady-in-waiting to shorten her own skirt first to gauge the King's reaction. King George V disapproved, so the Queen kept her hemline unfashionably low. Following his accession in 1936, King George VI and Queen Elizabeth allowed the hemline of daytime skirts to rise. Today, there is no official dress code. Most men invited to Buckingham Palace in the daytime choose to wear service uniform or lounge suits; a minority wear morning coats, and in the evening, depending on the formality of the occasion, black tie or white tie. Court presentation of débutantes. Débutantes were aristocratic young ladies making their first entrée into society through a presentation to the monarch at court. These occasions, known as "coming out", took place at the palace from the reign of Edward VII. The débutantes entered—wearing full court dress, with three ostrich feathers in their hair—curtsied, performed a backwards walk and a further curtsey, while manoeuvring a dress train of prescribed length. The ceremony, known as an evening court, corresponded to the "court drawing rooms" of Victoria's reign. After World War II, the ceremony was replaced by less formal afternoon receptions, omitting the requirement of court evening dress. In 1958, Queen Elizabeth II abolished the presentation parties for débutantes, replacing them with garden parties for up to 8,000 invitees in the garden. They are the largest functions of the year. Garden and surroundings. At the rear of the palace is the large and park-like garden, which together with its lake is the largest private garden in London. There, Elizabeth II hosted her annual garden parties each summer and also held large functions to celebrate royal milestones, such as jubilees. It covers 17 hectares (42 acres) and includes a helicopter landing area, a lake and a tennis court. Adjacent to the palace is the Royal Mews, also designed by Nash, where the royal carriages, including the Gold State Coach, are housed. This Rococo-style gilt coach, designed by William Chambers in 1760, has painted panels by Giovanni Battista Cipriani. It was first used for the State Opening of Parliament by George III in 1762 and has been used by the monarch for every coronation since that of William IV. It was last used for the coronation of King Charles III and Queen Camilla. Also housed in the mews are the coach horses used in royal ceremonial processions, as well as many of the cars used by the royal family. The Mall, a ceremonial approach route to the palace, was designed by Aston Webb and completed in 1911 as part of a grand memorial to Queen Victoria. 
It extends from Admiralty Arch, across St James's Park to the Victoria Memorial, concluding at the entrance gates into the palace forecourt. This route is used by the cavalcades and motorcades of visiting heads of state, and by the royal family on state occasions—such as the annual Trooping the Colour. Security breaches. The boy Jones was an intruder who gained entry to the palace on three occasions between 1838 and 1841. At least 12 people have managed to gain unauthorised entry into the palace or its grounds since 1914, including Michael Fagan, who broke into the palace twice in 1982 and entered Queen Elizabeth II's bedroom on the second occasion on 9 July. At the time, news media reported that he had a long conversation with her while she waited for security officers to arrive, but in a 2012 interview with "The Independent", Fagan said she ran out of the room, and no conversation took place. It was only in 2007 that trespassing on the palace grounds became a specific criminal offence.
3970
19372301
https://en.wikipedia.org/wiki?curid=3970
British Airways
British Airways plc (BA) is the flag carrier of the United Kingdom. It is headquartered in London, England, near its main hub at Heathrow Airport. The airline is the second largest UK-based carrier, based on fleet size and passengers carried, behind easyJet. In January 2011, BA merged with Iberia, creating the International Airlines Group (IAG), a holding company registered in Madrid, Spain. IAG is the world's third-largest airline group in terms of annual revenue and the second-largest in Europe. It is listed on the London Stock Exchange and in the FTSE 100 Index. British Airways is the first passenger airline to have generated more than US$1 billion on a single air route in a year (from 1 April 2017 to 31 March 2018, on the New York JFK–London Heathrow route). BA's earliest predecessor company is Aircraft Transport and Travel (AT&T), which was established in 1916 and began operating one of the world's first scheduled international passenger air services in 1919. In 1972 a British Airways Board was established by the British government to manage the two nationalised airline corporations, British Overseas Airways Corporation and British European Airways, and two regional airlines, Cambrian Airways and Northeast Airlines. On 31 March 1974, all four companies were merged to form British Airways. BA was privatised in February 1987 as part of a wider privatisation plan by the Conservative government. The carrier expanded with the acquisition of British Caledonian in 1987, Dan-Air in 1992, and British Midland International in 2012. It is a founding member of the Oneworld airline alliance, the third-largest alliance after SkyTeam and Star Alliance. History. The corporate lineage of British Airways goes back to five airlines established in the United Kingdom between 1916 and 1922, the first of which was Aircraft Transport and Travel (AT&T) on 5 October 1916. AT&T began the world's first daily international commercial air service, from London to Paris, on 25 August 1919. The five airlines merged in 1924, and several other airlines were established and merged during the 1930s and 1940s. The mergers and acquisitions resulted in two state-owned airlines, the British Overseas Airways Corporation (BOAC), formed in 1939, and British European Airways (BEA), formed in 1947. Proposals to establish a single British airline, combining the assets of BOAC and BEA, were first raised in 1953 as a result of difficulties in attempts by BOAC and BEA to negotiate air rights through the British colony of Cyprus. Increasingly, BOAC was protesting that BEA was using its subsidiary Cyprus Airways to circumvent an agreement that BEA would not fly routes further east than Cyprus, particularly to the increasingly important oil regions in the Middle East. The chairman of BOAC, Miles Thomas, was in favour of a merger as a potential solution to this disagreement and had backing for the idea from the Chancellor of the Exchequer at the time, Rab Butler. However, opposition from the Treasury blocked the proposal. Consequently, it was only following the recommendations of the 1969 Edwards Report that a new British Airways Board, managing both BEA and BOAC, and the two regional British airlines Cambrian Airways, based at Cardiff, and Northeast Airlines, based at Newcastle upon Tyne, was constituted on 1 April 1972. Although each airline's branding was maintained initially, two years later the British Airways Board unified its branding, effectively establishing British Airways as an airline on 31 March 1974. 
Following two years of fierce competition with British Caledonian, the second-largest airline in the United Kingdom at the time, the Government changed its aviation policy in 1976 so that the two carriers would no longer compete on long-haul routes. British Airways and Air France operated the supersonic Concorde airliner, and the world's first supersonic passenger service flew on 21 January 1976 from London Heathrow Airport to Bahrain International Airport. Services to the U.S. began on 24 May 1976 with a flight to Washington Dulles airport, and flights to New York JFK airport followed on 22 September 1977. Service to Singapore was established in co-operation with Singapore Airlines as a continuation of the flight to Bahrain. Following the crash of Air France Flight 4590 and the 11 September attacks, British Airways decided to cease Concorde operations in 2003 after 27 years of service. The final commercial Concorde flight was BA002 from New York JFK to London Heathrow on 24 October 2003. In 1981 the airline was instructed to prepare for privatisation by the Conservative Thatcher government. Sir John King, later Lord King, was appointed chairman, charged with bringing the airline back into profitability. While many other large airlines struggled, King was credited with transforming British Airways into one of the most profitable air carriers in the world. The flag carrier was privatised and floated on the London Stock Exchange in February 1987. British Airways effected the takeover of the UK's "second" airline, British Caledonian, in July of that same year. The formation of Richard Branson's Virgin Atlantic in 1984 created a competitor for BA. The intense rivalry between British Airways and Virgin Atlantic culminated in the former being sued for libel in 1993, arising from claims and counterclaims over a "dirty tricks" campaign against Virgin. This campaign included allegations of poaching Virgin Atlantic customers, tampering with private files belonging to Virgin, and undermining Virgin's financial reputation in the City. As a result of the case, BA management apologised "unreservedly", and the company agreed to pay £110,000 in damages to Virgin, £500,000 to Branson personally and £3 million in legal costs. Lord King stepped down as chairman in 1993 and was replaced by his deputy, Colin Marshall, while Bob Ayling took over as CEO. Virgin filed a separate action in the U.S. that same year regarding BA's domination of the trans-Atlantic routes, but it was thrown out in 1999. In 1992 British Airways expanded through the acquisition of the financially troubled Dan-Air, giving BA a much larger presence at Gatwick Airport. British Asia Airways, a subsidiary based in Taiwan, was formed in March 1993 to operate between London and Taipei. That same month BA purchased a 25% stake in the Australian airline Qantas and, with the acquisition of Brymon Airways in May, formed British Airways Citiexpress (later BA Connect). In September 1998, British Airways, along with American Airlines, Cathay Pacific, Qantas, and Canadian Airlines, formed the Oneworld airline alliance. Oneworld began operations on 1 February 1999 and is the third-largest airline alliance in the world, behind SkyTeam and Star Alliance. Bob Ayling's leadership led to cost savings of £750 million and the establishment of a budget airline, Go, in 1998. The next year, however, British Airways reported an 84% drop in profits in its first quarter alone, its worst in seven years. 
In March 2000, Ayling was removed from his position, and British Airways announced Rod Eddington as his successor. That year, British Airways and KLM conducted talks on a potential merger, reaching a decision in July to file an official merger plan with the European Commission. The plan fell through in September 2000. British Asia Airways ceased operations in 2001 after BA suspended flights to Taipei. Go was sold to its management and the private equity firm 3i in June 2001. Eddington made further workforce cuts due to reduced demand following the 11 September attacks in 2001, and BA sold its stake in Qantas in September 2004. In 2005 Willie Walsh, managing director of Aer Lingus and a former pilot, became the chief executive officer of British Airways. BA unveiled its new subsidiary OpenSkies in January 2008, taking advantage of the liberalisation of transatlantic traffic rights between Europe and the United States. OpenSkies flies non-stop from Paris to New York's JFK and Newark airports. In July 2008, British Airways announced a merger plan with Iberia, another flag carrier airline in the Oneworld alliance, wherein each airline would retain its original brand. The agreement was confirmed in April 2010, and in July the European Commission and U.S. Department of Transportation approved the merger and allowed BA to co-ordinate transatlantic routes with American Airlines. On 6 October 2010 the alliance between British Airways, American Airlines and Iberia formally began operations. The alliance generates an estimated £230 million in annual cost savings for BA, in addition to the £330 million which would be saved by the merger with Iberia. This merger was finalised on 21 January 2011, resulting in the establishment of International Airlines Group S.A. (IAG), the world's third-largest airline group in terms of annual revenue and the second-largest airline group in Europe. Prior to merging, British Airways owned a 13.5% stake in Iberia, and its shareholders thus received 55% of the combined International Airlines Group; Iberia's other shareholders received the remaining 45%. As a part of the merger, British Airways ceased trading independently on the London Stock Exchange after 23 years as a constituent of the FTSE 100 Index. In September 2010 Willie Walsh, now CEO of IAG, announced that the group was considering acquiring other airlines and had drawn up a shortlist of twelve possible acquisitions. In November 2011 IAG announced an agreement in principle to purchase British Midland International from Lufthansa. A contract to purchase the airline was agreed the next month, and the sale was completed for £172.5 million on 30 March 2012. The airline established a new subsidiary based at London City Airport operating Airbus A318s. British Airways was the official airline partner of the London 2012 Olympic Games. On 18 May 2012 it flew the Olympic flame from Athens International Airport to RNAS Culdrose while carrying various dignitaries. On 27 May 2017, British Airways suffered a computer power failure. All flights were cancelled and thousands of passengers were affected. By the following day, the company had not succeeded in re-establishing the normal function of its computer systems. When asked by reporters for more information on the ongoing problems, British Airways stated "The root cause was a power supply issue which affected our IT systems - we continue to investigate this" and declined to comment further. 
Willie Walsh later attributed the failure to an electrical engineer who had disconnected the uninterruptible power supply (UPS), and said there would be an independent investigation. Amidst the decline in the value of the Iranian currency due to the reintroduction of U.S. sanctions on Iran, BA announced that the Iranian route was "not commercially viable" and ended services to Iran on 22 September 2018. In 2018, British Airways partnered with the British tailor and designer Ozwald Boateng to redesign the company's historic uniforms in honour of its approaching centenary, creating a new look for BA while adhering to its traditional style. The new collection, "A British Original", was launched in 2023. This design initiative also included English bone china manufactured by William Edwards and cutlery by Studio William for the company's first class service. In 2019, as part of the celebration of its centenary of airline operations, staff dressed in heritage uniforms dating back to the 1930s to greet Queen Elizabeth II, and British Airways announced that four aircraft would receive retro liveries. The first of these was a Boeing 747-400 (G-BYGC), which was repainted into the former BOAC livery, which it retained until its retirement. Two more Boeing 747-400s were repainted with former British Airways liveries: one (G-BNLY) wore the "Landor" livery until its retirement in 2020, and the other (G-CIVB) wore the original "Union Jack" livery until its retirement, also in 2020. An Airbus A319 was repainted into the British European Airways livery and is still flying as G-EUPJ. On 28 April 2020, the company set out plans to make up to 12,000 staff redundant because of the global collapse of air traffic due to the COVID-19 pandemic, and said that it might not reopen its operations at Gatwick Airport. It resumed operations at Gatwick in March 2022. In July 2020, British Airways announced the immediate retirement of its entire 747-400 fleet, having originally intended to phase out the remaining 747s in 2024. The airline stated that its decision to bring forward the date was in part due to the downturn in air travel following the COVID-19 pandemic and a desire to focus on incorporating more modern and fuel-efficient aircraft such as the Airbus A350 and Boeing 787. At the same time, British Airways also announced its intention to eliminate carbon emissions by 2050. On 28 July 2020, the company's cabin crew union issued an "industrial action" warning in an effort to prevent the 12,000 job cuts and pay cuts. On 12 October 2020, it was announced that Sean Doyle, CEO of Aer Lingus (also part of the IAG airline group), would succeed Álex Cruz as CEO. On 24 June 2024, British Airways was voted the "2024 Most Family Friendly Airline in the World" by Skytrax. The award encompasses the overall family travel experience, such as seating policies, check-in facilities, priority boarding, meals and amenities for children, as well as other family-oriented aspects. Corporate affairs. Business trends. The key trends for the British Airways PLC Group are shown below. On the merger with Iberia, the accounting reference date was changed from 31 March to 31 December; figures are therefore for the years to 31 March up to 2010, for the nine months to 31 December 2010, and for the years to 31 December thereafter. In 2020, due to the crisis caused by the COVID-19 pandemic, British Airways had to reduce its 42,000-strong workforce by 12,000 jobs. According to the estimate by IAG, its parent company, it will take the air travel industry several years to return to previous performance and profitability levels. 
However, 2022 saw a dramatic increase in travel, and the company then faced a worker shortage, forcing it to cancel more than 1,500 flights. In February 2023, International Airlines Group, the owner of British Airways, announced that the group had returned to profit for the first time since the pandemic, posting an annual profit of €1.3 billion following a €2.8 billion loss in 2021. The company warned that the surge in demand for flying could lead to more disruption. Operations. British Airways is the largest airline based in the United Kingdom in terms of fleet size, international flights, and international destinations, and was, until 2008, the largest airline by passenger numbers. The airline carried 34.6 million passengers in 2008, but rival carrier easyJet transported 44.5 million passengers that year, passing British Airways for the first time. British Airways holds a United Kingdom Civil Aviation Authority Type A Operating Licence, which permits it to carry passengers, cargo, and mail on aircraft with 20 or more seats. The airline's head office, Waterside, is located in Harmondsworth, a village near Heathrow Airport. Waterside was completed in June 1998 to replace British Airways' previous head office, Speedbird House, located on the grounds of Heathrow. British Airways' main base is at Heathrow Airport, but it also has a major presence at Gatwick Airport. It also has a base at London City Airport, where its subsidiary BA CityFlyer is the largest operator. BA had previously operated a significant hub at Manchester Airport. Manchester to New York (JFK) services were withdrawn; later, all international services outside London ceased when the subsidiary BA Connect was sold. Passengers wishing to travel internationally with BA either to or from regional UK destinations must now transfer in London. Heathrow Airport is dominated by British Airways, which, as of 2019, owns 50% of the slots available at the airport, up from 40% in 2004. The majority of BA services operate from Terminal 5, with the exception of some flights at Terminal 3 owing to insufficient capacity at Terminal 5. At London City Airport, the company owns 52% of the slots as of 2019. In August 2014, Willie Walsh advised that the airline would continue to use flight paths over Iraq despite the hostilities there. A few days earlier, Qantas had announced it would avoid Iraqi airspace, and other airlines did likewise. The issue arose following the downing of Malaysia Airlines Flight 17 over Ukraine and a temporary suspension of flights to and from Ben Gurion Airport during the 2014 Israel–Gaza conflict. Subsidiaries. Over its history, BA has had many subsidiaries. In addition to the below, British Airways also owned Airways Aero Association, the operator of the British Airways flying club based at Wycombe Air Park in High Wycombe, until it was sold to Surinder Arora in 2007. Shareholdings. British Airways obtained a 15% stake in the now-defunct UK regional airline Flybe from the sale of BA Connect in March 2007. It sold the stake in 2014. BA also owned a 10% stake in InterCapital and Regional Rail (ICRR), the company that managed the operations of Eurostar (UK) Ltd from 1998 to 2010, when the management of Eurostar was restructured. Industrial relations. 
Staff working for British Airways are represented by a number of trade unions: pilots are represented by the British Air Line Pilots' Association, cabin crew by the British Airlines Stewards and Stewardesses Association (a branch of Unite the Union), while other branches of Unite the Union and the GMB Union represent other employees. Bob Ayling's management faced strike action by cabin crew over a £1 billion cost-cutting drive to return BA to profitability in 1997; this was the last time BA cabin crew would strike until 2009, although staff morale has reportedly been unstable since that incident. In an effort to increase interaction between management, employees, and the unions, various conferences and workshops have taken place, often with thousands in attendance. In 2005, wildcat action was taken by union members over a decision by Gate Gourmet not to renew the contracts of 670 workers and replace them with agency staff; it is estimated that the strike cost British Airways £30 million and caused disruption to 100,000 passengers. In October 2006, BA became involved in a civil rights dispute when a Christian employee was forbidden to wear a necklace bearing the cross, a religious symbol. BA's practice of forbidding such symbols has been publicly questioned by British politicians such as the former Home Secretary John Reid and the former Foreign Secretary Jack Straw. Relations have been turbulent between BA and Unite. In 2007, cabin crew threatened strike action over salary changes to be imposed by BA management. The strike was called off at the last minute, with British Airways losing £80 million. In December 2009, a ballot for strike action over Christmas received a high level of support, but the action was blocked by a court injunction that deemed the ballot illegal. Negotiations failed to stop strike action in March, and BA withdrew perks from strike participants. Allegations were made by "The Guardian" newspaper that BA had consulted outside firms on methods to undermine the unions; the story was later withdrawn. A strike was announced for May 2010, and British Airways again sought an injunction. Members of the Socialist Workers Party disrupted negotiations between BA management and Unite aimed at preventing industrial action. Further disruption struck when Derek Simpson, a Unite co-leader, was discovered to have leaked details of confidential negotiations online via Twitter. Industrial action re-emerged in 2017, this time by BA's Mixed Fleet flight attendants, who were employed on much less favourable pay and terms and conditions than cabin staff who had joined prior to 2010. A ballot for industrial action was distributed to Mixed Fleet crew in November 2016 and resulted in an overwhelming majority in favour of industrial action. Unite described Mixed Fleet crew as being on "poverty pay", with many Mixed Fleet flight attendants sleeping in their cars between shifts because they could not afford the fuel to drive home, or operating while sick because they could not afford to call in sick and lose their pay for the shift. Unite also criticised BA for removing staff travel concessions, bonus payments and other benefits from all cabin crew who undertook industrial action, as well as for strike-breaking tactics such as wet-leasing aircraft from other airlines and offering financial incentives for cabin crew not to strike. The first strike dates, during Christmas 2016, were cancelled due to pay negotiations. Industrial action by Mixed Fleet commenced in January 2017 after the crew rejected a pay offer. 
Strike action continued throughout 2017 in numerous discontinuous periods, resulting in one of the longest-running disputes in aviation history. On 31 October 2017, after 85 days of discontinuous industrial action, Mixed Fleet accepted a new pay deal from BA, which ended the dispute. Destinations. British Airways serves over 170 destinations in 70 countries, including eight domestic and 27 in the United States. Alliances. British Airways co-founded the airline alliance Oneworld in 1999 with American Airlines, Cathay Pacific and Qantas. Codeshare agreements. British Airways has codeshare agreements with a number of other airlines. Fleet. British Airways operates a fleet of 274 aircraft, with 42 on order. BA operates a mix of Airbus narrow- and wide-body aircraft and Boeing wide-body aircraft, specifically the 777 and 787. In October 2020, British Airways retired its fleet of 747-400 aircraft. It was one of the largest operators of the 747, having previously operated the -100, -200, and -400 variants from 1974 (1969 with BOAC). British Airways Engineering. The airline has its own engineering branch to maintain its aircraft fleet; this includes line maintenance at over 70 airports around the world. Amongst the company's various hangar facilities are its two major maintenance centres at Glasgow and Cardiff Airports. Marketing. Branding. The musical theme predominantly used on British Airways advertising has been "The Flower Duet" by Léo Delibes. This was first used in a 1984 advertisement directed by Tony Scott, in an arrangement by Howard Blake. It was reworked by Malcolm McLaren and Yanni for 1989's iconic "Face" advertisement, and subsequently appeared in many different arrangements between 1990 and 2010. The slogan 'the world's favourite airline', first used in 1983, was dropped in 2001 after Lufthansa overtook BA in terms of passenger numbers. Other advertising slogans have included "The World's Best Airline", "We'll Take More Care of You", "Fly the Flag", and "To Fly, To Serve". BA had an account for 23 years with Saatchi & Saatchi, an agency that created many of its most famous advertisements, including "The World's Biggest Offer" and the influential "Face" campaign. Saatchi & Saatchi later imitated this advert for Silverjet, a rival of BA, after its business relationship with BA had ended. Since 2007, BA has used Bartle Bogle Hegarty as its advertising agency. In October 2022, BA launched a brand new ad campaign, titled "A British Original", produced by London-based Uncommon Creative Studio. The campaign was notable for its use of 500 unique executions along with a series of 32 short films, coinciding with the launch of Ozwald Boateng's new collection of uniforms. British Airways purchased the internet domain ba.com in 2002 from its previous owner Bell Atlantic, 'BA' being the company's initialism and its IATA airline code. British Airways is the official airline of the Wimbledon Championship tennis tournament, and was the official airline and tier one partner of the 2012 Summer Olympics and Paralympics. BA was also the official airline of England's bid to host the 2018 Football World Cup. "High Life", founded in 1973, is the official in-flight magazine of the airline. Safety video. The airline used a cartoon safety video from circa 2005 until 2017.
Beginning on 1 September 2017, the airline introduced a new Comic Relief live-action safety video hosted by Chabuddy G, with appearances by British celebrities Gillian Anderson, Rowan Atkinson, Jim Broadbent, Rob Brydon, Warwick Davis, Chiwetel Ejiofor, Ian McKellen, Thandie Newton, and Gordon Ramsay. A "sequel" video, also hosted by Chabuddy G, was released in 2018, with Michael Caine, Olivia Colman, Jourdan Dunn, Naomie Harris, Joanna Lumley, and David Walliams. The two videos are part of Comic Relief's charity programme. On 17 April 2023, the airline launched a new safety video as part of the "A British Original" campaign, with Emma Raducanu, Robert Peston, Little Simz, and Steven Bartlett. Liveries, logos, and tail fins. The aeroplanes that British Airways inherited from the four-way merger of BOAC, BEA, Cambrian, and Northeast were temporarily given the text logo "British airways" but retained the original airline's livery. With its formation in 1974, British Airways' aeroplanes were given a new white, blue, and red colour scheme with a cropped Union Jack painted on their tail fins, designed by Negus & Negus. In 1984, a new livery designed by Landor Associates updated the airline's look as it prepared for privatisation. To celebrate its centenary in 2019, BA announced four retro liveries: three on Boeing 747-400 aircraft (one in each of the BOAC, Negus & Negus, and Landor Associates liveries), and one A319 in BEA livery. In 1997, there was a controversial change to a new Project Utopia livery; all aircraft used the corporate colours consistently on the fuselage, but tailfins bore one of multiple designs. Several people spoke out against the change, including the former prime minister Margaret Thatcher, who famously covered the tail of a model 747 at an event with a handkerchief to show her displeasure. BA's traditional rival, Virgin Atlantic, took advantage of the negative press coverage by applying the Union flag to the winglets of their aircraft along with the slogan "Britain's national flag carrier". In 1999, the CEO of British Airways, Bob Ayling, announced that all BA planes would adopt the "Chatham Dockyard Union Flag" tailfin design, based on the Union Flag and originally intended to be used only on Concorde. Arms. In 2011, British Airways undertook a brand relaunch project, in which it introduced a stylised, metallic version of the arms, by For People Design, to be used alongside its "Speedmarque" logo. This is used exclusively on aircraft, the First Wing lounge, and advertisements. In 2024, the damaged letters patent of the arms went up for auction online before being withdrawn. Loyalty programme. British Airways' tiered loyalty programme, the British Airways Club, is designed to incentivise its members to travel on British Airways and other partners by advertising benefits and awarding members with currency. Members accrue points called 'Avios' and 'tier points' through methods permitted by the airline, including flying on the airline itself. Avios is a currency owned by BA's parent company, International Airlines Group. 'Tier points' determine a member's tier in the programme. Once a member attains enough 'tier points' to reach a high enough tier, they can access airport lounges and dedicated "fast" queues. Members of the programme are also granted status within the Oneworld alliance, which confers similar benefits when flying with Oneworld member airlines. The level of benefits is determined by the member's tier.
On 1 April 2025, the programme was rebranded from the 'Executive Club' to 'The British Airways Club'. Before the rebrand, 'tier points' were earned based on the airline, distance and cabin class flown. Under the 'British Airways Club', 'tier points' are now earned based on absolute spending with the airline (including the fare component and carrier-imposed surcharges, but not government taxes or airport fees such as Air Passenger Duty), with 1 'tier point' awarded per £1 spent. For example, under this rule a booking with a £500 fare and £100 in government taxes would earn 500 tier points. Cabins and services. Short haul. "Euro Traveller" is British Airways' economy-class cabin on all short-haul flights within Europe, including domestic flights within the UK. Heathrow- and Gatwick-based flights are operated by Airbus A320-series aircraft. Seat pitch varies depending on the aircraft type and the location of the seat. All flights from Heathrow and Gatwick have a buy-on-board system with a range of food designed by Tom Kerridge. Food can be pre-ordered through the British Airways mobile application. Alternatively, a limited selection can be purchased on board using a credit or debit card or by using frequent-flyer Avios points. British Airways has been rolling out Wi-Fi across its fleet of aircraft, with 90% expected to be Wi-Fi enabled by 2020. "Club Europe" is the short-haul business class available on all short-haul flights. This class allows for access to business lounges at most airports and complimentary onboard catering, as well as fast-track security at most airports. The middle seat of the standard Airbus-configured cabin is left free; instead, a cocktail table folds up from under the middle seat on refurbished aircraft. Mid-haul and long haul. "First" is offered on all Airbus A380s, Boeing 777-300ERs, Boeing 787-9/10s and on some Boeing 777-200ERs. There are between eight and fourteen private suites depending on the aircraft type. Each First suite comes with a bed, a wide entertainment screen, in-seat power and complimentary Wi-Fi access on select aircraft. The exclusive Concorde Room lounge at Heathrow Terminal 5 offers pre-flight dining with waiter service and a more intimate space. Dedicated British Airways 'Galleries First' lounges are available at some airports, and business lounges are used where these are not available. Some feature a 'First Dining' section where passengers holding a first-class ticket can access a pre-flight dining service. "Club World" is the long-haul business-class cabin. The cabin features fully convertible flat-bed seats. In March 2019, BA unveiled its new business-class seats, named "Club Suite", on the new A350 aircraft; these feature a suite with a door. "World Traveller Plus" is the premium economy cabin provided on all BA long-haul aircraft. This cabin offers wider seats, extended leg-room, and additional seat comforts such as a larger in-flight entertainment (IFE) screen, a foot rest and power sockets. "World Traveller" is the mid-haul and long-haul economy cabin. It offers seat-back entertainment, complimentary food and drink, pillows, and blankets. While in-flight entertainment screens are available on all long-haul aircraft, international power outlets are available on the aircraft based at Heathrow. Wi-Fi is also available on selected aircraft for an extra fee. Incidents and accidents. British Airways has a strong reputation for safety and has been consistently ranked within the top 20 safest airlines globally by "Business Insider" and "AirlineRatings.com".
Since BA's inception in 1974, it has been involved in three hull-loss incidents (British Airways Flight 149 was destroyed on the ground at Kuwait International Airport as a result of military action during the First Gulf War, with no one on board) and two hijacking attempts. To date, the only fatal accident experienced by a BA aircraft occurred in 1976 with British Airways Flight 476, which was involved in a mid-air collision later attributed to an error made by air traffic control.
Bicycle
A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle with two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist. Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles in existence, far more than the number of cars. Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling. The basic shape and configuration of a typical upright or "safety" bicycle has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular. The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels. Etymology. The word "bicycle" first appeared in English print in "The Daily News" in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time. Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is U+1F6B2, and the HTML numeric entity &#x1F6B2; produces 🚲 (see the short example below). Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power, whereas the term bike is used to describe a two-wheeler powered by an internal combustion engine or electric motor, i.e. a motorcycle or motorbike. History. The "dandy horse", also called "Draisienne" or "Laufmaschine" ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel. The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings.
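As a brief aside on the Unicode point mentioned under Etymology above, the following minimal Python snippet (an editorial illustration, not part of the source) shows the bicycle character produced directly from its code point and recovered from the equivalent HTML numeric entity:

import html

# U+1F6B2 ("BICYCLE") printed directly from its Unicode escape.
print("\U0001F6B2")                  # -> 🚲
# The same character, recovered from the HTML numeric entity.
print(html.unescape("&#x1F6B2;"))    # -> 🚲
# Round-trip check: the code point of the emoji is indeed 0x1f6b2.
print(hex(ord("🚲")))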
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle to enter mass production. Another French inventor named Douglas Grasso had built a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French "vélocipède", made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory. The "dwarf ordinary" addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing, effected in a variety of ways, to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J. K. Starley (nephew of James Starley), J. H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry, is usually described as the first recognizably modern bicycle. Soon the "seat tube" was added, which created the modern bike's double-triangle "diamond frame". Further innovations increased comfort and ushered in a second bicycle craze, the 1890s "Golden Age of Bicycles". In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders. The Svea Velocipede, with a vertical pedal arrangement and locking hubs, was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units. In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878.
By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year. Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motorized, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has had more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting. Uses. Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries. They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX. Technical aspects. The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort. Types. Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles. Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles". Dynamics. A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself. The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle. Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally.
The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie. Performance. The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of the energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation. A human traveling on a bicycle at low to medium speeds uses only about the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase: because drag force grows with the square of speed, the power needed to overcome it grows with the cube, so doubling one's speed demands roughly eight times the power. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position, or by covering the bicycle with an aerodynamic fairing; the fastest unpaced speeds on flat surfaces have been recorded on such streamlined machines. In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy-efficient motorcars. Parts. Frame. The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the "diamond frame", a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends. Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a "step-through frame" or as an "open frame", allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the "mixte", which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames. Step-throughs were popular partly for practical reasons and partly for the social mores of the day.
For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount; in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle. Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle, but this type was banned from competition in 1934 by the Union Cycliste Internationale. Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength-to-weight ratio. A typical modern carbon fiber frame can weigh around 1 kg or less. Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with a high strength-to-weight ratio and stiffness, has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models. Drivetrain and gearing. The "drivetrain" begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive or special belts to transmit power. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex. Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 12 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer (see the short sketch below). An alternative to chain drive is to use a synchronous belt. These are toothed and work much the same as a chain; popular with commuters and long-distance cyclists, they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear.
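To make the gear-counting arithmetic above concrete, here is a minimal Python sketch; it is an editorial illustration rather than part of the source, and the 50/34 chainrings and 11–28 cassette are hypothetical example values:

def gear_ratios(chainrings, sprockets):
    # Every chainring/sprocket pairing yields one ratio; the nominal gear
    # count is therefore len(chainrings) * len(sprockets).
    return {(front, rear): front / rear for front in chainrings for rear in sprockets}

# Hypothetical 2x11 road drivetrain: 50/34 chainrings, 11-28 cassette.
ratios = gear_ratios([34, 50], [11, 12, 13, 14, 15, 17, 19, 21, 23, 25, 28])
print(len(ratios))  # 22 nominal gears (2 chainrings x 11 sprockets)

# Rounding to one decimal place exposes the overlap: several front/rear
# combinations duplicate one another, leaving fewer usable gears (18 here).
print(len({round(ratio, 1) for ratio in ratios.values()}))

Cyclists often compare such overlapping combinations by converting each ratio to "gear inches", multiplying the ratio by the drive wheel's diameter.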
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals. With a "chain drive" transmission, a "chainring" attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: a two-speed hub gear integrated with the chain ring, up to 3 chain rings, up to 12 sprockets, and a hub gear built into the rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common). Steering. The handlebars connect to the stem, which connects to the fork, which connects to the front wheel; the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. "Upright handlebars", the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. "Drop handlebars" "drop" as they curve forward and down, offering the cyclist the best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of backward sweep and upward rise, as well as wider widths, which can provide better handling due to increased leverage against the wheel. Seating. Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating their differing anatomies and sit-bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock, but can add to the overall weight of the bicycle. A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering. Brakes.
Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub; or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common on mountain bikes, tandems and recumbent bicycles than on other types of bicycles, owing to their increased power, which comes with increased weight and complexity. With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back-pedal "coaster brakes" which were popular in North America until the 1960s. Track bicycles do not have brakes, because all riders ride in the same direction around a track, which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear-wheel brake, but not as effective as a front-wheel brake. Suspension. Bicycle suspension refers to the system or systems used to "suspend" the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort. Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot. Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension. Wheels and tires. The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels. Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually wider and have treads for gripping in muddy conditions, or metal studs for ice. Groupset. Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars. Accessories. Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, the UK), fenders are called mudguards. Chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth.
Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one or both of the wheel hubs, either to help the rider perform certain tricks or to give extra riders a place to stand or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both. Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data, etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bells. Bicycle lights, reflectors, and helmets are required by law in some jurisdictions. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form-fitting and high-visibility clothing. Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, though a dedicated balance bike teaches independent riding more effectively. Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing. Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races, or indoors when riding conditions are unfavorable. Standards. A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety. The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, whose scope is "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability". The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Its mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry. Maintenance and repair. Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. the bottom bracket). Maintenance. The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference in how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycle tires are normally inflated to substantially higher pressures than car tires. Another basic maintenance item is regular lubrication of the chain and of the pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last a very long time. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease. The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chainring(s), so replacing a chain when only moderately worn will prolong the life of other components. Over the longer term, tires do wear out; a rash of punctures is often the most visible sign of a worn tire. Repair. Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice. The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture, or the tube can be replaced, though the latter solution comes at a greater cost and waste of material. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove. Tools. There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches or tube-patching material, an adhesive, a piece of sandpaper or a metal grater for roughening the tube surface to be patched, and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific to a given manufacturer. Social and historical aspects. The bicycle has had a considerable effect on human society, in both the cultural and industrial realms. In daily life. Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses.
Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast. In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax-break scheme (IR 176) that allows employees to buy a new bicycle tax-free to use for commuting. In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that parking capacity may be exceeded; in some places, such as Delft, it usually is. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front. There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheelchairs. Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker". Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries, where they are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are also commonly used. They also offer a degree of exercise to keep individuals healthy. Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world. Female emancipation.
The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets, ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers. The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a "New York World" interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote "A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way", a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action. In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach. Economic implications. Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft. Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company, which designed, manufactured and sold their bicycles during the bike boom of the 1890s. The industry also pioneered practices later adopted by other manufacturers, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was placed by bicycle makers), and lobbying for better roads (which had the side benefit of acting as advertising and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful. Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll. Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco.
Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap. They also presaged a move away from public transit that would explode with the introduction of the automobile. J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and was then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885. In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and from components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.). In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames. Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China. Amid the European financial crisis of that time, in 2011 the number of bicycle sales in Italy (1.75 million) passed the number of new car sales. Environmental impact. One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption (Ballantine, 1972). The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any common mode of travel. Manufacturing. The global bicycle market was worth $61 billion in 2011. Some 130 million bicycles were sold every year globally, 66% of them made in China. Legal requirements. Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists. The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition. In some countries, bicycles must have functioning front and rear lights when ridden after dark.
Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety-in-numbers effect). Theft. Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify, as a large number of crimes are not reported. In a Montreal survey published in the International Journal of Sustainable Transportation, around 50% of participants reported having had a bicycle stolen during their lifetime as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
Biopolymer
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. The polynucleotides, RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs). In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering. Biopolymers versus synthetic polymers. A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (lignocellulose, for example, lacks one). The exact chemical composition and the sequence in which the monomer units are arranged is called, in the case of proteins, the primary structure. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most "in vivo" systems, all biopolymers of a type (say one specific protein) are alike: they contain similar sequences and numbers of monomers and thus all have the same mass. This phenomenon is called monodispersity, in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1; formally, the dispersity Đ is the ratio of the weight-average to the number-average molar mass, Đ = Mw/Mn, which equals exactly 1 when every chain has the same mass. Conventions and nomenclature. Polypeptides. The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids. Nucleic acids. The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer. Polysaccharides. Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds.
Structural characterization. There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometry techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners.
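Mass-based sequence determination rests on comparing measured masses against masses predicted from candidate sequences. As a minimal illustration (the tetrapeptide is hypothetical; the residue masses are standard monoisotopic values), the Python sketch below sums residue masses from the amino terminus to the carboxylic acid terminus and adds one water for the free termini.

RESIDUE_MASS = {  # daltons, monoisotopic; a subset of the 20 amino acids
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
    "L": 113.08406, "D": 115.02694, "K": 128.09496, "F": 147.06841,
}
WATER = 18.01056  # one H2O accounts for the free N- and C-termini

def peptide_mass(sequence):
    # Sequence is written N-terminus to C-terminus in one-letter codes.
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(round(peptide_mass("GASV"), 4))  # hypothetical peptide, ~332.17 Da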
Common biopolymers. Collagen: Collagen is the primary structural protein of vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. Therefore, it has been used for many medical applications such as in treatment for tissue infection, drug delivery systems, and gene therapy.

Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry worm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulant properties and to promote platelet adhesion. Silk fibroin has additionally been found to support stem cell proliferation in vitro.

Gelatin: Gelatin is obtained from type I collagen and is produced by the partial hydrolysis of collagen from the bones, tissues, and skin of animals. There are two types of gelatin, Type A and Type B. Type A gelatin is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B gelatin is derived by alkaline hydrolysis and contains 18% nitrogen and no amide groups. Elevated temperatures cause gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, such as NH2, SH, and COOH, which allow it to be modified using nanoparticles and biomolecules. As a derivative of an extracellular matrix protein, gelatin lends itself to applications such as wound dressings, drug delivery, and gene transfection.

Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to improve the mechanical properties of starch, increasing its elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Being biodegradable and renewable, starch is used for many applications, including plastics and pharmaceutical tablets.

Cellulose: Cellulose is highly structured, with stacked chains that give it stability and strength. The strength and stability come from the straighter shape of cellulose, whose glucose monomers are joined by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications because of its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is widely used in the form of nano-fibrils called nano-cellulose. At low concentrations, nano-cellulose produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field.

Alginate: Alginate is the most abundant marine natural polymer and is derived from brown seaweed. Alginate biopolymer applications range from packaging, textile, and food industries to biomedical and chemical engineering. The first application of alginate was as a wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as the drug release rate can easily be manipulated by varying alginate density and fibrous composition.

Biopolymer applications. The applications of biopolymers can be categorized under two main fields: biomedical and industrial.

Biomedical. Because one of the main purposes of biomedical engineering is to mimic body parts to sustain normal body functions, biopolymers, with their biocompatible properties, are used extensively for tissue engineering, medical devices, and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and overall medical applications due to their mechanical properties. They promote wound healing, catalyze bioactivity, and are non-toxic. Compared to synthetic polymers, which can present disadvantages like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they possess more complex structures, similar to those of the human body. More specifically, polypeptides like collagen and silk are biocompatible materials being used in ground-breaking research, as they are inexpensive and easily attainable. Gelatin is often used in wound dressings, where it acts as an adhesive. Gelatin scaffolds and films can hold drugs and other nutrients that are supplied to a wound to promote healing. As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of its use:

Collagen-based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections like infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers, which can promote bone formation.

Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen-based implants are used for cultured skin cells or as drug carriers for treating burn wounds and replacing skin.

Collagen as haemostat: When collagen interacts with platelets, it causes a rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells.
Collagen-based haemostats reduce blood loss in tissues and help manage bleeding in organs such as the liver and spleen.

Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science: it is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it can biodegrade, which can eliminate the need for a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan.

Chitosan as drug delivery: Chitosan is used mainly for drug targeting because it has the potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can produce better anticancer effects by causing the gradual release of free drug into cancerous tissue.

Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions against microorganisms such as algae, fungi, and bacteria, including gram-positive bacteria, as well as various yeast species.

Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment which aids in the healing process. This wound dressing is also biodegradable and has a porous structure that allows cells to grow into the dressing. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds, forming stable three-dimensional networks.

Industrial. Food: Biopolymers are being used in the food industry for applications such as packaging, edible encapsulation films, and food coatings. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers have a hydrophilic nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry antioxidants, enzymes, probiotics, minerals, and vitamins, which the encapsulated food then supplies to the body.

Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. However, their barrier properties (both moisture-barrier and gas-barrier properties) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to get through the packaging, which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer that has great barrier characteristics and is now being used to overcome the barrier shortcomings of PLA and starch.

Water purification: Chitosan has been used for water purification. It is used as a flocculant that takes only a few weeks or months, rather than years, to degrade in the environment. Chitosan purifies water by chelation, the process in which binding sites along the polymer chain bind metal ions in the water, forming chelates. Chitosan has been shown to be an excellent candidate for use in storm and wastewater treatment.

As materials.
Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene- or polyethylene-based plastics. Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting.

Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat: when used to produce biopolymers, these are classified as non-food crops. They can be converted along the following pathways:

Sugar beet > Gluconic acid > Polygluconic acid
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene

Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, and thin films for wrapping.

Environmental impacts. Biopolymers can be sustainable and carbon neutral, and they are always renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: the CO2 released when they degrade can be reabsorbed by crops grown to replace them, which makes them close to carbon neutral.

Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. These biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20 μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable" (see the sketch below). In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap.
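To make the composting thresholds above concrete, here is a deliberately simplified Python sketch; the function name and inputs are hypothetical, and this is not an implementation of EN 13432, only a toy encoding of the two rules mentioned in the text (at least 90% breakdown within six months, and PLA film compostable only under 20 μm).

def is_compostable(degradation_pct_6mo, material="", thickness_um=None):
    # Rule 1: the sample must break down by at least 90% within six months.
    if degradation_pct_6mo < 90.0:
        return False
    # Rule 2: PLA film qualifies only when thinner than 20 micrometres;
    # thicker film is biodegradable but not certified compostable.
    if material == "PLA film" and thickness_um is not None:
        return thickness_um < 20.0
    return True

print(is_compostable(95.0, "PLA film", thickness_um=15.0))  # True
print(is_compostable(95.0, "PLA film", thickness_um=40.0))  # False: too thick
print(is_compostable(80.0))                                 # False: degrades too slowly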
3975
589223
https://en.wikipedia.org/wiki?curid=3975
2001 United Kingdom general election
The 2001 United Kingdom general election was held on Thursday 7 June 2001, four years after the previous election on 1 May 1997, to elect 659 members to the House of Commons. The governing Labour Party led by Prime Minister Tony Blair was re-elected to serve a second term in government with another landslide victory and a 166-seat majority, returning 412 members of Parliament versus 418 from the previous election, a net loss of six seats, although with a significantly lower turnout than before (59.4%, compared to 71.6% at the previous election). The number of votes Labour received fell by nearly three million. Blair went on to become the only Labour prime minister to serve two consecutive full terms in office. As Labour retained almost all of the seats won in the 1997 landslide victory, the media dubbed the 2001 election "the quiet landslide". There was little change outside Northern Ireland, with 620 out of the 641 seats in Great Britain electing candidates from the same party as they did in 1997. A strong economy contributed to the Labour victory.

The opposition Conservative Party under William Hague's leadership was still deeply divided on the issue of Europe, and the party's policy platform had drifted considerably to the right. The party put the issue of European monetary union, in particular the prospect of the UK joining the Eurozone, at the centre of its campaign, but the message failed to resonate with the electorate. The Conservatives briefly had a narrow lead in the polls during the 2000 fuel protests, but Labour resolved the dispute by the end of the year. A series of publicity stunts that backfired also harmed Hague, and he announced his resignation as party leader as soon as the election result was clear, formally stepping down three months later, thereby becoming the first leader of the Conservative Party in the House of Commons since Austen Chamberlain, nearly eighty years earlier, not to serve as prime minister.

The election was largely a repeat of the 1997 general election, with Labour losing only six seats overall and the Conservatives making a net gain of one seat (gaining nine seats but losing eight). The Conservatives gained a seat in Scotland, which ended the party's status as an "England-only" party in the prior parliament, but failed again to win any seats in Wales. Although they did not gain many seats, three of the few new MPs elected were future Conservative Prime Ministers David Cameron and Boris Johnson and future Conservative Chancellor of the Exchequer George Osborne; Osborne would serve in the same Cabinet as Cameron from 2010 to 2016. The Liberal Democrats led by Charles Kennedy made a net gain of six seats. Change was seen in Northern Ireland, with the moderate unionist Ulster Unionist Party (UUP) losing four seats to the more hardline Democratic Unionist Party (DUP). A similar transition appeared in the nationalist community, with the moderate Social Democratic and Labour Party (SDLP) losing votes to the more staunchly republican and abstentionist Sinn Féin. Exceptionally low voter turnout, which fell below 60% for the first time since 1918, also marked this election.

The election was broadcast live on BBC One and presented by David Dimbleby, Jeremy Paxman, Andrew Marr, Peter Snow, and Anthony King. The 2001 general election was notable for being the first in which pictures of the party logos appeared on the ballot paper. Prior to this, the ballot paper had only displayed the candidate's name, address, and party name.
Notable departing MPs included former Prime Ministers Edward Heath (also Father of the House) and John Major, former Deputy Prime Minister Michael Heseltine, former Liberal Democrat leader Paddy Ashdown, former Cabinet ministers Tony Benn, Tom King, John Morris, Mo Mowlam, John MacGregor and Peter Brooke, Teresa Gorman, and the then Mayor of London, Ken Livingstone.

Background. The elections were marked by voter apathy, with turnout falling to 59.4%, the lowest (and the first under 70%) since the Coupon Election of 1918. Throughout the election the Labour Party had maintained a significant lead in the opinion polls, and the result was deemed to be so certain that some bookmakers paid out for a Labour majority before election day. However, the previous autumn the polls had shown the first Tory lead, albeit a narrow one, in eight years, as the party benefited from public anger towards the government over the fuel protests, which had led to a severe shortage of motor fuel. By the end of 2000, however, the dispute had been resolved and Labour were firmly back in the lead of the opinion polls. In total, a mere 29 parliamentary seats changed hands at the 2001 election. 2001 also saw the rare election of an independent: Richard Taylor of Independent Kidderminster Hospital and Health Concern (usually now known simply as "Health Concern") unseated a government MP, David Lock, in Wyre Forest. There was also a high vote for British National Party leader Nick Griffin in Oldham West and Royton, in the wake of recent race riots in the town of Oldham. In Northern Ireland, the election was far more dramatic and marked a move by unionists away from support for the Good Friday Agreement, with the moderate unionist Ulster Unionist Party (UUP) losing to the more hardline Democratic Unionist Party (DUP). This polarisation was also seen in the nationalist community, with the Social Democratic and Labour Party (SDLP) vote losing out to the more left-wing and republican Sinn Féin. The field of parties also narrowed, as the small UK Unionist Party lost its only seat.

Campaign. The election had been expected on 3 May, to coincide with local elections, but on 2 April 2001 the local elections were postponed to 7 June because of rural movement restrictions imposed in response to the foot-and-mouth outbreak that had started in February. On 8 May, Prime Minister Tony Blair announced that the general election would be held on 7 June as expected, on the same day as the local elections. Blair made the announcement in a speech at St Saviour's and St Olave's Church of England School in Bermondsey, London, rather than on the steps of Downing Street. For Labour, the last four years had run relatively smoothly. The party had successfully defended all their by-election seats, and many suspected a Labour win was inevitable from the start. Many in the party, however, were afraid of voter apathy, a fear epitomised in a poster of "Hague with Margaret Thatcher's hair", captioned "Get out and vote. Or they get in." Despite recessions in mainland Europe and the United States caused by the bursting of the global tech bubble, Britain was notably unaffected, and Labour could rely on a strong economy as unemployment continued to decline toward election day, putting to rest any fears that a Labour government would put the economic situation at risk. For William Hague, however, the Conservative Party had still not fully recovered from the loss in 1997.
The party was still divided over Europe, and talk of a referendum on joining the Eurozone was rife; as a result, "Save The Pound" became one of the key slogans of the Conservatives' campaign. As Labour remained at the political centre, the Conservatives moved to the right. A policy gaffe by Oliver Letwin over public spending cuts left the party with an own goal that Labour soon exploited. Thatcher gave a speech to the Conservative Election Rally in Plymouth on 22 May 2001, calling New Labour "rootless, empty, and artificial." She added to Hague's troubles by speaking out strongly, and to applause, against the Euro. Hague himself, although a witty performer at Prime Minister's Questions, was dogged in the press and reminded of his speech, given at the age of 16, at the 1977 Conservative Conference. "The Sun" newspaper only added to the Conservatives' woes by backing Labour for a second consecutive election, having called Hague a "dead parrot" during the Conservative Party's conference in October 1998. The Conservatives campaigned on a strongly right-wing platform, emphasising the issues of Europe, immigration and tax, the fabled "Tebbit Trinity". They also released a poster showing a heavily pregnant Tony Blair, stating "Four years of Labour and he still hasn't delivered". However, Labour countered by asking where the proposed tax cuts were going to come from, and decried the Tory policy as "cut here, cut there, cut everywhere", in reference to the widespread belief that the Conservatives would make major cuts to public services in order to fund tax cuts. Labour also capitalised on the strong economic conditions of the time, and another major line of attack (primarily directed towards Michael Portillo, now Shadow Chancellor after returning to Parliament via a by-election) was to warn of a return to "Tory Boom and Bust" under a Conservative administration. Charles Kennedy contested his first election as leader of the Liberal Democrats.

During the election, Sharron Storer, a resident of Birmingham, criticised Prime Minister Tony Blair in front of television cameras about conditions in the National Health Service. The widely televised incident happened on 16 May during a campaign visit by Blair to the Queen Elizabeth Hospital in Birmingham. Sharron Storer's partner, Keith Sedgewick, a cancer patient with non-Hodgkin lymphoma and therefore highly susceptible to infection, was being treated at the time in the bone marrow unit, but no bed could be found for him and he was transferred to the casualty unit for his first 24 hours. On the evening of the same day, Deputy Prime Minister John Prescott punched a protester after being hit by an egg on his way to an election rally in Rhyl, North Wales.

Results. The election result was effectively a repeat of 1997, as the Labour Party retained an overwhelming majority, with the BBC announcing the victory at 02:58 in the early morning of 8 June. Labour had presided over relatively serene political, economic, and social conditions; the feeling of prosperity in the United Kingdom had been maintained into the new millennium, and Labour would have a free hand to assert its ideals in the subsequent parliament. Despite the victory, voter apathy was a major issue, as turnout fell below 60%, 12 percentage points down on 1997. All three of the main parties saw their total votes fall, with Labour's total vote dropping by 2.8 million on 1997, the Conservatives' by 1.3 million, and the Liberal Democrats' by 428,000.
Some suggested this dramatic fall was a sign of the general acceptance of the status quo and the likelihood of Labour's majority remaining unassailable. For the Conservatives, the huge loss they had sustained in 1997 was repeated. Despite gaining nine seats, they lost seven to the Liberal Democrats, and one even to Labour (South Dorset). William Hague was quick to announce his resignation, doing so at 07:44 outside the Conservative Party headquarters. Some believed that Hague had been unlucky; although most considered him to be a talented orator and an intelligent statesman, he had come up against the charismatic Tony Blair at the peak of his political career, and it was no surprise that little progress was made in reducing Labour's majority after a relatively smooth parliament. Staying at what they considered rock bottom, however, showed that the Conservatives had failed to improve their negative public image, had remained somewhat disunited over Europe, and had not regained the trust that they had lost in the 1990s. Hague's focus on the "Save The Pound" campaign narrative had failed to gain any traction; Labour's successful counter-tactic was to remain deliberately vague over the issue of future monetary union, saying that the UK would only consider joining the Eurozone "when conditions were right". In Scotland, despite taking one seat from the Scottish National Party, the collapse in the Conservative vote continued. The party failed to retake its former strongholds as the Nationalists consolidated their grip on the north-east of the country.

The Liberal Democrats could point to steady progress under their new leader, Charles Kennedy, gaining more seats than the two main parties (albeit only six overall) and maintaining the performance of a pleasing 1997 election, in which the party had more than doubled its number of seats from 20 to 46. While they had yet to become electable as a government, they underlined their growing reputation as a worthwhile alternative to Labour and the Conservatives, offering plenty of debate in Parliament and representing more than a mere protest vote. The SNP failed to gain any new seats and lost a seat to the Conservatives by just 79 votes. In Wales, Plaid Cymru both gained a seat from Labour and lost one to them. In Northern Ireland the Ulster Unionists, despite gaining North Down, lost five other seats.

(Full results table omitted. All parties with more than 500 votes are shown in the table; seat gains reflect changes on the 1997 general election result. Two seats had changed hands in by-elections in the intervening period.)

The results of the election give a Gallagher index of disproportionality of 17.74 (see the sketch below).

Voter Demographics. MORI interviewed 18,657 adults in Great Britain after the election, which suggested the following demographic breakdown...
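Returning to the Gallagher index cited above: it summarizes how disproportionately votes translated into seats, as the square root of half the summed squared differences between each party's vote share and seat share. The Python sketch below uses approximate 2001 shares for the three main parties plus a combined "others" bucket, so it lands near, but not exactly at, the 17.74 computed from the full party-by-party results.

from math import sqrt

def gallagher_index(vote_pct, seat_pct):
    # Least-squares index: sqrt(0.5 * sum((v_i - s_i)^2)).
    return sqrt(0.5 * sum((v - s) ** 2 for v, s in zip(vote_pct, seat_pct)))

# Approximate 2001 shares (%): Labour, Conservative, Liberal Democrat, others.
votes = [40.7, 31.7, 18.3, 9.3]
seats = [62.5, 25.2, 7.9, 4.4]
print(round(gallagher_index(votes, seats), 2))  # ~18.0 with this coarse grouping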
3978
7903804
https://en.wikipedia.org/wiki?curid=3978
Book of Mormon
The Book of Mormon is a religious text of the Latter Day Saint movement, first published in 1830 by Joseph Smith as The Book of Mormon: An Account Written by the Hand of Mormon upon Plates Taken from the Plates of Nephi. The book is one of the earliest and most well-known unique writings of the Latter Day Saint movement. The denominations of the Latter Day Saint movement typically regard the text primarily as scripture (sometimes as one of four standard works) and secondarily as a record of God's dealings with ancient inhabitants of the Americas. The majority of Latter Day Saints believe the book to be a record of real-world history, with views among Latter Day Saint denominations ranging from an inspired record of scripture to the linchpin or "keystone" of their religion. Independent archaeological, historical, and scientific communities have discovered little evidence to support the existence of the civilizations described therein. Characteristics of the language and content point toward a nineteenth-century origin of the Book of Mormon. Various academics and apologetic organizations connected to the Latter Day Saint movement nevertheless argue that the book is an authentic account of the pre-Columbian world.

The Book of Mormon has a number of doctrinal discussions on subjects such as the fall of Adam and Eve, the nature of the Christian atonement, eschatology, agency, priesthood authority, redemption from physical and spiritual death, the nature and conduct of baptism, the age of accountability, the purpose and practice of communion, personalized revelation, economic justice, the anthropomorphic and personal nature of God, the nature of spirits and angels, and the organization of the latter day church. The pivotal event of the book is an appearance of Jesus Christ in the Americas shortly after his resurrection. Common teachings of the Latter Day Saint movement hold that the Book of Mormon fulfills numerous biblical prophecies by ending a global apostasy and signaling a restoration of the Christian gospel. The Book of Mormon is divided into smaller books, usually titled after individuals named as primary authors, and in most versions is divided into chapters and verses. Its English text imitates the style of the King James Version of the Bible. The Book of Mormon has been fully or partially translated into at least 112 languages.

Origin. According to Smith's account and the book's narrative, the book was originally engraved in otherwise unknown characters on golden plates by ancient prophets; the last prophet to contribute to the book, Moroni, had buried it in what is present-day Manchester, New York, and then appeared in a vision to Smith in 1827, revealing the location of the plates and instructing him to translate the plates into English. A different view is that Smith authored the book, drawing on material and ideas from his contemporary 19th-century environment, rather than translating an ancient record.

Conceptual emergence. According to Joseph Smith, in 1823, when he was seventeen years old, an angel of God named Moroni appeared to him and said that a collection of ancient writings was buried in a nearby hill in present-day Wayne County, New York, engraved on golden plates by ancient prophets. The writings were said to describe a people whom God had led from Jerusalem to the Western hemisphere 600 years before Jesus's birth.
Smith said this vision occurred on the evening of September 21, 1823, and that on the following day, via divine guidance, he located the burial place of the plates on this hill and was instructed by Moroni to meet him at the same hill on September 22 of the following year to receive further instructions, visits which were repeated annually until 1827. Smith told his entire immediate family about this angelic encounter by the next night, and his brother William reported that the family "believed all he [Joseph Smith] said" about the angel and plates. Smith and his family reminisced that as part of what Smith believed was angelic instruction, Moroni provided Smith with a "brief sketch" of the "origin, progress, civilization, laws, governments... righteousness and iniquity" of the "aboriginal inhabitants of the country" (referring to the Nephites and Lamanites who figure in the Book of Mormon's primary narrative). Smith sometimes shared what he said he had learned through such angelic encounters with his family as well. In Smith's account, Moroni allowed him, accompanied by his wife Emma Hale Smith, to take the plates on September 22, 1827, four years after his initial visit to the hill, and directed him to translate them into English. Smith said the angel Moroni strictly instructed him not to let anyone else see the plates without divine permission. Neighbors, some of whom had collaborated with Smith in earlier treasure-hunting enterprises, tried several times to steal the plates from Smith while he and his family guarded them.

Dictation. As Smith and contemporaries reported, the English manuscript of the Book of Mormon was produced as scribes wrote down Smith's dictation in multiple sessions between 1828 and 1829. The dictation of the extant Book of Mormon was completed in 1829 in between 53 and 74 working days. Descriptions of the way in which Smith dictated the Book of Mormon vary. Smith himself called the Book of Mormon a translated work, but in public he generally described the process itself only in vague terms, saying he translated by a miraculous gift from God. According to some accounts from his family and friends at the time, early on, Smith copied characters off the plates as part of a process of learning to translate an initial corpus. For the majority of the process, Smith dictated the text by voicing strings of words which a scribe would write down; after the scribe confirmed they had finished writing, Smith would continue. Smith, his first scribe Martin Harris, and his wife Emma all claimed that Smith dictated by translating the ancient text through the use of the Urim and Thummim that accompanied the plates, said to have been prepared by the Lord for the purpose of translating. This "Urim and Thummim," named after the biblical divination stones and also called the "Nephite interpreters," was described as two clear seer stones, bound together by a metal rim and attached to a breastplate, which Smith said he could look through in order to translate. Other accounts say that Smith used a seer stone he already possessed, placed inside a hat to darken the area around the stone. Beginning around 1832, both the interpreters and Smith's own seer stone were at times referred to as the "Urim and Thummim", and Smith sometimes used the term interchangeably with "spectacles". Emma Smith's and David Whitmer's accounts describe Smith using the interpreters while dictating for Martin Harris's scribing and switching to using only his seer stone(s) in subsequent translation.
Religious studies scholar Grant Hardy summarizes Smith's known dictation process as follows: "Smith looked at a seer stone placed in his hat and then dictated the text of the Book of Mormon to scribes". Early on, Smith sometimes separated himself from his scribe with a blanket between them, as he did while Martin Harris, a neighbor, scribed his dictation in 1828. At other points in the process, such as when Oliver Cowdery or Emma Smith scribed, the plates were left covered up but in the open. During some dictation sessions the plates were entirely absent.

In 1828, while scribing for Smith, Harris, at the prompting of his wife Lucy Harris, repeatedly asked Smith to loan him the manuscript pages of the dictation thus far. Smith reluctantly acceded to Harris's requests. Within weeks, Harris lost the manuscript, which was most likely stolen by a member of his extended family. After the loss, Smith recorded that he lost the ability to translate and that Moroni had taken back the plates, to be returned only after Smith repented. Smith later stated that God allowed him to resume translation, but directed that he begin where he left off (in what is now called the Book of Mosiah), without retranslating what had been in the lost manuscript. Smith recommenced some Book of Mormon dictation between September 1828 and April 1829, with his wife Emma Smith scribing with occasional help from his brother Samuel Smith, though little transcription was accomplished. In April 1829, Oliver Cowdery met Smith and, believing Smith's account of the plates, began scribing for Smith in what became a "burst of rapid-fire translation". In May, Joseph and Emma Smith, along with Cowdery, moved in with the Whitmer family, sympathetic neighbors, in an effort to avoid interruptions as they proceeded with producing the manuscript. While living with the Whitmers, Smith said he received permission to allow eleven specific others to see the uncovered golden plates and, in some cases, handle them. Their written testimonies are known as the Testimony of Three Witnesses, who described seeing the plates in a visionary encounter with an angel, and the Testimony of Eight Witnesses, who described handling the plates as displayed by Smith. Statements signed by them have been published in most editions of the Book of Mormon. In addition to Smith and these eleven, several others described encountering the plates by holding or moving them wrapped in cloth, although without seeing the plates themselves. Their accounts of the plates' appearance tend to describe a golden-colored compilation of thin metal sheets (the "plates") bound together by wires in the shape of a book. The manuscript was completed in June 1829. E. B. Grandin published the Book of Mormon in Palmyra, New York, and it went on sale in his bookstore on March 26, 1830. Smith said he returned the plates to Moroni upon the publication of the book.

Views on composition. Multiple theories of naturalistic composition have been proposed. In the twenty-first century, leading naturalistic interpretations of Book of Mormon origins hold that Smith authored it himself, whether consciously or subconsciously, and simultaneously sincerely believed the Book of Mormon was an authentic sacred history. Most adherents of the Latter Day Saint movement consider the Book of Mormon an authentic historical record, translated by Smith from actual ancient plates through divine revelation.
The Church of Jesus Christ of Latter-day Saints (LDS Church), the largest Latter Day Saint denomination, maintains this as its official position.

Methods. The Book of Mormon as a written text is the transcription of what scholars Grant Hardy and William L. Davis call an "extended oral performance", one which Davis considers "comparable in length and magnitude to the classic oral epics, such as Homer's "Iliad" and "Odyssey"". Eyewitnesses said Smith never referred to notes or other documents while dictating, and Smith's followers and those close to him insisted he lacked the writing and narrative skills necessary to consciously produce a text like the Book of Mormon. Some naturalistic interpretations have therefore compared Smith's dictation to automatic writing arising from the subconscious. However, Ann Taves considers this description problematic for overemphasizing "lack of control" when historical and comparative study instead suggests Smith "had a highly focused awareness" and "a considerable degree of control over the experience" of dictation. Independent scholar William L. Davis posits that after believing he had encountered an angel in 1823, Smith "carefully developed his ideas about the narratives" of the Book of Mormon for several years by making outlines, whether mental or on private notes, until he began dictating in 1828. Smith's oral recitations about Nephites to his family could have been an opportunity to work out ideas and practice oratory, and he received some formal education as a lay Methodist exhorter. In this interpretation, Smith believed the dictation he produced reflected an ancient history, but he assembled the narrative in his own words.

Inspirations. Early observers, presuming Smith incapable of writing something as long or as complex as the Book of Mormon, often searched for a possible source he might have plagiarized. In the nineteenth century, a popular hypothesis was that Smith collaborated with Sidney Rigdon to plagiarize an unpublished manuscript written by Solomon Spalding and turn it into the Book of Mormon. Historians have considered the Spalding manuscript source hypothesis debunked since 1945, when Fawn M. Brodie thoroughly disproved it in her critical biography of Smith. Historians since the early twentieth century have suggested Smith was inspired by "View of the Hebrews", an 1823 book which propounded the Hebraic Indian theory, since both associate American Indians with ancient Israel and describe clashes between two dualistically opposed civilizations ("View" as speculation about American Indian history and the Book of Mormon as its narrative). Whether or not "View" influenced the Book of Mormon is the subject of debate. A pseudo-anthropological treatise, "View" presented allegedly empirical evidence in support of its hypothesis. The Book of Mormon is written as a narrative, and Christian themes predominate rather than supposedly Indigenous parallels. Additionally, while "View" supposes that Indigenous American peoples descended from the Ten Lost Tribes, the Book of Mormon actively rejects the hypothesis; the peoples in its narrative have an "ancient Hebrew" origin but do not descend from the lost tribes. The book ultimately heavily revises, rather than borrows, the Hebraic Indian theory. The Book of Mormon may also creatively reconfigure, without plagiarizing, parts of the popular 1678 Christian allegory "Pilgrim's Progress" written by John Bunyan.
For example, the martyr narrative of Abinadi in the Book of Mormon shares a complex matrix of descriptive language with Faithful's martyr narrative in "Progress". Some other Book of Mormon narratives, such as the dream Lehi has in the book's opening, also resemble creative reworkings of "Progress" story arcs as well as elements of other works by Bunyan, such as "The Holy War" and "Grace Abounding". Historical scholarship also suggests it is plausible for Smith to have produced the Book of Mormon himself, based on his knowledge of the Bible and enabled by a democratizing religious culture. Content. Presentation. The style of the Book of Mormon's English text resembles that of the King James Version of the Bible. Novelist Jane Barnes considered the book "difficult to read", and according to religious studies scholar Grant Hardy, the language is an "awkward, repetitious form of English" with a "nonmainstream literary aesthetic". Narratively and structurally, the book is complex, with multiple arcs that diverge and converge in the story while contributing to the book's overarching plot and themes. Historian Daniel Walker Howe concluded in his own appraisal that the Book of Mormon "is a powerful epic written on a grand scale" and "should rank among the great achievements of American literature". The Book of Mormon presents its text through multiple narrators explicitly identified as figures within the book's own narrative. Narrators describe reading, redacting, writing, and exchanging records. The book also embeds sermons, given by figures from the narrative, throughout the text, and these internal orations make up just over 40 percent of the Book of Mormon. Periodically, the book's primary narrators reflexively describe themselves creating the book in a move that is "almost postmodern" in its self-consciousness. Historian Laurie Maffly-Kipp explains that "the mechanics of editing and transmitting thereby become an important feature of the text". Barnes calls the Book of Mormon a "scripture about writing and its influence in a post-modern world of texts" and "a statement about different voices, and possibly the problem of voice, in sacred literature". Organization. The Book of Mormon is organized as a compilation of smaller books, each named after its main named narrator or a prominent leader, beginning with the First Book of Nephi (1 Nephi) and ending with the Book of Moroni. The book's sequence is primarily chronological based on the narrative content of the book. Exceptions include the Words of Mormon and the Book of Ether. The Words of Mormon contains editorial commentary by Mormon. The Book of Ether is presented as the narrative of an earlier group of people who had come to the American continent before the immigration described in 1 Nephi. First Nephi through Omni are written in first-person narrative, as are Mormon and Moroni. The remainder of the Book of Mormon is written in third-person historical narrative, said to be compiled and abridged by Mormon (with Moroni abridging the Book of Ether and writing the latter part of Mormon and the Book of Moroni). Most modern editions of the book have been divided into chapters and verses. Most editions of the book also contain supplementary material, including the "Testimony of Three Witnesses" and the "Testimony of Eight Witnesses" which appeared in the original 1830 edition and every official Latter-day Saint edition thereafter. Narrative. The books from First Nephi to Omni are described as being from "the small plates of Nephi". 
This account begins in ancient Jerusalem around 600 BC, telling the story of a man named Lehi, his family, and several others as they are led by God from Jerusalem shortly before the fall of that city to the Babylonians. The book describes their journey across the Arabian peninsula, and then to a "promised land", presumably an unspecified location in the Americas, by ship. These books recount the group's dealings from approximately 600 BC to about 130 BC, during which time the community grows and splits into two main groups, called Nephites and Lamanites, that frequently war with each other throughout the rest of the narrative. Following this section is the Words of Mormon, a small book that introduces Mormon, the principal narrator for the remainder of the text. The narration describes the ensuing content (the Book of Mosiah through chapter 7 of the internal Book of Mormon) as being Mormon's abridgment of "the large plates of Nephi", existing records that detailed the people's history up to Mormon's own life. Part of this portion is the Book of Third Nephi, which describes a visit by Jesus to the people of the Book of Mormon sometime after his resurrection and ascension; historian John Turner calls this episode "the climax of the entire scripture". After this visit, the Nephites and Lamanites unite in a harmonious, peaceful society which endures for several generations before breaking into warring factions again, and in this conflict the Nephites are destroyed while the Lamanites emerge victorious. In the narrative, Mormon, a Nephite, lives during this period of war, and he dies before finishing his book. His son Moroni takes over as narrator, describing himself taking his father's record into his charge and finishing its writing. Before the very end of the book, Moroni describes making an abridgment (called the Book of Ether) of a record from a much earlier people. This subsequent subplot describes a group of families whom God leads away from the Tower of Babel after it falls. Led by a man named Jared and his brother, described as a prophet of God, these Jaredites travel to the "promised land" and establish a society there. After successive violent reversals between rival monarchs and factions, their society collapses around the time that Lehi's family arrives in the promised land further south. The narrative returns to Moroni's present (Book of Moroni), in which he transcribes a few short documents, meditates on and addresses the book's audience, finishes the record, and buries the plates upon which it is narrated to be inscribed, before implicitly dying as his father did, in what allegedly would have been the early 400s AD.

Teachings. Jesus. On its title page, the Book of Mormon describes its central purpose as being the "convincing of the Jew and Gentile that Jesus is the Christ, the Eternal God, manifesting himself unto all nations." Although much of the Book of Mormon's internal chronology takes place prior to the birth of Jesus, prophets in the book frequently see him in vision and preach about him, and the people in the narrative worship Jesus as "pre-Christian Christians." For example, the book's first narrator Nephi describes having a vision of the birth, ministry, and death of Jesus, said to have taken place nearly 600 years prior to Jesus' birth. Late in the book, a narrator refers to converted peoples as "children of Christ".
By depicting ancient prophets and peoples as familiar with Jesus as a Savior, the Book of Mormon universalizes Christian salvation as being accessible across all time and places. By implying that even more ancient peoples were familiar with Jesus Christ, the book presents a "polygenist Christian history" in which Christianity has multiple origins. In the climax of the book, Jesus visits some early inhabitants of the Americas after his resurrection in an extended bodily theophany. During this ministry, he reiterates many teachings from the New Testament, re-emphasizes salvific baptism, and introduces the ritual consumption of bread and wine "in remembrance of [his] body", a teaching that became the basis for modern Latter-day Saints' "memorialist" view of their sacrament ordinance (analogous to communion). Jesus's ministry in the Book of Mormon resembles his portrayal in the Gospel of John, as Jesus similarly teaches without parables and preaches faith and obedience as a central message. Barnes argues that the Book of Mormon depicts Jesus as a "revolutionary new character" different from that of the New Testament in a portrayal that is "constantly, subtly revising the Christian tradition". According to historian John Turner, the Book of Mormon's depiction provides "a twist" on Christian trinitarianism, as Jesus in the Book of Mormon is distinct from God the Father—as he prays to God during a post-resurrection visit with the Nephites—while also emphasizing that Jesus and God have "divine unity," with other parts of the book calling Jesus "the Father and the Son". Beliefs among the churches of the Latter Day Saint movement range between social trinitarianism (such as among Latter-day Saints) and traditional trinitarianism (such as in Community of Christ). Plan of salvation. The Christian concept of God's plan of salvation for humanity is a frequently recurring theme of the Book of Mormon. While the Bible does not directly outline a plan of salvation, the Book of Mormon explicitly refers to the concept thirty times, using a variety of terms such as "plan of salvation", "plan of happiness", and "plan of redemption". The Book of Mormon's plan of salvation doctrine describes life as a probationary time for people to learn the gospel of Christ through revelation given to prophets and have the opportunity to choose whether or not to obey God. Jesus' atonement then makes repentance possible, enabling the righteous to enter a heavenly state after a final judgment. Although most of Christianity traditionally considers the fall of man a negative development for humanity, the Book of Mormon instead portrays the fall as a foreordained step in God's plan of salvation, necessary to securing human agency, eventual righteousness, and bodily joy through physical experience. This positive interpretation of the Adam and Eve story contributes to the Book of Mormon's emphasis "on the importance of human freedom and responsibility" to choose salvation. Dialogic revelation. In the Book of Mormon, revelation from God typically manifests as a dialogue between God and persons, characterizing deity as an anthropomorphic being who hears prayers and provides direct answers to questions. Multiple narratives in the book portray revelation as a dialogue in which petitioners and deity engage one another in a mutual exchange in which God's contributions originate from outside the mortal recipient. 
The Book of Mormon also emphasizes regular prayer as a significant component of devotional life, depicting it as a central means through which such dialogic revelation can take place. While the Old Testament of the Christian Bible links revelation specifically to prophetic authority, the Book of Mormon's portrayal democratizes the idea of revelation, depicting it as the right of every person. Figures such as Nephi and Ammon receive visions and revelatory direction prior to or without ever becoming prophets, and Laman and Lemuel are rebuked for hesitating to pray for revelation. Also in contrast with traditional Christian conceptions of revelation is the Book of Mormon's broader range of revelatory content. In the Book of Mormon, figures petition God for revelatory answers to doctrinal questions and ecclesiastical crises as well as for inspiration to guide hunts, military campaigns, and sociopolitical decisions. The Book of Mormon depicts revelation as an active and sometimes laborious experience. For example, the Book of Mormon's Brother of Jared learns to act not merely as a petitioner with questions but also as an interlocutor with "a specific proposal" for God to consider as part of a guided process of miraculous assistance.

Apocalyptic reversal and Indigenous or nonwhite liberation. The Book of Mormon's "eschatological content" lends itself to a "theology of Native and/or nonwhite liberation", in the words of American studies scholar Jared Hickman. The Book of Mormon's narrative content includes prophecies describing how although Gentiles (generally interpreted as being whites of European descent) would conquer the Indigenous residents of the Americas (imagined in the Book of Mormon as being a remnant of descendants of the Lamanites), this conquest would only precede the Native Americans' revival and resurgence as a God-empowered people. The Book of Mormon narrative's prophecies envision a Christian eschaton in which Indigenous people are destined to rise up as the true leaders of the continent, manifesting in a new utopia to be called "Zion". White Gentiles would have an opportunity to repent of their sins and join themselves to the Indigenous remnant, but if white Gentile society fails to do so, the Book of Mormon's content foretells a future "apocalyptic reversal" in which Native Americans will destroy white American society and replace it with a godly, Zionic society. This prophecy commanding whites to repent and become supporters of American Indians even bears "special authority as an utterance of Jesus" Christ himself during a messianic appearance at the book's climax. Furthermore, the Book of Mormon's "formal logic" criticizes the theological supports for racism and white supremacy prevalent in the antebellum United States by enacting a textual apocalypse. The book's apparently white Nephite narrators fail to recognize and repent of their own sinful, hubristic prejudices against the seemingly darker-skinned Lamanites in the narrative. In their pride, the Nephites repeatedly backslide into producing oppressive social orders, such that the book's narrative performs a sustained critique of colonialist racism. The book concludes with its own narrative implosion, in which the Lamanites suddenly prevail over and destroy the Nephites in a literary turn seemingly designed to jar the average antebellum white American reader into recognizing the "utter inadequacy of his or her rac(ial)ist common sense".

Religious significance. Early Mormonism.
Adherents of the early Latter Day Saint movement frequently read the Book of Mormon as a corroboration of and supplement to the Bible, persuaded by its resemblance to the King James Version's form and language. For these early readers, the Book of Mormon confirmed the Bible's scriptural veracity and resolved then-contemporary theological controversies the Bible did not seem to adequately address, such as the appropriate mode of baptism, the role of prayer, and the nature of the Christian atonement. Early church administrative design also drew inspiration from the Book of Mormon. Oliver Cowdery and Joseph Smith, respectively, used the depiction of the Christian church in the Book of Mormon as a template for their "Articles of the Church" and "Articles and Covenants of the Church". The Book of Mormon was also significant in the early movement as a sign, proving Joseph Smith's claimed prophetic calling, signaling the "restoration of all things", and ending what was believed to have been an apostasy from true Christianity. Early Latter Day Saints tended to interpret the Book of Mormon through a millenarian lens and consequently believed the book portended Christ's imminent Second Coming. During the movement's first years, observers identified converts with the new scripture they propounded, nicknaming them "Mormons". Early Mormons also cultivated their own individual relationships with the Book of Mormon. Reading the book became an ordinary habit for some, and some would reference passages by page number in correspondence with friends and family. Historian Janiece Johnson explains that early converts' "depth of Book of Mormon usage is illustrated most thoroughly through intertextuality—the pervasive echoes, allusions, and expansions on the Book of Mormon text that appear in the early converts' own writings." Early Latter Day Saints alluded to Book of Mormon narratives, incorporated Book of Mormon turns of phrase into their writing styles, and even gave their children Book of Mormon names.

Joseph Smith. Like many other early adherents of the Latter Day Saint movement, Smith referenced Book of Mormon scriptures in his preaching relatively infrequently and cited the Bible more often. In 1832, Smith dictated a revelation that condemned the "whole church" for treating the Book of Mormon lightly, although even after doing so Smith still referenced the Book of Mormon less often than the Bible. Nevertheless, in 1841 Joseph Smith characterized the Book of Mormon as "the most correct of any book on earth, and the keystone of [the] religion". Although Smith quoted the book infrequently, he accepted the Book of Mormon narrative world as his own.

The Church of Jesus Christ of Latter-day Saints. The Church of Jesus Christ of Latter-day Saints (LDS Church) accepts the Book of Mormon as one of the four sacred texts in its scriptural canon called the "standard works". Church leaders and publications have "strongly affirm[ed]" Smith's claims of the book's significance to the faith. According to the church's "Articles of Faith" (a document written by Joseph Smith in 1842 and canonized by the church as scripture in 1880), members "believe the Bible to be the word of God as far as it is translated correctly," and they "believe the Book of Mormon to be the word of God," without qualification.
In their evangelism, Latter-day Saint leaders and missionaries have long emphasized the book's place in a causal chain which held that if the Book of Mormon was "verifiably true revelation of God," then it justified Smith's claims to prophetic authority to restore the New Testament church. Latter-day Saints have also long believed the Book of Mormon's contents confirm and fulfill biblical prophecies. For example, "many Latter-day Saints" consider the biblical patriarch Jacob's description of his son Joseph as "a fruitful bough... whose branches run over a wall" a prophecy of Lehi's posterity—described as descendants of Joseph—overflowing into the New World. Latter-day Saints also believe the Bible prophesies of the Book of Mormon as an additional testament to God's dealings with humanity. In the 1980s, the church placed greater emphasis on the Book of Mormon as a central text of the faith. In 1982, it added the subtitle "Another Testament of Jesus Christ" to its official editions of the Book of Mormon. Ezra Taft Benson, the church's thirteenth president (1985–1994), especially emphasized the Book of Mormon. Referencing Smith's 1832 revelation, Benson said the church remained under condemnation for treating the Book of Mormon lightly. Since the late 1980s, Latter-day Saint leaders have encouraged church members to read from the Book of Mormon daily, and in the twenty-first century, many Latter-day Saints use the book in private devotions and family worship. Literary scholar Terryl Givens observes that for Latter-day Saints, the Book of Mormon is "the principal scriptural focus", a "cultural touchstone", and "absolutely central" to worship, including in weekly services, Sunday School, youth seminaries, and more. Approximately 90 to 95% of all Book of Mormon printings have been affiliated with the church. As of October 2020, it had published more than 192 million copies of the Book of Mormon. Community of Christ. The Community of Christ (formerly the Reorganized Church of Jesus Christ of Latter Day Saints or RLDS Church) views the Book of Mormon as scripture which provides an additional witness of Jesus Christ in support of the Bible. The Community of Christ publishes two versions of the book. The first is the Authorized Edition, first published by the then-RLDS Church in 1908, whose text is based on comparing the original printer's manuscript and the 1837 Second Edition (or "Kirtland Edition") of the Book of Mormon. Its content is similar to the Latter-day Saint edition of the Book of Mormon, but the versification is different. The Community of Christ also publishes a "New Authorized Version" (also called a "reader's edition"), first released in 1966, which attempts to modernize the language of the text by removing archaisms and standardizing punctuation. Use of the Book of Mormon varies among Community of Christ membership. The church describes it as scripture and includes references to the Book of Mormon in its official lectionary. In 2010, representatives told the National Council of Churches that "the Book of Mormon is in our DNA". The book remains a symbol of the denomination's belief in continuing revelation from God. Nevertheless, its usage in North American congregations declined between the mid-twentieth and twenty-first centuries.
Community of Christ theologian Anthony Chvala-Smith describes the Book of Mormon as being akin to a "subordinate standard" relative to the Bible, giving the Bible priority over the Book of Mormon, and the denomination does not emphasize the book as part of its self-conceived identity. Book of Mormon use varies in what David Howlett calls "Mormon heritage regions": North America, Western Europe, and French Polynesia. Outside these regions, where there are tens of thousands of members, congregations almost never use the Book of Mormon in their worship, and they may be entirely unfamiliar with it. During the same period, the Community of Christ moved away from emphasizing the Book of Mormon as an authentic record of a historical past. By the late twentieth century, church president W. Grant McMurray had left open the possibility that the book was nonhistorical. McMurray reiterated this ambivalence in 2001, reflecting, "The proper use of the Book of Mormon as sacred scripture has been under wide discussion in the 1970s and beyond, in part because of long-standing questions about its historical authenticity and in part because of perceived theological inadequacies, including matters of race and ethnicity." Some in Community of Christ remain interested in prioritizing the Book of Mormon in religious practice and have variously responded to these developments by leaving the denomination or by striving to re-emphasize the book. When a resolution was submitted at the 2007 Community of Christ World Conference to "reaffirm the Book of Mormon as a divinely inspired record", church president Stephen M. Veazey ruled it out of order. He stated, "while the Church affirms the Book of Mormon as scripture, and makes it available for study and use in various languages, we do not attempt to mandate the degree of belief or use. This position is in keeping with our longstanding tradition that belief in the Book of Mormon is not to be used as a test of fellowship or membership in the church." Greater Latter Day Saint movement. Since the death of Joseph Smith in 1844, there have been approximately seventy different churches that have been part of the Latter Day Saint movement, fifty of which were extant as of 2012. Religious studies scholar Paul Gutjahr explains that "each of these sects developed its own special relationship with the Book of Mormon". For example, James Strang, who led a denomination in the nineteenth century, reenacted Smith's production of the Book of Mormon by claiming in the 1840s and 1850s to receive and translate new scriptures engraved on metal plates, which became the Voree Plates and the Book of the Law of the Lord. William Bickerton led another denomination, The Church of Jesus Christ of Latter Day Saints (today called The Church of Jesus Christ), which accepted the Book of Mormon as scripture alongside the Bible although it did not canonize other Latter Day Saint religious texts like the Doctrine and Covenants and Pearl of Great Price. The contemporary Church of Jesus Christ continues to consider the "Bible and Book of Mormon together" to be "the foundation of [their] faith and the building blocks of" their church. Nahua-Mexican Latter-day Saint Margarito Bautista believed the Book of Mormon told an Indigenous history of Mexico before European contact, and he identified himself as a "descendant of Father Lehi", a prophet in the Book of Mormon.
Bautista believed the Book of Mormon revealed that Indigenous Mexicans were a chosen remnant of biblical Israel and therefore had a sacred destiny to someday lead the church spiritually and the world politically. To promote this belief, he wrote a theological treatise synthesizing Mexican nationalism and Book of Mormon content, published in 1935. Anglo-American LDS Church leadership suppressed the book and eventually excommunicated Bautista, and he went on to found a new Mormon denomination. Officially named "El Reino de Dios en su Plenitud", the denomination continues to exist in Colonia Industrial, Ozumba, Mexico as a church with several hundred members who call themselves "Mormons". Separate editions of the Book of Mormon have been published by a number of churches in the Latter Day Saint movement, along with private individuals and organizations not endorsed by any specific denomination. Historicity. Mainstream views. Mainstream archaeological, historical, and scientific communities do not consider the Book of Mormon an ancient record of actual historical events. Principally, the content of the Book of Mormon does not correlate with archaeological, genetic, or linguistic evidence about the past of the Americas or the ancient Near East. Archaeology. There is no accepted correlation between locations described in the Book of Mormon and known American archaeological sites. Additionally, the Book of Mormon's narrative refers to the presence of animals, plants, metals, and technologies of which archaeological and scientific studies have found little or no evidence in post-Pleistocene, pre-Columbian America. Such anachronistic references include crops such as barley and wheat; silk; livestock like cattle, donkeys, horses, oxen, and sheep; and metals and technology such as brass, steel, the wheel, and chariots. Mesoamerica is the preferred setting for the Book of Mormon among many apologists who advocate a limited geography model of Book of Mormon events. However, there is no evidence accepted by non-Mormons of cultural influence in Mesoamerican societies from anything described in the Book of Mormon. Genetics. Until the late twentieth century, most adherents of the Latter Day Saint movement who affirmed Book of Mormon historicity believed the people described in the Book of Mormon text were the exclusive ancestors of all Indigenous peoples in the Americas. DNA evidence has shown this to be impossible: no DNA evidence links any Native American group to ancestry from the ancient Near East, as a belief in Book of Mormon peoples as the exclusive ancestors of Indigenous Americans would require. Instead, detailed genetic research indicates that Indigenous Americans' ancestry traces back to Asia, and reveals numerous details about the movements and settlements of ancient Americans which are either lacking in, or contradicted by, the Book of Mormon narrative. Linguistics and intertextuality. There are no widely accepted linguistic connections between any Native American languages and Near Eastern languages, and "the diversity of Native American languages could not have developed from a single origin in the time frame" that would be necessary to validate a hemispheric view of Book of Mormon historicity. The Book of Mormon states it was written in a language called "Reformed Egyptian", clashing with Book of Mormon peoples' purported origin as the descendants of a family from the Kingdom of Judah, where inhabitants would have communicated in Hebrew, not Egyptian.
There are no known examples of "Reformed Egyptian". The Book of Mormon also includes excerpts from and demonstrates intertextuality with portions of the biblical Book of Isaiah whose widely accepted periods of creation postdate the alleged departure of Lehi's family from Jerusalem circa 600 BCE. No Latter-day Saint arguments for a unified Isaiah or criticisms of the Deutero-Isaiah and Trito-Isaiah understandings have matched the extent of scholarship supporting later datings for authorship. Latter Day Saint views. Most adherents of the Latter Day Saint movement consider the Book of Mormon to be historically authentic and to describe events that genuinely took place in the ancient Americas. Within the Latter Day Saint movement there are several individuals and apologetic organizations, most of them lay Latter-day Saints or organizations composed of them, who seek to answer challenges to or advocate for Book of Mormon historicity. For example, in response to linguistics and genetics rendering long-popular hemispheric models of Book of Mormon geography impossible, many apologists posit Book of Mormon peoples could have dwelled in a limited geographical region while Indigenous peoples of other descents occupied the rest of the Americas. To account for anachronisms, apologists often suggest Smith's translation assigned familiar terms to unfamiliar ideas. In the context of a miraculously translated Book of Mormon, supporters affirm that anachronistic intertextuality may also have miraculous explanations. Some apologists strive to identify parallels between the Book of Mormon and biblical antiquity, such as the presence of several complex chiasmi resembling a literary form used in ancient Hebrew poetry and in the Old Testament. Others attempt to identify parallels between Mesoamerican archaeological sites and locations described in the Book of Mormon, such as John L. Sorenson, according to whom the Santa Rosa archaeological site resembles the city of Zarahemla in the Book of Mormon. When mainstream, non-Mormon scholars examine alleged parallels between the Book of Mormon and the ancient world, however, they typically deem them "chance based upon only superficial similarities" or "parallelomania", the result of having predetermined ideas about the subject. Despite the popularity and influence among Latter-day Saints of literature propounding Book of Mormon historicity, not all Mormons who affirm the book's historicity are persuaded by apologetic work. Some claim historicity more modestly, such as Richard Bushman's statement that "I read the Book of Mormon as informed Christians read the Bible. As I read, I know the arguments against the book's historicity, but I can't help feeling that the words are true and the events happened. I believe it in the face of many questions." Some denominations and adherents of the Latter Day Saint movement consider the Book of Mormon a work of inspired fiction akin to pseudepigrapha or biblical midrash that constitutes scripture by revealing true doctrine about God, similar to a common interpretation of the biblical Book of Job. Many in Community of Christ hold this view, and the leadership takes no official position on Book of Mormon historicity; among lay members, views vary. Some Latter-day Saints consider the Book of Mormon fictional, although this view is marginal in the denomination at large. Beliefs about geographical setting. Related to the question of the work's historicity is the question of where its events, if historical, would have occurred.
The LDS Church—the largest denomination in the Latter Day Saint movement—affirms the book as literally historical but does not make a formal claim of where precisely its events took place. Throughout much of the 19th and 20th centuries, Joseph Smith and others in the Latter Day Saint movement claimed that the book's events occurred broadly throughout North and South America. During the twentieth century, Latter-day Saint apologists backed away from this hemispheric belief in favor of believing the book's events took place in a more limited geographic setting within the Americas. This limited geography model gained broader currency in the LDS Church in the 1990s, and in the twenty-first century it is the most popular belief about Book of Mormon geography among those who believe it is historical. In 2006, the LDS Church revised its introduction to LDS editions of the Book of Mormon, which previously read that Lamanites were "the principal ancestors of the American Indians", to read that they are "among the ancestors of the American Indians". A movement among Latter-day Saints called Heartlanders believes that the events of the Book of Mormon took place specifically within what is presently the United States. Historical context. American Indian origins. Contact with the Indigenous peoples of the Americas prompted intellectual and theological controversy among many Europeans and European Americans who wondered how biblical narratives of world history could account for hitherto unrecognized Indigenous societies. From the seventeenth century through the early nineteenth, numerous European and American writers proposed that ancient Jews, perhaps through the Lost Ten Tribes, were the ancestors of Native Americans. One of the first books to suggest that Native Americans descended from Jews was written by Jewish-Dutch rabbi and scholar Manasseh ben Israel in 1650. Such curiosity and speculation about Indigenous origins persisted in the United States into the antebellum period when the Book of Mormon was published; as archaeologist Stephen Williams explains, "relating the American Indians to the Lost Tribes of Israel was supported by many" at the time of the book's production and publication. Although the Book of Mormon did not explicitly identify Native Americans as descendants of the diasporic Israelites in its narrative, nineteenth-century readers consistently drew that conclusion and considered the book theological support for believing American Indians were of Israelite descent. European-descended settlers took note of earthworks left behind by the Mound Builder cultures and had some difficulty believing that Native Americans, denigrated in racist colonial worldviews and whose numbers had been greatly reduced over the previous centuries, could have produced them. A common theory was that a more "civilized" and "advanced" people had built them but were overrun and destroyed by a more savage, numerous group. Some Book of Mormon content resembles this "mound-builder" genre pervasive in the nineteenth century. Historian Curtis Dahl wrote, "Undoubtedly the most famous and certainly the most influential of all Mound-Builder literature is the "Book of Mormon" (1830). Whether one wishes to accept it as divinely inspired or the work of Joseph Smith, it fits exactly into the tradition."
Historian Richard Bushman argues the Book of Mormon does not comfortably fit the Mound Builder genre because contemporaneous writings that speculated about Native origins "were explicit about recognizable Indian practices" whereas the "Book of Mormon deposited its people on some unknown shore—not even definitely identified as America—and had them live out their history" without including tropes that Euro-Americans stereotyped as Indigenous. Critique of the United States. The Book of Mormon can be read as a critique of the United States during Smith's lifetime. Historian of religion Nathan O. Hatch called the Book of Mormon "a document of profound social protest", and historian Bushman "found the book thundering no to the state of the world in Joseph Smith's time." In the Jacksonian era of antebellum America, class inequality was a major concern: fiscal downturns and the economy's transition from guild-based artisanship to private business sharpened economic inequality; poll taxes in New York limited access to the vote; and the culture of civil discourse and mores surrounding liberty allowed social elites to ignore and delegitimize populist participation in public discourse. Against the backdrop of these trends, the Book of Mormon condemned upper-class wealth as elitist, and it criticized social norms around public discourse that silenced critique of the country. Manuscripts. Joseph Smith dictated the Book of Mormon to several scribes over a period of 13 months, resulting in three manuscripts. Upon examination of pertinent historical records, the book appears to have been dictated over the course of 57 to 63 days within the 13-month period. The 116 lost pages contained the first portion of the Book of Lehi; it was lost after Smith loaned the original, uncopied manuscript to Martin Harris. The first completed manuscript, called the original manuscript, was completed using a variety of scribes. Portions of the original manuscript were also used for typesetting. In October 1841, the entire original manuscript was placed into the cornerstone of the Nauvoo House and sealed up until nearly forty years later, when the cornerstone was reopened. It was then discovered that much of the original manuscript had been destroyed by water seepage and mold. Surviving manuscript pages were handed out to various families and individuals in the 1880s. Only 28 percent of the original manuscript now survives, including a remarkable find of fragments from 58 pages in 1991. The majority of what remains of the original manuscript is now kept in the LDS Church's archives. The second completed manuscript, called the printer's manuscript, was a copy of the original manuscript produced by Oliver Cowdery and two other scribes. It is at this point that initial copyediting of the Book of Mormon was completed. Observations of the original manuscript show little evidence of corrections to the text. Shortly before his death in 1850, Cowdery gave the printer's manuscript to David Whitmer, another of the Three Witnesses. In 1903, the manuscript was bought from Whitmer's grandson by the Reorganized Church of Jesus Christ of Latter Day Saints, now known as the Community of Christ. On September 20, 2017, the LDS Church purchased the manuscript from the Community of Christ at a reported price of $35 million. The printer's manuscript is now the earliest surviving complete copy of the Book of Mormon. The manuscript was imaged in 1923 and has been made available for viewing online.
Critical comparisons between surviving portions of the manuscripts show an average of two to three changes per page from the original manuscript to the printer's manuscript. The printer's manuscript was further edited, adding paragraphing and punctuation to the first third of the text. The printer's manuscript was not used fully in the typesetting of the 1830 version of the Book of Mormon; portions of the original manuscript were also used for typesetting. The original manuscript was used by Smith to further correct errors printed in the 1830 and 1837 versions of the Book of Mormon for the 1840 printing of the book. Printer's manuscript ownership history. In the late 19th century the extant portion of the printer's manuscript remained with the family of David Whitmer, who had been a principal founder of the Latter Day Saints and who, by the 1870s, led the Church of Christ (Whitmerite). During the 1870s, according to the "Chicago Tribune", the LDS Church unsuccessfully attempted to buy it from Whitmer for a record price. Church president Joseph F. Smith disputed this assertion in a 1901 letter, believing such a manuscript "possesses no value whatever." In 1895, Whitmer's grandson George Schweich inherited the manuscript. By 1903, Schweich had mortgaged the manuscript for $1,800 and, needing to raise at least that sum, sold a collection including 72 percent of the original printer's manuscript (along with John Whitmer's manuscript history, parts of Joseph Smith's translation of the Bible, manuscript copies of several revelations, and a piece of paper containing copied Book of Mormon characters) to the RLDS Church (now the Community of Christ) for $2,450, with $2,300 of this amount for the printer's manuscript. In 2015, this remaining portion was published by the Church Historian's Press in its "Joseph Smith Papers" series, in Volume Three of "Revelations and Translations"; and, in 2017, the LDS Church bought the printer's manuscript for a reported $35 million. Editions. Chapter and verse notation systems. The original 1830 publication had no verses (breaking the text into paragraphs instead). The Church of Jesus Christ of Latter-day Saints' (LDS Church) 1852 edition numbered the paragraphs. Orson Pratt, an apostle of the denomination, divided the text into shorter chapters, organized it into verses, and added paratextual footnotes. In 1920, the LDS Church published a new edition edited by apostle James E. Talmage, who reformatted the text into double columns, imitating the prevailing format of Bibles in the United States. The next new Latter-day Saint edition of the book was published in 1981; this edition added new chapter summaries and cross references. The Reorganized Church of Jesus Christ of Latter Day Saints (RLDS Church; later renamed the Community of Christ) also published editions of the Book of Mormon. It printed editions in Plano, Illinois, and Lamoni, Iowa, in 1874. In 1892, the church published a large-print edition that split the original edition's paragraphs into shorter ones. From 1906 to 1908, a Reversification Committee appointed by the RLDS Church revised its Book of Mormon text based on manuscript comparison and also reformatted it into shorter verses (though it retained the original chapter lengths); the RLDS Church published this version as the Revised Authorized edition, printing it beginning in April 1909. Historic editions. The following editions, no longer in publication, marked major developments in the text or in the reader's helps printed in the Book of Mormon. Non-English translations.
The Latter-day Saint version of the Book of Mormon has been translated into 83 languages, and selections have been translated into an additional 25 languages. In 2001, the LDS Church reported that all or part of the Book of Mormon was available in the native language of 99 percent of Latter-day Saints and 87 percent of the world's total population. Translations into languages without a tradition of writing (e.g., Kaqchikel, Tzotzil) have been published as audio recordings and as transliterations with Latin characters. Translations into American Sign Language are available as video recordings. Typically, translators are Latter-day Saints who are employed by the church and translate the text from the original English. Each manuscript is reviewed several times before it is approved and published. In 1998, the church stopped translating selections from the Book of Mormon and announced that each new translation it approves would instead be a full edition. Representations in media. Artists have depicted Book of Mormon scenes and figures in visual art since the beginnings of the Latter Day Saint movement. The nonprofit Book of Mormon Art Catalog documents the existence of at least 2,500 visual depictions of Book of Mormon content. According to art historian Jenny Champoux, early artwork of the Book of Mormon relied on European iconography; eventually, a distinctive "Latter-day Saint style" developed. Events of the Book of Mormon are the focus of several films produced by the LDS Church, including "The Life of Nephi" (1915), "How Rare a Possession" (1987) and "The Testaments of One Fold and One Shepherd" (2000). Depictions of Book of Mormon narratives in films not officially commissioned by the church (sometimes colloquially known as Mormon cinema) include "The Book of Mormon Movie, Vol. 1: The Journey" (2003) and "Passage to Zarahemla" (2007). In "one of the most complex uses of Mormonism in cinema," Alfred Hitchcock's film "Family Plot" portrays a funeral service in which a priest (apparently non-Mormon, by his appearance) reads Second Nephi 9:20–27, a passage describing Jesus Christ having victory over death. In 2011, a long-running satirical musical titled "The Book of Mormon", written by "South Park" creators Trey Parker and Matt Stone in collaboration with Robert Lopez, premiered on Broadway, winning nine Tony Awards, including Best Musical. Its London production won the Olivier Award for best musical. The musical is not principally about Book of Mormon content but instead tells an original story about Latter-day Saint missionaries in the twenty-first century. In 2019, the church began producing a series of live-action adaptations of various stories within the Book of Mormon, titled Book of Mormon Videos, which it distributed on its website and YouTube channel.
Baptists
Baptists are a Protestant denomination of Christianity distinguished by baptizing only believers (believer's baptism) and doing so by complete immersion. Modern Baptist churches generally subscribe to the doctrines of soul competency (the responsibility and accountability of every person before God), "sola fide" (justification by faith alone), "sola scriptura" (the Bible as the sole infallible authority) and congregationalist church government. Baptists generally recognize at least two ordinances (or sacraments): Baptism and the Lord's Supper. Diverse from their beginning, those identifying as Baptists today may differ widely from one another in what they believe, how they worship, their attitudes toward other Christians, and their understanding of what is important in Christian discipleship. Baptist missionaries have spread various Baptist churches to every continent. The largest Baptist communion of churches is the Baptist World Alliance, and there are many different groupings of Baptist churches and Baptist congregations. Historians trace the Baptists to English Dissenters from the Church of England, more specifically, to the dissenting church in Gainsborough led by the cleric John Smyth. The Gainsborough congregation and the Scrooby congregation went into exile in Amsterdam in 1608. In accordance with their exegesis of the New Testament, they came to reject infant baptism and instituted baptism only of professing believers. The Baptists spread across England, where the General Baptists considered Christ's atonement to extend to all people, while the Particular Baptists believed that it extended only to the elect. The Second London Confession of Faith of 1689 is the greatest creedal document for Particular Baptists, whereas the Orthodox Creed of 1679 is the one widely accepted by General Baptists. Thomas Helwys returned to England with the congregation exiled in Amsterdam, where he formulated a distinctive philosophical request that the church and the state be kept separate in matters of law, so that individuals might have freedom of conscience. Origins. Baptist historian Bruce Gourley outlines four main views of Baptist origins: English separatist view. Modern Baptist churches trace their history to the English Separatist movement in the 17th century, over a century after the foundation of the Church of England during the Protestant Reformation. This view of Baptist origins has the most historical support and is the most widely accepted. Adherents to this position consider the influence of Anabaptists upon early Baptists to be minimal. It was a time of considerable political and religious turmoil. Both individuals and churches were willing to give up their theological roots if they became convinced that a more biblical "truth" had been discovered. During the Reformation, the Church of England (Anglicans) separated from the Roman Catholic Church. There were some Christians who were not content with the achievements of the mainstream Protestant Reformation. There also were Christians who were disappointed that the Church of England had not made corrections of what some considered to be errors and abuses. Of those most critical of the church's direction, some chose to stay and try to make constructive changes from within the Anglican Church. They became known as "Puritans" and are described by Gourley as cousins of the English Separatists. Others decided they must leave the established church because of their dissatisfaction and became known as the Separatists. 
In 1579, Faustus Socinus founded the Unitarian Polish Brethren in Poland-Lithuania, then a religiously tolerant country. The Unitarians taught baptism by immersion. After their expulsion from the Commonwealth in 1658, many of them fled to the Netherlands. In the Netherlands, the Unitarians introduced immersion baptism to the Dutch Mennonites. Baptist churches have their origins in a movement started by John Smyth and Thomas Helwys in Amsterdam. Because they shared beliefs with the Congregationalists, Smyth, Helwys, and their followers went into exile in 1607 with other believers who held the same positions. They believed that the Bible was to be the only guide and that believer's baptism was what the scriptures required. In 1609, the year considered to be the foundation of the movement, they baptized believers and founded the first Baptist church. That same year, while still in Amsterdam, Smyth wrote a tract titled "The Character of the Beast". In it he expressed two propositions: first, infants are not to be baptized; and second, "Antichristians converted are to be admitted into the true Church by baptism." Hence, his conviction was that a scriptural church should consist only of regenerate believers who have been baptized on a confession of faith. He rejected the Separatist movement's doctrine of infant baptism. Shortly thereafter, Smyth left the group. Ultimately, Smyth became committed to believers' baptism as the only biblical baptism. He was convinced on the basis of his interpretation of Scripture that infants would not be damned should they die in infancy. Smyth, convinced that his self-baptism was invalid, applied to the Mennonites for membership. He died while waiting for membership, and some of his congregation became Mennonites. Helwys and others kept their baptism and their commitments, which would give rise to the Baptist tradition. The modern Baptist denomination is an outgrowth of this movement. Baptists rejected the name Anabaptist when they were called that by opponents in derision. McBeth writes that as late as the 18th century, many Baptists referred to themselves as "the Christians commonly—though "falsely"—called Anabaptists." Helwys took over the leadership, leading the church back to England in 1611, and he published the first Baptist confession of faith, "A Declaration of Faith of English People", in 1611. He founded the first General Baptist Church in Spitalfields, east London, in 1612. Another milestone in the early development of Baptist doctrine was in 1638 with John Spilsbury, a Calvinist minister who helped to promote the strict practice of believer's baptism by immersion (as opposed to affusion or aspersion). According to Tom Nettles, professor of historical theology at Southern Baptist Theological Seminary, "Spilsbury's cogent arguments for a gathered, disciplined congregation of believers baptized by immersion as constituting the New Testament church gave expression to and built on insights that had emerged within separatism, advanced in the life of John Smyth and the suffering congregation of Thomas Helwys, and matured in Particular Baptists." Anabaptist influence view. A minority view is that early 17th-century Baptists were influenced by (but not directly connected to) continental Anabaptists. According to this view, the General Baptists shared similarities with Dutch Waterlander Mennonites (one of many Anabaptist groups) including believer's baptism only, religious liberty, separation of church and state, and Arminian views of salvation, predestination and original sin.
It is certain that the early Baptist church led by Smyth had contacts with the Anabaptists; however, it is debated whether these influences found their way into the English General Baptists. Representatives of this theory include A.C. Underwood and William R. Estep. Gourley writes that among some contemporary Baptist scholars who emphasize the faith of the community over soul liberty, the Anabaptist influence theory is making a comeback. This view was also taught by the Reformed historian Philip Schaff. However, relations between Baptists and Anabaptists were strained early on. In 1624, the five existing Baptist churches of London issued a condemnation of the Anabaptists. Furthermore, the original group associated with Smyth (popularly believed to be the first Baptists) broke with the Waterlander Mennonite Anabaptists after a brief period of association in the Netherlands. Perpetuity and succession view. Traditional Baptist historians write from the perspective that Baptists had existed since the time of Christ. Proponents of the Baptist successionist or perpetuity view consider the Baptist movement to have existed independently from Roman Catholicism and prior to the Protestant Reformation. This view has been characterized as "apologetic and polemical" and "without consideration of a critical, scientific methodology". The perpetuity view is often identified with "The Trail of Blood", a booklet of five lectures by James Milton Carroll published in 1931. Other Baptist writers who advocate the successionist theory of Baptist origins are John T. Christian and Thomas Crosby. This view was held by English Baptist preacher Charles Spurgeon as well as Jesse Mercer, the namesake of Mercer University. In 1898 William Whitsitt was pressured to resign his presidency of the Southern Baptist Theological Seminary for denying Baptist successionism. Baptist origins in the United Kingdom. In 1612 Helwys established a Baptist congregation in London, consisting of congregants from Smyth's church. A number of other Baptist churches sprang up, and they became known as the General Baptists. The Particular Baptists were established when a group of Calvinist Separatists adopted believers' baptism. The Particular Baptists consisted of seven churches by 1644 and had created a confession of faith called the First London Confession of Faith. Baptist origins in North America. Both Roger Williams and John Clarke are variously credited with founding the earliest Baptist church in North America. In 1639 Williams established a Baptist church in Providence, Rhode Island, and Clarke began a Baptist church in Newport, Rhode Island. According to a Baptist historian who has researched the matter extensively, "There is much debate over the centuries as to whether the Providence or Newport church deserved the place of 'first' Baptist congregation in America. Exact records for both congregations are lacking." The First Great Awakening energized the Baptist movement, and the Baptist community experienced spectacular growth. Baptists became the largest Christian community in many southern states, including among the enslaved Black population. Baptist missionary work in Canada began in the British colony of Nova Scotia (present-day Nova Scotia and New Brunswick) in the 1760s. The first official record of a Baptist church in Canada is that of Horton Baptist Church (now Wolfville) in Wolfville, Nova Scotia, on 29 October 1778. The church was established with the assistance of the New Light evangelist Henry Alline.
Many of Alline's followers, after his death, converted and strengthened the Baptist presence in the Atlantic region. Two major groups of Baptists formed the basis of the churches in the Maritimes. These were referred to as Regular Baptists (Calvinistic in their doctrine) and Free Will Baptists (Arminian in their doctrine). In May 1845, the Baptist congregations in the United States split over slavery and missions. The Home Mission Society prevented slaveholders from being appointed as missionaries. The split created the Southern Baptist Convention, while the northern congregations formed their own umbrella organization now called the American Baptist Churches USA (ABC-USA). As of 2015, Baptists in the U.S. numbered about 50 million people and constituted roughly one-third of American Protestants. Baptist origins in Germany. The first Baptist church in Germany was founded in Hamburg by the missionary Johann Gerhard Oncken in 1834; the union of congregations that grew out of this work is today known as the Union of Evangelical Free Churches in Germany. Founded in 1849, the Union of United Congregations of Baptized Christians in Germany and Denmark merged in 1942 with the Union of Free Church Christians, which originated in the Brethren movement, and thereby took its present name. According to a census published by the association in 2023, it claimed 786 churches and 75,767 members. The Evangelical Christians-Baptists are mostly of Russian-German origin. They were formed in 1944 from the merger of Evangeliums-Christen with the Baptists. Later, other evangelical free churches joined them. In contrast to their Eastern European countries of origin, no unified union of Evangelical Christians-Baptists was founded in Germany. Some of the newly formed congregations have come together in congregational associations such as the Brotherhood of Free Evangelical Christian Congregations or the Working Group of Evangelical Congregations. Another part is connected with German Baptists through the Union of Evangelical Free Churches, or is united with Mennonite Brethren congregations in the Bund Taufgesinnter Gemeinden ('Union of Baptist-Minded Congregations'). In addition, there are also congregations outside of any congregational association. The congregations in the Bund Taufgesinnter Gemeinden (BTG) have partly Baptist, partly Mennonite roots. The federation was formed in 1989 from the merger of originally six Baptist-oriented congregations, which were primarily located in the region of Ostwestfalen-Lippe. The BTG has about 6,000 members spread over 30 congregations. The association's Bible seminary, its theological training center, is located in Bonn and offers a regular study program as well as a theological correspondence course and a theological evening school. The International Baptist Convention goes back to church plantings by American soldiers. In Germany, 25 English-speaking congregations belong to it. From its beginnings in Wiesbaden and Frankfurt, a loose working group, the Association of Baptists in Continental Europe, was formed in 1958; it was joined by other congregations and, from 1961, supported by the North American Southern Baptist Convention. In 1964, the Association adopted its current name. Baptist origins in Finland. Preacher Karl Justin Mathias Möllersvärd was the first to preach Baptist teachings in Finland and sparked a revival, though he did not stay long due to fierce opposition. The movement, however, continued to grow.
In 1855, a resident of Åland returned from Stockholm with material about Baptist beliefs written by former Lutheran priest Anders Wiberg. Farmer Johan Erik Östling was inspired to travel to Stockholm the next year and be baptized, making him the first Finnish Baptist. Several baptisms were performed in Föglö, Åland, the following year; those baptized joined together with those who had already been baptized in Sweden to found the first Finnish Baptist church in 1856 in Föglö. Lutheran priest Henrik Heikel, who spoke with the Baptists to learn more about their beliefs, played a key role in the church's spread. Heikel moved to mainland Finland with his family in 1860. Pedersöre in Ostrobothnia was to become their home as he was installed as priest of the Lutheran church there. The family maintained contact with the Baptists in Åland; after his death in 1867, his son Viktor and daughter Anna were baptized as Baptists by Wiberg in Stockholm. Their conversions led to many more after Anna returned with literature and began to hold meetings. The family received a visit from a Baptist pastor, Adolf Herman Valén, who had been at the hearing with Heikel ten years earlier; they held meetings together and his preaching led to more conversions to the movement. He was the first Baptist to preach on the mainland. The first Baptist baptisms on the mainland were performed the same year, with Maria Ekqvist and Petter Stormåns being the first to be baptized. The movement continued to grow in the following years and a Swedish-speaking congregation was founded in Jakobstad (Finnish: Pietarsaari), near Pedersöre, in 1870, by thirteen people including four members of the Heikel family. That year, the Conventicle Act was also repealed in Finland. (In 1891 the Jakobstad Baptist church underwent a split due to differing views, particularly over open or closed communion.) Ostrobothnia has since remained a hub of the Baptist movement in Finland. Today, the only two Swedish-speaking Baptist churches outside the region are located in Helsinki and Karis. The Baptist movement did not take long to reach the Finnish-speaking population as well. Some of the people who had been baptized in Ostrobothnia were bilingual and began to preach in Finnish, founding several Finnish-speaking churches. Henrik Nars, from Purmo, was one such preacher. In 1870, a solely Finnish-speaking sailor named Henriksson evangelized in the southwestern Finnish countryside of Satakunta after being converted in England. The Luvia Baptist congregation near Pori (Swedish: Björneborg) traces its origin, also dated 1870, to his work. In 1871, John Hymander, also a Finnish speaker, who had been a priest in the Lutheran church for 40 years, left his position in Parikkala in South Karelia and was baptized by the Baptists in Stockholm. He was known to have had a friendly relationship with the Heikel family. Several in Hymander's family soon followed in baptism and a Baptist church was founded in Parikkala in 1872. More Finnish-speaking Baptist churches were soon founded in Jurva (1879), Turku (1884), Kuopio (1886), Tampere (1890), and Ylistaro (1894). The Finnish Baptist Union was officially formed in 1903. Reverend Erik Jansson was also a key figure in the church's spread beginning in the 1880s. After joining evangelist Dwight L. Moody's church in Chicago, Jansson returned to Finland and later joined the Baptist church in 1881. He became pastor of the Petalax Baptist Church, which at times had over 400 members and baptized 1,000 people.
For the first few decades, the development work in Finland was supported exclusively by Swedish Baptists; starting around 1889, the Home Mission Society of the American Baptist Churches USA also contributed financially. According to Statistics Finland's demographic statistics, the number of Baptists in Finland was 2,305 at the end of 2024. Baptist origins in Ukraine. The Baptist churches in Ukraine were preceded by the German Anabaptist and Mennonite communities, who had been living in southern Ukraine since the 16th century, and who practiced adult believer's baptism. The first Baptist baptism (adult baptism by full immersion) in Ukraine took place in 1864 on the river Inhul in the Yelizavetgrad region (now Kropyvnytskyi region), in a German settlement. In 1867, the first Baptist communities were organized in that area. From there, the Baptist movement spread across the south of Ukraine and then to other regions as well. One of the first Baptist communities was registered in Kyiv in 1907, and in 1908 the First All-Russian Convention of Baptists was held there, as Ukraine was still controlled by the Russian Empire. The All-Russian Union of Baptists was established in Yekaterinoslav (now Dnipro) in southern Ukraine. At the end of the 19th century, there were between 100,000 and 300,000 Baptists in Ukraine. An independent All-Ukrainian Baptist Union was established during the brief period of Ukrainian independence in the early 20th century and once again after the fall of the Soviet Union; the largest such union is currently known as the Evangelical Baptist Union of Ukraine. Prior to its independence in 1991, Ukraine was home to the second largest Baptist community in the world, after the United States, and was called the "Bible Belt" of the Soviet Union. Baptist churches. Some Baptist church congregations choose to be independent of larger church organizations (Independent Baptist). Other Baptist churches choose to be part of an international or national Baptist Christian denomination or association while still adhering to a congregationalist polity. This cooperative relationship allows the development of common organizations for mission and societal purposes, such as humanitarian aid, schools, theological institutes and hospitals. The majority of Baptist churches are part of national denominations (or "associations" or "cooperative groups"), as well as the Baptist World Alliance (BWA), formed in 1905 by 24 Baptist denominations from various countries. The BWA's goals include caring for the needy, leading in world evangelism and defending human rights and religious freedom. Missionary organizations. Missionary organizations fostered the development of the movement on every continent. The BMS World Mission was founded in 1792 at Kettering, England. In the United States, International Ministries was founded in 1814, and the International Mission Board was founded in 1845. Membership. Membership policies vary due to the autonomy of churches, but generally an individual becomes a member of a church through believer's baptism (which is a public profession of faith in Jesus, followed by immersion baptism). Most Baptists do not believe that baptism is a requirement for salvation but rather a public expression of inner repentance and faith. In general, Baptist churches do not have a stated age restriction on membership, but believer's baptism requires that an individual be able to freely and earnestly profess their faith.
In 2010, an estimated 100 million Christians identified as Baptist or belonging to a Baptist-type church. In 2020, according to the researcher Sébastien Fath of the CNRS, the Baptist movement had around 170 million believers worldwide. According to a census released in 2024, the BWA includes 266 participating fellowships in 134 countries, with 178,000 churches and 51 million baptized members. These statistics may not be fully representative, however, since some churches in the United States have dual or triple national Baptist affiliation, possibly causing a church and its members to be counted by more than one Baptist association if these associations are members of the BWA. Censuses carried out by individual Baptist associations in 2023 identified the associations claiming the most members on each continent. Beliefs. Since the early days of the Baptist movement, various associations have adopted common confessions of faith as the basis for cooperative work among churches. Each church has a particular confession of faith and a common confession of faith if it is a member of an association of churches. For Reformed Baptists, historically significant Baptist doctrinal documents include the 1689 London Baptist Confession of Faith, the 1742 Philadelphia Baptist Confession, and the 1833 New Hampshire Baptist Confession of Faith. For General Baptists (Freewill Baptists), the Orthodox Creed and the Treatise on the Faith and Practice of the Free Will Baptists are adhered to. Written church covenants are adopted by some individual Baptist churches, especially Independent Baptist congregations, as statements of their faith and beliefs. Baptist theology is a subset of evangelical theology. It is based on believers' Church doctrine. Baptists, like other Christians, are defined by schools of thought—some of them common to all orthodox and evangelical groups, and a portion of them distinctive to Baptists. Through the years, different Baptist groups have issued confessions of faith—without considering them to be "creeds"—to express their particular doctrinal distinctions in comparison to other Christians as well as in comparison to other Baptists. Baptist denominations are traditionally seen as belonging to two parties: General Baptists, who uphold Arminian theology, and Particular Baptists, who uphold Reformed theology (Calvinism). During the holiness movement, some General Baptists accepted the teaching of a second work of grace and formed denominations that emphasized this belief, such as the Ohio Valley Association of the Christian Baptist Churches of God and the Holiness Baptist Association. Most Baptists are evangelical in doctrine, but their beliefs may vary due to the congregational governance system that gives autonomy to individual local Baptist churches. Historically, Baptists have played a key role in encouraging religious freedom and the doctrine of separation of church and state. Shared doctrines would include beliefs about one God; the virgin birth of Jesus; miracles; substitutionary atonement for sins through the death, burial, and bodily resurrection of Jesus; the Trinity; the need for salvation (through belief in Jesus Christ as the Son of God, his death and resurrection); grace; the Kingdom of God; last things (eschatology) (Jesus Christ will return personally and visibly in glory to Earth, the dead will be raised and Christ will judge everyone in righteousness); and evangelism and missions.
Most Baptists hold that no church or ecclesiastical organization has inherent authority over a Baptist church. Churches can properly relate to each other under this polity only through voluntary cooperation, never by any sort of coercion. Furthermore, this Baptist polity calls for freedom from governmental control. Exceptions to this form of local governance include a few churches that submit to the leadership of a body of elders, as well as the Episcopal Baptists, who have an episcopal system. Baptists generally believe in the literal Second Coming of Christ. Beliefs among Baptists regarding the "end times" include amillennialism and both dispensational and historic premillennialism, with views such as postmillennialism and preterism receiving some support. Many Baptists also hold a number of additional distinctive principles. Beliefs that vary among Baptists. Since there is no hierarchical authority and each Baptist church is autonomous, there is no official set of Baptist theological beliefs. These differences exist among associations and even among churches within the associations. There is widespread difference among Baptists on a number of doctrinal issues. Excommunication may be used as a last resort by some denominations and churches for members who do not want to repent of beliefs or behavior at odds with the confession of faith of the community. When an entire congregation is excluded, it is often called disfellowship. Types of Baptists. General Baptists. General Baptists are Baptists who hold to the doctrine of general atonement, believing that Jesus Christ's vicarious death was an atonement for all humanity—not only for the elect. The first credobaptist Puritans (Baptists) in England and Wales were General Baptists. Some General Baptists were conformist Puritans while others were nonconformists; both wished further reformation in the Church of England. They produced the Standard Confession of Faith, in 1660, and the Orthodox Creed, in 1679. Later, General Baptists became very ecumenical, especially with the Anglican Church, and approached Classical Arminian soteriology under Thomas Grantham's influence. Free Will Baptists in the United States are a sub-group within the General Baptist strand. Reformed Baptists (Particular Baptists). The Particular Baptists or Reformed Baptists or Calvinistic Baptists are Baptists who hold to the Calvinistic view of salvation. Depending on the denomination, Calvinistic Baptists adhere to varying degrees of Reformed theology, ranging from simply embracing the Five Points of Calvinism to accepting a modified form of federalism; all Calvinistic Baptists reject the classical Reformed teaching on infant baptism. While the Reformed Baptist confessions affirm views of the nature of baptism similar to those of the classical Reformed, they reject infants as the proper subjects of baptism. In distinction to the General Baptists, who emphasized separation from the Church of England, many Particular Baptists sought more ecumenism. Primitive Baptists are Calvinistic Baptists who adhere to a form of Reformed beliefs and who came out of the controversy among Baptists over the use of mission boards, tract societies and temperance societies. Primitive Baptists reject some elements of classical Reformed theology, such as infant baptism, and avoid the term "Calvinist". They are still Calvinist in the sense of holding strongly to the Five Points of Calvinism and they explicitly reject Arminianism.
They are also characterized by "intense conservatism". Missionary Baptists. The Missionary Baptists were a group of Baptists that emerged from the missions controversy in the United States, in which they supported the use of missionaries. Independent Baptists. Independent Baptists are Baptists who arose from local Baptist congregations whose members were concerned about the doctrines of theological liberalism in national Baptist conventions. Independent Baptists are primarily fundamentalist, and although they may differ on multiple issues such as soteriology, dress standards, music, and the practice of communion, among others, they are homogeneous on issues such as opposition to homosexuality, the ordination of women, the charismatic movement, evolution, and abortion. New Independent Baptists. During the 21st century, the New Independent Fundamental Baptist movement was founded out of the Independent Baptist movement by Steven Anderson. However, this movement has been heavily criticized by Independent Baptists due to many doctrinal differences. Some former New IFB pastors have also accused the association of being a cult. Seventh Day Baptists. Seventh Day Baptists are Baptists who practice seventh-day Sabbatarianism. However, it is not certain when Seventh Day Baptists took denominational form, and they do not claim an unbroken succession of church organization from before the Reformation. Landmark Baptists. Landmark Baptists are a Baptist movement which originated in the 19th century in the United States, with leaders such as J. R. Graves, J. M. Pendleton and A. C. Dayton, although they denied being a new movement, claiming instead to be a continuation of the old-fashioned Baptists. Landmark Baptists believe that the term "church" should be reserved for Baptist churches exclusively, arguing that groups such as Methodists or Presbyterians are not churches at all, but only religious societies. They believe that Baptists share an unbroken line of succession from the early church. Worship. In Baptist churches, the worship service is part of the life of the church and includes praise and worship, prayers to God, a sermon based on the Bible, an offering, and, periodically, the Lord's Supper. Some churches have services with traditional Christian music, others with contemporary Christian music, and some offer both in separate services. In many churches, there are services adapted for children and even for teenagers. Prayer meetings are also held during the week. The architecture is generally sober, and the Latin cross is one of the only spiritual symbols usually seen on a Baptist church building, identifying the building's affiliation. Education. Baptist churches established elementary and secondary schools, Bible colleges, colleges and universities as early as the 1680s in England, and later in various other countries. In 2006, the International Association of Baptist Colleges and Universities was founded in the United States. In 2023, it had 42 member universities. Sexuality. Many churches promote virginity pledges among young Baptist Christians, who are invited to engage in a public ceremony pledging sexual abstinence until Christian marriage. This pact is often symbolized by a purity ring. Programs like True Love Waits, founded in 1993 by the Southern Baptist Convention, have been developed to support these commitments. Most Baptist associations around the world believe only in marriage between a man and a woman.
Some Baptist associations do not have official beliefs about marriage in a confession of faith and invoke congregationalism to leave the choice to each church. This is the case for American Baptist Churches USA, the Progressive National Baptist Convention (USA), the Cooperative Baptist Fellowship (USA), the National Baptist Convention, USA, and the Baptist Union of Great Britain. Some Baptist associations support same-sex marriage. This is the case for the Alliance of Baptists (USA), the Canadian Association for Baptist Freedoms, the Aliança de Batistas do Brasil, the Fraternidad de Iglesias Bautistas de Cuba, and the Association of Welcoming and Affirming Baptists (international). Controversies. Baptists have faced many controversies in their 400-year history, some rising to the level of crises. Baptist historian Walter Shurden notes that the word "crisis" comes from the Greek word meaning "to decide." Shurden writes that, contrary to the presumed negative view of crises, some controversies that reach a crisis level may actually be "positive and highly productive." He claims that even schism, though never ideal, has often produced positive results. In his opinion, crises among Baptists have each become moments of decision that shaped their future. Missions crisis. Early in the 19th century, the rise of the modern missions movement, and the backlash against it, led to widespread and bitter controversy among American Baptists. During this era, American Baptists were split into missionary and anti-missionary factions. A substantial number of Baptists seceded to join the movement led by Alexander Campbell, which sought a return to a more fundamental church. Slavery crisis. United States. Leading up to the American Civil War, Baptists became embroiled in the controversy over slavery in the United States. Whereas in the First Great Awakening Methodist and Baptist preachers had opposed slavery and urged manumission, over the decades they made more of an accommodation with the institution, working with slaveholders in the South to promote a paternalistic version of it. Both denominations made direct appeals to slaves and free Blacks for conversion, and the Baptists in particular allowed them active roles in congregations. By the mid-19th century, northern Baptists tended to oppose slavery. As tensions increased, in 1844 the Home Mission Society refused to appoint a slaveholder as a missionary who had been proposed by Georgia. It noted that missionaries could not take servants with them, and also that the board did not want to appear to condone slavery. In 1845, a group of churches in favor of slavery and in disagreement with the abolitionism of the Triennial Convention (now American Baptist Churches USA) left to form the Southern Baptist Convention. They believed that the Bible sanctions slavery and that it was acceptable for Christians to own slaves; in their view, slavery was a human institution which Baptist teaching could make less harsh. By this time, many planters were part of Baptist congregations, and some of the denomination's prominent preachers, such as Basil Manly Sr., president of the University of Alabama, were also planters who owned slaves. As early as the late 18th century, Black Baptists began to organize separate churches, associations and mission agencies. Blacks set up some independent Baptist congregations in the South before the Civil War, though White Baptist associations maintained some oversight of these churches.
In the postwar years, freedmen quickly left the white congregations and associations, setting up their own churches. In 1866, the Consolidated American Baptist Convention, formed from Black Baptists of the South and West, helped southern associations set up Black state conventions, which they did in Alabama, Arkansas, Virginia, North Carolina, and Kentucky. In 1880, Black state conventions united in the national Foreign Mission Convention to support Black Baptist missionary work. Two other national Black conventions were formed, and in 1895 they united as the National Baptist Convention. This organization later went through its own changes, spinning off other conventions. It is the largest Black religious organization and the second-largest Baptist organization in the world. Baptists are numerically most dominant in the Southeast. In 2007, the Pew Research Center's Religious Landscape Survey found that 45% of all African Americans identify with Baptist denominations, the vast majority of them within the historically Black tradition. In the American South, the interpretation of the Civil War, the abolition of slavery and the postwar period has differed sharply by race since those years. Americans have often interpreted great events in religious terms. Historian Wilson Fallin contrasts the interpretation of the Civil War and Reconstruction in White versus Black memory by analyzing Baptist sermons documented in Alabama, finding that White and Black preachers drew sharply different lessons from the same events. Black preachers interpreted the Civil War, Emancipation and Reconstruction as "God's gift of freedom." They had a gospel of liberation, having long identified with the Book of Exodus and its account of deliverance from slavery. They took opportunities to exercise their independence, to worship in their own way, to affirm their worth and dignity, and to proclaim the fatherhood of God and the brotherhood of man. Most of all, they quickly formed their own churches, regional and state associations, and, by the end of the 19th century, a national convention, all operating freely without white supervision. These institutions offered self-help and racial uplift, a place to develop and use leadership, and places for proclamation of the gospel of liberation. As a result, Black preachers said that God would protect and help His people; God would be their rock in a stormy land. The Southern Baptist Convention supported white supremacy and its results: disenfranchising most Blacks and many poor whites at the turn of the 20th century by raising barriers to voter registration, and passing racial segregation laws that enforced the system of Jim Crow. Its members largely resisted the civil rights movement in the South, which sought to enforce Black citizens' constitutional rights to public access and voting, as well as the enforcement of midcentury federal civil rights laws. In 1995, the Southern Baptist Convention passed a resolution that recognized the failure of their ancestors to protect the civil rights of African Americans. More than 20,000 Southern Baptists registered for the meeting in Atlanta. The resolution declared that messengers, as SBC delegates are called, "unwaveringly denounce racism, in all its forms, as deplorable sin" and "lament and repudiate historic acts of evil such as slavery from which we continue to reap a bitter harvest."
It offered an apology to all African Americans for "condoning and/or perpetuating individual and systemic racism in our lifetime" and repentance for "racism of which we have been guilty, whether consciously or unconsciously." Although Southern Baptists had condemned racism in the past, this was the first time the convention, predominantly White since the Reconstruction era, had specifically addressed the issue of slavery. The statement sought forgiveness "from our African-American brothers and sisters" and pledged to "eradicate racism in all its forms from Southern Baptist life and ministry." In 1995, about 500,000 members of the 15.6-million-member denomination were African Americans and another 300,000 were ethnic minorities. The resolution marked the denomination's first formal acknowledgment that racism played a role in its founding. Caribbean islands. Elsewhere in the Americas, in the Caribbean in particular, Baptist missionaries and members took an active role in the anti-slavery movement. In Jamaica, for example, William Knibb, a prominent British Baptist missionary, worked toward the emancipation of slaves in the British West Indies (which took place in full in 1838). Knibb supported the creation of "Free Villages" and sought funding from English Baptists to buy land for freedmen to cultivate; the Free Villages were envisioned as rural communities centered on a Baptist church where emancipated slaves could farm their own land. Thomas Burchell, missionary minister in Montego Bay, was active in this movement, gaining funds from Baptists in England to buy land for what became known as Burchell Free Village. Prior to emancipation, Baptist deacon Samuel Sharpe, who served with Burchell, organized a general strike of slaves seeking better conditions. It developed into a major rebellion of as many as 60,000 slaves, which became known as the Christmas Rebellion or the Baptist War. It was put down by government troops within two weeks. During and after the rebellion, an estimated 200 slaves were killed outright, and more than 300 were later executed after trials in the courts, sometimes for minor offenses. Baptists were active after emancipation in promoting the education of former slaves; for example, Jamaica's Calabar High School, named after the port of Calabar in Nigeria, was founded by Baptist missionaries. At the same time, during and after slavery, slaves and free Blacks formed their own Spiritual Baptist movements, breakaway spiritual movements whose theology often expressed resistance to oppression. Landmark crisis. Southern Baptist Landmarkism sought to reset the ecclesiastical separation which had characterized the old Baptist churches, in an era when inter-denominational union meetings were the order of the day. James Robinson Graves was an influential Baptist of the 19th century and the primary leader of this movement. While some Landmarkers eventually separated from the Southern Baptist Convention, the movement continued to influence the Convention into the 20th and 21st centuries. Modernist crisis. The rise of theological modernism in the late 19th and early 20th centuries also greatly affected Baptists. The Landmark movement has been described as a reaction among Southern Baptists in the United States against incipient modernism. In England, Charles Spurgeon fought against modernistic views of Scripture in the Downgrade Controversy and severed his church from the Baptist Union as a result.
The Northern Baptist Convention in the United States had internal conflict over modernism in the early 20th century, ultimately embracing it. Two new conservative associations of congregations that separated from the convention were founded as a result: the General Association of Regular Baptist Churches in 1933 and the Conservative Baptist Association of America in 1947. Following similar conflicts over modernism, the Southern Baptist Convention adhered to conservative theology as its official position. In the late 20th century, Southern Baptists who disagreed with this direction founded two new groups: the liberal Alliance of Baptists in 1987 and the more moderate Cooperative Baptist Fellowship in 1991. Originally both schisms continued to identify as Southern Baptist, but over time they "became permanent new families of Baptists." Criticism. In his 1963 book "Strength to Love", Baptist pastor Martin Luther King Jr. criticized some Baptist churches for their anti-intellectualism, especially the lack of theological training among pastors. In 2018, Baptist theologian Russell D. Moore criticized some Baptists in the United States for a moralism that strongly emphasized the condemnation of certain personal sins while remaining silent on the social injustices afflicting entire populations, such as racism. In 2020, the North American Baptist Fellowship, a region of the Baptist World Alliance, officially made a commitment to social justice and spoke out against institutionalized discrimination in the American justice system. In 2022, the Baptist World Alliance adopted a resolution encouraging Baptist churches and associations that have historically contributed to the sin of slavery to engage in restorative justice.
3981
7903804
https://en.wikipedia.org/wiki?curid=3981
Blackjack
Blackjack (formerly black jack or vingt-un) is a casino banking game. It is the most widely played casino banking game in the world. It uses decks of 52 cards and descends from a global family of casino banking games known as "twenty-one". This family of card games also includes the European games "vingt-et-un" and pontoon, and the Russian game "Ochko". The game is a comparing card game where players compete against the dealer, rather than each other. History. Blackjack's immediate precursor was the English version of "twenty-one" called "vingt-un", a game of unknown provenance. The first written reference is found in a book by the Spanish author Miguel de Cervantes. Cervantes was a gambler, and the protagonists of his "Rinconete y Cortadillo", from "Novelas Ejemplares", are card cheats in Seville. They are proficient at cheating at "veintiuno" (Spanish for "twenty-one") and state that the object of the game is to reach 21 points without going over and that the ace values 1 or 11. The game is played with the Spanish "baraja" deck. "Rinconete y Cortadillo" was written between 1601 and 1602, implying that "veintiuno" had been played in Castile since the beginning of the 17th century or earlier. Later references to this game are found in France and Spain. The first record of the game in France occurs in 1768, and in Britain during the 1770s and 1780s; the first rules appeared in Britain in 1800 under the name of "vingt-un". Twenty-one, still known then as "vingt-un", appeared in the United States in the early 1800s. The first American rules were an 1825 reprint of the 1800 English rules. English "vingt-un" later developed into an American variant in its own right, which was renamed "blackjack" around 1899. According to popular myth, when "vingt-un" was introduced into the United States (in the early 1800s, during the First World War, or in the 1930s, depending on the source), gambling houses offered bonus payouts to stimulate players' interest. One such bonus was a ten-to-one payout if the player's hand consisted of the ace of spades and a black jack (either the jack of clubs or the jack of spades). This hand was called a "blackjack", and the name stuck even after the ten-to-one bonus was withdrawn. French card historian Thierry Depaulis debunks this story, showing that prospectors during the Klondike Gold Rush (1896–99) gave the name "blackjack" to the game of American "vingt-un", the bonus being the usual ace and any 10-point card. Since "blackjack" also refers to the mineral zincblende, which was often associated with gold or silver deposits, he suggests that the mineral name was transferred by prospectors to the top bonus hand. He could not find any historical evidence for a special bonus for having the combination of an ace and a black jack. In September 1956, Roger Baldwin, Wilbert Cantey, Herbert Maisel, and James McDermott published a paper titled "The Optimum Strategy in Blackjack" in the "Journal of the American Statistical Association", the first mathematically sound optimal blackjack strategy. This paper became the foundation of future efforts to beat blackjack. Ed Thorp used Baldwin's hand calculations to verify the basic strategy and later published "Beat the Dealer" in 1962. Rules of play at casinos. The object of the game is to win money by creating card totals higher than those of the dealer's hand but not exceeding 21, or by stopping at a total in the hope that the dealer will bust.
Number cards count as their number, the jack, queen, and king ("face cards" or "pictures") count as 10, and aces count as either 1 or 11 depending on whether counting an ace as 11 would cause a bust. If a player exceeds 21 points, they bust and automatically lose. A total of 21 on the starting two cards is called a "blackjack" or "natural" and is the strongest hand. At a blackjack table, the dealer faces five to nine playing positions from behind a semicircular table. Between one and eight standard 52-card decks are shuffled together. To start each round, players place bets in the "betting box" at each position. In jurisdictions allowing back betting, up to three players can be at each position. The player whose bet is at the front of the betting box controls the position, and the dealer consults the controlling player for playing decisions; the other bettors "play behind". A player can usually bet in one or multiple boxes at a single table, but in many U.S. casinos, players are limited to playing one to three positions at a table. The dealer deals from their left ("first base") to their far right ("third base"). Each box gets an initial hand of two cards. The dealer's hand gets its first card face-up. In "hole card" games, the dealer also gets a second card face-down (the hole card), and if the first card is a ten-value card or an ace, the dealer peeks at the hole card to see whether they have a blackjack. If they do, they reveal it immediately, the hand ends, and the dealer takes all wagers except those on hands that are also a blackjack. Hole card games are sometimes played on tables with a small mirror or electronic sensor used to peek securely at the hole card. In European casinos, "no hole card" games are prevalent; the dealer's second card is not drawn until all the players have played their hands. Dealers deal the cards from one or two handheld decks, from a dealer's shoe or from a shuffling machine. One card is dealt to each wagered-on position clockwise from the dealer's left, followed by one card to the dealer, followed by an additional card to each of the positions in play, followed by the dealer's hole card if applicable. The players' initial cards may be dealt face-up or face-down (face-down is more common in single- and double-deck games). Once all the hands are dealt, play begins with the player to the left of the dealer and proceeds clockwise. Player decisions. On the initial two cards, the player has up to five options: "hit", "stand", "double down", "split", or "surrender". Once a hand has more than two cards, hitting and standing are the only options available. Each option has a corresponding hand signal. Hit (take another card) – signal: scrape cards against the table (in handheld games); tap the table with a finger or wave a hand toward the body (in games dealt face-up). Stand (take no more cards) – signal: slide cards under chips face-down (in handheld games); wave hand horizontally (in games dealt face-up). Double down (double the wager and take exactly one more card) – signal: place additional chips beside the original bet outside the betting box and point with one finger. Split (divide a pair into two hands, each with its own bet) – signal: place additional chips next to the original bet outside the betting box and point with two fingers spread into a V formation. Surrender (forfeit half the bet and end the hand) – signal: using the index finger, draw a horizontal line behind the bet; surrender can also be announced verbally. In handheld games, a player must reveal their cards if they have a blackjack, bust, or wish to double down, split, or surrender. Hand signals help the "eye in the sky" make a video recording of the table, which resolves disputes and identifies dealer mistakes. The recording also protects the casino against dealers who steal chips or players who cheat, and recordings can identify advantage players. When a player's hand signal disagrees with their words, the hand signal takes precedence.
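The totaling rules above drive every decision in the game, so it is worth making them concrete. Below is a minimal Python sketch; the function names and card encoding are invented for illustration and assume nothing beyond the rules just stated (faces count 10; aces count 11 unless that busts the hand):

```python
def hand_value(cards):
    """Return (total, is_soft) for a blackjack hand.

    `cards` is a list of ranks: integers 2-10 for number cards,
    "J", "Q", "K" for face cards, and "A" for aces.
    """
    total, aces = 0, 0
    for card in cards:
        if card == "A":
            aces += 1
            total += 11                # count each ace as 11 first
        elif card in ("J", "Q", "K"):
            total += 10
        else:
            total += card
    while total > 21 and aces:         # demote aces from 11 to 1 as needed
        total -= 10
        aces -= 1
    return total, aces > 0             # an ace still counted as 11 => soft


def is_blackjack(cards):
    """A natural: exactly two cards totaling 21."""
    return len(cards) == 2 and hand_value(cards)[0] == 21


# An ace and a 6 make a soft 17; drawing a 10 turns it into a hard 17.
assert hand_value(["A", 6]) == (17, True)
assert hand_value(["A", 6, 10]) == (17, False)
assert is_blackjack(["A", "K"])
```

The soft/hard distinction computed here matters on the dealer's side as well, since, as described next, some games require the dealer to hit a soft 17 and others to stand.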
After the players have finished playing, the dealer's hand is resolved by drawing cards until the hand achieves a total of 17 or higher. If the dealer has a total of 17 including an ace valued as 11 (a "soft 17"), some games require the dealer to stand while other games require the dealer to hit. The dealer never doubles, splits, or surrenders. If the dealer busts, all players who have not busted win. If the dealer does not bust, each remaining bet wins if its hand is higher than the dealer's and loses if it is lower. In the case of a tie ("push" or "standoff"), bets are returned without adjustment. A blackjack beats any hand that is not a blackjack, even one with a value of 21. A player blackjack wins immediately unless the dealer also has one, in which case the hand is a push. If the dealer is dealt a blackjack, all players who do not have a blackjack lose. Wins are paid out at even money, except for player blackjacks, which are traditionally paid out at 3 to 2 odds. Some tables today pay blackjacks at less than 3:2. Insurance. If the dealer shows an ace, an "insurance" bet is allowed. Insurance is a side bet that the dealer has a blackjack. The dealer asks for insurance bets before the first player plays. Insurance bets of up to half the player's current bet are placed on the "insurance bar" above the player's cards. If the dealer has a blackjack, insurance pays 2 to 1. In most casinos, the dealer looks at the down card and pays off or takes the insurance bet immediately. In other casinos, the payoff waits until the end of the play. In face-down games, if a player has more than one hand, they can look at all their hands before deciding; this is the only situation in which a player may look at multiple hands. Players with blackjack can also take insurance. When this happens, the combination is called "even money", because the player is giving up their 3:2 payout for a 1:1 payout when taking insurance with a blackjack, with the guarantee of being paid even if the dealer also has a blackjack. Insurance bets lose money in the long run: the dealer has a blackjack less than one-third of the time. In some games, players can also take insurance when a 10-valued card shows, but the dealer has an ace in the hole less than one-tenth of the time. The insurance bet is susceptible to advantage play. It is advantageous to make an insurance bet whenever the hole card has more than a one in three chance of being a ten. Card counting techniques can identify such situations.
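The one-in-three break-even point quoted above follows from simple expected-value arithmetic; this is a sketch using only the 2-to-1 payout stated in this section. For a unit insurance bet, with $p$ the probability that the hole card is ten-valued:

$$ \mathbb{E}[\text{insurance}] = 2p - (1 - p) = 3p - 1, $$

which is positive exactly when $p > 1/3$. Off the top of a fresh shoe, 16 of every 52 cards are ten-valued, so $p \approx 0.308 < 1/3$ and the bet loses money on average; only a count indicating a ten-rich remainder pushes $p$ past the break-even point.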
Rule variations and effects on house edge. "Note: Where changes in the house edge due to changes in the rules are stated in percentage terms, the difference is usually stated here in percentage points, not the percentage change. For example, if an edge of 10% is reduced to 9%, it is reduced by one percentage point, not reduced by ten percent." Blackjack rules are generally set by regulations that establish permissible rule variations at the casino's discretion. Blackjack comes with a "house edge"; the casino's statistical advantage is built into the game. Most of the house's edge comes from the fact that the player loses when both the player and dealer bust. Blackjack players using basic strategy lose on average less than 1% of their action over the long run, giving blackjack one of the lowest edges in the casino. The house edge for games where blackjack pays 6 to 5 instead of 3 to 2 increases by about 1.4%. Player deviations from basic strategy also increase the house edge. Each game has a rule about whether the dealer must hit or stand on soft 17, which is generally printed on the table surface. The variation where the dealer must hit soft 17 is abbreviated "H17" in blackjack literature, with "S17" used for the stand-on-soft-17 variation. Replacing an "H17" rule with an "S17" rule benefits the player, decreasing the house edge by about 0.2%. All other things being equal, using fewer decks decreases the house edge. This is due to a combination of an increased probability of blackjack (which generally pays 3:2 for the player), an increased probability of the dealer busting, and doubling down being more beneficial for the player in a game with fewer decks. Casinos generally compensate by tightening other rules in games with fewer decks, to preserve the house edge or discourage play altogether. When offering single-deck blackjack games, casinos are more likely to disallow doubling on soft hands or after splitting, restrict resplitting, require higher minimum bets, or pay the player less than 3:2 for a winning blackjack. The effect of the number of decks can be quantified by comparing games under a fixed ruleset: double after split allowed, resplit to four hands allowed, no hitting split aces, no surrendering, double on any two cards, original bets only lost on dealer blackjack, dealer hits soft 17, and cut-card used. Under these rules, the increase in house edge per unit increase in the number of decks is most dramatic when comparing the single-deck game to the two-deck game, and becomes progressively smaller as more decks are added. Surrender, for those games that allow it, is usually not permitted against a dealer blackjack; if the dealer's first card is an ace or ten, the hole card is checked to make sure there is no blackjack before surrender is offered. This rule protocol is consequently known as "late" surrender. The alternative, "early" surrender, gives the player the option to surrender "before" the dealer checks for blackjack, or in a no hole card game. Early surrender is much more favorable to the player than late surrender. For late surrender, however, while it is tempting to opt for surrender on any hand which will probably lose, the correct strategy is to surrender only on the very worst hands, because having even a one-in-four chance of winning the full bet is better than losing half the bet and pushing the other half, which is what surrendering amounts to (ignoring pushes, playing on and winning with probability p has expected value 2p − 1 per unit bet, which beats the −0.5 of surrendering exactly when p > 1/4). If the cards of a post-split hand have the same value, most games allow the player to split again, or "resplit". The player places a further wager, and the dealer separates the new pair, dealing a further card to each as before. Some games allow unlimited resplitting, while others may limit it to a certain number of hands, such as four hands (for example, "resplit to 4"). After splitting aces, the common rule is that only one card will be dealt to each ace; the player cannot split, double, or take another hit on either hand. Rule variants include allowing resplitting aces or allowing the player to hit split aces. Games allowing aces to be resplit are not uncommon, but those allowing the player to hit split aces are extremely rare.
Allowing the player to hit hands resulting from split aces reduces the house edge by about 0.13%; allowing the resplitting of aces reduces it by about 0.03%. Note that a ten-value card dealt on a split ace (or vice versa) is not counted as a blackjack but as a soft 21. After a split, most games allow doubling down on the new two-card hands. Disallowing doubling after a split increases the house edge by about 0.12%. Under the "Reno rule", doubling down is only permitted on hard totals of 9, 10, or 11 (under a similar European rule, only 10 or 11). Basic strategy would otherwise call for some doubling down with hard 9 and soft 13–18, and advanced players can identify situations where doubling on soft 19–20 and hard 8, 7, and even 6 is advantageous. The Reno rule prevents the player from taking advantage of doubling down in these situations and thereby increases the player's expected loss. The Reno rule increases the house edge by around 0.1%, and its European version by around 0.2%. In most non-U.S. casinos, a "no hole card" game is played, meaning that the dealer does not draw nor consult their second card until after all players have finished making decisions. With no hole card, it is rarely correct basic strategy to double or split against a dealer ten or ace, since a dealer blackjack will result in the loss of the split and double bets; the only exception is with a pair of aces against a dealer 10, where it is still correct to split. In all other cases, a stand, hit, or surrender is called for. For instance, when holding 11 against a dealer 10, the correct strategy is to double in a hole card game (where the player knows the dealer's second card is not an ace), but to hit in a no-hole-card game. The no-hole-card rule adds approximately 0.11% to the house edge. The "original bets only" rule variation appearing in certain no hole card games states that if the player's hand loses to a dealer blackjack, only the mandatory initial bet (the "original") is forfeited, and all optional bets, meaning doubles and splits, are pushed. "Original bets only" is also known by the acronym OBO; it has the same effect on basic strategy and the house edge as reverting to a hole card game. In many casinos, a blackjack pays only 6:5 or even 1:1 instead of the usual 3:2. This is most common at tables with lower table minimums. Although this payoff was originally limited to single-deck games, it has spread to double-deck and shoe games. Among common rule variations in the U.S., these altered payouts for blackjack are the most damaging to the player, causing the greatest increase in house edge. Since a blackjack occurs in approximately 4.8% of hands, the 1:1 game increases the house edge by 2.3%, while the 6:5 game adds 1.4% to the house edge. Video blackjack machines generally pay a 1:1 payout for a blackjack. The rule that bets on tied hands are lost rather than pushed is catastrophic to the player. Though rarely used in standard blackjack, it is sometimes seen in "blackjack-like" games, such as in some charity casinos.
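The payout figures just quoted can be checked with a back-of-the-envelope calculation; this is an illustration, not an exact house-edge computation. A player blackjack occurs on about 4.8% of hands, and each payout reduction costs the player the payout difference on those hands:

$$ 0.048 \times \left(\tfrac{3}{2} - \tfrac{6}{5}\right) = 0.048 \times 0.3 \approx 1.4\%, \qquad 0.048 \times \left(\tfrac{3}{2} - 1\right) = 0.048 \times 0.5 = 2.4\%. $$

The second estimate slightly overshoots the quoted 2.3% because some player blackjacks push against a dealer blackjack and are paid nothing under either rule.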
Blackjack strategy. Basic strategy. Each blackjack game has a basic strategy: the optimal way to play any hand, using only the player's total and the dealer's visible card. When using basic strategy, the long-term house advantage (the expected loss of the player) is minimized. Basic strategy is usually presented as a table of player hands against dealer up-cards, computed for a specific set of rules; published tables commonly use a key such as the following: S = stand; H = hit; Dh = double (if not allowed, then hit); Ds = double (if not allowed, then stand); SP = split; Uh = surrender (if not allowed, then hit); Us = surrender (if not allowed, then stand); Usp = surrender (if not allowed, then split). Most basic strategy decisions are the same for all blackjack games. Rule variations call for changes in only a few situations. For example, converting such a table from the hit-on-soft-17 rule to the stand-on-soft-17 rule (which favors the player, and is typically found only at higher-limit tables today) changes only six cells: hit on 11 "vs." A, hit on 15 "vs." A, stand on 17 "vs." A, stand on A,7 "vs." 2, stand on A,8 "vs." 6, and split on 8,8 "vs." A. Regardless of the specific rule variations, taking insurance or "even money" is never the correct play under basic strategy. Estimates of the house edge for blackjack games quoted by casinos and gaming regulators are based on the assumption that the players follow basic strategy. Most blackjack games have a house edge of between 0.5% and 1%, placing blackjack among the cheapest casino table games for the player. Casino promotions such as complimentary match-play vouchers or 2:1 blackjack payouts allow players to acquire an advantage without deviating from basic strategy. Composition-dependent strategy. Basic strategy is based on a player's point total and the dealer's visible card. Players can sometimes improve on this decision by considering the composition of their hand, not just the point total. For example, players should ordinarily stand when holding 12 against a dealer 4. But in a single-deck game, players should hit if their 12 consists of a 10 and a 2. The presence of a 10 in the player's hand has two consequences: it leaves fewer ten-value cards to bust the player's 12 on a hit, and it leaves the dealer less likely to bust, which makes standing on 12 less attractive. Even when basic and composition-dependent strategies lead to different actions, the difference in expected reward is small, and it becomes smaller with more decks. Using a composition-dependent strategy rather than a basic strategy in a single-deck game reduces the house edge by 0.04%, which falls to 0.003% for a six-deck game. Advantage play. Blackjack has been a high-profile target for advantage players since the 1960s. Advantage play attempts to win more using skills such as memory, computation, and observation. While these techniques are legal, they can give players a mathematical edge in the game, making advantage players unwanted customers for casinos. Advantage play can lead to ejection or blacklisting. Advantageous play techniques in blackjack include card counting, shuffle tracking, and identifying concealed cards, described below. Card counting. During the course of a blackjack shoe, the dealer exposes the dealt cards, and players can infer from their accounting of the exposed cards which cards remain. These inferences inform both betting decisions (bet more when the remaining cards favor the player) and playing decisions (deviate from basic strategy when the count warrants it). A card counting system assigns a point score to each card rank (e.g., 1 point for 2–6, 0 points for 7–9, and −1 point for 10–A). When a card is exposed, a counter adds the score of that card to a running total, the "count". A card counter uses this count to make betting and playing decisions, as sketched below. The count starts at 0 for a freshly shuffled deck for "balanced" counting systems. Unbalanced counts are often started at a value that depends on the number of decks used in the game. Blackjack's house edge is usually around 0.5–1% when players use basic strategy. Card counting can give the player an edge of up to about 2%.
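As an illustration of such a system, here is a minimal Python sketch of a running count using the example point values given above (+1 for 2–6, 0 for 7–9, −1 for ten-valued cards and aces, i.e., the Hi-Lo values). The class and method names are invented for illustration, and the true-count conversion shown is the common refinement of dividing the running count by the estimated number of undealt decks:

```python
HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1,
         7: 0, 8: 0, 9: 0,
         10: -1, "J": -1, "Q": -1, "K": -1, "A": -1}


class RunningCount:
    """Balanced count: starts at 0 on a freshly shuffled shoe."""

    def __init__(self, decks=6):
        self.count = 0
        self.cards_seen = 0
        self.decks = decks

    def see(self, card):
        """Update the running count for one exposed card."""
        self.count += HI_LO[card]
        self.cards_seen += 1

    def true_count(self):
        """Running count per undealt deck (floored at half a deck)."""
        decks_left = max(self.decks - self.cards_seen / 52, 0.5)
        return self.count / decks_left


counter = RunningCount(decks=6)
for card in [5, "K", 2, 9, "A", 4]:       # cards exposed this round
    counter.see(card)
print(counter.count)                      # 1, i.e. +1 -1 +1 +0 -1 +1
print(round(counter.true_count(), 2))     # ~0.17 with almost 6 decks left
```

A high positive count means the undealt cards are rich in tens and aces, which favors the player and justifies larger bets and some strategy deviations.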
Card counting works best when few cards remain unseen, which makes single-deck games better for counters. As a result, casinos are more likely to insist that players do not reveal their cards to one another in single-deck games. In games with more decks, casinos limit penetration by ending the shoe and reshuffling when one or more decks remain undealt. Casinos also sometimes use a shuffling machine to reintroduce the cards whenever a deck has been played. Card counting is legal, but a casino might inform counters that they are no longer welcome to play blackjack, and sometimes a casino might ban a card counter from the property. The use of external devices to assist in card counting is illegal in Nevada. Shuffle tracking. Another advantage play technique, mainly applicable in multi-deck games, involves tracking groups of cards (also known as slugs, clumps, or packs) through the shuffle and then playing and betting accordingly when those cards come into play from a new shoe. Shuffle tracking requires excellent eyesight and powers of visual estimation but is harder to detect, since shuffle trackers' actions are largely unrelated to the composition of the cards in the shoe. Arnold Snyder's articles in "Blackjack Forum" magazine brought shuffle tracking to the general public. His book, "The Shuffle Tracker's Cookbook", mathematically analyzed the player edge available from shuffle tracking based on the actual size of the tracked slug. Jerry L. Patterson also developed and published a shuffle-tracking method for tracking favorable clumps of cards and cutting them into play, and for tracking unfavorable clumps and cutting them out of play. Identifying concealed cards. The player can also gain an advantage by identifying cards from distinctive wear markings on their backs, or by hole carding (observing, during the dealing process, the front of a card dealt face-down). These methods are generally legal, although their status in particular jurisdictions may vary. Side bets. Many blackjack tables offer side bets on various outcomes, of which insurance is the most widespread. The side wager is typically placed in a designated area next to the box for the main wager. A player wishing to wager on a side bet usually must also place a wager on blackjack. Some games require that the blackjack wager equal or exceed any side bet wager. A non-controlling player of a blackjack hand is usually permitted to place a side bet regardless of whether the controlling player does so. The house edge for side bets is generally higher than for the blackjack game itself. Nonetheless, side bets can be susceptible to card counting. A side count designed specifically for a particular side bet can improve the player's edge. Only a few side bets, like "Insurance" and "Lucky Ladies", correlate well with the high-low counting system and offer a sufficient win rate to justify the effort of advantage play. In team play, it is common for team members to be dedicated to counting only a side bet using a specialized count. Video blackjack. Some casinos, as well as general betting outlets, provide blackjack among a selection of casino-style games at electronic consoles. Video blackjack game rules are generally more favorable to the house; e.g., paying out only even money for winning blackjacks. Video and online blackjack games generally deal each round from a fresh shoe (i.e., use an RNG for each deal), rendering card counting ineffective in most situations. Variants and related games. Blackjack is a member of a family of traditional card games played recreationally worldwide.
Most of these games have not been adapted for casino play. Furthermore, the casino game development industry actively produces blackjack variants, most of which are ultimately not adopted by casinos; only a handful of variants have become prominent and established in casinos. Examples of local traditional and recreational related games include French "vingt-et-un" ('twenty-one') and German "Siebzehn und Vier" ('seventeen and four'). Neither game allows splitting. An ace counts only as eleven, but two aces count as a blackjack. "Siebzehn und Vier" is mostly played in private circles and barracks. The popular British member of the "vingt-un" family is called "pontoon", the name probably being a corruption of "vingt-et-un". Blackjack Hall of Fame. In 2002, professional gamblers worldwide were invited to nominate great blackjack players for admission into the Blackjack Hall of Fame. Seven members were inducted in 2002, with new members inducted every year after. The Hall of Fame is at the Barona Casino in San Diego. Members include Edward O. Thorp, author of the 1960s book "Beat the Dealer"; Ken Uston, who popularized the concept of team play; Arnold Snyder, author and editor of the "Blackjack Forum" trade journal; and Stanford Wong, author and popularizer of "Wonging".
3982
49907020
https://en.wikipedia.org/wiki?curid=3982
Bicarbonate
In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO₃⁻. Bicarbonate serves a crucial biochemical role in the physiological pH buffering system. The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The name lives on as a trivial name. Chemical properties. The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO₃⁻ and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid (HNO₃). The bicarbonate ion carries a formal charge of −1 and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid (H₂CO₃) and the conjugate acid of the carbonate ion (CO₃²⁻), as shown by these equilibrium reactions: H₂CO₃ ⇌ H⁺ + HCO₃⁻ and HCO₃⁻ ⇌ H⁺ + CO₃²⁻. A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality. Physiological role. Bicarbonate is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of the CO₂ in the body is converted into carbonic acid (H₂CO₃), which is the conjugate acid of HCO₃⁻ and can quickly turn into it. With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). It has also recently been demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. Additionally, bicarbonate plays a key role in the digestive system. It raises the internal pH of the stomach after highly acidic digestive juices have finished their digestion of food. Bicarbonate also acts to regulate pH in the small intestine: it is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach. Bicarbonate in the environment. Bicarbonate is the dominant form of dissolved inorganic carbon in sea water and in most fresh waters. As such it is an important sink in the carbon cycle. Some plants, like "Chara", utilize carbonate and produce calcium carbonate (CaCO₃) as a result of biological metabolism. In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide and no new bicarbonate ions are produced, resulting in a rapid fall in pH.
The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle. Other uses. The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO₃, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking. Ammonium bicarbonate is used in the manufacture of some cookies, crackers, and biscuits. Diagnostics. In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051). The parameter "standard bicarbonate concentration" (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of 40 mmHg (5.3 kPa), full oxygen saturation and 36 °C.
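The quantitative link between bicarbonate, dissolved carbon dioxide and pH that underlies both the physiological buffer and these blood measurements is conventionally summarized by the Henderson–Hasselbalch equation, quoted here as standard physiology background rather than from the material above:

$$ \mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}, \qquad \mathrm{p}K_a \approx 6.1, $$

where $P_{\mathrm{CO_2}}$ is in mmHg and 0.03 mmol/(L·mmHg) is the solubility coefficient of CO₂ in plasma. With typical arterial values [HCO₃⁻] = 24 mmol/L and $P_{\mathrm{CO_2}}$ = 40 mmHg, this gives pH ≈ 6.1 + log₁₀(24/1.2) = 6.1 + log₁₀(20) ≈ 7.4, the normal pH of arterial blood.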
3984
2842084
https://en.wikipedia.org/wiki?curid=3984
Bernie Federko
Bernard Allan Federko (born May 12, 1956) is a Canadian former professional ice hockey centre who played fourteen seasons in the National Hockey League from 1976 through 1990. Playing career. Federko began playing hockey at a young age in his home town of Foam Lake, Saskatchewan. He was captain of the 1971 Bantam provincial champions. He also played senior hockey with the local Foam Lake Flyers of the Fishing Lake Hockey League, winning the league scoring title as a bantam-aged player. Federko continued his career with the Saskatoon Blades of the WHL, where he set and still holds the team record for assists. He played three seasons with the Blades, and in his final year with the club he led the league in assists and points in both the regular season "and" playoffs. Federko was drafted 7th overall by the St. Louis Blues in the 1976 NHL Amateur Draft. He started the next season with the Kansas City Blues of the Central Hockey League and was leading the league in points when he was called up mid-season to play 31 games with St. Louis. He scored three hat tricks in those 31 games. In the 1978–79 NHL season, Federko developed into a bona fide star, scoring 95 points. Federko scored 100 points in a season four times and was a consistent and underrated performer for the Blues. He scored at least 90 points in seven of the eight seasons between 1978 and 1986, and became the first player in NHL history to record at least 50 assists in 10 consecutive seasons. However, in an era when Wayne Gretzky was scoring 200 points a season, Federko never got the attention many felt he deserved. In 1986, in a poll conducted by GOAL magazine, he was named the most overlooked talent in hockey. His general manager Ron Caron said he was "a great playmaker. He makes the average or above average player look like a star at times. He's such an unselfish player." On March 19, 1988, Federko became the 22nd NHL player to record 1,000 career points. After a disappointing 1988–89 season as captain, he was traded to the Detroit Red Wings, along with Tony McKegney, for future Blues star Adam Oates and Paul MacLean. In Detroit, Federko reunited with former Blues head coach Jacques Demers, but he had to play behind Steve Yzerman and did not get his desired ice time. After his lowest point output since his rookie season, Federko decided to retire after the 1989–90 season, having played exactly 1,000 NHL games; his final game was on April 1, 1990. Post-NHL career. Less than a year after he retired as a player, the Blues retired his number 24 in his honour on March 16, 1991. Federko was inducted into the Hockey Hall of Fame in 2002, the first Hall of Famer to earn his credentials primarily as a Blue. Federko is currently a television colour commentator and studio analyst for Bally Sports Midwest during Blues broadcasts. He was the head coach and general manager of the St. Louis Vipers roller hockey team of Roller Hockey International for the 1993 and 1994 seasons.
3985
49464013
https://en.wikipedia.org/wiki?curid=3985
Buffalo, New York
Buffalo is a city in the U.S. state of New York. It lies in Western New York at the eastern end of Lake Erie, at the head of the Niagara River on the Canadian border. It is the second-most populous city in New York, with a population of 278,349 at the 2020 census, while the Buffalo–Niagara Falls metropolitan area, with over 1.16 million residents, is the 51st-largest metropolitan area in the United States. It is the county seat of Erie County. Before the 17th century, the region was inhabited by nomadic Paleo-Indians, who were succeeded by the Neutral, Erie, and Iroquois nations. In the early 17th century, the French began to explore the region. In the 18th century, Iroquois land surrounding Buffalo Creek was ceded through the Holland Land Purchase, and a small village was established at its headwaters. In 1825, after its harbor was improved, Buffalo was selected as the terminus of the Erie Canal, which led to its incorporation in 1832. The canal stimulated its growth as the primary inland port between the Great Lakes and the Atlantic Ocean. Transshipment made Buffalo the world's largest grain port of that era. After the coming of railroads greatly reduced the canal's importance, the city became the second-largest railway hub (after Chicago). During the mid-19th century, Buffalo transitioned to manufacturing, which came to be dominated by steel production. Later, deindustrialization and the opening of the St. Lawrence Seaway saw the city's economy decline and diversify. It developed its service industries, such as health care, retail, tourism, logistics, and education, while retaining some manufacturing. In 2019, the gross domestic product of the Buffalo–Niagara Falls MSA was $53 billion. The city's cultural landmarks include the oldest urban parks system in the United States, the Buffalo AKG Art Museum, the Buffalo History Museum, the Buffalo Philharmonic Orchestra, Shea's Performing Arts Center, the Buffalo Museum of Science, and several annual festivals. Its educational institutions include the University at Buffalo, Buffalo State University, Canisius University, and D'Youville University. Buffalo is also known for its winter weather, Buffalo wings, and two major-league sports teams: the National Football League's Buffalo Bills and the National Hockey League's Buffalo Sabres. History. Pre-Columbian era to European exploration. Before the arrival of Europeans, nomadic Paleo-Indians inhabited the western New York region from the 8th millennium BCE. The Woodland period began around 1000 BC, marked by the rise of the Iroquois Confederacy and the spread of its tribes throughout the state. Seventeenth-century Jesuit missionaries were the first Europeans to visit the area. During French exploration of the region in 1620, the region was sparsely populated and occupied by the agrarian Erie people in the south and the Neutral Nation in the north, with a relatively small tribe, the Wenrohronon, between them and the Senecas, an Iroquois tribe occupying the land just east of the region. The Neutral grew tobacco and hemp to trade with the Iroquois, who traded furs with the French for European goods. The tribes used animal and war paths to travel and move goods across what is today New York State. (Centuries later, these same paths were gradually improved, then paved, then developed into major modern roads.) Traditional Seneca oral legends, as recounted by professional storytellers known as Hagéotâ, were highly participatory.
These tales were told only during winter, as they were believed to have the power to put even animals and plants to sleep, which could affect the harvest. At the conclusion, audience members typically offered gifts, such as tobacco, to the storyteller as a sign of appreciation. During the Beaver Wars in the mid-17th century, the Senecas conquered the Erie and Neutrals in the region. Native Americans did not settle along Buffalo Creek permanently until 1780, when displaced Senecas were relocated from Fort Niagara. A Seneca town whose name means "Between the basswoods" was historically located on Buffalo Creek, and that name continues to be used as the Seneca name for the modern city of Buffalo. Louis Hennepin and Sieur de La Salle explored the upper Niagara and Ontario regions in the late 1670s. In 1679, La Salle's ship, Le Griffon, became the first to sail above Niagara Falls, near Cayuga Creek. Baron de Lahontan visited the site of Buffalo in 1687. A small French settlement along Buffalo Creek lasted for only a year (1758). After the French and Indian War, the region was ruled by Britain. After the American Revolution, New York—now a U.S. state—began westward expansion, looking for arable land by following the Iroquois. New York and Massachusetts were vying for the territory which included Buffalo, and Massachusetts had the right to purchase all but a one-mile-(1600-meter)-wide portion of land. The rights to the Massachusetts territories were sold to Robert Morris in 1791. Despite objections from Seneca chief Red Jacket, Morris brokered a deal between fellow chief Cornplanter and the Dutch dummy corporation Holland Land Company. The Holland Land Purchase left the Senecas three reservations, while the Holland Land Company received the remaining land for about thirty-three cents per acre. The first permanent white settlers along the creek were prisoners captured during the Revolutionary War. Early landowners were the Iroquois interpreter Captain William Johnston, the formerly enslaved Joseph "Black Joe" Hodges and Cornelius Winney, a Dutch trader who arrived in 1789. As a result of the war, in which the Iroquois sided with the British Army, Iroquois territory was gradually reduced in the late 1700s by European settlers through successive statewide treaties, including the Treaty of Fort Stanwix (1784) and the First Treaty of Buffalo Creek (1788). The Iroquois were moved onto reservations, including Buffalo Creek. By the end of the 18th century, only a small area of reservation land remained. After the Treaty of Big Tree removed Iroquois title to lands west of the Genesee River in 1797, Joseph Ellicott surveyed land at the mouth of Buffalo Creek. In the middle of the village was an intersection of eight streets at present-day Niagara Square. Originally named New Amsterdam, the village was soon renamed Buffalo. Erie Canal, grain and commerce. The village of Buffalo was named for Buffalo Creek. British military engineer John Montresor referred to "Buffalo Creek" in his 1764 journal, the earliest recorded appearance of the name. A road to Pennsylvania from Buffalo was built in 1802 for migrants traveling to the Connecticut Western Reserve in Ohio. Before an east–west turnpike across the state was completed, traveling from Albany to Buffalo would take a week; a trip from nearby Williamsville to Batavia could take over three days. British forces burned Buffalo and the northwestern village of Black Rock in 1813.
The battle and subsequent fire were in response to the destruction of Niagara-on-the-Lake by American forces and other skirmishes during the War of 1812. Rebuilding was swift, completed in 1815. Residents of the remote outpost hoped that the proposed Erie Canal would bring prosperity to the area. To that end, Buffalo's harbor was expanded with the help of Samuel Wilkeson, and the village was selected as the canal's terminus over rival Black Rock. The canal opened in 1825, ushering in commerce, manufacturing and hydropower. By the following year, the Buffalo Creek Reservation (at the western border of the village) was transferred to Buffalo. Buffalo was incorporated as a city in 1832. During the 1830s, businessman Benjamin Rathbun significantly expanded its business district. The city doubled in size from 1845 to 1855. Almost two-thirds of the city's population was foreign-born, largely a mix of unskilled or uneducated Irish and German Catholics. Fugitive slaves made their way north to Buffalo during the 1840s. Buffalo was a terminus of the Underground Railroad, with many free Black people crossing the Niagara River to Fort Erie, Ontario; others remained in Buffalo. During this time, Buffalo's port continued to develop. Passenger and commercial traffic expanded, leading to the creation of feeder canals and the expansion of the city's harbor. Unloading grain in Buffalo was a laborious job, and grain handlers working on lake freighters would make $1.50 a day in a six-day work week. Local inventor Joseph Dart and engineer Robert Dunbar created the grain elevator in 1843, adapting the steam-powered elevator. Dart's Elevator initially processed one thousand bushels per hour, speeding global distribution to consumers. Buffalo was the transshipment hub of the Great Lakes, and weather, maritime and political events in other Great Lakes cities had a direct impact on the city's economy. In addition to grain, Buffalo's primary imports included agricultural products from the Midwest (meat, whiskey, lumber and tobacco), and its exports included leather, ships and iron products. The mid-19th century saw the rise of new manufacturing capabilities, particularly with iron. By the 1860s, many railroads terminated in Buffalo, including the Buffalo, Bradford and Pittsburgh Railroad, the Buffalo and Erie Railroad, the New York Central Railroad, and the Lehigh Valley Railroad. During this time, Buffalo controlled one-quarter of all shipping traffic on Lake Erie. After the Civil War, canal traffic began to drop as railroads expanded into Buffalo. Unionization began to take hold in the late 19th century, highlighted by the Great Railroad Strike of 1877 and the 1892 Buffalo switchmen's strike. Steel, challenges, and the modern era. At the start of the 20th century, Buffalo was the world's leading grain port and a national flour-milling hub. Local mills were among the first to benefit from hydroelectricity generated by the Niagara River. Buffalo hosted the 1901 Pan-American Exposition after the Spanish–American War, showcasing the nation's advances in art, architecture, and electricity. Its centerpiece was the Electric Tower, with over two million light bulbs, but some exhibits were jingoistic and racially charged. At the exposition, President William McKinley was assassinated by anarchist Leon Czolgosz. When McKinley died, Theodore Roosevelt was sworn in at the Wilcox Mansion in Buffalo.
Attorney John Milburn and local industrialists convinced the Lackawanna Iron and Steel Company to relocate from Scranton, Pennsylvania, to the town of West Seneca in 1904. Employment was competitive, with many Eastern Europeans and Scrantonians vying for jobs. From the late 19th century to the 1920s, mergers and acquisitions led to distant ownership of local companies, which had a negative effect on the city's economy. Examples include the acquisition of Lackawanna Steel by Bethlehem Steel and, later, the relocation of Curtiss-Wright in the 1940s. The Great Depression saw severe unemployment, especially among the working class. New Deal relief programs operated in full force, and the city became a stronghold of labor unions and the Democratic Party. During World War II, Buffalo regained its manufacturing strength as military contracts enabled the city to manufacture steel, chemicals, aircraft, trucks and ammunition. In 1950, Buffalo was the 15th-most-populous US city, and its economy relied almost entirely on manufacturing; eighty percent of area jobs were in the sector. The city also had over a dozen railway terminals, as railroads remained a significant industry. The St. Lawrence Seaway was proposed in the 19th century as a faster shipping route to Europe, and later as part of a binational hydroelectric project with Canada. Its combination with an expanded Welland Canal led to a grim outlook for Buffalo's economy. After the seaway opened in 1959, the city's port and barge canal became largely irrelevant. Shipbuilding in Buffalo wound down in the 1960s due to reduced waterfront activity, ending an industry which had been part of the city's economy since 1812. Downsizing of the steel mills was attributed to the threat of higher wages and unionization efforts. Racial tensions culminated in riots in 1967. Suburbanization led to the selection of the town of Amherst for the new University at Buffalo campus by 1970. Unwilling to modernize its plant, Bethlehem Steel began cutting thousands of jobs in Lackawanna during the mid-1970s before closing the plant in 1983. The region lost at least 70,000 jobs between 1970 and 1984. Like much of the Rust Belt, Buffalo has focused on recovering from the effects of late-20th-century deindustrialization. Geography. Topography. Buffalo is at the eastern end of Lake Erie, opposite Fort Erie, Ontario, at the head of the Niagara River, which flows north over Niagara Falls into Lake Ontario. The Buffalo metropolitan area is on the Erie/Ontario Lake Plain of the Eastern Great Lakes Lowlands, a narrow plain extending east to Utica, New York. The city is generally flat, except for elevation changes in the University Heights and Fruit Belt neighborhoods. The Southtowns are hillier, leading to the Cattaraugus Hills in the Appalachian Upland. Several types of shale, limestone and lagerstätten are prevalent in Buffalo and its surrounding area, lining their stream beds. According to Fox Weather, Buffalo is one of the top five snowiest large cities in the country, receiving on average 95 inches of snow annually. Although the city has not experienced any recent or significant earthquakes, Buffalo is in the Southern Great Lakes Seismic Zone (part of the Great Lakes tectonic zone). Buffalo has four channels within its boundaries: the Niagara River, the Buffalo River (and Creek), Scajaquada Creek, and the Black Rock Canal, adjacent to the Niagara River. The city's Bureau of Forestry maintains a database of over seventy thousand trees.
According to the United States Census Bureau, 22.66 percent of Buffalo's total area is water. In 2010, its population density was 6,470.6 per square mile. Cityscape. Buffalo's architecture is diverse, with a collection of 19th- and 20th-century buildings. Downtown Buffalo landmarks include Louis Sullivan's Guaranty Building, an early skyscraper; the Ellicott Square Building, once one of the largest of its kind in the world; the Art Deco Buffalo City Hall; the McKinley Monument; and the Electric Tower. Beyond downtown, the Buffalo Central Terminal was built in the Broadway-Fillmore neighborhood in 1929; the Richardson Olmsted Complex, built in 1881, was an insane asylum until its closure in the 1970s. Urban renewal from the 1950s to the 1970s spawned the Brutalist-style Buffalo City Court Building and Seneca One Tower, the city's tallest building. In the city's Parkside neighborhood, the Darwin D. Martin House was designed by Frank Lloyd Wright in his Prairie School style. Since 2016, Washington DC real estate developer Douglas Jemal has been acquiring and redeveloping iconic properties throughout the city. Neighborhoods. According to Mark Goldman, the city has a "tradition of separate and independent settlements". The boundaries of Buffalo's neighborhoods have changed over time. The city is divided into five districts, each containing several neighborhoods, for a total of thirty-five neighborhoods. Main Street divides Buffalo's east and west sides, and the west side was fully developed earlier. This division is seen in architectural styles, street names, neighborhood and district boundaries, demographics, and socioeconomic conditions; Buffalo's West Side is generally more affluent than its East Side. Several neighborhoods in Buffalo have had increased investment since the 1990s, beginning with the Elmwood Village. The 2002 redevelopment of the Larkin Terminal Warehouse led to the creation of Larkinville, home to several mixed-use projects and anchored by corporate offices. Downtown Buffalo and its central business district (CBD) had a 10.6-percent increase in residents from 2010 to 2017, as over 1,061 housing units became available; the Seneca One Tower was redeveloped in 2020. Other revitalized areas include Chandler Street, in the Grant-Amherst neighborhood, and Hertel Avenue in Parkside. The Buffalo Common Council adopted its Green Code in 2017, replacing zoning regulations which were over sixty years old. Its emphasis on regulations promoting pedestrian safety and mixed land use received an award at the 2019 Congress for the New Urbanism conference. Climate. [[File:Snow removal via frontloader on Cottage Street after December 2019 winter storm, Buffalo, New York - 20191211.jpg|thumb|left|alt=Snowy city streets, seen from above|Buffalo in winter, 2019]] Buffalo has a [[humid continental climate]] ([[Köppen climate classification|Köppen]]: [[humid continental climate#Hot summer subtype|Dfa]]), and temperatures have been [[Climate change in the U.S.|warming]] with the rest of the US. [[Lake-effect snow]] is characteristic of Buffalo winters, with [[Snowsquall#Frontal snowsquall|snow bands]] (producing intense snowfall in the city and surrounding area) depending on wind direction off Lake Erie. However, Buffalo is rarely the [[Golden Snowball Award|snowiest city in the state]]. The [[Blizzard of 1977]] resulted from a combination of high winds and snow which accumulated on land and on the frozen [[Lake Erie]]. 
Although snow does not typically impair the city's operation, it can cause significant damage in autumn (as the [[Lake Storm Aphid|October 2006 storm]] did). In November 2014 (called "[[November 13–21, 2014 North American winter storm|Snowvember]]"), the region had a [[November 13–21, 2014 North American winter storm|record-breaking storm]] which produced over seven feet of snow. Buffalo's lowest recorded temperature was −20 °F (−29 °C), which occurred twice: on February 9, 1934, and February 2, 1961. Although the city's summers are drier and sunnier than other cities in the northeastern United States, its vegetation receives enough precipitation to remain hydrated. Buffalo summers are characterized by abundant sunshine, with moderate [[humidity]] and temperatures; the city benefits from cool, southwestern Lake Erie summer breezes which temper warmer temperatures. Temperatures rise above 90 °F (32 °C) an average of three times a year. No official reading of 100 °F (38 °C) or more has occurred to date, with a maximum temperature of 99 °F (37 °C) reached on August 27, 1948. Rainfall is moderate, typically falling at night, and cooler lake temperatures hinder storm development in July. August is usually rainier and [[wikt:muggier|muggier]], as the warmer lake loses its temperature-controlling ability. Demographics. [[File:Race and ethnicity 2010- Buffalo (5559869161) (cropped).png|thumb|left|alt=See caption|Racial distribution in Buffalo in 2010: Each dot represents 25 residents.]] Several hundred Seneca, Tuscarora and other Iroquois tribal peoples were the primary residents of the Buffalo area before 1800, concentrated along Buffalo Creek. After the Revolutionary War, settlers from New England and eastern New York began to move into the area. From the 1830s to the 1850s, they were joined by Irish and German immigrants from Europe, both peasants and working class, who settled in enclaves on the city's south and east sides. At the turn of the 20th century, Polish immigrants replaced Germans on the East Side as the latter moved to newer housing; Italian immigrant families settled throughout the city, primarily on the lower West Side. During the 1830s, Buffalo residents were generally intolerant of the small groups of [[African Americans|Black Americans]] who began settling on the city's East Side. In the 20th century, wartime and manufacturing jobs attracted [[Black Belt in the American South|Black Americans from the South]] during the [[Great Migration (African American)|First]] and [[Second Great Migration (African American)|Second Great Migrations]]. In the World War II and postwar years from 1940 to 1970, the city's Black population rose by 433 percent. They replaced most of the Polish community on the East Side, which was moving out to the suburbs. However, the effects of [[redlining]], steering, [[social inequality]], [[blockbusting]], [[white flight]] and other racial policies resulted in the city (and region) becoming one of the most [[Residential segregation in the United States|segregated]] in the U.S. During the 1940s and 1950s, [[Puerto Ricans|Puerto Rican]] migrants arrived en masse, also seeking industrial jobs, settling on the East Side and moving westward. In the 21st century, Buffalo is classified as a [[Majority minority|majority minority city]], with a plurality of residents who are Black and Latino. Buffalo has experienced effects of [[urban decay]] since the 1970s, and has also seen population loss to the suburbs and [[Sun Belt]] states and job losses from deindustrialization. 
The city's population peaked at 580,132 in 1950, when Buffalo was the 15th-largest city in the United States, down from the eighth-largest in 1900 after its growth rate slowed during the 1920s. Buffalo finally saw a population gain of 6.5% in the 2020 census, reversing a decades-long trend of population decline. The city has 278,349 residents as of the 2020 census, making it the [[List of United States cities by population|76th-most populous city in the United States]]. Its metropolitan area had 1.1 million residents in 2020, the country's 49th-largest. [[File:Ethnic Origins in Buffalo, NY.png|left|thumb|235x235px|Ethnic origins in Buffalo]] Compared to other major US metropolitan areas, the number of foreign-born immigrants to Buffalo is low. New immigrants are primarily resettled refugees (especially from war- or disaster-affected nations) and refugees who had previously settled in other U.S. cities. During the early 2000s, most immigrants came from [[Canadian Americans|Canada]] and [[Yemeni Americans|Yemen]]; this shifted in the 2010s to [[Myanmar|Burmese]] ([[Karen people|Karen]]) refugees and [[Bangladeshi people|Bangladeshi]] immigrants. Between 2008 and 2016, Burmese, [[Somali Americans|Somali]], [[Bhutanese Americans|Bhutanese]], and [[Iraqi Americans]] were the four largest ethnic immigrant groups in Erie County. A 2008 report noted that although [[food desert]]s were seen in larger cities and not in Buffalo, the city's neighborhoods of color have access only to smaller grocery stores and lack the supermarkets more typical of newer, white neighborhoods. A 2018 report noted that over fifty city blocks on Buffalo's East Side lacked adequate access to a supermarket. Health disparities exist compared to the rest of [[New York (state)|the state]]: Erie County's average 2019 lifespan was three years lower (78.4 years); its 17-percent [[Smoking in the United States|smoking]] and 30-percent [[Obesity in the United States|obesity]] rates were slightly higher than the state average. According to the Partnership for the Public Good, educational achievement in the city is lower than in the surrounding area; city residents are almost twice as likely as adults in the metropolitan area to lack a high-school diploma. Religion. [[File:Temple Beth Zion 2.jpg|thumb|[[Temple Beth Zion (Buffalo, New York)|Temple Beth Zion]]]] During the early 19th century, [[Seneca mission|Presbyterian missionaries]] tried to convert the [[Seneca people]] on the Buffalo Creek Reservation to Christianity. Initially resistant, some tribal members set aside their traditions and practices to form their own sect. Later, European immigrants added other faiths. Christianity is the predominant religion in Buffalo and Western New York. [[Catholicism]] (primarily the [[Latin Church]]) has a significant presence in the region, with 161 [[parish]]es and over 570,000 adherents in the [[Diocese of Buffalo]]. A [[Jewish diaspora|Jewish]] community began developing in the city in the mid-1800s; about one thousand [[History of the Jews in Germany|German]] and [[Lithuanian Jews]] settled in Buffalo before 1880. Buffalo's first [[synagogue]], Temple Beth El, was established in 1847. The city's [[Temple Beth Zion (Buffalo, New York)|Temple Beth Zion]] is the region's largest synagogue. With changing demographics and an increased number of refugees from other areas on the city's East Side, Islam and Buddhism have expanded their presence. 
In this area, new residents have converted empty churches into [[mosque]]s and Buddhist temples. Hinduism maintains a small, active presence in the area, including the town of Amherst. A 2016 [[American Bible Society]] survey reported that Buffalo is the fifth-least "Bible-minded" city in the United States; 13 percent of its residents associate with the [[Bible]]. Economy. The Erie Canal was the impetus for Buffalo's economic growth as a transshipment hub for grain and other agricultural products headed east from the Midwest. Later, manufacturing of steel and automotive parts became central to the city's economy. When these industries downsized in the region, Buffalo's economy became service-based. Its primary sectors include health care, business services (banking, accounting, and insurance), retail, tourism and [[logistics]], especially with Canada. Despite the loss of large-scale manufacturing, some manufacturing of metals, chemicals, machinery, food products, and electronics remains in the region. Advanced manufacturing has increased, with an emphasis on [[research and development]] (R&D) and [[automation]]. In 2019, the U.S. [[Bureau of Economic Analysis]] valued the [[gross domestic product]] (GDP) of the Buffalo–Niagara Falls MSA at $53 billion. The [[civic sector]] is a major source of employment in the Buffalo area, and includes public, non-profit, healthcare and educational institutions. New York State, with over 19,000 employees, is the region's largest employer. In the private sector, top employers include the [[Kaleida Health]] and [[Catholic Health]] [[hospital network]]s and [[M&T Bank]], the sole [[Fortune 500]] company headquartered in the city. Most have been the top employers in the region for several decades. Buffalo is home to the headquarters of [[Rich Products]], [[Delaware North]] and [[New Era Cap Company]]; the [[aerospace]] manufacturer [[Moog Inc.]] and toy maker [[Fisher-Price]] are based in nearby [[East Aurora]]. [[National Fuel Gas]] and [[Life Storage]] are headquartered in [[Williamsville, New York]]. Buffalo weathered the [[Great Recession]] of 2007–09 well in comparison with other U.S. cities, exemplified by increased home prices during this time. The region's economy began to improve in the early 2010s, adding over 25,000 jobs from 2009 to 2017. With [[Buffalo Billion|state aid]], [[Tesla, Inc.]]'s [[Giga New York]] plant opened in South Buffalo in 2017. The effects of the [[COVID-19 pandemic in the United States]], however, increased the local unemployment rate to 7.5 percent by December 2020, up from 4.2 percent in 2019 (itself higher than the national average of 3.5 percent). Culture. Performing arts and music. [[File:Shea’s Buffalo Theater, Main Street, Buffalo, NY.jpg|thumb|right|[[Shea's Performing Arts Center]]]] Buffalo is home to over 20 theater companies, with many centered in the downtown [[Buffalo Theatre District|Theatre District]]. [[Shea's Performing Arts Center]] is the city's largest theater. Built in 1926 with interiors designed by [[Louis Comfort Tiffany]], the theater presents [[Broadway theatre|Broadway musicals]] and concerts. [[Shakespeare in Delaware Park]] has been held outdoors every summer since 1976. [[Stand-up comedy]] can be found throughout the city and is anchored by Helium Comedy Club, which hosts both local talent and national touring acts. The [[Nickel City Opera]] (also known as [[Nickel City Opera|NC Opera Buffalo]] and [[Nickel City Opera|NCO]]) is an [[opera company]] based in Buffalo. 
It was founded in 2004 by [[Valerian Ruminski]] and operated between 2009 and 2024. NCO collaborated with the Buffalo Philharmonic Orchestra, commissioned an opera, and staged operatic works. Matthias Manasi was music director of Nickel City Opera from 2017 to 2021; his predecessor, [[Michael Ching]], held the post from 2012 to 2017. [[Shea's Performing Arts Center]] was designed by the well-known [[Chicago]] firm [[Rapp and Rapp]]. The [[opera house]] was modeled in the [[Architectural style|style]] of European opera houses and decorated in a combination of French and Spanish Baroque and Rococo styles. Its interior was designed by the [[artist]] [[Louis Comfort Tiffany]], and many of its elements remain today. Originally there were nearly 4,000 seats; in the [[1930s]] the number was reduced to the current 3,019, in part to make room for a larger [[orchestra pit]]. [[File:Kleinhans buffalo.jpg|thumb|right|[[Kleinhans Music Hall]]]] The [[Buffalo Philharmonic Orchestra]] was formed in 1935 and performs at [[Kleinhans Music Hall]], whose acoustics have been praised. Although the orchestra nearly disbanded during the late 1990s due to a lack of funding, philanthropic contributions and state aid stabilized it. Under the direction of [[JoAnn Falletta]], the orchestra has received a number of [[Grammy Awards|Grammy Award]] nominations and won the [[Grammy Award for Best Contemporary Classical Composition#2000s|Grammy Award for Best Contemporary Classical Composition]] in 2009. [[KeyBank Center]] draws national music acts year-round. [[Sahlen Field]] hosts the annual [[WYRK]] Taste of Country music festival every summer with national [[country music]] acts. [[Canalside]] regularly hosts outdoor summer concerts, a tradition that spun off from the defunct [[Thursday at the Square]] concert series. The [[Colored Musicians Club]], an extension of a formerly separate musicians'-union chapter, preserves the city's jazz history. [[Rick James]] was born and raised in Buffalo and later lived on a ranch in the nearby [[Aurora, Erie County, New York|Town of Aurora]]. James formed his Stone City Band in Buffalo and had national appeal with several [[crossover music|crossover single]]s in the [[Contemporary R&B|R&B]], [[disco]] and [[funk]] genres in the late 1970s and early 1980s. Around the same time, the [[jazz fusion]] band [[Spyro Gyra]] and jazz [[saxophonist]] [[Grover Washington Jr.]] also got their start in the city. The [[Goo Goo Dolls]], an [[alternative rock]] group which formed in 1986, had 19 top-ten singles. Singer-songwriter and activist [[Ani DiFranco]] has released over 20 folk and [[indie rock]] albums on [[Righteous Babe Records]], her Buffalo-based label. Underground hip-hop acts in the city partner with Buffalo-based [[Griselda Records]], whose artists include [[Westside Gunn]], [[Conway the Machine]], and [[Benny the Butcher]], who all occasionally refer to Buffalo culture in their lyrics. Cuisine. [[File:Buffalo - Wings at Airport Anchor Bar.jpg|thumb|left|alt=Buffalo wings and celery, with a blue-cheese dip|Buffalo wings with [[celery]] and blue cheese]] The city's cuisine encompasses a variety of cultures and ethnicities. In 2015, the [[National Geographic Society]] ranked Buffalo third on its "World's Top Ten Food Cities" list. Teressa Bellissimo first prepared [[Buffalo wing]]s (seasoned chicken wings) at the [[Anchor Bar]] in 1964. 
The Anchor Bar has a crosstown rivalry with [[Duff's Famous Wings]], but Buffalo wings are served at many bars and restaurants throughout the city (some with unique cooking styles and flavor profiles). Buffalo wings are traditionally served with [[blue cheese dressing]] and celery. In 2003, the Anchor Bar received a [[James Beard Foundation Award]] in the America's Classics category. The Buffalo area has over 600 pizzerias, estimated to be more per capita than New York City. Several [[Craft brewery and microbrewery|craft breweries]] began opening in the 1990s, and the city's [[last call (bar term)|last call]] is 4 am. Other mainstays of Buffalo cuisine include [[beef on weck]], [[butter lamb]]s, [[kielbasa]], [[pierogi]], [[sponge candy]], chicken finger subs (including the stinger, a version that also includes steak), and the [[Fish and chips|fish fry]] (popular any time of year, but especially during [[Lent]]). With an influx of refugees and other immigrants to Buffalo, its number of ethnic restaurants (including the West Side Bazaar [[kitchen incubator]]) has increased. Some restaurants use [[food truck]]s to serve customers, and nearly fifty food trucks appeared at Larkin Square in 2019. Museums and tourism. [[File:Albright-Knox Art Gallery 2.jpg|thumb|The Albright–Knox Art Gallery, seen from Hoyt Lake in [[Delaware Park–Front Park System|Delaware Park]]]] Buffalo was ranked the seventh-best city in the United States to visit in 2021 by "[[Travel + Leisure]]", which noted the growth and potential of the city's cultural institutions. The [[Albright–Knox Art Gallery]] is a [[Modern art|modern]] and [[contemporary art]] museum with a collection of more than 8,000 works, of which only two percent are on display. With a donation from [[Jeffrey Gundlach]], a three-story addition designed by the Dutch architectural firm [[Office for Metropolitan Architecture|OMA]] opened in June 2023. Across the street, the [[Burchfield Penney Art Center]] contains paintings by [[Charles E. Burchfield]] and is operated by [[Buffalo State College]]. Buffalo is home to the [[Freedom Wall]], a 2017 art installation commemorating civil-rights activists throughout history. Near both museums is the [[Buffalo History Museum]], featuring artwork, literature and exhibits related to the city's history and major events, and the [[Buffalo Museum of Science]] is on the city's East Side. [[Canalside]], Buffalo's historic business district and harbor, attracts more than 1.5 million visitors annually. It includes the [[Explore & More Children's Museum]], the [[Buffalo and Erie County Naval & Military Park]], [[LECOM Harborcenter]], and a number of shops and restaurants. A restored 1924 carousel (now solar-powered) and a replica boathouse were added to Canalside in 2021. Other city attractions include the Theodore Roosevelt Inaugural National Historic Site, the [[Michigan Street Baptist Church]], [[Buffalo RiverWorks]], [[Seneca Buffalo Creek Casino]], [[Buffalo Transportation Pierce-Arrow Museum]], and the [[Rev. J. Edward Nash Sr. House|Nash House Museum]]. The [[National Buffalo Wing Festival]] is held every [[Labor Day]] at [[Sahlen Field]]. Since 2002, it has served over 4.8 million Buffalo wings and has had a total attendance of 865,000. The [[Taste of Buffalo]] is a two-day food festival held in July at Niagara Square, attracting 450,000 visitors annually. 
Other events include the [[Allentown Art Festival]], the Polish-American [[Dyngus Day]], the Elmwood Avenue Festival of the Arts, [[Juneteenth]] in [[Martin Luther King Jr. Park]], the [[World's Largest Disco]] in October, and the [[Friendship Festival]] in summer, which celebrates Canada-US relations. Sports. Buffalo has three major professional sports teams: the [[Buffalo Sabres]] ([[National Hockey League]]), the [[Buffalo Bills]] ([[National Football League]]), and the [[Buffalo Bandits]] ([[National Lacrosse League]]). The Bills were a founding member of the [[American Football League]] in 1960, and have played at [[Highmark Stadium (New York)|Highmark Stadium]] in [[Orchard Park, New York|Orchard Park]] since they moved from [[War Memorial Stadium (Buffalo, New York)|War Memorial Stadium]] in 1973. They are the only NFL team based in New York State. Before the [[Super Bowl]] era, the Bills won the [[American Football League playoffs|American Football League Championship]] in 1964 and 1965. With mixed success throughout their history, the Bills had a [[Wide Right (Buffalo Bills)|close loss]] in Super Bowl XXV and returned to consecutive Super Bowls after the 1991, 1992, and 1993 seasons (losing each time). The Sabres, an [[expansion team]] in 1970, share [[KeyBank Center]] with the Bandits. The Bandits are the most decorated of the city's professional teams, with seven championships. The Bills, Sabres and Bandits are owned by [[Pegula Sports and Entertainment]]. Buffalo's minor-league professional teams include the [[Buffalo Bisons]] ([[International League]]), who play at [[Sahlen Field]], and [[Buffalo Pro Soccer]] ([[USL Championship]]). Semi-professional teams include the [[Buffalo eXtreme]] ([[American Basketball Association (2000–present)|American Basketball Association]]), [[FC Buffalo]] ([[USL League Two]]), [[FC Buffalo (women)|FC Buffalo Women]] ([[USL W League]]), and [[Buffalo Stallions (NPSL)|Buffalo Stallions]] ([[National Premier Soccer League]]). Several colleges and universities in the area field intercollegiate sports teams; the [[Buffalo Bulls]] and the [[Canisius Golden Griffins]] compete in [[NCAA Division I]]. The Bulls have 16 varsity sports in the [[Mid-American Conference]] (MAC); the Golden Griffins field 15 teams in the [[Metro Atlantic Athletic Conference]] (MAAC), with the men's hockey team part of the [[Atlantic Hockey|Atlantic Hockey Association]] (AHA). The Bulls participate in the [[NCAA Division I Football Bowl Subdivision|Football Bowl Subdivision]], the highest level of college football. Parks and recreation. [[File:TifftNaturePreserve.jpg|thumb|left|alt=Boardwalk through a marsh|[[Tifft Nature Preserve]]]] [[Frederick Law Olmsted]] described Buffalo as being "the best planned city [...] in the United States, if not the world". With encouragement from city stakeholders, he and [[Calvert Vaux]] augmented the city's grid plan by [[Haussmann's renovation of Paris|drawing inspiration from Paris]] and introducing [[landscape architecture]] with aspects of the countryside. Their plan would introduce [[Park system|a system]] of interconnected parks, [[parkway]]s and trails, unlike the singular [[Central Park]] in [[New York City]]. The largest would be [[Delaware Park–Front Park System|Delaware Park]], set across from [[Forest Lawn Cemetery (Buffalo)|Forest Lawn Cemetery]] to amplify the amount of open space. Construction of the system finished in 1876, and it is regarded as the country's oldest; however, some of Olmsted's plans were never fully realized. 
Some parks later diminished and succumbed to diseases, highway construction, and weather events such as Lake Storm Aphid in 2006. The non-profit Buffalo Olmsted Park Conservancy was created in 2004 to help preserve the city's Olmsted parkland. Olmsted's work in Buffalo inspired similar efforts in cities such as San Francisco, Chicago, and Boston. The city's Division of Parks and Recreation manages over 180 parks and facilities, seven recreational centers, twenty-one pools and [[splash pad]]s, and three ice rinks. Delaware Park features the [[Buffalo Zoo]], Hoyt Lake, a golf course, and playing fields. Buffalo collaborated with its sister city [[Kanazawa]] to create the park's Japanese Garden in 1970, where [[cherry blossom]]s bloom in the spring. Opened in 1976, [[Tifft Nature Preserve]] in South Buffalo occupies remediated industrial land. The preserve is an [[Important Bird Area]] and includes a meadow with trails for hiking and [[cross-country skiing]], [[marsh]]land, and fishing areas. The Olmsted-designed [[Cazenovia Park–South Park System|Cazenovia and South Park]]s, the latter home to the [[Buffalo and Erie County Botanical Gardens]], are also in South Buffalo. According to [[the Trust for Public Land]], Buffalo's 2022 ParkScore ranking had high marks for access to parks, with 89 percent of city residents living within a ten-minute walk from a park. The city ranked lower in acreage, however; nine percent of city land is devoted to parks, compared with the national median of about fifteen percent. [[File:Canalside 2.jpg|thumb|Looking down [[Canalside]]'s Central Wharf]] Efforts to convert Buffalo's former industrial waterfront into recreational space have attracted national attention, with some writers comparing its appeal to that of Niagara Falls. Redevelopment of the waterfront began in the early 2000s, with the reconstruction of historically aligned canals on the site of the former [[Buffalo Memorial Auditorium]]. [[Placemaking]] initiatives, rather than permanent buildings and attractions, would lead to the area's popularity. Under Mayor [[Byron Brown]], [[Canalside]] was cited by the Brookings Institution as an example of waterfront revitalization for other U.S. cities to follow. Summer events have included [[Pedalo|paddle-boating]] and fitness classes, and the frozen canals permit [[ice skating]], [[curling]], and [[ice cycle|ice cycling]] in winter. Its success spurred the state to create [[Buffalo Harbor State Park]] in 2014; the park has trails, open recreation areas, bicycle paths and piers. The park's Gallagher Beach, the city's only public beach, has prohibited swimming due to high bacteria levels and other environmental concerns. The Shoreline Trail passes through Buffalo near the Outer Harbor, Centennial Park, and the Black Rock Canal. The North Buffalo–[[Tonawanda (town), New York|Tonawanda]] [[rail trail]] begins in Shoshone Park, near the [[LaSalle station (Buffalo Metro Rail)|LaSalle metro station]] in North Buffalo. Government. [[File:Buffalo City Hall, Interior, thirteenth floor, council chamber.jpg|thumb|left|[[Buffalo Common Council|Common Council]] Chamber, [[Buffalo City Hall]]]] Buffalo has a [[Strong Mayor|strong mayor–council government]]. As the [[Executive (government)|chief executive]] of city government, the mayor oversees the heads of the city's departments, participates in ceremonies, boards and commissions, and serves as the liaison between the city and local cultural institutions. 
Some agencies, including utilities, urban renewal and [[public housing]], are state- and federally funded [[New York state public-benefit corporations|public-benefit corporations]] semi-independent of city government. [[Christopher Scanlon]] has served as acting mayor since 2024, following the resignation of [[Byron Brown]]. No Republican has been mayor of Buffalo since [[Chester A. Kowal]] in 1965. With its nine districts, the [[Buffalo Common Council]] enacts laws, levies taxes, and approves mayoral appointees and the city budget. Bryan Bollman has been the Common Council president since 2024. Generally reflecting the city's electorate, all nine council members are members of the Democratic Party. Buffalo is the [[Erie County, New York|Erie County]] seat, and is within five of the county's eleven legislative districts. The city is part of the [[Judiciary of New York (state)|Eighth Judicial District]]. Court cases handled at the city level include [[misdemeanor]]s, violations, housing matters, and claims under $15,000; more severe cases are handled at the county level. Buffalo is represented by members of the [[New York State Assembly]] and [[New York State Senate]]. At the federal level, the city takes up most of [[New York's 26th congressional district]] and has been represented by Democrat [[Tim Kennedy (politician)|Tim Kennedy]] since 2024. Federal offices in the city include the Buffalo District of the [[United States Army Corps of Engineers|United States Army Corps of Engineers']] [[Great Lakes and Ohio River Division]], the [[List of FBI field offices#New York|Federal Bureau of Investigation]], and the [[United States District Court for the Western District of New York]]. In 2020, the city spent $519 million on the effects of the [[COVID-19 pandemic in New York (state)|COVID-19 pandemic]]. As of 2024, the city is hampered by a severe [[budget deficit]] attributed to the [[Byron Brown]] administration. Public safety. Buffalo is served by the [[Buffalo Police Department]]. The [[police commissioner]] is Byron Lockwood, who was appointed by Mayor Byron Brown in 2018. Although some criminal activity in the city remains higher than the national average, total crimes have decreased since the 1990s; one reason may be the [[gun buyback program]] implemented by the Brown administration in the mid-2000s. Before this, the city was part of the nationwide [[Crack epidemic in the United States|crack epidemic of the 1980s and 1990s]] and its accompanying record-high crime levels. In 2018, city police began using 300 [[Police body camera|body cameras]]. A 2021 Partnership for the Public Good report noted that the BPD, which had a 2020–21 budget of about $145.7 million, had an above-average police-to-citizen ratio of 28.9 officers per 10,000 residents in 2020, higher than peer cities [[Minneapolis]] and [[Toledo, Ohio]]. The force had a roster of 740 officers during the year, about two-thirds of whom handled emergency requests, road patrol and other non-office assignments. The department has been criticized for [[Police brutality in the United States|misconduct and brutality]], including the 2004 wrongful termination of officer Cariol Horne for opposing police brutality toward a suspect and a 2020 [[Buffalo police shoving incident|protest-shoving incident]]. The [[Buffalo Fire Department]] and [[American Medical Response]] (AMR) handle fire-protection and [[emergency medical services]] (EMS) calls in the city. 
The fire department has about 710 firefighters and thirty-five [[Fire station|stations]], including twenty-three [[Glossary of firefighting#E|engine companies]] and twelve [[Glossary of firefighting#L|ladder companies]]. The department also operates the "[[Edward M. Cotter (fireboat)|Edward M. Cotter]]", considered the world's oldest active [[fireboat]]. With vacant and abandoned homes prone to [[arson]], [[squatting]], [[Prostitution in the United States|prostitution]] and other criminal activities, the fire and police departments' resources were overburdened before the 2010s. Buffalo ranked second nationwide to [[St. Louis]] for vacant homes per capita in 2007, and the city began a five-year program to demolish five thousand vacant, damaged and abandoned homes. On [[2022 Buffalo shooting|May 14, 2022, there was a mass shooting]] in a Tops supermarket on the East Side of Buffalo in which 13 people were shot in a racially motivated attack by a [[white supremacist]] who was not a Buffalo native. Ten victims, all of them Black, were killed and three others were injured. Media. [[File:The buffalo news building.jpg|thumb|left|"[[The Buffalo News]]" headquarters]] Buffalo's major daily newspaper is "[[The Buffalo News]]." Established in 1880 as the "Buffalo Evening News," the newspaper is estimated to have a daily circulation of 35,000 (down from a high of 310,000). The newspaper announced a pending sale of its building in February 2023, and the relocation of its printing operations to [[Cleveland, Ohio]]. Other newspapers in the Buffalo area include the Black-focused "[[Buffalo Criterion]]" and "Challenger Community News," "The Record" of Buffalo State University, "[[The Spectrum (University at Buffalo)|The Spectrum]]" of the University at Buffalo, and "[[American City Business Journals|Buffalo Business First]]." "Investigative Post" is an online [[watchdog journalism|watchdog]] news organization founded by former "Buffalo News" reporter and [[Pulitzer Prize|Pulitzer]] nominee Jim Heaney. Eighteen radio stations are licensed in Buffalo, including an FM station at Buffalo State College. Over ninety FM and AM radio signals can be received throughout the city. Eight full-power television outlets serve the city. Major commercial stations include [[WGRZ]] 2 ([[NBC]]), [[WIVB-TV]] 4 ([[CBS]]) and its sister station [[WNLO (TV)|WNLO]] 23 ([[The CW|CW]] [[Owned-and-operated station|O&O]]), [[WKBW-TV]] 7 ([[American Broadcasting Company|ABC]]), and [[WUTV]] 29 ([[Fox Broadcasting Company|Fox]], received in parts of Southern Ontario) and its sister station [[WNYO-TV]] 49 ([[MyNetworkTV]]). Buffalo's public television station is [[WNED-TV]] 17 ([[PBS]]); WNED has reported that most of the station's members live in the [[Greater Toronto Area]]. According to [[Nielsen Media Research]], the Buffalo television market was the 51st-largest in the United States. 
Movies shooting significant footage in Buffalo include "[[Hide in Plain Sight]]" (1980), "[[Tuck Everlasting (1981 film)|Tuck Everlasting]]" (1981), "[[Best Friends (1982 film)|Best Friends]]" (1982), "[[The Natural (film)|The Natural]]" (1984), "[[Vamping]]" (1984), "[[Canadian Bacon]]" (1995), "[[Buffalo '66]]" (1998), "[[Manna from Heaven (film)|Manna from Heaven]]" (2002), "[[Bruce Almighty]]" (2003), "[[The Savages (film)|The Savages]]" (2007), "Slime City Massacre" (2010), "[[Henry's Crime]]" (2011), "[[Sharknado 2: The Second One]]" (2014), "[[Killer Rack]]" (2015), "[[Teenage Mutant Ninja Turtles: Out of the Shadows]]" (2016), "[[Marshall (film)|Marshall]]" (2016), "[[The American Side]]" (2017), "[[The First Purge]]" (2018), "[[The True Adventures of Wolfboy]]" (2019) and "[[A Quiet Place Part II]]" (2021). Although higher Buffalo production costs led to some films being finished elsewhere, tax credits and other economic incentives have enabled new film studios and production facilities to open. In 2021, several studio projects were in the planning stages. Education. Primary and secondary education. [[File:City Honors frontview.JPG|thumb|alt=Multi-story school building|[[City Honors School]]]] The [[Buffalo Public Schools]] have about thirty-four thousand students enrolled in their [[Primary education|primary]] and [[Secondary education|secondary]] schools. The district administers about sixty [[Public school (government funded)|public schools]], including thirty-six [[primary school]]s, five [[Middle school|middle high schools]], fourteen [[Secondary school|high schools]] and three [[alternative school]]s, with a total of about 3,500 teachers. Its [[board of education]], authorized by the state, has nine elected members who select the superintendent and oversee the budget, curriculum, personnel, and facilities. In 2020, the graduation rate was seventy-six percent. The public [[City Honors School]] was ranked the top high school in the city and 178th nationwide by "[[U.S. News & World Report]]" in 2021. There are twenty [[charter school]]s in Buffalo, with some oversight by the district. The city has over a dozen private schools, including [[Bishop Timon – St. Jude High School]], [[Canisius High School]], [[Mount Mercy Academy (Buffalo, New York)|Mount Mercy Academy]], and [[Nardin Academy]] ([[Roman Catholic Diocese of Buffalo|all Roman Catholic]]), as well as [[Darul Uloom Al-Madania]] and Universal School of Buffalo (both Islamic schools); [[nonsectarian]] options include [[Buffalo Seminary]] and the [[Nichols School]]. Colleges and universities. [[File:BuffaloStateOverhead.jpg|thumb|left|The quad at [[Buffalo State College]]]] Founded by [[Millard Fillmore]], the [[University at Buffalo]] (UB) is one of the [[State University of New York]]'s two flagship universities and the state's largest public university. A [[Research I university]], UB enrolls over 32,000 undergraduate, graduate and professional students in its thirteen schools and colleges. Two of UB's three campuses (the South and Downtown Campuses) are in the city, but most university functions take place at the large North Campus in Amherst. In 2020, "[[U.S. News & World Report Best Colleges Ranking|U.S. News & World Report]]" ranked UB the 34th-best public university and 88th among national universities. [[Buffalo State College]], founded as a [[normal school]], is one of SUNY's thirteen comprehensive colleges. 
The city's four-year private institutions include [[Canisius University]], [[D'Youville University]], [[Trocaire College]], and [[Villa Maria College]]. [[SUNY Erie]], the county's two-year public higher-education institution, and the [[Proprietary colleges|for-profit]] [[Bryant & Stratton College]] have small downtown campuses. Libraries. [[File:Reading Park, Central Library, Buffalo, New York - 20190907 - 01.jpg|thumb|alt=A park with chairs fronting a library in a downtown area|Reading Park at Buffalo's Central Library]] Established in 1835, Buffalo's main library is the Central Library of the [[Buffalo & Erie County Public Library]] system. Rebuilt in 1964, it contains an auditorium, the original manuscript of the "[[Adventures of Huckleberry Finn]]" (donated by [[Mark Twain]]), and a collection of about two million books. Its Grosvenor Room maintains a special-collections listing of nearly five hundred thousand resources for researchers. A [[pocket park]] funded by [[Southwest Airlines]] opened in 2020, bringing landscaping improvements and seating to Lafayette Square. The system's free library cards are valid at the city's eight branch libraries and at member libraries throughout Erie County. Infrastructure. Healthcare. [[File:Roswell Park Cancer Institute, Buffalo, New York - 20191009.jpg|thumb|[[Roswell Park Comprehensive Cancer Center]]]] Nine hospitals operate in the city: [[John R. Oishei Children's Hospital|Oishei Children's Hospital]] and Buffalo General Medical Center ([[Kaleida Health]]), Mercy Hospital and [[Sisters of Charity Hospital (Buffalo)|Sisters of Charity Hospital]] (Catholic Health), [[Roswell Park Comprehensive Cancer Center]], the county-run [[Erie County Medical Center]] (ECMC), Buffalo VA Medical Center, BryLin (Psychiatric) Hospital and the state-operated Buffalo Psychiatric Center. John R. Oishei Children's Hospital, built in 2017, is adjacent to Buffalo General Medical Center on the [[Buffalo Niagara Medical Campus]] north of downtown; its [[Gates Vascular Institute]] specializes in acute [[stroke recovery]]. The medical campus includes the [[University at Buffalo]] [[Jacobs School of Medicine and Biomedical Sciences]], the [[Hauptman-Woodward Medical Research Institute]] and Roswell Park Comprehensive Cancer Center, ranked the 14th-best cancer-treatment center in the United States by "U.S. News & World Report". Transportation. [[File:New Flyer Xcelsior CHARGE NG electric bus being road-tested on NFTA Metro route 12, Buffalo, New York - 20230215.jpg|thumb|[[Niagara Frontier Transportation Authority]] electric bus in [[Elmwood Village, Buffalo|Elmwood Village]]]] Growth and changing transportation needs altered Buffalo's [[grid plan]], which was developed by Joseph Ellicott in 1804. His plan laid out streets like the spokes of a wheel, naming them after Dutch landowners and Native American tribes. City streets expanded outward, denser in the west and spreading out east of [[Main Street (Buffalo)|Main Street]]. Buffalo is a [[List of Canada–United States border crossings|port of entry with Canada]]; the [[Peace Bridge]] crosses the Niagara River and links the [[Interstate 190 (New York)|Niagara Thruway]] (I-190) and [[Queen Elizabeth Way]]. I-190, [[New York State Route 5|NY 5]] and [[New York State Route 33|NY 33]] are the primary [[Controlled-access highway|expressway]]s serving the city, carrying a total of over 245,000 vehicles daily. NY 5 carries traffic to the Southtowns, and NY 33 carries traffic to the eastern suburbs and the Buffalo Airport. 
The east–west Scajacquada Expressway ([[New York State Route 198|NY 198]]) bisects Delaware Park, connecting I-190 with the Kensington Expressway (NY 33) on the city's East Side to form a partial [[Ring road|beltway]] around the city center. The Scajacquada and Kensington Expressways and the Buffalo Skyway (NY 5) have been targeted for [[Freeway removal in the United States|redesign or removal]]. Other major highways include [[U.S. Route 62 in New York|US 62]] on the city's East Side; [[New York State Route 354|NY 354]] and a portion of [[New York State Route 130|NY 130]], both east–west routes; and [[New York State Route 265|NY 265]], [[New York State Route 266|NY 266]] and [[New York State Route 384|NY 384]], all north–south routes on the city's West Side. Buffalo has a higher-than-average percentage of households without a car: 30 percent in 2015, decreasing to 28.2 percent in 2016; the 2016 national average was 8.7 percent. Buffalo averaged 1.03 cars per household in 2016, compared to the national average of 1.8. [[File:AmherstStStation.jpg|thumb|alt=Passengers entering a subway train|Buffalo Metro Rail train at the [[Amherst Street station]]]] The [[Niagara Frontier Transportation Authority]] (NFTA) operates the region's public transit, including its airport, light-rail system, buses, and harbors. The NFTA operates 323 buses on 61 lines throughout Western New York. [[Buffalo Metro Rail]] is a single light-rail line which runs from Canalside to the [[University Heights, Buffalo|University Heights]] district. The line's downtown section, south of the [[Fountain Plaza station]], runs at grade and is free of charge. The Buffalo area ranks twenty-third nationwide in transit ridership, with thirty trips per capita per year. [[Proposed expansion of the Buffalo Metro Rail|Expansions have been proposed]] since Buffalo Metro Rail's inception in the 1980s, with the latest plan (in the late 2010s) reaching the town of Amherst. [[Buffalo Niagara International Airport]] in [[Cheektowaga (town), New York|Cheektowaga]] has daily scheduled flights by domestic, charter and regional carriers. The airport handled nearly five million passengers in 2019. It received a [[J.D. Power]] award in 2018 for customer satisfaction at a mid-sized airport, and underwent a $50 million expansion in 2020–21. The airport, light rail, small-boat harbor and buses are monitored by the NFTA's [[transit police]]. [[File:ReddyRackBuffalo.jpg|thumb|alt=Row of red rental bicycles|Reddy Bikeshare at [[250 Delaware Avenue]]]] Buffalo has an [[Amtrak]] intercity train station, [[Buffalo–Exchange Street station]], which was rebuilt in 2020. The city's eastern suburbs are served by Amtrak's [[Buffalo–Depew station]] in [[Depew, New York|Depew]], which was built in 1979. Buffalo was a major stop on through routes between Chicago and New York City via the lower [[Ontario Peninsula]]; trains stopped at [[Buffalo Central Terminal]], which operated from 1929 to 1979. Intercity buses depart and arrive from the NFTA's [[Buffalo Metropolitan Transportation Center|Metropolitan Transportation Center]] on Ellicott Street. Since Buffalo adopted a [[complete streets]] policy in 2008, efforts have been made to accommodate cyclists and pedestrians in new infrastructure projects. Improved corridors have [[bike lane]]s, and Niagara Street received [[Cycle track|separate bike lanes]] in 2020. [[Walk Score]] gave Buffalo a "somewhat walkable" rating of 68 out of 100, with Allentown and downtown considered more walkable than other areas of the city. Utilities. 
[[File:Albany County DPW vehicles assisting with snow removal in the aftermath of the December 2022 blizzard, Michigan Avenue, Buffalo, New York - 20221228.jpg|thumb|Snow removal vehicles in the Masten Park neighborhood following the [[December 2022 North American winter storm|Blizzard of 2022]]]] Buffalo's water system is operated by [[Veolia Water]], and water treatment begins at the Colonel Francis G. Ward Pumping Station. When it opened in 1915, the station's capacity was second only to that of Paris. [[Wastewater treatment|Wastewater]] is treated by the Buffalo Sewer Authority, with coverage extending to the eastern suburbs. [[Niagara Mohawk Power Corporation|National Grid]] and [[New York State Electric & Gas]] (NYSEG) provide electricity, and [[National Fuel Gas]] provides natural gas. The city's primary telecommunications provider is [[Spectrum (TV service)|Spectrum]]; [[Verizon Fios]] serves the North Park neighborhood. A 2018 report by [[Ookla]] noted that Buffalo was one of the bottom five U.S. cities in average download speeds, at 66 [[megabits per second]]. The city's Department of Public Works manages Buffalo's [[snow removal|snow]] and trash removal and [[Street cleaner|street cleaning]]. Snow removal generally operates from November 15 to April 1. A [[snow emergency]] is declared by the National Weather Service after a snowstorm, and the city's roads, major sidewalks and bridges are cleared by over seventy [[snowplow]]s within 24 hours. [[Rock salt]] is the principal agent for preventing snow accumulation and melting ice. Snow removal may coincide with driving bans and parking restrictions. The area along the Outer Harbor is the most dangerous driving area during a snowstorm; when weather conditions dictate, the Buffalo Skyway is closed by the city's police department. To prevent [[ice jam]]s which may impact hydroelectric plants in Niagara Falls, the [[New York Power Authority]] and [[Ontario Power Generation]] have installed an ice [[Boom (containment)|boom]] annually since 1964. The boom's installation date is temperature-dependent, and it is removed on April 1 unless significant ice remains on eastern Lake Erie. It stretches from the outer [[breakwall]] at the Buffalo Outer Harbor to the Canadian shore near Fort Erie. Originally made of wood, the boom now consists of steel [[Float (nautical)|pontoon]]s. Sister cities. Buffalo has eighteen [[sister city|sister cities]]. [[Category:Buffalo, New York| ]] [[Category:1801 establishments in New York (state)]] [[Category:Cities in Erie County, New York]] [[Category:Cities in New York (state)]] [[Category:County seats in New York (state)]] [[Category:Erie Canal]] [[Category:Inland port cities and towns of the United States]] [[Category:New York State Heritage Areas]] [[Category:Populated places established in 1801]] [[Category:New York (state) populated places on Lake Erie]] [[Category:Western New York]]
3986
47242410
https://en.wikipedia.org/wiki?curid=3986
Benjamin Franklin
Benjamin Franklin (January 17, 1706 – April 17, 1790) was an American polymath: a writer, scientist, inventor, statesman, diplomat, printer, publisher and political philosopher. Among the most influential intellectuals of his time, Franklin was one of the Founding Fathers of the United States; a drafter and signer of the Declaration of Independence; and the first postmaster general. Born in the Province of Massachusetts Bay, Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing "The Pennsylvania Gazette" at age 23. He became wealthy publishing this and "Poor Richard's Almanack", which he wrote under the pseudonym "Richard Saunders". After 1767, he was associated with the "Pennsylvania Chronicle", a newspaper known for its revolutionary sentiments and criticisms of the policies of the British Parliament and the Crown. He pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769. He was appointed deputy postmaster-general for the British colonies in 1753, which enabled him to set up the first national communications network. Franklin was active in community affairs and colonial and state politics, as well as national and international affairs. He became a hero in America when, as an agent in London for several colonies, he spearheaded the repeal of the unpopular Stamp Act by the British Parliament. An accomplished diplomat, he was widely admired as the first U.S. ambassador to France and was a major figure in the development of positive Franco-American relations. His efforts proved vital in securing French aid for the American Revolution. From 1785 to 1788, he served as President of Pennsylvania. At some points in his life, he owned slaves and ran "for sale" ads for slaves in his newspaper, but by the late 1750s, he began arguing against slavery, became an active abolitionist, and promoted the education and integration of African Americans into U.S. society. As a scientist, Franklin's studies of electricity made him a major figure in the American Enlightenment and the history of physics. He also charted and named the Gulf Stream current. His numerous important inventions include the lightning rod, bifocals, glass harmonica and the Franklin stove. He founded many civic organizations, including the Library Company, Philadelphia's first fire department, and the University of Pennsylvania. Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity. He was the only person to sign the Declaration of Independence, the Treaty of Paris peace with Britain, and the Constitution. Foundational in defining the American ethos, Franklin has been called "the most accomplished American of his age and the most influential in inventing the type of society America would become". Franklin's life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen him honored for more than two centuries after his death on the $100 bill and in the names of warships, many towns and counties, educational institutions and corporations, as well as in numerous cultural references and a portrait in the Oval Office. His more than 30,000 letters and documents have been collected in "The Papers of Benjamin Franklin." 
Anne Robert Jacques Turgot said of him: "Eripuit fulmen cœlo, mox sceptra tyrannis" ("He snatched lightning from the sky and the scepter from tyrants"). Ancestry. Benjamin Franklin's father, Josiah Franklin, was a tallow chandler, soaper, and candlemaker. Josiah Franklin was born at Ecton, Northamptonshire, England, on December 23, 1657, the son of Thomas Franklin, a blacksmith and farmer, and his wife, Jane White. Benjamin's father and all four of his grandparents were born in England. Josiah Franklin had a total of seventeen children with his two wives. He married his first wife, Anne Child, in about 1677 in Ecton and emigrated with her to Boston in 1683; they had three children before emigration and four after. Following her death, Josiah married Abiah Folger on July 9, 1689, in the Old South Meeting House by Reverend Samuel Willard, and had ten children with her. Benjamin, their eighth child, was Josiah Franklin's fifteenth child overall, and his tenth and final son. Benjamin Franklin's mother, Abiah, was born in Nantucket, Massachusetts Bay Colony, on August 15, 1667, to Peter Folger, a miller and schoolteacher, and his wife, Mary Morrell Folger, a former indentured servant. Mary Folger came from a Puritan family that was among the first Pilgrims to flee to Massachusetts for religious freedom, sailing for Boston in 1635 after King Charles I of England had begun persecuting Puritans. Her father Peter was "the sort of rebel destined to transform colonial America." As clerk of the court, he was arrested on February 10, 1676, and jailed on February 19 for his inability to pay bail. He spent over a year and a half in jail. Early life and education. Boston. Franklin was born on Milk Street in Boston, Province of Massachusetts Bay on January 17, 1706, and baptized at the Old South Meeting House in Boston. As a child growing up along the Charles River, Franklin recalled that he was "generally the leader among the boys." Franklin's father wanted him to attend school with the clergy but only had enough money to send him to school for two years. He attended Boston Latin School but did not graduate; he continued his education through voracious reading. Although "his parents talked of the church as a career" for Franklin, his schooling ended when he was ten. He worked for his father for a time, and at 12 he became an apprentice to his brother James, a printer, who taught him the printing trade. When Benjamin was 15, James founded "The New-England Courant", which was the third newspaper founded in Boston. When denied the chance to write a letter to the paper for publication, Franklin adopted the pseudonym of "Silence Dogood", a middle-aged widow. Mrs. Dogood's letters were published and became a subject of conversation around town. Neither James nor the "Courant" readers were aware of the ruse, and James was unhappy with Benjamin when he discovered the popular correspondent was his younger brother. Franklin was an advocate of free speech from an early age. When his brother was jailed for three weeks in 1722 for publishing material unflattering to the governor, young Franklin took over the newspaper and had Mrs. Dogood proclaim, quoting "Cato's Letters", "Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech." Franklin left his apprenticeship without his brother's permission, and in so doing became a fugitive. Moves to Philadelphia and London. At age 17, Franklin ran away to Philadelphia, seeking a new start in a new city. 
When he first arrived, he worked in several printing shops there, but he was not satisfied by the immediate prospects in any of these jobs. After a few months, while working in one printing house, Pennsylvania governor Sir William Keith convinced him to go to London, ostensibly to acquire the equipment necessary for establishing another newspaper in Philadelphia. Discovering that Keith's promises of backing a newspaper were empty, he worked as a typesetter in a printer's shop in what is today the Lady Chapel of Church of St Bartholomew-the-Great in the Smithfield area of London, which had at that time been deconsecrated. He returned to Philadelphia in 1726 with the help of Thomas Denham, an English merchant who had emigrated but returned to England, and who employed Franklin as a clerk, shopkeeper, and bookkeeper in his business. Junto and library. In 1727, at age 21, Franklin formed the Junto, a group of "like minded aspiring artisans and tradesmen who hoped to improve themselves while they improved their community." The Junto was a discussion group for issues of the day; it subsequently gave rise to many organizations in Philadelphia. The Junto was modeled after English coffeehouses that Franklin knew well and which had become the center of the spread of Enlightenment ideas in Britain. Reading was a great pastime of the Junto, but books were rare and expensive. At Franklin's suggestion, the members created a library, initially assembled from their own books. This did not suffice, however. Franklin conceived the idea of a subscription library, which would pool the funds of the members to buy books for all to read. This was the birth of the Library Company of Philadelphia, whose charter he composed in 1731. Newspaperman. Upon Denham's death, Franklin returned to his former trade. In 1728, he set up a printing house in partnership with Hugh Meredith; the following year he became the publisher of "The Pennsylvania Gazette", a newspaper in Philadelphia. The "Gazette" gave Franklin a forum for agitation about a variety of local reforms and initiatives through printed essays and observations. Over time, his commentary, and his adroit cultivation of a positive image as an industrious and intellectual young man, earned him a great deal of social respect. But even after he achieved fame as a scientist and statesman, he habitually signed his letters with the unpretentious 'B. Franklin, Printer'. In 1732, he published the first German-language newspaper in America – "Die Philadelphische Zeitung" – although it failed after only one year because four other newly founded German papers quickly dominated the newspaper market. Franklin also printed Moravian religious books in German. He often visited Bethlehem, Pennsylvania, staying at the Moravian Sun Inn. In a 1751 pamphlet on demographic growth and its implications for the Thirteen Colonies, he called the Pennsylvania Germans "Palatine Boors" who could never acquire the "Complexion" of Anglo-American settlers and referred to "Blacks and Tawneys" as weakening the social structure of the colonies. Although he apparently reconsidered shortly thereafter, and the phrases were omitted from all later printings of the pamphlet, his views may have played a role in his political defeat in 1764. According to Ralph Frasca, Franklin promoted the printing press as a device to instruct colonial Americans in moral virtue. Frasca argues he saw this as a service to God, because he understood moral virtue in terms of actions; thus, doing good provides a service to God. 
Despite his own moral lapses, Franklin saw himself as uniquely qualified to instruct Americans in morality. He tried to influence American moral life through the construction of a printing network based on a chain of partnerships from the Carolinas to New England. He thereby invented the first newspaper chain. It was more than a business venture, for like many publishers he believed that the press had a public-service duty. When he established himself in Philadelphia, shortly before 1730, the town boasted two "wretched little" news sheets, Andrew Bradford's "The American Weekly Mercury" and Samuel Keimer's "Universal Instructor in all Arts and Sciences, and Pennsylvania Gazette". This instruction in all arts and sciences consisted of weekly extracts from "Chambers's Universal Dictionary". Franklin quickly did away with all of this when he took over the "Instructor" and made it "The Pennsylvania Gazette". The "Gazette" soon became his characteristic organ, which he freely used for satire, for the play of his wit, even for sheer excess of mischief or of fun. From the first, he had a way of adapting his models to his own uses. The series of essays called "The Busy-Body", which he wrote for Bradford's "American Mercury" in 1729, followed the general Addisonian form, already modified to suit homelier conditions. The thrifty Patience, in her busy little shop, complaining of the useless visitors who waste her valuable time, is related to the women who address Mr. Spectator. The Busy-Body himself is a true Censor Morum, as Isaac Bickerstaff had been in the "Tatler". And a number of the fictitious characters, Ridentius, Eugenius, Cato, and Cretico, represent traditional 18th-century classicism. Franklin even used this classical framework for contemporary satire, as seen in the character of Cretico, the "sour Philosopher", who is clearly a caricature of his rival, Samuel Keimer. Franklin had mixed success in his plan to establish an inter-colonial network of newspapers that would produce a profit for him and disseminate virtue. Over the years he sponsored two dozen printers in Pennsylvania, South Carolina, New York, Connecticut, and even the Caribbean. By 1753, eight of the fifteen English-language newspapers in the colonies were published by him or his partners. He began in Charleston, South Carolina, in 1731. After his second editor there died, his widow, Elizabeth Timothy, took over the paper and made it a success. She was one of the colonial era's first women printers. For three decades Franklin maintained a close business relationship with her and her son Peter Timothy, who took over the "South Carolina Gazette" in 1746. The "South Carolina Gazette" was impartial in political debates while creating the opportunity for public debate, which encouraged others to challenge authority. Timothy avoided blandness and crude bias and, after 1765, increasingly took a patriotic stand in the growing crisis with Great Britain. Franklin's "Connecticut Gazette" (1755–68), however, proved unsuccessful. As the Revolution approached, political strife slowly tore his network apart. Freemasonry. In 1730 or 1731, Franklin was initiated into the local Masonic lodge. He became a grand master in 1734, indicating his rapid rise to prominence in Pennsylvania. The same year, he edited and published the first Masonic book in the Americas, a reprint of James Anderson's "Constitutions of the Free-Masons". He was the secretary of St. John's Lodge in Philadelphia from 1735 to 1738.
In January 1738, "Franklin appeared as a witness" in a manslaughter trial against two men who killed "a simple-minded apprentice" named Daniel Rees in a fake Masonic initiation gone wrong. One of the men "threw, or accidentally spilled, the burning spirits, and Daniel Rees died of his burns two days later." While Franklin did not directly participate in the hazing that led to Rees' death, he knew of the hazing before it turned fatal and did nothing to stop it. He was criticized for his inaction in "The American Weekly Mercury" by his publishing rival Andrew Bradford. Ultimately, Franklin replied in his own defense in the "Gazette". Franklin remained a Freemason for the rest of his life. Common-law marriage to Deborah Read. At age 17 in 1723, Franklin proposed to 15-year-old Deborah Read while a boarder in the Read home. Deborah's mother, whose own husband had recently died, was wary of allowing her young daughter to marry Franklin, who was on his way to London at Governor Keith's request, and wary also of his financial instability; she declined Franklin's request to marry her daughter. Franklin travelled to London, and after he failed to communicate as expected with Deborah and her family, they interpreted his long silence as a breaking of his promises. At the urging of her mother, Deborah married a potter named John Rogers on August 5, 1725. Rogers soon fled to Barbados with her dowry in order to avoid debts and prosecution. Since Rogers' fate was unknown, bigamy laws prevented Deborah from remarrying. Franklin returned in 1726 and resumed his courtship of Deborah. They established a common-law marriage on September 1, 1730. They took in his recently acknowledged illegitimate young son and raised him in their household. They had two children together. Their son, Francis Folger Franklin, was born in October 1732 and died of smallpox in 1736. Their daughter, Sarah "Sally" Franklin, was born in 1743 and eventually married Richard Bache. Deborah's fear of the sea meant that she never accompanied Franklin on any of his extended trips to Europe; another possible reason they spent so much time apart is that he may have blamed her for preventing their son Francis from being inoculated against the disease that subsequently killed him. Deborah wrote to him in November 1769, saying she was ill due to "dissatisfied distress" from his prolonged absence, but he did not return until his business was done. Deborah Read Franklin died of a stroke on December 14, 1774, while Franklin was on an extended mission to Great Britain; he returned in 1775. William Franklin. In 1730, 24-year-old Franklin publicly acknowledged his illegitimate son William and raised him in his household. William was born on February 22, 1730, but his mother's identity is unknown. He was educated in Philadelphia and, beginning at about age 30, studied law in London in the early 1760s. William himself fathered an illegitimate son, William Temple Franklin, born on the same day and month: February 22, 1760. The boy's mother was never identified, and he was placed in foster care. In 1762, the elder William Franklin married Elizabeth Downes, daughter of a planter from Barbados, in London. In 1763, he was appointed as the last royal governor of New Jersey. A Loyalist to the king, William Franklin saw his relations with his father Benjamin eventually break down over their differences about the American Revolutionary War, as Benjamin Franklin could never accept William's position.
Deposed in 1776 by the revolutionary government of New Jersey, William was placed under house arrest at his home in Perth Amboy for six months. After the Declaration of Independence, he was formally taken into custody by order of the Provincial Congress of New Jersey, an entity he refused to recognize, regarding it as an "illegal assembly." He was incarcerated in Connecticut for two years, in Wallingford and Middletown, and, after being caught surreptitiously recruiting Americans to the Loyalist cause, was held in solitary confinement at Litchfield for eight months. When finally released in a prisoner exchange in 1778, he moved to New York City, which was occupied by the British at the time. While in New York City, he became leader of the Board of Associated Loyalists, a quasi-military organization chartered by King George III and headquartered in New York City. They initiated guerrilla forays into New Jersey, southern Connecticut, and New York counties north of the city. When British troops evacuated from New York, William Franklin left with them and sailed to England. He settled in London, never to return to North America. In the preliminary peace talks in 1782 with Britain, "... Benjamin Franklin insisted that loyalists who had borne arms against the United States would be excluded from this plea (that they be given a general pardon). He was undoubtedly thinking of William Franklin." Success as an author. In 1732, Franklin began to publish the noted "Poor Richard's Almanack" (with content both original and borrowed) under the pseudonym Richard Saunders, on which much of his popular reputation is based. He frequently wrote under pseudonyms. The first issue published was for the upcoming year, 1733. He had developed a distinct, signature style: plain and pragmatic, with a sly, soft but self-deprecating tone and declarative sentences. Although it was no secret that he was the author, his Richard Saunders character repeatedly denied it. "Poor Richard's Proverbs", adages from this almanac such as "A penny saved is twopence dear" (often misquoted as "A penny saved is a penny earned") and "Fish and visitors stink in three days", remain common quotations in the modern world. Wisdom in folk society meant the ability to provide an apt adage for any occasion, and his readers became well prepared. He sold about ten thousand copies per year; it became an institution. In 1741, Franklin began publishing "The General Magazine and Historical Chronicle for all the British Plantations in America." He used the heraldic badge of the Prince of Wales as the cover illustration. Franklin wrote a letter, "Advice to a Friend on Choosing a Mistress", dated June 25, 1745, in which he gives advice to a young man about channeling sexual urges. Due to its licentious nature, it was not published in collections of his papers during the 19th century. Federal court rulings from the mid-to-late 20th century cited the document in overturning obscenity laws and as an argument against censorship. Public life. Early steps in Pennsylvania. In 1736, Franklin created the Union Fire Company, one of the first volunteer firefighting companies in America. In the same year, he printed a new currency for New Jersey based on innovative anti-counterfeiting techniques he had devised. His political career also began that year, notably as Chief Clerk of the Pennsylvania Provincial Assembly, a post in which he served until 1751.
Throughout his career, he was an advocate for paper money, publishing "A Modest Enquiry into the Nature and Necessity of a Paper Currency" in 1729, and his printing house printed paper currency. He was influential in the more restrained and thus successful monetary experiments in the Middle Colonies, which stopped deflation without causing excessive inflation. In 1766, he made a case for paper money to the British House of Commons. As he matured, Franklin began to concern himself more with public affairs. In 1743, he first devised a scheme for the Academy, Charity School, and College of Philadelphia; however, the person he had in mind to run the academy, Rev. Richard Peters, refused, and Franklin put his ideas away until 1749, when he printed his own pamphlet, "Proposals Relating to the Education of Youth in Pensilvania." He was appointed president of the Academy on November 13, 1749; the academy and the charity school opened in 1751. In 1743, he founded the American Philosophical Society to help scientific men discuss their discoveries and theories. He began the electrical research that, along with other scientific inquiries, would occupy him for the rest of his life, in between bouts of politics and moneymaking. During King George's War, Franklin raised a militia called the Association for General Defense, because the legislators of the city had decided to take no action to defend Philadelphia "either by erecting fortifications or building Ships of War." He raised money to create earthwork defenses and buy artillery. The largest of these was the "Association Battery" or "Grand Battery" of 50 guns. In 1748, Franklin (already a very wealthy man) retired from printing and went into other businesses. He formed a partnership with his foreman, David Hall, which provided Franklin with half of the shop's profits for 18 years. This lucrative business arrangement provided leisure time for study, and in a few years he had made many new discoveries. Franklin became involved in Philadelphia politics and rapidly progressed. In October 1748, he was selected as a councilman; in June 1749, he became a justice of the peace for Philadelphia; and in 1751, he was elected to the Pennsylvania Assembly. On August 10, 1753, he was appointed deputy postmaster-general of British North America. His service in domestic politics included reforming the postal system, with mail sent out every week. In 1751, Franklin and Thomas Bond obtained a charter from the Pennsylvania legislature to establish a hospital. Pennsylvania Hospital was the first hospital in the colonies. In 1752, Franklin organized the Philadelphia Contributionship, the colonies' first homeowner's insurance company. Between 1750 and 1753, the "educational triumvirate" of Franklin, Samuel Johnson of Stratford, Connecticut, and schoolteacher William Smith built on Franklin's initial scheme and created what Bishop James Madison, president of the College of William & Mary, called a "new-model" plan or style of American college. Franklin solicited, printed in 1752, and promoted an American textbook of moral philosophy by Samuel Johnson, titled "Elementa Philosophica", to be taught in the new colleges. In June 1753, Johnson, Franklin, and Smith met in Stratford. They decided that the new-model college would focus on the professions, teach its classes in English instead of Latin, employ subject-matter experts as professors instead of one tutor leading a class for four years, and impose no religious test for admission.
Johnson went on to found King's College (now Columbia University) in New York City in 1754, while Franklin hired Smith as provost of the College of Philadelphia, which opened in 1755. At its first commencement, on May 17, 1757, seven men graduated: six with a Bachelor of Arts and one with a Master of Arts. It was later merged with the University of the State of Pennsylvania to become the University of Pennsylvania. The college was to become influential in guiding the founding documents of the United States: in the Continental Congress, for example, over one-third of the college-educated men who contributed to the Declaration of Independence between September 4, 1774, and July 4, 1776, were affiliated with the college. In 1754, he headed the Pennsylvania delegation to the Albany Congress. This meeting of several colonies had been requested by the Board of Trade in England to improve relations with the Indians and defense against the French. Franklin proposed a broad Plan of Union for the colonies. While the plan was not adopted, elements of it found their way into the Articles of Confederation and the Constitution. In 1753, Harvard University and Yale awarded him honorary Master of Arts degrees. In 1756, he was awarded an honorary Master of Arts degree from the College of William & Mary. Later in 1756, Franklin organized the Pennsylvania Militia. He used Tun Tavern as a gathering place to recruit a regiment of soldiers to go into battle against the Native American uprisings that beset the American colonies. Postmaster. Well known as a printer and publisher, Franklin was appointed postmaster of Philadelphia in 1737, holding the office until 1753, when he and publisher William Hunter were named deputy postmasters-general of British North America, the first to hold the office. (Joint appointments were standard at the time, for political reasons.) He was responsible for the British colonies from Pennsylvania north and east, as far as the island of Newfoundland. A post office for local and outgoing mail had been established in Halifax, Nova Scotia, by local stationer Benjamin Leigh on April 23, 1754, but service was irregular. Franklin opened the first post office to offer regular, monthly mail in Halifax on December 9, 1755. Meanwhile, Hunter became postal administrator in Williamsburg, Virginia, and oversaw areas south of Annapolis, Maryland. Franklin reorganized the service's accounting system and improved the speed of delivery between Philadelphia, New York, and Boston. By 1761, efficiencies led to the first profits for the colonial post office. When the lands of New France were ceded to the British under the Treaty of Paris in 1763, the British province of Quebec was created from them, and Franklin saw mail service expanded between Montreal, Trois-Rivières, Quebec City, and New York. For the greater part of his appointment, he lived in England (from 1757 to 1762, and again from 1764 to 1774), about three-quarters of his term. Eventually, his sympathies for the rebel cause in the American Revolution led to his dismissal on January 31, 1774. On July 26, 1775, the Second Continental Congress established the United States Post Office and named Franklin as the first United States postmaster general. He had been a postmaster for decades and was a natural choice for the position. He had just returned from England and was appointed chairman of a Committee of Investigation to establish a postal system.
The report of the committee, providing for the appointment of a postmaster general for the 13 American colonies, was considered by the Continental Congress on July 25 and 26. On July 26, 1775, Franklin was appointed postmaster general, the first appointed under the Continental Congress. His apprentice, William Goddard, felt that his ideas were mostly responsible for shaping the postal system and that the appointment should have gone to him, but he graciously conceded it to Franklin, 36 years his senior. Franklin, however, appointed Goddard as Surveyor of the Posts, issued him a signed pass, and directed him to investigate and inspect the various post offices and mail routes as he saw fit. The newly established postal system became the United States Post Office, a system that continues to operate today. Political work. In 1757, he was sent to England by the Pennsylvania Assembly as a colonial agent to protest against the political influence of the Penn family, the proprietors of the colony. He remained there for five years, striving to end the proprietors' prerogative to overturn legislation from the elected Assembly and their exemption from paying taxes on their land. His lack of influential allies in Whitehall led to the failure of this mission. At this time, many members of the Pennsylvania Assembly were feuding with William Penn's heirs, who controlled the colony as proprietors. After his return to the colony, Franklin led the "anti-proprietary party" in the struggle against the Penn family and was elected Speaker of the Pennsylvania House in May 1764. His call for a change from proprietary to royal government was a rare political miscalculation, however: Pennsylvanians worried that such a move would endanger their political and religious freedoms. Because of these fears and because of political attacks on his character, Franklin lost his seat in the October 1764 Assembly elections. The anti-proprietary party dispatched him to England again to continue the struggle against the Penn family proprietorship. During this trip, events drastically changed the nature of his mission. In London, Franklin opposed the 1765 Stamp Act. Unable to prevent its passage, he made another political miscalculation and recommended a friend to the post of stamp distributor for Pennsylvania. Pennsylvanians were outraged, believing that he had supported the measure all along, and threatened to destroy his home in Philadelphia. Franklin soon learned of the extent of colonial resistance to the Stamp Act, and he testified during the House of Commons proceedings that led to its repeal. With this, Franklin suddenly emerged as the leading spokesman for American interests in England. He wrote popular essays on behalf of the colonies. Georgia, New Jersey, and Massachusetts also appointed him as their agent to the Crown. During his lengthy missions to London between 1757 and 1775, Franklin lodged in a house on Craven Street, just off the Strand in central London. During his stays there, he developed a close friendship with his landlady, Margaret Stevenson, and her circle of friends and relations, in particular, her daughter Mary, who was more often known as Polly. The house is now a museum known as the Benjamin Franklin House. Whilst in London, Franklin became involved in radical politics. 
He belonged to a gentlemen's club (which he called "the honest Whigs"), which held stated meetings and included members such as Richard Price, the minister of Newington Green Unitarian Church who ignited the Revolution controversy, and Andrew Kippis. Scientific work. In 1756, Franklin had become a member of the Society for the Encouragement of Arts, Manufactures & Commerce (now the Royal Society of Arts), which had been founded in 1754. After his return to the United States in 1775, he became the Society's corresponding member, continuing a close connection. The Royal Society of Arts instituted a Benjamin Franklin Medal in 1956 to commemorate the 250th anniversary of his birth and the 200th anniversary of his membership of the RSA. The study of natural philosophy (referred to today simply as science) drew him into overlapping circles of acquaintance. Franklin was, for example, a corresponding member of the Lunar Society of Birmingham. In 1759, the University of St Andrews awarded him an honorary doctorate in recognition of his accomplishments. In October 1759, he was granted Freedom of the Borough of St Andrews. He was also awarded an honorary doctorate by Oxford University in 1762. Because of these honors, he was often addressed as "Doctor Franklin." While living in London in 1768, he developed a phonetic alphabet in "A Scheme for a new Alphabet and a Reformed Mode of Spelling". This reformed alphabet discarded six letters he regarded as redundant (c, j, q, w, x, and y) and substituted six new letters for sounds he felt lacked letters of their own. This alphabet never caught on, and he eventually lost interest. Return to London and travels in Europe. From the mid-1750s to the mid-1770s, Franklin spent much of his time in London, using the city as a base from which to travel. In 1771, he made short journeys through different parts of England, staying with Joseph Priestley at Leeds, Thomas Percival at Manchester, and Erasmus Darwin at Lichfield. In Scotland, he spent five days with Lord Kames near Stirling and stayed for three weeks with David Hume in Edinburgh. In 1759, he had visited Edinburgh with his son, and he later reported that he considered his six weeks in Scotland "six weeks of the densest happiness I have met with in any part of my life." In Ireland, he stayed with Lord Hillsborough. Franklin noted of him that "all the plausible behaviour I have described is meant only, by patting and stroking the horse, to make him more patient, while the reins are drawn tighter, and the spurs set deeper into his sides." In Dublin, Franklin was invited to sit with the members of the Irish Parliament rather than in the gallery. He was the first American to receive this honor. While touring Ireland, he was deeply moved by the level of poverty he witnessed. The economy of the Kingdom of Ireland was affected by the same trade regulations and laws that governed the Thirteen Colonies. He feared that the American colonies could eventually come to the same level of poverty if the regulations and laws continued to apply to them. Franklin spent two months in German lands in 1766, but his connections to the country stretched across a lifetime. He declared a debt of gratitude to the German scientist Otto von Guericke for his early studies of electricity. Franklin also co-authored the first treaty of friendship between Prussia and America in 1785. In September 1767, he visited Paris with his usual traveling partner, Sir John Pringle, 1st Baronet.
News of his electrical discoveries was widespread in France. His reputation meant that he was introduced to many influential scientists and politicians, and also to King Louis XV. Defending the American cause. One line of argument in Parliament was that Americans should pay a share of the costs of the French and Indian War and that taxes should therefore be levied on them. Franklin became the American spokesman in highly publicized testimony in Parliament in 1766. He stated that Americans already contributed heavily to the defense of the Empire. He said local governments had raised, outfitted, and paid 25,000 soldiers to fight France, as many as Great Britain itself sent, and had spent many millions from American treasuries doing so in the French and Indian War alone. In 1772, Franklin obtained private letters of Thomas Hutchinson and Andrew Oliver, governor and lieutenant governor of the Province of Massachusetts Bay, proving that they had encouraged the Crown to crack down on Bostonians. Franklin sent them to North America, where they escalated tensions. The letters were finally leaked to the public in the "Boston Gazette" in mid-June 1773, causing a political firestorm in Massachusetts and raising significant questions in England. The British began to regard him as the fomenter of serious trouble. Hopes for a peaceful solution ended as he was systematically ridiculed and humiliated by Solicitor-General Alexander Wedderburn before the Privy Council on January 29, 1774. He returned to Philadelphia in March 1775 and abandoned his accommodationist stance. In 1773, Franklin published two of his most celebrated pro-American satirical essays, "Rules by Which a Great Empire May Be Reduced to a Small One" and "An Edict by the King of Prussia". Alleged British agent and Hellfire Club membership. Franklin is known to have occasionally attended the Hellfire Club's meetings as a non-member in 1758, during his time in England. However, some authors and historians argue that he was in fact a British spy. As no records survive (they were burned in 1774), membership in the club can only be assumed or inferred from the letters members sent to one another. One early proponent of the claim that Franklin was a member of the Hellfire Club and a double agent is the historian Donald McCormick, who has a history of making controversial claims. Coming of revolution. In 1763, soon after Franklin returned to Pennsylvania from England for the first time, the western frontier was engulfed in a bitter war known as Pontiac's Rebellion. The Paxton Boys, a group of settlers convinced that the Pennsylvania government was not doing enough to protect them from American Indian raids, murdered a group of peaceful Susquehannock Indians and marched on Philadelphia. Franklin helped to organize a local militia to defend the capital against the mob. He met with the Paxton leaders and persuaded them to disperse. Franklin wrote a scathing attack against the racial prejudice of the Paxton Boys. "If an "Indian" injures me", he asked, "does it follow that I may revenge that injury on all "Indians"?" He provided an early response to British surveillance through his own network of counter-surveillance and manipulation. "He waged a public relations campaign, secured secret aid, played a role in privateering expeditions, and churned out effective and inflammatory propaganda." Declaration of Independence. By the time Franklin arrived in Philadelphia on May 5, 1775, after his second mission to Great Britain, the American Revolution had begun with the Battles of Lexington and Concord the previous month, on April 19, 1775.
The New England militia had forced the main British army to remain inside Boston. The Pennsylvania Assembly unanimously chose Franklin as their delegate to the Second Continental Congress. In June 1776, he was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the committee, he made several "small but important" changes to the draft sent to him by Thomas Jefferson. The "all hang together" saying ascribed to Franklin at the signing is probably apocryphal. He reportedly replied to John Hancock, when Hancock stated that they must all hang together, "Yes, we must, indeed, all hang together, or most assuredly we shall all hang separately." Carl Van Doren, in "Benjamin Franklin's Autobiographical Writings", writes that the person who said this was most likely Richard Penn, former governor of Pennsylvania, replying to a member of Congress who had said they must all hang together: "If you do not, gentlemen," said Mr. Penn, "I can tell you that you will be very apt to hang separately." Ambassador to France (1776–1785). On October 26, 1776, Franklin was dispatched to France as commissioner for the United States. He took with him as secretary his 16-year-old grandson, William Temple Franklin. They lived in a home in the Parisian suburb of Passy, donated by Jacques-Donatien Le Ray de Chaumont, who supported the United States. Franklin remained in France until 1785. He conducted the affairs of his country toward the French nation with great success, which included securing a critical military alliance in 1778 and signing the 1783 Treaty of Paris. Among his associates in France was Honoré Gabriel Riqueti, comte de Mirabeau, a French Revolutionary writer, orator, and statesman who in 1791 was elected president of the National Assembly. In July 1784, Franklin met with Mirabeau and contributed anonymous materials that the Frenchman used in his first signed work, "Considerations sur l'ordre de Cincinnatus". The publication was critical of the Society of the Cincinnati, established in the United States. Franklin and Mirabeau thought of it as a "noble order", inconsistent with the egalitarian ideals of the new republic. During his stay in France, he was active as a Freemason, serving as venerable master of the lodge Les Neuf Sœurs from 1779 until 1781. In 1784, when Franz Mesmer began to publicize his theory of "animal magnetism", which many considered offensive, Louis XVI appointed a commission to investigate it. The commission included the chemist Antoine Lavoisier, the physician Joseph-Ignace Guillotin, the astronomer Jean Sylvain Bailly, and Franklin. Through blind trials, the commission concluded that mesmerism seemed to work only when the subjects expected it to, which discredited mesmerism and became the first major demonstration of the placebo effect, described at that time as "imagination." In 1781, he was elected a fellow of the American Academy of Arts and Sciences. Franklin's advocacy for religious tolerance in France contributed to arguments made by French philosophers and politicians that resulted in Louis XVI's signing of the Edict of Versailles in November 1787. This edict effectively nullified the Edict of Fontainebleau, which had denied non-Catholics civil status and the right to openly practice their faith. Franklin also served as American minister to Sweden, although he never visited that country. He negotiated a treaty that was signed in April 1783.
On August 27, 1783, in Paris, he witnessed the world's first hydrogen balloon flight. "Le Globe", created by professor Jacques Charles and Les Frères Robert, was watched by a vast crowd as it rose from the Champ de Mars (now the site of the Eiffel Tower). Franklin became so enthusiastic that he subscribed financially to the next project to build a manned hydrogen balloon. On December 1, 1783, Franklin was seated in the special enclosure for honored guests when it took off from the Jardin des Tuileries, piloted by Charles and Nicolas-Louis Robert. Walter Isaacson describes a chess game between Franklin and the Duchess of Bourbon, "who made a move that inadvertently exposed her king. Ignoring the rules of the game, he promptly captured it. 'Ah,' said the duchess, 'we do not take Kings so.' Replied Franklin in a famous quip: 'We do in America.'" Return to North America. When he returned home in 1785, Franklin occupied a position second only to that of George Washington as the champion of American independence. Le Ray honored him with a commissioned portrait painted by Joseph Duplessis, which now hangs in the National Portrait Gallery of the Smithsonian Institution in Washington, D.C. After his return, Franklin became an abolitionist and freed his two slaves. He eventually became president of the Pennsylvania Abolition Society. President of Pennsylvania and delegate to the Constitutional Convention. Special balloting conducted on October 18, 1785, unanimously elected him the sixth president of the Supreme Executive Council of Pennsylvania, replacing John Dickinson. The office was practically that of governor. He held it for slightly over three years, longer than any other holder, and served the constitutional limit of three full terms. Shortly after his initial election, he was re-elected to a full term on October 29, 1785, and again in the fall of 1786 and on October 31, 1787. In that capacity, he served as host to the Constitutional Convention of 1787 in Philadelphia. He also served as a delegate to the Convention. It was primarily an honorary position, and he seldom engaged in debate. According to James McHenry, Elizabeth Willing Powel asked Franklin what kind of government they had wrought. He replied: "A republic, madam, if you can keep it." Death. Franklin suffered from obesity throughout his middle and later years, which resulted in multiple health problems, including gout, which worsened as he aged. In poor health during the signing of the U.S. Constitution in 1787, he was rarely seen in public from then until his death. Franklin died from a pleuritic attack at his home in Philadelphia on April 17, 1790, at age 84. His last reported words, spoken to his daughter after she suggested that he change position in bed and lie on his side so he could breathe more easily, were "a dying man can do nothing easy." Franklin's death is described in the book "The Life of Benjamin Franklin", quoting from the account of his physician, Dr. John Jones. Approximately 20,000 people attended Franklin's funeral, after which he was interred in Christ Church Burial Ground in Philadelphia. Upon learning of his death, the National Constituent Assembly in Revolutionary France entered into a state of mourning for three days, and memorial services were conducted in honor of Franklin throughout the country. In 1728, at age 22, Franklin wrote what he hoped would be his own epitaph; his actual grave, however, as he specified in his final will, simply reads "Benjamin and Deborah Franklin."
Inventions and scientific inquiries. Franklin was a prodigious inventor. Among his many creations were the lightning rod, the Franklin stove, bifocal glasses, and the flexible urinary catheter. He never patented his inventions; in his autobiography he wrote, "... as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously." Electricity, light. Franklin and his contemporary Leonhard Euler were the only major scientists of their era to support Christiaan Huygens's wave theory of light, which was largely ignored by the rest of the scientific community. In the 18th century, Isaac Newton's corpuscular theory was held to be true; it took Thomas Young's well-known double-slit experiment in 1803 to persuade most scientists to believe Huygens's theory. Franklin started exploring the phenomenon of electricity in the 1740s, after he met the itinerant lecturer Archibald Spencer, who used static electricity in his demonstrations. He proposed that "vitreous" and "resinous" electricity were not different types of "electrical fluid" (as electricity was called then), but the same "fluid" under different pressures. (The same proposal was made independently around the same time by William Watson.) He was the first to label the two states positive and negative, respectively, replacing the distinction then made between "vitreous" and "resinous" electricity, and he was the first to discover the principle of conservation of charge. In 1748, he constructed a multiple-plate capacitor, which he called an "electrical battery" (not a true battery like Volta's pile), by placing eleven panes of glass sandwiched between lead plates, suspended with silk cords and connected by wires. In pursuit of more pragmatic uses for electricity, remarking in spring 1749 that he felt "chagrin'd a little" that his experiments had heretofore resulted in "Nothing in this Way of Use to Mankind", Franklin planned a practical demonstration. He proposed a dinner party where a turkey was to be killed via electric shock and roasted on an electrical spit. After having prepared several turkeys this way, he noted that "the birds kill'd in this manner eat uncommonly tender." Franklin recounted that in the process of one of these experiments, he was shocked by a pair of Leyden jars, resulting in numbness in his arms that persisted for one evening, noting "I am Ashamed to have been Guilty of so Notorious a Blunder." Franklin briefly investigated electrotherapy, including the use of the electric bath, and his work helped make the field widely known. In recognition of his work with electricity, he received the Royal Society's Copley Medal in 1753, and in 1756, he became one of the few 18th-century Americans elected a fellow of the Society. The CGS unit of electric charge has been named after him: one "franklin" (Fr) is equal to one statcoulomb.
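For modern readers, converting the franklin to the SI coulomb is a fixed rescaling involving the speed of light (1 Fr = 0.1/c coulombs, about 3.336 x 10^-10 C). The short Python sketch below illustrates the conversion; the function names and the example are ours, not part of any historical source.

# Conversion between the CGS-ESU charge unit named for Franklin
# (1 franklin = 1 statcoulomb) and the SI coulomb.
# Exact relation: 1 Fr = 0.1 / c coulombs, with c in m/s.
C_LIGHT = 2.99792458e8                   # speed of light in m/s (exact by definition)
COULOMBS_PER_FRANKLIN = 0.1 / C_LIGHT    # about 3.33564e-10 C per Fr

def franklins_to_coulombs(charge_fr):
    """Charge in coulombs for a charge given in franklins."""
    return charge_fr * COULOMBS_PER_FRANKLIN

def coulombs_to_franklins(charge_c):
    """Charge in franklins for a charge given in coulombs."""
    return charge_c / COULOMBS_PER_FRANKLIN

# Sanity check: the elementary charge, 1.602176634e-19 C,
# comes out near the textbook value of about 4.803e-10 Fr.
print(coulombs_to_franklins(1.602176634e-19))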
Franklin advised Harvard University in its acquisition of new electrical laboratory apparatus after the complete loss of its original collection in a fire that destroyed the original Harvard Hall in 1764. The collection he assembled later became part of the Harvard Collection of Historical Scientific Instruments, now on public display in its Science Center. Kite experiment and lightning rod. Franklin published a proposal for an experiment to prove that lightning is electricity by flying a kite in a storm. On May 10, 1752, Thomas-François Dalibard of France conducted Franklin's experiment using an iron rod instead of a kite, and he extracted electrical sparks from a cloud. On June 15, 1752, Franklin may have conducted his well-known kite experiment in Philadelphia, successfully extracting sparks from a cloud. He described the experiment in his newspaper, "The Pennsylvania Gazette", on October 19, 1752, without mentioning that he himself had performed it. This account was read to the Royal Society on December 21 and printed as such in the "Philosophical Transactions". Joseph Priestley published an account with additional details in his 1767 "History and Present Status of Electricity". Franklin was careful to stand on an insulator, keeping dry under a roof to avoid the danger of electric shock. Others, such as Georg Wilhelm Richmann in Russia, were indeed electrocuted in performing lightning experiments during the months following his experiment. In his writings, Franklin indicates that he was aware of the dangers and offered alternative ways to demonstrate that lightning was electrical, as shown by his use of the concept of electrical ground. He did not perform this experiment in the way that is often pictured in popular literature, flying the kite and waiting to be struck by lightning, as that would have been dangerous. Instead, he used the kite to collect some electric charge from a storm cloud, showing that lightning was electrical. On October 19, 1752, he sent a letter to England with directions for repeating the experiment. Franklin's electrical experiments led to his invention of the lightning rod. He said that conductors with a sharp rather than a smooth point could discharge silently and at a far greater distance. He surmised that this could help protect buildings from lightning by attaching "upright Rods of Iron, made sharp as a Needle and gilt to prevent Rusting, and from the Foot of those Rods a Wire down the outside of the Building into the Ground; ... Would not these pointed Rods probably draw the Electrical Fire silently out of a Cloud before it came nigh enough to strike, and thereby secure us from that most sudden and terrible Mischief!" Following a series of experiments on Franklin's own house, lightning rods were installed on the Academy of Philadelphia (later the University of Pennsylvania) and the Pennsylvania State House (later Independence Hall) in 1752. Though Franklin is famously associated with kites through his lightning experiments, he was also noted for using kites to pull humans and ships across waterways. George Pocock, in the book "A Treatise on The Aeropleustic Art, or Navigation in the Air, by means of Kites, or Buoyant Sails", noted being inspired by Franklin's use of kite power to draw his own body across a waterway. Thermodynamics. Franklin noted a principle of refrigeration by observing that on a very hot day he stayed cooler in a wet shirt in a breeze than he did in a dry one. To understand this phenomenon more clearly, he conducted experiments. In 1758, on a warm day in Cambridge, England, he and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually falling well below freezing, while another thermometer showed that the room temperature remained constant. In his letter "Cooling by Evaporation", Franklin noted that "One may see the possibility of freezing a man to death on a warm summer's day."
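The physics at work is the latent heat of vaporization: every gram of ether that evaporates carries a fixed quantity of heat away from the thermometer bulb. The back-of-the-envelope Python sketch below uses rounded textbook constants; the masses and the latent heat figure are illustrative assumptions, not measurements from the 1758 experiment.

# Evaporative cooling estimate: heat removed by evaporating ether, Q = m * L,
# set against the heat needed to cool a small mass of water.
# All numbers are illustrative, not Franklin's or Hadley's data.
L_ETHER = 3.6e5      # latent heat of vaporization of diethyl ether, J/kg (approx.)
C_WATER = 4186.0     # specific heat of liquid water, J/(kg*K)

ether_evaporated = 0.010   # kg of ether evaporated (assumed)
cooled_mass = 0.045        # kg of water-equivalent being cooled (assumed)

heat_removed = ether_evaporated * L_ETHER                  # about 3600 J
temperature_drop = heat_removed / (cooled_mass * C_WATER)  # about 19 K
print(f"Estimated temperature drop: {temperature_drop:.0f} K")

A drop of roughly 20 K from a few grams of ether is consistent with the dramatic cooling the two men observed.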
In 1761, Franklin wrote a letter to Mary Stevenson describing his experiments on the relationship between color and heat absorption. He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes, an early demonstration of black-body thermal radiation. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. After waiting some time, he found that the black pieces had sunk furthest into the snow, indicating that they had become the hottest and melted the most snow. According to Michael Faraday, Franklin's experiments on the non-conduction of ice are worth mentioning, although the law of the general effect of liquefaction on electrolytes is not attributed to Franklin. However, as reported in 1836 by Franklin's great-grandson Alexander Dallas Bache of the University of Pennsylvania, the law of the effect of heat on the conduction of bodies that are otherwise non-conductors, for example glass, could be attributed to Franklin. Franklin wrote, "... A certain quantity of heat will make some bodies good conductors, that will not otherwise conduct ..." and again, "... And water, though naturally a good conductor, will not conduct well when frozen into ice." Oceanography and hydrodynamics. As deputy postmaster, Franklin became interested in North Atlantic Ocean circulation patterns. While in England in 1768, he heard a complaint from the Colonial Board of Customs: British packet ships carrying mail had taken several weeks longer to reach New York than it took an average merchant ship to reach Newport, Rhode Island, even though the merchantmen had a longer and more complex voyage because they left from London, while the packets left from Falmouth in Cornwall. Franklin put the question to his cousin Timothy Folger, a Nantucket whaler captain, who told him that merchant ships routinely avoided a strong eastbound mid-ocean current, while the mail packet captains sailed dead into it, thus fighting an adverse current. Franklin worked with Folger and other experienced ship captains, learning enough to chart the current and name it the Gulf Stream, by which it is still known today. Franklin published his Gulf Stream chart in 1770 in England, where it was ignored. Subsequent versions were printed in France in 1778 and the U.S. in 1786. The British original edition of the chart had been so thoroughly ignored that everyone assumed it was lost forever until Phil Richardson, a Woods Hole oceanographer and Gulf Stream expert, discovered it in the Bibliothèque Nationale in Paris in 1980. This find received front-page coverage in "The New York Times". It took many years for British sea captains to adopt Franklin's advice on navigating the current; once they did, they were able to trim two weeks from their sailing time. A rough sense of why bucking the current was so costly is sketched below.
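As a crude model, suppose a packet fights the current for the entire crossing (an exaggeration, since only part of the route lies in the stream). The speeds and distance in the Python sketch below are assumed round numbers for illustration, not figures from Franklin's chart.

# Time lost by sailing against an ocean current, in a deliberately crude
# model where the whole crossing lies in the stream. All values assumed.
SHIP_SPEED = 6.0     # knots through the water (assumed)
CURRENT = 3.0        # knots of adverse current (assumed)
DISTANCE = 3000.0    # nautical miles for the crossing (assumed)

hours_fighting = DISTANCE / (SHIP_SPEED - CURRENT)   # 1000 hours
hours_avoiding = DISTANCE / SHIP_SPEED               # 500 hours
extra_days = (hours_fighting - hours_avoiding) / 24.0
print(f"Extra time fighting the current: about {extra_days:.0f} days")

Even if only a fraction of the route lay within the stream, a loss on the order of one to three weeks follows directly, the same order of magnitude as the two weeks captains saved once they heeded the chart.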
In 1853, the oceanographer and cartographer Matthew Fontaine Maury noted that while Franklin charted and codified the Gulf Stream, he did not discover it. An aging Franklin accumulated all his oceanographic findings in "Maritime Observations", published in the Philosophical Society's "Transactions" in 1786. It contained ideas for sea anchors, catamaran hulls, watertight compartments, shipboard lightning rods, and a soup bowl designed to stay stable in stormy weather. While traveling on a ship, Franklin had observed that the wake of a ship was diminished when the cooks scuttled their greasy water. He studied the effect on a large pond in Clapham Common, London: "I fetched out a cruet of oil and dropt a little of it on the water ... though not more than a teaspoon full, produced an instant calm over a space of several yards square." He later used the trick to "calm the waters" by carrying "a little oil in the hollow joint of [his] cane." Meteorology. On October 21, 1743, according to popular myth, a storm moving from the southwest denied Franklin the opportunity of witnessing a lunar eclipse. He was said to have noted that the prevailing winds were actually from the northeast, contrary to what he had expected. In correspondence with his brother, he learned that the same storm had not reached Boston until after the eclipse, despite the fact that Boston is to the northeast of Philadelphia. He deduced that storms do not always travel in the direction of the prevailing wind, a concept that greatly influenced meteorology. After the Icelandic volcanic eruption of Laki in 1783, and the subsequent harsh European winter of 1784, Franklin made observations on the causal nature of these two seemingly separate events. He wrote about them in a lecture series. Population studies. Franklin had a major influence on the emerging science of demography, or population studies. In the 1730s and 1740s, he began taking notes on population growth, finding that the American population had the fastest growth rate on Earth. Emphasizing that population growth depended on food supplies, he stressed the abundance of food and available farmland in America. He calculated that America's population was doubling every 20 years and would surpass that of England in a century; a doubling every 20 years corresponds to an annual growth rate of roughly 3.5 percent and a 32-fold increase over a century, since five doublings give a factor of 2^5 = 32. In 1751, he drafted "Observations concerning the Increase of Mankind, Peopling of Countries, etc." Four years later, it was anonymously printed in Boston and was quickly reproduced in Britain, where it influenced the economist Adam Smith and later the demographer Thomas Malthus, who credited Franklin with discovering a rule of population growth. Franklin's prediction that British mercantilism was unsustainable alarmed British leaders who did not want to be surpassed by the colonies, so they became more willing to impose restrictions on the colonial economy. Kammen (1990) and Drake (2011) say Franklin's "Observations concerning the Increase of Mankind" (1755) stands alongside Ezra Stiles' "Discourse on Christian Union" (1760) as the leading works of 18th-century Anglo-American demography; Drake credits Franklin's "wide readership and prophetic insight." Franklin was also a pioneer in the study of slave demography, as shown in his 1755 essay. Writing in the persona of a farmer, he produced at least one critique of the negative consequences of price controls, trade restrictions, and subsidy of the poor, succinctly preserved in his letter to the "London Chronicle" published November 29, 1766, titled "On the Price of Corn, and Management of the Poor." Decision-making. In a 1772 letter to Joseph Priestley, Franklin laid out the earliest known description of the pro-and-con list, a common decision-making technique now sometimes called a decisional balance sheet: reasons for and against a choice are written in opposing columns, given rough weights, and struck out in pairs of roughly equal weight until one column clearly outweighs the other. A minimal sketch of the idea follows.
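The Python sketch below implements the balance-sheet comparison in its simplest form, summing the weights on each side rather than striking out equal pairs; the entries and weights are invented for illustration and are not Franklin's.

# A weighted pro-and-con balance in the spirit of Franklin's letter to
# Priestley. Entries and weights are invented for illustration.
def weigh_decision(pros, cons):
    """Compare the summed weights of reasons for and against a choice."""
    balance = sum(pros.values()) - sum(cons.values())
    if balance > 0:
        return "the pros prevail"
    if balance < 0:
        return "the cons prevail"
    return "evenly balanced; gather more reasons"

print(weigh_decision(
    pros={"steady income": 3.0, "useful work": 2.0},
    cons={"long absence from home": 4.0},
))  # prints "the pros prevail"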
Views on religion, morality, and slavery. Like the other advocates of republicanism, Franklin emphasized that the new republic could survive only if the people were virtuous. All his life, he explored the role of civic and personal virtue, as expressed in "Poor Richard's" aphorisms. He felt that organized religion was necessary to keep men good to their fellow men, but he rarely attended religious services himself. When he met Voltaire in Paris and asked his fellow member of the Enlightenment vanguard to bless his grandson, Voltaire said in English, "God and Liberty", and added, "this is the only appropriate benediction for the grandson of Monsieur Franklin." Franklin's parents were both pious Puritans. The family attended the Old South Church, the most liberal Puritan congregation in Boston, where Benjamin Franklin was baptized in 1706. Franklin's father, a poor chandler, owned a copy of the book "Bonifacius: Essays to Do Good" by the Puritan preacher and family friend Cotton Mather, which Franklin often cited as a key influence on his life. "If I have been a useful citizen," Franklin wrote to Cotton Mather's son seventy years later, "the public owes the advantage of it to that book." His first pen name, Silence Dogood, paid homage both to the book and to a widely known sermon by Mather. The book preached the importance of forming voluntary associations to benefit society. Franklin learned about forming do-good associations from Mather, but his organizational skills made him the most influential force in making voluntarism an enduring part of the American ethos. Franklin formulated a presentation of his beliefs and published it in 1728. He no longer accepted the key Puritan ideas regarding salvation, the divinity of Jesus, or indeed much religious dogma. He classified himself as a deist in his 1771 autobiography, although he still considered himself a Christian. He retained a strong faith in a God as the wellspring of morality and goodness in man, and as a Providential actor in history responsible for American independence. At a critical impasse during the Constitutional Convention in June 1787, he attempted to introduce the practice of daily common prayer; the motion gained almost no support and was never brought to a vote. Franklin was an enthusiastic admirer of the evangelical minister George Whitefield during the First Great Awakening. He did not himself subscribe to Whitefield's theology, but he admired Whitefield for exhorting people to worship God through good works. He published all of Whitefield's sermons and journals, thereby earning a great deal of money and boosting the Great Awakening. Franklin explained in his autobiography why he eventually stopped attending church. He retained a lifelong commitment to the non-religious Puritan virtues and political values he had grown up with, and through his civic work and publishing he succeeded in passing these values into American culture permanently. He had a "passion for virtue." These Puritan values included his devotion to egalitarianism, education, industry, thrift, honesty, temperance, charity, and community spirit. Thomas Kidd states, "As an adult, Franklin touted ethical responsibility, industriousness, and benevolence, even as he jettisoned Christian orthodoxy." The classical authors read in the Enlightenment period taught an abstract ideal of republican government based on hierarchical social orders of king, aristocracy, and commoners. It was widely believed that English liberties relied on their balance of power, but also on hierarchical deference to the privileged class. "Puritanism ...
and the epidemic evangelism of the mid-eighteenth century, had created challenges to the traditional notions of social stratification" by preaching that the Bible taught all men are equal, that the true value of a man lies in his moral behavior, not his class, and that all men can be saved. Franklin, steeped in Puritanism and an enthusiastic supporter of the evangelical movement, rejected the salvation dogma but embraced the radical notion of egalitarian democracy. Franklin's commitment to teaching these values was itself something he gained from his Puritan upbringing, with its stress on "inculcating virtue and character in themselves and their communities." These Puritan values, and the desire to pass them on, were one of his quintessentially American characteristics and helped shape the character of the nation. Max Weber considered Franklin's ethical writings a culmination of the Protestant ethic, an ethic that created the social conditions necessary for the birth of capitalism. One of Franklin's characteristics was his respect, tolerance, and promotion of all churches. Referring to his experience in Philadelphia, he wrote in his autobiography: "new Places of worship were continually wanted, and generally erected by voluntary Contribution, my Mite for such purpose, whatever might be the Sect, was never refused." "He helped create a new type of nation that would draw strength from its religious pluralism." The evangelical revivalists who were active mid-century, such as Whitefield, were the greatest advocates of religious freedom, "claiming liberty of conscience to be an 'inalienable right of every rational creature.'" Whitefield's supporters in Philadelphia, including Franklin, erected "a large, new hall, that ... could provide a pulpit to anyone of any belief." Franklin's rejection of dogma and doctrine, and his stress on the God of ethics, morality, and civic virtue, made him the "prophet of tolerance." He composed "A Parable Against Persecution", an apocryphal 51st chapter of Genesis in which God teaches Abraham the duty of tolerance. While he was living in London in 1774, he was present at the birth of British Unitarianism, attending the inaugural session of the Essex Street Chapel, at which Theophilus Lindsey drew together the first avowedly Unitarian congregation in England; this was somewhat politically risky and pushed religious tolerance to new boundaries, as a denial of the doctrine of the Trinity was illegal until the Doctrine of the Trinity Act 1813. Although his parents had intended for him a career in the church, Franklin as a young man adopted the Enlightenment religious belief in deism, that God's truths can be found entirely through nature and reason, declaring, "I soon became a thorough Deist." He rejected Christian dogma in a 1725 pamphlet, "A Dissertation on Liberty and Necessity, Pleasure and Pain", which he later saw as an embarrassment, while simultaneously asserting that God is "all wise, all good, all powerful." He defended his rejection of religious dogma with these words: "I think opinions should be judged by their influences and effects; and if a man holds none that tend to make him less virtuous or more vicious, it may be concluded that he holds none that are dangerous, which I hope is the case with me." After the disillusioning experience of seeing the decay in his own moral standards, and those of two friends in London whom he had converted to deism, Franklin decided that deism was true but not as useful in promoting personal morality as were the controls imposed by organized religion.
Ralph Frasca contends that in his later life Franklin can be considered a non-denominational Christian, although he did not believe Christ was divine. In a major scholarly study of his religion, Thomas Kidd argues that Franklin believed that true religiosity was a matter of personal morality and civic virtue. Kidd says Franklin maintained his lifelong resistance to orthodox Christianity while arriving finally at a "doctrineless, moralized Christianity." According to David Morgan, Franklin was a proponent of "generic religion." He prayed to "Powerful Goodness" and referred to God as "the infinite." John Adams noted that he was a mirror in which people saw their own religion: "The Catholics thought him almost a Catholic. The Church of England claimed him as one of them. The Presbyterians thought him half a Presbyterian, and the Friends believed him a wet Quaker." Adams himself decided that Franklin best fit among the "Atheists, Deists, and Libertines." Whatever else Franklin was, concludes Morgan, "he was a true champion of generic religion." In a letter to Richard Price, Franklin states that he believes religion should support itself without help from the government, claiming, "When a Religion is good, I conceive that it will support itself; and, when it cannot support itself, and God does not take care to support, so that its Professors are oblig'd to call for the help of the Civil Power, it is a sign, I apprehend, of its being a bad one." In 1790, just about a month before he died, Franklin wrote a letter setting out his views on religion to Ezra Stiles, president of Yale University, who had asked for them. On July 4, 1776, Congress appointed a three-member committee composed of Franklin, Jefferson, and Adams to design the Great Seal of the United States. Franklin's proposal (which was not adopted) featured the motto "Rebellion to Tyrants is Obedience to God" and a scene from the Book of Exodus that he took from the frontispiece of the Geneva Bible, with Moses, the Israelites, the pillar of fire, and George III depicted as pharaoh. The design that was produced was not acted upon by Congress, and the Great Seal's design was not finalized until a third committee was appointed in 1782. Franklin strongly supported the right to freedom of speech. Thirteen Virtues. Franklin sought to cultivate his character by a plan of 13 virtues, which he developed at age 20 (in 1726) and continued to practice in some form for the rest of his life. His autobiography lists the 13 virtues as temperance, silence, order, resolution, frugality, industry, sincerity, justice, moderation, cleanliness, tranquility, chastity, and humility. Franklin did not try to work on them all at once. Instead, he worked on only one each week, "leaving all others to their ordinary chance." While he did not adhere completely to the enumerated virtues, and by his own admission fell short of them many times, he believed the attempt made him a better man, contributing greatly to his success and happiness, which is why in his autobiography he devoted more pages to this plan than to any other single point and wrote, "I hope, therefore, that some of my descendants may follow the example and reap the benefit." Slavery. Franklin's views and practices concerning slavery evolved over the course of his life. In his early years, Franklin owned seven slaves, including two men who worked in his household and his shop, but in his later years he became an adherent of abolition. A revenue stream for his newspaper was paid advertisements for the sale of slaves and for the capture of runaway slaves, and Franklin allowed the sale of slaves in his general store. He later became an outspoken critic of slavery.
In 1758, he advocated the opening of a school for the education of black slaves in Philadelphia. He took two slaves to England with him, Peter and King. King escaped with a woman to live in the outskirts of London, and by 1758 he was working for a household in Suffolk. After returning from England in 1762, Franklin grew more openly abolitionist, attacking American slavery. In the wake of "Somerset v Stewart", he voiced frustration at British abolitionists. Franklin refused to publicly debate the issue of slavery at the 1787 Constitutional Convention. At the time of the American founding, there were about half a million slaves in the United States, mostly in the five southernmost states, where they made up 40% of the population. Many of the leading American founders, such as Thomas Jefferson, George Washington, and James Madison, owned slaves, but many others did not. Benjamin Franklin thought that slavery was "an atrocious debasement of human nature" and "a source of serious evils." In 1787, Franklin and Benjamin Rush helped write a new constitution for the Pennsylvania Society for Promoting the Abolition of Slavery, and that same year Franklin became president of the organization. In 1790, Quakers from New York and Pennsylvania presented their petition for abolition to Congress. Their argument against slavery was backed by the Pennsylvania Abolitionist Society. In his later years, as Congress was forced to deal with the issue of slavery, Franklin wrote several essays that stressed the importance of the abolition of slavery and of the integration of African Americans into American society. Vegetarianism. Franklin became a vegetarian when he was a teenager apprenticing at a print shop, after coming upon a book by the early vegetarian advocate Thomas Tryon. He would also have been familiar with the moral arguments espoused by prominent vegetarian Quakers in the colonial-era Province of Pennsylvania, including Benjamin Lay and John Woolman. His reasons for vegetarianism were based on health, ethics, and economy. Franklin also declared the consumption of fish to be "unprovoked murder." Despite his convictions, he began to eat fish after being tempted by fried cod on a boat sailing from Boston, justifying the eating of animals by observing that the fish's stomach contained other fish. Nonetheless, he recognized the faulty ethics in this argument and continued to be a vegetarian on and off. He was "excited" by tofu, which he learned of from the writings of a Spanish missionary to Southeast Asia, Domingo Fernández Navarrete. Franklin sent a sample of soybeans to the prominent American botanist John Bartram, and he had previously written to the British diplomat and Chinese trade expert James Flint inquiring as to how tofu was made; their correspondence is believed to contain the first documented use of the word "tofu" in the English language. Franklin's "Second Reply to 'Vindex Patriae'", a 1766 letter advocating self-sufficiency and less dependence on England, lists various examples of the bounty of American agricultural products and does not mention meat. Detailing new American customs, he wrote that "[t]hey resolved last spring to eat no more lamb; and not a joint of lamb has since been seen on any of their tables ... the sweet little creatures are all alive to this day, with the prettiest fleeces on their backs imaginable." View on inoculation.
The concept of preventing smallpox by variolation was introduced to colonial America in the early eighteenth century by an African slave named Onesimus, by way of his owner Cotton Mather, but the procedure was not immediately accepted. James Franklin's newspaper carried articles in 1721 that vigorously denounced the concept. By 1736, however, Benjamin Franklin was known as a supporter of the procedure. Therefore, when four-year-old "Franky" died of smallpox, opponents of the procedure circulated rumors that the child had been inoculated, and that this was the cause of his death. When Franklin became aware of this gossip, he placed a notice in the "Pennsylvania Gazette", stating: "I do hereby sincerely declare, that he was not inoculated, but receiv'd the Distemper in the common Way of Infection ... I intended to have my Child inoculated." The child had had a bad case of flux (diarrhea), and his parents had waited for him to get well before having him inoculated. Franklin wrote in his "Autobiography": "In 1736 I lost one of my sons, a fine boy of four years old, by the small-pox, taken in the common way. I long regretted bitterly, and still regret that I had not given it to him by inoculation. This I mention for the sake of parents who omit that operation, on the supposition that they should never forgive themselves if a child died under it; my example showing that the regret may be the same either way, and that, therefore, the safer should be chosen." Views on the future of technology. In a letter to Joseph Priestley of February 8, 1780, Benjamin Franklin speculated that in the future "all Diseases may by sure means be prevented or cured, not excepting even that of Old Age, and our Lives lengthened at pleasure even beyond the antediluvian Standard". In the same letter, Franklin wrote: The rapid progress true science now makes, occasions my regretting sometimes that I was born so soon: it is impossible to imagine the height to which may be carried, in a thousand years, the power of man over matter; we may perhaps learn to deprive large masses of their gravity, and give them absolute levity for the sake of easy transport. Agriculture may diminish its labour and double its produce... In 1773, Franklin imagined a technology similar to cryonics: I wish it were possible to invent a method of embalming drowned persons in such a manner that they might be recalled to life at any period, however distant; for having a very ardent desire to see and observe the state of America a hundred years hence... Interests and activities. Musical endeavors. Franklin is known to have played the violin, the harp, and the guitar. He also composed music, including a string quartet in early classical style. While he was in London, he developed a much-improved version of the glass harmonica, in which the glasses rotate on a shaft and the player's fingers are held steady, instead of the other way around. He worked with the London glassblower Charles James to create it, and instruments based on his mechanical version soon found their way to other parts of Europe. Joseph Haydn, a fan of Franklin's enlightened ideas, had a glass harmonica in his instrument collection. Wolfgang Amadeus Mozart composed for Franklin's glass harmonica, as did Beethoven. Gaetano Donizetti used the instrument in the accompaniment to Amelia's aria "Par che mi dica ancora" in the tragic opera "Il castello di Kenilworth" (1821), as did Camille Saint-Saëns in his 1886 suite "The Carnival of the Animals".
Richard Strauss calls for the glass harmonica in his 1917 opera "Die Frau ohne Schatten", and numerous other composers used Franklin's instrument as well. Chess. Franklin was an avid chess player. He was playing chess by around 1733, making him the first chess player known by name in the American colonies. His essay on "The Morals of Chess" in "Columbian Magazine" in December 1786 is the second known writing on chess in America. This essay in praise of chess, prescribing a code of behavior for the game, has been widely reprinted and translated. He and a friend used chess as a means of learning the Italian language, which both were studying; the winner of each game between them had the right to assign a task, such as parts of the Italian grammar to be learned by heart, to be performed by the loser before their next meeting. Franklin was able to play chess more frequently, and against stronger opposition, during his many years as a civil servant and diplomat in England, where the game was far better established than in America; facing these more experienced players raised his standard of play. He regularly attended Old Slaughter's Coffee House in London for chess and socializing, making many important personal contacts. While in Paris, both as a visitor and later as ambassador, he visited the famous Café de la Régence, which France's strongest players made their regular meeting place. No records of his games have survived, so it is not possible to ascertain his playing strength in modern terms. Franklin was inducted into the U.S. Chess Hall of Fame in 1999. The Franklin Mercantile Chess Club in Philadelphia, the second oldest chess club in the U.S., is named in his honor. Legacy. Bequest. Franklin bequeathed £1,000 (about $4,400 at the time, or about $125,000 in 2021 dollars) each to the cities of Boston and Philadelphia, in trust to gather interest for 200 years. The idea dated to 1785, when the French mathematician Charles-Joseph Mathon de la Cour, who admired Franklin greatly, wrote a friendly parody of Franklin's "Poor Richard's Almanack" called "Fortunate Richard". The main character leaves a smallish amount of money in his will, five lots of 100 "livres", to collect interest over one, two, three, four or five full centuries, with the resulting astronomical sums to be spent on impossibly elaborate utopian projects. Franklin, who was 79 years old at the time, wrote thanking him for a great idea and telling him that he had decided to leave a bequest of £1,000 each to his native Boston and his adopted Philadelphia. By 1990, more than $2,000,000 had accumulated in Franklin's Philadelphia trust, which had loaned the money to local residents. From 1940 to 1990, the money was used mostly for mortgage loans. When the trust came due, Philadelphia decided to spend it on scholarships for local high school students. Franklin's Boston trust fund accumulated almost $5,000,000 during that same time; at the end of its first 100 years a portion was allocated to help establish a trade school that became the Franklin Institute of Boston, and the entire fund was later dedicated to supporting this institute. In 1787, a group of prominent ministers in Lancaster, Pennsylvania, proposed the foundation of a new college named in Franklin's honor. Franklin donated £200 towards the development of Franklin College (now called Franklin & Marshall College). Likeness and image.
As the only person to have signed the Declaration of Independence in 1776, the Treaty of Alliance with France in 1778, the Treaty of Paris in 1783, and the U.S. Constitution in 1787, Franklin is considered one of the leading Founding Fathers of the United States. His pervasive influence in the early history of the nation has led to his being jocularly called "the only president of the United States who was never president of the United States." Franklin's likeness is ubiquitous. Since 1914, it has adorned American $100 bills. From 1948 to 1963, Franklin's portrait was on the half-dollar. He has appeared on a $50 bill and on several varieties of the $100 bill of 1914 and 1918. Franklin also appears on the $1,000 Series EE savings bond. On April 12, 1976, as part of a bicentennial celebration, Congress dedicated a tall marble statue in Philadelphia's Franklin Institute as the Benjamin Franklin National Memorial. Vice President Nelson Rockefeller presided over the dedication ceremony. Many of Franklin's personal possessions are on display at the institute. In London, his house at 36 Craven Street, the only surviving former residence of Franklin, was first marked with a blue plaque and has since been opened to the public as the Benjamin Franklin House. In 1998, workmen restoring the building dug up the remains of six children and four adults hidden below the home; a total of 15 bodies have been recovered. The Friends of Benjamin Franklin House (the organization responsible for the restoration) note that the bones were likely placed there by William Hewson, who lived in the house for two years and who had built a small anatomy school at the back of the house. They note that while Franklin likely knew what Hewson was doing, he probably did not participate in any dissections because he was much more of a physicist than a medical man. Franklin has been honored on U.S. postage stamps many times. The image of Franklin, the first postmaster general of the United States, appears on the face of U.S. postage more often than that of any other American save George Washington. He appeared on the first U.S. postage stamp, issued in 1847. From 1908 through 1923, the U.S. Post Office issued a series of postage stamps commonly referred to as the Washington–Franklin Issues, in which Washington and Franklin were depicted many times over a 14-year period, the longest run of any one series in U.S. postal history. However, he appears on only a few commemorative stamps. Some of the finest portrayals of Franklin on record can be found in the engravings on the face of U.S. postage.
3989
18872885
https://en.wikipedia.org/wiki?curid=3989
Banach space
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly. Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space". Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces. Definition. A Banach space is a complete normed space $(X, \|\cdot\|)$. A normed space is a pair $(X, \|\cdot\|)$ consisting of a vector space $X$ over a scalar field $\mathbb{K}$ (where $\mathbb{K}$ is commonly $\mathbb{R}$ or $\mathbb{C}$) together with a distinguished norm $\|\cdot\| : X \to \mathbb{R}$. Like all norms, this norm induces a translation invariant distance function, called the "canonical" or "(norm) induced metric", defined for all vectors $x, y \in X$ by \[ d(x, y) := \|y - x\| = \|x - y\|. \] This makes $X$ into a metric space $(X, d)$. A sequence $x_1, x_2, \ldots$ is called Cauchy (or $d$-Cauchy, or $\|\cdot\|$-Cauchy) if for every real $r > 0$ there exists some index $N$ such that \[ d(x_n, x_m) = \|x_n - x_m\| < r \] whenever $m$ and $n$ are greater than $N$. The normed space $(X, \|\cdot\|)$ is called a Banach space and the canonical metric $d$ is called a "complete metric" if $(X, d)$ is a complete metric space, which by definition means that for every Cauchy sequence $x_1, x_2, \ldots$ in $(X, d)$ there exists some $x \in X$ such that $\lim_{n \to \infty} x_n = x$, where because $d(x_n, x) = \|x_n - x\|$, this sequence's convergence to $x$ can equivalently be expressed as \[ \lim_{n \to \infty} \|x_n - x\| = 0. \] The norm $\|\cdot\|$ of a normed space $(X, \|\cdot\|)$ is called a complete norm if $(X, \|\cdot\|)$ is a Banach space. L-semi-inner product. For any normed space $(X, \|\cdot\|)$, there exists an L-semi-inner product $\langle \cdot, \cdot \rangle$ on $X$ such that $\|x\| = \sqrt{\langle x, x \rangle}$ for all $x \in X$. In general, there may be infinitely many L-semi-inner products that satisfy this condition and the proof of the existence of L-semi-inner products relies on the non-constructive Hahn–Banach theorem. L-semi-inner products are a generalization of inner products, which are what fundamentally distinguish Hilbert spaces from all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces. Characterization in terms of series. The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors. A normed space $X$ is a Banach space if and only if each absolutely convergent series in $X$ converges to a value that lies within $X$; symbolically, \[ \sum_{n=1}^{\infty} \|v_n\| < \infty \quad \text{implies that} \quad \sum_{n=1}^{\infty} v_n \ \text{converges in} \ X. \] Topology. The canonical metric $d$ of a normed space $(X, \|\cdot\|)$ induces the usual metric topology $\tau_d$ on $X$, which is referred to as the "canonical" or "norm induced topology". Every normed space is automatically assumed to carry this Hausdorff topology, unless indicated otherwise. With this topology, every Banach space is a Baire space, although there exist normed spaces that are Baire but not Banach.
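As a concrete illustration of the series criterion above (a standard textbook example, supplied here rather than taken from the original article), consider the space $c_{00}$ of finitely supported scalar sequences under the supremum norm, where $e_n$ denotes the sequence whose $n$-th coordinate is $1$ and whose other coordinates are $0$:
\[ v_n := 2^{-n} e_n, \qquad \sum_{n=1}^{\infty} \|v_n\|_{\infty} = \sum_{n=1}^{\infty} 2^{-n} = 1 < \infty, \qquad \text{yet} \quad \sum_{n=1}^{\infty} v_n = \left( \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots \right) \notin c_{00}. \]
The series is absolutely convergent but has no sum inside $c_{00}$, so $c_{00}$ is not a Banach space; its completion in this norm is the space $c_0$ of sequences tending to zero.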
The norm $\|\cdot\|$ is always a continuous function with respect to the topology that it induces. The open and closed balls of radius $r > 0$ centered at a point $x \in X$ are, respectively, the sets \[ B_r(x) := \{z \in X : \|z - x\| < r\} \quad \text{and} \quad C_r(x) := \{z \in X : \|z - x\| \leq r\}. \] Any such ball is a convex and bounded subset of $X$, but a compact ball/neighborhood exists if and only if $X$ is finite-dimensional. In particular, no infinite-dimensional normed space can be locally compact or have the Heine–Borel property. If $x_0$ is a vector and $s \neq 0$ is a scalar, then \[ x_0 + B_r(x) = B_r(x_0 + x) \quad \text{and} \quad s\, B_r(x) = B_{|s| r}(s x). \] Using the first of these identities shows that the norm-induced topology is translation invariant, which means that for any $x \in X$ and $S \subseteq X$, the subset $S$ is open (respectively, closed) in $X$ if and only if its translation $x + S$ is open (respectively, closed). Consequently, the norm induced topology is completely determined by any neighbourhood basis at the origin. Some common neighborhood bases at the origin include \[ \{ B_{r_n}(0) : n \in \mathbb{N} \} \quad \text{or} \quad \{ C_{r_n}(0) : n \in \mathbb{N} \}, \] where $r_1, r_2, \ldots$ can be any sequence of positive real numbers that converges to $0$ in $\mathbb{R}$ (common choices are $r_n := \tfrac{1}{n}$ or $r_n := \tfrac{1}{2^n}$). So, for example, any open subset $U$ of $X$ can be written as a union \[ U = \bigcup_{x \in I} B_{r_x}(x) \] indexed by some subset $I \subseteq U$, where each radius $r_x$ may be chosen from the aforementioned sequence $r_1, r_2, \ldots$ (The open balls can also be replaced with closed balls, although the indexing set $I$ and the radii may then also need to be changed.) Additionally, $I$ can always be chosen to be countable if $X$ is a separable space, which by definition means that $X$ contains some countable dense subset. Homeomorphism classes of separable Banach spaces. All finite-dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic. Every separable infinite-dimensional Hilbert space is linearly isometrically isomorphic to the separable Hilbert sequence space $\ell^2(\mathbb{N})$ with its usual norm $\|\cdot\|_2$. The Anderson–Kadec theorem states that every infinite-dimensional separable Fréchet space is homeomorphic to the product space $\prod_{n \in \mathbb{N}} \mathbb{R}$ of countably many copies of $\mathbb{R}$ (this homeomorphism need not be a linear map). Thus all infinite-dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is unique up to a homeomorphism). Since every Banach space is a Fréchet space, this is also true of all infinite-dimensional separable Banach spaces, including $\ell^2(\mathbb{N})$. In fact, $\ell^2(\mathbb{N})$ is even homeomorphic to its own unit sphere $\{x \in \ell^2(\mathbb{N}) : \|x\|_2 = 1\}$, which stands in sharp contrast to finite-dimensional spaces (the Euclidean plane $\mathbb{R}^2$ is not homeomorphic to the unit circle, for instance). This pattern in homeomorphism classes extends to generalizations of metrizable (locally Euclidean) topological manifolds known as metric Banach manifolds, which are metric spaces that are, around every point, locally homeomorphic to some open subset of a given Banach space (metric Hilbert manifolds and metric Fréchet manifolds are defined similarly). For example, every open subset $U$ of a Banach space $X$ is canonically a metric Banach manifold modeled on $X$ since the inclusion map $U \to X$ is an open local homeomorphism.
Using Hilbert space microbundles, David Henderson showed in 1969 that every metric manifold modeled on a separable infinite-dimensional Banach (or Fréchet) space can be topologically embedded as an open subset of $\ell^2(\mathbb{N})$ and, consequently, also admits a unique smooth structure making it into a $C^{\infty}$ Hilbert manifold. Compact and convex subsets. There is a compact subset $S$ of $\ell^2(\mathbb{N})$ whose convex hull $\operatorname{co}(S)$ is not closed and thus also not compact. However, as in all Banach spaces, the closed convex hull $\overline{\operatorname{co}}(S)$ of this (and every other) compact subset will be compact. In a normed space that is not complete, it is in general not guaranteed that $\overline{\operatorname{co}}(S)$ will be compact whenever $S$ is; an example can even be found in a (non-complete) pre-Hilbert vector subspace of $\ell^2(\mathbb{N})$. As a topological vector space. This norm-induced topology also makes $(X, \tau_d)$ into what is known as a topological vector space (TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous. It is emphasized that the TVS $(X, \tau_d)$ is only a vector space together with a certain type of topology; that is to say, when considered as a TVS, it is not associated with any particular norm or metric (both of which are "forgotten"). This Hausdorff TVS $(X, \tau_d)$ is even locally convex because the set of all open balls centered at the origin forms a neighbourhood basis at the origin consisting of convex balanced open sets. This TVS is also normable, which by definition refers to any TVS whose topology is induced by some (possibly unknown) norm. Normable TVSs are characterized by being Hausdorff and having a bounded convex neighborhood of the origin. All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example) and guarantees that the Banach–Steinhaus theorem holds. Comparison of complete metrizable vector topologies. The open mapping theorem implies that when $\tau$ and $\tau_2$ are topologies on $X$ that make both $(X, \tau)$ and $(X, \tau_2)$ into complete metrizable TVSs (for example, Banach or Fréchet spaces), if one topology is finer or coarser than the other, then they must be equal (that is, if $\tau \subseteq \tau_2$ or $\tau_2 \subseteq \tau$, then $\tau = \tau_2$). So, for example, if $(X, p)$ and $(X, q)$ are Banach spaces with topologies $\tau_p$ and $\tau_q$, and if one of these spaces has some open ball that is also an open subset of the other space (or, equivalently, if one of $p : (X, q) \to \mathbb{R}$ or $q : (X, p) \to \mathbb{R}$ is continuous), then their topologies are identical and the norms $p$ and $q$ are equivalent. Completeness. Complete norms and equivalent norms. Two norms $p$ and $q$ on a vector space $X$ are said to be "equivalent" if they induce the same topology; this happens if and only if there exist positive real numbers $c, C > 0$ such that $c\, q(x) \leq p(x) \leq C\, q(x)$ for all $x \in X$. If $p$ and $q$ are two equivalent norms on a vector space $X$, then $(X, p)$ is a Banach space if and only if $(X, q)$ is a Banach space. See this footnote for an example of a continuous norm on a Banach space that is not equivalent to that Banach space's given norm. All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space. Complete norms vs complete metrics.
A metric $d$ on a vector space $X$ is induced by a norm on $X$ if and only if $d$ is translation invariant and "absolutely homogeneous", which means that \[ d(s x, s y) = |s|\, d(x, y) \] for all scalars $s$ and all $x, y \in X$, in which case the function $\|x\| := d(x, 0)$ defines a norm on $X$ and the canonical metric induced by $\|\cdot\|$ is equal to $d$. Suppose that $(X, \|\cdot\|)$ is a normed space and that $\tau$ is the norm topology induced on $X$. Suppose that $d$ is any metric on $X$ such that the topology that $d$ induces on $X$ is equal to $\tau$. If $d$ is translation invariant, then $(X, \|\cdot\|)$ is a Banach space if and only if $(X, d)$ is a complete metric space. If $d$ is not translation invariant, then it may be possible for $(X, \|\cdot\|)$ to be a Banach space but for $(X, d)$ to not be a complete metric space (see this footnote for an example). In contrast, a theorem of Klee, which also applies to all metrizable topological vector spaces, implies that if there exists any complete metric $d$ on $X$ that induces the norm topology $\tau$ on $X$, then $(X, \|\cdot\|)$ is a Banach space. A Fréchet space is a locally convex topological vector space whose topology is induced by some translation-invariant complete metric. Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as the space of all real sequences $\mathbb{R}^{\mathbb{N}}$ with the product topology). However, the topology of every Fréchet space is induced by some countable family of real-valued (necessarily continuous) maps called seminorms, which are generalizations of norms. It is even possible for a Fréchet space to have a topology that is induced by a countable family of norms (such norms would necessarily be continuous) but to not be a Banach/normable space because its topology cannot be defined by any single norm. An example of such a space is the Fréchet space $C^{\infty}(K)$, whose definition can be found in the article on spaces of test functions and distributions. Complete norms vs complete topological vector spaces. There is another notion of completeness besides metric completeness and that is the notion of a complete topological vector space (TVS) or TVS-completeness, which uses the theory of uniform spaces. Specifically, the notion of TVS-completeness uses a unique translation-invariant uniformity, called the canonical uniformity, that depends on vector subtraction and the topology $\tau$ that the vector space is endowed with, and so in particular, this notion of TVS completeness is independent of whatever norm induced the topology $\tau$ (and even applies to TVSs that are not metrizable). Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space. If $(X, \tau)$ is a metrizable topological vector space (such as any norm-induced topology, for example), then $(X, \tau)$ is a complete TVS if and only if it is a sequentially complete TVS, meaning that it is enough to check that every Cauchy sequence in $(X, \tau)$ converges in $(X, \tau)$ to some point of $X$ (that is, there is no need to consider the more general notion of arbitrary Cauchy nets).
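For a concrete example of a complete, metrizable, but non-normable TVS (a standard example, supplied here for illustration rather than taken from the original article): on the space $\mathbb{R}^{\mathbb{N}}$ of all real sequences with the product topology, a complete translation-invariant metric inducing the topology is
\[ d(x, y) = \sum_{n=1}^{\infty} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|}. \]
This metric is not absolutely homogeneous, and indeed no norm can induce the product topology: every basic neighborhood of the origin constrains only finitely many coordinates and therefore contains an infinite-dimensional vector subspace, whereas the unit ball of a norm contains no nontrivial subspace.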
If formula_163 is a topological vector space whose topology is induced by some (possibly unknown) norm (such spaces are called normable), then formula_163 is a complete topological vector space if and only if formula_3 may be assigned a norm formula_30 that induces on formula_3 the topology formula_141 and also makes formula_2 into a Banach space. A Hausdorff locally convex topological vector space formula_3 is normable if and only if its strong dual space formula_176 is normable, in which case formula_176 is a Banach space (formula_176 denotes the strong dual space of formula_40 whose topology is a generalization of the dual norm-induced topology on the continuous dual space formula_180; see this footnote for more details). If formula_3 is a metrizable locally convex TVS, then formula_3 is normable if and only if formula_176 is a Fréchet–Urysohn space. This shows that in the category of locally convex TVSs, Banach spaces are exactly those complete spaces that are both metrizable and have metrizable strong dual spaces. Completions. Every normed space can be isometrically embedded onto a dense vector subspace of a Banach space, where this Banach space is called a "completion" of the normed space. This Hausdorff completion is unique up to isometric isomorphism. More precisely, for every normed space formula_40 there exists a Banach space formula_185 and a mapping formula_186 such that formula_187 is an isometric mapping and formula_188 is dense in formula_189 If formula_190 is another Banach space such that there is an isometric isomorphism from formula_3 onto a dense subset of formula_192 then formula_190 is isometrically isomorphic to formula_189 The Banach space formula_185 is the Hausdorff "completion" of the normed space formula_142 The underlying metric space for formula_185 is the same as the metric completion of formula_40 with the vector space operations extended from formula_3 to formula_189 The completion of formula_3 is sometimes denoted by formula_202 General theory. Linear operators, isomorphisms. If formula_3 and formula_185 are normed spaces over the same ground field formula_205 the set of all continuous formula_4-linear maps formula_186 is denoted by formula_208 In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space formula_3 to another normed space is continuous if and only if it is bounded on the closed unit ball of formula_142 Thus, the vector space formula_211 can be given the operator norm formula_212 For formula_185 a Banach space, the space formula_211 is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict the function space between two Banach spaces to only the short maps; in that case the space formula_215 reappears as a natural bifunctor. If formula_3 is a Banach space, the space formula_217 forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps. If formula_3 and formula_185 are normed spaces, they are isomorphic normed spaces if there exists a linear bijection formula_186 such that formula_187 and its inverse formula_222 are continuous. If one of the two spaces formula_3 or formula_185 is complete (or reflexive, separable, etc.) then so is the other space.
Two normed spaces formula_3 and formula_185 are "isometrically isomorphic" if in addition, formula_187 is an isometry, that is, formula_228 for every formula_28 in formula_142 The Banach–Mazur distance formula_231 between two isomorphic but not isometric spaces formula_3 and formula_185 gives a measure of how much the two spaces formula_3 and formula_185 differ. Continuous and bounded linear functions and seminorms. Every continuous linear operator is a bounded linear operator and if dealing only with normed spaces then the converse is also true. That is, a linear operator between two normed spaces is bounded if and only if it is a continuous function. So in particular, because the scalar field (which is formula_64 or formula_7) is a normed space, a linear functional on a normed space is a bounded linear functional if and only if it is a continuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces. If formula_238 is a subadditive function (such as a norm, a sublinear function, or real linear functional), then formula_239 is continuous at the origin if and only if formula_239 is uniformly continuous on all of formula_3; and if in addition formula_242 then formula_239 is continuous if and only if its absolute value formula_244 is continuous, which happens if and only if formula_245 is an open subset of formula_142 And very importantly for applying the Hahn–Banach theorem, a linear functional formula_239 is continuous if and only if this is true of its real part formula_248 and moreover, formula_249 and the real part formula_248 completely determines formula_251 which is why the Hahn–Banach theorem is often stated only for real linear functionals. Also, a linear functional formula_239 on formula_3 is continuous if and only if the seminorm formula_254 is continuous, which happens if and only if there exists a continuous seminorm formula_255 such that formula_256; this last statement involving the linear functional formula_239 and seminorm formula_116 is encountered in many versions of the Hahn–Banach theorem. Basic notions. The Cartesian product formula_259 of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as formula_260 which correspond (respectively) to the coproduct and product in the category of Banach spaces and short maps (discussed above). For finite (co)products, these norms give rise to isomorphic normed spaces, and the product formula_259 (or the direct sum formula_262) is complete if and only if the two factors are complete. If formula_263 is a closed linear subspace of a normed space formula_40 there is a natural norm on the quotient space formula_265 formula_266 The quotient formula_267 is a Banach space when formula_3 is complete. The quotient map from formula_3 onto formula_265 sending formula_25 to its class formula_272 is linear, onto, and of norm formula_273 except when formula_274 in which case the quotient is the null space. 
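In standard notation (supplied here for concreteness, since the formulas above did not render), the operator norm on the space of bounded linear maps and the quotient norm on $X / M$ are
\[ \|T\| = \sup \{ \|T x\| : x \in X,\ \|x\| \leq 1 \} \qquad \text{and} \qquad \|x + M\| = \inf \{ \|x - m\| : m \in M \}. \]
The operator norm is submultiplicative under composition, $\|S \circ T\| \leq \|S\| \|T\|$, which is the inequality behind the Banach algebra structure of the space of bounded operators mentioned above; the quotient norm is exactly the distance from $x$ to the closed subspace $M$.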
The closed linear subspace formula_263 of formula_3 is said to be a "complemented subspace" of formula_3 if formula_263 is the range of a surjective bounded linear projection formula_279 In this case, the space formula_3 is isomorphic to the direct sum of formula_263 and formula_282 the kernel of the projection formula_283 Suppose that formula_3 and formula_185 are Banach spaces and that formula_286 There exists a canonical factorization of formula_187 as formula_288 where the first map formula_289 is the quotient map, and the second map formula_290 sends every class formula_291 in the quotient to the image formula_292 in formula_189 This is well defined because all elements in the same class have the same image. The mapping formula_290 is a linear bijection from formula_295 onto the range formula_296 whose inverse need not be bounded. Classical spaces. Basic examples of Banach spaces include: the Lp spaces formula_297 and their special cases, the sequence spaces formula_298 that consist of scalar sequences indexed by natural numbers formula_299; among them, the space formula_300 of absolutely summable sequences and the space formula_301 of square summable sequences; the space formula_302 of sequences tending to zero and the space formula_303 of bounded sequences; the space formula_304 of continuous scalar functions on a compact Hausdorff space formula_305 equipped with the max norm, formula_306 According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some formula_307 For every separable Banach space formula_40 there is a closed subspace formula_263 of formula_300 such that formula_311 Any Hilbert space serves as an example of a Banach space. A Hilbert space formula_312 on formula_313 is complete for a norm of the form formula_314 where formula_315 is the inner product, linear in its first argument that satisfies the following: formula_316 For example, the space formula_317 is a Hilbert space. The Hardy spaces, the Sobolev spaces are examples of Banach spaces that are related to formula_297 spaces and have additional structure. They are important in different branches of analysis, Harmonic analysis and Partial differential equations among others. Banach algebras. A "Banach algebra" is a Banach space formula_319 over formula_320 or formula_321 together with a structure of algebra over formula_4, such that the product map formula_323 is continuous. An equivalent norm on formula_319 can be found so that formula_325 for all formula_326 Dual space. If formula_3 is a normed space and formula_4 the underlying field (either the reals or the complex numbers), the "continuous dual space" is the space of continuous linear maps from formula_3 into formula_205 or "continuous linear functionals". The notation for the continuous dual is formula_355 in this article. Since formula_4 is a Banach space (using the absolute value as norm), the dual formula_180 is a Banach space, for every normed space formula_142 The Dixmier–Ng theorem characterizes the dual spaces of Banach spaces. The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem. In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional. 
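For reference, the norms of the classical spaces listed above have the following standard forms (supplied here for concreteness, since the formulas did not render):
\[ \|x\|_p = \left( \sum_{n=1}^{\infty} |x_n|^p \right)^{1/p} \quad (1 \leq p < \infty), \qquad \|x\|_{\infty} = \sup_{n} |x_n|, \qquad \|f\| = \max_{x \in K} |f(x)| \ \text{for} \ f \in C(K). \]
The absolutely summable and square summable sequence spaces are the cases $p = 1$ and $p = 2$, and the spaces of null and bounded sequences both carry the supremum norm.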
An important special case is the following: for every vector formula_28 in a normed space formula_40 there exists a continuous linear functional formula_239 on formula_3 such that formula_363 When formula_28 is not equal to the formula_365 vector, the functional formula_239 must have norm one, and is called a "norming functional" for formula_367 The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane. The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane. A subset formula_58 in a Banach space formula_3 is "total" if the linear span of formula_58 is dense in formula_142 The subset formula_58 is total in formula_3 if and only if the only continuous linear functional that vanishes on formula_58 is the formula_365 functional: this equivalence follows from the Hahn–Banach theorem. If formula_3 is the direct sum of two closed linear subspaces formula_263 and formula_378 then the dual formula_180 of formula_3 is isomorphic to the direct sum of the duals of formula_263 and formula_19 If formula_263 is a closed linear subspace in formula_40 one can associate the orthogonal of formula_263 in the dual, formula_386 The orthogonal formula_387 is a closed linear subspace of the dual. The dual of formula_263 is isometrically isomorphic to formula_389 The dual of formula_267 is isometrically isomorphic to formula_391 The dual of a separable Banach space need not be separable; however, if the dual formula_180 of a Banach space formula_3 is separable, then formula_3 is separable. When formula_180 is separable, the above criterion for totality can be used for proving the existence of a countable total subset in formula_142 Weak topologies. The "weak topology" on a Banach space formula_3 is the coarsest topology on formula_3 for which all elements formula_396 in the continuous dual space formula_180 are continuous. The norm topology is therefore finer than the weak topology. It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed. A norm-continuous linear map between two Banach spaces formula_3 and formula_185 is also "weakly continuous", that is, continuous from the weak topology of formula_3 to that of formula_189 If formula_3 is infinite-dimensional, there exist linear maps which are not continuous. The space formula_403 of all linear maps from formula_3 to the underlying field formula_4 (this space formula_403 is called the algebraic dual space, to distinguish it from formula_180) also induces a topology on formula_3 which is finer than the weak topology, and much less used in functional analysis. On a dual space formula_409 there is a topology weaker than the weak topology of formula_409 called the "weak* topology". It is the coarsest topology on formula_180 for which all evaluation maps formula_412 where formula_28 ranges over formula_40 are continuous. Its importance comes from the Banach–Alaoglu theorem. The Banach–Alaoglu theorem can be proved using Tychonoff's theorem about infinite products of compact Hausdorff spaces. When formula_3 is separable, the unit ball formula_416 of the dual is a metrizable compact set in the weak* topology. Examples of dual spaces.
The dual of formula_302 is isometrically isomorphic to formula_300: for every bounded linear functional formula_239 on formula_420 there is a unique element formula_421 such that formula_422 The dual of formula_300 is isometrically isomorphic to formula_303. The dual of the Lebesgue space formula_425 is isometrically isomorphic to formula_426 when formula_427 and formula_428 For every vector formula_429 in a Hilbert space formula_430 the mapping formula_431 defines a continuous linear functional formula_432 on formula_433 The Riesz representation theorem states that every continuous linear functional on formula_312 is of the form formula_432 for a uniquely defined vector formula_429 in formula_433 The mapping formula_438 is an antilinear isometric bijection from formula_312 onto its dual formula_440 When the scalars are real, this map is an isometric isomorphism. When formula_348 is a compact Hausdorff topological space, the dual formula_442 of formula_304 is the space of Radon measures in the sense of Bourbaki. The subset formula_444 of formula_442 consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of formula_446 The extreme points of formula_444 are the Dirac measures on formula_448 The set of Dirac measures on formula_305, equipped with the w*-topology, is homeomorphic to formula_448 The result (the Banach–Stone theorem) has been extended by Amir and Cambern to the case when the multiplicative Banach–Mazur distance between formula_304 and formula_452 is formula_453 The theorem is no longer true when the distance is formula_454 In the commutative Banach algebra formula_455 the maximal ideals are precisely the kernels of Dirac measures on formula_305 formula_457 More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters, not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology. In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual formula_458 Not every unital commutative Banach algebra is of the form formula_304 for some compact Hausdorff space formula_448 However, this statement holds if one places formula_304 in the smaller category of commutative C*-algebras. Gelfand's representation theorem for commutative C*-algebras states that every commutative unital "C"*-algebra formula_319 is isometrically isomorphic to a formula_304 space. The Hausdorff compact space formula_348 here is again the maximal ideal space, also called the spectrum of formula_319 in the C*-algebra context. Bidual.
If formula_480 is surjective, then the normed space formula_3 is called "reflexive" (see below). Being the dual of a normed space, the bidual formula_467 is complete, therefore, every reflexive normed space is a Banach space. Using the isometric embedding formula_491 it is customary to consider a normed space formula_3 as a subset of its bidual. When formula_3 is a Banach space, it is viewed as a closed linear subspace of formula_474 If formula_3 is not reflexive, the unit ball of formula_3 is a proper subset of the unit ball of formula_474 The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual. In other words, for every formula_498 in the bidual, there exists a net formula_499 in formula_3 so that formula_501 The net may be replaced by a weakly*-convergent sequence when the dual formula_180 is separable. On the other hand, elements of the bidual of formula_300 that are not in formula_300 cannot be weak*-limit of in formula_482 since formula_300 is weakly sequentially complete. Banach's theorems. Here are the main general results about Banach spaces that go back to the time of Banach's book () and are related to the Baire category theorem. According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors. Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional. The Banach–Steinhaus theorem is not limited to Banach spaces. It can be extended for example to the case where formula_3 is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood formula_67 of formula_365 in formula_3 such that all formula_187 in formula_512 are uniformly bounded on formula_513 formula_514 This result is a direct consequence of the preceding "Banach isomorphism theorem" and of the canonical factorization of bounded linear maps. This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from formula_515 onto formula_3 sending formula_517 to the sum formula_518 Reflexivity. The normed space formula_3 is called "reflexive" when the natural map formula_520 is surjective. Reflexive normed spaces are Banach spaces. This is a consequence of the Hahn–Banach theorem. Further, by the open mapping theorem, if there is a bounded linear operator from the Banach space formula_3 onto the Banach space formula_522 then formula_185 is reflexive. Indeed, if the dual formula_524 of a Banach space formula_185 is separable, then formula_185 is separable. If formula_3 is reflexive and separable, then the dual of formula_180 is separable, so formula_180 is separable. Hilbert spaces are reflexive. The formula_297 spaces are reflexive when formula_531 More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem. The spaces formula_532 are not reflexive. In these examples of non-reflexive spaces formula_40 the bidual formula_467 is "much larger" than formula_142 Namely, under the natural isometric embedding of formula_3 into formula_467 given by the Hahn–Banach theorem, the quotient formula_538 is infinite-dimensional, and even nonseparable. However, Robert C. 
James constructed an example of a non-reflexive space, usually called "the James space" and denoted by formula_539 such that the quotient formula_540 is one-dimensional. Furthermore, this space formula_541 is isometrically isomorphic to its bidual. When formula_3 is reflexive, it follows that all closed and bounded convex subsets of formula_3 are weakly compact. In a Hilbert space formula_430 the weak compactness of the unit ball is very often used in the following way: every bounded sequence in formula_312 has weakly convergent subsequences. Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems. For example, every convex continuous function on the unit ball formula_546 of a reflexive space attains its minimum at some point in formula_547 As a special case of the preceding result, when formula_3 is a reflexive space over formula_549 every continuous linear functional formula_239 in formula_180 attains its maximum formula_552 on the unit ball of formula_142 A theorem of Robert C. James provides a converse statement. The theorem can be extended to give a characterization of weakly compact convex sets. On every non-reflexive Banach space formula_40 there exist continuous linear functionals that are not "norm-attaining". However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual formula_180 of formula_142 Weak convergences of sequences. A sequence formula_557 in a Banach space formula_3 is "weakly convergent" to a vector formula_25 if formula_560 converges to formula_561 for every continuous linear functional formula_239 in the dual formula_563 The sequence formula_557 is a "weakly Cauchy sequence" if formula_560 converges to a scalar limit formula_566 for every formula_239 in formula_563 A sequence formula_569 in the dual formula_180 is "weakly* convergent" to a functional formula_571 if formula_572 converges to formula_561 for every formula_28 in formula_142 Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem. When the sequence formula_557 in formula_3 is a weakly Cauchy sequence, the limit formula_578 above defines a bounded linear functional on the dual formula_409 that is, an element formula_578 of the bidual of formula_40 and formula_578 is the limit of formula_557 in the weak*-topology of the bidual. The Banach space formula_3 is "weakly sequentially complete" if every weakly Cauchy sequence is weakly convergent in formula_142 It follows from the preceding discussion that reflexive spaces are weakly sequentially complete. An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the formula_365 vector. The unit vector basis of formula_298 for formula_588 or of formula_420 is another example of a "weakly null sequence", that is, a sequence that converges weakly to formula_590 For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to formula_590 The unit vector basis of formula_300 is not weakly Cauchy. Weakly Cauchy sequences in formula_300 are weakly convergent, since formula_594-spaces are weakly sequentially complete. Actually, weakly convergent sequences in formula_300 are norm convergent. This means that formula_300 satisfies Schur's property. Results involving the basis.
Weakly Cauchy sequences and the formula_300 basis are the opposite cases of the dichotomy established in a deep result of H. P. Rosenthal. A complement to this result is due to Odell and Rosenthal (1975). By the Goldstine theorem, every element of the unit ball formula_598 of formula_467 is the weak*-limit of a net in the unit ball of formula_142 When formula_3 does not contain formula_482 every element of formula_598 is the weak*-limit of a sequence in the unit ball of formula_142 When the Banach space formula_3 is separable, the unit ball of the dual formula_409 equipped with the weak*-topology, is a metrizable compact space formula_305 and every element formula_498 in the bidual formula_467 defines a bounded function on formula_348: formula_611 This function is continuous for the compact topology of formula_348 if and only if formula_498 is actually in formula_40 considered as a subset of formula_474 Assume in addition for the rest of the paragraph that formula_3 does not contain formula_617 By the preceding result of Odell and Rosenthal, the function formula_498 is the pointwise limit on formula_348 of a sequence formula_620 of continuous functions on formula_305 it is therefore a first Baire class function on formula_448 The unit ball of the bidual is a pointwise compact subset of the first Baire class on formula_448 Sequences, weak and weak* compactness. When formula_3 is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology, hence every bounded sequence in the dual has weakly* convergent subsequences. This applies to separable reflexive spaces, but more is true in this case, as stated below. The weak topology of a Banach space formula_3 is metrizable if and only if formula_3 is finite-dimensional. If the dual formula_180 is separable, the weak topology of the unit ball of formula_3 is metrizable. This applies in particular to separable reflexive Banach spaces. Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences. A Banach space formula_3 is reflexive if and only if each bounded sequence in formula_3 has a weakly convergent subsequence. A weakly compact subset formula_319 in formula_300 is norm-compact. Indeed, every sequence in formula_319 has weakly convergent subsequences by the Eberlein–Šmulian theorem, which are norm convergent by the Schur property of formula_617 Type and cotype. A way to classify Banach spaces is through the probabilistic notion of type and cotype; these two measure how far a Banach space is from a Hilbert space. Schauder bases. A "Schauder basis" in a Banach space formula_3 is a sequence formula_636 of vectors in formula_3 with the property that for every vector formula_479 there exist uniquely defined scalars formula_639 depending on formula_640 such that formula_641 Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense. It follows from the Banach–Steinhaus theorem that the linear mappings formula_642 are uniformly bounded by some constant formula_643 Let formula_644 denote the coordinate functionals which assign to every formula_28 in formula_3 the coordinate formula_647 of formula_28 in the above expansion. They are called "biorthogonal functionals". When the basis vectors have norm formula_273 the coordinate functionals formula_644 have norm formula_651 in the dual of formula_142 Most classical separable spaces have explicit bases.
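In standard notation (supplied here for concreteness, since the formulas above did not render), a Schauder basis $(e_n)$ with biorthogonal functionals $(e_n^{*})$ gives every vector the expansion
\[ x = \sum_{n=1}^{\infty} e_n^{*}(x)\, e_n, \qquad P_N x := \sum_{n=1}^{N} e_n^{*}(x)\, e_n, \qquad \sup_{N} \|P_N\| < \infty, \]
where the partial-sum operators $P_N$ are the uniformly bounded linear mappings mentioned above; their common bound is called the basis constant.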
The Haar system formula_653 is a basis for formula_425 when formula_655 The trigonometric system is a basis in formula_656 when formula_531 The Schauder system is a basis in the space formula_658 The question of whether the disk algebra formula_328 has a basis remained open for more than forty years, until Bočkarev showed in 1974 that formula_328 admits a basis constructed from the Franklin system. Since every vector formula_28 in a Banach space formula_3 with a basis is the limit of formula_663 with formula_664 of finite rank and uniformly bounded, the space formula_3 satisfies the bounded approximation property. The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis. Robert C. James characterized reflexivity in Banach spaces with a basis: the space formula_3 with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete. In this case, the biorthogonal functionals form a basis of the dual of formula_142 Tensor product. Let formula_3 and formula_185 be two formula_4-vector spaces. The tensor product formula_671 of formula_3 and formula_185 is a formula_4-vector space formula_190 with a bilinear mapping formula_676 which has the following universal property: If formula_677 is any bilinear mapping into a formula_4-vector space formula_679 then there exists a unique linear mapping formula_680 such that formula_681 The image under formula_187 of a couple formula_683 in formula_259 is denoted by formula_685 and called a "simple tensor". Every element formula_686 in formula_671 is a finite sum of such simple tensors. There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and injective cross norm introduced by A. Grothendieck in 1955. In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the "projective tensor product" of two Banach spaces formula_3 and formula_185 is the formula_690 of the algebraic tensor product formula_671 equipped with the projective tensor norm, and similarly for the "injective tensor product" formula_692 Grothendieck proved in particular that formula_693 where formula_348 is a compact Hausdorff space, formula_695 the Banach space of continuous functions from formula_348 to formula_185 and formula_698 the space of Bochner-measurable and integrable functions from formula_699 to formula_522 and where the isomorphisms are isometric. The two isomorphisms above are the respective extensions of the map sending the tensor formula_701 to the vector-valued function formula_702 Tensor products and the approximation property. Let formula_3 be a Banach space. The tensor product formula_704 is identified isometrically with the closure in formula_339 of the set of finite rank operators. When formula_3 has the approximation property, this closure coincides with the space of compact operators on formula_142 For every Banach space formula_522 there is a natural norm formula_709 linear map formula_710 obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when formula_185 is the dual of formula_142 Precisely, for every Banach space formula_40 the map formula_714 is one-to-one if and only if formula_3 has the approximation property. 
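The two cross norms mentioned above have the following standard formulas (supplied here for concreteness, since the original formulas did not render): for an element $z$ of the algebraic tensor product $X \otimes Y$,
\[ \|z\|_{\pi} = \inf \Big\{ \sum_{i} \|x_i\| \|y_i\| : z = \sum_{i} x_i \otimes y_i \Big\}, \qquad \|z\|_{\varepsilon} = \sup \Big\{ \Big| \sum_{i} f(x_i)\, g(y_i) \Big| : \|f\|_{X'} \leq 1, \ \|g\|_{Y'} \leq 1 \Big\}, \]
where the infimum runs over all finite representations of $z$ and the supremum over functionals in the dual unit balls (the value of the second expression does not depend on the chosen representation). Completing the algebraic tensor product in these norms yields the projective and injective tensor products, respectively.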
Grothendieck conjectured that formula_690 and formula_717 must be different whenever formula_3 and formula_185 are infinite-dimensional Banach spaces. This was disproved by Gilles Pisier in 1983. Pisier constructed an infinite-dimensional Banach space formula_3 such that formula_721 and formula_722 are equal. Furthermore, just like Enflo's example, this space formula_3 is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space formula_724 does not have the approximation property. Some classification results. Characterizations of Hilbert space among Banach spaces. A necessary and sufficient condition for the norm of a Banach space formula_3 to be associated to an inner product is the parallelogram identity: $\lVert x + y \rVert^{2} + \lVert x - y \rVert^{2} = 2\left(\lVert x \rVert^{2} + \lVert y \rVert^{2}\right)$ for all vectors $x$ and $y.$ It follows, for example, that the Lebesgue space formula_425 is a Hilbert space only when formula_727 If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives: formula_728 For complex scalars, defining the inner product so as to be formula_7-linear in formula_640 and antilinear in formula_731 the polarization identity gives: formula_732 To see that the parallelogram law is sufficient, one observes in the real case that formula_733 is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property and formula_734 The parallelogram law implies that formula_733 is additive in formula_367 It follows that it is linear over the rationals, thus linear by continuity. Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available. The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant formula_737. Kwapień proved that if formula_738 for every integer formula_18 and all families of vectors formula_740 then the Banach space formula_3 is isomorphic to a Hilbert space. Here, formula_742 denotes the average over the formula_743 possible choices of signs formula_744 In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces. Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer formula_745 any finite-dimensional normed space with dimension sufficiently large compared to formula_745 contains subspaces nearly isometric to the formula_18-dimensional Euclidean space. The next result gives the solution of the so-called homogeneous space problem. An infinite-dimensional Banach space formula_3 is said to be "homogeneous" if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to formula_301 is homogeneous, and Banach asked for the converse. An infinite-dimensional Banach space is "hereditarily indecomposable" when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces. 
The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space formula_3 contains either a subspace formula_185 with an unconditional basis, or a hereditarily indecomposable subspace formula_192 and in particular, formula_190 is not isomorphic to its closed hyperplanes. If formula_3 is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis, that formula_3 is isomorphic to formula_756 Metric classification. If formula_186 is an isometry from the Banach space formula_3 onto the Banach space formula_185 (where both formula_3 and formula_185 are vector spaces over formula_64), then the Mazur–Ulam theorem states that formula_187 must be an affine transformation. In particular, if formula_764 that is, if formula_187 maps the zero of formula_3 to the zero of formula_522 then formula_187 must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure. Topological classification. Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces. The Anderson–Kadec theorem (1965–66) states that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Toruńczyk, who proved that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset. Spaces of continuous functions. When two compact Hausdorff spaces formula_769 and formula_770 are homeomorphic, the Banach spaces formula_771 and formula_772 are isometric. Conversely, when formula_769 is not homeomorphic to formula_774 the (multiplicative) Banach–Mazur distance between formula_771 and formula_772 must be greater than or equal to formula_777 see the results by Amir and Cambern mentioned above. Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin: when formula_769 is an uncountable compact metric space, the Banach space formula_771 is isomorphic to $C([0, 1]).$ The situation is different for countably infinite compact Hausdorff spaces. Every countably infinite compact formula_348 is homeomorphic to some closed interval of ordinal numbers formula_779 equipped with the order topology, where formula_780 is a countably infinite ordinal. The Banach space formula_304 is then isometric to the space of continuous functions on that ordinal interval. When formula_782 are two countably infinite ordinals, and assuming formula_783 the corresponding spaces of continuous functions are isomorphic if and only if the larger of the two ordinals is smaller than the smaller one raised to the power ω. For example, the Banach spaces formula_784 are mutually non-isomorphic. Derivatives. Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details. The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces. Fréchet differentiability is a stronger condition than Gateaux differentiability. The quasi-derivative is another generalization of the directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability. Generalizations. 
Several important spaces in functional analysis, for instance the space of all infinitely differentiable functions formula_785 or the space of all distributions on formula_549 are complete but are not normed vector spaces and hence not Banach spaces. In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces that arise as limits of Fréchet spaces.
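As a concrete illustration of the first example above, the Fréchet topology on the smooth functions can be generated by a countable family of seminorms; the exhaustion $(K_m)$ by compact sets is a standard device assumed for this sketch, not notation taken from the text.

```latex
% C^\infty(\mathbb{R}^n) is a Fréchet space for the seminorms
%   p_m(f) = \sup_{|\alpha| \le m,\; x \in K_m} |\partial^{\alpha} f(x)|,
% where K_1 \subseteq K_2 \subseteq \dots is an exhaustion of \mathbb{R}^n
% by compact sets. A complete translation-invariant metric is then
\[
  d(f, g) = \sum_{m=1}^{\infty} 2^{-m}\,
     \frac{p_m(f - g)}{1 + p_m(f - g)} .
\]
```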
Bram Stoker
Abraham Stoker (8 November 1847 – 20 April 1912), better known by his pen name Bram Stoker, was an Irish theatre manager and novelist. He is best known as the author of "Dracula" (1897), an epistolary Gothic horror novel that is regarded as a milestone in vampire literature. During the early part of his career, he spent ten years in the civil service at Dublin Castle, during which time he was also a drama critic for the "Dublin Evening Mail". Following this, he was employed as a theatre critic for several newspapers, including the "Daily Telegraph", and occasionally wrote short stories and theatre commentaries. During his life, he was better known as the personal assistant of actor Sir Henry Irving and the business manager of the West End's Lyceum Theatre, which Irving owned. He regularly travelled during his free time, particularly to Cruden Bay in Scotland, which was the setting for two of his novels and also served as the inspiration for writing "Dracula". Stoker was friends with both Arthur Conan Doyle and Oscar Wilde, and he collaborated with other authors in writing experimental novels such as "The Fate of Fenella" (1892). Although Stoker wrote a total of 12 mystery novels and novellas, his reputation as one of the most influential writers of Gothic horror fiction lies solely with "Dracula". Since his death, the novel has become one of the best-selling works of vampire literature, and the character of Count Dracula remains one of the best-known fictional figures of the Victorian era. Following its publication, there have been more than 700 adaptations of "Dracula" across virtually all forms of media. Early life. Stoker was born on 8 November 1847 at 15 Marino Crescent, Clontarf, in Dublin, Ireland. The park adjacent to the house is now known as Bram Stoker Park. His parents were Abraham Stoker (1799–1876), an Anglo-Irishman from Dublin, and Charlotte Mathilda Blake Thornley (1818–1901), of English and Irish descent, who was raised in County Sligo. Stoker was the third of seven children, the eldest of whom was Sir Thornley Stoker, 1st Baronet. Abraham and Charlotte were members of the Church of Ireland Parish of Clontarf and attended the parish church with their children, who were baptised there. Abraham was a senior civil servant. Stoker was bedridden with an unknown illness until he started school at the age of seven, when he made a complete recovery. Of this time, Stoker wrote, "I was naturally thoughtful, and the leisure of long illness gave opportunity for many thoughts which were fruitful according to their kind in later years." He was privately educated at Bective House, a school run by the Reverend William Woods. After his recovery, he grew up without further serious illnesses, even excelling as an athlete at Trinity College, Dublin, which he attended from 1864 to 1870. He graduated with a BA in 1870 and paid to receive his MA in 1875. Though later in life he recalled graduating "with honours in mathematics", this appears to have been a mistake. He was named University Athlete, participating in multiple sports, including playing rugby for Dublin University. He was auditor of the College Historical Society ("the Hist") and president of the University Philosophical Society (he remains the only student in Trinity's history to hold both positions), where his first paper was on "Sensationalism in Fiction and Society". Early career. Stoker became interested in the theatre while a student, through his friend Dr. Maunsell. 
While working for the Irish Civil Service, he became the theatre critic for the "Dublin Evening Mail", which was co-owned by Sheridan Le Fanu, an author of Gothic tales. Theatre critics were held in low esteem at the time, but Stoker attracted notice by the quality of his reviews. In December 1876, he gave a favourable review of Henry Irving's "Hamlet" at the Theatre Royal in Dublin. Irving invited Stoker for dinner at the Shelbourne Hotel where he was staying, and they became friends. Stoker also wrote stories, and "The Crystal Cup" was published by the London Society in 1872, followed by "The Chain of Destiny" in four parts in "The Shamrock". In 1876, while a civil servant in Dublin, Stoker wrote the non-fiction book "The Duties of Clerks of Petty Sessions in Ireland" (published 1879), which remained a standard work. He also had an interest in art and was a founder of the Dublin Sketching Club in 1879. Lyceum Theatre. In 1878, Stoker married Florence Balcombe, daughter of Lieutenant-Colonel James Balcombe of 1 Marino Crescent. She was a celebrated beauty whose former suitor had been Oscar Wilde. Stoker had known Wilde from his student days, having proposed him for membership of the university's Philosophical Society while he was president. Wilde was upset at Florence's decision, but Stoker later resumed the acquaintanceship, and, after Wilde's fall, visited him on the Continent. The Stokers moved to London, where Stoker became acting manager and then business manager of Irving's Lyceum Theatre in the West End, a post he held for 27 years. On 31 December 1879, Bram and Florence's only child was born, a son whom they christened Irving Noel Thornley Stoker. The collaboration with Henry Irving was important for Stoker, and through him he became involved in London's high society, where he met James Abbott McNeill Whistler and Sir Arthur Conan Doyle. Working for Irving, the most famous actor of his time, and managing one of the most successful theatres in London made Stoker a notable if busy man. He was dedicated to Irving, and his memoirs show he idolised him. In London, Stoker also met Hall Caine, who became one of his closest friends; he dedicated "Dracula" to him. In the course of Irving's tours, Stoker travelled the world, although he never visited Eastern Europe, a setting for his most famous novel. Stoker enjoyed the United States, where Irving was popular. With Irving, he was invited twice to the White House and knew William McKinley and Theodore Roosevelt. Stoker set two of his novels in America and used Americans as characters, the most notable being Quincey Morris. He also met one of his literary idols, Walt Whitman, having written to him in 1872 an extraordinary letter that some have interpreted as the expression of a deeply suppressed homosexuality. Bram Stoker in Cruden Bay. Stoker was a regular visitor to Cruden Bay in Scotland between 1892 and 1910. His month-long holidays to the Aberdeenshire coastal village provided much of the time he had available for writing his books. Two novels were set in Cruden Bay: "The Watter's Mou' "(1895) and "The Mystery of the Sea" (1902). He started writing "Dracula" there in 1895 while in residence at the Kilmarnock Arms Hotel. The guest book with his signatures from 1894 and 1895 still survives. The nearby Slains Castle (also known as New Slains Castle) is linked with Bram Stoker and may plausibly have provided the visual palette for the descriptions of Castle Dracula while the novel was being written. 
A distinctive room in Slains Castle, the octagonal hall, matches the description of the octagonal room in Castle Dracula. Writings. Stoker visited the English coastal town of Whitby in 1890, staying at a guesthouse at 6 Royal Crescent in West Cliff and doing his research at the public library at 7 Pier Road (now "Quayside Fish and Chips"); that visit was said to be part of the inspiration for "Dracula". Count Dracula comes ashore at Whitby, and in the shape of a black dog runs up the 199 steps to the graveyard of St Mary's Church in the shadow of the Whitby Abbey ruins. Stoker began writing novels while working as manager for Irving and secretary and director of London's Lyceum Theatre, beginning with "The Snake's Pass" in 1890; "Dracula" followed in 1897. During this period, he was part of the literary staff of "The Daily Telegraph" in London, and he wrote other fiction, including the horror novels "The Lady of the Shroud" (1909) and "The Lair of the White Worm" (1911). He published his "Personal Reminiscences of Henry Irving" in 1906, after Irving's death, which proved successful, and managed productions at the Prince of Wales Theatre. Before writing "Dracula", Stoker met Ármin Vámbéry, a Hungarian-Jewish writer and traveller (born in Szent-György, Kingdom of Hungary, now Svätý Jur, Slovakia). It has been suggested that Dracula emerged from Vámbéry's dark stories of the Carpathian Mountains. However, this claim has been challenged by many, including Elizabeth Miller, a professor whose major field of research and writing since 1990 has been "Dracula", its author, sources, and influences. She has stated, "The only comment about the subject matter of the talk was that Vambery 'spoke loudly against Russian aggression.'" There had been nothing in their conversations about the "tales of the terrible Dracula" that are supposed to have "inspired Stoker to equate his vampire-protagonist with the long-dead tyrant." At any rate, by this time, Stoker's novel was well underway, and he was already using the name Dracula for his vampire. Stoker then spent several years researching Central and East European folklore and mythological stories of vampires. The 1972 book "In Search of Dracula" by Radu Florescu and Raymond McNally claimed that the Count in Stoker's novel was based on Vlad III Dracula. However, according to Elizabeth Miller, Stoker borrowed only the name and "scraps of miscellaneous information" about Romanian history; further, there are no comments about Vlad III in the author's working notes. "Dracula" is an epistolary novel, written as a collection of realistic but completely fictional diary entries, telegrams, letters, ship's logs, and newspaper clippings, all of which added a level of detailed realism to the story, a skill which Stoker had developed as a newspaper writer. At the time of its publication, "Dracula" was considered a "straightforward horror novel" based on imaginary creations of supernatural life. "It gave form to a universal fantasy ... and became a part of popular culture." It is one of the most famous works in English literature, and the titular character of Count Dracula has been adapted more times than any other fictional figure. The book also established Stoker's reputation as one of the most acclaimed writers of Gothic horror fiction. According to the "Encyclopedia of World Biography", Stoker's stories are today included in the categories of horror fiction, romanticized Gothic stories, and melodrama. 
They are classified alongside other works of popular fiction, such as Mary Shelley's "Frankenstein", which also used the myth-making and story-telling method of having multiple narrators telling the same tale from different perspectives. According to historian Jules Zanger, this leads the reader to the assumption that "they can't all be lying". The original 541-page typescript of "Dracula" was believed to have been lost until it was found in a barn in northwestern Pennsylvania in the early 1980s. It consisted of typed sheets with many emendations, and handwritten on the title page was "THE UN-DEAD." The author's name was shown at the bottom as Bram Stoker. Author Robert Latham remarked: "the most famous horror novel ever published, its title changed at the last minute". The typescript was purchased by Microsoft co-founder Paul Allen. Stoker's inspirations for the story, in addition to Whitby, may have included a visit to Slains Castle in Aberdeenshire, a visit to the crypts of St. Michan's Church in Dublin, and the novella "Carmilla" by Sheridan Le Fanu. Stoker's original research notes for the novel are kept by the Rosenbach Museum and Library in Philadelphia. A facsimile edition of the notes was created by Elizabeth Miller and Robert Eighteen-Bisang in 1998. Stoker at the London Library. Stoker was a member of the London Library and conducted much of the research for "Dracula" there. In 2018, the Library discovered some of the books that Stoker used for his research, complete with notes and marginalia. Death. After suffering a number of strokes, Stoker died at No. 26 St George's Square, London on 20 April 1912. Some biographers attribute the cause of death to overwork, others to tertiary syphilis. His death certificate listed the cause of death as "Locomotor ataxia 6 months", presumed to be a reference to syphilis. He was cremated, and his ashes are contained in a display urn at Golders Green Crematorium in north London. The ashes of Irving Noel Stoker, the author's son, were added to his father's urn following his death in 1961. The original plan had been to keep his parents' ashes together, but after Florence Stoker's death, her ashes were scattered at the Gardens of Rest. Beliefs and philosophy. Stoker was raised a Protestant in the Church of Ireland. He was a strong supporter of the Liberal Party and took a keen interest in Irish affairs. As a "philosophical home ruler", he supported Home Rule for Ireland brought about by peaceful means. He remained an ardent monarchist who believed that Ireland should remain within the British Empire. He was an admirer of Prime Minister William Ewart Gladstone, whom he knew personally, and supported his plans for Ireland. Stoker believed in progress and took a keen interest in science and science-based medicine. Some of Stoker's novels represent early examples of science fiction, such as "The Lady of the Shroud" (1909). He had a writer's interest in the occult, notably mesmerism, but despised fraud and believed in the superiority of the scientific method over superstition. Stoker counted among his friends J. W. Brodie-Innis, a member of the Hermetic Order of the Golden Dawn, and hired member Pamela Colman Smith as an artist for the Lyceum Theatre, but no evidence suggests that Stoker ever joined the Order himself. Like Irving, who was an active Freemason, Stoker also became a member of the order, "initiated into Freemasonry in Buckingham and Chandos Lodge No. 
1150 in February 1883, passed in April of that same year, and raised to the degree of Master Mason on 20 June 1883." Stoker, however, was not a particularly active Freemason: he spent only six years as an active member and did not take part in any Masonic activities during his time in London. Posthumous. His novel "Dracula" has become one of the most influential and well-known works of both vampire fiction and English literature. Count Dracula is also ranked among the most depicted fictional characters of the Victorian era, with over 700 adaptations. The short story collection "Dracula's Guest and Other Weird Stories" was published in 1914 by Stoker's widow, Florence Stoker, who was also his literary executrix. The first film adaptation of "Dracula" was F. W. Murnau's "Nosferatu", released in 1922, with Max Schreck starring as Count Orlok. Florence Stoker eventually sued the filmmakers and was represented by the attorneys of the British Incorporated Society of Authors. Her chief legal complaint was that she had neither been asked for permission for the adaptation nor paid any royalty. The case dragged on for some years, with Mrs. Stoker demanding the destruction of the negative and all prints of the film. The suit was finally resolved in the widow's favour in July 1925. A single print of the film survived, however, and it has become well known. The first authorised film version of "Dracula" did not come about until almost a decade later, when Universal Studios released Tod Browning's "Dracula" (1931), starring Bela Lugosi. Dacre Stoker. Canadian writer Dacre Stoker, a great-grandnephew of Bram Stoker, decided to write "a sequel that bore the Stoker name" to "reestablish creative control over" the original novel, with encouragement from screenwriter Ian Holt, because of the Stokers' frustrating history with "Dracula's" copyright. In 2009, "Dracula: The Un-Dead" was released, written by Dacre Stoker and Ian Holt. Both writers "based [their work] on Bram Stoker's own handwritten notes for characters and plot threads excised from the original edition" along with their own research for the sequel. This also marked Dacre Stoker's writing debut. In spring 2012, Dacre Stoker, in collaboration with Elizabeth Miller, presented the "lost" Dublin Journal written by Bram Stoker, which had been kept by his great-grandson Noel Dobbs. Stoker's diary entries shed light on the issues that concerned him before his London years. A remark about a boy who caught flies in a bottle may be a clue to the later development of the Renfield character in "Dracula". Commemorations. On 8 November 2012, Stoker was honoured with a Google Doodle on Google's homepage commemorating the 165th anniversary of his birth. An annual festival takes place in Dublin, the birthplace of Bram Stoker, in honour of his literary achievements. The Dublin City Council Bram Stoker Festival encompasses spectacles, literary events, film, family-friendly activities and outdoor events, and takes place every October Bank Holiday Weekend in Dublin. The festival is supported by the Bram Stoker Estate and is funded by Dublin City Council.
Contract bridge
Contract bridge, or simply bridge, is a trick-taking card game using a standard 52-card deck. In its basic format, it is played by four players in two competing partnerships, with partners sitting opposite each other around a table. Millions of people play bridge worldwide in clubs, tournaments, online and with friends at home, making it one of the world's most popular card games, particularly among seniors. The World Bridge Federation (WBF) is the governing body for international competitive bridge, with numerous other bodies governing it at the regional level. The game consists of a number of deals, each progressing through four phases. The cards are dealt to the players; then the players "call" (or "bid") in an auction seeking to take the contract, specifying how many tricks the partnership receiving the contract (the declaring side) needs to take to receive points for the deal. During the auction, partners use their bids to exchange information about their hands, including overall strength and distribution of the suits; no other means of conveying or implying any information is permitted. The cards are then played, the declaring side trying to fulfill the contract, and the defenders trying to stop the declaring side from achieving its goal. The deal is scored based on the number of tricks taken, the contract, and various other factors which depend to some extent on the variation of the game being played. Rubber bridge is the most popular variation for casual play, but most club and tournament play involves some variant of duplicate bridge, where the cards are not re-dealt on each occasion, but the same deal is played by two or more sets of players (or "tables") to enable comparative scoring. History and etymology. Bridge is a member of the family of trick-taking games and is a derivative of whist, which had become the dominant such game and enjoyed a loyal following for centuries. The idea of a trick-taking, 52-card game has its first documented origins in Italy and France. The French physician and author Rabelais (1493–1553) mentions a game called "La Triomphe" in one of his works, and Juan Luis Vives's "Linguae latinae exercitio" (Exercise in the Latin language) of 1539 features a dialogue on card games in which the characters play 'Triumphus hispanicus' (Spanish Triumph). Bridge departed from whist with the creation of "Biritch" in the 19th century and evolved through the late 19th and early 20th centuries to form the present game. The first known rule book for bridge, dated 1886, is "Biritch, or Russian Whist", written by John Collinson, an English financier working in Ottoman Constantinople. It and his subsequent letter to "The Saturday Review", dated 28 May 1906, document the origin of "Biritch" as being the Russian community in Constantinople. The word "biritch" is thought to be a transliteration of a Russian word (бирчий, бирич) denoting the occupation of a diplomatic clerk or an announcer. Another theory is that British soldiers invented the game bridge while serving in the Crimean War, and named it after the Galata Bridge, which they crossed on their way to a coffeehouse to play cards. Biritch had many significant bridge-like developments: dealer chose the trump suit, or nominated his partner to do so; there was a call of "no trumps" ("biritch"); dealer's partner's hand became dummy; points were scored above and below the line; game was 3NT, 4 hearts and 5 diamonds (although 8 club odd tricks and 15 spade odd tricks would have been needed for game in those suits); the score could be doubled and redoubled; and there were slam bonuses. It also has some features in common with solo whist. 
This game, and variants of it known as "bridge" and "bridge whist", became popular in the United States and the United Kingdom in the 1890s despite the long-established dominance of whist. Its breakthrough was its acceptance in 1894 by Lord Brougham at London's Portland Club. In 1904, auction bridge was developed, in which the players bid in a competitive auction to decide the contract and declarer. The object became to make at least as many tricks as were contracted for, and penalties were introduced for failing to do so. In auction bridge, bidding beyond winning the auction is pointless; for example, if a partnership takes all 13 tricks, there is no difference in score between a final contract at the one level and one at the seven level, as the bonus for rubber, small slam or grand slam depends on the number of tricks taken rather than the number of tricks contracted for. The modern game of contract bridge was the result of innovations to the scoring of auction bridge by Harold Stirling Vanderbilt and others. The most significant change was that only the tricks contracted for were scored below the line toward game or a slam bonus, a change that resulted in bidding becoming much more challenging and interesting. Also new was the concept of "vulnerability", which made sacrifices to protect the lead in a rubber more expensive. The various scores were adjusted to produce a more balanced and interesting game. Vanderbilt set out his rules in 1925, and within a few years contract bridge had so supplanted other forms of the game that "bridge" became synonymous with "contract bridge". The form of bridge mostly played in clubs, tournaments and online is duplicate bridge. The number of people who play contract bridge has declined since its peak in the 1940s, when a survey found it was played in 44% of US households. The game is still widely played, especially amongst retirees, and in 2005 the ACBL estimated there were 25 million players in the US. Gameplay. Overview. Bridge is a four-player partnership trick-taking game with thirteen tricks per deal. The dominant variations of the game are rubber bridge, which is more common in social play; and duplicate bridge, which enables comparative scoring in tournament play. Each player is dealt thirteen cards from a standard 52-card deck. A trick starts when a player leads (i.e., plays the first card). The leader to the first trick is determined by the auction; the leader to each subsequent trick is the player who won the preceding trick. Each player, in clockwise order, plays one card on the trick. Players must play a card of the same suit as the original card led, unless they have none (said to be "void"), in which case they may play any card. The rank of the cards played determines which player wins the trick. Within each suit, the ace is ranked highest, followed by the king, queen and jack, and then the ten through to the two. In a deal in which the auction has determined that there is no trump suit, the trick is won by the highest-ranked card of the suit led; cards of suits other than that led cannot win. In a deal with a trump suit, cards of that suit are superior in rank to the cards of any other suit. If one or more players plays a trump to a trick when void in the suit led, the highest-ranked trump wins. For example, if the trump suit is spades and a player is void in the suit led and plays a spade card, they win the trick if no other player plays a higher spade. If a card of the trump suit is led, the usual rule for trick-taking applies and the highest-ranked card of that suit wins. 
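To make the trick-winning rule just described concrete, here is a small illustrative sketch in Python; the card encoding and the function name are invented for this example and are not part of any standard bridge software.

```python
# Illustrative sketch of the trick-winning rule described above.
# A card is a (rank, suit) pair, with rank 2..14 (14 = ace) and suit
# one of "S", "H", "D", "C". `trump` is a suit letter, or None for a
# no-trump deal. Cards are listed in the order they were played, so
# cards[0] is the lead; the function returns the index of the winner.

def trick_winner(cards, trump=None):
    """Return the index (0-3) of the card that wins the trick."""
    led_suit = cards[0][1]
    trumps_played = [(rank, i) for i, (rank, suit) in enumerate(cards)
                     if suit == trump]
    if trump is not None and trumps_played:
        # Highest trump wins, regardless of the suit led.
        return max(trumps_played)[1]
    # Otherwise the highest card of the suit led wins; cards of
    # other suits cannot win.
    followed = [(rank, i) for i, (rank, suit) in enumerate(cards)
                if suit == led_suit]
    return max(followed)[1]

# Example: spades are trump, the diamond ace is led, the third player
# is void in diamonds and ruffs with the two of spades -- and wins.
print(trick_winner([(14, "D"), (9, "D"), (2, "S"), (13, "D")], trump="S"))  # 2
```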
Unlike in its predecessor, whist, the goal of bridge is not simply to take the most tricks in a deal. Instead, the goal is to estimate accurately how many tricks one's partnership can take, and then to meet or exceed that estimate. To illustrate this, the simpler partnership trick-taking game of spades has a similar mechanism: the usual trick-taking rules apply with the trump suit being spades, but at the beginning of the game, players "bid" or estimate how many tricks they can win, and the numbers of tricks bid by the two players in a partnership are added. If a partnership takes at least that many tricks, they receive points for the round; otherwise, they lose penalty points. Bridge extends the concept of bidding into an auction, in which partnerships compete to win a contract, specifying both how many tricks they will need to take in order to receive points and the trump suit (or "no trump", meaning that there will be no trump suit). Players take turns to call in a clockwise order: each player in turn either passes; doubles, which increases the penalties for not making the contract specified by the opposing partnership's last bid, but also increases the reward for making it; redoubles; or states a contract that their partnership will adopt, which must be higher than the previous highest bid (if any). Eventually, the player who bid the highest contract, which is determined by the contract's level as well as the trump suit or no trump, wins the contract for their partnership. In the example auction below, the east–west pair secures a contract at the six level; the auction concludes when there have been three successive passes. Note that six tricks are added to stated contract values, so a six-level contract is a contract of twelve tricks. In practice, estimating a good contract without information about one's partner's hand is difficult, so there exist many bidding systems assigning meanings to bids, with common ones including Standard American, Acol, and 2/1 game forcing. Contrast this with spades, where players only have to bid their own hand. After the contract is decided and the first lead is made, the declarer's partner (dummy) lays their cards face up on the table, and the declarer plays the dummy's cards as well as their own. The opposing partnership is called the defenders, and their goal is to stop the declarer from fulfilling the contract. Once all the cards have been played, the deal is scored: if the declaring side makes their contract, they receive points based on the level of the contract, with some trump suits being worth more points than others and "no trump" being worth even more, as well as bonus points for any overtricks. If the declarer fails to fulfill the contract, the defenders receive points depending on the declaring side's undertricks (the number of tricks short of the contract) and whether the contract was "doubled" or "redoubled". Setup and dealing. The four players sit in two partnerships with players sitting opposite their partners. A cardinal direction is assigned to each seat, so that one partnership sits in North and South, while the other sits in West and East. The cards may be freshly dealt or, in duplicate bridge games, pre-dealt. All that is needed in basic games are the cards and a method of keeping score, but there is often other equipment on the table, such as a board containing the cards to be played (in duplicate bridge), bidding boxes, or screens. In rubber bridge each player draws a card at the start of the game; the player who draws the highest card deals first. 
The player who draws the second highest card becomes the dealer's partner and takes the chair on the opposite side of the table. They play against the other two. The deck is shuffled and cut, usually by the player to the left of the dealer, before dealing. Players take turns to deal, in clockwise order. The dealer deals the cards clockwise, one card at a time. Normally, rubber bridge is played with two packs of cards, and whilst one pack is being dealt, the dealer's partner shuffles the other pack. After shuffling, the pack is placed on the right, ready for the next dealer. Before dealing, the next dealer passes the cards to the previous dealer, who cuts them. In duplicate bridge the cards are pre-dealt, either by hand or by a computerized dealing machine, in order to allow for competitive scoring. Once dealt, the cards are placed in a device called a "board", having slots designated for each player's cardinal direction seating position. After a deal has been played, players return their cards to the appropriate slot in the board, ready to be played by the next table. Auction. The dealer opens the auction and can make the first call, and the auction proceeds clockwise. When it is their turn to call, a player may pass (but can enter into the bidding later) or bid a contract, specifying the level of their contract and either the trump suit or "no trump" (the denomination), provided that it is higher than the last bid by any player, including their partner. All bids promise to take a number of tricks in excess of six, so a bid must be between one (seven tricks) and seven (thirteen tricks). A bid is higher than another bid if either the level is greater (e.g., 2 clubs over 1NT) or the denomination is higher at the same level, with the denominations ranking in ascending (alphabetical) order: clubs, diamonds, hearts, spades, and NT (no trump) (e.g., 3 spades over 3 hearts). Calls may be made orally or with a bidding box. If the last bid was by the opposing partnership, one may also double the opponents' bid, increasing the penalties for undertricks, but also increasing the reward for making the contract. Doubling does not carry to future bids by the opponents unless future bids are doubled again. A player of the partnership that was doubled may also redouble, which increases the penalties and rewards further. Players may not see their partner's hand during the auction, only their own. There exist many bidding conventions that assign agreed meanings to various calls to assist players in reaching an optimal contract (or obstruct the opponents). The auction ends when, after a player bids, doubles, or redoubles, every other player has passed, in which case the action proceeds to the play; or every player has passed and no bid has been made, in which case the round is considered to be "passed out" and not played. Play. The player from the declaring side who first bid the denomination named in the final contract becomes declarer. The player to the left of the declarer leads to the first trick. Dummy then lays his or her cards face-up on the table, organized in columns by suit. Play proceeds clockwise, with each player required to follow suit if possible. Tricks are won by the highest trump, or if there were none played, the highest card of the led suit. The player who won the previous trick leads to the next trick. The declarer has control of the dummy's cards and tells his partner which card to play at dummy's turn. There also exist conventions that communicate further information between defenders about their hands during the play. 
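As a concrete sketch of the bid-ranking rule described in the auction paragraph above, the following Python snippet compares two bids; the encoding and function names are invented for illustration.

```python
# Illustrative sketch of bid ranking: a bid is (level, denomination),
# where level is 1..7 and denominations rank clubs < diamonds < hearts
# < spades < no trump, as described above.

DENOM_ORDER = {"C": 0, "D": 1, "H": 2, "S": 3, "NT": 4}

def bid_key(bid):
    level, denom = bid
    return (level, DENOM_ORDER[denom])

def is_higher(new_bid, last_bid):
    """A new bid is legal only if it outranks the last bid made."""
    return bid_key(new_bid) > bid_key(last_bid)

print(is_higher((2, "C"), (1, "NT")))  # True: a higher level always outranks
print(is_higher((3, "H"), (3, "S")))   # False: spades outrank hearts at a level
```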
At any time, a player may claim, stating that their side will win a specific number of the remaining tricks. The claiming player lays his cards down on the table and explains the order in which he intends to play the remaining cards. The opponents can either accept the claim, in which case the round is scored accordingly, or dispute the claim. If the claim is disputed, play continues with the claiming player's cards face up in rubber games, while in duplicate games play ceases and the tournament director is called to adjudicate the hand. Scoring. At the end of the hand, points are awarded to the declaring side if they make the contract, or else to the defenders. Partnerships can be vulnerable, increasing the rewards for making the contract, but also increasing the penalties for undertricks. In rubber bridge, if a side has won 100 contract points, they have won a game and are vulnerable for the remaining rounds, but in duplicate bridge, vulnerability is predetermined based on the number of each board. If the declaring side makes their contract, they receive points for odd tricks, the tricks bid and made in excess of six. In both rubber and duplicate bridge, the declaring side is awarded 20 points per odd trick for a contract in clubs or diamonds, and 30 points per odd trick for a contract in hearts or spades. For a contract in no trump, the declaring side is awarded 40 points for the first odd trick and 30 points for each of the remaining odd tricks. Contract points are doubled or quadrupled if the contract is respectively doubled or redoubled. In rubber bridge, a partnership wins one game once it has accumulated 100 contract points; excess contract points do not carry over to the next game. A partnership that wins two games wins the rubber, receiving a bonus of 500 points if the opponents have won a game, and 700 points if they have not. Overtricks score the same number of points per odd trick, although their doubled and redoubled values differ. Bonuses vary between the two bridge variations both in score and in type (for example, rubber bridge awards a bonus for holding a certain combination of high cards), although some are common between the two. A larger bonus is awarded if the declaring side makes a small slam or grand slam, a contract of 12 or 13 tricks respectively. If the declaring side is not vulnerable, a small slam earns 500 points and a grand slam 1,000 points. If the declaring side is vulnerable, a small slam is worth 750 points and a grand slam 1,500. In rubber bridge, the rubber finishes when a partnership has won two games, but the partnership receiving the most "overall" points wins the rubber. Duplicate bridge is scored comparatively, meaning that the score for the hand is compared to other tables playing the same cards and match points are scored according to the comparative results: usually either "matchpoint scoring", where each partnership receives 2 points (or 1 point) for each pair that they beat, and 1 point (or ½ point) for each tie; or IMP (international match point) scoring, where the number of IMPs varies (but less than proportionately) with the points difference between the teams. Undertricks are scored similarly in both variations: undoubled undertricks cost the declaring side 50 points each when not vulnerable and 100 points each when vulnerable; doubled undertricks cost 100 for the first and 200 for each of the second and third when not vulnerable (300 each thereafter), or 200 for the first and 300 for each subsequent undertrick when vulnerable; redoubled undertricks cost twice the doubled values. Rules. The rules of the game are referred to as the "laws", as promulgated by various bridge organizations. Duplicate. The official rules of duplicate bridge are promulgated by the WBF as "The Laws of Duplicate Bridge 2017". 
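The contract-point and bonus arithmetic described in the scoring section above can be summarized in a short sketch; this simplified function (covering undoubled, made contracts only, with duplicate-style game and part-score bonuses) is illustrative, not a complete implementation of either scoring variant.

```python
# Simplified sketch of duplicate-style scoring for an undoubled contract
# that makes: contract points per odd trick as described above, plus a
# game or part-score bonus and any slam bonus. Doubling, redoubling and
# undertricks are omitted for brevity.

TRICK_VALUE = {"C": 20, "D": 20, "H": 30, "S": 30}

def declarer_score(level, denom, tricks_taken, vulnerable=False):
    odd_tricks = tricks_taken - 6
    assert odd_tricks >= level, "contract assumed to make in this sketch"
    if denom == "NT":
        contract_points = 40 + 30 * (level - 1)
        overtrick_points = 30 * (odd_tricks - level)
    else:
        contract_points = TRICK_VALUE[denom] * level
        overtrick_points = TRICK_VALUE[denom] * (odd_tricks - level)
    score = contract_points + overtrick_points
    # Game bonus for 100+ contract points, otherwise a part-score bonus.
    score += (500 if vulnerable else 300) if contract_points >= 100 else 50
    if level == 6:    # small slam
        score += 750 if vulnerable else 500
    elif level == 7:  # grand slam
        score += 1500 if vulnerable else 1000
    return score

print(declarer_score(4, "H", 10))  # 4 hearts making 10 tricks -> 420
```

The example reproduces the +420 recorded for the deal worked through later in the article (4 hearts bid and made, not vulnerable).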
The Laws Committee of the WBF, composed of world experts, updates the Laws every 10 years; it also issues a Laws Commentary advising on interpretations it has rendered. In addition to the basic rules of play, there are many additional rules covering playing conditions and the rectification of irregularities, which are primarily for use by tournament directors who act as referees and have overall control of procedures during competitions. But various details of procedure are left to the discretion of the zonal bridge organisation for tournaments under their aegis and some (for example, the choice of "movement") to the sponsoring organisation (for example, the club). Some zonal organisations of the WBF also publish editions of the Laws. For example, the American Contract Bridge League (ACBL) publishes the "Laws of Duplicate Bridge" and additional documentation for club and tournament directors. Rubber. There are no universally accepted rules for rubber bridge, but some zonal organisations have published their own. An example for those wishing to abide by a published standard is "The Laws of Rubber Bridge" as published by the American Contract Bridge League. The majority of rules mirror those of duplicate bridge in the bidding and play and differ primarily in procedures for dealing and scoring. Online. In 2001, the WBF promulgated a set of laws for online play. Tournaments. Bridge is a game of skill played with randomly dealt cards, which makes it also a game of chance, or more exactly, a tactical game with inbuilt randomness, imperfect knowledge and restricted communication. The chance element is in the deal of the cards; in duplicate bridge some of the chance element is eliminated by comparing results of multiple pairs in identical situations. This is achievable when there are eight or more players, sitting at two or more tables, and the deals from each table are preserved and passed to the next table, thereby "duplicating" them for the other table(s) of players. At the end of a session, the scores for each deal are compared, and the most points are awarded to the players doing the best with each particular deal. This measures relative skill (but still with an element of luck) because each pair or team is being judged only on the ability to bid with, and play, the same cards as other players. Duplicate bridge is played in clubs and tournaments, which can gather as many as several hundred players. Duplicate bridge is a mind sport, and its popularity gradually became comparable to that of chess, with which it is often compared for its complexity and the mental skills required for high-level competition. Bridge and chess are the only "mind sports" recognized by the International Olympic Committee, although they were not found eligible for the main Olympic program. In October 2017 the British High Court ruled against the English Bridge Union, finding that Bridge is not a sport under a definition of sport as involving physical activity, but did not rule on the "broad, somewhat philosophical question" as to whether or not bridge is a sport. The basic premise of duplicate bridge had previously been used for whist matches as early as 1857. Initially, bridge was not thought to be suitable for duplicate competition; it was not until the 1920s that (auction) bridge tournaments became popular. 
In 1925, when contract bridge first evolved, bridge tournaments were becoming popular, but the rules were somewhat in flux, and several different organizing bodies were involved in tournament sponsorship: the American Bridge League (formerly the "American Auction Bridge League", which changed its name in 1929), the American Whist League, and the United States Bridge Association. In 1937, the first officially recognized world championship was held in Budapest. In 1958, the World Bridge Federation (WBF) was founded to promote bridge worldwide, coordinate periodic revision of the Laws (every ten years, next in 2027) and conduct world championships. Bidding boxes and screens. In tournaments, "bidding boxes" are frequently used, as noted above. These avoid the possibility of players at other tables hearing any spoken bids. The bidding cards are laid out in sequence as the auction progresses. Although it is not a formal rule, many clubs adopt a protocol that the bidding cards stay revealed until the first playing card is tabled, after which point the bidding cards are put away. Bidding pads are an alternative to bidding boxes. A bidding pad is a block of 100 mm square tear-off sheets. Players write their bids on the top sheet. When the first trick is complete, the sheet is torn off and discarded. In top national and international events, "bidding screens" are used. These are placed diagonally across the table, preventing partners from seeing each other during the game; often the screen is removed after the auction is complete. Strategy. Bidding. Much of the complexity in bridge arises from the difficulty of arriving at a good final contract in the auction (or deciding to let the opponents declare the contract). This is a difficult problem: the two players in a partnership must try to communicate enough information about their hands to arrive at a makeable contract, but the information they can exchange is restricted – information may be passed only by the calls made and later by the cards played, not by other means; in addition, the agreed-upon meaning of each call and play must be available to the opponents. Since a partnership that has freedom to bid gradually at leisure can exchange more information, and since a partnership that can interfere with the opponents' bidding (as by raising the bidding level rapidly) can cause difficulties for their opponents, bidding systems are both informational and strategic. It is this mixture of information exchange and evaluation, deduction, and tactics that is at the heart of bidding in bridge. A number of basic rules of thumb in bridge bidding and play are summarized as bridge maxims. Systems and conventions. A "bidding system" is a set of partnership agreements on the meanings of bids. A partnership's bidding system is usually made up of a core system, modified and complemented by specific conventions (optional customizations incorporated into the main system for handling specific bidding situations) which are pre-chosen between the partners prior to play. The line between a well-known convention and a part of a system is not always clear-cut: some bidding systems include specified conventions by default. Bidding systems can be divided into mainly natural systems such as Acol and Standard American, and mainly artificial systems such as the Precision Club and Polish Club. Calls are usually considered to be either "natural" or "conventional" (artificial). 
A natural call carries a meaning that reflects the call: a natural bid intuitively shows hand or suit strength based on the level or suit of the bid, and a natural double expresses that the player believes that the opposing partnership will not make their contract. By contrast, a conventional (artificial) call offers and/or asks for information by means of pre-agreed coded interpretations, in which some calls convey very specific information or requests that are not part of the natural meaning of the call. Thus in response to 4NT, a "natural" bid of 5 diamonds would state a preference towards a diamond suit or a desire to play in five diamonds, whereas if the partners have agreed to use the common Blackwood convention, a bid of 5 diamonds in the same situation would say nothing about the diamond suit, but would tell the partner that the hand in question contains exactly one ace. Conventions are valuable in bridge because of the need to pass information beyond a simple like or dislike of a particular suit, and because the limited bidding space can be used more efficiently by adopting a conventional (artificial) meaning for a given call where a natural meaning has less utility, because the information it conveys is not valuable or because the desire to convey that information arises only rarely. The conventional meaning conveys more useful (or more frequently useful) information. There are a very large number of conventions from which players can choose; many books have been written detailing bidding conventions. Well-known conventions include Stayman (to ask the opening 1NT bidder to show any four-card major suit), Jacoby transfers (a request by (usually) the weak hand for the partner to bid a particular suit first, and therefore to become the declarer), and the Blackwood convention (to ask for information on the number of aces and kings held, used in slam bidding situations). The term "preempt" refers to a high-level tactical bid by a weak hand, relying upon a very long suit rather than high cards for tricks. Preemptive bids serve a double purpose: they allow players to indicate they are bidding on the basis of a long suit in an otherwise weak hand, which is important information to share, and they also consume substantial bidding space, which prevents a possibly strong opposing pair from exchanging information on their cards. Several systems include the use of opening bids or other early bids with weak hands including long (usually six to eight card) suits at the 2, 3 or even 4 or 5 levels as preempts. Basic natural systems. As a rule, a natural suit bid indicates a holding of at least four (or more, depending on the situation and the system) cards in that suit as an opening bid, or a lesser number when supporting partner; a natural NT bid indicates a balanced hand. Most systems use a count of high card points as the basic evaluation of the strength of a hand, refining this by reference to shape and distribution if appropriate. In the most commonly used point count system, aces are counted as 4 points, kings as 3, queens as 2, and jacks as 1 point; therefore, the deck contains 40 points. In addition, the "distribution" of the cards in a hand into suits may also contribute to the strength of a hand and be counted as distribution points. A better than average hand, containing 12 or 13 points, is usually considered sufficient to "open" the bidding, i.e., to make the first bid in the auction. 
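The 4-3-2-1 high card point count just described is easy to state in code; this small helper and its card encoding are illustrative only.

```python
# Milton Work point count (4-3-2-1), as described above: ace = 4,
# king = 3, queen = 2, jack = 1; a full deck therefore totals 40 HCP.

HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def high_card_points(hand):
    """hand: iterable of rank characters; non-honours count zero."""
    return sum(HCP.get(rank, 0) for rank in hand)

# A 13-card hand written as rank characters only (suits don't affect HCP):
hand = "AK752" "Q84" "KJ3" "96"
print(high_card_points(hand))  # 4+3+2+3+1 = 13 -> enough to open
```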
A combination of two such hands (i.e., 25 or 26 points shared between partners) is often sufficient for a partnership to bid, and generally to make, game in a major suit or no trump (more are usually needed for a minor suit game, as the level is higher). In natural systems, a 1NT opening bid usually reflects a hand that has a relatively balanced shape (usually between two and four (or less often five) cards in each suit) and a sharply limited number of high card points, usually somewhere between 12 and 18 – the most common ranges use a span of exactly three points (for example, 12–14, 15–17 or 16–18), but some systems use a four-point range, usually 15–18. Opening bids of three or higher are preemptive bids, i.e., bids made with weak hands that especially favor a particular suit, opened at a high level in order to define the hand's value quickly and to frustrate the opposition. For example, a weak hand containing a long spade suit, typically seven cards headed by good honours, would be a candidate for an opening bid of 3 spades, designed to make it difficult for the opposing team to bid and find their optimum contract even if they have the bulk of the points. Such a hand is nearly valueless unless spades are trumps, but it contains good enough spades that the penalty for being set should not be higher than the value of a game for the opponents. The high card weakness makes it likely that the opponents have enough strength to make game themselves. Openings at the 2 level are either unusually strong (2NT, natural, and 2 clubs, artificial) or preemptive, depending on the system. Unusually strong bids communicate an especially high number of points (normally 20 or more) or a high trick-taking potential (normally 8 or more). Playing 2 clubs as the strongest opening (by HCP and by DP+HCP) has also become more common, perhaps especially at websites that offer duplicate bridge. Here the 2 diamonds opening is used either for hands with a good 6-card suit or longer (at most one losing card in it) and a total of 18 HCP up to 23 total points, or for an "NT" type, like 2NT but with 22–23 HCP, whilst the 2 clubs opening bid takes care of all hands with 24 points or more (HCP or with distribution points included), with the only exception of the "Gambling 3NT". Opening bids at the one level are made with hands containing 12–13 points or more which are not suitable for one of the preceding bids. Using Standard American with 5-card majors, opening hearts or spades usually promises a 5-card suit. Partnerships who agree to play 5-card majors open a minor suit with 4-card majors and then bid their major suit at the next opportunity. This means that an opening bid of 1 club or 1 diamond will sometimes be made with only 3 cards in that suit. Doubles are sometimes given conventional meanings in otherwise mostly natural systems. A natural, or "penalty", double is one used to try to gain extra points when the defenders are confident of setting (defeating) the contract. The most common example of a conventional double is the takeout double of a low-level suit bid, implying support for the unbid suits or the unbid major suits and asking partner to choose one of them. Basic variations. Bidding systems depart from these basic ideas in varying degrees. Standard American, for instance, is a collection of conventions designed to bolster the accuracy and power of these basic ideas, while Precision Club is a system that uses the 1 club opening bid for all or almost all strong hands (but sets the threshold for "strong" rather lower than most other systems – usually 16 high card points) and may include other artificial calls to handle other situations (but it may contain natural calls as well). 
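As a sketch of the balanced-shape test for the natural 1NT opening described earlier in this section, here is an illustrative helper; the 15–17 range is one common agreement among the several mentioned above, and all names in the snippet are invented.

```python
# Illustrative check for a natural 1NT opening: a balanced shape (no
# singleton or void, at most one doubleton) and HCP within the
# partnership's agreed range (15-17 is one common choice).

HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def opens_1nt(suit_lengths, hand_ranks, lo=15, hi=17):
    """suit_lengths: e.g. (4, 3, 3, 3); hand_ranks: string of 13 ranks."""
    balanced = (min(suit_lengths) >= 2
                and list(suit_lengths).count(2) <= 1)
    hcp = sum(HCP.get(r, 0) for r in hand_ranks)
    return balanced and lo <= hcp <= hi

print(opens_1nt((4, 3, 3, 3), "AQ62KJ4K83Q75"))  # True: balanced, 15 HCP
```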
Many experts today use a system called 2/1 game forcing (enunciated as two over one game forcing), which amongst other features adds some complexity to the treatment of the one notrump response as used in Standard American. In the UK, Acol is the most common system; its main features are a weak one notrump opening with 12–14 high card points and several variations for 2-level openings. There are also a variety of advanced techniques used for hand evaluation. The most basic is the Milton Work point count (the 4-3-2-1 system detailed above), but this is sometimes modified in various ways, or either augmented or replaced by other approaches such as losing trick count, honor point count, law of total tricks, or Zar Points. Common conventions and variations within natural systems include the Stayman, Jacoby transfer and Blackwood conventions mentioned above, among many others. Within play, it is also commonly agreed between partners what systems of opening leads, signals and discards will be played. Advanced techniques. Every call (including "pass", also sometimes called "no bid") serves two purposes. It confirms or passes some information to a partner, and, by implication, denies any other kind of hand which would have tended to support an alternative call. For example, a bid of 2NT immediately after partner's 1NT not only shows a balanced hand of a certain point range, but also almost always denies possession of a five-card major suit (otherwise the player would have bid it) or even a four-card major suit (in that case, the player should use the Stayman convention). Likewise, in some partnerships the bid of 2 hearts in the sequence 1NT–2 clubs–2 diamonds–2 hearts between partners (opponents passing throughout) explicitly shows five hearts but also confirms four cards in spades: the bidder must hold at least five hearts to make it worth looking for a heart fit after the 2 diamond response denied a four-card major, and with at least five hearts, a Stayman bid must have been justified by having exactly four spades, the other major (since Stayman, as used by this partnership, is not useful with anything except a four-card major suit). Thus an astute partner can read much more than the surface meaning into the bidding. Alternatively, many partnerships play this same bidding sequence as "Crawling Stayman", by which the responder shows a weak hand (less than eight high card points) with shortness in diamonds but at least four hearts and four spades; the opening bidder may correct to spades if that appears to be the better contract. The situations detailed here are extremely simple examples; many instances of advanced bidding involve specific agreements related to very specific situations and subtle inferences regarding entire sequences of calls. Play techniques. Terence Reese, a prolific author of bridge books, points out that there are only four ways of taking a trick by force, two of which are very easy. Nearly all trick-taking techniques in bridge can be reduced to one of these four methods. The optimum play of the cards can require much thought and experience and is the subject of whole books on bridge. Example. The cards are dealt as shown in the bridge hand diagram; North is the dealer and starts the auction, which proceeds as shown in the bidding table. As neither North nor East has sufficient strength to "open" the bidding, they each pass, denying such strength. South, next in turn, opens with the bid of 1 heart, which denotes a reasonable heart suit (at least 4 or 5 cards long, depending on the bidding system) and at least 12 high card points. On this hand, South has 14 high card points. 
West "overcalls" with 1, since he has a long spade suit of reasonable quality and 10 high card points (an overcall can be made on a hand that is not quite strong enough for an opening bid). North "supports" partner's suit with 2, showing heart support and about points. East supports spades with 2. South inserts a "game try" of 3, "inviting" the partner to bid the "game" of 4 with good club support and overall values. North complies, as North is at the higher end of the range for his 2 bid, and has a fourth trump (the 2 bid promised only three), and the "doubleton" queen of clubs to fit with partner's strength there. (North could instead have bid 3, indicating not enough strength for game, asking South to pass and so play 3.) In the auction, north–south are trying to investigate whether their cards are sufficient to make a game (nine tricks at notrump, ten tricks in hearts or spades, 11 tricks in clubs or diamonds), which yields bonus points if bid and made. East-West are "competing" in spades, hoping to play a contract in spades at a low level. 4 is the final contract, 10 tricks being required for to make with hearts as trump. South is the "declarer", having been first to bid hearts, and the player to South's left, West, has to choose the first card in the play, known as the "opening lead". West chooses the spade king because spades is the suit the partnership has shown strength in, and because they have agreed that when they hold two "touching honors" (or "adjacent honors") they will play the higher one first. West plays the card face down, to give their partner and the declarer (but not dummy) a chance to ask any last questions about the bidding or to object if they believe West is not the correct hand to lead. After that, North's cards are laid on the table and North becomes "dummy", as both the North and South hands will be controlled by the declarer. West turns the lead card face up, and the declarer studies the two hands to make a plan for the play. On this hand, the trump ace, a spade, and a diamond trick must be lost, so declarer must not lose a trick in clubs. If the K is held by West, South will find it very hard to prevent it from making a trick (unless West leads a club). There is an almost equal chance that it is held by East, in which case it can be trapped against the ace, and will be beaten, using a tactic known as a "finesse". After considering the cards, the declarer directs dummy (North) to play a small spade. East plays "low" (small card) and South takes the A, gaining the "lead". (South may also elect to "duck", but for the purpose of this example, let us assume South wins the A at trick 1). South proceeds by "drawing trump", leading the K. West decides there is no benefit to holding back, and so wins the trick with the ace, and then cashes the Q. For fear of conceding a "ruff and discard", West plays the 2 instead of another spade. Declarer plays low from the table, and East scores the Q. Not having anything better to do, East returns the remaining trump, taken in South's hand. The trumps now accounted for, South can now execute the finesse, perhaps trapping the king as planned. South "enters" the dummy (i.e. wins a trick in the dummy's hand) by leading a low diamond, using dummy's A to win the trick, and leads the Q from dummy to the next trick. East "covers" the queen with the king, and South takes the trick with the ace, and proceeds by "cashing" the remaining "master" J. 
(If East does not play the king, then South will play a low club from South's hand and the queen will win anyway, this being the essence of the finesse.) The game is now safe: South "ruffs" a small club with one of dummy's trumps, then ruffs a diamond in hand for an "entry" back, and ruffs the last club in dummy (sometimes described as a "crossruff"). Finally, South "claims" the remaining tricks by showing his or her hand, as it now contains only high trumps and there is no need to play the hand out to prove they are all winners. (The trick-by-trick notation used above can also be expressed in tabular form, but a textual explanation is usually preferred in practice, for the reader's convenience. Plays of small cards or "discards" are often omitted from such a description, unless they were important for the outcome.) North–South score the required 10 tricks, and their opponents take the remaining three. The contract is fulfilled, and North enters the pair numbers, the contract, and the score of +420 for the winning side (North is in charge of bookkeeping in duplicate tournaments) on the traveling sheet. North asks East to check the score entered on the traveler. All players return their own cards to the board, and the next deal is played. On the prior hand, it is quite possible that the K is held by West instead – for example, if the K and the A were swapped between the defending hands. Then the 4 contract would fail by one trick (unless West had led a club early in the play). The failure of the contract would not mean that 4 was a bad contract on this hand; the contract depends on the club finesse working or on a defensive error. The bonus points awarded for making a game contract far outweigh the penalty for going one off, so it is the best strategy in the long run to bid game contracts such as this one. Similarly, there is a minuscule chance that the K is in the West hand with no other clubs; in that case, declarer can succeed by simply cashing the A, felling the K and setting up the Q as a winner. The chance of this is far lower than the chance that East started with the K. Therefore, the superior percentage play is to take the club finesse, as described above. Computers. After many years of little progress, computer bridge advanced greatly at the end of the 20th century. In 1996, the ACBL initiated the official World Computer-Bridge Championship, to be held annually along with a major bridge event. The first championship took place in 1997 at the North American Bridge Championships in Albuquerque, New Mexico. Stand-alone software. Strong bridge playing programs such as Jack Bridge (World Champion in 2001, 2002, 2003, 2004, 2006, 2009, 2010, 2012, 2013 and 2015) and Wbridge5 (World Champion in 2005, 2007, 2008, 2016, 2017 and 2018) probably rank among the top few thousand human pairs worldwide. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between Jack Bridge and seven top Dutch pairs. A total of 196 boards were played. Jack Bridge lost, but by a small margin (359 versus 385 IMPs). Online play. There are several free and subscription-based services available for playing bridge on the internet. For example: Some national contract bridge organizations now offer online bridge play to their members, including the English Bridge Union, the Dutch Bridge Federation and the Australian Bridge Federation. MSN and Yahoo! Games have several online rubber bridge rooms. 
In 2001, the WBF issued a special edition of the lawbook adapted for the internet and other electronic forms of the game.
3996
7903804
https://en.wikipedia.org/wiki?curid=3996
Boat
A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size or capacity, its shape, or its ability to carry boats. Small boats are typically used on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats (such as whaleboats) were intended for offshore use. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style, partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to move cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions. Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and inboard/outboard motors (including gasoline, diesel, and electric). History. Differentiation from other prehistoric watercraft. The earliest watercraft are considered to have been rafts. These would have been used for voyages such as the settlement of Australia sometime between 50,000 and 60,000 years ago. A boat differs from a raft in that it obtains its buoyancy by having most of its structure exclude water with a waterproof layer, e.g. the planks of a wooden hull or the hide covering (or tarred canvas) of a currach. In contrast, a raft is buoyant because it joins components that are themselves buoyant, for example, logs, bamboo poles, bundles of reeds, or floats (such as inflated hides, sealed pottery containers or, in a modern context, empty oil drums). The key difference between a raft and a boat is that the former is a "flow through" structure, with waves able to pass up through it. Consequently, except for short river crossings, a raft is not a practical means of transport in colder regions of the world, as the users would be at risk of hypothermia. Today that climatic limitation restricts rafts to latitudes between 40° north and 40° south; in the past, similar boundaries shifted as the world's climate varied. Types. The earliest boats may have been either dugouts or hide boats. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a "Pinus sylvestris" that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Hide boats, made by covering a framework with animal skins, could be as old as logboats, but such a structure is much less likely to survive in an archaeological context. Plank-built boats are considered, in most cases, to have developed from the logboat. There are examples of logboats that have been expanded: by deforming the hull under the influence of heat, by raising up the sides with added planks, or by splitting down the middle and adding a central plank to make it wider. (Some of these methods have been in quite recent use; there is no simple developmental sequence.) The earliest known plank-built boats are from the Nile, dating to the third millennium BC. Outside Egypt, the next earliest are from England. 
The Ferriby boats are dated to the early part of the second millennium BC and the end of the third millennium. Plank-built boats require a level of woodworking technology that was first available in the Neolithic, with more complex versions becoming achievable only in the Bronze Age. Types. Boats can be categorized by their means of propulsion. These divide into: A number of large vessels are usually referred to as boats. Submarines are a prime example. Other types of large vessels which are traditionally called boats include Great Lakes freighters, riverboats, and ferryboats. Though large enough to carry their own boats and heavy cargo, these vessels are designed for operation on inland or protected coastal waters. Terminology. The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On some boats, a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or cover much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads. The forward end of a boat is called the bow, the aft end the stern. Facing forward, the right side is referred to as starboard and the left side as port. Building materials. Until the mid-19th century, most boats were made of natural materials, primarily wood, although bark and animal skins were also used. Early boats include the birch bark canoe, the animal hide-covered kayak and coracle, and the dugout canoe made from a single log. By the mid-19th century, some boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structures, it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode. As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and as the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s, boats built entirely of steel, from frames to plating, were replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats. Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s, but it was not until the mid-20th century that aluminum gained widespread popularity. Aluminum is much more expensive than steel, but there are aluminum alloys that do not corrode in salt water, allowing a load-carrying capacity similar to steel's at much less weight. Around the mid-1960s, boats made of fiberglass (also known as "glass fiber") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (fiber-reinforced plastic) in the US. Fiberglass boats are strong and do not rust, corrode, or rot. 
Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam. Cold molding is a modern construction method that uses wood as the structural component. In one cold molding process, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets. An alternative process uses thin sheets of plywood shaped over a disposable male mold and coated with epoxy. Propulsion. The most common means of boat propulsion are as follows: Buoyancy. A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull below the waterline will increase so that the weight of the displaced water again equals the weight of the boat (a short worked sketch of this principle follows at the end of this entry). Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, to sink. As commercial vessels must be correctly loaded to be safe, and as seawater is less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading. European Union classification. Since 1998, all new leisure boats and barges between 2.5 m and 24 m in length built in Europe must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories that define the allowable wind and wave conditions for vessels in each class: Europe is the main producer of recreational boats, with Poland the world's second-largest producer. European brands are known all over the world; in fact, these are the brands that created the RCD and set the standard for shipyards around the world.
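As a worked illustration of the displacement principle noted under "Buoyancy" above, here is a minimal sketch based on Archimedes' principle; the boat mass and water densities are made-up example values, not figures from this article:

```python
# Archimedes' principle: a floating hull displaces a volume of water
# whose weight equals the boat's weight, so V = m / rho.
FRESH_WATER = 1000.0  # density in kg/m^3
SEA_WATER = 1025.0    # typical open-ocean value; brackish seas are lower

def displaced_volume(mass_kg: float, water_density_kg_m3: float) -> float:
    """Volume of water (in m^3) a floating hull must displace."""
    return mass_kg / water_density_kg_m3

boat_mass = 2000.0  # a hypothetical 2-tonne boat
print(displaced_volume(boat_mass, SEA_WATER))    # ~1.95 m^3
print(displaced_volume(boat_mass, FRESH_WATER))  # 2.00 m^3: the same boat
# rides lower in fresher, less dense water, which is one reason load lines
# such as the Plimsoll mark distinguish between water types.
```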
3997
27823944
https://en.wikipedia.org/wiki?curid=3997
Blood
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, and hormones. The blood cells are mainly red blood cells (erythrocytes), white blood cells (leukocytes), and (in mammals) platelets (thrombocytes). The most abundant cells are red blood cells. These contain hemoglobin, which facilitates oxygen transport by reversibly binding to oxygen, increasing its solubility in blood. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood. Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Medical terms related to blood often begin with "hemo-", "hemato-", "haemo-" or "haemato-" from the Greek word ("haima") for "blood". In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen. Functions. Blood performs many important functions within the body, including: Constituents. In mammals. Blood accounts for 7% of the human body weight, with an average density around 1060 kg/m3, very close to pure water's density of 1000 kg/m3. The average adult has a blood volume of roughly 5 liters, or 1.3 gallons, which is composed of plasma and "formed elements". The formed elements are the two types of blood cell or "corpuscle" – the red blood cells (erythrocytes) and white blood cells (leukocytes) – and the cell fragments called platelets that are involved in clotting. By volume, the red blood cells constitute about 45% of whole blood, the plasma about 54.3%, and white cells about 0.7%. Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics. Cells. One microliter of blood contains: Plasma. About 55% of blood is blood plasma, a fluid that is the blood's liquid medium, which by itself is straw-yellow in color. The blood plasma volume totals 2.7–3.0 liters (2.8–3.2 quarts) in an average human. It is essentially an aqueous solution containing 92% water, 8% blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid. Other important components include: The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins. Acidity. Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic. Blood with a pH below 7.35 is too acidic, whereas blood with a pH above 7.45 is too basic. A pH below 6.9 or above 7.8 is usually lethal. 
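The relationship between these quantities and the bicarbonate buffer discussed in the next paragraph is commonly summarized by the Henderson–Hasselbalch equation. A minimal sketch, using standard textbook constants (pKa = 6.1 and a CO2 solubility of 0.03 mmol/L per mmHg, neither stated in this article):

```python
import math

def blood_ph(hco3_mmol_per_l: float, pco2_mmhg: float) -> float:
    """Henderson-Hasselbalch approximation for the bicarbonate buffer:
    pH = pKa + log10([HCO3-] / (s * pCO2)), with pKa = 6.1 and
    CO2 solubility s = 0.03 mmol/L per mmHg (textbook values)."""
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

# Typical arterial values: [HCO3-] = 24 mmol/L, pCO2 = 40 mmHg
print(round(blood_ph(24.0, 40.0), 2))  # 7.4, inside the 7.35-7.45 range
```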
Blood pH, partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2), and bicarbonate (HCO3−) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system to control the acid–base balance and respiration; this regulation is called compensation. An arterial blood gas test measures these. Plasma also circulates hormones, transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive. In non-mammals. Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences: Physiology. Circulatory system. Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. One exception is the pulmonary arteries, which contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood. Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium. The blood circulation was described by William Harvey in 1628. Cell production and degradation. In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells, and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; in adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes. The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney. Healthy erythrocytes have a lifespan in the circulation of about 120 days before they are degraded by the spleen and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine. Oxygen transport. About 98.5% of the oxygen in a sample of arterial blood in a healthy human breathing air at sea-level pressure is chemically combined with the hemoglobin. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species. 
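The two routes can be combined in the standard arterial oxygen content formula. A minimal sketch, using textbook constants consistent with the figures quoted in the next paragraph (illustrative, not taken from this article):

```python
def arterial_o2_content(hb_g_per_dl: float, sao2: float, pao2_mmhg: float) -> float:
    """Arterial O2 content in ml of O2 per dl of blood.

    Hemoglobin-bound O2: ~1.36 ml O2 per gram of fully saturated hemoglobin
    (the low end of the 1.36-1.40 range quoted below).
    Dissolved O2: 0.03 ml O2 per liter per mmHg, i.e. 0.003 ml/dl/mmHg."""
    return 1.36 * hb_g_per_dl * sao2 + 0.003 * pao2_mmhg

# Typical resting arterial values: Hb 15 g/dl, 98% saturation, pO2 100 mmHg
print(round(arterial_o2_content(15.0, 0.98, 100.0), 1))  # ~20.3 ml O2 per dl
```

At these values the dissolved fraction contributes only about 0.3 of the roughly 20 ml total, matching the 98.5% versus 1.5% split described above.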
Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O2 per gram of hemoglobin, which increases the total blood oxygen capacity seventyfold compared to oxygen carried only by its solubility of 0.03 ml O2 per liter of blood per mmHg of oxygen partial pressure (about 100 mmHg in arteries). With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart. Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98–99% saturated with oxygen, achieving an oxygen delivery of between 950 and 1150 ml/min to the body. In a healthy adult at rest, oxygen consumption is approximately 200–250 ml/min, and deoxygenated blood returning to the lungs is still roughly 75% (70 to 78%) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15% in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95% or less under these conditions. Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90%) is dangerous to health, and severe hypoxia (saturations less than 30%) may be rapidly fatal. A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21% of the level found in an adult's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) in order to function under these conditions. Carbon dioxide transport. CO2 is carried in blood in three different ways. (The exact percentages vary depending on whether it is arterial or venous blood.) Most of it (about 70%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells by the reaction CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−; about 7% is dissolved in the plasma; and about 23% is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding of carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. A rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Transport of hydrogen ions. Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions, as it has a much greater affinity for hydrogen ions than does oxyhemoglobin. Lymphatic system. In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Thermoregulation. 
Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and the surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the body's important organs. Rate of flow. The rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply, with an approximate flow of 1350 ml/min. The kidneys and brain are the second and third most supplied organs, with 1100 ml/min and ~700 ml/min, respectively. Relative rates of blood flow per 100 g of tissue are different, with the kidney, adrenal gland and thyroid being the first, second and third most supplied tissues, respectively. Hydraulic functions. The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris. Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. Color. Hemoglobin is the principal determinant of the color of blood (hemochrome). Each molecule has four heme groups, and their interaction with various molecules alters the exact color. Arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot use oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus "Prasinohaema" have green blood due to a buildup of the waste product biliverdin. Disorders. Carbon monoxide poisoning. Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation. 
A fire burning in an enclosed room with poor ventilation presents a serious hazard, since it can create a build-up of carbon monoxide in the air. Smoking tobacco also causes some carbon monoxide to bind to hemoglobin. Treatments. Transfusion. Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, with the ABO blood group system and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Intravenous administration. Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2, etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. Letting. In modern evidence-based medicine, bloodletting is used in the management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. Etymology. English "blood" (Old English "blod") derives from a Germanic root and has cognates with a similar range of meanings in all other Germanic languages (e.g. German "Blut", Swedish "blod", Gothic "blōþ"). There is no accepted Indo-European etymology. History. Classical Greek medicine. Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). In general, Greek thinkers believed that blood was made from food. Plato and Aristotle are two important sources of evidence for this view, but it dates back to Homer's "Iliad". Plato thought that fire in our bellies transforms food into blood, and that the movements of air in the body as we exhale and inhale carry the fire as it transforms our food into blood. Aristotle believed that food was concocted into blood in the heart and transformed into the body's matter. Types. The ABO blood group system was discovered in 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. 
In 1907, the first blood transfusion using the ABO system to predict compatibility was performed. The first non-direct transfusion was performed on 27 March 1914. The Rhesus factor was discovered in 1937. Culture and religion. Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth or parentage; to be "related by blood" is to be related by ancestry or descent, rather than by marriage. This is closely tied to bloodlines and to sayings such as "blood is thicker than water", "bad blood", and "blood brother". Blood is given particular emphasis in the Islamic, Jewish, and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact, instead of being poured off. Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. Indigenous Australians. In the traditions of many Australian Aboriginal peoples, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms with magnetic fields, because iron is magnetic. European paganism. Among the Germanic tribes, blood was used during their sacrifices, the "Blóts". The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called "blóedsian" in Old English, and the terminology was borrowed by the Roman Catholic Church, becoming "to bless" and "blessing". The Hittite word for blood, "ishar", was cognate with words for "oath" and "bond" (see Ishara). The Ancient Greeks believed that the blood of the gods, "ichor", was a substance that was poisonous to mortals. As a relic of Germanic law, the cruentation, an ordeal in which the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. Christianity. In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. The Bible also recounts that when the Angel of Death came to the Hebrew houses, the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): it was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin." (1 John 1:7), "... Unto him [God] that loved us, and washed us from our sins in his own blood." (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). 
Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East, teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus, in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. "This cup is the new testament in my blood, which is shed for you." (Luke 22:20). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood are present together "in, with, and under" the bread and wine of the Eucharistic feast. Judaism. In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting, and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Islam. Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to obtain physical and ritual status of cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way to ensure that the spine is not severed, hence the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules. Jehovah's Witnesses. Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components. Vampirism. 
Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example, the 'Nosferatu' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures do consume the blood of other animals, but of these only bats have become associated with vampires. The European myths themselves owe nothing to vampire bats, which are New World creatures described well after those myths originated. Invertebrates. In invertebrates, a body fluid analogous to blood called hemolymph is found, the main difference being that hemolymph is not contained in a closed circulatory system. Hemolymph may function to carry oxygen, although hemoglobin is not necessarily used. Crustaceans and mollusks use hemocyanin instead of hemoglobin. In most insects, hemolymph does not contain oxygen-carrying molecules, because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Other uses. Forensic and archaeological. Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains. Blood residue analysis is also a technique used in archaeology. Artistic. Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O'Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood. Genealogical. The term "blood" is used in genealogical circles to refer to one's ancestry, origins, and ethnic background, as in the word "bloodline". Other terms where blood is used in a family history sense are "blue-blood", "royal blood", "mixed-blood" and "blood relative".
3999
762163
https://en.wikipedia.org/wiki?curid=3999
Benoit Mandelbrot
Benoit B. Mandelbrot (20 November 1924 – 14 October 2010) was a Polish-born French-American mathematician and polymath with broad interests in the practical sciences, especially regarding what he labeled as "the art of roughness" of physical phenomena and "the uncontrolled element in life". He referred to himself as a "fractalist" and is recognized for his contribution to the field of fractal geometry, which included coining the word "fractal", as well as developing a theory of "roughness and self-similarity" in nature. In 1936, when Mandelbrot was 11, his family emigrated from Warsaw, Poland, to France. After World War II ended, Mandelbrot studied mathematics, graduating from universities in Paris and in the United States and receiving a master's degree in aeronautics from the California Institute of Technology. He spent most of his career in both the United States and France, having dual French and American citizenship. In 1958, he began a 35-year career at IBM, where he became an IBM Fellow, and periodically took leaves of absence to teach at Harvard University. At Harvard, following the publication of his study of U.S. commodity markets in relation to cotton futures, he taught economics and applied sciences. Because of his access to IBM's computers, Mandelbrot was one of the first to use computer graphics to create and display fractal geometric images, leading to his discovery of the Mandelbrot set in 1980. He showed how visual complexity can be created from simple rules. He said that things typically considered to be "rough", a "mess", or "chaotic", such as clouds or shorelines, actually had a "degree of order". His math- and geometry-centered research included contributions to such fields as statistical physics, meteorology, hydrology, geomorphology, anatomy, taxonomy, neurology, linguistics, information technology, computer graphics, economics, geology, medicine, physical cosmology, engineering, chaos theory, econophysics, metallurgy, and the social sciences. Toward the end of his career, he was Sterling Professor of Mathematical Sciences at Yale University, where he was the oldest professor in Yale's history to receive tenure. Mandelbrot also held positions at the Pacific Northwest National Laboratory, the Université Lille Nord de France, the Institute for Advanced Study and the Centre National de la Recherche Scientifique. During his career, he received over 15 honorary doctorates, served on the editorial boards of many science journals, and won numerous awards. His autobiography, "The Fractalist: Memoir of a Scientific Maverick", was published posthumously in 2012. Early years. Benedykt Mandelbrot was born into a Lithuanian Jewish family in Warsaw during the Second Polish Republic. His father made his living trading clothing; his mother was a dental surgeon. During his first two school years, he was tutored privately by an uncle who despised rote learning: "Most of my time was spent playing chess, reading maps and learning how to open my eyes to everything around me." In 1936, when he was 11, the family emigrated from Poland to France. The move, World War II, and the influence of his father's brother, the mathematician Szolem Mandelbrojt (who had moved to Paris around 1920), further prevented a standard education. "The fact that my parents, as economic and political refugees, joined Szolem in France saved our lives," he writes. 
Mandelbrot attended the Lycée Rollin (now the Collège-lycée Jacques-Decour) in Paris until the start of World War II, when his family moved to Tulle, France, where he enrolled in the Lycée Edmond Perrier. He was helped by Rabbi David Feuerwerker, the Rabbi of Brive-la-Gaillarde, to continue his studies. Much of France was occupied by the Nazis at the time, and Mandelbrot recalls this period: In 1944, Mandelbrot returned to Paris, studied at the Lycée du Parc in Lyon, and from 1945 to 1947 attended the École Polytechnique, where he studied under Gaston Julia and Paul Lévy. From 1947 to 1949 he studied at the California Institute of Technology, where he earned a master's degree in aeronautics. Returning to France, he obtained his PhD degree in Mathematical Sciences at the University of Paris in 1952. Research career. From 1949 to 1958, Mandelbrot was a staff member at the Centre National de la Recherche Scientifique. During this time he spent a year at the Institute for Advanced Study in Princeton, New Jersey, where he was sponsored by John von Neumann. In 1955 he married Aliette Kagan and moved to Geneva, Switzerland (to collaborate with Jean Piaget at the International Centre for Genetic Epistemology) and later to the Université Lille Nord de France. In 1958 the couple moved to the United States, where Mandelbrot joined the research staff at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. He remained at IBM for 35 years, becoming an IBM Fellow and later a Fellow Emeritus. From 1951 onward, Mandelbrot worked on problems and published papers not only in mathematics but in applied fields such as information theory, economics, and fluid dynamics. Randomness and fractals in financial markets. Mandelbrot saw financial markets as an example of "wild randomness", characterized by concentration and long-range dependence. He developed several original approaches for modelling financial fluctuations. In his early work, he found that the price changes in financial markets did not follow a Gaussian distribution, but rather Lévy stable distributions having infinite variance. He found, for example, that cotton prices followed a Lévy stable distribution with parameter "α" equal to 1.7, rather than 2 as in a Gaussian distribution. "Stable" distributions have the property that the sum of many instances of a random variable follows the same distribution but with a larger scale parameter. This work, from the early 1960s, used daily cotton price data going back to 1900, long before he introduced the word 'fractal'. In later years, after the concept of fractals had matured, the study of financial markets in the context of fractals became possible only once high-frequency financial data became available. In the late 1980s, Mandelbrot used intra-daily tick data supplied by Olsen & Associates in Zurich to apply fractal theory to market microstructure. This cooperation led to the publication of the first comprehensive papers on scaling laws in finance. These laws show similar properties at different time scales, confirming Mandelbrot's insight into the fractal nature of market microstructure. Developing "fractal geometry" and the Mandelbrot set. As a visiting professor at Harvard University, Mandelbrot began to study mathematical objects called Julia sets that were invariant under certain transformations of the complex plane. 
Building on previous work by Gaston Julia and Pierre Fatou, Mandelbrot used a computer to plot images of the Julia sets. While investigating the topology of these Julia sets, he studied the Mandelbrot set, which he introduced in 1979. In 1975, Mandelbrot coined the term "fractal" to describe these structures and first published his ideas in the French book "Les Objets Fractals: Forme, Hasard et Dimension", later translated in 1977 as "Fractals: Form, Chance and Dimension". According to computer scientist and physicist Stephen Wolfram, the book was a "breakthrough" for Mandelbrot, who until then would typically "apply fairly straightforward mathematics ... to areas that had barely seen the light of serious mathematics before". Wolfram adds that as a result of this new research, he was no longer a "wandering scientist", and later called him "the father of fractals": Wolfram briefly describes fractals as a form of geometric repetition, "in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you zoom in to the whole. Fern leaves and Romanesque broccoli are two examples from nature." He points out an unexpected conclusion: Mandelbrot chose the term "fractal" because it derived from the Latin word "fractus", meaning broken or shattered. Using the newly developed IBM computers at his disposal, Mandelbrot was able to create fractal images using graphics computer code, images that an interviewer described as looking like "the delirious exuberance of the 1960s psychedelic art with forms hauntingly reminiscent of nature and the human body". He also saw himself as a "would-be Kepler", after the 17th-century scientist Johannes Kepler, who calculated and described the orbits of the planets. Mandelbrot, however, never felt he was inventing a new idea. He described his feelings in a documentary with science writer Arthur C. Clarke: According to Clarke, "the Mandelbrot set is indeed one of the most astonishing discoveries in the entire history of mathematics. Who could have dreamed that such an incredibly simple equation could have generated images of literally "infinite" complexity?" Clarke also notes an "odd coincidence" between the name Mandelbrot and the word "mandala", the term for a religious symbol, "which I'm sure is a pure coincidence, but indeed the Mandelbrot set does seem to contain an enormous number of mandalas". In 1982, Mandelbrot expanded and updated his ideas in "The Fractal Geometry of Nature". This influential work brought fractals into the mainstream of professional and popular mathematics, as well as silencing critics, who had dismissed fractals as "program artifacts". Mandelbrot left IBM in 1987, after 35 years and 12 days, when IBM decided to end pure research in his division. He joined the Department of Mathematics at Yale, and obtained his first tenured post in 1999, at the age of 75. He invited his colleague Michael Frame to work at Yale and co-published various articles with him. At the time of Mandelbrot's retirement in 2005, he was Sterling Professor of Mathematical Sciences. Fractals and the "theory of roughness". Mandelbrot created the first-ever "theory of roughness", and he saw "roughness" in the shapes of mountains, coastlines and river basins; the structures of plants, blood vessels and lungs; and the clustering of galaxies. His personal quest was to create some mathematical formula to measure the overall "roughness" of such objects in nature. 
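As context for the Mandelbrot set discussed above: the "incredibly simple equation" Clarke refers to is the iteration z → z² + c on the complex plane, where a point c belongs to the set if the iterated value never escapes to infinity. A minimal escape-time sketch (an illustrative reconstruction, not Mandelbrot's original IBM code):

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from z = 0 and return the step at which |z|
    exceeds 2 (after which divergence is guaranteed), or max_iter if it
    never does; points that never escape belong to the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Coarse ASCII rendering of the set over the region [-2, 1] x [-1.25, 1.25]
for row in range(24):
    y = 1.25 - 2.5 * row / 23
    print("".join(
        "#" if escape_time(complex(-2.0 + 3.0 * col / 63, y)) == 100 else " "
        for col in range(64)))
```

Even these few lines reproduce the set's familiar cardioid-and-bulb outline, illustrating Mandelbrot's point that visual complexity can arise from simple rules.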
He began by asking himself various kinds of questions related to nature: In his paper "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", published in "Science" in 1967, Mandelbrot discusses self-similar curves of fractional Hausdorff dimension that are examples of "fractals", although Mandelbrot does not use this term in the paper, as he did not coin it until 1975. The paper is one of Mandelbrot's first publications on the topic of fractals. Mandelbrot emphasized the use of fractals as realistic and useful models for describing many "rough" phenomena in the real world. He concluded that "real roughness is often fractal and can be measured." Although Mandelbrot coined the term "fractal", some of the mathematical objects he presented in "The Fractal Geometry of Nature" had been previously described by other mathematicians. Before Mandelbrot, however, they were regarded as isolated curiosities with unnatural and non-intuitive properties. Mandelbrot brought these objects together for the first time and turned them into essential tools for the long-stalled effort to extend the scope of science to explaining non-smooth, "rough" objects in the real world. His methods of research were both old and new: Fractals are also found in human pursuits, such as music, painting, architecture, and finance. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry: Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.  —Mandelbrot, in his introduction to "The Fractal Geometry of Nature" Mandelbrot has been called an artist, a visionary, and a maverick. His informal and passionate style of writing and his emphasis on visual and geometric intuition (supported by the inclusion of numerous illustrations) made "The Fractal Geometry of Nature" accessible to non-specialists. The book sparked widespread popular interest in fractals and contributed to chaos theory and other fields of science and mathematics. Mandelbrot also put his ideas to work in cosmology. He offered in 1974 a new explanation of Olbers' paradox (the "dark night sky" riddle), demonstrating the consequences of fractal theory as a sufficient, but not necessary, resolution of the paradox. He postulated that if the stars in the universe were fractally distributed (for example, like Cantor dust), it would not be necessary to rely on the Big Bang theory to explain the paradox. His model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred. Awards and honors. Mandelbrot's awards include the Wolf Prize in Physics in 1993, the Lewis Fry Richardson Prize of the European Geophysical Society in 2000, the Japan Prize in 2003, and the Einstein Lectureship of the American Mathematical Society in 2006. The small asteroid 27500 Mandelbrot was named in his honor. In November 1990, he was made a Chevalier in France's Legion of Honour. In December 2005, Mandelbrot was appointed to the position of Battelle Fellow at the Pacific Northwest National Laboratory. Mandelbrot was promoted to an Officer of the Legion of Honour in January 2006. An honorary degree from Johns Hopkins University was bestowed on Mandelbrot in the May 2010 commencement exercises. A partial list of awards received by Mandelbrot: Death and legacy. 
Mandelbrot died from pancreatic cancer at the age of 85 in a hospice in Cambridge, Massachusetts, on 14 October 2010. Reacting to news of his death, mathematician Heinz-Otto Peitgen said: "[I]f we talk about impact inside mathematics, and applications in the sciences, he is one of the most important figures of the last fifty years." Chris Anderson, TED conference curator, described Mandelbrot as "an icon who changed how we see the world". Nicolas Sarkozy, President of France at the time of Mandelbrot's death, said Mandelbrot had "a powerful, original mind that never shied away from innovating and shattering preconceived notions [... h]is work, developed entirely outside mainstream research, led to modern information theory." Mandelbrot's obituary in "The Economist" points out his fame as "celebrity beyond the academy" and lauds him as the "father of fractal geometry". Best-selling essayist-author Nassim Nicholas Taleb has remarked that Mandelbrot's book "The (Mis)Behavior of Markets" is in his opinion "The deepest and most realistic finance book ever published".

In popular culture. The indie music satirist Jonathan Coulton's song "Mandelbrot Set" is dedicated to Mandelbrot, who at the time of the song's release was, in the words of the lyrics, "still alive and teaching math at Yale". The song incorporates concepts and images associated with order out of chaos:

"Mandelbrot Set, you're a Rorschach Test on fire"
"You're a day-glo pterodactyl"
"You're a heart-shaped box of springs and wire"
"You're one badass fucking fractal"
"And you're just in time to save the day"
"Sweeping all our fears away"
"You can change the world in a tiny way"

The song's pre-chorus describes a mathematical equation that is not the Mandelbrot set. Coulton has said, "if any of you are mathematicians and you're just waiting anxiously to come up to me afterwards and point out the fact that the equation is not quite right, that I actually described a Julia set, and not a Mandelbrot Set, I already know that shit."
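For reference, the mathematics behind Coulton's joke can be stated briefly (a standard definition, not drawn from the song or the sources above). Both sets arise from iterating the same quadratic map

$$z_{n+1} = z_n^2 + c, \qquad z_n, c \in \mathbb{C}.$$

The Mandelbrot set consists of the parameter values $c$ for which the orbit of $z_0 = 0$ remains bounded, whereas a (filled) Julia set fixes $c$ and instead collects the starting points $z_0$ whose orbits remain bounded; the construction Coulton's lyric describes is the latter.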
4001
12602014
https://en.wikipedia.org/wiki?curid=4001
Benedict of Nursia
Benedict of Nursia (2 March 480 – 21 March 547), often known as Saint Benedict, was a Christian monk. He is venerated in the Catholic Church, the Eastern Orthodox Church, the Lutheran Churches, the Anglican Communion, and Old Catholic Churches. In 1964, Pope Paul VI declared Benedict a patron saint of Europe.

Benedict founded twelve communities for monks at Subiaco in present-day Lazio, Italy (to the east of Rome), before moving southeast to Monte Cassino in the mountains of central Italy. The present-day Order of Saint Benedict emerged later and, moreover, is not an "order" as the term is commonly understood, but a confederation of autonomous congregations.

Benedict's main achievement, his "Rule of Saint Benedict", contains a set of rules for his monks to follow. Heavily influenced by the writings of John Cassian, it shows strong affinity with the earlier "Rule of the Master", but it also has a unique spirit of balance, moderation and reasonableness ("epieíkeia"), which persuaded most Christian religious communities founded throughout the Middle Ages to adopt it. As a result, Benedict's Rule became one of the most influential religious rules in Western Christendom. For this reason, Giuseppe Carletti regarded Benedict as the founder of Western Christian monasticism.

Hagiography. Apart from a short poem attributed to Mark of Monte Cassino, the only ancient account of Benedict is found in the second volume of Pope Gregory I's four-book "Dialogues", thought to have been written in 593, although the authenticity of this work is disputed. Gregory's account of Benedict's life, however, is not a biography in the modern sense of the word. It provides instead a spiritual portrait of the gentle, disciplined abbot. In a letter to Bishop Maximilian of Syracuse, Gregory states his intention for his "Dialogues", saying they are a kind of "floretum" (an "anthology", literally, 'flower garden') of the most striking miracles of Italian holy men. Gregory did not set out to write a chronological, historically anchored story of Benedict, but he did base his anecdotes on direct testimony. To establish his authority, Gregory explains that his information came from what he considered the best sources: a handful of Benedict's disciples who lived with him and witnessed his various miracles. These followers, he says, are Constantinus, who succeeded Benedict as Abbot of Monte Cassino; Honoratus, who was abbot of Subiaco when St. Gregory wrote his "Dialogues"; Valentinianus; and Simplicius. In Gregory's day, history was not recognised as an independent field of study; it was a branch of grammar or rhetoric, and "historia" was an account that summed up the findings of the learned when they wrote what was, at that time, considered history. Gregory's "Dialogues", Book Two, then, an authentic medieval hagiography cast as a conversation between the Pope and his deacon Peter, is designed to teach spiritual lessons.

Early life. Benedict was the son of a Roman noble of Nursia, the modern Norcia, in Umbria. According to Gregory's narrative, Benedict was born around 480, and the year in which he abandoned his studies and left home "was probably a few years before 500." Benedict was sent to Rome to study, but was disappointed by the academic studies he encountered there. Seeking to flee the great city, he left with his nurse and settled in Enfide. Enfide, which the tradition of Subiaco identifies with the modern Affile, is in the Simbruini mountains, about forty miles from Rome and two miles from Subiaco.
A short distance from Enfide is the entrance to a narrow, gloomy valley, penetrating the mountains and leading directly to Subiaco. The path continues to ascend, and the side of the ravine on which it runs becomes steeper until a cave is reached; above this point the mountain rises almost perpendicularly, while on the right it descends rapidly to where, in Benedict's day, the blue waters of a lake lay below. The cave has a large triangular-shaped opening and is about ten feet deep. On his way from Enfide, Benedict met a monk, Romanus of Subiaco, whose monastery was on the mountain above the cliff overhanging the cave. Romanus discussed with Benedict the purpose which had brought him to Subiaco, and gave him the monk's habit. By his advice Benedict became a hermit and for three years lived in this cave above the lake.

Later life. Gregory tells little of Benedict's later life. He now speaks of Benedict no longer as a youth but as a man of God. Romanus, Gregory states, served Benedict in every way he could. The monk apparently visited him frequently, and on fixed days brought him food. During these three years of solitude, broken only by occasional communications with the outer world and by the visits of Romanus, Benedict matured both in mind and character, in knowledge of himself and of his fellow-man, and at the same time he became not merely known to, but secured the respect of, those about him; so much so that on the death of the abbot of a monastery in the neighbourhood (identified by some with Vicovaro), the community came to him and begged him to become its abbot. Benedict was acquainted with the life and discipline of the monastery, and knew that "their manners were diverse from his and therefore that they would never agree together: yet, at length, overcome with their entreaty, he gave his consent". The experiment failed; the monks tried to poison him. The legend goes that they first tried to poison his drink; he prayed a blessing over the cup, and it shattered. Thus he left the group and went back to his cave at Subiaco. There lived in the neighborhood a priest called Florentius who, moved by envy, tried to ruin him. Florentius sent him poisoned bread; when Benedict prayed a blessing over the loaf, a raven swept in and carried it away. From this time his miracles seem to have become frequent, and many people, attracted by his sanctity and character, came to Subiaco to be under his guidance. Having failed to kill Benedict with poisoned bread, Florentius sent prostitutes to seduce his monks. To avoid further temptations, in about 530 Benedict left Subiaco. He had founded 12 monasteries in the vicinity of Subiaco, and eventually, in 530, he founded the great Benedictine monastery of Monte Cassino, which lies on a hilltop between Rome and Naples.

Veneration. Benedict died of a fever at Monte Cassino not long after his sister, Scholastica, and was buried in the same tomb. According to tradition, this occurred on 21 March 547. He was named patron protector of Europe by Pope Paul VI in 1964. In 1980, Pope John Paul II declared him co-patron of Europe, together with Cyril and Methodius. Furthermore, he is the patron saint of speleologists. On the island of Tenerife (Spain) he is the patron saint of fields and farmers, and a romería held there in his honor ("Romería Regional de San Benito Abad") is one of the most important in the country.
In the pre-1970 General Roman Calendar, his feast is kept on 21 March, the day of his death according to some manuscripts of the "Martyrologium Hieronymianum" and that of Bede. Because on that date his liturgical memorial would always be impeded by the observance of Lent, the 1969 revision of the General Roman Calendar moved his memorial to 11 July, the date that appears in some Gallic liturgical books of the end of the 8th century as the feast commemorating his birth ("Natalis S. Benedicti"). There is some uncertainty about the origin of this feast. Accordingly, on 21 March the Roman Martyrology mentions in a line and a half that it is Benedict's day of death and that his memorial is celebrated on 11 July, while on 11 July it devotes seven lines to him and mentions the tradition that he died on 21 March. The Eastern Orthodox Church commemorates Saint Benedict on 14 March. The Lutheran Churches celebrate the Feast of Saint Benedict on 11 July. The Anglican Communion has no single universal calendar, but a provincial calendar of saints is published in each province; in almost all of these, Saint Benedict is commemorated on 11 July. Benedict is remembered in the Church of England with a Lesser Festival on 11 July.

"Rule of Saint Benedict". Benedict wrote the "Rule" for monks living communally under the authority of an abbot. The "Rule" comprises seventy-three short chapters. Its wisdom is twofold: spiritual (how to live a Christocentric life on earth) and administrative (how to run a monastery efficiently). More than half of the chapters describe how to be obedient and humble, and what to do when a member of the community is not. About one-fourth regulate the work of God (the "opus Dei"). One-tenth outline how, and by whom, the monastery should be managed. Benedictine asceticism is known for its moderation.

Saint Benedict Medal. This devotional medal originally came from a cross in honor of Saint Benedict. On one side, the medal has an image of Saint Benedict, holding the Holy Rule in his left hand and a cross in his right, with a raven on one side of him and a cup on the other. Around the medal's outer margin are the words "Eius in obitu nostro praesentia muniamur" ("May we be strengthened by his presence in the hour of our death"). The other side of the medal has a cross with the initials CSSML on the vertical bar, which signify "Crux Sacra Sit Mihi Lux" ("May the Holy Cross be my light"), and on the horizontal bar the initials NDSMD, which stand for "Non Draco Sit Mihi Dux" ("Let not the dragon be my guide"). The initials CSPB stand for "Crux Sancti Patris Benedicti" ("The Cross of the Holy Father Benedict") and are located on the interior angles of the cross. In most cases, either the inscription "PAX" (peace) or the Christogram "IHS" is found at the top of the cross. Around the medal's margin on this side are the "Vade Retro Satana" initials VRSNSMV, which stand for "Vade Retro Satana, Numquam Suade Mihi Vana" ("Begone Satan, do not suggest to me thy vanities"), then a space, followed by the initials SMQLIVB, which signify "Sunt Mala Quae Libas, Ipse Venena Bibas" ("Evil are the things thou profferest, drink thou thine own poison"). This medal was first struck in 1880 to commemorate the fourteenth centenary of Benedict's birth and is also called the Jubilee Medal; its exact origin, however, is unknown.
In 1647, during a witchcraft trial at Natternberg near Metten Abbey in Bavaria, the accused women testified they had no power over Metten, which was under the protection of the cross. An investigation found a number of painted crosses on the walls of the abbey bearing the letters now found on St Benedict medals, but their meaning had been forgotten. A manuscript written in 1415 was eventually found that had a picture of Benedict holding a scroll in one hand and a staff ending in a cross in the other; on the scroll and staff were written the full words of the initials contained on the crosses. Medals then began to be struck in Germany and soon spread throughout Europe. The medal was first approved by Pope Benedict XIV in his briefs of 23 December 1741 and 12 March 1742.

Benedict has also been the motif of many collector's coins around the world; the Austrian 50-euro coin "The Christian Religious Orders", issued on 13 March 2002, is one of them.

Influence. The early Middle Ages have been called "the Benedictine centuries". In April 2008, Pope Benedict XVI discussed the influence St Benedict had on Western Europe. The pope said that "with his life and work St Benedict exercised a fundamental influence on the development of European civilization and culture" and helped Europe to emerge from the "dark night of history" that followed the fall of the Roman Empire. Benedict contributed more than anyone else to the rise of monasticism in the West. His Rule was the foundational document for thousands of religious communities in the Middle Ages. To this day, the Rule of St Benedict is the most common and influential rule used by monasteries and monks, more than 1,400 years after its writing. A basilica was built upon the birthplace of Benedict and Scholastica in the 1400s. Ruins of their familial home were excavated from beneath the church and preserved. The earthquake of 30 October 2016 completely devastated the structure of the basilica, leaving only the front facade and altar standing.
4005
7903804
https://en.wikipedia.org/wiki?curid=4005
Battle of Pharsalus
The Battle of Pharsalus was the decisive battle of Caesar's Civil War, fought on 9 August 48 BC near Pharsalus in central Greece. Julius Caesar and his allies formed up opposite the army of the Roman Republic under the command of Pompey. Pompey had the backing of a majority of Roman senators, and his army significantly outnumbered the veteran Caesarian legions. Pressured by his officers, Pompey reluctantly engaged in battle and suffered an overwhelming defeat, ultimately fleeing the camp and his men disguised as an ordinary citizen. Eventually making his way to Egypt, he was assassinated upon his arrival at the order of Ptolemy XIII.

Prelude. Following the start of the Civil War, Caesar had captured Rome, forced Pompey and his allies to withdraw from Italy, and defeated Pompey's legates in Spain. In the campaign season for 48 BC, Caesar crossed the Adriatic and advanced on Dyrrachium, where he besieged Pompey's forces but was defeated. Caesar then withdrew east into Thessaly, partly to relieve one of his legates from attack by Metellus Scipio's forces arriving from Syria, and besieged Gomphi after it resisted him. Pompey pursued, seeking to spare Italy from invasion by concluding the war on Greek soil, to prevent Caesar from defeating Metellus Scipio, and under pressure from his overconfident allies, who accused him of prolonging the war to extend his command.

Date. The decisive battle took place on 9 August 48 BC according to the Republican calendar.

Location. The location of the battlefield was for a long time the subject of controversy among scholars. Caesar himself, in his "Commentarii de Bello Civili", mentions few place-names; and although the battle is called after Pharsalos by modern authors, four ancient writers – the author of the "Bellum Alexandrinum" (48.1), Frontinus ("Strategemata" 2.3.22), Eutropius (20), and Orosius (6.15.27) – place it specifically at "Palae"pharsalus ("Old" Pharsalus). Strabo in his "Geographica" ("Γεωγραφικά") mentions both old and new Pharsaloi, and notes that the Thetideion, the temple to Thetis south of Scotoussa, was near both. In 198 BC, in the Second Macedonian War, Philip V of Macedon sacked Palaepharsalos (Livy, "Ab Urbe Condita" 32.13.9) but left new Pharsalos untouched. These two details perhaps imply that the two cities were not close neighbours. Many scholars, therefore, unsure of the site of Palaepharsalos, followed Appian (2.75) and located the battle of 48 BC south of the Enipeus or close to Pharsalos (today's Pharsala). Among the scholars arguing for the south side are Béquignon (1928), Bruère (1951), and Gwatkin (1957). An increasing number of scholars, however, have argued for a location on the north side of the river. These include Perrin (1885), Holmes (1908), Lucas (1921), Rambaud (1955), Pelling (1973), Morgan (1983), and Sheppard (2006). John D. Morgan, in his definitive "Palae-pharsalus – the Battle and the Town", shows that Palaepharsalus cannot have been at Palaiokastro, as Béquignon thought (a site abandoned c. 500 BC), nor the hill of Fatih-Dzami within the walls of Pharsalus itself, as Kromayer (1903, 1931) and Gwatkin thought; and Morgan argues that it is probably also not the hill of Khtouri (Koutouri), some 7 miles north-west of Pharsalus on the south bank of the Enipeus, as Lucas and Holmes thought, although that remains a possibility.
However, Morgan believes it is most likely to have been the hill just east of the village of Krini (Krini Larisas, formerly Driskoli), very close to the ancient highway from Larisa to Pharsalus. This site lies north of Pharsalus, three miles north of the river Enipeus, and not only has remains dating back to Neolithic times but also signs of habitation in the 1st century BC and later. The identification seems to be confirmed by the location of a place misspelled "Palfari" or "Falaphari" shown on a medieval route map of the road just north of Pharsalus. Morgan places Pompey's camp a mile to the west of Krini, just north of the village of Avra (formerly Sarikayia), and Caesar's camp some four miles to the east-south-east of Pompey's. According to this reconstruction, therefore, the battle took place not between Pharsalus and the river, as Appian wrote, but between Old Pharsalus and the river. Palaepharsalus was also sometimes identified in ancient sources with Phthia, the home of Achilles: near Old and New Pharsalus was a "Thetideion", or temple dedicated to Thetis, the mother of Achilles. However, Phthia, the kingdom of Achilles and his father Peleus, is more usually identified with the lower valley of the Spercheios river, much further south.

Name of the battle. Although it is often called the Battle of Pharsalus by modern historians, this name was rarely used in the ancient sources. Caesar merely calls it the "proelium in Thessaliā" ("battle in Thessalia"); Marcus Tullius Cicero and Hirtius call it the "Pharsālicum proelium" ("Pharsalic battle") or "pugna Pharsālia" ("Pharsalian battle"), and similar expressions are also used in other authors. But Hirtius (if he is the author of the "de Bello Alexandrino") also refers to the battle as having taken place at "Palaepharsalus", and this name also occurs in Strabo, Frontinus, Eutropius, and Orosius. Lucan in his poem about the Civil War regularly uses the name "Pharsālia", and this term is also used by the epitomiser of Livy and by Tacitus. The only ancient sources to refer to the battle as being at Pharsalus are a certain calendar known as the Fasti Amiternini and the Greek authors Plutarch, Appian, and Polyaenus. It has therefore been argued by some scholars that "Pharsalia" would be a more accurate name for the battle than Pharsalus.

Opposing armies. The total number of soldiers on each side is unknown, because ancient accounts of the battle focused primarily on giving the numbers of Italian legionaries only, regarding allied non-citizen contingents as inferior and inconsequential. According to Caesar, his own army included 22,000 Roman legionaries distributed throughout 80 cohorts (8 legions), alongside 1,000 Gallic and Germanic cavalry. All of Caesar's legions were understrength; some only had about a thousand men at the time of Pharsalus, due partly to losses at Dyrrhachium and partly to Caesar's wish to advance rapidly with a picked body rather than move ponderously with a large army. Another source adds that he had recruited Greek light infantry from Dolopia, Acarnania and Aetolia; these numbered no more than a few thousand. Caesar, Appian and Plutarch give Pompey an army of 45,000 Roman infantry. Orosius describes Pompey as having 88 cohorts of Roman infantry, which at full strength would come to 44,000 men, while Brunt and Wylie estimated Pompey's Roman infantry at 38,000 men, and Greenhalgh said they contained a maximum of 36,000.
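As an aside on the arithmetic (the per-cohort figure is an inference, not stated in the sources above): the full-strength total attributed to Orosius's 88 cohorts implies a paper strength of roughly 500 men per cohort, since $88 \times 500 = 44{,}000$.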
It was in his auxiliary troops, and in particular his cavalry, all of which vastly outnumbered Caesar's own, that Pompey had his greatest advantage. He seems to have had at his disposal anywhere between 5,000 and 7,000 cavalry, and thousands of archers, slingers and light infantrymen in general. These all formed a remarkably diverse group, including Gallic and Germanic horsemen alongside polyglot peoples of the east – Greeks, Thracians and Anatolians from the Balkans and Asia Minor, and Syrians, Phoenicians and Jews from the Levant. To this heterogeneous force Pompey added horsemen conscripted from his own slaves. Many of the foreigners were serving under their own rulers, for more than a dozen despots and petty kings under Roman influence in the east were Pompey's personal clients, and some elected to attend in person or to send proxies.

Caesarian legions. Caesar had eight legions with him. The bulk of his army at Pharsalus was made up of veterans from the Gallic Wars: very experienced, battle-hardened troops who were absolutely devoted to their commander.

Deployment. The two generals deployed their legions in the traditional three lines ("triplex acies"), with Pompey's right and Caesar's left flanks resting on the river Enipeus. As the stream provided enough protection to that side, Pompey moved almost all of his cavalry, archers, and slingers to the left, to make the most of their numerical strength. Only a small force of 500–600 Pontic cavalry and some Cappadocian light infantry was placed on his right flank. Pompey stationed his strongest legions in the center and wings of his infantry line, and dispersed some 2,000 re-enlisted veterans throughout the entire line in order to inspire the less experienced. The Pompeian cohorts were arrayed in an unusually thick formation, 10 men deep: their task was just to tie down the enemy foot while Pompey's cavalry, his key to victory, swept through Caesar's flank and rear. The line of legions was divided under the command of three subordinates, with Lentulus in charge of the left, Scipio the center, and Ahenobarbus the right. Labienus was entrusted with command of the cavalry charge, while Pompey himself took up a position behind the left wing in order to oversee the course of the battle.

Caesar also deployed his men in three lines but, being outnumbered, had to thin his ranks to a depth of only six men in order to match the frontage presented by Pompey. His left flank, resting on the Enipeus, consisted of his battle-worn IX legion supplemented by the VIII legion; these were commanded by Mark Antony. The VI, XII, XI and XIII formed the centre and were commanded by Domitius; then came the VII, and on his right Caesar placed his favored X legion, giving Sulla command of this flank. Caesar himself took his stand on the right, across from Pompey. Upon seeing the disposition of Pompey's army, Caesar grew uneasy and further thinned his third line in order to form a fourth line on his right; this was to counter the onslaught of the enemy cavalry, which he knew his numerically inferior cavalry could not withstand. He gave this new line detailed instructions for the role they would play, hinting that upon them would rest the fortunes of the day, and gave strict orders to his third line not to charge until specifically ordered.

Battle. There was significant distance between the two armies, according to Caesar.
Pompey ordered his men not to charge, but to wait until Caesar's legions came into close quarters; Pompey's adviser Gaius Triarius believed that Caesar's infantry would be fatigued and fall into disorder if they were forced to cover twice the expected distance of a battle march. Stationary troops were also expected to defend better against pila throws. Seeing that Pompey's army was not advancing, Caesar's infantry under Mark Antony and Gnaeus Domitius Calvinus started the advance. As Caesar's men neared throwing distance, without orders, they stopped to rest and regroup before continuing the charge; Pompey's right and centre line held as the two armies collided.

As Pompey's infantry fought, Labienus ordered the Pompeian cavalry on his left flank to attack Caesar's cavalry; as expected, they successfully pushed back Caesar's cavalry. Caesar then revealed his hidden fourth line of infantry, surprising Pompey's cavalry charge; Caesar's men were ordered to leap up and use their pila to thrust at Pompey's cavalry instead of throwing them. Pompey's cavalry panicked and suffered hundreds of casualties as Caesar's cavalry came about and charged after them. After failing to re-form, the rest of Pompey's cavalry retreated to the hills, leaving the left wing of his legions exposed to the hidden troops as Caesar's cavalry wheeled around their flank. Caesar then ordered in his third line, containing his most battle-hardened veterans, to attack. This broke Pompey's left wing troops, who fled the battlefield. After routing Pompey's cavalry, Caesar threw in his last line of reserves, a move which at this point meant that the battle was more or less decided.

Pompey lost the will to fight as he watched both cavalry and legions under his command break formation and flee from battle, and he retreated to his camp, leaving the rest of his troops at the centre and right flank to their own devices. He ordered the garrisoned auxiliaries to defend the camp as he gathered his family, loaded up gold, and threw off his general's cloak to make a quick escape. As the rest of Pompey's army was left confused, Caesar urged his men to end the day by routing the rest of Pompey's troops and capturing the Pompeian camp. They complied with his wishes; after finishing off the remains of Pompey's men, they furiously attacked the camp walls. The Thracians and the other auxiliaries who were left in the Pompeian camp, in total seven cohorts, defended bravely but were not able to fend off the assault.

Caesar had won his greatest victory, claiming to have lost only about 200 soldiers and 30 centurions while putting the Optimates' losses at 60,000 men. These numbers seem suspiciously exaggerated; Appian suggests Caesarean losses of as many as 1,200 men and Pompeian losses of 6,000. In his history of the war, Caesar would praise his own men's discipline and experience, and remembered each of his centurions by name. He also questioned Pompey's decision not to charge.

Aftermath. Pompey, despairing of the defeat, fled with his advisors overseas to Mytilene and thence to Cilicia, where he held a council of war. At the same time, Cato and supporters at Dyrrachium attempted first to hand over command to Marcus Tullius Cicero, who refused, deciding instead to return to Italy. They then regrouped at Corcyra and went thence to Libya. Others, including Marcus Junius Brutus, sought Caesar's pardon; Brutus travelled over marshlands to Larissa, where Caesar welcomed him graciously in his camp.
Pompey's council of war decided to flee to Egypt, which had in the previous year supplied him with military aid. In the aftermath of the battle, Caesar captured Pompey's camp and burned Pompey's correspondence. He then announced that he would forgive all who asked for mercy. Pompeian naval forces in the Adriatic and Italy mostly withdrew or surrendered. Hearing of Pompey's flight to Egypt, Caesar set off in hot pursuit, first landing in Asia and reaching Alexandria on 2 October 48 BC, where he learnt of Pompey's murder and became embroiled in a dynastic dispute between Ptolemy XIII and Cleopatra.

Importance. Paul K. Davis wrote that "Caesar's victory took him to the pinnacle of power, effectively ending the Republic." The battle itself did not end the civil war, but it was decisive and gave Caesar a much-needed boost in legitimacy. Until then, much of the Roman world outside Italy supported Pompey and his allies due to the extensive list of clients he held in all corners of the Republic. After Pompey's defeat, former allies began to align themselves with Caesar: some came to believe the gods favored him, while for others it was simple self-preservation. The ancients put great stock in success as a sign of the gods' favor, especially success in the face of almost certain defeat, as Caesar experienced at Pharsalus. This allowed Caesar to parlay this single victory into a huge network of willing clients, better securing his hold on power and forcing the Optimates into near exile in search of allies to continue the fight against him.

In popular culture. The battle has given its name to various artistic, geographical, and business concerns. In Alexandre Dumas's "The Three Musketeers", the author makes reference to Caesar's purported order that his men try to cut the faces of their opponents, their vanity supposedly being of more value to them than their lives. In Mankiewicz's 1963 film "Cleopatra", the immediate aftermath of Pharsalus is used as an opening scene to set the action in motion.
4009
35936988
https://en.wikipedia.org/wiki?curid=4009
Bigfoot
Bigfoot, also commonly referred to as Sasquatch, is a large, hairy mythical creature said to inhabit forests in North America, particularly in the Pacific Northwest. Bigfoot is featured in both American and Canadian folklore, and since the mid-20th century has become a cultural icon, permeating popular culture and becoming the subject of its own distinct subculture. Enthusiasts of Bigfoot, such as those within the pseudoscience of cryptozoology, have offered various forms of dubious evidence to support Bigfoot's existence, including anecdotal claims of sightings as well as supposed photographs, video and audio recordings, hair samples, and casts of large footprints. However, the evidence is a combination of folklore, misidentification and hoax, and the creature is not a living animal. Folklorists trace the phenomenon of Bigfoot to a combination of factors and sources, including the European wild man figure, folk tales, and indigenous cultures. Examples of similar folk tales of wild, hair-covered humanoids exist throughout the world, such as the Skunk ape of the southeastern United States, the Almas, Yeren, and Yeti in Asia, the Australian Yowie, and creatures in the mythologies of indigenous people. Wishful thinking, a cultural increase in environmental concerns, and overall societal awareness of the subject have been cited as additional factors.

Description. Bigfoot is often described as a large, muscular, and bipedal human- or ape-like creature covered in black, dark brown, or dark reddish hair. Anecdotal descriptions generally estimate a height well above that of a human, with some accounts describing the creatures as exceptionally tall. Some alleged observations describe Bigfoot as more human than ape, particularly in regard to the face. In 1971, multiple people in The Dalles, Oregon, filed a police report describing an "overgrown ape", and one of the men claimed to have sighted the creature in the scope of his rifle but could not bring himself to shoot it because "it looked more human than animal". Common descriptions include broad shoulders, no visible neck, and long arms, which many skeptics attribute to misidentification of a bear standing upright. Some alleged nighttime sightings have stated the creature's eyes "glowed" yellow or red. However, eyeshine is not present in humans or any other known great apes, and so proposed explanations for observable eyeshine off the ground in the forest include owls, raccoons, or opossums perched in foliage. Michael Rugg, the owner of the Bigfoot Discovery Museum, claims to have smelled Bigfoot, stating, "Imagine a skunk that had rolled around in dead animals and had hung around the garbage pits." The enormous footprints for which the creature is named are claimed to be far larger than a human's. Some footprint casts have also contained claw marks, making it likely that they came from known animals such as bears, which have five toes and claws.

History. Folklore and early records. Ecologist Robert Pyle argues that most cultures have accounts of human-like giants in their folk history, expressing a need for "some larger-than-life creature". Each language had its name for the creature featured in the local version of such legends. Many names mean something like "wild man" or "hairy man", although other names described common actions that it was said to perform, such as eating clams or shaking trees.
European folklore traditionally had many instances of the "wild man of the woods", or "wild people", often described as "a naked creature covered in hair, with only the face, feet and hands (and in some cases the knees, elbows, or breasts) remaining bare". These European wild people ranged from human hermits to human-like monsters. As Europeans migrated to North America, myths of the "wild people" persisted, with documented sightings of "wild people" reported in what is now New York state and Pennsylvania; these traditions are examined in a 2007 paper titled "Images of the Wildman Inside and Outside Europe".

Many of the indigenous cultures across the North American continent include tales of mysterious hair-covered creatures living in forests, and according to anthropologist David Daegling, these legends existed long before contemporary reports of the creature described as Bigfoot. These stories differed in their details regionally and between families in the same community and are particularly prevalent in the Pacific Northwest. Chief Mischelle of the Nlaka'pamux at Lytton, British Columbia, told such a story to Charles Hill-Tout in 1898. On the Tule River Indian Reservation, petroglyphs created by a tribe of Yokuts at a site called Painted Rock are alleged by Kathy Moskowitz Strain, author of the 2008 book "Giants, Cannibals, Monsters: Bigfoot in Native Culture", to depict a group of Bigfoots called "the Family". The largest glyph is called "Hairy Man" (also known as "Mayak Datat"), and the glyphs are estimated to be 1,000 years old. According to the Tulare County Board of Education in 1975, "Big Foot, the Hairy Man, was a creature that was like a great big giant with long, shaggy hair. His long shaggy hair made him look like a big animal. He was good in a way, because he ate the animals that might harm people", and Yokuts parents warned their children not to venture near the river at night or they may encounter the creature.

16th-century Spanish explorers and Mexican settlers told tales of "los Vigilantes Oscuros", or "Dark Watchers", large creatures alleged to stalk their camps at night. In the region that is now Mississippi, a Jesuit priest living with the Natchez in 1721 reported stories of hairy creatures in the forest known to scream loudly and steal livestock. In 1929, Indian agent and teacher J.W. Burns, who lived and worked with the Sts'ailes Nation (then called the Chehalis First Nation), published a collection of stories titled "Introducing B.C.'s Hairy Giants: A collection of strange tales about British Columbia's wild men as told by those who say they have seen them" in Maclean's magazine. The stories offered various anecdotal reports of wild people, including an encounter a tribal member had with a hairy wild woman who could speak the language of the Douglas First Nation. Burns coined the term "Sasquatch", believed to be the anglicized version of "sasq'ets" (sas-kets), roughly translating to "hairy man" in the Halq'emeylem language. Burns describes the Sasquatch as "a tribe of hairy people whom they claim have always lived in the mountains – in tunnels and caves". The folklore of the Cherokee includes tales of the "Tsul 'Kalu", described as "slant-eyed giants" residing in the Appalachian Mountains and sometimes associated with Bigfoot. Members of the Lummi tell tales about creatures known as "Ts'emekwes".
The stories are similar to each other in the general descriptions of "Ts'emekwes", but details differed among various family accounts concerning the creature's diet and activities. Some regional versions tell of more threatening creatures: the "stiyaha" or "kwi-kwiyai" were a nocturnal race, and children were warned against saying the names so that the "monsters" would not come and carry them off to be killed. The Iroquois tell of an aggressive, hair-covered giant with rock-hard skin known as the "Ot ne yar heh" or "Stone Giant", more commonly referred to as the "Genoskwa". In 1847, Paul Kane reported stories by the natives about "skoocooms", a race of cannibalistic wild men living on the peak of Mount St. Helens. U.S. President Theodore Roosevelt, in his 1893 book "The Wilderness Hunter", writes of a story he was told by an elderly mountain man named Bauman in which a foul-smelling, bipedal creature ransacked his beaver trapping camp, stalked him, and later became hostile when it fatally broke his companion's neck. Roosevelt notes that Bauman appeared fearful while telling the story, but suggested that the trapper's German ancestry may have influenced the tale. The Alutiiq of the Kenai Peninsula in Alaska tell of the "Nantinaq", a Bigfoot-like creature. This folklore was featured in the Discovery+ television series "Alaskan Killer Bigfoot", which claims the "Nantinaq" was responsible for the population decrease of Portlock in the 1940s. Less menacing versions have been recorded, such as one by Reverend Elkanah Walker in 1840. Walker was a Protestant missionary who recorded stories of giants among the natives living near Spokane, Washington. These giants were said to live on and around the peaks of the nearby mountains, stealing salmon from the fishermen's nets.

Ape Canyon incident. On July 16, 1924, an article in "The Oregonian" made national news when a story was published describing a conflict between a group of gold prospectors and a group of "ape-men" in a gorge near Mount St. Helens. The prospectors reported encountering "gorilla men" near their remote cabin. One of the men, Fred Beck, indicated that he shot one of the creatures with a rifle. That night, they reported coming under attack by the creatures, who were said to have thrown large rocks at the cabin, damaging the roof and knocking Beck unconscious. The men fled the area the following morning. The U.S. Forest Service investigated the site of the alleged incident. The investigators found no compelling evidence of the event and concluded it was likely a fabrication. Stories of large, hair-covered bipedal ape-men or "mountain devils" had been a persistent piece of folklore in the area for centuries prior to the alleged incident. Today, the area is known as Ape Canyon and is cemented within Bigfoot-related folklore.

Origin of the "Bigfoot" name. Jerry Crew and Andrew Genzoli. In 1958, Jerry Crew, a bulldozer operator for a logging company in Humboldt County, California, discovered a set of large, human-like footprints sunk deep within the mud in the Six Rivers National Forest. When he informed his coworkers, many claimed to have seen similar tracks on previous job sites and told of odd incidents, such as a heavy oil drum having been moved without explanation. The logging company men soon began using the word "Bigfoot" to describe the apparent culprit. Crew and others initially believed someone was playing a prank on them.
After observing more of these massive footprints, he contacted reporter Andrew Genzoli of the "Humboldt Times" newspaper. Genzoli interviewed lumber workers and wrote articles about the mysterious footprints, introducing the name "Bigfoot" in relation to the tracks and the local tales of large, hairy wild men. A plaster cast was made of the footprints, and Crew appeared, holding one of the casts, on the front page of the newspaper on October 6, 1958. The story spread rapidly as Genzoli began to receive correspondence from major media outlets, including the "New York Times" and "Los Angeles Times". As a result, the term Bigfoot became widespread as a reference to an apparently large, unknown creature leaving massive footprints in Northern California.

Ray Wallace and Rant Mullens. In 2002, the family of Jerry Crew's deceased coworker Ray Wallace revealed a collection of large, carved wooden feet stored in his basement. They stated that Wallace had been secretly making the footprints and was responsible for the tracks discovered by Crew. Wallace was inspired by another hoaxer, Rant Mullens, who revealed information about his hoaxes in 1982. In the 1930s in Toledo, Washington, Mullens and a group of other foresters carved pairs of large wooden feet and used them to create footprints in the mud to scare huckleberry pickers in the Gifford Pinchot National Forest. The group also claimed responsibility for hoaxing the alleged Ape Canyon incident of 1924. Mullens and the group of foresters began referring to themselves as the St. Helens Apes and would later have a cave dedicated to them. Wallace, also from Toledo, knew Mullens and stated he collaborated with him to obtain a pair of the large wooden feet, subsequently using them to create footprints on the 1958 construction site as a means to scare away potential thieves.

Other historical uses of "Bigfoot". In the 1830s, a Wyandot chief was nicknamed "Big Foot" due to his significant size, strength and large feet. Potawatomi Chief Maumksuck, known as Chief "Big Foot", is today synonymous with the area of Walworth County, Wisconsin, and has a state park and school named for him. William A. A. Wallace, a famous 19th-century Texas Ranger, was nicknamed "Bigfoot" due to his large feet and today has a town named for him: Bigfoot, Texas. Lakota leader Spotted Elk was also called "Chief Big Foot". In the late 19th and early 20th centuries, at least two enormous marauding grizzly bears were widely noted in the press and each nicknamed "Bigfoot". The first grizzly bear called "Bigfoot" was reportedly killed near Fresno, California, in 1895 after killing sheep for 15 years; his weight was estimated at 2,000 pounds (900 kg). The second was active in Idaho in the 1890s and 1900s between the Snake and Salmon rivers, and supernatural powers were attributed to it.

Regional and other names. Many regions throughout North America have their own names for Bigfoot. In Canada, the name "Sasquatch" is widely used in addition to Bigfoot. The United States uses both of these names but also has numerous names and descriptions of the creatures depending on the region and area in which they are allegedly sighted.
These include the "Skunk ape" in Florida and other southern states, the "Ohio Grassman" in Ohio, "Fouke Monster" in Arkansas, "Wood Booger" in Virginia, the "Monster of Whitehall" in Whitehall, New York, "Momo" in Missouri, "Honey Island Swamp Monster" in Louisiana, "Dewey Lake Monster" in Michigan, "Mogollon Monster" in Arizona, the "Big Muddy Monster" in southern Illinois, and "The Old Men of the Mountain" in West Virginia. The term "Wood Ape" is also used by some as a means to deviate from the perceived mythical connotation surrounding the name "Bigfoot". Other names include "Bushman", "Treeman", and "Wildman". Patterson-Gimlin film. On October 20, 1967, Bigfoot enthusiast Roger Patterson and his partner Robert "Bob" Gimlin were filming a Bigfoot docudrama in an area called Bluff Creek in Northern California. The pair claimed they came upon a Bigfoot and filmed the encounter. The 59.5-second-long video, dubbed the "Patterson-Gimlin film" (PGF), has become iconic in popular culture and Bigfoot-related history and lore. The PGF continues to be a highly scrutinized, analyzed, and debated subject. Academic experts from related fields have typically judged the film as providing no supportive data of any scientific value, with perhaps the most common proposed explanation being that it was a hoax. Proposed explanations. Various explanations have been suggested for sightings and to offer conjecture on what existing animal has been misidentified in supposed sightings of Bigfoot. Scientists typically attribute sightings to hoaxes or misidentifications of known animals and their tracks, particularly black bears. Misidentification. Bears. Scientists theorize that mistaken identification of American black bears as Bigfoot are a likely explanation for most reported sightings, particularly when observers view a subject from afar, are in dense foliage, or there are poor lighting conditions. Additionally, black bears have been observed and recorded walking upright, often as the result of an injury. While upright, adult black bears stand roughly , and grizzly bears roughly . According to data scientist Floe Foxon, more people report seeing Bigfoot in areas with documented black bear populations. Foxon concludes, "If bigfoot is there, it may be many bears". Foxon acknowledges that alleged Bigfoot sightings have been reported in areas with minimal or no known black bear populations. She states, "Although this may be interpreted as evidence for the existence of an unknown hominid in North America, it is also explained by misidentification of other animals (including humans), among other possibilities". Escaped apes. Some have proposed that sightings of Bigfoot may simply be people observing and misidentifying known great apes such as chimpanzees, gorillas, and orangutans that have escaped from captivity such as zoos, circuses, and exotic pets belonging to private owners. This explanation is often proposed in relation to the Skunk ape, as some scientists argue the humid subtropical climate of the southeastern United States could potentially support a population of escaped apes. Humans. Humans have been mistaken for Bigfoot, with some incidents leading to injuries. In 2013, a 21-year-old man in Oklahoma was arrested after he told law enforcement he accidentally shot his friend in the back while their group was allegedly hunting for Bigfoot. In 2017, a shamanist wearing clothing made of animal furs was vacationing in a North Carolina forest when local reports of alleged Bigfoot sightings flooded in. 
The Greenville Police Department issued a public notice not to shoot Bigfoot for fear of mistakenly injuring or killing someone in a fur suit. In 2018, a person was shot multiple times by a hunter near Helena, Montana, who claimed he mistook him for a Bigfoot. Additionally, some have pointed to feral humans or hermits living in the wilderness as another explanation for alleged Bigfoot sightings. One story, the Wild Man of the Navidad, tells of a wild ape-man who roamed the wilderness of eastern Texas in the mid-19th century, stealing food and goods from residents. A search party allegedly captured an escaped African slave, to whom the story was later attributed. According to Randy Fisher, the state of Washington's veterans' affairs director, several psychologically traumatized American Vietnam veterans were living in remote wooded areas of the state during the 1980s.

Pareidolia. Some have proposed that pareidolia may explain Bigfoot sightings, specifically the tendency to observe human-like faces and figures within the natural environment. Photos and videos of poor quality alleged to depict Bigfoots are often attributed to this phenomenon and commonly referred to as "Blobsquatch".

Misidentified vocalizations. The majority of mainstream scientists maintain that the sounds often attributed to Bigfoot are either hoaxes, anthropomorphization, or misidentified calls produced by known animals such as owls, wolves, coyotes, and foxes.

Hoaxes. Both Bigfoot believers and non-believers agree that many reported sightings are hoaxes.

"Gigantopithecus". Bigfoot proponents Grover Krantz and Geoffrey H. Bourne both believed that Bigfoot could be a relict population of the extinct southeast Asian ape species "Gigantopithecus blacki". According to Bourne, "G. blacki" may have followed the many other species of animals that migrated across the Bering land bridge to the Americas. To date, no "Gigantopithecus" fossils have been found in the Americas. In Asia, the only recovered fossils have been of mandibles and teeth, leaving uncertainty about "G. blacki"'s locomotion. Krantz argued that "G. blacki" could have been bipedal, based on his extrapolation from the shape of its mandible. However, the relevant part of the mandible is not present in any fossils. The consensus view is that "G. blacki" was quadrupedal, as its enormous mass would have made it difficult for it to adopt a bipedal gait. Anthropologist Matt Cartmill criticizes the "G. blacki" hypothesis: The trouble with this account is that "Gigantopithecus" was not a hominin and maybe not even a crown group hominoid; yet the physical evidence implies that Bigfoot is an upright biped with buttocks and a long, stout, permanently adducted hallux. These are hominin autapomorphies, not found in other mammals or other bipeds. It seems unlikely that "Gigantopithecus" would have evolved these uniquely hominin traits in parallel. Paleoanthropologist Bernard G. Campbell writes: "That "Gigantopithecus" is in fact extinct has been questioned by those who believe it survives as the Yeti of the Himalayas and the Sasquatch of the north-west American coast. But the evidence for these creatures is not convincing."

Extinct hominidae. Primatologist John R. Napier and anthropologist Gordon Strasenburg have suggested a species of "Paranthropus" as a possible candidate for Bigfoot's identity, such as "Paranthropus robustus" with its gorilla-like crested skull and bipedal gait – despite the fact that fossils of "Paranthropus" are found only in Africa.
Michael Rugg of the Bigfoot Discovery Museum presented a comparison between human, "Gigantopithecus", and "Meganthropus" skulls (reconstructions made by Grover Krantz) in episodes 131 and 132 of the Bigfoot Discovery Museum Show. Bigfoot enthusiasts who think Bigfoot may be the "missing link" between apes and humans have promoted the idea that Bigfoot is a descendant of "Gigantopithecus blacki", but that ape's lineage diverged from orangutans around 12 million years ago and is not closely related to humans. Some suggest Neanderthals, "Homo erectus", or "Homo heidelbergensis" as the creature's identity, but, as with other great apes, no remains of any of those species have been found in the Americas.

Scientific view. Expert consensus is that allegations of the existence of Bigfoot are not credible. Belief in the existence of such a large, ape-like creature is more often attributed to hoaxes, confusion, or delusion rather than to sightings of a genuine creature. In a 1996 "USA Today" article, Washington State zoologist John Crane said, "There is no such thing as Bigfoot. No data other than material that's clearly been fabricated has ever been presented." The author of one review article states that, in their opinion, it is impossible even to consider cryptozoology a science if it continues to consider Bigfoot seriously. As with other similar beings, climate and food supply issues would make such a creature's survival in reported habitats unlikely. Bigfoot is alleged to live in regions unusual for a large, nonhuman primate, i.e., temperate latitudes in the northern hemisphere; all recognized nonhuman apes are found in the tropics of Africa and Asia. Great apes have not been found in the fossil record in the Americas, and no Bigfoot remains are known to have been found. Phillips Stevens, a cultural anthropologist at the University at Buffalo, has summarized this scientific consensus. In the 1970s, when Bigfoot "experts" were frequently given high-profile media coverage, McLeod writes that the scientific community generally avoided lending credence to such fringe theories by refusing even to debate them.

Primatologist Jane Goodall was asked for her personal opinion of Bigfoot in a 2002 interview on National Public Radio's "Science Friday". Goodall responded, saying, "Well, now you will be amazed when I tell you that I'm sure that they exist." She later added, "Well, I'm a romantic, so I always wanted them to exist," and "Of course, the big, the big criticism of all this is, "Where is the body?" You know, why isn't there a body? I can't answer that, and maybe they don't exist, but I want them to." In 2012, when asked again by the "Huffington Post", Goodall said "I'm fascinated and would actually love them to exist," adding, "Of course, it's strange that there has never been a single authentic hide or hair of the Bigfoot, but I've read all the accounts." Paleontologist and author Darren Naish states in a 2016 article for "Scientific American" that if "Bigfoot" existed, an abundance of evidence would also exist that cannot be found anywhere today, making the existence of such a creature exceedingly unlikely; he summarizes the kinds of evidence that would exist if the creature itself did.

Researchers. Ivan T. Sanderson and Bernard Heuvelmans, founders of the study of cryptozoology, spent parts of their careers searching for Bigfoot. Later scientists who researched the topic included Jason Jarvis, Carleton S.
Coon, George Allen Agogino and William Charles Osman Hill, though they later stopped their research due to lack of evidence for the alleged creature. John Napier asserts that the scientific community's attitude towards Bigfoot stems primarily from insufficient evidence. Other scientists who have shown varying degrees of interest in the creature are Grover Krantz, Jeffrey Meldrum, John Bindernagel, David J. Daegling, George Schaller, Russell Mittermeier, Daris Swindler, Esteban Sarmiento, and Mireya Mayor.

Formal studies. One study was conducted by John Napier and published in his book "Bigfoot: The Yeti and Sasquatch in Myth and Reality" in 1973. Napier wrote that if a conclusion is to be reached based on scant extant "'hard' evidence," science must declare "Bigfoot does not exist." However, he found it difficult to entirely reject thousands of alleged tracks, "scattered over 125,000 square miles" (325,000 km²), or to dismiss all "the many hundreds" of eyewitness accounts. Napier concluded, "I am convinced that Sasquatch exists, but whether it is all it is cracked up to be is another matter altogether. There must be "something" in north-west America that needs explaining, and that something leaves man-like footprints." In 1974, the National Wildlife Federation funded a field study seeking Bigfoot evidence. No formal federation members were involved, and the study made no notable discoveries. Also in 1974, the now-defunct North American Wildlife Research Team constructed a "Bigfoot trap" in the Rogue River–Siskiyou National Forest. It was baited with animal carcasses and captured multiple bears, but no Bigfoot. Upkeep of the trap ended in the early 1980s, but in 2006 the United States Forest Service repaired the trap, which today is a tourist destination along the Collings Mountain hiking trail. Beginning in the late 1970s, physical anthropologist Grover Krantz published several articles and four book-length treatments of Bigfoot. However, his work was found to contain multiple scientific failings, including falling for hoaxes.

A study published in the "Journal of Biogeography" in 2009 by J.D. Lozier et al. used ecological niche modeling on reported sightings of Bigfoot, using their locations to infer preferred ecological parameters. They found a very close match with the ecological parameters of the American black bear. They also note that an upright bear looks much like Bigfoot's purported appearance and consider it highly improbable that two species should have very similar ecological preferences, concluding that Bigfoot sightings are likely misidentified sightings of black bears.

In the first systematic genetic analysis of 30 hair samples suspected to be from Bigfoot-like creatures, only one was found to be primate in origin, and that was identified as human. The analysis, a joint study by the University of Oxford and Lausanne's Cantonal Museum of Zoology published in the "Proceedings of the Royal Society B" in 2014, used a previously published cleaning method to remove all surface contamination from each sample; the ribosomal mitochondrial DNA 12S fragment was then sequenced and compared against GenBank to identify the species of origin. The samples submitted were from different parts of the world, including the United States, Russia, the Himalayas, and Sumatra. Other than the one sample of human origin, all but two were from common animals.
Black and brown bears accounted for most of the samples; other animals included cow, horse, dog/wolf/coyote, sheep, goat, deer, raccoon, porcupine, and tapir. The last two samples were thought to match a fossilized genetic sample of a 40,000-year-old polar bear of the Pleistocene epoch; a second test identified these hairs as being from a rare type of brown bear. In 2019, the FBI declassified an analysis it conducted on alleged Bigfoot hairs in 1976. Bigfoot researcher Peter Byrne sent the FBI 15 hairs attached to a small skin fragment and asked if the bureau could assist him in identifying them. Jay Cochran Jr., assistant director of the FBI's Scientific and Technical Services division, responded in 1977 that the hairs were of deer family origin. Claims. Claims about the origins and characteristics of Bigfoot vary. Thomas Sewid, a Bigfoot researcher and member of the Kwakwakaʼwakw tribe, claims, "They're just the other tribe. They're just big, hairy humans with nocturnal vision that choose not to have weapons or fire or permanent shelters". The subject of Bigfoot has also crossed over with other paranormal claims, including that Bigfoot, extraterrestrials, and UFOs are related or that Bigfoot are psychic, can shapeshift, are able to cross into different dimensions, or are completely supernatural in origin. Additionally, claims regarding Bigfoot have been associated with conspiracy theories, including a government cover-up. There have also been claims that Bigfoot is responsible for the disappearances of people in the wilderness, such as the 1969 disappearance of Dennis Martin in Great Smoky Mountains National Park. Additionally, there have been claims that Bigfoot has been responsible for vehicle accidents, vandalizing property, delaying construction, and killing people. In 2022, a man from Oklahoma claimed he killed his friend because he believed the friend had summoned Bigfoot and that he was going to be sacrificed to the creature. Sightings. According to "Live Science", there have been over 10,000 reported Bigfoot sightings in the continental United States. About one-third of all claims of Bigfoot sightings are located in the Pacific Northwest, with the remaining reports spread throughout the rest of North America. Most reports are considered mistakes or hoaxes, even by those researchers who claim Bigfoot exists. Sightings predominantly occur in the northwestern region of Washington state, Oregon, Northern California, and British Columbia. According to data collected from the Bigfoot Field Researchers Organization's (BFRO) Bigfoot sightings database in 2019, Washington has over 2,000 reported sightings, California over 1,600, Pennsylvania over 1,300, New York and Oregon over 1,000, and Texas just over 800. The debate over the legitimacy of Bigfoot sightings reached a peak in the 1970s, and Bigfoot has been regarded as the first widely popularized example of pseudoscience in American culture. Reports of alleged Bigfoot sightings are often featured in news stories throughout the United States. Alleged behavior. Some Bigfoot researchers allege that Bigfoot throws rocks as territorial displays and for communication. Other alleged behaviors include audible blows struck against trees, or "wood knocking", further alleged to be communicative. Skeptics argue that these behaviors are easily hoaxed. Additionally, structures of broken and twisted foliage seemingly placed in specific areas have been attributed by some to Bigfoot behavior.
In some reports, lodgepole pine and other small trees have been observed bent, uprooted, or stacked in woven and crisscrossed patterns, leading some to theorize that they are potential territorial markings. Some instances have also included entire deer skeletons suspended high in trees. Some researchers and enthusiasts believe Bigfoot construct teepee-like structures out of dead trees and foliage. In Washington state, a team of amateur Bigfoot researchers called the Olympic Project claimed to have discovered a collection of nests. The group brought in primatologists to study them, with the conclusion being that they appear to have been created by a primate. Jeremiah Byron, host of the "Bigfoot Society Podcast", believes Bigfoot are omnivores, stating, "They eat both plants and meat. I've seen accounts that they eat everything from berries, leaves, nuts, and fruit to salmon, rabbit, elk, and bear." Ronny Le Blanc, host of "Expedition Bigfoot" on the Travel Channel, indicated he has heard anecdotal reports of Bigfoot allegedly hunting and consuming deer. In the 2001 nature documentary "Great North", a dark bipedal figure was captured on film while the filmmakers were recording a herd of caribou. The footage has sparked debate, as some Bigfoot researchers claim the figure is a Bigfoot stalking the caribou. In 2016, Bigfoot researcher ThinkerThunker released a YouTube video in which he interviewed one of the "Great North" directors, William Reeve, who claimed the figure could not have been a human and was possibly a bear, although he and his crew denied seeing any bears while filming. Some Bigfoot researchers have reported the creatures moving or taking possession of intentional "gifts" left by humans such as food and jewelry, and leaving items in their places such as rocks and twigs. Many alleged sightings are reported to occur at night, leading some cryptozoologists to hypothesize that Bigfoot may possess nocturnal tendencies. However, experts find such behavior untenable in a supposed ape- or human-like creature, as all known apes, including humans, are diurnal, with only lesser primates exhibiting nocturnality. Most anecdotal sightings describe the creature as solitary, although some reports have described groups allegedly observed together. Alleged vocalizations. Alleged vocalizations such as howls, screams, moans, grunts, whistles, and even a form of supposed language have been reported and allegedly recorded. Some of these alleged vocalization recordings have been analyzed by individuals such as retired U.S. Navy cryptologic linguist Scott Nelson. He analyzed audio recordings from the early 1970s, said to have been recorded in the Sierra Nevada mountains and dubbed the "Sierra Sounds", and stated, "It is definitely a language, it is definitely not human in origin, and it could not have been faked". Les Stroud has spoken of a strange vocalization he heard in the wilderness while filming "Survivorman" that he stated sounded primate in origin. A number of anecdotal reports of Bigfoot encounters describe witnesses becoming disoriented, dizzy, and anxious. Some Bigfoot researchers, such as paranormal author Nick Redfern, have proposed that Bigfoot may produce infrasound, which could explain reports of this nature. Alleged encounters. In Fouke, Arkansas, in 1971, a family reported that a large, hair-covered creature startled a woman after reaching through a window.
This alleged incident caused hysteria in the Fouke area and inspired the horror movie "The Legend of Boggy Creek" (1972). The report was later deemed a hoax. In 1974, the "New York Times" presented the dubious tale of Albert Ostman, a Canadian prospector, who stated that he was kidnapped and held captive by a family of Bigfoot for six days in 1924. In 1994, former U.S. Forest Service ranger Paul Freeman, a Bigfoot researcher, videotaped an alleged Bigfoot he reportedly encountered in the Blue Mountains of Washington. The tape, often referred to as the "Freeman footage", continues to be scrutinized and its authenticity debated. Freeman had previously gained media recognition in the 1980s for documenting alleged Bigfoot tracks, claiming they possessed dermal ridges. On May 26, 1996, Lori Pate, who was on a camping trip near the Washington state-Canada border, videotaped a dark subject she reported seeing run across a field and claimed it was Bigfoot. The film, dubbed the "Memorial Day Bigfoot footage", is often depicted in Bigfoot-related media, most notably in a 2003 documentary. In his research, Daniel Perez of the "Skeptical Inquirer" concluded that the footage was likely a hoax perpetrated by a human in a gorilla costume. In 2018, Bigfoot researcher Claudia Ackley garnered international attention after filing a lawsuit against the California Department of Fish and Wildlife (CDFW) for failing to acknowledge the existence of Bigfoot. Ackley claimed to have encountered and filmed a Bigfoot in the San Bernardino Mountains in 2017, describing what she saw as a "Neanderthal man with a lot of hair". Ackley contacted emergency services as well as the CDFW; a state investigator concluded that she had encountered a bear. Until her death in 2023, Ackley also ran an online support group for individuals claiming to experience psychological trauma as a result of alleged Bigfoot encounters. In October 2023, a woman named Shannon Parker uploaded a video of an alleged Bigfoot to Facebook. The footage went viral on social media and was shared via various news publications. Parker reported that she and others observed the subject while riding a train on the Durango and Silverton Narrow Gauge Railroad in the San Juan Mountains in Colorado. The authenticity of the video was debated across social media. Skeptics on Reddit speculated it was a publicity hoax perpetrated by an RV company located in the area, Sasquatch Expedition Campers. The company denied the allegations. Anthropologist Jeffrey Meldrum notes that any large predatory animal is potentially dangerous, especially if provoked, but indicates that most anecdotal accounts of Bigfoot encounters describe the creatures hiding or fleeing from people. The 2021 Hulu documentary series "Sasquatch" describes marijuana farmers telling stories of Bigfoots harassing and killing people within the Emerald Triangle region from the 1970s through the 1990s, specifically the alleged murder of three migrant workers in 1993. Investigative journalist David Holthouse attributes the stories to illegal drug operations using the local Bigfoot lore to scare away the competition, specifically superstitious immigrants, and attributes the high rate of murder and missing persons in the area to human actions. Skeptics argue that many of these alleged encounters are easily hoaxed, the result of misidentification, or outright fabrications. Evidence claims.
A body print taken in 2000 in the Gifford Pinchot National Forest in Washington state, dubbed the Skookum cast, is believed by some to have been made by a Bigfoot that sat down in the mud to eat fruit left out by researchers during the filming of an episode of the "Animal X" television show. Skeptics believe the cast to have been made by a known animal such as an elk. Alleged Bigfoot footprints are often suggested by Bigfoot enthusiasts as evidence for the creature's existence. Anthropologist Jeffrey Meldrum, who specializes in the study of primate bipedalism, possesses over 300 footprint casts that he maintains could not be made by wood carvings or human feet based on their anatomy, but instead are evidence of a large, non-human primate present today in North America. In 2005, Matt Crowley obtained a copy of an alleged Bigfoot footprint cast, called the "Onion Mountain Cast", and was able to painstakingly recreate the dermal ridges. Michael Dennett of the "Skeptical Inquirer" spoke to police investigator and primate fingerprint expert Jimmy Chilcutt in 2006 for comment on the replica, and Chilcutt stated, "Matt has shown artifacts can be created, at least under laboratory conditions, and field researchers need to take precautions". Chilcutt had previously stated that some of the alleged Bigfoot footprint plaster casts he examined were genuine due to the presence of "unique dermal ridges". Dennett states that Chilcutt published nothing to substantiate his claims, nor had anyone else published anything on that topic, with Chilcutt making his statements solely through a posting on the Internet. Dennett adds that Chilcutt's statements had been reviewed only by what Dennett describes as "other Bigfoot enthusiasts". In 2007, the Bigfoot Field Researchers Organization claimed to have photographs depicting a juvenile Bigfoot allegedly captured on a camera trap in the Allegheny National Forest. The Pennsylvania Game Commission, however, stated that the photos were of a bear with mange, though it was unable to locate the animal. Scientist Vanessa Woods, after estimating the arm and torso lengths of the subject in the photo, concluded it was more comparable to a chimpanzee. In 2015, Centralia College professor Michael Townsend claimed to have discovered prey bones with "human-like" bite impressions on the south side of Mount St. Helens. Townsend claimed the bites were over two times wider than a human bite, and that he and two of his students also found 16-inch footprints in the area. Melba Ketchum press release. After what "The Huffington Post" described as "a five-year study of purported Bigfoot (also known as Sasquatch) DNA samples", but prior to peer review of the work, DNA Diagnostics, a veterinary laboratory headed by veterinarian Melba Ketchum, issued a press release on November 24, 2012, claiming that they had found proof that the Sasquatch "is a human relative that arose approximately 15,000 years ago as a hybrid cross of modern "Homo sapiens" with an unknown primate species." Ketchum called for this to be recognized officially, saying that "Government at all levels must recognize them as an indigenous people and immediately protect their human and Constitutional rights against those who would see in their physical and cultural differences a 'license' to hunt, trap, or kill them."
Failing to find a scientific journal that would publish the results, Ketchum announced on February 13, 2013, that the research had been published in the "DeNovo Journal of Science". The journal was found to be a website, registered anonymously only nine days before the paper was announced, whose first and only "journal" issue contained nothing but the "Sasquatch" article. Shortly after publication, the paper was analyzed and outlined by Sharon Hill of Doubtful News for the Committee for Skeptical Inquiry. Hill reported on the questionable journal, mismanaged DNA testing, and poor quality of the paper, stating that "The few experienced geneticists who viewed the paper reported a dismal opinion of it noting it made little sense." "The Scientist" magazine also analyzed and reported on the paper. In popular culture. Bigfoot has a demonstrable impact on popular culture, and has been compared to Michael Jordan as a cultural icon. In 2018, "Smithsonian" magazine declared, "Interest in the existence of the creature is at an all-time high". A poll in 2020 suggested that about 1 in 10 American adults believe Bigfoot to be "a real, living creature". According to a May 2023 data study, the terms "Bigfoot" and "Sasquatch" are entered into internet search engines over 200,000 times annually in the United States, and over 660,000 times worldwide. The creature has inspired the naming of a medical company, music festival, amusement park ride, monster truck, and a Marvel Comics superhero. Some commentators have been critical of Bigfoot's rise to fame, arguing that the appearance of the creatures in cartoons, reality shows, and advertisements trivializes the potential validity of serious scientific research into their supposed existence. Others propose that society's fascination with the concept of Bigfoot stems from human interest in mystery, the paranormal, and loneliness. In a 2022 article discussing recent Bigfoot sightings, journalist John Keilman of the "Chicago Tribune" states, "As UFOs have gained newfound respect, becoming the subject of a Pentagon investigative panel, the alleged Bigfoot sighting is a reminder that other paranormal phenomena are still out there, entrancing true believers and amusing skeptics". In the Pacific Northwest. Bigfoot and its likeness are symbolic of the Pacific Northwest and its culture, including the Cascadia movement. Two National Basketball Association teams located in the Pacific Northwest have used Bigfoot as a mascot: Squatch of the now-defunct Seattle SuperSonics from 1993 until 2008, and Douglas Fur of the Portland Trail Blazers. Legend the Bigfoot was selected as the official mascot for the 2022 World Athletics Championships held in Eugene, Oregon. In 2024, the United Soccer League (USL) announced that the Bigfoot Football Club, based in Maple Valley, Washington, will begin competing in 2025. There are laws and ordinances regarding harming or killing Bigfoot in the state of Washington. In 1969, Skamania County passed a law that criminalized killing a Bigfoot, making the act a felony punishable upon conviction by a fine of up to $10,000 or by five years' imprisonment. In 1984, the law was amended to make the crime a misdemeanor, and the entire county was declared a "Sasquatch refuge". Whatcom County followed suit in 1991, declaring the county a "Sasquatch Protection and Refuge Area".
In 2022, Grays Harbor County, Washington, passed a similar resolution after a local elementary school in Hoquiam submitted a classroom project asking for a "Sasquatch Protection and Refuge Area" to be granted. In media and the arts. Bigfoot is featured in various films. It is often depicted as the antagonist in low-budget monster movies, but has also been depicted as intelligent and friendly, with a notable example being "Harry and the Hendersons" (1987). "Sasquatch Sunset" (2024) depicts a family of Bigfoot engaging in alleged behaviors reported by Bigfoot enthusiasts and researchers. Bigfoot is also featured on television, particularly as a subject of reality and paranormal series, with notable examples being "Finding Bigfoot" (2011), "Mountain Monsters" (2013), "10 Million Dollar Bigfoot Bounty" (2014), "Expedition Bigfoot" (2019), and "Alaskan Killer Bigfoot" (2021). Dean Mitchell is a saxophonist notable for musical performances in a Bigfoot costume, going by the stage name of Saxsquatch. In advocacy. Bigfoot has been used in environmental protection and nature conservation campaigns and advocacy, including, albeit comedically, a U.S. Forest Service campaign in 2015. Bigfoot is a mascot for the U.S. Fish & Wildlife Service's "Leave No Trace Principles", a national educational program to inform the public about reducing the damage caused by outdoor activities. The 360-mile "Bigfoot Trail" in northern California is named for the creature. Environmental organization Oregon Wild also uses Bigfoot to promote its nature advocacy, stating, "If there really is a Sasquatch out there, there is definitely more than one, and in order to maintain a healthy breeding population a species of hominid (as Sasquatch is assumed to be) would need extremely vast expanses of uninterrupted forest. Remote Wilderness areas would be prime habitat for Sasquatch, so if there are any out there to protect, making sure Oregon's forests get the protections they need to stay untrammeled is of the utmost importance". In 2024, Bigfoot was used as a mascot for a government recycling campaign in Whitfield County, Georgia. In the 2018 podcast "Wild Thing", creator and journalist Laura Krantz argues that the concept of Bigfoot can be an important part of environmental interest and protection, stating, "If you look at it from the angle that Bigfoot is a creature that has eluded capture or hasn't left any concrete evidence behind, then you just have a group of people who are curious about the environment and want to know more about it, which isn't that far off from what naturalists have done for centuries". During the onset of the COVID-19 pandemic in 2020, Bigfoot became a part of many North American social distancing advocacy campaigns, with the creature being referred to as the "Social Distancing Champion" and becoming the subject of various internet memes related to the pandemic. Bigfoot subculture. There is an entire subculture surrounding Bigfoot. The act of searching for the creatures is often referred to as "Squatching", "Squatchin'", or "Squatch'n", popularized by the Animal Planet series "Finding Bigfoot". Bigfoot researchers and believers are often called "Bigfooters" or "Squatchers". 20th-century Bigfooters Peter C. Byrne, René Dahinden, John Green and Grover Krantz have been dubbed by cryptozoologist and author Loren Coleman as the "Four Horsemen of Sasquatchery".
The 2024 book "The Secret History of Bigfoot" by journalist John O'Connor explores this subculture of Bigfooters, particularly the wide assortment of beliefs enthusiasts of the subject hold. In 2004, David Fahrenthold of "The Washington Post" published an article describing a feud between Bigfoot researchers in the eastern and western United States. Fahrenthold writes, "On the one hand, East Coast Bigfooters say they have to fight discrimination from Western counterparts who think the creature does not live east of the Rocky Mountains. On the other, they have to deal with reports from a more urban population, which includes some who are unfamiliar with wildlife and apt to mistake a black bear for the missing link". People have been injured or killed while searching for Bigfoot in the wilderness. On December 28, 2024, two men were found deceased in the Gifford Pinchot National Forest in Washington state after setting off on Christmas to search for Bigfoot. Their disappearance prompted a large-scale search-and-rescue effort, with the Skamania County Sheriff's Office concluding they were likely not prepared for the inclement weather. October 20, the anniversary of the Patterson-Gimlin film recording, is considered by some enthusiasts to be "National Sasquatch Awareness Day". In 2015, World Champion taxidermist Ken Walker completed what he believes to be a lifelike Bigfoot model based on the subject in the Patterson–Gimlin film. He entered it in the 2015 World Taxidermy & Fish Carving Championships in Missouri and was the subject of Dan Wayne's 2019 documentary "Big Fur". Tourism and events. Bigfoot and related folklore have an impact on tourism. Willow Creek, California, considers itself the "Bigfoot Capital of the World". The Willow Creek Chamber of Commerce has hosted the "Bigfoot Daze" festival annually since the 1960s, drawing on the popularity of the local folklore, notably that of the Patterson-Gimlin film. Jefferson, Texas, proclaimed itself the "Bigfoot Capital of Texas" in 2018. The city has hosted the Texas Bigfoot Conference since 2000. In 2021, Oklahoma state representative Justin Humphrey, in an effort to bolster tourism, proposed an official Bigfoot hunting season, indicating that the Oklahoma Department of Wildlife Conservation would regulate permits and that the state would offer a $3 million bounty if such a creature were captured alive and unharmed. In 2024, Mayor Grant Nicely of Derry, Pennsylvania, declared Bigfoot the "official cryptid" of the borough and stated, "Willful harm or capture of the species will be punishable by law." Council Vice-president Nathan Bundy stated, "By proclaiming Bigfoot as our official cryptid and establishing Derry as a sanctuary, we are embracing our local folklore and the rich history that makes our community unique". Events such as conferences and festivals dedicated to Bigfoot draw thousands of attendees and contribute to the economies of the areas in which they are held. These events commonly include guest speakers, research and lore presentations, and sometimes live music, vendors, food trucks, and other activities such as costume contests and "Bigfoot howl" competitions. Some involve collaboration between local governments and corporations, such as the Smoky Mountain Bigfoot Festival in Townsend, Tennessee, which is sponsored by Monster Energy. The 2023 Bigfoot Festival in Marion, North Carolina, saw approximately 40,000 people in attendance, a large economic boost for the small town of fewer than 8,000 residents.
In February 2016, the University of New Mexico at Gallup held a two-day Bigfoot conference at a cost of $7,000 in university funds. Bigfoot is also featured in events alongside other famous cryptids such as the Loch Ness Monster, Mothman, and the Chupacabra. There are museums dedicated to Bigfoot. In 2019, Bigfoot researcher Cliff Barackman, notable for his role on "Finding Bigfoot", opened the North American Bigfoot Center in Boring, Oregon. In 2022, The Bigfoot Crossroads of America Museum and Research Center in Hastings, Nebraska, was selected for addition into the archives of the U.S. Library of Congress. The High Desert Museum in Bend, Oregon, features an exhibit called "Sensing Sasquatch", which presents the subject from an Indigenous point of view. According to Executive Director Dana Whitelaw, "Rather than the popular, mainstream view of Sasquatch, this exhibition shows Sasquatch as a protective entity for many Indigenous peoples of the High Desert. The exhibit reflects the reverence that Native peoples have for Sasquatch and will be centered on Indigenous art, voices and storytelling". Organizations. There are several organizations dedicated to Bigfoot. The oldest and largest is the Bigfoot Field Researchers Organization (BFRO). The BFRO also provides a free database to individuals and other organizations. Its website includes reports from across North America that have been investigated by BFRO researchers. Other similar organizations exist throughout many U.S. states, and their members come from a variety of backgrounds. The North American Wood Ape Conservancy (NAWAC), a nonprofit organization, states its mission is to "ultimately have the wood ape species documented, protected, and the land they inhabit protected." Author Mike Mays of NAWAC states, "If just anyone hauled in a Bigfoot carcass the blowback from animal rights groups and beyond would be ruinous".
4010
42151856
https://en.wikipedia.org/wiki?curid=4010
Bing Crosby
Harry Lillis "Bing" Crosby Jr. (May 3, 1903 – October 14, 1977) was an American singer, comedian, entertainer and actor. The first multimedia star, he was one of the most popular and influential musical artists of the 20th century worldwide. Crosby was a leader in record sales, network radio ratings, and motion picture grosses from 1926 to 1977. He was one of the first global cultural icons. Crosby made over 70 feature films and recorded more than 1,600 songs. Crosby's early career coincided with recording innovations that allowed him to develop an intimate singing style that influenced many male singers who followed, such as Frank Sinatra, Perry Como, Dean Martin, Dick Haymes, Elvis Presley, and John Lennon. "Yank" magazine said that Crosby was "the person who had done the most for the morale of overseas servicemen" during World War II. In 1948, American polls declared him the "most admired man alive", ahead of Jackie Robinson and Pope Pius XII. In 1948, "Music Digest" estimated that Crosby's recordings filled more than half of the 80,000 weekly hours allocated to recorded radio music in America. Crosby won the Academy Award for Best Actor for his performance in "Going My Way" (1944) and was nominated for its sequel, "The Bells of St. Mary's" (1945), opposite Ingrid Bergman, becoming the first of six actors to be nominated twice for playing the same character. Crosby was the number one box office attraction for five consecutive years from 1944 to 1948. At his screen apex in 1946, Crosby starred in three of the year's five highest-grossing films: "The Bells of St. Mary's", "Blue Skies", and "Road to Utopia". In 1963, he received the first Grammy Global Achievement Award. Crosby is one of 33 people to have three stars on the Hollywood Walk of Fame, in the categories of motion pictures, radio, and audio recording. He was also known for his collaborations with his friend Bob Hope, starring in the "Road to ..." films from 1940 to 1962. Crosby influenced the development of the post–World War II recording industry. After seeing a demonstration of a German broadcast-quality reel-to-reel tape recorder brought to the United States by John T. Mullin, Crosby invested $50,000 in the California electronics company Ampex to build copies. He then persuaded ABC to allow him to tape his shows and became the first performer to prerecord his radio shows and master his commercial recordings onto magnetic tape. Crosby has been associated with the Christmas season since he starred in Irving Berlin's musical film "Holiday Inn" (1942), in which he sang "White Christmas", and later starred in the film "White Christmas" (1954). Through audio recordings, Crosby produced his radio programs with the same directorial tools and craftsmanship (editing, retaking, rehearsal, time shifting) used in motion picture production, a practice that became the industry standard. In addition to his work with early audio tape recording, Crosby helped finance the development of videotape, bought television stations, bred racehorses, and co-owned the Pittsburgh Pirates baseball team, during which time the team won two World Series (1960 and 1971). Early life. Crosby was born on May 3, 1903, in Tacoma, Washington, in a house his father built at 1112 North J Street. Three years later, his family moved to Spokane in eastern Washington, where Crosby was raised. In 1913, his father built a house at 508 E. Sharp Avenue.
The house stands on the campus of Crosby's alma mater, Gonzaga University, as a museum housing over 200 artifacts from his life and career, including his Oscar. Crosby was the fourth of seven children: brothers Laurence Earl "Larry" (1895–1975), Everett Nathaniel (1896–1966), Edward John "Ted" (1900–1973), and George Robert "Bob" (1913–1993); and two sisters, Catherine Cordelia (1904–1974) and Mary Rose (1906–1990). His parents were Harry Lillis Crosby (1870–1950), a bookkeeper, and Catherine Helen "Kate" (née Harrigan; 1873–1964). His mother was a second-generation Irish-American. His father was of Scottish and English descent; an ancestor, Simon Crosby, emigrated from the Kingdom of England to New England in the 1630s during the Puritan migration to New England. Through another line, also on his father's side, Crosby is descended from "Mayflower" passenger William Brewster (c. 1567 – 1644). In 1917, Crosby took a summer job as property boy at Spokane's Auditorium, where he witnessed some of the acts of the day, including Al Jolson, who held Crosby spellbound with ad-libbing and parodies of Hawaiian songs. Crosby later described Jolson's delivery as "electric". Crosby graduated from Gonzaga High School in 1920 and enrolled at Gonzaga University. He attended Gonzaga for three years but did not earn a degree. As a freshman, Crosby played on the university's baseball team. The university granted him an honorary doctorate in 1937. Gonzaga University houses a large collection of photographs, correspondence, and other material related to Crosby. On November 8, 1937, after Lux Radio Theatre's adaptation of "She Loves Me Not", Joan Blondell asked Crosby on the air how he got his nickname. As it happens, the story he told was pure whimsy for dramatic effect; the Associated Press had reported as early as February 1932, as would later be confirmed by both Bing himself and his biographer Charles Thompson, that it was in fact a neighbor, Valentine Hobart, who circa 1910 had named him "Bingo from Bingville" after a comic feature in the local paper called "The Bingville Bugle" which the young Harry liked. In time, Bingo got shortened to Bing. Career. Early years. In 1923, Crosby was invited to join a new band composed of high-school students a few years younger than himself. Al and Miles Rinker (brothers of singer Mildred Bailey), James Heaton, Claire Pritchard and Robert Pritchard, along with drummer Crosby, formed the Musicaladers, who performed at dances both for high school students and club-goers. The group performed on Spokane radio station KHQ, but disbanded after two years. Crosby and Al Rinker obtained work at the Clemmer Theatre in Spokane (now known as the Bing Crosby Theater). On August 14, 1925, Crosby appeared at the Clemmer Theater as part of The Clemmer Trio (Frank McBride, Lloyd Grinnell and Harry Crosby), billed as being presented with special stage effects. Rinker played piano in the pit. They continued at the theater alongside the film of the week for a very successful 12 weeks. They were billed initially as The Clemmer Trio and later as The Clemmer Entertainers, depending on who performed. In October 1925, Crosby and Rinker decided to seek fame in California. They traveled to Los Angeles, where Bailey introduced them to her show business contacts. The Fanchon and Marco Time Agency hired them for 13 weeks for the revue "The Syncopation Idea", starting at the Boulevard Theater in Los Angeles and then on the Loew's circuit. They each earned $75 a week.
As minor parts of "The Syncopation Idea", Crosby and Rinker started to develop as entertainers. They had a lively style that was popular with college students. After "The Syncopation Idea" closed, they worked in the Will Morrissey Music Hall Revue. They honed their skills with Morrissey, and when they got a chance to present an independent act, they were spotted by a member of the Paul Whiteman organization. Whiteman needed something different to break up his musical selections, and Crosby and Rinker filled this requirement. After less than a year in show business, they were attached to one of the biggest names. Hired for $150 a week in 1926, they debuted with Whiteman on December 6 at the Tivoli Theatre in Chicago. Their first recording, in October 1926, was "I've Got the Girl" with Don Clark's Orchestra, but the Columbia-issued record was inadvertently recorded at a slow speed, which increased the singers' pitch when played at 78 rpm. Throughout his career, Crosby often credited Bailey for getting him his first important job in the entertainment business. The Rhythm Boys. Success with Whiteman was followed by disaster when they reached New York. Whiteman considered letting them go. However, the addition of pianist and aspiring songwriter Harry Barris made the difference, and The Rhythm Boys were born. The additional voice meant they could be heard more easily in large New York theaters. Crosby gained valuable experience on tour for a year with Whiteman and performing and recording with Bix Beiderbecke, Jack Teagarden, Tommy Dorsey, Jimmy Dorsey, Eddie Lang, and Hoagy Carmichael. Crosby matured as a performer, was in demand as a solo singer, and became the star attraction of the Rhythm Boys. In 1928, he had his first number one hit, a jazz-influenced rendition of "Ol' Man River". In 1930, the Rhythm Boys appeared in the film "King of Jazz" with Whiteman, but Crosby's growing dissatisfaction with Whiteman led to the Rhythm Boys leaving his organization. They joined the Gus Arnheim Orchestra, performing nightly in the Coconut Grove of the Ambassador Hotel. As Crosby sang with the Arnheim Orchestra, his solos began to steal the show, and the Rhythm Boys' act gradually became redundant. Harry Barris wrote several of Crosby's hits, including "At Your Command", "I Surrender Dear", and "Wrap Your Troubles in Dreams". When Mack Sennett signed Crosby to a solo film contract in 1931, a break with the Rhythm Boys became almost inevitable. Crosby married Dixie Lee in September 1930. After a threat of divorce in March 1931, he applied himself to his career. Success as a solo singer. On September 2, 1931, "15 Minutes with Bing Crosby", his nationwide solo radio debut, began broadcasting. The weekly broadcast made Crosby a hit. Before the end of the year, he had signed with both Brunswick Records and CBS Radio. "Out of Nowhere", "Just One More Chance", "At Your Command", and "I Found a Million Dollar Baby (in a Five and Ten Cent Store)" were among the best-selling songs of 1931. Ten of the top 50 songs of 1931 included Crosby with others or as a solo act. A "Battle of the Baritones" with singer Russ Columbo proved short-lived, replaced with the slogan "Bing Was King". Crosby played the lead in a series of musical comedy short films for Mack Sennett, signed with Paramount, and starred in his first full-length film, "The Big Broadcast" (1932), the first of 55 films in which he received top billing. Crosby would appear in almost 80 pictures.
He signed a contract with Jack Kapp's new record company, Decca, in late 1934. Crosby's first commercial sponsor on radio was Cremo Cigars, and his fame spread nationwide. After a long run in New York, Crosby went back to Hollywood to film "The Big Broadcast". His appearances, records, and radio work substantially increased his impact. The success of his first film brought Crosby a contract with Paramount, and he began a pattern of making three films a year. Crosby hosted his radio show for Woodbury Soap for two seasons while his live appearances dwindled. Crosby's records produced hits during the Depression, when sales were down. Audio engineer Steve Hoffman stated, "By the way, Bing actually saved the record business in 1934 when he agreed to support Decca founder Jack Kapp's crazy idea of lowering the price of singles from a dollar to 35 cents and getting a royalty for records sold instead of a flat fee. Bing's name and his artistry saved the recording industry. All the other artists signed to Decca after Bing did. Without him, Jack Kapp wouldn't have had a chance in hell of making Decca work and the Great Depression would have wiped out phonograph records for good." His first son, Gary, was born in 1933, with twin boys following in 1934. By 1936, Crosby replaced his former boss, Paul Whiteman, as host of the weekly NBC radio program "Kraft Music Hall", where he remained for the next decade. "Where the Blue of the Night (Meets the Gold of the Day)", with his trademark whistling, became his theme song and signature tune. Crosby's vocal style helped take popular singing beyond the "belting" associated with Al Jolson and Billy Murray, who had been obligated to reach the back seats in New York theaters without the aid of a microphone. As music critic Henry Pleasants noted in "The Great American Popular Singers", something new had entered American music, a style that might be called "singing in American" with conversational ease. This new sound led to the popular epithet "crooner". Crosby admired Louis Armstrong for his musical ability, and the trumpet maestro was a formative influence on Crosby's singing style. When the two met, they became friends. In 1936, Crosby exercised an option in his Paramount contract to regularly star in an out-of-house film. Signing an agreement with Columbia for a single motion picture, Crosby wanted Armstrong to appear in a screen adaptation of "The Peacock Feather" that eventually became "Pennies from Heaven". Crosby asked Harry Cohn, but Cohn had no desire to pay for the flight or to meet Armstrong's "crude, mob-linked but devoted manager, Joe Glaser". Crosby threatened to leave the film and refused to discuss the matter. Cohn gave in; Armstrong's musical scenes and comic dialogue extended his influence to the silver screen, creating more opportunities for him and other African Americans to appear in future films. Crosby also ensured behind the scenes that Armstrong received equal billing with his white co-stars. Armstrong appreciated Crosby's progressive attitudes on race, and often expressed gratitude for the role in later years. During World War II, Crosby made live appearances before American troops who had been fighting in the European Theater. He learned how to pronounce German from written scripts and read propaganda broadcasts intended for German forces. The nickname "Der Bingle" was common among Crosby's German listeners and came to be used by his English-speaking fans. In a poll of U.S.
troops at the close of World War II, Crosby topped the list as the person who had done the most for G.I. morale, ahead of President Franklin D. Roosevelt, General Dwight Eisenhower, and Bob Hope. The June 18, 1945, issue of "Life" magazine stated, "America's number one star, Bing Crosby, has won more fans, made more money than any entertainer in history. Today he is a kind of national institution." "In all, 60,000,000 Crosby discs have been marketed since he made his first record in 1931. His biggest best seller is 'White Christmas', 2,000,000 impressions of which have been sold in the U.S. and 250,000 in Great Britain." "Nine out of ten singers and bandleaders listen to Crosby's broadcasts each Thursday night and follow his lead. The day after he sings a song over the air—any song—some 50,000 copies of it are sold throughout the U.S. Time and again Crosby has taken some new or unknown ballad, has given it what is known in trade circles as the 'big goose' and made it a hit single-handed and overnight... Precisely what the future holds for Crosby neither his family nor his friends can conjecture. He has achieved greater popularity, made more money, attracted vaster audiences than any other entertainer in history. And his star is still in the ascendant. His contract with Decca runs until 1955. His contract with Paramount runs until 1954. Records which he made ten years ago are selling better than ever before. The nation's appetite for Crosby's voice and personality appears insatiable. To soldiers overseas and to foreigners he has become a kind of symbol of America, of the amiable, humorous citizen of a free land. Crosby, however, seldom bothers to contemplate his future. For one thing, he enjoys hearing himself sing, and if ever a day should dawn when the public wearies of him, he will complacently go right on singing—to himself." White Christmas. The biggest hit song of Crosby's career was his recording of Irving Berlin's "White Christmas", which Crosby introduced on a Christmas Day radio broadcast in 1941. A copy of the recording from the radio program is owned by the estate of Bing Crosby and was loaned to "CBS Sunday Morning" for its December 25, 2011, program. The song appeared in his films "Holiday Inn" (1942) and, a decade later, "White Christmas" (1954). Crosby's record hit the charts on October 3, 1942, and rose to number 1 on October 31, where it stayed for 11 weeks. A holiday perennial, the song was repeatedly re-released by Decca, charting another 16 times. It topped the charts again in 1945 and a third time in January 1947. The song remains the bestselling single of all time. Crosby's recording of "White Christmas" has sold over 50 million copies worldwide. His recording was so popular that Crosby was obliged to re-record it in 1947 using the same musicians and backup singers; the original 1942 master had become damaged due to its frequent use in pressing additional singles. In 1977, after Crosby died, the song was re-released and reached No. 5 in the UK Singles Chart. Crosby was dismissive of his role in the song's success, saying "a jackdaw with a cleft palate could have sung it successfully". Motion pictures.
In the wake of a solid decade of headlining mainly smash-hit musical comedy films in the 1930s, Crosby starred with Bob Hope and Dorothy Lamour in six of the seven "Road to" musical comedies between 1940 and 1962 (in "The Road to Hong Kong", Lamour was replaced by Joan Collins and limited to a lengthy cameo). The films cemented Crosby and Hope as an on-and-off duo, despite their never declaring themselves a "team" in the sense that Laurel and Hardy or Martin and Lewis (Dean Martin and Jerry Lewis) were teams. The series consists of "Road to Singapore" (1940), "Road to Zanzibar" (1941), "Road to Morocco" (1942), "Road to Utopia" (1946), "Road to Rio" (1947), "Road to Bali" (1952), and "The Road to Hong Kong" (1962). When they appeared solo, Crosby and Hope frequently made comically insulting references to each other. They performed together countless times on stage, radio, film, and television, and made numerous appearances together in movies aside from the "Road" pictures; "Variety Girl" (1947), for example, featured lengthy scenes and songs together, along with joint billing. In the 1949 Disney animated film "The Adventures of Ichabod and Mr. Toad", Crosby provided the narration and song vocals for "The Legend of Sleepy Hollow" segment. In 1960, he starred in "High Time", a collegiate comedy with Fabian Forte and Tuesday Weld that anticipated the emerging gap between Crosby and the new younger generation of musicians and actors who had begun their careers after World War II. The following year, Crosby and Hope reunited for one more "Road" movie, "The Road to Hong Kong", which teamed them up with the much younger Joan Collins and Peter Sellers. Collins was used in place of their longtime partner Dorothy Lamour, who Crosby felt was getting too old for the role; Hope refused to do the film without Lamour, and she instead made a lengthy and elaborate cameo appearance. Shortly before his death in 1977, Crosby had planned another "Road" film in which he, Hope, and Lamour search for the Fountain of Youth. Crosby won an Academy Award for Best Actor for "Going My Way" in 1944 and was nominated for the 1945 sequel, "The Bells of St. Mary's". He received critical acclaim and his third Academy Award nomination for his performance as an alcoholic entertainer in "The Country Girl" (1954). Television. "The Fireside Theater" (1950) was his first television production. The series of 26-minute shows was filmed at Hal Roach Studios rather than performed live on the air. The "telefilms" were syndicated to individual television stations. Crosby was a frequent guest on the musical variety shows of the 1950s and 1960s, as well as on numerous late-night talk shows, and hosted his own highly rated specials. Bob Hope memorably devoted one of his monthly NBC specials to his long intermittent partnership with Crosby, titled "On the Road With Bing". Crosby was associated with ABC's "The Hollywood Palace" as the show's first and most frequent guest host and appeared annually on its Christmas edition with his wife Kathryn and his younger children, continuing the annual Christmas shows even after "The Hollywood Palace" was canceled. In the early 1970s, Crosby made two late appearances on "The Flip Wilson Show", singing duets with the comedian. His last TV appearance was a Christmas special, "Merrie Olde Christmas", taped in London in September 1977 and aired weeks after his death. It was on this special that Crosby recorded a duet of "The Little Drummer Boy" and "Peace on Earth" with rock musician David Bowie.
Their duet was released in 1982 as a single 45 rpm record and reached No. 3 in the UK singles charts. It has since become a staple of holiday radio and the final popular hit of Crosby's career. At the end of the 20th century, "TV Guide" listed the Crosby-Bowie duet as one of the 25 most memorable musical moments of 20th-century television. Bing Crosby Productions, affiliated with Desilu Studios and later CBS Television Studios, produced a number of television series, including Crosby's own unsuccessful ABC sitcom "The Bing Crosby Show" in the 1964–1965 season (with co-stars Beverly Garland and Frank McHugh). The company produced two ABC medical dramas, "Ben Casey" (1961–1966) and "Breaking Point" (1963–1964), the popular "Hogan's Heroes" (1965–1971) military comedy on CBS, as well as the lesser-known show "Slattery's People" (1964–1965). Singing style and vocal characteristics. Crosby was one of the first singers to exploit the intimacy of the microphone rather than use the loud, penetrating vaudeville style associated with Al Jolson. Crosby was, by his own definition, a "phraser", a singer who placed equal emphasis on both the lyrics and the music. His phrasing echoed jazz, particularly the trumpet playing of his bandmate Bix Beiderbecke, and Paul Whiteman's hiring of Crosby helped bring the genre to a wider audience. In the framework of the novelty-singing style of the Rhythm Boys, Crosby bent notes and added off-tune phrasing, an approach that was rooted in jazz. He had already been introduced to Louis Armstrong and Bessie Smith before his first appearance on record. Crosby and Armstrong remained warm acquaintances for decades, occasionally singing together in later years, e.g. "Now You Has Jazz" in the film "High Society" (1956). The presence of jazz phrasing, jazz rhythm, and jazz improvisation in Crosby's performances varied with the piece of music, but they were elements he used frequently. This can be observed particularly in his straight jazz work of the late 1920s and early 1930s, in his recordings with Buddy Cole and His Trio from the mid-1950s, and in his numerous collaborations with such jazz musicians as Louis Armstrong, Duke Ellington, Ella Fitzgerald, Joe Venuti, and Eddie Lang. However, while Crosby can be called a jazz singer, he was not strictly a jazz singer, as he applied the style and techniques to the broad scope of music that he performed, ranging from jazz to country to such material as operetta arias. During the early portion of his solo career (about 1931–1934), Crosby's emotional, often pleading style of crooning was popular. But Jack Kapp, manager of Brunswick and later Decca, talked Crosby into dropping many of his jazzier mannerisms in favor of a clear vocal style. Crosby credited Kapp for choosing hit songs, working with many other musicians, and, most important, diversifying his repertoire into several styles and genres. Kapp helped Crosby have number one hits in Christmas music, Hawaiian music, and country music, and top-30 hits in Irish music, French music, rhythm and blues, and ballads. Crosby elaborated on an idea of Al Jolson's: phrasing, or the art of making a song's lyric ring true. "I used to tell Sinatra over and over," said Tommy Dorsey, "there's only one singer you ought to listen to and his name is Crosby. All that matters to him is the words, and that's the only thing that ought to for you, too."
Critic Henry Pleasants wrote in 1985: [While] the octave B flat to B flat in Bing's voice at that time [1930s] is, to my ears, one of the loveliest I have heard in forty-five years of listening to baritones, both classical and popular, it dropped conspicuously in later years. From the mid-1950s, Bing was more comfortable in a bass range while maintaining a baritone quality, with the best octave being G to G, or even F to F. In a recording he made of 'Dardanella' with Louis Armstrong in 1960, he attacks lightly and easily on a low E flat. This is lower than most opera basses care to venture, and they tend to sound as if they were in the cellar when they get there. Career achievements. Crosby was among the most popular and successful musical acts of the 20th century. "Billboard" magazine used different methodologies during his career, but his chart success remains impressive: 396 chart singles, including roughly 41 number 1 hits. Crosby had separate charting singles every year between 1931 and 1954; the annual re-release of "White Christmas" extended that streak to 1957. He had 24 separate popular singles in 1939 alone. Statistician Joel Whitburn at "Billboard" determined that Crosby was America's most successful recording act of the 1930s and again in the 1940s. Estimates of Bing Crosby's total record sales vary, as the organizations that audit record sales have no official tally, but the claimed figures are notable. In 1960, Crosby was honored as "First Citizen of Record Industry" based on having sold 200 million discs. The Guinness Book reported some of the singer's worldwide sales on a few occasions: by 1973, Crosby had sold more than 400 million records worldwide, and by 1977 he had sold 500 million discs, being ranked as the most successful and best-selling musical artist in 1978. Some sources contradict the sales figures attributed to the Guinness Book, as it is not an organization that counts or audits artists' sales in the United States or worldwide. According to different sources, Bing Crosby's sales number varies between 300 million, 500 million, and even 1 billion, making him one of the best-selling singers in history. The single "White Christmas" sold over 50 million copies according to "Guinness World Records". For 15 years (1934, 1937, 1940, 1943–1954), Crosby was among the top 10 acts in box-office sales, and for five of those years (1944–1948) he topped the world. Crosby sang four Academy Award-winning songs: "Sweet Leilani" (1937), "White Christmas" (1942), "Swinging on a Star" (1944), and "In the Cool, Cool, Cool of the Evening" (1951). He also won the Academy Award for Best Actor for his role in "Going My Way" (1944). A survey in 2000 found that with 1,077,900,000 movie tickets sold, Crosby was the third-most-popular actor of all time, behind Clark Gable (1,168,300,000) and John Wayne (1,114,000,000). The "International Motion Picture Almanac" lists Crosby in a tie for second-most years at number one on the All Time Number One Stars List with Clint Eastwood, Tom Hanks, and Burt Reynolds. His most popular film, "White Christmas", grossed $30 million in 1954. Crosby received 23 gold and platinum records, according to the book "Million Selling Records". The Recording Industry Association of America did not institute its gold record certification program until 1958, when Crosby's record sales were low. Before 1958, gold records were awarded by record companies.
Crosby charted 23 "Billboard" hits from 47 recorded songs with the Andrews Sisters, whose Decca record sales were second only to Crosby's throughout the 1940s. They were his most frequent collaborators on disc from 1939 to 1952, a partnership that produced four million-selling singles: "Pistol Packin' Mama", "Jingle Bells", "Don't Fence Me In", and "South America, Take It Away". They made one film appearance together, in "Road to Rio", singing "You Don't Have to Know the Language", and sang together on radio airwaves throughout the 1940s and 1950s. They appeared as guests on each other's shows and on Armed Forces Radio Service programming during and after World War II. The quartet's additional Top-10 "Billboard" hits from 1943 to 1945 included "The Vict'ry Polka", "There'll Be a Hot Time in the Town of Berlin (When the Yanks Go Marching In)", and "Is You Is or Is You Ain't (Ma' Baby?)", which helped sustain the morale of the American public. In 1962, Crosby was given the Grammy Lifetime Achievement Award. He has been inducted into the halls of fame for both radio and popular music. In 2007, Crosby was inducted into the Hit Parade Hall of Fame and in 2008 the Western Music Hall of Fame. Global impact. Crosby's popularity around the world was such that Dorothy Masuka, the best-selling African recording artist, stated that, "Only Bing Crosby the famous American crooner sold more records than me in Africa." His great popularity throughout the continent led other African singers to emulate him, including Masuka, Dolly Rathebe, and Miriam Makeba, known locally as "The Bing Crosby of Africa". Presenter Mike Douglas commented in a 1975 interview, "During my days in the Navy in World War II, I remember walking the streets of Calcutta, India, on the coast; it was a lonely night, so far from my home and from my new wife, Gen. I needed something to lift my spirits. As I passed a Hindu sitting on the corner of a street, I heard something surprisingly familiar. I came back to see the man playing one of those old Victrolas, like those of RCA with the horn speaker. The man was listening to Bing Crosby sing, "Ac-Cent-Tchu-Ate The Positive". I stopped and smiled in grateful acknowledgment. The Hindu nodded and smiled back. The whole world knew and loved Bing Crosby." His popularity in India led many Indian singers to emulate him, notably Kishore Kumar, considered the "Bing Crosby of India". Throughout Europe and Russia, Crosby was also known as "Der Bingle", a pseudonym coined in 1944 by Bob Musel, an American journalist based in London, after Crosby had recorded three 15-minute programs with Jack Russin for broadcast to Germany from ABSIE. Entrepreneurship. According to Shoshana Klebanoff, Crosby became one of the richest men in the history of show business. He had investments in real estate, mines, oil wells, cattle ranches, race horses, music publishing, baseball teams, and television. Crosby made a fortune from the Minute Maid Orange Juice Corporation, in which he was a principal stockholder. Role in early tape recording. During the Golden Age of Radio, performers had to create their shows live, sometimes even redoing the program a second time for the West Coast time zone. Crosby had to do two live radio shows on the same day, three hours apart, for the East and West Coasts. Crosby's radio career took a significant turn in 1945, when he clashed with NBC over his insistence that he be allowed to pre-record his radio shows.
The live production of radio shows was reinforced by the musicians' union and ASCAP, which wanted to ensure continued work for their members. In "On the Air: The Encyclopedia of Old-Time Radio", John Dunning wrote about German engineers having developed a tape recorder of near-professional broadcast quality. Crosby's insistence eventually factored into the further development of magnetic tape sound recording and the radio industry's widespread adoption of it. He used his clout, both professionally and financially, to push for innovations in audio. But NBC and CBS refused to broadcast prerecorded radio programs. Crosby left the network and remained off the air for seven months, creating a legal battle with his sponsor Kraft that was settled out of court. Crosby returned to broadcasting for the last 13 weeks of the 1945–1946 season. The Mutual Network, on the other hand, had pre-recorded some of its programs as early as 1938 for "The Shadow" with Orson Welles. ABC, formed from the sale of the NBC Blue Network in 1943 after a federal antitrust suit, was willing to join Mutual in breaking the tradition. ABC offered Crosby $30,000 per week to produce a recorded show every Wednesday, sponsored by Philco. He would get an additional $40,000 from 400 independent stations for the rights to broadcast the 30-minute show, which was sent to them every Monday on three lacquer discs that played ten minutes per side. Murdo MacKenzie of Bing Crosby Enterprises had seen a demonstration of the German Magnetophon in June 1947, the same device that Jack Mullin had brought back from Radio Frankfurt, together with 50 reels of tape, at the end of the war. It was one of the magnetic tape recorders that BASF and AEG had built in Germany starting in 1935. The 6.5 mm ferric-oxide-coated tape could record 20 minutes per reel of high-quality sound. Alexander M. Poniatoff ordered Ampex, which he had founded in 1944, to manufacture an improved version of the Magnetophon. Crosby hired Mullin to start recording his "Philco Radio Time" show on his German-made machine in August 1947, using the same 50 reels of I.G. Farben magnetic tape that Mullin had found at a radio station at Bad Nauheim near Frankfurt while working for the U.S. Army Signal Corps. The advantage was editing, as Crosby explained in his autobiography; Mullin's 1976 memoir of these early days of experimental recording agrees with Crosby's account. Crosby invested $50,000 in Ampex with the intent to produce more machines. In 1948, the second season of Philco shows was recorded with the Ampex Model 200A and Scotch 111 tape from 3M, and Mullin later described how one new broadcasting technique was invented on the Crosby show with these machines. Crosby started the tape recorder revolution in America. In his 1950 film "Mr. Music", Crosby is seen singing into an Ampex tape recorder that reproduced his voice better than anything else then available. Also quick to adopt tape recording was his friend Bob Hope. Crosby gave one of the first Ampex Model 300 recorders to his friend, guitarist Les Paul, which led to Paul's invention of multitrack recording. His organization, the Crosby Research Foundation, held tape recording patents and developed equipment and recording techniques such as the laugh track that are still in use. With Frank Sinatra, Crosby was one of the principal backers of the United Western Recorders studio complex in Los Angeles. Videotape development. Mullin continued to work for Crosby to develop a videotape recorder (VTR).
Television production was mostly live television in its early years, but Crosby wanted the same ability to record that he had achieved in radio. "The Fireside Theater" (1950), sponsored by Procter & Gamble, was his first television production. Mullin had not yet succeeded with videotape, so Crosby filmed the series of 26-minute shows at the Hal Roach Studios, and the "telefilms" were syndicated to individual television stations. Crosby continued to finance the development of videotape. Bing Crosby Enterprises gave the world's first demonstration of videotape recording in Los Angeles on November 11, 1951. Developed by John T. Mullin and Wayne R. Johnson since 1950, the device aired what were described as "blurred and indistinct" images, using a modified Ampex 200 tape recorder and standard quarter-inch (6.3 mm) audio tape moving at high speed. Television station ownership. A Crosby-led group purchased station KCOP-TV in Los Angeles, California, in 1954. NAFI Corporation and Crosby purchased television station KPTV in Portland, Oregon, for $4 million on September 1, 1959. In 1960, NAFI purchased KCOP from Crosby's group. In the early 1950s, Crosby helped establish the CBS television affiliate in his hometown of Spokane, Washington. Crosby partnered with Ed Craney, who owned the CBS radio affiliate KXLY (AM), and built a television studio west of Crosby's alma mater, Gonzaga University. After it began broadcasting, the station was sold within a year to Northern Pacific Radio and Television Corporation. Thoroughbred horse racing. Crosby was a fan of thoroughbred horse racing and bought his first racehorse in 1935. Two years later, Crosby became a founding partner of the Del Mar Thoroughbred Club and a member of its board of directors. Operating from the Del Mar Racetrack in Del Mar, California, the group included millionaire businessman Charles S. Howard, who owned a successful racing stable that included Seabiscuit. Charles's son, Lindsay C. Howard, became one of Crosby's closest friends; Crosby named his son Lindsay after him, and would purchase his 40-room Hillsborough, California, estate from Lindsay in 1965. Crosby and Lindsay Howard formed Binglin Stable to race and breed thoroughbred horses at a ranch in Moorpark in Ventura County, California. They also established the Binglin Stock Farm in Argentina, where they raced horses at the Hipódromo de Palermo in Palermo, Buenos Aires. A number of Argentine-bred horses were purchased and shipped to race in the United States. On August 12, 1938, the Del Mar Thoroughbred Club hosted a $25,000 winner-take-all match race, won by Charles S. Howard's Seabiscuit over Binglin's horse Ligaroti. In 1943, Binglin's horse Don Bingo won the Suburban Handicap at Belmont Park in Elmont, New York. The Binglin Stable partnership came to an end in 1953 as a result of a liquidation of assets by Crosby, who needed to raise funds to pay the hefty federal and state inheritance taxes on his deceased wife's estate. The Bing Crosby Breeders' Cup Handicap at Del Mar Racetrack is named in his honor. Sports. Crosby had a keen interest in sports. In the 1930s, his friend and former college classmate, Gonzaga head coach Mike Pecarovich, appointed Crosby as an assistant football coach. From 1946 until his death, Crosby owned a 25% share of the Pittsburgh Pirates. Although he was passionate about the team, Crosby was too nervous to watch the deciding Game 7 of the 1960 World Series, choosing instead to go to Paris with Kathryn and listen to the radio broadcast.
Crosby had arranged for Ampex, another of his financial investments, to record the NBC telecast on kinescope. The game was one of the most famous in baseball history, capped off by Bill Mazeroski's walk-off home run that won the game for Pittsburgh. Crosby apparently viewed the complete film just once, then stored it in his wine cellar, where it remained undisturbed until it was discovered in December 2009. The restored broadcast was shown on MLB Network in December 2010. Crosby was also an early investor in Bob Cobb's Billings Mustangs baseball club in 1948, joining other Hollywood stars Cecil B. DeMille, Robert Taylor, and Barbara Stanwyck, who were also shareholders in the club, and he served as the honorary chairman of the club's board of directors. Crosby was also an avid golfer. He first took up golf at age 12 as a caddy, and was already spending much time on the golf course while touring the country in a vaudeville act or with Paul Whiteman's orchestra in the mid-to-late 1920s. Eventually, Crosby became accomplished at the sport, at his best reaching a two handicap. He competed in both the British and U.S. Amateur championships, was a five-time club champion at Lakeside Golf Club in Hollywood, and once made a hole-in-one on the 16th hole at Cypress Point. In 1937, Crosby hosted the first "Crosby Clambake", a pro-am tournament at Rancho Santa Fe Golf Club in Rancho Santa Fe, California, the event's location prior to World War II. After the war, the event resumed play in 1947 on golf courses in Pebble Beach, where it has been played ever since. Now the AT&T Pebble Beach Pro-Am, the tournament is a staple of the PGA Tour, having featured Hollywood stars and other celebrities. In 1950, Crosby became the third person to win the William D. Richardson award, which is given to a non-professional golfer "who has consistently made an outstanding contribution to golf". In 1978, he and Bob Hope received the Bob Jones Award, the highest honor given by the United States Golf Association in recognition of distinguished sportsmanship. Crosby was inducted into the World Golf Hall of Fame in 1978. Crosby was also a keen fisherman. In the summer of 1966, he spent a week as the guest of Lord Egremont, staying in Cockermouth and fishing on the River Derwent. Crosby's trip was filmed for "The American Sportsman" on ABC, though not all went well at first, as the salmon were not running; he made up for it at the end of the week by catching a number of sea trout. In Front Royal, Virginia, a baseball stadium was named in Crosby's honor. The Front Royal Cardinals of the Valley Baseball League play their home games there, and the stadium is also home to both of the county's high school baseball teams. Personal life. Crosby reportedly had a problem with alcohol abuse between the late 1920s and early 1930s, spending 60 days in jail for drinking and crashing his car during Prohibition. He got his drinking under control in 1931. In 1977, Crosby told Barbara Walters in a televised interview that he thought marijuana should be legalized, because he believed it would make it much easier for the authorities to exert proper legal control over the market. In December 1999, the New York Post published an article by Bill Hoffmann and Murray Weiss called "Bing Crosby's Single Life", which claimed that "recently published" FBI files revealed connections with figures in the Mafia "since his youth".
However, Crosby's FBI files had already been published in 1992 and provide no indication that Crosby had ties to the Mafia, except for one major but accidental encounter in Chicago in 1929, which is not mentioned in the files but is recounted by Crosby himself in his as-told-to autobiography "Call Me Lucky". In the more than 280 pages of Crosby's FBI files, references to organized crime or gambling dens appear only in the content of some of the many threats that Crosby received throughout his life, and the comments made by FBI investigators in the memos discredited the claims made in those letters. The files contain only one reference to a person associated with the Mafia: a memorandum dated January 16, 1959, said, "The Salt Lake City Office has developed information indicating that Moe Dalitz received an invitation to join a deer hunting party at Bing Crosby's Elko, Nevada, ranch, together with the crooner, his Las Vegas dentist and several business associates." However, Crosby had already sold his Elko ranch a year earlier, in 1958, and it is doubtful how involved he really was in that gathering. Romantic relationships. Crosby was married twice. His first wife was actress and nightclub singer Dixie Lee, to whom he was married from 1930 until her death from ovarian cancer in 1952. They had four sons: Gary, twins Dennis and Phillip, and Lindsay. "Smash-Up, the Story of a Woman" (1947) was rumored to be based on Dixie's life. The Crosby family lived at 10500 Camarillo Street in North Hollywood for more than five years. After his wife died, Crosby had relationships with model Pat Sheehan, who married his son Dennis in 1958, and with actresses Inger Stevens and Grace Kelly. Crosby married actress Kathryn Grant, who converted to Catholicism, in 1957. They had three children: Harry Lillis III, who played Bill in "Friday the 13th"; Mary Frances, best known for portraying Kristin Shepard on TV's "Dallas"; and Nathaniel, the 1981 U.S. Amateur champion in golf. Particularly during the late 1930s and the 1940s, Crosby's domestic life was dominated by his wife's excessive drinking, and his efforts to cure her with the help of specialists failed. Tired of Dixie's drinking, Crosby even asked her for a divorce in January 1941. During the 1940s, he was torn between staying away from home and trying to be there as much as possible for his children. Crosby had one confirmed extramarital affair between 1945 and the late 1940s, while married to Dixie. Actress Patricia Neal, who was herself at the time having an affair with the married Gary Cooper, wrote about it in her 1988 autobiography "As I Am", recalling a cruise to England with actress Joan Caulfield in 1948. In the 2018 Crosby biography "Bing Crosby: Swinging on a Star; the War Years, 1940–1946", there are excerpts from an original diary of two sisters, Violet and Mary Barsa, who, as young women, used to stalk Crosby in New York City in December 1945 and January 1946, and who detailed their observations in the diary. The document reveals that, during that time, Crosby was taking Caulfield out to dinner and visiting theaters and opera houses with her, and that Caulfield, with a companion, entered the Waldorf Hotel where Crosby was staying. The document also clearly indicates that a third person, in most instances Caulfield's mother, was present at their meetings. In 1954, Caulfield admitted to a relationship with a "top film star", a married man with children who, in the end, chose his wife and children over her.
Caulfield's sister, Betty Caulfield, confirmed the romantic relationship between Caulfield and Crosby. Despite being a Catholic, Crosby seriously considered divorce in order to marry Caulfield. In December 1945 or January 1946, Crosby approached Cardinal Francis Spellman about his difficulties in dealing with his wife's alcoholism, his love for Caulfield, and his plan to file for divorce. According to Betty Caulfield, Spellman told Crosby: "Bing, you are Father O'Malley and under no circumstances can Father O'Malley get a divorce." Around the same time, Crosby talked to his mother about his intentions, and she protested. Ultimately, Crosby chose to end the relationship and stay with his wife. Crosby and Dixie reconciled, and he continued trying to help her overcome her alcohol issues. Homes. In November 1958, Crosby purchased the 1,350-acre Rising River Ranch in Cassel, California, after renting a portion of it for several years; attorney Ira Shadwell declined to disclose the purchase price. In October 1978, actor Clint Eastwood purchased the ranch, under the name of his business manager Roy Kaufman, for $1.5 million. Crosby and his family lived in the San Francisco area for many years. In 1963, not wanting to raise their children in Hollywood, according to son Nathaniel, he and his wife Kathryn moved with their three young children from Los Angeles to a $175,000 ten-bedroom Tudor estate in Hillsborough formerly owned by fellow horseman Lindsay C. Howard, one of Crosby's closest friends. This house was put up for sale by its owners in 2021 for $13.75 million. In 1965, the Crosbys moved to a larger, 40-room French chateau-style house on nearby Jackling Drive, where Kathryn Crosby continued to reside after Bing's death. This house served as a setting for some of the family's Minute Maid orange juice television commercials. Children. After Crosby's death, his eldest son, Gary, published a highly critical memoir, "Going My Own Way" (1983; written in collaboration with noted music journalist Ross Firestone), depicting his father as cruel, cold, remote, and physically and psychologically abusive. While acknowledging that corporal punishment took place, all of Gary's immediate siblings were reported to have distanced themselves from the abuse claims, either in public or in private. Crosby's younger son Phillip disputed his brother Gary's claims about their father. Around the time Gary published his claims, Phillip stated to the press that "Gary is a whining, bitching crybaby, walking around with a two-by-four on his shoulder and just daring people to nudge it off." Nevertheless, Phillip did not deny that Crosby believed in corporal punishment; in an interview with "People" magazine, Phillip stated that "we never got an extra whack or a cuff we didn't deserve". Shortly before Gary's book was published, Lindsay said, "I'm glad [Gary] did it. I hope it clears up a lot of the old lies and rumors." Unlike Gary, Lindsay stated that he preferred to remember "all the good things I did with my dad and forget the times that were rough". Lindsay supported his brother at the time of the book's publication but had a tempered view of its revelations.
"I never expected affection from my father so it didn't bother me," he once told an interviewer. After the book was published, however, Lindsay also addressed the abuse claims and what the media had made of them. Dennis Crosby reportedly said his older brother Gary was the most severely treated of the four boys: "He got the first licking, and we got the second." Gary's first wife of 19 years, Barbara Cosentino, of whom Gary wrote in his book, "I could confide in her about Mom and Dad and my childhood", and with whom he stayed friendly after the divorce, also gave her account, as did Gary Crosby's adopted son, Steven Crosby, in a 2003 interview. Bing's younger brother, singer and jazz bandleader Bob Crosby, recalled at the time of Gary's revelations that Bing was a "disciplinarian", as their mother and father had been, adding, "We were brought up that way." In an interview for the same article, Gary clarified that Bing "was like a lot of fathers of that time. He was not out to be vicious, to beat children for his kicks." Gary Giddins, the author of the 2018 biography of Bing Crosby, claims that Gary Crosby's memoir is unreliable in many instances and cannot be trusted on the abuse stories. Crosby's will established a blind trust in which none of the sons received an inheritance until they reached the age of 65, a provision intended by Crosby to keep them out of trouble. They instead received several thousand dollars per month from a trust left in 1952 by their mother, Dixie Lee. That trust, tied to high-performing oil stocks, folded in December 1989 following the 1980s oil glut. Lindsay Crosby died in 1989 at age 51, and Dennis Crosby died in 1991 at age 56, both by suicide from self-inflicted gunshot wounds. Gary Crosby died of lung cancer in 1995 at age 62. Phillip Crosby died of a heart attack in 2004 at age 69. Nathaniel Crosby, Crosby's younger son from his second marriage, is a former high-level golfer who won the U.S. Amateur in 1981 at age 19, at the time the youngest winner in the history of that event. Harry Crosby is an investment banker who occasionally makes singing appearances. Denise Crosby, Dennis Crosby's daughter, is an actress known for her role as Tasha Yar on "Star Trek: The Next Generation". She also appeared in the 1989 film adaptation of Stephen King's novel "Pet Sematary". In 2006, Crosby's niece through his sister Mary Rose, Carolyn Schneider, published the laudatory book "Me and Uncle Bing". Disputes between Crosby's two families began in the late 1990s. When Dixie died in 1952, her will provided that her share of the community property be distributed in trust to her sons. At his death in 1977, Crosby left the residue of his estate to a marital trust for the benefit of his widow, Kathryn, and HLC Properties, Ltd., was formed to manage his interests, including his right of publicity. In 1996, Dixie's trust sued HLC and Kathryn for declaratory relief as to the trust's entitlement to interest, dividends, royalties, and other income derived from the community property of Crosby and Dixie. In 1999, the parties settled for approximately $1.5 million. Relying on a retroactive amendment to the California Civil Code, Dixie's trust brought suit again in 2010, alleging that Crosby's right of publicity was community property and that Dixie's trust was entitled to a share of the revenue it produced. The trial court granted Dixie's trust's claim, but the California Court of Appeal reversed, holding that the 1999 settlement barred the claim.
In light of the court's ruling, it was unnecessary for the court to decide whether a right of publicity can be characterized as community property under California law. Health and death. Following his recovery from a life-threatening fungal infection of his right lung in January 1974, Crosby emerged from semi-retirement to start a new spate of albums and concerts. On March 20, 1977, after videotaping a CBS concert special, "Bing – 50th Anniversary Gala", at the Ambassador Auditorium with Bob Hope looking on, Crosby fell off the stage into an orchestra pit, rupturing a disc in his back and requiring a month-long stay in the hospital. Crosby's first performance after the accident was his last American concert, on August 16, 1977, the day Elvis Presley died, at the Concord Pavilion in Concord, California. When the electric power failed during his performance, Crosby continued singing without amplification. On August 27, Crosby gave a televised concert in Norway. In September, Crosby, his family, and singer Rosemary Clooney began a concert tour of Britain that included two weeks at the London Palladium. While in the UK, Crosby recorded his final album, "Seasons", and his final TV Christmas special, with guest David Bowie, on September 11; it aired a little over a month after Crosby's death. Crosby's last concert was at the Brighton Centre on October 10, four days before his death, with British entertainer Gracie Fields in attendance. The following day, Crosby made his final appearance in a recording studio, singing eight songs at the BBC's Maida Vale Studios for a radio program that included an interview with Alan Dell. Accompanied by the Gordon Rose Orchestra, Crosby's last recorded performance was of the song "Once in a While". Later that afternoon, he met with Chris Harding to take photographs for the "Seasons" album jacket. On October 13, 1977, Crosby flew alone to Spain to play golf and hunt partridge. The next day, Crosby played 18 holes of golf at the La Moraleja Golf Course near Madrid. His partner was World Cup champion Manuel Piñero; their opponents were club president César de Zulueta and Valentín Barrios. According to Barrios, Crosby was in good spirits throughout the day and was photographed several times during the round. At the ninth hole, construction workers building a house nearby recognized Crosby, and when they asked for a song, he sang "Strangers in the Night". Crosby, who had a 13 handicap, won with his partner by one stroke. As Crosby and his party headed back to the clubhouse at around 6:30 p.m., Crosby said, "That was a great game of golf, fellas. Let's go have a Coca-Cola." Those were his last words. A short distance from the clubhouse entrance, Crosby collapsed and died from a heart attack. At the clubhouse, and later in the ambulance, house physician Dr. Laiseca tried to revive him, but was unsuccessful. At Reina Victoria Hospital, Crosby was administered the last rites of the Catholic Church and pronounced dead at the age of 74. On October 18, 1977, following a private funeral Mass at St. Paul the Apostle Catholic Church in Westwood, Los Angeles, Crosby was buried at Holy Cross Cemetery in Culver City, California. Legacy. Crosby is a member of the National Association of Broadcasters Hall of Fame in the radio division. The family created an official website on October 14, 2007, the 30th anniversary of Crosby's death. In his autobiography "Don't Shoot, It's Only Me!" (1990), Bob Hope wrote, "Dear old Bing, as we called him, the 'Economy-sized Sinatra'.
And what a voice. God, I miss that voice. I can't even turn on the radio around Christmas time without crying anymore." Calypso musician Roaring Lion wrote a tribute song in 1939 titled "Bing Crosby", in which he wrote: "Bing has a way of singing with his very heart and soul / Which captivates the world / His millions of listeners never fail to rejoice / At his golden voice..." Bing Crosby Stadium in Front Royal, Virginia, was named after Crosby in honor of his fundraising and cash contributions for its construction from 1948 to 1950. In 2006, the former Metropolitan Theater of Performing Arts ("The Met") in Spokane, Washington, was renamed the Bing Crosby Theater. Crosby has three stars on the Hollywood Walk of Fame, one each for radio, recording, and motion pictures. Compositions. Crosby wrote or co-wrote the lyrics to 22 songs. His composition "At Your Command" was number 1 for three weeks on the U.S. pop singles chart beginning on August 8, 1931. "I Don't Stand a Ghost of a Chance With You" was his most successful composition, recorded by Duke Ellington, Frank Sinatra, Thelonious Monk, Billie Holiday, and Mildred Bailey, among others. Grammy Hall of Fame. Four performances by Bing Crosby have been inducted into the Grammy Hall of Fame, a special Grammy award established in 1973 to honor recordings that are at least 25 years old and have "qualitative or historical significance".
4012
5229428
https://en.wikipedia.org/wiki?curid=4012
Basel Convention
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty designed to reduce the movement of hazardous waste between nations, and specifically to restrict the transfer of hazardous waste from developed to less developed countries. It does not address the movement of radioactive waste, which is controlled by the International Atomic Energy Agency. The Basel Convention is also intended to minimize the amount and toxicity of the wastes generated, to ensure their environmentally sound management as close as possible to the source of generation, and to assist developing countries in the environmentally sound management of the hazardous and other wastes they generate. The convention was opened for signature on 21 March 1989 and entered into force on 5 May 1992. As of June 2024, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but have not ratified it. Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, though not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country. History. With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to poorer countries, grew rapidly. In 1990, OECD countries exported around 1.8 million tons of hazardous waste. Although most of this waste was shipped to other developed countries, a number of high-profile incidents of hazardous waste dumping led to calls for regulation. One of the incidents which led to the creation of the Basel Convention was the "Khian Sea" waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times; unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea. Another incident was a 1988 case in which five ships transported 8,000 barrels of hazardous waste from Italy to the small Nigerian town of Koko in exchange for $100 monthly rent paid to a Nigerian man for the use of his farmland. At its meeting held from 27 November to 1 December 2006, the parties to the Basel Convention focused on issues of electronic waste and the dismantling of ships. Increased trade in recyclable materials has led to a growing market for used products such as computers, valued in the billions of dollars. At issue is the question of when used computers stop being a "commodity" and become "waste".
As of June 2023, there are 191 parties to the treaty, comprising 188 UN member states, the Cook Islands, the European Union, and the State of Palestine. The five UN member states that are not party to the treaty are East Timor, Fiji, Haiti, South Sudan, and the United States. Definition of "hazardous waste". Waste falls under the scope of the convention if it is within one of the categories of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. A waste may also fall under the scope of the convention if it is defined as or considered to be a hazardous waste under the laws of the exporting country, the importing country, or any of the countries of transit. The term "disposal" is defined in Article 2, paragraph 4, by reference to Annex IV, which lists the operations understood as disposal or recovery; the examples of disposal are broad, including recovery and recycling. Alternatively, it is sufficient for a waste to fall under the scope of the convention if it is included in Annex II, which lists other wastes such as household wastes and residue from the incineration of household waste. Radioactive waste that is covered under other international control systems, and wastes from the normal operation of ships, are not covered. Annex IX attempts to define wastes which are not considered hazardous and which are excluded from the scope of the Basel Convention; if, however, these wastes are contaminated with hazardous materials to an extent that causes them to exhibit an Annex III characteristic, they are not excluded. Obligations. In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent, and tracking of the movement of wastes across national boundaries. The convention places a general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not detract from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements allowing the shipping of hazardous wastes to Basel Party countries. The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention. Parties to the convention must honor import bans of other parties. Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to their source of generation, the resulting internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered wastes from, non-parties to the convention. The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions. According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures appropriate for damage resulting from the movement of hazardous waste across borders.
The current consensus is that, as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered. Basel Ban Amendment. After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough, and many nations and NGOs argued for a total ban on the shipment of all hazardous waste to developing countries. In particular, the original convention did not prohibit waste exports to any location except Antarctica, but merely required a notification-and-consent system known as "prior informed consent" (PIC). Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations, leading many to believe a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention. Lobbying at the 1995 Basel conference by developing countries, Greenpeace, and several European countries such as Denmark led to the adoption in 1995 of an amendment to the convention termed the Basel Ban Amendment. The amendment was accepted by 86 countries and the European Union, but for years did not enter into force, as that requires ratification by three-fourths of the member states of the convention. On 6 September 2019, Croatia became the 97th country to ratify the amendment, allowing it to enter into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries, and applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage, or shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as by nations including Australia and Canada. The number of ratifications required for the entry into force of the Ban Amendment was long under debate: amendments to the convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5], but the parties to the Basel Convention could not agree whether this meant three-fourths of the parties that were party to the Basel Convention when the ban was adopted, or three-fourths of the current parties to the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states; Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation. In light of the blockage concerning the entry into force of the Ban Amendment, Switzerland and Indonesia launched a "Country-led Initiative" (CLI) to discuss informally a way forward to ensure that the transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to the unsound management of hazardous wastes. The discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative.
Regulation of plastic waste. In the wake of popular outcry, in May 2019 most of the world's countries, though not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating in land-based sources. The United States, which produces 42 million metric tons of plastic waste annually, more than any other country in the world, opposed the amendment; but since it is not a party to the treaty, it had no opportunity to vote on it to try to block it. Information about, and visual images of, wildlife such as seabirds ingesting plastic, together with scientific findings that nanoparticles penetrate the blood–brain barrier, were reported to have fueled public sentiment for coordinated, legally binding international action. Over a million people worldwide signed a petition demanding official action. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the Basel Convention as amended in May 2019 prohibits the transportation of plastic waste to just about every other country. The Basel Convention contains three main entries on plastic wastes, in Annexes II, VIII, and IX of the convention. The Plastic Waste Amendments of the convention are now binding on 186 states. In addition to ensuring that the trade in plastic waste is more transparent and better regulated, the Basel Convention obliges governments not only to ensure the environmentally sound management of plastic waste but also to tackle plastic waste at its source. Basel watchdog. The Basel Action Network (BAN) is a charitable civil-society non-governmental organization that works as a consumer watchdog for implementation of the Basel Convention. BAN's principal aim is fighting the exportation of toxic waste, including plastic waste, from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN works to curb the trans-border trade in hazardous electronic waste, land dumping, incineration, and the use of prison labor.
4013
18897847
https://en.wikipedia.org/wiki?curid=4013
Bar Kokhba (album)
Bar Kokhba is a double album by John Zorn, recorded between 1994 and 1996. It features music from Zorn's "Masada" project rearranged for small ensembles, along with the original soundtrack from "The Art of Remembrance – Simon Wiesenthal", a film by Hannah Heer and Werner Schmiedel (1994–95). Reception. The AllMusic review by Marc Gilman noted: "While some compositions retain their original structure and sound, some are expanded and probed by Zorn's arrangements, and resemble avant-garde classical music more than jazz. But this is the beauty of the album; the ensembles provide a forum for Zorn to expand his compositions. The album consistently impresses." Track listing. "All compositions by John Zorn"
4015
27015025
https://en.wikipedia.org/wiki?curid=4015
BASIC
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1964. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time-Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era and became the "de facto" programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. By then, most nontechnical personal computer users relied on pre-written applications rather than writing their own programs. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist. Origin. John G. Kemeny was the chairman of the Dartmouth College Mathematics Department. Based largely on his reputation as an innovator in math teaching, in 1959 the college won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that." Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: 'DO 100, I = 1, 10, 2'. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?"
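For contrast, here is the same iteration in the FOR...NEXT form that BASIC would adopt in place of Fortran's DO (described below). This is a minimal sketch in early Dartmouth-style BASIC; the line numbers and the PRINT body are illustrative choices, not code from the original system:

10 REM PRINT THE ODD NUMBERS FROM 1 THROUGH 9
20 FOR I = 1 TO 10 STEP 2
30 PRINT I
40 NEXT I
50 END

The bounds and step are named by the keywords TO and STEP rather than by their position in a comma-separated list, and the loop is closed by a matching NEXT rather than by a referenced line number.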
Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students. Kemeny wrote the first version of BASIC. The acronym "BASIC" comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier to remember FOR I = 1 TO 10 STEP 2, and the line number used in the DO was instead indicated by the matching NEXT I. Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF...THEN. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN. The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964. Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic supported from its initial implementation as a batch language and character string functionality added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely. Wanting use of the language to become widespread, its designers made the compiler available free of charge. (In the 1960s, software was becoming a chargeable commodity; until then, it had been provided without charge as a service bundled with the expensive computers themselves, which were usually available only for lease.) They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as "Dartmouth BASIC". New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of "the first user-friendly programming language". Spread on time-sharing services. The emergence of BASIC took place as part of a wider movement toward time-sharing systems.
First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies". General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I, featuring BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973. Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time, which was typically billed by the minute. Spread on minicomputers. BASIC, small by its very nature, was naturally suited to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked the high-performance storage, such as hard drives, that makes compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory. A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load users' programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG). DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with the RAND Corporation, which had purchased a PDP-6 to run its JOSS language, conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned-up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with time-sharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system. During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's "Star Trek".
David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, "101 BASIC Computer Games", published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, "Creative Computing". The book remained popular, and was re-published on several occasions. Explosive growth: the home computer era. The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgment in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the "de facto" standard programming language on early microcomputers. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy. Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write his own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the "People's Computer Company" newsletter published in 1975, with implementations including source code published in "Dr. Dobb's Journal". This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known. Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly became one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a "de facto" standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented. Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC: a smaller introductory version with the initial releases of the machines and a Microsoft-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family.
The Atari 8-bit computers use the 8 KB Atari BASIC, which is not derived from Microsoft BASIC. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers, which incorporates extra structured programming keywords and floating-point features. As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available; in particular, Ahl converted the original 101 BASIC games into the Microsoft dialect and published them from "Creative Computing" as "BASIC Computer Games". This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC, would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications. In 1978, David Lien published the first edition of "The BASIC Handbook: An Encyclopedia of the BASIC Computer Language", documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era. IBM PC and compatibles. When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition, they produced the Microsoft QuickBASIC Compiler (1985) for power users and hobbyists, and the Microsoft BASIC Professional Development System (PDS) for professional programmers. Turbo Pascal publisher Borland released Turbo Basic 1.0 in 1985 (successor versions were marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created, such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh, while yab is a version of yaBasic optimized for BeOS, ZETA and Haiku. These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables.
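To make the last point concrete, here is a minimal sketch of a "proper" subroutine in a QuickBASIC-style dialect; the procedure name, parameter, and message are illustrative, not taken from any particular product. Earlier BASICs offered only GOSUB to a line number, with every variable global, whereas structured dialects allow named subroutines with parameters and local variables:

DECLARE SUB Greet (person AS STRING)

CALL Greet("world")
END

SUB Greet (person AS STRING)
    DIM msg AS STRING    ' local to the subroutine, invisible to the caller
    msg = "Hello, " + person + "!"
    PRINT msg
END SUB

Because msg is declared inside the subroutine, it cannot collide with a variable of the same name elsewhere in the program, a guarantee that GOSUB-era code could not make.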
The addition of an integrated development environment (IDE) and electronic Help files made the products easier to work with and supported learning tools and school curricula. In 1989, Microsoft Press published "Learn BASIC Now", a book-and-software system designed to teach BASIC programming to self-taught learners who were using IBM-PC compatible systems and the Apple Macintosh. "Learn BASIC Now" included software disks containing the Microsoft QuickBASIC Interpreter and a programming tutorial written by Michael Halvorson and David Rygmyr. Learning systems like "Learn BASIC Now" popularized structured BASIC and helped QuickBASIC reach an installed base of four million active users. By the late 1980s, many users were using pre-made applications written by others rather than learning programming themselves, and professional developers had a wide range of advanced languages available on small computers. C and later C++ became the languages of choice for professional "shrink wrap" application development. A niche that BASIC continued to fill was hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for QuickBASIC. QBasic maintained an active game development community, which later helped spawn the QB64 and FreeBASIC implementations. An early example of this market is the QBasic software package Microsoft Game Shop (1990), a hobbyist-inspired release that included six "arcade-style" games that were easily customizable in QBasic. In 2013, a game written in QBasic and compiled with QB64 for modern computers, entitled "Black Annex", was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, GLBasic and Basic4GL further filled this demand, right up to the modern RCBasic, NaaLaa, AppGameKit, Monkey 2, and Cerberus-X. Visual Basic. In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing, as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements, and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was its role as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft, who had initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic. While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently, as by that time computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved.
Many small business owners found they could create their own small yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other rapid application development tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus. Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support in March 2008. Owing to its persistent popularity, third-party attempts to support it further exist. On February 2, 2017, Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020, it was announced that evolution of the VB.NET language had also concluded. Even so, the language was still supported. Post-1990 versions and dialects. Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz). Several simple, web-based BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript, such as NS Basic. Building on earlier efforts such as Mobile Basic, many dialects are now available for smartphones and tablets. On game consoles, an application for the Nintendo 3DS and Nintendo DSi called "Petit Computer" allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for Nintendo Switch, which has also received a version of the Fuze Code System, a BASIC variant first implemented on a custom Raspberry Pi machine. Previously BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox. Calculators. Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others. Windows command-line. QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7, which do not include it. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which has a set of directories for old and optional software; other missing commands, such as Exe2Bin, are in these same directories. Other.
The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained are programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines, along with VBScript, JScript, and the numerous proprietary or open source engines which can be installed, such as PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others. This means that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and in other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an interpreter similar to BASICs of the 1970s, is available for Linux, Windows, and macOS. Legacy. The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs. Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 "Salon" article, as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256 and the web-based Quite Basic. The pedagogical use of BASIC has been followed by other languages, such as Pascal, Java and particularly Python. Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event. Syntax. Data types and variables. Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized requirements of limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, thus it was possible to inadvertently write a program with variables "LOSS" and "LOAN", which would be treated as being the same; assigning a value to "LOAN" would silently overwrite the value intended for "LOSS". Keywords could not be used in variables in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are distinguished in many microcomputer dialects by a $ suffixed to their name as a sigil, and string values are often delimited by "double quotation marks".
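As a minimal illustrative sketch (assuming an Applesoft-style interpreter in which only the first two characters of a variable name are significant; the variable names are chosen for illustration), the following program shows both the truncation pitfall and the $ string sigil:

10 REM ONLY "LO" IS SIGNIFICANT, SO LOSS AND LOAN NAME THE SAME VARIABLE
20 LOSS = 100
30 LOAN = 200
40 PRINT LOSS
50 REM LINE 40 PRINTS 200, NOT 100: ASSIGNING LOAN OVERWROTE LOSS
60 N$ = "MIKE"
70 REM THE $ SIGIL MARKS N$ AS A STRING VARIABLE
80 PRINT N$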
Arrays in BASIC could contain integers, floating point or string variables. Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements. Examples. Unstructured BASIC. New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program:

10 PRINT "Hello, World!"
20 END

An infinite loop could be used to fill the display with the message:

10 PRINT "Hello, World!"
20 GOTO 10

Note that the END statement is optional and has no action in most dialects of BASIC. It was not always included, as is the case in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement:

10 LET N=10
20 FOR I=1 TO N
30 PRINT "Hello, World!"
40 NEXT I

Most home computer BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:

10 INPUT "What is your name: "; U$
20 PRINT "Hello "; U$
30 INPUT "How many stars do you want: "; N
40 S$ = ""
50 FOR I = 1 TO N
60 S$ = S$ + "*"
70 NEXT I
80 PRINT S$
90 INPUT "Do you want more stars? "; A$
100 IF LEN(A$) = 0 THEN GOTO 90
110 A$ = LEFT$(A$, 1)
120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30
130 PRINT "Goodbye "; U$
140 END

The resulting dialog might resemble:

What is your name: Mike
Hello Mike
How many stars do you want: 7
*******
Do you want more stars? yes
How many stars do you want: 3
***
Do you want more stars? no
Goodbye Mike

The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual, which averages the numbers that are input:

5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END

Structured BASIC. Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition, keywords and structures to support repetition, selection and procedures with local variables were introduced. The following example is in Microsoft QuickBASIC:

REM QuickBASIC example
REM Forward declaration - allows the main code to call a
REM subroutine that is defined later in the source code
DECLARE SUB PrintSomeStars (StarCount!)
REM Main program follows
INPUT "What is your name: ", UserName$
PRINT "Hello "; UserName$
DO
", Answer$ LOOP UNTIL Answer$ <> "" Answer$ = LEFT$(Answer$, 1) LOOP WHILE UCASE$(Answer$) = "Y" PRINT "Goodbye "; UserName$ END REM subroutine definition SUB PrintSomeStars (StarCount) REM This procedure uses a local variable called Stars$ Stars$ = STRING$(StarCount, "*") PRINT Stars$ END SUB Object-oriented BASIC. Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigm. Most built-in procedures and functions are now represented as "methods" of standard objects rather than "operators". Also, the operating system became increasingly accessible to the BASIC language. The following example is in Visual Basic .NET: Public Module StarsProgram Private Function Ask(prompt As String) As String Console.Write(prompt) Return Console.ReadLine() End Function Public Sub Main() Dim userName = Ask("What is your name: ") Console.WriteLine("Hello {0}", userName) Dim answer As String Do Dim numStars = CInt(Ask("How many stars do you want: ")) Dim stars As New String("*"c, numStars) Console.WriteLine(stars) Do answer = Ask("Do you want more stars? ") Loop Until answer <> "" Loop While answer.StartsWith("Y", StringComparison.OrdinalIgnoreCase) Console.WriteLine("Goodbye {0}", userName) End Sub End Module
4016
46007279
https://en.wikipedia.org/wiki?curid=4016
List of Byzantine emperors
The foundation of Constantinople in 330 AD marks the conventional start of the Eastern Roman Empire, which fell to the Ottoman Empire in 1453 AD. Only the emperors who were recognized as legitimate rulers and exercised sovereign authority are included, to the exclusion of junior co-emperors who never attained the status of sole or senior ruler, as well as of the various usurpers or rebels who claimed the imperial title. The following list starts with Constantine the Great, the first Christian emperor, who rebuilt the city of Byzantium as an imperial capital, Constantinople, and who was regarded by the later emperors as the model ruler. Modern historians distinguish this later phase of the Roman Empire as Byzantine due to the imperial seat moving from Rome to Byzantium, the Empire's integration of Christianity, and the predominance of Greek instead of Latin. The Byzantine Empire was the direct legal continuation of the eastern half of the Roman Empire following the division of the Roman Empire in 395. Emperors listed below up to Theodosius I in 395 were sole or joint rulers of the entire Roman Empire. The Western Roman Empire continued until 476. Byzantine emperors considered themselves to be Roman emperors in direct succession from Augustus; the term "Byzantine" became convention in Western historiography in the 19th century. The use of the title "Roman Emperor" by those ruling from Constantinople was not contested until after the papal coronation of the Frankish Charlemagne as Holy Roman emperor (25 December 800). The title of all emperors preceding Heraclius was officially "Augustus", although other titles such as "Dominus" were also used. Their names were preceded by "Imperator Caesar" and followed by "Augustus". Following Heraclius, the title commonly became the Greek "Basileus" (Gr. Βασιλεύς), which had formerly meant sovereign, though "Augustus" continued to be used in a reduced capacity. Following the establishment of the rival Holy Roman Empire in Western Europe, the title "Autokrator" (Gr. Αὐτοκράτωρ) was increasingly used. In later centuries, the emperor could be referred to by Western Christians as the "emperor of the Greeks". Towards the end of the Empire, the standard imperial formula of the Byzantine ruler was "[Emperor's name] in Christ, Emperor and Autocrat of the Romans" (cf. Ῥωμαῖοι and Rûm). Dynasties were a common tradition and structure for rulers and government systems in the medieval period. The principle or formal requirement for hereditary succession was not a part of the Empire's governance; hereditary succession was a custom and tradition, carried on as habit and benefiting from a sense of legitimacy, but not a "rule" or an inviolable requirement for office at the time.
4024
7903804
https://en.wikipedia.org/wiki?curid=4024
Butterfly effect
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm, but by 1972 he was persuaded to make the image more poetic with the use of a butterfly and a tornado. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of "instability" of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History. In "The Vocation of Man" (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping." The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury in which a time traveller alters the future by inadvertently treading on a butterfly in the past. More precisely, though, almost the exact idea and the exact phrasing, of a tiny insect's wing affecting the entire atmosphere's winds, was published in a children's book which became extremely successful and well known globally in 1962, the year before Lorenz published. In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario.
In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called "Deterministic Nonperiodic Flow" (the calculations were performed on a Royal McBee LGP-30 computer). Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can "cause" the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions does not. The flapping wing creates a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different, but it is equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969, which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero).
This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics. In the book "The Essence of Chaos", published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz used the activity of skiing and developed an idealized skiing model to reveal the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC. Theory and mathematical definition. Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather) since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows: "The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances." If $M$ is the state space for the map $f^t$, then $f^t$ displays sensitive dependence on initial conditions if for any $x$ in $M$ and any $\delta > 0$, there are $y$ in $M$, with distance $0 < d(x, y) < \delta$, such that $d(f^{\tau}(x), f^{\tau}(y)) > e^{a\tau}\, d(x, y)$ for some time $\tau$ and some positive parameter $a$. The definition does not require that all points from a neighborhood separate from the base point $x$, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems. The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: $x_{n+1} = 4 x_n (1 - x_n)$, with $0 \le x_0 \le 1$, which, unlike most chaotic maps, has a closed-form solution: $x_n = \sin^2(2^n \theta \pi)$, where the initial condition parameter $\theta$ is given by $\theta = \tfrac{1}{\pi} \sin^{-1}(\sqrt{x_0})$. For rational $\theta$, after a finite number of iterations $x_n$ maps into a periodic sequence. But almost all $\theta$ are irrational, and, for irrational $\theta$, $x_n$ never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor $2^n$ shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps $x_n$ folded within the range $[0, 1]$.
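As a minimal numerical sketch (written in a generic microcomputer BASIC; the starting value 0.2 and the perturbation of 0.000001 are arbitrary choices for illustration), the following program iterates this logistic map from two nearly identical initial conditions and prints both trajectories side by side; the two columns agree closely for the first few dozen steps and then bear no resemblance to each other:

10 REM TWO NEARBY INITIAL CONDITIONS FOR THE LOGISTIC MAP X = 4X(1-X)
20 X = 0.2
30 Y = 0.2 + 0.000001
40 FOR I = 1 TO 50
50 X = 4 * X * (1 - X)
60 Y = 4 * Y * (1 - Y)
70 PRINT I, X, Y
80 NEXT I
90 REM THE TINY INITIAL DIFFERENCE GROWS ROUGHLY EXPONENTIALLY
100 END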
In physical systems. In weather. Overview. The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat overstated." Differentiating types of butterfly effects. The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effects, namely the sensitive dependence on initial conditions and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. The authors of a subsequent study considered Palmer et al.'s suggestions and aimed to present their perspective without raising specific contentions. The third kind of butterfly effect with finite predictability was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions are addressing the validity of these two formulas for estimating predictability limits. A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. Recent debates on butterfly effects. The first kind of butterfly effect (BE1), known as SDIC (Sensitive Dependence on Initial Conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles. In more recent discussions published by "Physics Today", it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events. For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another more recent study. Finite predictability in chaotic systems. According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit.
In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into a single statement; recently, a short video has been created to present this perspective. A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and artificial intelligence (AI) techniques. Revised perspectives on chaotic and non-chaotic systems. By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view of "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) become the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The motion of a double pendulum provides an analogy: for large angles of swing the motion of the pendulum is often chaotic, while for small angles of swing, motions are non-chaotic. Multistability is defined when a system (e.g., the double pendulum system) contains more than one bounded attractor that depends only on initial conditions. Multistability can be illustrated with a kayaking example, in which the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC. On the other hand, when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability. By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: "The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons." In quantum mechanics. The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. Random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist. Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al.
consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos. In popular culture. The butterfly effect has appeared across media such as literature (for instance, "A Sound of Thunder"), films and television (such as "The Simpsons"), video games (such as "Life Is Strange"), webcomics (such as "Homestuck"), musical references (such as "Butterfly Effect" by Travis Scott), AI-driven large language models, and more.
4027
7903804
https://en.wikipedia.org/wiki?curid=4027
Borland
Borland Software Corporation was a computing technology company founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn. Its main business was developing and selling software development and software deployment products. Borland was first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009, the company became a full subsidiary of the British firm Micro Focus International plc. In 2023, Micro Focus (including Borland) was acquired by Canadian firm OpenText, which later absorbed Borland's portfolio into its application delivery management division. History. The 1980s: Foundations. Borland Ltd. was founded in August 1981 by three Danish citizens, Niels Jensen, Ole Henriksen, and Mogens Glad, to develop products like Word Index for the CP/M operating system, using an off-the-shelf company. However, the response to the company's products at the CP/M-82 show in San Francisco showed that a U.S. company would be needed to reach the American market. They met Philippe Kahn, who had just moved to Silicon Valley and had been a key developer of the Micral. Kahn was chairman, president, and CEO of Borland Inc. at its inception in 1983 and until 1995. The first name for the company was not "Borland" but "MIT", an acronym standing for "Market In Time". The name "Borland" originated from a small company in Ireland, one of MIT's initial customers. After it went bankrupt, MIT sought permission to acquire and use the name "Borland" in the U.S., following a legal recommendation during a rebranding prompted by a letter from the Massachusetts Institute of Technology. The main shareholders at the incorporation of Borland were Niels Jensen (250,000 shares), Ole Henriksen (160,000), Mogens Glad (100,000), and Kahn (80,000). Borland International, Inc. era. Borland developed various software development tools. Its first product was Turbo Pascal in 1983, developed by Anders Hejlsberg (who later developed .NET and C# for Microsoft); before Borland acquired it, the product was sold in Scandinavia under the name "Compas Pascal". In 1984, Borland launched Sidekick, a time organization, notebook, and calculator utility that was an early terminate-and-stay-resident program (TSR) for MS-DOS compatible operating systems. By the mid-1980s, the company was prominent enough to have an exhibit at the 1985 West Coast Computer Faire along with IBM and AT&T. Bruce Webster reported that "the legend of Turbo Pascal has by now reached mythic proportions, as evidenced by the number of firms that, in marketing meetings, make plans to become 'the next Borland'". After Turbo Pascal and Sidekick, the company launched other applications such as SuperKey and Lightning, all developed in Denmark. While the Danes remained majority shareholders, board members included Kahn, Tim Berry, John Nash, and David Heller. With the assistance of John Nash and David Heller, both British members of the Borland board, the company was taken public on London's Unlisted Securities Market (USM) in 1986. Schroders was the lead investment banker. According to the London IPO filings, the management team was Philippe Kahn as president, Spencer Ozawa as VP of Operations, Marie Bourget as CFO, and Spencer Leyton as VP of sales and business development. All software development continued to take place in Denmark and later London as the Danish co-founders moved there.
A first US IPO followed in 1989, after Ben Rosen joined the Borland board, with Goldman Sachs as the lead banker; a second offering followed in 1991 with Lazard as the lead banker. In 1985, Borland acquired Analytica and its Reflex database product. Forrester Research considered Borland (with Analytica), Ashton-Tate, Lotus Development, and Microsoft the "Big Four" of personal computer software. The engineering team of Analytica, managed by Brad Silverberg and including Reflex co-founder Adam Bosworth, became the core of Borland's engineering team in the US. Brad Silverberg was VP of engineering until he left in early 1990 to head up the Personal Systems division at Microsoft. Adam Bosworth initiated and headed up the Quattro project until moving to Microsoft later in 1990 to take over the project which eventually became Access. In 1987, Borland purchased Wizard Systems and incorporated portions of the Wizard C technology into Turbo C. Bob Jervis, the author of Wizard C, became a Borland employee. Turbo C was released on May 18, 1987. This drove a wedge between Borland and Niels Jensen and the other members of his team, who had been working on a brand-new series of compilers at their London development centre. They reached an agreement and spun off a company named Jensen & Partners International (JPI), later TopSpeed. JPI first launched an MS-DOS compiler named JPI Modula-2, which later became TopSpeed Modula-2, and followed up with TopSpeed C, TopSpeed C++, and TopSpeed Pascal compilers for both the MS-DOS and OS/2 operating systems. The TopSpeed compiler technology still exists as the underlying technology of the Clarion 4GL programming language, a Windows development tool. Fiscal 1987 revenue was $29.2 million and pretax earnings were $4.7 million. Borland by that year was directly confronting Lotus and Ashton-Tate. In September 1987, Borland purchased Ansa-Software, including their Paradox (version 2.0) database management tool. Richard Schwartz, a cofounder of Ansa, became Borland's CTO, and Ben Rosen joined the Borland board. The Quattro Pro spreadsheet was launched in 1989. Lotus Development, under the leadership of Jim Manzi, sued Borland for copyright infringement (see Look and feel). The litigation, "Lotus Dev. Corp. v. Borland Int'l, Inc.", brought forward Borland's open standards position as opposed to Lotus' closed approach. Borland, under Kahn's leadership, took a position of principle and announced that it would defend against Lotus' legal position and "fight for programmer's rights". After a decision in favor of Borland by the First Circuit Court of Appeals, the case went to the United States Supreme Court. Because Justice John Paul Stevens had recused himself, only eight justices heard the case, and it concluded in a 4–4 tie. As a result, the First Circuit Court decision remained standing but did not bind any other court and set no national precedent. Additionally, Borland's approach towards software piracy and intellectual property (IP) included its "Borland no-nonsense license agreement", allowing the developer/user to utilize its products "just like a book". The user was allowed to make multiple copies of a program, as long as it was the only copy in use at any point in time. The 1990s: Rise and change. In September 1991, Borland purchased Ashton-Tate, bringing the dBASE and InterBase databases to the house, in an all-stock transaction. However, competition with Microsoft was fierce.
Microsoft launched the competing database Microsoft Access and bought the dBASE clone FoxPro in 1992, undercutting Borland's prices. During the early 1990s, Borland's implementation of C and C++ outsold Microsoft's. Borland survived as a company, but no longer dominated the software tools market as it once had. It went through a radical transition in products, financing, and staff, and became a very different company from the one which challenged Microsoft and Lotus in the early 1990s. The internal problems that arose with the Ashton-Tate merger were a large part of the downfall. Ashton-Tate's product portfolio proved to be weak, with no provision for evolution into the GUI environment of Windows. Almost all product lines were discontinued. The consolidation of duplicate support and development offices was costly and disruptive. Worst of all, the highest revenue earner of the combined company was dBASE, with no Windows version ready. Borland had an internal project to clone dBASE, intended to run on Windows, which was part of the strategy of the acquisition. By late 1992 this was abandoned due to technical flaws, and the company had to constitute a replacement team (the ObjectVision team, redeployed), headed by Bill Turpin, to redo the job. Borland lacked the financial strength to project its marketing and move internal resources off other products to shore up the dBASE/W effort. Layoffs occurred in 1993 to keep the company afloat, the third instance of this in five years. By the time dBASE for Windows eventually shipped, the developer community had moved on to other products such as Clipper or FoxBase, and dBASE never regained a significant share of Ashton-Tate's former market. This happened against the backdrop of the rise in Microsoft's combined Office product marketing. A change in market conditions also contributed to Borland's fall from prominence. In the 1980s, companies had few people who understood the growing personal computer phenomenon, and so most technical people were given free rein to purchase whatever software they thought they needed. Borland had done an excellent job marketing to those with a highly technical bent. By the mid-1990s, however, companies were beginning to ask what the return was on the investment they had made in this loosely controlled PC software buying spree. Company executives were starting to ask questions that were hard for technically minded staff to answer, and so corporate standards began to be created. This required new kinds of marketing and support materials from software vendors, but Borland remained focused on the technical side of its products. In 1993, Borland explored ties with WordPerfect as a possible way to form a suite of programs to rival Microsoft's nascent integration strategy. WordPerfect itself was struggling with a late and troubled transition to Windows. The eventual joint company effort, named Borland Office for Windows (a combination of the WordPerfect word processor, Quattro Pro spreadsheet, and Paradox database), was introduced at the 1993 Comdex computer show. Borland Office never made significant inroads against Microsoft Office. WordPerfect was then bought by Novell. In October 1994, Borland sold Quattro Pro and rights to sell up to a million copies of Paradox to Novell for $140 million in cash, repositioning the company on its core software development tools and the InterBase database engine and shifting toward client-server scenarios in corporate applications.
This later proved a good foundation for the shift to web development tools. Philippe Kahn and the Borland board disagreed on how to focus the company, and Kahn resigned as chairman, CEO and president, after 12 years, in January 1995. Kahn remained on the board until November 7, 1996. Borland named Gary Wetsel as CEO, but he resigned in July 1996. William F. Miller was interim CEO until September of that year, when Whitney G. Lynn became interim president and CEO (along with other executive changes), followed by a succession of CEOs including Dale Fuller and Tod Nielsen. The Delphi 1 rapid application development (RAD) environment was launched in 1995, under the leadership of Anders Hejlsberg. In 1996, Borland acquired Open Environment Corporation, a Cambridge-based company founded by John J. Donovan. On November 25, 1996, Del Yocam was hired as Borland CEO and chairman. In 1997, Borland sold Paradox to Corel, but retained all development rights for the core BDE. In November 1997, Borland acquired Visigenic, a middleware company that was focused on implementations of CORBA. Inprise Corporation era. In April 1998, Borland International, Inc. announced it had become Inprise Corporation. For several years, before and during the Inprise name, Borland suffered from serious financial losses and a poor public image. When the name was changed to Inprise, many thought Borland had gone out of business. In March 1999, dBASE was sold to KSoft, Inc., which was soon renamed dBASE Inc. (In 2004, dBASE Inc. was renamed DataBased Intelligence, Inc.) In 1999, Dale L. Fuller replaced Yocam. At this time Fuller's title was "interim president and CEO"; the "interim" was dropped in December 2000. Keith Gottfried served in senior executive positions with the company from 2000 to 2004. A proposed merger between Inprise and Corel was announced in February 2000, aimed at producing Linux-based products. The plan was abandoned when Corel's shares fell and it became clear that there was no strategic fit. InterBase 6.0 was made available as open-source software in July 2000. In November 2000, Inprise Corporation announced that it intended to officially change its name to Borland Software Corporation. The legal name of the company would continue to be Inprise Corporation until the completion of the renaming process during the first quarter of 2001. Once the name change was completed, the company would also expect to change its Nasdaq market symbol from "INPR" to "BORL". Borland Software Corporation era. On January 2, 2001, Borland Software Corporation announced it had completed its name change from Inprise Corporation. Effective at the opening of trading on Nasdaq, the company's Nasdaq market symbol would also be changed from "INPR" to "BORL". Under the Borland name and a new management team headed by president and CEO Dale L. Fuller, a now-smaller and profitable Borland refocused on Delphi and created a version of Delphi and C++Builder for Linux, both under the name Kylix. This brought Borland's expertise in integrated development environments to the Linux platform for the first time. Kylix was launched in 2001. Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and has developed and sold new versions at a fast pace.
In 2001, Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services. C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. By the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET were combined into one IDE named "Borland Developer Studio", though it was still popularly known as "Delphi". In late 2002, Borland purchased design tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (CaliberRM was eventually renamed "Caliber"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development. Former CEO Dale Fuller quit in July 2005, but remained on the board of directors. Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO. In early 2007, Borland announced new branding focused on open application life-cycle management. In April 2007, Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also had development centers in Singapore, Santa Ana, California, Prague, Czech Republic, and Linz, Austria. On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. Borland was estimated to have 750 employees at the time. On April 5, 2015, Micro Focus announced the completion of its integration of the Attachmate Group of companies, which had been merged on November 20, 2014; during the integration period, the affected companies were merged into one organization. In the announced reorganization, Borland products would be part of the Micro Focus portfolio. Products. Recent. The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997. Marketing. Renaming to Inprise Corporation. Along with renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise application development. Borland hired the marketing firm Lexicon Branding to come up with a new name for the company. Yocam explained that the new name, Inprise, was meant to evoke "integrating the enterprise". The idea was to integrate Borland's tools, Delphi, C++Builder, and JBuilder with enterprise environment software, including Visigenic's implementations of CORBA, Visibroker for C++ and Java, and the new product, Application Server. Frank Borland. Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and on the cover of the Borland Sidekick 1.0 manual, in 1984, during the Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor - A Turbo Pascal Tutorial, and in Borland JBuilder 2.
A live-action version of Frank Borland, created by True Agency Limited, was made after Micro Focus plc acquired Borland Software Corporation. An introductory film was also made about the mascot.
4031
3492060
https://en.wikipedia.org/wiki?curid=4031
Buckminster Fuller
Richard Buckminster Fuller (; July 12, 1895 – July 1, 1983) was an American architect, systems theorist, writer, designer, inventor, philosopher, and futurist. He styled his name as R. Buckminster Fuller in his writings, publishing more than 30 books and coining or popularizing such terms as "Spaceship Earth", "Dymaxion" (e.g., Dymaxion house, Dymaxion car, Dymaxion map), "ephemeralization", "synergetics", and "tensegrity". Fuller developed numerous inventions, mainly architectural designs, and popularized the widely known geodesic dome; carbon molecules known as fullerenes were later named by scientists for their structural and mathematical resemblance to geodesic spheres. He also served as the second World President of Mensa International from 1974 to 1983. Fuller was awarded 28 United States patents and many honorary doctorates. In 1960, he was awarded the Frank P. Brown Medal from The Franklin Institute. He was elected an honorary member of Phi Beta Kappa in 1967, on the occasion of the 50-year reunion of his Harvard class of 1917 (from which he had been expelled in his first year). He was elected a Fellow of the American Academy of Arts and Sciences in 1968. The same year, he was elected into the National Academy of Design as an Associate member. He became a full Academician in 1970, and he received the Gold Medal award from the American Institute of Architects the same year. Also in 1970, Fuller received the title of Master Architect from Alpha Rho Chi (APX), the national fraternity for architecture and the allied arts. In 1976, he received the St. Louis Literary Award from the Saint Louis University Library Associates. In 1977, he received the Golden Plate Award of the American Academy of Achievement. He also received numerous other awards, including the Presidential Medal of Freedom, presented to him on February 23, 1983, by President Ronald Reagan. Life and work. Fuller was born on July 12, 1895, in Milton, Massachusetts, the son of Richard Buckminster Fuller, a prosperous leather and tea merchant, and Caroline Wolcott Andrews. He was a grand-nephew of Margaret Fuller, an American journalist, critic, and women's rights advocate associated with the American transcendentalism movement. The unusual middle name, Buckminster, was an ancestral family name. As a child, Richard Buckminster Fuller tried numerous variations of his name. He used to sign his name differently each year in the guest register of his family summer vacation home at Bear Island, Maine. He finally settled on R. Buckminster Fuller. Fuller spent much of his youth on Bear Island, in Penobscot Bay off the coast of Maine. He attended a Froebelian kindergarten. He was dissatisfied with the way geometry was taught in school, disagreeing with the notions that a chalk dot on the blackboard represented an "empty" mathematical point, or that a line could stretch off to infinity. To him these notions were illogical, and they led to his work on synergetics. He often made items from materials he found in the woods, and sometimes made his own tools. He experimented with designing a new apparatus for human propulsion of small boats. By age 12, he had invented a "push pull" system for propelling a rowboat by use of an inverted umbrella connected to the transom with a simple oar lock, which allowed the user to face forward and point the boat toward its destination. Later in life, Fuller took exception to the term "invention."
Years later, he decided that this sort of experience had provided him with not only an interest in design, but also a habit of being familiar with and knowledgeable about the materials that his later projects would require. Fuller earned a machinist's certification, and knew how to use the press brake, stretch press, and other tools and equipment used in the sheet metal trade. Education. Fuller attended Milton Academy in Massachusetts, and after that began studying at Harvard University, where he was affiliated with Adams House. He was expelled from Harvard twice: first for spending all his money partying with a vaudeville troupe, and then, after having been readmitted, for his "irresponsibility and lack of interest." By his own appraisal, he was a non-conforming misfit in the fraternity environment. Wartime experience. Between his sessions at Harvard, Fuller worked in Canada as a mechanic in a textile mill, and later as a laborer in the meat-packing industry. He also served in the U.S. Navy in World War I, as a shipboard radio operator, as an editor of a publication, and as commander of the crash rescue boat USS "Inca". After discharge, he worked again in the meat-packing industry, acquiring management experience. In 1917, he married Anne Hewlett. During the early 1920s, he and his father-in-law developed the Stockade Building System for producing lightweight, weatherproof, and fireproof housing—although the company would ultimately fail in 1927. Depression and epiphany. Fuller recalled 1927 as a pivotal year of his life. His daughter Alexandra had died in 1922 of complications from polio and spinal meningitis just before her fourth birthday. Barry Katz, a Stanford University scholar who wrote about Fuller, found signs that around this time in his life Fuller had developed depression and anxiety. Fuller dwelled on his daughter's death, suspecting that it was connected with the Fullers' damp and drafty living conditions. This provided motivation for Fuller's involvement in Stockade Building Systems, a business which aimed to provide affordable, efficient housing. In 1927, at age 32, Fuller lost his job as president of Stockade. The Fuller family had no savings, and the birth of their daughter Allegra in 1927 added to the financial challenges. Fuller drank heavily and reflected upon the solution to his family's struggles on long walks around Chicago. During the autumn of 1927, Fuller contemplated suicide by drowning in Lake Michigan, so that his family could benefit from a life insurance payment. Fuller said that he had experienced a profound incident which would provide direction and purpose for his life. He felt as though he were suspended several feet above the ground, enclosed in a white sphere of light. A voice spoke directly to Fuller, and declared: Fuller stated that this experience led to a profound re-examination of his life. He ultimately chose to embark on "an experiment, to find what a single individual could contribute to changing the world and benefiting all humanity." Speaking to audiences later in life, Fuller would frequently recount the story of his Lake Michigan experience and its transformative impact on his life. Recovery. In 1927, Fuller resolved to think independently, which included a commitment to "the search for the principles governing the universe and help advance the evolution of humanity in accordance with them ... finding ways of "doing more with less" to the end that all people everywhere can have more and more."
By 1928, Fuller was living in Greenwich Village and spending much of his time at the popular café Romany Marie's, where he had spent an evening in conversation with Marie and Eugene O'Neill several years earlier. Fuller accepted a job decorating the interior of the café in exchange for meals and gave informal lectures several times a week; models of the Dymaxion house were exhibited at the café. Isamu Noguchi arrived during 1929—Constantin Brâncuși, an old friend of Marie's, had directed him there—and Noguchi and Fuller were soon collaborating on several projects, including the modeling of the Dymaxion car based on recent work by Aurel Persu. It was the beginning of their lifelong friendship. Geodesic domes. Fuller taught at Black Mountain College in North Carolina during the summers of 1948 and 1949, serving as its Summer Institute director in 1949. Fuller had been shy and withdrawn, but he was persuaded to participate in a theatrical performance of Erik Satie's "Le piège de Méduse" produced by John Cage, who was also teaching at Black Mountain. During rehearsals, under the tutelage of Arthur Penn, then a student at Black Mountain, Fuller broke through his inhibitions to become confident as a performer and speaker. At Black Mountain, with the support of a group of professors and students, he began reinventing a project that would make him famous: the geodesic dome. Although Dr. Walther Bauersfeld had created and built a geodesic dome, and had been awarded a German patent for it on June 19, 1925, Fuller was awarded United States patents. Fuller's patent application made no mention of Bauersfeld's self-supporting dome built some 26 years prior. Although Fuller undoubtedly popularized this type of structure, he is mistakenly given credit for its design. One of his early models was first constructed in 1945 at Bennington College in Vermont, where he lectured often. Although Bauersfeld's dome could support a full skin of concrete, it was not until 1949 that Fuller erected a geodesic dome building that could sustain its own weight with no practical limits. It was in diameter and constructed of aluminium aircraft tubing and a vinyl-plastic skin, in the form of an icosahedron. To prove his design, Fuller suspended from the structure's framework several students who had helped him build it. The U.S. government recognized the importance of this work, and employed his firm Geodesics, Inc. in Raleigh, North Carolina to make small domes for the Marines. Within a few years, there were thousands of such domes around the world. Fuller's first "continuous tension – discontinuous compression" geodesic dome (full sphere in this case) was constructed at the University of Oregon Architecture School in 1959 with the help of students. These continuous tension – discontinuous compression structures featured single-force compression members (no flexure or bending moments) that did not touch each other and were 'suspended' by the tensional members. Dymaxion Chronofile. For half a century, Fuller developed many ideas, designs, and inventions, particularly regarding practical, inexpensive shelter and transportation. He documented his life, philosophy, and ideas scrupulously in a daily diary (later called the "Dymaxion Chronofile"), and in twenty-eight publications. Fuller financed some of his experiments with inherited funds, sometimes augmented by funds invested by his collaborators, one example being the Dymaxion car project. World stage.
International recognition began with the success of huge geodesic domes during the 1950s. Fuller lectured at North Carolina State University in Raleigh in 1949, where he met James Fitzgibbon, who would become a close friend and colleague. Fitzgibbon was director of Geodesics, Inc. and Synergetics, Inc., the first licensees to design geodesic domes. Thomas C. Howard was lead designer, architect, and engineer for both companies. Richard Lewontin, a new faculty member in population genetics at North Carolina State University, provided Fuller with computer calculations for the lengths of the domes' edges. Fuller began working with architect Shoji Sadao in 1954, together designing a hypothetical Dome over Manhattan in 1960, and in 1964 they co-founded the architectural firm Fuller & Sadao Inc., whose first project was to design the large geodesic dome for the U.S. Pavilion at Expo 67 in Montreal. This building is now the "Montreal Biosphère". In 1962, the artist and researcher John McHale wrote the first monograph on Fuller, published by George Braziller in New York. After employing several Southern Illinois University Carbondale (SIU) graduate students to rebuild his models following an apartment fire in the summer of 1959, Fuller was recruited by longtime friend Harold Cohen to serve as a research professor of "design science exploration" at the institution's School of Art and Design. According to SIU architecture professor Jon Davey, the position was "unlike most faculty appointments ... more a celebrity role than a teaching job" in which Fuller offered few courses and was required to spend only two months per year on campus. Nevertheless, his time in Carbondale was "extremely productive", and Fuller was promoted to university professor in 1968 and distinguished university professor in 1972. Working as a designer, scientist, developer, and writer, he continued to lecture for many years around the world. He collaborated at SIU with John McHale. In 1965, they inaugurated the World Design Science Decade (1965 to 1975) at the meeting of the International Union of Architects in Paris, which was, in Fuller's own words, devoted to "applying the principles of science to solving the problems of humanity." From 1972 until retiring as university professor emeritus in 1975, Fuller held a joint appointment at Southern Illinois University Edwardsville, where he had designed the dome for the campus Religious Center in 1971. During this period, he also held a joint fellowship at a consortium of Philadelphia-area institutions, including the University of Pennsylvania, Bryn Mawr College, Haverford College, Swarthmore College, and the University City Science Center; as a result of this affiliation, the University of Pennsylvania appointed him university professor emeritus in 1975. Fuller believed human societies would soon rely mainly on renewable sources of energy, such as solar- and wind-derived electricity. He hoped for an age of "omni-successful education and sustenance of all humanity." Fuller referred to himself as "the property of universe" and, during one radio interview he gave later in life, declared himself and his work "the property of all humanity." For his lifetime of work, the American Humanist Association named him the 1969 Humanist of the Year. In 1976, Fuller was a key participant at UN Habitat I, the first UN forum on human settlements. Last filmed appearance. Fuller gave his last filmed interview on June 21, 1983, when he spoke at Norman Foster's Royal Gold Medal for architecture ceremony.
His speech, delivered after Sir Robert Sainsbury's introductory speech and Foster's keynote address, can be watched in the archives of the AA School of Architecture. In May 1983, Buckminster Fuller participated in an interview with futurist Barbara Marx Hubbard. The hour-long DVD, "Our Spiritual Experience: A Conversation with Buckminster Fuller and Barbara Marx Hubbard", was produced by David L. Smith and hosted by Michael Toms of New Dimensions Radio. The program was recorded at Xavier University in Cincinnati, Ohio. Death. In the year of his death, Fuller described himself as follows: Fuller died on July 1, 1983, 11 days before his 88th birthday. During the period leading up to his death, his wife had been lying comatose in a Los Angeles hospital, dying of cancer. It was while visiting her there that he exclaimed, at a certain point: "She is squeezing my hand!" He then stood up, had a heart attack, and died an hour later, at age 87. His wife of 66 years died 36 hours later. They are buried in Mount Auburn Cemetery in Cambridge, Massachusetts. Philosophy. Buckminster Fuller was a Unitarian, like his grandfather Arthur Buckminster Fuller (brother of Margaret Fuller), a Unitarian minister. Fuller was also an early environmental activist, aware of Earth's finite resources, and promoted a principle he termed "ephemeralization", which, according to futurist and Fuller disciple Stewart Brand, was defined as "doing more with less". Resources and waste from crude, inefficient products could be recycled into making more valuable products, thus increasing the efficiency of the entire process. Fuller also coined the word synergetics, a catch-all term used broadly for communicating experiences using geometric concepts, and more specifically, the empirical study of systems in transformation; his focus was on total system behavior unpredicted by the behavior of any isolated components. Fuller was a pioneer in thinking globally and explored energy and material efficiency in the fields of architecture, engineering, and design. In his book "Critical Path" (1981), he cited the opinion of François de Chadenèdes (1920–1999) that petroleum, from the standpoint of its replacement cost in our current energy "budget" (essentially, the net incoming solar flux), had cost nature "over a million dollars" per U.S. gallon ($300,000 per litre) to produce. From this point of view, its use as a transportation fuel by people commuting to work represents a huge net loss compared to their actual earnings. His views might best be summed up by the quotation: "There is no energy crisis, only a crisis of ignorance." Though Fuller was concerned about sustainability and human survival under the existing socioeconomic system, he remained optimistic about humanity's future. Defining wealth in terms of knowledge as the "technological ability to protect, nurture, support, and accommodate all growth needs of life", his analysis of the condition of "Spaceship Earth" caused him to conclude that at a certain time during the 1970s, humanity had attained an unprecedented state. He was convinced that the accumulation of relevant knowledge, combined with the quantities of major recyclable resources that had already been extracted from the earth, had attained a critical level, such that competition for necessities had become unnecessary.
Cooperation had become the optimum survival strategy. He declared: "selfishness is unnecessary and henceforth unrationalizable ... War is obsolete." He criticized previous utopian schemes as too exclusive and thought this was a major source of their failure. To work, he felt that a utopia needed to include everyone. Fuller was influenced by Alfred Korzybski's idea of general semantics. In the 1950s, Fuller attended seminars and workshops organized by the Institute of General Semantics, and he delivered the annual Alfred Korzybski Memorial Lecture in 1955. Korzybski is mentioned in the Introduction of his book "Synergetics". The two shared a remarkable degree of similarity in their general semantics formulations. In his 1970 book, "I Seem To Be a Verb", he wrote: "I live on Earth at present, and I don't know what I am. I know that I am not a category. I am not a thing—a noun. I seem to be a verb, an evolutionary process—an integral function of the universe." Fuller wrote that the universe's natural analytic geometry was based on tetrahedra arrays. He developed this in several ways, from the close-packing of spheres to the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral. He had become a guru of the design, architecture, and "alternative" communities, such as Drop City, the community of experimental artists to whom he awarded the 1966 "Dymaxion Award" for "poetically economic" domed living structures. Major design projects. The geodesic dome. Fuller was most famous for his lattice shell structures – geodesic domes, which have been used as parts of military radar stations, civic buildings, environmental protest camps, and exhibition attractions. An examination of the geodesic design by Walther Bauersfeld for the Zeiss-Planetarium, built some 28 years prior to Fuller's work, reveals that Fuller's Geodesic Dome patent (U.S. 2,682,235; awarded in 1954) is the same design as Bauersfeld's. Their construction is based on extending some basic principles to build simple "tensegrity" structures (tetrahedron, octahedron, and the closest packing of spheres), making them lightweight and stable. The geodesic dome was a result of Fuller's exploration of nature's constructing principles to find design solutions. The Fuller Dome is referenced in the Hugo Award-winning 1968 novel "Stand on Zanzibar" by John Brunner, in which a geodesic dome is said to cover the entire island of Manhattan, and it floats on air due to the hot-air balloon effect of the large air-mass under the dome (and perhaps its construction of lightweight materials). Transportation. The Dymaxion car was a vehicle designed by Fuller that was featured prominently at Chicago's 1933-1934 Century of Progress World's Fair. During the Great Depression, Fuller formed the "Dymaxion Corporation" and built three prototypes with noted naval architect Starling Burgess and a team of 27 workmen — using donated money as well as a family inheritance. Fuller associated the word "Dymaxion", a blend of the words "dynamic", "maximum", and "tension", with the goal of his study, "maximum gain of advantage from minimal energy input". The Dymaxion was not an automobile but rather the 'ground-taxying mode' of a vehicle that might one day be designed to fly, land and drive — an "Omni-Medium Transport" for air, land and water. Fuller focused on the landing and taxiing qualities, and noted severe limitations in its handling.
The team made improvements and refinements to the platform, and Fuller noted that the Dymaxion "was an invention that could not be made available to the general public without considerable improvements". The bodywork was aerodynamically designed for increased fuel efficiency, and its platform featured a lightweight chromoly-steel hinged chassis, rear-mounted V8 engine, front-drive, and three wheels. The vehicle was steered via the third wheel at the rear, capable of 90° steering lock. Able to steer in a tight circle, the Dymaxion often caused a sensation, bringing nearby traffic to a halt. Shortly after launch, a prototype rolled over and crashed, killing the Dymaxion's driver and seriously injuring its passengers. Fuller blamed the accident on a second car that collided with the Dymaxion. Eyewitnesses reported, however, that the other car hit the Dymaxion only after it had begun to roll over. Despite courting the interest of important figures from the auto industry, Fuller used his family inheritance to finish the second and third prototypes — eventually selling all three, dissolving "Dymaxion Corporation" and maintaining that the Dymaxion was never intended as a commercial venture. One of the three original prototypes survives. Housing. Fuller's energy-efficient and inexpensive Dymaxion house garnered much interest, but only two prototypes were ever produced. Here the term "Dymaxion" is used in effect to signify a "radically strong and light tensegrity structure". One of Fuller's Dymaxion Houses is on display as a permanent exhibit at the Henry Ford Museum in Dearborn, Michigan. Designed and developed during the mid-1940s, this prototype is a round structure (not a dome), shaped something like the flattened "bell" of certain jellyfish. It has several innovative features, including revolving dresser drawers, and a fine-mist shower that reduces water consumption. According to Fuller biographer Steve Crooks, the house was designed to be delivered in two cylindrical packages, with interior color panels available at local dealers. A circular structure at the top of the house was designed to rotate around a central mast to use natural winds for cooling and air circulation. Conceived nearly two decades earlier, and developed in Wichita, Kansas, the house was designed to be lightweight, adapted to windy climates, cheap to produce and easy to assemble. Because of its light weight and portability, the Dymaxion House was intended to be the ideal housing for individuals and families who wanted the option of easy mobility. The design included a "Go-Ahead-With-Life Room" stocked with maps, charts, and helpful tools for travel "through time and space". It was to be produced using factories, workers, and technologies that had produced World War II aircraft. It looked ultramodern at the time, built of metal, and sheathed in polished aluminum. The basic model enclosed of floor area. Due to publicity, there were many orders during the early post-war years, but the company that Fuller and others had formed to produce the houses failed due to management problems. In 1967, Fuller developed a concept for an offshore floating city named Triton City and published a report on the design the following year. Models of the city aroused the interest of President Lyndon B. Johnson who, after leaving office, had them placed in the Lyndon Baines Johnson Library and Museum. In 1969, Fuller began the Otisco Project, named after its location in Otisco, New York.
The project developed and demonstrated concrete spray with mesh-covered wireforms for producing large-scale, load-bearing spanning structures built on-site, without the use of pouring molds, other adjacent surfaces, or hoisting. The initial method used a circular concrete footing in which anchor posts were set. Tubes cut to length and with ends flattened were then bolted together to form a duodeca-rhombicahedron (22-sided hemisphere) geodesic structure with spans ranging to . The form was then draped with layers of ¼-inch wire mesh attached by twist ties. Concrete was sprayed onto the structure, building up a solid layer which, when cured, would support additional concrete to be added by a variety of traditional means. Fuller referred to these buildings as monolithic ferroconcrete geodesic domes. However, the tubular frame form proved problematic for setting windows and doors. It was replaced by iron rebar set vertically in the concrete footing and then bent inward and welded in place to create the dome's wireform structure; this method performed satisfactorily. Domes up to three stories tall built with this method proved to be remarkably strong. Other shapes such as cones, pyramids, and arches proved equally adaptable. The project was enabled by a grant underwritten by Syracuse University and sponsored by U.S. Steel (rebar), the Johnson Wire Corp (mesh), and Portland Cement Company (concrete). The ability to build large, complex, load-bearing concrete spanning structures in free space would open many possibilities in architecture, and is considered one of Fuller's greatest contributions. Dymaxion map and World Game. Fuller, along with co-cartographer Shoji Sadao, also designed an alternative projection map, called the Dymaxion map. This was designed to show Earth's continents with minimum distortion when projected or printed on a flat surface. In the 1960s, Fuller developed the World Game, a collaborative simulation game played on a 70-by-35-foot Dymaxion map, in which players attempt to solve world problems. The object of the simulation game is, in Fuller's words, to "make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone". Appearance and style. Buckminster Fuller wore thick-lensed spectacles to correct his extreme hyperopia, a condition that went undiagnosed for the first five years of his life. Fuller's hearing was damaged during his naval service in World War I and deteriorated during the 1960s. After experimenting with bullhorns as hearing aids during the mid-1960s, Fuller adopted electronic hearing aids from the 1970s onward. In public appearances, Fuller always wore dark-colored suits, appearing like "an alert little clergyman". Previously, he had experimented with unconventional clothing immediately after his 1927 epiphany, but found that breaking social fashion customs made others devalue or dismiss his ideas. Fuller learned the importance of physical appearance as part of one's credibility, and decided to become "the invisible man" by dressing in clothes that would not draw attention to himself. With self-deprecating humor, Fuller described this black-suited appearance as resembling a "second-rate bank clerk". Writer Guy Davenport met him in 1965 and described him thus: Lifestyle. Following his global prominence from the 1960s onward, Fuller became a frequent flier, often crossing time zones to lecture.
In the 1960s and 1970s, he wore three watches simultaneously: one for the time zone of his office at Southern Illinois University, one for the time zone of the location he would next visit, and one for the time zone he was currently in. In the 1970s, Fuller was only in 'homely' locations (his personal home in Carbondale, Illinois; his holiday retreat on Bear Island, Maine; and his daughter's home in Pacific Palisades, California) roughly 65 nights per year—the other 300 nights were spent in hotel beds in the locations he visited on his lecturing and consulting circuits. In the 1920s, Fuller experimented with polyphasic sleep, which he called "Dymaxion sleep". Inspired by the sleep habits of animals such as dogs and cats, Fuller worked until he was tired, and then slept short naps. This generally resulted in Fuller taking 30-minute naps every 6 hours. This allowed him "twenty-two thinking hours a day", which aided his work productivity. Fuller reportedly kept this Dymaxion sleep habit for two years, before quitting the routine because it conflicted with his business associates' sleep habits. Despite no longer personally partaking in the habit, in 1943 Fuller suggested Dymaxion sleep as a strategy that the United States could adopt to win World War II. Despite only practicing true polyphasic sleep for a period during the 1920s, Fuller was known for his stamina throughout his life. He was described as "tireless" by Barry Farrell in "Life" magazine, who noted that Fuller stayed up all night replying to mail during Farrell's 1970 trip to Bear Island. In his seventies, Fuller generally slept for 5–8 hours per night. Fuller documented his life copiously from 1915 to 1983, approximately of papers in a collection called the Dymaxion Chronofile. He also kept copies of all incoming and outgoing correspondence. The enormous R. Buckminster Fuller Collection is currently housed at Stanford University. Language and neologisms. Buckminster Fuller spoke and wrote in a unique style and said it was important to describe the world as accurately as possible. Fuller often created long run-on sentences and used unusual compound words (omniwell-informed, intertransformative, omni-interaccommodative, omniself-regenerative), as well as terms he himself invented. His style of speech was characterized by progressively rapid and breathless delivery and rambling digressions of thought, which Fuller described as "thinking out loud". The effect, combined with Fuller's dry voice and non-rhotic New England accent, was varyingly considered "hypnotic" or "overwhelming". Fuller used the word "Universe" without the definite or indefinite article ("the" or "a") and always capitalized the word. Fuller wrote that "by Universe I mean: the aggregate of all humanity's consciously apprehended and communicated (to self or others) Experiences". The words "down" and "up", according to Fuller, are awkward in that they refer to a planar concept of direction inconsistent with human experience. The words "in" and "out" should be used instead, he argued, because they better describe an object's relation to a gravitational center, the Earth. "I suggest to audiences that they say, 'I'm going "outstairs" and "instairs."' At first that sounds strange to them; they all laugh about it. But if they try saying in and out for a few days in fun, they find themselves beginning to realize that they are indeed going inward and outward in respect to the center of Earth, which is our Spaceship Earth.
And for the first time they begin to feel real 'reality.'" Fuller preferred the term "world-around" to replace "worldwide". The general belief in a flat Earth died out in classical antiquity, so using "wide" is an anachronism when referring to the surface of the Earth—a spheroidal surface has area and encloses a volume but has no width. Fuller held that unthinking use of obsolete scientific ideas detracts from and misleads intuition. Other neologisms collectively invented by the Fuller family, according to Allegra Fuller Snyder, are the terms "sunsight" and "sunclipse", replacing "sunrise" and "sunset" to overturn the geocentric bias of most pre-Copernican celestial mechanics. Fuller also invented the word "livingry", as opposed to weaponry (or "killingry"), to mean that which is in support of all human, plant, and Earth life. "The architectural profession—civil, naval, aeronautical, and astronautical—has always been the place where the most competent thinking is conducted regarding livingry, as opposed to weaponry." As well as contributing significantly to the development of tensegrity technology, Fuller invented the term "tensegrity", a portmanteau of "tensional integrity". "Tensegrity describes a structural-relationship principle in which structural shape is guaranteed by the finitely closed, comprehensively continuous, tensional behaviors of the system and not by the discontinuous and exclusively local compressional member behaviors. Tensegrity provides the ability to yield increasingly without ultimately breaking or coming asunder." "Dymaxion" is a portmanteau of "dynamic maximum tension". It was invented around 1929 by two admen at Marshall Field's department store in Chicago to describe Fuller's concept house, which was shown as part of a "house of the future" store display. They created the term using three words that Fuller used repeatedly to describe his design – dynamic, maximum, and tension. Fuller also helped to popularize the concept of Spaceship Earth: "The most important fact about Spaceship Earth: an instruction manual didn't come with it." In the preface for his "cosmic fairy tale" "Tetrascroll: Goldilocks and the Three Bears", Fuller stated that his distinctive speaking style grew out of years of embellishing the classic tale for the benefit of his daughter, allowing him to explore both his new theories and how to present them. The "Tetrascroll" narrative was eventually transcribed onto a set of tetrahedral lithographs (hence the name), as well as being published as a traditional book. Fuller's language posed problems for his credibility. John Julius Norwich recalled commissioning a 600-word introduction for a planned history of world architecture from him, and receiving a 3500-word proposal which ended: Norwich commented: "On reflection, I asked Dr. Nikolaus Pevsner instead." Concepts and buildings. His concepts and buildings include: Influence and legacy. Among the many people who were influenced by Buckminster Fuller are: Constance Abernathy, Ruth Asawa, J. Baldwin, Michael Ben-Eli, Pierre Cabrol, John Cage, Joseph Clinton, Peter Floyd, Norman Foster, Medard Gabel, Michael Hays, Ted Nelson, David Johnston, Peter Jon Pearce, Shoji Sadao, Edwin Schlossberg, Kenneth Snelson, Robert Anton Wilson, Stewart Brand, Jason McLennan, and John Denver. An allotrope of carbon, fullerene, and a particular molecule of that allotrope, C60 (buckminsterfullerene or buckyball), have been named after him.
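One way to make the geometric kinship between the buckyball and the geodesic sphere concrete is Euler's polyhedron formula; the figures below are standard geometry offered as an illustration, not taken from Fuller's own writings. A truncated icosahedron, the polyhedron shared by the C60 molecule and the simplest "soccer ball" geodesic sphere, has 60 vertices, 90 edges and 32 faces (12 pentagons and 20 hexagons), and the same formula shows that any closed cage of pentagons and hexagons with three edges meeting at every vertex must contain exactly 12 pentagons:

$$V - E + F = 60 - 90 + 32 = 2, \qquad \frac{5p + 6h}{3} - \frac{5p + 6h}{2} + (p + h) = 2 \;\Longrightarrow\; p = 12.$$

The dome-edge computations mentioned earlier (the kind Richard Lewontin supplied for Fuller's domes) can be sketched in the same spirit. The Python below is a minimal modern reconstruction, not Fuller's or Lewontin's actual method, and its function names and choice of subdivision frequency are illustrative assumptions: it subdivides one face of an icosahedron, projects the grid points onto a unit sphere, and collects the distinct strut lengths, the "chord factors" used by dome builders.

import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def normalize(p):
    # Project a point radially onto the unit sphere.
    r = math.sqrt(sum(c * c for c in p))
    return tuple(c / r for c in p)

# Three mutually adjacent vertices of a regular icosahedron (one face).
A = normalize((0.0, 1.0, PHI))
B = normalize((0.0, -1.0, PHI))
C = normalize((PHI, 0.0, 1.0))

def chord_factors(freq):
    # Subdivide the face into a triangular grid (i + j + k == freq),
    # push every grid point out to the sphere, then collect the
    # distinct lengths of the struts joining neighboring points.
    grid = {}
    for i in range(freq + 1):
        for j in range(freq + 1 - i):
            k = freq - i - j
            point = tuple(i * a + j * b + k * c for a, b, c in zip(A, B, C))
            grid[(i, j)] = normalize(point)
    lengths = set()
    for (i, j), p in grid.items():
        for di, dj in ((1, 0), (0, 1), (1, -1)):
            q = grid.get((i + di, j + dj))
            if q is not None:
                lengths.add(round(math.dist(p, q), 5))
    return sorted(lengths)

print(chord_factors(2))  # two distinct strut lengths for a 2-frequency dome
print(chord_factors(4))  # higher frequencies require more distinct lengths

For a frequency-2 dome the script prints two values, roughly 0.54653 and 0.61803 times the sphere's radius, which is why such a dome can be assembled from only two strut lengths.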
The Buckminsterfullerene molecule, which consists of 60 carbon atoms, very closely resembles a spherical version of Fuller's geodesic dome. The 1996 Nobel Prize in Chemistry was given to Kroto, Curl, and Smalley for their discovery of the fullerenes. On July 12, 2004, the United States Post Office released a new commemorative stamp honoring R. Buckminster Fuller on the 50th anniversary of his patent for the geodesic dome and on the occasion of his 109th birthday. The stamp's design replicated the January 10, 1964, cover of "Time" magazine. Fuller was the subject of two documentary films: "The World of Buckminster Fuller" (1971) and "" (1996). Additionally, filmmaker Sam Green and the band Yo La Tengo collaborated on a 2012 "live documentary" about Fuller, "The Love Song of R. Buckminster Fuller". In June 2008, the Whitney Museum of American Art presented "Buckminster Fuller: Starting with the Universe", the most comprehensive retrospective to date of his work and ideas. The exhibition traveled to the Museum of Contemporary Art, Chicago in 2009. It presented a combination of models, sketches, and other artifacts, representing six decades of the artist's integrated approach to housing, transportation, communication, and cartography. It also featured his extensive connections with Chicago from his years spent living, teaching, and working in the city. In 2009, a number of US companies decided to repackage spherical magnets and sell them as toys. One company, Maxfield & Oberton, told "The New York Times" that they saw the product on YouTube and decided to repackage them as "Buckyballs", because the magnets could self-form and hold together in shapes reminiscent of the Fuller-inspired buckyballs. The buckyball toy launched at New York International Gift Fair in 2009 and sold in the hundreds of thousands, but by 2010 toy-safety problems emerged and the company was forced to recall the packages that were labelled as toys. In 2012, the San Francisco Museum of Modern Art hosted "The Utopian Impulse" – a show about Buckminster Fuller's influence in the Bay Area. Featured were concepts, inventions and designs for creating "free energy" from natural forces, and for sequestering carbon from the atmosphere. The show ran January through July. In 2025, historian Eva Díaz published the book "After Spaceship Earth: Art, Techno-utopia, and Other Science Fictions" (Yale University Press) about the legacy of Buckminster Fuller's work in contemporary culture. The book considers works of art and design using geodesic domes in various ways: as ad-hoc architectural projects dealing with climate change, as spaces of exhibition display and communication design, as proposals to solve housing crises, and as critiques of corporate and governmental surveillance. The book also takes up the influence of Fuller and Stewart Brand in artworks concerned with the exploration and colonization of outer space. In popular culture. Fuller is quoted in "The Tower of Babble" from the musical "Godspell": "Man is a complex of patterns and processes." Belgian rock band dEUS released the song "The Architect", inspired by Fuller, on their 2008 album "Vantage Point". Indie band Driftless Pony Club titled their 2011 album "Buckminster" after Fuller. Each of the album's songs is based upon his life and works. The design podcast "99% Invisible" (2010–present) takes its title from a Fuller quote: "Ninety-nine percent of who you are is invisible and untouchable."
Fuller is briefly mentioned in "" (2014) when Kitty Pryde gives a lecture to a group of students on utopian architecture. Robert Kiyosaki's 2009 book "Conspiracy of the Rich" and 2015 book "Second Chance" both concern Kiyosaki's interactions with Fuller as well as Fuller's unusual final book, "Grunch of Giants". In "The House of Tomorrow" (2017), based on Peter Bognanni's 2010 novel of the same name, Ellen Burstyn's character is obsessed with Fuller and provides retro-futurist tours of her geodesic home that include videos of Fuller sailing and talking with Burstyn, who had in real life befriended Fuller. In "The Simpsons"' Treehouse of Horror episode that aired on October 29, 1992, a scan over the Springfield graveyard reveals graves for American workmanship, Drexell's Class, slapstick, and Buckminster Fuller.
4032
1298707782
https://en.wikipedia.org/wiki?curid=4032
Bill Watterson
William Boyd Watterson II (born July 5, 1958) is an American cartoonist who authored the comic strip "Calvin and Hobbes". The strip was syndicated from 1985 to 1995. Watterson concluded "Calvin and Hobbes" with a short statement to newspaper readers that he felt he had achieved all he could in the medium. Watterson is known for his negative views on comic syndication and licensing, his efforts to expand and elevate the newspaper comic as an art form, and his move back into private life after "Calvin and Hobbes" ended. Watterson was born in Washington, D.C., and grew up in Chagrin Falls, Ohio. The suburban Midwestern setting of Ohio was part of the inspiration for the setting of "Calvin and Hobbes". Watterson currently lives in Cleveland Heights, Ohio. Early life. Bill Watterson was born on July 5, 1958, in Washington, D.C., to Kathryn Watterson (1933–2022) and James Godfrey Watterson (1932–2016). His father worked as a patent attorney. In 1965, six-year-old Watterson and his family moved to Chagrin Falls, Ohio, a suburb of Cleveland. Watterson has a younger brother, Thomas Watterson, who lives in Austin, Texas, and worked as a musician before becoming an educator. Watterson drew his first cartoon at age eight and spent much time in his childhood alone drawing and cartooning. This continued through his school years, during which time he discovered comic strips such as Walt Kelly's "Pogo", George Herriman's "Krazy Kat", and Charles M. Schulz's "Peanuts", which subsequently inspired and influenced his desire to become a professional cartoonist. On one occasion when he was in fourth grade, he wrote a letter to Schulz, who responded, much to Watterson's surprise. This made a big impression on him at the time. His parents encouraged him in his artistic pursuits. Later, they recalled him as a "conservative child" — imaginative, but "not in a fantasy way", and certainly nothing like the character of Calvin that he later created. Watterson found avenues for his cartooning talents throughout primary and secondary school, creating high school-themed superhero comics with his friends and contributing cartoons and art to the school newspaper and yearbook. After high school, Watterson attended Kenyon College, where he majored in political science. He had already decided on a career in cartooning, but he felt studying political science would help him move into editorial cartooning. He continued to develop his art skills and during his sophomore year he painted Michelangelo's "Creation of Adam" on the ceiling of his dormitory room. He also contributed cartoons to the college newspaper, some of which included the original "Spaceman Spiff" cartoons. Watterson graduated from Kenyon in 1980 with a Bachelor of Arts degree. Later, when Watterson was creating names for the characters in his comic strip, he decided on Calvin (after the Protestant reformer John Calvin) and Hobbes (after the political philosopher Thomas Hobbes), allegedly as a "tip of the hat" to Kenyon's political science department. In "The Complete Calvin and Hobbes", Watterson stated that Calvin was named for "a 16th-century theologian who believed in predestination" and Hobbes for "a 17th-century philosopher with a dim view of human nature". Career. Early work. Watterson was inspired by the work of "The Cincinnati Enquirer" political cartoonist Jim Borgman, a 1976 graduate of Kenyon College, and decided to try to follow the same career path as Borgman, who in turn offered support and encouragement to the aspiring artist.
Watterson graduated in 1980 and was hired on a trial basis at the "Cincinnati Post", a competing paper of the "Enquirer". Watterson quickly discovered that the job was full of unexpected challenges that prevented him from performing his duties to the standards set for him. Not the least of these challenges was his unfamiliarity with the Cincinnati political scene, as he had never resided in or near the city, having grown up in the Cleveland area and attended college in central Ohio. The "Post" fired Watterson before his contract was up. He then joined a small advertising agency and worked there for four years as a designer, creating grocery advertisements while also working on his own projects, including development of his own cartoon strip and contributions to "Target: The Political Cartoon Quarterly". As a freelance artist, Watterson has drawn other works for various merchandise, including album art for his brother's band, calendars, clothing graphics, educational books, magazine covers, posters, and postcards. "Calvin and Hobbes" and rise to success. Watterson has said that he works for personal fulfillment. As he told the graduating class of 1990 at Kenyon College, "It's surprising how hard we'll work when the work is done just for ourselves." "Calvin and Hobbes" was first published on November 18, 1985. In "Calvin and Hobbes Tenth Anniversary Book", he wrote that his influences included "Peanuts", "Pogo", and "Krazy Kat". Watterson wrote the introduction to the first volume of "The Komplete Kolor Krazy Kat". Watterson's style also reflects the influence of Winsor McCay's "Little Nemo in Slumberland". Like many artists, Watterson incorporated elements of his life, interests, beliefs, and values into his work—for example, his hobby as a cyclist, memories of his own father's speeches about "building character", and his views on merchandising and corporations. Watterson's cat Sprite very much inspired the personality and physical features of Hobbes. Watterson spent much of his career trying to change the climate of newspaper comics. He believed that the artistic value of comics was being undermined and that the space that they occupied in newspapers continually decreased, subject to arbitrary whims of shortsighted publishers. Furthermore, he stated that art should not be judged by the medium for which it is created (i.e., there is no "high" art or "low" art—just art). Watterson wrote a foreword for "FoxTrot." Fight against merchandising his characters. For years, Watterson battled against pressure from publishers to merchandise his work, something that he felt would cheapen his comic by compromising the act of creation or reading. He refused to merchandise his creations on the grounds that displaying "Calvin and Hobbes" images on commercially sold mugs, stickers, and T-shirts would devalue the characters and their personalities. Watterson said that Universal kept putting pressure on him and that he had signed his contract without fully perusing it because, as a new artist, he was happy just to find a syndicate willing to give him a chance (two other syndicates had previously turned him down). He added that the contract was so one-sided that, if Universal really wanted to, they could license his characters against his will, and could even fire him and continue "Calvin and Hobbes" with a new artist. Watterson's position eventually won out, and he was able to renegotiate his contract so that he would receive all rights to his work.
Later he said that the licensing fight exhausted him and contributed to the need for a nine-month sabbatical in 1991. Despite Watterson's efforts, many unofficial knockoffs have been found, including items that depict Calvin and Hobbes consuming alcohol or Calvin urinating on a logo. Watterson has said, "Only thieves and vandals have made money on "Calvin and Hobbes" merchandise." Changing the format of the Sunday strip. Watterson was critical of the prevailing format for the Sunday comic strip that was in place when he began drawing (and remained so, to varying degrees). The typical layout consists of three rows with eight total squares, which take up half a page if published at normal size. Some newspapers have limited space for their Sunday features and reduce the size of the strip. One of the more common ways of doing so is to cut out the top two panels, which Watterson believed forced him to waste the space on throwaway jokes that did not always fit the strip. As he prepared to return from his first sabbatical, Watterson discussed with his syndicate a new format for "Calvin and Hobbes" that would enable him to use his space more efficiently and would almost require the papers to publish it as a half-page. Universal agreed that they would sell the strip as the half-page and nothing else, which garnered anger from papers and criticism for Watterson from both editors and some of his fellow cartoonists (whom he described as "unnecessarily hot-tempered"). Eventually, Universal compromised and agreed to offer papers a choice between the full half-page or a reduced-sized version to alleviate concerns about the size issue. Watterson conceded that this caused him to lose space in many papers, but he said that, in the end, it was a benefit because he felt that he was giving the papers' readers a better strip for their money and editors were free not to run "Calvin and Hobbes" at their own risk. He added that he was not going to apologize for drawing a popular feature. End of "Calvin and Hobbes". On November 9, 1995, Watterson announced the end of "Calvin and Hobbes" with the following letter to newspaper editors: The last strip of "Calvin and Hobbes" was published on December 31, 1995. After "Calvin and Hobbes". In the years since "Calvin and Hobbes" ended, many attempts have been made to contact Watterson. Both "The Plain Dealer" and the "Cleveland Scene" sent reporters, in 1998 and 2003 respectively, but neither was able to make contact with the media-shy Watterson. Since 1995, Watterson has taken up painting, at one point drawing landscapes of the woods with his father. He has kept away from the public eye and shown no interest in resuming the strip, creating new works based on the strip's characters, or embarking on new commercial projects, though he has published several "Calvin and Hobbes" "treasury collection" anthologies. He does not sign autographs or license his characters. Watterson was once known to sneak autographed copies of his books onto the shelves of the Fireside Bookshop, a family-owned bookstore in his hometown of Chagrin Falls, Ohio. He ended this practice after discovering that some of the autographed books were being sold online for high prices. Watterson rarely gives interviews or makes public appearances. His lengthiest interviews include the cover story in "The Comics Journal" No. 127 in February 1989, an interview that appeared in a 1987 issue of "Honk Magazine", and one in a 2015 Watterson exhibition catalogue.
On December 21, 1999, a short piece was published in the "Los Angeles Times", written by Watterson to mark the forthcoming retirement of "Peanuts" creator Charles M. Schulz. Circa 2003, Gene Weingarten of "The Washington Post" sent Watterson the first edition of the "Barnaby" book as an incentive, hoping to land an interview. Weingarten passed the book to Watterson's parents, along with a message, and declared that he would wait in his hotel for as long as it took Watterson to contact him. Watterson's editor Lee Salem called the next day to tell Weingarten that the cartoonist would not be coming. In 2004, Watterson and his wife Melissa bought a home in the Cleveland suburb of Cleveland Heights, Ohio. In 2005, they completed the move from their home in Chagrin Falls to their new residence. In October 2005, Watterson answered 15 questions submitted by readers. In October 2007, he wrote a review of "Schulz and Peanuts", a biography of Charles M. Schulz, in "The Wall Street Journal". In 2008, he provided a foreword for the first book collection of Richard Thompson's "Cul de Sac" comic strip. In April 2011, a representative for Andrews McMeel received a package from a "William Watterson in Cleveland Heights, Ohio" which contained an oil-on-board painting of "Cul de Sac" character Petey Otterloop, done by Watterson for the "Team Cul de Sac" fundraising project for Parkinson's disease in honor of Richard Thompson, who was diagnosed in 2009. Watterson's syndicate revealed that the painting was the first new artwork of his that the syndicate had seen since "Calvin and Hobbes" ended in 1995. In October 2009, Nevin Martell published a book called "Looking for Calvin and Hobbes," which included a story about the author seeking an interview with Watterson. In his search, he interviewed friends, co-workers and family, but never got to meet the artist himself. In early 2010, Watterson was interviewed by "The Plain Dealer" on the 15th anniversary of the end of "Calvin and Hobbes". Explaining his decision to discontinue the strip, he said, In October 2013, the magazine "Mental Floss" published an interview with Watterson, only the second since the strip ended. Watterson again confirmed that he would not be revisiting "Calvin and Hobbes", and that he was satisfied with his decision. He also gave his opinion on the changes in the comic-strip industry and where it would be headed in the future: In 2013, the documentary "Dear Mr. Watterson", exploring the cultural impact of "Calvin and Hobbes", was released. Watterson himself did not appear in the film. On February 26, 2014, Watterson published his first cartoon since the end of "Calvin and Hobbes": a poster for the documentary "Stripped". In 2014, Watterson co-authored "The Art of Richard Thompson" with "Washington Post" cartoonist Nick Galifianakis and David Apatoff. In June 2014, three strips of "Pearls Before Swine" (published June 4, June 5, and June 6, 2014) featured guest illustrations by Watterson after mutual friend Nick Galifianakis connected him and cartoonist Stephan Pastis, who communicated via e-mail. Pastis likened this unexpected collaboration to getting "a glimpse of Bigfoot". "I thought maybe Stephan and I could do this goofy collaboration and then use the result to raise some money for Parkinson's research in honor of Richard Thompson. It seemed like a perfect convergence", Watterson told "The Washington Post".
The day that Stephan Pastis returned to his own strip, he paid tribute to Watterson by alluding to the final strip of "Calvin and Hobbes" from December 31, 1995. On November 5, 2014, a poster drawn by Watterson for the 2015 Angoulême International Comics Festival was unveiled; he had been awarded the festival's Grand Prix in 2014. On April 1, 2016, for April Fools' Day, Berkeley Breathed posted on Facebook that Watterson had signed "the franchise over to my 'administration'". He then posted a comic with Calvin, Hobbes, and Opus all featured. The comic is signed by Watterson, though the degree of his involvement was a matter of speculation. Breathed posted another "Calvin County" strip featuring Calvin and Hobbes, also "signed" by Watterson on April 1, 2017, along with a fake "New York Times" story ostensibly detailing the "merger" of the two strips. Berkeley Breathed included Hobbes in a November 27, 2017, strip as a stand-in for the character Steve Dallas. Hobbes has also returned in the June 9, 11, and 12, 2021, strips as a stand-in for Bill The Cat. Exhibitions. In 2001, the Billy Ireland Cartoon Library & Museum at Ohio State University mounted an exhibition of Watterson's Sunday strips. He chose thirty-six of his favorites, displaying them with both the original drawing and the colored finished product, with most pieces featuring personal annotations. Watterson also wrote an accompanying essay that served as the foreword for the exhibit, "Calvin and Hobbes: Sunday Pages 1985–1995", which opened on September 10, 2001. It was taken down in January 2002. The accompanying published catalog had the same title. From March 22 to August 3, 2014, Watterson exhibited again at the Billy Ireland Cartoon Library & Museum at Ohio State University. In conjunction with this exhibition, Watterson also participated in an interview with the school. An exhibition catalog named "Exploring Calvin and Hobbes" was released with the exhibit. The book contained a lengthy interview with Bill Watterson, conducted by Jenny Robb, the curator of the museum. "The Mysteries". On October 10, 2023, Watterson released "The Mysteries", his first published work in 28 years. It was an illustrated "fable for grown-ups" about "what lies beyond human understanding". The work was a collaboration with the illustrator and caricaturist John Kascht. Awards and honors. Watterson was awarded the National Cartoonists Society's Reuben Award in both 1986 and 1988. Watterson's second Reuben win made him the youngest cartoonist to be so honored, and only the sixth person to win twice, following Milton Caniff, Charles M. Schulz, Dik Browne, Chester Gould, and Jeff MacNelly. Gary Larson is the only cartoonist to win a second Reuben since Watterson. In 2014, Watterson was awarded the Grand Prix at the Angoulême International Comics Festival for his body of work, becoming just the fourth non-European cartoonist to be so honored in the first 41 years of the event. Bibliography. Treasury collections
4035
622881
https://en.wikipedia.org/wiki?curid=4035
Black
Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without chroma, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus the Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance. Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. The darkest material known was created by MIT engineers from vertically aligned carbon nanotubes. Etymology. The word "black" comes from Old English "blæc" ("black, dark", also "ink"), from Proto-Germanic *"blakkaz" ("burned"), from Proto-Indo-European *"bhleg-" ("to burn, gleam, shine, flash"), from base *"bhel-" ("to shine"), related to Old Saxon "blak" ("ink"), Old High German "blach" ("black"), Old Norse "blakkr" ("dark"), Dutch "blaken" ("to burn"), and Swedish "bläck" ("ink"). More distant cognates include Latin "flagrare" ("to blaze, glow, burn"), and Ancient Greek "phlegein" ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors, if they had the same intensity. "Kuanos" could mean both dark blue and black. The Ancient Romans had two words for black: "ater" was a flat, dull black, while "niger" was a brilliant, saturated black. "Ater" has vanished from the vocabulary, but "niger" was the source of the country name "Nigeria," the English word "Negro", and the word for "black" in most modern Romance languages (French: "noir"; Spanish and Portuguese: "negro"; Italian: "nero"; Romanian: "negru"). Old High German also had two words for black: "swartz" for dull black and "blach" for a luminous black. These are paralleled in Middle English by the terms "swart" for dull black and "blaek" for luminous black. "Swart" still survives as the word "swarthy", while "blaek" became the modern English "black". The former is cognate with the words used for black in most modern Germanic languages aside from English (German: "schwarz", Dutch: "zwart", Swedish: "svart", Danish: "sort", Icelandic: "svartr"). In heraldry, the word used for the black color is sable, named for the black fur of the sable, an animal. Art. Prehistoric. Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago.
They began by using charcoal, and later achieved darker pigments by burning bones or grinding a powder of manganese oxide. Ancient. For the ancient Egyptians, black had positive associations, being the color of fertility and of the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead. To ancient Greeks, black represented the underworld, separated from the living by the river Acheron, whose water ran black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red-figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background. In the social hierarchy of ancient Rome, purple was reserved for the emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white was the color worn by priests; and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were not solid or lasting, so the blacks often faded to gray or brown. In Latin, the word for black, "ater", and the word meaning to darken, "atere", were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". For the Romans, black symbolized death and mourning. In the 2nd century BC, Roman magistrates wore a dark toga, called a "toga pulla", to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, exchanged the black for a white toga. In Roman poetry, death was called the "hora nigra", the black hour. The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held the raven sacred. They believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening. Postclassical. In the early Middle Ages, black was commonly associated with darkness and evil. In Medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair. 12th and 13th centuries. In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century, a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black.
Saint Bernard of Clairvaux, the founder of the Cistercians, responded that black was the color of the devil, hell, "of death and sin", while white represented "purity, innocence and all the virtues". Black symbolized both power and secrecy in the medieval world. The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy. Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens. 14th and 15th centuries. In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty. In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Second, magistrates and government officials began to wear black robes, as a sign of the importance and seriousness of their positions. Third, sumptuary laws passed in some parts of Europe prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics. The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan and the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts. Modern. 16th and 17th centuries. While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America.
John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the pope and his cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white. In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray. In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by familiar spirits: black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called "Kattenstoet", black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft. Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap", and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches. 18th and 19th centuries. In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellows and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color. Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the wood-engravings of French artist Gustave Doré. A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. 
The leading poets of the movement were usually portrayed dressed in black, often with a white shirt and open collar, and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet. The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America. Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, "Arrangement in grey and black number one" (1871), better known as "Whistler's Mother". Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black." Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black." Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.) 20th and 21st centuries. In the 20th century, black was used by Italian and German fascism. (See the section political movements.) In art, the color regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the "Black Square" in 1915, which is widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea." Black was appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument." In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who did not accept established norms and values. In Paris, it was worn by Left-Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. 
Black leather jackets were worn by motorcycle gangs such as the Hells Angels and street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as "The Wild One", with Marlon Brando. By the end of the 20th century, black was the emblematic color of punk fashion and the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian-era mourning dress. In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1961, John F. Kennedy was the last American President to be inaugurated wearing formal dress; Lyndon Johnson and his successors were inaugurated wearing business suits. Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in "Vogue" magazine. She famously said, "A woman needs just three things: a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film "Breakfast at Tiffany's". The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement, which lasted from the early 1960s until the late 1980s, and the Black Lives Matter movement in the 2010s and 2020s. It also popularized the slogan "Black is Beautiful". Science. Physics. Black can be defined as the color perceived when no visible light reaches the eye. Pigments or dyes that absorb almost all light rather than reflect it back to the eye look black. A black pigment can, however, result from a combination of several pigments that collectively absorb all wavelengths of visible light. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called black. This provides two superficially opposite but actually complementary descriptions of black. Black is the color produced by the absorption of all wavelengths of visible light, or an exhaustive combination of multiple colors of pigment. In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of direct sunlight, is achieved with black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, near ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce. Absorption of light is contrasted with transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. 
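The absorber-emitter symmetry invoked here can be stated compactly. As a standard textbook formulation (not taken from this article), Kirchhoff's law of thermal radiation and the Stefan–Boltzmann law read:

```latex
% Kirchhoff's law: at thermal equilibrium, spectral emissivity equals
% spectral absorptivity, so a near-perfect absorber (black in the
% relevant band) is also a near-perfect emitter.
\varepsilon(\lambda) = \alpha(\lambda)

% Stefan-Boltzmann law: power radiated by a surface of area A,
% emissivity \varepsilon, and absolute temperature T.
P = \varepsilon \sigma A T^{4},
\qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
```

A coating that is black at the infrared wavelengths where a warm object actually radiates has emissivity close to 1 there, and so sheds heat at close to the maximum possible rate, which is why infrared blackness matters for radiative cooling.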
This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector). As of September 2019, the darkest known material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to absorb 99.995% of incoming light. This surpasses previous record-holders, including Vantablack, which has a peak absorption rate of 99.965% in the visible spectrum. Chemistry. Pigments. The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of resinous wood. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment. The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use." Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint. Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment. Dyes. Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from bark, roots or fruits of different trees; usually walnuts, chestnuts, or certain oak trees. The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black. A much richer and deeper black dye was eventually developed, made from the oak apple or "gall-nut". The gall-nut is a small round tumor which grows on oak and other varieties of trees. Gall-nuts range in size from 2 to 5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts was needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for the clothes of the kings and princes of Europe. Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th-century English logwood logging camps. Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. 
One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent Black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks. Inks. The first known inks were made by the Chinese, and date back to the 23rd century BC. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting. India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called "masi". In India, the black color of the ink came from bone char, tar, pitch and other substances. The ancient Romans had a black writing ink they called "atramentum librarium". Its name came from the Latin word "atrare", which meant to make something black. (This was the same root as the English word "atrocious".) It was usually made, like India ink, from soot, although one variety, called "atramentum elephantinum", was made by burning the ivory of elephants. Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nuts. It was the standard writing and drawing ink in Europe from about the 12th century to the 19th century, and remained in use well into the 20th century. Astronomy. Why the night sky and space are black – Olbers' paradox. The fact that outer space is black is sometimes called Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. This contradiction was first noted in 1823 by German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black. The currently accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. 
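The reddening described here is conventionally quantified by the redshift z; as a standard relation from the physics literature (not given in the article itself):

```latex
% Redshift: light from a receding source arrives with a longer
% wavelength than it was emitted with; z grows with recession speed.
1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}}
```

For sufficiently distant, fast-receding sources, z becomes large enough to push visible starlight into the infrared and beyond, where the eye sees nothing.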
As a result of these two phenomena, there is not enough starlight to make space anything but black. The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere, scattering light in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering. The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury. Culture. In China, the color black is associated with water, one of the five fundamental elements believed to compose all things; and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first emperor of China, Qin Shi Huang, seized power from the Zhou dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han dynasty appeared in 206 BC was red restored as the imperial color. In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th- and 11th-century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions. In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day. Black is associated with depth in Indonesia, as well as with the subterranean world, demons, disaster, and the left hand. When combined with white, however, it symbolizes harmony and equilibrium. Political movements. Anarchism. Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it has usually been represented by a bisected red and black flag, to emphasize the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States, Russia and many other countries around the world. Fascism. The Blackshirts were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security ("Milizia Volontaria per la Sicurezza Nazionale", or MVSN). Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. They used violence and intimidation against Mussolini's opponents. 
The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the Blackshirts. Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1871 to 1918. In "Mein Kampf", Hitler explained that they were "revered colors expressive of our homage to the glorious past". Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement". The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the "Schutzstaffel" or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II. The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind". Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy", prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists. Black shirts were also worn by the British Union of Fascists before World War II, and by members of fascist movements in the Netherlands. Patriotic resistance. The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German Confederation. In 1871, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which, apart from the Weimar Republic's return to black, red and gold (1919–1933), remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today. Military. Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in other armies a black beret is common for armoured crews. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia. The black beret and the color black is also a symbol of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies. 
The silver-on-black skull and crossbones symbol, or Totenkopf, and a black uniform were used by Hussars and the Black Brunswickers, by the German Panzerwaffe and the Nazi Schutzstaffel, and by the U.S. 400th Missile Squadron (with crossed missiles); the emblem continues in use with the Estonian Kuperjanov Battalion. Religion. In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism. In Christianity, the devil is often called the "prince of darkness". The term was used in John Milton's poem "Paradise Lost", published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase "princeps tenebrarum", which occurs in the "Acts of Pilate", written in the fourth century, in the 11th-century hymn "Rhythmus de die mortis" by Pietro Damiani, and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in "King Lear" by William Shakespeare, Act III, Scene IV, l. 14: "The prince of darkness is a gentleman." Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence. Associations and symbolism. In the West, black is commonly associated with mourning and bereavement, and is usually worn at funerals and memorial services. In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia like Vietnam, white is a color of mourning. A "black day" (or week or month) usually refers to a tragic date. The Romans marked "fasti" days with white stones and "nefasti" days with black. The term is often used to remember massacres. Black months include the Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government. In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street crash of 29 October 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday, and was preceded by Black Thursday, a downturn on 24 October the previous week. In Western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic. Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1272–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and authority of the church. Until the 20th century most police uniforms were black; they were then largely replaced by blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. 
The riot control units of the Basque Autonomous Police in Spain are known as "beltzak" ("blacks") after their uniform. Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics. In the 19th and 20th centuries, many machines and devices, large and small, were painted black, to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only airplanes were rarely painted black. The term "Black" is often used in the West to describe people whose skin is darker. In the United States, it is particularly used to describe African Americans. Black is also commonly used as a racial description in the United Kingdom, where ethnicity was first measured in the 1991 census. In Canada, census respondents can identify themselves as Black. In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as "branco" (white), "pardo" (brown), "preto" (black), or "amarelo" (yellow). Black and white have often been used to describe opposites, particularly light and darkness and good and evil. In Medieval literature, the white knight usually represented virtue, the black knight something mysterious and sinister. In American westerns, the hero often wore a white hat, the villain a black hat. In philosophy and arguments, the issue is often described as black-and-white, meaning that the issue at hand is dichotomized (having two clear, opposing sides with no middle ground). Black is commonly associated with secrecy. Black is the color most commonly associated with elegance in Europe and the United States. Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. In the 19th century, it was the fashion for men both in business and for evening wear. For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché.
4037
9719271
https://en.wikipedia.org/wiki?curid=4037
Bletchley Park
Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire), that became the principal centre of Allied code-breaking during the Second World War. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers, most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included John Tiltman, Dillwyn Knox, Alan Turing, Harry Golombek, Gordon Welchman, Hugh Alexander, Donald Michie, Bill Tutte and Stuart Milner-Barry. The team at Bletchley Park, 75% women, devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park ended in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war the estate had various uses and now houses the Bletchley Park museum. House and pre-war history. It was first known as Bletchley Park after its purchase in 1877 by the architect Samuel Lipscomb Seckham, who built a house there. The estate was bought in 1883 by Sir Herbert Samuel Leon, who expanded the house into what architect Landis Gores called a "maudlin and monstrous pile", combining Victorian Gothic, Tudor, and Dutch Baroque styles. After the death of Herbert Leon in 1926, the estate continued to be occupied by his widow Fanny Leon (née Higham) until her death in 1937. In 1938, the mansion and much of the site were bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the site for use in the event of war. World War II. Preparation for War. Sinclair bought the mansion and the surrounding land for use by GC&CS and SIS. A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party") was Bletchley's geographical centrality. It was almost immediately adjacent to Bletchley railway station, where the "Varsity Line" between Oxford and Cambridge (whose universities were expected to supply many of the code-breakers) met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5), was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford. Sinclair bought the park using £6,000 of his own money, as the Government said they did not have the budget to do so. Five weeks before the outbreak of war, Warsaw's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages. Early work. The first personnel of GC&CS moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6. 
Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections. The only direct enemy damage to the site was done on 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued. Action This Day. During a morale-boosting visit on 9 September 1941, Winston Churchill reportedly remarked to Denniston or Menzies: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally." Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was: "Action this day: make sure they have all they want on extreme priority and report to me that this has been done." Allied involvement. After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence, leading to the 1943 BRUSA Agreement, which was the forerunner of the Five Eyes partnership. In contrast, the Soviet Union was never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat. However, Bletchley Park was infiltrated by the Soviet mole John Cairncross, a member of the Cambridge Spy Ring, who leaked Ultra material to Moscow. Personnel. Admiral Hugh Sinclair was the founder and head of GC&CS between 1919 and 1938, with Commander Alastair Denniston being operational head of the organization from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people had a variety of backgrounds: linguists and chess champions were common, and Knox's field was papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills. On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting "men of the professor type". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, "The Daily Telegraph" was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort". Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; historian Harry Hinsley; and chess champions Hugh Alexander and Stuart Milner-Barry. 
Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst. When seeking to recruit more suitably advanced linguists, John Tiltman turned to Patrick Wilkinson of the Italian section for advice, and he suggested asking Lord Lindsay of Birker, of Balliol College, Oxford, S. W. Grose, and Martin Charlesworth, of St John's College, Cambridge, to recommend classical scholars or applicants to their colleges. This eclectic staff of "Boffins and Debs" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society". Among those who worked there and later became famous in other fields were historian Asa Briggs, politician Roy Jenkins and novelist Angus Wilson. After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford, where it was known locally as "the Spy School"), staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German. In January 1945, at the peak of codebreaking efforts, 8,995 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers, and Ruth Briggs, a German scholar, who worked within the Naval Section. The female staff in Dillwyn Knox's section were sometimes termed "Dilly's Fillies". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher. Many of the women had backgrounds in languages, particularly French, German and Italian. Among them were Rozanne Colchester, a translator who worked mainly for the Italian air force section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals, as did Jane Fawcett (née Hughes), who decrypted a vital message concerning the German battleship "Bismarck" and after the war became an opera singer and buildings conservationist. Alan Brooke (CIGS) in his secret wartime diary frequently refers to "intercepts": 16 April 1942: "Took lunch in car and went to see the organization for breaking down ciphers [Bletchley Park] – a wonderful set of professors and genii! I marvel at the work they succeed in doing." 
28 June 1945: "After lunch (with Andrew Cunningham (RN) and Sinclair (RAF)) we went to "The Park" … I began by addressing some 400 of the workers who consist of all 3 services, both sexes, and civilians. They come from every sort of walk of life, professors, students, actors, dancers, mathematicians, electricians, signallers, etc. I thanked them on behalf of the Chiefs of Staff and congratulated them on the results of their work. We then toured round the establishment and had tea before returning." Secrecy. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret", higher even than the normally highest classification, and security was paramount. All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ..." Nevertheless, there were security leaks. Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941 that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks "increase in number and seriousness". Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearby Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have [still] photographs" of the park and its associated sites. Bletchley Park was known as "B.P." to those who worked there. "Station X" (X = Roman numeral ten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war. The formal posting of the many "Wrens" (members of the Women's Royal Naval Service) working there was to HMS "Pembroke V". Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green. The postal address that staff had to use was "Room 47, Foreign Office". Intelligence reporting. Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3, which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3. Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L". It also housed the Traffic Analysis Section, SIXTA. 
An important function that allowed the synthesis of raw messages into valuable military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field. Naval Enigma deciphering was in Hut 8, with translation in Hut 4. Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" for known-plaintext attacks on the daily naval Enigma key. Listening stations. Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is the Roman numeral "ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site. Subsequently, other listening stations, the Y-stations, such as the ones at Chicksands in Bedfordshire, Beaumanor Hall, Leicestershire (where the headquarters of the War Office "Y" Group was located) and Beeston Hill Y Station in Norfolk, gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter. Additional buildings. Wartime needs required the building of additional accommodation. Huts. Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation. Blocks. In addition to the wooden huts, there were a number of brick-built "blocks". Work on specific countries' signals. German signals. Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine used for high command messages, known as Fish. The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine was bulky, weighing about a ton. At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst. Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. 
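The "cribs" mentioned above exploit the known-plaintext principle: if a stretch of plaintext can be guessed (a formulaic dockyard signal, a weather report), any candidate key that fails to reproduce it can be discarded wholesale. A minimal sketch of the idea in Python, using a hypothetical toy shift cipher rather than the actual Enigma or bombe mechanics:

```python
# Toy illustration of a crib-based (known-plaintext) key search.
# A simple shift cipher stands in for Enigma; the real bombes ran a
# far more elaborate electromechanical elimination over rotor settings.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift(text: str, key: int) -> str:
    """Shift each letter forward by `key` places (decrypt with -key)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in text)

def surviving_keys(ciphertext: str, crib: str, position: int) -> list[int]:
    """Keep only the keys under which the crib appears where expected."""
    segment = ciphertext[position:position + len(crib)]
    return [key for key in range(26) if shift(segment, -key) == crib]

# A guessable, formulaic opening: "WETTER" ("weather") at position 0.
ciphertext = shift("WETTERBERICHT", 7)
print(surviving_keys(ciphertext, crib="WETTER", position=0))  # -> [7]
```

The real attack additionally exploited an Enigma design flaw, that no letter could ever encrypt to itself, which let cryptanalysts slide a crib along the ciphertext to find plausible alignments before testing rotor settings on a bombe.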
When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While they did not change the course of events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions. The Lorenz messages were codenamed "Tunny" at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June, in time for D-Day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten, with an eleventh part-built. The machines were operated mainly by Wrens, in a section named the Newmanry after its head, Max Newman. Italian signals. Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers. Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger, and Mavis Lever. Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory. Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them. On entering World War II in June 1940, the Italians were using book codes for most of their military messages. 
The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, J. R. M. Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 per cent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes. A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS in the Heliopolis Museum, Cairo, and then in the Villa Laurens, Alexandria. Soviet signals. Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections, at Wavendon (a country house near Bletchley) and at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square. Japanese signals. An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then Colombo, Ceylon, then Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India. In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman. By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. 
In 1999, Michael Smith wrote: "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers". Post-war uses and legacy. The Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in 1951. Continued secrecy and official recognition. Until the mid-1970s the thirty-year rule meant that there was no official mention of the work done at Bletchley Park. As a result, there were many operations in which codes broken at Bletchley Park played an important role, yet that role was absent from the published histories of those events. After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as "the geese that laid the golden eggs and never cackled". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print. With the publication of F. W. Winterbotham's "The Ultra Secret" in 1974, public discussion of Bletchley Park's work in the English-speaking world finally became accepted, although some former staff considered themselves bound to silence forever. Winterbotham's book was written from memory; although it was officially sanctioned, he had no access to archives. Not until July 2009 did the British government fully acknowledge the contribution of the many people working at Bletchley Park. Only then was a commemorative medal struck to be presented to those involved. The gilded medal bears the inscription "GC&CS 1939–1945 Bletchley Park and its Outstations". Decline and partial demolition. The site passed through a succession of hands and saw a number of uses, including as a teacher-training college, and by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down and tenants were leaving. Bletchley Park museums. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance, as it was at risk of being sold off for housing, and in February 1992 the Milton Keynes Borough Council declared most of the Park a conservation area. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994; Tony Sale became the first director of the Bletchley Park Museums the same year. In 1999 the land owners, the Property Advisors to the Civil Estate and BT Group, granted a lease to the Trust giving it control over most of the site. June 2014 saw the completion of an £8 million restoration project by museum design specialist Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess' paternal grandmother, Valerie Middleton, and Valerie's twin sister, Mary ("née" Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office Civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers. The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. 
Visits can be booked in advance during term time; students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Its workshops cover introductions to codebreaking, cyber security, and the story of Enigma and Lorenz. Funding. In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. Simon Greenish joined as Director in 2006 to lead the fund-raising effort, a post he held until 2012, when Iain Standen took over the leadership role. In July 2008, a letter to "The Times" from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000. In October 2011, the Bletchley Park Trust received a £4.6 million Heritage Lottery Fund grant to be used "to complete the restoration of the site, and to tell its story to the highest modern standards" on the condition that £1.7 million of match funding was raised by the Bletchley Park Trust. Just weeks later, Google contributed £550,000, and by June 2012 the Trust had successfully raised £2.4 million to unlock the grants to restore Huts 3 and 6, as well as to develop its exhibition centre in Block C. Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms. Due to the COVID-19 pandemic, the Trust expected to lose more than £2 million in 2020 and to be required to cut a third of its workforce. Former MP John Leech asked Amazon, Apple, Google, Facebook and Microsoft to donate £400,000 each to secure the future of the Trust. Leech had led the successful campaign to pardon Alan Turing and implement Turing's Law. Other organisations sharing the campus. The National Museum of Computing. The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of the story of the Allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer, which was used on the high-level Lorenz cipher, codenamed "Tunny" by the British. The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. Its aim is "To collect and restore computer systems particularly those developed in Britain and to enable people to explore that collection for inspiration, learning and enjoyment." Through its many exhibits, the museum displays the story of computing through the mainframes of the 1960s and 1970s, and the rise of personal computing in the 1980s. It has a policy of having as many of the exhibits as possible in full working order. Science and Innovation Centre. This consisted of serviced office accommodation housed in Bletchley Park's Blocks A and E, and the upper floors of the Mansion. Its aim was to foster the growth and development of dynamic knowledge-based start-ups and other businesses. It closed in 2021, and Blocks A and E were taken into use as part of the museum. Proposed National College of Cyber Security.
In April 2020, Bletchley Park Capital Partners, a private company run by Tim Reynolds, Deputy Chairman of the National Museum of Computing, announced plans to sell off the freehold to part of the site containing former Block G for commercial development. Offers of between £4 million and £6 million were reportedly being sought for the 3-acre plot, for which planning permission for employment purposes was granted in 2005. Previously, the construction of a National College of Cyber Security for students aged 16 to 19 had been envisaged on the site, to be housed in Block G after renovation with funds supplied by the Bletchley Park Science and Innovation Centre. RSGB National Radio Centre. The Radio Society of Great Britain's National Radio Centre (including a library, radio station, museum and bookshop) is in a newly constructed building close to the main Bletchley Park entrance.
4041
47988270
https://en.wikipedia.org/wiki?curid=4041
Bede
Bede (672/3 – 26 May 735), also known as Saint Bede, Bede of Jarrow, the Venerable Bede, and Bede the Venerable, was an English monk, author and scholar. He was one of the best-known writers of the Early Middle Ages, and his most famous work, "Ecclesiastical History of the English People", gained him the title "The Father of English History". He served at the monastery of St Peter and its companion monastery of St Paul in the Kingdom of Northumbria of the Angles. Born on lands belonging to the twin monastery of Monkwearmouth–Jarrow in present-day Tyne and Wear, England, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow. Both of them survived a plague that struck in 686 and killed the majority of the population there. While Bede spent most of his life in the monastery, he travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria. His theological writings were extensive and included a number of Biblical commentaries and other works of exegetical erudition. Another important area of study for Bede was the academic discipline of "computus", otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped to popularise the practice of dating forward from the birth of Christ ("Anno Domini", "in the year of our Lord"), a practice which eventually became commonplace in medieval Europe. He is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800. In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to a library that included works by Eusebius, Orosius, and many others. Life. Childhood. Almost everything that is known of Bede's life is contained in the last chapter of his "Ecclesiastical History of the English People", a history of the church in England. It was completed in about 731, and Bede implies that he was then in his fifty-ninth year, which would give a birth date in 672 or 673. A minor source of information is the letter by his disciple Cuthbert (not to be confused with the saint, Cuthbert, who is mentioned in Bede's work) which relates Bede's death. Bede, in the "Historia", gives his birthplace as "on the lands of this monastery". He is referring to the twin monasteries of Monkwearmouth and Jarrow, in modern-day Wearside and Tyneside respectively. There is also a tradition that he was born at Monkton, near the site where the monastery at Jarrow was later built. Bede says nothing of his origins, but his connections with men of noble ancestry suggest that his own family was well-to-do. Bede's first abbot was Benedict Biscop, and the names "Biscop" and "Beda" both appear in a list of the kings of Lindsey from around 800, further suggesting that Bede came from a noble family. Bede's name reflects West Saxon "Bīeda" (Anglian "Bēda"). It is an Old English short name formed on the root of "bēodan", "to bid, command". The name also occurs in the "Anglo-Saxon Chronicle", s.a.
501, as "Bieda", one of the sons of the Saxon founder of Portsmouth. The "Liber Vitae" of Durham Cathedral names two priests with this name, one of whom is presumably Bede himself. Some manuscripts of the "Life of Cuthbert", one of Bede's works, mention that Cuthbert's own priest was named Bede; it is possible that this priest is the other name listed in the "Liber Vitae". Boyhood. At the age of seven, Bede was sent as a "puer oblatus" to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year. The dedication stone for the church has survived ; it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life it is possible that he helped in building the original church. In 686, plague broke out at Jarrow. The "Life of Ceolfrith", written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14. Youth. When Bede was about 17 years old Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked Bede's interest in the Easter dating controversy. In about 692, in Bede's nineteenth year, Bede was ordained a deacon by his diocesan bishop, John, who was Bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon; but there is no record of whether Bede held any of these offices. Adulthood. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John. In about 701 Bede wrote his first works, the "De Arte Metrica" and "De Schematibus et Tropis"; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. A 6th-century Greek and Latin manuscript of "Acts of the Apostles" that is believed to have been used by Bede survives and is now in the Bodleian Library at the University of Oxford. It is known as the Codex Laudianus. Bede may have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. 
It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of St Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works. In 708, some monks at Hexham accused Bede of having committed heresy in his work "De Temporibus". The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation was made in front of Wilfrid, Bishop of Hexham, who was present at a feast when some drunken monks voiced it. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days, writing a letter setting forth his defence and asking that it also be read to Wilfrid. Bede had another brush with Wilfrid, for Bede says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor. One further oddity in his writings is that in one of his works, the "Commentary on the Seven Catholic Epistles", he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in the first person. Bede says: "Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray." Another passage, in the "Commentary on Luke", also mentions a wife in the first person: "Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ." The historian Benedicta Ward argues that these passages are an instance of Bede employing a rhetorical device. Final years. In 733, Bede travelled to York to visit Ecgbert, who was then Bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise unknown monastery of a monk named , a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed. It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his "Historia Ecclesiastica". Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome.
Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time. Bede died at Jarrow on the Feast of the Ascension, 26 May 735, and was buried there. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, "with frequent attacks of breathlessness but almost without pain", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day. At three o'clock, according to Cuthbert, he asked for a box of his to be brought and distributed among the priests of the monastery "a few treasures" of his: "some pepper, and napkins, and some incense". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards. The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock on the Wednesday afternoon of 25 May, by the time of the final dictation it was considered 26 May, although it might still have been 25 May in modern usage. Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as "Bede's Death Song". It is the most widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain: not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been translated to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral. Works. Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians have seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of early medieval scholars. Although Bede is mainly studied as a historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian Renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed. "Ecclesiastical History of the English People". Bede's best-known work is the "Historia Ecclesiastica Gentis Anglorum", or "An Ecclesiastical History of the English People", completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC.
A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons. The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex. The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter. Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility. Sources. The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library. For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Gaius Julius Solinus. He had access to two works of Eusebius: the "Historia Ecclesiastica", and also the "Chronicon", though he had neither in the original Greek; instead he had a Latin translation of the "Historia", by Rufinus, and Jerome's translation of the "Chronicon". He also knew Orosius's "Adversus Paganos", and Gregory of Tours' "Historia Francorum", both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius of Lyon's "Life of Germanus" as a source for Germanus of Auxerre's visits to Britain. Bede's account of the Anglo-Saxon settlement of Britain is drawn largely from Gildas's "De Excidio et Conquestu Britanniae". Bede would also have been familiar with more recent accounts such as Stephen of Ripon's "Life of Wilfrid", and the anonymously written "Life of Gregory the Great" and "Life of Cuthbert". He also drew on Josephus's "Antiquities", and the works of Cassiodorus, and there was a copy of the "Liber Pontificalis" in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's "Natural History", and his monastery also owned copies of the works of Dionysius Exiguus. He probably drew his account of Alban from a life of that saint which has not survived.
He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith. Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's "Historia Ecclesiastica" was written. Bede had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent and, with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the "Historia Ecclesiastica"; he was in contact with Bishop Daniel of Winchester for information about the history of the church in Wessex, and also wrote to the monastery at Lastingham for information about Cedd and Chad of Mercia. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey. The historian Walter Goffart argues that Bede based the structure of the "Historia" on three works, using them as the framework for its three main sections. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used Gildas's "De excidio". The second section, detailing the Gregorian mission of Augustine of Canterbury, was framed on the "Life of Gregory the Great" written at Whitby. The last section, detailing events after the Gregorian mission, Goffart feels was modelled on Stephen of Ripon's "Life of Wilfrid". Most of Bede's informants for the period after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence. Models and style. Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's "Historia Ecclesiastica". Bede also followed Eusebius in taking the "Acts of the Apostles" as the model for the overall work: where Eusebius used the "Acts" as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times. For example, he almost always uses the terms "Australes" and "Occidentales" for the South and West Saxons respectively, but in a passage in the first book he uses "Meridiani" and "Occidui" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier "History of the Franks". Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the "Historia Ecclesiastica". His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date.
Bede is described by Michael Lapidge as "without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period". His Latin has been praised for its clarity, but his style in the "Historia Ecclesiastica" is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the "Historia Ecclesiastica", Bede's Latin is "clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style." Intent. Bede's primary intention in writing the "Historia Ecclesiastica" was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Anglo-Saxons; by the end of the "Historia" the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the "Historia" is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote. N. J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters. Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet his miracle stories and his history alike reflect an inseparable integrity and regard for accuracy and truth, expressed in terms both of historical events and of a tradition of Christian faith that continues. Bede, like Gregory the Great whom Bede quotes on the subject in the "Historia", felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct. Omissions and biases. Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day. This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church. Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as "a scholar's dislike of the indefinite"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him. Bede was a Northumbrian, and this tinged his work with a local bias.
The sources to which he had access gave him less information about the west of England than about other areas. He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown and of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He is also parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held "imperium", or overlordship; only one king of Wessex, Ceawlin, is listed, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. Historian Robin Fleming states that, because Northumbria had been diminished by Mercian power, Bede was so hostile to Mercia that he consulted no Mercian informants and included no stories about its saints. Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally "curbed any missionary impulses towards the Anglo-Saxons from the British clergy." Use of "Anno Domini". At the time Bede wrote the "Historia Ecclesiastica", there were two common ways of referring to dates. One was to use indictions, which were 15-year cycles, counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—the reigning Roman emperor, for example, or the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the "Anno Domini" method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it and his promulgation of it in "De Temporum Ratione", his work on chronology, is the main reason it is now so widely used. Bede's Easter table, contained in "De Temporum Ratione", was developed from Dionysius Exiguus' Easter table. Assessment. The "Historia Ecclesiastica" was copied often in the Middle Ages, and about 160 manuscripts containing it survive. About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's "Historia" come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg. Modern historians have studied the "Historia" extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the "Historia", but recent scholarship has focused as much on what Bede did not write as on what he did.
The belief that the "Historia" was the culmination of Bede's works, the aim of all his scholarship, was a belief common among historians in the past but is no longer accepted by most scholars. Modern historians and editors of Bede have been lavish in their praise of his achievement in the "Historia Ecclesiastica". Stenton regards it as one of the "small class of books which transcend all but the most fundamental conditions of time and place", and regards its quality as dependent on Bede's "astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history." Patrick Wormald describes him as "the first and greatest of England's historians". The "Historia Ecclesiastica" has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the "Anglo-Saxon Chronicle", the medieval writers William of Malmesbury, Henry of Huntingdon and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the "Historia", and his works were used by both Protestant and Catholic sides in the wars of religion. Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, thinks that the "Historia's" account of the arrival of the Germanic invaders in Kent should not be considered to relate what actually happened, but rather relates myths that were current in Kent during Bede's time. It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works. Other historical works. Chronicles. As Chapter 66 of his "On the Reckoning of Time", in 725 Bede wrote the "Greater Chronicle" ("chronica maiora"), which sometimes circulated as a separate work. For recent events the "Chronicle", like his "Ecclesiastical History", relied upon Gildas, upon a version of the "Liber Pontificalis" current at least to the papacy of Pope Sergius I (687–701), and other sources. For earlier events he drew on Eusebius's "Chronikoi Kanones." The dating of events in the "Chronicle" is inconsistent with his other works, using the era of creation, the "Anno Mundi". Hagiography. His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of St Cuthbert, an adaptation of Paulinus of Nola's "Life of St Felix", and a translation of the Greek "Passion" of St Anastasius. He also created a listing of saints, the "Martyrology". Theological works. In his own time, Bede was as well known for his biblical commentaries, and for his exegetical and other theological works. The majority of his writings were of this type and covered the Old Testament and the New Testament. Most survived the Middle Ages, but a few were lost. It was for his theological writings that he earned the title of "Doctor Anglorum" and why he was declared a saint. 
Bede first wrote commentaries on biblical books which previous patristic authors had not, to his knowledge, treated in depth: "On the Gospel of Mark", "Commentary on Revelation", "Commentary on the Catholic Epistles", "Commentary on Acts", "Reconsideration on the Books of Acts"; and from the Old Testament "Commentary on Samuel", "Commentary on Genesis", "Commentaries on Ezra and Nehemiah", "On the Temple", "On the Tabernacle", "Commentaries on Tobit", "Commentaries on Proverbs", "Commentaries on the Song of Songs", and "Commentaries on the Canticle of Habakkuk". The works on Ezra, the tabernacle and the temple were especially influenced by Gregory the Great's writings. He also wrote "On the Gospel of Luke" and "Homilies on the Gospels". Bede also wrote homilies, works written to explain theology used in worship services. He wrote homilies on the major Christian seasons, such as Advent, Lent, and Easter, as well as on other subjects such as anniversaries of significant events. Both types of Bede's theological works circulated widely in the Middle Ages. Several of his biblical commentaries were incorporated into the "Glossa Ordinaria", an 11th-century collection of biblical commentaries. Some of Bede's homilies were collected by Paul the Deacon, and they were used in that form in the Monastic Office. Boniface used Bede's homilies in his missionary efforts on the continent. At the time of his death Bede was working on a translation of the Gospel of John into English, a task that had occupied the last 40 days of his life. When the last passage had been translated he said: "All is finished." Sources. Bede synthesised and transmitted the learning of his predecessors, as well as making careful, judicious innovations in knowledge (such as recalculating the age of the Earth, for which he was censured before surviving the heresy accusations and eventually having his views championed by Archbishop Ussher in the seventeenth century; see below) that had theological implications. In order to do this, he learned Greek and attempted to learn Hebrew. He spent time reading and rereading both the Old and the New Testaments. He mentions that he studied from a text of Jerome's Vulgate, which had itself been translated from the Hebrew text. He also studied both the Latin and the Greek Fathers of the Church. In the monastic library at Jarrow were numerous books by theologians, including works by Basil of Caesarea, John Cassian, John Chrysostom, Isidore of Seville, Origen, Gregory of Nazianzus, Augustine of Hippo, Jerome, Pope Gregory I, Ambrose of Milan, Cassiodorus and Cyprian. He used these, in conjunction with the Biblical texts themselves, to write his commentaries and other theological works. He had a Latin translation by Evagrius of Antioch of Athanasius's "Life of Antony" and a copy of Sulpicius Severus' "Life of St Martin". He also used lesser-known writers, such as Fulgentius of Ruspe, Julian of Eclanum, Ticonius and Prosper of Aquitaine. Bede was the first to refer to Jerome, Augustine, Pope Gregory and Ambrose as the four Latin Fathers of the Church. It is clear from Bede's own comments that he felt his calling was to explain to his students and readers the theology and thoughts of the Church Fathers. Bede sometimes included in his theological books an acknowledgement of the predecessors on whose works he drew. In two cases he left instructions that his marginal notes, which gave the details of his sources, should be preserved by the copyist, and he may have originally added marginal comments about his sources to others of his works.
Where he does not specify, it is still possible to identify books to which he must have had access by the quotations that he uses. A full catalogue of the library available to Bede in the monastery cannot be reconstructed, but it is possible to tell, for example, that Bede was very familiar with the works of Virgil. There is little evidence that he had access to any other of the pagan Latin writers—he quotes many of these writers, but the quotes are almost always found in the Latin grammars that were common in his day, one or more of which would certainly have been at the monastery. Another difficulty is that manuscripts of early writers were often incomplete: it is apparent that Bede had access to Pliny's "Encyclopaedia", for example, but it seems that the version he had was missing book xviii, since he did not quote from it in his "De temporum ratione". Historical and astronomical chronology. "De temporibus", or "On Time", written in about 703, provides an introduction to the principles of Easter computus. This was based on parts of Isidore of Seville's "Etymologies", and Bede also included a chronology of the world which was derived from Eusebius, with some revisions based on Jerome's translation of the Bible. In about 723, Bede wrote a longer work on the same subject, "On the Reckoning of Time", which was influential throughout the Middle Ages. He also wrote several shorter letters and essays discussing specific aspects of computus. "On the Reckoning of Time" ("De temporum ratione") included an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the spherical Earth influenced the changing length of daylight, and of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight. Bede also records the effect of the Moon on tides. He shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. Since the focus of his book was the computus, Bede gave instructions for computing the date of Easter from the date of the Paschal full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. He gives some information about the months of the Anglo-Saxon calendar. Copies of Bede's Easter table are normally found together with copies of his "De temporum ratione". His Easter table, an exact extension of Dionysius Exiguus' Paschal table covering the time interval AD 532–1063, contains a 532-year Paschal cycle based on the so-called classical Alexandrian 19-year lunar cycle, a close variant of Bishop Theophilus' 19-year lunar cycle proposed by Annianus and adopted by Bishop Cyril of Alexandria around AD 425. The earliest predecessor of this Metonic 19-year lunar cycle, similar though not identical, is the one invented by Anatolius around AD 260. For calendric purposes, Bede made a new calculation of the age of the world since the creation, which he dated as 3952 BC. Because of his innovations in computing the age of the world, he was accused of heresy at the table of Bishop Wilfrid, his chronology being contrary to accepted calculations. Once informed of the accusations of these "lewd rustics", Bede refuted them in his Letter to Plegwin.
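The 532-year length of the cycle follows directly from the arithmetic involved: in the Julian calendar the sequence of Easter dates can only repeat once the 19-year lunar cycle, the 7-day week and the 4-year leap-year pattern all realign, and 19 × 7 × 4 = 532. As an illustration of this arithmetic (not Bede's own method, which worked from tables rather than formulas), the following short Python sketch uses the modern closed-form algorithm for the Julian-calendar Easter given by Jean Meeus, which reproduces the classical Alexandrian reckoning embodied in the Dionysiac and Bedan tables; the function name is chosen here for illustration only.

    def julian_easter(year):
        # Date of Easter in the Julian calendar, per Meeus's algorithm.
        # Every quantity depends only on year mod 4, mod 7 and mod 19,
        # so the results repeat with period 4 * 7 * 19 = 532 years.
        a = year % 4                      # position in the leap-year cycle
        b = year % 7                      # position in the weekday cycle
        c = year % 19                     # position in the 19-year lunar cycle
        d = (19 * c + 15) % 30            # offset of the Paschal full moon from 21 March
        e = (2 * a + 4 * b - d + 34) % 7  # days onward to the following Sunday
        month = (d + e + 114) // 31       # 3 = March, 4 = April
        day = (d + e + 114) % 31 + 1
        return month, day                 # Julian-calendar month and day

As a check, julian_easter(735) returns (4, 17), that is, Easter on 17 April 735 in the Julian calendar; thirty-nine days later falls Ascension Day, 26 May 735, the date on which Cuthbert's letter places Bede's death.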
In addition to these works on astronomical timekeeping, he also wrote "De natura rerum", or "On the Nature of Things", modelled in part after the work of the same title by Isidore of Seville. His works were so influential that late in the ninth century Notker the Stammerer, a monk of the Monastery of St Gall in Switzerland, wrote that "God, the orderer of natures, who raised the Sun from the East on the fourth day of Creation, in the sixth day of the world has made Bede rise from the West as a new Sun to illuminate the whole Earth". Educational works. Bede wrote some works designed to help teach grammar in the abbey school. One of these was "De arte metrica", a discussion of the composition of Latin verse, drawing on previous grammarians' work. It was based on Donatus's "De pedibus" and Servius's "De finalibus" and used examples from Christian poets as well as Virgil. It became a standard text for the teaching of Latin verse during the next few centuries. Bede dedicated this work to Cuthbert, apparently a student, for he is named "beloved son" in the dedication, and Bede says "I have laboured to educate you in divine letters and ecclesiastical statutes." "De orthographia" is a work on orthography, designed to help a medieval reader of Latin with unfamiliar abbreviations and words from classical Latin works. Although it could serve as a textbook, it appears to have been mainly intended as a reference work. The date of composition for both of these works is unknown. "De schematibus et tropis sacrae scripturae" discusses the Bible's use of rhetoric. Bede was familiar with pagan authors such as Virgil, but it was not considered appropriate to teach biblical grammar from such texts, and Bede argues for the superiority of Christian texts in understanding Christian literature. Similarly, his text on poetic metre uses only Christian poetry for examples. Latin poetry. A number of poems have been attributed to Bede. His poetic output has been systematically surveyed and edited by Michael Lapidge, who concluded that the following works belong to Bede: the "Versus de die iudicii" ("verses on the day of Judgement", found complete in 33 manuscripts and fragmentarily in 10); the metrical "Vita Sancti Cudbercti" ("Life of St Cuthbert"); and two collections of verse mentioned in the "Historia ecclesiastica" V.24.2. Bede names the first of these collections as "librum epigrammatum heroico metro siue elegiaco" ("a book of epigrams in the heroic or elegiac metre"), and much of its content has been reconstructed by Lapidge from scattered attestations under the title "Liber epigrammatum". The second is named as "liber hymnorum diuerso metro siue rythmo" ("a book of hymns, diverse in metre or rhythm"); this has been reconstructed by Lapidge as containing ten liturgical hymns, one paraliturgical hymn (for the Feast of St Æthelthryth), and four other hymn-like compositions. Vernacular poetry. According to his disciple Cuthbert, Bede was "doctus in nostris carminibus" ("learned in our songs"). Cuthbert's letter on Bede's death, the "Epistola Cuthberti de obitu Bedae", moreover, commonly is understood to indicate that Bede composed a five-line vernacular poem known to modern scholars as "Bede's Death Song" As Opland notes, however, it is not entirely clear that Cuthbert is attributing this text to Bede: most manuscripts of the latter do not use a finite verb to describe Bede's presentation of the song, and the theme was relatively common in Old English and Anglo-Latin literature. 
The fact that Cuthbert's description places the performance of the Old English poem in the context of a series of quoted passages from Sacred Scripture might be taken simply as evidence that Bede also cited analogous vernacular texts. On the other hand, the inclusion of the Old English text of the poem in Cuthbert's Latin letter, the observation that Bede "was learned in our song," and the fact that Bede composed a Latin poem on the same subject all point to the possibility of his having written it. By citing the poem directly, Cuthbert seems to imply that its particular wording was somehow important, either because it was a vernacular poem endorsed by a scholar who evidently frowned upon secular entertainment, or because it is a direct quotation of Bede's last original composition. Veneration. There is no evidence for a cult being paid to Bede in England in the 8th century. One reason for this may be that he died on the feast day of Augustine of Canterbury. Later, when he was venerated in England, he was either commemorated after Augustine on 26 May, or his feast was moved to 27 May. However, he was venerated outside England, mainly through the efforts of Boniface and Alcuin, both of whom promoted the cult on the continent. Boniface wrote repeatedly back to England during his missionary efforts, requesting copies of Bede's theological works. Alcuin, who was taught at the school set up in York by Bede's pupil Ecgbert, praised Bede as an example for monks to follow and was instrumental in disseminating Bede's works to all of Alcuin's friends. Bede's cult became prominent in England during the 10th-century revival of monasticism, and by the 14th century it had spread to many of the cathedrals of England. Wulfstan, Bishop of Worcester, was a particular devotee of Bede's, dedicating a church to him in 1062, which was Wulfstan's first undertaking after his consecration as bishop. His body was 'translated' (the ecclesiastical term for relocation of relics) from Jarrow to Durham Cathedral around 1020, where it was placed in the same tomb with St Cuthbert. Bede's remains were later moved to a shrine in the Galilee Chapel at Durham Cathedral in 1370. The shrine was destroyed during the English Reformation, but the bones were reburied in the chapel. In 1831, the bones were dug up and then reburied in a new tomb, which is still there. Other relics were claimed by York, Glastonbury and Fulda. His scholarship and importance to Catholicism were recognised in 1899 when the Vatican declared him a Doctor of the Church. He is the only Englishman named a Doctor of the Church. He is also the only Englishman in Dante's "Paradise" ("Paradiso" X.130), mentioned among theologians and doctors of the church in the same canto as Isidore of Seville and the Scot Richard of St Victor. His feast day was included in the General Roman Calendar in 1899, for celebration on 27 May rather than on his date of death, 26 May, which was then the feast day of St Augustine of Canterbury. He is venerated in the Catholic Church, in the Church of England and in the Episcopal Church (United States) on 25 May, and in the Eastern Orthodox Church, with a feast day on 27 May (Βεδέα του Ομολογητού, "Bede the Confessor"). Bede became known as "Venerable Bede" (Latin: "Beda Venerabilis") by the 9th century because of his holiness, but this was not linked to consideration for sainthood by the Catholic Church. According to a legend, the epithet was miraculously supplied by angels, thus completing his unfinished epitaph.
The epithet is first attested in connection with Bede in the 9th century, when he was grouped with others who were called "venerable" at two ecclesiastical councils held at Aachen in 816 and 836. Paul the Deacon referred to him consistently as venerable. By the 11th and 12th centuries, the usage had become commonplace. Modern legacy. Bede's reputation as a historian, based mostly on the "Historia Ecclesiastica", remains strong. Thomas Carlyle called him "the greatest historical writer since Herodotus". Walter Goffart says of Bede that he "holds a privileged and unrivalled place among first historians of Christian Europe". He is the patron of Beda College in Rome, which prepares older men for the Catholic priesthood. His life and work have been celebrated with the annual Jarrow Lecture, held at St Paul's Church, Jarrow, since 1958. Jarrow Hall (formerly Bede's World), in Jarrow, is a museum on the site where he lived, documenting the history of Bede and other parts of English heritage. Bede Metro station, part of the Tyne and Wear Metro light rail network, is named after him.
4045
332841
https://en.wikipedia.org/wiki?curid=4045
Bubble tea
Bubble tea (also known as pearl milk tea, bubble milk tea, tapioca milk tea, boba tea, or boba) is a tea-based drink most often containing chewy tapioca balls, milk, and flavouring. It originated in Taiwan in the early 1980s and spread to other countries with large East Asian diaspora populations. Bubble tea is most commonly made with tapioca pearls (also known as "boba" or "balls"), but it can be made with other toppings as well, such as grass jelly, aloe vera, red bean, and popping boba. It has many varieties and flavours, but the two most popular varieties are pearl black milk tea and pearl green milk tea ("pearl" referring to the tapioca balls at the bottom). Description. Bubble teas fall into two categories: teas without milk and milk teas. Both varieties come with a choice of black, green, or oolong tea as the base. Milk teas usually include powdered or fresh milk, but may also use condensed milk, almond milk, soy milk, or coconut milk. The oldest known bubble tea drink consisted of a mixture of hot Taiwanese black tea, tapioca pearls, condensed milk, and syrup or honey. Bubble tea is most commonly served cold. The tapioca pearls that give bubble tea its name were originally made from the starch of the cassava, a tropical shrub known for its starchy roots, which was introduced to Taiwan from South America during Japanese colonial rule. Larger pearls quickly replaced these. Some cafés specialize in bubble tea production. While some cafés may serve bubble tea in a glass, most Taiwanese bubble tea shops serve the drink in a plastic cup and use a machine to seal the top of the cup with heated plastic cellophane. The method allows the tea to be shaken in the serving cup and makes it spill-free until a person is ready to drink it. The cellophane is then pierced with an oversized straw, referred to as a boba straw, which is larger than a typical drinking straw to allow the toppings to pass through. Due to its popularity, bubble tea has inspired a variety of bubble tea-flavoured snacks, such as bubble tea ice cream and bubble tea candy. The market size of bubble tea was valued at in 2022 and is projected to reach by the end of 2027. Some of the largest global bubble tea chains include Chatime, CoCo Fresh Tea & Juice and Gong Cha. Variants. Drink. Bubble tea comes in many variations, which usually consist of black tea, green tea, oolong tea, and sometimes white tea. Another variation, yuenyeung (named after the mandarin duck), originated in Hong Kong and consists of black tea, coffee, and milk. Other varieties of the drink include blended tea drinks. These variations are often either blended using ice cream, or are smoothies that contain both tea and fruit. Boba ice cream bars have also been produced. There are many popular flavours of bubble tea, such as taro, mango, coffee, and coconut. Flavouring ingredients such as a syrup or powder determine the flavour and usually the colour of the bubble tea, while other ingredients such as tea, milk, and boba are the basis. Toppings. Tapioca pearls (boba) are the most common ingredient, although there are other ways to make the chewy spheres found in bubble tea. The pearls vary in colour according to the ingredients mixed in with the tapioca. Most pearls are black from brown sugar. Jelly comes in different shapes: small cubes, stars, or rectangular strips, and flavours such as coconut jelly, konjac, lychee, grass jelly, mango, coffee, and green tea.
Azuki bean or mung bean paste, typical toppings for Taiwanese shaved ice desserts, gives bubble tea an added subtle flavour as well as texture. Aloe, egg pudding (custard), and sago can also be found in many bubble tea shops. Popping boba, or spheres that have fruit juices or syrups inside them, are another popular bubble tea topping. Flavours include mango, strawberry, coconut, kiwi, and honey melon. Some shops offer milk or cheese foam on top of the drink, giving the drink a consistency similar to that of whipped cream and a saltier flavour profile. One shop described the effect of the cheese foam as "neutraliz[ing] the bitterness of the tea...and as you drink it you taste the returning sweetness of the tea." Ice and sugar level. Bubble tea shops often give customers the option of choosing the amount of ice or sugar in their drink. Ice levels are usually specified ordinally (e.g., no ice, less ice, normal ice, more ice), and sugar levels in quarter increments (e.g., 0%, 25%, 50%, 75%, 100%). Packaging. In Southeast Asia, bubble tea is usually packaged in a plastic takeaway cup, sealed with plastic film or a rounded cap. New entrants into the market have attempted to distinguish their products by packaging them in bottles and other shapes. Some have used sealed plastic bags. Nevertheless, the plastic takeaway cup with a sealed cap is still the most common packaging method. Preparation method. The tea can be made in batches during the day or the night before. Brewing different types of tea takes different amounts of time and temperature. For instance, green tea requires brewing at a lower temperature, with a brewing time of 8–10 minutes to extract its optimal flavour. In contrast, black tea needs to be made with hotter water, with a brewing time of around 15–20 minutes to bring out its sweetness. A tea warmer dispenser allows the tea to remain heated for up to eight hours. Pearls (boba) are made from tapioca starch. Most bubble tea stores buy packaged tapioca pearls in an uncooked state. When the pearls are uncooked and in the package, they are uncolored and hard; they do not turn chewy and dark until they are cooked and sugar is added to bring out the taste. Uncooked tapioca pearls in their package can be stored for around 9 to 12 months. Once cooked, they can be stored in a sealed container in the refrigerator. Despite this, most bubble tea stores will not sell their boba after 24 hours, because it will start to harden and lose its chewiness. The traditional preparation method is to mix the ingredients (sugar, powders, and other flavourings) together by hand, using a bubble tea shaker cup. However, many present-day bubble tea shops use a bubble tea shaker machine instead. This eliminates the need to shake the bubble tea by hand and reduces staffing needs, as multiple cups of bubble tea may be prepared by a single barista. History. Milk and sugar have been added to tea in Taiwan since the Dutch colonization of Taiwan from 1624 to 1662. Before the invention of bubble tea, a similar beverage called bubble foam tea was created in Taiwan. This drink was made by mixing tea with fructose syrup and then shaking it with ice cubes in a shaker. The vigorous shaking created a fine foam, giving the drink its signature texture. Unlike modern pearl milk tea, bubble foam tea did not initially contain tapioca balls. There are two competing stories for the invention of bubble tea. One is associated with the Chun Shui Tang tea room in Taichung.
Its founder, Liu Han-Chieh, began serving Chinese tea cold after observing that coffee was served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains serving this tea were established. The company's product development manager, Lin Hsiu Hui, said she created the first bubble tea in 1988 when she poured tapioca balls into her tea during a staff meeting and encouraged others to drink it. The beverage was well received at the meeting, leading to its inclusion on the menu. It ultimately became the franchise's top-selling product. Another claim for the invention of bubble tea comes from the Hanlin Tea Room in Tainan. It claims that bubble tea was invented in 1986 when teahouse owner Tu Tsong-he was inspired by white tapioca balls he saw in the local market of Yā-mǔ-liáo. He later made tea using these traditional Taiwanese snacks. This resulted in what is known as "pearl tea." Popularity. In the 1990s, bubble tea spread across East and Southeast Asia with ever-growing popularity. In regions like Hong Kong, mainland China, Japan, Vietnam, and Singapore, the bubble tea trend expanded rapidly among young people. At some popular shops, people would line up for more than thirty minutes to get a drink. In recent years, the popularity of bubble tea has gone beyond the beverage itself, with boba lovers inventing various bubble tea-flavoured foods, including ice cream, pizza, toast, sushi, and ramen. Taiwan. In Taiwan, bubble tea has become not just a beverage but an enduring icon of the nation's culture and food history. In 2020, 30 April was officially declared National Bubble Tea Day in Taiwan. That same year, the image of bubble tea was proposed as an alternative cover design for Taiwan's passport. According to Al Jazeera, bubble tea has become synonymous with Taiwan and is an important symbol of Taiwanese identity both domestically and internationally. Bubble tea is used to represent Taiwan in the context of the Milk Tea Alliance. 50 Lan is a bubble tea chain founded in Tainan. Hong Kong. Hong Kong is famous for its traditional Hong Kong–style milk tea, which is made with brewed black tea and evaporated milk. While milk tea has long been integrated into people's daily lives, the expansion of Taiwanese bubble tea chains such as Tiger Sugar, Youiccha, and Xing Fu Tang into Hong Kong created a new wave of "boba tea." Mainland China. Since the idea of adding tapioca pearls to milk tea was introduced into China in the 1990s, bubble tea has increased in popularity. In 2020, bubble tea consumption was estimated at five times that of coffee. According to data from QianZhen Industry Research Institute, the value of the tea-related beverage market in China reached (about ) in 2018. In 2019, annual sales from bubble tea shops reached as high as (roughly ). While bubble tea chains from Taiwan (e.g., Gong Cha and Coco) are still popular, local brands such as Yi Dian Dian, Nayuki, and Hey Tea now dominate the market. In China, young people's growing obsession with bubble tea has shaped the way they socialise: buying someone a cup of bubble tea has become a new way of informally thanking someone, and the drink is a favoured topic among friends and on social media. Japan. Bubble tea first entered Japan in the late 1990s, but it failed to leave a lasting impression on the market. It was not until the 2010s that the bubble tea trend finally swept Japan. 
Shops from Taiwan, Korea, and China, as well as local brands, began to pop up in cities, and bubble tea has remained one of the hottest trends since then. Bubble tea has become so commonplace among teenagers that teenage girls in Japan invented a slang term for it: "tapiru" (タピる), short for "drinking tapioca tea" in Japanese. The word won first place in a 2018 survey of "Japanese slang for middle school girls". A bubble tea theme park was open for a limited time in 2019 in Harajuku, Tokyo. Singapore. Bubble tea is loved by many in Singapore. The drink was sold in Singapore as early as 1992 and became phenomenally popular among young people in 2001. This soon ended because of intense competition and price wars among shops. As a result, most bubble tea shops closed, and bubble tea had lost its popularity by 2003. When Taiwanese chains like Koi and Gong Cha came to Singapore in 2007 and 2009, the beverage experienced only short resurgences in popularity. In 2018, interest in bubble tea rose again at an unprecedented speed in Singapore as new brands like The Alley and Tiger Sugar entered the market; social media also played an important role in driving this renaissance of bubble tea. Malaysia. Bubble tea was introduced to Malaysia in the late 1990s and saw a surge in popularity during the early 2000s, particularly in urban areas and night markets. The arrival of Taiwanese chains such as Chatime in 2010 marked a significant shift in the industry, as franchised outlets began appearing in major cities. By 2013, Malaysia accounted for around 50% of Chatime's global revenue. In 2017, a high-profile legal dispute between Chatime's franchisor and its Malaysian licensee, Loob Holding, led to the rebranding of over 160 outlets as Tealive. Tealive has since become the leading homegrown bubble tea brand in Malaysia, with hundreds of outlets nationwide and regional expansion across Southeast Asia. Other international and local brands, such as Gong Cha, The Alley, and Chizu, also maintain a strong presence. Propelled by the influx of international franchises and the emergence of homegrown brands, bubble tea has become a mainstream beverage choice in Malaysia and a prominent segment of the country's beverage industry. United States. Taiwanese immigrants introduced bubble tea to the United States in the 1990s, initially in California, in areas including Los Angeles County. Some of the first stand-alone bubble tea shops can be traced to a food court in Arcadia, in Southern California, and Fantasia Coffee & Tea in Cupertino, in Northern California. Chains like Tapioca Express, Quickly, Lollicup, and Happy Lemon emerged in the late 1990s and early 2000s, bringing the Taiwanese bubble tea trend to the US. Within the Asian American community, bubble tea is commonly known by the colloquial term "boba." As the beverage gained popularity in the US, it gradually became more than a drink: a marker of cultural identity for Asian Americans. This phenomenon was referred to as "boba life" by Chinese-American brothers Andrew and David Fung in their music video "Bobalife," released in 2013. Boba symbolizes a subculture through which Asian Americans, as a social minority, could define themselves, and "boba life" reflects their desire for both cultural and political recognition. 
"Boba" is also used disparagingly in the term "boba liberal," which derides mainstream Asian-American liberalism. Other regions with large concentrations of bubble tea restaurants in the United States are the Northeast and Southwest. This is reflected in the coffeehouse-style teahouse chains that originate from these regions, such as Boba Tea Company from Albuquerque, New Mexico, No. 1 Boba Tea in Las Vegas, Nevada, and Kung Fu Tea from New York City. Albuquerque and Las Vegas have large concentrations of boba tea restaurants, as the drink is especially popular among the Hispano, Navajo, Pueblo, and other Native American, Hispanic and Latino American communities in the Southwest. A massive shipping and supply chain crisis on the U.S. West Coast, coupled with the obstruction of the Suez Canal in March 2021, caused a shortage of tapioca pearls for bubble tea shops in the U.S. and Canada. Most of the tapioca consumed in the U.S. is imported from Asia, since its critical ingredient, tapioca starch, is mostly produced in Asia. TikTok trends and the Korean Wave also fueled the popularity of bubble tea in the United States. Vietnam. Taiwanese milk tea was introduced to Vietnam in the early 2000s, but it took a few years for the drink to become popular with young people. Roadside stalls and carts rarely served milk tea, and the milk tea trend gradually cooled in the late 2000s. Many shops had to liquidate or close, while others struggled to survive. Bubble tea also attracted controversy over reports of tea of unknown origin and allegations that tapioca pearls were made from polymer plastics. Around 2012, Taiwanese brands arrived in Vietnam, serving the same milk tea in a completely new style: with toppings, through chain outlets, and in spaces designed to rival any well-known coffee shop. The appeal of Taiwanese milk tea gradually returned, peaking between late 2016 and early 2017. According to a 2017 survey by Lozi, the Vietnamese milk tea market saw an explosion, with 100 large and small brands coexisting and over 1,500 points of sale, including major Taiwanese brands such as Ding Tea, Gong Cha, and BoBaPop. The same survey found that milk tea had become a popular drink in Vietnam, with 53% of respondents drinking it at least once a week. From the consumer perspective, milk tea is characterized by its sweet, creamy taste, which suits many customers – not only students but also children and office workers. In addition, milk tea is constantly evolving to meet customer demand, from cheese cream tea and fruit tea to low-fat tea. Another factor in milk tea's popularity is the style of service: instead of the small shops and school-gate carts of the past, milk tea is now served in spacious, air-conditioned premises with fixed seating. Korea. Milk tea is not only a daily drink but has also become a craze in many countries, including South Korea. In the capital Seoul alone, four well-known milk tea shops – Gong Cha, Cofioca, Amasvin, and Happy Lemon – are popular weekend places for young Koreans to meet, date, and relax. Korea has many milk tea shops of all sizes, from famous brands to small shops with just a drink counter and a table. Although pearl milk tea originated in Taiwan, it has undergone certain changes in Korea. 
Many Koreans are concerned with keeping in shape and pay close attention to the calories in each meal so that they can exercise off any excess. Restaurants and bakeries in Korea therefore often display calorie information carefully as a way of protecting consumers' health. For example, at Gong Cha milk tea shops there, customers can choose the sweetness of their milk tea by selecting a sugar level (0%, 30%, 50%, 70%, or 100%) and can similarly choose the amount of ice to suit their personal taste. Australia. Individual bubble tea shops began to appear in Australia in the 1990s, along with other regional drinks like Eis Cendol. Chains of stores were established as early as 2002, when the Bubble Cup franchise opened its first store in Melbourne. Although originally associated with the rapid growth of immigration from Asia and the large cohort of tertiary students from Asia, bubble tea in Melbourne and Sydney has become popular across many communities. Mauritius. The first bubble tea shop in Mauritius opened in late 2012, and since then bubble tea shops have appeared in most shopping malls on the island. They have become a popular place for teenagers to hang out. Cultural influence. In 2020, the Unicode Consortium released a bubble tea emoji as part of its version 13.0 update. On 29 January 2023, Google celebrated bubble tea with a Doodle. Potential health concerns. In July 2019, Singapore's Mount Alvernia Hospital warned against the high sugar content of bubble tea, the drink having become extremely popular in Singapore. While it acknowledged the benefits of drinking green tea and black tea in reducing the risk of cardiovascular disease, diabetes, arthritis, and cancer, the hospital cautioned that the addition of other ingredients, such as non-dairy creamer and toppings, could raise the fat and sugar content of the tea and increase the risk of chronic diseases. Non-dairy creamer is a milk substitute that contains trans fat in the form of hydrogenated palm oil. The hospital warned that this oil has been strongly correlated with an increased risk of heart disease and stroke. The other concern about bubble tea is its high calorie content, partially attributable to the high-carbohydrate tapioca pearls, which can account for up to half of the calories in a serving of bubble tea.
Battle of Blenheim
The Battle of Blenheim, fought on 13 August 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the reconstituted Grand Alliance. Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and thereby gain a favourable peace settlement. The dangers to Vienna were considerable: Maximilian II Emanuel, Elector of Bavaria, and Marshal Ferdinand de Marsin's forces in Bavaria threatened from the west, and Marshal Louis Joseph de Bourbon, duc de Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt on its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance. A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage Maximilian's and Marsin's army before Marshal Camille d'Hostun, duc de Tallard, could bring reinforcements through the Black Forest. The Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and Marlborough failed in his attempts to force an engagement. When Tallard arrived to bolster Maximilian's army, and Prince Eugene of Savoy arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived. Blenheim was one of the battles that altered the course of the war, which until then had favoured the French and Spanish Bourbons. Although the battle did not win the war, it prevented a potentially devastating loss for the Grand Alliance and shifted the war's momentum, ending French plans of knocking Emperor Leopold out of the war. The French suffered catastrophic casualties in the battle, and their commander-in-chief, Tallard, was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle in preparation for the following year's campaign into France itself. This offensive never materialised, for the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war continued for another decade before ending in 1714. Background. By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of successes for France and her allies, most particularly on the Danube, where Marshal Claude-Louis-Hector de Villars and Maximilian II Emanuel, Elector of Bavaria, had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to Villars being replaced by the less dynamic Marshal Ferdinand de Marsin. Nevertheless, the threat was still real: Rákóczi's Hungarian revolt was threatening the Empire's eastern approaches, and Marshal Louis Joseph, Duke of Vendôme's forces threatened an invasion from northern Italy. 
In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the reconstituted Grand Alliance. To isolate the Danube from any Allied intervention, Marshal François de Neufville, duc de Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and British troops around Maastricht in the Low Countries, while General Robert Jean Antoine de Franquetot de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were the imperial army of 36,000 men under Margrave Louis William of Baden, stationed in the Lines of Stollhofen to watch Marshal Camille d'Hostun, duc de Tallard, at Strasbourg, and 10,000 men under Prince Eugene of Savoy south of Ulm. Various Allied statesmen, including the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. To maintain secrecy, Marlborough kept his plans hidden from both the Dutch States General and the Parliament of England. In the Dutch Republic, only a select few – among them Grand Pensionary Anthonie Heinsius and Simon van Slingelandt – were privy to his strategy from the outset. In England, Marlborough confided only in Sidney Godolphin, Queen Anne, and her husband. Marlborough, realising the only way to reinforce the Austrians was by the use of secrecy and guile, pretended to move his troops to the Moselle – a plan approved by the Dutch States General – but once there, he would move further south and link up with Austrian forces in southern Germany. The Dutch diplomat and field deputy Van Rechteren-Almelo would come to play an important role. He made sure that on their 450-kilometre-long march, the Allies would nowhere be denied passage by local rulers, nor would they need to look for provisions, horsefeed or new boots. He also saw to it that sufficient stopovers were arranged along the way to ensure that the Allies arrived at their destination in good condition. This was of paramount importance, for the success of the operation depended on a quick elimination of the Bavarian elector. However, it was not possible to make in advance the logistical arrangements that would have been indispensable to supply the Allied army south of the Danube. For this, the Allies would have needed access to the free imperial cities of Ulm and Augsburg, but the Bavarian elector had taken these two cities. This could have become a problem for Marlborough had the Elector avoided a battle and instead entrenched himself south of the Danube. Had Villeroi then managed to take advantage of the weakening of Allied forces in the Netherlands by recapturing Liège and besieging Maastricht, it would have validated the concerns of some of Marlborough's Dutch adversaries, who were against any major weakening of the forces in the Spanish Netherlands. Prelude. Protagonists march to the Danube. Marlborough's march started on 19 May from Bedburg, northwest of Cologne. The army assembled by Marlborough's brother, General Charles Churchill, consisted of 66 squadrons of cavalry, 31 battalions of infantry and 38 guns and mortars, totalling 21,000 men, 16,000 of whom were British. This force was augmented en route, and by the time it reached the Danube it numbered 40,000 men: 47 battalions and 88 squadrons. 
While Marlborough led this army south, the Dutch general Henry Overkirk, Count of Nassau, maintained a defensive position in the Dutch Republic against the possibility of Villeroi mounting an attack. Marlborough had assured the Dutch that if the French were to launch an offensive he would return in good time, but he calculated that as he marched south, the French army would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed him with 30,000 men in 60 squadrons and 42 battalions. Marlborough wrote to Godolphin: "I am very sensible that I take a great deal upon me, but should I act otherwise, the Empire would be undone ..." In the meantime, the appointment of Henry Overkirk as Field Marshal caused significant controversy in the Dutch Republic. After the Earl of Athlone's death, the Dutch States General had put Overkirk in charge of the Dutch States Army, which led to much discontent among the other high-ranking Dutch generals. Ernst Wilhelm von Salisch, Daniël van Dopff and Menno van Coehoorn threatened to resign or go into the service of other countries, although all were eventually convinced to stay. The new infantry generals were also disgruntled – the Lord of Slangenburg because he had to serve under the less experienced Overkirk, and the Count of Noyelles because he had to serve under the orders of the 'insupportable' Slangenburg. Then there was the major problem of the position of the Prince of Orange. The provinces of Friesland and Groningen demanded that their 17-year-old stadtholder be appointed supreme infantry general. This divided the parties so much that a second Great Assembly, like the one held in 1651, was considered. However, after pressure from the other provinces, Friesland and Groningen adjusted their demands and a compromise was found. The Prince of Orange would nominally be appointed infantry general, behind Slangenburg and Noyelles, but he would not really be in command until he was 20. While the Allies were making their preparations, the French were striving to maintain and re-supply Marsin. He had been operating with Maximilian II against Margrave Louis William, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. On 14 May, Tallard brought 8,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him. On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle his army would now have to turn west; instead it crossed to the right bank of the Rhine, and was reinforced by 5,000 waiting Hanoverians and Prussians. The French realised that there would be no campaign on the Moselle. A second possible objective now occurred to them: an Allied incursion into Alsace and an attack on Strasbourg. Marlborough furthered this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but also ensured that the French plan to march on Vienna was delayed while they waited to see what Marlborough's army would do. 
Encouraged by Marlborough's promise to return to the Netherlands if a French attack developed there – transferring his troops back up the Rhine on barges – the Dutch States General agreed to release the Danish contingent of seven battalions and 22 squadrons as reinforcements. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt. Strategy. On 10 June, Marlborough met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Margrave Louis William of Baden, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At this conference, it was decided that Prince Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to watch Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Margrave Louis William's forces would combine, totalling 80,000 men, and march on the Danube to seek out Maximilian II and Marsin before they could be reinforced. Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to construct a plan to save Bavaria. The rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote: "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from King Louis arrived on 27 June: Tallard was to reinforce Marsin and Maximilian II on the Danube via the Black Forest, with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen, or, if the Allies should move all their forces to the Danube, he was to join with Tallard; Coigny with 8,000 men would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march. On 22 June, Marlborough's forces linked up with the Imperial forces at Launsheim, having covered some 450 kilometres in five weeks. Thanks to a carefully planned timetable, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline: "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest." In response to Marlborough's manoeuvres, Maximilian and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube. Marlborough could not attack Dillingen because of a lack of siege guns – he had been unable to bring any from the Low Countries, and Margrave Louis William had failed to supply any, despite prior assurances that he would. The Allies needed a base for provisions and a good river crossing. Consequently, on 2 July Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. 
Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a fierce battle, with heavy casualties on both sides, Schellenberg fell. This forced Donauwörth to surrender shortly afterward. Maximilian, knowing his position at Dillingen was now untenable, took up a position behind the strong fortifications of Augsburg. Tallard's march presented a dilemma for Prince Eugene. If the Allies were not to be outnumbered on the Danube, he realised that he had either to try to cut Tallard off before he could get there, or to reinforce Marlborough. If he withdrew from the Rhine to the Danube, Villeroi might also make a move south to link up with Maximilian and Marsin. Prince Eugene compromised: leaving 12,000 troops behind to guard the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard. Lacking in numbers, Prince Eugene could not seriously disrupt Tallard's march, but the French marshal's progress was proving slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders and the mountain passes were proving tough for the 2,000 wagonloads of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan: "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest." At Augsburg, Maximilian was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered his policy of inaction, further encouraging him to wait for the reinforcements. This reticence to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly, to put pressure on Maximilian to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue Marlborough into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of the town of Rain from 9 to 16 July, caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude, "If he has to go home without having achieved his objective, he will certainly be ruined." Final positioning. Tallard, with 34,000 men, reached Ulm, joining with Maximilian and Marsin at Augsburg on 5 August, although Maximilian had dispersed his army in response to Marlborough's campaign of ravaging the region. Also on 5 August, Prince Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube was required in case Donauwörth fell to the enemy; so on 7 August, the first of Margrave Louis William's 15,000 Imperial troops left Marlborough's main force to besiege the heavily defended city of Ingolstadt, farther down the Danube, with the remainder following two days later. With Prince Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and Maximilian debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to flounder in the colder autumn weather; Maximilian and Marsin, newly reinforced, were keen to push ahead. 
The French and Bavarian commanders eventually agreed to attack Prince Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Prince Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth. By a series of swift marches Marlborough concentrated his forces on Donauwörth and, by noon on 11 August, the link-up was complete. During 11 August, Tallard pushed forward from the river crossings at Dillingen. By 12 August, the Franco-Bavarian forces were encamped behind the small River Nebel near the village of Blenheim on the plain of Höchstädt. On the same day, Marlborough and Prince Eugene carried out a reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster, not far from the French camp. A French reconnaissance under Jacques Joseph Vipart, Marquis de Silly, went forward to probe the enemy, but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of Lieutenant General John Wilkes and Brigadier Archibald Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers, acquainted with the superior numbers of the enemy and aware of their strong defensive position, remonstrated with Marlborough about the hazards of attacking; but he was resolute – partly because the Dutch officer Willem Vleertman had scouted the marshy ground before them and reported that the land was perfectly suitable for the troops. Battle. The battlefield. The battlefield stretched for nearly four miles. The extreme right flank of the Franco-Bavarian army rested on the Danube, while the undulating pine-covered hills of the Swabian Jura lay to their left. A small stream, the Nebel, fronted the French line; the ground on either side of it was marshy and only intermittently fordable. The French right rested on the village of Blenheim, near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens, and meadows. Between Blenheim and the village of Oberglauheim to the north-west, the fields of wheat had been cut to stubble and were now ideal for the deployment of troops. From Oberglauheim to the next hamlet of Lutzingen, the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers. Initial manoeuvres. At 02:00 on 13 August, 40 Allied cavalry squadrons were sent forward, followed at 03:00, in eight columns, by the main Allied force pushing over the River Kessel. At about 06:00 they reached Schwenningen, near Blenheim. The British and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Prince Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, while Prince Eugene's 16,000 men would attack Maximilian and Marsin's combined forces of 23,000 troops on the right. If this attack was pressed hard, it was anticipated that Maximilian and Marsin would feel unable to send troops to aid Tallard on their right. 
Lieutenant-General John Cutts would assault Blenheim in concert with Prince Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French centre. The Allies would have to wait until Prince Eugene was in position before the general engagement could begin. Tallard was not anticipating an Allied attack; he had been deceived by intelligence gathered from prisoners taken by de Silly the previous day, and reassured by his army's strong position. Tallard and his colleagues believed that Marlborough and Prince Eugene were about to retreat north-westwards towards Nördlingen, and Tallard wrote a report to this effect to King Louis that morning. Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops drew into battle order to face the unexpected threat. At around 08:00 the French artillery on their right wing opened fire, answered by Colonel Holcroft Blood's batteries. The guns were heard by Prince Louis in his camp before Ingolstadt. An hour later Tallard, Maximilian, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that Maximilian and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were divided as to how to utilise the Nebel. Tallard's preferred tactic was to lure the Allies across before unleashing his cavalry upon them. This was opposed by Marsin and Maximilian, who felt it better to close their infantry right up to the stream itself, so that while the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. Tallard's approach was sound if all its parts were implemented, but in the event it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had planned. Deployment. The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Alessandro de Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under César Armand, Marquis de Rozel, moved into place. Between Lutzingen and Oberglauheim Maximilian placed 27 squadrons of cavalry: 14 Bavarian squadrons commanded by d'Arco, with 13 more in support nearby under Baron Veit Heinrich Moritz Freiherr von Wolframsdorf. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by Jean-Jules-Armand Colbert, Marquis de Blainville, including the effective Irish Brigade known as the "Wild Geese". Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons, 16 of which were from Marsin, supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by Philippe, Marquis de Clérambault. Four battalions stood to the rear and a further eleven were in reserve. These battalions were supported by Count Gabriel d'Hautefeuille's twelve squadrons of dismounted dragoons. By 11:00 Tallard, Maximilian, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. 
The Earl of Orkney later said that, "had I been asked to give my opinion, I had been against it." Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Cutts' column – which by 10:00 had expelled the enemy from two water mills on the Nebel – had already deployed by the river against Blenheim, enduring over the next three hours severe fire from a six-gun heavy battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel William Cadogan reported that Prince Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank. Blenheim. Cutts ordered Rowe's brigade to attack. The English infantry rose from the edge of the Nebel and silently marched towards Blenheim. James Ferguson's Scottish brigade supported Rowe's left, and moved towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed, the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back and inflicted heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer Zurlauben, fell on the English troops, cutting at the exposed flank of Rowe's own regiment. Wilkes' Hessian brigade, nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack. Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders". Marlborough, spotting this error, countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice that number of French infantry and dragoons. Lutzingen. On the Allied right, Prince Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of Maximilian and Marsin. Leopold I, Prince of Anhalt-Dessau, led four brigades forward across the Nebel to assault the well-fortified position of Lutzingen. 
Here, the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry and by salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Jobst von Scholten, attempted to drive the French infantry out of the copses beyond the village. With the infantry heavily engaged, Prince Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, was pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. The exhausted French were unable to follow up their advantage, and both cavalry forces tried to regroup and reorder their ranks. Without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Prince Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners were taken; it was only through the leadership of Prince Eugene and Prince Maximilian of Hanover that the Imperial infantry was prevented from abandoning the field. After rallying his troops near Schwennenbach – well beyond their starting point – Prince Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous crossfire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians were almost as disordered as their opponents, and they too were in need of inspiration from their commander, Maximilian, who was seen " ... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream. Centre and Oberglauheim. Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. Hulsen's brigade of Hessians and Hanoverians and the Earl of Orkney's British brigade advanced across the stream, supported by dismounted British dragoons and ten British cavalry squadrons. This covering force allowed Charles Churchill's Dutch, British and German infantry and further cavalry units to advance and form up on the plain beyond. Marlborough arranged his infantry battalions in a novel manner, with gaps sufficient to allow the cavalry to move freely between them, and ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Henry Lumley's English cavalry, who linked Cutts' column facing Blenheim with Churchill's infantry. As the elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in confusion and were pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed Maximilian, "the gentlemen of France fleeing?" Palmes attempted to follow up his success but was repulsed by other French cavalry and musket fire from the edge of Blenheim. 
Nevertheless, Tallard was alarmed by the repulse of the Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but Marsin, hard pressed by Prince Eugene – whose second attack was in full flood – refused. As Tallard consulted with Marsin, more of his infantry were taken into Blenheim by Clérambault. Fatally, Tallard, although aware of the situation, did nothing to rectify it, leaving him with just the nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming up on Tallard's side of the stream. His front-line cavalry darted forward down the gentle slope towards the Nebel, but the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded; he died two days later. At this stage the time was just after 15:00. The Danish cavalry, under Carl Rudolf, Duke of Württemberg-Neuenstadt, had made slow work of crossing the Nebel near Oberglauheim. Harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured. Count Horn directed Anton Günther, Fürst von Holstein-Beck, to take the village, but his two Dutch brigades were cut down by the French and Irish troops, who captured and badly wounded Holstein-Beck during the action. The battle was now in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Prince Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces. Seeing the opportunity, Marsin ordered his cavalry to change front, away from Prince Eugene and towards their right, against the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough, who had crossed the Nebel on a makeshift bridge to take personal control, ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A nine-gun artillery battery and a Dutch cavalry brigade under Averock were also called forward, but the cavalry soon came under pressure from Marsin's more numerous squadrons. Marlborough now requested Prince Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own difficulties, Prince Eugene at once complied. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus preventing Marsin from striking at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, de Blainville, was numbered among the heavy casualties. Breakthrough. 
By 16:00, with large parts of the Franco-Bavarian army besieged in Blenheim and Oberglauheim, the Allied centre of 81 squadrons (nine of which had been transferred from Cutts' column), supported by 18 battalions, was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to attack simultaneously along the whole front, and Prince Eugene, after his second repulse, needed time to reorganise. By just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of his line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some French infantry crowded into Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the Allied first line – Lumley's English and Scots on the Allied left, and Reinhard Vincent Graf von Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, which lacked infantry support and were tired, managed to push the Allied first line back to their infantry support. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – "Sir, you are under a mistake, the enemy lies that way ..." Marlborough then ordered the second Allied line forward and, driving through the centre, the Allies finally routed Tallard's tired cavalry. The Prussian Life Dragoons' colonel, Ludwig von Blumenthal, and his second in command, Lieutenant Colonel von Hacke, fell next to each other, but the charge succeeded. With their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form a square, but they were overwhelmed by Blood's close-range artillery and platoon fire. Mérode-Westerloo later wrote: "[They] died to a man where they stood, stationed right out in the open plain – supported by nobody." The majority of Tallard's retreating troops headed for Höchstädt, but most did not reach the safety of the town, plunging instead into the Danube, where over 3,000 French horsemen drowned; others were cut down by the pursuing Allied cavalry. The Marquis de Gruignan attempted a counter-attack, but he was brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Tallard was caught up in the rout and swept towards Sonderheim. Surrounded by a squadron of Hessian troops, he surrendered to Lieutenant Colonel de Boinenburg, the Prince of Hesse-Kassel's aide-de-camp, and was sent under escort to Marlborough. Marlborough welcomed the French commander: "I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard." Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Prince Eugene had become exasperated with the performance of his Imperial cavalry, whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Declaring in disgust that he wished to "fight among brave men and not among cowards", Prince Eugene went into the attack with the Prussian and Danish infantry, as did Leopold I, waving a regimental colour to inspire his troops. This time the Prussians were able to storm the great Bavarian battery and overwhelm the guns' crews. 
Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, Maximilian and Marsin decided the battle was lost; like the remnants of Tallard's army, they fled the battlefield, albeit in better order than Tallard's men. Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry and the growing confusion in the field. Fall of Blenheim. Marlborough now turned his attention from the fleeing enemy and directed Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard, which had been prepared for defence. Lord John Hay and Charles Ross's dismounted dragoons were also sent, but suffered under a counter-charge delivered by the regiments of Artois and Provence under the command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, who attacked again. The Allied progress was slow and hard, and like the defenders, they suffered many casualties. Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. "Inform Monsieur Tallard", replied Marlborough, "that, in the position in which he is now, he has no command." Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill, and Orkney. The French had repulsed every attack, but many had seen what had happened on the plain: their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – "... it came into my head to beat parley", he later wrote, "which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms." Threatened by Allied guns, other units followed their example. It was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat, and some 10,000 of France's best infantry laid down their arms. During these events Marlborough was still in the saddle organising the pursuit of the broken enemy. Pausing for a moment, he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: "I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory." Aftermath. French losses were immense, with over 27,000 killed, wounded and captured. Moreover, the myth of French invincibility had been destroyed, and King Louis's hopes of a victorious early peace were over. It was a hard-fought contest: Prince Eugene observed that "I have not a squadron or battalion which did not charge four times at least." 
Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Prince Eugene had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and the remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed on 7 November, Bavaria was placed under Austrian military rule, allowing the Habsburgs to use its resources for the rest of the conflict. The remnants of Maximilian and Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria, under the strict terms of an alliance with Austria, Maximilian left his country and family in order to continue the war against the Allies from the Spanish Netherlands, where he still held the post of governor-general. Tallard – who, unlike his subordinates, was not ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711. The 1704 campaign lasted longer than usual, for the Allies sought to extract the maximum advantage. Realising that France was too powerful to be forced to make peace by a single victory, Prince Eugene, Marlborough and Prince Louis met to plan their next moves. For the following year Marlborough proposed a campaign along the valley of the Moselle to carry the war deep into France. This required the capture of the major fortress of Landau, which guarded the Rhine, and of the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October, and Landau fell on 23 November to Prince Louis and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. The planned offensive never materialised, as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war raged on for another decade. Marlborough returned to England on 14 December (O.S.) to the acclamation of Queen Anne and the country. In the first days of January, the 110 cavalry standards and 128 infantry colours captured during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a duke in 1702, granted him the Park of Woodstock and promised a sum of £240,000 to build a suitable house as a gift from a grateful Crown in recognition of his victory; this resulted in the construction of Blenheim Palace. The British historian Sir Edward Shepherd Creasy considered Blenheim one of the pivotal battles in history, writing: "Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability." The military historian John A. Lynn considers this claim unjustified, for King Louis never had such an objective; the campaign in Bavaria was intended only to bring about a favourable peace settlement, not domination over Europe. The Lake Poet Robert Southey criticised the Battle of Blenheim in his anti-war poem "After Blenheim", but later praised the victory as "the greatest victory which had ever done honour to British arms".
Battle of Ramillies
The Battle of Ramillies, fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and suffered defeat in northern Italy. Yet despite his opponents' setbacks, Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706. The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains, Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven ("Louvain") at the head of 60,000 men and marched towards Tienen ("Tirlemont"), as if to threaten Zoutleeuw ("Léau"). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the rivers Mehaigne and Petite Gette, close to the small village of Ramillies. In less than four hours Marlborough's Dutch, English, and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to fully exploit their victory. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus, the year 1706 proved, for the Allies, to be an "annus mirabilis". Background. After their disastrous defeat at Blenheim in 1704, the French found some respite in the following year. The Duke of Marlborough had intended the 1705 campaign – an invasion of France through the Moselle valley – to complete the work of Blenheim and persuade King Louis XIV to make peace, but the plan had been thwarted by friend and foe alike. The reluctance of his Dutch allies to see their frontiers denuded of troops for another gamble in Germany had denied Marlborough the initiative, but of far greater importance was the Margrave of Baden's pronouncement that he could not join the Duke in strength for the coming offensive. This was partly due to the sudden switching of troops from the Rhine to reinforce Prince Eugene in Italy and partly due to the deterioration of Baden's health, brought on by the re-opening of a severe foot wound he had received at the storming of the Schellenberg the previous year. Marlborough had to cope with the death of Emperor Leopold I in May and the accession of Joseph I, which unavoidably complicated matters for the Grand Alliance.
The resilience of the French king and the efforts of his generals also added to Marlborough's problems. Marshal Villeroi, exerting considerable pressure on the Dutch commander, Count Overkirk, along the Meuse, took Huy on 10 June before pressing on towards Liège. With Marshal Villars sitting strong on the Moselle, the Allied commander – whose supplies had by now become very short – was forced to call off his campaign on 16 June. "What a disgrace for Marlborough," exulted Villeroi, "to have made false movements without any result!" With Marlborough's departure north, the French transferred troops from the Moselle valley to reinforce Villeroi in Flanders, while Villars marched off to the Rhine. The Anglo-Dutch forces gained minor compensation for the failed Moselle campaign with the success at Elixheim and the crossing of the Lines of Brabant in the Spanish Netherlands (Huy was also retaken on 11 July), but a chance to bring the French to a decisive engagement eluded Marlborough. The year 1705 proved almost entirely barren for the Duke, whose military disappointments were only partly compensated by efforts on the diplomatic front where, at the courts of Düsseldorf, Frankfurt, Vienna, Berlin and Hanover, Marlborough sought to bolster support for the Grand Alliance and extract promises of prompt assistance for the following year's campaign. Prelude. On 11 January 1706 Marlborough finally reached London at the end of his diplomatic tour, but he had already been planning his strategy for the coming season. The first option (although it is debatable to what extent the Duke was committed to such an enterprise) was a plan to transfer his forces from the Spanish Netherlands to northern Italy; once there, he intended linking up with Prince Eugene in order to defeat the French and safeguard Savoy from being overrun. Savoy would then serve as a gateway into France by way of the mountain passes or an invasion with naval support along the Mediterranean coast via Nice and Toulon, in conjunction with redoubled Allied efforts in Spain. It seems that the Duke's favoured scheme was to return to the Moselle valley (where Marshal Marsin had recently taken command of French forces) and once more attempt an advance into the heart of France. But these decisions soon became academic. Shortly after Marlborough landed in the Dutch Republic on 14 April, news arrived of serious Allied setbacks in the wider war. Determined to show the Grand Alliance that France was still resolute, Louis XIV prepared to launch a double surprise in Alsace and northern Italy. On the latter front Marshal Vendôme defeated the Imperial army at Calcinato on 19 April, pushing the Imperialists back in confusion (French forces were now in a position to prepare for the long-anticipated siege of Turin). In Alsace, Marshal Villars took Baden by surprise and captured Haguenau, driving him back across the Rhine in some disorder and creating a threat to Landau. With these reverses, the Dutch refused to contemplate Marlborough's ambitious march to Italy or any plan that denuded their borders of the Duke and their army. In the interest of coalition harmony, Marlborough prepared to campaign in the Low Countries. On the move. The Duke left The Hague on 9 May. "God knows I go with a heavy heart," he wrote six days later to his friend and political ally in England, Lord Godolphin, "for I have no hope of doing anything considerable, unless the French do what I am very confident they will not" – in other words, court battle.
On 17 May the Duke concentrated his Dutch and English troops at Tongeren, near Maastricht. The Hanoverians, Hessians and Danes, despite earlier undertakings, found, or invented, pressing reasons for withholding their support. Marlborough wrote an appeal to the Duke of Württemberg, the commander of the Danish contingent: "I send you this express to request your Highness to bring forward by a double march your cavalry so as to join us at the earliest moment..." Additionally, the King "in" Prussia, Frederick I, had kept his troops in quarters behind the Rhine while his personal disputes with Vienna and the States General at The Hague remained unresolved. Nevertheless, the Duke could think of no circumstances in which the French would leave their strong positions and attack his army, even if Villeroi was first reinforced by substantial transfers from Marsin's command. But in this he had miscalculated. Although Louis XIV wanted peace, he wanted it on reasonable terms; for that, he needed victory in the field and to convince the Allies that his resources were by no means exhausted. Following the successes in Italy and along the Rhine, Louis XIV was now hopeful of similar results in Flanders. Far from standing on the defensive, therefore – and unbeknown to Marlborough – Louis XIV was persistently goading his marshal into action. "[Villeroi] began to imagine," wrote Saint-Simon, "that the King doubted his courage, and resolved to stake all at once in an effort to vindicate himself." Accordingly, on 18 May, Villeroi set off from Leuven at the head of 70 battalions, 132 squadrons and 62 cannon – comprising an overall force of some 60,000 troops – and crossed the river Dyle to seek battle with the enemy. Spurred on by his growing confidence in his ability to out-general his opponent, and by Versailles' determination to avenge Blenheim, Villeroi and his generals anticipated success. Neither opponent expected the clash at the exact moment or place where it occurred. The French moved first to Tienen (as if to threaten Zoutleeuw, abandoned by the French in October 1705), before turning southwards, heading for Jodoigne – this line of march took Villeroi's army towards the narrow aperture of dry ground between the rivers Mehaigne and Petite Gette close to the small villages of Ramillies and Taviers; but neither commander quite appreciated how far his opponent had travelled. Villeroi still believed (on 22 May) the Allies were a full day's march away when in fact they had camped near Corswaren waiting for the Danish squadrons to catch up; for his part, Marlborough deemed Villeroi still at Jodoigne when in reality he was now approaching the plateau of Mont St. André with the intention of pitching camp near Ramillies. However, the Prussian infantry was still absent. Marlborough wrote to Lord Raby, the English resident at Berlin: "If it should please God to give us victory over the enemy, the Allies will be little obliged to the King [Frederick] for the success." The following day, at 01:00, Marlborough dispatched Cadogan, his Quartermaster-General, with an advance guard to reconnoitre the same dry ground that Villeroi's army was now heading toward, country that was well known to the Duke from previous campaigns. Two hours later the Duke followed with the main body: 74 battalions, 123 squadrons, 90 pieces of artillery and 20 mortars, totalling 62,000 troops.
About 08:00, after Cadogan had just passed Merdorp, his force made brief contact with a party of French hussars gathering forage on the edge of the plateau of Jandrenouille. After a brief exchange of shots the French retired and Cadogan's dragoons pressed forward. When the mist lifted briefly, Cadogan discovered the smartly ordered lines of Villeroi's advance guard some distance off; a galloper hastened back to warn Marlborough. Two hours later the Duke, accompanied by the Dutch field commander Field Marshal Overkirk, Quartermaster-General Daniël van Dopff, and the Allied staff, rode up to Cadogan, where on the horizon to the westward he could discern the massed ranks of the French army deploying for battle along the front. Marlborough later told Bishop Burnet: "The French army looked the best of any he had ever seen." Battle. Battlefield. The battlefield of Ramillies is very similar to that of Blenheim, for here too there is an immense area of arable land unimpeded by woods or hedges. Villeroi's right rested on the villages of Franquenée and Taviers, with the river Mehaigne protecting his flank. A large open plain lay between Taviers and Ramillies, but unlike Blenheim, there was no stream to hinder the cavalry. His centre was secured by Ramillies itself, lying on a slight eminence which gave distant views to the north and east. The French left flank was protected by broken country, and by a stream, the Petite Gheete, which runs deep between steep and slippery slopes. On the French side of the stream the ground rises to Offus, the village which, together with Autre-Eglise farther north, anchored Villeroi's left flank. To the west of the Petite Gheete rises the plateau of Mont St. André; a second plain, the plateau of Jandrenouille – upon which the Anglo-Dutch army amassed – rises to the east. Initial dispositions. At 11:00 the Duke ordered the army to take standard battle formation. On the far right, towards Foulz, the British battalions and squadrons took up their posts in a double line near the Jeuche stream. The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies, Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – where Marlborough thought the decisive encounter must take place – Overkirk drew the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces. Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the river Mehaigne, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard, supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St. André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies.
These guns (some of which were of the three-barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire and were able to cover fully the approaches of the plateau of Jandrenouille over which the Allied infantry would have to pass. Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up on a shorter front between the 'horns' of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke's disposition allowed him to transfer troops across his front far more easily than his foe could, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as they deployed on the plateau of Jandrenouille – threatening to encircle their army – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line. Taviers. At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south, four Dutch battalions, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, was soon compelled to fall back towards the village of Taviers. Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard's cavalry on the open plain, while at the same time it allowed the French infantry to pose a threat to the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing "mêlée", but the superiority of Dutch firepower soon told. The accomplished French officer Colonel de la Colonie, standing on the plain nearby, remembered: "This village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond. Villeroi's right flank fell into chaos and was now open and vulnerable. Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons currently stationed in the rear. Two other battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry and canister fire into the advancing French troops. Colonel d'Aubigni, leading his regiment, fell mortally wounded.
As the French ranks wavered, the leading squadrons of Württemberg's Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival, all was chaos: "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight... My own fellows turned about and fled along with them." De la Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi's right flank. Offus and Autre-Eglise. While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, watched anxiously the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV: "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry sent forward from around Offus. After concentrated volleys, exacting heavy casualties on the redcoats, the Walloons reformed back on the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry, who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke, therefore, called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to the historian David Chandler, it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose.
Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies. Ramillies. Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Scholten and Sparre; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies – supported by Irish exiles of the Flight of the Wild Geese serving in Clare's Dragoons, who fought as infantry and captured a colour from the British 3rd Regiment of Foot (a feat commemorated in the song "Clare's Dragoons"), and by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei – put up a determined defence, initially driving back the attackers with severe losses. Seeing that Scholten and Sparre were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents they were still in their initial position. The French thus remained oblivious to the Allies' real strength and intentions on the opposite side of the Petite Gheete: Marlborough was throwing his full weight against Ramillies and the open plain to the south. Villeroi, meanwhile, was still moving more reserves of infantry in the opposite direction, towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions. Around 15:30 Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. The 48 Dutch squadrons, supported on their left by 21 Danish squadrons – led by Count Tilly and Lieutenant Generals Hompesch, d'Auvergne, Ostfriesland and Dopff – steadily advanced towards the enemy (taking care not to prematurely tire the horses), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières, writing after the battle, described the scene: "They advanced in four lines... As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." This made it nearly impossible for the French cavalry to perform flanking manoeuvres. The initial clash favoured the Dutch and Danish squadrons. The disparity of numbers – exacerbated by Villeroi stripping the ranks of their infantry support to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back on their third line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV's army – the "Maison du Roi", supported by four elite squadrons of Bavarian Cuirassiers.
Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counterattacks. On Overkirk's right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and were scattered, riding headlong to the rear to recover their order, leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding the lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two. A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander now summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi, who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the "mêlée", rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough's horse tumbled and the Duke was thrown – "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eyewitness, "...seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way." Samuel Constant de Rebecque helped Marlborough back on his feet, while Marlborough's newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue, mounted the Duke on his horse and made good their escape, before Murray's disciplined ranks threw back the pursuing French troopers. After a brief pause, Marlborough's equerry, Colonel Bringfield (or Bingfield), led up another of the Duke's spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General's legs before hitting the unfortunate colonel, whose torso fell at Marlborough's feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed, and Overkirk and Tilly restored order among the confused squadrons and ordered them to attack again, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware. Breakthrough. The time was about 16:30, and the two armies were in close contact across the whole front: from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain, to the fierce struggle for Ramillies at the centre, and on to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete, ready to renew hostilities. The arrival of the transferring squadrons now began to tip the balance in favour of the Allies. Tired and suffering a growing list of casualties, Guiscard's numerically inferior squadrons battling on the plain at last began to give way. After the earlier failure to hold or retake Franquenée and Taviers, Guiscard's right flank had become dangerously exposed, and a fatal gap had opened on the right of the French line.
Taking advantage of this breach, Württemberg's Danish cavalry now swept forward, wheeling to penetrate the flank of the Maison du Roi, whose attention was almost entirely fixed on holding back the Dutch. Advancing virtually without resistance, the 21 Danish squadrons reformed behind the French around the area of the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi's army. The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough's superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but a desperate attempt to form line facing south, stretching from Offus to Mont St André, floundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard's right flank, without proper infantry support, could no longer resist the onslaught and, turning their horses northwards, they broke and fled in complete disorder. Even the squadrons currently being scrambled together by Villeroi behind Ramillies could not withstand the onslaught. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words "sauve qui peut" ("every man for himself") went through the great part, if not the whole army, and put all to confusion." In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through. The Régiment de Picardie stood their ground but were caught between Colonel Borthwick's Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O'Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but it proved in vain. Noticing a rush of horsemen fast approaching from the south, he later recalled: "...I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter." Pursuit. The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to once again storm Offus, where de la Guiche's infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay's 'Scots Greys' also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village... made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector, but were scattered by Lumley's cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood, who, unaware of their identity, had to content himself with the seizure of two Bavarian Lieutenant-Generals. Far to the south, the remnants of de la Colonie's brigade headed in the opposite direction, towards the French-held fortress of Namur. The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover.
Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the river Dyle. At last, however, Marlborough called a halt to the pursuit shortly after midnight near Meldert, far from the field. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "...reduced to a handful." Aftermath. What was left of Villeroi's army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV's army ("see below"). In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was "the most shameful, humiliating and disastrous of routs". Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory of Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "...and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges, and on 6 June Antwerp, all subsequently fell to Marlborough's victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse. When Louis XIV learnt of the disaster he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands. As news spread of the Allies' triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply, but the Allies were making scant progress against Dendermonde, whose governor, the Marquis de Valée, was stubbornly resisting. Only later, when Cadogan and Churchill went to take charge, did the town's defences begin to fail. Vendôme formally took over command in Flanders on 4 August; Villeroi would never again receive a major command: "I cannot foresee a happy day in my life save only that of my death." Louis XIV was more forgiving to his old friend: "At our age, Marshal, we must no longer expect good fortune." In the meantime, Marlborough invested the elaborate fortress of Menin which, after a costly siege, capitulated on 22 August. Dendermonde finally succumbed on 6 September, followed by Ath – the last conquest of 1706 – on 2 October. By the time Marlborough had closed down the Ramillies campaign, he had denied the French most of the Spanish Netherlands west of the Meuse and north of the Sambre – an unsurpassed operational triumph for the English Duke, though once again not a decisive one, for these gains did not defeat France. The immediate question for the Allies was how to deal with the Spanish Netherlands, a subject on which the Austrians and the Dutch were diametrically opposed.
Emperor Joseph I, acting on behalf of his younger brother King Charles III, absent in Spain, claimed that reconquered Brabant and Flanders should be put under the immediate possession of a governor named by himself. The Dutch, however, who had supplied the major share of the troops and money to secure the victory (the Austrians had produced nothing of either), claimed the government of the region until the war was over, and that after the peace they should continue to garrison Barrier Fortresses stronger than those which had fallen so easily to Louis XIV's forces in 1701. Marlborough mediated between the two parties but favoured the Dutch position. To sway the Duke's opinion, the Emperor offered Marlborough the governorship of the Spanish Netherlands. It was a tempting offer, but in the name of Allied unity it was one he refused. In the end England and the Dutch Republic took control of the newly won territory for the duration of the war, after which it was to be handed over to the direct rule of Charles III, subject to the reservation of a Dutch Barrier, the extent and nature of which had yet to be settled. Meanwhile, on the Upper Rhine, Villars had been forced onto the defensive as battalion after battalion had been sent north to bolster the collapsing French forces in Flanders; there was now no possibility of his undertaking the re-capture of Landau. Further good news for the Allies arrived from northern Italy where, on 7 September, Prince Eugene had routed a French army before the Piedmontese capital, Turin, driving the Franco-Spanish forces from northern Italy. Only from Spain did Louis XIV receive any good news: Das Minas and Galway had been forced to retreat from Madrid towards Valencia, allowing Philip V to re-enter his capital on 4 October. All in all, though, the situation had changed considerably, and Louis XIV began to look for ways to end what was fast becoming a ruinous war for France. For Queen Anne also, the Ramillies campaign had one overriding significance: "Now we have God be thanked so hopeful a prospect of peace." Instead of continuing the momentum of victory, however, cracks in Allied unity would enable Louis XIV to reverse some of the major setbacks suffered at Turin and Ramillies. Casualties. The total number of French casualties cannot be calculated precisely, so complete was the collapse of the Franco-Bavarian army that day. David G. Chandler's "Marlborough as Military Commander" and "A Guide to the Battlefields of Europe" are consistent with regard to French casualty figures, i.e. 12,000 dead and wounded plus some 7,000 taken prisoner. James Falkner, in "Ramillies 1706: Year of Miracles", also notes 12,000 dead and wounded and "up to 10,000" taken prisoner. In "Notes on the history of military medicine", Garrison puts French casualties at 13,000, including 2,000 killed, 3,000 wounded and 6,000 missing. In "The Collins Encyclopaedia of Military History", Dupuy puts Villeroi's dead and wounded at 8,000, with a further 7,000 captured. Neil Litten, using French archives, suggests 7,000 killed and wounded and 6,000 captured, with a further 2,000 choosing to desert. John Millner's memoirs – "Compendious Journal" (1733) – are more specific, recording 12,087 of Villeroi's army killed or wounded, with another 9,729 taken prisoner. In "Marlborough", however, Correlli Barnett puts the total casualty figure as high as 15,000–30,000 dead and wounded, with an additional 15,000 taken captive.
Trevelyan estimates Villeroi's casualties at 13,000, but adds that "his losses by desertion may have doubled that number". La Colonie omits a casualty figure in his "Chronicles of an Old Campaigner", but Saint-Simon in his "Memoirs" states 4,000 killed, adding that "many others were wounded and many important persons were taken prisoner". Voltaire, however, in "Histoire du siècle de Louis XIV", records that "the French lost there twenty thousand men". Gaston Bodart states 2,000 killed or wounded, 6,000 captured and 7,000 scattered, for a total of 13,000 casualties. Périni writes that both sides lost 2,000 to 3,000 killed or wounded (the Dutch losing precisely 716 killed and 1,712 wounded), and that 5,600 French were captured.
Brian Kernighan
Brian Wilson Kernighan (born January 30, 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language ("The C Programming Language") with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). Kernighan authored many Unix programs, including ditroff. He is co-author of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-hard optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book "The Go Programming Language". Early life and education. Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Career and research. Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal; most notably, his "Ratfor" (rational FORTRAN) was put in the public domain. He has said that if stranded on an island with only one programming language, it would have to be C. Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. Kernighan is also known for coining the expression "What You See Is All You Get" (WYSIAYG), a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG). Kernighan's term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts. In 1972, Kernighan described memory management in strings using "hello" and "world" in the B programming language, which became the iconic example we know today. Kernighan's original 1978 implementation of "hello, world" was sold at The Algorithm Auction, the world's first auction of computer algorithms. In 1996, Kernighan taught CS50, the Harvard University introductory course in computer science. Kernighan was an influence on David J. Malan, who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats. Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019. In 2022, Kernighan stated that he was actively working on improvements to the AWK programming language, which he took part in creating in 1977.
BCPL
BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. Design. BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java). The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the same platform architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit. The interpretation of any value was determined by the operators used to process the values. (For example, codice_1 added two values together, treating them as integers; codice_2 indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking. The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by codice_3). BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process. 
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick "ad hoc" debugging aid. BCPL was the first brace programming language, and the braces survived the syntactical changes to become a common means of denoting program source code statements. In practice, on the limited keyboards of the day, source programs often used the sequences $( and $) or [ and ] in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99. The book "BCPL: The language and its compiler" describes the philosophy of BCPL as follows: "The philosophy of BCPL is not one of the tyrant who thinks he knows best and lays down the law on what is and what is not allowed; rather, BCPL acts more as a servant offering his services to the best of his ability without complaint, even when confronted with apparent nonsense. The programmer is always assumed to know what he is doing and is not hemmed in by petty restrictions." History. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under the Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference. BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book. BCPL is the language in which the original "Hello, World!" program was written. The first MUD was also written in BCPL ("MUD1"). Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project. Among other projects, the Bravo document preparation system was written in BCPL. An early compiler, bootstrapped in 1969 by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations, and the successful bootstrapping increased confidence in the practicality of the method. By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second-generation IMPs used in the ARPANET. There was also a version produced for the BBC Micro in the mid-1980s by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England. Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C.
Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language's name. The language most accepted as being C's successor is C++ (with ++ being C's increment operator), although, meanwhile, a D programming language also exists. In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems. Martin Richards maintains a modern version of BCPL on his website, last updated in 2023. This can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on musical automated score following. A common informal MIME type for BCPL is . Examples. Hello world. Richards and Whitby-Strevens provide an example of the "Hello, World!" program for BCPL using the standard system header 'LIBHDR'; a commonly reproduced version is shown below. Further examples. If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors. Two further examples usually given print factorials and count solutions to the N queens problem; a sketch of the former follows the hello-world program below.
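The hello-world example is commonly reproduced in the following form (a sketch of the version given by Richards and Whitby-Strevens; the exact message text and the trailing *N newline escape are as usually quoted, not verified against a particular printing):

    GET "LIBHDR"

    LET START() = VALOF
    $( WRITES("Hello, World!*N")
       RESULTIS 0
    $)

A factorial printer in the same style might look like the following. This is a plausible reconstruction rather than Richards' original text, written in the lower-case form that the current Cintsys distribution expects; writef's %n and %i8 format items print an integer in minimal and eight-character width respectively:

    GET "libhdr"

    LET start() = VALOF
    $( FOR i = 1 TO 10 DO
         writef("fact(%n) = %i8*n", i, fact(i))
       RESULTIS 0
    $)

    AND fact(n) = n = 0 -> 1, n * fact(n - 1)

The conditional expression E1 -> E2, E3 evaluates E2 if E1 is true and E3 otherwise, so fact recurses down to the base case n = 0.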
Battleship
A battleship is a large, heavily armored warship with a main battery consisting of large guns, designed to serve as a capital ship. From their advent in the late 1880s, battleships were among the largest and most formidable weapon systems ever built, until they were surpassed by aircraft carriers beginning in the 1940s. The modern battleship traces its origin to the sailing ship of the line, which was developed into the steam ship of the line and soon thereafter the ironclad warship. After a period of extensive experimentation in the 1870s and 1880s, ironclad design was largely standardized by the British "Royal Sovereign" class, usually referred to as the first "pre-dreadnought battleships". These ships carried an armament that usually included four large guns and several medium-caliber guns that were to be used against enemy battleships, and numerous small guns for self-defense. Naval powers around the world built dozens of pre-dreadnoughts in the 1890s and early 1900s, though they saw relatively little combat; only two major wars fought during the period included pre-dreadnought battles: the Spanish-American War in 1898 and the Russo-Japanese War of 1904–1905. The following year, the British launched the revolutionary all-big-gun battleship "Dreadnought". This ship discarded the medium-caliber guns in exchange for a uniform armament of ten large guns. All other major navies quickly began building (or had already started) "dreadnoughts" of their own, leading to a major naval arms race. During World War I, only one major fleet engagement took place, the Battle of Jutland in 1916, and neither side was able to achieve a decisive result. In the interwar period, the major naval powers concluded a series of agreements, beginning with the Washington Naval Treaty, that imposed limits on battleship building to stop a renewed arms race. During this period, relatively few battleships were built, but advances in technology led to the maturation of the fast battleship concept, and several of these ships were built in the 1930s. The treaty system eventually broke down after Japan refused to sign the Second London Naval Treaty in 1936. Although the rise of the aircraft carrier during World War II largely relegated battleships to secondary duties, they still saw significant action during that conflict. Notable engagements include the battles of Cape Spartivento and Cape Matapan in 1940 and 1941, respectively; the sortie by the German battleship "Bismarck" in 1941; the Naval Battle of Guadalcanal in 1942; and the Battle of Leyte Gulf in 1944. After World War II, most battleships were placed in reserve, broken up, or used as target ships, and few saw significant active service during the Cold War. The four American "Iowa"-class battleships were reactivated during the Korean War in the early 1950s and again in the 1980s as part of the 600-ship Navy. Even at the height of their dominance of naval combat, some strategists questioned the usefulness of battleships. Beginning in the mid-1880s, the "Jeune École" (Young School) argued that construction of expensive capital ships should stop in favor of cheap cruisers and torpedo boats. Despite a period of popularity for the "Jeune École", the idea fell out of favor and the battleship remained the arbiter of naval combat until World War II. Even afterward, battleships remained potent symbols of a country's might, and they retained significant psychological and diplomatic effects. A number of battleships, predominantly American, remain as museum ships. Background. Ships of the line.
A ship of the line was a large, unarmored wooden sailing ship mounting a battery of up to 120 smoothbore guns and carronades; it came to prominence with the adoption of line-of-battle tactics in the early 17th century. From 1794, the alternative term 'line of battle ship' was contracted to 'battle ship' or 'battleship'. The sheer number of guns fired broadside meant a ship of the line could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew. They also imparted a psychological effect on the crews of smaller vessels. Ships of the line were also fairly resilient to the guns of the day; for example, the British Royal Navy lost no first-rate (the largest type of ship of the line) to enemy action during the entire 18th century. Over time, ships of the line gradually became larger and carried more guns, but otherwise remained quite similar. Development of the first-rates was particularly conservative, as these ships represented a major investment. By the early 1800s, the traditional "seventy-four" (so named because it carried 74 guns) was no longer considered to be a proper ship of the line, having been supplanted by 84- and 120-gun ships. The first major change to the ship of the line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. Early vessels used paddle wheels for propulsion, but by the 1840s the first screw-propeller-equipped vessels began to appear. These smaller steam-powered warships demonstrated their worth when vessels like the British "Nemesis" proved critical to the Anglo-French success in the First Opium War in the 1840s. The French Navy introduced steam to the line of battle with the 90-gun "Napoléon" in 1850, the first true steam battleship. "Napoléon", which was designed by Henri Dupuy de Lôme, was armed as a conventional ship of the line, but her steam engines could give her a speed of some 12 knots regardless of the wind. This was a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. France and the United Kingdom were the only countries to develop fleets of wooden steam-screw battleships, although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1). Concurrent with the development of steam power, another major technological step heralded the end of the traditional ship of the line: guns capable of firing explosive shells. Pioneering work was done by the French artillery officer Henri-Joseph Paixhans beginning in 1809. The American artillerist George Bomford followed not far behind, designing the first shell-firing Columbiad in 1812. The British and Russians began to follow suit in the 1830s, though early smoothbore guns could not fire shells as far as solid shot, which hampered widespread adoption in any fleet. By the early 1840s, the French Paixhans gun and the American Dahlgren gun had begun to be adopted by their respective navies. In the Crimean War of 1853–1855, six Russian ships of the line and two frigates of the Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. The battle was widely seen as vindication of the shell gun.
Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where the modern Austrian steam ship of the line "Kaiser" ranged across a confused battlefield and rammed an Italian ironclad, taking 80 hits from Italian ironclads, many of which were shells, including at least one shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day. Ironclads. As amply demonstrated at the Battle of Sinop, and again during the Anglo-French blockade of Sevastopol from 1854 to 1855, wooden ships had become vulnerable to shell-firing guns. This prompted the French emperor Napoleon III to order the first ironclad warships: the "Dévastation"-class floating batteries. Three of these ships led the Anglo-French attack on the Russian fortress on the Kinburn Peninsula in the Battle of Kinburn in 1855, where they bore the brunt of Russian artillery fire but were not seriously damaged. The success of these ships prompted the French and British to order several similar vessels. In March 1858, the French took development of the ironclad to its next logical step: a proper, ocean-going armored warship. This vessel, another design by Dupuy de Lôme, was "Gloire", and after her launching in 1859, Napoleon III ordered another five similar ships, which sparked a naval arms race with Britain. The first French ironclads had the profile of a ship of the line, cut down to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, "Gloire" and her contemporaries were fitted with screw propellers, and their wooden hulls were protected by a layer of thick iron armor. Britain responded promptly with "Warrior", a similar but much larger ironclad with an iron hull. By the time "Warrior" was completed in 1861, another nine ironclads were under construction in British shipyards, some of which were conversions of screw ships of the line that were already being built. During the Unification of Italy in 1860, the Kingdom of Sardinia entered the ironclad building race by ordering the "Formidabile"-class ironclads from French shipyards; their long-term rival across the Adriatic Sea, the Austrian Empire, quickly responded later that year with the two "Drache"-class ships. Spain and Russia ordered ironclads in 1861, as did the United States and the rebel Confederate States of America after the start of the American Civil War. Construction of these large and expensive warships remained controversial until March 1862, when news of the Battle of Hampton Roads, fought between the Union "Monitor" and the Confederate "Virginia", firmly settled the debate in favor of even larger construction programs. From the 1860s to 1880s, navies experimented with the positioning of guns, in turrets, central batteries, or barbettes; ironclads of the period also prominently used the ram as a principal weapon. As steam technology developed, masts were gradually removed from battleship designs. The British Chief Constructor, Edward Reed, produced the "Devastation" class in 1869. These were mastless turret ships, which adopted twin-screw propulsion and an arrangement of two pairs of guns, one pair fore and one aft of the superstructure, that prefigured the advent of the pre-dreadnought battleship some two decades later. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's "Redoutable", laid down in 1873 and launched in 1876, was a combination central battery and barbette ship, which became the first capital ship in the world to use steel as the principal building material. 
The rapid pace of technological developments, particularly in terms of gun capabilities and the thickness of armor needed to combat them, quickly rendered ships obsolescent. In the continuous attempt by gun manufacturers to keep ahead of developments in armor plate, larger and larger guns were fitted to many of the later ironclads. Some of these, such as certain British designs, carried guns as large as 16.25 inches in diameter, while the Italian "Duilio"-class ships were armed with colossal 17.7-inch guns. The French experimented with very large guns in the 1870s, but after significant trouble with these guns (and the development of slower-burning gunpowder), they led the way toward smaller-caliber guns with longer barrels, which had higher muzzle velocity and thus greater penetration than the larger guns. "Jeune École". In the 1880s, opposition to fleets of large, expensive ironclads arose around the world, but most notably in France, where a group of naval officers led by Admiral Théophile Aube formed the "Jeune École" (Young School). The theory, which held as one of its core tenets that small, cheap torpedo boats could easily defeat ironclads, was based on combat experience during the Russo-Turkish War of 1877–1878. The doctrine also posited that modern steel-hulled cruisers could defeat a more powerful navy by attacking the country's merchant shipping, rather than engaging in a direct battle. The concept proved to be highly influential for several years, shaping the construction programs of France, Germany, Italy, and Austria-Hungary, among others throughout the world. Development of the modern battleship. Pre-dreadnought battleships. In 1889, the British government passed the Naval Defence Act 1889, which embarked on a major naval construction program aimed at establishing the so-called two-power standard, whereby the Royal Navy would be stronger than the next two largest navies combined. The plan saw the construction of the eight "Royal Sovereign"-class battleships, which have been regarded as the first class of what would retrospectively be referred to as "pre-dreadnought battleships". These large battleships incorporated a number of major improvements over earlier vessels like the "Devastation"s, including a high freeboard for true ocean-going capability, more extensive armor protection, heavier secondary battery guns, and greater speed. The ships were armed with four 13.5-inch guns in two twin mounts, fore and aft, which established the pattern for subsequent battleships. After building a trio of smaller second-class battleships intended for the colonial empire, Britain followed with the nine-strong "Majestic" class in 1893–1895, which improved on the basic "Royal Sovereign" design. These ships adopted the 12-inch gun, which would become the standard for all subsequent British pre-dreadnoughts. Foreign navies quickly began pre-dreadnoughts of their own; France laid down its first in 1889 and Germany laid down the four "Brandenburg"-class ships in 1890. The United States Navy laid down the three "Indiana"-class battleships in 1891, the same year that work began on a comparable Russian battleship. Japan ordered the two "Fuji"-class battleships from British yards, to an improved "Royal Sovereign" design, in 1894. The Austro-Hungarian Navy eventually ordered its own pre-dreadnoughts, beginning with the "Habsburg" class in 1899. All of these ships carried main guns of between 11 and 13.5 inches in caliber, save the Austro-Hungarian vessels, which, being significantly smaller than the rest, carried only 9.4-inch guns. 
Most pre-dreadnoughts followed the same general pattern, which typically saw a ship armed with four large guns, usually 12-inch weapons, along with a secondary battery of medium-caliber guns (usually 6-inch weapons early in the period), which were also intended for combat at close range with other battleships. They also generally carried a light armament for defense against torpedo boats and other light craft. Some ships varied from this general pattern, such as the American "Indiana"s, which carried a heavier secondary battery of 8-inch guns, and the German "Brandenburg"s, which had six 11-inch guns instead of the usual four heavy guns. Many of the early French pre-dreadnoughts carried a mixed heavy armament of two 12-inch and two 10.8-inch guns. Pre-dreadnoughts continued the technical innovations of the ironclad throughout the 1890s and early 1900s. Compound armor gave way to much stronger Harvey armor, developed in the United States in 1890, which was in turn superseded by German Krupp armor in 1894. As armor became stronger, it could be reduced in thickness considerably, which saved weight that could be allocated to other aspects of the ship design and generally permitted larger and more capable battleships. At the same time, the advent of smokeless powder continued the trend, begun in the French navy, of comparatively smaller guns firing at higher velocities. Early in the pre-dreadnought era, most navies standardized on the 12-inch gun; only Germany remained a significant outlier, relying on 11-inch and even 9.4-inch guns for its pre-dreadnoughts. Similarly, later in the pre-dreadnought era, the secondary batteries grew in caliber. Some final classes, such as the British "Lord Nelson" class with a secondary battery of 9.2-inch guns, or the French "Danton" class that had 9.4-inch secondaries, have been subsequently referred to as "semi-dreadnoughts", reflecting their transitional position between classic pre-dreadnought designs and the all-big-gun battleships that would soon appear. In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1898 and 1900 authorized a fleet of 38 battleships, a grave threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era, British supremacy at sea had markedly weakened. In 1883, the United Kingdom had 38 ironclad battleships, twice as many as France and almost as many as the rest of the world put together. In 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan. The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile, and Brazil all had second-rate fleets led by armored cruisers, coastal defense ships or monitors. Early combat experiences. Pre-dreadnought battleships received their first test in combat in the Spanish-American War in 1898 at the Battle of Santiago de Cuba. An American squadron that included four pre-dreadnoughts had blockaded a Spanish squadron of four armored cruisers in Santiago de Cuba until 3 July, when the Spanish ships attempted to break through and escape. All four cruisers were destroyed in the ensuing engagement, as were a pair of Spanish destroyers, while the American ships received little damage in return. 
The battle seemed to indicate that the mixed batteries of pre-dreadnought battleships were very effective, as the medium-caliber guns had inflicted most of the damage (which reinforced the observations of the Battle of Manila Bay, where only cruisers armed with medium guns had been present). It also led navies around the world to begin working on better solutions for rangefinding in the hope of improving gunnery at longer ranges. Conflicting colonial ambitions in Korea and Manchuria led Russia and Japan to the next major use of pre-dreadnoughts in combat. During the Russo-Japanese War of 1904–1905, squadrons of battleships engaged in a number of battles, including the Battle of the Yellow Sea and the Battle of Tsushima. Naval mines also proved to be a deadly threat to battleships on both sides, sinking the Russian "Petropavlovsk" in March 1904 and the Japanese battleships "Hatsuse" and "Yashima" on the same day in May. The action in the Yellow Sea began during a Russian attempt to break out of Port Arthur, which the Japanese under Admiral Tōgō Heihachirō had blockaded. The Russians outmaneuvered the Japanese and briefly escaped, but the latter's superior speed allowed them to catch up. A 12-inch shell struck the Russian flagship, killing the squadron commander and causing the Russian ships to fall into disarray and retreat back to Port Arthur. With night falling, the Japanese broke off and reimposed the blockade. At Tsushima, Tōgō outmaneuvered the Russian Second Pacific Squadron that had been sent to reinforce the Pacific Fleet, and the Japanese battleships quickly inflicted fatal damage with long-range fire from their 12-inch guns. In both actions during the Russo-Japanese War, the fleets engaged at long ranges at which only their 12-inch guns were effective. Only in the final stages of the battle at Tsushima, by which time the Russian fleet had been severely damaged and most of its modern battleships sunk or disabled, did the Japanese fleet close to the effective range of their secondary guns. The actions, particularly the decisive engagement at Tsushima, demonstrated that the lessons taken from the Spanish-American War were incorrect, and that the large-caliber gun should be the only offensive weapon carried by battleships. Dreadnought battleships. In the early 1900s, some naval theorists had begun to argue that future battleships should discard the heavy secondary batteries and instead carry only big guns. The first prominent example was Vittorio Cuniberti, the chief engineer of the Italian "Regia Marina" (Royal Navy); he published an article in 1903 titled "An Ideal Battleship for the British Navy" in "Jane's Fighting Ships". By the time British Admiral Sir John ("Jackie") Fisher became First Sea Lord in late 1904, he had already become convinced that a similar concept—that of a fast capital ship carrying the largest quick-firing guns available (which at that time were 10-inch weapons)—was the path forward. The Japanese Navy was the first to actually order any of these new ships, beginning with the two "Satsuma"-class ships in 1904, though due to shortages of 12-inch guns, they were completed with a mix of 12- and 10-inch guns. By early 1905, Fisher had converted to the 12-inch gun for his proposed new capital ships, and in March that year, the German Navy decided to build an all-big-gun battleship for the planned "Nassau" class. The American "South Carolina" class was authorized in 1905, but work did not begin until December 1906. 
Though several navies had begun design work on all-big-gun battleships, the first to be completed was the British "Dreadnought", which had been ordered by Fisher. He actually preferred a very large armored cruiser equipped with an all-big-gun armament, which would come to be known as the battlecruiser, and he only included "Dreadnought" in his 1905 construction program to appease naval officers who favored continued battleship building. Fisher believed that Britain's security against the French and Russian threats would be better guaranteed by squadrons of fast battlecruisers, three of which were laid down in 1906. Regardless of Fisher's intentions, the rapidly changing strategic calculus invalidated his plans: by the time the 1906–1907 program was being debated, Germany had become Britain's primary rival, and the Royal Navy chose to build three more dreadnoughts instead of further battlecruisers. The reaction from the other naval powers was immediate; very few pre-dreadnoughts were built afterward, and within the first seven years of the ensuing arms race, all of the major naval powers had dreadnoughts of their own in service or nearing completion. Of these competitions, the Anglo-German race was the most significant, though others took place, such as the South American contest. Even naval powers of the second and third rank, such as Spain; Brazil, Chile, and Argentina in South America; and Greece and the Ottoman Empire in the Mediterranean, had begun dreadnought programs, either building domestically or ordering abroad. "Dreadnought" carried ten 12-inch guns, all in twin turrets: one forward and two further aft, all on the centerline, with the remaining pair mounted as wing turrets with more restricted arcs of fire. She disposed of the medium-caliber secondary battery and carried only light guns for anti-torpedo boat work. A variety of experimental arrangements followed, including the "hexagonal" layout adopted by the German "Nassau"s (which had four of their six twin turrets on the "wings"), or the Italian and Russian dreadnoughts that mounted their guns all on the centerline, but with restricted arcs of fire for half of the guns. The "South Carolina"s dispensed with "Dreadnought"s wing turrets, adopting instead a superfiring arrangement of eight guns in four twin turrets, which gave them the same broadside as "Dreadnought", despite having two fewer guns. Technological development continued over the decade that followed "Dreadnought"s launch. Already by 1910, the British had begun the first of the so-called "super-dreadnoughts", which carried significantly more powerful 13.5-inch guns, all on the centerline. The United States followed suit in 1911, though increasing the caliber of its guns to 14 inches. France adopted a 13.4-inch gun for its "Bretagne" class, laid down in 1912. That year, Japan laid down the first of its "Fusō"-class ships, also armed with a 14-inch main battery. The Germans waited until 1913, but skipped directly to 15-inch guns. By this time, Britain had led the way to the 15-inch gun with the "Queen Elizabeth" class, begun in late 1912. More important than the increase in caliber, these were the first completely oil-fired battleships, and their greater speed made them the first fast battleships. At around the same time, the United States introduced the next major innovation in battleship design: the "all or nothing" armor system in the "Nevada" class, laid down in 1912. The heaviest possible armor was used to protect the ship's propulsion machinery and ammunition magazines, but intermediate protection was stripped away from non-essential areas, since this mid-weight armor only served to detonate armor-piercing shells. World War I. 
By the start of World War I in July 1914, the Royal Navy's Grand Fleet outnumbered the German High Seas Fleet by 21 to 13 in dreadnought battleships and 4 to 3 in battlecruisers. Over the course of the war, Britain would add another 14 dreadnoughts, while Germany completed another six. German strategy presumed that Britain would launch an immediate offensive into the southern North Sea, but the British preferred to establish a distant blockade, which very quickly stopped German maritime trade. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on favorable terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo boats, and submarines could be used to even the odds. The British fleet commander, Admiral John Jellicoe, refused to be drawn into unfavorable conditions and enforced the blockade at the English Channel and between Scotland and Norway. In the Baltic Sea, Germany found itself in the reverse situation, in an even more lopsided fashion, versus its Russian opponent. The Russian Baltic Fleet had only four dreadnoughts at the start of the war, so it adopted a purely defensive approach to guard the capital at Petrograd and the northern flank of the Russian army units fighting on the Eastern Front. In the Mediterranean Sea, Italy initially remained neutral, despite being a member of the Triple Alliance with Germany and Austria-Hungary, leaving the latter to face the French Navy and the British Mediterranean Fleet alone. After ensuring the French army units in French North Africa were safely convoyed to France, the French fleet sailed to the Adriatic Sea to blockade the Austro-Hungarian fleet, which refused to leave its fortified bases. The French, like the other major European naval powers, had failed to consider that their opponents would refuse to engage in battle on unfavorable terms. The Adriatic quickly turned into another stalemate as the threat of Austro-Hungarian mines and submarines prevented a more aggressive employment of the French fleet. The Germans embarked on a number of sweeps into the North Sea and raids on British coastal towns to draw out part of the Grand Fleet, which would then be isolated and destroyed. These included the raid on Scarborough, Hartlepool and Whitby, where the Germans nearly caught an isolated British battle squadron, but turned away, thinking that it was the entire Grand Fleet. This strategy ultimately led to the Battle of Jutland on 31 May – 1 June 1916, the largest clash of battleship fleets. The first stage of the battle was fought largely by the two sides' battlecruiser squadrons, though the British were supported by four of the "Queen Elizabeth"-class battleships. After both battleship fleets engaged, the British crossed the Germans' "T" twice, but the latter managed to extricate themselves from the action as darkness fell. By early on 1 June, the High Seas Fleet had reached port. In the course of the fighting, three British battlecruisers were destroyed, as were one German battlecruiser and the old pre-dreadnought "Pommern". Numerous cruisers and destroyers were lost on both sides as well. The Germans made two further offensive operations in the months after Jutland. 
The first, which led to the inconclusive action of 19 August, saw one German battleship torpedoed by a British submarine and two British cruisers sunk by German U-boats. This incident convinced the British that the risks posed by submarines were too great to send the Grand Fleet into the southern North Sea, barring exceptional circumstances like a German invasion of Britain. In the second German operation, which took place on 18–19 October, a German cruiser was damaged by a submarine and the Grand Fleet remained in port. By this time, the Germans were similarly convinced of the futility of their attempts to isolate part of the British fleet, and discontinued such raids. They instead turned to unrestricted submarine warfare, which resulted in their battleships being reduced to a supporting force that guarded the U-boat bases. In the Baltic, the Germans made two attempts to capture the islands in the Gulf of Riga. The first came in August 1915, and in the ensuing Battle of the Gulf of Riga, a pair of German dreadnoughts engaged in a long-range artillery duel with the Russian pre-dreadnought "Slava", which was guarding the minefields that protected the gulf. The Germans drove off the Russian ship and cleared the minefield, but by the time they entered the gulf, Allied submarines had reportedly arrived. Unwilling to risk the battleships in the shallow, confined waters of the gulf, the Germans retreated. The second attempt—Operation Albion—took place in October 1917. During the Battle of Moon Sound, another pair of German dreadnoughts damaged "Slava" so badly that she had to be scuttled, and the Germans completed their amphibious assault on the islands. The modern units of the French and British fleets in the Mediterranean spent much of the war guarding the entrance to the Adriatic, first based at Malta and later moving to Corfu. They saw very little action through the war. In May 1915, Italy entered the war on the side of the Triple Entente, declaring war on its former allies. The Austro-Hungarians, who had anticipated the declaration, sortied with the bulk of their fleet to raid the Italian coast in the first hours of the war on 24 May; the battleships were sent to bombard Ancona, and there were no heavy Italian or French units close enough to intervene. For their part, the Italians were content to reinforce the blockading force guarding the Adriatic, as they, too, were unwilling to risk their capital ships in the mine- and submarine-infested waters of the Austrian Littoral. Instead, light forces carried out most of the operations. Meanwhile, several French and British pre-dreadnoughts were sent to attack the Ottoman defenses guarding the Dardanelles. In the ensuing naval operations from February to March 1915, several battleships were sunk or damaged by mines and torpedoes. When the fleets failed to break through the defenses, the British and French decided to land at Gallipoli to try to take the fortifications by land; the remaining battleships were thereafter used to provide naval gunfire support. This, too, ultimately failed, and by January 1916 the British and French had withdrawn their troops. Russian battleships saw more action in the Black Sea against their Ottoman opponents. 
The Ottomans had the battlecruiser "Yavuz Sultan Selim" (formerly the German "Goeben"), which the Russians attempted to destroy in a series of short engagements, including the Battle of Cape Sarych in November 1914, the Action of 10 May 1915, and the Action of 8 January 1916, though they were unsuccessful in all three attempts, primarily because the faster "Yavuz Sultan Selim" could easily escape from the more numerous but slow Russian pre-dreadnoughts. By 1916, the Russians had completed a pair of dreadnoughts in the Black Sea, which severely curtailed Ottoman freedom of maneuver. In the course of the war, older pre-dreadnoughts proved to be highly vulnerable to underwater damage, whether from naval mines or from ship-launched or submarine-delivered torpedoes. "Formidable" was sunk by a German U-boat in the English Channel in 1915. At the Dardanelles, "Triumph" and "Majestic" were sunk by a German U-boat, and "Goliath" was sunk by the Ottoman destroyer "Muâvenet-i Millîye". The British "Irresistible" and "Ocean" and the French "Bouvet" were all sunk by mines. Two more battleships were sunk by mines in the Mediterranean in 1916 and 1917, respectively. "King Edward VII" was similarly mined and sunk off the British coast in 1916, and "Britannia" was sunk by a U-boat in the final days of the war. The French "Suffren" and "Gaulois" were sunk by U-boats in 1916, and "Danton" was torpedoed and sunk by a U-boat in 1917. At Jutland, the only battleship lost was the old pre-dreadnought "Pommern", which was torpedoed by a destroyer. In contrast, dreadnoughts proved to be much more resilient to underwater attack. "Marlborough" was damaged by a torpedo at Jutland, but nevertheless returned to port. The German "Westfalen" was torpedoed at the action of 19 August 1916, and two more German dreadnoughts were torpedoed by a single British submarine in November 1917; all three survived. "Bayern" was mined during Operation Albion and remained in action against Russian artillery batteries for some time thereafter. Dreadnoughts lost to underwater attack were rare. "Audacious" was sunk by a mine in October 1914, the Austro-Hungarian "Szent István" was sunk by Italian MAS boats in June 1918, and five months later, Italian frogmen sank "Viribus Unitis" using a powerful limpet mine. Inter-war period. In the immediate aftermath of the war, the most modern units of the German fleet were interned at Scapa Flow, where in June 1919, their crews scuttled the fleet to avoid having it handed over to the Allies. The remaining dreadnoughts still in German ports were therefore seized as compensation for the scuttled ships. The postwar "Reichsmarine" of Weimar Germany was limited to a contingent of eight old pre-dreadnoughts (of which two would be kept in reserve) under the terms of the Treaty of Versailles; new battleships were subject to severe restrictions on size and armament. The surviving battleships of Austria-Hungary, the other defeated Central Power, were soon distributed among the Allies to be broken up. While the other major naval powers remained free to build new battleships, most of them were financially crippled after the war. The prospect of a renewed naval arms race between the United States, the United Kingdom, and Japan appealed to few politicians in the three countries, and so they concluded the Washington Naval Treaty in 1922, which also included Italy and France. The treaty limited the number and size of battleships and imposed a ten-year building holiday, along with other provisions. The treaty also imposed a ratio of 5:5:3 on the total displacement of battleships for the US, UK, and Japan, respectively, while the accompanying Four-Power Treaty brought the Anglo-Japanese Alliance to an end. 
The only exceptions to the building holiday were the two British "Nelson"-class battleships, which were permitted in order to give Britain parity with the latest American and Japanese battleships, all of which were armed with 16-inch guns. The Washington treaty was followed by a series of other naval treaties, including the First London Naval Treaty (1930) and the Second London Naval Treaty (1936), which both set additional limits on major warships. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Designs like the projected British N3 class, the first American "South Dakota" class, and the Japanese "Kii" class—all of which continued the trend to larger ships with bigger guns and thicker armor—never got off the drawing board. The ships that were commissioned during this period were referred to as treaty battleships. The collapse of the treaty system began during the negotiations for the Second London Treaty, when Japan demanded parity with Britain and the US, which the latter two flatly rejected. Japan withdrew from the treaty system in 1936, though the agreements remained in effect until January 1937. Rise of air power. As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. Between 1916 and 1918, US Admiral William Fullam published a series of papers arguing that aircraft would become an independent strike arm of the fleet and that the capital ships then under construction should be converted into aircraft carriers rather than scrapped. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921, the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled "The Command of the Air", which foresaw the dominance of air power over conventional military and naval forces. That same year, US General Billy Mitchell used the ex-German dreadnought "Ostfriesland" in a series of bombing tests conducted by the Navy and Army. The tests involved a series of attacks on stationary, unmanned ships by low-level, land-based bombers dropping bombs of increasing size, the heaviest weighing 2,000 pounds. "Ostfriesland" was sunk by the heaviest bombs, though Mitchell broke the rules of the tests, and the subsequent report concluded that, had the ship been crewed, underway, and firing back at the aircraft, her damage control teams could have managed the damage inflicted. Mitchell and his supporters nevertheless embarked on a public campaign that falsely claimed "Ostfriesland" was a super-battleship and that the quick sinking proved battleships were obsolete. Mitchell was eventually court-martialed, convicted, and discharged from the Army over his insubordinate tactics. Naval aviation traces its origin back to the first decade of the 20th century, though early efforts were based on using aircraft to scout for the fleet and help direct gunfire at long range. A number of experimental aircraft carriers were employed during World War I, primarily by the Royal Navy, all converted from merchant vessels or existing warships. The US Navy completed its first carrier, "Langley", in 1922. Aircraft carriers in the 1920s still faced a number of challenges: aircraft of the day were short-ranged, which meant a carrier had to come very close to the enemy to launch and then recover a strike, exposing it to attack. In addition, the available planes had insufficient power to carry meaningful bomb loads. 
Early naval aviators nevertheless pioneered effective tactics like dive bombing during this period. Fast battleships and the end of the treaty system. Because the Washington Treaty system precluded the construction of any new battleships until the early 1930s, the major naval powers began programs of modernization for their most effective battleships. Britain conducted a series of refits to its "Queen Elizabeth"-class battleships through the 1920s, adding anti-torpedo bulges, additional anti-aircraft guns, and aircraft catapults; further refits in the 1930s increased armor protection and further strengthened their anti-aircraft batteries. The "Revenge"-class ships were less heavily modified during the period. Several of the US Navy's older classes received similar treatment in the 1920s, while the "Nevada" and "Pennsylvania" classes received new turbines, additional armor, and more anti-aircraft guns. The Japanese similarly updated their "Fusō", "Ise", and "Nagato" classes, and rebuilt three of the four "Kongō"-class ships into fast battleships, albeit with significantly inferior protection compared to the other ships. They all also received distinctive pagoda masts. "Hiei" was initially disarmed to serve as a training ship under the terms of the Washington Treaty, but was remilitarized in the late 1930s. In the 1930s, all four classes were lengthened and had their propulsion systems improved to increase their speeds. The French and Italian navies were exempted from the 10-year building holiday, owing to the comparative obsolescence of their battleships; each was permitted to build 70,000 tons worth of new battleships. But the weak economies of both countries led them to defer new construction until Germany began building the "Deutschland" class of heavily armed cruisers at the end of the 1920s. This prompted the French to build the "Dunkerque" class of small, fast battleships armed with 13-inch guns, which led to a short arms race in Europe in the mid-1930s. The Italians responded with the significantly larger and more powerful "Littorio" class, armed with 15-inch guns. The French, in turn, began the "Richelieu" class to counter the "Littorio"s. By this time, Nazi Germany had signed the Anglo-German Naval Agreement in 1935, which removed the restrictions imposed by Versailles and pegged German naval strength to 35% of British tonnage. This permitted the construction of the two "Scharnhorst"-class battleships, which were also a response to the "Dunkerque"s. The advent of the "Richelieu"s prompted the Germans to build the two "Bismarck"-class ships late in the decade. The Germans thereafter embarked on the ambitious Plan Z naval construction program, which included a total of eight battleships, of which the "Bismarck"s would be the first two. Against the backdrop of European rearmament in the mid-1930s, Britain began planning its first battleship class in a decade: the "King George V" class. These ships were armed with 14-inch guns intended to comply with the terms of the Second London Naval Treaty, and they were laid down in 1937. The United States began its "North Carolina" class at the same time, and though the ships were intended to be armed with 14-inch guns, Japan's refusal to agree to the Second London Treaty led the US to invoke a clause of the treaty that allowed an increase to 16-inch guns. In 1939, these were followed by the four "South Dakota"-class ships, and in 1940 by the first of the four "Iowa"-class ships. For its part, Japan had decided as early as 1934 to embark on a program of four very large "Yamato"-class battleships armed with 18.1-inch guns, though work did not begin on the first ship until late 1937. World War II. European theater. The German pre-dreadnought "Schleswig-Holstein" fired the first shots of World War II by initiating the bombardment of the Polish garrison at Westerplatte in the early hours of 1 September 1939. 
The German "Scharnhorst"-class battleships caught the British carrier off the coast of Norway and sank her during the Norwegian campaign. Following the collapse of France in June 1940 and subsequent surrender, the British embarked on a campaign to neutralize or destroy French battleships that might be seized to reinforce the German fleet, including the attack on Mers-el-Kébir and the Battle of Dakar in July and September, respectively. In the former action, the British sank a pair of older "Bretagne"-class battleships and the fast battleship , though the latter was refloated and repaired. At Dakar, the French battleship and other forces fended off the British task force, which resulted in the torpedoing of the battleship , which was severely damaged. Italy entered the war in June 1940, shortly before the French defeat. In November, the British launched a nighttime airstrike on the naval base at Taranto; in the Battle of Taranto, Fairey Swordfish torpedo bombers disabled three Italian battleships, though they were subsequently repaired. Over the next year, Italian and British battleships engaged in a number of inconclusive actions as they contested the supply lines to North Africa. These included the Battle of Cape Spartivento in November 1940 and the Battle of Cape Matapan in March 1941. At Matapan, the battleship was badly damaged by a Swordfish, though the ship returned to port. The British battleships , , and nevertheless caught a group of three heavy cruisers that evening and destroyed them in a furious, close-range night action. Convoy battles continued through 1941 and into 1942, with actions such as First and Second Sirte. By 1943, Italian operations were sharply reduced due to a shortage of fuel, and after the Allied invasion of Italy, the country surrendered, allowing most of its fleet to be interned at Malta. While on the way, the battleship was sunk by a German Fritz X guided glide bomb. In the meantime, in January 1941, the Germans began to send their few battleships on commerce raiding operations in the Atlantic, starting with the two "Scharnhorst"-class ships in Operation Berlin, which was not particularly successful. followed with Operation Rheinübung in May, which resulted in two actions, the Battle of the Denmark Strait and the sinking of "Bismarck". During the operation, "Bismarck" was crippled by Swordfish torpedo bombers, which allowed a pair of British battleships to catch and destroy her. By 1942, the last operational German battleships— and —were sent to occupied Norway to serve as a fleet in being to tie down British naval resources and to attack supply convoys to the Soviet Union. The battleship eventually sank "Scharnhorst" at the Battle of North Cape in December 1943, and "Tirpitz" was destroyed by British heavy bombers in 1944. Pacific theater. On 7 December 1941, the Japanese launched a surprise attack on the US naval base at Pearl Harbor. Over the course of two waves of dive-, level- and torpedo bombers, the Japanese sank or destroyed five battleships and inflicted serious damage to the facilities there. Three days later, land-based Japanese aircraft operating out of French Indochina, then occupied by Japan, caught and sank the British battleship and the battlecruiser off the coast of British Malaya. 
Though the Taranto and Pearl Harbor strikes were significant steps toward aircraft replacing the battleship as the primary naval striking arm, the sinking of "Prince of Wales" and "Repulse" marked the first time aircraft had sunk capital ships that were maneuvering and returning fire. Employment of battleships during the Pacific War was limited by a number of factors. Japanese strategic doctrine, the "Kantai Kessen" (decisive battle doctrine), envisioned a decisive clash of battleships at the end of the war, and so the fleet kept most of its battleships in home waters; only the four "Kongō"s were routinely detached to escort the aircraft carriers of the mobile striking force. For its part, the US kept its surviving pre-war battleships out of action primarily because they were too slow to keep up with the carriers. Later in the war, they were employed as coastal bombardment vessels. Nevertheless, American and Japanese battleships saw significant action during the Guadalcanal campaign in 1942, most notably at the Naval Battle of Guadalcanal in November. There, an American squadron centered on the battleships "Washington" and "South Dakota" intercepted and sank the battleship "Kirishima", though "South Dakota" received significant damage in return. As more and more of the American fast battleships entered service from 1942 onward, they were frequently used as escorts for the fast carrier task force that was the US Navy's primary striking arm in its campaign across the central Pacific. During the Philippines campaign, battleships played a central role in the Battle of Leyte Gulf in October 1944. The action, one of the largest naval battles in history, took place over several days as three Japanese fleets attacked the Allied invasion fleet. The Japanese battleship "Musashi", part of Center Force, was sunk by American carrier aircraft during the Battle of the Sibuyan Sea on 24 October. During the Battle of Surigao Strait the following night, several US battleships that had been repaired after the attack on Pearl Harbor defeated the Japanese Southern Force, which included a pair of battleships. Center Force attacked again on 25 October in the Battle off Samar, but was driven off by destroyers and aircraft from several escort carriers. During the Battle of Okinawa in April 1945, Japan sent "Yamato" on a final suicide mission to attack the landing beaches and attempt to stop the invasion of the island. American aircraft scored between nine and thirteen torpedo hits and six bomb hits on the ship and sank her. "Haruna" was sunk by US aircraft off Kure, Japan, in July. Only "Nagato" survived the war. The war ended with the Japanese surrender aboard the battleship "Missouri" in September 1945. Cold War: end of the battleship era. After World War II, several navies retained their existing battleships, but most were either placed in their reserve fleets or scrapped outright. Of the surviving pre-war battleships, most of the American vessels were either scrapped or sunk as target ships by 1948, though the most modern among them survived until the late 1950s and early 1960s. One of the earlier vessels, "Texas", was preserved as a museum ship. The four "King George V"-class ships were all broken up by 1957. Only two battleships—the British "Vanguard" and the French "Jean Bart"—were completed after the war. "Vanguard" did not long outlast the "King George V"s, being scrapped herself in 1960. "Jean Bart" (and her sister "Richelieu") remained in the French fleet's inventory until the early 1960s, when they were discarded. 
Three of the six American "North Carolina"- and "South Dakota"-class ships were similarly scrapped in the early 1960s, but the other three—"North Carolina", "Massachusetts", and "Alabama"—were retained as museum ships. With the reduced naval budgets of the immediate postwar period, the US Navy chose to concentrate its resources on its carrier force. Besides the rise of aircraft carriers as the preeminent naval striking force, the advent of nuclear weapons influenced the decision to abandon large battleship fleets. In 1946, "Nagato", which had been seized by the US, and four American battleships were used during the Operation Crossroads nuclear weapons tests, though three of the American ships survived the two blasts and were later sunk with conventional weapons. Of the remaining, smaller battleship fleets, Italy retained its two "Andrea Doria"-class ships, of 1913 vintage, until the late 1950s and early 1960s, when they were scrapped. One other battleship, "Giulio Cesare", was taken by the Soviets as reparations and renamed "Novorossiysk"; she was sunk by a mine in the Black Sea on 29 October 1955. The two surviving "Littorio"-class ships were taken by the US and UK as war reparations and scrapped in the late 1940s. The Soviets still had a pair of World War I-era "Gangut"-class battleships, but they, too, were scrapped in the late 1950s. The three large South American navies still had a handful of pre-World War I dreadnoughts in service after the war. Brazil eventually discarded its two "Minas Geraes"-class ships in the early 1950s; Argentina sold its two "Rivadavia"-class ships in 1956; and the last ship in the region, the Chilean "Almirante Latorre", followed them to the breakers' yard in 1959. The four "Iowa"-class battleships were the only vessels of the type to see significant combat after World War II. All four ships were reactivated for gunfire support duties during the Korean War in the early 1950s, and "New Jersey" was also deployed for the same task during the Vietnam War in 1968–1969. All four ships were modernized in the early 1980s with Tomahawk cruise missiles, Harpoon anti-ship missiles, and Phalanx CIWS systems, along with the latest radar systems, and they were recommissioned as part of the 600-ship Navy program under President Ronald Reagan. "New Jersey" next saw action in 1983 and 1984, bombarding Syrian artillery positions during the Lebanese Civil War. "Missouri" and "Wisconsin" took part in Operation Desert Storm against Iraqi forces in 1991, bombarding enemy positions along the coast. The ships proved to be expensive to operate and required thousands of men to keep in service, so "Iowa" and "New Jersey" were already back in reserve by that time, and "Missouri" and "Wisconsin" were also decommissioned by the end of 1991. All four were struck from the Naval Vessel Register in 1995. When the last "Iowa"-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: "Massachusetts", "North Carolina", "Alabama", "Iowa", "New Jersey", "Missouri", "Wisconsin", and "Texas". "Missouri" and "New Jersey" are museums at Pearl Harbor and Camden, New Jersey, respectively. "Iowa" is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. "Wisconsin" now serves as a museum ship in Norfolk, Virginia. "Massachusetts", which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. 
"Texas", the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site, near Houston, but as of 2021 is closed for repairs. "North Carolina" is on display in Wilmington, North Carolina. "Alabama" is on display in Mobile, Alabama. The USS Arizona Memorial was erected over the wreck of , which was sunk during the Pearl Harbor attack in 1941, to commemorate those killed in the raid. Memorials were also placed to mark the wreck of , also sunk during the attack. The only other 20th-century battleship on display is the Japanese pre-dreadnought , preserved since 1923. Strategy and doctrine. Doctrine. For much of their existence, battleships were the embodiment of sea power. The American naval officer Alfred Thayer Mahan argued in his seminal 1890 work, "The Influence of Sea Power upon History", a strong navy was vital to the success of a nation, and control of the seas was a prerequisite for the projection of force. Conversely, countries with weak navies would inevitably decline. Mahan argued that the cruiser warfare advocated by the "Jeune Ecole" could never be decisive, and that only fleets of battleships could control sea lanes and enforce blockades of an enemy's coast. The book proved to be widely influential across the world's navies; it was translated into German in 1896, where it was used to support the German naval expansion program championed by Alfred von Tirpitz. A Japanese-language translation also appeared in 1896, and it soon became required reading at the Japanese naval academy. By the end of the decade, Russian, French, Italian, and Spanish versions were produced. A competing doctrine, that of the "fleet in being" dates at least as far back as the 17th century Royal Navy; its commander, Lord Torrington, argued that his fleet, though significantly outnumbered by the French navy of the day, posed enough of a risk as to dissuade a French attempt to invade England. The "fleet in being" in part acts as a deterrent against attack. The concept underpinned Tirpitz's so-called "risk theory" that was the basis of his program to build a German battle fleet. Even if the Royal Navy maintained a numerical superiority, the risk that the German fleet would inflict grievous damage even in the case of a British victory would militate against any such battle taking place, and Germany would be free to pursue its global ambitions. Tactics. By the 1890s, naval tacticians had developed a number of formations in which to employ battleships. The most prominent were referred to as "line ahead" and "line abreast". The former, the standard tactic during the age of sail, arrayed ships in a single-file line, which emphasized broadside fire. The latter placed ships side-by-side, which was suited to close-range melees where ramming and torpedoes could be effectively employed; after Tegetthoff's success at Lissa in 1866 used a modified line abreast formation, the tactic enjoyed a period of popularity for several years. By the 1880s, line-ahead tactics had returned to prominence. Royal Navy officers devised the tactic referred to as "crossing the T" of an enemy fleet, whereby one fleet steaming in line-ahead formation crossed in front of another line of battleships. This maneuver would allow one's own battleships to concentrate entire broadsides on the leading enemy ship, while one's opponent could only reply with their forward guns. Many navies adopted the tactic soon thereafter. 
As the threat of underwater attack from mines and torpedoes developed after the 1860s, capital ships could no longer maintain close blockades of enemy ports. This required smaller, faster scouts to observe hostile ports so that an enemy fleet could be brought to battle. Modern cruisers began to be built in the 1880s for this purpose. Almost immediately after the invention of the airplane, navies recognized its potential as a reconnaissance unit for the fleet's battleships. The Austro-Hungarian Navy, then following the "Jeune École" doctrine of the 1870s and 1880s, devised the tactic of placing torpedo boats alongside battleships; these would hide behind the larger ships until gun smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was made less effective by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. The other major naval powers quickly followed suit with similar vessels of their own. Psychological and diplomatic impact. The presence of battleships had a great psychological and diplomatic impact. Much like the possession of nuclear weapons in the second half of the 20th century, the ownership of battleships marked a country as a regional or global power, and the ability to build them domestically signified that a country could claim to be a great power. Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS "Missouri" was dispatched to Turkey to return the remains of the Turkish ambassador to the United States, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS "New Jersey" stopped the firing. Gunfire from "New Jersey" later killed militia leaders. Cost-effectiveness. Battleships were the largest, most complex, and hence most expensive warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Étienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The "Jeune École" school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the "Jeune École" were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft became available to make similar strategies effective. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasize the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.
4055
35936988
https://en.wikipedia.org/wiki?curid=4055
Bifröst
In Norse mythology, Bifröst (from Old Norse, pronounced /ˈbiv.rɔst/), also called Bilröst and often anglicized as Bifrost, is a burning rainbow bridge that reaches between Midgard (Earth) and Asgard, the realm of the gods. The bridge is attested as "Bilröst" in the "Poetic Edda", compiled in the 13th century from earlier traditional sources; as "Bifröst" in the "Prose Edda", written in the 13th century by Snorri Sturluson; and in the poetry of skalds. Both the "Poetic Edda" and the "Prose Edda" also refer to the bridge as Ásbrú (Old Norse "Æsir's bridge"). According to the "Prose Edda", the bridge ends in heaven at Himinbjörg, the residence of the god Heimdall, who guards it from the jötnar. The bridge's destruction during Ragnarök by the forces of Muspell is foretold. Scholars have proposed that the bridge may have originally represented the Milky Way and have noted parallels between the bridge and another bridge in Norse mythology, Gjallarbrú. Etymology. Scholar Andy Orchard suggests that "Bilröst" may mean "shimmering path". He notes that the first element of "Bilröst"—"bil" (meaning "a moment")—"suggests the fleeting nature of the rainbow", which he connects to the first element of "Bifröst"—the Old Norse verb "bifa" (meaning "to shimmer" or "to shake")—noting that the element evokes notions of the "lustrous sheen" of the bridge. Austrian Germanist Rudolf Simek says that "Bifröst" either means "the swaying road to heaven" (also citing "bifa") or, if "Bilröst" is the original form of the two (which Simek says is likely), "the fleetingly glimpsed rainbow" (possibly connected to "bil", perhaps meaning "moment, weak point"). Attestations. Two poems in the "Poetic Edda" and two books in the "Prose Edda" provide information about the bridge. "Poetic Edda". In the "Poetic Edda", the bridge is mentioned in the poems "Grímnismál" and "Fáfnismál", where it is referred to as "Bilröst". In one of the two stanzas in the poem "Grímnismál" that mention the bridge, Grímnir (the god Odin in disguise) provides the young Agnarr with cosmological knowledge, including that Bilröst is the best of bridges. Later in "Grímnismál", Grímnir notes that Ásbrú "burns all with flames" and that, every day, the god Thor wades through the waters of Körmt and Örmt and the two Kerlaugar. In "Fáfnismál", the dying wyrm Fafnir tells the hero Sigurd that, during the events of Ragnarök, the gods, bearing spears, will meet at Óskópnir. From there, the gods will cross Bilröst, which will break apart as they cross over it, causing their horses to dredge through an immense river. "Prose Edda". The bridge is mentioned in the "Prose Edda" books "Gylfaginning" and "Skáldskaparmál", where it is referred to as "Bifröst". In chapter 13 of "Gylfaginning", Gangleri (King Gylfi in disguise) asks the enthroned figure of High what way exists between heaven and earth. Laughing, High replies that the question is not an intelligent one, and goes on to explain that the gods built a bridge from heaven to earth. He incredulously asks Gangleri if he has not heard the story before. High says that Gangleri must have seen it, and notes that Gangleri may call it a rainbow. High says that the bridge consists of three colors, has great strength, "and is built with art and skill to a greater extent than other constructions." High notes that, although the bridge is strong, it will break when "Muspell's lads" attempt to cross it, and their horses will have to make do with swimming over "great rivers". 
Gangleri says that it does not seem that the gods "built the bridge in good faith if it is liable to break, considering that they can do as they please." High responds that the gods do not deserve blame for the breaking of the bridge, for "there is nothing in this world that will be secure when Muspell's sons attack." In chapter 15 of "Gylfaginning", Just-As-High says that Bifröst is also called "Ásbrú", and that every day the gods ride their horses across it (with the exception of Thor, who instead wades through the boiling waters of the rivers Körmt and Örmt) to reach Urðarbrunnr, a holy well where the gods have their court. As a reference, Just-As-High quotes the second of the two stanzas in "Grímnismál" that mention the bridge (see above). Gangleri asks if fire burns over Bifröst. High says that the red in the bridge is burning fire, and that, without it, the frost jötnar and mountain jötnar would "go up into heaven", since anyone who wanted to could then cross Bifröst. High adds that, in heaven, "there are many beautiful places" and that "everywhere there has divine protection around it." In chapter 17, High tells Gangleri that the location of Himinbjörg "stands at the edge of heaven where Bifrost reaches heaven." While describing the god Heimdallr in chapter 27, High says that Heimdallr lives in Himinbjörg by Bifröst, and guards the bridge from mountain jötnar while sitting at the edge of heaven. In chapter 34, High quotes the first of the two "Grímnismál" stanzas that mention the bridge. In chapter 51, High foretells the events of Ragnarök. High says that, during Ragnarök, the sky will split open, and from the split the "sons of Muspell" will ride forth. When the "sons of Muspell" ride over Bifröst it will break, "as was said above". In the "Prose Edda" book "Skáldskaparmál", the bridge receives a single mention. In chapter 16, a poem by the 10th-century skald Úlfr Uggason is quoted, in which Bifröst is referred to as "the powers' way". Theories. In his translation of the "Poetic Edda", Henry Adams Bellows comments that the "Grímnismál" stanza mentioning Thor and the bridge may mean that "Thor has to go on foot in the last days of the destruction, when the bridge is burning. Another interpretation, however, is that when Thor leaves the heavens (i.e., when a thunder-storm is over) the rainbow-bridge becomes hot in the sun." John Lindow points to a parallel between Bifröst, which he notes is "a bridge between earth and heaven, or earth and the world of the gods", and the bridge Gjallarbrú, "a bridge between earth and the underworld, or earth and the world of the dead." Several scholars have proposed that Bifröst may represent the Milky Way. Adaptations. In the final scene of Richard Wagner's 1869 opera "Das Rheingold", the god Froh summons a rainbow bridge, over which the gods cross to enter Valhalla. In J. R. R. Tolkien's legendarium, the "level bridge" of "The Fall of Númenor", an early version of the "Akallabêth", recalls Bifröst. It departs from the earth at a tangent, allowing immortal Elves but not mortal Men to travel the Old Straight Road to the lost earthly paradise of Valinor after the world has been remade (from a flat plane to a sphere). Bifröst appears in comic books associated with the Marvel Comics character Thor and in subsequent adaptations of those comic books. In the Marvel Cinematic Universe film "Thor", Jane Foster describes the Bifröst as an Einstein–Rosen bridge, which functions as a means of transportation across space in a short period of time.
https://en.wikipedia.org/wiki?curid=4057
Battlecruiser
The battlecruiser (also written as battle cruiser or battle-cruiser) was a type of capital ship of the first half of the 20th century. These were similar in displacement, armament and cost to battleships, but differed in form and balance of attributes. Battlecruisers typically had thinner armour (to a varying degree) and a somewhat lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power in order to attain greater speeds. The first battlecruisers were designed in the United Kingdom as a successor to the armoured cruiser, at the same time as the dreadnought succeeded the pre-dreadnought battleship. The goal of the battlecruiser concept was to outrun any ship with similar armament, and chase down any ship with lesser armament; they were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships. Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned by the United States, Great Britain and Japan were scrapped or converted into aircraft carriers under the terms of the treaty. Improvements in armour design and propulsion created the 1930s "fast battleship" with the speed of a battlecruiser and the armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use "battlecruiser" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships. Some new vessels built during that decade, the German "Scharnhorst" and "Deutschland" classes and the French "Dunkerque" class, are all sometimes referred to as battlecruisers, although the owning navies referred to them as "battleships" ("Schlachtschiffe"), "armoured ships" ("Panzerschiffe") and "battleships" ("navires de ligne") respectively. Battlecruisers were put into action again during World War II, and only one, HMS "Renown", survived to the end. There was also renewed interest in large "cruiser-killer" type warships whose design was scaled up from a heavy cruiser rather than derived from a lighter, faster battleship, but few were ever begun and only two members of the "Alaska" class were commissioned in time to see war service.
Construction of large cruisers as well as fast battleships was curtailed in favour of more urgently needed aircraft carriers, convoy escorts, and cargo ships. During (and after) the Cold War, the Soviet "Kirov" class of large guided missile cruisers has been the only class of ships termed "battlecruisers"; the class is also the only example of a nuclear-powered battlecruiser. As of 2024, Russia retains two units: "Pyotr Velikiy" has remained in active service since her 1998 commissioning, while "Admiral Nakhimov" has been inactive (in storage or refitting) since 1999. Background. The battlecruiser was developed by the Royal Navy in the first years of the 20th century as an evolution of the armoured cruiser. The first armoured cruisers had been built in the 1870s, as an attempt to give armour protection to ships fulfilling the typical cruiser roles of patrol, trade protection and power projection. However, the results were rarely satisfactory, as the weight of armour required for any meaningful protection usually meant that the ship became almost as slow as a battleship. As a result, navies preferred to build protected cruisers with an armoured deck protecting their engines, or simply no armour at all. In the 1890s, new Krupp steel armour meant that it was now possible to give a cruiser side armour which would protect it against the quick-firing guns of enemy battleships and cruisers alike. In 1896–97 France and Russia, which were regarded as likely allies in the event of war, started to build large, fast armoured cruisers taking advantage of this. In the event of a war between Britain and France or Russia, or both, these cruisers threatened to cause serious difficulties for the British Empire's worldwide trade. Britain, which had concluded in 1892 that it needed twice as many cruisers as any potential enemy to adequately protect its empire's sea lanes, responded to the perceived threat by laying down its own large armoured cruisers. Between 1899 and 1905, it completed or laid down seven classes of this type, a total of 35 ships. This building program, in turn, prompted the French and Russians to increase their own construction. The Imperial German Navy began to build large armoured cruisers for use on its overseas stations, laying down eight between 1897 and 1906. In the period 1889–1896, the Royal Navy spent £7.3 million on new large cruisers. From 1897 to 1904, it spent £26.9 million. Many armoured cruisers of the new kind were just as large and expensive as the equivalent battleship. The increasing size and power of the armoured cruiser led to suggestions in British naval circles that cruisers should displace battleships entirely. The battleship's main advantage was its 12-inch heavy guns, and heavier armour designed to protect from shells of similar size. However, for a few years after 1900 it seemed that those advantages were of little practical value. The torpedo now had a range of 2,000 yards, and it seemed unlikely that a battleship would engage within torpedo range. However, at ranges of more than 2,000 yards it became increasingly unlikely that the heavy guns of a battleship would score any hits, as they relied on primitive aiming techniques. The secondary batteries of 6-inch quick-firing guns, firing more plentiful shells, were more likely to hit the enemy. As naval expert Fred T.
Jane wrote in June 1902: "Is there anything outside of 2,000 yards that the big gun in its hundreds of tons of medieval castle can affect, that its weight in 6-inch guns without the castle could not affect equally well? And inside 2,000, what, in these days of gyros, is there that the torpedo cannot effect with far more certainty?" In 1904, Admiral John "Jacky" Fisher became First Sea Lord, the senior officer of the Royal Navy. He had for some time thought about the development of a new fast armoured ship. He was very fond of the "second-class battleship" HMS "Renown", a faster, more lightly armoured battleship. As early as 1901, Fisher's writing shows confusion about whether he saw the battleship or the cruiser as the model for future developments. This did not stop him from commissioning designs from naval architect W. H. Gard for an armoured cruiser with the heaviest possible armament for use with the fleet. The design Gard submitted was for a large, fast ship armed with four 9.2-inch and twelve 7.5-inch guns in twin gun turrets, protected with six inches of armour along her belt and 9.2-inch turrets, with lighter armour on her 7.5-inch turrets, 10 inches on her conning tower and armoured decks. However, mainstream British naval thinking between 1902 and 1904 was clearly in favour of heavily armoured battleships, rather than the fast ships that Fisher favoured. The Battle of Tsushima proved the effectiveness of heavy guns over intermediate ones and the need for a uniform main calibre on a ship for fire control. Even before this, the Royal Navy had begun to consider a shift away from the mixed-calibre armament of the 1890s pre-dreadnought to an "all-big-gun" design, and preliminary designs circulated for battleships with all 12-inch or all 10-inch guns and armoured cruisers with all 9.2-inch guns. In late 1904, not long after the Royal Navy had decided to use 12-inch guns for its next generation of battleships because of their superior performance at long range, Fisher began to argue that big-gun cruisers could replace battleships altogether. The continuing improvement of the torpedo meant that submarines and destroyers would be able to destroy battleships; this in Fisher's view heralded the end of the battleship, or at least compromised the validity of heavy armour protection. Nevertheless, armoured cruisers would remain vital for commerce protection. Fisher's views were very controversial within the Royal Navy, and even given his position as First Sea Lord, he was not in a position to insist on his own approach. Thus he assembled a "Committee on Designs", consisting of a mixture of civilian and naval experts, to determine the approach to both battleship and armoured cruiser construction in the future. While the stated purpose of the committee was to investigate and report on future requirements of ships, Fisher and his associates had already made key decisions. The terms of reference for the committee were for a battleship capable of 21 knots with 12-inch guns and no intermediate calibres, capable of docking in existing drydocks; and a cruiser capable of 25.5 knots, also with 12-inch guns and no intermediate armament, armoured like "Minotaur", the most recent armoured cruiser, and also capable of using existing docks. First battlecruisers. Under the Selborne plan of 1903, the Royal Navy intended to start three new battleships and four armoured cruisers each year.
However, in late 1904 it became clear that the 1905–1906 programme would have to be considerably smaller, because of lower-than-expected tax revenue and the need to purchase two Chilean battleships under construction in British yards, lest they be bought by the Russians for use against the Japanese, Britain's ally. These economic realities meant that the 1905–1906 programme consisted of only one battleship, but three armoured cruisers. The battleship became the revolutionary "Dreadnought", and the cruisers became the three ships of the "Invincible" class. Fisher later claimed, however, that he had argued during the committee for the cancellation of the remaining battleship. The construction of the new class was begun in 1906 and completed in 1908, delayed perhaps to allow their designers to learn from any problems with "Dreadnought". The ships fulfilled the design requirement quite closely. On a displacement similar to "Dreadnought", the "Invincible"s were longer to accommodate additional boilers and more powerful turbines to propel them at 25.5 knots. Moreover, the new ships could maintain this speed for days, whereas pre-dreadnought battleships could not generally do so for more than an hour. Armed with eight 12-inch Mk X guns, compared to ten on "Dreadnought", they had 6–7 inches of armour protecting the hull and the gun turrets. ("Dreadnought"s armour, by comparison, was 11–12 inches at its thickest.) The class had a very marked increase in speed, displacement and firepower compared to the most recent armoured cruisers, but no more armour. While the "Invincible"s were to fill the same role as the armoured cruisers they succeeded, they were expected to do so more effectively. Specifically, their intended roles were scouting for the battle fleet and hunting down enemy cruisers and commerce raiders. Confusion about how to refer to these new battleship-size armoured cruisers set in almost immediately. Even in late 1905, before work was begun on the "Invincible"s, a Royal Navy memorandum referred to "large armoured ships", meaning both battleships and large cruisers. In October 1906, the Admiralty began to classify all post-Dreadnought battleships and armoured cruisers as "capital ships", while Fisher used the term "dreadnought" to refer either to his new battleships or to the battleships and armoured cruisers together. At the same time, the "Invincible" class themselves were referred to as "cruiser-battleships" or "dreadnought cruisers"; the term "battlecruiser" was first used by Fisher in 1908. Finally, on 24 November 1911, Admiralty Weekly Order No. 351 laid down that "All cruisers of the "Invincible" and later types are for the future to be described and classified as "battle cruisers" to distinguish them from the armoured cruisers of earlier date." Along with questions over the new ships' nomenclature came uncertainty about their actual role due to their lack of protection. If they were primarily to act as scouts for the battle fleet and hunter-killers of enemy cruisers and commerce raiders, then the seven inches of belt armour with which they had been equipped would be adequate. If, on the other hand, they were expected to reinforce a battle line of dreadnoughts with their own heavy guns, they were too thin-skinned to be safe from an enemy's heavy guns. The "Invincible"s were essentially extremely large, heavily armed, fast armoured cruisers. However, the viability of the armoured cruiser was already in doubt. A cruiser that could have worked with the Fleet might have been a more viable option for taking over that role.
Because of the "Invincible"s size and armament, naval authorities considered them capital ships almost from their inception—an assumption that might have been inevitable. Complicating matters further was that many naval authorities, including Lord Fisher, had made overoptimistic assessments from the Battle of Tsushima in 1905 about the armoured cruiser's ability to survive in a battle line against enemy capital ships due to their superior speed. These assumptions had been made without taking into account the Russian Baltic Fleet's inefficiency and tactical ineptitude. By the time the term "battlecruiser" had been given to the "Invincible"s, the idea of their parity with battleships had been fixed in many people's minds. Not everyone was so convinced. "Brasseys Naval Annual", for instance, stated that with vessels as large and expensive as the "Invincible"s, an admiral "will be certain to put them in the line of battle where their comparatively light protection will be a disadvantage and their high speed of no value." Those in favor of the battlecruiser countered with two points—first, since all capital ships were vulnerable to new weapons such as the torpedo, armour had lost some of its validity; and second, because of its greater speed, the battlecruiser could control the range at which it engaged an enemy. Battlecruisers in the dreadnought arms race. Between the launching of the "Invincible"s to just after the outbreak of the First World War, the battlecruiser played a junior role in the developing dreadnought arms race, as it was never wholeheartedly adopted as the key weapon in British imperial defence, as Fisher had presumably desired. The biggest factor for this lack of acceptance was the marked change in Britain's strategic circumstances between their conception and the commissioning of the first ships. The prospective enemy for Britain had shifted from a Franco-Russian alliance with many armoured cruisers to a resurgent and increasingly belligerent Germany. Diplomatically, Britain had entered the Entente cordiale in 1904 and the Anglo-Russian Entente. Neither France nor Russia posed a particular naval threat; the Russian navy had largely been sunk or captured in the Russo-Japanese War of 1904–1905, while the French were in no hurry to adopt the new dreadnought-type design. Britain also boasted very cordial relations with two of the significant new naval powers: Japan (bolstered by the Anglo-Japanese Alliance, signed in 1902 and renewed in 1905), and the US. These changed strategic circumstances, and the great success of the "Dreadnought" ensured that she rather than the "Invincible" became the new model capital ship. Nevertheless, battlecruiser construction played a part in the renewed naval arms race sparked by the "Dreadnought". For their first few years of service, the "Invincible"s entirely fulfilled Fisher's vision of being able to sink any ship fast enough to catch them, and run from any ship capable of sinking them. An "Invincible" would also, in many circumstances, be able to take on an enemy pre-dreadnought battleship. Naval circles concurred that the armoured cruiser in its current form had come to the logical end of its development and the "Invincible"s were so far ahead of any enemy armoured cruiser in firepower and speed that it proved difficult to justify building more or bigger cruisers. 
This lead was extended by the surprise that both "Dreadnought" and "Invincible" produced by having been built in secret; this prompted most other navies to delay their building programmes and radically revise their designs. This was particularly true for cruisers, because the details of the "Invincible" class were kept secret for longer; this meant that the last German armoured cruiser, "Blücher", was armed with only 21 cm (8.3 in) guns, and was no match for the new battlecruisers. The Royal Navy's early superiority in capital ships led to the rejection of a 1905–1906 design that would, essentially, have fused the battlecruiser and battleship concepts into what would eventually become the fast battleship. The 'X4' design combined the full armour and armament of "Dreadnought" with the 25-knot speed of "Invincible". The additional cost could not be justified given the existing British lead and the new Liberal government's need for economy; the slower and cheaper "Bellerophon", a relatively close copy of "Dreadnought", was adopted instead. The X4 concept would eventually be fulfilled in the "Queen Elizabeth" class and later by other navies. The next British battlecruisers were the three ships of the "Indefatigable" class, slightly improved "Invincible"s built to fundamentally the same specification, partly due to political pressure to limit costs and partly due to the secrecy surrounding German battlecruiser construction, particularly about the heavy armour of "Von der Tann". This class came to be widely seen as a mistake, and the next generation of British battlecruisers were markedly more powerful. By 1909–1910 a sense of national crisis about rivalry with Germany outweighed cost-cutting, and a naval panic resulted in the approval of a total of eight capital ships in 1909–1910. Fisher pressed for all eight to be battlecruisers, but was unable to have his way; he had to settle for six battleships and two battlecruisers of the "Lion" class. The "Lion"s carried eight 13.5-inch guns, the now-standard calibre of the British "super-dreadnought" battleships. Speed increased to 27 knots, and armour protection, while not as good as in German designs, was better than in previous British battlecruisers, with a 9-inch armour belt and barbettes. The two "Lion"s were followed by the very similar "Queen Mary". By 1911 Germany had built battlecruisers of her own, and the superiority of the British ships could no longer be assured. Moreover, the German Navy did not share Fisher's view of the battlecruiser. In contrast to the British focus on increasing speed and firepower, Germany progressively improved the armour and staying power of its ships to better the British battlecruisers. "Von der Tann", begun in 1908 and completed in 1910, carried eight 28 cm (11 in) guns, but with 283 mm (11.1 in) of armour she was far better protected than the "Invincible"s. The two "Moltke"s were quite similar but carried ten 28 cm (11 in) guns of an improved design. "Seydlitz", designed in 1909 and finished in 1913, was a modified "Moltke"; speed increased by one knot to 26.5 knots, while her armour had a maximum thickness of 12 inches, equivalent to that of German battleships of a few years earlier. "Seydlitz" was Germany's last battlecruiser completed before World War I. The next step in battlecruiser design came from Japan. The Imperial Japanese Navy had been planning the "Kongō"-class ships from 1909, and was determined that, since the Japanese economy could support relatively few ships, each would be more powerful than its likely competitors. Initially the class was planned with the "Invincible"s as the benchmark. On learning of the British plans for "Lion", and the likelihood that new U.S.
Navy battleships would be armed with 14-inch guns, the Japanese decided to radically revise their plans and go one better. A new plan was drawn up, carrying eight 14-inch guns and capable of 27.5 knots, thus marginally having the edge over the "Lion"s in speed and firepower. The heavy guns were also better positioned, being superfiring both fore and aft with no turret amidships. The armour scheme was also marginally improved over the "Lion"s, with nine inches of armour on the turrets and on the barbettes. The first ship in the class was built in Britain, and a further three were constructed in Japan. The Japanese also re-classified their powerful armoured cruisers of the "Tsukuba" and "Ibuki" classes, carrying four 12-inch guns, as battlecruisers; nonetheless, their armament was weaker and they were slower than any battlecruiser. The next British battlecruiser, "Tiger", was intended initially as the fourth ship in the "Lion" class, but was substantially redesigned. She retained the eight 13.5-inch guns of her predecessors, but they were positioned like those of "Kongō" for better fields of fire. She was faster (making 29 knots on sea trials), and carried a heavier secondary armament. "Tiger" was also more heavily armoured on the whole; while the maximum thickness of armour was the same at nine inches, the height of the main armour belt was increased. Not all the desired improvements for this ship were approved, however. Her designer, Sir Eustace Tennyson d'Eyncourt, had wanted small-bore water-tube boilers and geared turbines to give her a speed of 32 knots, but he received no support from the authorities and the engine makers refused his request. 1912 saw work begin on three more German battlecruisers of the "Derfflinger" class, the first German battlecruisers to mount 12-inch guns. These ships, like "Tiger" and the "Kongō"s, had their guns arranged in superfiring turrets for greater efficiency. Their armour and speed were similar to those of the previous "Seydlitz". In 1913, the Russian Empire also began the construction of the four-ship "Borodino" class, which were designed for service in the Baltic Sea. These ships were designed to carry twelve 14-inch guns, with armour up to 12 inches thick and a designed speed of 26.5 knots. The heavy armour and relatively slow speed of these ships made them more similar to German designs than to British ships; construction of the "Borodino"s was halted by the First World War and all were scrapped after the end of the Russian Civil War. World War I. Construction. For most of the combatants, capital ship construction was very limited during the war. Germany finished the "Derfflinger" class and began work on the "Mackensen" class. The "Mackensen"s were a development of the "Derfflinger" class, with 13.8-inch guns and a broadly similar armour scheme, designed for 28 knots. In Britain, Jackie Fisher returned to the office of First Sea Lord in October 1914. His enthusiasm for big, fast ships was unabated, and he set designers to producing a design for a battlecruiser with 15-inch guns. Because Fisher expected the next German battlecruiser to steam at 28 knots, he required the new British design to be capable of 32 knots. He planned to reorder two "Revenge"-class battleships, which had been approved but not yet laid down, to a new design. Fisher finally received approval for this project on 28 December 1914, and they became the "Renown" class. With six 15-inch guns but only 6-inch armour they were a further step forward from "Tiger" in firepower and speed, but returned to the level of protection of the first British battlecruisers.
At the same time, Fisher resorted to subterfuge to obtain another three fast, lightly armoured ships that could use several spare gun turrets left over from battleship construction. These ships were essentially light battlecruisers, and Fisher occasionally referred to them as such, but officially they were classified as "large light cruisers". This unusual designation was required because construction of new capital ships had been placed on hold, while there were no limits on light cruiser construction. They became "Courageous" and her sisters "Glorious" and "Furious", and there was a bizarre imbalance between their main guns of 15 inches (or 18 inches in "Furious") and their armour, which at 3 inches thickness was on the scale of a light cruiser. The design was generally regarded as a failure (nicknamed in the Fleet "Outrageous", "Uproarious" and "Spurious"), though the later conversion of the ships to aircraft carriers was very successful. Fisher also speculated about a new mammoth but lightly built battlecruiser that would carry 20-inch guns, which he termed "Incomparable"; this never got beyond the concept stage. It is often held that the "Renown" and "Courageous" classes were designed for Fisher's plan to land troops (possibly Russian) on the German Baltic coast. Specifically, they were designed with a reduced draught, which might be important in the shallow Baltic. This is not clear-cut evidence that the ships were designed for the Baltic: it was considered that earlier ships had too much draught and not enough freeboard under operational conditions. Roberts argues that the focus on the Baltic was probably unimportant at the time the ships were designed, but was inflated later, after the disastrous Dardanelles Campaign. The final British battlecruiser design of the war was the "Admiral" class, which was born from a requirement for an improved version of the "Queen Elizabeth" battleship. The project began at the end of 1915, after Fisher's final departure from the Admiralty. While initially envisaged as a battleship, senior sea officers felt that Britain had enough battleships, but that new battlecruisers might be required to combat German ships being built (the British overestimated German progress on the "Mackensen" class as well as their likely capabilities). A battlecruiser design with eight 15-inch guns, 8 inches of armour and capable of 32 knots was decided on. The experience of battlecruisers at the Battle of Jutland meant that the design was radically revised and transformed again into a fast battleship with armour up to 12 inches thick, but still capable of over 30 knots. The first ship in the class, "Hood", was built according to this design to counter the possible completion of any of the "Mackensen"-class ships. The plans for her three sisters, on which little work had been done, were revised once more later in 1916 and in 1917 to improve protection. The "Admiral" class would have been the only British ships capable of taking on the German "Mackensen" class; nevertheless, German shipbuilding was drastically slowed by the war, and while two "Mackensen"s were launched, none were ever completed. The Germans also worked briefly on a further three ships, of the "Ersatz Yorck" class, which were modified versions of the "Mackensen"s with 15-inch guns. Work on the three additional "Admiral"s was suspended in March 1917 to enable more escorts and merchant ships to be built to deal with the new threat from U-boats to trade. They were finally cancelled in February 1919. Battlecruisers in action. The first combat involving battlecruisers during World War I was the Battle of Heligoland Bight in August 1914.
A force of British light cruisers and destroyers entered the Heligoland Bight (the part of the North Sea closest to Hamburg) to attack German destroyer patrols. When they met opposition from light cruisers, Vice Admiral David Beatty took his squadron of five battlecruisers into the Bight and turned the tide of the battle, ultimately sinking three German light cruisers and killing their commander, Rear Admiral Leberecht Maass. The German battlecruiser "Goeben" perhaps made the most impact early in the war. Stationed in the Mediterranean, she and the escorting light cruiser "Breslau" evaded British and French ships on the outbreak of war and steamed to Constantinople (Istanbul) with two British battlecruisers in hot pursuit. The two German ships were handed over to the Ottoman Navy, and this was instrumental in bringing the Ottoman Empire into the war as one of the Central Powers. "Goeben" herself, renamed "Yavuz Sultan Selim", fought engagements against the Imperial Russian Navy in the Black Sea before being knocked out of the action for the remainder of the war after the Battle of Imbros against British forces in the Aegean Sea in January 1918. The original battlecruiser concept proved successful in December 1914 at the Battle of the Falkland Islands. The British battlecruisers "Invincible" and "Inflexible" did precisely the job for which they were intended when they chased down and annihilated the German East Asia Squadron, centered on the armoured cruisers "Scharnhorst" and "Gneisenau", along with three light cruisers, commanded by Admiral Maximilian Graf von Spee, in the South Atlantic Ocean. Prior to the battle, the Australian battlecruiser "Australia" had unsuccessfully searched for the German ships in the Pacific. During the Battle of Dogger Bank in 1915, the aftermost barbette of the German flagship "Seydlitz" was struck by a British 13.5-inch shell from HMS "Lion". The shell did not penetrate the barbette, but it dislodged a piece of the barbette armour that allowed the flame from the shell's detonation to enter the barbette. The propellant charges being hoisted upwards were ignited, and the fireball flashed up into the turret and down into the magazine, setting fire to charges removed from their brass cartridge cases. The gun crew tried to escape into the next turret, which allowed the flash to spread into that turret as well, killing the crews of both turrets. "Seydlitz" was saved from near-certain destruction only by emergency flooding of her after magazines, which had been effected by Wilhelm Heidkamp. This near-disaster was due to the way that ammunition handling was arranged and was common to both German and British battleships and battlecruisers, but the lighter protection on the latter made them more vulnerable to the turret or barbette being penetrated. The Germans learned from investigating the damaged "Seydlitz" and instituted measures to ensure that ammunition handling minimised any possible exposure to flash. Apart from the cordite handling, the battle was mostly inconclusive, though both the British flagship "Lion" and "Seydlitz" were severely damaged. "Lion" lost speed, causing her to fall behind the rest of the battleline, and Beatty was unable to effectively command his ships for the remainder of the engagement. A British signalling error allowed the German battlecruisers to withdraw, as most of Beatty's squadron mistakenly concentrated on the crippled armoured cruiser "Blücher", sinking her with great loss of life.
The British blamed their failure to win a decisive victory on their poor gunnery and attempted to increase their rate of fire by stockpiling unprotected cordite charges in their ammunition hoists and barbettes. At the Battle of Jutland on 31 May 1916, both British and German battlecruisers were employed as fleet units. The British battlecruisers became engaged with both their German counterparts, the battlecruisers, and then German battleships before the arrival of the battleships of the British Grand Fleet. The result was a disaster for the Royal Navy's battlecruiser squadrons: "Invincible", "Queen Mary", and "Indefatigable" exploded with the loss of all but a handful of their crews. The exact reason why the ships' magazines detonated is not known, but the abundance of exposed cordite charges stored in their turrets, ammunition hoists and working chambers in the quest to increase their rate of fire undoubtedly contributed to their loss. Beatty's flagship "Lion" herself was almost lost in a similar manner, save for the heroic actions of Major Francis Harvey. The better-armoured German battlecruisers fared better, in part due to the poor performance of British fuzes (the British shells tended to explode or break up on impact with the German armour). "Lützow"—the only German battlecruiser lost at Jutland—had only 128 killed, for instance, despite receiving more than thirty hits. The other German battlecruisers, "Moltke", "Von der Tann", "Seydlitz", and "Derfflinger", were all heavily damaged and required extensive repairs after the battle, "Seydlitz" barely making it home, for they had been the focus of British fire for much of the battle. Interwar period. In the years immediately after World War I, Britain, Japan and the US all began design work on a new generation of ever more powerful battleships and battlecruisers. The new burst of shipbuilding that each nation's navy desired was politically controversial and potentially economically crippling. This nascent arms race was prevented by the Washington Naval Treaty of 1922, in which the major naval powers agreed to limits on capital ship numbers. The German navy was not represented at the talks; under the terms of the Treaty of Versailles, Germany was not allowed any modern capital ships at all. Through the 1920s and 1930s only Britain and Japan retained battlecruisers, often modified and rebuilt from their original designs. The line between the battlecruiser and the modern fast battleship became blurred; indeed, the Japanese "Kongō"s were formally redesignated as battleships after their very comprehensive reconstruction in the 1930s. Plans in the aftermath of World War I. "Hood", launched in 1918, was the last World War I battlecruiser to be completed. Owing to lessons from Jutland, the ship was modified during construction; the thickness of her belt armour was increased by an average of 50 percent and extended substantially, she was given heavier deck armour, and the protection of her magazines was improved to guard against the ignition of ammunition. Her armour was hoped to be capable of resisting her own weapons—the classic measure of a "balanced" battleship. "Hood" was the largest ship in the Royal Navy when completed; because of her great displacement, in theory she combined the firepower and armour of a battleship with the speed of a battlecruiser, causing some to refer to her as a fast battleship. However, her protection was markedly less than that of the British battleships built immediately after World War I, the "Nelson" class.
The navies of Japan and the United States, not being affected immediately by the war, had time to develop new heavy guns for their latest designs and to refine their battlecruiser designs in light of combat experience in Europe. The Imperial Japanese Navy began four "Amagi"-class battlecruisers. These vessels would have been of unprecedented size and power, as fast and well armoured as "Hood" whilst carrying a main battery of ten 16-inch guns, the most powerful armament ever proposed for a battlecruiser. They were, for all intents and purposes, fast battleships—the only differences between them and the "Tosa"-class battleships which were to precede them were less side armour and an increase in speed. The United States Navy, which had worked on its battlecruiser designs since 1913 and watched the latest developments in this class with great care, responded with the "Lexington" class. If completed as planned, they would have been exceptionally fast and well armed with eight 16-inch guns, but carried armour little better than the "Invincible"s—this after an increase in protection following Jutland. The final stage in the post-war battlecruiser race came with the British response to the "Amagi" and "Lexington" types: four G3 battlecruisers. Royal Navy documents of the period often described any battleship with a sufficiently high speed as a battlecruiser, regardless of the amount of protective armour, although the G3 was considered by most to be a well-balanced fast battleship. The Washington Naval Treaty meant that none of these designs came to fruition. Ships that had been started were either broken up on the slipway or converted to aircraft carriers. In Japan, "Amagi" and "Akagi" were selected for conversion. "Amagi" was damaged beyond repair by the 1923 Great Kantō earthquake and was broken up for scrap; the hull of one of the proposed "Tosa"-class battleships, "Kaga", was converted in her stead. The United States Navy also converted two battlecruiser hulls into aircraft carriers in the wake of the Washington Treaty: "Lexington" and "Saratoga", although this was only considered marginally preferable to scrapping the hulls outright (the remaining four: "Constellation", "Ranger", "Constitution" and "United States" were scrapped). In Britain, Fisher's "large light cruisers" were converted to carriers. "Furious" had already been partially converted during the war, and "Glorious" and "Courageous" were similarly converted. Rebuilding programmes. In total, nine battlecruisers survived the Washington Naval Treaty, although HMS "Tiger" later became a victim of the London Naval Treaty of 1930 and was scrapped. Because their high speed made them valuable surface units in spite of their weaknesses, most of these ships were significantly updated before World War II. "Renown" and "Repulse" were modernized significantly in the 1920s and 1930s. Between 1934 and 1936, "Repulse" was partially modernized and had her bridge modified, an aircraft hangar, catapult and new gunnery equipment added, and her anti-aircraft armament increased. "Renown" underwent a more thorough reconstruction between 1937 and 1939. Her deck armour was increased, new turbines and boilers were fitted, an aircraft hangar and catapult added, and she was completely rearmed aside from the main guns, which had their elevation increased to +30 degrees. The bridge structure was also removed and a large bridge similar to that used in contemporary battleship construction installed in its place. While conversions of this kind generally added weight to the vessel, "Renown"s tonnage actually decreased due to a substantially lighter power plant.
Similar thorough rebuildings planned for "Repulse" and "Hood" were cancelled due to the advent of World War II. Unable to build new ships, the Imperial Japanese Navy also chose to improve its existing battlecruisers of the "Kongō" class (initially "Kongō", "Haruna" and "Kirishima"; "Hiei" followed only later, as she had been disarmed under the terms of the Washington Treaty) in two substantial reconstructions (one for "Hiei"). During the first of these, the elevation of their main guns was increased to +40 degrees, anti-torpedo bulges and additional horizontal armour were added, and a "pagoda" mast with additional command positions was built up. This reduced the ships' speed. The second reconstruction focused on speed, as they had been selected as fast escorts for aircraft carrier task forces. Completely new main engines, a reduced number of boilers and an increase in hull length allowed them to reach up to 30 knots once again. They were reclassified as "fast battleships", although their armour and guns still fell short compared to surviving World War I–era battleships in the American or the British navies, with dire consequences during the Pacific War, when "Hiei" and "Kirishima" were easily crippled by US gunfire during actions off Guadalcanal, forcing their scuttling shortly afterwards. Perhaps most tellingly, "Hiei" was crippled by medium-calibre gunfire from heavy and light cruisers in a close-range night engagement. There were two exceptions: Turkey's "Yavuz Sultan Selim" and the Royal Navy's "Hood". The Turkish Navy made only minor improvements to the ship in the interwar period, which primarily focused on repairing wartime damage and the installation of new fire control systems and anti-aircraft batteries. "Hood" was in constant service with the fleet and could not be withdrawn for an extended reconstruction. She received minor improvements over the course of the 1930s, including modern fire control systems, increased numbers of anti-aircraft guns and, in March 1941, radar. Naval rearmament. In the late 1930s navies began to build capital ships again, and during this period a number of large commerce raiders and small, fast battleships were built that are sometimes referred to as battlecruisers, such as the German "Scharnhorst" and "Deutschland" classes and the French "Dunkerque"s. Germany and Russia designed new battlecruisers during this period, though only the latter laid down two of the 35,000-ton "Kronshtadt" class. They were still on the slipways when the Germans invaded in 1941 and construction was suspended. Both ships were scrapped after the war. The Germans planned three battlecruisers of the O class as part of the expansion of the Kriegsmarine (Plan Z). With six 15-inch guns, high speed, excellent range, but very thin armour, they were intended as commerce raiders. Only one was ordered shortly before World War II; no work was ever done on it. No names were assigned, and they were known by their contract names: 'O', 'P', and 'Q'. The new class was not universally welcomed in the Kriegsmarine. Their abnormally light protection gained them the derogatory nickname "Ohne Panzer Quatsch" (without armour nonsense) within certain circles of the Navy. World War II. The Royal Navy deployed some of its battlecruisers during the Norwegian Campaign in April 1940. The "Scharnhorst" and the "Gneisenau" were engaged during the action off Lofoten by "Renown" in very bad weather and disengaged after "Gneisenau" was damaged.
One of "Renown"s 15-inch shells passed through "Gneisenau"s director-control tower without exploding, severing electrical and communication cables as it went and destroyed the rangefinders for the forward 150 mm (5.9 in) turrets. Main-battery fire control had to be shifted aft due to the loss of electrical power. Another shell from "Renown" knocked out "Gneisenau"s aft turret. The British ship was struck twice by German shells that failed to inflict any significant damage. She was the only pre-war battlecruiser to survive the war. In the early years of the war various German ships had a measure of success hunting merchant ships in the Atlantic. Allied battlecruisers such as "Renown", "Repulse", and the fast battleships "Dunkerque" and were employed on operations to hunt down the commerce-raiding German ships. The one stand-up fight occurred when the battleship and the heavy cruiser sortied into the North Atlantic to attack British shipping and were intercepted by "Hood" and the battleship in May 1941 in the Battle of the Denmark Strait. "Hood" was destroyed when the "Bismarck"s 15-inch shells caused a magazine explosion. Only three men survived. The first battlecruiser to see action in the Pacific War was "Repulse" when she was sunk by Japanese torpedo bombers north of Singapore on 10 December 1941 whilst in company with "Prince of Wales". She was lightly damaged by a single bomb and near-missed by two others in the first Japanese attack. Her speed and agility enabled her to avoid the other attacks by level bombers and dodge 33 torpedoes. The last group of torpedo bombers attacked from multiple directions and "Repulse" was struck by five torpedoes. She quickly capsized with the loss of 27 officers and 486 crewmen; 42 officers and 754 enlisted men were rescued by the escorting destroyers. The loss of "Repulse" and "Prince of Wales" conclusively proved the vulnerability of capital ships to aircraft without air cover of their own. The Japanese "Kongō"-class battlecruisers were extensively used as carrier escorts for most of their wartime career due to their high speed. Although classified as fast battleships by the Japanese, their World War I–era armament was weaker and their upgraded armour was still thin compared to contemporary battleships. On 13 November 1942, during the First Naval Battle of Guadalcanal, "Hiei" stumbled across American cruisers and destroyers at point-blank range. The ship was badly damaged in the encounter and had to be towed by her sister ship "Kirishima". Both were spotted by American aircraft the following morning and "Kirishima" was forced to cast off her tow because of repeated aerial attacks. "Hiei"s captain ordered her crew to abandon ship after further damage and scuttled "Hiei" in the early evening of 14 November. On the night of 14/15 November during the Second Naval Battle of Guadalcanal, "Kirishima" returned to Ironbottom Sound, but encountered the American battleships and . While failing to detect "Washington", "Kirishima" engaged "South Dakota" with some effect. "Washington" opened fire a few minutes later at short range and badly damaged "Kirishima", knocking out her aft turrets, jamming her rudder, and hitting the ship below the waterline. The flooding proved to be uncontrollable and "Kirishima" capsized three and a half hours later. Returning to Japan after the Battle of Leyte Gulf, "Kongō" was torpedoed and sunk by the American submarine on 21 November 1944. 
"Haruna" was moored at Kure, Japan when the naval base was attacked by American carrier aircraft on 24 and 28 July 1945. The ship was only lightly damaged by a single bomb hit on 24 July, but was hit a dozen more times on 28 July and sank at her pier. She was refloated after the war and scrapped in early 1946. Large cruisers or "cruiser killers". A late renaissance in popularity of ships between battleships and cruisers in size occurred on the eve of World War II. Described by some as battlecruisers, but never classified as capital ships, they were variously described as "super cruisers", "large cruisers" or even "unrestricted cruisers". The Dutch, American, and Japanese navies all planned these new classes specifically to counter the heavy cruisers, or their counterparts, being built by their naval rivals. The first such battlecruisers were the Dutch Design 1047, designed to protect their colonies in the East Indies in the face of Japanese aggression. Never officially assigned names, these ships were designed with German and Italian assistance. While they broadly resembled the German "Scharnhorst" class and had the same main battery, they would have been more lightly armoured and only protected against eight-inch gunfire. Although the design was mostly completed, work on the vessels never commenced as the Germans overran the Netherlands in May 1940. The first ship would have been laid down in June of that year. The only class of these late battlecruisers actually built were the United States Navy's "large cruisers". Two of them were completed, and ; a third, , was cancelled while under construction and three others, to be named "Philippines", "Puerto Rico" and "Samoa", were cancelled before they were laid down. The USN classified them "large cruisers" instead of battlecruisers. These ships were named after territories or protectorates, while battleships were named after states and cruisers after cities. With a displacement of and a main armament of nine 12-inch guns in three triple turrets, they were twice the size of s and had guns some 50% larger in diameter. The "Alaska"s design was a scaled-up cruiser rather than a lighter/faster battleship derivative, as they lacked the thick armoured belt and intricate torpedo defence system of contemporary battleships. However, unlike World War I-era battlecruisers, the "Alaska"s were considered a balanced design according to cruiser standards as their protection could withstand fire from their own caliber of gun, albeit only in a very narrow range band. They were designed to hunt down Japanese heavy cruisers, though by the time they entered service most Japanese cruisers had been sunk by American aircraft or submarines. Like the contemporary fast battleships, their speed ultimately made them more useful as carrier escorts and bombardment ships than as the surface combatants they were developed to be. The Japanese started designing the B64 class, which was similar to the "Alaska" but with guns. News of the "Alaska"s led them to upgrade the design, creating Design B-65. Armed with 356 mm guns, the B65s would have been the best armed of the new breed of battlecruisers, but they still would have had only sufficient protection to keep out eight-inch shells. Much like the Dutch, the Japanese got as far as completing the design for the B65s, but never laid them down. By the time the designs were ready the Japanese Navy recognized that they had little use for the vessels and that their priority for construction should lie with aircraft carriers. 
Like the "Alaska"s, the Japanese did not call these ships battlecruisers, referring to them instead as super-heavy cruisers. Cold War–era designs. In spite of the fact that most navies abandoned the battleship and battlecruiser concepts after World War II, Joseph Stalin's fondness for big-gun-armed warships caused the Soviet Union to plan a large cruiser class in the late 1940s. In the Soviet Navy, they were termed "heavy cruisers" ("tyazhelny kreyser"). The fruits of this program were the Project 82 ("Stalingrad") cruisers, of standard load, nine guns and a speed of . Three ships were laid down in 1951–1952, but they were cancelled in April 1953 after Stalin's death. Only the central armoured hull section of the first ship, "Stalingrad", was launched in 1954 and then used as a target. The Soviet is sometimes referred to as a battlecruiser. This description arises from their over displacement, which is roughly equal to that of a First World War battleship and more than twice the displacement of contemporary cruisers; upon entry into service, "Kirov" was the largest surface combatant to be built since World War II. The "Kirov" class lacks the armour that distinguishes battlecruisers from ordinary cruisers and they are classified as heavy nuclear-powered missile cruisers ("Тяжелый Атомный Ракетный Крейсер" (ТАРКР)) by Russia, with their primary surface armament consisting of twenty P-700 Granit surface to surface missiles. Four members of the class were completed during the 1980s and 1990s, but due to budget constraints only the is operational with the Russian Navy, though plans were announced in 2010 to return the other three ships to service. As of 2021, was being refitted, but the other two ships are reportedly beyond economical repair.
https://en.wikipedia.org/wiki?curid=4059
Bob Hawke
Robert James Lee Hawke (9 December 1929 – 16 May 2019) was an Australian politician and trade unionist who served as the 23rd prime minister of Australia from 1983 to 1991. He held office as leader of the Australian Labor Party (ALP), having previously served as president of the Australian Council of Trade Unions from 1969 to 1980 and president of the Labor Party national executive from 1973 to 1978. Hawke was born in Bordertown, South Australia. He attended the University of Western Australia and went on to study at University College, Oxford, as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for national wage case arbitration, he was elected as president of the ACTU in 1969, where he achieved a high public profile. In 1973, he was elected president of the Labor Party. In 1980, Hawke stood down from his roles as ACTU and Labor Party president to announce his intention to enter parliamentary politics, and was subsequently elected to the Australian House of Representatives as the member of parliament (MP) for the division of Wills at the 1980 federal election. Three years later, he was elected unopposed to replace Bill Hayden as leader of the Australian Labor Party, and within five weeks led Labor to a landslide victory at the 1983 election and was sworn in as prime minister. He led Labor to victory a further three times, with successful outcomes at the 1984, 1987 and 1990 elections, making him the most electorally successful prime minister in the history of the Labor Party. The Hawke government implemented a significant number of reforms, including major economic reforms, the establishment of Landcare, the introduction of the universal healthcare scheme Medicare, brokering the Prices and Incomes Accord, creating APEC, floating the Australian dollar, deregulating the financial sector, introducing the Family Assistance Scheme, enacting the Sex Discrimination Act to prevent discrimination in the workplace, declaring "Advance Australia Fair" as the country's national anthem, initiating superannuation pension schemes for all workers, negotiating a ban on mining in Antarctica and overseeing passage of the Australia Act, which removed all remaining jurisdiction of the United Kingdom over Australia. In June 1991, Hawke faced a leadership challenge from the Treasurer, Paul Keating, and managed to retain power; however, Keating mounted a second challenge six months later and won narrowly, replacing Hawke as prime minister. Hawke subsequently retired from parliament, pursuing both a business career and a number of charitable causes, until his death in 2019, aged 89. Hawke remains his party's longest-serving prime minister, and Australia's third-longest-serving prime minister behind Robert Menzies and John Howard. He is also the only prime minister to have been born in South Australia and the only one raised and educated in Western Australia. Hawke holds the highest-ever approval rating for an Australian prime minister, reaching 75% approval in 1984. Hawke is frequently ranked within the upper tier of Australian prime ministers by historians. Early life and family. Bob Hawke was born on 9 December 1929 in Bordertown, South Australia, the second child of Arthur "Clem" Hawke (1898–1989), a Congregationalist minister, and his wife Edith Emily Hawke (née Lee; 1897–1979), known as Ellie, a schoolteacher. His uncle, Bert, was the Labor premier of Western Australia between 1953 and 1959.
Hawke's brother Neil, who was seven years his senior, died at the age of seventeen after contracting meningitis, for which there was no cure at the time. Ellie Hawke subsequently developed an almost messianic belief in her son's destiny, and this contributed to Hawke's supreme self-confidence throughout his career. At the age of fifteen, he presciently boasted to friends that he would one day become the prime minister of Australia. At the age of seventeen, Hawke had a serious crash while riding his Panther motorcycle that left him in a critical condition for several days. This near-death experience acted as a catalyst, driving him to make the most of his talents rather than let them go to waste. He joined the Labor Party in 1947 at the age of eighteen. Education and early career. Hawke was educated at West Leederville State School, Perth Modern School and the University of Western Australia, graduating in 1952 with Bachelor of Arts and Bachelor of Laws degrees. He was also president of the university's guild during the same year. The following year, Hawke won a Rhodes Scholarship to attend University College, Oxford, where he began a Bachelor of Arts course in philosophy, politics and economics (PPE). He soon found he was covering much the same ground as he had at the University of Western Australia, and transferred to a Bachelor of Letters course. He wrote his thesis on wage-fixing in Australia and successfully presented it in January 1956. In 1956, Hawke accepted a scholarship to undertake doctoral studies in the area of arbitration law in the law department at the Australian National University in Canberra. Soon after his arrival at ANU, he became the students' representative on the University Council. A year later, he was recommended to the President of the ACTU to become a research officer, replacing Harold Souter, who had become ACTU Secretary. The recommendation was made by Hawke's mentor at ANU, H. P. Brown, who for a number of years had assisted the ACTU in national wage cases. Hawke decided to abandon his doctoral studies and accept the offer, moving to Melbourne with his wife Hazel. World record beer skol (scull). Hawke is well known for a "world record" allegedly achieved at Oxford University for a beer skol (scull) of a yard of ale in 11 seconds. The record is widely regarded as having been important to his career and his "ocker chic" image. A 2023 article in the "Journal of Australian Studies" by C. J. Coventry concluded that Hawke's achievement was "possibly fabricated" and "cultural propaganda" designed to make Hawke appealing to unionised workers and nationalistic middle-class voters. The article contends that "its location and time remain uncertain; there are no known witnesses; the field of competition was exclusive and with no scientific accountability; the record was first published in a beer pamphlet; and Hawke's recollections were unreliable." Australian Council of Trade Unions. Not long after Hawke began work at the ACTU, he became responsible for the presentation of its annual case for higher wages to the national wages tribunal, the Commonwealth Conciliation and Arbitration Commission. He was first appointed as an ACTU advocate in 1959. The 1958 case, under the previous advocate, R. L. Eggleston, had yielded only a five-shilling increase. The 1959 case found for a fifteen-shilling increase, and was regarded as a personal triumph for Hawke.
He went on to attain such success and prominence in his role as an ACTU advocate that, in 1969, he was encouraged to run for the position of ACTU President, despite the fact that he had never held elected office in a trade union. He was elected ACTU President in 1969 on a modernising platform by the narrow margin of 399 to 350, with the support of the left of the union movement, including some associated with the Communist Party of Australia. He later credited Ray Gietzelt, General Secretary of the FMWU, as the single most significant union figure in helping him achieve this outcome. Questioned after his election on his political stance, Hawke stated that "socialist is not a word I would use to describe myself", saying instead that his approach to politics was pragmatic. His commitment to the cause of Jewish Refuseniks purportedly led to a planned assassination attempt on Hawke by the Popular Front for the Liberation of Palestine and its Australian operative Munif Mohammed Abou Rish. In 1971, Hawke, along with other members of the ACTU, requested that South Africa send a non-racially biased team for its rugby union tour of Australia, warning that unions would otherwise refuse to serve the touring side. Prior to the team's arrival, the Western Australian branch of the Transport Workers' Union and the Barmaids' and Barmens' Union announced that they would serve the team, which allowed the Springboks to land in Perth. The tour commenced on 26 June, and riots occurred as anti-apartheid protesters disrupted games. Hawke and his family started to receive malicious mail and phone calls from people who thought that sport and politics should not mix. Hawke remained committed to the ban on apartheid teams; later that year the proposed South African cricket tour was successfully blocked, and no team selected under apartheid ever toured Australia again. It was this ongoing dedication to racial equality in South Africa that would later earn Hawke the respect and friendship of Nelson Mandela. In industrial matters, Hawke continued to demonstrate a preference for, and considerable skill at, negotiation, and was generally liked and respected by employers as well as the unions he advocated for. As early as 1972, speculation began that he would seek to enter the Parliament of Australia and eventually run to become the Leader of the Australian Labor Party. But while his professional career continued successfully, his heavy drinking and womanising placed considerable strains on his family life. In June 1973, Hawke was elected as the Federal President of the Labor Party. Two years later, when the Whitlam government was controversially dismissed by the Governor-General, Hawke showed an initial keenness to enter Parliament at the ensuing election. Harry Jenkins, the MP for Scullin, came under pressure to step down to allow Hawke to stand in his place, but Jenkins strongly resisted this push. Hawke eventually decided not to attempt to enter Parliament at that time, a decision he soon regretted. After Labor was defeated at the election, Whitlam initially offered the leadership to Hawke, although it was not within Whitlam's power to decide who would succeed him. Despite not taking up the offer, Hawke remained influential, playing a key role in averting national strike action. During the 1977 federal election, he emerged as a strident opponent of accepting Vietnamese boat people as refugees into Australia, stating that they should be subject to normal immigration requirements and should otherwise be deported. 
He further stated that only refugees selected off-shore should be accepted. Hawke resigned as President of the Labor Party in August 1978; Neil Batt was elected in his place. The strain of this period took its toll on Hawke, and in 1979 he suffered a physical collapse. This shock led Hawke to publicly announce his alcoholism in a television interview and to declare that he would make a concerted, and ultimately successful, effort to overcome it. He was helped through this period by the relationship that he had established with the writer Blanche d'Alpuget, who, in 1982, published a biography of Hawke. His popularity with the public was, if anything, enhanced by this period of rehabilitation, and opinion polling suggested that he was a more popular public figure than either Labor Leader Bill Hayden or Liberal Prime Minister Malcolm Fraser. Informer for the United States. From 1973 to 1979, Hawke acted as an informant for the United States government. According to Coventry, Hawke, as concurrent leader of the ACTU and ALP, informed the US of details surrounding labour disputes, especially those relating to American companies and individuals, such as union disputes with the Ford Motor Company and the black ban of Frank Sinatra. The major industrial action taken against Sinatra came about because Sinatra had made sexist comments about female journalists. The dispute was the subject of the 2003 film "The Night We Called It a Day". Hawke was described by US diplomats as "a bulwark against anti-American sentiment and resurgent communism during the economic turmoil of the 1970s", and often disputed with the Whitlam government over issues of foreign policy and industrial relations. US diplomats played a major role in shaping Hawke's consensus politics and economics. Although Hawke was the most prolific Australian informer for the United States in the 1970s, there were other prominent people at that time who secretly gave information. The biographer Troy Bramston rejects the view that Hawke's prolonged, discreet involvement with known members of the Central Intelligence Agency within the US Embassy amounted to Hawke being a CIA "spy". Member of Parliament. Hawke's first attempt to enter Parliament came at the 1963 federal election. He stood in the seat of Corio in Geelong and achieved a 3.1% swing against the national trend, although he fell short of ousting the longtime Liberal incumbent Hubert Opperman. Hawke rejected several opportunities to enter Parliament throughout the 1970s, something he later wrote that he "regretted". He eventually stood for election to the House of Representatives at the 1980 election for the safe Melbourne seat of Wills, winning it comfortably. Immediately upon his election to Parliament, Hawke was appointed to the Shadow Cabinet by Labor Leader Bill Hayden as Shadow Minister for Industrial Relations. Hayden, having led the Labor Party to a narrow loss at the 1980 election, was increasingly subject to criticism from Labor MPs over his leadership style. To quell speculation over his position, Hayden called a leadership spill on 16 July 1982, believing that if he won he would be guaranteed to lead Labor through to the next election. Hawke decided to challenge Hayden in the spill, but Hayden defeated him by five votes; the margin of victory, however, was too slim to dispel doubts that Hayden could lead the Labor Party to victory at an election. 
Despite his defeat, Hawke began to agitate more seriously behind the scenes for a change in leadership, with opinion polls continuing to show that he was a far more popular public figure than both Hayden and Prime Minister Malcolm Fraser. Hayden was further weakened by Labor's unexpectedly poor performance at a by-election in December 1982 for the Victorian seat of Flinders, following the resignation of the sitting member, former deputy Liberal leader Phillip Lynch. Labor needed a swing of 5.5% to win the seat and had been predicted by the media to do so, but achieved only 3%. Labor Party power-brokers, such as Graham Richardson and Barrie Unsworth, now openly switched their allegiance from Hayden to Hawke. More significantly, Hayden's staunch friend and political ally, Labor's Senate Leader John Button, had become convinced that Hawke's chances of victory at an election were greater than Hayden's. Initially, Hayden believed that he could remain in his job, but Button's defection proved to be the final straw in convincing Hayden that he would have to resign as Labor Leader. Less than two months after the Flinders by-election, Hayden announced his resignation as Leader of the Labor Party on 3 February 1983. Hawke was elected as Leader unopposed on 8 February, becoming Leader of the Opposition in the process. Having learned that morning of the impending leadership change, Malcolm Fraser called a snap election for 5 March 1983 on the same day that Hawke assumed the Labor leadership, in an unsuccessful attempt to deny Labor the chance to change leaders; he was unable, however, to have the Governor-General confirm the election before Labor announced the change. At the 1983 election, Hawke led Labor to a landslide victory, achieving a 24-seat swing and ending seven years of Liberal Party rule. Because the election was called at the same time that he became Labor leader, Hawke never sat in Parliament as Leader of the Opposition, spending the entirety of his short Opposition leadership on the election campaign, which he won. Prime Minister of Australia (1983–1991). Leadership style. After Labor's landslide victory, Hawke was sworn in as prime minister by the Governor-General, Ninian Stephen, on 11 March 1983. The style of the Hawke government was deliberately distinct from that of the Whitlam government, the Labor government that had preceded it. Rather than immediately initiating multiple extensive reform programs as Whitlam had, Hawke announced that Malcolm Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred. As part of his internal reforms package, Hawke divided the government into two tiers, with only the most senior ministers sitting in the Cabinet of Australia. The Labor caucus was still given the authority to determine who would make up the Ministry, but this move gave Hawke unprecedented powers to empower individual ministers. After Australia won the America's Cup in 1983, Hawke said "any boss who sacks anyone for not turning up today is a bum", effectively declaring an impromptu national public holiday. In particular, the political partnership that developed between Hawke and his Treasurer, Paul Keating, proved essential to Labor's success in government, with multiple Labor figures in the years since citing the partnership as the party's greatest ever. The two men proved a study in contrasts: Hawke was a Rhodes Scholar; Keating had left high school early. 
Hawke's enthusiasms were cigars, betting and most forms of sport; Keating preferred classical architecture, Mahler symphonies and collecting British Regency and French Empire antiques. Despite not knowing one another before Hawke assumed the leadership in 1983, the two formed a personal as well as political relationship which enabled the Government to pursue a significant number of reforms, although there were occasional points of tension between them. The Labor Caucus under Hawke also developed a more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Unlike many of his predecessors, Hawke's authority within the Labor Party was absolute. This enabled him to persuade MPs to support a substantial set of policy changes which past Labor governments had not considered achievable. Individual accounts from ministers indicate that while Hawke was not often the driving force behind individual reforms, outside of broader economic changes, he took on the role of providing political guidance on what was electorally feasible and how best to sell it to the public, tasks at which he proved highly successful. Hawke took on a very public role as prime minister, campaigning frequently even outside of election periods, and for much of his time in office proved to be extraordinarily popular with the Australian electorate; to this day he still holds the highest-ever AC Nielsen approval rating, of 75%. Economic policy. The Hawke government oversaw significant economic reforms, and is often cited by economic historians as a "turning point" from a protectionist, agricultural model to a more globalised and services-oriented economy. According to the journalist Paul Kelly, "the most influential economic decisions of the 1980s were the floating of the Australian dollar and the deregulation of the financial system". Although the Fraser government had played a part in the process of financial deregulation by commissioning the 1981 Campbell Report, opposition from Fraser himself had stalled this process. Shortly after its election in 1983, the Hawke government took the opportunity to implement a comprehensive program of economic reform, in the process "transform(ing) economics and politics in Australia". Hawke and Keating together led the process, launching a "National Economic Summit" one month after their election in 1983 which brought business and industrial leaders together with politicians and trade union leaders; the three-day summit led to the unanimous adoption of a national economic strategy, generating sufficient political capital for widespread reform to follow. Among other reforms, the Hawke government floated the Australian dollar, repealed rules that had prohibited foreign-owned banks from operating in Australia, dismantled the protectionist tariff system, privatised several state-sector industries, ended the subsidisation of loss-making industries, and sold off part of the state-owned Commonwealth Bank. The taxation system was also significantly reformed, with income tax rates reduced and a fringe benefits tax and a capital gains tax introduced; the latter two reforms were strongly opposed by the Liberal Party at the time, but were never reversed when it eventually returned to office in 1996. 
Partially offsetting these imposts upon the business community (the "main loser" from the 1985 Tax Summit, according to Paul Kelly) was the introduction of full dividend imputation, a reform insisted upon by Keating. Funding for schools was also considerably increased as part of this package, while financial assistance was provided to enable students to stay at school longer; the proportion of Australian children completing school rose from 3 in 10 at the beginning of the Hawke government to 7 in 10 by its conclusion in 1991. Considerable progress was also made in directing assistance "to the most disadvantaged recipients over the whole range of welfare benefits." Social and environmental policy. Although criticisms were levelled against the Hawke government that it did not achieve all it said it would do on social policy, it nevertheless enacted a series of reforms which remain in place to the present day. From 1983 to 1989, the Government oversaw the permanent establishment of universal health care in Australia with the creation of Medicare, doubled the number of subsidised childcare places, began the introduction of occupational superannuation, oversaw a significant increase in school retention rates, created subsidised homecare services, oversaw the elimination of poverty traps in the welfare system, increased the real value of the old-age pension, reintroduced the six-monthly indexation of single-person unemployment benefits, and established a wide-ranging programme of paid family support, known as the Family Income Supplement. A number of other new social security benefits were introduced under the Hawke–Keating Government. In 1984, for instance, a remote area allowance was introduced for pensioners and beneficiaries residing in special areas of Tax Zone A, and in 1985 a special addition to family allowances was made payable (as noted by one study) "to certain families with multiple births (three children or more) until the children reach six years of age." The following year, rent assistance was extended to unemployment beneficiaries, together with a young homeless allowance for sickness and unemployment beneficiaries under the age of 18 who were homeless and did not have parental or custodial support. However, the payment of family allowances for student children reaching the age of 18 was discontinued except in the case of certain families on low incomes. During the 1980s, the proportion of total government outlays allocated to families, the sick, single parents, widows, the handicapped, and veterans was significantly higher than under the previous Fraser and Whitlam governments. In 1984, the Hawke government enacted the landmark Sex Discrimination Act 1984, which prohibited discrimination on the grounds of sex in the workplace. In 1989, Hawke oversaw the gradual re-introduction of some tuition fees for university study, setting up the Higher Education Contribution Scheme (HECS). Under the original HECS, a flat fee was charged to all university students, and the Commonwealth paid the balance; a student could defer payment of this HECS amount and repay the debt through the tax system once their income exceeded a threshold level. As part of the reforms, Colleges of Advanced Education entered the university sector by various means, allowing university places to be expanded. 
Further notable policy decisions taken during the Government's time in office included the public health campaign regarding HIV/AIDS and Indigenous land rights reform, with an investigation launched into the idea of a treaty between Aboriginal and Torres Strait Islander peoples and the Government, although this process would be overtaken by events, notably the Mabo court decision. The Hawke government also drew attention for a series of notable environmental decisions, particularly in its second and third terms. In 1983, Hawke personally vetoed the construction of the Franklin Dam in Tasmania, responding to a groundswell of protest around the issue. Hawke also secured the nomination of the Wet Tropics of Queensland as a UNESCO World Heritage Site in 1987, preventing the forests there from being logged. Hawke would later appoint Graham Richardson as Environment Minister, tasking him with winning second-preference support from environmental parties, something which Richardson later claimed was the major factor in the government's narrow re-election at the 1990 election. In the Government's fourth term, Hawke personally led the Australian delegation to secure changes to the Protocol on Environmental Protection to the Antarctic Treaty, ultimately winning a guarantee that drilling for minerals within Antarctica would be totally prohibited until 2048 at the earliest. Hawke later claimed that the Antarctic drilling ban was his "proudest achievement". Industrial relations policy. As a former ACTU President, Hawke was well placed to engage in reform of the industrial relations system in Australia, taking a lead in this policy area as in few others. Working closely with ministerial colleagues and the ACTU Secretary, Bill Kelty, Hawke negotiated with trade unions to establish the Prices and Incomes Accord in 1983, an agreement whereby unions agreed to restrict their demands for wage increases, and in turn the Government guaranteed to both minimise inflation and promote an increased social wage, including by establishing new social programmes such as Medicare. Inflation had been a significant issue for the decade prior to the election of the Hawke government, regularly running into double digits. The process of the Accord, by which the Government and trade unions would arbitrate and agree upon wage increases in many sectors, led to a decrease in both inflation and unemployment through to 1990. Criticism of the Accord came from both the right and the left of politics. Left-wing critics claimed that it kept real wages stagnant, and that the Accord was a policy of class collaboration and corporatism. By contrast, right-wing critics claimed that the Accord reduced the flexibility of the wages system. Supporters of the Accord, however, pointed to the improvements in the social security system that occurred, including the introduction of rental assistance for social security recipients, the creation of labour market schemes such as NewStart, and the introduction of the Family Income Supplement. In 1986, the Hawke government passed a bill to de-register the Builders Labourers Federation federally because the union had not complied with the Accord. Despite a fall in real money wages from 1983 to 1991, the Government argued that the social wage of Australian workers had improved drastically as a result of these reforms and the ensuing decline in inflation. 
The Accord was revisited six further times during the Hawke government, each time in response to new economic developments. The seventh and final revision would ultimately lead to the establishment of the enterprise bargaining system, although this was finalised shortly after Hawke left office in 1991. Foreign policy. Arguably the most significant foreign policy achievement of the Government came in 1989, when Hawke proposed a region-wide forum for Asia-Pacific leaders and economic ministers to discuss issues of common concern. After winning the support of key countries in the region, this led to the creation of the Asia-Pacific Economic Cooperation (APEC). The first APEC meeting duly took place in Canberra in November 1989; the economic ministers of Australia, Brunei, Canada, Indonesia, Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore, Thailand and the United States all attended. APEC would subsequently grow to become one of the pre-eminent high-level international forums in the world, particularly after the later inclusion of China and Russia and the Keating government's establishment of the APEC Leaders' Forum. Elsewhere in Asia, the Hawke government played a significant role in the build-up to the United Nations peace process for Cambodia, culminating in the Transitional Authority; Hawke's Foreign Minister, Gareth Evans, was nominated for the Nobel Peace Prize for his role in the negotiations. Hawke also took a major public stand after the 1989 Tiananmen Square protests and massacre; despite having spent years building closer relations with China, Hawke gave a tearful address on national television describing the massacre in graphic detail, and unilaterally offered asylum to over 42,000 Chinese students who were living in Australia at the time, many of whom had publicly supported the Tiananmen protesters. Hawke did so without even consulting his Cabinet, stating later that he felt he simply had to act. The Hawke government pursued a close relationship with the United States, assisted by Hawke's close friendship with US Secretary of State George Shultz; this led to a degree of controversy when the Government supported US plans to test ballistic missiles off the coast of Tasmania in 1985, as well as seeking to overturn Australia's long-standing ban on uranium exports. Although the US ultimately withdrew its plans to test the missiles, the furore led to a fall in Hawke's approval ratings. Shortly after the 1990 election, Hawke led Australia into its first overseas military campaign since the Vietnam War, forming a close alliance with US President George H. W. Bush to join the coalition in the Gulf War. The Royal Australian Navy contributed several destroyers and frigates to the war effort, which concluded successfully in February 1991 with the expulsion of Iraqi forces from Kuwait. The success of the campaign, and the absence of Australian casualties, led to a brief increase in the popularity of the Government. Through his role at the Commonwealth Heads of Government Meeting, Hawke played a leading role in ensuring that the Commonwealth initiated an international boycott of foreign investment in South Africa, building on work undertaken by his predecessor Malcolm Fraser, and in the process clashed publicly with Prime Minister of the United Kingdom Margaret Thatcher, who initially favoured a more cautious approach. 
The resulting boycott, led by the Commonwealth, was widely credited with helping to bring about the collapse of apartheid, and resulted in a high-profile visit to Australia by Nelson Mandela in October 1990, months after his release from 27 years in prison. During the visit, Mandela publicly thanked the Hawke government for the role it had played in the boycott. Election wins and leadership challenges. Hawke benefited greatly from the disarray into which the Liberal Party fell after the resignation of Fraser following the 1983 election. The Liberals were torn between supporters of the more conservative John Howard and the more liberal Andrew Peacock, with the pair frequently contesting the leadership. Hawke and Keating were also able to use Fraser's concealment of the size of the budget deficit before the 1983 election to great effect, damaging the Liberal Party's economic credibility as a result. However, Hawke's time as prime minister also saw friction develop between himself and the grassroots of the Labor Party, many of whom were unhappy at what they viewed as Hawke's iconoclasm and willingness to cooperate with business interests. Hawke regularly and publicly expressed his willingness to cull Labor's "sacred cows". The Labor Left faction, as well as the prominent Labor backbencher Barry Jones, offered repeated criticisms of a number of government decisions. Hawke was also subject to challenges from some former colleagues in the trade union movement over his "confrontationalist style" in siding with the airline companies in the 1989 Australian pilots' strike. Nevertheless, Hawke comfortably maintained a lead as preferred prime minister in the vast majority of opinion polls carried out throughout his time in office, and recorded the highest popularity rating ever measured by an Australian opinion poll, reaching 75% approval in 1984. After leading Labor to a comfortable victory in the snap 1984 election, called to bring the mandate of the House of Representatives back into line with that of the Senate, Hawke secured an unprecedented third consecutive term for Labor with a comfortable victory at the double dissolution election of 1987. Hawke was subsequently able to lead the nation in the bicentennial celebrations of 1988, culminating in his welcoming Queen Elizabeth II to open the newly constructed Parliament House. The late-1980s recession, and the accompanying high interest rates, saw the Government fall in the opinion polls, with many doubting that Hawke could win a fourth election. Keating, who had long understood that he would eventually succeed Hawke as prime minister, began to plan a leadership change; at the end of 1988, Keating put pressure on Hawke to retire in the new year. Hawke rejected this suggestion but reached a secret agreement with Keating, the so-called "Kirribilli Agreement", under which he would step down in Keating's favour at some point after the 1990 election. Hawke subsequently won that election, leading Labor to a record fourth consecutive electoral victory, albeit by a slim margin, and appointed Keating as deputy prime minister to replace the retiring Lionel Bowen. By the end of 1990, frustrated by the lack of any indication from Hawke as to when he might retire, Keating made a provocative speech to the Federal Parliamentary Press Gallery. Hawke considered the speech disloyal and told Keating he would renege on the Kirribilli Agreement as a result. 
After attempting to force a resolution privately, Keating finally resigned from the Government in June 1991 to challenge Hawke for the leadership. His resignation came soon after Hawke had vetoed in Cabinet a proposal, backed by Keating and other ministers, for mining to take place at Coronation Hill in Kakadu National Park. Hawke won the leadership spill, and in a press conference after the result, Keating declared that he had fired his "one shot" on the leadership. Hawke appointed John Kerin to replace Keating as Treasurer. Despite his victory in the June spill, Hawke quickly came to be regarded by many of his colleagues as a "wounded" leader: he had lost his long-term political partner, his ratings in opinion polls were falling significantly, and after nearly nine years as prime minister there was speculation that it would soon be time for a new leader. Hawke's leadership was irrevocably damaged at the end of 1991: after Liberal Leader John Hewson released "Fightback!", a detailed proposal for sweeping economic change including the introduction of a goods and services tax, Hawke was forced to sack Kerin as Treasurer after Kerin made a public gaffe while attempting to attack the policy. Keating duly challenged for the leadership a second time on 19 December, arguing that he would be better placed to defeat Hewson; this time Keating succeeded, narrowly defeating Hawke by 56 votes to 51. In a speech to the House of Representatives following the vote, Hawke declared that his nine years as prime minister had left Australia a better and wealthier country, and he was given a standing ovation by those present. He subsequently tendered his resignation to the Governor-General and pledged support to his successor. Hawke briefly returned to the backbench before resigning from Parliament on 20 February 1992, sparking a by-election which was won by the independent candidate Phil Cleary from a record field of 22 candidates. Keating went on to lead Labor to a fifth consecutive victory at the 1993 election, but was defeated by the Liberal Party at the 1996 election. Hawke wrote that he had very few regrets over his time in office, although he stated that he wished he had been able to advance the cause of Indigenous land rights further. His bitterness towards Keating over the leadership challenges surfaced in his earlier memoirs, although by the 2000s Hawke stated that he and Keating had buried their differences, and that they regularly dined together and considered each other friends. The publication of the book "Hawke: The Prime Minister" by Hawke's second wife, Blanche d'Alpuget, in 2010 reignited the conflict, with Keating accusing Hawke and d'Alpuget of spreading falsehoods about his role in the Hawke government. Despite this, the two campaigned together for Labor several times, including at the 2019 election, where they released their first joint article in nearly three decades; Craig Emerson, who worked for both men, said they had reconciled in later years after Hawke grew ill. Retirement and later life. After leaving Parliament, Hawke entered the business world, taking on a number of directorships and consultancy positions which enabled him to achieve considerable financial success. He avoided public involvement with the Labor Party during Keating's tenure as prime minister, not wanting to be seen as attempting to overshadow his successor. 
After Keating's defeat and the election of the Howard government at the 1996 election, Hawke returned to public campaigning with Labor and regularly appeared at election launches. Despite his personal affection for Queen Elizabeth II, boasting that he had been her "favourite Prime Minister", Hawke was an enthusiastic republican and joined the campaign for a Yes vote in the 1999 republic referendum. In 2002, Hawke was named to South Australia's Economic Development Board during the Rann government. In the lead-up to the 2007 election, Hawke made a considerable personal effort to support Kevin Rudd, making speeches at a large number of campaign office openings across Australia and appearing in multiple campaign advertisements. As well as campaigning against WorkChoices, Hawke also attacked John Howard's record as Treasurer, stating "it was the judgement of every economist and international financial institution that it was the restructuring reforms undertaken by my government, with the full cooperation of the trade union movement, which created the strength of the Australian economy today". In February 2008, after Rudd's victory, Hawke joined former prime ministers Gough Whitlam, Malcolm Fraser and Paul Keating in Parliament House to witness the long-anticipated apology to the Stolen Generations. In 2009, Hawke helped establish the Centre for Muslim and Non-Muslim Understanding at the University of South Australia. Interfaith dialogue was an important issue for Hawke, who told "The Adelaide Review" that he was "convinced that one of the great potential dangers confronting the world is the lack of understanding in regard to the Muslim world. Fanatics have misrepresented what Islam is. They give a false impression of the essential nature of Islam." In 2016, after taking part in Andrew Denton's Better Off Dead podcast, Hawke added his voice to calls for voluntary euthanasia to be legalised, labelling the lack of political will to address the issue "absurd". He revealed that he had such an arrangement with his wife Blanche should a devastating medical situation occur. He also publicly advocated for nuclear power and the importation of international spent nuclear fuel to Australia for storage and disposal, stating that this could bring considerable economic benefits to Australia. In late December 2018, Hawke revealed that he was in "terrible health". While predicting a Labor win at the upcoming 2019 federal election, Hawke said he "may not witness the party's success". In May 2019, the month of the election, he issued a joint statement with Paul Keating endorsing Labor's economic plan and condemning the Liberal Party for "completely [giving] up the economic reform agenda". They stated that "Shorten's Labor is the only party of government focused on the need to modernise the economy to deal with the major challenge of our time: human induced climate change". It was the first joint press statement released by the two since 1991. In March 2022, Troy Bramston, a journalist for "The Australian" and a political historian, published an unauthorised biography of Hawke titled "Bob Hawke: Demons and Destiny". Hawke had given Bramston full access to his previously unavailable personal papers and granted a series of interviews for the book; Bramston was the last person to interview Hawke before his death. 
The book, drawing on extensive Australian and international archives and interviews with more than 100 people, has been described as "definitive" and was shortlisted for the Australian Political Book of the Year Award. On 16 May 2019, two days before the election, Hawke died at his home in Northbridge at the age of 89, following a short illness. His family held a private cremation on 27 May at Macquarie Park Cemetery and Crematorium, where he was subsequently interred. A state memorial was held at the Sydney Opera House on 14 June; Craig Emerson served as master of ceremonies and Kim Beazley read the eulogy, while Paul Keating, Julia Gillard, Bill Kelty, Ross Garnaut, incumbent Prime Minister Scott Morrison and Opposition Leader Anthony Albanese also spoke. Personal life. Hawke married Hazel Masterson in 1956 at Perth Trinity Church. They had three children: Susan (born 1957), Stephen (born 1959) and Roslyn (born 1961). Their fourth child, Robert Jr, died in early infancy in 1963. Hawke was named Victorian Father of the Year in 1971, an honour which his wife disputed on account of his heavy drinking and womanising. The couple divorced in 1994, after he left her for the writer Blanche d'Alpuget, and the two lived together in Northbridge, a suburb on the North Shore of Sydney. The divorce estranged Hawke from some of his family for a period, although they had reconciled by the 2010s. Hawke was a supporter of the National Rugby League club the Canberra Raiders. Alcoholism and abstinence. Throughout his life before politics, Hawke was a heavy drinker, and he suffered from alcohol poisoning following the death of his and Hazel's infant son in 1963. He publicly announced in 1980 that he would abstain from alcohol in order to seek election to Parliament, a move which garnered significant public attention and support. It is popularly stated that Hawke began to drink again following his retirement from politics, although to a more manageable extent; in his later years, videos of Hawke downing beers at cricket matches frequently went viral. There is evidence that Hawke did drink alcohol while in office, provided by the then vice-president of the United States, George H. W. Bush, who later recalled shared drunken behaviour during Hawke's first official visit to the United States in 1983. Religious views. On the subject of religion, Hawke wrote, while attending the 1952 World Christian Youth Conference in India, that "there were all these poverty stricken kids at the gate of this palatial place where we were feeding our face and I just (was) struck by this enormous sense of irrelevance of religion to the needs of people". He subsequently abandoned his Christian beliefs, and by the time he entered politics he was a self-described agnostic. Hawke told Andrew Denton in 2008 that his father's Christian faith had continued to influence his outlook, saying "My father said if you believe in the fatherhood of God you must necessarily believe in the brotherhood of man, it follows necessarily, and even though I left the church and was not religious, that truth remained with me." Legacy. A biographical television film, "Hawke", premiered on the Ten Network in Australia on 18 July 2010, with Richard Roxburgh playing the title character. Rachael Blake and Felix Williamson portrayed Hazel Hawke and Paul Keating, respectively. Roxburgh reprised his role as Hawke in the 2020 episode "Terra Nullius" of the Netflix series "The Crown". 
The Bob Hawke Gallery in Bordertown, which contains memorabilia from his life, was opened by Hawke in 2002. Hawke House, the house in Bordertown where Hawke spent his early childhood, was purchased by the Australian Government in 2021 and opened as an accommodation and function space in May 2024. A bronze bust of Hawke is located at the town's civic centre. In December 2020, the Western Australian Government announced that it had purchased Hawke's childhood home in West Leederville and would maintain it as a state asset; the property will also be assessed for entry onto the State Register of Heritage Places. The Australian Government pledged $5 million in July 2019 to establish a new annual scholarship, the Bob Hawke John Monash Scholarship, through the General Sir John Monash Foundation. Bob Hawke College, a high school in Subiaco, Western Australia named after Hawke, opened in February 2020. In March 2020, the Australian Electoral Commission announced that it would create a new electoral division in the House of Representatives named in honour of Hawke. The Division of Hawke was first contested at the 2022 federal election and is located in Victoria, near the seat of Wills, which Hawke represented from 1980 to 1992. Honours. Hawke received numerous honours, including orders, foreign honours, awards, fellowships and honorary degrees.
4060
41906662
https://en.wikipedia.org/wiki?curid=4060
Baldr
Baldr (Old Norse also Balder, Baldur) is a god in Germanic mythology. In Norse mythology, he is a son of the god Odin and the goddess Frigg, and has numerous brothers, such as Thor and Váli. In wider Germanic mythology, the god was known in Old English as "Bældæg" and in Old High German as "Balder", all ultimately stemming from the Proto-Germanic theonym "*Balðraz" ('hero' or 'prince'). During the 12th century, Saxo Grammaticus and other Danish Latin chroniclers recorded a euhemerized account of his story. Compiled in Iceland during the 13th century, but based on older Old Norse poetry, the "Poetic Edda" and the "Prose Edda" contain numerous references to the death of Baldr as both a great tragedy to the Æsir and a harbinger of Ragnarök. According to "Gylfaginning", a book of Snorri Sturluson's Prose Edda, Baldr's wife is Nanna and their son is Forseti. Baldr had the greatest ship ever built, "Hringhorni", and there is no place more beautiful than his hall, Breidablik. Name. The Old Norse theonym "Baldr" ('brave, defiant'; also 'lord, prince') and its various Germanic cognates – including Old English "Bældæg" and Old High German "Balder" (or "Palter") – probably stem from Proto-Germanic "*Balðraz" ('Hero, Prince'; cf. Old Norse "mann-baldr" 'great man', Old English "bealdor" 'prince, hero'), itself a derivative of "*balþaz", meaning 'brave' (cf. Old Norse "ballr" 'hard, stubborn', Gothic "balþa*" 'bold, frank', Old English "beald" 'bold, brave, confident', Old Saxon "bald" 'valiant, bold', Old High German "bald" 'brave, courageous'). This etymology was originally proposed by Jacob Grimm (1835), who also speculated on a comparison with the Lithuanian "báltas" ('white', also the name of a light-god) based on the semantic development from 'white' to 'shining' then 'strong'. According to the linguist Vladimir Orel, this could be linguistically tenable. The philologist Rudolf Simek argues instead that the Old English "Bældæg" should be interpreted as meaning 'shining day', from a Proto-Germanic root *"bēl"- (cf. Old English "bæl", Old Norse "bál" 'fire') attached to "dæg" ('day'). Old Norse also shows the usage of the word as an honorific in a few cases, as in "baldur î brynju" (Sæm. 272b) and "herbaldr" (Sæm. 218b), in general epithets of heroes. In continental Saxon and Anglo-Saxon tradition, the son of Woden is called not "Bealdor" but "Baldag" (Saxon) and "Bældæg, Beldeg" (Anglo-Saxon), which shows association with "day", possibly with Day personified as a deity. This, as Grimm points out, would agree with the meaning "shining one, white one, a god" derived from the meaning of Baltic "baltas", further adducing Slavic "Belobog" and German "Berhta". Attestations. Merseburg Incantation. One of the two Merseburg Incantations names "Balder" (in the genitive singular "Balderes"), but also mentions a figure named "Phol", considered to be a byname for Baldr (as in Scandinavian "Falr", "Fjalarr"; (in Saxo) "Balderus" : "Fjallerus"). The incantation tells of "Phol ende Wotan" riding to the woods, where the foot of Baldr's foal is sprained. Sinthgunt (the sister of the sun), Frigg and Odin sing to the foot in order for it to heal. The identification with Baldr is not conclusive, however, and modern scholarship suggests that the god Freyr might be meant. "Poetic Edda". Unlike in the Prose Edda, in the Poetic Edda the tale of Baldr's death is referred to rather than recounted at length. Baldr is mentioned in "Völuspá" and in "Lokasenna", and is the subject of the Eddic poem "Baldr's Dreams". 
Among the visions which the Völva sees and describes in Völuspá is Baldr's death. In stanza 32, the Völva says she saw the fate of Baldr, "the bleeding god". In the next two stanzas, the Völva refers to Baldr's killing, describing the birth of Váli for the slaying of Höðr and the weeping of Frigg. In stanza 62 of Völuspá, looking far into the future, the Völva says that Höðr and Baldr will come back, with the union, according to Bellows, being a symbol of the new age of peace. Baldr is mentioned in two stanzas of Lokasenna, a poem which describes a flyting between the gods and the god Loki. In the first of the two stanzas, Frigg, Baldr's mother, tells Loki that if she had a son like Baldr, Loki would be killed. In the next stanza, Loki responds to Frigg and says that he is the reason Baldr "will never ride home again". The Eddic poem "Baldr's Dreams" opens with the gods holding a council to discuss why Baldr had had bad dreams. Odin then rides to Hel, to a Völva's grave, and awakens her using magic. The Völva asks Odin, whom she does not recognize, who he is, and Odin answers that he is Vegtam ("Wanderer"). Odin asks the Völva for whom the benches are covered in rings and the floor covered in gold. The Völva tells him that in their location mead is brewed for Baldr, and that she spoke unwillingly, so she will speak no more. Odin asks the Völva not to be silent and asks her who will kill Baldr. The Völva replies that Höðr will kill Baldr, and again says that she spoke unwillingly and will speak no more. Odin again asks the Völva not to be silent and asks her who will avenge Baldr's death. The Völva replies that Váli will, when he is one night old; once again, she says that she will speak no more. Odin again asks the Völva not to be silent and says that he seeks to know who the women are who will then weep. The Völva realizes that Vegtam is Odin in disguise; Odin retorts that the Völva is not a Völva, and that she is the mother of three giants. The Völva tells Odin to ride back home proud, because she will speak to no more men until Loki escapes his bonds. "Prose Edda". In "Gylfaginning", Baldr is described as the best of the gods, so fair and bright that light shines from him. Apart from this description, Baldr is known primarily for the story of his death, which is seen as the first in a chain of events that will ultimately lead to the destruction of the gods at Ragnarök. Baldr had a dream of his own death, and his mother, Frigg, had the same dream. Since dreams were usually prophetic, this depressed him, and so Frigg made every object on earth vow never to hurt Baldr. All objects made this vow save for the mistletoe, a detail which has traditionally been explained with the idea that the mistletoe was too unimportant and nonthreatening to bother asking it to make the vow, but which Merrill Kaplan has instead argued echoes the fact that young people were not eligible to swear legal oaths, which could make them a threat later in life. When Loki, the mischief-maker, heard of this, he made a magical spear from this plant (in some later versions, an arrow). He hurried to the place where the gods were indulging in their new pastime of hurling objects at Baldr, which would bounce off without harming him. Loki gave the spear to Baldr's brother, the blind god Höðr, who then inadvertently killed his brother with it (other versions suggest that Loki guided the arrow himself). For this act, Odin and the "ásynja" Rindr conceived Váli, who grew to adulthood within a day and slew Höðr. 
Baldr was ceremonially burnt upon his ship Hringhorni, the largest of all ships; on the pyre he was given the magical ring Draupnir. At first the gods were not able to push the ship out to sea, and so they sent for Hyrrokin, a giantess, who came riding on a wolf and gave the ship such a push that fire flashed from the rollers and all the earth shook. As he was carried to the ship, Odin whispered something in his ear. The import of this speech was held to be unknowable, and the question of what was said was thus used as an unanswerable riddle by Odin in other sources, namely against the giant Vafthrudnir in the Eddic poem "Vafthrudnismal" and in the riddles of Gestumblindi in "Hervarar saga". Upon seeing the corpse being carried to the ship, Nanna, his wife, died of grief. She was then placed on the funeral fire (perhaps a toned-down instance of sati, as also attested in the Arab traveller Ibn Fadlan's account of a funeral among the Rus'), after which it was set alight. Baldr's horse, with all its trappings, was also laid on the pyre. As the pyre was set on fire, Thor blessed it with his hammer Mjǫllnir; as he did so, a small dwarf named Litr came running before his feet, and Thor kicked him into the pyre. Upon Frigg's entreaties, delivered through the messenger Hermod, Hel promised to release Baldr from the underworld if all objects alive and dead would weep for him. All did, except a giantess, Þökk (often presumed to be the god Loki in disguise), who refused to mourn the slain god. Thus Baldr had to remain in the underworld, not to emerge until after Ragnarök, when he and his brother Höðr would be reconciled and rule the new earth together with Thor's sons. Besides these descriptions of Baldr, the Prose Edda also explicitly links him to the Anglo-Saxon "Beldeg" in its prologue. "Gesta Danorum". Writing at the end of the 12th century, the Danish historian Saxo Grammaticus tells the story of Baldr (recorded as "Balderus") in a form that professes to be historical. According to him, Balderus and Høtherus were rival suitors for the hand of Nanna, daughter of Gewar, King of Norway. Balderus was a demigod, and common steel could not wound his sacred body. The two rivals encountered each other in a terrific battle. Though Odin and Thor and the other gods fought for Balderus, he was defeated and fled away, and Høtherus married the princess. Nevertheless, Balderus took heart of grace and again met Høtherus in a stricken field. But he fared even worse than before: Høtherus dealt him a deadly wound with a magic sword which he had received from Mimir, the satyr of the woods; after lingering three days in pain, Balderus died of his injury and was buried with royal honours in a barrow. Utrecht Inscription. A Latin votive inscription from Utrecht, from the 3rd or 4th century C.E., has been theorized as containing the dative form "Baldruo", pointing to a Latin nominative singular *"Baldruus", which some have identified with the Norse/Germanic god, although both the reading and this interpretation have been questioned. "Anglo-Saxon Chronicle". In the Anglo-Saxon Chronicle, Baldr is named as the ancestor of the monarchies of Kent, Bernicia, Deira, and Wessex through his supposed son Brond. Toponyms. There are a few old place names in Scandinavia that contain the name "Baldr". The most certain and notable one is the (former) parish name Balleshol in Hedmark county, Norway: "a Balldrshole" 1356 (where the last element is "hóll", masculine, 'mound; small hill'). 
Others may be (in Norse forms) "Baldrsberg" in Vestfold county, "Baldrsheimr" in Hordaland county, "Baldrsnes" in Sør-Trøndelag county, and (very uncertainly) the Balsfjorden fjord and Balsfjord Municipality in Troms county. In Copenhagen, there is also a Baldersgade, or "Balder's Street". A street in downtown Reykjavík is called Baldursgata (Baldur's Street). In Sweden there is a Baldersgatan (Balder's Street) in Stockholm. There are also Baldersnäs (Balder's isthmus), Baldersvik (Balder's bay), Balders udde (Balder's headland) and Baldersberg (Balder's mountain) at various places.
4061
35936988
https://en.wikipedia.org/wiki?curid=4061
Breidablik
Breiðablik (sometimes anglicised as Breithablik or Breidablik) is the home of Baldr in Nordic mythology. Meaning. The word has been variously translated as 'broad sheen', 'broad gleam', 'broad-gleaming' or 'the far-shining one'. Attestations. Grímnismál. The Eddic poem Grímnismál describes Breiðablik as the fair home of Baldr. Gylfaginning. In Snorri Sturluson's Gylfaginning, Breiðablik is described in a list of places in heaven, identified by some scholars as Asgard. Later in the work, when Snorri describes Baldr, he gives another description, citing "Grímnismál", though he does not name the poem. Interpretation and discussion. The name Breiðablik has been noted to link with Baldr's attributes of light and beauty. Similarities have been drawn between the description of Breiðablik in Grímnismál and that of Heorot in Beowulf, both of which are said to be free of 'baleful runes'. In Beowulf, the absence of such runes refers to the absence of crimes being committed, and both halls have therefore been proposed to be sanctuaries.
4062
35936988
https://en.wikipedia.org/wiki?curid=4062
Bilskirnir
Bilskirnir (Old Norse "lightning-crack") is the hall of the god Thor in Norse mythology. Here he lives with his wife Sif and their children. According to "Grímnismál", the hall is the greatest of buildings and contains 540 rooms. It is located in Asgard, as are all the dwellings of the gods, in the kingdom of Þrúðheimr (or Þrúðvangar, according to "Gylfaginning" and "Ynglinga saga").
4063
3519991
https://en.wikipedia.org/wiki?curid=4063
Brísingamen
In Norse mythology, Brísingamen (or Brísinga men) is the torc or necklace of the goddess Freyja, of which little else is known for certain. Etymology. The name is an Old Norse compound "brísinga-men" whose second element is "men" "(ornamental) neck-ring (of precious metal), torc". The etymology of the first element is uncertain. It has been derived from Old Norse "brísingr", a poetic term for "fire" or "amber" mentioned in the anonymous versified word-lists ("þulur") appended to many manuscripts of the Prose Edda, making Brísingamen "gleaming torc", "sunny torc", or the like. However, "Brísingr" can also be an ethnonym, in which case "Brísinga men" is "torc of the Brísings"; the Old English parallel in "Beowulf" supports this derivation, though who the Brísings (Old Norse "Brísingar") may have been remains unknown. Attestations. "Beowulf". Brísingamen is referred to briefly in the Anglo-Saxon epic "Beowulf" as "Brosinga mene" (trans. Howell Chickering, 1977). The "Beowulf" poet is clearly referring to the legends about Theoderic the Great. The "Þiðrekssaga" tells that the warrior Heime ("Háma" in Old English) takes sides against Ermanaric ("Eormanric"), king of the Goths, and has to flee his kingdom after robbing him; later in life, Hama enters a monastery and gives them all his stolen treasure. However, this saga makes no mention of the great necklace. "Poetic Edda". In the poem "Þrymskviða" of the "Poetic Edda", Þrymr, the king of the jǫtnar, steals Thor's hammer, Mjölnir. Freyja lends Loki her falcon cloak to search for it; but upon returning, Loki tells Freyja that Þrymr has hidden the hammer and demanded to marry her in return. Freyja is so wrathful that all the Æsir's halls beneath her are shaken and the necklace Brísingamen breaks off from her neck. Later, Thor borrows Brísingamen when he dresses up as Freyja to go to the wedding at Jǫtunheimr. "Prose Edda". "Húsdrápa", a skaldic poem partially preserved in the "Prose Edda", relates the story of the theft of Brísingamen by Loki. One day when Freyja wakes up and finds Brísingamen missing, she enlists the help of Heimdallr to search for it. Eventually they find the thief, who turns out to be Loki and who has transformed himself into a seal. Heimdallr turns into a seal as well and fights Loki (as related in Jesse Byock's 2005 translation). After a lengthy battle at Singasteinn, Heimdallr wins and returns Brísingamen to Freyja. Snorri Sturluson quoted this old poem in "Skáldskaparmál", saying that because of this legend Heimdallr is called "Seeker of Freyja's Necklace" ("Skáldskaparmál", section 8) and Loki is called "Thief of Brísingamen" ("Skáldskaparmál", section 16). A similar story appears in the later "Sörla þáttr", where Heimdallr does not appear. "Sörla þáttr". Sörla þáttr is a short story in the later and extended version of the "Saga of Olaf Tryggvason" in the manuscript of the "Flateyjarbók", which was written and compiled by two Christian priests, Jon Thordson and Magnus Thorhalson, in the late 14th century. At the end of the story, the arrival of Christianity dissolves the old curse that traditionally was to endure until Ragnarök. The battle of Högni and Heðinn is recorded in several medieval sources, including the skaldic poem "Ragnarsdrápa", "Skáldskaparmál" (section 49), and "Gesta Danorum": king Högni's daughter, Hildr, is kidnapped by king Heðinn. 
When Högni comes to fight Heðinn on an island, Hildr comes to offer her father a necklace on behalf of Heðinn for peace; but the two kings still battle, and Hildr resurrects the fallen to make them fight until Ragnarök. None of these earlier sources mentions Freyja or King Olaf Tryggvason, the historical figure who Christianized Norway and Iceland in the 10th century. Archaeological record. A Völva was buried with considerable splendour in Hagebyhöga in Östergötland, Sweden. In addition to being buried with her wand, she had received great riches, which included horses, a wagon and an Arabian bronze pitcher. There was also a silver pendant, which represents a woman with a broad necklace around her neck. This kind of necklace was only worn by the most prominent women during the Iron Age, and some have interpreted it as Freyja's necklace Brísingamen. The pendant may represent Freyja herself. Modern influence. Alan Garner wrote a children's fantasy novel called "The Weirdstone of Brisingamen", published in 1960, about an enchanted teardrop bracelet. Diana Paxson's novel "Brisingamen" features Freyja and her necklace. Black Phoenix Alchemy Lab has a perfumed oil scent named Brisingamen. Freyja's necklace Brisingamen features prominently in Betsy Tobin's novel "Iceland", where the necklace is seen to have significant protective powers. The Brisingamen features as a major item in Joel Rosenberg's Keepers of the Hidden Ways series of books. In it, there are seven jewels that were created for the necklace by the Dwarfs and given to the Norse goddess. She in turn eventually split them up into the seven separate jewels and hid them throughout the realm, as together they give their holder the power to shape the universe. The plot concerns the discovery of one of them and the decision of what to do with its power, while avoiding Loki and other Norse characters. In Christopher Paolini's "The Inheritance Cycle", the word "brisingr" means fire; this is probably a distillation of the word "brisinga". Ursula Le Guin's short story "Semley's Necklace", the first part of her novel "Rocannon's World", is a retelling of the Brisingamen story on an alien planet. Brisingamen is represented as a card in the "Yu-Gi-Oh!" Trading Card Game, "Nordic Relic Brisingamen". Brisingamen was part of the lore of the MMORPG "Ragnarok Online", where it is ranked as a "God item"; the game draws heavily on Norse mythology. In the "Firefly Online" game, one of the planets of the Himinbjörg system (which features planets named after figures from Germanic mythology) is named Brisingamen; it is third from the star, and has moons named Freya, Beowulf, and Alberich. The Brisingamen is an item that can be found and equipped in the video game "". In the French comics "Freaks' Squeele", the character of Valkyrie accesses her costume-change ability by touching a decorative torc necklace affixed to her forehead, named Brizingamen.
4064
7903804
https://en.wikipedia.org/wiki?curid=4064
Borsuk–Ulam theorem
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an "n"-sphere into Euclidean "n"-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center. Formally: if $f: S^n \to \mathbb{R}^n$ is continuous then there exists an $x \in S^n$ such that $f(x) = f(-x)$. The case $n = 1$ can be illustrated by saying that there always exists a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case. The case $n = 2$ is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that $S^n$ is the "n"-sphere and $B^n$ is the "n"-ball: $S^n = \{x \in \mathbb{R}^{n+1} : \|x\| = 1\}$ and $B^n = \{x \in \mathbb{R}^n : \|x\| \le 1\}$. History. According to Matoušek (2003), the first historical mention of the statement of the Borsuk–Ulam theorem appears in Lyusternik & Shnirel'man (1930). The first proof was given by Borsuk (1933), where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors, as collected by Steinlein (1985). Equivalent statements. The following statements are equivalent to the Borsuk–Ulam theorem. With odd functions. A function $g$ is called "odd" (aka "antipodal" or "antipode-preserving") if for every $x$, $g(-x) = -g(x)$. The Borsuk–Ulam theorem is equivalent to each of the following statements: (1) Each continuous odd function $g: S^n \to \mathbb{R}^n$ has a zero. (2) There is no continuous odd function $g: S^n \to S^{n-1}$. Here is a proof that the Borsuk–Ulam theorem is equivalent to (1): ($\Rightarrow$) If the theorem is correct, then it is specifically correct for odd functions, and for an odd function $g$, $g(-x) = g(x)$ iff $g(x) = 0$. Hence every odd continuous function has a zero. ($\Leftarrow$) For every continuous function $f: S^n \to \mathbb{R}^n$, the following function is continuous and odd: $g(x) = f(x) - f(-x)$. If every odd continuous function has a zero, then $g$ has a zero, and therefore $f(x) = f(-x)$ at that zero. To prove that (1) and (2) are equivalent, we use the following continuous odd maps: the inclusion $S^{n-1} \hookrightarrow \mathbb{R}^n \setminus \{0\}$ and the normalization map $x \mapsto x/\|x\|$ from $\mathbb{R}^n \setminus \{0\}$ to $S^{n-1}$. The proof now writes itself. ($1 \Rightarrow 2$) We prove the contrapositive. If there exists a continuous odd function $g: S^n \to S^{n-1}$, then its composition with the inclusion is a continuous odd function $S^n \to \mathbb{R}^n$ with no zeros. ($2 \Rightarrow 1$) Again we prove the contrapositive. If there exists a continuous odd function $g: S^n \to \mathbb{R}^n$ with no zeros, then $x \mapsto g(x)/\|g(x)\|$ is a continuous odd function $S^n \to S^{n-1}$. Proofs. 1-dimensional case. The 1-dimensional case can easily be proved using the intermediate value theorem (IVT). Let $g$ be the odd real-valued continuous function on a circle defined by $g(x) = f(x) - f(-x)$. Pick an arbitrary $x$. If $g(x) = 0$ then we are done. Otherwise, without loss of generality, $g(x) > 0$. But $g(-x) < 0$. Hence, by the IVT, there is a point $y$ at which $g(y) = 0$.
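The one-dimensional argument above translates directly into a numerical procedure. The following Python sketch is a minimal illustration, not part of any standard treatment: the temperature function "f" is an arbitrary stand-in assumed here for a continuous function on the circle, and the script bisects the odd auxiliary function $g(\theta) = f(\theta) - f(\theta + \pi)$, whose sign change on $[0, \pi]$ is guaranteed because $g(\pi) = -g(0)$.

import math

def f(theta):
    # Any continuous function on the circle works; this one is an arbitrary example.
    return math.sin(theta) + 0.5 * math.cos(2 * theta)

def g(theta):
    # Odd auxiliary function from the proof: value minus value at the antipode.
    return f(theta) - f(theta + math.pi)

lo, hi = 0.0, math.pi
if g(lo) == 0.0:
    lo = hi = 0.0  # lucky hit: theta = 0 already works
for _ in range(60):  # bisection: each step halves the interval containing a zero
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
theta = 0.5 * (lo + hi)
print(f"f({theta:.6f}) = {f(theta):.6f}; value at antipode = {f(theta + math.pi):.6f}")

Running it prints two numerically equal values at antipodal points, as the theorem guarantees for any continuous f.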
General case. Algebraic topological proof. Assume that $h: S^n \to S^{n-1}$ is an odd continuous function with $n > 2$ (the case $n = 1$ is treated above, the case $n = 2$ can be handled using basic covering theory). By passing to orbits under the antipodal action, we then get an induced continuous function $h': \mathbb{RP}^n \to \mathbb{RP}^{n-1}$ between real projective spaces, which induces an isomorphism on fundamental groups. By the Hurewicz theorem, the induced ring homomorphism on cohomology with $\mathbb{F}_2$ coefficients [where $\mathbb{F}_2$ denotes the field with two elements], $\mathbb{F}_2[b]/(b^n) = H^*(\mathbb{RP}^{n-1}; \mathbb{F}_2) \to H^*(\mathbb{RP}^n; \mathbb{F}_2) = \mathbb{F}_2[a]/(a^{n+1}),$ sends $b$ to $a$. But then we get that $b^n = 0$ is sent to $a^n \neq 0$, a contradiction. One can also show the stronger statement that any odd map $S^{n-1} \to S^{n-1}$ has odd degree and then deduce the theorem from this result. Combinatorial proof. The Borsuk–Ulam theorem can be proved from Tucker's lemma. Let $g: S^n \to \mathbb{R}^n$ be a continuous odd function. Because $g$ is continuous on a compact domain, it is uniformly continuous. Therefore, for every $\epsilon > 0$, there is a $\delta > 0$ such that, for every two points of $S^n$ which are within $\delta$ of each other, their images under $g$ are within $\epsilon$ of each other. Define an antipodally symmetric triangulation of $S^n$ with edges of length at most $\delta$. Label each vertex $v$ of the triangulation with a label $l(v) \in \{\pm 1, \pm 2, \ldots, \pm n\}$ in the following way: the magnitude of the label is the index of the coordinate of $g(v)$ with the largest absolute value, and its sign is the sign of that coordinate. Because $g$ is odd, the labeling is also odd: $l(-v) = -l(v)$. Hence, by Tucker's lemma, there are two adjacent vertices $u_1, u_2$ with opposite labels. Assume w.l.o.g. that the labels are $l(u_1) = 1, l(u_2) = -1$. By the definition of $l$, this means that in both $g(u_1)$ and $g(u_2)$, coordinate #1 is the largest coordinate: in $g(u_1)$ this coordinate is positive while in $g(u_2)$ it is negative. By the construction of the triangulation, the distance between $g(u_1)$ and $g(u_2)$ is at most $\epsilon$, so in particular $|g(u_1)_1| + |g(u_2)_1| = |g(u_1)_1 - g(u_2)_1| \le \epsilon$ (since $g(u_1)_1$ and $g(u_2)_1$ have opposite signs) and so $|g(u_1)_1| \le \epsilon$. But since the largest coordinate of $g(u_1)$ is coordinate #1, this means that $|g(u_1)_k| \le \epsilon$ for each coordinate $k$. So $\|g(u_1)\| \le c \epsilon$, where $c$ is some constant depending on $n$ and the norm $\|\cdot\|$ which you have chosen. The above is true for every $\epsilon > 0$; since $S^n$ is compact there must hence be a point $u$ at which $g(u) = 0$. Equivalent results. Above we showed how to prove the Borsuk–Ulam theorem from Tucker's lemma. The converse is also true: it is possible to prove Tucker's lemma from the Borsuk–Ulam theorem. Therefore, these two theorems are equivalent.
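The labeling rule in the combinatorial proof is easy to state in code. The sketch below is an illustrative helper (the function name is invented for this example, and the vertex image is assumed nonzero with a unique largest coordinate); it maps a vertex's image $g(v)$ in $\mathbb{R}^n$ to a signed label in $\{\pm 1, \ldots, \pm n\}$ and checks the oddness property $l(-v) = -l(v)$ numerically.

def tucker_label(gv):
    # gv: the image g(v) of a vertex, as a list of n floats (assumed nonzero).
    # The label's magnitude is the 1-based index of the largest-magnitude
    # coordinate; its sign is that coordinate's sign -- the rule of the proof.
    k = max(range(len(gv)), key=lambda i: abs(gv[i]))
    return (k + 1) if gv[k] > 0 else -(k + 1)

# Oddness check: since g is odd, g(-v) = -g(v), so the label flips sign.
gv = [0.2, -0.7, 0.1]
assert tucker_label([-x for x in gv]) == -tucker_label(gv)
print(tucker_label(gv))  # -2: coordinate #2 is largest in magnitude and negative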
4067
41906662
https://en.wikipedia.org/wiki?curid=4067
Bragi
Bragi (Old Norse) is the skaldic god of poetry in Norse mythology. Etymology. The theonym Bragi probably stems from the masculine noun "bragr", which can be translated in Old Norse as 'poetry' (cf. Icelandic "bragur" 'poem, melody, wise') or as 'the first, noblest' (cf. poetic Old Norse "bragnar" 'chiefs, men', "bragningr" 'king'). It is unclear whether the theonym semantically derives from the first meaning or the second. A connection has been also suggested with the Old Norse "bragarfull", the cup drunk in solemn occasions with the taking of vows. The word is usually taken to semantically derive from the second meaning of "bragr" ('first one, noblest'). A relation with the Old English term "brego" ('lord, prince') remains uncertain. "Bragi" regularly appears as a personal name in Old Norse and Old Swedish sources, which according to linguist Jan de Vries might indicate the secondary character of the god's name. Attestations. Snorri Sturluson writes in the "Gylfaginning" after describing Odin, Thor, and Baldr: In "Skáldskaparmál" Snorri writes: That Bragi is Odin's son is clearly mentioned only here and in some versions of a list of the sons of Odin (see Sons of Odin). But "wish-son" in stanza 16 of the "Lokasenna" could mean "Odin's son" and is translated by Hollander as "Odin's kin". Bragi's mother is possibly Frigg. In that poem Bragi at first forbids Loki to enter the hall but is overruled by Odin. Loki then gives a greeting to all gods and goddesses who are in the hall save to Bragi. Bragi generously offers his sword, horse, and an arm ring as peace gift but Loki only responds by accusing Bragi of cowardice, of being the most afraid to fight of any of the Æsir and Elves within the hall. Bragi responds that if they were outside the hall, he would have Loki's head, but Loki only repeats the accusation. When Bragi's wife Iðunn attempts to calm Bragi, Loki accuses her of embracing her brother's slayer, a reference to matters that have not survived. It may be that Bragi had slain Iðunn's brother. A passage in the "Poetic Edda" poem "Sigrdrífumál" describes runes being graven on the sun, on the ear of one of the sun-horses and on the hoofs of the other, on Sleipnir's teeth, on bear's paw, on eagle's beak, on wolf's claw, and on several other things including on Bragi's tongue. Then the runes are shaved off and the shavings are mixed with mead and sent abroad so that Æsir have some, Elves have some, Vanir have some, and Men have some, these being speech runes and birth runes, ale runes, and magic runes. The meaning of this is obscure. The first part of Snorri Sturluson's "Skáldskaparmál" is a dialogue between Ægir and Bragi about the nature of poetry, particularly skaldic poetry. Bragi tells the origin of the mead of poetry from the blood of Kvasir and how Odin obtained this mead. He then goes on to discuss various poetic metaphors known as "kennings". Snorri Sturluson clearly distinguishes the god Bragi from the mortal skald Bragi Boddason, whom he often mentions separately. The appearance of Bragi in the "Lokasenna" indicates that if these two Bragis were originally the same, they have become separated for that author also, or that chronology has become very muddled and Bragi Boddason has been relocated to mythological time. Compare the appearance of the Welsh Taliesin in the second branch of the Mabinogi. Legendary chronology sometimes does become muddled. 
Whether Bragi the god originally arose as a deified version of Bragi Boddason was much debated in the 19th century, especially by the scholars Eugen Mogk and Sophus Bugge. The debate remains undecided. In the poem "Eiríksmál" Odin, in Valhalla, hears the coming of the dead Norwegian king Eric Bloodaxe and his host, and bids the heroes Sigmund and Sinfjötli rise to greet him. Bragi is then mentioned, questioning how Odin knows that it is Eric and why Odin has let such a king die. In the poem "Hákonarmál", Hákon the Good is taken to Valhalla by the valkyrie Göndul and Odin sends Hermóðr and Bragi to greet him. In these poems Bragi could be either a god or a dead hero in Valhalla. Attempting to decide is further confused because "Hermóðr" also seems to be sometimes the name of a god and sometimes the name of a hero. That Bragi was also the first to speak to Loki in the "Lokasenna" as Loki attempted to enter the hall might be a parallel. It might have been useful and customary that a man of great eloquence and versed in poetry should greet those entering a hall. He is also depicted in tenth-century court poetry of helping to prepare Valhalla for new arrivals and welcoming the kings who have been slain in battle to the hall of Odin. Skalds named Bragi. Bragi Boddason. In the "Prose Edda" Snorri Sturluson quotes many stanzas attributed to Bragi Boddason the old ("Bragi Boddason inn gamli"), a Norwegian court poet who served several Swedish kings, Ragnar Lodbrok, Östen Beli and Björn at Hauge who reigned in the first half of the 9th century. This Bragi was reckoned as the first skaldic poet, and was certainly the earliest skaldic poet then remembered by name whose verse survived in memory. Snorri especially quotes passages from Bragi's "Ragnarsdrápa", a poem supposedly composed in honor of the famous legendary Viking Ragnar Lodbrok ('Hairy-breeches') describing the images on a decorated shield which Ragnar had given to Bragi. The images included Thor's fishing for Jörmungandr, Gefjun's ploughing of Zealand from the soil of Sweden, the attack of Hamdir and Sorli against King Jörmunrekk, and the never-ending battle between Hedin and Högni. Bragi son of Hálfdan the Old. Bragi son of Hálfdan the Old is mentioned only in the "Skjáldskaparmál". This Bragi is the sixth of the second of two groups of nine sons fathered by King Hálfdan the Old on Alvig the Wise, daughter of King Eymund of Hólmgard. This second group of sons are all eponymous ancestors of legendary families of the north. Snorri says: Bragi, from whom the Bragnings are sprung (that is the race of Hálfdan the Generous). Of the Bragnings as a race and of Hálfdan the Generous nothing else is known. However, "Bragning" is often, like some others of these dynastic names, used in poetry as a general word for 'king' or 'ruler'. Bragi Högnason. In the eddic poem "Helgakviða Hundingsbana II", Bragi Högnason, his brother Dag, and his sister Sigrún were children of Högne, the king of East Götaland. The poem relates how Sigmund's son Helgi Hundingsbane agreed to take Sigrún daughter of Högni as his wife against her unwilling betrothal to Hodbrodd son of Granmar the king of Södermanland. In the subsequent battle of Frekastein (probably one of the 300 hill forts of Södermanland, as "stein" meant "hill fort") against Högni and Granmar, all the chieftains on Granmar's side are slain, including Bragi, except for Bragi's brother Dag. In popular culture. 
In the 2002 Ensemble Studios game "Age of Mythology", Bragi is one of nine minor gods Norse players can worship.
4068
37569883
https://en.wikipedia.org/wiki?curid=4068
Blaise Pascal
Blaise Pascal (19June 162319August 1662) was a French mathematician, physicist, inventor, philosopher, and Catholic writer. Pascal was a child prodigy who was educated by his father, a tax collector in Rouen. His earliest mathematical work was on projective geometry; he wrote a significant treatise on the subject of conic sections at the age of 16. He later corresponded with Pierre de Fermat on probability theory, strongly influencing the development of modern economics and social science. In 1642, he started some pioneering work on calculating machines (called Pascal's calculators and later Pascalines), establishing him as one of the first two inventors of the mechanical calculator. Like his contemporary René Descartes, Pascal was also a pioneer in the natural and applied sciences. Pascal wrote in defense of the scientific method and produced several controversial results. He made important contributions to the study of fluids, and clarified the concepts of pressure and vacuum by generalising the work of Evangelista Torricelli. The SI unit for pressure is named for Pascal. Following Torricelli and Galileo Galilei, in 1647 he rebutted the likes of Aristotle and Descartes who insisted that nature abhors a vacuum. He is also credited as the inventor of modern public transportation, having established the carrosses à cinq sols, the first modern public transport service, shortly before his death in 1662. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing influential works on philosophy and theology. His two most famous works date from this period: the and the "Pensées", the former set in the conflict between Jansenists and Jesuits. The latter contains Pascal's wager, known in the original as the "Discourse on the Machine", a fideistic probabilistic argument for why one should believe in God. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659, he wrote on the cycloid and its use in calculating the volume of solids. Following several years of illness, Pascal died in Paris at the age of 39. Early life and education. Pascal was born in Clermont-Ferrand, which is in France's Auvergne region, by the Massif Central. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, also an amateur mathematician, was a local judge and member of the "Noblesse de Robe". Pascal had two sisters, the younger Jacqueline and the elder Gilberte. Move to Paris. In 1631, five years after the death of his wife, Étienne Pascal moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became a key member of the family. Étienne, who never remarried, decided that he alone would educate his children. The young Pascal showed an extraordinary intellectual ability, with an amazing aptitude for mathematics and science. Etienne had tried to keep his son from learning mathematics; but by the age of 12, Pascal had rediscovered, on his own, using charcoal on a tile floor, Euclid’s first thirty-two geometric propositions, and was thus given a copy of Euclid's "Elements". "Essay on Conics". Particularly of interest to Pascal was a work of Desargues on conic sections. 
Following Desargues' thinking, the 16-year-old Pascal produced, as a means of proof, a short treatise on what was called the "Mystic Hexagram", "Essai pour les coniques" ("Essay on Conics") and sent it — his first serious work of mathematics — to Père Mersenne in Paris; it is known still today as Pascal's theorem. It states that if a hexagon is inscribed in a circle (or conic) then the three intersection points of opposite sides lie on a line (called the Pascal line). Pascal's work was so precocious that René Descartes was convinced that Pascal's father had written it. When assured by Mersenne that it was, indeed, the product of the son and not the father, Descartes dismissed it with a sniff: "I do not find it strange that he has offered demonstrations about conics more appropriate than those of the ancients," adding, "but other matters related to this subject can be proposed that would scarcely occur to a 16-year-old child."
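Since the theorem is a statement about incidence, it can be checked numerically. The Python sketch below is only an illustration (all helper names are invented for this example): it places six points on a circle, forms the three pairs of opposite sides of the hexagon using homogeneous line coordinates, and confirms that their intersection points are collinear up to floating-point error.

import math

def line_through(p, q):
    # Line through two points, as homogeneous coefficients (a, b, c) of ax + by + c = 0.
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(l1, l2):
    # Intersection point of two lines (cross product of homogeneous coordinates).
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    x, y, w = b1 * c2 - b2 * c1, c1 * a2 - c2 * a1, a1 * b2 - a2 * b1
    return (x / w, y / w)

hexagon = [(math.cos(t), math.sin(t)) for t in (0.3, 1.1, 1.9, 3.0, 4.2, 5.5)]
sides = [line_through(hexagon[i], hexagon[(i + 1) % 6]) for i in range(6)]
pts = [meet(sides[i], sides[i + 3]) for i in range(3)]  # opposite sides meet here
(ax, ay), (bx, by), (cx, cy) = pts
defect = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)  # zero iff collinear
print(f"collinearity defect: {defect:.2e}")

In exact arithmetic the defect is exactly zero; floating point leaves only a residue on the order of 1e-15, so the three intersection points indeed lie on the Pascal line.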
Leaving Paris. In France at that time offices and positions could be—and were—bought and sold. In 1631, Étienne sold his position as second president of the "Cour des Aides" for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to, and enjoy, Paris, but in 1638 Cardinal Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. Like so many others, Étienne was eventually forced to flee Paris because of his opposition to the fiscal policies of Richelieu, leaving his three children in the care of his neighbour Madame Sainctot, a great beauty with an infamous past who kept one of the most glittering and intellectual salons in all France. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned. In time, Étienne was back in good graces with the Cardinal and in 1639 had been appointed the king's commissioner of taxes in the city of Rouen—a city whose tax records, thanks to uprisings, were in utter chaos. Pascaline. In 1642, in an effort to ease his father's endless, exhausting calculations, and recalculations, of taxes owed and paid (into which work the young Pascal had been recruited), Pascal, not yet 19, constructed a mechanical calculator capable of addition and subtraction, called "Pascal's calculator" or the "Pascaline". Of the eight Pascalines known to have survived, four are held by the Musée des Arts et Métiers in Paris and one more by the Zwinger museum in Dresden, Germany. Although these machines are pioneering forerunners to a further 400 years of development of mechanical methods of calculation, and in a sense to the later field of computer engineering, the calculator failed to be a great commercial success. Partly because it was still quite cumbersome to use in practice, but probably primarily because it was extraordinarily expensive, the Pascaline became little more than a toy, and a status symbol, for the very rich both in France and elsewhere in Europe. Pascal continued to make improvements to his design through the next decade; he refers to some 50 machines that were built to it, and he completed 20 finished machines over the following 10 years. Mathematics. Probability. In 1654, prompted by his friend the Chevalier de Méré, Pascal corresponded with Pierre de Fermat on the subject of gambling problems, and from that collaboration was born the mathematical theory of probability. The specific problem was that of two players who want to finish a game early and, given the current circumstances of the game, want to divide the stakes fairly, based on the chance each has of winning the game from that point. From this discussion, the notion of expected value was introduced. John Ross writes, "Probability theory and the discoveries following it changed the way we regard uncertainty, risk, decision-making, and an individual's and society's ability to influence the course of future events." Pascal, in the "Pensées", used a probabilistic argument, Pascal's wager, to justify belief in God and a virtuous life. However, Pascal and Fermat, though doing important early work in probability theory, did not develop the field very far. Christiaan Huygens, learning of the subject from the correspondence of Pascal and Fermat, wrote the first book on the subject. Later figures who continued the development of the theory include Abraham de Moivre and Pierre-Simon Laplace. The work done by Fermat and Pascal into the calculus of probabilities laid important groundwork for Leibniz's formulation of the calculus. "Treatise on the Arithmetical Triangle". Pascal's "Traité du triangle arithmétique", written in 1654 but published posthumously in 1665, described a convenient tabular presentation for binomial coefficients which he called the arithmetical triangle, but is now called Pascal's triangle. He defined the numbers in the triangle by recursion: call the number in the $(m+1)$th row and $(n+1)$th column $t_{mn}$; then $t_{mn} = t_{m-1,n} + t_{m,n-1}$ for $m = 0, 1, 2, \ldots$ and $n = 0, 1, 2, \ldots$ The boundary conditions are $t_{m,-1} = 0$ and $t_{-1,n} = 0$ for $m = 1, 2, 3, \ldots$ and $n = 1, 2, 3, \ldots$, and the generator is $t_{00} = 1$. Pascal concluded with the proof that $t_{mn} = \frac{(m+n)!}{m!\,n!}.$ In the same treatise, Pascal gave an explicit statement of the principle of mathematical induction. In 1654, he proved "Pascal's identity" relating the sums of the $p$-th powers of the first $n$ positive integers for $p = 0, 1, 2, \ldots, k$. That same year, Pascal had a religious experience, and mostly gave up work in mathematics.
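A short computational sketch confirms that the recursion above reproduces the closed form Pascal proved (the helper name is invented for this illustration; Python is used purely as a convenience):

from math import factorial

def t(m, n):
    # Pascal's recursion with his boundary conditions:
    # t(m, -1) = t(-1, n) = 0, and the generator t(0, 0) = 1.
    if m < 0 or n < 0:
        return 0
    if m == 0 and n == 0:
        return 1
    return t(m - 1, n) + t(m, n - 1)

# Check against the closed form t_mn = (m+n)! / (m! n!) for a small grid.
for m in range(6):
    for n in range(6):
        assert t(m, n) == factorial(m + n) // (factorial(m) * factorial(n))
print("recursion matches (m+n)!/(m! n!) for all checked m, n")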
Cycloid. In 1658, Pascal, while suffering from a toothache, began considering several problems concerning the cycloid. His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest. Pascal proposed three questions relating to the center of gravity, area and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Gilles de Roberval and Pierre de Carcavi were the judges, and neither of the two submissions (by John Wallis and Antoine de Lalouvère) was judged to be adequate. While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval promptly claimed that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in Wallis's "Tractus Duo", giving Wren priority for the first published proof. Physics. Pascal contributed to several fields in physics, most notably the fields of fluid mechanics and pressure. In honour of his scientific contributions, the name "Pascal" has been given to the SI unit of pressure and to Pascal's law (an important principle of hydrostatics). He introduced a primitive form of roulette and the roulette wheel in his search for a perpetual motion machine. Fluid dynamics. His work in the fields of hydrodynamics and hydrostatics centered on the principles of hydraulic fluids. His inventions include the hydraulic press (using hydraulic pressure to multiply force) and the syringe. He proved that hydrostatic pressure depends not on the weight of the fluid but on the elevation difference. He demonstrated this principle by attaching a thin tube to a barrel full of water and filling the tube with water up to the level of the third floor of a building. This caused the barrel to leak, in what became known as Pascal's barrel experiment. Vacuum. By 1647, Pascal had learned of Evangelista Torricelli's experimentation with barometers. Having replicated an experiment that involved placing a tube filled with mercury upside down in a bowl of mercury, Pascal questioned what force kept some mercury in the tube and what filled the space above the mercury in the tube. At the time, most scientists, including Descartes, believed in a plenum, i.e., that some invisible matter filled all of space, rather than a vacuum ("Nature abhors a vacuum"). This was based on the Aristotelian notion that everything in motion was a substance, moved by another substance. Furthermore, light passed through the glass tube, suggesting that a substance such as aether, rather than a vacuum, filled the space. Following more experimentation in this vein, in 1647 Pascal produced "Experiences nouvelles touchant le vide" ("New experiments with the vacuum"), which detailed basic rules describing to what degree various liquids could be supported by air pressure. It also provided reasons why it was indeed a vacuum above the column of liquid in a barometer tube. This work was followed by "Récit de la grande expérience de l'équilibre des liqueurs" ("Account of the great experiment on equilibrium in liquids"), published in 1648. First atmospheric pressure vs. altitude experiment. Torricelli's experiments had shown that air pressure is equal to the weight of a column of mercury about 30 inches high. If air has a finite weight, Earth's atmosphere must have a maximum height. Pascal reasoned that if this were true, air pressure on a high mountain must be less than at a lower altitude. He lived near the Puy de Dôme mountain, but his health was poor, so he could not climb it himself. On 19 September 1648, after many months of Pascal's friendly but insistent prodding, Florin Périer, husband of Pascal's elder sister Gilberte, was finally able to carry out the fact-finding mission vital to Pascal's theory, and wrote an account of the experiment. Pascal replicated the experiment in Paris by carrying a barometer up to the top of the bell tower at the church of Saint-Jacques-de-la-Boucherie, a height of about 50 metres. The mercury dropped two lines. He found with both experiments that an ascent of 7 fathoms lowers the mercury by half a line. Note: Pascal used "pouce" and "ligne" for "inch" and "line", and "toise" for "fathom". In a reply to Étienne Noël, who believed in the plenum, Pascal wrote, echoing contemporary notions of science and falsifiability: "In order to show that a hypothesis is evident, it does not suffice that all the phenomena follow from it; instead, if it leads to something contrary to a single one of the phenomena, that suffices to establish its falsity."
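Pascal's two reported measurements can be cross-checked with a little arithmetic. Assuming the usual conversion of one toise (fathom) to about 1.95 metres (an assumption supplied here for illustration, not a figure from the text), the "half a line per 7 fathoms" rule predicts almost exactly the two-line drop observed on the roughly 50-metre bell tower:

TOISE_IN_METRES = 1.949   # assumed conversion; one toise = 6 French feet
tower_height_m = 50.0     # approximate height of the Saint-Jacques bell tower
lines_per_metre = 0.5 / (7 * TOISE_IN_METRES)  # half a line per 7-fathom ascent

predicted = tower_height_m * lines_per_metre
print(f"predicted mercury drop: {predicted:.2f} lines")  # about 1.8, i.e. ~2 lines

The prediction of roughly 1.8 lines agrees with the reported "two lines" to the precision of the instruments, which is consistent with the two experiments describing the same pressure–altitude relationship.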
Blaise Pascal Chairs are given to outstanding international scientists to conduct their research in the Île-de-France region. Adult life: religion, literature, and philosophy. Religious conversion. In the winter of 1646, Pascal's 58-year-old father broke his hip when he slipped and fell on an icy street of Rouen; given the man's age and the state of medicine in the 17th century, a broken hip could be a very serious condition, perhaps even fatal. Rouen was home to two of the finest doctors in France, Deslandes and de la Bouteillerie. The elder Pascal "would not let anyone other than these men attend him...It was a good choice, for the old man survived and was able to walk again..." However, treatment and rehabilitation took three months, during which time La Bouteillerie and Deslandes had become regular visitors. Both men were followers of Jean Guillebert, proponent of a splinter group from Catholic teaching known as Jansenism. This still fairly small sect was making surprising inroads into the French Catholic community at that time. It espoused rigorous Augustinism. Blaise spoke with the doctors frequently, and after their successful treatment of his father, borrowed from them works by Jansenist authors. In this period, Pascal experienced a sort of "first conversion" and began to write on theological subjects in the course of the following year. Pascal fell away from this initial religious engagement and experienced a few years of what some biographers have called his "worldly period" (1648–54). His father died in 1651 and left his inheritance to Pascal and his sister Jacqueline, for whom Pascal acted as conservator. Jacqueline announced that she would soon become a postulant in the Jansenist convent of Port-Royal. Pascal was deeply affected and very sad, not because of her choice, but because of his chronic poor health; he needed her just as she had needed him. By the end of October in 1651, a truce had been reached between brother and sister. In return for a healthy annual stipend, Jacqueline signed over her part of the inheritance to her brother. Gilberte had already been given her inheritance in the form of a dowry. In early January, Jacqueline left for Port-Royal. On that day, according to Gilberte concerning her brother, "He retired very sadly to his rooms without seeing Jacqueline, who was waiting in the little parlor..." In early June 1653, after what must have seemed like endless badgering from Jacqueline, Pascal formally signed over the whole of his sister's inheritance to Port-Royal, which, to him, "had begun to smell like a cult." With two-thirds of his father's estate now gone, the 29-year-old Pascal was now consigned to genteel poverty. For a while, Pascal pursued the life of a bachelor. During visits to his sister at Port-Royal in 1654, he displayed contempt for affairs of the world but was not drawn to God. "Memorial". On 23 November 1654, between 10:30 and 12:30 at night, Pascal had an intense religious experience and immediately wrote a brief note to himself which began: "Fire. God of Abraham, God of Isaac, God of Jacob, not of the philosophers and the scholars..." and concluded by quoting Psalm 119:16: "I will not forget thy word. Amen." He seems to have carefully sewn this document into his coat and always transferred it when he changed clothes; a servant discovered it only by chance after his death. This piece is now known as the "Memorial". The story that a carriage accident led to the experience described in the "Memorial" is disputed by some scholars.
His belief and religious commitment revitalized, Pascal visited the older of two convents at Port-Royal for a two-week retreat in January 1655. For the next four years, he regularly travelled between Port-Royal and Paris. It was at this point immediately after his conversion when he began writing his first major literary work on religion, the "Provincial Letters". Literature. In literature, Pascal is regarded as one of the most important authors of the French Classical Period and is read today as one of the greatest masters of French prose. His use of satire and wit influenced later polemicists. The "Provincial Letters". Beginning in 1656–57, Pascal published his memorable attack on casuistry, a popular ethical method used by Catholic thinkers in the early modern period (especially the Jesuits, and in particular Antonio Escobar). Pascal denounced casuistry as the mere use of complex reasoning to justify moral laxity and all sorts of sins. The 18-letter series was published between 1656 and 1657 under the pseudonym Louis de Montalte and incensed Louis XIV. The king ordered that the book be shredded and burnt in 1660. In 1661, in the midst of the formulary controversy, the Jansenist school at Port-Royal was condemned and closed down; those involved with the school had to sign a 1656 papal bull condemning the teachings of Jansen as heretical. The final letter from Pascal, in 1657, had defied Alexander VII himself. Even Pope Alexander, while publicly opposing them, nonetheless was persuaded by Pascal's arguments. Aside from their religious influence, the "Provincial Letters" were popular as a literary work. Pascal's use of humor, mockery, and vicious satire in his arguments made the letters ripe for public consumption, and influenced the prose of later French writers like Voltaire and Jean-Jacques Rousseau. It is in the "Provincial Letters" that Pascal made his oft-quoted apology for writing a long letter, as he had not had time to write a shorter one. From Letter XVI, as translated by Thomas M'Crie: 'Reverend fathers, my letters were not wont either to be so prolix, or to follow so closely on one another. Want of time must plead my excuse for both of these faults. The present letter is a very long one, simply because I had no leisure to make it shorter.' Charles Perrault wrote of the "Letters": "Everything is there—purity of language, nobility of thought, solidity in reasoning, finesse in raillery, and throughout an "agrément" not to be found anywhere else." Philosophy. Pascal is arguably best known as a philosopher, considered by some the second greatest French mind behind René Descartes. He was a dualist following Descartes. However, he is also remembered for his opposition to both the rationalism of the likes of Descartes and simultaneous opposition to the main countervailing epistemology, empiricism, preferring fideism. In terms of God, Descartes and Pascal disagreed. Pascal wrote that "I cannot forgive Descartes. In all his philosophy he would have been quite willing to dispense with God, but he couldn't avoid letting him put the world in motion; afterwards he didn't need God anymore". He opposed the rationalism of people like Descartes as applied to the existence of a God, preferring faith as "reason can decide nothing here". For Pascal the nature of God was such that such proofs cannot reveal God. Humans "are in darkness and estranged from God" because "he has hidden Himself from their knowledge". He cared above all about the philosophy of religion. 
Pascalian theology has grown out of his perspective that humans are, according to Wood, "born into a duplicitous world that shapes us into duplicitous subjects and so we find it easy to reject God continually and deceive ourselves about our own sinfulness". Philosophy of mathematics. Pascal's major contribution to the philosophy of mathematics came with his "De l'Esprit géométrique" ("Of the Geometrical Spirit"), originally written as a preface to a geometry textbook for one of the famous Petites écoles de Port-Royal ("Little Schools of Port-Royal"). The work was unpublished until over a century after his death. Here, Pascal looked into the issue of discovering truths, arguing that the ideal of such a method would be to found all propositions on already established truths. At the same time, however, he claimed this was impossible because such established truths would require other truths to back them up—first principles, therefore, cannot be reached. Based on this, Pascal argued that the procedure used in geometry was as perfect as possible, with certain principles assumed and other propositions developed from them. Nevertheless, there was no way to know the assumed principles to be true. Pascal also used "De l'Esprit géométrique" to develop a theory of definition. He distinguished between definitions which are conventional labels defined by the writer and definitions which are within the language and understood by everyone because they naturally designate their referent. The second type would be characteristic of the philosophy of essentialism. Pascal claimed that only definitions of the first type were important to science and mathematics, arguing that those fields should adopt the philosophy of formalism as formulated by Descartes. In "De l'Art de persuader" ("On the Art of Persuasion"), Pascal looked deeper into geometry's axiomatic method, specifically the question of how people come to be convinced of the axioms upon which later conclusions are based. Pascal agreed with Montaigne that achieving certainty in these axioms and conclusions through human methods is impossible. He asserted that these principles can be grasped only through intuition, and that this fact underscored the necessity for submission to God in searching out truths. Pensées. Pascal's most influential theological work, referred to posthumously as the "Pensées" ("Thoughts") is widely considered to be a masterpiece, and a landmark in French prose. When commenting on one particular section (Thought #72), Sainte-Beuve praised it as the finest pages in the French language. Will Durant hailed the Pensées as "the most eloquent book in French prose". The "Pensées" was not completed before his death. It was to have been a sustained and coherent examination and defense of the Christian faith, with the original title "Apologie de la religion Chrétienne" ("Defense of the Christian Religion"). The first version of the numerous scraps of paper found after his death appeared in print as a book in 1669 titled "Pensées de M. Pascal sur la religion, et sur quelques autres sujets" ("Thoughts of M. Pascal on religion, and on some other subjects") and soon thereafter became a classic. One of the "Apologie"s main strategies was to use the contradictory philosophies of Pyrrhonism and Stoicism, personalized by Montaigne on one hand, and Epictetus on the other, in order to bring the unbeliever to such despair and confusion that he would embrace God. Last works and death. T. S. 
Eliot described him during this phase of his life as "a man of the world among ascetics, and an ascetic among men of the world." Pascal's ascetic lifestyle derived from a belief that it was natural and necessary for a person to suffer. In 1659, Pascal fell seriously ill. During his last years, he frequently tried to reject the ministrations of his doctors, saying, "Don't pity me, sickness is the natural state of Christians, because in it we are, as we should always be, in the suffering of evils, in the deprivation of all the goods and pleasures of the senses, free from all the passions that work throughout the course of life, without ambition, without avarice, in the continual expectation of death." Desiring, in his zeal and charity, to imitate Jesus' poverty of spirit, Pascal said that if God allowed him to recover from his illness, he was resolved to "have no other employment or occupation for the rest of my life than the service of the poor." Louis XIV suppressed the Jansenist movement at Port-Royal in 1661. In response, Pascal wrote one of his final works, "Écrit sur la signature du formulaire" ("Writ on the Signing of the Form"), exhorting the Jansenists not to give in. Later that year, his sister Jacqueline died, which convinced Pascal to cease his polemics on Jansenism. Inventor of public transportation. Pascal's last major achievement, returning to his mechanical genius, was inaugurating one of the first land-based public transport services, the carrosses à cinq sols, a network of horse-drawn multi-seat carriages that carried passengers on five fixed routes. Pascal also laid down the operating principles later used to plan public transportation: the carriages had a fixed route, a fixed price (five sols, hence the name), and left even if there were no passengers. The lines were not commercially successful, and the last one closed by 1675. Nonetheless, he has been described as the inventor of public transportation. Illness and death. In 1662, Pascal's illness became more violent, and his emotional condition had severely worsened since his sister's death. Aware that his health was fading quickly, he sought a move to the hospital for incurable diseases, but his doctors declared that he was too unstable to be carried. In Paris on 18 August 1662, Pascal went into convulsions and received extreme unction. He died the next morning, his last words being "May God never abandon me," and was buried in the cemetery of Saint-Étienne-du-Mont. An autopsy performed after his death revealed grave problems with his stomach and other organs of his abdomen, along with damage to his brain. Despite the autopsy, the cause of his poor health was never precisely determined, though speculation focuses on tuberculosis, stomach cancer, or a combination of the two. The headaches which affected Pascal are generally attributed to his brain lesion. Legacy. One of the universities of Clermont-Ferrand, France, the Université Blaise Pascal, is named after him. The Établissement scolaire français Blaise-Pascal in Lubumbashi, Democratic Republic of the Congo is also named after Pascal. The 1969 Eric Rohmer film "My Night at Maud's" is based on the work of Pascal. Roberto Rossellini directed a filmed biopic, "Blaise Pascal", which originally aired on Italian television in 1971. Pascal was a subject of the first edition of the 1984 BBC Two documentary, "Sea of Faith", presented by Don Cupitt. The chameleon in the animated film "Tangled" is named for Pascal. A programming language is named for Pascal.
In 2014, Nvidia announced its new Pascal microarchitecture, which is named for Pascal. The first graphics cards featuring Pascal were released in 2016. The 2017 game "" has multiple characters named after famous philosophers; one of these is a sentient pacifistic machine named Pascal, who serves as a major supporting character. Pascal creates a village for machines to live peacefully with the androids they are at war with, and acts as a parental figure for other machines trying to adapt to their newly found individuality. The otter in the "Animal Crossing" series is named for Pascal. The minor planet 4500 Pascal is named in his honor. Pope Paul VI, in the encyclical "Populorum progressio", issued in 1967, quotes Pascal's "Pensées". In 2023, Pope Francis released an apostolic letter, "Sublimitas et miseria hominis", dedicated to Blaise Pascal, in commemoration of the fourth centenary of his birth. Pascal influenced both the French sociologist Pierre Bourdieu, who named his "Pascalian Meditations" (1997) after him, and the French philosopher Louis Althusser.
4069
45368705
https://en.wikipedia.org/wiki?curid=4069
Brittonic languages
The Brittonic languages (also Brythonic or British Celtic; ; ; and ) form one of the two branches of the Insular Celtic languages; the other is Goidelic. It comprises the extant languages Breton, Cornish, and Welsh. The name "Brythonic" was derived by Welsh Celticist John Rhys from the Welsh word , meaning Ancient Britons as opposed to an Anglo-Saxon or Gael. The Brittonic languages derive from the Common Brittonic language, spoken throughout Great Britain during the Iron Age and Roman period. In the 5th and 6th centuries emigrating Britons also took Brittonic speech to the continent, most significantly in Brittany and Britonia. During the next few centuries, in much of Britain the language was replaced by Old English and Scottish Gaelic, with the remaining Common Brittonic language splitting into regional dialects, eventually evolving into Welsh, Cornish, Breton, Cumbric, and probably Pictish. Welsh and Breton continue to be spoken as native languages, while a revival in Cornish has led to an increase in speakers of that language. Cumbric and Pictish are extinct, having been replaced by Goidelic and Anglic speech. The Isle of Man and Orkney may also have originally spoken a Brittonic language, but this was later supplanted by Goidelic on the Isle of Man and Norse on Orkney. There is also a community of Brittonic language speakers in (the Welsh settlement in Patagonia). Name. The names "Brittonic" and "Brythonic" are scholarly conventions referring to the Celtic languages of Britain and to the ancestral language they originated from, designated Common Brittonic, in contrast to the Goidelic languages originating in Ireland. Both were created in the 19th century to avoid the ambiguity of earlier terms such as "British" and "Cymric". "Brythonic" was coined in 1879 by the Celticist John Rhys from the Welsh word . "Brittonic", derived from "Briton" and also earlier spelled "Britonic" and "Britonnic", emerged later in the 19th century. "Brittonic" became more prominent through the 20th century, and was used in Kenneth H. Jackson's highly influential 1953 work on the topic, "Language and History in Early Britain". Jackson noted by that time that "Brythonic" had become a dated term: "of late there has been an increasing tendency to use Brittonic instead." Today, "Brittonic" often replaces "Brythonic" in the literature. Rudolf Thurneysen used "Britannic" in his influential "A Grammar of Old Irish", although this never became popular among subsequent scholars. Comparable historical terms include the Medieval Latin and and the Welsh . Some writers use "British" for the language and its descendants, although, due to the risk of confusion, others avoid it or use it only in a restricted sense. Jackson, and later John T. Koch, use "British" only for the early phase of the Common Brittonic language. Before Jackson's work, "Brittonic" and "Brythonic" were often used for all the P-Celtic languages, including not just the varieties in Britain but those Continental Celtic languages that similarly experienced the evolution of the Proto-Celtic language element to . However, subsequent writers have tended to follow Jackson's scheme, rendering this use obsolete. The name "Britain" itself comes from , via Old French and Middle English , possibly influenced by Old English , probably also from Latin , ultimately an adaptation of the native word for the island, . 
An early written reference to the British Isles may derive from the works of the Greek explorer Pytheas of Massalia; later Greek writers such as Diodorus of Sicily and Strabo quote Pytheas' use of variants rendered "The Britannic [land, island]" and "Britannic islands"; the underlying term may be a Celtic word meaning 'painted ones' or 'tattooed folk', referring to body decoration. Evidence. Knowledge of the Brittonic languages comes from a variety of sources. Information about the early language is obtained from coins, inscriptions, and comments by classical writers, as well as from place names and personal names recorded by them. For later languages, there is information from medieval writers and modern native speakers, together with place names. The names recorded in the Roman period are given in Rivet and Smith. Characteristics. The Brittonic branch is also referred to as "P-Celtic" because linguistic reconstruction of the Brittonic reflex of the Proto-Indo-European phoneme "*kʷ" is "p", as opposed to Goidelic "k". Such nomenclature usually implies acceptance of the P-Celtic and Q-Celtic hypothesis rather than the Insular Celtic hypothesis because the term includes certain Continental Celtic languages as well. Other major characteristics include: Classification. The family tree of the Brittonic languages is as follows: Brittonic languages in use today are Welsh, Cornish and Breton. Welsh and Breton have been spoken continuously since they formed. For all practical purposes Cornish died out during the 18th or 19th century, but a revival movement has more recently created small numbers of new speakers. Also notable are the extinct language Cumbric, and possibly the extinct Pictish. One view, advanced in the 1950s and based on apparently unintelligible ogham inscriptions, was that the Picts may have also used a non-Indo-European language. This view, while attracting broad popular appeal, has virtually no following in contemporary linguistic scholarship. History and origins. The modern Brittonic languages are generally considered to derive from a common ancestral language termed "Brittonic", "British", "Common Brittonic", "Old Brittonic" or "Proto-Brittonic", which is thought to have developed from Proto-Celtic or early Insular Celtic by the 6th century BC. A major archaeogenetics study uncovered a migration into southern Britain in the middle to late Bronze Age, during the 500-year period 1,300–800 BC. The newcomers were genetically most similar to ancient individuals from Gaul. During 1,000–875 BC, their genetic markers swiftly spread through southern Britain, but not northern Britain. The authors describe this as a "plausible vector for the spread of early Celtic languages into Britain". There was much less inward migration during the Iron Age, so it is likely that Celtic reached Britain before then. Barry Cunliffe suggests that a Goidelic branch of Celtic may already have been spoken in Britain, but that this middle Bronze Age migration would have introduced the Brittonic branch. Brittonic languages were probably spoken before the Roman invasion throughout most of Great Britain, though the Isle of Man later had a Goidelic language, Manx.
During the period of the Roman occupation of what is now England and Wales (AD 43 to c. 410), Common Brittonic borrowed a large stock of Latin words, both for concepts unfamiliar in the pre-urban society of Celtic Britain such as urbanization and new tactics of warfare, as well as for rather more mundane words which displaced native terms (most notably, the word for 'fish' in all the Brittonic languages derives from the Latin "piscis" rather than the native "*ēskos" – which may survive, however, in the Welsh name of the River Usk, "Wysg"). Approximately 800 of these Latin loan-words have survived in the three modern Brittonic languages. Pictish may have resisted Latin influence to a greater extent than the other Brittonic languages. It is probable that at the start of the Post-Roman period, Common Brittonic was differentiated into at least two major dialect groups – Southwestern and Western. (Additional dialects have also been posited, but have left little or no evidence, such as an Eastern Brittonic spoken in what is now the East of England.) Between the end of the Roman occupation and the mid-6th century, the two dialects began to diverge into recognizably separate varieties, the Western into Cumbric and Welsh, and the Southwestern into Cornish and its closely related sister language Breton, which was carried to continental Armorica. Jackson showed that a few of the dialect distinctions between West and Southwest Brittonic go back a long way. New divergences began around AD 500, but other changes that were shared occurred in the 6th century. Other common changes occurred in the 7th century onward and are possibly due to inherent tendencies. Thus the concept of a Common Brittonic language ends by AD 600. Substantial numbers of Britons certainly remained in the expanding area controlled by Anglo-Saxons, but over the fifth and sixth centuries they mostly adopted the Old English language and culture. Decline. The Brittonic languages spoken in what are now Scotland, the Isle of Man, and England began to be displaced in the 5th century through the settlement of Irish-speaking Gaels and Germanic peoples. Henry of Huntingdon wrote that Pictish was "no longer spoken". The displacement of the languages of Brittonic descent was probably complete in all of Britain except Cornwall, Wales, and the English counties bordering these areas such as Devon, by the 11th century. Western Herefordshire continued to speak Welsh until the late nineteenth century, and isolated pockets of Shropshire speak Welsh today. Sound changes. The large array of Brittonic sound changes has been documented by Schrijver (1995), building upon Jackson (1953). Changes to long vowels and diphthongs. Brittonic has undergone an extensive remodeling of Proto-Celtic diphthongs and long vowels. All original Proto-Celtic diphthongs turned into monophthongs, albeit a number of these re-diphthongized at later stages. Changes to short vowels. The distribution of Proto-Celtic short vowels was reshuffled by various processes in Brittonic, such as the two i-affections, a-affection, raisings, and contact with lenited consonants like "*g" > "*ɣ" and "*s" > "*h". The default outcomes of stressed short vowels in Brittonic are as follows: Raisings of "*e" and "*o". Welsh exhibits raisings of "*e" to "*i" and of "*o" to "*u" before a nasal followed by a stop. It is difficult to determine whether the raising from "*o" to "*u" also affected Cornish and Breton, since both of those languages generally merge "*o" with "*u".
The raising of "*e" to "*i" occurred in all three major Brittonic languages: Other raising environments identified by Schrijver include: This raising preceded a-affection, since a-affection reverses this raising whenever it applied. All these raisings not only affected native vocabulary, but also affected Latin loanwords. Interactions of vowels followed by "*g". Multiple special interactions of vowels occurred when followed by "*g". Assimilation of "*oRa" to "*aRa". Closely paralleling the common Celtic change of "*eRa" > "*aRa" (Joseph's rule) is the change of "*oRa" to "*aRa" in Brittonic, with "R" standing for any lone sonorant. Unlike Joseph's rule, "*oRa" to "*aRa" did not occur in Goidelic. Schrijver demonstrates this rule with the following examples: Assuming that Welsh "manach" (borrowed from Latin "monachus" "monk") also underwent this assimilation, Schrijver concludes that this change must predate the raising of vowels in "*mVn-" sequences, which in turn predates a-affection (an early fifth-century process). /je/ > /ja/. In Brittonic, Celtic "*ye" generally became /ja/. Some examples cited by Schrijver include: "*wo". The sequence "*wo" was quite volatile in Brittonic. It originally manifested as "*wo" in unlenited position and "*wa" in lenited position. Word-initially, this allomorphy was gone in medieval times, leveled out in various ways. Whichever of "*o" or "*a" to be generalized in the reflexes of a word in a given Brittonic language is completely unpredictable, and occasionally both "o" and "a" reflexes have been attested within the same language. Southwest Brittonic languages like Breton and Cornish usually generalize the same variant of "*wo" in a given word while Welsh tends to have its own distribution of variants. The distribution of "*wo/wa" is also complicated by an Old Breton development where "*wo" that had not turned to "*gwa" would split into "go(u)-" (Old Breton "gu-") in penultimate post-apocope syllables and "go-" in monosyllables. Developments of "*ub". The sequence "*ub" > "*uβ" remained as such when followed by a consonant, for instance in Proto-Celtic "*dubros" "water" > "*duβr" > Welsh "dwfr", "dŵr" and Breton "dour". However, if no consonant exists after a "*ub" sequence, the "*u" merges with whatever Proto-Celtic "*ou" and "*oi" became, the result of which is written in the Brittonic languages. The lenited "*b" > "*β" is lost word-finally after this happens. Schrijver dates this development between the 6th to 8th centuries, with subsequent loss of "*β" datable to the 9th century. a-affection. In Brittonic, final a-affection was triggered by final-syllable "*ā" or "*a", which was later apocopated. This process lowered "*i" and "*u" in the preceding syllable to "*e" and "*o", respectively. A-affection, by affecting feminine forms of adjectives and not their masculine counterparts, created root vowel alternations by gender such as "*windos", feminine "*windā" > "*gwɪnn", feminine "*gwenn" > Welsh "gwyn", feminine "gwen". i-affection. There were two separate processes of i-affection in Brittonic: final i-affection and internal i-affection. Both processes caused the fronting of vowels. Simplified summary of consonantal outcomes. The regular consonantal sound changes from Proto-Celtic to Welsh, Cornish, and Breton are summarised in the following table. Where the graphemes have a different value from the corresponding IPA symbols, the IPA equivalent is indicated between slashes. V represents a vowel; C represents a consonant. 
Remnants in England, Scotland and Ireland. Place names and river names. The principal legacy left behind in those territories from which the Brittonic languages were displaced is that of toponyms (place names) and hydronyms (names of rivers and other bodies of water). There are many Brittonic place names in lowland Scotland and in the parts of England where it is agreed that substantial Brittonic speakers remained (Brittonic names, apart from those of the former Romano-British towns, are scarce over most of England). Names derived (sometimes indirectly) from Brittonic include London, Penicuik, Perth, Aberdeen, York, Dorchester, Dover, and Colchester. Brittonic elements found in England include and for 'hill', while some such as "co[o]mb[e]" (from ) for 'small deep valley' and "tor" for 'hill, rocky headland' are examples of Brittonic words that were borrowed into English. Others reflect the presence of Britons such as Dumbarton – from the Scottish Gaelic meaning 'Fort of the Britons', and Walton meaning (in Anglo-Saxon) a 'settlement' where the 'Britons' still lived. The number of Celtic river names in England generally increases from east to west, a map showing these being given by Jackson. These include Avon, Chew, Frome, Axe, Brue and Exe, but also river names containing the elements "der-/dar-/dur-" and "-went" e.g. Derwent, Darwen, Deer, Adur, Dour, Darent, and Went. These names exhibit multiple different Celtic roots. One is * 'water' (Breton , Cumbric , Welsh ), also found in the place-name Dover (attested in the Roman period as ); this is the source of rivers named Dour. Another is 'oak' or 'true' (Bret. , Cumb. , W. ), coupled with two agent suffixes, and ; this is the origin of Derwent, Darent, and Darwen (attested in the Roman period as ). The final root to be examined is . In Roman Britain, there were three tribal capitals named (modern Winchester, Caerwent, and Caistor St Edmunds), whose meaning was 'place, town'. Brittonicisms in English. Some, including J. R. R. Tolkien, have argued that Celtic has acted as a substrate to English for both the lexicon and syntax. It is generally accepted that Brittonic effects on English are lexically few, aside from toponyms, consisting of a small number of domestic and geographical words, which "may" include "bin", "brock", "carr", "comb", "crag" and "tor". Another legacy may be the sheep-counting system "yan tan tethera" in the north, in the traditionally Celtic areas of England such as Cumbria. Several words of Cornish origin are still in use in English as mining-related terms, including costean, gunnies, and vug. Those who argue against the theory of a more significant Brittonic influence than is widely accepted point out that many toponyms have no semantic continuation from the Brittonic language. A notable example is "Avon" which comes from the Celtic term for river or the Welsh term for river, , but was used by the English as a personal name. Likewise the River Ouse, Yorkshire, contains the Celtic word which merely means 'water' and the name of the river Trent simply comes from the Welsh word for a 'trespasser' (figuratively suggesting 'overflowing river'). Scholars supporting a Brittonic substrate in English argue that the use of periphrastic constructions (using auxiliary verbs such as "do" and "be" in the continuous/progressive) of the English verb, which is more widespread than in the other Germanic languages, is traceable to Brittonic influence. 
Others, however, find this unlikely, since many of these forms are only attested in the later Middle English period; these scholars claim a native English development rather than Celtic influence. Ian G. Roberts postulates Northern Germanic influence, despite such constructions not existing in Norse. Literary Welsh has the simple present "caraf" = 'I love' and the present stative (al. continuous/progressive) "yr wyf yn caru" = 'I am loving', where the Brittonic syntax is partly mirrored in English. (However, English "I am loving" comes from older "I am a-loving", from still older 'I am in the process of loving'.) In the Germanic sister languages of English, there is only one form, for example "ich liebe" in German, though in colloquial usage in some German dialects, a progressive aspect form has evolved which is formally similar to those found in Celtic languages, and somewhat less similar to the Modern English form, e.g. 'I am working' is "ich bin am Arbeiten", literally 'I am on the working'. The same structure is also found in modern Dutch ("ik ben aan het werken"), alongside other structures (e.g. "ik zit te werken", lit. 'I sit to working'). These parallel developments suggest that the English progressive is not necessarily due to Celtic influence; moreover, the native English development of the structure can be traced over 1000 years and more of English literature. Some researchers (Filppula, et al., 2001) argue that other elements of English syntax reflect Brittonic influences. For instance, in English tag questions, the form of the tag depends on the verb form in the main statement ("aren't I?", "isn't he?", "won't we?", etc.). The German "nicht wahr?" and the French "n'est-ce pas?", by contrast, are fixed forms which can be used with almost any main statement. It has been claimed that the English system has been borrowed from Brittonic, since Welsh tag questions vary in almost exactly the same way. Brittonic effect on the Goidelic languages. Far more notable, but less well known, are Brittonic influences on Scottish Gaelic, though Scottish and Irish Gaelic, with their wider range of preposition-based periphrastic constructions, suggest that such constructions descend from their common Celtic heritage. Scottish Gaelic contains several P-Celtic loanwords, but, as there is a far greater overlap in terms of Celtic vocabulary than with English, it is not always possible to disentangle P- and Q-Celtic words. However, some common words, such as Gaelic "monadh" (= Welsh "mynydd", Cumbric *"monidh"), are particularly evident. The Brittonic influence on Scots Gaelic is often indicated by considering Irish language usage, which is not likely to have been influenced so much by Brittonic. In particular, the word "srath" (anglicised as "strath") is a native Goidelic word, but its usage appears to have been modified by the Welsh cognate "ystrad", whose meaning is slightly different. The effect on Irish has been the loan from British of many Latin-derived words. This has been associated with the Christianisation of Ireland from Britain.
4071
7436027
https://en.wikipedia.org/wiki?curid=4071
Bronski Beat
Bronski Beat were a British synth-pop band formed in 1983 in London, England. The initial lineup, which recorded the majority of their hits, consisted of Scottish musicians Jimmy Somerville (vocals) and Steve Bronski (keyboards, percussion) and English musician Larry Steinbachek (keyboards, percussion). Simon Davolls contributed backing vocals to many songs. Throughout the band's career, Bronski was the only member to appear in every lineup. Bronski Beat achieved success in the mid-1980s, particularly with the 1984 single "Smalltown Boy", from their debut album, "The Age of Consent". "Smalltown Boy" was their only US "Billboard" Hot 100 single. All members of the band were openly gay, and their songs reflected this, often containing political commentary on gay issues. Somerville left Bronski Beat in 1985 and went on to have success as lead singer of the Communards and as a solo artist. He was replaced by vocalist John Foster, with whom the band continued to have hits in the UK and Europe through 1986. Foster left Bronski Beat after their second album, and the band were joined by Jonathan Hellyer before dissolving in 1995. Steve Bronski revived the band in 2016, recording new material with 1990s member Ian Donaldson. Steinbachek died later that year; Bronski died in 2021. As of 2025, Somerville is the last surviving member of the band's original lineup and one of six surviving members of the band. History. 1983–1985: early years and "The Age of Consent". Bronski Beat formed in 1983 when Jimmy Somerville, Steve Bronski (both from Glasgow) and Larry Steinbachek (from Southend, Essex) shared a three-bedroom flat at Lancaster House in Brixton, London. Steinbachek had heard Somerville singing during the making of "Framed Youth: The Revenge of the Teenage Perverts" and suggested they make some music. They first performed publicly at an arts festival, "September in the Pink". The trio were unhappy with the inoffensive nature of contemporary gay performers and sought to be more outspoken and political. Bronski Beat signed a recording contract with London Records in 1984 after doing only nine live gigs. The band's debut single, "Smalltown Boy", about a gay teenager leaving his family and fleeing his home town, was a hit, peaking at No. 3 in the UK Singles Chart and topping charts in Belgium and the Netherlands. The single was accompanied by a promotional video directed by Bernard Rose, showing Somerville trying to befriend an attractive diver at a swimming pool, then being attacked by the diver's homophobic associates, being returned to his family by the police and having to leave home. (The police officer was played by Colin Bell, then the marketing manager of London Records.) "Smalltown Boy" reached No. 48 in the U.S. chart and peaked at No. 8 in Australia. The follow-up single, "Why?", adopted a hi-NRG sound and was more lyrically focused on anti-gay prejudice. It also achieved Top 10 status in the UK, reaching No. 6, and was another Top 10 hit for the band in Australia, Switzerland, Germany, France and the Netherlands. At the end of 1984, the trio released an album titled "The Age of Consent". The inner sleeve listed the varying ages of consent for consensual gay sex in different nations around the world. At the time, the age of consent for sexual acts between men in the UK was 21, compared with 16 for heterosexual acts, with several other countries having more liberal laws on gay sex. The album peaked at No. 4 in the UK Albums Chart, No. 36 in the U.S., and No. 12 in Australia. 
Around the same time, the band headlined "Pits and Perverts", a concert at the Electric Ballroom in London to raise funds for the Lesbians and Gays Support the Miners campaign. This event is featured in the film "Pride". The third single, released before Christmas 1984, was a revival of "It Ain't Necessarily So", the George and Ira Gershwin classic (from "Porgy and Bess"). The song questions the accuracy of biblical tales. It also reached the UK Top 20. In 1985, the trio joined up with Marc Almond to record a version of Donna Summer's "I Feel Love". The full version was actually a medley that also incorporated snippets of Summer's "Love to Love You Baby" and John Leyton's "Johnny Remember Me". It was a big success, reaching No. 3 in the UK and equalling the chart achievement of "Smalltown Boy". Although the original had been one of Marc Almond's all-time favourite songs, he had never read the lyrics and thus incorrectly sang "What'll it be, what'll it be, you and me" instead of "Falling free, falling free, falling free" on the finished record. The band and their producer Mike Thorne had gone back into the studio in early 1985 to record a new single, "Run from Love", and PolyGram (London Records' parent company at that time) had pressed a number of promo singles and 12" versions of the song and sent them to radio stations and record stores in the UK. However, the single was shelved as tensions in the band, both personal and political, resulted in Somerville leaving Bronski Beat in the summer of that year. "Run from Love" was subsequently released in remix form on the Bronski Beat album "Hundreds & Thousands", a collection of mostly remixes (on the LP) and B-sides (as bonus tracks on the CD version), as well as the hit "I Feel Love". Somerville went on to form the Communards with Richard Coles, while the remaining members of Bronski Beat searched for a new vocalist. 1985–1995: Somerville's departure, John Foster and Jonathan Hellyer eras. Bronski Beat recruited John Foster as Somerville's replacement (Foster is credited as "Jon Jon"). A single, "Hit That Perfect Beat", was released in November 1985, reaching No. 3 in the UK. It repeated this success on the Australian chart and was also featured in the film "Letter to Brezhnev". A second single, "C'mon C'mon", also charted in the UK Top 20, and an album, "Truthdare Doubledare", released in May 1986, peaked at No. 18. The film "Parting Glances" (1986) included the Bronski Beat songs "Love and Money", "Smalltown Boy" and "Why?" During this period, the band teamed up with producer Mark Cunningham on the first-ever BBC Children In Need single, a cover of David Bowie's "Heroes", released in 1986 under the name of The County Line. Foster left the band in 1987. Following Foster's departure, Bronski Beat began work on their next album, "Out and About". The tracks were recorded at Berry Street studios in London with engineer Brian Pugsley. Some of the song titles were "The Final Spin" and "Peace and Love". The latter track featured Strawberry Switchblade vocalist Rose McDowall and appeared on several internet sites in 2006. One of the other songs from the project, "European Boy", was recorded in 1987 by the disco group Splash, whose lead singer was former Tight Fit singer Steve Grant. Steinbachek and Bronski toured extensively with the new material to positive reviews; however, the project was abandoned when the group was dropped by London Records. 
Also in 1987, Bronski Beat and Somerville performed at a reunion concert for "International AIDS Day", supported by New Order, at the Brixton Academy, London. In 1989, Jonathan Hellyer became lead singer, and the band extensively toured the U.S. and Europe with back-up vocalist Annie Conway. They achieved one minor hit with the song "Cha Cha Heels", a one-off collaboration sung by American actress and singer Eartha Kitt, which peaked at No. 32 in the UK. The song was originally written for movie and recording star Divine, who was unable to record it before his death in 1988. 1990–91 saw Bronski Beat release three further singles on the Zomba record label, "I'm Gonna Run Away", "One More Chance" and "What More Can I Say", all produced by Mike Thorne. Foster and Bronski Beat teamed up again in 1994 and released a techno version of "Tell Me Why" ("Tell Me Why '94") and an acoustic "Smalltown Boy '94" on the German record label ZYX Music. The album "Rainbow Nation" was released the following year with Hellyer returning as lead vocalist, as Foster had dropped out of the project; Ian Donaldson was brought on board to do keyboards and programming. After a few years of touring, Bronski Beat then dissolved, with Steve Bronski going on to become a producer for other artists and Ian Donaldson becoming a successful DJ (Sordid Soundz). Larry Steinbachek became the musical director for Michael Laub's theatre company, 'Remote Control Productions'. 2007–2016: Bronski solo activities and resurrection of Bronski Beat. In 2007, Steve Bronski remixed the song "Stranger to None" by the UK alternative rock band All Living Fear. Four different mixes were made, with one appearing on their retrospective album, "Fifteen Years After". Bronski also remixed the track "Flowers in the Morning" by the Northern Irish electronic band Electrobronze in 2007, changing the style of the song from classical to Hi-NRG disco. In 2015, Steve Bronski teamed up as a one-off with Jessica James (aka Barbara Bush), saying that she reminded him of Divine because of her look and Eartha Kitt-like sound; the project was a cover of the track he had made in 1989. In 2016, Steve Bronski again teamed up with Ian Donaldson, with the aim of bringing Bronski Beat back, enlisting a new singer, Stephen Granville. In 2017, the new Bronski Beat released a reworked version of "Age of Consent" entitled "Age of Reason". "Out & About", the unreleased Bronski Beat album from 1987, was released digitally via Steve Bronski's website; the album features the original tracks plus remixes by Bronski. 2017–present: deaths of Steinbachek and Bronski. On 12 January 2017, it was revealed that Steinbachek had died the previous month, after a short battle with cancer, with his family and friends at his bedside. He was 56. Bronski died on 7 December 2021, at the age of 61, in a Central London flat fire.
4074
35358178
https://en.wikipedia.org/wiki?curid=4074
Barrel (disambiguation)
A barrel is a cylindrical container, traditionally made of wood. Barrel may also refer to:
4077
199747
https://en.wikipedia.org/wiki?curid=4077
Binary prefix
A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1,048,576), and gibi (Gi, 2^30 = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files. The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" (10^3), "mega" (10^6) and "giga" (10^9), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2,097,152 bytes, instead of 2 × 10^6 = 2,000,000. On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB" holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240; and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2,469,606,195 bytes or to 2.3 × 10^9 = 2,300,000,000 bytes, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated into the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union. Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), noted the common use of the terms "kilobyte", "megabyte", and "gigabyte", and the corresponding symbols "KB", "MB", and "GB" in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes in this sense has persisted in some fields. Definitions. In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes "ronna" for 1000^9 and "quetta" for 1000^10. In 2025, the corresponding binary prefixes "robi" (Ri, 1024^9) and "quebi" (Qi, 1024^10) were adopted by the IEC. Comparison of binary and decimal prefixes. The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kibi vs. kilo to nearly 27% for quebi vs. quetta. History. Early prefixes. The original metric system adopted by France in 1795 included two binary prefixes named "double-" (2×) and "demi-" (1/2×). However, these were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960. Storage capacity. Main memory. Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system and could address ten thousand 7-bit words. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. 
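The divergence between the two interpretations quantified above is easy to verify. As a minimal illustrative sketch in Python (the prefix pairs are simply the IEC/SI series named in this article; nothing here is part of any standard), the following loop prints the relative difference between each binary prefix and its decimal counterpart:

# Relative difference between the binary (1024^n) and decimal (1000^n)
# interpretation of each prefix pair.
pairs = ["kibi/kilo", "mebi/mega", "gibi/giga", "tebi/tera", "pebi/peta",
         "exbi/exa", "zebi/zetta", "yobi/yotta", "robi/ronna", "quebi/quetta"]
for n, name in enumerate(pairs, start=1):
    diff = (1024 ** n - 1000 ** n) / 1000 ** n
    print(f"{name:13s} {diff:6.1%}")

Running this shows the mismatch growing from 2.4% for kibi/kilo to about 26.8% for quebi/quetta, the "nearly 27%" figure cited above.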
Power-of-two sizes are the most natural configuration for memory, as all combinations of states of the address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses. While early documentation specified memory sizes as exact numbers such as 4096 or 8192 units (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10, to mean instead the nearest powers of two; namely, 2^10 = 1024, 2^20 = 1024^2, 2^30 = 1024^3, etc. The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings. The symbol for 2^10 = 1024 could be written either in lower case ("k") or in uppercase ("K"); the latter was often used intentionally to indicate the binary rather than decimal meaning. This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964) and of the IBM System/370 (1972), of the CDC 7600, of the DEC PDP-11/70 (1975) and of the DEC VAX-11/780 (1977). In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated "2^16 = 65,536 words" as "65K words" (rather than "64K" or "66K"), while the documentation of the HP 21MX real-time computer (1974) denoted 3 × 2^16 = 196,608 as "196K" and 2^20 = 1,048,576 as "1M". These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory. The use of SI prefixes, and the use of "K" instead of "k", remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2 or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 2^20 = 536,870,912 bytes, rather than 512,000,000. Hard disks. In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotating disk drive is organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints, so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, the IBM 350 (1956), had 50 physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters. Moreover, since the 1960s, many disk drives used IBM's disk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single full-track block in each of its 808 × 19 tracks. Decimal megabytes were used for disk capacity by the CDC in 1974. The Seagate ST-412, one of several types installed in the IBM PC/XT, had a capacity of 10,027,008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as "10 MB". 
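As a concrete check of that arithmetic, a few lines of Python (purely illustrative, using the ST-412 geometry quoted above) reproduce both the marketed figure and the binary-prefix view of the same drive:

# Seagate ST-412 formatted capacity: tracks x sectors/track x bytes/sector
capacity = 306 * 4 * 32 * 256   # 306 cylinders x 4 heads = 1224 tracks
print(capacity)                 # 10027008 bytes
print(capacity / 10**6)         # ~10.03 decimal megabytes, marketed as "10 MB"
print(capacity / 2**20)         # ~9.56 binary megabytes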
Similarly, a "300 GB" hard drive can be expected to offer only slightly more than = , bytes, not (which would be about bytes or "322 GB"). The first terabyte (SI prefix, bytes) hard disk drive was introduced in 2007. Decimal prefixes were generally used by information processing publications when comparing hard disk capacities. Some programs and operating systems, such as Microsoft Windows, still use "MB" and "GB" to denote binary prefixes even when displaying disk drive capacities and file sizes, as did Classic Mac OS. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as "", and that of a "300 GB" drive as "279.4 GB". Some operating systems, such as Mac OS X, Ubuntu, and Debian, have been updated to use "MB" and "GB" to denote decimal prefixes when displaying disk drive capacities and file sizes. Some manufacturers, such as Seagate Technology, have released recommendations stating that properly-written software and documentation should specify clearly whether prefixes such as "K", "M", or "G" mean binary or decimal multipliers. Floppy disks. Floppy disks used a variety of formats, and their capacities was usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities. The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits. The 5.25-inch diskette sold with the IBM PC AT could hold = bytes, and thus was marketed as "1200 KB" with the binary sense of "KB". However, the capacity was also quoted "1.2 MB", which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was (decimal) or (binary). The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of bytes. It was later upgraded to 16 sectors per track, giving a total of = bytes, which was described as "140KB" using the binary sense of "K". The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or = . Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (which would be ≈ ). User complaints forced both Apple and Microsoft to issue support bulletins explaining the discrepancy. Optical disks. When specifying the capacities of optical compact discs, "megabyte" and "MB" usually meant 10242 bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about , which is approximately (decimal). On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) have been generally specified in decimal gigabytes ("GB"), that is, 10003 bytes. In particular, a typical "" DVD has a nominal capacity of about , which is about . Tape drives and media. Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity, although the actual capacity would depend on the block size used when recording. Data and clock rates. 
Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, about 4,770,000 cycles per second. Similarly, digital information transfer rates are quoted using decimal prefixes; the quoted rates of the Parallel ATA disk interface (in bytes per second) and of dial-up modems (in bits per second) likewise use decimal multipliers. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes. The standard sampling rate of music compact disks, quoted as 44.1 kHz, is indeed 44,100 samples per second. A gigabit Ethernet interface can receive or transmit up to 10^9 bits per second, or 125,000,000 bytes per second, within each packet. A "56k" modem can encode or decode up to 56,000 bits per second. Decimal SI prefixes are also generally used for processor-memory data transfer speeds. A PCI-X bus 64 bits wide transfers one 64-bit word per clock cycle, so its data rate in bytes per second is eight times its decimal clock frequency. A PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of 200 MHz, has a bandwidth of 200,000,000 × 2 × 8 = 3,200,000,000 bytes per second, which would be quoted as 3.2 GB/s. Ambiguous standards. The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning both powers of 1000 and (in computer contexts) powers of 1024, has been recorded in popular dictionaries, and even in some obsolete standards, such as ANSI/IEEE 1084-1986 and ANSI/IEEE 1212-1991, IEEE 610.10-1994, and IEEE 100-2000. Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b"). Early binary prefix proposals. Before the IEC standard, several alternative proposals for unique binary prefixes existed, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix "di" and the symbol suffix or subscript "2" to mean "binary", so that, for example, "one dikilobyte" would mean "1024 bytes", denoted "K2B". In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on. (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 for 1024^2, though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, binary powers of two be indicated by the letter B followed by the exponent, similar to E in decimal scientific notation; thus one would write 3B20 for 3 × 2^20 = 3,145,728. This convention is still used on some calculators to present binary floating-point numbers today. In 1969, Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB. Consumer confusion. The ambiguous meanings of "kilo", "mega", "giga", etc., have caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software that used them in the binary sense, such as the Apple Macintosh in 1984. For example, a hard drive marketed as "1 TB" could be reported as having only "931 GB". The confusion was compounded by the fact that RAM manufacturers used the binary sense too. Legal disputes. 
The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives. Early cases. Early cases (2004–2007) were settled prior to any court ruling, with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes. "Willem Vroegh v. Eastman Kodak Company". On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc.; PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks, alleging that their descriptions of the capacity of their flash memory cards were false and misleading. Vroegh claimed that a 256 MB flash memory device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024^2 for megabyte and 1024^3 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a megabyte as one million bytes, but stated that the industry has largely ignored the IEC standards. The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device". "Orin Safier v. Western Digital Corporation". On 7 July 2005, an action entitled "Orin Safier v. Western Digital Corporation, et al." was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ. Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006, with 14 June 2006 as the Final Approval hearing date. Western Digital offered to compensate customers with a gratis download of backup and recovery software that they valued at US$30. They also paid fees and expenses to the San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising. Western Digital included this footnote in their settlement: "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items." "Cho v. Seagate Technology (US) Holdings, Inc.". A lawsuit ("Cho v. Seagate Technology (US) Holdings, Inc.", San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. 
The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with gratis backup software or a 5% refund on the cost of the drives. "Dinan et al. v. SanDisk LLC". On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of "GB" to mean 1,000,000,000 bytes. The Ninth Circuit affirmed in February 2021. IEC 1999 Standard. In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb", for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10, so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500 × 10^9 bytes, exactly or approximately, rather than 500 × 2^30 bytes (= 536,870,912,000) or 0.5 × 2^40 bytes (= 549,755,813,888). The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively. In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2 Amendment 2). The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts. The harmonized ISO/IEC 80000-13:2025 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on. The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1024 bits (2^10 bits), which is 1 kibibit." The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis. Other standards bodies and organizations. The IEC standard binary prefixes are supported by other standardization bodies and technical organizations. The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web page documenting them, describing and justifying their use. 
NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as "bee". NIST has stated that the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them. As of 2014, the microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols "K", "M" and "G" are still commonly used with the binary sense for memory sizes. On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. However, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as "Spectrum" or "Computer". The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples and recommends the use of the IEC prefixes as an alternative, since units of information are not included in the SI. The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes. The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007. Current practice. Some computer industry participants, such as Hewlett-Packard (HP) and IBM, have adopted or recommended IEC binary prefixes as part of their general documentation policies. As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and memory modules, and of the cache of computer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2^20 bytes, not 512 × 10^6 bytes. JEDEC continues to include the customary binary definitions of "kilo", "mega", and "giga" in the document "Terms, Definitions, and Letter Symbols", and still uses those definitions in its memory standards. On the other hand, the SI prefixes with powers-of-ten meanings are generally used for the capacity of external storage units, such as disk drives, solid state drives, and USB flash drives, except for some flash memory chips intended to be used as EEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion. The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates and clock speeds. Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.) or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as the GNU ls command, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning. Other uses. While the binary prefixes are predominantly used with units of data, bits and bytes, they may be used with other units of measure. 
For example, in signal processing it may be convenient to use a binary prefix with the unit of frequency, hertz (Hz), to produce a unit such as the kibihertz (KiHz), equal to 1024 Hz.
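The practical difference between the two conventions, as reported by software, can be illustrated with a small helper. The following Python sketch (a minimal illustration, not the algorithm of any particular operating system) formats the same byte count with SI and with IEC prefixes, reproducing the "1 TB" drive reported as roughly "931 GB" described earlier:

def human(nbytes, binary=False):
    # Format a byte count with SI (kB, MB, ...) or IEC (KiB, MiB, ...) prefixes.
    base = 1024 if binary else 1000
    units = ["B", "KiB", "MiB", "GiB", "TiB"] if binary else ["B", "kB", "MB", "GB", "TB"]
    value = float(nbytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= base

print(human(10**12))               # "1.0 TB"   (decimal, as marketed)
print(human(10**12, binary=True))  # "931.3 GiB" (the binary view shown as "931 GB")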
4078
45733435
https://en.wikipedia.org/wiki?curid=4078
National Baseball Hall of Fame and Museum
The National Baseball Hall of Fame and Museum is a history museum and hall of fame in Cooperstown, New York, operated by a private foundation. It serves as the central collection and gathering space for the history of baseball in the United States, displaying baseball-related artifacts and exhibits and honoring those who have excelled in playing, managing, and serving the sport. The Hall's motto is "Preserving History, Honoring Excellence, Connecting Generations". Cooperstown is often used as shorthand (or a metonym) for the National Baseball Hall of Fame and Museum. The museum also established, and still manages, the process for electing honorees into the Hall of Fame. The Hall of Fame was established in 1939 by Stephen Carlton Clark, an heir to the Singer Sewing Machine fortune. Clark sought to bring tourists to the village hurt by the Great Depression, which reduced the local tourist trade, and Prohibition, which devastated the local hops industry. Clark constructed the Hall of Fame's building, which was dedicated on June 12, 1939. (His granddaughter, Jane Forbes Clark, is the current chairman of the board of directors.) The mythology that future Civil War hero Abner Doubleday invented baseball in Cooperstown in the 1830s was instrumental in the placement and early marketing of the Hall. An expanded library and research facility opened in 1994. Dale Petroskey became the organization's president in 1999. In 2002, the Hall launched "Baseball as America", a traveling exhibit that toured ten American museums over six years. The Hall of Fame has since also sponsored educational programming on the Internet to bring the Hall of Fame to schoolchildren who might not visit. The Hall and Museum completed a series of renovations in spring 2005. The Hall of Fame also presents an annual exhibit at FanFest at the Major League Baseball All-Star Game. Inductees. Among baseball fans, "Hall of Fame" means not only the museum and facility in Cooperstown, New York, but the pantheon of players, managers, umpires, executives, and pioneers who have been inducted into the Hall. The first five men elected were Ty Cobb, Babe Ruth, Honus Wagner, Christy Mathewson and Walter Johnson, chosen in 1936; roughly 20 more were selected before the entire group was inducted at the Hall's 1939 opening. As of 2025, 351 people had been elected to the Hall of Fame, including 278 former professional players, 23 managers, 10 umpires, and 40 pioneers, executives, and organizers. 119 members of the Hall of Fame have been inducted posthumously, including four who died after their selection was announced. Of the 39 members primarily recognized for their contributions to Negro league baseball, 31 were inducted posthumously, including all 26 selected since the 1990s. The Hall of Fame includes one woman, baseball executive Effa Manley. The newest members of the Hall of Fame, as of January 21, 2025, are Dick Allen, Dave Parker, CC Sabathia, Ichiro Suzuki, and Billy Wagner. In 2019, former Yankees closer Mariano Rivera became the first player to be elected unanimously on the writers' ballot. Derek Jeter, Marvin Miller, Ted Simmons, and Larry Walker were to be inducted in 2020, but their induction ceremony was delayed by the COVID-19 pandemic until September 8, 2021. The ceremony was open to the public, as COVID restrictions had been lifted. Selection process. 
Players are currently inducted into the Hall of Fame through election by either the Baseball Writers' Association of America (or BBWAA), or the Veterans Committee, which now consists of four subcommittees, each of which considers and votes for candidates from a separate era of baseball. Five years after retirement, any player with 10 years of major league experience who passes a screening committee (which removes from consideration players of clearly lesser qualification) is eligible to be elected by BBWAA members with 10 years' membership or more who also have been actively covering MLB at any time in the 10 years preceding the election (the latter requirement was added for the 2016 election). From a final ballot typically including 25–40 candidates, each writer may vote for up to 10 players; until the late 1950s, voters were advised to cast votes for the maximum 10 candidates. Any player named on 75% or more of all ballots cast is elected. A player who is named on fewer than 5% of ballots is dropped from future elections. In some instances, the screening committee had restored their names to later ballots, but in the mid-1990s, dropped players were made permanently ineligible for Hall of Fame consideration, even by the Veterans Committee. A 2001 change in the election procedures restored the eligibility of these dropped players; while their names will not appear on future BBWAA ballots, they may be considered by the Veterans Committee. Players receiving 5% or more of the votes but fewer than 75% are reconsidered annually until a maximum of ten years of eligibility (lowered from fifteen years for the 2015 election). Under special circumstances, certain players may be deemed eligible for induction even though they have not met all requirements. Addie Joss was elected in 1978, despite only playing nine seasons before he died of meningitis. Additionally, if an otherwise eligible player dies before his fifth year of retirement, then that player may be placed on the ballot at the first election at least six months after his death. Roberto Clemente set the precedent: the writers put him up for consideration after his death on New Year's Eve, 1972, and he was inducted in 1973. The five-year waiting period was established in 1954 after an evolutionary process. In 1936 all players were eligible, including active ones. From the 1937 election until the 1945 election, there was no waiting period, so any retired player was eligible, but writers were discouraged from voting for current major leaguers. Since there was no formal rule preventing a writer from casting a ballot for an active player, the scribes did not always comply with the informal guideline; Joe DiMaggio received a vote in 1945, for example. From the 1946 election until the 1954 election, an official one-year waiting period was in effect. (DiMaggio, for example, retired after the 1951 season and was first eligible in the 1953 election.) The modern rule establishing a wait of five years was passed in 1954, although those who had already been eligible under the old rule were grandfathered into the ballot, thus permitting Joe DiMaggio to be elected within four years of his retirement. Contrary to popular belief, no formal exception was made for Lou Gehrig (other than to hold a special one-man election for him): there was no waiting period at that time, and Gehrig met all other qualifications, so he would have been eligible for the next regular election after he retired during the 1939 season. 
However, the BBWAA decided to hold a special election at the 1939 Winter Meetings in Cincinnati, specifically to elect Gehrig (most likely because it was known that he was terminally ill, making it uncertain that he would live long enough to see another election). Nobody else was on that ballot, and the numerical results have never been made public. Since no elections were held in 1940 or 1941, the special election permitted Gehrig to enter the Hall while still alive. If a player fails to be elected by the BBWAA within 10 years of his eligibility for election, he may be selected by the Veterans Committee. Following changes to the election process for that body made in 2010 and 2016, the Veterans Committee is now responsible for electing all otherwise eligible candidates who are not eligible for the BBWAA ballot — both long-retired players and non-playing personnel (managers, umpires, and executives). From 2011 to 2016, each candidate could be considered once every three years; now, the frequency depends on the era in which an individual made his greatest contributions. A more complete discussion of the new process is available below. From 2008 to 2010, following changes made by the Hall in July 2007, the main Veterans Committee, then made up of living Hall of Famers, voted only on players whose careers began in 1943 or later. These changes also established three separate committees to select other figures. Players of the Negro leagues have also been considered at various times, beginning in 1971. In 2005, the Hall completed a study on African American players between the late 19th century and the integration of the major leagues in 1947, and conducted a special election for such players in February 2006; seventeen figures from the Negro leagues were chosen in that election, in addition to the eighteen previously selected. Following the 2010 changes, Negro leagues figures were primarily considered for induction alongside other figures from the 1871–1946 era, called the "Pre-Integration Era" by the Hall; since 2016, Negro leagues figures are primarily considered alongside other figures from what the Hall calls the "Early Baseball" era (1871–1949). Predictably, the selection process catalyzes endless debate among baseball fans over the merits of various candidates. Even players elected years ago remain the subjects of discussion as to whether they deserved election. For example, Bill James' 1994 book "Whatever Happened to the Hall of Fame?" goes into detail about who he believes does and does not belong in the Hall of Fame. Non-induction of banned players. The selection rules for the Baseball Hall of Fame were modified to prevent the induction of anyone on Baseball's "permanently ineligible" list. Until they were removed from the list in 2025, the most prominent former players to be affected were Pete Rose and "Shoeless Joe" Jackson; many others have been barred from participation in MLB, but none have Hall of Fame qualifications on the level of Jackson or Rose. Jackson and Rose were both banned from MLB for life for actions related to gambling on games involving their own teams. Jackson was determined to have cooperated with those who conspired to intentionally lose the 1919 World Series, and to have accepted payment for losing, although his actual level of culpability is fiercely debated. 
The ensuing Black Sox Scandal led directly to baseball's Rule 21, prominently posted in every clubhouse locker room, which mandates permanent banishment from MLB for having a gambling interest of any sort on a game in which a player, manager or umpire is directly involved. Rose voluntarily accepted a permanent spot on the ineligible list in return for MLB's promise to make no official finding in relation to alleged betting on the Cincinnati Reds when he was their manager in the 1980s. No credible evidence has ever emerged to support allegations that Rose bet against his team or that his betting influenced his managerial decisions; nevertheless, the betting constituted a clear violation of the aforementioned Rule 21. After years of denial, Rose admitted that he bet on the Reds in his 2004 autobiography "My Prison Without Bars". Before the ban was lifted in 2025, baseball fans were deeply split on the issue of whether Rose and Jackson (both deceased) should remain banned or have their punishments posthumously revoked. Writer Bill James, though he advocated Rose eventually making it into the Hall of Fame, compared the people who want to put Jackson in the Hall of Fame to "those women who show up at murder trials wanting to marry the cute murderer". Changes to Veterans Committee process. The actions and composition of the Veterans Committee have at times been controversial, with occasional selections of contemporaries and teammates of the committee members over seemingly more worthy candidates. In 2001, the Veterans Committee was reformed to comprise the living Hall of Fame members and other honorees. The revamped Committee held three elections: in 2003 and 2007 for both players and non-players, and in 2005 for players only. No individual was elected in that time, sparking criticism among some observers who expressed doubt as to whether the new Veterans Committee would ever elect a player. The Committee members, most of whom were Hall members, were accused of being reluctant to elect new candidates in the hope of heightening the value of their own selection. After no one was selected for the third consecutive election in 2007, Hall of Famer Mike Schmidt noted, "The same thing happens every year. The current members want to preserve the prestige as much as possible, and are unwilling to open the doors." In 2007, the committee and its selection processes were again reorganized; the main committee then included all living members of the Hall, and voted on a reduced number of candidates from among players whose careers began in 1943 or later. Separate committees, including sportswriters and broadcasters, would select umpires, managers and executives, as well as players from earlier eras. In the first election to be held under the 2007 revisions, two managers and three executives were elected in December 2007 as part of the 2008 election process. The next Veterans Committee elections for players were held in December 2008 as part of the 2009 election process; the main committee did not select a player, while the panel for pre–World War II players elected Joe Gordon in its first and ultimately only vote. The main committee voted as part of the election process for inductions in odd-numbered years, while the pre-World War II panel would vote every five years, and the panel for umpires, managers, and executives voted as part of the election process for inductions in even-numbered years. 
Further changes to the Veterans Committee process were announced by the Hall in July 2010, July 2016, and April 2022. Current structure. Per the latest changes, announced on April 22, 2022, the multiple eras previously utilized were collapsed to three, to be voted on in an annual rotation (one per year). A one-year waiting period beyond potential BBWAA eligibility (which had been abolished in 2016) was reintroduced, thus restricting the committee to considering players retired for at least 16 seasons. Eligibility. The eligibility criteria for Era Committee consideration differ between players, managers, and executives. Players and managers with multiple teams. While the text on a player's or manager's plaque lists all teams for which the inductee was a member in that specific role, inductees are usually depicted wearing the cap of a specific team, though in a few cases, like umpires, they wear caps without logos. (Executives are not depicted wearing caps.) Additionally, as of 2015, inductee biographies on the Hall's website for all players and managers, and executives who were associated with specific teams, list a "primary team", which does not necessarily match the cap logo. The Hall selects the logo "based on where that player makes his most indelible mark." Although the Hall always made the final decision on which logo was shown, until 2001 the Hall deferred to the wishes of players or managers whose careers were linked with multiple teams. For inductees associated with multiple teams, the "primary team" is the team for which the inductee spent the largest portion of his career, except for Nolan Ryan, whose primary team is listed as the Angels despite playing one fewer season for that team than for the Astros. In 2001, the Hall of Fame decided to change the policy on cap logo selection, as a result of rumors that some teams were offering compensation, such as number retirement, money, or organizational jobs, in exchange for the cap designation. (For example, though Wade Boggs denied the claims, some media reports had said that his contract with the Tampa Bay Devil Rays required him to request depiction in the Hall of Fame as a Devil Ray.) The Hall decided that it would no longer defer to the inductee, though the player's wishes would be considered, when deciding on the logo to appear on the plaque. Several newly elected members were affected by the change. The museum. Sam Crane (who had played a decade in 19th-century baseball before becoming a manager and sportswriter) had first broached the idea of a memorial to the great players of the past in what was believed to have been the birthplace of baseball, Cooperstown, New York, but the idea did not gain much momentum until after his death in 1925. In 1934, the idea of establishing a Baseball Hall of Fame and Museum was devised by several individuals, among them Ford C. Frick (president of the National League) and Alexander Cleland, a Scottish immigrant who served as the Museum's first executive secretary for the next seven years, working in the interests of both the Village and Major League Baseball. Stephen Carlton Clark (a Cooperstown native) paid for the construction of the museum, which was planned to open in 1939 to mark the "Centennial of Baseball"; the project also included renovations to Doubleday Field. William Beattie served as the first curator of the museum. 
According to the Hall of Fame, approximately 260,000 visitors enter the museum each year, and the running total has surpassed 17 million. These visitors see only a fraction of its 40,000 artifacts, 3 million library items (such as newspaper clippings and photos) and 140,000 baseball cards. The Hall has seen a noticeable decrease in attendance since the mid-2010s. A 2013 story on ESPN.com about the village of Cooperstown and its relation to the game partially linked the reduced attendance to Cooperstown Dreams Park, a youth baseball complex in the nearby town of Hartwick. The 22 fields at Dreams Park currently draw 17,000 players each summer for a week of intensive play; while the complex includes housing for the players, their parents and grandparents must stay elsewhere. According to the story: "Prior to Dreams Park, a room might be filled for a week by several sets of tourists. Now, that room will be taken by just one family for the week, and that family may only go into Cooperstown and the Hall of Fame once. While there are other contributing factors (the recession and high gas prices among them), the Hall's attendance has tumbled since Dreams Park opened. The Hall drew 383,000 visitors in 1999. It drew 262,000 last year." Notable events. 1982 unauthorized sales. A controversy erupted in 1982, when it emerged that some historic items given to the Hall had been sold on the collectibles market. The items had been lent to the Baseball Commissioner's office, become mixed up with other property owned by the Commissioner's office and employees of the office, and been moved to the garage of Joe Reichler, an assistant to Commissioner Bowie Kuhn, who sold the items to resolve his personal financial difficulties. Under pressure from the New York Attorney General, the Commissioner's Office made reparations, but the negative publicity damaged the Hall of Fame's reputation and made it more difficult for it to solicit donations. 2014 commemorative coins. In 2012, Congress passed and President Barack Obama signed a law ordering the United States Mint to produce and sell commemorative, non-circulating coins to benefit the private, non-profit Hall. The bill was introduced in the United States House of Representatives by Rep. Richard Hanna, a Republican from New York, and passed the House on October 26, 2011. The coins, which depict baseball gloves and balls, are the first concave designs produced by the Mint. The mintage included 50,000 gold coins, 400,000 silver coins, and 750,000 clad (nickel-copper) coins. The Mint released them on March 27, 2014, and the gold and silver editions quickly sold out. The Hall receives money from surcharges included in the sale price: a total of $9.5 million if all the coins are sold.
4079
41740461
https://en.wikipedia.org/wiki?curid=4079
BPP (complexity)
In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest "practical" classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine. Informally, a problem is in BPP if there is an algorithm for it that is allowed to flip coins and make random decisions, is guaranteed to run in polynomial time, and on any given run has a probability of at most 1/3 of giving the wrong answer, whether the answer is YES or NO. Definition. A language "L" is in BPP if and only if there exists a probabilistic Turing machine "M", such that "M" runs for polynomial time on all inputs; for all "x" in "L", "M" outputs 1 with probability greater than or equal to 2/3; and for all "x" not in "L", "M" outputs 1 with probability less than or equal to 1/3. Unlike the complexity class ZPP, the machine "M" is required to run for polynomial time on all inputs, regardless of the outcome of the random coin flips. Alternatively, BPP can be defined using only deterministic Turing machines. A language "L" is in BPP if and only if there exists a polynomial "p" and a deterministic Turing machine "M", such that "M" runs for polynomial time on all inputs; for all "x" in "L", the fraction of strings "y" of length "p"(|"x"|) which satisfy "M"("x", "y") = 1 is greater than or equal to 2/3; and for all "x" not in "L", the fraction of strings "y" of length "p"(|"x"|) which satisfy "M"("x", "y") = 1 is less than or equal to 1/3. In this definition, the string "y" corresponds to the output of the random coin flips that the probabilistic Turing machine would have made. For some applications this definition is preferable since it does not mention probabilistic Turing machines. In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. Modifying the definition to use any constant between 0 and 1/2 (exclusive) in place of 1/3 would not change the resulting set BPP. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1/2^100, this would result in the same class of problems. The error probability does not even have to be constant: the same class of problems is defined by allowing error as high as 1/2 − "n"^(−"c") on the one hand, or requiring error as small as 2^(−"n"^"c") on the other hand, where "c" is any positive constant and "n" is the length of the input. This flexibility in the choice of error probability is based on the idea of running an error-prone algorithm many times, and using the majority result of the runs to obtain a more accurate algorithm. The chance that the majority of the runs are wrong drops off exponentially as a consequence of the Chernoff bound. Problems. All problems in P are obviously also in BPP. However, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP. For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime. However, in the 2002 paper "PRIMES is in P", Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena found a deterministic polynomial-time algorithm for this problem, thus showing that it is in P. An important example of a problem in BPP (in fact in co-RP) still not known to be in P is polynomial identity testing, the problem of determining whether a polynomial is identically equal to the zero polynomial when one has access to the value of the polynomial for any given input, but not to the coefficients. In other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? A randomized test along these lines is sketched below.
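A minimal sketch of such a test in Python follows, assuming only black-box access to the polynomial; the function "probably_zero", its trial count, and the choice of sample set are illustrative, not part of any standard library.

import random

def probably_zero(poly_eval, num_vars, degree, trials=20):
    # poly_eval: black-box evaluator for the polynomial (we may query
    # values but never see coefficients); degree is the total degree d.
    # Schwartz-Zippel: a nonzero polynomial vanishes at a random point of
    # S^num_vars with probability at most d/|S|.
    sample_size = 3 * degree + 1  # |S| > 3d makes the per-trial error < 1/3
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(num_vars)]
        if poly_eval(point) != 0:
            return False  # witness found: certainly not the zero polynomial
    return True  # no witness found: identically zero with high probability

A "nonzero" verdict is always correct, while a "zero" verdict errs with probability below 1/3 per trial and hence below (1/3)^20 overall, which is why the problem lies in co-RP.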
It suffices to choose each variable's value uniformly at random from a finite subset of at least "d" values to achieve bounded error probability, where "d" is the total degree of the polynomial. Related classes. If the access to randomness is removed from the definition of BPP, we get the complexity class P. In the definition of the class, if we replace the ordinary Turing machine with a quantum computer, we get the class BQP. Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPP_path. BPP_path is known to contain NP, and it is contained in its quantum counterpart PostBQP. A Monte Carlo algorithm is a randomized algorithm which is likely to be correct. Problems in the class BPP have Monte Carlo algorithms with polynomially bounded running time. This contrasts with a Las Vegas algorithm, a randomized algorithm which either outputs the correct answer or outputs "fail" with low probability. Las Vegas algorithms with polynomially bounded running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time. This is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time, but with very low probability. Complexity-theoretic properties. It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly (a BPP oracle machine) is not any more powerful than the machine without this extra power. In symbols, BPP^BPP = BPP. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP. It is known that RP is a subset of BPP, and BPP is a subset of PP. It is not known whether those two are strict subsets, since we don't even know if P is a strict subset of PSPACE. BPP is contained in the second level of the polynomial hierarchy and therefore it is contained in PH. More precisely, the Sipser–Lautemann theorem states that BPP ⊆ Σ_2^P ∩ Π_2^P. As a result, P = NP would lead to P = BPP, since PH collapses to P in this case. Thus either P = BPP or P ≠ NP, or both. Adleman's theorem states that membership in any language in BPP can be determined by a family of polynomial-size Boolean circuits, which means that BPP is contained in P/poly. Indeed, as a consequence of the proof of this fact, every BPP algorithm operating on inputs of bounded length can be derandomized into a deterministic algorithm using a fixed string of random bits. Finding this string may be expensive, however. Some weak separation results for Monte Carlo time classes have also been proven. Closure properties. The class BPP is closed under complementation, union, intersection, and concatenation. Relativization. Relative to oracles, we know that there exist oracles A and B such that P^A = BPP^A and P^B ≠ BPP^B. Moreover, relative to a random oracle with probability 1, P = BPP and BPP is strictly contained in NP and co-NP. There is even an oracle in which BPP = EXP^NP (and hence P < NP < BPP = EXP = NEXP), which can be iteratively constructed as follows. For a fixed E^NP (relativized) complete problem, the oracle will give correct answers with high probability if queried with the problem instance followed by a random string of length "kn" ("n" is the instance length; "k" is an appropriate small constant).
Start with "n"=1. For every instance of the problem of length "n" fix oracle answers (see lemma below) to fix the instance output. Next, provide the instance outputs for queries consisting of the instance followed by "kn"-length string, and then treat output for queries of length ≤("k"+1)"n" as fixed, and proceed with instances of length "n"+1. The lemma ensures that (for a large enough "k"), it is possible to do the construction while leaving enough strings for the relativized answers. Also, we can ensure that for the relativized , linear time suffices, even for function problems (if given a function oracle and linear output size) and with exponentially small (with linear exponent) error probability. Also, this construction is effective in that given an arbitrary oracle A we can arrange the oracle B to have and . Also, for a oracle (and hence ), one would fix the answers in the relativized E computation to a special nonanswer, thus ensuring that no fake answers are given. Derandomization. The existence of certain strong pseudorandom number generators is conjectured by most experts of the field. Such generators could replace true random numbers in any polynomial-time randomized algorithm, producing indistinguishable results. The conjecture that these generators exist implies that randomness does not give additional computational power to polynomial time computation, that is, P = RP = BPP. More strongly, the assumption that P = BPP is in some sense equivalent to the existence of strong pseudorandom number generators. László Babai, Lance Fortnow, Noam Nisan, and Avi Wigderson showed that unless EXPTIME collapses to MA, BPP is contained in formula_2 The class i.o.-SUBEXP, which stands for infinitely often SUBEXP, contains problems which have sub-exponential time algorithms for infinitely many input sizes. They also showed that P = BPP if the exponential-time hierarchy, which is defined in terms of the polynomial hierarchy and E as EPH, collapses to E; however, note that the exponential-time hierarchy is usually conjectured "not" to collapse. Russell Impagliazzo and Avi Wigderson showed that if any problem in E, where formula_3 has circuit complexity 2Ω("n") then P = BPP.
4080
374440
https://en.wikipedia.org/wiki?curid=4080
BQP
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue of the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. Definition. BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language "L" is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {"Q""n" : "n" ∈ N}, such that for all "n", "Q""n" takes "n" qubits as input and outputs 1 bit; for all "x" in "L", Pr("Q"|"x"|("x") = 1) ≥ 2/3; and for all "x" not in "L", Pr("Q"|"x"|("x") = 0) ≥ 2/3. Alternatively, one can define BQP in terms of quantum Turing machines. A language "L" is in BQP if and only if there exists a polynomial quantum Turing machine that accepts "L" with an error probability of at most 1/3 for all instances. Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − "n"^(−"c") on the one hand, or requiring error as small as 2^(−"n"^"c") on the other hand, where "c" is any positive constant and "n" is the length of input. Relationship to other complexity classes. BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time. BQP contains P and BPP and is contained in AWPP, PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classical complexity classes are: P ⊆ BPP ⊆ BQP ⊆ AWPP ⊆ PP ⊆ PSPACE ⊆ EXP. As the problem of whether P = PSPACE has not yet been solved, proving inequality between BQP and the classes mentioned above is expected to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊄ PH^A. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH. It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy.
Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing. Adding postselection to BQP results in the complexity class PostBQP, which is equal to PP. A complete problem for Promise-BQP. Promise-BQP is the class of promise problems that can be solved by a uniform family of quantum circuits (i.e., within BQP). Completeness proofs focus on this version of BQP. Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and to which every other problem in Promise-BQP reduces in polynomial time. APPROX-QCIRCUIT-PROB. The APPROX-QCIRCUIT-PROB problem is complete for efficient quantum computation, and the version presented below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class, for which no complete problems are known). APPROX-QCIRCUIT-PROB's completeness makes it useful for proofs showing the relationships between other complexity classes and BQP. Given a description of a quantum circuit "C" acting on "n" qubits with "m" gates, where "m" is a polynomial in "n" and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases: either the probability that the first qubit of "C"|0...0⟩ is measured to be 1 is at least α, or that probability is at most β. Here, there is a promise on the inputs, as the problem does not specify the behavior if an instance is not covered by these two cases. Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB. Proof. Suppose we have an algorithm that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit "C" acting on "n" qubits and two numbers α, β ∈ [0, 1] with α > β, it distinguishes between the above two cases. We can solve any problem "L" in BQP with this oracle, by setting α = 2/3 and β = 1/3. For any "L" ∈ BQP, there exists a family of quantum circuits {"Q""n" : "n" ∈ N} such that for all "n" and every state |"x"⟩ of "n" qubits, Pr("Q""n"(|"x"⟩) = 1) ≥ 2/3 if "x" ∈ "L", and Pr("Q""n"(|"x"⟩) = 1) ≤ 1/3 if "x" ∉ "L". Fix an input |"x"⟩ of "n" qubits and the corresponding quantum circuit "Q""n". We can first construct a circuit "C""x" such that "C""x"|0...0⟩ = |"x"⟩. This can be done easily by hardwiring "x" and applying a sequence of CNOT gates to flip the qubits. Then we can combine the two circuits to get "Q""n""C""x", and now "Q""n""C""x"|0...0⟩ = "Q""n"|"x"⟩. And finally, necessarily the result of "Q""n" is obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurement and reroute the circuit so that by measuring the first qubit of "Q""n""C""x"|0...0⟩, we get the output. This will be our circuit "C", and we decide the membership of "x" in "L" by running the oracle on "C" with α = 2/3 and β = 1/3. By definition of BQP, we will either fall into the first case (acceptance) or the second case (rejection), so "L" reduces to APPROX-QCIRCUIT-PROB. BQP and EXP. We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP, since APPROX-QCIRCUIT-PROB is BQP-complete. The straightforward simulation does this: represent the state of the "n" qubits as a vector of 2^"n" amplitudes, apply the "m" gates one at a time as matrix-vector multiplications, and read off the acceptance probability; this takes exponential time. Note that this algorithm also requires 2^"O"("n") space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity. BQP and PSPACE.
Sum of histories is a technique introduced by the physicist Richard Feynman for the path integral formulation. APPROX-QCIRCUIT-PROB can be formulated in the sum-of-histories technique to show that BQP ⊆ PSPACE. Consider a quantum circuit "C" which consists of "m" gates, "G"1, "G"2, ..., "G""m", where each "G""j" comes from a universal gate set and acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input |"x"⟩, and each node in the tree has 2^"n" children, each representing a computational basis state in {0, 1}^"n". The weight on a tree edge from a node in the "j"-th level representing a state |"y"⟩ to a node in the ("j"+1)-th level representing a state |"z"⟩ is ⟨"z"|"G""j"+1|"y"⟩, the amplitude of |"z"⟩ after applying "G""j"+1 on |"y"⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being |"y"⟩, we sum up the amplitudes of all root-to-leaf paths that end at a node representing |"y"⟩. More formally, for the quantum circuit "C", its sum-over-histories tree is a tree of depth "m", with one level for each gate "G""j" in addition to the root, and with branching factor 2^"n". Notice that in the sum-over-histories algorithm to compute some amplitude ⟨"y"|"C"|"x"⟩, only one history is stored at any point in the computation. Hence, the sum-over-histories algorithm uses "O"("nm") space to compute the amplitude for any "y", since "O"("nm") bits are needed to store the histories in addition to some workspace variables. Therefore, in polynomial space, we may compute the sum of |⟨"y"|"C"|"x"⟩|^2 over all "y" with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit. Notice that compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time instead. In fact it takes 2^"O"("nm") time to calculate a single amplitude! BQP and PP. A similar sum-over-histories argument can be used to show that BQP ⊆ PP. P and BQP. We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit. It is conjectured that BQP solves hard problems outside of P, specifically problems in NP. The claim is indefinite because we don't know if P = NP, so we don't know if those problems are actually in P. Below is some evidence for the conjecture: integer factorization and the discrete logarithm problem, for example, can be solved in polynomial time by a quantum computer using Shor's algorithm, yet no polynomial-time classical algorithm is known for either.
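To make the sum-over-histories bookkeeping concrete, here is a sketch in Python that computes a single amplitude ⟨"y"|"C"|"x"⟩ by enumerating histories. Representing each gate as a dense 2^"n" × 2^"n" matrix is an illustrative assumption made for clarity, and the code deliberately mirrors the low-space, exponential-time tradeoff described above rather than any practical simulator.

import itertools

def amplitude(gates, x, y, n):
    # <y|C|x> for C = G_m ... G_1 on n qubits, as a sum over histories:
    # sequences of intermediate basis states between consecutive gates.
    # gates[j] is a dense 2^n x 2^n matrix (a list of lists of complex).
    N = 2 ** n
    m = len(gates)
    total = 0j
    for path in itertools.product(range(N), repeat=m - 1):
        states = (x,) + path + (y,)
        amp = 1 + 0j
        for j in range(m):
            amp *= gates[j][states[j + 1]][states[j]]  # <s_{j+1}|G_{j+1}|s_j>
            if amp == 0:
                break  # dead branch: this history contributes nothing
        total += amp
    return total

Only the current history and a running product are ever stored, matching the polynomial space bound, while the outer loop ranges over 2^("n"("m"−1)) histories, matching the exponential running time noted above.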
4086
1281127911
https://en.wikipedia.org/wiki?curid=4086
Brainfuck
Brainfuck is an esoteric programming language created in 1993 by the Swiss student Urban Müller. Designed to be extremely minimalistic, the language consists of only eight simple commands, a data pointer, and an instruction pointer. Brainfuck is an example of a so-called Turing tarpit: it can be used to write any program, but it is not practical to do so, because it provides so little abstraction that programs get very long or complicated. While Brainfuck is fully Turing-complete, it is not intended for practical use but to challenge and amuse programmers. Brainfuck requires one to break tasks down into small and simple instructions. The language takes its name from the slang term "brainfuck", which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming. Because the language's name contains profanity, many substitutes are used, such as brainfsck, branflakes, brainoof, brainfrick, BrainF, and BF. History. Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in Motorola 68000 assembly on the Amiga and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language and challenged the reader: "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes. Language design. The language consists of eight commands. A Brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command. The Brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding). The eight language commands each consist of a single character:

>   Move the data pointer one cell to the right.
<   Move the data pointer one cell to the left.
+   Increment the byte at the data pointer.
-   Decrement the byte at the data pointer.
.   Output the byte at the data pointer.
,   Accept one byte of input and store its value in the byte at the data pointer.
[   If the byte at the data pointer is zero, jump the instruction pointer forward to the command after the matching ].
]   If the byte at the data pointer is nonzero, jump the instruction pointer back to the command after the matching [.

[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two. Brainfuck programs are usually difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing-complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory and time. A variety of Brainfuck programs have been written.
Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C, due to its simplicity (a minimal interpreter sketch appears at the end of this article). Brainfuck interpreters written in the Brainfuck language itself also exist. Examples. Adding two values. As a first, simple example, the following code snippet will add the current cell's value to the next cell. Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.

[->+<]

This can be incorporated into a simple addition program as follows:

++       Cell c0 = 2
> +++++  Cell c1 = 5

[        Start your loops with your cell pointer on the loop counter (c1 in our case)
< +      Add 1 to c0
> -      Subtract 1 from c1
]        End your loops with the cell pointer on the loop counter

At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded

To display the ASCII character "7" we must add 48 to the value 7
We use a loop to compute 48 = 6 * 8

++++ ++++  c1 = 8 and this will be our loop counter again
[
< +++ +++  Add 6 to c0
> -        Subtract 1 from c1
]
< .        Print out c0 which has the value 55 which translates to "7"!

Hello World! The following program prints "Hello World!" and a newline to the screen:

[ This program prints "Hello World!" and a newline to the screen; its
  length is 106 active command characters. [It is not the shortest.]

  This loop is an "initial comment loop", a simple way of adding a comment
  to a BF program such that you don't have to worry about any command
  characters. Any ".", ",", "+", "-", "<" and ">" characters are simply
  ignored, the "[" and "]" characters just have to be balanced. This
  loop and the commands it contains are ignored because the current cell
  defaults to a value of 0; the 0 value causes this loop to be skipped.
]
++++++++                Set Cell #0 to 8
[
    >++++               Add 4 to Cell #1; this will always set Cell #1 to 4
    [                   as the cell will be cleared by the loop
        >++             Add 2 to Cell #2
        >+++            Add 3 to Cell #3
        >+++            Add 3 to Cell #4
        >+              Add 1 to Cell #5
        <<<<-           Decrement the loop counter in Cell #1
    ]                   Loop until Cell #1 is zero; number of iterations is 4
    >+                  Add 1 to Cell #2
    >+                  Add 1 to Cell #3
    >-                  Subtract 1 from Cell #4
    >>+                 Add 1 to Cell #6
    [<]                 Move back to the first zero cell you find; this will
                        be Cell #1 which was cleared by the previous loop
    <-                  Decrement the loop Counter in Cell #0
]                       Loop until Cell #0 is zero; number of iterations is 8

The result of this is:
Cell no :   0   1   2   3   4   5   6
Contents:   0   0  72 104  88  32   8
Pointer :   ^

>>.                     Cell #2 has value 72 which is 'H'
>---.                   Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++.           Likewise for 'llo' from Cell #3
>>.                     Cell #5 is 32 for the space
<-.                     Subtract 1 from Cell #4 for 87 to give a 'W'
<.                      Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------.    Cell #3 for 'rl' and 'd'
>>+.                    Add 1 to Cell #5 gives us an exclamation point
>++.                    And finally a newline from Cell #6

For readability, this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands ><+-.,[] so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

ROT13. This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78–90), and vice versa.
Also it must map a-m (97–109) to n-z (110–122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.

-,+[                         Read first character and start outer character reading loop
    -[                       Skip forward if character is 0
        >>++++[>++++++++<-]  Set up divisor (32) for division loop
                             (MEMORY LAYOUT: dividend copy remainder divisor quotient zero zero)
        <+<-[                Set up dividend (x minus 1) and enter division loop
            >+>+>-[>>>]      Increase copy and remainder / reduce divisor / Normal case: skip forward
        <]<[                 Zero that flag unless quotient was 2 or 3; zero quotient; check flag
        ++++++++++++<[       If flag then set up divisor (13) for second division loop
                             (MEMORY LAYOUT: zero copy dividend divisor remainder quotient zero zero)
            >-[>+>>]         Reduce divisor; Normal case: increase remainder
            >[+[<+>-]>+>>]   Special case: increase remainder / move it back to divisor / increase quotient
            <<<<<-           Decrease dividend
        ]                    End division loop
        >>[<+>-]             Add remainder back to divisor to get a useful 13
        >[                   Skip forward if quotient was 0
            -[               Decrement quotient and skip forward if quotient was 1
                -<<[-]>>     Zero quotient and divisor if quotient was 2
            ]<<[<<->>-]>>    Zero divisor and subtract 13 from copy if quotient was 1
        ]<<[<<+>>-]          Zero divisor and add 13 to copy if quotient was 0
    ]                        End outer skip loop (jump to here if ((character minus 1)/32) was not 2 or 3)
    <[-]                     Clear remainder from first division if second division was skipped
    <.[-]                    Output ROT13ed character from copy and clear it
    <-,+                     Read next character
]                            End character reading loop

Simulation of abiogenesis. In 2024, a Google research project used a slightly modified 10-command version of Brainfuck as the basis of an artificial digital environment. In this environment, they found that self-replicating programs arose naturally and competed with each other for domination of the environment.
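As noted above, an interpreter is easy to write in a conventional language. The following is a minimal, illustrative sketch in Python rather than C; the 30,000-cell tape, byte wrapping, and EOF-reads-as-zero behavior are common conventions rather than requirements fixed by the language.

def brainfuck(code, data=""):
    # Precompute matching bracket positions so [ and ] jump in O(1).
    stack, match = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            match[i], match[j] = j, i
    tape = [0] * 30000      # the customary array of at least 30,000 byte cells
    ptr = ip = 0            # data pointer and instruction pointer
    inp = iter(data)
    out = []
    while ip < len(code):
        c = code[ip]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(next(inp, "\0"))  # EOF convention: store 0
        elif c == "[" and tape[ptr] == 0:
            ip = match[ip]
        elif c == "]" and tape[ptr] != 0:
            ip = match[ip]
        ip += 1             # every other character is treated as a comment
    return "".join(out)

Feeding it the compact "Hello World!" program above yields the expected output; the precomputed bracket table is what makes "[" and "]" behave like matched parentheses.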
4091
48523215
https://en.wikipedia.org/wiki?curid=4091
Bartolomeo Ammannati
Bartolomeo Ammannati (18 June 1511 – 13 April 1592) was an Italian architect and sculptor, born at Settignano, near Florence, Italy. He studied under Baccio Bandinelli and Jacopo Sansovino (assisting on the design of the Library of St. Mark's, the Biblioteca Marciana, in Venice) and closely imitated the style of Michelangelo. He was more distinguished in architecture than in sculpture. He worked in Rome in collaboration with Vignola and Vasari (including designs for the Villa Giulia), but also on works at Lucca. From 1558 to 1570 he worked on the refurbishment and enlargement of the Pitti Palace, creating the courtyard consisting of three wings with rusticated facades and a lower portico leading to the amphitheatre in the Boboli Gardens. His design mirrored the appearance of the main external facade of the Pitti. He was also named "Consul" of the Accademia delle Arti del Disegno of Florence, which had been founded by Duke Cosimo I in 1563. In 1569, Ammannati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. Its three arches are elliptical, and though very light and elegant, the bridge survived the floods that damaged other Arno bridges at various times. Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957. Ammannati designed what is considered a prototypical Mannerist sculptural ensemble in the Fountain of Neptune ("Fontana del Nettuno"), prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Bartolommeo Bandinelli; when Bandinelli died, however, Ammannati's design bested the submissions of Benvenuto Cellini and Vincenzo Danti to gain the commission. From 1563 to 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli. He took Grand Duke Cosimo I as the model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David, and the then 87-year-old Michelangelo is said to have scoffed at Ammannati, saying that he had ruined a beautiful piece of marble, with the ditty: "Ammannato, Ammannato, che bel marmo hai rovinato!" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze reclining river gods, laughing satyrs, and marble sea horses emerging from the water. In 1550 Ammannati married Laura Battiferri, an elegant poet and an accomplished woman. Later in his life he had a religious crisis, influenced by Counter-Reformation piety, which led him to condemn his own works depicting nudity, and he left all his possessions to the Jesuits. He died in Florence in 1592.
4092
7770027
https://en.wikipedia.org/wiki?curid=4092
Bishop
A bishop is an ordained member of the clergy who is entrusted with a position of authority and oversight in a religious institution. In Christianity, bishops are normally responsible for the governance and administration of dioceses. The role or office of the bishop is called episcopacy or the episcopate. Organisationally, several Christian denominations utilise ecclesiastical structures that call for the position of bishops, while other denominations have dispensed with this office, seeing it as a symbol of power. Bishops have also exercised political authority within their dioceses. Traditionally, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles or Saint Paul. The bishops are by doctrine understood as those who possess the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest (i.e. presbyter), and then bishop is understood to hold the fullness of the ministerial priesthood, given responsibility by Christ to govern, teach and sanctify the Body of Christ (the Church). Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry. Some Pentecostal and other Protestant denominations have bishops who oversee congregations, though they do not necessarily claim apostolic succession. Etymology and terminology. The English word "bishop" derives, via Latin "episcopus", Old English "bisceop", and Middle English "bisshop", from the Greek word "epískopos", meaning "overseer" or "supervisor". Greek was the language of the early Christian church, but the term did not originate in Christianity: it had been used in Greek for several centuries before the advent of Christianity. The English words "priest" and "presbyter" both derive, via Latin, from the Greek word "presbýteros", meaning "elder" or "senior", and not originally referring to priesthood. In the early Christian era the two terms were not always clearly distinguished, but "epískopos" is used in the sense of the order or office of bishop, distinct from that of "presbýteros", in the writings attributed to Ignatius of Antioch in the second century. History in Christianity. The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters ("presbýteroi"). In Acts 11:30 and Acts 15:22, a collegiate system of government in Jerusalem is chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia. The word "presbyter" was not yet distinguished from "overseer" ("epískopos", later used exclusively to mean "bishop"), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices: presbyters (seen by many as an interchangeable term with "epískopos" or overseer) and deacons. In the First Epistle to Timothy and the Epistle to Titus in the New Testament a more clearly defined episcopate can be seen. Both letters state that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight. The authorship of both those letters is questioned by many scholars in the field, and the question whether they reflect a first- or second-century structure of church hierarchy is among the arguments used in the debate as to their authenticity.
Early sources are unclear, but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. Eventually the head or "monarchic" bishop came to rule more clearly, and local churches would eventually follow the example of the others and structure themselves after the same model, with the one bishop in clearer charge, though the role of the body of presbyters remained important. Apostolic Fathers. Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and was being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city), he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of "epískopoi" (bishops, plural) in a city. Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop: the primacy of the sacrificial priesthood and the power to forgive sins. Christian bishops and civil government. The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned. The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages. Bishops holding political office.
As well as being Archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the "justiciary" and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century. In modern times, the principality of Andorra is headed by the Co-Princes of Andorra, one of whom is the Bishop of Urgell and the other the sitting President of France, an arrangement that began with the Paréage of Andorra (1278) and was ratified in the 1993 constitution of Andorra. The office of the Papacy is inherently held by the sitting Roman Catholic Bishop of Rome. Though not originally intended to hold temporal authority, the Papacy gradually expanded its power deep into the secular realm from the Middle Ages onward, and for centuries the sitting Bishop of Rome held the most powerful governmental office in Central Italy. In modern times, the Pope is also the sovereign Prince of Vatican City, an internationally recognized micro-state located entirely within the city of Rome. In France, prior to the Revolution, representatives of the clergy (in practice, bishops and abbots of the largest monasteries) comprised the First Estate of the Estates-General. This role was abolished after the separation of Church and State was implemented during the French Revolution. In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an "ex officio" member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham had extensive vice-regal powers within his northern diocese, which was a county palatine, the County Palatine of Durham (previously, the Liberty of Durham), of which he was "ex officio" the earl. In the 19th century, a gradual process of reform was enacted, with the majority of the bishop's historic powers vested in The Crown by 1858. Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, cultural and legal jurisdiction, as well as spiritual authority, over all Eastern Orthodox Christians of the empire, as part of the Ottoman millet system. An Orthodox bishop headed the Prince-Bishopric of Montenegro from 1516 to 1852, assisted by a secular "guvernadur". More recently, Archbishop Makarios III of Cyprus served as President of Cyprus from 1960 to 1977, an extremely turbulent period on the island. In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State. Episcopacy during the English Civil War. During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy.
Presbyterianism was the polity of most Reformed churches in Europe and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and "epískopos" were not clearly distinguished, many Puritans held that this was the only form of government the church should have. The Anglican divine Richard Hooker objected to this claim in his famous work "Of the Laws of Ecclesiastical Polity" while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced. Christian churches. Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican churches. Bishops exercise leadership roles in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, certain Lutheran churches, the Anglican Communion, the Independent Catholic churches, the Independent Anglican churches, and certain other, smaller, denominations. The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop", or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in geographical size and population. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment, as in some parts of Sub-Saharan Africa, South America and the Far East, are much larger and more populous. As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility. Duties. In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, High Church Lutheranism, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons. In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons. The bishop is the ordinary minister of the sacrament of confirmation in the Latin Church, and in the Old Catholic communion only a bishop may administer this sacrament. In the Lutheran and Anglican churches, the bishop normatively administers the rite of confirmation, although in those denominations that do not have an episcopal polity, confirmation is administered by the priest. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop. Ordination of Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican bishops. Bishops in all of these communions are ordained or consecrated by other bishops through the laying on of hands. Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles, referred to as apostolic succession. In Scandinavia and the Baltic region, Lutheran churches participating in the Porvoo Communion (those of Iceland, Norway, Sweden, Finland, Estonia, and Lithuania), along with many Lutheran churches outside the Porvoo Communion (including those of Kenya, Latvia, and Russia) and the confessional Communion of Nordic Lutheran Dioceses, believe that they ordain their bishops in the apostolic succession in lines stemming from the original apostles. "The New Westminster Dictionary of Church History" states that "In Sweden the apostolic succession was preserved because the Catholic bishops were allowed to stay in office, but they had to approve changes in the ceremonies." While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Peculiar to the Catholic Church. Catholic doctrine holds that one bishop can validly ordain another priest as a bishop. Although a minimum of three participating bishops is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the church was persecuted under Communist rule. The Second Vatican Council's "Constitution on the Sacred Liturgy" stated that "when a bishop is consecrated, the laying of hands may be done by all the bishops present". The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him. Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. The Dicastery for Bishops generally oversees the selection of new bishops, with recommendations sent for the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have the duty of electing bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent. The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Church. Each bishop within the Latin Church is answerable directly to the Pope and not to any other bishop, except to metropolitans in certain oversight instances. The pope previously used the title "Patriarch of the West", but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction. Recognition of other churches' ordinations.
The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, that the ordinand is an adult male) and an Eastern Orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of "episcopi vagantes" (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy). With respect to Lutheranism, "the Catholic Church has never officially expressed its judgement on the validity of orders as they have been handed down by episcopal succession in these two national Lutheran churches" (the Evangelical Lutheran Church of Sweden and the Evangelical Lutheran Church of Finland), though it does "question how the ecclesiastical break in the 16th century has affected the apostolicity of the churches of the Reformation and thus the apostolicity of their ministry". Since Pope Leo XIII issued the bull "Apostolicae curae" in 1896, the Catholic Church has insisted that Anglican orders are invalid because of the Reformed changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969 all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has been used to argue that the strand of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England. However, other issues, such as the Anglican ordination of women, are at variance with Catholic understanding of Christian teaching and have contributed to the reaffirmation of Catholic rejection of Anglican ordinations. The Eastern Orthodox Churches do not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place, regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches. The position of the Catholic Church is slightly different. Whilst it does recognise the validity of the orders of certain groups which separated from communion with the Holy See (for instance, the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church, which received its orders directly from Utrecht and was until recently part of that communion), Catholicism does not recognise the orders of any group whose teaching is at variance with what it considers the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual.
There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy. Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not as priests or bishops. There is a mutual recognition of the validity of orders amongst the Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches. Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church. In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran national churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is modeled on either the ELCA's or the ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States, and between the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession, in line with bishops from the Evangelical Lutheran Church of Sweden, with at least one Anglican bishop serving as co-consecrator. Since going into ecumenical communion with their respective Anglican bodies, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but also serve as the principal celebrants of all pastoral ordination and installation ceremonies and diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documentations of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single six-year term and may be elected to an additional term. Although the ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-"ordinators" are given permission to perform the rites in "extraordinary" circumstances.
In practice, "extraordinary" circumstance have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned off what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies. Methodism. African Methodist Episcopal Church. In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years." Christian Methodist Episcopal Church. In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by "delegate" votes for as many years deemed until the age of 74, then the bishop must retire. Among their duties, are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. In 2010, Teresa E. Jefferson-Snorton was elected as a bishop, becoming the first woman to hold that position. As of 2024, she remains the only female bishop in CME. United Methodist Church. In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. 
In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. A bishop's responsibilities are enumerated in the "Book of Discipline of the United Methodist Church". In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, Marjorie Matthews being the first woman to be consecrated a bishop, in 1980. The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the church, and through the church into the world, and gives leadership in the quest for Christian unity and interreligious relationships. The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches. John Wesley consecrated Thomas Coke a "General Superintendent" and directed that Francis Asbury also be consecrated for the United States of America in 1784, when the Methodist Episcopal Church first became a denomination separate from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the denomination's usage. Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton. The Church of Jesus Christ of Latter-day Saints. In The Church of Jesus Christ of Latter-day Saints, the bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment. It is his duty to preside, to call local leaders, and to judge the worthiness of members for certain activities. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and the ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. The bishop is especially responsible for leading the youth, in connection with the fact that a bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen). Although members are asked to confess serious sins to him, unlike in the Catholic Church he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed. A literal descendant of Aaron has a "legal right" to act as a bishop after being found worthy and ordained by the First Presidency.
In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop. Each bishop is selected from resident members of the ward by the stake presidency with the approval of the First Presidency, and chooses two "counselors" to form a "bishopric". A priesthood holder called as bishop must be ordained a high priest if he is not already one, unlike the similar office of branch president. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. Traditionally, bishops are married, though this is not always the case. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop, and church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection. Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ. At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide church, including the church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric. As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop". Irvingism. New Apostolic Church. The New Apostolic Church (NAC) knows three classes of ministries: deacons, priests and apostles. The apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries. Of the several kinds of priestly ministries, the bishop is the highest. Nearly all bishops are installed directly by the Chief Apostle; they support and assist their superior apostle. Pentecostalism. Church of God in Christ. In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions" within COGIC, each under the authority of a bishop, sometimes called "state bishops". They can either be made up of large geographical regions of churches or of churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction while others may have several more, and each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, smaller groups of churches (grouped either by geographical situation or by similar affiliations), each under the authority of a District Superintendent who answers to the authority of the jurisdictional/state bishop.
There are currently over 170 jurisdictions in the United States and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern-day equivalents in the church of the early apostles and overseers of the New Testament church, and as the highest-ranking clergymen in the COGIC they are tasked with being the head overseers of all religious, civil and economic ministries and protocol for the church denomination. They also have the authority to appoint and ordain local pastors, elders, ministers and reverends within the denomination. The bishops of the COGIC denomination are collectively called "The Board of Bishops". Every four years, twelve COGIC bishops are elected as "The General Board" of the church, drawn from the Board of Bishops and chosen by the General Assembly of the COGIC, the body of clergy and lay delegates responsible for making and enforcing the bylaws of the denomination; the General Board works alongside the delegates of the General Assembly and the Board of Bishops to provide administration over the denomination as the church's head executive leaders. One of the twelve bishops of the General Board is also elected "presiding bishop" of the church, and two others are appointed by the presiding bishop himself as his first and second assistant presiding bishops. Bishops in the Church of God in Christ usually wear black clergy suits consisting of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, usually referred to as "Class B Civic attire". Bishops in COGIC also typically wear Anglican choir-dress-style vestments of a long purple or scarlet chimere, cuffs and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet; this is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they attend. Church of God (Cleveland, Tennessee). In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national or international positions of authority, a minister must hold the rank of ordained bishop. Pentecostal Church of God. In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was made because, internationally, the term "bishop" is more readily associated with religious leadership than the previous title. The title "bishop" is used for both the general (international) and the district (state) leaders, and is sometimes used in conjunction with the previous title, as in general (or district) superintendent/bishop. Seventh-day Adventists.
According to the Seventh-day Adventist understanding of the doctrine of the church: "The "elders" (Greek, "presbyteros") or "bishops" ("episkopos") were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer". Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17, 28; Titus 1:5, 7). "Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer". Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations." The above understanding is part of the basis of Adventist organizational structure. The worldwide Seventh-day Adventist church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally, at the top, the General Conference. At each level (with the exception of the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those elected president would in effect be the "bishop", while never actually carrying the title or being ordained as such, because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles. Others. Some Baptists also have begun taking on the title of "bishop". In some smaller Protestant denominations and independent churches, the term "bishop" is used in the same way as "pastor", to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US. In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ"; the term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office. Although it is not considered an orthodox Christian body, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism. The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory. Jehovah's Witnesses do not use the title "bishop" within their organizational structure, but appoint elders to be overseers within their congregations. The Batak Christian Protestant Church of Indonesia, the most prominent Protestant denomination in Indonesia, uses the term "Ephorus" instead of "bishop".
In the Vietnamese syncretist religion of Caodaism, bishops comprise the fifth of nine hierarchical levels and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the religion's constitutional text (revealed through séances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams (the colour varies according to branch). This is the full ceremonial dress; the simple version consists of a seven-layered turban. Dress and insignia in Christianity. Traditionally, a number of items are associated with the office of a bishop, most notably the mitre and the crosier. Other vestments and insignia vary between Eastern and Western Christianity. In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions. The mitre, zucchetto and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese, and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier. When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar. The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Church Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry). Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form and looks quite different from that of their Catholic counterparts; it consists of a long rochet worn with a chimere. In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, Eastern-style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs. Cathedra. In Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop.
This is the bishop's "cathedra" and is often called the throne. In some Christian denominations, for example the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this signifies the parish's union with the bishop. The term's use in non-Christian religions. Buddhism. The leader of the Buddhist Churches of America (BCA) is its bishop. The office also carries a Japanese title, although the English title is favored over the Japanese. Many other Buddhist terms the BCA chose to keep in their original language, but some it changed or translated into English. Between 1899 and 1944, the BCA bore the name Buddhist Mission of North America. The leader of the Buddhist Mission of North America was called "kantoku" (superintendent/director) between 1899 and 1918; in 1918 the kantoku was promoted to bishop. However, according to George J. Tanabe, the title "bishop" was in practice already used by Hawaiian Shin Buddhists (in the Honpa Hongwanji Mission of Hawaii) even when the official title was "kantoku". Bishops are also present in other Japanese Buddhist organizations. Higashi Hongan-ji's North American District, the Honpa Honganji Mission of Hawaii, the Jodo Shinshu Buddhist Temples of Canada, a Jodo Shu temple in Los Angeles, the Shingon temple Koyasan Buddhist Temple, the Sōtō Mission in Hawai‘i (a Soto Zen Buddhist institution), and the Sōtō Zen Buddhist Community of South America all have or have had leaders with the title of bishop. In the Sōtō Zen Buddhist Community of South America, the leader holds a Japanese title but is in practice referred to as "bishop". Tenrikyo. Tenrikyo is a Japanese new religion with influences from both Shinto and Buddhism. The leader of the Tenrikyo North American Mission has the title of bishop.
Bertrand Andrieu
Bertrand Andrieu (24 November 1761 – 6 December 1822) was a French engraver of medals, born in Bordeaux. In France, he was considered the restorer of the art, which had declined after the time of Louis XIV. During the last twenty years of his life, the French government commissioned him to undertake every work of major importance.
Bordeaux
Bordeaux is a city on the river Garonne in the Gironde department, southwestern France. A port city, it is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called "Bordelais" (masculine) or "Bordelaises" (feminine); the term "Bordelais" may also refer to the city and its surrounding region. The city of Bordeaux proper had a population of 259,809 in 2020 within its small municipal territory, but together with its suburbs and exurbs the Bordeaux metropolitan area had a population of 1,376,375 that same year (January 2020 census), the sixth most populated in France after Paris, Lyon, Marseille, Lille and Toulouse. Bordeaux and 27 suburban municipalities form the Bordeaux Metropolis, an indirectly elected metropolitan authority now in charge of wider metropolitan issues. The Bordeaux Metropolis, with a population of 819,604 at the January 2020 census, is the fifth most populated metropolitan council in France after those of Paris, Marseille, Lyon and Lille. Bordeaux is a world capital of wine: many châteaux and vineyards stand on the hillsides of the Gironde, and the city is home to the world's main wine fair, Vinexpo. Bordeaux is also one of the centres of gastronomy and business tourism for the organization of international congresses. It is a central and strategic hub for the aeronautics, military and space sectors, home to major companies such as Dassault Aviation, ArianeGroup, Safran and Thales. The link with aviation dates back to 1910, the year the first airplane flew over the city. A crossroads of knowledge through university research, it is home to one of the only two megajoule lasers in the world, as well as a university population of more than 130,000 students within the Bordeaux Metropolis. Bordeaux is an international tourist destination for its architectural and cultural heritage, with more than 362 historic monuments, making it, after Paris, the city with the most listed or registered monuments in France. The "Pearl of Aquitaine" was voted European Destination of the Year in a 2015 online poll. The metropolis has also received awards and rankings from international organizations: in 1957, for example, Bordeaux was awarded the Europe Prize for its efforts in transmitting the European ideal. In June 2007, the Port of the Moon in historic Bordeaux was inscribed on the UNESCO World Heritage List for its outstanding architecture and urban ensemble and in recognition of Bordeaux's international importance over the last 2,000 years. Bordeaux is also ranked as a Sufficiency city by the Globalization and World Cities Research Network. History. 5th century BC to 11th century AD. Around 300 BC, the region was the settlement of a Celtic tribe, the Bituriges Vivisci, who named the town Burdigala, a name probably of Aquitanian origin. In 107 BC, the Battle of Burdigala was fought between the Romans, who were defending the Allobroges, a Gallic tribe allied to Rome, and the Tigurini led by Divico. The Romans were defeated and their commander, the consul Lucius Cassius Longinus, was killed in battle. The city came under Roman rule around 60 BC, and it became an important commercial centre for tin and lead. The amphitheatre and the monument "Les Piliers de Tutelle" were built during this period. In 276 AD, the city was sacked by the Vandals. The Vandals attacked again in 409, followed by the Visigoths in 414 and the Franks in 498, and afterwards the city fell into a period of relative obscurity.
In the late 6th century AD the city re-emerged as the seat of a county and an archdiocese within the Merovingian kingdom of the Franks, but royal Frankish power was never strong. The city started to play a regional role as a major urban centre on the fringes of the newly founded Frankish Duchy of Vasconia. Around 585 Gallactorius was made Count of Bordeaux and fought the Basques. In 732, the city was plundered by the troops of Abd er Rahman, who stormed the fortifications and overwhelmed the Aquitanian garrison. Duke Eudes mustered a force to engage the Umayyads, eventually meeting them in the Battle of the River Garonne somewhere near the river Dordogne. The battle had a high death toll, and although Eudes was defeated he had enough troops to engage in the Battle of Poitiers and so retain his grip on Aquitaine. In 737, following his father Eudes's death, the Aquitanian duke Hunald led a rebellion, to which Charles responded by launching an expedition that captured Bordeaux. However, the city was not retained for long: the following year the Frankish commander clashed in battle with the Aquitanians but then left to take on hostile Burgundian authorities and magnates. In 745 Aquitaine faced another expedition, in which Charles's sons Pepin and Carloman challenged Hunald's power and defeated him. Hunald's son Waifer replaced him and confirmed Bordeaux as the capital city (along with Bourges in the north). During the last stage of the war against Aquitaine (760–768), Bordeaux was one of Waifer's last important strongholds to fall to the troops of King Pepin the Short. Charlemagne built the fortress of Fronsac ("Frontiacus", "Franciacus") near Bordeaux, on a hill across the border with the Basques ("Wascones"), where Basque commanders came and pledged their loyalty (769). In 778, Seguin (or Sihimin) was appointed count of Bordeaux, probably undermining the power of the Duke Lupo and possibly leading to the Battle of Roncevaux Pass. In 814, Seguin was made Duke of Vasconia, but was deposed in 816 for failing to suppress a Basque rebellion. Under the Carolingians, the Counts of Bordeaux sometimes held the title concomitantly with that of Duke of Vasconia. They were to keep the Basques in check and defend the mouth of the Garonne from the Vikings when the latter appeared in c. 844. In the autumn of 845, the Vikings raided Bordeaux and Saintes; Count Seguin II marched on them but was captured and executed. Although the port of Bordeaux was a bustling trade centre, the stability and success of the city were threatened by Viking and Norman incursions and political instability. The restoration of the Ramnulfid Dukes of Aquitaine under William IV and his successors (known as the House of Poitiers) brought continuity of government. 12th century to 15th century, the English era. From the 12th to the 15th century, Bordeaux flourished once more following the marriage of Eléonore, Duchess of Aquitaine and the last of the House of Poitiers, to Henry II Plantagenêt, Count of Anjou and grandson of Henry I of England, who succeeded to the English crown months after their wedding, bringing into being the vast Angevin Empire, which stretched from the Pyrenees to Ireland. After Henry granted tax-free trade status with England, he was adored by the locals, as the wine trade, their main source of income, became even more profitable, and the city benefited from imports of cloth and wheat. The belfry (Grosse Cloche) and the city cathedral St-André were built, the latter in 1227, incorporating the artisan quarter of Saint-Paul.
Under the terms of the Treaty of Brétigny, Bordeaux became briefly the capital of an independent state (1362–1372) under Edward, the Black Prince, but after the Battle of Castillon (1453) it was annexed by France. 15th century to 17th century. In 1462, Bordeaux obtained a local parliament. Bordeaux adhered to the Fronde and was effectively annexed to the Kingdom of France only in 1653, when the army of Louis XIV entered the city. 18th century, the golden era. The 18th century saw another golden age of Bordeaux. The Port of the Moon supplied the majority of Europe with coffee, cocoa, sugar, cotton and indigo, becoming France's busiest port and the second busiest port in the world after London. Many downtown buildings (about 5,000), including those on the quays, are from this period. Bordeaux was also a major trading centre for slaves: in total, Bordeaux shipowners deported 150,000 Africans in some 500 expeditions. French Revolution: political disruption and loss of the most profitable colony. At the beginning of the French Revolution (1789), many local revolutionaries were members of the Girondists, a party which represented the provincial bourgeoisie, favorable towards abolishing aristocratic privileges but opposed to the Revolution's social dimension. The Gironde valley's economic significance rested on the city's commercial power, which stood in stark contrast to the widespread poverty affecting many of its inhabitants. Trade and commerce drove the region's prosperity, yet a significant number of locals still struggled daily to survive for lack of food and resources. This socioeconomic disparity was fertile ground for discontent, sparking frequent episodes of mass unrest well before the tumultuous events of 1789. In 1793, the Montagnards led by Robespierre and Marat came to power. Fearing a bourgeois misappropriation of the Revolution, they executed a great number of Girondists. During the purge, the local Montagnard section renamed the city of Bordeaux "Commune-Franklin" in homage to Benjamin Franklin. At the same time, in 1791, a slave revolt broke out at Saint-Domingue (present-day Haiti), the most profitable of the French colonies. During the 18th century Bordeaux had emerged as a centre of economic activity, known at first for its successful wine trade. The city's position on the Gironde estuary was strategic, facilitating the transport of goods to both international and domestic markets, which increased exports and the city's prosperity. The economic landscape shifted significantly in 1785: attracted by large profits, traders and merchants in Bordeaux began to turn their attention to the slave trade. This diversified the city's commerce, at a serious moral cost. Although the trade brought significant wealth to certain segments of society, it deepened the region's socio-economic disparities, exacerbating the divide between an increasingly wealthy elite and those living in poverty and laying the foundation for the mass unrest that broke out in the French Revolution.
Three years later, the Montagnard Convention abolished slavery. In 1802, Napoleon revoked the manumission law but lost the war against the army of former slaves, and in 1804 Haiti became independent. The loss of this "Pearl" of the West Indies caused the collapse of Bordeaux's port economy, which was dependent on the colonial trade and the trade in slaves. Towards the end of the Peninsular War, in 1814, the Duke of Wellington sent William Beresford with two divisions to seize Bordeaux; they encountered little resistance. Bordeaux was largely anti-Bonapartist, the majority supported the Bourbons, and the British troops were treated as liberators. The distinguished historian of the French Revolution Suzanne Desan explains that "examining intricate local dynamics" is essential to studying the Revolution by region. 19th century, rebirth of the economy. From the Bourbon Restoration onwards, traders and shipowners rebuilt the economy of Bordeaux. They undertook the construction of the city's first bridge and of customs warehouses, and shipping traffic grew through the new African colonies. Georges-Eugène Haussmann, a longtime prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform the quasi-medieval Paris into a "modern" capital that would make France proud. Victor Hugo found the town so beautiful that he said: "Take Versailles, add Antwerp, and you have Bordeaux". In 1870, at the beginning of the Franco-Prussian War, the French government temporarily relocated from Paris to Bordeaux. That recurred during World War I and again, very briefly, during World War II, when it became clear that Paris would fall into German hands. 20th century. During World War II, Bordeaux fell under German occupation. In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general Aristides de Sousa Mendes, who granted thousands of Portuguese visas, needed to cross the Spanish border, to refugees fleeing the German occupation. From 1941 to 1943, the Italian Royal Navy established BETASOM, a submarine base at Bordeaux, from which Italian submarines participated in the Battle of the Atlantic; it was also a major base for German U-boats as the headquarters of the 12th U-boat Flotilla. The massive reinforced-concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural centre for exhibitions. 21st century, listed as World Heritage. In 2007, 40% of the city's surface area, located around the Port of the Moon, was listed as a World Heritage Site. UNESCO inscribed Bordeaux as "an inhabited historic city, an outstanding urban and architectural ensemble, created in the age of the Enlightenment, whose values continued up to the first half of the 20th century, with more protected buildings than any other French city except Paris". Geography. Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region, southwest of Paris. The city is built on a bend of the river Garonne and is divided into two parts: the right bank to the east and the left bank to the west. Historically the left bank is more developed because, on the outside of the bend, the current scours a channel deep enough to allow the passage of merchant ships, which used to offload on this side of the river. Today, however, the right bank is developing, including new urban projects.
In Bordeaux, the river Garonne is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain. Climate. Bordeaux's climate can be classified as oceanic (Köppen climate classification "Cfb"), bordering on a humid subtropical climate ("Cfa"). The Trewartha climate classification system, however, classifies the city as solely humid subtropical, owing to a recent rise in temperatures related, to some degree or another, to climate change and the city's urban heat island. The city enjoys cool to mild, wet winters, due to its relatively southerly latitude and the prevalence of mild westerly winds from the Atlantic. Its summers are warm and somewhat drier, although wet enough to avoid a Mediterranean classification. Frosts occur annually, but snowfall is quite infrequent, occurring no more than three or four days a year. The summer of 2003 set a record for average temperature, while February 1956 was the coldest month on record, with an average temperature of −2.00 °C at Bordeaux–Mérignac Airport. Economy. Bordeaux, with the sixth largest metropolitan population in France, is a major centre for business and serves as a major regional centre for trade, administration, services and industry. Wine. The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since. The Bordeaux wine-growing area comprises 57 appellations, some 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world, including the area's five "premier cru" (First Growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855. Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit Verdot, Malbec and, less commonly in recent years, Carménère. White Bordeaux is made from Sauvignon Blanc, Sémillon and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet white dessert wines, such as Château d'Yquem. Because of a wine glut (wine lake) in the generic production, the price squeeze induced by increasingly strong international competition, and vine-pull schemes, the number of growers has recently declined from 14,000 and the area under vine has also decreased significantly. In the meantime, the global demand for first growths and the most famous labels has markedly increased and their prices have skyrocketed. The Cité du Vin, a museum as well as a place of exhibitions, shows, movie projections and academic seminars on the theme of wine, opened its doors in June 2016. Others. The Laser Mégajoule will be one of the most powerful lasers in the world, allowing fundamental research and the development of laser and plasma technologies. Some 15,000 people work for the aeronautics industry in Bordeaux. The city hosts some of the industry's biggest companies, including Dassault, EADS Sogerma, Snecma, Thales and SNPE, among others.
The Dassault Falcon private jets are built there, as well as the military aircraft Rafale and Mirage 2000, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 SLBM missile. Tourism, especially wine tourism, is a major industry; Globelink.co.uk mentioned Bordeaux as the best tourist destination in Europe in 2015, and Gourmet Touring is a tourism company operating in the Bordeaux wine region. Access to the port from the Atlantic is via the Gironde estuary; almost nine million tonnes of goods arrive and leave each year. Major companies. Major companies in Bordeaux include both indigenous Bordeaux-based firms and companies that have a major presence in the city without necessarily being headquartered there. Population. In January 2020, there were 259,809 inhabitants in the city proper (commune) of Bordeaux. The commune (including Caudéran, which was annexed by Bordeaux in 1965) had its largest population of 284,494 at the 1954 census. The majority of the population is French, but there are sizable groups of Italians, Spaniards, Portuguese, Turks and Germans. The built-up area has grown for more than a century beyond the municipal borders of Bordeaux because of the small size of the commune and urban sprawl. By January 2020 there were 1,376,375 people living in the overall metropolitan area ("aire d'attraction") of Bordeaux, only a fifth of whom lived in the city proper. Politics. Municipal administration. The mayor of the city is the environmentalist Pierre Hurmic. Bordeaux is the capital of five cantons and the prefecture of the Gironde and of Aquitaine. The town is divided into three districts, the first three of Gironde. The headquarters of the Urban Community of Bordeaux is located in the Mériadeck neighbourhood, and the city is the seat of the Chamber of Commerce and Industry that bears its name. Because the number of inhabitants of Bordeaux is greater than 250,000 and less than 299,999, the number of municipal councillors is 65. Mayors of Bordeaux. Since the Liberation (1944), there have been six mayors of Bordeaux. Elections. Presidential elections of 2007. In the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% for Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen, who recorded 5.42%; none of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57% and Jean-Marie Le Pen with 10.44%; none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round. Parliamentary elections of 2007. In the parliamentary elections of 2007, the left won eight constituencies against only three for the right; after the partial elections of 2008, the eighth district of Gironde switched to the left, bringing the count to nine. In Bordeaux, the left was for the first time in its history in the majority, holding two of the three constituencies following the elections. In the first constituency of the Gironde, the outgoing UMP MP Chantal Bourragué was well ahead with 44.81% against 25.39% for the Socialist candidate Béatrice Desaigues.
In the second round, Chantal Bourragué was re-elected with 54.45% against 45.55% for her Socialist opponent. In the second district of Gironde, the UMP mayor and newly appointed Minister of Ecology, Energy, Sustainable Development and the Sea, Alain Juppé, faced the Socialist general councillor Michèle Delaunay. In the first round, Alain Juppé was well ahead with 43.73% against 31.36% for Michèle Delaunay. In the second round, it was Michèle Delaunay who won the election, with 50.93% of the votes against 49.07% for Alain Juppé, a margin of only 670 votes. The defeat in the so-called "mayor's constituency" showed that Bordeaux was swinging increasingly to the left. Finally, in the third constituency of the Gironde, Noël Mamère was well ahead with 39.82% against 28.42% for the UMP candidate Elizabeth Vine. In the second round, Noël Mamère was re-elected with 62.82% against 37.18% for his right-wing rival. Municipal elections of 2008. The 2008 municipal elections saw a clash between the mayor of Bordeaux, Alain Juppé, and the Socialist president of the Regional Council of Aquitaine, Alain Rousset. The PS had fielded a heavyweight in the Gironde and placed great hopes in this election after the victories of Ségolène Royal and Michèle Delaunay in 2007. However, after a rather lively campaign it was Alain Juppé who was elected comfortably in the first round with 56.62 percent, far ahead of Alain Rousset, who garnered 34.14 percent of the vote. Of the eight cantons covering Bordeaux, five are at present held by the PS and three by the UMP, the left eating a little further into the right's numbers at each election. European elections of 2009. In the European elections of 2009, Bordeaux voters largely backed the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for the PS candidate Kader Arif. The Europe Ecology candidate José Bové came second with 22.34%; none of the other candidates reached the 10% mark. The 2009 European elections were, like the previous ones, held in eight large constituencies; Bordeaux lies in the "South-West" constituency, where the results were as follows: UMP candidate Dominique Baudis, 26.89%, his party gaining four seats; PS candidate Kader Arif, 17.79%, gaining two seats in the European Parliament; Europe Ecology candidate José Bové, 15.83%, obtaining two seats; MoDem candidate Robert Rochefort, 8.61%, winning a seat; and Left Front candidate Jean-Luc Mélenchon, 8.16%, gaining the last seat. In the regional elections of 2010, the incumbent Socialist president Alain Rousset won the first round with 35.19% in Bordeaux, although this score was lower than his averages for Gironde and Aquitaine. Xavier Darcos, Minister of Labour, followed with 28.40% of the votes, scoring above the regional and departmental average. Then came Monique De Marco, the Green candidate, with 13.40%, followed by the deputy for Pyrénées-Atlantiques and MoDem candidate Jean Lassalle, who registered a low 6.78% while qualifying for the second round across Aquitaine as a whole, closely followed by Jacques Colombier, candidate of the National Front, who gained 6.48%. Finally came the Left Front candidate Gérard Boulanger with 5.64%; no other candidate passed the 5% mark. In the second round, Alain Rousset won by a landslide, his total rising to 55.83%. Although Xavier Darcos lost the election decisively, he nevertheless achieved a score above the regional and departmental average, obtaining 33.40%. Jean Lassalle, who had qualified for the second round, passed the 10% mark, totalling 10.77%.
The ballot was marked by abstention amounting to 55.51% in the first round and 53.59% in the second round. 2017 elections. Bordeaux voted for Emmanuel Macron in the presidential election. In the 2017 parliamentary election, La République En Marche! won most of the constituencies in Bordeaux. 2019 European elections. Bordeaux voted in the 2019 European Parliament election in France. Municipal elections of 2020. After 73 years of right-of-centre rule, the ecologist Pierre Hurmic (EELV) came in ahead of Nicolas Florian (LR/LaREM). Parliamentary representation. The city area is represented by the following constituencies: Gironde's 1st, Gironde's 2nd, Gironde's 3rd, Gironde's 4th, Gironde's 5th, Gironde's 6th and Gironde's 7th. Education. University. In antiquity, a first university was created by the Romans in 286. The city was an important administrative centre, and the new university had to train administrators; only rhetoric and grammar were taught. Ausonius and Sulpicius Severus were two of the teachers. In 1441, when Bordeaux was an English town, Pope Eugene IV created a university at the request of the archbishop Pey Berland. In 1793, during the French Revolution, the National Convention abolished the university, replacing it in 1796 with the École centrale, which in Bordeaux was located in the former buildings of the Collège de Guyenne. In 1808, the university reappeared under Napoleon. Bordeaux accommodates approximately 70,000 students on one of the largest campuses in Europe (235 ha). Schools. Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs, including engineering schools, business and management schools, and others. Weekend education. A part-time Japanese supplementary school is held in the Salle de L'Athénée Municipal in Bordeaux. Attractions and tourism. In October 2021, Bordeaux was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award along with Copenhagen, Dublin, Florence, Ljubljana, Palma de Mallorca and Valencia. Heritage and architecture. Bordeaux is classified as a "City of Art and History". The city is home to 362 "monuments historiques" (national heritage sites), with some buildings dating back to Roman times. Bordeaux, Port of the Moon, has been inscribed on the UNESCO World Heritage List as "an outstanding urban and architectural ensemble". Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and large-scale metropolitan projects, with the father-and-son team of Gabriel architects to King Louis XV, under the supervision of two intendants (governors), first Nicolas-François Dupré de Saint-Maur and then the Marquis de Tourny. Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France. The organ in Saint-Louis-des-Chartrons is registered among the French monuments historiques. The city also counts numerous notable historic buildings as well as buildings in contemporary architectural styles. Slavery memorials. Slavery contributed to the city's growth: during the 18th and 19th centuries, Bordeaux was an important slave port, which saw some 500 slave expeditions that caused the deportation of 150,000 Africans by Bordeaux shipowners.
Secondly, even though the "triangular trade" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, which accounted for the other 95%, concerned colonial goods produced by slaves (sugar, coffee, cocoa). And thirdly, in the same period, a major migratory movement of Aquitanians took place towards the Caribbean colonies, with Saint-Domingue (now Haiti) the most popular destination; 40% of the white population of the island came from Aquitaine. They prospered on plantation income until the first slave revolts, which culminated in 1848 in the final abolition of slavery in France. A statue of Modeste Testas, an Ethiopian woman who was enslaved by the Bordeaux-based Testas brothers, was unveiled in 2019. She was trafficked by them from West Africa to Philadelphia (where one of the brothers coerced her into bearing two children by him) and was ultimately freed and lived in Haiti. The bronze sculpture was created by the Haitian artist Woodly Caymitte. A number of traces and memorial sites are visible in the city. Moreover, in May 2009 the Museum of Aquitaine opened spaces dedicated to "Bordeaux in the 18th century, trans-Atlantic trading and slavery". This display, richly illustrated with original documents, helps disseminate the state of knowledge on the question, presenting above all the facts and their chronology. The region of Bordeaux was also the home of several prominent abolitionists, such as Montesquieu, Laffon de Ladébat and Elisée Reclus; others, such as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos, were members of the Society of the Friends of the Blacks. Pont Jacques Chaban-Delmas. Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the river Garonne. The central lift span weighs 4,600 tons and can be lifted vertically to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013, and was named after the late Jacques Chaban-Delmas, a former prime minister and mayor of Bordeaux. Shopping. Bordeaux has many shopping options. In the heart of Bordeaux is the Rue Sainte-Catherine: this pedestrianised street, lined with shops, restaurants and cafés, is one of the longest shopping streets in Europe. Rue Sainte-Catherine starts at the Place de la Victoire and ends at the Place de la Comédie by the Grand Théâtre. The shops become progressively more upmarket as one moves towards the Place de la Comédie, and the nearby Cours de l'Intendance is where the more exclusive shops and boutiques are found. Culture. Bordeaux was the first city in France to create, in the 1980s, an architecture exhibition and research centre, "Arc en rêve". The city has a large number of cinemas and theatres, and is home to the Opéra national de Bordeaux. There are many music venues of varying capacity, and the city also offers several festivals throughout the year. The Bordeaux International Festival of Women in Cinema (Festival international du cinéma au féminin de Bordeaux) took place in Bordeaux from 2002 until 2005. The Festival international du film indépendant de Bordeaux (Fifib or FIFIB), or Bordeaux International Independent Film Festival, was established in 2012. Transport. Road. Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, to Lyon by the A89, to Toulouse by the A62, and to Spain by the A63.
There is a ring road called the "Rocade", which is often very busy; another ring road is under consideration. Bordeaux has five road bridges that cross the Garonne: the Pont de pierre, built in the 1820s, and three modern bridges built after 1960, namely the Pont Saint Jean, just south of the Pont de pierre (both located downtown), the Pont d'Aquitaine, a suspension bridge downstream from downtown, and the Pont François Mitterrand, located upstream of downtown. The latter two bridges form part of the ring road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height in the closed position comparable to that of the Pont de pierre, and comparable to the Pont d'Aquitaine when open. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well. Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018. Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010. Rail. The main railway station, Gare de Bordeaux Saint-Jean, near the centre of the city, handles 12 million passengers a year. It is served by the French national railway's (SNCF) high-speed train, the TGV, which reaches Paris in two hours, with connections to major European centres such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. A regular train service is provided to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne. Historically the train line terminated at a station on the right bank of the river Garonne near the Pont de pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s by Gustave Eiffel to bring trains across the river directly into the Gare de Bordeaux Saint-Jean. The old station was later converted, and by 2010 comprised a cinema and restaurants. The two-track Eiffel bridge, with its low speed limit, became a bottleneck, and a new bridge was built, opening in 2009; the new bridge has four tracks and allows trains to pass at higher speed. During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, possibly with a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use; the bridge remains intact, but unused and without any means of access. The LGV Sud Europe Atlantique became fully operational in July 2017, shortening the journey time from Bordeaux to Paris to 2 hours 4 minutes. Air. Bordeaux is served by Bordeaux–Mérignac Airport, located in the suburban city of Mérignac, west of the city centre. Trams, buses and boats. Bordeaux has an important public transport system called Transports Bordeaux Métropole (TBM).
This network is run by the Keolis group and consists of tram, bus and river-shuttle services, operating from 5 am to 2 am. There had been several plans for a subway network to be set up, but they stalled for both geological and financial reasons. Work on the Tramway de Bordeaux system started in the autumn of 2000, and services began in December 2003, connecting Bordeaux with its suburban areas. The tram system uses Alstom APS, a form of ground-level power supply technology developed by the French company Alstom and designed to preserve the aesthetic environment by eliminating overhead cables in the historic city; conventional overhead cables are used outside the city. The system was controversial for its considerable cost of installation and maintenance, and also for the numerous initial technical problems that paralysed the network. Many streets and squares along the tramway route became pedestrian areas, with limited access for cars. The Bordeaux tramway system reached Mérignac airport on 29 April 2023 with the opening of a 5 km extension of Line A. Taxis. There are more than 400 taxicabs in Bordeaux. Public transportation statistics. The average amount of time people spend commuting with public transit in Bordeaux, for example to and from work, on a weekday is 51 minutes, and 12% of public transit riders ride for more than two hours every day. The average time people wait at a stop or station for public transit is 13 minutes, while 15.5% of riders wait for over 20 minutes on average every day. 8% of riders travel a long distance in a single direction. Sport. The 41,458-capacity Nouveau Stade de Bordeaux is the largest stadium in Bordeaux. The stadium was opened in 2015 and replaced the Stade Chaban-Delmas, which was a venue for the FIFA World Cup in 1938 and 1998, as well as the 2007 Rugby World Cup. In the 1938 FIFA World Cup, it hosted a violent quarter-final known as the Battle of Bordeaux. The ground was formerly known as the Stade du Parc Lescure until 2001, when it was renamed in honour of the city's long-time mayor, Jacques Chaban-Delmas. There are two major sports teams in Bordeaux. Girondins de Bordeaux is the football team who, following administrative relegation, currently play in Championnat National 2, the fourth tier of French football; they are one of the most successful clubs in France, with six Division 1/Ligue 1 titles. Union Bordeaux Bègles is a rugby team in the Top 14 of the Ligue Nationale de Rugby. Skateboarding, rollerblading and BMX biking are activities enjoyed by many young inhabitants of the city. Bordeaux is home to a quay which runs along the Garonne river; on the quay there is a skate-park divided into three sections, one for vert tricks, one for street-style tricks, and one for young beginners, with easier features and softer materials. The skate-park is well maintained by the municipality. Other sports clubs include the top-flight ice hockey team Boxers de Bordeaux and the third-tier basketball team JSA Bordeaux Basket. Bordeaux is also home to one of the strongest cricket teams in France, champions of the South West League. There is a wooden velodrome, the Vélodrome du Lac, in Bordeaux, which hosts international cycling competition in the form of UCI Track Cycling World Cup events. The 2015 Trophée Éric Bompard was held in Bordeaux, but the free skate was cancelled in all of the divisions because of the November 2015 Paris attacks; the short program took place hours before the attacks.
After the short programs, the French skaters stood as follows: Chafik Besseghier (68.36) in tenth place and Romain Ponsart (62.86) in 11th in the men's event; Maé-Bérénice Méité (46.82) in 11th and Laurine Lecavelier (46.53) in 12th in the ladies' event; and Vanessa James / Morgan Ciprès (65.75) in second in the pairs event. Between 1951 and 1955, an annual Formula 1 motor race was held on a 2.5-kilometre circuit which looped around the Esplanade des Quinconces and along the waterfront, attracting drivers such as Juan Manuel Fangio, Stirling Moss, Jean Behra and Maurice Trintignant. International relationships. Twin towns – sister cities. Bordeaux is twinned with:
4098
45417033
https://en.wikipedia.org/wiki?curid=4098
Puzzle Bobble
"Puzzle Bobble", internationally known as "Bust-A-Move", is a 1994 tile-matching puzzle arcade game developed and published by Taito. It is based on the 1986 arcade game "Bubble Bobble", featuring characters and themes from that game. Its characteristically cute Japanese animation and music, along with its play mechanics and level designs, made it successful as an arcade title and spawned several sequels and ports to home gaming systems. Gameplay. At the start of each round, the rectangular playing arena contains a prearranged pattern of colored "bubbles". At the bottom of the screen, the player controls a device called a "pointer", which aims and fires bubbles up the screen. The color of bubbles fired is randomly generated and chosen from the colors of bubbles still left on the screen. The objective of the game is to clear all the bubbles from the arena without any bubble crossing the bottom line. Bubbles will fire automatically if the player remains idle. After clearing the arena, the next round begins with a new pattern of bubbles to clear. The arcade version of the game consists of 30 levels. The fired bubbles travel in straight lines (possibly bouncing off the sidewalls of the arena), stopping when they touch other bubbles or reach the top of the arena. If a bubble touches identically colored bubbles, forming a group of three or more, those bubbles—as well as any bubbles hanging from them—are removed from the field of play, and points are awarded. After every few shots, the "ceiling" of the playing arena drops downwards slightly, along with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays, and if they cross the line at the bottom, the game is over. Release. Two different versions of the original game were released. "Puzzle Bobble" was originally released in Japan only in June 1994 by Taito, running on Taito B System hardware (with the preliminary title "Bubble Buster"). Then, six months later in December, the international Neo Geo version of "Puzzle Bobble" was released. It was almost identical aside from being in stereo and having some different sound effects and translated text. Reception. In Japan, "Game Machine" listed the Neo Geo version of "Puzzle Bobble" in their February 15, 1995 issue as the second most-popular arcade game at the time. It went on to become Japan's second highest-grossing arcade printed circuit board (PCB) software of 1995, below "Virtua Fighter 2". In North America, "RePlay" reported the Neo Geo version of "Puzzle Bobble" to be the fourth most-popular arcade game in February 1995. Reviewing the Super NES version, Mike Weigand of "Electronic Gaming Monthly" called it "a thoroughly enjoyable and incredibly addicting puzzle game". He considered the two-player mode the highlight, but also said that the one-player mode provides a solid challenge. "GamePro" gave it a generally negative review, saying it starts out fun but ultimately lacks intricacy and longevity. They elaborated that in one-player mode all the levels feel the same, and that two-player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. "Next Generation" reviewed the SNES version of the game and called it "addictive as hell".
A reviewer for "Next Generation", while questioning the continued viability of the action puzzle genre, admitted that the game is "very simple and "very" addictive". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. "GamePro"s brief review of the 3DO version commented that the game's controls are responsive, and they also praised the visuals and music. "Edge" magazine ranked the game 73rd on their 100 Best Video Games in 2007. "IGN" rated the SNES version 54th in its Top 100 SNES Games. Legacy. The simplicity of the concept has led to many clones, both commercial and otherwise. 1996's "Snood" replaced the bubbles with small creatures and has been successful in its own right. "Worms Blast" was Team17's take on the concept. On September 24, 2000, British game publisher Empire Interactive released a similar game, "Spin Jam", for the original PlayStation console. Mobile clones include "Bubble Witch Saga" and "Bubble Shooter". "Frozen Bubble" is a free software clone. For "Bubble Bobble"s 35th anniversary, Taito launched "Puzzle Bobble VR: Vacation Odyssey" on the Oculus Quest and Oculus Quest 2, later coming to PlayStation 4 and PlayStation 5 as "Puzzle Bobble 3D: Vacation Odyssey" in 2021. "Puzzle Bobble Everybubble!". "Puzzle Bobble Everybubble!" was released on May 23, 2023, for Nintendo Switch. The game also comes with an extra mode called ""Puzzle Bobble" vs. "Space Invaders"", where up to four players can work together to erase bubble-encased invaders before they reach the player while only being able to aim straight up.
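The match-and-drop rule described in the Gameplay section above is, in programming terms, two connected-component searches over the grid of bubbles: one flood fill gathers the like-coloured group around the newly landed bubble, and a second pass removes whatever is no longer attached to the ceiling. The Python sketch below illustrates this logic on a simplified rectangular grid; the real game uses a staggered hexagonal packing, and all names here are illustrative rather than taken from any official implementation.

from collections import deque

def neighbors(grid, r, c):
    """Orthogonally adjacent cells; the arcade game would use six staggered-hex neighbours."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield nr, nc

def flood(grid, start, match):
    """Breadth-first search collecting every cell reachable from start whose value satisfies match."""
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in neighbors(grid, r, c):
            if (nr, nc) not in seen and match(grid[nr][nc]):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def resolve_shot(grid, landed):
    """Pop a group of three or more like-coloured bubbles, then drop unsupported ones."""
    colour = grid[landed[0]][landed[1]]
    group = flood(grid, landed, lambda v: v == colour)
    if len(group) < 3:
        return  # group too small: nothing pops
    for r, c in group:
        grid[r][c] = None  # the matched group bursts
    # Any bubble no longer connected to the top row (the "ceiling") falls.
    supported = set()
    for c in range(len(grid[0])):
        if grid[0][c] is not None and (0, c) not in supported:
            supported |= flood(grid, (0, c), lambda v: v is not None)
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] is not None and (r, c) not in supported:
                grid[r][c] = None  # hanging bubble drops

Swapping the neighbours function for one that returns the six neighbours of a staggered hexagonal grid would reproduce the arcade geometry; the two flood fills themselves would be unchanged.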
4099
1300863107
https://en.wikipedia.org/wiki?curid=4099
Bone
A bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. Bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have complex internal and external structures. They are lightweight yet strong and hard and serve multiple functions. Bone tissue (osseous tissue), which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralisation of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called "ossein" and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels, and cartilage. In the human body at birth, approximately 300 bones are present. Many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the "stapes" in the middle ear. The Ancient Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. In anatomical terminology, including the "Terminologia Anatomica" international standard, the word for a bone is "os" (for example, "os breve", "os longum", "os sesamoideum"). Structure. Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%), which are intricately woven and continuously remodeled by a group of specialized bone cells. Their unique composition and design allow bones to be relatively hard and strong, while remaining lightweight. Bone matrix is 90 to 95% composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as bone mineral, a form of calcium apatite. It is the mineralization that gives bones rigidity. Bone is actively constructed and remodeled throughout life by specialized bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns: cortical and cancellous bone, each with a distinct appearance and characteristics. Cortex. The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions—to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system.
Each column is multiple layers of osteoblasts and osteocytes around a central canal called the osteonic canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created, the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon. Trabeculae. Cancellous bone or spongy bone, also known as trabecular bone, is the internal tissue of the skeletal bone and is an open cell porous network that follows the material properties of biofoams. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and it is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints, and in the interior of vertebrae. Cancellous bone is highly vascular and often contains red bone marrow where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone. The words "cancellous" and "trabecular" refer to the tiny lattice-shaped units (trabeculae) that form the tissue. It was first illustrated accurately in the engravings of Crisóstomo Martinez. Marrow. Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/yellow fraction, called marrow adipose tissue (MAT), increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones. Vascular supply. Bone receives about 10% of cardiac output. Blood enters the endosteum, flows through the marrow, and exits through small vessels in the cortex. In humans, blood oxygen tension in bone marrow is about 6.6%, compared to about 12% in arterial blood, and 5% in venous and capillary blood. Cells. Bone is metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes.
Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets. Osteoblast. Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteon seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of a newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, to act on the bone itself. The osteoblast creates and repairs new bone by building around itself. First, the osteoblast puts up collagen fibers. These collagen fibers are used as a framework for the osteoblasts' work. The osteoblast then deposits calcium phosphate which is hardened by hydroxide and bicarbonate ions. The brand-new bone created by the osteoblast is called osteoid. Once the osteoblast has finished working, it becomes trapped inside the bone as the matrix hardens. When the osteoblast becomes trapped, it becomes known as an osteocyte. Other osteoblasts remain on the top of the new bone and are used to protect the underlying bone; these become known as bone lining cells. Osteocyte. Osteocytes are cells of mesenchymal origin and originate from osteoblasts that have migrated into and become trapped and surrounded by a bone matrix that they themselves produced. The spaces the cell body of osteocytes occupy within the mineralized collagen type I matrix are known as lacunae, while the osteocyte cell processes occupy channels called canaliculi. The many processes of osteocytes reach out to meet osteoblasts, osteoclasts, bone lining cells, and other osteocytes probably for the purposes of communication. Osteocytes remain in contact with other osteocytes in the bone through gap junctions—coupled cell processes which pass through the canalicular channels. Osteoclast. Osteoclasts are very large multinucleate cells that are responsible for the breakdown of bones by the process of bone resorption. New bone is then formed by the osteoblasts. Bone is constantly remodeled by the resorption of osteoclasts and created by osteoblasts. Osteoclasts are large cells with multiple nuclei located on bone surfaces in what are called "Howship's lacunae" (or "resorption pits"). These lacunae are the result of surrounding bone tissue that has been reabsorbed. Because the osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces. Upon arrival, active enzymes, such as tartrate-resistant acid phosphatase, are secreted against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis. Composition. Bones consist of living cells (osteoblasts and osteocytes) embedded in a mineralized organic matrix. The primary inorganic component of human bone is hydroxyapatite, the dominant bone mineral, having the nominal composition of Ca10(PO4)6(OH)2. The organic component of this matrix consists mainly of type I collagen—"organic" referring to materials produced by the human body—while the inorganic component, alongside the dominant hydroxyapatite phase, includes other compounds of calcium and phosphate, including salts.
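As a worked aside (an illustrative calculation, not part of the source text): the nominal formula Ca10(PO4)6(OH)2 contains ten calcium atoms for every six phosphorus atoms, so taking atomic masses of about 40.08 for calcium and 30.97 for phosphorus, stoichiometric hydroxyapatite has a calcium-to-phosphorus weight ratio of

$\frac{10 \times 40.08}{6 \times 30.97} = \frac{400.8}{185.8} \approx 2.16$ (a molar ratio of $10/6 \approx 1.67$).

Assuming the calcium-to-phosphate ratio quoted below refers, as is conventional, to elemental phosphorus, this stoichiometric value sits just above the 1.3–2.0 weight range given for real bone matrix, consistent with biological bone mineral being somewhat calcium-deficient relative to pure hydroxyapatite.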
Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic. The exact composition of the matrix may be subject to change over time due to nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (per weight), and trace minerals such as magnesium, sodium, potassium and carbonate also being found. Type I collagen composes 90–95% of the organic matrix, with the remainder of the matrix being a homogenous liquid called ground substance consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that prevents shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar. Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults, woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed "woven". It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution". Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 μm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers. Deposition. The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These cells synthesise collagen alpha polypeptide chains and then secrete collagen molecules. The collagen molecules associate with their neighbors and crosslink via lysyl oxidase to form collagen fibrils. At this stage, they are not yet mineralized, and this zone of unmineralized collagen fibrils is called "osteoid". Calcium and phosphate eventually precipitate around and inside the collagen fibrils within days to weeks, yielding fully mineralized bone with an overall carbonate-substituted hydroxyapatite inorganic phase. In order to mineralise the bone, the osteoblasts secrete alkaline phosphatase, some of which is carried by vesicles. This cleaves the inhibitory pyrophosphate and simultaneously generates free phosphate ions for mineralization, which act as foci for calcium and phosphate deposition.
Vesicles may initiate some of the early mineralization events by rupturing and acting as a centre for crystals to grow on. Bone mineral may be formed from globular and plate structures, and via initially amorphous phases. Types. Five types of bones are found in the human body: long, short, flat, irregular, and sesamoid. Terminology. In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today. Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body". When two bones join, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture". Development. The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue whereas endochondral ossification involves the formation of bone from cartilage. Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes: the development of the ossification center, calcification, trabeculae formation and the development of the periosteum. Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates. Endochondral ossification begins with points in the cartilage called "primary ossification centers". They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). In the upper limbs, only the diaphyses of the long bones and scapula are ossified. The epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous. 
The following steps are followed in the conversion of cartilage to bone: Bone development in youth is extremely important in preventing future complications of the skeletal system. Regular exercise during childhood and adolescence can help improve bone architecture, making bones more resilient and less prone to fractures in adulthood. Physical activity, specifically resistance training, stimulates bone growth by increasing both bone density and strength. Studies have shown a positive correlation between the adaptations of resistance training and bone density. While nutritional and pharmacological approaches may also improve bone health, the strength and balance adaptations from resistance training are a substantial added benefit. Weight-bearing exercise may assist in osteoblast (bone-forming cell) formation and help to increase bone mineral content. High-impact sports, which involve quick changes in direction, jumping, and running, are particularly effective at stimulating bone growth in youth. Sports such as soccer, basketball, and tennis have been shown to have positive effects on bone mineral density as well as bone mineral content in teenagers. Engaging in physical activity during childhood, particularly in these high-impact osteogenic sports, can help to positively influence bone mineral density in adulthood. Children and adolescents who participate in regular physical activity lay the groundwork for bone health later in life, reducing the risk of bone-related conditions such as osteoporosis. Functions. Bones have a variety of functions: Mechanical. Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics). Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength of about , poor tensile strength of 104–121 MPa, and a very low shear stress strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and only poorly resists shear stress (such as due to torsional loads). While bone is essentially brittle, it does have a significant degree of elasticity, contributed chiefly by collagen. Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction. Synthetic. The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells are matured, they enter the circulation.
Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes are produced in this way. As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed. Metabolic. Depending on the species, age, and the type of bone, bone cells make up to 15 percent of the bone. Growth factor storage—mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others. Calcium. Strong bones during youth are essential for preventing osteoporosis and bone fragility later in life. Ensuring the factors that increase bone mineral density (BMD), while lowering the risk of further bone degradation, is particularly important during childhood, as these factors support a healthy lifestyle and lasting bone health. Bone stores build until about the age of 30 and decline thereafter, so factors that create larger stores and higher BMD lead to fewer harmful outcomes in older adulthood. Fragile bones during childhood are associated with disorders such as juvenile osteoporosis; although this condition is uncommon, it underlines how essential a healthy routine is for bone development in youth. Children who naturally have lower bone mineral density tend to have a lower quality of life. Increased calcium intake has been shown to raise BMD: studies indicate that raising calcium stores, whether through supplementation or through foods and beverages such as leafy greens and milk, increases BMD in prepubertal and early pubertal children. Another study found that long-term calcium intake contributes significantly to overall BMD in children without particular conditions or disorders, suggesting that adequate calcium intake in children reinforces bone structure and the rate at which bone densifies. A strong nutritional plan with adequate calcium sources can therefore both build strong bones and help prevent degradation of bone stores with age. The connection between calcium intake and BMD in youth is a worldwide issue and affects different ethnic groups in different ways. A recent study found a strong correlation between calcium intake and BMD across diverse populations of children and adolescents, concluding that optimal bone health is also necessary for youth to undergo hormonal changes. In a study of over 10,000 children aged 8–19, a decrease in BMD began to appear at calcium intakes of 2.6–2.8 g/kg of body weight among females, African Americans, and adolescents aged 12–15; the authors attributed this to a lower baseline calcium intake throughout puberty. Genetic factors have also been shown to influence lower uptake of calcium stores. Ultimately, the window that youth have for accruing and building resilient bone is narrow.
Being able to consistently meet calcium needs while also engaging in weight-bearing exercise is essential for building a strong initial bone foundation on which to build. Reaching the recommended daily value of 1,300 mg of calcium for ages 9–18 greatly reduces the later risk of osteoporosis, bone fragility and stunted growth, ultimately supporting a healthier and more fulfilling lifestyle. Remodeling. Bone is constantly being created and replaced in a process known as remodeling. This ongoing turnover of bone is a process of resorption followed by replacement of bone with little change in shape. This is accomplished through osteoblasts and osteoclasts. Cells are stimulated by a variety of signals, and together are referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamaged bones from everyday stress, and to shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress. The actions of osteoblasts and osteoclasts are controlled by a number of chemical enzymes that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control the activity of each other. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland, and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation. Osteoblasts can also be stimulated to increase bone mass through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, thyroid hormone and the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorption of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, cytokines which then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin. Volume. Bone volume is determined by the rates of bone formation and bone resorption. Certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins.
Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Cancellous bone volume in postmenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption. Clinical significance. A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumors. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints, and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists, may be involved in recovery, radiologists in interpreting the findings on imaging, and pathologists in investigating the cause of the disease, and family doctors may play a role in preventing complications of bone disease such as osteoporosis. When a doctor sees a patient, a history and exam will be taken. Bones are then often imaged, a practice known as radiography. This might include ultrasound, X-ray, CT scan, MRI scan and other imaging such as a bone scan, which may be used to investigate cancer. Other tests such as a blood test for autoimmune markers may be taken, or a synovial fluid aspirate may be taken. Fractures. In normal bone, fractures occur when there is significant force applied or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as in Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long bones. Not all fractures are painful. When serious, depending on the fracture's type and location, complications may include flail chest, compartment syndromes or fat embolism. Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone grafting procedures that replace missing bone portions. Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation. Tumors. Tumors can affect bone in several ways. Examples of benign bone tumors include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant-cell tumor of bone, and aneurysmal bone cyst. Cancer. Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, with the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer.
Secondary cancers that affect bone can either destroy bone (called a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers in other parts of the body. Cancers in other parts of the body may release parathyroid hormone or parathyroid hormone-related peptide. This increases bone reabsorption, and can lead to bone fractures. Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt. Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used. Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of survival within five years is poor. Diabetes. Type 1 diabetes is an autoimmune disease in which the body attacks the insulin-producing cells of the pancreas, leaving the body unable to make enough insulin. In contrast, in type 2 diabetes the body produces enough insulin but becomes resistant to it over time. Children make up approximately 85% of type 1 diabetes cases, and in America there was an average 22% rise in cases over the first 24 months of the COVID-19 pandemic. As the incidence of diabetes continues to grow across all age ranges, its impact on bone development and bone health in these populations is still being researched. Most evidence suggests that diabetes, whether type 1 or type 2, inhibits osteoblastic activity and lowers both BMD and BMC in adults and children alike. The weakening of these developmental aspects is thought to lead to an increased risk of developing diseases such as osteoarthritis, osteoporosis, osteopenia and fractures. Development of any of these diseases is thought to be correlated with a decrease in the ability to perform in athletic environments and in activities of daily living. Focusing on therapies that target molecules like osteocalcin or AGEs could provide new ways to improve bone health and help manage the complications of diabetes more effectively. Osteoporosis. Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density of 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average. This density is measured using dual energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases, or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs.
For this reason, DEXA scans are often done in people with one or more risk factors, who have developed osteoporosis and are at risk of fracture. One of the most important risk factors for osteoporosis is advanced age. Accumulation of oxidative DNA damage in osteoblastic and osteoclastic cells appears to be a key factor in age-related osteoporosis. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium and trace mineral supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and hormone replacement therapy. Osteopathic medicine. Osteopathic medicine is a school of medical thought that links the musculoskeletal system to overall health. Over 77,000 physicians in the United States are trained in osteopathic medical schools. Bone health. Bone health is important throughout life for a number of reasons: without strong, healthy bones, people are at greater risk of chronic disease and fractures, and day-to-day function becomes more difficult. Developing strong bones as a child is one of the most important steps towards healthy bones throughout life, because this is when a strong foundation is built, making it much easier to maintain musculoskeletal health in later years. Adolescence offers a window in which bone can develop in either a positive or a negative direction; it is estimated that diet and exercise during these years can affect adult peak bone mass by nearly 20–40%. One study of children with developmental coordination disorder found increases in bone mass of up to 4% and 5% in the cortical areas of the tibia alone from a 13-week training period, a notable result given that participants took part in the multimodal workouts only twice per week; it would be reasonable to expect greater increases with more frequent workouts, especially in youth without developmental coordination disorder. Peak bone mass occurs between the second and third decades of most people's lives, so accumulating as much bone mass as possible, and increasing BMD and BMC through a healthy, active lifestyle and a diet with adequate calcium and vitamin D, provides an advantage in later life while actively decreasing the risk of chronic diseases such as osteoporosis. Osteology. The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration. Typically anthropologists and archeologists study bone tools made by "Homo sapiens" and "Homo neanderthalensis". Bone tools can serve a number of uses, such as projectile points or artistic pigments, and can also be made from external bones such as antlers. Other animals. Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow because they are hollow. A bird's beak is primarily made of bone, as projections of the mandibles which are covered in keratin.
Some bones, primarily formed separately in subcutaneous tissues, include headgear (such as the bony cores of horns, antlers and ossicones), osteoderms, and the os penis/os clitoris. A deer's antlers are composed of bone, an unusual example of bone being outside the skin of the animal once the velvet is shed. The extinct predatory fish "Dunkleosteus" had sharp edges of hard exposed bone along its jaws. The proportion of cortical bone that is 80% in the human skeleton may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others. This proportion can vary quickly in evolution; it often increases in early stages of returns to an aquatic lifestyle, as seen in early whales and pinnipeds, among others. It subsequently decreases in pelagic taxa, which typically acquire spongy bone, but aquatic taxa that live in shallow water can retain very thick, pachyostotic, osteosclerotic, or pachyosteosclerotic bones, especially if they move slowly, like sea cows. In some cases, even marine taxa that had acquired spongy bone can revert to thicker, compact bones if they become adapted to live in shallow water, or in hypersaline (denser) water. Many animals, particularly herbivores, practice osteophagy—the eating of bones. This is presumably carried out in order to replenish lacking phosphate. Many bone diseases that affect humans also affect other vertebrates—an example of one disorder is skeletal fluorosis. Society and culture. Bones from slaughtered animals have a number of uses. In prehistoric times, they have been used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern times as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, arrows, scrimshaw, ornaments, etc. Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin. Broth is made by simmering several ingredients for a long time, traditionally including bones. Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones. Oracle bone script was a writing system used in ancient China based on inscriptions in bones. Its name originates from oracle bones, which were mainly ox scapulae (shoulder blades). The ancient Chinese (mainly in the Shang dynasty) would write their questions on the oracle bone and burn it, and the pattern of cracks in the bone was then read as the answer to the questions. Pointing a bone at someone is considered bad luck in some cultures, such as among Australian Aboriginal peoples, for example by the Kurdaitcha. The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish. Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot.
4100
4051065
https://en.wikipedia.org/wiki?curid=4100
Bretwalda
Bretwalda (also brytenwalda and bretenanwealda, sometimes capitalised) is an Old English word. The first record comes from the late 9th-century "Anglo-Saxon Chronicle". It is given to some of the rulers of Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century, invention. The term "bretwalda" also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'. The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of "bretwalda" by the "Chronicle", which had an anti-Mercian bias. The "Annals of Wales" continued to recognise the kings of Northumbria as "Kings of the Saxons" until the death of Osred I of Northumbria in 716. Etymology. The first syllable of the term "bretwalda" may be related to "Briton" or "Britain". The second element is taken to mean 'ruler' or 'sovereign'. Thus, one interpretation might be 'sovereign of Britain'. Otherwise, the word may be a compound containing the Old English adjective "brytten" ('broad', from the verb "breotan" meaning 'to break' or 'to disperse'), an element also found in the terms "bryten rice" ('kingdom'), "bryten-grund" ('the wide expanse of the earth') and "bryten cyning" ('king whose authority was widely extended'). Though the origin is ambiguous, the draughtsman of the charter issued by Æthelstan used the term in a way that can only mean 'wide-ruler'. The latter etymology was first suggested by John Mitchell Kemble, who noted that "of six manuscripts in which this passage occurs, one only reads "Bretwalda": of the remaining five, four have "Bryten-walda" or "-wealda", and one "Breten-anweald", which is precisely synonymous with Brytenwealda"; that Æthelstan was called "brytenwealda ealles ðyses ealondes", which Kemble translates as 'ruler of all these islands'; and that "bryten-" is a common prefix to words meaning 'wide or general dispersion' and that the similarity to the word "bretwealh" ('Briton') is "merely accidental". Contemporary use. The first recorded use of the term "Bretwalda" comes from a West Saxon chronicle of the late 9th century that applied the term to Ecgberht, who ruled Wessex from 802 to 839. The chronicler also wrote down the names of seven kings that Bede listed in his "Historia ecclesiastica gentis Anglorum" in 731. All subsequent manuscripts of the "Chronicle" use the term "Brytenwalda", which may have represented the original term or derived from a common error. There is no evidence that the term was a title that had any practical use, with implications of formal rights, powers and office, or even that it had any existence before the 9th century. Bede wrote in Latin and never used the term, and his list of kings holding "imperium" should be treated with caution, not least in that he overlooks kings such as Penda of Mercia, who clearly held some kind of dominance during his reign. Similarly, in his list of bretwaldas, the West Saxon chronicler ignored such Mercian kings as Offa. The use of the term "Bretwalda" was an attempt by a West Saxon chronicler to stake a claim for West Saxon kings over the whole of Great Britain.
The concept of the overlordship of the whole of Britain was at least recognised in the period, whatever was meant by the term. Quite possibly it was a survival of a Roman concept of "Britain": it is significant that, while the hyperbolic inscriptions on coins and titles in charters often included the title "rex Britanniae", when England was unified the title used was "rex Angulsaxonum" ('king of the Anglo-Saxons'). Modern interpretation by historians. For some time, the existence of the word "bretwalda" in the "Anglo-Saxon Chronicle", which was based in part on the list given by Bede in his "Historia Ecclesiastica", led historians to think that there was perhaps a "title" held by Anglo-Saxon overlords. This was particularly attractive as it would lay the foundations for the establishment of an English monarchy. The 20th-century historian Frank Stenton said of the Anglo-Saxon chronicler that "his inaccuracy is more than compensated by his preservation of the English title applied to these outstanding kings". He argued that the term "bretwalda" "falls into line with the other evidence which points to the Germanic origin of the earliest English institutions". Over the later 20th century, this assumption was increasingly challenged. Patrick Wormald interpreted it as "less an objectively realized office than a subjectively perceived status" and emphasised the partiality of its usage in favour of Southumbrian rulers. In 1991, Steven Fanning argued that "it is unlikely that the term ever existed as a title or was in common usage in Anglo-Saxon England". The fact that Bede never mentioned a special title for the kings in his list implies that he was unaware of one. In 1995, Simon Keynes observed that "if Bede's concept of the Southumbrian overlord, and the chronicler's concept of the 'Bretwalda', are to be regarded as artificial constructs, which have no validity outside the context of the literary works in which they appear, we are released from the assumptions about political development which they seem to involve... we might ask whether kings in the eighth and ninth centuries were quite so obsessed with the establishment of a pan-Southumbrian state". Modern interpretations view the concept of "bretwalda" overlordship as complex and an important indicator of how a 9th-century chronicler interpreted history and attempted to insert the increasingly powerful Saxon kings into that history. Overlordship. A complex array of dominance and subservience existed during the Anglo-Saxon period. A king who used charters to grant land in another kingdom indicated such a relationship. If the other kingdom were fairly large, as when the Mercians dominated the East Anglians, the relationship would have been more equal than in the case of the Mercian dominance of the Hwicce, which was a comparatively small kingdom. Mercia was arguably the most powerful Anglo-Saxon kingdom for much of the late 7th through 8th centuries, though Mercian kings are missing from the two main "lists". For Bede, Mercia was a traditional enemy of his native Northumbria, and he regarded powerful kings such as the pagan Penda as standing in the way of the Christian conversion of the Anglo-Saxons. Bede omits them from his list, even though it is evident that Penda held a considerable degree of power. Similarly, powerful Mercian kings such as Offa are omitted from the West Saxon "Anglo-Saxon Chronicle", which sought to demonstrate the legitimacy of its kings to rule over other Anglo-Saxon peoples.
4101
1295565660
https://en.wikipedia.org/wiki?curid=4101
Brouwer fixed-point theorem
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function $f$ mapping a nonempty compact convex set to itself, there is a point $x_0$ such that $f(x_0) = x_0$. The simplest forms of Brouwer's theorem are for continuous functions $f$ from a closed interval $I$ in the real numbers to itself or from a closed disk $D$ to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset $K$ of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the "n"-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911. Statement. The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: ;In the plane: Every continuous function from a closed disk to itself has at least one fixed point. This can be generalized to an arbitrary finite dimension: ;In Euclidean space:Every continuous function from a closed ball of a Euclidean space into itself has a fixed point. A slightly more general version is as follows: ;Convex compact set:Every continuous function from a nonempty convex compact subset "K" of a Euclidean space to "K" itself has a fixed point. An even more general form is better known under a different name: ;Schauder fixed point theorem:Every continuous function from a nonempty convex compact subset "K" of a Banach space to "K" itself has a fixed point. Importance of the pre-conditions. The theorem holds only for functions that are "endomorphisms" (functions that have the same set as the domain and codomain) and for nonempty sets that are "compact" (thus, in particular, bounded and closed) and "convex" (or homeomorphic to convex). The following examples show why the pre-conditions are important. The function "f" as an endomorphism. Consider the function $f(x) = x + 1$ with domain $[-1,1]$. The range of the function is $[0,2]$. Thus, $f$ is not an endomorphism. Boundedness. Consider the function $f(x) = x + 1$, which is a continuous function from $\mathbb{R}$ to itself. As it shifts every point to the right, it cannot have a fixed point.
The space formula_10 is convex and closed, but not bounded. Closedness. Consider the function formula_12 which is a continuous function from the open interval formula_13 to itself. Since the point formula_14 is not part of the interval, there is no point in the domain such that formula_15. The set formula_13 is convex and bounded, but not closed. On the other hand, the function formula_1 does have a fixed point in the "closed" interval formula_18, namely formula_14. The closed interval formula_18 is compact; the open interval formula_13 is not. Convexity. Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball formula_22. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.). The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function formula_23, which is a continuous function from the unit circle to itself. Since −"x" ≠ "x" holds for any point of the unit circle, "f" has no fixed point. The analogous example works for the "n"-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function "f" does have a fixed point on the unit disc, since it takes the origin to itself. A formal generalization of Brouwer's fixed-point theorem for "hole-free" domains can be derived from the Lefschetz fixed-point theorem. Notes. The continuous function in this theorem is not required to be bijective or surjective. Illustrations. The theorem has several "real world" illustrations. Here are some examples. Intuitive approach. Explanations attributed to Brouwer. The theorem is supposed to have originated from Brouwer's observation of a cup of coffee. If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears. Brouwer is said to have added: "I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness. One-dimensional case. In one dimension, the result is intuitive and easy to prove. The continuous function "f" is defined on a closed interval ["a", "b"] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval ["a", "b"] which maps "x" to "x" (light green).
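Before the formal argument below, the one-dimensional statement can be checked numerically: a fixed point is a zero of "g"("x") = "f"("x") − "x", which bisection can locate. The following Python sketch is illustrative only; the particular self-map of [0, 1] is an arbitrary assumption, not taken from the text.

```python
import math

def fixed_point_1d(f, a, b, tol=1e-10):
    """Locate a fixed point of a continuous map f : [a, b] -> [a, b]
    by bisection on g(x) = f(x) - x; g(a) >= 0 and g(b) <= 0, so the
    intermediate value theorem guarantees a zero of g."""
    g = lambda x: f(x) - x
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid  # the crossing lies to the right of mid
        else:
            hi = mid  # the crossing lies to the left of mid
    return 0.5 * (lo + hi)

# An arbitrary continuous self-map of [0, 1], chosen for illustration:
f = lambda x: 0.5 * (math.cos(x) + x * x)
x_star = fixed_point_1d(f, 0.0, 1.0)
print(x_star, f(x_star))  # the two values agree to within ~1e-10
```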
Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function "g" which maps "x" to "f"("x") − "x". It is ≥ 0 on "a" and ≤ 0 on "b". By the intermediate value theorem, "g" has a zero in ["a", "b"]; this zero is a fixed point. Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string." History. The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case "n" = 3 was first proved by Piers Bohl in 1904 (published in "Journal für die reine und angewandte Mathematik"). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known. Before discovery. At the end of the 19th century, the old problem of the stability of the solar system returned to the focus of the mathematical community. Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope of finding an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge." He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision". He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties of the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval "t". If the area is a circular band, or if it is not closed, then this is not necessarily the case. To understand differential equations better, a new branch of mathematics was born. Poincaré called it "analysis situs". The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent.
A little later, he developed one of the fundamental tools for a better understanding of analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion. Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction. First proofs. At the dawn of the 20th century, the interest in analysis situs did not go unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed. It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator." Brouwer's approach bore fruit, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology. Reception. The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and fixed-point theory emerged as a branch of mathematics in its own right. Brouwer's theorem is probably the most important.
It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem. Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. For example, a continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the "n"-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem has provided, since 1926, a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman–Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations. Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets. Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. Proof outlines. A proof using degree. Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature. Let formula_24 denote the closed unit ball in formula_25 centered at the origin. Suppose for simplicity that formula_26 is continuously differentiable. A regular value of formula_1 is a point formula_28 such that the Jacobian of formula_1 is non-singular at every point of the preimage of formula_30. In particular, by the inverse function theorem, every point of the preimage of formula_30 lies in formula_32 (the interior of formula_33). The degree of formula_1 at a regular value formula_28 is defined as the sum of the signs of the Jacobian determinant of formula_1 over the preimages of formula_30 under formula_1: formula_39 The degree is, roughly speaking, the number of "sheets" of the preimage under "f" lying over a small open set around "p", with sheets counted oppositely if they are oppositely oriented. It is thus a generalization of the winding number to higher dimensions. The degree satisfies the property of "homotopy invariance": let formula_1 and formula_41 be two continuously differentiable functions, and formula_42 for formula_43.
Suppose that the point formula_30 is a regular value of formula_45 for all "t". Then formula_46. If there is no fixed point on the boundary of formula_33, then the function formula_48 is well-defined, and formula_49 defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so formula_41 also has degree one at the origin. As a consequence, the preimage formula_51 is not empty. The elements of formula_51 are precisely the fixed points of the original function "f". This requires some work to make fully general. The definition of degree must be extended to singular values of "f", and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and this construction has become a standard proof in the literature. A proof using the hairy ball theorem. The hairy ball theorem states that on the unit sphere "S" in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field "w" on "S". (The tangency condition means that "w"("x") ⋅ "x" = 0 for every unit vector "x".) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem can be found in the literature. In fact, suppose first that "w" is "continuously differentiable". By scaling, it can be assumed that "w" is a continuously differentiable unit tangent vector on "S". It can be extended radially to a small spherical shell "A" of "S". For "t" sufficiently small, a routine computation shows that the mapping "f""t"("x") = "x" + "t" "w"("x") is a contraction mapping on "A" and that the volume of its image is a polynomial in "t". On the other hand, as a contraction mapping, "f""t" must restrict to a homeomorphism of "S" onto (1 + "t"2)1/2 "S" and of "A" onto (1 + "t"2)1/2 "A". This gives a contradiction, because, if the dimension "n" of the Euclidean space is odd, (1 + "t"2)"n"/2 is not a polynomial. If "w" is only a "continuous" unit tangent vector on "S", by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map "u" of "A" into Euclidean space. The orthogonal projection onto the tangent space is given by "v"("x") = "u"("x") − ("u"("x") ⋅ "x") "x". Thus "v" is polynomial and nowhere vanishing on "A"; by construction "v"/‖"v"‖ is a smooth unit tangent vector field on "S", a contradiction. The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that "n" is even. If there were a fixed-point-free continuous self-mapping "f" of the closed unit ball "B" of the "n"-dimensional Euclidean space "V", set formula_53 Since "f" has no fixed points, it follows that, for "x" in the interior of "B", the vector "w"("x") is non-zero; and for "x" in "S", the scalar product "x" ⋅ "w"("x") = 1 − "x" ⋅ "f"("x") is strictly positive. From the original "n"-dimensional Euclidean space "V", construct a new auxiliary ("n" + 1)-dimensional space "W" = "V" × R, with coordinates "y" = ("x", "t"). Set formula_54 By construction "X" is a continuous vector field on the unit sphere of "W", satisfying the tangency condition "y" ⋅ "X"("y") = 0. Moreover, "X"("y") is nowhere vanishing (because, if "x" has norm 1, then "x" ⋅ "w"("x") is non-zero; while if "x" has norm strictly less than 1, then "t" and "w"("x") are both non-zero). This contradiction proves the fixed point theorem when "n" is even. For "n" odd, one can apply the fixed point theorem to the closed unit ball "B" in "n" + 1 dimensions and the mapping "F"("x", "y") = ("f"("x"), 0). The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk–Ulam theorem require tools from algebraic topology. A proof using homology or cohomology.
The proof uses the observation that the boundary of the "n"-disk "D""n" is "S""n"−1, the ("n" − 1)-sphere. Suppose, for contradiction, that a continuous function has "no" fixed point. This means that, for every point x in "D""n", the points "x" and "f"("x") are distinct. Because they are distinct, for every point x in "D""n", we can construct a unique ray from "f"("x") to "x" and follow the ray until it intersects the boundary "S""n"−1 (see illustration). By calling this intersection point "F"("x"), we define a function "F" : "D""n" → "S""n"−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever "x" itself is on the boundary, then the intersection point "F"("x") must be "x". Consequently, "F" is a special type of continuous function known as a retraction: every point of the codomain (in this case "S""n"−1) is a fixed point of "F". Intuitively it seems unlikely that there could be a retraction of "D""n" onto "S""n"−1, and in the case "n" = 1, the impossibility is more basic, because "S"0 (i.e., the endpoints of the closed interval "D"1) is not even connected. The case "n" = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of "D"2 to that of "S"1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case "n" = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields. For "n" > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology "H""n"−1("D""n") is trivial, while "H""n"−1("S""n"−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group. The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space "E""n". For "n" ≥ 2, the de Rham cohomology of "U" = "E""n" − {0} is one-dimensional in degree 0 and "n" − 1, and vanishes otherwise. If a retraction existed, then "U" would have to be contractible and its de Rham cohomology in degree "n" − 1 would have to vanish, a contradiction. A proof using Stokes' theorem. As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction "F" from the ball "B" onto its boundary ∂"B". In that case it can be assumed that "F" is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If "ω" is a volume form on the boundary then by Stokes' theorem, formula_55 giving a contradiction. More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form "ω" generates the de Rham cohomology group "H""n"−1(∂"B"), which is isomorphic to the homology group "H""n"−1(∂"B") by de Rham's theorem. A combinatorial proof. The BFPT can be proved using Sperner's lemma.
We now give an outline of the proof for the special case in which "f" is a function from the standard "n"-simplex, formula_56 to itself, where formula_57 For every point formula_58 also formula_59 Hence the sum of their coordinates is equal: formula_60 Hence, by the pigeonhole principle, for every formula_58 there must be an index formula_62 such that the formula_63th coordinate of formula_64 is greater than or equal to the formula_63th coordinate of its image under "f": formula_66 Moreover, if formula_64 lies on a "k"-dimensional sub-face of formula_56 then by the same argument, the index formula_63 can be selected from among the coordinates which are not zero on this sub-face. We now use this fact to construct a Sperner coloring. For every triangulation of formula_56 the color of every vertex formula_64 is an index formula_63 such that formula_73 By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an "n"-dimensional simplex whose vertices are colored with the entire set of available colors. Because "f" is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point formula_64 which satisfies the labeling condition in all coordinates: formula_75 for all formula_76 Because the sum of the coordinates of formula_64 and formula_78 must be equal, all these inequalities must actually be equalities. But this means that: formula_79 That is, formula_64 is a fixed point of formula_81 A proof by Hirsch. There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. Let "f" denote a continuous map from the unit ball Dn in n-dimensional Euclidean space to itself and assume that "f" fixes no point. By continuity and the fact that Dn is compact, it follows that for some ε > 0, ∥x − "f"(x)∥ > ε for all x in Dn. Then the map "f" can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above by sending each x to the point of ∂Dn where the unique ray from x through "f"(x) intersects ∂Dn, and this must now be a differentiable mapping. Such a retraction must have a non-singular value p ∈ ∂Dn, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image of "p" under the retraction would be a compact 1-manifold with boundary. Such a boundary would have to contain at least two endpoints, and these would have to lie on the boundary of the original ball. This would mean that the inverse image of one point on ∂Dn contains a different point on ∂Dn, contradicting the definition of a retraction Dn → ∂Dn. R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retraction is in fact defined everywhere except at the fixed points. For almost any point "q" on the boundary (assuming it is not a fixed point), the 1-manifold with boundary mentioned above does exist, and the only possibility is that it leads from "q" to a fixed point. It is an easy numerical task to follow such a path from "q" to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems. A proof using oriented area. A variation of the preceding proof does not employ Sard's theorem, and goes as follows.
If formula_82 is a smooth retraction, one considers the smooth deformation formula_83 and the smooth function formula_84 Differentiating under the sign of the integral, it is not difficult to check that "φ"′("t") = 0 for all "t", so "φ" is a constant function, which is a contradiction because "φ"(0) is the "n"-dimensional volume of the ball, while "φ"(1) is zero. The geometric idea is that "φ"("t") is the oriented area of "g""t"("B") (that is, the Lebesgue measure of the image of the ball via "g""t", taking into account multiplicity and orientation), and should remain constant (as is very clear in the one-dimensional case). On the other hand, as the parameter "t" passes from 0 to 1, the map "g""t" transforms continuously from the identity map of the ball to the retraction "r", which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of "r" is necessarily 0, as its image is the boundary of the ball, a set of null measure. A proof using the game Hex. A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of 10 × 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering "n"-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex. A proof using the Lefschetz fixed-point theorem. The Lefschetz fixed-point theorem says that if a continuous map "f" from a finite simplicial complex "B" to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number formula_85 and in particular if the Lefschetz number is nonzero then "f" must have a fixed point. If "B" is a ball (or more generally is contractible) then the Lefschetz number is one, because the only non-zero simplicial homology group is formula_86 and "f" acts as the identity on this group, so "f" has a fixed point. A proof in a weak logical system. In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely, over the base system RCA0, Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem. Generalizations. The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems. The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map "f" : ℓ2 → ℓ2 which sends a sequence ("x""n") from the closed unit ball of ℓ2 to the sequence ("y""n") defined by formula_87 It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point. The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity.
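The failure of compactness can be made concrete. Assuming formula_87 is the standard form of this counterexample, "y" = (√(1 − ‖"x"‖2), "x"1, "x"2, …), the following Python sketch iterates the map starting from the origin; finite lists stand in for finitely supported sequences, which is an implementation convenience rather than part of the construction.

```python
import math

def f(x):
    """Shift-type map on the closed unit ball of l2 (assumed standard form):
    prepend sqrt(1 - ||x||^2), pushing every coordinate one slot right.
    A finite list represents a finitely supported sequence."""
    norm_sq = sum(t * t for t in x)
    return [math.sqrt(max(0.0, 1.0 - norm_sq))] + x

def dist(x, y):
    """l2 distance, padding the shorter sequence with zeros."""
    n = max(len(x), len(y))
    xp, yp = x + [0.0] * (n - len(x)), y + [0.0] * (n - len(y))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xp, yp)))

x = []  # the zero sequence
for k in range(5):
    fx = f(x)
    print(k, round(dist(x, fx), 6))  # 1.0, then sqrt(2) ~ 1.414214 forever
    x = fx
```

Each iterate is a standard basis vector shifted one slot further along, so the residual ‖"f"("x") − "x"‖ never shrinks and no subsequence of iterates converges; this is precisely the loss of compactness that the infinite-dimensional generalizations must assume away.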
See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems. There is also a finite-dimensional generalization to a larger class of spaces: if formula_88 is a product of finitely many chainable continua, then every continuous function formula_89 has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement formula_90, such that formula_91 if and only if formula_92. Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers. The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in R"n", but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set. The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of "D""n".
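In line with the computability remark in the section on Hirsch's proof, an approximate fixed point in low dimension can also be found by direct search. The following Python sketch minimizes the residual ‖"f"("p") − "p"‖ over a grid on the closed unit disk; the map used (translate, then radially clip back into the disk) is a hypothetical illustration of a continuous, non-contractive self-map, not an example from the literature.

```python
import math

def clip_to_disk(x, y):
    """Radial retraction onto the closed unit disk (continuous)."""
    r = math.hypot(x, y)
    return (x / r, y / r) if r > 1.0 else (x, y)

def f(x, y):
    """Continuous self-map of the disk with no interior fixed point:
    translate right by 0.5, then clip back into the disk.
    (An illustrative choice; Brouwer forces a fixed point at (1, 0).)"""
    return clip_to_disk(x + 0.5, y)

# Brute-force search for the grid point minimizing ||f(p) - p||.
best_res, best_p = float("inf"), None
n = 400
for i in range(n + 1):
    for j in range(n + 1):
        x, y = -1.0 + 2.0 * i / n, -1.0 + 2.0 * j / n
        if x * x + y * y <= 1.0:
            fx, fy = f(x, y)
            res = math.hypot(fx - x, fy - y)
            if res < best_res:
                best_res, best_p = res, (x, y)
print(best_p, best_res)  # approximately (1.0, 0.0) with residual ~ 0
```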
Benzoic acid
Benzoic acid is a white (or colorless) solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring (C6H5) with a carboxyl (−COOH) substituent. The benzoyl group is often abbreviated "Bz" (not to be confused with "Bn," which is used for benzyl), thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula C6H5CO–. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source. Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates. History. Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596). Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid. They also investigated how hippuric acid is related to benzoic acid. In 1875 Salkowski discovered the antifungal properties of benzoic acid, which explain the preservation of benzoate-containing cloudberry fruits. Production. Industrial preparations. Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process, catalyzed by cobalt or manganese naphthenates, uses abundant materials and proceeds in high yield. The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was long obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically. Laboratory synthesis. Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation. Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%. By hydrolysis. Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base under acidic or basic conditions. From Grignard reagent. Bromobenzene can be converted to benzoic acid by "carboxylation" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond-forming reactions in organic chemistry. Oxidation of benzyl compounds. Benzyl alcohol, benzyl chloride, and virtually all other benzyl derivatives are readily oxidized to benzoic acid. Uses. Benzoic acid is mainly consumed in the production of phenol by oxidative decarboxylation at 300−400 °C. The temperature required can be lowered to 200 °C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis. Precursor to plasticizers.
Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. These plasticizers, which are used similarly to those derived from terephthalic acid esters, represent alternatives to phthalates. Precursor to sodium benzoate and related preservatives. Benzoic acid and its salts are used as food preservatives, represented by the E numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH changes to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food. Benzoic acid, benzoates and their derivatives are used as preservatives for acidic foods and beverages such as citrus fruit juices (citric acid), sparkling drinks (carbon dioxide), soft drinks (phosphoric acid), pickles (vinegar) and other acidified foods. Typical concentrations of benzoic acid as a preservative in food are between 0.05 and 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws. Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of carcinogenic benzene. Medicinal. Benzoic acid is a constituent of Whitfield's ointment, which is used for the treatment of fungal skin diseases such as ringworm and athlete's foot. As the principal component of gum benzoin, benzoic acid is also a major ingredient in both tincture of benzoin and Friar's balsam. Such products have a long history of use as topical antiseptics and inhalant decongestants. Benzoic acid was used as an expectorant, analgesic, and antiseptic in the early 20th century. Niche and laboratory uses. In teaching laboratories, benzoic acid is a common standard for calibrating a bomb calorimeter. Biology and health effects. Benzoic acid occurs naturally, as do its esters, in many plant and animal species. Appreciable amounts are found in most berries (around 0.05%). Ripe fruits of several "Vaccinium" species (e.g., cranberry, "V. vitis macrocarpon"; bilberry, "V. myrtillus") contain as much as 0.03–0.13% free benzoic acid. Benzoic acid is also formed in apples after infection with the fungus "Nectria galligena". Among animals, benzoic acid has been identified primarily in omnivorous or phytophageous species, e.g., in the viscera and muscles of the rock ptarmigan ("Lagopus muta") as well as in gland secretions of male muskoxen ("Ovibos moschatus") or Asian bull elephants ("Elephas maximus"). Gum benzoin contains up to 20% benzoic acid and 40% benzoic acid esters. In terms of its biosynthesis, benzoate is produced in plants from cinnamic acid. A pathway has also been identified from phenol via 4-hydroxybenzoate. Reactions. Reactions of benzoic acid can occur at either the aromatic ring or the carboxyl group. Aromatic ring. Electrophilic aromatic substitution takes place mainly at the 3-position due to the electron-withdrawing carboxyl group; i.e., benzoic acid is "meta"-directing. Carboxyl group. Reactions typical of carboxylic acids also apply to benzoic acid. Safety and mammalian metabolism. It is excreted as hippuric acid.
Benzoic acid is metabolized by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine "N"-acyltransferase into hippuric acid. Humans also metabolize toluene, which is likewise excreted as hippuric acid. For humans, the World Health Organization's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake of 5 mg/kg body weight per day. Cats have a significantly lower tolerance of benzoic acid and its salts than rats and mice. The lethal dose for cats can be as low as 300 mg/kg body weight. The oral LD50 for rats is 3040 mg/kg; for mice it is 1940–2263 mg/kg. In Taipei, Taiwan, a city health survey in 2010 found that 30% of dried and pickled food products contained benzoic acid.
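The pH dependence of benzoate preservation noted above can be made quantitative with the Henderson–Hasselbalch equation, since it is mainly the undissociated acid that crosses the cell membrane. The following Python sketch assumes the commonly cited pKa of about 4.2 for benzoic acid; the pH values are arbitrary illustrative inputs.

```python
# Fraction of benzoic acid in the undissociated (membrane-permeant) form
# at a given pH, from Henderson-Hasselbalch; pKa ~ 4.2 for benzoic acid.
PKA = 4.2

def fraction_undissociated(ph):
    return 1.0 / (1.0 + 10.0 ** (ph - PKA))

for ph in (3.0, 4.2, 5.0, 6.0, 7.0):
    print(f"pH {ph}: {100 * fraction_undissociated(ph):.1f}% as free acid")
# ~94% at pH 3 but only ~1.6% at pH 6: hence benzoate preserves acidic
# foods effectively and is nearly ineffective near neutral pH.
```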
Boltzmann distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form: formula_1 where "p""i" is the probability of the system being in state "i", exp is the exponential function, "ε""i" is the energy of that state, and the constant "kT" of the distribution is the product of the Boltzmann constant "k" and the thermodynamic temperature "T". The symbol formula_2 denotes proportionality (see below for the proportionality constant). The term "system" here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The "ratio" of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: formula_3 The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902. The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain "state" as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle "speeds" or "energies" in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution. The distribution. The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as formula_4 where "p""i" is the probability of state "i", "ε""i" the energy of state "i", "k" the Boltzmann constant, "T" the absolute temperature of the system, and the normalizing denominator is the canonical partition function, which results from the constraint that the probabilities of all accessible states must add up to 1. Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy formula_6 subject to the normalization constraint that formula_7 and the constraint that formula_8 equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies "ε""i". In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where "T" approaches zero from above or below, respectively.) The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database. The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied.
The ratio of probabilities for states "i" and "j" is given as formula_9 where "p""i" and "p""j" are the probabilities of states "i" and "j", and "ε""i" and "ε""j" are their respective energies. The corresponding ratio of populations of energy levels must also take their degeneracies into account. The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state "i" is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state "i". This probability is equal to the number of particles in state "i" divided by the total number of particles in the system, that is, the fraction of particles that occupy state "i": formula_10 where "N""i" is the number of particles in state "i" and "N" is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state "i". So the equation that gives the fraction of particles in state "i" as a function of the energy of that state is formula_11 This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We can check whether this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition. The softmax function commonly used in machine learning is related to the Boltzmann distribution: formula_12 Generalized Boltzmann distribution. A distribution of the form formula_13 is called a generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. It is usually derived from the principle of maximum entropy, but there are other derivations. In statistical mechanics. The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects: The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form. When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection.
The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form. In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form. Although these cases have strong similarities, it is helpful to distinguish them, as they generalize in different ways when the crucial assumptions are changed. In economics. The Boltzmann distribution can be introduced to allocate permits in emissions trading. A new allocation method, known as the Boltzmann Fair Division, uses the Boltzmann distribution to describe the most probable, natural, and unbiased distribution of emission permits among multiple countries. This framework has been further extended to address general problems of distributive justice, including cake-cutting and resource allocation, by allowing flexibility in how factors such as contribution, need, or preference are weighted. The Boltzmann fair division is recognized for providing a simple yet powerful probabilistic model that can be adapted to various social, political, and economic contexts. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
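As a numerical illustration of the distribution and its relation to the softmax function mentioned above, the following Python sketch computes state probabilities for a hypothetical three-level system; the energies and temperature are arbitrary illustrative values.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_probabilities(energies_ev, temperature_k):
    """p_i = exp(-E_i / kT) / Z, evaluated with the same max-shift
    stabilization used for softmax in machine learning."""
    beta = 1.0 / (K_B * temperature_k)
    e_min = min(energies_ev)  # shift so the largest weight is exp(0)
    weights = [math.exp(-beta * (e - e_min)) for e in energies_ev]
    z = sum(weights)  # partition function, up to the common shift factor
    return [w / z for w in weights]

energies = [0.0, 0.025, 0.1]  # eV; a hypothetical three-level system
probs = boltzmann_probabilities(energies, 300.0)
print(probs)  # lower-energy states are always more probable

# The probability ratio depends only on the energy difference
# (the Boltzmann factor):
kt = K_B * 300.0
print(probs[0] / probs[1], math.exp((energies[1] - energies[0]) / kt))
```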
Leg theory
Leg theory is a bowling tactic in the sport of cricket. The term "leg theory" is somewhat archaic, but the basic tactic remains in use in modern cricket. Simply put, leg theory involves concentrating the bowling attack at or near the line of leg stump. This may or may not be accompanied by a concentration of fielders on the leg side. The line of attack aims to cramp the batsman, making him play the ball with the bat close to the body. This makes it difficult to hit the ball freely and score runs, especially on the off side. Since a leg theory attack means the batsman is more likely to hit the ball on the leg side, additional fielders on that side of the field can be effective in preventing runs and taking catches. Stifling the batsman in this manner can lead to impatience and frustration, resulting in rash play which in turn can lead to a quick dismissal. Concentrating attack on the leg stump is considered by many cricket fans and commentators to lead to boring play, as it stifles run scoring and encourages batsmen to play conservatively. Leg theory can be a moderately successful tactic when used with both fast bowling and spin bowling, particularly leg spin to right-handed batsmen or off spin to left-handed batsmen. However, because it relies on a lack of concentration or discipline by the batsman, it can be risky against patient and skilled players, especially batsmen who are strong on the leg side. The English opening bowlers Sydney Barnes and Frank Foster used leg theory with some success in Australia in 1911–12. In England, at around the same time, Fred Root was one of the main proponents of the same tactic. Fast leg theory. In 1930, England captain Douglas Jardine, together with Nottinghamshire's captain Arthur Carr and his bowlers Harold Larwood and Bill Voce, developed a variant of leg theory in which the bowlers bowled fast, short-pitched balls that would rise into the batsman's body, together with a heavily stacked ring of close fielders on the leg side. The idea was that when the batsman defended against the ball, he would be likely to deflect the ball into the air for a catch. Jardine called this modified form of the tactic "fast leg theory". On the 1932–33 English tour of Australia, Larwood and Voce bowled fast leg theory at the Australian batsmen. It turned out to be extremely dangerous, and most Australian players sustained injuries from being hit by the ball. Wicket-keeper Bert Oldfield's skull was fractured by a ball hitting his head (although the ball had first glanced off the bat and Larwood had an orthodox field), almost precipitating a riot by the Australian crowd. The Australian press dubbed the tactic "Bodyline", and claimed it was a deliberate attempt by the English team to intimidate and injure the Australian players. Reports of the controversy reaching England at the time described the bowling as "fast leg theory", which sounded to many people to be a harmless and well-established tactic. This led to a serious misunderstanding amongst the English public and the Marylebone Cricket Club – the administrators of English cricket – of the dangers posed by Bodyline. The English press and cricket authorities declared the Australian protests to be a case of sore losing and "squealing". It was only the return of the English team, and the subsequent use of Bodyline against English players in England by the touring West Indian cricket team in 1933, that demonstrated to the country the dangers it posed.
The MCC subsequently revised the Laws of Cricket to prevent the use of "fast leg theory" tactics in future, also limiting the traditional tactic.
Blythe Danner
Blythe Katherine Danner (born February 3, 1943) is an American actress. Accolades she has received include two Primetime Emmy Awards for Outstanding Supporting Actress in a Drama Series for her role as Izzy Huffstodt on "Huff" (2004–2006), and a Tony Award for Best Featured Actress for her performance in "Butterflies Are Free" on Broadway (1969–1972). Danner was twice nominated for the Primetime Emmy for Outstanding Guest Actress in a Comedy Series for portraying Marilyn Truman on "Will & Grace" (2001–06; 2018–20), and for the Primetime Emmy for Outstanding Lead Actress in a Miniseries or Movie for her roles in "We Were the Mulvaneys" (2002) and "Back When We Were Grownups" (2004). For the latter, she also received a Golden Globe Award nomination. Danner played Dina Byrnes in "Meet the Parents" (2000) and its sequels "Meet the Fockers" (2004) and "Little Fockers" (2010), and will appear in an upcoming fourth film set for release in 2026. She has collaborated on several occasions with Woody Allen, appearing in three of his films: "Another Woman" (1988), "Alice" (1990), and "Husbands and Wives" (1992). Her other notable film credits include "1776" (1972), "Hearts of the West" (1975), "The Great Santini" (1979), "Mr. & Mrs. Bridge" (1990), "The Prince of Tides" (1991), "To Wong Foo, Thanks for Everything! Julie Newmar" (1995), "The Myth of Fingerprints" (1997), "The X-Files" (1998), "Forces of Nature" (1999), "The Love Letter" (1999), "The Last Kiss" (2006), "Paul" (2011), "Hello I Must Be Going" (2012), "I'll See You in My Dreams" (2015), and "What They Had" (2018). Danner is the sister of Harry Danner and the widow of Bruce Paltrow. Early life. Danner was born in Philadelphia, Pennsylvania, the daughter of Katharine and Harry Earl Danner, a bank executive. She has a brother, the opera singer and actor Harry Danner; a sister; and a maternal half-brother. Danner has Pennsylvania Dutch, English, and Irish ancestry; her maternal grandmother was a German immigrant, and one of her paternal great-grandmothers was born in Barbados to a family of European descent. Danner graduated in 1960 from George School, a Quaker high school near Newtown, Bucks County, Pennsylvania. Career. A graduate of Bard College, Danner took her first roles in the 1967 musical "Mata Hari" and the 1968 Off-Broadway production of "Summertree". Her early Broadway appearances included "Cyrano de Bergerac" (1968) and her Theatre World Award-winning performance in "The Miser" (1969). She won the Tony Award for Best Featured Actress in a Play for portraying a free-spirited divorcée in "Butterflies Are Free" (1970). In 1972, Danner portrayed Martha Jefferson in the film version of "1776". That same year, she played the unknowing wife of a murderous husband, opposite Peter Falk and John Cassavetes, in the "Columbo" episode "Étude in Black". Her earliest starring film role was opposite Alan Alda in "To Kill a Clown" (1972). Danner appeared in the episode of "M*A*S*H" entitled "The More I See You", playing the love interest of Alda's character Hawkeye Pierce. She played lawyer Amanda Bonner in television's "Adam's Rib", opposite Ken Howard as Adam Bonner. She played Zelda Fitzgerald in "F. Scott Fitzgerald and 'The Last of the Belles'" (1974). She was the eponymous heroine in the film "Lovin' Molly" (1974), directed by Sidney Lumet. She appeared in "Futureworld" (1976), playing Tracy Ballard opposite Peter Fonda. In the 1982 TV movie "Inside the Third Reich", she played the wife of Albert Speer.
In the film version of Neil Simon's semi-autobiographical play "Brighton Beach Memoirs" (1986), she portrayed a middle-aged Jewish mother. She has appeared in two films based on the novels of Pat Conroy, "The Great Santini" (1979) and "The Prince of Tides" (1991), as well as two television movies adapted from books by Anne Tyler, "Saint Maybe" and "Back When We Were Grownups", both for the Hallmark Hall of Fame. Danner appeared opposite Robert De Niro in the 2000 comedy hit "Meet the Parents" and its sequels, "Meet the Fockers" (2004) and "Little Fockers" (2010). On May 30, 2025, it was announced that Danner would return for a fourth film, scheduled for release on November 25, 2026. From 2001 to 2006, she regularly appeared on NBC's sitcom "Will & Grace" as Will Truman's mother Marilyn. From 2004 to 2006, she starred in the main cast of the comedy-drama series "Huff". In 2005, she was nominated for three Primetime Emmy Awards for her work on "Will & Grace", "Huff", and the television film "Back When We Were Grownups", winning for her role in "Huff". The following year, she won a second consecutive Emmy Award for "Huff". For 25 years, she has been a regular performer at the Williamstown Summer Theater Festival, where she also serves on the board of directors. In 2006, Danner was awarded an inaugural Katharine Hepburn Medal by Bryn Mawr College's Katharine Houghton Hepburn Center. In 2015, Danner was inducted into the American Theater Hall of Fame. Environmental activism. Danner has been involved in environmental issues such as recycling and conservation for over 30 years. She has been active with INFORM, Inc., serves on the board of Environmental Advocates of New York and the board of directors of the Environmental Media Association, and won the 2002 EMA Board of Directors Ongoing Commitment Award. In 2011, Danner joined Moms Clean Air Force to help call on parents to join the fight against toxic air pollution. Health care activism. After the death of her husband Bruce Paltrow from oral cancer, she became involved with the nonprofit Oral Cancer Foundation. In 2005, she filmed a public service announcement to raise public awareness of the disease and the need for early detection. She has since appeared on morning talk shows and given interviews in magazines such as "People". The Bruce Paltrow Oral Cancer Fund, administered by the Oral Cancer Foundation, raises funding for oral cancer research and treatment, with a particular focus on communities in which healthcare disparities exist. She has also appeared in commercials for Prolia, a brand of denosumab used in the treatment of osteoporosis. Personal life. Danner was married to producer and director Bruce Paltrow, who died of oral cancer in 2002. She and Paltrow had two children together, actress Gwyneth Paltrow and director Jake Paltrow. Danner's niece is the actress Katherine Moennig, the daughter of her maternal half-brother William. Danner co-starred with her daughter in the 1992 television film "Cruel Doubt" and again in the 2003 film "Sylvia", in which she portrayed Aurelia Plath, mother to Gwyneth's title role of Sylvia Plath. Danner is a practitioner of transcendental meditation, which she has described as "very helpful and comforting".
Bioleaching
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy, and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly, the minerals that are the target of oxidation are pyrite and arsenopyrite. The second category is the leaching of sulfide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper. Process. Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including "Acidithiobacillus ferrooxidans" (formerly known as "Thiobacillus ferrooxidans") and "Acidithiobacillus thiooxidans" (formerly known as "Thiobacillus thiooxidans"). As a general principle, in one proposed method of bacterial leaching known as indirect leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, Fe2+) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1)   formula_1    spontaneous The ferrous ion is then oxidized by bacteria using oxygen: (2)   formula_2    (iron oxidizers) Thiosulfate is also oxidized by bacteria to give sulfate: (3)   formula_3    (sulfur oxidizers) The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction: (4)  formula_4 The net products of the reaction are soluble ferrous sulfate and sulfuric acid. The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant. The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution. Chalcopyrite leaching: (1)   formula_5    spontaneous (2)   formula_6    (iron oxidizers) (3)   formula_7    (sulfur oxidizers) net reaction: (4)  formula_8 In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores.
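The net pyrite reaction (4) fixes the oxygen demand and acid yield of the process, which a back-of-envelope calculation makes visible. The following Python sketch assumes the net stoichiometry FeS2 + 7/2 O2 + H2O → FeSO4 + H2SO4, consistent with the soluble ferrous sulfate and sulfuric acid products stated above; the tonnage is an arbitrary illustrative input.

```python
# Back-of-envelope mass balance for pyrite bioleaching, assuming the net
# stoichiometry FeS2 + 7/2 O2 + H2O -> FeSO4 + H2SO4 (reaction (4) above).
M_FES2 = 55.845 + 2 * 32.06               # g/mol, pyrite
M_O2 = 2 * 15.999                         # g/mol, oxygen
M_H2SO4 = 2 * 1.008 + 32.06 + 4 * 15.999  # g/mol, sulfuric acid

def pyrite_mass_balance(tonnes_fes2=1.0):
    """Tonnes of O2 consumed and H2SO4 produced per tonnes_fes2 leached."""
    mol_fes2 = tonnes_fes2 * 1e6 / M_FES2   # 1 tonne = 1e6 g
    o2 = mol_fes2 * 3.5 * M_O2 / 1e6        # tonnes of O2 consumed
    acid = mol_fes2 * M_H2SO4 / 1e6         # tonnes of H2SO4 produced
    return o2, acid

o2, acid = pyrite_mass_balance()
print(f"per tonne of FeS2: {o2:.2f} t O2 consumed, {acid:.2f} t H2SO4 made")
# ~0.93 t of oxygen and ~0.82 t of acid per tonne of pyrite, which is why
# aeration and acid management are central concerns in heap design.
```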
Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ → UO22+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by "Acidithiobacillus" spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals. Further processing. The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene: Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq) The ligand donates electrons to the copper, producing a complex: a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, its direction is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution. Then the copper is passed through an electrowinning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there. The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron: Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq) The electrons lost by the iron are taken up by the copper. The copper ion is the oxidizing agent (it accepts electrons), and iron is the reducing agent (it loses electrons). Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorption (take-up on the surface) onto charcoal. With fungi. Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains ("Aspergillus niger, Penicillium simplicissimum") were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. "Aspergillus niger" can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as a source of acids that directly dissolve the metal. Feasibility. Economic feasibility. Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. Low metal concentrations are not a problem for bacteria, because they simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The company simply collects the ions from the solution after the bacteria have finished. 
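To give a sense of the arithmetic behind the electrowinning step described above, the sketch below applies Faraday's law of electrolysis to estimate the copper plated onto the cathode. The current, duration, and efficiency figures are illustrative assumptions rather than values from this article.

```python
# Rough estimate of copper recovered by electrowinning via Faraday's law.
# All operating figures below are illustrative assumptions; real cells run
# at current efficiencies below 100% and vary widely between plants.
MOLAR_MASS_CU = 63.55   # g/mol
FARADAY = 96485.0       # C per mole of electrons
ELECTRONS_PER_ION = 2   # Cu2+ + 2 e- -> Cu

def copper_deposited_grams(current_a: float, hours: float,
                           current_efficiency: float = 0.9) -> float:
    """Mass of copper plated for a given current, duration, and efficiency."""
    charge_coulombs = current_a * hours * 3600.0
    moles_cu = current_efficiency * charge_coulombs / (ELECTRONS_PER_ION * FARADAY)
    return moles_cu * MOLAR_MASS_CU

# Example: 200 A passed for 24 hours at 90% current efficiency
print(f"{copper_deposited_grams(200, 24):.0f} g")  # about 5.1 kg
```

Because the relation is linear in total charge, doubling the current or the run time doubles the plated mass.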
Bioleaching can be used to extract metals from low-concentration ores, such as gold ores, that are too poor for other technologies. It can be used to partially replace the extensive crushing and grinding that translates to prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal. High-concentration ores, such as copper ores, are more economical to smelt rather than bioleach, due to the slow speed of the bacterial leaching process compared to smelting. The slow speed of bioleaching introduces a significant delay in cash flow for new mines. Nonetheless, at the largest copper mine of the world, Escondida in Chile, the process seems to be favorable. Start-up costs can nevertheless be substantial, and some companies that adopt the process cannot keep up with demand and end up in debt. In space. In 2020, scientists showed, in an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space. Environmental impact. The process is more environmentally friendly than traditional extraction methods. For the company this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled. Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy metal ions such as iron, zinc, and arsenic leak during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a setup of bioleaching must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous.
4113
7903804
https://en.wikipedia.org/wiki?curid=4113
Bouldering
Bouldering is a form of rock climbing that is performed on small rock formations or artificial rock walls without the use of ropes or harnesses. While bouldering can be done without any equipment, most climbers use climbing shoes to help secure footholds, chalk to keep their hands dry and to provide a firmer grip, and bouldering mats to prevent injuries from falls. Unlike free solo climbing, which is also performed without ropes, bouldering problems (the sequence of moves that a climber performs to complete the climb) are usually less than about 6 metres (20 ft) tall. Traverses, which are a form of boulder problem, require the climber to climb horizontally from one end to another. Artificial climbing walls allow boulderers to climb indoors in areas without natural boulders. Bouldering competitions take place in both indoor and outdoor settings. The sport was originally a method of training for roped climbs and mountaineering, so climbers could practice specific moves at a safe distance from the ground. Additionally, the sport served to build stamina and increase finger strength. During the 20th century, bouldering evolved into a separate discipline. Individual problems are assigned ratings based on difficulty. Although there have been various rating systems used throughout the history of bouldering, modern problems usually use either the V-scale or the Fontainebleau scale. Outdoor bouldering. The characteristics of boulder problems depend largely on the type of rock being climbed. For example, granite often features long cracks and slabs, while sandstone rocks are known for their steep overhangs and frequent horizontal breaks. Limestone and volcanic rock are also used for bouldering. There are many prominent bouldering areas throughout the United States, including Hueco Tanks in Texas, Mount Blue Sky in Colorado, the Appalachian Mountains in the eastern United States, and the Buttermilks in Bishop, California. Squamish, British Columbia, is one of the most popular bouldering areas in Canada. Europe is also home to a number of bouldering sites, such as Fontainebleau in France, Meschia in Italy, Albarracín in Spain, and various mountains throughout Switzerland. Indoor bouldering. Artificial climbing walls are used to simulate boulder problems in an indoor environment, usually at climbing gyms. These walls are constructed with wooden panels, polymer cement panels, concrete shells, or precast molds of actual rock walls. Holds, usually made of plastic, are then bolted onto the wall to create problems. Some problems use steep overhanging surfaces which force the climber to support much of their weight using their upper body strength. Climbing gyms often feature multiple problems within the same section of wall. Historically, the most common method route-setters used to designate the intended problem was by placing colored tape next to each hold. For example, red tape would indicate one bouldering problem while green tape would be used to set a different problem in the same area. Indoor bouldering requires very little in terms of equipment: at minimum, climbing shoes; at most, climbing shoes plus a chalk bag, chalk, and a brush. Grading. Bouldering problems are assigned numerical difficulty ratings by route-setters and climbers. The two most widely used rating systems are the V-scale and the Fontainebleau system. The V-scale, which originated in the United States, is an open-ended rating system with higher numbers indicating a higher degree of difficulty. 
The V1 rating indicates that a problem can be completed by a novice climber in good physical condition after several attempts. The scale begins at V0, and as of 2024, the highest V rating that has been assigned to a bouldering problem is V17. Some climbing gyms also use a VB grade to indicate beginner problems. The Fontainebleau scale follows a similar system, with each numerical grade divided into three ratings with the letters "a", "b", and "c". For example, Fontainebleau 7A roughly corresponds with V6, while Fontainebleau 7C+ is equivalent to V10. In both systems, grades are further differentiated by appending "+" to indicate a small increase in difficulty. Despite this level of specificity, ratings of individual problems are often controversial, as ability level is not the only factor that affects how difficult a problem may be for a particular climber. Height, arm length, flexibility, and other body characteristics can also affect difficulty. Highball bouldering. Highball bouldering is "a sub-discipline of bouldering in which climbers seek out tall, imposing lines to climb ropeless above crash pads." It may have begun in 1961 when John Gill, without top-rope rehearsal or bouldering pads (which did not exist), bouldered a steep face on a granite spire called "The Thimble". In 2002, Jason Kehl completed Evilution, a boulder in the Buttermilks of California and the first highball at double-digit V-difficulty, earning the grade of V12. Important milestone ascents in this style include: Competition bouldering. The International Federation of Sport Climbing (IFSC) employs an indoor format (although competitions can also take place in an outdoor setting) that breaks the competition into three rounds: qualifications, semi-finals, and finals. The rounds feature different sets of four to six boulder problems, and each competitor has a fixed amount of time to attempt each problem. At the end of each round, competitors are ranked by the number of completed problems, with ties settled by the total number of attempts taken to solve the problems. Some competitions only permit climbers a fixed number of attempts at each problem with a timed rest period in between. In an open-format competition, all climbers compete simultaneously, and are given a fixed amount of time to complete as many problems as possible. More points are awarded for more difficult problems, while points are deducted for multiple attempts on the same problem. In 2012, the IFSC submitted a proposal to the International Olympic Committee (IOC) to include lead climbing in the 2020 Summer Olympics. The proposal was later revised to an "overall" competition, which would feature bouldering, lead climbing, and speed climbing. In 2016, the IOC officially approved climbing, along with four other sports, as an Olympic sport, based on their "impact on gender equality, the youth appeal of the sports and the legacy value of adding them to the Tokyo Games". History. Modern bouldering. Modern recreational climbing began in the late 19th century in England, southeastern Germany, northern Italy, and France. Bouldering on the rocks of Fontainebleau outside of Paris began in the late 1800s, with the first guidebook written by Maurice Martin in 1945. Bouldering as training or a recreational pastime also began in the late 1800s in England and perhaps elsewhere. Oscar Eckenstein was an early proponent. 
In the late 1950s, John Gill, who is frequently called "the father of modern bouldering", combined gymnastics with rock climbing, and felt that the best place to do that was on boulders or small outcrops. He developed a rating system that was closed-ended: B1 problems were as difficult as the most challenging roped routes of the time, B2 problems were more difficult, and B3 problems had been completed only once. He also introduced chalk as a method of keeping the climber's hands dry, promoted a dynamic climbing style, and emphasized the importance of strength training to complement skill. His 1969 article in the Journal of the American Alpine Club entitled "The Art of Bouldering" defines modern bouldering. As Gill improved in ability and influence, his ideas became the norm. In the 1980s, two important training tools emerged. The first was bouldering mats, also referred to as "crash pads", which protected against injuries from falling and enabled boulderers to climb in areas that would have been too dangerous otherwise. The second was indoor climbing walls, which helped spread the sport to areas without outdoor climbing and allowed serious climbers to train year-round. As the sport grew in popularity, new bouldering areas were developed throughout Europe and the United States, and more athletes began participating in bouldering competitions. The visibility of the sport greatly increased in the early 2000s, as YouTube videos and climbing blogs helped boulderers around the world to quickly learn techniques, find hard problems, and announce newly completed projects. Notable ascents. Notable boulder climbs are chronicled by the climbing media to track progress in boulder climbing standards and levels of technical difficulty; in contrast, the hardest traditional climbing routes tend to be of lower technical difficulty due to the additional burden of having to place protection during the course of the climb, and due to the lack of any possibility of using natural protection on the most extreme climbs. As of November 2022, the world's hardest bouldering routes are "Burden of Dreams" by Nalle Hukkataival and "Return of the Sleepwalker" by Daniel Woods, both at proposed grades of . There are a number of routes with a confirmed climbing grade of , the first of which was "Gioia" by Christian Core in 2008 (and confirmed by Adam Ondra in 2011). As of December 2021, female climbers Josune Bereziartu, Ashima Shiraishi, and Kaddi Lehmann have repeated boulder problems at the boulder grade. On 28 July 2023, Katie Lamb became the first female climber to climb an -rated boulder by repeating "Box Therapy" at Rocky Mountain National Park. However, after Brooke Raboutou repeated the climb in October 2023, the boulder was ultimately downgraded to . Equipment. Unlike other climbing sports, bouldering can be performed safely and effectively with very little equipment, an aspect which makes the discipline highly appealing, though opinions differ on how much gear is essential. While bouldering pioneer John Sherman asserted that "The only gear really needed to go bouldering is boulders," others suggest the use of climbing shoes and a chalkbag – a small pouch where ground-up chalk is kept – as the bare minimum, and more experienced boulderers typically bring multiple pairs of climbing shoes, chalk, brushes, crash pads, and a skincare kit. Climbing shoes have the most direct impact on performance. Besides protecting the climber's feet from rough surfaces, climbing shoes are designed to help the climber secure footholds. 
Climbing shoes typically fit much tighter than other athletic footwear and often curl the toes downwards to enable precise footwork. They are manufactured in a variety of different styles to perform in different situations. Stiffer shoes excel at securing small edges, whereas softer shoes provide greater sensitivity. The front of the shoe, called the "toe box", can be asymmetric, which performs well on overhanging rocks, or symmetric, which is better suited for vertical problems and slabs. To absorb sweat, most boulderers use gymnastics chalk on their hands, stored in a chalk bag, which can be tied around the waist (also called sport climbing chalk bags), allowing the climber to reapply chalk during the climb. There are also versions of floor chalk bags (also called bouldering chalk bags), which are usually bigger than sport climbing chalk bags and are meant to be kept on the floor while climbing; this is because boulders do not usually have so many movements as to require chalking up more than once. Different sizes of brushes are used to remove excess chalk and debris from boulders in between climbs; they are often attached to the end of a long straight object in order to reach higher holds. Crash pads, also referred to as bouldering mats, are foam cushions placed on the ground to protect climbers from injury after falling. Boulder problems are generally shorter than about 6 metres (20 ft) from ground to top. This makes the sport significantly safer than free solo climbing, which is also performed without ropes, but with no upper limit on the height of the climb. However, minor injuries are common in bouldering, particularly sprained ankles and wrists. To prevent injuries, boulderers position crash pads near the boulder to provide a softer landing, as well as one or more spotters to help redirect the climber towards the pads. Upon landing, boulderers employ falling techniques similar to those used in gymnastics: spreading the impact across the entire body to avoid bone fractures and positioning limbs to allow joints to move freely throughout the impact. Techniques. Although every type of rock climbing requires a high level of strength and technique, bouldering is the most dynamic form of the sport, requiring the highest level of power and placing considerable strain on the body. Training routines that strengthen fingers and forearms are useful in preventing injuries such as tendonitis and ruptured ligaments. However, as with other forms of climbing, bouldering technique begins with proper footwork. Leg muscles are significantly stronger than arm muscles; thus, proficient boulderers use their arms to maintain balance and body positioning as much as possible, relying on their legs to push them up the rock. Boulderers also keep their arms straight with their shoulders engaged whenever feasible, allowing their bones to support their body weight rather than their muscles. Bouldering movements are described as either "static" or "dynamic". Static movements are those that are performed slowly, with the climber's position controlled by maintaining contact on the boulder with the other three limbs. Dynamic movements use the climber's momentum to reach holds that would be difficult or impossible to secure statically, with an increased risk of falling if the movement is not performed accurately. Environmental impact. Bouldering can damage vegetation that grows on rocks, such as moss and lichens. 
This can occur as a result of the climber intentionally cleaning the boulder, or unintentionally from repeated use of handholds and footholds. Vegetation on the ground surrounding the boulder can also be damaged from overuse, particularly by climbers laying down crash pads. Soil erosion can occur when boulderers trample vegetation while hiking off established trails, or when they unearth small rocks near the boulder in an effort to make the landing zone safer in case of a fall. Other environmental concerns include littering, improperly disposed feces, and graffiti. These issues have caused some land managers to prohibit bouldering, as was the case in Tea Garden, a popular bouldering area in Rocklands, South Africa.
4115
10354770
https://en.wikipedia.org/wiki?curid=4115
Boiling point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100 °C (or, with scientific precision, at 99.97 °C) under standard pressure at sea level, but at lower temperatures at higher altitudes. For a given pressure, different liquids will boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure. A "saturated liquid" contains as much thermal energy as it can without boiling (or conversely a "saturated vapor" contains as little thermal energy as it can without condensing). Saturation temperature means "boiling point". The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition. If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied. The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (1 atm), or the IUPAC standard pressure of 100.000 kPa (1 bar). At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point. 
If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus: T_B = (1/T_0 − (R/ΔH_vap) · ln(P/P_0))⁻¹, where T_B is the boiling point at the pressure of interest, R is the ideal gas constant, P is the vapor pressure of the liquid, P_0 is some pressure where the corresponding boiling temperature T_0 is known (usually data available at 1 atm or 100 kPa (1 bar)), ΔH_vap is the heat of vaporization of the liquid, and ln is the natural logarithm. Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature. If the temperature in a system remains constant (an "isothermal" system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased. There are two conventions regarding the "standard boiling point of water": The "normal boiling point" is commonly given as 100 °C (actually 99.97 °C, following the thermodynamic definition of the Celsius scale based on the kelvin) at a pressure of 1 atm (101.325 kPa). The IUPAC-recommended "standard boiling point of water" at a standard pressure of 100 kPa (1 bar) is 99.61 °C. For comparison, on top of Mount Everest, at about 8,849 m elevation, the pressure is roughly a third of its sea-level value and the boiling point of water is about 71 °C. The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure. Relation between the normal boiling point and the vapor pressure of liquids. The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid. The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. The critical point of a liquid is the highest temperature (and pressure) at which it will actually boil. See also Vapour pressure of water. Boiling point of chemical elements. The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point. Boiling point as a reference property of a pure compound. As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. 
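As a numerical illustration of the Clausius–Clapeyron relation given earlier in this section, the sketch below estimates water's boiling point at two of the pressures discussed above. The constant heat of vaporization is an approximation, and the Everest-level pressure is a rough assumed value.

```python
import math

R = 8.314          # J/(mol*K), ideal gas constant
DH_VAP = 40660.0   # J/mol, approximate heat of vaporization of water

def boiling_point_k(p_pa: float, p0_pa: float = 101325.0,
                    t0_k: float = 373.15) -> float:
    """Boiling temperature at pressure p_pa, anchored at a known
    boiling point (p0_pa, t0_k), per the relation above."""
    return 1.0 / (1.0 / t0_k - R * math.log(p_pa / p0_pa) / DH_VAP)

# Water at the IUPAC standard pressure of 1 bar (100 kPa):
print(f"{boiling_point_k(100_000) - 273.15:.2f} C")  # ~99.6 C
# Water near the assumed Everest-summit pressure (~34 kPa):
print(f"{boiling_point_k(34_000) - 273.15:.1f} C")   # ~71 C
```

The 1 bar result lands close to the IUPAC standard boiling point quoted above, which is a useful sanity check on the approximation.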
A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points. In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area. Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. By comparison to boiling, sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases such as with carbon dioxide at atmospheric pressure. For such compounds, the sublimation point is the temperature at which a solid, turning directly into vapor, has a vapor pressure equal to the external pressure. Impurities and mixtures. In the preceding section, boiling points of pure compounds were covered. 
Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts, or compounds of a volatility far lower than the main component compound, decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water. In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures, and thus boiling points and dew points, of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases. Boiling point of water with elevation. Following is a table of the change in the boiling point of water with elevation, at intervals of 500 meters over the range of human habitation (from the Dead Sea, at about 430 m below sea level, to La Rinconada, Peru, at about 5,100 m above sea level), then of 1,000 meters over the additional range of uninhabited surface elevation (up to Mount Everest, at about 8,849 m), along with a similar range in imperial units.
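The kind of values such a table contains can be approximated by combining the Clausius–Clapeyron relation from earlier in this article with a simple barometric model of the atmosphere. In the sketch below, the scale height and the constant heat of vaporization are rough assumptions, so the output is indicative only and does not reproduce the original table.

```python
import math

R = 8.314                   # J/(mol*K), ideal gas constant
DH_VAP = 40660.0            # J/mol, approximate heat of vaporization of water
P0, T0 = 101325.0, 373.15   # reference point: 1 atm, 100 C
SCALE_HEIGHT = 8400.0       # m, rough isothermal-atmosphere scale height

def pressure_pa(elevation_m: float) -> float:
    """Crude barometric estimate of atmospheric pressure at an elevation."""
    return P0 * math.exp(-elevation_m / SCALE_HEIGHT)

def boiling_point_c(p_pa: float) -> float:
    """Clausius-Clapeyron estimate of water's boiling point in Celsius."""
    return 1.0 / (1.0 / T0 - R * math.log(p_pa / P0) / DH_VAP) - 273.15

for h in (0, 1000, 3000, 5000, 8849):
    print(f"{h:5d} m: ~{boiling_point_c(pressure_pa(h)):.0f} C")
```

Near sea level the model loses roughly 1 °C of boiling point per 300 m of elevation, in line with the rule of thumb often quoted for cooking at altitude.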
4116
12530063
https://en.wikipedia.org/wiki?curid=4116
Big Bang
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models based on the Big Bang concept explain a broad range of phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, which poses the horizon and flatness problems, is explained through cosmic inflation: a phase of accelerated expansion during the earliest stages. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe. A wide range of empirical evidence strongly favors the Big Bang event, which is now widely accepted. Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an extraordinarily hot and dense primordial universe. Physics lacks a widely accepted theory that can model the earliest conditions of the Big Bang. As the universe expanded, it cooled sufficiently to allow the formation of subatomic particles, and later atoms. These primordial elements—mostly hydrogen, with some helium and lithium—then coalesced under the force of gravity aided by dark matter, forming early stars and galaxies. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to a concept called dark energy. The concept of an expanding universe was introduced by the physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical observation of an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which discerned that galaxies are receding from Earth at speeds proportional to their distance. Independent of Friedmann's work, and independent of Hubble's observations, in 1931 physicist Georges Lemaître proposed that the universe emerged from a "primeval atom," introducing the modern notion of the Big Bang. In 1964, the CMB was discovered. Over the next few years, measurements showed this radiation to be uniform in all directions on the sky and to have the spectral shape of a blackbody, both consistent with the Big Bang models of high temperatures and densities in the distant past. By the late 1960s, most cosmologists were convinced that the competing steady-state model of cosmic evolution was incorrect. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. These include the unequal abundances of matter and antimatter known as baryon asymmetry, the detailed nature of dark matter surrounding galaxies, and the origin of dark energy. Features of the models. Assumptions. Big Bang cosmology models depend on three major assumptions: the universality of physical laws, the cosmological principle, and that the matter content can be modeled as a perfect fluid. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. A perfect fluid has no viscosity; the pressure of a perfect fluid is proportional to its density. These ideas were initially taken as postulates, but later efforts were made to test each of them. 
For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. The key physical law behind these models, general relativity, has passed stringent tests on the scale of the Solar System and binary stars. The cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. Expansion prediction. The cosmological principle dramatically simplifies the equations of general relativity, giving the Friedmann–Lemaître–Robertson–Walker metric to describe the geometry of the universe and, with the assumption of a perfect fluid, the Friedmann equations giving the time dependence of that geometry. The only parameter at this level of description is the mass-energy density: the geometry of the universe and its expansion are a direct consequence of its density. All of the major features of Big Bang cosmology are related to these results. Mass-energy density. In Big Bang cosmology, the mass–energy density controls the shape and evolution of the universe. By combining astronomical observations with known laws of thermodynamics and particle physics, cosmologists have worked out the components of the density over the lifespan of the universe. In the current universe, luminous matter (the stars, planets, and so on) makes up less than 5% of the density. Dark matter accounts for 27% and dark energy the remaining 68%. Horizons. An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach Earth. This places a limit or a "past horizon" on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a "future horizon", which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. Thermalization. Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. Timeline. According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling. Singularity. 
Existing theories of physics cannot tell us about the moment of the Big Bang. Extrapolation of the expansion of the universe backwards in time using only general relativity yields a gravitational singularity with infinite density and temperature at a finite time in the past, but the meaning of this extrapolation in the context of the Big Bang is unclear. Moreover, classical gravitational theories are expected to be inadequate to describe physics under these conditions. Quantum gravity effects are expected to be dominant during the Planck epoch, when the temperature of the universe was close to the Planck scale (around 10³² K or 10²⁸ eV). Even below the Planck scale, undiscovered physics could greatly influence the expansion history of the universe. The Standard Model of particle physics is only tested up to temperatures of order 10¹⁷ K (10 TeV) in particle colliders, such as the Large Hadron Collider. Moreover, new physical phenomena decoupled from the Standard Model could have been important before the time of neutrino decoupling, when the temperature of the universe was only about 10¹⁰ K (1 MeV). Inflation and baryogenesis. The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, about 1.6 × 10⁻³⁵ m, and the universe consequently had a temperature of approximately 10³² degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10⁻⁴³ seconds, when gravitation separated from the other forces as the universe's temperature fell. At approximately 10⁻³⁷ seconds into the expansion, a phase transition caused cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10⁻³⁶ seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around 10⁻³³ to 10⁻³² seconds, with the observable universe's volume having increased by a factor of at least 10⁷⁸. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. 
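As a quick check on the orders of magnitude quoted above for the Planck epoch, the sketch below computes the Planck length and Planck temperature from fundamental constants; the constant values are rounded, so the outputs are approximate.

```python
import math

# Fundamental constants in SI units (rounded values)
HBAR = 1.0546e-34   # J*s, reduced Planck constant
G = 6.674e-11       # m^3/(kg*s^2), gravitational constant
C = 2.9979e8        # m/s, speed of light
K_B = 1.3807e-23    # J/K, Boltzmann constant

planck_length = math.sqrt(HBAR * G / C**3)             # sqrt(hbar*G/c^3)
planck_temperature = math.sqrt(HBAR * C**5 / G) / K_B  # sqrt(hbar*c^5/G)/k_B

print(f"Planck length:      {planck_length:.2e} m")  # ~1.6e-35 m
print(f"Planck temperature: {planck_temperature:.2e} K")  # ~1.4e32 K
```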
Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. Cooling. The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10⁻¹² seconds. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10⁸ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate over that of the photon and neutrino radiation at a time of about 50,000 years. At a time of about 380,000 years, the universe cooled enough that electrons and nuclei combined into neutral atoms (mostly hydrogen) in an event called recombination. This process made the previously opaque universe transparent, and the photons that last scattered during this epoch comprise the cosmic microwave background. Structure formation. After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. Cosmic acceleration. 
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present-day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10⁻¹⁵ seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics. Concept history. Etymology. English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. A primordial singularity is sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. However, an attempt to find a more suitable alternative was not successful. According to Timothy Ferris: The term 'big bang' was coined with derisive intent by Fred Hoyle, and its endurance testifies to Sir Fred's creativity and wit. Indeed, the term survived an international competition in which three judges — the television science reporter Hugh Downs, the astronomer Carl Sagan, and myself — sifted through 13,099 entries from 41 countries and concluded that none was apt enough to replace it. No winner was declared, and like it or not, we are stuck with 'big bang'. Before the name. Early cosmological models developed from observations of the structure of the universe and from theoretical considerations. 
In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding, in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distances to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by an expanding universe imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the expanding universe concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, "viz"., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed: During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's expanding universe theory, advocated and developed by George Gamow, who used it to develop a theory for the abundance of chemical elements in the universe, and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic background radiation. As a named model. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this "big bang" idea" during a BBC Radio broadcast in March 1949. 
For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers in which they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a theoretical breakthrough, resolving certain outstanding problems in the Big Bang models by introducing an epoch of rapid expansion in the early universe, which he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble constant and the matter-density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating. Observational evidence. The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the cosmic microwave background, large-scale structure, and Hubble's law. The earliest and most direct observational evidence of the validity of the theory includes the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics. Hubble's law and the expansion of the universe. Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. 
This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: v = H₀D, where v is the recessional velocity of the galaxy or other distant object, D is the proper distance to the object, and H₀ is the present-day value of the Hubble constant. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation v = HD to hold at all times, where D is the proper distance and v = dD/dt, with v, H, and D all varying as the universe expands (hence H₀ denotes the present-day value of the Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one. An unexplained discrepancy in the determination of the Hubble constant is known as the Hubble tension: techniques based on observation of the CMB suggest a lower value of this constant than measurements based on the cosmic distance ladder. Cosmic microwave background radiation. In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the Big Bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics. The "surface of last scattering" corresponding to emission of the CMB occurs shortly after "recombination", the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea in which photons were quickly scattered by free charged particles. Peaking at around 370,000 years after the Big Bang, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent. 
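Because the CMB spectrum simply redshifts as the universe expands, its temperature scales with redshift as T(z) = T₀(1 + z). The following is a minimal illustrative Python sketch of that relation; the present-day temperature is the 2.725 K figure quoted above, while the recombination redshift z ≈ 1100 is a standard textbook value assumed here rather than taken from the text.

```python
# CMB temperature at earlier epochs: T(z) = T0 * (1 + z).
# T0 is the present-day CMB temperature quoted in the text;
# z_rec ~ 1100 is an assumed, standard value for recombination.

T0_K = 2.725
Z_REC = 1100

def cmb_temperature(z: float) -> float:
    """Temperature of the CMB at redshift z, in kelvin."""
    return T0_K * (1 + z)

print(f"CMB temperature at recombination: ~{cmb_temperature(Z_REC):.0f} K")
# ~3000 K, roughly the temperature at which hydrogen becomes neutral
```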
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The "Planck" space probe was launched in May 2009. Other ground- and balloon-based cosmic microwave background experiments are ongoing. Abundance of primordial elements. Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (⁴He), helium-3 (³He), deuterium (²H), and lithium-7 (⁷Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for ⁴He:H, about 10⁻³ for ²H:H, about 10⁻⁴ for ³He:H, and about 10⁻⁹ for ⁷Li:H (a worked check of the helium figure appears at the end of this passage). The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang why, for example, the young universe before star formation (as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too. Galactic evolution and distribution. Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. 
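As the forward reference above indicates, the ~0.25 helium mass fraction can be checked with one line of arithmetic. If the neutron-to-proton ratio when nucleosynthesis begins is about 1/7 (a standard textbook value, not stated in the text) and essentially all neutrons end up bound in ⁴He, the predicted mass fraction is Y = 2(n/p)/(1 + n/p). A minimal sketch under those assumptions:

```python
# Helium-4 mass fraction implied by the neutron-to-proton ratio at BBN.
# Assumes n/p ~ 1/7 (standard textbook value) and that essentially all
# free neutrons are incorporated into helium-4 nuclei (2 n + 2 p each).

n_over_p = 1 / 7
Y = 2 * n_over_p / (1 + n_over_p)  # mass locked in He-4 / total baryon mass
print(f"predicted helium-4 mass fraction: {Y:.2f}")  # 0.25
```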
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions, and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory. Primordial gas clouds. In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the analysis was sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN. Other lines of evidence. The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but the effect depends on cluster properties that do change with cosmic time, making precise measurements difficult. Future observations. Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe dating from less than a second after the Big Bang. Problems and related issues in physics. As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. 
What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists. Baryon asymmetry. It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. Both matter and antimatter were much more abundant than today, with a tiny asymmetry between them of only about one part in 10 billion: the matter and antimatter collided and annihilated, leaving only that residual amount of matter. Today, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter with very little antimatter. If matter and antimatter had been in complete symmetry, annihilation would have left only photons and virtually no matter at all, which is obviously not what is observed. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry. Dark energy. Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, cosmological models require that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to be only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure (baryon acoustic oscillations) as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, and its fractional contribution will fall even further in the far future as dark energy becomes even more dominant. 
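The statement that matter's share falls while dark energy's grows can be made concrete: matter density dilutes as a⁻³ with the scale factor a, while a cosmological constant stays fixed. The following is a minimal Python sketch using the WMAP-era fractions quoted above (73% dark energy, 23% dark matter, 4.6% ordinary matter); the code itself is illustrative and neglects radiation:

```python
# Evolution of energy fractions with scale factor a (a = 1 today).
# Matter density scales as a**-3; a cosmological constant is constant.
# Present-day fractions are the WMAP 2008 values quoted in the text.

OMEGA_M0 = 0.23 + 0.046   # dark + ordinary matter today
OMEGA_L0 = 0.73           # dark energy (cosmological constant) today

def fractions(a: float):
    rho_m = OMEGA_M0 * a**-3   # matter dilutes with volume
    rho_l = OMEGA_L0           # vacuum energy density stays fixed
    total = rho_m + rho_l      # radiation neglected for simplicity
    return rho_m / total, rho_l / total

for a in (0.5, 1.0, 2.0):
    m, l = fractions(a)
    print(f"a = {a:>3}: matter {m:.0%}, dark energy {l:.0%}")
# a = 0.5: matter 75%, dark energy 25%
# a = 1.0: matter 27%, dark energy 73%
# a = 2.0: matter  5%, dark energy 95%
```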
The dark energy component of the universe has been explained by theorists using a variety of competing theories, including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. The cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy and the value naively predicted from Planck units. Dark matter. During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred from various observations: the anisotropies in the CMB, the galaxy rotation problem, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model, including the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. Horizon problem. The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflation theory, in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. 
These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB. A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum, since distant parts of the observable universe were causally separate when the electroweak epoch ended. Magnetic monopoles. The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe in the same way that it drives the geometry to flatness. Flatness problem. The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be "flat". Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today. Misconceptions. In addition to confusion about the nature of cosmic expansion, the Big Bang model itself is sometimes misunderstood. One common misconception about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects: when the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Another common misconception relates to the recession speeds associated with Hubble's law. These are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel. Implications. Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. 
Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture. Pre–Big Bang cosmology. The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom", while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, inflation models suggest that if specific laws of nature were to arise at random, some combinations of them would be far more probable than others, partly explaining why our universe is rather stable. Another possible explanation for the stability of the universe is a hypothetical multiverse, which assumes every possible universe to exist, with thinking species only able to emerge in those universes stable enough to support them. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory is built upon the equations of classical general relativity, which are not expected to be valid at the origin of cosmic time, as the temperature of the universe approaches the Planck scale. Correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. Ultimate fate of the universe. Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. 
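The critical density that separates these two scenarios follows from the Friedmann equation as ρ_c = 3H²/(8πG). A back-of-the-envelope Python sketch of that arithmetic, assuming a representative H₀ of 70 km/s/Mpc (an illustrative value, not one given in the text):

```python
# Critical density rho_c = 3 H^2 / (8 pi G): the density separating
# eventual recollapse from perpetual expansion (dark energy ignored).
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.0857e22          # one megaparsec in metres
H0 = 70 * 1000 / MPC_M     # 70 km/s/Mpc converted to s^-1 (assumed value)

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.2e} kg/m^3")
# ~9e-27 kg/m^3, roughly five hydrogen atoms per cubic metre
```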
Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom dark energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. Religious and philosophical interpretations. As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
4119
1291097476
https://en.wikipedia.org/wiki?curid=4119
Bock
Bock is a strong German beer, usually a dark lager. History. The style now known as "Bock" was first brewed in the 14th century in the Hanseatic town of Einbeck in Lower Saxony. The style was later adopted in Bavaria by Munich brewers in the 17th century. Due to their Bavarian accent, citizens of Munich pronounced "Einbeck" as "ein Bock" ("a billy goat"), and thus the beer became known as "Bock". A goat often appears on bottle labels. Bock is historically associated with special occasions, often religious festivals such as Christmas, Easter, or Lent. Bock has a long history of being brewed and consumed by Bavarian monks as a source of nutrition during times of fasting. Styles. Substyles of Bock include Maibock (Heller Bock), Doppelbock, Eisbock, and Weizenbock, described below. Traditionally Bock is a sweet, relatively strong (6.3–7.6% alcohol by volume), lightly hopped lager registering between 20 and 30 International Bitterness Units (IBUs). The beer should be clear, with colour ranging from light copper to brown, and a bountiful, persistent off-white head. The aroma should be malty and toasty, possibly with hints of alcohol, but no detectable hops or fruitiness. The mouthfeel is smooth, with low to moderate carbonation and no astringency. The taste is rich and toasty, sometimes with a bit of caramel. The low-to-undetectable presence of hops provides just enough bitterness so that the sweetness is not cloying and the aftertaste is muted. Maibock. The Maibock style – also known as Heller Bock or, in the Netherlands, Lente Bock – is a strong pale lager, lighter in colour and with more hop presence. Colour can range from deep gold to light amber, with a large, creamy, persistent white head and moderate to moderately high carbonation, while alcohol content ranges from 6.3% to 8.1% by volume. The flavour is typically less malty than a traditional Bock and may be drier, hoppier, and more bitter, but still with a relatively low hop flavour, with a mild spicy or peppery quality from the hops, increased carbonation and alcohol content. Doppelbock. "Doppelbock" or "Double Bock" is a stronger version of traditional Bock that was first brewed in Munich by the Paulaner Friars, a Franciscan order founded by St. Francis of Paula. Historically, Doppelbock was high in alcohol and sweetness. The story is told that it served as "liquid bread" for the Friars during times of fasting, when solid food was not permitted. In 2011, journalist J. Wilson proved this was at least "possible" by consuming only doppelbock and water for the 46 days of Lent. However, historian Mark Dredge, in his book "A Brief History of Lager", says that this story is a myth and that the monks produced Doppelbock to supplement their order's vegetarian diet all year. Today, Doppelbock is still strong, ranging from 7% to 12% or more alcohol by volume. It is clear, with colour ranging from dark gold, for the paler version, to dark brown with ruby highlights for a darker version. It has a large, creamy, persistent head (although head retention may be impaired by alcohol in the stronger versions). The aroma is intensely malty, with some toasty notes, and possibly some alcohol presence as well; darker versions may have a chocolate-like or fruity aroma. The flavour is very rich and malty, with noticeable alcoholic strength, and little or no detectable hops (16–26 IBUs). Paler versions may have a drier finish. The monks who originally brewed Doppelbock named their beer "Sankt-Vater-Bier" ("Blessed Father beer"). This was eventually shortened to "Salvator" (literally "Savior"), which today is trademarked by Paulaner. 
Brewers of modern Doppelbock often add "-ator" to their beer's name as a signpost of the style; there are 200 "-ator" Doppelbock names registered with the German patent office. The following are representative examples of the style: Paulaner Salvator, Ayinger Celebrator, Weihenstephaner Korbinian, Andechser Doppelbock Dunkel, Spaten Optimator, Augustiner Brau Maximator, Tucher Bajuvator, Weltenburger Kloster Asam-Bock, Capital Autumnal Fire, EKU 28, Eggenberg Urbock 23°, Bell's Consecrator, Moretti La Rossa, Samuel Adams Double Bock, Tröegs Tröegenator Double Bock, Wasatch Brewery Devastator, Great Lakes Doppelrock, Abita Andygator, Wolverine State Brewing Company Predator, Burly Brewing's Burlynator, Monteith's Doppel Bock, and Christian Moerlein Emancipator Doppelbock. Eisbock. Eisbock is a traditional specialty beer of the Kulmbach district of Bavaria, made by partially freezing a Doppelbock and removing the water ice to concentrate the flavour and alcohol content, which ranges from 8.6% to 14.3% by volume. It is clear, with a colour ranging from deep copper to dark brown, often with ruby highlights. Although it can pour with a thin off-white head, head retention is frequently impaired by the higher alcohol content. The aroma is intense, with no hop presence, but frequently can contain fruity notes, especially of prunes, raisins, and plums. Mouthfeel is full and smooth, with significant alcohol, although this should not be hot or sharp. The flavour is rich and sweet, often with toasty notes, and sometimes hints of chocolate, always balanced by a significant alcohol presence. The following are representative examples of the style: Colorado Team Brew "Warning Sign", Kulmbacher Reichelbräu Eisbock, Eggenberg, Schneider Aventinus Eisbock, Urbock Dunkel Eisbock, and Franconia Brewing Company Ice Bock 17%. The strongest ice beer, Strength in Numbers, was a one-time collaboration in 2020 between Schorschbräu of Germany and BrewDog of Scotland, who had competed with each other in the early years of the 21st century to produce the world's strongest beer. "Strength in Numbers" was created using traditional ice distillation, reaching a final strength of 57.8% ABV. Weizenbock. Weizenbock is a style that replaces some of the barley in the grain bill with 40–60% wheat. It was first produced in Bavaria in 1907 by G. Schneider & Sohn and was named "Aventinus" after the 16th-century Bavarian historian Johannes Aventinus. The style combines darker Munich malts and top-fermenting wheat beer yeast, brewed at the strength of a Doppelbock.
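The freeze-concentration step described under Eisbock above admits a simple volume estimate: if a fraction f of the batch is removed as essentially pure water ice, the remaining alcohol is spread over 1 − f of the original volume. A minimal Python sketch of that arithmetic; the starting strength and ice fraction are illustrative assumptions, and real ice also traps some alcohol:

```python
# Freeze concentration: removing a fraction of the volume as water ice
# leaves the same alcohol in less liquid, raising the ABV.
# Idealized: assumes the removed ice is pure water.

def eisbock_abv(start_abv: float, ice_fraction: float) -> float:
    alcohol = start_abv             # alcohol volume per unit batch volume
    remaining = 1.0 - ice_fraction  # liquid left after the ice is removed
    return alcohol / remaining

# Hypothetical 8% ABV doppelbock with 30% of the volume frozen out:
print(f"{eisbock_abv(0.08, 0.30):.1%}")  # ~11.4% ABV
```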
4124
28481209
https://en.wikipedia.org/wiki?curid=4124
Bantu languages
The Bantu languages (Proto-Bantu: *bantʊ̀), or Ntu languages, are a language family of about 600 languages of Central, Southern, Eastern and Southeast Africa. They form the largest branch of the Southern Bantoid languages. The total number of Bantu languages is estimated at between 440 and 680 distinct languages, depending on the definition of "language" versus "dialect". Many Ntu languages borrow words from each other, and some are mutually intelligible. Some of the languages are spoken by a very small number of people; for example, the Kabwa language was estimated in 2007 to be spoken by only 8,500 people, but it was assessed to be a distinct language. The total number of Ntu language speakers was estimated at around 350 million in 2015 (roughly 30% of the population of Africa, or 5% of the world population). Bantu languages are largely spoken southeast of Cameroon and throughout Central, Southern, Eastern, and Southeast Africa. About one-sixth of Bantu speakers, and one-third of Bantu languages, are found in the Democratic Republic of the Congo. The most widely spoken Ntu language by number of speakers is Swahili, with 16 million native speakers and 80 million L2 speakers (2015). Most native speakers of Swahili live in Tanzania, where it is a national language, while as a second language it is taught as a mandatory subject in many schools in East Africa and is a lingua franca of the East African Community. Other major Ntu languages include Lingala, with more than 20 million speakers (Congo, DRC), followed by Zulu with 13.56 million speakers (South Africa), Xhosa at a distant third place with 8.2 million speakers (South Africa and Zimbabwe), and Shona with less than 10 million speakers (if Manyika and Ndau are included), while the Sotho-Tswana languages (Sotho, Tswana and Pedi) have more than 15 million speakers (across Botswana, Lesotho, South Africa, and Zambia). Zimbabwe has Kalanga, Matebele, Nambiya, and Xhosa speakers. "Ethnologue" separates the largely mutually intelligible Kinyarwanda and Kirundi, which together have 20 million speakers. Name. The similarity among dispersed Bantu languages had been observed as early as the 17th century. The term "Bantu" as a name for the group was not so much coined as "noticed" or "identified" (as "Bâ-ntu") by Wilhelm Bleek, the first European to do so, in 1857 or 1858, and popularized in his "Comparative Grammar" of 1862. He took the term to represent the word for "people" in loosely reconstructed Proto-Bantu, from the plural noun class prefix "*ba-" categorizing "people" and the root "*ntʊ̀-" "some (entity), any" (e.g. Xhosa "umntu" "person", "abantu" "people"; Zulu "umuntu" "person", "abantu" "people"). There is no native term for the people who speak Bantu languages, because they are not an ethnic group. People speaking Bantu languages refer to their languages by ethnic endonyms; there was no indigenous concept, prior to European contact, of the larger ethnolinguistic phylum named by 19th-century European linguists. Bleek's identification was inspired by the anthropological observation of groups frequently self-identifying as "people" or "the true people" (as is the case, for example, with the term "Khoikhoi", but this is a "kare", a praise address, and not an ethnic name). The term "narrow Bantu", excluding those languages classified as Bantoid by Malcolm Guthrie (1948), was introduced in the 1960s. The prefix "ba-" specifically refers to people. 
Endonymically, the term for cultural objects, including language, is formed with the "ki-" noun class (Nguni "ísi-"), as in KiSwahili (Swahili language and culture), IsiZulu (Zulu language and culture) and KiGanda (Ganda religion and culture). In the 1980s, South African linguists suggested referring to these languages as "KiNtu". The word "kintu" exists in some places, but it means "thing", with no relation to the concept of "language". In addition, delegates at the African Languages Association of Southern Africa conference in 1984 reported that, in some places, the term "Kintu" has a derogatory significance, because "kintu" refers to "things" and is used as a dehumanizing term for people who have lost their dignity. In addition, "Kintu" is a figure in some mythologies. In the 1990s, the term "Kintu" was still occasionally used by South African linguists, but in contemporary decolonial South African linguistics the term "Ntu languages" is used. Within the fierce debate among linguists about the word "Bantu", Seidensticker (2024) points to a profound conceptual trend in which "a purely technical [term] without any non-linguistic connotations was transformed into a designation referring indiscriminately to language, culture, society, and race". Origin. The Bantu languages descend from a common Proto-Bantu language, which is believed to have been spoken in what is now Cameroon in Central Africa. An estimated 2,500–3,000 years ago (1000 BC to 500 BC), speakers of the Proto-Bantu language began a series of migrations eastward and southward, carrying agriculture with them. This Bantu expansion came to dominate Sub-Saharan Africa east of Cameroon, an area where Bantu peoples now constitute nearly the entire population. Some other sources estimate that the Bantu expansion started closer to 3000 BC. The technical term Bantu, meaning "human beings" or simply "people", was first used by Wilhelm Bleek (1827–1875), as the concept is reflected in many of the languages of this group. A common characteristic of Bantu languages is that they use words such as "muntu" or "mutu" for "human being" or, in simplistic terms, "person", and the plural prefix for human nouns starting with "mu-" (class 1) in most languages is "ba-" (class 2), thus giving "bantu" for "people". Bleek, and later Carl Meinhof, pursued extensive studies comparing the grammatical structures of Bantu languages. Classification. The most widely used classification is an alphanumeric coding system developed by Malcolm Guthrie in his 1948 classification of the Bantu languages. It is mainly geographic. The term "narrow Bantu" was coined by the "Benue–Congo Working Group" to distinguish Bantu as recognized by Guthrie from the Bantoid languages not recognized as Bantu by Guthrie. In recent times, the distinctiveness of Narrow Bantu as opposed to the other Southern Bantoid languages has been called into doubt, but the term is still widely used. There is no true genealogical classification of the (Narrow) Bantu languages. Until recently most attempted classifications only considered languages that happen to fall within traditional Narrow Bantu, but there seems to be a continuum with the related languages of South Bantoid. At a broader level, the family is commonly split in two depending on the reflexes of proto-Bantu tone patterns: many Bantuists group together parts of zones A through D (the extent depending on the author) as "Northwest Bantu" or "Forest Bantu", and the remainder as "Central Bantu" or "Savanna Bantu". 
The two groups have been described as having mirror-image tone systems: where Northwest Bantu has a high tone in a cognate, Central Bantu languages generally have a low tone, and vice versa. Northwest Bantu is more divergent internally than Central Bantu, and perhaps less conservative due to contact with non-Bantu Niger–Congo languages; Central Bantu is likely the innovative line cladistically. Northwest Bantu is not a coherent family, but even for Central Bantu the evidence is lexical, with little evidence that it is a historically valid group. Another attempt at a detailed genetic classification to replace the Guthrie system is the 1999 "Tervuren" proposal of Bastin, Coupez, and Mann. However, it relies on lexicostatistics, which, because of its reliance on overall similarity rather than shared innovations, may predict spurious groups of conservative languages that are not closely related. Meanwhile, "Ethnologue" has added languages to the Guthrie classification which Guthrie overlooked, while removing the Mbam languages (much of zone A) and shifting some languages between groups (much of zones D and E to a new zone J, for example, and part of zone L to K, and part of M to F) in an apparent effort at a semi-genetic, or at least semi-areal, classification. This has been criticized for sowing confusion in one of the few unambiguous ways to distinguish Bantu languages. Nurse & Philippson (2006) evaluate many proposals for low-level groups of Bantu languages, but the result is not a complete portrayal of the family. "Glottolog" has incorporated many of these into their classification. The languages that share Dahl's law may also form a valid group, Northeast Bantu. These low-level groups are fairly uncontroversial, though they continue to be revised. The development of a rigorous genealogical classification of many branches of Niger–Congo, not just Bantu, is hampered by insufficient data. Computational phylogenetic classifications. Grollemund (2012) gives a simplified phylogeny of the northwestern branches of Bantu. Other computational phylogenetic analyses of Bantu include Currie et al. (2013), Grollemund et al. (2015), Rexova et al. (2006), Holden et al. (2016), and Whiteley et al. (2018). Glottolog classification. Glottolog (2021) does not consider the older geographic classification by Guthrie relevant for its ongoing classification based on more recent linguistic studies, and divides Bantu into four main branches: Bantu A-B10-B20-B30, Central-Western Bantu, East Bantu and Mbam-Bube-Jarawan. Language structure. Guthrie reconstructed both the phonemic inventory and the vocabulary of Proto-Bantu. The most prominent grammatical characteristic of Bantu languages is the extensive use of affixes (see Sotho grammar and Ganda noun classes for detailed discussions of these affixes). Each noun belongs to a class, and each language may have several numbered classes, somewhat like grammatical gender in European languages. The class is indicated by a prefix that is part of the noun, as well as by agreement markers on verb and qualificative roots connected with the noun. Plurality is indicated by a change of class, with a resulting change of prefix. All Bantu languages are agglutinative. The verb has a number of prefixes, though in the western languages these are often treated as independent words. In Swahili, for example, "Kitoto kidogo kimekisoma" (for comparison, "Kamwana kadoko karikuverenga" in the Shona language) means 'The small child has read it [a book]'. 
"kitoto" 'child' governs the adjective prefix "ki-" (representing the diminutive form of the word) and the verb subject prefix "a-". Then comes perfect tense "-me-" and an object marker "-ki-" agreeing with implicit "kitabu" 'book' (from Arabic "kitab"). Pluralizing to 'children' gives "Vitoto vidogo vimekisoma" ("Vana vadoko varikuverenga" in Shona), and pluralizing to 'books' ("vitabu") gives "vitoto vidogo vimevisoma". Bantu words are typically made up of open syllables of the type CV (consonant-vowel) with most languages having syllables exclusively of this type. The Bushong language recorded by Vansina, however, has final consonants, while slurring of the final syllable (though written) is reported as common among the Tonga of Malawi. The morphological shape of Bantu words is typically CV, VCV, CVCV, VCVCV, etc.; that is, any combination of CV (with possibly a V- syllable at the start). In other words, a strong claim for this language family is that almost all words end in a vowel, precisely because closed syllables (CVC) are not permissible in most of the documented languages, as far as is understood. This tendency to avoid consonant clusters in some positions is important when words are imported from English or other non-Bantu languages. An example from Chewa: the word "school", borrowed from English, and then transformed to fit the sound patterns of this language, is "sukulu". That is, "sk-" has been broken up by inserting an epenthetic "-u-"; "-u" has also been added at the end of the word. Another example is "buledi" for "bread". Similar effects are seen in loanwords for other non-African CV languages like Japanese. However, a clustering of sounds at the beginning of a syllable can be readily observed in such languages as Shona, and the Makua languages. With few exceptions, such as Kiswahili and Rutooro, Bantu languages are tonal and have two to four register tones. Reduplication. Reduplication is a common morphological phenomenon in Bantu languages and is usually used to indicate frequency or intensity of the action signalled by the (unreduplicated) verb stem. Well-known words and names that have reduplication include: Repetition emphasizes the repeated word in the context that it is used. For instance, "Mwenda pole hajikwai," means "He who goes slowly doesn't trip," while, "Pole pole ndio mwendo," means "A slow but steady pace wins the race." The latter repeats "pole" to emphasize the consistency of slowness of the pace. As another example, "Haraka haraka" would mean "hurrying just for the sake of hurrying" (reckless hurry), as in "Njoo! Haraka haraka" [come here! Hurry, hurry]. In contrast, there are some words in some of the languages in which reduplication has the opposite meaning. It usually denotes short durations, or lower intensity of the action, and also means a few repetitions or a little bit more. Noun class. The following is a list of nominal classes in Bantu languages: Syntax. Virtually all Bantu languages have a subject–verb–object word order, with some exceptions, such as the Nen language, which has a subject–object–verb word order. By country. Following is an incomplete list of the principal Bantu languages of each country. Included are those languages that constitute at least 1% of the population and have at least 10% the number of speakers of the largest Bantu language in the country. 
Most languages are referred to in English without the class prefix ("Swahili", "Tswana", "Ndebele"), but are sometimes seen with the (language-specific) prefix ("Kiswahili", "Setswana", "Sindebele"). In a few cases prefixes are used to distinguish languages with the same root in their name, such as Tshilubà and Kiluba (both "Luba"), and Umbundu and Kimbundu (both "Mbundu"). The prefixless form typically does not occur in the language itself, but is the basis for other words based on the ethnicity. So, in the country of Botswana the people are the "Batswana", one person is a "Motswana", and the language is "Setswana"; and in Uganda, centred on the kingdom of "Buganda", the dominant ethnicity are the "Baganda" (singular "Muganda"), whose language is "Luganda" (a toy illustration of this prefix pattern appears at the end of the article). Geographic areas. Map 1 shows Bantu languages in Africa and map 2 a magnification of the Benin, Nigeria and Cameroon area, as of July 2017. Bantu words popularised in western cultures. A case has been made for borrowings of many place-names and even misremembered rhymes – chiefly from one of the Luba varieties – in the USA. Some words from various Bantu languages have been borrowed into western languages. Writing systems. Along with the Latin script and Arabic script orthographies, there are also some modern indigenous writing systems used for Bantu languages.
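As referenced earlier, the people/person/language prefix pattern can be shown mechanically. The following Python sketch derives the Tswana forms quoted in the text (Batswana, Motswana, Setswana); the prefix inventory is an illustrative subset for Sotho-Tswana only, not a general Bantu rule set.

```python
# Toy illustration of the noun-class prefix pattern described above,
# using the Tswana example from the text. Illustrative, not a grammar.

PREFIXES = {
    "person": "mo",    # class 1 (singular human)
    "people": "ba",    # class 2 (plural human)
    "language": "se",  # language/culture prefix in Sotho-Tswana
}

def derive(root: str) -> dict:
    """Attach each sense's prefix to the bare ethnic root."""
    return {sense: prefix + root for sense, prefix in PREFIXES.items()}

print(derive("tswana"))
# {'person': 'motswana', 'people': 'batswana', 'language': 'setswana'}
```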