Efficient, Portable and Extensible Back up of VMs
By Enterprise Networking Planet Contributor | Mar 13, 2012
In the knowledge industry, where data and information are the most important organizational assets, their protection and integrity are highly critical. With the ever-increasing use of cloud and virtual infrastructure for processing and storing data, the backup of these virtual processing nodes, also known as virtual machines (VMs), is equally important.
This article describes the need for backing up virtual machines and explores various ways of making a VM backup. It also defines the concept of a "service instance" as a set of VMs and proposes two algorithms, one for backing up a service instance and the other for restoring it to the target virtualized environment.
To deal with the problem of data loss, the most fundamental and reliable solution is taking regular backups of all the data and keeping them in a safe place with restricted access. The same is true for VMs as well.
Backup of VM in a virtualized environment
There are various reasons to create a backup of the whole VM rather than only the data it processes:
1. VMs can be cloned or replicated from another virtual machine, which is not the case with physical machines. If you have one virtual machine with all the necessary software configured in it, also known as the master copy VM, you can create as many virtual machine instances from it as you need. Keeping a backup of the master copy VM will be of great help in case of system failure.
2. In the case of a physical machine crash, if the secondary storage devices are intact, one can recover the complete data available on the secondary storage device; this is not the case for virtual machines. If the VM image file is accidentally deleted or corrupted, all the data in the VM image will be lost.
3. Taking snapshots of the VM at regular intervals will allow you to go back to a particular state of the VM. This technique is very helpful in testing and research environments.
Various ways of taking backups of VMs
There are various ways of making backups of VMs in different formats. The selection of a particular way depends on the purpose of the backup. What follows are some of the popular ways of backing up virtual machines:
Taking a snapshot of the virtual machine - While the VM is running, you can take a snapshot of it. In principle, snapshots are not backups, but they preserve the state and data of a virtual machine at a specific point in time. The data includes all of the files that make up the VM, such as disks, memory, and other devices.
Snapshots are not for permanent or long-term backup purposes; they are useful only during the lifetime of the VM, generally from the time the VM is created to the point when it gets deleted. A snapshot of the VM is useful when the running VM gets corrupted or you want to go back to a previously known state of the VM.
Copying the whole VM image and related configuration files - This procedure involves copying the VM image and its configuration files from the virtualized environment to another storage medium, generally outside of the virtualized environment. Later on, this VM image and the configuration files can be used to restore the VM in case of a crash or to revert to a previous state of the VM.
The limitation is that the copied VM image and other configuration files are virtualization-technology dependent, which means the backed-up VM image can only be re-provisioned on the same hypervisor. The version of the hypervisor platform may also cause problems in the future when provisioning backup VM images. This can happen if there is a change in the format of the configuration file or in the way the VM images are created by the hypervisor.
Storing the virtual machine in Open Virtualization Format (OVF) - The third option is converting the VM into OVF and storing it as a backup in offline storage outside the virtualized environment. Later, the same OVF can be redeployed to restore the VM. An OVF package consists of several files placed in one directory; generally, they are VM image files together with VM configuration information in XML format.
Service instance
Taking a backup of a service instance
• Datacenter crash due to natural calamities or human activities.
• Moving to a different virtual infrastructure provider.
Algorithm-1: Backup Service Instance
3. Delete the template after the OVF has been exported successfully.
end if
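Only step 3 of Algorithm-1 survives above, but it implies a create-template, export-OVF, delete-template sequence. Below is a minimal Python sketch of that flow, assuming a hypothetical hypervisor client; the method names (create_template, export_ovf, delete_template) are illustrative, not any specific vendor's API.

def backup_service_instance(client, instance, export_dir):
    """Export every VM of a service instance as an OVF package (illustrative sketch)."""
    for vm in instance.vms:
        template = client.create_template(vm)      # capture the VM as a template
        client.export_ovf(template, export_dir)    # write the OVF package to offline storage
        client.delete_template(template)           # step 3: delete the template after the export succeeds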
Restoring a backed-up service instance
Algorithm-2: Restore Service Instance
1. Store the VM power state.
2. Power off the VM.
3. Delete the powered off VM from the virtualized environment.
1. Delete the VM from the virtualized environment.
end if
Update the restore information into the service instance metadata.
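The restore side can be sketched the same way. The fragment above covers removing the stale VM (store the power state, power off, delete); the import call and the metadata update at the end are assumptions about the rest of Algorithm-2, again written against a hypothetical client API.

def restore_service_instance(client, instance, export_dir):
    """Re-provision a service instance from exported OVF packages (illustrative sketch)."""
    for vm in instance.vms:
        if client.vm_exists(vm):
            if client.get_power_state(vm) == "on":   # 1. store the power state
                client.power_off(vm)                 # 2. power off the VM
            client.delete_vm(vm)                     # 3. delete the powered-off VM
        client.import_ovf(export_dir, vm.name)       # deploy the backed-up OVF package (assumed step)
    instance.metadata["restore_info"] = export_dir   # update the restore information in the metadata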
|
Water Sampling
Dissolved Oxygen
Data Entry
Retrieving Data
Now that all the data are entered, it is time to use them to discover how the estuary works. The first thing we need to do is get the interesting data from the database. To do so, click on the "Retrieve data" button. The data retrieval form gives you lots of options for getting the data. Depending on what you want to do with the data, you can retrieve data based on who collected it, the site from which it was collected, or the date. You can also choose which parameters you want to include. Once you have filled out the form, press the "Submit Query" button at the bottom of the page. The computer should display all of the data that you requested. You should now be able to use the data to answer questions or test a hypothesis.
Sometimes it is easier to look at the data using another program, such as Excel. To do so, you need to copy the data from the web browser to Excel or another program. There are two ways to do this.
First, you can simply drag the mouse over the data while pressing the left mouse button or use the "Edit" menu to "Select All". These procedures will highlight the data on that page. Next, use the "Edit" menu to "Copy" the highlighted data to the Windows Clipboard. Once Excel is open, you can use the "Edit" menu to "Paste" the data into the spreadsheet.
Alternatively, you can save the data as a text file by pressing the "View Delimited Text" link at the top of the page. This link opens another browser window that displays the data in a very basic format. You can use the "File" menu to "Save" the information to a file. Once the data are saved as text, they can be imported to any other program for graphing and analysis.
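If you would rather analyze the data with a script than with Excel, the saved text file can be read directly. Here is a small Python sketch; the file name and the tab delimiter are assumptions about how the delimited export is formatted:

import csv

# Read the file saved from the "View Delimited Text" window.
with open("estuary_data.txt", newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))    # assumes tab-separated columns

header, records = rows[0], rows[1:]
print(header)                                     # the parameter names
print(len(records), "records retrieved")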
|
Published: January 25, 1987
HOMES heated by steam or circulating hot water have radiators. For maximum economy and efficiency it is essential that the homeowner make certain that the radiators are operating properly.
When radiators are not working properly, or when they are not putting out as much heat as they should, some parts of the house will be colder than others. It's also likely that a lot of heat is being wasted because the hot water circulates through the system and back to the boiler without ever losing all of its heat to the air inside the house. When this happens, people tend to turn the thermostat higher than normally required, thus adding considerably to the heating bill.
After the hot water heats the radiator, heat is supposed to be radiated into the room. The heat actually warms the air in two ways: by radiation and convection. Radiation accounts for only a small percentage of the heat given off by the radiator; a much higher percentage of the heat is distributed by means of convection - as the radiator warms the air next to it this heated air rises, drawing cool air into and through the radiator from underneath. This movement of air sets up vertical currents that will distribute heated air throughout the room.
It is essential that the air be able to flow freely over, under and around each radiator. When radiators are inside an enclosure, there must be plenty of venting at top and bottom. Even baseboard radiators recessed into the wall will have plenty of openings along the floor, as well as along the top of the radiator's baseboard housing.
Decorative cabinets used to enclose a free-standing cast-iron radiator should also be designed with plenty of openings. Never install heavy curtains or draperies in front of a radiator, and try to leave a space of several inches between the radiator and any large piece of furniture in front of it. Even then it is best not to position large pieces in front of a radiator when these pieces go right down to the floor (couches, tables, and other pieces with legs will not restrict air flow as much).
Older-style free-standing radiators of cast iron consist of multiple sections that allow air to flow through and around. The cast iron stores heat, which will continue radiating out into the room for some time after the heat goes off. However, they also take longer to heat up. Modern convector-type baseboard radiators have a great many metal fins attached to the outside of the pipes that carry the hot water from the boiler. As these fins heat up, they warm the air flowing around them so that a current of warm air starts circulating soon after the heat comes on.
Because air circulation is essential for efficient dispersal of heat, it is important that radiators be cleaned regularly. Radiators and radiator fins should never be allowed to accumulate layers of dust or lint. Such accumulations will not only keep air from flowing freely, they will also act as insulation that will slow the transmission of heat.
Since radiators are often mounted against an outside wall, much heat can be wasted through the cold wall directly behind the radiator (this is especially true if the radiator is recessed). One technique for reducing this waste, as well as for improving efficiency, is to slide a sheet of aluminum-faced foam insulation board behind the radiators. This type of insulation is sold in many large hardware stores and home centers, but if you cannot find it you can make your own by gluing aluminum foil to the face of some foam-core board, which can be bought in most art supply stores and hobby shops.
In a steam system it is important that the radiators be level - or they can slope very slightly toward the inlet valve through which the steam enters. If a radiator slopes the other way then water will get trapped, causing gurgling and banging noises and often keeping the radiator from heating up properly. That's because as the steam cools and condenses into water it has to run out through the same pipe that the steam entered.
To check, place a level on top of the radiator to see if it is tilted the wrong way. If so, you can slip a thin piece of wood under the end to raise it slightly as shown in the drawing.
Banging in steam radiators or pipes can also be caused if the pipes are sagging or not sloping properly, often a problem after structural alterations or caused by sagging of structural members. Just as radiators must drain easily, so too must pipes in a steam system be pitched slightly so all of them slope down toward the boiler. Otherwise water can get trapped in low places (where the pipe has sagged). This can not only cause banging or hammering noises each time the steam comes up, it can also prevent radiators from getting as hot as they should.
In a hot-water system, air trapped inside a radiator can keep hot water from filling the radiator so it will not get hot all the way across. To correct this, radiators usually have small vent valves mounted near the top (usually at the end opposite the inlet valve) that will allow you to bleed out the trapped air. To do this you open this valve while the heat is on and the hot water is circulating (some need a special key that you can buy in hardware stores; others can be opened with a screwdriver). Leave the valve open until all the air escapes and water starts to flow freely in a solid stream, then close it tightly.
|
How should grammar be taught?
last changed 31 May 2005
What does the KS3 Strategy say about how grammar should be taught? The answer clearly depends very heavily on:
• what grammar is
• why it is being taught
Here is a summary of the main points, which you can click on for more information. Each point focuses on one of the tensions which can probably be found in teaching any subject, but which are particularly acute in grammar.
1. Grammar teaching should be integrated but systematic.
2. Grammar should be taught explicitly but should also be explored by investigation.
3. Grammar teaching should use standard terminology but focus on the interrelated ideas behind the terms.
4. Grammar should be general but applied to day-to-day work.
5. Grammatical knowledge should be cumulative but constantly revisited.
6. Grammar should be taught not only in English, but also in MFL and across the curriculum.
1. Grammar teaching should be integrated but systematic. Grammatical knowledge is not taught for its own sake, so whatever its other benefits may be, it is always integrated ("made relevant to the texts studied and written in class" - The Grammar Papers, p. 8) into the teaching of language skills (writing, reading, speaking/listening). For example, Grammar for Writing lists 54 units for use in KS2 classes, each of which:
• explains some point of grammatical knowledge or skill which is defined as a learning objective for the relevant KS2 year,
• suggests a range of investigation activities in which the class could collect and classify examples of the pattern concerned,
• suggests ways in which the findings of the investigation could be applied in writing activities which would follow on immediately after the investigation.
Notice here how the grammatical knowledge is immediately applied to a piece of writing by the pupils, so that grammar teaching is closely integrated into the teaching of writing.
Another important feature of this approach, however, is that the teaching is planned in terms of grammatical concepts. This is important because it allows grammar to be taught "explicitly and systematically" (as recommended in The Grammar Papers, p. 8), thereby solving the problem which dogs any attempt to make grammar 'relevant' by only teaching it 'when needed' - i.e. when a child's writing produces a particular problem. "For those seeking to teach grammar in the context of pupils' work there are some clear dilemmas. How can grammar teaching be systematic and progressive if it is only taught when it arises, either naturally or by chance, in the context of pupils' own work? At the same time, how can a systematic treatment of grammar avoid being a study of form, divorced from the living language it is meant to represent?" (The Grammar Papers, p. 16)
This solution may be the most impressive achievement of the Literacy Strategy, and the KS3 Strategy for English is right to adopt it. A lesson may be focused on a particular area of grammar, which allows systematic teaching; but the discussion of grammar moves, via investigation, directly into writing, which makes it relevant to pupils' writing. In short, grammar teaching is proactive, not reactive. Of course, once a grammatical concept has been introduced, with its terminology, a teacher will also be free to use it reactively in commenting on a pupil's writing; but that is not the primary mode of grammatical discussion.
Interestingly, the proactive approach is compatible with the one method of grammar teaching which has been repeatedly shown to 'work' in terms of its positive effects on writing. Called 'sentence combining', this has two parts:
• the teacher writes a number of simple sentences on the board,
• the pupils try to combine these into a single sentence in as many ways as they can.
This has a clear grammatical focus - compound and complex sentences, and different kinds of subordinate clauses - so it is systematic. Indeed, it makes no pretence to being relevant to the immediate writing needs of individual pupils. On the other hand, it also leads immediately into a writing activity, and is seen as an important element in the teaching of writing. It is easy to see why it is popular with both teachers and students.
2. Grammar should be taught explicitly but should also be explored by investigation. The KS3 Framework for English (p. 16) encourages teaching which is:
• direct and explicit
• highly interactive
• inspiring and motivating
In relation to teaching in English, the Framework recommends, among other things:
• more explicit teaching, with attention to Word and Sentence level skills
• investigations in which pupils explore language and work out its rules and conventions
Investigations have the great attraction of showing pupils that grammar is 'for real' - not just a collection of terminology and abstract ideas, but the organisation behind everything we say and write. Potentially, at least, grammar investigations are both fun and informative.
3. Grammar teaching should use standard terminology but should focus on the interrelated ideas behind the terms. Standard terminology is essential if grammatical understanding is to grow from year to year (i.e. in spite of changes of teacher) and across subjects. However, what really counts is obviously not the terms, but the ideas that they name. As in any other subject, the ideas form a tightly integrated network, so it is essential for teaching to emphasise these connections so that later work not only sits comfortably with earlier work, but reinforces it. For example, the idea 'noun' is involved in a wide range of facts about English grammar which a child will encounter in different years:
• Nouns are used to name people and concrete objects.
• Nouns may be common or proper.
• Nouns may be singular or plural, which is signalled by the ending.
• Nouns may also be used to name the actions that verbs can name.
• We can form nouns out of verbs by adding various endings.
• Nouns may be modified by adjectives (but not by adverbs).
• Nouns may be modified by determiners.
• Nouns may be modified by prepositional phrases and relative clauses.
• Nouns may be used as subjects and objects of verbs.
• Sometimes an abstract noun is a better alternative to a subordinate clause.
All of these facts about nouns (and many more) should come up for discussion at some point during a child's schooling. Collectively they provide a much better definition of 'noun' than any single memorable one-sentence definition (even if this is not the disastrously misleading traditional definition of a noun as the name for a person, place or thing). To understand grammar is to understand these interconnecting facts.
4. Grammar should be general but should be applied to day-to-day work. This is the question of 'where' English grammar is. On the one hand, it is in our minds as a very general set of rules and patterns; but on the other, it is in the texts that we read and hear every day. (Similarly, you could say that the rules of football are in the players' minds, but they can also be observed in a game of football, through their effects on the players' behaviour.) It is important for grammatical ideas to be "looked for in day-to-day work" (The Grammar Papers, p. 8) - i.e. constantly applied to concrete examples, and if at all possible to genuine examples rather than specially-constructed ones made up by the teacher. (Not that there is anything inherently wrong with made-up examples; they can be much clearer because they avoid unwanted complications.)
5. Grammatical knowledge should be cumulative but constantly revisited. "Teachers should base their planning on a clear idea of pupils' prior grammatical knowledge, to ensure that pupils are not taught the same aspects of grammar repeatedly and to make full use of pupils' implicit knowledge of grammar. " (The Grammar Papers, p. 37). It is important that grammatical knowledge should be constantly growing and maturing, not only through learning more terminology but also through a deepening awareness of the interconnections among the ideas described above. At the same time, however, it is important to revisit and revise the old ideas in order to consolidate them into a (hopefully) permanent network of well understood ideas.
6. Grammar should be taught not only in English, but also in MFL and across the curriculum. The Frameworks for MFL are absolutely explicit about the need for explicit teaching about grammar and for this to build on the grammar that pupils will have learned in their English lessons. They even insist on the need to use the same terminology as the English teacher does.
However, the KS3 Strategy sees an even broader remit for grammar teaching, as something which every teacher will engage in. As The Grammar Papers (p. 21) says,
It is clear ... that explicit grammatical knowledge ... is relevant to other subjects in the way that knowledge is constructed. Although each subject has its own vocabulary and technical concepts, explicit grammatical knowledge can help students use the language of the subject area appropriately, for example when describing events, reporting a process, or explaining what they have learned.
More recently, the Strategy has put considerable resources into developing "literacy across the curriculum", with an ambitious series of materials (books and CDs) for different subjects. For example, a geography teacher might
• focus students' attention on the grammatical structures in an essay about the causes of famine by comparing different ways of using the word drought as a cause.
• develop sensitivity to grammatical structure by exploring grammatical ways of connecting sentences that describe two kinds of valley.
• prepare students for describing a geographical feature by pointing out that "many sentences will begin with adverbials to locate the feature, e.g. At the top … , On one side … , Above the snowline …".
|
[toor-in, tyoor-, too-rin, tyoo-]
Turin, Ital. Torino, city (1991 pop. 962,507), capital of Piedmont and of Turin prov., NW Italy, at the confluence of the Po and Dora Riparia rivers. It is a major transportation hub and Italy's most important industrial center. Manufactures include motor vehicles, tires, textiles, clothing, machinery, electronic equipment, leather goods, furniture, chemicals, and vermouth. It is an international fashion center.
Turin was founded by the pre-Roman Taurini. The most important Roman town of the W Po valley, Turin was later a Lombard duchy and then a Frankish county. In spite of the claims of the house of Savoy, it remained a free commune in the 12th and 13th cent. It passed c.1280 to the house of Savoy (see Savoy, house of). Occupied (1536-62) by the French, it was restored to the dukes of Savoy and became their capital. From 1720 to 1861 it was the capital of the kingdom of Sardinia. During the War of the Spanish Succession it suffered a long siege, which ended with the victory of Eugene of Savoy over the French. In 1798, Charles Emmanuel IV of Savoy was obliged by the French to abdicate and to abandon Turin, but Victor Emmanuel I returned in 1814, and the city became the center of Italian national aspirations. From 1861 to 1865 it was the capital of the new Italian kingdom.
Because of its industrial importance, Turin suffered heavy damage in World War II; most of the important buildings that remain date from the 17th-19th cent. Of note are the Palace of the Marquesses of Caraglio e Senantes (17th cent.); the Palazzo Madama (begun late 13th cent.); the baroque Venaria Reale, a restored (2008) 17th-century royal summer palace, which houses a fine collection of arms and armor; the Academy of Science, which contains the rich Egyptian Museum; and the Car Museum.
The Cathedral of San Giovanni (late 15th cent.) has a casket that contains the famous "Shroud of Turin," in which some believe Jesus was wrapped after death. Carbon-14 dating (1988) suggested that it is a medieval forgery, but the testing may have been done on a sample from a repaired area. Analysis of pollen grains and plant images on the shroud (1999) indicated a date prior to the 8th cent., and other tests have suggested the shroud is pre-Medieval. Research published in 2009, comparing a shroud known to date from Jerusalem in the early 1st cent. A.D., noted that the Turin shroud had a more complex weave and consisted of a single piece of cloth instead of separate pieces for the head and body.
On a hill overlooking the city is the basilica of Superga (1717-31), containing the tombs of many of the dukes of Savoy and kings of Sardinia. Turin has a university and a well-known polytechnic institute (1859).
Italian Torino
City (pop., 2001 prelim.: 857,433), Piedmont region, northwestern Italy. Located on the Po River, it was founded by the Taurini. It was partly destroyed by Hannibal in 218 BC. It was made a Roman military colony under Emperor Augustus. A part of the Lombard duchy in the 6th century AD, it became the seat of government under Charlemagne (742–814). It passed to the house of Savoy in 1046. The capital of the kingdom of Sardinia in 1720, Turin was occupied by the French during the Napoleonic Wars. The political and intellectual centre of the Risorgimento movement, it served as the first capital of united Italy (1861–65). During World War II Turin sustained heavy damage from Allied air raids but was rebuilt. It is the focus of Italy's automotive industry and an international fashion centre. The Shroud of Turin has been housed in the 15th-century cathedral there since the 16th century.
Turin is a town in Coweta County, Georgia, United States. The population was 165 at the 2000 census.
Turin is located at (33.326798, -84.634064).
As of the census of 2000, there were 165 people, 66 households, and 51 families residing in the town. The population density was 131.5 people per square mile (51.0/km²). There were 68 housing units at an average density of 54.2/sq mi (21.0/km²). The racial makeup of the town was 89.09% White and 10.91% African American. Hispanic or Latino of any race were 1.21% of the population.
There were 66 households out of which 24.2% had children under the age of 18 living with them, 51.5% were married couples living together, 22.7% had a female householder with no husband present, and 22.7% were non-families. 21.2% of all households were made up of individuals and 10.6% had someone living alone who was 65 years of age or older. The average household size was 2.50 and the average family size was 2.86.
In the town the population was spread out with 18.8% under the age of 18, 13.3% from 18 to 24, 23.6% from 25 to 44, 23.6% from 45 to 64, and 20.6% who were 65 years of age or older. The median age was 41 years. For every 100 females there were 85.4 males. For every 100 females age 18 and over, there were 86.1 males.
The median income for a household in the town was $50,000, and the median income for a family was $55,375. Males had a median income of $23,125 versus $21,771 for females. The per capita income for the town was $19,994. About 1.9% of families and 7.3% of the population were below the poverty line, including 12.5% of those under the age of eighteen and 10.0% of those sixty five or over.
|
Bonobo expert to discuss Darwinian feminism at Natural History Museum
Professor Amy Parish says evolutionary science has only been telling one side of the story, and feminists ought to be worried.
Up until now the big name in evolutionary theory - the species known as the "closest ancestor" - has been the chimpanzee. But Professor Parish and a new wave of anthropologists are quick to point out that bonobos, another species closely related to the common chimp, are just as likely to be our missing link. In fact, genetically, it's a tie.
There are significant differences, though, between the two species. Chimps are known for their male-bonded clans, their dominant male figures, and their aggression. Bonobos, meanwhile, are much more peaceful than common chimpanzees. Their communities are female-dominated, and they counter the chimps' violent tempers with raging hormones. Ferocious, unfettered hormones. Bonobos have more sex, in more ways, and for more reasons, than you and I can imagine.
The differences are real, and Parish says that bonobos ought to be given their rightful seat at the evolutionary table. She says human evolutionary theory can and should be revised so that it accommodates meaningful female bonds, possibilities of female dominance over males, and hunting and meat distribution by females, too.
Professor Parish speaks this Friday, February 3rd, at 6:30pm at the Natural History Museum of L.A. County.
|
HEP C = Cirrhosis, Right? WRONG!
Posted Jan 02 2010 12:00am
Only a minority of patients with hepatitis C infection progress to cirrhosis. Studies have shown that 20% to 25% of people with hepatitis C will develop cirrhosis ([1]). Some individuals are more likely to progress to cirrhosis than others ([2]). The current or past use of significant amounts of alcohol is the single most important factor in accelerating progression to cirrhosis ([3]). For this reason, we recommend that all patients with chronic hepatitis C abstain totally from alcohol.
Other factors that may increase the likelihood of progression to cirrhosis include co-infection with HIV (human immunodeficiency virus) and/or hepatitis B virus. Recent research suggests that excessive iron in the liver may also accelerate progression to cirrhosis ([4]). In some patients, progression to cirrhosis occurs despite none of these factors being present. Virus-specific factors or the type of immune response to the infection may be responsible for the progression to cirrhosis in these individuals.
More recently it has been observed that progression to fibrosis (scar tissue) and cirrhosis appears to accelerate after age 45. The reasons for this are not clear, but it is suspected that changes in the immune response to the hepatitis C infection may cause increased fibrosis after age 45 ([5]). This is another reason why we are becoming more aggressive in treating hepatitis C in young people, even if fibrosis has not yet developed.
Factors that are associated with a lower likelihood of progression to cirrhosis include young age at time of infection, female gender, no history of alcohol use and past treatment with interferon. It should be noted that the genotype of the virus and the viral load have no relationship whatsoever to the development of cirrhosis.
What are the symptoms of cirrhosis?
In early cases of cirrhosis, there are no specific symptoms that would make the physician suspect cirrhosis. At an early stage, even laboratory tests may not show evidence of cirrhosis. Currently we do not have an accurate way of diagnosing cirrhosis by doing a blood test. Even though there is a commercially-available blood test for detecting advanced fibrosis in the liver, the accuracy of this test in patients with hepatitis C is still unknown, and currently it is unable to differentiate cirrhosis from less-advanced stages of fibrosis.
As the cirrhosis becomes more advanced, symptoms from the complications of cirrhosis may develop. By this time, laboratory test abnormalities suggestive of decreased liver function (abnormal levels of bilirubin and albumin; and abnormal coagulation parameters) also develop. Complications from cirrhosis include ascites, variceal bleeding, encephalopathy and liver cancer.
The severity of the cirrhosis is determined based on laboratory test results and findings on physical exam. The liver biopsy plays no role in determining the severity of the cirrhosis. Factors that are taken into account to determine the severity of cirrhosis include the serum albumin (albumin is a protein produced by the liver), the PT or INR (measures the ability of the blood to clot) and the level of serum bilirubin (bilirubin is a substance excreted by the liver, which, when it accumulates, causes jaundice). In addition, the presence or absence of ascites (fluid accumulation in the abdomen) and encephalopathy (confusion caused by toxins not filtered by the liver) are also used to grade the severity of cirrhosis.
A point system known as the Child’s-Pugh-Turcotte score (CPT score) has been devised to determine the severity of the cirrhosis. Depending on the total score, a patient is classified as Class A (early cirrhosis) through Class C (advanced cirrhosis).
Child-Pugh-Turcotte Criteria

                          1 Point      2 Points      3 Points
Albumin (g/dL)            >3.5         2.8-3.5       <2.8
Bilirubin (mg/dL)         <2           2-3           >3
Ascites                   None         Minimal       Moderate
Encephalopathy            None         Grade 1-2     Grade 3-4
PT (seconds prolonged)    <4           4-6           >6

Class A: 5-6 points; Class B: 7-9 points; Class C: 10-15 points
As of 9/1/09 Ricki stands at 12 points. (Class C - advanced)
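For readers who want to see how the arithmetic of the table adds up, here is a small illustrative Python sketch (a simplification for illustration only, not a clinical tool; the thresholds are the ones in the table above):

def cpt_score(albumin, bilirubin, ascites, encephalopathy, pt_prolonged):
    """Add up Child-Pugh-Turcotte points and return (score, class)."""
    pts = 0
    pts += 1 if albumin > 3.5 else 2 if albumin >= 2.8 else 3
    pts += 1 if bilirubin < 2 else 2 if bilirubin <= 3 else 3
    pts += {"none": 1, "minimal": 2, "moderate": 3}[ascites]
    pts += {"none": 1, "grade 1-2": 2, "grade 3-4": 3}[encephalopathy]
    pts += 1 if pt_prolonged < 4 else 2 if pt_prolonged <= 6 else 3
    cls = "A" if pts <= 6 else "B" if pts <= 9 else "C"
    return pts, cls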
Prognosis of cirrhosis
Patients with early cirrhosis (CPT Class A) from hepatitis C infection who have no complications from cirrhosis have an excellent prognosis. Even without treating the hepatitis C infection, 10 years after diagnosing cirrhosis the majority (>75%) continue to do well with no liver-related complications ([6]). It is believed that treatment of the hepatitis C with interferon will provide an even better prognosis.
The diagnosis of early cirrhosis should not be considered a fatal diagnosis. Most patients will continue to do well for decades. There is no reason to refer a person with cirrhosis to a liver transplant center unless the cirrhosis is advanced (CPT class C) or complications from cirrhosis have developed.
Hepatitis C Support Project
|
Bolivia lies at the heart of the South American plate and is thus old rock not likely to shatter. This rock will be pushed higher in altitude during the shift, but not by much, and its latitude will not be much more distant from the new equator after the shift than before, so life will continue much the same for survivors. The sun will rise in a different location, and the skies will be cloudier due to volcanic dust, which will puzzle the rustic folk living in the mountains. But lying above the low atmosphere where most of the volcanic dust will linger as it settles provides advantages, as there will be clear days on occasion. Life will be harder, as everywhere, due to less vegetation, but those used to living a simple life will find ways to cope, unlike those in cities used to soft living. The rural peoples of Bolivia will be survivors.
|
The Java EE 5 Tutorial
Page Encoding
For JSP pages, the page encoding is the character encoding in which the file is encoded.
For JSP pages in standard syntax, the page encoding is determined from the following sources:
• The pageEncoding attribute of the page directive
• The charset value of the contentType attribute of the page directive
If none of these is provided, ISO-8859-1 is used as the default page encoding.
For JSP pages in XML syntax (JSP documents), the page encoding is determined as described in section 4.3.3 and appendix F.1 of the XML specification.
The pageEncoding and contentType attributes determine the page character encoding of only the file that physically contains the page directive. A web container raises a translation-time error if an unsupported page encoding is specified.
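As a rough illustration of that precedence, here is a small Python sketch (illustrative only, not container code; directive stands for the attributes found in the page's page directive):

def resolve_page_encoding(directive):
    """Return the page encoding for a standard-syntax JSP page (illustrative sketch)."""
    if "pageEncoding" in directive:
        return directive["pageEncoding"]             # an explicit pageEncoding wins
    content_type = directive.get("contentType", "")
    if "charset=" in content_type:
        return content_type.split("charset=", 1)[1]  # charset carried by contentType
    return "ISO-8859-1"                              # default when neither is provided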
|
An amusement which mangles some input text. It simulates a k'th order Markov chain based on the text's statistics. The chain can either be based on letter statistics ("if the last 4 letters in the text were ` the', what is the chance that the next letter is `r'?"), or on word statistics ("if the last 2 words in the text were `I love', what is the chance that the next word is `nobody'?"). The parameter k is the degree of continuity required; 4-6 is a good choice for a letter-based chain, while 2-3 is a good choice for a word-based chain. The results are always amusing, since they're almost English-like.
In Emacs, you can perform dissociation based on the current buffer using "M-x dissociated-press RET". See that function's documentation string ("C-h f dissociated-press RET") for details on how to specify k and the use of letter or word statistics.
In practice, most implementations don't do a real Markov chain, but instead do the following:
• Pick a random place in the text.
• Search forward for the required continuity (wrap around from the end of the text to its beginning).
• Output the next letter or word.
If the original text were generated by the Markov model, the results would be the same. Unfortunately, this is not true of English.
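A short Python sketch of that shortcut for a letter-based chain (the function and parameter names are mine, not from any particular implementation):

from random import randrange

def dissociate_letters(text, k, length=200):
    """Emit letters by repeatedly matching the last k letters at a random spot."""
    start = randrange(len(text) - k)
    out = text[start:start + k]                     # pick a random place in the text
    while len(out) < length:
        seed = out[-k:]
        i = text.find(seed, randrange(len(text)))   # search forward from a random spot
        if i == -1:
            i = text.find(seed)                     # wrap around to the beginning
        if i == -1 or i + k >= len(text):
            break                                   # required continuity not found
        out += text[i + k]                          # output the next letter
    return out

With k around 4 to 6, the output has the almost-English flavour described above.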
Dissociated Press n.
[play on `Associated Press'; perhaps inspired by a reference in the 1950 Bugs Bunny cartoon "What's Up, Doc?"] An algorithm for transforming any text into potentially humorous garbage even more efficiently than by passing it through a marketroid. The algorithm starts by printing any N consecutive words (or letters) in the text. Then at every step it searches for any random occurrence in the original text of the last N words (or letters) already printed and then prints the next word or letter. EMACS has a handy command for this. Here is a short example of word-based Dissociated Press applied to an earlier version of this Jargon File:
window sysIWYG: n. A bit was named aften /bee't*/ prefer to use the other guy's re, especially in every cast a chuckle on neithout getting into useful informash speech makes removing a featuring a move or usage actual abstractionsidered interj. Indeed spectace logic or problem!
A hackish idle pastime is to apply letter-based Dissociated Press to a random body of text and vgrep the output in hopes of finding an interesting new word. (In the preceding example, `window sysIWYG' and `informash' show some promise.) Iterated applications of Dissociated Press usually yield better results. Similar techniques called `travesty generators' have been employed with considerable satirical effect to the utterances of Usenet flamers; see pseudo.
The following Python script implements a quick-and-dirty word-based Dissociated Press algorithm.
#!/usr/bin/env python2
from whrandom import choice
from sys import stdin
from time import sleep

dict = {}

def dissociate(sent):
    """Feed a sentence to the Dissociated Press dictionary."""
    # None marks the end of a sentence (see the explanation below).
    words = sent.split(" ") + [None]
    for i in xrange(len(words) - 1):
        if dict.has_key(words[i]):
            if dict[words[i]].has_key(words[i+1]):
                dict[words[i]][words[i+1]] += 1
            else:
                dict[words[i]][words[i+1]] = 1
        else:
            dict[words[i]] = { words[i+1]: 1 }

def associate():
    """Create a sentence from the Dissociated Press dictionary."""
    w = choice(dict.keys())
    r = ""
    while w:
        r += w + " "
        # Each follower is repeated by its observed count, so choice()
        # picks it with the frequency seen in the source text.
        p = []
        for k in dict[w].keys():
            p += [k] * dict[w][k]
        w = choice(p)
    return r

if __name__ == '__main__':
    # Read one sentence per line until EOF ...
    while 1:
        s = stdin.readline()
        if s == "":
            break
        dissociate(s.strip())
    print "=== Dissociated Press ==="
    # ... then print one dissociated sentence per second until interrupted.
    try:
        while 1:
            print associate()
            sleep(1)
    except KeyboardInterrupt:
        print "=== Enough! ==="
This code may be used from the command line or as a Python module. The command-line handler (the last chunk of code, beginning with if __name__ == '__main__') reads one line at a time from standard input, and treats each line as a sentence. When it reaches EOF, it begins printing one dissociated sentence per second.
The dissociate function stores frequency information about successive words in the global dictionary dict. That is to say: Every word in the input text occurs as a key in dict. The value of dict[foo], for some word foo, is itself a dictionary. It stores the words which have occurred immediately after foo in the source text, and the number of times they have done so. The end of a sentence is represented with the null value None.
The associate function creates a new sentence based on the frequency information in dict. It begins a sentence with a random word from the source text. Next, it uses dict to select a word which, in the original text, followed the first word. The probability of each possible word being selected is based on that word's frequency following the first word in the original text. If the "word" selected is the None value, the sentence is complete; otherwise, it picks a new word that has followed the last.
Here is a sample source text file. Please note the lack of punctuation; this program isn't smart enough to deal with it appropriately.
all your base are belong to us
everything is a base for soy lesbians
are you all monkeys
monkeys lesbians and soy are good for you
now is the time for all good lesbians to come to the aid of their monkeys
good monkeys belong in a zoo
on everything you can meet a zoo full of lesbians
And here is a sample of the output based on these sentences:
meet a zoo full of their monkeys lesbians and soy are you
a zoo full of their monkeys
on everything is the time for soy are good monkeys lesbians
time for all your base for you can meet a base are belong to us
good monkeys belong to us
base are belong to come to us
now is the time for all your base are you can meet a zoo
is a zoo
now is the time for soy are good lesbians and soy are good monkeys belong to the time for all monkeys
full of lesbians
base for all good monkeys
lesbians and soy lesbians to the time for soy lesbians
|
Extra Terrestrial Intelligence-3: The most likely contact scenario
What is likely to be our reaction if we did receive an unambiguous signal that there existed ETI somewhere else in the universe?
The reaction would be hard to predict because it is not a topic that is publicly discussed much. This is a bit surprising because it is not such a stretch to think that we could wake up one morning to find out that we have received some signal from an alien civilization. I suspect that the reason why we don't speculate on this question is that any such occurrence might be extremely difficult for most people to absorb into their existing worldviews, so they avoid thinking about it.
Take religious believers. If there is life elsewhere, what does that do to the common idea that humans somehow have a special relationship to god? For Christians, if Jesus died on Earth as a redemptive act for all humankind, did a similar event take place with these other alien civilizations? Would Jews still be able to see themselves as some god’s chosen people? Why did the angel Gabriel not reveal this information of ETI to Mohammed during one of their chats?
I suspect that religious apologists would quickly get to work to come up with new doctrines that would keep the faithful loyal. After all, very similar theological challenges have been encountered before although they are not remembered now. After Copernicus’s ideas about a heliocentric solar system sank in and it became clear that the Earth was merely one of several planets, theologians worried that this meant that other planets could also have inhabitants and this would cause problems for religious doctrines.
As Thomas Kuhn pointed out in book The Copernican Revolution:
When it was taken seriously, Copernicus’ proposal raised many gigantic problems for the believing Christian. If, for example, the earth were merely one of six planets, how were the stories of the Fall and of the Salvation, with their immense bearing on Christian life, to be preserved? If there were other bodies essentially like the earth, God’s goodness would surely necessitate that they, too, be inhabited. But if there were men on other planets, how could they be descendents of Adam and Eve, and how could they have inherited the original sin, which explains man’s otherwise incomprehensible travail on an earth made for him by a good and omnipotent deity? Again, how could men on other planets know of the Savior who opened to them the possibility of eternal life? … Worst of all, if the universe is infinite, as many of the later Copernicans thought, where can God’s Throne be located? In an infinite universe, how is man to find God or God man? (p. 193)
The Copernican model, once its implications were fully appreciated by the theologians, thus raised some serious problems. Fortunately for the theologians, no life was found on the other planets so they did not have to deal with the implications of original sin, of Adam and Eve as being created in god’s image, of the fall from grace, of Jesus as savior, and so on.
Similar problems were encountered with the early exploration of the world. Before the arrival of Europeans in the New World, for example, St. Augustine went so far as to argue that there could not be human beings already there because the Bible said that all humans were descended from Adam and Eve, and since there was no way that their descendants could have got to the other side of the Earth (what they called the Antipodes), that meant there could not be any humans there. Such was the typical approach of one of the Catholic Church’s great thinkers: start with a doctrinal belief and then use that to predict what the data should reveal. In science, we may start with a paradigmatic model to make a prediction but we never depend upon any revelation or a special text.
Of course, Augustine was wrong. But as always, the theologians managed to absorb the discovery that the New World was indeed inhabited and devise ways to incorporate these new and awkward scientific facts in ways to keep the faithful loyal.
Next: The potential benefit of receiving signals from an ETI.
POST SCRIPT: But seriously,…
Our wise pundits start to discuss the really important issues.
|
Once the American Civil war ended, did some states encounter more difficulty than others when it came to rejoining the Union? What circumstances or conditions might have contributed to their issues?
i want to know states that are strong enemy to USA reunion – md nth Nov 14 '12 at 22:49
i mean less loyal to USA,and during war – md nth Nov 14 '12 at 23:19
I hope I've cleared it up with my edit. – Russell Nov 14 '12 at 23:25
How does one measure resistance? Does a fierce Confederate militia being active in non-seceding Indiana outweigh the anti-secessionist sentiment in western Virginia? Do virulent anti-conscription riots in the north count as anti-Union actions? – choster Nov 14 '12 at 23:45
This is way different from what it kind of looked like he was trying to ask, but now its a good enough question that I'm retracting my votes (and prior comments). – T.E.D. Nov 15 '12 at 15:11
1 Answer
The question refers to "problems" without defining them, so I am going to cast my answer in terms of which states rejoined the Union earlier (supposedly fewer "problems") rather than later. I will try to tie the rejoining order to the best correlation I can find.
According to the link below, Tennessee, Arkansas, North Carolina, and Florida were the first four Confederate states to rejoin the Union. They were Upper South (or in the case of Florida, peripheral) states that were not part of the mainstream South. The first three states plus Virginia did not join the Confederacy until after the firing on Fort Sumter.
The last seven states to rejoin were Louisiana, South Carolina, Alabama, Georgia, Virginia, Mississippi, and Texas. With the notable exception of Virginia, these were the hard-core "deep South" states. The most pro-Union part of Virginia was West Virginia, which "seceded from secession," meaning that what was left was more pro-South than the other Upper South states.
Answers.com hardly constitutes a reliable source. – American Luke Nov 15 '12 at 15:01
Usually your answers are very thorough, I don't see how this answers the question about the difficulties the states might have had in rejoining during unification. I also wouldn't consider North Carolina a border or peripheral state... – MichaelF Nov 15 '12 at 18:00
I took that part to mean "outside the Deep South" (which is quite true for North Carolina). Changing "border" to "upper south" to make this clearer. I'd suggest Tom read over the wikipedia Deep South page ( en.wikipedia.org/wiki/Upper_South ), clean up the terminology a bit, and IMHO most of these complaints would be moot. – T.E.D. Nov 15 '12 at 19:18
@Luke:I'm satisfied with their FACTUAL content (the dates and order of the states rejoining the Union). The interpretations are mine. – Tom Au Nov 16 '12 at 0:25
|
Law in the United States
by Carol Weisbrod
The situation of the Jewish community in the United States is shaped fundamentally by the condition of political equality. This legal status is shared with all other citizens and is assumed as an essential baseline. Where there are violations of that status—when an individual otherwise of full legal capacity is treated as a member of a subordinated racial or religious group, and when group membership defines rights and duties—we discuss the problem under the heading “discrimination.”
In Jewish history, political equality has a specific historical dimension. It is a “postemancipation” phenomenon. Count Clermont-Tonnerre’s famous words are descriptive: “To the Jews as a people, nothing; to the Jews as citizens, everything.” In this modern view, the word “law” itself is related to official behavior. Emancipation assumes that, in relation to the state, members of groups are in general treated equally, regardless of religious background or ethnicity. In such a context, “law” means secular law, the law of the state regulating the status of subjects and citizens. Any other law—religious law or folk law—is considered not really law at all.
We would have a very different picture if we were describing the idea of law in other times and places. The index to Life Is with People, a well-known book on the shtetl, has an entry that makes the point: “law, see Torah.” Until recently, “law” in relation to the Jewish community meant religious law. The existence of external governmental law was acknowledged, of course, but that external law was seen largely as something to be accommodated, an aspect of the historical situation. This is reflected in the maxim Dina de-malkhuta dina: the law of the kingdom is law.
For the women who came as immigrants to the United States, the American approach—built on ideas from the eighteenth-century Enlightenment and nineteenth-century liberalism—often was a new situation. They had come from countries in which religious law was the defining law of their communities, under the system of state law. In such systems of state regulation, certain authority was delegated to the religious leadership. Religious law was, in effect, recognized by the state as the law of an internally regulated autonomous community that operated its own system of sanctions. Both such delegation to religious law and the customs of the Jewish community restricted the roles of women.
This article juxtaposes the idea of “law” with the idea of “Jewish women in America.” The result of this juxtaposition is a brief inquiry into three separate subjects: the interaction of legal systems; the impact on women of particular changes in the substance of the law(s); and the contributions to law of individual women.
The Old Testament describes an ideal of law, and of church and state, in which religious and secular law are fused in a religious commonwealth. Long after the dispersion of the Jews, the theocracies of ancient Israel provided a model of such a government for the Western world. The early colonists in North America, for example, attempted to establish a city upon a hill, modeled (particularly in Massachusetts and Connecticut) on the commonwealth of ancient Israel described in the Old Testament.
Law and government in the West were not, however, typically theocratic even in those centuries characterized by the unity of Christendom. Law emanated from a multiplicity of sovereignties, some associated with kings and public officials, others with popes and clerics, still others with smaller units of society operating as self-regulatory bodies. In the medieval situation of pluralist law, any particular religious community’s law was one body of law competing with others. The modern state had not yet so clearly triumphed that its law was given the name “Law.”
We often assume today that the triumph of the secular government is complete. We talk as though religious authority has been eliminated and state power is all that remains. Yet we might more accurately say that the decline of religious authority is a decline in the public coercive aspects of religious regulation, but that religious authority, in the lives of an observant religious community, is still powerful. Today it operates in what is called the “private” sector, generally as a matter of individual submission to the will of the religious community and its sanctions.
In the modern American situation, secular law and religious law are still interdependent and interacting legal systems. In one model interaction, a rough substantive correspondence is assumed between the law of a religious system and the law of the state. The state considers itself, in more or less structured ways, for example, to be a “Christian Nation” in a world of Christian denominations, with smaller or larger non-Christian communities tolerated by the state. This generally was the situation in what is loosely termed Christendom, and it was equally the situation in the Ottoman Empire. In some contexts, both historically and today, this system involves a kind of official delegation of authority to religious leaders of minority traditions to administer their own law under the auspices of—and within limits set by—the majority system. Appeal to the state from the community was possible. Thus the nineteenth-century feminist Ernestine Rose successfully fought her father, a Polish rabbi, in the Polish civil courts. But such appeals to the state were rare.
A comparison between the situation of a Jewish and a Catholic immigrant to the United States in the late nineteenth century is useful on this point. When newcomers arrived in the United States, they found a law of marriage and divorce that assumed complete state authority. Following Christian approaches, state law restricted divorce—though it was becoming increasingly available—insisted on monogamy, and prohibited incestuous marriages. The illegality of birth control was assumed by the state system—following the Catholic position—until the mid-twentieth century. Jewish law was more open on the issue of contraception—and on divorce—though its availability in practice was restricted in part because of the rules of the surrounding state systems. Other family law issues (relating, for example, to religious matching in the context of adoption) had an impact on both groups.
Problems under the law were different for the two immigrant groups. For Catholics, the possibility of divorce in the secular culture raised the question of whether to divorce and remarry in the outside system without the blessing of the church. (The church typically insisted that the first marriage, unless annulled by the church, remained valid.) For Jews, the question might result in confusion on the issue of which official—religious or secular—had authority to issue a divorce, because both systems recognized divorce and remarriage. Rabbinic authority in the country of origin was sometimes sufficient to effect such a change in legal status. In the United States, it was not. The risk was that mistaken reliance on a religious divorce would result in a bigamous second marriage. This was an issue well into the twentieth century. It was a matter of concern, for example, for the New York City Kehilla—the attempt by Jews in New York to form a modern version of the traditional, quasi-autonomous Polish Jewish community.
Some aspects of tension between rabbinic and secular authority are still visible today. Among the most difficult problems arise in the context of family law. A prime example is the get, the Jewish divorce, which must, according to tradition, be issued by the husband to the wife. Because religious sanctions are no longer sufficiently powerful in all cases to pressure a recalcitrant husband to give a get, the agunah [chained wife] has sought state intervention to pressure husbands to give divorces. Some have focused on state solutions (e.g., the New York get law) to this problem, while others have worked on finding solutions within the religious tradition (e.g., the possible use of conditional divorce).
It may be that, indirectly, ideas of egalitarianism and emancipation have had an impact on religious tradition and modern understandings of its rules. The division of religious authority in the United States makes a variety of religious options available for individuals. Difficulties are most acute for those who want not only to remain within traditional religious structures but also to absorb the egalitarian values of the modern state into their own lives. (Still, it may be noted that the situation is less difficult in the United States than in Israel, where traditional religious authority is given governmental power.)
Emancipation operates to free individuals from the restraints of groups. In modern conceptions of religious liberty, religious group membership is voluntary, at least as understood by the state. Exit from a group is a serious possibility, because, in effect, emancipation simultaneously opened both the doors of the larger society and the doors of the community. A situation in which everyone has to be a member of some community, and exit from one either involves membership in another (conversion) or some sort of outlaw status, is quite different from a situation in which state citizenship is a residual category for everyone and group membership is viewed as a private associative option. Emancipation and assimilation produce exits from communities as well as changes in those communities. It may be noted on this point that groups themselves often adopt different definitions of affiliation. For some groups exit is impossible, at least from the point of view of the group. The state’s answer to that question, where it becomes relevant, is often different depending on the legal system of the country in which the question is raised. In some legal systems, (e.g., the United States), the issue turns almost entirely on an idea of voluntary association or personal affiliation and commitment. In others, the matter is decided on the basis of formal religious or racial definitions. Group sanctions continue to exist, but some, such as expulsion and excommunication, operate most effectively on those most strongly identified with traditional structures. Thus the claim of the group to a permanent and irrevocable affiliation is one that in the modern situation can be simply denied by an individual who chooses to reject the group and, in effect, move away into the larger society. The more open the social and practical situation, the more real the possibility of exit.
We can speak of the impact of state law on Jewish women in the United States across a time frame of several hundred years, from the time that the first group came to New Amsterdam in 1654. And we can note here that one form of religious freedom could involve recognition of group authority. (See the Rhode Island statute authorizing the uncle-niece marriage of Jewish law as an exception to the state incest law dating from 1798.)
Yet the early experience must be viewed against two other points. First, we must note the diversity and strength of later Jewish immigration. Most of the Jewish women affected by changes in the law were not the descendants of those original settlers. Rather, the Jewish population was regularly enlarged and changed by waves of immigrants arriving over the next two centuries. Some came from situations in which emancipation had been in place for a century, others from contexts built on the conception of a separate, autonomous Jewish community. If Hannah Arendt can represent women of the twentieth-century German refugee tradition, the tradition of the shtetl might be represented by some women of Eastern European communities. The class background of Jewish immigrants generally ranges from those like the Brandeis family (described by Josephine Goldmark), who arrived in the mid-nineteenth century with a grand piano or two, to those who came later in steerage with virtually nothing. Immigrants came (and come) from Eastern Europe and from the Middle East, from backgrounds more or less affluent, Westernized, observant.
Second, the legal context changed substantially with the disestablishment of the Christian churches, a process formalized at the national level through the First Amendment to the Constitution. At the state level, the process was not concluded until 1833, when Massachusetts disestablished its church. Under the theories of separation of church and state formulated in the First Amendment (“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof”), the state could not act directly against any protected “exercises of religion.” The state thus cannot insist, for example, on the ordination of women or the revision of the liturgy.
The diversity of the social and economic situations of Jewish immigrants meant that the particular attributes they brought to the United States were very different. Yet the openness of the American situation had an impact on all of them. The change in the legal status of women in the first part of the twentieth century, for example, relating to suffrage and jury service, touched all women. Attempts to discriminate against women came to be rooted less in law than in social role definitions, which made it difficult if not impossible for women both to maintain public or market positions and to raise families, despite the formal equality marked by full political rights. General societal concern with violence against women also reached the Jewish community.
It is also true, however, that the traditional rules of the Jewish community continued to exclude women from full participation in the synagogue service and from ordination as rabbis, and insisted on separate seating in houses of worship. The assumption that a woman’s primary role was as a mother and a supporter of the family—a carrier of cultural values but not religious values institutionally understood—remained unchanged in the less assimilated parts of the community.
But it should also be noted that while women were subordinated in the Jewish tradition, they were in other ways not dependent. In Eastern Europe, for example, it was entirely conventional to say that it was an honor for a wife to support her scholar husband. (The scholarship was assumed to be religious scholarship.) To the extent that women were understood to be responsible for the support of their families by working in the family business (but not for strangers), we see a pattern quite different from the subordination and economic and psychological dependency often associated with the idea of the “separate spheres.” The managerial skill, shrewdness, and competence that parts of traditional Jewish culture allowed women made possible a transition to the modern commercial and professional world for Jewish women that was presumably more difficult for women socialized in more stereotypically male-dominated cultures.
As general societal discrimination against women in the professions weakened, women entered the professions for reasons somewhat similar to those that influenced men. To begin with, law has a very special place in America. It is at the center of the culture, a road to middle-class success and status. It is, simultaneously, a way to seek reform and social change, a path for those pursuing justice in the prophetic tradition. Jerold Auerbach suggests that law had a highly particularized role in the minds of the Jewish immigrants to the United States. For many, commitment to American legal forms and American democratic faith replaced the traditional faith. This observation may well hold across genders.
Having said this, however, we should consider the specific contribution of Jewish women to law.
Women lawyers would be a useful addition to the list—largely if not entirely male—offered in the Encyclopedia Judaica. In Women in Law, Cynthia Epstein describes the careers of Nanette Dembitz Brandeis and Justine Wise Polier, daughter of the Rabbi Stephen Wise. (Both were judges in New York.) Of course, in addition to Justice Ruth Bader Ginsburg on the Supreme Court, today there are Jewish women who serve as state court judges.
The contribution of women to law need not, however, be limited to those who practice law (with or without their husbands) or become judges. Contributions to law might well be looked at more broadly. To illustrate with a familiar example, the famous Brandeis brief, which brought social science material to the attention of the Supreme Court in Muller v. Oregon (208 US 412 (1908), upholding the Oregon statute regulating hours of employment for women), was the work of Brandeis’s sister-in-law, Josephine Goldmark. She was not a lawyer, but her contribution to the law is plain. We also might look for Jewish women who have been plaintiffs in lawsuits, particularly lawsuits understood as test cases involving significant public propositions. Esta Gluck, president of a parent-teachers association, and active in the American Jewish Congress, for example, was a plaintiff in Zorach v. Clauson (343 US 306 (1952), upholding the New York released-time program). More broadly, it would be appropriate to see the impact on law of the activity of Jewish women who have been effective members of lobbying groups, consumer groups, or trade unions, directing their efforts to reform. Finally, we might include the Jewish women associated with the process of law in its various institutional forms, for example, the many administrative and welfare agencies that implement governmental programs.
The contribution of Jewish women to law is part of a larger story about legal change. If law is taken to be male, women’s law might well be different, as the sociologist Georg Simmel suggested. In the twentieth century, women have become members of political society in a new way. Questions concerning the meaning and the consequences of that change—the question of the significance of a “different voice” of women—are regularly discussed in the legal academic literature under the broad heading “feminist jurisprudence.” Some writers are highly self-conscious about the ways Jewish commitment or identity has influenced their understanding of issues of difference and of pluralism in modern societies. Whether, in the end, women’s law is basically different from men’s law is still not known. That women—including Jewish women—already have altered the legal conversation in the United States seems entirely clear.
Auerbach, Jerold. Rabbis and Lawyers: From Torah to Constitution (1990); Baum, Charlotte, Paula Hyman, and Sonya Michel. The Jewish Woman in America (1976); Biale, Rachel. Women and Jewish Law: An Exploration of Women’s Issues in Halachic Sources (1984); Borden, Morton. Jews, Turks and Infidels (1984); EJ, s.v. “Law”; Epstein, Cynthia. Women in Law (1981); Goldmark, Josephine. The Pilgrims of ’48 (1930); Goren, Arthur. New York Jews and the Quest for Community: The Kehillah Experiment (1908–1922) (1970); Graff, Gil. Separation of Church and State: Dina de-Malkhuta Dina in Jewish Law (1750–1848) (1985); Gulak, Asher. “Jewish Law.” In Encyclopedia of the Social Sciences; Hall, Kermit, ed. Oxford Companion to the Supreme Court of the United States (1992); Hertzberg, Arthur. The French Enlightenment and the Jews (1968); Meiselman, Moshe. Jewish Woman in Jewish Law (1978); Minow, Martha. Making All the Difference: Inclusion, Exclusion, and American Law (1990); Pfeffer, Leo. Church, State, and Freedom (1967), and Creeds in Competition (1958); Shepherd, Naomi. A Price Below Rubies: Jewish Women as Rebels and Radicals (1993); Simmel, Georg. On Women, Sexuality and Love (1984); Weisbrod, Carol. “Family, Church, and State.” Journal of Family Law (1987; reprinted in Meyers, Diana Tietjens, Kenneth Kipnis, and Cornelius F. Murphy, Jr., Kindred Matters: Rethinking the Philosophy of the Family, 1993); Zborowski, Mark, and Elizabeth Herzog. Life Is with People: The Culture of the Shtetl (1952).
[Image: Justine Wise Polier. A fighter throughout her life for the rights of children and minorities, Justine Wise Polier was a judge of the New York City Family Court for thirty-eight years. She attributed her main goals of fighting injustice and "speaking truly… without fear" to the influence of the Hebrew prophets and her father, Rabbi Stephen Wise. (American Jewish Historical Society)]
Weisbrod, Carol. "Law in the United States." Jewish Women: A Comprehensive Historical Encyclopedia. 1 March 2009. Jewish Women's Archive.
|
When her 10-year-old son, Hamish,* came home with the answers to his French test scrawled up his skinny forearm in red marker, Elizabeth McClellan was shocked. “He was cheating. I couldn’t believe it. I mean, we’re not a cheating family.”
In a teary confrontation, the Toronto mom learned that, in her son Hamish’s mind, failure was not an option. “He was afraid — he was afraid his friends would think he was stupid, he was afraid I’d be disappointed, he was afraid his teacher would be mad.”
Fear, not delinquency, is what motivates many kids who cheat — whether it’s by copying someone else’s test or plagiarizing from the Internet. And unless they’ve been living in a cave, kids know it’s wrong. “Maybe they’re afraid of repercussions [from peers, parents or teachers] or maybe they have high expectations and are afraid of letting themselves down,” says Daliah Chapnik, a child and adolescent psychologist and the director of the Child and Adolescent Psychology Centre in Aurora, Ont.
Cheating, then, in Chapnik’s view, may be a coping strategy, and it’s the job of parents to provide kids with a better one — namely, study skills. “Kids may think that if they understood a lesson in class, they’ll be able to recall it on a test,” says Chapnik, “so they may not think it’s necessary to prepare.” What Hamish McClellan describes as “a remembering problem, not a knowing problem” can most often be traced to a gap between learning information and locking it in.
Natalie Nerima, a teacher at Withrow Public School in Toronto, tries to close that gap by encouraging her students to prepare, sending home practice sheets before every test, and notifying parents so they can support study efforts.
Nerima also makes sure every student in her class knows that re-tests are an option. In her experience, kids who know they have a second chance are less likely to cheat. If re-tests are not permitted, she says, “they basically have one chance and if they mess it up, it’s their fault.”
That’s a lot of pressure. And, says Chapnik, “parents often send the message that it’s not OK to fail. They don’t ask what a child got right, they concentrate on what a child got wrong.” Instead, she counsels parents to put the emphasis on effort, not outcome. “Kids have not yet learned that the amount of time they put into studying directly correlates with how well they do,” she says. Parents can help students make that connection by rewarding prep and practice time. “Then, whatever the mark is, as long as a solid effort has been made, it’s got to be OK.”
If cheating persists despite solid study efforts, or it’s accompanied by behaviour like stealing and vandalism, it might indicate a bigger problem that requires professional help, such as a psychoeducational assessment. Most cheating, though, is not habitual — and is, in fact, a sign that a child cares about his achievement.
Stop cheating, start studying
Equipping kids with study strategies is a solid first step toward avoiding cheating, says Chapnik. Here are some tips:
• Come up with a homework routine.
• Create a space that is free from distraction.
• Suggest kids learn little tricks or ways to help them remember things.
• Offer to quiz kids before a test.
*Names changed by request.
|
My son is 18 months old and still doesn't want to chew and eat solid foods. We have to blend soups and other food into purée for him to accept it. If there is even a slight solid, he will automatically push it out of his mouth with his tongue.
I have had a breakthrough with breakfast where I don't blend as much and leave it a little chunky. He ate it but still pushed chunks of soft banana that were bigger out.
Some of my friends have kids that are 4-5 months younger, who are eating chicken cutlets.
Any ideas or proven methods for getting a child to start eating solids?
Have you tried tasty, but solid, snacks like graham crackers? – Beofett Jan 18 '13 at 19:47
He doesn't take any crackers and just ends up tossing them down to the dog. At best, he will chew and spit out a bit of the edge. – kacalapy Jan 18 '13 at 20:03
Stop comparing with others - it is unproductive. Remember, Einstein couldn't walk until he was 3 years old. – JBRWilkinson Apr 29 at 8:06
4 Answers
My daughter was like this some. Once we switched her to "solid" foods, she basically subsisted on yogurt and milk for about 6 months. She finally started eating cheerios and other crunchy foods as well as peas--because they're soft.
Like Karl Bielefeldt pointed out, learning to control your tongue is a skill that has to be practiced. Having said that, sometimes kids just have texture issues. With these kids, it can take introducing a food many, many, many times before they warm up to it. Some kids will take to a food after a couple of introductions, with my daughter it's more like 25 or 30 before she will reliably eat more than a bite of it.
Here are a few tricks that we tried with my daughter when she was REALLY little and ate hardly anything:
Along with #4 of "really hungry", "bored" is another good time to introduce new foods to toddlers. – Shawn C Apr 8 at 18:33
Have you mentioned this to your pediatrician? If your son is pushing out solid food at his age there might be a developmental issue to combat. You can also check with a community Early Start program; they often deal with eating delays.
If your doctor has given him the all clear, drop the bottle entirely if he's still on it; go to sippy cups (there are sippy cups with a softer top, to help with the transition (or in our case, give our 21-month-old something to shake in his teeth like a dog with a toy)). Try foods that melt quickly, like toddler puffs.
I wouldn't worry a ton about it at this point. Controlling your tongue while eating is a skill, believe it or not, and some children take longer to master it than others. More likely than not, he will figure it out in his own time with practice.
You've already figured out one thing to help, by partially blending the food. Another thing that speech therapists have recommended to us (they also deal with eating because it all involves the same muscles) is to place the food back on his molars, or at least into his cheeks, instead of in front in prime spit-out position. This can work well with long chewy food like Twizzlers where you can hold onto the other end.
Most kids go through a phase of not eating or chewing solid food, but with time they learn. You can take the initiative by eating some of his food yourself, chewing it properly in front of him; seeing this, your kid may be inspired to copy you. If he still keeps to his old habit, it may be better to be a little strict with him by not allowing him any food for a short while. Once the craving for food grows, he may automatically start chewing his food properly.
|
For desktop and server programming in C++, the most commonly used compilers are Microsoft C++ and GCC. What are the most commonly used compilers for embedded programming? Is it most typical to use a version of GCC or are other compilers more popular?
closed as not constructive by talonmies, Ken White, jonsca, Bo Persson, Robert Longson Oct 14 '12 at 11:01
Are you referring to EC++ or standard C++? – M4rc Oct 14 '12 at 6:32
That's a very good question, I hadn't previously known about EC++. I guess the answer is, either, or, how widely used is EC++ anyway? Wikipedia suggests it's kind of dead, is that true? – rwallace Oct 14 '12 at 6:36
it often depends on the target platform. For ARM processors you can find both gcc and ARM-designed compilers for example. – Serge Oct 14 '12 at 6:38
@rwallace I've heard of it used more in a car brain-box type scenario, though with the amount of RAM we have now it has probably become obsolete, since you can afford the little additional overhead of C++. So in short, it's probably not widely enough used to be considered or used now. – M4rc Oct 14 '12 at 6:42
EC++ is a subset of C++; EC++ is only about enforcement of that subset: if you write code to the same subset using a C++ compiler, there is in fact zero additional overhead. The motivation for EC++ was largely to do with the lack of a ratified C++ standard at the time of inception; since that no longer applies, EC++ is now largely irrelevant. The same subset can be enforced with static analysis tools without needing any special compiler or compiler mode. – Clifford Oct 14 '12 at 12:26
3 Answers
(Accepted answer, score 3)
I'm working in embedded development and for most of our targets (ARM cores) we use GCC. For some exotic targets we need to use compilers provided by the manufacturers or 3rd party. The latter tend to provide acceptable support for C, but support for C++ is often poor.
IMHO the EC++ standard is nothing more than a good excuse for compiler manufacturers to keep their efforts low for C++ support; when it comes to template support (which is often a much better choice than RTTI), this 'standard' is just a bad joke.
When programming embedded software on a microcontroller or microprocessor, most of the times you don't get to choose your compiler. You have to deal with the compiler provided by the manufacturer. Sometimes it actually is gcc. Sometimes, very popular chips get supported by several compilers, but don't count on it too much.
There are many different IDEs for embedded programming. Some of them use gcc, others use their own compilers. Many manufacturers distribute their own compilers, like TI and Analog Devices. Also there are some independent IDEs with independent compilers, like IAR, Keil, and Code Sourcery. The most popular compiler is gcc, but it is not the most advanced. I've seen efficiency comparisons for IAR, Keil and gcc; in most cases IAR was the best.
Microchip provides gcc with some architectures. But recently they seem to plan to replace it with their own compiler for the 16-bit range – Marco van de Voort Oct 14 '12 at 12:30
|
News in Science
Egypt home of the first written language?
Thursday, 17 December 1998
A German archeology team believes they may have found evidence that ancient Egyptians created the earliest written text more than 5,000 years ago.
It has been commonly believed that the Sumerians, the ancient people of what is now Iraq, were the first to make written records. They were used to maintain tax records and have been dated at roughly 5,000 years old. But the new Egyptian finds are said to predate these by a few hundred years.
The Egyptian hieroglyphics have been carbon dated at some 5,300 years old. They were discovered in Abydos, an ancient religious center near Luxor in the south of Egypt.
The hieroglyphics, found in the tomb of King Scorpion, are also tax records. They used symbols such as the sun, mountains, plants and animals, which represented sounds in the spoken language.
|
What is the optimal wavelength of light for growing plants (cacti indoors in a northern illinois climate)?
Bart Z. Lederman lederman at star.enet.dec.DISABLE-JUNK-EMAIL.com
Tue Oct 14 14:26:39 EST 1997
In article <61vlk3$6k5$2 at news.NetVision.net.il>, Stan Goodman <sgoodman at netvision.net.il> (Stan Goodman) writes:
>In message <pspopeXSPM-1310971949310001 at> -
>pspopeXSPM at shopchicago.com (Scott) writes:
>I am trying to accelerate the growth of my baby cacti by increasing their
>light exposure with fluorescent lamps. I am wondering what would the
>Plants, for the most part, are uninterested in red and infra-red radiation.
>They do their photosynthesis thing with the light from the sun; in order to do
I used to grow plants under lights, and have read many reference books
(and even own a few), and all of them indicate that the above statement
is wrong. Plants are sensitive to red and near infrared.
The balance between shorter and longer wavelengths (plants are relatively
insensitive to green) will have an effect on how the plant grows. Too
much of one or the other will cause the plant either to be very short
and 'stocky' or very long and 'gangly'. At the moment, however, I
don't remember which excess causes which effect.
The so-called 'plant lights', which give off purple (actually magenta)
light may make your plants look nice, but they tend not to be very
efficient compared with regular fluorescents. However, fluorescents
tend not to have enough red and near infra-red compared with daylight.
Of course, fluorescents have been greatly improved in the past few
decades, but I suggest doing what most of the better books I've read
on the subject recommend. This is to use the most efficient normal
white fluorescents you can get that are on the 'cool' side (used to
be 'cool white'), and add 10% of the wattage in incandescents to
make up the missing red and near-infrared. So if you are using
four 40 watt CW lamps, you would add about 16 watts of incandescent,
or say two 7 or 10 watt lamps.
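As a rough illustration of the 10% rule of thumb above, here is a small Python sketch (the function name and structure are invented for illustration; the numbers simply restate the advice in the post):

def incandescent_supplement(total_fluorescent_watts, fraction=0.10):
    # Roughly 10% of the fluorescent wattage added as incandescent light,
    # to make up the missing red and near-infrared.
    return total_fluorescent_watts * fraction

print(incandescent_supplement(4 * 40))  # 16.0 watts, e.g. two 7 or 10 watt incandescent lamps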
Keep in mind that many plants are very sensitive to the ratio of light
to dark that they get, and if you give them too much light or too little
light in each 24 hour cycle you won't get optimum results.
B. Z. Lederman Personal Opinions Only
|
Chegg Guided Solutions for Introduction to Algorithms 3rd Edition: Chapter 20.1
• Step 1 of 7
Van Emde Boas Tree
• Step 2 of 7
Common tree structures like binary trees, search trees, and many other tree structures support the data structure operations insert, delete, search, and so on in O(lg n) time when the tree is balanced. In the worst case (for example, a plain binary search tree that has degenerated into a chain), any of these operations can take O(n) time.
• Step 3 of 7
This time can be reduced by using a priority-based tree that does not store the actual values in the tree. The tree is built over an array indexed by integer keys of m bits, where m is the size of the key. The span of the data values in the array is 0 to u - 1, where u = 2^m is the universe of possible values in the array. The complexity of any operation on the Van Emde Boas tree is O(lg m), where m is the key size; in terms of the universe size u, the complexity is O(lg lg u).
• Step 4 of 7
Van Emde Boas tree uses direct addressing approach to store the key values in it. For example a Van Emde Boas tree having thekeys 3, 5, 8, 9, 12,13, 14, 15,would be created as below:
The Van Emde Boas Tree is already more efficient and memory-saving than other self-balancing tree structures. However, it can be made more efficient in terms of search speed as well by superimposing a binary tree on top of it. This is achieved by creating a binary tree that holds a 1 in a node if either of its children holds a 1, and 0 otherwise.
The creation starts from the bottom and works upwards. Two consecutive indexes of the bit array are checked to create a node of the binary tree. If either of those values is 1, the value in the node of the binary tree would be 1; 0 otherwise.
• Step 5 of 7
The above tree, then, would look as below:
[Figure: the bit array for keys 3, 5, 8, 9, 12, 13, 14, and 15 with the binary tree superimposed on top; each internal node holds the logical OR of its two children.]
The figure shows the Van Emde Boas tree with a binary tree superimposed on it. It is apparent that the binary tree is constructed from the bottom upwards. The bottom-most nodes of the binary tree hold a 0 if both of the consecutive indexes of the bit array hold a 0, as in the case of indexes 0 and 1. If either of the consecutive indexes, or both of them, holds a 1, then the binary tree node value is 1, as in the case of indexes 2 and 3.
Logically speaking, each tree node stores the logical OR of its children. Observing the above structure, it can be seen that, like other self-balancing tree structures, this structure is not meant for storing duplicate values. Duplicate values can simply be ignored while inserting values into the Van Emde Boas Tree.
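The construction described above can be sketched in a few lines of code. The following Python snippet is only an illustrative sketch (the function names build_bit_array and build_summary, and the list-of-levels representation, are invented here for clarity): it builds the bit array for the example keys and then superimposes the binary tree bottom-up by OR-ing pairs of entries.

# Sketch: bit array plus superimposed binary tree (logical-OR summary).
# Universe size u is assumed to be a power of two; keys lie in 0 .. u-1.
def build_bit_array(keys, u):
    bits = [0] * u
    for k in keys:
        bits[k] = 1               # duplicates are simply ignored
    return bits

def build_summary(bits):
    # Returns the levels of the superimposed tree, bottom-up; each internal
    # node holds the logical OR of its two children.
    levels = [bits]
    level = bits
    while len(level) > 1:
        level = [level[i] | level[i + 1] for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

keys = [3, 5, 8, 9, 12, 13, 14, 15]
levels = build_summary(build_bit_array(keys, 16))
print(levels[0])   # [0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1]
print(levels[-1])  # [1] -- the root is 1 because at least one key is present

Membership of a key k is then just a check of bits[k], and the summary levels are what let operations such as successor skip over empty halves of the array.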
• Step 6 of 7
Change the Van Emde Boas Tree to support duplicate keys: The Van Emde Boas tree does not support duplicate keys. The reason is the storage structure of the tree. The tree stores a bit array that can contain only a 1 or a 0. The keys are not physically stored in the tree; rather, the array indexes are taken to be the key values. Wherever a key is present, the related index is set to 1. If the key is not present, the index is left 0.
In order to make the tree support duplicate keys, a change in the storage structure of the tree is required, because a binary value can only indicate the presence or absence of data; it cannot give any clue about duplicate data. The changes to the storage structure of the bit array can be performed as specified ahead. While performing the changes, care should be taken that the change in the bit array does not break the upper levels of the structure, such as the superimposed binary tree.
To allow duplication, a minor modification at the leaves is enough. In the common tree, one bit of data is stored to show whether or not a key is held at the given position. For duplication, an integer count can be maintained instead. Initially all the indexes are set to 0; then, whenever a key is encountered, the related index is incremented by 1. This can be shown as below:
Now, consider that the values to be stored in the tree are 3, 5, 8, 9, 3, 12, 13, 14, 12, 5, and 15. The tree that allows duplication would be as below:
[Figure: the count array with the superimposed binary tree; the leaves hold occurrence counts rather than single presence bits.]
Here, it can be noted that the keys 3, 5, and 12 are repeated. So the leaves no longer represent the presence of data in binary form; rather, the stored values are integers, and each value shows how many times the key occurs in the available key data.
• Step 7 of 7
Now, the change should not have any impact on the rest of the data structure. This becomes more important when the structure is augmented with a superimposed binary tree. The previous rule says that a node value should be 1 if either of its children has a 1 as its value, and 0 if both of them are 0.
Here, the rule can be modified as below: If there is a 0 in both of the node’s children then the node value should be 0, 1 otherwise. A nonzero value can simply be considered to be a positive integer in the case of the modified tree structure.
So, the duplication can be allowed by just representing the frequency of occurrence of the key value instead of showing whether the data is there or not. The nodes of superimposed binary tree would have a 1 in them if any of the children has a non-zero value, 0, otherwise.
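A minimal sketch of this modification, under the same assumptions and invented names as the earlier snippet: the leaves hold occurrence counts instead of bits, and a summary node is 1 whenever either child is non-zero.

def build_count_array(keys, u):
    counts = [0] * u
    for k in keys:
        counts[k] += 1            # a duplicate increments the count instead of being dropped
    return counts

def summary_from_counts(counts):
    # Bottom level of the superimposed tree: 1 if the count is non-zero, 0 otherwise;
    # higher levels are built by OR-ing pairs, exactly as before.
    level = [1 if c > 0 else 0 for c in counts]
    levels = [level]
    while len(level) > 1:
        level = [level[i] | level[i + 1] for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

keys = [3, 5, 8, 9, 3, 12, 13, 14, 12, 5, 15]
print(build_count_array(keys, 16))  # [0, 0, 0, 2, 0, 2, 0, 0, 1, 1, 0, 0, 2, 1, 1, 1]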
|
Telescope eyes Milky Way construction
RCW 49 highlights the nebula's older stars (blue stars in center pocket), its gas filaments (green) and dusty tendrils (pink).
(CNN) -- NASA released new Spitzer Space Telescope images and data Thursday that show regions of intense star and planet formation in our Milky Way galaxy. Astronomers said the new findings support the idea that our solar system is likely one of many.
"It may very well be that solar systems like our own are probably not rare in the galaxy," said Alan Boss, an astronomer with the Carnegie Institution of Washington. "They may actually be a very common case."
The Spitzer Space Telescope is an orbiting observatory launched in August 2003. It surveys deep space in the infrared spectrum, meaning that its instruments pick up heat rather than light. That enables the telescope to detect objects that normally cannot be seen because they are shrouded in gas or dust, like newborn stars.
Astronomers say that as a star forms, clouds of dust and debris swirl around it, forming a disk and feeding matter onto the star as it ignites. In some cases, leftover material in the disk eventually condenses into planets. But, until recently, scientists have not had the tools to actually seek out such protostars and protoplanetary disks to watch the process happen and check their theories against observations.
"These new results on circumstellar disks show how Spitzer builds upon the results from the Infrared Astronomical Satellite (IRAS) and Infrared Space Observatory (ISO) missions of the 1980s and '90s," said Deborah Padgett of the California Institute of Technology.
"Fortunately for us, the Spitzer space telescope is a thousand times more sensitive than the IRAS satellite -- it can see disks around more-distant stars, it can see disks around fainter stars and it can see disks that are intrinsically fainter because they have less material around them."
The new Spitzer observations focus on two areas of the galaxy, both relatively near our solar system. One region, called RCW 49, is an extremely large "stellar nursery" nearly 14,000 light years away in the constellation Centaurus. Astronomers had known for some time that this was a region of fairly intense star formation, but new Spitzer imagery shows more than 300 stars with protoplanetary disks around them.
"We're incredibly excited about this," said Ed Churchwell of the University of Wisconsin at Madison. "No one, on my team at least, and no astronomers that I know, could ever in their wildest dreams have expected to find a single region like this with so many stars currently in the process of being built."
The other area of observation, 152 Taurus, is just 420 light years away. There, astronomers were able to observe extremely young stars and their protoplanetary disks in unprecedented detail. One, CoKu Tau 4, appears to have a gap in its disk of debris, suggesting a planet may have formed there already.
"The object is only a million years old," said Dan Watson of the University of Rochester. "If the planet formed within the million years that it took between star formation and when we're seeing it now, that probably makes it the youngest planet we have ever seen."
Other instruments aboard Spitzer that detect individual chemicals show the region to be flush with water, methanol and carbon dioxide ice, all of which are ingredients common in our solar system.
"We're hearing that protostars are as common as the cicadas on the trees here on the East Coast," said NASA scientist Anne Kinney, "that infant stars have a lot of ice in them that could provide for future oceans in the future systems around them, and that toddlers, at least one toddler star, has a hole carved out of it, very likely by a planet."
Astronomers announced the discovery of the first extrasolar planet in 1995. Since then, researchers have found more than 120 extrasolar planets, most by using the "wobble method" -- looking for a wobbling motion by stars that would indicate they are influenced by the gravitational pull of orbiting planets.
NASA plans to launch a host of missions over the next decade, ultimately aimed at finding nearby, habitable planets. Those include the Kepler Mission, the Space Interferometry Mission and the Terrestrial Planet Finder.
|
Definitions for Warrior (ˈwɔr i ər, ˈwɔr yər, ˈwɒr i ər, ˈwɒr yər)
This page provides all possible meanings and translations of the word Warrior
Random House Webster's College Dictionary
war•ri•or (ˈwɔr i ər, ˈwɔr yər, ˈwɒr i ər, ˈwɒr yər) (n.)
1. a person engaged or experienced in warfare; soldier.
Category: Military
2. a person who has shown great vigor, courage, or aggressiveness, as in politics.
Origin of warrior:
1250–1300; ME werreieor < ONF, =werrei(er) to war1+-eor -or2
Princeton's WordNet
1. warrior(noun)
someone engaged in or experienced in warfare
Kernerman English Learner's Dictionary
1. warrior (noun) ˈwɔr i ər, ˈwɔr yər, ˈwɒr i ər, ˈwɒr yər
a soldier
1. warrior(Noun)
A person who is actively engaged in battle, conflict or warfare; a soldier or combatant.
2. warrior(Noun)
A person who is aggressively, courageously, or energetically involved in an activity, such as athletics.
Webster Dictionary
1. Warrior(noun)
1. Warrior
A warrior is a person skilled in combat or warfare, especially within the context of a tribal or clan-based society that recognizes a separate warrior class.
The Roycroft Dictionary
1. warrior
1. A soldier de luxe. 2. A successful, patriotic thug who has been dead fifty years or more. 3. A fearless person who gains renown by the number of alcoholic drinks he has taken in a day and by the variety and virulence of the venereal diseases he has contracted. 4. A myth, a fable, a lie.
British National Corpus
1. Nouns Frequency
Rank popularity for the word 'Warrior' in Nouns Frequency: #2823
Translations for Warrior
Kernerman English Multilingual Dictionary
a soldier or skilled fighting man, especially in primitive societies
The chief of the tribe called his warriors together; (also adjective) a warrior prince.
|
Julia Grant
Julia Grant (1826-1902) was an American first lady (1869-77) and the wife of the American Civil War general and 18th president of the United States, Ulysses S. Grant. A devoted wife, Julia Grant often joined her husband at his military postings, including several trips to the front during the Civil War. For Julia, unlike many of her predecessors, her husband’s election to the presidency was a happy occasion, and she was a popular and well-respected hostess. The Grants’ fame reached far beyond the United States, and the couple travelled extensively after leaving the White House. Julia Grant was the first first lady to pen her memoirs, although they remained unpublished until nearly 75 years after her death.
Here are five things you may not know about the life of this popular first lady:
She enjoyed an idyllic childhood.
Julia Boggs Dent was the fifth of seven children born to “Colonel” Frederick and Ellen Wrenshall Dent. Raised on the White Haven plantation approximately 12 miles from St. Louis, Missouri, she enjoyed outdoor activities such as fishing and riding horses. The future first lady was well educated, attending the Mauro Boarding School for seven years and taking a liking to literature. She painted an idyllic picture of her upbringing in her memoirs, depicting the plantation as a place where even the slaves were fully content.
Her engagement to Ulysses Grant was kept a secret.
A classmate of Julia’s older brother Fred at West Point, Ulysses Grant met his bride-to-be at White Haven early in 1844 and proposed a few months later. However, the two kept the engagement hidden from Colonel Dent, who was unhappy with Grant’s meager pay as a soldier. Grant finally asked Dent for permission to marry Julia in 1845 and received approval, but the outbreak of the Mexican-American War delayed the wedding until August 1848.
The Grants endured financial troubles prior to the Civil War.
She was known for a distinct eye condition.
Her husband’s final act paved the path for late-life comfort.
Despite all the success and fame they enjoyed as the first family in the 1870s, the Grants again faced monetary difficulties after failed business investments. Seeking to provide a cushion for his wife, Grant worked on his memoirs despite suffering from throat cancer, completing them a mere week before his death in 1885. Published by Mark Twain, the erstwhile president’s book was a huge hit, and Julia lived out her final years in comfort in Washington, D.C., surrounded by friends and family.
|
Earliest known dinosaur discovered in museum storage
When things like dinosaur bones come out of the ground, after they’re neatly packed and labeled, they often go into storage either at a museum or university to await study. And sometimes, they sit there in storage for a really long time before anyone knows exactly what was found. Case in point is a dinosaur skeleton 10-15 million years older than anything on record that had been chilling in London’s Natural History Museum since the 1930s.
If it turns out that Nyasasaurus parringtoni is not a dinosaur, its identification will still be a pretty major event in paleontology, marking the discovery of the closest relative to the dinosaurs. Early research on the fossils suggests that it was more closely related to birds than crocodilians, but it’s too early to say for sure at this point that Nyasasaurus parringtoni is definitely the oldest dinosaur on record.
Not that there’s not plenty of good evidence suggesting just that, such as the rate at which the bones and blood vessels of the creature took shape. According to co-author Sarah Werning at UC Berkeley:
The bone tissue of Nyasasaurus is exactly what we would expect for an animal at this position on the dinosaur family tree. It’s a very good example of a transitional fossil; the bone tissue shows that Nyasasaurus grew about as fast as other primitive dinosaurs, but not as fast as later ones. Still, researchers are parsing their words carefully so as not to overstate. They do point out, however, that the find has implications whether Nyasasaurus is eventually recognized as a dinosaur or just a close relative. According to lead author Sterling Nesbitt of the University of Washington:
It establishes that dinosaurs likely evolved earlier than previously expected and refutes the idea that dinosaur diversity burst onto the scene in the Late Triassic, a burst of diversification unseen in any other groups at that time. The name of the early dinosaur — or close cousin, if it breaks that way — pays homage to the researchers who discovered and initially studied the bones. The late Alan Charig — credited posthumously as an author on the study — studied and identified the bones, but never formally published his research on the creature he dubbed Nyasasaurus.
|
Work the Shell - More Special Variables
Use bash's more powerful variable substitution forms to simplify your scripts.
I realize this might throw a spanner into the editorial works here at Linux Journal, but after a two-month sidetrack on how to analyze letter usage in English to give you an edge in Hangman (yeah, I can't believe I write about this stuff either), it's time to get back to our tour of basic shell variable referencing capabilities.
In previous columns, we talked about ${var:-alt value}, ${var:=alt value}, ${var:?no value} and even ${var:start:length} as a way to extract specific ranges of characters from a variable.
This month, I want to look at what are perhaps some of the more arcane variable references you can do—calls that are definitely helpful if you're deep in the zone with your scripting. I imagine they won't be things you need for those quick five-line scripts, but when your little project has expanded to a dozen screens and you have seven functions and a dozen arrays, well, these will be of great value to you.
Expanding and Matching
In a previous column, I showed how to do substring expansion with shell variables in the form of ${var:start:length}, but it's also useful to know the length of a variable's value. This can be done with ${#var}, like this:
$ test="the rain in Spain"
$ echo ${#test}
17
One situation I've encountered in scripts is the need to set an arbitrary number of variables in the form value1, value2, value3 and so on. Later, I need to determine the names of the ones that I've set. My lazy solution is typically another variable, valuecount, which counts the number of variables I've set, but, of course, that doesn't directly give me the names. A smarter way to do this is with the ${!pattern*} notation, as shown here:
$ echo ${!t*}
test
$ thimble="full"
$ tart="pop"
$ echo ${!t*}
tart test thimble
As you can see, it lets you get a list of defined variables that match the specified pattern. I'm using t* in the example, but it just as easily could be value* to match the situation outlined earlier.
Pattern Substitution
Here's a cool thing you can do with the bash shell that I'm betting you didn't realize: pattern substitution. When I have a situation where this is required, I almost always use the clunky and CPU-expensive form of:
var=$(echo $var | sed 's/old/new/')
which actually can be neatly accomplished with the shell itself by using the form ${var/old/new}. I kid you not! Check out this example:
$ test="The Rain in Spain"
$ echo ${test/ain/ixn}
The Rixn in Spain
If you're like me, your fingers are itching to add a /g suffix to the substitution. It turns out that's done a bit differently within a shell variable: you need to have the pattern start with a /, which looks a bit weird, but it does work:
$ echo ${test//ain/ixn}
The Rixn in Spixn
The general case here is ${var//pat/global subst}. There's more you can do with this notation too—notably, use the equivalent of the ^ and $ special characters you might use in sed regular expressions to root the pattern to the beginning or end of the variable's value:
$ echo ${test/#ain/ixn}
The Rain in Spain
$ echo ${test/%ain/ixn}
The Rain in Spixn
In the first situation, the pattern didn't match the first few letters of the variable value (the pattern would need to have been “The” rather than “ain”), so nothing changed. In the second situation, however, it did match the last few characters, so the substitution took place.
To be fair, using sed does give you quite a bit more power and capability, but if you're just doing something simple like removing an extension and appending a PID to a variable to make a quick temp file, you can indeed just use shell pattern replacement:
$ test="The Rain in Spain.txt"
$ echo ${test/%.*/}.$$
The Rain in Spain.10126
Personally, I think this is very cool!
Command Substitutions
We've explored just about everything you can do with variables other than delving into arrays, which we'll do next month, so I thought I'd take a bit of space to show you a few slick command substitution tricks. First off, us old-timers are used to using backticks to have a command embedded within another, as in the following:
echo the date is `date`
This is pretty commonly used, but, in fact, a better and certainly more readable notational convention is to use $() instead, as I showed earlier. This is functionally identical:
echo the date is $(date)
Using this notation gives you some interesting capabilities. For example, instead of $(cat file), you simply can use $(< file) to make the contents of the file appear.
As is always the case with the shell, when and where fields are parsed is important too. Check out the following:
$ echo the date is $(date)
the date is Wed Feb 4 08:08:35 MST 2009
$ echo the date is "$(date)"
the date is Wed Feb  4 08:08:43 MST 2009
By adding the double quotes around the second invocation of $(date), you can see that the returning values weren't parsed by the shell and normalized: notice the two spaces between Feb and 4 in the second output compared to one space in the first output.
I hope I don't need to tell you what happens if you use single quotes instead of double quotes—oh, what the heck:
$ echo the date is '$(date)'
the date is $(date)
No surprise there—single quotes disable shell expansion, just as they do in this case:
$ echo The '$HOSTNAME' is $HOSTNAME
The $HOSTNAME is soyvah33
This leads to the classic question of what if you actually do want those quotes to be part of the output? It's a bit convoluted, but this works:
$ echo The '$HOSTNAME' is \'$HOSTNAME\'
The $HOSTNAME is 'soyvah33'
Let's wrap things up here, and next month, we'll dig into the oft-confusing world of shell script arrays.
Dave Taylor has been involved with UNIX since he first logged in to the ARPAnet in 1980. That means, yes, he's coming up to the 30-year mark now. You can find him just about everywhere on-line, but start here:
|
Brain Behav Immun. Author manuscript; available in PMC Sep 19, 2007.
PMCID: PMC1986672
Infection-induced viscerosensory signals from the gut enhance anxiety: implications for psychoneuroimmunology
Infection and inflammation lead to changes in mood and cognition. Although the “classic” sickness behavior syndrome, involving fatigue, social withdrawal, and loss of appetites are most familiar, other emotional responses accompany immune activation, including anxiety. Recent studies have shown that gastrointestinal bacterial infections lead to enhanced anxiety-like behavior in mice. The bacteria-induced signal is most likely carried by vagal sensory neurons, and occurs early on (within six hours) during the infection. These signals induce evidence of activation in brain regions that integrate viscerosensory information with mood, and potentiate activation in brain regions established as key players in fear and anxiety. The findings underline the importance of viscerosensory signals arising from the gastrointestinal tract in modulation of behaviors appropriate for coping with threats, and suggest that these signals may contribute to affective symptoms associated with gastrointestinal disorders.
Keywords: gut-brain interaction, Campylobacter jejuni, Citrobacter rodentium, vagus, interoception
1. Introduction
In the late 1800s William James, a psychologist at Harvard University, published a provocative theory of emotion: that the perception of emotions follows from perception of our physical responses to cognitive apprehension of external threats (or more pleasant stimuli). That is: the experience of emotion is integrated with somato- and viscerosensory signals that result from cognitively driven motor, neuroendocrine, or autonomic responses, in a kind of brain-body-brain “loop” communication sequence. He further proposed that, when emotions were finally mapped in the brain, rather than existing as separate centers (e.g. anxiety center, happiness center) the neural substrates of emotion would be integrated with somato- and viscerosensory representation (James, 1890). Although James is usually remembered for the assertion that bodily sequelae of emotion-inducing stimuli “are” the emotions, this corollary prediction regarding brain substrates of emotions is proving particularly prescient. Findings from neuroanatomical, neuropsychological and functional neuroimaging studies support James' contention that the brain representations of emotions would be co-represented with information derived from internal tissues (Craig, 2003; Dalgleish, 2004; Nauta, 1971; Price, 1999; Zagon, 2001). Such regions include the medial and orbital prefrontal and anterior cingulate cortex, insula, hypothalamus, amygdala, and bed nucleus of the stria terminalis. These findings underline the important contribution of viscerosensory signals related, for instance, to hunger, satiety, or pain, to emotional states.
The relation of emotions and internal sensory information carries important implications for psychoneuroimmunology. It is now well established that the immune system influences mood and cognition (Dantzer, 2004; Maier, 2003). Furthermore, via autonomic and neuroendocrine outflow, psychological states, including depression and stress, modulate immune system functioning in turn. These interactions exert significant influence on the course of chronic illness, and thus make compelling targets for therapeutic intervention. Understanding the biological and neurological substrates (e.g. brain regions and neurotransmitter systems involved) mediating interactions of the immune and nervous systems, however, is key to such an aim.
2. Infection-induced anxiety
We first became involved with these issues when one of us (ML) observed that mice treated per-orally with live bacteria (Campylobacter jejuni) seemed more anxious than saline-treated control mice (Lyte et al. 1998). This observation was confirmed by comparing the behavior of C. jejuni-treated mice and control mice on the elevated plus maze. However, there was no evidence that the bacteria had reached the systemic circulation (and thus the brain), or that the infection induced circulating pro-inflammatory cytokines. For that matter, the mice showed no evidence of “classic” sickness behavior, which includes suppressed ingestive behavior, social withdrawal, hunched posture, and psychomotor retardation. These observations raised questions regarding what mechanism, if not via the circulation, is used to signal the brain, and what, then, is going on in the brain during this infection?
If bacteria-related signals are not circulating in the blood, the most likely explanation is that the signals are carried instead by viscerosensory nerves that innervate the gut. Viscerosensory nerves in the gut fall into two categories: intrinsic and extrinsic. Intrinsic (or enteric) resident nerve cells serve to mediate secretory and motor functions associated with digestion. Some of these intrinsic nerves innervate the lymphoid tissues below the epithelium, and may be sensitive to bacteria or immune-generated signals, including cytokines. These intrinsic nerves do not innervate the central nervous system, however, and cannot directly signal the brain. Communication between the intrinsic neurons of the gut and the brain and spinal cord is carried out by two populations of extrinsic nerves. Vagal (parasympathetic sensory) nerves project to the caudal brainstem, and spinal visceral (sympathetic sensory) nerves project to the thoraco-lumbar spinal cord. These nerves innervate the intrinsic neurons, but also are in contact with immune cells and lymphoid tissue, and the subepithelium, a site of host/pathogen interface. The sensitivity of spinal visceral sensory nerves to noxious distension of the colon is inhibited by a diet including bacteria or bacterial products (Kamiya et al., 2006), suggesting that gut bacteria influence these nerves by either a direct or perhaps indirect (via immune constituents) action within the gut. Vagal sensory neurons, in particular, evince a close relationship with immune cells in the submucosa (Williams et al. 1997) and project nerve endings into the mucosa (although not into the lumen). Thus, in our experiments, bacteria-related signals that influence brain functions most likely were carried by either the vagus or the spinal visceral sensory nerve fibers.
Based on previous studies implicating the vagus nerve in immune-gut-brain communication (Castex et al., 1995; Goehler et al., 2000, for review), we questioned whether vagal sensory neurons might be the link between infection in the gut and brain responses that support anxiety. To address the issue we used immunohistochemical localization of the immediate-early gene product c-Fos. This protein is induced in cells, including neurons, shortly (~ 45-60 minutes) after cells become activated. Thus, induction of this protein can be used as a marker for activation. Although not all neurons express c-Fos when activated (but many do) and expression of c-Fos protein in a neuron is not necessarily related to the experimental manipulations, c-Fos expression patterns have been widely used to provide information regarding the functions of neuronal ensembles. For these studies, mice were inoculated with Campylobacter jejuni (Goehler et al., 2005) or Citrobacter rodentium (Lyte et al. 2006), and c-Fos expression in the vagal sensory neurons was assessed beginning at four hours after inoculation. C-Fos protein began to appear in the vagal sensory neurons at around five hours post-inoculation, and peaked around seven to eight hours. Given that intestinal transit time to the cecum (the primary site of colonization in Campylobacter jejuni and Citrobacter rodentium) in a mouse is estimated to be around three hours, and c-Fos protein induction takes about an hour, these findings imply that within one hour of arriving at the cecum, the bacteria were able to generate signals in vagal sensory neurons, and thus signal the brain. Thus, vagal sensory neurons seem to be highly sensitive to the presence of potentially dangerous bacteria in the GI tract, and likely form the link between bacteria and behavior.
A critical piece of the puzzle of how infection in the gut influences the brain and behavior concerns the signals, generated as a consequence of bacterial infection, that serve to signal sensory neurons. That is: how do vagal sensory neurons know? This is still an open question, but viscerosensory neurons innervating the gut express receptors for a variety of immune/inflammatory mediators, including histamine, prostaglandins, ATP, adenosine, serotonin, and cytokines including interleukin 1 (IL-1) and tumor necrosis factor (Kirkup et al., 2001). Additionally, or alternatively, vagal sensory neurons may respond directly to pathogens or pathogen products. For instance, vagal sensory ganglia have been reported to express Toll-like receptors (proteins that bind to pathogen products or cellular components), including Toll-like receptor 4 (TLR4), which signals bacterial lipopolysaccharides (Hosoi et al., 2005). Whether vagal sensory neurons express TLR4 on their projections to the gut, however, is unknown. Further, Spiller (2002) has pointed out that cholera toxin (CT), a member of the A-B family of heat-labile enterotoxins, binds to GM1 gangliosides, which are expressed on both nerve fibers and enterocytes. CT subunits appear to enter vasoactive intestinal peptide-expressing neurons specifically, to interact with adenyl cyclase as a basis for the toxin's diarrhea-inducing effects. However, other members of this toxin family are expressed by bacteria known to induce signaling to the brain, notably Escherichia coli and Campylobacter jejuni. These toxins could well interact with GM1 gangliosides on vagal sensory neurons. There are clearly multiple possible mechanisms by which pathogen-related signals, or local signals generated in response to pathogens or tissue damage, could interact with vagal sensory neurons and thus signal the brain.
3. Brain substrates for infection-induced anxiety-like behavior
As James predicted and as recent studies now support, brain regions involved in emotions are associated with viscerosensory and autonomic (visceral motor) processing (Craig, 2003; Nauta, 1971; Price, 1999; Zagon, 2001). Previous studies using immune challenges with bacterial products (LPS) or cytokines, such as IL-1, that usually induce symptoms of behavioral depression have indicated that, indeed, these immune stimuli also activate brain regions involved in viscerosensory processing (Goehler et al., 2000, for review). What about pathogen-related stimuli, such as per-orally administered bacteria, that do influence behavior, but do not necessarily induce behavioral depression? To address this issue we evaluated brain response patterns to Campylobacter infection, initially while mice rested in their home cages (Gaykema et al., 2004). These studies indicated that treatment with these bacteria is strongly associated with activation of the same viscerosensory processing regions previously reported for other immune-related stimuli (cytokines and bacterial products) presented exogenously. In particular, activation was seen in the nucleus of the solitary tract (nTS), the site of termination of vagal sensory nerve fibers, in the ventrolateral medulla, which (with the nTS) propagates viscerosensory information to the forebrain, and in the lateral parabrachial nucleus, the paraventricular hypothalamus, the central nucleus of the amygdala, and the bed nucleus of the stria terminalis. The latter three regions are well established as regions that integrate affective states (stress, fear, anxiety) with autonomic and neuroendocrine responses. In addition, cortical areas proposed by both Damasio (1994) and Craig (2003), i.e. the medial prefrontal cortex and insula, also showed activated neurons in the early phases of live bacterial infection of the gut. Although the models of both Damasio and Craig are specific to the primate/human brain, the findings that gut infection induces anxiety-like behavior and may involve brain regions that may be homologous in the mouse suggest that the integration of viscerosensory drive with affective experience/behavior is a fundamental organizing feature of the mammalian brain.
Whereas mapping activation of brain regions to Campylobacter in resting animals provided us with confirmation that the response to live infections generally matched that of exogenous cytokines etc., we still do not know how infection-induced viscerosensory drive enhances (see below) anxiety-like behavior. To approach this issue we have challenged mice with Campylobacter and Citrobacter, and assessed their behavior using the open-field/hole board, another apparatus useful for assessing a rodent's innate apprehension of open spaces (Goehler et al., in preparation; Lyte et al., 2006). This apparatus also provides a set of nine holes the mice can investigate, setting up a potential conflict between exploring/foraging in the open and the safety of remaining near the walls. We assessed brain activation patterns in infected mice following exposure to the open-field/hole board, and compared the activation patterns with those of saline-treated mice and of infected animals left to rest in their home cages. These experiments were carried out between seven and nine hours after inoculation, during the time range when vagal sensory neurons show evidence of activation (c-Fos protein induction). Whereas all animals exposed to the open-field/hole board showed activation in brain regions previously implicated in behavior and defensive responses, including the septal area, several hypothalamic regions, amygdala, bed nucleus of the stria terminalis, and periaqueductal grey, the responses in the paraventricular hypothalamus, central and basolateral amygdala, and bed nucleus of the stria terminalis were potentiated in the infected animals, which also exhibited enhanced anxiety-like behavior. Notably, the paraventricular hypothalamus is a key player in the induction of neuroendocrine and autonomic responses to stress, whereas the basolateral and central nuclei of the amygdala, with the related bed nucleus of the stria terminalis, are strongly implicated in fear and anxiety (Walker et al., 2003). In this way, infection-related viscerosensory signals enhance anxiety in novel environments by exerting additional drive on key brain regions regulating responses to environmental challenges.
4. What may be the adaptive value of enhanced anxiety during gut infection?
An important issue to keep in mind regarding experiments investigating emotional states in animals, especially anxiety, is that the characteristics of the testing procedure influence the findings. In the case of the open field/hole board, the apparatus represents a possibly dangerous novel environment. It can be thought of as analogous to what an animal might encounter when foraging in an unknown area for food, or in any other exploratory behavior in the wild. The open character of the environment potentially allows for the presence of predators (e.g. hawks). Thus, some innate anxiety is normal when an animal is placed in this apparatus. The critical difference we observed when comparing infected animals with controls was that the infected animals showed enhanced anxiety-like behavior. In particular, the infected animals almost completely avoided the center part of the field, and preferred to remain in the corners (Lyte et al., 2006). They also spent more of their time engaged in “risk assessment” behavior. Overall, the behavior of the infected animals can be described as markedly more cautious.
Why should infected animals be more cautious in an open environment? An important aspect of the findings from these studies is that both vagal sensory neuron signaling and anxiety-like behavior occur rather rapidly following per-oral inoculation. This suggests that gut-brain signaling via the vagus operates as something like an early warning signal. The consequences of food poisoning can be severe, and a prompt response from the brain, to initiate supportive neuroendocrine, physiological, and behavioral responses, is essential for full-fledged host defense. However, behavioral responses to infection often include symptoms such as somnolence and psychomotor retardation that, while supportive of recuperation, can render an animal vulnerable to predators, particularly if the animal is in an exposed situation, such as an open field. Thus, cautiousness and avoidance of open, exposed spaces likely confer an advantage on recently infected animals.
Anxiety is usually conceptualized as a response to a possible threat. Experimentally, such threats typically take the form of exteroceptive threats: those arising from the environment, notably potential predators in the form of predator scents, or open spaces where predators may lurk. The findings from our studies, and those of others (Basso et al., 2003; Castex et al., 1995; Lacosta et al., 1999; Rossi-George et al., 2005) indicate, however, that interoceptive threats, such as infection, allergy, or systemic immune challenge, can also engage behaviors typical of threat avoidance, and increase drive on brain regions that process threat-related information. These areas also contribute to neurocircuitry that supports stress responses. Because stress may potentiate peripheral inflammatory conditions, this increased drive might be predicted to exacerbate ongoing inflammation in the gastrointestinal tract or other peripheral tissues.
5. Bidirectional interactions of bacteria/immune-brain-gut axis: “bottom-up” and “top-down” influences on gastrointestinal health and mood
One of the core principles of psychoneuroimmunology is the bidirectional influence of the immune and nervous systems on mood and health. That is, affective symptoms associated with illness or inflammation follow not only from the stress of dealing with a medical condition or from personality factors (“top-down”, brain-mediated), but also from the effects of cytokines, or other products of inflammation, induced by infection or inflammation (“bottom-up”, immune- and viscerosensory-mediated). In the case of infection and inflammation of the gastrointestinal tract, this principle is particularly relevant for the anxiety and depressive symptoms so often concomitant with chronic inflammation in Crohn's disease and inflammatory bowel disease, as well as in functional gastrointestinal disorders such as irritable bowel syndrome (IBS).
IBS is a particularly good example of a psychoneuroimmunologic disorder. The condition is diagnosed in people with abdominal pain associated with changes in bowel frequency, in the absence of abnormalities indicating a physical cause such as obstruction (Lydiard, 2001). Thus it is a functional, as opposed to structural, gastrointestinal disorder. It is more common among women, is exacerbated by stress, and is associated with activation of mucosal immune cells (Chadwick et al., 2002). In as many as 30% of patients, symptoms follow a bout of food poisoning, often with Campylobacter jejuni (Spiller, 2002). This condition is termed “post-infective IBS”, and symptoms can persist for a year or more. IBS is associated with a high prevalence of psychiatric symptoms, including depression, panic disorder, generalized anxiety, and post-traumatic stress disorder (Lydiard, 2001). Interestingly, it has been noted that anxiety-related symptoms may be more prominent early in the course of IBS, with depression more common in patients with chronic symptoms (Lydiard, 2001), and that agoraphobia (fear of open spaces, or “fear of the marketplace”) is more common in patients with gastrointestinal symptoms (Mayer et al., 2001). These findings highlight the association of affective symptoms, particularly anxiety, with gastrointestinal disorders. Our findings, and those of others (Basso et al. 2003; Castex, et al. 1995; Lacosta et al. 1999; Rossi-George et al. 2005) suggest that the consequences of inflammation or infection in the bowel may contribute to these symptoms.
From recent findings a picture emerges in which viscerosensory nerves make multifunctional contributions to stress-related and affective responses to gastrointestinal dysfunction. Visceral hypersensitivity and enhanced pain states accompany infection and immune activation, and are most likely mediated by spinal viscerosensory nerves (Kamiya et al., 2006; Liu et al., 2007) that drive ascending nociceptive pathways to the forebrain, via the pontine lateral parabrachial nucleus (Traub & Murphy, 2002). In this way, activation of spinal viscerosensory fibers likely influences affective states by potentiating pain and discomfort. In contrast, the vagus nerve provides immediate signals relevant to host defense (Liu et al., 2007) and mood modulation (Lyte et al., 2006; Schachter, 2004). Stimulation of the vagus nerve in humans can ameliorate depressive symptoms in some treatment-refractory patients, but can increase or induce symptoms of anxiety and depression in others. Interestingly, disruption of the vagus nerve enhances visceral hypersensitivity following immune challenge (Coelho et al., 2000), implying that the vagus plays an as yet uncharacterized protective role during inflammation as well. One mechanism for protective effects may be vagal efferent suppression of pro-inflammatory cytokine production (Tracey, 2002). Taken together, these findings suggest that the outcome and severity of gastrointestinal dysfunction and mood symptoms may be modulated by the interaction of vagal and spinal viscerosensory nerve functions.
The observation that infection in the gut can potentiate anxiety-like behavior has implications for the phenomenology of affective symptoms associated with gastrointestinal disorders. Factors previously associated with the development or risk of such symptoms (e.g. pre-existing psychiatric disorder, somatizing personality, stress and pain associated with the illness) have generally been conceptualized from the top-down perspective, as have many new therapeutic approaches (Mayer et al., 2006). However, if viscerosensory drive via the vagus contributes to affective symptoms in these disorders, interventions addressing underlying infection or inflammation, or other sources of stimuli activating the vagus, could also ameliorate symptoms. In addition, understanding that some of the negative affective experiences associated with gastrointestinal disorders may not be under the patient's voluntary control may relieve some of the stress of these disorders.
6. Conclusion
It seems reasonable to assert that linking emotions to homeostatic mechanisms conveys powerful adaptive advantages, as this is a salient way for conditions within the body to influence behavior. In the case of infection-induced anxiety, increasing cautiousness in an animal that may soon become ill could save its life. In a similar way, the development of a conditioned taste aversion (a learned negative association of recently eaten food with illness) following the survival of food poisoning encourages an animal to avoid potentially tainted food in the future. Our recent findings implicate the vagus as a critical link between internal threats and brain regions that cope with such threats, supporting Andrews' (1992) description of it as the “great wandering protector”.
Figure 1
Model diagram of gut-brain interactions underlying bacterial infection-induced enhanced anxiety and other mood and affective changes. Bacteria interact, in some unknown way, with viscerosensory nerves, likely associated with the vagus nerve, that innervate ...
This work was supported by NIH grants MH64648, MH50431, & MH68834.
• Andrews PLR. A Protective Role For Vagal Afferents: A Hypothesis. In: Andrews PLR, Lawes INC, editors. Neuroanatomy and Physiology of Abdominal Vagal Afferents. CRC Press; London: 1992. pp. 279–302.
• Basso AS, Costa Pinto FA, Russo M, Giorgetti Britto LR, de Sa-Rocha LC, Palermo Neto J. Neural correlates of IgE-mediated food allergy. J. Neuroimmunol. 2003;140:69–77. [PubMed]
• Castex N, Fioramonti J, Fargeas MJ, Bueno L. c-fos expression in specific rat brain nuclei after intestinal anaphylaxis:involvement of 5-HT3 receptors and vagal afferent fibers. Brain Res. 1995;688:149–160. [PubMed]
• Craig AD. Interoception: the sense of the physiological condition of the body. Curr. Opin. Neurobiol. 2003;13:500–505. [PubMed]
• Chadwick VS, Chen W, Shu D, Paulus B, Bethwaite P, Tie A, Wilson I. Activation of the mucosal immune system in irritable bowel syndrome. Gastroenterol. 2002;120:1778–1783. [PubMed]
• Critchley HD, Mathias CJ, Dolan RJ. Neuroanatomical basis for first-and second-order representations of bodily states. Nat. Neurosci. 2001;4:207–212. [PubMed]
• Coelho A-M, Fioramonti J, Bueno L. Systemic lipopolysaccharide influences rectal sensitivity in rats: role of mast cells, cytokines, and the vagus nerve. Am. J. Physiol. Gastrointest. Liver Physiol. 2000;279:G781–G790. [PubMed]
• Damasio AR. Descartes' Error. Avon Books Inc.; New York, NY: 1994.
• Dantzer R. Cytokine-induced sickness behavior: a neuroimmune response to activation of innate immunity. Eur. J. Pharmacol. 2004;500:399–411. [PubMed]
• Gaykema RPA, Goehler LE, Lyte M. Brain response to cecal infection with Campylobacter jejuni: analysis with Fos immunohistochemistry. Brain, Behav., Immun. 2004;18:238–245. [PubMed]
• Goehler LE, Gaykema RPA, Anderson K, Hansen MK, Maier SF, Watkins LR. Vagal immune-to-brain communication: a visceral chemoreceptive pathway. Auton. Neurosci. 2000;85:49–59. [PubMed]
• Goehler LE, Gaykema RPA, Opitz N, Reddaway R, Badr NA, Lyte M. Activation in vagal afferents and central autonomic pathways: early responses to intestinal infection with Campylobacter jejuni. Brain Behav. Immun. 2005;19:334–344. [PubMed]
• Hosoi T, Okuma Y, Matsuda T, Nomura Y. Novel pathway for LPS-induced afferent vagus nerve activation: possible role of nodose ganglion. Auton. Neurosci. 2005;120:104–107. [PubMed]
• James W. The Principles of Psychology. II. Henry Holt & Co.; New York, NY: 1890. Chapter XXV. The Emotions.
• Kamiya T, Wang L, Forsythe P, Goettsche G, Mao Y, Tougas G, Bienenstock J. Inhibitory effects of Lactobacillus reuteri on visceral pain induced by colorectal distension in Sprague-Dawley rats. Gut. 2006;55:191–196. [PMC free article] [PubMed]
• Kirkup AJ, Brunsden AM, Grundy D. Receptors and transmission in the brain-gut axis: Potential for novel therapies I. Receptors on visceral afferents. Am J. Physiol. 2001;280:G787–G794. [PubMed]
• Lacosta S, Merali Z, Anisman H. Behavioral and neurochemical consequences of lipopolysaccharide in mice: anxiogenic-like effects. Brain Res. 1999;818:291–303. [PubMed]
• Liu CY, Mueller MH, Grundy D, Kreis ME. Vagal modulation of intestinal afferent sensitivity to systemic lipopolysaccharide in the rat. Am. J. Physiol. Gastrointest. Liver Physiol. 2007 Epub ahead of print. [PubMed]
• Lyte M, Varcoe JJ, Bailey MT. Anxiogenic effect of subclinical bacterial infection in mice in the absence of overt immune activation. Physiol. Behav. 1998;65:63–68. [PubMed]
• Lyte M, Wang L, Opitz N, Gaykema RPA, Goehler LE. Anxiety-like behavior during initial stage of infection with agent of colonic hyperplasia Citrobacter rodentium. Physiol. Behav. 2006;89:350–357. [PubMed]
• Lydiard RB. Irritable bowel syndrome, anxiety, and depression: what are the links? J. Clin. Psychiatry. 2001;62:38–45. [PubMed]
• Maier SF. Bi-directional immune-brain communication: implications for understanding stress, pain, and cognition. Brain Behav. Immun. 2003;17:69–85. [PubMed]
• Mayer EA, Craske M, Naliboff BD. Depression, anxiety, and the gastrointestinal system. J. Clin. Psychiatry. 2001;62:28–36. [PubMed]
• Mayer EA, Tillisch K, Bradesi S. Review article: modulation of the brain-gut axis as a therapeutic approach in gastrointestinal disease. Aliment. Pharmacol. Ther. 2006;24:919–933. [PubMed]
• Nauta WJH. The problem of the frontal lobe: A reinterpretation. J. Psychiatry Res. 1971;8:167–187. [PubMed]
• Price JL. Prefrontal cortical networks related to visceral function and mood. Ann. NY. Acad. Sci. 1999;877:383–396. [PubMed]
• Reichenberg A, Yirmiya R, Schuld A, Kraus T, Haack M, Morag A. Cytokine-associated emotional and cognitive disturbances in humans. Arch. Gen. Psychiatry. 2001;58:445–452. [PubMed]
• Rossi-George A, Urbach D, Colas D, Goldfarb Y, Kusnecov AW. Neuronal, endocrine and anorexic responses to the T-cell superantigen staphylococcal enterotoxin A: Dependence on tumor-necrosis factor-α J. Neurosci. 2005;25:5314–5322. [PubMed]
• Schachter S. Vagus nerve stimulation: mood and cognitive effects. Epilepsy Behav. 2004;5:S56–S59. [PubMed]
• Spiller RC. Role of nerves in enteric infection. Gut. 2002;51:759–762. [PMC free article] [PubMed]
• Tracey KJ. The inflammatory reflex. Nature. 2002;420:853–859. [PubMed]
• Traub RJ, Murphy A. Colonic inflammation induces fos expression in the thoracolumbar spinal cord increasing activity in the spinoparabrachial pathway. Pain. 2002;95:93–102. [PubMed]
• Walker DL, Toufexis DJ, Davis M. Role of the bed nucleus of the stria terminalis versus the amygdala in fear, stress, and anxiety. Eur. J. Pharmacol. 2003;463:199–216. [PubMed]
• Williams RM, Berthoud HR, Stead RH. Vagal afferent nerve fibres contact mast cells in rat small intestinal mucosa. Neuroimmunomodulation. 1997;5:266–270. [PubMed]
• Zagon A. Does the vagus nerve mediate the sixth sense? TINS. 2001;24:671–673. [PubMed]
|
Migraine causes attacks of headaches, often with feeling sick or vomiting. Treatment options include: avoiding possible triggers, painkillers, anti-inflammatory painkillers, antisickness medicines, and triptan medicines. A medicine to prevent migraine attacks is an option if the attacks are frequent or severe.
We are waiting on the update to the British Association for the Study of Headache 2010 guidelines before routinely reviewing this leaflet.
Migraine is a condition that causes episodes (attacks) of headaches. Other symptoms such as feeling sick (nausea) or vomiting are also common. Between migraine attacks, the symptoms go completely.
Migraine is common. About 1 in 4 women, and about 1 in 12 men, develop migraine at some point in their life. It most commonly first starts in childhood or as a young adult. Some people have frequent attacks - sometimes several a week. Others have attacks only now and then. Some people may go for years between attacks. In some people, the migraine attacks stop in later adult life. However, in some cases the attacks persist throughout life.
There are two main types of migraine attack: migraine attack without aura (sometimes called common migraine) and migraine attack with aura (sometimes called classic migraine).
Migraine without aura
This is the most common type of migraine. Symptoms include the following:
• The headache is usually on one side of the head, typically at the front or side. Sometimes it is on both sides of the head. Sometimes it starts on one side, and then spreads all over the head. The pain is moderate or severe and is often described as throbbing or pulsating. Movements of the head may make it worse. It often begins in the morning, but may begin at any time of the day or night. Typically, it gradually gets worse and peaks after 2-12 hours, then gradually eases off. However, it can last from 4 to 72 hours.
• Other common migraine symptoms include feeling sick (nausea) and vomiting; you may not like bright lights or loud noises, and you may just want to lie in a dark room.
• Other symptoms that sometimes occur include: being off food, blurred vision, poor concentration, stuffy nose, hunger, diarrhoea, abdominal pain, passing lots of urine, going pale, sweating, scalp tenderness, and sensations of heat or cold.
Migraine with aura
About 1 in 4 people with migraine have migraine with aura. The symptoms are the same as those described above (migraine without aura), but also include an aura (warning sign) before the headache begins.
• Visual aura is the most common type of aura. Examples include: a temporary loss of part of vision, flashes of light, objects may seem to rotate, shake, or boil.
• Numbness and pins and needles are the second most common type of aura. Numbness usually starts in the hand, travels up the arm, then involves the face, lips, and tongue. The leg is sometimes involved.
• Problems with speech are the third most common type of aura.
• Other types of aura include: an odd smell, food cravings, a feeling of wellbeing, other odd sensations.
One of the above auras may develop, or several may occur one after another. Each aura usually lasts just a few minutes before going, but can last up to 60 minutes. The aura usually goes before the headache begins. The headache usually develops within 60 minutes of the end of the aura, but it may develop a lot sooner than that - often straight afterwards. Sometimes, just the aura occurs and no headache follows. Most people who have migraine with aura also have episodes of migraine without aura.
Phases of a typical migraine attack
A migraine attack can typically be divided into four phases:
• A premonitory phase occurs in up to half of people with migraine. You may feel irritable, depressed, tired, have food cravings, or just know that a migraine is going to occur. You may have these feelings for hours or even days before the onset of the headache.
• The aura phase (if it occurs).
• The headache phase.
• The resolution phase when the headache gradually fades. During this time you may feel tired, irritable, depressed, and may have difficulty concentrating.
Less common types of migraine
There are various other types of migraine which are uncommon, and some more types which are rare. These include:
Menstrual migraine. The symptoms of each attack are the same as for common migraine or migraine with aura. However, the migraine attacks are associated with periods. There are two types of patterns. Pure menstrual migraine is when migraine occurs only around periods, and not at other times. This occurs in about 1 in 7 women who have migraine. Menstrual-associated migraine is when migraines occur around periods, and also at other times of the month too. About 6 in 10 women who have migraine have this type of pattern. Treatment of each migraine attack is the same as for any other type of migraine. However, there are treatments that may prevent menstrual migraines from occurring. See separate leaflet called 'Migraine Triggered by Periods' for more detail.
Abdominal migraine. This mainly occurs in children. Instead of headaches, the child has attacks of abdominal (tummy) pain which last several hours. Typically, during each attack there is no headache, or only a mild headache. There may be associated nausea (feeling sick), vomiting or aura symptoms. Commonly, children who have abdominal migraine switch to develop common migraine in their teenage years.
Ocular migraine. This is sometimes called retinal migraine, ophthalmic migraine or eye migraine. It causes temporary loss of all or part of the vision in one eye. This may be with or without a headache. Each attack usually occurs in the same eye. There are no abnormalities in the eye itself and vision returns to normal. Important note: see a doctor urgently if you get a sudden loss of vision (particularly if it occurs for the first time). There are various causes of this and these need to be ruled out before ocular migraine can be diagnosed.
Hemiplegic migraine. This is rare. In addition to a severe headache, symptoms include weakness (like a temporary paralysis) of one side of the body. This may last up to several hours, or even days, before resolving. Therefore, it is sometimes confused with a stroke. You may also have other temporary symptoms of vertigo (severe dizziness), double vision, visual problems, hearing problems and difficulty speaking or swallowing. Important note: see a doctor urgently if you get sudden weakness (particularly if it occurs for the first time). There are other causes of this (such as a stroke) and these need to be ruled out before hemiplegic migraine can be diagnosed.
Basilar-type migraine. This is rare. The basilar artery is in the back of your head. It used to be thought that this type of migraine originated due to a problem with the basilar artery. It is now thought that this is not the case, but the exact cause is not known. Symptoms typically include headache at the back of the head (rather than one-sided as in common migraine). They also tend to include strange aura symptoms such as temporary blindness, double vision, vertigo, ringing in the ears, jerky eye movements, trouble hearing, slurred speech, and dizziness. Unlike hemiplegic migraine, basilar-type migraine does not cause weakness. There is an increased risk of having a stroke with this type of migraine. Important note: see a doctor urgently if you get the symptoms described for basilar-type migraine (particularly if they occur for the first time). There are other causes of these symptoms (such as a stroke) and these need to be ruled out before basilar-type migraine can be diagnosed.
Migraine is usually diagnosed by the typical symptoms. There is no test to confirm migraine. A doctor can usually be confident that you have migraine if you have the typical symptoms and an examination does not reveal any abnormality. However, some people with migraine have nontypical headaches. Therefore, sometimes tests are done to rule out other causes of headaches. Also, with some uncommon or rare types of migraine such as ocular migraine, tests are sometimes done to rule out other causes of these symptoms. (For example, temporary blindness can be due to various causes apart from ocular migraine.)
Remember, if you have migraine, you do not have symptoms between attacks. It is the episodic nature of the symptoms (that is, they come and then go) that is typical of migraine. A headache that does not go, or other symptoms that do not go, are not due to migraine.
Tension headaches are sometimes confused with migraine. These are the common headaches that most people have from time to time. See separate leaflet called 'Headaches - Tension-type'. Note: if you have migraine, you can also have tension headaches at different times to migraine attacks.
The cause is not clear. A theory that used to be popular was that blood vessels in parts of the brain go into spasm (become narrower) which accounted for the aura. The blood vessels were then thought to dilate (open wide) soon afterwards, which accounted for the headache. However, this theory is not the whole story and, indeed, may not even be a main factor. It is now thought that some chemicals in the brain increase in activity and parts of the brain may then send out confusing signals which cause the symptoms. The exact changes in brain chemicals are not known. It is also not clear why people with migraine should develop these changes. However, something may trigger a change in activity of some brain chemicals to set off a migraine attack.
Migraine is not classed as an inherited condition. However, it often occurs in several members of the same family. So, there is probably some genetic factor involved. Therefore, you are more likely to develop migraine if you have one or more close relatives who have migraine.
Most migraine attacks occur for no apparent reason. However, something may trigger migraine attacks in some people. Triggers can be all sorts of things. For example:
• Diet. Dieting too fast, irregular meals, cheese, chocolate, red wines, citrus fruits, and foods containing tyramine (a substance that occurs naturally in some foods).
• Environmental. Smoking and smoky rooms, glaring light, VDU screens or flickering TV sets, loud noises, strong smells.
• Psychological. Depression, anxiety, anger, tiredness, stress, etc. Many people with migraine cope well with stress but have attacks when they relax, leading to so-called weekend migraine.
• Medicines. For example, hormone replacement therapy (HRT), some sleeping tablets, and the contraceptive pill.
• Other. Periods (menstruation), shift work, different sleep patterns, the menopause.
It may help to keep a migraine diary. Note down when and where each migraine attack started, what you were doing, and what you had eaten that day. A pattern may emerge, and it may be possible to avoid one or more things that may trigger your migraine attacks. See separate leaflet called 'Migraine - Triggers and Diary' which gives more details and includes a diary that you can print out and fill in. There are also separate leaflets called 'Migraine Triggered by Periods' and 'Migraine and the Contraceptive Pill and Patch'.
See separate leaflet called 'Migraine - Medicines to Treat Attacks' for details of the various migraine treatment options. A brief summary is given here.
Paracetamol or aspirin works well for many migraine attacks. (Note: children aged under 16 should not take aspirin for any condition.) Take a dose as early as possible after symptoms begin. If you take painkillers early enough, they often reduce the severity of the headache, or stop it completely. A lot of people do not take a painkiller until a headache becomes really bad. This is often too late for the painkiller to work well.
Take the full dose of painkiller. For an adult this means 900 mg aspirin (usually three 300 mg tablets) or 1,000 mg of paracetamol (usually two 500 mg tablets). Repeat the dose in four hours if necessary. Soluble tablets are probably best as they are absorbed more quickly than solid tablets.
Anti-inflammatory painkillers
Anti-inflammatory painkillers probably work better than paracetamol. They include ibuprofen which you can buy at pharmacies or get on prescription. Other types such as diclofenac, naproxen, or tolfenamic acid need a prescription. (Strictly speaking, aspirin is an anti-inflammatory painkiller.)
Dealing with nausea and sickness
Migraine attacks may cause you to feel sick (nausea) which can cause poor absorption of tablets into your body. If you take painkillers, they may remain in your stomach and not work well if you feel sick. You may even vomit the tablets back. Tips that may help include:
• You can take an antisickness medicine with painkillers. A doctor may prescribe one. Like painkillers, they work best if you take them as soon as possible after symptoms begin.
• An antisickness medicine, domperidone, is available as a suppository. Another antisickness medicine, prochlorperazine, comes in a buccal form which dissolves between the gum and cheek. These can be useful if you feel very sick or vomit during migraine attacks. An anti-inflammatory painkiller suppository (diclofenac) is also available.
Combinations of medicines
Triptan medicines
A triptan medicine is an alternative if painkillers do not help. These include: almotriptan, eletriptan, frovatriptan, naratriptan, rizatriptan, sumatriptan, and zolmitriptan. They are not painkillers. They work by interfering with a brain chemical called 5-HT (serotonin). An alteration in this chemical is thought to be involved in migraine. A triptan will often reduce or abort a migraine attack. Some triptans work in some people and not in others. Therefore, if one triptan does not work, a different one may well do so. Most people who have migraine can usually find a triptan that works well for most migraines, and whose side-effects are not too troublesome.
Do not take a triptan too early in an attack of migraine. (This is unlike painkillers described above which should be taken as early as possible.) You should take the first dose when the headache (pain) is just beginning to develop, but not before this stage. For example, do not take it during the premonitory or aura phase but wait until the headache begins. Triptans probably work much less well if taken too early on in an attack.
A medicine to prevent migraine attacks is an option if you have frequent or severe attacks. It may not stop all attacks, but their number and severity are often reduced. Medicines to prevent migraine are taken every day. They are not painkillers, and are different to those used to treat each migraine attack. A doctor can advise on the various medicines available. See separate leaflet called 'Migraine - Medicines to Prevent Attacks' for more details.
Some points to note about migraine in children include the following:
• Migraine is common in children. It affects about 1 in 10 school-age children.
• Symptoms can be similar to those experienced by adults. However, sometimes symptoms are not typical. For example, compared with adults, attacks are often shorter, pain may be on both sides of the head, and associated symptoms such as feeling sick and vomiting may not occur.
• Abdominal migraine (described earlier) mainly affects children.
• Common triggers in children include missing meals, dehydration and irregular routines. So, if a child is troubled with migraine attacks, it is important to try to have regular routines, with set meals and bedtimes. Also, encourage children to have plenty to drink.
• Many of the medicines used by adults are not licensed for children.
• Paracetamol or ibuprofen are suitable and are commonly used. Do not use aspirin.
• As regards antisickness medicines, domperidone is licensed for children of all ages, and prochlorperazine is licensed for children older than 12 years of age.
• Triptans are not licensed for children and so should not be used.
The bad news is that many of the medicines used to treat migraine should not be taken by pregnant or breast-feeding women.
• For relief of a migraine headache:
• Ibuprofen is sometimes used but do not take it in the last third of the pregnancy (the third trimester).
• Aspirin - avoid if you are trying to conceive, early in pregnancy, in the third trimester and whilst breast-feeding.
• Triptans - should not be taken by pregnant women at all. Triptans can be used during breast-feeding, but milk should be expressed and discarded for 12-24 hours after the dose (see manufacturer's information on the packet).
• For feeling sick and vomiting - no medicines are licensed in pregnancy. However, occasionally a doctor will prescribe one off licence.
• Medicines used for the prevention of migraine are not recommended for pregnant or breast-feeding women.
Migraine Action
27 East Street, Leicester, LE1 6NB
Tel: 0116 275 8317 Web: www.migraine.org.uk
Migraine Trust
52-53 Russell Square, London, WC1B 4HP
Tel: 020 7631 6975 Web: www.migrainetrust.org
Original Author: Dr Tim Kenny
Peer Reviewer: Dr Beverley Kenny
Document ID: 4299 (v41)
|
When were binoculars invented and where were they used?
4 Responses to When were binoculars invented and where were they used?
1. dirk_vermaelen says:
if i remember correctly, the Arabs invented it.
2. seraph1818 says:
Although the dates vary by 2 years, below you will find 2 possible answers…
A binocular is an optical instrument for providing a magnified view of distant objects, consisting of two similar telescopes, one for each eye, mounted on a single frame. The first binocular telescope was invented by J. P. Lemiere in 1825.
Binoculars – In 1823, a new optical instrument began to appear in French opera houses that allowed patrons in the distant (and less expensive) seats to view the opera as if they were in the front row. Called opera glasses, the device combined telescope lenses with stereoscopic prisms to provide a magnified, three-dimensional view. After many years (but relatively few modifications), opera glasses have evolved into the binocular.
In their simplest form, binoculars are a pair of small refracting telescope lenses, one for each eye. The brain assembles the two views, one from each lens, into a single picture. Because each eye sees its own view, the final image has depth; this is not so with conventional telescopes, which possess only one eyepiece and, therefore, a two-dimensional image.
3. Elizabeth says:
Phoenicians cooking on sand discovered glass around 3500 BCE, but it took about 5,000 years more for glass to be shaped into a lens for the first telescope.
A spectacle maker probably assembled the first telescope. Hans Lippershey (c1570-c1619) of Holland is often credited with the invention, but he almost certainly was not the first to make one. Lippershey was, however, the first to make the new device widely known.
“What we call a binocular is a binocular telescope, two small prismatic telescopes joined together. When Hans Lippershey applied for a patent on his instrument in 1608, the bureaucracy in charge, who had never before seen a telescope, asked him to build a binocular version of it, with quartz optics, which he is reported to have completed in December 1608.”
4. Fizzle says:
That was an interesting question, so I had to look. The first telescopes came in around 1600 and it seems the first thing people wanted to know was if there was a way to see through with both eyes. That doesn’t qualify as a new concept because that’s how we use our eyes naturally. Binocular telescopes were alright if you wanted to look at the world upside down through a cumbersome contraption. Ignatio Porro patented a prism erecting system in 1854, but making it work required better glass, skills, and one more step. That step was made by Ernst Abbe, who glued the prisms together by 1873 to make an erecting telescope. Otto Schott was a glassmaker and Carl Zeiss was an instrument maker, so everything came together in Germany and was first sold in 1894.
World War One broke out soon afterward, and that didn’t hurt the popularity of these new instruments.
|
The Protein-Folding Problem
A droopy, strung-out chain of amino acids that isn't much good for anything -- that's what rolls off the assembly line of the molecular factory inside a cell when a protein is created. All the pieces are there, and they're in the right sequence. Yet the new protein is unfit for duty. It isn't in shape.
To do its job, whatever it may be among the thousands of life-sustaining jobs proteins do, this dangly chain forged from hundreds of amino acids must fold into just the right three-dimensional configuration. It happens within seconds, a long time in protein biochemistry, and the result is a complex bundle of twists and turns with clefts and notches precisely sculpted to allow the protein to attach and release other molecules. Function follows from form, and when the form is right, the protein goes to work.
For biomedical scientists, this phenomenon poses the kind of mystery science thrives on. How is it that a particular sequence of amino acids uniquely determines the right shape, out of almost unlimited possibilities, so that the protein can perform its predestined biological role? It's called the protein-folding problem, and solving it is no mere intellectual exercise. In theory, knowing the biological laws that govern protein folding could make it possible to create new proteins made to order to cure diseases and abate maladies from indigestion to arthritis.
"The relationship between protein sequence and three-dimensional structure is one of the primary unsolved problems in biology today," says Charles Brooks, who is attacking this Grand Challenge problem in several related projects at the Pittsburgh Supercomputing Center. Brooks is a leader in the development of a computational method called molecular dynamics (MD). He helped develop the CHARMM package of MD software, used by a number of researchers working in protein and DNA structure analysis, and he has been applying MD in protein research at the Pittsburgh Supercomputing Center since it opened its doors in 1986.
In a series of very large-scale simulations on the CRAY C90, Brooks has explored protein folding in apomyoglobin, a partially unfolded form of myoglobin, a protein that carries oxygen in muscle tissue. His results have begun to identify intermediate folding stages along the folding "pathway" of this protein. In a related project, Brooks has collaborated with PSC biomedical scientist Bill Young to develop a version of CHARMM that distributes MD computations between the CRAY T3D and C90, significantly boosting performance for MD simulations.
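To give a concrete flavor of what an MD code does at each time step, the sketch below implements the velocity Verlet integrator, a standard explicit time-stepping scheme for Newton's equations of motion, in Python with a toy one-particle harmonic force. It illustrates only the general integration idea; it is not CHARMM code, and a real protein simulation would evaluate an empirical force field over thousands of atoms.

    import numpy as np

    def velocity_verlet(x, v, force, mass, dt, n_steps):
        # Advance positions and velocities with the velocity Verlet scheme.
        f = force(x)
        for _ in range(n_steps):
            v_half = v + 0.5 * dt * f / mass   # half-kick with current forces
            x = x + dt * v_half                # drift: full position update
            f = force(x)                       # recompute forces at new positions
            v = v_half + 0.5 * dt * f / mass   # second half-kick
        return x, v

    # Toy system: one particle in a harmonic well (a stand-in for a real force field).
    k, m = 1.0, 1.0
    harmonic = lambda pos: -k * pos
    x0, v0 = np.array([1.0]), np.array([0.0])
    xf, vf = velocity_verlet(x0, v0, harmonic, m, dt=0.01, n_steps=1000)
    print(xf, vf)   # the particle has oscillated about the origin for 10 time units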
Researchers: Charles L. Brooks III, Carnegie Mellon University; William S. Young, Pittsburgh Supercomputing Center
Hardware: CRAY-C90, CRAY T3D
Software: CHARMM
Keywords: protein, enzymes, antibodies, oxygen, protein-folding, amino acid, sequence, three-dimensional structure, molecular dynamics (MD), DNA structure, apomyoglobin, myoglobin, nuclear magnetic resonance (NMR), CHARMM.
Related Material on the Web:
Projects in Scientific Computing, PSC's annual research report.
|
Introduction to the Problem
The problem of resolution of singularities is, in short:
Given an algebraic variety X, construct a proper, birational morphism Y-> X, where Y is a nonsingular variety.
The morphism is an isomorphism over an open dense subset of X. Along the same lines, the problem of embedded resolution of singularities is:
Given a nonsingular variety W and a variety X embedded in W, construct a proper, birational morphism p: W'-> W, where W' is nonsingular, such that p is an isomorphism outside the singular locus Sing(X) of X, the proper transform of X (i.e. the closure of p^(-1)(X - Sing(X))) is nonsingular, and p^(-1)(X) has normal crossings.
The most essential operation in the resolution of singularities is blowing up (also called monoidal transformation, sigma-process, etc.). The blowing up of the affine plane at the origin is illustrated by the following figure:
Blowing up (A^2, V(y^2 - x^3 - x^2)) at the origin.
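As a small worked example of the chart computation behind this figure (a sketch in Python with SymPy, not part of the Maple package described in this paper): in the chart of the blow-up of A^2 at the origin where y = x*t, the nodal cubic y^2 - x^3 - x^2 factors into the exceptional divisor x = 0 (counted twice) and a smooth proper transform.

    from sympy import symbols, expand, factor

    x, y, t = symbols('x y t')
    f = y**2 - x**3 - x**2          # the nodal cubic from the figure

    # Substitute the blow-up chart y = x*t and factor the total transform.
    total = expand(f.subs(y, x*t))
    print(factor(total))            # expect x**2*(t**2 - x - 1), up to ordering

    # The factor x**2 is the exceptional divisor x = 0 with multiplicity 2; the
    # remaining factor t**2 - x - 1 defines the proper transform, which is
    # nonsingular and meets the exceptional divisor at t = 1 and t = -1, the
    # two points corresponding to the two branches of the node.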
This problem has an impressive history. For curves, the existence of a resolution was already known in the 19th century. In the surface case, Walker gave an analytic proof [Walker1935], and Zariski gave an algebraic proof [Zariski1939]. Five years later, Zariski proved resolution of 3-folds [Zariski1944]. Then it took quite some time until Hironaka came up with his famous result of resolution for arbitrary dimension [Hironaka1964]. Hironaka's result holds only for the characteristic zero case. In positive characteristic, there are some partial results [Abhyankar1965,Lipman1978], but the general case is still open. For an account of the difficulties in positive characteristic, the reader should consult [Hauser1998].
Hironaka's existence proof is not constructive. Later, [Villamayor1989,Villamayor1996,Encinas and Villamayor2000] and [Bierstone and Milman1991,Bierstone and Milman1997] have come up with more explicit strategies to find a resolution. Still, the main interest of these authors was to prove the existence of resolutions with particular properties (equivariance) and not actual computation.
The algorithm is not new, since the main ideas are due to Villamayor; but this is the first time that a resolution algorithm has been formulated in such an explicit way. This is underlined by the fact that we could do a full implementation in Maple.
Even when taking the results of Villamayor and Bierstone-Milman into account, it is quite surprising that automated resolution is possible with present-day means. We are not aware of any other implementations of resolution, even in the relatively easy case of surfaces, except a program for cyclic surfaces [Castellanos et al.1998] (see also [Eisenbud1993], where resolution of surfaces is considered an open problem in constructive algebraic geometry). Only for curves are there some implementations, e.g. in [Tran and Winkler1997] or in MapleV release 5 (in the package ``algcurves'' by M. van Hoeij).
One of the main difficulties in automated resolution is the following. The resolution is obtained by blowing up subvarieties in the singular locus. Now, it is quite easy to blow up a point. But for more complicated blowing-up centers, the construction is not so trivial. There are implementations for performing blow-ups in various computer algebra systems like [Greuel et al.1998] and [Bayer et al.1993] (both are using Gröbner bases, by the way). But these computations are time consuming and the output can be much more complicated than the input. In the resolution algorithm, we need to apply blowing up repeatedly, which would be rather difficult when using these procedures. Gröbner bases [Buchberger1965,Buchberger1985,Winkler1988] are, however, used in other places of the algorithm: testing whether a given ideal contains 1, solving linear equations over polynomials, computing quotient ideals.
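As a minimal illustration of one such Gröbner basis computation, the following sketch (in Python with SymPy, rather than the Maple/Singular tools mentioned in this paper) tests whether an ideal contains 1 by checking whether its reduced Gröbner basis is [1]:

    from sympy import groebner, symbols

    x, y = symbols('x y')

    # The hyperbola x*y = 1 and the circle x**2 + y**2 = 1 have common (complex)
    # zeros, so their ideal is proper and its Groebner basis is not [1].
    G = groebner([x*y - 1, x**2 + y**2 - 1], x, y, order='lex')
    print(G.exprs == [1])    # False

    # Adding the generators x and y puts x*y in the ideal, hence also
    # 1 = x*y - (x*y - 1): the reduced Groebner basis collapses to [1],
    # i.e. the ideal is the whole ring.
    H = groebner([x, y, x*y - 1], x, y, order='lex')
    print(H.exprs == [1])    # True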
Using a new result in [Encinas and Villamayor2001], we were able to generalize the algorithm to the non-hypersurface case in a straightforward manner. The package has also been extended with facilities for computing adjoints, and it can compute the dual hypergraph of the exceptional divisors of a resolution.
This work was supported by the Austrian Science Fund FWF in the frame of the projects Computation of Adjoints for Surfaces and Proving and Solving over Reals. We are indebted to S. Encinas and O. Villamayor for their help in the clarification of details, and to H. Hauser for helpful remarks.
|
Global warming (geography) by natasha, tesa and grace y10 n - Presentation Transcript
• WHAT IS GLOBAL WARMING? Global warming is the rise in the average temperature of Earth's atmosphere and oceans since the late 19th century, and its projected continuation. Since the early 20th century, Earth's mean surface temperature has increased by about 0.8 °C (1.4 °F), with about two-thirds of the increase occurring since 1980. Warming of the climate system is unequivocal, and scientists are more than 90% certain that it is primarily caused by increasing concentrations of greenhouse gases produced by human activities such as the burning of fossil fuels and deforestation. These findings are recognized by the national science academies of all major industrialized nations.
• WHAT ARE THE CAUSES OF GLOBAL WARMING? Global warming is caused by many things. The causes are split up into two groups: man-made (or anthropogenic) causes, and natural causes.
• Natural causes: Natural causes are causes created by nature. One natural cause is a release of methane gas from arctic tundra (the arctic) plains and wetlands. Methane is a greenhouse gas. A greenhouse gas is a gas that traps heat in the earth's atmosphere. Another natural cause is that the earth goes through a cycle of climate change. This climate change usually lasts about 40,000 years.
• Man-made causes: Man-made causes probably do the most damage. There are many man-made causes. Pollution is one of the biggest man-made problems. Burning fossil fuels is one thing that causes pollution. Fossil fuels are fuels made of organic matter such as coal or oil. When fossil fuels are burned they give off a greenhouse gas called CO2. Also, mining coal and oil allows methane to escape. Methane is naturally in the ground. When coal or oil is mined you have to dig up the earth a little, and when you dig up the fossil fuels you dig up the methane as well.
• Another major man-made cause of global warming is population. More people means more food, and more methods of transportation. That means more methane, because there will be more burning of fossil fuels and more agriculture. If you're in a barn filled with animals, something smells terrible, and that smell is methane. Another source of methane is manure. Because more food is needed, we have to raise food. Animals like cows are a source of food, which means more manure and methane. Another problem with the increasing population is transportation. More people means more cars, and more cars means more pollution. Also, many people have more than one car.
• Since CO2 contributes to global warming, the increase in population makes the problem worse because we breathe out CO2. Also, the trees that convert our CO2 to oxygen are being demolished because we're using the land that we cut the trees down from as property for our homes and buildings. We are not replacing the trees (an important part of our ecosystem), so we are constantly taking advantage of our natural resources and giving nothing back in return.
• EFFECTS OF GLOBAL WARMING
• The planet is warming, from North Pole to South Pole, and everywhere in between. Globally, the mercury is already up more than 1 degree Fahrenheit (0.8 degree Celsius), and even more in sensitive polar regions. And the effects of rising temperatures aren't waiting for some far-flung future. They're happening right now. Signs are appearing all over, and some of them are surprising. The heat is not only melting glaciers and sea ice, it's also shifting precipitation patterns and setting animals on the move.
• Some impacts from increasing temperatures are already happening:
• Ice is melting worldwide, especially at the Earth's poles. This includes mountain glaciers, ice sheets covering West Antarctica and Greenland, and Arctic sea ice.
• Sea level rise became faster over the last century.
• Some butterflies, foxes, and alpine plants have moved farther north or to higher, cooler areas.
• Precipitation (rain and snowfall) has increased across the globe, on average.
• Other effects could happen later this century, if warming continues: Sea levels are expected to rise between 7 and 23 inches (18 and 59 centimeters) by the end of the century, and continued melting at the poles could add between 4 and 8 inches (10 to 20 centimeters).
• Hurricanes and other storms are likely to become stronger.
• Species that depend on one another may become out of sync. For example, plants could bloom earlier than their pollinating insects become active.
• Floods and droughts will become more common. Rainfall in Ethiopia, where droughts are already common, could decline by 10 percent over the next 50 years.
• Less fresh water will be available. If the Quelccaya ice cap in Peru continues to melt at its current rate, it will be gone by 2100, leaving thousands of people who rely on it for drinking water and electricity without a source of either.
• PREDICTED EFFECTS OF GLOBAL WARMING
• Rising Sea Level: The most obvious effect of global warming is the warming of the planet. Rising temperatures are already responsible for the melting of glaciers and ice, and if present-day trends continue the effect could be devastating as massive areas of ice begin to melt and flow into the oceans. With rising sea levels comes the danger that many areas considered waterfront property today could be completely under water in just a few decades.
• Extreme Weather: Global warming is predicted to heat up the world's oceans, which means more intense hurricanes and typhoons. In addition, warmer climates around the earth are expected to change traditional weather patterns, meaning more thunderstorms and tornadoes in some areas and devastating drought in others.
• Some diseases will spread, such as malaria carried by mosquitoes. Ecosystems will change: some species will move farther north or become more successful; others won't be able to move and could become extinct.
• Agricultural Changes: Both the increase in temperature and changes in weather patterns are expected to significantly affect the agricultural industry. Droughts will have the obvious effect of making soil infertile, but in some areas, increased rain may have the opposite effect. What is a desert today could conceivably become a verdant valley in a century or two.
• Loss of Species: Because global warming is essentially speeding up what would be a natural process, extreme climatic changes impact the habitat of animals at a pace that's too rapid for them to naturally evolve to meet successfully. The result is expected to be a wholesale loss of certain animal species, which may mean even more devastation to an ecosystem ill-prepared to handle such rapid change.
• Economic Effects: The change in agriculture will necessitate unforeseen fluctuations in the price of food. The rise in temperatures will place economic pressure on cooling costs, especially for those who cool with electricity. The devastating storms that may become commonplace will result in a plethora of rising costs, from flood insurance to health care.
• Disease: Scientists are already reporting that higher temperatures in Third World environments have resulted in a resurgence of infectious diseases associated with bacteria that thrive in warm temperatures. As global warming takes hold, these diseases are expected to widen their scope and spread quickly into developed countries.
|
The courtesies of coca culture
Rory Carroll's use of the word "boisterous" in a sentence beginning with the statement that Peruvian "politicians munched on the raw ingredient of cocaine" is unfortunate (Peruvian politicians in coca protest, March 15). It associates the mastication of coca leaves with activity uncharacteristic of what the article rightly points out as the sacredness surrounding coca culture in the Andes.
People in Aymara- and Quechua-speaking communities in Andean countries observe certain courtesies; on meeting an acquaintance a reciprocal exchange of a personal item, a little coca bag, takes place so that you take a few leaves from my bag and I take a few leaves from yours. A quiet exchange of news and information follows the addition of the leaves to the quid in one's mouth. The attitude is one of contemplation. At the end of the meeting, one usually adds new leaves before taking one's departure.
People often masticate coca in a period of studied calmness before undertaking strenuous activity such as mining or herding, which more often than not takes place at altitudes well in excess of 4,000m above sea level. The contrast with the behaviour associated with the highs reported from taking cocaine could not be more pronounced.
Dr Penelope Dransart
Reader in anthropology, University of Wales
|
Researchers Study an Elephant That Speaks Korean
It's said that for native English speakers, Korean ranks among the top-five most difficult languages to learn -- but to the surprise of animal researchers, one Asian elephant named Koshik seems well on his way to mastering the basics.
In 2006, video clips first began appearing on YouTube of Koshik, a captive elephant from South Korea's Everland Zoo, mimicking human speech with remarkable clarity -- responding to his keeper's greetings with a polite, well-spoken reply: "annyong", hello in Korean. At the time, no one was really sure how an animal whose normal vocalizations (sounding more like low rumbling) could so intelligibly replicate a human voice. But recently a team from the University of Vienna met with Koshik to learn more about his special speaking skills.
Here he is in action:
According to the researchers, led by Dr. Angela Stoeger, Koshik was capable of speaking at least five different Korean words well enough for local humans to understand -- not enough to wax poetic -- but perhaps sufficient to break the ice with a cute new conversation partner: "annyeong" (hello); "anja" (sit down); "aniya" (no); "nuwo" (lie down) and "choah" (good).
To be fair, Koshik most certainly has no idea what these words really mean, or how forward he would seem, but that's beside the point. What Stoeger is most interested in is how he's making these sounds despite lacking human vocal cords. As it turns out, the secret's in his trunk.
"He always puts his trunk tip into his mouth and then modulates the oral chamber," Dr. Stoeger told the BBC. "We don't have X-rays, so we don't really know what is going on inside his mouth, but he's invented a new way way of sound production to match his vocalizations with his human companions."
Stoeger believes that because Koshik grew up in captivity, hearing more humans speak than fellow elephants, he developed the ability to ape Korean in order to form social bonds with our species that were lacking amongst his own.
Although there are already other animals known to be capable of mimicking human speech, like those occasionally fowl-mouthed parrots (pun intended), this may be a first for our pachyderm counterparts. Heck, even whales are getting in on the act.
Unfortunately, Koshik has yet to respond to my request for comment on all this -- but in truth, he may have crushed the computer as he tried to send the email.
|
17th President of the United States- Andrew Johnson
Andrew Johnson
-Andrew Johnson (1808-1875), the first president to be impeached, became chief executive upon the assassination of Abraham Lincoln.
-Boyhood. Andrew Johnson was born in Raleigh, North Carolina, on Dec. 29, 1808. His father, Jacob Johnson, worked as a handyman in a tavern. Andrew's mother, Mary McDonough Johnson, was a maid in the tavern.
-Tailor's apprentice. Andrew's mother apprenticed him to a tailor when he was 13 years old. The shop foreman probably taught him to read. Tailors usually employed someone to read to the workers as they sat at their tables stitching clothes. Andrew became familiar with the Constitution, American history, and politics through reading newspapers and a few books.
-Johnson's family. On May 17, 1827, Johnson married Eliza McCardle (Oct. 4, 1810-Jan. 15, 1876), the daughter of a Scottish shoemaker. She taught him to write and to solve simple problems in arithmetic. She also encouraged him to read and to study. The Johnsons had five children.
-Local and state offices. In 1829, the year Jackson became president, Johnson won his first election. He became a Greeneville alderman along with a tanner and a plasterer. In 1834, Johnson became mayor. In 1835, the voters elected him to the Tennessee House of Representatives. There, he opposed a bill for state assistance in the construction of railroads because he feared dishonesty, monopolies, and wasteful spending. But Johnson represented eastern Tennessee, which needed railroads because of its mountainous isolation. His vote contributed to his defeat for reelection in 1837, the first of only two times in his 45-year political career that he ever lost a popular vote.
-U.S. representative. In 1843, Johnson won the first of five terms in the U.S. House of Representatives.
-Governor. Johnson ran successfully for governor of Tennessee in 1853. As governor, he favored laws to provide free public education. He also fought unsuccessfully the use of prison labor to compete with free labor. His courage and speaking skill were valuable in the rough and tumble politics of the day.
-U.S. senator. In 1857, Johnson returned to Washington as a U.S. senator and continued to push for the Homestead Act. It finally passed in 1862, after the Civil War had begun and Southerners had resigned from Congress.
-Military governor of Tennessee. Union armies gained a foothold in western Tennessee early in 1862. Then Lincoln appointed Johnson military governor of the state, an unusual position designed to give Johnson extensive civil and military authority. His main task was to organize the loyal Unionists, assure their protection, hold free elections, and restore federal authority in the state.
-Vice president. Johnson was ideally suited to run as a vice presidential candidate with Lincoln in 1864. He had strongly supported the Union, he was a Southerner, and he was a leading member of the War Democrats. The War Democrats were Democrats who had been loyal to Lincoln throughout the war.
-Lincoln's Assassination. On April 14, only six weeks after the inauguration, Lincoln was assassinated. The next morning, Johnson took the presidential oath of office in his hotel room.
-Plans for Reconstruction. Lincoln had not firmly decided the details of a Reconstruction plan at the time of his death. His wartime plan filled a temporary need but was not adequate for a postwar solution. Lincoln would have expected as a minimum assurances of future loyalty, recognition of the end of slavery, and protection for Southern Unionists and blacks. Widely regarded as a moderate, he could favor restricting rebel leaders without desiring punishment.
-Break with Congress. Many Northerners questioned Johnson's plan, especially after the beginning of 1866. They doubted the fitness of the Southern States for readmission because of reports of violence against blacks and their white supporters, the passing of laws unfair to blacks, and the frequent election of former Confederate leaders. When Congress met in December 1865, it rejected Johnson's plan, refused to seat the newly elected Southern congressmen, and openly criticized the president's approach.
-Increased tension developed after March 2, 1867, when Congress passed two laws that Johnson considered unconstitutional. One law was the First Reconstruction Act, which put the Southern States under military rule and established strict requirements for their readmission to the Union. This act also disfranchised the rebels whom the proposed 14th Amendment prohibited from holding office. It did this by barring these people from voting to elect delegates to new state constitutional conventions and from serving as convention delegates. The other law was the Tenure of Office Act. It required Senate approval before the president could fire members of his Cabinet and other officials who had been confirmed by the Senate. Johnson vetoed both of these acts, but Congress repassed them.
-Impeachment had long been a goal of the Radicals. On Feb. 24, 1868, the House of Representatives voted 126 to 47 to impeach Johnson. On March 2 and March 3, the House adopted 11 articles of impeachment. The most important articles were the first, which charged that the president had violated the Tenure of Office Act by dismissing Stanton, and the 11th, which claimed that he had conspired against Congress and the Constitution. This charge cited Johnson's claim that Congress did not properly represent all the states.
On May 16, the Senate voted on the 11th article. Because it charged a general intent to obstruct the will of Congress, Johnson's opponents believed it had the best chance of passing. Senator James Grimes of Iowa, stricken with paralysis, came in on a stretcher and voted "not guilty." The roll call vote lasted over an hour, and the outcome was in doubt until the very end. The final tally of 35 "guilty" and 19 "not guilty" acquitted Johnson by one vote. Ten days later, following the Republican National Convention, the Senate voted on the second and third articles, with the same result. The Senate took no further votes, and the trial was over. The verdict ensured that something more serious than political opposition to Congress would be necessary to justify removing a president from office.
-Life in the White House became livelier during Johnson's administration than it had been during the gloomy war years. The household included the Johnsons' two surviving sons, Robert and Andrew; their daughters, Mrs. Mary Stover and Mrs. Martha Patterson; Mrs. Patterson's husband, Senator D. T. Patterson of Tennessee; and the president's five grandchildren. The children played and had many parties in the White House.
-The final months of Johnson's administration after the impeachment trial were uneventful. He hoped the Democrats would nominate him for president in 1868. But they chose former Governor Horatio Seymour of New York, who lost to General Ulysses S. Grant.
-Death. During a visit to Tennessee, Johnson suffered a paralytic stroke. He died a few days later, on July 31, 1875. Johnson was buried in Greeneville, wrapped in a U.S. flag and with his well-worn copy of the Constitution under his head.
Works Cited:
Sefton, James E. "Johnson, Andrew." World Book Advanced. World Book, 2013.
|
Kmart, Tender, and Dyna E accused by FTC of Deceptive Biodegradable claims
The testimony states that with the recent growth in "green" advertising and product lines, the agency will continue its efforts to ensure that environmental marketing is truthful, substantiated, and not confusing to consumers.
Federal Trade Commission Defines Biodegradability
In a traditional “Dry Tomb” landfill, the stuff isn’t going to go away, at least not very fast. We use our dry tomb landfills to hide our trash...keep it away from our sight and noses.
I was surprised yesterday to see an interesting tidbit about our landfills in the press. The federal government (Federal Trade Commission) has determined that since things do not biodegrade in a landfill, any item that is disposed of by tossing it in the garbage cannot be called biodegradable. Why, you ask? Well, things in a typical "Dry Tomb" landfill don't biodegrade, and that would include things that normally do, like paper and food waste. It seems that anything that isn't burned or composted in a commercial composting site and is disposed of in a landfill....can no longer be called biodegradable.
Let's take another example, leftover food. If you dig a hole in your backyard and put your food scraps in the hole and cover it with dirt, your food scraps will biodegrade within two to three weeks. Food placed in a back yard hole would be considered biodegradable. Now, let's take that same piece of lettuce and take it to our local garbage dump...oops; it's no longer biodegradable (assuming your landfill is the "Dry Tomb" type).
There are exceptions to every rule, and there are things that go into a dry tomb landfill that can be considered biodegradable. Anything that can biodegrade in an anaerobic environment will biodegrade in a dry tomb landfill. ENSO Bottles, an environmental company, realized that all plastics ultimately end up in a landfill and that with more than 150 billion bottles being produced each year something needed to be done about reducing plastic pollution. ENSO is a supporter of recycling, an important part of conserving scarce resources and protecting our environment. However, less than 30 percent of plastic bottles are recycled; the remaining 100-plus billion bottles are ending up in our landfills, streams and oceans. ENSO recently announced the development of a modified PET plastic bottle that will biodegrade, including in a "Dry Tomb" anaerobic landfill environment. ENSO Bottles have been tested to biodegrade in an anaerobic or aerobic microbial environment, leaving behind natural elements of biogases and humus.
ENSO bottles with EcoPure™ have been tested and validated for the following:
(2) ASTM D 4603 (Intrinsic Viscosity)
(4) ASTM D 5511 Standard Test Methods, a standard for biodegradation testing in anaerobic environments. Results clearly indicate ENSO bottles with EcoPure™ biodegrade through natural microbial digestion.
To learn more about these solutions visit and
To request official test results contact:
What if I told you there was no such thing as a biodegradable product? Would it bring to mind images of products you have seen at the local store that are promoted as biodegradable? Would you think I was swimming out of the mainstream? Well, according to the FTC, it just may be true if you are being marketed a biodegradable product and the seller doesn't qualify that claim.
On June 9th, 2009 the FTC announced “Actions against Kmart, Tender and Dyna-E Alleging Deceptive 'Biodegradable' Claims”. The FTCs release was picked up by many national papers (click here to see the WSJ article) and led to an internet wide chat about what it means. To boil it down and begin our review of what happened we can summarize the FTC action as acting against Kmart for the marketing of disposable paper plates as biodegradable.
From the FTC press release:
From the WSJ article By Brent Kendall, of Dow Jones Newswires:
“The charges involved the discount retailer's claim that a brand of its paper plates was biodegradable. The FTC said the paper products at issue didn't decompose quickly enough to qualify for the biodegradable label”
To some this may come as a surprise, as we have been led since elementary school science to believe that paper and wood products are biodegradable. More recently some plastics have been marketed as biodegradable. But regretfully, regardless of what we were taught in the past, most items are not by themselves biodegradable, degradable, (oxo)degradable or (hydro)degradable in their marketed state. Rather, it is only when exposed to an appropriate environment that they can become so. This exposure to an environment that creates biodegradation is referred to as composting or other processes. Let me provide you with a story I use to illustrate this to companies when I consult with them.
Rather than starting off with paper a more complex material to biodegrade we can say that most reasonable people would consider table scraps (leftover food) to be biodegradable. They would likely agree that if placed on the floor of a Brazilian rainforest and left there for three months, it is likely that when they returned the scraps would be gone. But they are likely to agree that if the same table scraps were placed on the Alaskan tundra in the middle of winter, and left alone for three months, once you returned it is reasonable that the scraps would still be there. This simple yet extreme illustration shows that the environment of disposal and not the product material has the most significant impact on the ability of something to degrade, biodegrade etc...
Since 1992 the FTC has cautioned against such unqualified claims in its GUIDES FOR THE USE OF
ENVIRONMENTAL MARKETING CLAIMS, commonly referred to as the FTC Green Guide. The FTC never outlawed environmental claims in its guide; rather it required that companies qualify their claims.
The general guideline provided in the FTC Green Guide:
It is deceptive to misrepresent, directly or by implication, that a product, package or service offers a general environmental benefit. Unqualified general claims of environmental benefit are difficult to interpret, and depending on their context, may convey a wide range of meanings to consumers. In many cases, such claims may convey that the product, package or service has specific and far-reaching environmental benefits.
Specific guidelines for claims of biodegradability from the FTC Green Guide:
Claims of degradability, biodegradability or photodegradability should be qualified to the extent necessary to avoid consumer deception about: (1) the product or package's ability to degrade in the environment where it is customarily disposed; and (2) the rate and extent of degradation.
These qualifications have become very relevant in light of studies conducted by William L. Rathje, Professor Emeritus at the University of Arizona. Dr. Rathje's studies touched on how much or how little things that we consider to be biodegradable actually biodegrade in a landfill, what I would call the most customary method of disposal in the United States. The Department of Energy references his work on its kids' page by stating:
After digging into three landfills in Arizona, California, and Illinois, Rathje found out that there are a lot of garbage myths. He and his team discovered that it takes a lot longer for paper and other organic wastes to decompose than people previously thought.
Rathje and his team found newspapers from the late 1970s that were still readable. He found “organic debris—green grass clippings, a T-bone steak with lean and fat, and five hot dogs—[that] looked even better!”
Rathje’s research suggests that for some kinds of organic garbage, biodegradation goes on for a while and then slows to a standstill. For other kinds, biodegradation never gets under way at all.
So if we make the generalizations that most Americans dispose of their trash in landfills and that biodegradation occurs very slowly in landfills, if at all, it's easier to understand why very few products biodegrade when we throw them away.
Why does this matter?
Well, in the age of green marketing, if two nearly identical products with similar prices are placed side by side, it is likely that if one is labeled "biodegradable," consumers will buy it for a perceived value-added benefit to the environment and society. Basically they feel they are getting something additional for free. This advantage puts pressure on other manufacturers to follow suit to avoid a loss of market share, and before long no one knows if a product is better for the environment or not because they all say "me too."
Are biodegradable products a myth?
Actually there are biodegradable products, but in most cases they are more accurately called compostable. Modern compostable products need to be returned to an industrial compost to guarantee that they will biodegrade within a reasonable time. Many cities and waste management companies have curbside compost service. Some local services are very basic and will only accept yard waste in paper bags, while others, like those in the cities of San Francisco and Oakland, accept compostable plastics. Once products are composted they return back to soil that has no toxins and is suitable for growing plant life. Look for products that have a seal of approval from the Biodegradable Products Institute (BPI). I recommend visiting the BPI website to become more familiar with their certification logo. There are many more products claiming to be compostable every day, but very few have passed the standards of the BPI. If those products enter the composting cycle it is possible that they could lead to contamination.
Now I want to say that I have not had an opportunity to review any testing data related to the products addressed by the FTC. But it is likely that if they met the standards set by the BPI, Kmart could have simply called them compostable and pursued a BPI logo to avoid all of this. This would likely have allowed them to market them as "biodegradable in industrial composts". Although to many this would seem wordy or of little value, to people in San Francisco and other cities that compost it means a lot, and they are the only ones likely to benefit anyway. So there was a market to reach if properly addressed.
So there you have it. More than just a lesson that Kmart has learned, I hope this is a lesson that society will learn from.
It is our world, so remember to reduce, reuse and recycle (compost) responsibly where you can, or the three Rs may be replaced with ban, tax and prohibit.
There is a test for biodegradation, OECD 311, that Dyna-E conducted on their product. Can you go to , scroll down to biodegradable? You'll find the results of this test there. Can you comment?
|
Weird Critter Profile: Virginia Opossum
Sometimes the most common species are the oddest. The Virginia Opossum (Didelphis virginiana) definitely falls into this category. Almost everyone has come across one of these fuzzy, gray mammals at some point or another, whether it was scurrying across the backyard, helping itself to the smorgasbord in your trash can, or even smushed on the side of the road, the unfortunate victim of a car. But few people realize how truly bizarre an animal the "possum" is!
I present to you a list of odd facts about the Virginia Opossum:
• Its common name is confusing. This animal is found in much of North and Central America, not just Virginia.
• It's the only marsupial found in North America. The majority of marsupial species are found in Australia and include kangaroos, koalas and Tasmanian devils.
• Opossums are one of the few animals that seem to be faring well in the face of human development. Naturally a creature of woodlands, opossums are extremely adaptable and use human-made structures for shelter, and are happy to feed on human food and garbage. In fact, they'll eat just about anything.
• Opossums are immune to snake venom, and often feed on copperheads, rattlesnakes and other venomous snake species.
• Global warming could actually be helping this animal expand its range northward. Their hairless ears, feet and tails often get frostbitten in cold weather, which can be fatal, but as the average winter temperatures in North America rise, the opossum is surviving further north and can now be found in parts of Canada.
• Opossums have prehensile tails that help them grip onto tree branches. Young animals are light enough to hang from their tails.
• Opossums have huge litters of tiny babies, sometimes more than 20! When born, the babies are less than half an inch long and haven't yet developed fur, eyes or even hind legs. The helpless babies must find their mother's pouch and find a nipple to nurse. Unfortunately for those born in larger litters, there are only 13 nipples. Those that don't find a nipple don't survive. When they get too big for the pouch, the babies ride on their mom's back.
• Male opossums have a forked penis.
• Opossums are an ancient species. They've been around for over 70 million years, which means they coexisted with dinosaurs!
• Opossums have an opposable, thumb-like digit called a hallux on their hind feet.
• The term "playing possum" is based on truth. When threatened, an opossum will writhe around in mock death throes, froth at the mouth and then lie on its back with its tongue hanging out and eyes rolled into its head, playing dead. It also defecates and releases a stinky green gel from its anus. Would-be predators usually leave the seemingly dead opossum alone.
• Opossums have 50 teeth, more than any other North American mammal.
• They are very short lived animals. In the wild they often don't survive more than a couple of years although in captivity they can make it to a ripe old age of four or five.
• Opossums have a lower body temperature than other mammals. A temperature in the low 90s is normal for them (98.6 degrees is normal for humans).
• Opossums are highly resistant to the rabies virus. This could be due to their low body temperature, which is not warm enough for the virus to replicate in the animal's body.
Photo courtesy of Patricia O'Tuama via Wikimedia Commons.
|
Low-latency computing
BOINC was originally designed for high-throughput computing, and one of its basic design goals was to minimize the number of scheduler RPCs (in order to reduce server load and increase scalability). In particular, when a client requests work from a server and there is none, the client uses exponential backoff, up to a maximum backoff of 1 day or so. This policy limits the number of scheduler requests to (roughly) one per job. However, this backoff policy is inappropriate for low-latency computing, by which we mean projects whose tasks must be completed in a few minutes or hours. Such projects require a minimum connection rate, rather than seeking to minimize the connection rate.
For example, if you need to get batches of 10,000 jobs completed with 5-minute latency, and each job takes 2 minutes of computing, you'll need to arrange to get 10,000 scheduler requests every 3 minutes (and you'll need a server capable of handling this request rate).
The minimum connection rate
Suppose that, at a given time, the project has N hosts online, and that each host has 1 CPU that computes at X FLOPS.
Suppose that the project's work consists of 'batches' of M jobs. Each batch is generated at a particular time, and all the jobs must be completed within time T. For simplicity, assume that a batch is not created until the previous batch has been completed, and that each host is given at most one job from each batch. Suppose that each job takes Y seconds to complete on the representative X-FLOPS CPU.
Clearly, for feasibility we must have Y <= T and N >= M. Let W = T - Y; a job must be dispatched within W seconds if it is to be completed within T.
Now suppose that each host requests work every Z seconds. Assume Z is small enough so that at least M requests arrive in any given period of length W. (TODO: figure out what this is, given a Poisson arrival process).
Then, within W seconds of the batch creation, all of the jobs have been sent to hosts, and within T seconds (assuming no errors or client failures) they have been completed and reported. Note: this is a simplistic analysis, and doesn't take into account multiprocessors, hosts of different CPU speed, the possibility of sending multiple jobs to one client, the ability for Z to vary between hosts, and probably many other factors. If someone wants to analyze this in more generality, please do!
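To make the arithmetic above concrete, here is a minimal Python sketch (not BOINC code; the function name and the 20,000-host figure in the example run are illustrative assumptions) of the feasibility checks and the aggregate request rate a batch implies:

def min_request_rate(n_hosts, batch_size, latency_bound, job_seconds):
    """Return (dispatch_window_W, requests_per_second) needed for one batch.

    n_hosts       -- N, hosts online
    batch_size    -- M, jobs per batch (at most one job per host per batch)
    latency_bound -- T, seconds from batch creation to required completion
    job_seconds   -- Y, runtime of one job on the representative X-FLOPS CPU
    """
    if job_seconds > latency_bound:
        raise ValueError("infeasible: need Y <= T")
    if n_hosts < batch_size:
        raise ValueError("infeasible: need N >= M")
    window = latency_bound - job_seconds      # W = T - Y
    # At least M scheduler requests must arrive within any W-second window,
    # so the aggregate request rate must be at least M / W per second.
    return window, batch_size / window

# Worked example from above: M = 10,000 jobs, T = 5 min, Y = 2 min, and an
# assumed 20,000 hosts online. W = 180 s, so the project must sustain roughly
# 56 scheduler requests per second (10,000 requests every 3 minutes).
print(min_request_rate(20_000, 10_000, 300, 120))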
How to do low-latency computing
The key component in the above is the ability to control Z, the time between requests for a given host. Starting with version 5.6 of the BOINC client, it is now possible to control this: each scheduler reply can include a tag telling the client when to contact the scheduler again. By varying this value, a project can achieve the rate of connection requests necessary to achieve its latency bounds. Projects can currently add this tag to the project configuration file to ensure that all clients will perform an RPC no later than X seconds after their last RPC with the project. In the future it would be nice to expand the capability of this field and make it dynamic. The following scheduler changes would need to be done (a rough sketch of the resulting delay computation follows the list):
• Keep track of how many active hosts you have (this will change over time).
• Keep track of the performance statistics of these hosts (means and maybe variances of their FLOPS, for example).
• Parameterize your workload: how many jobs per batch, latency bound, FLOPs per job, etc.
• Do the math and write the code for figuring out what the RPC delay should be for a given host (this is dynamic - it will change with the number of active hosts).
• Change the scheduler so that it uses this value so that it only sends the appropriate number of jobs.
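As a rough sketch of the last two items (hypothetical code, not the actual BOINC scheduler; the 20 percent safety margin and the minimum-delay floor are assumptions, not BOINC defaults), the delay placed in each scheduler reply could be derived from the tracked host count and the batch parameters like this:

def rpc_delay_for_reply(active_hosts, batch_size, latency_bound, job_seconds,
                        min_delay=10.0):
    """Seconds the client should wait before its next scheduler RPC, chosen so
    that active_hosts polling every Z seconds still produce at least
    batch_size requests within the dispatch window W = T - Y."""
    window = latency_bound - job_seconds
    if window <= 0 or active_hosts < batch_size:
        raise ValueError("infeasible workload for the current host pool")
    # active_hosts * window / Z requests arrive per window; requiring this to
    # be >= batch_size gives Z <= active_hosts * window / batch_size.
    z = active_hosts * window / batch_size
    return max(min_delay, 0.8 * z)   # 20% margin for stragglers and errors

# Recomputed for each reply as hosts join or leave; for example,
# rpc_delay_for_reply(20_000, 10_000, 300, 120) returns 288.0 seconds.

A real implementation would also decide how many jobs to attach to each reply and would clamp the delay as the active-host estimate fluctuates.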
If you're interested in helping add these features to BOINC, please contact us.
|
Coal Region
From Wikipedia, the free encyclopedia
This article is about the coal-mining region in northeastern Pennsylvania. For coal-mining regions in general, see coal-mining region.
Counties of the Coal Region of Pennsylvania, known for anthracite mining.
Anthracite Coal fields of Pennsylvania
The Coal Region is a historically important coal-mining area in Northeastern Pennsylvania in the central Appalachian Mountains, comprising Lackawanna, Luzerne, Columbia, Carbon, Schuylkill, Northumberland, and the extreme northeast corner of Dauphin counties. Academics distinguish between the North Anthracite Coal Field and the South Anthracite Coal Field (both in Pennsylvania),[1] the lower region also bearing the further classification of Anthracite Uplands[2] in physical geology.
The region's combined population was 890,121 people as of the 2010 census. Many of the place names in the region are from the Delaware Indians (the self-named Lenape peoples) and the powerful Susquehannock nation, an Iroquoian people who dominated the Susquehanna valley in the 16th and 17th centuries, when Dutch, Swedish and French migrants were exploring North America and founding settlements along the Atlantic Seaboard.
The Coal Region, or Pennsylvania Anthracite Region, is home to the largest known deposits of anthracite coal found in the Americas, with an estimated reserve of seven billion short tons.[3] It is these deposits that provide the region with its nickname. The discovery of anthracite coal was first made in Schuylkill County by a hunter in 1791, 16 years after the North Field saw its first mine.
Anthracite, or hard coal
The Region lies north of the Lehigh Valley and Berks County Regions, south of the Endless Mountains, west of the Pocono Mountains, and east of the region known in Pennsylvania as the Susquehanna Valley. The Region lies at the northern edge of the Ridge-and-Valley Appalachians, and draws its name from the vast deposits of anthracite coal that can be found under several of the valleys in the region. The Wyoming Valley is the most densely populated of these valleys, and contains the cities of Wilkes-Barre and Scranton. Hazleton and Pottsville are two of the larger cities in the southern portion of the region. The Lehigh and Schuylkill Rivers both originate within the region, while the much larger Susquehanna River skirts the Northern edge.
The money route, from the North Coal Field to New York City: eventually the route of the Delaware and Hudson Railroad, alongside the same company's canal, built in 1828.
The population of the Amerindian tribesmen of the Susquehannock nation was reduced by 90 percent[4] in three years by a plague of diseases and possibly war,[4] opening up the Susquehanna Valley and all of Pennsylvania to settlement as the tribe was all but eliminated; the survivors were adopted[4] into the related but quasi-hostile Iroquois by formal treaty in the 1670s.[4]
Settlement in the region predates the American Revolution: both Delaware and Susquehannock power had been broken by disease and Indian-on-Indian warfare before the British took over the Dutch and Swedish colonies and settled Pennsylvania. The first discovery of anthracite coal occurred in 1762, and the first mine was established in 1775 near Pittston, Pennsylvania.[5] In 1791 anthracite was discovered by a hunter atop Pisgah Ridge, and by 1792 the slap-dash Lehigh Coal Mining Company began sporadically producing and shipping coal to Philadelphia via Mauch Chunk from the Southern Anthracite Field and Summit Hill, Pennsylvania, built atop the line between Schuylkill County and what would be renamed Carbon County. By 1818, customers fed up with the company's inconsistent management leased the Lehigh Coal Mining Company and founded the Lehigh Navigation Company; construction soon began on locks and dams along the rapids stretches of the Lehigh River, works later known as the Lehigh Canal (finished in 1820).
In 1822, the two companies merged as the Lehigh Coal & Navigation Company (LC&N) and by 1824 were turning heads with the volume of coal shipped down the Lehigh and Delaware Canals. Meanwhile, three brothers had similar ideas from near the turn of the century, and about the same time began mining coal in Carbondale, 15 miles (24.1 km) northeast of Scranton, but high enough to run a gravity railroad to the Delaware River and feed New York City via the Delaware and Hudson Canal. Pennsylvania began the Delaware Canal to connect the Lehigh Canal to Philadelphia and environs, while also funding a canal across the Appalachians' Allegheny Mountains to Pittsburgh. In 1827, LC&N built the second railroad in the country, the Mauch Chunk Switchback Railway, a gravity railroad running from Summit Hill to Mauch Chunk.
Population rapidly grew in the period following the American Civil War, with the expansion of the mining and railroad industries. English, Welsh, Irish and German immigrants formed a large portion of this increase, followed by Polish, Slovak, Ruthenian, Ukrainian, Hungarian, Italian, Russian and Lithuanian immigrants. The influence of these immigrant populations is still strongly felt in the region, with various towns possessing pronounced ethnic characters and cuisine.
The anthracite mining industry loomed over the region until its decline in the 1950s. Strip mines and fires, most notably in Centralia, remain visible. Several violent incidents in the history of the U.S. labor movement occurred within the coal region, as this was the location of the Lattimer Massacre and the home of the Molly Maguires.
The Knox Mine Disaster in 1959 served as the death knell for deep mining; almost all current anthracite mining is done via strip mining. Tours of underground mines can be taken in Ashland, Scranton, and Lansford, each of them also having museums dedicated to the mining industry. Also evident are patch towns, small villages affiliated with a particular mine. These towns, with populations less than 500, were owned by the mining company. Though no longer company owned, many hamlets survive; one of them, the Eckley Miners' Village, is a historical park owned and administered by the Pennsylvania Historical and Museum Commission, which seeks to restore patch towns to their original state.
Famous people from the Coal Region
References
1. ^ Healey, Richard (2005) "The Breakers of the Northern Anthracite Coalfield of Pennsylvania", 'Vol. 1, Major breakers prior to 1902'. Dept of Geography, University of Portsmouth, Portsmouth. quote="Northern Anthracite Coalfield of Pennsylvania" (implying there is a Southern Anthracite Coalfield of Pennsylvania)
2. ^ Sevon, W. D., compiler, 2000, "Physiographic provinces of Pennsylvania", Pennsylvania Geological Survey of the Pennsylvania Department of Conservation and Natural Resources, Pennsylvania Geological Survey, 4th ser., Map 13, scale 1:2,000,000.
3. ^ PA DEP Website
4. ^ a b c d see facts cited and cites of American Heritage book of Indians (1961) in articles: Iroquois, Susquehannock
5. ^ U.S. Department of Labor, Mine Safety and Health Administration.
|
From Uncyclopedia, the content-free encyclopedia
The Snipe is an omnivorous, terrestrial mammal of the Southern United States. It is indigenous to the Island of Caucasia, and the Himalayas.
Origins
The Snipe is indigenous to the Island of Caucasia, off the coast of Western Europe. Whites brought the snipe to the United States in the 17th century and turned it loose in the forests of the South. Nowadays, the snipe is very common in states such as Ohio, West Virginia, Tennessee, Kentuckistan, Utah, Wisconsin, Alabama, and North Carolina.
edit Appearance & Behavior
The snipe is a mammal and resembles a furry toad. Snipes tend to be brown and green, but have been known to change color to white in the snow. They have red eyes and very sharp claws on all four legs. There have also been many sightings of invisible snipes, who are known to disappear instantly when spotted by amateur snipe hunters. Snipes are known to leap up to twenty feet forward. The snipe is very intelligent and vocal. They communicate with variations of a high-pitched squeal. During mating season (Octorember), the calls of the snipe are commonly heard in the woods, especially in areas where trees are common.
Diet
Snipes are omnivorous creatures that forage on roots and garbage. Snipes are attracted to the scent of nylon and/or mass amounts of toothpaste. Though toothpaste usually comes in sparkly colors and can only be seen easily in the reflections of a flashlight, they can also be seen by the snipe's beady red eyes. So be sure to cover up the toothpaste you smear on your face with an arrangement of leaves providing camouflage.
Snipes have been known to eat pickled pig feet from random convenience stores near their nesting locations. Generally, if they are to choose these snacks, it is between the hours of 2:23AM and 2:24AM. They often have a Dr. Pepper or Rockstar energy drink with their pickled pig feet. It is rare, but they have been know to pick up a 40oz. Steel Reserve lager, also known as a 211. This is generally because they are short and grab whatever is on the bottom shelf.
Snipe Hunting
Snipe hunting originated with the Caucasians, and has become a popular pastime with many rural Americans, especially Rednecks and god-dern people from Rabun County, mostly Dick Cheney. Snipe hunting is often a form of initiation for new snipe hunters. This practice has attracted a lot of controversy due to a high rate of snipe hunting injuries, which often occur during accidents.
Snipe Hunting Techniques
Snipe hunting is considered an art form by many. Ancient Caucasians began the practice by placing themselves in an open field at night, and holding open a large, black trash bag. Other hunters would circle around the field and drive all of the nearby snipes toward the hunters in the field. The snipes would be easily identifiable in the dark due to their red eyes. The hunters in the field would use the red eyes of the snipes to find them and capture them with the trash bags. The snipes must be killed by having their hearts stopped. This can be accomplished in many ways.
The mountain snipe is a bit harder to catch, and the technique is a bit more complicated. In the fall, the mountain snipe makes its nest underground, so that it may hibernate for the winter. They usually do this around suburban homes, as the kids running around and cars going by tend to drive away predators. Even in hibernation, the snipes are light sleepers, especially at night. This is because there isn't as much human activity going on at night, which means that there's nothing keeping away predators. When the sun goes down, and you're sure most of the people in the neighborhood are asleep, grab a pot and spoon and run around the people's yards banging the pot and spoon together and scream "COODYHOO!" as loud as you can. This will drive the snipes out of their nests, making them easy prey. Some people might lean out of their windows and scream at you to shut up. Ignore them. Remember that people are the snipe's best friend, and therefore, your enemy. The snipe might try to bolt towards the houses in hopes of finding a hole to crawl into, so you must trick them into thinking there are predators around the houses by throwing rocks or trash at them. If you see something moving in the grass, jump on it and begin hitting it with your spoon until it stops moving. If the police show up and tell you to stop, just inform them that you are snipe hunting.
Controversy
Snipe hunting is a notoriously dangerous sport. No firearms are used, so snipes are often killed by hand. This results in many hunters being mauled. In 2004, there were 60,000 snipe maulings in the Southern United States alone.
|
From NPR - News Local/State -
As Biotech Seed Falters, Insecticide Use Surges In Corn Belt
Crop consultant Dan Steiner inspects a field of corn near Norfolk, Neb. (Dan Charles/NPR)
We've reported on earlier phases of this battle: The discovery of rootworms resistant to one type of genetically engineered corn, and an appeal from scientists for the government to limit the use of this new corn to preserve the effectiveness of its protection against rootworm.
This is a return to the old days, before biotech seeds came along, when farmers relied heavily on pesticides. For Dan Steiner, an independent crop consultant in northeastern Nebraska, it brings back bad memories. "We used to get sick [from the chemicals]," he says. "We'd dig [in the soil] to see how the corn's coming along, and we didn't use the gloves or anything, and we'd kind of puke in the middle of the day. Well, I think we were low-dosing poison on ourselves!"
For a while, biotechnology came to his rescue. Biotech companies such as Monsanto spent many millions of dollars creating and inserting genes that would make corn plants poisonous to the corn rootworm but harmless to other creatures.
The first corn hybrids containing such a gene went on sale in 2003. They were hugely popular, especially in places like northeastern Nebraska where the rootworm has been a major problem. Sales of soil insecticides fell. "Ever since, I'm like, hey, we feel good every spring!" says Steiner.
But all along, scientists wondered how long the good times would last. Some argued that these genes — a gift of nature — were being misused. (For a longer explanation, read my post from two years ago.)
Those inserted genes, derived from genes in a strain of the bacterium Bacillus thuringiensis, worked well for a while. In fact, the Bt genes remain a rock-solid defense against one pest, the European corn borer.
In parts of Illinois, Iowa, Minnesota and Nebraska, though, farmers are running into increasing problems with corn rootworms.
"You never really know for sure, until that big rain with strong wind, and you get the phone call the next morning: 'What's going on out there?'" says Steiner.
Entire hillsides of corn, with no support from their eaten-away roots, may be blown flat.
Monsanto has downplayed such reports, blaming extraordinary circumstances. But in half a dozen universities around the Midwest, scientists are now trying to figure out whether, in fact, the Bt genes have lost their power.
At the University of Nebraska, entomologist Lance Meinke is turning colonies of rootworms loose on potted corn plants that contain different versions of the anti-rootworm gene, to see how well they survive.
The larvae get to feed on the corn roots for about two weeks. The soil from each pot then is dumped into a kind of steel container. If the larvae are still alive, a bright light will drive them into little glass jars filled with alcohol. "They try to escape from the heat," says David Wangila, a graduate student who is managing this experiment.
If the rootworm-fighting genes in the corn are working well, no larvae should emerge.
But some have. Wangila points to one of the little glass jars. Inside, there are three nice plump corn rootworm larvae.
This is not good. Those insects, originally collected from a cornfield in Nebraska, were feeding on corn that contained the first rootworm-fighting gene that Monsanto introduced ten years ago. Technically, it's known as the Cry 3Bb gene.
Meinke and Wangila will compare the survival rate of these rootworms with others that have never been exposed to Bt. They're looking for signs that rootworms in the corn fields of Nebraska have evolved resistance to genetically engineered crops.
An identical experiment in Iowa, carried out more than a year ago, found corn rootworms resistant to the Cry 3Bb gene.
Nobody knows how widely those insects have spread, but farmers aren't waiting to find out. Some are switching to other versions of biotech corn, containing anti-rootworm genes that do still work. Others are going back to pesticides.
Steiner, the Nebraska crop consultant, usually argues for another strategy: Starve the rootworms, he tells his clients. Just switch that field to another crop. "One rotation can do a lot of good," he says. "Go to beans, wheat, oats. It's the No. 1 right thing to do."
Insect experts say it's also likely to work better in the long run.
Meinke, who's been studying the corn rootworm for decades, tells farmers that if they plant just corn, year after year, rootworms are likely to overwhelm any weapon someday.
The problem, Meinke says, is that farmers are thinking about the money they can make today. "I think economics are driving everything," he says. "Corn prices have been so high the last three years, everybody is trying to protect every kernel. People are just really going for it right now, to be as profitable as they can."
As a result, they may just keep growing corn, fighting rootworms with insecticides — and there's a possibility that those chemicals will eventually stop working, too.
|
The Invisible One
There was once a large Indian village situated on the border of a lake, --Nameskeek' oodun Kuspemku. At the end of the place was a lodge, in which dwelt a being who was always invisible. He had a sister who attended to his wants, and it was known that any girl who could see him might marry him. Therefore there were indeed few who did not make the trial, but it was long ere one succeeded.
As they entered the place she would bid them not to take a certain seat, for it was his. After they had helped to cook the supper they would wait with great curiosity to see him eat. Truly he gave proof that he was a real person, for as he took off his moccasins they became visible, and his sister hung them up; but beyond this they beheld nothing, not even when they remained all night, as many did.
There dwelt in the village an old man, a widower, with three daughters. The youngest of these was very small, weak, and often ill, which did not prevent her sisters, especially the eldest, treating her with great cruelty. The second daughter was kinder, and sometimes took the part of the poor abused little girl, but the other would burn her hands and face with hot coals; yes, her whole body was scarred with marks made by torture, so that people called her OOchigeaskw (the rough-faced girl). And when her father, coming home, asked what it meant that the child was so disfigured, her sister would promptly say that it was the fault of the girl herself, for that, having been forbidden to go near the fire, she had disobeyed and fallen in.
Now it came to pass that it entered the heads of the two elder sisters of this poor girl that they would go and try their fortune at seeing the Invisible One. So they clad themselves in their finest and strove to look their fairest; and finding his sister at home went with her to the wonted walk down to the water. Then when He came, being asked if they saw him, they said, "Certainly," and also replied to the question of the shoulder-strap or sled cord, "A piece of rawhide." In saying which, they lied, like the rest, for they had seen nothing, and got nothing for their pains.
When their father returned home the next evening, he brought with him many of the pretty little shells from which weiopeskool, or wampum, was made, and they were soon engaged napawejik (stringing them).
That day poor little OOchigeaskw', the burnt-faced girl, who had always run barefoot, got a pair of her father's old moccasins, and put them into water that they might become flexible to wear. And begging her sisters for a few wampum shells, the eldest did but call her "a lying little pest," but the other gave her a few. And having no clothes beyond a few paltry rags, the poor creature went forth and got herself from the woods a few sheets of birch bark, of which she made a dress, putting some figures on the bark. And this dress she shaped like those worn of old. So she made a petticoat and a loose gown, a cap, leggins, and handkerchief, and, having put on her father's great old moccasins,--which came nearly up to her knees,--she went forth to try her luck. For even this little thing would see the Invisible One in the great wigwam at the end of the village.
Truly her luck had a most auspicious beginning, for there was one long storm of ridicule and hisses, yells and hoots, from her own door to that which she went to seek. Her sisters tried to shame her, and bade her to stay home, but she would not obey; and all the idlers, seeing this strange little creature in her odd array, cried, "Shame!" But she went on, for she was greatly resolved; it may be that some spirit inspired her.
Now this poor small wretch in her mad attire, with her hair singed off and her little face as full of burns and scars as there are holes in a sieve, was, for all this, most kindly received by the sister of the Invisible One; for this noble girl knew more than the mere outside of things as the world knows them. And as the brown of the evening sky became black, she took her down to the lake. And erelong the girls knew that He had come. Then the sister said, "Do you see him?" And the other replied in awe, "Truly I do, --and He is wonderful." "And what is his sled string?" "It is," she replied, "the Rainbow." And great fear was on her. "But, my sister," said the other, "what is his bow-string?" "His bow-string is Ketaksoowowcht (the Spirit's Road, the Milky Way)."
"Legends & Lore of the American Indians" Edited by Terri Hardin
|
University of Minnesota Extension
Extension > Agriculture > Dairy Extension > Employees > What's in your job description?
What's in your job description?
Chuck Schwartau
Published in Dairy Star February 24, 2007
Managing the Farm
Managing the business needs to be part of the owner/manager's daily routine.
A recent column addressed the topic of job descriptions for dairy farm employees. Owner/managers should have a job description as well, and there are differences in what should be in that job description.
The first item should be putting a title on your job and getting key elements included in the description. For many dairy operators, this may involve making a significant attitude change in their thinking. Most consider themselves 'farmers'; and historically, farmers think of themselves as "do-ers". Farmers do things. The change needs to include the idea that farmers also manage.
Most dairy farmers are in the business because they like cows. There is another important aspect that needs to move up the priority ladder, though, and that is managing the business. A manager is responsible for the health of the business, and for generating wealth to sustain the business. Those management tasks cannot afford to be put off to the end of the day, or when time happens to come along. Management time needs to be given a high priority and become part of the work routine.
Dairy farms today, large and small, have a significant investment in cattle and facilities. They also have a tremendous production capacity in the value of milk that can be produced in a year. Managing that production facility cannot be done by the 'seat of your pants' anymore. Management needs to be a deliberate part of the owner/manager's daily routine.
The trouble comes when the owner thinks he or she has to spend the entire day doing physical work on the farm to show value to the farm. At a New Zealand South Island Dairy Event (SIDE), Owen Grieg, operations manager for Grieg Farming Ltd, made the following statement, "Not feeling guilty when you are not doing physical farm work is a hard skill to learn."
Dairy owner/managers need to embed in their minds the idea that management tasks are important to the health and wealth of the farm. They are as important as physical work. If sufficient quality time is not devoted to management, decisions are more likely made because they are quick, not because they are the right decisions.
The other people involved in the farm business also need to realize the value of the right person having time to manage. If others on the farm do not see the value of the manager spending time maintaining and analyzing records, visiting with consultants, or attending industry seminars, they may become critical of the manager who appears in their minds to be slacking off, and not holding up his or her end of the business.
This brings us back to the topic of job descriptions. As an owner/manager, take time to list the necessary tasks on the farm that only you can perform because you are the owner/manager. Consider such tasks as working with lenders, marketing, purchasing, dealing with regulatory agencies, planning for routine maintenance as well as major improvements or expansions, visiting with nutritionists and veterinarians for health management, hiring employees, staff meetings (family or hired labor), performance evaluations, attending seminars and workshops to enhance your skills, training other staff, and endless other tasks that may be specific to your farm. Larger farms may distribute some of the duties among other partners or even key employees.
All of those tasks are important to the success of a dairy farm, and not one of them relates directly to working with a cow. That is not to say owner/managers shouldn't work with the cattle, but it points out that there are many important tasks that need to be done to support the farm having cattle.
Once you have completed that management list for your job description, share it with others on the farm. They need to know the important tasks that you perform for the farm, even when they don't see you in the barn working side by side with them. If you look at the list and think, "I don't have time for all that," maybe you need to look at the workload on your farm and determine how to better balance the load and the labor supply. Hiring labor is never an easy decision, but it might be a good investment, freeing up the owner/manager to do a better job of managing. The gains on the farm by devoting adequate and quality time to management may well pay more than the cost of the hired labor.
What's in your job description? Take a look. Update it. Then follow it to make your dairy business more successful.
|
An Investigation of Jewish Ethnic Identity and Jewish Affiliation for American Jews
Article excerpt
Previous studies suggest that Jews have been left out of discussion in textbooks on multicultural counseling (Weinrach, 2002), professional psychological journals (Foley, 2007; Robbins, 2000), and multiculturalism in general (Langman, 1995; 1999; Schlosser, 2006). Some possible reasons for these omissions include the designation of Jews as just a religious group, the perception of Jews as just mainstream White Americans, and the perceived high economic status of Jews (Langman, 1999; 1995). However, such assumptions ignore important aspects of being Jewish in America. The label "White" implies a shared set of values, a common history, and the same sense of privilege among all members of the group. Unfortunately though, this kind of categorization may confuse race with culture and/or ethnicity, perpetuating thinking that marginalizes entire groups of people. Over the years, the Jewish people have become so assimilated into the American culture that their unique issues and concerns have been overlooked. This study attempts to address some of the unique concerns of this group by investigating the relationships among Jewish ethnic identity, Jewish affiliation, and well-being for American Jews.
Jews do not constitute a race because being Jewish is not a biological distinction (Casas, 1984). There are Jews of many different races in the world (Langman, 1999). Jews are best defined as an ethnic group because they share a common history, a language, a religion, a nation, and a culture (Casas, 1984). Whereas ethnic identity refers to a person's sense of belonging to a group, self-identification or affiliation is related to participation in activities of the group (Phinney, 1992). Most previous research intending to examine Jewish identity may actually be focused on Jewish affiliation (Himmelfarb, 1980). The current study makes a distinction between these two concepts.
Ethnic Identity Research
Previous studies on the effects of discrimination on individuals' self-esteem and well-being show that members of stigmatized groups do not necessarily have lower self-esteem than members of the majority group (Crocker & Major, 1989; Hoelter, 1983). The members of minority groups face a choice between accepting the majority views of them (which are usually negative) or rejecting these views in search of their own identity. This choice can create a psychological conflict, and thus some members of minority groups develop a negative self-identity and self-hatred (Phinney, 1989; Tajfel, 1978). Cross, Smith, and Payne (2002) discussed the concept of "buffering," which refers to the practice of using one's own ethnic identity as a shield against racism or other forms of discrimination from the majority society. Although Cross and colleagues (2002) discussed "buffering" in relation to African Americans, this concept can apply to other minority groups. In fact, Dubow, Pargament, Boxer, and Tarakeshwar (2000) found that higher scores on measures of Jewish ethnic identity were related to more ethnic-related coping strategies for early adolescents. Since one's racial or ethnic identity can serve as a protective shield against the negative views of the majority culture, those individuals who have stronger ethnic identities may be more successful in protecting themselves from internalizing negative messages coming from the oppressing group.
Phinney (1992) found that ethnic identity is correlated with self-esteem measures for high-school students. At the college level, this relationship was still present for ethnic minority students, but not for White students (Phinney, 1992). Another study found that for college students, correlations between self-esteem and ethnic identity were higher for ethnic minorities than for White students (Phinney & Alipuria, 1990). However, Phinney (1992) reported that White students who attended schools where Whites were in the minority showed the same pattern of relationship between ethnic identity and self-esteem as the minority students. …
Plato
Born: 428? BC
Died: 348? BC
Nationality: Greek
Categories: Philosophers
428? BC - Born in Athens, or Aegina, Greece, the son of Ariston and Perictione. An ancient Greek philosopher, the second of the great trio of ancient Greeks—Socrates, Plato, and Aristotle—who between them laid the philosophical foundations of Western culture.
399 BC - The democracy condemned Socrates to death, and Plato and other Socratic men took temporary refuge at Megara with Eucleides, founder of the Megarian school of philosophy.
388 BC - Plato states that he visited Italy and Sicily at the age of 40 and was disgusted by the gross sensuality of life there but found a kindred spirit in Dion, brother-in-law of Dionysius I, the ruler of Syracuse.
361-360 BC - Plato later paid a second and longer visit to Syracuse, still in the hope of effecting an accommodation; but he failed, not without some personal danger.
348? BC - He died in Athens.
The English Patient
Michael Ondaatje
Themes, Motifs, and Symbols
Nationality and Identity
Nationality and identity are interconnected in The English Patient, functioning together to create a web of inescapable structures that tie the characters to certain places and times despite their best efforts to evade such confinement. Almásy desperately tries to elude the force of nationality, living in the desert where he creates for himself an alternate identity, one in which family and nation are irrelevant. Almásy forges this identity through his character, his work, and his interactions with others. Importantly, he chooses this identity rather than inheriting it. Certain environments in the novel lend credence to the idea that national identity can be erased. The desert and the isolated Italian villa function as such places where national identity is unimportant to one's connection with others. Kip, who becomes enmeshed in the idea of Western society and the welcoming community of the villa's inhabitants, even dismisses his hyperawareness of his own racial identity for a time.
Ultimately, however, the characters cannot escape from the outside reality that, in wartime, national identity is prized above all else. This reality invades Almásy's life in the desert and Kip's life in the Italian villa. Desperate for help, Almásy is locked up merely because his name sounds foreign. His identity follows him even after he is burned beyond recognition, as Caravaggio realizes that the "English" patient is not even English. For Kip, news of the atomic bomb reminds him that, outside the isolated world of the villa, western aggression still exists, crushing Asian people as Kip's brother had warned. National identity is, then, an inescapable part of each of the characters, a larger force over which they have no control.
Love's Ability to Transcend Time and Place
One theme that emerges in the novel is that love, if it is truly heartfelt, transcends place and time. Hana feels love and connection to her father even though he has died alone, far from her in another theater of war. Almásy desperately maintains his love for Katharine even though he is unable to see her or reach her in the cave. Likewise, Kip, despite leaving Italy to marry in India, never loses his connection to Hana, whom he imagines thirteen years later and halfway across the world. Such love transcends even death, as the characters hold onto their emotions even past the grave. This idea implies a larger message—that time and place themselves are irrelevant to human connection. We see this especially in Almásy's connection to Herodotus, whose writings he follows across time through the desert. Maps and geography become details, mere artificial lines that man imposes on the landscape. It is only the truth in the soul, which transcends time, that matters in the novel.
Bodies
The frequent recurrence of descriptions of bodies in the novel informs and develops its themes of healing, changing, and renewal. The text is replete with body images: Almásy's burned body, Kip's dark and lithe body, Katharine's willowy figure, and so on. Each description provides not only a window into that character's existence; more importantly, it provides a map of that person's history. Almásy remembers the vaccination scar on Katharine's arm and immediately pictures her as a child getting a shot in a school gymnasium. Caravaggio looks at Hana's serious face and knows that she looks that way because of the experiences that have shaped her. Understanding the bodies of the different characters is a way to draw maps, to get closer to the experiences that have shaped and been shaped by identity. Bodies thus function as a means of physical connection between characters, tying them to certain times and places.
Dying in a Holy Place
The characters in the novel frequently mention the idea of "dying in a holy place." Katharine dies in a cave, a holy place to ancient people. Patrick, Hana's father, also dies in a holy place, a dove-cot, a ledge above a building where doves can be safe from predatory rats. Madox dies in a holy place by taking his life in a church in England. This idea recurs throughout the novel, but the meaning of "holy place" is complex. It does not signify a place that is 'holy' to individual people: Katharine hates the desert, Patrick hates to be alone, and Madox loses his faith in the holiness of the church. None of these characters, then, die in a location that is special to them. But the figurative idea of a 'holy place' touches on the connection between actual places and states of emotion in the novel. Emotionally, each of these characters died in a "holy place" by remaining in the hearts of people who love them. In The English Patient, geography is transcended; it is the sacredness of love that endures.
Reading
Reading recurs throughout the novel in various forms and capacities: Hana reads to Almásy to connect with him and to try to interest him in present life, Katharine reads voraciously to learn all she can about Cairo and the desert, and Almásy consistently reads The Histories by Herodotus to guide him in his geographical searches. In each of these instances, the characters use books to inform their own lives and to connect to another place or time. Reading thus becomes a metaphor for reaching beyond oneself to connect with others. Indeed, it is Katharine's reading of the story in Herodotus that makes Almásy fall in love with her. Books are also used to pass secret codes, as in the German spy's copy of Rebecca. In their interactions with books, the characters overlay the stories of their own lives onto the tales in the books, constructing multi-dimensional interactions between persons and objects.
The Atomic Bomb
The atomic bombs the United States drops on Japan symbolize the worst fears of western aggression. The characters in the novel try to escape the war and all its horrors by remaining with the English patient in a small Italian villa in the hills. Staying close to the patient, they can immerse themselves in his world of the past rather than face the problems of the present. The atomic bombs rip through this silence of isolation, reawakening the characters, especially Kip, to the reality of the outside world pressing in upon them. The bomb reminds them of the foolishness and power of nation-states, and of the violability of their enclosed environment.
The Italian Villa
In Chapter II, Hana reflects to herself that "there seemed little demarcation between house and landscape." Such an organic depiction of the villa is symbolically important to the novel. Straddling the line between house and landscape, building and earth, the villa represents both death and rebirth. War has destroyed the villa, making huge holes in walls and ceilings. But nature has returned to fill these holes, replacing the void with new life. Such an image mirrors the spiritual death and rebirth of the villa's inhabitants, the way they learn to live again after the emotional destruction of war.