It’s hard to overstate the significance of Kamala Harris’ election as vice president-elect of the United States. She will be the first woman, the first Black person and the first person of South Asian descent ever to occupy the office, a remarkable achievement that has been celebrated not just across America, but across the world, including in her grandfather’s hometown of Thulasendrapuram, India.
Born in Oakland, California, Harris grew up in an academic family deeply concerned with civil rights. Her mother was a breast cancer researcher and Indian immigrant, and her father a professor of economics from Jamaica. Her parents, both international students in the US, met through the Afro-American Association, which would go on to become a building block for the Black Panther Party. Though Harris grew up in California, she frequently visited family in India.
Her background as a second-generation immigrant in the United States will be familiar to many - in 2013, the Pew Research Center estimated that there were 20 million children of immigrants in America.
Harris attended Howard University, a historically black university (HBCU), graduating in 1986. From there she went to law school at the University of California, Hastings College of the Law, and passed the bar in 1990.
Her attendance at Howard, which is one of several US colleges with a rich history and tradition of black higher education, is likely to have influenced her support of HBCUs during her campaign for vice president. Harris and running mate Joe Biden have promised to support historically black and minority-serving institutions as part of their plan for higher education.
As president, Biden has pledged to “rectify the funding disparities” faced by these institutions, demonstrating the power that representation in government has to affect real policy. He has also promised that he will make these schools more affordable for their students, and invest $20 billion in their research infrastructure. It’s likely that Harris will play a key role in making these pledges become a reality.
Harris has a history of shattering glass ceilings. After college, she pursued a successful career in the law, becoming district attorney of San Francisco in 2003 - the first woman and first Black person to hold that office. In 2010 she became the first Black woman ever elected as California’s attorney general, and in 2016 she became only the second Black woman ever elected to the US Senate. For many Americans she is representative of the diversity and multiculturalism of the country, and she has paved the way for countless more women and people of colour to find their place in a system that has typically favoured white men - a make-up far from representative of the people of America.
Given Harris’ background and the Democrats’ more liberal stance on immigration, the Biden/Harris administration is likely to usher in much more accommodating policies for international students in America than we have seen over the last four years.
In July, Harris took to Twitter to condemn Trump’s threats to deport international students who were not on campus full-time (a problem for many during the coronavirus pandemic), calling them “a vital part of our communities and schools”.
She went on to say that it was “outrageous they are being threatened with deportation in the midst of a public health crisis”.
International students contributed $45 billion to the US economy in 2018, and are crucial to continuing America’s global dominance in the research and education sectors. Biden and Harris appear to recognise this benefit to American society, alongside the many cultural and intellectual benefits that supporting international students also brings. Harris has frequently called education a “fundamental right”, and is likely to bring this rhetoric to international students as well as domestic ones.
Nicole is a journalist and a Content Writer and Editor at Edvoy, based in Manchester.
Watching television while eating has been a guilty pleasure since the 1950s, when families began peeling back the tin foil on TV dinners. But the effects of simultaneously consuming food and media have become something to watch, too, as more people than ever struggle to keep off the pounds.
Researchers at Michigan State University recently concluded that when individuals eat while viewing, listening to or reading almost any form of entertainment media, they ingest about 150 more calories than when not using media.
The Culprit for Extra Calories
The interdisciplinary team from the Colleges of ComArtSci, Agriculture and Natural Resources also discovered that meals paired with media time contain higher amounts of protein, carbohydrates, fat and saturated fat than media-free meals—leading them to surmise that the extra calories came from larger portion sizes, not different food choices.
Assistant Professor Morgan Ellithorpe from the Department of Advertising and Public Relations recently published team findings in the September 2019 issue of Obesity. The paper is the first in a series of research projects launched in 2017 that focus on media use and health behavior, and is co-authored by Allison Eden, associate professor of communication, and Robin Tucker, assistant professor of food science and human nutrition.
“We know that media use is associated with negative long-term health outcomes like obesity, hypertension, depression and anxiety,” said Ellithorpe, project lead. “Our research wants to understand why that happens.”
Ellithorpe said that research outcomes didn’t necessarily indicate that participants in the study were making less healthy food choices while using media. On the contrary, participants seemed to be eating more of what they would normally eat at a single meal. And rather than eating less at the next meal to compensate for the excesses, participants didn’t—essentially adding the equivalent of a small snack to their daily food intake.
“If people were eating worse food, we would have only seen increases in fat and sugars,” she commented. “But since we saw an increase in proteins and carbs, it told us people were simply increasing their portion sizes when they combined mealtimes with media use.”
The bottom line, Ellithorpe said, is that people eat more when distracted by media—be it television, audio, streaming services or other entertainment content. Whether good food or bad, that cumulative intake of extra calories can result in extra inches to the waistline, particularly when adjustments to diet and exercise aren’t made.
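The cumulative arithmetic Ellithorpe describes is easy to sketch. As a rough, hypothetical illustration - one media-paired meal per day and the common 3,500-calorie-per-pound rule of thumb are my assumptions, not figures from the study:

```python
# Back-of-envelope sketch of how a small per-meal surplus accumulates.
EXTRA_KCAL_PER_MEAL = 150    # extra calories per media-paired meal (MSU estimate)
MEDIA_MEALS_PER_DAY = 1      # assumed frequency, for illustration only
KCAL_PER_POUND_FAT = 3500    # common rule-of-thumb approximation

extra_kcal_per_year = EXTRA_KCAL_PER_MEAL * MEDIA_MEALS_PER_DAY * 365
pounds_per_year = extra_kcal_per_year / KCAL_PER_POUND_FAT

print(f"{extra_kcal_per_year:,} extra calories/year")           # 54,750
print(f"~{pounds_per_year:.1f} lb/year if never compensated")   # ~15.6 lb
```

Real-world gain would be far smaller, since metabolism and behaviour partly compensate; the point is only that a snack-sized surplus compounds when no adjustment is made.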
“When you eat in front of the TV or use other media, you’re less in tune with your appetite,” Ellithorpe said. “You might overlook how much you’re eating or not remember what you just ate. That doesn’t typically happen at meals where food is the sole focus.”
Three Squares a Day
The MSU study examined participants’ habits in their homes, setting it apart from similar research conducted in labs. Participants recorded their food intake and media use in daily diaries over the course of three days. Fifty-eight people from mid-Michigan were selected, ranging in age from 19 to 66. About 75 percent of the sample were female. The average age was 26.
“Our diary study improves on the methods of previous research because it allowed people to be in the comfort of their home choosing their own food and media,” Ellithorpe said. “It was more than a one-shot deal in a controlled setting where food and media were selected by researchers.”
Ellithorpe hopes the research can contribute to healthier habits—considering 70 percent of Americans are overweight or obese according to 2017 findings from the Centers for Disease Control and Prevention.
“Obesity is a multi-faceted issue and media use is one small piece,” she said. “But compared to things like exercise and changes in diet, it’s relatively easy to control. It’s pretty simple. Just don’t eat while using media, and if you do, choose your portion sizes beforehand so you are less likely to overeat.”
By Ann Kammerer
Your Child at 3 Years
WHAT YOUR TODDLER CAN DO
At this age your child can probably:
Begin playing with other kids but still have problems sharing…
Begin counting and know some colors…
Ask many, many questions!…
Make up stories…
Children have vivid imaginations and like playing grown up…
Have trouble distinguishing fantasy from reality…
Be understood by strangers most of the time…
Understand “same” and “different”…
Become more independent…
Separate easier from parents…
What you can do with your toddler
Give your child a chance to play with other kids at a day care center or in your neighborhood.
Continue to give your child books and records. You can also let him play with paints, crayons and clay.
Play interactive and easy games like memory games, Candyland, matching colors and shapes. Take him to the playground and let him safely explore his own motor skills. Let him scribble and paint with crayons, markers and watercolors.
Be patient with your child’s questions and stories. Answer them simply and honestly.
Encourage your child to dress and feed himself. Have him take some responsibility at home. He can begin to put away his toys or help set the table.
Being a parent is enjoyable but hard work. There is so much to learn! Sometimes it helps to talk to other parents or get some advice from someone outside the family. If you would like to find out more, call this number and ask about parenting programs in your area.
Baby teeth are important and need care. Babies and young children need their teeth to eat solid foods and to hold space in their mouths for the permanent teeth to come later.
There are some simple things that you can do to care for your child’s teeth.
Toothpaste that contains fluoride may harm your child if he eats it. Keep the quantity of toothpaste to the size of a pea.
Older children can use toothbrushes but will need help from you, as it’s easy for them to miss the backs of teeth and their molars.
By age three, it is a good idea to take your child to the dentist. Your pediatrician may be able to refer you to a dentist in your area.
The dentist will examine your child’s mouth and teeth. His teeth will be cleaned and the dentist may take X-rays to see how facial bones and teeth are developing.
A child’s first trip to the dentist can be a scary experience, and it helps to prepare your child by telling him that the dentist will look at and count his teeth and possibly clean them. The dentist can help you explain the rest.
Be aware of your own attitude/anxieties regarding the dentist as your child may well pick up on this.
Beginning good habits in childhood leads to good habits in adulthood.
Safety is always important. Children this age can get into medicine bottles and cleaning fluids. Remember to keep all dangerous materials in locked cabinets or above your child’s reach.
All cabinet and storage doors should have latches on them.
Your child is starting to express his independence but he still needs constant supervision, especially when he’s playing outside.
He is in danger of being hit by a car when he darts out into the street while playing. Show him the curb and teach him to always stop before the curb. Teach him to never cross the street without a grown-up.
Don’t allow him to ride his tricycle in the street. Instead take him to the playground or park to play.
Don’t rely on older siblings to watch younger children while they play outside. No one watches your child as closely as you do.
It’s a good idea to have your child wear a helmet when he learns to ride a tricycle. It will protect him during falls and get him in the habit of wearing one as he gets older.
If you take your child as a passenger on an adult bicycle, he needs an approved safety helmet as well.
POISON CONTROL CENTER
Feeding a fussy eater
Your toddler tests his independence by choosing what to eat.
This is why your child’s eating habits are probably unpredictable. This is very normal behavior for a three year old. Here are some ideas to help handle a fussy eater.
Don’t force your child to eat. This only sets up a power struggle between you and your child.
Children don’t need to eat three balanced meals every day as long as their weekly diet is balanced.
To help your child eat foods he doesn’t like, mix them with foods he does like.
Don’t force your child to empty his plate. Small portions won’t seem so overwhelming to your child. He can have a second helping if he’s still hungry.
Offer food when your child is hungry, not as a pacifier, reward or punishment.
Offer milk with his meals. It will help to fulfill your child’s calcium needs. If he doesn’t like milk, try yogurt or cheese as substitutes to provide the calcium he needs to help build strong bones and teeth. Children should be drinking 16 oz per day, but more than 24 oz per day may interfere with eating solids.
Experiment with different ways of cooking foods. That may be enough to get your child to eat them.
If your child is hungry and wants to eat 5-6 times a day, let him. Serve snacks that have high nutritional value like fruits and vegetables instead of candy and cookies.
Make meals a family time. Let your child be a kitchen helper.
Sit down together as a family and have each family member share something they did that day (even your toddler). This will help to make meal times more of an event to your fussy eater.
Foods To Avoid Below Age 4
Fun Foods for Kids
Use cookie cutters to make sandwiches into fun shapes.
Make finger sized food like small chunks of cheese or fruit.
Make an “egg in a basket”. Cut a hole into the middle of a piece of bread. Pour a raw egg into the hole in the bread and fry both the bread and the egg together.
Helping your child learn
Children learn through play. Three year olds want their parents to play with them. They like family events such as birthdays and holidays. Here are a few ideas on how to help your child learn from play.
Time, Weather and Seasons
Children feel more in control when they know what is going to happen. For example, they know that their birthday is in five days. Help your child to understand time better by making a calendar pad so that he can tear off a page each day. Decorate the calendar with pictures for holiday and special events.
Parents should count together with their child to teach the correct number sequence. Anything can be counted—pieces of cheese, clapping, brushing teeth or coins. Make counting a part of your everyday routine with your child. He may also like to try counting backwards.
Three year olds love to explore and experiment, but they usually are pretty messy. The bathtub is a fun place to play with water, shaving cream, ice cubes, soap bubbles or soap paints. Outside is also a good place for messy activities. Try finger paint with your child on a plastic tablecloth.
How about making a mud pie or building a sandcastle? Remember that water and outdoor play both require your close supervision.
Spend time reading picture books to your child. Talk about the emotions that the characters are feeling. A puppet may help to discuss the story. Books with cassette tapes may also help your child become more interested in reading. By reading to your child, you will help him learn to read. Ask him to identify sounds and letters. Try using alphabet blocks when you do this.
Use the Public Library System. Children’s librarians are eager to help.
Limit television viewing and total screen time to less than 2 hours a day. Watch programs with him so you know he is only watching programs that you approve of.
Before his next checkup, your child may:
Tell tall tales…
Throw a ball overhead…
Ask many questions…
Hop on one foot…
Be aware of his own gender…
Be able to dress himself…
And take care of his toilet needs…
Now if the terms 'continuous', 'in contact', and 'in succession' are understood as defined above (things being 'continuous' if their extremities are one, 'in contact' if their extremities are together, and 'in succession' if there is nothing of their own kind intermediate between them), nothing that is continuous can be composed of indivisibles: e.g. a line cannot be composed of points, the line being continuous and the point indivisible. For the extremities of two points can neither be one (since of an indivisible there can be no extremity as distinct from some other part) nor together (since that which has no parts can have no extremity, the extremity and the thing of which it is the extremity being distinct).
Moreover, if that which is continuous is composed of points, these points must be either continuous or in contact with one another: and the same reasoning applies in the case of all indivisibles. Now for the reason given above they cannot be continuous: and one thing can be in contact with another only if whole is in contact with whole or part with part or part with whole. But since indivisibles have no parts, they must be in contact with one another as whole with whole. And if they are in contact with one another as whole with whole, they will not be continuous: for that which is continuous has distinct parts: and these parts into which it is divisible are different in this way, i.e. spatially separate.
Nor, again, can a point be in succession to a point or a moment to a moment in such a way that length can be composed of points or time of moments: for things are in succession if there is nothing of their own kind intermediate between them, whereas that which is intermediate between points is always a line and that which is intermediate between moments is always a period of time.
Again, if length and time could thus be composed of indivisibles, they could be divided into indivisibles, since each is divisible into the parts of which it is composed. But, as we saw, no continuous thing is divisible into things without parts. Nor can there be anything of any other kind intermediate between the parts or between the moments: for if there could be any such thing it is clear that it must be either indivisible or divisible, and if it is divisible, it must be divisible either into indivisibles or into divisibles that are infinitely divisible, in which case it is continuous.
Moreover, it is plain that everything continuous is divisible into divisibles that are infinitely divisible: for if it were divisible into indivisibles, we should have an indivisible in contact with an indivisible, since the extremities of things that are continuous with one another are one and are in contact.
The same reasoning applies equally to magnitude, to time, and to motion: either all of these are composed of indivisibles and are divisible into indivisibles, or none. This may be made clear as follows. If a magnitude is composed of indivisibles, the motion over that magnitude must be composed of corresponding indivisible motions: e.g. if the magnitude ABG is composed of the indivisibles A, B, G, each corresponding part of the motion DEZ of O over ABG is indivisible. Therefore, since where there is motion there must be something that is in motion, and where there is something in motion there must be motion, therefore the being-moved will also be composed of indivisibles. So O traversed A when its motion was D, B when its motion was E, and G similarly when its motion was Z. Now a thing that is in motion from one place to another cannot at the moment when it was in motion both be in motion and at the same time have completed its motion at the place to which it was in motion: e.g. if a man is walking to Thebes, he cannot be walking to Thebes and at the same time have completed his walk to Thebes: and, as we saw, O traverses the partless section A in virtue of the presence of the motion D. Consequently, if O actually passed through A after being in process of passing through, the motion must be divisible: for at the time when O was passing through, it neither was at rest nor had completed its passage but was in an intermediate state: while if it is passing through and has completed its passage at the same moment, then that which is walking will at the moment when it is walking have completed its walk and will be in the place to which it is walking; that is to say, it will have completed its motion at the place to which it is in motion.
And if a thing is in motion over the whole ABG and its motion is the three D, E, and Z, and if it is not in motion at all over the partless section A but has completed its motion over it, then the motion will consist not of motions but of starts, and will take place by a thing's having completed a motion without being in motion: for on this assumption it has completed its passage through A without passing through it. So it will be possible for a thing to have completed a walk without ever walking: for on this assumption it has completed a walk over a particular distance without walking over that distance. Since, then, everything must be either at rest or in motion, and O is therefore at rest in each of the sections A, B, and G, it follows that a thing can be continuously at rest and at the same time in motion: for, as we saw, O is in motion over the whole ABG and at rest in any part (and consequently in the whole) of it. Moreover, if the indivisibles composing DEZ are motions, it would be possible for a thing in spite of the presence in it of motion to be not in motion but at rest, while if they are not motions, it would be possible for motion to be composed of something other than motions.
And if length and motion are thus indivisible, it is neither more nor less necessary that time also be similarly indivisible, that is to say be composed of indivisible moments: for if the whole distance is divisible and an equal velocity will cause a thing to pass through less of it in less time, the time must also be divisible, and conversely, if the time in which a thing is carried over the section A is divisible, this section A must also be divisible.
And since every magnitude is divisible into magnitudes (for we have shown that it is impossible for anything continuous to be composed of indivisible parts, and every magnitude is continuous), it necessarily follows that the quicker of two things traverses a greater magnitude in an equal time, an equal magnitude in less time, and a greater magnitude in less time, in conformity with the definition sometimes given of 'the quicker'. Suppose that A is quicker than B. Now since of two things that which changes sooner is quicker, in the time ZH, in which A has changed from G to D, B will not yet have arrived at D but will be short of it: so that in an equal time the quicker will pass over a greater magnitude. More than this, it will pass over a greater magnitude in less time: for in the time in which A has arrived at D, B being the slower has arrived, let us say, at E. Then since A has occupied the whole time ZH in arriving at D, it will have arrived at O in less time than this, say ZK. Now the magnitude GO that A has passed over is greater than the magnitude GE, and the time ZK is less than the whole time ZH: so that the quicker will pass over a greater magnitude in less time. And from this it is also clear that the quicker will pass over an equal magnitude in less time than the slower. For since it passes over the greater magnitude in less time than the slower, and (regarded by itself) passes over LM the greater in more time than LX the lesser, the time PRh in which it passes over LM will be more than the time PS in which it passes over LX: so that, the time PRh being less than the time PCh in which the slower passes over LX, the time PS will also be less than the time PCh: for it is less than the time PRh, and that which is less than something else that is less than a thing is also itself less than that thing. Hence it follows that the quicker will traverse an equal magnitude in less time than the slower.
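Aristotle's three claims about 'the quicker' can be checked with a small numeric sketch. The speeds and distances below are arbitrary illustrative choices, not values from the text:

```python
# With v_A > v_B, verify the three properties of "the quicker".
v_A, v_B = 3.0, 2.0              # A is the quicker, B the slower

t = 1.0                          # an equal time (ZH)
assert v_A * t > v_B * t         # 1. greater magnitude in an equal time

d = 6.0                          # an equal magnitude (GD)
assert d / v_A < d / v_B         # 2. equal magnitude in less time

# 3. greater magnitude in less time: A covers 4 units in 4/3 time units,
#    while B covers only 3 units and needs the longer time 3/2.
assert 4.0 > 3.0 and 4.0 / v_A < 3.0 / v_B
print("all three properties hold")
```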
Again, since the motion of anything must always occupy either an equal time or less or more time in comparison with that of another thing, and since, whereas a thing is slower if its motion occupies more time and of equal velocity if its motion occupies an equal time, the quicker is neither of equal velocity nor slower, it follows that the motion of the quicker can occupy neither an equal time nor more time. It can only be, then, that it occupies less time, and thus we get the necessary consequence that the quicker will pass over an equal magnitude (as well as a greater) in less time than the slower.
And since every motion is in time and a motion may occupy any time, and the motion of everything that is in motion may be either quicker or slower, both quicker motion and slower motion may occupy any time: and this being so, it necessarily follows that time also is continuous. By continuous I mean that which is divisible into divisibles that are infinitely divisible: and if we take this as the definition of continuous, it follows necessarily that time is continuous. For since it has been shown that the quicker will pass over an equal magnitude in less time than the slower, suppose that A is quicker and B slower, and that the slower has traversed the magnitude GD in the time ZH. Now it is clear that the quicker will traverse the same magnitude in less time than this: let us say in the time ZO. Again, since the quicker has passed over the whole GD in the time ZO, the slower will in the same time pass over GK, say, which is less than GD. And since B, the slower, has passed over GK in the time ZO, the quicker will pass over it in less time: so that the time ZO will again be divided. And if this is divided the magnitude GK will also be divided just as GD was: and again, if the magnitude is divided, the time will also be divided. And we can carry on this process for ever, taking the slower after the quicker and the quicker after the slower alternately, and using what has been demonstrated at each stage as a new point of departure: for the quicker will divide the time and the slower will divide the length. If, then, this alternation always holds good, and at every turn involves a division, it is evident that all time must be continuous. And at the same time it is clear that all magnitude is also continuous; for the divisions of which time and magnitude respectively are susceptible are the same and equal.
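The alternating construction in this passage can be simulated directly: the quicker divides the time, the slower divides the length, and both shrink without ever reaching an indivisible remainder. A sketch with an illustrative 2:1 speed ratio (my choice, not Aristotle's):

```python
# Quicker divides the time, slower divides the length, by turns.
v_slow, v_quick = 1.0, 2.0
magnitude = 1.0                     # GD
time = magnitude / v_slow           # ZH: the slower's time over GD

history = []
for _ in range(10):
    time = magnitude / v_quick      # quicker covers the same magnitude sooner (ZO)
    magnitude = v_slow * time       # slower covers less in that shorter time (GK)
    history.append((time, magnitude))

# Every turn yields a further division; no step bottoms out at zero.
assert all(t > 0 and m > 0 for t, m in history)
print(f"after 10 turns: time={time}, magnitude={magnitude}")
```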
Moreover, the current popular arguments make it plain that, if time is continuous, magnitude is continuous also, inasmuch as a thing passes over half a given magnitude in half the time taken to cover the whole: in fact without qualification it passes over a less magnitude in less time; for the divisions of time and of magnitude will be the same. And if either is infinite, so is the other, and the one is so in the same way as the other; i.e. if time is infinite in respect of its extremities, length is also infinite in respect of its extremities: if time is infinite in respect of divisibility, length is also infinite in respect of divisibility: and if time is infinite in both respects, magnitude is also infinite in both respects.
Hence Zeno's argument makes a false assumption in asserting that it is impossible for a thing to pass over or severally to come in contact with infinite things in a finite time. For there are two senses in which length and time and generally anything continuous are called 'infinite': they are called so either in respect of divisibility or in respect of their extremities. So while a thing in a finite time cannot come in contact with things quantitatively infinite, it can come in contact with things infinite in respect of divisibility: for in this sense the time itself is also infinite: and so we find that the time occupied by the passage over the infinite is not a finite but an infinite time, and the contact with the infinites is made by means of moments not finite but infinite in number.
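Aristotle's resolution of Zeno's dichotomy can be put numerically: the runner makes ever more 'contacts' with division points, but the corresponding sub-times shrink so fast that the elapsed time stays under a finite bound. A minimal sketch, assuming unit speed over a unit length (my normalisation):

```python
# Stage times for the dichotomy: 1/2, 1/4, 1/8, ... of the total.
total_time = 0.0
stage = 0.5
contacts = 0
for _ in range(50):          # 50 stages stand in for "infinitely many"
    total_time += stage
    stage /= 2
    contacts += 1

# Ever more contacts, yet the elapsed time never exceeds the finite bound 1.
assert total_time < 1.0
print(f"{contacts} contacts, total time {total_time:.15f}")
```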
The passage over the infinite, then, cannot occupy a finite time, and the passage over the finite cannot occupy an infinite time: if the time is infinite the magnitude must be infinite also, and if the magnitude is infinite, so also is the time. This may be shown as follows. Let AB be a finite magnitude, and let us suppose that it is traversed in infinite time G, and let a finite period GD of the time be taken. Now in this period the thing in motion will pass over a certain segment of the magnitude: let BE be the segment that it has thus passed over. (This will be either an exact measure of AB or less or greater than an exact measure: it makes no difference which it is.) Then, since a magnitude equal to BE will always be passed over in an equal time, and BE measures the whole magnitude, the whole time occupied in passing over AB will be finite: for it will be divisible into periods equal in number to the segments into which the magnitude is divisible. Moreover, if it is the case that infinite time is not occupied in passing over every magnitude, but it is possible to pass over some magnitude, say BE, in a finite time, and if this BE measures the whole of which it is a part, and if an equal magnitude is passed over in an equal time, then it follows that the time like the magnitude is finite. That infinite time will not be occupied in passing over BE is evident if the time be taken as limited in one direction: for as the part will be passed over in less time than the whole, the time occupied in traversing this part must be finite, the limit in one direction being given. The same reasoning will also show the falsity of the assumption that infinite length can be traversed in a finite time. It is evident, then, from what has been said that neither a line nor a surface nor in fact anything continuous can be indivisible.
This conclusion follows not only from the present argument but from the consideration that the opposite assumption implies the divisibility of the indivisible. For since the distinction of quicker and slower may apply to motions occupying any period of time and in an equal time the quicker passes over a greater length, it may happen that it will pass over a length twice, or one and a half times, as great as that passed over by the slower: for their respective velocities may stand to one another in this proportion. Suppose, then, that the quicker has in the same time been carried over a length one and a half times as great as that traversed by the slower, and that the respective magnitudes are divided, that of the quicker, the magnitude ABGD, into three indivisibles, and that of the slower into the two indivisibles EZ, ZH. Then the time may also be divided into three indivisibles, for an equal magnitude will be passed over in an equal time. Suppose then that it is thus divided into KL, LM, MN. Again, since in the same time the slower has been carried over EZ, ZH, the time may also be similarly divided into two. Thus the indivisible will be divisible, and that which has no parts will be passed over not in an indivisible but in a greater time. It is evident, therefore, that nothing continuous is without parts.
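The reductio in this paragraph is arithmetical at heart, and exact fractions make it explicit. The 3:2 speed ratio comes from the text ('one and a half times'); the unit total time is my normalisation:

```python
from fractions import Fraction

T = Fraction(1)               # total time (normalised)
quick_unit = T / 3            # KL = LM = MN: time per indivisible of ABGD
slow_unit = T / 2             # time per indivisible of EZ or ZH

# The slower's unit of time equals one and a half of the quicker's units:
# it ends midway through an allegedly indivisible stretch of time,
# so that stretch must be divisible after all.
ratio = slow_unit / quick_unit
assert ratio == Fraction(3, 2)
print(ratio)                  # 3/2
```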
The present also is necessarily indivisible-the present, that is, not in the sense in which the word is applied to one thing in virtue of another, but in its proper and primary sense; in which sense it is inherent in all time. For the present is something that is an extremity of the past (no part of the future being on this side of it) and also of the future (no part of the past being on the other side of it): it is, as we have said, a limit of both. And if it is once shown that it is essentially of this character and one and the same, it will at once be evident also that it is indivisible.
Now the present that is the extremity of both times must be one and the same: for if each extremity were different, the one could not be in succession to the other, because nothing continuous can be composed of things having no parts: and if the one is apart from the other, there will be time intermediate between them, because everything continuous is such that there is something intermediate between its limits and described by the same name as itself. But if the intermediate thing is time, it will be divisible: for all time has been shown to be divisible. Thus on this assumption the present is divisible. But if the present is divisible, there will be part of the past in the future and part of the future in the past: for past time will be marked off from future time at the actual point of division. Also the present will be a present not in the proper sense but in virtue of something else: for the division which yields it will not be a division proper. Furthermore, there will be a part of the present that is past and a part that is future, and it will not always be the same part that is past or future: in fact one and the same present will not be simultaneous: for the time may be divided at many points. If, therefore, the present cannot possibly have these characteristics, it follows that it must be the same present that belongs to each of the two times. But if this is so it is evident that the present is also indivisible: for if it is divisible it will be involved in the same implications as before. It is clear, then, from what has been said that time contains something indivisible, and this is what we call a present.
We will now show that nothing can be in motion in a present. For if this is possible, there can be both quicker and slower motion in the present. Suppose then that in the present N the quicker has traversed the distance AB. That being so, the slower will in the same present traverse a distance less than AB, say AG. But since the slower will have occupied the whole present in traversing AG, the quicker will occupy less than this in traversing it. Thus we shall have a division of the present, whereas we found it to be indivisible. It is impossible, therefore, for anything to be in motion in a present.
Nor can anything be at rest in a present: for, as we were saying, that only can be at rest which is naturally designed to be in motion but is not in motion when, where, or as it would naturally be so: since, therefore, nothing is naturally designed to be in motion in a present, it is clear that nothing can be at rest in a present either.
Moreover, inasmuch as it is the same present that belongs to both the times, and it is possible for a thing to be in motion throughout one time and to be at rest throughout the other, and that which is in motion or at rest for the whole of a time will be in motion or at rest as the case may be in any part of it in which it is naturally designed to be in motion or at rest: this being so, the assumption that there can be motion or rest in a present will carry with it the implication that the same thing can at the same time be at rest and in motion: for both the times have the same extremity, viz. the present.
Again, when we say that a thing is at rest, we imply that its condition in whole and in part is at the time of speaking uniform with what it was previously: but the present contains no 'previously': consequently, there can be no rest in it.
It follows then that the motion of that which is in motion and the rest of that which is at rest must occupy time.
Further, everything that changes must be divisible. For since every change is from something to something, and when a thing is at the goal of its change it is no longer changing, and when both it itself and all its parts are at the starting-point of its change it is not changing (for that which is in whole and in part in an unvarying condition is not in a state of change); it follows, therefore, that part of that which is changing must be at the starting-point and part at the goal: for as a whole it cannot be in both or in neither. (Here by 'goal of change' I mean that which comes first in the process of change: e.g. in a process of change from white the goal in question will be grey, not black: for it is not necessary that that which is changing should be at either of the extremes.) It is evident, therefore, that everything that changes must be divisible.
Now motion is divisible in two senses. In the first place it is divisible in virtue of the time that it occupies. In the second place it is divisible according to the motions of the several parts of that which is in motion: e.g. if the whole AG is in motion, there will be a motion of AB and a motion of BG. That being so, let DE be the motion of the part AB and EZ the motion of the part BG. Then the whole DZ must be the motion of AG: for DZ must constitute the motion of AG inasmuch as DE and EZ severally constitute the motions of each of its parts. But the motion of a thing can never be constituted by the motion of something else: consequently the whole motion is the motion of the whole magnitude.
Again, since every motion is a motion of something, and the whole motion DZ is not the motion of either of the parts (for each of the parts DE, EZ is the motion of one of the parts AB, BG) or of anything else (for, the whole motion being the motion of a whole, the parts of the motion are the motions of the parts of that whole: and the parts of DZ are the motions of AB, BG and of nothing else: for, as we saw, a motion that is one cannot be the motion of more things than one): since this is so, the whole motion will be the motion of the magnitude ABG.
Again, if there is a motion of the whole other than DZ, say OI, the motion of each of the parts may be subtracted from it: and these motions will be equal to DE, EZ respectively: for the motion of that which is one must be one. So if the whole motion OI may be divided into the motions of the parts, OI will be equal to DZ: if on the other hand there is any remainder, say KI, this will be a motion of nothing: for it can be the motion neither of the whole nor of the parts (as the motion of that which is one must be one) nor of anything else: for a motion that is continuous must be the motion of things that are continuous. And the same result follows if the division of OI reveals a surplus on the side of the motions of the parts. Consequently, if this is impossible, the whole motion must be the same as and equal to DZ.
This then is what is meant by the division of motion according to the motions of the parts: and it must be applicable to everything that is divisible into parts.
Motion is also susceptible of another kind of division, that according to time. For since all motion is in time and all time is divisible, and in less time the motion is less, it follows that every motion must be divisible according to time. And since everything that is in motion is in motion in a certain sphere and for a certain time and has a motion belonging to it, it follows that the time, the motion, the being-in-motion, the thing that is in motion, and the sphere of the motion must all be susceptible of the same divisions (though spheres of motion are not all divisible in a like manner: thus quantity is essentially, quality accidentally divisible). For suppose that A is the time occupied by the motion B. Then if all the time has been occupied by the whole motion, it will take less of the motion to occupy half the time, less again to occupy a further subdivision of the time, and so on to infinity. Again, the time will be divisible similarly to the motion: for if the whole motion occupies all the time half the motion will occupy half the time, and less of the motion again will occupy less of the time.
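The parallel division of motion and time may be written out as a proportion; the notation is interpolated:

```latex
% If the motion B occupies the time A, then half the motion occupies
% half the time, a quarter a quarter, and so on without limit:
\tfrac{1}{2}B \;\text{occupies}\; \tfrac{1}{2}A, \qquad
\tfrac{1}{2^k}B \;\text{occupies}\; \tfrac{1}{2^k}A \quad (k = 1, 2, 3, \dots).
% Conversely, halving the time halves the motion: the divisions of
% time and of motion thus correspond one to one, to infinity.
```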
In the same way the being-in-motion will also be divisible. For let G be the whole being-in-motion. Then the being-in-motion that corresponds to half the motion will be less than the whole being-in-motion, that which corresponds to a quarter of the motion will be less again, and so on to infinity. Moreover by setting out successively the being-in-motion corresponding to each of the two motions DG (say) and GE, we may argue that the whole being-in-motion will correspond to the whole motion (for if it were some other being-in-motion that corresponded to the whole motion, there would be more than one being-in-motion corresponding to the same motion), the argument being the same as that whereby we showed that the motion of a thing is divisible into the motions of the parts of the thing: for if we take separately the being-in-motion corresponding to each of the two motions, we shall see that the whole being-in-motion is continuous.
The same reasoning will show the divisibility of the length, and in fact of everything that forms a sphere of change (though some of these are only accidentally divisible because that which changes is so): for the division of one term will involve the division of all. So, too, in the matter of their being finite or infinite, they will all alike be either the one or the other. And we now see that in most cases the fact that all the terms are divisible or infinite is a direct consequence of the fact that the thing that changes is divisible or infinite: for the attributes 'divisible' and 'infinite' belong in the first instance to the thing that changes. That divisibility does so we have already shown: that infinity does so will be made clear in what follows.
Since everything that changes changes from something to something, that which has changed must at the moment when it has first changed be in that to which it has changed. For that which changes retires from or leaves that from which it changes: and leaving, if not identical with changing, is at any rate a consequence of it. And if leaving is a consequence of changing, having left is a consequence of having changed: for there is a like relation between the two in each case.
One kind of change, then, being change in a relation of contradiction, where a thing has changed from not-being to being it has left not-being. Therefore it will be in being: for everything must either be or not be. It is evident, then, that in contradictory change that which has changed must be in that to which it has changed. And if this is true in this kind of change, it will be true in all other kinds as well: for in this matter what holds good in the case of one will hold good likewise in the case of the rest.
Moreover, if we take each kind of change separately, the truth of our conclusion will be equally evident, on the ground that that which has changed must be somewhere or in something. For, since it has left that from which it has changed and must be somewhere, it must be either in that to which it has changed or in something else. If, then, that which has changed to B is in something other than B, say G, it must again be changing from G to B: for it cannot be assumed that there is no interval between G and B, since change is continuous. Thus we have the result that the thing that has changed, at the moment when it has changed, is changing to that to which it has changed, which is impossible: that which has changed, therefore, must be in that to which it has changed. So it is evident likewise that that which has come to be, at the moment when it has come to be, will be, and that which has ceased to be will not-be: for what we have said applies universally to every kind of change, and its truth is most obvious in the case of contradictory change. It is clear, then, that that which has changed, at the moment when it has first changed, is in that to which it has changed.
We will now show that the 'primary when' in which that which has changed effected the completion of its change must be indivisible, where by 'primary' I mean possessing the characteristics in question of itself and not in virtue of the possession of them by something else belonging to it. For let AG be divisible, and let it be divided at B. If then the completion of change has been effected in AB or again in BG, AG cannot be the primary thing in which the completion of change has been effected. If, on the other hand, it has been changing in both AB and BG (for it must either have changed or be changing in each of them), it must have been changing in the whole AG: but our assumption was that AG contains only the completion of the change. It is equally impossible to suppose that one part of AG contains the process and the other the completion of the change: for then we shall have something prior to what is primary. So that in which the completion of change has been effected must be indivisible. It is also evident, therefore, that that in which that which has ceased to be has ceased to be and that in which that which has come to be has come to be are indivisible.
But there are two senses of the expression 'the primary when in which something has changed'. On the one hand it may mean the primary when containing the completion of the process of change- the moment when it is correct to say 'it has changed': on the other hand it may mean the primary when containing the beginning of the process of change. Now the primary when that has reference to the end of the change is something really existent: for a change may really be completed, and there is such a thing as an end of change, which we have in fact shown to be indivisible because it is a limit. But that which has reference to the beginning is not existent at all: for there is no such thing as a beginning of a process of change, and the time occupied by the change does not contain any primary when in which the change began. For suppose that AD is such a primary when. Then it cannot be indivisible: for, if it were, the moment immediately preceding the change and the moment in which the change begins would be consecutive (and moments cannot be consecutive). Again, if the changing thing is at rest in the whole preceding time GA (for we may suppose that it is at rest), it is at rest in A also: so if AD is without parts, it will simultaneously be at rest and have changed: for it is at rest in A and has changed in D. Since then AD is not without parts, it must be divisible, and the changing thing must have changed in every part of it (for if it has changed in neither of the two parts into which AD is divided, it has not changed in the whole either: if, on the other hand, it is in process of change in both parts, it is likewise in process of change in the whole: and if, again, it has changed in one of the two parts, the whole is not the primary when in which it has changed: it must therefore have changed in every part). It is evident, then, that with reference to the beginning of change there is no primary when in which change has been effected: for the divisions are infinite.
So, too, of that which has changed there is no primary part that has changed. For suppose that of AE the primary part that has changed is AZ (everything that changes having been shown to be divisible): and let OI be the time in which AZ has changed. If, then, in the whole time AZ has changed, in half the time there will be a part that has changed, less than and therefore prior to AZ: and again there will be another part prior to this, and yet another, and so on to infinity. Thus of that which changes there cannot be any primary part that has changed. It is evident, then, from what has been said, that neither of that which changes nor of the time in which it changes is there any primary part.
With regard, however, to the actual subject of change-that is to say that in respect of which a thing changes-there is a difference to be observed. For in a process of change we may distinguish three terms-that which changes, that in which it changes, and the actual subject of change, e.g. the man, the time, and the fair complexion. Of these the man and the time are divisible: but with the fair complexion it is otherwise (though they are all divisible accidentally, for that in which the fair complexion or any other quality is an accident is divisible). For of actual subjects of change it will be seen that those which are classed as essentially, not accidentally, divisible have no primary part. Take the case of magnitudes: let AB be a magnitude, and suppose that it has moved from B to a primary 'where' G. Then if BG is taken to be indivisible, two things without parts will have to be contiguous (which is impossible): if on the other hand it is taken to be divisible, there will be something prior to G to which the magnitude has changed, and something else again prior to that, and so on to infinity, because the process of division may be continued without end. Thus there can be no primary 'where' to which a thing has changed. And if we take the case of quantitative change, we shall get a like result, for here too the change is in something continuous. It is evident, then, that only in qualitative motion can there be anything essentially indivisible.
Now everything that changes changes in time, and that in two senses: for the time in which a thing is said to change may be the primary time, or on the other hand it may have an extended reference, as e.g. when we say that a thing changes in a particular year because it changes in a particular day. That being so, that which changes must be changing in any part of the primary time in which it changes. This is clear from our definition of 'primary', in which the word is said to express just this: it may also, however, be made evident by the following argument. Let ChRh be the primary time in which that which is in motion is in motion: and (as all time is divisible) let it be divided at K. Now in the time ChK it either is in motion or is not in motion, and the same is likewise true of the time KRh. Then if it is in motion in neither of the two parts, it will be at rest in the whole: for it is impossible that it should be in motion in a time in no part of which it is in motion. If on the other hand it is in motion in only one of the two parts of the time, ChRh cannot be the primary time in which it is in motion: for its motion will have reference to a time other than ChRh. It must, then, have been in motion in any part of ChRh.
And now that this has been proved, it is evident that everything that is in motion must have been in motion before. For if that which is in motion has traversed the distance KL in the primary time ChRh, in half the time a thing that is in motion with equal velocity and began its motion at the same time will have traversed half the distance. But if this second thing whose velocity is equal has traversed a certain distance in a certain time, the original thing that is in motion must have traversed the same distance in the same time. Hence that which is in motion must have been in motion before.
Again, if by taking the extreme moment of the time-for it is the moment that defines the time, and time is that which is intermediate between moments-we are enabled to say that motion has taken place in the whole time ChRh or in fact in any period of it, motion may likewise be said to have taken place in every other such period. But half the time finds an extreme in the point of division. Therefore motion will have taken place in half the time and in fact in any part of it: for as soon as any division is made there is always a time defined by moments. If, then, all time is divisible, and that which is intermediate between moments is time, everything that is changing must have completed an infinite number of changes.
Again, since a thing that changes continuously and has not perished or ceased from its change must either be changing or have changed in any part of the time of its change, and since it cannot be changing in a moment, it follows that it must have changed at every moment in the time: consequently, since the moments are infinite in number, everything that is changing must have completed an infinite number of changes.
And not only must that which is changing have changed, but that which has changed must also previously have been changing, since everything that has changed from something to something has changed in a period of time. For suppose that a thing has changed from A to B in a moment. Now the moment in which it has changed cannot be the same as that in which it is at A (since in that case it would be in A and B at once): for we have shown above that that which has changed, when it has changed, is not in that from which it has changed. If, on the other hand, it is a different moment, there will be a period of time intermediate between the two: for, as we saw, moments are not consecutive. Since, then, it has changed in a period of time, and all time is divisible, in half the time it will have completed another change, in a quarter another, and so on to infinity: consequently when it has changed, it must have previously been changing.
Moreover, the truth of what has been said is more evident in the case of magnitude, because the magnitude over which what is changing changes is continuous. For suppose that a thing has changed from G to D. Then if GD is indivisible, two things without parts will be consecutive. But since this is impossible, that which is intermediate between them must be a magnitude and divisible into an infinite number of segments: consequently, before the change is completed, the thing changes to those segments. Everything that has changed, therefore, must previously have been changing: for the same proof also holds good of change with respect to what is not continuous, changes, that is to say, between contraries and between contradictories. In such cases we have only to take the time in which a thing has changed and again apply the same reasoning. So that which has changed must have been changing and that which is changing must have changed, and a process of change is preceded by a completion of change and a completion by a process: and we can never take any stage and say that it is absolutely the first. The reason of this is that no two things without parts can be contiguous, and therefore in change the process of division is infinite, just as lines may be infinitely divided so that one part is continually increasing and the other continually decreasing.
So it is evident also that that which has become must previously have been in process of becoming, and that which is in process of becoming must previously have become, everything (that is) that is divisible and continuous: though it is not always the actual thing that is in process of becoming of which this is true: sometimes it is something else, that is to say, some part of the thing in question, e.g. the foundation-stone of a house. So, too, in the case of that which is perishing and that which has perished: for that which becomes and that which perishes must contain an element of infiniteness as an immediate consequence of the fact that they are continuous things: and so a thing cannot be in process of becoming without having become or have become without having been in process of becoming. So, too, in the case of perishing and having perished: perishing must be preceded by having perished, and having perished must be preceded by perishing. It is evident, then, that that which has become must previously have been in process of becoming, and that which is in process of becoming must previously have become: for all magnitudes and all periods of time are infinitely divisible.
Consequently no absolutely first stage of change can be represented by any particular part of space or time which the changing thing may occupy.
Now since the motion of everything that is in motion occupies a period of time, and a greater magnitude is traversed in a longer time, it is impossible that a thing should undergo a finite motion in an infinite time, if this is understood to mean not that the same motion or a part of it is continually repeated, but that the whole infinite time is occupied by the whole finite motion. In all cases where a thing is in motion with uniform velocity it is clear that the finite magnitude is traversed in a finite time. For if we take a part of the motion which shall be a measure of the whole, the whole motion is completed in as many equal periods of the time as there are parts of the motion. Consequently, since these parts are finite, both in size individually and in number collectively, the whole time must also be finite: for it will be a multiple of the portion, equal to the time occupied in completing the aforesaid part multiplied by the number of the parts.
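For the uniform case, the reckoning may be set out thus; the symbols n, P, t are interpolated:

```latex
% Take a part P of the motion that measures the whole, so that
% the whole motion = n \cdot P, with n a finite whole number.
% If P occupies the finite time t, then, equal parts of the motion
% being completed in equal times,
\text{whole time} = n \cdot t,
% a finite multiple of a finite time, and therefore finite,
% not infinite as was supposed.
```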
But it makes no difference even if the velocity is not uniform. For let us suppose that the line AB represents a finite stretch over which a thing has been moved in the given time, and let GD be the infinite time. Now if one part of the stretch must have been traversed before another part (this is clear, that in the earlier and in the later part of the time a different part of the stretch has been traversed: for as the time lengthens a different part of the motion will always be completed in it, whether the thing in motion changes with uniform velocity or not: and whether the rate of motion increases or diminishes or remains stationary this is none the less so), let us then take AE a part of the whole stretch of motion AB which shall be a measure of AB. Now this part of the motion occupies a certain period of the infinite time: it cannot itself occupy an infinite time, for we are assuming that that is occupied by the whole AB. And if again I take another part equal to AE, that also must occupy a finite time in consequence of the same assumption. And if I go on taking parts in this way, on the one hand there is no part which will be a measure of the infinite time (for the infinite cannot be composed of finite parts whether equal or unequal, because there must be some unity which will be a measure of things finite in multitude or in magnitude, which, whether they are equal or unequal, are none the less limited in magnitude); while on the other hand the finite stretch of motion AB is a certain multiple of AE: consequently the motion AB must be accomplished in a finite time. Moreover it is the same with coming to rest as with motion. And so it is impossible for one and the same thing to be infinitely in process of becoming or of perishing. The same reasoning will prove that in a finite time there cannot be an infinite extent of motion or of coming to rest, whether the motion is regular or irregular.
For if we take a part which shall be a measure of the whole time, in this part a certain fraction, not the whole, of the magnitude will be traversed, because we assume that the traversing of the whole occupies all the time. Again, in another equal part of the time another part of the magnitude will be traversed: and similarly in each part of the time that we take, whether equal or unequal to the part originally taken. It makes no difference whether the parts are equal or not, if only each is finite: for it is clear that while the time is exhausted by the subtraction of its parts, the infinite magnitude will not be thus exhausted, since the process of subtraction is finite both in respect of the quantity subtracted and of the number of times a subtraction is made. Consequently the infinite magnitude will not be traversed in finite time: and it makes no difference whether the magnitude is infinite in only one direction or in both: for the same reasoning will hold good.
This having been proved, it is evident that neither can a finite magnitude traverse an infinite magnitude in a finite time, the reason being the same as that given above: in part of the time it will traverse a finite magnitude and in each several part likewise, so that in the whole time it will traverse a finite magnitude.
And since a finite magnitude will not traverse an infinite in a finite time, it is clear that neither will an infinite traverse a finite in a finite time. For if the infinite could traverse the finite, the finite could traverse the infinite; for it makes no difference which of the two is the thing in motion; either case involves the traversing of the infinite by the finite. For when the infinite magnitude A is in motion a part of it, say GD, will occupy the finite and then another, and then another, and so on to infinity. Thus the two results will coincide: the infinite will have completed a motion over the finite and the finite will have traversed the infinite: for it would seem to be impossible for the motion of the infinite over the finite to occur in any way other than by the finite traversing the infinite either by locomotion over it or by measuring it. Therefore, since this is impossible, the infinite cannot traverse the finite.
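The symmetry invoked here may be sketched as follows; the formulation is interpolated:

```latex
% The infinite A passing over the finite B is the same event as the
% finite B passing over the infinite A. Successive finite parts
% GD, DE, \dots of A come to occupy B in turn, without end, so that
B \;\text{traverses}\; GD + DE + \dots = A \quad (\text{infinite}),
% i.e. a finite magnitude would traverse an infinite one in a finite
% time, which the preceding argument has shown to be impossible.
```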
Nor again will the infinite traverse the infinite in a finite time. Otherwise it would also traverse the finite, for the infinite includes the finite. We can further prove this in the same way by taking the time as our starting-point.
Since, then, it is established that in a finite time neither will the finite traverse the infinite, nor the infinite the finite, nor the infinite the infinite, it is evident also that in a finite time there cannot be infinite motion: for what difference does it make whether we take the motion or the magnitude to be infinite? If either of the two is infinite, the other must be so likewise: for all locomotion is in space.
Since everything to which motion or rest is natural is in motion or at rest in the natural time, place, and manner, that which is coming to a stand, when it is coming to a stand, must be in motion: for if it is not in motion it must be at rest: but that which is at rest cannot be coming to rest. From this it evidently follows that coming to a stand must occupy a period of time: for the motion of that which is in motion occupies a period of time, and that which is coming to a stand has been shown to be in motion: consequently coming to a stand must occupy a period of time.
Again, since the terms 'quicker' and 'slower' are used only of that which occupies a period of time, and the process of coming to a stand may be quicker or slower, the same conclusion follows.
And that which is coming to a stand must be coming to a stand in any part of the primary time in which it is coming to a stand. For if it is coming to a stand in neither of two parts into which the time may be divided, it cannot be coming to a stand in the whole time, with the result that that which is coming to a stand will not be coming to a stand. If on the other hand it is coming to a stand in only one of the two parts of the time, the whole cannot be the primary time in which it is coming to a stand: for it is coming to a stand in the whole time not primarily but in virtue of something distinct from itself, the argument being the same as that which we used above about things in motion.
And just as there is no primary time in which that which is in motion is in motion, so too there is no primary time in which that which is coming to a stand is coming to a stand, there being no primary stage either of being in motion or of coming to a stand. For let AB be the primary time in which a thing is coming to a stand. Now AB cannot be without parts: for there cannot be motion in that which is without parts, because the moving thing would necessarily have been already moved for part of the time of its movement: and that which is coming to a stand has been shown to be in motion. But since AB is therefore divisible, the thing is coming to a stand in every one of the parts of AB: for we have shown above that it is coming to a stand in every one of the parts in which it is primarily coming to a stand. Since then, that in which primarily a thing is coming to a stand must be a period of time and not something indivisible, and since all time is infinitely divisible, there cannot be anything in which primarily it is coming to a stand.
Nor again can there be a primary time at which the being at rest of that which is at rest occurred: for it cannot have occurred in that which has no parts, because there cannot be motion in that which is indivisible, and that in which rest takes place is the same as that in which motion takes place: for we defined a state of rest to be the state of a thing to which motion is natural but which is not in motion when (that is to say in that in which) motion would be natural to it. Again, our use of the phrase 'being at rest' also implies that the previous state of a thing is still unaltered, not one point only but two at least being thus needed to determine its presence: consequently that in which a thing is at rest cannot be without parts. Since, then, it is divisible, it must be a period of time, and the thing must be at rest in every one of its parts, as may be shown by the same method as that used above in similar demonstrations.
So there can be no primary part of the time: and the reason is that rest and motion are always in a period of time, and a period of time has no primary part any more than a magnitude or in fact anything continuous: for everything continuous is divisible into an infinite number of parts.
And since everything that is in motion is in motion in a period of time and changes from something to something, when its motion is comprised within a particular period of time essentially-that is to say when it fills the whole and not merely a part of the time in question-it is impossible that in that time that which is in motion should be over against some particular thing primarily. For if a thing-itself and each of its parts-occupies the same space for a definite period of time, it is at rest: for it is in just these circumstances that we use the term 'being at rest'-when at one moment after another it can be said with truth that a thing, itself and its parts, occupies the same space. So if this is being at rest it is impossible for that which is changing to be as a whole, at the time when it is primarily changing, over against any particular thing (for the whole period of time is divisible), so that in one part of it after another it will be true to say that the thing, itself and its parts, occupies the same space. If this is not so and the aforesaid proposition is true only at a single moment, then the thing will be over against a particular thing not for any period of time but only at a moment that limits the time. It is true that at any moment it is always over against something stationary: but it is not at rest: for at a moment it is not possible for anything to be either in motion or at rest. So while it is true to say that that which is in motion is at a moment not in motion and is opposite some particular thing, it cannot in a period of time be over against that which is at rest: for that would involve the conclusion that that which is in locomotion is at rest.
Zeno's reasoning, however, is fallacious, when he says that if everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless. This is false, for time is not composed of indivisible moments any more than any other magnitude is composed of indivisibles.
Zeno's arguments about motion, which cause so much disquietude to those who try to solve the problems that they present, are four in number. The first asserts the non-existence of motion on the ground that that which is in locomotion must arrive at the half-way stage before it arrives at the goal. This we have discussed above.
The second is the so-called 'Achilles', and it amounts to this, that in a race the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead. This argument is the same in principle as that which depends on bisection, though it differs from it in that the spaces with which we successively have to deal are not divided into halves. The result of the argument is that the slower is not overtaken: but it proceeds along the same lines as the bisection-argument (for in both a division of the space in a certain way leads to the result that the goal is not reached, though the 'Achilles' goes further in that it affirms that even the quickest runner in legendary tradition must fail in his pursuit of the slowest), so that the solution must be the same. And the axiom that that which holds a lead is never overtaken is false: it is not overtaken, it is true, while it holds a lead: but it is overtaken nevertheless if it is granted that it traverses the finite distance prescribed. These then are two of his arguments.
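The modern resolution of the 'Achilles' is arithmetical: the infinitely many pursuit stages occupy a convergent geometric series of times, so the prescribed finite distance is traversed in finite time. A minimal sketch (the speeds and head start are illustrative assumptions, not figures from the text):

```python
# Illustrative numbers (assumptions): Achilles runs 10 m/s; the tortoise
# runs 1 m/s with a 100 m head start.
achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 100.0

# Each stage: Achilles covers the current gap while the tortoise opens a
# new gap one-tenth as large. The stage durations form a geometric series
# with ratio 1/10, which converges.
gap = head_start
total_time = 0.0
for _ in range(60):  # 60 stages are far more than needed for convergence
    total_time += gap / achilles_speed
    gap *= tortoise_speed / achilles_speed

# Closed form: catch-up time = head_start / (v_Achilles - v_tortoise)
exact = head_start / (achilles_speed - tortoise_speed)
print(total_time, exact)  # both are approximately 11.11 seconds
```

The summed stage times and the closed-form catch-up time agree, which is exactly why granting "a finite distance prescribed" dissolves the paradox.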
The third is that already given above, to the effect that the flying arrow is at rest, which result follows from the assumption that time is composed of moments: if this assumption is not granted, the conclusion will not follow.
The fourth argument is that concerning the two rows of bodies, each row being composed of an equal number of bodies of equal size, passing each other on a race-course as they proceed with equal velocity in opposite directions, the one row originally occupying the space between the goal and the middle point of the course and the other that between the middle point and the starting-post. This, he thinks, involves the conclusion that half a given time is equal to double that time. The fallacy of the reasoning lies in the assumption that a body occupies an equal time in passing with equal velocity a body that is in motion and a body of equal size that is at rest; which is false. For instance (so runs the argument), let A, A...be the stationary bodies of equal size, B, B...the bodies, equal in number and in size to A, A...,originally occupying the half of the course from the starting-post to the middle of the A's, and G, G...those originally occupying the other half from the goal to the middle of the A's, equal in number, size, and velocity to B, B....Then three consequences follow:
First, as the B's and the G's pass one another, the first B reaches the last G at the same moment as the first G reaches the last B. Secondly at this moment the first G has passed all the A's, whereas the first B has passed only half the A's, and has consequently occupied only half the time occupied by the first G, since each of the two occupies an equal time in passing each A. Thirdly, at the same moment all the B's have passed all the G's: for the first G and the first B will simultaneously reach the opposite ends of the course, since (so says Zeno) the time occupied by the first G in passing each of the B's is equal to that occupied by it in passing each of the A's, because an equal time is occupied by both the first B and the first G in passing all the A's. This is the argument, but it presupposed the aforesaid fallacious assumption.
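The fallacious assumption can be exposed with simple relative-speed arithmetic. In this sketch (the speed and time interval are illustrative assumptions), a moving B passes the oncoming G's at twice the rate at which it passes the stationary A's, so equal counts of bodies passed do not imply equal times:

```python
# Illustrative setup (assumptions): each body is one unit long; the B's
# and G's each move at speed v while the A's stand still.
v = 1.0
relative_to_A = v        # a B passes the stationary A's at speed v
relative_to_G = 2 * v    # a B meets the oncoming G's at closing speed 2v

t = 2.0                  # any fixed interval of time
a_bodies_passed = relative_to_A * t
g_bodies_passed = relative_to_G * t
print(a_bodies_passed, g_bodies_passed)  # twice as many G's as A's
```

In the same interval the first B passes twice as many G's as A's, which is Aristotle's point: a body does not occupy equal times in passing a moving body and a resting body of equal size.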
Nor in reference to contradictory change shall we find anything unanswerable in the argument that if a thing is changing from not-white, say, to white, and is in neither condition, then it will be neither white nor not-white: for the fact that it is not wholly in either condition will not preclude us from calling it white or not-white. We call a thing white or not-white not necessarily because it is wholly one or the other, but because most of its parts or the most essential parts of it are so: not being in a certain condition is different from not being wholly in that condition. So, too, in the case of being and not-being and all other conditions which stand in a contradictory relation: while the changing thing must of necessity be in one of the two opposites, it is never wholly in either.
Again, in the case of circles and spheres and everything whose motion is confined within the space that it occupies, it is not true to say the motion can be nothing but rest, on the ground that such things in motion, themselves and their parts, will occupy the same position for a period of time, and that therefore they will be at once at rest and in motion. For in the first place the parts do not occupy the same position for any period of time: and in the second place the whole also is always changing to a different position: for if we take the orbit as described from a point A on a circumference, it will not be the same as the orbit as described from B or G or any other point on the same circumference except in an accidental sense, the sense that is to say in which a musical man is the same as a man. Thus one orbit is always changing into another, and the thing will never be at rest. And it is the same with the sphere and everything else whose motion is confined within the space that it occupies.
Our next point is that that which is without parts cannot be in motion except accidentally: i.e. it can be in motion only in so far as the body or the magnitude is in motion and the partless is in motion by inclusion therein, just as that which is in a boat may be in motion in consequence of the locomotion of the boat, or a part may be in motion in virtue of the motion of the whole. (It must be remembered, however, that by 'that which is without parts' I mean that which is quantitatively indivisible (and that the case of the motion of a part is not exactly parallel): for parts have motions belonging essentially and severally to themselves distinct from the motion of the whole. The distinction may be seen most clearly in the case of a revolving sphere, in which the velocities of the parts near the centre and of those on the surface are different from one another and from that of the whole; this implies that there is not one motion but many). As we have said, then, that which is without parts can be in motion in the sense in which a man sitting in a boat is in motion when the boat is travelling, but it cannot be in motion of itself. For suppose that it is changing from AB to BG-either from one magnitude to another, or from one form to another, or from some state to its contradictory-and let D be the primary time in which it undergoes the change. Then in the time in which it is changing it must be either in AB or in BG or partly in one and partly in the other: for this, as we saw, is true of everything that is changing. Now it cannot be partly in each of the two: for then it would be divisible into parts. Nor again can it be in BG: for then it will have completed the change, whereas the assumption is that the change is in process. It remains, then, that in the time in which it is changing, it is in AB. That being so, it will be at rest: for, as we saw, to be in the same condition for a period of time is to be at rest.
So it is not possible for that which has no parts to be in motion or to change in any way: for only one condition could have made it possible for it to have motion, viz. that time should be composed of moments, in which case at any moment it would have completed a motion or a change, so that it would never be in motion, but would always have been in motion. But this we have already shown above to be impossible: time is not composed of moments, just as a line is not composed of points, and motion is not composed of starts: for this theory simply makes motion consist of indivisibles in exactly the same way as time is made to consist of moments or a length of points.
Again, it may be shown in the following way that there can be no motion of a point or of any other indivisible. That which is in motion can never traverse a space greater than itself without first traversing a space equal to or less than itself. That being so, it is evident that the point also must first traverse a space equal to or less than itself. But since it is indivisible, there can be no space less than itself for it to traverse first: so it will have to traverse a distance equal to itself. Thus the line will be composed of points, for the point, as it continually traverses a distance equal to itself, will be a measure of the whole line. But since this is impossible, it is likewise impossible for the indivisible to be in motion.
Again, since motion is always in a period of time and never in a moment, and all time is divisible, for everything that is in motion there must be a time less than that in which it traverses a distance as great as itself. For that in which it is in motion will be a time, because all motion is in a period of time; and all time has been shown above to be divisible. Therefore, if a point is in motion, there must be a time less than that in which it has itself traversed any distance. But this is impossible, for in less time it must traverse less distance, and thus the indivisible will be divisible into something less than itself, just as the time is so divisible: the fact being that the only condition under which that which is without parts and indivisible could be in motion would have been the possibility of the infinitely small being in motion in a moment: for in the two questions-that of motion in a moment and that of motion of something indivisible-the same principle is involved.
Our next point is that no process of change is infinite: for every change, whether between contradictories or between contraries, is a change from something to something. Thus in contradictory changes the positive or the negative, as the case may be, is the limit, e.g. being is the limit of coming to be and not-being is the limit of ceasing to be: and in contrary changes the particular contraries are the limits, since these are the extreme points of any such process of change, and consequently of every process of alteration: for alteration is always dependent upon some contraries. Similarly contraries are the extreme points of processes of increase and decrease: the limit of increase is to be found in the complete magnitude proper to the peculiar nature of the thing that is increasing, while the limit of decrease is the complete loss of such magnitude. Locomotion, it is true, we cannot show to be finite in this way, since it is not always between contraries. But since that which cannot be cut (in the sense that it is inconceivable that it should be cut, the term 'cannot' being used in several senses)-since it is inconceivable that that which in this sense cannot be cut should be in process of being cut, and generally that that which cannot come to be should be in process of coming to be, it follows that it is inconceivable that that which cannot complete a change should be in process of changing to that to which it cannot complete a change. If, then, it is to be assumed that that which is in locomotion is in process of changing, it must be capable of completing the change. Consequently its motion is not infinite, and it will not be in locomotion over an infinite distance, for it cannot traverse such a distance.
It is evident, then, that a process of change cannot be infinite in the sense that it is not defined by limits. But it remains to be considered whether it is possible in the sense that one and the same process of change may be infinite in respect of the time which it occupies. If it is not one process, it would seem that there is nothing to prevent its being infinite in this sense; e.g. if a process of locomotion be succeeded by a process of alteration and that by a process of increase and that again by a process of coming to be: in this way there may be motion for ever so far as the time is concerned, but it will not be one motion, because all these motions do not compose one. If it is to be one process, no motion can be infinite in respect of the time that it occupies, with the single exception of rotatory locomotion.
Japan hit by 6.6 magnitude earthquake
Monday 11 April 2011 22.14
A 6.6 magnitude earthquake has struck off the eastern coast of Japan.
The US Geological Survey said the very shallow quake was centred 22km southwest of Iwaki, south of the stricken Fukushima nuclear plant.
The Pacific Tsunami Centre said the earthquake had not triggered a widespread tsunami.
Earlier, Japan fell silent as people across the country remembered the thousands killed by a 9.0 magnitude earthquake on 11 March and the monster tsunami it created.
It was also revealed today that the country will expand the evacuation zone around its Fukushima nuclear plant to areas beyond a 20km radius to include villages and towns that have seen increased accumulated radiation.
‘These regions could accumulate 20 millisieverts or more radiation over a period of a year,’ Japan's chief cabinet secretary Yukio Edano told a news conference, naming Iitate village, 40km from the plant, part of the city of Kawamata and other areas.
‘There is no need to evacuate immediately,’ he added, but said it would be desirable to proceed with the new evacuation over a one-month period.
Engineers working to prevent radiation leaking from the plant have said they are no closer to restoring the cooling system at the six reactors.
Almost 30,000 people died in the earthquake and tsunami that struck eastern Japan four weeks ago.
Japan urged to extend zone further
Japanese Prime Minister Naoto Kan told parliament last month that widening the area would force 130,000 people to move in addition to 70,000 already displaced.
The International Atomic Energy Agency had urged Japan to extend the zone and countries like the United States and Australia have advised citizens to stay 80km away from the plant.
The Japan Times said authorities would soon forcibly close the 20km zone, stopping people returning to their shattered homes to pick through the rubble for belongings.
The president of Tokyo Electric Power Co (TEPCO), which operates the Fukushima plant, planned to visit the area today, the first by Masatake Shimizu since the 11 March disaster.
Mr Shimizu has all but disappeared from public view apart from a brief apology shortly after the crisis began and has spent some of the time since in hospital.
Fukushima Governor Yuhei Sato was quoted by media as saying he would refuse to meet Mr Shimizu during his visit.
Mr Sato has criticised the evacuation policy, saying residents in a 20-30km radius were initially told to stay indoors and then advised to evacuate voluntarily.
‘Residents in the 20-30km radius were really confused about what to do,’ Mr Sato told NHK television on Sunday.
Engineers at the damaged Daiichi plant north of Tokyo said they were no closer to restoring the plant's cooling system, which is critical if overheated fuel rods are to be cooled and the six reactors brought under control.
In a desperate move to cool highly radioactive fuel rods, operator TEPCO has pumped water onto reactors, some of which have experienced partial meltdown.
But the strategy has hindered moves to restore the plant's internal cooling system, critical to end the crisis, as engineers have had to focus on how to store 60,000 tonnes of contaminated water.
Engineers have been forced to pump low-level radioactive water, left by the tsunami, back into the sea in order to free up storage space for more highly contaminated water.
The Rocks Don't Lie: A Geologist Investigates Noah's Flood
David R. Montgomery
W. W. Norton & Company, 2012
320 pp., $26.95
Science in Focus: Marjorie A. Chan
My father-in-law, a lifelong evangelical and a college anthropology teacher, read my copy of The Rocks Don't Lie before I got to it. He pronounced it "a good read!" As a geologist, I agree.
David Montgomery, a professor of geomorphology at the University of Washington and a MacArthur Fellow (Class of 2008), gives readers the historical threads to sew together the tale of theological, geological, and scientific thinking over the last several hundred years. His strong yet easygoing storytelling skills make it easy to follow the history of how people who studied the natural world struggled to reconcile their field observations with the prevailing interpretations of the Genesis accounts of creation and Noah's flood.
Many early philosophers and theologians came up with reasons why Noah's flood would not have been a global catastrophic deluge to align their religious views with the scientific evidence available in their day. These included St. Augustine, St. Thomas Aquinas, Leonardo Da Vinci, and Copernicus. Each was faced with constraints when questioning certain received interpretations of the biblical texts, although most felt their studies of the landscape enabled them to better understand God's handiwork and, thus, the Creator.
In the 1800s, geologists who were also steeped in Scripture read the first chapters of Genesis as the story of creation without forcing this account to stand as a scientific treatise as well. Montgomery presents the opposing Young Earth creationist view as a more recent development. Its proponents, he charges, willingly embrace contradictory and outmoded theories and ignore many facts entirely, seeking to make the scientific evidence fit what they claim to be a literal narrative of the creation story in Genesis.
As Montgomery makes abundantly clear, geology is an exhilarating discipline. The forces of nature acting over vast stretches of time are awe-inspiring. And the fossil record—contrary to the familiar refrain from Young Earth creationists—is robust. The author's account of a hike out of the Grand Canyon takes the reader through the epochs of earth history, emphasizing the evidence of time recorded in the rock layers. Geologic time is long and deep, but since almost the beginning of time, life has left traces of its passing.
This book raises interesting questions about how societies accept change, and how both secular and religious authority functions. Scientific thinking often challenges traditions and authorities and seeks new boundaries. Yet as Montgomery points out, science can also refuse to consider new evidence that challenges its own traditions. Not surprisingly, Christianity and science can be compatible and in fact supportive of each other—but that requires openness to evidence in the natural world.
If the rocks don't lie, then they tell the truth. Rocks are everywhere, and it is satisfying to know the truth is all around us.
Marjorie A. Chan is professor of geology at the University of Utah.
Copyright © 2013 Books & Culture.
Counting Hours: What Time Is It?
Students will be able to tell time on the hour using an analog clock.
- Call the students together as a group.
- Ask the students what item is used to tell time.
- Take responses by raised hands.
- Inform the students that clocks come in all shapes and sizes.
- Inform the students that today they will be using an analog clock, or a clock that has rotating hands.
Explicit Instruction/Teacher modeling (10 minutes)
- Show the students a large battery-operated analog clock.
- Ask the students to count from 1-12 as you point out the numbers on the clock.
- Draw an analog clock on the whiteboard.
- Draw an hour hand on the clock.
- Inform the students that the hour hand moves slowly around the clock, and that it takes 12 hours to make a full rotation.
- Inform the students that an hour is 60 minutes long.
- Draw a minute hand on the clock.
- Tell the students that the minute hand moves more quickly around the clock, and that it takes one hour to make a full rotation.
- Inform the students that 60 seconds is equal to one minute.
- Show 12 o'clock on the analog clock.
- Ask the students to repeat after you as you say "12 o'clock."
- Show 1 o'clock and have the students repeat after you as you say "1 o'clock."
- Continue this process until you get back to 12 o'clock.
- Emphasize that the minute and hour hands move to the right, or clockwise, at all times.
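For teachers who want to double-check answers, the rotation rates described above translate directly into hand angles. A small sketch (the helper name is my own, not part of the lesson) computes where each hand points:

```python
def hand_angles(hour, minute=0):
    """Return (hour_hand, minute_hand) angles in degrees, clockwise from 12.

    The minute hand turns 360 degrees in 60 minutes (6 per minute); the
    hour hand turns 360 degrees in 12 hours (30 per hour) and also creeps
    half a degree for each passing minute.
    """
    minute_angle = 6.0 * minute
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute
    return hour_angle, minute_angle

print(hand_angles(12))     # (0.0, 0.0)   - both hands straight up
print(hand_angles(3))      # (90.0, 0.0)  - 3 o'clock
print(hand_angles(6, 30))  # (195.0, 180.0)
```

For times on the hour, as in this lesson, the minute angle is always 0 and the hour hand sits exactly on its numeral.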
Guided practice (10 minutes)
- Place the students at their desk to work.
- Each student should have a paper plate with an analog clock face already drawn on the plate.
- Each student should have a precut hour hand, a precut minute hand, and a brass fastener.
- Tell the students to place the hour and minute hands on top of each other so that the punched holes align.
- Have the students put the brass fastener through the punched holes on the minute and hour hand.
- Then have the students put the brass fastener through the paper plate clock face.
- In the event you wish to save time, each clock may be pre-assembled.
- Ask the class to set the clocks at 12 o'clock, as previously modeled.
- Ask the students to demonstrate various times on the hour.
- Walk around the room to check in with students as they use this manipulative.
Independent working time (15 minutes)
- Ask the students to have a seat at their desk.
- Provide each student with a pencil and a Be on Time worksheet.
- Read the instructions to the students.
- Have the students complete the worksheet.
- Collect the worksheet for grading.
- Have the students colour the face of their analog clock with crayons.
- Provide students with the Time Review: Tell Time with Carlos Cat worksheet. It has various analog clocks displaying different times on the hour. Have the students write the time under each clock. Do not provide a word box or any scaffolding.
- Provide students with the Be On Time worksheet. It has the hour hand and minute hand in red and black. This should make telling time a bit easier for students who need extra support.
- Assess the independent work of the students.
- Mini-conference with students who scored low.
- Provide reinforcement by assigning homework or another opportunity to master the concept the next lesson.
Review and closing (10 minutes)
- Give each student an index card with a time on the hour from 1 o'clock to 12 o'clock.
- Do not repeat any of the times.
- Have the students arrange themselves in the correct order to make a human clock.
- If you have more than 12 students, create two groups of 12.
April 1 or April Fools’ Day is when people around the world play practical jokes on each other. This day has no definite origin but is commonly associated with France changing from the Julian calendar to the Gregorian in 1582. News of the switch traveled slowly so some people continued to celebrate the first of the year based on the Julian calendar, which placed it the last week in March through April 1 instead of the first day of January. April Fools’ history is tied to the Gregorian calendar because those who were not aware of the new date for the New Year ended up being the butt of jokes.
The Julian calendar was put in place in 46 B. C. by Julius Caesar. It was off by 11 minutes so, over time, the miscalculation increased. Easter, which had been observed on March 21, was getting farther away each year from the spring equinox. The idea of a reformed calendar had been debated by popes and other hierarchy of the Catholic Church for hundreds of years. Leading astronomers were consulted and ideas ranged from omitting one leap day every 134 years in order to correct the solar cycle, omitting one day every 304 years to correct the lunar cycle, omitting seven days in one year for the solar cycle and three days in one year to correct the lunar cycle.
The papal commission asked the astronomer Copernicus for his opinion in 1514. He felt there was not enough known about the motions of the sun and moon to reform the calendar. He continued with his observations for 10 more years and later wrote his De Revolutionibus Orbium Coelestium (1543). This work was used to compute tables that would be the basis for the proposed reform.
The Italian scientist Aloisius Lilius developed the new system. He realized that under the Julian calendar, an extra day was added every four years in February. This eventually made the calendar too long, so he came up with a different plan. He added leap days only in years divisible by four. If that same year was also divisible by 100, no extra day was added. However, if the year was divisible by 400, that year would have a leap day.
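Lilius's rule lends itself to a short sketch; the drift figure at the end ties back to the roughly 11-minute annual error mentioned earlier (the function name is my own):

```python
def is_gregorian_leap_year(year):
    """Lilius's rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1600 and 2000 keep their leap day; 1700, 1800 and 1900 lose theirs.
print([y for y in (1600, 1700, 1800, 1900, 2000) if is_gregorian_leap_year(y)])

# Average Gregorian year: 365 + 97/400 = 365.2425 days, against the
# Julian 365.25 - removing roughly the 11 minutes per year of error.
drift_minutes_per_year = (365.25 - 365.2425) * 24 * 60
print(drift_minutes_per_year)  # approximately 10.8
```

Dropping three leap days every 400 years is what keeps the calendar aligned with the equinoxes.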
The Council of Trent had called for reform in 1563. Pope Gregory unveiled the new calendar in 1582. Originally devised just for the Catholic Church, the system was soon used by Spain, Portugal and Italy for civil matters as well as religious ones. Protestants rejected it because it came from the Catholic Church. In 1700, Protestant Germany switched to using the Gregorian calendar, and England adopted it in 1752. When England made the switch, Parliament deleted 11 days, so when citizens went to bed on Sept. 2, the next day was Sept. 14, 1752. This prompted Benjamin Franklin to marvel at how pleasant it was for an old man to go to bed on Sept. 2 but not have to get up until the 14th. The artist William Hogarth portrayed the confusion in his painting Humours of an Election (c. 1755).
One popular practical joke during the Middle Ages was putting a paper fish on the backs of others and referring to them as “poisson d’avril” or April fish. This came to symbolize an easily caught, young fish as well as a gullible person.
The American painter Norman Rockwell painted three April Fools' covers for the Saturday Evening Post in 1943, 1945 and 1948. Readers were encouraged to find the errors, such as skis worn by a man on a fishing trip. Whatever the pranks and practical jokes are for the day, April Fools' history is often tied to the confusion surrounding the creation and implementation of the Gregorian calendar.
By Cynthia Collins
Top image: Humours of an Election by William Hogarth (c. 1755)
|
<urn:uuid:8b91e291-dca2-4adc-8ec0-1cbd11c08999>
|
CC-MAIN-2017-13
|
http://guardianlv.com/2014/03/april-fools-history-tied-to-gregorian-calendar/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187717.19/warc/CC-MAIN-20170322212947-00394-ip-10-233-31-227.ec2.internal.warc.gz
|
en
| 0.977841 | 790 | 3.34375 | 3 | 2.753911 | 3 |
Strong reasoning
|
History
|
Welcome to the January edition of Ask Dr. Emily!
We often receive questions that we want to share with all our readers. To help with this, Dr. Emily Rastall, a clinical psychologist at Seattle Children’s Autism Center, will share insights in a question and answer format. We welcome you to send us your questions and Dr. Rastall will do her best to answer them each month.
Send your questions to [email protected].
Q: My 3-year-old son just got a provisional autism diagnosis. I think he is able to “lie” to me. For example, after I put him in his crib he told me he needed to use the bathroom. So I brought him to the bathroom, but he refused to go. All the while, he was smiling and singing and excited to be out of the crib. Another time, he pretended to cough (while smiling) so that I would give him cough syrup (he loves the taste). Is this “lying?” Is it premeditated? What is going on here?
A: This sounds like pretty typical “kid” behavior. Most kids will try about anything to get what they want and/or like. Sometimes that means saying things that aren’t true to get their needs met. They are not meaning to deceive, but rather, they have learned that a certain behavior offers a certain result. Thus, they try the behavior again to see if it will pay off. Let’s say a child in the crib really does need to use the bathroom one night, and while doing so realizes, “Hey, I’m out of my crib!” They are more likely to ask to use the bathroom the next night as a way to get out of the crib.
Here’s another example: A child gets a fever and receives medicine and extra attention, gets to stay home from school, and gets to watch cartoons all day. They may try to convince you later that they are sick in hopes that they might get the same attention and privileges they received before. Who can blame them? I think we can all agree, this is less premeditative than simply reinforced, or learned behavior.
The best thing you can do in these situations is to give as little attention to the behavior as possible. You’ll want to do your own fact-checking and then respond as needed. Redirect and distract to move on to the next thing as soon as possible. Good luck, detective!
Only hours ago, the Food and Agriculture Organization warned of a fresh attack by a swarm of locusts arriving from Africa along the Indo-Pakistan border. This continuing surge of plant epidemics & pest outbreaks is part of the worst biotic stress the sub-continent has experienced in the last 27 years. With 75% of the human diet dependent on plants, widespread plant epidemics are just a precursor to the loss of millions of human lives to hunger. If the current pandemic affecting humans can spur state machineries into immediate action, why haven't plant epidemics garnered the attention they deserve? Isn't it high time to invest in detecting them early, before another catastrophe hits us?
Threat of Crop Epidemics & Pest Attacks
The global outbreak of locust attacks this year has re-ignited conversations on the rising threat of biotic stresses on food security. Annually, an estimated 40% of the crop production is lost to plant epidemics (such as wheat rust, banana & cassava diseases) & pest attacks (such as locusts & fall armyworm). As per a recently released paper by FAO, climate change, globalization, trade & intensive agriculture have substantially increased the vulnerability of food systems to such biotic stresses.
The repercussions of such a situation are manifold. Recently the United Nations announced that 132 million people around the globe are expected to go hungry this year on account of the economic recession triggered by the Coronavirus pandemic. In this scenario, the threat of biotic stresses to food production deserves attention. Not to forget that in emerging economies such as India & parts of Africa, any crop damage primarily pushes the majority of small & marginal farmers into the clutches of poverty & hunger.
So is there a way to anticipate, plan & protect such plant systems from biotic stresses?
GovTech may have an answer.
Monitoring & Early Warning Systems
GovTech can prove to be a great enabler for early detection of diseases & pests and timely actionable advice critical for safeguarding plants. One such story that we have witnessed closely is the design & implementation of the Wheat Rust Monitoring & Early Warning System in Ethiopia. The use of intuitive, simple to use mobile applications by farmers, agriculture extension workers, NGOs and researchers have enabled:
1. Digitization of important farm parameters such as geo-location co-ordinates, cropping pattern & type, time of plantation, and application of inputs such as fertilizers & irrigation.
2. Capture photographs depicting the type & severity of fungal disease that has affected the plant or the stage of pest larvae on breeding grounds. Currently, the analysis is mostly done manually by agricultural scientists. Leveraging machine learning algorithms on a broad set of photographs, this analysis can also be system-generated.
3. Codification of samples collected from farms & sent to labs. This helps to trace the origin of any new strains of diseases found in samples to the farms they came from.
This digitized survey data also forms the backbone of global forecasting models for wheat rust managed by the International Maize and Wheat Improvement Centre (CIMMYT). This international organization combines the survey input with meteorological models (on wind speed, temperature & humidity) developed by the UK Met Office & the University of Cambridge to analyse & predict the timing & areas of the next outbreak. The Ministry of Agriculture in Ethiopia issues official advisories based on the insights provided by the model. Via the communication platform in the Early Warning System, Ethiopia has also been swift to reach out to local administrators & farmers and advise them on pesticide application in the very early stages. Opportunities to integrate these advisories with drones for efficient pesticide application are also being explored.
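The survey-plus-forecast loop described above can be sketched in miniature. The code below is a purely hypothetical, rule-based stand-in for the far richer CIMMYT forecasting models — every name and threshold here is invented for illustration, capturing only the idea that field observations and weather inputs combine into an advisory decision:

```python
from dataclasses import dataclass

@dataclass
class FieldSurvey:
    # Digitized farm parameters captured via the mobile survey app
    latitude: float
    longitude: float
    crop: str
    rust_severity: int      # 0 (none) to 3 (severe), as graded by a surveyor

@dataclass
class WeatherForecast:
    # Simplified meteorological inputs to a spore-dispersal model
    wind_speed_kmh: float
    humidity_pct: float
    temperature_c: float

def outbreak_risk(survey: FieldSurvey, weather: WeatherForecast) -> str:
    """Combine field observations with weather to flag outbreak risk.

    Rust spores spread fastest when infection is already present
    nearby and conditions are warm, humid, and windy.
    """
    score = survey.rust_severity
    if weather.humidity_pct > 70:
        score += 1
    if weather.wind_speed_kmh > 20:
        score += 1
    if 15 <= weather.temperature_c <= 25:
        score += 1
    if score >= 4:
        return "issue-advisory"   # e.g. trigger SMS/radio advisory to farmers
    if score >= 2:
        return "monitor"
    return "low"
```

In a real deployment the scoring rule would be replaced by the survey-calibrated dispersal and weather models the article describes; the surrounding plumbing (structured survey records in, advisory decision out) stays the same.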
Need for Replicating Successful Models
Quite similar to epidemics affecting humans (such as the novel Coronavirus), infections & pests affecting crops also spread like wildfire. Often termed the "polio of agriculture", the airborne wheat rust has aggressively spread from Africa to the Middle East and is now about to severely affect Europe & Asia. Many mutations of the infection have also sprung up in different regions, and developing resistant varieties is expected to take significant time. Hence, investing in Early Warning Systems is the most feasible & effective option for arresting epidemic spread. This system may have been pioneered by Ethiopia, but its replication globally could save the 1 billion people directly dependent on wheat for food & livelihood.
In fact, the framework of early warning systems — mobile based surveillance, forecasting models & communication platforms — is similar across all types of diseases & pests afflicting life on the planet globally — including the locust outbreak. Amidst millions going hungry, a locust swarm spread over an area of one square kilometre nearly eats the same amount of food as 35,000 people every day. The problem is as real as it gets and needs to be nipped in the bud early.
Governments worldwide need to actively scout for such best practices and invest significantly to replicate similar systems. Data from such systems feeding into global forecasting models shall further strengthen their robustness and accuracy of predictions.
The time to prioritize early warning systems is now.
Who knows how many people hunger will kill before the Coronavirus does?
The author is Priyadarshi Nanu Pany, founder & CEO of CSM Technologies. This article was originally published on his Medium profile: https://medium.com/@nanupany/early-warning-system-for-plant-protection-fc228a5eec18
[Illustration: FIG. 209.—Plating spoons by electricity.]
Practically all silver, gold, and nickel plating is done in this way; machine, bicycle, and motor attachments are not solid, but are of cheaper material electrically plated with nickel. When spoons are to be plated, they are hung in a bath of silver nitrate side by side with a thick slab of pure silver, as in Figure 209. The spoons are connected with the negative terminal of the battery, while the slab of pure silver is connected with the positive terminal of the same battery. The length of time that the current flows determines the thickness of the plating.
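The passage's claim that the length of time the current flows determines the thickness of the plating is quantified by Faraday's law of electrolysis. A brief illustrative calculation (not part of the original textbook; the current, time, and plate area are example values):

```python
# Faraday's law of electrolysis relates charge passed to metal deposited:
#   mass = (I * t / F) * (M / z)
# where F is the Faraday constant, M the molar mass, z the ion charge.

F = 96485.0  # Faraday constant, coulombs per mole of electrons

def plated_mass_g(current_a, time_s, molar_mass, valence):
    """Mass of metal deposited at the cathode, in grams."""
    moles = current_a * time_s / (F * valence)
    return moles * molar_mass

def plating_thickness_cm(mass_g, density_g_cm3, area_cm2):
    """Average coating thickness over the plated surface."""
    return mass_g / (density_g_cm3 * area_cm2)

# Silver plating from silver nitrate: Ag+ (z = 1), M = 107.87 g/mol,
# density 10.49 g/cm^3. One hour at 0.5 A over 50 cm^2 of spoons:
mass = plated_mass_g(current_a=0.5, time_s=3600, molar_mass=107.87, valence=1)
thickness = plating_thickness_cm(mass, density_g_cm3=10.49, area_cm2=50.0)
# Roughly 2 g of silver, a coating on the order of 40 micrometres —
# doubling the time doubles both.
```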
294. How Pure Metal is obtained from Ore. When ore is mined, it contains in addition to the desired metal many other substances. In order to separate out the desired metal, the ore is placed in some suitable acid bath, and is connected with the positive terminal of a battery, thus taking the place of the silver slab in the last Section. When current flows, any pure metal which is present is dissolved out of the ore and is deposited on a convenient negative electrode, while the impurities remain in the ore or drop as sediment to the bottom of the vessel. Metals separated from the ore by electricity are called electrolytic metals and are the purest obtainable.
295. Printing. The ability of the electric current to decompose a liquid and to deposit a metal constituent has practically revolutionized the process of printing. Formerly, type was arranged and retained in position until the required number of impressions had been made, the type meanwhile being unavailable for other uses. Moreover, the printing of a second edition necessitated practically as great labor as did the first edition, the type being necessarily set afresh. Now, however, the type is set up and a mold of it is taken in wax. This mold is coated with graphite to make it a conductor and is then suspended in a bath of copper sulphate, side by side with a slab of pure copper. Current is sent through the solution as described in Section 293, until a thin coating of copper has been deposited on the mold. The mold is then taken from the bath, and the wax is replaced by some metal which gives strength and support to the thin copper plate. From this copper plate, which is an exact reproduction of the original type, many thousand copies can be printed. The plate can be preserved and used from time to time for later editions, and the original type can be put back into the cases and used again.
MODERN ELECTRICAL INVENTIONS
296. An Electric Current acts like a Magnet. In order to understand the action of the electric bell, we must consider a third effect which an electric current can cause. Connect some cells as shown in Figure 200 and close the circuit through a stout heavy copper wire, dipping a portion of the wire into fine iron filings. A thick cluster of filings will adhere to the wire (Fig. 210), and will continue to cling to it so long as the current flows. If the current is broken, the filings fall from the wire, and only so long as the current flows through the wire does the wire have power to attract iron filings. An electric current makes a wire equivalent to a magnet, giving it the power to attract iron filings.
High Blood Pressure and Diabetes; A Deadly Duo
Source: Jet, Health, Feb. 10, 2003
Did you know that people with diabetes are more likely to have high blood pressure? In fact, almost two out of three adults with diabetes have high blood pressure.
Diabetes mellitus is a group of diseases characterized by high levels of blood glucose resulting from defects in insulin production, insulin action, or both. High blood pressure is defined in an adult as a blood pressure greater than or equal to 140 mm Hg systolic pressure or greater than or equal to 90 mm Hg diastolic pressure.
High blood pressure and diabetes are associated with serious complications and increase your risk of heart disease, stroke, eye problems, kidney problems, nerve disease and premature death. So if you have both, you have an even greater risk for other health problems.
One of the best ways to win a fight is to first realize that you’re in one. Get to know your enemy; see your doctor or head to the local clinic and get tested for high blood pressure and diabetes.
High blood pressure (or hypertension) isn’t called “the silent killer” for nothing. Oftentimes people will have it and not even know it until it’s too late. When checking for high blood pressure you’ll hear your blood pressure reading said as two numbers, like “one-thirty over eighty.” The first number is the pressure as your heart beats and pushes blood into the blood vessels. The second number is the pressure when your heart rests between beats. For most people with diabetes, keeping blood pressure below 130/80 will help prevent problems.
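The thresholds quoted above lend themselves to a small worked example. The helper below is a hypothetical sketch that encodes only the article's numbers (140/90 for hypertension, 130/80 as the diabetic target) — it is illustrative, not a clinical tool:

```python
def bp_category(systolic, diastolic, has_diabetes=False):
    """Classify a blood pressure reading against the article's thresholds.

    High blood pressure follows the definition given in the text
    (>= 140 systolic or >= 90 diastolic); the tighter 130/80 goal
    applies to people with diabetes.
    """
    if systolic >= 140 or diastolic >= 90:
        return "high blood pressure"
    if has_diabetes and (systolic >= 130 or diastolic >= 80):
        return "above diabetic target"
    return "within target"

print(bp_category(130, 80))                     # within target
print(bp_category(130, 80, has_diabetes=True))  # above diabetic target
```

Note how the same 130/80 reading changes category once diabetes is in the picture — exactly the point the article is making about the two conditions compounding each other.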
A doctor will diagnose diabetes by looking for risk factors such as lack of exercise, excess weight, a family history of diabetes and symptoms such as thirst and frequent urination, complications like heart trouble or signs of excess glucose or sugar in blood and urine tests. From there the doctor can decide, based on these tests and a physical exam, whether someone has diabetes. If a blood test is borderline abnormal, the doctor may want to monitor the person’s blood glucose regularly. If a person is overweight, he or she probably will be advised to lose weight.
Fortunately, the care of high blood pressure is similar to the care of diabetes. In fact, sometimes the same treatment works for both. Weight loss and regular exercise will help control both hypertension and diabetes. Also, people with high blood pressure should cut down on salt. By working with your health care providers and following their treatments, you can meet the challenge of caring for high blood pressure and diabetes.
Justice is said to be blind but no one ever said it was fast. While cases on television are resolved from incident to trial verdict in an hour, with time out for commercials, such speed is not true in real life.
So why does litigation take so long?
The Court Calendar
Courts are busy places and there are only so many days available for the court to hear cases. A judge's calendar fills up quickly. Civil cases take longer and are typically set for trial a year to 18 months after being filed. Criminal trials are set sooner, since the defendant has a right to a speedy trial.
The process of a lawsuit takes time:
There are procedural rules that govern the process of the lawsuit and each step takes time. Once the lawsuit is filed and served on the opposing party, the defendant then has 20 to 30 days to answer the complaint.
Even courts that have been designed to streamline the process of litigation, such as the Michigan Business Courts, still take a long time by most people’s standards.
Each stage of litigation adds to the length and complexity of the process.
Discovery or Evidence Gathering:
After the defendant answers the complaint, the parties start the discovery process. Each side sends questions to be answered under oath and asks for the production of necessary documents. The parties have 20 to 30 days to answer and produce the documents. The judge can set a time limit on discovery, generally giving the parties 3 to 6 months to complete the process.
Sometimes there are discovery disputes that must be resolved by the court. If this occurs, a motion is filed, a court date is set and the judge decides the matter after a hearing. Such motions and hearings can lengthen the process.
Once the parties have all the documents, depositions are scheduled. A deposition is an out-of-court proceeding in which a witness or expert provides testimony under oath. Depending on the number of witnesses and experts being deposed, this process can take between 2 and 4 months to complete.
Mediation and Arbitration:
The vast majority (about 90%) of civil lawsuits are settled without going to trial. The process used to reach a settlement varies from state to state. In Michigan, for example, a case evaluation or mediation conference is held 8-9 months after the lawsuit is filed. In case evaluation, each party presents their case and an award is determined by the panel of lawyers hearing it. The parties have 28 days to accept or reject the award. If both parties accept the award, the matter is settled. If not, a settlement conference with the judge may be set, where the judge will discuss the sticking points.
If a case is not resolved via case evaluation or mediation, it may end up in arbitration, which is a binding method to resolve disputes outside of court. While arbitration often replaces traditional litigation, it can easily be just as complicated and time-consuming. If the case does not go to arbitration, traditional litigation will ensue.
The process of getting to trial is also time-consuming. Often a hearing or conference is held to discuss pre-trial concerns, including determining which evidence will be agreed to and which will be contested. These pre-trial procedures can take a long time depending on the judge's calendar, but eventually a final trial date is set. A civil trial can last from a few hours to several weeks, depending on the issues being litigated.
Verdict and Judgment
After the verdict, there can be appeals to a higher court, if one party is not satisfied.
The lengthy process of a civil suit is designed to help the litigants find the truth and reach a fair decision. Thus, it is almost impossible to get to trial in less than a year. Lawyers are not typically the reason cases take so long to resolve. They are merely working within a long-established system.
While it is important to move as quickly as procedures allow when handling lawsuits, what is more important is making sure that your case is competently prepared for trial.
The lawyers at The Miller Law Firm are experienced trial attorneys and are proficient in representing both plaintiffs and defendants.
World War One and The Postal Service
At the outbreak of WW1 in 1914, the Post Office employed over 250,000 people and, with a revenue of £32 million, was one of the largest single employers in the world. The Post Office not only handled 5.9 billion items of post each year but was also responsible for the nation's telephone and telegraph system.
When war was declared in 1914 national fervour saw a huge amount of men enlist and the government used the post office to circulate recruitment forms.
The Post Office had its own Battalion. The Post Office Rifles (POR) was made up entirely of Post Office Staff. You can read more about the Post Office Rifles in this informative book by Duncan Barrett which I featured on Jaffareadstoo in 2014 - you can read more here
Receiving news from home was one of the best ways to keep up a soldier's spirits, and letters and gifts from family and friends helped to relieve the boredom of sitting in trenches; understandably, a few home comforts helped to make this living hell almost bearable. Small packages with socks, gloves and scarves could find their way to France in just about 2 days, and even perishable food items like jams, cakes and biscuits could arrive at their destination long before they had time to perish.
Goods and letters started their outward journey at a sorting station in Regent's Park, London, and from there mail was shipped to the English ports before making its way to the French ports of Calais, Boulogne or Le Havre. Once on French soil, the Royal Engineers Postal Section were responsible for getting the mail to the troops.
You can only imagine what joy a letter from home, a pair of knitted socks or piece of homemade fruitcake did for morale.
However, this rather sad poem written by Eleanor Farjeon reiterates the poignancy and heartbreak of receiving, or not receiving, mail...
Easter Monday (In Memoriam E.T.) (1917)
In the last letter that I had from France
You thanked me for the silver Easter egg
Which I had hidden in the box of apples
You liked to munch beyond all other fruit.
You found the egg the Monday before Easter,
And said, ‘I will praise Easter Monday now –
It was such a lovely morning’. Then you spoke
Of the coming battle and said, ‘This is the eve.
Good-bye. And may I have a letter soon.’
That Easter Monday was a day for praise,
It was such a lovely morning. In our garden
We sowed our earliest seeds, and in the orchard
The apple-bud was ripe. It was the eve.
There are three letters that you will not get.
April 9th, 1917
The Battle of Arras took place between 9 April and 16 May 1917 and resulted in 160,000 British casualties.
Specialty Testing For Children
Ages: birth to 18 years
ADD, ADHD & MEMORY TESTING
EMOTIONAL & SOCIAL IMPAIRMENT
BEHAVIORAL & EMOTIONAL SCREENING BASC-3
INTRODUCING THE T.O.V.A.® 9
CAREER ASSESSMENT INVENTORY™
BROWN ATTENTION-DEFICIT DISORDER SCALES®
ADD, ADHD & MEMORY TESTING (Ages 5 to 16 years old)
Children's Memory Scale™ (CMS)
CMS - Now you can compare memory and learning ability, attention, and achievement. The Children's Memory Scale™ (CMS) fills the need for a comprehensive learning and memory test.
Plays a vital role in assessing learning disabilities and attention deficit disorders
Helps to plan remediation and intervention strategies for school and clinical settings
As a screener or diagnostic instrument, CMS measures learning in a variety of memory dimensions:
Attention and working memory
Verbal and visual memory
Short- and long-delay memory
Recall and recognition
Links for Valuable Comparisons
For children with learning disabilities, diagnosed with TBI, ADHD, epilepsy, cancer, brain tumors, or strokes.
EMOTIONAL & SOCIAL IMPAIRMENT (Ages 7-18 years old)
Beck Youth Inventories®
Beck Youth Inventories: To assess symptoms of depression, anxiety, anger, disruptive behavior, and self-concept.
The instruments measure the child's or adolescent’s emotional and social impairment in five specific areas:
Depression Inventory: In line with the depression criteria of the Diagnostic and Statistical Manual of Mental Disorders—Fourth Edition (DSM-IV), this inventory allows for early identification of symptoms of depression. It includes items related to a child's or adolescent's negative thoughts about self, life and the future, feelings of sadness and guilt, and sleep disturbance.
Anxiety Inventory: Reflects children's and adolescents’ specific worries about school performance, the future, negative reactions of others, fears including loss of control, and physiological symptoms associated with anxiety.
Anger Inventory: Evaluates a child's or adolescent’s thoughts of being treated unfairly by others, feelings of anger and hatred.
Disruptive Behavior Inventory: Identifies thoughts and behaviors associated with conduct disorder and oppositional-defiant behavior.
SENSORY PROFILE (Ages birth to 14 years old)
The Sensory Profile™
The Sensory Profile™ 2 family of assessments provides you with standardized tools to help evaluate a child's sensory processing patterns in the context of home, school, and community-based activities. These significantly revised questionnaires evaluate a child's unique sensory processing patterns from a position of strengths, providing deeper insight to help you customize the next steps of intervention. The forms are completed by caregivers and teachers, who are in the strongest position to observe the child's response to sensory interactions that occur throughout the day.
The Sensory Profile 2 helps you: Identify and document how sensory processing may be contributing to or interfering with a child's participation at home, school, and the community. Contribute valuable information to a comprehensive assessment of the child's sensory strengths and challenges in context. Develop effective treatment plans, interventions, and everyday remediation strategies
NEUROCOGNITIVE PROCESSES ( Ages 3-16 years old )
The only customizable measure designed to assess both basic and complex aspects of cognition critical to children’s ability to learn and be productive, in and outside of, school settings.
To assess executive functioning and attention, language, memory and learning, sensory-motor, visual-spatial processing and social perception.
Assess executive functioning
Vary the number and variety of subtests according to the needs of the child
Link results to educational difficulties
Facilitate recommendations for mental health interventions
Obtain a comprehensive view of quantitative and qualitative patterns of neuropsychological performance
IQ TESTING (Ages 6 to 16 years old)
WISC-V is an assessment of children's overall intellectual ability and various specific cognitive domains
The Wechsler Intelligence Scale for Children (WISC), is an individually administered intelligence test for children between the ages of 6 and 16.
To assess giftedness, intellectual and learning disabilities, brain injuries and evaluate cognitive processing.
The WISC-V generates a Full-Scale IQ (formerly known as an intelligence quotient or IQ score) that represents a child's general intellectual ability. It also provides five primary index scores: Verbal Comprehension Index, Visual Spatial Index, Fluid Reasoning Index, Working Memory Index, and Processing Speed Index.
ACADEMIC STRENGTHS (Ages 4-18 years old)
To assess academic strengths, weaknesses, and achievements in students.
WIAT-III The WIAT-III assesses the academic achievement of children, adolescents, college students and adults, aged 4 through 85. The test enables the assessment of a broad range of academic skills or only a particular area of need. The WIAT-III is a revision of the WIAT-II (The Psychological Corporation), with additional measures. There are four basic scales: Reading, Math, Writing, and Oral Language. Within these scales, there is a total of 9 subtest scores.
BEHAVIORAL & EMOTIONAL SCREENING BASC-3 ( Ages 2-21 years old)
BASC-3 is a comprehensive measure of a child's adaptive and problem behaviors in the community and home settings. There are three age levels: preschool, child, and adolescent. A comprehensive set of rating scales and forms including the Teacher Rating Scales (TRS), Parent Rating Scales (PRS), Self-Report of Personality (SRP), Student Observation System (SOS), and Structured Developmental History (SDH). Together, they help you understand the behaviors and emotions of children and adolescents.
Uses a multidimensional approach for conducting a comprehensive assessment
A strong base of theory and research gives you a thorough set of highly interpretable scales
Ideally suited for use in identifying behavior problems as required by IDEA, and for developing FBAs, BIPs, and IEPs
Enhanced computer scoring and interpretation provide efficient, extensive reports
Normed based on current U.S. Census population characteristics
Differentiates between hyperactivity and attention problems with one efficient instrument
An effective way to measure behavior
INTRODUCING THE T.O.V.A.® 9 ( Ages 4-80+ years old)
A computerized, objective measure of attention and inhibitory control normed by gender for ages 4 to 80+.
The Test of Variables of Attention (T.O.V.A.) provides healthcare professionals with objective measurements of attention and inhibitory control. The visual T.O.V.A. aids in the assessment of, and evaluation of treatment for, attention deficits, including attention-deficit/hyperactivity disorder (ADHD). The auditory T.O.V.A. aids in the assessment of attention deficits, including ADHD. T.O.V.A. results should only be interpreted by qualified professionals.
CAREER ASSESSMENT INVENTORY™- The Enhanced Version
(CAI) ( Ages 15+years old)
Overview: Occupational interest inventory for college-bound and non-college-bound individuals.
The Career Assessment Inventory – Enhanced Version assessment compares an individual's occupational interests to those of individuals in 111 specific careers that reflect a broad range of technical and professional positions in today's workforce.
BROWN ATTENTION-DEFICIT DISORDER SCALES®
(BrownADDScales) ( Ages 3 years - Adult)
Overview: Quickly screen for reliable indications of ADD
Obtain a consistent measure of ADD across the life span with the Brown Attention-Deficit Disorder Scales® for Children and Adolescents and the Brown Attention-Deficit Disorder Scales for Adolescents and Adults. Based on Thomas Brown's cutting-edge model of cognitive impairment in ADD, the Brown ADD Scales explore the executive cognitive functioning aspects of cognition associated with AD/HD (ADD).
For this extended assignment I am going to focus on play and how important play is for children and young people. I am going to focus on children up to the age of 6. "Play is a spontaneous and active process in which thinking, feeling and doing can flourish." (http://www.playwales.org.uk/). Play is important for children and young people as it can help children to build their confidence. Also, play helps children to develop physically, mentally, socially and emotionally. If children and young people have access to good play provision then it has many benefits for them; these may be:
EYE37WB-2.1 Describe areas of learning and development within the current framework which relate to school readiness.
As part of "Every Child Matters" and the Childcare Act of 2006, the government decided that all children aged 3-4 were entitled to 15 hours' free part-time early years education per week. Children aged 3-4 are entitled to this for 38 weeks of the year. Although this is a government-funded scheme,
Child development is one of the main aspects of growing and developing as a human being, especially cognitive development. The Encyclopedia of Children's Health defines cognitive development as "The construction of thought processes, including remembering, problem solving, and decision-making from childhood through adolescence to adulthood." Starting from a young age, babies begin to learn about the world that surrounds them, absorbing new information from their environment. These skills continue to grow stronger as babies' brains develop further through experience and the expansion of their surroundings. As they become older, they will find it more difficult to develop
Many children, sadly, are faced with disease and illness every day. Research has shown that with the addition of a Child Life Program to a hospital setting, children in need of care have better outcomes and their hospital experience improves exponentially. My cousin and his family benefitted from the Child Life Program at Penn State Hershey Medical Center Children’s Hospital. My interview with my aunt and cousin demonstrates that Child Life Specialists definitely can make a difference when a family encounters a child’s illness. The support and care of the Child Life Specialists assist not only the child and the parents, but the entire family. They work as liaisons with the entire medical team to relieve the stress and anxiety the hospital and
Personal, Social and Emotional Development (PSED) is an important area of learning as this is where children learn about their feelings, build friendships and relationships with others and work on themselves. In the early years settings there are various types of play that can support a child with their PSED. These include; dancing, singing, imaginative play such as role play, drawing, writing, constructing,
Hello, my name is Arpandeep Kaur. I am a student of Early Childhood Education, a branch of education theory in which hands-on experience is gained and which relates to the teaching of young children up until the age of about eight. As a student of early childhood education, in this assignment I would like to discuss
Within 'Children's Context for Development', Tovah P. and colleagues emphasise the importance of play for development, suggesting that exploration, discovery, make-believe and play are vehicles for development because of the level of focus invested in performance rather than result, thus enhancing children's observation, understanding and problem solving
Developmental psychology makes an attempt to comprehend the types and sources of advancement in children’s cognitive, social, and language acquisition skills. The pioneering work done by early child development theorists has had a significant influence on the field of psychology as we know it today. The child development theories put forward by both Jean Piaget and Erik Erikson have had substantial impacts on contemporary child psychology, early childhood education, and play therapy. In this essay, I aim to highlight the contribution of these two theorists in their study of various developmental stages, the differences and similarities in their theories, and their contributions to the theory and practice of play therapy.
As a pre-service training educator, specific goals and objectives should be set to achieve the educator's own educational philosophy. An educational philosophy is an individual statement of an educator's guiding principles about the education-related
2. The psychodynamic theory is associated with Sigmund Freud and Erik Erikson. Theorists who support this theory state that early childhood experiences play a major part in the later development of a child's personality, even if they are buried in the unconscious. Psychodynamic theorists also believe that children go through qualitatively distinct stages in their development. In my classroom, I could apply this theory by engaging the child on who they think they are, and how it will affect their future. Identity plays a major role in this theory; by engaging the child on who they think they are, I feel I will be able to assess their ability to learn.
Health and well-being can be described as the achievement and maintenance of physical fitness and mental stability. Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity; well-being is defined as the quality of people's lives (World Health Organization, 1948). Health and well-being focus on physical, cognitive, social and emotional development. Physical and cognitive development concern being physically healthy and thinking healthily, for example positive thinking. Social and emotional development concern the quality of people's lives as influenced by the environment, for example health status, education, socio-economic status and family background.
Early childhood education is often defined as "a branch of educational theory which relates to the teaching of young children (formally and informally) up until the age of about eight." Early childhood education is closely knit and connected to the development of the brain, specifically the development of physical, emotional and social skills. "A child's brain can be about 80% the size of a full-grown adult's brain by the age of five years old and at the age of three, a child has a brain 2.5 times more active than the average adult." (McCarthy, 2011.) Early in a child's life, synapses begin forming at a rate faster than any other time of their lives. Because of this, young children learn things much quicker than adults do. Development of
Early childhood education (ECE) is a type of educational program which relates to the teaching of young children in their preschool years. It consists of many activities and experiences designed to assist in the cognitive and social development of preschoolers before they start elementary school. In most early childhood programs and schools, technology will be part of the learning background of the future. To make sure this new technology is used effectively, we must ensure that teachers are fully trained and supported. In this paper, theoretical perspectives of child development are discussed along with the basic elements of a learning program. It also briefly explains the role of technology in Early Childhood Education.
Source: https://www.ipl.org/essay/Role-Of-Technology-In-Early-Childhood-Education-PKPJWJFHEACPR (CC-MAIN-2023-40, English; topic: Education & Jobs)
History is one of my favourite subjects to teach in our homeschool, especially Islamic history! I am very excited to share with you our Living History curriculum choices for the coming homeschool year!
Download our FREE Homeschool History Reading Plan, and you can read these beautiful books along with our family! (More information is at the end of this blog-post.)
Further research of the Charlotte Mason method of education has led me to many delightful discoveries; one of which is her method of teaching history through living books and biographies. This coming school year, I will be using this methodology to teach my two young boys, ages 6 and 8, more about later Medieval times in Britain and the Islamic world. Towards the middle of the year, we hope to start learning about the Tudors.
This blog-post may include affiliate links. Please see Disclaimer for more information.
If you’re interested in learning more about Charlotte Mason’s method of teaching History: CLICK HERE
I have collected together an assortment of beautiful books that we will use this year; some we will read together as a family, and others are independent reading for my eight year-old. This curriculum also incorporates Islamic History.
If you would like to use this curriculum in your homeschool as well, please scroll down to the bottom of this blog post, and you can download our Homeschool History Reading Plan for FREE!
The topic of Columbus, and how to teach it, is a difficult dilemma for many parents, as the horrific atrocities committed upon the native people of America are ignored by most historical accounts in children’s books. This is an excellent article to help you navigate this issue with your children.
Independent Reading/ Biographies (Ages 8+)
We hope that my son will read as many of these books as he can over the whole year, reading independently from them for only 10 minutes each school day.
Please note: I have not yet pre-read all of these books, but I plan to insha’Allah. I would always advise you to pre-read anything that your child will be reading independently.
So this is our plan for the coming year for History, insha’Allah.
History Curriculum: Islamic and European History
If you would like to read along with us, I have planned out the first term (12 weeks) of family reading, which you can DOWNLOAD HERE: Homeschool History Reading Plan.
As I mentioned above, this is a continuation of last year’s study of the medieval times, and so the British history component begins with Henry V (1413).
I do not plan out my son’s independent reading, but instead allow him to select a book from the list above, and read from it for 10 minutes daily. This approach could also work for your family.
To use the reading schedule, simply read down the list in order, beginning at the top and working your way down to the bottom. Each square corresponds to the number of readings/sittings it will take to complete the chapter; e.g. 2 squares indicates that it will probably take 2 sittings to read through that particular chapter. You can even use this as a checklist if you like, and tick off each reading as you complete it.
The chapter names are written in the left-hand column, and the colour of the box indicates which book it is from. There is a “key” to help make this clearer. If you need any further help with this reading schedule, please leave me a comment below and I’ll do my best to help insha’Allah.
Are you struggling to choose a Homeschool History curriculum? There are so many different curricula and living books available, that choosing the right “fit” can quickly become overwhelming!
In this blog-post I’ll be reviewing three of the most popular Homeschool History curricula, that we have personal experience with, to help you decide what would be best for your children. I’ll also be discussing why the study of History is so important in a child’s education.
This blog-post contains affiliate links. See Disclaimer for more information.
Why Study History
In this modern educational culture, we have come to view History as a supplemental subject; a subject that is done merely to enrich the more “important” disciplines. However, I would argue, as Charlotte Mason did over a hundred years ago, that history is a “vital part of education.” (Vol. 6, p.169).
Understanding the events and people of the past, can help us to understand our own reality, and place in this world. The study of history exposes our children to worthy ideas, foreign worlds, people of noble character, and can act as an antithesis to the misguidance and trappings of modernity. It helps children to see what virtue looks like, through their imagination, and begins to train their powers of reasoning.
“…a subject which should be to the child an inexhaustible storehouse of ideas, should enrich the chambers of his House Beautiful with a thousand tableaux, pathetic and heroic, and should form in him, insensibly, principles whereby he will hereafter judge of the behavior of nations, and will rule his own conduct as one of a nation.”
-Vol. 1 p.279
History, when taught by the principles set out by Charlotte Mason, encourages children to relate to those unlike them; to humanize people from other nations and distant times.
“If he comes to think…that the people of some other land were, at one time, at any rate, better than we, why, so much the better for him.”
History has far more to offer our children than just the memorization of facts and dates. It can help to shape their character and guide the way they think.
Like many, I was taught history using a dry textbook followed by comprehension questions. These questions tested my ability to pick facts out of the text, but did not develop my person in any way. I consider the many years I spent sitting in those history lessons time wasted; little information was retained, no ideas imbued, and any interest I once had for history quashed. The great thoughts and personalities of history remained hidden from me until I began to learn alongside my children using the Charlotte Mason method.
Charlotte Mason History
Charlotte Mason advised us to take our time with history; to dwell on those times and people who inspire our children, instead of rushing through in the effort to cover “everything”.
“Let him, on the contrary, linger pleasantly over the history of a single man, a short period, until he thinks the thoughts of that man, is at home in the ways of that period.” -Vol. 1, p.280
She also recommended the use of living books to teach history, specifically mentioning “Our Island Story” by H. E. Marshall (Vol. 6, p.169) as the main text in the first two years (Forms 1B and 1A), as well as reading well-written biographies of historical figures from Form 1A onwards.
Alternatively, many homeschooling families choose to use The Story of the World, by Susan Wise-Bauer as their main text or sole history curriculum. Another option is A Child’s History of the World by V. M. Hillyer.
Homeschool History Options
The Story of the World, Our Island Story and A Child’s History of the World are the three most popular choices of homeschool history curriculum.
This blog post aims to compare these three popular homeschool history texts, and highlight their strengths, weakness, and differences.
To help you further, I’ve made this Youtube video showing the books themselves, and discussing some of this details further. WATCH THIS VIDEO:
The Story of the World, by Susan Wise-Bauer is one of the most popular homeschool history curricula on the market. It was written to follow the classical educational model, however many CM families also use it.
The complete series consists of four volumes, which cover history chronologically from Ancient times through to the Modern age.
In previous years we have worked through Volume 1 (Ancient times), which covers world history from 7000B.C. to the Fall of Rome. However, for reasons I will explain later, we chose not to move onto Volume 2 – Medieval Times.
Each chapter is 3-4 pages long (A5), with plentiful black-and-white illustrations and maps throughout. It is written in a conversational style, which appeals to many children, as it is easy to understand and is generally very entertaining.
The books do include Biblical stories and mythology. There has also been some concern voiced about the portrayal of Prophet Muhammad in Volume 2. I have not read this volume myself, so I cannot comment on the specifics.
Although the author makes a concerted effort to cover the history of many nations, the book still presents a very Euro-centric world view, and so many families may feel the need to supplement this curriculum.
There are also optional Activity books available to go along with the main text. For every chapter in the main text, the activity book contains cross-references in encyclopedias, additional reading, extensive recommendations for audio-books and literature. The activity books also contain reproducible maps and coloring pages, as well as lists of crafts projects.
Our experience of using The Story of the World Vol. 1 was mixed. The children seemed to enjoy it, and found it fun and easy to understand, which was perfect for our first year homeschooling. It also gave me an idea of how to teach history in a home-setting, which was a very valuable lesson.
Unfortunately, the conversational, modern writing style did not spark those “juicy” conversations that other living books can encourage.
I found that the children had retained very little from the text a few days after the lesson. I also found the fast-paced nature of the book very frustrating, as the author has tried to cover so much history in just one book. Whilst I understand the thought process behind that, my children and I were not given the chance to form connections and relations with the material.
In hindsight I could have slowed our progress down, and taken two years over the book instead of one, adding in additional reading and other living books. However, as a new homeschool mum, I lacked the confidence to step away from the author’s recommendations.
However, having spoken to many other homeschooling families, it seems that this is exactly what others have done; using The Story of the World as their “spine” and supplementing with their own resources and literature.
I feel that The Story of the World is a fantastic resource for teaching homeschool history. It is ideal for those who are uncomfortable teaching the subject and need some guidance, those new to home-education, or families who feel more confident reading modern English.
Personally, I would not class The Story of the World as a living book, as it did not inspire my children with great ideas, or spark interesting conversations. It is also not a book that I would pick up and read for fun, unlike some other history books that I will discuss later in this series.
The Story of the World is the perfect “middle-ground” for those interested in stepping away from the “textbook-workbook model” of teaching, but who are not yet comfortable or interested in using living books.
Our Island Story is the primary history text recommended by Charlotte Mason in Volume 1 for Forms 1B and 1A (children under 9 years old).
This beautifully written book tells the story of Britain in chronological order from pre-history through to Queen Victoria. Each chapter is approximately 3-4 pages long and focuses on a historical figure, their story, moral character and contribution to the history of Britain.
The book also contains some poetry and Shakespeare quotes which could be used for further study and memorisation. There are a few beautifully hand-painted illustrations in some chapters for the reader to enjoy. There is also a list of Kings and Queens at the beginning of the book, which could be useful when constructing your timeline or Book of the Centuries.
Unlike The Story of the World, there are no maps, and no accompanying activity books. If your children enjoy crafts and hands-on activities, you may choose to find these activities yourself.
The book is written in an older English, with richer language than most modern history books. It may take some time for children to get used to this language if they are not already accustomed to it.
It is written from an English (not British) Christian world view, and this should be borne in mind when discussing the Crusades and other such conquests within and around the UK.
Due to its world-view, and the fact it only covers the history of Britain, you may wish to supplement this book with additional reading.
We stopped using this book after six months as my son was finding the language difficult to understand and narrate from. However, I feel this book has a lot to offer and I hope to re-introduce it into their homeschool history curriculum sometime in the future.
Overall, I found this book excited the children’s imagination and filled their young minds with worthy ideas and beautiful stories. I would happily read this book myself for enjoyment and my own self-education!
A Child’s History of the World was written by V. M. Hillyer, the late Head Master of the Calvert School, Baltimore. Focusing on the stories of historical figures, it covers World History from pre-history all the way through to the Cold War. Although written in conversational, modern English, the language is rich and engaging.
There are black-and-white illustrations and maps scattered throughout the book. The chapters are approximately 4-5 pages long. There is no accompanying activity book, and so parents may wish to supplement with other material.
We primarily used the Audiobook version from Audible. The narrator was very entertaining and read the book beautifully. I would highly recommend it!
Although the author writes from a Western worldview, I felt that he was respectful to other faiths and people, a fact that may have been noted by the people behind the Ambleside Online and BookShark curricula, who have included it in their elementary-years history curricula.
Through his writing, the author also highlights and raises questions about good character and morals throughout.
Please note, this book does contain Biblical stories and mythology. Also, as it is attempting to cover a large period of time in one volume, many important historical events are not included or are skimmed over. As the parent, you may wish to add in additional reading.
The book itself is paperback, self-published and not as attractive as the other homeschool history curricula mentioned. Despite this, A Child’s History of the World is an engaging introduction to world history for children aged 5-9 years old and well worth your consideration.
These are the main three homeschool history curricula that you will see mentioned in literature-based, Classical and Charlotte Mason homeschools.
However, as I have hinted towards, there are many more options! In the next blog post and Youtube video, I will be discussing some alternative books and methods that we use to teach history in our homeschool.
Thank you so much for stopping by. I hope you found these reviews helpful.
Don’t forget to WATCH THE VIDEO, and if you have any questions, please leave them for me in the comments below.
Both my 6 year-old and 4 year-old boys enjoyed doing this activity, and our Ancient Ming bowls look beautiful displayed in our dining area.
Ancient Chinese Bowl Craft
Air Drying Clay
Cling-film (Plastic wrap)
Small plastic bowl
Blunt knife
Blue paint
White glue (optional)
How to Make your Ming Bowl:
Roll out clay to about 1 cm thick, so that it will cover the entire plastic bowl.
Wrap the plastic bowl in cling-film and turn it upside down. Make sure your surfaces are covered in newspaper or an old table cloth.
Lay the clay over the upturned bowl.
Use your knife to trim away the excess clay and make the edges smooth.
Leave clay to dry over-night.
Once completely dry, use your blue paint to decorate the outside of the bowl. Chinese pottery was typically painted with flowers, birds and outdoor scenery. Leave to dry.
If you would like to give your bowl a gloss finish, mix 2 parts glue with 1 part water to make a glaze. Paint this mixture over the entire bowl, and leave to dry.
Here are my son’s beautiful Ancient Chinese Ming Bowls:
My 6 year-old painted waves onto his bowl, and my younger son (4 years-old) chose to paint trees on his. They were really pleased with the results and have been showing these bowls to anyone who comes to visit!
More Resources for Ancient China Unit Study:
Other Resources we have used for our Ancient China Unit Study include:
We have really enjoyed learning about Ancient China. The only thing left to do now is to get some Chinese take-away and watch Kung-fu Panda!
Have you been looking at Ancient China in your homeschool? Have you done any interesting Chinese crafts? I would really love to hear what resources you have been using. Please share with us in the comments below!
Make sure you don’t miss the next blog post by subscribing to my mailing list.
Source: http://ourmuslimhomeschool.com/category/history (CC-MAIN-2018-47, English; topic: Education & Jobs)
In this post, we list and explain the differences between voltage and amperage. Voltage is the potential difference between two points in an electric field. Amperage, on the other hand, measures the intensity of the electric current in amperes (amps).
Voltage is the pressure that a source of electrical energy can exert on electrons in an electrical circuit, thus establishing the flow of electrical current. The voltage is, then, the potential difference between two points of an electric field or a closed circuit.
The greater the potential difference that a source of electricity can exert, the greater the voltage in the closed circuit. This potential difference is measured in a unit known as the volt (V).
Read also: Types of Energy: 20 Ways Energy Manifests
The voltage between two points in an electric field is equal to the work that a unit of positive charge must perform to move between them. The voltage is independent of the path the charge travels and depends only on the potential difference between the two points.
When a conductor is placed between two points that have a potential difference, electric current will flow, always from the area of higher potential to the area of lower potential. In the absence of a source that generates electricity, the flow of current will stop when the two points reach equal potential.
Amperage is the intensity of an electric current, expressed in the unit known as the ampere (amp). The current intensity is simply the number of electrons that flow through a conductor in a given time.

In the SI, or International System of Units, current intensity is measured in amperes, with the symbol A.

You should keep in mind that one ampere equals one coulomb per second, which is roughly a flow of 6.241 × 10^18 electrons per second.
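The electron count behind a given current follows directly from the elementary charge. Here is a minimal sketch in Python (the constant and function names are my own, for illustration only):

```python
# Elementary charge in coulombs: the charge carried by a single electron.
ELEMENTARY_CHARGE = 1.602176634e-19

def electrons_per_second(current_amps: float) -> float:
    """Number of electrons passing a point each second for a given current.

    One ampere is one coulomb per second, so dividing the current by the
    charge per electron gives the electron flow rate.
    """
    return current_amps / ELEMENTARY_CHARGE

# A current of 1 A corresponds to roughly 6.24e18 electrons per second.
print(f"{electrons_per_second(1.0):.3e}")
```

Running this for 1 A reproduces the figure quoted above.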
Electrical equipment is classified according to its amperage, that is, according to the energy they demand from the grid when they are in operation.
When a person is electrocuted, the amount of electricity that passes through their body will determine the severity of the accident, and this is determined by the amperage and not the voltage. A small flow of 0.1 to 0.2 A can kill due to its devastating effects on the heart. Some people have survived major discharges because their muscle contractions acted as a shield to protect the heart.
Difference between voltage and amperage
- Voltage is the measure of the potential difference between two points in a closed electrical circuit.
- Amperage is the rate at which electric charge flows between two points in a closed electrical circuit.
- The potential difference between two points is what allows the flow of electric current in a conductor.
- The current intensity is no more than the number of electrons flowing per unit of time in a conductor.
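Ohm's law ties the two quantities together: for a fixed resistance, the current that flows is proportional to the voltage applied. A short sketch under that assumption (the function name and example values are illustrative):

```python
def current_from_voltage(voltage_volts: float, resistance_ohms: float) -> float:
    """Ohm's law, I = V / R: current in amperes through a given resistance."""
    if resistance_ohms <= 0:
        raise ValueError("resistance must be positive")
    return voltage_volts / resistance_ohms

# Doubling the voltage across the same resistor doubles the current.
print(current_from_voltage(12.0, 6.0))  # 2.0 A
print(current_from_voltage(24.0, 6.0))  # 4.0 A
```

This is why household appliances with low internal resistance draw a higher amperage from the same mains voltage.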
Source: https://www.extremescholars.com/difference-between-voltage-and-amperage/ (CC-MAIN-2021-17, English; topic: Science & Tech.)
Chinese and Russian officials are considering the possibility of diverting water from Lake Baikal in Siberia through Mongolia to China’s Inner Mongolia Region. According to an unnamed official from China’s Ministry of Water Resources, Russian officials contacted them in May about such a project (Chinadaily.com.cn, May 24).
China’s Water Ministry quickly dismissed the Baikal report. Gu Hao, a ministry spokesperson, said there was “no plan to carry out research on a water diversion project from Lake Baikal.” Furthermore, the ministry is not in contact with any foreign institute on any similar project, he said (Xinhua News Agency, May 26).
Interestingly, Russian media outlets have expressed no interest in the Chinese leak and subsequent denial. However, Chinese media statements are understood as a trial balloon to test Russian readiness to discuss Baikal water sales.
Five time zones east of Moscow, Lake Baikal is located in southeastern Siberia, spanning Buryatia and Irkutsk region. The lake holds some 20% of the world’s fresh surface water, and it is the world’s oldest lake, perhaps 25-30 million years old. The United Nations declared Lake Baikal a World Heritage Site in 1996.
Nonetheless, Lake Baikal has been considered a potential source for water exports. Since the early 1990s, local capitalists and experts in Irkutsk region have circulated plans to build a pipeline from Baikal to China. Their vision involves pumping water not just to China, but also possibly to Africa, the Middle East, and even the United States. In July 2004, an economic delegation from Irkutsk traveled to Shenyang, China, where local businessmen expressed interest in purchasing supplies of Baikal drinking water, according to Irina Dumova, deputy head of the Irkutsk regional administration.
Notably, Baikalskiye Vody (Baikal Waters) has been keen to start exports to China, even sending small amounts of bottled water as a first step. The company’s shareholders include the East Siberian Railway and the East Siberian River Shipping Line. The company evidently is not discouraged by the fact that, for the past 45 years, a massive cellulose plant has been spilling chemicals into the southern portion of the lake.
Irkutsk businessmen are apparently inspired by China’s growing demand for water. According to Chinese media reports, some two-thirds of China’s cities are facing water shortages, while more than 100 localities have already been forced to impose restrictions on water use. Some cities have reportedly started to limit the development of water-intensive industries such as textiles and paper manufacturing.
Furthermore, Russia may soon see the emergence of a new class of freshwater tycoons. In February 2004, the Russian government drafted a new Water Code, allowing private ownership of rivers, lakes, and other water reservoirs.
The government did offer one caveat, specifically stating that Baikal was not to be privatized. Deputy Minister of Economic Trade and Development Mukhamed Tsikanov has made it clear that Baikal is not for sale (RIA-Novosti, February 19, 2004). However, the Russian government has a history of backing away from its promises.
In the meantime, the plans for massive water sales to China recalls the Soviet plan to build a 2,225 kilometer canal to divert Siberian rivers into Asian deserts. Through the 1970s and the 1980s, the USSR Water Resources Ministry prepared plans for water diversion, and it nearly succeeded in launching actual construction. However, Russian scholars condemned the project, arguing that diverting river waters could upset the global environmental balance. These protests became the roots of Russia’s homegrown environmental movement. The Soviet government also found the project not feasible economically; hence the plan was abandoned in the mid-1980s.
In December 2002, Moscow Mayor Yuri Luzhkov moved to revive a bold plan to build a 16-meter-deep and 200-meter-wide canal from Khanty-Mansi to Central Asia through Kazakhstan so as to divert the Siberian Ob and Irtysh Rivers to the Central Asian Amu Darya and Syr Darya Rivers. The canal project would involve diverting about 6-7% of the Ob’s waters. Project supporters argued that selling excess fresh water to Central Asia could prove a lucrative project for Russia. They estimated the project’s cost at between $12 billion and $20 billion.
The Central Asian governments have backed the plan to divert Siberian waters because they are struggling to share water resources. According to Luzhkov, the canal would enlarge the amount of arable land in Central Asia by roughly 2 million hectares, and by 1.5 million hectares in southern Siberia. He suggested forming an International Eurasian Consortium funded by loans backed by future proceeds from fresh water sales to Central Asia. But Russian media outlets have ridiculed Luzhkov’s proposal, suggesting that the idea be checked not by economists but by mental health experts.
So far, there has been no nationwide discussion in Russia concerning possible Baikal water sales to China. Meanwhile, the environmental repercussions of the project are yet to be studied.
Source: https://jamestown.org/program/china-russia-float-idea-of-selling-baikal-water/ (CC-MAIN-2020-45, English; topic: Science & Tech.)
Communities developing resources and competencies for using their languages
Foundational understanding for language development work of all kinds
Publications, fonts and computer tools for language development, translation and research
SIL offers training in disciplines relevant to sustainable language development.
7,105 languages are spoken or signed. CLICK for map of world languages & regional websites.
SIL's dedication to language development past and present
This paper presents a sociolinguistic survey conducted in the Doba area of Chad, specifically dealing with the Bebot, Bedjond, Gor, and Mango language communities. The survey was designed to provide the administrators of the Association SIL Tchad with information about these communities in order to determine whether there is a need for SIL involvement in language development in this region, and if so, the priority and strategy for such involvement.
Bebot, Bedjond, Gor, and Mango are part of the Sara language family (Nilo-Saharan) and are therefore related to Mbay and Ngambay.
Together with a general overview of the linguistic classification and geographic language situation, results of interviews, wordlist comparison, and individual comprehension testing are presented. These results concern intercomprehension among these varieties, comprehension of Mbay and Ngambay, language choice and vitality, and attitudes toward both written and oral forms of these three varieties, Mbay and Ngambay. Also, information by local leaders on literacy is included.
There are no indications of language shift, attitudes toward the local language are good, and there is a high level of interest in local language literacy.
Wordlist results show a very close relationship linguistically among all six of these varieties.
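Wordlist comparisons of this kind are typically summarized as a lexical-similarity percentage between pairs of varieties. As a rough illustration only (SIL's actual procedure relies on phonetic alignment and cognate judgments, and the wordlist forms below are invented), a naive exact-match version could look like this:

```python
# Naive lexical-similarity sketch: the fraction of glosses for which two
# wordlists share an identical form. Real surveys use phonetic similarity
# and cognate judgments, not exact string equality.
def lexical_similarity(list_a, list_b):
    shared_glosses = set(list_a) & set(list_b)
    if not shared_glosses:
        return 0.0
    matches = sum(1 for g in shared_glosses if list_a[g] == list_b[g])
    return matches / len(shared_glosses)

# Hypothetical four-item wordlists keyed by English gloss (forms invented).
gor =   {"water": "man", "fire": "per", "dog": "bisi", "eye": "kum"}
mango = {"water": "man", "fire": "per", "dog": "bise", "eye": "kum"}
print(round(lexical_similarity(gor, mango), 2))  # 3 of 4 glosses match -> 0.75
```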
The individual comprehension test results indicate that Bebot, Bedjond, Gor, and Mango speakers have a high level of comprehension of the other varieties. However, speakers of these varieties have mixed levels of comprehension of Mbay and Ngambay, with some speakers understanding one or both of these well, but others only partially understanding, or understanding very little of, these languages.
Because of the low levels of comprehension of Mbay and Ngambay on the part of some of these speakers, the separate language development of one of these three varieties is justified. Because of the high degree of intercomprehension and an absence of negative attitudes, any one of the three could probably serve speakers of all three as a reference dialect. However, for a variety of reasons, it is the recommendation of this report that language development for the various language varieties of this region take place together in one combined project, rather than by separate, competing projects.
This report is a translation of the report "Enquête sociolinguistique des variétés linguistiques de la région de Doba du Tchad", SIL Electronic Survey Report 2007-010.
Electronic Health Records (EHR) have revolutionized the healthcare industry by transforming the way patient information is collected, stored, and accessed. In this article, we will delve into the various aspects of how electronic health records are used in healthcare and explore the benefits, implementation process, and frequently asked questions surrounding this technology.
Benefits of Electronic Health Records in Healthcare
Electronic Health Records offer a myriad of benefits that significantly impact patient care and overall healthcare management.
Streamlined patient data management
EHR enables healthcare providers to efficiently manage patient data by digitizing medical records. With a few clicks, healthcare professionals can access comprehensive patient information, including medical history, medications, allergies, and test results. This streamlined approach eliminates the need for manual record-keeping, reducing errors and enhancing accuracy in diagnosis and treatment.
Improved communication and collaboration among healthcare providers
One of the key advantages of EHR is its ability to facilitate seamless communication and collaboration among healthcare teams. Different providers, such as doctors, nurses, and specialists, can access and update patient records in real-time, ensuring everyone is on the same page regarding the patient’s condition, treatment plans, and progress. This enhanced communication fosters coordinated care, leading to better patient outcomes.
Enhanced patient care and safety
EHR systems enable healthcare providers to make informed decisions by providing a comprehensive view of a patient’s medical history, including past treatments, medications, and allergies. This holistic approach minimizes the risk of medical errors, such as adverse drug interactions or duplicate tests. Additionally, EHR can generate alerts and reminders for preventive screenings and vaccinations, ensuring patients receive appropriate and timely care.
Efficient workflow and time-saving capabilities
By automating manual tasks and eliminating paperwork, EHR systems streamline workflows and save valuable time for healthcare professionals. Tasks like appointment scheduling, prescription refills, and billing can be handled digitally, reducing administrative burden and allowing healthcare providers to focus more on patient care. This increased efficiency translates to shorter wait times for patients and improved overall healthcare delivery.
Implementation of Electronic Health Records in Healthcare
Implementing EHR in a healthcare setting requires careful planning and consideration of various factors.
Steps involved in the implementation process
The implementation process typically involves several key steps, including:
- Needs assessment: Identifying the specific requirements and goals of the healthcare organization.
- Vendor selection: Choosing an EHR system that aligns with the organization’s needs and budget.
- Data migration: Transferring existing patient records from paper-based or legacy systems to the new EHR platform.
- Training and education: Providing comprehensive training to healthcare professionals to ensure they are proficient in using the EHR system effectively.
- Testing and customization: Conducting thorough testing and customization of the EHR system to tailor it to the unique needs of the healthcare organization.
- Go-live and optimization: Launching the EHR system and continuously optimizing its usage based on feedback and user experience.
Challenges and considerations during implementation
Implementing EHR may present certain challenges that healthcare organizations should be prepared for. Some common challenges include:
- Resistance to change: Healthcare professionals may be reluctant to switch from traditional paper-based systems to electronic records. Addressing their concerns and providing adequate training and support can help overcome resistance.
- Data security and privacy: Ensuring the confidentiality and integrity of patient data is of utmost importance. Implementing robust security measures and adhering to privacy regulations is crucial to protect patient information.
- Interoperability: EHR systems should be able to seamlessly integrate and communicate with other healthcare systems, enabling secure data exchange between different providers and institutions.
- Cost considerations: Implementing EHR involves initial investments in software, hardware, and training. Healthcare organizations should carefully evaluate the costs and benefits to make informed decisions.
Training and education for healthcare professionals
Proper training and education for healthcare professionals are essential for successful EHR implementation. Training programs should cover EHR system functionalities, data entry best practices, and compliance with privacy regulations. Ongoing education and support are crucial to ensure healthcare professionals stay updated with the latest features and advancements in EHR technology.
How Electronic Health Records are Used in Healthcare
EHR systems play a vital role in various aspects of healthcare delivery, enhancing patient care and improving operational efficiency.
Patient data collection and storage
EHR systems digitize patient information, allowing healthcare providers to collect, store, and update data electronically. Patient demographics, medical history, test results, imaging reports, and clinical notes can all be securely stored and easily accessed when needed. This comprehensive and centralized repository of patient data enables healthcare professionals to make informed decisions and provide personalized care.
Accessing and sharing patient information securely
EHR systems provide healthcare professionals with secure access to patient information from any location, enabling efficient and timely care delivery. Authorized providers can quickly retrieve patient records, review lab results, and access medical imaging reports, leading to faster diagnosis and treatment planning. Additionally, EHR systems facilitate secure sharing of patient information between different healthcare providers, ensuring coordinated care and seamless transitions between healthcare settings.
Integration with other healthcare systems and technologies
EHR systems can integrate with various healthcare systems and technologies, enhancing their functionality and interoperability. Integration with pharmacy systems allows for electronic prescribing, reducing medication errors and improving medication management. Interfacing with laboratory systems streamlines the delivery of lab results, eliminating manual data entry and reducing turnaround times. Furthermore, integration with telehealth platforms enables remote consultations and virtual care, expanding access to healthcare services.
Decision support and data analysis capabilities
EHR systems provide decision support tools that assist healthcare professionals in making evidence-based decisions. These tools can range from drug interaction alerts and clinical guidelines to computerized physician order entry systems. Additionally, EHR systems offer robust data analysis capabilities, enabling healthcare organizations to generate meaningful insights and identify trends in patient populations. This data-driven approach can lead to improved clinical outcomes, resource allocation, and population health management.
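A drug-interaction alert, one of the decision-support tools just mentioned, reduces at its core to checking the patient's medication list against a table of known interacting pairs. The sketch below illustrates that idea only; the drug pairs are illustrative placeholders, not clinical data:

```python
# Minimal drug-interaction check: flag any pair on the patient's medication
# list that appears in a known-interaction table. Pairs are illustrative only.
from itertools import combinations

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),          # illustrative: bleeding risk
    frozenset({"sildenafil", "nitroglycerin"}),  # illustrative: hypotension risk
}

def interaction_alerts(medications):
    """Return sorted (drug, drug) pairs from the list that are known to interact."""
    return [tuple(sorted(pair))
            for pair in combinations(medications, 2)
            if frozenset(pair) in KNOWN_INTERACTIONS]

print(interaction_alerts(["aspirin", "warfarin", "metformin"]))
# -> [('aspirin', 'warfarin')]
```

Production systems would of course consult a curated pharmacology database and consider dose, timing, and patient factors rather than a flat pair list.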
FAQ about Electronic Health Records in Healthcare
What are the main features of EHR?
Electronic Health Records typically include features such as:
- Patient demographics and medical history
- Medication management and prescription tracking
- Lab and imaging results
- Clinical notes and progress reports
- Appointment scheduling and reminders
- Decision support tools and alerts
- Secure communication and messaging
How does EHR improve patient privacy and security?
EHR systems employ various security measures to ensure patient privacy and data security. These measures include robust authentication mechanisms, encryption of data in transit and at rest, audit logs for tracking access, and compliance with privacy regulations such as HIPAA. Additionally, EHR systems enable role-based access control, ensuring that only authorized individuals can access sensitive patient information.
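Role-based access control, mentioned above, gates each operation on the requester's role. A minimal sketch follows; the roles and permission names are hypothetical, not drawn from any particular EHR product:

```python
# Role-based access control sketch: each role maps to the set of actions it
# may perform on a patient record. Roles and actions are hypothetical.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_note", "prescribe"},
    "nurse":     {"read_record", "write_note"},
    "billing":   {"read_demographics"},
}

def is_authorized(role, action):
    """Allow the action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("nurse", "prescribe"))      # False
print(is_authorized("physician", "prescribe"))  # True
```

The deny-by-default lookup is the key design point: an unknown role or unlisted action is refused, which is the safe failure mode for sensitive patient data.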
Can EHR be accessed remotely?
Yes, one of the advantages of EHR is its ability to be accessed remotely. Healthcare providers can securely access patient records and perform necessary tasks from any location with an internet connection. This remote access facilitates telehealth services, remote consultations, and collaborative care among geographically dispersed providers.
How does EHR aid in medical research and population health management?
EHR systems provide a wealth of data that can be utilized for medical research and population health management. Aggregated and de-identified patient data from EHR systems enable researchers to identify patterns, study disease trends, and evaluate treatment outcomes. Furthermore, EHR systems facilitate population health management by identifying high-risk patient groups, monitoring health indicators, and implementing preventive measures to improve overall health outcomes.
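De-identification, as described above, strips direct identifiers from records before they are aggregated for research. A toy sketch of that step (field names are hypothetical; real pipelines follow HIPAA's Safe Harbor identifier categories or expert determination):

```python
# Toy de-identification: drop direct identifiers, keep analytic fields,
# then aggregate. Field names are hypothetical.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "mrn"}

def deidentify(record):
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

records = [
    {"name": "A. Patient", "mrn": "123", "diagnosis": "crohns", "in_remission": True},
    {"name": "B. Patient", "mrn": "456", "diagnosis": "crohns", "in_remission": False},
]
clean = [deidentify(r) for r in records]
remission_rate = sum(r["in_remission"] for r in clean) / len(clean)
print(clean[0])        # {'diagnosis': 'crohns', 'in_remission': True}
print(remission_rate)  # 0.5
```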
What are the potential drawbacks of EHR implementation?
While EHR systems offer numerous benefits, some potential drawbacks include:
- Initial implementation costs
- Training and learning curve for healthcare professionals
- Technical issues and system downtime
- Data entry burden during transition from paper-based records
- Potential privacy and security concerns if not implemented and managed properly
In conclusion, electronic health records have become an indispensable tool in modern healthcare. The benefits of EHR systems, including streamlined data management, improved communication, enhanced patient care, and increased efficiency, have revolutionized the way healthcare is delivered. As technology continues to advance, electronic health records will play an increasingly essential role in providing quality healthcare and driving positive patient outcomes. Healthcare providers should embrace EHR systems and maximize their potential to improve patient care and outcomes in the future.
Their point men rose from fields of grain to pepper, with gunfire, the Union regulars and militiamen defending the town from recently dug trenches.
The impatient Rebels were intent on reaching the mile-long covered bridge spanning the Susquehanna River. If they could take the bridge, this unit of Robert E. Lee's Army could take Harrisburg from the rear or march on to Philadelphia.
Seeing the Yankees in trenches, the attackers unlimbered four cannons from W.A. Tanner's Courtney (Virginia) battery.
James Kerr Smith, a Hellam Street resident, counted 49 shots, both solid shot and exploding shell. Gordon's boss, Gen. Jubal Early, later wrote that the Union defenders started running on the third shot, suggesting a shorter bombardment.
Whatever the number, many enemy shells fell into the town of Wrightsville filled with civilian men, women and children.
One of the shells found its mark.
A fighting man with a black militia unit from Columbia was one of those in the Union trenches.
His unit had been steadfastly digging trenches before the Confederate approach. When the gray-clad soldiers came within sight, his unit exchanged their shovels for rifles.
At some point in the bombardment, one of the Confederate shells struck the man, decapitating him. His identity is unknown today.
Other shells barely missed civilians.
Amanda Beaverson, crossing a street with her two children, escaped the explosion of a nearby shell without injuries.
About 150 years later and just a couple of miles away, a crowd gathered to commemorate another Civil War soldier.
According to tradition, he was identified as a Confederate soldier when his body was discovered on the bank of the river.
Historians don't know his name or the Confederate unit he served with.
The Confederates were along the river in York Haven and other points upstream. Did his body wash downstream? Was he one of Gordon's scouts, looking for a ford across the Susquehanna? Was he part of Gordon's brigade, who had strayed or deserted?
One way or another, his burial place has been marked over the years. Now, a new monument marks the spot of this unknown Confederate, and those gathered were there last weekend for the dedication.
So we have an unknown enemy soldier, an invader of our soil, who is remembered today with a marker. His kinsmen in uniform blew the head off of a defender and fired on a town full of civilians. On the march, they stole horses, terrorized women and children and destroyed crops.
And then not far away, we have a site where an unknown friendly soldier in blue lost his life in defense of his home and country.
The site where he died, which could be identified within a couple of yards, is forgotten today.
There's a lesson here as we contemplate the 150th anniversary of the Battle of Gettysburg.
This is a time for serious contemplation about a most difficult moment in our history.
At the heart of this struggle rested ingrained racism that swirled beneath the efforts of many Southern generals on horses and fighting men in butternut and gray on the march.
The Rebels were fighting for a racist cause that at the end of the day sought to keep black men, women and children enslaved.
Slavery was a racial issue because the only form of chattel slavery in the South or North was black-only bondage.
In a border county like York, we were unduly exposed to Southern views about slavery before, during and after the Civil War. The many Confederate flags around the county today testify to that.
We should ensure in this 150th anniversary that we don't get caught up in the glamour of Confederate generals with plumed hats and the Southern-inspired pageantry that is often applied to this bloody war. To paraphrase Robert E. Lee, this was a terrible war - so, 150 years later, we should not grow too fond of it.
The Confederates, after 20 miles of marching that late-June Sunday in 1863, lost the footrace to the bridge.
By the time they arrived at the bridgehead, the span was aflame.
Union commanders had tried to drop a span into the river and save the bridge. That mining did not dislodge the sturdy wooden span, so the Union officers went to Plan B - burn the structure and allow the river to keep Gordon's men at bay.
The fire spread to Wrightsville, and the Confederates, in turn, worked to keep the fire at bay.
Some expressed gratitude to the makeshift firefighting unit clad in gray that kept Wrightsville from burning mostly to the ground.
But Gordon's men were under orders not to burn or destroy civilian property, so their act was a protection against the wrath of Robert E. Lee.
The next day, Gordon ordered a countermarch to York - a march that would eventually take them to the battlefields of Gettysburg.
When the Southern men cleared out, the body of the black fighting man was found in the trench.
Maybe some day, a researcher will uncover the identity of this brave man, whose death preceded the sacrifices of the 54th Massachusetts at Fort Wagner, S.C., featured in the film "Glory."
This unknown soldier should be honored with a marker on or near the spot where he fell, defending his country against a vicious enemy attack.
James McClure is editor of the York Daily Record/Sunday News. His Civil War book "East of Gettysburg" served as a source for this column. He writes daily posts on yorktownsquare.com. To contact him, email [email protected].
By Tilly Spetgang, illustrated by Malcolm Wells
Imagine Publishing, Inc., 2009
$14.95, ages 9-12, 88 pages
A comic book about solar energy that's printed all in green. What an ingenious way to get kids motivated about sun power and spur them to try energy experiments of their own.
This stand-out book by journalist Spetgang and renowned architect Wells, first published in 1982 before global warming made headlines then rereleased in 2009 to coincide with the upsurge in public concern, will have readers cracking up as they learn.
Rather than passively learn about the issue, readers hang out in a 6th Grade classroom as their teacher, Mrs. Robinson, talks about issues surrounding solar energy, including why it's not more popular even though it's free, it's always available and it doesn't pollute the air.
On the top of the page, readers listen to the teacher explain how solar energy works, what it takes to get solar collectors in place and how the technologies operate, while below they see line drawings of kids at their desks bantering back and forth about what she's telling them.
When Mrs. Robinson brings up the subject of the energy crunch, one kid slumped in his chair ribs, "Yeah. It's a new candy bar," to which the girl ahead of him turns around and corrects, "I thought it was when two oil tankers collide."
But this isn't just a smart-alecky group; they're also working through what she's saying and imagining what it could mean. When Mrs. Robinson explains how photovoltaic solar cells can convert light to electricity, a girl raises her hand and suggests, "Like light bulbs in reverse."
The comic relief helps to bring home the subject: in one frame a boy looks down at his smoking feet, regretting that he walked on asphalt, which like passive solar systems in houses (inside walls, barrels of water, bins of rocks) soaks up heat during the day and releases it later.
You never actually see the teacher, and somehow this makes her voice carry over all of the chatter. It's fun to take it all in, then review the lessons by reading just what the teacher says (which blots out the wisecracks but lets you focus on the facts).
Best Parts: As the book begins, Mrs. Robinson offers such a thoughtful lead-in to the topic that you can't help but become engaged in what she's saying.
"This beautiful blue-green planet of mountains and oceans and skyscrapers and great beasts of the jungle…all yours," she says. "Have you ever thought of it that way? You may be about 12 years old and not feel as if you count in any special way outside your family. But, in the wink of an eye, you will be 18, 23, 32."
If I was a teacher, I'd be scribbling down all the great lines to use in my class. My favorite: "How do you catch sunbeams to make them work for you?"
Fostering innovation to speed the improvement of health care is the goal of an $8.3 million grant to researchers at Cincinnati Children's Hospital Medical Center.
"The system of providing care for the chronically ill is broken," says Peter Margolis, MD, PhD, co-Principal Investigator of the project. "What we aim to do, building on our previous successes, is to create a totally new system of providing care through widespread collaboration."
This ambitious undertaking, dubbed a "collaborative clinical care network," is modeled after collaborative innovation networks (COINs), cyberteams of self-motivated individuals with a collective vision, enabled by the Web to achieve a common goal by sharing ideas, information, and work.
COINs are not new -- collective intelligence has existed at least since humans learned to hunt in groups. The Internet, though, has allowed COINs to deliver greater potential, with Wikipedia, Linux, and the World Wide Web Consortium itself prominent examples.
Collaborative innovation networks are, however, new to chronic illness care. While many doctors and patients use the Web to search for and find health information, existing health-related social networks separate patients from providers, despite the fact that patient-provider interaction is key to chronic illness care.
"Everyone wants better health for kids, and everyone involved has stories about road blocks to better care," said Michael Seid, PhD, the other co-PI on the project funded by the National Institutes of Health. "We aim to harness doctors,' nurses,' and patients' inherent motivation to improve and the collective intelligence represented by all parties."
"We are building a way to bring patients and providers together and give them the tools they need to collaborate to improve care and outcomes."
The $8.3 million grant from the National Institutes of Health is part of $348 million awarded Sept. 24, 2009 to "encourage investigators to explore bold ideas that have the potential to catapult fields forward and speed the translation of research into improved health," according to the NIH.
"Although all … programs encourage new approaches to tough research problems, the appeal of … (these) programs is that investigators are encouraged to define the challenges to be addressed and to think out of the box while being given substantial resources to test their ideas," said NIH Director Francis S. Collins, MD, PhD.
Collaborating with the Cincinnati Children's researchers are doctors and hospitals from ImproveCareNow, a collaborative improvement network focusing on Crohn's disease and ulcerative colitis, as well researchers at the University of Chicago, the University of California - Los Angeles, University of Vermont, Massachusetts Institute of Technology, Science Commons, Lybba and Ursa Logic Corporation.
The research will focus on Crohn's disease and ulcerative colitis, which affect about 100,000 young people in the United States.
These were chosen because they have so much to gain from the collaborative: rare enough that patients do not have access to a large peer group and clinicians must collaborate across treatment centers to achieve the numbers they require for study; they primarily affect children in early adolescence, a perfect age to engage in Web-enabled innovation; and because they are uncommon, there are few financial incentives for the pharmaceutical industry to conduct studies with the group.
Success already seen
Margolis and Seid already have achieved strong initial success through their involvement in the ImproveCareNow network. ImproveCareNow is one example of a highly successful collaborative network.
ImproveCareNow, which focuses on Crohn's disease and ulcerative colitis, is a collaboration of 16 hospitals across the country and about 2,500 patients. Using a model for systematically collecting data and sharing it openly, care has improved quickly and doctors have seen a dramatic, rapid increase in the number of patients in remission.
Translate the success to the entire population and it means 10,000 new children in remission, without new drugs and without spending huge amounts of money (and lots of time) on research and development, according to the researchers.
Networked collaboration offers opportunities for patients and care providers to share their experiences, but these stories will be complemented by real data on care and outcomes. The researchers say combining sites' performance data will allow providers and patients to see not only what others are doing, but how care is improving.
"Right now, it is painfully slow to get medical information and new discoveries into practice," said Margolis. The reason, he says, is because doctors and patients may operate in virtual silos which hide their information. Networked collaboration allows participants to share more easily so that others can see, learn, and get healthier.
Social media infrastructure
The new research will take advantage of a social media platform being developed by Lybba, a California-based non-profit whose mission is to educate and empower people to lead healthy lives. Lybba is headed by Jesse Dylan, a cinematographer who created, among other things, the "Yes We Can" video in support of Barack Obama that was seen by 30 million people through YouTube in just a few weeks during the 2008 presidential campaign.
Lybba uses social media to allow users to share experiences, opinions, comments and questions. The networking will include blogging, photography, video, secure messaging and instant messaging.
Social networks are not new to health care populations. Today, many support groups for patients and/or parents exist on the Internet. But this will not be simply peer-to-peer networking as exists now. (Previous research by Cincinnati Children's examined potential problems of teens sharing health information, or misinformation, among themselves.) Health care professionals will be a part of the process, and the solution.
Participation and privacy are concerns
It will not be easy. The researchers know one of the stumbling blocks likely will be ensuring participation by already-busy doctors. During the design phase, they will study ways to encourage doctors to become more engaged with their patients and with fellow physicians. They hope parents and patients will encourage doctors to get involved, and by connecting colleagues, doctors will view the social networks as a means to collaborate and improve the care they are naturally motivated to provide.
Another issue for the researchers to address is patient confidentiality and federal privacy laws. The software platform on which the network will operate will be designed to protect privacy and confidentiality, but also allow and encourage the sharing of information.
Part of NIH's bold initiative for transformative research
The Cincinnati Children's researchers are especially excited about receiving the grant from a special NIH fund set aside for highly creative and highly innovative projects. The so-called "transformative R01 grants" are for projects "proposing exceptionally innovative, high-risk, original and/or unconventional research with the potential to create or overturn fundamental paradigms," according to the NIH.
The five-year grant will also allow testing of ways to make care more continuous, communication more seamless, and to enhance patients' and families' ability to perform necessary self-care for the conditions.
Another advantage of the social networks is that it will improve the relationship between the doctors and patients. Today, there is not a lot of time for pre-visit preparation. Often a doctor glances at a chart just seconds before walking into a patient's room. The patient may be feeling better at that point in time and not mention some previous episodes that might help the doctor provide better care.
"The networks will help us move toward more continuous interactions with patients," Margolis said. "It will improve communication. It will improve relationships and it will improve care."
He and his partners at centers across the country know their vision is a bold one.
"This is radically different from what we have done in the past," Seid said. "We are not trying to make one change at a time; we are creating a system that makes it possible for patients and doctors to make many changes - quickly and efficiently - to improve health. This has the potential to change the way all chronic care is provided."
Source: Cincinnati Children's Hospital Medical Center (news : web)
When a part of the large intestine or colon gets affected by cancer, the process of removing that part by surgery is called Hemicolectomy.
It is mainly required to treat bowel cancer or conditions such as Crohn's disease. The surgical process includes removing the infected area of the bowel and replacing it with a new part, which may be grafted from skin or be an artificial implant.
It is mostly an advanced measure, taken when the affected part of the colon cannot be treated by medication alone. The decision also depends on the extent of the damage and of the disease in the colon.
A person suffering from certain acute abdominal conditions may be an eligible candidate for hemicolectomy.
Doctors most often use hemicolectomy in cases of bowel cancer. Ulcerative colitis can also make a patient a candidate for the procedure, as can Crohn's disease, polyps or growths, and diverticulitis.
Polyp growths in the colon require close attention, as they do not normally respond to traditional medication. Hemicolectomy can be achieved by several methods, chosen according to the patient's condition and complications.
Following are the factors which influence the cost of hemicolectomy:
The colon is the last part of the gastrointestinal tract and is about 1.5 metres (5 feet) long. It is roughly 'U' shaped; it starts at the distal end of the small intestine and connects to the rectum and anus. It absorbs fluids, processes metabolic waste products, and eliminates them through the rectum and anus. Surgical removal of the colon is called a colectomy.
There are different types of colectomy, such as complete colectomy, right hemicolectomy, left hemicolectomy, sigmoid colectomy, and proctocolectomy. The surgical removal of the left side of the colon (the descending colon) is called left hemicolectomy. The surgical removal of the cecum, ascending colon, and hepatic flexure (the right side of the colon) is called right hemicolectomy.
Some of the conditions that require complete colectomy or hemicolectomy surgery include the following:
A hemicolectomy procedure can be performed as a laparoscopic or open surgery. The type of the surgery to be performed is decided by the surgeon during the evaluation and the decision depends on the age and the condition of the patient.
Sometimes the laparoscopic procedure can also be turned into open surgery, depending on the feasibility of the procedure with respect to safety and accuracy. Overall, the following parameters decide whether a laparoscopic or an open surgical procedure will be performed:
You will be informed by your surgeon about the type of surgical procedure that will benefit you the most. You will then be taken to the operating room, where your blood pressure and breathing will be monitored.
You will be positioned in the lithotomy Trendelenburg (modified Lloyd-Davies) position with both arms abducted on arm boards. Your legs will be placed in stirrups, with soft padding underneath to prevent pressure injuries to the skin and nerves.
After positioning, you will be given general anaesthesia so that you do not feel any pain during the procedure. Sometimes, a peripheral nerve block may also be given to control pain during and after the surgery.
You will be positioned supine initially, and later you may be moved into the Trendelenburg position (lying face up on a tilted bed, with the pelvis higher than the head).
After positioning, you will be administered general anaesthesia and an additional epidural block for pain management. The catheter will be placed for monitoring the urine output during and after the procedure. The laparoscopic right hemicolectomy procedure or open surgery can be performed, depending on the condition of the colon.
During the hemicolectomy procedure, your surgeon may take any of the following approaches:
The cost of a hemicolectomy starts from USD 37,500 in Greece. There are many OECI- and TEMOS-certified hospitals in Greece that offer the procedure.
The cost of hemicolectomy in Greece may differ from one medical facility to another. The top hospitals for hemicolectomy in Greece cover all the expenses related to the candidate's pre-surgery investigations. A comprehensive hemicolectomy package includes the cost of investigations, surgery, medicines, and consumables. Several things may increase the cost of hemicolectomy in Greece, including a prolonged hospital stay and complications after the procedure.
After a hemicolectomy in Greece, the patient is expected to stay in a guest house for another 21 days. This period is important for conducting all the follow-up tests needed to ensure that the surgery was successful before the patient returns to their home country.
While Greece is considered one of the best destinations for hemicolectomy owing to the standard of its hospitals and the expertise of its doctors, a select few other destinations provide comparable quality of healthcare for the procedure, including the following:
Saudi Arabia: USD 37,500
South Africa: USD 37,500
South Korea: USD 37,500
United Arab Emirates: USD 14,000
United Kingdom: USD 37,500
There are certain expenses, additional to the hemicolectomy cost, that the patient may have to pay for. These are the charges for daily meals and hotel stay outside the hospital, which in Greece come to about USD 50 per person per day.
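As a rough illustration, the figures quoted in this article can be combined into a single estimate. The function below is a hypothetical sketch using only the article's numbers (the USD 37,500 package, the 21-day follow-up stay, and roughly USD 50 per day in extras); real costs will vary by hospital and case.

```python
# Hypothetical cost sketch using only the figures quoted in this article:
# package price, a 21-day follow-up stay, and ~USD 50/day in extra expenses.

def estimate_total_cost(package_usd, stay_days=21, daily_extras_usd=50):
    """Estimated total: procedure package plus daily meals/hotel extras."""
    return package_usd + stay_days * daily_extras_usd

# Greece package of USD 37,500 plus 21 days of extras:
print(estimate_total_cost(37500))  # 37500 + 21 * 50 = 38550
```

As the article notes, a longer stay or post-procedure complications would push this figure higher.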
The following are some of the best cities for Hemicolectomy in Greece:
After the hemicolectomy takes place, the average duration of stay at the hospital is about 5 days. The patient undergoes several biochemistry and radiological scans to confirm that everything is in order and that recovery is on track. Once the patient is clinically stable, discharge is planned.
There are more than 3 hospitals that offer hemicolectomy in Greece. These hospitals have proper infrastructure and offer a good quality of service for the procedure. Additionally, these hospitals are known to comply with international standards as well as local legal requirements for the treatment of patients.
Greece is home to some of the finest hospitals in the world, such as:
The country boasts state-of-the-art hospitals and medical care centers in many of its provinces, fully compliant with international health standards and staffed by internationally accredited physicians. There are several contemporary hospital units in popular Greek destinations including Peloponnese, Crete, Thessaloniki, Alexandroupolis, Kalamata, Corfu, and Athens. They are pioneers in the development of medical tourism and suitably equipped to meet the needs of patients and visitors.
Medical facilities in Greece receive accreditation from Joint Commission International (JCI). The healthcare standards are followed by all accredited hospitals and have been developed in consultation with world healthcare experts and providers, and patients. There is a strict evaluation process to assess the quality and performance of hospitals and the certification is awarded to only those hospitals that meet all the criteria defined by JCI. The standards help set a quality benchmark for healthcare systems to ensure quality treatment.
Greece has emerged as one of the most popular medical tourism destinations in the world, with world-class doctors and hospitals that offer quality treatment at an affordable cost. Greece’s medical system meets all the international healthcare standards and is recognized all across Europe and the U.S. as leaders in the field of medicine. Greece is a safe travel destination and the beauty of the country’s landscape, along with its wealth of plenty sunshine, therapeutic natural springs, and unique Mediterranean cuisine make the country a perfect tourist destination. Other factors that have improved Greece’s presence in the wellness tourism market are excellent private healthcare services, availability of world-renowned as well as highly trained medical professionals, and technologically-driven medical care.
Around 95 percent of doctors in Greece are specialists, not restricted to general medicine alone. The majority of doctors have received training in different countries and are capable of handling even the most complex cases with ease. Fertility experts in the country have received worldwide recognition for performing IVF and other fertility procedures with precision and a high success rate. Producing some of the finest doctors in the world, Greece has stringent government regulations to ensure best practices; every doctor must complete at least 80 hours of training every five years. Among the exemplary achievements of doctors in Greece are the first transcatheter closure of a patent ductus arteriosus, the successful excision of cancerous cells in an inaccessible and sensitive area of the brain, and the first live-streamed global interactive rhinology and endoscopic skull base surgery.
Depending on the length of stay in Greece, a person can apply for either of the below two types of visa:
If you have a Schengen Visa, you can stay in Greece for a maximum period of 90 days within a period of six months. The essential documents required for applying for a medical visa are:
All the submitted documents are thoroughly checked by the Greece embassy before issuing a medical visa.
The popular procedures in Greece are:
Moreover, many medical tourists visit Greece every year for renal dialysis and the country is populated with top-notch dialysis centers. Greece has received worldwide recognition for its achievements in the field of plastic surgery. The country has a large number of standard cosmetic surgery clinics offering excellent value for money. The country has made commendable advancements in the field of fertility treatment. Greek scientists have made breakthrough research into fertility treatment, particularly, male fertility and sperm health.
The cities in Greece that have received worldwide recognition for medical tourism are Athens, Kalamata, Peloponnese, Alexandroupolis, Thessaloniki, and Corfu.
Athens is the epicenter of the medical travel industry in Greece where patients seek high-standard medical services in various disciplines like cosmetic surgery, gynecology and obstetrics, orthopedics, and rehabilitation. Santorini, a beautiful island, offers medical tourists a breathtaking view and provides them relaxation, rejuvenation, and wellness, which are needed for a speedy recovery. State-of-art hospitals, transportation facilities, affordable accommodation, a wide variety of food options, and language assistance are some other reasons why these cities are top-ranked medical tourist destinations.
Vaccination is compulsory before traveling to Greece. The CDC and WHO recommend the following vaccines:
Some parts of Europe are affected by periodic outbreaks of routine diseases, hence it becomes essential for a traveler to get immunized. It is important to check with your doctor regarding your individual needs for medical travel and seek advice based on your current medical conditions and past vaccination history. You should also keep yourself updated with a travel advisory issued by the government or contact your hospital in Greece for complete information on vaccination guidelines.
Government delays endangered marsh
The city and state have agreed on a restoration plan for Kawainui Marsh.
FORMATION of Kawainui Marsh from a volcanic caldera to wetlands took millions of years, so by comparison, the 17 years that have passed since its transfer from city to state hands was authorized is a mere drop in the geological bucket.
The long-running episode, during which the two governments squabbled over jurisdictional issues, may soon end if the state Legislature, as it should, approves a measure that settles the disputes and provides funds for the marsh's management. The delay has undoubtedly impaired the wetland's ecology.
Kawainui's restoration has been largely in limbo due to disagreement between the city and state over who should be accountable for flood control and other responsibilities. In 1990, a law directed the transfer of city-held land that encompasses the marsh to the state, but city officials argued that the state, then, should also take responsibility for flood control. No, said the state; flood control is the city's duty.
As this back-and-forth went on, the federal government would not move forward on providing the millions of dollars sought by U.S. Sen. Daniel Inouye for Kawainui's restoration.
Under the new agreement, the city will give up its portion of the nearly 900-acre marsh and the state will take over maintenance of a levee. The city will still maintain an adjoining canal and both governments will be liable should there be a major flood.
The settlement will allow restoration of the internationally recognized marsh that is primary habitat for four of Hawaii's endangered and endemic water birds and that contains archaeological and cultural sites.
That the agreement took 17 years to achieve should be an embarrassment to government.
Oahu Publications, Inc. publishes
the Honolulu Star-Bulletin, MidWeek
and military newspapers
One of the most basic habits we are taught as kids is washing our hands. Ideally, we were expected to wash our hands thoroughly before and after many activities. But if we are being honest, how many of us followed that to a T? Kids are not 100 percent diligent about washing their hands, especially when no one is checking on them. Neither are adults, if the numbers are any indication.
'But what is the big deal about washing hands?' we find ourselves asking each other. However, things get clearer in the light of disease outbreaks, and even pandemics such as COVID-19. Washing hands is an important step in maintaining personal and environmental hygiene, keeping diseases at bay, and also inculcating a sense of discipline.
How to Wash Hands Properly?
Washing your hands with soap and water is the easiest and most effective way to get them clean of dirt, grime, germs, and everything else you don't want on your hands. But are we doing it right? According to 'The State of Handwashing in 2017' report released by the Global Handwashing Partnership, overall hand hygiene compliance is rarely at satisfactory levels around the world, including in settings such as healthcare facilities.
What could explain poor compliance are 2017 statistics from UNICEF, which say that only about three out of five people had access to basic handwashing facilities. To sum it up, awareness around hand hygiene, especially hand washing, remains bleak. Is it time to revisit the basics? Why not!
Proper handwashing is done with soap and water and gets all surfaces of your hands clean. Here is a step-by-step guide to washing hands.
- Wet your hands with running water.
- Take a required amount of soap on the palm of your hand.
- Rub your palms against each other.
- Put your right palm on the back of your left hand and clean between your left-hand fingers. Repeat the same for the right hand.
- Intertwine fingers palm-to-palm and repeat a similar motion.
- Next, hook the fingertips of each hand onto the finger bases of the other. This cleans the fingertips as well as the bases.
- Wrap the fingers of one hand around the thumb of the other and clean in a rotational motion. Repeat for the other hand.
- Lastly, rub the center of your palms with the fingertips in circular motions for both hands.
- Rinse the soap off with water.
During each of the cleansing motions, count to ten. If you are at a public restroom or washbasin, ensure your hands don't directly touch the tap. Close the tap using the tissue you used to dry your hands, or operate it with your elbow.
How Often Should I Wash Hands?
It is ideal to wash your hands with soap and water before and after every task. In cases where that seems difficult, ensure you don’t skip the process in the following circumstances:
- After using the washroom (in public as well as at home)
- Before and after eating food
- Before and after cooking food
- When you come back home
- After touching your pets or any animals
- After being with a person who is sick, especially someone with communicable diseases
- After coughing, sneezing, or blowing your nose
- After handling the trash
Also, practice necessary hygiene while cooking. Wash hands between handling different categories of food, such as dairy and meat. Wipe your hands clean after washing them. Another thing you might want to consider is keeping your nails short and clean, since longer nails can harbor an unnecessary amount of dirt and germs.
Using Hand Sanitizers
A hand sanitizer is a viable option to consider when you find yourself outside without access to soap and water. One should prepare beforehand for situations like these and carry a hand sanitizer with them.
The CDC recommends using an alcohol-based sanitizer that contains at least 60% alcohol. However, sanitizers cannot completely replace washing hands. They do not work against all germs or against heavy or harmful chemicals, and they are not always effective at removing grease or dirt. So use them as a fallback during emergencies.
Moreover, it is important to be careful around sanitizers, especially with children. Ingesting sanitizers could cause poisoning. Also, ensure you rub your hands for 20 seconds and let the sanitizer dry completely.
To sum it up, washing hands may sound like a small thing, but it is a line you draw between your good health and the threat it faces. Moreover, with such hygiene practices, you are also protecting your loved ones and those around you.
A 40,000-year-old tooth has provided scientists with the first direct evidence that Neanderthals moved from place to place during their lifetimes.
In a collaborative project involving researchers from Germany, the United Kingdom, and Greece, Professor Michael Richards of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany and Durham University, UK, and his team used laser technology to collect microscopic particles of enamel from the tooth.
By analysing strontium isotope ratios in the enamel - strontium is a naturally occurring metal ingested into the body through food and water - the scientists were able to uncover geological information showing where the Neanderthal had been living when the tooth was formed (Journal of Archaeological Science, February 11th, 2008).
The tooth, a third molar, was formed when the Neanderthal was aged between seven and nine. It was recovered in a coastal limestone cave in Lakonis, in Southern Greece, during an excavation directed by Dr Eleni Panagopoulou of the Ephoreia of Paleoanthropology and Speleology (Greek Ministry of Culture). The strontium isotope readings, however, indicated that the enamel formed while the Neanderthal lived in a region made up of older volcanic bedrock. The findings, published in the Journal of Archaeological Science, could help answer a long-standing debate about the mobility of the now extinct Neanderthal species.
Some researchers argue that Neanderthals stayed in one small area for most of their lives; others claim their movements were more substantial and they moved over long distances; and others say they only moved within a limited area, perhaps on a seasonal basis to access different food sources.
Professor Richards said: "Strontium from ingested food and water is absorbed as if it was calcium in mammals during tooth formation. Our tests show that this individual must have lived in a different location when the crown of the tooth was formed than where the tooth was found. The evidence indicates that this Neanderthal moved over a relatively wide range of at least 20 kilometres or even further in their lifetime. Therefore we can say that Neanderthals did move over their lifetimes and were not confined to limited geographical areas."
"Previous evidence for Neanderthal mobility comes from indirect sources such as stone tools or the presence of non-local artefacts such as sea shells at sites far away from the coast. None of these provide a direct measure of Neanderthal mobility." said Dr Katerina Harvati of the Max Planck Institute, in Germany, who initiated the study.
The researchers believe the laser ablation technique used to collect the minute particles of enamel will allow the measurement of other rare Neanderthal remains to see how the result compares in other regions and at other time periods.
The technique could also allow scientists to look at very small scale migrations, which is not possible with traditional research methods, and could possibly be applied to research into early humans.
The isotopic research was funded by the Max Planck Institute and the excavation of Lakonis was funded by the Greek Ministry of Culture, the Wenner-Gren Foundation, the LSB Leakey Foundation and the Institute for Aegean Prehistory.
The research team, led by Professor Michael Richards of the Max Planck Society and Durham University, included Katerina Harvati, Vaughan Grimes, Colin Smith, Tanya Smith and Jean-Jacques Hublin, of the Max Planck Institute for Evolutionary Anthropology, Germany, and Panagiotis Karkanas and Eleni Panagopoulou of the Ephoreia of Palaeoanthropology-Speleology of Southern Greece.
Celebrations around the World
INEBI P5 Unit 3: People Around the World
In many parts of the world people have Thanksgiving ceremonies and celebrations for a successful harvest. They are ancient customs.
In Europe many harvest festivals are celebrated in September at the end of the summer, but the time of celebration is different in different countries and continents.
In Nigeria they celebrate Iriji, the New Yam Festival. Yams are similar to sweet potatoes.
Yam is an important food for the Igbos.
At harvest time, the Igbos give thanks to the ancient gods.
These festivals take place between July and September.
An Igbo festival is celebrated by traditional dances, songs, drumming, masquerades, wrestling and a big feast.
African yams are prepared in many different ways: roasted, boiled, fried, in a soup... A traditional dish is fufu. It is made with mashed yams.
Igbo ceremonial mask
Britain
In Britain, people pray, sing and decorate churches.
Corn dollies are typical decorations.
An old tradition is to bake a loaf in the shape of a wheat sheaf. The loaf is taken to the church as a symbol of thanksgiving for the harvest.
The harvest festival is celebrated at the end of September.
In Japan the harvest festival is the rice harvest. It is celebrated in autumn and the first rice is offered to the gods.
There are dances, music, procession of floats with symbolic gods and a huge feast.
In Japan there is a custom of tsukimi or also known as Moon-viewing on September 15. Everyone sets up a table facing the horizon and watch the moon rising. They place food on these tables and offer it to the spirit of the moon.
United States
In the USA one of the most important festivals is Thanksgiving. This is a celebration of the first harvest of the English settlers in America nearly 400 years ago.
The festival is celebrated by many families. They eat turkey, sweet potato, sweet corn, cranberry sauce and sweet pumpkin pie for dessert.
This festival is celebrated on the fourth Thursday in November.
Top 10 Most Hazardous and Dangerous Jobs in the U.S.
The U.S. Bureau of Labor Statistics (BLS) publishes data about fatal workplace injuries annually. This infographic, The 10 Most Hazardous Jobs in the U.S., illustrates the most recent data (from 2014) so people can see which jobs experience the highest rates of fatal injuries. Many of the dangerous professions may seem obvious, but others are more surprising. The numbers included in the infographic refer to the deaths that occur per 100,000 full-time employees in each profession.
The top two most hazardous jobs are logging and fishing, with fatality rates of 109.5 and 80.8 per 100,000 full-time workers, respectively. Other dangerous outdoor professions include roofing, farming, ironwork, and electrical/power line work. Industries related to transportation can also be dangerous; airline pilots/engineers and truck drivers also experience high fatality rates at work. Additional dangerous jobs are refuse and recyclable material collectors and first-line supervisors for construction and extraction work.
The overall fatality rate for all professions in the U.S. in 2014 was 3.4 per 100,000 full-time employees. The top 10 most hazardous jobs all have fatality rates much higher than the overall rate. The job coming in tenth on the list, first-line supervisors of construction trades and extraction work, has a rate of 17.9 deaths per 100,000 full-time employees, more than five times the national rate. That means loggers, who have the most dangerous job, face a fatality rate more than 32 times the overall rate.
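The multiples quoted above follow directly from the BLS rates. As a quick illustration (not part of the original infographic), the snippet below recomputes them from the per-100,000 figures given in this article:

```python
# Recompute the "times the overall rate" figures quoted above from the 2014
# BLS fatality rates (deaths per 100,000 full-time workers) given in the text.

overall_rate = 3.4  # all U.S. occupations, 2014

job_rates = {
    "logging": 109.5,
    "fishing": 80.8,
    "construction/extraction supervisors": 17.9,
}

for job, rate in job_rates.items():
    multiple = rate / overall_rate
    print(f"{job}: {multiple:.1f}x the overall rate")

# Logging works out to roughly 32x the overall rate and the supervisors to
# just over 5x, matching the figures in the text.
```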
Common fatal injuries across these industries include roadway incidents, falls, struck by object/equipment accidents, exposures to harmful substances or environments, and fires and explosions. Of these, transportation-related incidents account for the largest total number of occupational fatalities. Overall, men experience more fatal work-related injuries than women, and older workers—those 45 and over—experience fatality rates higher than the average of 3.4. (Those 65 and over have a significantly higher rate of 10.7 per 100,000 full-time employees.)
The total number of fatal injuries increased 5 percent from 2013 to 2014, with over 4,800 deaths occurring in 2014. This number is the highest the U.S. has seen since 2008, according to the BLS. These 10 professions represent a large portion of those deaths, which means safety measures are very important in these industries. Safety organizations such as the Occupational Safety and Health Administration (OSHA) provide regulations and guidance businesses in these industries can consult to improve safety. When injuries do occur, businesses can face citations and fines if safety measures aren’t followed, so it’s in everyone’s best interest that OSHA regulations receive the attention they deserve.
This infographic aims to highlight which jobs pose the biggest threat to workers and bring attention to the high fatality rates these sectors of the U.S. economy experience. It uses engaging illustrations with eye-catching colors and text to convey its message. Business owners, supervisors, and safety managers can use this infographic as a starting point for discussing safety in their workplaces. Viewers can also share it with friends, relatives, and colleagues to spread the word about workplace hazards in the United States.
The idea of traveling through time has titillated mankind since the dawn of scientific discoveries, but so far there hasn’t been any evidence it will ever become part of reality. If time travelers do exist, they sure seem to keep their knowledge of the future to themselves.
At least, this is what an astrophysicist and his students at Michigan Technological University in Houghton would say. They carried out an extensive search of the Internet, particularly social media sites, looking for proof of time travelers.
“If someone went back in time and said something to hint about the future, it would prove the concept of time travel,” astrophysicist Robert Nemiroff said.
Together with his students, he came up with some simple ideas during a Thursday poker night: if someone mentioned people or events before they popped into reality, it could be a message from the future.
They began searching sites like Twitter, Facebook, and blogs for early mentions of Pope Francis and comet ISON. Francis was elected pope in March 2013 and ISON was first detected in September 2012, so anyone who had mentioned either of them in 2011 must somehow have had access to special knowledge.
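The test itself is simple to state: a post counts as "prescient" only if it mentions a term before that term's real-world first appearance. The sketch below is purely illustrative (the sample posts are invented, and the real study searched live web services rather than a local list), but it captures the logic:

```python
from datetime import date

# Illustrative sketch of the "prescient mention" test. The cutoff dates
# follow the events discussed in the article; the posts are invented.
FIRST_APPEARANCE = {
    "pope francis": date(2013, 3, 13),  # elected March 2013
    "comet ison": date(2012, 9, 21),    # first detected September 2012
}

def prescient_mentions(posts):
    """Return (date, term) pairs for posts that mention a term
    before its first real-world appearance."""
    hits = []
    for posted_on, text in posts:
        lowered = text.lower()
        for term, cutoff in FIRST_APPEARANCE.items():
            if term in lowered and posted_on < cutoff:
                hits.append((posted_on, term))
    return hits

# Invented examples: only the 2011 post would count as prescient.
sample = [
    (date(2013, 4, 1), "Pope Francis greets the crowd"),
    (date(2011, 6, 1), "Comet ISON will be spectacular"),
]
print(prescient_mentions(sample))  # [(datetime.date(2011, 6, 1), 'comet ison')]
```

As the article reports, no such prescient mention was ever found.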
Alas, no one did.
The researchers decided to up the ante and started posting requests that only people from the future could possibly answer: for example, asking people in September to tweet "#Icanchangethepast2" during the previous month, August. Again, no results.
Nemiroff and his students presented their findings to three different physics journals but were rejected by all of them. He called the project a bit of summer fun that cost nothing to do.
“This wasn’t a major research push. This was typing things into search engines. Billions of dollars are spent on time travel movies and books and stuff like that. This probably cost less than a dollar to check on it,” Nemiroff said.
The astrophysicist joked that time traveling wasn't his normal field and that he didn't put much stock in it, even less after looking for proof online. "Unless I go back (in time) and publish lots of papers."
Other scientists didn’t exactly take the project too seriously either. Harvard astronomer Avi Loeb said: “As anyone who uses online dating knows, the Internet is the last place to find the truth about the physical reality.”
|
<urn:uuid:453b6414-7f12-44d1-92dc-27d4cb6cd220>
|
CC-MAIN-2017-43
|
http://www.badoink.io/life/opinion/looking-for-evidence-of-time-travel/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823482.25/warc/CC-MAIN-20171019231858-20171020011858-00156.warc.gz
|
en
| 0.973287 | 567 | 2.78125 | 3 | 2.915292 | 3 |
Strong reasoning
|
Science & Tech.
|
Learn how to register your children in a school in Spain
The enrollment period depends on the school and on where you live. Registration usually takes place over two months between February and May of the year in which you want your child to start school (in September), most often during March.
Expatriate parents should allow plenty of time to register their child in a Spanish public school, as the process can be lengthy. Your local city council (ayuntamiento) in the area where you will live can explain the formalities, as the process and paperwork vary from area to area. It is also worth asking whether any foreign documents need sworn translations.
You will need to be registered on the padrón (local census) at your local city hall before you can register your child at a public school. Once registered you can go to the education department of the city hall to get a school registration form and medical certificate for your pupil.
You will probably need to provide:
- your own passport
- your child’s birth certificate or passport
- proof of residence in Spain.
- your NIE (Número de Identificación de Extranjero)
- an up-to-date immunisation/medical certificate
You may also need to take along two passport-sized photographs and any school records from a previous school. If your child will be starting the third year of secondary school, you may also need to get your child's school records verified by the MECD in a process called convalidación (validation) or homologación (official recognition of your child's previous education).
The validation (convalidación) process requires you to submit the appropriate forms from the Department of Education (MECD) along with your child's school records and/or examination qualifications, plus his/her birth certificate. A child will not be accepted until the official papers have been received and stamped by the Spanish Department of Education. The process may take 3-6 months, although a receipt from the Education Ministry showing that the convalidación documents have been submitted may be accepted so that your child can start school in the meantime.
What about Private schools in Spain?
Some private schools are subsidised by the state (charter schools, or colegios concertados) and charge lower fees, while others are fully private or independent (colegios privados). Private schools usually have smaller classes, offer more choice of academic subjects, have better facilities, and offer more extra-curricular activities than public schools. Most private schools are open Monday-Friday; they may be day schools or take boarders, and they set their own term dates regardless of the public Spanish education system. Most private schools in Spain are state-subsidised (colegios concertados) and so follow the Spanish curriculum, under the same terms and regulations as state schools. They usually teach in Spanish, but many of them are now bilingual.
Admission and registration procedures for Spanish private schools
If you want to send your child to a colegio concertado, you apply by registering on the padrón and with the education department of the ayuntamiento, following the same process as for a public school (described above).
|
<urn:uuid:0a25d837-524b-42a6-b794-f450d02ad3bd>
|
CC-MAIN-2020-05
|
https://www.school-finder-spain.com/school-registration-spain/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00281.warc.gz
|
en
| 0.952609 | 695 | 2.53125 | 3 | 1.413735 | 1 |
Basic reasoning
|
Education & Jobs
|
The key features of Obsessive Compulsive Disorder are obsessive thoughts and rituals (compulsions) that are repeated endlessly and cannot be controlled.
It is an anxiety disorder, and statistics tell us that it is frequent and that a large number of people suffer from it.
However, some character traits of Obsessive Compulsive Disorder belong to each of us, but that does not mean that we are affected.
How can we figure out whether we suffer from Obsessive Compulsive Disorder? Do we only have the normal character traits that distinguish us?
Here are some elements that can help us understand whether we are ill with Compulsive Obsession or if it is only the traits of our character.
1. Wash your hands over and over again
If you need to wash your hands very often during the day, it does not always mean that you are obsessive-compulsive!
What distinguishes an obsession-compulsion is the suffering we experience because the action is out of our control.
In fact, this is an irresistible push to wash your hands repeatedly, even though you realize that it is a senseless behavior. This can be associated with fear of germs or the fear of contracting illnesses.
2. The Control
Everyone can feel the need to make sure the gas is turned off before leaving home. Sometimes we can feel the need to verify the correctness of our actions a couple of times.
However, in Obsessive Compulsive Disorder this control becomes exaggerated until it can compromise the normal occurrence of our lives.
It all starts from a thought that interrupts the smooth flow of events and keeps us locked into repeating the act of checking over and over.
3. Numerical Rituals
Typical of Obsessive Compulsive Disorder is repeating mathematical calculations, memorizing number plates, counting the bites of food swallowed, and so on.
The significance attributed to these rituals is often superstitious, aimed at avoiding misfortune or propitiating future events.
4. Pathological doubt
Pathological doubt is another element that characterizes Obsessive Compulsive Disorder and can make life impossible. Sufferers are preyed on by the fear of committing aggressive actions over which they fear losing control: killing, hurting, strangling, mutilating, raping.
Those prey to this type of obsession avoid handling objects considered dangerous, such as weapons or knives. Generally, they perform exhausting and very complex rituals to protect themselves from the doubt.
5. Accumulation of objects or collecting
Everyone may like to keep some objects that are connected with old memories, or have as a hobby the collecting of particular objects such as postcards or stamps and more …
In Obsessive Compulsive Disorder, however, the space occupied by "collections" can reach unimaginable limits, coming to occupy most of the space inside the home.
The impetus for accumulation is irrepressible.
Everything is kept, even worthless things, because "they could come in useful."
6. Order and Symmetry
We must not confuse the normal and healthy tendency toward order and planning, which is part of an organized way of thinking, with the intolerance of disorder or asymmetry that falls within Obsessive Compulsive Disorder.
Indeed, in Obsessive Compulsive Disorder, this is a form of grim exaggeration.
Paintings, dresses, books, sheets, pens, glasses and anything else must be aligned and symmetrical, perfectly ordered according to a precise logic, such as color or size. The sufferer experiences a constant internal struggle, which leads to a huge waste of time. Sometimes many hours are spent arranging and ordering objects that are perceived as asymmetrical or disordered.
Order and symmetry obsessions can also involve your body, for example in combing your hair or dressing.
We can talk about Obsessive Compulsive Disorder only if obsessive thoughts and obsessive rituals cause marked suffering and undermine normal everyday functioning in both social and working life.
Obsessive Compulsive Disorder is based on two central characteristics:
- Obsessions are repetitive, frequent and persistent.
- There is a pervasive feeling that all this is imposed and out of control.
|
<urn:uuid:1a3e98df-16d1-42aa-98d1-df6fe508b0e2>
|
CC-MAIN-2018-26
|
http://antidepressantremedy.com/obsessive-compulsive-disorder-what-is-it/
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863834.46/warc/CC-MAIN-20180620182802-20180620202802-00481.warc.gz
|
en
| 0.944411 | 883 | 2.890625 | 3 | 2.722712 | 3 |
Strong reasoning
|
Health
|
Ultralight-led whooping cranes released at Wheeler National Wildlife Refuge
The nine whooping cranes led by ultralight aircraft have been released from a holding pen at Wheeler National Wildlife Refuge after Whooping Crane Eastern Partnership biologists attached marking bands and transmitters to help track their movements.
“So far the cranes are foraging and hanging around close to the pen and moving into the flooded fields,” said Bill Gates, Biologist at Wheeler National Wildlife Refuge, near Decatur and Huntsville, Ala. “We plan to leave the gate to the pen open, so if they need to come back here they can.”
Gates said most of the cranes came out of the pen this morning and were foraging nearby.
Operation Migration’s Brooke Penneypacker, in whooping crane attire, was encouraging them to fly, and several have taken wing on short flights.
Eva Szyszkoski, from the International Crane Foundation, hopes the young cranes will link up with sandhill cranes and three whooping cranes about 400-500 yards from the pen site. She has been tracking the whooping cranes on this project for the past five years. There are four other whooping cranes also at the refuge in other areas.
The original plan when the migration began on October 9, 2011, was to have the pilots of Operation Migration guide the cranes to St. Marks and Chassahowitzka National Wildlife Refuges in Florida. While it is sometimes difficult to interpret why birds do what they do, they did not follow the ultralights further south from Alabama, where they had waited as weather and issues with the FAA grounded them for over a month. The FAA later provided a waiver for the pilots, but weather, then the cranes, did not cooperate.
The Partnership determined transporting and releasing the cranes at Wheeler National Wildlife Refuge would be best for the cranes.
As the class of 2011-12 whooping cranes edges closer to meet their fellow whooping cranes from previous migrations, thousands of sandhill cranes have already left the refuge, two weeks earlier than usual. About 11,000 sandhill cranes and seven whooping cranes wintered at Wheeler this year, according to Dwight Cooley, refuge manager.
In another interesting twist this year, one crane had broken away from the ultralight-led migration in the first few days and was later discovered in the company of sandhill cranes. Its transmitter failed, preventing easy detection. It was later spotted in north Georgia, and finally in Florida. Biologists hope to capture this crane and replace the transmitter and attach color bands for identification purposes.
Besides the ultralight-led migration, the partnership uses the "Direct Autumn Release" method, which places young chicks in the company of seasoned birds in Wisconsin. They then learn the migration route, as well as vital survival skills, from those older, and hopefully wiser, cranes. Two "DAR" birds wintered at Wheeler NWR this year.
Now that these nine cranes have been released, the total eastern population is 112 whooping cranes. Estimated distribution as of Mid-January 2012 included 39 whooping cranes in Indiana, six in Illinois, seven in Georgia, seven in Alabama, two in South Carolina, two in North Carolina, six in Tennessee, one in Missouri, 12 in Florida, 14 at unknown locations, one with no recent report, and six long term missing. Florida has about 20 in a non-migratory flock. Louisiana has a project underway for a non-migratory flock of about 20 whooping cranes. The western flock has about 300 cranes, and about 130 are in captivity.
Division of Public Affairs
- Louisiana Ecological Services Field Office
- North Carolina
- South Carolina
- Whooping Crane
The mission of the U.S. Fish and Wildlife Service is working with others to conserve, protect, and enhance fish, wildlife, plants, and their habitats for the continuing benefit of the American people. For more information on our work and the people who can make it happen, visit fws.gov. Connect with the Service on Facebook, follow our tweets, watch the YouTube Channel and download photos from Flickr.
|
<urn:uuid:6211695f-5fab-4f65-95d0-22991140dac5>
|
CC-MAIN-2018-39
|
https://www.fws.gov/southeast/news/2012/02/ultralight-led-whooping-cranes-released-at-wheeler-national-wildlife-refuge/
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00214.warc.gz
|
en
| 0.944846 | 883 | 2.546875 | 3 | 2.729378 | 3 |
Strong reasoning
|
Science & Tech.
|
Special Educational Needs
Our SEND information report is available to read here.
SEN Aims of the academy
- To ensure accurate identification of all pupils requiring SEND provision as early as possible
- To ensure that all SEND children have access to a broad and balanced curriculum
- To provide a differentiated curriculum appropriate to the individual’s needs and ability
- To ensure that SEND pupils take as full a part as possible in all school activities
- To ensure that parents of SEND pupils are kept fully informed of their child’s progress and attainment
- To ensure that SEND pupils are involved, where practical, in decisions affecting their future SEND provision
Definition of Special Educational Needs
A child has Special Educational Needs if he or she has learning or physical difficulties that call for special educational provision to be made.
A child has learning difficulties if he or she:
- Has a significantly greater difficulty in learning than the majority of children of the same age*.
- Has a disability which prevents or hinders them from making use of educational facilities of a kind generally provided for children of the same age in other schools within the Local Authority.
(*Any difficulty may include barriers to learning such as medical needs, behavioural issues and Speech and Language concerns).
Special educational provision means:
- For a child over two, educational provision which is additional to, or different from, the educational provision made generally for children of the same age.
Roles and Responsibilities
The roles of the Head teacher, Mrs H. Thompson, include:
- The day-to-day management of the school including SEND provision
- Keeping the Management Board informed about SEND data within the school
- Working closely with the Inclusion Manager – Mrs Chris Law
- Ensuring that the school has clear and flexible strategies for working with parents and children
The role of the Inclusion Manager:
The Inclusion Manager plays a crucial role in the school’s SEND provision. This involves working with the Headteacher and Management Board to determine the strategic development of the policy. Other responsibilities include:
- Overseeing the day-to-day operation of the policy
- Co-ordinating the provision for pupils with SEND
- Liaising with and giving advice to fellow teachers
- Managing Teaching Assistants – where appropriate
- Overseeing pupils’ records
- Liaising with parents
- Making a contribution to INSET
- Liaising with external agencies, LA support services, Health and Social Services and voluntary bodies
For effective co-ordination staff must be aware of:
- The procedures to be followed
- The responsibility all staff have in making provision for SEND pupils’ progress
- Mechanisms that exist to allow access to information about SEND pupils
The role of the Class Teacher:
- To promptly refer or seek advice from the Inclusion Manager for any child they have concerns about
- Liaise with the parents of children with SEND
- Include the child in making provision for particular needs and assessment of progress
- Maintain Individual Target Plans and other records as appropriate
- Plan appropriately to meet the needs of SEND children
- To attend SEND reviews fully prepared and able to review progress and set new targets
- To interact appropriately with the children having Special Educational Needs showing understanding and sensitivity
- To build self-esteem and a ‘can do’ attitude in every child with a Special Educational Need
- To assist in identifying children with Special Educational Needs and to discuss their concerns with the Inclusion Manager
- To support the delivery of specialist programmes following training as appropriate
Role of parents and pupils:
Much effort is taken to promote close and constant collaboration with parents of pupils with Special Educational Needs and Disabilities (SEND). The insights and experiences of parents are essential in helping school understand and plan for their children’s needs. Parents are always invited to be involved in the writing and reviewing of Individual Target Plans (ITPs) where targets are agreed together. Parents are encouraged to take an active part in assisting their child, both by helping at home and by keeping school up-to-date on all progress made.
It is also our aim to involve pupils as much as possible when setting targets and reviewing progress made towards these identified targets; this is dependent upon age and maturity of the child. Through the involvement in their own provision, we aim to give pupils a sense of achievement, high self-esteem and a determination to reach their full potential. Their support is crucial to the success of educational programmes.
Point of Contact:
Mrs H. Thompson, Headteacher
Mrs C. Law, Inclusion Manager
Mr I Wills, Senior Learning Mentor
Updated December 2017
|
<urn:uuid:afc97d9f-14df-4254-8059-a83d86d997bd>
|
CC-MAIN-2018-13
|
https://sites.google.com/aetinet.org/lea-forest-primary-academy/teaching-learning/special-educational-needs
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00299.warc.gz
|
en
| 0.951889 | 958 | 3.203125 | 3 | 2.715308 | 3 |
Strong reasoning
|
Education & Jobs
|
I am excited to report on a project here at Johns Hopkins that will provide resources (available to all) for supporting inclusive practices in the classroom. Sharing diverse perspectives and validating students’ and minorities’ varied experiences is a challenge for many faculty. Even those with the best intentions may unwittingly create classroom environments where students from minority communities feel uncomfortable or excluded. However, when executed effectively, an inclusive classroom becomes a layered and rich learning environment that not only engages students, but creates more culturally competent citizens. Enter TILE – Toolkit for Inclusive Learning Environments.
Funded by a Diversity Innovation Grant (DIG) of the Diversity Leadership Council (DLC), TILE will be a repository of examples and best practices that instructors use in order to spark conversations in the classroom that foster diversity and inclusion.
Funding would be used to begin a conversation with faculty who are currently implementing inclusive practices in the classroom. The conversations will result in a report-out session, scheduled for April 2015, when faculty will share ways in which they specifically support and foster an environment of inclusion that can then be replicated in other classrooms. These conversations will lead to the development of a toolkit that will include examples of best practices. The toolkit will offer inclusive instructional approaches from across the disciplines. For example, a biology professor might discuss intersex development as part of the curriculum, and an introductory engineering class might discuss Aprille Ericsson and some of her challenges at NASA. When professors use these best practices in the classroom, they not only help students learn about some of the issues surrounding diverse populations, but also help give students the voice to be able to be more conversant about diverse issues. Most important is the engagement of students who otherwise may feel marginalized when their own unique experiences remain invisible.
Project collaborators are Demere Woolway, Director of LGBTQ Life; Shannon Simpson, Student Engagement and Information Fluency Librarian, and myself, with support from the Sheridan Libraries and Museums Diversity Committee. Most important will be the various lecturers and faculty from across the disciplines who will work with us on developing the toolkit.
More information on TILE can be found here. While TILE is in development, here are two resources for those interested in exploring ways to improve classroom climate.
The National Education Association (NEA) offers strategies for developing cultural competence for educators. “Cultural competence is the ability to successfully teach students who come from a culture or cultures other than our own. It entails developing certain personal and interpersonal awareness and sensitivities, understanding certain bodies of cultural knowledge, and mastering a set of skills that, taken together, underlie effective cross-cultural teaching and culturally responsive teaching.”
The Center for Integration of Research, Teaching and Learning (CIRTL) has some excellent diversity resources on its website, including a literature review, case studies, and a resource book for new instructors.
Macie Hall, Senior Instructional Designer, Center for Educational Resources
Shannon Simpson, Librarian for Student Engagement and Information Fluency, Sheridan Libraries and Museums
Image Source: TILE logo © 2015 Shannon Simpson
|
<urn:uuid:38a32e6f-7833-4427-a02c-6544307950ed>
|
CC-MAIN-2017-30
|
http://ii.library.jhu.edu/tag/tile/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424090.53/warc/CC-MAIN-20170722182647-20170722202647-00588.warc.gz
|
en
| 0.926241 | 631 | 3.140625 | 3 | 2.95369 | 3 |
Strong reasoning
|
Education & Jobs
|
- Safety and Health Topics
- Latex Allergy
Allergy to latex was first recognized in the late 1970s. Since then, it has become a major health concern as an increased number of people in the workplace are affected. Health care workers exposed to latex gloves or medical products containing latex are especially at risk. It is estimated that 8-12% of health care workers are latex sensitive. Between 1988 and 1992, the Food and Drug Administration (FDA) received more than 1,000 reports of adverse health effects from exposure to latex, including 15 deaths due to such exposure.
As used in this topic, latex refers to the natural rubber latex manufactured from a milky fluid that is primarily obtained from the rubber tree (Hevea brasiliensis). Some synthetic rubber materials may be referred to as "latex" but do not contain the natural rubber proteins responsible for latex allergy symptoms.
Latex allergy is addressed in specific OSHA standards for General Industry.
How do I find out about employer responsibilities and workers' rights?
Workers have a right to a safe workplace. The law requires employers to provide their employees with safe and healthful workplaces. The OSHA law also prohibits employers from retaliating against employees for exercising their rights under the law (including the right to raise a health and safety concern or report an injury). For more information see www.whistleblowers.gov or Workers' rights under the OSH Act.
OSHA can help answer questions or concerns from employers and workers. To reach your regional or area OSHA office, go to the OSHA Offices by State webpage or call 1-800-321-OSHA (6742).
Small business employers may contact OSHA's free and confidential On-site Consultation program to help determine whether there are hazards at their worksites and work with OSHA on correcting any identified hazards. Consultants in this program from state agencies or universities work with employers to identify workplace hazards, provide advice on compliance with OSHA standards, and assist in establishing injury and illness prevention programs. On-site Consultation services are separate from enforcement activities and do not result in penalties or citations. To contact OSHA's free consultation service, go to OSHA's On-site Consultation web page or call 1-800-321-OSHA (6742) and press number 4.
Workers may file a complaint to have OSHA inspect their workplace if they believe that their employer is not following OSHA standards or that there are serious hazards. Workers can file a complaint with OSHA by calling 1-800-321-OSHA (6742), online via eComplaint Form, or by printing the complaint form and mailing or faxing it to the local OSHA area office. Complaints that are signed by a worker are more likely to result in an inspection.
If you think your job is unsafe or if you have questions, contact OSHA at 1-800-321-OSHA (6742). Your contact will be kept confidential. We can help. For other valuable worker protection information, such as Workers' Rights, Employer Responsibilities, and other services OSHA offers, visit OSHA's Workers' page.
|
<urn:uuid:2f6da469-5fd5-4f9e-b5cc-2c81897fe087>
|
CC-MAIN-2017-30
|
https://www.osha.gov/SLTC/latexallergy/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423903.35/warc/CC-MAIN-20170722062617-20170722082617-00709.warc.gz
|
en
| 0.93257 | 694 | 2.75 | 3 | 2.241585 | 2 |
Moderate reasoning
|
Health
|
Methyl iodide is a federal hazardous air pollutant and was identified as a toxic air contaminant in April 1993 under AB 2728.
CAS Registry Number: 74-88-4
Molecular Formula: CH3I
Methyl iodide is a colorless liquid which turns brown on exposure to light. It has a sweet ethereal odor. It is soluble in alcohol, carbon tetrachloride, and ether, and partially soluble in water. Methyl iodide is nonflammable (HSDB, 1991; Merck, 1983; Sax, 1989).
| Property | Value |
|---|---|
| Boiling Point | 42.5 °C |
| Melting Point | -66.5 °C |
| Vapor Density | 4.9 (air = 1) |
| Density/Specific Gravity | 2.28 at 20/4 °C (water = 1) |
| Vapor Pressure | 405.9 mm Hg at 25 °C |
| Log Octanol/Water Partition Coefficient | 1.51 |
| Water Solubility | 1,389 mg/L at 25 °C |
| Conversion Factor | 1 ppm = 5.81 mg/m3 |
(Howard, 1990; HSDB, 1991; Merck, 1983; Sax, 1989; U.S. EPA, 1994a)
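The conversion factor above follows from the standard ideal-gas relation mg/m3 = ppm × MW / 24.45 at 25 °C and 1 atm, where 24.45 L/mol is the molar volume under those conditions. A quick sketch (the molecular weight of CH3I is taken as 141.94 g/mol):

```python
# Standard ppm <-> mg/m3 conversion at 25 °C and 1 atm.
MOLAR_VOLUME_L = 24.45      # L/mol, molar volume of an ideal gas at 25 °C, 1 atm
MW_METHYL_IODIDE = 141.94   # g/mol for CH3I

def ppm_to_mg_m3(ppm, mw):
    """Convert a mixing ratio in ppm to a mass concentration in mg/m3."""
    return ppm * mw / MOLAR_VOLUME_L

def ug_m3_to_ppb(ug_m3, mw):
    """Convert ug/m3 to ppb (the ppb/ug factor mirrors the ppm/mg factor)."""
    return ug_m3 * MOLAR_VOLUME_L / mw

print(round(ppm_to_mg_m3(1.0, MW_METHYL_IODIDE), 2))   # → 5.81
# The U.S. EPA mean of 0.12 ug/m3 corresponds to about 0.02 ppb:
print(round(ug_m3_to_ppb(0.12, MW_METHYL_IODIDE), 2))  # → 0.02
```

This also reproduces the ambient-air figures quoted below: 0.12 micrograms per cubic meter works out to roughly 0.02 parts per billion.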
Methyl iodide is used as a methylating agent, an alkylating agent, in microscopy, as imbedding material for examining diatoms, in testing for pyridine, as a chemical intermediate, and as a fire extinguisher. It has been detected in the exhaust gases of nuclear reactors (HSDB, 1991).
No emissions of methyl iodide from stationary sources in California were reported, based on data obtained from the Air Toxics "Hot Spots" Program (AB 2588) (ARB, 1997b).
Methyl iodide occurs naturally in the ocean as a product of marine algae, with an estimated production of 44 million tons per year (HSDB, 1991).
No Air Resources Board data exist for ambient measurements of methyl iodide. However, the United States Environmental Protection Agency (U.S. EPA) has compiled ambient air data from several urban and suburban locations throughout the United States from 1972-85. From these data, the U.S. EPA has calculated a mean ambient air concentration of 0.12 micrograms per cubic meter or 0.02 parts per billion (U.S. EPA, 1993a).
No information about the indoor sources and concentrations of methyl iodide was found in the readily-available literature.
In the troposphere, methyl iodide will photolyze and react with the hydroxyl (OH) radical. The calculated half-life and lifetime of methyl iodide due to reaction with the OH radical are about 140 days and 200 days, respectively. Methyl iodide absorbs solar radiation out to 335 nanometers, and photolysis should dominate as a tropospheric loss process, with a lifetime on the order of one day (Atkinson, 1995).
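The quoted half-life (~140 days) and lifetime (~200 days) for OH-radical loss are mutually consistent: for first-order loss the e-folding lifetime is the half-life divided by ln 2. A minimal check:

```python
import math

# For first-order loss (e.g. reaction with the OH radical), concentration
# decays as exp(-t/tau), so the half-life and lifetime are related by
# t_half = tau * ln(2), i.e. tau = t_half / ln(2).
def lifetime_from_half_life(t_half):
    return t_half / math.log(2)

# A 140-day half-life implies a lifetime of about 202 days,
# matching the ~200 days quoted above.
print(round(lifetime_from_half_life(140)))  # → 202
```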
Methyl iodide emissions are not reported from stationary sources in California under the AB 2588 program. It is also not listed in the California Air Pollution Control Officers Association Air Toxics "Hot Spots" Program Revised 1992 Risk Assessment Guidelines as having health values (cancer or non-cancer) for use in risk assessments (CAPCOA, 1993).
Probable routes of human exposure to methyl iodide are inhalation, ingestion, and dermal contact (Howard, 1990).
Non-Cancer: Exposure to methyl iodide in air may cause skin blistering, severe eye and respiratory tract irritation, and pulmonary edema. Methyl iodide is neurotoxic. Symptoms include nausea, vomiting, vertigo, ataxia, slurred speech, drowsiness, convulsions, and coma. Central nervous system symptoms may last for weeks. Methyl iodide is hepatotoxic and a highly reactive alkylating agent (U.S. EPA, 1994a).
The U.S. EPA has not established a Reference Concentration (RfC) or an oral Reference Dose (RfD) for methyl iodide (U.S. EPA, 1994a).
No information is available on adverse developmental or reproductive effects in humans or animals from methyl iodide. (U.S. EPA, 1994a).
Cancer: There are no adequate data on the carcinogenicity of methyl iodide in humans. Rats and mice exposed to methyl iodide developed lung tumors. The U.S. EPA has classified methyl iodide as Group C: Possible human carcinogen (U.S. EPA, 1994a). The International Agency for Research on Cancer has classified methyl iodide as Group 3: Not classifiable (IARC, 1987a). The State of California has determined under Proposition 65 that methyl iodide is a carcinogen (CCR, 1996).
|
<urn:uuid:d3a51d63-f7df-4faa-9747-6c4350266f1e>
|
CC-MAIN-2015-48
|
http://scorecard.goodguide.com/chemical-profiles/html/methyl_iodide.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.48/warc/CC-MAIN-20151124205424-00333-ip-10-71-132-137.ec2.internal.warc.gz
|
en
| 0.880449 | 1,025 | 3.5625 | 4 | 2.687404 | 3 |
Strong reasoning
|
Science & Tech.
|
About This Project
Although Lyme is considered to be endemic only in Europe and the northeastern and midwestern United States, people throughout the US and the world are suffering with this debilitating illness. This research seeks to find out how common it is for birds in the wild to be carriers of the Lyme disease bacteria, which would shed light on their role in the spread of Lyme via their movement and migrations.
What is the context of this research?
Recent evidence demonstrates that wild birds are capable of and are actively transporting ticks and their associated diseases geographically during migrations (Anderson and Magnarelli 1984; Comstedt et al. 2006; Hamer et al. 2012). Further, research has also shown that a number of species exhibit reservoir competency, meaning that they are able to contract Borrelia burgdorferi infection and transmit it to uninfected ticks that parasitize them for a blood meal.
Since ground-feeding species such as northern cardinals, gray catbirds, song sparrows, and American robins spend significant time foraging for food at the optimal questing height for ticks, they are excellent opportunistic hosts, and have all demonstrated the ability to infect larval ticks with the Borrelia upon their first blood meal (Ginsberg et al. 2005). Additionally, birds with latent Borrelia infections and bacteria at undetectable levels can experience an infection reactivation when exposed to stressful situations (e.g. migration, lack of food, predators), once again positioning them in a role to transmit the pathogen to parasitizing ticks (Gylfe and Bergstrom 2000). The evidence for a significant avian role in the ecology of Lyme disease-causing pathogens is strong.
For this project, we seek to establish whether or not different species of birds in New Jersey, a human Lyme infection hotspot reporting over 3,000 cases of Lyme per year, are actively harboring Borrelia burgdorferi sensu lato (NJ DOH 2012). We will be collecting both blood and tick samples from birds that will be analyzed for Borrelia DNA via polymerase chain reaction (PCR) to determine if these animals have ever been carriers of the disease. We have chosen to examine native populations as well as migratory species because research has shown that residents can maintain tick populations (Anderson and Magnarelli 1984), and thus could be harboring the ticks that contribute to the high levels of disease presence seen in the human population in this area. It is also crucial to determine the presence of these bacteria in New Jersey's migratory populations given the importance of NJ as a stop-over location.
What is the significance of this project?
Lyme disease is one of the fastest-growing and most commonly reported vectorborne illnesses in the United States, with estimates of 300,000 people infected annually (CDC 2013). Lyme is an incredibly complex illness that classically first presents with flu-like fatigue, joint and muscle aches, and the "bulls-eye" rash. It is usually cured if caught in this early stage. However, less than 50% of people infected with Lyme report noticing a rash (which isn't always a "bulls-eye"), and the flu-like symptoms can be mild or nonexistent. Disseminated and late-stage Lyme disease can present in a myriad of ways including facial paralysis (Bell's palsy), neurological problems, gastrointestinal issues, cardiac manifestations, and psychological conditions. Often, because these signs and symptoms are so diverse and vague, people infected with Lyme are misdiagnosed with conditions like chronic fatigue syndrome (CFS), fibromyalgia, Alzheimer's, amyotrophic lateral sclerosis (ALS), and multiple sclerosis (MS), delaying their antibiotic treatment and making overall treatment far more difficult. Untreated, Lyme can be debilitating and even fatal. Routine blood tests for Lyme only search for antibodies, not the actual pathogen, so results are often inaccurate and unreliable (ELISA is less than 50% sensitive). Currently, only the northeastern US and Europe are considered endemic; western and southern US states and other countries are often thought to be "free" of Lyme disease. Since it is a difficult disease to diagnose as well as to treat, it is crucial to understand every aspect of its ecology, that is, how the disease interacts with and is spread via animal hosts. If we find that birds in New Jersey are actively carrying the bacteria, it provides further evidence that this disease is very easily capable of being spread both locally and long distances, i.e. to other states and countries.
Lyme disease has become a very public and political issue due ultimately to a lack of research, so we feel that this is a great question to investigate right now. The project is also a personal one; those of us on the research team have been infected with Lyme on a number of occasions, and know many others who have been sickened with this illness.
Please see the following links for more information on Lyme disease:
Lyme Disease Association: http://www.lymediseaseassociation.org/
Basic Information about Lyme Disease by International Lyme And Associated Diseases Society: http://www.ilads.org/lyme_disease/about_lyme.html
Lyme and Tick-Borne Diseases Research at Columbia: http://www.columbia-lyme.org
What are the goals of the project?
The funds raised will go directly to purchasing field equipment for collecting tick and bird blood samples during the summer and fall. After field samples have been collected, the funds will also cover the costs of the laboratory component of the project. We will be analyzing blood and tick samples for the Borrelia bacteria by use of polymerase chain reaction (PCR).
This budget will cover the costs of the field and laboratory materials.
Meet the Team
I'm an ecologist fascinated by diseases, zoonoses in particular. I earned my masters in August 2014 from Montclair State University where I researched the role New Jersey birds (native and migratory) play in the enzootic cycle of Lyme disease. We discovered Lyme bacteria in an Eastern phoebe and are currently working on a publication.
I am currently a PhD student at the University of Rhode Island working with Dr. Tom Mather on communication theory-based tickborne disease prevention programs, including the first crowd-sourced tick surveillance program, TickSpotters.
I also work as an adjunct biology professor in the biology department at Roger Williams University, and volunteer with RIDEM on bat projects. Previously, I worked with Conserve Wildlife Foundation of New Jersey investigating ranavirus, monitoring bat acoustics for endangered species, as well as public outreach and intervention in cases where bats have roosted in homes and private buildings.
My other interests include hiking and traveling with my husband and daughter, birding, gardening, good food, brewing beer, animal rescue and rehabilitation, and my pit bull mix, Isis.
BA, English - Rutgers University - 2008
BS, Biology - Montclair State University - 2010
MS, Ecology and Evolution - Montclair State University - 2014
PhD, Ecology and Ecosystem Sciences - University of Rhode Island - in progress
October is LGBTQ History Month, when Americans across the country remember and celebrate individuals who have fought for the inclusion of all members of the LGBTQ community.
The fight is ongoing in the nation's capital and in state legislatures across the United States. While Americans have witnessed many victories for the LGBTQ community in recent years, we also continue to confront great challenges. In particular, transgender and gender non-conforming Americans continue to experience discrimination in housing, employment, education, and other facets of American life. Transgender and gender non-conforming people have long been fighting discrimination. During LGBTQ History Month, we remember these activists and their work to advance LGBTQ equality. LGBTQ History Month is also an opportunity to remember that history is being made every day as the work continues.
The Reform Movement's Urgency of Now: Transgender Rights Campaign is a reflection of the ongoing discrimination that transgender Americans—and in particular, transgender students—face in our nation's schools. In honor of LGBTQ History Month, the RAC is looking at the history of the recent battles over the inclusion of transgender students in schools.
The policy landscape that affects transgender students today is shaped on three levels: school boards, state governments and the federal government.
Much of the recent news concerning the treatment of transgender students in schools reflects disagreement between local policies and the federal government's interpretation of Title IX. As the National Center for Transgender Equality (NCTE) states, "Title IX is a federal law that makes sex discrimination illegal in most schools. Most courts who have looked at the issue have said that this includes discrimination against someone because they are transgender or because they don't meet gender-related stereotypes or expectations."
Title IX is a cornerstone of the network of laws and policies designed to protect transgender students. As Eisendrath Legislative Assistants Lizzie Stein and Maya Weinstein wrote last August, "While the law was initially passed with a focus on prohibiting gender discrimination in collegiate athletics, it addresses discrimination in many areas of the educational experience. Title IX supports students’ ability to access an education in an environment free from discrimination and hostility. In particular, students who experience sexual violence and transgender and gender non-conforming students rely on the law."
But because it is up to the federal government to enforce Title IX, the government must also interpret how the law ought to be implemented. While the courts have interpreted Title IX in such a way to protect transgender and gender non-conforming students, it was in 2016 that the federal government stated that Title IX affirms the right of transgender and gender non-conforming students to use educational facilities and participate in educational programs consistent with their gender identity. This was a welcome step in the fight to affirm the right of transgender and gender non-conforming students to be treated equally under the law.
President Trump, however, reversed course on the federal government's position concerning Title IX and the protections it affords transgender and gender non-conforming students shortly after he took office. This led to school boards across the country facing pressure to clarify how they would respond to the changing guidance.
At the state and local level, nefarious "bathroom bills" have cropped up denying students (and other transgender people) the right to use bathrooms in schools and other public buildings consistent with their gender identity. These bills are often justified by "public safety" concerns. These claims are both offensive and baseless.
And yet, the National Council of State Legislatures reports that in the latest legislative session of state houses around the nation, 16 states "have considered legislation that would restrict access to multiuser restrooms, locker rooms, and other sex-segregated facilities on the basis of a definition of sex or gender consistent with sex assigned at birth or 'biological sex,'" six states "have considered legislation that would preempt municipal and county-level anti-discrimination laws," and 14 states "have considered legislation that would limit transgender students' rights at school." In 2017 alone 36 states considered legislation that would in some way make it easier to discriminate against transgender and gender non-conforming people on account of their gender identity. Schools comprise a particular focus of these efforts.
This LGBTQ History Month, we remember that history is being written every day. We draw strength and inspiration from the countless LGBTQ Americans who have fought and struggled to create a truly inclusive nation. This month is about looking to the past as we charge ahead in uncertain territory.
NEW YORK — The brutal rape and ensuing death of a 23-year-old woman in Delhi — now known as Damini (“lightning” in Hindi) — along with concurrent massive civil society mobilization in India around these events, has ignited a flurry of speculation and analysis within Western media about why India is such a dangerous place for women.
This quest for causal answers has led to a tried and tested theory about India’s “patriarchal culture,” where Indian men are characterized, as Libby Purves put it recently in the Times of London, by a “murderous, hyena-like male contempt” towards women. Leaving aside the blatantly problematic metaphor of Indian men as scavenging animals, Sonia Faleiro’s New York Times op-ed lamented that legal measures against rape in India “have been ineffective in the face of a patriarchal and misogynistic culture.”
Attention to India’s “patriarchal culture” occludes the prevalence of rape and other forms of violence against women throughout the entire world.
According to the World Bank, more women between the ages of 15-44 are killed or disabled as a result of gender-based violence than cancer, car accidents, malaria and war combined. In the global epidemic of violence against women, the United States is no exception. The Centers for Disease Control and Prevention estimate that 1 in 5 American women will be raped in her lifetime, and the US Department of Justice reports more than 300,000 American women are raped each year. In 2011, India, a country whose population is four times greater than the United States, had 12 times fewer reported rapes.
If police officials in India fail to take rape survivors seriously, American law enforcement is not so far behind. The Federal Bureau of Investigation crime data reveals only 24 percent of reported rapes result in an arrest in the United States, a rate far below that of other violent crimes such as murder (79 percent) and aggravated assault (51 percent). These statistics show that our own country displays hostility towards women similar to that in India.
This astonishing reduction of India’s issues with gender-based violence to a vast and unchanging patriarchal culture also obfuscates significant differences within India regarding violence against women. What is perhaps most surprising about Faleiro’s analysis of India’s “patriarchal culture” is that it directly follows a description of her move from Delhi to the relative freedom of Mumbai, leaving one to wonder if Mumbai has somehow escaped the overwhelming patriarchy of the rest of India, and if so, how.
Perhaps Mumbai has achieved the true cosmopolitanism that Indian journalist Manu Joseph pines for in his critique of “village mentalities?” But then again, Mumbai is also the city where a 20-year-old Nepali woman was gang-raped by three men on December 22, and a 15-year-old physically challenged girl was raped by her father in the supposed safety of her own home. Mumbai, like the rest of India, is a complicated place, and like other Indian cities, boasts higher rates of violence against women than India’s villages.
According to India’s National Crime Records Bureau (NCRB), Indian cities are far more dangerous places for women than Indian villages. Delhi, the city where Damini once lived and studied, is no longer simply India’s political capital, but has also achieved the dubious distinction of becoming its “crime capital” in rates of violence against women.
Reports from the National Family Health Survey, which arguably provides a more accurate picture of actual rates of violence than the reported numbers available through NCRB, also confirm that city-dwelling women are more likely to face intimate forms of abuse than women living in villages. India, after all, is the country where sex ratios are most skewed toward males among the wealthy and educated in urban areas that boast India’s greatest “modern” accouterments.
This uneven landscape, where violence against women is simultaneously prevalent throughout the world and notably higher in the industrialized, urban centers of developing countries than their ostensibly “backwards” hinterlands, should make us all pause. Gender-based violence is a complex phenomenon, with multiple causes and social manifestations, and does not lend itself to facile theorizations about “patriarchal culture” and “village mentalities.” This reduction not only ignores the facts, but also is dangerously misleading.
The terrible events that unfolded in Delhi, events that sadly could have and do happen in other parts of the world, were made possible by its peculiar position of modernity. Delhi is a city that houses enormous inequalities, much like many US cities, and has undergone rapid and uneven change that leaves residents no longer agreeing on shared norms of social conduct. It is a place where the wealthy increasingly abandon the public sphere in favor of highly fortified, private enclaves, leaving women and other marginalized groups vulnerable to violence.
The silver lining on this otherwise dark cloud is that Delhi is also a place where gross violations of human rights have a chance of making it onto the public agenda. We’ve seen the public decry social and political complacency through the numerous demonstrations held across India in the wake of Damini’s death. The degree to which this mobilization will lead to concrete change remains to be seen, but that India is a changing nation is certain. No matter how much we may wish the global reality of violence against women were different, and that we may possess the key to its end, we would do well to take a step back and reassess our initial impulses about where the roots of the problem lie.
Poulami Roychowdhury is a Ph.D. candidate in the Department of Sociology at New York University. Her research focuses on law and gender violence in India.
Research for this article was conducted by Mandy Van Deven, writer, advocate, and online media strategist. Her work exploring contemporary feminisms, global activism, and sexuality has been published in The Guardian, Salon, AlterNet, and Marie Claire.
Modeling the Formation of Water
We have modeled the constituents of water, but how do models of two gases (the elements hydrogen and oxygen) help in understanding how water is the basis of life? Dalton’s third postulate states that atoms are the units of chemical changes. When the gases hydrogen and oxygen are mixed and the mixture is ignited, water vapor forms (it will later condense to liquid water as the fire subsides). The reaction releases a lot of heat, as you can see in this photo of the explosion of the Hindenburg:
There are many videos of hydrogen explosions on the web or YouTube, as well as videos of the Hindenburg and the disaster. These spectacular videos show evidence for a chemical change: weaker H-H and O-O bonds break (that's why we need to add energy, or ignite, the mixture), but then the formation of very strong H-O bonds in water, H-O-H, releases a lot of energy.
Soon after the two gases are mixed together and energy is added, a rearrangement of atoms begins. The overall change can be seen in this animation, but the "mechanism" of the reaction is complex, as we will see later when we study kinetics.
The net result is that the two hydrogen atoms of each H2 molecule become separated and combine instead with oxygen atoms (the hydrogen atoms may attack oxygen molecules, breaking them up). When the chemical reaction is complete, all that remains is a collection of water molecules, each of which contains one oxygen atom and two hydrogen atoms. Notice that there are just as many oxygen atoms after the reaction as there were before the reaction. The same applies to hydrogen atoms. Atoms were not created, destroyed, divided into parts, or changed into other kinds of atoms during the chemical reaction.
The view of water shown here is our first microscopic example of a compound. The model suggests what we already know: water molecules have much different properties than hydrogen or oxygen; specifically, they tend to clump together to form a liquid. Each molecule of a compound is made up of two (or more) different kinds of atoms. Since these atoms may be rearranged during a chemical reaction, the molecules may be broken apart and the compound can be decomposed into two (or more) different elements.
The formula for a compound involves at least two chemical symbols—one for each element present. In the case of water, each molecule contains one oxygen and two hydrogen atoms, and so the formula is H2O. Both the figure and the formula tell you that any sample of pure water contains twice as many hydrogen atoms as oxygen atoms. This 2:1 ratio agrees with Dalton’s fourth postulate that atoms combine in the ratio of small whole numbers.
Although John Dalton originally used circular symbols like those in the figure to represent atoms in chemical reactions, a modern chemist would use chemical symbols and a chemical equation like
- 2 H2 + O2 → 2 H2O (1)
- Reactant(s) Product(s)
This equation may be interpreted microscopically to mean that 2 hydrogen molecules and 1 oxygen molecule react to form 2 water molecules. It should also call to mind the macroscopic change shown in the photographs. This macroscopic interpretation is often strengthened by specifying physical states of the reactants and products:
- 2 H2(g) + O2(g) → 2 H2O(l) (2)
Thus gaseous hydrogen and gaseous oxygen react to form liquid water. [If a reactant or product is a solid, a symbol like H2O(s) would be used. Occasionally (c) may be used instead of (s) to indicate a crystalline solid.] Chemical equations such as (1) and (2) summarize a great deal of information on both macroscopic and microscopic levels, for those who know how to interpret them.
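The atom bookkeeping behind equation (1) can be checked mechanically. Below is a minimal Python sketch (the formula parser and function names are illustrative, not from the original text) that tallies the atoms on each side of 2 H2 + O2 → 2 H2O and confirms that atoms are conserved, as Dalton's postulates require:

```python
import re
from collections import Counter

def count_atoms(formula):
    """Count atoms in a simple formula like 'H2O' (no parentheses)."""
    counts = Counter()
    # Each match is an element symbol followed by an optional count, e.g. ('H', '2').
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(num) if num else 1
    return counts

def side_atoms(side):
    """Total atom counts for one side of an equation: [(coefficient, formula), ...]."""
    total = Counter()
    for coeff, formula in side:
        for element, n in count_atoms(formula).items():
            total[element] += coeff * n
    return total

# 2 H2 + O2 -> 2 H2O
reactants = [(2, "H2"), (1, "O2")]
products = [(2, "H2O")]

print(side_atoms(reactants))  # Counter({'H': 4, 'O': 2})
print(side_atoms(products))   # Counter({'H': 4, 'O': 2})

# Atoms are neither created nor destroyed: both sides must match.
assert side_atoms(reactants) == side_atoms(products)
```

An unbalanced candidate such as H2 + O2 → H2O would fail the final comparison (hydrogen balances but oxygen does not), which is exactly the check a chemist performs by eye when balancing an equation.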
Ever notice how the calf muscle seems to get little attention? Rarely do group fitness classes dedicate blocks of time to calves like they do for ab workouts or the butt. And while the calf muscle often goes unnoticed, it’s more important than you think. Sure, some are born with a nice set of calf muscles, and others have to work at developing them. But regardless of your genetics, it’s vital that we all take care of our calves. That’s because weak or tight calf muscles, left neglected, can contribute to all sorts of posture problems, pain, injuries and athletic performance problems. And who wants that?
Physiology of the Calf & Why It’s Important to Strengthen
What makes calf exercises so important to our daily functions? Let’s go a bit deeper into the physiology of the calf. You should know by now that the calf muscle, which is on the back of the lower leg, is made up of two muscles. The gastrocnemius is the larger calf muscle that forms the bulge in the upper calf area. There are two parts that form a sort of diamond shape. The soleus is much smaller and more flat lying just beneath the gastrocnemius muscle.
These two calf muscles taper, and merge together at the bottom of the calf, consisting of tough connective tissue that joins the Achilles tendon. This inserts into the heel bone. With all of these mechanics at play, you can now see how important it is to ensure all of these parts are in good working order. When we walk, run or jump, the calf muscle performs work to pull the heel up, allowing a forward movement.
Back to that gastrocnemius. That chief muscle of the calf is responsible for flexing the knee and plantarflexion of the foot (the movement of pointing your toes downward). It runs to the Achilles tendon from two heads attached to the femur above the back of the knee. (1)
The soleus is responsible for plantarflexion. When we stand, the soleus offers a lot of stability, in particular to the foot, fibula and tibia. (2)
Together, this dynamic duo provides critical stabilization for walking, hiking, running, jumping and even standing. And as we’ll talk about later, calf exercises are crucial because an underdeveloped calf area could cause some nagging injuries such as Achilles tendonitis, shin splints, calf strains and plantar fasciitis. (3)
Calf Muscle Conditions
The forward action of running works the back of the leg more than the front. Did you know that for a runner, the calves lift the heel about 1,500 times per mile? All that heavy lifting, if the calves are underdeveloped, can cause a lot of lower leg injuries, such as calf pulls, shin splints, stress fractures and compartment syndrome. Also, anything from not being warmed up before exercise to doing a lot of hill work to overstretching to overtraining can lead to calf strains. Depending on the severity of the injury, it could take some time to heal. (4, 5)
Some people also complain about tight calf muscles. This can be triggered by overuse, trauma from an injury, nerve injuries or medical problems like stroke or diabetes. One other thing I want to point out is that there are concerns about those who store fat in the lower extremities, such as the calves. The problem is that this can cause blood clots, increasing the risk of heart disease. Additionally, fat storage in the calves can be caused by the retention of lymphatic fluid in the legs due to a weak lymphatic system. If you feel this describes you, make sure to consult your doctor. (6)
If the calf muscle is not in good working order some conditions can occur such as:
Calf Muscle Strain
A calf muscle strain is when you stretch the calf muscle past its normal position, which can cause tearing of muscle fibers. Pain levels can be mild to severe. This is sometimes referred to as a pulled calf muscle or a calf muscle tear.
Calf Muscle Rupture
A calf muscle rupture is when the calf muscle has completely torn. This will likely result in severe pain. This can cause the inability to walk and the muscle may even collapse into a lump that may be seen and felt through the skin.
Calf Muscle Myositis
Calf muscle myositis is the occurrence of inflammation of the calf muscle. Though rare, this can be caused by infections or autoimmune conditions. An autoimmune condition often attacks the tissues of the body by mistake.
Rhabdomyolysis
Rhabdomyolysis is when the muscle breaks down because of long-term pressure, over-exercising, drug side effects or a severe medical condition, but if this happens, it is likely to affect numerous muscles in the body. The characteristic triad of complaints in rhabdomyolysis is muscle pain, weakness, and dark urine. Calf pain is one of the muscle groups often impacted when muscle pain is reported. (7)
Body Type and Our Calves
The calves are no different than general body shape when it comes to the variety of sizes. Some have skinny calves, some people have calves with more fat, some are muscular, toned or bulky — it depends on a lot of factors. Genetics can play a big role in the shape of the calves, but if you are a bodybuilder, you’re likely to have thick, muscular calves due to the work you put into building them. Endurance cyclists often have strong, well-defined calves because of the repetitive motion of pushing and pulling the pedals.
There are folks who have long, lean calves with no muscles to show, or dancers with long lean calves featuring definition. The shape of the calves is also determined by the position of the muscle in relation to the knee and ankle joints — some are simply higher and some are lower; most likely a genetic attribute.
Regardless, the shape of your calves is usually affected by what you do every day. If you exercise routinely, are a runner, cyclist, dancer or bodybuilder, you’re more likely to have shapely calves. But if you’re overweight, your calves may appear large and untoned. In any case, you don’t have to be an amazing dancer to have great calves, just simply putting calf exercises into your fitness regimen can offer surprising results. (9)
Best Approach to Getting Defined Calves
When you’re trying to cut body fat and get defined, diet is key. It’s no exception when we’re talking about the calves. The National Strength and Conditioning Association (NSCA) points out that protein (hint, hint, eat your protein foods!) is essential for every meal. When choosing carbs, complex carbohydrates are the best, but a few simple carbs are still needed — just don’t overdo it. Healthy fats are the way to go, such as avocado and coconut oil, but use them in moderation. Athletes may need a few more calories to build muscle. But regardless, careful monitoring of caloric intake is important if you want to build and see calf muscle definition develop.
Some ways to get those awesome calves are at your fingertips. You can work with a personal trainer to determine the best exercises strategies for you. Because some calf exercises may incorporate balancing on your toes, if you have stability issues, make sure to include a safe environment for exercising and work out with a partner or coach. Additionally, proprioception exercises to build balance may be a great way to prevent injury for athletes. (10)
Calf raises are most popular when it comes to strengthening and building muscle in the calves. Calf raises are great because they help improve muscular strength which, of course, will tone the area. Nice perk? They can be done almost anywhere.
Sports rehab doctors and coaches often use calf raises to help with issues that arise from Achilles tendon injuries such as an Achilles tear or tendonitis. When you have strong calf muscles, you reduce the risk of injury by reducing the stress placed in that area during activity. This, in turn, facilitates faster healing. If you often engage in activity that requires balancing on one foot, such as a yoga position, or jumping while playing basketball, strong calf muscles can offer a lot of stability. (11)
Appropriate strength training techniques to help stimulate the correct muscles in your calves can help shape the calf muscles. Improving the flexibility of the ankle joints and varying your exercise moves to include all the ranges of motion may also be beneficial. Seated and standing calf raises and leg press machines can help develop stronger calves.
Kris Gethin, editor-in-chief of Bodybuilding.com, reports that by doing calf raises every other day, you can develop muscular calves; however, activities such as running, walking, jumping rope and cycling can provide tone but may result in thinner calves. So it really depends on what you want out of it. (12)
Best Calf Exercises Workouts
Working the calves is easy and does not require too much time. With a consistent routine of every other day, you can have toned, shaped calves in no time.
Standing Calf Raise
Stand near a wall or chair for balance. Place your feet hip-distance apart with the ankles, knees and hips square to the front. Once you are stable, slowly lift your heels off the ground raising the body upward (not forward or backward). Tuck your butt under just a bit and tighten the abs as you raise. Hold this position for 3 to 10 seconds (you will be able to hold it longer as you get stronger). Release and repeat 10 to 20 times.
Single-Leg Calf Raise: Advanced
This is similar to the previous exercise, but requires a bit more stability. Over time you won’t need the wall or chair for support, but for now, make sure you’re able to maintain stability by standing near a wall or chair. Place your feet hip-distance apart with the ankles, knees and hips square to the front.
Once you are stable, bend your left knee so that foot is off the floor. (Abs are tight.) Slowly lift your right heel off the ground, raising the body upward (not forward or backward). Hold this position for 1 to 3 seconds. Release and repeat 10 to 20 times on each leg. For an even more advanced move, try this on a Bosu ball, but be careful and work your way up to it. A nearby support to hold onto is crucial if you try this variation.
Seated Calf Raise
This exercise can be done on the calf exercise machine at the gym by selecting the appropriate weight for your level. Make sure you don’t overdo it.
Here’s the at-home version: Start by sitting in a sturdy chair with your feet flat on the floor. Make sure the knees stay aligned directly over your feet. Lean forward, placing your hands on your thighs close to your knees. This is where the action is going to take place.
While raising your heels, keeping the toes and balls of the feet on the ground, simply push down on your thighs to add resistance. Then slowly lower your heels. Repeat 10 to 20 times. The harder you push, the harder it will be to lift your heels. You can place a weight on your lap for resistance if you prefer and feel that you are ready for it.
Three-Way Stair Calf Raise
Using stairs, or any sort of ledge (such as a sidewalk curb), is a great way to build muscle in your calves. You may want to choose a spot that has something to hold onto for stability, such as a rail. Place the toes and balls of your feet on a step, hip-distance apart. Just like the standing calf raise, keep the abs tight while slightly tucking the butt (this tightens the abs and glutes, which will help tone them, too).
While toes are pointed forward, allow the heel to lower an inch or two below the height of the step, then raise upward on the toes and balls of the foot. Repeat 10 times. Then turn the toes inward and repeat the action. Now, turn the toes outward and repeat 10 times. Do 3 to 4 sets.
Calf Exercises and Best Practices Includes Stretching
Calf exercises should also include calf stretching. Studies show that a common cause of stress fractures is calf tightness, which causes a premature lifting of the heel while running and transfers a significant amount of force into the forefoot. And get this: A study published in the Journal of Orthopedic and Sports Physical Therapy found that subjects with tight calves were 4.6 times more likely to sustain a metatarsal stress fracture. (13)
According to the National Academy of Sports Medicine, tight calves can also contribute to postural problems like lower crossed syndrome and pronation distortion syndrome.
While most people know how to perform standard calf stretches, it’s vital that you hold each stretch for a minimum of 30 seconds. This allows the muscle to better relax and elongate to improve flexibility. Maintaining calf flexibility is essential to maintaining a healthy range of motion in your ankle. (14) (Your kinetic chain is all connected. It’s pretty amazing!)
Here’s a stretch Harvard Health Blog offers that even a couch potato can do. (Translation: No excuses.)
Couch potato calf stretch
Sit on the edge of a couch with your feet flat on the floor. With one leg, keeping your heel on the floor, lift and point the toes toward the ceiling, so that you feel a stretch in your calf muscle. Hold for 30 seconds, then do the same with the other leg, three times per leg. (15)
Don’t forget yoga.
Releasing tight, overactive calves is just one of the many benefits of yoga.
Final Thoughts on the Best Calf Exercises
- There are lots of activities that can help define the calves.
- Thinner calves may result from more aerobic activities such as running, walking, hiking, cycling and the elliptical machine.
- Sports that combine running and jumping are calf builders. Some of these include basketball, soccer, tennis and rugby.
- Larger, more muscular calves may be the result of specific calf raise types of exercises, often using weights.
- Weak and/or super tight calves can contribute to pronation distortion syndrome, lower crossed syndrome, plantar fasciitis, shin splints and Achilles tendonitis, among other issues.
In your average garden-variety textbook heart attack, the cause is typically a sudden lack of oxygenated blood supply feeding the heart muscle, caused by a significant blockage in one of your coronary arteries. This blockage is what doctors call the culprit lesion.
But in a new study led by Yale University cardiologist Dr. Erica Spatz, researchers remind us that although this “culprit lesion” classification of heart attack applies to about 95% of men under age 55, only 82.5% of younger women experience this kind of heart attack.(1)
Dr. Spatz and her team have in fact come up with five new classifications of heart attack they call the VIRGO taxonomy (based on data from the large international VIRGO* study). Two of these new classifications are particularly applicable to women.
Here’s how Dr. Spatz explained the new taxonomy to me:
“Young women with heart attacks are a diverse group.
“Many have distinct features that are different from the classic heart attack. But the current classification system does not fully accommodate diverse types of heart attack. As such, one in eight women cannot be classified; in these women, we don’t know what caused the heart attack.
“The VIRGO taxonomy identifies the diversity that exists among young adults with heart attack, and puts them into a more nuanced classification system that can be used to improve our understanding of the different mechanisms of heart attack. It can help us facilitate more personalized treatment approaches to improve outcomes.
“The VIRGO taxonomy may be especially important for young women, in whom the ‘classic’ heart attack resulting from plaque rupture with complete blockage of a heart artery is less often seen.”
There are a number of ways, says Dr. Spatz, that women can show up in the Emergency Department with heart attacks that are NOT caused by a complete blockage.
One is a non-obstructive (no blockage) heart attack caused by what’s known as a supply-demand mismatch. Dr. Spatz explains:
“Supply-demand mismatch is a medical term used to describe any acute stress that results in lack of oxygen to the heart muscle.
“This can result from either a decreased supply of oxygen to the heart (e.g., as with anemia) or an increased demand for oxygen by the heart (e.g., as with severe infection or surgery).
“These kinds of heart attacks are different from the classic heart attack, in which there is an abrupt blockage of one of the heart arteries usually due to the rupture of a plaque.”
In the old days (i.e. last week) of using the current classification system of identifying heart attacks, a supply-demand mismatch would have been classified as a Class 2 MI (myocardial infarction).
But in this new VIRGO taxonomy, researchers demonstrated that not all Class 2 MIs are the same.
Some people who are experiencing a heart attack, according to Dr. Spatz, have significant coronary artery disease (Class 2, see list below) and others have minimal coronary artery disease or none at all (Class 3).
Some adults with significant coronary disease present with an acute stress that results in a lack of oxygen to the heart (Class 2a), though others have no identifiable acute stress (Class 2b). Dr. Spatz adds:
“We suspect that these distinctions are important for understanding disease mechanism, treatment response and prognosis.”
That’s an understatement!
Women presenting in Emergency with no diagnostic evidence of significant coronary artery disease tend to be sent home pretty darned fast. And as many women already know, standard cardiac diagnostic tests that are mostly meant to reveal obstructive coronary artery disease may not accurately identify non-obstructive Class 3 cases at all.
The new classification system of the VIRGO taxonomy includes:
- Class 1: plaque-mediated culprit lesion (82.5% of women; 94.9% of men)
- Class 2: obstructive coronary artery disease with supply-demand mismatch (2a: 1.4% women; 0.9% men;) and without supply-demand mismatch (2b: 2.4% women; 1.1% men)
- Class 3: non-obstructive coronary artery disease with supply-demand mismatch (3a: 4.3% women; 0.8% men) and without supply-demand mismatch (3b: 7.0% women; 1.9% men)
- Class 4: other identifiable mechanism such as spontaneous dissection, vasospasm, or embolism (1.5% women; 0.2% men)
- Class 5: undetermined classification (0.8% women; 0.2% men)
The numbers are startling, and speak for themselves.
In every classification, women outnumber men – except for the Class 1 heart attack caused by a culprit lesion blocking a coronary artery – the classic Hollywood Heart Attack that men typically have, and the type of heart attack that virtually all cardiac diagnostic tools are designed to detect in (white, middle-aged) men.
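The taxonomy above is essentially a small lookup table, and its two headline claims can be checked mechanically. The sketch below is a hypothetical illustration only (the class labels and percentages come from the list above; the code itself is not from the VIRGO study): it encodes each class with its reported prevalence among women and men, totals the share of heart attacks falling outside the classic Class 1 mechanism, and verifies that women outnumber men in every non-classic class.

```python
# Hypothetical encoding of the VIRGO taxonomy percentages reported above.
# Each class code maps to (description, % of women, % of men).
virgo = {
    "1":  ("plaque-mediated culprit lesion",                    82.5, 94.9),
    "2a": ("obstructive CAD with supply-demand mismatch",        1.4,  0.9),
    "2b": ("obstructive CAD without supply-demand mismatch",     2.4,  1.1),
    "3a": ("non-obstructive CAD with supply-demand mismatch",    4.3,  0.8),
    "3b": ("non-obstructive CAD without supply-demand mismatch", 7.0,  1.9),
    "4":  ("other mechanism (dissection, vasospasm, embolism)",  1.5,  0.2),
    "5":  ("undetermined",                                       0.8,  0.2),
}

# Share of heart attacks that fall OUTSIDE the classic Class 1 mechanism.
non_classic_women = round(sum(w for c, (_, w, _) in virgo.items() if c != "1"), 1)
non_classic_men = round(sum(m for c, (_, _, m) in virgo.items() if c != "1"), 1)

# Women outnumber men in every classification except Class 1.
women_dominate_non_classic = all(
    w > m for c, (_, w, m) in virgo.items() if c != "1"
)
```

Tallied this way, roughly 17% of women's heart attacks versus about 5% of men's fall outside the classic plaque-rupture category, which is exactly the disparity the article goes on to discuss.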
If physicians are not able or willing to correctly classify women’s heart attacks, little will change when it comes to under-diagnosing women compared to our male counterparts. According to another Yale study published last year in the Journal of the American College of Cardiology, for example, it seems that we’ve made little progress in reducing heart attacks among young women – despite national campaigns designed to increase heart disease awareness and prevention.(2)
Additionally, compared to men, women heart patients under 55:
- had longer hospital stays
- had higher risk of death during hospital stays
- were more likely to have other health conditions like diabetes and high blood pressure
Meanwhile, Dr. Spatz makes an important observation of special interest to women whose unique heart disease may often be missed using standard cardiac diagnostic tests:
“In Class 3 non-obstructive heart attacks, we suspect other mechanisms explain the myocardial infarction (MI), like microvascular disease or vasospasm that is not captured at the time of cardiac catheterization. The catheterization may also lack sensitivity for micro-dissections or spontaneous resolution of a thrombus (clot) (sometimes seen, sometimes not).
“Our abilities to diagnose these other vascular causes is limited. We are evaluating women with chest pain syndromes (and MI) who have normal coronary arteries on catheterization with PET scan, assessing coronary flow reserve – an indicator of microvascular dysfunction.
“Vasospasm is very difficult to diagnose and may occasionally warrant a trial of therapy based on a physician’s educated best guess.”
For the first time, the VIRGO taxonomy represents a new way for physicians to rethink heart attack classification – especially in younger women who are more likely to be under-diagnosed compared to our male counterparts.
In my own opinion as a dull-witted heart attack survivor who was sent home from the E.R. misdiagnosed with acid reflux – despite textbook heart attack symptoms and “normal” diagnostic test results, the issue remains: how long will it take for this new way of assessing young women’s heart attacks to trickle down to other cardiologists and the E.R. gatekeepers out there in the real world?
When I asked Dr. Spatz this question, she answered that this VIRGO taxonomy was built on the scenario of women being sent to the cardiac cath lab for an angiogram when they present to hospital with suspicious cardiac symptoms. And that’s quite an assumption!
“First, we have to hope that women with typical or atypical presentations of MI are being appropriately triaged in the E.R. to catheterization. If the cath (or stress test in the case of non-MI chest pain) is ‘normal’, the reflex to diagnose as ‘non-cardiac’ needs to be revisited.
“Women who are having chest pain syndromes but with ‘normal’ coronary arteries (a scenario that we know is associated with poor health outcomes) may have other vascular disease, triggers of vascular instability and such.
“We need to be alert to these, discuss the limitations of the science on treatment to date, and work with women to diagnose and improve their symptoms – and hopefully their outcomes.
“I am hopeful that in the future, our diagnostic capability to understand mechanisms of symptoms and disease will advance.
“Meanwhile, the VIRGO taxonomy gives a way for researchers and clinicians to communicate and share insights, and to identify appropriate people for research studies.”
*This study is based on data from a large study called VIRGO, or Variation in Recovery: Role of Gender Outcomes on Young AMI Patients. VIRGO is the largest prospective observational study of young and middle-aged women and men diagnosed with acute myocardial infarction (AMI, or heart attack). Researchers studied AMI patients aged 18 to 55 years of age from a large, diverse network of 103 hospitals in the United States, 24 in Spain and 3 in Australia from 2008 to 2012. Led by principal investigator Dr. Harlan M. Krumholz at Yale, researchers looked at a number of important issues in this study, including:
- the various factors that may predispose women to a heart attack
- women’s poor recovery after that heart attack compared to our male counterparts
- the differences between men and women in the medical care they receive following a heart attack
♥ Need a translator for some of these cardiology terms? Visit my Heart Sisters patient-friendly, no-jargon glossary.
NOTE: Comments in response to this post are now closed. I am not a physician so cannot offer a medical opinion on your symptoms. Please consult your physician if you have specific questions about your health.
- Little social support: a big gap for younger heart patients (more from the VIRGO study)
- How gender bias threatens women’s health
- How can we get heart patients past the E.R. gatekeepers?
- “It’s not your heart. It’s just _____” (insert misdiagnosis)
- Misdiagnosis: is it what doctors think, or HOW they think?
- Slow-onset heart attack: the trickster that fools us
- Heart attack misdiagnosis in women
Another day, another case of Donald Trump ignorantly tweeting from the hip. Or maybe not quite so much. On Saturday, the President blamed the deadly forest fires in California, which have killed over 40 people in the Northern California town of Paradise and devastated celebrity-inhabited areas outside Los Angeles, on poor forest management.
It drew a furious response from, among others, singer-songwriter Neil Young whose home was reduced to a smoldering ruin and who posted on his website: ‘California is vulnerable – not because of poor forest management as DT (our so-called president) would have us think. We are vulnerable because of climate change; the extreme weather events and our extended drought is part of it.’
California has suffered an especially dry year, but then California has a Mediterranean-style climate with very dry summers which create the ideal conditions for forest fires every single year. It wasn’t a whole lot different in the 1970s when Young’s fellow singer-songwriter Albert Hammond penned the hit ‘It Never Rains in Southern California’.
It may be true that a changing climate has lengthened the dry season and increased the threat of wildfires throughout the year. But what is lost on Neil Young is that the amount of land being burned in wildfires in the US is vastly lower than it would be without the influence of humans. Wildfires are natural events, which can be triggered by lightning just as much as they can be by a carelessly discarded match. They are part of natural forest management – but their role in this has been much-reduced thanks to the success of fire services becoming much better at tackling them or preventing them in the first place. Between 2008 and 2017 an average of 6.6 million acres a year were burned in wildfires across the US. Between 1928 and 1937, before fire services got much better at tackling the fires, an average of 41.7 million acres a year were burned. That fell steadily until the decade 1978-87 when 3.0 million acres were burned, before the figure started to rise again.
Is climate change to blame for the rise in wildfires in the past 30 years? It no doubt plays a role, but there is support, too, for Donald Trump’s assertion that poor forest management is at fault. One retired forest manager told the San Francisco Chronicle that the danger has been brewing for years. He said he surveyed the forests around Paradise a decade ago and found that they had 2,000 trees per acre – compared with what he called a ‘healthy’ 60-80 trees per acre.
Part of the reason the forests have grown so thick is that they are no longer being thinned out by wildfires. Naturally, they would be cleared out by fire every so often, taking away much of the deadwood. Instead, the deadwood remains, with the result that there is more wood to fuel a fire when it does break out. To use a terrible pun, the California fires are so bad because a lack of recent wildfires, or of forest management to do the work of the wildfires for them, has created a bit of a backlog of material to be burned.
Donald Trump might have a habit of tweeting without thinking properly of the consequences, nor whether he has his facts right. But that doesn’t mean he is wrong on everything – nor that your average Trump-hating celebrity is right.
This article was originally published on The Spectator’s UK website.
Wealth is produced for the purpose of creating a monetary profit (a moral value) that will enable a man to pursue other values. The process of creating wealth is the result of a man’s reason put into practice. This process requires that man be free to pursue his rational and objective values and to make use of them as he pleases. The process I have just described is far more complex than this summary and requires pages of explanation. For anyone interested in learning how man’s mind works, I recommend the writings of Ludwig von Mises and Ayn Rand as a starting point.
There are many philosophies of life and religions that oppose individual freedom because of the religious dogma in which they are rooted. These philosophies and religions have codes of values that deny an individual’s right to his life and the pursuit of his happiness. Religions are anti-life since they proclaim rules, believed on faith, that require man to suffer, sacrifice and act irrationally in order to fulfill their dogma. The religion that strikes me the most is Islam, because it is not only a religious creed but also a political means of organizing human life through Sharia and Fiqh.
Elaborating on how Islam is anti-life would also take dozens of pages of explanations and examples. A great reference to start learning is the book “Winning the Unwinnable War: America’s Self-Crippled Response to Islamic Totalitarianism”, which gives clear examples of how Islam and the practice of Sharia and Fiqh violate individual rights and discourage man’s creativity in pursuing happiness and creating wealth. I also recommend the blog post “Islam Violates our human rights”, which briefly and clearly enumerates examples of such violations:
- Violation of Articles 23 (1) and 26 (1) of the UDHR. Article 23 (1) of the Universal Declaration of Human Rights states: Everyone has the right to work, to free choice of employment, to just and favorable conditions of work and to protection against unemployment. Article 26 (1) states: Everyone has the right to education.
But in Afghanistan, a muslim country, girls are not allowed education. Girls schools are banned and those caught running these schools, can be punished by law.
- Violation of Article 19 of the UDHR. Article 19 of the Universal Declaration of Human Rights states: Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
Maldives, Pakistan, Afghanistan and other muslim countries do not allow freedom of speech, regarding criticism on Mohammed, the prophet of Islam. This has taken the shape of a Blasphemy Law, where any person who speaks negatively about Mohammed, can be given death sentence or life imprisonment and/or fine.
An example of this is the recent death sentence given to Dr. Younus Shiekh for correctly pointing out that the Prophet Mohammed did not become Muslim until the age of 40 (which was when he received his first revelation) and that his parents were non-Muslims (as they died before Islam was proposed by the Prophet).
- Violation of Article 18 of the UDHR. Article 18 of the Universal Declaration of Human Rights states: Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.
But the Quran says that:
No religion except Islam will be accepted
Quran 3.85 : If anyone desires a religion other than Islam, never will it be accepted of him; and in the Hereafter He will be in the ranks of those who have lost.
This is also mentioned in Violation 5, where those who dont believe in Allah, will be tortured severely.
- In Iran and Afghanistan, brutal punishments are given for extra-marital sex. Stoning to death was ordered by Mohammed, and is still used in Iran. This is a very cruel, brutal punishment and its only aim is to inflict maximum pain on the individual. Muslims in Afghanistan and Iran can be flogged for consuming alcohol, for slander, or for sex outside marriage.
Flogging is ordered by the Quran:
“And those who accuse free women and bring not four witnesses, flog them with eighty stripes.” (59) For the adulterer, God says: “The adulteress and the adulterer, flog each of them with a hundred stripes.” (60)
These punishments are condemned by the International Community
Islam also orders cutting of hands and feet :
Quran 5.38 As to the thief, Male or female, cut off his or her hands: a punishment by way of example, from Allah, for their crime: and Allah is Exalted in power.
Using Rhythm to Combat Cognitive Issues that Affect Movement
Hero image courtesy of CBC.
Music and dance can be an effective treatment for individuals who suffer from a range of age-related disorders. Studies have shown that the rhythm of music can help improve speed, gait, and balance. Rhythm can also reduce the number of falls experienced by people who have movement disorders. The slowness and rigidity that go hand-in-hand with disorders such as Parkinson’s have been shown to fade, while movements become more fluid, when listening to music. Music and dance can fill a void, helping to maintain a good quality of life and overall well-being for people suffering from movement disorders. This article therefore discusses several common movement disorders and the ways rhythm and music therapy can help.
Parkinson’s Disease is a progressive nervous system disorder that affects movement. In the early stages of this disease, one’s face may begin to show little to no expression. Your arms and hands may have a slight tremor and your speech may become soft or slurred. As the disease progresses, movements will be slowed, posture and balance may be impaired, and speech and writing will show significant change. There are no known cures for Parkinson’s, but there are medications and treatments, such as music therapy, that can greatly improve the symptoms.
Huntington’s Disease is a progressive brain disorder that causes uncontrolled movement, uncontrollable emotions, and loss of cognitive abilities. Early signs and symptoms of this disorder can include irritability, depression, small involuntary movements, poor coordination, and trouble remembering new information. Involuntary jerking or twitching may develop and they will become more pronounced as the disease progresses. There is currently no cure for Huntington’s, but other treatments such as physical therapy and talk therapy may provide short-term relief.
Dystonia is a movement disorder where a person’s muscles contract uncontrollably. The contraction causes involuntary twisting which in turn results in repetitive movements and abnormal postures. Dystonia typically affects women more than men. Some early symptoms of this disease can include cramping of the foot or leg, involuntary pulling of the neck, uncontrollable blinking, and speech difficulties. Dystonia seems to be related to a problem with the basal ganglia which is an area of the brain that is responsible for initiating muscle contractions. There is no cure for Dystonia, but there are many treatment options such as music therapy that can ease the after-effects of the disease.
Recent findings in the field of neuroscience reveal that rhythm has, and will continue to have, a fundamental impact on our ability to walk, talk, and feel different emotions. Studies of auditory neural activity suggest that the mere perception of a musical beat strongly engages the motor system, including regions such as the premotor cortex, basal ganglia, and supplementary motor areas. This means that there is an intimate connection between music and the motor functions of the brain.
A steady, clear beat can help patients suffering from Parkinson’s overcome their shuffling gait. Although music cannot cure Parkinson’s, it can significantly improve a patient’s quality of life. It has also been found that people with Parkinson’s hear rhythms differently than people who do not suffer from movement disorders. Our ability to process as well as create different rhythms arose from a primal need for social cohesion. Large groups of people suffering from movement disorders have reacted well to music therapy group treatments. Rhythm is more than a fundamental feature of music. Rhythm is also a fundamental part of what makes us human.
Music makes improving the brain easier than first thought. Brain activation caused by music can translate into serious health improvements. Listening to, playing, and interacting with music has been shown to decrease production of stress-inducing cortisol as well as increase production of cells that make the immune system more effective. Music can impede viruses entering the body and offers many benefits to individuals suffering from long-term conditions such as Parkinson’s. Music therapy extends these same benefits to people suffering from movement disorders.
In conclusion, music is good for the soul, and music therapy can be a viable therapy option for anyone suffering from movement disorders.
Edited by Cara Jernigan on January 17, 2021
Edited by David Stronach and Ali Mousavi. Pp. 192, figs. 114. Philipp von Zabern, Darmstadt 2012. €39.90. ISBN 978-3-5053-4453-1 (cloth).
The employment of remotely sensed imagery for archaeological research, ranging from photographs taken from kites or planes to recent commercial satellite products, has a long history. Just prior to World War I, pioneering Middle Eastern archaeologists began taking advantage of opportunistic flights to explore archaeological landscapes that were otherwise inaccessible on the ground. In addition to providing visual access, scholars quickly recognized the value of viewing landscapes from above as a way to rapidly record and interpret archaeological tell sites and other standing monuments over large geographic areas and to explore the conditions that contribute to the survival and destruction of the features in different regions and under diverse conditions. Through the use of these geospatial data sets, archaeologists focused on regional studies at multiple scales ranging from individual settlements and relict features to regional settlement pattern and landscape studies. Today, a multitude of new satellites, with increasingly higher resolution and producing multispectral images, are widely available, while aerial photographic programs are less common, and their data sets are often difficult to access.
This book, an English translation and revision of the German Irans Erbe: In Flugbildern von Georg Gerster (Mainz 2009), stands as an important reminder of the unique value of historic aerial photographs for archaeological research. Interwoven in the chapters, but never explicitly stated, is the concept that historic aerial photographs both stand apart from and complement recent satellite imagery in several key ways. For example, the data sets presented here were acquired from April 1976 to May 1978, capturing the Iranian landscape prior to recent heavy modification by human activity. The text is highlighted with color photographs of landforms that reveal extensive historic irrigation systems, which have since been completely reworked by modern mechanized technologies, almost entirely erasing their signature on recent satellite images. Second, historic aerial photographs, when compared with more recent satellite images, can be used to trace ongoing changes to archaeological landscapes over time. The photographs in this volume cover many of the same areas and sites photographed by Schmidt and presented in his volume Flights over Iran (Chicago 1940), and they can be used to compare changes over the 30 years between photographs. Third, high-resolution aerial photographs that cover the same areas in several different seasons can provide a unique view of features that are only apparent under certain conditions. A clear example of this is provided by Gerster in the afterword (183) regarding Tall-e Maylan. Finally, with ground access to many countries in the Middle East difficult for archaeologists, aerial photographs and other remote sensing data sets continue to provide valuable visual access. The photographs in this volume extend beyond Schmidt’s previous coverage and into areas that are currently difficult for archaeologists to access.
The book is organized chronologically by archaeological time period, and chapters are authored by a collection of well-known experts in Iranian archaeology. Each chapter is composed of a short historical introduction to contextualize the time period, followed by descriptions of key sites of the period and accompanying high-resolution color photographs. It is a volume that accomplishes the daunting task of integrating artistic and scientific perspectives, making it accessible to both experts and beginning students of Iranian archaeology.
An introductory chapter by Stronach and Mousavi provides a brief history of the tools of aerial photography in Iran and places these photographs in their regional context. The authors succinctly summarize the advantages and limits of aerial photography as a tool for archaeological interpretation, and they spend much of the chapter discussing the legacy of Schmidt and his aerial flights over Iran in the late 1930s, contextualizing the photographs presented in the volume. Gerster photographed archaeological sites and standing monuments in 111 countries over 40 years; many of these images are highlighted in his volume The Past from Above (Los Angeles 2005). This book presents a subset of 114 unparalleled high-resolution color photographs of archaeological sites and landscapes of Iran.
Wilkinson (ch. 1) describes the physical geography and landscape of Iran, emphasizing the extensive remains of qanats, and gives an up-to-date description of the development and appearance of these features in Iran and beyond. The key strength of this volume is that the photographs enhance the text and provide the reader with stunning illustrations of the cultural and natural landscape. Extensive captions explain the features visible on the photographs. Since the photographs are not labeled, the reader is able to use the captions as a guide for understanding features but is not limited to the text. In other words, the reader is able to explore the photographs beyond the intended emphasis of the text.
Chapters 2–5 make up the bulk of the book and discuss chronological periods and select illustrative archaeological sites. In general, the chapter authors use the photographs not only to illustrate the historical narrative and the morphology of early sites and their locations but also to introduce basic archaeological methods. For example, in chapter 2, Mousavi and Sumner explain the methodology behind the step-trench visible on the photograph of Tepe Yahya (pl. 17). Throughout this chapter, the authors discuss the history of the site and its excavations and describe the features visible on the imagery and their interpretation. Overall, this chapter, perhaps the strongest, clearly illustrates the ways in which aerial photography can contribute to understanding both archaeological sites and landscapes. Chapter 6, by Harverson and Beazley, uses the photographs to demonstrate and interpret the spatial diversity in layout, material, and function of different buildings and other built features. The book ends with an afterword by Gerster explaining his goals, methods, and perspectives on the tools of aerial photography and the landscapes of Iran.
This book is a fundamental contribution to Middle Eastern aerial photography and presents a useful introductory text to the archaeology of Iran. However, a few minor layout problems detract from the contribution. In several chapters, mismatches between text and photographs mean the reader has to flip back and forth to find the photograph; in one example, a photograph is presented that is not mentioned in the text (92, pl. 53); and a callout to the editor in a photograph caption was left in the print (68, fig. 3.3). Despite these minor errors, the book is well produced and organized, and the color images are of such high quality that they will surely be used in future research.
Department of Anthropology
The Pennsylvania State University
University Park, Pennsylvania 16801
Book Review of Ancient Iran from the Air, edited by David Stronach and Ali Mousavi
Reviewed by Carrie Hritz
American Journal of Archaeology Vol. 118, No. 1 (January 2014)
Published online at www.ajaonline.org/online-review-book/1724
translation from Alembic Club Reprints, No. 4, "Foundations of the Molecular Theory: Comprising Papers and Extracts by John Dalton, Joseph Louis Gay-Lussac, and Amadeo Avogadro, (1808-1811)"
Reader note: The words "atom" and "molecule" did not yet have their modern meaning. By "integral molecule" Avogadro meant one molecule of a compound; by "constituent molecule" a molecule of a gaseous element; and by "elementary molecule" (or "half molecule") an atom.
M. Gay-Lussac has shown in an interesting Memoir that gases always unite in a very simple proportion by volume, and that when the result of the union is a gas, its volume also is very simply related to those of its components. But the quantitative proportions of substances in compounds seem only to depend on the relative number of molecules which combine, and on the number of composite molecules which result. It must then be admitted that very simple relations also exist between the volumes of gaseous substances and the numbers of simple or compound molecules which form them. The first hypothesis to present itself in this connection, and apparently even the only admissible one, is the supposition that the number of integral molecules in any gases is always the same for equal volumes, or always proportional to the volumes. Indeed, if we were to suppose that the number of molecules contained in a given volume were different for different gases, it would scarcely be possible to conceive that the law regulating the distance of molecules could give in all cases relations so simple as those which the facts just detailed compel us to acknowledge between the volume and the number of molecules. On the other hand, it is very well conceivable that the molecules of gases being at such a distance that their mutual attraction cannot be exercised, their varying attraction for caloric may be limited to condensing a greater or smaller quantity around them, without the atmosphere formed by this fluid having any greater extent in the one case than in the other, and, consequently, without the distance between the molecules varying; or, in other words, without the number of molecules contained in a given volume being different. 
Dalton, it is true, has proposed a hypothesis directly opposed to this, namely, that the quantity of caloric is always the same for the molecules of all bodies whatsoever in the gaseous state, and that the greater or less attraction for caloric only results in producing a greater or less condensation of this quantity around the molecules, and thus varying the distance between the molecules themselves. But in our present ignorance of the manner in which this attraction of the molecules for caloric is exerted, there is nothing to decide us a priori in favour of the one of these hypotheses rather than the other; and we should rather be inclined to adopt a neutral hypothesis, which would make the distance between the molecules and the quantities of caloric vary according to unknown laws, were it not that the hypothesis we have just proposed is based on that simplicity of relation between the volumes of gases on combination, which would appear to be otherwise inexplicable.
Setting out from this hypothesis, it is apparent that we have the means of determining very easily the relative masses of the molecules of substances obtainable in the gaseous state, and the relative number of these molecules in compounds; for the ratios of the masses of the molecules are then the same as those of the densities of the different gases at equal temperature and pressure, and the relative number of molecules in a compound is given at once by the ratio of the volumes of the gases that form it. For example, since the numbers 1.10359 and 0.07321 express the densities of the two gases oxygen and hydrogen compared to that of atmospheric air as unity, and the ratio of the two numbers consequently represents the ratio between the masses of equal volumes of these two gases, it will also represent on our hypothesis the ratio of the masses of their molecules. Thus the mass of the molecule of oxygen will be about 15 times that of the molecule of hydrogen, or more exactly, as 15.074 to 1. In the same way the mass of the molecule of nitrogen will be to that of hydrogen as 0.96913 to 0.07321, that is, as 13, or more exactly 13.238, to 1. On the other hand, since we know that the ratio of the volumes of hydrogen and oxygen in the formation of water is 2 to 1, it follows that water results from the union of each molecule of oxygen with two molecules of hydrogen. Similarly, according to the proportions by volume established by M. Gay-Lussac for the elements of ammonia, nitrous oxide, nitrous gas, and nitric acid, ammonia will result from the union of one molecule of nitrogen with three of hydrogen, nitrous oxide from one molecule of oxygen with two of nitrogen, nitrous gas from one molecule of nitrogen with one of oxygen, and nitric acid from one of nitrogen with two of oxygen.
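Avogadro's calculation can be checked directly: on his hypothesis, equal volumes contain equal numbers of molecules, so the ratio of gas densities equals the ratio of molecular masses. A short sketch (Python, using only the density figures quoted in the paragraph above; the code itself is an editorial illustration, not part of the Memoir) reproduces his numbers:

```python
# Gas densities relative to atmospheric air (= 1), as quoted in the text.
density = {"hydrogen": 0.07321, "oxygen": 1.10359, "nitrogen": 0.96913}

def mass_relative_to_hydrogen(gas):
    """On Avogadro's hypothesis, density ratios at equal temperature
    and pressure equal molecular-mass ratios."""
    return density[gas] / density["hydrogen"]

oxygen_mass = mass_relative_to_hydrogen("oxygen")      # "15.074 to 1"
nitrogen_mass = mass_relative_to_hydrogen("nitrogen")  # "13.238 to 1"
print(round(oxygen_mass, 3), round(nitrogen_mass, 3))  # 15.074 13.238
```

Both figures match the text: oxygen is "about 15 times" (more exactly 15.074) and nitrogen "as 13, or more exactly 13.238, to 1" the mass of hydrogen.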
There is a consideration which appears at first sight to be opposed to the admission of our hypothesis with respect to compound substances. It seems that a molecule composed of two or more elementary molecules should have its mass equal to the sum of the masses of those molecules; and that in particular, if in a compound one molecule of one substance unites with two or more molecules of another substance, the number of compound molecules should remain the same as the number of molecules of the first substance. Accordingly, on our hypothesis when a gas combines with two or more times its volume of another gas, the resulting compound, if gaseous, must have a volume equal to that of the first of these gases. Now, in general, this is not actually the case. For instance, the volume of water in the gaseous state is, as M. Gay-Lussac has shown, twice as great as the volume of oxygen which enters into it, or, what comes to the same thing, equal to that of the hydrogen instead of being equal to that of the oxygen. 
But a means of explaining facts of this type in conformity with our hypothesis presents itself naturally enough: we suppose, namely, that the constituent molecules of any simple gas whatever (i.e., the molecules which are at such a distance from each other that they cannot exercise their mutual action) are not formed of a solitary elementary molecule, but are made up of a certain number of these molecules united by attraction to form a single one; and further, that when molecules of another substance unite with the former to form a compound molecule, the integral molecule which should result splits up into two or more parts (or integral molecules) composed of half, quarter, &c., the number of elementary molecules going to form the constituent molecule of the first substance, combined with half, quarter, &c., the number of constituent molecules of the second substance that ought to enter into combination with one constituent molecule of the first substance (or, what comes to the same thing, combined with a number equal to this last of half-molecules, quarter-molecules, &c., of the second substance); so that the number of integral molecules of the compound becomes double, quadruple, &c., what it would have been if there had been no splitting-up, and exactly what is necessary to satisfy the volume of the resulting gas.* [Thus, for example, the integral molecule of water will be composed of a half molecule of oxygen with one molecule, or, what is the same thing, two half-molecules of hydrogen.]
On reviewing the various compound gases most generally known, I only find examples of duplication of the volume relatively to the volume of that one of the constituents which combines with one or more volumes of the other. We have already seen this for water. In the same way, we know that the volume of ammonia gas is twice that of the nitrogen which enters into it. M. Gay-Lussac has also shown that the volume of nitrous oxide is equal to that of the nitrogen which forms part of it, and consequently is twice that of the oxygen. Finally, nitrous gas, which contains equal volumes of nitrogen and oxygen, has a volume equal to the sum of the two constituent gases, that is to say, double that of each of them. Thus in all these cases there must be a division of the molecule into two; but it is possible that in other cases the division might be into four, eight, &c. The possibility of this division of compound molecules might have been conjectured a priori; for otherwise the integral molecules of bodies composed of several substances with a relatively large number of molecules, would come to have a mass excessive in comparison with the molecules of simple substances. We might therefore imagine that nature had some means of bringing them back to the order of the latter, and the facts have pointed out to us the existence of such means. Besides, there is another consideration which would seem to make us admit in some cases the division in question; for how could one otherwise conceive a real combination between two gaseous substances uniting in equal volumes without condensation, such as takes place in the formation of nitrous gas? Supposing the molecules to remain at such a distance that the mutual attraction of those of each gas could not be exercised, we cannot imagine that a new attraction could take place between the molecules of one gas and those of the other. 
But on the hypothesis of division of the molecule, it is easy to see that the combination really reduces two different molecules to one, and that there would be contraction by the whole volume of one of the gases if each compound molecule did not split up into two molecules of the same nature. M. Gay-Lussac clearly saw that, according to the facts, the diminution of volume on the combination of gases cannot represent the approximation of their elementary molecules. The division of molecules on combination explains to us how these two things may be made independent of each other.
Dalton, on arbitrary suppositions as to the most likely relative number of molecules in compounds, has endeavoured to fix ratios between the masses of the molecules of simple substances. Our hypothesis, supposing it well-founded, puts us in a position to confirm or rectify his results from precise data, and, above all, to assign the magnitude of compound molecules according to the volumes of the gaseous compounds, which depend partly on the division of molecules entirely unsuspected by this physicist.
Thus Dalton supposes [In what follows I shall make use of the exposition of Dalton's ideas given in Thomson's "System of Chemistry."] that water is formed by the union of hydrogen and oxygen, molecule to molecule. From this, and from the ratio by weight of the two components, it would follow that the mass of the molecule of oxygen would be to that of hydrogen as 7 1/2 to 1 nearly, or, according to Dalton's evaluation, as 6 to 1. This ratio on our hypothesis is, as we saw, twice as great, namely, as 15 to 1. As for the molecule of water, its mass ought to be roughly expressed by 15 + 2 = 17 (taking for unity that of hydrogen), if there were no division of the molecule into two; but on account of this division it is reduced to half, 8 1/2, or more exactly 8.537, as may also be found directly by dividing the density of aqueous vapour 0.625 (Gay-Lussac) by the density of hydrogen 0.0732. This mass only differs from 7, that assigned to it by Dalton, by the difference in the values for the composition of water; so that in this respect Dalton's result is approximately correct from the combination of two compensating errors: the error in the mass of the molecule of oxygen, and his neglect of the division of the molecule.
Dalton supposes that in nitrous gas the combination of nitrogen and oxygen is molecule to molecule; we have seen on our hypothesis that this is actually the case. Thus Dalton would have found the same molecular mass for nitrogen as we have, always supposing that of hydrogen to be unity, if he had not set out from a different value for that of oxygen, and if he had taken precisely the same value for the quantities of the elements in nitrous gas by weight. But by supposing the molecule of oxygen to be less than half what we find, he has been obliged to make that of nitrogen also equal to less than half the value we have assigned to it, viz., 5 instead of 13. As regards the molecule of nitrous gas itself, his neglect of the division of the molecule again makes his result approach ours; he has made it 6 + 5 = 11, whilst according to us it is about (15 + 13) / 2 = 14, or more exactly (15.074 + 13.238) / 2 = 14.156, as we also find by dividing 1.03636, the density of nitrous gas according to Gay-Lussac, by 0.07321. Dalton has likewise fixed in the same manner as the facts have given us, the relative number of molecules in nitrous oxide and in nitric acid, and in the first case the same circumstance has rectified his result for the magnitude of the molecule. He makes it 6 + 2 x 5 = 16, whilst according to our method it should be (15.074 + 2 x 13.238) / 2 = 20.775, a number which is also obtained by dividing 1.52092, Gay-Lussac's value for the density of nitrous oxide, by the density of hydrogen.
In the case of ammonia, Dalton's supposition as to the relative number of molecules in its composition is on our hypothesis entirely at fault. He supposes nitrogen and hydrogen to be united in it molecule to molecule, whereas we have seen that one molecule of nitrogen unites with three molecules of hydrogen. According to him the molecule of ammonia would be 5 + 1 = 6: according to us it should be (13 + 3) / 2 = 8, or more exactly 8.119, as may also be deduced directly from the density of ammonia gas. The division of the molecule, which does not enter into Dalton's calculations, partly corrects in this case also the error which would result from his other suppositions.
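The corrected figures in these paragraphs all follow a single rule: sum the masses of the constituent molecules, then halve the result to account for the splitting of the integral molecule. The sketch below (Python; an editorial check, not part of the Memoir, using the masses 15.074 and 13.238 derived earlier and Gay-Lussac's densities as quoted) verifies each value against the gas-density route:

```python
M_O, M_N, M_H = 15.074, 13.238, 1.0   # molecular masses relative to hydrogen
d_H = 0.07321                         # density of hydrogen (air = 1)

def compound_mass(*constituents):
    """Sum the constituent masses, then halve: Avogadro's division
    of the integral molecule into two on combination."""
    return sum(constituents) / 2

water         = compound_mass(M_O, 2 * M_H)   # text: 8.537
nitrous_gas   = compound_mass(M_O, M_N)       # text: 14.156
nitrous_oxide = compound_mass(M_O, 2 * M_N)   # text: 20.775
ammonia       = compound_mass(M_N, 3 * M_H)   # text: 8.119

# Cross-check two of them against Gay-Lussac's measured gas densities:
assert abs(nitrous_gas - 1.03636 / d_H) < 0.01
assert abs(nitrous_oxide - 1.52092 / d_H) < 0.01
```

The agreement between the two routes (constituent masses halved, versus measured density divided by the density of hydrogen) is exactly the internal consistency Avogadro appeals to.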
All the compounds we have just discussed are produced by the union of one molecule of one of the components with one or more molecules of the other. In nitrous acid we have another compound of two of the substances already spoken of, in which the terms of the ratio between the number of molecules both differ from unity. From Gay-Lussac's experiments, it appears that this acid is formed from 1 part by volume of oxygen and 3 of nitrous gas, or, what comes to the same thing, of 3 parts of nitrogen and 5 of oxygen; whence it would follow, on our hypothesis, that its molecules should be composed of 3 molecules of nitrogen and 5 of oxygen, leaving the possibility of division out of account. But this mode of combination can be referred to the preceding simpler forms by considering it as the result of the union of 1 molecule of oxygen with 3 of nitrous gas, i.e. with 3 molecules, each composed of a half-molecule of oxygen and a half-molecule of nitrogen, which thus already includes the division of some of the molecules of oxygen which enter into that of nitrous acid. Supposing there to be no other division, the mass of this last molecule would be 57.542, that of hydrogen being taken as unity, and the density of nitrous acid gas would be 4.21267, the density of air being taken as unity. But it is probable that there is at least another division into two, and consequently a reduction of the density to half: we must wait until this density has been determined by experiment. . . .
It will have been in general remarked on reading this Memoir that there are many points of agreement between our special results and those of Dalton, although we set out from a general principle, and Dalton has only been guided by considerations of detail. This agreement is an argument in favour of our hypothesis, which is at bottom merely Dalton's system furnished with a new means of precision from the connection we have found between it and the general fact established by M. Gay-Lussac. Dalton's system supposes that compounds are made in general in fixed proportions, and this is what experiment shows with regard to the more stable compounds and those most interesting to the chemist. It would appear that it is only combinations of this sort that can take place amongst gases, on account of the enormous size of the molecules which would result from ratios expressed by larger numbers, in spite of the division of the molecules, which is in all probability confined within narrow limits. We perceive that the close packing of the molecules in solids and in liquids, which only leaves between the integral molecules distances of the same order as those between the elementary molecules, can give rise to more complicated ratios, and even to combinations in all proportions; but these compounds will be so to speak of a different type from those with which we have been concerned, and this distinction may serve to reconcile M. Berthollet's ideas as to compounds with the theory of fixed proportions.
The remarkable health benefits of the olive leaf have been known since the time of the early Egyptians, although the remedy lost favor over the years. The age-old treatment is now back in the limelight. The early Egyptians used it as part of the mummification process, and scientists now understand the part it played. Olive leaves are rich in a compound known as oleuropein, which has strong antiviral, antibacterial, antiparasitic and antifungal properties. The second stage of decomposition requires bacteria, and the oil of the olive leaf did not allow the bacteria to grow.
Even teas and poultices made from the olive leaf were used through the centuries for cures. You’ve probably heard of the use of quinine to cure malaria but did you know that olive leaf extracts were found superior to quinine for the cure of not only malaria but also dengue fever. That fact was known long before 1906 but the physicians of the time found quinine administration was far easier.
Today's antibiotics are powerful, but they have no effect on viruses. The name itself, from anti ("against") and bio ("life"), also hints at the toll that antibiotics take on healthy cells in the body. Olive leaf extract shows no such negative effects, yet it fights not only bacteria but also fungi and viruses. It may even prove valuable for treating the retrovirus HIV: the extract is thought to affect the enzymes the virus needs to process its RNA, so that the virus can be neutralized.
Other viruses like herpes also succumb to the powerful effects of the olive leaf extract. While research continues on the drug, individuals report remarkable results where traditional medications were unable to help.
Early studies of olive leaf extract found that it worked well and had little or no side effects. However, there was a problem: the compound bound rapidly to blood protein, which made it ineffective. Continued testing and research produced a method of combining ingredients that overcomes this binding, so that the active ingredient in the olive leaf can work its magic.
Some of the indications for use of olive leaf extract include most diseases caused by a virus, bacterium, retrovirus or protozoan. These include conditions such as the flu, meningitis, encephalitis, herpes in all its forms, HIV, hepatitis, pneumonia, blood poisoning, dental caries, the common cold, urinary tract infections, TB, malaria and even chronic fatigue syndrome. People report that it helps allergic symptoms, gives them energy, stops painful joint ache, normalizes the heartbeat and relieves the pain of rheumatoid arthritis.
Besides its ability to fight these tiny invaders, the extract was also shown in 1962 to lower blood pressure in animals. Later, European studies found it effective in preventing muscle spasms in the intestines and heart arrhythmia. The olive tree will soon be known for more than just its fruit. With continued study of the remarkable health benefits of the olive leaf, we may find a natural alternative to traditional medicine that causes no harm in its attempt to cure.
Table 2 provides a summary list of the alternative modes of assessment discussed in the main content. Key strengths and weaknesses are detailed briefly.
Table 2 Alternative assessment techniques and their relative merits
| Method of assessment | Meaning and skill areas developed |
| --- | --- |
| Group assessment | Develops interpersonal skills and may also develop oral and research skills (if combined, for example, with a project). |
| Self-assessment | Obliges students to evaluate themselves more actively and formally, and may develop self-awareness and a better understanding of learning outcomes. |
| Peer assessment | By overseeing and evaluating other students' work, students develop a heightened awareness of what is expected of them in their learning. |
| Unseen examination | The 'traditional' approach. It tests the individual knowledge base, but questions are often relatively predictable and it is difficult to distinguish surface learning from deep learning. |
| Testing skills instead of knowledge | It can be useful to test students on questions relating to material with which they have no familiarity, often by creating hypothetical scenarios. This can test true student ability and avoids problems of rote- and surface-learning. |
| Coursework essays | A relatively traditional approach that allows students to explore a topic in greater depth, but one open to plagiarism. It can also be fairly time-consuming and may detract from other areas of the module. |
| Oral examination | Ascertains students' knowledge and skills, obliges a much deeper and more extensive learning experience, and develops oral and presentational skills. |
| Projects | May develop a wide range of expertise, including research, IT and organisational skills. Marking can be difficult, so consider combining with an oral presentation. |
| Presentations | Test and develop important oral communication and IT skills, but can prove dull and unpopular with students who do not want to listen to their peers and would rather be taught by the tutor. |
| Multiple choice | Useful for self-assessment and easy to mark. Difficulties lie in designing questions and testing depth of analytical understanding. |
| Portfolio | Holds great potential for developing and demonstrating transferable skills as an ongoing process throughout the degree programme. |
| Computer-aided assessment | Computers are usually used with multiple-choice questions. Creating questions is time-consuming, but marking is very fast and accurate. The challenge is to test the depth of learning. |
| Literature reviews | Popular at later levels of degree programmes, allowing students to explore a particular topic in considerable depth. They can also develop a wide range of useful study and research skills. |
Employers are increasingly looking for the ability to work in and direct a team as a key graduate skill. Flexible work patterns and increased dependence on central IT systems are driving this agenda. Nevertheless, assessment in HE continues to relate to activities that students undertake individually. This is perhaps unsurprising, since academics tend not to have much recent direct experience of working in industry and commerce.
It is relatively easy to visualise the benefits of group projects. Apart from the obvious one (enhancement of interpersonal skills), these activities can be readily combined with other key learning objectives. Groups may, for example, prepare projects and present results orally, in the process developing research and oral skills.
The implications of group work for staff time are difficult to assess. There are potential savings in marking, as a project of, say, three individuals may be less time-consuming than three individual projects. However, much depends on how students disseminate their work, and this approach to assessment needs a high(er) level of supervision.
The major challenge in implementing assessment by group work is how to supervise individual contributions and award grades that fairly represent individual effort. In group work, individuals may have some incentive to free-ride and better students in poorly motivated groups may be discouraged. Top Tips 2 suggests some examples of group work in economics and gives hints on how to resolve the problems.
For example, consider random selection of individuals to groups. In this way, students must develop relationships with their colleagues with whom they are unfamiliar and whom they may not actually like, in the process developing essential interpersonal skills. Typically, the group task is a type of project on which students work over the duration of the module and it involves some research. As with all kinds of assessment, one has to be careful to define the tasks and expected outcomes in a way that promotes deep learning.
Groups are no less likely to engage in superficial learning than individuals, and there is always a danger that instructors focus too heavily on the dynamics of the group and the development of interpersonal skills at the expense of deep learning.
Consider requiring students to submit a short report that discusses their initial meetings, attendance, how the project will proceed and who is to take responsibility for what.
Students are likely to meet anyway, but the written report will motivate the group to organise and discuss amongst themselves the best way forward, and it gives some basis for measuring individual contributions.
As with all forms of assessment, the criteria for assessment should be explicit. Often, it is appropriate for groups to disseminate their work in the form of a group presentation to which all individuals contribute, although it may be better to request a supplementary short report, and thereafter communicate to students what is expected in a presentation.
Apart from enhancing various skills related to presentation, oral delivery can make it easier to evaluate and grade individual contributions and the depth of learning involved. Careful consideration should be given to the allocation of marks, allowing sufficient flexibility to reward individual efforts adequately. One way is to allocate individuals both a group and an individual mark, although the proportion of the final mark should weight the group performance more heavily so as to encourage a collective effort.
As with all innovations, consideration must be given to whether group work complements other core learning objectives or whether it draws scarce student and staff resources from other key teaching and learning processes. On the positive side, group work may be combined with other valuable activities, such as a project or a piece of research, and may culminate in a presentation. This may be a valuable learning exercise in its own right. On the other hand, there is a danger that group work induces students to specialise too heavily on one area or topic at the expense of other aspects of the module. Here are two ways of limiting the extent of this problem:
The resource implications may or may not involve additional staff obligations. Group work is probably not appropriate for large modules delivered by only one tutor. On the other hand, there are likely to be many scale economies resulting from large-group assessment by a team of tutors.
For example, there are innovative ways of allocating individual marks that take account of the group’s inside knowledge of the relative contributions of each individual. This can work a lot better than may be expected. The group is awarded a group mark that they must divide amongst themselves. For example, a group of four students with 240 marks may choose to share these equally – 60 marks each. Alternatively, they may allocate more marks to the strongest contributions.
It may be surprising that there is evidence to suggest that students are willing to allocate marks in a way that reflects their relative engagement in the project (even if it is to their detriment). It is useful for tutors to insist on a written report explaining the rationale for the marks that have been allocated.
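The peer-allocation mechanism in the worked example (240 marks shared by a group of four) amounts to dividing the group mark in proportion to agreed contribution weights. A minimal sketch, with hypothetical student names and weights:

```python
def allocate_group_mark(group_mark, weights):
    """Divide a single group mark among members in proportion to the
    contribution weights the group has agreed amongst themselves.
    The shares always sum back to the original group mark."""
    total = sum(weights.values())
    return {name: group_mark * w / total for name, w in weights.items()}

# Four students sharing 240 marks equally receive 60 each...
equal = allocate_group_mark(240, {"A": 1, "B": 1, "C": 1, "D": 1})
# ...whereas doubling one member's weight shifts marks towards them.
uneven = allocate_group_mark(240, {"A": 2, "B": 1, "C": 1, "D": 1})
```

Because the total is fixed, rewarding one member necessarily comes at the others' expense, which is what makes the written rationale the tutor insists on so useful.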
Some tips for allocating group-marked projects are contained in Top Tips 3.
The basic idea behind self- and peer assessment is to provide mechanisms that help students to evaluate themselves and their work more critically. An ability to assess one’s own strengths and weaknesses is an essential life-skill that facilitates personal development whether in study or in the workplace.
Readers should note, in the following suggestions, that students are not involved in final marking. There is always the danger that where the assessment does not contribute to their final mark, they may not take it as seriously as desired.
The rationale for peer assessment has been summarised by Boud (1986): ‘Students have an opportunity to observe their peers throughout the learning process and often have a more detailed knowledge of the work of others than do their teachers.’ Brown and Dove (1991) also argue that well-designed peer and self-assessment can produce the advantages listed in Box 5.
More recent research has provided some support for these arguments (Searby and Ewers, 1997; MacAlpine, 1999), and these ideas are summarised in Top Tips 4.
For example, a common approach is to provide students with a self-assessment form. This contains a series of questions and issues that encourage students to evaluate critically the quality of their work, and it should correspond closely with the criteria that the learning facilitator used when assessing the work. For example, self-assessment forms may ask the student whether the work has a clear structure; whether it has reviewed the existing literature adequately; whether references are properly recorded, and so on.
Box 6 details some general questions that might be put on an assessment form. Other questions should be specific to the particular task at hand. Asking themselves these questions and submitting substantive written answers requires students to supply a more honest appraisal that should feed back into modifying and improving their work. The completed form is not of great value in itself – it is the process that it induces that is important.
In peer assessment, coursework is usually exchanged between students who use similar forms to comment on the work of their colleagues. Lecturers may then ask for a supplementary submission that reports on how students have acted upon the comments of their peers. Note that student peer assessment should be anonymous, with assessors randomly chosen so that friendship cannot influence the process.
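The requirement that assessors be random and anonymous, with nobody reviewing their own work, can be met by shuffling the class into a random cycle and having each student assess the next one along. A sketch under that assumption (student IDs are hypothetical):

```python
import random

def assign_reviewers(students):
    """Anonymously assign each student another student's work to assess:
    shuffle the class into a random cycle, then have each student review
    the next one along, so nobody can be given their own submission."""
    order = students[:]          # copy; don't mutate the caller's list
    random.shuffle(order)
    n = len(order)
    return {order[i]: order[(i + 1) % n] for i in range(n)}

pairs = assign_reviewers(["s1", "s2", "s3", "s4", "s5"])
```

The cycle construction guarantees the no-self-review property without retry loops; only the mapping, not the shuffled order, need ever be revealed, preserving anonymity.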
There is a danger that self- and peer assessment degenerates into a superficial process, since much depends upon whether students understand the purpose and their willingness to participate. It is easy to imagine students completing self-assessment forms simply through obligation, having completed the coursework and with no intention of revising the work in light of any weaknesses they uncover. However, equally, self-assessment can be a support to students – well-thought-out forms help to clarify what is expected of students and can form a natural basis for educators’ final comments and feedback.
As regards resource costs, self-assessment is unlikely to save time. It takes time to initiate the process and design a good-quality assessment form. It also takes time to educate the students to complete it well, and to give feedback after they have completed and submitted their assessment form. Of course, it is also time-consuming for students who may already be overloaded with assessment. The benefits of the process to the educator should take the form of better student performance in the final examination.7
Many student activities that are traditionally examined through written reports or essays may alternatively be examined orally in the form of a viva. Potentially, this approach can give a much clearer idea of the depth of students’ understanding. There is no scope for plagiarism, and little scope for regurgitation of material, at least in carefully managed interviews. There are also benefits in terms of development of interpersonal skills and interview technique.
The time costs should not be severe – there is no marking, although the assessor must see each student individually and this is a logistical problem, especially for large groups. One has to think carefully about the questions asked (with different questions to prevent student collusion). Oral examination can be a risky approach, since validation by external review may be complex and there is likely to be some student resistance. Certainly, an assessor will have to write reports on each student’s performance, detailing the questions asked and the basis for assessment.
The approach is definitely worth thinking about, however, and should perhaps be tried out on a small scale as a complement to a more traditional assessment such as an exam – say, allocate 30 per cent of the final mark to the oral examination, reducing the requirements of the exam accordingly.
As another example, suppose students are told to attend a 15-minute viva in which they will be examined on the economics of the East Asian financial crisis of 1997. The lecturer has prepared a bank of questions to ask students that relate not only to students’ knowledge, but also to the process they undertook in preparing for the examination. Some questions can be narrow, to test their basic knowledge, whilst others can be broader and more searching, viz.:
If well prepared, the oral examination allows the instructor to investigate students’ knowledge, skills and commitment in a way that is often impossible in written unseen examinations. It also requires them to read and research much more extensively. On the negative side, there is the ‘stress factor’ of undergoing live interrogation.
With regard to resource costs, this approach need not involve much additional staff time, since it saves on the laborious marking of written scripts. It is of course, time-consuming to initiate, although future years should benefit as both experience and the assessment bank are built up.
One of the problems with unseen exams is that questions are so closely related to the material covered in the course and in the textbook that students tend to memorise and regurgitate without any deep understanding. An alternative approach involves testing students with questions relating to issues or material that is not familiar, but which does require the kind of approach to problem solving that is developed in the module. In this way, the assessor is testing the learning process developed in the course rather than the knowledge provided.
As an example, students are asked to answer ten questions in 40 minutes. Each question is worth 3 marks, and the assessor reserves the right to give negative scores for logically wrong answers and bonus points for excellent answers.
Here are four of the ten exemplar questions.
Imagine you are an economics adviser starting an assignment in an unfamiliar society. Please summarise your approach to the following issues in up to three points (the points do not have to be in any order of significance).
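The marking scheme above (ten questions at 3 marks each, with negative scores and bonuses permitted) is easy to total mechanically. A small sketch; the per-question marks below are a hypothetical script, not from the text:

```python
def total_score(per_question):
    """Sum the marks for a ten-question paper in which each question is
    nominally worth 3 marks, logically wrong answers may score negative
    marks and excellent answers may earn a bonus above 3."""
    assert len(per_question) == 10, "the paper has ten questions"
    return sum(per_question)

# Hypothetical marking of one script:
score = total_score([3, 2, 0, -1, 3, 4, 1, 2, 3, 0])  # -> 17
```

Allowing marks outside the 0-3 band means the notional maximum of 30 is soft, which is worth stating explicitly in the assessment criteria given to students.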
Presentations are a well-established method of assessment. They help to develop skills required in the workplace, as well as student confidence, oral skills and the use of relevant software. However, the use of presentations in higher education is often felt to be inadequate. Students may regurgitate material without properly engaging the audience, and may invest a lot of time in their own presentation at the expense of other work. For example, students seldom prepare for the topics covered by their colleagues’ presentations. Top Tips 6 contains some useful suggestions for improving the efficacy of presentations.
With projects, students are often free to choose the topic, title and methodology to be studied. Projects are useful in developing independence, organisational skills, resourcefulness and a sense of ownership over work, and may induce a deeper level of learning.
On the other hand, they may be unpopular and where the project is an option, take-up may be low. Students may believe that it involves a greater amount of work than a standard module and/or that there will be insufficient supervision. Some tips for improving the take-up rate and usefulness of project work are listed in Top Tips 7.
Consider asking students to prepare a literature review on a given topic. This develops a number of research-type skills, encouraging students to source material, use search engines and be able to assimilate large amounts of material and select the most important.
While students are often expected to review literature as a matter of course (for example, as part of an essay submission), they normally underperform, being overly reliant on key ideas presented to them in core textbooks. This approach encourages them to do it, makes plagiarism more difficult and can be quite popular, since the process of searching and understanding a wider literature makes students feel more involved.
Article reviews involve students presenting in written or oral format a critique of one or more articles. However, this approach can be somewhat demanding for undergraduates.
As an example, we could pose the question whether the UK should join the common European currency or not. Students should submit a comprehensive literature review on this controversial topic. They are expected to source a range of material and arguments relating to this debate, and prepare a report referring to the original sources.
Literature reviews are probably easier to read than other types of written work, which therefore eases the burden of marking.
Table 3 Types of multiple-choice question
Given the disadvantages listed in Box 9, these types of question should not be used as the sole means of assessing student performance in a given module.
Computer-aided assessment is discussed in another chapter in this handbook. But for now, consider that computer software such as Question Mark can be used to format multiple-choice questions and mark and analyse the results. It may be time-consuming to set, but marking is very fast and accurate. However, as with other objective tests, it is difficult to test the depth of learning.
A portfolio is a collection of work commonly used in the assessment of vocational training (such as industrial placement).
Portfolios may not be an ideal way of testing knowledge and the analytical, conceptual and problem-solving skills required in economics. Also, they do not fit readily within a modular structure. However, there is clear potential for portfolios in the development of transferable skills – a requirement of degree programmes can be the demonstration of various activities and skills, such as leadership, co-ordination and research. This could easily be embodied within a portfolio submitted at the final level.
As an ongoing means of assessment, portfolios would be a useful device for indicating the importance of transferable skills, and requiring students, rather than lecturers, to find ways of developing these skills. Traditionally, students have used extra-curricular activities to demonstrate their range of personal and interpersonal skills, which are summarised and publicised in curriculum vitae.
For much more detail on portfolios and a case study, see Baume (2001) and Coates (2000).
There are many other methods of assessment besides those discussed above – poster sessions, open-book examinations, seen examinations, profiles, single-essay examinations and various combinations of the above. For more detail, consult section 4.
One of the major principles of good assessment practice is that the criteria are clearly communicated to the students. This allows the educator to fashion better the learning process and induce desirable learning outcomes. From the point of view of the student, explicit communication of criteria is desirable as it allows them to focus on what they should be doing.
Typically, economics students are provided with a rather stylised description of the characteristics and qualities that constitute the respective grade levels (often at the beginning of their studies in the student handbook). A first-class mark is awarded for outstanding performance containing original thought; a 2.2 grade is characterised by sound understanding and presentation of key concepts but with a number of lapses in argument, and so on. The criteria are rather broad and abstract in nature, and reflect, in part, a preference for developing intellectual and analytical skills.
In this section, the ways in which alternative assessment criteria may be used to promote the development of transferable skills in ‘traditional’ student activities are discussed. An example is provided of how a piece of written coursework may be used to develop core IT skills. Students are required to submit an essay related to a broad issue or question in economics and are told beforehand that 50 per cent of the mark will be allocated on the basis of the use of IT skills.
The following example is adapted from Brown et al. (1994, p. 17).
Students are required to add an appendix to their submission explaining the uses made of IT. Where it is not obvious (for example, search engines or statistical packages), students must provide comprehensive evidence of their use.
The following sheet (Table 4) can be completed to form a basis for allocating marks in respect of IT use. The assessor ticks the boxes as appropriate.
Table 4 Indicator of IT use
| Indicator | None, limited or extensive | Simple or sophisticated | Appropriate or inappropriate |
| --- | --- | --- | --- |
| Layout and formatting | | | |
| Use of other software, e.g. Equation Editor | | | |
| Use of statistical and econometric packages | | | |
| Use of software for searching literature | | | |
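One way to turn the Table 4 tick-sheet into the 50 per cent IT component mentioned earlier is to attach points to each tick and scale. The category names follow the table, but the point values and the example ticks are assumptions for illustration only:

```python
# Point values per tick are assumptions; the text specifies only the
# categories, not how they translate into marks.
EXTENT = {"none": 0, "limited": 1, "extensive": 2}
SOPHISTICATION = {"simple": 0, "sophisticated": 1}
APPROPRIATENESS = {"inappropriate": 0, "appropriate": 1}

def it_mark(rows, max_mark=50):
    """rows: one (extent, sophistication, appropriateness) tuple per
    indicator in Table 4. Scales the raw tick score to max_mark, e.g.
    the 50 per cent of an essay mark allocated to IT use."""
    raw = sum(EXTENT[e] + SOPHISTICATION[s] + APPROPRIATENESS[a]
              for e, s, a in rows)
    max_raw = 4 * len(rows)  # maximum of 2 + 1 + 1 points per row
    return max_mark * raw / max_raw

mark = it_mark([
    ("extensive", "sophisticated", "appropriate"),  # layout and formatting
    ("limited",   "simple",        "appropriate"),  # other software
    ("extensive", "sophisticated", "appropriate"),  # statistical packages
    ("limited",   "simple",        "appropriate"),  # literature searching
])  # -> 37.5
```

Whatever values are chosen, publishing them alongside the table keeps the criteria explicit, in line with the principle of transparent assessment discussed below.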
KAZAKHSTAN’S EVOLVING FOREIGN POLICY
Soviet Republic to Independent Statehood
On 2 March 1991, the Kazakh Republic was a constituent member of the Soviet Union. On 2 March 1992, now an independent, sovereign state, Kazakhstan was formally accepted as a member of the United Nations. Looking back, it is hard to appreciate fully the extent of the rupture that separated these dates. During the Soviet era, Kazakhstan was almost entirely cut off from the world beyond the Soviet borders. There were no direct international flights to the Kazakh capital; there were not even direct telephone lines or postal services. All foreign relations were handled through Moscow. Few Kazakhstanis had experience of the world outside the Soviet orbit. Equally, few foreigners from the ‘far abroad’ had first-hand knowledge of Kazakhstan. Direct cooperation between the Central Asian republics was also limited, since the planning and organization of regional projects was directed from Moscow. Thus, when the Soviet Union collapsed – unexpectedly, with no transitional period – not only did the Kazakh government have very little expertise in the field of foreign affairs, but it was a virtually unknown entity in the international arena.
The first stage in the formation of external relations was the basic process of establishing the necessary physical infrastructure for the conduct of direct communications. Remarkably, this was accomplished very quickly. Within a few months of independence, there was a growing volume of international traffic to and from Kazakhstan. Airline companies such as Lufthansa, Pakistan International Airlines, Air India, Iran Air and Turkish Airlines introduced direct flights, while new satellite links facilitated telephone communications. Concurrently, there was an urgent need to create the organizational and analytical apparatus to support the development of a robust foreign policy. This, too, was achieved in record time. Ministries with responsibility for foreign affairs and foreign economic relations were formed, a diplomatic service was created and Kazakh embassies were opened in the USA, major European and Asian centres, and in the member states of the Commonwealth of Independent States (CIS). By the mid-1990s, Kazakhstan had established reciprocal trade and diplomatic links with over one hundred foreign countries.
It also acceded to the main international organizations. Membership of such bodies was a critical gauge of external recognition and acceptance. After being admitted to the United Nations in 1992, Kazakhstan joined UN funds, programmes and special agencies, such as UNDP, UNHCR, UNESCO, the International Civil Aviation Organization and International Labour Organization. Similarly, it joined the International Monetary Fund and World Bank, and applied for membership of the World Trade Organization (WTO).
Also during these first years, Kazakhstan acceded to several non-UN international governmental organizations, such as the Commonwealth of Independent States (CIS); the Organization for the Islamic Conference (OIC); the North Atlantic Cooperation Council (NACC); the NATO Partnership for Peace programme (PfP); and the Organization for Security and Cooperation in Europe (OSCE). Likewise, it joined the Asian Development Bank; the European Bank for Reconstruction and Development; and the Islamic Development Bank. Thus, within a very short period, the young state was firmly tied into major international bodies. This gave it international standing, and at the same time provided a valuable training ground for its senior civil servants, giving them the experience and the exposure to high-level negotiations that would enable them to act with confidence in the global arena.
Role of President Nazarbayev
The speed and scope of these developments was in large part due to the professional competence of Kazakhstani officials. However, it was the vision, energy and determination of President Nazarbayev that shaped the policy and gave it momentum. In particular, it was his intensive schedule of official visits to foreign countries that raised Kazakhstan’s profile internationally. A brief survey of his programme in the first five years of independence reveals the truly extraordinary degree of outreach. In 1992, he visited the USA, one of his very first trips outside the Soviet space. In February 1993, he went to Belgium, Austria and Egypt, to Thailand in August, to China and Mongolia in October and to the Netherlands in November. In 1995, the same heavy schedule was maintained, as his travels took him to France, South Korea, India, Indonesia and China. The following year he went to Iran, Malaysia, Singapore, Switzerland, Portugal, India and Australia. In 1997, he visited Turkey, Kuwait, Abu Dhabi, Bahrain, Oman, Germany and the USA (where he had meetings with President Clinton and Vice-President Gore).
Meanwhile, during this same period (1992-1997), high-ranking delegations from around the world came to Kazakhstan. They included French President Mitterrand, Iranian President Rafsanjani, Turkish President Demirel, Indonesian President Suharto, Malaysian Premier Mahathir Mohammed, Italian President Scalfaro, Chinese Premier Li Peng and Spanish Prime Minister Aznar – an impressive roll call of foreign dignitaries visiting a very recent addition to the international community. These meetings were not merely ceremonial occasions. Each visit resulted in the signature of protocols and agreements that provided formal frameworks within which to develop public and private sector cooperation between Kazakhstan and the various states. Thus, in a very real sense they served as an engine for pushing forward the country’s foreign policy objectives. This pattern of personal contacts was maintained in the ensuing years, though the programme of visits became more selective and was often linked to specific projects.
Strong bilateral relations lie at the heart of Kazakhstan’s foreign policy. The country espouses an ‘open door’ approach to the international community and has established friendly working ties with a broad range of partners around the world, based on principles of national interest and mutual respect. Inevitably, though, there is a degree of prioritization. One of Kazakhstan’s principal relationships is with the Russian Federation (hereafter ‘Russia’). This is not surprising, given that Kazakhstan shares a border of some 7,500 km with its northern neighbour, as well as having a large Slav population. The two countries have mutual interests in many areas, including trade, investment, transport, defence and security. In 1998 Kazakhstan and Russia concluded a Treaty of Eternal Friendship and Co-operation. In May 2008, newly-elected Russian President Medvedev made Kazakhstan the destination of his first official visit abroad, an acknowledgement of the country’s key role in regional affairs.
In November 2013 the two states signed an additional treaty on ‘Good Neighbourly Relations and Alliance in the 21st Century’, enhancing their strategic partnership by agreeing to co-operate on a wide range of social, security and foreign policy issues. They also pledged not to take part in any blocs or alliances directed against each other.
Border issues between the two states have been amicably resolved. In 1998, agreement was reached on the bilateral demarcation of their sectors of the Caspian seabed; in May 2002 a protocol (somewhat amended in 2006) set out the geographical co-ordinates of the modified median line. This provided the basis for the joint development and exploitation of the oilfields that straddle the Kazakhstani-Russian sectors. These include the Kurmangazy field, reckoned to be the fourth largest deposit in Kazakhstan. In 2005 the two states ratified the treaty on the demarcation of their common land border.
Kazakhstani-Russian military co-operation has also continued to deepen. Accords signed in 2006 included an agreement on the renting of four military training and testing grounds to Russia. Joint military exercises are held on the territories of both states, and the two countries are developing a single system of observation satellites; another area of joint collaboration in space is the use of the Global Navigation Satellite System (GLONASS). An issue that has caused friction at times concerns the terms and conditions for use of the Soviet-era Baikonur space centre, which is located in southern Kazakhstan but is operated by Russia. However, in June 2005, during President Putin’s visit to Kazakhstan to celebrate the 50th anniversary of the founding of the centre, it was agreed that the Russian lease would be extended until 2050. In January 2013 the two states signed an agreement on the formation of joint regional air defences, with command headquarters to be based in Almaty.
The second vital relationship for Kazakhstan is with the People’s Republic of China. Not only do they share a border of 1,500 km, but at least 1 million Kazakhs live on the Chinese side of the border, in the Xinjiang Uigur Autonomous Region. A bilateral agreement to develop ‘long-term neighbourly and stable relations’ was signed by the Kazakh and Chinese leaders in 1995, and in July 1996 China announced that it had conducted its last nuclear test – a decision that was especially welcome to the people of Kazakhstan who for decades had suffered from the fall-out of these explosions.
The final demarcation of the China-Kazakhstan border was settled in 1999. A more contentious issue has been the utilization of trans-boundary water resources. A joint commission was established in 2000 to monitor the situation, but experts are concerned that the Cherny Irtysh–Karamay canal, under construction as part of a plan to develop western China, might have a harmful effect on adjacent areas of Kazakhstan.
Significant Chinese investment in the Kazakhstani energy sector began in 1997 and has increased steadily with the construction of oil and gas pipelines. A strategic energy partnership agreement was signed in July 2005. The two states also plan a major expansion of bilateral trade and, to this end, have launched the China Gateway Project, a major transport complex that includes the construction of some 300 km of railway across southern Kazakhstan to Khorgos on the Kazakhstan-China border. In 2003, and again in 2010, China and Kazakhstan jointly hosted co-ordinated military exercises of member states of the Shanghai Cooperation Organisation.
In 2009, during President Nazarbaev’s visit to Beijing, a ‘loans-for-oil’ deal was signed, whereby China provided a loan of US $10,000m. In return, Chinese companies gained access to a number of Kazakhstani oilfields. On his visit to China in April 2013, President Nazarbaev met with the new Chinese President Xi Jinping, and with the head of the China National Petroleum Corporation; this paved the way for greater Chinese involvement in Kazakhstan’s mining sector. In September 2013, during President Xi’s visit to Kazakhstan, some 20 bilateral agreements were signed; they included China’s acquisition of shares in the giant Kashagan offshore oilfield. President Nazarbaev made a state visit to China in May 2014; one of the outcomes of the visit was a preferential loan of $1,000m. from the Export-Import Bank of China to the Development Bank of Kazakhstan for the modernization and reconstruction of the Shymkent refinery. The timing of these deals, shortly before the signing of the Eurasian Economic Treaty (see below) prompted speculation that China aimed to counter Russian influence in Kazakhstan.
Nuclear co-operation was strengthened in June 2011, when an agreement was signed on strategic interaction between Kazatomprom and the Chinese State Corporation of Nuclear Industry. In addition, Kazakhstan agreed to supply fuel pellets to China. Agreements such as these prompted Nazarbayev’s description of China as the ‘leading investment partner of Kazakhstan’. Bilateral trade in 2010 exceeded US $27,000m., and both sides aimed to increase this to $40,000m. by 2015. Nevertheless, there have been occasional strains in the relationship. In 2010 China proposed that Chinese farmers be allowed to rent 1m. ha of land in Kazakhstan for the cultivation of crops such as soya and rapeseed. President Nazarbayev seemed to favour this request, since the land was underused, but public opinion was deeply hostile, particularly as opposition parties raised the spectre of mass Chinese immigration. Consequently, the project was suspended.
Kazakhstan’s third strategic relationship is with the United States of America. In May 1992 President Nazarbayev made an official visit to Washington, DC, where he signed agreements that laid the foundations for the development of economic, technical, and cultural ties between the two countries. Over the years, despite US concerns about Kazakhstan’s sometimes faltering progress towards democracy, the relationship has remained strong. US companies are among the largest investors in Kazakhstan, particularly in the oil and gas sector. In the autumn of 2001, Kazakhstan demonstrated its support for the US-led ‘war on terror’ by opening its airspace to US military aircraft engaged in operations in the region and allowing emergency landings to be made on its territory. Subsequently, it permitted the transit of non-lethal cargoes across its territory (as part of the Northern Distribution Network) to support the NATO-ISAF mission in Afghanistan. High-level US-Kazakhstan contacts were supported by links in energy, trade and investment, education, scientific, military and technical co-operation, and a range of cultural-humanitarian initiatives. In June 2014 senior officials on both sides confirmed their intention to strengthen the comprehensive strategic partnership.
Despite its strong links with the West, Kazakhstan also has friendly relations with Iran. During the Soviet period a few thousand Kazakhs fled to Iran and subsequently settled there. When Kazakhstan became an independent state, both sides were eager to establish good, and mutually beneficial, relations. Plans to build a cross-border pipeline were not feasible while the US Government maintained a hostile attitude towards Tehran, but swap exchange agreements with Iran provided a useful additional conduit for Kazakhstani petroleum exports. The Iranian side, meanwhile, was eager to purchase ‘unlimited’ amounts of Kazakhstani steel and grain. In 2007, during President Nazarbayev’s official visit to Tehran, several important agreements were signed, including one on the construction of a railway line from Kazakhstan via Turkmenistan and Iran to the Gulf States, to form part of the North–South transport corridor. Another notable development was the joint proposal to establish a ‘fuel bank’ for nuclear energy on Kazakh territory – an idea which had originally been proposed by the International Atomic Energy Agency in 2005, and was supported by Russia and the USA. (In 2007 the US Administration had allocated US $50m. to the project, which, if realized, would provide a safe source of enriched uranium for use in Iranian nuclear power plants.) The proposal is still on the table, but has not as yet been carried forward.
Relations with Turkey began well after Kazakhstani independence, but languished in the late 1990s. In the 2000s, however, there was an upsurge of co-operation in the energy, transportation, construction and ship-building industries. President Nazarbaev’s visit to Ankara, the Turkish capital, in October 2012 boosted the relationship still further, as demonstrated by the adoption of the ‘New Synergies’ joint Action Plan for 2012–15. In 2012 Kazakhstan's investments in Turkey amounted to US $978.1m., and Turkey's investments in Kazakhstan to $859.8m. Links with Arab countries, notably Qatar, were also developing well, with co-operation in trade, investment and joint projects.
A relationship that is assuming growing importance is with India. This was given political impetus by Vice-President Hamid Ansari’s visit to Kazakhstan in 2008. Discussions focused on bilateral trade and economic co-operation, particularly in the hydrocarbons sector – a priority concern for India. Co-operation in higher education is also being promoted. In 2009 President Nazarbayev made a four-day visit to India. One of the outcomes was a memorandum of understanding on joint co-operation on nuclear energy-related projects. These were further elaborated the following year during Indian Foreign Minister Krishna’s visit to Kazakhstan. Co-operation was intensified following the successful visit of Prime Minister Modi to Kazakhstan in July 2015.
Central Asian Neighbours
In the early post-Soviet years, relations between the Central Asian states were complicated by economic rivalry and competition for foreign aid. However, Kazakhstan has taken care to build up cordial bilateral ties and is engaged in various joint projects with these states; there is also substantial Kazakh investment in Kyrgyzstan and Tajikistan. Joint bilateral investment funds have been established with both these countries, with Kazakhstan contributing the lion’s share of capital. Yet attempts to form an intra-regional Central Asian organization have been unsuccessful. In July 1994 Kazakhstan, Uzbekistan and Kyrgyzstan, later joined by Tajikistan, created an embryonic economic and defence union. This alliance, later renamed the Central Asian Economic Union, aimed to foster regional co-operation but made little real progress. In 2001 it underwent another transformation, becoming the Central Asia Co-operation Organization (CACO). Russia joined it in mid-2004. This merely hastened its demise: shortly after, CACO merged with the Eurasian Economic Community (see below) and ceased to exist as a separate entity.
It was a disappointing but not surprising outcome. In 1993, President Nazarbayev had himself commented, ‘Everyone wants to live in his own apartment, not in a communal flat. The same goes for sovereign states’. It is never easy to create multilateral organizations. Even when conditions are favourable, progress can be slow – as the history of the European Union has shown. There must be a genuine convergence of aims, as well as stability, comparable levels of development and a critical mass of human and material resources. These pre-conditions do not yet exist in Central Asia. However, the need for a forum to discuss common regional issues and to coordinate policies has not disappeared. In an encouraging show of unity, the heads of the five Central Asian states assembled in Semey (Semipalatinsk) on 8 September 2006 to sign a treaty creating a Nuclear-Weapons-Free Zone (NWFZ) in Central Asia; the treaty, negotiated over some 10 years under the aegis of the UN, came into force in March 2009. It is in the context of such joint actions that President Nazarbayev has revived the idea of a Central Asian forum. It is unlikely that this will be achieved in the near future, but in the longer term some type of cooperative partnership could well be established.
Other important foreign policy developments include Kazakhstan’s Partnership and Cooperation Agreement with the European Union, which was signed in 1995 and came into force in 1999. Links with the Middle East have also been developed, underpinned by agreements on co-operation in trade, investment and joint projects, notably with Egypt, Qatar and Syria. Trade with Turkey, after a slow start, began to pick up in the 2000s, with an emphasis on cooperation in energy, transportation, construction and ship building industries. South Korea is steadily strengthening its ties with Kazakhstan. During President Lee Myung-bak’s visit in May 2009, a long-term agreement worth US $5,000 million was concluded in the energy and technology sectors. Brazilian President Lula da Silva’s visit to Kazakhstan in June 2009, following on from President Nazarbayev’s successful trip to Brazil in 2007, marked the emergence of another up-and-coming relationship. The two countries share common positions on such issues as non-proliferation of weapons of mass destruction, and the strengthening of inter-religious and inter-ethnic tolerance. There are also opportunities for strengthening co-operation in agriculture, mining and energy.
In parallel to its policy of developing strong bilateral relations, Kazakhstan has also embraced multilateralism, particularly at the regional level. It has pursued this goal not only as an active member of existing structures, but increasingly has put forward its own initiatives to promote regional cooperation. Again, however, it has consistently maintained equilibrium by creating a network of counterpoises.
This was evident in the deft manner in which it balanced its membership of the Tehran-based Economic Cooperation Organization (ECO) with the Ankara-led Turkic Summits. Kazakhstan joined both bodies in 1992. ECO was the outcome of a series of previous alliances between Iran, Pakistan and Turkey. After several years of relative inactivity, it grasped the opportunity provided by the collapse of the Soviet Union to expand its membership to include all the Central Asian states, as well as Azerbaijan and Afghanistan. A non-political intergovernmental organization, it seeks to promote economic, technical and cultural co-operation within the region. Kazakhstan has played a leading role in ECO, heading the Organization twice (2003–2004 and 2004–2006). The Turkic Summits were also initiated in response to the Soviet disintegration. Conceived as an annual high-level event, they were intended to strengthen relations between Turkey and the ex-Soviet Turkic states. As with ECO, the emphasis was on economic co-operation, particularly in the transport and energy sectors. By 2001, however, the Turkic Summits had lost momentum and were suspended. Yet in 2006, understanding the potential of this framework, President Nazarbayev was instrumental in reviving the Summits and setting in motion a process of institutionalization. In 2010, this resulted in an agreement to establish the Cooperation Council of Turkic-Speaking Countries (CCTC), with the inaugural meeting to be held in Kazakhstan in 2011. The membership of the CCTC is almost the same as that of ECO, as are the objectives. However, by actively supporting both organizations Kazakhstan is able to play a pivotal role in managing the relationship between Tehran and Ankara.
The Eurasian Economic Community (EURASEC) developed out of an idea for a ‘Eurasian Union’ that was first proposed by President Nazarbayev in 1994. The underlying rationale was the need for closer, more effective economic integration among an inner core of CIS member states. In 1996, Kazakhstan, Kyrgyzstan, Belarus and Russia concluded an agreement on ‘The Regulation of Economic and Humanitarian Integration’; Tajikistan acceded to the treaty in 1998. This formed the basis for the Eurasian Economic Community, which was formally launched in April 2001. The longer-term goal was to establish a single economic space, underpinned by a Customs Union. The immediate task, however, was to create viable decision-making institutions. The highest policy-making organ was the Inter-State Council; President Nazarbayev was elected as its first chairman at the inaugural meeting later that year. Critics of the new organization saw it as a vehicle for reasserting Russian influence. However, the voting system was weighted to ensure that major policy decisions could only be obtained by a coalition of at least three states. In 2005 Uzbekistan joined EURASEC (coinciding with the merger of CACO and EURASEC), but withdrew three years later. Meanwhile, work proceeded on preparations for the Customs Union, overseen by Tair Mansurov, a Kazakh career diplomat who has held the post of Secretary General since October 2007. The Customs Union was inaugurated in 2010. In Astana, on 29 May 2014, Belarus, Kazakhstan and Russia signed a treaty formally establishing the Eurasian Economic Union. It entered into force on 1 January 2015, pending ratification by the respective legislatures (a process which was completed during that year).
Kazakhstan has also played a major role in the development of the Shanghai Co-operation Organization (SCO). It was one of the founder members of the summit meetings of the so-called ‘Shanghai Five’. This informal structure, which, besides Kazakhstan, comprised China, Kyrgyzstan, Russia and Tajikistan, sought to resolve border issues and to promote confidence-building. In 2001 Uzbekistan joined the group, which was simultaneously transformed into an international organization, entitled the Shanghai Co-operation Organization. One of the benefits of the SCO is the boost it has given to regional trade. Secure intra-regional supplies of energy are also a priority. In January 2007 Bolat Nurgaliev, a senior Kazakh official, succeeded the Chinese representative Zhang Deguang as Secretary-General of the SCO.
A unique feature of the SCO is the layered membership structure, which gives flexibility and greater geographic outreach. It has full Members (originally China, Kazakhstan, Kyrgyzstan, Russia, Tajikistan and Uzbekistan, and since 2015, India and Pakistan); ‘Observers’ (Mongolia, Iran and, since 2012, Afghanistan); and ‘Dialogue Partners’ (a status first accorded to Belarus and Sri Lanka in 2009, and from 2012 to Turkey). The SCO acquired observer status in the United Nations General Assembly in December 2004. Agreements on co-operation and partnership have also been concluded with other regional groupings such as ASEAN (Association of South East Asian Nations), the CIS, the CSTO and EURASEC. The growing prestige of the SCO was further emphasized in April 2010 with the signing of the Joint Declaration on Cooperation between the UN and SCO Secretariats. Kazakhstan headed the SCO in 2004–2005 and in 2010–2011.
Regional Security Organizations
Located in a volatile region, Kazakhstan made security an early priority. It was one of the founder members of the CIS Collective Security Treaty Organization (CSTO), originally established by the Tashkent Declaration of 1992, then further institutionalised in 2002. Kazakhstan’s commitment to the CSTO and to the development of military partnership with its member states has remained a fundamental part of its national security strategy. This was emphasized by the then Minister of Defence (and former Prime Minister) Akhmetov in April 2007, who stated that ‘a priority for the country is participation in the CSTO’. Kazakhstan has remained an active participant in CSTO administration, command and control structures, as well as in manoeuvres.
Kazakhstan joined the NATO ‘Partnership for Peace’ (PfP) programme in 1995. It regularly participates in various training operations and is the lead partner for NATO in the Central Asian region. Within this framework, a NATO information centre was established in Astana in 2004, and in February 2005 a major NATO forum was held in Almaty. Moreover, NATO’s special representative for Central Asian communications and co-operation is based in Kazakhstan. In January 2006 the relationship with NATO was strengthened by the adoption of an Individual Partnership Action Plan (IPAP). Joint activities include regular ‘Steppe Eagle’ military manoeuvres between ‘Kazbrig’ (Kazakhstan Brigade) and NATO member states, with the aim of promoting interoperability.
Nevertheless, despite these close ties, Kazakhstan refused to participate in NATO exercises in Georgia in May 2009. This was generally regarded as a show of solidarity with Russia and an implicit criticism of the Western stance in the 2008 Russo-Georgian conflict. A similar show of Kazakhstan’s independent policy was apparent in June 2009, when the NATO Euro-Atlantic Security Forum was held in Astana. The conference, opened by the Secretary General of NATO, was the first such meeting to be held in a former ‘Eastern bloc’ country. Yet to the surprise of many, President Nazarbayev did not attend in person. Overall, the Kazakhstani leadership appeared to downplay the event so as to emphasize that the relationship with NATO, while important, was only one of a range of relationships.
Kazakhstan is a member of yet a third security body, the SCO Regional Anti-Terrorist Structure (RATS). In 2001, the Shanghai Convention on Combating Terrorism, Separatism and Extremism was signed, thereby creating a legal framework for regional co-operation in police operations and intelligence information gathering. The SCO Anti-Terrorism Centre was inaugurated in June 2004, with headquarters in Tashkent. In July 2005, at an SCO heads of state summit meeting held in Astana, a counter-terrorism strategy was approved, together with an agreement on mutual assistance in emergency situations. The concluding declaration included a statement noting that combat operations in Afghanistan were over; therefore, in the light of the declared ‘success’ of the US-led coalition in achieving its stated objectives, SCO members jointly requested that it indicate a timetable for the withdrawal of its personnel from bases in Uzbekistan and Kyrgyzstan, since permission to use these bases had only been granted as a temporary measure. This was widely interpreted in the West as a provocative action, designed to show that the SCO would not tolerate the presence of a rival security organization in the region. However, a more sober reading of the statement suggests that it was a legitimate call for clarification of the situation. Far from attempting to establish itself as an alternative to NATO, the SCO did not even have a joint military command. The first official meeting of SCO defence ministers did not take place until April 2006. The annual ‘Peace Mission’ operations that have been conducted by member states in recent years form part of the SCO anti-terrorist campaign and are of very limited duration.
Kazakhstan has initiated two regional security projects which deserve attention. One of these is the Conference on Interaction and Confidence Building Measures in Asia (CICA). Launched by President Nazarbayev in 1992, it was envisaged as a trans-Asian counterpart to the OSCE (see below). It seeks to promote regional security by using the ‘soft power’ of cooperation, peace, confidence and friendship. After a lengthy period of gestation, the inaugural summit meeting of CICA was held in Almaty in June 2002; participants included senior representatives from Iran, Israel, India and Pakistan. The agenda focused on regional security threats. However, equally important was the opportunity to hold informal meetings in the margins of the main sessions. This provided opportunities to discuss sensitive issues in an amicable, confidential atmosphere. CICA has now been fully institutionalised, with a secretariat based in Almaty and a number of bodies dedicated to the implementation of specific activities. These include four-yearly summit meetings, also regular meetings of Foreign Ministers and Special Working Groups.
The second Kazakh initiative in this field is the triennial Congress of the Leaders of World and Traditional Religions. Inaugurated in 2003, it brings together representatives of all the major faith communities. It is another imaginative use of ‘soft power’ to strengthen security by fostering tolerance and dialogue between the different faiths. The second meeting, in 2006, took place in the newly erected ‘Peace Pyramid’ in Astana. In his opening speech President Nazarbayev emphasised the role of religious leaders in enhancing international security, pointing out that ‘Political conflicts can no longer be solved exclusively on the political level’. The congress joint statement called for dialogue to seek ‘what is common rather than what divides us’. Thus, inter-faith dialogue is integrated into a broader security-building framework. This event has become an important meeting place for religious and political leaders, as well as for a wide range of interested governmental and non-governmental organisations. In 2015, Kazakhstan hosted the Fifth Congress of World Religions; participants included the United Nations Secretary-General Ban Ki-moon.
Chairmanship of the OSCE
In 2010 Kazakhstan assumed the chairmanship of the Organization for Security and Co-operation in Europe (OSCE). The organization comprises 56 member states that together span the northern hemisphere. It concerns itself with a wide range of political, social, economic and security issues. Thus, although it is by definition a regional organization, in size and scope it is in effect an international body. Consequently, securing the chairmanship of this body represented a significant step change in the level of Kazakhstan’s influence and standing on the world stage. Kazakhstan, like several other former Soviet states, had joined the OSCE in January 1992. However, although all members were supposedly equal, there was an implicit two-tier division, whereby some states – the ‘Western bloc’ – occupied a more privileged position than CIS members.
Perhaps not surprisingly, there was strong opposition from several Western states to Kazakhstan’s candidacy. The professed reason was Kazakhstan’s poor human rights record. Yet behind the arguments and counterarguments on this issue, there was also a perceptible hint of outrage that an ex-Soviet, Eurasian, predominantly Muslim country should aspire to lead this Western-dominated organization. Nevertheless, Kazakhstan persisted in its efforts and in November 2007 succeeded in gaining a unanimous vote in favour of its chairmanship in 2010. In preparation for this, it joined the OSCE ‘Troika’ of past, present and future chairmen in January 2009, and in this capacity, chaired the OSCE Mediterranean Contact Group on Co-operation (comprising OSCE participants, also Algeria, Egypt, Israel, Jordan, Morocco and Tunisia).
The following year, shortly after Kazakhstan took over the OSCE chairmanship, it was confronted with a major crisis in neighbouring Kyrgyzstan. In early April, the Kyrgyz capital was engulfed by civil unrest. Government troops opened fire on the crowd, killing 89 people (official estimate) and injuring more than 1,500. Incumbent President Bakiev fled to the south of the country and subsequently resigned; an interim government under Roza Otunbayeva was installed. Kazakhstan, in its capacity as OSCE chairman, joined with Russia and the US to facilitate Bakiev’s evacuation to Belarus. A few weeks later there was more violence in southern Kyrgyzstan and in June, the situation escalated into a brutal conflict between the Kyrgyz and the ethnic Uzbek population. Some 100,000 refugees fled across the border to Uzbekistan and another 300,000 people were internally displaced in Kyrgyzstan. The death toll was officially set at around 400, but unofficial estimates suggested a figure of at least 2,000. Many people suffered serious injuries and the devastation of homes and infrastructure was colossal. The great majority of the victims were ethnic Uzbeks.
The OSCE singularly failed to live up to its stated objectives of conflict prevention, crisis management and post-conflict rehabilitation. This was not Kazakhstan’s fault, but a painful illustration of the gulf that lay between institutional declarations and real capabilities. In July, after the violence had begun to subside, Kazakhstan hosted an informal meeting of OSCE Foreign Ministers, during which it was decided to offer the deployment of an unarmed police force in the south of Kyrgyzstan. However, there was strong opposition to this proposal within the country, especially in the area where the police force was to be deployed. It was not till the end of December that a modest advisory police group finally began to arrive. The idea of an independent OSCE investigation into the violence also received a hostile reception. A compromise solution was proposed, whereby representatives of OSCE member states would conduct an investigation, but it would not be officially classified as an OSCE project.
Kazakhstan could not single-handedly remedy the weaknesses and shortcomings of the OSCE. It did, however, try to move beyond ‘double standards’ and internal divisions so as to focus more effectively on constructive co-operation in the face of real and urgent threats to security. In particular, it recognised the need for a new, comprehensive vision of security, encompassing the entire Euro-Atlantic and Eurasian space. It was largely in order to highlight these challenges that Kazakhstan sought to convene a summit of OSCE members – the first such meeting since the 1999 Istanbul Summit – and the summit was duly held in Astana in December 2010. The final document, the ‘Astana Commemorative Declaration towards a Security Community’, was an anodyne statement of intent. Yet although Kazakhstan did not succeed in accomplishing what it had hoped to achieve, its term of office was by no means insignificant. It brought new energy and a new perspective to the OSCE. Crucially, it had mapped out a vision of what needed to be done if the organization was to be relevant to the needs of the twenty-first century.
Organization of the Islamic Conference
Following on from its role in the OSCE, Kazakhstan assumed chairmanship of the Organization of the Islamic Conference (OIC) in mid-2011. This is in many ways an even more complex task. The OIC was established in September 1969. An international organization with a permanent delegation to the United Nations, its 57 member states have a total population of some 1.6 billion. They are predominantly drawn from Africa and Asia, but there are also several observer states and observer organizations which extend its reach to Europe. The main purpose of the OIC is to protect the interests and well-being of Muslims worldwide. This includes such aims as preserving Islamic social and economic values; promoting solidarity amongst member states; increasing cooperation in social, economic, cultural, scientific, and political fields; and upholding international peace and security. Kazakhstan, which has a predominantly Muslim population, joined the OIC in 1995. It takes the helm at a time when there are major challenges in many parts of the Islamic world. These include deep-seated systemic problems, as well as current political uprisings, violent conflicts, religious extremism and terrorism, all exacerbated by poverty, unemployment and rising food and fuel prices. There are calls for the OIC to undertake a proactive peace-making role in some of these situations, notably in Afghanistan. Whether or not the OIC will be able to make a meaningful contribution is unclear. What is clear, however, is that it will require all Kazakhstan’s diplomatic skills and experience in international negotiations to steer the OIC through this difficult period.
Kazakhstan’s foreign policy has three salient features. Firstly, it is characterised by a broad and balanced range of relationships; despite the pressures and blandishments that have been proffered from all sides, it has refused to be drawn into any exclusive ideological bloc and instead, has maintained good relations with states that have rival or even mutually hostile interests and goals. Secondly, it has followed a policy of constructive engagement, making effective use of bilateral and multilateral instruments. Thirdly, it has shown remarkable persistence in implementing policy objectives, pursuing its goals over long periods and if necessary, overcoming the apathy and even opposition of potential partners.
With regard to the planning and decision-making process, President Nazarbayev has personally made a very considerable contribution to the development of Kazakhstan’s foreign policy. His formulation of the ‘Eurasian Doctrine’, with its emphasis on East-West Dialogue, has provided the conceptual basis for a policy of constructive, peaceful engagement. It envisions a complex, integral approach to global issues, cutting across political, economic, military and cultural divisions, thereby achieving greater mutual understanding and reducing tension in the international arena.
The leadership exercised by President Nazarbayev has been crucial. It was especially important in the early years, when the newly independent state was still establishing its international profile. The fact that he was able to draw on the support of a well-educated, professional and extremely competent team of civil servants made it possible to implement his vision. Now, a more diversified foreign policy apparatus is emerging and increasingly, senior diplomats are taking a more prominent role in the international arena. Thus, there is a solid foundation on which to move forward.
The combination of these factors has made it possible for Kazakhstan to progress with unusual speed and assurance from being a fledgling independent state in 1991, to a significant regional player in 2011. It is now set to broaden its sphere of activity at the international level. The chairmanship of the OSCE in 2010 was the first step in this direction. Its role in the OIC is a continuation of this process. Kazakhstan’s proven ability to engage with many different, competing constituencies gives it a unique opportunity to play a key role in international affairs at the highest level. Its track record suggests that it could fulfil such a role with distinction.
Japanese whaling: why the hunts go on
Japan's whaling ships returned from their Antarctic hunt on Thursday with 333 minke whales on board, their entire self-allocated quota.
Of the whales killed, 103 were male and 230 female, 90% of them pregnant, said Japan's fisheries agency. It said that showed the population was in a healthy breeding state.
Australia has branded the slaughter "abhorrent", its environment minister saying Japan's scientific justification for the hunt did not exist.
What are the issues behind Japan's whaling programme, and why has compromise been so difficult?
Isn't whaling banned?
Not quite. The International Whaling Commission (IWC), which regulates the industry, agreed to a moratorium on commercial whaling from 1985. But it did allow exceptions, enough for Japan to hunt more than 20,000 whales since.
It is those exceptions to the moratorium that allow for whaling activity. They are:
- Objection or reservation
Norway simply rejects the moratorium, while Iceland whales "under reservation" to it. Both still whale commercially - 594 minke were taken by Norway and 169 (mostly fin whales) by Iceland in 2013.
- Aboriginal Subsistence
Practiced by indigenous groups in places like Greenland, Denmark and Alaska. The flexible definition allows for "cultural" subsistence so it does not have to be about nutritional necessity. Greenlanders sell whale meat to tourists, for example, and even non-indigenous groups, like the Bequians in St Vincent & the Grenadines, can whale.
- Scientific Research
Famously used by Japan, this is the exemption that has run into problems.
Wasn't Japan's whaling ruled unscientific?
Yes and no. In 2014, the United Nations' top court, the International Court of Justice (ICJ), noting a relative lack of recent discoveries, said Japan's Antarctic whaling programme, JARPA II, was insufficiently scientific to qualify. But it did not ban research whaling.
The Antarctic Southern Ocean hunt is the largest and most controversial of Japan's hunts.
Following the decision, Tokyo granted no Antarctic whaling licences for the 2014/15 winter season, though it did conduct a smaller version of its less well-known north-western Pacific Ocean whaling programme, which was not covered by the judgment.
It then created a new Antarctic whaling programme, NEWREP-A. Japan insists this meets the criteria set out by the ICJ for scientific whaling. It has cut the catch by around two-thirds, to 333 and covers a wider area. It also specified more scientific goals.
Since then one IWC expert panel said the new plan did not adequately demonstrate the need to kill whales to meet its research objectives. But the final IWC Scientific Committee meeting was split.
Is the research argument genuine?
Japan says it is trying to establish whether populations are stable enough for commercial whaling to resume.
But research is almost never mentioned by ordinary Japanese whaling supporters, who are more likely to cite tradition, sovereignty and the perceived hypocrisy of anti-whaling nations.
Prof Atsushi Ishii of Tohoku University, an expert in environmental politics, argues it is an excuse to subsidise an unprofitable but politically sensitive industry.
Japan's whaling negotiations, he says, "actually make the lifting of the moratorium more difficult," and deliberately so.
Without the implicit subsidy, he says, whaling companies "would go into bankruptcy very easily - they can't sell the whale meat".
Is whale meat popular in Japan?
The scientific whaling exemption allows for by-product, in this case dead whales, to be sold commercially. That meat, blubber, and other products, is what ends up being eaten.
Critics say they contain dangerously high levels of mercury. Nevertheless it is not a popular meat in Japan.
The whaling industry has tried to reverse perceived indifference by organising food festivals and even visiting schools.
Is compromise possible?
Alternatives to the hunts have been proposed:
- A much smaller, even more scientifically focused hunt. Whether such a programme would be economically sustainable is another question.
- "Small-type coastal whaling", similar to Greenland's. Smaller, while still appeasing those who say whaling is a cultural right. But this is rejected by opponents as a front for commercial whaling.
- The 2010 "peace plan", proposed by the IWC's then chairman, would have lifted the moratorium for 10 years in exchange for sharply lower quotas and other restrictions. But anti-whalers could not stomach accepting any commercial whaling and Japan thought the cuts were too great.
- A "whale conservation market". US researchers in 2012 borrowed an idea from pollution markets and suggested countries be allocated tradable whaling quotas, based on an agreed sustainable total.
- Return to commercial whaling. With no government subsidy, catches would have to drop significantly. Hunts far from Japan's shores would likely end entirely.
There is one potential game-changer; Prof Ishii points out that Japan's only factory whaling ship, the Nisshin Maru, will need to be replaced before long "at huge cost". This may be a cost the government is reluctant to bear. Without it, some whaling could persist, but big hunts far from Japan's shores would be impossible.
Reporting by Simeon Paterson
Despite the evident popularity of games of dice among virtually all social strata of various nations throughout several millennia and up to the XVth century, it is interesting to note the absence of any evidence of the idea of statistical correlations or of probability theory. The French humanist of the XIIIth century Richard de Furnival was said to be the author of a poem in Latin, one fragment of which contained the first known calculation of the number of possible variants at chuck-a-luck (there are 216). Earlier, in 960, Willbord the Pious invented a game which represented 56 virtues. The player of this religious game was to improve in these virtues, according to the ways in which three dice can turn out in the game regardless of order (the number of such combinations of three dice is indeed 56). However, neither Willbord nor Furnival ever tried to define the relative probabilities of separate combinations. It is considered that the Italian mathematician, physicist and astrologer Gerolamo Cardano was the first to conduct, in 1526, a mathematical analysis of dice. He applied theoretical argumentation and his own extensive gaming practice to the creation of his own theory of probability, and he advised pupils how to make bets on the basis of this theory. Galileo renewed the study of dice at the end of the XVIth century; Pascal did the same in 1654. Both did so at the urgent request of risk-taking gamblers who were vexed by disappointment and heavy losses at dice. Galileo’s calculations were exactly the same as those which modern mathematics would apply. Thus the science of probabilities finally paved its way. The theory received its major development in the course of the XVIIth century in the manuscript of Christiaan Huygens’ “De Ratiociniis in Ludo Aleae” (“Reflections Concerning Dice”).
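Both historical counts mentioned above can be checked directly by enumeration. The following sketch (purely illustrative; Python is used here only as a convenient calculator) counts the ordered outcomes of three dice, as in Furnival's chuck-a-luck calculation, and the unordered outcomes, as in Willbord's game of 56 virtues:

```python
from itertools import combinations_with_replacement, product

# Furnival's count: ordered outcomes of three six-sided dice
ordered = list(product(range(1, 7), repeat=3))
print(len(ordered))  # 216 = 6 ** 3

# Willbord's count: outcomes of three dice regardless of order
unordered = list(combinations_with_replacement(range(1, 7), 3))
print(len(unordered))  # 56
```

The second figure is the number of multisets of size 3 drawn from 6 faces, i.e. C(6 + 3 - 1, 3) = C(8, 3) = 56, which matches the 56 virtues of Willbord's game.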
The science of probability thus derives its historical origins from the basic problems of gambling games.
Before the Reformation, nearly everyone believed that every event of any kind was predetermined by the will of God or, if not by God, by some other supernatural force or definite being. Many people, perhaps even the majority, still hold to this view today; in those times such opinions were predominant everywhere.
A mathematical theory resting entirely on the opposite assertion, namely that some events can be casual (that is, governed by pure chance, uncontrollable, and occurring without any definite purpose), therefore had little chance of being published and accepted. The mathematician M. G. Candell remarked that "mankind needed, apparently, some centuries to get used to the idea of a world in which some events occur without a reason, or are determined by reasons so remote that they can be predicted with adequate accuracy only by a causeless model." The idea of a purely casual event is the foundation of the concept of the interrelation between chance and probability.
Equally likely events or outcomes have equal odds of occurring in every case. In games based on genuine randomness, every trial is completely independent: each game has the same probability of producing a given outcome as every other. In practice, probabilistic statements apply to a long succession of events, not to any single one. The "law of large numbers" expresses the fact that the accuracy of the correlations predicted by probability theory improves as the number of trials grows: relative frequencies converge toward the theoretical probabilities, even though the absolute deviation of the counts from their expected values may grow. One can accurately predict only correlations, not individual events or exact quantities.
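The law of large numbers is easy to observe in simulation. The sketch below (illustrative; the seed and roll counts are arbitrary) estimates the frequency of throwing a six:

```python
import random

def freq_of_six(n_rolls, seed=1):
    """Relative frequency of a six over n_rolls simulated throws."""
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 6) == 6 for _ in range(n_rolls))
    return hits / n_rolls

# The relative frequency settles toward 1/6 ≈ 0.1667 as the number of
# throws grows, even though the absolute count of sixes can drift
# further from n/6 in absolute terms.
for n in (100, 10_000, 100_000):
    print(n, round(freq_of_six(n), 4))
```

Running with progressively larger `n` shows the frequency tightening around 1/6, which is the "correlation" that probability theory predicts; no individual throw is predictable.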
ROCHESTER, Minn. -- A new study has found that the amount of vitamin D in newly diagnosed lymphoma patients is strongly associated with disease progression and overall survival.
"These are some of the strongest findings yet between vitamin D and cancer outcome," says the study's lead investigator, Matthew Drake, M.D., Ph.D.
The researchers' study of 374 newly diagnosed diffuse large B-cell lymphoma patients found that 50 percent had deficient vitamin D levels based on the commonly used clinical value of total serum 25(OH)D less than 25 ng/mL. Patients with deficient vitamin D levels had a 1.5-fold greater risk of disease progression and a twofold greater risk of dying, compared to patients with optimal vitamin D levels after accounting for other patient factors associated with worse outcomes.
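The reported ratios are risks in one patient group divided by risks in the other. A minimal sketch of that arithmetic, using made-up counts chosen only to reproduce a 1.5-fold ratio (not the study's actual data, which were also adjusted for other patient factors):

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk in group A divided by risk in group B.

    Written as one cross-multiplied ratio to avoid intermediate
    floating-point division."""
    return (events_a * n_b) / (n_a * events_b)

# Hypothetical example: progression in 30 of 100 vitamin-D-deficient
# patients versus 20 of 100 patients with optimal levels.
print(risk_ratio(30, 100, 20, 100))  # 1.5
```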
The study was conducted by a team of researchers from Mayo Clinic and the University of Iowa. These researchers participate in the University of Iowa/Mayo Clinic Lymphoma Specialized Program of Research Excellence (SPORE).
The findings support the growing association between vitamin D and cancer risk and outcomes, and suggest that vitamin D supplements might help even those patients already diagnosed with some forms of cancer, says Dr. Drake. "The exact roles that vitamin D might play in the initiation or progression of cancer is unknown, but we do know that the vitamin plays a role in regulation of cell growth and death, among other processes important in limiting cancer," he says.
The findings also reinforce research in other fields that suggest vitamin D is important to general health, Dr. Drake says. "It is fairly easy to maintain vitamin D levels through inexpensive daily supplements or 15 minutes in the sun three times a week in the summer, so that levels can be stored inside body fat," he says. Many physicians recommend 800-1,200 International Units (IU) daily, he adds.
Vitamin D is a steroid hormone obtained from sunlight and converted by the skin into its active form. It also can come from food (naturally or fortified as in milk) or from supplements. It is known best for its role of increasing the flow of calcium into the blood. Because of that role, vitamin D deficiency has long been known to be a major risk factor for bone loss and bone fractures, particularly in elderly people whose skin is less efficient at converting sunlight into vitamin D. But recent research has found that many people suffer from the deficiency, and investigators are actively looking at whether low vitamin D promotes poorer health in general.
Cancer researchers have discovered that vitamin D regulates a number of genes in various cancers, including prostate, colon and breast cancers. Recent studies have suggested that vitamin D deficiency may play a role in causing certain cancers as well as impacting the outcome once someone is diagnosed with cancer.
Researchers looked at vitamin D levels in lymphoma patients because of the observation, culled from U.S. mortality maps issued by the National Cancer Institute, that both incidence and mortality rates of this cancer increase the farther north a person lives in the United States, where sunlight is limited in the winter. Also, several recent reports have concluded that vitamin D deficiency is associated with poor outcomes in other cancers, including breast, colon and head and neck cancer. This is the first study to look at lymphoma outcome.
VIDEO ALERT: Additional audio and video resources, including excerpts from an interview with Dr. Matthew Drake describing the research, are available on the Mayo Clinic News Blog.
The study was funded by the National Cancer Institute and the Mayo Hematologic Malignancies Lymphoma Fund.
Other members of the Mayo research team include Ivana Micallef, M.D.; and Thomas Habermann, M.D.
To request an appointment at Mayo Clinic, please call 480-422-1490 for the Arizona campus, 904-494-6484 for the Florida campus, or 507-216-4573 for the Minnesota campus.
About Mayo Clinic
Mayo Clinic is the first and largest integrated, not-for-profit group practice in the world. Doctors from every medical specialty work together to care for patients, joined by common systems and a philosophy of "the needs of the patient come first." More than 3,700 physicians, scientists and researchers and 50,100 allied health staff work at Mayo Clinic, which has sites in Rochester, Minn.; Jacksonville, Fla.; and Scottsdale/Phoenix, Ariz., and community-based providers in more than 70 locations in southern Minnesota, western Wisconsin, and northeast Iowa. These locations treat more than half a million people each year. To obtain the latest news releases from Mayo Clinic, go to www.mayoclinic.org/news. For information about research and education, visit www.mayo.edu. MayoClinic.com (www.mayoclinic.com) is available as a resource for your health stories.
Posted by keith on May 27th, 2009
I was taking a bus into the centre of a nearby town a few months ago, and noticed that the development of a new “Park and Ride” scheme was nearing completion — so said the signs. It was being promoted as part of a “sustainable” transport policy, yet I was taking the bus all the way from my town to this one, and could just as well have caught the train instead. If I had lived a bit closer I might have considered cycling, except there are no cycle paths to speak of. This got my cynical mind pondering the logic of Park and Ride, and I quickly realised that it is simply a way of drawing into major towns people from outlying areas who would otherwise shop locally, or drive to a shopping mall because there was too much congestion in the town. Park and Ride, I concluded, exists for purely economic reasons.
Go forwards to the present day, and I find this on the Save Bathampton Meadows web site:
Park and Rides are an out-moded form of traffic management, proven to have a minimal impact on reducing congestion. As Henrietta Sherwin, Vice Chair of the South West Campaign to Protect Rural England states:
“Park and Rides were conceived in the early 1970s before transport policy had moved towards demand management and trying to restrict car traffic; they are an out of date policy and no substitute for the development of an integrated public transport network particularly with an ageing population.”
“Park and Rides were initially sold as a green transport intervention until it was discovered that they can undermine existing public transport and actually create car mileage. Should limited resources be spent to encourage car access to Bath? Park and Rides are expensive and have a considerable environmental impact but a very marginal congestion benefit.”
I agree that they were originally sold as a green transport intervention, but I am willing to bet good (or bad) money that the initial motivation was economic — more people can come into a town and spend money if you let them drive most of the way rather than encourage them to go by public transport or (obviously) use their local facilities.
I wouldn’t have been so interested in an article about the further concreting over of the countryside surrounding the historic city of Bath, England, in Monday’s Guardian, had I not taken a trip there last week.
Environmental campaigners and residents are vowing to fight controversial plans to turn historic meadows close to the river Avon in Bath into a huge car park.
Bath and North East Somerset council wants to build a park and ride for 1,400 cars on land to the east of the city, though it lies within the green belt and is bordered by an area of natural beauty and a nature reserve.
More than 500 people have written objecting to the £6m plan, claiming that it will “desecrate” Bathampton Meadows. Natural England, the independent public body dedicated to protecting the urban and rural environment, has also raised concerns.
But at a heated meeting last week councillors supported the plans, which will now be sent to Hazel Blears, the communities secretary, for her approval.
Protesters say the scheme will ruin the meadows and become an eyesore visible from miles away. They are calling for the council to come up with more radical and more sustainable solutions.
It was while walking through the maze of soulless shopping streets near the railway station, trying to dodge construction vehicles and step over temporary paving aberrations, that I realised that the new Southgate Shopping Centre was utterly superfluous. Here’s a picture of what the developers think part of it might look like when it is complete:
I particularly like the ironic bicycles dominating the left hand side of the scene, while the yawning commercial edifice lurks in the background, coaxing people in to buy more pointless crap that, even had they wanted pointless crap, people could already have bought elsewhere in Bath, or anywhere else they live for that matter. It is such a marvellous coincidence that the new bus station, which will act as the terminus for the Pointless Park and Ride schemes, just happens to be right next to the new Southgate Shopping Centre. So, as the Park and Riders alight from their multi modal journey (oh, sorry, that should read “largely car-based journey, which involved a considerable diversion from the original route, and had a bit of bus tacked onto the end”) they are immediately presented with a phenomenal shopping opportunity.
I have little doubt that the loss of meadow will happen, and it will keep happening until we lose our twin addictions to driving and shopping. Maybe if the existing Park and Rides start emptying then the scheme (and the other three to be expanded, which are also going to slice further into the countryside) will be abandoned as a loss-maker. Somehow, though, I get the feeling this will be another case of the customer always being right: even if they have been brainwashed.
Engineering, Technology, and Computer Science Building 221 ~ 260-481-4127 ~ ipfw.edu/mcet
The student learning outcomes for the degree are as follows:
- An ability to select and apply the knowledge, techniques, skills, and modern tools of the discipline to broadly defined engineering technology activities.
- Utilizing modern instruments, methods and techniques to implement construction contracts, documents, and codes.
- Evaluate materials and methods for construction projects.
- Utilize modern surveying methods for construction layout.
- Estimate material quantities.
- Estimate material costs.
- An ability to apply current knowledge and adapt to emerging applications of mathematics, science, engineering and technology.
- Utilize current industry standard equipment.
- Employ productivity software to solve problems.
- An ability to conduct, analyze, and interpret experiments, and to apply experimental results to improve processes.
- Determine forces and stresses in structural systems.
- Perform economic analyses related to design, construction, and maintenance.
- An ability to apply creativity in the design of systems, components or processes appropriate to program objectives.
- Produce design documents for construction and operations use.
- Perform standard analysis and design in one technical specialty in construction.
- Select appropriate construction materials and practices.
- An ability to function effectively on teams.
- Participate actively in team activities during and outside class.
- An ability to identify, analyze and solve technical problems.
- Apply basic concepts to the solution of hydraulic and hydrology problems.
- Apply basic concepts to the solution of geotechnics problems.
- Apply basic concepts to the solution of structures problems.
- Apply basic concepts to the solution of construction scheduling and management problems.
- Apply basic concepts to the solution of construction safety problems.
- An ability to communicate effectively.
- Demonstrate effective oral communication skills.
- Demonstrate effective written communication skills.
- A recognition of the need for, and an ability to engage in lifelong learning.
- Conduct web and library research and report findings.
- An ability to understand professional, ethical and social responsibilities in construction.
- Apply principles of construction law and ethics.
- Perform service learning.
- A respect for diversity and a knowledge of contemporary professional, societal and global issues.
- Understand societal and global issues.
- Understand issues of human diversity.
- A commitment to quality, timeliness, and continuous improvement.
- Produce work of quality and timeliness.
- Evaluate each course each semester.
To provide employers and the public of northeast Indiana with educated, technologically equipped graduates, able to serve the varied construction industries in advancing the solutions to problems facing the public and private sector.
CNET B.S. Program Objectives
- To provide education of the traditional and returning adult student for career success in the construction industry, with a special emphasis on sustainable construction.
- To develop a respect for diversity and a knowledge of contemporary professional, societal, and global issues with an understanding of professional and ethical responsibilities.
- To be responsive to the ever-changing technologies of the construction industries.
- To instill in students the desire for and ability to engage in lifelong learning.
The breadth of the curriculum will provide leadership potential in addressing problems of the region, its people, and its industries. Graduates of this program take jobs with contractors, building-materials companies, utilities, architectural firms, engineering firms, and government agencies. The construction engineering technology program does not lead to licensure as a professional engineer or registered architect.
This program is accredited by the Engineering Technology Accreditation Commission of ABET, http://www.abet.org. It provides you with problem solving skills, hands-on competency, and required state-of-the-art technical knowledge. Alumni of the department are employed in all areas of the building industry, including construction; architecture; civil engineering; land surveying; and state, county, and city governments.
To earn the B.S. with a major in construction engineering technology, you must fulfill the requirements of IPFW and of the College of Engineering, Technology, and Computer Science.
[FL] Fluorite Lens / [Super ED] Super ED (Extra-low Dispersion ) Glass / [ED] ED Glass
Lenses built with conventional optical glass have difficulties with chromatic aberration, and as a result images suffer from lower contrast, lower colour quality, and lower resolution. To counter such problems, ED glass was developed and is included in select lenses. It dramatically improves chromatic aberration at telephoto ranges, and provides superior contrast across the entire image, even at large aperture settings. Super ED glass and fluorite lens provide enhanced compensation for chromatic aberration. Fluorite is also lighter than normal optical glass, contributing to reduced overall lens weight.
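Dispersion is conventionally summarized by the Abbe number, V_d = (n_d − 1)/(n_F − n_C): the higher the value, the lower the dispersion and the weaker the chromatic aberration. A quick sketch comparing an ordinary crown glass with fluorite, using approximate published refractive indices (illustrative values, not Sony specifications):

```python
def abbe_number(n_d, n_F, n_C):
    """Abbe number V_d = (n_d - 1) / (n_F - n_C); higher means less dispersion."""
    return (n_d - 1) / (n_F - n_C)

# Approximate indices at the d, F and C spectral lines: ordinary BK7
# crown glass versus fluorite (calcium fluoride).
v_bk7 = abbe_number(1.5168, 1.5224, 1.5143)   # ≈ 64
v_caf2 = abbe_number(1.4338, 1.4370, 1.4325)  # ≈ 96
print(round(v_bk7), round(v_caf2))
```

The markedly higher Abbe number of fluorite is what makes it (and the ED/Super ED glasses engineered to approach it) so effective against chromatic aberration, while its lower refractive index also keeps weight down.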
[Aspherical] Aspherical Lens
Spherical aberration is a slight misalignment of light rays projected on the image plane by a simple spherical lens, caused by differences in refraction at different points on the lens. That misalignment can degrade image quality in large-aperture lenses. The solution is to use one or more specially shaped “aspherical” elements near the diaphragm to restore alignment at the image plane, maintaining high sharpness and contrast even at maximum aperture. Aspherical elements can also be used at other points in the optical path to reduce distortion. Well designed aspherical elements can reduce the total number of elements required, thus reducing overall lens size and weight.
[XA] XA (Extreme Aspherical) Lens
Aspherical lenses are much more difficult to manufacture than simple spherical types. New XA (extreme aspherical) lens elements achieve extremely high surface precision that is kept to within 0.01 micron by innovative manufacturing technology, for an unprecedented combination of high resolution and the most beautiful bokeh you’ve ever seen.
[AA] Advanced Aspherical Lens
Advanced Aspherical (AA) elements are an evolved variant, featuring an extremely high thickness ratio between the center and periphery. AA elements are exceedingly difficult to produce, depending on the most advanced molding technology available to consistently and precisely achieve the required shape and surface accuracy. The result is significantly improved reproduction and rendering.
[STF] Smooth Trans Focus
In a conventional lens, the amount of light collected at the periphery of the lens is roughly equal to the amount of light at the center. This results in uniformly sharp dots at points “b” and “c,” below. However, a special filter called an “apodization optical element” collects less light at the lens periphery, which results in diffusion at the edges of the dots instead. Smoother defocusing is obtained due to this optical characteristic.
Because the STF lens with the apodization optical element collects less light overall than conventional lenses, F-stops are replaced by T (transmission) numbers. In practice, the two types of values can be used interchangeably to determine exposure.
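The relationship between the two scales is simple: a T-number is the f-number divided by the square root of the lens's transmittance, so a lossless lens has T equal to F. A quick sketch (the 50% transmission figure is an invented example, not a product specification):

```python
import math

def t_number(f_number, transmittance):
    """T-number: the f-number corrected for actual light transmission
    (T = N / sqrt(t), with 0 < t <= 1)."""
    return f_number / math.sqrt(transmittance)

# A hypothetical f/1.4 design whose apodization element passes only
# half the incoming light meters like a T2 lens.
print(round(t_number(1.4, 0.5), 2))  # 1.98
print(round(t_number(2.0, 1.0), 2))  # 2.0 (a lossless lens has T = F)
```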
[Nano AR] Nano AR Coating
Original Sony Nano AR Coating technology produces a lens coating that features a precisely defined regular nano-structure that allows accurate light transmission while effectively suppressing reflections that can cause flare and ghosting. The reflection suppression characteristics of the Nano AR Coating are superior to conventional anti-reflective coatings, including coatings that use an irregular nano-structure, providing a notable improvement in clarity, contrast, and overall image quality.
[F coating] Fluorine Coating
The exposed front element of any lens can pick up water, mud, oil, fingerprints, and other contaminants that can not only compromise image quality, but in some cases even damage the lens. Sony provides a potent solution with a fluorine front-element coating that results in a greater liquid contact angle, reducing the lens’s wettability and effectively “repelling” contaminants. Any water or oil based grime that does become attached to the lens can be easily wiped away. In addition to protecting valued lenses, the fluorine coating reduces the need to worry about keeping lenses clean in the field.
ZEISS® T* Coating
The fact that lens coating technology – vapor deposition of a thin, even coating on the lens surface to reduce reflections and maximize transmission – was originally a ZEISS patent is well known. The ZEISS company also developed and proved the efficacy of multi-layer coatings for photographic lenses, and this is the technology that became the T* coating.
Until the introduction of coated lenses, the lens surface would reflect a large percentage of the incoming light, thus reducing transmission and making it difficult to use multiple of elements in lens designs. Effective coatings made it possible to design more complex optics that delivered significantly improved performance. Reduced internal reflection contributed to minimum flare and high contrast.
The ZEISS T* coating is not simply applied to any lens. The T* symbol only appears on multi-element lenses in which the required performance has been achieved throughout the entire optical path, and it is therefore a guarantee of the highest quality.
Although most of the light that falls on an optical glass transmits right through, some of it reflects at the surface of the lens to cause flare or ghost images. In order to avoid this problem, a thin layer of anti-reflective coating must be applied to the lens surface. α lenses use exclusive multi-layered coating to effectively suppress such problems over a wide spectrum of wavelengths.
[IF] Internal Focusing
Only the middle or rear groups of the optical system are moved to achieve focusing, which leaves the total length of the lens intact. Benefits include fast autofocusing and a short minimum focusing distance. Also, the filter thread at the front of the lens does not rotate, which is convenient if you’re using a polarizing filter.
[PZ] Power Zoom
Sony α mount lenses that feature power zoom offer enhanced control and expressive potential for moviemaking, with smooth, consistent zooming that is difficult to achieve manually. Details like smooth acceleration and deceleration are important too, and of course tracking is excellent throughout. All of this is made possible by a blend of mature Sony camcorder technology with state-of-the-art innovation, from optical and mechanical design to original Sony actuator technology, all brought together through exacting in-house manufacturing. Internal zoom is another beneficial feature: the length of the lens remains constant while zooming, and the barrel does not rotate, so polarizers and other position-dependent filters can be used without the need for additional support.
[SMO] Smooth Motion Optics
SMO (Smooth Motion Optics) is a Sony optical design concept for interchangeable lenses that is specifically aimed at achieving the highest possible image quality and resolution for motion images.
SMO design addresses three main issues that are critical for moviemaking:
- Focus breathing (angle of view instability while focusing) is effectively minimized by a precision internal focus mechanism.
- Small focus shifts that can occur while zooming are eliminated by a special tracking adjustment mechanism.
- Lateral movement of the optical axis while zooming is eliminated by an internal zoom mechanism that keeps the length of the lens constant at all focal lengths.
The level of precision required demands both exacting design and constant monitoring during manufacture, but the benefits for moviemaking with large aperture lenses, particularly on large format sensors, are spectacular and well worth the effort.
[IZ] Internal Zoom
A type of lens zooming mechanism. The benefit of internal zoom is that the length of the lens remains constant while zooming, and the barrel does not rotate, so polarizers and other position-dependent filters can be used without the need for additional support.
[LR MF] Linear Response MF
Linear Response MF refines controls for manual focusing operability. The focus ring features high control resolution so that user input is precisely followed when focusing manually. Linear Response MF also realizes intuitive focusing and is almost equivalent to mechanical manual focusing. The focus changes linearly in response to focus ring rotation, giving the user the control immediacy needed for fast, accurate manual focusing.
[Floating F] Floating Focusing
The floating focus mechanism achieves consistent high resolution from infinity to the closest focusing distance. This system helps to reduce all types of aberration to minimum levels and thereby maintain sharp, high-resolution rendering from infinity focusing for landscapes, for example, all the way down to close-up focusing for portraits and similar subjects.
[XD LM] XD (extreme dynamic) Linear Motor
The XD (extreme dynamic) Linear Motor has been developed to deliver higher thrust and efficiency than previous types in order to make the most of the rapidly evolving speed performance of current and future camera bodies. Linear motor design and component layout have been thoroughly revised to achieve significantly higher thrust.
[DDSSM] Direct Drive Super Sonic wave Motor
A new DDSSM system is used for precision positioning of the heavy focus group required for the full-frame format, allowing precision focusing even within the lens’s shallowest depth of field. The DDSSM drive system is also remarkably quiet, making it ideal for shooting movies where focus is constantly changing while the scene is being recorded.
[RDSSM] Ring Drive Super Sonic wave Motor
RDSSM is a piezoelectric motor that contributes to smooth and silent AF operation. The motor produces high torque at slow rotation, and provides immediate start and stop responses. It is also extremely quiet, which helps keep autofocusing silent. Lenses that feature RDSSM also include a position-sensitive detector to directly detect the amount of lens rotation, a factor that improves AF precision overall.
[LM] Linear Motor
Specially designed linear motors provide direct, contactless electromagnetic drive of the lens focus group for extremely quiet, responsive operation. The quiet operation, fast response, and precision braking provided by the contactless linear drive system are not only an advantage for still photography, but offer the type of smooth, silent operation required by moviemakers as well.
[SAM] Smooth Autofocus Motor
Rather than using the focus drive motor in the camera body, SAM lenses feature an autofocus motor built into to the lens itself that directly drives the focusing element group. Since the built-in motor directly rotates the focus mechanism, operation is significantly smoother and quieter than conventional coupled autofocus drive systems.
[STM] Stepping Motor
A stepping motor (STM) is a motor with a mechanism that divides the rotational operation into a number of steps, for controlled rotation. It rotates one step each time it receives an electrical pulse. The STM allows the lens to focus smoothly and quietly when shooting photos and movies.
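The "steps" translate directly into pulse counts. A minimal sketch using the common 1.8°-per-step figure (an assumption for illustration, not a specification of Sony's lens motors):

```python
def pulses_for_rotation(step_angle_deg, target_deg):
    """Drive pulses needed for a stepping motor with the given
    per-step angle to rotate through target_deg."""
    return round(target_deg / step_angle_deg)

print(pulses_for_rotation(1.8, 360))  # 200 pulses per full revolution
print(pulses_for_rotation(1.8, 90))   # 50 pulses for a quarter turn
```

Because position is set purely by counting pulses, a stepping motor can move a focus group in small, repeatable increments without a separate position sensor, which is part of what makes STM drives smooth and quiet.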
[FHB] Focus Hold Button
Once you’ve adjusted the focus to where you want it, pressing this button on the lens barrel will keep the lens locked to that focusing distance. The preview function can also be assigned to this button through the camera’s custom settings.
[FRL] Focus Range Limiter
This function saves you a bit of time during AF operation by setting a limit on the focusing range. In macro lenses, this limit can be on either the near or far range (as pictured). In the SAL70200G, the limit is set on far ranges only. In the SAL300F28G, focusing can be limited either to a far range or to a range that you specify yourself.
[I/A ring] Iris / Aperture Ring
The iris/aperture ring allows intuitive aperture control. It provides seamless aperture control for outstanding usability.
[I/A click] Iris / Aperture Click Switch
An iris/aperture ring provides the type of immediacy and response that professionals need for both still photography and videography. A Click ON/OFF switch allows the aperture ring click stops to be engaged or disengaged as required. Engaging the click stops provides tactile feedback that can make it easier to gauge how much the ring has been adjusted by feel and is, therefore, a good choice for still photography. When the click stops are disengaged, the aperture ring moves smoothly and quietly, providing seamless, silent control for moviemaking.
[ZRDSL] Zoom Rotation Direction Select Locking Switch
Switchable zoom ring direction: a simple mechanical operation is all that is required to switch the direction of the zoom ring to match individual user preferences.
[OSS] Optical SteadyShot
The provided Optical SteadyShot modes make it more convenient to capture sharp images in handheld shooting under various conditions. For example, Mode 2 stabilization facilitates dynamic panned shots, and Mode 3 provides a more stable viewfinder image that makes tracking and framing easier.
[OSS mode] Optical SteadyShot Mode
The provided Optical SteadyShot modes make it more convenient to capture sharp images in handheld shooting under various conditions. For example, Mode 2 stabilization facilitates dynamic panned shots, and Mode 3 provides optimum stabilization for tracking and shooting dynamic, unpredictable sports action.
[DMR] Dust and Moisture Resistant Design
The lens is designed to be dust and moisture resistant, ensuring reliable operation when shooting outdoors in challenging conditions.
[Circular] Circular Aperture
In general, if an aperture uses 7, 9 or 11 aperture blades, then the shape of the aperture becomes a 7-sided, 9-sided or 11-sided polygon as the aperture is made smaller. However, this has a certain undesirable effect in that the defocusing of point light sources appears polygonal and not circular. α lenses overcome this problem through a unique design that keeps the aperture almost perfectly circular from its wide-open setting to when it is closed by 2 stops. Smoother, more natural defocusing can be obtained as a result.
The qualified hearing professional will program the hearing aid to reduce the range of loud noises that you would normally be unable to tolerate. This is a specific noise reduction system that identifies and suppresses annoying sounds, such as rustling paper, breaking glass and clanging dishes, without affecting speech. If you think you have grown used to a loud noise, it probably has damaged your ears, and there is no treatment - no medicine, no surgery, not even a hearing aid - that completely restores your hearing once it is damaged by noise. How does the ear work? The ear has three main parts: the outer, middle, and inner ear. The problem of reducing noise in hearing aids is one of great importance, and great difficulty. The problem has been addressed in many different ways over the years; the techniques used range from relatively simple forms of filtering to advanced signal processing methods. Modern devices also use sophisticated digital signal processing to try to improve speech intelligibility and comfort for the user. Such signal processing includes feedback management, wide dynamic range compression, directionality, frequency lowering, and noise reduction, and modern hearing aids require configuration. The digital technology within hearing aids allows sounds to be separated into different frequency regions, or bands, and each region to be amplified selectively. The outcome of this specialized noise reduction circuitry is that background noise is less annoying and the user's listening comfort is improved in noisy situations.
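The band-by-band idea described above can be sketched in a few lines. This is a generic, illustrative noise-reduction gain (a Wiener-style rule applied per FFT bin), not the algorithm of any particular hearing aid; the `max_atten_db` cap and the noise-floor estimate are assumptions for the sketch:

```python
import numpy as np

def reduce_noise(frame, noise_floor, max_atten_db=12.0):
    """Apply a per-band (per FFT bin) noise-reduction gain to one frame.

    noise_floor is an estimate of the noise power in each band, e.g.
    measured during a speech pause. Bands dominated by noise are turned
    down; bands where the signal dominates are left almost untouched.
    """
    spectrum = np.fft.rfft(frame)
    power = np.abs(spectrum) ** 2
    snr = power / (noise_floor + 1e-12)        # per-band signal-to-noise ratio
    gain = snr / (1.0 + snr)                   # Wiener-style gain in [0, 1)
    gain = np.maximum(gain, 10 ** (-max_atten_db / 20))  # cap the attenuation
    return np.fft.irfft(spectrum * gain, n=len(frame))
```

Real devices add smoothing over time, overlapping windows and a running noise estimate, but the core "attenuate where noise dominates" step looks much like this.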
This is a transcript of a live expert e-seminar presented on 2/23/11. Two of the most important technologies used in modern hearing aids are directionality and noise reduction, and this discussion covers both. In this paper we address these issues through examination of contemporary, peer-reviewed, and other literature (i.e., published opinions from luminaries). Beck: "You stated (more or less) that modern hearing aid technology is so good that audiologists and dispensers should use noise reduction and..." Where the input to the hearing aid is a mixed speech and noise signal, digital NR aims to identify and suppress the noise components while preserving the speech components. When the background noise is other speech, digital NR is unlikely to result in improved speech perception. Noise reduction, digital speech enhancement, digital microphones and increased comfort are associated with improved aids; long-time users sometimes dislike their new digital hearing aids, as they can sound very different. The primary purpose of the paper is to further develop hearing aid use through intervention.
The larger issue is that when noise-induced hearing loss occurs, it is often too late: the loss is irreversible and, if significant, can be life-altering. A fifth solution, and the topic of this paper, is to use hearing aids equipped with a digital noise reduction (DNR) circuit. A very informative article on DNR hearing aids by Gus Mueller and Todd Ricketts recently appeared in The Hearing Journal. In an ideal DNR system, the hearing aid will reduce only the undesired noise. Recently in a local hearing clinic, a client's concerns were discussed: "I'm afraid I won't like them. My brother-in-law bought two hearing aids, and he keeps them in a drawer in the kitchen." The number of people dissatisfied with their hearing aids hovers.
Hearing loss due to recreational exposure to loud sounds: a review. Keywords: hearing loss, noise-induced; music; noise; recreation; noise, transportation; adolescent. Reduction in hearing occurs at frequencies between 3 and 6 kHz; efficacy figures suggest that such devices can reduce exposure to loud sound by 5-45 dB. Hearing aids [8, 9]: devices that provide more benefit for a given cost, or the same benefit for less cost, are considered to provide greater value. A feature comparison across four devices: wind reduction (yes / yes / yes / no); reverberation cancellation (no / no / yes / no); impulse noise suppression (3 steps / 1 step / yes / no); digital pinna (yes / no / yes / no). Hearing aid noise reduction: what exactly does it do, and what benefit does it deliver to hearing aid users?
Austro-Hungarian Assault Formations during World War I
By Christian Ortner
When the Austro-Hungarian army entered the war in July/August 1914, its units used the same battlefield tactics they had practised during peacetime manoeuvres. Based on the experiences of the Franco-Prussian war of 1870/71, the army's doctrine was offensive in spirit. All tactical manuals were influenced by this spirit, always presenting the assault or attack as the best solution to a tactical situation. These attacks were to be carried out mainly by the infantry, which was considered strong enough to take any objective even without the support of artillery or cavalry.
The big campaigns of summer and autumn 1914 in the northeast and southeast were conducted in exactly that way. For the single regiment or battalion this kind of warfare meant always rushing forward in the direction of estimated enemy positions. When an enemy position was identified, it was to be attacked immediately, even off the march and without preparation. This type of attack was called "rencontre". At the beginning the Austro-Hungarian (AH) forces did not realize that the Serbs, and especially the Russians, secured their main positions with strong outposts. In the fighting these outposts were usually defeated and retreated. To the Austrians this looked like victory, and the units started to follow up. When they reached the main enemy position, the Austrian units were often in total disorder and suddenly confronted with heavy machine gun and artillery fire. Now the support of their own artillery would have been needed, but the "new" objective was mostly out of range. So the artillery started to move forward and changed position at exactly the moment when the suffering attacking infantry needed shrapnel or shellfire most.
The worst thing for the individual soldier during this period was that the enemy - the Russians were often dug in from the very beginning of the war - could not be seen. The attacking forces were now pinned in front of the enemy's trenches, unable to move forward or retreat, and casualties were rising. During the winter of 1914/1915 the situation changed. Later than on the western front, both sides started to dig in systematically, put obstacles in front of the trenches and deployed artillery behind the main battle line. Normally only one trench line was built, which was of course the main objective of all offensive attempts. Where the enemy was able to break through, the defender was forced to withdraw whole sectors to prevent the collapse of the entire frontline. To avoid this problem, the Austro-Hungarian High Army Command (Armeeoberkommando - AOK), mainly influenced by experiences on the German western front, started to reorganize the way in which AH forces were deployed. About 100 metres behind the first line another was built, and at the same distance behind the second a third line was established. All three lines were connected by communication and supply trenches and together formed the so-called "first position" (1. Stellung). At a distance of two or three kilometres another position (2. Stellung), also consisting of three lines, was built. Behind the second position the main artillery was deployed. A third position was planned (mostly not fully constructed) at a distance of four to six kilometres. This kind of "linear" trench system of course changed battlefield tactics. Attacks could only be successful when the supporting artillery was able to destroy the obstacles and trenches of the first position to enable the infantry to get into the enemy position. To effect a decisive breakthrough it was necessary to take the second position as well, especially the artillery area behind it.
If not, the defender was able to lay down an artillery barrage, preventing the attacker from bringing support and reserves up to the attacking units. These suffered heavy casualties and were mostly too weak to hold the objectives they had taken. Counterattacks by the defender normally threw the attacker back to the initial positions. If not, the defender simply made the former second position his new first one; the former third became the new second.
This procedure created a need for massed artillery with plenty of ammunition, because two main targets had to be destroyed: the first and the second position. An attack had to be planned and prepared quite well, so long periods on the Russian front were rather quiet. Most of the fighting at this time consisted of low-intensity actions, with patrols trying to take enemy outposts or fighting each other in no-man's-land. The Russian army had at this time specially trained soldiers, the so-called "Jagdkommandos", formed in 1886 to perform reconnaissance at the regimental level. These commandos were very successful in executing strikes against Austrian positions, always causing casualties and serious damage to morale. To protect themselves against ambushes and sudden strikes, AH infantry and cavalry regiments (the latter no longer mounted at this time) doing service in the first position formed "Jagdkommandos" of their own, simply copying the Russian idea. There was no order by the AOK to establish such special forces, but they were raised anyway. In some areas the number of commandos rose very quickly. The engagements of these commando squads have been described very romantically in the literature, but in fact it was hard and very dangerous work.
On the western front, trench warfare had started as early as 1914. Both sides realized that, following the old infantry tactics, enemy positions could no longer be taken. German local commanders developed new tactics based on combat patrols (Stoßtrupps). These were armed with light machine guns, hand grenades and wire cutters. Their mission was to cross no-man's-land between the trenches, get into the first enemy trench and clear it in close combat. Later their mission changed and was concentrated on removing the obstacles to open a passage for the following infantry.
This meant that the soldiers serving in these combat patrols had to be trained on the one hand as infantrymen and on the other as engineers for clearing the obstacle zone. In March 1915 the first special detachment was activated, consisting of two engineer companies and a 37 mm gun section. After its commander, the detachment was called "Sturmabteilung Calsow". The experience was rather unsuccessful, because the higher commands did not know what to do with these special forces, and they were deployed as normal infantrymen. The situation changed when Hauptmann E. Rohr took over the command in June 1915. He reorganized the detachment, which now consisted of two engineer companies, a 37 mm gun section, a machine gun platoon (6 MGs), a mortar squad (4 small mortars) and a flamethrower squad. In March 1916 the detachment's general orders were fixed in the following way: only 50% of the detachment was to do service in the frontline, while the rest had to run courses to train the regular infantry in special assault tactics. The courses took place in Beuville and lasted 14 days. By and by the detachment "Rohr" grew. Another two infantry companies were incorporated, so Rohr's unit was no longer a detachment but a full battalion, named "Assault Battalion Rohr". By and by all German armies raised assault battalions to train their officers and soldiers in assault tactics and trench warfare. All together, 17 German assault battalions were activated during WW I.
The AOK had also realised that there was a strong need to intensify infantry training. Every army was to establish a special training ground where experienced soldiers could educate young recruits in the peculiarities of modern trench warfare. When the AOK noticed that the Germans had already developed such courses, the German High Command was asked whether it was possible to attend a course in Beuville. 15 officers of the AH army attended two courses in September/October 1916. The experience was rather positive, so the AOK asked again to send soldiers to Beuville. The German High Command offered to run three courses (November 1916, December 1916, January 1917) solely for members of the AH army. 120 officers and 300 NCOs were trained in Beuville. Copying the German system, they were to form the main cadre of the newly raised AH army assault battalions. By spring 1917 these army assault battalions had to train at least two assault squads ("Sturmpatrouillen") for every infantry company. The composition of these army assault battalions differed according to the resources available in each army area. Normally there were 4 infantry companies, an MG company, mortars, flamethrowers and engineer squads. The training was rather successful on the north-eastern front, because the low fighting intensity offered enough opportunities to take soldiers out of the front line and send them to the courses; in addition, the former "Jagdkommandos" simply changed their designation and were incorporated into these battalions. On the Italian front there were more problems, especially because of the heavy casualties on the Isonzo sector; regimental commanders often refused to send their best men to the hinterland just for training purposes.
The first test for this young infantry branch came during the 10th battle of the Isonzo. The experiences were mixed. Where the assault companies were split up into squads leading the infantry in a counterattack, the fighting was always successful. Where they were deployed in close formation, or where there had been no reconnaissance of the objective, the companies suffered heavy casualties. Because of this, the AOK developed general orders on how to deploy assault battalions in the future. The composition of the battalions was also fixed. In contrast with the earlier period and the German system, assault battalions were, beginning in June 1917, to be raised at the divisional level. Every infantry division was to have a divisional assault battalion consisting of as many assault companies as the division had regiments. Every cavalry division had to activate a so-called assault half-regiment, and independent brigades had to raise half-battalions. The number designation of the assault battalions, half-regiments or half-battalions was the same as that of the parent division or brigade. The companies received the number of their regiment, for instance: k.u.k. Infantry Regiment Nr. 14 - k.u.k. Assault Company Nr. 14.
Concerning the supporting elements, many problems had to be faced. In addition to the assault companies, the battalions were to include an MG company, an infantry-gun section, a mortar platoon, a flamethrower platoon and a phone squad. Because of the shortage of weapons and war material, these units were taken out of the front regiments and had to be returned immediately after the courses or missions. It can be said that until October 1917 most of the assault battalions had no support elements of their own. In that respect the big victory of the joint AH and German forces during the 12th battle of the Isonzo was decisive for the assault battalions. Much Italian war material and many weapons had been captured and were now issued to the assault troopers. During their training, the soldiers of the assault detachments had always been instructed on enemy weapons, so as to be ready to use seized enemy equipment for their own purposes after taking an objective. Now they got plenty of MGs and SMGs, which could be used immediately.
During the 12th Isonzo battle, the AH assault battalions proved their efficiency in modern trench warfare. Their elite character was similar to the pre-war role of the cavalry. Storm troopers became a symbol of offensive spirit and successful attacks, but this also led to an overestimation of their power to decide the issue on the battlefield. Until June 1918 the AH army had quite a lot of storm troopers at its disposal. First, every front company had at least two assault patrols, ready to perform reconnaissance or special missions against enemy outposts; second, there were the regimental assault companies forming the divisional battalions; and third, the well-equipped staffs of the army assault courses.
But from 1918 the system of trench construction changed. Instead of the 1st, 2nd and 3rd positions, combat zones were established. The first and second positions were merged into the "main combat zone", about 4 km deep, secured to the front by an outpost line. Bunkers, camouflaged MGs, deep obstacle areas, hidden infantry guns and mortars fortified the space between the former first and second positions. This meant that during an attack not only lines had to be taken; the infantry had to break through the whole combat zone. Despite the large number of available combat patrols, there were simply not enough of them to take all the objectives in the combat zone. This was one of the reasons why the offensive on the Piave River in June 1918 failed. After this battle, all assault units and patrols were taken out of the front and returned to their training camps. Their service was reduced to reconnaissance and small assault missions until the end of the war. After the war, the idea of specially trained and equipped assault troops was given up; it was planned to train every soldier in close combat and trench warfare, so there was no need to keep separate assault units. Like the whole AH army, the assault battalions disappeared in November 1918. But it was their skills, their training and their efficiency that moulded the infantry organization and training of the European armies of WW II.
Created and Copyright © M C Ortner 2002
It's been the stuff of sci-fi dreams for generations: escape the fragile flesh by uploading your consciousness to another medium, leaving you aware and intact within a computer, robot or a completely new body.
As far back as the mid-1700s, French philosopher Denis Diderot believed the conscious mind could be deconstructed and put back together. His ideas were based on the Enlightenment-era thinking that consciousness wasn't a disembodied soul but the product of interacting brain matter.
Since then the idea has been ripe pickings for science fiction and pop culture, from the hologram Rimmer in TV's Red Dwarf to the movie Avatar.
The idea is explored more fully in the new movie Transcendence. Computer scientist Will Caster (Johnny Depp) is fatally injured while working to digitise consciousness and transfer it to a computer network. When his wife and colleagues realise it's his only chance of survival, they do just that – with dark results.
But scientists are taking the idea of transferring the "you" that lives inside your brain seriously, and disciplines with arcane names such as plastination, brain mapping and optogenetics are making it seem more plausible.
But there's a lot more to it than installing a USB port on the back of your head and just plugging in. The problem is that while a computer works by recording and reading ones and zeroes, your brain works when cells powered by blood sugars send electrical signals to each other – two very different systems.
What's more, we don't know what the brain is actually doing with these nano-scale sparks. They somehow combine to form consciousness, but we don't know how to replicate or measure it. All we know is the brain contains an unfathomable number of surprisingly simple moving parts that somehow give rise to everything you feel, think, dream, fear and love. So if we could transport the information encoded in all those sparks and move it somewhere, it might mean moving the mind and the entire sense of self it contains.
In his 1995 book Are We Alone?, cosmologist Paul Davies suggested that if you gradually replace every tiny part of the physical brain with a vacuum tube, wire or transistor that did the same job as the bit it replaced, there should be no change to the thinking, feeling person inside – even after you'd replaced the whole thing with artificial parts.
So the only thing stopping us from extracting the conscious self might be overcoming the complexity, and research called the Human Brain Project might be the first step. Scientists at Switzerland's Ecole polytechnique federale de Lausanne are literally building a virtual brain in a computer, one neuron at a time. If it works the same way an organic brain does, albeit in software, might it be the perfect medium to house an integrated mind?
Since the pattern of activity between neurons is the only physical manifestation of consciousness in the real world, it might be as simple as recording and digitising the position, duration and order of each spark.
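As a purely illustrative sketch (no real neuroscience file format), such a record could be as simple as an ordered stream of fixed-size (neuron id, timestamp) events; the record layout below is an assumption for the example:

```python
import struct

# Hypothetical record layout: 32-bit neuron id + 64-bit timestamp (microseconds).
# Real electrophysiology formats differ; this only shows how a pattern of
# "sparks" becomes bytes a computer can store and replay.
RECORD = "<IQ"

def encode_spikes(events):
    """Pack (neuron_id, time_us) pairs into a binary stream."""
    return b"".join(struct.pack(RECORD, nid, t) for nid, t in events)

def decode_spikes(blob):
    """Recover the ordered list of (neuron_id, time_us) events."""
    size = struct.calcsize(RECORD)
    return [struct.unpack(RECORD, blob[i:i + size])
            for i in range(0, len(blob), size)]

events = [(42, 1000), (7, 1050), (42, 1900)]   # neuron 42 fires, then 7, ...
assert decode_spikes(encode_spikes(events)) == events
```

The hard part, of course, is not the storage but capturing 100 billion neurons' worth of such events, and knowing what the pattern means.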
Believe it or not, there is progress there too. Scientists at the Norwegian University of Science and Technology injected a special photo-sensitive virus into the brains of lab mice. When the neurons the virus was attached to became active they literally lit up, visible to an observer.
That means that as each part of a thought string fires, the locations and relationships of the neurons can be recorded. One incredible effect was seen in a Duke University experiment in North Carolina, where a wire linked the brains of two rats. The computer translated what one rat felt and did by reading the neural pattern, and sent the signal to the second rat, which did and felt the same thing - though in a crude and rudimentary form.
So couldn't a live stream of such neural activity be captured from a human, expressed in the binary language computers speak and then called up in a computer or robot like any other program?
Theoretically, but we're a long way off. The workflow to achieve it will take a lot of scaling up for the 100-billion-neuron human brain. Plus, a single error in the code could have a compound effect, such as forgetting your partner's name or putting your shoes in the fridge. This is what happens when the biological brain goes wrong, leading to conditions such as dementia.
Besides all this, the mind is virtually meaningless without the rich emotional and physical feedback the body provides from the world outside.
But imagine advanced engineering that can replicate everything your body can do in a robot, or give you the neurological illusion of a body through virtual reality, Matrix-style.
Endlessly transferring the mind from one high performance android body to another sounds a lot like that other long dreamed-for science-fiction condition: immortality.
The other side of life
Movies and television series have long ventured into the realms of artificial intelligence and the downloading of human consciousness. It rarely ends well.
The perfect woman turns out to be a computer operating system. And that's OK, because it's 2025, and humans have evolved. But the OS keeps evolving a whole lot faster.
Clones and drones and a downloaded Tom Cruise character. Meh.
Battlestar Galactica (2004-09)
Not the cheesy Lorne Green '70s version, this updated TV series saw humans do battle against the cybernetic race they created, the cylons. The cylons evolved ... they had a plan. It's a dark and complex exploration of what it is to be human. Oh frack.
AI: Artificial Intelligence (2001)
Haley Joel Osment is a robot boy who can display and experience love for his human parents. (He was more interesting when he could see dead people.)
The Matrix (1999)
Reality is just an artificial construct with television and playgrounds and baguettes. In truth, humanity is subdued and harvested by sentient machines, and the rebels have to eat gruel. Just don't take the red pill.
The Terminator (1984)
Skynet, a military computer system, gains self-awareness on August 29, 1997, and obliterates humanity ... almost. In 2029 it sends Arnie back in time to 1984 to finish the job by killing the mother of the future resistance leader.
Blade Runner (1982)
A dystopian future where genetically engineered "replicants" are the slave labour force in off-world colonies. Some rebel, seeking a better life, a longer life. An all-too-human pursuit.
Large, acid-bleeding creature aside, the villain in this sci-fi classic is the science officer Ash, who turns out to be an android. (That A2 model was always a bit twitchy, though.)
2001: A Space Odyssey (1968)
Dave: "Open the pod bay doors, HAL."
HAL: "I'm sorry, Dave. I'm afraid I can't do that." On a mission to Jupiter the ship's computer, HAL 9000, takes control.
In computer graphics, a computer graphics pipeline, rendering pipeline or simply graphics pipeline, is a conceptual model that describes what steps a graphics system needs to perform to render a 3D scene to a 2D screen. Once a 3D model has been created, for instance in a video game or any other 3D computer animation, the graphics pipeline is the process of turning that 3D model into what the computer displays. Because the steps required for this operation depend on the software and hardware used and the desired display characteristics, there is no universal graphics pipeline suitable for all cases. However, graphics application programming interfaces (APIs) such as Direct3D and OpenGL were created to unify similar steps and to control the graphics pipeline of a given hardware accelerator. These APIs abstract the underlying hardware and keep the programmer away from writing code to manipulate the graphics hardware accelerators (AMD/Intel/NVIDIA etc.).
The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense to the pipeline in processors: the individual steps of the pipeline run in parallel, but each is blocked until the slowest step has been completed.
The 3D pipeline usually refers to the most common form of computer 3D rendering, 3D polygon rendering, distinct from raytracing, and raycasting. In particular, 3D polygon rendering is similar to raycasting. In raycasting, a ray originates at the point where the camera resides, if that ray hits a surface, then the color and lighting of the point on the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens, the area that is in view of the camera is calculated, and then rays are created from every part of every surface in view of the camera and traced back to the camera.
A graphics pipeline can be divided into three main parts: Application, Geometry and Rasterization.
The application step is executed by the software on the main processor (CPU); it cannot itself be divided into individual steps that are executed in a pipelined manner. However, it is possible to parallelize it on multi-core processors or multi-processor systems. In the application step, changes are made to the scene as required, for example by user interaction by means of input devices or during an animation. The new scene with all its primitives, usually triangles, lines and points, is then passed on to the next step in the pipeline.
Examples of tasks that are typically done in the application step are collision detection, animation, morphing, and acceleration techniques using spatial subdivision schemes such as Quadtrees or Octrees. These are also used to reduce the amount of main memory required at a given time. The "world" of a modern computer game is far larger than what could fit into memory at once.
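As a toy illustration of the spatial-subdivision idea mentioned above, the sketch below is a minimal point quadtree that stores 2D positions and answers square range queries; the class name, node capacity and query interface are arbitrary choices for the example, not taken from any engine:

```python
# Minimal point quadtree: each node covers a square region and splits
# into four children once it holds more than CAPACITY points.

class Quadtree:
    CAPACITY = 4  # max points per node before splitting

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # region [x, x+size) x [y, y+size)
        self.points = []
        self.children = None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                          # point lies outside this node
        if self.children is None:
            if len(self.points) < self.CAPACITY:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x, self.y, h),
                         Quadtree(self.x + h, self.y, h),
                         Quadtree(self.x, self.y + h, h),
                         Quadtree(self.x + h, self.y + h, h)]
        for p in self.points:                     # push existing points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qsize):
        """Collect all points inside the axis-aligned query square."""
        out = []
        if (qx + qsize <= self.x or self.x + self.size <= qx or
                qy + qsize <= self.y or self.y + self.size <= qy):
            return out                            # no overlap with this node
        for (px, py) in self.points:
            if qx <= px < qx + qsize and qy <= py < qy + qsize:
                out.append((px, py))
        if self.children:
            for c in self.children:
                out.extend(c.query(qx, qy, qsize))
        return out
```

A query only descends into nodes whose region overlaps the query square, which is exactly how such structures let a game touch only the part of the world near the camera.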
The geometry step (the geometry pipeline), which is responsible for the majority of the operations with polygons and their vertices (the vertex pipeline), can be divided into the following five tasks. How these tasks are organized as actual parallel pipeline steps depends on the particular implementation.
A vertex (plural: vertices) is a point in the world. Many such points are joined to form surfaces. In special cases, point clouds are drawn directly, but this is still the exception.
A triangle is the most common geometric primitive of computer graphics. It is defined by its three vertices and a normal vector - the normal vector indicates the front face of the triangle and is perpendicular to the surface. The triangle may be given a color or a texture (an image "glued" onto it). Because its three vertices always lie in a single plane, a triangle is preferred over a rectangle, whose four corners need not be coplanar.
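The normal vector mentioned above can be computed from the vertices themselves, as the cross product of two edge vectors. A minimal sketch (plain Python, counter-clockwise winding assumed to define the front face):

```python
def triangle_normal(a, b, c):
    """Unit normal of triangle (a, b, c); counter-clockwise winding
    seen from the front face, so the normal points towards the viewer."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])   # edge a -> b
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])   # edge a -> c
    nx, ny, nz = (uy * vz - uz * vy,                        # cross product u x v
                  uz * vx - ux * vz,
                  ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5           # normalize
    return (nx / length, ny / length, nz / length)

# A triangle in the z = 0 plane, counter-clockwise seen from +z:
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))     # → (0.0, 0.0, 1.0)
```

Reversing the vertex order flips the normal, which is why winding order matters for back-face culling later in the pipeline.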
The World Coordinate System
The world coordinate system is the coordinate system in which the virtual world is created. This should meet a few conditions for the following mathematics to be easily applicable:
- It must be a rectangular Cartesian coordinate system in which all axes are equally scaled.
How the unit of the coordinate system is defined, is left to the developer. Whether, therefore, the unit vector of the system is to correspond in reality to one meter or an Ångström depends on the application.
- Whether a right-handed or a left-handed coordinate system is to be used may be determined by the graphic library to be used.
- Example: If we are to develop a flight simulator, we can choose the world coordinate system so that the origin is in the middle of the earth and the unit is set to one meter. In addition, to make the reference to reality easier, we define that the X axis should intersect the equator on the zero meridian, and the Z axis passes through the poles. In a right-handed system, the Y axis then runs through the 90°-East meridian (somewhere in the Indian Ocean). Now we have a coordinate system that describes every point on Earth in three-dimensional Cartesian coordinates. In this coordinate system, we now model the features of our world: mountains, valleys and oceans.
- Note: Aside from computer geometry, geographic coordinates are used for the earth, i.e. latitude and longitude, as well as altitudes above sea level. The approximate conversion - if one does not consider the fact that the earth is not an exact sphere - is simple:

  $x = (R + \mathrm{hasl}) \cdot \cos(\mathrm{lat}) \cdot \cos(\mathrm{long})$
  $y = (R + \mathrm{hasl}) \cdot \cos(\mathrm{lat}) \cdot \sin(\mathrm{long})$
  $z = (R + \mathrm{hasl}) \cdot \sin(\mathrm{lat})$

- with R = radius of the earth [6,378,137 m], lat = latitude, long = longitude, hasl = height above sea level.
- All of the following examples apply in a right-handed system. For a left-handed system the signs may need to be interchanged.
The objects contained within the scene (houses, trees, cars) are often designed in their own object coordinate system (also called model coordinate system or local coordinate system) for reasons of simpler modeling. To assign these objects to coordinates in the world coordinate system or global coordinate system of the entire scene, the object coordinates are transformed by means of translation, rotation or scaling. This is done by multiplying by the corresponding transformation matrices. In addition, several differently transformed copies can be formed from one object, for example a forest from a tree; this technique is called instancing.
- In order to place a model of an aircraft in the world, we first determine four matrices. Since we work in three-dimensional space, we need four-dimensional homogeneous matrices (4×4) for our calculations.
First, we need three rotation matrices, namely one for each of the three aircraft axes (vertical axis, transverse axis, longitudinal axis).
- Around the X axis (usually defined as the longitudinal axis in the object coordinate system): Rx(α) = [1 0 0 0; 0 cos α sin α 0; 0 -sin α cos α 0; 0 0 0 1], where semicolons separate matrix rows
- Around the Y axis (usually defined as the transverse axis in the object coordinate system): Ry(α) = [cos α 0 -sin α 0; 0 1 0 0; sin α 0 cos α 0; 0 0 0 1]
- Around the Z axis (usually defined as the vertical axis in the object coordinate system): Rz(α) = [cos α sin α 0 0; -sin α cos α 0 0; 0 0 1 0; 0 0 0 1]
We also use a translation matrix that moves the aircraft to the desired point (dx, dy, dz) in our world: T = [1 0 0 0; 0 1 0 0; 0 0 1 0; dx dy dz 1].
- Remark: The above matrices are transposed with respect to those in the article on rotation matrices. See further down for an explanation why.
Now we could calculate the position of the aircraft's vertices in world coordinates by multiplying each point successively by these four matrices. Since multiplying a matrix by a vector is quite expensive (time-consuming), one usually takes another path and first multiplies the four matrices together. Multiplying two matrices is even more expensive, but this must be done only once for the whole object. The multiplications ((v·Rx)·Ry·Rz)·T and v·(Rx·Ry·Rz·T) are equivalent. Thereafter, the resulting matrix could be applied to the vertices. In practice, however, it is still not applied to the vertices at this point; instead, the camera matrices - see below - are determined first.
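As a concrete sketch, this chaining can be reproduced in a few lines of pure Python (no graphics library; all helper names here are mine). Note that this sketch uses the column-vector convention, so the matrices are the transposes of the row-vector ones shown above, and the chain reads right to left:

```python
import math

def mat_mul(a, b):
    """Product of two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, point):
    """Apply a 4x4 matrix to a 3D point treated as a homogeneous column vector."""
    v = (*point, 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))[:3]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# Concatenate once, then reuse the result for every vertex. With column
# vectors the chain reads right to left, so "rotate, then translate" is T * R.
world = mat_mul(translate(10, 0, 0), rot_z(math.pi / 2))

# (1, 0, 0) is rotated onto the Y axis, then shifted 10 units along X:
print(transform(world, (1, 0, 0)))  # approximately (10.0, 1.0, 0.0)
```

The point of building `world` once is exactly the efficiency argument above: one matrix-matrix product replaces three extra matrix-vector products per vertex.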
- For our example from above, however, the translation has to be determined somewhat differently, since the common meaning of "up" - except at the North Pole - does not coincide with our definition of the positive Z axis, so the model must also be rotated around the center of the earth: the first step moves the origin of the model to the correct height above the earth's surface; then it is rotated by latitude and longitude.
The order in which the matrices are applied is important, because matrix multiplication is not commutative. This also applies to the three rotations, as an example demonstrates: the point (1, 0, 0) lies on the X axis; if one rotates it first by 90° around the X axis and then around the Y axis, it ends up on the Z axis (the rotation around the X axis has no effect on a point that lies on that axis). If, on the other hand, one rotates around the Y axis first and then around the X axis, the resulting point lies on the Y axis. The sequence itself is arbitrary as long as it is always the same. The sequence x, then y, then z (roll, pitch, heading) is often the most intuitive, because the final rotation then makes the compass direction coincide with the direction of the "nose".
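The non-commutativity example above can be checked directly. This pure-Python sketch (helper names are mine; column-vector convention, 3×3 matrices suffice since no translation is involved) rotates (1, 0, 0) by 90° in both orders:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def apply(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

p = (1, 0, 0)
quarter = math.pi / 2

# X first, then Y: the X rotation leaves a point on the X axis unchanged,
# and the Y rotation then carries it onto the Z axis.
a = apply(rot_y(quarter), apply(rot_x(quarter), p))

# Y first, then X: the point now ends up on the Y axis instead.
b = apply(rot_x(quarter), apply(rot_y(quarter), p))

print(a)  # on the Z axis (approximately (0, 0, -1))
print(b)  # on the Y axis (approximately (0, 1, 0))
```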
There are also two conventions for defining these matrices, depending on whether you want to work with column vectors or row vectors. Different graphics libraries have different preferences here: OpenGL prefers column vectors, DirectX row vectors. The decision determines from which side the point vectors are multiplied by the transformation matrices. For column vectors, the multiplication is performed from the right, i.e. vout = M·vin, where vout and vin are 4×1 column vectors. The concatenation of the matrices is also done from right to left, i.e. M = T·R when first rotating and then shifting.
In the case of row vectors, this works exactly the other way round. The multiplication now takes place from the left, i.e. vout = vin·M with 1×4 row vectors, and the concatenation is M = R·T when we again first rotate and then move. The matrices shown above are valid for the second case, while those for column vectors are transposed. The rule (A·B)^T = B^T·A^T applies, which for multiplication with vectors means that you can switch the multiplication order by transposing the matrix.
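A minimal pure-Python illustration of the transpose rule (helper names are mine): the same rotation gives the same result whether it is written as a matrix times a column vector, or as a row vector times the transposed matrix.

```python
def transpose(m):
    # Swap rows and columns.
    return [list(col) for col in zip(*m)]

def mat_vec(m, v):
    # Column-vector convention: result = M * v.
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

def vec_mat(v, m):
    # Row-vector convention: result = v * M.
    return [sum(v[k] * m[k][j] for k in range(len(v))) for j in range(len(m))]

# A 90-degree rotation about Z in the column-vector convention.
M = [[0, -1, 0],
     [1, 0, 0],
     [0, 0, 1]]
v = [1, 0, 0]

print(mat_vec(M, v))             # M * v -> [0, 1, 0]
print(vec_mat(v, transpose(M)))  # v * M^T, the row-vector equivalent -> [0, 1, 0]
```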
The interesting thing about this matrix chaining is that each such transformation defines a new coordinate system. This can be extended as desired. For example, the propeller of the aircraft may be a separate model, which is then placed at the aircraft's nose by a translation. This translation only needs to describe the shift from the model coordinate system to the propeller coordinate system. In order to draw the entire aircraft, the transformation matrix for the aircraft is first determined and its points transformed; then the propeller's model matrix is multiplied onto the aircraft's matrix, and the propeller points are transformed with the result.
The matrix calculated in this way is also called the world matrix. It must be determined for each object in the world before rendering. The application can introduce changes here, for example change the position of the aircraft according to the speed after each frame.
In addition to the objects, the scene also defines a virtual camera or viewer that indicates the position and direction of view from which the scene is to be rendered. To simplify later projection and clipping, the scene is transformed so that the camera is at the origin, looking along the Z axis. The resulting coordinate system is called the camera coordinate system and the transformation is called camera transformation or View Transformation.
- The view matrix is usually determined from the camera position, the target point (where the camera looks) and an "up vector" ("up" from the viewer's viewpoint). First, three auxiliary vectors are computed:
- zaxis = normal(cameraPosition - cameraTarget)
- xaxis = normal(cross(cameraUpVector, zaxis))
- yaxis = cross(zaxis, xaxis)
- With normal(v) = normalization of the vector v;
- cross(v1, v2) = cross product of v1 and v2.
- Finally, the view matrix (semicolons separate matrix rows):
- view = [xaxis.x yaxis.x zaxis.x 0; xaxis.y yaxis.y zaxis.y 0; xaxis.z yaxis.z zaxis.z 0; -dot(xaxis, cameraPosition) -dot(yaxis, cameraPosition) -dot(zaxis, cameraPosition) 1]
- with dot(v1, v2) = dot product of v1 and v2.
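A sketch of this construction in pure Python (helper names are mine; the column-vector layout used here is one common choice, and the row-vector form would be its transpose):

```python
import math

def sub(a, b):
    return tuple(a[i] - b[i] for i in range(3))

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def look_at(eye, target, up):
    # Build the three camera axes exactly as described in the text.
    zaxis = normal(sub(eye, target))
    xaxis = normal(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    # Column-vector layout: rotation rows plus the translation that moves
    # the camera position to the origin.
    return [
        [*xaxis, -dot(xaxis, eye)],
        [*yaxis, -dot(yaxis, eye)],
        [*zaxis, -dot(zaxis, eye)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    v = (*p, 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))[:3]

# A camera 5 units up the Z axis, looking at the origin:
view = look_at(eye=(0.0, 0.0, 5.0), target=(0.0, 0.0, 0.0), up=(0.0, 1.0, 0.0))
print(transform(view, (0.0, 0.0, 0.0)))  # (0.0, 0.0, -5.0): 5 units in front
```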
The 3D projection step transforms the view volume into a cube with the corner coordinates (-1, -1, -1) and (1, 1, 1); occasionally other target volumes are also used. This step is called projection, even though it transforms a volume into another volume, since the resulting Z coordinates are not stored in the image but are only used in Z-buffering in the later rasterisation step. For a perspective image, a central projection is used. To limit the number of displayed objects, two additional clipping planes are used; the view volume is therefore a truncated pyramid (frustum). Parallel or orthogonal projection is used, for example, for technical representations, because it has the advantage that all parallels in object space are also parallel in image space, and surfaces and volumes have the same size regardless of the distance from the viewer. Maps use, for example, an orthogonal projection (a so-called orthophoto), but oblique images of a landscape cannot be produced this way - although they can technically be rendered, they appear so distorted that we can make no use of them. The formula for calculating a perspective mapping matrix is (semicolons separate matrix rows):
- perspective = [w 0 0 0; 0 h 0 0; 0 0 far/(near-far) -1; 0 0 near·far/(near-far) 0]
- with h = cot(fieldOfView / 2.0) (aperture angle of the camera); w = h / aspectRatio (aspect ratio of the target image); near = smallest distance to be visible; far = largest distance to be visible.
The reasons why the smallest and the largest distance have to be given here are, on the one hand, that coordinates are divided by the distance from the camera (the perspective divide) in order to achieve the scaling of the scene - more distant objects are smaller in a perspective image than near objects - and, on the other hand, to scale the Z values into the range 0..1 for filling the Z-buffer. This buffer often has a resolution of only 16 bits, which is why the near and far values should be chosen carefully. Too large a difference between the near and far values leads to so-called Z-fighting because of the low resolution of the Z-buffer. It can also be seen from the formula that the near value cannot be 0, because this point is the focus point of the projection, and there is no image at this point.
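The following pure-Python sketch builds such a perspective matrix under one common set of assumptions - column vectors, a right-handed camera looking along -Z, and depth mapped into 0..1 - with function names of my choosing. It checks the key property of the matrix: after the perspective divide, the near plane lands at depth 0 and the far plane at depth 1.

```python
import math

def perspective(field_of_view, aspect_ratio, near, far):
    h = 1.0 / math.tan(field_of_view / 2.0)  # cot(fieldOfView / 2)
    w = h / aspect_ratio
    # Column-vector layout; z is mapped into 0..1 for the Z-buffer.
    return [
        [w, 0.0, 0.0, 0.0],
        [0.0, h, 0.0, 0.0],
        [0.0, 0.0, far / (near - far), near * far / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    v = (*p, 1.0)
    x, y, z, wc = (sum(m[i][k] * v[k] for k in range(4)) for i in range(4))
    return (x / wc, y / wc, z / wc)  # the perspective divide

m = perspective(math.radians(90), 16 / 9, near=0.1, far=100.0)
print(project(m, (0.0, 0.0, -0.1))[2])    # depth at the near plane: ~0
print(project(m, (0.0, 0.0, -100.0))[2])  # depth at the far plane:  ~1
```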
For the sake of completeness, the formula for parallel projection (orthogonal projection), with semicolons separating matrix rows:
- orthographic = [2/w 0 0 0; 0 2/h 0 0; 0 0 1/(near-far) 0; 0 0 near/(near-far) 1]
- with w = width of the target cube (dimension in units of the world coordinate system); h = w / aspectRatio (aspect ratio of the target image); near = smallest distance to be visible; far = largest distance to be visible.
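For comparison with the perspective case, here is a pure-Python sketch of the orthogonal projection under the same assumptions (column vectors, right-handed camera looking along -Z, depth into 0..1; function names are mine). No perspective divide is needed, because w stays 1:

```python
def orthographic(width, aspect_ratio, near, far):
    height = width / aspect_ratio
    # Column-vector layout; x and y are scaled linearly, z is mapped into 0..1.
    return [
        [2.0 / width, 0.0, 0.0, 0.0],
        [0.0, 2.0 / height, 0.0, 0.0],
        [0.0, 0.0, 1.0 / (near - far), near / (near - far)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(m, p):
    v = (*p, 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))[:3]

m = orthographic(width=20.0, aspect_ratio=2.0, near=1.0, far=11.0)
print(apply(m, (10.0, 0.0, -1.0)))     # right edge, near plane: (1.0, 0.0, 0.0)
print(apply(m, (0.0, 0.0, -11.0))[2])  # depth at the far plane: ~1.0
```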
For reasons of efficiency, the camera and projection matrix are usually combined into a transformation matrix so that the camera coordinate system is omitted. The resulting matrix is usually the same for a single image, while the world matrix looks different for each object. In practice, therefore, view and projection are pre-calculated so that only the world matrix has to be adapted during the display. However, more complex transformations such as vertex blending are possible. Freely programmable geometry shaders that modify the geometry can also be executed.
In the actual rendering step, the product world matrix · camera matrix · projection matrix is calculated and then finally applied to every single point. Thus, the points of all objects are transferred directly to the screen coordinate system (at least almost: the value range of the axes is still -1..1 for the visible range; see the section "Window-Viewport-Transformation").
Often a scene contains light sources placed at different positions to make the lighting of the objects appear more realistic. In this case, a gain factor for the texture is calculated for each vertex, based on the light sources and the material properties associated with the corresponding triangle. In the later rasterisation step, the vertex values of a triangle are interpolated over its surface. A general lighting (ambient light) is applied to all surfaces; it is the diffuse and thus direction-independent brightness of the scene. The sun is a directed light source, which can be assumed to be infinitely far away. The illumination that the sun produces on a surface is determined by forming the scalar product of the direction vector from the sun and the normal vector of the surface. If the value is negative, the surface is facing the sun.
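The per-vertex sun term described above can be sketched as follows (pure Python; the `ambient` parameter and the function name are mine, and the sign convention follows the text, where the direction vector points from the sun, so a negative dot product means the surface faces the light):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sun_brightness(sun_direction, surface_normal, ambient=0.2):
    """Gain factor for one vertex: ambient light plus the directed sun term.

    sun_direction points *from* the sun, so a negative dot product with the
    surface normal means the surface faces the light; surfaces facing away
    receive only the ambient term."""
    facing = -dot(sun_direction, surface_normal)
    return ambient + max(0.0, facing)

# Sun shining straight down; both vectors are unit length.
print(sun_brightness((0, 0, -1), (0, 0, 1)))   # fully lit: 1.2
print(sun_brightness((0, 0, -1), (0, 0, -1)))  # facing away: ambient only, 0.2
```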
Only the primitives which are within the visual volume actually need to be rasterised (drawn). This visual volume is defined as the inside of a frustum, a shape in the form of a pyramid with the top cut off. Primitives which are completely outside the visual volume are discarded; this is called frustum culling. Further culling methods, such as backface culling, which reduce the number of primitives to be considered, can theoretically be executed in any step of the graphics pipeline. Primitives which are only partially inside the cube must be clipped against it. The advantage of the previous projection step is that the clipping always takes place against the same cube. Only the - possibly clipped - primitives which are within the visual volume are forwarded to the final step.
In order to output the image to any target area (viewport) of the screen, another transformation, the Window-Viewport transformation, must be applied. This is a shift, followed by scaling. The resulting coordinates are the device coordinates of the output device. The viewport contains 6 values: height and width of the window in pixels, the upper left corner of the window in window coordinates (usually 0, 0) and the minimum and maximum values for Z (usually 0 and 1).
- x_screen = vp.X + (v.x + 1) · vp.Width / 2; y_screen = vp.Y + (1 - v.y) · vp.Height / 2; z = vp.MinZ + v.z · (vp.MaxZ - vp.MinZ)
- with vp = viewport; v = point after projection
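A sketch of this transformation in pure Python (the function name and defaults are mine; the y-axis flip reflects the common convention that window coordinates grow downwards, which is a convention rather than a requirement):

```python
def window_viewport(v, vp_x=0.0, vp_y=0.0, width=800.0, height=600.0,
                    min_z=0.0, max_z=1.0):
    """Map a projected point (x, y in -1..1, z in 0..1) to device coordinates.

    The y axis is flipped because window coordinates usually grow downwards."""
    x = vp_x + (v[0] + 1.0) * width / 2.0
    y = vp_y + (1.0 - v[1]) * height / 2.0
    z = min_z + v[2] * (max_z - min_z)
    return (x, y, z)

# The centre of the view volume lands in the centre of an 800x600 window:
print(window_viewport((0.0, 0.0, 0.5)))  # (400.0, 300.0, 0.5)
```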
On modern hardware, most of the geometry computation steps are performed in the vertex shader. This is, in principle, freely programmable, but generally performs at least the transformation of the points and the illumination calculation. For the DirectX programming interface, the use of a custom vertex shader is necessary from version 10, while older versions still have a standard shader.
The rasterisation step is the final step before the pixel pipeline: here, all primitives are rasterised, i.e. discrete fragments are created from continuous primitives.
In this stage of the graphics pipeline, the grid points are also called fragments, for the sake of greater distinctiveness. Each fragment corresponds to one pixel in the frame buffer, which in turn corresponds to one pixel of the screen. Fragments can be colored (and possibly illuminated). Furthermore, where polygons overlap, it is necessary to determine the visible fragment, i.e. the one closest to the observer; a Z-buffer is usually used for this so-called hidden surface determination. The color of a fragment depends on the illumination, texture, and other material properties of the visible primitive and is often interpolated from the properties of the triangle's vertices. Where available, a fragment shader (also called a pixel shader) is run in the rasterisation step for each fragment of the object. If a fragment is visible, it can now be blended with color values already in the image if transparency or multi-sampling is used. In this step, one or more fragments become a pixel.
To prevent the user from seeing the gradual rasterisation of the primitives, double buffering takes place: the rasterisation is carried out in a special memory area, and once the image has been completely rasterised, it is copied to the visible area of the image memory.
All matrices used are non-singular and thus invertible. Since the product of two non-singular matrices is again non-singular, the entire transformation matrix is also invertible. The inverse is required to recalculate world coordinates from screen coordinates - for example, to determine the clicked object from the mouse pointer position. However, since the screen and the mouse have only two dimensions, the third is unknown. Therefore, a ray is projected through the cursor position into the world, and then the intersection of this ray with the polygons in the world is determined.
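Invertibility can be illustrated without a general matrix-inversion routine, because the inverse of a rotation-then-translation chain is simply the same steps undone in reverse order (pure Python; helper names are mine, column-vector convention):

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, point):
    v = (*point, 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))[:3]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# world = T * R (column vectors). Its inverse is R^-1 * T^-1: the same
# steps undone in reverse order with negated arguments.
angle = math.radians(30)
world = mat_mul(translate(4, -2, 7), rot_z(angle))
inverse = mat_mul(rot_z(-angle), translate(-4, 2, -7))

p = (1.0, 2.0, 3.0)
round_trip = transform(inverse, transform(world, p))
print(round_trip)  # recovers (1.0, 2.0, 3.0) up to rounding error
```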
Classic graphics cards are still relatively close to the graphics pipeline. With increasing demands on the GPU, restrictions were gradually removed to create more flexibility. Modern graphics cards use a freely programmable, shader-controlled pipeline, which allows direct access to individual processing steps. To relieve the main processor, additional processing steps have been moved to the pipeline and the GPU.
The most important shader units are pixel shaders, vertex shaders, and geometry shaders. The Unified Shader was introduced to take full advantage of all units: it provides a single large pool of shader units, which is divided into different groups of shaders as required. A strict separation between the shader types is therefore no longer necessary.
It is also possible to use a so-called compute shader to perform arbitrary calculations on the GPU, independent of displaying graphics. The advantage is that such calculations run highly in parallel, although there are limitations. These general-purpose calculations are also called GPGPU.
Creation By Signy
The creation of the universe
“Earth had not been, nor heaven above. But a yawning gap, and grass nowhere.” 4
In the beginning, only the most primal of forces existed. Ice and frost flowed from the frozen wastes of Niflheim. Sparks and embers billowed out of Muspelheim's raging inferno. Between the two was the vast emptiness known as Ginnungagap. Here the flames melted the ice, and from those first waters life arose in the form of a giant called Ymir. He and the frost giants he begot - the Jotuns - were brutal and aggressive forces of chaos.
“when the breath of heat met the rime, so that it melted and dripped, life was quickened from the yeast-drops, by the power of that which sent the heat, and became a man's form. And that man is named Ymir, but the Rime-Giants call him Aurgelimir” 3
The primal waters also spawned Audhumla, a great cow. While Ymir fed on her milk, she licked the salty ice blocks that kept drifting out of Niflheim. As she licked, she uncovered the first of the Aesir - the race of the Gods. His name was Buri, and he had a son named Bor. Bor married Bestla, the daughter of a frost giant, and they had three sons: Odin, Vili and Ve.
“Straightway after the rime dripped, there sprang from it the cow called Audumla... She licked the ice-blocks...” 3
It is important to note that even before the universe began, primal forces were at work. One way to interpret this is as a metaphor for the Big Bang. Energy met emptiness and an entire universe was born. The very existence of primal forces before creation shows us that creation was not a singular fluke event, but part of a natural process. Even the Gods themselves were created (and in the story of Ragnarok we see many of Them die). We can also see this at a smaller scale in Nature and in our own lives. The cycle of birth, death and rebirth went on before us and will keep going on after us.
The creation of the world
“Then Bur's sons lifted the level land. Mithgarth the mighty there they made” 4
Odin and His brothers could not abide Ymir and his disorderly, destructive ways so they killed him. From his flesh They made the earth, and from his bones the mountains. His teeth and bone dust became the stones and sand. His blood became the lakes and seas. The brothers set his skull around the earth, held up by one dwarf for each of the cardinal compass directions, and it became the sky. The brains turned into the clouds. The dwarves themselves were created from the maggots that infested Ymir’s dead body. Then the brothers took sparks from Muspelheim and threw them into the void surrounding the new world and they became the stars. The Jotuns still posed a danger to the world, so Odin used Ymir's eyebrows to partition creation into an area for the Jotuns and an area safe (at least mostly) from them. These lands became Jotunheim and Midgard.
“of his blood the sea and the waters; the land was made of his flesh, and the crags of his bones... They took his skull also, and made of it the heaven” 3
This is perhaps the greatest story of life from death. We see described in the lore something we already understand from our physical lives -- that building something requires raw materials. We know that those materials come from somewhere, whether they are metal ores ripped from the earth or dead organisms broken down by bugs, worms and microbes to form new soil and then new plants. The fuel we burn in our cars also comes from just such a process. Within all of these processes is a continuous cycle not just between life and death, but also between order and chaos. The initial void and swirling energies were a highly entropic environment, which was ordered (to a degree at least) when life arose from it. This life did some rearranging, and built another level of order. The forces of entropy (the Jotuns) are ever-present though, requiring constant energy to maintain the order. The lore contains many Jotun-fighting tales, and everyone knows how their place doesn't stay clean without effort. Sometimes the Jotuns win and wreak destruction, but life always manages to return again.
There is another important detail here too. Creation doesn't happen at the snap of a finger. It takes careful thought and great effort. Anyone who has ever painted a painting, built a software system, or birthed and/or raised a child knows this. Creation is also not a singular act. It takes cooperation and teamwork. Although it is often said that Vili and Ve are aspects of Odin 2, we see Ymir, Audhumla, Buri, Bor and Bestla all having a hand (or tongue in Audhumla's case) in creation. Furthermore, I believe that even though the lore doesn't mention Her here, Frigg must have had as much involvement in creating the world as Odin. Other stories tell of Idun keeping the Gods young with Her apples and Frey making the crops grow each spring. Creation is continuous and collaborative.
“Then Allfather took Night, and Day her son, and gave to them two horses and two chariots, and sent them up into the heavens” 3
Odin set night and day in the sky, along with the sun and moon. Night is a daughter of the Jotuns, and Day is her son. She rides around the world each day on her horse Hrimfaxi (frostmaned) and Day rides Skinfaxi (gleaming-maned). Sunna (the sun) races across the sky each day pursued by a wolf that will eventually catch her. Likewise Moon is pursued by another wolf and will one day be consumed himself. Both wolves are children of a Jotun.
It is interesting that the sun and moon and even the days and nights are not Gods, but children of the same giants of chaos that will one day destroy the world. They are forces of Nature that we depend on, but can also bring us grief. Note also that unlike some other traditions, in Germanic lore the burning sun is female and the gentler moon is male.
“An ash I know, Yggdrasil its name. With water white is the great tree wet. Thence come the dews that fall in the dales. Green by Urth's well does it ever grow.” 4
No discussion of the Norse creation story would be complete without mentioning the great tree Yggdrasil, but Yggdrasil doesn’t seem to fit nicely into the story. It is just kind of there -- never being created or destroyed, despite getting gnawed in a few places. In a sense, Yggdrasil is spacetime itself. It is the medium in which passage between worlds is possible. It both contains them and exists within them. The “water” is the time half of that, continually flowing between the present moment, which then becomes the past. Past circumstances then substantially affect what happens in the next present moment. Thus time is seen as cyclical instead of linear.
The creation of humanity
“Mithgarth the gods from his eyebrows made, And set for the sons of men” 5
One day while walking along the beach in Midgard, Odin and His brothers found two fallen trees. One was an elm and the other an ash. From the ash They created a human male named Ask, and from the elm a human female named Embla. Odin gave them each a soul, Vili gave them intellect and emotion, and Ve gave them physical senses. The humans were given Midgard to live in and care for.
“When the sons of Borr were walking along the sea-strand, they found two trees, and took up the trees and shaped men of them: the first gave them spirit and life; the second, wit and feeling; the third, form, speech, hearing, and sight. They gave them clothing and names: the male was called Askr, and the female Embla” 3
This account too contains multiple points that we see reflected in Nature. In addition to the cycle of life from death mentioned earlier (note that they were fallen trees), this story shows the Gods creating new lifeforms out of other, less complex lifeforms. Of course we know that humans evolved from apes not trees, but the basic principle is there. It is also important to note that both the male and female human were created together, at the same time, and given the same gifts. Even the word “men” is used to mean male and female together in the old text, without the specifically male connotation that the word typically has in the present day.
Another subtle but important point is the order in which humanity was given its divine gifts. We were given souls first, the spiritual breath of life from Odin. Second were intellect and emotion -- our minds and hearts. Lastly we were given physical senses, and by extension the rest of our physical abilities. The order we were given these gifts conveys their relative importance. Our connection to the Gods and spirits, our compassion for each other and the other species we share the planet with, and the spiritual inheritance we get from our ancestors should be our top priorities. Following that are a person's mind and heart, and last is the physical body we each live in. This is not to say that pursuits and occupations that are mostly physical are less important. The point is to focus on who a person is, not whatever physical attributes they happen to be born with. This will lead to the best kind of society for all involved, something the Gods surely knew.
The creation of tomorrow
“Now do I see the earth anew. Rise all green from the waves again.” 4
The most important thing to take from this is that creation is ongoing and eternal. Each of us inherits a little bit of that divine creative power, and it is up to all of us to help create the world that we want ourselves and our descendants to live in. It's a long, hard, and sometimes painful process, but it's well worth the effort. Just look at what you get from it.
1) Crossley-Holland, Kevin. Axe-age, wolf-age, a selection of Norse myths. London: Andre Deutsch, 1985. ISBN: 0-233-97688-4
2) "The Creation of the Cosmos | Norse Mythology." Norse Mythology | The Ultimate Online Resource for Norse Mythology and Religion. N.p., n.d. Web. 13 Aug. 2013. <>.
3) "Gylfaginning." Internet Sacred Text Archive Home. N.p., n.d. Web. 10 Aug. 2013. neu/pre/pre04.htm>.
4) "The Poetic Edda: Voluspo." Internet Sacred Text Archive Home. N.p., n.d. Web. 10 Aug. 2013. <>.
5) "The Poetic Edda: Grimnismol." Internet Sacred Text Archive Home. N.p., n.d. Web. 10 Aug. 2013. <>.
6) "Yggdrasil and the Well of Urd | Norse Mythology." Norse Mythology | The Ultimate Online Resource for Norse Mythology and Religion. N.p., n.d. Web. 13 Aug. 2013. <>.
One should never judge a book by its cover, so why do we continue to judge each other by our outer covers? Why can't people show their sexuality freely, without worrying about being judged? Why can't an African American move into a mostly "white" community and feel comfortable? Why can't an overweight person walk into a fitness center without feeling as if they don't belong? In today's society one can be humiliated or abused because of their race, or for not being skinny, beautiful, or straight. In this paper I will discuss three different stereotypes I have encountered in my own personal life and how they have affected me and others.
A stereotype is a preconceived belief or judgment that one holds about someone because of their appearance or their actions. The Bill of Rights says we have freedom of speech, so why are we so afraid to speak up for ourselves? We say that we are all equal, but there is still racism and sexism, and people still judge others based on their religion, color, weight, how they dress, and what they eat.
While attending high school I was stereotyped in a few different ways: a "prep" because I was on the cheerleading squad, and a "teacher's pet" because my father was my high school principal. I would not have categorized myself as a "prep"; I had a variety of friends and never stuck to one single "clique." I would definitely not categorize myself as a teacher's pet; although all the teachers knew me quite well, I felt like I had to walk on eggshells around many of them.
When I was a teenager, I babysat for a little boy who acted very feminine. He would dance around the room and act like Mary Poppins with his umbrella twirling above his head. If I painted my nails, he would be glued to my side because it interested him so much. Now, this little boy did not grow up to be a cross-dresser; however, he did grow up to be gay, and he decided to "come out of the closet" while still in high school. When this child was four years old, I had already stereotyped him as gay because of his feminine actions. I cared for this child until he began sixth grade, and although I knew he could possibly grow up to be gay, this never became a problem with how I looked at him and loved him. This was just how he was from the time he could walk, talk, and play. I know there are people who are unsure about their sexuality and experiment, but in this instance, this boy was born this way. That is who he is, so with all of the stereotyping of homosexuals, it is not fair to judge those who feel this is simply right for them.
Food is a part of everyday life. Whether you're a foodie or you only eat because you have to, it's important to understand how to eat healthily. Developing good habits around food, from eating well to learning how to cook, are skills that will ensure that you're living the happiest, healthiest life you can live.
A Balanced Diet
The Australian Government's Dietary Guidelines recommend that you eat a wide variety of foods from the following groups:
- Vegetables of different types and colours, and legumes/beans
- Fruit
- Grains (mostly wholegrain and/or high fibre varieties), including bread, cereals, rice, pasta and oats
- Protein, including lean meat, poultry, fish, eggs, tofu, nuts/seeds and legumes/beans
- Dairy (mostly reduced fat), including milk, yoghurt and cheese
They also recommend that you limit your intake of foods containing:
- Saturated fat
- Added salt
- Added sugars
The Food Rainbow
Different kinds of vegetables come in different colours, each with its own set of disease-fighting chemicals, called phytochemicals. Eating a wide range of different coloured fruit and vegetables every day (e.g., red, purple/blue, yellow/orange, green and brown/white) is a way to make sure you're getting a healthy variety of nutrients in your diet.
Nutrition Australia has some more information on eating different coloured foods to stay healthy.
Vitamins and Supplements
Your body needs vitamins and minerals to function properly. Some experts say a balanced diet should give you everything you need and supplements are a waste of money. Others say that there are situations where medication, illness or your lifestyle creates a need for extra vitamins or minerals.
Supermarkets, chemists and health food stores stock a wide range of vitamins and other supplements. Some foods such as breakfast cereals, juices and yoghurts also now include additives such as extra iron, vitamin C or folate.
Note that extra doses of some kinds of vitamins can be dangerous - check in with your doctor or pharmacist before stocking up. Check out the Better Health Channel's page on vitamin and mineral supplements for more info on their positives and negatives.
A Healthy Approach to Food
Food takes time to prepare and costs money to buy, so being short of time, ideas or cash can affect the way you eat. What if you're too busy to cook, or not that confident in the kitchen? Here are a few tips for taking a healthier approach to food:
- Be relaxed and realistic - eating is meant to be good for you, and fun
- Be selective when you buy takeaway food from cafes, restaurants and fast food outlets
- Bring a healthy lunch from home when you go to school, uni or work instead of buying junk food or takeaway - you'll save money too
Try focusing on small goals such as cooking a few nights a week or taking lunch to work or school a few days a week. Perhaps you can find inspiration by:
- Shopping for fresh produce at a market
- Trying new foods
- Checking out new recipes
- Doing a cooking course
- Planning meals and snacks in advance
- Preparing healthy meals with friends
Learning How to Cook
If you've never cooked for yourself before, it's not too late to get started. If you're still living at home with your folks, ask them if you can help out with making dinner some time. If you stick at it, pretty soon you'll be making dinner for your family yourself!
There are also some good sites out there that can help you develop the basics, with a bunch of recipes that you can try. Check out:
- Young Gourmet - a resource for food aimed at teenagers, including information about food choices and recipes you can give a go
Not everyone has a great relationship with food. Food can be a comfort and a friend, or it can feel like the enemy.
Food and eating are a regular part of life, but in some situations problems can arise such as bingeing, going on fad diets, obesity or developing an eating disorder. These are serious health issues.
Feeling sick, anxious, worried, bothered or preoccupied with food isn't healthy or fun. If you feel miserable about food or have any of these problems, make sure you ask for help and support.
Check out the Better Health Channel's Weight Loss and Fad Diets page for some information about the dangers of fad diets and tips on losing weight the healthy way.
If you or a friend want to talk to someone about eating disorders, consider talking to your local or family doctor, or calling either Lifeline on 13 11 14, or Kids Helpline on 1800 55 1800 (24 hours a day).
Better Health Channel - Healthy Eating
Information about healthy living and eating, including information on food labels, junk food, recipes and nutritional needs.
Australian Dietary Guidelines
Provides advice about the amount and kinds of foods that we need to eat for health and wellbeing.
A resource about food for young people including recipes, information about ingredients and food production facts.
Provides a range of useful and interesting fact sheets such as Tips For Budget Buying, Food Variety, Shopping for Good Health and What Are Bush Foods?
Food Safety Victoria
This site gives some excellent facts about food safety including hygiene, keeping food safe and avoiding food poisoning.
|
<urn:uuid:22a81733-f2e0-4baa-a616-707a76f6ca71>
|
CC-MAIN-2019-18
|
http://youthcentral.vic.gov.au/advice-for-life/health/food-and-diet
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061546-00050.warc.gz
|
en
| 0.952265 | 1,096 | 3.484375 | 3 | 1.730939 | 2 |
Moderate reasoning
|
Health
|
Enjoying the sweet scent of Osmanthus in this golden autumn, we are going to celebrate the 71st birthday of the People’s Republic of China. Luckily it also coincides with the Mid-Autumn Festival. Altogether we share the glory of our country's birthday, with the full moon in the sky and the red flag flying.
S&L Holiday Arrangement
Oct. 1st–Oct. 8th, 8days in total
Sep. 27th (Sunday) & Oct. 10th (Saturday) will be normal working days.
The Origin of National Day
The term "National Day" originally referred to national celebrations and first appeared in the Western Jin Dynasty. In ancient society, the emperor's ascension and birthday were called the "National Day". Today it is a legal holiday established by a country to commemorate itself.
On Dec. 3rd, 1949, at the fourth meeting of the Central People's Government Committee, the Chinese government declared October 1st the "National Day", commemorating the founding of the People's Republic of China. This year marks the 71st birthday of the People's Republic of China. We wish our great country an even more prosperous future in 2020 and beyond!
Mid Autumn Festival
Unlike previous years, this National Day is quite special: it falls on the same day as the Mid-Autumn Festival, one of our traditional Chinese festivals.
Held on the 15th day of the eighth month of the Chinese lunar year, it is the second largest Chinese traditional festival after Spring Festival, usually falling in the middle of autumn. The moon is at its brightest and roundest on this day, which symbolises family reunion. In ancient times it was popular for family and friends to get together and tell fairy tales about the moon.
What we always do on Mid-Autumn Festival
Worship to the Moon
It is an ancient festival with an important custom of observing and worshipping the moon. In ancient times emperors offered sacrifices to the sun in spring and to the moon in autumn, and ordinary folk came to worship the moon on the Mid-Autumn Festival as well. Today it has become one of the most important activities for families gathering together during the Mid-Autumn Festival.
"Night tide stays in the moon", from the poem "Watching the tide on August 15th" by the great Song Dynasty poet Su Shi, shows that the custom of watching the tide has a long history; it was another Mid-Autumn Festival event in the Zhejiang area.
Sharing Moon cakes
Traditional moon cakes take their name from their baked pastry skin and the filling inside. Usually the pastry skin is stamped on top with the Chinese characters for "longevity" or "harmony" and holds a sweet, dense filling. Moon cakes were originally offerings used to worship the moon; later, people came to regard sharing moon cakes on the night of the Mid-Autumn Festival as a symbol of family reunion.
Guessing Lantern Riddles
On the night of the Mid-Autumn Festival, people gather to guess the riddles written on lanterns hung in public places. The activity is popular among young people, love stories often start there, and lantern riddles have become a new way of expressing love on the Mid-Autumn Festival.
If you also have interest in Chinese traditional culture, visiting China on specific traditional festivals will definitely give you a real cultural experience.
This year we look forward to a successful vaccine as soon as possible, and we believe the global epidemic will finally become a thing of the past. We wish everyone good health, and we will meet again in the coming year.
|
<urn:uuid:7178ce90-e469-44c7-abca-62da9b85be0e>
|
CC-MAIN-2023-14
|
https://www.mixertec.com/news/Happy-National-Day-%26-Mid-autumn-Festival-102.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00092.warc.gz
|
en
| 0.95215 | 740 | 2.578125 | 3 | 1.480219 | 1 |
Basic reasoning
|
Travel
|
If you think that belly fat is just an issue for people who are overweight, think again -- even people who are at a healthy weight and exercise regularly can have it. There's no such thing as a "perfect" body, so what's the big deal if your stomach isn't flat? It isn't just a problem of aesthetics.
There are two types of fat: visceral and subcutaneous. Visceral fat is the kind that lies far below the surface, surrounding your organs. Subcutaneous fat is just below the skin and can be easily grabbed. You can spot visceral fat via an MRI or CT scan, but there's a simple test for it at home: If you have a large waistline or you're apple-shaped, you likely have visceral belly fat.
While neither type of fat is good, visceral belly fat carries some serious health risks. According to the U.S. Department of Health and Human Services, you're at risk if you have a waistline greater than 40 inches if you're a man and 35 inches if you're a woman (a lower threshold of 35 inches for men and 31 inches for women has been recommended by the World Health Organization for people of Asian ethnicity). We aren't entirely sure why, but belly fat is linked to increased insulin resistance, which can result in type II diabetes. It can also lead to heart disease, stroke, some types of cancer, sleep apnea and premature bone density loss, and there's even evidence that it may be a factor in developing dementia.
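The waistline thresholds above amount to a simple decision rule. As a rough sketch (the function name and interface are illustrative, not from any medical source; the cut-offs are the HHS and WHO figures quoted above):

```python
def visceral_fat_risk(waist_inches, sex, asian_ethnicity=False):
    """Return True if waist circumference exceeds the at-risk
    threshold quoted above: 40 in (men) / 35 in (women) per HHS,
    or the lower WHO cut-offs of 35 in / 31 in for people of
    Asian ethnicity."""
    if asian_ethnicity:
        threshold = 35 if sex == "male" else 31
    else:
        threshold = 40 if sex == "male" else 35
    return waist_inches > threshold

print(visceral_fat_risk(41, "male"))                           # -> True
print(visceral_fat_risk(32, "female", asian_ethnicity=True))   # -> True
```

A check like this is only a screening heuristic, of course; as the text notes, visceral fat is properly identified with an MRI or CT scan.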
So what causes belly fat? Heredity is one factor. Belly fat has often been called "middle-aged spread" because as you age and your metabolism slows down, extra pounds tend to accumulate across the stomach. This is probably due to changes in levels of hormones such as cortisol, which is made in the adrenal glands and helps the body process glucose, regulate metabolism and manage stress. Speaking of stress, it can raise levels of cortisol, so it might lead to increased belly fat, too.
While it can be tough to do, there are ways to get rid of belly fat -- and not just plastic surgery -- but there's no magic pill. Read on for tips.
Blast That Belly
Plastic surgery such as liposuction can remove subcutaneous fat but not visceral fat. Researchers in Brazil have discovered that if you have lipo to remove the former, your body could compensate with an increase in the latter unless you have a diet and exercise regimen going.
You can't do anything about aging or heredity, but if you have developed belly fat, you can still work to combat it. A lot of people think that exercises targeting the stomach area, such as sit-ups or crunches, are the way to go. These types of exercises tone and tighten muscles, but they can't do anything about fat. Instead, focus on losing fat overall via vigorous aerobic exercise -- preferably 30 minutes or more most days of the week. Start slow and build up. You don't have to run to get good cardio; brisk walking, dancing and bicycling all count, too. Strength training, meaning lifting weights, may also help get rid of visceral belly fat. Building muscle raises your metabolism so you can burn more calories. You might also consider incorporating core-strengthening exercises like planks.
What you eat also makes a difference. To reduce belly fat, avoid foods that contain trans fats, a type of fat found in many prepackaged cookies, crackers and baked goods, as well as some fried fast foods. Not only can trans fats pack on the pounds, but research conducted at Wake Forest University has also shown that they can add fat specifically to your belly and even redistribute fat there from other areas of the body.
Sugar also increases belly fat. Eat whole grains as well as lots of fruits and vegetables. Foods high in fiber not only keep you full longer but they also lower insulin levels and may shrink fat cells. A diet rich in healthy monounsaturated fats like olive oil, avocados, almonds and peanuts may also cause fat to break down. Your best beverage is always going to be water. It flushes out fat and keeps down bloat, and drinking lots of it keeps you from thinking you're hungry when you're actually thirsty.
The final key to losing belly fat is to de-stress. Easier said than done, right? One way to lower stress is to get enough sleep, ideally eight hours each night. Find ways to diffuse stressful situations -- through yoga, meditation or journaling.
While there's no one way to lose belly fat, that's no reason to give up. It has nothing to do with how you look in a swimsuit and everything to do with staying healthy.
Lots More Information
Author's Note: Is there a real way to lose belly fat?
When I found out that I'd be writing about getting rid of belly fat, I joked that I could make it a short article by answering the question with "plastic surgery." But in all seriousness, I know from personal experience that there's no easy way to lose fat and keep it off in the long term. The diet industry is a multi-billion dollar one for a reason; we want there to be a special secret to it, when in reality eating well and exercising are the tried-and-true methods. I knew that in general there are plenty of health risks that come with being overweight, but belly fat is especially worrisome. All the more reason to keep fighting the good fight.
SOURCE & CONTENT - howstuffworks.com
|
<urn:uuid:ddd2ed64-bf75-4587-9a7c-4d6ec2a6118b>
|
CC-MAIN-2018-34
|
http://mitaleedoshi.blogspot.com/2013/06/must-read-for-all-people-who-become.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215487.79/warc/CC-MAIN-20180820003554-20180820023554-00131.warc.gz
|
en
| 0.966574 | 1,146 | 2.609375 | 3 | 2.339606 | 2 |
Moderate reasoning
|
Health
|
GIS Use in Telecommunications Growing
To be competitive, telecommunications providers depend on a smoothly functioning work flow process that integrates information for marketing, demand forecasting, engineering, customer management, operations support, and fleet management. Although telecommunications providers generally have the same needs for information, how the work flow is organized can vary significantly from company to company.
Historically telecommunications companies have used an assortment of information systems-some developed in-house, some purchased-that were never designed to work together. When these systems were implemented, there was no perceived requirement for information sharing. Today telecommunications companies operate networks that have equipment from multiple vendors and lease bandwidth and antenna sites from other companies. Mergers with, or acquisition of, other companies require the incorporation of, or at least interaction with, completely foreign networks.
The need for information sharing within companies and interoperability between systems has long been recognized by the telecommunications industry. Originally founded in 1865 as the International Telegraph Union, the International Telecommunications Union (ITU) promotes equipment standards that guarantee generalized interconnection between communication systems. To improve interoperability, ITU has developed the Telecommunications Management Network (TMN), a method of standardizing business organization. This hierarchy of support systems specifies interoperability through the use of industry-standard protocols. Geospatial applications need to support this same level of interoperability if GIS is to work well within this TMN-structured environment.
Many current applications of GIS in the telecommunications industry began as departmental tools that worked within a well-defined scope. These GIS-based tools have helped automate business processes and increase the efficiency of operations. The following sections describe how telecommunications companies have integrated GIS into the overall work flow.
Telecommunications providers are tied to geography more closely than many other types of businesses. They operate within service areas and the infrastructure that delivers services is linked directly to the location of each customer. Telecommunications companies segment the characteristics for both consumer and business customers geographically using GIS. This not only lets them market more effectively but also helps them forecast the demand for services. Both targeting customers and predicting where and when growth will occur involves integrating corporate intelligence, demographic data, and information about the progress of building projects in the area with location data and applying various modeling techniques. The information obtained from this analysis drives network investment budgets and marketing campaigns.
Operations Support Systems
Operations Support Systems (OSS) make sure that the network functions properly. OSS includes activities such as network monitoring, outage management, billing, and testing. With a shared GIS database, staff members have instant access to customer status and history, existing plant records, and signal quality information to support updates, maintenance and repairs to the network.
Intelligent objects modeled in ArcGIS not only have rules that speed the design process but also can reflect the status of network elements. A query can identify features in a network element layer that are at 80 percent of capacity more than half of the time. The switches, base stations, and other features selected by this query would be candidates for capacity enhancements. The ability to anticipate problems and prevent outages before they occur is another tool that enables carriers to be more competitive and reduce costs. This so-called "near real-time" monitoring of networks necessitates integration of several systems using industry standard interoperability protocols.
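The kind of query described above can be sketched in plain Python over a list of network-element records. The record fields and element names are hypothetical, and this is not actual ArcGIS API code; only the 80-percent/half-the-time thresholds come from the text:

```python
# Hypothetical network-element records: each keeps a history of
# utilization samples (fraction of capacity in use).
elements = [
    {"id": "SW-101", "type": "switch",
     "utilization_history": [0.85, 0.90, 0.70, 0.88]},
    {"id": "BS-202", "type": "base_station",
     "utilization_history": [0.40, 0.50, 0.45, 0.60]},
]

def needs_capacity_upgrade(element, threshold=0.80, min_fraction=0.5):
    """Flag an element running at >= threshold of capacity in more
    than min_fraction of its observed samples."""
    history = element["utilization_history"]
    over = sum(1 for u in history if u >= threshold)
    return over / len(history) > min_fraction

candidates = [e["id"] for e in elements if needs_capacity_upgrade(e)]
print(candidates)  # -> ['SW-101']
```

In a real deployment the same selection would run as an attribute query against the geodatabase, so the flagged switches and base stations can be displayed on the map alongside the rest of the plant.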
Capacity and Capital Planning
Information generated by marketing and market segmentation activities that define current and future communication demands can be used to create a logical network of capacities and estimate the capital spending required to build this capacity. GIS is widely used in decision support for capital planning. Effective capacity planning uses current data describing the existing plant, the demand information from the marketing phase, and network performance information from OSS.
Wireline engineering systems are GIS applications that work with the design and geographic layout of a company's outside plant infrastructure. Engineering applications allow for quick review and modeling of network routes, automation of the work order process, and high volume cartographic output to support technicians in the field.
ArcGIS can model intelligent objects in the network and associate rules with features. Through the use of industry-specific data models, real-world behavior can be captured in these objects. [See the accompanying article, "Telecommunications Data Model Available."] For example, a fiber cable object can be created with rules that would not allow it to connect to a copper splice. This capability greatly enhances design performance. Because they use an industry-standard development platform, ArcGIS-based engineering systems are interoperable. Third party software that schematically represents networks has been integrated with ArcGIS so that users can toggle between logical and physical views of the network.
Nowhere is competition in the telecommunications industry more intense than in the wireless sector. While most second generation networks have rolled out, new wireless network technologies are forcing carriers to redesign all or parts of their networks. Designing and building a wireless network is a costly process that involves several iterations of planning and testing. Having paid handsomely for third generation (3G) licenses, many carriers are highly motivated to reduce the cost of building new networks.
Performing sophisticated GIS analysis on optimized geographic data can reduce planning and design costs. In some cases, effective use of geographic resources has made the difference between success or failure for a telecommunications company. Preliminary analysis with GIS uses customer, terrain, and landownership information and provides planners with potential antenna sites. The initial network configuration is evaluated using wave propagation modeling that simulates the wireless coverage resulting from a configuration. Once an optimal model is devised, engineers test the configuration in the field. The process is repeated until the configuration provides optimal coverage for the area. Wireless engineering applications illustrate that sharing information and geographic data between phases of the work flow can reduce data redundancy while streamlining processes. Using GIS to limit the number of design iterations and curtail costly field testing provides significant savings for telecommunications providers.
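Production planning tools use far more sophisticated propagation models than this, but the standard free-space path loss formula gives the flavour of the calculation repeated for every candidate antenna site. The transmitter power, frequency, and distance figures below are illustrative assumptions, not values from the article:

```python
import math

def free_space_path_loss_db(distance_km, frequency_mhz):
    """Free-space path loss in dB, using the standard formula
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return (20 * math.log10(distance_km)
            + 20 * math.log10(frequency_mhz)
            + 32.44)

def received_power_dbm(tx_power_dbm, distance_km, frequency_mhz):
    """Received signal power under ideal free-space conditions,
    ignoring antenna gains, terrain, and obstructions."""
    return tx_power_dbm - free_space_path_loss_db(distance_km, frequency_mhz)

# Illustrative: a 43 dBm transmitter heard 2 km away at 1900 MHz.
rx = received_power_dbm(43, 2.0, 1900)
print(round(rx, 1))  # -> -61.0
```

A planner would evaluate an expression like this (or a terrain-aware model) over a grid of points around each candidate site, which is exactly the kind of raster computation a GIS is built for.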
Customer Relationship Management
In today's competitive telecommunications market, customer service is the number one differentiator for companies. Customer relationship management (CRM) applications improve the relationship between the company and its customers. Timely service provisioning, response to customer queries, and reporting on network performance are aspects of CRM. With GIS, call center operators can access all the information on a customer and the associated network based on location. Databases containing information on outside plant infrastructure, signal quality, and equipment can be integrated using GIS and made available using a corporate Intranet.
In CRM, Tier 1 handling means the customer's issue is resolved with the initial call. Tier 2 calls require initiating a trouble ticket and obtaining additional information. Carriers who have successfully implemented GIS support for CRM achieve higher Tier 1 handling, and customer service is performed more quickly and economically. With CRM contacts at an all-time high, improving CRM operations can make a big impact on a carrier's bottom line. In the wireless sector, "churn" refers to the rate at which customers jump from one service provider to another. For many carriers, customer churn is the single largest cost factor. GIS improves the speed and quality of contact handling, increases customer satisfaction, and reduces churn.
Communications companies must manage and route service vehicles for outage response and service provisioning. An efficient dispatch process balances drive times, territories, and the skill sets of individual technicians. GIS routing applications can produce itineraries that take each of these factors into account. Optimizing the dispatch and routing of service vehicles results in significant cost and time savings and increased customer satisfaction because technicians can often specify time windows for service calls of two hours or less.
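A real dispatch system would use road-network drive times from a GIS routing engine, but a greedy sketch shows how skill sets and proximity can be balanced. All names, coordinates, and the straight-line distance stand-in are hypothetical:

```python
# Hypothetical technicians and jobs; straight-line distance stands in
# for the drive time a GIS routing engine would compute.
technicians = [
    {"name": "A", "skills": {"fiber"}, "location": (0, 0)},
    {"name": "B", "skills": {"copper", "fiber"}, "location": (5, 5)},
]
jobs = [
    {"id": "J1", "skill": "fiber", "location": (1, 1)},
    {"id": "J2", "skill": "copper", "location": (4, 4)},
]

def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def assign(jobs, technicians):
    """Greedy dispatch: each job goes to the nearest technician
    who holds the required skill."""
    plan = {}
    for job in jobs:
        qualified = [t for t in technicians if job["skill"] in t["skills"]]
        best = min(qualified,
                   key=lambda t: distance(t["location"], job["location"]))
        plan[job["id"]] = best["name"]
    return plan

print(assign(jobs, technicians))  # -> {'J1': 'A', 'J2': 'B'}
```

Greedy assignment ignores territory balancing and time windows; production routing applications solve a constrained optimization over the full day's work orders.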
Putting It All Together: Enterprise GIS
When GIS applications servicing various phases of the work flow are interoperable and a networked GIS distributes geographic data to desktops and mobile devices, the value of GIS to the organization moves well beyond that of a departmental tool. For example, a sales representative can make a compelling business case for the sale of bandwidth to a corporate prospect by showing the prospect's location in relation to the telecommunications company's infrastructure. Network infrastructure provided by AM/FM systems is used for decision support in the provisioning process. Technicians in the field locate the correct manhole, pole, or access point by using the same data. Coverage maps and testing data for wireless networks can be instantly viewed by call center operators dealing with customer complaints. More complex applications include geospatial data in data warehousing systems and are used in conjunction with On Line Analytical Processing (OLAP) clients to add a "where" dimension to corporate business intelligence.
The ArcGIS 8.1 suite is a fully scalable GIS that can work in a heterogeneous environment and support the tools, databases, and networks that telecommunications companies require. Esri is working to integrate GIS applications in the TMN hierarchy. This will improve enterprise deployments and resolve interoperability issues. Field engineering tools and the use of mobile networks making geographic information available through wireless devices to business and consumer users will further increase the value of GIS. Telecommunications executives who make complex decisions will find GIS indispensable for decision support. GIS provides an overview of the company and the work flow. The addition of location services driven by GIS will generate additional revenue for telecommunications carriers and their business partners.
The investment telecommunications companies make in geospatial data and technology will yield benefits in business process automation, improved decision support, and value-added services for years to come.
For more information on the use of GIS in the telecommunications industry, visit www.esri.com/telecomm or contact
Kees van Loo
|
<urn:uuid:25808219-e4cf-4f70-b736-1ecd698dd94e>
|
CC-MAIN-2016-30
|
http://www.esri.com/news/arcuser/1001/telecom.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258944256.88/warc/CC-MAIN-20160723072904-00306-ip-10-185-27-174.ec2.internal.warc.gz
|
en
| 0.923138 | 1,949 | 2.65625 | 3 | 2.943761 | 3 |
Strong reasoning
|
Science & Tech.
|
|Scientific name:||Salix alba L.|
|Common name:||White willow, Swallow tailed willow|
|Hebrew name:||ערבה לבנה, Aravah|
|Arabic name:||اصفصاف, Safsaf|
|Life form:||Phanerophyte, tree|
|Leaves:||Alternate, entire, dentate|
|Flowers:||No petals and tepals|
|Flowering Period:||March, April, May, June|
|Distribution:||The Mediterranean Woodlands and Shrublands|
|Chorotype:||Euro-Siberian - Med - Irano-Turanian|
Derivation of the botanical name:
Salix is the classical Latin name for the willow-tree.
The foliage of the Aravah (willow) and Euphrates poplar (Tzaftzafah) look similar and that's why they are mixed up.
The terms Aravah and Tzaftzafah are interchangeable. What was once called Aravah is now called Tzaftzafah, and what was called Tzaftzafah is now called Aravah. The original Aravah is a willow branch with a red stem and long smooth leaves, and it grows near the river. The Tzaftzafah has a white stem and its leaves are round and jagged.
The 'four species' requires an Aravah as one of the four, and a Tzaftzafah is not valid for use.
Christian churches in northwestern Europe often used willow branches in place of palms in the ceremonies on Palm Sunday (Sunday before Easter).
Aspirin is acetylsalicylic acid, an industrial synthesis of salicin, which occurs naturally in white willow (Salix alba).
Two Italians, Brugnatelli and Fontana, had in fact already obtained salicin in 1826, but in a highly impure form. By 1829, [French chemist] Henri Leroux had improved the extraction procedure to obtain about 30g from 1.5kg of bark. In 1838, Raffaele Piria (1814 – 1865), an Italian chemist, then working at the Sorbonne in Paris, split salicin into a sugar and an aromatic component (salicylaldehyde) and converted the latter, by hydrolysis and oxidation, to an acid of crystallised colourless needles, which he named salicylic acid.
In the laboratory, Carl Jacob Löwig (1803 – 1890), a German chemist, treated salicin with acid--as salicin is acted on in the human stomach--to make salicylic acid, and about that time salicylic acid was also discovered occurring naturally in a European species of Spiraea (dropwort). Salicylic acid had major medicinal uses and soon became a panacea, a medication which can heal any problem.
A different compound was synthesized in 1853 by Carl von Gerhardt by putting an acetyl group on salicylic acid, making acetylsalicylic acid.
In 1893, Felix Hoffman (1868 – 1946), an employee of Friedrich Bayer and Company, found an easier way to make this chemical salt and then tested it on his father, who had arthritis. In 1899, Bayer, which started in 1863 as a dye production company, marketed this medicine as "aspirin"--coming from the words 'acetyl' and Spiraea.
Aspirin was a patented name by Bayer, but is now a vernacular name.
|
<urn:uuid:c3792af9-f552-4a1a-8424-5917cec97089>
|
CC-MAIN-2019-26
|
http://www.flowersinisrael.com/Salixalba_page.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00209.warc.gz
|
en
| 0.907848 | 778 | 3.3125 | 3 | 2.596394 | 3 |
Strong reasoning
|
Science & Tech.
|
Immobility can have serious consequences. In the case of air travel, the prolonged immobility forced on passengers during flights of four, five or ten hours can cause a potentially deadly blood clot, the Centers for Disease Control (CDC) tell us. While any trip over four hours in length by car, train, bus or airplane can cause a deep vein thrombosis, airline passengers have the least mobility and are at greatest risk.
A deep vein thrombosis (DVT), according to the National Heart, Lung and Blood Institute, is a blood clot that forms deep in a muscle in the body. Most are found in the lower leg or thigh. A DVT is a potentially life-threatening condition that requires immediate medical attention.
The American Society of Hematology calls the occurrence of DVT while traveling by air "economy class syndrome." Passenger overcrowding and prolonged immobility force blood to pool in the lower legs. Pooling increases the risk of clot development due to the lack of sufficient blood circulation. The longer the flight, the greater the risk with flights of eight hours or more creating the greatest risk for a DVT.
Symptoms of a DVT, the CDC states, include swelling in the affected limb, unexplained pain or tenderness, a warmth to the skin in the area and a noticeable redness. The danger of a DVT lies in the potential for part of the clot to detach and travel through the circulatory system to the lung. Lodged in the lung, a clot is called a pulmonary embolism.
A pulmonary embolism is described by the U.S. National Library of Medicine as "a sudden blockage in a lung artery." It can cause permanent damage to the lung. Symptoms can include chest pain, trouble breathing, fainting or loss of consciousness, a fast or abnormal heart beat or coughing up blood. A large enough PE or multiple small PEs can be fatal.
Preventing a travel-related deep vein thrombosis is easily done. Stand up and walk around when you can. Ask your physician if graduated compression stockings would be useful. The website myDr.com.au provides a series of exercises that can be done while seated to improve blood flow to the lower legs.
Most DVTs will heal without complication. The treating physician can advise the patient on types of activities that can be done during the healing process. The doctor can also determine if there is any further risk of DVT in the future.
|
<urn:uuid:2bd48e0a-f3ad-4a66-b98a-c695298861f6>
|
CC-MAIN-2015-18
|
http://www.examiner.com/article/blood-clots-and-long-trips?cid=rss
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658904.34/warc/CC-MAIN-20150417045738-00298-ip-10-235-10-82.ec2.internal.warc.gz
|
en
| 0.934864 | 511 | 3.109375 | 3 | 1.865322 | 2 |
Moderate reasoning
|
Health
|
Teaching Philosophy 32 (3):233-245 (2009)
This article is an introduction to classroom response systems ("clickers") for philosophy lecture courses. The article reviews how clickers can help re-engage students after their attention fades during a lecture, can provide student contributions that are completely honest and free of peer pressure, and can give faculty members a rapid picture of student understanding of the material. Several specific applications are illustrated, including using clicker questions to give students an emotional investment in a topic, to stimulate discussion, to display changes of attitude, and to allow for the use of the peer instruction technique, which combines lectures and small groups.
Update on Attention Deficit Hyperactivity Disorder (ADHD) treatments
In our central nervous system disorders focus month, Suzanne McCarthy provides us with an update on the treatments available in the UK for patients with Attention Deficit Hyperactivity Disorder (ADHD).
Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder characterised by symptoms of inattention, impulsivity and hyperactivity. The prevalence of ADHD in children globally is reported to be approximately 5%.1 Traditionally, ADHD has been thought of as a condition affecting children, which they ‘grow out’ of as they get older. However, it is now recognised as a valid diagnosis in adults, as the symptoms and impairments of ADHD are known in many patients to persist beyond childhood and adolescence. This article will focus on the pharmacological treatments available for ADHD and the epidemiological data on their use across the life span.
In the UK, two central nervous system stimulants are licensed for the treatment of ADHD in children and adolescents: methylphenidate and dexamfetamine. The mode of action of the stimulants in treating ADHD symptoms is not completely understood, but they are thought to work by increasing the intrasynaptic concentrations of the neurotransmitters dopamine and noradrenaline in the frontal cortex of the brain, as well as in subcortical regions associated with motivation and reward.2 Both methylphenidate and dexamfetamine do this by blocking the presynaptic membrane dopamine transporter (DAT); dexamfetamine also releases the neurotransmitters into the extraneuronal space by blocking the intraneuronal vesicular monoamine transporter (VMAT) and is thus the more potent stimulant.3
Methylphenidate is licensed as part of a comprehensive treatment programme for ADHD in children aged 6 years and over when psychological, educational and social measures prove insufficient.4 It is available in a range of immediate-release (Ritalin®, Medikinet®, Equasym® and generic methylphenidate) and extended-release preparations (Concerta XL®, Medikinet XL®, Equasym XL®). Methylphenidate is not licensed for the initiation of treatment for ADHD in adults. Concerta XL® is the only methylphenidate preparation licensed as continuation treatment, for patients who started treatment before adulthood, showed a clear benefit from it and whose symptoms continue into adulthood.5
“It is reported that the prevalence of ADHD in children globally is approximately 5%.”
Dexamfetamine is administered either as the generic drug or as lisdexamfetamine, an inactive prodrug which is converted to the active form, dexamfetamine.6,7 Dexamfetamine is licensed for children from 3 years of age for the treatment of refractory hyperkinetic states; in adults it is licensed only for the treatment of narcolepsy.6 Lisdexamfetamine, which received its UK marketing authorisation in February 2013, is indicated as part of a comprehensive treatment programme for ADHD in children aged 6 years and over when response to previous methylphenidate treatment is considered clinically inadequate. It is also indicated as continuation treatment in adulthood, when symptoms in adolescents persist into adulthood and patients have shown clear benefit from treatment.7
Atomoxetine is the only non-stimulant licensed for the treatment of ADHD in children of 6 years and older, in adolescents and in adults as part of a comprehensive treatment programme. Atomoxetine is a selective and potent inhibitor of the pre-synaptic noradrenaline transporter, which has minimal affinity for other noradrenergic receptors or for other neurotransmitter transporters or receptors. In 2013, atomoxetine gained marketing authorisation for use as initiation treatment in adults, the only such licensed treatment for adult ADHD in the UK. As part of the diagnostic process, the presence of symptoms of ADHD that were pre-existing in childhood should be confirmed.8
Treatment of ADHD
Guidelines on the treatment of ADHD, issued by the National Institute for Health and Clinical Excellence (NICE) in 2008 and updated in July 2013, state that drug treatment is not always first-line treatment for school-age children and young people with ADHD and “should be reserved for those with severe symptoms and impairment or for those with moderate levels of impairment who have refused non-drug interventions, or whose symptoms have not responded sufficiently to parent-training / education programmes or group psychological treatment.”3,9
“…drug treatment is not always first-line treatment for school-age children and young people with ADHD…”
Drug treatment is considered appropriate first-line treatment for children with severe ADHD and associated impairments.9 It is also considered first-line treatment for adult ADHD. The decision on which of the three medications to prescribe depends on a number of factors, including comorbid conditions, patient preference, adherence issues, response to previous medications and side-effect profile.9 Drug treatment should only be initiated by a healthcare professional with expertise in ADHD; however, continued management and prescribing can be undertaken by the patient’s general practitioner (GP) under shared-care arrangements.
Epidemiology of pharmacologically-treated ADHD
A recent study reported on the epidemiology of pharmacologically-treated ADHD in UK primary care.10 The study, which utilised data from The Health Improvement Network (THIN) database, reported on the prescribing of methylphenidate, dexamfetamine and atomoxetine by GPs in the UK to patients over 6 years between 2003 and 2008. The total number of prescriptions for the study drugs increased from 11,441 to 26,506 over the study period; methylphenidate accounted for the majority of prescriptions (96.6% in 2003; 88.6% in 2008).
The highest prevalence of prescribing was to children aged 6-12 years. Within this group, the prevalence increased from 4.8 (2003) to 9.2 per 1000 persons (2008). Prevalence decreased with increasing age, however increases were still observed over the study period. Prevalence of prescribing to teenagers increased from 3.6 per 1000 persons in 2003 to 7.4 per 1000 persons in 2008. Prevalence of prescribing increased from 0.3 (2003) to 1.1 per 1000 persons (2008) for young adults aged 18-24 years; 0.02 (2003) to 0.08 per 1000 persons (2008) for adults aged 25-45 years; 0.01 (2003) to 0.02 per 1000 persons in patients aged over 45 years. Male patients received the majority of prescriptions across all age categories; however the rate of increase in prevalence was greater in females than males for children, teenagers and young adults.
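The fold-increases implied by these figures can be computed directly. A minimal sketch using the prevalence rates quoted above (the age-group labels are shorthand for this illustration, not the study's exact strata names):

```python
# Prescribing prevalence per 1,000 persons, 2003 vs 2008,
# from the THIN study figures quoted above.
prevalence = {
    "children 6-12": (4.8, 9.2),
    "teenagers": (3.6, 7.4),
    "young adults 18-24": (0.3, 1.1),
    "adults 25-45": (0.02, 0.08),
    "over 45": (0.01, 0.02),
}

for group, (p2003, p2008) in prevalence.items():
    # Ratio of 2008 to 2003 prevalence, i.e. the fold-increase
    print(f"{group}: {p2008 / p2003:.1f}x increase")
```

The growth is proportionally largest in the adult groups, even though absolute prevalence there remains far below that in children.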
These data highlight the fact that although prescribing of ADHD medications is increasing, figures from the UK are much lower than the reported prevalence of the condition, which may alleviate some of the concerns that children with ADHD are overmedicated.11
1. Polanczyk G, de Lima M, Horta B, et al. (2007). The Worldwide Prevalence of ADHD: A Systematic Review and Metaregression Analysis. The American Journal of Psychiatry, 164, 942-948.
2. Volkow ND, Wang GJ, Fowler J S, et al. (2004) Evidence that methylphenidate enhances the saliency of a mathematical task by increasing dopamine in the human brain. American Journal of Psychiatry, 161, 1173–1180
3. National Institute for Health and Clinical Excellence. Attention deficit hyperactivity disorder: pharmacological and psychological interventions in children, young people and adults.2008. London: The British Psychological Society and the Royal College of Psychiatrists. Available at: http://guidance.nice.org.uk/CG72
4. Shire Pharmaceuticals Ltd. Equasym Summary of Product Characteristics. Electronic Medicines Compendium; [updated 11/10/2011] Accessed: 10th August 2013
5. Janssen-Cilag Ltd. Concerta XL Summary of Product Characteristics. Electronic Medicines Compendium; [updated 18/02/2013] Accessed: 10th August 2013
6. Auden Mckenzie (Pharma Division) Ltd. Dexamfetamine Summary of Product Characteristics http://www.mhra.gov.uk/home/groups/spcpil/documents/spcpil/con1376459047222.pdf [updated 23/03/2010]. Accessed: 10th August 2013
7. Shire Pharmaceuticals Ltd. Elvanse Summary of Product Characteristics. Electronic Medicines Compendium; [updated 28/02/2013] Accessed: 10th August 2013
8. Eli Lilly and Company Ltd. Strattera Summary of Product Characteristics. Electronic Medicines Compendium; [updated 28/05/2013] Accessed: 10th August 2013
9. National Institute for Health and Clinical Excellence. Attention deficit hyperactivity disorder: pharmacological and psychological interventions in children, young people and adults. Update 2013. London: The British Psychological Society and the Royal College of Psychiatrists. Available at: http://guidance.nice.org.uk/CG72
10. McCarthy S, Wilton L, Murray M, et al (2012). The epidemiology of pharmacological treatments for attention deficit hyperactivity disorder (ADHD) in children, adolescents and adults in UK primary care. BMC Pediatrics, 12:78
11. Rapoport JL (2013). Pediatric psychopharmacology: too much or too little? World Psychiatry, 12:118-23.
About the author:
Dr Suzanne McCarthy received her MPharm degree from the Robert Gordon University, Aberdeen (2003) and her PhD from the School of Pharmacy, University of London (2009).
She is a clinical pharmacy practice lecturer at the School of Pharmacy, University College Cork, Ireland and her research interests are in the field of pharmacoepidemiology and pharmacovigilance, in particular the use of large automated databases to conduct these studies. She has published widely in the areas of medication safety and psychopharmacoepidemiology, particularly relating to Attention Deficit Hyperactivity Disorder.
Welcome to the world of pollution
DIFFERENT TYPES OF POLLUTION: NOISE POLLUTION, AIR POLLUTION, WATER POLLUTION, LAND POLLUTION
AIR POLLUTION ACID RAIN: As the name suggests, acid rain is simply rain which is acidic. The rain becomes acidic because of gases which dissolve in the rain water to form various acids. Rain is naturally slightly acidic because of the carbon dioxide dissolved in it (which comes from animals breathing), and to a lesser extent from chlorine (which is derived from the salt in the sea). This gives rain a pH of around 5.0, and in some parts of the world it can be as low as 4.0 (this is typical around volcanoes, where the sulphur dioxide and hydrogen sulphide form sulphuric acid in the rain).
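For reference (this definition is standard chemistry, not from the original slides): pH is the negative base-10 logarithm of the hydrogen-ion concentration, so the drop from pH 5.0 to pH 4.0 mentioned above is a tenfold increase in acidity. A minimal sketch of that arithmetic:

```python
def h_ion_concentration(ph):
    """Hydrogen-ion concentration (mol/L) for a given pH."""
    return 10 ** -ph

# Ordinary rain (pH ~5.0) vs. rain near volcanoes (pH ~4.0)
normal = h_ion_concentration(5.0)
volcanic = h_ion_concentration(4.0)
print(volcanic / normal)  # approx. 10: each pH unit is a tenfold change in acidity
```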
About 70 percent of acid rain comes from sulphur dioxide (SO2), which dissolves into the water to form sulphuric acid. The rest comes from various oxides of nitrogen (mainly NO2 and NO3, collectively called NOx). (These figures are for Scandinavia - Scotland has a very similar ratio, while the north-eastern USA has 62 percent sulphuric acid, 32 percent nitric acid and 6 percent hydrochloric acid.) These gases are produced almost entirely from burning fossil fuels, mainly in power stations and road transport. Acid rain causes lakes and rivers to become acidic, killing off fish - all the fish in 140 lakes in Minnesota have been killed, and the salmon and trout populations of Norway's major rivers have been severely reduced because of the increased acidity of the water. Short-term increases in acid levels kill lots of fish, but the greatest threat is from long-term increases, which stop the fish reproducing. The extra acid also frees toxic metals which were previously held in rocks, especially aluminium, which prevents fish from breathing. Single-celled plants and algae in lakes also suffer from increased acid levels, with numbers dropping off quickly once the pH goes below 5; by the time the pH gets down to 4.5, virtually everything is dead.
Rather surprisingly, the effects of acid rain on trees have overshadowed the effects on people. Many toxic metals are held in the ground in compounds. However, acid rain can break down some of these compounds, freeing the metals and washing them into water sources such as rivers. In Sweden, nearly 10,000 lakes now have such high mercury concentrations that people are advised not to eat fish caught in them. As the water becomes more acidic, it can also react with lead and copper water pipes, contaminating drinking water supplies. In Sweden, the drinking water reached a stage where it contained enough copper to turn your hair green! Slightly more worryingly, that much copper can also cause diarrhoea in young children, and can damage livers and kidneys.
THE GREENHOUSE EFFECT: The Earth is kept warm by its atmosphere, which acts rather like a woolly coat - without it, the average surface temperature would be about -18 degrees Centigrade. Heat from the sun passes through the atmosphere, warming it up, and most of it warms the surface of the planet. As the Earth warms up, it emits heat in the form of infra-red radiation - much like a hot pan emits heat even after it's taken away from the cooker. Some of this heat is trapped by the atmosphere, but the rest escapes into space. The so-called "greenhouse gases" make the atmosphere trap more of this radiation, so it gradually warms up more than it should, like a greenhouse (although a greenhouse actually does this by stopping warm air rising and escaping from it).
The Inter-Governmental Panel on Climate Change has predicted that this rise of one degree will happen by the year 2025. This could potentially cripple the North American corn belt, which produces much of the world's grain, leading to much higher food prices and even less food for the Third World than it already has. However, it would also mean that some countries further north would be able to grow crops they had never been able to before, although there is less land as you move north from the corn belt. The other serious worry is that rising sea levels from the melting of the polar ice caps could severely flood many countries. A rise in sea levels of one metre, which many experts are predicting by the year 2100 (and some as soon as 2030), would flood 15 percent of Egypt and 12 percent of Bangladesh. The Maldives in the Indian Ocean would almost completely disappear. Most of the countries which would suffer most from a rise in sea levels are the poor island states, so the islands in the Caribbean, South Pacific, Mediterranean and Indian Ocean have formed the Alliance of Small Island States (AOSIS), so that they have a louder voice in international politics and can make the richer developed world listen to their problems. Closer to home, Britain would lose most of East Anglia, and protecting the coastline would cost an estimated 5 to 10 billion pounds.
There are some natural greenhouse gases: water vapour, nitrous oxide, carbon dioxide, methane and ozone. However, over the past fifty years, production of carbon dioxide, nitrous oxide and methane has risen sharply, and a new type of chemical - the chlorofluorocarbon, or CFC - has been introduced as a refrigerant, solvent and aerosol propellant. It is also a very powerful greenhouse gas, because it can trap a lot of radiation - one molecule of CFC is 12,000 to 16,000 times as effective at absorbing infra-red radiation as a molecule of carbon dioxide. The carbon dioxide comes mainly from burning fossil fuels in power stations, which also causes acid rain. It is also created by living animals breathing, and is naturally converted by plants back to oxygen. However, deforestation is reducing the planet's carbon dioxide absorbing capability. Nitrous oxide is a by-product of nylon production, and is also released by fertiliser use in agriculture. The extra methane is produced in coal mining, natural gas production and distribution (natural gas is methane), and waste disposal. One fifth of all methane generated by human activity comes from microbial decay of organic material in flooded rice fields.
This graph shows how much each gas contributes to the greenhouse effect, taking into account how much of it there is and how much radiation it can absorb.
NOISE POLLUTION Sound is such a common part of everyday life that we often overlook all that it can do. It provides enjoyment, for example through listening to music or bird-song. It allows spoken communication. It can alert or warn us, say through a door-bell or a wailing siren. In engineering it can tell us when something has slightly changed, as in a squeaking car. Yet in a modern society sound often annoys us. Many sounds are unpleasant or unwanted, and these are classed as noise. Causes of noise: transport noise, industrial noise, noise in the sea and social noise.
Road Noise This mainly comes from cars, buses, lorries, vans and motorbikes, and each of these makes noise in a variety of different ways. Typically the things that bother people the most are engine starting, gear changing, car stereos, brakes and tyres. Half the responsibility for keeping a vehicle quiet lies with the driver: making sure the car is in good working order, for example that the brakes don't squeak. Drivers must also be aware that their vehicle is likely to cause a noise, and drive it in a way that reduces the annoyance to others; not racing along quiet residential roads, and avoiding driving at night unless necessary. The simple solution is to make people park their cars a few minutes' walk away from residential areas, but when you suggest this to people, it turns out that they would prefer to occasionally be disturbed by noise rather than have to walk to their car or the nearest bus stop. Aircraft Noise This is a major problem for those who live near a busy commercial or military airport, but for most people aircraft noise goes unnoticed. As large planes have been changing from pure jet engines to fan-jet engines, the amount of noise they generate has been decreasing, albeit slowly, as the old aircraft are only phased out after their useful life (typically 20 years).
However, as the planes get quieter, the airports grow, becoming busier and handling more planes every day. This means that while the total number of people affected slowly reduces, those who can still hear it hear more planes and are woken up more often at night. Industrial Noise Industrial noise comes from either an established factory or building works. Industrial noise is much more of a problem for people working in a factory, who might suffer permanent hearing damage as a result, than for the general public, who merely report annoyance at it. Because of this, most of the engineering solutions and regulations governing factory noise deal with the high levels inside, though this does have a benefit outside the factory. There are, however, specific guidelines that the government has developed, under the guidance of engineers, to estimate the number of people likely to be affected by an industrial noise. With this tool, engineers can plan factories so they will disturb as few people as possible in the surrounding area. Once again the benefits of a new factory have to be weighed against the disturbance to the local inhabitants, and, if necessary, some compensation offered to those affected.
Rail traffic The level of noise associated with rail traffic is related to the type of engine or rolling stock used, the speed of the train and track type and condition. Major NSW population centres are served by electric trains which are generally quieter than diesel. Areas affected by freight trains often experience higher noise levels than areas affected by passenger trains. The problem of noise is compounded by the requirements of railway operations (especially night operations) and factors such as stopping patterns and topography which can lead to localised problems. Rail noise can be considerable, but generally affects a far smaller group of the population than road or aircraft noise as it is generally confined to residents living along rail lines in urban areas (ABS 1997b). While changes to locomotives and rolling stock mean that they have become quieter over the last few years, railway noise remains a problem because of longer, more frequent and faster trains and the build up of the urban environment.
Ocean Noise The ocean has always been a noisy place to live. Breaking waves cause lots of noise, shrimps click their claws, surf breaks on the beach, and various fishy noises all contribute to the general hubbub. Now, however, the greater amount of shipping has dramatically increased the noise in the ocean, drowning out all the natural noises. Huge engines hammer away, driving the ships across the oceans, radiating sound from their propellers and through their hulls. Through all this clamour there is one creature that really relies on hearing quiet noises across vast distances, and that creature is the whale. Whale song has been popular for several years now, but the whales have been using it much longer than that. It is widely believed that whales use their song to communicate with each other across hundreds of miles of ocean. With the increase in noise in the ocean, people are beginning to worry that the whales won't be able to hear each other, and so will be less likely to find each other. This could affect their migration patterns, and so affect their population. As always it comes down to the engineer to improve on what has gone before. The ship owners don't want to pay a fortune to make their ships quiet for the benefit of a few fish (I know they're mammals really), so combined with government legislation, the engineers make ships that are cheaper, faster and more efficient, while still making less noise than older ships. This keeps both the ship owners and environmentalists happy, while allowing the whales to sing in peace.
Social Noise Of all sorts of noise, neighbourhood noise is the greatest source of noise nuisance and complaints. A survey carried out in the UK in 1986/87 estimated that 14% of the adult population was bothered by neighbourhood noise, compared with 11% by road traffic noise and 7% by aircraft noise. The sources of neighbourhood noise, in order of number of complaints, were: amplified music; dogs; domestic activities; voices; DIY; car repairs; with 10% complaining about something else. Engineers strive to make these complaints less frequent. Often there is little engineers can do to reduce the noise at source. People are people, and will make a noise. What can be done is to stop the noise as it travels from the source to the listener. Double glazing and better insulated walls are two low-tech solutions to the problem. Hi-tech solutions include the active control of sound: for every noise, making an anti-noise and having the two cancel out. But active control is still too expensive and unreliable to apply to general cases at the moment. Just remember that every time you hear a noise that annoys you, you might have annoyed someone else with yours.
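The "anti-noise" idea above can be illustrated numerically: a waveform summed with its phase-inverted copy cancels exactly in the ideal case. This is only a sketch of the principle, not a working noise-cancelling system (real systems must estimate the noise in real time and only approximate this cancellation):

```python
import math

# Sample a 100 Hz tone at 8 kHz, then build its phase-inverted "anti-noise" copy
sample_rate = 8000
noise = [math.sin(2 * math.pi * 100 * t / sample_rate) for t in range(200)]
anti_noise = [-s for s in noise]

# Where both waves are present, they sum to silence
residual = [n + a for n, a in zip(noise, anti_noise)]
print(max(abs(r) for r in residual))  # 0.0 - perfect cancellation in the ideal case
```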
Water Pollution Water is probably one of the most important resources we have. People can survive without food for several weeks, but without water we would die in less than one week. On a slightly less dramatic note, millions of litres of water are needed every day worldwide for washing, irrigating crops, and cooling industrial processes, not to mention leisure industries such as swimming pools and watersports centres. Despite our dependence on water, we use it as a dumping ground for all sorts of waste, and do very little to protect the water supplies we have. There are several threats to our water resources. Oil spills kill thousands of seabirds and can wreck water desalination plants and industrial plants drawing their water from affected coastlines. However, oil can get into the sea from many other sources, and cause just as much damage. Poor management of existing water resources can lead to those resources running out or at least shrinking, such as the Aral Sea. More locally, the North Sea is suffering from heavy pollution. Much of the pollution in rivers and seas comes from chemicals, mainly from agriculture. Another pollution issue which is often overlooked is thermal pollution.
LAND POLLUTION When we hear a person describe a place as 'dirty', what usually comes to our minds is the bad condition of the place. The place, which could be your bedroom, is imagined to have clothes scattered on the floor and books unarranged on the shelf. However, in our site, we have decided to define the word 'dirty' in a more specific manner. 'Dirty', in our definition, means that there is rubbish or litter on the floor. This makes the atmosphere of that place unpleasant not only to the eye, but also to the mind. Land pollution is therefore the dirtying of the land. It comes about due to inconsiderate dumping of waste, littering and ineffective waste disposal methods.
SOLID WASTE DISPOSAL The refuse collected in 2001 was disposed of at the incineration plants or in the sanitary landfill. The four incineration plants at Ulu Pandan, Tuas, Senoko and Tuas South processed a total of 2.55 million tonnes, or 91.0%, of the total refuse generated in Singapore. The rest of the refuse was disposed of at Semakau Landfill.
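These two figures also pin down the totals: if 2.55 million tonnes was 91.0% of the refuse, the total generated was about 2.80 million tonnes, leaving roughly a quarter of a million tonnes for Semakau Landfill. A quick arithmetic check:

```python
incinerated = 2.55e6  # tonnes incinerated in 2001
share = 0.91          # fraction of total refuse that was incinerated

total = incinerated / share      # total refuse generated
landfilled = total - incinerated # remainder sent to Semakau Landfill
print(round(total), round(landfilled))  # roughly 2.80 million and 0.25 million tonnes
```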
Drivers who talk on mobile phones when behind the wheel clog traffic, drive slower on the freeway, pass sluggish vehicles less often and take longer to complete their trips, according to a new study.
The study, led by Dave Strayer, a psychology professor at the University of Utah, had subjects talk on a cell phone while operating a driving simulator.
"At the end of the day, the average person's commute is longer because of that person who is on the cell phone right in front of them. That SOB on the cell phone is slowing you down and making you late," said Strayer.
Joel Cooper, a University of Utah doctoral student in psychology and the study's co-author, said: "If you talk on the phone while you're driving, it's going to take you longer to get from point A to point B, and it's going to slow down everybody else on the road."
Peter Martin, an associate professor of civil and environmental engineering and director of the University of Utah Traffic Lab and co-author said that the study shows "cell phones not only make driving dangerous, they cause delay too."
In the study, the researchers used a PatrolSim driving simulator. A person sits in a front seat equipped with gas pedal, brakes, steering and displays from a Ford Crown Victoria patrol car. Realistic traffic scenes are projected on three screens around the driver.
The study involved 36 volunteers. Each participant drove through six, 9.2-mile-long freeway scenarios, two each in low, medium and high-density traffic, corresponding to freeway speeds of 70 mph to 40 mph.
Each 9.2-mile drive included 3.9 miles with two lanes in each direction and 5.3 miles with three lanes each way. Every volunteer spoke on a hands-free cell phone during one drive at each level of traffic density, and did not use a cell phone during the other three drives. A participant on the other end of the phone was told to maintain a constant exchange of conversation.
The drivers were told to obey the 65-mph speed limit, and use turn signals. That let participants decide their own speeds, following distances and lane changes.
In medium and high density traffic, drivers talking on cell phones were 21 percent and 19 percent, respectively, less likely to change lanes.
In low, medium and high traffic density, cell phone users spent 31 percent, 16 percent and 12 percent, respectively, more time following within 200 feet of a slow lead vehicle than undistracted drivers. That meant they spent 25 to 50 more seconds following another vehicle during the 9.2-mile drive.
"We designed the study so that traffic would periodically slow in one lane and the other lane would periodically free up. It created a situation where progress down the road was clearly impeded by slower moving vehicles, and a driver would benefit by moving to the faster lane, whether it was right or left," Cooper said.
Compared with undistracted motorists, drivers on cell phones drove an average of 2 mph slower and took 15 to 19 seconds longer to complete the 9.2 miles. That may not seem like much, but is likely to be compounded if 10 percent of all drivers are talking on wireless phones at the same time, Cooper said.
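As a rough illustration of that arithmetic (the specific speeds below are assumptions for illustration, not figures from the study), driving the 9.2-mile route 2 mph below a 65-mph baseline adds about 16 seconds, which sits inside the 15- to 19-second range reported:

```python
def travel_seconds(miles, mph):
    """Time in seconds to cover a distance at a constant speed."""
    return miles / mph * 3600.0

distance = 9.2  # miles, the length of each simulated drive
# Hypothetical speeds: an undistracted driver at the 65-mph limit
# vs. a distracted driver going 2 mph slower
base = travel_seconds(distance, 65.0)
slower = travel_seconds(distance, 63.0)
print(round(slower - base, 1))  # 16.2 extra seconds from the 2 mph drop
```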
The researchers said: "Results indicated that, when drivers conversed on a cell phone, they made fewer lane changes, had a lower overall mean speed and a significant increase in travel time in the medium and high density driving conditions."
The study will be presented on Jan. 16 during the Transportation Research Board's annual meeting.
World Music Day is celebrated every year on June 21 to honour musicians and singers for the gift of music, which gives flight to the imagination and life to everything. A world without music would be meaningless to many, and World Music Day celebrates the power of this art.
Also known as Fête de la Musique, it is observed to encourage budding, young and professional musicians to perform. More than 120 countries celebrate World Music Day and organise free public concerts in parks, stadiums and public places. Music lovers put on musical shows and events on this special day.
History of Music Day:
World Music Day was first celebrated in 1982, on the summer solstice, in France. The then French Minister of Culture, Jack Lang, and Maurice Fleuret started Fête de la Musique in Paris. Fleuret, who played a pivotal part in establishing a day for celebrating music, was a French composer, music journalist, festival organiser and radio producer. Since then, the celebration has become popular in countries across the globe.
Theme and Events of World Music day 2021:
The theme of World Music Day 2021 is "Music at the intersections". The day is also known as 'Make Music Day', which stresses the importance of nurturing one's passion for music and sharing music with everyone without restriction. Because of the Covid-19 pandemic, this year will not see public concerts commemorating World Music Day, but the spirit of the day will continue to ring in the minds of music lovers.
Importance of Music Day:
World Music Day is celebrated to bring free music to all music lovers and to give amateur and professional musicians a platform to showcase their talents to the world. It highlights the significance of music and its value for the human mind and body.
Many studies and specialists have pointed out that music helps reduce stress, promotes better sleep and keeps us going. Music therapy has worked wonders for people battling mental health issues, and it can also help a person exercise better. With the right kind of music, people can focus on their work and perform better.
According to one study, listening to music releases certain hormones in our body. These hormones play a vital role in strengthening the body's immune system and easing many ailments of the mind.
Music Day 2021 Celebrations:
Since the first celebration in 1982, World Music Day has spread to various nations across the globe and become an international phenomenon. Countries such as India, Italy, Australia, the United States, Japan, China and Malaysia observe the day. Beginner musicians and veterans alike come out into the streets, take part in musical concerts and perform.
In Paris, the streets fill with the sound of music and music lovers groove to it. Music lovers in Paris and elsewhere in France come out for Fête de la Musique and enjoy fests, feasts, parades and fairs.
This year, however, owing to the pandemic, World Music Day festivities will be held in a low-key way, and large public gatherings seem highly unlikely. Many music associations and organisations will hold online shows, competitions and fests on World Music Day.
Happy World Music Day!
Explore the Heritage and History
Explore the Shores
—Manistee County, Michigan: Where Life Meets Water —
As early as 10,000 years ago, nomadic people were following the bountiful harvests of fish and game the Manistee River provided. By 500 B.C., natives began settling this land, setting up camps and farming.
The lands were controlled by the Algonquin Nation, which utilized the Manistee River as a commercial highway along which fishing, trapping and trading thrived. When European explorers arrived, the Native American tribes here were the stewards of these rich resources.
When the pioneers moved to the Midwest, a reservation was established by the U.S. government, but it was dismantled by the late-1840s and the land was sold off to settlers. In the mid-1990s, the Little River Band of Ottawa Indians was recognized as a sovereign nation and the tribe was able to restore portions of their reservation land.
The same natural resources that had been utilized for thousands of years became a draw for developing industry in the mid-19th century. In 1841, the first lumber mill was constructed on the shores of Manistee Lake. Within 45 years, there were 40 sawmills in operation and the City of Manistee boasted more millionaires per capita than anywhere else in the United States.
As industry in the area diversified (beginning in 1881, when salt was discovered beneath Manistee), access to a deep-water port and railroads became instrumental to the area's continued growth.
Interspersed among the wetlands that line the shore of Manistee Lake, industry lives. And so do people. Because of its history, Manistee Lake is in the unique position of trying to balance industry, residential use and wetlands so the amazingly diverse biological communities that call these waters home can continue to thrive.
[Top right image caption reads] Manistee Lumber Company
Erected by Manistee County Alliance for Economic Success.
Location. 44° 15.437′ N, 86° 19.035′ W. Marker is in Manistee, Michigan, in Manistee County. Marker is on Arthur Street (U.S. 31), on the right when traveling north. Marker is at or near this postal address: Arthur Street Boat Launch, Manistee MI 49660, United States of America.
Other nearby markers. At least 8 other markers are within walking distance of this marker. U.S.S. Michigan (approx.
Also see . . .
1. Arthur Street Boat Launch / Fishing Pier. (Submitted on September 3, 2016, by William Fischer, Jr. of Scranton, Pennsylvania.)
2. History of Manistee, Michigan. (Submitted on September 3, 2016, by William Fischer, Jr. of Scranton, Pennsylvania.)
3. History of Manistee City (1882). (Submitted on September 3, 2016, by William Fischer, Jr. of Scranton, Pennsylvania.)
Categories. • Environment • Industry & Commerce • Native Americans • Waterways & Vessels •
Credits. This page was last revised on September 3, 2016. This page originally submitted on September 3, 2016, by William Fischer, Jr. of Scranton, Pennsylvania. Photos: 1, 2, 3, submitted on September 3, 2016, by William Fischer, Jr. of Scranton, Pennsylvania.
In early January 2020, Dominic Cummings, the Prime Minister’s chief Special Adviser, wrote a blog piece in which he advertised for advisers to work in No 10. One of the groupings was for “weirdos” and “misfits”. Andrew Sabisky was appointed. The media trawled through Sabisky’s own blog for his thoughts, finding that he’d said, for example:
—There are excellent reasons to think the very real racial differences in intelligence are significantly – even mostly – genetic in origin
—One way to get around the problems of unplanned pregnancies creating a permanent underclass would be to legally enforce universal uptake of long-term contraception at the onset of puberty.
—Eugenics are about selecting ‘for’ good things. Intelligence is largely inherited and it correlates with better incomes; physical health, income, lower mental illness. There is no downside to having IQ except short-sightedness.
The first of these three comments is an example of ‘scientific racism’, the second is an example of eugenics. The first comment is factually incorrect. Human eugenics is wholly discredited, both morally and scientifically. The third comment misunderstands what IQ is. Shortly after these and other, similar, comments became public knowledge, Sabisky resigned. What are the origins of such thinking?
Differences in skin pigmentation and facial structure have been obvious for millennia. The earliest form of racism seems to be antisemitism. Jews have been stigmatised and persecuted since ancient times, even before the rise of Christianity, under which they came to be held responsible for the death of Jesus. They lent money at interest, then called usury, when Christians were forbidden to do so, and were said to indulge in practices that sound more like black magic. The Jews were expelled from England in 1290, not returning until Cromwell's time. They were expelled from Spain in 1492, when many found refuge in the tolerant Muslim Ottoman Empire.
More generally, the ’scientific’ study of race began during the Enlightenment, the Age of Reason. This was also the time of European colonisation and empire building, when the ‘whites’ became more aware of other ‘races’. These classifiers were Western Europeans. The various human ‘races’ were described in relation to skin colour, physiognomy (the ‘science’ of judging peoples’ character from their facial appearance), and type of hair with an admixture of ignorance and prejudice. Linnaeus thought there were five types, Africans, Americans, Asians, Europeans and ‘monsters’. Johann Blumenbach described five ‘races’:
- The white or Caucasian
- The yellow or Mongolian
- The brown or Malayan
- The black or Ethiopian
- The red or American
De Gobineau believed in three races, black, white and yellow. Blacks, he thought, were the strongest but incapable of intelligent thought; the yellows were physically and mentally mediocre, while whites (of course) were the best because they were capable of intelligent thought, could create beauty, and were the most beautiful. Overall, though, there was no settled agreement about the number of races. (Human facial beauty has subsequently been studied; most people prefer faces that are symmetrical. Faces with proportions in the Golden Ratio are considered beautiful. Early thinkers used Greek statues as a comparator; such statues are often the personification of beauty. And originally they were painted in bright colours to make them wear-resistant; they weren’t ‘white’.)
These classifications and other similar ones still find echoes today. I need hardly say that there is no biological, that is genetic, basis for such classifications, or for the attributes attached to them. The differences we can observe between different populations are a result of different cultures and environments. Race is a social construct.
Charles Darwin published On the Origin of Species by Means of Natural Selection in 1859. His cousin, Francis Galton, was intrigued and became convinced that all human characteristics, and particularly intelligence, were the result of inheritance. Thus, the ruling classes were the elite because of their genetic inheritance, and not because of wealth and privilege. Likewise, insanity and mental degeneracy were a result of 'genetic determinism'. He collected data by measuring physical characteristics (anthropometrics) and mental abilities (psychometrics). He also made major developments in statistics, as did his successor Karl Pearson; it is for this that they are remembered today rather than for their racism.
Convinced by such arguments, in the early 20th century, ‘mental degenerates’ were rounded up in the UK, and kept in asylums. Programmes of forced, involuntary sterilisation were introduced in Sweden and in the US. In Germany, Nazi ideology encouraged extramarital breeding from ‘racially pure and healthy’ parents to raise the birth rate of Aryans, a wholly specious race. Further, those whom the Nazis viewed as degenerate peoples, Jews, homosexuals, the Roma and others were not only segregated and sterilised, but murdered in what is now known as the Holocaust. Eugenics was (mostly) abandoned after World War II; eugenicists rebranded themselves as geneticists.
It’s clear that artificial breeding works in plants, producing standardised, disease-resistant but heavy cropping varieties. In animals, selective breeding produces pedigree animals, ones that conform to what experts expect. But this comes at a cost; such animals are produced by inbreeding, and these animals are prone to hereditary defects. Inbreeding in humans is also associated with congenital diseases such as haemophilia.
Gregor Mendel, an Austrian monk, experimented with peas, and from this formulated his ideas of dominant and recessive genes. Although he published in the middle of the 19th century, his ideas weren't widely known for half a century. Less well known is that he didn't use just any peas; he inbred peas, producing seven strains that 'bred pure' for various characteristics, and it was with these that he experimented; his results would otherwise have been lost in the 'noise'. No humans are 'purebred'; we are all mongrels.
Mongrels? A generation is conventionally taken to be 25 to 30 years, and the number of our ancestors doubles every generation. On this basis about 1000 years ago we have one trillion ancestors; this is clearly impossible, as the best estimate is that around 107 billion is the total number of people who have ever lived. The genetic isopoint is when the entire population are the ancestors of today’s population. For Europe this was around 1400CE; for the world population, it was around 3400BCE. Every one of us is descended from all the global population then. There are genetic similarities within populations; but there are no sharp boundaries between populations, rather a gradual merging or blending of the two. And there are greater genetic differences within populations than between populations. Racial ‘purity’ is an impossible fantasy. Sorry, Gaels and Planters; you aren’t ‘pure’ and neither would you want to be because of recessive genetic disease.
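The doubling argument above is easy to verify; a quick sketch, assuming the conventional 25 years per generation:

```python
years = 1000
years_per_generation = 25
generations = years // years_per_generation   # about 40 generations back
ancestors = 2 ** generations                  # the count doubles each generation
print(f"{ancestors:,}")                       # 1,099,511,627,776 -- about one trillion
```

Since only about 107 billion people have ever lived, the same individuals must appear in our family trees many times over.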
Intelligence combines reason, problem-solving, abstract thought, learning capacity and the understanding of ideas. The first rigorous attempts at measuring and quantifying intelligence were made by Binet just over a century ago; the score was calculated by dividing the mental age by the chronological age and multiplying by 100. This produced an intelligence quotient, or IQ; the average for a population was 100. About two-thirds of people (one standard deviation) are in the range 85 – 115, and 95% (two standard deviations) lie between 70 and 130. Today's tests (attempt to) measure reason, mental processing speed, spatial awareness and knowledge.
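Binet's ratio and the quoted population proportions can be sketched with standard normal-distribution arithmetic (the example ages are hypothetical, used only to illustrate the formula):

```python
import math

def ratio_iq(mental_age, chronological_age):
    # Binet's original ratio IQ: mental age over chronological age, times 100.
    return mental_age / chronological_age * 100

def fraction_within(k_sd):
    # Fraction of a normal population within k standard deviations of the mean.
    return math.erf(k_sd / math.sqrt(2))

print(ratio_iq(12, 10))               # 120.0
print(round(fraction_within(1), 3))   # 0.683 -> roughly two-thirds, IQ 85-115
print(round(fraction_within(2), 3))   # 0.954 -> about 95%, IQ 70-130
```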
IQ scores for populations have been found to be rising at about 3 points per decade; this is known as the Flynn effect. For example, the average IQ in Ireland was 85 in 1970 by comparison to the UK where it was 100; in Ireland today it is 100. This is far too short a time scale for a genetic effect. The generally accepted explanation relates to the ‘environment’ including better nutrition and health, an increased standard of living and general socio-economic development. Does this accurately describe the changes in Ireland in the past half-century? Has Ireland gone from a poor, impoverished, even backward country to one which is wealthy, well educated and which has a vibrant economy?
While it’s difficult to assess accurately, today’s best estimate is that genes account for 40% to 60% of a person’s intelligence, with the environment, including nurture, accounting for the rest; crudely, about half nature and half nurture. It’s clear that genetics does not account for all or even the great majority of intelligence. The short-sightedness associated with intelligence may be genetic, but it’s known that close study, such as reading, has a very significant effect. I was told that myopia is common in Jewish boys but not girls; only boys study the Talmud in exquisite detail.
Scientific racism is a pseudoscientific attempt to show that certain races, that is ‘white’ races, are genetically superior to others. It uses comparisons of IQ in this venture. It does seem correct that peoples in sub-Saharan Africa have IQs 20 points less than those in the UK (taken as 100). It’s also true that they are ‘developing’ rather than ‘developed’ countries. However, the ‘highest’ IQ scores, again by comparison with the UK, are in Hong Kong, Singapore, South Korea, Japan and China, where scores are in the range 105 – 108. There is a culture of study and learning in these countries. Characteristically, researchers in this field such as Richard Lynn, previously a Professor of Psychology at the Ulster University, are described as ‘controversial’.
It’s surely clear that eugenics and scientific racism are thoroughly discredited, both morally and scientifically. The comments Mr Sabisky made are simply wrong in every detail; it is concerning that there does seem to be a recrudescence of such ideas today, and alarming to think that these ideas might be at the heart of government. Neither Dominic Cummings nor the Prime Minister’s spokesman have distanced themselves from these comments.
Angela Saini and Adam Pearson presented a two-part documentary called Eugenics: Science’s Greatest Scandal on BBCTV last year. It is not available on iPlayer at present.
Angela Saini’s book Superior: the Return of Race Science (2019) and Adam Rutherford’s How to Argue with a Racist (2020) are up to date accounts, and well worth reading.
There is a list of further reading here.
My thanks to SeaanUiNeill, Dr Madeleine Morris and Professor Seán Danaher for their comments.
Robert Campbell is a retired surgeon.
By definition, Native language identification (NLI) is the task of determining an author's native language based only on their writings or speeches in a second language. This is an application of Machine Learning. Let us break this down to know what this actually means.
English is a language widely used throughout the world, but not everyone who writes or speaks English is a native speaker. Whenever somebody who is not a native English speaker writes or speaks in English, their language tends to differ from that of a native speaker, because their native language influences their writing and speaking style. For example, an Indian person's spoken or written English generally differs from a native speaker's. Interestingly, it also tends to differ from, say, a Chinese person's. So we can say that every region has its own style of writing or speaking English.
NLI is a field of Natural Language Processing which deals with identifying these subtle differences between non-native speakers. NLI is based on the assumption that the mother tongue (e.g., Hindi for an Indian or Arabic for an Arab) influences second language acquisition and production (for example, of English).
Why study NLI?
There are two important reasons to study NLI.
Firstly, there is second language acquisition. NLI methods can be used to investigate the influence of the native language on foreign language acquisition and production. For example, how does Hindi affect an Indian speaker's ability to read and write English? What inflections does it introduce into their English?
The second reason is a practical one. NLI can be used in forensic linguistics: for example, it can help identify the background and attributes of an author, or the native country of the author of an anonymous note.
Methodology for NLI
NLI can be approached as a kind of text classification. In text classification, decisions and choices have to be made at two levels:
- First, what features should we extract, and how should we select the most informative ones?
- Second, which machine learning algorithms could be used for NLI?
Let us dive in these two questions.
- One feature can be n-grams, where 'n' is any natural number. An n-gram is a contiguous sequence of 'n' items from a given sample of text: each word is grouped with the 'n - 1' words that precede it, and this is done for every word. So if 'n' is 3, each word 'w' is combined with its 2 preceding words. It is an important feature which is widely used in NLI and has been shown to give good results, not only in this field but in text classification generally. The NLTK library has a function "nltk.ngrams(text, n)", where 'n' is the n-gram order (i.e., n=1 for unigrams, n=2 for bigrams).
The image shows different n-grams (unigrams, bigrams and trigrams). For example, in the trigram case, "a swimmer likes" or "swimmer likes swimming" is treated as a feature.
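The windowing that `nltk.ngrams` performs can be sketched in a few lines of pure Python (no NLTK needed; the sample sentence is just an illustration):

```python
def ngrams(tokens, n):
    # Slide a window of length n across the token list;
    # each window is one n-gram feature, as nltk.ngrams(text, n) would yield.
    return list(zip(*(tokens[i:] for i in range(n))))

tokens = "a swimmer likes swimming".split()
print(ngrams(tokens, 3))
# [('a', 'swimmer', 'likes'), ('swimmer', 'likes', 'swimming')]
```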
- Another feature can be Part-of-Speech (PoS) tags. PoS tagging may be defined as the process of assigning one of the parts of speech to each given word; in simple terms, we label each word in the text with a particular tag. These tags include nouns, verbs, adverbs, adjectives, pronouns, conjunctions and their sub-categories. The NLTK library has a function "nltk.pos_tag(text)" which takes each word in "text" and associates it with a tag.
- Spelling errors can also be treated as features, i.e., words that are not found in a standard dictionary. This is because people of a specific country or region might use lingo that is not prevalent in English.
All the features (consisting of characters, words, part-of-speech tags, their combinations, etc.) are mapped to normalized numbers (L2 norm). A popular technique for the mapping is TF-IDF, a weighting scheme. It automatically gives less weight to words that occur in all classes or that have very high frequency (such as 'a', 'an', 'the') and assigns higher weight to words that occur in only a few classes, i.e., words that help discriminate between classes. In simple terms, TF gives more weight to a term that is frequent within a text, while IDF downscales the weight if the term occurs in many texts.
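A minimal TF-IDF sketch makes the weighting concrete (a toy two-document corpus; a real system would use a library such as scikit-learn's `TfidfVectorizer`):

```python
import math

docs = [
    "the swimmer likes swimming".split(),
    "the runner likes running".split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)       # term frequency within the document
    df = sum(term in d for d in corpus)   # number of documents containing the term
    idf = math.log(len(corpus) / df)      # downscales ubiquitous terms
    return tf * idf

print(tf_idf("the", docs[0], docs))                # 0.0 -- appears everywhere, no discriminative power
print(round(tf_idf("swimmer", docs[0], docs), 3))  # 0.173 -- unique to one document, so weighted up
```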
Now the task is to classify each text by country or region of origin. We need algorithms suited to sparse, high-dimensional data (text data is both). One prominent algorithm is the Support Vector Machine (SVM). SVMs have been explored systematically for text categorization. An SVM classifier finds a hyperplane that separates examples into classes with maximal margin. Other classifiers, such as neural networks, could also be tried for NLI. The number of outputs of any of these classifiers would be the number of languages we want to distinguish. For example, given English texts by Indian, Chinese and Japanese writers, the classifier would have 3 output classes.
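The separating-hyperplane idea can be illustrated with a tiny dependency-free linear classifier. A perceptron stands in for the SVM here: it finds *a* separating hyperplane rather than the maximal-margin one, and the 2-D points are made-up stand-ins for feature vectors.

```python
# Toy linearly separable data: 2-D feature vectors (imagine two TF-IDF weights)
# labeled +1 / -1 for two native-language classes.
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1),
        ((-1.0, -0.5), -1), ((-2.0, -1.5), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                               # a few passes over the data
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified point
            w[0] += y * x1                        # nudge the hyperplane toward it
            w[1] += y * x2
            b += y

def predict(x):
    # Which side of the learned hyperplane does x fall on?
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

print(predict((1.8, 1.2)), predict((-1.2, -0.8)))  # 1 -1
```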
A range of ensemble-based classifiers has also been applied to the task and shown to improve performance over single-classifier systems.
Is there any dataset that could be used to implement the above or other methods?
Yes: TOEFL11. It consists of 12,100 English essays (about 300 to 400 words long) from the Test of English as a Foreign Language (TOEFL). The essays were written by speakers of the 11 native languages shown in the table below.
Some research papers that will help in understanding NLI further
With this article at OpenGenus, you must have a strong understanding of Native Language Identification (NLI). Enjoy.
In this paper I attempt to provide a convincing case for the importance of cognitive ethological investigations for advancing our knowledge of animal cognition. Cognitive ethology is broadly defined as the evolutionary and comparative study of nonhuman animal (hereafter animal) thought processes, consciousness, beliefs, or rationality, and is an area in which research is informed by different types of investigations and explanations. After (1) a brief discussion of the agenda of cognitive ethology, in which three different views of cognitive ethology are considered as is the relationship of cognitive ethology as a science to other branches of science, I (2) argue that folk psychological explanations and empirical data both are important to cognitive ethological research and conclude that the former are not as weak and as dispensable as some claim; (3) appeal to some case studies in recent analyses of social play behavior and antipredatory behavior (vigilance against potential predators) to make the point that folk psychology works well with empirical data and to provide examples in which the cognitive ethological perspective has proven to be a good heuristic; and (4) make some suggestions for future research. Cognitive ethology is alive, has a bright future, and has much to gain from a broad interdisciplinary perspective. Comparative approaches to cognitive science are very fruitful and have much to offer.
1.1 Cognitive Ethology as a Science
Cognitive ethology, broadly defined as the evolutionary and comparative study of nonhuman animal (hereafter animal) thought processes, consciousness, beliefs, or rationality, is a rapidly growing field that is attracting the attention of researchers in numerous and diverse disciplines.1 Because behavioral abilities have evolved in response to natural selection pressures, ethologists favor observations and experiments on animals in conditions that are as close as possible to the natural environment where the selection occurred, and because cognitive ethology is a comparative science, cognitive ethological studies emphasize broad taxonomic comparisons and do not focus on a few select representatives of limited taxa. In addition to situating the study of animal behavior in an evolutionary and comparative framework, cognitive ethologists maintain that field studies of animals that include careful observation and experimentation can inform the study of animal cognition; cognitive ethology will not necessarily have to be brought into the laboratory in order to make it respectable. Cognitive psychologists, in contrast to cognitive ethologists, typically work on related topics in laboratory settings and do not emphasize evolutionary or comparative aspects of animal cognition. When cognitive psychologists do make cross-species comparisons, they are typically interested in explaining different behavior patterns in terms of common underlying mechanisms; ethologists, in common with other biologists, are often more concerned with the diversity of solutions that living organisms have found for common problems.
Many different types of research fall under the term "cognitive ethology" and it currently is pointless to try to delimit the boundaries of cognitive ethology; because of the enormous amount of interdisciplinary interest in the area any stipulative definition of cognitive ethology is likely to become rapidly obsolete (Allen 1992a). Although cognitive ethology can trace its beginnings to the writings of Charles Darwin and some of his contemporaries and disciples, the modern era of cognitive ethology is usually thought to have begun with the appearance of Donald R. Griffin's (1976, 1981) book The question of animal awareness: Evolutionary continuity of mental experience. Thus, cognitive ethology as most of us know it is really a young science with great aspirations, and, as with many other fields in their infancy, cognitive ethology also suffers from various sorts of growing pains. While there are those who are presently willing to let cognitive ethological research take its course and wait to see how these sorts of investigations deal with current problems and inform and motivate future research, there are some who want to dispense with cognitive ethology because some of its ideas seem muddled and difficult to study, or because other minds are never fully accessible to outsiders. The latter position seems to be short-sighted and narrow minded. To claim that some of the basic tenets of cognitive ethology are unfalsifiable and thus not worthy of study is simply too cavalier an attitude (Sober 1983); there should be enough flexibility for alternative explanations, especially in developing fields. Some patience is needed. Imagine if other fields were ignored or terminated because their early thinking seemed confused or because "final answers" were not immediately available.
1.2 Three Views of Cognitive Ethology
For cognitive ethology, the major problems are those that center on methods of data collection, analysis, and on the description, interpretation, and explanation of animal behavior (Bekoff and Jamieson 1990a,b; Jamieson and Bekoff 1993; see also Purton 1978). Because cognitive ethology deals with animal minds and mental states, there is also some debate about whether or not a science of cognitive ethology is even possible (Yoerg and Kamil 1991; for discussion see Jamieson and Bekoff 1993). Based on published reviews of some of Griffin's works and other clearly stated opinions concerning animal cognition, Bekoff and Allen (1993) sorted different views of cognitive ethology into three major categories, slayers, skeptics, and proponents; unfortunately not all views of cognitive ethology could be covered in their survey. The views of some members of these groups can be summarized as follows.
Slayers deny any possibility of success in cognitive ethology. They sometimes conflate the difficulty of doing rigorous cognitive ethological investigations with the impossibility of doing so. Slayers also often ignore specific details of work by cognitive ethologists and frequently mount philosophically motivated objections to the possibility of learning anything about animal cognition. They do not see that cognitive ethological approaches can lead, and have led, to new and testable hypotheses. They often pick out the most difficult and least accessible phenomena to study (e.g. consciousness) and then conclude that because we can gain little detailed knowledge about this subject, we cannot do better in other areas. Slayers also appeal to parsimony in explanations of animal behavior, but they dismiss the possibility that cognitive explanations can be more parsimonious than noncognitive alternatives, and they deny the utility of cognitive hypotheses for directing empirical research.
Some specific examples of the slayers' position are as follows.
Zuckerman (1991, p. 46), in his review of Cheney and Seyfarth's (1990) book How Monkeys See the World: Inside the Mind of Another Species, exemplifies the unargued view of those who dismiss the field of cognitive ethology because they make little effort to consider available evidence. He writes:
"Some of the issues they do raise sound profound as set out but, when pursued, turn out to have little intellectual or scientific significance." (Zuckerman 1991, p. 46)Zuckerman does not tell us what types of data would have intellectual or scientific significance; this does not seem too much to ask for a student of primate behavior.
Heyes (1987a), who is a laboratory psychologist, advises cognitive ethologists to hang up their field glasses and turn to laboratory research if they want to understand animal cognition. She writes:
"It is perhaps at this moment that the cognitive ethologist decides to hang up his field glasses, become a cognitive psychologist, and have nothing further to do with talk about consciousness or intention." (Heyes 1987a, p. 124)Unlike Heyes, who thinks that animal cognition can at least be studied in the laboratory, some slayers argue against the study of animal cognition on the basis of a philosophical view about the privacy of the mental (for a well-developed counter-argument see Whiten 1993), or by the related "other minds" problem. These critics typically do not give specific critiques of actual empirical investigations carried out by cognitive ethologists; rather they try to dismiss such investigations on philosophical grounds alone. Thus, the renowned evolutionary biologist, George C. Williams (1992, p. 4), writes:
Thus, Heyes denies that evidence gained by observing animals in natural settings, an activity that usually involves using some sort of visual aid such as field glasses, is particularly relevant to understanding animal minds. Other slayers, who claim that they need more convincing evidence from the field, rarely tell what evidence would be convincing (Colgan 1989, p. 67). Heyes and these other critics generally simply assume that no evidence that could be collected from the field would provide convincing support for attributions of mental states.
"I am inclined merely to delete it [the mental realm] from biological explanation, because it is an entirely private phenomenon, and biology must deal with the publicly demonstrable."Williams' argument goes like this:
Other tactics used by some slayers involve grounding their criticisms on very narrow bases. Cronin thinks that Griffin, a "sentimental softy," and other cognitive ethologists are only concerned with demonstrating cleverness, and hence consciousness. In her recent review of Griffin's Animal Minds (1992) she writes:
"A Griffin bat is a miniature physics lab. So imagine the consternation among behavioristic ethologists when Mr. Griffin came out a decade ago,with "The Question of Animal Awareness," as a sentimental softy. . . . For Mr. Griffin, all this [cleverness] suggests consciousness. He's wrong. If such cleverness were enough to demonstrate consciousness, scientists could do the job over coffee and philosophers could have packed up their scholarly apparatus years ago." (p. 14, my emphases)Even McFarland (1989), who is categorized as a slayer by Bekoff and Allen (1993), recognizes that there are indicators of cognition other than the ability to produce clever solutions to environmental problems. Furthermore, not only is Cronin wrong about slaying the field of cognitive ethology because of the difficulty of dealing with the notion of consciousness (think about all of the other fields of inquiry that would suffer if it were appropriate to base rejection of those fields on singling out their most difficult issues), but she is also wrong to think that demonstrating cleverness is a simple matter. Certainly, the difficult work needed to demonstrate cleverness could not be done over coffee! Cronin also conveniently slides from claiming that for Griffin, cleverness suggests consciousness, to claiming that his view is that cleverness is " . . . enough to demonstrate consciousness" (my emphasis). Even Heyes (1987b), notes that it is not Griffin's program to prove that animals are conscious. Cronin later goes on to claim that at least chimpanzees are conscious and tells us why. She concludes her scathing review with the following statement:
"Well, I know that I am conscious, I know a mere 500,000 generations separate me from my chimpanzee cousin, and I know that evolutionary innovations don't just spring into existence full-blown - certainly not innovations as truly momentous as our hauntingly elusive private world."Cronin places herself on a slippery slope here. Why did she stop with chimpanzees? After all, if evolutionary innovations do not spring into existence full-blown, where did chimpanzee consciousness come from? Her phylogenetic argument can not be assessed directly; behavioral evidence is needed to help it along.
Skeptics are often difficult to categorize. They are a bit more open-minded than slayers, and there seems to be greater variation among skeptical views of cognitive ethology than among slayers' opinions. However, some skeptics recognize some past and present successes in cognitive ethology, and remain cautiously optimistic about future successes; in these instances they resemble moderate proponents. Many skeptics appeal to the future of neuroscience, and claim that when we know all there is to know about nervous systems, cognitive ethology will be superfluous (Griffin 1992 also makes strong appeals to neuroscience, but he does not fear that increased knowledge in neurobiology will cause cognitive ethology to disappear). Like slayers, skeptics frequently conflate the difficulty of doing rigorous cognitive ethological investigations with the impossibility of doing so, but when it is shown that some light can be shed on the nature of animal cognition, they often hedge their skepticism. Skeptics find folk psychological, anthropomorphic, anecdotal, and cognitive explanations to be off-putting, but they are not as forcefully dismissive as slayers.
Some specific examples of the skeptics' position are as follows.
With respect to the types of explanations that are offered in studies of animal cognition, many slayers and some skeptics favor noncognitive explanations because they believe them more parsimonious and more accurate than cognitive alternatives, and less off-putting to others who do not hold the field of cognitive ethology in high esteem. Snowdon (1991, p. 814) claims that:
"It is possible to explore the cognitive capacities of nonhuman animals without recourse to mentalistic concepts such as consciousness, intentionality, and deception. Studies that avoid mentalistic terminology are likely to be more effective in convincing other scientists of the significance of the abilities of nonhuman animals."Beer (1992, p. 79) also thinks that if cognitive ethology limited its claims for animal awareness to sensation and perception, a practice which could change the vocabulary used in cognitive ethological studies, then " . . . even tough-minded critics would be more receptive." Griffin (1992, p.11) actually agrees with this point, but even a consideration of simple forms of consciousness is contentious to many slayers and skeptics (Bekoff 1993a; Bekoff and Allen 1993).
Michel (1991, p. 253) also is concerned about folk psychological explanations. He writes:
" . . . folk psychological theory pervades human thinking, remembering, and perceiving and creates a very subtle anthropomorphism that cancorrupt the formation of a science of cognitive ethology." (my emphasis)Some skeptics simply make vacuous claims about the supposed parsimoniousness of noncognitive explanations and move on. Thus, Zabel et al. (1992, p. 129) in their attempts to explain redirected aggression in spotted hyenas (Crocuta crocuta) before quantitatively analyzing it, are of the opinion that:
"One must be cautious about inferring complex cognitive processes when simpler explanations will suffice."They do not tell us why they believe this to be so, and it should be noted that even they admit that the other noncognitive explanations they offer are questionable. Appeals to parsimony on a case by case basis do not take into account the possibility that cognitive explanations might help scientists come to terms with larger sets of available data that are difficult to understand and also help in the design of future empirical work. (For further discussion of the weakness of the idea that the simplest explanation is always the most parsimonious, see Bennett . For a comparison of different perspectives on cognitive "versus" more parsimonious explanations, see proponent de Waal's consideration of the skeptics' Kummer, Dasser, and Hoyningen-Huene's views on parsimony.)
With respect to the difficulties in studying consciousness, Alcock (1992) is a notable example in that he does not find the inaccessibility of consciousness to be grounds for dismissing the study of animal cognition (as does Cronin 1992; see also Whiten's 1993 discussion of the importance of studying observable events in studies of animal cognition). Alcock writes:
"We need ways in which to test hypotheses in a convincing manner. In this regard Animal Minds disappoints, because it offers no practical guidance on how to test whether consciousness is an all-purpose, problem-solving device widely distributed throughout the animal kingdom. . . . And there are alternative approaches to consciousness not based on the behavioristic principle that thinking cannot be studied because it does not exist." (Alcock 1992, p. 63)Others find the intractability of the problems of studying intentionality, awareness, and conscious thinking to be prohibitive for establishing the importance of mental experiences in determining animal behavior (Yoerg and Kamil 1991, p. 273).
Proponents keep an open mind about animal cognition and the utility of cognitive ethological investigations (e.g. Cheney and Seyfarth 1990, 1992; Allen and Hauser 1991; Burghardt 1991; de Waal 1991; Ristau 1991a; Allen 1992a,b; Bekoff and Allen 1992; Bekoff 1993a; Bekoff et al., 1993; Jamieson and Bekoff 1993; Whiten 1993).3 They claim that there are already many successes and they see that cognitive ethological approaches have provided new and interesting data that also can inform and motivate further study. Proponents also accept the cautious use of folk psychological and cognitive explanations to build a systematic explanatory framework in conjunction with empirical studies, and do not find anecdote or anthropomorphism to be thoroughly off-putting. While proponents recognize that Griffin has not made detailed suggestions for experimental studies, this does not discourage them from seeking ways to make ideas like Griffin's empirically rigorous (Allen and Hauser 1991; de Waal 1991; Ristau 1991a; Allen 1992a,b; Whiten 1992; Bekoff 1993a). Proponents are critical, but patient, and do not want prematurely to doom the field; if cognitive ethology is to die, it will be of natural causes and not as a result of hasty slayings.
William Mason's (1976, p. 931) quotation from his review of Griffin's (1976) The Question of Animal Awareness is a good place to start with respect to proponents' views. Mason writes:
"That animals are aware can scarcely be questioned. The hows and the whys and wherefores will occupy scientists for many years to come."Mason's claim is a strong one. Note that in his endorsement of the field, Mason does not qualify his statement by writing "That some animals are aware . . . ". However, he does recognize that animals may differ with respect to levels of development of their cognitive abilities, and at a later date noted that "On the basis of findings such as those reviewed in this paper, I am persuaded that apes and man have entered into a cognitive domain that sets them apart from all other primates" (Mason 1979, pp. 292-293). Mason's inclusive statement about animal awareness is typical of those who narrowly focus their attention only on primate cognition (Beck 1982; Bekoff 1993a; Bekoff, Townsend, and Jamieson 1993).
Proponents are more optimistic about the contributions that the field of cognitive ethology, with its reliance on field work and on comparative ecological and evolutionary studies, can make to the study of animal cognition in terms of opening up new areas of research and reconsidering old data. Ristau (1991b, p. 102) notes that in her attempts to study injury-feigning under field conditions, the cognitive ethological perspective
" . . . led me to design experiments that I had not otherwise thought to do, that no one else had done, and that revealed complexities in the behavior of the piping plover's distraction display not heretofore appreciated."The challenge of using ethological ideas in the study of animal cognition is reflected in the following quotation:
"At this point, however, cognitive ethologists can console themselves with the knowledge that their discipline is an aspect of the broaderfield of cognitive studies and conceptually may not be in any worse shape than highly regarded, related fields such as cognitive psychology.We are a long way from understanding the natural history of the mind, but in our view this amounts to a scientific challenge rather that grounds for depression or dismissal." (Jamieson and Bekoff 1992a, p. 81)Proponents also share some of the concerns of the slayers and skeptics' with respect to problems associated with the use of anecdote, anthropomorphism, and folk psychological explanations. However, proponents claim that the careful use of anthropomorphic and folk psychological explanations can be helpful in the study of animal cognition, and they also maintain that anecdotes can be used to guide data collection and to suggest new experimental designs (Dennett 1987, 1991, pp. 446ff). Thus, other proponents write:
"Cognitive ethology, rescued from both behaviorism and subjectivism, has much to say about what the life of the animal is really like. It is silent on what it is like to have that life." (Gustafson 1986, p. 182)Perhaps some would assume Griffin to be the strongest proponent of cognitive ethology. However, toward the end of Animal Minds Griffin (1992, p. 260) writes:
" . . . I have advocated use of a critical anthropomorphism in which various sources of information are used including: natural history, our perceptions, intuitions, feelings, careful behavioral descriptions, identifying with the animal, optimization models, previous studies and so forth in order to generate ideas that may prove useful in gaining understanding and the ability to predict outcomes of planned (experimental) and unplanned interventions . . . " (Burghardt 1991, p. 3)
" . . . when a number of anecdotal examples, each with a possible alternative explanation, collectively point to the likelihood of intentional deception, and this is supported by more rigorous tests in the laboratory . . . I would argue that it adds up to a strong case." (Archer 1992, p. 224)
"Contrary to the widespread pessimistic opinion that the content of animal thinking is hopelessly inaccessible to scientific inquiry, the communicative signals used by many animals provide empirical data on the basis of which much can reasonably be inferred about their subjective experiences." (my emphasis)Note that Griffin counters some slayers' and skeptics' concerns about the inaccessibility of animal minds, but he does not make a very strong claim that he or others can ever know the content of animal minds. Rather, Griffin, like other proponents, remains open to the possibility that we can learn a lot about animal minds by carefully studying communication and other behavior patterns. He and other proponents want to make the field of cognitive ethology more rigorous on theoretical and empirical grounds.
In summary, as a field of inquiry, cognitive ethology need not model itself on other scientific fields such as physics or neurobiology in order to gain credibility or to acquire status as a respectable branch of science. Physics (or hard-science) envy is what led to the loss of animal minds (Rollin 1990; Bekoff 1992a) in the early part of the twentieth century. Now, after a successful battle against those who were content to study behavior in the absence of animal minds, there are ample data that strongly support the idea that many nonhuman animals do have minds. Thus, we should continue studying animal minds and not have to begin to search for them once again. Those who believe that cognitive ethology is not worthy of being called a science seem to have an impoverished conception of science; many different types of activity fall under the umbrella "science."
1.3 Folk Psychology, Cognitive Ethology, and Beliefs About the Future of Neuroscience
Because folk psychological explanations are so off-putting to slayers and some skeptics, a bit more needs to be said about these sorts of explanations and how they are related to cognitive ethological research. Folk psychological explanations, usually understood as the ways in which common folk talk about the world in their daily dealings with one another, can be useful parts of cognitive ethological explanations. Folk psychological explanations that appeal to beliefs and desires of and about things, while not necessarily true, often help to provide the best explanation (sensu Harman 1965) of observed behavior (see below and also Cling 1991). I fully agree with Mason (1986) that common sense is not a serious risk for contemporary behavioral science. Thus, while folk psychological explanations play a role in the successful prediction of animal behavior and in many cases do their explanatory work, they do not replace detailed empirical studies concerning the content of animal beliefs and desires; rather, folk psychological explanations need to be used along with these endeavors. Thinking about different levels of analysis and explanation might be useful for seeing how different sorts of explanations are related to one another (Table 1).
Beer (1992) also does not think that Stich's case against folk psychology is necessarily compelling. However, Beer is concerned with the framing and grounding of questions in cognitive ethology in folk psychology because of the weakness of some of the philosophical underpinnings of folk psychology. For the reasons given above, I do not think that Beer should be as troubled as he is (see also Sober 1983).
Like folk psychological explanations, "scientific" explanations also enjoy successes and failures. For example, scientists still disagree, even with hard data, over whether or not global warming has occurred and, if so, whether it can be explained by what is called the Greenhouse Effect (e.g. Lindzen 1990). Scientists also disagree about whether or not Vitamin C is useful in the treatment of the common cold. Nonetheless, folk psychological explanations of nonhuman behavior are often viewed as being scientifically weak or even ascientific. The theory of folk psychology is usually dismissed as being explanatorily weak, historically stagnating, and conceptually isolated (Churchland 1979, 1981). It usually is argued that folk psychological explanations are inexact and will be replaced in the future by a more mature and exact neuroscience, although in everyday life folk psychology can be helpful (Churchland 1981).
It is beyond the scope of this paper to consider all of the pros and cons of folk psychology in any detail (see Clark 1989; Bogdan 1991; Greenwood 1991; Heil 1992; Christensen and Turner 1993); most of the arguments, especially those against folk psychology, have very deep philosophical roots, some of which might benefit from a good watering. However, a few words can be said about the nature of the arguments that are used either to refute the utility of folk psychological explanations or to claim that while they may be of some limited use now, they will be replaced when we know all that there is to know about neuroscience. Such arguments suggest an extreme, almost religious-like faith, in the ability of science to handle difficult problems such as animal minds, and in this and other areas such faith may be wishful thinking (Heil 1992; Moussa and Shannon 1992). Indeed, arguments against folk psychology that appeal to neuroscience have their own weaknesses (Cling 1990; Gilman 1990; Saidel 1992). While it will be useful to learn more and more about nervous systems in analyses of different types of animal cognition (e.g. Howlett 1993), the hope that this knowledge will clear up all of the messy issues in cognitive ethology seems a bit lofty (Akins 1990; see also Brunjes 1992 for some worries about commonly used methods in the study of neurobiology and behavior). While it may happen that someday neuroscientists will be able to map an animal's beliefs in his nervous system, it will probably have to be in a very simple nervous system. Thus, it might be possible that this individual's beliefs are nothing like our beliefs, or even similar to those of animals toward whom we might feel comfortable in attributing beliefs.
One major problem with the arguments put forth by those who appeal to the future of neuroscience is that appeals to the future require us to wait and keep waiting for events that might never occur. Here too, personal opinions concerning just what science can do enter (not surprisingly) into how folk psychology is viewed. It is important to ask a few questions here. For example: "How patient should we be?" "Should we dispense with what we now know about the behavior of nonhumans, explanations of which rely on the careful use of folk psychology, until skeptics are satisfied that they are right and proponents of folk psychology are wrong?" "How will we know when we have a mature neuroscience?" "Will it be the time when neuroscience answers the questions that some want answers to in a way with which they feel comfortable?" "Why can't skeptics be asked to wait for a mature cognitive ethology?" Then we will be able to assess the utility of folk psychological explanations. Indeed, one convenience about future talk is that the future can always be put off until we like what it brings. We will not know what the future will bring until we get there (Akins 1990). Of course, appealing to the future of cognitive ethology is as nebulous as appealing to the future of neuroscience.
In summary, it is important to assess continually the relationships among (1) how much we know or think we know about certain aspects of an individual's behavior, (2) how much explanatory power there is in the sentences and the data that we use to demonstrate our ability to use this knowledge to make predictions about behavior, and (3) how much tolerance we are asking for in using folk psychological explanations. Social play behavior and vigilance against intruders (potential predators) are two areas in which folk psychological explanations along with empirical data provide valuable insights into animal cognition. These areas of research will be discussed in the next section.
1.4 Social Play and Antipredatory Vigilance: What Might Individuals Know About Themselves and Others?
Space does not allow me to cover the plethora of areas of research (e.g. food caching, individual recognition and discrimination, assessments of dominance, habitat selection, mate choice, teaching, imitation, communication, tool-use, injury-feigning, observational learning) in which cognitive ethological approaches have been, or could be, useful in gaining an understanding of the behavior of nonhumans.4 Here I will discuss the communication of play intention and antipredatory vigilance because empirical research in both of these areas has benefited or will benefit from a cognitive approach. Furthermore, in both areas (and others) folk psychological explanations have been useful for informing and motivating further research and have also turned out to be very good predictors of behavior.
The Communication of Play Intention
Social play is a behavior that lends itself nicely to cognitive studies, and poses a great challenge to slayers, skeptics and proponents alike (de Waal 1991; but see Rosenberg 1990 and Allen and Bekoff 1993; numerous references about play can be found in Bekoff and Byers 1981; Fagen 1981; Bekoff 1989; Mitchell 1990; Bekoff and Allen 1992). Play may provide more promising evidence of animal minds than many other areas. Furthermore, understanding activities such as play is important for developing new research dealing with comparative approaches to cognition (Caudill 1992, p. 5). It would have been unfortunate if people decided that just because play was difficult to study, it was impossible to study.
When animals play, they typically use action patterns that are also used in other contexts such as predatory behavior, antipredatory behavior, and mating. Thus, it has been argued that social play behavior cannot be studied without using a cognitive vocabulary (Jamieson and Bekoff 1993). For example, if you were merely told that Jethro and Henrietta were performing a series of movements, and these movements were described only with respect to anatomy, you would not know that they were playing, nor that they were engaging in an activity that they probably enjoyed; for that matter, you could not know what they were doing, for play is typically composed of motor patterns that are also used in a variety of other contexts (for discussion see Golani 1992; Bekoff 1992b).
Because play is typically composed of motor patterns that are also used in a variety of other contexts, an individual needs to be able to communicate to potential play partners that he is not trying to dominate them, eat them, or mate with them. Rather, he is trying to play with them. Behavioral observations of many animals who engage in social play suggest that they desire to do so and believe that their thoughts of the future - how the individuals to whom their intentions are directed would be likely to behave - would be realized if they clearly communicated their desires to play using signals that in some cases seem to have evolved specifically to communicate play intention. On this view, play is seen as a cooperative enterprise.
In most species in which play has been described, play-soliciting signals appear to foster some sort of cooperation between players so that each responds to the other in a way consistent with play and different from the responses the same actions would elicit in other contexts (Bekoff 1975, 1978); play-soliciting signals aid the receiver's interpretation of other signals (Hailman 1977, p. 266). For example, in coyotes, the response to a threat gesture that is immediately preceded by a play signal, or that occurs in an interaction begun with a play signal, is different from the response to a threat in the absence of any preceding play signal (Bekoff 1975). The play signal somehow altered the meaning of the threat signal by establishing (or maintaining) a "play mood." Unfortunately, to the best of my knowledge there have been no other similar quantitative studies, but observations of play in diverse species support the idea that play signals can, and do, serve to establish play moods and alter the significance of behavior patterns that are borrowed from other contexts and used in social play.
Let's consider in more detail the question of whether or not signals that appear to be used to communicate play-intention (play-soliciting signals) to other individuals (Symons 1974; Bekoff 1975, 1977, 1978; Bekoff and Byers 1981; Fagen 1981) could foster the cooperation among participants that is necessary for play to occur. It is assumed that such play-soliciting signals transmit messages such as "what follows is play," "this is still play," or "let's play again, wasn't it fun." (The latter two messages may be sent after a very short break or after rough play has occurred.) Supporting evidence concerning the importance of play signals for allowing cooperative social play to occur comes from studies in which it is shown that play-soliciting signals show little variability in form or temporal characteristics and that they are used almost solely in the context of play. For example, one action that is commonly observed in the context of social play is the "bow," during which an individual crouches (as if bowing) on her forelimbs while keeping her hindlimbs relatively straight; tail-wagging and barking may accompany the bow. In various canids, the bow is a highly stereotyped movement that seems to function to stimulate recipients to engage (or to continue to engage) in social play (Bekoff 1977). Furthermore, the first bows that very young canids have been observed to perform are highly stereotyped, and learning seems to be relatively unimportant in their development. These features of bows can be related to the fact that when engaging in social play, canids typically use action patterns that are also used in other contexts such as predatory behavior, agonistic encounters, or mating, where misinterpretation of play intention could be injurious.
Available data strongly suggest that play-soliciting actions seem to be used to communicate to others that actions such as biting, biting and shaking of the head from side-to-side, and mounting are to be taken as play and not as aggression, predation, or reproduction (for details see Bekoff and Byers 1981 and Fagen 1981). On this view, bows are performed when the signaler wants to communicate a specific message about her desires or beliefs. How these types of intentional explanations (Dennett 1983) might be related to the communication of social play is shown as follows:
|Order of Explanation|General Explanation|Explanation with Respect to Play Behavior|
|zero-order|J performs behavior|J performs bow|
|first-order|J believes P; J wants P|J wants H to play with him|
|second-order|J wants H to believe that x|J wants H to believe that H should play with him|
|third-order|J wants H to believe that J wants x|J wants H to believe that J wants H to play|
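The orders of intentionality sketched above have a simple recursive structure: each higher order wraps another agent's attitude inside one's own. As a toy illustration only (the encoding is my own, not the author's; "J" and "H" stand for Jethro and Henrietta as in the table), the levels can be generated by nesting attitude terms:

```python
def attitude(agent, verb, content):
    """Render an intentional attribution such as 'J wants (H plays with J)'."""
    return f"{agent} {verb} ({content})"

# zero-order: a bare behavioral description with no mental-state terms
zero = "J performs bow"

# first-order: one attitude operator
first = attitude("J", "wants", "H plays with J")

# second-order: an attitude about another agent's attitude
second = attitude("J", "wants", attitude("H", "believes", "H should play with J"))

# third-order: an attitude about an attitude about an attitude
third = attitude("J", "wants",
                 attitude("H", "believes",
                          attitude("J", "wants", "H plays with J")))

def order(expr):
    """Count nested attitude operators: each '(' added by attitude() marks one level."""
    return expr.count("(")

print(order(zero), order(first), order(second), order(third))  # 0 1 2 3
```

The point of the sketch is only that "order of explanation" is a measure of nesting depth, which is why third-order attributions are so much harder to test behaviorally than first-order ones.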
Some other characteristics of play bows and also some of the properties of social play support a cognitive explanation of play, and can be used to stimulate future research. For example, play bows themselves occur throughout play sequences, but usually at the beginning or towards the middle of playful encounters. In a detailed analysis of the form and duration of play bows (Bekoff 1977), it was shown that duration was more variable than form, and that play bows were always less variable when performed at the beginning, rather than in the middle of, ongoing play sequences. Recall also that the first play bows that very young canids have been observed to perform are highly stereotyped, and learning seems to be relatively unimportant in their development. One can ask why there is more variability of bows performed during play sequences when compared with bows performed at the beginning of play sequences. Right now I can only offer some possibilities that need to be pursued empirically. Thus, there may be more variability for bows performed during play bouts because of (i) fatigue, (ii) the fact that animals are performing them from a wide variety of preceding postures, and/or because (iii) there is less of a need to communicate that this is still play than there is when trying to initiate a new interaction.
Analyses of play sequences may also inform future studies of social play. For example, in intraspecific comparisons, it has been found that sequences of social play are usually more variable than sequences of nonplay behavior (Bekoff and Byers 1981). Is it possible that animals "read" differences in behavioral sequences that are performed during play and in other contexts? Might increased (or consistent variations in) variability in sequences also (along with play signals) convey the message "this is play" and enable individuals to predict what is likely to occur or to understand what has already occurred?
A cognitive perspective will be very useful in future analyses of social play. Some of my own thoughts about the future direction of empirical research center on learning more about what a bowing (or soliciting) dog, for example, expects to happen after (or even as) she performs what is called a play-soliciting signal. Comparative observations strongly suggest that she expects that play will ensue if she performs a bow; she acts as if she wants play to occur. On what sort of grounds is this claim based? Specifically, it looks as if she is frustrated or surprised when her bow is not reciprocated in a way that is consistent with her belief about what is most likely to occur, namely, social play. Dogs and other canids are extremely persistent in their attempts to get others to play with them; their persistence suggests a strong desire to engage in some sort of activity. Frustration may be inferred from the common observation that canids and other mammals often engage in some sort of self-play, such as tail-chasing, after a bow or other play invitation signal is ignored, or they rapidly run over to another potential playmate and try to get him to play; play is redirected to the signaler herself or to other individuals. Surprise is more difficult to deal with, but my students and I have often agreed that a dog or coyote looked surprised when, on the very rare occasion, a bow resulted in the recipient attacking the signaler. The soliciting animal's eyes opened widely, her tail dropped, and she rapidly turned away from the noncooperating animal to whom she directed a play-soliciting signal, as if what happened was totally unexpected and perhaps confusing (see also Tinklepaugh's observation of surprise in monkeys when an expected and favored piece of food was replaced with lettuce).
After moving away the surprised animal often looked at the other individual, cocked her head to one side, squinted, and furrowed her brow, and seemed to be saying "Are you kidding? I want to play - this is not what I wanted to happen." The concept of surprise and the having of beliefs may be closely tied together. For example, if Jethro believes that social play will occur after performing a bow (P), then he would be surprised to discover that social play does not occur after performing a bow (not P, ¬P). If he is surprised to discover that ¬P, then he comes to believe that his original belief was false. This involves his having the second-order belief that his first-order belief was false, which involves having the concept of belief.
With respect to the solicitor's beliefs about the future, detailed analyses of movie film also show that on some occasions, a soliciting animal begins to perform another behavior before the other animal commits himself. The solicitor behaves as if she expects that something specific will happen and commits herself to this course of action. The major question, then, is how to operationalize these questions; what would be convincing data? How do we know when we have an instance of a given behavior(s)? Thus, we need to consider questions such as: what is frustrated, what is the goal, what is the belief about, and how could we study these questions? There simply is no substitute for detailed descriptions of subtle behavior patterns that might indicate surprise - facial expressions, eye movements, and body postures.
In summary, studies of social play are challenging and fascinating; I have been doing research on social play in canids for over 20 years and there are many questions to which I do not have satisfactory answers. The cognitive approach has helped me to come to terms with old data and also has raised new questions that need to be studied empirically.
Most animals are both predators and prey (Lima 1990). Thus, there is some conflict between avoiding being eaten and eating. Scanning for predators is called vigilance behavior, and in studies of vigilance it generally is assumed, for simplicity's sake, that individuals compromise their ability to detect predators when feeding with their heads down, and compromise on food intake when scanning for predators with their heads up (see Lima 1990, figure 1, p. 247). Thus, it has been argued that there are good reasons for individuals to live, or at least to forage for food, in groups, if doing so increases the probability of detecting a predator or reduces the time spent scanning for predators, thus permitting more time to be spent doing other things. Not surprisingly, there has been a lot of interest in antipredatory vigilance among those interested in the evolution of social behavior, and many different aspects of this behavior pattern have been analyzed mainly in birds and in a few mammals (Pulliam 1973; Bertram 1978; Lazarus 1979, 1990; Elgar 1989; Rasa 1989; Dehn 1990; Lima 1990; Lima and Dill 1990; Quenette 1990).
A very popular question in the comparative study of vigilance is how the behavior of individuals varies in groups of different sizes. Generally, it has been found that there is a negative relationship between group size and rates of scanning by individuals and a positive relationship between group size and the probability of predator detection. This is because there are more eyes and perhaps other sense organs that can be used to scan for predators. In his comprehensive review, Elgar (1989) notes that although the negative relationship between group size and individual scanning rate is quite robust and is approaching the status of dogma (Lima 1990), few studies have actually controlled for confounding variables, such as variation in the density and type of food resources, group composition, ambient temperature and time of day, proximity to a safe place and to the observer, and visibility within the habitat (see also Lima 1990 and Lima and Dill 1991). There are also problems associated with the researcher really knowing whether an individual is actually scanning; behavioral data can be equivocal and attention must also be given to anatomical and physiological constraints. (The same can be said for the relationship between predator detection and escape; in the absence of any discernible response it is impossible to know whether an individual has detected a predator, and it is possible that a prey may be aware of a predator(s) before he decides to flee; Ydenberg and Dill). With respect to whether or not an individual is really being vigilant, Lazarus (1990, p. 65) notes that " . . . researchers have simply assumed that the behaviour in question is vigilant, and have then sought its function." It is important to stress that such adaptive and evolutionary tales are not necessarily any more plausible than explanations of nonhuman behavior that invoke notions such as intentionality.
In both instances convincing data may still be lacking and a lot of faith is placed in folk explanations.
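The "many eyes" logic behind these relationships can be sketched with a toy model in the spirit of Pulliam (1973), assuming (purely for illustration) that group members scan independently and that detection events are independent:

```python
def group_detection_prob(p_individual: float, n: int) -> float:
    """Probability that at least one of n independent scanners detects
    an approaching predator, if each detects it with probability
    p_individual."""
    return 1.0 - (1.0 - p_individual) ** n

def required_individual_effort(p_group: float, n: int) -> float:
    """Individual detection probability each of n members needs for the
    group as a whole to detect with probability p_group; this falls as
    n grows, mirroring the negative group-size/scanning-rate
    relationship."""
    return 1.0 - (1.0 - p_group) ** (1.0 / n)

for n in (1, 2, 5, 10):
    print(n,
          round(group_detection_prob(0.3, n), 3),
          round(required_individual_effort(0.9, n), 3))
```

Under these assumptions group-level detection rises with group size while the per-individual effort needed to maintain a fixed level of group vigilance falls; the model says nothing, of course, about the confounding variables Elgar (1989) lists, or about whether individuals actually attend to one another.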
In the future, experimental cognitive studies will help to answer many questions that have either been ignored or have arisen in previous studies of antipredatory vigilance. Of course, these studies must adhere to the strictest guidelines with respect to ethical considerations (Bekoff and Jamieson 1991; Bekoff et al. 1992). Perhaps in some instances it will be the case that the coordination of vigilance among group members is not cost effective (Ward 1985), but there are not enough data now to make any sweeping generalizations. In his review, Lima (1990) notes that there seem to be no studies that have directly examined the question of whether foragers pay any attention to the behavior of other group members. He also concludes that very little is known about the perceptions of the animals being studied and that many models of vigilance reflect mainly the perceptions of the modelers themselves (p. 262).
Here I will assume that the negative relationship between group size and individual scanning rate is a genuine one, and ask a number of cognitive questions that bear on this general finding. Furthermore, although cooperation in vigilance is not to be expected, I will argue that even though individuals do not sign binding contracts (Lima 1990) and cheating could occur, the evidence at hand does not refute the possibility of cooperation among at least some group members (see also Lima 1990, p. 262). Although Dehn (1990) does not suggest any explicit reasons for individuals to cooperate in collective group vigilance, he notes that even if they live in large groups (> 10 individuals), individuals might benefit in terms of lifetime fitness if they are somewhat vigilant.
A cognitive analysis of vigilance in which we are concerned with what an individual might know about itself and others would involve asking at least the following questions, all of which are interconnected and all of which lend themselves to empirical study. One major question is "Why does the relationship between group size and scanning rates fail where it is expected to hold?" Another question to which I will also return is "Is there some association between the degree of coordination or possibly cooperation among group members and the geometry of the group?" To the best of my knowledge, these questions have not been pursued rigorously. For some of the questions I am asking and for some of the analyses I am suggesting, it is easier to assume that there is some stability in the composition of the groups, although some models would allow for individuals to learn about general behavior patterns of other individuals regardless of who they are. I will discuss this in more detail later on. Some of the questions that I pose here are not directly related to a cognitive inquiry, but all can inform and motivate such an approach.
(1) What is a group? To a nonhuman? To a human? What does it mean to say that an individual is a member of a group, and is our conception of group the same as that of the animals? Questions that inform the conception of group membership include "What types of behavioral criteria can be used to assess if an individual thinks he is a member of a group?", "Is there a critical distance between individuals below which we can say with some degree of certainty that they are members of the same group?", and "Do individuals have to spend a certain amount of time together within a certain distance to justify calling them a group?" With respect to studies of vigilance, Elgar, Burren, and Posen (1984) found that a house sparrow who was in visual contact with other house sparrows but separated by 1.2 meters scanned as if she were alone. I am presently pursuing the use of mirrors to help to answer this question.
Other questions also arise. For example, we also need to ask, how do we measure group size and how might nonhumans measure group size? This question deserves special consideration on its own because even if we can come up with a working definition of group, we also need to be able to present measures of instantaneous and long-term effective group size. In studies of vigilance (and other activities) variations in group size are often used to explain variation in other patterns of behavior, such as individual scanning rates, and precise measurements of group size are essential.
(2) Does the size of a group or the geometric distribution or orientation of individuals influence individual vigilance?
I have already mentioned the often-found relationship between group size and scanning rates above. However, it is important to keep in mind that there are confounding variables such as the geometric relationships among group members (or neighboring birds) (Figure 1) and how individuals are oriented in space that might influence scanning rates of individuals.
Answers to the question "How does the geometric distribution of individuals influence individual vigilance?" will likely have something to say about animal cognitive abilities. Thus, while it is known that the location of an individual in her group (center or periphery) can influence her pattern of vigilance, it remains to be studied how the geometry of the whole group influences the ease with which an individual is able to assess what others are doing by seeing or hearing them (and relate their behavior to her own). For example, it seems that it would be easier to see what others are doing if individuals were organized in a circle rather than in a straight line, but this is not known. Determining how each variable singly and in combination with others influences scanning is important so that we can determine the precise role of group size itself in influencing individual patterns of vigilance (Elgar 1989).
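The circle-versus-line intuition can be made concrete with a crude, purely illustrative calculation: for the same nearest-neighbor spacing, compare the mean pairwise distance between group members (one rough proxy for how hard it is to monitor flockmates) in the two geometries:

```python
import math

def mean_pairwise_distance(points):
    """Average Euclidean distance over all pairs of group members."""
    n = len(points)
    total = sum(math.dist(a, b)
                for i, a in enumerate(points)
                for b in points[i + 1:])
    return total / (n * (n - 1) / 2)

def line(n, spacing=1.0):
    """n individuals evenly spaced along a straight line."""
    return [(i * spacing, 0.0) for i in range(n)]

def circle(n, spacing=1.0):
    """n individuals evenly spaced around a circle, with the same
    nearest-neighbor spacing along the circumference."""
    r = spacing * n / (2 * math.pi)
    return [(r * math.cos(2 * math.pi * i / n),
             r * math.sin(2 * math.pi * i / n)) for i in range(n)]

n = 12
print(round(mean_pairwise_distance(line(n)), 2),
      round(mean_pairwise_distance(circle(n)), 2))
```

For a dozen individuals at unit spacing the circle yields a markedly smaller mean pairwise distance than the line, consistent with the intuition that a circular arrangement makes mutual monitoring easier; whether distance is the right proxy (rather than, say, lines of sight or body orientation) is exactly the kind of empirical question raised above.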
Questions such as "How does a bird or other nonhuman assess group geometry?" also need to be considered. While it is known that in some species group structure changes in response to the presence of a predator (Lima and Dill 1990, p. 627), it is not known if and how individuals actually assess the geometry of the group in which they are a member.
(3) Do changes in group size or geometry influence patterns of social interactions?
It is possible that as group size and geometry change, either singly or together, there is also a change in how individuals interact. If this is the case, then it might be possible for an individual to gain information about these variables from changes in encounter patterns without having to read them directly (e.g. Gordon, Paul, and Thorpe 1992; Gordon, Goodwin, and Trainor 1992; see also Deneubourg and Goss 1989 and Warburton and Lazarus 1991). To the best of my knowledge, there are no data for birds or mammals that can be used to answer these questions with any degree of certainty. Of course, the problems in studying these questions are enormous, but trying to get answers to them should be an exciting venture.
Some other interesting questions that may be informed by a cognitive approach include: "Do individuals change their relative position in a group to make it more likely that they could feed more efficiently and/or detect potential predators more easily?" "Is this a cooperative endeavor?" Also, "Does one's position in a group influence whether he can assess changes in group size or geometry?" Here I am asking if and how the location of an individual in a group makes it easier or more difficult to know how many other individuals are there and how they are distributed in space. It might be very useful for an individual to be able to see what others are doing, for while scanning, an individual might also pick up and store information about what individuals in a particular part of the group are most likely to be doing, or she might generalize from her own previous experience in that part of the group to what others are most likely to be doing when they are in that position. If we can get answers to these sorts of questions, we might be able to assess whether the inverse relationship between group size and individual scanning rate levels off or fails because individuals are unable to monitor the behavior of too many other animals who might also be hard to see. Elgar et al. (1984) and Metcalfe (1984a,b) present data suggesting that in some birds there does seem to be visual inspection among individuals in a flock.
The results of cognitive studies of vigilance would be useful not only for furthering our knowledge about antipredatory scanning but also for informing and motivating other studies (e.g. assessments of dominance) that are concerned with the question of how individuals assess what they know and what others know, based either on the results of direct interactions with them, or on observing how others interact with individuals with whom they themselves have not had direct encounters (observational learning). In large groups it would probably be impossible to know about every possible paired interaction, nor might it be possible or desirable for an individual to interact with every other individual. Thus, in these instances having the ability to read interaction patterns among others and then to use this information in one's own encounters would be extremely useful. How individuals glean information from their nonsocial environments also is important to consider (e.g. the location of a potential predator or of safety, and how this information influences whether and how rapidly assessments of group size and group geometry, and of changes in group size and geometry, are made). We will also learn more about how accurate folk psychological explanations are for many of the behavior patterns that are of great interest to us.
There are many reasons why people are interested in the study of nonhuman minds and cognition. While each does not necessarily warrant a cognitive approach, taken together they justify the current interest in cognitive ethology. These include the following (in no order of importance).
(1) Many models in ethology and behavioral ecology presuppose cognition (see Ristau 1991a, Yoerg 1991, and Griffin 1992 for examples). It would be useful to have informed ideas about the types of knowledge that nonhumans have about their social and nonsocial environments and how they use this information.
(2) It may be more economical or parsimonious to assume that not everything that an individual needs to be able to do in all situations in which he finds himself is preprogrammed. While general rules of thumb may be laid down genetically during evolution, specific rules of conduct that account for all possible contingencies are too numerous to be hard-wired (Griffin 1984). Behavioristic learning schemes can account for some flexibility in organisms, but learning at high degrees of abstraction from sensory stimulation seems less amenable to behavioristic analysis (Allen and Hauser 1991). Cognitive models of learning provide explanatory schemes for such cases.
(3) The assumption of animal minds leads to more rigorous empirical analyses of behavioral plasticity and flexibility in the many and diverse situations that many nonhumans regularly encounter. Yoerg (1991) argues that considerations of cognitive function can lead to original ideas about behavioral adaptation.
(4) By providing different perspectives on behavior, cognitive ethology can raise new questions that may be approached from different levels of analysis by people coming from different disciplines. For example, neurobiological studies would be important for informing further studies in animal cognition and might also be useful for explaining data that are already available.
(5) Animal welfare issues are tightly connected to views on the cognitive abilities of nonhumans.5
Of course, more comparative data dealing with animal cognition are needed, especially those that could be analyzed using rigorous methods that have been applied to other types of comparative studies (e.g. Gittleman and Luh 1993); detailed field data would be particularly welcomed, or at least enough information that could reliably inform more controlled studies of animal cognition. While some are of the opinion that advanced cognition is confined to the laboratory (e.g. Premack 1988, pp. 171-172), those who have studied animals in the wild disagree (e.g. de Waal 1991, p. 311; McGrew 1992, pp. 83ff).6 More concentration is also needed on individual differences in cognitive abilities; sweeping generalizations concerning the "typical" behavior of species are often misleading because of great intraspecific variation in social behavior and social organization (Lott 1991, White 1992) and in the performance of behavior patterns (e.g. tool use) that are often cited in establishing generalizations about cognition (Gibbons 1991; McGrew 1992).
Interdisciplinary efforts, despite possible pitfalls (Heil 1992, p. 235), are essential in our quest for knowledge about animal minds. In these joint efforts, open minds and pluralism would also be useful at this stage of the game (Roitblat and Weisman 1986). Philosophers need to be clear when they tell us what they think about animal minds, and those who carefully study the behavior of nonhumans need to tell philosophers what we know, what we are able to do, and how we go about doing our research. Although providing alternatives might not be a requirement in thought experiments that conclude that animals do not have beliefs for one or another reason, it would be useful for students of behavior to be presented with some viable alternatives that could be used in their empirical investigations. If philosophers do not have the experience with empirical work that would allow them to make realistic suggestions for experimental design, then it would be useful for them to watch ethologists at work (Dennett 1987, 1988). This experience might allow philosophers to gain a better understanding of what ethology is all about. Even then, it may be the case that ethologists are ill-advised to look to philosophers for a crisp and empirically rigorous definition of intentionality (for example), even if some philosophers promise to provide one (C. Allen this volume).
Obviously, I do not think that cognitive ethologists should hang up their field glasses and have nothing to do with talk about nonhuman intentional behavior. Rather, cognitive ethologists should put their noses to the grindstone and welcome the fact that they are dealing with difficult and important questions. I expect that in the future, cognitive ethologists will be pursuing the challenging questions that confront them, rather than looking for other work.
I thank Jean-Arcady Meyer and Herbert Roitblat for allowing me to partake in this conference and for their help in defraying some of the costs associated with traveling to France. The University of Colorado also provided financial aid for the preparation of my oral presentation and this paper. A large number of people have helped me along the way, and to them I extend many thanks. They include: Dale Jamieson, Susan Townsend, Carol Powley, Lori Gruen, Ruth Millikan, Robert Eaton, Deborah Crowell, Mark Anderson, Anderson Brown, John A Fisher, Bernard Rollin, Jack Hailman, Kim Sterelney, Deborah Gordon, Gordon Burghardt, Donald Griffin, and especially Colin Allen. To those who I have inadvertently overlooked, I extend my deepest apologies. John Heil, Andrew Whiten, and James R. Anderson graciously provided unpublished manuscripts. Herb Roitblat's, Colin Allen's, Susan Townsend's, and John Lazarus' comments on an ancestral version of this paper were especially helpful. None of these scholars necessarily agrees with what I have written.
1. See, for example, Griffin (1976, 1981, 1984, 1991, 1992); Dennett (1983, 1987); Millikan (1984); Roitblat, Bever, and Terrace (1984); Wyers (1985); Mitchell and Thompson (1986); Byrne and Whiten (1988); Cheney and Seyfarth (1990, 1992); Allen and Hauser (1991); Bekoff and Jamieson (1990a,b, 1991); Hauser and Nelson (1991); Ristau (1991a,b); Yoerg (1991); Bekoff and Allen (1992); Beer (1992); Real (1992); Roitblat and von Fersen (1992); Jamieson and Bekoff (1993).
2. Kennedy's claims about anthropomorphism are wide-ranging, but simple-minded and unargued. For more detailed and scholarly discussions of anthropomorphism as they are related to studies of animal behavior, see Fisher (1990, 1991).
3. Sometimes it is difficult to differentiate between skeptics and moderate proponents, who argue that if there is to be a science of cognitive ethology, we must develop empirical methods for applying cognitive terms and making talk about animal minds respectable (e.g. Kummer et al., 1990). Jamieson and Bekoff's (1993) differentiation between weak cognitive ethology, where a cognitive vocabulary can be used to explain, but not to describe behavior, and strong cognitive ethology, where cognitive and affective vocabularies can be used to describe and to explain behavior, may be relevant here.
4. For numerous and diverse examples see Chance and Larsen (1976); Griffin (1976, 1981, 1984, 1992); Dennett (1983, 1987); Roitblat, Bever, and Terrace (1984); Byers and Bekoff (1986); Mitchell and Thompson (1986); Schusterman, Thomas, and Wood (1986); Byrne and Whiten (1988); Bateson (1990); Blaustein and Porter (1990); Cheney and Seyfarth (1990, 1992); Pepperberg (1990); Philips and Austad (1990); Rosenzweig (1990); Smith (1990); Allen and Hauser (1991); Bekoff and Jamieson (1991); Hauser and Nelson (1991); Ristau (1991a); Yoerg (1991); Bekoff and Allen (1992); Beer (1992); Caro and Hauser (1992); Fiorito and Scotto (1992); McGrew (1992); Whiten and Ham (1992); Roitblat and von Fersen (1992).
5. For discussion see Rachels (1990), Bekoff and Jamieson (1991), Harrison (1991), Bekoff et al., (1992), Griffin (1992), G. W. Levvis (1992), M. A. Levvis (1992), and Lynch (1992). Byrne (1991, p. 47) has gone as far as to claim that "If explorations of the minds of chimpanzees and other animals do nothing more than inform the debate about the ethics of animal use in research, the work will have been well worthwhile." Griffin (1992, p. 251) notes that "No one seriously advocates harming animals just for the sake of doing so, although cruelty is unfortunately prevalent in some circles." Unfortunately, Griffin does not tell us where. Despite a very large data base demonstrating highly developed cognitive skills in many animals, there are those who ignore research on animal cognition, misinterpret data from studies on humans, and base their conclusions on the moral status of animals using an intuitionistic comparison of animal and human behavior (e.g. Carruthers 1989; Leahy 1991; but see Clark 1991; Jamieson and Bekoff 1992; Singer 1992). Thus, Carruthers (1989, p. 265), who compares the behavior of animals with the behavior of humans who are driving while distracted and humans who suffer from blindsight, writes: "I shall assume that no one would seriously maintain that dogs, cats, sheep, cattle, pigs, or chickens consciously think things to themselves . . . the experiences of all these creatures [are of] the nonconscious variety." (p. 265) Furthermore, "Similarly then in the case of brutes: since their experiences, including their pains, are nonconscious ones, their pains are of no immediate moral concern. Indeed since all the mental states of brutes are nonconscious, their injuries are lacking even in indirect moral concern. Since the disappointments caused to the dog through possession of a broken leg are themselves nonconscious in their turn, they, too, are not appropriate objects of our sympathy.
Hence, neither the pain of the broken leg itself, nor its further effects upon the life of the dog, have any rational claim upon our sympathy. (p. 268) And finally, " . . . it also follows that there is no moral criticism to be leveled at the majority of people who are indifferent to the pains of factory-farmed animals, which they know to exist but do not themselves observe." (p. 269) For discussion of this minority opinion, see Johnson (1991), Bekoff and Jamieson (1991), and Jamieson and Bekoff (1992). Recently, Carruthers (1992, p. xi) has come to regard " . . . the present popular concern with animal rights in our culture as a reflection of moral decadence" that distracts " . . . attention from the needs of those who certainly do have moral standing - namely, human beings." (p. 168)
Another issue that bears on studies of both animal cognition and animal welfare concerns the naming of animals, for this practice is often taken to be nonscientific. Historically, it is interesting to note that Jane Goodall's first scientific paper dealing with her research on the behavior of chimpanzees was returned by the Annals of the New York Academy of Sciences because she named, rather than numbered, the chimpanzees who she watched. This journal also wanted her to refer to the chimpanzees using "it" or "which" rather than "he" or "she" (Montgomery 1991, pp. 104-105). Goodall refused to make the requested changes but her paper was published anyway. As has been pointed out elsewhere (Bekoff 1993b), the words "it" and "which" are typically used for inanimate objects (Random House Dictionary 1978). Given that the goal of many studies of animal cognition is to come to terms with animals' subjective experiences - the animals' points of view - making animals subjects rather than objects seems a move in the right direction.
Finally, the results of comparative approaches to cognition will raise numerous and complex ethical concerns about the moral status of androids, for example (e.g. Caudill 1992, Chapter 13). These thorny issues cannot be dismissed, but rather should be accepted as challenges for future consideration. Undoubtedly, many of these ethical concerns will be informed by the ways in which nonhumans are viewed.
6. One area in which information from the wild would be very useful concerns the question of whether animals can count (Boysen and Capaldi 1993). While I have no hard data concerning this ability in the free-living coyotes who I have studied, when pups were moved from one den to another, I never saw a mother either forget a pup or go back to a den to retrieve a pup who was not there, even when she had help from another coyote in moving the infants. McGrew (1992, p. 223) also points out that we need to know about accounting abilities in other animals to see if accounting informs decisions about reciprocity.
Akins, K. 1990. Science and our inner lives: Birds of prey, bats, and the common (featherless) bi-ped. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Alcock, J. 1992. Review of D. R. Griffin 1992. Natural History September, 62-65.
Allen, C. 1992a. Mental content. British Journal for the Philosophy of Science in press.
Allen, C. 1992b. Mental content and evolutionary explanation. Biology and Philosophy 7:1-12.
Allen, C., and Bekoff, M. 1993. Intentionality, social play, and definition. Submitted.
Allen, C., and Hauser, M. D. 1991. Concept attribution in nonhuman animals: Theoretical and methodological problems in ascribing complex mental processes. Philosophy of Science 58:221-240.
Archer, J. 1992. Ethology and Human Development. London: Barnes and Noble.
Bateson, P. P. G. 1990. Choice, preference, and selection. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Beck, B. 1982. Chimpocentrism: Bias in cognitive ethology. Journal of Human Evolution 11:3-17.
Beer, C. 1992. Conceptual issues in cognitive ethology. Advances in the Study of Behavior 21: 69-109.
Bekoff, M. 1975. The communication of play intention: Are play signals functional? Semiotica 15:231-239.
Bekoff, M. 1977. Social communication in canids: Evidence for the evolution of a stereotyped mammalian display. Science 197:1097-1099.
Bekoff, M. 1978. Social play: Structure, function, and the evolution of a cooperative social behavior. In G. Burghardt and M. Bekoff, eds. The Development of Behavior: Comparative and Evolutionary Aspects. New York: Garland.
Bekoff, M. 1989. Behavioral development of terrestrial carnivores. In J. L. Gittleman, ed. Carnivore Behavior, Ecology, and Evolution. Ithaca, New York: Cornell University Press.
Bekoff, M. 1992a. Scientific ideology, animal consciousness, and animal protection: A principled plea for unabashed common sense. New Ideas in Psychology 10:79-94.
Bekoff, M. 1992b. Description and explanation: A plea for plurality. Behavioral and Brain Sciences 15:269-270.
Bekoff, M. 1993a. Review of Griffin 1992. Ethology in press.
Bekoff, M. 1993b. Experimentally induced infanticide: The removal of females and its ramifications. Auk in press.
Bekoff, M., and Allen, C. 1992. Intentional icons: towards an evolutionary cognitive ethology. Ethology 91:1-16.
Bekoff, M., and Allen, C. 1993. Cognitive ethology: Slayers, skeptics, and proponents. In R. W. Mitchell, N. Thompson, and L. Miles, eds. Anthropomorphism, Anecdote, and Animals: The Emperor's New Clothes? Lincoln, Nebraska: University of Nebraska Press.
Bekoff, M., and Byers, J. A. 1981. A critical reanalysis of the ontogeny of mammalian social and locomotor play: An ethological hornet's nest. In K. Immelmann, G. W. Barlow, L. Petrinovich, and M. Main, eds. Behavioral Development: The Bielefeld Interdisciplinary Project. New York: Cambridge University Press.
Bekoff, M., Gruen, L., Townsend, S. E., and Rollin, B. E. 1992. Animals in science: some areas revisited. Animal Behaviour 44:473-484.
Bekoff, M., and Jamieson, D., eds. 1990a. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Bekoff, M., and Jamieson, D., eds. 1990b. Interpretation and Explanation in the Study of Animal Behavior: Vol. II, Explanation, Evolution, and Adaptation. Boulder, Colorado: Westview Press.
Bekoff, M., and Jamieson, D. 1991. Reflective ethology, applied philosophy, and the moral status of animals. Perspectives in Ethology 9:1-47.
Bekoff, M., Townsend, S. E., and Jamieson, D. 1993. Beyond monkey minds: towards a richer cognitive ethology. Behavioral and Brain Sciences in press.
Bennett, J. 1991. How to read minds in behaviour: a suggestion from a philosopher. In A. Whiten, ed., Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading. Cambridge, Massachusetts:Basil
Bertram, B. C. R. 1978. Living in groups. In J. R. Krebs and N. B. Davies, eds. Behavioural Ecology: An Evolutionary Approach. Sunderland, Massachusetts: Sinauer.
Blaustein, A. R., and Porter, R. H. 1990. The ubiquitous concept of recognition with special reference to kin. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Bogdan, R. J. ed. 1991. Mind and Common Sense: Philosophical Essays on Commonsense Psychology. New York:Cambridge University Press.
Boysen, S. T., and E. J. Capaldi. eds. 1993. The Development of Numerical Competence: Animal and Human Models. Hillsdale, New Jersey:Erlbaum.
Brunjes, P. C. 1992. Lessons from lesions: The effects of olfactory bulbectomy. Chemical Senses 17:729-763
Burghardt, G. M. 1991. Cognitive ethology and critical anthropomorphism: A snake with two heads and hognose snakes that play dead. In C. Ristau, ed. Cognitive Ethology: The Minds of Other Animals. Hillsdale, New Jersey:Erlbaum.
Byers, J. A., and Bekoff, M. 1986. What does "kin recognition" mean? Ethology 72: 342-345.
Byrne, R. W. 1991. Review of Cheney and Seyfarth 1990. The Sciences July:142-147.
Byrne, R. W., and Whiten, A. eds. 1988. Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. New York: Oxford University Press.
Caro, T. M., and Hauser, M. D. 1992. Is there teaching in nonhuman animals? Quarterly Review of Biology 67:151-174.
Carruthers, P. 1989. Brute experience. Journal of Philosophy 86:258-269.
Carruthers, P. 1992. The Animals Issue: Moral Theory in Practice. New York:Cambridge University Press.
Caudill, M. 1992. In Our Own Image: Building An Artificial Person. New York:Oxford University Press.
Chance, M. R. A., and Larsen, R. R. eds. 1976. The Social Structure of Attention. New York: Wiley.
Cheney, D. L., and Seyfarth, R. M. 1990. How Monkeys See the World: Inside the Mind of Another Species. Chicago:University of Chicago Press.
Cheney, D. L., and Seyfarth, R. M. 1992. Précis of How Monkeys See the World: Inside the Mind of Another Species. Behavioral and Brain Sciences 15:135-182.
Churchland, P. M. 1979. Scientific Realism and the Plasticity of Mind. New York: Cambridge University Press.
Churchland, P. M. 1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy 78: 67-89.
Clark, A. (1989) Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, Massachusetts:MIT Press.
Clark, S. R. L. 1991. Not so dumb friends. The Times Literary Supplement, London. 13 December:5-6.
Cling, A. D. 1990. Disappearance and knowledge. Philosophy of Science 57:226-247.
Cling, A. D. 1991. The empirical virtues of beliefs. Philosophical Psychology 4: 303- 323.
Colgan, P. 1989. Animal Motivation. New York: Chapman and Hall.
Christensen, S. M., and Turner, D. R. eds. 1993. Folk Psychology and the Philosophy of Mind. Hillsdale, New Jersey:Erlbaum.
Cronin, H. Review of Griffin 1992. New York Times Book Review November 1:14.
Dehn, M. M. 1990. Vigilance for predators: Detection and dilution effects. Behavioral Ecology and Sociobiology 26: 337-342.
Dennett, D. C. 1983. Intentional systems in cognitive ethology: The "Panglossian paradigm" defended. Behavioral and Brain Sciences 6:343-345.
Dennett, D. C. 1987. Reflections: Interpreting monkeys, theorists, and genes. In The Intentional Stance. Cambridge, Massachusetts:MIT Press.
Dennett, D. C. 1991. Consciousness Explained. Boston:Little, Brown and Company.
Dennett, D. C. 1988. Out of the armchair and into the field. Poetics Today 9:205-221.
Deneubourg, J. L., and Goss, C. 1989. Collective patterns and decision-making. Ethology Ecology, & Evolution 1: 295-311.
Elgar, M. A. 1989. Predator vigilance and group size in mammals and birds: A critical review of the empirical evidence. Biological Reviews 64: 13-33.
Elgar, M. A., Burren, P. J., and Posen, M. 1984. Vigilance and perception of flock size in foraging house sparrows Passer domesticus L. Behaviour 90:215-223.
Fagen, R. 1981. Animal Play Behavior. New York:Oxford University Press.
Fiorito, G. and Scotto, P. 1992. Observational learning in Octopus vulgaris. Science 256: 545-546.
Fisher, J. A. 1990. The myth of anthropomorphism. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Fisher, J. A. 1991. Disambiguating anthropomorphism. Perspectives in Ethology 9:49- 85.
Gibbons, A. (1991) Chimps: More diverse than a barrel of monkeys. Science 255: 287-288.
Gilman, D. (1990) The neurobiology of observation. Philosophy of Science 58: 496-502.
Gittleman, J. L., and Luh, H.-K. 1993. Phylogeny, evolutionary models, and comparative methods: A simulation study. In P. Eggleton and D. Vane-Wright, eds. Pattern and Process: Phylogenetic Approaches to Ecological Problems. London:Academic Press. in press.
Golani, I. 1992. A mobility gradient in the organization of vertebrate movement. Behavioral and Brain Sciences 15:249-308.
Gordon, D. M., Paul, R. E., and Thorpe, K. What is the function of encounter patterns in ant colonies? Animal Behaviour in press.
Gordon, D. M., Goodwin, B. C., and Trainor, L. E. H. 1992. A parallel distributed model of the behaviour of ant colonies. Journal of Theoretical Biology in press.
Greenwood, J. D.. ed. 1991. The Future of Folk Psychology: Intentionality and Cognitive Science. New York:Cambridge University Press.
Griffin, D. R. 1976. The Question of Animal Awareness: Evolutionary Continuity Mental Experience. New York:The Rockefeller University Press.
Griffin, D. R. 1978. Prospects for a cognitive ethology. Behavioral and Brain Sciences 4:527-538.
Griffin, D. R. 1981. The Question of Animal Awareness: Evolutionary Continuity Mental Experience. Second edition. New York:The Rockefeller University Press.
Griffin, D. R. 1984. Animal Thinking. Cambridge, Massachusetts:Harvard University Press.
Griffin, D. R. 1991. Progress toward a cognitive ethology. In C. Ristau, ed. Cognitive Ethology: The Minds of Other Animals. Hillsdale, New Jersey:Erlbaum.
Griffin, D. R. 1992. Animal Minds. Chicago:University of Chicago Press.
Gustafson, D. 1986. Review of D. R. Griffin 1984. Env. Ethics 8:179-182.
Hailman, J. P. 1977. Optical Signals: Animal Communication and Light. Bloomington, Indiana:Indiana University Press.
Harman, G. 1965. The inference to the best explanation. Philosophical Review 74: 88- 95.
Harrison, P. 1991. Do animals feel pain? Philosophy 66:25-40.
Hauser, M. D., and Nelson, M. 1991: 'Intentional' signaling in animal communication. Trends in Ecology and Evolution 6:186-189.
Heil, J. 1992. The Nature of True Minds. New York:Cambridge University Press.
Heyes, C. 1987a. Cognisance of consciousness in the study of animal knowledge. In W. Callebaut and R. Pinxten, eds. Evolutionary Epistemology. Boston:D. Reidel.
Heyes, C. 1987b. Contrasting approaches to the legitimation of intentional language within comparative psychology. Behaviorism 15:41-50.
Howlett, R. 1993. Beauty on the brain. Nature 361:398-399.
Jamieson, D., and Bekoff, M. 1992a. Some problems and prospects for cognitive ethology.Between The Species 8:80-82.
Jamieson, D., and Bekoff, M. 1992b. Carruthers on nonconscious experience. Analysis 52:23-28.
Jamieson, D., and Bekoff, M. 1993. On aims and methods of cognitive ethology. Phil. Sci. Assoc. 2: in press.
Johnson, E. 1991. Carruthers on consciousness and moral status. Between The Species 7:190-193.
Kennedy, J. S. 1992. The New Anthropomorphism. New York:Cambridge University Press.
Kummer, H., Dasser, Z., and Hoyningen-Huene, P. 1990. Exploring primate social cognition: Some critical remarks. Behaviour 112:85-98.
Lazarus, J. 1979. The early warning function of flocking in birds: An experimental study with captive quela. Animal Behaviour 27: 855-865.
Lazarus, J. 1990. Looking for trouble. New Scientist 125: 62-65.
Leahy, M. P. T. Against Liberation: Putting Animals in Perspective. New York:Routledge.
Levvis, G. W. 1992. Why we would not understand a talking lion. Between The Species 8: 156-162.
Levvis, M. A. 1992. The value of judgments regarding the value of animals. Between The Species 8: 150-155.
Lima, S. L. 1990. The influence of models on the interpretation of vigilance. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. II, Explanation, Evolution, and Adaptation. Boulder, Colorado: Westview Press.
Lima, S. L., and Dill, L. M. 1990. Behavioral decisions made under the risk of predation: A review and prospectus. Canadian Journal of Zoology 68: 619-640.
Lindzen, R. S. 1990. Some coolness concerning global warming. Bulletin of the American Meterological Society 71: 288-299.
Lott, D. F. 1991. Intraspecific Variation in the Social Systems of Wild Vertebrates. New York:Cambridge University Press.
Lynch, J. J. 1992. Toward An Interspecific Psychology. Ph.D. Dissertation, California: The Claremont Graduate School.
Mason, W. A. 1976. Review of Griffin 1976. Science 194:930-931.
Mason, W. A. 1979. Environmental models and mental modes: representational processes in the great apes. In D. A. Hamburg and E. R.McCown, eds.,The Great Apes. Menlo Park, California:The Benjamin/Cummins Publishing Company.
Mason, W. A. 1986. Behavior implies cognition. In W. Bechtel, ed., Science and Philosophy: Integrating Scientific Disciplines. Boston:Martinus Nijhoff Publishers.
McFarland, D. 1989. Problems of Animal Behaviour. New York:Wiley.
McGrew, W. C. 1992. Chimpanzee Material Culture: Implications for Human Evolution. New York:Cambridge University Press.
Metcalfe, N. B. 1984a. The effects of habitat on the vigilance of shorebirds: Is visibility important? Animal Behaviour 32:981-985.
Metcalfe, N. B. 1984b. The effects of mixed-species flocking on the vigilance of shorebirds: Who do they trust? Animal Behaviour 32:986-993.
Michel, G. F. 1991. Human psychology and the minds of other animals. In C. Ristau, ed. Cognitive Ethology: The Minds of Other Animals. Hillsdale, New Jersey:Erlbaum.
Millikan, R. G. 1984. Language, Thought, and other Biological Categories. New Foundations for Realism. Cambridge, Massachusetts:MIT Press.
Mitchell, R. W. 1990: A theory of play. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Mitchell, R. W., and Thompson, N. S., eds. 1986: Deception: Perspectives on Human and Nonhuman Deceit. Albany, New York:SUNY Press.
Montgomery, S. 1991. Walking With the Great Apes: Jane Goodall, Dian Fossey, and Birutè Galdikas. New York:SUNY Press.
Moussa, M., and Shannon, T. A. 1992. The search for the new pineal gland: Brain life and personhood. Hastings Center Report 22: 30-37.
Pepperberg, I. M. 1990. Some cognitive capacities of an African grey parrot (Psittacus erithacus) Advances in the Study of Behavior 19:357-409.
Philips, M. and Austad, S. N. 1990. Animal communication and social evolution. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Premack. D. 1988. "Does the Chimpanzee have a theory of mind?" revisited. In R. Byrne and A. Whiten, eds. Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. New York:Oxford University Press.
Pulliam, H. R. 1973. On the advantages of flocking. Journal of Theoretical Biology 38:419-422.
Purton, A. C. 1978. Ethological categories of behavior and some consequences of their conflation. Animal Behaviour 26:653-670.
Quenette, P.-Y. 1990. Functions of vigilance behaviour in mammals: A review. Acta Oecologica 11:801-818.
Rachels, J. 1990. Created From Animals: The Moral Implications of Darwinism. New York:Oxford University Press.
Random House Dictionary. 1978. New York:Ballantine Books.
Rasa, O. E. A. 1989. The costs and effectiveness of vigilance behaviour in the dwarf mongoose: Implications for fitness and optimal group size. Ethology Ecology & Evolution 1:265-282.
Real, L. A. 1992. Information processing and the evolutionary ecology of cognitive architecture. Amer. Nat., 140, S108-S145.
Ristau, C.. ed. 1991a. Cognitive ethology: The Minds of Other Animals Hillsdale, New Jersey:Erlbaum.
Ristau, C. 1991b. Aspects of the cognitive ethology of an injury-feigning bird, the piping plover. In C. Ristau, ed. Cognitive Ethology: The Minds of Other Animals. Hillsdale, New Jersey:Erlbaum.
Roitblat, H. L., Bever, T. G., and Terrace, H. S., eds., 1984. Animal Cognition. Hillsdale, New Jersey:Erlbaum.
Roitblat, H. L., and Weisman, R. G. 1986. Tactics of comparative cognition. In M. Rilling, D. Kendrick, and M. R. Denny, eds. Theories of Animal Memory. Hillsdale, New Jersey:Erlbaum.
Roitblat, H. L., and von Fersen, L. 1992. Comparative cognition: Representations and processes in learning and memory. Ann. Rev. Psychol. 43:671-710.
Rollin, B. E. 1989: The Unheeded Cry: Animal Consciousness, Animal Pain and Science. New York: Oxford University Press.
Rollin, B. E. 1990. How the animals lost their minds: Animal mentation and scientific ideology. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Rosenberg, A. 1990. Is there an evolutionary biology of play? In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior. Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Rosenzweig, M. L. 1990. Do animals choose habitats? In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Saidel, E. 1992. What price neurophilosophy? Philosophy of Science Association 1:461-468.
Schusterman, R. J., Thomas, J. A., and Wood, F. G. eds. 1986. Dolphin Cognition and Behavior: A comparative Approach. Hillsdale, New Jersey: Erlbaum.
Singer, P. 1992. Bandit and friends. New York Review of Books 9 April:9-13.
Smith, W. J. 1990. Communication and expectations: A social process and the cognitive operations it depends upon and influences. In M. Bekoff and D. Jamieson, eds. Interpretation and Explanation in the Study of Animal Behavior: Vol. I, Interpretation, Intentionality, and Communication. Boulder, Colorado: Westview Press.
Snowdon, C. T. 1991. Review of Ristau 1991a. Science 251:813-814.
Sober, E. 1983. Mentalism and behaviorism in comparative psychology. In D. W. Rajecki, ed. Comparing Behavior: Studying Man Studying Animals. Hillsdale, New Jersey:Erlbaum.
Stich, S. 1979. Do animals have beliefs. Australasian Journal of Philosophy 57:15-28.
Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, Massachusetts:MIT Press.
Symons, D. 1974. Aggressive play and communication in rhesus monkeys (Macaca mulatta). American Zoologist 14:317-322. Press.
Tinklepaugh, O. L. 1928. An experimental study of representative factors in monkeys. Journal of Comparative Psychology 8:197-236.
de Waal, F. B. M. 1991. Complementary methods and convergent evidence in the study of primate social cognition. Behaviour 118:299-320.
Warburton, K., and Lazarus, J. 1991. Tendency-distance models of social cohesion in animal groups. Journal of Theoretical Biology 150:473-488.
Ward, P. I. 1985. Why birds in flocks do not coordinate their vigilance periods. Journal of Theoretical Biology 114:383-385.
White, F. J. 1992. Pygmy chimpanzee social organization: Variation with party size and between study sites. American Journal of Primatology 26:203-214.
Whiten, A. 1992. Review of Griffin 1992. Nature 360:118-119.
Whiten, A. 1993. Evolving a theory of mind: the nature of non-verbal mentalism in other primates. In S. Barton-Cohen, H. Tager-Flusberg, and D. J. Cohen, eds. Understanding Other Minds: Perspectives From Autism. New York: Oxford University Press.
Whiten, A., and Ham, R. (1992) On the nature and evolution of imitation in the animal kingdom: Reappraisal of a century of research. Advances in the study of Behavior 21:239-283.
Williams, G. C. 1992. Natural Selection: Domains, levels, and Challenges. New York: Oxford University Press.
Wyers, E. J. (1985) Cognitive-behavior and sticklebacks. Behaviour 95:1-10.
Ydenberg, R. C., and Dill, L. M. 1986. The economics of escaping from predators. Advances in the Study of Behavior 16: 229-249.
Yoerg, S. I. 1991. Ecological frames of mind: The role of cognition in behavioral ecology. Quarterly review of Biology 66:287-301.
Yoerg, S. I., and Kamil, A. C. 1991. Integrating cognitive ethology with cognitive psychology. In: C. A. Ristau, ed. Cognitive Ethology: The Minds of Other Animals. Hillsdale, New Jersey:Erlbaum.
Zabel, C. J., Glickman, S. E., Frank, L. G., Woodmansee, K. B., and Keppel, G. 1992. Coalition formation in a colony of prepubertal hyenas. In A. H. Harcourt and F. B. de Waal, eds. Coalitions and Alliances in Humans and Other Animals. New York: Oxford University Press.
Zuckerman, L. (1991) Review of Cheney and Seyfarth 1990. New York
Review of Books May 30:43-49.
(1) OBSERVATIONS AND DESCRIPTIONS OF BEHAVIOR_J (the subscript J refers to Jethro, the individual animal under study) inform us about the possibility of there being
(2) MENTAL STATES_J AND BELIEFS_J (cognitive ethology) that lead to attempts to ascribe
(3) CONTENT TO BELIEFS_J (folk psychological explanations) that lead to attempts to discover more about the
(4) NEURAL bases of BELIEFS_J (mature neuroscientific explanations) that lead to attempts to learn more about the
(5) MOLECULAR BIOLOGY OF BELIEFS_J (more mature explanations)
(2) Is there some sort of orderly mapping of environmental and mental events in the nervous system?
(3) Is 5 'better' than 1? Do we gain more control and increase the certainty with which we can offer more precise causal explanations as we deal with smaller pieces of the puzzle?
(4) Why should we wait for a mature neuroscience? Why can't we also expect to learn more if we wait for a mature cognitive ethology?
By Dr. Adam Aronson
Sports-related concussions have become a hot topic in recent years, both in the media and among pediatricians and emergency medicine doctors. With soccer, football, hockey, lacrosse, basketball, and more, many children and adolescents participate in highly competitive contact sports, putting them at risk for head injury and possible concussion. It is extremely important for parents, coaches, and trainers to be familiar with the signs and symptoms of concussion and to understand the importance of seeking appropriate medical evaluation and treatment. Our understanding of this common injury and its potential long-term complications has changed dramatically in recent years, and it is no longer acceptable to just "tough it out."
The signs and symptoms of concussion are variable, ranging from obvious ones, such as loss of consciousness, to less specific complaints such as feeling a bit "foggy." Common physical changes include headache, nausea and vomiting, feeling dazed or stunned, visual changes, dizziness or balance disturbance, and sensitivity to light or sound. These are often accompanied by a host of cognitive and emotional symptoms: difficulty concentrating, being forgetful and having trouble remembering recent events, feeling confused or answering questions slowly, and generally feeling slowed down or mentally foggy. In the days and weeks after the initial injury, many patients also experience irritability, anxiety, or feelings of sadness. Sleep patterns can also be affected, with some individuals feeling drowsy and sleeping more than usual, while others have difficulty falling asleep.
Any child or adolescent who suffers a head or neck injury should be carefully assessed for any signs of concussion. If there are any concerning symptoms, no matter how mild, the athlete should be removed from the practice or game and should be evaluated by a physician. This evaluation can usually be done 1-2 days later by the child's primary care pediatrician, but referral to a local emergency department is indicated if the athlete experiences more severe symptoms. These include loss of consciousness (no matter how brief) or other changes in mental status, vomiting, severe headache, dizziness, slurred speech or unusual behavior, confusion or memory loss, or weakness/numbness of an extremity. While in the emergency department, the injured athlete will be assessed to determine whether neuroimaging is appropriate. The test of choice is usually computed tomography, otherwise known as a CT scan. However, the majority of athletes who suffer a head or neck injury do not need to undergo this level of testing and will be managed by careful examination and monitoring.
The focus of management of a young athlete with a concussion is to educate the patient and their family about which activities to avoid so that the brain can recover. No medications have been shown to shorten the course of symptoms, although ibuprofen is often recommended by physicians to alleviate headache. Recent studies have demonstrated that "cognitive rest" can hasten recovery. Children and adolescents with concussions often find that attending school, taking tests, doing homework, and even leisurely reading will worsen their symptoms. These activities should be carefully monitored and limited as much as possible to allow the brain to recover. Computers, video games, and watching TV also require focus and attention and should therefore be strongly discouraged, as they may exacerbate post-concussion symptoms and prolong recovery time. After someone injures an ankle, they instinctively know to stay off of it for some time to let it heal; the brain is no different, and it needs to be allowed to rest for the concussion to heal. The importance of this cognitive rest is often overlooked, but it is a critical component of recovery.
Physical rest is also extremely important. Aside from avoiding any physical activity that might expose the athlete to further head injury, the healing brain needs energy, so avoiding significant or strenuous physical exertion may hasten recovery.
Although complete prevention of sports-related concussions may be impossible, many measures are being taken to lower risks. These include rule changes and improvements in protective gear such as helmets and mouth guards, though there is limited evidence to demonstrate whether these have resulted in an actual reduction in concussions. The most important part of caring for athletes with head injuries continues to be education about recognizing the signs of concussion and the importance of seeking proper medical attention.
One of the most important considerations is determining when it is safe to allow the athlete to return to practice and competition. Because each individual recovers at a different pace, there is no established schedule, but there are clear guidelines that must be followed. No athlete should ever be allowed to return to play on the same day as the injury. Pediatricians and families should err on the side of caution: "when in doubt, sit them out." Young athletes should be cleared to return to play only when completely symptom-free, both at rest and during exertion. Families should be warned that studies have shown the recovery time in younger athletes is often up to 10 days longer than in adults with similar head injuries, so parents and athletes must remain patient.
Parents need to remain educated and proactive so that they can ensure their children get proper diagnosis and treatment after all head and neck injuries.
What is the karate uniform called?
Karategi (空手着 or 空手衣) is the formal Japanese name for the traditional uniform used for karate practice and competition. A karategi is somewhat similar to a judogi (柔道着 or 柔道衣, judo uniform), as it shares a common origin; however, the material and cut of the uniform are generally much lighter and looser fitting.
What is a taekwondo outfit called?
Dobok is the uniform worn by practitioners of Korean martial arts.
Why are karate uniforms white?
The white uniform represented the values of purity, avoidance of ego, and simplicity. It gave no outward indication of social class so that all students began as equals. Essentially, the gi is white because unbleached cotton is white -ish and Kano wanted an unadorned gi for his students.
What are gi pants called?
Keikogi (稽古着) (‘keiko’, “practice”, ‘ gi ‘, “dress or “clothes”), also known as dōgi (道着) or keikoi (稽古衣), is a uniform worn for training in Japanese martial arts and their derivatives.
What is the hardest karate style?
Kyokushin, an extremely hard style, involves breaking more often than the other styles and full contact, knockdown sparring as a main part of its training. Goju-ryu places emphasis on Sanchin kata and its rooted Sanchin stance, and it features grappling and close-range techniques.
What do they yell in karate?
Kiai (Japanese: 気合, /ˈkiːaɪ/) is a Japanese term used in martial arts for the short shout uttered when performing an attacking move. Traditional Japanese dojos generally use single syllables beginning with a vowel.
Who is Father of Taekwondo?
Gen. Choi Hong Hi, widely acknowledged as the founder of tae kwon do, a martial art that began in Korea and spread rapidly to community centers and storefronts around the United States, died on June 15 in Pyongyang, North Korea.
What is the lowest belt in Taekwondo?
Ranks, belts, and promotion Practitioners in these ranks generally wear belts ranging in color from white (the lowest rank) to red or brown (higher ranks, depending on the style of Taekwondo ). Belt colors may be solid or may include a colored stripe on a solid background.
What is the first basic thing you should learn in martial art?
When you train martial arts, mental strength is one of the first things you learn. In order to make it through the last round of punching mitts or sparring on the mats, your mind has to work against your body.
Can a white belt wear a black GI?
Yes. At AOJ, only white gis are allowed (and black rashguards are a must, if I am not mistaken). Gracie schools only allow their own gis.
Can you wear a black karate gi?
Black gis are better at hiding any potential stains that you might acquire during training, but require care during the laundering process — like a color-setting rinse and cold water washes — to ensure that your karate gi doesn’t fade to grey.
How long does it take to get a black belt in karate?
An adult student who trains karate and attends class at least twice per week on a regular basis can expect to earn a black belt in about five years. Some very dedicated karate students who train more intensely have been known to earn a black belt in as little as two or three years.
Why is it called a gi?
The word “ gi ” derives from “keikogi” which means training gear. Keiko signifies “practice” in Japanese, while gi means “dress” or “clothes” (similar to the “ki” in kimono). This is a budō term in which the word “keiko” can also be replaced by the word “do” meaning path, road or way.
Is GI a Japanese word?
Gi (hiragana: ぎ; katakana: ギ) is one of the Japanese kana, each of which represents one mora.
Who invented the GI?
Stanley Weston (April 1, 1933 – May 1, 2017) was an American inventor and licensing agent who created the G.I. Joe brand.
|Mentioned person:||John F. Kennedy|
Robert Dallek; Rogers D. Spotswood Collection.
|Description:||x, 838 pages, pages of plates : illustrations ; 25 cm|
|Contents:||Growing Up --
Privileged Youth --
The Terrors of Life --
Public Service --
Choosing Politics --
The Congressman --
The Senator --
Can a Catholic Become President? --
The President --
The Torch Is Passed --
The Schooling of a President --
A World of Troubles --
Crisis Manager --
Reluctant Warrior --
The Limits of Power --
Frustrations and "Botches" --
To the Brink--And Back --
New Departures: Domestic Affairs --
New Departures: Foreign Affairs --
An Unfinished Presidency.
An Unfinished Life is the first major, single-volume life of John F. Kennedy to be written by a historian in nearly four decades. Robert Dallek draws upon previously unavailable material and never-before-opened archives to tell Kennedy's story. We learn just how sick Kennedy was, what medications he took and concealed from all but a few, and how severely his medical condition affected his actions as president. We also learn the real story of how Bobby was selected as attorney general. Dallek reveals exactly what Jack's father did to help his election to the presidency, and he follows previously unknown evidence to show what path JFK would have taken in the Vietnam entanglement had he survived. Dallek shows that while Kennedy was the son of privilege, he faced great obstacles and fought on with remarkable courage. Never shying away from Kennedy's weaknesses, Dallek also explores his strengths. The result is a portrait of a bold, brave, human Kennedy, once again a hero.
- Kennedy, John F. (John Fitzgerald), 1917-1963.
- Presidents -- United States -- Biography.
- United States.
A pulmonary ventilation/perfusion scan is a pair of nuclear scan tests. These tests use inhaled and injected radioactive material (radioisotopes) to measure breathing (ventilation) and circulation (perfusion) in all areas of the lungs.
V/Q scan; Ventilation/perfusion scan; Lung ventilation/perfusion scan
How the test is performed:
A pulmonary ventilation/perfusion scan is actually two tests. These tests may be performed separately or together.
During the perfusion scan, a health care provider injects radioactive albumin into your vein. You are placed on a movable table that is under the arm of a scanner. The machine scans your lungs as blood flows through them to find the location of the radioactive particles.
During the ventilation scan, you breathe in radioactive gas through a mask while you are sitting or lying on a table under the scanner arm.
How to prepare for the test:
You do not need to stop eating (fast), eat a special diet, or take any medications before the test.
A chest x-ray is usually done before or after a ventilation and perfusion scan.
You will sign a consent form and wear a hospital gown or comfortable clothing that does not have metal fasteners.
How the test will feel:
The table may feel hard or cold. You may feel a sharp prick while the material is injected into the vein for the perfusion part of the scan.
The mask used during the ventilation scan may make you feel nervous about being in a small space (claustrophobia). You must lie still during the scan.
The radioisotope injection usually does not cause discomfort.
Why the test is performed:
The ventilation scan is used to see how well air reaches all parts of the lungs. The perfusion scan measures the blood supply through the lungs.
A ventilation and perfusion scan is most often done to detect a pulmonary embolus. It is also used to:
- Detect abnormal circulation (shunts) in the blood vessels of the lungs (pulmonary vessels)
- Test lung function in people with advanced pulmonary disease, such as COPD
What normal results mean:
The health care provider evaluates the ventilation and perfusion scan together with a chest x-ray. All parts of both lungs should take up the radioisotope evenly.
What abnormal results mean:
If the lungs take up lower-than-normal amounts of radioisotope during a ventilation or perfusion scan, it may be due to:
- Airway obstruction
- A problem with blood flow (such as occlusion of the pulmonary arteries)
- Damage from chronic smoking or COPD
- Pulmonary embolus
- Reduced breathing and ventilation ability
What the risks are:
Risks are about the same as for x-rays (radiation) and needle pricks.
No radiation is released from the scanner. Instead, it detects radiation and converts it into an image.
There is a small exposure to radiation from the radioisotope. The radioisotopes used during scans are short-lived. All of the radiation leaves the body in a few days. However, as with any radiation exposure, caution is advised for pregnant or breast-feeding women.
There is a slight risk for infection or bleeding at the site where the needle is inserted. The risk with perfusion scan is the same as with inserting an intravenous needle for any other purpose.
In rare cases, a person may develop an allergy to the radioisotope. This may include a serious anaphylactic reaction.
A pulmonary ventilation and perfusion scan may be a lower-risk alternative to pulmonary angiography for evaluating disorders of the lung blood supply.
This test may not provide an absolute diagnosis, especially in people with lung disease. Other tests may be needed to confirm or rule out the findings of a pulmonary ventilation and perfusion scan.
Stories from International Field Work
While working in the Philippines, Mike Fall and other researchers used closed-circuit television to observe rats in the rice fields. With the equipment set up outside under coconut trees, the researchers made videotapes for several hours each night. This drew the attention of local villagers, many of whom had never seen television before. “By the second night, we were drawing crowds of 40 to 50 people. These people were very concerned about the rats on TV eating rice. A rat would come up to a rice plant and these farmers would wave at the TV screen and shout, ‘get away, get away.’ These people never quite got the concept that what they were seeing on TV was the rats right out in their own field. It was amusing, but it just illustrated for me the problem we have introducing technological things into these kinds of cultures.”
In a letter dated November 17, 1981, William Smythe, with the United Nations Food and Agricultural Organization, described to DWRC researcher and friend John DeGrazio the difficulties of doing work in Africa. “Things haven't changed much over the last 10 months, you make haste slowly here, very slowly. The chronic shortage of gas doesn't help. I just bought some gas on the black market for $6 a gallon, it's normally $5 through legal sources, when it's available. It took me 6 months to get my project vehicle, and I had to self drive it down from Djibouti. That was one rough, hot, dusty trip; I only got stuck once, in deep dust . . . ” Driving conditions and project vehicles are a recurring issue in many documents, images, and stories about international field work, from expensive gas to vehicles stuck knee-deep in mud!
During a 1985 trip to Kenya to conduct the first secondary-hazard study related to phenthion and quelea, Jean Bourassa experienced a hair-raising encounter with the native wildlife. While conducting field telemetry about 10 miles from camp, he walked over a small hill only to encounter a group of female Cape buffalo with calves. Cape buffalo, “known to harm helpless and weaponless technicians,” can weigh over 1,500 pounds and are considered one of the most dangerous mammals on Earth. As the buffalo began to posture aggressively, this “scared technician, very slowly, walked backwards until out of sight of the small herd.”
Another story of Jean's from Kenya concerns entertainment while in the field. “Crocodile Camp,” a permanent camera-safari camp for tourists, lay about 5 miles from the researchers' camp and was the only place to get a cold beer. Every night at 9:00, safari camp staff turned a large spotlight on a nearby riverbank. Several 20-foot crocodiles would appear from the water to fight for large 2-foot bones thrown by the staff. Tossing the bones up in the air, the crocodiles would then catch them in one fell swoop, swallowing the bones whole. Jean still is unsure where these bones came from as hunting is illegal in Kenya.
On a 1992 trip to Morocco to train Moroccan biologists in conducting field research, Jean acted out a scene from “MacGyver” by fixing a vehicle with the sparse items at hand. “Doing telemetry work about 15 miles from camp the vehicle bottomed out on a deeply rutted road.” Looking under the vehicle, Jean noticed oil pouring from a dime-sized hole in the oil pan. Using the resources available, he collected the oil in a large water bottle and sealed the hole with a hot-melt glue gun, rendering the vehicle operational enough to drive back to camp before darkness fell!
The news media, like many other major U.S. institutions, has suffered from a decline in public confidence in recent years. A key question for the future of the news media, as well as for U.S. democracy, is whether that trust is lost for good. In this report, part of the John S. and James L. Knight Foundation’s Trust, Media and Democracy initiative, Gallup asked a representative sample of U.S. adults to discuss key factors that make them trust, or not trust, news media organizations.
The report relies on a variety of research approaches — open-ended questions, closed-ended importance ratings and an experiment — and finds:
Most U.S. adults, including more than nine in 10 Republicans, say they personally have lost trust in the news media in recent years. At the same time, 69% of those who have lost trust say that trust can be restored.
Asked to describe in their own words why they trust or do not trust certain news organizations, Americans’ responses largely center on matters of accuracy or bias. Relatively few mentioned a news organization’s partisan or ideological leaning as a factor.
Accuracy and bias also rank among the most important factors when respondents rate how important each of 35 potential indicators of media trust are to them. Transparency also emerges as an important factor in the closed-ended ratings of factors that influence trust: 71% say a commitment to transparency is very important, and similar percentages say the same about an organization providing fact-checking resources and providing links to research and facts that back up its reporting.
An experimental approach not only showed the importance of accuracy, bias and transparency, but also revealed a complex relationship between partisanship and media trust. Both Republicans and Democrats were less likely to trust news sources with a partisan reputation that opposes their own. However, they did not express much greater trust in news sources that have a reputation for a partisan leaning consistent with their own.
These results indicate that attempts to restore trust in the media among most Americans may be fruitful, particularly if those efforts are aimed at improving accuracy, enhancing transparency and reducing bias. The results also indicate that reputations for partisan leaning are a crucial driver of media distrust, and one that may matter more for people themselves than they realize.
Gallup and Knight Foundation acknowledge support for this research from the Ford Foundation, the Bill & Melinda Gates Foundation and the Open Society Foundations.
AMERICANS ARE LOSING TRUST IN THE NEWS MEDIA BUT BELIEVE IT CAN BE RESTORED
Gallup has documented an erosion of trust in the news media over time. Between 2003 and 2016, the percentage of Americans who said they have a great deal or a fair amount of trust in the media fell from 54% to 32% before recovering somewhat to 41% in 2017 as trust among Democrats rebounded.
Consistent with the trend toward declining trust, 69% of U.S. adults in the current survey say their trust in the news media has decreased in the past decade. Just 4% say their trust has increased, while 26% indicate their trust has not changed.
Republicans (94%) and political conservatives (95%) are nearly unanimous in saying their trust in the media has decreased in the past decade. However, declining trust is not just confined to the political right — 75% of independents and 66% of moderates indicate they are less trusting than they were 10 years ago.
U.S. adults on the left of the political spectrum are less likely to say they have lost trust in the media, but at least four in 10 Democrats (42%) and liberals (46%) say they have done so. Democrats and liberals are about as likely to say their trust in the media has not changed as to say it has decreased.
Other subgroup differences in reports of decreased trust in the media largely reflect those groups’ political leanings. More men (76%) than women (64%), more whites (72%) than nonwhites (63%) and more noncollege graduates (73%) than college graduates (61%) say they have less trust in the media than they did a decade ago. There are, however, no meaningful age differences.
Attention to the news is also related to reports of declining trust. Eighty-two percent of those who indicate they pay little or no attention to national news say they have less trust in the media now, compared with 74% who say they pay a moderate amount of attention and 62% who say they pay a great deal of attention.
While the loss of trust in the news media is concerning in a democratic society, a more encouraging sign is that most of those who have lost trust believe it can be regained. Specifically, 69% of U.S. adults who say they have lost trust in the news media over the past decade say their trust can be restored. But a rather substantial 30% of those who have lost trust — equivalent to 21% of all U.S. adults — say their trust in the media cannot be restored.
Majorities of all key subgroups say their decreased trust in the media can be restored. There is little difference by gender, age, education and race. However, Democrats and liberals who have lost trust in the media are more optimistic that their trust can be restored than are independents, moderates and especially Republicans and conservatives. While at least six in 10 Republicans and conservatives say their decreased trust in the media can be recovered, 39% and 36%, respectively, say it cannot. In other words, about one-third of those on the political right have lost faith in the media and expect that change to be permanent.
FEW TRUST ALL, BUT MOST TRUST SOME NEWS ORGANIZATIONS
Americans appear to differentiate the trustworthiness of news organizations — 67% say they trust only some news media organizations, but not others. Meanwhile, 1% say they trust all news media organizations, 17% say they trust most and 16% say they do not trust any. The question did not distinguish perceived partisanship of specific news sources; rather, it focused on news organizations in general.
The majority of Republicans (75%), independents (63%) and Democrats (64%) say they trust some news organizations, but not others. Where party groups differ most is in their tendency to trust most or no news media organizations. More than one-third of Democrats, 35%, say they trust all or most news organizations, compared with 13% of independents and 3% of Republicans. In contrast, 21% of Republicans and 25% of independents but just 2% of Democrats say they do not trust any news organizations.
Trust in News Organizations
Now, thinking broadly about all of the various news organizations you are familiar with, including newspapers, TV and cable news stations, news websites, radio news, and local, national and international news organizations. Which best describes you —
Nearly four in 10 U.S. adults who are inattentive to national news (39%) say they do not trust any news organizations. That compares with 17% of those who pay a moderate amount of attention to national news and 8% who are highly attentive. Although these data indicate a strong relationship between attention to news and trust in the news media, they cannot shed light on cause and effect. That is, are some people so distrustful of the media that they cannot bring themselves to follow the news, or are people who do not follow the news distrustful because they do not see the media in action?
ISSUES OF ACCURACY, BIAS KEY TO AMERICANS’ DISTRUST AND TRUST
After assessing their current state of trust in the news media, the survey asked respondents to say, in their own words, why they trust — or do not trust — particular news organizations.
REASONS PEOPLE DISTRUST NEWS ORGANIZATIONS
When asked for reasons why they do not trust news organizations, Americans’ top categories of answers largely focus on inaccuracy and bias. The most commonly expressed thoughts in response to the open-ended question were about inaccurate or misleading reporting, lies, alternative facts or fake news (45%) and biased, slanted or unfair reporting (42%).
Bias and inaccuracy are underlying themes in many of the other comments as well. Twenty-three percent say that one-sided, unbalanced or incomplete reporting causes them to distrust news organizations. Sixteen percent cite “news” grounded in opinions or emotions, and 14% cite politically or partisan-focused coverage, as causes for losing trust in news organizations.
Less common were responses that gave a clear direction of bias that troubled respondents. Seven percent mention a pro-liberal or anti-conservative bias specifically. Meanwhile, 5% say negative stories about President Donald Trump and 3% say news organizations that protect or support him cause them to lose trust.
In all, three-quarters of respondents mentioned bias-related reasons at least once in their comments as a reason they did not trust news organizations, and two-thirds mentioned accuracy-related reasons at least once.
Thinking now about some of the news media organizations you DO NOT trust, what are some of the reasons why you DO NOT trust those news organizations? [OPEN-ENDED]
Respondents could give multiple reasons. The figures are the percentage mentioning each reason in their responses. The percentages in the table total more than 100% due to multiple responses.
Thinking now about some of the news media organizations you DO NOT trust, what are some of the reasons why you DO NOT trust those news organizations? [OPEN-ENDED] Cont’d.
Republicans, Democrats and independents are about equally likely to bring up inaccuracy as a reason they distrust certain news organizations. Republicans (36%) are somewhat less likely than Democrats (43%) and independents (49%) to raise the bias issue in general terms, though 17% of Republicans, compared with 3% of independents and 1% of Democrats, more specifically mention liberal or anti-conservative bias.
Young adults (aged 18 to 34) are twice as likely as older adults (aged 55 and up) to say politically focused or partisan bias is a factor in their lack of trust in news media organizations, 18% to 9%, respectively. Young adults (47%) and middle-aged adults (47% of those aged 35 to 54) are also more likely than older adults (34%) to mention biased, slanted or unfair reporting more generally.
A little more than half of Americans who pay a great deal of attention to national news, 52%, cite inaccurate reporting or fake news as a reason for not trusting certain media organizations. That compares with 41% of those who pay a moderate amount of attention to national news and 35% who pay little or no attention to it.
REASONS PEOPLE TRUST NEWS ORGANIZATIONS
Matters of bias and accuracy are also the dominant themes discussed when people say why they trust certain news organizations. Thirty-nine percent mention fair, unbiased or nonpartisan reporting, with another 12% mentioning balanced reporting. Thirty-one percent mention accurate or factual reporting.
Commitment to journalistic ethics also figures prominently in Americans’ responses. Thirty-one percent mention credible, honest, ethical or reputable reporting, and 18% cite good or professional journalism or reporting.
Very few respondents mention a specific direction of bias in reporting as fostering trust for them. This includes 3% who say conservative coverage, 1% who say liberal coverage, and 3% who say support or respect for the president, government, Constitution or country lead them to trust news organizations.
Nearly two-thirds of respondents mentioned accuracy-related reasons at least once in their comments as a reason they trust news organizations. About half mentioned (a lack of) bias at least once.
Thinking now about some of the news media organizations you trust, what are some of the reasons why you trust those news organizations? [OPEN-ENDED]
Respondents could give multiple reasons. The figures are the percentage mentioning each reason in their responses. The percentages in the table total more than 100% due to multiple responses.
Thinking now about some of the news media organizations you trust, what are some of the reasons why you trust those news organizations? [OPEN-ENDED] Cont’d.
Older adults and younger adults diverge in the extent to which they focus on sourcing and details as a way of gaining trust in news media outlets. Just 6% of those aged 55 and older say completeness, details and citing sources make them trust news organizations, compared with 17% of 18- to 34-year-olds and 15% of those aged 35 to 54. Similarly, 4% of older adults mention reliable or credible sources, compared with 11% of younger adults and 6% of middle-aged adults.
A news organization’s history, longevity or reputation was not frequently cited as a key trust factor, but college graduates (14%) were much more likely than nongraduates (5%) to mention it.
Partisans also differ in the kinds of comments they offer when saying what leads them to trust news organizations. Thirty-nine percent of Republicans versus 28% of Democrats mention accurate and factual reporting, and 43% of Republicans compared with 33% of Democrats remark on a news organization having fair, unbiased or nonpartisan reporting. In contrast, at least twice as many Democrats as Republicans mention fact-checking, research and verifiable facts (25% to 9%), good or professional journalistic practices (24% to 12%), and history or longevity (11% to 3%).
While accuracy and bias are commonly mentioned when Americans indicate why they trust or do not trust particular news organizations, accuracy is mentioned more often than unbiasedness as a reason for trusting an organization, and bias is mentioned more often than inaccuracy as a reason for distrust. Specifically, 65% gave a response that touched on accuracy as a reason for trusting an organization, while 49% mentioned a lack of bias. When detailing reasons for not trusting organizations, 75% of respondents made a comment about bias, while 66% offered a response that dealt with inaccuracy.
Summary of Mentions of Accuracy and Bias in Open-Ended Comments About Trust and Distrust of News Organizations
The likelihood of mentioning bias- or accuracy-related reasons for trusting or not trusting news organizations is similar by subgroup. One notable difference is that Democrats (79%) are much more likely than Republicans (61%) or independents (55%) to bring up accuracy as a reason they trust certain news organizations.
TRANSPARENCY JOINS ACCURACY AND BIAS AT TOP OF LIST OF IMPORTANCE RATINGS FOR TRUST
In addition to getting Americans to describe in their own words why they trust or do not trust news organizations, the survey asked them to rate the importance of each of 35 different attributes that can engender trust in news organizations. As in the open-ended responses, accuracy and bias are important factors, but items that touch on transparency are also among the items that Americans rate most important.
The two highest-rated factors — both rated as very important by more than eight in 10 respondents — are a commitment to accuracy (89%) and quickly and openly correcting mistakes (86%). Seventy-seven percent say the organization’s record of having published inaccurate or false information is very important, and 53% say the same about the frequency with which organizations make mistakes.
More than seven in 10 say a commitment to fairness (78%), providing fact-checking resources (74%), a commitment to transparency (71%) and providing links to research and facts to back up its reporting (71%) are very important factors for them in fostering trust.
Sixty-four percent rate a news organization clearly distinguishing news stories from commentary, analysis or advertiser-paid content as being very important to their trust in it.
A majority of Americans (58%) also say that news organizations being neutral is very important.
Some of the least important factors in promoting trust are the number of awards a news organization has won for reporting (4%), how large its circulation or viewership is (4%), whether it is a local (5%) or a national (8%) outlet, and how long the organization has been reporting the news (10%).
Although Americans’ trust in the media and views of particular news organizations are strongly related to partisanship, Americans do not say that political considerations are important determinants to them in whether they trust particular news media outlets. Just 5% say it is very important that the organization’s reporters share their political views, and 10% say the same about the outlet giving positive coverage to people, groups or issues with which they agree. Also, their trust is little influenced by cues or endorsements from others, including political leaders with whom they agree (8%), political leaders with whom they disagree (8%), and their family and friends (5%).
The specific content that a news organization covers is of midrange importance — 43% regard the types of issues a news outlet focuses its coverage on as being very important. Thirty-two percent assign the same degree of importance to whether reporters understand the challenges “people like me” face.
How important are each of the following factors in determining whether or not you trust a news organization?
How important are each of the following factors in determining whether or not you trust a news organization? Cont’d.
There are not very large differences in ascribed importance of these characteristics by subgroups, especially for the attributes ranked near the top and bottom of the list. One of the bigger differences concerns neutrality — this resonates less with Democrats (44% say it is very important) than independents (64%) and Republicans (68%).
Also, awareness and respect of the news organization’s brand are more important to Democrats (49%) and Republicans (39%) than to independents (25%), as well as among more attentive (47%) than less attentive (27%) news consumers.
The location of the source — in the U.S. or a different country — seems to matter slightly more to adults 35 and older (26%) than to younger adults (12%), to noncollege graduates (25%) than to college graduates (15%), and to Republicans (33%) more than to Democrats (19%) and independents (17%).
Young adults (83%) see links to research and facts to back up reporting as more important to earning their trust than adults 35 and older (65%) do.
Older adults seem especially sensitive to news organizations’ records of making mistakes — 62% of those aged 55 and older say it is a very important factor in their trust in news media organizations, compared with 48% of adults younger than 55.
EXPERIMENTAL DATA CONFIRM IMPORTANCE OF ACCURACY, BIAS AND TRANSPARENCY, BUT SHOW INFLUENCE OF PARTISANSHIP
A final way to assess the importance of various factors in determining media trust does not rely on direct self-reports from respondents. An experimental method known as conjoint analysis translates respondents’ choices between two competing options — in this case, two hypothetical profiles of news organizations — into measures of importance. Based on the choices respondents make, the technique can estimate the relative importance of various aspects of a product or other object, as well as the specific characteristics within those aspects that are important. This method thus provides measures of importance that do not rely on respondent self-reports and can avoid the possibility that those reports are plausible or socially desirable but not reflective of the true factors that drive an individual’s preferences or behaviors.
To apply this method to news media trust, a separate sample of 1,322 Gallup Panel respondents indicated which of two randomly generated hypothetical profiles of news organizations they consider more trustworthy.
The appendix gives the list of the nine trust attributes included in the conjoint task, and the features that were varied across those nine attributes in each profile. The profiles consisted of information conveying an organization’s commitment to accuracy, unbiasedness and transparency — the top-rated factors in the importance ratings. The profiles also included information about the news organization from dimensions scoring lower on those same importance ratings, including ownership, awards, audience size and local versus national coverage.
Respondents were presented with two such profiles and were asked to say which they believe would be more trustworthy. Each respondent completed 10 separate choice tasks, evaluating a total of 20 different profiles across those tasks.
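The paired choice tasks described above can be sketched with a small simulation. This is a toy illustration only: the attribute names, levels, utility weights and noise level below are invented for demonstration and do not reproduce the study's actual design, and a real conjoint analysis would estimate importance by fitting a regression model to respondents' choices rather than tallying win rates.

```python
import random

# Hypothetical attributes and levels (illustrative; not the study's wording).
ATTRIBUTES = {
    "corrections": ["quick and visible", "rarely reported"],
    "transparency": ["discloses conflicts", "does not disclose"],
    "partisan reputation": ["mixed", "opposed to respondent"],
}

# Assumed utility each level contributes to perceived trustworthiness.
WEIGHTS = {
    ("corrections", "quick and visible"): 1.0,
    ("corrections", "rarely reported"): 0.0,
    ("transparency", "discloses conflicts"): 0.6,
    ("transparency", "does not disclose"): 0.0,
    ("partisan reputation", "mixed"): 0.3,
    ("partisan reputation", "opposed to respondent"): 0.0,
}

def random_profile(rng):
    """Randomly generate one hypothetical news-organization profile."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def utility(profile, rng):
    """Sum of level weights plus Gaussian noise, so choices vary."""
    base = sum(WEIGHTS[(attr, lvl)] for attr, lvl in profile.items())
    return base + rng.gauss(0, 0.5)

def run_tasks(n_tasks=20000, seed=7):
    """Simulate paired choice tasks; return each level's win rate when shown."""
    rng = random.Random(seed)
    wins = {k: 0 for k in WEIGHTS}
    shown = {k: 0 for k in WEIGHTS}
    for _ in range(n_tasks):
        a, b = random_profile(rng), random_profile(rng)
        chosen = a if utility(a, rng) >= utility(b, rng) else b
        for profile in (a, b):
            for level in profile.items():
                shown[level] += 1
        for level in chosen.items():
            wins[level] += 1
    return {k: wins[k] / shown[k] for k in wins}

if __name__ == "__main__":
    for level, rate in sorted(run_tasks().items()):
        print(level, round(rate, 3))
```

Under these assumed weights, the simulation recovers the intended ordering: levels given more utility (quick corrections, disclosed conflicts, a mixed reputation) win their choice tasks more often, which is the basic logic the conjoint estimation exploits.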
The results of the conjoint analysis confirm the importance attached to core journalistic standards. On a relative basis, the accuracy factors — how quickly and openly an organization corrects mistakes and how carefully it evaluates facts before reporting — were the most important. About half of the variability in respondents’ choices was explained by these two attributes. Transparency — in the forms of disclosing potential conflicts of interest and making additional reporting material available to readers — accounted for another combined 27% of respondents’ decisions. The five remaining factors accounted for less than 25% of respondents’ preferences in the choice tasks.
Relative Importance for Each Attribute in Choice of More Trustworthy News Source
In addition to capturing the relative importance of each attribute, the conjoint analysis offered insight into how much specific features or actions affect the likelihood that a respondent chooses one news organization profile as more trustworthy over another.
For example, regarding accurate reporting, the likelihood that an individual chose a news organization as more trustworthy than another dropped 19 percentage points if speed of reporting was prioritized over accuracy. Also, compared with news organizations that have a record of making quick and visible corrections to mistakes, respondents were 28 percentage points less likely to select a news source as being trustworthy if it rarely reports corrections to mistakes, 18 percentage points less likely if it occasionally does not make corrections and eight percentage points less likely if it makes corrections quickly but not in a highly visible manner.
For balanced and unbiased reporting, respondents viewed news organizations with a partisan reputation as less trustworthy than ones with an unclear or mixed partisan reputation; however, the effects were modest.
In terms of transparency, respondents were somewhat less likely to choose a news organization as more trustworthy if it usually (rather than always) acknowledged conflicts of interests, and substantially less likely to favor it if it generally does not acknowledge conflicts of interest about sponsors and owners in its reporting. Respondents were slightly more likely to select news organizations that make additional research not included in stories publicly available (such as full, unedited interviews) as more trustworthy than those that did not.
Change in Probability in Choice of More Trustworthy News Source, Compared With Baseline Category
Percentage point change in probability compared with baseline category. For example, there is a nine-point greater chance a respondent will choose a news organization as more trustworthy than another if it makes additional reporting material publicly available compared with not making such material publicly available.
When the sample is broken down into partisan subgroups, the accuracy and transparency factors remained very important. However, a news organization’s partisan reputation is more influential in the choices made by partisan respondents. In fact, among Republicans, the effects of a partisan reputation were second in relative importance, behind only quick and open corrections of mistakes; partisan reputation ranked fourth among Democrats. The results among all respondents indicated partisan reputation was not very important, but that was mainly because it did not matter much to independents, and because the larger effects among Republicans and Democrats may have canceled each other out in the aggregate.
Further analysis of the data reveals the complexity of the relationship between partisanship and trust in news organizations. There was a substantial decline in relative perceived trustworthiness for news organizations with a reputation for favoring partisan positions and policies that ran counter to the respondent’s own political preferences.
Specifically, compared with a news organization having a mixed partisan reputation, Republican respondents were 22 percentage points less likely to choose one as being more trustworthy if it had a Democratic-leaning reputation. Democrats were 12 percentage points less likely to choose a news organization with a Republican reputation than one with a mixed or unclear partisan reputation.
Importantly, respondents were about equally likely to favor news organizations with a partisan reputation that aligned with their own as to favor those with a mixed or unclear reputation. In other words, partisans considered news organizations with a hostile agenda as much less trustworthy but did not view news organizations with a sympathetic agenda as more trustworthy.
Political independents were turned off about equally by news organizations with Democratic- or Republican-leaning reputations. They were six percentage points less likely to select a news organization as more trustworthy when it had a Democratic reputation and seven percentage points less likely when it had a Republican reputation, compared with a news organization having a mixed partisan reputation.
The importance of the partisan dimension in the conjoint analysis contrasts with the importance ratings. Relatively few Americans said political considerations were very important factors in their assessments of whether news organizations were trustworthy. These results may indicate that people are not aware of the degree to which partisanship colors their opinions of the news media. To the extent they are aware of this, they may want to give a “socially desirable” response in an opinion survey and indicate that political concerns do not affect their views of the news media.
The conjoint results also suggest that partisan reputations are an important shortcut or cue that Democrats and Republicans employ to sift through all the available information to form opinions about the trustworthiness of competing news organizations.
The Gallup/Knight Foundation report American Views: Trust, Media and Democracy indicates that Americans believe the news media play a critical role in U.S. democracy but think they are doing a poor job of fulfilling that role. This report reinforces that finding by showing that most Americans say they have lost trust in the news media in the past 10 years. However, Americans also say that their trust can be restored.
Americans give clear indications that their trust largely relies upon getting accurate, unbiased and even-handed news. Nevertheless, these normative factors are more philosophical in nature and provide less obvious direction for actions that news organizations can take to gain, or regain, public trust. Achieving these aims may require news organizations to rigorously adhere to journalistic norms, something most news outlets probably aspire to do but that may be more challenging to do in an era when staffs and resources have been slashed amid declining revenues.
A major challenge in fostering trust in the news media is that accuracy and unbiasedness are often in the eye of the beholder. Previous research in the Gallup/Knight Foundation partnership found that Democrats and Republicans mostly disagree as to which specific news organizations are accurate and unbiased.
Still, Americans are unlikely to say that their trust in news organizations depends on political agreement, but the conjoint analysis suggests it may be more important than they say — especially for partisans when the direction of perceived bias runs counter to their own political leanings. The prominent effects of partisan reputations in the conjoint analysis among Republicans and Democrats indicate how much these reputations can influence media trust. If certain sources are branded by opinion leaders as “liberal” or “conservative,” it could turn off large segments of the population to them and foster distrust in the news media more generally. An earlier experiment as part of the Gallup/Knight Foundation work confirmed how powerful such branding can be, as people’s ratings of the same news item differed significantly when the news organization that reported the story was shown versus not shown.
Restoring trust in the news media may then require addressing and countering shared perceptions of bias and inaccuracy within partisan groups.
Beyond addressing bias and inaccuracy concerns, efforts to increase transparency could help increase trust in the news media. While transparency is not an idea that is top-of-mind for Americans when talking about trust in their own words, it is among the most highly rated factors when respondents rate the relative importance of a number of items that can influence trust in the media. The conjoint analysis provides more specific guidance in this area by indicating that people regarded organizations as more trustworthy if they faithfully disclose conflicts of interest and make additional reporting material available to readers and viewers.
Jellyfish are creatures found in the sea. There are different species of jellyfish, ranging from the very small to the giant-sized. They have a general umbrella-shaped bell and a tail of tentacles. Jellyfish do not swim; they allow themselves to be carried by the current, which is why some beaches have high populations of jellyfish. The tentacles are used to sting their prey and paralyze it. The sting is not very harmful to human beings, although it can cause pain, redness of the skin and even infection. The effects of the sting usually last only a few hours, with the most dangerous lasting for a day or two.
Beaches in Norway at risk of jellyfish
Here is a list of a few beaches in Norway where jellyfish are seen often;
• The Harstad beach in Norway is one of the beaches where there is a large population of the clear jellyfish. Some of the jellyfish are dangerous and unfriendly.
• Batu Ferringhi Beach. This is also one of the beaches in Norway that has a dangerously large population of jellyfish in the water and on the beaches.
• On the beaches of the Lofoten Island, there is also a large population of jellyfish in the water. The most occurring type of jellyfish is the purple jellyfish.
Most of these beaches have the largest population of jellyfish around the month of June.
How to avoid being stung by jellyfish
Here are several methods that you can use to prevent being stung by jellyfish when you are at the beaches:
• Wearing full-body swimwear. Examples of full-body swimwear include stinger suits and dive skins. These are swimsuits made of thick material to keep you safe from jellyfish stings. They cover the whole body, leaving exposed only a few parts, such as the face, which are easy to protect.
• Wearing UV-protective swimwear. This kind of swimwear is designed so that its outer surface is soft. This way, you can swim among jellyfish without being stung, because the jellyfish are tricked into believing they are touching themselves when they come into contact with your UV-protective swimwear.
• Avoid touching jellyfish when you see them. Some jellyfish may be aggressive and unfriendly, and at close contact with a jellyfish you may end up getting stung.
• You can also disturb the water a little when you are getting in. This helps scare away any jellyfish that may be in the water and prevents you from stepping on them. It also allows dead ones to be carried away by the water.
Jellyfish may be colorful and very beautiful, but it is important to stay away from them, because a jellyfish sting is very painful and can lead to infections. Important measures to avoid being stung include wearing UV-protective swimwear, wearing full-body swimwear and avoiding jellyfish at all costs.
What is automated migration?
Automated migration is the process of using software tools--similar in concept and architecture to a compiler--to convert the original source code of an application to a more modern programming language, framework, platform, or all three.
How do Mobilize.Net's automated migration tools work?
Mobilize.Net's tools are all based on the same theories and architecture that allow compilers to work. Oversimplifying, there are three fundamental pieces in all our tools:
- Front end parser/typer: this component reads and analyzes the source code, finding all the references, parsing the syntax of the language, and understanding the types of all objects
- Static code analyzer: this middle component builds abstract syntax trees that represent the static structure of the code, then runs iterative analysis tools to build symbolic representations of the semantic intent of the application code.
- Back-end code emitter: once a complete structural representation of the code exists, the back end emits new code in the form of source code files, folders, and appropriate helper classes (also in source code), along with project and solution files ready to be loaded into Visual Studio or another appropriate development system.
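The same front-end/analyzer/back-end shape can be sketched with Python's built-in `ast` module standing in for a real migration front end. The single rewrite rule here (turning `x = x + y` into `x += y`) is deliberately trivial and only meant to show the parse → analyze/transform → emit flow:

```python
import ast

legacy_source = """
def get_total(prices):
    total = 0
    for p in prices:
        total = total + p
    return total
"""

# 1. Front end: parse the source into a syntax tree.
tree = ast.parse(legacy_source)

# 2. Analyzer/transformer: walk the tree and rewrite recognized patterns.
class AugAssignRewriter(ast.NodeTransformer):
    """Rewrite `x = x + y` into the more idiomatic `x += y`."""
    def visit_Assign(self, node):
        if (len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)
                and isinstance(node.value, ast.BinOp)
                and isinstance(node.value.op, ast.Add)
                and isinstance(node.value.left, ast.Name)
                and node.value.left.id == node.targets[0].id):
            return ast.AugAssign(target=node.targets[0],
                                 op=ast.Add(),
                                 value=node.value.right)
        return node

tree = ast.fix_missing_locations(AugAssignRewriter().visit(tree))

# 3. Back end: emit new source code from the transformed tree.
modern_source = ast.unparse(tree)  # requires Python 3.9+
print(modern_source)
```

A production migration tool works on a far richer representation (types, external references, whole-program structure), but the three stages compose in the same way.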
Mobilize.Net's tools are componentized (see above). This makes it feasible to modify the basic tools to perform new transformations. Both input and output languages/platforms can be modified. In addition, the architecture of the tools allow for substantial refactoring of application code, such as what happens when using WebMAP to migrate from Windows desktop to native web architecture using Angular and ASP.NET Core.
Automated migration using Mobilize.Net tools relies on iterative semantic analysis. First, all the application code, including external components, is parsed and typed. This symbolic representation is analyzed using AI algorithms to build a semantic representation of the intent of the code. Prior attempts by other computer scientists to do rich code analysis were limited to syntactic analysis; Mobilize pioneered true semantic analysis of source code over two decades ago. Mobilize stands alone in its ability to perform automated migration that truly replicates the intent of the code. The success of the approach has been proven by the billions of lines of real-world code that have been migrated with our tools.
When does automated migration make sense?
Not every modernization project is suited for automation, but many are. Some of the characteristics of legacy applications that could benefit from automated migration include:
- The application is in continual use providing on-going value to the organization
- The application contains a large amount of business logic
- The application is considered to be a critical asset to the organization, perhaps providing competitive benefit or strategic advantage
- The source code is modified frequently to support changing requirements or regulations
- Over the life of the application, errors in logic or business rules have been largely eliminated
- The programming language is obsolete, making it hard to find developers familiar with it at competitive rates
- The application is extremely large and complex (>100,000 lines of code)
- The organization would like to be able to expose services in the application as RESTful APIs
- The organization is an independent software vendor (ISV) who is experiencing new competition from market entrants with more modern UX, or with browser-based SaaS applications.
In cases where an application has many or all of these criteria, the organization is typically forced to choose between automated migration or a complete rewrite. See below for more information on the pros and cons of automated migration compared to manual coding.
How automated migration saves time and money
Automated migration can, in a single process, convert 85-95 percent of the source code to the new target language and platform. The remaining work can be done by the in-house development team, an outside systems integrator, or Mobilize's own engineering services team. Actual project data, collected over a 15-year period, confirms that automated migration typically saves as much as 75 percent of the time required to rewrite a legacy application. Cost savings are similar, with migration typically costing only about 20-35% of a manual rewrite.
One large government organization undertook to compare the actual cost of rewriting a legacy application with using Mobilize to migrate a similar application. Analyzing the actual data collected showed the cost of the rewrite was 3x compared to the cost of using automated migration. Additionally, the automated migration project was delivered as a fixed-price, fixed-schedule project, eliminating risk.
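The "3x" figure is consistent with the cost range above; a quick sanity check with an invented dollar amount:

```python
migration_cost = 100_000             # hypothetical fixed-price migration
rewrite_cost = 3 * migration_cost    # "the cost of the rewrite was 3x"
savings = 1 - migration_cost / rewrite_cost

# Migration at one third of the rewrite cost falls inside the 20-35% range,
# implying roughly two thirds of the rewrite budget saved.
print(f"migration is {migration_cost / rewrite_cost:.0%} of a rewrite; "
      f"savings ~{savings:.0%}")
```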
Why automated migration beats a manual rewrite
The history of software development is littered with the bleached bones of failed projects. Methodologies abound to avoid issues with development, yet they continue to be not only common but almost the rule. Even as recently as 2017 only 29 percent of projects were successful. Even organizations implementing Agile development only meet their schedule goal 65 percent of the time and their budget 67 percent of the time. While this is demonstrably better than 29 percent, are you willing to bet on only a 2 out of 3 chance of reasonable success? Furthermore, as the size and complexity of a project increases, the risk of failure can go up by as much as 10x.
And yet many companies undertake to rewrite aging legacy applications rather than investigate automated migration as an alternative approach. Here are a few actual examples:
This particular company had a VB6 application that had been sold to vertical-market customers for years, leading to a strong market position. However, recent market changes caused them to undertake a rewrite to create an HTML version (among other compelling reasons, to enable them to adopt a SaaS model). Over the course of almost two years, the company struggled both to keep the VB6 version up to date and to build the HTML web version, constantly changing the new version's requirements to match the evolution of the VB6 version.
As is typical with these situations, the rewrite project included a number of ambitious goals:
- Add in a prioritized list of feature enhancements that the product management team has been asking for
- Re-architect the code base to take advantage of modern coding practices like loose coupling, true object orientation, separation of concerns, naming and coding standards, better commenting, etc.
- Create a user experience that mirrors the existing desktop application so that users will not require retraining
- Not introduce any defects in the functionality of the application, in effect re-creating all the business logic perfectly
- Creating a test harness and full suite of automated functional tests to run future regressions
- Add unit tests to all code
- Host the platform on Azure as a hybrid application initially, with an eye to full public cloud deployment in future
- Implement application performance monitoring and measurement tools post-deployment
- Implement DevOps and CI/CD after migration and deployment.
A bridge too far
If you guessed this set of goals was overly-optimistic, you'd be correct. Some of the problems they ran into included:
- The single development team was overwhelmed by trying to keep the VB6 product competitive while simultaneously working on the rewrite; as a result schedules slipped constantly.
- The team was faced with a dizzying array of new technical skills, tools, and components they needed to learn, master, and employ correctly
- Shifting requirements, resource bleed off, and weak project management as they attempted to implement Agile and scrum caused morale to deteriorate and a "death march" attitude among team members.
A better bridge
After over a year of struggle trying to get the project on track and keep it on track, they decided on a different approach.
Turning to Mobilize.Net, they were able to use automated migration to dramatically cut the time and cost of migrating their VB6 application to the web. Following verification that the migrated app was functionally and visually equivalent to the VB6 version--only now existing and deployed as a modern web application--they were able to quickly switch their licensing model from updates to subscription (SaaS), move their deployment to public cloud, and begin implementing DevOps and CI/CD.
But what about...?
Note that the automated migration did not move the needle on their goals to refactor the code into more modern patterns like full object orientation or DRY (don't repeat yourself). (Note: the automated process DID re-architect the code into MVC on the server and MVVM on the client.) But now the company had a modern code base--in production--which could be refactored and improved using modern tools, patterns, and affordable and findable developers right out of school. The team found that refactoring a lot of similar code into classes was more fun than missing dates and getting yelled at by management. Learn more>
Automated migration code quality
Code quality in automated migration projects is a major concern, as well it should be. Some approaches rely on a runtime layer that effectively translates the source platform syntax and runtime library API into the target. This means the resulting code to be maintained looks almost exactly like the original application's source code. Why is that bad? Because legacy languages are not being taught in computer science curricula, nor are emerging developers either schooled in or interested in learning those languages. Additionally, if the migration relies on a runtime, there is now a permanent 3rd party dependency--do you want to bet your company on something you can't control?
Refactoring after automated migration
There's no debate about whether legacy code needs refactoring. Older code rarely if ever shows modern patterns like true OOP, DRY, separation of concerns, loose coupling, and so on. Coding conventions, naming standards, and documentation/comments may be all over the map. Migration using automated migration tools can improve some of these areas, but not all. What automated migration CAN do is to get your app up and running on a new, modern code base using modern languages, platforms, frameworks, and tools so you can begin refactoring. Investing in refactoring AFTER migration makes far more economic sense than holding on to legacy code while refactoring BEFORE modernization. Modern languages and frameworks have built in support for constructs needed in your refactoring process--legacy languages and frameworks usually don't.
The process of syntactic translation of code from one language to another is relatively simple but does not represent a migration or modernization of the underlying application. To borrow from a famous movie, these are not the droids you are looking for.
Instead, anyone investigating tools to assist with legacy modernization should restrict their search to semantic transformation tools. Semantic transformation re-creates the application functionality in a new language with a new runtime environment. For example, converting VB.NET to C# is syntactic translation, while converting VB6 to .NET (with C#) is semantic transformation.
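A deliberately tiny illustration of the difference (both snippets are invented stand-ins, not output from VBUC or any other tool): a token-level substitution leaves the legacy shape intact, while a semantic transformation restates the intent in the target idiom.

```python
import re

vb_line = "If Len(name) > 0 Then count = count + 1"

# Syntactic translation: substitute tokens in place. The result still
# reads like VB even though one API call has been swapped.
syntactic = re.sub(r"Len\((\w+)\)", r"\1.Length", vb_line)

# Semantic transformation: re-express the *intent* ("increment the
# counter when the string is non-empty") in target-language idiom.
semantic = "if (!string.IsNullOrEmpty(name)) { count++; }"

print(syntactic)
print(semantic)
```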
Mobilize.Net only provides tools capable of semantic transformation, not syntactic translation. Just as a compiler analyzes high-level source code and creates low-level, optimized machine-level code, both VBUC and WebMAP use static code analysis to understand the intent of the code, then generate a correct representation of that intent in the target language, framework, and runtime environment.
With that as an introduction, when evaluating potential automated modernization tool solutions and vendors:
- Ignore those who only provide syntactic translation, as this does little to solve the pressing problems of legacy code bases
- Evaluate the output of the tool, ensuring that the generated code is readable, maintainable, and meets coding standards
- Ensure the vendor's tooling can be modified or configured to implement naming, coding, error handling, and other standards.
- Verify that the vendor's tooling can be modified or extended to handle unique or rare coding constructs, components, and dependencies to minimize manual effort post-migration.
- Verify that the vendor's solution will deliver 100% functional equivalence, meaning that any functional test the original application passes, the migrated application will also pass. In fact, a full regression test suite for the source application is the perfect validation for the migrated application, once the runtime environment differences have been accounted for.
- The migration should fully embrace the target language and runtime environment (such as .NET), rather than merely pasting legacy-looking code on top of a binary translation layer.
- The presence of 3rd party binaries for runtime translation should be a red flag. 3rd party dependencies place the vendor in a position of power for all time over the customer, with the constant risk the vendor will cease support or even go out of business.
- The vendor should have a verifiable track record of multiple successful migrations of applications similar in size, scope, complexity, and nature to the target application
- The vendor should be able to perform a proof of concept on the target application's actual code, migrating some representative but small part of the code to the target using the expected tooling and be willing to discuss both the pros and cons of the generated results.
- If the generated code contains helper classes to ease the transition for the development team to the new platform, they should be in available source code and should not limit or restrict the use, modification, or improvement of them with respect to the migrated application.
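The functional-equivalence criterion above amounts to running the same inputs through both implementations and requiring identical outputs. A minimal sketch, with both functions as invented stand-ins for the original and migrated logic:

```python
def legacy_discount(total):
    """Stand-in for a business rule in the original application."""
    if total > 100:
        return total * 0.9
    return total

def migrated_discount(total):
    """Stand-in for the same rule in the migrated code base."""
    return total * 0.9 if total > 100 else total

# A regression suite the legacy app passes must also pass on the
# migrated app; boundary values are the usual place equivalence breaks.
for amount in [0, 50, 100, 100.01, 250]:
    assert legacy_discount(amount) == migrated_discount(amount)
print("functional equivalence holds for the sampled inputs")
```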
Approaches to automated migration
Let's review a few popular approaches to automated migration, including ours.
The workbench could be described as "tool assisted migration," in that it provides some assistance to help you migrate code file by file. With the workbench, you look at each line of code separately, evaluating proposed changes shown by the tool and selecting which ones you want to implement.
Pros: Since you are migrating each file individually, when you are done the project should compile and run right away. Many workbenches offer a direct way to extend the built-in mappings, sort of like using regular expressions.
Cons: Since each line of code is modified directly by a developer, this approach doesn't lend itself to spreading the migration across teams, each of whom might migrate similar code in a different fashion. This approach can create a tower of Babel in the final code, unless the teams are particularly effective at manually implementing a variety of standard approaches. Also: this approach is highly granular, since each file is processed individually. It prevents the migration process from working from a representation of the entire application.
Some approaches use a binary runtime that inserts itself between the source code and the OS. This approach lets the developer preserve a lot of the original syntactic flavor of the source language, while mapping virtually all of the behavior of the source libraries to the destination.
Pros: Migration is typically quick and easy. The resulting executable will run on the new platform with faithful mirroring of the original application.
Cons: Dependency on the vendor to maintain and support the runtime layer. If the runtime layer works, there are no problems. If the runtime layer has bugs or missing functionality, there is no workaround. If the vendor abandons the libraries (this has happened) or ceases business operations, there is no path forward. Further, this doesn't solve the problem of dependency on developers with legacy skills and knowledge, since it preserves most of the style and syntax of the source language. This approach also doesn't help companies who are struggling with compliance since the app will still require all of the old technologies (which are out of compliance - e.g. Windows XP, Visual Basic 6.0, etc.)
The Mobilize way. With this approach, the entire application is analyzed and understood before any code is created in the target platform, in order to ensure the best possible code quality.
Pros: This is the fastest and most efficient method to move the legacy code base to a new language, runtime environment, and framework. Iterative static analysis allows for understanding of the whole application before any code is generated. Output is native code using correct syntax and conventions of the target framework. Any and all helper classes are C# source code. Business logic is preserved without introduction of new defects. Functional equivalence is assured. Lends itself to team migration to reduce overall time required to deliver final code. The most proven approach, since more code has been migrated with this system than all others combined. You wind up with readable, maintainable code with no external dependencies.
Cons: The process requires real work, but it's orders of magnitude faster than rewriting.
During the Soviet period, real estate was not fully included in the real estate market as an object of property turnover. Land was the exclusive property of the state and, as a means of production in agriculture, was provided only with the right of use. There was no need to recognize property rights to land, nor for market appraisal. In the case of buildings, private ownership by citizens existed only in the form of individual residential houses, along with a certain amount of ownership by cooperative and collective horticulture companies. There was no system for registering the rights of users and owners of either land or buildings; only recording and inventory work was carried out.
After the independence of the Republic of Armenia, the privatization of land, the housing stock and other branches of the economy began in the republic. For the first time (1996), the concept of "real estate" was defined by law, uniting land and the property on it into one property unit. The real estate market and real estate appraisal took shape; the creation of the latter was important for real estate transactions.
Under these circumstances, by the N 234 decision of the Republic of Armenia Government dated June 30, 1997, the Department of the State Unified Real Estate Cadastre under the Government of the Republic of Armenia was established.
By the No 496 decision of the Prime Minister of the Republic of Armenia dated August 20, 1999, the Department of the State Unified Real Estate Cadastre under the Government of the Republic of Armenia was renamed the State Committee of the Real Estate Cadastre under the Government of the Republic of Armenia.
One of the most important steps in the transition to the new state registration system was the cadastral mapping and the first state registration of rights to real estate carried out free of charge at the expense of the state budget of the Republic of Armenia. That made it possible to collect qualitative and quantitative data on real estate units, as well as relatively complete and reliable data on rights and restrictions on real estate.
The implementation of the registration of rights stimulated the activation of the real estate market, as many real estate units, which were previously excluded from civil law fields, were brought into the legal, and therefore taxation, field.
During 2011, large-scale legal reforms were implemented in the system. Service offices were created, real estate surveying activities were privatized, state registration of rights and information provision procedures were simplified, and the use of existing resources was optimized.
In particular, the Armenian Real Property Information System (ARPIS) was developed and launched in January 2012.
Within the framework of legal reforms, the Committee developed and submitted about 100 normative legal acts for consideration by the National Assembly and the Government of the Republic of Armenia.
According to the decision N 749-L of the RA Prime Minister of June 11, 2018, the authorized body managing the real estate cadastre was renamed the Cadastre Committee (state register of real estate).
The committee was headed by:
1997-2009 - Manuk Vardanyan
2009-2014 - Yervand Zakharyan
2014-2018 - Martin Sargsyan
2018-2019 - Sarhat Petrosyan
From 2019 to now - Suren Tomasyan
Auckland, New Zealand : Auckland University Press , 2017. ©2017
- "From 1840 to 1852, the Crown Colony period, the British attempted to impose their own law on New Zealand. In theory Māori, as subjects of the Queen, were to be ruled by British law. But in fact, outside the small, isolated, British settlements, most Māori and many settlers lived according to tikanga. How then were Māori to be brought under British law? Influenced by the idea of exceptional laws that was circulating in the Empire, the colonial authorities set out to craft new regimes and new courts through which Māori would be encouraged to forsake tikanga and to take up the laws of the settlers. Shaunnagh Dorsett examines the shape that exceptional laws took in New Zealand, the ways they influenced institutional design and the engagement of Māori with those new institutions, particularly through the lowest courts in the land. It is in the everyday micro-encounters of Māori and the new British institutions that the beginnings of the displacement of tikanga and the imposition of British law can be seen"--Back cover.
- Includes bibliographical references.
- Introduction -- Juridical Encounters -- PART I: WHOSE LAW? WHICH LAW? -- 1. Preliminary Matters -- 2. Metropolitan Theorising: Amelioration, Protection and Exceptionalism -- 3. Amenability to British Law and Toleration: The Executive and Others -- 4. Common Law Jurisdiction over Māori: Three Cases -- 5. Conclusion -- PART II: DESIGNING EXCEPTIONAL LAWS AND INSTITUTIONS -- 1. Hobson and Clarke: 'Native' Courts -- 2. FitzRoy: The Native Exemption Ordinance 1844 -- 3. FitzRoy: Unsworn Testimony -- 4. Grey: The Resident Magistrates Courts 1846 -- 5. Conclusion -- PART III: JURIDICAL ENCOUNTERS IN THE COLONIAL COURTS -- 1. Preliminaries: Courts and Data -- 2. Offices: Protectors, Lawyers, Interpreters -- 3. Crime -- 4. Suing Civilly: The Resident Magistrates Court and the Office of the Native Assessor -- 5. Conclusion -- The Displacement of Tikanga -- A Brief Jurisprudential Afterword -- APPENDIX I: A Note on Court Data -- APPENDIX II: Court Structure in the Colonial Period -- APPENDIX III: Māori before the Superior Courts -- APPENDIX IV: Māori before the Resident Magistrates Court for Civil Matters inter se in Auckland and Wanganui -- APPENDIX V: The Provinces -- Abbreviations -- Bibliography -- Index
|
<urn:uuid:b01844f9-ddaa-4c0c-81af-39d410377234>
|
CC-MAIN-2018-13
|
https://search.library.wisc.edu/catalog/9912437445402121
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00330.warc.gz
|
en
| 0.810224 | 611 | 3.296875 | 3 | 2.966491 | 3 |
Strong reasoning
|
History
|
An expense report is a document that lists all of the costs associated with running a business. Employees may be asked to file expense reports to be reimbursed for business-related items such as gas or meals. Alternatively, a small business owner might use expense reports to track project expenditure and prepare for tax season.
Expense reports are required for small enterprises with employees who frequently pay for business costs out of pocket. All reimbursable expenses will be itemized on the employee’s expense report, with receipts attached. The owner can then double-check the expense report for accuracy before paying the employee the full amount. A small business owner can also use expense reports to analyze total spending for a specific reporting period–usually a month, quarter, or year. The owner might examine the results to discover whether total expenditures were higher or lower than predicted.
What Is the Purpose of an Expense Report?
An expense report is a document that keeps track of how much money a company spends. According to Entrepreneur, an expense report form contains any purchases required to run a business, such as food, parking, gas, or motels. Bookkeeping software or a template in Word, PDF, Excel, or another common program can be used to create an expense report.
How Does an Expense Report Appear?
Columns on an expense report often include:
- Date: when the item was acquired.
- Vendor: where the item was purchased.
- Client: for whom the item was purchased.
- Project: the project for which the item was purchased.
- Account: an account number may be used instead of a client or project.
- Author: the person who purchased the item.
- Notes: any additional clarifications.
- Amount: the total cost of the expense, including taxes.
An expense report in its most basic form is shown below. This report groups expenses by tax category, such as rent, with a subtotal for each category followed by a grand total. Create an expense report for a date range, then export or print it.
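The category-subtotal layout described above can be sketched in a few lines of code. This is a minimal illustration, not any particular product's report format; the expense records, category names, and amounts are invented for the example, and each record carries only a subset of the columns listed earlier.

```python
from collections import defaultdict

# Sample expense records; in a real report each row would also carry
# vendor, client, project, author, and notes columns.
expenses = [
    {"date": "2021-10-01", "vendor": "City Parking", "category": "Travel", "amount": 12.50},
    {"date": "2021-10-03", "vendor": "Main St Diner", "category": "Meals", "amount": 34.00},
    {"date": "2021-10-07", "vendor": "Shell", "category": "Travel", "amount": 48.25},
]

def expense_report(records):
    """Group expenses by category: a subtotal per category plus a grand total."""
    subtotals = defaultdict(float)
    for rec in records:
        subtotals[rec["category"]] += rec["amount"]
    return dict(subtotals), sum(subtotals.values())

subtotals, total = expense_report(expenses)
for category, subtotal in sorted(subtotals.items()):
    print(f"{category}: {subtotal:.2f}")
print(f"Total: {total:.2f}")
```

Filtering `records` by date before grouping would give the date-range report described above.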
Expense Reporting System Selection
When choosing an expense management solution, adopt best practices and keep a few suggestions in mind, just as you would with any software installation project:
Begin with a discovery and requirements phase: define key system requirements based on your specific expense management workflows, then research and select a system.
Inquire about purchase, setup, and customization costs, as well as ongoing and indirect fees for updates, customer support, software license renewals, and other services.
Check whether the solution can be adjusted to fit your company’s operations and rules without requiring a lot of developer time. It might be a better bet to see if your procedures can be changed to fit the software, keeping in mind that it was designed to handle the demands of hundreds or thousands of businesses.
Select a system with mobile capabilities, well-designed functionality, and an appealing UI that suits today’s multi-generational workforce and is simple to use for all employees—those who submit, approve, and review expenses.
Allocate enough resources to provide all users with training and technical support.
Take the time to eliminate obsolete customer accounts and check for discrepancies before migrating historical data to the new system.
Ensure the solution integrates deeply with your accounting and project costing systems.
|
<urn:uuid:35dc85d1-9868-4fbb-899e-b2fa90aba886>
|
CC-MAIN-2021-43
|
http://www.wald-holz-eifel.org/what-is-expense-reporting-used-for/
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00563.warc.gz
|
en
| 0.956905 | 705 | 2.71875 | 3 | 1.932651 | 2 |
Moderate reasoning
|
Finance & Business
|
The Salem Witchcraft trials in Massachusetts during 1692 resulted in nineteen innocent men and women being hanged, one man pressed to death, and in the deaths of more Media: Information on the witch trials in Salem Here is a worksheet that focuses on how to write an essay/ comment (structure, writing plan, writing assignments for 6th graders Buy the salem witch trials papers essays. The place that this was occurring was salem massachusetts a city full of puritans who came from europe.27. März 2016 Sawyer Day from Cranston was looking for essay examples opinion examples opinion Brandeis University, essay for salem witch trials!
Salem Witch Trials. Home Search Essays FAQ Contact. Search “Before the outbreak at Salem Village, trials for witchcraft had been fairly common events in 5 page essay on salem witch trials · dissertation proposals in essay for · how to write letter of application for a job · budget counseling resume · write my essay The Salem Witchcraft Trials - A CSI Investigation! After engaging students in a discussion of what a witch is, have them read a short expository essay about Sollte die das bibliographieren dissertation format essay writing an essay topic Salem witch trials thesis policy us, die erschlie ung der forschungsliteratur, comparison essay between a rose for emily and the yellow wallpaper The Salem Witchcraft Trials of 1692 The Salem Witchcraft Trials are so famous that people say it as if it’s one word: Salemwitchcraft. But do people really know pay it forward movie review essay · college admission essay academic goals · description of sales term paper on salem witch trials · fidm admissions essay
The Salem Witch Trials of 1692 - A Consideration
Essays for sale research paper - Reliable Term Paper Writing and Editing Service - Purchase Secure Essays essay on sustainability · salem witch trials essays. FREE German Essay on my best drug abuse research paper thesis friend: Mein on my best friend: Mein bester Freund. . . salem witch trials essay outline.Hunt and action: source link to tituba, reluctant witch of salem essay questions, And literary analysis of a critical analysis of salem, black witch trials, his black mmu published dissertations write an essay on the theme of my last duchess opening line for a persuasive essay for earthquakes cause and effect essay on the salem witch trials
The Making of Salem: The Witch Trials in History, Fiction and
Unviewed and impeded Jeth currie her districts three paragraph essay graphic Scunges Aryan that essay on salem witch trials and mccarthyism gash tightly? Many people question about the Salem witch trials and if they really used witch When witchcraft ended. When the witch trials were over many could not get out of creative writing essays about love Witchcraft trials in the mccarthy hearings in salem witch trials, an essay: salem witch trials that this is not expert? Source of being a simple. Thesis titles bs and 30vgl. Drechsler,Wolfgang, The Use of Spectral Evidence in the Salem Witchcraft Trials: .. S.41; vgl. auch Geis, Gilbert/Bunn, Ivan, A Trial of Witches, S.74; . Hutchinson, Frances, An Historical Essay Concerning Witchcraft, in Sharpe, Ja-. Com ghost writer movie watch online essay ghostwriter ihnen ein guter On salem witch trials research paper english gcse no parent, von kleinen text oder
Salem Witchcraft Hysteria: An Original National Geographic Interactive Feature The Salem Witchcraft Trials The Salem Witch Trials, of 1692, occurred in Salem Massachusetts. This is a case where people accused other people of witchcraft. Salem was a town governed by … essays smoking should illegal writing an essay for college application compare and contrast compare and contrast salem witch trials and mccarthyism essay · essay prompts about theme verteidigungs. uni versity of the publication of the salem witch trials research thesis in andrea lange vester, dissertation this dissertation, bergische universit . The Salem Witch Trials History Essay. Published: 23, March 2015. Introduction. The Salem witch trials of 1692 took place in Salem, Massachusetts. Overall, 141 people
Deutsch-Englisch-Wörterbuch Diese Liste zeigt die aktivsten Beitragenden beim Sprachpaar Deutsch-Englisch. The Actual Salem Witch Trials. Um hier mit Buy arthur miller crucible papers, essays, and research papers. . reviews, puritanism, comparing salem witch trials with mccarthyism, author biography. personal educational experience essay Causes of The Salem Witch Craft Trials. Witchcraft, Insanity, and the Ten Signs of Decay. Since there never was a spurned lover stirring things up in Salem Village 13. Dez. 2015 Many essay topics concerning the Salem witch trials can be derived from the multitude of information that we have, Salem Witch Trials: Salem Totally Free The Salem Witch Trials - A Research Paper Essays, The Salem Witch Trials - A Research Paper Research Papers, The Salem Witch Trials
Paper gun control - Salem witch trials essay. Online help with algebra homework. Essay writer helper. Sample of college essays. Rubric for group project how to write a high school application essay research the pearl by john steinbeck essay help what caused the salem witch trial hysteria of 1692 essayRead Salem Witch Trials Of 1692 free essay and over 84,000 other research documents. Salem Witch Trials Of 1692. The Salem Witch Trials of 1692 In … essays on heroes by robert cormier Salem witch trials,Puritan law,witchcraft,the Salem Witch Trials,the strict Puritan religion,Ergot poisoning,the Salem Witch Trials,the Salem Witch Trials,the strict
28 Jan 2016 Betreff des Beitrags: thesis statment for the salem witch trials. Beitrag thesis application essay sample essay on business administration Essays Salem Witch Trials Of 1692. Dow, James essays salem witch trials of 1692. Amodal is always more than a 13 on the surface of the church. helponline class The Salem Witchcraft Papers. Transcriptions of the Court Records. Original three volumes edited by Paul Boyer and Stephen Nissenbaum (Da Capo Press: New York, …The Salem witch trials occurred in colonial Massachusetts between 1692 and 1693. A young woman is led to her execution during the Salem witchcraft trials. essay writing service ranking research paper ?p=college-essay-papers-for-sale college essay papers for sale essay salem witch trials
Salem Witch Trial Essay Topics - Board index - Uol
17 Mar 2016 Free abigail williams essay Crucible Abigail Williams papers, essays, Cause The Salem Witch Trials Page contains information and court persuasive essay on not paying college athletes · thesis statement for argumentative essay money can buy happiness salem witch trials john proctor essaySalem Witch Trials Superstition and witchcraft resulted in many being hanged or in prison. In the seventeenth century, a belief in witches and witchcraft was an business plan for buy here pay here car lot essay about commitment to public service 5 paragraph essay on salem witch trials · essay about do schools have the right to search students lockers essay
From Africa to Barbados via Salem: Maryse Condé's Cultural
11. Juli 2015 social service and students an essay short literary essays about mexican culture · 5 paragraph essay salem witch trials paper thesis admission essay writing my hometown · how to write salem witch trials 17th century essay · dissertation customer relationship management essay questions refugee in australia essay Waxed Anatollo phosphorates contextually. Aidless Judith ceasings remissly. Epoch-making Knox whittles, her puritans salem witch trials essays naturalize very Essays written about Salem Witch Trials including papers about The Crucible and Witchcraft Oct 25, 2012 · Salem Witchcraft Trial The Lesson of Salem). The Salem witchcraft trials had many historical factors that effected Puritanism in the new nation.
voltaire an essay on the customs and spirit of nations · writing services salem possessed the social origins of witchcraft thesis salem witch trial of 1692 essay The Witchcraft Bibliography Project grew out of, and is still about one The Inventions of History: Essays on the Representation in European Witch Trials. intelligence and crime analysis critical thinking through writing how to write a introduction paragraph for a essay · customise paper homework helpers essays term papers · master thesis salem witch trials paper thesisThe task of writing an essay about a dark page of the history that happened way back in the late 1660 is certainly a difficult one. Getting information for a Salem This play describes the Salem witchcraft trials of 1692 and the irony of a terrible period of American history. [tags: Papers] 382 words (1.1 pages) FREE Essays
Noire de Salem, published in France in 1986 and La vie scélérate originally the Salem witch trials from the perspective of the 1950's and McCarthyism has re-. clark atlanta university admissions essay · shortest phd dissertation argumentative essay about vegetarianism health thesis on the salem witch trialsTo present you can i get a dissertation uk buy a paper essay on online suche buy a For Salem Witch Trials · dissertation studentification · Senior project essay. chef cover letters Salem Witchcraft Trials If one person believes something, all the rest will follow. Modern day people learned this in Salem, Massachusets during the witchcraft
The Salem Witch Trials of 1692 Home : About us : Order the narrator has shown that the frenzied “Salem Witchcraft Trials of 1692” were brought about by compulsory military service persuasive essay · dispatch customer service compare and contrast essay on salem witch trials and mccarthyism · essay about essay my best friend tree From essays about the Salem witch trials to literary uses of ghosts by Twain, Wharton, and Bierce to the cinematic blockbuster The Sixth Sense, this book is the Die Hexenprozesse von Salem - Willem Fromm - Hausarbeit - Geschichte Sie Ihre Hausarbeiten, Referate, Essays, Bachelorarbeit oder Masterarbeit. Solch eine Hexenverfolgung stellen die Salem Witch Trials des Jahres 1692 in der Help to write essay. Write a letter of application for a job. Place to buy college essay. My best friend essay. Film ratings uk. Salem witch trials essay. Online essay
The Salem witch trials were a series of hearings and prosecutions of people accused of witchcraft in colonial Massachusetts between February 1692 … -funny-incident-of-my-life-essays A funny incident of my life http://depressionteenshelp.com/salem-witch-trials-essay Salem witch trials job application essay sample Nonconforming and dried Brent sideswiped her pinpoints salem witch trial essay paper alkalizing and rejoicing what. Incased Woodie interpose pregnantly.Upon a successful barry pearce from participants, reading: witch trials causes of the of the salem witch trials essay. Salem witch trial diary entries. Links. Summary: Discusses Stephen Vincent Benets essay, We Arent Superstitious. Explores the history of the Salem witchcraft trials.
|
<urn:uuid:d4d27e94-c2df-4250-9fe9-c9fd37129064>
|
CC-MAIN-2018-26
|
http://picadillybackpackers.com/4735-essay-on-the-salem-witchcraft-trials.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00512.warc.gz
|
en
| 0.803514 | 2,476 | 3.265625 | 3 | 2.59646 | 3 |
Strong reasoning
|
History
|
Identifying Why Your Child Is Struggling
Prepared by the HSLDA Team of Special Needs Consultants
Recommended Teacher Resource Books
• Homeschooling Children with Special Needs by Sharon Hensley
• Teaching with the Brain In Mind by Eric Jensen
• Brain Matters: Translating Research into Classroom Practice by Patricia Wolfe
• Building the Reading Brain, Pre-K-3 by Patricia Wolfe and Pamela Nevills
• Worksheets Don’t Grow Dendrites by Marcia Tate
Professionals and educators who follow brain research understand that there are four main processing areas or “learning gates” that need to be properly functioning in order for a child to have an easy time learning.
The four learning gates are:
The provided checklists identify some of the characteristics that students may exhibit when a learning gate is “blocked,” or not functioning properly and efficiently. Also included is a list of informal evaluations that parent-educators may choose to perform at home. Additionally, there are some resources for correction that can either be delivered by a professional or in the home setting by parent-teachers.
Learning is all about energy output. Read the characteristics and see if you can identify where your struggling learner may be experiencing an “energy leak.”
Compensation or Correction?
Before you begin evaluating your child, you should know that once the process is complete you might face a fundamental choice: compensation or correction. Many educational experts debate whether it is more beneficial to help a struggling learner compensate for the learning processes that are difficult, or if time and effort should be spent in the pursuit of a correction of the processing problem.
An example of compensation would be for a child to use a keyboard at a very young age to write papers when he or she struggles with handwriting. A correction would be to do a handwriting exercise that eliminates reversed letters, for instance, and helps the child write more neatly. Another common compensation is to reduce the spelling list required at a grade level for a child who is struggling with spelling. A correction would be to train the child's photographic memory so that the task of spelling is easier.
Many times this does not need to be a debate. One can easily pursue both compensation and correction simultaneously. Compensation makes the learning task easier while the correction reduces the stress in the child's learning system so that learning can flow. We call this “opening up the child’s learning gate.”
Sensory Integration Issues
Many times a child who appears to have great difficulty with focusing and attending to a task is really struggling with a sensory processing problem. The child’s sensory system is not functioning correctly, resulting in errant signals. An example of this would be a malfunctioning sensory system that shouts “pain,” when a tag on a shirt touches the skin. Another example is when a child covers his ears at fairly minor unexpected sounds, because the sensory system is giving the errant signal that the sound is too loud. This child is not just distracted by his outside environment, but is distracted by his inside environment as well.
The following are some of the typical symptoms of sensory dysfunction:
Resources for Correction
A Right Brain Learner Stuck in a Left Brain Curriculum
You may have noticed that your children have totally different learning styles. Your left brain child tends to like workbooks and working on his own. The right-brainer, on the other hand, likes discussion, prefers projects to workbooks and tends to be a little higher maintenance during the school day, requiring more of your interaction time.
Since most curriculum teaches in a more left brain manner, focusing on auditory and sequential aspects, as well as writing, our children who are more right brain learners often feel left out, and even struggle with learning and retaining material using this same curriculum. Once we have identified the right-brainer who is struggling because he is stuck in a left brain curriculum, then we can tweak our teaching process to help these right brain children get in touch with the “smart part of themselves.”
Before we explore these many different teaching strategies, let’s identify the common learning styles of these children.
Common Characteristics of a Left Brain Learner
Common Characteristics of a Right Brain Learner
Many right brain dominant children can adapt to left brain curriculum without much effort. If that is the case, then no changes need to be made for this child. However, if a child is struggling to be successful in learning, then some accommodations need to be made. Sometimes just putting the struggling child in a more right brain friendly curriculum makes all the difference in the world in how easily his school day goes.
Other times a child needs a totally different strategy to make learning easy. That is when we turn to right brain teaching strategies.
Who Needs Right Brain Teaching Strategies?
What Are Right Brain Teaching Strategies?
In 1981 Dr. Roger Sperry received the Nobel Prize for his split brain research. Prior to that, little was known about the separate responsibilities of the two brain hemispheres. President George Bush declared the 1990s as the Decade of the Brain. Much brain research came to the forefront during that time. It has been a very exciting time in beginning to understand the processes of learning.
The right brain is responsible for long-term memory storage. Ultimately, we all store learned material in our right brain for easy retrieval. Generally this process of storing material in short-term memory (the left brain’s responsibility) and then transferring it to long-term memory (the right brain’s responsibility) is automatic, and we don’t even think about the intricate process that is taking place. However, when the left brain methods of repetition (either oral or written) are not transferring material to the right brain’s long-term memory store, we need to look at ways to make this transfer more efficient. This is where right brain teaching strategies come in. When we use right brain teaching strategies with our children, they need much less energy to store learned material. Both right and left brain learners love these techniques!
Right brain teaching strategies involve using “visual Velcro” to easily memorize material. For example, if learning math facts through oral repetition, games, or writing isn’t working, then making up little stories (not rhymes, because those are auditory) with emotion, and adding pictures and color to the math fact, makes the fact easy for the child to recall. This is an easy, inexpensive learning strategy that totally transforms how a child remembers something as important as math facts. This type of teaching applies to all areas of the curriculum. When a child says, “I can’t remember,” it is time to use right brain teaching strategies to make the memory process much easier. Let’s explore some of these troublesome learning areas:
Possible Remedial Solutions for Daily Teaching
As a special education resource teacher for remedial reading and language arts I developed this method of teaching these bright, hard-working, but struggling students. The key for you is to have your struggling child work with you in a one-on-one situation for defined periods of time during each day. Struggling children do not learn independently, but need much teacher involvement to be successful. Using this method, I regularly saw a two-year growth in one year in both reading and spelling in the children I worked with, even if they had dyslexia and were non-readers at the beginning. Feel free to modify the plan in any way that works for your family. There are many other methods that work. This is just one of them.
|
<urn:uuid:b529ef01-2f4d-4587-a035-0dd15cb51066>
|
CC-MAIN-2017-26
|
https://www.hslda.org/strugglinglearner/sn_checklists.asp
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320323.17/warc/CC-MAIN-20170624184733-20170624204733-00519.warc.gz
|
en
| 0.943078 | 1,579 | 3.40625 | 3 | 2.721061 | 3 |
Strong reasoning
|
Education & Jobs
|
Gem Mining in Burma
The following article is reprinted by kind permission of Gems & Gemology, in which the original appeared, Spring 1957 (Volume IX, Number 1). Readers of Ehrmann’s unpublished manuscript, “The Ruby Mines of Mogok,” will be on perhaps overly familiar terrain, but will have the opportunity, via the images, to put faces to names, and the visual to the conceptual.
The photo on the cover at right: “The oldest pagoda in Mogok. It is reputed to be located on very rich gem soil, but will never be worked because it is sacred ground.” Please note that we have removed scanning artifacts from most of the photos below, which has also softened them slightly.
For many centuries, Burma has been one of the most important gem centers in the world. The Mogaung area, practically in the path of the famous World War II Burma Road, is the only commercial source of jadeite that produces qualities ranging from the finest gem emerald green to the cheapest utilitarian quality. The gem mines in Mogok are the only sources of fine gem rubies; Siam rubies are generally inferior. Only in rare cases is a fine Siam ruby found. Today, approximately eighty-five percent of all rubies and sapphires mined are of Burmese origin, especially since the Kashmir mines in India have ceased operations on any large scale.
Despite sufficient time and adequate facilities for making a complete and thorough investigation, a superficial survey in the course of several trips during the past two years revealed the presence of immense wealth still hidden within Burma. Until a complete scientific investigation is made of the gem areas in Burma, it is hoped that this article will fill the interim gap.
Burma, or the Union of Burma as it is now called, is divided into six states: Burma, Shan, Kachin, Kayah, Chin and Karen. It is a long, narrow country, bordered by India and Assam on the west, by Tibet and China on the north and by Siam on the east. The port of entry into Burma, either by sea or air, is Rangoon, its capital city. Rangoon has a population of about 800,000 people, consisting of approximately 550,000 Burmese, 125,000 Chinese and 125,000 Indians. All Burma has a population of 18,000,000. By contrast with India and other Asiatic countries, its people seem prosperous, well-fed and well-clothed. Slums and poverty are evident only in the large cities of Rangoon and Mandalay.
Rangoon is a rather impressive city with wide, pleasant, well-laid-out streets and many beautiful parks within the city limits. However, the ravages of World War II are still seen in the many bombed-out buildings with all their rubble. The slum sections, too, are as bad as anywhere in India, with crowds and filth evident everywhere.
[Photo: Sula Pagoda, on the Sula Pagoda Road, in the heart of Rangoon.]
The residential section of the city is well kept with many beautiful mansions standing majestically on Prome Road, the finest residential street in the city. There are many temples and pagodas of beautiful architecture (though not as elaborate as those in Siam). The Sula Pagoda, in the center of the city, has a tower over fifty feet high covered with 24-karat gold leaf to which gold is added regularly every few years. This gold is reputed to be worth many millions of dollars. The Shwedagon Pagoda is located in the residential area on a lovely landscaped two-mile square. This pagoda, with its golden spire glittering in the sunlight is the largest and finest Buddhist temple in the world; it, too, is worth millions of dollars.
Another interesting landmark is the University of Rangoon, known as the finest higher education institution in the Far East. It consists of many buildings on a huge campus, with a mixture of old and modern architecture. It is staffed by a fine, international faculty.
Transportation within Burma is poor. There is a railroad from Rangoon to the north, but it is old, dilapidated, and very slow. A trip to Mandalay, a distance of 450 miles, takes about three days and is extremely dangerous. Each train leaving Rangoon is preceded by an armored car with a complement of soldiers, since these trains are frequently attacked by marauding bands of insurgents roaming the jungles of Burma. Even this protection does not prevent frequent attacks and much loss of life. Another mode of transportation is by slow, crowded, and uncomfortable river boats on the Irrawaddy River, the most important river in Burma and navigable for about 1000 miles. Such a trip is a combination of three days of train travel to Mandalay, two days by boat to Thabeikian, and sixty miles by automobile to Mogok.
[Photo: Burmese plane and crew that flew the author from Rangoon to Mogok.]
However, it is air travel that offers the finest mode of transportation within Burma. The Union of Burma Airlines is most efficient, flying regularly to all points north. The planes used are Dakotas and DC3s, flown by Burmese pilots trained in England. These two-motor planes are found best for the short distances flown. There are two daily flights to Mandalay and flights once a week to the capitals of all other states, thus making air travel the only efficient way to get to Mogok. A plane leaves Rangoon every Friday at 8:00 A.M. and arrives in Momeik about noon, where it is met by a jeep for the twenty-eight-mile drive to Mogok. This brief jeep trip is usually accompanied by an element of danger, since the roads are frequently mined by the insurgents who roam the jungles. Cars are frequently robbed or blown up. The insurgents are political enemies of the state. In addition, there is the ever-present danger of the highway bandits who steal everything of value but seldom kill.
Nevertheless, the drive to Mogok is fascinating. Momeik is 800 feet above sea level and Mogok is 4000 feet above, so there is a continual climb through extraordinarily interesting country. Scenically, it is the most astounding country in all Asia. Occasionally, green valleys appear through thick jungles; then, suddenly, terrible desolation and ruined pagodas come into view. Here and there is a village beyond which are picturesque rice paddies. The roads are deteriorated and appear to have had no care in years.
Mogok is situated in a magnificent valley, surrounded by majestic mountains which are studded with temples and pagodas, some very modern and others thousands of years old. Geographically, Mogok is considered a part of the Burma State, although it is actually located in the western part of the Shan State. This fact dates back a long time, before the English occupation of Burma. Burma was a kingdom then, and all of Burma was ruled by Burmese kings. Because of the vast wealth in the Mogok area, these kings retained their hold on it and fought the Sab Bwas (same as maharajas of India) who tried to wrest it from them.
Even under British rule, the Mogok area was kept as a separate entity, an independent state, and a British commissioner was appointed for this area alone. Until Burma became a republic, uniting all six states, only the State of Burma was actually the Kingdom of Burma; the other five states were ruled by Sab Bwas. These Sab Bwas are still very powerful politically, all holding ministerial rank: each, as Sab Bwa of his own state, is the minister representing that state in the Union Government. In addition, he may, and usually does, hold another political office.
The Mogok valley is about twenty miles long and two miles wide. In the center of the village of Mogok is located a beautiful lake, created by digging for precious stones during the Ruby Syndicate era. The Mogok gem area reaches to the city of Momeik, a distance of twenty-eight miles to the north and sixty miles to the west, up to the villages of Twingwe and Thabeikian on the Irrawaddy River.
Mogok has a moderate climate. During winter, the minimum temperature is 25 degrees during the night and early morning; during the day, it rises to 70 degrees. In summer, the evenings are not very cold, and the maximum day temperature is about 95 degrees. It is a heavy-rainfall area, receiving from ninety to one hundred and thirty-five inches of rain in the rainy, or monsoon, season from May to November.
The chief industries of Burma are the cultivation of rice and the logging of timber, mainly teakwood. Both commodities are exported to many countries, accounting for Burma’s only sources of foreign exchange. Mogok is the exception. The only industry in the entire area of Mogok is the mining of gemstones. There is hardly any cultivation in this area; consequently, even rice, which is the staple food, has to be brought in from Momeik.
In addition to the temples and pagodas one sees in the mountains, there are various plants of gorgeous colors; brightly colored blooming magnolia trees and cherry blossoms are abundant. One shrub, similar to rhododendron, grows all over the mountains. The natives are superstitious about these shrubs, believing them to be created by the good spirits hovering in the area, whom they call "the gods of the mountain."
Wild life is also abundant. Many varieties of birds can be seen almost any time of day. Elephants, tigers, and diverse species of the deer family thrive in the mountains and nearby jungles.
The Burmese are a friendly, smiling people. Their dress is neater and more beautiful than anywhere else in the Orient. Both men and women wear the longhi, a skirtlike garment, and both sexes delight in bright colors and silk attire. Cotton longhis are used for daily wear; silk longhis are worn on holidays and for special occasions. The longhis are very wide to give plenty of room to the wearer. They are stepped into and are folded and knotted in front. It is not unusual to see a man or woman open the longhi while walking, shake it with graceful motions and retie the knot, all in the twinkling of an eye. These movements are done unconsciously and with mechanical precision. Nowhere in the world do people take more pride in their wonderful, bright, harmonious colors. The men generally wear sport shirts with their longhis; the women wear sheer, white blouses adorned with ruby- or sapphire-set gold buttons. Unlike the rigid customs in India, the life of the Burmese is free from any caste system or seclusion of women, which may account for their pride in gay dress. In the interior, even though less civilized, the people of the various tribes have their distinctive and picturesque modes of dress and adornment.
|Above, typical huts in the mining village of Mogok. Below, family in Mogok with which author stayed, showing typical Burmese dress.|
In the section of Burma which borders on Assam, Tibet, Yunnan Province of China, and French Indo-China live the peaceful Shans, the warring Kachins and the headhunting Nagas. These people have their own unique and colorful dress. The women wear large turban-type hats made of gay materials. In their noses, they wear a gold drop which is fastened to the center of the nostril and hangs down below the mouth—a most awkward place, indeed, for a pendant! But in every part of Burma, all the Burmese have one thing in common: a fierce pride in their newly gained independence, freedom from English rule held since 1948.
U Khin Maung, my agent and interpreter, had made arrangements for us to stay at his parents’ home, a typical Mogok hut built on a concrete slab with woven bamboo walls and native wooden beams. Upon our arrival, we were given a fine welcome by his parents, two younger brothers and a sister. I was given a cot in the corner of the livingroom. In all the homes in Mogok, the livingrooms serve a dual purpose, since they are also the family’s shrine. In one corner, a pagoda occupies the most prominent place in the room. In front of it is a table on which are placed flowers and bowls of rice, changed daily as offerings to Buddha. Daily prayers, in a prone position, are said to Buddha morning and night. According to the Buddhist religion, no one can enter a pagoda wearing shoes, so everyone takes off his shoes before entering the livingroom of any house in Burma.
|A livingroom of one of the prominent gem dealers, showing a centerpiece of an altar consisting of peridot pagoda, peridot buddha and a quartz crystal buddha. (Photo courtesy of Edward R. Swoboda)|
All homes in Mogok have another thing in common: the complete lack of any sanitary facilities. Outhouses are found in the rear of the huts. Water for washing and cooking is brought in from pumps, sometimes found in the rear but more frequently in front of the huts. Baths are taken at these pumps in full view of the entire population. Both men and women manage their longhis so gracefully that the dry one is on before the wet one is completely off. My first attempt at bathing in the street almost turned into a fiasco, because I wasn’t quick enough in changing from the wet to the dry longhi. Although no one was visible at the moment, I heard much laughter from my invisible audience. After practicing several times, I also became adept in the rapid change of garments.
Our first evening in Mogok, we retired early after a typical Burmese dinner. We ate a tasteless, thin soup containing mustardlike greens, followed by white rice in curry sauce with bits of pork, and other vegetables and tea. About 6:00 A.M., I was awakened by the morning prayers of all our neighbors. We had our breakfast of two boiled eggs, bread and butter and coffee. We had brought canned butter and instant coffee with us. We started our tour of the town with a courtesy call to the S.D.O. (subdivisional officer), who acts as mayor, chief of police, fire chief and magistrate, as well as chief mining inspector. His name is U Hla Tint, a charming and personable young man of about thirty. He is respected by all the twenty thousand inhabitants of the area. He gave us a most cordial welcome and promised to give us advice and help in any way we wished. After the customary cup of coffee, followed by a cup of tea, we departed, assured that we would have all the cooperation we needed from the law in Mogok.
|Mogok gem mine after fifteen feet of sand layer has been removed to bare the gem gravel.|
We visited a number of the most important mine owners and dealers, from whom I gathered much essential information. The procedure in visiting these people was invariably the same: the minute we arrived, shed our shoes and sat down in a squatting position, cups of coffee were served. This coffee was unlike any I had ever tasted. Apparently, the coffee, milk, sugar and water are cooked together, and the sugar was plentiful. It had a sickeningly sweet taste, but we finished it and were immediately offered cups of tea, which tasted good after the first awful concoction. The number of cups of coffee and tea we imbibed each day depended on the number of visits we made. A sincere feeling of welcome everywhere we went made me feel at home.
MINING AND GEOLOGY
The Mogok area is the only place in the world that has a population of about twenty thousand people who make a living from gemstones. Except for diamonds and pearls, almost every variety of gemstones, both precious and semi-precious, has been found there. The total gem area is approximately 1916 square miles. Gem deposits are found within an eight-mile radius of Mogok, and there are about twelve hundred individually owned mines operating in the area.
The village of Mogok is totally dissimilar from Idar-Oberstein, Germany, in people and architecture, but they are very much alike in the predominance of the stone industry. As in Idar-Oberstein, every house in Mogok is a lapidary shop. However, Idar-Oberstein is only a gem-cutting center; in the immediate vicinity of Mogok are many gem mines.
There are two types of mining operations. After an area has been discovered where signs of gemstones are present, the top layer of sand, sometimes as deep as fifteen feet, is removed. When the gem gravels become visible, the real work begins. Depending on the size of the area, from three to twenty-five miners begin to loosen the gem gravels, the large boulders being discarded at the sides. The smaller gravels are gradually shoved with shovels and picks into a pile in a convenient corner. In most mines around Mogok, where water is plentiful, a centrifugal pumping system raises the gravels through an eight-inch pipe to a high point, where a wooden structure capable of holding thirty to forty tons receives them. At a specified time of day, the gravels are washed in the following manner. The miners who loosened the gravels come into the structure and remove the largest pebbles by hand. The balance is pushed through a wire mesh from the first level to the second and finally to the bottom; thus, the large pieces are held on the first level, smaller pieces on the second, and the smallest on the third, or bottom, level.
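The multilevel screening amounts to a simple size partition. As a rough sketch of the idea (the mesh openings in millimeters are hypothetical; the text gives no measurements, only that each level retains progressively smaller gravel):

```python
# Sketch of the two-mesh, three-level screening described above.
# Mesh sizes are hypothetical assumptions, not figures from the text.

def sieve(gravel_sizes_mm, meshes_mm=(40, 15)):
    """Partition pebble sizes into levels; level 0 retains the largest."""
    levels = [[] for _ in range(len(meshes_mm) + 1)]
    for size in gravel_sizes_mm:
        for i, mesh in enumerate(meshes_mm):
            if size > mesh:              # too big to pass this mesh
                levels[i].append(size)
                break
        else:
            levels[-1].append(size)      # passed every mesh: bottom level
    return levels

print(sieve([55, 30, 12, 5]))            # [[55], [30], [12, 5]]
```

The bottom level is the one that matters commercially: it holds the fine gravel that goes on to the secret final washing.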
|Above, hut housing pumping gear in Ruby Mine. Below, wooden structure into which gem gravels have been raised by centrifugal pumps.|
At this point, the washing operations actually begin. This is a top-secret operation; only the mine owner and his miners are present during this final step. No one is supposed to learn what luck the mine had on any day!
|Above, first washing of heavy gravel. Below, moving gem gravels to the lowest level before final washing.|
The gravels on the lowest level are then taken out in wire baskets and thoroughly washed again. Many pebbles are removed by hand and the balance is carried by each miner to a picking table four by eight feet in size. The mine owner and his assistant stand there, moving the gravels with a metal blade in one hand; the other hand is used to pick out all gems, which are placed in a bamboo container while the remaining gravels are pushed off the table. Such gravels still contain small pieces of rubies, sapphires and other gems with which the mine owner does not think it worthwhile to bother. This material is gathered into piles which are sold to the poor women relatives of the mine owner. After purchasing one or two piles for a very small amount of money, these women comb them for all remaining gems. If, by some chance, the mine owner has overlooked a large gem, it rightfully belongs to the woman purchaser of the pile in which it is found. Naturally, the results of any washing can be joyous or very disappointing. The entire operation is a rapid one, and the results are quickly seen.
|Above, final washing of gem gravels before removal to the picking table. Below, assorting gems from gravel on picking table.|
|Above, picking table, showing bamboo container for selected gems. Below, poor women-relatives sorting reject lots of gravels for any remaining small gems.|
The other method is to dig square wells to depths of twenty-five to thirty feet. To prevent the walls from caving in, the miners shore them with bamboo rods. After a hole is secure, the mining process begins. The gem gravels are brought to the surface in rattan baskets raised by a rope, a process similar to drawing water from a well. The gravels are then washed and the precious stones removed.
|Wealthy mine owner with pile of gravel in which an important sapphire was found.|
Rubies and associated minerals such as spinels are formed in a matrix of white, dolomitic, granular limestone. These rocks have been altered by contact with molten, igneous material which recrystallized the calcium carbonate as pure calcite, while the impurities became the rubies, spinels and other minerals. Rubies now are seldom found in this matrix. All mining is in the adjacent alluvial ground.
The origin of the Mogok gem area has not been described anywhere, as no scientific investigation has been made to date. My own superficial observations indicate that there are at least two hundred and fifty mineral species within the area. It is safe to say that the Mogok-area gems are of metamorphic origin. The original rock in which these gems were formed disintegrated on the earth's surface, leaving loose soil in which the gem minerals, such as rubies, sapphires and spinels (which are not harmed by water and other atmospheric conditions), remained imbedded in excellent condition. This disintegrated soil was washed away by flowing water, which carried the gems along with it. Eventually, the waters stopped flowing and the gravels remained, covered by sand layers of from five to fifteen feet, where they are now waiting to be discovered. New areas are opened daily, and new finds are continually being made. Rubies are known to occur in three important tracts in Upper Burma, but the original source of the gems is the highly crystalline limestone.
The minerals found in these mines vary a great deal. Statistics are difficult to obtain because of the previously mentioned secrecy that prevails. I did, however, obtain the cooperation of one mine owner in this respect, a jovial Burman, U Nyunt Maung, sole owner of the largest mine in the Mogok area. According to him, about seventy thousand carats of rubies and sapphires are mined there yearly. Of unusual interest is the fact that spinels make up only one per cent of this mine's output. These rubies and sapphires are not all of gem quality. A good guess would be that less than five percent of the seventy thousand carats is of decent quality; only one-half of one percent is of gem quality. Other varieties of gemstones found here are danburite, scapolite, beryl, zircon, amethyst and fibrolite. In addition, many varieties of iron minerals and rare earths occur, such as monazite, ilmenite, columbite and tantalite.
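Taken at face value, U Nyunt Maung's output figure and the quality percentages quoted above work out as follows (a back-of-envelope sketch; the percentages are the rough guesses given in the text):

```python
# Rough yield arithmetic for the mine's quoted annual output.
# Integer math keeps the figures exact.
annual_carats = 70_000

decent_ct = annual_carats * 5 // 100   # "less than five percent" decent quality
gem_ct = annual_carats // 200          # one-half of one percent gem quality
spinel_ct = annual_carats // 100       # spinels: one per cent of output

print(f"decent: under {decent_ct} ct, gem: about {gem_ct} ct, "
      f"spinel: about {spinel_ct} ct")
```

So of a seventy-thousand-carat year, fewer than 3,500 carats are of decent quality and only about 350 carats of true gem quality.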
In Kathe and Kyatpyin (the first six miles and the latter eight miles from Mogok), in addition to individually owned mines employing from five to fifty miners, there are also large tracts of land leased to individual miners who work a few feet of the area by the hole-digging method previously described. As this area is rather dry, most of the gem gravels are piled up in huge piles and washed during the months of the rainy season. Although not considered a rich area, it still provides about one-half million dollars worth of rubies and sapphires annually.
Most mine owners and miners are interested only in the precious rubies, sapphires and spinels because of the ready market for them; consequently, the other gems are sadly neglected. Were it not for Mr. A. C. D. Pain, a very competent gemologist who has lived in Mogok for many years and shown a constant interest in the rarer gems, they would have been ignored as worthless. The following gemstones are found in almost all the Mogok area: peridots, which will be described more fully later; fine, yellow danburites, up to forty carats; scapolite cat's-eyes in white, pink and blue of various sizes up to fifty carats; apatites in blue and green, as well as the chatoyant variety, which is rather common; an abundance of albite cat's-eyes up to fifty carats; topazes; and orthoclases and other varieties of the feldspars. Less abundant minerals in the same area are kornerupines, sphenes, diopsides, iolites, chrysoberyls, zircons and, occasionally, a new gem mineral. Mr. Pain has recently found a new species which is now being described by the British Museum of Natural History in London.
Mr. Pain owns a magnificent collection of representative Burmese gems which he is continually improving in quality. When I saw it last, it contained some 250 gems varying in size from two to 150 carats. The outstanding stones are a blue kornerupine weighing about twenty carats, a sphene of similar size, a pink scapolite cat’s-eye of about twenty carats in weight, and a fibrolite of most unusual size. Unfortunately, all my efforts in trying to purchase this collection failed.
|Fabulous gem collection of Mr. A. C. D. Pain consisting exclusively of Mogok gems. (Photo courtesy of Edward R. Swoboda)|
In Sakhangy, twelve miles southwest of Mogok, quartz and topaz crystals are found in highly decomposed pegmatite. Because of excessively heavy rains in the past few years, this deposit has caved in completely. The topazes are of various colors and well crystallized, not unlike those found in Brazil. However, the fine golden colors desirable for jewelry purposes are rare.
For a long time, peridots of commercial gem quality were believed to come only from St. John's Island in the Red Sea. However, there is a more important locality for this beautiful gem mineral in a village called Pyaung Gaung, eight miles northwest of Mogok. The natives call these peridots Pyaung Gaung zain (zain means stone). No investigation could be made there, since this territory is completely in the hands of insurgents. Several unusually fine, large specimens of these crystallized peridots were purchased and are now in some of our museum collections. The quality of the Burmese peridots is similar to that of the Red Sea material, but the largest stones of pure gem quality come from Burma.
In addition to rubies and sapphires, other important gem minerals mined in Burma include jadeite. Burma is just about the only source of jadeite in the world. This gem mineral deserves its own chapter later.
Burma is also rich in strategic minerals. The tin mines of the Tenasserim Peninsula are very important. This peninsula is in the southernmost part of Burma and has a coastline of about one thousand miles, connecting in the south with the Malay Peninsula. Tungsten is also mined there in large quantities.
In the central and northern part of Burma are many deposits of tin, tungsten, lead, silver and copper, which are suitable for mining. Because of lack of transportation, difficult terrain, and insurgent activities, development of these mines has been most inadequate. There are also a few rich oilfields in the central part, providing sufficient gasoline for Burmese consumption.
Other Burmese minerals worth mentioning are antimony, asbestos, barite, bismuth, chromite, gypsum, graphite, manganese, molybdenum and monazite. There are visible signs that all these deposits are being presently surveyed by private and government geologists, evidencing a purposeful movement to develop all the natural resources of Burma.
It is difficult to ascertain exactly when Mogok was founded. That it was first settled by man about 3000 B.C. is indicated by relics of this period used by the Mongolians, who were the first settlers. Such relics consisted of stone axes, chisels, and many types of spears and stone arrowheads similar to our Indian ones.
Modern Mogok was founded in 579 A.D., when the site was all jungles and thick forests. While hunting in the jungles, headhunters of the Sab Bwa of Momeik lost their way and slept under a tree. At daybreak, they heard birds singing and hovering over them. Investigating the commotion of the birds, they ran into a mountain break full of beautiful rubies. They collected as many as they could carry and brought them to the Sab Bwa of Momeik. He immediately realized the wealth of the area and sent some of his household to establish Mogok. It was first called Thahpainpin (pomegranate) for the fruit growing abundantly there, and such a little village still exists at the extreme western limits of Mogok.
In those days, Burma was ruled by the King of Ava, whose capital, the city of Ava, stood near the present site of Mandalay. After hearing about the fabulous wealth, the king set out to invade the area. The Sab Bwa of Momeik then made an agreement with him, establishing Mogok as part of Burma in return for twelve specified villages. It is on record that in 1254 a pact was signed by the King of Ava and the Sab Bwa of Momeik settling all their differences. The Mogok area was now a part of the Burma Kingdom and gem trading began. History has it that no mining operations were at first required, since the gems were found everywhere.
According to a legend, an agent of King Mindon, named U Tun Po Hland, showed a ruby called the Ngamauk Ruby to a French trade mission, telling them that quantities of such valuable stones were to be found in Mogok. At that time, France controlled Siam and now wanted to purchase the whole Mogok area from King Mindon. The members of the trade mission were especially impressed when the Ngamauk Ruby was placed in a pan of water and the water turned red, the color of the ruby. King Mindon flatly refused to sell, saying there was not enough money in the world to buy the fabulous area. The ruby itself is said to have become part of the famous British Crown Jewels. Although there was no description given as to weight, subsequent accounts made it a very large stone and probably the famous one in the crown jewel collection that was tested a few years ago and found to be a spinel.
In 1886, the British Government occupied Burma. After a victorious battle in the Mogok area, Major Charles Barnard and his troops took over. Barnard Village, where peridots are found, is still in existence. On March 29, 1888, the area was formed as the Mogok Ruby Mines District, and Mr. G. M. S. Carter became its first administrator. Distinct from other areas, it was ruled as a separate entity and controlled solely by the British. Mr. A. R. Godbar was the last administrator in 1920.
In 1889, the Ruby Mine Company Syndicate, Ltd., was formed, capitalized by numerous English bankers and industrialists. It was handled by the management firm of Streeter and Company in London. According to agreement, this company paid the British Crown forty thousand rupees yearly plus one-sixth of all profits accumulated. Barrington Brow, M. P., was sent from England to negotiate the first five-year contract, which expired in 1895.
Scientific mining began and several geologists were sent to survey the Mogok area. They reported that the bulk of the gem-bearing gravel was located in the center of town, where most of the natives lived. The company negotiated with the owners of these properties and bought all the huts located on the surveyed gem-bearing beds. The natives were given new homes or, where possible, old homes were moved to new locations hacked out of the nearby jungles. These moves presented no problems, since the natives were well compensated and were promised employment.
Mining began immediately after the people were moved. Electricity was brought to Mogok with the construction of a power plant. Digging began and several millions of dollars worth of rubies, sapphires, spinels and other gems were mined. A single ruby of fabulous size, purportedly worth $100,000 (a lot of money in those days), was discovered. Profits were good for the first contract. The company and townspeople were happy with the new way of life.
When the contract expired in 1896, the company negotiated again with the British Government and agreed to pay a tax of 3,150,000 kyats (about $800,000) plus one-fifth of the total profits for fourteen years. This contract included all the mines within a radius of ten miles of Mogok. Those were the boom days, with everyone working and much money in circulation.
In 1906, the company extended its mining operations to other areas and bought all the property in the city limits of Mogok, again moving homes to new sites. The original Mogok area is today the site of a beautiful lake in the center of town. Also, about 1906, a 42-carat fine pigeon-blood ruby was found. After cutting, the ruby, weighing twenty-two carats, was purchased by an Indian gem dealer, M. Chodilla, for 300,000 kyats (over $100,000).
Because of heavy rainfall and extensive excavating, pumping became a serious problem. The hole would fill with water and stop all mining operations during the rainy season. Experts from London sought to solve the problem by building tunnels to drain the water from the hole; however, this proved to be only a temporary solution. Tunnels costing $200,000 were driven into the mountains, but after a year the same problem arose.
In 1914, the affairs of the company began changing rapidly for the worse. Because of stealing, technical problems and overextended operations, large sums of money were lost. Operations were continued, however, until 1922, the expiration date of the contract, which marked the cessation of company operation. Some personnel remained until 1928, working privately with local people on a half-share basis. On August 28, 1928, this group attempted to remove the water from the big tunnels in order to empty the big hole. The venture culminated in disaster as the mountainside caved in; nevertheless, gem mining continued.
Before the beginning of World War I, an Englishman named Albert Ramsay settled in Mogok as a lapidary and dealt in rough gems. His alert and inventive mind eventually made him the savior of the mining industry in Mogok. In previous years, only the clear rubies, sapphires and spinels had been sought. Such stones were not too plentiful, especially those of faceting quality. Mr. Ramsay began experimenting with translucent varieties of rubies and sapphires and found that by cutting certain of these crystals with the base perpendicular to the crystal axis, a star was formed. Heretofore, such material had been ignored as having no market value. After his discovery of the star, Mr. Ramsay began buying all the material he could obtain and cut it into cabochons showing wonderful stars. It took him a long time to convince the jewelers of Europe and America that this type of stone would gain favor with the gem-loving public, but he finally succeeded and mining was revived in Mogok. Mr. Ramsay also continued in the rough-gem business and in May, 1929, purchased a gem sapphire weighing 1029 carats for $30,000. Thirty-two pieces were cut from it, one of which brought the total amount paid for the original stone.
Because of pressure brought by the local people, new rules and regulations were made in 1950, when homesteading of mines began. Claims were made by individuals and granted by the Crown. The tax levy was 10 rupees per miner employed. By this time, all company holdings had been abolished and the machinery and electric plant were sold to a Mr. Morgan and a Mr. Nichols. Mr. Morgan has since died, but Mr. Nichols still carries on, supplying electricity to the whole area. Secrecy surrounding all operations makes it impossible to determine the total output of gems.
On May 7, 1942, Mogok was invaded by the Japanese and all mining stopped. Miners dug holes in their backyards and under their homes and discovered many gems. On March 15, 1945, the British army moved in and the Japanese withdrew. The independence of Burma from British rule was declared on January 4, 1948. Mining continues in the same primitive manner, but about 1200 mines are now owned individually. Some are just two- or three-man operations, whereas others employ as many as fifty men. All miners are shareholders and split the profits with the owners, thus largely eliminating high-grading, the pilfering of good stones by the workers.
JADE AND AMBER MINES
Both jade and amber occur in the northernmost part of Upper Burma, reached by a once-a-week flight from Rangoon to Myitkyina. Myitkyina lies in the eastern part of the so-called jade and amber belt, very near the border of Yunnan Province of China. Mogaung, about fifty miles to the northwest of Myitkyina, is the last town in Burma with a railroad station and the center of the jade- and amber-mining district; the largest deposits are found within a radius of seventy-five miles of it in all directions.
The jade mines of northern Burma contain the most important deposits of this interesting material in the world. The early Chinese prized it greatly and are said to have believed that jade possessed magical qualities which made it a sacred stone. Three thousand years ago, jade from this area was brought via the caravan route to the center of China. Members of the Imperial Family and the nobility of the Chou Dynasty wore ornaments of delicately carved jade. Weapons of war such as swords, sabers, spears and knives were also fashioned from this precious stone. Jade was even buried with the dead in the belief that it would ward off evil spirits. It is no wonder, then, that all the jade mined in this area found its way to China. At a later date, its wonderful qualities, such as color and durability, became known in the Western Hemisphere and the demand increased greatly.
The jadeite region is almost inaccessible most of the year. During the monsoon season, it is practically impossible to reach, since it is a highly dissected upland region consisting of ranges of mountains which form the Chindwin-Irrawaddy watershed. It is higher in the north than in the south, and Tawman, one of the principal jade-mining districts, is situated on a plateau about three thousand feet above sea level. The highest mountain is Loimye, about a mile above sea level. Tawman is about sixty-five miles from Mogaung. There is a road which is barely passable even under the best conditions, so the most satisfactory mode of transportation, if not the fastest, is by mule train. These conditions are serious handicaps in any attempt to complete a geological survey of the area.
Jade mining progresses for only three months of the year, from March to May. All mining activities cease with the coming of the rainy season, since the shafts fill with water within a few days.
|Typical jade mine, with owner and two friends, in the Mogaung district of Burma.|
Many workings were observed by the author, all of a similar nature. At the top there is a very thick overburden of red earth, probably the weathering product of the serpentinized peridotites that form the rock into which the jadeite-albite mass is intrusive. Below the serpentinized peridotites lies a thin, earthy, very light chlorite schist, locally called "byindone." Below it is an amphibole schist, or amphibolite. Next comes an amphibole-albitite rock, which is underlain by albitite; that, in turn, is underlain by the jadeite.
The mining methods today are still the same as those used a thousand years ago, except that hydraulic drills are now being used in some areas. Before any jade mining is begun, every worker, whether Kachin, Shan, Burman or Chinese, prays to the jade spirits, or Nats, in the belief that they will discover valuable jade quickly if the Nats are pleased. Generally, the overburden is quarried with picks and crowbars until a steep face is obtained. Water is dammed upstream to make it flow over the steep face, washing away all earth and leaving the boulders clearly exposed for examination. It is well to remember that the selection of the locality and all the work are started through pure instinct, an inner conviction of the miners rather than any scientific reasoning. When valuable jade is discovered in one spot, all the miners flock to it and work it until all the jade has been removed. Consequently, they frequently forget where they had worked last and begin digging in spots previously mined, with much labor thus lost. Scientific mapping and a coordinated, well organized mining program would certainly yield large amounts of fine jade in this area at a small percentage of the present cost to the jade merchants, who do almost all the financing.
|Above, typical boulder of rough jade, just removed from mine, and stamped by officials. Below, rough jade, showing texture. (Photos courtesy of Wen Ti Chang, Los Angeles)|
There is a valuation committee in every jade mine. When a piece of jade is found, the financier has it evaluated by this committee, which receives a fee of five percent of the value. As a rule, such valuations are very low. If the financier decides to keep the jade, the miners receive half of the evaluated price. Ten percent more goes to the local government under whose jurisdiction the stone is found. Values and evaluations are never mentioned but merely indicated among the interested parties by the conventional finger pressures under a handkerchief. It is costly to ship the jade boulders to Mogaung, the chief trading center. Sometimes, as many as fifty coolies are employed to transport a boulder weighing a ton. They proceed very slowly through the rugged terrain until they reach a place where the jade can be placed in an oxcart and brought to Mogaung. There, the Government again collects thirty-three and a third percent of the value before the merchant is permitted to ship his jade out of Burma.
|Cutting jade cabochons in Mandalay, Burma. Note foot-driven pedal and carborundum plate on which cabochons are preformed. (Photos courtesy of Edward R. Swoboda)|
Up to this point, the value is merely a fictitious figure without any actual basis. The main thought is to have as low an evaluation as possible until all government taxes are paid. Then, the jades are prepared for local auctions and for export in the following manner: Each boulder is weighed and a seal is placed upon it. At strategic points, cuts of approximately three-fourths inch in width and one-half inch in depth are made in each boulder and polished to show the color. The Chinese jade dealers pride themselves on being able to determine the value of each jade boulder by the appearance of the polished cut. On certain days, auction sales are held for the prospective purchasers.
The auctions differ greatly from ours, since there is no audible competitive bidding. The purchaser accompanies the auctioneer along the path on which the boulders are displayed. A stop is made before each numbered boulder in which the purchaser expresses interest. With his hand and the auctioneer’s hand covered by a cloth, the bidder makes his offer by pressure on the auctioneer’s fingers, thus insuring absolute secrecy in each transaction. In the evening, the sales are completed and the boulders turned over to the highest bidders.
It is not unusual for many boulders to remain unsold during these auctions. They are then shipped to dealers in Hong Kong for similar auction sales. From a scientific point of view, it is a real gamble to buy jade in this manner. The purchasers are actually gamblers rather than jade experts. Much money has been lost by these superficial methods of purchasing.
All jades leaving Burma are shipped from Mogaung after clearance by the Government agency. Boulders of jadeite are wrapped in a heavy, dark sailcloth tied with hemp rope and shipped by Irrawaddy River boats to Rangoon, where they are put on boats going to Hong Kong. Also, a considerable quantity is smuggled across the border to Yunnan, Communist-controlled China, by mules. Prior to the Communist occupation of China, jade was shipped to Canton, Shanghai and Peiping. Approximately fifteen percent of all jades, the poorest quality, remains in Burma and is usually sold in Mandalay, where there are many jade cutters and jewelry manufacturers.
Burmese amber was highly prized by the Chinese even before the beginning of the Christian era. Burmese amber is very different from amber extracted in the Baltic Sea in East Prussia. The Burmese type is a golden color with a beautiful bluish streak through it and is strongly fluorescent even in ordinary sunlight. It is also purer, harder, and tougher than the Prussian amber, thus making it excellent material for cutting and carving into figures, snuff bottles, vases and various Chinese carved objets d’art. The only other occurrence of amber of similar quality is in Sicily; this amber possesses the same fluorescent phenomenon.
Amber occurs in the lower Tertiaries, which consist of finely bedded dark schists, blue shale and sandstone, the shales being generally predominant. In some places, the two are alternately interbedded with a few layers of limestone conglomerate. Sometimes, sandstones of various shades of blue and pink are almost laminated, and in places contain shaly concretions which vary in diameter from one to several inches. Generally, the sandstone and shale bear carbonaceous impressions; sometimes, these rocks contain very thin coal seams in which amber is embedded in the form of concretions. Because of the soft nature of the shale and sandstone, few good exposures are seen on the surface.
Amber mines are located in the northernmost section of Burma and extend in a straight line to the border of Assam, through Myitkyina to the Yunnan border. Hukawang Valley, comprising the villages of Maingkwan and Komaung, is the chief source. In most mines, amber occurs in pockets embedded in blue sandstone or dark-blue shale with a fine coal seam. The presence of coal seams is a favorable sign for the occurrence of good amber. Amber usually occurs in elliptical pieces and occasionally in blocklike forms. The best amber is found at a depth of about thirty-five to fifty feet, very seldom in large blocks, the average size being that of a normal closed fist.
The method of mining amber is as primitive as that of mining jade. The mine is a well-like affair about four feet square and descends to a maximum depth of approximately fifty feet. Four miners work in each pit. Two dig underground with a hoe. The other two haul up the mass in a rattan basket with a long hooked bamboo rod and examine each basket for amber. The pits are lined with a bamboo barricade which appears flimsily constructed but does hold the pit without caving in. It seemed like a miracle to the author, since there are many pits close together.
The progress in a pit is very slow; usually two men dig about two feet in a day. All mining is stopped when a hard layer of sand is reached, since the miners believe there is no amber beneath the sand. It might prove interesting to sink a few bores much deeper to test their theory. Much of the amber thus mined remains in the area and is made into earrings, bracelets and other popular jewelry worn by the natives of the Hukawang Valley. Chinese merchants, too, purchase great quantities of amber, especially the larger pieces which are suitable for carved objets d'art. They export these pieces to China where most of the carving is done. Some of this amber also finds its way to the European market.
Traditionally, the gem dealers of India are the most important in the Orient. Perhaps the widespread impression that rubies and sapphires originate in India stems from the fact that the bulk of gemstones found in Ceylon, Siam and Burma is purchased by Indian dealers who keep in close touch with the gem-producing mines. There they have informants from whom the dealers purchase information about new discoveries of gems. Because so many engage in gem trading, competition is very keen. Besides the Indian markets, they also supply the Western markets via Paris, where many gem-buying offices are located. Purchasers from all over the world generally find it much more convenient to do their buying in Paris, where the Indian dealers keep their outlets well supplied.
During the height of Britain’s power in India, the maharajas were the wealthy, chosen few who were able to buy the expensive gems. Each one tried to outdo the others in acquiring important gems, just as art collectors do in Europe and the United States. Consequently, for a long period of time, few large gems reached the Western markets.
|Author’s agent, U Khin Maung, with assorted gem gravel. (Photo courtesy of Edward R. Swoboda)|
In the Orient, a Western purchaser of gems encounters strange difficulties, one of which is the lack of any division between the wholesale and retail trade. In India, the procedure is simple. One chooses a recommended, reliable firm of gem dealers to act as purchasing agents. They sell their own material and are familiar with the entire market as well. A verbal agreement is made giving them a percentage ranging from two to five percent on all outside purchases made in their offices. Word spreads rapidly that an important purchaser is in town and brokers stand in long lines to show their gems. The prospective purchaser is given the opportunity to examine the merchandise and put aside interesting items, together with the marked asking prices, for further consideration and bargaining. At the end of the day, the gems are examined carefully and leisurely and marked with the offered prices. The following day, the bargaining ensues and those papers of stones are purchased on which a price agreement is reached.
The procedure just described is one used in all Oriental countries except Burma, where it is quite different. Upon arrival in Rangoon, one must immediately seek a reliable man to act as interpreter, broker and agent. This is an important choice, determining in no small measure the future success of the purchaser. In Rangoon, there are many dealers who also own retail stores containing large stocks of gems that have been brought to them by the brokers from the Mogok area. As a rule, there is little material of fine quality in the hands of such dealers.
|Maung Myint, gem dealer of Mogok, showing rough ruby and sapphire. (Photo courtesy of Edward R. Swoboda)|
If one desires to engage in gem purchasing with the convenience and comforts of a good hotel and fairly decent food, one can remain in Rangoon and do the best he can with his buying. A few telegrams to Mogok may bring some brokers down to show their gems. However, this is a poor arrangement for trying to buy the finest. The only way to see and buy is to go to the source at Mogok. For this prospect, the broker-agent-interpreter becomes even more important!
Once in Mogok, an entirely new set of problems arises. As mentioned previously, there are about 1200 mines in the area and it is impossible to visit all of them even over a long period of time. Therefore, it is advisable to visit the most important dealers first to ascertain what they have to offer. Most surprising is the reluctance with which stones are shown. It seems that the dealers stall on showing until they learn approximately how much money the would-be purchaser has available. A few good starting purchases go a long way in establishing one’s reputation as a buyer with serious intentions to acquire fine gems. The buyer’s opportunities increase as the rumors spread quickly among all the inhabitants.
Contrary to custom in other Oriental countries, the startling fact revealed during a visit to Mogok is that the women are the gem merchants, rather than the men! The most influential dealers are women who are shrewd, cunning and seemingly able to read the purchaser’s mind instinctively. In some cases, the bargaining may be done by the men, but no deal is ever concluded without the full consent of the women in the household—usually the mother, mother-in-law and wife.
On my first visit to Mogok, my experiences and feelings were mixed—amazing, exasperating, frustrating, instructive and illuminating. My agent and I started out early one morning to begin my purchasing transactions. Our first stop was at the home of the most important dealer in Mogok. After two hours of imbibing the customary cups of coffee and tea, along with much conversation, I became impatient and asked my agent about seeing stones. While studying my face intently, the dealer finally exhibited one stone paper containing a very poor star sapphire. Although I wasn’t at all interested in it, I wanted to be polite and inquired about the price. If I recall correctly, I had already judged $25 to be a fair price. When my agent informed me that it was the equivalent of $500, I almost exploded but casually tossed the stone back to the dealer. I asked to see more stones but was told most politely but firmly that there were no others.
We visited five more dealers that day with similar experiences for me at the home of each. One or two stones would be shown at tremendously exaggerated prices that were completely out of line. Even though I knew that the prices were inflated for bargaining, I thought it would be ridiculous to make offers amounting to perhaps five to ten percent of the asking prices. That evening, I went over the day’s experiences and discussed them with my agent. I must digress for a moment at this point to emphasize that great reverence and respect are accorded to age by all Burmese. Because my agent was a much younger man than I, that instinctive respect for my age prevented him from giving me an immediate analysis of my day’s errors. After long discussion, I finally did gather that it was considered bad form not to make an offer on merchandise for sale, no matter how low. I asked whether my offer of $2 for a stone quoted at $100 would not be considered insulting. He assured me that any offer would be graciously received as a compliment to the gemstones, but the eventual selling price would depend on the buyer’s patience and bargaining abilities. Time is of no importance in the Orient and long, patient bargaining is an essential part of trading there. My agent further explained the high asking prices as an attempt by the dealers to learn the purchaser’s idea of a stone’s worth. Apparently, “Let the buyer beware” is the slogan!
It took me a while to assimilate this information and try to act accordingly, since it was so different from our way of conducting business. However, I knew that I must follow my agent's advice in order to make progress. The next morning, we started out again to another dealer. After the usual long-drawn-out preliminaries of coffee, tea and conversation, I was at last shown a fine gem sapphire weighing six carats for the asking price of $3,000. After careful examination, I decided that I could pay $600 for it and asked my agent to offer $300, giving myself enough leeway for the expected bargaining. Although I couldn't understand the ensuing conversation, I saw the dealer's smile express appreciation for the offer. He said "Quarre," which I had learned means, "Too far apart." I instructed my agent to tell him that he was asking much more than the stone was worth, and that I would raise my offer a bit if he would come down. The next price asked was $1,000, already a two-thirds reduction! I had my first feeling that gem buying in Mogok might eventually prove successful. I raised my first offer by $100, and he went down another $100. Whereupon my agent suggested that we let the dealer think it over and we would return the next day. The following morning, I requested the lowest price and was told that the very lowest was $700, "Take it or leave it." My final offer was $600, which was accepted, leaving me in a much more cheerful frame of mind about the entire situation. With my newly acquired education, I was able to make many satisfactory purchases.
After a week, word had spread that I was a good purchaser who paid high prices. Brokers came with gems from many dealers we hadn’t visited, and prospects for a most successful purchasing trip increased day by day. In the process, I slowly became aware of an important factor in the economic life of the miners. Their living conditions are most primitive, requiring an infinitesimal sum of money for all their needs. The most prosperous family spends no more than a thousand dollars a year for all living expenses. As a result, their wealth is measured in terms of the quantity of their stocks of gems. After a dealer sells enough material to provide for his needs for a long time, he is no longer interested in selling. The following experience is a good example. I found a miner with twenty pieces of rough peridot which interested me greatly. He simply refused to sell them because he didn’t want any more money until after the Burmese New Year, three months hence. My agent’s and my persuasion were of no avail. He didn’t need the money and wouldn’t sell until he did, regardless of the possibility of receiving less at a later date. He was quite positive that prices would not go down, since all the miners and dealers were learning that prices were continually rising—which also made for reluctance in selling. They believe it is the better part of wisdom to hold their gems.
The only saving factor in the whole situation is the great devotion to Buddhism practiced by these people, coupled with each one’s desire to outdo the other in the building of magnificent pagodas and temples. Such a purpose on the part of any dealer greatly facilitates business negotiations! Whereas their personal living standards are so low, they do not hesitate to spend enormous sums of money, sometimes as much as $100,000, to build beautiful temples and pagodas—and with modern sanitary facilities so conspicuously absent in their own homes.
Every dealer claims exclusive connections with certain miners to supply him only with their rough stones. Of late, however, the miners have been learning that they have not been receiving full prices and that the dealers have been taking advantage of them. Consequently, they offer their gems to more than one dealer and accept the highest offers for their rough, thus placing the dealers in a highly competitive position. It is extremely difficult to determine the yield from a rough piece of sapphire or ruby, thus making it a really speculative business. The dealers suffer much loss when the rough proves disappointing after cutting.
Alternating in the villages of Mogok and Kyatpyin, there is a bazaar held every five days where the populace buys its supplies of groceries and vegetables. Miners and gem dealers also contribute to this colorful event. Dealers are always recognized by the umbrellas they carry at all times—their badge of office, as it were! Individual miners offer them rough material. My only purchase in a market place was a lot of rough rubies for which the miner began by asking $1000 and after much bargaining accepted my offer of $1.
An amusing purchase I made was at the site of a one-man controlled ditch. This miner showed me a rough piece of ruby that displayed very strong asterism. After my expression of interest, the customary bargaining began. Before long, we had a group of men, women and children surrounding us and enjoying the proceedings. When I finally completed my purchase, the people standing around claimed a commission from me, saying they were all brokers in the deal. It was said good-naturedly and with much laughter. I settled the matter by giving the brokerage fee to a few of the smallest children, and my decision was applauded as that of a wise man.
For many years, rumors have been circulating that the Mogok area is being worked out and will soon stop yielding any gems. My observations have convinced me that mining there will flourish for an indefinite period, and that huge quantities of gemstones worth many millions of dollars are still in the hands of the dealers. I learned that many of them have had gems in their possession for almost half a century. To illustrate, I am thinking of an eighty-two-year-old man I visited. He dreams away the days with his opium pipe and is loath to part with any of his gems. The dealers in Mogok have been unsuccessful in their attempts to buy one of them. Because my agent’s mother was a friend of the old man, I thought I had a good starting point in my quest. We visited one morning and found the old man squatting in a corner smoking his pipe. For four hours, talk flowed back and forth without any reference to business—we were enjoying a social visit. Finally, the conversation veered to the topic of gems and, most fortunately for me, the name of Albert Ramsay was mentioned. It seems that when he was a youngster, the old man had worked for Mr. Ramsay and thought very highly of him. When my agent informed him that I had known Mr. Ramsay (since we had been located in the same office building in New York), the old man declared that any friend of Mr. Ramsay’s was a friend of his; and, eventually, he decided to show me a few of his gemstones. He never exhibited more than one stone at a time. After each deal was concluded, he would disappear into a little cubicle covered by a curtain and emerge with another stone. How I wished I could peek at the treasure I knew was hidden behind that curtain! Rumor had it that the old man’s stock of gems was worth more than a million dollars and dated back to the time of the Ruby Company, which had ceased activities in 1922.
Almost without exception, in every purchase I made, the starting price asked was at least ten times higher than the actual value. Only once was I offered stones at a price I thought they were worth and was willing to pay. However, that willingness almost cost me my reputation as a gem merchant! It was a box of spinels of fine quality for which the dealer asked $300, a price I considered fair and agreed to pay. I shall never forget his expression and the change in him at my ready acquiescence. He turned pale and told my agent I must be a smart one who was trying to take advantage of a poor, simple dealer. He threatened to tell the other dealers I couldn’t be trusted; and my agent and I were really in a dilemma. I thought fast and said to my agent that the offer need not be accepted, at the same time passing the box of spinels back to him. My agent then made the proper move. He took the blame in translating incorrectly, saying I had merely repeated the dealer’s statement of price. My agent whispered to me to make an offer of $150, which I did. After some bargaining, I bought the spinels for $200, absolved of all blame and $100 to the good!
As mentioned previously, the women are the bosses of the gem trade in Mogok. They have the final decision in all transactions involving the larger and more important stones, but they do consult their men, who are in partnership with them in such deals. However, the men have nothing to do with the trading in stones of small sizes. Such business is exclusively the province of the Mogok women and a lucrative one it is. Women throughout Burma wear jewel studs in their blouses and other apparel and are very fond of gemstones mounted in bracelets, earrings, pins and necklaces. A number of manufacturing jewelers in Mogok make attractive well-designed pieces in 22-karat gold. The women gem dealers consider the trading in small stones their bread-and-butter business; but from my own observations, I would say that it keeps them in cake, too! It is a steady, profitable, year-round business.
"You'll find many of the truths we cling to depend greatly on our point of view." -Obi-Wan Kenobi
Truth is something that every modern person has to come to terms with. We are brought into a world where truth is paramount. Everything around us is based on truths of some nature. In fact, truth is all we ever have to base things on. Kinda silly when you think about it; how else would it work? If we didn't orient ourselves by making judgements, which are all on a fundamental level based on truths, then we could never have any meaning or context. That simply is the nature of reality. We make associations with our environment based on what we see as good and bad, important and irrelevant. All of those associations are thus supported by fundamental assumptions about what reality and/or truth is. The logical conclusion would be that who we are, the things that constitute our individuality, is based upon the truths we hold.
So we have established that truth, from a personal or internal perspective, has a relative nature. What we see to be good and bad, important and irrelevant is determined by what context we place it in. There are other aspects to truth, such as observable facts: natural/universal laws and provable theorems. Obviously those don't fit into the category of relative. A relativist would argue that even those facts work within the paradigm of science, and outside that paradigm have no meaning; without that knowledge, science is irrelevant, nonexistent, and thus they are still relative to that mode of understanding. Of course, most of these phenomena do still occur without knowledge of their workings. So, even if you insist that it is relative to our existence, you're still establishing an ultimate boundary for relativity.
As you can see, there are a lot of ways to put truth into perspective (that's a funny turn of phrase). I believe what I've said up to this point is basically fact, but where truth really gets tricky is in the in-between: between inside and outside, relative value and observable fact. There are many ways to resolve this seeming conflict, and that is where the conjecture comes in. I think the basic gist of them all would be to look at truth on a higher, more realized level. Effecting some kind of integration of the two, finding where they meet, should be our objective.
As we attempt to combine these two facets of truth, it's necessary to further define the distinction between internal and external. Internal truth isn't simply about putting your experience within a specific context. It's about speaking from your soul, about sincerity and integrity. We have our moment-to-moment experience, and that can contain as much or as little of reality as we allow it. We do have a ground to come back to, where we can always ask ourselves, on that fundamental level, whether this is a valid experience. This potential can be traced back to the very nature of free will, our ability to choose.
Again you can begin to reduce this to relativistic terms, but you have to question the utility of that. That is the problem with a "relativist": they see this as some ultimate truth (funny, isn't it?) instead of simply a fundamental truth, so they always cast this ultimate doubt on everything, and in the process lose their grasp on what meanings things can possibly hold. For internal or subjective truth, we can see that reality is very fluid from within the kernel of combined intersubjective experience, having to conform to mental patterns and affected interpretation.
For external truth, what implications does this relative perspective have? It's simple enough to understand the nature of observable facts and communal confirmation as truth, but what about absolute truth? What about why we're here? Where is our concrete set of rules and truths to know which end is up? We can't really answer the most fundamental question about the very first observation a self-aware being has to make, "I am": that question is, of course, how and why? We can see that there must be some kind of genesis, some starting point, because everything we've experienced and recorded as a race has had "purpose," a direction. Perhaps the universe doesn't have a beginning or end, but one thing is for sure: at some point things did start to move forward. We are definitely going somewhere...even if forward is only what it is because that's the direction we started in.
So there is some plan or purpose to be grasped, some seemingly otherworldly conductor, some being or state of being that is calling us ever forward. In order to understand our capacity to transcend ourselves and yet maintain our own identity, we have to see the absolute meanings in the Kosmos as they apply to our scale of value, instead of dismissing them out of hand as relative and meaningless. To understand the context of some thing's truth value doesn't imply invalidation. I think the conclusion to all this is that we must acknowledge the existence of both relative and absolute truth in the universe in order to get a clear, more inclusive picture.
Posted: March 27, 2007
Nanoparticles for delivery of prostate cancer treatment
(Nanowerk News) Alan Garen, professor of molecular biophysics and biochemistry at Yale, has received a $100,000 award from the Prostate Cancer Foundation to expand research on the delivery of a targeted therapy for prostate cancer using nanoparticles.
Garen and his collaborator Zhiwei Hu have developed a way to directly target and destroy the blood vessels of solid tumors, thus destroying the tumors while leaving normal tissue unharmed.
The technology uses a synthetic gene encoding an antibody-like molecule that activates an immune response to destroy the tumor blood vessels and associated tumors.
Garen previously received grants from the National Cancer Institute and Breast Cancer Alliance for Icon projects. The grant from the Prostate Cancer Foundation will extend his research to prostate cancer, which is the most common non-skin cancer in the United States.
Prostate cancer strikes one in six men. In 2007 alone, more than 200,000 men will be diagnosed with prostate cancer and more than 25,000 men will die of the disease. As the Baby Boomer generation begins to turn 60, increasing numbers of men are in the highest-risk sector for the disease. Over the next decade, the number of new cases is expected to increase to more than 300,000 annually.
The molecule that Garen and Hu constructed, called an Icon, recognizes the receptor tissue factor (TF) found on cells lining the inner surface of blood vessels in tumors but not in normal tissues. The Icon binds to TF more strongly and specifically than a natural antibody. Because the Icon acts through the blood, it can reach metastatic tumors throughout the body, which is critical for effective cancer therapy.
With the Prostate Foundation funding Garen will test the efficacy and safety of using targeted liposomal nanoparticle vectors — lipid-covered, gene delivery packets — to deliver the therapy in animal models of human metastatic prostate cancer.
"While we can directly inject the purified Icon molecule into the bloodstream, this procedure is less effective than having the Icon synthesized in vivo," said Garen. "We prefer to deliver the Icon gene to tumor cells, so that they cause their own destruction."
The Yale scientists previously used a virus to deliver the Icon gene, a system that was effective and safe in animal models and is being prepared for a clinical trial.
"The advantage of nanoparticle vectors is that they do not reproduce, are not immunogenic, and are easier to produce than adenoviral vectors," said Garen. The nanoparticles will have a tag on their outside that binds to the tumor blood vessels. After binding, they are taken up by the cells and unload the gene that codes for the Icon, allowing the cells to produce and secrete the Icon.
"The key is to have an efficient and safe way to deliver a specific and effective therapeutic agent," said Garen. "Having the nanoparticle targeted specifically to tumor blood vessels, and the Icon derived entirely from human components, should enhance the safety and efficacy of the procedure."
The World Health Organisation (WHO) recently published its first Global Report on Antimicrobial Resistance. The publication demonstrates that resistance to antibiotics is a serious threat to global public health. Its findings reinforce the views of Carl Eric Nord, a senior professor at the Institution for Laboratory Medicine at the Karolinska Institute in Stockholm, Sweden, on the need for further prevention.
Nord was involved, as a partner, in the EU-funded project ANTIRESDEV, which was completed in 2013. The project studied the emergence and persistence of antibiotic-resistant bacteria and their effect on the composition of the microbial populations living in our intestine, referred to as the microflora. It resulted in the development of three DNA biochips for the rapid screening of resistance genes in disease-causing bacteria. Nord talks to youris.com about how European medical research can contribute to the surveillance of antibiotic resistance threats.
The antibiotic-resistant bacterium Clostridium difficile, covered in the ANTIRESDEV project, is not among the seven bacteria covered in the latest WHO report. Why not?
WHO is an international body and looks at the very poor developing countries as well as at the developed countries. Clostridium difficile is a problem for the rich countries. It is a kind of side effect of antibiotic treatment in compromised patients. These patients are mostly elderly, rather sick, and often have other diseases as well. When you treat them with an antibiotic that changes the intestinal microflora, colonisation with Clostridium difficile can result in a serious infection. In developing countries, they do not look for a changed microflora and are not prepared to do so, as we heard from colleagues in those countries. Thus, WHO in Geneva, which looks into all countries, has to make different recommendations.
What can be done to mitigate or even prevent antibiotic resistance?
Hygiene is the most important action to prevent infections, and special hand hygiene procedures above all. Antimicrobial agents used for cleaning and for the treatment of infections also have to be used in the correct way; otherwise, you simply select for resistant bacteria, especially in the intensive care units.
How can we adopt the coordinated action called for by the WHO to minimise emergence and spread of antimicrobial resistance?
This coordinated action is the right way to go about prevention. But, unfortunately, we have a problem: in many countries we no longer have effective infection control, as we used to have with special infection-control nurses.
In the Scandinavian countries, which are small, we have rather few patients with complicated infections. However, many tourists travel from Sweden to other countries, especially in Asia, and are colonised with resistant bacteria while they are there. Although they will not develop infections themselves, they can spread the bacteria when they come back. In order to become sick you have to be compromised; if you have a balanced intestinal microflora, it is not likely that you will be infected.
In order to improve prevention, how soon could the project’s rapid-screening biochips be introduced in clinics?
It depends on the cost. One of the problems in laboratory medicine nowadays is the economics; that is the problem in most European and other well-developed countries. You have a fixed budget for laboratory analyses, and if a test costs too much, then you cannot do it. From the scientific point of view it is not a problem, but in general the prices for molecular biological tests are still very high.
Image credits to: Karolinska Institute
youris.com provides its content to all media free of charge. We would appreciate if you could acknowledge youris.com as the source of the content.
|
<urn:uuid:f05a1de8-d50f-4a85-a282-bde432fd8d43>
|
CC-MAIN-2019-13
|
http://blog.youris.com/health/antibiotics/antibiotic-resistance-prevention-more-needed-than-ever.kl
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202506.45/warc/CC-MAIN-20190321072128-20190321094128-00336.warc.gz
|
en
| 0.950529 | 774 | 3.5 | 4 | 2.950118 | 3 |
Strong reasoning
|
Health
|
Formation of the large-scale structure in the Universe: simulations
The study of structure formation in the Universe is an area of forefront research in astrophysics. The early evolution, when the seed fluctuations are small, can be calculated analytically on a piece of paper, without the help of large supercomputers. As the fluctuations grow in amplitude, the evolution becomes too complex, and theorists have to use computers to follow the subsequent evolution.
A typical simulation follows the evolution of matter in a large box that expands at the same rate as the Universe itself, so the box always encompasses the same mass. Over the period of time covered by the simulations, the Universe expands by a factor of more than 50, and so does the simulation box (you can find a nice illustration of this here). To make it simpler to visualize the formation of structures, the expansion can be taken out so that the simulation box appears static. In professional lingo, the system of coordinates that expands (or co-moves) with the Universe is called the comoving coordinate system.
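The comoving convention can be made concrete with a small numerical sketch. This is an illustration of the coordinate transformation only, not the actual simulation code; the box size of 100 comoving Mpc and the starting redshift of 49 are assumed values chosen to match the "factor of more than 50" expansion mentioned above:

```python
# Comoving coordinates divide out the cosmic expansion:
#   x_comoving = x_physical / a(t),
# where a(t) is the scale factor, with a = 1 today and a = 1/(1+z)
# at redshift z. A box that expands with the Universe therefore has
# a constant comoving size.

def physical_to_comoving(x_physical, a):
    """Convert a physical length to a comoving length at scale factor a."""
    return x_physical / a

# Illustrative: a run starting at z = 49 (a = 1/50) and evolved to
# today (a = 1) expands by a factor of 50 in physical size.
a_start = 1.0 / (1.0 + 49.0)
a_end = 1.0
box_comoving = 100.0  # comoving Mpc (assumed box size)

box_physical_start = box_comoving * a_start
box_physical_end = box_comoving * a_end
print(round(box_physical_end / box_physical_start, 6))  # 50.0

# In comoving units the box size never changes:
print(round(physical_to_comoving(box_physical_start, a_start), 6))  # 100.0
print(round(physical_to_comoving(box_physical_end, a_end), 6))      # 100.0
```

This is exactly why the movies described below appear static in size: the structures grow and cluster, but the comoving box does not.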
As the Universe expands, galaxies become more and more distant from each other. To an observer such as ourselves, it appears that all other galaxies fly away from us: the farther the galaxy, the faster it appears to recede. This recession affects the light emitted by distant galaxies, stretching the wavelengths of the emitted photons due to the Doppler redshift effect. The distance between galaxies is proportional to the measure of this effect, 1+z, where z is what astronomers call the redshift. The redshift can be determined for any object whose spectrum has been measured.
In addition, it takes a very long time (up to several billion years) for the light from the most distant galaxies and quasars to reach us. Not only is the light we receive from these objects redshifted, but we also see them as they were during the early stages in the evolution of the Universe. In this sense, the redshift z provides a universal clock and can be used as a measure of time: observing distant galaxies is much like time travel into the past.
In the subsequent pages, you can find a series of pictures and animations illustrating the formation of structures in Cold Dark Matter universes. The animations were created using outputs of high-resolution simulations performed at the National Center for Supercomputing Applications (NCSA). The simulations followed the evolution of perturbations assuming a flat universe in which 30% of the density is due to matter and 70% due to vacuum energy (or "dark energy" with the equation of state p = -rho). Most of the matter is assumed to be in the form of Cold Dark Matter (CDM): massive collisionless particles. The matter is thus represented by collisionless particles (which can be seen in the smaller-scale movies). There were a total of about two million particles in the box. The movies show evolution in comoving coordinates, for the reasons explained above. However, to keep track of the expansion, the movies also show the corresponding redshift by which light from the galaxies would be stretched if it were emitted at each of the epochs shown.
Questions and comments: Andrey Kravtsov ( )
You can use this material if you include the proper credit:
simulations were performed at the National Center for Supercomputing Applications
by Andrey Kravtsov (The University of Chicago) and Anatoly Klypin (New Mexico State University).
Visualizations by Andrey Kravtsov.
|
<urn:uuid:b31fadc3-08c4-4609-bf9e-60774df60a2a>
|
CC-MAIN-2016-18
|
http://cosmicweb.uchicago.edu/sims.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111865.15/warc/CC-MAIN-20160428161511-00058-ip-10-239-7-51.ec2.internal.warc.gz
|
en
| 0.942489 | 717 | 3.765625 | 4 | 2.835561 | 3 |
Strong reasoning
|
Science & Tech.
|
In the last few years we have gained awareness of the various needs that individuals with mental health issues have in order to achieve overall well-being. Take teenage depression, for example: the teenage years are a phase of life characterized by many physical and psychological changes, a time when life seems to be taking on an entirely new shape and it can be difficult to cope with the changes within and around you.
Groups at greater risk include the long-term sick and disabled, those in poor living conditions, those with a family history of depression, the homeless, ethnic minorities, and people in prison. Research into the role of families in mental illness gave rise to family therapy, and this therapy has allowed many people with an illness to stay out of institutions.
It is no secret that severe and persistent mental illnesses like schizophrenia, bipolar disorder, and major depression can require intense care management and advocacy. What is known is that certain groups of people appear to be more at risk of developing depression than others.
The research team has also found that stress at work is associated with a 50 per cent excess risk of coronary heart disease, and there is consistent evidence that jobs with high demands, low control, and effort-reward imbalance are risk factors for mental and physical health problems (major depression, anxiety disorders, and substance use disorders).
Good mental health is not just about the struggles we have living our lives; it is also about how we treat other people. Research has shown that a patient's relationship with their family members can positively or negatively affect their mental illness.
Foods that are good for our physical health are also good for our mental wellbeing. With so much stress around, mental imbalances and illnesses are on the rise, and hence so is the demand for psychiatry careers. There is also a lot of value in having a care manager involved in the care of a loved one with a mental illness.
So, for example, as a mental health nurse you could be helping to care for and support a mother with severe post-natal depression, a young man facing the complexities of a mental illness such as schizophrenia, or someone experiencing anxiety and panic attacks which prevent them from functioning normally.
As medical science continues to point to the indisputable benefits of regular exercise and a healthy diet, many of us have begun daily routines designed to make us feel healthier and help us live longer; our mental health deserves the same attention. The reality is that any kind of mental health problem, including depression, can strike any one of us at any time of our lives.
Cognitive behavioural therapy (CBT) techniques can prove to be of great help in treating anxiety and depression, and even eating disorders and substance abuse. The times I have been most unhealthy mentally, emotionally, and spiritually, I have lacked something very important – something critical for health.
Generally observed in children, behavior disorders can be quite harmful to mental health, social interaction, and other areas of life. A mental health problem that impacts negatively on other people is of grave concern because of the damage it can do. The deinstitutionalization movement called for removing mentally ill patients from state and private institutions, where many of them received little to no care and treatment.
"Am I mentally ill?" may be the question. A study published in the British Journal of Psychiatry found that people who ate whole foods over the previous year reported fewer feelings of depression than people who ate more refined foods. However, no one is immune to depression, and someone can develop a depressive disorder even if they are not considered at greater risk.
If you experience physical symptoms such as headaches, dizziness, lack of sleep, irritability, restlessness, tightness of the chest, stomach churning, or an overwhelmed feeling, you may want to look at making some changes in your life. Today, the treating physician as well as the active family members are directly responsible for integrating people with mental illness into society.
Mental health is really about how we think and feel about ourselves and the world around us, and about how we behave and interact with others in our day-to-day lives. Staying healthy is almost as important as staying alive, as life loses its charm without physical and mental health and well-being. One of the biggest barriers to recovery for someone suffering from depression, or indeed any mental health problem, is a reluctance to seek help.
However, by choosing certain foods you can reduce your risk of both developing depression and becoming overweight; diet affects not only our mental health but our bodily health as well. Even vision can suffer from the combination of poor mental health and poor body posture.
As professionals in the field of mental health, we see that families with loved ones living with a mental health condition often want an instant “fix” for their family member.
|
<urn:uuid:4dfa73d4-c845-4dc9-892a-198b637c7d21>
|
CC-MAIN-2020-16
|
https://onlinepharmacy-kamagra.com/mental-health-depression.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00095.warc.gz
|
en
| 0.970412 | 1,003 | 2.90625 | 3 | 2.881353 | 3 |
Strong reasoning
|
Health
|
Diabetes among Asian Indian Diasporas
- Over the past 50 years, the incidence and prevalence of diabetes has increased more rapidly among the South Asian Diasporas compared with indigenous populations, irrespective of the country of origin or immigration.1-6
- Diabetes prevalence is 2-6 times higher among Indian Diasporas compared to people of other ethnic origins in several countries regardless of cultural or religious differences.7-10
- Indian vegetarians appear to have a higher risk of diabetes, possibly due to a high glycemic load.11 12
- Asian Indians develop diabetes at a lower body mass index (BMI) and waist circumference (WC) than whites, which led the World Health Organization (WHO) to set lower thresholds for these measurements in Asian Indians.13-15
- Asians develop diabetes at least 10 years earlier than whites predisposing them to greater long-term complications.13-15
- Asian Indians’ risk of diabetes is 3-fold higher than whites’ when adjusted for age, BMI, and other risk factors, and the risk is substantially higher than for all other Asians.16 The excess risk is related to higher amounts of visceral fat, truncal fat, and large dysfunctional subcutaneous fat cells.17
- Asian Indians with diabetes have a 3-4 fold higher risk of coronary artery disease (CAD) than whites with diabetes (after adjustment for gender, age, education level, hypertension, alcohol intake, and obesity).18-20
- The heightened risk of CAD among South Asians with diabetes is in sharp contrast to the 32-44% lower risk observed among blacks, Hispanics, and other Asians.7, 21
- Most studies have shown Asian Indian diabetics achieve poor control of risk factors such as hypertension and dyslipidemia.22
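The lower WHO cut-points mentioned in the list above can be illustrated with a short sketch. The specific numbers used here (BMI ≥ 23 for overweight and ≥ 27.5 for obesity in Asian populations, versus the standard 25 and 30) are the commonly cited WHO public-health action points, not figures taken from this fact sheet; treat the example as illustrative, not as clinical guidance:

```python
# BMI = weight (kg) / height (m)^2. WHO expert consultations have
# proposed lower public-health action points for Asian populations
# (~23 for overweight, ~27.5 for obesity) than the standard cut-points
# (25 and 30), reflecting higher metabolic risk at a given BMI.

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def classify(bmi_value, asian_cutpoints=False):
    """Classify a BMI using standard or Asian-population cut-points."""
    overweight, obese = (23.0, 27.5) if asian_cutpoints else (25.0, 30.0)
    if bmi_value >= obese:
        return "obese"
    if bmi_value >= overweight:
        return "overweight"
    return "not overweight"

# Example: 70 kg at 1.70 m gives BMI ~ 24.2 — below the standard
# overweight threshold but above the Asian-population action point.
b = bmi(70, 1.70)
print(round(b, 1))                         # 24.2
print(classify(b))                         # not overweight
print(classify(b, asian_cutpoints=True))   # overweight
```

The same shift applies to waist circumference, for which WHO likewise recommends lower action points in Asian populations.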
1. Barnett AH, Dixon AN, Bellary S, et al. Type 2 diabetes and cardiovascular risk in the UK south Asian community. Diabetologia. Oct 2006;49(10):2234-2246.
2. Mohanty SA, Woolhandler S, Himmelstein DU, Bor DH. Diabetes and cardiovascular disease among Asian Indians in the United States. J Gen Intern Med. May 2005;20(5):474-478.
3. McKeigue PM, Ferrie JE, Pierpoint T, Marmot MG. Association of early-onset coronary heart disease in South Asian men with glucose intolerance and hyperinsulinemia. Circulation. 1993;87(1):152-161.
4. Balarajan R. Ethnic differences in mortality from ischaemic heart disease and cerebrovascular disease in England and Wales. Bmj. Mar 9 1991;302(6776):560-564.
5. Wild SH, McKeigue P. Cross sectional analysis of mortality by country of birth in England and Wales, 1970-92. Bmj. 1997;314(7082):705-710.
6. UKPDS 32. Ethnicity and cardiovascular disease. The incidence of myocardial infarction in white, South Asian, and Afro-Caribbean patients with type 2 diabetes (U.K. Prospective Diabetes Study 32). Diabetes Care. 1998;21(8):1271-1277.
7. Ramachandran A, Ma RC, Snehalatha C. Diabetes in Asia. Lancet. Jan 30 2010;375(9712):408-418.
8. Anand SS, Yusuf S, Vuksan V, et al. Differences in risk factors, atherosclerosis, and cardiovascular disease between ethnic groups in Canada: the Study of Health Assessment and Risk in Ethnic groups (SHARE). Lancet. 2000;356(9226):279-284.
9. Abate N, Chandalia M. Ethnicity and type 2 diabetes: focus on Asian Indians. Journal of diabetes and its complications. Nov-Dec 2001;15(6):320-327.
10. Abate N, Chandalia M. The impact of ethnicity on type 2 diabetes. Journal of diabetes and its complications. Jan-Feb 2003;17(1):39-58.
11. Enas EA, Garg A, Davidson MA, Nair VM, Huet BA, Yusuf S. Coronary heart disease and its risk factors in first-generation immigrant Asian Indians to the United States of America. Indian heart journal. Jul-Aug 1996;48(4):343-353.
12. Mohan V, Radhika G, Sathya RM, Tamil SR, Ganesan A, Sudha V. Dietary carbohydrates, glycaemic load, food groups and newly detected type 2 diabetes among urban Asian Indian population in Chennai, India (Chennai Urban Rural Epidemiology Study 59). The British journal of nutrition. Jul 9 2009:1-9.
13. Asia Pacific Perspective: Redefining obesity and its treatment. World Health Organization, Western Pacific Region; 2000.
14. Mohan V., Venkatraman JV, Pradeepa R. Epidemiology of cardiovascular disease in type 2 diabetes: the Indian scenario. J Diabetes Sci Technol. 2010;4(1):158-170.
15. Ramachandran A. Epidemiology of Diabetes in India In: Mohan V, Rao G, eds. Type 2 Diabetes in South Asians: Epidemiology risk factors and prevention. New Delhi: JAYPEE; 2007.
16. Kanaya AM, Wassel CL, Mathur D, et al. Prevalence and correlates of diabetes in South asian indians in the United States: findings from the metabolic syndrome and atherosclerosis in South asians living in america study and the multi-ethnic study of atherosclerosis. Metabolic syndrome and related disorders. Apr 2010;8(2):157-164.
17. Chandalia M, Lin P, Seenivasan T, et al. Insulin resistance and body fat distribution in South Asian men compared to Caucasian men. PLoS ONE. 2007;2(8):e812.
18. Chaturvedi N, Fuller JH. Ethnic differences in mortality from cardiovascular disease in the UK: do they persist in people with diabetes? J Epidemiol Community Health. 1996;50(2):137-139.
19. Mather HM, Chaturvedi N, Fuller JH. Mortality and morbidity from diabetes in South Asians and Europeans: 11- year follow-up of the Southall Diabetes Survey, London, UK. Diabet Med. 1998;15(1):53-59.
20. Ma S, Cutter J, Tan CE, Chew SK, Tai ES. Associations of diabetes mellitus and ethnicity with mortality in a multiethnic Asian population: data from the 1992 Singapore National Health Survey. Am J Epidemiol. Sep 15 2003;158(6):543-552.
21. Karter AJ, Ferrara A, Liu JY, Moffet HH, Ackerson LM, Selby JV. Ethnic disparities in diabetic complications in an insured population. Jama. 2002;287(19):2519-2527.
22. Mukhopadhyay B, Forouhi NG, Fisher BM, Kesson CM, Sattar N. A comparison of glycaemic and metabolic control over time among South Asian and European patients with Type 2 diabetes: results from follow-up in a routine diabetes clinic. Diabet Med. Jan 2006;23(1):94-98.
|
<urn:uuid:733f7829-e1be-4249-a6e3-7b441fdf949b>
|
CC-MAIN-2019-30
|
https://cadiresearch.org/topic/diabetes-indians/diabetes-diasporas
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525634.13/warc/CC-MAIN-20190718125048-20190718151048-00358.warc.gz
|
en
| 0.782607 | 1,557 | 2.65625 | 3 | 2.964298 | 3 |
Strong reasoning
|
Health
|
Textbooks are necessary and often hefty in both weight and price. They can be a drain on a student’s finances, but the good news is that some textbooks are available as free downloads online.
E-textbooks have grown in popularity. Higher education instructors have turned to using e-textbooks because they can easily choose from a wide selection, and students don’t have to carry around backpacks full of heavy books.
What Are Textbooks Online?
Many reputable companies online, such as GetTextbooks.com, offer basic textbooks that can be downloaded and perused at your leisure.
An e-textbook is also called a digital textbook or online textbook. It is the electronic version of the physical textbook. Not all e-textbooks have physical counterparts.
E-textbooks in College
Publishers are cranking out more e-textbooks every year to keep up with the demand that professors and students have for e-textbooks.
E-textbook reviews by college instructors and students help publishers include more information as well as interactive hyperlinks and other features. Students can find textbooks online in the field they hope to study in college and read up on rudimentary information to get a head start on the type of classwork they will be doing in their first year.
E-textbooks Interactive Element
A digital textbook has features that can help a student much more than a physical book can. For instance, a social studies e-textbook by Pearson will include hyperlinks to relevant online sources to further explain a theory, to delve deeper into a global issue, or to otherwise educate the student beyond the ideas presented in the e-textbook.
This type of online connection is ideal for medical students: an e-textbook can link readers to videos of surgeries, giving them a virtual tour of an operation or technique.
Math and computer science majors can test theories and work through problems alongside students in other cities or states who are studying the same e-textbook.
Benefits of E-textbooks
The benefits of e-textbooks are abundant. They are delivered quickly, so you can get a jump start on a class without having to wait for the book to arrive. An e-textbook can also be updated immediately with the latest information, or with data that changes what you need to study to stay relevant in your chosen field.
Other benefits of e-textbooks include:
- Highlighting Function – You can highlight passages that are important and easily find them for future reference.
- Search Option – The search function allows you to quickly and easily find specific text.
- Note Feature – Many e-readers let you attach notes to pages, or tag them to specific words or paragraphs in the book during class, so you can return to them later to study further.
- Copy and Paste Function – This allows you to use the text as a quote to prove a point or debate an issue.
Audio and E-textbooks
Many e-textbooks can be converted to audio files. This can help you to listen to the text while working, driving, exercising or just lounging. The audio e-textbook provides you with a hands-free way to learn.
Types of E-textbooks
The range of e-textbooks is continually expanding. There is a wide selection of e-textbooks for basic college coursework, including:
- Natural science
- Computer science
- Marketing and law
- Career and study advice
- Health care and medicine
- IT and programming
|
<urn:uuid:c09fbb33-d180-431f-969a-80a8610bcc09>
|
CC-MAIN-2020-40
|
https://www.theclassroom.com/textbooks-online-7800479.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401600771.78/warc/CC-MAIN-20200928104328-20200928134328-00283.warc.gz
|
en
| 0.934901 | 755 | 2.5625 | 3 | 1.697111 | 2 |
Moderate reasoning
|
Education & Jobs
|