In 1621, the Plymouth colonists and Wampanoag Indians shared an autumn harvest feast that is acknowledged today as one of the first Thanksgiving celebrations in the colonies. For more than two centuries, days of thanksgiving were celebrated by individual colonies and states.
It wasn't until October 1777 that all 13 colonies celebrated a day of Thanksgiving.
The very first national day of Thanksgiving was held in 1789, when President George Washington proclaimed Thursday, Nov. 26, to be "a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness."
Though a national day of Thanksgiving was declared in 1789, Thanksgiving was not an annual celebration.
We owe the modern concept of Thanksgiving to poet and editor Sarah Josepha Hale. Hale wrote the famous nursery rhyme, "Mary Had a Little Lamb," and was editor of "Godey's Lady's Book." She spent 40 years advocating for a national, annual Thanksgiving holiday.
In the years leading up to the Civil War, she saw the holiday as a way to infuse hope and belief in the nation and the constitution. So, when the United States was torn in half during the Civil War and President Abraham Lincoln was searching for a way to bring the nation together, he discussed the matter with Hale.
On Oct. 3, 1863, Lincoln issued a Thanksgiving Proclamation that declared the last Thursday in November (based on Washington's date) to be a day of "thanksgiving and praise."
For the first time, Thanksgiving became a national, annual holiday with a specific date.
For 75 years after Lincoln issued his Thanksgiving Proclamation, succeeding presidents honored the tradition and annually issued their own Thanksgiving Proclamation, declaring the last Thursday in November as the day of Thanksgiving.
However, in 1939, during the Great Depression, the date of Thanksgiving was scheduled to be Nov. 30.
Retailers complained to President Franklin D. Roosevelt (FDR) that this left only 24 shopping days until Christmas and begged him to push Thanksgiving just one week earlier. Most people did their Christmas shopping after Thanksgiving, and retailers hoped that with an extra week of shopping, people would buy more.
When FDR announced his Thanksgiving Proclamation in 1939, he declared the date of Thanksgiving to be Thursday, Nov. 23, the second-to-last Thursday of the month.
The new date for Thanksgiving caused a lot of confusion. Calendars were now incorrect. Schools that had planned vacations and tests now had to reschedule. Thanksgiving had been a big day for football games, as it is today, so the game schedule had to be examined.
Before 1939, governors followed the president in officially proclaiming the same day as Thanksgiving for their state. In 1939, many governors did not agree with FDR's decision to change the date and refused to follow him. The country became split on which Thanksgiving they should observe.
Twenty-three states followed FDR's change. Twenty-three other states disagreed with FDR and kept the traditional date for Thanksgiving. Two states, Colorado and Texas, decided to honor both dates.
This idea of two Thanksgiving days split some families because not everyone had the same day off work.
Did it work?
The answer was no. Businesses reported that spending was approximately the same, but the distribution of the shopping had changed.
In the states that celebrated the earlier Thanksgiving date, shopping was spread evenly throughout the season. In the states that kept the traditional date, businesses saw the bulk of the shopping concentrated in the last week before Christmas.
In 1940, FDR again announced Thanksgiving to be the second-to-last Thursday of the month. This time, 31 states followed him with the earlier date and 17 kept the traditional date. Confusion over two Thanksgivings continued.
Lincoln established the Thanksgiving holiday to bring the country together, but the confusion over the date change was tearing it apart. On Dec. 26, 1941, Congress passed a law declaring that Thanksgiving would occur every year on the fourth Thursday of November.
Every generation has to reinvent the practice of computer programming. In the 1950s the key innovations were programming languages such as Fortran and Lisp. The 1960s and '70s saw a crusade to root out "spaghetti code" and replace it with "structured programming." Since the 1980s software development has been dominated by a methodology known as object-oriented programming, or OOP. Now there are signs that OOP may be running out of oomph, and discontented programmers are once again casting about for the next big idea. It's time to look at what might await us in the post-OOP era (apart from an unfortunate acronym).
The Tar Pit
The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when "the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs." Half a century later, we're still debugging.
The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine's memory. Before you could call a subroutine, you had to calculate its address.
The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.
Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x² + y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x² + y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.
With Fortran and the languages that followed, programmers finally had the tools they needed to get into really serious trouble. By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a "tar pit" and remarked, "Everyone seems to have been surprised by the stickiness of the problem."
One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra's brief letter to the editor titled "Go to statement considered harmful." Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.
Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.
Nouns and Verbs
The true history of software development is not a straight line but a meandering river with dozens of branches. Some of the tributaries—functional programming, declarative programming, methods based on formal proofs of correctness—are no less interesting than the mainstream, but here I have room to explore only one channel: object-oriented programming.
Consider a program for manipulating simple geometric figures. In a non-OOP environment, you might begin by writing a series of procedures with names such as rotate, scale, reflect, calculate-area, calculate-perimeter. Each of these verblike procedures could be applied to triangles, squares, circles and many other shapes; the figures themselves are nounlike entities embodied in data structures separate from the procedures. For example, a triangle might be represented by an array of three vertices, where each vertex is a pair of x and y coordinates. Applying the rotate procedure to this data structure would alter the coordinates and thereby turn the triangle.
What's the matter with this scheme? One likely source of trouble is that the procedures and the data structures are separate but interdependent. If you change your mind about the implementation of triangles—perhaps using a linked list of points instead of an array—you must remember to change all the procedures that might ever be applied to a triangle. Also, choosing different representations for some of the figures becomes awkward. If you describe a circle in terms of a center and a radius rather than a set of vertices, all the procedures have to treat circles as a special case. Yet another pitfall is that the data structures are public property, and the procedures that share them may not always play nicely together. A figure altered by one procedure might no longer be valid input for another.
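To make the arrangement concrete, here is a minimal Python sketch of the procedural style described above; the function names and the vertex-list representation are illustrative assumptions, not code from the article:

```python
import math

# A triangle is a bare data structure: a list of (x, y) vertex pairs.
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

def rotate(figure, angle):
    """Rotate any vertex-list figure about the origin by angle radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in figure]

def calculate_perimeter(figure):
    """Add up the side lengths of any vertex-list figure."""
    n = len(figure)
    return sum(math.dist(figure[i], figure[(i + 1) % n]) for i in range(n))

# The verbs and the nouns live apart: change the representation of triangles
# (say, to a linked list of points) and every procedure above must be revisited.
print(calculate_perimeter(rotate(triangle, math.pi / 6)))   # 12.0
```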
Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object's own methods know about the internal data structures, it's easier to keep them in sync.
This scheme would not have much appeal if every time you wanted to create a triangle, you had to write out all the necessary data structures and methods—but that's not how it works. You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
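A corresponding object-oriented sketch in Python, again with illustrative names, shows the data hidden inside the object and a calculate_perimeter method defined once on a general polygon class and inherited by its subclasses:

```python
import math

class Polygon:
    def __init__(self, vertices):
        self._vertices = list(vertices)   # internal data, reachable only through methods

    def rotate(self, angle):
        """Rotate this figure about the origin; callers never touch the vertices."""
        c, s = math.cos(angle), math.sin(angle)
        self._vertices = [(x * c - y * s, x * s + y * c) for x, y in self._vertices]

    def calculate_perimeter(self):
        """Works for any number of sides, so every subclass inherits it unchanged."""
        n = len(self._vertices)
        return sum(math.dist(self._vertices[i], self._vertices[(i + 1) % n])
                   for i in range(n))

class Triangle(Polygon):
    pass   # inherits rotate and calculate_perimeter

class Quadrilateral(Polygon):
    pass

t = Triangle([(0, 0), (4, 0), (0, 3)])
t.rotate(math.pi / 6)            # "send a message" to the triangle object
print(t.calculate_perimeter())   # 12.0, computed by code defined once in Polygon
```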
Object-oriented programming traces its heritage back to simula, a programming language devised in the 1960s by Ole-Johan Dahl and Kristen Nygaard. Some object-oriented ideas were also anticipated by David L. Parnas. And the Sketchpad system of Ivan Sutherland was yet another source of inspiration. The various threads came together when Alan Kay and his colleagues created the Smalltalk language at the Xerox Palo Alto Research Center in the 1970s. Within a decade several more object-oriented languages were in use, most notably Bjarne Stroustrup's C++, and later Java. Object-oriented features have also been retrofitted onto older languages, such as Lisp.
As OOP has transformed the way programs are written, there has also been a major shift in the nature of the programs themselves. In the software-engineering literature of the 1960s and '70s, example programs tend to have a sausage-grinder structure: Inputs enter at one end, and outputs emerge at the other. An example is a compiler, which transforms source code into machine code. Programs written in this style have not disappeared, but they are no longer the center of attention. The emphasis now is on interactive software with a graphical user interface. Programming manuals for object-oriented languages are all about windows and menus and mouse clicks. In other words, OOP is not just a different solution; it also solves a different problem.
Aspects and Objects
Most of the post-OOP initiatives do not aim to supplant object-oriented programming; they seek to refine or improve or reinvigorate it. A case in point is aspect-oriented programming, or AOP.
The classic challenge in writing object-oriented programs is finding the right decomposition into classes and objects. Returning to the example of a program for playing with geometric figures, a typical instance of the class pentagon might be a regular, convex five-sided figure. But a lopsided, non-convex figure is also a pentagon, and so is a five-pointed star. To accommodate the differences between these figures, you could introduce subclasses of pentagon—perhaps named convex-pentagon, non-convex-pentagon and five-pointed-star. But then you would have to do the same thing for hexagons, heptagons and so forth, which soon becomes tedious. Moreover, this classification would give you no way to write methods that apply, say, to all convex polygons but to no others. An alternative decomposition would divide the polygon class into convex-polygon and non-convex-polygon, then subdivide the latter class into simple-polygon and self-intersecting-polygon. With this choice, however, you lose the ability to address all five-sided figures as a group.
One solution to this quandary is multiple inheritance—allowing a class to have more than one parent. Thus a five-pointed star could be a subclass both of pentagon and of self-intersecting-polygon and could inherit methods from both. The wisdom of this arrangement is a matter of eternal controversy in the OOP community.
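In a language that allows it, the multiple-inheritance arrangement might be sketched in Python as follows; the class names mirror the article's hypothetical hierarchy, and the bodies are left empty:

```python
class Polygon:
    """Base class, as in the earlier sketch."""

class Pentagon(Polygon):
    """All five-sided figures."""

class SelfIntersectingPolygon(Polygon):
    """Figures whose edges cross one another."""

class FivePointedStar(Pentagon, SelfIntersectingPolygon):
    """Inherits methods from both parents, and any conflicts between them."""

# Python resolves clashes between the parents with a fixed method-resolution order:
print(FivePointedStar.__mro__)
```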
Aspect-oriented programming takes another approach to dealing with "crosscutting" issues that cannot easily be arranged in a treelike hierarchy. An example in the geometry program might be the need to update a display window every time a figure is moved or modified. The straightforward OOP solution is to have each method that changes the appearance of a figure (such as rotate or scale) send a message to a display-manager object, telling the display what needs to be redrawn. But hundreds of methods could send such messages. Even apart from the boredom of writing the same code over and over, there is the worry that the interface to the display manager might change someday, requiring many methods to be revised. The AOP answer is to isolate the display-update "aspect" of the program in a module of its own. The programmer writes one instance of the code that calls for a display update, along with a specification of all the occasions on which that code is to be invoked—for example, whenever a rotate method is executed. Then even though the text of the rotate method does not mention display updating, the appropriate message is sent at the appropriate time.
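Python has no aspect weaver like AspectJ, but a decorator can stand in for the general idea: the display-update concern is written once and attached to each figure-changing method, which never mentions the display itself. This is only a loose sketch of the concept under assumed names (display_manager, Figure), not the way AspectJ is actually implemented:

```python
class DisplayManager:
    def redraw(self, figure):
        print(f"redrawing {figure}")

display_manager = DisplayManager()   # assumed global display service

def notifies_display(method):
    """The display-update 'aspect': wrap a method so the display is told afterward."""
    def wrapper(self, *args, **kwargs):
        result = method(self, *args, **kwargs)
        display_manager.redraw(self)   # the crosscutting code lives in one place
        return result
    return wrapper

class Figure:
    @notifies_display
    def rotate(self, angle):
        pass   # geometry omitted; note that rotate never mentions the display

    @notifies_display
    def scale(self, factor):
        pass

Figure().rotate(0.5)   # prints a redraw notice even though rotate doesn't ask for one
```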
An AOP system called AspectJ, developed by Gregor Kiczales and a group of colleagues at Xerox PARC, works as an extension of the Java language. AOP is particularly attractive for implementing ubiquitous tasks such as error-handling, the logging of events, and synchronizing multiple threads of execution, which might otherwise be scattered throughout a program. But there are dissenting views. Jörg Kienzle and Rachid Guerraoui report on an attempt to build a transaction-processing system with AspectJ, where the key requirement is that transactions be executed completely or not at all (so that the system cannot debit one account without crediting another). They found it difficult to cleanly isolate this property as an aspect.
Surely the most obvious place to look for help with programming a computer is the computer itself. If Fortran can be compiled into machine code, then why not transform some higher-level description or specification directly into a ready-to-run program? This is an old dream. It lives on under names such as generative programming, metaprogramming and intentional programming.
In general, fully automatic programming remains beyond our reach, but there is one area where the idea has solid theoretical underpinnings as well as a record of practical success: in the building of compilers. Instead of hand-crafting a compiler for a specific programming language, the common practice is to write a grammar for the language and then generate the compiler with a program called a compiler compiler. (The best-known of these programs is Yacc, which stands for "yet another compiler compiler.")
Generative programming would adapt this model to other domains. For example, a program generator for the kind of software that controls printers and other peripheral devices would accept a grammar-like description of the device and produce an appropriately specialized program. Another kind of generator might assemble "protocol stacks" for computer networking.
Krzysztof Czarnecki and Ulrich W. Eisenecker compare a generative-programming system to a factory for manufacturing automobiles. Building the factory is more work than building a single car by hand, but the factory can produce thousands of cars. Moreover, if the factory is designed well, it can turn out many different models just by changing the specifications. Likewise generative programming would create families of programs tailored to diverse circumstances but all assembled from similar components.
The Quality Without a Name
Another new programming methodology draws its inspiration from an unexpected quarter. Although the term "computer architecture" goes back to the dawn of the industry, it was nonetheless a surprise when a band of software designers became disciples of a bricks-and-steel architect, Christopher Alexander. Even Alexander was surprised.
Alexander is known for the enigmatic thesis that well-designed buildings and towns must have "the quality without a name." He explains: "The fact that this quality cannot be named does not mean that it is vague or imprecise. It is impossible to name because it is unerringly precise." Does that answer your question?
Even if the quality had a name, it's not clear how one would turn it into a prescription for building good houses—or good software. Fortunately, Alexander is more explicit elsewhere in his writings. He urges architects to exploit recurrent patterns observed in both problems and solutions. For the pattern of events labeled "watching the world go by," a good solution is probably going to look something like a front porch. Taken over into the world of software, this approach leads to a catalogue of design patterns for solving specific, recurring problems in object-oriented programming. For example, a pattern named Bridge deals with the problem of setting up communications between two objects that may not know of each other's existence at the time a program is written. A pattern named Composite handles the situation where a single object and a collection of multiple objects have to be given the same status, as is often the case with files and directories of files.
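As an illustration, the Composite pattern can be sketched in a few lines of Python, giving a single file and a directory of files the same interface; the class and method names here are illustrative, not taken from any pattern catalogue:

```python
class File:
    def __init__(self, name, size):
        self.name, self.size = name, size

    def total_size(self):
        return self.size

class Directory:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def total_size(self):
        # A directory answers the same question by asking each child,
        # whether that child is a file or another directory.
        return sum(child.total_size() for child in self.children)

docs = Directory("docs", [File("notes.txt", 120),
                          Directory("images", [File("fig1.png", 2048)])])
print(docs.total_size())   # 2168: the caller treats files and directories alike
```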
Over the past 10 years a sizable community has grown up around the pattern idea. There are dozens of books, web sites and an annual conference called Pattern Languages of Programming, or PLoP. Compared with earlier reform movements in computing, the pattern community sounds a little unfocused and New Age. Whereas structured programming was founded on a proof that three specific structures suffice to express all algorithms, there is nothing resembling such a proof to justify the selection of ideas included in catalogues of design patterns. As a matter of fact, the whole idea of proofs seems to be out of favor in the pattern community.
Software Jeremiahs usually preach that programming should be an engineering profession, guided by standards analogous to building codes, or else it should be a branch of applied mathematics, with programs constructed like mathematical proofs. The pattern movement rejects both of these ideals and suggests instead that programmers are like carpenters or stonemasons—stewards of a body of knowledge gained by experience and passed along by tradition and apprenticeship. This is a movement of practitioners, not academics. Pattern advocates express particular contempt for the notion that programming might someday be taken over entirely by the computer. Automating a craft, they argue, is not only infeasible but also undesirable.
The rhetoric of the pattern movement may sound like the ranting of a fringe group, but pattern methods have been adopted in several large organizations producing large—and successful—software systems. (When you make a phone call, you may well be relying on the work of programmers seeking out the quality without a name.) Moreover, beyond the rhetoric, the writings of the software-patterns community can be quite down-to-earth and pragmatic.
If the pattern community is on the radical fringe, how far out is extreme programming (or, as it is sometimes spelled, eXtreme programming)? For the leaders of this movement, the issue is not so much the nature of the software itself but the way programming projects are organized and managed. They want to peel away layers of bureaucracy and jettison most of the stages of analysis, planning, testing, review and documentation that slow down software development. Just let programmers program! The recommended protocol is to work in pairs, two programmers huddling over a single keyboard, checking their own work as they go along. Is it a fad? A cult? Although the name may evoke a culture of body piercing and bungee jumping, extreme programming seems to have gained a foothold among the pinstriped suits. The first major project completed under the method was a payroll system for a transnational automobile manufacturer.
Ask Me About My OOP Diet
Frederick Brooks, who wrote of the tar pit in the 1960s, followed up in 1987 with an essay on the futility of seeking a "silver bullet," a single magical remedy for all of software's ills. Techniques such as object-oriented programming might alleviate "accidental difficulties" of software development, he said, but the essential complexity cannot be wished away. This pronouncement that the disease is incurable made everyone feel better. But it deterred no one from proposing remedies.
After several weeks' immersion in the how-to-program literature, I am reminded of the shelves upon shelves of diet books in the self-help department of my local bookstore. In saying this I mean no disrespect to either genre. Most diet books, somewhere deep inside, offer sound advice: Eat less, exercise more. Most programming manuals also give wise counsel: Modularize, encapsulate. But surveying the hundreds of titles in both categories leaves me with a nagging doubt: The very multiplicity of answers undermines them all. Isn't it likely that we'd all be thinner, and we'd all have better software, if there were just one true diet, and one true programming methodology?
Maybe that day will come. In the meantime, I'm going on a spaghetti-code diet.
© Brian Hayes
The Douglasfir is to the world of trees what a decathlon winner is to the Olympics. This tree is an all-around champion. It is one of our most important lumber species, a magnificent ornamental tree, and one of the most popular Christmas trees in America. Additionally, a large number of bird and animal species find shelter and food in its majestic foliage.
This magnificent evergreen has a dense, conical shape when young, becoming more open and pyramid-shaped with maturity. It has a straight trunk with thin, smooth bark marked by resin blisters when young, becoming thicker and furrowed on older trees. It reaches 40'-80' tall with a 12'-20' spread in the home landscape, and over 200' in natural conditions. Coast Douglasfir needles are a dark yellow green, although on some trees bluish green. In Rocky Mountain Douglasfir, the needles are blue green, but occasionally yellowish green. The light brown, 3"-4" cones grow downward on the branches with distinctive 3-pointed bracts protruding from between the scales. The Coast Douglasfir grows best in deep, moist, well-drained, acid or neutral soil with atmospheric moisture, but the hardier Rocky Mountain variety is found in its native range on rocky mountain slopes. It does not tolerate dry, poor soils, and breakage is common on the side exposed to high winds. (zones 4-6)
Douglasfir seeds are used by blue grouse, songbirds, squirrels, rabbits, and other rodents and small animals. Antelope, deer, elk, mountain goats, and mountain sheep eat the twigs and foliage. It provides excellent cover for a wide range of animals.
While the Douglasfir may have first been introduced to cultivation by botanist-explorer David Douglas in 1826, its importance to American history continues unabated. As well as being the country's top source of lumber today, the Douglasfir also helped settle the West, providing railroad ties and telephone/telegraph poles. The Douglasfir was crucial to American soldiers in World War II as well, being used for everything from GIs' foot lockers to portable huts and even the rails of stretchers that carried many a soldier from battle. But perhaps one contribution of the Douglasfir symbolizes its place in America's evolving history more than any other. When in 1925 the time came to restore the masts of "Old Ironsides," the USS Constitution, sufficiently grand White Pine trees could no longer be found. Today, Old Ironsides proudly sails in the Boston Navy Yard under the power of three Douglasfir masts.
There are two geographical varieties of Douglasfir: Coast Douglasfir, Pseudotsuga menziesii var. menziesii, native to British Columbia along the Pacific coast to central California and western Nevada; and Rocky Mountain Douglasfir, native to the inland mountains of the Pacific Northwest and the Rocky Mountains from central British Columbia south to northern and central Mexico. The Coastal variety is faster growing, long-lived, and can reach over 300' tall. The needles are usually a dark yellow-green, although on some trees they may be bluish green. Rocky Mountain Douglasfir, Pseudotsuga menziesii var. glauca, is hardier, slower growing, shorter lived and seldom grows over 130' tall. The needles are shorter and bluish green, although on some trees they may be yellowish green. The cones are barely 3" in length with bracts bent upwards.
Douglasfir is written as one word or hyphenated to show it is not a true fir.
Sensitive to drought conditions; requires good drainage.
The needles are spirally arranged, simple, 1-1/2 inches long, and shining; the shade of green depends upon the variety, with two bands of stomata beneath. Coast Douglasfir has dark yellow-green, occasionally bluish-green needles. Rocky Mountain Douglasfir has shorter, bluish-green, occasionally yellowish-green needles.
The male flowers are red; the female flowers are green with prominent bracts.
The light brown, oval, pendulous cones are 3-4 inches long with prominent 3-pointed bracts that protrude between the scales.
Types of Arthritis
There are over 100 types of arthritis, and these diseases affect more than 46 million individuals in the U.S. alone. This figure is expected to pass the 60 million mark before 2030. Each type of arthritis is distinct and has its own treatment approach, which highlights the importance of accurate tests and diagnostic procedures in determining the specific type of arthritis a person is suffering from.
Once you are able to isolate the triggers of the pain and inflammation associated with the disease, you can take the necessary steps to find relief from its effects and maintain a relatively normal daily routine.
Diversity of the Types of Arthritis
If you live long enough, there is a strong likelihood that you will experience a touch of at least one of the more than 100 types of arthritis. This chronic medical condition can take the form of mild tendinitis or bursitis or of a debilitating systemic disease like rheumatoid arthritis. There are also some forms of arthritis and arthritis-related conditions, such as fibromyalgia and systemic lupus erythematosus, that are widespread and affect different parts of the body.
Arthritis is not just a medical condition of the old. There are certain types of the disease that specifically affect infants and children, including juvenile rheumatoid arthritis and septic arthritis. There are also a significant number of men and women who suffer from this disease in the prime of their lives. A common denominator for these types of arthritis is the presence of musculoskeletal and joint pain, and this is the primary reason why these conditions are collectively referred to as arthritis.
Major types of arthritis include:
Osteoarthritis, also referred to as arthrosis, degenerative joint disease or degenerative arthritis, is the most common type of arthritis. It is a degenerative condition characterized by low-grade inflammation that triggers persistent pain in the affected joints. The condition results from the progressive deterioration of the cartilage, which loses its capacity to protect and cushion the joints.
Rheumatoid arthritis is a chronic autoimmune disorder that primarily affects the joints. It is a painful and debilitating inflammatory condition and can lead to substantial impairment of normal mobility as a result of the persistent pain and damage to the affected joint. Rheumatoid arthritis is also a systemic condition and can affect the extra-articular tissues in various parts of the body, including the muscles, lungs, heart, blood vessels and skin. The highest incidence of the disease is observed in men and women between 30 and 60 years of age.
Gout is characterized by sudden and severe episodes of pain which usually affect the big toe, although any joint is prone to these gout attacks. This type of arthritis is a metabolic problem brought about by the accumulation of uric acid in the bloodstream. The precursor of the condition is the buildup of harmful crystals in the joints and other parts of the body. Specific medications and proper diet are essential in the control and management of gout.
Ankylosing spondylitis is a chronic inflammatory disorder that affects the spine. In its advanced stage, the condition is characterized by the stiffness of the spine and fusion of the vertebrae. In most cases, it is more difficult to detect and diagnose the disease in women than in men. The persistent pain and discomfort are the results of the inflammation in the vertebrae or spinal joints. Aside from the fusion of the vertebrae, the abnormal bone growth can lead to immobility and forward-stooped posture.
Ankylosing spondylitis can also lead to stiffness, pain and inflammation on the other parts of the body, including the joints of the hands and feet, heels, ribs, hips and shoulders. There are also some rare cases where the disease can affect the eyes, a condition known as Uveitis or Iritis, as well as the heart and lungs.
Juvenile arthritis, or simply JA, is a chronic disease that is associated with the inflammation of one or several joints. This disease affects individuals below the age of 16. The inflammation is the common denominator of the various forms of juvenile arthritis, although such forms of the disease have distinct nuances and require different treatment modes.
Psoriatic arthritis is the type of arthritis that is common in patients who are suffering from a chronic skin disorder known as psoriasis. It has some similarities with rheumatoid arthritis, although most patients who have psoriatic arthritis exhibit mild to moderate symptoms. This chronic disorder affects both men and women, and it can lead to further complications and serious health problems when left untreated. The progress of the disease is generally slow and affects specific joints.
Septic arthritis is brought about by haematogenous spread of infection, although there are instances where the condition is triggered by the introduction of infecting agents from adjacent infection, as in the case of osteomyelitis, or through a penetrating wound. The condition is common in children and premature neonates, although it can also affect the elderly and individuals who are immune suppressed.
There is no cure for arthritis, but there are several treatment modes that we can use to alleviate or mitigate the effects of the disease. Among your options are several natural arthritis treatment options that have been proven to be effective and safe when used in the management of the disease.
Researchers Model Fetal-To-Adult Hemoglobin Switching: Important Step Towards Cure For Blood Diseases
Researchers have engineered mice that model the switch from fetal to adult hemoglobin, an important step towards curing genetic blood diseases such as sickle cell anemia and beta-thalassemia. The research is published in the February 2011 issue of the journal Molecular and Cellular Biology.
They also produced for the first time a mouse that synthesizes a distinct fetal-stage hemoglobin, which was necessary for modeling human hemoglobin disorders. These diseases manifest as misshapen hemoglobin, causing anemia, which can be severe, as well as other symptoms, which can range from minor to life-threatening. The cure would lie in causing the body to revert to use of fetal hemoglobin.
“The motivation for our research is to understand the basic mechanisms of gene regulation in order to cure human disease,” says Thomas Ryan of the University of Alabama Birmingham, who led the research. “If we can figure out how to turn the fetal hemoglobin back on, or keep it from switching off, that would cure these diseases.”
The new model “mimics precisely the timing in humans, completing the switch after birth,” says Ryan. “The previous models didn’t do that.” In earlier models, researchers inserted transgenes, large chunks of DNA containing the relevant genes, randomly into the mouse chromosome. In the new model, the investigators removed the adult mouse globin genes, and inserted the human fetal and adult genes in their places.
The successful engineering of a mouse with a fetal-stage hemoglobin means that humanized mouse models with mutant human genes will not die in utero.
While the basic principles behind the research are simple, the details are complex. For example, Ryan and Sean C. McConnell, a doctoral student who is the paper’s first author, had to deal with the fact that hemoglobin switching occurs twice in H. sapiens, from embryonic to fetal globin chains in early fetal life, and then to adult globin chains at birth, while wild type mice have a single switch from embryonic to adult chains early in fetal life. “Instead of the single hemoglobin switch that occurs in wild type mice, our humanized knock-in mice now have two hemoglobin switches, just like humans, from embryonic to fetal in early fetal life, and then fetal to adult at birth,” says Ryan.
Hemoglobin switching is believed to have evolved to enable efficient transfer of oxygen from the mother’s hemoglobin to the higher oxygen affinity fetal hemoglobin in the placenta during fetal life.
(S.C. McConnell, Y. Huo, S. Liu, and T.M. Ryan, 2011. Human globin knock-in mice complete fetal-to-adult hemoglobin switching in postnatal development. Mol. Cell. Biol. 31:876-883.)
July Rendezvous with Vesta
"We often refer to Vesta as the smallest terrestrial planet," said Christopher T. Russell, a UCLA professor of geophysics and space physics and the mission's principal investigator. "It has planetary features and basically the same structure as Mercury, Venus, Earth and Mars. But because it is so small, it does not have enough gravity to retain an atmosphere, or at least not to retain an atmosphere for very long.
"There are many mysteries about Vesta," Russell said. "One of them is why Vesta is so bright. The Earth reflects a lot of sunlight — about 40 percent — because it has clouds and snow on the surface, while the moon reflects only about 10 percent of the light from the Sun back. Vesta is more like the Earth. Why? What on its surface is causing all that sunlight to be reflected? We'll find out."
Dawn will map Vesta's surface, which Russell says may be similar to the moon's. He says he expects that the body's interior is layered, with a crust, a mantle and an iron core. He is eager to learn about this interior and how large the iron core is.
Named for the ancient Roman goddess of the hearth, Vesta has been bombarded by meteorites for 4.5 billion years.
"We expect to see a lot of craters," Russell said. "We know there is an enormous crater at the south pole that we can see with the Hubble Space Telescope. That crater, some 280 miles across, has released material into the asteroid belt. Small bits of Vesta are floating around and make their way all the way to the orbit of the Earth and fall in our atmosphere. About one in every 20 meteorites that falls on the surface of the Earth comes from Vesta. That has enabled us to learn a lot about Vesta before we even get there."
Dawn will arrive at Vesta in July. Beginning in September, the spacecraft will orbit Vesta some 400 miles from its surface. It will then move closer, to about 125 miles from the surface, starting in November. By January of 2012, Russell expects high-resolution images and other data about surface composition. Dawn is arriving ahead of schedule and is expected to orbit Vesta for a year.
Vesta, which orbits the Sun every 3.6 terrestrial years, has an oval, pumpkin-like shape and an average diameter of approximately 330 miles. Studies of meteorites found on Earth that are believed to have come from Vesta suggest that Vesta formed from galactic dust during the Solar System's first 3 million to 10 million years.
Dawn's cameras should be able to see individual lava flows and craters tens of feet across on Vesta's surface.
"We will scurry around when the data come in, trying to make maps of the surface and learning its exact shape and size," Russell said.
Dawn has a high-quality camera, along with a back-up; a visible and near-infrared spectrometer that will identify minerals on the surface; and a gamma ray and neutron spectrometer that will reveal the abundance of elements such as iron and hydrogen, possibly from water, in the soil. Dawn will also probe Vesta's gravity with radio signals.
The study of Vesta, however, is only half of Dawn's mission. The spacecraft will also conduct a detailed study of the structure and composition of the "dwarf planet" Ceres. Vesta and Ceres are the most massive objects in the main asteroid belt between Mars and Jupiter. Dawn's goals include determining the shape, size, composition, internal structure, and the tectonic and thermal evolution of both objects, and the mission is expected to reveal the conditions under which each of them formed.
Dawn, only the second scientific mission to be powered by an advanced NASA technology known as ion propulsion, is also the first NASA mission to orbit two major objects.
"Twice the bang for the buck on this mission," said Russell, who added that without ion propulsion, Dawn would have cost three times as much.
UCLA graduate and postdoctoral students work with Russell on the mission. Now is an excellent opportunity for graduate students to join the project and help analyze the data, said Russell, who teaches planetary science to UCLA undergraduates and solar and space physics to undergraduates and graduate students.
After orbiting Vesta, Dawn will leave for its three-year journey to Ceres, which could harbor substantial water or ice beneath its rock crust — and possibly life. On the way to Ceres, Dawn may visit another object. The spacecraft will rendezvous with Ceres and begin orbiting in 2015, conducting studies and observations for at least five months.
Russell believes that Ceres and Vesta, formed almost 4.6 billion years ago, have preserved their early record, which was frozen into their ancient surfaces.
"We're going back in time to the early solar system," he said.
Why does this galaxy have so many big black holes? No one is sure. What is sure is that NGC 922 is a ring galaxy created by the collision of a large and small galaxy about 300 million years ago. Like a rock thrown into a pond, the ancient collision sent ripples of high density gas out from the impact point near the center that partly condensed into stars. Pictured above is NGC 922 with its beautifully complex ring along the left side, as imaged recently by the Hubble Space Telescope. Observations of NGC 922 with the Chandra X-ray Observatory, however, show several glowing X-ray knots that are likely large black holes. The high number of massive black holes was somewhat surprising as the gas composition in NGC 922 -- rich in heavy elements -- should have discouraged almost anything so massive from forming. Research is sure to continue. NGC 922 spans about 75,000 light years, lies about 150 million light years away, and can be seen with a small telescope toward the constellation of the furnace (Fornax).
Acknowledgement: Nick Rose
Serious head injuries are rare in the wilderness. But get hit square on the melon by a falling rock, and the resulting brain swelling can cause dangerous intracranial pressure. Unlike skin gashes and broken bones, traumatic brain injuries don't always bleed or even cause pain, making early diagnosis tricky.
Since brain injuries can occur without exterior wounds, the best indicator of serious trauma is a person's level of consciousness, says Jeffrey Isaac, curriculum director at Wilderness Medical Associates.
Use the AVPU scale to establish a person's alertness and monitor any deterioration in brain function. The farther down the scale (A is the best, U the worst) the person registers, the more serious the brain injury.
(A) Victim is Alert and oriented; he knows who he is, where he is, and what happened.
(V) You get a response to Verbal stimuli, but victim is confused and disoriented.
(P) Victim responds only to Painful stimuli, like pinching his arm or rubbing his breastbone.
(U) Victim is Unresponsive to all of the above.
Record any periods of unconsciousness. Blackouts lasting longer than two to three minutes indicate a serious head injury, especially if accompanied by persistent disorientation.
Because brain swelling can develop slowly, evaluate the victim's mental state for 24 hours after the injury.
Watch for behavioral indicators like combativeness, restlessness, or acting drunk, as well as severe headache, nausea, and persistent vomiting.
Move the victim to a safer location if necessary. Don't leave a victim in a dangerous place or where you can't treat life-threatening injuries just because you are unable to stabilize the spine, says Isaac. Recent studies have shown that cervical spine damage occurs in a tiny percentage of victims with traumatic head injuries. As a result, new first aid protocols recommend spine "protection" over stabilization when hazardous conditions require moving the victim.
Monitor a victim's breathing and pulse rate, and keep him hydrated and warm. Treat for shock by raising the legs while you gauge his level of consciousness.
Initial disorientation or confusion can improve in a short period. The duration a person remains unconscious isn't as important as how quickly he returns to normal brain functioning, says Isaac.
Contrary to popular belief, the victim of a head injury can doze or sleep as long as he is monitored and woken up every few hours to check alertness.
Initiate immediate evacuation for victims whose alertness or memory remains severely altered, or worsens over time. Even if a victim recovers enough to walk out, he should still seek medical attention.
Is the evidence strong enough to support a medieval Welsh settlement in North America?
The story was first recorded almost four hundred years after the lifetime of Madoc ap Owain Gwynedd, and there is little to indicate that it was known before Humphrey Llwyd. While there certainly were medieval stories about a Madoc, who seems to have been more well known in Flanders than in Wales, it is by no means certain that the Madoc of the stories and poems was Madoc ap Owain. All that can be said of the medieval romances is that they concern a sea-farer of some renown. That is as far as the medieval sources go.
Where Humphrey Llwyd got the story is unknown. It is in none of the sources he translated into English and it is so far from the medieval versions that they cannot have been the sole inspiration. It is possible, perhaps even probable, that he simply made it up. As a proud Welshman at a time when the English government was doing its best to anglicise the recently-created province (following An Acte for Lawes & Justice to be ministred in Wales in like fourme as it is in this Realme of 1536), making a claim that the Welsh had discovered the New World long before the English (and, indeed, the Spanish) had ever set foot there would have been a strike by Llwyd in favour of national pride.
The flawed evidence of the Bat Creek Stone
In 1991, archaeologists Robert C Mainfort Jr. and Mary L Kwas, writing in The Tennessee Anthropologist 16 (1) identified the hoaxer of the Bat Creek Stone as John Emmert, the assistant who claimed to have found it. Cyrus Thomas had doubts about Emmert’s abilities, believing his judgement to be impaired by the drink problem that eventually led to his sacking. Following a series of begging letters to Thomas, Emmert was reinstated in 1888, promising to give him “greater satisfaction than I ever did before” and agreeing with Thomas’s hypothesis that the Cherokees were the moundbuilders. Emmert certainly had the motive for producing a spectacular find and despite Cyrus Gordon’s identification of the script as Hebrew, it is passable for the Cherokee syllabary. Alas, the Cherokee syllabary was invented in 1819 by the native American silversmith Sequoyah (c 1767-1843, also known as George Gist/Guess/Guest) and a radiocarbon date on material from Mound 3 of 1605 ± 170 bp (409 ± 174 CE) is much too early.
So, could Blackett and Wilson be right in identifying the inscription as sixth-century Welsh, in the Coelbren script? Once again, we find Coelbren to be a modern invention, having been first published in 1791 by Edward Williams (1747-1826, better known as Iolo Morganwg), a serial forger. Although claims have been made for an earlier origin (such as in the “Welsh runes” attributed to the scholar Nennius or Nemnivus and said to have been invented because an Englishman had taunted him that the Welsh had no writing system), nothing like Coelbren is attested before the time of Edward Williams. It is also evident that if it incorporates symbols for mutated consonants and such mutations are not written before the period of Middle Welsh orthography (twelfth to fourteenth centuries CE), long after the date claimed for the Bat Creek inscription by Wilson and Blackett, then Coelbren can be no earlier than the twelfth century CE.
Dismissing the recent claims
Wilson and Blackett are keen promoters of an alternative Arthurian archaeology that uses some very poor evidence that does not stand up to critical scrutiny. Indeed, there is even a suggestion that some of the evidence they use is fraudulent. Their frequent complaint that they are not taken seriously by academe is typical of Bad Archaeologists: they tell their readers that the reasons for being ignored are professional jealousies, an inability to see beyond accepted ideas and even darkly political conspiracies. Like so many other Bad Archaeologists they seem incapable of recognising that the real reason the professional archaeologists do not give them the recognition they believe they deserve is that their ideas are poorly thought out, supported by inadmissible evidence and, ultimately, rubbish.
Household Dust and Allergies in Children
The use of mattress covers on children’s beds, along with intensive education and assistance in dust-reduction measures in the home, prevents the development of allergies in children at high risk, according to a recent article in Archives of Pediatric and Adolescent Medicine (2002;156:1021–7).
Childhood allergies are a common and growing problem, and children who suffer from allergies frequently develop chronic allergic conditions, including asthma. In the United States, asthma is the most commonly diagnosed chronic disease of childhood, affecting nearly nine million children. Conventional medical management of childhood allergies and asthma includes the use of antihistamines, inhaled drugs that dilate the bronchial passages (bronchodilators), and inhaled and oral steroids. All of these medicines can have serious side effects, and a child’s likelihood of needing to continue treatment into adulthood is high. Antihistamines can affect the central nervous system, causing either overstimulation or sedation; bronchodilators can also stimulate the nervous system, causing anxiety and insomnia. Over long periods of time, inhaled steroids can damage the immune defenses of the respiratory system, leading to increased rates of infections, and oral steroids have negative effects on adrenal function and the entire immune system. For these reasons, preventive approaches should be pursued whenever possible.
According to some, but not all, studies, dust mite allergy is a predictor of asthma and other types of chronic allergies such as eczema. One previous study found that reducing the amount of dust in children’s home environments prevented the development of dust mite allergies. Another study found that the use of mattress covers effectively reduced dust mite exposure in the home. In the current study, the combined effect of mattress covers and intensive dust reduction measures on the development of dust mite allergies was evaluated.
The 566 European children who participated in this one-year study were between 18 months and five years of age. All had some allergy symptoms, such as asthma, hay fever, or eczema, but tests for dust mite allergy were negative. Each had at least one parent with allergy symptoms and positive results on the dust mite allergy test, putting these children at high risk for becoming allergic to dust mites themselves. The children were randomly assigned to one of two groups. Both groups’ parents received standard information about environmental influences on children’s health and recommendations for allergy prevention.
Recommendations for allergy prevention included avoiding exposure to pets in bedrooms, ventilating bedrooms well, avoiding cigarette smoke, and washing bedding and cleaning bedrooms regularly to minimize dust. In addition, parents of children in the intervention group were given more detailed dust-reduction instructions and assistance, and a mattress cover to prevent dust accumulation on children’s beds. The children were evaluated for allergy symptoms after six months, and tests for dust mite allergies were repeated after one year. The percentage of children who had developed allergies to dust mites by the end of the trial was 6.5% in the group that received standard instructions, compared with only 3% in the intervention group.
The results of this study add to a growing body of evidence that preventing exposure to house dust mites may significantly reduce the development of dust mite allergies, and therefore might decrease the incidence of childhood allergies and asthma. These measures should be recommended in conjunction with other allergy prevention approaches that have been shown to be effective. For example, observational studies have shown a 30 to 50% reduction in childhood asthma in those who receive exclusive breastfeeding for three months. Several studies have demonstrated a protective effect of dietary supplementation with omega-3 fatty acids. Restricting allergenic foods in the diets of nursing mothers and infants has been shown to prevent some types of allergies. Controlled trials have shown that probiotic supplements (which support the growth of healthy gut bacteria) and hydrolyzed milk formulas (in which potentially allergenic proteins are broken down to smaller, non-allergenic sizes) reduce allergies and asthma in infants. The cumulative effect of these preventive measures remains to be evaluated.
Maureen Williams, ND, received her bachelor’s degree from the University of Pennsylvania and her Doctorate of Naturopathic Medicine from Bastyr University in Seattle, WA. She has a private practice in Quechee, Vermont, and does extensive work with traditional herbal medicine in Guatemala and Honduras. Dr. Williams is a regular contributor to Healthnotes Newswire.
One of the world's most ancient societies has been given a legal buffer zone to guard it from the modern world.
India's Supreme Court has banned all commercial and tourism activity near their habitat in the country's remote Andaman and Nicobar islands in the Indian Ocean.
The ruling bars hotels and resorts from operating within a three-mile buffer zone around the Jarawa reserve, which is home to the Jarawa tribal people. The order means resorts that had opened nearby will have to close.
The Jarawas are among the world's most ancient people, with many still hunting with bows and arrows and rubbing stones together to make fire. Scientists believe they were among the first people to migrate from Africa to Asia around 70,000 years ago.
Jarawas did not have any contact with government authorities until 1996 and did not begin leaving their habitat until a few years ago, when they began moving out of the reserve in small groups for a brief while before returning. Scientists say there are around 320 Jarawa tribespeople living in the southern and middle Andaman islands.
The Indian government has come under increasing criticism from rights activists for failing to protect the Jarawas. Critics say the local government has allowed unscrupulous tour operators to promote "human safaris."
In 2002, the Supreme Court ordered that a road passing through the reserve be closed, but the local government still has not barred the Andaman Trunk Road, enabling tourist buses and vehicles to enter Jarawa habitats deep in the jungle.
India's cabinet recently authorised stiff penalties for those trying to organise tours to Jarawa habitats or photographing the tribespeople.
Last year, activists were outraged when media reports and videos surfaced of local policemen forcing bare-chested Jarawa women to dance for tourists in exchange for food.
Survival International, a London-based international rights group for indigenous people, welcomed the new order, but said the Indian government has "missed" an opportunity by allowing the road to remain open to tourists.
The National Drought Mitigation Center in Lincoln, Nebraska, says drought has produced an above-normal wildfire potential this season along the leeward sides of all the Hawaiian Islands, including the western third of Hawaii Island. Most of our state has received above-normal precipitation, but not parts of Hawaii Island. The result: the danger of out-of-control brush fires is significant.
On the United States mainland, the situation is similar. The NDMC says snowpack was disappointing in many states; Colorado and Utah had only half their usual snowpack. Colorado had its first wildfire of 2012 last month, a conflagration that took 700 firefighters more than a week to control. January and February were the driest on record in California.
The Hawaii County Fire Department warns residents to clear brush from near their houses and to have an evacuation plan should there be a fire. It also warns that tossing lit cigarettes out of cars is not only illegal but can also trigger a fire.
The National Drought Mitigation Center says that in areas on Hawaii Island and elsewhere that have not had soaking rains, the brush, grass, and trees that fuel wildfires are dry and cannot resist fire. Hawaii Island's west side usually has a rainy summer season, and there have been some mauka showers, with residents hoping for more. Hilo has gotten lots of rain this winter, but other areas around the island vary in how much rain they've received.
The NDMC, established in 1995, is based in the School of Natural Resources at the University of Nebraska-Lincoln. The organization helps people and institutions develop and implement measures to reduce societal vulnerability to drought, stressing preparedness and risk management rather than crisis management. It works with several different agencies to collaborate on drought issues–United States Department of Agriculture, National Oceanic and Atmospheric Administration, United States Geological Survey, the National Climate Data Center, and international organizations, among others.
Scientists identified seven new species of bamboo coral discovered on a NOAA-funded mission in the deep waters of the Papahānaumokuākea Marine National Monument. Six of these species may represent entirely new genera, a remarkable feat given the broad classification a genus represents. A genus is a major category in the classification of organisms, ranking above a species and below a family. Scientists expect to identify more new species as analysis of samples continues.
"These discoveries are important, because deep-sea corals support diverse seafloor ecosystems and also because these corals may be among the first marine organisms to be affected by ocean acidification," said Richard Spinrad, Ph.D., NOAA's assistant administrator for Oceanic and Atmospheric Research. Ocean acidification is a change in ocean chemistry due to excess carbon dioxide. Researchers have seen adverse changes in marine life with calcium-carbonate shells, such as corals, because of acidified ocean water.
"Deep-sea bamboo corals also produce growth rings much as trees do, and can provide a much-needed view of how deep ocean conditions change through time," said Spinrad.
Rob Dunbar, a Stanford University scientist, was studying long-term climate data by examining long-lived corals. "We found live, 4,000-year-old corals in the Monument meaning 4,000 years worth of information about what has been going on in the deep ocean interior."
"Studying these corals can help us understand how they survive for such long periods of time as well as how they may respond to climate change in the future," said Dunbar.
Among the other findings were a five-foot tall yellow bamboo coral tree that had never been described before, new beds of living deepwater coral and sponges, and a giant sponge scientists dubbed the "cauldron sponge," approximately three feet tall and three feet across. Scientists collected two other sponges which have not yet been analyzed.
Contact: Christine Patrick
March 22, 2000
UPTON, NY - Painting a bridge can be a costly and time-consuming undertaking, especially if the paint job doesn't last. So scientists have been working on ways to test paint durability before the brushes even get wet. At a March 22 session at the American Physical Society meeting in Minneapolis, scientists from the University of Missouri at Kansas City, who worked in collaboration with physicist Bent Nielsen of the U.S. Department of Energy's Brookhaven National Laboratory, will present findings that could lead to the development of an extremely sensitive and quick durability test.
The technique is called positron annihilation. Essentially, the scientists bombard small painted samples of metal with a beam of positrons, or positively charged electrons. When these "antielectrons" interact with the electrons in the molecules of the paint, they annihilate one another and send out gamma rays that give the scientists information about the molecules in the paint. The technique can detect nanometer-scale holes and defects in the paint molecules; free radicals, which indicate the presence of broken chemical bonds; and cross linking, which may make the paint brittle.
"These experiments show that this technique is extremely sensitive to detecting damage early," says Brookhaven's Nielsen - well before the formation of any visible cracks in the paint. "So you can test the paint on a much shorter time scale - a day instead of half a year. That's a big advantage," Nielsen says.
The scientists typically test the paint samples before and after exposure to ultraviolet (UV) light, one of the components of sunlight known to damage bridge coatings. The more sensitive the paint is to UV damage, the less durable the paint would be on a bridge exposed to sunlight day after day. They've also exposed samples to UV light during the positron annihilation test to see if they could detect the damage as it occurred. In both cases, the damage increased with UV exposure time, and was most severe near the surface of the paint.
In addition to laying the foundation for a quick paint durability test, the detailed observations made possible by positron annihilation may also help scientists learn more about the fundamental mechanisms of paint degradation. That knowledge, in turn, may eventually lead to the development of more durable paints.
Brookhaven was a pioneer in developing positron beams in the late 1970s and early 1980s. Positron emission tomography (PET) scanning, a medical technique used to learn about the function of body organs such as the brain, works on a similar principle, Nielsen says.
This paper will be presented at session L36 on March 22, 2000, at 10 a.m. in the Exhibit Hall of the Minneapolis Convention Center.
The U.S. Department of Energy's Brookhaven National Laboratory creates and operates major facilities available to university, industrial and government personnel for basic and applied research in the physical, biomedical and environmental sciences and in selected energy technologies. The Laboratory is operated by Brookhaven Science Associates, a not-for-profit research management company, under contract with the U.S. Department of Energy.
Note to local editors: Bent Nielsen lives in Port Jefferson, New York.
Multiple Choice Questions
This section contains 180 multiple choice questions about The Unbearable Lightness of Being. Multiple choice questions test a student's recall and understanding of the text. Use these questions for chapter quizzes, homework assignments or tests. Jump to the quiz/homework section for the multiple choice worksheets.
Multiple Choice - Part 1, Chapters 1-5 | Part 1, Chapters 6-10
1. Why is Tomas attracted to Tereza?
a) Her beauty is unlike any other.
b) He opens up to her like he does with no one else.
c) She seems like a child to him.
d) She is submissive.
2. Why does Tereza stay at Tomas's home for a week during her initial visit?
a) He invites her to stay.
b) She comes down with the flu.
c) She asks to stay, dreading returning to her mother's home.
d) She sprains her ankle.
3. After she returns home to her small...
This section contains 6,535 words (approx. 22 pages at 300 words per page).
Short Essay Questions
The 60 short essay questions listed in this section require a one to two sentence answer. They ask students to demonstrate a deeper understanding of the text. Students must describe what they've read, rather than just recall it.
Short Essay Question - Introduction
1. Describe the essential conflict of the Iron Heel.
2. What is the two-tiered future described in the Iron Heel?
3. What does the contemporary author H. Bruce Franklin describe about London?
4. What does the Iron Heel reveal about workers according to Franklin?
5. What tension did London experience in his own life that is reflected in the Iron Heel?
6. Why is it important to be vigilant in society in the Introduction?
Short Essay Question - Forward and Chapter...
This section contains 674 words (approx. 3 pages at 300 words per page).
Exponential growth refers to a quantity that increases by the same percentage over each equal time interval. It is modeled by an exponential function, where the variable appears in the exponent rather than in the base. Exponential growth and exponential decay are used in carbon dating and other real-life applications.
I want to talk about exponential growth. I have an example here: the population of mice in the Duchy of Grand Fenwick grows at a rate of 6% per year. How long will it take for the population to double or quadruple? I have a table of values here; I wanted to show you why this is exponential growth. Increasing at 6% per year means every year we're multiplying by 1.06, and so we get p sub 0 when t=0, p sub 0 times 1.06 when t=1, p sub 0 times 1.06 squared when t=2, and so on.
This suggests the formula p sub t equals p sub 0 times 1.06 to the t, and that's an exponential growth formula. Now to find the doubling time, I need to plug in twice the initial population here. I don't know what the initial population is, but twice the initial population is 2 times p sub 0, and after you plug in you can see that the actual initial population doesn't matter; it's going to cancel out. So now I have the equation 2 = 1.06 to the t. This is an exponential equation, and the way we solve exponential equations is to take the log of both sides. I'm going to take the natural log of both sides; it doesn't matter what log you use as long as it's on your calculator, so you can use either the common log or the natural log.
Okay, before I calculate I actually need to use a property of logs: the natural log of 1.06 to the t is the log of a power, so the exponent can come out in front, giving t times ln of 1.06. Then we have a simple linear equation to solve; all we need to do is divide both sides by the natural log of 1.06. So t equals ln 2 over ln of 1.06. Now I'd like a numerical answer, so I calculate ln 2 divided by ln 1.06 and get t approximately 11.89566, and that would be in years, because the population growth rate was given as 6% per year. I want to round this off; let's say your teacher likes you to round off to the nearest hundredth. Then when I calculate the quadrupling time, I reason like this: in order for a population to quadruple, it's got to double and then double again. So we're going to have 2 doubling times in a row, and it stands to reason that the quadrupling time is twice the doubling time. Twice this. But if I write 23.80 years my answer is not quite right; let me multiply my calculator answer times 2: it's actually 23.79 years.
You have to be really careful when you're using rounded values to do calculations. This value was rounded to the nearest hundredth, and when I double it I double whatever round-off error there was. So the best way to get my final answer for the quadrupling time is to double the value which is still stored in the calculator: multiply it times 2 and you get the correct value to the nearest hundredth. This is my answer.
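For readers who prefer symbols to the narration, the same calculation can be restated compactly; the numbers are the transcript's own calculator values, just written out in LaTeX.

    \[
      P(t) = P_0\,(1.06)^t, \qquad
      t_{\text{double}} = \frac{\ln 2}{\ln 1.06} \approx 11.8957 \text{ years}, \qquad
      t_{\text{quadruple}} = \frac{\ln 4}{\ln 1.06} = 2\,t_{\text{double}} \approx 23.7913 \text{ years}.
    \]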
History's greatest empires covered a fifth of the world, ruled hundreds of millions of people, and lasted anywhere from 100 years to over a millennium.
Empire, which comes from the Latin "imperare" meaning "to command," is a collection of states or ethnic groups united under one ruler or oligarchy.
It is a term that has been used to describe recent US foreign policy. Wrote filmmaker Oliver Stone and historian Peter Kuznick in a recent USA Today op-ed: "Obama is about to enter his second term as heir of George W. Bush's imperial strategy unless his latest foreign policy appointments signal significant change."
Each empire seemed unstoppable for an age, but in the end they were all consigned to history.
THE LAW. The Equal Pay Act (EPA) of 1963 prohibits employers from dishing out different wages or benefits on the basis of gender for "equal work on jobs (requiring) equal skill, effort and responsibility and which are performed under similar working conditions."
The EPA applies to all employees covered by the Fair Labor Standards Act, which means virtually all employees, and has no exclusion for small businesses. The law is administered by the Equal Employment Opportunity Commission (EEOC), which can conduct equal-pay audits even if it hasn't received a complaint and can initiate lawsuits on behalf of workers.
WHAT'S NEW? There's evidence the wage gap is widening, and that could spark more complaints and more inspections.
Also, the EEOC is already seeing the fruits of an Equal Pay Task Force it set up two years ago. Last year, employees filed 1,251 equal-pay charges and the EEOC brought in a record amount of monetary awards for workers, more than $5 million, compared with less than $2 million just five years ago.
HOW TO COMPLY. The law seems very straightforward: It requires that men and women be given equal pay for equal work. The jobs don't have to be identical, but they must be "substantially equal." Focus on the job content, not titles, to decide whether jobs are substantially equal.
When doing a self-audit of your workplace (or determining if a worker has a valid equal-pay case), always look at the factors a court would examine:
Skill. Look at the experience, ability, education and training required to perform a job, not what skills the individual employees have. Example: Two accounting jobs could be considered equal under the EPA even if one of the employees has a master's degree in physics, since that degree isn't required for the job.
Effort. Look at the physical or mental exertion needed to do the job. Suppose men and women work on an assembly line, but the employee at the end must perform his task and also lift the product into a box. That job requires more effort than the other jobs if the extra lifting is a substantial and regular part of the job. Net effect: You could pay that person more.
Responsibility. Look at the level of accountability needed to do the work. If you're assigning a minor difference in responsibility, it wouldn't be a factor.
Working conditions. This encompasses two things: physical environment, like temperature or ventilation, and workplace hazards. A more dangerous or demanding environment allows for a higher pay grade.
Key point: You can also set "pay differentials" when they're based on seniority, merit, quantity or quality of work or another business reason other than gender. Just be prepared to show a sound business reason why those differentials exist, and have good documentation of your reasons for employee pay levels.
Three other tips:
- Have accurate job descriptions to help justify pay-for-performance disputes.
- Offer equal amounts of overtime. It's illegal to favor a man because of a woman's "family constraints" or pregnancy.
- If you discover equal-pay problems, it's illegal to lower the wages of either sex to equalize the pay.
Resources: Equal Pay Act
- The EEOC's Equal Pay and Compensation Discrimination Web page, www.eeoc.gov/epa.
- The Equal Pay Act in full, www.eeoc.gov/laws/epa.html.
- Ten steps to performing an equal-pay self-audit at your company, www.dol.gov/wb/10step71.htm.
Enumerators in C#
In this article I will explain you about Enumerators in C#.
This article has been excerpted from the book "The Complete Visual C# Programmer's Guide" from the authors of C# Corner.
As explained earlier, C# has a new iteration syntax called foreach. The foreach statement can only be applied to objects of classes that implement the IEnumerable interface. The IEnumerable interface exposes the enumerator, which supports a simple iteration over a collection. Enumerators are intended to be used only to read data in the collection and cannot be used to modify the underlying collection. The enumerator does not have exclusive access to the collection. To understand what happens in the background, consider the code snippet in Listing 5.57.
Listing 5.57: Enumerator Example 1
foreach (int i in a)
    Console.WriteLine(i);
This code functions just like the while loop used in Listing 5.58.
Listing 5.58: Enumerator Example 2
a = x.GetEnumerator();
while (a.MoveNext()) Console.WriteLine(a.Current);
Please refer to the C# Language Specification (http://msdn.microsoft.com/en-us/library/aa645596(VS.71).aspx) for more details and recent updates to the C# language.
Before entering the statement block, the compiler generates the code to call the method GetEnumerator of the object passed as the second parameter in the foreach statement. The GetEnumerator method must return an object, having a property named Current, of type similar to the first argument of the foreach statement. Also this object must have a MoveNext method of return type bool. This method informs the runtime when to terminate the loop.
When an enumerator is instantiated, it takes a snapshot of the current state of the collection. If changes are made to the collection, such as the addition, modification, or deletion of elements, the snapshot gets out of sync and the enumerator throws an InvalidOperationException. Two enumerators instantiated from the same collection simultaneously can have different snapshots of the collection.
If the enumerator is positioned before the first element in the collection or after the last element in the collection, the enumerator is in an invalid state. In that case, calling Current throws an exception.
The enumerator is positioned before the first element in the collection initially. The Reset function brings the enumerator back to this position. The MoveNext method must be called to advance the enumerator to the first element of the collection before reading the value of Current, after an enumerator is created or after a Reset. The Current property returns the same object until either MoveNext or Reset is called.
Once the end of the collection is passed, the enumerator is in an invalid state and calling MoveNext returns false.
Calling Current throws an exception if the last call to MoveNext returned false. With this information under your belt, you should insert your enumerating code inside a try-catch-finally block to prevent unexpected exits.
Hope this article would have helped you in understanding Enumerators in C#. See other articles on the website on .NET and C#.
The Complete Visual C# Programmer's Guide covers most of the major components that make up C# and the .NET environment. The book is geared toward the intermediate programmer, but contains enough material to satisfy the advanced developer.
First of all, students in smaller classes generally perform better in academics than those in larger classes. Project STAR was a project run in Tennessee from 1985 to about 1999. It compared the academics of kindergarten through third grade children taught in smaller classes (13-17 kids) to those in larger classes (22-26 kids). The results were stunning. The small class children substantially outperformed the larger class children. The rates of second grade suspension went down. The district even went from low tier in math and reading to middle tier in the state. Since smaller classes have been proven to outperform larger classes, then we should transition to smaller class sizes.
Secondly, being in smaller classes early on has lasting effects throughout the child's years in school. Project STAR saw that fourth grade children who had been in smaller classes in K-3 were better behaved, probably due to the less discipline needed to quiet a smaller class. Students also had better grades than those in larger classes, even when taking into account demographics, resources, and cost of living, according to a study by Harold Weglinsky. The Weglinsky study, which pretty much repeated the Project STAR studies but in different cities, also showed that fourth graders in the inner city were three quarters of a grade level ahead of children enrolled in larger classes. There was also a stronger bond between students. I know the bond part is true because last year in band class we had a grand total of ten people. We all got to know each other better.
The story doesn't end there. Project STAR also tracked the small-class kids into high school and found they were more likely to graduate on schedule and less likely to drop out. More small-class kids were found in honors classes, and more took the SATs and ACTs, indicating a higher rate of going to college. Since studies prove again and again that smaller classes are better for your kids, isn't it the logical choice?
The benefits of smaller class sizes keep going on. Smaller class sizes can actually save money. In smaller classes, there's a smaller teacher to child ratio, so that means that there's more time for children to talk directly to the teacher for assistance. Children receive a more personalized education that fits their needs. In smaller classes early on, during K-3 years, teachers can give personalized education to those who may have learning problems, rather than referring his or her parents to costly special education programs. Fewer children get left out since teachers don't have as many kids to teach. Since smaller classes save money, why shouldn't classes downsize?
The more and more time we take, the more children are left undereducated, the more money we spend on special education, tests, and false answers. Cutting class sizes is proven to be better. It saves your money, your child's education, and their future. We should minimize the classes as soon as possible in order to make sure our children get the education they need. If we don't act now, it could be too late for an entire generation of children.
What is skin testing for allergies?
The most common way to test for allergies is on the skin, usually the forearm or the back. In a typical skin test, a doctor or nurse will place a tiny bit of an allergen (such as pollen or food) on the skin, then make a small scratch or prick on the skin.
The allergist may repeat this, testing for several allergens in one visit. This can be a little uncomfortable, but not painful.
If your child reacts to one of the allergens, the skin will swell a little in that area. The doctor will be able to see if a reaction occurs within about 15 minutes. The swelling usually goes down within about 30 minutes to a few hours. Other types of skin testing include injecting allergens into the skin or taping allergens to the skin for 48 hours.
With a skin test, an allergist can check for these kinds of allergies:
- environmental, such as mold, pet dander, or tree pollen
- food, such as peanuts or eggs
- medications, such as penicillin
Some medications (such as antihistamines) can interfere with skin testing, so check with the doctor to see if your child's medications need to be stopped before the test is done. While skin testing is useful and helpful, sometimes additional tests (like blood tests or food challenges) also must be done to see if a child is truly allergic to something.
While skin tests are usually well tolerated, in rare instances they can cause a more serious allergic reaction. This is why skin testing must always be done in an allergist's office, where the doctor is prepared to handle a reaction.
Reviewed by: Larissa Hirsch, MD
Date reviewed: May 2012
In all modern languages like C# and Java, we gain the benefits of garbage collection. What about implementing our own? In this article, I will try to explain how to implement a garbage collector for the C language.
What is Garbage Collection?
In the C language, dynamic memory management is done with the malloc() and free() functions. When a piece of memory is required, the programmer calls malloc() and receives a pointer to this area, and releases the area using free() when it is not used anymore. This is really a very easy task: you create a memory area using malloc() and release it using free(). What if the programmer forgets to call free(), or the application breaks before free() is executed? If free() is not called, the operating system cannot use this area and still thinks that it is in use. Large chunks of unreleased memory can affect system performance vitally.
The need for an automated garbage collection mechanism is born at this point. An automated garbage collection mechanism guarantees that all memory allocated during the program run is released at the end.
There are a lot of garbage collection algorithms, such as mark and sweep, copying, generational, reference counting, etc. In this article, I will try to explain the mark and sweep algorithm.
What about Conservation?
Garbage collectors (abbreviated GC from now on) should not force developers to tag data or to use a special datatype for pointers. GC should also work on existing source code; working on existing code without recompilation would be a more elegant solution. GC should not force changes to compilers. The conservative garbage collection approach provides a GC solution that preserves the tasks mentioned above.
In order to work properly, GC should have knowledge about the following tasks:
- Variables actively in use
- Which variable is a pointer and which is not
- Information of the allocated memory
Information about the allocated memory can be collected while GC allocates memory.
In the C language, information about the variables in use can be collected with a special scan of the heap, stack, and static data of the application. This solution is highly hardware dependent.
Also, in the C language we have no knowledge of a type at runtime. This means that at runtime it is not an easy task to distinguish pointers from non-pointers. Again we receive no assistance from the compiler. Once we have information about the variables actively in use, we can scan this list with a special pointer identification algorithm to distinguish the pointers. This step has some shortcomings, but efficiency can be achieved with elegant algorithms.
The conservative approach allows developers to use GC in their already written code without any change to it. Developers call the malloc() function and never call free() again inside the code. The rest is handled by GC as a smart servant.
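From the application's side this looks roughly like the sketch below; the gc.h header name is an assumption of this example, while gc_malloc matches the allocator discussed later in the article.

    #include "gc.h"        /* assumed header exposing the collector's gc_malloc() */

    void build_table(int n)
    {
        int *values = gc_malloc(n * sizeof(int));   /* was: malloc(n * sizeof(int)) */
        for (int i = 0; i < n; i++)
            values[i] = i * i;
        /* no free(): the collector reclaims the block once nothing points to it */
    }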
Stop the World Approach
I mentioned that we scan memory areas of the application. We also need to release unused memory areas. These operations take additional CPU cycles. So when garbage is being collected, we need to use CPU. At this point, there are two main approaches for use of the CPU. These are stop the world and concurrent approaches.
The concurrent approach handles GC cycles on separate threads. For this approach, complex locking mechanisms are needed. As a result, it delivers the high performance that most modern architectures call for. For further information, you can search for the tri-color marking algorithm.
The stop-the-world approach stops program execution, does garbage collection, and resumes program execution. This has a big disadvantage: it does not allow the application to use the CPU while garbage is collected, which can cause the application to pause. Also, we cannot use multiple processors even if the hardware has more than one CPU, which can be a big performance gap. Although it has a lot of disadvantages, it is really very easy to implement, so in this article we will use this approach.
Mark and Sweep Algorithm
The mark and sweep algorithm was the first algorithm able to handle cyclic references. Combined with some other techniques, it is one of the most commonly used garbage collection approaches.
Mark and sweep is a tracing collector, so it traces through all available pointers to distinguish used and unused memory areas. It consists of two phases. The first phase is the marking phase: the GC traces through all available variables and finds pointers using the pointer identification algorithm. Once the pointers are determined, the marking phase finds the heap areas they refer to and marks them as used. In the second phase, the GC traces through the heap and picks the unmarked areas. Unmarked areas are the memory areas which are not currently in use; these areas are reclaimed.
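To make the two phases concrete, here is a compressed sketch in C of one possible mark-and-sweep cycle over a list of allocation headers. It is not the article's implementation: the block_header layout, the globals, and the find_block() helper (sketched further below) are simplifications introduced only for illustration.

    #include <stdlib.h>

    /* Header prepended to every GC allocation. (The article's implementation uses
       a doubly linked list and stores more bookkeeping; this is a reduced sketch.) */
    typedef struct block_header {
        struct block_header *next;    /* chains all GC allocations together */
        size_t               size;    /* payload size in bytes              */
        int                  marked;  /* set during the mark phase          */
    } block_header;

    static block_header *all_blocks;            /* head of the allocation list           */
    block_header *find_block(void *candidate);  /* pointer identification, sketched later */

    /* Mark phase: scan a word-aligned range (stack, statics, spilled registers) and
       mark every block that some word appears to point into, recursing into the
       block's own payload (depth-first) so pointers to pointers are followed.      */
    static void mark_range(void **start, void **end)
    {
        for (void **p = start; p < end; p++) {
            block_header *b = find_block(*p);
            if (b != NULL && !b->marked) {
                b->marked = 1;                       /* also breaks cycles */
                void **payload = (void **)(b + 1);
                mark_range(payload, (void **)((char *)payload + b->size));
            }
        }
    }

    /* Sweep phase: walk the allocation list and release every unmarked block. */
    static void sweep(void)
    {
        block_header **link = &all_blocks;
        while (*link != NULL) {
            block_header *b = *link;
            if (b->marked) {
                b->marked = 0;        /* clear the mark for the next collection */
                link = &b->next;
            } else {
                *link = b->next;      /* unlink the dead block and reclaim it   */
                free(b);
            }
        }
    }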
As mentioned, mark and sweep can handle cyclic references. Moreover, it includes no overhead on variables.
Conservative GC faces two main difficulties: the first is identifying where to find the root set, and the second is how to identify pointers.
The root set can be described as the variables which are in use at time t. Finding root sets without the assistance of the compiler is a highly system- and hardware-dependent issue. Root sets can be found in the stack, the registers, and the static area of the application. In order to implement our GC, we should find the base addresses of these memory areas.
GC should discover the bottom and top of the stack. Stack is the main stack of the application. If we take a closer look into the CPU architecture, we can see that there is a dedicated stack which holds addresses of execution points, passed parameters to functions and local variables. Stack grow direction may change in each architecture. When a new item is pushed into the stack in some architectures, the stack grows downward and in some, it grows upward. GC should be aware of this. Stack bottom and top addresses can be found by combination of EBP, ESP and DS register values of 32bit architecture. Also there are alternative ways.
Static areas are held in the data segment register in 32Bit CPUs and stored in the heap of the running application. Static areas are the memory block where local static and global variables are held. In a realworld application, we can have some global and static local variables which hold pointers. GC should be aware of these variables.
Registers are CPU registers of the hardware. These memory areas are highly system dependent. GC should be aware of the root sets held in the registers before GC takes place. Reclaiming of the memory areas which are in use could cause severe bugs.
The second difficulty that conservative GC faces is identification of the pointers. In the C and C++ languages, pointers can be held inside integer variables. In some cases, it is not an easy task to distinguish a pointer from a 32-bit integer value. As GC has no assistance from the compiler, it has to handle identification of the pointers by itself. In general, the approach for a conservative garbage collector is that "GC must treat any word (integer) that it encounters as a potential pointer unless it can prove otherwise"¹.
While in this step, GC should be aware of pointers to pointers. In this project, I implemented depth-first search as the pointer traversal algorithm. In order to identify pointers, GC should have some test steps to filter pointers from non-pointers. Some of the tests are mentioned below:
- Does a potential pointer refer to an atom pointer?
- Does a potential pointer refer to the application heap?
- Does a potential pointer refer to the root sets? If so, execute the pointer traversal algorithm to find which portion of the heap it refers to.
- If a potential pointer refers to the heap, trace through the allocated blocks to find the exact block that it points to.
Atom pointers are the pointers which are used by GC itself. GC should distinguish these pointers from actual application pointers. Also GC should give the ability to the developer to identify custom atom pointers. Atom pointers are being skipped at pointer identification phase and they are not recognized as pointers by GC. GC never touches the memory areas of these pointers.
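Putting those tests together, the identification routine used by the mark-phase sketch above could look roughly like the following; the cached heap bounds and the atom-pointer list are illustrative assumptions, not the project's data structures.

    #include <stddef.h>

    /* Lowest and highest addresses ever handed out by the allocator; kept up to
       date on every allocation so obvious non-pointers can be rejected cheaply. */
    static void *heap_low, *heap_high;

    /* Addresses registered as "atom" pointers (GC-internal or developer-declared). */
    static void **atom_ptrs;
    static size_t atom_count;

    block_header *find_block(void *candidate)
    {
        /* Test: does the candidate fall inside GC-managed memory at all? */
        if ((char *)candidate < (char *)heap_low || (char *)candidate >= (char *)heap_high)
            return NULL;

        /* Test: is the candidate one of the atom pointers the GC must skip? */
        for (size_t i = 0; i < atom_count; i++)
            if (candidate == atom_ptrs[i])
                return NULL;

        /* Test: walk the allocation list to find the exact block it points into. */
        for (block_header *b = all_blocks; b != NULL; b = b->next) {
            char *payload = (char *)(b + 1);
            if ((char *)candidate >= payload && (char *)candidate < payload + b->size)
                return b;
        }
        return NULL;   /* looked like a pointer but refers to nothing we allocated */
    }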
If a potential pointer passes these tests, it is treated as a pointer and marked as in use in the mark phase. Pointer identification has some deficiencies, such as false pointers. False pointers are integer values which happen to hold heap addresses. Assume that we have an integer i which holds a random 32-bit value, and also assume that this value is 0x003932e8. When GC takes place, we also have a pointer p which points to the heap address 0x003932e8, a block whose size is in MBs. Later, p is set to NIL and not used anymore. The application requests a new memory block, but having too little free memory the GC cannot allocate space and steps into the collection phase. In the collection phase, p should be reclaimed since it is not used anymore, but i can be recognized as a pointer even though it actually is not. This type of situation can be troublesome. Boehm reports that certain classes of data, such as large compressed bitmaps, introduce false references with an excessively high probability [Boehm, 1993].
After a lot of theoretical information, let's take a look at how we can implement that type of automated memory manager.
As mentioned, the GC we will design will be highly system and hardware dependent. We will use IA32 architecture and Windows Operating system.
The first thing GC should do is to find the root sets. The stack top can be found by retrieving the address of the last created variable. In a Windows environment, the address of the last created variable can be used to query the active memory block using the VirtualQuery function. This function tells us the base address and other properties of the related memory area². After calling VirtualQuery with the top of the stack, we can retrieve the full set of stack roots. This root set gives us the variables currently in use. Defining static root sets requires another call to VirtualQuery; this time we query the memory area using a created static local variable. Register roots must be captured from the CPU registers before collection starts.
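On Windows, the stack query described above can be written roughly as follows; exactly how the committed region maps to "top" and "bottom" depends on the stack growth direction, so treat this as an approximation rather than the project's code.

    #include <windows.h>

    /* Approximate the current thread's committed stack region by asking the OS
       which memory region a freshly created local variable belongs to.        */
    static void get_stack_bounds(void **low, void **high)
    {
        MEMORY_BASIC_INFORMATION mbi;
        int probe;                       /* lives near the current stack top */

        VirtualQuery(&probe, &mbi, sizeof(mbi));
        *low  = mbi.BaseAddress;                            /* low end of the region  */
        *high = (char *)mbi.BaseAddress + mbi.RegionSize;   /* high end of the region */
    }

The mark phase can then scan this range; on IA32, where the stack grows downward, the live frames sit between the probe's address and the high end of the region.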
When the developer calls the GC's malloc function, our code should add additional header information to this memory block. The block is linked into a doubly linked list. Using this list, we can store and query which memory areas were allocated by GC. In my implementation, allocation does not invoke the collection step (which it should do when the system gets low on memory); it only creates a new memory area using the low-level malloc and returns the address of this block. In future releases, my implementation of gc_malloc should work in a smarter and more elegant way.
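Continuing the illustrative structures from the earlier sketches (and simplifying the article's doubly linked list to a singly linked one), an allocator that prepends a header and links each block into the global list might look like this; the bookkeeping fields and bounds tracking are assumptions of the sketch, not the project's exact layout.

    #include <stdlib.h>

    void *gc_malloc(size_t size)
    {
        /* one low-level allocation holds our bookkeeping header plus the payload */
        block_header *b = malloc(sizeof(block_header) + size);
        if (b == NULL)
            return NULL;      /* a smarter version would trigger a collection and retry */

        b->size   = size;
        b->marked = 0;
        b->next   = all_blocks;       /* link into the global allocation list */
        all_blocks = b;

        /* track the overall range of GC-managed memory for the quick bounds test */
        char *payload_end = (char *)(b + 1) + size;
        if (heap_low == NULL || (char *)b < (char *)heap_low)
            heap_low = b;
        if (payload_end > (char *)heap_high)
            heap_high = payload_end;

        return b + 1;                 /* the caller sees only the payload */
    }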
In our implementation, the developer should be able to call the GC's collect function. Calling this function is not recommended, but for flexibility we can allow developers to call it.
The collect function should invoke the following steps: first it should determine the root sets, then it should invoke the mark and sweep phases respectively.
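Tying together the illustrative helpers from the sketches above, the entry point is little more than that ordering; this is a sketch of the idea, not the project's actual collect routine.

    /* Stop-the-world entry point: find the roots, mark, then sweep. */
    void gc_collect(void)
    {
        void *stack_low, *stack_high;

        /* 1. Determine the root sets (stack shown here; statics and registers
              would be handled the same way).                                  */
        get_stack_bounds(&stack_low, &stack_high);

        /* 2. Mark phase: everything reachable from the roots is kept. */
        mark_range((void **)stack_low, (void **)stack_high);

        /* 3. Sweep phase: reclaim whatever was never marked. */
        sweep();
    }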
In the mark phase, GC should trace through the whole root set. The code should invoke the pointer identification step for each possible pointer in the root set. Once a possible pointer passes the identification step, the code should mark it as in use. (In my current implementation, I have two lists: the first list holds used areas, and the second holds free areas. When I mark a pointer as used, I remove it from the used-area list and link it to the free-area list, which decreases CPU use in the sweep phase.)
In the sweep phase, GC should trace through the whole heap. In this step, only the marked areas are not reclaimed; the rest are reclaimed. (In my current implementation, I free the memory area when it is not used anymore. This can cause performance penalties; a more advanced approach could be used at this step.)
The last thing GC should do is reclaim the whole heap when the application quits. We can use the atexit() function of standard C; in the registered function, we trace through the whole heap to reclaim all used memory.
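A minimal way to hook that final pass in with standard C, again using the illustrative structures from the sketches above:

    #include <stdlib.h>

    /* Release every block still on the allocation list when the program exits. */
    static void gc_shutdown(void)
    {
        while (all_blocks) {
            block_header *b = all_blocks;
            all_blocks = b->next;
            free(b);
        }
    }

    /* Registered once, e.g. from the GC's initialization routine. */
    void gc_init(void)
    {
        atexit(gc_shutdown);
    }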
Source Code and Last Words
This project is an open source project. Please feel free to join this project. If you wish to work on this project, please let me know. Source repository of this project can be found here.
Please note that this project is actively in development. The current version is a fully working GC mechanism, but it still has a number of performance deficiencies.
- Garbage Collection - Algorithms for Automatic Dynamic Memory Management 1996, Richard Jones, Rafael Lins
- An introduction to garbage collection part II, Richard Gillam
- Mark-and-Sweep garbage collection
- Why conservative garbage collectors
- Automatic garbage collection
- Fast multiprocessor memory allocation and garbage collection, Hans-J.Boehm, HP Laboratories
- Composing high-performance memory allocators, Emery D. Berger, Benjamin G.Zorn, Kathryn S. McKinley
- Hoard: a scalable memory allocator for multithreaded applications, Emery D. Berger, Kathryn McKinley, Robert D. Blumofe, Paul R. Wilson
- Managing heap memory in win32
- Heap pleasures and pains
- The measured cost of conservative garbage collection, Bejamin Zorn
- Conservative garbage collection for general memory allocators, Gustavo Rodriguez-Rivera, Charles Fiterman
- Conservative garbage collection for C, Christian Höglinger
Yasin has more than 10 years of professional experience. He has several published articles includes graphics programming, robotics and application development in academic resources and national press. He is now working as a software developer for semi-governmental organization in Turkey.
Before 1980, most of the Chinese who came to Canada were from counties in the southeast of Canton, and so the Chinese spoken was mainly Cantonese. This is just one of the many dialects spoken in China. Today, Cantonese and Mandarin are the two Chinese dialects most spoken in Canada.
Although the Chinese brought their own religious beliefs with them, about ten percent of Chinese immigrants had changed to Christianity by 1923. By 1961, close to sixty percent of Chinese Canadians were Christians. Buddhism and Islam were also important religions for Chinese people in Canada.
The Chinese hold many celebrations throughout the year. The most important one is the Chinese New Year. It usually is celebrated in February and is a time for settling debts and cleaning house. Red packets containing small amounts of money are given away, especially to children, and firecrackers are set off.
The Lantern Festival takes place on the 15th day of the New Year. Lanterns are hung in homes, along with symbols of good fortune, happiness and health.
The Ching Ming Festival falls in April and is also known as Remembrance of Ancestors Day. Chinese visit the graves of their loved ones and clear away the weeds.
One of the best known festivals, the Dragon Boat Festival, is usually celebrated in June. Boats from 45 to 120 feet long, and decorated with dragon's heads and tails race each other in competition. Paddlers keep in stroke to the beat of loud drums.
The Mid-Autumn Festival is a time for people to gather and watch the moon. As darkness falls, lanterns are lit and everyone enjoys moon cakes (a mix of ground lotus, mashed beans, sesame seeds and dates) while watching the rise of the large autumn moon.
The Winter Solstice Festival takes place on the longest night of the year (December 22 or 23). As people look forward to longer days, they visit with family and enjoy a yummy banquet.
Some of our most interesting prints were once given away in a magazine. Today they comprise an important social history resource for the nineteenth century, illustrating spectacles and other visual devices in use by some of the most significant people of the day.
Vanity Fair was a Victorian magazine, founded in 1868 and aimed at the middle and upper sections of society. From the very beginning its illustrations included a distinctive satirical portraiture of a type which was new to English journalism. These portraits were designed to be collectible and many households or clubs enjoyed gathering their own gallery of the most distinguished politicians, clergymen and lawyers of the day. The artists became well-known by their pseudonyms such as 'Spy' and indeed their works are often known by the shorthand label 'Spy cartoons'. The caricatures were reproduced by the relatively new colour printing process of chromolithography.
Many of the Vanity Fair portraits include spectacles, monocles or pince-nez, often with a neck cord attached. In some cases the depiction is indistinct; the artist, usually working in watercolour, was interested only in providing an impression of the device. In other portraits however, the optical device is very clearly shown or else exaggerated such that it becomes a defining motif for the person concerned. An example would be the huge monocle filling the socket of Mr Maguire, the campaigner for Irish Home Rule.
The portraits are a useful source for studying the spectacle fashions of the second half of the nineteenth century and the Edwardian period. The manner in which a frame is worn, held or suspended is often clearly shown. This may be from a cord, or a coat button. The spectacles may be flourished in the hand or parked out of the way on the forehead. In one instance, the son of the novelist Charles Dickens is illustrated actually cleaning his spectacle lenses with a cloth.
Educated Victorian readers held contemporary scientists in a level of esteem that would be unfamiliar today. In consequence, certain distinguished names in the fields of optics and ophthalmology are to be found amongst the Vanity Fair portraits. This includes Mr Frank Crisp, the well-known collector of microscopes, Sir George Airy, reputedly the first to use a cylindrical lens for his own correction, Sir William Crookes the Chemist (a former Superintendent of the Radcliffe Observatory) and R Brudenell Carter who found fame through his operations on corneal staphyloma.
There are so many relevant Vanity Fair portraits (the BOA Museum has 68 for example) that a comprehensive listing would seem superfluous. Below is attached a list of some of the best that an enthusiast might consider including in his collection. It should be noted that many of these prints are available quite cheaply as modern reproductions.
Wearing Pince-Nez or Nose spectacles
Wearing a monocle
Other optical devices
The MusEYEum Guide to the pseudonyms of Vanity Fair cartoonists:
Ao = L'Estrange. Floruit 1903-7.
APE = Carlo Pellegrini (1839-1889). Born Capua. Came to England 1864. Adopted name 'APE' from 1869.
F.C.G. = Sir Francis Carruthers Gould (1844-1925).
F.T.D. = F.T. Dalton. Floruit 1890.
GUTH = Jean Baptiste Guth. Floruit 1883-1921.
Hay = Floruit 1888-1893.
Lib = Liberio Prosperi. Floruit 1886-1903.
PAL = Jean de Paleogu. Born 1855.
SPY = Sir Leslie Ward (1851-1922). Adopted name 'SPY' from 1873, working 36 years for Vanity Fair. Knighted 1918.
STUFF = Possibly H.C. Sepping Wright (i.e. his name becomes the 'wright stuff'). Floruit 1894-1900.
T = Theobald Chartran (1849-1907).
w.a.g. = A.G. Witherby. Floruit 1894-1901.
A bronchoscopy is a procedure in which a long, lighted scope is inserted into the lungs in order to examine the airways of the lungs and to assess lung function.
Chest fluoroscopy may be performed when the motion of the lungs, diaphragm, or other structures in the chest need to be evaluated.
Chest ultrasound is a procedure in which sound wave technology is used alone, or along with other types of diagnostic methods, to examine the organs and structures of the chest.
A chest X-ray is used to examine the chest and the lungs and other organs and structures located in the chest.
CT/CAT scans are more detailed than standard x-rays and are often used to assess the organs of the respiratory and cardiovascular systems,and esophagus, for injuries, abnormalities, or disease.
A lobectomy is a surgical procedure that removes one of the lobes of the lungs.
A lung biopsy is a procedure in which tissue samples are removed with a special needle to determine if cancer or other abnormal cells are present.
A lung scan is a procedure that uses nuclear radiology to assess the function and structure of the lungs. It is most often performed when problems with the lungs and respiratory tract are suspected.
In a lung transplant, one or both diseased lungs are removed and replaced with a healthy lung from another person.
A mediastinoscopy is a surgical procedure performed to examine the mediastinum - the space behind the sternum (breastbone) in the middle of the chest that separates the two lungs.
Oximetry is a procedure used to measure the oxygen level—or oxygen saturation—in the blood.
Peak flow measurement is a procedure that records the amount of air flowing out of your lungs. Peak flow can be measured with either a spirometer or a peak flow meter.
A pleural biopsy is a procedure in which a sample of the pleura (the membrane that surrounds the lungs) is removed with a special biopsy needle or during surgery to determine if disease, infection, or cancer is present.
Positron emission tomography (PET) is a specialized radiology procedure used to examine various body tissues to identify certain conditions. PET may also be used to follow the progress of the treatment of certain conditions.
A pulmonary angiogram is a procedure that uses a combination of contrast dye and X-rays to examine the blood vessels in the lungs and evaluate blood flow to the lungs.
Pulmonary function tests measure how well your lungs are functioning and are used to help diagnose certain lung disorders.
A sinus x-ray is a type of x-ray used to obtain images of the sinuses - the air-filled cavities lined with mucous membranes located within the bones of the skull.
A sleep study - or polysomnogram - consists of a number of medical tests performed at the same time during sleep. The tests measure specific sleep characteristics and help to diagnose sleep disorders.
Thoracentesis is a procedure in which a needle is inserted through the back of the chest wall to remove fluid or air from between the lungs and the interior chest wall.
Little Known Episode in John Bidwell’s History: Thwarting the 1851 Federal Indian Treaty
By Michele Shover, professor emerita, Political Science
In mid-August of that year, 200 members from Mountain and Foothill Maidu tribelets as well as 100 or so Valley Maidus gathered on the ranch at Bidwell’s request to consider a federal treaty. They moved back and forth from their tree-sheltered encampments, scattered to avoid old enemies but close enough to meet with friends. At large campfires, the Mechoopda Maidus roasted slabs of government beef, which rewarded each arriving group. The amount of beef would later become an issue.
While some Mountain Maidus were reluctant to enter rival Maidu turf without protection, their curiosity prevailed. They recalled many tribal “Big Times” there for trade with the Mechoopdas. Of course, when the best bow maker or other artisan was one of their own, they had held the get-togethers. But when a Valley Indian was the master builder, the advantage shifted and Mountain Indians became guests. According to Bidwell, such trading events took place at intervals of years, and when they concluded, the mountain and valley tribelets parted as enemies. In Bidwell’s time, the Mountain Maidus likely made surreptitious night visits to take a look at Bidwell’s ranch or for secret meetings with willing Mechoopdas. While Bidwell understood the land as his property, the tribelets at large still considered it, rightfully at least, as the Maidu territory of their Mechoopda tribelet.
The Indians, many of whom had arrived at Bidwell’s place about three weeks earlier than he expected them, set up camps in heavy groves across from his log house and the long shed, his stable and tack room. The visitors took a keen interest in the ranch’s new “saltbox”-style store with its “hotel” of several rooms above and the clapboard-sided bunkhouse where Bidwell housed “the Indian boys” who joined his vaqueros. Their elders had worked for John Potter, the area’s pioneer who had a substantial cattle operation which ran south from Big Chico Creek along both sides of the Oroville-Shasta Road. From Bidwell the young men learned how to work on field and row crops, and they had put in the rancher’s first orchards and some grape vines during the previous spring. Their elders were posted at the borders of grain fields with orders to keep out the cattle. The young men’s bunkhouse not only presented a considerable contrast to the older families’ bark huts, but it pointed to Bidwell’s separation of the male laborers from their families. In other respects, he respected their culture.
From their tree-shaded campsites, the Mountain Maidus could also assess the Mechoopdas’ situation. The Valley tribelet had agreed to work on Bidwell’s terms. They had nowhere they could go and they gained access to his resources, including added protection from their mountain rivals. This situation intrigued the more warlike mountain tribe. How could they drive off this rancher or tap his resources or find a way to restore their access to the valley? Now that they were at Bidwell’s headquarters, they also could communicate with the Mechoopdas to compare ideas.
While the Indians’ activities intrigued onlookers, the drama of Treaty Commissioner Oliver Wozencraft’s arrival conveyed command. Accompanying him were “gentlemanly and efficient” Army officers and 50 mounted infantry with a train of heavily laden packhorses enveloped in a rolling wall of dust. While Wozencraft’s mission was difficult, he had reason for confidence because he had negotiated signed treaties with other tribes. While he was at Bidwell’s rancho, the San Francisco Alta declared “the reservations must be made where the Indians at present reside … and that has been the course of the commissioners.”
As he slowed to dismount, Wozencraft noted the hundreds of Indians who took his measure in turn. By contrast to treaty meetings where a few tribesmen had shown up, Bidwell had organized an impressive turnout. As Wozencraft moved through the crowd with Bidwell “doing the honors,” he was impressed by the mix of valley headmen or “captains” and those of the mountain tribelets. The latter were difficult to assemble, most at risk and most dangerous to settlers. He explained later the Mountain and Foothill Maidus lived in small groups and were “generally at war with one another.” Hence, “they were very distrustful when it is attempted to bring them together.”
The treaty commissioner found Bidwell had anticipated his needs. Because the Native people would find it hard to understand the terms of a legal document, he had his carpenter build a lectern to draw a common focus. This podium’s image entered into the Mechoopdas’ oral history. In addition, Bidwell provided interpreters. One, Rafael, about 12, was the young boy he had “adopted” from a tribelet and trained as his personal assistant. The second interpreter, Napani, was about 9. She was a daughter of Mechoopda Headman Luc-a-yan, whom a settler woman described as “a man of superior ability, dignity and fine disposition.” She thought he resembled “a bronze statue.”
The ranch was organized for the meeting, the Indians were in place, and Wozencraft was ready to lay out the treaty terms. Deliberations would follow a rocky course—as would the relationship between Wozencraft and Bidwell. In 1858 Bidwell would testify that Commissioner Wozencraft had instructed him to offer the Indians all the beef they wanted regardless of cost. His statement contradicted Wozencraft’s explicit instructions, however. In an 1851 letter to Bidwell, Wozencraft referred to their common understanding that Bidwell should distribute beef to “keep the Indians pacified at the least cost to the government” and that his beef allocations “should be governed by necessity.” Bidwell also wanted Wozencraft to award him the lucrative contract to supply the beef, even though Wozencraft had already awarded it to someone else and wouldn’t go back on his word. Needless to say, Bidwell was not happy.
There is much more to the treaty story, but the end result was that Bidwell worked against the ratification of the treaty, and it failed to pass the California Legislature. California is one of the few states that did not establish a treaty with Indians. The consequences of this for the California Indians, especially the Mechoopda Maidu, is that many of them are still fighting to be recognized as a tribe, with the incumbent rights and privileges that accompany that recognition. In Butte County, the Mechoopda Maidu are in court to challenge Butte County’s effort to stop them from building a casino southeast of Chico.
Note: This is an excerpt from a chapter from a manuscript by Michele Shover, “The California Indian Wars on the Butte County Front, 1850-1865.” Shover, professor emerita, delivered the lecture at the Department of Political Science’s Faculty Forum on Oct. 21. The entire lecture can be found on the department’s website. Photo of Emma Cooper courtesy of Meriam Library, Special Collections.
Teaching Materials Social Studies Kits
A social history of women [kit] by Shiela C. Robertson and Kathleen O’Brien. (1986). TKSS 700
Unit of study on the social history of women, concentrating on women from Minnesota. Includes general lessons on women’s history, and lessons on six "non-famous" historic women: Mathilda Tolksdorf Shillock, Linda James Benitt, Theresa Ericksen, Mary Longley Riggs, Caroline Seabury, and Ethel Ray Nance. For elementary grades. 7 booklets, 6 posters : b&w and col. ; in portfolio, 26 x 39 x 3 cm.
My cup runneth over : value game [kit]. (1973). TKSS 701
Assists players in communicating personal values and interpreting the values of others.
Black studies [kit]. (1973). TKSS 702
Examines some of the significant aspects of Black American culture. 1. African heritage. 9 cards.--2. Slavery & Civil War. 16 cards.--3. Reconstruction & Jim Crowism. 13 cards.--4. Twentieth century. 32 cards.--5. Literature & dramatic arts. 14 cards.--6. Music. 12 cards.--7. Sports. 6 cards.--8. Famous black Americans. 33 cards.--9. Bibliography. 7 cards. 142 activity cards, 1 foreword card, 1 table of contents card, 1 title card, and 9 divider cards in container.
Social studies strategies. [flash card] by Annette Sue Hart. (1972). TKSS 703
Social studies activities emphasizing student involvement and including dramatic play, role playing, simulation games, map experiences, documents and ancient writing, and making filmstrip. For elementary education.
Developing understanding of self and others. [Kit] by Don Dinkmeyer. (1970). TKSS 704
Designed to help children better understand social emotional behavior and may be used by teachers, elementary school counselors, and others as a developmental guidance program.
Developing understanding of self and others : DUSO D-2. [Kit] (1973). TKSS 705
A program of activities designed for upper primary and grade 4 levels, ages 7-10, to help teach concepts of social and emotional behavior.
World map in equal area presentation: Peters projection. (1983). TKSS 707
"One square inch on the map = 158,000 square miles."
Escapades! [flash card]. (1996). TKSS 709
Box of 300 different games, challenges, energizers, mind stumpers, tongue twisters, alternate versions to traditional sports and games. For all ages; 2 or more players. 350 activity cards : b&w ; in box 18 x 10 x 12 cm. + 11 dividers.
Skillstreaming the elementary school child [activity card] : skill cards. (1997). TKSS 710
Skill cards list the steps needed to successfully perform each of the 60 prosocial skills outlined in "Skillstreaming the Elementary School Child. 480 cards : col. ; 8 x 13 cm.
A Visit to Colonial Williamsburg [Slide]. (1978). TKSS 713
Follows the tourist route through Colonial Williamsburg in Virginia. Shows various buildings and gardens in order as they appear along the way. Shows the interiors and exteriors of major exhibition buildings. " With lecture notes. Revised 3/21/78."
Here’s looking at you, demo kit. Grades 7-9 [kit]. (2001). TKS 714
Demonstration kit for preview purposes with a sampling of the materials contained in the complete "Here’s looking at you". A research-based, mixed-media prevention program focused on the gateway drugs of alcohol, nicotine, and marijuana. Designed to promote healthy norms, increase protective factors, and reduce risk factors that have been correlated with drug use. Developed around three components: giving students current and accurate information, teaching them social skills, and providing opportunities for them to bond with their school, their families, and their community. 3 posters, 1 audiocassette, 3 videocassettes, 1 book, 1 magazine, 1 teacher’s guide with CD-ROM, 2 inventory/ordering pages ; in container, 31 x 35 x 12 cm.
Project ALERT [videorecording] : a drug prevention program for middle grades. (2003). TKSS 715
A two year, fourteen lesson plan program for middle school students, which focuses on drugs that adolescents are most likely to use: alcohol, tobacco, marijuana, and inhalants. Parent involved homework assignments extend the learning process. Project ALERT -- A guided tour -- Let’s talk about marijuana -- Pot: the party crasher -- Lindsey’s choice -- Pot or not? -- Clearing the air -- Saying no to drugs -- Paul’s fix -- Resisting peer pressure. Videotape dates run from 1996-2002 ; teacher’s guidebook is copyrighted for 2003. Grades 6-8. 9 videocassettes : sd., col. ; 1/2 in. + 1 loose-leaf teacher’s guidebook + 12 posters. Multimedia VC 4713 Pts. 1-9 & suppl. & TKSS 715
My friends and me. [Kit] by Duane E. Davis. (1977). TKSS 716
A carefully balanced program of group activities and materials designed to help teachers and parents assist the healthy personal and social development of young children. For preschool education.
Mix-n-match cards [game]. (1995). TKSS 717
Geography term-n-definition -- Historical character-n-achievement -- Historical event-n-date -- State-n-abbreviation -- State-n-capital -- Sports. v. of cards : ill. ; 28 cm.
Raccoon circles [kit] : a handbook for facilitators. (2001). TKSS 718
Intended to teach coworkers to work as a team while performing various games. 1 piece of nylon webbing; 2 books.
Relocation of Japanese Americans [picture]. (197?). TKSS 719
Photographs of Japanese Americans who had to leave their homes and businesses during World War II. 15 study prints : b&w. ; 28 x 36 cm. + 1 explanatory sheet.
Just choices [videorecording] : exploring social justice today. (19??). TKSS 721
This video complements the Just Choices program as it teaches students about a modern-day social justice movement. The video documents the journeys of four high school students as they investigate the roles and treatment of animals in society for a class project on social justice. As the students learn about the movement, its victories and struggles, and the role of animals in society, they realize that they can make a difference through the choices that they make. 1 videocassette (21 min.) : col. ; 1/2 in. + teacher’s guide (6 p. : col. ill. ; 28 cm.) + 1 poster + 5 activity sheets.
Dr. PlayWell’s don’t pick on me game [game]. (2005). TKSS 722
Designed to help children learn to deal with bullying and teasing in a non-threatening atmosphere. For 2-4 players, ages 6-12. 1 game (1 board, 16 figures, 1 spinner, 72 game cards, 16 stands, 100 chips, 1 assessment card, 1 answer sheet (two-sided), 1 booklet of instructions) ; in box 6.5 x 29 x 24 cm.
Dr. PlayWell’s game of social success [game]. (2004). TKSS 723
Designed to help children get to know one another while playing games and learn social skills that can be used immediately with other players. For 2-4 players, ages 6-12. 1 game (1 board, 4 playing pawns, 1 egg timer, 1 spinner, 48 social success cards, 1 skills assessment card, 1 pad of questionnaires, 1 pad of success certificates, 1 booklet of instructions) ; in box 6.5 x 29 x 24 cm.
[Positive behavior poster collection] [picture]. (2003). TKSS 724
Posters designed to help encourage good behavior in children through positive reinforcement. Say it with signs -- Caught you being good -- Rules & consequences chart. 3 posters : col. ; 60 x 46 cm. + 3 directions sheets, 2 felt markers.
Coat of arms. by Catherine Daly-Weir ; illustrated by Jeff Crosby. (2000). TKSS 725
Traces the origins of coat of arms and the rules that govern their use and design. Includes a stencil to help the reader design their own coat of arms. 31 p. : col. ill. ; 24 cm. + 1 plastic stencil.
TimetoEnrich : grades 7-12 activity kit : instruction manual. (2002). TKSS 726
Presents activities that educate as well as entertain in the areas of social development, healthy living, citizenship, and career awareness. Cards present 180 activities and the leader’s guide contains a CD-ROM with reproducible worksheets, sample letters to parents, praise certificates, and a list of resources. Grades 7-12. 187 p. ; ill. : 28 cm. + 1 CD-ROM (4 3/4 in.) + 180 activity cards (in box 20 X 16 X 9 cm.)
The story grammar marker [kit]. (2002). TKSS 727
Kit provides methods for organizing the integration of thinking, speaking and writing skills in the classroom. Grades K-5. 1 teacher’s marker (27 in.) ; 1 guide (195 p.).
Harcourt horizons. (2003). TKSS 728
About my community (grade 2) -- People and communities (grade 3) -- States and regions (grade 4) -- United States history (grade 5) -- World regions (grade 6). Grades K-6/7.
Harcourt horizons. (2005). TKSS 729
About my community (grade 2) -- People and communities (grade 3) -- States and regions (grade 4) -- United States history: beginnings (grade 5) -- World history (grade 6).
Fetal alcohol spectrum disorders [kit] : education & prevention curriculum. (2006). TKSS 730
A school-based curriculum for grades K-12 that provides age-appropriate information about the consequences that alcohol can have on human development. The curriculum also teaches youth to be tolerant and accepting of all individuals regardless of their individual capabilities or disabilities. 5 books, 5 computer optical discs, 1 model of brain with stand and guide card.
This is my home [kit] : a Minnesota human rights education experience. (2005). TKSS 731
The goal of the kit is to integrate and simplify human rights education in K-12 schools statewide. The objectives are to engage all members of the school community in creating a learning environment in which everyone can grow to their full potential with their human rights and human dignity upheld; to motivate all members of the school community to take responsibility in promoting and protecting human rights, so that student achievement, development, and performance can thrive; to develop new tools for sharing and monitoring emerging human rights education practices. 2 booklets, 1 computer optical disc, 1 DVD, 1 pamphlet, 5 guide cards.
The land of Tutankhamen [chart] produced by Sunday Times Special Projects Unit. (1972). TKSS 732
Presents Egyptian civilization at the time of Tutankhamen, 1361-1352 B.C. Includes descriptions of Tutankhamen, Egyptian kingship, pyramid construction, hieroglyphics. 1 wall chart : col. ; 98 x 74 cm., folded to 25 x 19 cm.
Skillstreaming the adolescent [kit] by Arnold P. Goldstein ; Ellen McGinnis. TKSS 733
Program for teaching prosocial skills. Skill cards list the steps needed to successfully perform each of the 50 prosocial skills outlined in skillstreaming the adolescent. 337 p. : 23 cm. + 400 skillcards : (col. ; 8 x 13 cm.) + 1 program form (43 p. ; 28 cm.) + 1 student manual. (60 p. ; 28 cm.)
Dr. PlayWell’s game of self-control [game]. (2005). TKSS 734
Designed to help children learn the importance of self-control at school, in their homes and communities, and in personal habits. For 2-4 players, ages 6-12. 1 game (1 board, 2 dice, 4 pawns, 48 game cards, 100 chips, 1 assessment card, 4 colored circles, 1 booklet of instructions) ; in box 6.5 x 29 x 24 cm.
Turtle puppet [toy]. (2008). TKSS 735
Turtle plush puppet with Western Hemisphere on shell, green head, green feet with yellow soles, yellow underbelly and edge of shell, purple mouth, and embroidered eyes.
For more information, contact:
Martha Eberhart, Reference Librarian
416 Library Drive
Duluth, MN 55812
Revised and updated 11/12
Prairie dogs enjoy a snack in their habitat at the Detroit Zoo. (Joe Ballor/Daily Tribune)
Species: Prairie dogs (Cynomys ludovicianus)
Home: North America’s prairies and open grassland. The Detroit Zoo is home to over 70 black-tailed prairie dogs. The zoo’s prairie dog habitat features three clear observation areas where children under 4-feet tall can observe the animals from inside the habitat. “The kids love it,” said Betsie Meister, assistant curator of mammals at the Detroit Zoo. “They can’t believe they can get right up close and look eye-to-eye with the prairie dog. It’s really unique.”
Average life span: 3-4 years in the wild; up to 8 years in captivity
Height: Head and body, 12-15 inches; tail, 3-4 inches
Weight: 2-4 pounds
Going to town: Prairie dogs live in towns that typically cover less than a half-square mile. Some prairie dog towns are enormous, with one in Texas reportedly covering 25,000 square miles with an estimated 400 million individuals.
Birth: Reproduce once per year, with 3-4 pups
Now you know: There are five species of prairie dogs, rodents that are considered ground squirrels. They live in underground burrows, an extensive tunnel system with front and back doors and separate rooms for sleeping, defecating, nurseries, etc. There are also listening rooms near the surface, so they can detect predators. A prairie dog family consists of a male, several females and the youngsters. They are highly sociable and grooming is a regular pastime. They even “kiss” and hug each other. Family members share food and work together to chase off other prairie dogs. They have a complex communication system that helps warn of predators.
Protection status: Because they eat crops and their burrows are a tripping danger to horses, prairie dogs have been targeted by extermination campaigns. Several species dependent on the prairie dog, such as the black-footed ferret, are endangered or threatened.
Detroit Zoo information: (248) 541-5717, www.detroitzoo.org.
— Joe Ballor, Daily Tribune
Note: Animals of the Zoo is a weekly series. Next: Tigers.
Calendar Reform and Eclipses: The Place of Edzná
HIGH NOON AMONG THE MAYA
Although both the 260-day sacred almanac and the 365-day secular calendar predated the Maya by well over a millennium, and the "principle" of using key calendar dates to define urban locations and the Long Count itself had likewise been developed by the Olmecs several centuries before the Maya emerged as a civilized society, it was the latter who seized upon these intellectual tools and honed them to the highest level of sophistication of any of the native peoples of Mesoamerica. Ironically, they did so in one of the most difficult environments in the entire region; yet, this same environment may ultimately have been responsible for their failure to survive as an advanced and vigorous culture until the arrival of the Europeans.
Having largely been displaced from their initial homeland in the Gulf coastal plain of Mexico by the migration of the Zoques in the thirteenth century B.C., most of the Mayan-speaking peoples ended up moving toward the east. (It will be remembered that only one group of Maya moved northward as a result of the Zoque advance, becoming in the process the Huastecs.) The area into which the Maya moved can be subdivided into three rather distinct geographic regions. The first of these was the Petén region of northern Guatemala and southern Yucatán: an area of relatively heavy precipitation mantled in dense tropical rain forest, developed on a deeply weathered base of flat-lying limestone, pocked with solution valleys -- many of which contain lakes of some size -- and laced by numerous rivers. The second region was the Yucatán Peninsula itself: an area of drier climate developed on a low and almost featureless limestone plateau with no surface drainage and supporting a vegetation cover of short, deciduous, tropical scrub forest. And the third region comprised the highlands of Guatemala: an area of subtropical to temperate mixed forest developed on a base of folded limestone ridges in the north that give way to lofty volcanic peaks in the south.
The climatic station of Flores is located in the heart of the region of Petén, now a part of northern Guatemala. It is representative of Tikal and the core area of the so-called "Old Empire" of the Maya. Although its water need (i.e., temperature) curve is somewhat more variable than those of Soconusco or the Olmec region, its precipitation curve demonstrates both a monsoonal peak during the warmest months of the year and a very marked hurricane peak in early autumn. Its warmth and moisture indices insure its classification as a tropical humid climate, which supports a native vegetation of heavy rain forest.
Mérida, the capital and largest city of Yucatán, is located in the northwestern quadrant of the peninsula, away from the trade winds which blow in constantly from the Caribbean Sea. Its water budget diagram is typical of the region which constituted the so-called "New Empire" of the Maya, represented by such sites as Uxmal, Mayapán, and Chichén Itzá. Its relatively uniform water need (i.e., temperature) curve is unfortunately not matched by its precipitation curve, so much of the year the area experiences a moisture deficit. Monsoonal rains in the high-sun period seasonally provide enough moisture for a corn harvest, and a small surplus is usually recorded with the passage of an autumn hurricane. While the station's warmth index indicates that it is clearly tropical, the fact that Mérida only receives about 60 percent of the moisture it actually needs means that the native vegetation of the northern Yucatán is scrub forest. (Data from Secretaría de Recursos Hidráulicos.)
As was pointed out earlier, the Maya seem to have founded their earliest major ceremonial center in what has to have been the most favored geographic setting in all of the Yucatán -- on the edge of the largest aguada, or alluvial depression, in the entire peninsula. Located in what today is the interior of Campeche state, this vast soil-filled depression provided the agricultural support system for the incipient "city" we now know as Edzná. Dating to about 150 B.C., Edzná was a bustling urban node for more than 20,000 persons at the peak of its existence in the early centuries of the Christian era.
Figure 34. The climatic station of Quezaltenango is located in the western highlands of Guatemala, just over the Sierra Madre from Soconusco. Its low latitude and high elevation combine to produce a very even water need curve which in no month exceeds 75 mm (3 in.). Although the station enjoys a 12-month growing season (delimited by the straight line near the bottom of the graph), an occasional frost can pose a risk to crops. Its extreme precipitation curve, with a low-sun deficit and a high-sun surplus, exemplifies a typical monsoonal climate. However, with a warmth index of less than 4.0, it qualifies as a warm-temperate humid climate which supports a native vegetation of mixed broadleaf and coniferous forest.
Although the layout of Edzná mimicked that of Teotihuacán, its contemporary on the Mexican plateau, by being oriented to the setting sun on the "day the world began," the Maya priests were no doubt quick to realize that August 13 was a date that had no real meaning to the peasant farmers of the Yucatán. The priests were certainly aware of the practice of using the zenithal passage of the sun to herald the beginning of a new year, but at Edzná the sun passed overhead at noon no less than 18 days earlier than the date that the Olmecs had established as "the beginning of time." Obviously, while the Maya couldn't change the facts of history, they could amend the calendar to accord more closely with the realities of their own physical setting.
Not only did the priests of Edzná appreciate the need for such a calendar reform, but their reckoning also told them that an auspicious time for such a change was drawing nigh. The Long Count was nearing the completion of baktun 7, and baktun 8 was soon about to begin. What more appropriate a time could they have contemplated for "turning over a new leaf"?
Yet, as baktun 8 neared and the zenithal sun passed overhead, it was as if the Maya's own auguries obliged them to postpone the calendar reform. Forty-five days before the dawn of baktun 8, they recorded the passage of the zenithal sun, which fell in that year (A.D. 41) on the Maya date 3 Men 3 Uayeb. It was the latter aspect of this date which must have given them pause, because the "month" of Uayeb was the five-day unlucky period at the end of the Maya year. During these five inauspicious days, the people were wont to keep as low a profile as possible, engaging in only a minimum of activities, as if hoping thereby to escape the wrath of the gods. Certainly, not until the zenithal sun had cleared Uayeb could the calendar reform be instituted safely and prudently. But because their calendar did not take into account the extra quarter day in the length of the solar year, this meant that a full 20 years were required to advance the calendar by five days. Since the coincidence of the zenithal sun with the "month" of Uayeb had begun in the equivalent of the year A.D. 28 -- i.e., 13 years earlier -- there were still 7 years to go before its passage would occur on 0 Pop, the first day of the secular year. Thus, the very first time that the zenithal sun passed overhead in the Maya area on 0 Pop occurred in the year A.D. 48 -- an event which would have been recorded in the Long Count as 8.0.6.17.12 12 Eb 0 Pop.
In order to calibrate the zenithal sun passage, the Maya priests had erected at the base of the Cinco Pisos pyramid in Edzná an absolutely ingenious gnomon. (Remember that a gnomon can be any upright pillar or post; its function is to not cast a shadow on the days the sun is directly overhead.) The Edzná gnomon was a tapered shaft of stone about half a meter (20 in.) in height surmounted by a disk of stone which had the same diameter as the base of the shaft (see Figure 29). Thus, on the days that the sun stood directly overhead, the disk at the top of the shaft would envelop the entire shaft in its own shadow, whereas on any other day a stripe of sunlight would fall across the shaft. Hence, there was no question as to what day would begin the new year.
There were also a couple of other things which the Maya priests may have realized at the invocation of their reformed calendar of which they may have been unaware earlier. They were well aware, of course, that every time one of their "Vague Years" of 365 days was completed, the date in the 260-day sacred almanac had advanced by another 105 days. But, because the greatest common divisor between the two counts was 5 and there were 20 day-names in the sacred almanac, there would only be 4 day-names that would repeatedly coincide with the beginning of the 365-day year. Thus, because their calendar reform was initiated on a day called 12 Eb in the sacred almanac, in the following year the Maya new year fell on 13 Caban (105 days later in the sacred almanac). However, in the year after that -- because the sacred almanac used only 13 numerals -- the new year fell on 1 Ik (another advance of 105 days). And in the fourth year, the Maya new year's day was celebrated on 2 Manik (105 days farther along in the almanac). By the beginning of the fifth year, the cycle of 20 day-names started over, so that the following four years began on 3 Eb, 4 Caban, 5 Ik, and 6 Manik, respectively -- each year being identified with the next higher numeral but always with one of the same 4 day-names.
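The arithmetic behind this cycle is easy to check with a short sketch. The code below is purely illustrative and not drawn from any Maya source; it assumes the conventional ordering of the twenty day-names and simply advances the sacred-almanac position by 105 days at each new year (105 mod 13 = 1, so the numeral climbs by one; 105 mod 20 = 5, so the day-name jumps five places):

    #include <iostream>
    #include <string>

    int main() {
        // The twenty day-names of the sacred almanac in their conventional order.
        const std::string names[20] = {
            "Imix", "Ik", "Akbal", "Kan", "Chicchan", "Cimi", "Manik",
            "Lamat", "Muluc", "Oc", "Chuen", "Eb", "Ben", "Ix", "Men",
            "Cib", "Caban", "Eznab", "Cauac", "Ahau"};

        int number = 12;   // start at 12 Eb, the new year's day of A.D. 48
        int name = 11;     // index of Eb in the list above

        for (int year = 0; year < 8; ++year) {
            std::cout << number << ' ' << names[name] << '\n';
            number = (number - 1 + 105) % 13 + 1;   // numeral advances by one
            name = (name + 105) % 20;               // day-name advances by five
        }
    }

Run for eight successive years, it prints 12 Eb, 13 Caban, 1 Ik, 2 Manik, 3 Eb, 4 Caban, 5 Ik, and 6 Manik -- exactly the sequence described above.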
From this realization, the Maya developed the notion that these four days of the sacred almanac -- Eb, Caban, Ik, and Manik -- were "the bearers of the years"; that is, they "carried" the year along until it was passed on to the next "bearer," much as athletes run a relay race. This idea of "year bearers" gives us an insight into how the Maya envisioned time; each day was a "burden" to be carried by the deity who presided over it until his leg of the relay was complete, at which time he transferred it to the next deity, and so it went.
Reassuring as the notion of regular "year bearers" must have been to the Maya, they were still troubled by the fact that the beginning of their new year soon failed to coincide with the zenithal passage of the vertical sun. Naturally, this was because their "Vague Year" was 365 days long, rather than 365 days plus a fraction, so that in four years their secular calendar slipped a full day. Thus, when the zenithal sun passed over Edzná for the fifth time, it did so on 1 Pop rather than 0 Pop; by the ninth time, new year's day fell on 2 Pop; by the thirteenth time, its passage took place on 3 Pop, and so forth. Ironically, by measuring the passage of the zenithal sun over Edzná so precisely, the Maya priests came to realize as never before how imprecise their time-count really was. (In this same connection, it is interesting to note that Bishop Landa records that the Maya had, by the sixteenth century, shifted over to using the days Kan, Muluc, Ix, and Cauac as their year bearers.)
On the other hand, the decision by the priests of Edzná to make the beginning of their year accord with the zenithal passage of the sun over their city -- an event which occurs on July 26 in our own calendar -- left a lasting mark on Maya timekeeping. As it turned out, the parallel of 19º.5 N latitude on which Edzná is located neatly bisects the Yucatán Peninsula, which means that throughout the Maya heartland the zenithal passage of the sun was an event that had meaning and relevance to everyone. It appears, therefore, that Edzná, through the combined "accidents" of geography and history came to serve as the "Greenwich of the Maya," for nowhere else within the region they occupied could the July 26 zenithal passage be calibrated except there. In other words, no other ceremonial center within the Maya area is situated at precisely this latitude, so only at Edzná could the new year's date be pinpointed. In fact, writing about Edzná, Thompson observes that its priests seem to have exercised something akin to a "veto power" in calendrical matters, for he mentions a possible one-day correction to the calendar having been made there in the year 671, after which all the other Maya ceremonial centers appear to have fallen into line (Thompson, 1950).
One of the early Spanish prelates of the Yucatán, Bishop Landa, reported that the Maya marked the beginning of their new year with the zenithal passage of the sun on the equivalent of July 26 in our calendar. Such an event takes place along the parallel of 19º.5 N, a line which neatly bisects the peninsula but intersects only one major site in doing so -- Edzná. Because the first day of the Maya new year, 0 Pop, initially coincided with July 26 around the year A.D. 48, this would appear to mark the beginning of their "reformed" calendar.
Some researchers have assumed that each Maya ceremonial center had its own calendar, but the observation by Thompson cited above suggests otherwise. So, too, does the historical record, because Bishop Diego de Landa, the third Spanish prelate of the Yucatán, specifically records that the Maya began their new year with the passage of the zenithal sun, and that the day this occurred was the equivalent of July 26 in the Gregorian calendar. It is of further interest to note that in the interval between the time the Maya priests undertook the calendar reform in the year A.D. 48 and the time that Landa wrote, one entire Sothic cycle of 1460 years had passed, and astronomical events which were in phase in the year 48 were again in phase in the year 1508. (The word Sothic comes from the Egyptian name for the star Sirius. Because the ancient Egyptian calendar also had 365 days, rather than 365.25 days, the rising of Sirius, which marked the Egyptian new year, was likewise found to get out of phase with the movements of the sun. However, by careful measurement the Egyptians found that after 1460 full solar years had passed -- the equivalent of 1461 of their "imperfect" years -- the sun and the stars would once more be back in harmony with each other.) Thus, the zenithal sun passage once again coincided with the day 0 Pop in the Maya secular calendar in that year and during the three following years.
All attempts to understand Maya civilization have been made immensely more difficult because Bishop Landa, in his religious zeal, managed to consign all but a handful of the Maya's books and records to his bonfires. The rather straightforward description of the astronomical importance of Edzná which I have sketched out above was by no means as direct and uncomplicated as it might sound. But it did begin with the two clues which Landa bequeathed to us -- namely, that the Maya new year coincided with the passage of the zenithal sun and that this event occurred on the equivalent of July 26 in our present calendar.
Working from these clues, I reasoned that, if Landa's information was correct, I should be able to zero in on the geographic location where the Maya had devised their version of the calendar. A solar ephemeris revealed that on July 26 the noonday sun passes overhead at 19º.5 N latitude, so armed with this knowledge, I next turned to a detailed map of archaeological sites in the Yucatán (National Geographic's "Archaeological Map of Middle America," published in 1968). The latter showed only one ceremonial center of any significance at this latitude, its name being rendered as "Etzná"; to one side, there was a vignette describing it as a "Late Classic site [having a] temple atop a pyramid faced with four stories of rooms." A subsequent search of the literature turned up only a couple of references to Edzná, including the one attributed to Thompson which I cited above. Therefore, all I really knew about the place was that its construction had been dated to the period A.D. 600-900 and that it seemed to have had some "clout" when it came to resolving calendrical issues.
In the winter of 1976 as I was devising my computer program to run the "Maya" calendars back to the dates on which they had been initiated, I put in a "flag" to have the program alert me as to when the Maya day 0 Pop coincided with our day July 26. Employing Goodman's correlation as my starting point -- namely, that the Long Count date of 11.16.0.0.0 13 Ahau 8 Xul = November 4, 1539 -- I set the program in motion and only seconds later I was informed that the coincidence I was looking for had occurred most recently during the years 1508-1511. Thereafter, the computer churned away until an entire Sothic cycle had passed and we were back in the period A.D. 48-51.
In view of the antiquity of this date as opposed to the relative "lateness" of Edzná's supposed founding, in my 1978 article reporting the findings of my computer study I decided to make no mention of the "Maya calendar reform" which I had hypothesized had taken place there. (The deductions which had led me to Izapa had embroiled me in enough conflict with the archaeologists, I felt.) Ironically, as my article was going to press in the winter of 1978, I chanced to meet Prof. Matheny, who had recently excavated Edzná, in the field. When I cautiously mentioned to him how my deductions had suggested a calendar reform having taken place there "about 600 years before the place was founded," he laughed and replied, "Well, Cinco Pisos may have been a Late Classic construction but our radiocarbon data show us that Edzná itself was a thriving concern already about 150 B.C." Encouraged by both my computer findings and Prof. Matheny, I then went on to Edzná to make the further discoveries reported in these pages.
MAKING SENSE OF THE MOON
Pinning down the movement of the sun, irregular as it was with respect to the Long Count, was like child's play for the Maya compared to their struggle to understand the movements of the moon. Once again their failure to recognize the concept of fractions obliged them to undertake lengthy counts of cycles in the hope of eventually finding two periods which coincided in nice, whole integers. A case in point is the length of a lunation, the period of time between two successive new moons. The Maya obviously realized that it was not 29 days, but it also was not 30 days. Attempting to describe a time period which was actually 29 days, 12 hours, 44 minutes, and 2.8 seconds in length was for them a philosophical impossibility. Yet, after they had counted 149 "moons" in a row they realized that exactly 12 tuns and 4 uinals had elapsed, or a total of 4400 days; they were then confident that the cycle would begin over again, with the moon occupying the same position it had had relative to the sun when the cycle began. That they could do so with reasonable assurance is demonstrated by the fact that 4400 days divided by 149 lunations yields an average of 29.5302 days per lunation -- a value less than 0.0004 at variance with that used by modern astronomers!
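The figures are easy to confirm; the few lines below merely check the arithmetic, taking 29.530589 days as the usual modern figure for the mean synodic month:

    #include <iostream>

    int main() {
        // 149 lunations were found to span exactly 12 tuns and 4 uinals.
        const int days = 12 * 360 + 4 * 20;                   // 4400 days
        const double mayaLunation = static_cast<double>(days) / 149.0;
        const double modernLunation = 29.530589;              // mean synodic month, in days

        std::cout << mayaLunation << '\n';                    // about 29.5302
        std::cout << modernLunation - mayaLunation << '\n';   // roughly 0.00039 of a day
    }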
More difficult yet, however, was trying to find some regular pattern in the moon's seemingly erratic bouncing around the sky. Unlike the sun, which moves progressively farther north or south each day until it finally reaches its "stopping place" and then turns around, the moon rises and sets at such different times of the day or night in such widely differing places along the horizon that it might seem "illogical," "crazy," or "drunken" in its behavior. Indeed, if it were not for the fact that on occasion the moon suddenly became dark, or, even worse, that the sun itself was sometimes "devoured" by darkness without warning, perhaps there would have been no real reason to try to make sense out of the moon's motions. Initially, the priests may have shared the layman's terror of the disappearing sun or moon, but not too many eclipses would have occurred before they may have suspected some functional relationship between the orderly path of the sun and the seemingly disorderly path of the moon. Yet, not until the "crazy" ricocheting of the latter could be understood would they be able to predict the occurrence of eclipses, and only after they had mastered that skill would they be in a position to exercise the full power of that knowledge over their untutored subjects.
Stela C from Tres Zapotes now reposes in the National Museum of Anthropology and History in Mexico City. The missing baktun value of its Long Count inscription was found in 1969, confirming the carving's 32 B.C. date. The lower edge of the stela is ruptured through the middle of the glyphs that give the number and name of the day in the sacred calendar.
The preoccupation of the early Mesoamericans with this matter of eclipses can probably be detected in one of the oldest Long Count inscriptions yet discovered, namely, Stela C from Tres Zapotes found by Matthew Stirling in 1939. Although the controversy over whether its missing baktun value was a "7" or an "8" was conclusively settled with the discovery of the detached fragment in 1969, no real attempt has been made to ascertain what its date actually recorded. Its inscription reads "7.16.6.16.18" in the Long Count, which may be transcribed into the Julian date of September 5, -31, or 32 B.C., if we use the Goodman-Martínez-Thompson correlation value of 584,285. (Of course, if we were to employ Thompson's "revised" value of 584,283 from 1935, the date would equate to September 3 instead.)
In an earlier paper (Malmström, 1992a), I advanced the notion that, although Stela C was only discovered in 1939, the meaning of its inscription may well be found in a monumental work of European science first published in 1877. Known as Canon der Finsternisse, or "Table of Eclipses," the volume is the work of Theodor von Oppolzer, an Austrian count, and a team of his assistants, and constitutes a catalog of over 8000 solar and 5200 lunar eclipses ranging in date from 1208 B.C. to A.D. 2161. Although his 376 pages of calculations and 160 maps charting the central paths of the solar eclipses were all carried out by hand, their accuracy has only recently been reconfirmed by modern researchers using computers (Meeus and Mucke, 1979).
Listed as event no. 2803 in Oppolzer's list of solar eclipses is one whose path of centrality passed right over the Olmec ceremonial center of Tres Zapotes at dawn on the morning of August 31, 32 B.C. A more frightening celestial event can scarcely be imagined, for the sun rose out of the Gulf of Mexico totally black except for a ring of light around its outer edges. Oppolzer described it as an annular, or ringlike, eclipse, and subsequent calculations at the U.S. Naval Observatory have revealed that the disk of the sun was 93 percent obscured (personal communication). Surely, a "day without a sunrise" is not likely to have gone unrecorded by the Olmecs!
But, if this eclipse really is the same event as that described by Oppolzer, why does its date not coincide with that which he records? A number of possible explanations suggest themselves: (1) perhaps the Olmecs waited either three or five days to record it, depending on which correlation value of Thompson's one uses; (2) perhaps the stone carver who engraved the stela made an error of either three or five days in inscribing the date; or (3) perhaps the Goodman-Martínez-Thompson correlation value is incorrect by three to five days. Of course, there is also a fourth possibility -- namely, that it had nothing whatsoever to do with the eclipse recorded by Oppolzer, and that it was merely a strikingly close coincidence of both geography and history.
The first hypothesis has no merit whatsoever, for if the Olmecs consciously chose to put off recording the date, they would certainly have no means for measuring eclipse cycles with any precision. Any basis they might have had for maintaining accurate records would thus largely have been vitiated.
The second hypothesis is credible; after all, "to err is human." If this is the case, the inscription on Stela C is more likely the result of an illiterate stone carver's mistake, however, than of a priest's miscalculation. But, if so, which is the "easier" mistake to make: to carve an extra three dots in the final, or kin, position -- equivalent to a three-day error -- or to carve an extra bar in the kin position -- equivalent to a five-day error? Clearly, the mistaken addition of one symbol -- the bar -- would have been more likely than the mistaken addition of three symbols, so the discrepancy between Oppolzer and the Olmecs would appear to have been a matter of five days rather than three.
The third hypothesis -- that the GMT itself may be off by three to five days -- is hardly likely, but the merits of the second hypothesis are now reflected in the accuracy of the original Thompson correlation value of 584,285. If that value is used, then the five-day discrepancy between Oppolzer and the Olmecs is substantiated; if, on the other hand, we use Thompson's "revised" value of 584,283, then the lack of a correspondence between the two dates can no longer be explained as an error, and we would probably have to abandon any thought that the Olmecs were recording the eclipse after all. That, in turn, would mean discarding both the close historical "coincidence" between the two dates and the geographic "coincidence" of the passing of the eclipse's central path directly over Tres Zapotes. In effect, therefore, the inscription of Stela C, erroneous though it seems to be, appears to confirm the accuracy of the original Thompson correlation value between the Olmec calendar and our own.
In all fairness, however, it should be noted that there is one further complication in this interpretation of Stela C's date. The bottom edge of the inscription is broken just at the point where the number and name of the day in the sacred almanac is recorded. If the Long Count inscription itself is accurate, the day-number and -name should be 6 Eznab, and this is the way the fragmentary glyph at the bottom of the inscription is translated by most scholars. If the numeral is indeed rendered by a dot and a bar, then there is no question of its being a "6," a reading that is not in keeping with my hypothesis that a second bar had been mistakenly added to the inscription. Obviously, my argument is not destroyed by such a reading but it is substantially weakened, for to be consistent with my hypothesis, the day-number should have been a "1" instead.
Of course, if the inscription is accurate as it stands, it would then reopen the whole issue of what it was that the Olmecs were actually recording on that intriguing occasion. If the blackened sun at dawn was not noteworthy enough to take cognizance of, what other event could have been so much more spectacular or important to them that they took notice of it instead? Was it the occultation of Mars by the sun that coincided very closely with the eclipse? Or might it have been the occultation of the bright star Regulus (magnitude 1.35) by the planet Venus that occurred during the following couple of nights? Neither of these seem very likely, for surely both of these astronomical events literally pale into insignificance when compared to a total solar eclipse. Thus, we are left with the very real possibility that nothing of note astronomically had prompted the carving of the inscription of Stela C but that something even more earthshaking had taken place in and around Tres Zapotes shortly after the ominous eclipse had occurred. Thanks to its location in the foothills of a volcanic region like the Tuxtlas, the most obvious possibilities become either a monstrous earthquake or a devastating eruption.
The point of this digression has been simply to illustrate that, about the time that Edzná was coming into being, the Mesoamericans appear to have begun recording eclipse data on their stelae, possibly in the hope that through accurate timekeeping they would eventually solve the puzzle of when these fearsome events would recur. In the intellectual community of the Maya, therefore, this problem must have been near the top of the agenda as Edzná was being founded.
In the flat and featureless landscape of Yucatán, it had been a rather simple matter to lay out a new city oriented to the sunset on "the day the world began" because the "summer solstice + 52 days" formula had already been developed. Nonetheless, in a region where the local topography presented no opportunities for calibration against a natural landmark, the "gun-sight" alignment from the courtyard of Cinco Pisos through the notch in the artificial horizon to the top of the small pyramid constituted an ingenious solution to the problem of the city's orientation. In the same way, the erection of the tapered shaft surmounted by the stone disk had been an ingenious solution to calibrating the passage of the zenithal sun. The problem now at hand required some means of marking the moon's rising and setting position along the circumference of the monotonously uniform horizon that stretched out from Edzná in all directions.
View of the western horizon as seen from the top of Cinco Pisos at Edzná. The elongated mound across the plaza served as an artificial horizon for a priest standing in the doorway of the courtyard, allowing him to sight through the notch in the middle of the mound to the summit of the pyramid immediately behind it to calibrate the sunset on August 13 -- an azimuth of 285º.5. The pyramid which intersects the true horizon farther to the right, or northwest, is "La Vieja," whose azimuth marks the northernmost stillstand of the moon (i.e., 300º).
No doubt the first task was to provide the priests with a vantage point which allowed them a complete and unencumbered view of the entire 360º circuit of the horizon -- hence the need to erect what was perhaps the highest pyramid yet constructed in Mesoamerica. When completed, the aptly named Cinco Pisos ("Five Stories") towered more than 40 m (130 ft) above the rocky platform on which it was sited, becoming in the process a true landmark visible from 40-50 km (20-30 mi) away. Although Cinco Pisos is a Late Classic structure (i.e., built between A.D. 600 and 900), its situation at a focal point for the ceremonial center's canal system (which dates to the Late Preclassic period -- 300 B.C. to A.D. 300) makes it seem likely that an earlier structure previously occupied this critical position. In fact, Matheny suggests that "perhaps the remains of the Late Preclassic structure still exist within Cinco Pisos" (1983, 81). In any case, from the top of this key structure one could look out in any direction in a clear sight-line to the far horizon.
The real problem was to keep track of the places along the horizon where the moon either rose or set. That Cinco Pisos faced slightly to the northwest to begin with -- having been oriented along with the rest of Edzná to an azimuth of 285º.5 -- meant that the moon's setting position was the one the priests chose to calibrate. But with a horizon so distant and so featureless, one is tempted to conclude that most of the initial record keeping may have been done by marking lines in the appropriate directions on the top platform of Cinco Pisos itself. Only after the observations had narrowed in on a distinct enough point to erect some structure against the horizon at the required azimuth would that have been done.
Conceptually, the Maya already had the model of the sun's behavior on which to predicate their observations. Its northernmost stopping point marked the summer solstice, which in turn established the beginning date for the 52-day count which fixed "the day the world began" -- i.e., August 13. If they could locate a similar position for the moon -- its northernmost setting point -- perhaps that would allow them to begin the count which would eventually reveal the secrets of the eclipse cycle.
Deciding what the northernmost setting point of the moon really was must have been a tedious and frequently altered judgment in itself. Each time the moon reached what appeared to be an even more extreme setting position, the count for the eclipse cycle would have to be started again. One can well imagine that sometimes years of patient counting and record keeping might have gone on before the moon unexpectedly pushed its setting position even farther north along the horizon and literally wiped out the whole exhaustive tally in one fell swoop.
When this process actually began and how long it took to yield any kind of meaningful results has to be pure conjecture. It is probably safe to say that the idea for launching the count may already have been formulated shortly after Edzná was founded, and may well have been under way when the calendar reform fixing the new new year's day was adopted. What we do know for certain is that the first time that mention is made of the phase of the moon corresponding to a given Long Count date is in an inscription dating from A.D. 357 (Coe, 1980, 159). This does not mean, of course, that the problem had been solved by then, but only that from that time forward this seemingly important fact was now to be regularly recorded along with the date itself. Indeed, this may be evidence that the lunar cycle had not yet been worked out, and that the priests felt the additional bit of information regarding the phase of the moon might actually be useful in finally establishing the cycle -- once they could examine the records in retrospect.
This is not to say that the eclipse cycle could not have been worked out within the first half century that the quest was initiated. What the Maya were ultimately to learn was that the cycle required a full 6797 days, or 18.61 years, to complete, so if they had actually recognized the moon's northernmost stillstand on one occasion, they would have had to count that long to find the moon once more back at the same setting position. Of course, to confirm the accuracy of their count would require the completion of at least one more full cycle, so by this time more than 37 years would have elapsed. Thus, to postulate even the minimum time span necessary for such an achievement makes one appreciate the care and continuity which the Maya priests exercised in keeping a constant, running tally over the equivalent of more than an entire generation.
Although less than a half dozen of the original Maya manuscripts appear to have escaped the flames of the fanatic Spaniards, one of those that did survive is the so-called Dresden Codex, which has subsequently been recognized as an elaborate eclipse warning table. Thompson, among other scholars, assigns a twelfth-century origin to it, but concedes that its three base dates go back to the middle of the eighth century. (My suggestion that the Stela C inscription may have involved a five-day error owing to a stone carver's failure to understand the date he was carving finds an ironic parallel in the Dresden Codex. Thompson's study of the manuscript revealed that no fewer than 92 errors of transcription have been made in recording its dates [1972, 115-116], but no one, least of all Thompson, has ever suggested discounting the validity of the tables on that account.)
The three base dates of the Dresden Codex occur in the latter part of the year 755 and define two 15-day intervals. For this reason, Maud Makemson suggested, in 1943, that they most likely represent two solar eclipses bracketing a lunar eclipse (Makemson, 1943). While not disputing such an interpretation, Floyd Lounsbury, writing in 1978, argues that if this is correct, then these dates must have been arrived at by calculation rather than through observation, because no such celestial events took place in Yucatán in that year (Lounsbury, 1978, 816).
The three dates in question would equate to November 8, November 23, and December 8, 755, if the original correlation value of Thompson (namely, 584,285) is employed. (Naturally, if his "revised" version is used, it would put each of these dates two days earlier.) When the first of these dates was checked against a planetarium programmed to duplicate celestial events as seen from Edzná on that day, it was found that the sun and moon did in fact rise just eight minutes apart on that morning over the Yucatán, with an angular separation of less than 2º.5. In other words, there had been a "near miss" to a solar eclipse visible in the Maya area on that date.
For the second date, we once again employ the calculations of Oppolzer (1887). Fifteen days after the near solar eclipse over the Yucatán -- i.e., on November 23 -- he records a total lunar eclipse as having taken place, but ironically, his data demonstrate that it was visible only in the half of the world centered on the Indian subcontinent. On the third date, again using Oppolzer as our source, we find that a partial solar eclipse did indeed take place on December 8, 755, but its central path lay over the ocean between South Africa and Antarctica, where probably not a single human being witnessed it.
From this data we can draw two very important conclusions. First, by the year 755 the Maya had apparently worked out the motions of the moon with such precision that they knew when an eclipse should occur, but they still could not be sure if it actually would occur, in the sense of being visible to them. Second, the original Thompson correlation value of 584,285 is clearly the correct one, for an acknowledged eclipse warning table such as the Dresden Codex could certainly not have been based upon a foundation two days out of phase with the realities of the celestial sphere.
In this map view of Edzná, we see the features shown photographically in Figure 37. The astronomical importance of Edzná may be gauged from these facts: (1) only at its specific latitude could the beginning of the Maya new year be calibrated, here with the assistance of a remarkable gnomon; (2) the "day the world began" was commemorated in the "gun-sight" orientation between the doorway of Cinco Pisos and the small pyramid across the plaza; and (3) lunar cycles were measured by using the line of sight between Cinco Pisos and "La Vieja" on the northwestern horizon.
Although we cannot be certain when the Maya finally succeeded in working out the lunar eclipse cycle, it would seem that most of the basic "research" on the problem was carried out at Edzná. Located some 300 m (1000 ft) to the northwest of Cinco Pisos is the ruin of a lofty pyramid which Matheny has termed "La Vieja," or the "Old One." The "La Vieja" complex appears to date to the Late Classic period as well, but has not experienced the "urban renewal" from which Cinco Pisos subsequently benefited (Matheny, 1983, 109). Even in its dilapidated condition it is still high enough to intersect the horizon as seen from the top of Cinco Pisos; indeed, it is the only manmade construction which does so. This fact immediately prompted me to measure its azimuth as seen from Edzná's commanding edifice, and the value I obtained was 300º. This means that the summit of the pyramid lies exactly 5º beyond the sun's northernmost setting position at the summer solstice. Because the moon's orbit is just a hair over 5º off that of the sun, it seems very likely that the Northwest Pyramid, or "La Vieja," had been erected as a horizon marker to commemorate the moon's northernmost stillstand. Not only is "La Vieja" an eloquent testimonial to the patience and accuracy of Maya "science," but because of its specialized function, it is also probably worthy of being designated as the oldest lunar observatory in the New World. (Indeed, if Matheny's dating of "La Vieja" is accurate, then it is apparent that the Maya had succeeded in measuring the interval between lunar stillstand maxima at least by A.D. 300.)
When I was a kid, my favorite crayon was the one called "Burnt Sienna."
It produced a warm orange-brown color that reminded me of Georgia Red Clay.
I wasn't far off the mark. The rich pigments of the color sienna are derived from a form of limonite clay used in the production of oil-based paints.
Ferric oxides found in certain soils produce the complex color. Natural soil pigments, such as sienna, ochre and umber, are found in many cave paintings and are believed to be the first pigments used by humans.
Burnt sienna is a warm reddish-brown hue, created by heating raw sienna to remove excess water from the clay.
"Sienna" is short for Terra di Siena, or "earth of Siena." The color is named for the town of Siena, ITALY, where it occurs in abundance.
The southwest entrance to Siena through medieval city wall.
Palazzo Pubblico (Town Hall)
Piazza del Campo
A beautiful medieval door
In Siena, ITALY, the Cathedrale di Santa Maria, known as the Duomo, is a treasure trove of sacred art from the 13th and 14th centuries.
This marble mosaic by an unknown artist depicts the "she-wolf," symbol of Siena
Our guide explained that the relief on this bronze door to the Duomo (Siena, ITALY) depicts the "glorification of the virgin."
Cathedral de Santa Maria Assunta
(Holy Mary of the Assumption)
These colorful frescoes on the library ceiling were painted by Umbrian artist Bernardino di Betto, better known as "Pinturicchio." The sequential panels depict the life story of Siena cardinal Enea Silvio Piccolomini, who eventually became Pope Pius II.
Fresco Painted Archway
Bernardino di Betto (Pinturicchio) painted several frescoes in the Cathedral.
This one, in the nave above the door to the Piccolomini Library, substitutes portraits of Pinturicchio's patrons for the faces of the saints.
Comparing an Integer With a Floating-Point Number, Part 1: Strategy
Last week, I started discussing the problem of comparing two numbers, each of which might be integer or floating-point. I pointed out that integers are easy to compare with each other, but a program that compares two floating-point numbers must take NaN (Not a Number) into account.
That discussion omitted the case in which one number is an integer and the other is floating-point. As before, we must decide how to handle NaN; presumably, we shall make this decision in a way that is consistent with what we did for pure floating-point values.
Aside from dealing with NaN, the basic problem is easy to state: We have two numbers, one integer and one floating-point, and we want to compare them. For convenience, we'll refer to the integer as N and the floating-point number as X. Then there are three possibilities:
- N < X.
- X < N.
- Neither of the above.
It's easy to write the comparisons N < X and X < N directly as C++ expressions. However, the definition of these comparisons is that N gets converted to floating-point and the comparison is done in floating-point. This language-defined comparison works only when converting N to floating-point yields an accurate result. On every computer I have ever encountered, such conversions fail whenever the "fraction" part of the floating-point number — that is, the part that is neither the sign nor the exponent — does not have enough capacity to contain the integer. In that case, one or more of the integer's low-order bits will be rounded or discarded in order to make it fit.
To make this discussion concrete, consider the floating-point format usually used for the float type these days. The fraction in this format has 24 significant bits, which means that N can be converted to floating-point only when |N| < 2^24. For larger integers, the conversion will lose one or more bits. So, for example, 2^24 and 2^24+1 might convert to the same floating-point number, or perhaps 2^24+1 and 2^24+2 might do so, depending on how the machine handles rounding. Either of these possibilities implies that there are values of X such that N == X, N+1 == X, and (of course) N < N+1. Such behavior clearly violates the conditions for C++ comparison operators.
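A quick test shows the effect directly. This is only a sketch: it assumes the common IEEE 754 32-bit float format, and exactly which neighbor the out-of-range value rounds to can vary from one implementation to another.

    #include <iostream>

    int main() {
        int n = 1 << 24;                        // 16,777,216, i.e. 2^24
        float x = static_cast<float>(n);        // exact: 2^24 fits in a 24-bit fraction
        float y = static_cast<float>(n + 1);    // 2^24 + 1 does not fit; it gets rounded

        // On a typical IEEE 754 implementation this prints true,
        // because n + 1 rounds back down to 2^24.
        std::cout << std::boolalpha << (x == y) << '\n';
    }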
In general, there will be a number — let's call it B for big — such that integers with absolute value greater than B cannot always be represented exactly as floating-point numbers. This number will usually be 2^k, where k is the number of bits in a floating-point fraction. I claim that "greater" is correct rather than "greater than or equal" because even though the actual value 2^k doesn't quite fit in k bits, it can still be accurately represented by setting the exponent so that the low-order bit of the fraction represents 2 rather than 1. So, for example, a 24-bit fraction can represent 2^24 exactly but cannot represent 2^24+1, and therefore we will say that B is 2^24 on such an implementation.
With this observation, we can say that we are safe in converting a positive integer N to floating-point unless N > B. Moreover, on implementations in which floating-point numbers have more bits in their fraction than integers have (excluding the sign bit), N > B will always be false, because there is no way to generate an integer larger than B on such an implementation.
Returning to our original problem of comparing X and N, we see that the problems arise only when N > B. In that case we cannot convert N to floating-point successfully. What can we do? The key observation is that if X is large enough that it might possibly be larger than N, the low-order bit of X must represent a power of two greater than 1. In other words, if X > B, then X must be an integer. Of course, it might be such a large integer that it is not possible to represent it in integer format; but nevertheless, the mathematical value of X is an integer.
This final observation leads us to a strategy:
- If N < B, then we can safely convert N to floating-point for comparison with X; this conversion will be exact.
- Otherwise, if X is larger than the largest possible integer (of the type of N), then X must be larger than N.
- Otherwise, X > B, and therefore X can be represented exactly as an integer of the type of N. Therefore, we can convert X to integer and compare the two values as integers.
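A minimal sketch of this strategy in C++ might look like the following. It is an illustrative sketch only, not the author's eventual solution: it assumes N is a 64-bit unsigned integer and X is an IEEE 754 double (so B would be 2^53), it requires X to be non-negative, and it ignores NaN entirely.

```cpp
#include <cmath>
#include <cstdint>

// Sketch of the strategy above: "is N < X?" for a 64-bit unsigned integer N
// and an IEEE 754 double X. Assumes X >= 0 and X is not NaN. Here B is 2^53,
// the largest power of two up to which every integer converts to double exactly.
bool int_less_than_double(std::uint64_t n, double x)
{
    const std::uint64_t B = std::uint64_t{1} << 53;

    if (n <= B)                        // converting n to double is exact
        return static_cast<double>(n) < x;

    if (x >= std::ldexp(1.0, 64))      // x exceeds every value of n's type
        return true;

    // Here x < 2^64, and any x that could still exceed n satisfies x > B,
    // so x is mathematically an integer; convert it and compare as integers.
    return n < static_cast<std::uint64_t>(x);
}
```

Signed integers, the opposite-sign cases, and NaN would wrap additional checks around this same core.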
I noted at the beginning of this article that we still need to do something about NaN. In addition, we need to handle negative numbers: If N and X have opposite signs, we do not need to compare them further; and if they are both negative, we have to take that fact into account in our comparison. There is also the problem of determining the value of B. However, none of these problems is particularly difficult once we have the strategy figured out. Accordingly, I'll leave the rest of the problem as an exercise, and go over the whole solution next week.
| 3.620115 |
Violin For Dummies
With all of its different parts and its beautiful, delicate-looking body, the violin can feel a bit intimidating at first. This Cheat Sheet helps you get to know your instrument by introducing the most important parts of your violin, provides some easy steps to keep it in tip-top condition, and takes you through the process of taking the violin out of its case for the very first time.
Examining the Parts of Your Violin
More than 70 parts go into making a complete violin. This hourglass-shaped string instrument consists of several basic parts, including the 21 important elements explained here.
Back: One of the most important parts of the violin, for both aesthetic and acoustic properties. The back of the violin can be made of one or two pieces, and it’s arched for strength and tone power.
Bass bar: A slim strip of wood glued under the table of the violin on the side of, and running more or less parallel to, the lower strings. The bass bar reinforces the strength of the violin’s top and enriches the tone of the lower notes.
Body: The sounding box of the violin has evolved to produce the best sound and use the most convenient playing shape. The waist of the violin is actually a necessary indentation, so that the bow can move freely across the strings without bumping into the body.
Bridge: The only piece of unvarnished wood on the violin, it sits on top, about halfway down the body, placed exactly between the little crossbars of the violin’s f-holes. The strings run over the top of the bridge, which transfers their vibrations to the main body of the violin for amplification. The bridge is slightly rounded to match the shape of the fingerboard and to enable the player to bow on one string at a time.
Chinrest: The spot on which your jaw rests when you’re playing (come to think of it, it should be called a jaw rest). Chinrests are usually made of ebony that has been carved into a cupped shape to fit the left side of your jaw. Your chinrest is attached just to the left of the tailpiece by a special metal bracket. You can choose from a variety of models to fit your chin shape and neck length most comfortably.
End button: A small circular knob made of ebony, to which the tailpiece is attached by a loop.
F-holes: The openings on either side of the bridge. They’re called f-holes because they’re shaped like the italic letter f.
Fine tuners: Small metal screws fitted into the tailpiece and used for minor tuning adjustments.
Fingerboard: A slightly curved, smooth piece of ebony that’s glued on top of the neck of the violin, under most of the length of the strings.
Neck: The long piece of wood to which the fingerboard is glued. The neck connects the body of the violin to the pegbox and scroll.
Nut: A raised ridge at the pegbox end of the fingerboard that stops the strings from vibrating beyond that point.
Pegbox: The rectangular part of the scroll immediately adjoining the nut end of the fingerboard, before all the fancy carving begins, where each of the four pegs fits snugly sideways into its individual hole.
Pegs: Four pieces of wood, usually ebony, shaped for ease of turning and fitted into round holes in the pegbox. The player turns the peg to tighten or loosen each string when tuning the violin.
Purfling: An inlay running around the edge of the top and back of the violin’s body. The purfling is both decorative and functional because it protects the main body of the violin from cracks that can occur through accidental bumps. Of all the parts of the violin, purfling is the most fun to say.
Ribs: The sides of the violin. The luthier (a fancy word for violin maker) bends the wood, curving it to fit the outline of the top and back of each instrument.
Saddle: An ebony ridge over which the tailpiece loop passes. The saddle protects the body of the violin from becoming damaged and prevents rattling sounds, which would occur if the tailpiece was to contact the top of the violin when it’s vibrating with sound.
Scroll: Named after the rolled-up paper scrolls that were sent instead of envelopes in the old days, the scroll forms the very end of the pegbox. Carving a scroll requires artistic vision and great expertise, so creative violin makers see the scroll as an opportunity to display their best work. Occasionally, you meet a violin with a lion’s head scroll, or some other fanciful shape, the result of a maker’s whimsy.
Sound post: Enhances the volume and tone of the violin by transferring the sound vibrations to the back of the instrument after the bow makes a string sound near the bridge. If you peek into the f-hole near the E string (your thinnest string), you see a small round column of unvarnished wood, about the circumference of a pencil, which fits vertically from the top to the back of the violin.
Strings: The four metal-wrapped wires (often made with silver or aluminum ribbon spiraling smoothly around a gut or synthetic core material) that you bow on (or pluck) to produce the notes of the violin.
Tailpiece: A flared piece of wood to which the lower end of each string is attached. The tailpiece itself is attached to the end button by a gut or synthetic loop.
Top (or Table): The face of the violin. The top is very important to the character and quality of the violin’s sound as well as to its general appearance.
Protecting Your Violin from Damage
Violins are made of natural materials that are sensitive to temperature and humidity changes. Follow these tips to help your violin have a long and happy life:
Keep your violin at about room temperature.
Store the case away from high-traffic areas so that it doesn’t get knocked around.
Always close and latch the case when you finish playing, to protect your violin from falls.
Keep your violin away from radiators, air ducts, and direct sunlight, and avoid leaving it for long stays (or almost any stays!) in car trunks, especially in very hot or very cold weather.
Most important of all, keep your violin in a humidity of 40 to 60 percent whenever possible, and if you’re traveling to a different climate, take care to preserve the humidity in the violin’s case at a similar level.
How to Take Your Violin Out of Its Case
Taking the violin out of its case (and putting it away again safely) is a skill; mastering the art ensures that your instrument will have a long and happy life. To open the case, follow these steps:
Place the violin case on a stable, flat surface, such as a table or a sofa, with the lid facing the ceiling, and then turn the latch-and-handle side to face you.
You may have to unzip the case’s cover first, which usually has two zips that pull away to either side of the case’s handle. Pull the zippers all the way around to the back of the case so that the lid is able to open fully.
Open the latches first, then release the lock and lift the lid.
Because the cases are very snugly built, the top can be a bit sticky to lift, in which case (no pun intended!), you hold firmly on the handle while you lift the lid.
After you open the case, lift off the covering blanket (if you have one) and undo the strap or the ribbon that safely holds your violin around its neck in the case, before you lift out the violin.
Hold the violin around its neck to lift it from the case — don’t grab the body, because that’s not good for the varnish.
It’s a good idea to place the velvet cloth that covers the violin onto the table and next to the case; this way, you can place the instrument on the cloth.
Release the bow from the case by turning the toggle from a vertical to a horizontal position, taking the bow by the frog end (the name for the piece of ebony wood below the stick at the right end of the bow) with your right hand, and then sliding it gently to the right until the tip (the pointy end) of the bow is out of its loophole.
Never twist the stick while doing this — bows are strong, but they can’t always resist sideways twists.
| 3.503797 |
Bing online dictionary defines “symptom” as: “indication of illness felt by patient: an indication of a disease or other disorder, especially one experienced by the patient, e.g. pain, dizziness or itching, as opposed to one observed by the doctor.”
So, how long does it take from the time symptoms start to get help? With symptoms of a common cold, relief can be found at the nearest supermarket or pharmacy; over-the-counter remedies are as abundant as they are accessible. So on average, I would guess from the time of the first sneeze or cough until the first swig of medicine, we’re looking at about an hour.
With symptoms of cardiac or other emergency distress, 911 is typically called, and help arrives quickly. Perhaps an average of 10 to 20 minutes or so from the onset of symptoms until help arrives, depending on the location of the patient. Shooting pain from a tooth? Most of us will deal with this symptom for only as long as it takes for the dentist to get us in. How about a torn knee ligament on the ski hill? Most likely, we will get that treated as soon as possible.
What about symptoms of depression, anxiety, substance abuse and other behavioral-health conditions? Interestingly, we wait 10 years on average from the onset of symptoms until we get treatment. Why do we wait? One answer is stigma, as defined by the online dictionary as, “sign of social unacceptability: the shame or disgrace attached to something regarded as socially unacceptable.”
It’s true that our culture has not been (historically) accepting of behavioral-health conditions. That being said, I wonder how we can still consider these conditions to be socially unacceptable – especially when they are so prevalent nowadays. As evidence of the widespread nature of behavioral health conditions, the National Alliance on Mental Illness reports an estimated 26.2 percent of adults, or about 1 in 4, experience a diagnosable mental disorder every year. What are the consequences of waiting 10 years? Unfortunately, as with any issue, the longer we wait to get treatment, the greater likelihood we have of developing advanced symptoms.
However, as with other illnesses, the good news is that early intervention and treatment has proved effective in treating behavioral-health conditions. As I often write in this column, Mental Health First Aid is an evidence-based set of skills that have proved to be effective in lessening the time between the start of symptoms and getting help.
For more information about MHFA, you can visit National Council for Community Behavioral Healthcare at www.thenationalcouncil.org. Locally, Axis Health System provides MHFA training. To sign up for an upcoming training or to find out more information, call Liza Fischer, MHFA training coordinator, at 259-2615.
Additionally, NAMI provides support for families of people in recovery from behavioral-health conditions. For more information about NAMI, including contact information for local and statewide resources, visit www.namicolorado.org.
Recovery can start at any time – there’s no need to wait.
Mark White is director of quality for Axis Health System. Reach him at [email protected] or 335-2217.
| 3.107281 |
Durham Cathedral Library descends from the library of the monastery founded on Lindisfarne by St Aidan in 635. When the community left Lindisfarne in 875, the monks took with them relics of St Cuthbert, and a number of books. These probably included the famous manuscript now known as the Lindisfarne gospels. The community settled at Chester-le-Street in 883, where it continued to acquire books. In 995 it moved to Durham, where it built the 'White Church' completed in 1017.
The medieval priory library
In 1083 the Bishop of Durham, William of St Calais, founded a Benedictine priory to replace the community of secular clerks which had served the White Church and shrine of St Cuthbert, and in 1093 the foundation stone of the present cathedral was laid. William drew his first monks from the recently re-founded abbeys of Wearmouth and Jarrow. The new priory inherited books from all these predecessors, including 7th and 8th century manuscripts of Northumbrian origin, some of which are still in the Cathedral Library today. The priory gradually amassed a substantial library, including books written in its own scriptorium. Most books were housed in the Spendement off the west cloister, cupboards in the north cloister, and (from the early 15th century) a new library room above the east cloister.
The Cathedral Library after the dissolution of the monastery
After the dissolution of the monastery in 1539, the cathedral was re-founded under a dean and chapter who inherited what survived of the priory's collection of manuscripts and printed books. Further severe losses occurred in the later 16th century, including the Lindisfarne Gospels (now in the British Library; a facsimile is on display in the Cathedral Treasury). Nonetheless, over 300 manuscripts and some 60 printed books from the monastic collection still remain in the Cathedral Library today.
After almost a century of neglect, the library was reformed in the 1620's through the initiative of John Cosin and other canons. During the civil war and Interregnum it suffered less depredation and dispersal than many other cathedral libraries. After the Restoration, the former monastic refectory was restored and fitted out as a library by Dean Sudbury, and the old library room above the east cloister ceased to be used for library purposes. The book collection grew rapidly, both by purchase and gift, and by 1676 the stock had increased to almost 1000 volumes.
The 18th century brought further substantial expansion, with less emphasis on theology and more on antiquities, history, travel, topography, and natural history. In 1742 the library received the important bequest of the music books and scores of Philip Falle, and in 1757 the Dean and Chapter bought the manuscripts (ca. 150 volumes) collected by the local antiquary Dr. Christopher Hunter, including several which had belonged to the medieval priory library.
The Library began to outgrow the Refectory. In 1849-54 the former monastic Dormitory was restored and fitted out as a library. (Looking today at this impressive room with its medieval open timber roof, it is difficult to picture its pre-1854 state. The Dormitory became derelict after the dissolution of the monastery. After the Restoration, a house for one of the canons was constructed in the southern end, while the northern end was used as a covered yard where children played and washing dried). In 1858 the Refectory was 'restored' by Anthony Salvin, who was responsible for the present windows. Between 1823 and the end of the century a number of important additional collections of local antiquaries were acquired (Allan, Longstaffe, Raine, Randall, and Sharp MSS, and, a little later, the Surtees MSS).
20th - 21st centuries
The Library's steady growth has continued, but, as alternative library provision in the region has increased, the range of subjects covered by new acquisitions has been gradually narrowed. Acquisitions are now concentrated on the cathedral itself (history, architecture, liturgy, music), its contents and collections, and the region it serves, and on major reference works and critical studies to set those topics in their wider contexts.
Major new developments in this period:
- 1930s: Durham branch of the Archdeacon Sharp Library located in the Dormitory, to provide a modern theology library for local users.
- 1958: music manuscripts and printed scores from the Bamburgh Castle Library deposited by Lord Crewe's Charity.
- 1990s: Meissen Library (German Protestant theology) established.
| 3.474531 |
Grades: 3-5, 6-8, 9-12
Brief Description: Students determine revenue generated by sample pages from the Yellow Pages and discuss the value of advertising a business in this venue.
Keywords: telephone, phone, yellow, directory, advertise, business, revenue, calculate, chart, table, research, data, decision
Note: This math activity is designed to be done by hand, but you might integrate technology by having students use a computer spreadsheet program to organize and calculate data.
Open this activity with a brief discussion about the Yellow Pages section of the local telephone directory. How do the Yellow Pages differ from other parts of the book? What kinds of people publish phone numbers in the Yellow Pages? What might be the benefits of publishing phone numbers in the Yellow Pages? Lead the discussion to how businesses use the Yellow Pages to advertise their products or services with other businesses offering similar services.
Share with students that Yellow Pages ads can be very expensive but that many business owners pay the rates because it's profitable for them to be listed among businesses that offer competitive products. A small Yellow Pages ad can cost hundreds, or even thousands, of dollars a year! In large urban areas, the costs of advertising in the Yellow Pages are likely to be greater than they are in a small, rural area. Costs vary by geographic location and by the size of the ad. Other variations might include whether the ad includes art or colored inks.
In this activity, students will use the local telephone directory and some fictitious ad rates to calculate the amount of revenue that a page in the Yellow Pages generates. The fictitious ad rates might or might not be close to rates actually paid in your area. You are welcome to substitute true rates for your local area if you can get that information from your Yellow Pages publisher. (Publishers might not be willing to provide that information; ad rates are frequently unpublished and often negotiable.)
For this activity, assign each student three pages in the Yellow Pages section of your local directory, or you might copy a number of pages and give three to each student. Have each student create a chart (or use a teacher-created chart) and do the math for the three pages he or she is assigned.
If you have time, it's a good idea to provide students with pages that include ads in a variety of sizes. You might even plan to give more complex pages (with ads of a wide variety of sizes plus a fair number of individual listings) to students with strong math abilities; for students who are not as strong in math, you might provide simpler pages that will require less calculation.
When students have their phone book pages, share with them that the rates advertisers pay for Yellow Pages ads depend on the size of the ad. Ask: What different sizes of ads do you find on the pages you have in front of you? (Students will share that they see full-page ads, ads that take up a half of the page, or ads that take up a quarter, an eighth, or a sixteenth of the page.)
To accommodate students who are visual learners, you might cut 16 squares from a page of the yellow pages; each square should be the size of an ad that takes up 1/16th of a page in the phone directory. Have students assist as you paste those squares over a sample page; students will see clearly that a page can be made up of 16 ads that are 1/16 of a page in size. You might also cut squares to show students that a page could include eight 1/8th-page ads, four 1/4th-page ads, or two 1/2-page ads.
At this time, provide a couple of sample pages from the Yellow Pages section of the phone directory. (Photocopying those pages onto overhead transparencies would be best.) As a class, count and record the number of ads of different sizes and the number of individual (1- or 2-line) ads that appear on each page. Record the results in a format similar to the one below:
| Page no. | 2-line | 1/16th page | 1/16th page with art | 1/8th page | 1/8th page with art | 1/4th page | 1/4th page with art | 1/2 page | 1/2 page with art | Full page | Full page with art |
Hand out to students three pages from the Yellow Pages. Have students create a chart similar to the one above (or provide the chart for students to fill in). Have students use the chart to record the number of ads of different sizes and the number of individual listings on each page.
Introduce the Ad Rate Sheet. Provide the sheet for students in one of the following formats:
The simple Ad Rate Sheet below can be used if you do not have access to an actual Ad Rate Sheet from the publisher of your phone directory. (Notes: For younger students, you might simplify the activity by leaving off the "Extra Charge for Art" column. All rates on this chart are on the low side for most locales.)

| Size of Ad | Cost per Year | Extra Charge for Art |
| 1 or 2 lines | $120 | No art allowed |
| 1/16th page | $390 | $50 |
| 1/8th page | $760 | $75 |
| 1/4th page | $990 | $100 |
| 1/2 page | $1,420 | $125 |
| Full page | $2,170 | $175 |

Next, have students create a chart (or use a teacher-created chart) to figure how much revenue the Yellow Pages publisher took in for three pages of the phone directory. The chart below is a sample of the chart older students might use for each page. (Younger students might have fewer types of ads listed.)
| Number of ads | Type and cost of ad | Total cost of ads |
| | 2-line ad @ $120 | $ |
| | 1/16th-page ad @ $390 | $ |
| | 1/16th-page ad with art @ $440 | $ |
| | 1/8th-page ad @ $760 | $ |
| | 1/8th-page ad with art @ $835 | $ |
| | 1/4th-page ad @ $990 | $ |
| | 1/4th-page ad with art @ $1,090 | $ |
| | 1/2-page ad @ $1,420 | $ |
| | 1/2-page ad with art @ $1,545 | $ |
| | Full-page ad @ $2,170 | $ |
| | Full-page ad with art @ $2,345 | $ |
| | Total ad dollars | $ |
Ask students to record on the chart the number of each type of ad that appears. Then they should figure out the total ad dollars generated by each type of ad. Finally, they should total the figures for each type of ad to find the amount generated by each page of the Yellow Pages. Have students repeat those steps for all three pages.
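For classes that take the spreadsheet option mentioned in the note at the top of this activity, the same bookkeeping can be done in a short program. The sketch below uses the fictitious rates from the Ad Rate Sheet; the ad counts are invented purely for illustration, and students would substitute their own tallies.

```cpp
#include <cstdio>

// Revenue for one hypothetical Yellow Pages page. Rates come from the
// fictitious Ad Rate Sheet above; the ad counts are made up for illustration.
int main()
{
    struct Row { const char* type; int count; double rate; };
    const Row rows[] = {
        {"2-line listing",      14,  120.0},
        {"1/16-page",            3,  390.0},
        {"1/16-page with art",   1,  440.0},
        {"1/8-page",             2,  760.0},
        {"1/8-page with art",    1,  835.0},
        {"1/4-page",             1,  990.0},
        {"1/2-page with art",    1, 1545.0},
    };

    double total = 0.0;
    for (const Row& r : rows) {
        const double subtotal = r.count * r.rate;
        total += subtotal;
        std::printf("%-20s %2d x $%7.2f = $%8.2f\n", r.type, r.count, r.rate, subtotal);
    }
    std::printf("Total ad dollars for this page: $%.2f\n", total);
    return 0;
}
```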
Students answer the following question: If you owned a restaurant, would you publish a business ad in the Yellow Pages of your phone book? Why or why not?
Lesson Plan Source
LANGUAGE ARTS: English
MATHEMATICS: Number and Operations
NL-ENG.K-12.4 Communication Skills
NL-ENG.K-12.7 Evaluating Data
NL-ENG.K-12.12 Applying Language Skills
NM-NUM.3-5.1 Understand Numbers, Ways of Representing Numbers, Relationships Among Numbers, and Number Systems
NM-NUM.3-5.3 Compute Fluently and Make Reasonable Estimates
NM-NUM.6-8.1 Understand Numbers, Ways of Representing Numbers, Relationships Among Numbers, and Number Systems
NM-NUM.6-8.3 Compute Fluently and Make Reasonable Estimates
NM-NUM.9-12.1 Understand Numbers, Ways of Representing Numbers, Relationships Among Numbers, and Number Systems
NM-NUM.9-12.3 Compute Fluently and Make Reasonable Estimates
NM-CONN.PK-12.3 Recognize and Apply Mathematics in Contexts Outside of Mathematics
NM-REP.PK-12.1 Create and Use Representations to Organize, Record, and Communicate Mathematical Ideas
NSS-EC.K-4.2 Effective Decision Making
NSS-EC.K-4.9 Competition in the Marketplace
NSS-EC.5-8.2 Effective Decision Making
NSS-EC.5-8.9 Competition in the marketplace
NSS-EC.9-12.2 Effective Decision Making
NSS-EC.9-12.9 Competition in the Marketplace
| 3.466962 |
Torque is an important factor in much of the equipment on a factory floor. Measuring torque is often misunderstood, which can lead to over- or under-designing of measurement systems. This article addresses the many torque measurement techniques and their tradeoffs.
Torque can be divided into two major categories, either static or dynamic. The methods used to measure torque can be further divided into two more categories, either reaction or in-line. Understanding the type of torque to be measured, as well as the different types of torque sensors that are available, will have a profound impact on the accuracy of the resulting data, as well as the cost of the measurement.
In a discussion of static vs. dynamic torque, it is often easiest to start with an understanding of the difference between a static and a dynamic force. To put it simply, a dynamic force involves acceleration, whereas a static force does not.
The relationship between dynamic force and acceleration is described by Newton's second law: F = ma (force equals mass times acceleration). The force required to stop your car, with its substantial mass, would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in order to stop that car would be a static force because there is no acceleration of the brake pads involved.
Torque is just a rotational force: a force applied at a distance from an axis of rotation. From the previous discussion, it is considered static if it has no angular acceleration. The torque exerted by a clock spring would be a static torque, since there is no rotation and hence no angular acceleration. The torque transmitted through a car's drive axle as it cruises down the highway (at a constant speed) would be an example of a rotating static torque, because even though there is rotation, at a constant speed there is no acceleration.
The torque produced by the car's engine will be both static and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft.
If the torque is measured in the drive shaft it will be nearly static because the rotational inertia of the flywheel and transmission will dampen the dynamic torque produced by the engine. The torque required to crank up the windows in a car (remember those?) would be an example of a static torque, even though there is a rotational acceleration involved, because both the acceleration and rotational inertia of the crank are very small and the resulting dynamic torque (Torque = rotational inertia x rotational acceleration) will be negligible when compared to the frictional forces involved in the window movement.
This last example illustrates the fact that for most measurement applications, both static and dynamic torques will be involved to some degree. If dynamic torque is a major component of the overall torque or is the torque of interest, special considerations must be made when determining how best to measure it.
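To make the window-crank claim concrete, here is a back-of-the-envelope check. Every number in it is assumed purely for illustration; the point is only the order of magnitude of the dynamic term (rotational inertia x rotational acceleration) relative to friction.

```cpp
#include <cstdio>

// Back-of-the-envelope check of the window-crank example. All values are
// assumed for illustration: even a briskly accelerated crank handle produces
// a dynamic torque (rotational inertia x angular acceleration) that is tiny
// next to the frictional torque needed to move the window.
int main()
{
    const double mass    = 0.05;   // kg, crank handle (assumed)
    const double radius  = 0.05;   // m, crank length (assumed)
    const double inertia = mass * radius * radius / 3.0;   // thin rod about one end

    const double alpha    = 10.0;  // rad/s^2, brisk spin-up (assumed)
    const double dynamicT = inertia * alpha;                // N*m

    const double frictionT = 1.5;  // N*m to move the window glass (assumed)

    std::printf("Dynamic torque : %.5f N*m\n", dynamicT);
    std::printf("Friction torque: %.2f N*m\n", frictionT);
    std::printf("Dynamic / friction = %.4f\n", dynamicT / frictionT);
    return 0;
}
```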
| 3.887282 |
Printed wiring board (PWB) interconnect inductance can make or break the performance of a power supply. A large interconnect inductance can raise the high-frequency impedance of a gate drive circuit, impacting efficiency, or it can degrade the effectiveness of filter capacitors. In this Power Tip, we examine some simple formulas for interconnect inductance in free space and over a ground plane. We will find that the ground plane significantly reduces trace inductance and is critical for optimal performance in power supplies.
The simplest trace to consider is a rectangular conductor in free space. A formula for its inductance is shown in Figure 1. Note that the inductance is a strong function of length but has a logarithmic relationship to the width of the conductor. While the recommendation of making a conductor as wide as possible to reduce its inductance is substantiated by the expression, the benefit of wide conductors is diminished by the logarithm. This is clearly shown in the table, which contains some sample conductor widths and calculates the resulting inductance of a one-inch-long conductor. For instance, a 10 mil (0.25 mm), 2 oz (2.8 mil or 0.07 mm) conductor has an inductance of about 24 nH, if it is one-inch (25 mm) long. If its width is increased by 50 times, the inductance only drops by a factor of four due to the logarithm in the expression.
Figure 1: Inductance of a free space conductor has a logarithmic relationship to width.
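The logarithmic width dependence is easy to explore numerically. The sketch below uses a commonly quoted approximation for a flat rectangular conductor in free space; this is an assumption on my part, since the exact expression behind Figure 1 is not reproduced in the text, so the absolute values differ somewhat from the article's table, but the weak dependence on width comes through just the same.

```cpp
#include <cmath>
#include <cstdio>

// Self-inductance (nH) of a flat rectangular conductor in free space, using a
// commonly quoted approximation (not necessarily the article's Figure 1 formula):
//   L = 5.08 * l * ( ln(2l/(w+t)) + 0.5 + 0.2235*(w+t)/l ),  l, w, t in inches.
static double trace_nH(double l, double w, double t)
{
    return 5.08 * l * (std::log(2.0 * l / (w + t)) + 0.5 + 0.2235 * (w + t) / l);
}

int main()
{
    const double l = 1.0;      // 1-inch-long trace
    const double t = 0.0028;   // ~2-oz copper, about 2.8 mil thick

    const double widths[] = {0.010, 0.050, 0.100, 0.500};   // inches
    for (double w : widths)
        std::printf("w = %.3f in  ->  L = %5.1f nH\n", w, trace_nH(l, w, t));

    // Widening the trace 50x (10 mil -> 500 mil) buys only a small reduction,
    // because the width sits inside the logarithm.
    std::printf("L(10 mil) / L(500 mil) = %.1f\n",
                trace_nH(l, 0.010, t) / trace_nH(l, 0.500, t));
    return 0;
}
```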
Ground planes in circuit boards are used to ease routing, minimize the ground voltage variation, provide electrical and magnetic shielding, control impedances, and to help cool the components. Additionally, they provide the opportunity to reduce the inductance of circuit conductors in the PWB. Figure 2 presents a simple formula for the calculation of a conductor over a ground plane. The expression shows linear relationships between inductance, conductor height over the plane and its length. So to a first order, minimizing the separation of the conductor and ground or increasing the conductor width lets you drive the inductance toward zero.
The table presents some sample calculations that can be compared with Figure 1. For instance, we found that a 0.10-inch-wide conductor in free space had an inductance of 14 nH per inch. That same conductor placed over a ground plane of a two-sided board (0.06 inches thick) would have 3 nH per inch. That is a 5:1 reduction in interconnect inductance, which translates into faster gate drives, more effective filtering capacitors and reduced circuit losses from proximity effects. The table also shows that with a six-layer board where the dielectric thickness is reduced to 0.01 inch, the inductance is reduced by another factor of six. Clearly, when routing boards, you will want to put your ground layers as close to the board surface as possible to minimize inductance connecting the components on the surface.
Figure 2: Inductance of conductor over plane can be driven arbitrarily small.
To summarize, inductances of conductors on single-layer boards are high due to the lack of a ground-plane layer. This can be mitigated somewhat by routing conductor pairs together. However, ground planes offer the ability to make order of magnitude reductions in this stray inductance, which will result in lower impedance signal paths. This can provide improved efficiency with improved gate drives, better electromagnetic interference (EMI) performance due to improved filter performance, and lower crosstalk due to lower impedance nodes.
Please join us next month when we discuss a second look at snubbing the flyback converter.
For more information about this and other power solutions, visit: www.ti.com/power-ca
About the author
is a senior applications manager and distinguished member of technical staff at Texas Instruments. He has more than 30 years of experience in the power electronics business and has designed magnetics for power electronics ranging from sub-watt to sub-megawatt with operating frequencies into the megahertz range. Robert earned a BSEE from Texas A&M University and an MSEE from Southern Methodist University.
| 3.224386 |
Swollen Lymph Nodes
What are lymph nodes?
Lymph nodes are small, bean-shaped glands throughout the body. They are part of the lymphatic system, which carries fluid (lymph fluid), nutrients, and waste material between the body tissues and the bloodstream.
The lymphatic system is an important part of the immune system, the body's defense system against disease. The lymph nodes filter lymph fluid as it flows through them, trapping bacteria, viruses, and other foreign substances, which are then destroyed by special white blood cells called lymphocytes.
Lymph nodes may be found singly or in groups. And they may be as small as the head of a pin or as large as an olive. Groups of lymph nodes can be felt in the neck, groin, and underarms. Lymph nodes generally are not tender or painful. Most lymph nodes in the body cannot be felt.
What causes swollen lymph nodes?
Lymph nodes often swell in one location when a problem such as an injury, infection, or tumor develops in or near the lymph node. Which lymph nodes are swollen can help identify the problem.
Common sites for swollen lymph nodes include the neck, groin, and underarms.
What does it mean when lymph nodes swell in two or more areas of the body?
When lymph nodes swell in two or more areas of the body, it is called generalized lymphadenopathy. This may be caused by:
How are swollen lymph nodes treated?
Treatment for swollen glands focuses on treating the cause. For example, a bacterial infection may be treated with antibiotics, while a viral infection often goes away on its own. If cancer is suspected, a biopsy may be done to confirm the diagnosis.
Any swollen lymph nodes that don't go away or return to normal size over about a month should be checked by your doctor.
How long will lymph nodes remain swollen?
Lymph nodes may remain swollen or firm long after an initial infection is gone. This is especially true in children, whose glands may decrease in size while remaining firm and visible for many weeks.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
| 3.579628 |
Biography (All Music Guide)
One of a handful of musicians who can be said to have permanently changed jazz, Charlie Parker was arguably the greatest saxophonist of all time. He could play remarkably fast lines that, if slowed down to half speed, would reveal that every note made sense. "Bird," along with his contemporaries Dizzy Gillespie and Bud Powell, is considered a founder of bebop; in reality he was an intuitive player who simply was expressing himself. Rather than basing his improvisations closely on the melody as was done in swing, he was a master of chordal improvising, creating new melodies that were based on the structure of a song. In fact, Bird wrote several future standards (such as "Anthropology," "Ornithology," "Scrapple from the Apple," and "Ko Ko," along with such blues numbers as "Now's the Time" and "Parker's Mood") that "borrowed" and modernized the chord structures of older tunes. Parker's remarkable technique, fairly original sound, and ability to come up with harmonically advanced phrases that could be both logical and whimsical were highly influential. By 1950, it was impossible to play "modern jazz" with credibility without closely studying Charlie Parker.
Born in Kansas City, KS, Charlie Parker grew up in Kansas City, MO. He first played baritone horn before switching to alto. Parker was so enamored of the rich Kansas City music scene that he dropped out of school when he was 14, even though his musicianship at that point was questionable (with his ideas coming out faster than his fingers could play them). After a few humiliations at jam sessions, Bird worked hard woodshedding over one summer, building up his technique and mastery of the fundamentals. By 1937, when he first joined Jay McShann's Orchestra, he was already a long way toward becoming a major player.
Charlie Parker, who was early on influenced by Lester Young and the sound of Buster Smith, visited New York for the first time in 1939, working as a dishwasher at one point so he could hear Art Tatum play on a nightly basis. He made his recording debut with Jay McShann in 1940, creating remarkable solos with a small group from McShann's orchestra on "Oh, Lady Be Good" and "Honeysuckle Rose." When the McShann big band arrived in New York in 1941, Parker had short solos on a few of their studio blues records, and his broadcasts with the orchestra greatly impressed (and sometimes scared) other musicians who had never heard his ideas before. Parker, who had met and jammed with Dizzy Gillespie for the first time in 1940, had a short stint with Noble Sissle's band in 1942, played tenor with Earl Hines' sadly unrecorded bop band of 1943, and spent a few months in 1944 with Billy Eckstine's orchestra, leaving before that group made their first records. Gillespie was also in the Hines and Eckstine big bands, and the duo became a team starting in late 1944.
Although Charlie Parker recorded with Tiny Grimes' combo in 1944, it was his collaborations with Dizzy Gillespie in 1945 that startled the jazz world. To hear the two virtuosos play rapid unisons on such new songs as "Groovin' High," "Dizzy Atmosphere," "Shaw 'Nuff," "Salt Peanuts," and "Hot House," and then launch into fiery and unpredictable solos could be an upsetting experience for listeners much more familiar with Glenn Miller and Benny Goodman. Although the new music was evolutionary rather than revolutionary, the recording strike of 1943-1944 resulted in bebop arriving fully formed on records, seemingly out of nowhere.
Unfortunately, Charlie Parker was a heroin addict ever since he was a teenager, and some other musicians who idolized Bird foolishly took up drugs in the hope that it would elevate their playing to his level. When Gillespie and Parker (known as "Diz and Bird") traveled to Los Angeles and were met with a mixture of hostility and indifference (except by younger musicians who listened closely), they decided to return to New York. Impulsively, Parker cashed in his ticket, ended up staying in L.A., and, after some recordings and performances (including a classic version of "Oh, Lady Be Good" with Jazz at the Philharmonic), the lack of drugs (which he combated by drinking an excess of liquor) resulted in a mental breakdown and six months of confinement at the Camarillo State Hospital. Released in January 1947, Parker soon headed back to New York and engaged in some of the most rewarding playing of his career, leading a quintet that included Miles Davis, Duke Jordan, Tommy Potter, and Max Roach. Parker, who recorded simultaneously for the Savoy and Dial labels, was in peak form during the 1947-1951 period, visiting Europe in 1949 and 1950, and realizing a lifelong dream to record with strings starting in 1949 when he switched to Norman Granz's Verve label.
But Charlie Parker, due to his drug addiction and chance-taking personality, enjoyed playing with fire too much. In 1951, his cabaret license was revoked in New York (making it difficult for him to play in clubs) and he became increasingly unreliable. Although he could still play at his best when he was inspired (such as at the 1953 Massey Hall concert with Gillespie), Bird was heading downhill. In 1954, he twice attempted suicide before spending time in Bellevue. His health, shaken by a very full if brief life of excesses, gradually declined, and when he died in March 1955 at the age of 34, he could have passed for 64.
Charlie Parker, who was a legendary figure during his lifetime, has if anything grown in stature since his death. Virtually all of his studio recordings are available on CD along with a countless number of radio broadcasts and club appearances. Clint Eastwood put together a well-intentioned if simplified movie about aspects of his life (Bird). Parker's influence, after the rise of John Coltrane, has become more indirect than direct, but jazz would sound a great deal different if Charlie Parker had not existed. The phrase "Bird Lives" (which was scrawled as graffiti after his death) is still very true.
| 3.106223 |
Diamonds are a scientist's best friend: Research into building better small machines
Do diamonds really last forever? That's the hope of University of Wisconsin-Madison researchers who are trying to solve the problems associated with building extremely small machines and having them withstand the test of time, wear and tear.
The problem is that these machines are so small — microscopic or smaller — that their moving parts cannot be assisted by lubricants; instead, they have to function in a dry state, like a car with no oil.
A really, really small car with no oil.
"They no longer behave in the same way as they do at the macro-scale, where materials may be far stronger, have more power to catalyze chemical reactions, be more optically responsive, and more," says Rob Carpick, associate professor of engineering physics. "That is why it is very interesting to study the fundamental physics of nanoscale materials and also to try to utilize these unique properties for real applications."
An example of a real application includes the tiny sensors in cars that sense rapid deceleration and deploy airbags.
Carpick and his colleagues — including collaborators from Argonne National Laboratories — recently published research that is integral to better understanding the issues facing the engineering of both micro- and nanoelectromechanical systems, called MEMS and NEMS. The paper, published in the journal Advanced Materials, explored a material made by their Argonne collaborators, ultrananocrystalline diamond (UNCD) and, in particular, its structure and surface chemistry.
"When you consider fabricating devices with sliding and rotational motion, you need to consider the structure and surface chemistry of the materials at the location of contact, called a tribological interface," Carpick explains.
It's this issue of tribology — the study of friction, lubrication and wear of moving parts — that's particularly interesting when considering MEMS and NEMS. Just because small machines can be made doesn't mean that they can be made to work well and not wear down, the researchers say.
Due to the vast knowledge of its use in microscale fabrication, the material of choice has traditionally been silicon. But because silicon does not respond well to uses that require repetitive sliding or rolling, the machines made from it fail. Two solutions to the problem include improving silicon's wearability or finding a new material. Carpick is putting his money on a new material: diamond.
The published study reported on data taken exclusively at the Synchrotron Radiation Center, an electron storage ring located at UW-Madison that uses the light produced by electrons whizzing around a basketball court-sized ring to conduct spectroscopy — a method that uses electrons kicked out of the sample by this light like knocking bricks out of a wall — to analyze the bonding configuration of materials like diamond in detail.
"To our surprise, we found that the structure and surface chemistry of the diamond at the tribological interface is worse than the original diamond. We found that at the tribological interface, the surface is more graphitic in nature," explains Carpick. "This would be bad news for a MEMS device."
The solution offered by Carpick and his colleagues is to coat the surface of the diamond by removing the graphite and attaching hydrogen to the remaining pure diamond. This forms a strongly bonded "atomic cap" to the surface. Like putting varnish on a wooden table, the diamond surface becomes sealed and the diamond becomes water repellent, a critical feature for a machine that runs without lubrication.
"This means, if one wishes to build MEMS or NEMS devices from UNCD, then we have shown a way to minimize friction and adhesion, and this will help us to develop more reliable, robust (and) long lasting MEMS devices," Carpick notes.
The next step for Carpick includes a collaborative effort with UW-Madison physics Professor Gelsomina "Pupa" de Stasio, who has developed world-renowned spectroscopy methods at the Synchrotron Radiation Center. The team has been awarded a $480,000 grant from the United States Air Force Office of Scientific Research to tackle the issue of wear and tear on these thin diamond films and to answer the question of whether diamonds can truly last forever — or at least a really long time.
| 3.618572 |
Image: Walter Tape
Colourful light pillars often appear in winter when snow or ice crystals reflect light from a strong source like the sun or moon. Aided by extreme cold, light pillars appear when light bounces off the surface of flat ice crystals floating relatively close to the ground. The pillars look like feathers of light that extend vertically either above or below the light source, or both.
Diagrams showing the formation of light pillars from street lamps (left) and the reflection of light rays from plate ice crystal surfaces (right):
Images: Keith C. Heidorn
Light pillars also form from strong artificial light sources like street lamps, car headlights or the strong light sources of an ice-skating rink as in the picture above of Fairbanks, Alaska. Though they are local phenomena, light pillars can look distant like an aurora. The closer an observer is to the source of the light pillar, the larger it seems.
National Geographic has more pictures of recent light pillars in Idaho, California, Belgium, Latvia and Canada. You can also view another Environmental Graffiti article on more incredible light phenomena here.
| 3.861617 |
The Test of Phonological Awareness (TOPA) was developed to help identify children who are delayed in their development of phonological awareness. Research supports the theory that children with poor phonological awareness are at risk of later reading difficulties. Children who score in the bottom quartile of the TOPA are considered to be at risk for reading difficulties. There are two versions of the TOPA, one for kindergarten and one for early elementary school. Both are made up of two 10-question subtests with pictures used to represent words. The quality of the items appears to be adequate for screening for awareness of phonemes, and the test appears easy to administer. The TOPA yields raw scores, percentiles, and standard scores. Scores are sensitive to the time of the school year the test is administered for the kindergarten version. The normative sample was carefully selected. Norms for the kindergarten TOPA were made up from responses of 875 children from 10 states, while those for the early elementary version are from 3,654 children from 38 states. Coefficient alpha, based on 100 children at each age level, was 0.90 for kindergarten and 0.88 for early elementary, results that support the internal consistency of the TOPA. Overall, the TOPA has many strengths, including a large and representative normative sample. This does not mean that all school districts will relate to the instrument's norms. One suggestion for improvement would be to prepare local norms. Another issue of concern is the clarity of pronunciation and dialect of the administrator. The TOPA-Early Elementary correlated well with subtests from the Woodcock Reading Mastery Test. Correlations with other measures designed to measure phonological awareness were moderate for the kindergarten version and moderate to high for the early elementary version. It is concluded that the TOPA has potential for identifying children at risk for reading difficulties, and due to the ease of administration and the short time required, it can be used as a screening device. (Contains three references.) (SLD)
Paper presented at the Education Research Exchange (College Station, TX, February 7, 1998).
| 3.313212 |
Lava flows in Daedalia Planum
Mars Express imaged Daedalia Planum, a sparsely cratered, untextured plain on the Red Planet featuring solidified lava flows of varying ages.
Daedalia Planum lies to the south-east of Arsia Mons, one of the largest volcanoes on Mars. It is 350 km in diameter and rises 14 km. The plain is dominated by numerous lava flows of varying ages.
It lies at about 21°S / 243°E. The images have a ground resolution of about 17 m/pixel and cover about 150 x 75 km or 11 250 sq km, an area roughly the size of Jamaica.
The region features numerous solidified lava flows of different ages. These flows originate at the southern flank of Arsia Mons.
The map shows two lava flows: the younger flow (upper portion visible in nadir images) exhibits flow structures, pressure ridges as well as the central lava channel (upper right corner). An older flow visible in the lower portion has a smoother surface owing to gradual accumulation of sediments.
Two striking depressions lying almost at right angles to the lava flow are also visible in the upper portion of the imaged region. These structures are related to grabens that existed earlier (grabens are depressional features formed by faults in the crust).
It is likely that the lava flows invaded the grabens partially or filled them up completely. Where a graben was only partially filled, the original dimensions are still recognisable.
In the upper left of the nadir image is a portion of this feature that remains unmodified by the younger lava flows.
Existing impact craters have also been transformed by the lava flows. The two larger craters show different stages of modification (visible in the 3D image). The largest crater (bottom) was not affected by the lava flow but the ejecta blanket formed during the impact is partially covered in lava.
The second largest crater has been flooded almost entirely, although minor portions of the rim are still preserved. It is likely that the lava entered through a breach in the rim, filling it up. Fully covered impact craters, whose outlines are still visible, are also known as ghost craters. One such ghost crater is located in the immediate vicinity of the second largest impact crater.
The colour scenes have been derived from the three HRSC-colour channels and the nadir channel. The perspective views have been calculated from the digital terrain model derived from the stereo channels. The anaglyph image was calculated from both stereo channels. The black and white high resolution images were derived from the nadir channel which provides the highest detail of all channels.
| 3.183275 |
Bethesda, MD—As you decide what to get dad for Father's Day, you might want to consider what he gave you when you were conceived. If he smoked, your genes are likely damaged, and your odds for cancers and other diseases throughout your life could be increased. In a new research report appearing online in the FASEB Journal, scientists show for the first time in humans that men who smoke before conception can damage the genetic information of their offspring. These inherited changes in DNA could possibly render an offspring in the womb susceptible to later disease such as cancer. This provides evidence showing why men should be urged to stop smoking before trying to conceive in the same way women have been urged to quit. Interestingly, a fertile sperm cell takes about three months to fully develop; therefore men would ultimately need to quit smoking long before conception to avoid causing genetic problems.
"That smoking of fathers at the time around conception can lead to genetic changes in their children indicates that the deleterious effects of smoking can be transmitted through the father to the offspring," said Diana Anderson, Ph.D., a researcher involved in the work from the School of Life Sciences at the University of Bradford, in the United Kingdom. "These transmitted genetic changes may raise the risk of developing cancer in childhood, particularly leukemia and other genetic diseases. We hope that this knowledge will urge men to cease smoking before trying to conceive."
To make this discovery, Anderson and colleagues used DNA biomarkers to measure genetic changes in the paternal blood and semen around conception, as well as maternal and umbilical cord blood at delivery in families from two different European regions in central England and a Greek island. Information regarding the lifestyle, environmental and occupational exposures of these families was taken from validated questionnaires. The combined analysis of exposures and DNA biomarkers was used to evaluate the role of exposures before conception and during pregnancy in the causation of genetic changes in the offspring. These results have strong implications for the prevention of disease.
"This report shows that smoking is a germ cell mutagen. If dad uses cigarettes, his kids will be affected even before they are born," said Gerald Weissmann, M.D., Editor-in-Chief of the FASEB Journal. "As Father's Day approaches, family members may want to give dads and prospective dads the help they need to quit smoking for good."
Receive monthly highlights from the FASEB Journal by e-mail. Sign up at http://www.faseb.org/fjupdate.aspx. The FASEB Journal is published by the Federation of the American Societies for Experimental Biology (FASEB) and is the most cited biology journal worldwide according to the Institute for Scientific Information. In 2010, the journal was recognized by the Special Libraries Association as one of the top 100 most influential biomedical journals of the past century. FASEB is composed of 26 societies with more than 100,000 members, making it the largest coalition of biomedical research associations in the United States. Celebrating 100 Years of Advancing the Life Sciences in 2012, FASEB is rededicating its efforts to advance health and well-being by promoting progress and education in biological and biomedical sciences through service to our member societies and collaborative advocacy.
Details: Julian Laubenthal, Olga Zlobinskaya, Krzysztof Poterlowicz, Adolf Baumgartner, Michal R. Gdula, Eleni Fthenou, Maria Keramarou, Sarah J. Hepworth, Jos C. S. Kleinjans, Frederik-Jan van Schooten, Gunnar Brunborg, Roger W. Godschalk, Thomas E. Schmid, and Diana Anderson. Cigarette smoke-induced transgenerational alterations in genome stability in cord blood of human F1 offspring. FASEB J. doi: 10.1096/fj.11-201194 ; http://www.fasebj.org/content/early/2012/06/21/fj.11-201194.abstract
| 3.072795 |
History of the Baptist Churches
In Holland a group of English separatists, led by John Smyth, came under Mennonite influence and formed c.1608 in Amsterdam the first English Baptist congregation. Smyth baptized first himself, then the others. In 1611 certain members of this congregation returned to London and established a church there. This was the first of the churches afterward known as General Baptists, since they held the Arminian belief that the atonement of Jesus is not limited to the elect only but is general.
In 1633 the Particular Baptists were founded. They were a group whose Calvinistic doctrine taught that atonement is particular or individual. Immersion was not yet insisted upon in these churches, but in 1644 seven Particular Baptist churches issued a confession of faith requiring that form of baptism, and Baptist was thenceforth the name given to those who practiced it. In 1891, General and Particular Baptists united into a single body called the Baptist Union of Great Britain and Ireland.
In America it was Baptists of the Particular type that first gained influence among the Puritans and Calvinists, when Roger Williams and his companions in Rhode Island rejected infant baptism and established a church in 1639 based on the individual profession of faith. Baptists were later persecuted in New England for opposing infant baptism, and one group emigrated c.1684 from Maine to Charleston, S.C. A group of Separate Congregationalists from New England under Shubael Stearns and Daniel Marshall established (1755) the Separate Baptists in Sandy Creek, N.C.
In the Southeast the General Baptist views found acceptance, but the stricter Calvinistic ideas suited the pioneers who settled the southern mountains after the Revolution. Their opposition to mission work gave them the name Anti-Mission. They were also called Hard Shell or Primitive Baptists.
Early missionary activity extended the Baptist movement to the Continent and elsewhere. In the United States the American Baptist Missionary Union (under a longer title) was formed in 1814 to support workers in foreign lands. In 1832 the American Baptist Home Mission Society was organized. When the question of slavery became a dividing wall, the Southern Baptist Convention was established (1845).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
by Dave DeWitt
I am constantly asked to explain the exponential growth of interest in chile peppers and the boom in fiery foods products in the U.S. over the past two decades. How did a meat and potatoes America become enamored of hot sauces, salsas, spicy snack food, chili con carne, and hundreds and hundreds of other fiery foods? First, we must look at the historical trends for why cooks add spices to their foods in the first place.
There are a number of explanations for why we have added spices such as chile peppers to our foods over the tens or hundreds of thousands of years that we have been cooking. They are:
Spices make foods taste better.
The "eat-to-sweat hypothesis"–eating spicy foods makes us cool down during hot weather.
To disguise the taste of spoiled food.
Spices add nutritional value to food.
The antimicrobial hypothesis: spices kill harmful bacteria in food and aid in food preservation.
Which of these explanations are correct?
The First Cornell University Study
In 1998, Jennifer Billing and Paul W. Sherman published a study in The Quarterly Review of Biology that examined the reasons why humans might use spices. They studied 4,578 recipes from 93 cookbooks on traditional, meat-based cuisines of 36 countries; the temperature and precipitation levels of each country; the horticultural ranges of 43 spice plants; and the antibacterial properties of each spice.
The first thing they discovered was that many spices were incredibly antibacterial. For example, garlic, onion, allspice, and oregano were the best all-around microbe killers, killing almost everything. Next were thyme, cinnamon, tarragon, and cumin, which kill about 80 percent of all bacteria. Chile peppers were in the next group, with about a 75 percent kill rate. In the lower ranges of 25 percent were black pepper, ginger, and lime juice.
Next, they learned that "Countries with hotter climates used spices more frequently than countries with cooler climates. Indeed, in hot countries nearly every meat-based recipe calls for at least one spice, and most include many spices, especially the potent spices, whereas in cooler countries substantial fractions of dishes are prepared without spices, or with just a few." Thus the estimated fraction of food-spoilage bacteria inhibited by the spices in each recipe is greater in hot than in cold climates, which makes sense since bacteria grow faster and better in warmer areas.
The researchers addressed the various theories. First, obviously spices make food taste better, "But why do spices taste good? Traits that are beneficial are transmitted both culturally and genetically, and that includes taste receptors in our mouths and our taste for certain flavors. People who enjoyed food with antibacterial spices probably were healthier, especially in hot climates. They lived longer and left more offspring."
Billing and Sherman discounted the "eat-to-sweat" theory, noting that not all spices make people sweat and that there are easier ways to cool down, like moving into the shade. Regarding the theory that spices mask the odor of spoiled food, they noted that it "ignores the health dangers of ingesting spoiled food." And since spices, except for chiles and citrus, add minimal nutritional value to food, that theory goes nowhere.
That leaves just two theories: that spices make foods taste good, and that they kill harmful bacteria–and those two theories are inseparable. "I believe that recipes are a record of the history of the co-evolutionary race between us and our parasites. The microbes are competing with us for the same food," Sherman says. "Everything we do with food--drying, cooking, smoking, salting or adding spices--is an attempt to keep from being poisoned by our microscopic competitors. They're constantly mutating and evolving to stay ahead of us. One way we reduce food-borne illnesses is to add another spice to the recipe. Of course that makes the food taste different, and the people who learn to like the new taste are healthier for it. We believe the ultimate reason for using spices is to kill food-borne bacteria and fungi."
The Second Cornell University Study
In 2001, Paul W. Sherman and Geoffrey A. Hash continued the examination of spices in human diet with a study entitled "Why Vegetable Recipes Are Not Very Spicy," published in Evolution and Human Behavior. They compiled information from 2,129 vegetable-only recipes from 107 traditional cookbooks of 36 countries. Then they examined the history of the spice trade and discovered that for thousands of years spices have been traded all over the world, resulting in their availability in most world cuisines. The most traded spices are black pepper and chile pepper, in that order.
Many studies have proven the antibacterial properties of spices, the fact that spices are more prevalent in warm climates than cool climates, and that the concentrations of spices in recipes are sufficient to kill bacteria. It is true that cooking eliminates the antimicrobial properties of some spices, such as cumin, but has no effect on others, such as chiles.
The researchers compared the vegetable-only recipes to the previous study of meat recipes according to the spices found in the recipes and discovered that vegetable recipes used far fewer spices than meat recipes. They attributed this to the fact that bacteria "do not survive or proliferate as well in vegetables, so adding spices is not as necessary." Interestingly, the four most common spices in both the meat and vegetable recipes were onion, black pepper, garlic, and chile peppers. Onion appeared in more than 60 percent of both types of recipes; black pepper in about 60 percent of the meat recipes and 48 percent of the vegetable recipes; garlic in 35 percent of the meat recipes and 20 percent of the vegetable recipes; and chile peppers in 22 percent of the meat recipes and 18 percent of the vegetable recipes.
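The bookkeeping behind these comparisons, tallying how often each spice appears in meat-based versus vegetable-based recipes, is simple to reproduce. The sketch below is a hypothetical reconstruction of that tally, not the authors' actual code; the tiny recipe sample is invented and only the counting logic is meant to be illustrative.

```python
# Hypothetical reconstruction of the recipe-tallying step in the Cornell studies.
# The sample recipes are invented; only the counting logic is illustrated.
from collections import Counter

meat_recipes = [
    {"onion", "garlic", "black pepper", "chile pepper"},
    {"onion", "cumin", "black pepper"},
    {"garlic", "oregano"},
]
vegetable_recipes = [
    {"onion", "black pepper"},
    {"onion"},
    {"chile pepper", "garlic"},
]

def spice_frequency(recipes):
    """Percentage of recipes in which each spice appears."""
    counts = Counter(spice for recipe in recipes for spice in recipe)
    return {spice: 100 * n / len(recipes) for spice, n in counts.items()}

print(spice_frequency(meat_recipes))       # e.g. onion and garlic in about 67% of meat dishes
print(spice_frequency(vegetable_recipes))  # vegetable dishes call for fewer spices overall
```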
Within countries, vegetable-based recipes called for fewer spices than meat recipes in all 36 countries. The countries using the most spices in both vegetable and meat recipes were, in order from the most used: India, Vietnam, Kenya, Morocco, Mexico, Korea, and The Philippines. Following were France, Israel, and South Africa.
In their second study, the researchers concluded: "By every measure, vegetable-based recipes were significantly less spicy than meat-based recipes. Results thus strongly support the antimicrobial hypothesis."
Chile Peppers Take Over
But in the United States, with refrigerators and freezers in almost every home, the antimicrobial hypothesis simply does not explain the rush to embrace chiles and spicy foods over the past two decades. After answering questions verbally for literally dozens of media interviews, I finally decided to keep track of my reasons for why chile peppers have conquered the United States.
Ethnic diversity. Immigration patterns have changed and now feature new citizens with hot and spicy ingredients and cuisines imported from Asia, Latin America, and the Caribbean. They immigrate and open restaurants and markets, making ethnic chiles and spicy foods commonplace.
Americans are more knowledgeable now and realize that most chiles and spicy foods won't hurt them.
Increasing interest in the hobbies of cooking, gardening, and traveling.
The large number of ethnic and hot and spicy cookbooks published since 1978--literally hundreds of them.
The increasing availability of chiles and fiery foods products in mainstream locations such as supermarkets and fast-food outlets.
The publicity generated by the constant media attention. The recent National Fiery Foods Show in Albuquerque generated more than 5,000 column inches of coverage in U.S. newspapers. Do a Web search for terms like "chile peppers," "spicy," "hot sauce," or "habanero" and stand back–you will get thousands and thousands of solid citations.
Trade and consumer shows and festivals featuring chiles and fiery foods.
The enormous increase in manufacturing, with thousands of fiery foods products now on the market.
The "addiction syndrome." Chiles are not physically addicting–you don’t have withdrawal symptoms when you stop eating them. But they are psychologically addicting because chileheads miss the burn if they don’t have any spicy food for a while. I never hear anyone say, "Oh, I used to eat spicy food, but now I’m back to bland." Once someone starts liking hot and spicy foods, he or she is likely to be a chilehead for life.
The Rozin Theory
But perhaps the most fundamental reason for the boom in fiery foods is a major shift in the way many Americans are eating. My revelation began in Philadelphia while dining with Liz Rozin, who hosted an incredibly diverse dinner at Serrano Restaurant during the Book and the Cook Festival. She is a food historian with fascinating insights into the origins of spicy cuisines. "When we look at the broad spectrum of human flavoring practices, we see one curious correlation," she writes in The Primal Cheeseburger. "The heavier the dependence on plant or vegetable foods, the more pronounced the seasonings; the heavier the consumption of animal foods, the less pronounced the seasonings. Those cuisines that clearly demonstrate a highly spiced or complex seasoning profile--Southeast Asia, India, Africa, Mexico--all have long relied on high-plant, low meat diets." Her theory, interestingly enough, directly contradicts the Cornell University studies above!
Of course, the U.S. was just the opposite: a culture that in its early days relied on beef, pork, and chicken as well as dairy foods. Vegetable foods in the U.S. were eaten primarily in the same regions where the cuisine was also the spiciest: the South and the Southwest.
When Rozin turns her attention to chile peppers in high-vegetable, low-meat cultures, she notes: "The pattern of acceptance, the level of enthusiasm with which the pungent chiles were enfolded into certain existing traditions, seems to indicate that the unique stimulation they provide is an important compensation for foods that are somehow less satisfying, less perfect when eaten unseasoned. And on the other hand, the chiles were largely ignored or rejected by cuisines and areas of the world where meat and other animal foods were a significant focus of the diet."
At least three other major food trends have paralleled the move to spicy foods over the past two decades: natural foods, vegetarian foods, and low-fat foods. Meat consumption has declined as well, setting the scene for the modern return of Liz Rozin's theory of why ancient, "less satisfying" foods were highly spiced: we need the heat and flavor of chiles and other spices to make up for the lack of the flavors of meat and fat in more spartan cuisines. The new corollary of eating in the 21st century might be: "The healthier you eat, the more you need to spice it up with chile-laden condiments."
To sum up, Paul Sherman thinks that we added chiles to meat-based recipes to prevent the growth of bacteria, while Liz Rozin believes we used chiles to spice up bland food. Perhaps they are both correct. But we do know one thing: chile peppers have conquered America, and they are not going away.
Top 30 Spices with Antimicrobial Properties
10. Lemon grass
11. Bay leaf
12. Chile peppers
26. Pepper (white/black)
28. Anise seed
29. Celery seed
(Listed from greatest to least inhibition of food-spoilage bacteria)
Source: "Antimicrobial Functions of Spices: Why Some Like It Hot,"
by Jennifer Billing and Paul W. Sherman, The Quarterly Review of
Biology, Vol. 73, No.1, March 1998.
A meteor streaks through the sky over Joshua Tree National Park in Southern California's Mojave Desert. The Leonid meteor storm was captured in this 30-minute time exposure from Nov. 17, 1998. / Reed Saxon, AP
Will this year's Leonid meteor shower roar like the lion constellation it's named for or meow like a kitty cat as it sometimes does? Stargazers who stay up late Saturday night or get up early Sunday morning can judge for themselves.
The annual meteor storm is known for sometimes producing as many as a thousand fireballs per minute, as it did in 1966, but astronomers say this year sky-watchers are likely to see maybe 20 per hour.
The peak will begin building late Friday night into Saturday morning and continue through early Sunday morning, says Ben Burress, an astronomer at Chabot Space and Science Center in Oakland.
The Leonid meteors aren't really associated with the constellation Leo; they just appear to come from the same place in the sky. The Leonids are actually tiny pieces of the comet Tempel-Tuttle, which orbits the sun in a large ellipse.
"A comet is a often called a 'dirty snowball,' as it's made up of pieces of rock held together by ice. As a comet orbits the sun, it heats up and some of the ice is vaporized, releasing bits of rock along the orbit," says Rebecca Johnson of StarDate magazine.
Tempel-Tuttle orbits the sun in an ellipse. Each year as the Earth moves around the sun, it encounters the trailing tail of debris the comet leaves in its wake. Once every 33 years, Tempel-Tuttle comes close to the Earth as it whizzes by in its orbit. In those years, the debris trail Earth travels through is especially thick, and the resulting meteor showers can be spectacular.
Burress says his grandfather saw the 1933 shower.
"There were thousands of meteors per hour," he says.
This year's Leonids aren't expected to be that spectacular because the comet last passed close to us in 1999.
The expected 10 to 20 meteors per hour isn't bad, Burress says: "That gives you a good chance of seeing one every five or so minutes."
By comparison, the big Perseid meteor shower in August typically rains about 50 meteors an hour down on the Earth.
This year should be good viewing, because the moon will set around 10:30 p.m. Saturday in each U.S. time zone.
"So Sunday morning anywhere from midnight to 3 or 4 a.m. is the prime window," Burress says.
With the exception of the Southeast coast, most of the USA east of the Rockies should have ideal weather for watching the meteor shower Friday night and Saturday morning, according to AccuWeather meteorologist Kristina Pydynowski. However, she says, clouds could prevent anyone from Florida to the eastern Carolinas from seeing it.
For Saturday night's viewing, though, the clouds will clear for most of the Southeast.
Much of the far West will be poor for Leonid-viewing through the weekend, Pydynowski says, because of a pair of Pacific storm systems that will bring clouds streaming across the West Coast and toward the Rockies Friday and Saturday nights.
Clear skies should make for great viewing in the Northeast, Midwest, lower Mississippi Valley and most of the Plains states.
Contributing: Doyle Rice
Copyright 2013 USATODAY.com
Read the original story: Leonid meteor will put on a show this weekend
Published April 26, 2012
Anyone who’s driven on a busy highway can attest that vehicles moving erratically and braking needlessly break up the smooth flow of traffic and lead to congestion.
Honda has also observed this phenomenon and claims to have pioneered a technology based on this principle that, rather than just helping to avoid traffic jams, aims to prevent them from occurring entirely.
The Japanese automaker claims that its new technology can detect the potential for a traffic jam and determine whether the driving pattern of a vehicle is likely to create one.
Working with researchers from the University of Tokyo, Honda conducted experimental testing of a system utilizing the technology on a primary vehicle and with several secondary vehicles trailing behind it.
The test results demonstrated that the system helped increase the average speed of the primary vehicle by approximately 23 percent and improved fuel efficiency of the secondary trailing vehicles by approximately 8 percent.
Rather than providing information to help the driver avoid existing congestion based on current traffic information, the system monitors the acceleration and deceleration patterns of the vehicle to determine whether the driver's driving pattern is likely to create traffic congestion. Based on this determination, the system provides the driver with appropriate information, including a color-coded display through the on-board terminal, to encourage smooth driving which will help alleviate the intensity of acceleration and deceleration by trailing vehicles, thereby helping to prevent or minimize the occurrence of vehicle congestion.
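Honda has not published the algorithm behind the system, but the basic idea it describes, scoring how smooth a driving pattern is from its acceleration and deceleration swings and feeding a color code back to the driver, can be sketched briefly. The metric, thresholds, and colors below are assumptions made for illustration, not Honda's implementation.

```python
# Hypothetical sketch of congestion-risk feedback from a vehicle's speed trace.
# The smoothness metric and thresholds are illustrative assumptions, not Honda's.
from statistics import pstdev

def acceleration_series(speeds_mps, dt_s=1.0):
    """Finite-difference accelerations (m/s^2) from speed samples taken every dt_s seconds."""
    return [(b - a) / dt_s for a, b in zip(speeds_mps, speeds_mps[1:])]

def congestion_risk_color(speeds_mps, dt_s=1.0):
    """Map acceleration variability to a traffic-light style display color."""
    accel = acceleration_series(speeds_mps, dt_s)
    variability = pstdev(accel)          # jerky driving -> high standard deviation
    if variability < 0.3:
        return "green"                   # smooth driving, low chance of triggering a jam
    if variability < 0.8:
        return "yellow"                  # moderate speed fluctuation
    return "red"                         # strong braking/acceleration pattern

if __name__ == "__main__":
    smooth = [25.0 + 0.1 * i for i in range(20)]          # gently increasing speed
    jerky = [25, 20, 27, 18, 26, 17, 25, 19, 27, 16]      # repeated hard braking
    print(congestion_risk_color(smooth))  # green
    print(congestion_risk_color(jerky))   # red
```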
Moreover, the system is said to be even more effective when all the vehicles communicate with each other, which is similar to the SARTRE ‘road train’ initiative being tested in Europe, though Honda’s system still requires the driver to make adjustments to their driving pattern rather than relying on an autonomous system as SARTRE does.
With the goal to bring this technology to market, Honda will begin the first public-road testing of the technology in Italy and Indonesia in May and July of this year, respectively, to verify the effectiveness of the technology in minimizing vehicle congestion.
Foodborne Illness: What Consumers Need to Know
What Is Foodborne Illness?
Foodborne illness is a preventable public health challenge that causes an estimated 48 million illnesses and 3,000 deaths each year in the United States. It is an illness that comes from eating contaminated food. The onset of symptoms may occur within minutes to weeks and often presents itself as flu-like symptoms, as the ill person may experience symptoms such as nausea, vomiting, diarrhea, or fever. Because the symptoms are often flu-like, many people may not recognize that the illness is caused by harmful bacteria or other pathogens in food.
Everyone is at risk for getting a foodborne illness. However, some people are at greater risk for experiencing a more serious illness or even death should they get a foodborne illness. Those at greater risk are infants, young children, pregnant women and their unborn babies, older adults, and people with weakened immune systems (such as those with HIV/AIDS, cancer, diabetes, kidney disease, and transplant patients). Some people may become ill after ingesting only a few harmful bacteria; others may remain symptom free after ingesting thousands.
How Do Bacteria Get in Food?
Microorganisms may be present on food products when you purchase them. For example, plastic-wrapped boneless chicken breasts and ground meat were once part of live chickens or cattle. Raw meat, poultry, seafood, and eggs are not sterile. Neither is fresh produce such as lettuce, tomatoes, sprouts, and melons.
Thousands of types of bacteria are naturally present in our environment. Microorganisms that cause disease are called pathogens. When certain pathogens enter the food supply, they can cause foodborne illness. Not all bacteria cause disease in humans. For example, some bacteria are used beneficially in making cheese and yogurt.
Foods, including safely cooked and ready-to-eat foods, can become cross-contaminated with pathogens transferred from raw egg products and raw meat, poultry, and seafood products and their juices, other contaminated products, or from food handlers with poor personal hygiene. Most cases of foodborne illness can be prevented with proper cooking or processing of food to destroy pathogens.
The "Danger Zone"
Bacteria multiply rapidly between 40 °F and 140 °F. To keep food out of this "Danger Zone," keep cold food cold and hot food hot; a short sketch after the list below pulls these temperatures together.
- Store food in the refrigerator (40 °F or below) or freezer (0 °F or below).
- Cook food to a safe minimum internal temperature.
- Cook all raw beef, pork, lamb and veal steaks, chops, and roasts to a minimum internal temperature of 145 °F as measured with a food thermometer before removing meat from the heat source. For safety and quality, allow meat to rest for at least three minutes before carving or consuming. For reasons of personal preference, consumers may choose to cook meat to higher temperatures.
- Cook all raw ground beef, pork, lamb, and veal to an internal temperature of 160 °F as measured with a food thermometer.
- Cook all poultry to a safe minimum internal temperature of 165 °F as measured with a food thermometer.
- Maintain hot cooked food at 140 °F or above.
- When reheating cooked food, reheat to 165 °F.
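The sketch below simply encodes the thresholds from the list above. The function and dictionary names are invented for the example, but the temperature values come directly from the list.

```python
# Sketch of the safe minimum internal temperatures listed above (degrees Fahrenheit).
# Names are invented for this example; the values are taken from the list.
SAFE_MINIMUM_F = {
    "whole beef/pork/lamb/veal": 145,   # plus at least a 3-minute rest before carving
    "ground beef/pork/lamb/veal": 160,
    "poultry": 165,
    "reheated leftovers": 165,
}
DANGER_ZONE_F = (40, 140)   # bacteria multiply rapidly in this range

def is_safely_cooked(food_type: str, internal_temp_f: float) -> bool:
    """True if the measured internal temperature meets the listed minimum."""
    return internal_temp_f >= SAFE_MINIMUM_F[food_type]

def in_danger_zone(holding_temp_f: float) -> bool:
    """True if a holding temperature falls between 40 degrees F and 140 degrees F."""
    low, high = DANGER_ZONE_F
    return low < holding_temp_f < high

print(is_safely_cooked("poultry", 162))   # False - keep cooking
print(in_danger_zone(75))                 # True - refrigerate or reheat
```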
In Case of Suspected Foodborne Illness
Follow these general guidelines:
- Preserve the evidence. If a portion of the suspect food is available, wrap it securely, mark it "DANGER" and freeze it. Save all the packaging materials, such as cans or cartons. Write down the food type, the date, other identifying marks on the package, the time consumed, and when the onset of symptoms occurred. Save any identical unopened products.
- Seek treatment as necessary. If the victim is in an "at risk" group, seek medical care immediately. Likewise, if symptoms persist or are severe (such as bloody diarrhea, excessive nausea and vomiting, or high temperature), call your doctor.
- Call the local health department if the suspect food was served at a large gathering, from a restaurant or other food service facility, or if it is a commercial product.
- Call the USDA Meat and Poultry Hotline at 1-888-MPHotline (1-888-674-6854) if the suspect food is a USDA-inspected product and you have all the packaging.
Common foodborne pathogens, their sources, their symptoms and potential impact, and how to prevent them:
Campylobacter jejuni
- Sources: Contaminated water, raw or unpasteurized milk, and raw or undercooked meat, poultry, or shellfish.
- Symptoms and Potential Impact: Diarrhea (sometimes bloody), cramping, abdominal pain, and fever that appear 2 to 5 days after eating; may last 7 days. May spread to the bloodstream and cause a life-threatening infection.
- Prevention: Cook meat and poultry to a safe minimum internal temperature; do not drink or consume unpasteurized milk or milk products; wash your hands after coming in contact with feces.
Clostridium botulinum (botulism)
- Sources: Improperly canned foods, garlic in oil, vacuum-packed and tightly wrapped food.
- Symptoms and Potential Impact: The bacteria produce a nerve toxin that causes illness by affecting the nervous system. Symptoms usually appear 18 to 36 hours after eating, but can appear in as few as 6 hours or as many as 10 days: double vision, blurred vision, drooping eyelids, slurred speech, difficulty swallowing, dry mouth, and muscle weakness. If untreated, these symptoms may progress, causing muscle paralysis and even death.
- Prevention: Do not use damaged canned foods or canned foods showing signs of swelling, leakage, punctures, holes, fractures, extensive deep rusting, or crushing/denting severe enough to prevent normal stacking. Follow safety guidelines when home canning food. Boil home-canned foods for 10 minutes before eating to ensure safety. (Note: safe home canning guidelines may be obtained from a state university or county extension office.)
Clostridium perfringens
- Sources: Meats, meat products, and gravy. Called "the cafeteria germ" because many outbreaks result from food left for long periods in steam tables or at room temperature.
- Symptoms and Potential Impact: Intense abdominal cramps, nausea, and diarrhea may appear 6 to 24 hours after eating; symptoms usually last about 1 day, but for immune-compromised individuals they may last 1 to 2 weeks. Complications and/or death occur only very rarely.
- Prevention: Keep hot foods hot and cold foods cold! Once food is cooked, it should be held hot, at an internal temperature of 140 °F or above. Use a food thermometer to make sure. Discard all perishable foods left at room temperature longer than 2 hours; 1 hour in temperatures above 90 °F.
Cryptosporidium
- Sources: Soil, food, water, and contaminated surfaces. Swallowing contaminated water, including that from recreational sources (e.g., a swimming pool or lake); eating uncooked or contaminated food; placing a contaminated object in the mouth.
- Symptoms and Potential Impact: Dehydration, weight loss, stomach cramps or pain, fever, nausea, and vomiting; respiratory symptoms may also be present. Symptoms begin 2 to 10 days after becoming infected and may last 1 to 2 weeks. Immune-compromised individuals may experience a more serious illness.
- Prevention: Wash your hands before and after handling raw meat products, and after changing diapers, going to the bathroom, or touching animals. Avoid water that might be contaminated. (Do not drink untreated water from shallow wells, lakes, rivers, springs, ponds, and streams.)
Escherichia coli O157:H7
- Sources: Uncooked beef (especially ground beef), unpasteurized milk and juices (e.g., “fresh” apple cider); contaminated raw fruits and vegetables, or water. Person-to-person contamination can also occur.
- Symptoms and Potential Impact: Severe diarrhea (often bloody), abdominal cramps, and vomiting. Usually little or no fever. Can begin 2 to 8 days, but usually 3 to 4 days, after consumption of contaminated food or water and lasts about 5 to 7 days depending on severity. Children under 5 are at greater risk of developing hemolytic uremic syndrome (HUS), which causes acute kidney failure.
- Prevention: Cook hamburgers and ground beef to a safe minimum internal temperature of 160 °F. Drink only pasteurized milk, juice, or cider. Rinse fruits and vegetables under running tap water, especially those that will not be cooked. Wash your hands with warm water and soap after changing diapers, using the bathroom, handling pets or having any contact with feces.
Listeria monocytogenes
- Sources: Ready-to-eat foods such as hot dogs, luncheon meats, cold cuts, fermented or dry sausage, and other deli-style meat and poultry. Also, soft cheeses made with unpasteurized milk, smoked seafood, and salads made in the store such as ham salad, chicken salad, or seafood salad.
- Symptoms and Potential Impact: Fever, muscle aches, and sometimes gastrointestinal symptoms such as nausea or diarrhea. If the infection spreads to the nervous system, symptoms such as headache, stiff neck, confusion, loss of balance, or convulsions can occur. Those at risk (including pregnant women and newborns, older adults, and people with weakened immune systems) may later develop more serious illness; death can result from Listeria. Can cause severe problems with pregnancy, including miscarriage or death in newborns.
- Prevention: Cook raw meat, poultry, and seafood to a safe minimum internal temperature; prevent cross-contamination by separating ready-to-eat foods from raw eggs and from raw meat, poultry, seafood, and their juices; wash your hands before and after handling raw meat, poultry, seafood, and egg products. Those with a weakened immune system should avoid eating hot dogs and deli meats unless they are reheated to 165 °F or steaming hot. Do not drink raw (unpasteurized) milk or eat foods that have unpasteurized milk in them (e.g., soft cheeses). Do not eat deli salads made in store, such as ham, egg, tuna, or seafood salad.
Salmonella (over 2300 types)
- Sources: Raw or undercooked eggs, poultry, and meat; unpasteurized milk and juice; cheese and seafood; and contaminated fresh fruits and vegetables.
- Symptoms and Potential Impact: Diarrhea, fever, and abdominal cramps usually appear 12 to 72 hours after eating; may last 4 to 7 days. In people with a weakened immune system, the infection may be more severe and lead to serious complications, including death.
- Prevention: Cook raw meat, poultry, and egg products to a safe temperature. Do not eat raw or undercooked eggs. Avoid consuming raw or unpasteurized milk or other dairy products. Produce should be thoroughly washed before consuming.
Shigella (over 30 types)
- Sources: Person-to-person by the fecal-oral route; fecal contamination of food and water. Most outbreaks result from food, especially salads, prepared and handled by workers using poor personal hygiene.
- Symptoms and Potential Impact: Disease referred to as "shigellosis" or bacillary dysentery. Diarrhea (watery or bloody), fever, and abdominal cramps appear 1 to 2 days after ingestion of the bacteria and usually resolve in 5 to 7 days.
- Prevention: Hand washing is a very important step to prevent shigellosis. Always wash your hands with warm water and soap before handling food and after using the bathroom, changing diapers, or having contact with an infected person.
Staphylococcus aureus
- Sources: Commonly found on the skin and in the noses of up to 25% of healthy people and animals. Spread person-to-person through food from improper food handling. The bacteria multiply rapidly at room temperature to produce a toxin that causes illness. Also found in contaminated milk and cheeses.
- Symptoms and Potential Impact: Severe nausea, abdominal cramps, vomiting, and diarrhea occur 30 minutes to 6 hours after eating; recovery takes 1 to 3 days, or longer if severe dehydration occurs.
- Prevention: Because the toxins produced by this bacterium are resistant to heat and cannot be destroyed by cooking, preventing the contamination of food before the toxin can be produced is important. Keep hot foods hot (over 140 °F) and cold foods cold (40 °F or under); wash your hands with warm water and soap and wash kitchen counters with hot water and soap before and after preparing food.
Vibrio vulnificus
- Sources: Uncooked or raw seafood (fish or shellfish); oysters.
- Symptoms and Potential Impact: In healthy persons, symptoms include diarrhea, stomach pain, and vomiting. May result in a blood infection and death for those with a weakened immune system, particularly those with underlying liver disease.
- Prevention: Do not eat raw oysters or other raw shellfish; cook shellfish (oysters, clams, mussels) thoroughly. Prevent cross-contamination by separating cooked seafood and other foods from raw seafood and its juices. Refrigerate cooked shellfish within two hours after cooking.
May 24, 2011
Play Safely and Responsibly
1. Health and Safety guide
WARNING: READ BEFORE PLAYING
Photosensitive Seizure Warning
A very small proportion of people can experience epileptic seizures when exposed to certain light patterns or flashing lights. Exposure to these patterns or backgrounds on a computer screen, and often while playing video games, may induce an epileptic seizure in these people. Certain conditions can induce previously undetected epileptic symptoms even in people who have no prior history of seizures or epilepsy.
If you, or anyone in your family, have an epileptic condition, consult your physician prior to playing any FunOrb games. If you experience any of the following symptoms while playing a video or computer game - dizziness, eye or muscle twitches, altered vision, loss of awareness, disorientation, involuntary movement, or convulsions - STOP PLAYING IMMEDIATELY and consult your physician. Only when they have given you the all clear should you resume play.
Repetitive Strain Injury Warning
Prolonged use of a computer, bad posture and low levels of fitness can all contribute to a condition referred to as Repetitive Strain Injury (RSI).
To help avoid RSI, please follow these simple steps:
- When playing FunOrb games, take a 10 minute break from your computer every hour. Get up and walk around, do some stretches.
- Exercise regularly.
- When using the computer, only maintain a loose grip on the mouse.
- Never rest your wrists on a support whilst typing or using the mouse.
- For extended periods of mouse-only use, place the mouse directly in front of you, between you and the monitor.
- Make sure the top of the monitor is at eye level and directly in front of you and that your seat is at the right height so that your forearms are at right angles to your upper arms.
The picture below illustrates a correct sitting position:
If you have any serious pain or discomfort in your arms, neck, shoulders or back while playing FunOrb games stop playing immediately and consult your physician. Only when they have given you the all clear should you resume play.
2. Play Safely
FunOrb has several multiplayer games, so you are going to meet a lot of new people when playing. Please remember, however, that you DO NOT know any of these people in real life. This is not to say that they are not as nice in real life as they act in-game - they most likely are - but it's important and sensible to keep a safe distance from your fellow FunOrb players in real life.
For this reason, we have put together some helpful hints, so you can be safe and have fun while playing FunOrb games.
1. DO NOT tell other players personal information. Not even if they claim to be Jagex staff!
Don't tell ANYONE your real name, email address, home address, messenger handle or phone number. Use your in-game friends list to chat and interact with the other players safely in multiplayer games.
2. If someone is making you feel uncomfortable then you should not attempt to carry on a conversation with this person.
Please see our Reporting Abuse guide for details on both how to ignore these players and reporting them, should they insult you directly.
3. Chatting of a sexual nature (aka cybering) is strictly forbidden in all FunOrb games and should someone approach you in-game with this type of chat, please follow the instructions in our Reporting Abuse guide and report the player to us.
4. FunOrb games are ONLINE games. There is no need to meet anyone from FunOrb games in real life. Remember, you only have the other player's word that they are who they say they are. In addition, for our players' safety, it is against the rules to ask for or give out personal information, so please do not do so. For further information about this policy, please read Play Safe: Asking for or giving out personal details.
If you decide against this and plan to meet a FunOrb player in real life, please inform someone that you trust about what you are doing and have them accompany you to the meeting. DO NOT go alone.
5. If you are one of our younger players, we advise you to keep your parents or guardians informed of how your playing is going. It would be a good idea to introduce them to the games and encourage them to check on you regularly to ensure that your game remains problem-free.
6. For serious cases related to Child Protection, please report the incident to the following authorities:
- USA: The Cyber Tipline
- Europe: Child Exploitation and Online Protection Centre
- Australia: Virtual Global Taskforce
Or, if you are outside the United Kingdom or the United States, we suggest you contact your local law enforcement agency.
For further information and advice on how to chat safely online, please see the following sites:
- Jugendschutz im Internet (German): http://www.jugendschutz.net/
- Virtual Global Taskforce: http://www.virtualglobaltaskforce.com
- Get Net Wise: http://www.getnetwise.org
- Think You Know: http://www.thinkyouknow.co.uk
9. If you are a parent seeking more information about FunOrb, you can view our Parents' Guide by clicking here.
3. Responsible Gaming Policy
Jagex intends everyone to use and enjoy FunOrb in a responsible way.
FunOrb games are computer entertainment products and are operated with the intention of providing our users with an engaging, challenging and entertaining experience which they can enjoy from home, work or college via an online device.
While we do not block people from being logged in to FunOrb over a certain number of hours, we do not encourage people to play for more hours than is reasonable.
Please follow these simple guidelines to maintain a balanced life and get the most out of playing our games.
- Take a break from the computer: read a book, go for a walk.
- Exercise at least once a week.
- Remember, it is just a game.
- When eating, take a break from the computer - it'll still be there when you come back.
- Don't ignore your real-life friends, meet up with them on a regular basis.
- Work and study matters: don't let playing a game stop you doing well in your studies or at work.
If you feel that your use of FunOrb is out of control and you are finding it difficult following some, or all, of the steps mentioned above, then please seek advice and help from someone that you trust or a medical professional.
How Beans Grow
by National Gardening Association Editors
If you've ever walked by containers of bulk seed in a garden store, you may have been surprised by the many different colors, sizes and shapes of the beans -- even by the variety of designs on the seed coats and their descriptive names: 'Soldier', 'Wren's Egg', 'Yellow Eye', 'Black Eye', and others.
Maybe you were impressed, too, with how big some of these seeds are. Underneath the large, hard seed coat is an embryo, a tiny plant ready to spring to life. When you plant a bean seed, the right amount of water, oxygen and a warm temperature (65°F to 75°F) will help it break through its seed coat and push its way up through the soil.
The Seed of Life
Most of the energy the young plant needs is stored within the seed. In fact, there's enough food to nourish bean plants until the first true leaves appear without using any fertilizer at all.
As the tender, young beans come up, they must push pairs of folded seed leaves (or cotyledons) through the soil and spread them above the ground. Beans also quickly send down a tap root, the first of a network of roots that will anchor the plants as they grow. Most of the roots are in the top eight inches of soil, and many are quite close to the surface.
What Beans Need
Beans need plenty of sunlight to develop properly. If the plants are shaded for an extended part of the day, they'll be tall and weak. They'll be forced to stretch upward for more light, and they won't have the energy to produce as many beans.
The bean plant produces nice, showy flowers, and within each one is everything that's necessary for pollination, fertilization and beans. Pollination of bean flowers doesn't require much outside assistance -- a bit of wind, the occasional visit from a bee, and the job is done. After fertilization occurs, the slender bean pods emerge and quickly expand. Once this happens, the harvest isn't far off.
Although beans love sun, too much heat reduces production. Bean plants, like all other vegetables, have a temperature range that suits them best: They prefer 70°F to 80°F after germinating. When the daytime temperature is consistently over 85°F, most beans tend to lose their blossoms. That's why many types of beans don't thrive in the South or Southwest in the middle of the summer -- it's simply too hot.
Beans don't take to cold weather very well, either. Only Broad or Fava beans can take any frost at all. Other types must be planted when the danger of frost has passed and the soil has warmed up.
http://ghr.nlm.nih.gov/ - A service of the U.S. National Library of Medicine®
Hystrix-like ichthyosis with deafness (HID) is a disorder characterized by dry, scaly skin (ichthyosis) and hearing loss that is usually profound. Hystrix-like means resembling a porcupine; in this type of ichthyosis, the scales may be thick and spiky, giving the appearance of porcupine quills.
Newborns with HID typically develop reddened skin. The skin abnormalities worsen over time, and the ichthyosis eventually covers most of the body, although the palms of the hands and soles of the feet are usually only mildly affected. Breaks in the skin may occur and in severe cases can lead to life-threatening infections. Affected individuals have an increased risk of developing a type of skin cancer called squamous cell carcinoma, which can also affect mucous membranes such as the inner lining of the mouth. People with HID may also have patchy hair loss caused by scarring on particular areas of skin.
HID is a rare disorder. Its prevalence is unknown.
HID is caused by mutations in the GJB2 gene. This gene provides instructions for making a protein called gap junction beta 2, more commonly known as connexin 26. Connexin 26 is a member of the connexin protein family. Connexin proteins form channels called gap junctions that permit the transport of nutrients, charged atoms (ions), and signaling molecules between neighboring cells that are in contact with each other. Gap junctions made with connexin 26 transport potassium ions and certain small molecules.
Connexin 26 is found in cells throughout the body, including the inner ear and the skin. In the inner ear, channels made from connexin 26 are found in a snail-shaped structure called the cochlea. These channels may help to maintain the proper level of potassium ions required for the conversion of sound waves to electrical nerve impulses. This conversion is essential for normal hearing. In addition, connexin 26 may be involved in the maturation of certain cells in the cochlea. Connexin 26 also plays a role in the growth and maturation of the outermost layer of skin (the epidermis).
At least one GJB2 gene mutation has been identified in people with HID. This mutation changes a single protein building block (amino acid) in connexin 26. The mutation is thought to result in channels that constantly leak ions, which impairs the health of the cells and increases cell death. Death of cells in the skin and the inner ear may underlie the signs and symptoms of HID.
Because the GJB2 gene mutation identified in people with HID also occurs in keratitis-ichthyosis-deafness syndrome (KID syndrome), a disorder with similar features and the addition of eye abnormalities, many researchers categorize KID syndrome and HID as a single disorder, which they call KID/HID. It is not known why some people with this mutation have eye problems while others do not.
Changes in this gene are associated with hystrix-like ichthyosis with deafness.
This condition is inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to cause the disorder.
In some cases, an affected person inherits the mutation from one affected parent. Other cases result from new mutations in the gene and occur in people with no history of the disorder in their family.
These resources address the diagnosis or management of hystrix-like ichthyosis with deafness and may include treatment providers.
You might also find information on the diagnosis or management of hystrix-like ichthyosis with deafness in Educational resources (http://www.ghr.nlm.nih.gov/condition/hystrix-like-ichthyosis-with-deafness/show/Educational+resources) and Patient support (http://www.ghr.nlm.nih.gov/condition/hystrix-like-ichthyosis-with-deafness/show/Patient+support).
General information about the diagnosis (http://ghr.nlm.nih.gov/handbook/consult/diagnosis) and management (http://ghr.nlm.nih.gov/handbook/consult/treatment) of genetic conditions is available in the Handbook. Read more about genetic testing (http://ghr.nlm.nih.gov/handbook/testing), particularly the difference between clinical tests and research tests (http://ghr.nlm.nih.gov/handbook/testing/researchtesting).
To locate a healthcare provider, see How can I find a genetics professional in my area? (http://ghr.nlm.nih.gov/handbook/consult/findingprofessional) in the Handbook.
You may find the following resources about hystrix-like ichthyosis with deafness helpful. These materials are written for the general public.
You may also be interested in these resources, which are designed for healthcare professionals and researchers.
For more information about naming genetic conditions, see the Genetics Home Reference Condition Naming Guidelines (http://ghr.nlm.nih.gov/ConditionNameGuide) and How are genetic conditions and genes named? (http://ghr.nlm.nih.gov/handbook/mutationsanddisorders/naming) in the Handbook.
Ask the Genetic and Rare Diseases Information Center (http://rarediseases.info.nih.gov/GARD/).
amino acid ; autosomal ; autosomal dominant ; cancer ; carcinoma ; cell ; cochlea ; connexin ; epidermis ; gap junctions ; gene ; ichthyosis ; ions ; keratitis ; mucous ; mutation ; potassium ; prevalence ; protein ; syndrome
You may find definitions for these and many other terms in the Genetics Home Reference Glossary (http://www.ghr.nlm.nih.gov/glossary).
The resources on this site should not be used as a substitute for professional medical care or advice. Users seeking information about a personal genetic disease, syndrome, or condition should consult with a qualified healthcare professional. See How can I find a genetics professional in my area? (http://ghr.nlm.nih.gov/handbook/consult/findingprofessional) in the Handbook.
April 16, 2002 By Bill McGarigle
Texas was among the first to adapt geospatial technologies to the monitoring, decision-making and treatment processes involved in cotton production. The need was clear: Cotton is the state's number one cash crop, contributing over $1.3 billion annually to the Texas economy, even after losing 10 percent of crops to weevils each year. Losses would be upward of 20 percent had Texas, the federal government and cotton growers not taken action, according to Carl Anderson, agricultural economist and cotton marketing specialist at the Texas A&M Cooperative Extension Program.
In an effort to banish the weevil once and for all, the State Legislature in 1996 established the Texas Boll Weevil Eradication Foundation (TBWEF), a quasi-government entity funded by cotton growers, the state and the U.S. Department of Agriculture. Since 1999, the Legislature has appropriated $125 million in support of the foundation's eradication program.
"Based on GIS analysis of predefined biological, meteorological and operational parameters, such a system could indicate which fields to treat and when," El-Lissy said. "If the system is user friendly and practical to integrate into the Boll Weevil Eradication Program, fewer, less-experienced workers will be able to produce the same results as those achieved by many experienced personnel, but faster and more efficiently."
Role of Spatial Technology
In 1996, the TBWEF introduced the Boll Weevil Eradication Expert System (BWEES) to facilitate the eradication program. A GIS-based application developed by El-Lissy and the foundation's IT group, the BWEES incorporates data from a wide range of sources. Differential GPS point files of field coordinates, field shapes, acreage and weevil trap locations are downloaded to MapInfo Pro GIS and integrated into the base map of a cotton field and its surrounding environment. Grower data, planting dates, cotton variety, numbers of weevils found in the traps and related agricultural information are all stored in an Oracle database-management system and integrated into thematic maps of the respective cotton fields.
Trap data are collected with bar code scanners during weekly field inspections. The scanner automatically records date, time and trap number, and prompts the user for the number of weevils in the trap, the growth stage of the crop and related information. Data from the scanners is downloaded to the GIS and linked to the map location of each trap, enabling supervisors and producers to precisely locate weevil infestations in the field.
MapInfo MapX compares this data against parameters established for cotton fields at various stages of crop growth and infestation. Based on the number of weevils caught in traps over time, MapX color-codes fields meeting various growth and treatment criteria. Data on fields marked for treatment are entered into a contractor's DGPS-based flight-tracking system, which is designed to trigger spray only on the infested areas of the field. After treatment, the swath tracks and related data from the aerial applications are incorporated into the BWEES and used to assess the progress of eradication and monitor the health of the field.
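The foundation has not published the exact decision rules inside the BWEES, but the general pattern it describes (weekly trap counts compared against growth-stage parameters, with fields color-coded for treatment) can be sketched in a few lines. The field names, thresholds, and colors below are assumptions made for illustration, not the foundation's actual parameters.

```python
# Illustrative sketch of flagging cotton fields for treatment from weekly trap data.
# Thresholds and color codes are hypothetical, not the foundation's actual parameters.
from dataclasses import dataclass

@dataclass
class FieldWeek:
    field_id: str
    growth_stage: str   # e.g. "pre-square", "squaring", "boll"
    weevils_trapped: int

# Assumed weevils-per-trap-week action thresholds by crop growth stage.
ACTION_THRESHOLDS = {"pre-square": 5, "squaring": 2, "boll": 1}

def classify(week: FieldWeek) -> str:
    """Return a map color: red = treat now, yellow = monitor closely, green = no action."""
    threshold = ACTION_THRESHOLDS.get(week.growth_stage, 1)
    if week.weevils_trapped >= threshold:
        return "red"
    if week.weevils_trapped > 0:
        return "yellow"
    return "green"

fields = [
    FieldWeek("A-12", "squaring", 4),
    FieldWeek("B-03", "boll", 0),
    FieldWeek("C-27", "pre-square", 2),
]
for f in fields:
    print(f.field_id, classify(f))   # A-12 red, B-03 green, C-27 yellow
```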
The foundation has also Web-enabled the BWEES. Cotton producers can now query a TBWEF site to find out if weevils are present in their fields, where they were trapped, the degree of infestation and progress toward eradication.
As Mexico celebrates the Day of the Dead (Día de los Muertos), I'm reminded of a visit I once made with a Swedish friend to the Museum of Mummies in the picturesque colonial Mexican city of Guanajuato. The perfectly preserved corpses of babies and adults were brashly displayed amid neon lights, fake cobwebs, and other cheap Halloween-esque adornments. Confronted with this seeming lack of respect for the dead and vulgarity of the displays, I explained to my shocked companion that Mexicans have a peculiarly different relationship with death to other cultures. As the Nobel prize-winning Mexican writer Octavio Paz explained in his seminal work Labyrinth of Solitude:
"The Mexican ... is familiar with death, jokes about it, caresses it, sleeps with it, celebrates it. True, there is as much fear in his attitude as in that of others, but at least death is not hidden away: he looks at it face to face, with impatience, disdain or irony."
The celebration of the Day of the Dead – which is actually a week of festivities that begins on 28 October and ends with a national holiday on 2 November – is an integral part of this embrace of death that is particular to Mexican national identity. During this period, the popular belief is that the deceased have divine permission to visit friends and relatives on earth and enjoy once again the pleasures of life. To facilitate this, Mexicans visit the graves of families and friends and adorn them with brilliantly colourful flowers and offerings of food – in particular the sugary "bread of the dead" – spices, toys, candles, and drinks amongst other things. The period is specifically a joyous, ritualistically elaborate celebration of life, rather than a sober mourning of its passing.
The origins of the Day of the Dead rest in the 16th-century fusion of the Aztecs' belief in death as merely one part in the wider cycle of existence, their ritual venerations and offerings to the goddess Mictecacihuatl ("Lady of the Dead") for deceased children and adults, and the conquering Spaniards' desire to accommodate these festivities within the Catholic celebrations of All Saints' Day and All Souls' Day. While contemporary observance of the Day of the Dead does include masses and prayers to saints and the dead, it is dominated by carnivalesque rituals to a far greater extent than the orthodox Catholic celebrations found in western Europe.
Nevertheless, in a country as socially and geographically diverse as Mexico, there is significant regional variation in the nature of festivities: the southern state of Chiapas is far more likely to focus its efforts on processions and public commemorations of death than the valley of Mexico, where the decoration of altars in homes and tombs of the deceased is more popular. Urbanisation, too, plays a large role in regional variations. For the south and rural areas the period holds far greater social and cultural significance than in the north and large cities; families and communities in rural areas will often spend large parts of the year preparing for the occasion.
As the anthropologist Claudio Lomnitz correctly points out, in many respects this "playful familiarity and proximity to death", is all the more unusual in contemporary Mexican culture because so much of Euro-American 20th century thought has been about denying death – preserving the life of the citizen at all costs. The existence of this peculiarly Mexican attitude is born of three major themes in Mexican history.
First is the Aztec heritage of the pre-Columbian concept of life and death as part of a broader cycle of existence, which fused with the Christian veneration of the deceased on All Souls' Day into a wholly unique concept of death. Second is the violent and tumultuous nature of Mexico's past: the brutality of the Spanish conquest, in which the indigenous population of central Mexico was decimated over the course of the 16th century; the humiliating subjugation at the hands of its North American neighbour; and the bloodbath of the Mexican revolution. These upheavals made it impossible to ignore the commonplace reality of unnatural death in Mexico. And thirdly, the appropriation (or reappropriation from their Mesoamerican heritage, as many saw it) of "death" by Mexican intellectuals post-revolution in the early 20th century meant that direct confrontation with the mortality of life became ingrained in the national psyche.
Learning how to cope with mortality has always been a central preoccupation of human existence. The celebrations of the Day of the Dead provide an insight into how the Mexicans do it.
Pertussis, also known as “whooping cough” for the trademark sound made by those suffering from the illness, is making a comeback. This highly contagious disease is caused by an infection of the respiratory system that leads to uncontrollable, violent coughing. The coughing spells can make it hard to breathe, and may cause vomiting.
Vaccines have made this disease less prevalent in recent years, but the number of cases is on the rise again. While adults usually experience mild discomfort from the coughing spells, pertussis can be deadly for infants, especially those under 6 months that have not been vaccinated.
The pertussis infection can last about 6 weeks and is highly contagious. The first symptoms resemble a common cold, including runny nose, sneezing, mild cough and low-grade fever. Infected people are most contagious during the earliest stages of the illness for up to about 2 weeks after the cough begins. Someone infected can spread the disease simply by sneezing, coughing, or laughing. If you have an infant or care for one, take the time to get vaccinated. It could save a life…
Most new moms are given a Tdap injection in the hospital after delivery, protecting them from tetanus, diphtheria and pertussis. But dads, older siblings, and caregivers need to get vaccinated too. Many elderly people may never have been vaccinated. Urge anyone that comes in regular contact with your newborn to speak to their physician about the vaccination as soon as possible.
Information on moles and voles can also be seen on the Healthy Canadians website.
The adult mole measures from 12 to 20 cm in length and has dark grey or brown velvety fur. Its eyes are small and its broad front feet have strong claws for digging in soil. Moles are insectivores. Most do not eat plants, but feed mainly on earthworms, insects and grubs. Some moles may damage tubers and the roots of garden plants but any plant damage is most likely incidental, or may be blamed on other small herbivorous animals using the tunnel. Moles do not hibernate but remain active day or night all year long. During the winter, the mole looks for food deep below the frost line. Most surface activity happens in the spring and fall. Moles are solitary animals, and it is likely that only one or two moles are responsible for the damage to your lawn or garden. Moles have only one litter of 3 to 4 young in the spring. These young will stay with the female in her tunnels for about one month, and then will start creating their own tunnels, reaching adult size in four to eight weeks.
Voles resemble house mice, but have a shorter tail, a rounded muzzle and head, and small ears. Like all rodents, voles have a single pair of large chisel-like incisors in the upper jaw that continue to grow from the roots as the tips wear away. The vole has a dark brown coat with a greyish belly that turns white in the winter. In contrast, the house mouse is uniformly grey. Voles search for green plants and seeds during the day or night, and in winter, they travel in tunnels beneath the insulating snow, making round holes in the snow when coming up to the surface.
The mole can be considered beneficial in some ways since it consumes insects, including grubs and other insect larvae, and slugs. Moles also feed on earthworms and some will even eat small snakes and mice.
However, the mole and its tunnels can destroy lawns, gardens, parks, golf courses and cemeteries. They can kill plants when tunnelling by removing soil around roots. The unprotected roots dry out and die. Plant diseases may also be spread by the mole's movements. Pests like voles, field mice and other rodents use these tunnels as well to feed on exposed roots. In search for food, moles create an extensive network of tunnels, many of which are used only once.
Temporary surface tunnels are where the sod is raised and appears as ridges. These feeding tunnels are used a few times, then are abandoned. Deeper tunnels from which the mole must excavate dirt, forming molehills, are used mainly as the living quarters.
Signs of vole infestation are when the bark has been removed completely around the base of a tree (girdling), or the sight of 1 to 2 inch wide dead strips (surface runways) through matted grass leading to shallow underground burrows. Small piles of brownish feces and short pieces of grass along the runways are another sign of vole activity.
Female voles can start producing litters from the age of three weeks and can produce a large number of litters in one year since their gestation period lasts only 21 days and they can breed all year round. Local populations can vary from one animal to thousands per hectare on a three to four-year cycle. When the vole population peaks, predators such as foxes, wolves or hawks feed on nothing else.
Licensed pest control operators may offer a trapping service or traps can be rented from them or a farming co-op. Be sure to ask for instructions on the proper use of the mole traps if you decide to set one. To ensure success, trapping efforts should be concentrated on the main runways in the spring and fall. Look for tunnels that appear to directly connect two or more mounds that run parallel to permanent structures such as fences or concrete paths, or that follow a tree line bordering a grassy area. Another method of identifying an actively used run would be to lightly step on a small section of several tunnels so that they are disturbed, but not completely collapsed. Make sure that these disturbed sections are clearly marked. After a few days, the raised sections can be identified as active runs, and therefore good locations for traps.
To a certain extent, a healthy lawn where the risk of grub infestations is minimized will be less attractive to moles. Cats or dogs can also discourage a mole from entering a yard.
Baits are rarely taken by moles because they prefer to feed on soil insects. Some baits containing zinc phosphide are available only to licensed pest control operators. No registered baits are available to the general public.
Cleaning up all possible food sources like vegetables left in the garden at season's end will help keep voles and other rodents away from your yard. Proper vegetation management like removing mulch from the base of fruit trees in winter will help avoid an increase in vole numbers. If you intend to put mulch down on strawberries or other perennials, do so only after the soil freezes. If you do so before the soil freezes, you will be providing an ideal location for rodents to gain access to roots in unfrozen soil.
Use metal or glass rodent-proof containers to store seeds and bird feed. Composters should also be inaccessible to rodents. Gravel or cinder barriers around garden plots are an effective and easy means of protection. The barrier should be 20 cm (6 to 8 inches) deep and a foot or more wide. The sharpness of cinder particles deters voles from pushing their noses into the soil. Commercial plastic tree guards, a piece of chicken wire or small mesh wrapped around the base of trees and extending below the soil will help prevent tree girdling. Consult with your local tree specialist for the proper use of these materials.
Traditional snap mouse traps can be used. Place them in areas where voles are known to be. Barricades may be used that allow only voles to enter a trap. Buy a large number of snap-traps and set them all out at once for a one or two night period. A good technique is to bait the traps with a tiny dab of peanut butter or bacon for two or three nights without setting the traps. When the traps are finally set, voles are less likely to shy away from them. Always exercise extreme caution when handling a trap and keep them out of the reach of children and pets.
Natural predators including cats, owls and snakes can help keep the vole population down.
If populations have built up, the use of treated baits may be necessary. Baits containing the active ingredient chlorophacinone are available in home garden centres and are registered for the control of voles. Licensed pest control operators can use commercial baits containing chlorophacinone or zinc phosphide.
Denatonium benzoate, sprayed on plant surfaces to be protected, deters voles from chewing. This animal repellent works because it has an extremely bitter and unpleasant taste. It should not be used on food, edible plants or directly on the fruits or nuts of trees. Do not use it on sugar maple trees if the sap is to be used to make syrup, since the taste of the maple syrup may be affected.
| 3.790899 |
- Hypothermia occurs when the body's core temperature drops below 35°C.
- This typically results from prolonged exposure to cold conditions, especially in damp, wet or snowy weather.
- Early signs: shivering; listlessness; cold, pale, puffy face; impaired speech and impaired judgment.
- Later signs: drowsiness, weakness, slow pulse, shallow breathing, confusion, altered behaviour, stumbling, unsteadiness.
- Move person to warmer area, shield from cold, passive rewarming with space blanket etc., give warm fluids and high energy foods if possible.
What is hypothermia and what causes it?
Hypothermia occurs when the body's core temperature drops below 35°C. This happens when more heat is lost than the body can produce through shivering and muscle contractions.
Hypothermia is the result of prolonged exposure to cold conditions, especially in damp, wet or snowy weather. Inadequate clothing during winter or at night in the wilderness, or falling into cold water, are examples of situations which commonly cause hypothermia. Inactivity rapidly leads to heat loss and this is worse if the person is injured.
Symptoms and signs of hypothermia
Hypothermia has a gradual onset and the affected person might lose heat to a critical level before becoming aware of the problem. Early signs include shivering (shivering stops once body temperature falls to below 32°C); listlessness; a cold, pale, puffy face; slurred or incoherent speech and impaired judgment. This decrease in mental sharpness typically results in someone becoming unaware of the gravity of the situation.
Later signs, indicating severe hypothermia, include an overwhelming drowsiness and weakness, slow pulse and shallow breathing, confusion, altered behaviour such as aggressiveness, stumbling when walking and unsteadiness when standing.
Infants, the very lean and the elderly are at particular risk. Elderly people may become hypothermic at temperatures as mild as 10 to 15°C, particularly if they are malnourished, have heart disease or an underactive thyroid, or if they take certain medications or abuse alcohol.
Hypothermia can be fatal and therefore needs prompt treatment. Severe hypothermia may be difficult to distinguish from death because pulses become very difficult or impossible to feel and breathing may be too shallow to notice.
First aid for hypothermia
- Call for an ambulance if the person's level of consciousness is dropping, or you have any doubt about the severity of the condition.
- If possible, move the person to a warmer area, shielded from the cold and wind. Remove wet clothing.
- Passively re-warm the person by wrapping him in a space blanket, blankets, clothing or newspapers, and cover the head. If outdoors, insulate the person from the ground and lie next to him.
- If the person is conscious, give warm fluids and high-energy foods, unless he is vomiting. Don't give any alcohol or caffeinated drinks.
- Keep the person still as movement draws blood away from the vital organs. Don't massage or rub someone with severe hypothermia, or jostle them during transport. (Cold can interfere with the electric conduction system of the heart, making it prone to irregular rhythms which may lead to cardiac arrest.)
- Do not apply direct heat, such as a hot bath, heating pad or electric blanket. (This is called active re-warming and should not be done unless the person is very far from definitive care, as it carries a risk of burns.)
Prevention of hypothermia
- If you're going to be doing outdoor sports like hiking, research the conditions first and speak to experienced people who know the area. Ask them what they would recommend in terms of gear and available shelter. As a general rule: take along several layers of warm clothing (layers help trap warmed air) and keep the head, hands and feet covered.
- Change out of wet clothes as soon as you can. Being wet and in the wind rapidly speeds up heat loss from the body.
- Take along sufficient food, especially carbohydrates, and snack regularly. It's also important to stay hydrated, even in cold weather.
- Carry a space blanket; these are available at outdoor and camping shops.
Reviewed by Barry Milner, Instructor, Blue Star Academy of First Aid, BLS National Faculty and First Aid Representative (Resuscitation Council of Southern Africa)
| 4.141936 |
Intelligence tests are psychological tests that are designed to measure a variety of mental functions, such as reasoning, comprehension, and judgment.
The goal of intelligence tests is to obtain an idea of the person's intellectual potential. The tests center around a set of stimuli designed to yield a score based on the test maker's model of what makes up intelligence. Intelligence tests are often given as a part of a battery of tests.
There are many different types of intelligence tests, and they do not all measure the same abilities. Although the tests often have aspects that are related to each other, one should not expect that scores from one intelligence test, which measures a single factor, will be similar to scores on another intelligence test, which measures a variety of factors. Also, when determining whether or not to use an intelligence test, a person should make sure that the test has been adequately developed and has solid research to show its reliability and validity. Additionally, psychometric testing requires a clinically trained examiner, so the test should only be administered and interpreted by a trained professional.
A central criticism of intelligence tests is that psychologists and educators use these tests to distribute the limited resources of our society. These test results are used to provide rewards such as special classes for gifted students, admission to college, and employment. Those who do not qualify for these resources based on intelligence test scores may feel angry and as if the tests are denying them opportunities for success. Unfortunately, intelligence test scores have not only become associated with a person's ability to perform certain tasks, but with self-worth.
Many people are under the false assumption that intelligence tests measure a person's inborn or biological intelligence. Intelligence tests are based on an individual's interaction with the environment and never exclusively measure inborn intelligence. Intelligence tests have been associated with categorizing and stereotyping people. Additionally, knowledge of one's performance on an intelligence test may affect a person's aspirations and motivation to obtain goals. Intelligence tests can be culturally biased against certain groups.
When taking an intelligence test, a person can expect to do a variety of tasks. These tasks may include having to answer questions that are asked verbally, doing mathematical problems, and doing a variety of tasks that require eye-hand coordination. Some tasks may be timed and require the person to work as quickly as possible. Typically, most questions and tasks start out easy and progressively get more difficult. It is unusual for anyone to know the answer to all of the questions or be able to complete all of the tasks. If a person is unsure of an answer, guessing is usually allowed.
The four most commonly used intelligence tests are:
- Stanford-Binet Intelligence Scales
- Wechsler Adult Intelligence Scale
- Wechsler Intelligence Scale for Children
- Wechsler Preschool and Primary Scale of Intelligence
In general, intelligence tests measure a wide variety of human behaviors better than any other measure that has been developed. They allow professionals to have a uniform way of comparing a person's performance with that of other people who are similar in age. These tests also provide information on cultural and biological differences among people.
Intelligence tests are excellent predictors of academic achievement and provide an outline of a person's mental strengths and weaknesses. Many times the scores have revealed talents in many people, which have led to an improvement in their educational opportunities. Teachers, parents, and psychologists are able to devise individual curricula that matches a person's level of development and expectations.
Some researchers argue that intelligence tests have serious shortcomings. For example, many intelligence tests produce a single intelligence score. This single score is often inadequate in explaining the multidimensional aspects of intelligence. Another problem with a single score is the fact that individuals with similar intelligence test scores can vary greatly in their expression of these talents. It is important to know the person's performance on the various subtests that make up the overall intelligence test score. Knowing the performance on these various scales can influence the understanding of a person's abilities and how these abilities are expressed. For example, two people have identical scores on intelligence tests. Although both people have the same test score, one person may have obtained the score because of strong verbal skills while the other may have obtained the score because of strong skills in perceiving and organizing various tasks.
Furthermore, intelligence tests only measure a sample of behaviors or situations in which intelligent behavior is revealed. For instance, some intelligence tests do not measure a person's everyday functioning, social knowledge, mechanical skills, and/or creativity. Along with this, the formats of many intelligence tests do not capture the complexity and immediacy of real-life situations. Therefore, intelligence tests have been criticized for their limited ability to predict non-test or nonacademic intellectual abilities. Since intelligence test scores can be influenced by a variety of different experiences and behaviors, they should not be considered a perfect indicator of a person's intellectual potential.
The person's raw scores on an intelligence test are typically converted to standard scores. The standard scores allow the examiner to compare the individual's performance with that of other people of about the same age who have taken the same test.
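As a hedged illustration of what that conversion typically looks like (the exact scale depends on the particular test; the figures here follow the common convention, used by the Wechsler scales, of a mean of 100 and a standard deviation of 15):

z = (raw score - average raw score for the person's age group) / standard deviation for that group
standard score = 100 + (15 x z)

So someone whose raw score falls one standard deviation above the average for their age group receives a standard score of about 115.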
Keith Beard, Psy.D.
| 3.708912 |
Anise (Pimpinella Anisum, Linn.), an annual herb of the natural order Umbelliferæ. It is a native of southwestern Asia, northern Africa and south-eastern Europe, whence it has been introduced by man throughout the Mediterranean region, into Germany, and to some extent into other temperate regions of both hemispheres, but seems not to be known anywhere in the wild state or as an escape from gardens. To judge from its mention in the Scriptures (Matthew xxiii, 23), it was highly valued as a cultivated crop prior to our era, not only in Palestine, but elsewhere in the East. Many Greek and Roman authors, especially Dioscorides, Theophrastus, Pliny and Paladius, wrote more or less fully of its cultivation and uses.
Anise in Flower and in Fruit
From their days to the present it seems to have enjoyed general popularity. In the ninth century, Charlemagne commanded that it be grown upon the imperial farms; in the thirteenth, Albertus Magnus speaks highly of it; and since then many agricultural writers have devoted attention to it. But though it has been cultivated for at least two thousand years and is now extensively grown in Malta, Spain, southern France, Russia, Germany and India, which mainly supply the market, it seems not to have developed any improved varieties.
Description.—Its roots are white, spindle-shaped and rather fibrous; its stems about 18 inches tall, branchy, erect, slender, cylindrical; its root leaves lobed somewhat like those of celery; its stem leaves more and more finely cut toward the upper part of the stem, near the top of which they resemble fennel leaves in their finely divided segments; its flowers yellowish white, small, borne in rather large, loose umbels consisting of many umbellets; its fruits ("seeds") greenish-gray, small, ovoid or oblong in outline, longitudinally furrowed and ridged on the convex side, very aromatic, sweetish and pleasantly piquant.
Cultivation.—The seeds, which should be as fresh as possible, never more than two years old, should be sown in permanent quarters as soon as the weather becomes settled in early spring. They should be ½ inch deep, about ½ inch asunder, in drills 15 or 18 inches apart, and the plants thinned when about 2 inches tall to stand 6 inches asunder. An ounce of seed should plant about 150 feet of drill. The plants, which do not transplant readily, thrive best in well-drained, light, rich, rather dry, loamy soils well exposed to the sun. A light application of well-rotted manure, careful preparation of the ground, clean and frequent cultivation, are the only requisites in the management of this crop.
In about four months from the sowing of the seed, and in about one month from the appearance of the flowers, the plants may be pulled, or preferably cut, for drying. The climate and the soils in the warmer parts of the northern states appear to be favorable to the commercial cultivation of anise, which it seems should prove a profitable crop under proper management.
Uses.—The leaves are frequently employed as a garnish, for flavoring salads, and to a small extent as potherbs. Far more general, however, is the use of the seeds, which enter as a flavoring into various condiments, especially curry powders, many kinds of cake, pastry, and confectionery and into some kinds of cheese and bread. Anise oil is extensively employed for flavoring many beverages both alcoholic and non-spirituous and for disguising the unpleasant flavors of various drugs. The seeds are also ground and compounded with other fragrant materials for making sachet powders, and the oil mixed with other fluids for liquid perfumes. Various similar anise combinations are largely used in perfuming soaps, pomatums and other toilet articles. The very volatile, nearly colorless oil is usually obtained by distillation with water, about 50 pounds of seed being required to produce one pound of oil. At Erfurt, Germany, where much of the commercial oil is made, the "hay" and the seeds are both used for distilling.
From the Heritage Herbs Collection by M.G. Kains, American Agriculturist, 1912.
| 3.544397 |
Introduction and Installation of PHP
What is PHP?
PHP is a recursive acronym for "PHP: Hypertext Preprocessor." It is a dynamic, server-side scripting language that allows an application developer to create very simple to very complex mechanisms for the web. Think of PHP as the brain and central nervous system behind your applications. PHP code can be mingled directly in with your HTML page content as long as the page has a .php extension (myPage.php). Or it can live in scripts and class files placed on the server and connected to your front-end files. Any HTML file you have can be turned into a PHP file, and PHP scripts will run inside it.
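For instance, here is a minimal sketch (the file name and the message are just illustrative, not from this tutorial) of a page saved as hello.php that mixes HTML and PHP:

<!DOCTYPE html>
<html>
<body>
<h1>My first PHP page</h1>
<?php
// Everything between the PHP tags runs on the server;
// echo writes its output into the surrounding HTML.
echo "Hello from PHP! Today is " . date("F j, Y") . ".";
?>
</body>
</html>

The visitor's browser never sees the PHP source; the server replaces everything between <?php and ?> with whatever the script prints before sending the page out.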
Why Learn PHP?
- Relatively easy to learn and understand
- Processes data fast and dynamically
- Most dynamic web applications being made today have a PHP core (brain)
- Free Open Source Technology
- Sports over 700 Built-In functions that are ready to automate your programming tasks
- Works very well with MySQL Databases (PHP+MySQL are married)
- Create and populate XML files using PHP
- Communicates with Flash (back and forth)
- Time and date functions
- Mathematics to cover any need
- Can be used to create complete social networks, and online forums
- Parse file uploads
- Processes online forms and sends emails (a short sketch follows this list)
- Imaging Libraries
- Resize Images on the fly, create shapes and colors on the fly out of nowhere
- Create files and directory folders out of thin air
- Create multidimensional data arrays, data looping, and deep data parsing
- Huge Free online resource databases for classes and functions
- The list goes on and on and on…….
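As a rough, hedged sketch of the form-handling and e-mail points above (the field names, the destination address and the file name contact.php are invented for this example, and mail() usually needs the host's mail system to be configured before it will actually deliver anything):

<?php
// contact.php - receives a POST request from an HTML form such as:
// <form action="contact.php" method="post"> ... </form>
$name    = isset($_POST['name'])    ? trim($_POST['name'])    : '';
$message = isset($_POST['message']) ? trim($_POST['message']) : '';

if ($name === '' || $message === '') {
    echo "Please fill in both fields.";
} else {
    // htmlspecialchars() keeps submitted text from being rendered as HTML
    $safeName = htmlspecialchars($name);

    // mail() hands the message to the server's mail system
    mail('you@example.com', 'New contact form message', "From: $safeName\n\n$message");

    echo "Thanks, " . $safeName . " - your message was sent.";
}
?>

Even this tiny script leans on built-in functions mentioned above (trim, htmlspecialchars, mail) rather than anything you have to write yourself.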
Who Is Using PHP in Their Websites or Applications?
• Develop PHP
• Zen Cart
• Web Intersect
And Hundreds of Thousands More
Usually any time you can interact with a website in any way, PHP is doing its thing behind the scenes there.
PHP blends well into other useful programming languages
You have two options to enable yourself to work with PHP.
1. Build and test on your web server (easy)
Most people starting in PHP development already have a web server online. Simply create your .php files, FTP them into your web directory and the server will parse them for you automatically. Most web hosts offer PHP support, and if your host does not, consider switching to a better one.
If you do not have a web server online yet, buy a Domain Name for about $10 and set up a free or paid hosting account for that domain name. Be sure to choose PHP and not ASP if given a choice. Then you can FTP files to the server and test online.
2. Create a local testing environment (complicated for beginners)
To install a web server on your PC you need to research and install three things: Apache, MySQL and PHP. You can configure all three at once by installing a WAMP server.
3. What is a WAMP server and how to install it
WAMP stands for Windows, Apache, MySQL, PHP; in order to set up a local testing server, you should use this.
You can see a video tutorial on how to install WAMP server here: http://hostinpakistan.com/learnjoomla/?p=13
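Whichever option you choose, a quick way to confirm PHP is actually being parsed (a standard check; the file name info.php here is arbitrary) is to create a one-line script and open it in the browser:

<?php
// Save this as info.php in the server's web root (for example the www
// folder of a WAMP install, or your host's public_html directory),
// then open it in a browser, e.g. http://localhost/info.php locally.
phpinfo();   // prints the PHP version and current configuration
?>

If a long configuration page appears, the server is running your PHP; if you see the raw code instead, PHP is not yet enabled for that server.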
| 3.399177 |
Luther, Martin, born at Eisleben, Nov. 10, 1483; entered the University of Erfurt, 1501 (B.A. 1502, M.A. 1503); became an Augustinian monk, 1505; ordained priest, 1507; appointed Professor at the University of Wittenberg, 1508, and in 1512 D.D.; published his 95 Theses, 1517; and burnt the Papal Bull which had condemned them, 1520; attended the Diet of Worms, 1521; translated the Bible into German, 1521-34; and died at Eisleben, Feb. 18, 1546. The details of his life and of his work as a reformer are accessible to English readers in a great variety of forms. Luther had a huge influence on German hymnody.
i. Hymn Books.
1. Ellich cristlich lider Lobgesang un Psalm. Wittenberg, 1524. [Hamburg Library.] This contains 8 German hymns.
| 3.253075 |
By Gary Bogue
Thursday, April 29th, 2010 at 7:08 am in Wolves.
The latest issue of The Journal of Wildlife Management has a very interesting article on “Survival of Colonizing Wolves in the Northern Rocky Mountains of the United States, 1982-2004.” You care about wolves? Check it out. /Gary
Humans—both predators and protectors—will decide survival of gray wolves
The Journal of Wildlife Management – Survival of the gray wolf in the northern Rocky Mountains of the United States depends not as much on the wolves as on people. Humans are both predators and protectors of this species, which has been reintroduced into parts of Wyoming, Idaho, and Montana. Humans were responsible for eradicating gray wolves from this area by the 1930s. Annual survival was considered adequate to sustain the present population, but killing, both legal and illegal, continues and should be monitored to ensure their survival.
The current issue of The Journal of Wildlife Management reports on mortality rates among these wolves since their reintroduction. The authors stress the role of continued wildlife management to ensure survival of the gray wolf, which was removed from the endangered species list in Idaho and Montana in 2009.
The reestablishment of wolves began in 1979, when wolves began to enter northwest Montana from Canada; reproduction was first documented in 1986. In 1995 and 1996, wolves were introduced into central Idaho and the greater Yellowstone area in Wyoming. The wolf recovery goal calls for metapopulations in these three states of at least 30 breeding pairs and 300 wolves. The plan also includes establishing state-managed conservation programs and taking steps to minimize damage to livestock. It is legal for ranchers to shoot wolves that threaten their livestock.
The current study sought to assess biological, habitat, and anthropogenic factors contributing to wolf mortality and to indicate whether federal protection could ensure survival. Radio collars were placed on 711 wolves from 1982 to 2004. Of these wolves, 363 died, most from human causes. Montana, where less public land is available for wolf habitat, saw the highest level of mortality. However, the overall annual survival rate was found to be adequate to sustain the wolf population.
The authors offer three recommendations for management of the gray wolf population:
*** Increase survival of wolves in surrounding areas to increase survival in Montana. Emigration and retention of wolves in this area could increase with a denser surrounding wolf population. Conflict resolution and illegal mortality should also be addressed.
*** Continue to monitor survival rates. The study found a higher rate of mortality among wolves that were collared to track livestock conflicts compared with those collared for monitoring purposes only. Legal human harvest of wolves should also be monitored.
*** Establish regulations that allow wolves the opportunity to spread and overlap territories. This will help maintain connectivity and natural dispersal among the population locations.
Full text of the article, “Survival of Colonizing Wolves in the Northern Rocky Mountains of the United States, 1982–2004,” The Journal of Wildlife Management, Volume 74, Issue 4, 2010, is available at http://www.wildlifejournals.org/perlserv/?request=get-pdf&doi=10.2193%2F2008-584
About The Journal of Wildlife Management
The Journal of Wildlife Management, published since 1937, is one of the world’s leading scientific journals covering wildlife science, management, and conservation. The Wildlife Society publishes it eight times per year. To learn more about the society, please visit http://joomla.wildlife.org
| 3.490863 |
Konstantin Petrovich Pobyedonostzev
Pobyedonostzev, Konstantin Petrovich (kənstəntyēnˈ pētrôˈvĭch pəbyĕdənôsˈtsyĭf), 1827–1907, Russian public official and jurist. He was professor of civil law at Moscow when he attracted the attention of Czar Alexander II and was appointed (1865) tutor to the future Alexander III. As procurator of the holy synod (1880–1905), he became the champion of autocracy, orthodoxy, and Russian nationalism. He had great power and under his influence Alexander III opposed any limitation of autocratic powers, tightened censorship, attempted to suppress opposition opinion, persecuted religious nonconformists, and adopted a policy of Russification of all national minorities. Pobyedonostzev also supported Pan-Slavism and in his writings strongly attacked Western rationalism and liberalism. He tutored Nicholas II and was one of his most influential advisers until the Revolution of 1905. He wrote a three-volume work on Russian civil law.
See his Reflections of a Russian Statesman (tr. 1898).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
| 3.033305 |
Algae plus salt water equals … fuel? Bilal Bomani wants to create a biofuel that is "extreme green"— sustainable, alternative and renewable. At NASA's GreenLab Research Facility, he uses algae and halophytes to create a self sustaining, renewable energy ecosystem that doesn't consume arable land or fresh water.
Bilal Bomani currently serves as the lead scientist for NASA's biofuels research program focusing on the next generation of aviation fuel. The intent is to use algae and halophytes with the goal of providing a renewable energy source that does not use freshwater, arable land or compete with food crops.
| 3.290534 |
Hepatitis is the general term for inflammation of the liver. It can be caused by infection, drugs or toxins. When most people hear the diagnosis hepatitis, they think of the hepatitis related to infection from a virus, such as hepatitis A, B or hepatitis C.
Alcoholic hepatitis is different because it is not infectious or contagious. Alcohol acts like a toxin when it causes the liver to be inflamed.
Alcoholic hepatitis can cause the same symptoms as other types of hepatitis, such as nausea and vomiting. In more severe cases, it causes jaundice, confusion and liver failure.
Most people who develop alcoholic hepatitis are heavy drinkers, but not everyone. Some people are simply more sensitive to alcohol and might develop the condition from only a few drinks per day.
To treat alcoholic hepatitis, the person must stop drinking entirely. It's also important to eat a balanced diet and take in adequate minerals and vitamins.
In mild cases, inflammation usually calms down within a few days to a few weeks. In more severe cases, people can develop liver failure either from a severe episode or from many repeated episodes.
Your friend must stop all alcohol use forever.
| 3.360523 |
Grape mealybug—Pseudococcus maritimus
Grape mealybugs are soft, oval, flattened insects. Their bodies are distinctly segmented; divisions among the head, thorax, and abdomen are not distinct. The adult female is about 5 mm long and appears smoothly dusted with a white, mealy, wax secretion. Long caudal filaments along the lateral margin of the body become progressively shorter toward the head. A new vine mealybug has recently invaded California vineyards. This species has shorter filaments than the grape mealybug.
Mealybugs excrete large quantities of sticky honeydew, which drips onto fruit clusters and later turns black from sooty mold. Some berries may crack. Mealybugs do not injure vines.
Remove loose bark in winter; young mealybugs and eggs are concealed in such places until spring. High temperatures in June kill much of the most damaging brood. Control of ants (which interfere with natural enemies) is important. Oils applied during dormancy can reduce numbers somewhat.
| 3.075814 |
Environmental stresses (such as those caused by human practices, such as monoculture) may cause explosions of some ant populations, an effect that is particularly evident within ants’ native ranges. For example, in its native range in South America, the little fire ant Wasmannia auropunctata is a pest in disturbed forests and agricultural areas where it can reach high densities. High densities of W. auropunctata have been linked with sugar cane monocultures and cocoa farms in Colombia and Brazil, respectively. In Colombia, a high abundance of the little fire ant in forest fragments has been linked with low ant diversity. The little fire ant efficiently exploits resources including nectar, refuges within vegetation and honeydew residues (of Homopteran insects), and it may out-compete and displace native myrmecofauna (Armbrecht and Ulloa-Chacón 2003). Improved land management and a reduction of primary production will alleviate the problems associated with invasive ants and the environmental stresses that cause ant population explosions.
In agricultural areas, due to the close association of the land and workers, the little fire ant may be a great nuisance to humans. This is because it is more likely to reach high densities and sting people working in the field. The increased numbers of Homoptera insects, which sap plant nutrients and make plants susceptible to disease, may cause substantial yield losses. In Cameroon, on the other hand, the spread of the little fire ant is encouraged, due to the fact that it preys on, and thereby has a role in the control of, certain herbivorous cocoa pests (Bruneau de Mire 1969, in Brooks and Nickerson 2000).
W. auropunctata may have negative impacts on invertebrates and vertebrates. They may prey on native insects and cause declines in the numbers of small vertebrates. In human habitations it may sting, and even blind, domestic pets (cats and dogs) (Romanski 2001). It is believed to have caused a decrease in reptile populations in New Caledonia and in the Galapagos Archipelago, where it eats tortoise hatchlings and attacks the eyes and cloacae of the adult tortoises (Holway et al. 2002; J. K. Wetterer pers. comm., 2003). The little fire ant is probably the most aggressive species that has been introduced into the Galapagos archipelago, where a marked reduction of scorpions, spiders and native ant species in infested areas has been observed (Lubin 1984, Clark et al. 1982, in Roque-Albelo and Causton 1999). Similarly it has been noted to decrease local arthropod biodiversity in the Solomon Islands (Romanski 2001).
W. auropunctata rarely buries myrmecochorous seeds and sometimes ingests elaiosomes without dispersing seed. In its native range, the little fire ant decreases herbivorous arthropod biodiversity, increasing the fruit and seed production and growth of the plant and decreasing pathogen attacks. W. auropunctata may also, however, exclude arthropod plant mutualists, such as plant tenders or seed dispersers (Ness and Bronstein 2004).
Please read Invasive ants impacts for a summary of the general impacts of invasive ants, such as their affect on mutualistic relations, the competitive pressure they impose on native ants and the effect they may have on vulnerable ecosystems.
Location Specific Impacts:
Human nuisance: Several residents in the infested area were stung by ants whilst swimming in their swimming pools. Dogs and cats have been stung by the ants. People working in their gardens have also been stung by the ants.
Competition: High populations of W. auropunctata have been related to marked reductions of other ant species in agricultural lands such as cocoa farms in Brazil (Delabie 1988, Majer et al 1994, in Armbrecht and Ulloa-Chacón 2003).
Cauca River Valley (Colombia)
Competition: W. auropunctata is widely distributed throughout South America and is able to displace the local myrmecofauna. The positive relation between W. auropunctata abundance and ant-plant associations in understory vegetation reinforces the belief that W. auropunctata is highly able to exploit and monopolise resources such as extrafloral nectar, refuges within vegetation, and honeydew residues from Homopteran insects, and prevent other ant species from utilising these resources successfully (Armbrecht and Ulloa-Chacón 2003).
Economic/Livelihoods: Colonises disturbed and agricultural areas, sometimes becoming an economic problem (Fowler et al. 1990, in Armbrecht and Ulloa-Chacón 2003).
Galapagos Islands (Ecuador)
Interaction with other invasive species: "Wasmannia auropunctata is often associated with another serious pest: the cottony cushion scale, Icerya purchasi, and has been observed transporting immature stages and tending colonies" (Wetterer & Porter, 2003).
Reduction in native biodiversity: The little fire ant is probably the most aggressive species that has been introduced into the Galapagos Archipelago. Marked reductions in scorpions, spiders, and native ant populations in areas infested with the little fire ant have been observed. Many other arthropods are probably also affected, but this has not been measured (Roque-Albelo and Causton 1999).
Human nuisance: Images and text by Eric Loeve available at Fenua Animalia document the enormous impact of this invasive ant on people's lives. This is a sign of things to come as the Wasmannia auropunctata infestations become more and more established and as the scale and numbers build.
Reduction in native biodiversity: A progressive blindness syndrome known as "Florida spots", "Florida keratitis/keratopathy" or "tropical keratopathy" (Roze et al., 2004; Moore, 2005) has been documented in mammals and other animals that live in the proximity of colonies of the little fire ant. No scientific study had been conducted to prove that the little fire ants were responsible for this.
Scientists from Fenua animalia, mapping a little fire ant colony in the heights of the Mahina commune (Tahiti), discovered that ant colony areas were also sheltering endemic foci of "Florida keratopathy". 24 cases of keratopathy and 12 control cases were studied within the established mega-colony. Results of the analysis showed that the affected animals were those living in contact with the ants. Apart from this predisposing factor, the scientists did not find any other characteristic facilitating this outbreak (age, sex, viral status regarding feline leukosis). The study highlights the symptoms of acute attack, such as blepharospasm and whimpering, and the topography of the injuries shows that the median area of the eye is the most affected. Though the pathophysiologic model is not yet understood, the authors of the study believe, as do many authors previously cited, that the most probable etiologic agent of this pathology is the little fire ant Wasmannia auropunctata (Dr. Leonard Theron, Assistant, Faculty of Veterinary Medicine of Liege, Production Animals Clinical Department, pers. comm., September 2009).
Other: "Wetterer et al. (1999) found anecdotal evidence of an impact onvertebrates in Gabon. House cats (Felis catus) at Lopé often have W.auropunctata in their fur, and several cats developed corneal cloudingand blindness. William Karesh, field veterinarian for Wildlife Conserva-tion Society, found the cats' symptoms consistent with trauma, not communicable disease. More disturbingly, elephants (Loxodonta africana) with cloudy corneas arecommon in Lopé and Petit Loango, as well as Wonga Wongué Reserveon the central coast of Gabon (100 km south of Libreville). The possibleconnection between W. auropunctata and eye maladies deserves furtherstudy."
Reduction in native biodiversity: W. auropunctata has a negative impact on the native ant community in Gabon. Nine sites In Lope National Park were surveyed. A highly significant correlation between ant diversity and length of infestation by W. auropunctata was found. Many more native ant species were present in areas not infested with W. auropunctata (39.0 ± 4.6) compared with areas infested by W. auropunctata for approximately 5–10 yr (7.0 ± 6.2 and 1.7 ± 1.2, respectively). In infested areas, W. auropunctata made up the bulk of specimens collected in every plot (Walker, 2006).
New Caledonia (Nouvelle Calédonie)
Human nuisance: Causes painful stings.
Reduction in native biodiversity: W. auropunctata have a negative impact on endemic ants, native arachnids, beetles, and reptiles (Wetterer & Porter, 2003 and references therein).
Wewak (Papua New Guinea)
Human nuisance: The ants over-run gardens and homes in residents' houses and sting people, especially children. Control of insect pests within houses is expensive and many families may be forced to live with the ant.
Guadalcanal (Solomon Islands)
Human nuisance: Causes painful stings.
Other: Locals on Guadalcanal have reported that their dogs (Canis domesticus) were all gradually blinded by the ants' venom and rarely lived more than five years (Wetterer 1997 in Wetterer & Porter, 2003).
Hawaii (United States (USA))
Economic/Livelihoods: Many important economic crops in Hawaii are harvested by hand. W. auropunctata is small, cryptic, and has a painful sting. In some cases, agricultural workers refuse to harvest from infested trees or orchards, which is a critical issue for farms that rely on hand harvesting. W. auropunctata is also of quarantine concern, because the presence of ants on exported fruits and vegetables from Hawaii can cause rejection and return shipment to Hawaii (Costa et al., 2005; Follett & Taniguchi 2007 in Souza et al., 2006)
Human nuisance: Cause painful stings.
Economic/Livelihoods: It deters people from tending their crops, reducing productivity and imposing economic hardship on them.
Human nuisance: It stings people and animals including chickens and can cause allergic reactions in people
Vanua Lava Is. (Vanuatu)
Reduction in native biodiversity: A striking lack of butterflies has been noted on the island of Vanua Lava. This is compared to Mota Lava, which is free of W. Auropunctata and has abundant butterflies (J. Tennant, pers. comm. in Wetterer & Porter, 2003).
| 3.400118 |
The next day we learned that frogs come from eggs. Frog eggs look a little different from the eggs of other animals and it took a little convincing to persuade them that they were actually eggs. We talked about the life cycle of frogs. After that we made frogs, complete with long, curly tongues and wrote about them. Here is how they turned out...
I just love these little guys!
After frogs & turtles, we talked about snakes!
I am not a fan of snakes, but my students ALWAYS love to learn about them! This year my class is heavy on boys (12 out of 17) and this unit keeps them so engaged! I LOVE teaching through themes. I know that the theme keeps them so engaged that I can slip in writing, reading & math skills without them even realizing it! :) I do not have pictures of our snakes because I couldn't get them to turn out. We colored 2 sides of a paper plate and then cut them in a swirl so that they looked like curly snakes. Then, I hung them from the ceiling. They would twirl when the air kicked in and the kids loved it!
All in all, this was a great unit. My kids stayed engaged and excited the entire week. I was able to teach them valuable math, reading, writing, & science skills...what more could I have asked for? :)
| 3.326552 |
A cycle of three years, in the course of which the whole Law is read on Sabbaths and festivals. This was the practise in Palestine, whereas in Babylonia the entire Pentateuch was read in the synagogue in the course of a single year (Meg. 29b). The modern practise follows the Babylonian; but as late as 1170 Benjamin of Tudela mentioned Egyptian congregations that took three years to read the Torah ("Itinerary," ed. Asher, p. 98). The reading of the Law in the synagogue can be traced to at least about the second century
The Masoretic divisions known as "sedarim" and variously indicated in the text, number 154 in the Pentateuch, and probably correspond, therefore, to the Sabbath lessons of the triennial system, as was first surmised by Rapoport ("Halikot Ḳedem," p. 11). The number varies, however, so that Menahem Me'iri reckoned 161 divisions, corresponding to the greatest number of Sabbaths possible in three years; the Yemen grammars and scrolls of the Pentateuch enumerate 167 (see Sidra); and the tractate Soferim (xvi. 10) gives the number as 175 (comp. Yer. Shab. i. 1). It is possible that this last division corresponds to a further development by which the whole of the Pentateuch was read twice in seven years, or once in three and a half years. The minimum seder for a Sabbath portion when seven persons are called up to the Law (see 'Aliyah) should consist of twenty-one verses, since no one should read less than three verses (Meg. iv. 4). Some sedarim have less than twenty-one verses, however, as, for example, Ex. xxx. 1-8.
Divisions and Beginning of the Cycle.
If the 154 sedarim are divided into three portions corresponding to the three years, the second would commence at Ex. xii. and the third at Num. vi. 22, a passage treating of the priestly blessing and the gifts of the twelve tribal chiefs after the erection of the Tabernacle. Tradition assumes that the events described in Num. vi. took place on the 1st of Nisan, and it would follow that Gen. i. and Ex. xi. would also be read on the first Sabbath of that month, while Deut. xxxiv., the last portion of the Pentateuch, would be read in Adar. Accordingly, it is found that the death of Moses is traditionally assigned to the 7th of Adar, about which date Deut. xxxiv. would be read.
A. Büchler has restored the order of the sedarim on the assumption that the reading of the Law was commenced on the 1st of Nisan and continued for three years, and he has found that Genesis would be begun on the 1st of Nisan, Deuteronomy on the 1st of Elul, Leviticus on the 1st of Tishri, and Exodus and Numbers on the 15th of Shebaṭ, the four New-Years given in the Mishnah (R. H. i. 1). Nisan has always been regarded as the ecclesiastical New-Year. This arrangement would account for many traditions giving definite dates to Pentateuchal occurrences, the dates being, strictly speaking, those of the Sabbaths on which the lessons recording the occurrences are read. Thus, it is declared that the exodus from Egypt took place on Thursday, the 15th of Nisan ("Seder 'Olam," x.), and the passage relating to the Exodus was read on that day. The slaying of the Passover lamb is said to have occurred on the 10th of Nisan, and is described in Ex. xii. 21, the passage read in the triennial cycle on the second Sabbath of Nisan, which would be the 10th where the 15th fell on Thursday. This likewise explains the tradition that the Israelites encamped at Rameses on a Sabbath, the 17th of Nisan, on which Ex. xii. 37 would be read in the triennial cycle. The tradition that Rachel was remembered on New-Year's Day (R. H. 10b) is due to the fact that in the first year of the cycle the sidra Gen. xxx. 22, beginning, "And God remembered Rachel," would be read on Rosh ha-Shanah. As the reading of Deut. xxxiv. would occur on the 7th of Adar, there would be four remaining Sabbaths to be filled in before the new triennial cycle, which began with Nisan. Four special Sabbaths, Sheḳalim, Zakor, Parah, and Ha-Ḥodesh, still occur in Adar. Including these and the festival parashiyyot, and possibly also the special sedarim for Ḥanukkah and Purim, eleven extra divisions would be obtained, making up the 166 or 167 of the Yemen Bible.
Connections Between Readings and Festivals.
The triennial cycle seems to have been established in New Testament times. John vi. 4 contains an allusion to the Passover, and vii. 2 to the Feast of Tabernacles, while in vi. 59, between the two, reference is made to a sermon on the manna delivered in the synagogue at Capernaum. This would be appropriate for a discourse on the text for the first or eighth of the month Iyyar (i.e., between Passover and Tabernacles), which, in the triennial cycle, dealt with Ex. vi. 1-xvii. 1, where the account of the manna is given. So, too, at the season of Pentecost the cycle of readings in the first year would reach Gen. xi., which deals with the story of Babel and the confusion of tongues, so that in Acts ii. Pentecost is associated with the gift of the spirit which led to a confusion of tongues. Similarly, the Decalogue was read on Pentecost in the second year of the cycle, whence came, according to Büchler, the traditional association of the giving of the Law with Pentecost. Ex. xxxiv., which contains a second Decalogue, is accordingly read on the 29th of Ab, or 80 days after Pentecost, allowing exactly forty days before and after the sin of the golden calf. So too Deut. v., containing a third Decalogue, began on the same day, the 29th of Ab. The above diagram shows the arrangement and the connection of the various dates with the successive sedarim, the three concentric rings showing the three cycles, and the twelve radii separating the months of the Jewish year indicated in the inner circle.
The Triennial Cycle of the Psalms.
In addition to this division of the Pentateuch into a triennial reading, E. G. King has proposed an arrangement of the Psalms on the same system, thus accounting for their lection in a triennial cycle which varied between 147 and 150 Sabbaths; and he also shows the agreement of the five divisions or books of the Psalms, now fixed by the doxologies, with the five divisions of the Pentateuch, the first and third books of both the Psalter and the Pentateuch ending in the month Shebaṭ. Ps. lxxii. 19 would be read on the same day as Ex. xl. 34, the two passages throwing light on each other. The Asaph Psalms (lxxiii.-lxxxiii.) would begin, on this principle, on the Feast of "Asif" in the seventh month, just when, in the first year of the Pentateuchal cycle, Gen. xxx. et seq. would be read, dealing with the birth of Joseph, whose name is there derived from the root "asaf." A still more remarkable coincidence is the fact that Ps. c. would come just at the time in Adar when, according to tradition, the death of Moses occurred, and when Deut. xxxiii. would be read; hence, it is suggested, originated the heading of Ps. xc., "A prayer of Moses, the man of God." The Pilgrim Psalms (cxx.-cxxxiv.) would be read, in this system, during the fifteen Sabbaths from the 1st of Elul to Ḥanukkah, the very time when a constant procession of pilgrims was bringing the first-fruits to the Temple. Many other associations of appropriate Psalms with the festivals which they illustrate have been pointed out.
Besides these examples Büchler gives the following sections of the Pentateuch read on various Sabbaths in the different years of the cycle, basing his identification on certain haggadic associations of the Sabbaths with the events to which they refer. In the first year the four sedarim of Nisan appear to be Gen. i. 1-ii. 3, ii. 4-iii. 21, iii. 22-iv. 26, and v. 1-vi. 8. The second Sabbath of Iyyar was probably devoted to Gen. vi. 9-vii. 24 (comp. vii. 1). In the second year the readings on the Sabbaths of Nisan deal with Ex. xii., xiii., xiv., and xv., ch. xiv. concurring with the Passover; and it is for this reason that the Haggadah states that Adam taught his sons to bring a Passover offering, since the passage Gen. iii. was read during the Passover week in the cycle of the first year. In Iyyar of the second year the readings included Ex. xvi. 1, xxviii., xvii. 1, xviii. 1, and xix. 6, there being usually five Sabbaths in that month. Two of the portions for Siwan are also identified as Ex. xx. 1, xxii. 4; at the end of Elul Lev. i. was read; while on the first days of Tishri ib. iv. 1, v. 1, and vi. 12 were the readings, and on the 10th (Yom Kippur) ib. viii. 1 and x. 7.
In the third cycle, besides the account of the death of Moses already referred to as being read on the 7th of Adar, or the 7th of Shebaṭ, in Nisan the four pericopes were Num. vi. 22, vi. 48, viii. 1, and ix. 1, while the third Sabbath of Iyyar was devoted to the reading of Num. xv. 1, and the 3d of Ab to that of ib. xxxvi. Some of these passages were retained for the festival readings, even after the annual cycle had been introduced.
Hafṭarot.
Besides the readings from the Law the readings from the Prophets were also arranged in a triennial cycle. These appear to have been originally a few selected verses intended to strengthen the passage from the Law read previously, and so connect it with the following discourse of the preacher, which took for its text the last verse of the hafṭarah. Thus there is evidence that Isa. lii. 3-5 was at one time regarded as a complete hafṭarah to Gen. xxxix. 1. Even one-verse hafṭarot are known, as Ezek. xlv. 17 and Isa. lxvi. 23, read on New Moons. A list of the earlier hafṭarot suitable for the festivals is given in Meg. 31a. Evidence of two hafṭarot for one festival is shown in the case of Passover, for which Josh. v. 10 and Josh. iii. are mentioned. This can easily be explained by the existence of a triennial cycle, especially as Num. ix. 2-3 was the reading for the first day of Passover, and corresponds exactly to Josh. v. 20. In the case of the New-Year it has been possible to determine the hafṭarot for the three cycles: I Sam. ii. 21, Jer. xxxi. 19, and, for the third year, Joel ii. 1, corresponding to the reading Deut. v., which formed the Pentateuchal lesson. For Ḥanukkah, the Torah seder of which treats of lamps (Num. viii. 1-2), the hafṭarot Zech. iv. 2 and I Kings vii. 49 were selected as being suitable passages. A third hafṭarah is also found (I Kings xviii. 31), completing the triennial arrangement.
The Karaites adopted some of the triennial hafṭarot in their reading of the Law. The hafṭarot of the first year of the cycle can often be identified by this fact. Of the twenty-nine sedarim of the Book of Exodus eighteen were taken from Isaiah, three from Jeremiah, four from the Minor Prophets, three from the historical works, and one from Ezekiel, whose words, for some reason, seem on the whole to have been eschewed by those who selected the prophetic readings. A certain confusion seems to have arisen among the hafṭarot, owing to the fact that among some congregations the reading of the Pentateuchal portions was begun on the 1st of Elul (also regarded as a New-Year).
In the Masoretic text of the Prophets occur a number of divisions marked as sedarim which correspond to smaller divisions in the Torah. Among these may be mentioned:
- I Kings vi. 11-13, corresponding to Ex. xxv.
- Ezek. xii. 20, corresponding to Lev. xxvi. 3 or 4?
- I Sam. vi. 14, corresponding to Num. iv. 17
- Josh. xvii. 4, corresponding to Num. xxvi. 52
- Jer. ix. 22-24, corresponding to Deut. viii.
- II Kings xiii. 23, corresponding to Deut. x.
- Judges ii. 7, corresponding to Deut. xxxi. 14
The present arrangement of hafṭarot seems to have been introduced into Babylonia by Rab, especially those for the three Sabbaths of repentance preceding the Ninth of Ab, and the three consolatory ones succeeding it. Büchler has traced the prophetic portions of these three latter Sabbaths for each of the three years of the cycle as follows:
- I. Isa. xl. 1, li. 12, liv. 11.
- II. Isa. xlix. 14, lx. 1, lxi. 10.
- III. Isa. liv. 1, Zech. ii. 14, ix. 9.
He finds traces of the triennial cycle also in the prophetic portions for the four supplementary Sabbaths, Sheḳalim, Zakor, Parah, and Ḥodesh. For Sheḳalim hafṭarot are found from (a) II Kings xii., (b) Ezek. xlv. onward (among the Karaites), and (c) I Kings iv. 20 onward. It is tolerably clear that these were the hafṭarot of the three different years of the cycle when that particular Sabbath came round. It is possible that when the arrangement of the calendar and of the reading of the Law was first made these four supplementary Sabbaths were intended to fill out the time between the 7th of Adar, when the account of the death of Moses in Deut. xxxiv. was read, and the first Sabbath in Nisan, when the cycle began. Traces of the cycle are also found in the hafṭarot for the festivals. Thus, on the first day of Passover, Ex. xii. 29 was read, approximately in its due place in the cycle in the second year; and corresponding to this Josh. v. 10 was read in the Prophets, whereas there are also traces of Num. ix. 22 being read on that day, as would occur in the third year of the cycle, when Josh. iii. was read as the hafṭarah. The passage for the second day of Passover, Num. ix. 1 et seq., which was introduced by the Babylonians, has attached to it II Kings xxiii. 21 as the hafṭarah, and would correspond to the section in the first year's cycle. On Pentecost, Ex. xix. was read in the second year, while Gen. xi. 15 was read for the first year of the cycle. So, too, on New-Year, Gen. xxx. 22 was read in the first year, Lev. iv. in the second, and Deut. v. in the third, the corresponding hafṭarot being Jer. xxxi. 19, I Sam. ii., and Joel ii. For the Sukkot of the first year for the sidra of Gen. xxxii., the hafṭarah was Zech. xiv. 16-19; for that of the second year, Lev. ix. 10, the hafṭarah was I Kings viii. 8; and for that of the third year, Deut. viii. 9, the hafṭarah was Isa. iv. 6 (among the Karaites).
In the accompanying diagram the sidrot of the Law for the Sabbaths of the three years of the cycle are indicated, as well as the hafṭarot which accompany them. Sometimes these have alternatives, and in several cases, as for Gen. xl. 23, xliii. 14, Ex. i. 1, xxvii. 20, and Lev. xix. 1, three hafṭarot are given for the sidra, pointing in all probability to the hafṭarot reading during the triennial cycle. In this enlarged form the connection of the beginning of the reading of the books with the various sacred New-Years, those of Nisan, of Elul (for tithes), and of Shebaṭ (for trees), comes out most clearly and convincingly. The manner in which the present-day reading of the Law and the Prophets has been derived from the triennial cycle is shown clearly by the diagram. It would appear that at the beginning of the cycle all the sidrot of the month were read together; but this was soon given up, as obviously it would result in the whole of the Law being read in three-quarters of a year or less.
There are indications of the application of the triennial cycle to the Psalms also. The Aggadat Bereshit treats twenty-eight sedarim of Genesis uniformly in three sections, one devoted to a passage in Genesis, the next to a corresponding prophetic passage (hafṭarah), and the third to a passage from the Psalms, generally cognate with either the Law or the Prophets. It may be added that in Luke xxiv. 44 a threefold division is made of "the Law of Moses and the Prophets and the Psalms."
The transition from the triennial to the annual reading of the Law and the transference of the beginning of the cycle to the month of Tishri are attributed by Büchler to the influence of Rab, and may have been due to the smallness of the sedarim under the old system, and to the fact that people were thus reminded of the chief festivals only once in three years. It was then arranged that Deut. xxviii. should fall before the New-Year, and that the beginning of the cycle should come immediately after the Feast of Tabernacles. This arrangement has been retained by the Karaites and by modern congregations, leaving only slight traces of the triennial cycle in the four special Sabbaths and in some of the passages read upon the festivals, which are frequently sections of the triennial cycle, and not of the annual one. It would further be of interest to consult the earlier lectionaries of the Church (which has borrowed its first and second lessons from the Jewish custom) to see how far they agree with the results already obtained for the triennial cycle. The Church father Chrysostom about 175
- Büchler, in J. Q. R. v. 420-468, vi. 1-73;
- E. N. Adler, ib. viii. 528-529;
- E. G. King, Journal of Theological Studies, Jan., 1904;
- I. Abrahams, in J. Q. R. xvi. 579-583.
| 3.09183 |
JURBARKAS (Ger. Jurburg), town in S.W. Lithuania; until the incorporation of Lithuania within Russia in 1795, the town belonged to the principality of Zamut (Zhmud; Samogitia); subsequently, until the 1917 Revolution, it was in the province of Kovno. Jews who visited Jurbarkas at the end of the 16th century are mentioned in the responsa of Meir b. Gedaliah of Lublin (Metz, 1769, 4a no. 7). Within the framework of the Lithuanian Council (see *Councils of the Lands) the community of Jurbarkas belonged to the province (galil) of Kaidany (Kedainiai). In 1766, 2,333 Jews were registered with the community. A wooden synagogue built in Jurbarkas during the second half of the 17th century was preserved until the Holocaust. There were 2,527 Jews registered with the community in 1847. The Jews numbered 2,350 (31% of the total population) in 1897, and 1,887 in 1923. In June–September 1941, after the occupation of the town by the Germans, some 1,000 Jews were murdered at the cemetery and outside the town.
Lite (1951), 1595–97, 1849–54, index 2; M. and K. Piechotka, Wooden Synagogues (1959), 200; Yahadut Lita, 1 (1960), index.
Source: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.
| 3.240218 |
Physical exercise is a great way to tone your body and lose weight. There are two basic types of exercise: aerobic and anaerobic.

Aerobic exercise promotes cardiovascular fitness by raising your pulse to a targeted level. It's recommended that you exercise at your target heart rate for thirty minutes, three times a week. These exercises strengthen your heart, and allow the heart to pump more blood. Aerobic exercise improves the capacity of the lungs, helps control weight, and increases muscle and joint flexibility, making you less susceptible to injury. Some examples of aerobic exercise are walking, jogging, bicycling, swimming, racquetball, and aerobic dance. Aerobic exercise also helps to reduce risks associated with developing heart disease.

Anaerobic exercise focuses on specific muscles, with a goal of increasing their strength, mass, and/or endurance. Weight lifting is an example of anaerobic exercise. This form of exercise won't provide as many benefits as aerobic exercise, but it's a good supplement to your aerobic workout, and can also help increase bone density.

Remember, there are many advantages to regular exercise. It can help you sleep better, handle stress better, and even improve the way you look and feel. An ideal exercise program will include both aerobic and anaerobic activity. However, the program should be tailored to your individual needs. If you're over thirty-five or have had medical problems, talk to a doctor before beginning your exercise routine. For more information on aerobic and anaerobic exercise, speak to a health care specialist.
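As a rough illustration of the "target heart rate" mentioned above, a common rule of thumb (an assumption added here, not stated in the passage) estimates maximum heart rate as 220 minus your age, with a target zone of roughly 50 to 85 percent of that maximum:

$$\text{HR}_{\max} \approx 220 - \text{age}, \qquad \text{target zone} \approx 0.50\,\text{HR}_{\max} \ \text{to} \ 0.85\,\text{HR}_{\max}$$

For example, a 40-year-old would estimate a maximum of about 180 beats per minute and a target zone of roughly 90 to 153 beats per minute. Your own target should be set with a health care provider.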
| 3.397762 |
Another collaborative effort by the team that created The Poet King of Tezcoco: A Great Leader of Ancient Mexico (2007) chronicles the life of a controversial figure in pre-colonial Mesoamerica.
The indigenous woman who would serve as Hernán Cortés' interpreter and companion was born in the early 1500s as Malinali and later christened Marina. She is now called La Malinche. Besides serving as translator to the Spaniard, she also gave him advice on native customs, religious beliefs and the ways of the Aztec. While Marina's decision to help the Spanish in their often brutal quest for supremacy has led to many negative associations, others see her as the mother of all Mexicans, as she and Cortés had the first recorded mestizo child. Although many of the specifics of Marina's life went unrecorded, Serrano strengthens the narrative with quotations from her contemporaries and provides a balanced look at the life of a complicated, oft-maligned woman. Headers provide structure as events sometimes shift from the specific to the very broad, and some important facts are glossed over or relegated to the timeline. Reminiscent of pre-colonial documents, the illustrations convey both Marina's adulation of Cortés and the violence of the Spanish conquest, complete with severed limbs, decapitations and more.
An inventive introduction to a fascinating historical figure. (map, chronology, glossary, sources and further reading) (Nonfiction. 10-12)
| 3.504341 |
Disruptive behavior is an ongoing pattern of conduct that disturbs and upsets the day-to-day interpersonal environment of the workplace. Although disruptive behaviors have long been a concern, these behaviors have gone unchecked and sometimes are accepted as part of the workplace environment.
Disruptive behavior is inappropriate behavior that interferes with the functioning and flow of the workplace. It hinders or prevents employees from carrying out their workplace responsibilities. It is important that managers and front line staff address disruptive behavior promptly. If no one tackles disruptive behavior it typically continues to escalate, resulting in negative consequences for the perpetrator as well as others.
Individuals who engage in disruptive behavior in a work setting can almost always find ways to justify their bad behavior. Although there may be legitimate concerns about questionable workplace tactics and approaches, there is a gross logical error rooted in the assumption that angry, potentially dangerous, and devaluing behavior can positively advance the company or organization's objectives. People who display disruptive behavior generally lack the ability to be self-reflective and are often clueless about the way their manner impacts others.
Disruptive employees can be understood as having particular shortcomings related to self-regulation. Self-regulation includes one's capacity to manage and contain anxiety, process emotional states, maintain balance in self-esteem, and respect other people's points of view, all while remaining engaged in productive, goal-oriented work with others.
Examples of Disruptive Behavior:
Organizations and companies that want to plainly demonstrate zero tolerance for inappropriate and disruptive behaviors must also provide the resources and mechanisms needed to safeguard against such behaviors. This includes opportunities to improve teamwork, foster a sense of mutual respect, and improve communication. These efforts also enhance the workplace environment and reduce the risk of legal action, loss of productivity, absenteeism, turnover, low morale, and lack of trust.
Eliminating the Disruptions:
-Code of conduct
-Training and education
-Standards for behavior
-Procedures for investigation
Lastly, all of us are responsible for, and need to be empowered to, address and understand the changes in workplace culture necessary to eliminate disruptive behavior.
| 3.273407 |
When you need a portable, convenient power source, you can rely on batteries.
Batteries of all shapes and sizes supply power to everyday electronics like toys and power tools, but batteries also work where we don't see them too. During a power outage, phone lines still operate because they are equipped with lead-acid batteries. Batteries help control power fluctuations, run commuter trains, and provide back-up power for critical needs like hospitals and military operations.
The versatility of batteries is reflected in the different sizes and shapes, but all batteries have two common elements that combine to make power: an electrolyte and a heavy metal.
Just the Facts
Batteries contain heavy metals such as mercury, lead, cadmium, and nickel, which can contaminate the environment when batteries are improperly disposed of. When incinerated, certain metals might be released into the air or can concentrate in the ash produced by the combustion process.
One way to reduce the number of batteries in the waste stream is to purchase rechargeable batteries. Nearly one in five dry-cell batteries purchased in the United States is rechargeable. Over its useful life, each rechargeable battery may substitute for hundreds of single-use batteries.
Lead-Acid Automobile Batteries
Nearly 90 percent of all lead-acid batteries are recycled. Almost any retailer that sells lead-acid batteries collects used batteries for recycling, as required by most state laws. Reclaimers crush batteries into nickel-sized pieces and separate the plastic components. They send the plastic to a reprocessor for manufacture into new plastic products and deliver purified lead to battery manufacturers and other industries. A typical lead-acid battery contains 60 to 80 percent recycled lead and plastic.
Non-Automotive Lead-Based Batteries
Gel cells and sealed lead-acid batteries are commonly used to power industrial equipment, emergency lighting, and alarm systems. The same recycling process applies as with automotive batteries. An automotive store or a local waste agency may accept the batteries for recycling.
Dry-cell batteries include alkaline and carbon zinc (9-volt, D, C, AA, AAA), mercuric-oxide (button, some cylindrical and rectangular), silver-oxide and zinc-air (button), and lithium (9-volt, C, AA, coin, button, rechargeable). On average, each person in the United States discards eight dry-cell batteries per year.
Alkaline and Zinc-Carbon Batteries
Alkaline batteries are the everyday household batteries used in flashlights, remote controls, and other appliances. Several reclamation companies now process these batteries.
Most small, round "button-cell" type batteries found in items such as watches and hearing aids contain mercury, silver, cadmium, lithium, or other heavy metals as their main component. Button cells are increasingly targeted for recycling because of the value of recoverable materials, their small size, and their easy handling relative to other battery types.
The Rechargeable Battery Recycling Corporation (RBRC) , a nonprofit public service organization, targets four kinds of rechargeable batteries for recycling: nickel-cadmium (Ni-CD), nickel metal hydride, lithium ion, and small-sealed lead. Its "Call2Recycle!" program offers various recycling plans for communities, retailers, businesses, and public agencies.
| 3.646218 |
Most common smoke detectors (Fig. 13-2) contain a small amount of 241Am, a radioactive isotope. 241Am is produced and recovered from nuclear reactors. Alpha particles emitted by the decays of 241Am ionize the air (split the air molecules into electrons and positive ions) and generate a small current of electricity that is measured by a current-sensitive circuit. When smoke enters the detector, ions become attached to the smoke particles, which causes a decrease in the detector current. When this happens, an alarm sounds. These detectors provide warning for people to leave burning homes safely. Many lives have been saved by their use.
Because the distance alpha particles travel in air is so short, there is no risk of being exposed to radiation by having a smoke detector in the house. Since ionization-type smoke detectors contain radioactive materials, they should be recycled or disposed of as radioactive waste. It is important to follow the instructions that come with the smoke alarm when they need to be discarded.
| 4.224923 |
Cosmogenic Nuclide Group
Humans live on the earth’s surface and Earth Surface Processes (ESP) are cornerstones defining fundamental boundaries for civilization. Many of these processes occur so rapidly and unexpectedly that they have daunting consequences. We are poorly equipped to predict their nature and possible impacts due to the lack of scientific understanding. In particular, the impact of current environmental change on the nature of Earth Surface Processes is hardly predictable. It is a high priority challenge for modern earth sciences to better understand such processes. One of the most promising approaches to this task is the quantitative investigation of ESP from the past to the present, applying the insights gained to current and future environmental challenges. The leading technique for realizing this is the application of terrestrial cosmogenic nuclides.
The LDEO Cosmogenic Nuclide Group develops terrestrial cosmogenic nuclide techniques and applies those as chronometers and tracers in the Earth Sciences. Terrestrial cosmogenic nuclides are produced by interactions between secondary cosmic rays and near surface rocks. Our research interests cover a wide spectrum of earth scientific disciplines and include timing of ice ages, subglacial erosion rates, uplift rates of Pleistocene terraces, and a better understanding of the production systematics of cosmogenic nuclides. We apply the full spectrum of cosmogenic nuclides, including the routine extraction of 10Be, 26Al, and 36Cl. In cooperation with the LDEO noble gas group (Gisela Winckler), we also routinely measure cosmogenic 3He. Recently, we have pioneered the terrestrial 53Mn technique as a new monitor of earth surface processes, and we also have established an extraction line for in situ 14C from quartz.
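As a rough, textbook-style illustration of how such nuclides serve as chronometers (an idealization added here, not taken from the group's own description): for a rock surface with negligible erosion, the concentration N of a cosmogenic radionuclide with production rate P and decay constant λ builds up as

$$N(t) = \frac{P}{\lambda}\left(1 - e^{-\lambda t}\right),$$

so a measured concentration can be inverted for the exposure age t. For a stable nuclide such as 3He (λ → 0), this reduces to N ≈ P·t, i.e., t ≈ N/P.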
| 3.16009 |
o‧rig‧i‧nal1 S1 W1
1. first [only before noun]: existing or happening first, before other people or things:
The land was returned to its original owner.
The kitchen still has many original features (=parts that were there when the house was first built).
the original meaning of the word
The original plan was to fly out to New York.
2. completely new and different from anything that anyone has thought of before:
I don't think George is capable of having original ideas!
That's not a very original suggestion.
a highly original design
His work is truly original.
3. a work of art [only before noun]: an original work of art is the one that was made by the artist and is not a copy:
The original painting is now in the National Gallery in London.
an original Holbein drawing
| 3.002988 |
Blood Teacher Resources
Find teacher approved Blood educational resource ideas and activities
A series of diagrams and photographs is a vivid tool for delivering a lesson about blood vessels. Each slide has notes the lecturer can use to explain it. Your young biologists will increase their understanding of the structure and function of arteries, veins, and capillaries. The final slide provides a comparison chart for them to copy and complete as a review of the information absorbed.
A thorough commentary on blood type is presented in this handout. Antigens and antibodies are defined. Punnett squares and a pedigree chart help to clarify. Human biology or genetics learners then apply their knowledge to two situations: two newborn baby girls being possibly switched in the hospital and a crime scene investigation. This is an engaging activity that ends with a lab activity simulating the blood typing and identification of the perpetrator.
Although there are vocabulary terms in this PowerPoint that use British spelling, the presentation is attractive and educational. The content flows from the general composition of blood, into the different types of blood cells and their functions. The concluding slide has review questions that you can use to assess student retention.
In this blood type worksheet, students create a wheel showing blood type, antigens and the genes involved in coding for each blood type. Students use the wheel to answer 16 questions about blood type and they complete a chart with the genes, antigens and blood types using what they learned from the wheel.
In this simulation activity, young biologists examine blood types to determine whether the death rate in a hospital was caused by incorrect identification of patient blood types. You will need to obtain and follow the procedures of a blood typing kit in order to carry out this lab activity in your classroom. Using this scenario makes a blood typing or scientific method lesson more interesting, and the provided lab sheet makes it easier for you to implement.
There are factors that can be controlled and factors that can't be controlled regarding blood pressure. Read through these handouts and learn about the different factors. Then answer some questions about the information just learned. There is even an activity to determine resting heart rate, and then to make calculations regarding one's target heart rate range. Discover why all of this information is important.
In this blood worksheet, students watch a video called "The Epic Story of Blood" and answer 24 questions about the creation of blood, how it is produced, blood donation, blood banks and transfusions. Students take a short quiz about blood and what they learned in the video.
Here is a sharp presentation on multiple alleles using the classic blood type example. Viewers revisit codominance and dominance and learn that blood type is actually a combination of both. They use Punnett squares to solve blood type problems. They learn about agglutination and antibodies that make blood type crossing a topic of study. Follow this PowerPoint with a blood typing lab activity and more Punnett square practice.
| 4.478756 |
From Irish Druids and Old Irish Religions, 1894
The MISTLETOE had an early reputation as a guide to the other world. Armed with that golden branch, one could pass to Pluto's realm:—
"Charon opposed—they showed the Branch.
They show'd the bough that lay beneath the vest;
At once his rising wrath was hush'd to rest."
Its connection with health, as the All-heal, is noted by the poet Callimachus, under the appellation of panakea, sacred to Apollo:—
"Where'er the genial panakea falls,
Health crowns the State, and safety guards the walls."
As the seat of the life of the Oak, as then believed, it had special virtues as a healer. The Coel-Creni, or omen sticks, were made of it, and also divining-rods. It had the merit of revealing treasure, and repelling the unwelcome visits of evil spirits. When cut upon St. John's Eve, its power for good was greatest. "While the shamrock is emblematic of the equinox, the mistletoe is associated with the solstice," says St. Clair.
The ancient Persians knew it as the healer. It told of the sun's return to earth. Farmers in Britain used to give a sprig of mistletoe to the first cow calving in the year. Forlong points out the recovery of old heathen ideas; saying, "Christian priests forbade the mistletoe to enter their churches; but yet it not only got in, but found a place over the altars, and was held to betoken good-will to all mankind." It was mysteriously associated with the dove. The Irish called it the uil-iceach: the Welsh, uchelwydd. The County Magazine for 1792 remarked—"A custom of kissing the women under the mistletoe-bush still prevails in many places, and without doubt the surest way to prove prolific." Pliny considered it good for sterility. It was the only thing that could slay the gentle Baldur. In England there are some twenty trees on which the mistletoe may grow.
Certain plants have at different times been objects of special consideration, and worshipped as having divine qualities, or being possessed by a soul. Some were thought to manifest sympathetic feeling with the nation by which they were cherished. The fetish tree of Coomassie fell when Wolseley's ultimatum reached the King of Ashantee. The ruthless cutting of trees was deemed cruel. Even if they had no living spirit of their own, the souls of the dead might be there confined; but perhaps Mr. Gladstone, the tree-feller, is no believer in that spiritual doctrine.
In Germany one may still witness the marrying of trees on Christmas Eve with straw-ropes, that they may yield well. Their forefathers' regard for the World-tree, the ash Yggdrasill, may incline Germans to spare trees, and raise them, as Bismarck loves to do. Women there, and elsewhere, found consolation from moving round a sacred tree on the approach of nature's trial. The oldest altars stood under trees, as by sacred fountains or wells. But some had to be shunned as demoniac trees.
The Irish respected the Cairthaim, quicken-tree, quick-beam, rowan, or mountain ash, which had magical qualities. In the story of the Fairy Palace of the Quicken-tree, we read of Finn the Finian leader being held in that tree by enchantment, as was Merlin by the fairy lady. MacCuill, son of the hazel, one of the last Tuath kings, was so-called because he worshipped the hazel. Fairies danced beneath the hawthorn. Ogham tablets were of yew. Lady Wilde styled the elder a sacred tree; and the blackthorn, to which the Irishman is said to be still devoted, was a sacred tree.
Trees of Knowledge have been recognized east and west. That of India was the Kalpa. The Celtic Tree of Life was not unlike that of Carthage. The Persians, Assyrians, and American Indians had their Trees of Life. One Egyptian holy tree had seven branches on each side. From the Sycamore, the goddess Nou provided the liquor of life; from the Persea, the goddess Hathor gave fruits of immortality. The Date-palm was sacred to Osiris six thousand years ago. The Tree of Life was sometimes depicted on coffins with human arms. The Lotus, essentially phallic, self-produced, was an emblem of self-created deity, being worshipped as such at least 3000 B.C. Homa was the Life-tree of Zoroaster. The bean was thrown on tombs as a sign of immortality. The banyan and the onion denote a new incarnation.
The Indian and Cingalese Bo or Asvattha, Ficus religiosa, sheltered Gautama when he gained what is known as Entire Sanctification, or Perfection. The sacred Peepul is the male fig, the female being Ficus Indica. The fig entwines itself round the palm. The Toolsi, Ocymum Sanctum, and the Amrita are also worshipped in India; so are the Lien-wha, or Nelumbium, in China; the cypress in Mexico, and the aspen in Kirghizland.
Trees and plants were devoted to gods: as the oak, palm, and ash to Jupiter; the rose, myrtle, and poppy to Venus; the pomegranate to Proserpine; the pine-apple to Cybele; the orange to Diana; the white violet to Vesta; the daisy to Alcestis; the wild thyme to the Muses; the laurel to Apollo; the poplar to Hercules; the alder to Pan; the olive to Minerva; the fig and vine to Bacchus; the lotus to Hermes. The leek of Wales, like the shamrock of Ireland, was an object of worship in the East, and was associated with Virgo. The Hortus Kewensis states that it first came to Britain in 1562. The mandrake or Love-apple was also sacred. Brinton gives a list of seven such sacred plants among the Creek Indians. The Vervain, sacred to Druids, was gathered in Egypt at the rise of Sirius the Dogstar.
| 3.235151 |
What is family practice?
A family practitioner is a primary care doctor who provides health services to adults, children and adolescents. Family practitioners have a very broad scope of practice, but are usually the patient’s first consultation before being referred to other specialists, if necessary. The family practitioner performs annual physical examinations; ensures up-to-date immunization status; counsels patients on healthy lifestyles; monitors patients’ blood pressure, cholesterol and glucose levels; and ensures other baseline tests are within normal levels for the patient’s age and gender.

When do I see a family practitioner?
In the United States, about one in four visits to a doctor are to a family practice doctor. Family care physicians care for the poor, indigent and underserved in the community more than any other physician specialty.
When a patient is an infant or child, he or she might see a family practitioner specializing in pediatric care. When the young patient transitions from childhood to adulthood during adolescence, he or she can see a family practice doctor specializing in adolescent medicine or a regular family practitioner, also known as an adult-care physician. Around the ages 18-21, patients typically transition to an adult-care physician who is better suited to their health-care needs.

What should I expect when I visit a family practitioner?
A family practitioner’s scope of practice varies, but these specialists typically provide basic diagnoses and non-surgical treatment of common medical conditions and illnesses.
To arrive at a diagnosis, family doctors will interview and examine the patient. This requires discussing the history of the present illness, including a review of the patient’s body systems, medication history, allergies, family history, surgical history and social history. Then the physician will perform a physical examination and possibly order basic medical tests, such as blood tests, electrocardiograms or X-rays. Ultimately, all this information is combined to arrive at a diagnosis and possible treatment. Tests of a more complex and lengthy nature may be referred to a specialist.
Together with the patient, the family practitioner forms a plan of care that can include additional testing if needed, a referral to see a specialist, medication prescriptions, therapies, changes to diet or lifestyle, additional patient education, or follow-up treatment. Patients also may receive advice or education on improving health behaviors, self-care and treatment, screening tests and immunizations.

What are the most common conditions family practitioners treat?
- Common cold
- Gastrointestinal complaint
- Gynecological complaint
- High blood pressure
- Infectious diseases
- Musculoskeletal complaint
- Psychiatric disease
- Sexually transmitted disease
- Skin complaint
- Urinary tract complaint
| 3.049558 |
tree-equal tree-1 tree-2 &key test test-not => generalized-boolean
Arguments and Values:
test---a designator for a function of two arguments that returns a generalized boolean.
test-not---a designator for a function of two arguments that returns a generalized boolean.
generalized-boolean---a generalized boolean.
tree-equal tests whether two trees are of the same shape and have the same leaves. tree-equal returns true if tree-1 and tree-2 are both atoms and satisfy the test, or if they are both conses and the car of tree-1 is tree-equal to the car of tree-2 and the cdr of tree-1 is tree-equal to the cdr of tree-2. Otherwise, tree-equal returns false.
tree-equal recursively compares conses but not any other objects that have components.
The first argument to the :test or :test-not function is tree-1 or a car or cdr of tree-1; the second argument is tree-2 or a car or cdr of tree-2.
Examples:

 (setq tree1 '(1 (1 2)) tree2 '(1 (1 2))) =>  (1 (1 2))
 (tree-equal tree1 tree2) =>  true
 (eql tree1 tree2) =>  false
 (setq tree1 '('a ('b 'c)) tree2 '('a ('b 'c)))
   =>  ('a ('b 'c)) ==  ((QUOTE A) ((QUOTE B) (QUOTE C)))
 (tree-equal tree1 tree2 :test 'eq) =>  true
Side Effects: None.
Affected By: None.
The consequences are undefined if both tree-1 and tree-2 are circular.
equal, Section 3.6 (Traversal Rules and Side Effects)
The :test-not parameter is deprecated.
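As a rough illustration only, the recursion described above might be sketched as follows. This is a simplified, hypothetical definition, not the conformant one: the name my-tree-equal is invented here, and it ignores test-not, argument checking, and circular structures.

 (defun my-tree-equal (tree-1 tree-2 &key (test #'eql))
   ;; If both arguments are conses, compare their cars and cdrs recursively.
   (if (and (consp tree-1) (consp tree-2))
       (and (my-tree-equal (car tree-1) (car tree-2) :test test)
            (my-tree-equal (cdr tree-1) (cdr tree-2) :test test))
       ;; Otherwise both must be atoms that satisfy the test.
       (and (atom tree-1) (atom tree-2)
            (funcall test tree-1 tree-2))))

With the default eql test this reproduces the behavior shown in the examples, e.g. (my-tree-equal '(1 (1 2)) '(1 (1 2))) returns true while the two lists are not eql.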
| 3.234933 |
Did a Comet Really Chill and Kill Clovis Culture?
A 130-foot-meteor created the mile-wide Meteor Crater in Arizona. The comet proposed to have impacted life in North America was significantly larger, but no crater indicating its collision has been found.
CREDIT: Dan Durda
A comet crashing into the Earth some 13,000 years ago was thought to have spelled doom to a group of early North American people, and possibly the extinction of ice age beasts in the region.
But the space rock was wrongly accused, according to a group of 16 scientists in fields ranging from archaeology to crystallography to physics, who have offered counterevidence to the existence of such a collision.
"Despite more than four years of trying by many qualified researchers, no unambiguous evidence has been found [of such an event]," Mark Boslough, a physicist at Sandia National Laboratories in New Mexico, told LiveScience.
"That lack of evidence is therefore evidence of absence."
Almost 13,000 years ago, a prehistoric Paleo-Indian group known as the Clovis culture suffered its demise at the same time the region underwent significant climate cooling known as the Younger Dryas. Animals such as ground sloths, camels and mammoths were wiped out in North America around the same period.
In 2007, a team of scientists led by Richard Firestone of the Lawrence Berkeley National Laboratory in California suggested these changes were the result of a collision or explosion of an enormous comet or asteroid, pointing to a carbon-rich black layer at a number of sites across North America. The theory has remained controversial, with no sign of a crater that would have resulted from such an impact.
"If a four-kilometer [2.5-mile] comet had broken up over North America only 12.9 thousand years ago, it is certain that it would have left an unambiguous impact crater or craters, as well as unambiguous shocked materials," Boslough said.
Boslough, who has spent decades studying the effects of comet and asteroid collisions, was part of a team that predicted the visibility of plumes from the impact of the 1994 Shoemaker-Levy 9 comet with Jupiter.
"Comet impacts may be low enough in density not to leave craters," Firestone told LiveScience by email.
He also points to independent research by William Napier at the University of Cardiff in the United Kingdom that indicates such explosions could have come from a debris trail created by Comet Encke, which also would not have left a crater.
A large rock plunging into the Earth's atmosphere may detonate in the air without coming into contact with the ground. Such an explosion occurred in Siberia in the early 20th century; the explosive energy of the so-called Tunguska event was more than 1,000 times more powerful than the atomic bomb dropped on Hiroshima.
"No crater was formed at Tunguska, or the recent Russian impact," Firestone said.
But Boslough said this math doesn't add up. The object responsible for the Tunguska event was very small, about 130 to 160 feet (40 to 50 meters) wide, while the recent explosion over Russia was smaller, about 56 feet (17 meters). The proposed North American space rock linked with the Clovis demise is estimated to have been closer to 2.5 miles (4 kilometers) across.
"The physics doesn't support the idea of something that big exploding in the air," he said, noting that the original research team doesn't provide any explanation or models for how such a breakup might occur. [The 10 Greatest Explosions Ever]
If such a large object crashed into the Earth, the resulting crater would be too large to miss, particularly when it was only a few thousand years old, Boslough said. He pointed to Meteor Crater in Arizona, which is three times as old and formed by an object "a million times smaller in terms of explosive energy."
"Meteor Crater is an unambiguous impact crater with unambiguous shocked minerals," Boslough said. If a 2.5-mile comet had broken into pieces, it could have made a million Meteor Craters, he added.
Firestone argued that water or ice could have absorbed the impact, possibly leaving behind no crater.
Boslough disagreed. Even if the comet had plunged into the ice sheet covering much of North America, the crater formed beneath it would still be sizable. "We wouldn't be able to miss that right now — it would be obvious," Boslough said.
The arguments and evidence against the impact were published in the December 2012 American Geophysical Union monograph.
"Extraordinary claims require extraordinary evidence"
Powerful impacts are Boslough's field, but the other 15 scientists working on the paper offered up other sources of counterevidence for the existence of a collision.
"We all independently came to the conclusion that the evidence doesn't support a Younger Dryas impact," Boslough said. [Asteroid Basics: A Space Rock Quiz]
"We all came to this based on our own very narrow piece of the puzzle."
For instance, the initial team studying the event announced the discovery of a carbon-rich black layer, colloquially known as a "black mat," at a number of sites in North America. Containing charcoal, soot and nanodiamonds, such material could be formed by a violent collision.
But this isn't the only possible source.
"The things they call impact markers are not necessarily indicators of high-pressure shocks," Boslough said. "There are other processes that potentially could have formed them."
Speaking of the black mat found in central Mexico, Firestone said, "Boslough is correct that there are other black mats, but these are dated to 12,900 years ago at the time of impact." He points to independent research published this fall that located hundreds to thousands of samples.
However, radiocarbon dating of one of the sites in Gainey, Mich., suggested its samples were contaminated.
Melted rock formations and microscopic diamonds found in a lake in Central Mexico last year were also suggested as evidence for the collision, but Boslough's team disagrees with the age of the sediment layer in the region.
Boslough said the standard for indicating a strong shock occurred is pretty high in the impact community, and the findings by the original team don't meet them. Nor do they offer up any physical models that propose how an impact or airburst would have occurred — and the ones Boslough has run just don't pan out.
"It's really a stretch to claim that there was this large impact event with no crater and no unambiguous shock material, because large impacts are such rare events," Boslough said.
"When somebody is making a claim that something extraordinary happened, something out of the ordinary and with a very low probability, and they have ambiguous evidence, then the default is that it didn't happen," he continued.
"Extraordinary claims require extraordinary evidence."
Firestone stands firm.
"All the evidence has now been confirmed by others," he said.
"Boslough has no data supporting his arguments, and ignores the counter arguments of Bill Napier."
| 3.665339 |
Livescribe for Blind or Visually Impaired Students (02/22/12)
Limited access to graphical materials is a problem for many blind or visually impaired students. With the Livescribe smartpen, creation of these resources is low-cost, portable, and easy to execute. Students and teachers can create tactile diagrams, maps, or study guides in or outside of the classroom.
The smartpen provides raised-line figures with audio information about the diagram elements by tracking the page position and sound recording. This type of technology allows students to have access to all types of materials. By adding Braille characters to their Livescribe notebook page, students can read along with their hands while having a full audio description when needed. The smartpen includes all the hardware and software necessary to allow audio playback and tactile graphics can be created by using a raised-line drawing kit or an APH “tong” tool. Livescribe sound stickers can also add audio to existing materials. For blind or visually impaired students, understanding graphs or visuals can be difficult if not impossible. The Livescribe smartpen aims to help make graphics more accessible for the blind and visually impaired.
Do you know a blind or visually impaired student using the Livescribe smartpen? Share your story with us below!
| 3.055895 |
According to a research team at the City College of New York (CCNY), tobacco dependence has a firmer grip on those who are poor or uneducated, making it harder for them to stay off cigarettes.
Smokers who had participated in a statewide smoking cessation program in Arkansas were tracked following the program by researchers from CCNY's Sophie Davis School of Biomedical Education. The participants came from a range of socioeconomic backgrounds, and while both rich and poor groups were able to successfully quit at roughly the same rate, the poorer an individual was, the harder it was for that person to overcome cravings and ultimately remain tobacco free.
Within three months of completing treatment, the poorest participants were 55 percent more likely than the richest to begin smoking again; at six months, that number jumped to 250 percent. Nationwide, Americans from households making less than $15,000 a year smoke three times more often than those making $50,000 a year or more.
Several hypotheses have been offered to explain the wide disparity. Smoking provides stress relief in times of hardship, and those of lower socioeconomic status are more likely to experience hardship, facing discrimination, job insecurity, and financial troubles. In addition, lower-paying jobs are less likely to offer the protection of local, state, and federal smoke-free laws. This means that even if a person successfully quits, he will find himself surrounded by smokers at work every day.
It is important to include these considerations when developing treatment programs, says the team from CCNY. Treatment focused on the needs of the middle class will necessarily require revision to fit individuals of lower socioeconomic status. Suggestions include custom programs, but the CCNY team says that "booster sessions" after the main sessions are over may improve outcomes. It's hard to predict what stresses might drive someone back to cigarettes several months down the road, so it's important to continue offering resources to those who will need them the most.
Source: City College of New York
| 3.15426 |
Carl Friedrich Gauss (1777-1855) is considered to be the greatest German mathematician of the nineteenth century. His discoveries and writings influenced and left a lasting mark in the areas of number theory, astronomy, geodesy, and physics, particularly the study of electromagnetism.
Gauss was born in Brunswick, Germany, on April 30, 1777, to poor, working-class parents. His father labored as a gardener and brick-layer and was regarded as an upright, honest man. However, he was a harsh parent who discouraged his young son from attending school, with expectations that he would follow one of the family trades. Luckily, Gauss' mother and uncle, Friedrich, recognized Carl's genius early on and knew that he must develop this gifted intelligence with education.
While in arithmetic class, at the age of ten, Gauss exhibited his skills as a math prodigy when the stern schoolmaster gave the following assignment: "Write down all the whole numbers from 1 to 100 and add up their sum." When each student finished, he was to bring his slate forward and place it on the schoolmaster's desk, one on top of the other. The teacher expected the beginner's class to take a good while to finish this exercise. But in a few seconds, to his teacher's surprise, Carl proceeded to the front of the room and placed his slate on the desk. Much later the other students handed in their slates.
At the end of the classtime, the results were examined, with most of them wrong. But when the schoolmaster looked at Carl's slate, he was astounded to see only one number: 5,050. Carl then had to explain to his teacher that he found the result because he could see that 1+100=101, 2+99=101, 3+98=101, and so on, so that he could find 50 pairs of numbers that each add up to 101. Thus, 50 times 101 equals 5,050.
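In modern notation (added here; not part of the original anecdote), the same pairing trick generalizes to any run of consecutive whole numbers:

$$1 + 2 + \cdots + n = \frac{n(n+1)}{2}, \qquad \text{so} \quad 1 + 2 + \cdots + 100 = \frac{100 \cdot 101}{2} = 5050.$$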
At the age of fourteen, Gauss was able to continue his education with the help of Carl Wilhelm Ferdinand, Duke of Brunswick. After meeting Gauss, the Duke was so impressed by the gifted student with the photographic memory that he pledged his financial support to help him continue his studies at Caroline College. At the end of his college years, Gauss made a tremendous discovery that, up to this time, mathematicians had believed was impossible. He found that a regular polygon with 17 sides could be drawn using just a compass and straight edge. Gauss was so happy about and proud of his discovery that he gave up his intention to study languages and turned to mathematics.
Duke Ferdinand continued to financially support his young friend as Gauss pursued his studies at the University of Gottingen. While there he submitted a proof that every algebraic equation has at least one root or solution. This theorem had challenged mathematicians for centuries and is called "the fundamental theorem of algebra".
Gauss' next discovery was in a totally different area of mathematics. In 1801, astronomers had discovered what they thought was a planet, which they named Ceres. They eventually lost sight of Ceres but their observations were communicated to Gauss. He then calculated its exact position, so that it was easily rediscovered. He also worked on a new method for determining the orbits of new asteroids. Eventually these discoveries led to Gauss' appointment as professor of mathematics and director of the observatory at Gottingen, where he remained in his official position until his death on February 23, 1855.
Carl Friedrich Gauss, though he devoted his life to mathematics, kept his ideas, problems, and solutions in private diaries. He refused to publish theories that were not finished and perfect. Still, he is considered, along with Archimedes and Newton, to be one of the three greatest mathematicians who ever lived.
Contributed by Karolee Weller
| 3.889393 |
The lacrimal or tear gland, located at the top outer edge of the eye, produces the watery portion of tears. The nasolacrimal duct system allows tears to drain from each eye into the nose. Disorders of these structures can lead to either eyes that water excessively or dry eyes. They may be congenital (present at birth) or caused by infection, foreign objects in the eye, or trauma.
Disorders of the nasal cavity and tear ducts are not as common in cats as they are in dogs. However, a few disorders occasionally are seen in this species.
Blockage of the Nasal Duct (Epiphora)
Occasionally cats will experience a chronic overflow of tears, called epiphora, caused by an obstruction of the nasolacrimal duct. This is more common in Persian and Himalayan breeds. In most cases, there is no reason for concern when this occurs, as it does not lead to any medical problems. However, if appearance is an issue, the condition can be corrected surgically.
Inflammation of the Tear Sac (Dacryocystitis)
Inflammation of the tear sac is rare in cats. It is usually caused by obstruction of the tear sac and the attached nasolacrimal tear duct by inflammatory debris, foreign objects, or masses pressing on the duct. It results in watering eyes, conjunctivitis that is resistant to treatment, and occasionally a draining opening in the middle of the lower eyelid. If your veterinarian suspects an obstruction of the duct, he or she may attempt to unblock it by flushing it with sterile water or a saline solution. X‑rays of the skull after injection of a dye into the duct may be necessary to determine the site, cause, and outlook of longterm obstructions. The usual therapy consists of keeping the duct unblocked and using eyedrops containing antibiotics. When the tear duct has been irreversibly damaged, surgery may be necessary to create a new drainage pathway to empty tears into the nasal cavity, sinus, or mouth.
Dry Eye (Keratoconjunctivitis Sicca)
The condition known as dry eye results from inadequate tear production. It often causes persistent mucus and pus-filled conjunctivitis and slow-healing sores (ulcers) and scarring on the cornea. Dry eye is not common in cats but has been associated with longterm feline herpesvirus-1 infections. Topical therapy consists of artificial tear solutions, ointments, and, if there are no scars on the cornea, medications that contain steroids. In longterm dry eye resistant to medical therapy, surgery may be required to correct the condition.
Last full review/revision July 2011 by Kirk N. Gelatt, VMD; David G. Baker, DVM, MS, PhD, DACLAM; A. K. Eugster, DVM, PhD
| 3.128145 |
Creatine phosphokinase test
Creatine phosphokinase (CPK) is an enzyme found mainly in the heart, brain, and skeletal muscle. This article discusses the test to measure the amount of CPK in the blood.
CPK test; Creatine kinase; CK test
How the test is performed
A blood sample is needed. This may be taken from a vein. The procedure is called a venipuncture.
This test may be repeated over 2 or 3 days if you are a patient in the hospital.
How to prepare for the test
Usually, no special preparation is necessary.
Tell your doctor about any medications you are taking. Drugs that can increase CPK measurements include amphotericin B, certain anesthetics, statins, fibrates, dexamethasone, alcohol, and cocaine.
How the test will feel
When the needle is inserted to draw blood, you may feel moderate pain, or only a prick or stinging sensation. Afterward, there may be some throbbing.
Why the test is performed
When the total CPK level is very high, it usually means there has been injury or stress to muscle tissue, the heart, or the brain.
Muscle tissue injury is most likely. When a muscle is damaged, CPK leaks into the bloodstream. Determining which specific form of CPK is high helps doctors determine which tissue has been damaged.
This test may be used to:
- Diagnose heart attack
- Evaluate cause of chest pain
- Determine if or how badly a muscle is damaged
- Detect dermatomyositis, polymyositis, and other muscle diseases
- Tell the difference between malignant hyperthermia and postoperative infection
The pattern and timing of a rise or fall in CPK levels can be diagnostically significant, particularly if a heart attack is suspected.
Except in unusual cases, other tests are used to diagnose a heart attack.
Total CPK normal values:
- 10 - 120 micrograms per liter (mcg/L)
Normal value ranges may vary slightly among different laboratories. Some labs use different measurements or test different samples. Talk to your doctor about the meaning of your specific test results.
What abnormal results mean
High CPK levels may be seen in patients who have:
- Brain injury or stroke
- Delirium tremens
- Dermatomyositis or polymyositis
- Electric shock
- Heart attack
- Inflammation of the heart muscle (myocarditis)
- Lung tissue death (pulmonary infarction)
- Muscular dystrophies
Additional conditions may give positive test results:
What the risks are
There is very little risk involved with having your blood taken. Veins and arteries vary in size from one patient to another and from one side of the body to the other. Taking blood from some people may be more difficult than from others.
Other risks associated with having blood drawn are slight but may include:
- Excessive bleeding
- Fainting or feeling light-headed
- Hematoma (blood accumulating under the skin)
- Infection (a slight risk any time the skin is broken)
Other tests should be done to determine the exact location of muscle damage.
Factors that may affect test results include cardiac catheterization, intramuscular injections, trauma to muscles, recent surgery, and heavy exercise.
Anderson JL. ST segment elevation acute myocardial infarction and complications of myocardial infarction. In: Goldman L, Schafer AI, eds. Cecil Medicine. 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 73.
Chinnery PF. Muscle diseases. In: Goldman L, Schafer AI, eds. Cecil Medicine. 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 429.
Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by A.D.A.M. Health Solutions, Ebix, Inc., Editorial Team: David Zieve, MD, MHA, David R. Eltz, Stephanie Slon, and Nissi Wang.
| 3.330867 |
This early masterpiece of the Arts and Crafts Movement exemplifies the collaborative endeavors of William Morris and his circle to improve design standards. Morris believed that a return to the principles of medieval production, with fine artists creating functional objects, could help overcome the evils of industrialization. This cabinet, one of several in which Morris enlisted the participation of Sir Edward Burne-Jones, is an attempt to erase the distinction between the fine and the applied arts. The painting on leather with a punched background is itself a craftsman's medium. Although the cabinet is usually described as in the "medieval style," it is actually a vivid example of the ability of the Morris firm to convert the eclecticism that marked much of the art of the late nineteenth century into an original and modern style. Although Burne-Jones's painted figures are in medieval costume much of the decoration is equally Oriental in inspiration. Philip Webb's straightforward design, however, which boldly displays the casework skeleton on the exterior, anticipated the emphasis on structural elements that would inform the design revolution of the next century.
| 3.034307 |
Benjamin Franklin was born to a poor soap boiler on January 17, 1706 in Boston, Massachusetts. He was one of many children in his family, and he learned to appreciate the rewards of hard work at age twelve as an apprentice at his brother James' printing shop. By the age of seventeen, Franklin's skills as a printer had become so strong that he left Boston, and by twenty-two he had opened his own printing shop in Pennsylvania. This is where he developed his two most famous publications, The Pennsylvania Gazette, the most popular newspaper in the colonies, and the annual Poor Richard's Almanac. Franklin was one of the first printers to use cartoons in his publications in order to accommodate the people who had not learned to read. He believed that his publications were a way to get information to every person, not just those who could read.
In Pennsylvania, Franklin founded organizations, became involved in the community, and assembled a vast collection of inventions that would allow people to live better lives. He started the first circulating library, founded the American Philosophical Society, and established an academy that would soon become the University of Pennsylvania. As a journalist, Franklin was constantly putting on and taking off his reading glasses. He became frustrated with the repetition of this task, so he cut the bottom half of his reading glasses and put them in the frames of another pair of spectacles. He had invented bifocals, which many of us still use today.
His scientific mind was always looking for ways to make life easier. This led him to invent the lightning rod, the Franklin stove, the odometer, and a variety of devices like the wooden "long arm." He also created the first fire company and the first fire insurance company in order to help people live safer lives. In 1748, Franklin retired from printing in order to focus his attention on electricity.
Franklin developed many electrical theories and often found innovative, and sometimes quite dangerous, ways to test them. One of his most well known theories was that lightning is a form of electricity. Discovering evidence to support this notion at first seemed to be a very difficult endeavor to achieve. How was he going to actually access lightning, a phenomenon that had mystified humans for thousands of years? Franklinís letters indicate that he initially planned a scheme that would involve a spire that was being built on a church in Philadelphia. He soon decided, however, that the spire would not be nearly tall enough to provide him adequate access to the clouds where lightning forms during a storm. A kite, he realized, could be flown to great heights, leading to the famous kite and key experiment.
Reportedly in June of 1752, Franklin and his son (who was twenty-one at the time, rather than a young boy as often appears in illustrations of the famous experiment) ventured into an open field as a thunderstorm approached. There they sent up a kite Franklin had constructed from silk. Because he knew that metal was a conductor of electricity, Franklin attached a metal rod to the top of the kite and a large metal key near the bottom of the string by which he held the kite. He thought that if lightning really were electricity, then it would charge the metal rod and would travel down the string to the key, electrifying it as well. The experiment worked exactly as he hoped. After flying the kite in the thunderclouds for some time, he noticed that the key gave off sparks when he touched it, a clear sign that it was electrified.
An educated Franklin brought his intellectual ideas and superior communication skills to the politics of a forming nation. In 1757, Franklin was sent to England as a representative of the colonies in the quarrel between England and the Colonies, and he returned as an advocate of independence. He served as a member of the Continental Congress where he helped Thomas Jefferson draft the Declaration of Independence in 1776. Franklin then became the ambassador to France, where he persuaded the French to aid the Americans in the Revolutionary War. He continued to serve the young nation in assisting the creation of the compromises developed at the Constitutional Convention in Philadelphia.
As a journalist, scientist, inventor, statesmen, philosopher, musician, and economist, Benjamin Franklin can be thought of as a colonial Renaissance man. Through hard work and great ideas, Franklin helped shaped a young nation with the aid of his many hard-earned skills. Benjamin Franklin was a pivotal player in the foundation of the United States of America.
| 3.562048 |
Did San Luis Obispo produce any goods? Were they well-known for producing anything?
Did San Luis Obispo produce any goods? Were they well-known for producing anything? I'm working on a Mission report and I can't find any literature on the subject.
The primary products produced by the missions were cow hides and tallow. The cow hides were also known as Yankee dollars, because they were traded for manufactured goods brought by ships which were mostly from Boston. San Luis Obispo was also famous for being the first mission to make roofing tiles, which the other missions soon copied.
As of 1832, the last year for which records were kept, San Luis Obispo had a herd of 2,500 cattle and 5,432 sheep. San Luis Obispo also had large agricultural fields under cultivation.
| 3.067743 |
Since the 1960s, Washington has clamped a strict trade embargo on Cuba in the expectation that economic distress would oust Castro or at least moderate his behavior. Since 1996 the U.S. embargo has been embodied in law (before that it was an executive order). Here’s what Uncle Sam says the embargo, enacted on February 3, 1962, by President Kennedy, is about:
The fundamental goal of U.S. policy toward Cuba is to promote a peaceful transition to a stable, democratic form of government and respect for human rights. Our policy has two fundamental components: maintaining pressure on the Cuban Government for change through the embargo and the Libertad Act while providing humanitarian assistance to the Cuban people, and working to aid the development of civil society in the country.
Critics call it a violation of international law that injures and threatens the welfare of Cuban people. The United Nations General Assembly routinely votes to condemn it (the 2009 vote was 187-3, with only Palau and Israel—which trades with Cuba—joining the United States in voting against the resolution).
Uncle Sam even fines foreign companies doing business with Cuba. Talk about shooting oneself in the foot! For example, when Hilton and Sheraton hotels worldwide were banned from accepting Cuban trade delegations, European unions and parliamentarians initiated a boycott of the hotel chains, while the Mexican government even fined Sheraton US$100,000 for expelling Cuban guests in violation of international law. Meanwhile, the ultimate irony and hypocrisy is that although no Cuban goods can be sold to the United States, the U.S. does permit sales of “agricultural” and certain other goods to Cuba under a waiver that runs from daiquiri mix to rolls of newsprint (even the Communist rag, Granma, is printed on paper from Alabama), to the tune of US$718 million in 2008!
The effects of the embargo (which Castro calls el bloqueo, or blockade) are much debated. In 1999, Cuba filed a claim for US$181 billion in reparations. However, in 2000, the International Trade Commission (ITC) determined that the embargo has had a minimal impact on the Cuban economy, citing domestic policies as the main cause of Cuba’s economic woes (for three decades, the effects of the embargo were almost entirely offset by massive subsidies from the Soviet Union). As President Carter noted during his visit to Havana in May 2002: “These restraints are not the source of Cuba’s economic problems. Cuba can trade with more than 100 countries, and buy medicines, for example, more cheaply in Mexico than in the United States.”
The paradox is that the policy achieves the opposite effect to its stated goals: It provides a wonderful excuse for the Communist system’s economic failings, and a rationale to suppress dissidents and civil liberties under the aegis of national security for an island under siege. It also permits Fidel Castro to perform the role of Cuba’s anti-imperialistic savior that he has cast for himself.
Although State Department officials privately admit that the embargo is the fundamental source of Fidel’s hold on power, U.S. presidents are wed to what Ann Louise Bardach calls a “transparently disastrous policy, trading off sensible and enduring solutions for short-term electoral gains [in Florida]” in response to fanatically anti-Castroite Cuban-American interests.
U.S. citizens who oppose the embargo and restrictions on U.S. citizens’ constitutional right to travel can make their views known to representatives in Washington.
Contact Your Senator or Representative in Congress (U.S. Congress, Washington, DC 20510, tel. 202/224-3121 or 800/839-5276, www.house.gov and www.senate.gov ). Write a simple, moderate, straightforward letter to your representative that makes the argument for ending the travel ban and embargo and requests he/she cosponsor a bill to that effect.
Write or Call the President (The President, The White House, Washington, DC 20500, tel. 202/456-1414, president [at] whitehouse [dot] gov). Also call or fax the White House Comment Line (202/456-1111, fax 202/456-2461, www.whitehouse.gov ) and the Secretary of State (202/647-4000, www.state.gov/secretary ).
Publicize Your Concern. Write a simple, moderate, straightforward letter to the editor of your local newspaper as well as any national newspapers or magazines and make the argument for ending the embargo.
Support the Freedom to Travel Campaign. Contact the Latin America Working Group (424 C St. NE, Washington, DC 20002, tel. 202/546-7010, www.lawg.org ), which campaigns to lift the travel restrictions and U.S. embargo, monitors legislators, and can advise on how representatives have voted on Cuba-related issues; and sign the Orbitz Open Cuba (www.opencuba.org ) petition.
| 3.024188 |
Hidden for 10,000 years, Inner Space Cavern is one of the best preserved caves in Texas and one of the few places where remains of prehistoric animals were unearthed.
Inner Space Cavern was discovered by a Texas Highway Department core drilling team in the Spring of 1963. While drilling through 40 feet of solid limestone, the bit broke into what is now known as Inner Space Cavern. An adventurous employee of the highway department was lowered into the hole while standing on the drill bit and holding tightly to the stem. He was the first human being to enter INNER SPACE. What an exciting voyage this must have been!
So come explore with us and let us show you our side of Texas!
| 3.209125 |
Balancing foundational information with a real world approach to inclusion, Inclusion: Effective Practices for All Students, 2e equips teachers to create effective inclusive classrooms.
The most applied text in the market, this second edition sharpens its focus and its organization to more clearly outline best practices for inclusive classrooms. The book’s three part structure opens with the foundational materials you’ll need to truly understand inclusive classrooms, followed by brief categorical chapters to give you the information you need to meet the needs of all students. Finally, field tested and research based classroom strategies are laid out on perforated pages to make the transition from theory to practice seamless.
Table of Contents
Inclusion: Effective Practices for All Students, 2e
Part 1: Foundations of Successful Inclusion
1. What Is Inclusion and Why Is It Important?
2. Inclusion: Historical Trends, Current Practices, and Tomorrow’s Challenges
Part 2: Who Are the Students in an Inclusive Classroom?
3. Students with Learning Disabilities
4. Students with Attention-Deficit/Hyperactivity Disorder
5. Students with Intellectual Disabilities
6. Students with Emotional and Behavioral Disabilities
7. Students with Autism Spectrum Disorders
8. Students with Communication Disorders and Students with Sensory Impairments
9. Students with Physical Disabilities, Health Impairments, and Multiple Disabilities
Part 3: Effective Practices to Meet All Students’ Needs
10. Collaboration and Teaming to Support Inclusion
11. Formal Plans and Planning for Differentiated Instruction
12. Teaching Students from Diverse Backgrounds
13. Effective Instruction in Core Academic Areas: Teaching Reading, Writing, and Mathematics
14. Effective Instruction in Secondary Content Areas
15. Effective Classroom Management for All Students
16. Using Technology to Support Inclusion
| 3.300438 |
Texas Dust Storms
The same weather system that brought snow and ice to the American Midwest just after Thanksgiving 2005 also kicked up significant dust in western Texas and eastern Mexico. The winds associated with this cold front also fanned the flames of grass fires in the region, adding smoke to the mixture of aerosols. The most obvious dust cloud is a pale beige dust plume swirling through Texas and Mexico. However, a second, more orange-colored cloud of dust blows across northern Texas. Parts of northern Texas saw wind speeds around 60 miles per hour. Resulting dust storms reduced visibility to just 2.5 miles in some areas, and swamped local fire departments with calls regarding both fires and downed power lines.
Image Credit: NASA/GSFC/MODIS Land Rapid Response Team/Jeff Schmaltz
| 3.409824 |
The military ban on women in combat is coming to an end. Defense Secretary Leon Panetta announced the overturning of a 1994 Pentagon rule that restricts women from artillery, armor, and infantry jobs.
This country has had a storied history of women fighting for the country, despite various forces dissuading them from doing so. As of last year, for example, more than 800 women have been wounded in Iraq and Afghanistan (where about 20,000 women have served) despite the combat ban. But the story is older than that, as old as the country even. A woman by the name of Margaret Cochran fought in the Revolutionary War, and hundreds of women disguised themselves as men just to take up arms in the Civil War.
According to the National Archives, as many as 400 women fought during the Civil War while concealing their gender. Mary Livermore of the U.S. Sanitary Commission wrote in 1888:
Some one has stated the number of women soldiers known to the service as little less than four hundred. I cannot vouch for the correctness of this estimate, but I am convinced that a larger number of women disguised themselves and enlisted in the service, for one cause or other, than was dreamed of. Entrenched in secrecy, and regarded as men, they were sometimes revealed as women, by accident or casualty. Some startling histories of these military women were current in the gossip of army life.
Let’s meet some of these women who, in a sense, paved the way for today's ban reversal. Here are four women who fought, compiled from Larry G. Eggleston's Women in the Civil War.
- Loretta Janeta Velazquez was a total badass. Born to a rich Cuban aristocrat, Velazquez’s wealth played a key role in her fighting for the Confederate army. When her husband, William, went off to war in 1861, Velazquez wanted so badly to be with him that she offered to fight beside him incognito. William wouldn't hear it, and went off to war without her. Not content with life alone, Velazquez decided to use her wealth to finance and equip an infantry battalion, which she would bring to her husband to command. She cut her hair, tanned her skin, and went by the name Lt. Harry T. Buford. She went on to fight in various battles, including Bull Run and Shiloh, but her gender was twice discovered and she was discharged. So, naturally, she became a spy, with disguises in both the male and female variety.
- It must have been hard to hide your gender while serving in the war. Take it from Lizzie Compton, who enlisted at the age of 14. Her gender was discovered seven different times. But each time, she packed up her things and moved on to another regiment. Compton was wounded twice during her service, the first time by a piece of shrapnel as she charged up a hill at Antietam.
- Louisa Hoffman has the distinction of serving for both the Union and Confederate armies. When the war first started, she left her home in New York to enlist (as a man, of course) in the 1st Virginia Confederate Cavalry. But, after fighting at both battles of Bull Run, she had a change of heart, and headed up north to Ohio.
- Mary Seaberry was said to wear a disguise and have a manner that “never gave anyone in her regiment even the slightest hint that she was not a man.” Unfortunately for her, after being admitted into a hospital with a fever, there was no way she could hide her true identity. She was discharged “on the basis of sexual incompatibility.”
| 3.201144 |
Spider silk can be scary enough to insects to act as a pest repellant, researchers say.
These findings could lead to a new way to naturally help protect crops, scientists added.
Spiders are among the most common predators on land. Although not all spiders weave webs, they all spin silk that may serve other purposes. For instance, many tiny spiders use silk balloons to travel by air.
Researchers suspected that insects and other regular prey of spiders might associate silk with the risk of getting eaten. As such, they reasoned silk might scare insects off.
The scientists experimented with Japanese beetles (Popillia japonica) and Mexican bean beetles (Epilachna varivestis). These plant-munching pests have spread across eastern North America within the past half-century.
The beetles were analyzed near green bean plants (Phaseolus vulgaris) in both the lab and a tilled field outdoors. The investigators applied two kinds of silk on the plants — one from silkworms (Bombyx mori) and another from a long-jawed spider (Tetragnatha elongata), a species common in riverbank forests but not in the region the researchers studied.
Both spider and silkworm silk reduced insect plant-chewing significantly. In the lab, both eliminated insect damage entirely. In the field, spider silk had the greater effect: plants enclosed with beetles and spider silk suffered about 50 percent less damage than plants without spider silk, while silkworm silk produced only a 10 to 20 percent reduction. Experiments with other fibers revealed that only silk had this protective effect.
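To make the comparison concrete, the percent reductions reported above are simply relative differences in measured leaf damage between treated and untreated plants. Here is a minimal sketch of that calculation; the damage values are purely hypothetical placeholders, not figures from the study.

```python
# Hypothetical leaf-damage values (e.g., cm^2 of leaf area eaten per plant);
# these numbers are illustrative only and are not from the study.
damage = {
    "no silk": 20.0,
    "spider silk": 10.0,    # roughly 50% less damage than the control
    "silkworm silk": 17.0,  # roughly 15% less damage than the control
}

def percent_reduction(control: float, treatment: float) -> float:
    """Percent reduction in damage relative to the untreated control."""
    return 100.0 * (control - treatment) / control

control = damage["no silk"]
for treatment in ("spider silk", "silkworm silk"):
    print(f"{treatment}: {percent_reduction(control, damage[treatment]):.0f}% less damage")
```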
"This work suggests that silk alone is a signal to potential prey that danger is near," researcher Ann Rypstra, an evolutionary ecologist at Miami University in Ohio, told LiveScience.
Rypstra was most surprised that the effect occurred even though the species involved do not share any evolutionary history together as predator and prey. This suggests "herbivores are using the silk as some sort of general signal that a spider — any ol' spider — is around and responding by reducing their activity or leaving the area," she said.
While more work will need to be done before this research might find applied use, the fact that the presence of silk alone reduced damage caused by two economically important pest insects "suggests that there could be applications in agricultural pest management and biological control," Rypstra said.
Rypstra is also interested in the chain reaction of events that silk might trigger in an ecosystem.
"For example, if an herbivore encounters a strand of silk and alters its behavior in a particular manner, does that make it more susceptible to predation by a non-spider?" Rypstra asked. "Do spiders that leave lots of silk behind have a larger impact in the food web, and how does it vary from habitat to habitat? These are just a couple of questions that we might be exploring in the near future."
Rypstra and her colleagues detailed their findings online Wednesday in the journal Biology Letters.
| 3.951716 |
Science Friday is a National Public Radio program for kids interested in science. The program menu allows one to listen to the weekly program, read questions related to the program and get suggestions for further study. Another good option in the program menu is the Sounds Like Science Mentor page. A topic of current interest is chosen each week and material concerning this topic is available. A scientist with expertise in this area is available to answer questions about the current topic. Other program menu topics have general information about NPR.
Newton's APPLE is a long-running PBS science show. The site is not very fancy, but surfing through the episodes will yield some excellent home science activities.
Bill Nye the Science Guy
Bill Nye the Science Guy is a Disney science TV show. One can read today's episode and research previous episodes. A current news portion includes science news and science of sports sections. A daily demo safe for kids to do is related to the news or the current episode. Film clips and sounds of science involve downloading files. Many museum sites are included in Top Ten Links.
Scientific American Frontiers
This site offers a forum where you can post your opinion about one of the shows. Teachers can get teacher guides to each show and also access an online activity based on each show. A "related links" section offers hypertext links for each show of the season. Students can post questions to the scientists involved in each show.
This site includes lesson plans and activities for incorporating PBS programming into classroom curriculum.
| 3.130292 |
Environmental History of Georgia: Overview
Native American Influences
Humans arrived in the Southeast roughly 12,000 years ago. Over the millennia, Native Americans interacted with the landscape in numerous ways. Fire setting, first used to promote hunting, had perhaps the most significant impact.
Fire setting was probably common in the Piedmont. Varying fire-setting regimes would have created a patchy landscape, in which frequently burned, open oak forests adjoined less recently burned areas crowded with saplings, briars, and shrubs or unburned areas with more thin-barked trees. In the Coastal Plain, lightning perpetuated longleaf pine stands carpeted with diverse herbs or wiregrass and bounded by firebreaks like cypress swamps; Indian-set fires may have supplemented the natural regime. Scholarly opinion differs as to the fire regime in the mountains. Fires from lightning strikes are rare and low in intensity in most montane sites. Native Americans certainly supplemented the natural fire regime by burning in the river valleys to promote agriculture and probably burned many dry mountain ridges. Moist coves would have burned less often.
The settlement of
Thus, the landscape prior to European arrival was not entirely pristine. Rather, in a complex mosaic, human riverine settlements punctuated landscapes of large, fire-fostered pines and oaks abutting unburned areas such as cypress swamps in the Coastal Plain and huge cove hardwoods in the mountains.
European Settlement and Landscape Transformation
Europeans transformed the forested mosaic promoted by the Native Americans into a land forged for the production of goods for foreign markets. Huge harvests of forest resources in early colonial days were the first to demonstrate the size of the export market. Ginseng was collected to the point of scarcity, and animal pelts were taken from southeastern forests in huge quantities. From 1700 to 1715 more than 1 million fox, elk, otter, and other skins were shipped from Charleston, South Carolina; from 1739 to 1761 more than 5 million pounds of deerskins alone were exported from the Southeast. Beavers were nearly extirpated from the state, and buffalo and elk were disappearing by 1760. Of the commodities that followed, rice, cotton, and timber spurred perhaps the greatest changes in Georgia's landscape.
Rice and Cotton
In 1750 the prohibition on owning slaves was lifted in Georgia, catalyzing large-scale commercial agriculture. Rice was the first commodity to transform a portion of Georgia's landscape. Landowners cleared large swaths of forest to construct plantations and rice fields. However, tidal rivers inundated and drained the impounded rice, limiting the most intense human agricultural impact to the tidal deltas.
Upon Eli Whitney's invention of the cotton gin in 1793, cotton production spurred a landscape transformation across the Piedmont and the western and Upper Coastal Plain. Following a cascade of land cessions from Native Americans, farmers rolled across the land, growing cotton for profit and corn for livestock and human consumption. They cleared and farmed a plot of land for a few years, then abandoned the depleted soil and moved on. Staggering amounts of soil were eroded off the barren fields; on average, 7.5 inches of topsoil were lost. The eroded sediments washed into streams, burying milldams and depositing as much as eight to fifteen feet of silt on some streambeds. The higher streambeds and absence of plants to impede winter storm runoff caused widespread flooding. Intense cotton farming and its deleterious effects continued until the 1920s, when the boll weevil, plummeting prices, and depleted soils contributed to a precipitous decline in production.
Hence, by the 1930s commercial agriculture, industrial logging, and nascent urbanization had created a landscape far different from that first encountered by Europeans. The stumps and scraggly trees of cut-over land, silted streams and denuded hillsides of abandoned farms, and growing networks of towns and transportation corridors had replaced much of the forest mosaic.
Conservation and Reforestation
In the mid-1900s two trends changed the tenor of human impact on the Georgia landscape. First, soil and timber depletion spurred conservation efforts. Second, much of the barren land reverted to trees.
An impetus to preserve wild landscapes had gained national momentum in the early twentieth century. Following that course, many state and national wildlife refuges, parks, and forests were established in Georgia in the 1920s and 1930s, in a movement that continued throughout the century.
While many farms adopted better farming practices, others were abandoned. In a trend that peaked in the 1960s, abandoned fields underwent secondary succession, a process in which grasses, shrubs, pines, and hardwoods succeed one another over time.
In the 1930s Charles Herty, a chemist at the University of Georgia, revolutionized land use in Georgia when he stimulated the pulpwood industry by using young pines to make white paper. Many old fields, particularly in the Coastal Plain, were converted to pine plantations.
Urbanization and Regulation
In the second half of the twentieth century, human-environment interactions became increasingly complex. By the 1960s a dearth of regulation had permitted severe pollution problems. Urbanization accelerated. The population of Georgia doubled from roughly 4 million people in 1960 to more than 8 million in 2000, exacerbating pollution problems, straining water resources, and fragmenting the landscape through urban sprawl. At the same time, the science of ecology came of age and fostered an appreciation of biodiversity and an understanding of the links among species extinction, urban sprawl, and invasive exotic species.
Pollution and Water Supply Regulations
In 1960 very few air or water pollution controls existed. Industries released more than 400 toxic chemicals into the air, drinking-water quality was not regulated, and 70 percent of the municipal sewage in Georgia entered rivers untreated. Fossil fuel use spawned high levels of ground-level ozone in large urban areas. Construction sediment coated stream bottoms, and water supplies became problematic as demand from the growing population, industry, and agriculture increased. While ozone, water supply, and sediment proved more intractable, state and national regulations successfully addressed many egregious problems, markedly improving the environment.
Urban Sprawl and Species Conservation
Several forces countered these trends. In 1973 the federal Endangered Species Act was passed, spurring the protection of species in Georgia. Land conservation by many groups accelerated, and the Georgia Community Greenspace Program was created in 2000 through an act of the Georgia legislature. The voluntary program provides funds to encourage counties to preserve 20 percent of their land and water permanently as greenspace for the protection of natural resources and informal recreation.
Leslie Edwards, Atlanta
| 3.685393 |
DID shrinking guts and high-energy food help us evolve enormous, powerful brains? The latest round in the row over what's known as the "expensive tissue hypothesis" says no. But don't expect that to settle the debate.
The hypothesis has it that in order to grow large brains relative to body size, our ancestors had to free up energy from elsewhere - perhaps by switching to rich foods like nuts and meat, which provide more calories and require less energy to break down, or possibly by learning to cook: cooked food also requires less energy to digest.
Kari Allen and Richard Kay of Duke University in Durham, North Carolina, turned to New World monkeys to explore the hypothesis. Previous studies offer a wealth of data on the monkeys' diets and show that their brain size varies greatly from species to species. But when the pair controlled for similarities between related species, they found no correlation between large brains and small guts (Proceedings of the Royal Society B, DOI: 10.1098/rspb.2011.1311).
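The crucial methodological point is that closely related species tend to resemble one another, so raw species values cannot be treated as independent data points. The sketch below illustrates the general idea with a much-simplified sister-pair contrast approach and purely hypothetical trait values; the study itself used formal phylogenetic comparative methods, which this does not reproduce.

```python
# Simplified illustration of controlling for relatedness before testing for a
# brain-gut trade-off. Trait values and species pairings are hypothetical.
import numpy as np
from scipy.stats import pearsonr

# (relative brain size, relative gut size) for hypothetical sister-species pairs
pairs = [
    ((1.20, 0.80), (1.05, 0.95)),
    ((0.90, 1.10), (0.85, 1.20)),
    ((1.40, 0.70), (1.30, 0.75)),
    ((1.00, 1.00), (0.95, 1.05)),
    ((1.10, 0.90), (1.15, 0.85)),
]

# Naive approach: correlate raw species values, ignoring shared ancestry.
species = np.array([trait for pair in pairs for trait in pair])
r_raw, p_raw = pearsonr(species[:, 0], species[:, 1])

# Contrast approach: correlate within-pair differences, so each pair of
# close relatives contributes one statistically independent comparison.
diffs = np.array([np.subtract(a, b) for a, b in pairs])
r_contrast, p_contrast = pearsonr(diffs[:, 0], diffs[:, 1])

print(f"raw species correlation:     r = {r_raw:+.2f} (p = {p_raw:.2f})")
print(f"within-pair contrast result: r = {r_contrast:+.2f} (p = {p_contrast:.2f})")
```

Under the expensive tissue hypothesis, the relatedness-corrected correlation should come out negative; Allen and Kay report that, once similarities between related species were accounted for, no such relationship appeared.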
As Robin Dunbar at the University of Oxford points out: "It is one thing to say that the hypothesis doesn't apply to New World monkeys, and another to extrapolate that to humans."
| 3.05564 |