content
string
pred_label
string
pred_score
float64
Pallister-Hall syndrome Pallister-Hall syndrome is a disorder that affects the development of many parts of the body. Most people with this condition have extra fingers and/or toes (polydactyly), and the skin between some fingers or toes may be fused (cutaneous syndactyly). An abnormal growth in the brain called a hypothalamic hamartoma is characteristic of this disorder. In many cases, these growths do not cause any medical problems; however, some hypothalamic hamartomas lead to seizures or hormone abnormalities that can be life-threatening in infancy. Other features of Pallister-Hall syndrome include a malformation of the airway called a bifid epiglottis, an obstruction of the anal opening (imperforate anus), and kidney abnormalities. Although the signs and symptoms of this disorder vary from mild to severe, only a small percentage of affected people have serious complications. This condition is very rare; its prevalence is unknown. [back to the top of the page...] Genetic profile Mutations in the GLI3 gene cause Pallister-Hall syndrome. Mutations that cause Pallister-Hall syndrome typically lead to the production of an abnormally short version of the GLI3 protein. Unlike the normal GLI3 protein, which can turn target genes on or off, the short protein can only turn off (repress) target genes. Researchers are working to determine how this change in the protein's function affects early development. It remains uncertain how GLI3 mutations can cause polydactyly, hypothalamic hamartoma, and the other features of Pallister-Hall syndrome. This condition is inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to cause the disorder. In some cases, an affected person inherits a mutation in the GLI3 gene from one affected parent. Other cases result from new mutations in the gene and occur in people with no history of the disorder in their family. [back to the top of the page...]
__label__pos
0.82774
Restrictive Covenant Employment Agreement The employment obligation is therefore considered appropriate, but the restrictions imposed on the worker in the contract must be “proportionate” and “necessary” to protect the interests of the employer, or the validity of the obligations is examined. A common question that arises in the context of employment is whether a company can prevent outgoing employees from competing with them, from asking their customers or from using the company`s information for their own purposes. The provisions of the treaty prohibiting former employees from carrying out such activities are commonly referred to as “restrictive agreements.” This exercise point summarizes the most important points that any practitioner should know about restrictive alliances. If you want to know more, download this detailed overview of restrictive alliances. The current trend is the inclusion of certain agreements to discourage employees from regrouping with competitors after completing their work, and these agreements may result in litigation in the period following the end of the activity. Given that the legal framework for the management of such conflicts is still being developed in India, common law precedents and legal doctrines have played an important role in the development of jurisprudence that balances the conflicting issues and rights inherent in restrictive alliances and the scheme of Section 27 ica. The application of restrictive alliances requires restrictive considerations. In general, public order puts an end to the right of individuals to freely exercise their chosen profession. Contractual freedom is considered a fundamental right. On the other hand, it is recognized that employers have legitimate interests that deserve protection, such as customer relations, goodwill, investments in staff and proprietary and confidential information. In some areas, the public has an interest that can protect the courts. The health field is an example; Some states consider the doctor-patient relationship to be particularly worthy of protection, which would allow for a typical business relationship. Another factor is the development of trade. In the global internet market, which depends on the industry concerned, a vast geographical area (including the national scope) may be entirely appropriate. In this case, a broker was given six months` notice and a limitation of a competitor`s employment for six months after the end of his employment. When he tendered his resignation with immediate effect and had a competitor work, his former employer relied on the employment contract to prevent it. The broker then claimed that he had been “constructively dismissed” (because his employer did something that led him to believe that they had fired him) and argued that this had freed him from his redundancy obligation and the non-compete clause. It is not uncommon for employers to enter into restrictive agreements as a result of a transaction agreement. This may be because they see you as a threat to their business because existing agreements are not effective after termination, or because they have violated your employment contract and as such, alliances may become unenforceable. In the human resources department, a restrictive pact is a clause that prevents a worker from seeking an investment within a specified period after the termination of the company or organization. Some notable restrictive agreements are those that deal with issues of confidentiality, confidentiality and non-formal notice. This advice, received by Dodd, described the covenants as more likely than not unenforceable, because: (a) no consideration had been given and (b) the time for which they claimed to prevent Mr. Pollock from working was exaggerated. It was therefore indicated that restrictive alliances were “probably” unenforceable. “…… a negative contract, intended to result in a trade restriction during the worker`s employment, is applicable and such restrictions, which fall within the duration of the contract, could be applicable, unless such clauses are “unacceptable or excessively severe, inappropriate or unilateral”. In order for a restrictive confederation to be able to
__label__pos
0.750442
Shape Fort Lauderdale’s Future! #BeCountedFTL#FortLauderdaleCountsWhat is the Census?Every 10 years, the U.S. Census Bureau conducts a population count, or census, of every person living in the United States and in the U.S. territories, as required by the U.S. Constitution. Not only does the census count people, it also collects information on where they live to provide accurate population numbers for […]
__label__pos
0.997199
User eXperience User experience (UX) involves a person's behaviors, attitudes, and emotions about using a particular product, system or service Gangnam Style trends in User Experience Through popularity and imitation Gangnam Style videos, created by PSY the South Korean Internet sensation, became a viral phenomena. In much the same way, many current UX patterns have gained popularity through sharing and imitation. Do you understand what are the psychological foundations of social sharing and how do things “go viral”? Mass copying and imitation of some patterns can lead to the belief in a “Best Practice” even when the pattern is proved faulty. What are the trends that we have had to fight against, and what trends do we want take off like a cheesy horse dance. Bad Defaults Do you remember the Dan Akroyd series of sketches on the original Saturday Night Live (SNL) where he played a character named "Leonard Pinth Garnell" and introduced us to various forms of BAD performances? Leonard Pinth-Garnell We came across a default action on a travel website that made us think about "Bad Defaults" and why user interfaces need to have "Smart" defaults. 7 +/- 2 Things the Software Industry should Know About Cognitive Psychology The capacity of the human brain to process information has remained the same—even as the number of types of users for software-based Internet connected devices has increased at an exponential rate. The field of psychology, especially cognitive psychology has, among other things, focused on understanding the processes by which we store information, make decisions, and communicate with others. Understanding the research and the theories of cognitive psychology can help information architects to create better user experiences. The end user doesn't read the source code! For way too many years we have used that phrase to explain to developers that it is important for them to try to understand their users. Sometimes it worked, sometimes it didn’t. When it didn’t work, we tried to explain about users using most of the standard, and many of the non-standard techniques:
__label__pos
0.975254
Category Theory In my previous blog post, Programming with Universal Constructions, I mentioned in passing that one-to-one mappings between sets of morphisms are often a manifestation of adjunctions between functors. In fact, an adjunction just extends a universal construction over the whole category (or two categories, in general). It combines the mapping-in with the mapping-out conditions. One functor (traditionally called the left adjoint) prepares the input for mapping out, and the other (the right adjoint) prepares the output for mapping in. The trick is to find a pair of functors that complement each other: there are as many mapping-outs from one functor as there are mapping-ins to the other functor. To gain some insight, let’s dig deeper into the examples from my previous post. The defining property of a product was the universal mapping-in condition. For every object c equipped with a pair of morphisms going to, respectively, a and b, there was a unique morphism h mapping c to the product a \times b. The commuting condition ensured that the correspondence went both ways, that is, given an h, the two morphisms were uniquely determined. A pair of morphisms can be seen as a single morphism in a product category C\times C. So, really, we have an isomorphism between hom-sets in two categories, one in C\times C and the other in C. We can also define two functors going between these categories. An arbitrary object c in C is mapped by the diagonal functor \Delta to a pair \langle c, c\rangle in C\times C. That’s our left functor. It prepares the source for mapping out. The right functor maps an arbitrary pair \langle a, b\rangle to the product a \times b in C. That’s our target for mapping in. The adjunction is the (natural) isomorphism of the two hom-sets: (C\times C)(\Delta c, \langle a, b\rangle) \cong C(c, a \times b) Let’s develop this intuition. As usual in category theory, an object is defined by its morphisms. A product is defined by the mapping-in property, the totality of morphisms incoming from all other objects. Hence we are interested in the hom-set between an arbitrary object c and our product a \times b. This is the right hand side of the picture. On the left, we are considering the mapping-out morphism from the object \langle c, c \rangle, which is the result of applying the functor \Delta to c. Thus an adjunction relates objects that are defined by their mapping-in property and objects defined by their mapping-out property. Another way of looking at the pair of adjoint functors is to see them as being approximately the inverse of each other. Of course, they can’t, because the two categories in question are not isomorphic. Intuitively, C\times C is “much bigger” than C. The functor that assigns the product a \times b to every pair \langle a, b \rangle  cannot be injective. It must map many different pairs to the same (up to isomorphism) product. In the process, it “forgets” some of the information, just like the number 12 forgets whether it was obtained by multiplying 2 and 6 or 3 and 4. Common examples of this forgetfulness are isomorphisms such as a \times b \cong b \times a (a \times b) \times c \cong a \times (b \times c) Since the product functor loses some information, its left adjoint must somehow compensate for it, essentially by making stuff up. Because the adjunction is a natural transformation, it must do it uniformly across the whole category. Given a generic object c, the only way it can produce a pair of objects is to duplicate c. Hence the diagonal functor \Delta. You might say that \Delta “freely” generates a pair. In almost every adjunction you can observe this interplay of “freeness” and “forgetfulness.” I’m using these therm loosely, but I can be excused, because there is no formal definition of forgetful (and therefore free or cofree) functors. Left adjoints often create free stuff. The mnemonic is that “the left” is “liberal.” Right adjoints, on the other hand, are “conservative.” They only use as much data as is strictly necessary and not an iota more (they also preserve limits, which the left adjoints are free to ignore). This is all relative and, as we’ll see later, the same functor may be the left adjoint to one functor and the right adjoint to another. Because of this lossiness, a round trip using both functors doesn’t produce an identity. It is however “related” to the identity functor. The combination left-after-right produces an object that can be mapped back to the original object. Conversely, right-after-left has a mapping from the identity functor. These two give rise to natural transformations that are called, respectively, the counit \varepsilon and the unit \eta. Here, the combination diagonal functor after the product functor takes a pair \langle a, b\rangle to the pair \langle a \times b, a \times b\rangle. The counit \varepsilon then maps it back to \langle a, b\rangle using a pair of projections \langle \pi_1, \pi_2\rangle (which is a single morphism in C \times C). It’s easy to see that the family of such morphisms defines a natural transformation. If we think for a moment in terms of set elements, then for every element of the target object, the counit extracts a pair of elements of the source object (the objects here are pairs of sets). Note that this mapping is not injective and, therefore, not invertible. The other composition–the product functor after the diagonal functor–maps an object c to c \times c. The component of the unit natural transformation, \eta_c \colon c \to c \times c, is implemented using the universal property of the product. Indeed, such a morphism is uniquely determined by a pair of identity morphsims \langle id_c, id_c \rangle. Again, when we vary c, these morphisms combine to form a natural transformation. Thinking in terms of set elements, the unit inserts an element of the set c in the target set. And again, this is not an injective map, so it cannot be inverted. Although in an arbitrary category we cannot talk about elements, a lot of intuitions from Set carry over to a more general setting. In a category with a terminal object, for instance, we can talk about global elements as mappings from the terminal object. In the absence of the terminal object, we may use other objects to define generalized elements. This is all in the true spirit of category theory, which defines all properties of objects in terms of morphisms. Every construction in category theory has its dual, and the product is no exception. A coproduct is defined by a mapping out property. For every pair of morphisms from, respectively, a and b to the common target c there is a unique mapping out from the coproduct a + b to c. In programming, this is called case analysis: a function from a sum type is implemented using two functions corresponding to two cases. Conversely, given a mapping out of a coproduct, the two functions are uniquely determined due to the commuting conditions (this was all discussed in the previous post). As before, this one-to-one correspondence can be neatly encapsulated as an adjunction. This time, however, the coproduct functor is the left adjoint of the diagonal functor. The coproduct is still the “forgetful” part of the duo, but now the diagonal functor plays the role of the cofree funtor, relative to the coproduct. Again, I’m using these terms loosely. The counit now works in the category C and  it “extracts a value” from the symmetric coproduct of  c with c. It does it by “pattern matching” and applying the identity morphism. The unit is more interesting. It’s built from two injections, or two constructors, as we call them in programming. I find it fascinating that the simple diagonal functor can be used to define both products and coproducts. Moreover, using terse categorical notation, this whole blog post up to this point can be summarized by a single formula. That’s the power of adjunctions. There is one more very important adjunction that every programmer should know: the exponential, or the currying adjunction. The exponential, a.k.a. the function type, is the right adjoint to the product functor. What’s the product functor? Product is a bifunctor, or a functor from C \times C to C. But if you fix one of the arguments, it just becomes a regular functor. We’re interested in the functor (-) \times b or, more explicitly: (-) \times b : a \to a \times b It’s a functor that multiplies its argument by some fixed object b. We are using this functor to define the exponential. The exponential a^b is defined by the mapping-in property.  The mappings out of the product c \times b to a are in one to one correspondence with morphisms from an arbitrary object c to the exponential a^b . C(c \times b, a) \cong C(c, a^b) The exponential a^b is an object representing the set of morphisms from b to a, and the two directions of the isomorphism above are called curry and uncurry. This is exactly the meaning of the universal property of the exponential I discussed in my previous post. The counit for this adjunction extracts a value from the product of the function type (the exponential) and its argument. It’s just the evaluation morphism: it applies a function to its argument. The unit injects a value of type c into a function type b \to c \times b. The unit is just the curried version of the product constructor. I want you to look closely at this formula through your programming glasses. The target of the unit is the type: b -> (c, b) You may recognize it as the state monad, with b representing the state. The unit is nothing else but the natural transformation whose component we call return. Coincidence? Not really! Look at the component of the counit: (b -> a, b) -> a It’s the extract of the Store comonad. It turns out that every adjunction gives rise to both a monad and a comonad. Not only that, every monad and every comonad give rise to adjunctions. It seems like, in category theory, if you dig deep enough, everything is related to everything in some meaningful way. And every time you revisit a topic, you discover new insights. That’s what makes category theory so exciting. I’ve been working with profunctors lately. They are interesting beasts, both in category theory and in programming. In Haskell, they form the basis of profunctor optics–in particular the lens library. Profunctor Recap The categorical definition of a profunctor doesn’t even begin to describe its richness. You might say that it’s just a functor from a product category \mathbb{C}^{op}\times \mathbb{D} to Set (I’ll stick to Set for simplicity, but there are generalizations to other categories as well). A profunctor P (a.k.a., a distributor, or bimodule) maps a pair of objects, c from \mathbb{C} and d from \mathbb{D}, to a set P(c, d). Being a functor, it also maps any pair of morphisms in \mathbb{C}^{op}\times \mathbb{D}: f\colon c' \to c g\colon d \to d' to a function between those sets: P(f, g) \colon P(c, d) \to P(c', d') Notice that the first morphism f goes in the opposite direction to what we normally expect for functors. We say that the profunctor is contravariant in its first argument and covariant in the second. But what’s so special about this particular combination of source and target categories? The key point is to realize that a profunctor generalizes the idea of a hom-functor. Like a profunctor, a hom-functor maps pairs of objects to sets. Indeed, for any two objects in \mathbb{C} we have the set of morphisms between them, C(a, b). Also, any pair of morphisms in \mathbb{C}: f\colon a' \to a g\colon b \to b' can be lifted to a function, which we will denote by C(f, g), between hom-sets: C(f, g) \colon C(a, b) \to C(a', b') Indeed, for any h \in C(a, b) we have: C(f, g) h = g \circ h \circ f \in C(a', b') This (plus functorial laws) completes the definition of a functor from \mathbb{C}^{op}\times \mathbb{C} to Set. So a hom-functor is a special case of an endo-profunctor (where \mathbb{D} is the same as \mathbb{C}). It’s contravariant in the first argument and covariant in the second. For Haskell programmers, here’s the definition of a profunctor from Edward Kmett’s Data.Profunctor library: class Profunctor p where dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b' The function dimap does the lifting of a pair of morphisms. Here’s the proof that the hom-functor which, in Haskell, is represented by the arrow ->, is a profunctor: instance Profunctor (->) where dimap ab cd bc = cd . bc . ab Not only that: a general profunctor can be considered an extension of a hom-functor that forms a bridge between two categories. Consider a profunctor P spanning two categories \mathbb{C} and \mathbb{D}: P \colon \mathbb{C}^{op}\times \mathbb{D} \to Set For any two objects from one of the categories we have a regular hom-set. But if we take one object c from \mathbb{C} and another object d from \mathbb{D}, we can generate a set P(c, d). This set works just like a hom-set. Its elements are called heteromorphisms, because they can be thought of as representing morphism between two different categories. What makes them similar to morphisms is that they can be composed with regular morphisms. Suppose you have a morphism in \mathbb{C}: f\colon c' \to c and a heteromorphism h \in P(c, d). Their composition is another heteromorphism obtained by lifting the pair (f, id_d). Indeed: P(f, id_d) \colon P(c, d) \to P(c', d) so its action on h produces a heteromorphism from c' to d, which we can call the composition h \circ f of a heteromorphism h with a morphism f. Similarly, a morphism in \mathbb{D}: g\colon d \to d' can be composed with h by lifting (id_c, g). In Haskell, this new composition would be implemented by applying dimap f id to precompose p c d with f :: c' -> c and dimap id g to postcompose it with g :: d -> d' This is how we can use a profunctor to glue together two categories. Two categories connected by a profunctor form a new category known as their collage. A given profunctor provides unidirectional flow of heteromorphisms from \mathbb{C} to \mathbb{D}, so there is no opportunity to compose two heteromorphisms. Profunctors As Relations The opportunity to compose heteromorphisms arises when we decide to glue more than two categories. The clue as how to proceed comes from yet another interpretation of profunctors: as proof-relevant relations. In classical logic, a relation between sets assigns a Boolean true or false to each pair of elements. The elements are either related or not, period. In proof-relevant logic, we are not only interested in whether something is true, but also in gathering witnesses to the proofs. So, instead of assigning a single Boolean to each pair of elements, we assign a whole set. If the set is empty, the elements are unrelated. If it’s non-empty, each element is a separate witness to the relation. This definition of a relation can be generalized to any category. In fact there is already a natural relation between objects in a category–the one defined by hom-sets. Two objects a and b are related this way if the hom-set C(a, b) is non-empty. Each morphism in C(a, b) serves as a witness to this relation. With profunctors, we can define proof-relevant relations between objects that are taken from different categories. Object c in \mathbb{C} is related to object d in \mathbb{D} if P(c, d) is a non-empty set. Moreover, each element of this set serves as a witness for the relation. Because of functoriality of P, this relation is compatible with the categorical structure, that is, it composes nicely with the relation defined by hom-sets. In general, a composition of two relations P and Q, denoted by P \circ Q is defined as a path between objects. Objects a and c are related if there is a go-between object b such that both P(a, b) and Q(b, c) are non-empty. As a witness of this relation we can pick any pair of elements, one from P(a, b) and one from Q(b, c). By convention, a profunctor P(a, b) is drawn as an arrow (often crossed) from b to a, a \nleftarrow b. Composition of profunctors/relations Profunctor Composition To create a set of all witnesses of P \circ Q we have to sum over all possible intermediate objects and all pairs of witnesses. Roughly speaking, such a sum (modulo some identifications) is expressed categorically as a coend: (P \circ Q)(a, c) = \int^b P(a, b) \times Q(b, c) As a refresher, a coend of a profunctor P is a set \int^a P(a, a) equipped with a family of injections i_x \colon P(x, x) \to \int^a P(a, a) that is universal in the sense that, for any other set s and a family: \alpha_x \colon P(x, x) \to s there is a unique function h that factorizes them all: \alpha_x = h \circ i_x Universal property of a coend Profunctor composition can be translated into pseudo-Haskell as: type Procompose q p a c = exists b. (p a b, q b c) where the coend is encoded as an existential data type. The actual implementation (again, see Edward Kmett’s Data.Profunctor.Composition) is: data Procompose q p a c where Procompose :: q b c -> p a b -> Procompose q p a c The existential quantifier is expressed in terms of a GADT (Generalized Algebraic Data Type), with the free occurrence of b inside the data constructor. Einstein’s Convention By now you might be getting lost juggling the variances of objects appearing in those formulas. The coend variable, for instance, must appear under the integral sign once in the covariant and once in the contravariant position, and the variances on the right must match the variances on the left. Fortunately, there is a precedent in a different branch of mathematics, tensor calculus in vector spaces, with the kind of notation that takes care of variances. Einstein coopted and expanded this notation in his theory of relativity. Let’s see if we can adapt this technique to the calculus of profunctors. The trick is to write contravariant indices as superscripts and the covariant ones as subscripts. So, from now on, we’ll write the components of a profunctor p (we’ll switch to lower case to be compatible with Haskell) as p^c\,_d. Einstein also came up with a clever convention: implicit summation over a repeated index. In the case of profunctors, the summation corresponds to taking a coend. In this notation, a coend over a profunctor p looks like a trace of a tensor: p^a\,_a = \int^a p(a, a) The composition of two profunctors becomes: (p \circ q)^a\, _c = p^a\,_b \, q^b\,_c = \int^b p(a, b) \times q(b, c) The summation convention applies only to adjacent indices. When they are separated by an explicit product sign (or any other operator), the coend is not assumed, as in: p^a\,_b \times q^b\,_c (no summation). The hom-functor in a category \mathbb{C} is also a profunctor, so it can be notated appropriately: C^a\,_b = C(a, b) The co-Yoneda lemma (see Ninja Yoneda) becomes: C^c\,_{c'}\,p^{c'}\,_d \cong p^c\,_d \cong p^c\,_{d'}\,D^{d'}\,_d suggesting that the hom-functors C^c\,_{c'} and D^{d'}\,_d behave like Kronecker deltas (in tensor-speak) or unit matrices. Here, the profunctor p spans two categories The lifting of morphisms: f\colon c' \to c g\colon d \to d' can be written as: p^f\,_g \colon p^c\,_d \to p^{c'}\,_{d'} There is one more useful identity that deals with mapping out from a coend. It’s the consequence of the fact that the hom-functor is continuous. It means that it maps (co-) limits to limits. More precisely, since the hom-functor is contravariant in the first variable, when we fix the target object, it maps colimits in the first variable to limits. (It also maps limits to limits in the second variable). Since a coend is a colimit, and an end is a limit, continuity leads to the following identity: Set(\int^c p(c, c), s) \cong \int_c Set(p(c, c), s) for any set s. Programmers know this identity as a generalization of case analysis: a function from a sum type is a product of functions (one function per case). If we interpret the coend as an existential quantifier, the end is equivalent to a universal quantifier. Let’s apply this identity to the mapping out from a composition of two profunctors: p^a\,_b \, q^b\,_c \to s = Set\big(\int^b p(a, b) \times q(b, c), s\big) This is isomorphic to: \int_b Set\Big(p(a,b) \times q(b, c), s\Big) or, after currying (using the product/exponential adjunction), \int_b Set\Big(p(a, b), q(b, c) \to s\Big) This gives us the mapping out formula: p^a\,_b \, q^b\,_c \to s \cong p^a\,_b \to q^b\,_c \to s with the right hand side natural in b. Again, we don’t perform implicit summation on the right, where the repeated indices are separated by an arrow. There, the repeated index b is universally quantified (through the end), giving rise to a natural transformation. Bicategory Prof Since profunctors can be composed using the coend formula, it’s natural to ask if there is a category in which they work as morphisms. The only problem is that profunctor composition satisfies the associativity and unit laws (see the co-Yoneda lemma above) only up to isomorphism. Not to worry, there is a name for that: a bicategory. In a bicategory we have objects, which are called 0-cells; morphisms, which are called 1-cells; and morphisms between morphisms, which are called 2-cells. When we say that categorical laws are satisfied up to isomorphism, it means that there is an invertible 2-cell that maps one side of the law to another. The bicategory Prof has categories as 0-cells, profunctors as 1-cells, and natural transformations as 2-cells. A natural transformation \alpha between profunctors p and q \alpha \colon p \Rightarrow q has components that are functions: \alpha^c\,_d \colon p^c\,_d \to q^c\,_d satisfying the usual naturality conditions. Natural transformations between profunctors can be composed as functions (this is called vertical composition). In fact 2-cells in any bicategory are composable, and there always is a unit 2-cell. It follows that 1-cells between any two 0-cells form a category called the hom-category. But there is another way of composing 2-cells that’s called horizontal composition. In Prof, this horizontal composition is not the usual horizontal composition of natural transformations, because composition of profunctors is not the usual composition of functors. We have to construct a natural transformation between one composition of profuntors, say p^a\,_b \, q^b\,_c and another, r^a\,_b \, s^b\,_c, having at our disposal two natural transformations: \alpha \colon p \Rightarrow r \beta \colon q \Rightarrow s The construction is a little technical, so I’m moving it to the appendix. We will denote such horizontal composition as: (\alpha \circ \beta)^a\,_c \colon p^a\,_b \, q^b\,_c \to r^a\,_b \, s^b\,_c If one of the natural transformations is an identity natural transformation, say, from p^a\,_b to p^a\,_b, horizontal composition is called whiskering and can be written as: (p \circ \beta)^a\,_c \colon p^a\,_b \, q^b\,_c \to p^a\,_b \, s^b\,_c The fact that a monad is a monoid in the category of endofunctors is a lucky accident. That’s because, in general, a monad can be defined in any bicategory, and Cat just happens to be a (strict) bicategory. It has (small) categories as 0-cells, functors as 1-cells, and natural transformations as 2-cells. A monad is defined as a combination of a 0-cell (you need a category to define a monad), an endo-1-cell (that would be an endofunctor in that category), and two 2-cells. These 2-cells are variably called multiplication and unit, \mu and \eta, or join and return. Since Prof is a bicategory, we can define a monad in it, and call it a promonad. A promonad consists of a 0-cell C, which is a category; an endo-1-cell p, which is a profunctor in that category; and two 2-cells, which are natural transformations: \mu^a\,_b \colon p^a\,_c \, p^c\,_b \to p^a\,_b \eta^a\,_b \colon C^a\,_b \to p^a\,_b Remember that C^a\,_b is the hom-profunctor in the category C which, due to co-Yoneda, happens to be the unit of profunctor composition. Programmers might recognize elements of the Haskell Arrow in it (see my blog post on monoids). We can apply the mapping-out identity to the definition of multiplication and get: \mu^a\,_b \colon p^a\,_c \to p^c\,_b \to p^a\,_b Notice that this looks very much like composition of heteromorphisms. Moreover, the monadic unit \eta maps regular morphisms to heteromorphisms. We can then construct a new category, whose objects are the same as the objects of \mathbb{C}, with hom-sets given by the profunctor p. That is, a hom set from a to b is the set p^a\,_b. We can define an identity-on-object functor J from \mathbb{C} to that category, whose action on hom-sets is given by \eta. Interestingly, this construction also works in the opposite direction (as was brought to my attention by Alex Campbell). Any indentity-on-objects functor defines a promonad. Indeed, given a functor J, we can always turn it into a profunctor: p(c, d) = D(J\, c, J\, d) In Einstein notation, this reads: p^c\,_d = D^{J\, c}\,_{J\, d} Since J is identity on objects, the composition of morphisms in D can be used to define the composition of heteromorphisms. This, in turn, can be used to define \mu, thus showing that p is a promonad on \mathbb{C}. I realize that I have touched upon some pretty advanced topics in category theory, like bicategories and promonads, so it’s a little surprising that these concepts can be illustrated in Haskell, some of them being present in popular libraries, like the Arrow library, which has applications in functional reactive programming. I’ve been experimenting with applying Einstein’s summation convention to profunctors, admittedly with mixed results. This is definitely work in progress and I welcome suggestions to improve it. The main problem is that we sometimes need to apply the sum (coend), and at other times the product (end) to repeated indices. This is in particular awkward in the formulation of the mapping out property. I suggest separating the non-summed indices with product signs or arrows but I’m not sure how well this will work. Appendix: Horizontal Composition in Prof We have at our disposal two natural transformations: \alpha \colon p \Rightarrow r \beta \colon q \Rightarrow s and the following coend, which is the composition of the profunctors p and q: Our goal is to construct an element of the target coend: \int^b r(a, b) \times s(b, c) Horizontal composition of 2-cells To construct an element of a coend, we need to provide just one element of r(a, b') \times s(b', c) for some b'. We’ll look for a function that would construct such an element in the following hom-set: Set\Big(\int^b p(a, b) \times q(b, c), r(a, b') \times s(b', c)\Big) Using Einstein notation, we can write it as: p^a\,_b \, q^b\,_c \to r^a\,_{b'} \times s^{b'}\,_c and then use the mapping out property: We can pick b' equal to b and implement the function using the components of the two natural transformations, \alpha^a\,_{b} \times \beta^{b}\,_c. Of course, this is how a programmer might think of it. A mathematician will use the universal property of the coend (p \circ q)^a\,_c, as in the diagram below (courtesy Alex Campbell). Horizontal composition using the universal property of a coend In Haskell, we can define a natural transformation between two (endo-) profunctors as a polymorphic function: newtype PNat p q = PNat (forall a b. p a b -> q a b) Horizontal composition is then given by: horPNat :: PNat p r -> PNat q s -> Procompose p q a c -> Procompose r s a c horPNat (PNat alpha) (PNat beta) (Procompose pbc qdb) = Procompose (alpha pbc) (beta qdb) I’m grateful to Alex Campbell from Macquarie University in Sydney for extensive help with this blog post. Further Reading Oxford, UK.    2019 July 22 – 26 Dear scientists, mathematicians, linguists, philosophers, and hackers, We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School.   It will begin in January 2019 and culminate in a meeting in Oxford, July 22-26. Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools.  These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members are as varied as the systems being studied. The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory. Who should apply? Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of monoidal category for example— is encouraged. We will consider advanced undergraduates, Ph.D. students, and post-docs. We ask that you commit to the full program as laid out below. Instructions on how to apply can be found below the research topic descriptions. Senior research mentors and their topics Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019. Miriam Backens Title: Simplifying quantum circuits using the ZX-calculus Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics.  A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set.  In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits. Background reading: 1. Matthes Amy, Jianxin Chen, Neil Ross. A finite presentation of CNOT-Dihedral operators. arXiv:1701.00140 2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics. arXiv:1307.7025 Tobias Fritz Title: Partial evaluations, the bar construction, and second-order stochastic dominance Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1.  In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance. Background reading: 1. Tobias Fritz, Paolo Perrone. Monads, partial evaluations, and rewriting. arXiv:1810.06037 2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian. Available here. 3. Todd Trimble. On the bar construction. Available here. Pieter Hofstra Title: Complexity classes, computation, and Turing categories Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful to study practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes.  This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate the expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability? Background reading: 1. J.R.B. Cockett, P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp 183-209, 2008.  Available here . 2. J.R.B. Cockett, P.J.W. Hofstra, P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp 129-146, 2014.  Available here. 3. A. Joyal, R. Street, D. Verity. Traced monoidal categories. Mat. Proc. Cam. Phil. Soc. 3, pp. 447-468, 1996. Available here. Bartosz Milewski Title: Traversal optics and profunctors Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it.  Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics.  Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this. Background reading: 1. Bartosz Milewski. Profunctor optics, categorical View. Available here. 2. Craig Pastro, Ross Street. Doubles for monoidal categories. arXiv:0711.1859 Mehrnoosh Sadrzadeh Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances where people interrupt each other, there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models. Background reading: 1. Gerhard Jager.  A multi-modal analysis of anaphora and ellipsis. Available here. 2.  Matthew Purver, Ronnie Cann, Ruth Kempson. Grammars as parsers:    Meeting the dialogue challenge. Available here. David Spivak Title: Toward a mathematical foundation for autopoiesis Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoiesis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it. Background reading: 1. Fong, Myers, Spivak. Behavioral mereology.  arXiv:1811.00420. 2. Fong, Spivak. Graphical regular logic.  arXiv:1812.05765. 3. Luhmann. Organization and Decision, CUP. (Preface) School structure All of the participants will be divided up into groups corresponding to the projects.  A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to a discussion in a private online chat forum.  There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe. After all of the papers have been presented, there will be a two-week visit to Oxford University from 15 – 26 July 2019.  The first week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.   The second week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree. How to apply To apply please send the following to • Your CV • A document with: • An explanation of any relevant background you have in category theory or any of the specific projects areas • The date you completed or expect to complete your Ph.D. and a one-sentence summary of its subject matter. • Order of project preference • To what extent can you commit to coming to Oxford (availability of funding is uncertain at this time) • A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts: • how can this school contribute to your research goals • how can this school help in your career? Also, have sent on your behalf to a brief letter of recommendation confirming any of the following: • your background • ACT2019 School’s relevance to your research/career • your research experience For more information, contact either • Daniel Cicala. cicala (at) math (dot) ucr (dot) edu • Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk Yes, it’s this time of the year again! I started a little tradition a year ago with Stalking a Hylomorphism in the Wild. This year I was reminded of the Advent of Code by a tweet with this succint C++ program: This piece of code is probably unreadable to a regular C++ programmer, but makes perfect sense to a Haskell programmer. Here’s the description of the problem: You are given a list of equal-length strings. Every string is different, but two of these strings differ only by one character. Find these two strings and return their matching part. For instance, if the two strings were “abcd” and “abxd”, you would return “abd”. What makes this problem particularly interesting is its potential application to a much more practical task of matching strands of DNA while looking for mutations. I decided to explore the problem a little beyond the brute force approach. And, of course, I had a hunch that I might encounter my favorite wild beast–the hylomorphism. Brute force approach First things first. Let’s do the boring stuff: read the file and split it into lines, which are the strings we are supposed to process. So here it is: main = do txt <- readFile "day2.txt" let cs = lines txt print $ findMatch cs The real work is done by the function findMatch, which takes a list of strings and produces the answer, which is a single string. findMatch :: [String] -> String First, let’s define a function that calculates the distance between any two strings. distance :: (String, String) -> Int We’ll define the distance as the count of mismatched characters. Here’s the idea: We have to compare strings (which, let me remind you, are of equal length) character by character. Strings are lists of characters. The first step is to take two strings and zip them together, producing a list of pairs of characters. In fact we can combine the zipping with the next operation–in this case, comparison for inequality, (/=)–using the library function zipWith. However, zipWith is defined to act on two lists, and we will want it to act on a pair of lists–a subtle distinction, which can be easily overcome by applying uncurry: which turns a function of two arguments into a function that takes a pair. Here’s how we use it: uncurry (zipWith (/=)) The comparison operator (/=) produces a Boolean result, True or False. We want to count the number of differences, so we’ll covert True to one, and False to zero: fromBool :: Num a => Bool -> a fromBool False = 0 fromBool True = 1 (Notice that such subtleties as the difference between Bool and Int are blisfully ignored in C++.) Finally, we’ll sum all the ones using sum. Altogether we have: distance = sum . fmap fromBool . uncurry (zipWith (/=)) Now that we know how to find the distance between any two strings, we’ll just apply it to all possible pairs of strings. To generate all pairs, we’ll use list comprehension: let ps = [(s1, s2) | s1 <- ss, s2 <- ss] (In C++ code, this was done by cartesian_product.) Our goal is to find the pair whose distance is exactly one. To this end, we’ll apply the appropriate filter: filter ((== 1) . distance) ps For our purposes, we’ll assume that there is exactly one such pair (if there isn’t one, we are willing to let the program fail with a fatal exception). (s, s') = head $ filter ((== 1) . distance) ps The final step is to remove the mismatched character: filter (uncurry (==)) $ zip s s' We use our friend uncurry again, because the equality operator (==) expects two arguments, and we are calling it with a pair of arguments. The result of filtering is a list of identical pairs. We’ll fmap fst to pick the first components. findMatch :: [String] -> String findMatch ss = in fmap fst $ filter (uncurry (==)) $ zip s s' This program produces the correct result and we could stop right here. But that wouldn’t be much fun, would it? Besides, it’s possible that other algorithms could perform better, or be more flexible when applied to a more general problem. Data-driven approach The main problem with our brute-force approach is that we are comparing everything with everything. As we increase the number of input strings, the number of comparisons grows like a factorial. There is a standard way of cutting down on the number of comparison: organizing the input into a neat data structure. We are comparing strings, which are lists of characters, and list comparison is done recursively. Assume that you know that two strings share a prefix. Compare the next character. If it’s equal in both strings, recurse. If it’s not, we have a single character fault. The rest of the two strings must now match perfectly to be considered a solution. So the best data structure for this kind of algorithm should batch together strings with equal prefixes. Such a data structure is called a prefix tree, or a trie (pronounced try). At every level of our prefix tree we’ll branch based on the current character (so the maximum branching factor is, in our case, 26). We’ll record the character, the count of strings that share the prefix that led us there, and the child trie storing all the suffixes. data Trie = Trie [(Char, Int, Trie)] deriving (Show, Eq) Here’s an example of a trie that stores just two strings, “abcd” and “abxd”. It branches after b. a 2 b 2 c 1 x 1 d 1 d 1 When inserting a string into a trie, we recurse both on the characters of the string and the list of branches. When we find a branch with the matching character, we increment its count and insert the rest of the string into its child trie. If we run out of branches, we create a new one based on the current character, give it the count one, and the child trie with the rest of the string: insertS :: Trie -> String -> Trie insertS t "" = t insertS (Trie bs) s = Trie (inS bs s) inS ((x, n, t) : bs) (c : cs) = if c == x then (c, n + 1, insertS t cs) : bs else (x, n, t) : inS bs (c : cs) inS [] (c : cs) = [(c, 1, insertS (Trie []) cs)] We convert our input to a trie by inserting all the strings into an (initially empty) trie: mkTrie :: [String] -> Trie mkTrie = foldl insertS (Trie []) Of course, there are many optimizations we could use, if we were to run this algorithm on big data. For instance, we could compress the branches as is done in radix trees, or we could sort the branches alphabetically. I won’t do it here. I won’t pretend that this implementation is simple and elegant. And it will get even worse before it gets better. The problem is that we are dealing explicitly with recursion in multiple dimensions. We recurse over the input string, the list of branches at each node, as well as the child trie. That’s a lot of recursion to keep track of–all at once. Now brace yourself: We have to traverse the trie starting from the root. At every branch we check the prefix count: if it’s greater than one, we have more than one string going down, so we recurse into the child trie. But there is also another possibility: we can allow to have a mismatch at the current level. The current characters may be different but, since we allow only one mismatch, the rest of the strings have to match exactly. That’s what the function exact does. Notice that exact t is used inside foldMap, which is a version of fold that works on monoids–here, on strings. match1 :: Trie -> [String] match1 (Trie bs) = go bs go :: [(Char, Int, Trie)] -> [String] go ((x, n, t) : bs) = let a1s = if n > 1 then fmap (x:) $ match1 t else [] a2s = foldMap (exact t) bs a3s = go bs -- recurse over list in a1s ++ a2s ++ a3s go [] = [] exact t (_, _, t') = matchAll t t' Here’s the function that finds all exact matches between two tries. It does it by generating all pairs of branches in which top characters match, and then recursing down. matchAll :: Trie -> Trie -> [String] matchAll (Trie bs) (Trie bs') = mAll bs bs' mAll :: [(Char, Int, Trie)] -> [(Char, Int, Trie)] -> [String] mAll [] [] = [""] mAll bs bs' = let ps = [ (c, t, t') | (c, _, t) <- bs , (c', _', t') <- bs' , c == c'] in foldMap go ps go (c, t, t') = fmap (c:) (matchAll t t') When mAll reaches the leaves of the trie, it returns a singleton list containing an empty string. Subsequent actions of fmap (c:) will prepend characters to this string. Since we are expecting exactly one solution to the problem, we’ll extract it using head: findMatch1 :: [String] -> String findMatch1 cs = head $ match1 (mkTrie cs) Recursion schemes As you hone your functional programming skills, you realize that explicit recursion is to be avoided at all cost. There is a small number of recursive patterns that have been codified, and they can be used to solve the majority of recursion problems (for some categorical background, see F-Algebras). Recursion itself can be expressed in Haskell as a data structure: a fixed point of a functor: In particular, our trie can be generated from the following functor: data TrieF a = TrieF [(Char, a)] deriving (Show, Functor) Notice how I have replaced the recursive call to the Trie type constructor with the free type variable a. The functor in question defines the structure of a single node, leaving holes marked by the occurrences of a for the recursion. When these holes are filled with full blown tries, as in the definition of the fixed point, we recover the complete trie. I have also made one more simplification by getting rid of the Int in every node. This is because, in the recursion scheme I’m going to use, the folding of the trie proceeds bottom-up, rather than top-down, so the multiplicity information can be passed upwards. The main advantage of recursion schemes is that they let us use simpler, non-recursive building blocks such as algebras and coalgebras. Let’s start with a simple coalgebra that lets us build a trie from a list of strings. A coalgebra is a fancy name for a particular type of function: type Coalgebra f x = x -> f x Think of x as a type for a seed from which one can grow a tree. A colagebra tells us how to use this seed to create a single node described by the functor f and populate it with (presumably smaller) seeds. We can then pass this coalgebra to a simple algorithm, which will recursively expand the seeds. This algorithm is called the anamorphism: ana :: Functor f => Coalgebra f a -> a -> Fix f ana coa = In . fmap (ana coa) . coa Let’s see how we can apply it to the task of building a trie. The seed in our case is a list of strings (as per the definition of our problem, we’ll assume they are all equal length). We start by grouping these strings into bunches of strings that start with the same character. There is a library function called groupWith that does exactly that. We have to import the right library: import GHC.Exts (groupWith) This is the signature of the function: It takes a function a -> b that converts each list element to a type that supports comparison (as per the typeclass Ord), and partitions the input into lists that compare equal under this particular ordering. In our case, we are going to extract the first character from a string using head and bunch together all strings that share that first character. let sss = groupWith head ss The tails of those strings will serve as seeds for the next tier of the trie. Eventually the strings will be shortened to nothing, triggering the end of recursion. fromList :: Coalgebra TrieF [String] fromList ss = -- are strings empty? (checking one is enough) if null (head ss) then TrieF [] -- leaf let sss = groupWith head ss in TrieF $ fmap mkBranch sss The function mkBranch takes a bunch of strings sharing the same first character and creates a branch seeded with the suffixes of those strings. mkBranch :: [String] -> (Char, [String]) mkBranch sss = let c = head (head sss) -- they're all the same in (c, fmap tail sss) Notice that we have completely avoided explicit recursion. The next step is a little harder. We have to fold the trie. Again, all we have to define is a step that folds a single node whose children have already been folded. This step is defined by an algebra: type Algebra f x = f x -> x Just as the type x described the seed in a coalgebra, here it describes the accumulator–the result of the folding of a recursive data structure. We pass this algebra to a special algorithm called a catamorphism that takes care of the recursion: cata :: Functor f => Algebra f a -> Fix f -> a cata alg = alg . fmap (cata alg) . out Notice that the folding proceeds from the bottom up: the algebra assumes that all the children have already been folded. The hardest part of designing an algebra is figuring out what information needs to be passed up in the accumulator. We obviously need to return the final result which, in our case, is the list of strings with one mismatched character. But when we are in the middle of a trie, we have to keep in mind that the mismatch may still happen above us. So we also need a list of strings that may serve as suffixes when the mismatch occurs. We have to keep them all, because they might be matched later with strings from other branches. In other words, we need to be accumulating two lists of strings. The first list accumulates all suffixes for future matching, the second accumulates the results: strings with one mismatch (after the mismatch has been removed). We therefore should implement the following algebra: Algebra TrieF ([String], [String]) To understand the implementation of this algebra, consider a single node in a trie. It’s a list of branches, or pairs, whose first component is the current character, and the second a pair of lists of strings–the result of folding a child trie. The first list contains all the suffixes gathered from lower levels of the trie. The second list contains partial results: strings that were matched modulo single-character defect. As an example, suppose that you have a node with two branches: [ ('a', (["bcd", "efg"], ["pq"])) , ('x', (["bcd"], []))] First we prepend the current character to strings in both lists using the function prep with the following signature: prep :: (Char, ([String], [String])) -> ([String], [String]) This way we convert each branch to a pair of lists. [ (["abcd", "aefg"], ["apq"]) , (["xbcd"], [])] We then merge all the lists of suffixes and, separately, all the lists of partial results, across all branches. In the example above, we concatenate the lists in the two columns. (["abcd", "aefg", "xbcd"], ["apq"]) Now we have to construct new partial results. To do this, we create another list of accumulated strings from all branches (this time without prefixing them): ss = concat $ fmap (fst . snd) bs In our case, this would be the list: ["bcd", "efg", "bcd"] To detect duplicate strings, we’ll insert them into a multiset, which we’ll implement as a map. We need to import the appropriate library: import qualified Data.Map as M and define a multiset Counts as: type Counts a = M.Map a Int Every time we add a new item, we increment the count: add :: Ord a => Counts a -> a -> Counts a add cs c = M.insertWith (+) c 1 cs To insert all strings from a list, we use a fold: mset = foldl add M.empty ss We are only interested in items that have multiplicity greater than one. We can filter them and extract their keys: dups = M.keys $ M.filter (> 1) mset Here’s the complete algebra: accum :: Algebra TrieF ([String], [String]) accum (TrieF []) = ([""], []) accum (TrieF bs) = -- b :: (Char, ([String], [String])) let -- prepend chars to string in both lists pss = unzip $ fmap prep bs (ss1, ss2) = both concat pss -- find duplicates ss = concat $ fmap (fst . snd) bs mset = foldl add M.empty ss dups = M.keys $ M.filter (> 1) mset in (ss1, dups ++ ss2) prep (c, pss) = both (fmap (c:)) pss I used a handy helper function that applies a function to both components of a pair: both :: (a -> b) -> (a, a) -> (b, b) both f (x, y) = (f x, f y) And now for the grand finale: Since we create the trie using an anamorphism only to immediately fold it using a catamorphism, why don’t we cut the middle person? Indeed, there is an algorithm called the hylomorphism that does just that. It takes the algebra, the coalgebra, and the seed, and returns the fully charged accumulator. hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a hylo alg coa = alg . fmap (hylo alg coa) . coa And this is how we extract and print the final result: print $ head $ snd $ hylo accum fromList cs The advantage of using the hylomorphism is that, because of Haskell’s laziness, the trie is never wholly constructed, and therefore doesn’t require large amounts of memory. At every step enough of the data structure is created as is needed for immediate computation; then it is promptly released. In fact, the definition of the data structure is only there to guide the steps of the algorithm. We use a data structure as a control structure. Since data structures are much easier to visualize and debug than control structures, it’s almost always advantageous to use them to drive computation. In fact, you may notice that, in the very last step of the computation, our accumulator recreates the original list of strings (actually, because of laziness, they are never fully reconstructed, but that’s not the point). In reality, the characters in the strings are never copied–the whole algorithm is just a choreographed dance of internal pointers, or iterators. But that’s exactly what happens in the original C++ algorithm. We just use a higher level of abstraction to describe this dance. I haven’t looked at the performance of various implementations. Feel free to test it and report the results. The code is available on github. I’m grateful to the participants of the Seattle Haskell Users’ Group for many helpful comments during my presentation. I wanted to do category theory, not geometry, so the idea of studying simplexes didn’t seem very attractive at first. But as I was getting deeper into it, a very different picture emerged. Granted, the study of simplexes originated in geometry, but then category theorists took interest in it and turned it into something completely different. The idea is that simplexes define a very peculiar scheme for composing things. The way you compose lower dimensional simplexes in order to build higher dimensional simplexes forms a pattern that shows up in totally unrelated areas of mathematics… and programming. Recently I had a discussion with Edward Kmett in which he hinted at the simplicial structure of cumulative edits in a source file. Geometric picture Let’s start with a simple idea, and see what we can do with it. The idea is that of triangulation, and it almost goes back to the beginning of the Agricultural Era. Somebody smart noticed long time ago that we can measure plots of land by subdividing them into triangles. Why triangles and not, say, rectangles or quadrilaterals? Well, to begin with, a quadrilateral can be always divided into triangles, so triangles are more fundamental as units of composition in 2-d. But, more importantly, triangles also work when you embed them in higher dimensions, and quadrilaterals don’t. You can take any three points and there is a unique flat triangle that they span (it may be degenerate, if the points are collinear). But four points will, in general, span a warped quadrilateral. Mind you, rectangles work great on flat screens, and we use them all the time for selecting things with the mouse. But on a curved or bumpy surface, triangles are the only option. Surveyors have covered the whole Earth, mountains and all, with triangles. In computer games, we build complex models, including human faces or dolphins, using wireframes. Wireframes are just systems of triangles that share some of the vertices and edges. So triangles can be used to approximate complex 2-d surfaces in 3-d. More dimensions How can we generalize this process? First of all, we could use triangles in spaces that have more than 3 dimensions. This way we could, for instance, build a Klein bottle in 4-d without it intersecting itself. We can also consider replacing triangles with higher-dimensional objects. For instance, we could approximate 3-d volumes by filling them with cubes. This technique is used in computer graphics, where we often organize lots of cubes in data structures called octrees. But just like squares or quadrilaterals don’t work very well on non-flat surfaces, cubes cannot be used in curved spaces. The natural generalization of a triangle to something that can fill a volume without any warping is a tetrahedron. Any four points in space span a tetrahedron. We can go on generalizing this construction to higher and higher dimensions. To form an n-dimensional simplex we can pick n+1 points. We can draw a segment between any two points, a triangle between any three points, a tetrahedron between any four points, and so on. It’s thus natural to define a 1-dimensional simplex to be a segment, and a 0-dimensional simplex to be a point. Simplexes (or simplices, as they are sometimes called) have very regular recursive structure. An n-dimensional simplex has n+1 faces, which are all n-1 dimensional simplexes. A tetrahedron has four triangular faces, a triangle has three sides (one-dimensional simplexes), and a segment has two endpoints. (A point should have one face–and it does, in the “augmented” theory). Every higher-dimensional simplex can be decomposed into lower-dimensional simplexes, and the process can be repeated until we get down to individual vertexes. This constitutes a very interesting composition scheme that will come up over and over again in unexpected places. Notice that you can always construct a face of a simplex by deleting one point. It’s the point opposite to the face in question. This is why there are as many faces as there are points in a simplex. Look Ma! No coordinates! So far we’ve been secretly thinking of points as elements of some n-dimensional linear space, presumably \mathbb{R}^n. Time to make another leap of abstraction. Let’s abandon coordinate systems. Can we still define simplexes and, if so, how would we use them? Consider a wireframe built from triangles. It defines a particular shape. We can deform this shape any way we want but, as long as we don’t break connections or fuse points, we cannot change its topology. A wireframe corresponding to a torus can never be deformed into a wireframe corresponding to a sphere. The information about topology is encoded in connections. The connections don’t depend on coordinates. Two points are either connected or not. Two triangles either share a side or they don’t. Two tetrahedrons either share a triangle or they don’t. So if we can define simplexes without resorting to coordinates, we’ll have a new language to talk about topology. But what becomes of a point if we discard its coordinates? It becomes an element of a set. An arrangement of simplexes can be built from a set of points or 0-simplexes, together with a set of 1-simplexes, a set of 2-simplexes, and so on. Imagine that you bought a piece of furniture from Ikea. There is a bag of screws (0-simplexes), a box of sticks (1-simplexes), a crate of triangular planks (2-simplexes), and so on. All parts are freely stretchable (we don’t care about sizes). You have no idea what the piece of furniture will look like unless you have an instruction booklet. The booklet tells you how to arrange things: which sticks form the edges of which triangles, etc. In general, you want to know which lower-order simplexes are the “faces” of higher-order simplexes. This can be determined by defining functions between the corresponding sets, which we’ll call face maps. For instance, there should be two function from the set of segments to the set of points; one assigning the beginning, and the other the end, to each segment. There should be three functions from the set of triangles to the set of segments, and so on. If the same point is the end of one segment and the beginning of another, the two segments are connected. A segment may be shared between multiple triangles, a triangle may be shared between tetrahedrons, and so on. You can compose these functions–for instance, to select a vertex of a triangle, or a side of a tetrahedron. Composable functions suggest a category, in this case a subcategory of Set. Selecting a subcategory suggests a functor from some other, simpler, category. What would that category be? The Simplicial category The objects of this simpler category, let’s call it the simplicial category \Delta, would be mapped by our functor to corresponding sets of simplexes in Set. So, in \Delta, we need one object corresponding to the set of points, let’s call it [0]; another for segments, [1]; another for triangles, [2]; and so on. In other words, we need one object called [n] per one set of n-dimensional simplexes. What really determines the structure of this category is its morphisms. In particular, we need morphisms that would be mapped, under our functor, to the functions that define faces of our simplexes–the face maps. This means, in particular, that for every n we need n+1 distinct functions from the image of [n] to the image of [n-1]. These functions are themselves images of morphisms that go between [n] and [n-1] in \Delta; we do, however, have a choice of the direction of these morphisms. If we choose our functor to be contravariant, the face maps from the image of [n] to the image of [n-1] will be images of morphisms going from [n-1] to [n] (the opposite direction). This contravariant functor from \Delta to Set (such functors are called pre-sheaves) is called the simplicial set. What’s attractive about this idea is that there is a category that has exactly the right types of morphisms. It’s a category whose objects are ordinals, or ordered sets of numbers, and morphisms are order-preserving functions. Object [0] is the one-element set \{0\}, [1] is the set \{0, 1\}, [2] is \{0, 1, 2\}, and so on. Morphisms are functions that preserve order, that is, if n < m then f(n) \leq f(m). Notice that the inequality is non-strict. This will become important in the definition of degeneracy maps. The description of simplicial sets using a functor follows a very common pattern in category theory. The simpler category defines the primitives and the grammar for combining them. The target category (often the category of sets) provides models for the theory in question. The same trick is used, for instance, in defining abstract algebras in Lawvere theories. There, too, the syntactic category consists of a tower of objects with a very regular set of morphisms, and the models are contravariant Set-valued functors. Because simplicial sets are functors, they form a functor category, with natural transformations as morphisms. A natural transformation between two simplicial sets is a family of functions that map vertices to vertices, edges to edges, triangles to triangles, and so on. In other words, it embeds one simplicial set in another. Face maps We will obtain face maps as images of injective morphisms between objects of \Delta. Consider, for instance, an injection from [1] to [2]. Such a morphism takes the set \{0, 1\} and maps it to \{0, 1, 2\}. In doing so, it must skip one of the numbers in the second set, preserving the order of the other two. There are exactly three such morphisms, skipping either 0, 1, or 2. And, indeed, they correspond to three face maps. If you think of the three numbers as numbering the vertices of a triangle, the three face maps remove the skipped vertex from the triangle leaving the opposing side free. The functor is contravariant, so it reverses the direction of morphisms. The same procedure works for higher order simplexes. An injection from [n-1] to [n] maps \{0, 1,...,n-1\} to \{0, 1,...,n\} by skipping some k between 0 and n. The corresponding face map is called d_{n, k}, or simply d_k, if n is obvious from the context. Such face maps automatically satisfy the obvious identities for any i < j: d_i d_j = d_{j-1} d_i The change from j to j-1 on the right compensates for the fact that, after removing the ith number, the remaining indexes are shifted down. These injections generate, through composition, all the morphisms that strictly preserve the ordering (we also need identity maps to form a category). But, as I mentioned before, we are also interested in those maps that are non-strict in the preservation of ordering (that is, they can map two consecutive numbers into one). These generate the so called degeneracy maps. Before we get to definitions, let me provide some motivation. One of the important application of simplexes is in homotopy. You don’t need to study algebraic topology to get a feel of what homotopy is. Simply said, homotopy deals with shrinking and holes. For instance, you can always shrink a segment to a point. The intuition is pretty obvious. You have a segment at time zero, and a point at time one, and you can create a continuous “movie” in between. Notice that a segment is a 1-simplex, whereas a point is a 0-simplex. Shrinking therefore provides a bridge between different-dimensional simplexes. Similarly, you can shrink a triangle to a segment–in particular the segment that is one of its sides. You can also shrink a triangle to a point by pasting together two shrinking movies–first shrinking the triangle to a segment, and then the segment to a point. So shrinking is composable. But not all higher-dimensional shapes can be shrunk to all lower-dimensional shapes. For instance, an annulus (a.k.a., a ring) cannot be shrunk to a segment–this would require tearing it. It can, however, be shrunk to a circular loop (or two segments connected end to end to form a loop). That’s because both, the annulus and the circle, have a hole. So continuous shrinking can be used to classify shapes according to how many holes they have. We have a problem, though: You can’t describe continuous transformations without using coordinates. But we can do the next best thing: We can define degenerate simplexes to bridge the gap between dimensions. For instance, we can build a segment, which uses the same vertex twice. Or a collapsed triangle, which uses the same side twice (its third side is a degenerate segment). Degeneracy maps We model operations on simplexes, such as face maps, through morphisms from the category opposite to \Delta. The creation of degenerate simplexes will therefore corresponds to mappings from [n+1] to [n]. They obviously cannot be injective, but we may chose them to be surjective. For instance, the creation of a degenerate segment from a point corresponds to the (opposite) mapping of \{0, 1\} to \{0\}, which collapses the two numbers to one. We can construct a degenerate triangle from a segment in two ways. These correspond to the two surjections from \{0, 1, 2\} to \{0, 1\}. The first one called \sigma_{1, 0} maps both 0 and 1 to 0, and 2 to 1. Notice that, as required, it preserves the order, albeit weakly. The second, \sigma_{1, 1} maps 0 to 0 but collapses 1 and 2 to 1. In general, \sigma_{n, k} maps \{0, 1, ... k, k+1 ... n+1\} to \{0, 1, ... k ... n\} by collapsing k and k+1 to k. Our contravariant functor maps these order-preserving surjections to functions on sets. The resulting functions are called degeneracy maps: each \sigma_{n, k} mapped to the corresponding s_{n, k}. As with face maps, we usually omit the first index, as it’s either arbitrary or easily deducible from the context. Two degeneracy maps. In the triangles, two of the sides are actually the same segment. The third side is a degenerate segment whose ends are the same point. There is an obvious identity for the composition of degeneracy maps: s_i s_j = s_{j+1} s_i for i \leq j. The interesting identities relate degeneracy maps to face maps. For instance, when i = j or i = j + 1, we have: d_i s_j = id (that’s the identity morphism). Geometrically speaking, imagine creating a degenerate triangle from a segment, for instance by using s_0. The first side of this triangle, which is obtained by applying d_0, is the original segment. The second side, obtained by d_1, is the same segment again. The third side is degenerate: it can be obtained by applying s_0 to the vertex obtained by d_1. In general, for i > j + 1: d_i s_j = s_j d_{i-1} d_i s_j = s_{j-1} d_i for i < j. All the face- and degeneracy-map identities are relevant because, given a family of sets and functions that satisfy them, we can reproduce the simplicial set (contravariant functor from \Delta to Set) that generates them. This shows the equivalence of the geometric picture that deals with triangles, segments, faces, etc., with the combinatorial picture that deals with rearrangements of ordered sequences of numbers. Monoidal structure A triangle can be constructed by adjoining a point to a segment. Add one more point and you get a tetrahedron. This process of adding points can be extended to adding together arbitrary simplexes. Indeed, there is a binary operator in \Delta that combines two ordered sequences by stacking one after another. This operation can be lifted to morphisms, making it a bifunctor. It is associative, so one might ask the question whether it can be used as a tensor product to make \Delta a monoidal category. The only thing missing is the unit object. The lowest dimensional simplex in \Delta is [0], which represents a point, so it cannot be a unit with respect to our tensor product. Instead we are obliged to add a new object, which is called [-1], and is represented by an empty set. (Incidentally, this is the object that may serve as “the face” of a point.) With the new object [-1], we get the category \Delta_a, which is called the augmented simplicial category. Since the unit and associativity laws are satisfied “on the nose” (as opposed to “up to isomorphism”), \Delta_a is a strict monoidal category. Note: Some authors prefer to name the objects of \Delta_a starting from zero, rather than minus one. They rename [-1] to \bold{0}, [0] to \bold{1}, etc. This convention makes even more sense if you consider that \bold{0} is the initial object and \bold{1} the terminal object in \Delta_a. Monoidal categories are a fertile breeding ground for monoids. Indeed, the object [0] in \Delta_a is a monoid. It is equipped with two morphisms that act like unit and multiplication. It has an incoming morphism from the monoidal unit [-1]–the morphism that’s the precursor of the face map that assigns the empty set to every point. This morphism can be used as the unit \eta of our monoid. It also has an incoming morphism from [1] (which happens to be the tensorial square of [0]). It’s the precursor of the degeneracy map that creates a segment from a single point. This morphism is the multiplication \mu of our monoid. Unit and associativity laws follow from the standard identities between morphisms in \Delta_a. It turns out that this monoid ([0], \eta, \mu) in \Delta_a is the mother of all monoids in strict monoidal categories. It can be shown that, for any monoid m in any strict monoidal category C, there is a unique strict monoidal functor F from \Delta_a to C that maps the monoid [0] to the monoid m. The category \Delta_a has exactly the right structure, and nothing more, to serve as the pattern for any monoid we can come up within a (strict) monoidal category. In particular, since a monad is just a monoid in the (strictly monoidal) category of endofunctors, the augmented simplicial category is behind every monad as well. One more thing Incidentally, since \Delta_a is a monoidal category, (contravariant) functors from it to Set are automatically equipped with monoidal structure via Day convolution. The result of Day convolution is a join of simplicial sets. It’s a generalized cone: two simplicial sets together with all possible connections between them. In particular, if one of the sets is just a single point, the result of the join is an actual cone (or a pyramid). Different shapes If we are willing to let go of geometric interpretations, we can replace the target category of sets with an arbitrary category. Instead of having a set of simplexes, we’ll end up with an object of simplexes. Simplicial sets become simplicial objects. Alternatively, we can generalize the source category. As I mentioned before, simplexes are a good choice of primitives because of their geometrical properties–they don’t warp. But if we don’t care about embedding these simplexes in \mathbb{R}^n, we can replace them with cubes of varying dimensions (a one dimensional cube is a segment, a two dimensional cube is a square, and so on). Functors from the category of n-cubes to Set are called cubical sets. An even further generalization replaces simplexes with shapeless globes producing globular sets. All these generalizations become important tools in studying higher category theory. In an n-category, we naturally encounter various shapes, as reflected in the naming convention: objects are called 0-cells; morphisms, 1-cells; morphisms between morphisms, 2-cells, and so on. These “cells” are often visualized as n-dimensional shapes. If a 1-cell is an arrow, a 2-cell is a (directed) surface spanning two arrows; a 3-cell, a volume between two surfaces; e.t.c. In this way, the shapeless hom-set that connects two objects in a regular category turns into a topologically rich blob in an n-category. This is even more pronounced in infinity groupoids, which became popularized by homotopy type theory, where we have an infinite tower of bidirectional n-morphisms. The presence or the absence of higher order morphisms between any two morphisms can be visualized as the existence of holes that prevent the morphing of one cell into another. This kind of morphing can be described by homotopies which, in turn, can be described using simplicial, cubical, globular, or even more exotic sets. I realize that this post might seem a little rambling. I have two excuses: One is that, when I started looking at simplexes, I had no idea where I would end up. One thing led to another and I was totally fascinated by the journey. The other is the realization how everything is related to everything else in mathematics. You start with simple triangles, you compose and decompose them, you see some structure emerging. Suddenly, the same compositional structure pops up in totally unrelated areas. You see it in algebraic topology, in a monoid in a monoidal category, or in a generalization of a hom-set in an n-category. Why is it so? It seems like there aren’t that many ways of composing things together, and we are forced to keep reusing them over and over again. We can glue them, nail them, or solder them. The way simplicial category is put together provides a template for one of the universal patterns of composition. 1. John Baez, A Quick Tour of Basic Concepts in Simplicial Homotopy Theory 2. Greg Friedman, An elementary illustrated introduction to simplicial sets. 3. N J Wildberger, Algebraic Topology. An excellent series of videos. I’m grateful to Edward Kmett and Derek Elkins for reviewing the draft and for providing helpful suggestions. There is a lot of folklore about various data types that pop up in discussions about lenses. For instance, it’s known that FunList and Bazaar are equivalent, although I haven’t seen a proof of that. Since both data structures appear in the context of Traversable, which is of great interest to me, I decided to do some research. In particular, I was interested in translating these data structures into constructs in category theory. This is a continuation of my previous blog posts on free monoids and free applicatives. Here’s what I have found out: • FunList is a free applicative generated by the Store functor. This can be shown by expressing the free applicative construction using Day convolution. • Using Yoneda lemma in the category of applicative functors I can show that Bazaar is equivalent to FunList Let’s start with some definitions. FunList was first introduced by Twan van Laarhoven in his blog. Here’s a (slightly generalized) Haskell definition: data FunList a b t = Done t | More a (FunList a b (b -> t)) It’s a non-regular inductive data structure, in the sense that its data constructor is recursively called with a different type, here the function type b->t. FunList is a functor in t, which can be written categorically as: L_{a b} t = t + a \times L_{a b} (b \to t) where b \to t is a shorthand for the hom-set Set(b, t). Strictly speaking, a recursive data structure is defined as an initial algebra for a higher-order functor. I will show that the higher order functor in question can be written as: A_{a b} g = I + \sigma_{a b} \star g where \sigma_{a b} is the (indexed) store comonad, which can be written as: \sigma_{a b} s = \Delta_a s \times C(b, s) Here, \Delta_a is the constant functor, and C(b, -) is the hom-functor. In Haskell, this is equivalent to: newtype Store a b s = Store (a, b -> s) The standard (non-indexed) Store comonad is obtained by identifying a with b and it describes the objects of the slice category C/s (morphisms are functions f : a \to a' that make the obvious triangles commute). If you’ve read my previous blog posts, you may recognize in A_{a b} the functor that generates a free applicative functor (or, equivalently, a free monoidal functor). Its fixed point can be written as: L_{a b} = I + \sigma_{a b} \star L_{a b} The star stands for Day convolution–in Haskell expressed as an existential data type: data Day f g s where Day :: f a -> g b -> ((a, b) -> s) -> Day f g s Intuitively, L_{a b} is a “list of” Store functors concatenated using Day convolution. An empty list is the identity functor, a one-element list is the Store functor, a two-element list is the Day convolution of two Store functors, and so on… In Haskell, we would express it as: data FunList a b t = Done t | More ((Day (Store a b) (FunList a b)) t) To show the equivalence of the two definitions of FunList, let’s expand the definition of Day convolution inside A_{a b}: (A_{a b} g) t = t + \int^{c d} (\Delta_b c \times C(a, c)) \times g d \times C(c \times d, t) The coend \int^{c d} corresponds, in Haskell, to the existential data type we used in the definition of Day. Since we have the hom-functor C(a, c) under the coend, the first step is to use the co-Yoneda lemma to “perform the integration” over c, which replaces c with a everywhere. We get: t + \int^d \Delta_b a \times g d \times C(a \times d, t) We can then evaluate the constant functor and use the currying adjunction: C(a \times d, t) \cong C(d, a \to t) to get: t + \int^d b \times g d \times C(d, a \to t) Applying the co-Yoneda lemma again, we replace d with a \to t: t + b \times g (a \to t) This is exactly the functor that generates FunList. So FunList is indeed the free applicative generated by Store. All transformations in this derivation were natural isomorphisms. Now let’s switch our attention to Bazaar, which can be defined as: type Bazaar a b t = forall f. Applicative f => (a -> f b) -> f t (The actual definition of Bazaar in the lens library is even more general–it’s parameterized by a profunctor in place of the arrow in a -> f b.) The universal quantification in the definition of Bazaar immediately suggests the application of my favorite double Yoneda trick in the functor category: The set of natural transformations (morphisms in the functor category) between two functors (objects in the functor category) is isomorphic, through Yoneda embedding, to the following end in the functor category: Nat(h, g) \cong \int_{f \colon [C, Set]} Set(Nat(g, f), Nat(h, f)) The end is equivalent (modulo parametricity) to Haskell forall. Here, the sets of natural transformations between pairs of functors are just hom-functors in the functor category and the end over f is a set of higher-order natural transformations between them. In the double Yoneda trick we carefully select the two functors g and h to be either representable, or somehow related to representables. The universal quantification in Bazaar is limited to applicative functors, so we’ll pick our two functors to be free applicatives. We’ve seen previously that the higher-order functor that generates free applicatives has the form: F g = Id + g \star F g Here’s the version of the Yoneda embedding in which f varies over all applicative functors in the category App, and g and h are arbitrary functors in [C, Set]: App(F h, F g) \cong \int_{f \colon App} Set(App(F g, f), App(F h, f)) The free functor F is the left adjoint to the forgetful functor U: App(F g, f) \cong [C, Set](g, U f) Using this adjunction, we arrive at: [C, Set](h, U (F g)) \cong \int_{f \colon App} Set([C, Set](g, U f), [C, Set](h, U f)) We’re almost there–we just need to carefuly pick the functors g and h. In order to arrive at the definition of Bazaar we want: g = \sigma_{a b} = \Delta_a \times C(b, -) h = C(t, -) The right hand side becomes: \int_{f \colon App} Set\big(\int_c Set (\Delta_a c \times C(b, c), (U f) c)), \int_c Set (C(t, c), (U f) c)\big) where I represented natural transformations as ends. The first term can be curried: Set \big(\Delta_a c \times C(b, c), (U f) c)\big) \cong Set\big(C(b, c), \Delta_a c \to (U f) c \big) and the end over c can be evaluated using the Yoneda lemma. So can the second term. Altogether, the right hand side becomes: \int_{f \colon App} Set\big(a \to (U f) b)), (U f) t)\big) In Haskell notation, this is just the definition of Bazaar: The left hand side can be written as: \int_c Set(h c, (U (F g)) c) Since we have chosen h to be the hom-functor C(t, -), we can use the Yoneda lemma to “perform the integration” and arrive at: (U (F g)) t With our choice of g = \sigma_{a b}, this is exactly the free applicative generated by Store–in other words, FunList. This proves the equivalence of Bazaar and FunList. Notice that this proof is only valid for Set-valued functors, although a generalization to the enriched setting is relatively straightforward. There is another family of functors, Traversable, that uses universal quantification over applicatives: The same double Yoneda trick can be applied to it to show that it’s related to Bazaar. There is, however, a much simpler derivation, suggested to me by Derek Elkins, by changing the order of arguments: traverse :: t a -> (forall f. Applicative f => (a -> f b) -> f (t b)) which is equivalent to: traverse :: t a -> Bazaar a b (t b) In view of the equivalence between Bazaar and FunList, we can also write it as: traverse :: t a -> FunList a b (t b) Note that this is somewhat similar to the definition of toList: In a sense, FunList is able to freely accumulate the effects from traversable, so that they can be interpreted later. I’m grateful to Edward Kmett and Derek Elkins for many discussions and valuable insights. The use of free monads, free applicatives, and cofree comonads lets us separate the construction of (often effectful or context-dependent) computations from their interpretation. In this paper I show how the ad hoc process of writing interpreters for these free constructions can be systematized using the language of higher order algebras (coalgebras) and catamorphisms (anamorphisms). Recursive schemes [meijer] are an example of successful application of concepts from category theory to programming. The idea is that recursive data structures can be defined as initial algebras of functors. This allows a separation of concerns: the functor describes the local shape of the data structure, and the fixed point combinator builds the recursion. Operations over data structures can be likewise separated into shallow, non-recursive computations described by algebras, and generic recursive procedures described by catamorphisms. In this way, data structures often replace control structures in driving computations. Since functors also form a category, it’s possible to define functors acting on functors. Such higher order functors show up in a number of free constructions, notably free monads, free applicatives, and cofree comonads. These free constructions have good composability properties and they provide means of separating the creation of effectful computations from their interpretation. This paper’s contribution is to systematize the construction of such interpreters. The idea is that free constructions arise as fixed points of higher order functors, and therefore can be approached with the same algebraic machinery as recursive data structures, only at a higher level. In particular, interpreters can be constructed as catamorphisms or anamorphisms of higher order algebras/coalgebras. Initial Algebras and Catamorphisms The canonical example of a data structure that can be described as an initial algebra of a functor is a list. In Haskell, a list can be defined recursively: data List a = Nil | Cons a (List a) There is an underlying non-recursive functor: data ListF a x = NilF | ConsF a x instance Functor (ListF a) where fmap f NilF = NilF fmap f (ConsF a x) = ConsF a (f x) Once we have a functor, we can define its algebras. An algebra consist of a carrier c and a structure map (evaluator). An algebra can be defined for an arbitrary functor f: type Algebra f c = f c -> c Here’s an example of a simple list algebra, with Int as its carrier: sum :: Algebra (ListF Int) Int sum NilF = 0 sum (ConsF a c) = a + c Algebras for a given functor form a category. The initial object in this category (if it exists) is called the initial algebra. In Haskell, we call the carrier of the initial algebra Fix f. Its structure map is a function: f (Fix f) -> Fix f By Lambek’s lemma, the structure map of the initial algebra is an isomorphism. In Haskell, this isomorphism is given by a pair of functions: the constructor In and the destructor out of the fixed point combinator: When applied to the list functor, the fixed point gives rise to an alternative definition of a list: type List a = Fix (ListF a) The initiality of the algebra means that there is a unique algebra morphism from it to any other algebra. This morphism is called a catamorphism and, in Haskell, can be expressed as: cata alg = alg . fmap (cata alg) . out A list catamorphism is known as a fold. Since the list functor is a sum type, its algebra consists of a value—the result of applying the algebra to NilF—and a function of two variables that corresponds to the ConsF constructor. You may recognize those two as the arguments to foldr: foldr :: (a -> c -> c) -> c -> [a] -> c The list functor is interesting because its fixed point is a free monoid. In category theory, monoids are special objects in monoidal categories—that is categories that define a product of two objects. In Haskell, a pair type plays the role of such a product, with the unit type as its unit (up to isomorphism). As you can see, the list functor is the sum of a unit and a product. This formula can be generalized to an arbitrary monoidal category with a tensor product \otimes and a unit 1: L\, a\, x = 1 + a \otimes x Its initial algebra is a free monoid . Higher Algebras In category theory, once you performed a construction in one category, it’s easy to perform it in another category that shares similar properties. In Haskell, this might require reimplementing the construction. We are interested in the category of endofunctors, where objects are endofunctors and morphisms are natural transformations. Natural transformations are represented in Haskell as polymorphic functions: type f :~> g = forall a. f a -> g a infixr 0 :~> In the category of endofunctors we can define (higher order) functors, which map functors to functors and natural transformations to natural transformations: class HFunctor hf where hfmap :: (g :~> h) -> (hf g :~> hf h) ffmap :: Functor g => (a -> b) -> hf g a -> hf g b The first function lifts a natural transformation; and the second function, ffmap, witnesses the fact that the result of a higher order functor is again a functor. An algebra for a higher order functor hf consists of a functor f (the carrier object in the functor category) and a natural transformation (the structure map): type HAlgebra hf f = hf f :~> f As with regular functors, we can define an initial algebra using the fixed point combinator for higher order functors: newtype FixH hf a = InH { outH :: hf (FixH hf) a } Similarly, we can define a higher order catamorphism: hcata :: HFunctor h => HAlgebra h f -> FixH h :~> f hcata halg = halg . hfmap (hcata halg) . outH The question is, are there any interesting examples of higher order functors and algebras that could be used to solve real-life programming problems? Free Monad We’ve seen the usefulness of lists, or free monoids, for structuring computations. Let’s see if we can generalize this concept to higher order functors. The definition of a list relies on the cartesian structure of the underlying category. It turns out that there are multiple cartesian structures of interest that can be defined in the category of functors. The simplest one defines a product of two endofunctors as their composition. Any two endofunctors can be composed. The unit of functor composition is the identity functor. If you picture endofunctors as containers, you can easily imagine a tree of lists, or a list of Maybes. A monoid based on this particular monoidal structure in the endofunctor category is a monad. It’s an endofunctor m equipped with two natural transformations representing unit and multiplication: class Monad m where eta :: Identity :~> m mu :: Compose m m :~> m In Haskell, the components of these natural transformations are known as return and join. A straightforward generalization of the list functor to the functor category can be written as: L\, f\, g = 1 + f \circ g or, in Haskell, type FunctorList f g = Identity :+: Compose f g where we used the operator :+: to define the coproduct of two functors: data (f :+: g) e = Inl (f e) | Inr (g e) infixr 7 :+: Using more conventional notation, FunctorList can be written as: data MonadF f g a = DoneM a | MoreM (f (g a)) We’ll use it to generate a free monoid in the category of endofunctors. First of all, let’s show that it’s indeed a higher order functor in the second argument g: instance Functor f => HFunctor (MonadF f) where hfmap _ (DoneM a) = DoneM a hfmap nat (MoreM fg) = MoreM $ fmap nat fg ffmap h (DoneM a) = DoneM (h a) ffmap h (MoreM fg) = MoreM $ fmap (fmap h) fg In category theory, because of size issues, this functor doesn’t always have a fixed point. For most common choices of f (e.g., for algebraic data types), the initial higher order algebra for this functor exists, and it generates a free monad. In Haskell, this free monad can be defined as: type FreeMonad f = FixH (MonadF f) We can show that FreeMonad is indeed a monad by implementing return and bind: instance Functor f => Monad (FreeMonad f) where return = InH . DoneM (InH (DoneM a)) >>= k = k a (InH (MoreM ffra)) >>= k = InH (MoreM (fmap (>>= k) ffra)) Free monads have many applications in programming. They can be used to write generic monadic code, which can then be interpreted in different monads. A very useful property of free monads is that they can be composed using coproducts. This follows from the theorem in category theory, which states that left adjoints preserve coproducts (or, more generally, colimits). Free constructions are, by definition, left adjoints to forgetful functors. This property of free monads was explored by Swierstra [swierstra] in his solution to the expression problem. I will use an example based on his paper to show how to construct monadic interpreters using higher order catamorphisms. Free Monad Example A stack-based calculator can be implemented directly using the state monad. Since this is a very simple example, it will be instructive to re-implement it using the free monad approach. We start by defining a functor, in which the free parameter k represents the continuation: data StackF k = Push Int k | Top (Int -> k) | Pop k | Add k deriving Functor We use this functor to build a free monad: type FreeStack = FreeMonad StackF You may think of the free monad as a tree with nodes that are defined by the functor StackF. The unary constructors, like Add or Pop, create linear list-like branches; but the Top constructor branches out with one child per integer. The level of indirection we get by separating recursion from the functor makes constructing free monad trees syntactically challenging, so it makes sense to define a helper function: liftF :: (Functor f) => f r -> FreeMonad f r liftF fr = InH $ MoreM $ fmap (InH . DoneM) fr With this function, we can define smart constructors that build leaves of the free monad tree: push :: Int -> FreeStack () push n = liftF (Push n ()) pop :: FreeStack () pop = liftF (Pop ()) top :: FreeStack Int top = liftF (Top id) add :: FreeStack () add = liftF (Add ()) All these preparations finally pay off when we are able to create small programs using do notation: calc :: FreeStack Int calc = do push 3 push 4 x <- top return x Of course, this program does nothing but build a tree. We need a separate interpreter to do the calculation. We’ll interpret our program in the state monad, with state implemented as a stack (list) of integers: type MemState = State [Int] The trick is to define a higher order algebra for the functor that generates the free monad and then use a catamorphism to apply it to the program. Notice that implementing the algebra is a relatively simple procedure because we don’t have to deal with recursion. All we need is to case-analyze the shallow constructors for the free monad functor MonadF, and then case-analyze the shallow constructors for the functor StackF. runAlg :: HAlgebra (MonadF StackF) MemState runAlg (DoneM a) = return a runAlg (MoreM ex) = case ex of Top ik -> get >>= ik . head Pop k -> get >>= put . tail >> k Push n k -> get >>= put . (n : ) >> k Add k -> do (a: b: s) <- get put (a + b : s) The catamorphism converts the program calc into a state monad action, which can be run over an empty initial stack: runState (hcata runAlg calc) [] The real bonus is the freedom to define other interpreters by simply switching the algebras. Here’s an algebra whose carrier is the Const functor: showAlg :: HAlgebra (MonadF StackF) (Const String) showAlg (DoneM a) = Const "Done!" showAlg (MoreM ex) = Const $ case ex of Push n k -> "Push " ++ show n ++ ", " ++ getConst k Top ik -> "Top, " ++ getConst (ik 42) Pop k -> "Pop, " ++ getConst k Add k -> "Add, " ++ getConst k Runing the catamorphism over this algebra will produce a listing of our program: getConst $ hcata showAlg calc > "Push 3, Push 4, Add, Top, Pop, Done!" Free Applicative There is another monoidal structure that exists in the category of functors. In general, this structure will work for functors from an arbitrary monoidal category C to Set. Here, we’ll restrict ourselves to endofunctors on Set. The product of two functors is given by Day convolution, which can be implemented in Haskell using an existential type: data Day f g c where Day :: f a -> g b -> ((a, b) -> c) -> Day f g c The intuition is that a Day convolution contains a container of some as, and another container of some bs, together with a function that can convert any pair (a, b) to c. Day convolution is a higher order functor: instance HFunctor (Day f) where hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt ffmap h (Day fx gy xyt) = Day fx gy (h . xyt) In fact, because Day convolution is symmetric up to isomorphism, it is automatically functorial in both arguments. To complete the monoidal structure, we also need a functor that could serve as a unit with respect to Day convolution. In general, this would be the the hom-functor from the monoidal unit: C(1, -) In our case, since 1 is the singleton set, this functor reduces to the identity functor. We can now define monoids in the category of functors with the monoidal structure given by Day convolution. These monoids are equivalent to lax monoidal functors which, in Haskell, form the class: class Functor f => Monoidal f where unit :: f () (>*<) :: f x -> f y -> f (x, y) Lax monoidal functors are equivalent to applicative functors [mcbride], as seen in this implementation of pure and <*>: pure :: a -> f a pure a = fmap (const a) unit fs <*> as = fmap (uncurry ($)) (fs >*< as) We can now use the same general formula, but with Day convolution as the product: L\, f\, g = 1 + f \star g to generate a free monoidal (applicative) functor: data FreeF f g t = DoneF t | MoreF (Day f g t) This is indeed a higher order functor: instance HFunctor (FreeF f) where hfmap _ (DoneF x) = DoneF x hfmap nat (MoreF day) = MoreF (hfmap nat day) ffmap f (DoneF x) = DoneF (f x) ffmap f (MoreF day) = MoreF (ffmap f day) and it generates a free applicative functor as its initial algebra: type FreeA f = FixH (FreeF f) Free Applicative Example The following example is taken from the paper by Capriotti and Kaposi [capriotti]. It’s an option parser for a command line tool, whose result is a user record of the following form: data User = User { username :: String , fullname :: String , uid :: Int } deriving Show A parser for an individual option is described by a functor that contains the name of the option, an optional default value for it, and a reader from string: data Option a = Option { optName :: String , optDefault :: Maybe a , optReader :: String -> Maybe a } deriving Functor Since we don’t want to commit to a particular parser, we’ll create a parsing action using a free applicative functor: userP :: FreeA Option User userP = pure User <*> one (Option "username" (Just "John") Just) <*> one (Option "fullname" (Just "Doe") Just) <*> one (Option "uid" (Just 0) readInt) where readInt is a reader of integers: readInt :: String -> Maybe Int readInt s = readMaybe s and we used the following smart constructors: one :: f a -> FreeA f a one fa = InH $ MoreF $ Day fa (done ()) fst done :: a -> FreeA f a done a = InH $ DoneF a We are now free to define different algebras to evaluate the free applicative expressions. Here’s one that collects all the defaults: alg :: HAlgebra (FreeF Option) Maybe alg (DoneF a) = Just a alg (MoreF (Day oa mb f)) = fmap f (optDefault oa >*< mb) I used the monoidal instance for Maybe: instance Monoidal Maybe where unit = Just () Just x >*< Just y = Just (x, y) _ >*< _ = Nothing This algebra can be run over our little program using a catamorphism: parserDef :: FreeA Option a -> Maybe a parserDef = hcata alg And here’s an algebra that collects the names of all the options: alg2 :: HAlgebra (FreeF Option) (Const String) alg2 (DoneF a) = Const "." alg2 (MoreF (Day oa bs f)) = fmap f (Const (optName oa) >*< bs) Again, this uses a monoidal instance for Const: instance Monoid m => Monoidal (Const m) where unit = Const mempty Const a >*< Const b = Const (a b) We can also define the Monoidal instance for IO: instance Monoidal IO where unit = return () ax >*< ay = do a <- ax b <- ay return (a, b) This allows us to interpret the parser in the IO monad: alg3 :: HAlgebra (FreeF Option) IO alg3 (DoneF a) = return a alg3 (MoreF (Day oa bs f)) = do putStrLn $ optName oa s <- getLine let ma = optReader oa s a = fromMaybe (fromJust (optDefault oa)) ma fmap f $ return a >*< bs Cofree Comonad Every construction in category theory has its dual—the result of reversing all the arrows. The dual of a product is a coproduct, the dual of an algebra is a coalgebra, and the dual of a monad is a comonad. Let’s start by defining a higher order coalgebra consisting of a carrier f, which is a functor, and a natural transformation: type HCoalgebra hf f = f :~> hf f An initial algebra is dualized to a terminal coalgebra. In Haskell, both are the results of applying the same fixed point combinator, reflecting the fact that the Lambek’s lemma is self-dual. The dual to a catamorphism is an anamorphism. Here is its higher order version: hana :: HFunctor hf => HCoalgebra hf f -> (f :~> FixH hf) hana hcoa = InH . hfmap (hana hcoa) . hcoa The formula we used to generate free monoids: 1 + a \otimes x dualizes to: 1 \times a \otimes x and can be used to generate cofree comonoids . A cofree functor is the right adjoint to the forgetful functor. Just like the left adjoint preserved coproducts, the right adjoint preserves products. One can therefore easily combine comonads using products (if the need arises to solve the coexpression problem). Just like the monad is a monoid in the category of endofunctors, a comonad is a comonoid in the same category. The functor that generates a cofree comonad has the form: type ComonadF f g = Identity :*: Compose f g where the product of functors is defined as: data (f :*: g) e = Both (f e) (g e) infixr 6 :*: Here’s the more familiar form of this functor: data ComonadF f g e = e :< f (g e) It is indeed a higher order functor, as witnessed by this instance: instance Functor f => HFunctor (ComonadF f) where hfmap nat (e :< fge) = e :< fmap nat fge ffmap h (e :< fge) = h e :< fmap (fmap h) fge A cofree comonad is the terminal coalgebra for this functor and can be written as a fixed point: type Cofree f = FixH (ComonadF f) Indeed, for any functor f, Cofree f is a comonad: instance Functor f => Comonad (Cofree f) where extract (InH (e :< fge)) = e duplicate fr@(InH (e :< fge)) = InH (fr :< fmap duplicate fge) Cofree Comonad Example The canonical example of a cofree comonad is an infinite stream: type Stream = Cofree Identity We can use this stream to sample a function. We’ll encapsulate this function inside the following functor (in fact, itself a comonad): data Store a x = Store a (a -> x) deriving Functor We can use a higher order coalgebra to unpack the Store into a stream: streamCoa :: HCoalgebra (ComonadF Identity)(Store Int) streamCoa (Store n f) = f n :< (Identity $ Store (n + 1) f) The actual unpacking is a higher order anamorphism: stream :: Store Int a -> Stream a stream = hana streamCoa We can use it, for instance, to generate a list of squares of natural numbers: stream (Store 0 (^2)) Since, in Haskell, the same fixed point defines a terminal coalgebra as well as an initial algebra, we are free to construct algebras and catamorphisms for streams. Here’s an algebra that converts a stream to an infinite list: listAlg :: HAlgebra (ComonadF Identity) [] listAlg(a :< Identity as) = a : as toList :: Stream a -> [a] toList = hcata listAlg Future Directions In this paper I concentrated on one type of higher order functor: 1 + a \otimes x and its dual. This would be equivalent to studying folds for lists and unfolds for streams. But the structure of the functor category is richer than that. Just like basic data types can be combined into algebraic data types, so can functors. Moreover, besides the usual sums and products, the functor category admits at least two additional monoidal structures generated by functor composition and Day convolution. Another potentially fruitful area of exploration is the profunctor category, which is also equipped with two monoidal structures, one defined by profunctor composition, and another by Day convolution. A free monoid with respect to profunctor composition is the basis of Haskell Arrow library [jaskelioff]. Profunctors also play an important role in the Haskell lens library [kmett]. 1. Erik Meijer, Maarten Fokkinga, and Ross Paterson, Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire 2. Conor McBride, Ross Paterson, Idioms: applicative programming with effects 3. Paolo Capriotti, Ambrus Kaposi, Free Applicative Functors 4. Wouter Swierstra, Data types a la carte 5. Exequiel Rivas and Mauro Jaskelioff, Notions of Computation as Monoids 6. Edward Kmett, Lenses, Folds and Traversals 7. Richard Bird and Lambert Meertens, Nested Datatypes 8. Patricia Johann and Neil Ghani, Initial Algebra Semantics is Enough! Functors from a monoidal category C to Set form a monoidal category with Day convolution as product. A monoid in this category is a lax monoidal functor. We define an initial algebra using a higher order functor and show that it corresponds to a free lax monoidal functor. Recently I’ve been obsessing over monoidal functors. I have already written two blog posts, one about free monoidal functors and one about free monoidal profunctors. I followed some ideas from category theory but, being a programmer, I leaned more towards writing code than being preoccupied with mathematical rigor. That left me longing for more elegant proofs of the kind I’ve seen in mathematical literature. I believe that there isn’t that much difference between programming and math. There is a whole spectrum of abstractions ranging from assembly language, weakly typed languages, strongly typed languages, functional programming, set theory, type theory, category theory, and homotopy type theory. Each language comes with its own bag of tricks. Even within one language one starts with some relatively low level encodings and, with experience, progresses towards higher abstractions. I’ve seen it in Haskell, where I started by hand coding recursive functions, only to realize that I can be more productive using bulk operations on types, then building recursive data structures and applying recursive schemes, eventually diving into categories of functors and profunctors. I’ve been collecting my own bag of mathematical tricks, mostly by reading papers and, more recently, talking to mathematicians. I’ve found that mathematicians are happy to share their knowledge even with outsiders like me. So when I got stuck trying to clean up my monoidal functor code, I reached out to Emily Riehl, who forwarded my query to Alexander Campbell from the Centre for Australian Category Theory. Alex’s answer was a very elegant proof of what I was clumsily trying to show in my previous posts. In this blog post I will explain his approach. I should also mention that most of the results presented in this post have already been covered in a comprehensive paper by Rivas and Jaskelioff, Notions of Computation as Monoids. Lax Monoidal Functors To properly state the problem, I’ll have to start with a lot of preliminaries. This will require some prior knowledge of category theory, all within the scope of my blog/book. We start with a monoidal category C, that is a category in which you can “multiply” objects using some kind of a tensor product \otimes. For any pair of objects a and b there is an object a \otimes b; and this mapping is functorial in both arguments (that is, you can also “multiply” morphisms). A monoidal category will also have a special object I that is the unit of multiplication. In general, the unit and associativity laws are satisfied up to isomorphism: \lambda : I \otimes a \cong a \rho : a \otimes I \cong a \alpha : (a \otimes b) \otimes c \cong a \otimes (b \otimes c) These isomorphisms are called, respectively, the left and right unitors, and the associator. The most familiar example of a monoidal category is the category of types and functions, in which the tensor product is the cartesian product (pair type) and the unit is the unit type (). Let’s now consider functors from C to the category of sets, Set. These functors also form a category called [C, Set], in which morphisms between any two functors are natural transformations. In Haskell, a natural transformation is approximated by a polymorphic function: type f ~> g = forall x. f x -> g x The category Set is monoidal, with cartesian product \times serving as a tensor product, and the singleton set 1 as the unit. We are interested in functors in [C, Set] that preserve the monoidal structure. Such a functor should map the tensor product in C to the cartesian product in Set and the unit I to the singleton set 1. Accordingly, a strong monoidal functor F comes with two isomorphisms: F a \times F b \cong F (a \otimes b) 1 \cong F I We are interested in a weaker version of a monoidal functor called lax monoidal functor, which is equipped with a one-way natural transformation: \mu : F a \times F b \to F (a \otimes b) and a one-way morphism: \eta : 1 \to F I A lax monoidal functor must also preserve unit and associativity laws. Associativity law: \alpha is the associator in the appropriate category (top arrow, in Set; bottom arrow, in C). In Haskell, a lax monoidal functor can be defined as: class Monoidal f where eta :: () -> f () It’s also known as the applicative functor. Day Convolution and Monoidal Functors It turns out that our category of functors [C, Set] is also equipped with monoidal structure. Two functors F and G can be “multiplied” using Day convolution: (F \star G) c = \int^{a b} C(a \otimes b, c) \times F a \times G b Here, C(a \otimes b, c) is the hom-set, or the set of morphisms from a \otimes b to c. The integral sign stands for a coend, which can be interpreted as a generalization of an (infinite) coproduct (modulo some identifications). An element of this coend can be constructed by injecting a triple consisting of a morphism from C(a \otimes b, c), an element of the set F a, and an element of the set G b, for some a and b. In Haskell, a coend corresponds to an existential type, so the Day convolution can be defined as: data Day f g c where Day :: ((a, b) -> c, f a, g b) -> Day f g c (The actual definition uses currying.) The unit with respect to Day convolution is the hom-functor: C(I, -) which assigns to every object c the set of morphisms C(I, c) and acts on morphisms by post-composition. The proof that this is the unit is instructive, as it uses the standard trick: the co-Yoneda lemma. In the coend form, the co-Yoneda lemma reads, for a covariant functor F: \int^x C(x, a) \times F x \cong F a and for a contravariant functor H: \int^x C(a, x) \times H x \cong H a (The mnemonics is that the integration variable must appear twice, once in the negative, and once in the positive position. An argument to a contravariant functor is in a negative position.) Indeed, substituting C(I, -) for the first functor in Day convolution produces: (C(I, -) \star G) c = \int^{a b} C(a \otimes b, c) \times C(I, a) \times G b which can be “integrated” over a using the Yoneda lemma to yield: \int^{b} C(I \otimes b, c) \times G b and, since I is the unit of the tensor product, this can be further “integrated” over b to give G c. The right unit law is analogous. To summarize, we are dealing with three monoidal categories: C with the tensor product \otimes and unit I, Set with the cartesian product and singleton 1, and a functor category [C, Set] with Day convolution and unit C(I, -). A Monoid in [C, Set] A monoidal category can be used to define monoids. A monoid is an object m equipped with two morphisms — unit and multiplication: \eta : I \to m \mu : m \otimes m \to m These morphisms must satisfy unit and associativity conditions, which are best illustrated using commuting diagrams. Unit laws. λ and ρ are the unitors. Associativity law: α is the associator. This definition of a monoid can be translated directly to Haskell: class Monoid m where eta :: () -> m mu :: (m, m) -> m It so happens that a lax monoidal functor is exactly a monoid in our functor category [C, Set]. Since objects in this category are functors, a monoid is a functor F equipped with two natural transformations: \eta : C(I, -) \to F \mu : F \star F \to F At first sight, these don’t look like the morphisms in the definition of a lax monoidal functor. We need some new tricks to show the equivalence. Let’s start with the unit. The first trick is to consider not one natural transformation but the whole hom-set: [C, Set](C(I, -), F) The set of natural transformations can be represented as an end (which, incidentally, corresponds to the forall quantifier in the Haskell definition of natural transformations): \int_c Set(C(I, c), F c) The next trick is to use the Yoneda lemma which, in the end form reads: \int_c Set(C(a, c), F c) \cong F a In more familiar terms, this formula asserts that the set of natural transformations from the hom-functor C(a, -) to F is isomorphic to F a. There is also a version of the Yoneda lemma for contravariant functors: \int_c Set(C(c, a), H c) \cong H a The application of Yoneda to our formula produces F I, which is in one-to-one correspondence with morphisms 1 \to F I. We can use the same trick of bundling up natural transformations that define multiplication \mu. [C, Set](F \star F, F) and representing this set as an end over the hom-functor: \int_c Set((F \star F) c, F c) Expanding the definition of Day convolution, we get: \int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times F b, F c) The next trick is to pull the coend out of the hom-set. This trick relies on the co-continuity of the hom-functor in the first argument: a hom-functor from a colimit is isomorphic to a limit of hom-functors. In programmer-speak: a function from a sum type is equivalent to a product of functions (we call it case analysis). A coend is a generalized colimit, so when we pull it out of a hom-functor, it turns into a limit, or an end. Here’s the general formula, in which p x y is an arbitrary profunctor: Set(\int^x p x x, y) \cong \int_x Set(p x x, y) Let’s apply it to our formula: \int_c \int_{a b} Set(C(a \otimes b, c) \times F a \times F b, F c) We can combine the ends under one integral sign (it’s allowed by the Fubini theorem) and move to the next trick: hom-set adjunction: Set(a \times b, c) \cong Set(a, b \to c) In programming this is known as currying. This adjunction exists because Set is a cartesian closed category. We’ll use this adjunction to move F a \times F b to the right: \int_{a b c} Set(C(a \otimes b, c), (F a \times F b) \to F c) Using the Yoneda lemma we can “perform the integration” over c  to get: \int_{a b} (F a \times F b) \to F (a \otimes b)) This is exactly the set of natural transformations used in the definition of a lax monoidal functor. We have established one-to-one correspondence between monoidal multiplication and lax monoidal mapping. Of course, a complete proof would require translating monoid laws to their lax monoidal counterparts. You can find more details in Rivas and Jaskelioff, Notions of Computation as Monoids. We’ll use the fact that a monoid in the category [C, Set] is a lax monoidal functor later. Alternative Derivation Incidentally, there are shorter derivations of these formulas that use the trick borrowed from the proof of the Yoneda lemma, namely, evaluating things at the identity morphism. (Whenever mathematicians speak of Yoneda-like arguments, this is what they mean.) Starting from F \star F \to F and plugging in the Day convolution formula, we get: \int^{a' b'} C(a' \otimes b', c) \times F a' \times F b' \to F c There is a component of this natural transformation at (a \otimes b) that is the morphism: \int^{a' b'} C(a' \otimes b', a \otimes b) \times F a' \times F b' \to F (a \otimes b) This morphism must be defined for all possible values of the coend. In particular, it must be defined for the triple (id_{a \otimes b}, F a, F b), giving us the \mu we seek. There is also an alternative derivation for the unit: Take the component of the natural transformation \eta at I: \eta_I : C(I, I) \to L I C(I, I) is guaranteed to contain at least one element, the identity morphism id_I. We can use \eta_I \, id_I as the (only) value of the lax monoidal constraint at the singleton 1. Free Monoid Given a monoidal category C, we might be able to define a whole lot of monoids in it. These monoids form a category Mon(C). Morphisms in this category correspond to those morphisms in C that preserve monoidal structure. Consider, for instance, two monoids m and m'. A monoid morphism is a morphism f : m \to m' in C such that the unit of m' is related to the unit of m: \eta' = f \circ \eta and similarly for multiplication: \mu' \circ (f \otimes f) = f \circ \mu Remember, we assumed that the tensor product is functorial in both arguments, so it can be used to lift a pair of morphisms. There is an obvious forgetful functor U from Mon(C) to C which, for every monoid, picks its underlying object in C and maps every monoid morphism to its underlying morphism in C. The left adjoint to this functor, if it exists, will map an object a in C to a free monoid L a. The intuition is that a free monoid L a is a list of a. In Haskell, a list is defined recursively: data List a = Nil | Cons a (List a) Such a recursive definition can be formalized as a fixed point of a functor. For a list of a, this functor is: data ListF a x = NilF | ConsF a x Notice the peculiar structure of this functor. It’s a sum type: The first part is a singleton, which is isomorphic to the unit type (). The second part is a product of a and x. Since the unit type is the unit of the product in our monoidal category of types, we can rewrite this functor symbolically as: \Phi a x = I + a \otimes x It turns out that this formula works in any monoidal category that has finite coproducts (sums) that are preserved by the tensor product. The fixed point of this functor is the free functor that generates free monoids. I’ll define what is meant by the fixed point and prove that it defines a monoid. The proof that it’s the result of a free/forgetful adjunction is a bit involved, so I’ll leave it for a future blog post. Let’s consider algebras for the functor F. Such an algebra is defined as an object x called the carrier, and a morphism: f : F x \to x called the structure map or the evaluator. In Haskell, an algebra is defined as: type Algebra f x = f x -> x There may be a lot of algebras for a given functor. In fact there is a whole category of them. We define an algebra morphism between two algebras (x, f : F x \to x) and (x', f' : F x' \to x') as a morphism \nu : x \to x' which commutes with the two structure maps: \nu \circ f = f' \circ F \nu The initial object in the category of algebras is called the initial algebra, or the fixed point of the functor that generates these algebras. As the initial object, it has a unique algebra morphism to any other algebra. This unique morphism is called a catamorphism. In Haskell, the fixed point of a functor f is defined recursively: with, for instance: type List a = Fix (ListF a) A catamorphism is defined as: cata alg = alg . fmap (cata alg) . out A list catamorphism is called foldr. We want to show that the initial algebra L a of the functor: \Phi a x = I + a \otimes x is a free monoid. Let’s see under what conditions it is a monoid. Initial Algebra is a Monoid In this section I will show you how to concatenate lists the hard way. We know that function type b \to c (a.k.a., the exponential c^b) is the right adjoint to the product: The function type is also called the internal hom. In a monoidal category it’s sometimes possible to define an internal hom-object, denoted [b, c], as the right adjoint to the tensor product: curry : C(a \otimes b, c) \cong C(a, [b, c]) If this adjoint exists, the category is called closed monoidal. In a closed monoidal category, the initial algebra L a of the functor \Phi a x = I + a \otimes x is a monoid. (In particular, a Haskell list of a, which is a fixed point of ListF a, is a monoid.) To show that, we have to construct two morphisms corresponding to unit and multiplication (in Haskell, empty list and concatenation): \eta : I \to L a \mu : L a \otimes L a \to L a What we know is that L a is a carrier of the initial algebra for \Phi a, so it is equipped with the structure map: I + a \otimes L a \to L a which is equivalent to a pair of morphisms: \alpha : I \to L a \beta : a \otimes L a \to L a Notice that, in Haskell, these correspond the two list constructors: Nil and Cons or, in terms of the fixed point: nil :: () -> List a nil () = In NilF cons :: a -> List a -> List a cons a as = In (ConsF a as) We can immediately use \alpha to implement \eta. The second one, \beta, one can be rewritten using the hom adjuncion as: \bar{\beta} = curry \, \beta \bar{\beta} : a \to [L a, L a] Notice that, if we could prove that [L a, L a] is a carrier for the same algebra generated by \Phi a, we would know that there is a unique catamorphism from the initial algebra L a: \kappa_{[L a, L a]} : L a \to [L a, L a] which, by the hom adjunction, would give us the desired multiplication: Let’s establish some useful lemmas first. Lemma 1: For any object x in a closed monoidal category, [x, x] is a monoid. This is a generalization of the idea that endomorphisms form a monoid, in which identity morphism is the unit and composition is multiplication. Here, the internal hom-object [x, x] generalizes the set of endomorphisms. Proof: The unit: \eta : I \to [x, x] follows, through adjunction, from the unit law in the monoidal category: \lambda : I \otimes x \to x (In Haskell, this is a fancy way of writing mempty = id.) Multiplication takes the form: \mu : [x, x] \otimes [x, x] \to [x, x] which is reminiscent of composition of edomorphisms. In Haskell we would say: mappend = (.) By adjunction, we get: curry^{-1} \, \mu : [x, x] \otimes [x, x] \otimes x \to x We have at our disposal the counit eval of the adjunction: eval : [x, x] \otimes x \cong x We can apply it twice to get: \mu = curry (eval \circ (id \otimes eval)) In Haskell, we could express this as: mu :: ((x -> x), (x -> x)) -> (x -> x) mu (f, g) = \x -> f (g x) Here, the counit of the adjunction turns into simple function application. Lemma 2: For every morphism f : a \to m, where m is a monoid, we can construct an algebra of the functor \Phi a with m as its carrier. Proof: Since m is a monoid, we have two morphisms: \eta : I \to m \mu : m \otimes m \to m To show that m is a carrier of our algebra, we need two morphisms: \alpha : I \to m \beta : a \otimes m \to m The first one is the same as \eta, the second can be implemented as: \beta = \mu \circ (f \otimes id) In Haskell, we would do case analysis: mapAlg :: Monoid m => ListF a m -> m mapAlg NilF = mempty mapAlg (ConsF a m) = f a `mappend` m We can now build a larger proof. By lemma 1, [L a, L a] is a monoid with: We also have a morphism \bar{\beta} : a \to [L a, L a] so, by lemma 2, [L a, L a] is also a carrier for the algebra: \alpha = \eta \beta = \mu \circ (\bar{\beta} \otimes id) It follows that there is a unique catamorphism \kappa_{[L a, L a]} from the initial algebra L a to it, and we know how to use it to implement monoidal multiplication for L a. Therefore, L a is a monoid. Translating this to Haskell, \bar{\beta} is the curried form of Cons and what we have shown is that concatenation (multiplication of lists) can be implemented as a catamorphism: concat :: List a -> List a -> List a conc x y = cata alg x y where alg NilF = id alg (ConsF a t) = (cons a) . t The type: List a -> (List a -> List a) (parentheses added for emphasis) corresponds to L a \to [L a, L a]. It’s interesting that concatenation can be described in terms of the monoid of list endomorphisms. Think of turning an element a of the list into a transformation, which prepends this element to its argument (that’s what \bar{\beta} does). These transformations form a monoid. We have an algebra that turns the unit I into an identity transformation on lists, and a pair a \otimes t (where t is a list transformation) into the composite \bar{\beta} a \circ t. The catamorphism for this algebra takes a list L a and turns it into one composite list transformation. We then apply this transformation to another list and get the final result: the concatenation of two lists. \square Incidentally, lemma 2 also works in reverse: If a monoid m is a carrier of the algebra of \Phi a, then there is a morphism f : a \to m. This morphism can be thought of as inserting generators represented by a into the monoid m. Proof: if m is both a monoid and a carrier for the algebra \Phi a, we can construct the morphism a \to m by first applying the identity law to go from a to a \otimes I, then apply id_a \otimes \eta to get a \otimes m. This can be right-injected into the coproduct I + a \otimes m and then evaluated down to m using the structure map for the algebra on m. a \to a \otimes I \to a \otimes m \to I + a \otimes m \to m In Haskell, this corresponds to a construction and evaluation of: ConsF a mempty Free Monoidal Functor Let’s go back to our functor category. We started with a monoidal category C and considered a functor category [C, Set]. We have shown that [C, Set] is itself a monoidal category with Day convolution as tensor product and the hom functor C(I, -) as unit. A monoid is this category is a lax monoidal functor. The next step is to build a free monoid in [C, Set], which would give us a free lax monoidal functor. We have just seen such a construction in an arbitrary closed monoidal category. We just have to translate it to [C, Set]. We do this by replacing objects with functors and morphisms with natural transformations. Our construction relied on defining an initial algebra for the functor: I + a \otimes b Straightforward translation of this formula to the functor category [C, Set] produces a higher order endofunctor: A_F G = C(I, -) + F \star G It defines, for any functor F, a mapping from a functor G to a functor A_F G. (It also maps natural transformations.) We can now use A_F to define (higher-order) algebras. An algebra consists of a carrier — here, a functor T — and a structure map — here, a natural transformation: A_F T \to T The initial algebra for this higher-order endofunctor defines a monoid, and therefore a lax monoidal functor. We have shown it for an arbitrary closed monoidal category. So the only question is whether our functor category with Day convolution is closed. We want to define the internal hom-object in [C, Set] that satisfies the adjunction: [C, Set](F \star G, H) \cong [C, Set](F, [G, H]) We start with the set of natural transformations — the hom-set in [C, Set]: We rewrite it as an end over c, and use the formula for Day convolution: \int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times G b, H c) We use the co-continuity trick to pull the coend out of the hom-set and turn it into an end: \int_{c a b} Set(C(a \otimes b, c) \times F a \times G b, H c) Keeping in mind that our goal is to end up with F a on the left, we use the regular hom-set adjunction to shuffle the other two terms to the right: \int_{c a b} Set(F a, C(a \otimes b, c) \times G b \to H c) The hom-functor is continuous in the second argument, so we can sneak the end over b c under it: \int_{a} Set(F a, \int_{b c} C(a \otimes b, c) \times G b \to H c) We end up with a set of natural transformations from the functor F to the functor we will call: [G, H] = \int_{b c} (C(- \otimes b, c) \times G b \to H c) We therefore identify this functor as the right adjoint (internal hom-object) for Day convolution. We can further simplify it by using the hom-set adjunction: \int_{b c} (C(- \otimes b, c) \to (G b \to H c)) and applying the Yoneda lemma to get: [G, H] = \int_{b} (G b \to H (- \otimes b)) In Haskell, we would write it as: newtype DayHom f g a = DayHom (forall b . f b -> g (a, b)) Since Day convolution has a right adjoint, we conclude that the fixed point of our higher order functor defines a free lax monoidal functor. We can write it in a recursive form as: Free_F = C(I, -) + F \star Free_F or, in Haskell: data FreeMonR f t = Done t | More (Day f (FreeMonR f) t) Free Monad This blog post wouldn’t be complete without mentioning that the same construction works for monads. Famously, a monad is a monoid in the category of endofunctors. Endofunctors form a monoidal category with functor composition as tensor product and the identity functor as unit. The fact that we can construct a free monad using the formula: FreeM_F = Id + F \circ FreeM_F is due to the observation that functor composition has a right adjoint, which is the right Kan extension. Unfortunately, due to size issues, this Kan extension doesn’t always exist. I’ll quote Alex Campbell here: “By making suitable size restrictions, we can give conditions for free monads to exist: for example, free monads exist for accessible endofunctors on locally presentable categories; a special case is that free monads exist for finitary endofunctors on Set, where finitary means the endofunctor preserves filtered colimits (more generally, an endofunctor is accessible if it preserves \kappa-filtered colimits for some regular cardinal number \kappa).” As we acquire experience in programming, we learn more tricks of trade. A seasoned programmer knows how to read a file, parse its contents, or sort an array. In category theory we use a different bag of tricks. We bunch morphisms into hom-sets, move ends and coends, use Yoneda to “integrate,” use adjunctions to shuffle things around, and use initial algebras to define recursive types. Results derived in category theory can be translated to definitions of functions or data structures in programming languages. A lax monoidal functor becomes an Applicative. Free monoidal functor becomes: data FreeMonR f t = Done t | More (Day f (FreeMonR f) t) What’s more, since the derivation made very few assumptions about the category C (other than that it’s monoidal), this result can be immediately applied to profunctors (replacing C with C^{op}\times C) to produce: data FreeMon p s t where DoneFM :: t -> FreeMon p s t MoreFM :: p a b -> FreeMon p c d -> (b -> d -> t) -> (s -> (a, c)) -> FreeMon p s t Replacing Day convolution with endofunctor composition gives us a free monad: data FreeMonadF f g a = DoneFM a | MoreFM (Compose f g a) Category theory is also the source of laws (commuting diagrams) that can be used in equational reasoning to verify the correctness of programming constructs. Writing this post has been a great learning experience. Every time I got stuck, I would ask Alex for help, and he would immediately come up with yet another algebra and yet another catamorphism. This was so different from the approach I would normally take, which would be to get bogged down in inductive proofs over recursive data structures. Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals. The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I have derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors. Let me fill in the details. Profunctors in Haskell Here’s the definition of a profunctor from Data.Profunctor: class Profunctor p where dimap :: (s -> a) -> (b -> t) -> p a b -> p s t The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle. Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function: type p ::~> q = forall a b. p a b -> q a b If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product: \langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle and unit \langle 1, 1 \rangle In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit. Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set: (C^{op} \times C) \, ( \langle a, b \rangle \otimes \langle c, d \rangle, \langle s, t \rangle ) is a set of pairs of morphisms: \langle s \to a \otimes c, b \otimes d \to t \rangle If the right adjoint existed, it would be a pair of objects \langle X, Y \rangle, such that the following hom-set would be isomorphic to the previous one: \langle X \to a, b \to Y \rangle While Y could be the internal hom, there is no candidate for X that would produce the isomorphism: s \to a \otimes c \cong X \to a (Consider, for instance, unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor: class Monoidal p where punit :: p () () (>**<) :: p a b -> p c d -> p (a, c) (b, d) This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C: punit :: () -> p <1, 1> Operator >**< maps the product in Set to the induced product in C^{op} \times C: (>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>) Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category: data PDay p q s t = forall a b c d. PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c)) Higher Order Functors Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor: class HPFunctor pp where hpmap :: (p ::~> q) -> (pp p ::~> pp q) ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor. An endofunctor in the profunctor category may have a fixed point: newtype FixH pp a b = InH { outH :: pp (FixH pp) a b } which is also a profunctor: instance HPFunctor pp => Profunctor (FixH pp) where dimap f g (InH pp) = InH (ddimap f g pp) Finally, our Day convolution is a higher-order endofunctor in the category of profunctors: instance HPFunctor (PDay p) where hpmap nat (PDay p q from to) = PDay p (nat q) from to ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f) We’ll use this fact to construct a free monoidal profunctor next. Free Monoidal Profunctor In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor: data FreeF f g t = DoneF t | MoreF (Day f g t) Replacing the functors f and g with profunctors is straightforward: data FreeP p q s t = DoneP (s -> ()) (() -> t) | MoreP (PDay p q s t) The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later). FreeP is a higher order endofunctor acting on profunctors: instance HPFunctor (FreeP p) where hpmap _ (DoneP su ut) = DoneP su ut hpmap nat (MoreP day) = MoreP (hpmap nat day) ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub) ddimap f g (MoreP day) = MoreP (ddimap f g day) We can, therefore, define its fixed point: type FreeMon p = FixH (FreeP p) and show that it is indeed a monoidal profunctor. As before, the trick is to fist show the following property of Day convolution: cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d) cons (PDay pxy quv yva bxu) qcd = PDay pxy (quv >**< qcd) (bimap yva id . reassoc) (assoc . bimap bxu id) Using this function, we can show that FreeMon p is monoidal for any p: instance Profunctor p => Monoidal (FreeMon p) where punit = InH (DoneP id id) (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd)) FreeMon can also be rewritten as a recursive data type: data FreeMon p s t where DoneFM :: t -> FreeMon p s t (b -> d -> t) -> Categorical Picture As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors: A_p q = (C^{op} \times C) (1, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -) As previously shown in the general case, its initial algebra defines a free monoidal profunctor. I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic. In my category theory blog posts, I stated many theorems, but I didn’t provide many proofs. In most cases, it’s enough to know that the proof exists. We trust mathematicians to do their job. Granted, when you’re a professional mathematician, you have to study proofs, because one day you’ll have to prove something, and it helps to know the tricks that other people used before you. But programmers are engineers, and are therefore used to taking advantage of existing solutions, trusting that somebody else made sure they were correct. So it would seem that mathematical proofs are irrelevant to programming. Or so it may seem, until you learn about the Curry-Howard isomorphism–or propositions as types, as it is sometimes called–which says that there is a one to one correspondence between logic and programs, and that every function can be seen as a proof of a theorem. And indeed, I found that a lot of proofs in category theory turn out to be recipes for implementing functions. In most cases the problem can be reduced to this: Given some morphisms, implement another morphism, usually using simple composition. This is very much like using point-free notation to define functions in Haskell. The other ingredient in categorical proofs is diagram chasing, which is very much like equational resoning in Haskell. Of course, mathematicians use different notation, and they use lots of different categories, but the principles are the same. I want to illustrate these points with the example from Emily Riehl’s excellent book Category Theory in Context. It’s a book for mathematicians, so it’s not an easy read. I’m going to concentrate on theorem 6.2.1, which derives a formula for left Kan extensions using colimits. I picked this theorem because it has calculational content: it tells you how to calculate a particular functor. It’s not a short proof, and I have made it even longer by unpacking every single step. These steps are not too hard, it’s mostly a matter of understanding and using definitions of functoriality, naturality, and universality. There is a bonus at the end of this post for Haskell programmers. Kan Extensions I wrote about Kan extensions before, so here I’ll only recap the definition using the notation from Emily’s book. Here’s the setup: We want to extend a functor F, which goes from category C to E, along another functor K, which goes from C to D. This extension is a new functor from D to E. To give you some intuition, imagine that the functor F is the Rosetta Stone. It’s a functor that maps the Ancient Egyptian text of a royal decree to the same text written in Ancient Greek. The functor K embeds the Rosetta Stone hieroglyphics into the know corpus of Egyptian texts from various papyri and inscriptions on the walls of temples. We want to extend the functor F to the whole corpus. In other words, we want to translate new texts from Egyptian to Greek (or whatever other language that’s isomorphic to it). In the ideal case, we would just want F to be isomorphic to the composition of the new functor after K. That’s usually not possible, so we’ll settle for less. A Kan extension is a functor which, when composed with K produces something that is related to F through a natural transformation. In particular, the left Kan extension, Lan_K F, is equipped with a natural transformation \eta from F to Lan_K F \circ K. (The right Kan extension has this natural transformation reversed.) There are usually many such functors, so there is the standard trick of universal construction to pick the best one. In our analogy, we would ideally like the new functor, when applied to the hieroglyphs from the Rosetta stone, to exactly reproduce the original translation, but we’ll settle for something that has the same meaning. We’ll try to translate new hieroglyphs by looking at their relationship with known hieroglyphs. That means we should look closely at morphism in D. Comma Category The key to understanding how Kan extensions work is to realize that, in general, the functor K embeds C in D in a lossy way. There may be objects (and morphisms) in D that are not in the image of K. We have to somehow define the action of Lan_K F on those objects. What do we know about such objects? We know from the Yoneda lemma that all the information about an object is encoded in the totality of morphisms incoming to or outgoing from that object. We can summarize this information in the form of a functor, the hom-functor. Or we can define a category based on this information. This category is called the slice category. Its objects are morphisms from the original category. Notice that this is different from Yoneda, where we talked about sets of morphisms — the hom-sets. Here we treat individual morphisms as objects. This is the definition: Given a category C and a fixed object c in it, the slice category C/c has as objects pairs (x, f), where x is an object of C and f is a morphism from x to c. In other words, all the arrows whose codomain is c become objects in C/c. A morphism in C/c between two objects, (x, f) and (y, g) is a morphism h : x \to y in C that makes the following triangle commute: In our case, we are interested in an object d in D, and the slice category D/d describes it in terms of morphisms. Think of this category as a holographic picture of d. But what we are really interested in, is how to transfer this information about d to E. We do have a functor F, which goes from C to E. We need to somehow back-propagate the information about d to C along K, and then use F to move it to E. So let’s try again. Instead of using all morphisms impinging on d, let’s only pick the ones that originate in the image of K, because only those can be back-propagated to C. This gives us limited information about d, but it’ll have to do. We’ll just use a partial hologram of d. Instead of the slice category, we’ll use the comma category K \downarrow d. Here’s the definition: Given a functor K : C \to D and an object d of D, the comma category K \downarrow d has as objects pairs (c, f), where c is an object of C and f is a morphism from K c to d. So, indeed, this category describes the totality of morphisms coming to d from the image of K. A morphism in the comma category from (c, f) to (c', g) is a morphism h : c \to c' such that the following triangle commutes: So how does back-propagation work? There is a projection functor \Pi_d : K \downarrow d \to C that maps an object (c, f) to c (with obvious action on morphisms). This functor loses a lot of information about objects (completely forgets the f part), but it keeps a lot of the information in morphisms — it only picks those morphisms in C that preserve the structure of the comma category. And it lets us “upload” the hologram of d into C Next, we transport all this information to E using F. We get a mapping F \circ \Pi_d : K \downarrow d \to E Here’s the main observation: We can look at this functor F \circ \Pi_d as a diagram in E (remember, when constructing limits, diagrams are defined through functors). It’s just a bunch of objects and morphisms in E that somehow approximate the image of d. This holographic information was extracted by looking at morphisms impinging on d. In our search for (Lan_K F) d we should therefore look for an object in E together with morphisms impinging on it from the diagram we’ve just constructed. In particular, we could look at cones under that diagram (or co-cones, as they are sometimes called). The best such cone defines a colimit. If that colimit exists, it is indeed the left Kan extension (Lan_K F) d. That’s the theorem we are going to prove. To prove it, we’ll have to go through several steps: 1. Show that the mapping we have just defined on objects is indeed a functor, that is, we’ll have to define its action on morphisms. 2. Construct the transformation \eta from F to Lan_K F \circ K and show the naturality condition. 3. Prove universality: For any other functor G together with a natural transformation \gamma, show that \gamma uniquely factorizes through \eta. All of these things can be shown using clever manipulations of cones and the universality of the colimit. Let’s get to work. We have defined the action of Lan_K F on objects of D. Let’s pick a morphism g : d \to d'. Just like d, the object d' defines its own comma category K \downarrow d', its own projection \Pi_{d'}, its own diagram F \circ \Pi_{d'}, and its own limiting cone. Parts of this new cone, however, can be reinterpreted as a cone for the old object d. That’s because, surprisingly, the diagram F \circ \Pi_{d'} contains the diagram F \circ \Pi_{d}. Take, for instance, an object (c, f) : K \downarrow d, with f: K c \to d. There is a corresponding object (c, g \circ f) in K \downarrow d'. Both get projected down to the same c. That shows that every object in the diagram for the d cone is automatically an object in the diagram for the d' cone. Now take a morphism h from (c, f) to (c', f') in K \downarrow d. It is also a morphism in K \downarrow d' between (c, g \circ f) and (c', g \circ f'). The commuting triangle condition in K \downarrow d f = f' \circ K h ensures the commuting condition in K \downarrow d' g \circ f = g \circ f' \circ K h All this shows that the new cone that defines the colimit of F \circ \Pi_{d'} contains a cone under F \circ \Pi_{d}. But that diagram has its own colimit (Lan_K F) d. Because that colimit is universal, there must be a unique morphism from (Lan_K F) d to (Lan_K F) d', which makes all triangles between the two cones commute. We pick this morphism as the lifting of our g, which ensures the functoriality of Lan_K F. The commuting condition will come in handy later, so let’s spell it out. For every object (c, f:Kc \to d) we have a leg of the cone, a morphism V_d^{(c, f)} from F c to (Lan_K F) d; and another leg of the cone V_{d'}^{(c, g \circ f)} from F c to (Lan_K F) d'. If we denote the lifting of g as (Lan_K F) g then the commuting triangle is: (Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)} In principle, we should also check that this newly defined functor preserves composition and identity, but this pretty much follows automatically whenever lifting is defined using composition of morphisms, which is indeed the case here. Natural Transformation We want to show that there is a natural transformation \eta from F to Lan_K F \circ K. As usual, we’ll define the component of this natural transformation at some arbitrary object c. It’s a morphism between F c and (Lan_K F) (K c). We have lots of morphisms at our disposal, with all those cones lying around, so it shouldn’t be a problem. First, observe that, because of the pre-composition with K, we are only looking at the action of Lan_K F on objects which are inside the image of K. The objects of the corresponding comma category K \downarrow K c' have the form (c, f), where f : K c \to K c'. In particular, we can pick c' = c, and f = id_{K c}, which will give us one particular vertex of the diagram F \circ \Pi_{K c}. The object at this vertex is F c — exactly what we need as a starting point for our natural transformation. The leg of the colimiting cone we are interested in is: V_{K c}^{(c, id)} : F c \to (Lan_K F) (K c) We’ll pick this leg as the component of our natural transformation \eta_c. What remains is to show the naturality condition. We pick a morphism h : c \to c'. We know how to lift this morphism using F and using Lan_K F \circ K. The other two sides of the naturality square are \eta_c and \eta_{c'}. The bottom left composition is (Lan_K F) (K h) \circ V_{K c}^{(c, id)}. Let’s take the commuting triangle that we used to show the functoriality of Lan_K F: and replace f by id_{K c}, g by K h, d by K c, and d' by K c', to get: (Lan_K F) (K h) \circ V_{K c}^{(c, id)} = V_{K c'}^{(c, K h)} Let’s draw this as the diagonal in the naturality square, and examine the upper right composition: V_{K c'}^{(c', id)} \circ F h. This is also equal to the diagonal V_{K c'}^{(c, K h)}. That’s because these are two legs of the same colimiting cone corresponding to (c', id_{K c'}) and (c, K h). These vertices are connected by h in K \downarrow K c'. But how do we know that h is a morphism in K \downarrow K c'? Not every morphism in C is a morphism in the comma category. In this case, however, the triangle identity is automatic, because one of its sides is an identity id_{K c'}. We have shown that our naturality square is composed of two commuting triangles, with V_{K c'}^{(c, K h)} as its diagonal, and therefore commutes as a whole. Kan extension is defined using a universal construction: it’s the best of all functors that extend F along K. It means that any other extension will factor through ours. More precisely, if there is another functor G and another natural transformation: \gamma : F \to G \circ K then there is a unique natural transformation \alpha, such that (\alpha \cdot K) \circ \eta = \gamma (where we have a horizontal composition of a natural transformation \alpha with the functor K) As always, we start by clearly stating the goals. The proof proceeds in these steps: 1. Implement \alpha. 2. Prove that it’s natural. 3. Show that it factorizes \gamma through \eta. 4. Show that it’s unique. We are defining a natural transformation \alpha between two functors: Lan_K F, which we have defined as a colimit, and G. Both are functors from D to E. Let’s implement the component of \alpha at some d. It must be a morphism: \alpha_d : (Lan_K F) d \to G d Notice that d varies over all of D, not just the image of K. We are comparing the two extensions where it really matters. We have at our disposal the natural transformation: \gamma : F \to G \circ K or, in components: \gamma_c : F c \to G (K c) More importantly, though, we have the universal property of the colimit. If we can construct a cone with the nadir at G d then we can use its factorizing morphism to define \alpha_d. This cone has to be built under the same diagram as (Lan_K F) d. So let’s start with some (c, f : K c \to d). We want to construct a leg of our new cone going from F c to G d. We can use \gamma_c to get to G (K c) and then hop to G d using G f. Thus we can define the new cone’s leg as: W_c = G f \circ \gamma_c Let’s make sure that this is really a cone, meaning, its sides must be commuting triangles. Let’s pick a morphism h in K \downarrow d from (c, f) to (c', f'). A morphism in K \downarrow d must satisfy the triangle condition, f = f' \circ K h: We can lift this triangle using G: G f = G f' \circ G (K h) Naturality condition for \gamma tells us that: \gamma_{c'} \circ F h = G (K h) \circ \gamma_c By combining the two, we get the pentagon: whose outline forms a commuting triangle: G f \circ \gamma_c = G f' \circ \gamma_{c'} \circ F h Now that we have a cone with the nadir at G d, universality of the colimit tells us that there is a unique morphism from the colimiting cone to it that factorizes all triangles between the two cones. We make this morphism our \alpha_d. The commuting triangles are between the legs of the colimiting cone V_d^{(c, f)} and the legs of our new cone W_c: \alpha_d \circ V_d^{(c, f)} = G f \circ \gamma_c Now we have to show that so defined \alpha is a natural transformation. Let’s pick a morphism g : d \to d'. We can lift it using Lan_K F or using G in order to construct the naturality square: G g \circ \alpha_d = \alpha_{d'} \circ (Lan_K F) g Remember that the lifting of a morphism by Lan_K F satisfies the following commuting condition: We can combine the two diagrams: The two arms of the large triangle can be replaced using the diagram that defines \alpha, and we get: G (g \circ f) \circ \gamma_c = G g \circ G f \circ \gamma_c which commutes due to functoriality of G. Now we have to show that \alpha factorizes \gamma through \eta. Both \eta and \gamma are natural transformations between functors going from C to E, whereas \alpha connects functors from D to E. To extend \alpha, we horizontally compose it with the identity natural transformation from K to K. This is called whiskering and is written as \alpha \cdot K. This becomes clear when expressed in components. Let’s pick an object c in C. We want to show that: \gamma_c = \alpha_{K c} \circ \eta_c Let’s go back all the way to the definition of \eta: \eta_c = V_{K c}^{(c, id)} where id is the identity morphism at K c. Compare this with the definition of \alpha: If we replace f with id and d with K c, we get: \alpha_{K c} \circ V_{K c}^{(c, id)} = G id \circ \gamma_c which, due to functoriality of G is exactly what we need: \alpha_{K c} \circ \eta_c = \gamma_c Finally, we have to prove the uniqueness of \alpha. Here’s the trick: assume that there is another natural transformation \alpha' that factorizes \gamma. Let’s rewrite the naturality condition for \alpha': G g \circ \alpha'_d = \alpha'_{d'} \circ (Lan_K F) g Replacing g : d \to d' with f : K c \to d, we get: G f \circ \alpha'_{K c} = \alpha'_d \circ (Lan_K F) f The lifiting of f by Lan_K F satisfies the triangle identity: V_d^{(c, f)} = (Lan_K F) f \circ V_{K c}^{(c, id)} where we recognize V_{K c}^{(c, id)} as \eta_c. We said that \alpha' factorizes \gamma through \eta: \gamma_c = \alpha'_{K c} \circ \eta_c which let us straighten the left side of the pentagon. This shows that \alpha' is another factorization of the cone with the nadir at G d through the colimit cone with the nadir (Lan_K F)  d. But that would contradict the universality of the colimit, therefore \alpha' must be the same as \alpha. This completes the proof. Haskell Notes This post would not be complete if I didn’t mention a Haskell implementation of Kan extensions by Edward Kmett, which you can find in the library Data.Functor.Kan.Lan. At first look you might not recognize the definition given there: data Lan g h a where Lan :: (g b -> a) -> h b -> Lan g h a To make it more in line with the previous discussion, let’s rename some variables: data Lan k f d where Lan :: (k c -> d) -> f c -> Lan k f d This is an example of GADT syntax, which is a Haskell way of implementing existential types. It’s equivalent to the following pseudo-Haskell: Lan k f d = exists c. (k c -> d, f c) This is more like it: you may recognize (k c -> d) as an object of the comma category K \downarrow d, and f c as the mapping of c (which is the projection of this object back to C) under the functor F. In fact, the Haskell representation is based on the encoding of the colimit using a coend: (Lan_K F) d = \int^{c \in C} C(K c, d) \times F c The Haskell library also contains the proof that Kan extension is a functor: instance Functor (Lan k f) where fmap g (Lan kcd fc) = Lan (g . kcd) fc The natural transformation \eta that is part of the definition of the Kan extension can be extracted from the Haskell definition: eta :: f c -> Lan k f (k c) eta = Lan id In Haskell, we don’t have to prove naturality, as it is a consequence of parametricity. The universality of the Kan extension is witnessed by the following function: toLan :: Functor g => (forall c. f c -> g (k c)) -> Lan k f d -> g d toLan gamma (Lan kcd fc) = fmap kcd (gamma fc) It takes a natural transformation \gamma from F to G \circ K, and produces a natural transformation we called \alpha from Lan_K F to G. This is \gamma expressed as a composition of \alpha and \eta: fromLan :: (forall d. Lan k f d -> g d) -> f c -> g (k c) fromLan alpha = alpha . eta As an example of equational reasoning, let’s prove that \alpha defined by toLan indeed factorizes \gamma. In other words, let’s prove that: fromLan (toLan gamma) = gamma Let’s plug the definition of toLan into the left hand side: fromLan (\(Lan kcd fc) -> fmap kcd (gamma fc)) then use the definition of fromLan: (\(Lan kcd fc) -> fmap kcd (gamma fc)) . eta Let’s apply this to some arbitrary function g and expand eta: (\(Lan kcd fc) -> fmap kcd (gamma fc)) (Lan id g) Beta-reduction gives us: fmap id (gamma g) which is indeed equal to the right hand side acting on g: gamma g The proof of toLan . fromLan = id is left as an exercise to the reader (hint: you’ll have to use naturality). I’m grateful to Emily Riehl for reviewing the draft of this post and for writing the book from which I borrowed this proof. « Previous PageNext Page »
__label__pos
0.823566
ⓘ Blog | Latter Day Saint movement and society - religion and society .. Civil rights and Mormonism Civil Rights and Mormonism have been intertwined since the religions start, with founder Joseph Smith writing on slavery in 1836. Initial Mormon converts were from the north and opposed slavery. This caused contention in the slave state of Missouri, and the church began distancing itself from abolitionism and justifying slavery based on the Bible. During this time, several slave owners joined the church, and brought their slaves with them when they moved to Nauvoo. The church adopted scriptures which teaches against influencing slaves to be "dissatisfied with their condition" as well as scriptures which teaches that "all are alike unto God." As mayor of Nauvoo, Joseph Smith prohibited blacks from holding office, joining the Nauvoo Legion, voting or marrying whites; but, as president of the Church blacks became members and several black men were ordained to the priesthood. Also during this time, Joseph Smith began his presidential campaign on a platform for the government to buy slaves into freedom over several years. He was killed during his presidential campaign. Some slave owners brought their slaves with them to Utah, though several slaves escaped. The church put out a statement of neutrality towards slavery, stating that it was between the slave owner and God. A few years later, Brigham Young began teaching that slavery was ordained of God and that equality efforts were misguided. Under his direction, Utah passed laws supporting slavery and making it illegal for blacks to vote, hold public office, join the Nauvoo Legion, or marry whites. In California, slavery was openly tolerated in the Mormon community of San Bernardino, despite being a free state. The US government freed the slaves and overturned laws prohibiting blacks from voting. After the Civil War, issues of civil rights went largely unnoticed until the civil rights movement. The National Association for the Advancement of Colored People NAACP criticized the churchs position on civil rights, led anti-discrimination marches and filed a lawsuit against the churchs practice of not allowing black children to be troop leaders. Several athletes began protesting BYU over its discriminatory practices and the LDS Church policy that did not give black people the priesthood. In response, the Church issued a statement supporting civil rights and changed its policy on boy scouts. Apostle Ezra Taft Benson began criticizing the civil rights movement and challenging accusations of police brutality. After the reversal of the priesthood ban in 1978, the church has stayed relatively silent on matters of civil rights. Mormon views on evolution The Church of Jesus Christ of Latter-day Saints takes no official position on whether or not biological evolution has occurred, nor on the validity of the modern evolutionary synthesis as a scientific theory. In the 20th century, the First Presidency of the LDS Church published doctrinal statements on the origin of man and creation. In addition, individual leaders of the LDS Church have expressed a variety of personal opinions on evolution, many of which have affected the beliefs and perceptions of Latter-day Saints. There have been three public statements from the First Presidency 1909, 1910, 1925 and one private statement 1931 from the First Presidency about the LDS Churchs view on evolution. The 1909 statement was a delayed response to the publication of On the Origin of Species by Charles Darwin. In the statement, the First Presidency affirmed their doctrine that Adam is the direct, divine offspring of God. The statement declares evolution as "the theories of men", but does not directly qualify it as untrue or evil. In response to the 1911 Brigham Young University modernism controversy, the First Presidency issued an official statement in its 1910 Christmas message that the Church members should be kind to everyone regardless of differences in opinion about evolution and that proven science is accepted by the Church with joy. In 1925, in response to the Scopes Trial, the First Presidency published a statement, similar in content to the 1909 statement, but with "anti-science" language removed. A private memo written in 1931 by the First Presidency to general church authorities confirmed a neutral stance on the existence of pre-Adamites. There are a variety of Church publications that address evolution, often with neutral or opposing viewpoints. In order to address students questions about the Churchs position on evolution in biology and related classes, BYU released a library packet on evolution in 1992. This packet contains the first three official First Presidency statement as well as the "Evolution" section in the Encyclopedia of Mormonism to supplement normal course material. Statements from Church Presidents are mixed with some vehemently against evolution and the theories of Charles Darwin, and some willing to admit that the circumstances of Earths creation are unknown and that evolution could explain some aspects of creation. In the 1930s, Church leaders Joseph Fielding Smith, B.H. Roberts, and James E. Talmage debated about the existence of pre-Adamites, eliciting a memo from the First Presidency in 1931 claiming a neutral stance on pre-Adamites. Since the publication of On the Origin of Species, some LDS scientists have published essays or speeches to try and reconcile science and Mormon doctrine. Many of these scientists subscribe to the idea that evolution is the natural process God used to create the Earth and its inhabitants and that there are commonalities between Mormon doctrine and foundations of evolutionary biology. Debate and questioning among members of the LDS Church continues concerning evolution, religion, and the reconciliation between the two. Although articles from publications like BYU Studies often represent neutral or pro-evolutionary stances, LDS-sponsored publications such as the Ensign tend to publish articles with anti-evolutionary views. Studies published since 2014 have found that the majority of Latter-day Saints do not believe humans evolved over time. A 2018 study in the Journal of Contemporary Religion, found that very liberal or moderate members of the LDS Church were more likely to accept evolution as their education level increased, whereas very conservative members of the LDS Church were less likely to accept evolution as their education level increased. Another 2018 study found that over time, LDS undergraduate attitudes towards evolution have changed from antagonistic to accepting. The researchers attributed this attitude change to more primary school exposure to evolution and a reduction in the number of anti-evolution statements from the First Presidency. Mormon Political Manifesto The "Mormon Political Manifesto" was a document issued by The Church of Jesus Christ of Latter-day Saints in 1895 to regulate the involvement of its general authorities in politics. Up until the mid-1890s, the LDS Church had proactively supported the Peoples Party and entering politics had been seen as almost a part of several church leaders ecclesiastical responsibilities. Leading up to the issuance of the Manifesto, there was major disagreement about church members entering politics. Utah was transitioning to statehood and to a situation where the two national parties dominated Utah politics, and the church began to adopt a posture of political neutrality. The leaders of the church, led by church president Wilford Woodruff, decided to establish a written rule that the general authorities of the church would require the approval of the First Presidency before seeking public office. Apostle Moses Thatcher did not agree with this new rule and in 1896 was removed from the Quorum of the Twelve Apostles over the issue. B. H. Roberts, a general authority of the church, resisted the Manifesto at first, but agreed to it in 1896 under threat of being removed from his position. More recently, the LDS Church has taken a more stringent rule on political involvement by its leaders. Current policies prohibit general authorities not only from running for office but also from contributing financially to political campaigns. This policy was implemented in 2011. Mormonism and slavery The Latter Day Saint movement has had varying and conflicting teachings on slavery. Early converts were initially from the Northern United States and opposed slavery, believing they were supported by Mormon scripture. After the church base moved to the slave state of Missouri and gained Southern converts, church leaders began to own slaves. New scriptures were revealed teaching against interfering with the slaves of others. A few slave owners joined the church, and took their slaves with them to Nauvoo, Illinois, although Illinois was a free state. After Joseph Smiths death, the church split. The largest contingent followed Brigham Young, who supported slavery but opposed abuse, allowing enslaved men and women to be brought to the territory but prohibiting the enslavement of their descendants and requiring their consent before any move and became The Church of Jesus Christ of Latter-day Saints LDS Church. A smaller contingent followed Joseph Smith III, who opposed slavery, and became the Reorganized Church of Jesus Christ of Latter Day Saints RLDS. Young led his followers to Utah, where he led the efforts to legalize slavery in the Utah Territory. Brigham Young taught that slavery was ordained of God and taught that the Republican Partys efforts to abolish slavery went against the decrees of God and would eventually fail. He also encouraged members to participate in the Indian slave trade. Sexuality and Mormonism Sexuality has a prominent role within the theology of The Church of Jesus Christ of Latter-day Saints, which teaches that gender is defined in the premortal existence, and that part of the purpose of mortal life is for men and women to be sealed together, forming bonds that allow them to progress eternally together in the afterlife. It also teaches that sexual relations within the framework of opposite-sex marriage is healthy, necessary, and ordained of God. In contrast with some orthodox Christian movements, sexuality in the Churchs theology is neither a product of original sin nor a "necessary evil". In accordance with the law of chastity, LDS Church doctrine bars sexual activity outside of heterosexual marriage. Utah Compact The Utah Compact is a declaration of five principles whose stated purpose is to "guide Utahs immigration discussion." At a ceremony held on the grounds of the Utah State Capitol on November 11, 2010, it was signed by business, law enforcement and religious leaders including the Catholic Diocese of Salt Lake City, and by various other community leaders and individuals. Users also searched:
__label__pos
0.96992
People first Celebrate success Our yearly company events, FIT and Outlook, are ideal for celebrating successes and facilitating camaraderie. Both events offer content-oriented and social programs and revolve around a certain theme.  The FIT event focuses on Fun, Inspiration, and Training. Outlook, on the other hand, is about looking back, celebrating successes, and dwelling on the new year. Attending these events together strengthens relationships. The content-oriented program establishes a common language, while the evening program encourages employees to spend time with each other outside a business context, fostering friendships between them.  IG&H organizes various initiatives, such as FIT and Outlook, that help us develop our shared culture on a continuous basis and keep each other up to date on important organizational matters. These events are always organized by a group of employees. Derk Bouwman - Consultant, Retail How we put people first:
__label__pos
0.948459
Kombutxa Isotonic With filtered water, cane sugar, green tea and sea water from Formentera! Do you do sport? We have your drink! It was a challenge, but we succeeded. We wanted those who do sport to have a Kombucha adapted to their needs without strange and artificial ingredients. Loaded with the properties of the original fermented tea, but also adding those provided by the sea water. Freshen up, rehydrate and recover after sport, It’ll be easier than ever!
__label__pos
0.988114
Suppose $S$ is a Riemannian 2-manifold (e.g. a surface in $\mathbb{R}^3$). Let $T$ be a geodesic triangle on $S$: a triangle whose edges are geodesics. If $T$ can be moved around arbitrarily on $S$ while remaining congruent (edge lengths the same, vertex angles the same), does this imply that $S$ has constant curvature? I realize this is a naive question. If $S$ has constant curvature, then $T$ can be moved around without distortion. I would like to see reasoning for the reverse: If $T$ can be moved around while maintaining congruence, then $S$ must have constant curvature. What is not clear to me is how to formalize "moved around." • $\begingroup$ Is isometryic congruent equivalent to "Edge lengths the same, vertex angle the same", as you wrote in your question?And does this equivalency characterize non negative constant curvature? $\endgroup$ Jun 18 '18 at 7:58 • 4 $\begingroup$ If this is true for surfaces, then it should be true in arbitrary dimension, right? Because it would imply constant sectional curvature. $\endgroup$ – Gro-Tsen Jun 18 '18 at 9:51 • $\begingroup$ @Gro-Tsen: Yes, I believe you are correct: should be true in $\mathbb{R}^n$. $\endgroup$ Jun 18 '18 at 10:18 • 3 $\begingroup$ If by 'moved around' you mean that there are intrinsic isometries of the surface $S$ that allow you to move a given vertex of $T$ to any other point of the surface, then, yes, the surface has constant Gauss curvature. This follows because the group of intrinsic isometries preserves the Gauss curvature, and your 'move around arbitrarily' hypothesis would then imply that the Gauss curvature must be the same at any two points. It would be better to put conditions on the set of triangles in $S$ congruent to $T$, i.e., that there should be 'enough' of them in an appropriate sense. $\endgroup$ Jun 18 '18 at 15:16 • 3 $\begingroup$ @JosephO'Rourke: However, your question only states "Let $T$ be a geodesic triangle", not "Let $T$ be any geodesic triangle". If any geodesic triangle can be copied without 'distortion', then, sure, the Gauss curvature has to be constant. This is an easy consequence of geodesic normal coordinates. I thought you were asking about the harder problem of knowing only that you can copy a specific 'congruence class' of geodesic triangles without distortion. $\endgroup$ Jun 19 '18 at 1:04 Already Riemann in his famous "On the Hypotheses Which Lie at the Bases of Geometry" concludes that the spaces of constant curvature are precisely those in which figures can move without distortion. However, as you correctly remarked, a free mobility of rigid bodies is a rather subtle notion. For historical perspective on this problem see https://arxiv.org/abs/math/0305023 (From quaternions to cosmology: spaces of constant curvature, ca. 1873-1925, by Moritz Epple) and https://arxiv.org/abs/1310.7334 (The problem of space in the light of relativity: the views of H. Weyl and E. Cartan, by Erhard Scholz). Also Jürgen Jost provides a detailed and interesting commentaries on Riemann's paper in the book https://www.springer.com/us/book/9783319260402 This is sort of an answer and sort of not. I'll let you be the judge: Suppose you formulate the question, not in terms of 'motion' (which you left vague) but terms of 'freely copying' a triangle $T$, as follows: Let $(S,g)$ be a Riemannian surface, and let $T$ be a triangle, i.e., a triple of points $(p_1,p_2,p_3)$ on the surface together with three geodesic segments $\gamma_{ij}=\gamma_{ji}$ for $i\not=j$ where $p_i$ and $p_j$ are the endpoints of $\gamma_{ij}$. Let $\ell_{ij}>0$ be the length of the geodesic segment $\gamma_{ij}$. Now, suppose that, side-angle-side holds for this specific $T$ in the following sense: Whenever $(q_1,q_2,q_3)$ are three points of $S$ and $\eta_{12}$, respectively $\eta_{13}$, are geodesic segments of length $\ell_{12}$, respectively $\ell_{13}$, with endpoints $q_1$ and $q_2$, respectively $q_1$ and $q_3$, so that the angle between these geodesic segements at $q_1$ is the same as the angle between the geodesic segments $\gamma_{12}$ and $\gamma_{13}$ at $p_1$, then there exists a geodesic segment $\eta_{23}$ of length $\ell_{23}$ with endpoints $q_2$ and $q_3$, such that for all $i,j,k$ distinct, the angle between $\eta_{ij}$ and $\eta_{ik}$ is the same as the angle between $\gamma_{ij}$ and $\gamma_{ik}$. Does it then follow that $S$ has constant Gauss curvature? The answer is 'no', even if $S$ is a sphere: Let $(S,g)$ be a Zoll sphere all of whose geodesics are closed and of length $L$. Fix a point $p$ and let $T$ be a triangle with vertices $(p_1,p_2,p_3) = (p,p,p)$ and let the sides be any three geodesic segments $\gamma_{ij}$ of length $L$ with endpoints $p_i=p$. This 'triangle' can be copied to any triangle $T'$ with vertices $q_i=q$ by choosing the geodesic segments $\eta_{ij}$ so that the angles between sides are equal to the angles between the corresponding sides of $T$. It is more convenient to talk about hinges, i.e., pair of sides issuing from a vertex. In constant curvature spaces hinges can be moved around without distortion so long as the angle at the vertex is preserved. In rank-1 symmetric spaces hinges can be moved around without distortion so long as a pair of angles at the vertex are preserved. Thus, in the context of complex projective spaces, one has a well-developed trigonometry, including a theorem of cosines expressing the third side in terms of the edges of the hinge and the two angles. This seems to have been first written about by Shirokov in the 1950s. For a study of complex projective trigonometry see e.g., this 1991 article in Geometriae Dedicata. Here's one approach to formalizing "moved around". Let $G=(V,E)$ be a graph. Let $p:V\rightarrow S$ be a placement of $G$, that is, a map from the vertex set of $G$ to $S$. Let us call the data $(G,p)$ a framework on $S$. Let us define a motion of $(G,p)$ to be a continuous family of placements $f:V\times[0,1]\rightarrow S$ such that: 1. $f(v,0) = p(v)$ for all $v\in V$, that is, the motion begins at $p$. 2. $d_S(f(u,t),f(v,t))=d_S(p(u),p(v))$ for all $uv\in E$ and all $t\in[0,1]$, where $d_S(\cdot,\cdot)$ is the distance function on $S$. This condition just states that the motion preserves the lengths of all edges of $G$. 3. $\alpha_t(u,v,w)=\alpha_0(u,v,w)$ for all triples of vertices $uvw$ such that $uv\in E$ and $vw\in E$, where $\alpha_t(u,v,w)$ is the angle between the geodesic segments $uv$ and $vw$ at $v$. This condition ensures that the motion preserves all angles between adjacent pairs of edges of $G$. [Such frameworks are related to the point-line frameworks of Jackson and Owen, and also work of Tay, Whiteley, Jackson and Jordán and others on 2D molecular graphs and frameworks (see e.g. this paper of Jackson and Jordán.] Your question is essentially: Let $(G,p)$ be the framework constructed from a geodesic triangle $T$ on $S$. Suppose there exists a motion from $(G,p)$ to any other congruent geodesic triangle (i.e. one with the same lengths, angles and orientation as $T$). Does this imply that $S$ has constant curvature? I suspect the answer is no, for a possibly silly reason. It's not obvious to me that generic embedded surfaces need to have any pairs of distinct congruent geodesic triangles at all; the condition would be vacuous on such surfaces, which also don't have constant curvature. So let us add an additional condition on $S$ that such pairs exist. Thanks to the answer of Zurab Silagadze, I can see that a positive answer to a related question was claimed by Riemann in §II.4 of his famous paper "Ueber die Hypothesen welche der Geometrie zu Grunde liegen", (see also this English translation by Clifford). Here is an edited version of Clifford's translation of the passage in question: The common character of manifolds with constant curvature may also be expressed thus, that figures may be moved in them without stretching. For clearly figures could not be arbitrarily shifted and turned round in them if the curvature at each point were not the same in all directions. On the other hand, however, the measure-relations of the manifold are entirely determined by the curvature; they are therefore exactly the same in all directions at one point as at another, and consequently the same constructions can be made from it: whence it follows that in manifolds with constant curvature figures may be given any arbitrary position. It is not clear to me what "figures" are being considered here, and I admit to not understanding what exactly is proved here, if anything. According to §2.2 of Hans Freudenthal's "Lie groups in the foundations of geometry", the first proof was given by Rudolf Lipschitz in the 1870 paper Fortgesetzte Untersuchungen in Betreff der ganzen homogenen Functionen von n Differentialen. Unfortunately, my German is not up to the task of finding the precise statement in this paper. Any takers? • 1 $\begingroup$ Shouldn't the first condition read "$f(v,0) = p(v)$"? $\endgroup$ – tomsmeding Jun 18 '18 at 8:19 • $\begingroup$ @tomsmeding Thanks for the correction. $\endgroup$ – j.c. Jun 18 '18 at 9:19 • $\begingroup$ Your formalization of "moved around" seems to correctly capture what I had in mind. Thanks. $\endgroup$ Jun 18 '18 at 11:11 The integral over a triangle of the curvature is equal to the difference between the sum of angles for a triangle in a flat surface (that is, $\pi$) and the actual sum of angles for that triangle. $\sum \theta = \pi + \int \kappa $. For two triangles to be congruent, their sum of angles must be equal. So it suffices to show that moving the triangle does not change $\int \kappa $. • 1 $\begingroup$ I think you mean that it suffices to show that moving the triangle changes $\int \kappa$ unless $\kappa$ is constant. $\endgroup$ – Ben McKay Jun 21 '18 at 13:10 Your Answer
__label__pos
0.788976
Lab 4: Condensed Matter Calculations Topics covered in this lab • Tiling DFT primitive cells into QMC supercells • Reducing finite-size errors via extrapolation • Reducing finite-size errors via averaging over twisted boundary conditions • Using the B-spline mesh factor to reduce memory requirements • Using a coarsely resolved vacuum buffer region to reduce memory requirements • Calculating the DMC total energies of representative 2D and 3D extended systems Lab directories and files ├── - DFT only for prim to conv cell ├── - QMC only for prim to conv cell ├── - DFT and QMC for prim to 16 atom cell ├── - DFT and OPT for graphene ├── - VMC scan over orbital bspline mesh factors ├── - DMC for final meshfactor └── pseudopotentials - pseudopotential directory ├── Be.ncpp - Be PP for Quantum ESPRESSO ├── Be.xml - Be PP for QMCPACK ├── C.BFD.upf - C PP for Quantum ESPRESSO └── C.BFD.xml - C PP for QMCPACK The goal of this lab is to introduce you to the somewhat specialized problems involved in performing DMC calculations on condensed matter as opposed to the atoms and molecules that were the focus of the preceding labs. Calculations will be performed on two different systems. Firstly, we will perform a series of calculations on BCC beryllium, focusing on the necessary methodology to limit finite-size effects. Secondly, we will perform calculations on graphene as an example of a system where QMCPACK’s capability to handle cases with mixed periodic and open boundary conditions is useful. This example will also focus on strategies to limit memory usage for such systems. All of the calculations performed in this lab will use the Nexus workflow management system, which vastly simplifies the process by automating the steps of generating trial wavefunctions and performing DMC calculations. For any DMC calculation, we must start with a trial wavefunction. As is typical for our calculations of condensed matter, we will produce this wavefunction using DFT. Specifically, we will use QE to generate a Slater determinant of SPOs. This is done as a three-step process. First, we calculate the converged charge density by performing a DFT calculation with a fine grid of k-points to fully sample the Brillouin zone. Next, a non-self- consistent calculation is performed at the specific k-points needed for the supercell and twists needed in the DMC calculation (more on this later). Finally, a wavefunction is converted from the binary representation used by QE to the portable hdf5 representation used by QMCPACK. The choice of k-points necessary to generate the wavefunctions depends on both the supercell chosen for the DMC calculation and by the supercell twist vectors needed. Recall that the wavefunction in a plane-wave DFT calculation is written using Bloch’s theorem as: (76)\[\Psi(\vec{r}) = e^{i\vec{k}\cdot\vec{r}}u(\vec{r})\:,\] where \(\vec{k}\) is confined to the first Brillouin zone of the cell chosen and \(u(\vec{r})\) is periodic in this simulation cell. A plane-wave DFT calculation stores the periodic part of the wavefunction as a linear combination of plane waves for each SPO at all k-points selected. The symmetry of the system allows us to generate an arbitrary supercell of the primitive cell as follows: Consider the set of primitive lattice vectors, \(\{ \mathbf{a}^p_1, \mathbf{a}^p_2, \mathbf{a}^p_3\}\). We may write these vectors in a matrix, \(\mathbf{L}_p\), the rows of which are the primitive lattice vectors. Consider a nonsingular matrix of integers, \(\mathbf{S}\). A corresponding set of supercell lattice vectors, \(\{\mathbf{a}^s_1, \mathbf{a}^s_2, \mathbf{a}^s_3\}\), can be constructed by the matrix product (77)\[\mathbf{a}^s_i = S_{ij} \mathbf{a}^p_j]\:.\] If the primitive cell contains \(N_p\) atoms, the supercell will then contain \(N_s = |\det(\mathbf{S})| N_p\) atoms. Now, the wavefunciton at any point in this new supercell can be related to the wavefunction in the primitive cell by finding the linear combination of primitive lattice vectors that maps this point back to the primitive cell: (78)\[\vec{r}' = \vec{r} + x \mathbf{a}^p_1 + y \mathbf{a}^p_2 + z\mathbf{a}^p_3 = \vec{r} + \vec{T}\:,\] where \(x, y, z\) are integers. Now the wavefunction in the supercell at point \(\vec{r}'\) can be written in terms of the wavefunction in the primitive cell at \(\vec{r}'\) as: (79)\[\Psi(\vec{r}) = \Psi(\vec{r}') e^{i \vec{T} \cdot \vec{k}}\:,\] where \(\vec{k}\) is confined to the first Brillouin zone of the primitive cell. We have also chosen the supercell twist vector, which places a constraint on the form of the wavefunction in the supercell. The combination of these two constraints allows us to identify family of N k-points in the primitive cell that satisfy the constraints. Thus, for a given supercell tiling matrix and twist angle, we can write the wavefunction everywhere in the supercell by knowing the wavefunction a N k-points in the primitive cell. This means that the memory necessary to store the wavefunction in a supercell is only linear in the size of the supercell rather than the quadratic cost if symmetry were neglected. Total energy of BCC beryllium When performing calculations of periodic solids with QMC, it is essential to work with a reasonable size supercell rather than the primitive cells that are common in mean field calculations. Specifically, all of the finite-size correction schemes discussed in the morning require that the exchange-correlation hole be considerably smaller than the periodic simulation cell. Additionally, finite-size effects are lessened as the distance between the electrons in the cell and their periodic images increases, so it is advantageous to generate supercells that are as spherical as possible to maximize this distance. However, a competing consideration is that when calculating total energies we often want to extrapolate the energy per particle to the thermodynamic limit by means of the following formula in three dimensions: (80)\[ E_{\inf} = C + E_{N}/N\:.\] This formula derived assuming the shape of the supercells is consistent (more specifically that the periodic distances scale uniformly with system size), meaning we will need to do a uniform tiling, that is, \(2\times2\times2\), \(3\times3\times3\), etc. As a \(3\times3\times3\) tiling is 27 times larger than the supercell and the practical limit of DMC is on the order of 200 atoms (depending on Z), sometimes it is advantageous to choose a less spherical supercell with fewer atoms rather than a more spherical one that is too expensive to tile. In the case of a BCC crystal, it is possible to tile the one atom primitive cell to a cubic supercell only by doubling the number of electrons. This is the best possible combination of a small number of atoms that can be tiled and a regular box that maximizes the distance between periodic images. We will need to determine the tiling matrix S that generates this cubic supercell by solving the following equation for the coefficients of the S matrix: (81)\[\begin{split}\left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] = \left[\begin{array}{rrr} s_{11} & s_{12} & s_{13} \\ s_{21} & s_{22} & s_{23} \\ s_{31} & s_{32} & s_{33} \end{array}\right] \cdot \left[\begin{array}{rrr} 0.5 & 0.5 & -0.5 \\ -0.5 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0.5 \end{array}\right]\:.\end{split}\] We will now use Nexus to generate the trial wavefunction for this BCC beryllium. Fortunately, the Nexus will handle determination of the proper k-vectors given the tiling matrix. All that is needed is to place the tiling matrix in the file. Now the definition of the physical system is bcc_Be = generate_physical_system( lattice = 'cubic', cell = 'primitive', centering = 'I', atoms = 'Be', constants = 3.490, units = 'A', net_charge = 0, net_spin = 0, Be = 2, tiling = [[a,b,c],[d,e,f],[g,h,i]], kgrid = kgrid, kshift = (.5,.5,.5) where the tiling line should be replaced with the preceding row major tiling matrix. This script file will now perform a converged DFT calculation to generate the charge density in a directory called bcc-beryllium/scf and perform a non-self-consistend DFT calculation to generate SPOs in the directory bcc-beryllium/nscf. Fortunately, Nexus will calculate the required k-points needed to tile the wavefunction to the supercell, so all that is necessary is the granularity of the supercell twists and whether this grid is shifted from the origin. Once this is finished, it performs the conversion from pwscf’s binary format to the hdf5 format used by QMCPACK. Finally, it will optimize the coefficients of 1-body and 2-body Jastrow factors in the supercell defined by the tiling matrix. Run these calculations by executing the script You will notice the small calculations required to generate the wavefunction of beryllium in a one-atom cell are rather inefficient to run on a high-performance computer such as vesta in terms of the time spent doing calculations versus time waiting on the scheduler and booting compute nodes. One of the benefits of the portable HDF format that is used by QMCPACK is that you can generate data like wavefunctions on a local workstation or other convenient resource and use high-performance clusters for the more expensive QMC calculations. In this case, the wavefunction is generated in the directory bcc-beryllium/nscf-2at_222/pwscf_ output in a file called pwscf.pwscf.h5. For debugging purposes, it can be useful to verify that the contents of this file are what you expect. For instance, you can use the tool h5ls to check the geometry of the cell where the DFT calculations were performed or the number of k-points or electrons in the calculation. This is done with the command h5ls -d pwscf.pwscf.h5/supercell or h5ls -d pwscf.pwscf.h5/electrons. In the course of running, you will get an error when attempting to perform the VMC and wavefunction optimization calculations. This is because the wavefunction has generated supercell twists of the form (+/- 1/4, +/- 1/4, +/- 1/4). In the case that the supercell twist contains only 0 or 1/2, it is possible to operate entirely with real arithmetic. The executable that has been indicated in was compiled for this case. Note that where possible, the memory use is a factor of two less than the general case and the calculations are somewhat faster. However, it is often necessary to perform calculations away from these special twist angles to reduce finite-size effects. To fix this, delete the directory bcc-beryllium/opt-2at, change the line near the top of from qmcpack = '/soft/applications/qmcpack/Binaries/qmcpack' qmcpack = '/soft/applications/qmcpack/Binaries/qmcpack_comp' and rerun the script. When the optimization calculation has finished, check that everything has proceeded correctly by looking at the output in the opt-2at directory. Firstly, you can grep the output file for Delta to see if the cost function has indeed been decreasing during the optimization. You should find something like this: OldCost: 4.8789147e-02 NewCost: 4.0695360e-02 Delta Cost:-8.0937871e-03 OldCost: 3.8507795e-02 NewCost: 3.8338486e-02 Delta Cost:-1.6930674e-04 OldCost: 4.1079105e-02 NewCost: 4.0898345e-02 Delta Cost:-1.8076319e-04 OldCost: 4.2681333e-02 NewCost: 4.2356598e-02 Delta Cost:-3.2473514e-04 OldCost: 3.9168577e-02 NewCost: 3.8552883e-02 Delta Cost:-6.1569350e-04 OldCost: 4.2176276e-02 NewCost: 4.2083371e-02 Delta Cost:-9.2903058e-05 OldCost: 4.3977361e-02 NewCost: 4.2865751e-02 Delta Cost:-1.11161830-03 OldCost: 4.1420944e-02 NewCost: 4.0779569e-02 Delta Cost:-6.4137501e-04 which shows that the starting wavefunction was fairly good and that most of the optimization occurred in the first step. Confirm this by using qmca to look at how the energy and variance changed over the course of the calculation with the command: qmca -q ev -e 10 *.scalar.dat executed in the opt-2at directory. You should get output like the following: LocalEnergy Variance ratio opt series 0 -2.159139 +/- 0.001897 0.047343 +/- 0.000758 0.0219 opt series 1 -2.163752 +/- 0.001305 0.039389 +/- 0.000666 0.0182 opt series 2 -2.160913 +/- 0.001347 0.040879 +/- 0.000682 0.0189 opt series 3 -2.162043 +/- 0.001223 0.041183 +/- 0.001250 0.0190 opt series 4 -2.162441 +/- 0.000865 0.039597 +/- 0.000342 0.0183 opt series 5 -2.161287 +/- 0.000732 0.039954 +/- 0.000498 0.0185 opt series 6 -2.163458 +/- 0.000973 0.044431 +/- 0.003583 0.0205 opt series 7 -2.163495 +/- 0.001027 0.040783 +/- 0.000413 0.0189 Now that the optimization has completed successfully, we can perform DMC calculations. The first goal of the calculations will be to try to eliminate the 1-body finite-size effects by twist averaging. The script has the necessary input. Note that on line 42 two twist grids are specified, (2,2,2) and (3,3,3). Change the tiling matrix in this input file as in and start the calculations. Note that this workflow takes advantage of QMCPACK’s capability to group jobs. If you look in the directory dmc-2at_222 at the job submission script (, you will note that rather than operating on an XML input file, qmcapp is targeting a text file called This file is a simple text file that contains the names of the eight XML input files needed for this job, one for each twist. When operated in this mode, QMCPACK will use MPI groups to run multiple copies of itself within the same MPI context. This is often useful both in terms of organizing calculations and for taking advantage of the large job sizes that computer centers often encourage. The DMC calculations in this case are designed to complete in a few minutes. When they have finished running, first look at the scalar.dat files corresponding to the DMC calculations at the various twists in dmc-2at_222. Using a command such as qmca -q ev -e 32 *.s001.scalar.dat (with a suitably chosen number of blocks for the equilibration), you will see that the DMC energy in each calculation is nearly identical within the statistical uncertainty of the calculations. In the case of a large supercell, this is often indicative of a situation where the Brillouin zone is so small that the 1-body finite-size effects are nearly converged without any twist averaging. In this case, however, this is because of the symmetry of the system. For this cubic supercell, all of the twist angles chosen in this shifted \(2\times2\times2\) grid are equivalent by symmetry. In the case where substantial resources are required to equilibrate the DMC calculations, it can be beneficial to avoid repeating such twists and instead simply weight them properly. In this case, however, where the equilibration is inexpensive, there is no benefit to adding such complexity as the calculations can simply be averaged together and the result is equivalent to performing a single longer calculation. Using the command qmc -a -q ev -e 16 *.s001.scalar.dat, average the DMC energies in dmc-2at_222 and dmc-2at_333 to see whether the 1-body finite-size effects are converged with a \(3\times3\times3\) grid of twists. When using beryllium as a metal, the convergence is quite poor (0.025 Ha/Be or 0.7 eV/Be). If this were a production calculation it would be necessary to perform calculations on much larger grids of supercell twists to eliminate the 1-body finite-size effects. In this case there are several other calculations that would warrant a high priority. Script has been provided in which you can input the appropriate tiling matrix for a 16-atom cell and perform calculations to estimate the 2-body finite-size effects, which will also be quite large in the 2-atom calculations. This script will take approximately 30 minutes to run to completion, so depending on your interest, you can either run it or work to modify the scripts to address the other technical issues that would be necessary for a production calculation such as calculating the population bias or the time step error in the DMC calculations. Another useful exercise would be to attempt to validate this PP by calculating the ionization potential and electron affinity of the isolated atom and compare it with the experimental values: IP = 9.3227 eV , EA = 2.4 eV. Handling a 2D system: graphene In this section we examine a calculation of an isolated sheet of graphene. Because graphene is a 2D system, we will take advantage of QMCPACK’s capability to mix periodic and open boundary conditions to eliminate and spurious interaction of the sheet with its images in the z direction. Run the script, which will generate the wavefunction and optimize one and two body jastrow factors. In the script; notice line 160: bconds = ’ppn’ in the generate_qmcpack function, which specifies this mix of open and periodic boundary conditions. Consequently, the atoms will need to be kept away from this open boundary in the z direction as the electronic wavefunction will not be defined outside of the simulation box in this direction. For this reason, all of the atom positions at the beginning of the file have z coordinates 7.5. At this point, run the script Aside from the change in boundary conditions, the main thing that distinguishes this kind of calculation from the previous beryllium example is the large amount of vacuum in the cell. Although this is a very small calculation designed to run quickly in the tutorial, in general a more converged calculation would quickly become memory limited on an architecture like BG/Q. When the initial wavefunction optimization has completed to your satisfaction, run the script This examines within VMC an approach to reducing the memory required to store the wavefunction. In, the spacing between the B-spline points is varied uniformly. The mesh spacing is a prefactor to the linear spacing between the spline points, so the memory use goes as the cube of the meshfactor. When you run the calculations, examine the .s000.scalar.dat files with qmca to determine the lowest possible mesh spacing that preserves both the VMC energy and the variance. Finally, edit the file, which will perform two DMC calculations. In the first, (qmc1) replace the following lines: meshfactor = xxx, precision = '---', with the values you have determined will perform the calculation with as small as possible wavefunction. Note that we can also use single precision arithmetic to store the wavefunction by specifying precision=`single.’ When you run the script, compare the output of the two DMC calculations in terms of energy and variance. Also, see if you can calculate the fraction of memory that you were able to save by using a meshfactor other than 1 and single precision arithmetic. Upon completion of this lab, you should be able to use Nexus to perform DMC calculations on periodic solids when provided with a PP. You should also be able to reduce the size of the wavefunction in a solid-state calculation in cases where memory is a limiting factor.
__label__pos
0.933482
Design a tool to identify the types of devices used in communication( Such as Mobile/Desktop) How do we identify the devices that are connected to the same network? I have currently used netdiscover. However, that only gives an idea of the vendors. I am looking for something that will identify the type of devices like Mobile or Desktop. How many English words do you know? Test your English vocabulary size, and measure how many words do you know Online Test Powered by Examplum
__label__pos
0.939026
Panhead Supercharger Apa Can 375ml $5.99 each All American hops Centennial, Citra and Simcoe overwhelm the nose and kick start the taste buds, departing with a strong bitterness, making it an addictive combination Alcohol by volume Item Cost 2. Choose Delivery or Pickup 3. Add Coupon
__label__pos
0.999997
Call our office if your bite feels uneven, you have persistent sensitivity or discomfort, or if you have any questions or concerns. To protect your crown, avoid chewing ice or other hard objects. Brush and floss normally, but if your teeth are sensitive to hot, cold, or pressure, you can use a desensitizing toothpaste. If sensitivity persists beyond a few days, call us.
__label__pos
0.827292
E106: What Is This Thing? Posted: July 24, 2021 E106: What Is This Thing? Check It Out Welcome to Edition 106 of What Is This Thing?, my recurring visual riddle series consisting of one simple photo - see above - and one simple question: what is this thing? Below are a few hints to help you out. 1. In case you were wondering how many things can look like sex toys while having an entirely different, non-sexual use, the answer is: a lot. This is one of them. 2. I bet The Boring Company would approve of it. 3. Using it might make you exclaim, "Hot diggity dog!" Ready to solve the puzzle? Click here to find out what is this thing. More Products You Might Like
__label__pos
0.702074
Sierra Nevada. La Vereda de La Estrella. Time: 6 h. From Güéjar- Sierra town a track makes its way up to the river. After an hour walking we shall find a junction that, to the left, makes its way to the Refugio of the Vadillo, but we musn´t take it. When approaching river Genil, there are some signals on the rocks towards Cueva Secreta, a natural shelter used by sheperds and mountaineers. Crossing Valdeinfierno´s brood, and after a sharp ascent, we shall arrive at a place known as the Madaja de Palo, where our way ends and we can settle our tents.
__label__pos
0.809916
Article ID : S500017490 / Last Modified : 11/04/2018 How to create seals and cards? To create seals and cards. Seals and cards can be done using Print Studio of PictureGear Studio. Please refer to the steps below: 1. Click Start and click All Programs -> PictureGear Studio -> Tool -> Print Studio. 2. Click Create. 3. Select the desired type of work such as seal, name card or card. Select Card in this example. 4. Select the given paper type. * The list of recently used paper can be shown by clicking Recently used paper. 5. From the displayed list, select a given design. 6. To create a seal card, enter a given name and click Enter. * The name can be edited through the Decor window. 7. To create a name card, enter a given job title and click Enter. * The title name can be edited through the Decor window. 8. Once completed, click Print. 9. Select the printer and set the number of prints. 10. Click Print and click OK.
__label__pos
0.945067
Sheley Wimmer IMG_3089.1 copy.jpg Sheley Wimmer is a high school teacher, creativity master, and lover of all aspects of the written word.  Her passion for penning unforgettable fiction has led her to winning and finaling in more writing contests than she can name.  A long-time member of RWA, Sheley is the author of extraordinary historical romance, paranormal romance, and young adult novels.
__label__pos
0.817023
How Construction Workers Stay Safe On The Day-to-Day How Construction Workers Stay Safe On The Day-to-Day Construction professionals have a lot of on-the-job hazards to contend with within the course of their work. Not only do they have to be on the lookout for tools that can cause bodily harm, but also for scores of construction materials, including concrete, rebar, and rusty nails. These are just some examples of hazards building workers could face each day. The goal is to reduce the number of injuries on an individual or team level as much as possible through following safety practices. Ensure Employees Wear the Correct Protective Gear It’s not uncommon to hear of workers on an active worksite without the correct protective gear. When dealing with several concrete materials, including broken glass or rusty nails, it’s essential to know what kind of hazards your employees may face. It is where having a safety officer on staff can come in handy. They will help identify the potential hazards and let everyone on your team know what kind of gear is needed for each project. Correctly Construct and Maintain Scaffolding Scaffolding is an excellent tool for building workers to use to get above and below ground level and in tight areas where it might be difficult for workers to enter if they were not using the structure. However, there are some dangers involved with this tool, exceptionally when it is not cared for properly. That’s why your construction company should have an expert come out and conduct a thorough inspection of the scaffolding you will be using on your projects to ensure that everyone who works around the structure is safe. Health & Safety Training One of the best ways to make sure your employees adhere to safety practices is by providing them with regular training. Workplace safety goes beyond being taught what protective gear to wear and how to construct scaffolding properly. It also means that your team is aware of how a specific tool or procedure works and the dangers of misusing it. A construction injuries lawyer believes that education is vital to improving the safety of your employees. Display Clear Signs It’s essential to maintain a safe work environment to display signs that alert employees of potential hazards daily. One example is warning signs at the entrance and exit to a construction site. These signs can help ensure that no one gets injured, especially if they do not know what dangers are lurking in the vicinity. Another good practice is designating certain areas as no-access zones to help protect workers from getting hurt in those spots. Use Technology to Your Advantage Today, there are a variety of safety technologies that construction professionals can use to protect their employees and make sure they are aware of potential hazards. One example is an alarm that goes off when a worker gets too close to an area where the ground is weak or some overhead danger. It can help ensure that workers are protected 24/7, even when doing something as routine as eating lunch on the job site. Construction workers are always in danger of getting hurt on the job. Tools can be dangerous, and often there is not anyone around to help you if something goes wrong. That’s why it’s essential that they follow a few guidelines, including having the right gear, doing regular training, and putting up signs that alert people to potential dangers in certain areas. About Brooke
__label__pos
0.70104
wish list please !  From:  jbshorty Hi Michael. I know you've mentioned that you're considering to add a few subjects to the forum, but waiting until it's critical. But i think at least we're at the point it might be helpful to everybody if there could be a "Wish List" so everybody can see what has already been requested. And if there is a checkbox option to "vote" for a specific wish, then it might be possible for these votes to be counted so you know how big the demand is for that idea...
__label__pos
0.778941
The Top 10 Theories and Concepts of Biological Bases of Behavior on the EPPP Understanding biology, brain function and anatomy, medication, and various forms of brain impairment are important areas of the exam with which to feel comfortable. Although this material is not the most emphasized area of the exam, it surfaces enough that you should plan to spend a considerable amount of time reviewing the material and increasing your knowledge and comfort with concepts in these domains. If you map what AATBS study sections encompass biological bases of behavior, it would include material from the following domains: psychopharmacology, physiological psychology, lifespan, and abnormal psychology. When you approach this material, know that many of the questions will be straightforward, relaying on memorization of concepts, terms, and theories. To start your review, here are the top 10 theories and concepts you want to be sure to be familiar with for this section of the exam: 1. Brain-based Conditions and Symptoms Terms like achromatopsia and anomia are likely to appear on the exam, requiring you to know the symptoms of these conditions or how they present in someone with this disorder. You will notice that the majority of these terms begin with the letter “a,” so, flashcards can be a great way to review and memorize many of these terms which can get confused if not extensively reviewed. 2. Neurotransmitters Neurotransmitters, their purpose and function, and the medications and disorders associated with too much or too little of each neurotransmitter are important to be familiar with. For example, you want to be able to quickly recall that acetylcholine is associated with Alzheimer’s Disease and low levels of serotonin are linked to Depression, PTSD, and OCD, to name a few.. 3. The Nervous System Understand the central nervous system and that it includes both the brain and spinal cord. Review and become familiar with the peripheral nervous system, including both the somatic and autonomic nervous system. Be able to understand what happens in the body when an individual’s parasympathetic or sympathetic nervous system is activated. A great way to learn and memorize these structures is by using a concept map and breaking down each nervous system into its component parts. 4. Brain Anatomy and Physiology What regions of the brain are included in the hindbrain (medulla, cerebellum), midbrain, and forebrain? Understand each of these structures, their importance, and issues that surface when there is damage to each of these brain structures. 5. Learning, Memory, and Language Understand the areas of the brain associated with learning and memory as well as the neural mechanisms associated with learning and memory (long term potentiation and protein synthesis). Learn about each form of aphasia, particularly Broca’s aphasia and Wernicke’s aphasia, both of which often appear as questions on the exam. How do these forms of aphasia present in an individual and where did the damage occur? (i.e. left frontal or temporal lobe). 6. Emotion and Stress How does stress occur in the body and what areas of the brain are involved? What are the stages involved in how we respond to stress? Learn and be able to recall questions associated with theories of emotion and the areas of the brain that have proven to be connected to emotional response. For example, you might want to know that the hypothalamus is involved in the translation of emotion into physical responses. 7. Sleep and Dreaming Be able to recall and understand what occurs during each of the five stages of sleep. How is REM sleep different from NREM sleep? What EEG patterns might you notice in each of the five stages of sleep? How does sleep look different for adults and infants and how does sleep change as we get older? Questions on the EPPP regarding sleep tend to be straightforward, probing for simple recall of general information regarding each of the five stages. 8. Disorders of the Brain Understand traumatic brain injury, seizure disorders, neurological disorders, psychophysiological disorders, and endocrine disorders. 9. Drug Effects Psychopharmacology appears on the exam and you will want to have a basic understanding of the effects of psychoactive drugs on the body (agonists versus antagonists). Become familiar with the ways that age and race/ethnicity impact medication sensitivity due to metabolic differences. 10. Drugs There are several types of medications on the market and it is important to understand their use, side effects, and how they influence neurotransmitters. Get comfortable with medications by reviewing them in small segments. For example, you might start with a review of SSRIs before adding and comparing them against MAOIs. At times, many find they get overwhelmed by an intense review of all of the medications at once as the specifics can be confusing. It is best to break this section up, reviewing just a section at a time before moving forward. Find Out What to Expect on Your Licensing Exam get an introduction to the EPPP from the experts. The EPPP Orientations Webinar will go over the exam process, test-taking strategies, and review your study options. Watch Now
__label__pos
0.82291
Micronutrient fertilizer LURS® colofermin calcium Composition g/L Calcium CaO 206 Nitrogen N 103 Colofermine 953 Volume of packages: 100 mL 1 L 5 L Micronutrient fertilizer LURS® colofermin calcium LURS® colofermin calcium is a concentrated micronutrient fertilizer for elimination of plant physiological disorders due to calcium deficiency. When calcium deficiency takes place, under fruit skin its flesh becomes brown in some places. Then spots (subcortical spotting) appear in these places on the skin. They cork over time (cork spot). The fruits lose their marketable qualities. LURS® colofermin calcium contains water softener, therefore the use of plant protection products in the hard water (high content of Ca2+ and Mg2+) in conjunction with LURS® colofermin calcium does not reduce their effectiveness. Hardness salts are reliably bound by the formulation component, without causing turbidity of the spraying solution. There is no danger of the calcium sulphate (gypsum) formation, which can lead to breakdown of spraying machinery. When conducting treatments use а freshly mixed solution. LURS® colofermin calcium has a significant physiological effect on plants, namely: • regulates the construction of the cell membranes; • provides resistance to pests; • increases the viscosity and permeability of protoplasm; • stimulates processes of nitrogen fixation and transition of carbohydrates; • improves fruit marketable qualities. Due to its composition the microfertilizer can be applied in a wide range of temperatures, starting from +5° C.
__label__pos
0.70977
In the Library ~~ Before Roe V Wade , by Linda Greenhouse&Reva Siegel Before Roe v. Wade: Voices that Shaped the Abortion Debate Before the Supreme Court’s Ruling (2d edition, 2012) The Supreme Court’s 1973 decision in Roe v. Wade legalized abortion–but the debate was far from over, continuing to be a political battleground to this day. Bringing to light key voices that illuminate the case and its historical context, Before Roe v. Wade looks back and recaptures how the arguments for and against abortion took shape as claims about the meaning of the Constitution—and about how the nation could best honor its commitment to dignity, liberty, equality, and life. In this ground-breaking book, Linda Greenhouse, a Pulitzer Prize-winning journalist who covered the Supreme Court for 30 years for The New York Times, and Reva Siegel, a renowned professor at Yale Law School, collect documents illustrating cultural, political, and legal forces that helped shape the Supreme Court’s decision and the meanings it would come to have over time. A new afterword to the book explores what the history of conflict over abortion in the decade before Roe might reveal about the logic of conflict in the ensuing decades. The entanglement of the political parties in the abortion debate in the period before the Court ruled raises the possibility that Roe itself may not have engendered political polarization around abortion as is commonly supposed, but instead may have been engulfed by it.
__label__pos
0.879666
bestschoolessays Essay Writing your explanation of the difference between common sense and science, the connections between common sense and people’s beliefs, and how this all relates to critical thinking and to being a scholar-practitioner in your area(s) of interest. Include your definition of “belief perseverance” and explain how it interferes with critical thinking. Share at least one strategy you would employ (or have employed) to ensure critical thinking in the presence of a personal belief system. Save time and grade. Get a complete paper today. Place this order today and get an amazing discount!!
__label__pos
0.956224
TagResource - AWS Snowball Adds or replaces tags on a device or task. Request Syntax POST /tags/resourceArn HTTP/1.1 Content-type: application/json { "tags": { "string" : "string" } } URI Request Parameters The request uses the following URI parameters. The Amazon Resource Name (ARN) of the device or task. Required: Yes Request Body The request accepts the following data in JSON format. Optional metadata that you assign to a resource. You can use tags to categorize a resource in different ways, such as by purpose, owner, or environment. Type: String to string map Required: Yes Response Syntax HTTP/1.1 200 Response Elements An unexpected error occurred while processing the request. HTTP Status Code: 500 The request references a resource that doesn't exist. HTTP Status Code: 404 HTTP Status Code: 400 See Also
__label__pos
0.997465
In vivo models of hepatitis B and C virus infection. Winer, Benjamin Y, et al. “In vivo models of hepatitis B and C virus infection.”. FEBS Lett 590.13 (2016): , 590, 13, 1987-99. Web. Date Published: 2016 Jul Globally, more than 500 million individuals are chronically infected with hepatitis B (HBV), delta (HDV), and/or C (HCV) viruses, which can result in severe liver disease. Mechanistic studies of viral persistence and pathogenesis have been hampered by the scarcity of animal models. The limited species and cellular host range of HBV, HDV, and HCV, which robustly infect only humans and chimpanzees, have posed challenges for creating such animal models. In this review, we will discuss the barriers to interspecies transmission and the progress that has been made in our understanding of the HBV, HDV, and HCV life cycles. Additionally, we will highlight a variety of approaches that overcome these barriers and thus facilitate in vivo studies of these hepatotropic viruses. Alternate Journal: FEBS Lett.
__label__pos
0.814874
VATL courses English Grammar and Syntax: Topics Welcome to the English course.  This course is foundational to all the other courses you will take.  Once you get a good grasp of English, you will be in a much better position to master the original languages In this lesson, you will learn how to: 1. find nouns in a sentence; 2. identify a noun's case; 3. identify a noun's function; 4. identify a noun's gender and number. In this lesson, you will learn how to: 1. identify adjectives by asking the adjectival questions; 2. identify an adjective's position; 3. identify an adjective's degree. A pronoun is a word used in place of a noun. Note this sentence: "Sarah asked William to lend Sarah William's book." This is awkward. To avoid this kind of repetition, we use pronouns. "Sarah asked William to lend her his book."  In this lesson, you will learn how to: 1. distinguish between the seven different kinds of pronouns; 2. find a pronoun's antecedent; 3. find the person, gender, and number of a personal pronoun; 4. know the difference between the different kinds of demonstrative pronouns. In this lesson, you will learn how to: 1. find a verb's tense, voice, mood, person, gender, and number; 2. distinguish transitive verbs from intransitive; 3. identify a verb's agent. In this lesson, you will learn how to: 1. find adverbs; 2. find the verb they are modifying; 3. know the adverbial questions. In this lesson, you will learn how to: 1. distinguish between a verb and a verbal; 2. find participles, gerunds, and infinitives; 3. distinguish between gerunds and participles; 4. know the function of all the verbals; 5. find verbal phrases. In this lesson, you will learn how to: 1. find a preposition's object; 2. find a prepositional phrase; 3. determine the function of a prepositional phrase; 4. distinguish between phrases and clauses; Sentence diagramming is an incredible tool for Christians.  As Christians, we confess that the truth about ourselves, about our world, and about our future is laid out for us in the sentences of Scripture.  Exegesis is the process whereby we unlock that meaning and begin the process of putting it to work in our lives.  Sentence diagramming is a tool that enables us to engage in exegesis with confidence.  When you diagram a sentence, you are analyzing a sentence in terms of its syntax, and syntax is the most important part of exegesis.  Diagramming is a tool you can use to make visible a sentence's syntax. In this lesson, you will learn how to: • identify a conjunction; • recognize whether a conjunction is joining a word, phrase, or clause; • distinguish between subordinating, correlative, and coordinating conjunctions. A clause is a group of words that does have its own subject and verb (unlike a phrase). Virtual Academy of Teaching and Learning E-mail: [email protected] Phone: (+374 10) 57 06 77 Copyright © 2022 - PRINTeL, All Rights Reserved
__label__pos
1
I'm thinking of a little something where a hundred Sun-sized stars would dot the sky (night or day; the Sun just blocks other stars at day after all). These stars are scattered across the sky, but they are all exactly half a light year away from Earth. So what would the sky look like in this scenario, aside from the fact that there will be new constellations? Will the nights be brighter? Will these stars be visible at day? Any interesting deviations from the typical sky? • 2 $\begingroup$ If you want to be able to plug in some numbers into a calculator and see what you come up with in terms of light, you can use this luminosity calculator. Just plug in luminosity and distance (our sun has a luminosity of 1). For systems with a lot of stars (or for a hundred stars scattered randomly) you can basically just sum the luminosities (but you may want to divide by 2 since only half the earth can see them at a time). The moon has an apparent magnitude of -14(ish) and the sun has an apparent magnitude of -26.8. Smaller is brighter $\endgroup$ – SirTain Mar 15 '21 at 14:32 • 1 $\begingroup$ If you wanted enough starlight to make a difference to ambient illumination, consider the stellar densities in a globular cluster $\endgroup$ Mar 15 '21 at 14:36 • 1 $\begingroup$ Look at the software Universe Sandbox 2 . $\endgroup$ – JDługosz Mar 15 '21 at 14:50 • 1 $\begingroup$ You'll find links to a great many worldbuilding resources here. $\endgroup$ Mar 15 '21 at 15:06 • 1 $\begingroup$ The other answers touch on how they would be bright for a star in the sky but not spectacular to see. It is worth noting that depending on their velocity relative to your solar system, these stars might appear to move much faster than most other stars, perhaps noticeably in human timescales. Your peoples might find cultural significance in these wandering lights, especially if they're gravitationally locked to your star or neighboring ones and therefore dance cyclically. $\endgroup$ – Drake P Mar 15 '21 at 16:53 Nope, no big difference. First of all, take a look at this chart showing how the Sun look like when seen from various bodies in our solar system enter image description here 68 AU is still 0.1% of 1 light year, so every single star won't stand out that much. Below another comparison of the apparent size of the Sun. enter image description here Just for reference, again from the image, even having 100 sun-like stars at the distance of Eris, you would still get 2% of the light coming from the Sun. And they are much farther and much more feeble. To quote Zeiss Ikon Eris is only about 12 light hours away. Your stars, at 1/2 a light year, are 365 times that far, meaning about 100,000 times dimmer than the sun seen from Eris • 3 $\begingroup$ Eris is only about 12 light hours away. Your stars, at 1/2 a light year, are 365 times that far, meaning about 100,000 times dimmer than the sun seen from Eris. $\endgroup$ – Zeiss Ikon Mar 15 '21 at 14:24 • 1 $\begingroup$ "Nope, no big difference" assertion needs better argumentation. Your present answer positively answers a question "Would nights be anywhere as bright as days", but not, strictly speaking, any of the questions that OP is asking. $\endgroup$ – Alexander Mar 15 '21 at 17:36 • 1 $\begingroup$ Thank you for reminding me how vast space really is. I might want to make them closer for a significant effect, but seeing a hundred more bright spots in the sky at night is amazing to think about. $\endgroup$ Mar 15 '21 at 19:36 • 1 $\begingroup$ In conclusion (correct me if I'm wrong): 1. So what would the sky look like in this scenario, aside from the fact that there will be new constellations? An almost unrecognizable difference for the human eye; 2. Will the nights be brighter? Not noticeably when walking around; 3. Will these stars be visible at day? Only if they're clustered together and therefore on a collision course with each other - same distance and location; 4. Any interesting deviations from the typical sky? Just a few dots here and there? $\endgroup$ – Mikey Mar 15 '21 at 23:10 • 1 $\begingroup$ since all other stars are further away than half a light year, and most of them are not mayn orders of magnitude larger than the sun, this answer would prove conclusivly that the night sky is always empty. Something not apparent when looking up on a clear night ... $\endgroup$ – mart Mar 16 '21 at 15:27 You would have a sky filled with 101 Venuses The apparent magnitude of these stars would be just a bit brighter than Venus at its best, less than 10% brighter. Of course, they would appear in the day, and twilight, and in the darkest night. The nighttime view would be quite spectacular, about similar to looking at the sky over JFK airport during a baggage worker's strike. Tools: https://www.omnicalculator.com/physics/luminosity • 2 $\begingroup$ Nice! Thank you. Thinking about making them closer for a brighter lightshow. $\endgroup$ Mar 15 '21 at 19:38 • 2 $\begingroup$ @DarkPhantom2317 Venus is pretty bright. Having an average of 50 of them visible, maybe 35 high in the sky, will make for quite the lightshow! Remember that Venus is bright enough (just barely) to cast a shadow that is visible to the human eye. And that is astronomical twilight, when the sky is significantly lightened by the nearby sunlight already. Your stars will be high in the sky, and fully visible, at midnight. Not as bright in total as the moon (even a smallish crescent moon), but still a significant lightsource. maybe 200 times as bright as a moonless dark night for us. $\endgroup$ – PcMan Mar 15 '21 at 20:41 • 1 $\begingroup$ How did you calculate the apparent brightness of these fictional stars? Would make a great addition to this answer. $\endgroup$ – Harabeck Mar 16 '21 at 16:29 • $\begingroup$ @Harabeck added link to apparent luminosity calculator $\endgroup$ – PcMan Mar 16 '21 at 18:21 • $\begingroup$ Venus and Jupiter are not visible during the day, only during twilight. So if they're only a bit brighter, chances are they won't be visible during the day. At night, they will be the brightest stars in the sky, of course. $\endgroup$ – mcv Mar 17 '21 at 15:55 I'm surprised no one has given this a more direct quantitative treatment, so here goes. The perceived brightness at the Earth's surface is related to illuminance, or luminous flux per unit area. Direct sunlight at noon is about 100,000 lux. Illuminance from a source follows the inverse square law: it is inversely proportional to the square of the distance to the source. A star at the zenith at a distance of 0.5 light years (about 31,620 AU, so 31,620 times further away than the Sun) therefore causes an illuminance 1/316202 ≈ 1/1,000,000,000 = 10-9 times that of the Sun. We can therefore expect about 100,000 / 109 = 0.0001 lux from one such star. Illuminance from multiple sources is additive, so 100 such stars (again, at the zenith) would give us 0.01 lux. The stars are stated to be randomly distributed around the Earth, so they won't all be at the zenith at the same time. Illuminance is proportional to the cosine of the angle between the surface normal and the line between the surface and the source. The "average" illuminance of a collection of randomly distributed sources around a circle or sphere, relative to the illuminance if they were all at the zenith, is 1/π (this is the integral of the positive part of cos(θ), from θ = -π/2 to π/2, divided by 2π). So the total luminance of 100 randomly distributed stars is about 0.01/π ≈ 0.003 lux. For comparison, the full moon gives us between 0.05 and 0.1 lux, and starlight gives us 0.0001 lux. Note that human perception of brightness is logarithmic (proportional to the logarithm of the illuminance). There would therefore be no perceptible difference in brightness during the day, but the difference at night would often be noticeable, especially during a new moon or when the moon is below the horizon. As for whether the stars would be visible during the day, maybe, just barely: the brightness of a single star at this distance corresponds to an apparent magnitude of -26.74 (Sun) + 5 log100 1,000,000,000 = -4.24. This is comparable to the mean brightness of Venus (-4.14) which is right at the edge of daytime visibility. • 1 $\begingroup$ Great answer, thanks for laying out the math like that. $\endgroup$ – Harabeck Mar 19 '21 at 19:02 You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged .
__label__pos
0.87237
Moles of Potassium hydrogen phosphate potassium hydrogen phosphate: convert moles to volume and weight Volume of 1 mole of Potassium hydrogen phosphate foot³0oil barrel0 Imperial gallon0.02US cup0.3 inch³4.36US fluid ounce2.41 liter0.07US gallon0.02 meter³7.14 × 10-5US pint0.15 metric cup0.29US quart0.08 metric tablespoon4.76US tablespoon4.83 metric teaspoon14.28US teaspoon14.48 Weight of 1 mole of Potassium hydrogen phosphate milligram174 176  The entered amount of Potassium hydrogen phosphate in various units of amount of substance centimole100micromole1 000 000 decimole10millimole1 000 gigamole1 × 10-9mole1 kilogram-mole0nanomole1 000 000 000 kilomole0picomole1 000 000 000 000 megamole1 × 10-6pound-mole0 Foods, Nutrients and Calories ANIMAL COOKIES, UPC: 099482417376 contain(s) 467 calories per 100 grams (≈3.53 ounces)  [ price ] 32545 foods that contain Niacin.  List of these foods starting with the highest contents of Niacin and the lowest contents of Niacin Gravels, Substances and Oils Tungstic oxide [WO3  or  O3W] weighs 7 160 kg/m³ (446.9842 lb/ft³)  [ weight to volume | volume to weight | price | mole to volume and weight | mass and molar concentration | density ] Volume to weightweight to volume and cost conversions for Refrigerant R-407C, liquid (R407C) with temperature in the range of -51.12°C (-60.016°F) to 60°C (140°F) Weights and Measurements A square millimeter (mm²) is a derived metric SI (System International) measurement unit of area with sides equal to one millimeter (1 mm)  Inductance is an electromagnetic property of a conductor to resist a change in the electric current per unit of time, as a response to induced electric potential on the conductor. oz/pt to oz t/fl.oz conversion table, oz/pt to oz t/fl.oz unit converter or convert between all units of density measurement. Annulus and sector of an annulus calculator
__label__pos
0.993754
The Edo Period in Japan seems pretty much a feminist’s nightmare. Samurai rule and strict societal boundaries confined women within the neo-Confucianistic bonds of a deeply patriarchal society. Although women’s rights and power took a gigantic step backward during this time of military rule, women’s voices were not completely silenced. Edo women did enjoy some measure of freedom. LITERARY CREATIONS ON THE ROAD: Women’s Travel Diaries in Early Modern Japan, by Keiko Shiba, translated with notes by Motoko Ezaki. University Press of America, 2012, 156 pp., $28.95 (paperback) Historian Keiko Shiba’s work gathers travel diaries from women throughout the Tokugawa shogunate and further reveals the liberties of thought still present in the feminine mind. Motoko Ezaki, coordinator of the Japanese program at Occidental College in Los Angeles, offers a comprehensive English translation to Shiba’s collection, with additional notes to further explain the historical and cultural significance of the many travel diaries. “Literary Creations on the Road” is organized in four parts: the reasons women traveled, detailed aspects of the journey, cultural and philosophical background and the effects of the journey. Each section contains excerpts from various diaries, showcasing a wide range of Tokugawa womanhood. Shiba drew from close to 200 diaries for her research, and the entries reflect the spectrum of Tokugawa society from holiday-makers to heroes. Accounts from women who chose travel for pleasure or pilgrimage, through to refugees traveling to flee the Boshin War, or the account of Tsuchimikado Fujiko, an emissary for Princess Kazunomiya, who traveled through unstable times to help stop a full scale attack on Edo at the start of the Meiji Era. Each entry provides an important historical perspective, always through the eyes and voice of a woman. Translator Ezaki ends each chapter with a comprehensive collection of notes further highlighting the historical importance of these real-life accounts of life for Tokugawa women. Stylistically Ezaki’s commentaries are direct, clearly elucidating the historical or cultural references found in the diaries. The entries themselves, like much of Japanese literature, are poetically descriptive, with many of the women travelers following the tradition of haibun, or poetry mixed with prose. At the same time they read like modern travel literature with concise details of the where and what of new discovery. These concrete diaries invariably contain abstract wonderings, and the literary reader will delight to uncover both practical and poetic reactions to life on the road. Along the way, the reader learns many interesting side-facts about Tokugawa life — how wives dreaded the long, dangerous journey to Edo from the outlying provinces; how the education of women differed between classes; how certain shrines were considered off-limits for women, “with the possibility of defilement.” These diaries, therefore, provide a window into the larger picture of Tokugawa life, peering beyond the shiny hard steel of samurai armor into the softer recesses of everyday living. Certainly the domineering but peaceful world of the shogun harshly dictated boundaries for all its citizens, and many of the diaries speak of yearning and heartache. Still, the women who captured their thoughts on paper triumphed in the human spirit, finding much to appreciate along the open road. The words from the women who traveled the thoroughfares and byways of Tokugawa Japan afford an important glimpse into the world of the shogunate. Kris Kosaka teaches literature and writing at Hokkaido International School. By subscribing, you can help us get the story right.
__label__pos
0.857583
Freedom speech is considered as one of the fundamental rights of citizens according to Constitution of the People’s Republic of Bangladesh-illustrate and explain  1. Introduction Freedom speech is considered as one of the fundamental rights of citizens according to Constitution of the People’s Republic of Bangladesh. The right to freedom of expression and speech endorses the right of every legal personality to express their opinions and views liberally. It is an essential element of society which should be prioritized to the maximum extent in order to ensure its role in democracy and political life. On the other hand, there are certain forms of speech and expression which must be restricted in order to protect human rights. In such cases, the right to freedom of speech and expression is limited by a fine balancing act. The right to freedom of speech is significantly perceived by individuals, legal entities, mass media, journalists, information service providers, telecommunication corporations etc. Considering the various aspect of such right, restriction may result in violation of fundamental rights. 1.1 Freedom of Speech and Expression History in Bangladesh Before the independence of Bangladesh the right to exercise freedom of speech was regulated by the external government. The Indian Press act was passed providing the local government to impose necessary regulations in 1931[1]. Another significant declaration took place during the Pakistani Government in 1965, the Defiance of Pakistan Ordinance. The ordinance was issued to restrict the freedom of mass and print media specially. Moreover, The Daily Ittefaq and New Nation press were penalized for criticizing the Pakistani government. Just before the independence during the liberation war, four daily newspapers and periodicals were found and their ownership was consigned with the government in power. The government formally authorized the right to freedom of speech right after the independence in 1972. In this new constitution, the right of every citizen of freedom of speech and expression and freedom of the press was guaranteed. In 1973, The Printing Presses and Publication (Declaration and Registration) Act was endorsed. In 2001, The Dramatic Performance Act of 1876 was repealed and the copyright ordinance of 1962 was revised and became law in 2000. On September 16th, the Information Minister said that future Legislation will include an act titled ‘Television Network (Management and Control) Act 2002. The law may provide the commercial activities of Cable operators and distributors. Therefore, the regulation evolved rapidly regarding freedom of speech leaving space for restriction imposed since the independence of mass media in Bangladesh is still questionable. 2. Declarations Constitution of People’s Republic of Bangladesh includes specific legislations in Article 39. This article upholds remarkable significance in terms of regulating the right to freedom of speech and expression. In the article 39 (1, 2) of chapter-3 of this constitution these have been stated: 39 (1) Freedom of thought and conscience is guaranteed. Freedom of thought and conscience, and of speech (b) freedom of the press, are guaranteed. The article entails the implication of right to freedom of speech and expression subject to public and foreign interest. According to the article, restrictions can be imposed to ensure public security against any offensive or immoral actions by legislative authority. While Article 39 addresses freedom of the press, Article 43 provides protection of privacy to the citizen. Every citizen shall have the right, subject to any reasonable restrictions imposed by law in the interests of the security of the State, public order, public morality or public health- (a) To be secured in his home against entry, search and seizure; and (b) To the privacy of his correspondence and other means of communication. The Constitution not only holds the right to ensure freedom of speech but also protects public from moral hazards and interference in expression of opinions. 2.1 International Declaration Considering freedom of speech and expression an inherent human right to voice opinions, it should be internationally accepted and regardless of any punishment or censorship. The United NationsUniversal Declaration of Human Rights, adopted in 1948, provides, in Article 19, that: The International Covenant on Civil and Political Rights, also known by its shortening ICCPR, was activated in 1976. It expands the ideologies laid out in UDHR and is legally binding on all states who have signed and ratified its provisions. Article 19 of the ICCPR specifies that: (1)         Everyone shall have the right to hold opinions without interference. 2.2 Implication of ICCPR International Covenant on Civil and Political Rights elaborated the implication of International Law regarding freedom of speech and expression. Freedom not confined only in linguistic form but also printed or written form of expression. Once again, the article emphasizes on protection public health and morality. Therefore, nations are privileged to impose certain restrictions only subject to national security, respect and reputation of legal personalities. Therefore, freedom of speech and expression has been promoted globally by United Nations, though many countries impose restriction pressurized by political and legal environment. 3. Aspects of Freedom of Speech According to the Constitution, Government can bring charges against any form of expression or speech that violates public, state or national security. Bangladesh being an Islamic country, any speech or expression detrimental towards the religious belief will be subject to judgment law. Apart from the previously mentioned written and verbal form, freedom of speech is also observed in different means of internet. 3.1 Legal Aspect Any expression in such advanced media against the national security would also be treated according to the law. As a result of such imposed restrictions, there is profound possessiveness amongst the government in resolving the conflict between freedom of speech and defamation. Criticism of the political actions and figures are often handled strongly. Therefore, such actions have ranked Bangladesh amongst the countries with less or no freedom of speech and expression[4]. 3.2 Moral and Media Aspect Authorities restrict official access to journalists from certain publications. The government remained sensitive to international inspection; foreign publications are subject to censorship, while foreign journalists and press freedom promoters have encountered increasing difficulties in obtaining visas to enter Bangladesh and are put under observation while in the country. Information censorship laws are more restricted as the government has passed legislation. The legislation is declared to protect the “public interest” from any sort of information abuse.[5] 4. Actions against the Violation of Freedom of Speech The People’s Republic of Bangladesh is not completely free in terms of exercising the freedom of speech and expression. Based on the Constitution there are reasonable restrictions found in Article 39: (a) Against the interest of security of the State (b) Against the friendly relation with foreign state (c) Desecration of public order (d) Desecration of civility or morality (e) Anything disrespectful to the court (f) Defamation or provocation to any offense The restrictions tend have lower impact on public and social life except defamation. Defamation of the mass media and the journalists has always been in the limelight of legal system. 4.1 Defamation The constituting law does not define defamation in a proper manner. In a general perception, defamation is simply harming one’s reputation or the theft of it. The legislation against defamation manner is not mentioned specifically in binding laws. Thus, there is the conflict between freedom of speech and defamation. According to Clerk and Lindsell – When a person directly communicates to the mind of another, matter untrue, to disparage the reputation of a third person, he is on the face of it guilty of a legal wrong, namely defamation.[6] Defamations can be in the form of words, signs or visual representations which can denounce the reputation of a person exposing him/her to contempt, ridicule or public hatred and this way can lower the prestige of the person in the society. In simple words, when a person does anything by spoken or written words causing substantial damage to the reputation of an legal person, entity, state or government, it is simply know as defamation. 4.2 Defamation in English Law In English Law libel and slander are two principles that specifydefamation under any circumstances. A defamatory statement in writing, film, broadcasting, or other permanent form is libel. On the other hand orally or temporary defamatory statement is called slander. Libel in English is not only a civil wrong but also a criminal wrong. 4.3 Defamation in Bangladeshi Law In case of Bangladesh there is no statutory law concerning defamation except the amendment in chapter 21 of the Penal Code.As a result, without any specific principles the violation of freedom of speech is often misunderstood and government end up taking actions against it. 4.4 Few Significant Scenarios The case of journalist and writer Salah UddinShoaibChoudhury, who was arrested in 2003 and prohibited from attending a conference in Israel, charged with sedition, and spent 17 months in jail before being released on bail in 2005, remained open throughout 2006 as he awaited trial. The Government asked ProthomAlo publishers to suspend the publication of the newspaper’s weekly satire supplement Alpin, as e result of violation against the religious sentiment through conflicting with freedom of speech. Social networking website Facebook was suspended for a week on 29 May, 2010 due to violation of the right to freedom speech and expression against the government and national security. Therefore, government’s strict regulation prohibits every individual from defamation. In case of Taslima Nasrin, an open minded journalist and writer, was banished from the country by the reigning government. The journalist was accused of expressing opinion against the religious sentiment and public security, and was forced to leave the country. 4.5 Journalism under threat The freedom of press is required for the purpose of establishing smooth democracy in the society. The scenario is totally different in Bangladesh. Journalists are often harassed and violently attacked by a range of actors including organized crime groups, political parties and their faction, government authorities, and leftist and Islamist militant groups. Generally they are threatened because of their coverage of corruption, criminal activity, political violence, the rise of Islamic fundamentalism, or human rights abuses. Police brutality towards photographers in different protests and political events are also common form of actions against violations. 5. Recommendations The constitution doesn’t specify the criteria upon which any action would be held against the freedom of speech. There must be specified instructions in terms of content-based, non-content based restrictions. Commercial speech, symbolic speech, compelled speech and internet restriction should also be clearly specified including relevant penalties. 5.1 Content-based Restrictions According to Justice Holmes – The most stringent protection of free speech would not protect a man in falsely shouting fire in a theater and causing a panic…. The question in every case is whether the words used … create a clear and present danger.[7] The content-based restriction simply refers to utterance of “fighting” words, those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.[8] The government should impose restrictions against the personal abuse tend to be made by utterance or expression. 5.2 Non-Content-Based Restrictions Two types of speech restriction can be imposed under the Non-Content based restrictions. These are: (1) Time, place or manner restrictions and (2) Incidental restrictions. The restrictions are imposed to combat the crime directly occurring from the aggressive content of speech or expression, which is considered as secondary efforts. [9] 5.3 Commercial Speech Restrictions Commercial speech is “speech that proposes a commercial transaction.[10] The books and films that are sold for profit are not a part of this commercial speech. Commercial speech, however, may be banned if it is false or misleading, or if it advertises an illegal product or service. Even if fits in none of these categories, the government should regulate it more than it may regulate fully protected speech. The right to freedom of speech and expression is not totally liberal in a democratic country like Bangladesh. The government of People’s Republic of Bangladesh must protect and ensure maximum freedom of speech in order to smoothly operate the political system and successful democracy. Mass media should be given their preferred rights to express opinion and criticize the actions independently. Unnecessary harassment of journalists and media figures would not only misinterpret the inherent meaning of freedom of speech but also deprive the nation from free flow of information in such a global village. Therefore, imposed restrictions must be relaxed significantly stating the specified difference between freedom of speech and defamation. The right to freedom of speech expression is not confined in verbal, written or printed form; it is the right to be achieved, honored and most importantly spread to every legal personality. Constitution of Bangladesh: Part II: Fundamental Principles of State Policy”. Chief Adviser’s Office. Prime Minister’s Office. Government of the People’s Republic of Bangladesh bdnews24. (2010, October 4). HC rules Bangladesh secular . Retrieved October 11, 2011, from  Bangladesh, G. o. (n.d.). Index of Constitution. Retrieved October 2, 2011, from Timeline: a history of free speechThe Guardian. February 5, 2006. 13. John Milton (1608–1674). Areopagitica; A Speech of Mr. John Milton for the Liberty of Unlicenc’d Printing, to the Parlament of England. London: [s.n. 1644″]. Early Censorship in England, “Heresy and Error”: The Ecclesiastical Censorship of Books, 1400-1800. Bridwell Library. Exhibition September 20 – December 17, 2000. Retrieved 26 October 2011. Sanders, Karen (2003). Ethics & Journalism. Sage. pp. 67 Office of the United Nations High Commissioner for Human Rights. “List of Declarations and Reservations to the International Covenant on Civil and Political Rights” Banglapedia. (2006). Constitutional Amendments. Retrieved September 29, 2011, from Sadurski, Wojciech (2001). Freedom of Speech and Its Limits. Law and Philosophy Library. 38. p. 179. Cohen, H. (2009). Freedom of Speech and Press: Exceptions to the First Amendment. Congressional Research Service, 3-12. align=”left” size=”1″ /> [2]General Assembly of the United Nations (1948-12-10).“Universal Declaration of Human Rights” (in English/French) (pdf). pp. 4–5. Retrieved 2007-05-06. [4] Press Freedom Report. 2001-09-22. Retrieved 2011-10-04. [5] The Daily Star Article. 2010-09-12. Retrieved 2011-10-04. [6]Clerk&Lindsell on Torts, Supplement 1. Anthony M. Dugdale. Wildy Logo. 2010-10-22. Retrieved 2011-09-28. [8] Chaplinsky v. New Hampshire, 315 U.S. 568, 572 (1942). Campus “hate speech” prohibitions at public colleges (the First Amendment does not apply to private colleges) are apparently unconstitutional, even as applied to fighting words, if they cover only certain types of hate speech, such as speech based on racial hatred. This conclusion is based on the cross-burning case, R.A.V. v. City of St. Paul, infra note 138. [9] For additional information on this subject, see CRS Report 95-804, Obscenity and Indecency: Constitutional Principles and Federal Statutes, by Henry Cohen. [10] Board of Trustees of the State University of New York v. Fox, 492 U.S. 469, 482 (1989) (emphasis in original). In Nike, Inc. v. Kasky, 45 P.3d 243 (2002), cert. dismissed, 539 U.S. 654 (2003),
__label__pos
0.912617
P-Shaped Conservatories Peterborough The P-Shaped conservatory is the perfect style for larger detached properties The P-Shaped conservatory is made from a combination between a Lean-to conservatory with a Victorian conservatory, with either three or five faceted. When Georgian-style conservatory is combined with the Lean-to, this becomes an L-shaped style. The P-shaped Conservatory can provide two separate living spaces. Normally combined as a lounge and a dining area or with a children’s play area. These conservatories are better suited for large detached properties due to their larger impressive structure. These P-shaped conservatories can be designed in either a three or a five segment style with the roof return being either a hipped conservatory roof or a Lean-to. P-Shaped Conservatories P-Shaped Conservatory Colour Options P Shapes Conservatory colour choice Peterborough
__label__pos
0.714132
Manage Your Workforce Do you have a mobile workforce in charge of inspecting, maintaining or interacting with assets? Watering trees, inspecting signs, putting up flyers, or targeted fundraising activities? If so, there is a new technology from Esri called Workforce that has your name all over it. Beginning with asset data hosted in ArcGIS Online, a series of filters and queries can be used to determine which assets need to be assigned to your workforce. These might be assets over a certain age due for inspection, or locations where an issue has been reported. Assignments are made to named individuals in your ArcGIS organization, and contain information including location, priority level and due dates. workforce project The assignments are then pushed to the mobile devices of your workforce, where they accept, navigate to, and complete the assignments. This may entail entering in notes, or taking pictures, and updating the assignment status. This information is sent back to the centralized cloud-based ArcGIS Online database in near real-time in a connected environment, or upon data synchronization if out of cell / wi-fi range. workforce phones Back in the office, the workforce administrator can use the information to check worker progress, see current worker status and location, and re-allocate resources if needed. The information is easily aggregated and displayed using a dashboard or configurable web application. If you’d like a hand getting set up, let us know, and SymGEO will put Workforce to work for you!
__label__pos
0.905692
+86-027-86657186          [email protected]       Safe, Quick, Exclusive Each batch is analyzed for purity and quality. Tracking provided for each order. We also keep eyes on customers' packages, warmly remind when packages are approaching to customers. Multiple special shipping lines, with experienced forwarders, ensure you receive your package safely and quickly. Home » News » What are the biochemical enzymes and coenzymes of NADh+? What are the biochemical enzymes and coenzymes of NADh+? The biochemical enzymes and coenzymes of NADh+ are NADH. NADH is the reduced form of NAD+, and NAD+ is the oxidized form of NADH. So let’s get to know NADH below. Here is the content list: l What is NADH? l What are the roles of biochemical enzymes and coenzymes? l What are the roles of NADH? What is NADH? NADH refers to reduced coenzyme I, which is produced in the first and second stages of aerobic respiration and reacts with oxygen (reduced oxygen) in the third stage to produce water. It is found widely in nature and is involved in numerous enzymatic reactions in which it serves as an electron carrier by being alternately oxidized (NAD+) and reduced (NADH). NADH is the reduced form of Coenzyme 1 NAD, and NAD+ is its oxidized form. In the redox reaction, NADH acts as a donor of hydrogen and electrons, and NAD+ acts as an acceptor of hydrogen and electrons, participating in physiological processes such as respiration, photosynthesis, and alcohol metabolism. They participate in life activities as coenzymes of many redox reactions in organisms and transform into each other. Knowing that NADH is an enzyme, what is the role of biochemical enzymes and enzymes? Let’s take a look. What are the roles of biochemical enzymes and coenzymes? Biochemical enzymes and coenzymes are proteins or RNAs that are produced by living cells and are highly specific and highly catalytic for their substrates. The catalysis of an enzyme depends on the integrity of the primary structure and spatial structure of the enzyme molecule. Biochemical enzymes and coenzymes are extremely important types of biocatalyst. It can change the acidic physique, regulate gastrointestinal function, balance nutrient absorption, eliminate toxins from the body, and improve immunity. Taking Biochemical enzymes regularly can remove toxins and garbage in the blood and intestines, unblock the blood vessels and improve the intestines. Function to achieve the purpose of two-way adjustment. Due to the lack of enzyme content in the body due to various reasons such as age, the body's detoxification system will become obstructed, leading to the accumulation of toxins in the body. Therefore, the body's detoxification must start with supplementing enzymes and using enzymes to supplement the body to achieve the effect of detoxification from the inside. Conditioning is much more effective. What are the health benefits of using products containing NADH? NADH as a coenzyme is essential to life. 1. Improve energy levels. NADH is not only an important coenzyme in aerobic respiration, but the [H] of NADH also carries a lot of energy. Studies have confirmed that the use of NADH outside the cell can promote the increase of intracellular ATP levels, indicating that NADH can penetrate the cell membrane and increase the energy level in the cell. From a macro point of view, exogenous supplementation of NADH helps restore physical strength and enhance appetite. 2. Protect cells. NADH can react with free radicals to inhibit lipid peroxidation and protect the mitochondrial membrane and mitochondrial function. Therefore, injection or oral administration of NADH has been clinically applied to improve cardiovascular and cerebrovascular diseases, adjuvant cancer radiotherapy and chemotherapy, and other fields. NADH for topical application has been proven to be effective in the treatment of rosacea and contact dermatitis. 3. Promote the production of neurotransmitters. Studies have shown that NADH significantly promotes the production of the neurotransmitter dopamine, which is a chemical signal essential for short-term memory, involuntary movement, muscle tone, and spontaneous physical responses. NADH can also promote the biosynthesis of norepinephrine and serotonin and shows good application potential for alleviating depression and Alzheimer's disease. Mulei (Wuhan) New Material Technology Co. Ltd., has been developing in the field of biochemistry for many years and conducted a large number of tests before leaving the factory to ensure quality. If you are going to purchase biochemical enzymes and coenzymes of NADh+, you can consider using our cost-effective products. Office line:  +86-027-86657186 Fax: +86-027-86657180 Email:   [email protected] Webshop:  www.whmulei.com Technology by leadong
__label__pos
0.73739
Facts — protect the environment RSS What is Agroforestry? With the population on Earth growing at such a rapid pace, food availability is becoming scarce. Although conventional farming aims at producing enough food to feed these people, it is unsustainable and damaging our planet. Instead of exploiting the soil and other resources, we need to start moving towards a more sustainable form of agriculture. Agroforestry - also referred to as permaculture and regenerative agriculture - is agriculture of the future. It is a method of farming that integrates more perennial plants than annual plants and uses many different species in one area together to utilize all layers of the soil, while simultaneously regenerating it. This not only increases the biodiversity of the environment, but also produces better quality produce... Leer más Why Are Trees So Important? We always hear that you should plant more trees, save the forests, and keep our planet green. Sometimes brands market by saying they plant a tree after every order or simply donate to programs that plant trees regularly. But why are tree so important really? Well, for a start, they provide us with oxygen - which is absolutely necessary for our survival - as well as store carbon, stabilize soil, and provide habitat and shelter for animals. Trees are the largest plants on the planet and are actually the longest living species on the planet. Recently, the emphasis on how important trees really are has become stronger and today we’ll tell you exactly why. Scientists have still not been able... Leer más
__label__pos
0.710729
Music: Nineteenth century music Photo courtesy of Evgeny Ostroushko I am about to put the boom in boomer because we are about to boom straight to the past. This past being the ‘70s and ‘90s — a high point in music as most people would say, though for the point I speak of, we have to go farther back than most think. While all the worship to the ‘70s and ‘90s eras are well-deserved, perhaps we need to go further back, straight back to the 19th century for great music. The 19th century was a glorious time where tuberculosis was a fashion statement, a time before all this “lyrics business” and where music was purely the sounds of instruments. Simply put, it was magical. There are no lyrics in these old classics or substance in metaphorical lines. However, contrary to popular belief these are not flaws. Lyrics are overrated when addressing these classics. The lack of lyricism makes 19th century music inspire emotion based solely on sound. In its essence it is an impactful and auditory form of emotion used to invoke emotion.  Old music and their lyric-lacking tunes clearly express and invoke a specific feeling in the audience. The sound of grandeur from the notes of Messiah gives off a “larger-than-life” feel. In the Hall of the Mountain King gives the general feeling of a never-ending cycle of screw ups and failures. It’s sole emotional sounds, void of words, express their unique charm.  Of course, to bring something up one must put another down; to explain another of the merits of olden notes one must drag modern beats through the mud. The classical songs of yesteryear’s yesteryear have a distinct lack of lyrics which many call out as negative. Olden notes do not have any serious metaphors to demonstrate leaving their existence simple: to call upon the listeners’ emotions. The sense of a battlefield from Overture 1812 and feelings of majestic mystery from Toreador March. The pure emotions in these notes heavily contrast the metaphorical beats of the common age in a good way. Their notes convey an emotion through music, be it the tension of battle or dazzling majesty displaying the simple focus of these emotions. The lack of lyrics, metaphors and modern techniques does not detract from the greatness of the old music but emphasizes its strengths. The lack of complicated traits leaves nothing but the soul of a song. These oldies portray emotions in ways these young songs cannot. Passing certain feelings instead of ideas is a fantastic concept, proving they are top tier songs compared to many other newer sounds.
__label__pos
0.791362
Lamer Gamers Podcast {{ selectedEpisode.title }} {{ selectedEpisode.title }} By {{ }} Broadcast by simplyTravis and Rowdy5000 discuss the Nindie Direct, the new world of games sales online (Steam vs the world), censorship in gaming stores (or lack thereof), and gaming addiction. What is Lamer Gamers Podcast? Hello Lamer Gamers! We host a gaming podcast where our main quest is to have fun and give you a combination of gaming news, views, rumors, along with special BONUS POINTS topic where we reach out to the Lamer Community to gather other points of view on hot gaming topics. Now with ridiculous gaming parody commercials! Podcasts that don't follow these two formats are considered SIDEQUEST episodes with a specific focus such as rantings, ravings, reviews, spoilercasts, "nerd alerts", Top 10's, games vs history, and more!
__label__pos
0.719643
Often Asked How Many Passes Does A Bubble Sort Need? Jan 14, 2022 How Many Passes Does A Bubble Sort Need? You will find n 1 products left to sort, and therefore you will see n2 pairs. Since each pass places the following largest value in position, the entire quantity of passes necessary is going to be n1. What is a pass in bubble sort? A bubble sort formula experiences a summary of data numerous occasions, evaluating two products which are alongside to determine what has run out of order. Every time the formula experiences their email list it’s known as a ‘pass’. How many iterations does a bubble sort have? The Bubble Sort formula utilizes two loops: an outer loop to iterate over each aspect in the input list, as well as an inner loop to iterate, compare and exchange a set of values within the list. The interior loop takes (N-1) iterations as the outer loop takes N iterations. How many passes does selection sort require? Question: The number of passes and iterations requires in selection sort and bubble sort? Answer: N-1 passes, N – quantity of elements. How many passes are required to sort a list with 5 elements? Discover the least quantity of comparisons required to sort (order) five elements and devise an formula that sorts these components by using this quantity of comparisons. Solution: You will find 5! = 120 possible outcomes. Therefore a binary tree for that sorting procedure may have a minimum of 7 levels. How many passes are needed to sort elements and array using bubble sort? Note: Bubble sort with n element needed n – 1 passes. How many passes does an insertion sort algorithm consist of? The number of passes does an insertion sort formula contain? Explanation: When given a range of N elements, an insertion formula includes N-1 passes. How many iterations would be required to sort the following array using bubble sort digits = 11 12 14 13? Explanation: Only Two elements within the given array aren’t sorted, hence only two iterations are needed to sort them. What will be the number of passes to sort the elements 14 12/16 6 3 10 using insertion sort? What would be the quantity of passes to sort the weather using insertion sort? Explanation: The amount of passes is offered by N-1. Here, N=6. Explanation: After swapping elements within the second pass, the array may be like, 8, 34, 64, 51, 32, 21. How do you calculate bubble sort? The formula runs the following: 1. Consider the first number within the list. 2. Compare the present number using the next number. 3. May be the next number smaller sized compared to current number? 4. Move to another number along within the list making this the present number. 5. Repeat from step two before the last number within the list continues to be arrived at. How many swaps are needed to sort the numbers using bubble sort algorithm in the average case? In average situation, bubble sort may need (n/2) passes and O(n) comparisons for every pass. Hence, the typical situation time complexity of bubble sort is O(n/2 x n) = (n 2 ). How many iterations does insertion sort? It requires one iteration to construct a sorted sublist of length 1, 2 iterations to construct a sorted sublist of length two and lastly n-1 iterations to construct the ultimate list. How many passes are required to sort a file of size n by selection sort method? This method continues and needs n1 passes to sort n products, because the final item should be in position following the (n1) st pass. How many passes will it take in all for selection sort to sort this array? 12) The number of passes does it consume all for Selection Sort to sort this array? 1 pass, it might still make 6 passes!)) How many times does a selection sort run? The loop in indexOfMinimum will certainly make n2 + n/2 iterations, whatever the input. Therefore, we are able to state that selection sort runs in (n2) time in every case.
__label__pos
0.993934
Click or tap here to find out how this works Stuck on a crossword puzzle answer? e.g. ???daddle  /  coldnurse Definitions of: ARMOZINE (n.) A thick plain silk, generally black, and used for clerical. anagrams of:armozine (v. t.) To Latinize; to fill with Latin words or idioms. (v. t.) To convert to the Roman Catholic religion. (v. i.) To use Latin words and idioms. (v. i.) To conform to Roman Catholic opinions, customs, or modes of speech.
__label__pos
0.725525
What is Breakdown Voltage? Leo Zimmermann Leo Zimmermann Breakdown voltage, sometimes also called dielectric strength or striking voltage, is the quantity of electrical force required to transform the electrical properties of an object. Most commonly, it is used with respect to insulators. The breakdown voltage is the minimum voltage necessary to force an insulator to conduct some amount of electricity. This measurement is meaningful only in relation to an existing system; it is the point at which a material defies the operator's expectations for how it will function. Insulators, by definition, conduct no electricity. Breakdown voltage is the point at which a material ceases to be an insulator and becomes a resistor; that is, conducts electricity at some proportion of the total current. Insulators are characterized by atoms with tightly bound electrons. The atomic forces holding these electrons in place exceeds most outside voltages that might induce electrons to flow. This force is finite, however, and can always potentially be exceeded by an external voltage, which will then cause electrons to flow at some rate through the substance. All else being equal, the quality of an insulator increases along with its breakdown voltage. Hence, porcelain, which has a dielectric strength of around 100 kilovolts per inch, is a mediocre insulator. Glass, which breaks down at 20 times the voltage that porcelain does, is much better. Diodes also have a breakdown voltage. Simple diodes are intended to conduct electricity only in one direction, referred to as "forward." At a sufficiently high voltage, however, the diode can be made to conduct electricity in "reverse." Some diodes, called avalanche diodes, are intended for this type of use. At low voltages, they conduct electricity in one direction only. At a specific point, they conduct it just as effectively in the other direction. This distinguishes them from insulators and other diodes, which, even above the breakdown level, maintain relatively high resistance. Not surprisingly, triodes and other specialized electronics components also break down at a certain point and begin to conduct electricity along the path dictated by a sufficiently high voltage. In practice, determining a material's exact breakdown voltage is difficult. A specific number attached to this quantity is not a reliable constant like a melting point; it is a statistical average. Consequently, when designing a circuit, one should make sure that its maximum voltage is well below the lowest breakdown voltage of any of the materials involved. An electrical system is only as good as the smallest breakdown voltage of one of its components. You might also Like Readers Also Love Discussion Comments @FalcoFan - Zener diodes are often used in electronic devices and equipment because they offer protection from unexpected electrical spikes. They are unusual because they allow for reverse voltage when the current approaches breakdown voltage whereas other diodes will overhead and become permanently damaged. If you use a zener diode as a shunt regulator will it prevent damage because of breakdown voltage? Post your comments Forgot password? • Worker
__label__pos
0.989001
How to Teach French to Mixed-age Classes According to Amanda Speilman of Ofsted, “learning a second language can provide pupils with many wonderful opportunities and is a great discipline in itself”. While this is true, there can be challenges when implementing a languages curriculum in primary school, most notably a lack of staff expertise and time allocated to the subject. Add teaching languages across mixed-age classes into the mix and the challenge seems to be magnified again. Normally, the mix is clear-cut with a consistent Year 3/4 and Year 5/6 arrangement, however some schools could present other variations across Key Stage 2, (such as Y2/Y3, Y3/4/5/6 or all of Key Stage 2 in one class).  Questions frequently facing the mixed-age teacher are plentiful: • Should I teach independent content to the one year group within the class, then the other? • How do I ensure younger pupils are not left behind and older and more able pupils are stretched and allowed to expand their knowledge?  • How can I prevent some younger pupils from feeling demotivated, and even intimidated that their peers have more subject knowledge? (possibly meaning that they are less likely to have a go, which becomes counterproductive in a language lesson.) There is certainly a lot to consider.   Let’s take these questions one at a time. This is more difficult to do than in some other subjects where you can easily start one year group off on an independent task, whilst addressing and teaching the other year group. Many language lessons in primary will be centered around speaking and listening activities, and rightly so. Oracy skills (Kapow Primary strands: Listening and Speaking and Pronunciation) are the most important building blocks in language learning whatever the year group. Even when focusing on developing the Reading and writing strand, you will need to ensure that pupils have an existing level of oracy competence. This can make French lessons more teacher-led than many other subjects and it would be my suggestion that you teach the whole class together and that any opportunity to extend or support particular students of a particular cohort be only for a small portion of the lesson, when you are content that others are confident enough to work independently. According to the Department of Education, “Language teaching should provide the foundation for learning further languages” – the all-important building blocks, not only in vocabulary and phrases, but also the skills and confidence to master a new language. As confidence increases, so does fluency and spontaneity, the ability to communicate what they want to say, whilst all the time improving the accuracy of pronunciation. These aims set out by the Department of Education potentially support learning in a mixed-age class as there is more opportunity to repeat and rehearse vocabulary and grammatical structures when the new cohort is introduced to them for the first time. This is also backed up by the Ofsted Language Review 2021 which states that spaced learning – where knowledge is rehearsed for short periods over a longer time – can be highly beneficial compared to mass practice, which can leave the pupil feeling overwhelmed. In so doing, with careful planning and delivery, information is stored in the long-term memory, which consists of structures (schemata) where knowledge is linked or embedded with prior learning. In my opinion, children will arguably be used to the delivery of each lesson as a mixed-age cohort, as most of their learning will be conducted in this way. Therefore, language classes are likely to be no different. However, ensuring a growth mindset amongst pupils in a supportive classroom environment, perhaps using the older cohort of children as ‘teachers’ or ‘mentors’ can help younger pupils develop a have-a-go attitude to language learning. In addition to this, Kapow Primary’s scheme of work has an overarching strand ‘Language detective skills’ which encourages pupils to be detectives rather than relying on the teacher to impart new vocabulary. This is an essential technique used to decipher a new language; looking for clues such as word similarities to be able to spot the meaning and is something the children love to do. It can make learning a new language less foreboding too, when they realise there are “hooks” on which to build their knowledge! Despite being designed for individual year groups, Kapow Primary’s French scheme can be adapted to suit your school structure by creating a rolling programme that encompasses all of the key language components. The most common mixed-age plan is likely to be a two-year cycle for a Y3/4 and Y5/6 split – Cycle 1 and Cycle 2. Here, vital language structures and skills can be revisited in different contexts, and as pupils progress through Key Stage 2, simpler vocabulary and structures evolve into more complex written sentences. Our planning documents are really useful for understanding how the curriculum has been designed, see: Particularly, Kapow Primary’s French progression of skills identifies this progress throughout Key Stage 2, and can be used to ensure the correct skills are being taught at the correct time in the pupil’s language learning journey.  In my own experience, I have tried to mirror similar themes in each cycle; for example, in Y3/4 Colours (Year 3) in Cycle A, followed by Clothes (Year 4 unit) in Cycle B, which also includes colour. This gives the opportunity to repeat the colours with Year 4 as Year 3 are introduced to them. With their prior knowledge, Year 4 pupils can build on previous foundations, for example, further extending adjectival agreement learning. • Teach French to the whole class, rather than dividing them into separate year groups. By teaching to everyone, the correct pronunciation can be modelled, and any errors quickly picked up and corrected.  • For group activities, team older children with younger children – a mix of higher and lower ability – or perhaps younger children would feel more confident working within their own peer group.  • Support a group or circulate around the room, during group tasks giving immediate feedback and encouragement and taking the opportunity to add extension tasks to the older children. Equally, a short 1:1 burst may encourage a less confident child to have a go! Some young children feel very exposed when making these new French sounds, especially in front of the class but once they are over the initial hurdle, they normally start to fly!  • Assess the older pupils’ prior knowledge by asking targeted questions. The assessment section on the lesson plan provides useful criteria for the teacher to be able to identify which pupils are showing a secure understanding and who is working at greater depth.  • Older children may feel comfortable in playing the role of the teacher for part of the task or be willing to perform their short role play. This stretches them and really helps build their confidence. Younger children in the class may just learn the key words, but it is worth noting that some may have a natural “penchant” for the language and work at a similar level to the older children anyway! C’est magnifique!  Just as there are many differentiation techniques that work well when teaching a wide variety of abilities in other subjects, these also lend themselves perfectly to teaching a mixed-aged language class to ensure good progress. Useful differentiation suggestions are listed on each French lesson plan, for those needing extra support and also extension tasks for pupils working at greater depth. For example, the younger children could work on recalling single words or short phrases, while, by comparison, the older children may be able to join phrases together or be confident in both asking and answering a question.  With the very essence of language learning being all about listening, repeating, and recall, the extra rehearsal time that mixed-age groups may encounter can actually play huge dividends and leave the learner engaged and feeling positive about their language learning experience.  It is essential for embedding the new vocabulary and rules, allowing the language to flourish, thus enabling pupils to make substantial progress in the new language. Written by: Kapow Primary www.kapowprimary.com will be down for planned maintenance this Friday from 3.15pm to 7.00am on Saturday morning
__label__pos
0.970628
So, you have hit a virtual wall, and you are tired mentally and physically. While up against that wall, you are wondering what is the secret to losing weight. Is it diet, exercise, or some other unknown factor? As much as we would like to say, diet and exercise, but we cannot. The secret is your metabolism. You have no doubt heard about metabolism and perhaps even know something about its functions. But, there are many myths related to metabolism and its impact on your health, especially when it comes to weight loss. Understanding metabolism and its functioning will enable you to achieve your goals. Our metabolism burns calories to raise our blood sugar levels. Weight watchers will attest that increasing metabolism is the golden key to weight loss, but how quickly your body burns calories may differ depending on your lifestyle. This article will explore what metabolism is, how it works, the factors that affect it, and some secrets to boost it. What is metabolism, and how it works? What is metabolism, exactly? Metabolism is a system of chemical reactions that your body uses to convert food and drink into energy. Chemical reactions within your body’s cells provide the energy needed to function. So, how does metabolism work? Living organisms can extract energy from their environments and harness it to carry out reproduction, growth, movement, and development. This metabolic complex process involves the combination of nutrients, such as food and drink and oxygen, to produce the energy your body requires to function. Regardless of whether you are inactive or active, your body requires energy to perform all its functions, such as blood circulation, breathing, hormone adjustments, and the repairing and growth of cells. Your body’s requirement for calories to perform these essential functions determines your basal metabolic rate. How metabolism affect fat gain and loss? Metabolic rate speed influences the ease or difficulty of gaining or losing weight. Our food and drinks provide us with energy. The amount of energy they contain depends on how much protein, fat, carbohydrates, and alcohol they contain, in addition to their serving size. Different ingredients and preparation methods of food can affect your metabolic speed. You may have heard the term slow metabolism or fast metabolism. Some individuals inherit a fast metabolism. -Slow metabolism burns fewer calories, thus causing the body to store more fat. So, some individuals have difficulty shedding pounds when they restrict their calorie intake. -Fast metabolism burns calories faster, explaining why some people can consume a lot of food and still lose weight. -Resting metabolism happens when you burn calories while resting. A higher metabolic rate means that you burn more calories, so losing weight and keeping it off becomes more manageable. An optimal metabolism can also give you energy and improve your health. Factors that affect our metabolism Physical activity Despite having little control over how fast your basal metabolism speeds up, you can manage how many calories you burn by your level of exercise. You burn calories more quickly if you are active. Likely, people with fast metabolisms may just be more active. Exercising aerobically burns the most calories. But experts also recommend strength training, in part because it builds muscle. The more muscle you have, the more calories you will burn. As a result, your body can keep muscle, combat the loss in metabolism that occurs during weight loss, and boost your resting metabolism, causing your body to burn more calories during resting. The human growth hormones (HGH) are also responsible for increasing your metabolism through fat-burning and muscle building. In the pituitary gland, HGH is produced naturally. However, a synthetic version of it is available for treatment or as a dietary supplement. Human growth hormones play a crucial role in cell growth, cell regeneration, and cell reproduction. It aids in maintaining, building, and repairing healthy tissue in the brain and other organs. Hormones can speed up the healing process following injury and aid in muscle repair following exercise. If you are using HGH, exercise and caution consideration. You should only take the human growth hormones that your doctor prescribes. To see how it goes in real life, here are HGH before and after results. During treatments, be sure to check in with your doctor regularly. Composition and body size Your body size and composition: Muscle is a powerhouse that burns calories even while at rest. Your sex: Men burn more calories because they have less body fat and more muscle than women of the same age and weight. Your age: Over time, your muscle mass diminishes and you gain more weight due to fat, which slows your calorie burning. Secrets to know to have faster metabolism Eat the correct number of calories Time for science! The body converts food into fuel through metabolism. You get the energy you need to get through your daily tasks. According to experts, basal metabolic rate refers to how many calories your body burns at rest. By knowing your number, you can determine how many calories you need to consume to lose, gain or maintain your weight. Calculate the calories burn Finding out how much you can eat without gaining weight can be easier if you know your resting metabolism. It is also a lot easier with a calculator. The calorie counter calculator can estimate how many calories a person should consume each day. Most calorie calculators allow you to input your age, gender, height, weight, and level of physical activity. Besides providing weight-gain or weight-loss recommendations, this calculator can also give you some simple guidelines. Do strength exercises Do workouts with strength training a regular thing: Your body constantly burns calories, even when you’re doing nothing. Constant approach on your dieting Your approach to diet is to avoid the Yo-Yo diet. Also known as weight cycling, Yo-Yo dieting involves losing weight, gaining it back, and then dieting once again. Like a Yo-Yo, weight goes up and down in this process. Keeping your weight stable prevents your metabolism from slipping. Workout regularly Aerobic exercise cannot build muscular mass, but it can increase your metabolism hours after exercising. Pushing yourself is the key. Exercising more intensely will lead to a lower resting metabolic rate. Exercise and a calorific-rich diet are the only ways to improve metabolism. You can boost your metabolism by making minor lifestyle changes and incorporating the secrets of metabolism into your daily routine. Please enter your comment! Please enter your name here
__label__pos
0.806486
Increase your With Momenry, each idea can reach thousands worldwide, creating chains of experiences that can move indefinitely between the real and virtual world to generate more substantial and lasting connections with customers. greater visibility Only through Augmented Reality will users have more immersive experiences with products, even before buying them. Augmented reality experience The experience Having the opportunity to generate sensations, why not show important highlights to make a quicker purchasing decision. The power of social Whether you bring your creativity or we create for you, let Momenry Social be the driver of the experience: gain greater reach, viral interactions with an audience that is ready for you. momenry social editor
__label__pos
0.95208
Busy. Please wait. Log in with Clever show password Forgot Password? Don't have an account?  Sign up  Sign up using Clever Username is available taken show password Already a StudyStack user? Log In Reset Password Didn't know it? click below Knew it? click below Don't know Remaining cards (0)   Normal Size     Small Size show me how "Forces of Motion" Force is a Push or pull that acts on an object and make it speed up or slow down. A force can also cause an object to change direction Gravity is the force that exists on the earth between 2 objects Gravity also pulls an object down from floating when you throw it into the air or when u jump Magnetism is a force that exists when a magnetic field is present. It force acts on certain metals such as Iron and Steel Friction Is a force that resists the movement of one surface past another surface Friction also depends on the mass of the object Air Friction Works with gravity to slow the upward motion of the rocket when u throw it up in the air Effects of friction You can walk, turn, and push. Moving Parts in a machine work better with less friction between the parts Reducing Friction You use oil in a car to reduce friction and grease on a bikes chain Lubricant is any substance that reduces friction and helps moving parts move more friction Static Friction A part of friction that makes something straight like a desk so objects can stay on it Sliding Friction Like a toy car going on a carpet and the carpet will slow it down Rolling Friction Like putting objects on a cart to move it quicker Fluid Friction Is air resistance so it affects an object when falling normal paper floats while when crumpled it moves faster Created by: MadZoe Popular Science sets Pass complete! "Know" box contains: Time elapsed: restart all cards
__label__pos
0.977631
Tudor Weapons The Tudor period in England was marked by the transition from Middle Ages to the early Modern period. This was a time when conventional weapons such as swords, axes and bows were still in use but were increasingly being replaced by more modern gunpowder weapons. Therefore, the cache of weaponry used by the English troops during the Tudor era often included both types of weapons. The conventional weapons used during the early Tudor period included longbow, dagger, battle axe, a variety of swords, caltrop, billhooks, lanes, poleaxes and spears. Gunpowder weapons which were increasingly being used in the later Tudor period included muskets, matchlock, flintlock and canons. Tudor Weapons Tudor Longbowmen were some of the best foot soldiers in the Tudor Armies Conventional Tudor Weapons A wide variety of conventional weapons were used by the Tudor troops on the battlefield. Among these were number of long weapons which were aimed at countering the cavalry of the enemies armies. Weapons used to this purpose included the bill and billhook, both of which were long poles topped with sharp blades which could effectively kill both a horse and a rider at a distance. The halberd was a long weapon topped with sharp double-edge axe which was particularly effective in infantry combat. Pikes measuring up to 20 feet in length and poleaxes were also used to attack enemy cavalry on the battlefield. On rare cases, English troops during the Tudor era also made use of spears which were primarily meant as close-combat weapons. Tudor Swords A variety of swords were used by the Englishmen during the Tudor period. These included the cutting sword, the broadsword and the rapier. The cutting sword was useful in the medieval period but was less effective on the battlefield by the time the Tudor era began. Tudor Gunpowder Weapons The bows were replaced by muskets by the end of the century. The Matchlock was another firearm which gained widespread use among English troops during the later Tudor period. However, given that it was slow to lock and fire, the matchlock was soon replaced by flintlock. Canons also became the primary sieging weapon of the Tudor armies, also used as a vital weapon by the English navy. These canons produced during the Tudor period were usually made from iron or bronze. Share this:
__label__pos
0.96811
Elpis Battle 12.36 % Change 24h Market Cap € 1,819,541 Volume 24h € 71,132 Circulating Supply Total Supply # Exchange Pair Price Volume 24h Elpis Battle is a Tactical Turn-Based game built on Binance Smart Chain Blockchain. The main goal of the game is that the player will assemble a group of characters to explore the world in the game. Besides, they can improve the group's strength with the equipment system such as the equipment, skill, pet, consumable item. The distinction of Elpis Battle is that the support equipment and the characters in the game are designed as NFT created on ERC-721 and ERC-1155, making it easier and more free to exchange items between players. The game has features such as Mining, Mill, Recruit, Train, Quest, Dungeon, Arena, Guild, etc so that players can participate and interact in many roles in a social world like Elpis. $EBA is the token used as: - Elpis Battle game’s governance token - to use to create proposals and votes for decisions that change the future of Elpis Battle, Bearing revenue asset - to share back a portion of the profits from the primary and secondary markets of Elpis Battle. In-game Currency - to use to pay for some in-game items and features. Incentive reward for early support - often recommend for early participants like Liquidity Provider, Beta Test, etc. We Use Cookies
__label__pos
0.845143
I have a rectangular object and a ball ( sphere ) in 3D and would like to orient the rectangular object in such way that when a collision with the ball and the object happens, to redirect the ball towards a certain point Here's an example of what I want to achieve In short: How to rotate the black rectangle in order to redirect the blue ball towards the yellow. Thanks in advance • \$\begingroup\$ Since you're working in 3D, is the 'rectangle' a plate or a box? Thus, can we ignore its thickness or not? \$\endgroup\$ – liggiorgio Aug 30 '21 at 11:19 • \$\begingroup\$ It's a box, the same as in this image: IMG. \$\endgroup\$ – owmygawd23 Aug 30 '21 at 13:38 This gets more complicated than it might appear on the surface, because as we rotate the box, we also change the position where the sphere comes into contact with it, and that then changes the outgoing velocity we'd need to achieve to hit our target, which then changes the angle we need to reach to rebound with that velocity, which then changes the position... There is a 1:1 relationship between these, but trying to work out a closed-form equation I could solve led me into an absolute mess of trig. Someone else may have an insight that I'm missing, but I have a sneaking suspicion that iterative approximation might be our best bet here. First though, we can reduce the dimensionality of the problem. Even though we're in 3D, the sphere's initial position, target position, and initial velocity give us a 2D plane. The normal vector pointing perpendicular from the "bouncing face" of our box has to lie in this plane too. Now we need to figure out how much to rotate the box around that movement plane's normal to achieve our goal. So from here on in we can think of the problem almost entirely in that 2D plane. We can think of our box's intersection with this plane as a rectangle, whose width is the thickness of the box from the bouncing face to the opposite face, and whose height is \$2 \sqrt{ \left(\frac {\text{diagonal}} 2\right)^2 - \text{distance}^2}\$, where \$\text{diagonal}\$ is the length of the diagonal of the bouncing face, and \$\text{distance}\$ is the distance of the sphere's motion plane from the origin of the box. Anything inside this rectangle, we can twist our box to hit, anything outside, we can't. Let's impose a coordinate system on this plane, where the origin is the center of our rectangle, and the x axis points opposite the sphere's initial velocity. You can cross this direction with the plane normal to get your y axis. (There are two choices, but either one works) We'll start with the rectangle rotated "0 degrees" relative to this coordinate frame, so that the bouncing face points directly against the sphere's incoming velocity. That gives us the best possible chance to catch the sphere. Compute where the sphere hits the line segment of the bouncing face in this orientation in the usual way. If it misses even in this most favourable angle, then we give up - there's no way we can rotate the box to affect the sphere at all - it will always fly past us unless we translate the box in space, which was not part of the question. If we do detect an intersection, then we reflect the ball's velocity in the usual way, and we check whether our target lies "above" or "below" this reflected trajectory (counter-clockwise or clockwise in our imposed 2D coordinate system). Next, we turn the rectangle as far as we can in that direction while still intersecting the sphere's path, and we check the trajectory again. If we get the same result both times ("above and above" or "below and below"), then again we give up. No matter how we rotate the box we'll always miss - though you can take the smallest miss of the two if you like. If you get opposite results (your first bounce went "above" and your second bounce went "below" or vice versa), then we have a solution in sight, and we can find it by binary search between these two extremes. You now have an upper and lower bound for the solution angle. Choose an angle halfway between those bounds, and check again. If the result is "above", replace your previous "above" bound. If the result is "below", replace the previous "below" bound. Rinse and repeat, halving your uncertainty with each iteration, until you close in to your desired precision around the solution. Once you have a solution in this imposed coordinate space, you can transform that face normal back to your original world coordinates, and aim the box to face that vector. Since the angle coming in is the same as going out - let us call it β - and you know the angle between your blue and yellow line - let us call it α - you can calculate β by 2*β + α = 180°. The angle α can be determined if you know the coordinates of the blue circle and the yellow circle and the point of contact with your rectangle: α = arccos ( a.b / |a| . |b| ) You need to read DMGregory's answer for the big picture; this is just a proof of concept for the narrow constraints known to me by your question. #include <stdio.h> // printf #include <cmath> // acos, sqrtf #define PI 3.14159265 struct vec2 struct { float x, y; }; vec2() {} vec2(float x, float y) : x(x), y(y) {} float dot(const vec2 &o) const { return x*o.x + y*o.y; } float squaredlen() const { return dot(*this); } float magnitude() const { return sqrtf(squaredlen()); } vec2 &sub(const vec2 &o) { x -= o.x; y -= o.y; return *this; } float rad2deg(float r){ return r*180.0/PI; } void example(vec2 A, vec2 B, vec2 C) // pertains to https://gamedev.stackexchange.com/q/195488/93991 vec2 v_a = vec2(A).sub(C); // a⃗: 0A - 0C (The vector a⃗ is Origin-to-PointA minus Origin-to-PointC) vec2 v_b = vec2(B).sub(C); // b⃗: 0B - 0C (see above) printf("a⃗ = (%+3.2f, %+3.2f) has |a⃗| = %3.2f\n", v_a.x, v_a.y, v_a.magnitude()); printf("b⃗ = (%+3.2f, %+3.2f) has |b⃗| = %3.2f\n", v_b.x, v_b.y, v_b.magnitude()); printf("a⃗∙b⃗ = a⃗.x*b⃗.x + a⃗.y*b⃗.y = %5.3f\n", v_a.dot(v_b)); // https://en.wikipedia.org/wiki/Dot_product float rad_alpha = acos(v_a.dot(v_b) / (v_a.magnitude() * v_b.magnitude())); // alpha in radians float deg_beta = (180.0f - rad2deg(rad_alpha)) / 2.0f; // beta in degrees printf("At C[%3.2f,%3.2f] the angle β must be %5.3f° (α = %5.3f^=%5.3f°) to reflect object from A[%3.2f, %3.2f] to reach B[%3.2f, %3.2f]\n", C.x, C.y, deg_beta, rad_alpha, rad2deg(rad_alpha), A.x, A.y, B.x, B.y); int main() example( vec2( 7.5f, 4.5f), vec2( 7.5f, -4.5f), vec2( 3.0f, 0.0f) ); // α == 90° example( vec2( 3.2f, 1.0f), vec2( 9.9f, -4.2f), vec2( 1.1f, 0.3f) ); // α ~= 45° example( vec2( 1.0f, 1.0f), vec2( 5.0f, -1.0f), vec2( 0.0f, 0.0f) ); // α ~= 56° return 0; • 1 \$\begingroup\$ Would it be possible to implement some code example in order to understand your answer better ? Thank you \$\endgroup\$ – owmygawd23 Aug 30 '21 at 15:50 You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged .
__label__pos
0.926602
Skip to Main Content HR Hot Topic: Recruiting Multi-generational Employees HR Hot Topic: Recruiting Multi-generational Employees HR Hot Topic: Recruiting Multi-generational Employees Estimated Length: 1.5 hours Access Time: 90 days 0.15 CEU | 1.5 HRCI | 1.5 SHRM Key Features Games & Flashcards Real-world case studies Badge and credit-awarding Video content Audio-enabled in app Course Description Recruiting multi-generational employees is crucial in a world where more individuals are putting off retirement and making later-in-life career changes. While the benefits of a multi-generational workforce may seem obvious, they can easily be overlooked in an effort to meet other organizational goals. In this course, you will learn the differences between current generations that are working together. You will explore various considerations for attracting, recruiting, and retaining an age-diverse workforce. You will also learn how age-diversity can influence a company's success, innovation, and productivity. • Define and explain a multi-generational workforce • Define and distinguish between different generations • Identify potential strengths and challenges of a particular generation • Explain the effect of age-diversity on business • Explain how to attract candidates of a given generation • Describe the ways to successfully recruit multi-generational employees • Identify how to retain an age-diverse workforce • Describe the importance of organizational culture and its relation to age-diversity • Define ageism • Understand the harm of generational stereotypes Refund Policy This course does not require any additional purchases of supplementary materials. Course Preview Pinch to zoom Back to Top
__label__pos
0.901144
Jumping Jacks Learn how to perform jumping jacks perfectly with instructions and video by SHOCK Fitness Trainer, Ashley Steele. How to Jumping Jacks Exercise Families: Cardio Equipment: Bodyweight Trainer: Ashley Steele 1. Stand in a narrow stance, maintaining good posture with your arms at your side. Hop vertically, at the same time, raise your arms above your head, spreading your feet apart in the air. 2. Land softly with your knees bent, feet outside the shoulders, and rapidly reverse the movement. Spring in the air, bring both feet back together, and return your arms to your sides. 3. Continue the movement at a fast pace for the specified amount of time. 4. Jumping Jacks are a low-impact functional movement with less joint-stress compared to most plyometrics. This exercise is great for burning calories, elevating your heart rate, improving conditioning, and overall fitness.  Alternative Exercises: Jump Rope  Incline Mountain Climber Download the SHOCK Women's Fitness App Leave a comment Please note, comments must be approved before they are published
__label__pos
0.860374
Persian in Austria Send Joshua Project a map of this people group. People Name: Persian Country: Austria 10/40 Window: No Population: 15,000 World Population: 42,528,200 Primary Language: Persian, Iranian Primary Religion: Islam Christian Adherents: 1.00 % Evangelicals: 0.20 % Scripture: Complete Bible Online Audio NT: No Jesus Film: Yes Audio Recordings: Yes People Cluster: Persian Affinity Bloc: Persian-Median Progress Level: Introduction / History By definition, Persians (also known as Iranians) are an ethnic group native to Iran. The Persian language, called Farsi, is part of the Indo-Iranian language family, and is the official language of Iran. Dari, the language of the elite in Afghanistan, is a dialect of modern Farsi. Around 1000 B. C. , Persian groups began to settle in the territory that is now Iran. Loosely associated Persian tribes became a more cohesive political unit under the Achaemenian dynasty. Their unity soon made them the dominant ethnic group in the region. For 1,200 years, Persia maintained a culture that became increasingly more complex and rigid. This laid the foundation for a successful Arabian conquest of Persia in the seventh century AD. It was not until the Islamic revolution of 1979 that massive changes came both to Iran and to the Persian people. Although the vast majority of Persians now live either in Iran or in one of the nearby Central Asian or Middle Eastern countries. Persian communities also live in European countries like Austria. What Are Their Lives Like? Most Persians in Europe came during the 1979 Iranian Revolution. They were usually people who had means, and they initially had difficulty dealing with a new language and culture. The next generation fared far better. The basic social and economic unit in Persian culture is the nuclear family; however, some families join together to make larger units. Families are traditionally patriarchal patrilineal, and patrilocal. This means that their society is strictly male-dominated. This is changing for the more secularized Persians in countries like Austria. What Are Their Beliefs? Prior to the Arab invasions, the Persian religion was Zoroastrianism. This religion taught that there was an eternal struggle between the forces of good and evil. Shia Islam became the national religion of Iran in the sixteenth century, at which time the ulama (clergy) began playing an important role in both the social and political lives of the people. Today, most Persians are Shia Muslims of the Ithna Ashari branch, and are radical in their adherence to Islamic laws and principles. Thanks to the excess of the Iranian government, a high percentage of Persians are secularized and tired of forced religion. The Lord is growing his Church in Iran, and many have come to faith in recent years. There are also some Christian believers among the Persians in Austria. What Are Their Needs? Persians need their spiritual eyes will be opened. Some are staunch Shia Muslims while others are secularized. Who will reach them in Austria? Prayer Points Pray for Persian believers from Iran to take Christ to Persians in other countries including Austria. Pray for a spiritual hunger among the Persian diaspora that will result in a movement to Christ in Austria and other European countries. Pray for the Lord to show himself powerful and loving to the Persians in Austria. Text Source:   JoshuaProject
__label__pos
0.787813
Providing Regional Climate Services to British Columbia You are here New Report on Climate Impacts and Ontario’s Public Buildings A report released today by The Financial Accountability Office of Ontario examines the costs of three climate impacts on Ontario’s public buildings: extreme rainfall, extreme heat and freeze-thaw cycles. The report, Costing Climate Change Impacts to Public Infrastructure (CIPI): Buildings finds that under a medium emissions scenario, these climate impacts will, on average, add about $800 million dollars annually to the costs of operating and maintaining these buildings, an increase of about 8%. Under a high emissions scenario, the costs are higher, at an additional $1.5 billion dollars annually, an increase of about 15%. While a projected reduction in freeze-thaw cycles will modestly reduce costs, this is more than offset by increased costs from extreme heat and rainfall. The report also found that the cost of adaptation is modestly less than the cost of not adapting. Adapting buildings would likely bring some significant co-benefits, such as minimizing costs associated with disruptions in public services. Read the report. Read the CIPI backgrounder that describes the climate projections and costing methodology.
__label__pos
0.959968
SEO Interpretations And Site Positioning In our most recent article, we discussed how Google had divided the types of searches into categories called ‘interpretations’ to determine the results shown to the user. The dominant and common interpretation Dominant interpretations are what most users refer to when searching. For example, when someone enters ‘surfboards’ in the search, they are probably looking for stores to buy them or want more information about the history of surfboards. Either way, your interest in surfboards is clear. Google results can cover the experience of buying tables without a problem and the informational segment and even provide recommendations of places where you can go to use the table. On the other hand, common interpretations can have multiple meanings, such as the word ‘apple,’ which can refer to the fruit or electronic devices. With these kinds of interpretations, Google results cover a variety of possible meanings. Do / know / go In mainstream interpretations, search engines focus on three basic intentions, categorized as Do, Know, or Go. Google and other search engines try to determine the type of intention behind a user’s search to display the most relevant results. Searches in the Do category are transactional. The user seeks to perform an action, make a purchase. This type of search is very important for eCommerce sites. Going back to the previous example, if someone enters ‘surfboards’ in the search, Google will show them the eCommerce sites where they can be purchased and nearby surf shops. Google categorizes surfboard search as ‘To Do.’ To know Queries in the Know category are informative. Users want to learn something about a product, an industry, a place, etc. Saber searches also include micro-moments when the user does a quick query to update, for example, reviewing a bank statement. In the case of surfboards, you would have to add ‘history of surfboards’ or ‘make surfboards’ for the search engine to categorize the search as Know. Instead, if the verb ‘surf’ is entered, the search results change and provide more information and knowledge about the sport. To go Searches in the Go category are based on a location. The user searches for a specific site or place. For example, if a user enters ‘Florence, Italy’ in the search engine. When you enter ‘surfboards’ into the search, Google doesn’t prioritize it as a Go search. But again, if you use the verb ‘surf,’ you get more results based on knowledge and location. To get information from the Really Go category, you must enter ‘surf trip’ or something similar. This micro example demonstrates how Google uses this categorization method to determine which results to display. This can be used to ensure a better match between users and businesses trying to solve their problems. Meta title and meta description for SEO Understanding this basic SEO information can help improve a site’s rankings and generate more traffic. Another piece of the puzzle is the meta title and meta description, which are displayed on the results pages of Google or any other search engine. It stands to reason that any business would want to use the main theme of their site or their content in their meta title and description, as these will be the ones that appear on the search page. Users click based on this information. Keep in mind that the purpose of most searches is to solve a problem, be it by taking action, learning something, or going somewhere. This is the intention that you want to fulfill. The faster a user can see the potential to solve a website, app, or content problem, the faster they will click and start reading. The meta description should, whenever possible, address this problem with the promise of a quick and easy solution. A user quickly looking at the results page will click on the answer that will likely fix their problem. If we return to the example of surfboards, our user wants to buy aboard. Enter ‘surfboard’ in the search and find several shopping options as well as a map of nearby stores. It is easy to see that these businesses are trying to solve the user’s problem by offering a large selection of tables and comparing them to ensure that the user finds the perfect table. Further down, the results page shows information about what to consider when buying a board and how to find the perfect board. This is convenient if the first results were overwhelming for the user; or if the user wants more information before buying a table.
__label__pos
0.747736
49.9 F Riverside, US Friday, January 28, 2022 Home Tags Ocean temperature Tag: Ocean temperature Rising ocean temperature has an impact on both marine and human... The rising ocean temperature should be alarming to everyone. This is by no means a new event, but rather one that has been discussed...
__label__pos
0.999966
x = created by satan i hated algebra as a teen and i now hate it as the parent of a teen. algebra is of satan. this is part of why algebra is so demonic: the sale price of a radio is 30% below the original price of the radio, but a 7 % sales tax is added to the sale price. betty’s total bill for her radio came to $97.54. what was the original price of the radio Betty bought on sale? (hint: let x be the original price of the radio. write an equation to represent the problem and then solve it.) the answer is $130.23 but the problem is writing out the equation. oh how i hate algebra. 5 Replies to “x = created by satan” 1. the key to word problem is to pick out the words that typically mean mathemetical functions. like “of” typically means you’re going to be multiplying, “less” typically means subtract, etc. Leave a Reply
__label__pos
0.986008
Bird Problem? Contact our Bird Removal Experts in Newmarket skip to main content We're here to help throughout the COVID-19 issue. Read More.. Assess and Remove The first step in resolving a nuisance bird issue is a thorough understanding of the exact nature of the problem. Our customized removal plans take into account the species of bird involved, the affected areas of the home and the time of year. clear and clean Clear and Clean Birds can be messy, their nesting material and droppings can cause home damage and result in unsanitary conditions. As part of the process our trained technicians will remove nests from vents, soffits and balconies and safely scrub away unhealthy droppings. Prevent and Protect Our prevention plans are customized to address the specific bird threats your home faces. Our technicians are trained to install protective barriers and devices designed to make your home inhospitable to birds. Julie Riegert Birds in Newmarket Birds are a common sight in just about every location in the world, though tropical environments boast significantly more species than hot or cold desert ecosystems. There are an estimated 18,000 species currently in existence. With numbers this high, it is difficult to get an exact count! Some birds spend most of their lives on the water, while others spend more days in flight than at a standstill. They have adapted to living in trees, on the ground and in human-made structures. Having birds in your home is not pleasant. They can be noisy, messy and destructive. It takes trained professionals to remove them. At Skedaddle, we understand bird behaviour and biology. Our removal process is safe and humane. We consider bird life cycles and take steps to ensure the wellbeing of their babies. Once the process is complete, we take measures to prevent any more bird issues in your home. People in Newmarket can see anywhere from 60 to 501 bird species travelling around the Ontario province. This wide swing indicates the difference between the number of bird species that stick around all year and those that head for warmer weather. Three species commonly make their nests in our homes: starlings, sparrows and pigeons. Though most of the starlings migrate south, some of them stay in our area year-round. Pigeons and sparrows don’t head south for the winter. At any time during the year, you might discover that one of these types of birds has decided to build a nest in your home. Favourite nesting locations include attics, chimneys and exterior vent openings. Droppings can contain viruses that cause illnesses in people and their pets. Birds can also cause structural damage. If you find birds using your home as their own, it is important to contact Skedaddle in Newmarket for the humane removal of our fine feathered friends. Bird Facts Birds are descendants of dinosaurs. They evolved from a group called the theropods. This was a diverse family that included the Tyrannosaurus rex, but the birds today descended from a smaller species within this family. Migrating birds have an amazing sense of direction. They utilize Earth’s magnetic field along with mental maps to stay on course to their winter or summer destinations. What do bald eagles, swans, Canadian Geese and barn owls all have in common? These diverse bird species all mate for life. Most of the birds in Canada are protected under a law that was first established in 1917. The Migratory Birds Convention Act was formed in partnership with the United States. Newmarket Wildlife Control: Why Do Birds Sing? Regardless of the temperature outside, blooming flowers and singing birds are sure signs of spring in Ontario. Everyone knows how a plant goes from a seed to a flower; but do you know why birds, who have been relatively silent all winter, suddenly begin their ... ...Continue Reading
__label__pos
0.84937
When was Vietnam divided at the 17th parallel? Why was Vietnam divided along the 17th parallel? As decolonization took place in Asia, France had to relinquish its power over Indochina (Laos, Cambodia and Vietnam). … It was decided that Vietnam would be divided at the 17th parallel until 1956, when democratic elections would be held under international supervision. What is the purpose of the 17th parallel? The 17th Parallel indicates the boundary separating North and South Vietnam following the peace negotiations in Geneva in 1954. In peace negotiations at Geneva, the decision was reached to divide Vietnam into northern and southern halves. What does the 17th parallel separate? The Seventeenth parallel (Vietnamese: vĩ tuyến 17) was the provisional military demarcation line between North and South Vietnam established by the Geneva Accords of 1954. Why did the French lose in Vietnam? The French lost their Indochinese colonies due to political, military, diplomatic, economic and socio-cultural factors. The fall of Dien Bien Phu in 1954 signalled a loss of French power. … Duncanson records that Indochina once constituted the Associated States of Indochina – being Laos, Cambodia and Vietnam. IT IS INTERESTING:  Frequent question: What is the biggest forest in the Philippines? Is Vietnam still divided? When the French left Vietnam the United States stepped in to? What role did the 17th parallel play in the Vietnam War quizlet? The 17th parallel was buffered by a demilitarized zone, or DMZ, between the two countries. A chemical herbicide and defoliant that U.S. forces sprayed extensively in order to kill vegetation in the Vietnamese jungle and expose Viet Cong hideouts. Who ruled Vietnam now? • President Nguyễn Xuân Phúc • Vice-President Võ Thị Ánh Xuân • Prime Minister Phạm Minh Chính • Chairman of National Assembly Vương Đình Huệ How did Vietnam end up being split into two quizlet? What was Vietnam called before 1956? Names of Vietnam from 1945 Việt Nam Main template History of Vietnam World Southeast Asia
__label__pos
0.991036
Examination Papers for Science Schools and Classes Front Cover What people are saying - Write a review User Review - Flag as inappropriate how does hydra occur in flower Other editions - View all Common terms and phrases Popular passages Page 128 - GENERAL INSTRUCTIONS. If the rules are not attended to, the paper will be cancelled. You may take the Elementary or the Advanced or the Honours paper, but you must confine yourself to one of them. Page 36 - FG; then, upon the same base EF, and upon the same side of it, there can be two triangles that have their sides which are terminated in one extremity of the base equal to one another, and likewise their sides terminated in the other extremity: But this is impossible (i. Page 36 - In obtuse-angled triangles, if a perpendicular be drawn from either of the acute angles to the opposite side produced, the square on the side subtending the obtuse angle is greater than the squares on the sides containing the obtuse angle, by twice the rectangle contained by the side on which, when produced, the perpendicular falls, and the straight line intercepted without the triangle, between the perpendicular and the obtuse angle. Let ABC be an obtuse-angled triangle, having the obtuse angle... Page 199 - The value attached to each question is shown in brackets after the question. But a full and correct answer to an easy question will in all cases secure a larger number of marks than an incomplete or inexact answer to a more difficult one. Page 57 - Put the number of the question before your answer. You are to confine your answers strictly to the questions proposed. Your name is not given to the Examiner, and you are forbidden to write to him about your answers. Bibliographic information
__label__pos
0.998397
Private Jet Ownership Private jets are often surrounded by an air of mystery. There is a vast quantity of questions around the people that fly Hawker 800SP Exterior Private jets are expensive assets and, as a result, various amounts of tax are levied on them. Therefore, it is critical to Beechcraft Premier I Exterior When it comes to the price of pre-owned private jets, aircraft within the same model can vary wildly. Naturally, this is expected Private jet ownership is not for the faint of heart or those without multiple millions in the bank. Not only can the Have you ever considered the factors that are involved when purchasing a private jet? How you go about it? Where do you
__label__pos
0.824594
What you need to know about Nibiru – Planet X It is the twelfth planet describe by Zecharia Sitchin.Nibiru is also called Marduk, and it arrives to our solars system with a extreme clockwise elliptical course. According to several ancient texts from Mesopotamia, there is strong evidence that supports theories that Nibiru, has an orbital period of 3600 years. The number 3,600 was represented by the Sumerians as a large circle. The expression for the planet, the “shar” also means a perfect circle or full circle and also represents the number.Ancient Astronaut theorists believe that the convergence of the three concepts – planet / orbit / and number 3,600 – could not be a coincidence. Strangely, The periods of the kingdom were also multiples of shar, 3,600 years, leading to the speculation that the empires shares were related to the orbital period of 3600 years.NASA has identified a planet with an anomalous orbit around our Sun, they refer to it as Planet X. According to mathematical models that were presented, it is believed that Planet X or Nibiru, has an extreme elliptical orbit of 30 degrees. It is believed that the planet originated from the Orion constellation, passing near our planet and coming towards it from the Sun, after making its way by Earth, it head towards outer space and “disappears”. See also: New Study Suggests The Collapse of The Universe is IMMINENT Mythologically speaking, it has the appearance of a fiery beast, appearing in the sky as a second sun. Nibriu is a magnetic planet, causing the Earth to tilt in space as it passes. Nibiru is believed to have 4 times the diameter of Earth and is 23 times more massive, a truly gigantic planet. According to ancient texts, Nibiru Is wrapped in a cloud of dust iron oxide red, making the rivers and lakes acquire a reddish color. It is believed that it would cause days of obscurity while passing next to other planets, possibly even stopping their rotation during its transition across space due to its incredible magnetic properties. Nibiru is also associated with great dangers. Find us here Get news from the CSGLOBE in your inbox each weekday morning Paid content Harvard Study Proves Why The Bees Are All Disappearing The 10 Biggest Dangers Posed By Future Technology Did we discover “Alien technology” and reverse-engineer it? A lot of people have wondered ever since the first reported UFO crash, if we had come into contact with Alien technology. Highly advanced technology... What's New Today Five bombshells about Trump from Bolton ‘s book Complete List of BANKS Owned/Controlled by the Rothschild Family China Is Turning The Rainforest Into Cheap Furniture For The U.S. Harvard Study Proves Why The Bees Are All Disappearing New Machine Turns Food Waste Into Energy An Israeli company, HomeBioGas, has created a portable machine that turns food waste into renewable energy for cooking or electricity, and its byproduct can... Got Bees? Got Vitamin A? Got Malaria? New Research Shows How Bees and Other Pollinators May Be Crucial To Human Health A new study shows that more than half the people in... Scientists Claim to Have Created 99% of a Human Brain Scientists have figured out how to grow a human brain in a lab. Well, almost. The creation is that of scientists from Ohio State... The Total Number of Coronavirus Cases Could be 10 Times Higher Than Official Numbers !? The head of Italy’s civil protection authority says he feared the true number of people infected with... The World Map we all know is WRONG We Have Been Misled By An Erroneous Map Of The World For 500 Years An explanation for the use of a false map, and why... Inside Russia’s Psychedelic Salt Mine: Naturally Forming & Mind-Bending Patterns A few hundred feet below the Russian city of Yekaterinburg is an abandoned salt mine of indescribable, psychedelic beauty. Entire caves and corridors covered with... The Playlist Used By The CIA To Torture Detainees According to declassified reports, video evidence and testimony from whistleblower guards and soldiers, the US government used music to torture detainees in a number...
__label__pos
0.890996
What Should We Teach about Human Rights? By Keith C. Barton, Professor of Curriculum and Instruction at Indiana University from Social Education 83(4), p. 212–216 Educators around the world have advocated for human rights to become a core element of students’ social and civic learning.  Although constitutional rights are typically the foundation for social studies and related subjects, human rights represent a universal and cosmopolitan vision, one that applies to citizens and non-citizens alike and is not restricted by national boundaries. Studying human rights can highlight our responsibilities to all fellow humans, not only those with whom we share national citizenship. Human rights also point to a more stable foundation for safe, secure, and fulfilling lives. Constitutional protections can change with shifting political winds, and rights that once seemed secure can disappear when overturned in court, when leaders choose to interpret them in new ways, or when governments are overthrown. Although human rights have evolved over time (and continue to do so), and although their enforcement usually has less authority than national law, they nonetheless provide a societal vision that is more stable than the changing arena of national politics. >> Read full article
__label__pos
0.988401
Desktop version Home arrow Engineering • Increase font • Decrease font <<   CONTENTS   >> Atmospheric Measurement of Greenhouse Gas Fluxes Unlike observation strategies useful for direct GHG flowrate measurement, like CEMs, flux quantification for GHG fluxes originating from surface processes where direct measurements are not possible must be accomplished using atmospheric observations of mole fraction coupled with inferential analysis methods. As would be expected, these require detailed information describing atmospheric properties and motions in addition to greenhouse gas concentration observations. Atmospheric motions at fine spatiotemporal scales are realized with simulation models, whether of the global atmosphere or more locally focused numerical weather prediction (NWP) models. Atmospheric parameters, such as wind speed and direction, temperature and pressure, are needed tlnougliout the estimation and modelling domain. These, coupled with mole fraction observations at various domain locations, provide a means to track atmospheric greenhouse and other observable trace gases through the atmosphere. From the carbon management point of view, these methods allow for the estimation of source/siuk locations and the magnitude of the flux at those locations. Equation (5) relates greenhouse gas, or any relatively long-lived trace gas, atmospheric mass flux to observable parameters, or, in the case of air density, to a parameter directly computed from observed parameter values. Mass flux is a r ector quantity due to its dependence on velocity. Both must be represented by a magnitude and direction. where: MG = Mass flux of the greenhouse gas of interest, xghg = Greenhouse gas mole fraction, MA = Mass flow or flux of the atmosphere, p t = Density of the atmosphere, and VA = Atmospheric velocity. Atmospheric velocity r ectors, composed of wind speed and direction values, across the geographic domain of interest impart these properties to mass flux with scalar quantities of greenhouse gas concentration and air density. Descriptions of atmospheric motions both at global scales, known as global circulation models, and at more local scales, NWP models, are used to simulate greenhouse gas flows in the atmosphere. Although these modelling approaches rely heavily on fundamental descriptions/models of the fluid dynamic system of the atmosphere, they are aided by the use of meteorological observations. In most cases, meteorological parameter measurement data is available at a relatively few points in the domain relative to the number of cells in a computational domain. Advances in and assimilation of radar- based observation, both from the surface and space, have substantially improved the skill of atmospheric property simulation from the surface to the top of the atmosphere. As discussed in more detail below, NWP and global atmospheric circulation models can provide simulated atmospheric parameter values throughout a domain of interest with varying degrees of temporal and spatial resolution. These, coupled with GHG mole fraction measurement data, are used in the inferential method described below for tracking atmospheric GHG flows from their sources on or near the surface, through the planetary boundary layer, and to the global atmosphere. Greenhouse gas dynamics in the global atmosphere The movement of CO, and methane at continental and global scales has been estimated with global atmospheric circulation models, coupled with mole fraction measurements, through the contributions of several nationally-sponsored, mole fraction observing systems (ECMWF, 2018; Rodeuberg. 2017). The CarbonTracker (CT) model, developed by NOAA's GMD (NOAA-CT, 2017), estimates atmospheric carbon uptake and release globally. The observed mole fractions used are predominantly taken from northern hemisphere locations, which tends to make simulated values there somewhat more accurate. CT, and the associated long-term monitoring of atmospheric CO, upon which it depends, are tools to advance the understanding of carbon uptake and release from land ecosystems and the oceans. Providing a means of studying the movement of atmospheric CO, globally, CT and other global models can be tools for monitoring environmental changes, including human management of land and oceans and the potential impact of carbon management efforts globally. A companion model tracking global methane emissions (NOAA-GMD, 2018). provides methane emission flux simulations from North American and global natural and anthropogenic methane emissions. Other global-scale GHG inversion models also produce flux estimates annually (Wageningen Univ. & ICOS Netherlands, 2019; ECMWF-CAMS. 2019; Max Planck Institut for Biogeochemistiy, 2019). Many assimilate satellite observations either in addition to in situ data or alone. Various Bayesian data assimilation methods are utilized, including Ensemble Kalman Filter and 4D-Var methods. The CarbonTracker system is an example of atmospheric inversion analysis combining statistical optimization, global atmospheric circulation simulation, and CO, and CH, observational data. The underlying global transport models have similarity to finite element, finite difference analyses used widely in many science and engineering applications. These methods enable the investigation of the behavior of complex systems using spatial and temporal gridding of the domain containing the item of interest and embedding fundamental process descriptions or parameterizations of them within each grid cell. For example, a spatial grid encompassing a body where thermal and or mechanical stresses are to be investigated or that of a fluid in a flow field confined to a specified geometry, pressure and temperature regime can be simulated and compared with measured parameters. In meteorological models, three- dimensional grids beginning at Earth's surface and extending to the top of the atmosphere are used to simulate atmospheric dynamics and properties over specific geographical regions and time periods. Physical and hydrodynamic principles and parameterizations of individual Earth system processes, e.g., cloud formation, planetary boundary layer dynamics, land-atmosphere gas exchange, and solar radiation intensity at the surface, are used at each grid cell along with pertinent input data. At each tune step, the model provides an array of atmospheric parameter values, e.g., pressures, temperatures, wind direction and speed, and surface energy exchange. As described below, application of NWP models to simulate atmospheric dynamics at local scales is a tool to investigate greenhouse motions and source and sink characteristics in urban areas. These are important to carbon management as the smaller geographic regions of cities (small relative to the total laud surfaces of the earth) har e outsized contributions to national and global emissions, and, therefore, are likely to be a focus of carbon management efforts. <<   CONTENTS   >> Related topics
__label__pos
0.952398
A device or program used to generate keys is called a key generator or keygen. Session Key Generation And Diffie Hellman Songs Generation in cryptography[edit] May 20, 2016  Diffie-Hellman key agreement (DH) is a way for two parties to agree on a symmetric secret key without explicitly communicating that secret key. As such, it provides a way for the parties to negotiate a shared AES cipher key or HMAC shared secret over a potentially insecure channel. After that, the Diffie-Hellman key gets exchange, and then both send the pre-shared key to the other for authentication. Now we have two keys: One will be generated by AES encryption; One will be generated by the Diffie-Hellman group; Which key is used to encrypt the pre-shared key? Diffie-Hellman Key exchange method The Diffie-Hellman key generation protocol is based on the hard problem of solving discrete logarithms. In the public parameters, we assume that the modulus is p and the generator g. The operations are as follows. Assume that the two communication parties are A and B. They select their own secret. Create a Diffie-Hellman key by calling the CryptGenKey function to create a new key, or by calling the CryptGetUserKey function to retrieve an existing key. Create a Diffie-Hellman private key BLOB by calling the CryptExportKey function, passing PRIVATEKEYBLOB in the dwBlobType parameter and the handle to the Diffie-Hellman key in the hKey parameter. Nov 04, 2015 Conceptually, the best way to visualize the Diffie-Hellman Key Exchange is with the ubiquitous paint color mixing demonstration. It is worth quickly reviewing it if you are unfamiliar with it. However, in this article we want to go a step further and actually show you the math in the Diffie-Hellman Key Exchange. Modern cryptographic systems include symmetric-key algorithms (such as DES and AES) and public-key algorithms (such as RSA). Symmetric-key algorithms use a single shared key; keeping data secret requires keeping this key secret. Public-key algorithms use a public key and a private key. The public key is made available to anyone (often by means of a digital certificate). A sender encrypts data with the receiver's public key; only the holder of the private key can decrypt this data. Since public-key algorithms tend to be much slower than symmetric-key algorithms, modern systems such as TLS and SSH use a combination of the two: one party receives the other's public key, and encrypts a small piece of data (either a symmetric key or some data used to generate it). The remainder of the conversation uses a (typically faster) symmetric-key algorithm for encryption. Computer cryptography uses integers for keys. In some cases keys are randomly generated using a random number generator (RNG) or pseudorandom number generator (PRNG). A PRNG is a computeralgorithm that produces data that appears random under analysis. PRNGs that use system entropy to seed data generally produce better results, since this makes the initial conditions of the PRNG much more difficult for an attacker to guess. Another way to generate randomness is to utilize information outside the system. veracrypt (a disk encryption software) utilizes user mouse movements to generate unique seeds, in which users are encouraged to move their mouse sporadically. In other situations, the key is derived deterministically using a passphrase and a key derivation function. Many modern protocols are designed to have forward secrecy, which requires generating a fresh new shared key for each session. Classic cryptosystems invariably generate two identical keys at one end of the communication link and somehow transport one of the keys to the other end of the link.However, it simplifies key management to use Diffie–Hellman key exchange instead. The simplest method to read encrypted data without actually decrypting it is a brute-force attack—simply attempting every number, up to the maximum length of the key. Therefore, it is important to use a sufficiently long key length; longer keys take exponentially longer to attack, rendering a brute-force attack impractical. Currently, key lengths of 128 bits (for symmetric key algorithms) and 2048 bits (for public-key algorithms) are common. Generation in physical layer[edit] Wireless channels[edit] A wireless channel is characterized by its two end users. By transmitting pilot signals, these two users can estimate the channel between them and use the channel information to generate a key which is secret only to them.[1] The common secret key for a group of users can be generated based on the channel of each pair of users.[2] Optical fiber[edit] A key can also be generated by exploiting the phase fluctuation in a fiber link.[clarification needed] Session Key Generation And Diffie Hellman Md Session Key Generation And Diffie Hellman See also[edit] • Distributed key generation: For some protocols, no party should be in the sole possession of the secret key. Rather, during distributed key generation, every party obtains a share of the key. A threshold of the participating parties need to cooperate to achieve a cryptographic task, such as decrypting a message. Diffie Hellman Key Exchange Example 1. ^Chan Dai Truyen Thai; Jemin Lee; Tony Q. S. Quek (Feb 2016). 'Physical-Layer Secret Key Generation with Colluding Untrusted Relays'. IEEE Transactions on Wireless Communications. 15 (2): 1517–1530. doi:10.1109/TWC.2015.2491935. 2. ^Chan Dai Truyen Thai; Jemin Lee; Tony Q. S. Quek (Dec 2015). 'Secret Group Key Generation in Physical Layer for Mesh Topology'. 2015 IEEE Global Communications Conference (GLOBECOM). San Diego. pp. 1–6. doi:10.1109/GLOCOM.2015.7417477. Diffie Hellman Key Retrieved from '' Coments are closed Scroll to top
__label__pos
0.74389
Prime Minister Naftali Bennett speaks with Russian President Vladimir Putin in Sochi, Russia, in October 2021. (Photo: Evgeny Biyatov/The Kremlin) Realpolitik Should Guide Israeli-Russian Relations One of the first foreign leaders Israel’s Prime Minister Naftali Bennett met was the Russian President Vladimir Putin, on October 22, 2021 in Sochi. Today’s Russia is no longer the superpower that was the Soviet Union. It is economically weak and dependent upon energy prices. But it still is a very important country. It has a large nuclear arsenal, and it does not hesitate to use force in foreign affairs (for example, in occupying Crimea). Its energy exports serve as levers of influence internationally. Moreover, it has a significant presence in the Middle East, a region that Russia views as its backyard. Today, Russia sells arms to Egypt, Iran, Turkey, and several other Arab countries. Egypt has purchased two Russian nuclear power reactors, which will make Egypt dependent on Russian nuclear fuel for several decades. In Syria, the Russian air force is fighting to preserve Bashir Assad’s regime, proving to everyone that Russia does not abandon its allies – in contrast to the US. Assad has rewarded Moscow by providing Russian forces a naval base in Tartus and an air base in Hmeymim. In general, Russia seeks to maintain good relations with all parties in the Middle East, including Iran and Israel, the Palestinian Authority, Hamas, Turkey, and Iraq. In October 2021, Israel and Russia marked the 30th anniversary of the resumption of relations between the two countries. It is important to recall several important historical facts in understanding the relationship. The Jewish people have a moral debt to Russia (formerly, the Soviet Union) which fought fiercely against the Nazis during World War II. The Red Army liberated many Jews from Nazi death camps. The Soviet Union voted in favor of the establishment of the State of Israel at the UN in 1947 (it did so primarily to push Britain out of the region). It enabled the transfer of weapons from Czechoslovakia to Israel during Israel’s War of Independence. This sympathetic stance towards Israel was replaced by a distinctly pro-Arab orientation in the 1950s. Israel and Russia were on opposing sides during the Cold War. The Soviet Union became the arms supplier for most Arab countries and trained their armies, seeking influence in the Middle East at the expense of Western powers. In the wake of the 1967 Six Day War, the Soviet Union led most Eastern bloc countries into severing diplomatic ties with Israel and moved to consistently vote against Israel in all international institutions. After the end of the Cold War, Russia reestablished diplomatic relations with Israel and posted an ambassador in Tel Aviv. Cordial relations followed. Russia opened its gates for the Jews to leave. Occasionally, Israel refrained from voting on anti-Russian resolutions at international fora. Israel even sold Unmanned Air Vehicles (UAVs) to Russia. In an unexpected, unprecedented and curious move, Moscow recognized West Jerusalem to be Israel’s capital (April 2017), making Russia the first country in the world to extend such a recognition to any part of the city. In parallel, Russia declared that the eastern part of the city should be the capital of a Palestinian state. President Putin sees Israel as a solid advanced country with impressive military capabilities, willing to use force to attain state interests. He also acknowledges that Israel is a key US ally in the region and even sees Israel as a potential tool for influence over the United States. A reflection of this attitude was a trilateral meeting of national security advisors from the three countries, held in Israel in June 2019. The three senior officials discussed regional and bilateral and trilateral matters, another sign of Russia’s desire to be considered a significant power on par with the US. Unlike some of his compatriots, Putin has a positive attitude towards Jews, reportedly due to childhood experiences. Moreover, he regards the 1.5 million native Russian-speaking Israelis as a Russian diaspora to be cultivated. Israel is considered the world’s only part-Russophone country outside of the former Soviet states. The Russian language is the third-most widely spoken first language in Israel after Hebrew and Arabic; Israel has the third-largest number of Russian speakers outside of the post-Soviet states and the highest proportion of the total population. Israel is the only country in the Middle East where the Russian language and culture are vibrant. In addition, over 100,000 Israeli citizens live in Russia, with 80,000 Israelis living in Moscow. The first meeting between Putin and Naftali Bennett appears to have gone well. Putin missed the previous Israeli leader, Binyamin Netanyahu, who he had a good working relationship with. Yet relations between states transcend personal chemistry. Filling Netanyahu’s big shoes is not easy, but Bennett has likely learned much from working closely with Netanyahu. The dialogue with Russia and Putin should use a realpolitik language. This is the language that Putin understands well and is comfortable with. An attempt to speak in American liberal clichés is doomed to failure. The central meeting of Israeli and Russian interests is in Syria. Moscow wants to maintain the Assad regime because it needs bases on the Mediterranean shore. Russia understands that Israel has the power to rock the boat and undermine Assad’s rule. So Russia has good reason to be sensitive to Israel’s concerns about Iranian entrenchment in Syria. Like Israel, Moscow dislikes a strong Iranian influence in Damascus. This explains why, until now, Russia has allowed Israel to strike at Iranian targets in Syria. It is essential for Bennett that Israel preserves this quiet understanding. At the same time, Israel-Russia understandings on the Iranian nuclear issue should not be expected. Russia sees a strong Iran as a useful factor that weakens the regional status of the US, which is Russia’s main rival in the international arena. Although Russia does not desire a nuclear Iran, it prefers other nations take steps to prevent such a scenario. The bilateral relationship between Russia and Israel is complex and fragile. Jerusalem should have good communication channels with Moscow in the Hobbesian world we live in. Professor Efraim Inbar is President of the Jerusalem Institute for Strategy and Security (JISS). © 2022 World Mizrachi Follow us:
__label__pos
0.733771
Health History and Screening of an Adolescent or Young Adult Client February 8, 2018 Compare/Contrast · Treaties vs. Executive Agreements February 9, 2018 Quasi-Experimental Designs Order Description Read the three study descriptions below: Study 1 utilized a participant-variable design and demonstrated that women perform more poorly, on average, than men perform on the mathematics section of the Scholastic Aptitude Test (SAT). Study 2 utilized a comparison-group before-after design and demonstrated that students from lower socioeconomic status (SES) households benefit more over time from an academic tutoring program than do students from higher SES households. Study 3 utilized a cross-sectional design and measured the IQ of individuals who are in their twenties. In Group 1 are 20-year-olds, Group 2 are 24-year-olds, and Group 3 are 29-year-olds. The mean IQ for the three groups is 103, 100, and 90, respectively. Consider whether the researcher can accurately conclude that people lose intelligence as they age. Select any two of the studies for this Assignment. The Assignment (1–2 pages): Identify the two studies you selected for this Assignment by their corresponding numbers. Identify the dependent variable and independent variable in each study. Explain what is problematic about each of the studies you selected. Briefly describe at least one other type of quasi-experimental research design that might have been used for each study to gain additional or different information. Essay Writing Service
__label__pos
0.725551
Top 10 Myths That Are Not Myths 7. Coca Cola Used to Contain Cocaine While it is true that coke used to have cocaine in the list of ingredients that made the now well-known soft drink famous, many of the myths attached to the truth are merely myths. Coke had such an minuscule quantity of cocaine in it that it can be effectively discounted 6. Eat Celery for Negative Calories Well for all those suffering from obesity, there is something to cheer about. Here we have a myth about a diet food that burns your calorie free of cost and without any sort of exercise.  The myth that celery is a negative calorie food is true to a certain extent. It burn more callories than it actually provides. There are some forms of plant cellulose that we, as humans, cannot digest. Celery falls squarely into this group. The amount of calories your body burns trying to digest this plant is greater than the amount of calories it contains. However the amounts we are talking about here are negligible. 5. The State of Michigan Threatened Beavers with $10,000 per Day Fine “The state of Michigan threatened local Beavers with a $10,000 per day fine for failing to remove their dam.” A man noticed his property was flooded. He investigated and it led back to a dam in a nearby stream. He then wrote a letter to the Michigan Department of Environmental Quality,  complaining about his property flooding. The Michigan Department of Environmental Quality then investigated and found out it led to a beaver dam. They then took action by writing a letter fining the beavers. 4. Avoid Hospitals in July like the Plague Well July is the month when freshie graduates make their way towards hospitals for their practice. They try to win their places in respective wards by earning some respect and learn different techniques from senior doctors. This is the time of year when new doctors are released from their cages and allowed to fly on their own. Due to lack of practice and experience, they prescribe wrong doses and often fail at correct diagnosis which makes patients suffer. Related posts Tulle Netting Turned Into Amazing Art Work How to check RightZoom compatibility with Snow Leopard How to Solve Error (0xe8000065) of iTunes?
__label__pos
0.74345
Silent Dismissal School dismissal made silent & simple. User Tools Site Tools Tutoring Dismissal There are two different processes that may be used with Silent Dismissal for tutoring. One process is to call students from the regular classroom to tutoring then dismiss students from tutoring without using Silent Dismissal or use Silent Dismissal for parent pick up following tutoring. Without Parent Pick Up If the only requirement is to notify students in their regular classroom that they may proceed to tutoring the following steps should be taken: 1. Create one or more groups of type “Tutoring” 2. Assign the students to the appropriate group by day of the week or by setting an override for the student 3. Call the individual tutoring groups to notify students to proceed to tutoring 4. After tutoring students are picked up using a process that does not include Silent Dismissal With Parent Pick Up After School To use Silent Dismissal to call students from the regular classroom to tutoring then to later call them from tutoring to parent pick up, follow these steps: 1. Go to Tools / School Settings 1. Open the Pick Up Number Entry section 2. Set the value for Group Priority to No 3. Click on Save All Settings 2. Create the requisite groups of type Tutoring and assign students to the group 3. For teachers with students in tutoring 1. Determine how tutoring is performed in school 1. If the students do NOT change based upon day of the week, then the tutor should go to Students / Temporary then select all of the students not currently assigned to the tutor as temporary students 2. If the students vary based upon day of the week, the tutor should go to Students / Daily 1. On the Day of the Week View page, select the appropriate students for each day of the week. It is better to include all students who attend tutoring even if they only do so occasionally 2. Determine the role of the tutor 1. If this tutor is also a regular classroom teacher during regular dismissal then do not make any other changes 2. If this tutor only performs dismissal for tutored students then set the value for each day of the week to Specific Students 3. Scroll to the middle of the page (just above the column for Saturday) then press Save 4. Dismiss the tutoring groups at the appropriate time 5. If the tutor is also a regular classroom teacher with regular dismissal, then open the Students / Daily page, change the selection for the current day from Regular Class to Specific Students, also change the value from the previous day from Specific Students to Regular Class__ then Save the page 6. Revert to the Home screen. 1. On initial load the screen will likely make an audible alert and the screen will show all of the students called to tutoring 2. The screen will refresh during the course of tutoring, but in the absence of any changes the audible alert will not sound 3. When an actual parent pick up occurs, the called student(s) will appear in the top left of the screen with an additional dismissal method of parent pick up 4. Press the Depart button on the screen when the student leaves tutoring During Dismissal Silent Dismissal offers a mechanism to allow for tutoring to occur during regular dismissal with regular classroom teachers. For many schools this can provide up to 30 minutes of daily tutoring with negligible impact on staff. This is best illustrated via the use of an example. Let's presume the following: • There are four teachers designated as T1, T2, T3, T4 • T1 is the math tutor • T2 is the Language Arts tutor • Math tutoring occurs on Monday and Wednesday • Language Arts tutoring occurs on Tuesday and Thursday The are the steps for initial set up: 1. Identify the students that will be in math tutoring 2. Teacher T1 will need to go to Students / Daily 1. Set the selection on Monday and Wednesday to Specific Students 2. Place a check mark adjacent to all of the students that will attend tutoring. It is better to err on the side of including students who may not always participate than to leave someone off of the list 3. Save the changes 3. Identify where the students in class T1 who are not attending tutoring will wait until their normal dismissal 4. Teachers T2, T3, and T4 will need to go to Students / Daily 5. Set the select on Monday and Wednesday to Specific Students 6. Select all of the regular students in the classroom who are not going to tutoring AND any of the students from classroom T1 who will waiting in this teacher's classroom. 1. Optionally all of the regular classroom students may be added to list of specific students. This will provide a method for viewing overrides in the original classroom that may affect a student who would normally participate in tutoring 7. Save the changes 8. Perform the same operations for Tuesday and Thursday, swapping the references above for T1 and T2 9. Inform the parents of those students that are participating in tutoring to arrive for pick up at the end of normal dismissal, somewhere between 30 and 45 minutes following regular dismissal time 1. Post a message instructing staff to transfer students from their regular classrooms to tutoring. This generally occurs approximately 5 minutes prior to normal name display 2. The teacher performing tutoring will sign in to Silent Dismissal, close the classroom door, then begin tutoring. If Silent Dismissal is being used properly there should be no additional noise during dismissal including no audible announcements 3. When the first student in tutoring is called for dismissal the tutor's screen will generate an audible alert. At this time that student should be released for pick up and the remaining students may begin gathering their belongings Once set up, this method does not require additional work for the staff and allows tutoring to be performed concurrent with regular dismissal dismissal/tutoring_dismissal.txt · Last modified: 2019/01/15 13:55 by gregor
__label__pos
0.987062
What is meant by vague pronoun reference? Careful pronoun use helps writers avoid confusion and miscommunication. When pronoun reference is vague or ambiguous, it means that the noun (or antecedent) that the pronoun refers to is not clear. Susan Thurman, in The Only Grammar Book You’ll Ever Need, says that after verbs, pronouns cause the most problems with writers. Indeed, vague pronoun reference occurs near the top of both the 1986 and the 2006 versions of the list of top twenty common errors. (It did slip from the top 2 position in the earlier list to the number 4 position in the later list.) Most people recognize that pronouns are used in place of nouns that are already known or have already been mentioned. Using them can make writing smoother and communication efficient. Notice the difference in the following sentences when the noun phrase the elderly man is replace with a pronoun he. The elderly man sat on the bent until the elderly man was asked to leave. The elderly man sat on the bent until he was asked to leave. In some cases, the antecedent that the pronoun refers to is clear: Shirley called to say she would be glad to help decorate for the party on Friday. Here, the pronoun she refers to the noun Shirley (it’s antecedent). In the following example, however, the reference is not as clear: Billy Joe invited Darrell to the ranch because he enjoyed horseback riding. Here there are two possible nouns or antecedents the pronoun he could refer to: Billy Joe and Darrell. In cases in which there are two (or more) antecedents, a reader cannot tell what the writer had in mind. Who enjoys horseback riding, Billy Joe or Darrell? Depending on what is meant, this sentence can be rewritten to make the meaning clear: Because Darrel enjoyed horseback riding, Billy Joe invited him to the ranch. Billy Joe, who enjoyed horseback riding, invited Darrell to the ranch. Even with vague pronoun reference, the context can supply the intention, but sometimes a reader has to go back and reread in order to trace the pronoun to its antecedent or the noun it refers to. Here’s an example from a newspaper article cited by Cathy Kehrwald Cook in Line by Line: How to Improve Your Own Writing (105). “Tell Them” was written, incidentally, by the songwriter Paul Dresser, the brother of the novelist Theodore Dreiser, whose “My Gal Sal” will be sung at the stops along East 20th Street. Though we may guess or know that Paul Dresser wrote “My Gal,” a relative pronoun like whose usually refers to the noun immediately preceding it, which would be Dreiser. This can be a stumbling block to a reader. Cook provides this suggested revision: Incidentally, both “Tell Them” and “My Gal Sal,” the song that will be sung at the stops along East 20th Street, were written by Paul Dresser, the brother of novelist Theodore Dreiser. Martha Kolln in Rhetorical Grammar also points to instances when a vague pronoun reference can cause a fuzziness, rather than clear understanding. She first provides examples in which the pronouns clearly refer to the nouns or noun phrases preceding them: The old gymnasium   needs a new roof. It                      needs a new roof. My sister’s boyfriend   works for a meat-packing company. He                   works for a meat-packing company. In other examples, Kolln shows that while the meaning is not at stake, a fuzziness occurs because the pronouns refer not to a complete noun phrases, but to only a noun modifier: The neighbor’s front porch is covered with trash, but he refuses to clean it up. The neighbor’s dog gets into my garbage every week, but he refuses to do anything about it. My sister’s boyfriend works for a meat-packing company. She’s a vegetarian. It’s hard to keep track of the Administration’s stand on tariffs. They say something different every week. Last summer I didn’t get to a single baseball game, even though it’s my favorite sport. (239-40) Sometimes a pronoun has no clear antecedent. Consider the following sentence from Cook (106): The young recording star was elated, but kept it hidden? What did the star keep hidden? If we follow the rule that a pronoun must refer to a noun already known or already mentioned, then the sentence would read: The young recording star was elated, but kept elated hidden. But, the antecedent cannot be elated because elated is not a noun. In such cases, we must replace the pronoun with a noun phrase that expresses the meaning. Here is the sentence rewritten: The young recording star was elated with his hit record, but kept his feelings hidden. The following example of the need to provide a noun (or noun phrase) instead of a pronoun for a preceding noun illustrates the problem of the vague use of pronouns such as this: I just found out that my roommate is planning to withdraw from school. This really shocked me. Though we understand what is meant, there is no clear antecedent for the pronoun this. It is a vague use. Ask of the sentence, what shocked the speaker? Her decision really shocked me. This vague use occurs because the words this, that, it, and which are sometimes used as a short cut for referring to something mentioned earlier. However, as with other pronouns, these words need a clear antecedent. As you review and revise your writing, make sure that the pronouns you are using not only have clear antecedents, but also that they are referring to something that has already been expressed. Leave a Comment
__label__pos
0.976675
Min. Order Supplier Country/Region All Countries & Regions Homeledt10 1206 28smd led t10 1206 28smd led T10 1206 28smd Led (1601 products available) Go to PageGo About products and suppliers: With the increase in the number of accidents globally, t10 1206 28smd led are essential in ensuring public safety by ensuring traffic accidents involving busses decline. Alibaba.com offers a variety of t10 1206 28smd led to ensure other road users, especially students, are safe. The products are available for different purposes, such as external lighting, internal lighting, and roof lighting. Each item is designed to meet its purpose brightly and radiate its roles effectively. Before purchasing the t10 1206 28smd led, buyers should focus on the various specifications, bus functionalities, and applications. They come in multiple sizes, shapes, colors, and designs, but they play different roles. Therefore, customers should understand these factors to ensure the buses communicate the right information to other road users and the passengers. Buyers should also ensure that their product complies with traffic standards and safety measures depending on their country. There are different types, sizes, and shapes of t10 1206 28smd led on alibaba.com. The platform ensures they are all available to meet the buyers’ needs. Buyers can select the product from a wide range displayed on the website. Each item has clear descriptions and specifications to help in the selection process and save consumers time. Alibaba.com has genuine, authentic products available at pocket-friendly prices. It offers excellent t10 1206 28smd led to meet consumers' budgets. Before product selection, customers get a chance to scroll through the products’ database and choose the one that meets their preferences.
__label__pos
0.999292
Dear Colleague, A bug in the parallel version of the Dissipative Particle Dynamics code in DL_MESO 2.4 has been discovered. For systems involving hard surfaces, no boundary halo data is required next to the surfaces: these boundaries are identified using the Boolean array srflgc for each subdomain. Therefore the export of particle data does not need to take place across boundaries containing hard surfaces. The export routines in domain_module.f90, however, depend upon each processor sending and receiving data to/from two different processors. The current implementation of omitting boundary halo data across hard surfaces would thus cause the program to hang due to orphaned MPI calls. To solve this problem, the exportdata, exportvelocitydata and exportdensitydata subroutines in domain_module.f90 need to be modified so the relevant export routines are called regardless of the existence of a surface, but still ensure that no particle data is sent across solid boundaries. To achieve this, the position range for each direction needs to be modified if a solid boundary exists so that none of the particles can be added to the boundary halo message buffer. By inserting the following lines immediately before each CALL statement in the three subroutines: IF (srflgc(1)) final = -rhalo IF (srflgc(2)) begin = sidex + rhalo IF (srflgc(3)) final = -rhalo IF (srflgc(4)) begin = sidey + rhalo IF (srflgc(5)) final = -rhalo IF (srflgc(6)) begin = sidez + rhalo the IF statements in the export routines will not be able to find a particle that qualifies for copying into a boundary halo. The IF statements should also be removed from the lines calling the export routines. For instance, in exportdata, the following line (1554) IF (.NOT. srflgc(1)) CALL export (nlimit, 1, map (1), begin, final, sidex) should be changed to IF (srflgc(1)) final = -rhalo CALL export (nlimit, 1, map (1), begin, final, sidex) It should be noted that the serial version of the module, domain_module_ser.f90, does not need to be modified as it does not rely on synchronized MPI calls. This bug fix has been applied to the current DL_MESO release and a corrected version may be downloaded by registered users of version 2.4 without re- registering and decrypted using the same password. Michael Seaton
__label__pos
0.974231
Which Statement Is Not True Of Full Employment Output (y*)? Which statement is true about full employment? Full employment embodies the highest amount of skilled and unskilled labor that can be employed within an economy at any given time. True full employment is an ideal—and probably unachievable—situation in which anyone who is willing and able to work can find a job, and unemployment is zero. What is the full employment level of output? An economy’s full employment output is the production level (RGDP) when all available resources are used efficiently. It equals the highest level of production an economy can sustain for the long-run. It is also referred to as the full employment production, natural level of output or long-run aggregate supply. Is potential output full employment? It’s a sign that the economy may not be at full employment. If the real GDP exceeds potential GDP (i.e., if the output gap is positive), it means the economy is producing above its sustainable limits, and that aggregate demand is outstripping aggregate supply. You might be interested:  How Many Jobs Are Needed For Full Employment? Which of the following are true of an economy operating at full employment? The GDP gap is negligible: When economy is operating at full employment, the economy reaches its potential GDP: it is employing all productive factors and producing (maximum) the amount of goods and servicies that can be produced with a given technology, labor and capital. What rate is full employment? What decreases the full employment level of output? Macroeconomic Equilibrium If the equilibrium level of output is below the full employment level as in the graph above the result is unemployment. Demand-pull inflation is inflation caused by an increase in AD. What changes full employment output? The two economic forces that must be in equilibrium to achieve full employment GDP are unemployment and inflation. When unemployment goes down, inflation tends to go up, and when unemployment goes up, inflation tends to fall. Why full employment is bad? Why is potential output difficult? Hard to measure Potential output and the output gap can only be estimated. Estimates are based on one or more statistical relationships and therefore contain an element of randomness. Moreover, estimating the trend in a series of data is especially difficult near the end of a sample. You might be interested:  What Is The Employment Equality Age Regulations 2006? What is the difference between actual output and potential output? Actual Output can be defined as the growth in the quantity of goods and services produced in a country, or in other words the percentage chance in GDP. While Potential Output is the change in the productive potential of a economy over time. What causes potential GDP to fall? Potential real GDP Source: Congressional Budget Office. It is quite typical to see potential GDP slowing down after the economy enters a recession. This is because investment generally falls during an economic contraction, which slows down capital accumulation and reduces the growth rate of potential GDP. When the economy is operating at full employment the actual unemployment rate is? The natural rate of unemployment is related to two other important concepts: full employment and potential real GDP. The economy is considered to be at full employment when the actual unemployment rate is equal to the natural rate. When an economy is operating below the full employment level of output? The economy is below full-employment equilibrium when its short-run GDP is lower than the potential GDP. When the economy is operating below full employment, some labor, capital, or other resources are unemployed (beyond the natural rate of unemployment). When full employment is present in the United States quizlet? Full employment is the rate of employment that results when: only frictional and structural unemployment are present. Full employment means which of the following is zero? Leave a Reply
__label__pos
0.961339
Theory of Price Determination Now that we have seen how consumers and producers behave, we can turn to the determination of price in a market, aka the price mechanism (aka price theory. As of now, there are no government involvement in our free market, and the essence is of the simultaneous interaction of demand (SS) and supply (SS) forces that determine the equilibrium price (and equilibrium quantity) of the goods and services. 5.1 Market Equilibrium The price mechanism is the essence of the free market system. The interaction of the demand and supply forces determines the prices of the goods in the market, and the market is said to arrive at an equilibrium position. Equilibrium Price and Quantity The term equilibrium means a state of balance or rest. When we see buyers (consumers on demand curve) and sellers (producers on supply curve) interact in the market, we obtain the equilibrium price and quantity. The quantity demanded and quantity supplied will be equal at one and only one market price. This is the equilibrium price. If the equilibrium price is not reached, market forces would come into play until quantity demanded equals quantity supplied to push the price back into equilibrium. Market Equilibrium | Price Mechanism | JC Econs Notes Singapore Market equilibrium occurs at a price at which the quantity demanded by consumers is equal to the quantity supplied by producers. At $0.75, the market “clears”. This is shown at point C in Figure 1. At prices above the equilibrium price, the quantity supplied > the quantity demanded; at these prices there is a surplus, and there is a downward pressure on the price. At prices below $0.75, quantity demanded > quantity supplied; the resulting shortage puts upward pressure on the price. In summary, there are tendencies for prices to rise when a shortage occurs and for prices to fall when a surplus occurs. This is known as the price mechanism (market mechanism) or the invisible hand which explains that the market behaves as if some unseen force was examining each individual’s supply or demand and then selecting a price that assured equilibrium. 5.2  Changes in Conditions of DD  Consider a non-price factor of demand at work, say incomes rise. The demand will rise, shifting the demand curve from DD to DD1. Changes in Demand- Price Mechanism | JC Econs Tuition Notes S'pore At the original price of $0.75, quantity demanded exceeds quantity supplied, i.e. a shortage occurs. The invisible hand will act, pushing the price up, so as to clear the market. As the price rises, quantity supplied increases along supply curve SS (from point A), and quantity demanded falls along the demand curve DD1 (from point B). When the new equilibrium price of $1.00 per quart is reached, the quantity demanded will once again equal the quantity supplied. Both the final price and quantity are higher following the increase in demand. In contrast, a decrease in demand will shift the demand curve to the left, resulting in a lower equilibrium price and quantity. 4.3 Changes in Conditions of SS Any non-price factor that raises the supply, will lead to the supply curve shifting to the right. The surplus that occurs at the original price will eventually lead to market to clear at a lower price, with a higher equilibrium quantity. In contrast, a decrease in supply will shift the supply curve leftward, resulting in a lower equilibrium quantity but higher equilibrium price. (Do sketch diagram to verify.) 5.4 Simultaneous Changes in DD and SS If both the demand and supply curves shift at the same time, then either final price or quantity is indeterminate. 1. An Increase in Demand and Supply Simultaneous Rise in DD & SS | JC Econs Tuition S'pore If the demand and supply curves both shift rightward, the equilibrium quantity increase. However, the new equilibrium price is indeterminate. Although E1 is to the right of E, it could be above or below E, depending on the relative shifts of the demand and supply curves. Consider the effects on price and quantity for the following: 2. A Decrease in Demand and Supply (Output falls but price is indeterminate.) 3. An Increase in Demand and a Decrease in Supply (Price rises but quantity is indeterminate.) 4. A Decrease in Demand and An Increase in Supply (Price falls but quantity is indeterminate.) (You will need to sketch out the impacts) In general, when the demand and supply curves shift in the same direction, equilibrium quantity also shifts in the direction. The effect on price depends on which curve shifts more. If the curves shift in opposite direction, price will move in the same direction as demand. The effect on quantity depends on which curve shifts more. In the next chapter, we look at the way producers behave. 5.5 Sample Short Answer Questions 1. Explain how a recession and lower fuel costs affects the price of air tickets. 2. Explain why the price of smartphones have been increasing, despite that the phones parts have been shifted to lower-cost producing countries. 3. Discuss whether the implementation of health campaigns and lower prices of packaging materials impact the revenue earned producers of potato chips. (More sample essay questions for Market Mechanism here.) The above questions require your knowledge of the content of both supply and demand theory, as well strong exam techniques to answer them to get a “Level 3 response”, otherwise known as higher order thinking skills (HOT). More of these HOT skills in our Econs tuition lessons. 5.6 Sample Outline Essay Answer “Airbus failed to deliver its super jumbo jets (A380) on time. This, coupled with more than two billion passenger trips made last year as compared to 1.88 billion trips made in 2004, could lead to an increase in air fares, say analysts. It is all a question of demand and supply.” Source: The Straits Times, 15 Oct 2006 (Update; Production has been stopped for Airbus, 2019) (a) Explain the demand and supply factors which could “lead to an increase in air fares”. [10] (b) Assuming that the increase in air fares had come about from a specific fuel tax imposed on the airline industry, discuss the impact this would have on airline firms revenue. [15] Suggested Outline Answer: Part (a) Demand factors: o increase in incomes o rise in taste and preferences for holidays o increase in prices of train/ship travel  o decrease in prices of holiday packages/hotel rooms, etc (Any 2 DD factors, as long as both are applied to the context.) Supply factors: o fall in number of planes supplied by Airbus (must be included) o increase in the price of fuel or an increase in any of the input prices o expectations of a particular flight route being unprofitable in the future o decrease in number of airline firms (e.g. bankruptcy) o contracts with plane suppliers terminating, etc (Any 2 SS factors, as long as both are applied to the context.) Level 3: (7-10m): Explain factors and illustrate with contextual examples. e.g. † income → †DD ↑P as people demand more holidays (normal good) Level 2: (5-6m): Explain at least two DD and two SS factors e.g. income → †DD Level 1: (1-4m): Mere listing of SS and DD factors without explanations Part (b)  Explain that a specific fuel tax increases unit / marginal production cost and would result in an increase in prices as the firms in the industry attempt to pass on some of the cost to consumers. Price increases and quantity demanded decreases ceteris paribus. Explain parallel SS shift left by the amount of the tax. Standard PED analysis o Elastic DD (TR decreases) and inelastic DD (TR increases), o Draw 2 diagrams (diagrams must have both SS/DD curves) (Note: extended analysis may involve the price elasticity of demand aka PED, as you will learn shortly in the next few chapters.) Derived demand: Air travel a necessity for businessmen (more inelastic DD), a luxury for holiday-makers (more elastic DD) o Depending on which market segment dominates, TR may increase (former) or decrease (latter) SR: TR may be unchanged as tickets are booked in advance and not many close substitutes for long distance travel especially, Airlines absorb the tax; profits affected; LR: TR may decrease as people look for travel alternatives o Airlines transfer some tax burden to consumers; SS decreases; P increases . LR: DD may be quite elastic as there may be available substitutes (e.g. teleconferencing for business meetings), especially if one airline pulls out of a particular route and other airlines take over L3 Analyse the effects of varying PED on TR L2 Explain and illustrate how the fuel tax affects P & Q L1 Recognising that a fuel tax causes P to increase OR causes SS curve to shift left Up to 5m for EV: Assess the possible elasticities, substantiated with theory and applied in context of the different consumer market segments Prior Chapter: Theory of Supply | Next Chapter: Price Elasticity of DD & SS
__label__pos
0.998264
The Science of the Superhero! Science of superhereoOn Thursday, October 16, Five Oaks Academy was proud to welcome Jerry DeCaire to Five Oaks as a guest artist for the Upper Elementary and Middle School students. Jerry has served as an artist for Wolverine, Thor, X-Men, Conan and many other famous comic book and film characters. His presentation showed how mathematics, science and art can merge to create superheros! Jerry had all of the students on the edge of their seats with excitement as they watched him create a superhero while explaining the math and geometry he was using to create his art. They were thrilled to be given copies of the drawings he made that day. It was a great opportunity for them to see how the subjects they are  learning in the classroom are applied in the “real world”.
__label__pos
0.981762
Player source Play recorded sources and data recorded by Phyphox Sources in Mover include the possibilitie to record the received data. Theres a record button (the red dot) availlable for that, just like the one shown in the figure of the FS2020 source. The data is recorded on a file and can be played on the Player source. Besides data recorded from sources, you can also play files generated with the Phyphox application availlable for IOS and Android. Phyphox records data received by the motion sensors of the phone, but you need to setup the data properlly to have it recognised by the player. Imagine making an aircraft travel and record the data to be replayed on the motion rig. See bellow how to setup Phyphox. Palyer with a Phyphox file How to record data compatible with the player in Phyphox Step 1: Install and open Phyphox. Step 3: Add an experiment to Phyphox by pressing the + in the orange circle. Step 4: Use the option of the QRCode and scan the QRCode in the picture to add the experiment to Phyphox. You now have the Data for FlyPT Mover in the list of experiments. Step 5: Select the experiment and press play to start recording data on the phone. Step 6: Press pause to stop recording. Step 7: Pres the 3 points on the top right and export the data. Select the CSV (Comma, decimal point) format. Now you can open the zip file generated by the experiment in the player source.
__label__pos
0.82996
Hey everybody out there, I worked on sound design in Anarchy Reigns. My name’s Sakata. Today I thought I’d use this blog to talk about the game’s sound effects. While the sound effects in Anarchy Reigns were made using a realistic sound base, the game also features characters that have gigantic weapons built into their arms. So the sounds maintain realism, but are also somewhat altered. What this means is, for example, Jack’s weapon is a giant chainsaw. That has presence. If we were to use the noise an actual chainsaw makes, the sound would pale in comparison to the picture. So we decided to use a motorcycle, revving up. Out of the multitude of sound effects that exist, perhaps most important in a fighting/action game is the sound of hitting something. The bam!/pow!/boom! when laying one on your opponent. We wanted these hit sounds to be realistic, but as we played the game, we also wanted sounds that felt right, sounds that left a hefty wallop in your ears when you heard them. During production, the director would often try to tell us what he wanted. Thus began the back and forth brining different variations of sound samples and being told “not that”, “no, not that either”… which continued until we found something we liked and he liked. And that was difficult. Sometimes the sound just wouldn’t fit. And sometimes the most basic of sounds ended up being the hardest to figure out. There was often a limit to how much could be communicated by just words. Something like “The sound when —– does —— in that movie” is… doable, but when we were told “Like, when something kind of bounces off something and goes powannnn, you know, powwaannnnnnn?” we thought, are you insane!? I have no idea what you’re talking about we had to stretch our creativity as far as it would go until we honestly couldn’t think of anything else. Here’s a short movie displaying some of the hitting sounds we worked with the director on coming up with: Finally, I’d like to mention the voice acting in this game. Anarchy Reigns is one disc with five language options: Japanese, English, French, Italian and Spanish. Each character has five different voice tracks, which are used in cut scenes and during actual gameplay. The game has some very talented voice actors for any language you choose, so it might be fun to switch it up once in awhile. Try it out!
__label__pos
0.753124
Effects of an odor or taste stimulus applied to an artificial teat on the suckling behavior of newborn dairy calves Detta är en Master-uppsats från Linköpings universitet/Biologi Sammanfattning: In their first days of life, dairy calves in artificial rearing systems often have difficulty using an artificial teat for feeding. I examined the age at which calves are able to stand up and suckle without lifting assistance, as well as their suckling behavior when presented with a plain dry teat versus a dry teat modified with a presumably attractive odor or taste substance. Single-housed newborn dairy calves (n = 51) were presented for ten consecutive days with a two-minute two-choice test, in which suckling time was recorded for 1) a plain (control) teat vs. a glucose-coated teat (taste test) and 2) a plain teat vs. a teat with a "Freshly Cut Grass" odor (odor test). On average, the calves were able to suckle without lifting assistance from the second or third day of age on. The "Freshly Cut Grass" odor had no significant effect on their suckling behavior. The calves showed a significant preference for suckling the glucose-coated teat and displayed a significantly longer total suckling time in the taste test compared to the odor test. There were no significant differences between sexes regarding suckling preference. The results of the present study show that glucose had a significant effect on the calves' teat preference and significantly increased total suckling time with a dry artificial teat. As such, glucose may increase suckling motivation in non-efficient drinkers or ill calves with low motivation to suckle.
__label__pos
0.936889
1A.4 Can thermodynamic intensification of the global Walker Circulation help resolve the East African climate paradox? Monday, 13 January 2020: 9:15 AM 156BC (Boston Convention and Exhibition Center) Chris C. Funk, USGS EROS, Santa Barbara, CA; and A. Fink In the spring of 2019, the East African boreal spring March-May “long” rains failed again contributing to a massive increase in the number of displaced people and extreme food insecurity, especially in Somalia, eastern Ethiopia, and eastern Kenya. UC Santa Barbara/US Geological Survey scientists first identified declining east African rains in the mid-2000s, and ,unfortunately, suppressed rainfall levels have continued into the 21st century. A combination of gauge-only Centennial Trends and satellite-gauge Climate Hazards center InfraRed Precipitation with Stations (CHIRPS) data can be used to assess long term changes in seasonal precipitation. Since 1999, there have only been a few wet seasons: the very wet 2018, and 2010, and 2013. Conversely, 9 years had poor rainfall: 1999, 2000, 2001, 2004, 2008, 2009, 2011, 2017, and 2019. This means that the “new normal” in eastern East Africa has been a substantial or severe March-to-May drought about every other year. Such sequential rainfall deficits have had dangerous impacts, eroding resilience, economic reserves, and herd size and health. While prior research by numerous scientists has linked these rainfall declines to various aspects of an enhanced global Walker Circulation, which tends to be associated with subsidence and dry conditions over East Africa, the relationship between the observed Walker Circulation enhancement and climate change remains contentious. Two related factors that suggest that the observed enhancement is due to natural internal variability is that a) most climate change models indicate a weakening of the Walker Circulation, while b) large-scale atmospheric energy balance considerations indicate relatively small changes in mean tropical precipitation, due to the fact that, on average, fluctuations in hydrologic balance (P-E) need to be offset by changes in radiative forcing. The latter are relatively small, as are global trends in mean precipitation. Using reanalysis fields from version 2 of the Modern Era ReAnalysis (MERRA-2), we explore a new way of thinking about the energetics of the Walker Circulation. By far, the two largest energy components of a column of air are associated with the column’s enthalpy or internal energy, which is a function of temperature, and the column’s potential energy, which is a function of geopotential height. These two terms are commonly combined in a measure known as dry static energy (DSE). This combination can mask important and interesting behavior. For example, on a grid-cell by grid-cell basis, the divergence of internal and potential energy are highly anti-correlated, with a median anti-correlation of -0.97 based on 1980-2019 March-May MERRA-2 data. This anti-correlation arises through the hydrostatic relationship, which tightly couples changes in air temperatures and geopotential height. The diabatic forcing term is a function of radiation, precipitation, and sensible heating. Diabatic forcing and internal energy convergence are positively correlated in space and time, and in general, changes in diabatic forcing and internal energy convergence are offset by changes in potential energy divergence. This global pattern is clearly associated with regions of internal energy convergence and diabatic heating near the Warm Pool, Amazon/tropical Atlantic, and Gulf of Guinea/tropical Africa. These regions also typically export more than 400 Wm-2 of geopotential height energy. Conversely, the sub-tropical high regions of the globe are associated with strong temperature divergence and geopotential height convergence. While changes in DSE are constrained by changes in radiation, the exchange between internal and potential energy is not. It is therefore possible that the Walker Circulation may become more intense as 1) increased atmospheric air temperatures increase patterns of tropical temperature convergence, and 2) diabatic forcing increases in areas associated with water vapor increases due to precipitation and the trapping of long wave radiation. Preliminary analyses of changes in MERRA-2 atmospheric energy balance terms (Figure 1), and conditions during recent East African drought years (not shown), suggest that this may be the case, at least over the Indo-Pacific warm pool and tropical Atlantic regions. These areas of enhanced temperature convergence/diabatic forcing appear alongside an area of reduced temperature convergence/diabatic forcing over tropical Africa. This talk will explore these variations, and co-variations, and their potential relationship to droughts in East Africa. Multiple data sets are also used to provide convergence of evidence. - Indicates paper has been withdrawn from meeting - Indicates an Award Winner
__label__pos
0.895556
Interspecific variation in avian blood parasites and haematology associated with urbanization in a desert habitat H. Bobby Fokidis, Ellis C. Greiner, Pierre Deviche Research output: Contribution to journalArticlepeer-review 86 Scopus citations Many avian species are negatively impacted by urbanization, but other species survive and prosper in urbanized areas. One factor potentially contributing to the success of some species in urban areas is the reduced presence of predators or parasite vectors in urban compared to rural areas. In addition, urban areas may provide increased food and water resources, which can enhance immune capacity to resist infection and the ability to eliminate parasites. We determined patterns of blood parasitism, body condition, and immune cell profiles in urban and rural populations of five adult male songbird species that vary in their relative abundance within urban areas. Urban birds generally exhibited less blood parasitism than rural birds. This difference was particularly evident for the urban-adaptable Abert's towhee Pipilo aberti. In contrast, no difference in haemoparasitism was seen between urban and rural populations of the curve-billed thrasher Toxostoma curvirostre, a less-urban adaptable species. In two closely related species, the curve-billed thrasher and the northern mockingbird Mimus polyglottos, urban birds had a higher leukocyte count and a higher heterophil to lymphocyte ratio, which is often associated with chronic stress or current infection, than rural birds. Urban northern mockingbirds were in better condition than rural counterparts, but no habitat-related differences in condition were detected for other species. Parasitic infection was correlated with body condition in only one species, the canyon towhee Pipilo fuscus. Parasitic infection in most species was correlated with changes in leukocyte abundance and profile. The findings suggest that interspecific differences in parasitic infection cannot be attributed entirely to differences in vector abundance or body condition. Interactions between immune function, parasite infection risk, and resource availability may contribute to determining the relative ability of certain species to adapt to cities. Original languageEnglish (US) Pages (from-to)300-310 Number of pages11 JournalJournal of Avian Biology Issue number3 StatePublished - May 2008 ASJC Scopus subject areas • Ecology, Evolution, Behavior and Systematics • Animal Science and Zoology Dive into the research topics of 'Interspecific variation in avian blood parasites and haematology associated with urbanization in a desert habitat'. Together they form a unique fingerprint. Cite this
__label__pos
0.950576
Vibrational energy relaxation and spectral diffusion in water and deuterated water John C. Deàk, Stuart T. Rhea, Lawrence K. Iwaki, Dana D. Dlott Research output: Contribution to journalArticlepeer-review In the broad water stretching band (2900-3700 cm-1). frequency-dependent vibrational energy relaxation (VER), and spectral diffusion both occur on the time scale of a few picoseconds. Ultrafast IR-Raman spectroscopy of water is used to study both processes. VER is also studied in solutions of HDO in D2O (HDO/D2O). The OH stretch (νOH) lifetime for water and HDO is ∼1 ps. The OD stretch (νOD) lifetime for D2O is ∼2 ps. Stretch decay generates substantial excitation of the bending modes. The lifetimes of bending vibrations (δ) in H2O, HDO, and D2O can be estimated to be in the 0.6 ps ≤ T1 ≤ 1.2 ps range. νOH decay in water produces δH2o with a quantum yield 1.0 ≤ φ ≤ 2.0. In HDO/D2O solutions, νOH(HDO) decay generates νOD(D2O), δHDO, and δ5D2o. The quantum yield for generating νOD(D2O) is φ ≈ 0.1. The quantum yield for generating both δHDO and δD2O is φ ≥ 0.6. Thus, each νOH(HDO) decay generates at minimum 1.2 quanta of bending excitation. After narrow-band pumping, the distribution of excitations within the stretch band of water evolves in time. Pumping on the blue edge instantaneously (within ∼1 ps) generates excitations throughout the band. Pumping on the red edge does not instantaneously generate excitations at the blue edge. Excitations migrate uphill to the blue edge on the 0-2 ps time scale. The fast downhill spectral diffusion is attributed to excitation hopping among water molecules in different structural environments. The slower uphill spectral diffusion is attributed to evolution of the local liquid structure. Shortly after excitations are generated, an overall redshift is observed that is attributed to a dynamic vibrational Stokes shift. This dynamic shift slows down the rate of excitation hopping. Then energy redistribution throughout the band becomes slow enough that the longer VER lifetimes of stretch excitations on the blue edge can lead to a gradual blue shift of population over the next few picoseconds. Original languageEnglish (US) Pages (from-to)4866-4875 Number of pages10 JournalJournal of Physical Chemistry A Issue number21 StatePublished - Jun 1 2000 ASJC Scopus subject areas • Physical and Theoretical Chemistry Dive into the research topics of 'Vibrational energy relaxation and spectral diffusion in water and deuterated water'. Together they form a unique fingerprint. Cite this
__label__pos
0.749923
- Global Voices - https://globalvoices.org - Global Voices Partners with UNFPA on 7 Billion Actions Categories: Development, Humanitarian Response, 7 Billion Actions, Announcements In 2011 the world's population will exceed 7 billion people. To mark this milestone, Global Voices has been commissioned by the United Nations Population Fund [1] (UNFPA) to write a series of posts that celebrate how one person or group can still make a difference in a world of 7 billion people. The stories [2], by Global Voices authors in different countries, will form part of a global campaign called 7 Billion Actions [3]. Every story will also be translated by Lingua [4] into the UN languages French, Spanish, Russian, Chinese, and Arabic. Young and old in Sri Lanka [5] Visit 7 Billion Actions [3] to engage with the campaign. Here's a slideshow by the UNFPA about the population challenges ahead.
__label__pos
0.974392
Abstract:This article proposes a space–time meshless approach based on the transient radial polynomial series function (TRPSF) for solving convection–diffusion equations. We adopted the TRPSF as the basis function for the spatial and temporal discretization of the convection–diffusion equation. The TRPSF is constructed in the space–time domain, which is a combination of n–dimensional Euclidean space and time into an n + 1–dimensional manifold. Because the initial and boundary conditions were applied on the space–time domain boundaries, we converted the transient problem into an inverse boundary value problem. Additionally, all partial derivatives of the proposed TRPSF are a series of continuous functions, which are nonsingular and smooth. Solutions were approximated by solving the system of simultaneous equations formulated from the boundary, source, and internal collocation points. Numerical examples including stationary and nonstationary convection–diffusion problems were employed. The numerical solutions revealed that the proposed space–time meshless approach may achieve more accurate numerical solutions than those obtained by using the conventional radial basis function (RBF) with the time-marching scheme. Furthermore, the numerical examples indicated that the TRPSF is more stable and accurate than other RBFs for solving the convection–diffusion equation. Keywords: space–time; transient radial basis function; convection–diffusion equation; meshless; radial polynomials Abstract: https://www.mdpi.com/2227-7390/8/10/1735 PDF Version: https://www.mdpi.com/2227-7390/8/10/1735/pdf By admin
__label__pos
0.91959
ELA Grade 1 Text Set Earth's Place in our Solar System The first text, “What Is the Solar System?” offers a broad overview of the Solar System as a whole and its composition. Next, “What’s Up in Space” is a brief text that defines the composition of the Sun and planets. In “Explore Space,” students will then be presented with more specific information about each planet’s characteristics. Ending the broader study of the Solar System is the “National Geographic Reader - Planets.” Once students have built this necessary knowledge, they are ready to examine more closely our planet using the book “On Earth.” This book focuses on the rotation and revolution of the Earth and how that impacts its daily and yearly cycles. It will provide students with information about how day and night works as well as the seasons. Finally, “Introducing Planet Earth” will be used to further pinpoint what makes life on Earth unique from the other planets in the Solar System. The last article, “Not too Hot, Not too Cold” (found under Recommended Texts) is quite complex but sheds scientific light on the discovery of another planet that is similar to Earth and helps students to consider the vast number of planets and possibilities in the larger galaxy and universe. Note to instructors: New articles emerge daily on our quest to find “new life” in space, please feel free to explore new discoveries as they are made. There are no tags for this resource
__label__pos
0.999466
Robust knowledge within the natural sciences is that Robust knowledge is knowledge that is valuable,relevant and accepted within the context of its application and its area ofknowledge. It is achieved when the knowledge is reliable, pivotal andconstructed in a legitimate way. Facts can be seen as robust knowledge as afact can be defined as information which is justified beyond doubt, thereforerobust knowledge includes, but is not limited to, facts. The reason robustknowledge is not limited to just facts is because of personal knowledge; sharedknowledge refers to facts that are socially acceptable, however one may havetheir own personal knowledge, which isn’t deemed as factual, that is acceptedas robust knowledge in their personal belief. Consensus refers to the idea ofgeneral agreement; opposing the idea of personal knowledge.The synthesis ofrobust knowledge in the natural sciences are linked to the necessity fordevelopment to help the world move forward medically and technologically.Similarly, robust knowledge from history is linked to the form the basis formodern day justice systems and political decisions. In spite of robustknowledge consisting of sturdy information, discrepancies may exist whenattempting to link facts and information across all areas of knowledge, such asfaith; faith ultimately refers to one’s personal beliefs, opposing the idea ofsocially robust knowledge, or ethics, as ethical information essentially cannotbe tested and proven or disproven. We Will Write a Custom Essay Specifically For You For Only $13.90/page! order now  AOK #1:NATURAL SCIENCESCLAIM: For robust knowledge to exist within the natural sciences, disagreementis necessary. Knowledge within the natural sciences can be considered robust ifit can withstand continuous criticism which derives from disagreement, whichleads to the examination and further testing of a theory.EXPLAIN: Science essentially works because scientists disagree; ideas arechallenged and different ways to interpret and analyse information are foundand utilised, leading to the resulting outcome being based on a robustfoundation of strong knowledge. If criticised knowledge emerges with consensus,it is robust, otherwise it ceases to be knowledge at all. The reason whydisagreement is highly necessary in the formation of robust knowledge withinthe natural sciences is that the examination that comes along with it demands deeperrecollection and judgement instead of mere settlement or easy agreement. Theway a theory within the natural sciences begins to be considered a fact is bythe testing and confirming of data. Without disagreement in the naturalsciences, many things that were believed in the past that have been falsifiedwould continue to be believed now, such as the theory that the Earth is flat.Disagreements in the natural science are fruitful are they encouragediscussions and further examination of a theory until it is proven not justsuggested, leaving no room for further disagreement.EXAMPLE: An example of the necessity of disagreement within the natural sciencesfor the formation of robust knowledge is the falsification of the phlogistontheory. During the mid-17th century, physicians suggested the existence of afire-like element called phlogiston, which was said to be contained withincombustible substances and released during combustion. The theory suggestedthat charcoal, for example, left very little residue after combustion becauseit consisted of nearly purely phlogiston. This was the accepted belief untilScheele’s experiment led chemist Antoine-Laurent Lavoisier to prove, in 1779,that when oxygen was isolated, fire could be seen as a result of a chemicalreaction, instead of as an element in the reaction itself. It was Lavoisier whonamed the element, oxygen. By falsifying the Phlogiston theory, Lavoisier ledto the birth to the field of modern chemistry. That made for dramatic changesin the science thereafter, and it was due in large part to the discovery ofoxygen.COUNTERCLAIM: On the other hand, one might argue thattheories within the natural sciences require a general consensus amongst thescientific community after the observations, experiments, investigations andresults are found to be consistent and reliable. The robustness of knowledge inthe natural science could solely depend on consensus, without the necessity fordisagreement.EXPLAIN: Scientific consensus is what most scientists in a particular areaof study agree is true on a specific question, where disagreement on thequestion is limited and insignificant. For example, if the scientific communityis asked: “If I let go of this apple in my hand, will it fall to the ground?”The answer will be “yes,” according to the scientific consensus that the appleis susceptible to Earth’s gravity. It’s worth mentioning that a scientificpaper is not an average or an ordinary paper. It is a paper that is evaluatedby a group of specialists in the field of study who examine its limitations,its experimental procedures, and its findings. Only after the paper has beenthrough this review process is it accepted for publication, and each new paperbuilds on the information communicated in the papers that preceded.Thus, theemergence of a scientific consensus isn’t dependent on a majoritarian rule. Itactually highlights the fact that many great scientists from differentbackgrounds have considered the question at hand and have reached similarconclusions. That doesn’t mean that science is flawless or always 100% correct.It is important to remember that science is adaption; it’s change. But what it does mean is that we have a pretty goodunderstanding of how things work, and it will take an enormous, perhapscurrently impossible, amount of evidence to change our current understanding.EXAMPLE: For example, scientific consensus agrees that climate change is realand that it’s caused by human activity. A recently published paper by John Cookalong with seven other researchers of climate change studies found that 97% ofpublishing climate science authenticate the consensus that climate changeoriginates from human activity. Furthermore, the paper found that the studiesconducted by more expert scientists confirmed the consensus more. Whena group of scientists, each with years of expertise in a particular area ofstudy, cooperate to make a scientific claim, it doesn’t suggest that theargument is completely over, but it does mean that the claim doubtlessly lackspersonal opinion or beliefs, and that, thanks to modern day technology anddevelopment, it is supported by enormous amounts of evidence, thussubstantially leaving no room for scientists to debate this specific issue, asit’s largely answered, rather that they should build upon this agreement. Inshort, a scientific consensus tells us things that we have already learned, andit lets us know when things have stopped being debated in the sciences.AOK #2:HISTORYCLAIM: For robust knowledge in history, consensus and disagreement mustcoexist within knowledge production.EXPLAIN: In history, strongknowledge is derived from facts about what happened in the past, why certainevents happened, their causes and effects, and the interpretation of thoseevents. Historians collect information by comparing accounts from a widevariety of sources, in search of common features that can verify a credibleclaim, therefore disagreement is necessary in order to ensure the confirmationand the validity of a theory. Recently, in the knowledgeproduction there has been an accepted norm of exploring several perspectivesthat reflect different views including considering the disagreements of ahistorical event. The issue is that singular perspectives hinder the validityof information. Thus, historical controversy arises due to several reasonsincluding a historian’s personal point of view of society, the effect of thecurrent social and political climate on the historian, the historical approachused  by the historian, the differenttypes of evidence used and how they have been understood, or newly foundevidence.EXAMPLE: An example of historical controversy is the different schools ofhistoriography. Counterfactual history refers to the ‘what if…?’ approach, andalthough some historians disregard it as it can mean dealing in possibilitieswhich will never be proven, it serves a very valuable purpose in testing howcompelling historical explanations are and makes room for discussion. Anexample of this is the analysis of what caused the First World War. A WhigHistorian, such as Taylor, would argue that the war was caused by theassassination of Franz Ferdinand, therefore a possible question would be “Whatif Franz Ferdinand had survived the assassination?”, and possible insights wouldbe that Austria then wouldn’t have had a reason for war with Serbia, althoughthey could have found an excuse at some other point, so did the assassinationspeed up or actually cause the beginning of the war? A counter argument couldbe made by a Marxist historian, such as Lenin, that capitalism was the reasonfor the outbreak of the war, leading to the question “What if the Euro had beeninvented in 1914?”, causing the possible insights to the bitterness of France’sloss of Alsace-Lorraine would have worn off, however the strive for colonieswas as much about status as economics, so should the issue of colonies beconsidered an issue as much related to national status as economics? COUNTERCLAIM:Robust knowledge in history differs between communityto community and relies on knowledge production that is mainly consensual,whereby disagreement is unnecessary.EXPLAIN: In one knowledgecommunity, what may be considered robust knowledge may not be in another.Shared knowledge within a community relies on consensus within that community,however doesn’t require consensus between other communities. Disagreementbetween communities can exist, but this will not affect the consideration ofrobustness of knowledge or information. EXAMPLE: An example of this is the Textbook Crisis in Japan over the Nanjingmassacre. Japanese students are taught in history class using a textbook thatonly refers to the Nanjing massacre in one line and refers to it as an”incident”, and glosses over the issue of comfort women. Nobukatsu Fujioka is an author ofa history book who said “It was a battlefield so people were killedbut there was no systematic massacre or rape,” “The Chinesegovernment hired actors and actresses, pretending to be the victims when theyinvited some Japanese journalists to write about them.” and that “All ofthe photographs that China uses as evidence of the massacre are fabricated”. Furthermore, there are only seven history textbooks that are approved by theMinistry of Education in Japan which schools are allowed to use. Mariko Oi, a Japanese student,researched into the historical perspective of other countries and foundcontroversy over the event. “The Chinese say 300,000 were killed andmany women were gang-raped by the Japanese soldiers, but as I spent six monthsresearching all sides of the argument, I learned that some in Japan deny theincident altogether.” In 2005, protests were held in China and South Koreabecause of a textbook prepared by the Japanese Society for History TextbookReform, which had been approved by the government in 2001. Foreign critics saidit  concealed Japan’s war record duringthe 1930s and early 1940s. Therefore it is clear that because students in Japanare taught one thing that is approved by history textbooks and famous historians,their knowledge of the Nanjing massacre is considered robust to them, due toemotion getting in the way, despite another community, which is China in thiscase, opposing their belief, so what is considered robust in the sharedknowledge of one community is not affected by the disagreement from anothercommunity. CONCLUSIONIn conclusion, it is clear that defining robustknowledge is highly complex and it differs significantly between one field ofstudy and another. This is because for robust knowledge to exist, many aspectsneed to be considered including the synthesis of the knowledge, the evidenceprovided, and the culture or community in question. Another important factorthat could hinder whether something is considered robust or not is the questionof personal versus shared knowledge, as a piece of information may beconsidered robust to one person due to faith or intuition or emotion, wherebyit cannot be proven strongly by reason, thus making it seem like frailknowledge to others. The same applies to communities or countries as shown bythe Nanjing massacre example. Furthermore, it is difficult to specify andstandardise the robustness of knowledge across different areas of study such asthe sciences and history due to the intense variation between the methods ofsynthesis of knowledge. The two areas of knowledge use completely differentmethods to produce knowledge therefore are incomparable in terms of robustness.Thus, although it makes sense to say that consensus and disagreement are bothrequired for robust knowledge, this statement can be opposed due to the idea ofpersonal knowledge and the ways of knowing of faith, intuition and emotion. I'm Mary! Check it out
__label__pos
0.862659
TY - JOUR AU - KHUDOYAN, SAMVEL PY - 2017/12/19 Y2 - 2022/01/22 TI - THERAPY OF MEANING DEPRIVATION AS A METHOD OF OVERCOMING GAMBLING ADDICTION JF - Main Issues Of Pedagogy And Psychology JA - miopap VL - 15 IS - 3 SE - DO - 10.24234/miopap.v15i3.196 UR - https://miopap.aspu.am/index.php/miopap/article/view/196 SP - 63-65 AB - <p>The author?s method of gambling addiction therapy is represented in the article. The method is based on the assumption and understanding of the meaning of an activity which motivates the person to its realization. The understanding of lack of the meaning in an unwanted behavior (in this case, gambling) and it will contribute to overcome it.</p> ER -
__label__pos
0.999865
How is engine fuel consumption calculated? The following formula is used to calculate fuel consumption in litres / 100kms, the most commonly used measure of fuel consumption. (Litres used X 100) ÷ km travelled = Litres per 100km. How is engine fuel consumption measured? In most countries, using the metric system, fuel economy is stated as “fuel consumption” in liters per 100 kilometers (L/100 km) or kilometers per liter (km/L or kmpl). How do you calculate actual fuel consumption? Divide mileage by fuel usage to see your car’s fuel consumption. This tells you how many miles you drove per gallon of gas. For example, if you drove 335 miles before refueling, and you filled your car up with 12 gallons of gas, your fuel consumption was 27.9 miles per gallon, or mpg (335 miles / 12 gallon = 27.9 mpg). What are the factors that determine engine fuel consumption? Factors that affect fuel efficiency • Driving behaviour: Rapid acceleration, speeding, driving at inconsistent speeds and even extended idling can increase your fuel consumption. … • Weather: The colder it is, the worse your fuel consumption will be. … • Weight: It’s a fact that lighter cars use less fuel. IMPORTANT:  What happens if you put too big of a motor on a boat? What is the rate of fuel consumption? Fuel consumption is the inverse of fuel economy. It is the amount of fuel consumed in driving a given distance. It is measured in the United States in gallons per 100 miles, and in liters per 100 kilometers in Europe and elsewhere throughout the world. What is the average fuel consumption in kilometers per Litre? The average fuel economy for new 2017 model year cars, light trucks and SUVs in the United States was 24.9 mpgUS (9.4 L/100 km). 2019 model year cars (ex. How much fuel does a 1500cc car consume per kilometer? The most popular Fielder choice is the 1500cc model which has an average consumption of about 15 km/L. How do I calculate fuel consumption per Litre? Or, to work it out yourself: 1. Fill your tank to the top. 2. Zero the trip counter. 3. When you next fill up, note the mileage driven. 4. Fill the tank again and note the number of litres put in. How is LPG consumption calculated? For example, a 25MJ heater will consume 1 litre per hour or a 91,502 BTU heater will consume 1 gallon of propane per hour. LPG – Propane Gas Consumption Conversion Chart LPG – Propane Unit of Measure MJ BTU 1 KG 49 46,452 45 KG 2205 2,090,340 1 Gallon 96.5 91,502 How can I reduce fuel consumption? 10 ways to reduce fuel consumption 1. Keep tires pumped up. Tires that are underinflated have a higher rolling resistance on the road. … 2. Lose the weight in your boot. … 3. Drive with AC. … 4. Don’t go too fast or too slow. … 5. Remain steady when accelerating. … 6. Avoid braking aggressively. … 7. Cruise in top gear. … 8. Practice predictive driving. IMPORTANT:  What are Nascar windshields made of? In which type of engine fuel consumption is less? Generally speaking, diesels are about 25 per cent more fuel efficient than petrol engines in normal urban and highway use. However, hybrid engine installations have progressed to the point where they are the most efficient of all. What causes bad fuel consumption? Does engine size determine fuel consumption? A car’s engine size, also known as the engine capacity or simply CC, is the size of the volume swept by each of the cylinders, which inside combine and burn air and fuel to generate energy. The larger the engine size, the more fuel your vehicle consumes, the more power it produces, and the more your car accelerates. How do I calculate fuel usage per km? You can take the cost for your fuel per tank and divide it by the kilometres driven to find out your fuel cost per kilometre. e.g. $130 / 800km = $0.16 per kilometre. How do you calculate fuel cost per km? If you know the price of fuel, then you can simply multiply the price per litre by the result and that gives you your cost per 100km. E.g. if fuel is $2, then 8.98l/100km means that it takes $17.96 of fuel to travel 100km, or around $0.18 per kilometre, not including your other costs like wear and tear. IMPORTANT:  Are electric car batteries harmful to your health?
__label__pos
0.937776
History Podcasts Congregational churches are direct descendants of the Puritan movement in England. In 1648, the Cambridge Synod adopted the Westminster Confession, an extremely Calvinistic expression of dogma. Congregationalism expanded due to the substantial immigration between 1640 and 1660. Harvard and Yale colleges were established to ensure the training of Congregationalist ministers. In 1708, the Connecticut Congregationalists adopted the Saybrook Platform which tended more towards Presbyterianism than their brethren in Massachusetts. New England Congregationalism experienced further growth during the Great Revival (1734-1744). However, doctrinal issues split a number of churches. The ongoing disputes led to the emergence of Unitarianism in 1820. Although the westward expansion of the nation included many Congregationalists from New England, the church did not have an organization for capitalizing on the new territories. As a result, Congregationalism remained largely a New England phenomenon. Watch the video: Searching the Scriptures: What Are Congregationalists? S4E02 (January 2022).
__label__pos
0.957689
Brief information about the Kunduz Kunduz Kunduz (; Pashto: کندز‎; Dari: قندوز‎ original name: کُهَندِژ) is a city in northern Afghanistan, which serves as the capital of Kunduz Province. The city has a population of about 268,893, making it about the 6th-largest city of Afghanistan, and the largest city in the northeastern section of the country. Kunduz is located in the historical Tokharistan region of Bactria, near the confluence of the Kunduz River with the Khanabad River. Kunduz is linked by highways with Kabul to the south, Mazar-i-Sharif to the west, and Badakhshan to the east. Kunduz is also linked with Dushanbe in Tajikistan to the north, via the Afghan dry port of Sherkhan Bandar. The land use of the city (within the municipal boundary) is largely agricultural (65.8% of total area). Residential land comprises nearly half of the 'built-up' land area (48.3%) with 29,877 dwellings. Institutional land comprises 17.9% of built-up land use, given that the airport is located within the municipal boundary. Kunduz Attractions
__label__pos
0.955412
Alien vs. Predator Central logo White Xenomorphs Throughout the Alien series, there have been several special types of white and albino Xenomorphs. The reasons for their existence include old age, inherited human traits due to DNA mixing and special pigmentations in their skin. Here is a list of notable white Xenomorphs, including the Newborn, Neomorph, Albino Drone, and the White Hybrids. White Hybrids White Hybrid from Aliens vs. Predator: Deadliest of the Species The white hybrids were a new race of creatures created by the rogue AI named Toy in the Aliens vs. Predator: Deadliest of the Species comic series. They inherited the traits of both Xenomorphs, Yautja and humans, but mostly looked like Xenomorphs with white skin. They had acid for blood and still used facehuggers for reproduction. However, they were more intelligent than regular Xenomorphs, some even being able to speak. The white hybrids fought against Big Mama Predator and her allies. White Hybrid King vs. Trophy Hatchling White Hybrid King vs. Trophy Hatchling The leader of the white hybrids was the Hybrid King - a big creature resembling a Predalien. The Hybrid King had the ability to speak, a trait inherited from humans. After Ash Parnall was infected by a special facehugger, she birthed another white Xenomorph - named the Trophy Hatchling (ash Ash herself was a trophy wife). The Trophy Hatchling allied itself with Big Mama and fought the Hybrid King to a stalemate. The race of white hybrids was wiped out when the skyliner belonging to the Montcalm-Delacroix et Cie exploded in orbit. Only the Trophy Hatchling survived and joined the crew of Big Mama, wishing to be something better than the evil race of Xenomorphs. Albino Drone Xenomorph The Albino Drone Xenomorph from NECA In the initial script draft of Aliens by James Cameron, Ripley encountered albino drones in the Xenomorph hive when going to rescue Newt. These were slightly smaller types of Xenomorph, with white skin and a long pink tongue. The tongue was used to help build the hive - to mold the resin on the walls and to cocoon the victims. These white Xenomorphs were probably cut due to budget reasons and Cameron rewriting the script to be more focused. The concept was later canonized by NECA who released the albino drone Xenomorph figures, although their size was the same as regular drones. The Newborn from Alien: Resurrection The Newborn from Alien: Resurrection inherited its light skin from the DNA traits of Ripley 8, the clone of Ellen Ripley. While the Auriga Queen was its mother, Ripley 8 can be considered to be its grandmother. Not just light in tone, the skin was also soft and not armored like a regular Xenomorph. Its other human traits included eyes, a tongue and a nose. The Newborn was unmatched in strength, being able to kill an Alien Queen in one swipe and is probably the strongest Xenomorph type on the list. The Neomorph from Alien: Covenant The Neomorph was a preliminary type of Xenomorph, being similar to the Deacon. While the Deacon was blue-skinned, the Neomorphs were white. The Neomorphs were created after David released the Black Goo biomatter into the atmosphere of the Engineer planet. The Black Goo fused with the plantlife to create small eggs which later hatched and infected humans exploring the planet from the Covenant ship. Born as a bloodburster, the Neomorph was an agile and vicious creature, with a thicker skin than a Xenomorph and missing the inner-jaw as well. White Matriarch Queen The Old White Queen from Aliens vs. Predator 2010 While other Xenomorphs have turned white due to DNA manipulation or albino pigmentation, the Matriarch Queen on BG-386 turned to whitish-grey color due to old age. She was over a thousand years old when she died at the hands of a lone Colonial Marine during the events of Aliens vs. Predator 2010. Her skin had formed cracks and ridges due to old age as well. Notably, the hundreds of years old Antarctica Queen from the first Alien vs. Predator movie seemed to be a lighter tone as the young one in Aliens as well. Irradiated Xenomorph The Irradiated Xenomorph from Aliens: Aftermath The Alien Expanded Universe has an unhealthy habit of returning to Hadley's Hope, even though it was destroyed in the Atmospheric Processor's explosion. In Aliens: Aftermath, a group of extremists led by Cutter Vasquez (a cousin to Jenette Vasquez) landed on LV-426 about 35 years after the events of Aliens. In the ruins of the colony, they quickly discovered an irradiated Xenomorph, who started to kill the members of the group. The Xenomorph had adapted to and transformed in the radiation of the Atmosphere Processor and was especially deadly, as anything it touched started melting. Cutter Vasquez and the Alien killed each other at the end. Number Seven Xenomorph The Number Seven Xenomorph from Aliens: Infiltrator The Number Seven Xenomorph from Aliens: Infiltrator had been transformed white by experimentations with the Black Goo on Pala station. A dozen Aliens experimented on yielded different results, and Number Seven was probably the most unique. It had a telepathic link with other Aliens and could command them into battle. Number Seven did not do much fighting itself and was killed when a group of survivors caused an explosion near the shuttle bay of the station. Dr. Timothy Hoenikker was the final survivor of the station and turned up later in the Aliens: Fireteam Elite game. Want to know more about the Xenomorph lore? Check out the lists about the android Xenomorphs and the female Xenomorphs. Recent Articles Predator Wristblade Variants Predator Wristblades Top 10 Lost Tribe Predators The Lost Tribe Predators from the end of Predator 2
__label__pos
0.834023
January 22, 2022 Lebanon (Arabic: لبنان, Lubnan), official name is the Republic of Lebanon (Arabic: الجمهورية اللبنانية; al. washed by the Mediterranean Sea. The area is 10,452 km2, the capital is Beirut. Terrain: a narrow coastal plain, the Bekaa Valley stretches from north to south between the Lebanese and Antilivan mountain ranges. The head of state - Michel Aoun, since 2016; head of government - Saad Hariri, since 2016; political system - a confessional democratic republic. Exports: citrus and other fruits, vegetables, industrial products to neighboring Arab countries. Population 4224 thousand (2008) (Lebanese 82%, Palestinians 9%, Armenians 5%); languages: Arabic and French (both state), Armenian, English. Recent History: Independence from France was gained in 1943, and the Palestine Liberation Organization (PLO) was founded in Beirut in 1964. The civil war between Christians and Muslims began in 1975-1976 and ended in 1989 with the Taif Agreement. The pro-Syrian administration was re-elected in 1992, but was later overthrown during the 2005 Cedar Revolution. In March 2020, the country defaulted. After that, the country began an economic and then an energy crisis. In October 2021, due to fuel shortages, two power plants in the country were shut down and the country was left without electricity for a day. On January 8, 2022, the situation repeated itself. The territory of Lebanon is characterized by mountainous and hilly landforms. Plains are found on the Mediterranean coast. The lowlands include the Bekaa Valley, located in the heart of the country. The territory of Lebanon can be divided into four physical and geographical areas: coastal plain Lebanon ridge Bekaa Valley Antilivan ridge with mountain range and Ash-Sheikh (Hermon). Coastal Plain The width of the coastal plain does not exceed 6 km. It is formed by the sickle-shaped lowlands facing the sea, limited by the spurs of the Lebanon ridge, which go into the sea. Lebanon Range The Lebanon Range forms the largest mountainous region in the country. The whole area, composed of thick layers of limestone, sandstone and marl, belongs to a single folded structure INSERT INTO `wiki_article`(`id`, `article_id`, `title`, `article`, `img_url`) VALUES ('NULL()','Ліван','Lebanon','','https://upload.wikimedia.org/wikipedia/commons/thumb/5/59/Flag_of_Lebanon.svg/languk-900px-Flag_of_Lebanon.svg.png')
__label__pos
0.96462
When the Speaker or the Chair announces that the yeas and nays are ordered and a recorded vote is ordered or announces that a quorum is not present and the yeas and nays are automatic, the vote is taken by electronic device. A Member casts a vote by electronic device by inserting a voting card into the nearest voting station and pressing the appropriate button: “yea,” “nay” or “present.” It is advised that Members go to another voting station and reinsert their voting card until the light comes on and verifies the vote cast at the first station. Members should also visually check the voting board to make sure that the light next to their name reflects their intended vote. Members that do not have their voting card should go to the table in the Well and obtain an appropriate voting card from the boxes placed there (green card for yea, red card for nay, orange card for present). The Member should sign the card and give it to the Tally Clerk who will be standing on the first level of the rostrum. The Clerk will then register the vote into the computer, but the Member should visually check the board to make sure the vote is recorded correctly. Members deciding to change their vote may do so by reinserting their card into a voting station and pressing the appropriate button during the first ten minutes of a fifteen-minute vote, or at any time during a five or two- minute vote. However, during the last five minutes of a fifteen-minute vote, a change in a Member’s vote can only be made by going to the Well, taking a card from the table, signing it, and handing it to the Tally Clerk on the rostrum. The Clerk then registers the change and a statement will appear in the Congressional Record indicating that the Member changed his or her vote. Members using this procedure to change their vote should be sure to check the board to see that it reflects the change. Also, Members may change their vote during a five or two-minute vote by machine and no statement about the change will appear in the Congressional Record unless it comes after the voting stations are closed and before the result of the vote is announced. NOTE: Once the record vote ends (by the Chair announcing the result), and the motion to reconsider is laid on the table, the vote is final — no further voting or changing is permitted. However, if a Member has missed the vote he or she may submit a statement declaring how he or she would have voted had he or she been present. Such an explanatory statement containing the Member’s original signature will be inserted in the Congressional Record at the point immediately after the vote. A suggested script for such an explanatory statement on missed or mistaken votes may be obtained from the Floor staff. It is important to remember that this statement does not affect whether or how the Member is recorded on the vote. Clause 2 of Rule III specifically prohibits Members from allowing another person to cast their vote and from casting the vote of another Member. This unethical action was banned at the beginning of the 97th Congress. The allotted time for a quorum call or recorded vote under the Rules of the House is not less than fifteen minutes (clause 2 of Rule XX). It is the prerogative of the Speaker or presiding officer to allow additional time beyond the fifteen minutes. Often one will hear Members calling “regular order” when an electronic vote extends beyond fifteen minutes under the mistaken impression that recorded votes are limited to fifteen minutes — they are not limited. The regular order is to allow more time on recorded votes if the Chair desires. In the 110th Congress, clause 2 of Rule XX was amended to prohibit a vote from being held open for the sole purpose of reversing the outcome of the vote. This provision was not included in the 111th Congress rules package on the recommendation of a bipartisan select committee because it was found to be unworkable in practice. It has been the custom of the House since the 104th Congress to attempt to “limit” these fifteen-minute votes to seventeen minutes. The Chair should allow all Members who are on the Floor before the final announcement to be recorded, but is not obliged to hold the vote open to accommodate requests through the Cloakrooms for Members “on their way” to the House Floor. In the 112th Congress, clause 6 of Rule XVIII, was modified to allow for two-minute voting. Two-minute voting is only permissible in the Committee of the Whole. 112th Congress House Floor Procedures Manual
__label__pos
0.783086
A User's Guide Gebonden Engels 2021 9780241481929 Verkooppositie 3385Hoogste positie: 3385 Verwachte levertijd ongeveer 8 werkdagen From the bestselling author of Team of Teams and My Share of the Task, an entirely new way to understand risk and master the unknown. In this new book, General McChrystal offers a battle-tested system for detecting and responding to risk. Instead of defining risk as a force to predict, McChrystal and coauthor Anna Butrico show that there are in fact ten dimensions of control we can adjust at any given time. By closely monitoring these controls, we can maintain a healthy Risk Immune System that allows us to effectively anticipate, identify, analyze, and act upon the ever-present possibility that things will not go as planned. Drawing on examples ranging from military history to the business world, and offering practical exercises to improve preparedness, McChrystal illustrates how these ten factors are always in effect, and how by considering them, individuals and organizations can exert mastery over every conceivable sort of risk that they might face. We may not be able to see the future, but with McChrystal’s hard-won guidance, we can improve our resistance and build a strong defense against what we know—and what we don’t. Aantal pagina's:368 Hoofdrubriek:Algemeen management Wees de eerste die een lezersrecensie schrijft! Geef uw waardering Zeer goed Goed Voldoende Matig Slecht Managementboek Top 100 Populaire producten
__label__pos
0.963643
Rickard's - Canada Rickards Red Rickard's Red A full-flavoured, medium-bodied red ale, Rickard's features a slight hoppy bitterness balanced with candy-like caramel malt sweetness. Brewed using a signature Munich malt with a touch of brewer's caramel to create a striking, ruby red appearance and a bold, refreshing taste. A perfect complement to grilled meats.
__label__pos
0.966037
Photo Montage from THE MITZVAH.jpg A combination of the German words appell meaning “roll call” and platz meaning “place.” Prisoners in the concentration camps had to line up twice a day – in the heat of summer and freezing cold of winter – to be counted. If someone died between the morning and evening count, the other prisoners held up their corpse for the often hours-long process. Those who did not stand still were beaten and even killed. A made-up concept by the Nazis. The so-called “Aryan Master Race” system graded humans from pure Aryans – Germans and other Nordic blonde, blue eyed types – to non-Aryans, most of whom were viewed as sub-humans. Auschwitz, Sachsenhausen, Sobibor, Bergen-Belsen Nazi concentration and death camps. The largest city in northeast Poland. In 1936, almost 43,000 Jews lived in Bialystok comprising about 43% of the population. By 1948 a little over 600 remained. A white supremacist pseudoscience that held that white or “Nordic” people are superior to other races and ethnicities and should be dominant over them. America’s Eugenics movement was the political driver behind the Immigration Restriction Act of 1924, which effectively put an end to the immigration of Italians, Slavic peoples and European Jews. Rectangle 2 copy.jpg Eisernes Kreuz Erste Klasse (Iron Cross First Class). A German army medal awarded to soldiers for conspicuous bravery on the field of battle. The fascist movement in Spain that became the sole political party under Spanish dictator and Nazi ally, Francisco Franco. Glatt Kosher Foods produced under the strictest Kosher standard of Jewish dietary laws. A heavily reinforced underground bunker in Berlin near the Reich Chancellery. It was part of a subterranean complex used by Adolf Hitler during World War II and also where he committed suicide in the final days of the war. Jim Crow Laws and customs used to oppress blacks including bans on interracial marriage and separation between races in  public and places of business. There were also a number of laws meant to keep blacks from voting including the poll tax, which required that a tax be paid in order to register to vote. The tax, which was designed to impede the poor, especially poor people of color, was used by a number of states until 1965. Another voting restriction was the Grandfather Clause, which was enacted after 1890 mainly in southern states. The clause exempted white men from new literacy and property voting requirements if the they and their forefathers voted before 1867. German word for “Jew.” (Pronounced Yoo-deh.) The concentration camp infirmary which provided little or no medical care. A semi-automatic pistol produced between 1898 and 1948. The German model used during World War II was a Wehrmacht P08. A derogatory term used by the Nazis to characterize Germans of Jewish descent who had one or two Jewish grandparents. Concentration camp slang for the “walking dead,” humans who were barely alive; reduced to skin and bones. Oath to Hitler Beginning on August 2, 1934, when Hitler became Germany’s dictator, all members of the German military swore an oath of loyalty to Hitler instead of the German constitution as they had previously. German Army First Lieutenant. The Treaty of Versailles After the end of World War I (July 28, 1914 – November 11, 1918), the victorious allies (mainly France, England and the United States) imposed extremely harsh peace terms on Germany. The treaty, signed in June 1919, held Germany responsible for  starting the war and imposed severe penalties including loss of territory, onerous reparation payments and demilitarization. The treaty humiliated Germany and caused widespread economic hardship and resentment that played a significant role in fueling the rise of Adolf Hitler and the Nazi Party.
__label__pos
0.904129
The Different Uses of Crystals In the world of witch’s spells and rituals, crystals have played a significant role. With a versatile number of uses, witches are not the only ones to use crystals for their benefit. The kind of material you work with provides variations in results and levels of power. Most often, the largest crystals are seen as the most powerful than smaller ones. Rare and expensive materials are also favored for their characteristics. In this article, you will learn more about crystals, such as the many different uses. What Are Crystals Used For? Witches are often associated with crystals and for many different reasons. Some are used as pendulums. Crystals have a long history with healing and aligning the body’s energy centers. Cleansing and purification rituals may use crystals. If a crystal is a part of a wand or other toll of magic, it could be used to direct energy. When creating amulets and talismans, crystals can be used for their power and protective nature. When using crystals with other gems, they are seen as a way of intensifying the power. They could become an important part of casting a spell or performing a ritual. Crystals are used in scrying, which is a technique of divination and revelation, where someone gazes intently at an object in hopes of seeing a vision , oftentimes of the future. Crystals are also used to designate sacred spaces, enhance focus, further relaxation during meditation, and retain information. If you steep crystals in water, they are thought to create a “healing crystal elixir.” A person should first charge a crystal with their intention, and then place it in a glass filled with spring water. Depending on what you wish to accomplish, you will situate the glass of water in sunlight or moonlight. The crystal will unleash its vibrations into the water. Remove the crystal and then drink the remaining water. Crystals are believed to have the ability to communicate with users and other stones. If you have one in your possession and go somewhere centered on information, such as a class or talk , the crystal is thought to absorb the details and help you remember what was said. Casting Spells with Crystals Witches would rely on crystals when casting spells, as the stones possessed a reputation for being versatile. They could play a role in nearly any spell or ritual , spanning from healing to divination. Crystals were used for practices that required conjuring up energy. For example, if you find yourself not obtaining the things you want out of life, it is suggested to seek help from a clear quartz crystal. When made into jewelry, the snowy white color of the quartz is associated with removing self-blocks. People believe they can tap into their inner resources better when using this type of quartz.
__label__pos
0.986251
Cost – Effective Solutions A variety of cost calculations are required on every project to determine what design approaches will generate the most advantages and allow budgets to be allocated most efficiently. Initial, in-ground costs are the most obvious expenses. But hidden and longer-term costs are becoming more significant as owners and designers study material’s entire budget impacts. The key to efficient design is understanding that each system choice impacts other decisions. The goal is to ensure all products and systems work together without creating redundancy or inefficiencies. For instance, creating a total-precast concrete structural solution, including spandrels with any finish desired, can eliminate trades and reduce crews at the site, alleviating congestion and improving safety. It also condenses structure and finish into one single-source provider. Precast concrete’s long-span capabilities can minimize shear walls, while lite walls can improve sight lines and enhance wayfinding and security. Maintenance needs throughout a structure’s life are a key hidden cost that must be considered. These costs accrue to the operating budget rather than the construction budget, and sometimes are not considered in evaluating the structure’s long-term costs during the design phase. Precast concrete offers high durability, with joints that need replacing after 15 years or so, with an annual inspection quickly and easily accomplished. Blakeslee also offers a personalized, in-depth maintenance schedule to ensure the structure remains at peak efficiency. A precast concrete structural system can save money in a variety of ways, speeding construction, eliminating trades from the site, reducing material needs, and allowing projects to be built on tight sites without disrupting the neighborhood. These benefits allow budget to be shifted where it can have greater impact while ensuring the project is constructed on time and on budget, with a pleasing appearance. And the cost savings will continue throughout the structure’s service life.
__label__pos
0.887894
Objective of Fig Objective of Fig. for the gain-of-function by integrating a particular gene appealing into the candida chromosome. Genetic attributes such as dominating or recessive phenotype of the identified mobile protein could possibly be examined straight through haploid or diploid phases of the HIV-1 integrase inhibitor 2 candida life routine. Finally, an determined mobile factor may be confirmed by practical complementation using candida or additional eukaryotic homologues in particular cells. Actually, many human being proteins that are essential to human being biology or illnesses such as for example cancer-associated proteins had been first found out by learning their homologs in Mouse monoclonal to CD13.COB10 reacts with CD13, 150 kDa aminopeptidase N (APN). CD13 is expressed on the surface of early committed progenitors and mature granulocytes and monocytes (GM-CFU), but not on lymphocytes, platelets or erythrocytes. It is also expressed on endothelial cells, epithelial cells, bone marrow stroma cells, and osteoclasts, as well as a small proportion of LGL lymphocytes. CD13 acts as a receptor for specific strains of RNA viruses and plays an important function in the interaction between human cytomegalovirus (CMV) and its target cells yeasts. For critiques of related topics, discover 8,9,10,11. There’s also benefits of using yeasts as model systems to review infections of higher eukaryotes such as for example plant, pet or human infections. The primary reason is basically because yeasts bring their personal indigenous infections. Both positive feeling (+) dual stranded RNA (dsRNA) infections, (+) solitary stranded RNA (ssRNA) infections and retrotransposon components have already been reported in yeasts and additional fungi 12,13. For instance, research of candida killer infections possess helped us to review mobile apoptosis and necrosis during virus-host discussion 14,15,16,17, also to understand potential mobile viral restriction elements toward viral attacks 18,19. Because the integration procedure for candida retrotransposons resembles in lots of ways retroviral integration, molecular research of fission candida Tf components or budding candida Ty elements offered insights into features of retroviruses such as for example HIV or murine leukemia infections 20,21,22. As demonstrated in Desk 1, many (+) RNA infections plus some DNA infections replicate, to different levels, in yeasts. For instance, the first record showing candida as a bunch for the replication of the vegetable viral genome was from Brome mosaic pathogen (BMV), which really is a known person HIV-1 integrase inhibitor 2 in the alphavirus-like superfamily of animal and plant positive strand RNA infections 23. In this scholarly study, candida expressing BMV RNA replication genes and facilitates RNA-dependent transcription and replication of BMV RNA3 derivatives, recommending all cellular elements that are crucial for BMV RNA transcription and replication should be within the candida. Cost synthesis of infectious virions in the candida cell monolayers24Nodamura pathogen (NoV)(+)ssRNAAnimals (Mammals)Just like FHV28AvsunviroidaeAvocado sunblotch viroid (ASBVd)ssRNA circularPlantsSelf-cleavage and replication of ASBVd RNA strands of both polarities33DNA virusesPapillomaviridaeHuman papillomavirus (HPV)dsDNA circularHumansAmount of HPV genome DNA utilizing a cells are usually circular to ovoid in form with 5 – 10 m in size. The girl cells that are produced during cell department are generally smaller sized than mom cells (Fig. 1A). Unlike fission candida, budding yeasts cell HIV-1 integrase inhibitor 2 wall structure consists of both chitin and -glucans. The optimum temperatures for development of can be 30 – 35C. For general experimental reasons, budding yeasts are expanded in the entire candida draw out generally, peptone and dextrose (YPD) moderate at 30C without selection. Regular synthetic described (SD) minimal moderate can be used to grow auxotrophic candida cultures or choose for candida transformants including plasmids. The choice press are generated with the addition of defined combination of amino acids, vitamin supplements and additional components referred to as the drop-out health supplements. A summary of budding candida selectable markers or are accustomed HIV-1 integrase inhibitor 2 to select for the current HIV-1 integrase inhibitor 2 presence of a plasmid 38. Antibiotics such as for example hygromycin B and kanamycin could be utilized as selectable markers 39 also,40. Shape 1 Open up in another window Shape 1: Existence cycles of budding candida (or cells in the lab is.
__label__pos
0.715872
What are Flying Termites? Flying termites, similar to ants, are often visible during fall and spring seasons. These swarming termites are usually mistaken for flying insects. Reproductive winged termites are also known as alates. They are known to reproduce in warm temperature and it takes only a male and female to begin a new colony. As they mate, these creatures are known to cast off their wings. Flying winged termites are attracted towards light and can enter into homes through visible cracks or holes. How Big Are Termite SwarmersFlying termites can be differentiated from winged ants on the basis of wings and antennae. Swarming termites have all four wings of same size and straight antennae. The flying ants have arched antennae and different sized wings. Damp wood termites are common in Arizona, California and coastal pacific regions. They breed in damp coastal places and high altitudes and have large brown wings for swarming. Another common type in America is the subterranean termites. They breed near decayed damp wood, muddy pipes and live in colony beneath the ground. Chemical soil treatment and preventing water leaks can help to control these termites. The dry wood flying termites are visible in the southeast, southwest and parts of California states. Swarming termites in southwest are little larger than those in southeast and have big dark wings. They harm dry wooden furniture, books, floors and shelves. It is ideal to use treated lumber as a cure. Exterminator for this breed includes fumigation and poison injections. Even though the name suggests that these termites fly, they cannot travel long distances. Hence, if you spot them flying near your home, it could mean that any part of the house or surrounding area is infested with these flying termites. Flying termites are natural agents to recycle decayed and rotten wood back to the nature. This is an impressive task but they could also harm the belongings at home. Hence, it is important to exterminate these pests from homes at the earliest. Pictures of Flying Termites Flying Termites subterranean-termite-swarmer Leave a Reply
__label__pos
0.945544
How to support learners with unique needs during distance learning For Parents Oct 30, 2020 Distance learning presents challenges that parents and teachers are unfamiliar with. For learners with unique needs, these challenges can be even greater. During Outschool’s recent Back to Outschool Live event, Outschool Class Quality Specialists Serena Loccisano and David Stoler shared their tips on how learners with unique learning needs can be successful during distance learning. David and Serena shared six common learning barriers and offered tips on how to overcome them. Ultimately, all learners have unique learning needs, and parents and teachers can find solutions that work for everyone. Let’s dive into common challenges and ways to tackle them, so you can support your unique learner as they learn online. Keeping Attention Learners with an attention barrier have trouble focusing on a task and/or completing tasks in their entirety. This can cause learners to shut down, which can be seen as lazy. The reality is that some learners struggle to focus when they are overstimulated. This presents challenges with taking turns, preparing for classes, or focusing for long periods. How to support learners with attention challenges: • Incorporate movement into lessons • Make learning fun and interactive whenever possible • Break large tasks into chunks • Take breaks and embrace flexible scheduling Social interaction Some learners may prefer distance learning because it can provide relief for social anxiety that they experience. These learners feel more comfortable learning from home. Other learners still face challenges with social interaction during distance learning. In these cases, it’s important to allow learners to take one step at a time, gradually interacting with others more and more. Start with one-on-one interaction, make sure they're comfortable, and then move to groups of two and three. Practice and prompting are also helpful solutions. Rehearsing scenarios of what might happen and how learners can respond can reduce anxiety. How to support learners with social interaction challenges: • Begin by practicing 1 to 1 interactions • Rehearse social responses to specific situations • Implement non-verbal alternatives Resources for Supporting Learners with Unique Needs Emotional control Strong emotional responses can sometimes impede learners. These responses can be exacerbated when learners are working to absorb new information. Learners can get easily frustrated, defiant, shut down, and develop poor relationships because it’s difficult for others to be around them. A great way to help overcome this is to create positive reinforcements. Work towards their reward instead of taking things away, which can trigger negative emotions. You can also create plans for their behavior and how they can respond. “Remember, the heart is connected to the brain, so when the emotions are off balance, the brain isn’t going to work in the way we need it to”  - Serena Loccisano How to support learners with emotional control challenges: • Show positive reinforcement • Create a written behavior plan • Implement choice boards • Model and practice self-regulation Verbal expression If your learner has trouble expressing themselves verbally, it may show up as inability to use precise language, inability to form sentences or original ideas, and below average vocabulary. In distance learning, the learner will not engage in discussions, they may repeat what the previous speaker said, and they may not participate in activities as they are not sure what is expected of them. How to support learners with verbal expression challenges: • Pre-teach vocabulary so learners are familiar with them before class • Use sentence stems to prompt learners to develop their thoughts • Repeat important vocabulary words often • Teach note taking skills • Practice conversation skills Auditory processing Auditory processing is how the information you hear gets into your brain. When learners struggle with auditory processing, there's a disconnect between what they’re told and how their brain registers the info. A teacher may say a word, and the learner hears something similar but wrong, making their response out of context. In these instances, the learner may be truly listening, but but they process it incorrectly. Because distance learning occurs largely over Zoom, listening is the primary way of obtaining information. Over video chat, learners may miss verbal cues, causing them to be uncomfortable expressing their ideas. How to support learners with auditory processing challenges: • Provide a quiet space work • Provide written notes and instructions • Speak slowly • Ask learners to repeat directions to check for understanding Task initiation Learners who struggle with task initiation don’t know how to get started on activities. They're overwhelmed with what they're supposed to do, which causes them to give up. Again, this can look like laziness, but it’s really just a challenge with beginning a task. How to support learners with task initiation challenges: • Check-in on learners frequently • Provide step-by-step instructions • Provide examples of assignments • Encourage the use of graphic organizers Overall, many of the challenges and obstacles faced by learners during distance learning can be overcome with the right clarifications, supports, and structures. By getting learners involved and providing them options, teachers and parents can keep learners happy and motivated during distance learning. Watch David & Serena's full presentation here: Gerard Dawson Gerard Dawson is a teacher, parent and writer for Outschool.
__label__pos
0.979929
Applying A Parametric Script to a Surface I am very new to grasshopper. I have a script that utilizes a photo to create a pattern based on an image. I basically understand how to change, for instance, the heights of the shapes, and shapes themselves, but I don’t know how to approach actually utilizing this to create a parametric solid patterning on a specific surface (i.e. for a building exterior). Reference script and and surface attached. Thank you! (667.6 KB) Ref-Surface.3dm (246.6 KB) (709.6 KB) Thank you so much! This will be helpful to learn from!
__label__pos
0.96802
Assessment of the economic impacts of porcine epidemic diarrhea virus in the United States Schulz, Lee Tonsor, Glynn Journal Title Journal ISSN Volume Title Schulz, Lee Research Projects Organizational Units Organizational Unit Journal Issue Porcine epidemic diarrhea virus (PEDV), which first emerged in the United States in 2013, spread throughout the U.S. hog population. Limited preemptive knowledge impeded the understanding of PEDV introduction, spread, and prospective economic impacts in the United States. To assess these impacts, this article reviews the timeline of PEDV in the United States and the corresponding impacts. PEDV is a supply-impacting disease and is not demand inhibiting, as pork demand remained strong since PEDV first appeared. Pig losses reached significant levels during September 2013 through August 2014, with the majority of pork production impacts occurring in 2014. PEDV had differing impacts for subsectors of the pork industry. A budget model demonstrates that producers could have had pig losses and decreases in productivity proportionally smaller than price increases, resulting in net returns above what was expected before the major outbreak of PEDV. Previous literature is reviewed to identify the potential main industry beneficiaries of the PEDV outbreaks in the United States. As a result of reduced volumes of available pig and hog supplies, reductions in annual returns likely occurred for packers, processors, distributors, and retailers. In addition, pork consumers who experienced reduced-supply-induced pork-price increases were likely harmed directly by higher prices paid for pork and indirectly as prices of competing meats were also likely strengthened by PEDV. This article also identifies future considerations motivated by the appearance of PEDV in the United States, such as discussions of industry-wide efficiency and competitive advantage, the future role of PEDV vaccines, enhancement in biosecurity measures, and consumer perceptions of food safety and insecurity. This article is from Journal of Animal Science 93 (2015): 5111, doi: 10.2527/jas2015-9136. Posted with permission. animal health, economic impacts, PEDV, porcine epidemic diarrhea virus, pork, swine
__label__pos
0.736407
Ibn al-Shatir ʿAbu al-Ḥasan Alāʾ al‐Dīn ʿAlī ibn Ibrāhīm al-Ansari[1] known as Ibn al-Shatir or Ibn ash-Shatir (Arabic: ابن الشاطر; 1304–1375) was an Arab astronomer, mathematician and engineer. He worked as muwaqqit (موقت, religious timekeeper) in the Umayyad Mosque in Damascus and constructed a sundial for its minaret in 1371/72. Ibn al-Shatir Ibn al-Shatir's lunar model. Died1375 (aged 71) kitab nihayat al-sul fi tashih al-usul Ibn al-Shatir was born in Damascus, Syria around the year 1304. His father passed away when he was six years old. His grandfather took him in which resulted in al-Shatir learning the craft of inlaying ivory.[2] Ibn al-Shatir traveled to Cairo and Alexandria to study astronomy, where he fell in, inspired him.[2] After completing his studies with Abu ‘Ali al-Marrakushi, al-Shatir returned to his home in Damascus where he was then appointed muwaqqit (timekeeper) of the Umayyad Mosque.[2] Part of his duties as muqaqqit involved keeping track of the times of the five daily prayers and when the month of Ramadan would begin and end.[3] To accomplish this, he created a variety of astronomical instruments. He made several astronomical observations and calculations both for the purposes of the mosque, and to fuel his later research. These observations and calculations were organized in a series of astronomical tables.[4] His first set of tables, which have been lost over time, allegedly combined his observations with those of Ptolemy, and contained entries on the Sun, Moon and Earth.[3] Ibn al-Shatir most important astronomical treatise was kitab nihayat al-sul fi tashih al-usul ("The Final Quest Concerning the Rectification of Principles"). In it he drastically reformed the Ptolemaic models of the Sun, Moon and planets. His model incorporated the Urdi lemma, and eliminated the need for an equant (a point on the opposite side of the center of the larger circle from the Earth) by introducing an extra epicycle (the Tusi-couple), departing from the Ptolemaic system in a way that was mathematically identical (but conceptually very different) to what Nicolaus Copernicus did in the 16th century. This new planetary model was published in his work the al-Zij al-jadid (The New Planetary Handbook.)[3] Before the kitab nihayat al-sul fi tashih al-usul was made, there was a treatise that Ibn al-Shatir made which described the observations and procedures that lead to him creating his new planetary models.[3] Unlike previous astronomers before him, Ibn al-Shatir was not concerned with adhering to the theoretical principles of natural philosophy or Aristotelian cosmology, but rather to produce a model that was more consistent with empirical observations and contemporary theory. For example, it was Ibn al-Shatir's concern for observational accuracy which led him to eliminate the epicycle in the Ptolemaic solar model and all the eccentrics, epicycles and equant in the Ptolemaic lunar model. Shatir's new planetary model consisted of new secondary epicycles instead of equant, which improved on the Ptolemaic model. His model was thus in better agreement with empirical observations than any previous model,[5] and was also the first that permitted empirical testing.[6] His work thus marked a turning point in astronomy, which may be considered a "Scientific Revolution before the Renaissance".[5] Ibn al-Shatir's model for the appearances of Mercury, showing the multiplication of epicycles in a Ptolemaic enterprise Drawing on the observation that the distance to the Moon did not change as drastically as required by Ptolemy's lunar model, Ibn al-Shatir produced a new lunar model that replaced Ptolemy's crank mechanism with a double epicycle model that reduced the computed range of distances of the Moon from the Earth.[7] This was the first accurate lunar model which matched physical observations.[8] Solar ModelEdit Ibn al-Shatir's Solar Model exemplifies his commitment towards accurate observational data, and its creation serves as a general improvement towards the Ptolemaic model. When observing the Ptolemaic solar model, it is clear that most of the observations are not accounted for, and cannot accommodate the observed variations of the apparent size of the solar diameter.[9] Because the Ptolemaic system contains some faulty numerical values for its observations, the actual geocentric distance of the sun had been greatly underestimated in its solar model. And with the problems that had arisen from the Ptolemaic models, there was an influx of need to create solutions that would resolve them. Ibn al-Shatir's model aimed to do just that, creating a new eccentricity for the solar model. And with his numerous observations, Ibn al-Shatir was able to generate a new maximum solar equation (2;2,6°), which he found to have occurred at the mean longitude λ 97° or 263° from the apogee.[10]As the method was deciphered through geometric ways, it was easy to identify that 7;7 and 2;7 were the radii of the epicycles.[11] In addition, his final results for apparent size of the solar diameter were concluded to be at apogee (0;29,5), at perigee (0;36,55), and at mean distance (0;32.32).[10] This was partially done by reducing Ptolemy’s circular geometric models to numerical tables in order to perform independent calculations to find the longitude of the planets.[1] The longitude of the planets was defined as a function of the mean longitude and the anomaly. Rather than calculating every possible value, which would be difficult and labor-intensive, four functions of a single value were calculated for each planet and combined to calculate quite accurately the true longitude of each planet.[12] To calculate the true longitude of the moon, Ibn al-Shatir assigned two variables, η, which represented the moon's mean elongation from the sun, and γ, which represented its mean anomaly. To any pair of these values was a corresponding e, or equation which was added to the mean longitude to calculate the true longitude. Ibn al-Shatir used the same mathematical scheme when finding the true longitudes of the planets, except for the planets the variables became α, the mean longitude measured from apogee (or the mean center) and γ which was the mean anomaly as for the moon. A correcting function c3' was tabulated and added to the mean anomaly γ to determine the true anomaly γ'.[12] As shown in Shatir's model, it was later discovered that Shatir's lunar model had a very similar concept as Copernicus.[2]Ibn al-Shatir never gave motivation towards his two epicycles to be adopted, so it was hard to tell the difference between his model and the Ptolemaic model. Possible influence on Nicolaus CopernicusEdit Although Ibn al-Shatir's system was firmly geocentric (he had eliminated the Ptolemaic eccentrics), the mathematical details of his system were identical to those in Copernicus's De revolutionibus.[13] Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus paralleled the work of Ibn al-Shatir one century earlier.[14] Ibn al-Shatir's lunar and Mercury models are also identical to those of Copernicus.[15] Copernicus's Mercury model was flawed in the fact that he was not able to properly understand the model first created by Ibn al-Shatir. Copernicus also translated Ptolemy's geometric models to longitudinal tables in the same way Ibn al Shatir did when constructing his solar model.[1] This has led some scholars to argue that Copernicus must have had access to some yet to be identified work on the ideas of ibn al-Shatir.[16] It is unknown whether Copernicus read ibn al-Shatir and the argument is still debated. The differences between the two can be seen in their works. Copernicus followed a heliocentric model (planets orbit the sun) while Ibn al-Shatir followed the geocentric model (as mentioned earlier). Also Copernicus followed the inductive reasoning while Ibn al-Shatir followed the Zij traditions.[15] A Byzantine manuscript containing a solar model diagram with a second epicycle, was discovered to have been in Italy at the time of Copernicus. The presence of this eastern manuscript containing the ideas of Islamic scholars in Italy provides potential evidence of transmission of astronomical theories from the East to Western Europe.[17] The idea of using hours of equal time length throughout the year was the innovation of Ibn al-Shatir in 1371, based on earlier developments in trigonometry by al-Battānī. Before the Islamicate scholar created the improved sundial, he had to understand the sundial created by his predecessors. The Greek had sundials too, but they had nodus-based with straight hour lines which meant that the hours in the day would be unequal (temporary hours) depending on the season. Each day was split into twelve equal segments which meant that the hours would have been shorter in the winter and longer in the summer due to the activity of the sun.[18] Ibn al-Shatir was aware that "using a gnomon that is parallel to the Earth's axis will produce sundials whose hour lines indicate equal hours on any day of the year." His sundial is the oldest polar-axis sundial still in existence. The concept later appeared in Western sundials from at least 1446.[18][19] Ibn al-Shatir also invented a timekeeping device called "Sandūq al‐Yawāqīt li maʿrifat al-Mawāqīt" (jewel box), which incorporates both a universal sundial and a magnetic compass. He invented it for the purpose of finding the times of prayers.[20] The "Sandūq al‐Yawāqīt li maʿrifat al-Mawāqīt" had a moveable hole in it which allowed the user to find the hour angle of the sun. If this angle was suitable with the horizon, then the user could use it as a polar sundial.[21] This device is preserved in the museum of Aleppo (largest museum in the city of Aleppo, Syria).[21] He also created a sundial which was placed on top of the Madhanat al-Arus (The Minaret of the Bride) in the Umayyad Mosque.[13] The sundial was created on a slab of marble which was approximately 2 meters by 1 meter. The sundial being engraved on the marble was so that Ibn al-Shatir could read the time of the day in equinoctial (equal times) hours for the prayer times.[13] This sundial was later removed in the eighteenth century and a replica was put in its place. The original sundial was placed in the Damascus archeology museum.[21] He also created another sundial but in smaller dimensions (12 cm x 12 cm × 3 cm) to find out the prayer times of midday and the afternoon. This sundial was able to tell the local meridian and the direction of the mecca (located in Saudi Arabia). [21]Other notable instruments invented by him include a reversed astrolabe and an astrolabic clock.[22] The astrolabe that he created was called the al‐āla al‐jāmiʿa (the universal instrument). This astrolabe was created by Ibn al-Shatir when he wrote on the ordinary planispheric astrolabe and when he wrote on the two most common quadrants (the astrolabic and the trigonometric varieties).[22] These two common quadrants were modified versions of the sine quadrant. He also created a set of tables that had values of spherical astronomical functions for prayer times. The tables displayed the times for the morning, afternoon, and evening prayers. The latitude that was used to create the table was 34° (which was correspondent to a location north of Damascus).[4] See alsoEdit 1. ^ a b c Roberts, Victor (1966). "The Planetary Theory of Ibn al-Shatir: Latitudes of the Planets". Isis. 57 (2): 208–219. doi:10.1086/350114. JSTOR 227960. S2CID 143576999. 2. ^ a b c d Freely, John (2015). Light from the East: how the Science of Medieval Islam helped to shape the Western World. I.B. Tauris. ISBN 978-1784531386. 3. ^ a b c d Freely, John. (2010). Light from the East : How the Science of Medieval Islam helped to shape the Western World. London: I.B. Tauris. ISBN 978-0-85772-037-5. OCLC 772844807. 4. ^ a b Abbud, Fuad (December 1962). "The Planetary Theory of Ibn al-Shatir: Reduction of the Geometric Models to Numerical Tables". Isis. 53 (4): 492–499. doi:10.1086/349635. ISSN 0021-1753. 5. ^ a b (Saliba 1994b, pp. 233–234 & 240) 6. ^ Y. M. Faruqi (2006). "Contributions of Islamic scholars to the scientific enterprise", International Education Journal 7 (4): 395–396. 7. ^ Neugebauer (1975), volume 3 at pages 1108–1109. 8. ^ Morris, T. J. A Paranormal History Guide. Lulu.com. ISBN 9781300192459. 9. ^ Saliba, George (2016-07-23). "Theory and Observation in Islamic Astronomy: The Work of IBN AL-SHĀTIR of Damascus:". Journal for the History of Astronomy. 18. doi:10.1177/002182868701800102. 10. ^ a b Roberts, Victor (1957). "The Solar and Lunar Theory of Ibn ash-Shāṭir: A Pre-Copernican Copernican Model". Isis. 48 (4): 428–432. ISSN 0021-1753. 11. ^ Roberts, Victor. "The Solar and Lunar Theory of Ibn ash-Shāṭir: A Pre-Copernican Copernican Model" (PDF). Chicago Journals. 48: 428–432 – via JSTOR. 12. ^ a b Abbud, Fuad (1962). "The Planetary Theory of Ibn al-Shatir: Reduction of the Geometric Models to Numerical Tables". The University of Chicago Press. 53: 492–499 – via JSTOR. 13. ^ a b c Berggren, J (1999). "Sundials in medieval Islamic science and civilization" (PDF). Coordinates. 14. ^ Swerdlow, Noel M. (1973-12-31). "The Derivation and First Draft of Copernicus's Planetary Theory: A Translation of the Commentariolus with Commentary". Proceedings of the American Philosophical Society. 117 (6): 424. ISSN 0003-049X. JSTOR 986461. 15. ^ a b King, David A. (2007). "Ibn al‐Shāṭir: ʿAlāʾ al‐Dīn ʿAlī ibn Ibrāhīm". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 569–70. ISBN 978-0-387-31022-0. (PDF version) 16. ^ Linton (2004, pp.124,137–38), Saliba (2009, pp.160–65). 17. ^ Roberts, Victor (1966). "The Planetary Theory of Ibn al-Shatir: Latitudes of the Planets". The University of Chicago. 57: 208–219 – via JSTOR. 18. ^ a b "History of the sundial". National Maritime Museum. Archived from the original on 2007-10-10. Retrieved 2008-07-02. 19. ^ Jones 2005. 20. ^ (King 1983, pp. 547–8) 21. ^ a b c d Rezvani, Pouyan. "The Role of ʿIlm al-Mīqāt in the Progress of Making Sundials in the Islamic Civilization" (PDF). Academia. 22. ^ a b King, David A. (1983). "The Astronomy of the Mamluks". Isis. 74 (4): 531–555 [545–546]. doi:10.1086/353360. S2CID 144315162. • Fernini, Ilias. A Bibliography of Scholars in Medieval Islam. Abu Dhabi (UAE) Cultural Foundation, 1998 • Kennedy, Edward S. (1966) "Late Medieval Planetary Theory." Isis 57:365–378. • Kennedy, Edward S. and Ghanem, Imad. (1976) The Life and Work of Ibn al-Shatir, an Arab Astronomer of the Fourteenth Century, History of Arabic Science Institute, University of Aleppo. • Roberts, Victor. "The Solar and Lunar Theory of Ibn ash-Shatir: A Pre-Copernican Copernican Model". Isis, 48(1957):428–432. • Roberts, Victor and Edward S. Kennedy. "The Planetary Theory of Ibn al-Shatir". Isis, 50(1959):227–235. • Saliba, George. "Theory and Observation in Islamic Astronomy: The Work of Ibn al-Shatir of Damascus". Journal for the History of Astronomy, 18(1987):35–43. • Turner, Howard R. Science in Medieval Islam, an illustrated introduction. University of Texas Press, Austin, 1995. ISBN 0-292-78149-0 (pb) ISBN 0-292-78147-4 (hc) • Saliba, George (1994b), A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam, New York University Press, ISBN 978-0-8147-8023-7 Further readingEdit External linksEdit
__label__pos
0.771249
Iditarod Trail Invitational 1000 Alaska, USA - February 2022 The Iditarod Trail Invitational 1000 is the world's longest running winter ultra-marathon. The pinnacle of all winter ultra-marathons, the ITI 1000 takes competitors through the far reaches of the Alaskan wilderness, following the Iditarod Trail to its conclusion under the famed burled arch in Nome, Alaska. Competitors have 30 days to race 1,000 miles across some of the world's most treacherous terrain in hospitable conditions. Every year on the Iditarod Trail is different and conditions change in the blink of an eye during the journey. ITI 1000 competitors face temperatures from -50F to 35F, gale force winds, rain, blizzards, waist-deep snow, mud, glare ice and bright sunny skies - all in the same day. Keith Eckert Navy veteran, ultramarathon runner, and Guardian Revival Sponsored Athlete, Keith Eckert, has chosen to run the 2022 ITI 1000. Keith's goal is to raise $100 for each mile he conquers across the Alaskan wilderness, aiming to raise a total of $100,000 by time he takes his last step across the frozen finish line in Nome, Alaska. Born and raised in Chandler, AZ, Keith grew up playing sports year-round and at an early age, discovered his passion for helping others. Keith went on to graduate from the United States Naval Academy and was a Division I collegiate wrestler before commissioning in the the U.S. Navy. Suffering for a Cause Keith will be competing in the ITI 1000 to raise awareness on our nation's veteran and first responder mental health epidemic. The frigid temperatures, gale-force winds, rain, blizzards, waist-deep snow, mud, and ice that Keith will face during his journey symbolize our Guardians' mental health struggles and what oftentimes seems like an impassable road to recovery. Keith's suffering will inspire our struggling Guardians to continue battling and to never give in, even during their darkest moments; similarly, those struggling Guardians will inspire Keith to keep battling through his 30 days and 1,000 miles of darkness and oftentimes hopelessness on his long and treacherous journey to the finish line. Click Below to Donate Previous Races
__label__pos
0.702498