core_id (stringlengths 4-9) | doi (stringlengths 10-80) | original_abstract (stringlengths 500-21.8k) | original_title (stringlengths 20-441) | processed_title (stringlengths 20-441) | processed_abstract (stringlengths 34-13.6k) | cat (stringclasses, 3 values) | labelled_duplicates (list)
---|---|---|---|---|---|---|---
148660155
|
10.1007/s11045-011-0168-x
|
The evolution of the television market is led by 3DTV technology, and this tendency may accelerate in the coming years, according to expert forecasts. However, 3DTV delivery over broadcast networks is not yet sufficiently developed, and acts as a bottleneck for the full deployment of the technology. Thus, increasing attention is being paid to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D acceptance. In this paper, different subsampling schemes for HDTV-compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
|
Analysis of frame-compatible subsampling structures for efficient 3DTV broadcast
|
analysis of frame-compatible subsampling structures for efficient 3dtv broadcast
|
television tendency accelerate expert forecasts. delivery broadcast acts bottleneck deployment technology. dedicated formats compatible hdtv video equipment infrastructure greatly encourage acceptance. subsampling schemes hdtv compatible progressive interlaced stereo compared. preserved interpolation filter specially designed. advantages disadvantages schemes filters progressive interlaced video
|
exact_dup
|
[
"11998734"
] |
148662456
|
10.1016/j.cmpb.2011.11.011
|
Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependences that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD.
|
Ensemble transcript interaction networks: A case study on Alzheimer's disease
|
ensemble transcript interaction networks: a case study on alzheimer's disease
|
topic neurological field. intelligence addresses holistic perspective consensus ensemble ultimately capable uncovering findings. propose ensemble bayesian classifiers multivariate induce probabilistic dependences match unveil relationships. focuses throughput alzheimer transcript profiling. perspectives. hippocampus subregion entorhinal cortex controls. ensemble dentate gyrus controls. disclose transcript remarkable studies. ensemble transcripts roles neurological pathologies. parametric confirms relevance majority transcripts. ensemble pinpoints metabolic pathogenesis
|
exact_dup
|
[
"12001142"
] |
148663998
|
10.1007/s10044-011-0243-9
|
This paper describes the design, development and field evaluation of a machine translation system from Spanish to Spanish Sign Language (LSE: Lengua de Signos Española). The developed system focuses on helping Deaf people when they want to renew their Driver’s License. The system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the signs). For the natural language translator, three technological approaches have been implemented and evaluated: an example-based strategy, a rule-based translation method and a statistical translator. For the final version, the implemented language translator combines all the alternatives into a hierarchical structure. This paper includes a detailed description of the field evaluation. This evaluation was carried out in the Local Traffic Office in Toledo involving real government employees and Deaf people. The evaluation includes objective measurements from the system and subjective information from questionnaires. The paper details the main problems found and a discussion on how to solve them (some of them specific to LSE).
|
Design, development and field evaluation of a Spanish into sign language translation system
|
design, development and field evaluation of a spanish into sign language translation system
|
describes machine translation spanish spanish lengua signos española focuses helping deaf want renew driver’s license. speech recognizer decoding spoken utterance word translator converting word signs belonging avatar animation module playing signs translator technological implemented translation translator. implemented translator combines alternatives hierarchical structure. evaluation. traffic office toledo involving employees deaf people. subjective questionnaires. solve
|
exact_dup
|
[
"12002348"
] |
148666023
|
10.1016/j.eswa.2012.03.047
|
Intelligent systems designed to reduce highway fatalities have been widely applied in the automotive sector in the last decade. Of all users of transport systems, pedestrians are the most vulnerable in crashes as they are unprotected. This paper deals with an autonomous intelligent emergency system designed to avoid collisions with pedestrians. The system consists of a fuzzy controller based on the time-to-collision estimate – obtained via a vision-based system – and the wheel-locking probability – obtained via the vehicle’s CAN bus – that generates a safe braking action. The system has been tested in a real car – a convertible Citroën C3 Pluriel – equipped with an automated electro-hydraulic braking system capable of working in parallel with the vehicle’s original braking circuit. The system is used as a last resort in the case that an unexpected pedestrian is in the lane and all the warnings have failed to produce a response from the driver.
|
Vision-based active safety system for automatic stopping
|
vision-based active safety system for automatic stopping
|
ntelligent highway fatalities widely automotive decade. pedestrians vulnerable crashes unprotected. deals autonomous intelligent emergency avoid collisions pedestrians. fuzzy controller collision vision wheel locking vehicle’s generates safe braking action. convertible citroën pluriel equipped automated electro hydraulic braking capable vehicle’s braking circuit. resort unexpected pedestrian lane warnings failed driver
|
exact_dup
|
[
"18424378"
] |
148668582
|
10.1063/1.4822206
|
In recent years, the FK concentrator has been shown to compare very well with other Fresnel-based concentrator optics for CPV. Several features give the FK its high performance: (1) high optical efficiency; (2) large tolerance to tracking misalignment and manufacturing errors, thanks to a high CAP (Concentration-Acceptance Product); and (3) good irradiance uniformity and low chromatic dispersion on the cell surface. Non-uniformities in terms of absolute irradiance and spectral content produced by conventional CPV systems can cause electrical losses in multi-junction (MJ) solar cells. The aim of this work is to analyze the influence of these non-uniformities on FK concentrator performance, and to show how the FK concentrator achieves high electrical efficiencies thanks to its insensitivity to chromatic aberrations, especially when components move away from the nominal module position due to manufacturing misalignments. This analysis has been carried out by means of both experimental on-sun measurements and simulations based on a 3D fully distributed circuit model for MJ cells.
|
Experimental confirmation of FK concentrator insensitivity to chromatic aberrations
|
experimental confirmation of fk concentrator insensitivity to chromatic aberrations
|
concentrator compares fresnel concentrator optics cpv. tolerance tracking misalignment manufacturing thanks acceptance irradiance uniformity chromatic surface. uniformities irradiance originate electrical losses junction cells. analyze uniformities concentrator concentrator electrical efficiencies thanks insensitivity chromatic aberrations move away module nominal manufacturing misalignments. circuit
|
exact_dup
|
[
"33171331"
] |
148673836
|
10.1088/1742-6596/555/1/012020
|
A relation between Cost Of Energy (COE), maximum allowed tip speed, and rated wind speed is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (number of blades, and the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options (DO) related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of limit tip speed, but that this is not the case for small values of the tip speed limits. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: (a) reducing the rotor radius for places with a high Weibull scale parameter, or (b) increasing the rotor radius for places with a low Weibull scale parameter.
|
Consideration of tip speed limitations in preliminary analysis of minimum COE wind turbines
|
consideration of tip speed limitations in preliminary analysis of minimum coe wind turbines
|
rated turbines goal rated power. characterised speed. rotor blades blade solidity twist airfoil relation. commonly designers. rotor rotor radius. options tower blades inputs. explored rotor limitation changes. reveals evolve limits. concluded optimum keeping reducing rotor places weibull rotor places weibull paramete
|
exact_dup
|
[
"33175953"
] |
148676159
|
10.1016/j.knosys.2013.11.017
|
A methodology for developing an advanced communications system for the Deaf in a new domain is presented in this paper. This methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and finally, system evaluation. During the requirement analysis, both the user and technical requirements are evaluated and defined. For generating the parallel corpus, it is necessary to collect Spanish sentences in the new domain and translate them into LSE (Lengua de Signos Española: Spanish Sign Language). LSE is represented by glosses and using video recordings. This corpus is used for training the two main modules of the advanced communications system to the new domain: the spoken Spanish into LSE translation module and the Spanish generation from LSE module. The main aspects to be generated are the vocabularies for both languages (Spanish words and signs), and the knowledge for translating in both directions. Finally, the field evaluation is carried out with deaf people using the advanced communications system to interact with hearing people in several scenarios. In this evaluation, the paper proposes several objective and subjective measurements for evaluating the performance. In this paper, the new domain considered is dialogues at a hotel reception. Using this methodology, the system was developed in several months, obtaining very good performance: good translation rates (10% Sign Error Rate) with short processing times, allowing face-to-face dialogues.
|
Methodology for developing an advanced communications system for the Deaf in a new domain
|
methodology for developing an advanced communications system for the deaf in a new domain
|
methodology advanced communications deaf paper. methodology centred consisting requirement corpus adaptation evaluation. requirement defined. generating corpus collect spanish sentences translate lengua signos española spanish glosses video recordings. corpus modules advanced communications spoken spanish translation module spanish module. vocabularies languages spanish signs translating directions. deaf advanced communications interact hearing scenarios. proposes subjective evaluating performance. dialogues hotel reception. methodology obtaining translation allowing dialogues
|
exact_dup
|
[
"33177177"
] |
148680590
|
10.1007/978-3-319-13749-0_14
|
We propose a solution based on networks of picture processors to the problem of picture pattern matching. The network solving the problem can be informally described as follows: it consists of two subnetworks, one of which extracts simultaneously all subpictures of the same size from the input picture and sends them to the second subnetwork. The second subnetwork checks whether any of the received pictures is identical to the pattern. We present an efficient solution based on networks with evolutionary processors only, for patterns with at most three rows or columns. Afterwards, we present a solution based on networks containing both evolutionary and hiding processors running in O(n+m+kl+k) computational (processing and communication) steps, where the input picture and the pattern are of size (n,m) and (k,l), respectively.
|
Solving 2D-pattern matching with networks of picture processors
|
solving 2d-pattern matching with networks of picture processors
|
propose picture processors picture matching. solving informally subnetworks extracts simultaneously subpictures picture sends subnetwork. subnetwork checks pictures pattern. evolutionary processors rows columns. afterwards evolutionary hiding processors running picture
|
exact_dup
|
[
"84138560"
] |
148686183
|
10.1098/rspb.2014.2947
|
Human languages differ broadly in abundance and are distributed highly unevenly on the Earth. In many qualitative and quantitative aspects, they strongly resemble biodiversity distributions. An intriguing and previously unexplored issue is the architecture of the neighbouring relationships between human linguistic groups. Here we construct and characterize these networks of contacts and show that they represent a new kind of spatial network with uncommon structural properties. Remarkably, language networks share a meaningful property with food webs: both are quasi-interval graphs. In food webs, intervality is linked to the existence of a niche space of low dimensionality; in language networks, we show that the unique relevant variable is the area occupied by the speakers of a language. By means of a range model analogous to niche models in ecology, we show that a geometric restriction of perimeter covering by neighbouring linguistic domains explains the structural patterns observed. Our findings may be of interest in the development of models for language dynamics or regarding the propagation of cultural innovations. In relation to species distribution, they pose the question of whether the spatial features of species ranges share architecture, and eventually a generating mechanism, with the distribution of human linguistic groups.
|
New patterns in human biogeography revealed by networks of contacts between linguistic groups
|
new patterns in human biogeography revealed by networks of contacts between linguistic groups
|
languages broadly abundance unevenly earth. qualitative resemble biodiversity distributions. intriguing unexplored architecture neighbouring linguistic groups. characterize contacts kind uncommon properties. remarkably share meaningful webs quasi graphs. webs intervality niche dimensionality occupied speakers language. analogous niche ecology geometric restriction perimeter covering neighbouring linguistic explains observed. propagation cultural innovations. pose ranges share architecture eventually generating linguistic
|
exact_dup
|
[
"80739473"
] |
148759607
|
10.1007/s00253-009-2422-9
|
Acquired under the Swiss National Licences (http://www.nationallizenzen.ch). Disposable bioreactors have increasingly been incorporated into preclinical, clinical, and production-scale biotechnological facilities over the last few years. Driven by market needs and, in particular, by the developers and manufacturers of drugs, vaccines, and further biologicals, there has been a trend toward the use of disposable seed bioreactors as well as production bioreactors. Numerous studies documenting their advantages in use have contributed to further new developments and have resulted in the availability of a multitude of disposable bioreactor types which differ in power input, design, instrumentation, and scale of the cultivation container. In this review, the term “disposable bioreactor” is defined, the benefits and constraints of disposable bioreactors are discussed, and critical phases and milestones in the development of disposable bioreactors are summarized. An overview of the disposable bioreactors that are currently commercially available is provided, and the domination of wave-mixed, orbitally shaken, and, in particular, stirred disposable bioreactors in animal cell-derived productions at cubic meter scale is reported. The growth of this type of reactor system is attributed to the recent availability of stirred disposable benchtop systems such as the Mobius CellReady 3 L Bioreactor. Analysis of the data from computational fluid dynamic simulation studies and first cultivation runs confirms that this novel bioreactor system is a viable alternative to traditional cell culture bioreactors at benchtop scale.
|
Disposable bioreactors: the current state-of-the-art and recommended applications in biotechnology
|
disposable bioreactors: the current state-of-the-art and recommended applications in biotechnology
|
erworben rahmen schweizer nationallizenz disposable bioreactors increasingly incorporated preclinical biotechnological facilities years. developers manufacturers drugs vaccines biologicals toward disposable seed bioreactors bioreactors. numerous documenting advantages contributed developments resulted availability multitude disposable bioreactor instrumentation cultivation container. “disposable bioreactor” benefits disposable bioreactors milestones disposable bioreactors summarized. overview disposable bioreactors commercially domination orbitally shaken stirred disposable bioreactors productions cubic meter reported. reactor attributed availability stirred disposable benchtop mobius cellready bioreactor. cultivation runs confirms bioreactor viable traditional bioreactors benchtop
|
exact_dup
|
[
"149227627"
] |
148792744
|
10.1080/21681376.2017.1409650
|
This paper presents the perspectives of participants from three Community Gardens in Edinburgh, Scotland, and investigates the role that food growing plays in their recreation and leisure activities, personal development, the development of their children, and the impact on their communities. Thirty-eight participants were interviewed using qualitative, semi-structured questions to explore their motivations and experiences from their involvement with community gardens. Participant observation was used to better understand the importance of the gardens in their lives. The participants felt the gardens were places that fostered neighbourly engagement, increased leisure opportunities, social support, community health, connectedness, and community diversity. They were also places that promoted knowledge exchange inside the garden and into the homes of the people and the community itself. Anxieties over land use and land reform highlighted how community gardens symbolised empowerment but also showed resistance to the hegemonic structure of local council and government. In effect, the research suggests that community gardens grow much more than just food; they grow community.
|
The motivations and experiences of community garden participants in Edinburgh, Scotland
|
the motivations and experiences of community garden participants in edinburgh, scotland
|
presents perspectives gardens edinburgh scotland investigates growing plays recreation leisure personal communities. thirty eight interviewed qualitative structured explore motivations experiences involvement gardens. participant gardens lives. felt gardens places fostered neighbourly engagement leisure opportunities connectedness diversity. places promoted garden homes itself. anxieties reform highlighted gardens symbolised empowerment hegemonic council government. gardens grow grow
|
exact_dup
|
[
"146464941"
] |
151482081
|
10.1016/j.tsf.2016.02.036
|
In this paper, the effects of post-deposition annealing temperature and atmosphere on the properties of pulsed DC magnetron sputtered ceria (CeO2) thin films, including crystalline structure, grain size and shape, and optical properties, were investigated. Experimental results, obtained from X-ray diffraction (XRD), showed that the prepared films crystallised predominantly in the CeO2 cubic fluorite structure, although evidence of Ce2O3 was also seen, and this was quantified by a Rietveld refinement. The anneal temperature and oxygen content of the Ar/O2 annealing atmosphere both played important roles in the size and shape of the nanocrystals, as determined by atomic force microscopy (AFM). The average grain size (determined by AFM) as well as the out-of-plane coherence length (obtained from XRD) varied with increasing oxygen flow rate (OFR) in the annealing chamber. In addition, the shape of the grains seen in the AFM studies transformed from circular to triangular as the OFR was raised from 20 sccm to 30 sccm during an 800 °C thermal anneal. X-ray photoelectron spectroscopy was used to measure near-surface oxidation states of the thin films with varying OFR in the annealing chamber. The bandgap energies were estimated from the ultra-violet and visible absorption spectra and low-temperature photoluminescence. An extracted bandgap value of 3.04 eV was determined for as-deposited CeO2 films, and this value increased with increasing annealing temperatures. However, no difference was observed in bandgap energies with variation of the annealing atmosphere.
|
Control of crystal structure, morphology and optical properties of ceria films by post deposition annealing treatments
|
control of crystal structure, morphology and optical properties of ceria films by post deposition annealing treatments
|
deposition annealing atmosphere pulsed magnetron sputtered ceria films crystalline grain investigated. diffraction films crystallised predominantly cubic fluorite quantified rietveld refinement. anneal annealing atmosphere played roles nanocrystals microscopy grain coherence varied annealing chamber. grains transformed circular triangular raised sccm sccm anneal. photoelectron spectroscopy oxidation films annealing chamber. bandgap ultra violet visible photoluminescence. bandgap deposited films annealing temperatures. bandgap annealing atmosphere
|
exact_dup
|
[
"132295877"
] |
151637104
|
10.1063/1.1397283
|
The d spacings in niobium have been measured to 145 GPa with a diamond anvil cell using a fluid pressure-transmitting medium: a methanol–ethanol–water (MEW) mixture, or helium. The conventional geometry, wherein the primary x-ray beam passes parallel to the load axis with an image plate, has been used to record the diffraction patterns. The analysis of the d spacings using the lattice strain equations indicates the presence of a nonhydrostatic stress component with both MEW and He pressure-transmitting media in pressure ranges that are well below the freezing pressure of the pressure-transmitting medium. A method to correct the measured d spacings for the nonhydrostatic pressure effect is suggested. This study clearly emphasizes the need to carefully analyze the data for nonhydrostatic compression effects even if the experiments are performed with a fluid pressure-transmitting medium.
|
Measurement and analysis of nonhydrostatic lattice strain component in niobium to 145 GPa under various fluid pressure-transmitting media
|
measurement and analysis of nonhydrostatic lattice strain component in niobium to 145 gpa under various fluid pressure-transmitting media
|
spacings niobium diamond anvil transmitting methanolx ethanolx mixture helium. wherein passes plate record diffraction patterns. spacings nonhydrostatic transmitting ranges freezing transmitting medium. spacings nonhydrostatic suggested. emphasizes carefully analyze nonhydrostatic compression transmitting
|
exact_dup
|
[
"11873881"
] |
155777464
|
10.1007/s10670-017-9887-1
|
This paper addresses the problem of judgment aggregation in science. How should scientists decide which propositions to assert in a collaborative document? We distinguish the question of what to write in a collaborative document from the question of collective belief. We argue that recent objections to the application of the formal literature on judgment aggregation to the problem of judgment aggregation in science apply to the latter, not the former question. The formal literature has introduced various desiderata for an aggregation procedure. Proposition-wise majority voting emerges as a procedure that satisfies all desiderata which represent norms of science. An interesting consequence is that not all collaborating scientists need to endorse every proposition asserted in a collaborative document.
|
A role for judgment aggregation in coauthoring scientific papers
|
a role for judgment aggregation in coauthoring scientific papers
|
addresses judgment aggregation science. scientists decide propositions assert collaborative document distinguish collaborative document collective belief. argue objections formal judgment aggregation judgment aggregation former question. formal desiderata aggregation procedure. wise majority voting emerges satisfies desiderata norms science. collaborating scientists endorse asserted collaborative document
|
exact_dup
|
[
"157867501",
"160113422",
"160114360",
"195905310"
] |
156954839
|
10.1038/s41467-018-03924-3
|
The Northern Hemisphere experienced dramatic changes during the last glacial, featuring vast ice sheets and abrupt climate events, while high northern latitudes during the last interglacial (Eemian) were warmer than today. Here we use high-resolution aerosol records from the Greenland NEEM ice core to reconstruct the environmental alterations in aerosol source regions accompanying these changes. Separating source and transport effects, we find strongly reduced terrestrial biogenic emissions during glacial times reflecting net loss of vegetated area in North America. Rapid climate changes during the glacial have little effect on terrestrial biogenic aerosol emissions. A strong increase in terrestrial dust emissions during the coldest intervals indicates higher aridity and dust storm activity in East Asian deserts. Glacial sea salt aerosol emissions in the North Atlantic region increase only moderately (50%), likely due to sea ice expansion. Lower aerosol concentrations in Eemian ice compared to the Holocene are mainly due to shortened atmospheric residence time, while emissions changed little.
|
Greenland records of aerosol source and atmospheric lifetime changes from the Eemian to the Holocene
|
greenland records of aerosol source and atmospheric lifetime changes from the eemian to the holocene
|
northern hemisphere experienced dramatic glacial featuring vast sheets abrupt northern latitudes interglacial eemian warmer today. aerosol records greenland neem reconstruct alterations aerosol accompanying changes. separating terrestrial biogenic glacial reflecting vegetated america. glacial terrestrial biogenic aerosol emissions. terrestrial coldest intervals aridity storm east asian deserts. glacial salt aerosol atlantic moderately expansion. aerosol eemian holocene shortened residence changed
|
exact_dup
|
[
"157697167"
] |
157867513
|
10.1016/j.shpsa.2016.10.002
|
Advocates of the self-corrective thesis argue that the scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not, depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the “replicability crisis” in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude by suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures.
|
Can the Behavioral Sciences Self-Correct? A Social Epistemic Study
|
can the behavioral sciences self-correct? a social epistemic study
|
advocates corrective thesis argue refute false closer approximations truth run. contemporary thesis frequentist behavioral sciences. replications aggregation meta corrective mechanism. communities implement argue frequentist converge argue methodological explanations “replicability crisis” psychology propose explanation biases. understood inference
|
exact_dup
|
[
"160113434"
] |
157867872
|
10.1007/s11098-016-0698-z
|
Scientific realists argue that a good track record of multi-agent, multiple-method validation of empirical claims is itself evidence that those claims, at least partially and approximately, reflect ways nature actually is, independent of the ways we conceptualize it. Constructivists contend that successes in validating empirical claims only suffice to establish that our ways of modelling the world, our "constructions," are useful and adequate for beings like us. This essay presents a thought experiment in which beings like us intersubjectively validate claims about properties of particular things in nature under conditions in which those beings have profoundly different personal phenomenological experiences of those properties. I submit that the thought experiment scenario parallels our actual situation, and argue that this shows that successes in intersubjectively validating empirical claims are indeed enough to claim victory for the realist. More specifically, I champion a variation of realism that marries Ronald Giere's brand of "perspectival realism" with Philip Kitcher's "real realism," and posits that causal relations between ourselves and properties instantiated in nature ground our references to the relevant properties even though our conceptions of them are perspective-relative (or filtered through, and distorted by, a perspective).
|
Invisible disagreement: an inverted qualia argument for realism
|
invisible disagreement: an inverted qualia argument for realism
|
realists argue track record agent validation claims claims partially reflect ways ways conceptualize constructivists contend successes validating claims suffice establish ways constructions adequate beings essay presents thought beings intersubjectively validate claims things beings profoundly personal phenomenological experiences properties. submit thought parallels argue successes intersubjectively validating claims claim victory realist. champion realism marries ronald giere brand perspectival realism philip kitcher realism posits causal instantiated conceptions perspective filtered distorted perspective
|
exact_dup
|
[
"160113769"
] |
159488721
|
10.1007/s00737-009-0062-9
|
Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch). Pregnancy and the postpartum may affect symptoms of depression. However, it has not yet been tested how the symptoms used for the DSM IV diagnosis of depression discriminate depressed from non-depressed women perinatally.
A modified version of the Structured Clinical Interview for DSM IV (SCID interview) was used that allowed assessment of all associated DSM IV symptoms of depression with depressed and non-depressed women in pregnancy and the postpartum period. Loss of appetite was not associated with depression either ante- or postnatally. The antenatal symptom pattern was different from the postnatal one. The sensitivity of the symptoms ranged from 0.7% to 51.6%, and specificity from 61.3% to 99.1%. The best discriminating symptoms were motor retardation/agitation and concentration antenatally, and motor retardation/agitation, concentration and fatigue postnatally. Depression in pregnancy and postpartum depression show significantly different symptom profiles. Appetite is not suitable for the diagnosis of depression in the perinatal period
|
Symptoms associated with the DSM IV diagnosis of depression in pregnancy and post partum
|
symptoms associated with the dsm iv diagnosis of depression in pregnancy and post partum
|
erworben rahmen schweizer nationallizenzen pregnancy postpartum depression. depression discriminate depressed depressed perinatally. structured interview scid interview depression depressed depressed pregnancy postpartum period. appetite depression ante postnatally. antenatal symptom postnatal. ranged specificity discriminating motor retardation agitation antenatally motor retardation agitation fatigue postnatally. depression pregnancy postpartum depression symptom profiles. appetite depression perinatal
|
exact_dup
|
[
"159414780"
] |
160113675
|
10.1007/s11229-015-0970-3
|
The category-theoretic representation of quantum event structures provides a canonical setting for confronting the fundamental problem of truth valuation in quantum mechanics as exemplified, in particular, by Kochen–Specker’s theorem. In the present study, this is realized on the basis of the existence of a categorical adjunction between the category of sheaves of variable local Boolean frames, constituting a topos, and the category of quantum event algebras. We show explicitly that the latter category is equipped with an object of truth values, or classifying object, which constitutes the appropriate tool for assigning truth values to propositions describing the behavior of quantum systems. Effectively, this category-theoretic representation scheme circumvents consistently the semantic ambiguity with respect to truth valuation that is inherent in conventional quantum mechanics by inducing an objective contextual account of truth in the quantum domain of discourse. The philosophical implications of the resulting account are analyzed. We argue that it subscribes neither to a pragmatic instrumental nor to a relative notion of truth. Such an account essentially denies that there can be a universal context of reference or an Archimedean standpoint from which to evaluate logically the totality of facts of nature
|
Contextual Semantics in Quantum Mechanics from a Categorical Point of View
|
contextual semantics in quantum mechanics from a categorical point of view
|
theoretic canonical confronting truth valuation mechanics exemplified kochen–specker’s theorem. realized categorical adjunction sheaves boolean frames constituting topos algebras. explicitly equipped truth classifying constitutes assigning truth propositions describing systems. effectively theoretic circumvents consistently semantic ambiguity truth valuation inherent mechanics inducing contextual truth discourse. philosophical analyzed. argue subscribes neither pragmatic instrumental notion truth. essentially denies universal archimedean standpoint logically totality facts
|
exact_dup
|
[
"157867768"
] |
16412626
|
10.1016/j.solmat.2011.11.015
|
The intermediate band solar cell (IBSC) is based on a novel photovoltaic concept and has a limiting efficiency of 63.2%, which compares favorably with the 40.7% efficiency of a conventional, single junction solar cell. It is characterized by a material hosting a collection of energy levels within its bandgap, allowing the cell to exploit photons with sub-bandgap energies in a two-step absorption process, thus improving the utilization of the solar spectrum. However, these intermediate levels are often regarded as an inherent source of supplementary recombination, although this harmful effect can in theory be counteracted by the use of concentrated light. We present here a novel, low-temperature characterization technique using concentrated light that reveals how the initially enhanced recombination in the IBSC is reduced so that its open-circuit voltage is completely recovered and reaches that of a conventional solar cell
|
Voltage recovery in intermediate band solar cells
|
voltage recovery in intermediate band solar cells
|
ibsc photovoltaic limiting compares favorably junction cell. hosting bandgap allowing exploit photons bandgap improving utilization spectrum. regarded inherent supplementary recombination harmful counteracted concentrated light. concentrated reveals initially recombination ibsc circuit recovered reaches
|
exact_dup
|
[
"148664113"
] |
16517499
|
10.1016/j.desal.2007.09.011
|
There is no common understanding of the minimum per capita fresh water requirement for human health and economic and social development. Existing estimates vary between 20 litres and 4,654 litres per capita per day; however, these estimates are methodologically problematic as they consider only human consumptive and hygiene needs, or they consider economic needs but not the effects of trade. Reconsidering the components of a minimum water requirement estimate for human health and for economic and social development suggests that a country requires a minimum of 135 litres per person per day. With all countries except Kuwait having much greater water resources than this, water scarcity alone need not hinder development. Given the steadily decreasing cost of desalination together with the relatively small amount of water required per capita to permit social and economic development, desalination should be affordable where necessary for all but the very least economically developed countries where local naturally occurring freshwater resources are insufficient and saline water is available
|
Minimum water requirement for social and economic development
|
minimum water requirement for social and economic development
|
capita fresh requirement development. vary litres litres capita methodologically problematic consumptive hygiene trade. reconsidering requirement litres person day. kuwait scarcity hinder development. steadily decreasing desalination capita permit desalination affordable economically naturally occurring freshwater insufficient saline
|
exact_dup
|
[
"397061"
] |
18424104
|
10.1088/0741-3335/54/12/124004
|
Fundamental research and modelling in plasma atomic physics continue to be essential for providing basic understanding of many different topics relevant to high-energy-density plasmas. The Atomic Physics Group at the Institute of Nuclear Fusion has accumulated experience over the years in developing a collection of computational models and tools for determining the atomic energy structure, ionization balance and radiative properties of, mainly, inertial fusion and laser-produced plasmas in a variety of conditions. In this work, we discuss some of the latest advances and results of our research, with emphasis on inertial fusion and laboratory-astrophysical applications
|
Modelling of spectral properties and population kinetics studies of inertial fusion and laboratory astrophysical plasmas
|
modelling of spectral properties and population kinetics studies of inertial fusion and laboratory astrophysical plasmas
|
continue topics plasmas. fusion accumulated determining ionization balance radiative inertial fusion plasmas conditions. latest advances emphasis inertial fusion astrophysical
|
exact_dup
|
[
"148663769"
] |
18425052
|
10.1016/j.telpol.2014.03.004
|
We analyze the state of the art of indicators on eGovernment, eHealth, eProcurement and eParticipation. We survey the main methodological properties of these indicators, and uncover the principal stylized facts and trends; at the same time, we highlight their heuristic limits and potential inconsistencies. Finally, we address empirically the issue of the explanation of the index scores – i.e. how the supply of the various eServices in each country is affected by political, institutional and socio-economic differences, and is followed by actual usage. The econometric analysis uncovers the importance of broadband penetration and higher education as drivers for most of the types of eServices and users (citizens and businesses). Moreover, a corruption-free and agile public sector shows up to be an important pre-condition for more effective supply and usage. Despite severe data limits and the complexity of the underlying diffusion phenomena, our study of eServices availability and usage across European countries is a first empirical contribution aimed at disentangling broad empirical trends – with their correlates – from unresolved methodological issues. As such, this work appears useful to inform the policy debate and practice, in a phase characterized by a prospective reorientation of public eServices provision.
|
Diffusion and usage of public e-services in Europe: An assessment of country level indicators and drivers
|
diffusion and usage of public e-services in europe: an assessment of country level indicators and drivers
|
analyze indicators egovernment ehealth eprocurement eparticipation. methodological indicators uncover principal stylized facts highlight heuristic inconsistencies. empirically explanation indexes i.e. supply eservices institutional socio usage. econometric uncovers broadband penetration drivers eservices citizens businesses corruption agile supply usage. phenomena eservices availability usage aimed disentangling broad correlates unresolved methodological issues. inform debate prospective reorientation eservices provision.
|
exact_dup
|
[
"18425121"
] |
19125380
|
10.1002/ajpa.1054
|
The aim of this study was to examine the evidence, and consider the differential diagnosis, for tuberculosis (TB) in juvenile individuals from early 20th century documented skeletons. There are 66 male and female juvenile individuals in the Coimbra Identified Skeletal Collection (CISC) with an age at death ranging from 7-21 years. The individuals died between 1904 and 1936 in different areas of Coimbra, Portugal. Eighteen of these individuals died from TB affecting different parts of the body. Thirteen (72.2%) showed skeletal lesions that may be related to this infection. Of the 48 individuals with a non-tuberculous cause of death, only 2 (4.2%) had skeletal changes that could be attributed to TB. The distribution of skeletal manifestations caused by the types of TB under study, based on macroscopic and radiological findings, is described and discussed. In addition, the medical records from 6 tuberculous individuals who died in Coimbra University Hospital (CUH) were analysed, and the information, including their diet and access to treatment, is presented. This work, based on data arising before antibiotics became available for treatment, can contribute to the future diagnosis of TB in non-documented skeletal material, and will facilitate a more reliable diagnosis of TB in juvenile individuals. Am J Phys Anthropol 115:38-49, 2001. © 2001 Wiley-Liss, Inc. http://dx.doi.org/10.1002/ajpa.105
|
A picture of tuberculosis in young Portuguese people in the early 20th century: A multidisciplinary study of the skeletal and historical evidence
|
a picture of tuberculosis in young portuguese people in the early 20th century: a multidisciplinary study of the skeletal and historical evidence
|
examine tuberculosis juvenile century documented skeletons. juvenile coimbra skeletal cisc ranging years. died coimbra portugal. eighteen died affecting body. thirteen skeletal lesions infection. tuberculous skeletal attributed skeletal manifestations macroscopic radiological discussed. records tuberculous died coimbra analysed diet presented. arising antibiotics became documented skeletal facilitate reliable juvenile individuals. anthropol wiley liss inc. ajpa.
|
exact_dup
|
[
"144012820"
] |
19125450
|
10.1002/jbt.10010
|
Paraquat herbicide is toxic to animals, including humans, via putative toxicity mechanisms associated with microsomal and mitochondrial redox systems. It is also believed to act in plants by generating highly reactive oxygen free radicals from electrons of photosystem I on exposure to light. Paraquat also acts on non-chlorophyllous plant tissues, where mitochondria are candidate targets, as in animal tissues. Therefore, we compared the interaction of paraquat with the mitochondrial bioenergetics of potato tuber, using rat liver mitochondria as a reference. Paraquat depressed succinate-dependent mitochondrial Deltapsi, with simultaneous stimulation of state 4 O2 consumption. It also induced a slow time-dependent effect on respiration of succinate, exogenous NADH, and N,N,N',N'-tetramethyl-p-phenylenediamine (TMPD)/ascorbate, which was more pronounced in rat than in potato mitochondria. However, with potato tuber mitochondria, the Deltapsi promoted by complex-I-dependent respiration is insensitive to this effect, indicating a protection against the paraquat radical afforded by complex I redox activity, which was just the reverse of the findings for rat liver mitochondria. The experimental setup with the tetraphenyl phosphonium (TPP+) electrode also indicated production of the paraquat radical in mitochondria, suggesting its accessibility to the outside space. The different activities of protective antioxidant agents can contribute to explain the different sensitivities of both kinds of mitochondria. Values of SOD activity and alpha-tocopherol detected in potato mitochondria were significantly higher than in rat mitochondria, which, in turn, revealed higher values of lipid peroxidation induced by paraquat. © 2001 John Wiley & Sons, Inc. J Biochem Mol Toxicol 15:322-330, 2001. http://dx.doi.org/10.1002/jbt.1001
|
Differential sensitivities of plant and animal mitochondria to the herbicide paraquat
|
differential sensitivities of plant and animal mitochondria to the herbicide paraquat
|
paraquat herbicide toxic humans putative toxicity microsomal mitochondrial redox systems. believed generating reactive radicals photosystem light. paraquat acts chlorophyllous tissues mitochondria candidate targets tissues. paraquat mitochondrial bioenergetics potato tuber mitochondria reference. paraquat depressed succinate mitochondrial deltapsi simultaneous stimulation consumption. slow respiration succinate exogenous nadh tetramethyl phenylenediamine tmpd ascorbate pronounced potato mitochondria. potato tuber mitochondria deltapsi promoted respiration insensitive protection paraquat radical afforded redox reverse mitochondria. tetraphenyl phosphonium electrode indivated paraquat radical mitochondria accessibility space. protective antioxidant sensitivities kinds mitochondria. alpha tocopherol potato mitochondria mitochondria peroxidation paraquat. biochem toxicol jbt.
|
exact_dup
|
[
"144012899"
] |
19125456
|
10.1002/hyp.1465
|
This paper, the first in a series of two, applies the entropy (or information) theory to describe the spatial variability of synthetic data that can represent spatially correlated groundwater quality data. The application involves calculating information measures such as transinformation, the information transfer index and the correlation coefficient. These measures are calculated using discrete and analytical approaches. The discrete approach uses the contingency table and the analytical approach uses the normal probability density function. The discrete and analytical approaches are found to be in reasonable agreement. The analysis shows that transinformation is useful and comparable with correlation to characterize the spatial variability of the synthetic data set, which is correlated with distance. Copyright © 2004 John Wiley & Sons, Ltd. http://dx.doi.org/10.1002/hyp.146
|
Characterizing the spatial variability of groundwater quality using the entropy theory: I. Synthetic data
|
characterizing the spatial variability of groundwater quality using the entropy theory: i. synthetic data
|
applies synthetic spatially groundwater data. involves calculating transinformation coefficient. approaches. contingency function. reasonable agreement. transinformation comparable characterize synthetic distance. hyp.
|
exact_dup
|
[
"144012905"
] |
19125548
|
10.1002/(SICI)1097-4628(19981010)70:2
|
Polymerizable quaternary allyl-pyrimidinium salts, that is, 1-allyl-4-amino-5-phenyl-pyrimidine (IV), 1-allyl-4-acetamido-5-phenyl-pyrimidine (V), 1-allyl-5-hydroxymethylamino-5-phenyl-pyrimidine (VI), and 1-allyl-pyrimidine (VIII) bromides, as well as 4-acrylamido-5-phenyl-pyrimidines II′ and IV′, were synthesized and characterized. Three of the activated pyrimidine derivatives, the two acrylamido and one allyl pyrimidine, were immobilized onto (Agar-g-co-HEMA)-X-TMPTA via graft copolymerization. Each copolymer system, based on 2-hydroxyethyl methacrylate (HEMA) grafted onto Agar followed by crosslinking with trimethylolpropane triacrylate (TMPTA), was prepared by radiation-induced copolymerization carried out in the simultaneous mode. The release properties of the immobilized pyrimidines were assessed in various aqueous media over a period of approximately 22 days. © 1998 John Wiley & Sons, Inc. J. Appl. Polym. Sci. 70: 211-218, 1998. http://dx.doi.org/10.1002/(SICI)1097-4628(19981010)70:2<211::AID-APP1>3.0.CO;2-
|
The synthesis and characterization of new polymerizable pyrimidines: Immobilization of the monomers onto hydrophilic graft copolymeric supports through radiation-induced copolymerization-grafting
|
the synthesis and characterization of new polymerizable pyrimidines: immobilization of the monomers onto hydrophilic graft copolymeric supports through radiation-induced copolymerization-grafting
|
polymerizable quaternary allyl pyrimidinium salts allyl phenyl pyrimidine allyl acetamido phenyl pyrimidine allyl hydroxymethylamino phenyl pyrimidine allyl pyrimidine viii bromides acrylamido phenyl pyrimidines synthesized characterized. pyrimidine derivatives acrylamido allyl pyrimidine immobilized agar hema tmpta graft copolymerization. copolymer hydroxyethyl methacrylate hema grafted agar crosslinking trimethyllopropane triacrylate tmpta copolymerization simultaneous mode. immobilized pyrimidines aqueous days. appl. polym. sci. sici
|
exact_dup
|
[
"144013038"
] |
19125580
|
10.1002/app.10163
|
Engineering materials containing poly(vinyl acetate) (PVAc) as the key component undergo hydrolytic degradation, which must be minimized or, at least, controlled. To characterize PVAc hydrolysis quantitatively, the diffusion of acetic acid (HAc) in PVAc, poly(vinyl alcohol) (PVA), unsaturated polyester (UPE), and a UPE/PVAc blend was studied in detail. The permeability cell earlier developed by the authors was modified here to reduce experimental error. As the diffusion and solubility coefficients of water and HAc in the above materials were measured at different temperatures, a mathematical model was developed, which takes proper account of the combined water and HAc diffusion in PVAc undergoing partial hydrolysis. The model was further validated by the experimental data obtained at 70°C for UPE/PVAc film, simulating a matrix of sheet-molding compounds composite materials. © 2002 John Wiley & Sons, Inc. J Appl Polym Sci 83: 1157-1166, 2002. http://dx.doi.org/10.1002/app.1016
|
Diffusion of electrolytes in hydrolyzable glassy polymers: Acetic acid in poly(vinyl acetate), poly(vinyl alcohol), and polyesters
|
diffusion of electrolytes in hydrolyzable glassy polymers: acetic acid in poly(vinyl acetate), poly(vinyl alcohol), and polyesters
|
poly vinyl acetate pvac undergo hydrolytic degradation minimized controlled. characterize pvac hydrolysis quantitatively acetic pvac poly vinyl alcohol unsaturated polyester pvac blend detail. permeability error. solubility mathematical proper pvac undergoing hydrolysis. validated pvac film simulating sheet molding composite materials. appl polym app.
|
exact_dup
|
[
"144013077"
] |
19125649
|
10.1002/(SICI)1097-4695(19991115)41:3
|
Adenosine triphosphate (ATP) has been proposed to play a role as a neurotransmitter in the retina, but not much attention has been given to the regulation of ATP release from retinal neurons. In this work, we investigated the release of ATP from cultures enriched in amacrine-like neurons. Depolarization of the cells with KCl, or activation of alpha-amino-3-hydroxy-5-methyl-4-isoxazole-propionate (AMPA) receptors, evoked the release of ATP, as determined by the luciferin/luciferase luminescent method. The ATP release was found to be largely Ca2+ dependent and sensitive to the botulinum neurotoxin A, which indicates that the ATP released by cultured retinal neurons originated from an exocytotic pool. Nitrendipine and omega-Agatoxin IVA, but not omega-Conotoxin GVIA, partially blocked the release of ATP, indicating that in these cells, the Ca2+ influx necessary to trigger the release of ATP occurs in part through the L- and the P/Q types of voltage-sensitive Ca2+ channels (VSCC), but not through N-type VSCC. The release of ATP increased in the presence of adenosine deaminase, or in the presence of 1,3-dipropyl-8-cyclopentylxanthine (DPCPX), an adenosine A1 receptor antagonist, showing that the release is tonically inhibited by the adenosine A1 receptors. To our knowledge, this is the first report showing the release of endogenous ATP from a retinal preparation. © 1999 John Wiley & Sons, Inc. J Neurobiol 41: 340-348, 1999. http://dx.doi.org/10.1002/(SICI)1097-4695(19991115)41:3<340::AID-NEU3>3.0.CO;2-
|
Characterization of ATP release from cultures enriched in cholinergic amacrine-like neurons
|
characterization of atp release from cultures enriched in cholinergic amacrine-like neurons
|
adenosine triphosphate neurotransmitter retina retinal neurons. cultures enriched amacrine neurons. depolarization alpha hydroxy methyl isoxazole propionate ampa receptors evoked luciferin luciferase luminescent method. largely botulinum neurotoxin released cultured retinal originated exocytotic pool. nitrendipine omega agatoxin omega conotoxin gvia partially blocked influx trigger vscc vscc. adenosine deaminase dipropyl cyclopentylxanthine dpcpx adenosine antagonist tonically inhibited adenosine receptors. endogenous retinal preparation. neurobiol sici
|
exact_dup
|
[
"144013176"
] |
19125802
|
10.1023/A:1020180109275
|
We have studied the differences between erythrocytes and erythrocyte ghosts as target membranes for the study of Sendai virus fusion activity. Fusion was monitored continuously by fluorescence dequenching of R18-labeled virus. Experiments were carried out either with or without virus/target membrane prebinding. When Sendai virus was added directly to an erythrocyte/erythrocyte ghost suspension, fusion was always lower than that obtained when experiments were carried out with virus already bound to the erythrocyte/erythrocyte ghost in the cold, since with virus prebinding fusion can be triggered more rapidly. Although virus binding to both erythrocytes and erythrocyte ghosts was similar, fusion activity was much more pronounced when erythrocyte ghosts were used as target membranes. These observations indicate that intact erythrocytes and erythrocyte ghosts are not equivalent as target membranes for the study of Sendai virus fusion activity. Fusion of Sendai virus with both target membranes was inhibited when erythrocytes or erythrocyte ghosts were pretreated with proteinase K, suggesting a role of target membrane proteins in this process. Treatment of both target membranes with neuraminidase, which removes sialic acid residues (the biological receptors for Sendai virus), greatly reduced viral binding. Interestingly, this treatment had no significant effect on the fusion reaction itself. http://dx.doi.org/10.1023/A:102018010927
|
Sendai Virus Fusion Activity as Modulated by Target Membrane Components
|
sendai virus fusion activity as modulated by target membrane components
|
erythrocytes erythrocyte ghosts membranes sendai fusion activity. fusion monitored continuously fluorescence dequenching labeled virus. prebinding. sendai erythrocyte erythrocyte ghost suspension fusion erythrocyte erythrocyte ghost cold prebinding fusion triggered rapidly. erythrocytes erythrocyte ghosts fusion pronounced erythrocyte ghosts membranes. intact erythrocytes erythrocyte ghosts membranes sendai fusion activity. fusion sendai membranes inhibited erythrocytes erythrocyte ghosts pretreated proteinase process. membranes neuraminidase removes sialic receptors sendai greatly viral binding. interestingly fusion itself.
|
exact_dup
|
[
"144013455"
] |
19126143
|
10.1007/s11075-008-9194-7
|
This paper is devoted to the eigenvalue complementarity problem (EiCP) with symmetric real matrices. This problem is equivalent to finding a stationary point of a differentiable optimization program involving the Rayleigh quotient on a simplex (Queiroz et al., Math. Comput. 73, 1849–1863, 2004). We discuss a logarithmic function and a quadratic programming formulation to find a complementarity eigenvalue by computing a stationary point of an appropriate merit function on a special convex set. A variant of the spectral projected gradient algorithm with a specially designed line search is introduced to solve the EiCP. Computational experience shows that the application of this algorithm to the logarithmic function formulation is a quite efficient way to find a solution to the symmetric EiCP. http://dx.doi.org/10.1007/s11075-008-9194-
|
On the solution of the symmetric eigenvalue complementarity problem by the spectral projected gradient algorithm
|
on the solution of the symmetric eigenvalue complementarity problem by the spectral projected gradient algorithm
|
devoted eigenvalue complementarity eicp matrices. stationary differentiable involving rayleigh quotient simplex queiroz math. comput. logarithmic quadratic programming formulation complementarity eigenvalue stationary merit convex set. variant projected specially solve eicp. logarithmic formulation eicp.
|
exact_dup
|
[
"144014066"
] |
196151498
|
10.1007/s10699-016-9505-8
|
The purpose of this paper is to elucidate the semantic relation between continental drift and plate tectonics. The numerous attempts to account for this case in either Kuhnian or Lakatosian terms have been convincingly dismissed by Rachel Laudan (1978), who nevertheless acknowledged that there was not yet a plausible alternative to explain the so-called “geological revolution”. In studying this case under a new light, the notion of embedding, as distinguished from other sorts of inter-theoretical relations (Moulines 1996, 2010, 2011), will have a particular significance. “La pragmática como dinamizadora del estudio de la flexibilidad semántica: contextos conversacionales y contextos teóricos” (FFI2012-33881)
|
Drift Theory and Plate Tectonics: A Case of Embedding in Geology
|
drift theory and plate tectonics: a case of embedding in geology
|
elucidate semantic continental drift plate tectonics. numerous attempts kuhnian lakatosian convincingly dismissed rachel laudan nevertheless acknowledged plausible “geological revolution”. studying notion embedding distinguished sorts moulines significance.“la pragmática como dinamizadora estudio flexibilidad semántica contextos conversacionales contextos teóricos”
|
exact_dup
|
[
"80526740"
] |
25055015
|
10.1016/j.socnet.2016.06.002
|
Empirical data on the dynamics of human face-to-face interactions across a variety of social venues have recently revealed a number of context-independent structural and temporal properties of human contact networks. This universality suggests that some basic mechanisms may be responsible for the unfolding of human interactions in the physical space. Here we discuss a simple model that reproduces the empirical distributions for the individual, group and collective dynamics of face-to-face contact networks. The model describes agents that move randomly in a two-dimensional space and tend to stop when meeting "attractive" peers, and reproduces accurately the empirical distributions
|
Model reproduces individual, group and collective dynamics of human contact networks
|
model reproduces individual, group and collective dynamics of human contact networks
|
venues networks. universality unfolding space. reproduces collective networks. describes move randomly tend stop meeting attractive peers reproduces accurately
|
exact_dup
|
[
"76981426"
] |
25323849
|
10.1063/1.1475766
|
I study various properties of the critical limits of correlators containing insertions of conserved and anomalous currents. In particular, I show that the improvement term of the stress tensor can be fixed unambiguously, studying the RG interpolation between the UV and IR limits. The removal of the improvement ambiguity is encoded in a variational principle, which makes use of sum rules for the trace anomalies a and a'. Compatible results follow from the analysis of the RG equations. I perform a number of self-consistency checks and discuss the issues in a large set of theories
|
A note on the improvement ambiguity of the stress tensor and the critical limits of correlation functions
|
a note on the improvement ambiguity of the stress tensor and the critical limits of correlation functions
|
correlators insertions conserved anomalous currents. unambiguously studying interpolation limits. removal ambiguity encoded variational trace anomalies compatible equations. consistency checks
|
exact_dup
|
[
"2515793"
] |
29137438
|
10.1016/j.mechmat.2013.05.007
|
At high loading rates, the development of adiabatic shear bands in metals is conventionally attributed to the strong interactions induced by viscoplastic dissipation within the bands and thermal softening effects. The rheological equation proposed by Johnson and Cook takes both viscoplastic hardening and thermal softening into account. The present paper reviews and includes this equation into a thermodynamic framework in order to analyse the energy impacts of thermal softening. Indeed, this latter implies the existence of a thermomechanical coupling source, which is probably non-negligible and must be considered when estimating temperature variations induced by shear band development
|
Calorimetric consequences of thermal softening in Johnson–Cook’s model
|
calorimetric consequences of thermal softening in johnson–cook’s model
|
loading adiabatic metals conventionally attributed viscoplastic dissipation softening effects. rheological johnson cook viscoplastic hardening softening account. reviews thermodynamic analyse impacts softening. thermomechanical probably negligible estimating
|
exact_dup
|
[
"143692774"
] |
29137912
|
10.1016/j.tsf.2012.06.029
|
A methodology is proposed combining the scattering vector method with energy dispersive diffraction for the non-destructive determination of stress- and composition-depth profiles. The advantage of the present method is a relatively short measurement time and avoidance of tedious sublayer removal; the disadvantage as compared to destructive methods is that depth profiles can only be obtained for depths shallower than half the layer thickness. The proposed method is applied to an expanded austenite layer on stainless steel and allows the separation of stress, composition and stacking fault density gradients. Financial support from the Danish Research Council for Technology and Production Sciences under grant no. 274-07-0344 is gratefully acknowledged
|
Determination of composition, residual stress and stacking fault depth profiles in expanded austenite with energy-dispersive diffraction
|
determination of composition, residual stress and stacking fault depth profiles in expanded austenite with energy-dispersive diffraction
|
methodology combining dispersive diffraction destructive profiles. advantage avoidance tedious sublayer removal disadvantage destructive shallower thickness. expanded austenite stainless steel lows stacking fault gradients.financial danish council technolo gratefully acknowledged
|
exact_dup
|
[
"143693861"
] |
29515914
|
10.1007/JHEP05(2015)109
|
A measurement of the cross-section for Z-boson production in the forward
region of pp collisions at 8TeV centre-of-mass energy is presented. The
measurement is based on a sample of $\rm Z\rightarrow e^+e^-$ decays
reconstructed using the LHCb detector, corresponding to an integrated
luminosity of 2.0fb$^{-1}$. The acceptance is defined by the requirements
$2.0<\eta<4.5$ and $p_{\rm T}>20$GeV for the pseudorapidities and transverse
momenta of the leptons. Their invariant mass is required to lie in the range
60--120GeV. The cross-section is determined to be $$ \sigma({\rm pp\to Z\to
e^+e^-})=93.81\pm0.41({\rm stat})\pm1.48({\rm syst})\pm1.14({\rm lumi})\;{\rm
pb}\,,$$ where the first uncertainty is statistical and the second reflects all
systematic effects apart from that arising from the luminosity, which is given
as the third uncertainty. Differential cross-sections are presented as
functions of the Z-boson rapidity and of the angular variable $\phi^*$, which
is related to the Z-boson transverse momentum
|
Measurement of forward $\rm Z\rightarrow e^+e^-$ production at
$\sqrt{s}=8$TeV
|
measurement of forward $\rm z\rightarrow e^+e^-$ production at $\sqrt{s}=8$tev
|
boson collisions presented. rightarrow decays reconstructed lhcb luminosity acceptance pseudorapidities momenta leptons. gev. sigma stat syst lumi reflects apart arising luminosity uncertainty. boson rapidity boson
|
exact_dup
|
[
"33642044"
] |
33173007
|
10.1016/j.laa.2013.01.016
|
Let P be a system of n linear nonhomogeneous ordinary differential polynomials in a set U of n-1 differential indeterminates.
Differential resultant formulas are presented to eliminate the differential indeterminates in U from P. These formulas are determinants of coefficient matrices of appropriate sets of derivatives of the differential polynomials in P, or in a linear perturbation Pe of P.
In particular, the formula dfres(P) is the determinant of a matrix M(P) having no zero columns if the system P is "super essential".
As an application, if the system PP is sparse generic, such formulas can be used to compute the differential resultant dres(PP) introduced by Li, Gao and Yuan
|
Linear Sparse Differential Resultant Formulas
|
linear sparse differential resultant formulas
|
nonhomogeneous ordinary polynomials indeterminates. resultant formulas eliminate indeterminates formulas determinants derivatives polynomials perturbation dfres determinant columns super sparse generic formulas resultant dres yuan
|
exact_dup
|
[
"148670277"
] |
33176989
|
10.1007/s11947-014-1389-4
|
Apples can be considered as having a complex system formed by several structures at different organization levels: macroscale (>100 μm) and microscale (<100 μm). This work implements 2D T1/T2 global and localized relaxometry sequences on whole apples to be able to perform an intensive non-destructive and non-invasive microstructure study. The 2D T1/T2 cross-correlation spectroscopy allows the extraction of quantitative information about the water compartmentation in different subcellular organelles. A clear difference is found as sound apples show neat peaks for water in different subcellular compartments, such as vacuolar, cytoplasmatic and extracellular water, while in watercore-affected tissues such compartments appear merged. Localized relaxometry allows for the predefinition of slices in order to understand the microstructure of a particular region of the fruit, providing information that cannot be derived from global 2D T1/T2 relaxometry
|
Non-Destructive Global and Localized 2D T1/T2 NMR Relaxometry to Resolve Microstructure in Apples Affected by Watercore
|
non-destructive global and localized 2d t1/t2 nmr relaxometry to resolve microstructure in apples affected by watercore
|
apples macroscale microscale implements localized relaxometry apples intensive destructive invasive microstructure study. spectroscopy extraction compartmentation subcellular organelles. sound apples neat subcellular compartments vacuolar cytoplasmatic extracellular watercore tissues compartments merged. localized relaxometry predefinition slices microstructure fruit relaxometry
|
exact_dup
|
[
"148675740"
] |
33178967
|
10.1016/j.renene.2004.07.017
|
A proposal for an extended formulation of the power coefficient of a wind turbine is presented. This new formulation is a generalization of the Betz–Lanchester expression for the power coefficient as function of the axial deceleration of the wind speed provoked by the wind turbine in operation. The extended power coefficient takes into account the benefits of the power produced and the cost associated to the production of this energy. By the simple model proposed is evidenced that the purely energetic optimum operation condition giving rise to the Betz–Lanchester limit (maximum energy produced) does not coincide with the global optimum operational condition (maximum benefit generated) if cost of energy and degradation of the wind turbine during operation is considered. The new extended power coefficient is a general parameter useful to define global optimum operation conditions for wind turbines, considering not only the energy production but also the maintenance cost and the economic cost associated to the life reduction of the machine
|
The extended Betz–Lanchester limit
|
the extended betz–lanchester limit
|
proposal formulation turbine presented. formulation generalization betz–lanchester axial deceleration provoked turbine operation. benefits energy. evidenced purely energetic optimum giving betz–lanchester coincide optimum operational benefit degradation turbine considered. optimum turbines maintenance machine
|
exact_dup
|
[
"148679913"
] |
34084268
|
10.1007/JHEP06(2015)065
|
We analyse the associated production of Higgs and Z boson via heavy-quark loops at the LHC in the Standard Model and beyond. We first review the main features of the Born 2 → 2 production, and in particular discuss the high-energy behaviour, angular distributions and Z boson polarisation. We then consider the effects of extra QCD radiation as described by the 2 → 3 loop matrix elements, and find that they dominate at high Higgs transverse momentum. We show how merged samples of 0- and 1-jet multiplicities, matched to a parton shower can provide a reliable description of differential distributions in ZH production. In addition to the Standard Model study, results in a generic two-Higgs-doublet-model are obtained and presented for a set of representative and experimentally viable benchmarks for Zh0, ZH0 and ZA0 production. We observe that various interesting features appear either due to the resonant enhancement of the cross-section or to interference patterns between resonant and non-resonant contribution
|
Higgs and Z boson associated production via gluon fusion in the SM and the 2HDM
|
higgs and z boson associated production via gluon fusion in the sm and the 2hdm
|
analyse boson loops beyond. born boson polarisation. extra dominate momentum. merged multiplicities matched parton shower reliable production. generic doublet experimentally viable benchmarks production. resonant enhancement interference resonant resonant
|
exact_dup
|
[
"35089544"
] |
35088918
|
10.1007/JHEP07(2015)027
|
We study the thermoelectric conductivities of a strongly correlated system in the presence of a magnetic field by the gauge/gravity duality. We consider a class of Einstein-Maxwell-Dilaton theories with axion fields imposing momentum relaxation. General analytic formulas for the direct current (DC) conductivities and the Nernst signal are derived in terms of the black hole horizon data. For an explicit model study, we analyse in detail the dyonic black hole modified by momentum relaxation. In this model, for small momentum relaxation, the Nernst signal shows a bell-shaped dependence on the magnetic field, which is a feature of the normal phase of cuprates. We compute all alternating current (AC) electric, thermoelectric, and thermal conductivities by numerical analysis and confirm that their zero frequency limits precisely reproduce our analytic DC formulas, which is a non-trivial consistency check of our methods. We discuss the momentum relaxation effects on the conductivities including cyclotron resonance poles
|
Thermoelectric conductivities at finite magnetic field and the Nernst effect
|
thermoelectric conductivities at finite magnetic field and the nernst effect
|
thermoelectric conductivities duality. einstein maxwell dilaton axion imposing relaxation. analytic formulas conductivities nernst horizon data. analyse dyonic relaxation. relaxation nernst bell shaped cuprates. alternating thermoelectric conductivities confirm precisely reproduce analytic formulas trivial consistency check methods. relaxation conductivities cyclotron poles
|
exact_dup
|
[
"35089095"
] |
35089134
|
10.1007/JHEP07(2015)019
|
We examine integrable λ-deformations of SO(n+1)/SO(n) coset CFTs and their analytic continuations. We provide an interpretation of the deformation as a squashing of the corresponding coset σ-model’s target space. We realise the λ-deformation for the n = 5 case as a solution to supergravity supported by non-vanishing five-form and dilaton. This interpolates between the coset CFT SO(4,2)/SO(4,1) × SO(6)/SO(5) constructed as a gauged WZW model and the non-Abelian T-dual of the AdS5 × S5 spacetime
|
Integrable λ -deformations: squashing coset CFTs and AdS 5 × S 5
|
integrable λ -deformations: squashing coset cfts and ads 5 × s 5
|
examine integrable deformations coset cfts analytic continuations. deformation squashing coset model’s space. realise deformation supergravity vanishing dilaton. interpolates coset gauged abelian spacetime
|
exact_dup
|
[
"35089223"
] |
35089835
|
10.1007/JHEP05(2015)118
|
We discuss the derivation of the trace anomaly using a non-local effective action at one loop. This provides a simple and instructive form and emphasizes infrared physics. We then use this example to explore several of the properties of non-local actions, including displaying the action for the full non-local energy-momentum tensor. As an application, we show that the long-distance corrections at one loop lead to quantum violations of some classical consequences of the equivalence principle, for example producing a frequency dependence of the gravitational bending of light
|
QED trace anomaly, non-local Lagrangians and quantum equivalence principle violations
|
qed trace anomaly, non-local lagrangians and quantum equivalence principle violations
|
derivation trace anomaly loop. instructive emphasizes infrared physics. explore displaying tensor. violations consequences equivalence producing gravitational bending
|
exact_dup
|
[
"35089919"
] |
35089910
|
10.1007/JHEP05(2015)112
|
We study moduli stabilization in combination with inflation in heterotic orbifold compactifications in the light of a large Hubble scale and the favored tensor-to-scalar ratio r ≈ 0.05. To account for a trans-Planckian field range we implement aligned natural inflation. Although there is only one universal axion in heterotic constructions, further axions from the geometric moduli can be used for alignment and inflation. We argue that such an alignment is rather generic on orbifolds, since all non-perturbative terms are determined by modular weights of the involved fields and the Dedekind η function. We present two setups inspired by the mini-landscape models of the ℤ6−II orbifold which realize aligned inflation and stabilization of the relevant moduli. One has a supersymmetric vacuum after inflation, while the other includes a gaugino condensate which breaks supersymmetry at a high scale
|
Natural inflation and moduli stabilization in heterotic orbifolds
|
natural inflation and moduli stabilization in heterotic orbifolds
|
moduli stabilization inflation heterotic orbifold compactifications hubble favored planckian implement aligned inflation. universal axion heterotic constructions axions geometric moduli alignment inflation. argue alignment generic orbifolds perturbative modular weights dedekind function. setups inspired mini landscape orbifold realize aligned inflation stabilization moduli. supersymmetric inflation gaugino condensate breaks supersymmetry
|
exact_dup
|
[
"35089825"
] |
35091083
|
10.1088/1475-7516/2015/03/017
|
We discuss the possibility of constructing supergravity models with a single superfield describing inflation as well as the tiny cosmological constant V ∼ 10⁻¹²⁰. One could expect that the simplest way to do it is to study models with a supersymmetric Minkowski vacuum and then slightly uplift them. However, due to the recently proven no-go theorem, such a tiny uplifting cannot be achieved by a small modification of the parameters of the theory. We illustrate this general result by investigation of models with a single chiral superfield recently proposed by Ketov and Terada. We show that the addition of a small constant or a linear term to the superpotential of a model with a stable supersymmetric Minkowski vacuum converts it to an AdS vacuum, which results in a rapid cosmological collapse. One can avoid this problem and uplift a supersymmetric Minkowski vacuum to a dS vacuum with V₀ ∼ 10⁻¹²⁰ without violating the no-go theorem by making these extra terms large enough. However, we show that this leads to a strong supersymmetry breaking in the uplifted vacua
|
Inflation and dark energy with a single superfield
|
inflation and dark energy with a single superfield
|
supergravity superfield describing inflation tiny cosmological simplest supersymmetric minkowski uplift them. proven tiny uplifting modification theory. illustrate chiral superfield ketov terada. superpotential supersymmetric minkowski converts cosmological collapse. avoid uplift supersymmetric minkowski violating extra enough. supersymmetry breaking uplifted vacua
|
exact_dup
|
[
"35091168",
"35091256"
] |
35091645
|
10.1093/ptep/ptv120
|
In this paper, we apply the orbifold GUT mechanism to the SU(5) model in noncommutative geometry, including the fermionic sector. Imposing proper parity assignments for “constituent fields” of bosons and fermions, the couplings between fermions and the heavy gauge bosons are prohibited by the parity symmetry. As a result, the derived fermionic Lagrangian is just that of the standard model, and proton decay is forbidden at tree level. If quantum fluctuation respects the parity symmetry, the process will be naturally suppressed or even forbidden completely
|
SU(5) orbifold GUT in noncommutative geometry
|
su(5) orbifold gut in noncommutative geometry
|
orbifold noncommutative fermionic sector. imposing proper parity assignments “constituent fields” bosons fermions couplings fermions bosons prohibited parity symmetry. fermionic lagrangian proton forbidden level. fluctuation respects parity naturally suppressed forbidden
|
exact_dup
|
[
"35091739"
] |
35091783
|
10.1088/1475-7516/2015/9/035
|
We review the non-supersymmetric (Extended) Left-Right Symmetric Models (LRSM) and low energy E6-based models to investigate if they can explain both the recently detected excess eejj signal at CMS and leptogenesis. The eejj excess can be explained from the decay of the right-handed gauge bosons (WR) with mass ∼ TeV in certain variants of the LRSM (with g_L ≠ g_R). However, such scenarios cannot accommodate high-scale leptogenesis. Other attempts have been made to explain leptogenesis while keeping the WR mass almost within the reach of the LHC by considering the resonant leptogenesis scenario in the context of the LRSM for relatively large Yukawa couplings. However, certain lepton number violating scattering processes involving the right-handed Higgs triplet and the right-handed neutrinos can stay in equilibrium till the electroweak phase transition and can wash out the lepton asymmetry generated in the resonant leptogenesis scenario for the mass range of WR indicated by the CMS excess signal. Thus in such a scenario one needs to invoke post-sphaleron baryogenesis to explain the observed baryon asymmetry of the universe. Next, we consider three effective low energy subgroups of the superstring-inspired E6 model having a number of additional exotic fermions which provides a rich phenomenology to be explored. We however find that these three effective low energy subgroups of E6 too cannot explain both the eejj excess signal and leptogenesis simultaneously
|
The eejj excess signal at the LHC and constraints on leptogenesis
|
the eejj excess signal at the lhc and constraints on leptogenesis
|
supersymmetric lrsm excess eejj leptogenesis. eejj excess handed bosons variants lrsm scenarios accommodate leptogenesis. attempts leptogenesis keeping resonant leptogenesis lrsm yukawa couplings. lepton violating involving handed triplet handed neutrinos stay till electroweak washout lepton asymmetry resonant leptogenesis excess signal. invoke sphaleron baryogenesis baryon asymmetry universe. subgroups superstring inspired exotic fermions phenomenology explored. subgroups eejj excess leptogenesis simultaneously
|
exact_dup
|
[
"35091687"
] |
35091987
|
10.1007/JHEP08(2015)133
|
Higgs-boson pair production is well known to be capable of probing the trilinear self-coupling of the Higgs boson, which is one of the important ingredients of the Higgs sector itself. Pair production then depends on the top-quark Yukawa coupling $g_t^{S,P}$, the Higgs trilinear coupling $\lambda_{3H}$, and a possible dim-5 contact-type ttHH coupling $g_{tt}^{S,P}$, which may appear in some higher representations of the Higgs sector. We take into account the possibility that the top-Yukawa and the ttHH couplings involved can be CP violating. We calculate the cross sections and the interference terms as coefficients of the square or the 4th power of each coupling ($g_t^{S,P}$, $\lambda_{3H}$, $g_{tt}^{S,P}$) at various stages of cuts, such that the desired cross section under various cuts can be obtained by simply inputting the couplings. We employ the $HH\to \gamma\gamma b\overline{b}$ decay mode of the Higgs-boson pair to investigate the possibility of disentangling the triangle diagram from the box diagram so as to have a clean probe of the trilinear coupling at the LHC. We find that the angular separation between the $b$ and $\overline{b}$ and that between the two photons is useful. We obtain the sensitivity reach of each pair of couplings at the 14 TeV LHC and the future 100 TeV pp machine. Finally, we also comment on using the $b\overline{b}\tau^+\tau^-$ decay mode in the appendix
|
An exploratory study of Higgs-boson pair production
|
an exploratory study of higgs-boson pair production
|
boson capable trilinear boson ingredients itself. yukawa trilinear tthh representations sector. yukawa tthh couplings violating. interference cuts desired cuts inputing couplings. employ gamma gamma overline boson disentangle triangle digram clean trilinear lhc. overline photons useful. couplings machine. comment overline
|
exact_dup
|
[
"35092446"
] |
37749577
|
10.1007/s10021-014-9751-y
|
Earth's surface is rapidly urbanizing, resulting in dramatic changes in the abundance, distribution and character of surface water features in urban landscapes. However, the scope and consequences of surface water redistribution at broad spatial scales are not well understood. We hypothesized that urbanization would lead to convergent surface water abundance and distribution: in other words, cities will gain or lose water such that they become more similar to each other than are their surrounding natural landscapes. Using a database of more than 1 million water bodies and 1 million km of streams, we compared the surface water of 100 US cities with their surrounding undeveloped land. We evaluated differences in areal (A WB) and numeric densities (N WB) of water bodies (lakes, wetlands, and so on), the morphological characteristics of water bodies (size), and the density (D C) of surface flow channels (that is, streams and rivers). The variance of urban A WB, N WB, and D C across the 100 MSAs decreased, by 89, 25, and 71%, respectively, compared to undeveloped land. These data show that many cities are surface water poor relative to undeveloped land; however, in drier landscapes urbanization increases the occurrence of surface water. This convergence pattern strengthened with development intensity, such that high intensity urban development had an areal water body density 98% less than undeveloped lands. Urbanization appears to drive the convergence of hydrological features across the US, such that surface water distributions of cities are more similar to each other than to their surrounding landscapes. © 2014 The Author(s)
|
Convergent Surface Water Distributions in U.S. Cities
|
convergent surface water distributions in u.s. cities
|
earth rapidly urbanizing dramatic abundance character landscapes. scope consequences redistribution broad understood. hypothesized urbanization convergent abundance cities lose surrounding landscapes. million bodies million streams cities surrounding undeveloped land. areal numeric densities bodies lakes wetlands morphological bodies streams rivers msas undeveloped land. cities undeveloped drier landscapes urbanization occurrence water. strengthened areal undeveloped lands. urbanization drive hydrological cities surrounding landscapes.
|
exact_dup
|
[
"37750304"
] |
38678122
|
10.1103/PhysRevD.88.114023
|
Recent measurements of exclusive B⁻→τ⁻ν and B⁰→π⁺l⁻ν̄_l decays via the b→ulν transition process differ from the standard model expectation and, if they persist in future B experiments, will be a definite hint of physics beyond the standard model. Similar hints of new physics have been observed in b→c semileptonic transition processes as well. BABAR measures the ratio of branching fractions of B→(D,D*)τν to the corresponding B→(D,D*)lν, where l represents either an electron or a muon, and finds a 3.4σ discrepancy with the standard model expectation. In this context, we consider a most general effective Lagrangian for the b→ulν and b→clν transition processes in the presence of new physics and perform a combined analysis of all the b→u and b→c (semi)leptonic data to explore various new physics operators and their couplings. We consider various new physics scenarios and give predictions for the B_c→τν and B→πτν decay branching fractions. We also study the effect of these new physics parameters on the ratio of the branching ratios of B→πτν to the corresponding B→πlν decays
|
Effective theory approach to new physics in b→u and b→c leptonic and semileptonic decays
|
effective theory approach to new physics in b→u and b→c leptonic and semileptonic decays
|
exclusive decays b→ulν expectation persist definite hint model. hints semileptonic well. babar branching fractions muon finds discrepancy expectation. lagrangian b→ulν b→clν leptonic explore couplings. scenarios bc→τν b→πτν branching fractions. branching b→πτν b→πlν decays
|
exact_dup
|
[
"52169357"
] |
41116760
|
10.1016/J.ACA.2008.05.048
|
A method based on the coupling of HPLC with ICP-MS with an on-line pre-concentration
micro-column has been developed for the analysis of inorganic and methyl mercury in
the dissolved phase of natural waters. This method allows the rapid pre-concentration
and matrix removal of interferences in complex matrices such as seawater with minimal
sampling handling. Detection limits of 0.07 ng L−1 for inorganic mercury and 0.02 ng L−1 for
methyl mercury have been achieved allowing the determination of inorganic mercury and
methyl mercury in filtered seawater from the Venice lagoon. Good accuracy and reproducibility
was demonstrated by the repeat analysis of the certified reference material BCR-579
coastal seawater. The developed HPLC separation was shown to be also suitable for the
determination of methyl mercury in extracts of the particulate phase
|
Speciation Analysis of Mercury in Seawater from the Lagoon of Venice by on-line Pre-concentration HPLC-ICP-MS
|
speciation analysis of mercury in seawater from the lagoon of venice by on-line pre-concentration hplc-icp-ms
|
hplc micro inorganic methyl mercury dissolved waters. removal interferences seawater handling. inorganic mercury methyl mercury allowing inorganic mercury methyl mercury filtered seawater fromthevenice lagoon. reproducibility repeat certified coastal seawater. hplc methyl mercury extracts particulate
|
exact_dup
|
[
"53157326"
] |
41142154
|
10.1016/j.microc.2015.08.023
|
The present study deals with 20th century manufactured artists' oil paints containing raw and burnt umber pigments, that is, natural earth pigments resulting from the combination of iron and manganese oxides. Manganese, in particular, is known to be a primary drier and to have a siccative effect on oil paint films.
This research aims to show the diversity of formulations behind apparently identical commercial names, as well as to understand how the content of manganese, the presence of modern lipidic media and the hydrolysis mechanisms can promote significant differences in the expected mechanical properties of oil paint films, thus conditioning their long-term performance.
Several manufactured artists' oil paint films containing manganese were selected. Dried films from raw and burnt umber oil paints by Winsor&Newton® (UK), Grumbacher® (USA), Gamblin® (USA) and Speedball® (USA) were studied and information about their chemical composition and mechanical behaviour is here presented. In addition to the identification and the study of the inorganic and organic components present in each formulation through LM, SEM-EDX, FTIR-ATR, XRD and GC-MS analysis, tensile tests were run and stress-strain curves were obtained. Together with evident hue differences, the obtained results showed significant differences in the chemical composition and the mechanical behaviour of the oil paint films
|
Study of the chemical composition and the mechanical behaviour of 20th century commercial artists' oil paints containing manganese-based pigments
|
study of the chemical composition and the mechanical behaviour of 20th century commercial artists' oil paints containing manganese-based pigments
|
deals century manufactured artists paints burnt umber pigments earth pigments iron manganese oxides. manganese drier siccative paint films. aims diversity formulations behind apparently commercial names manganese modern lipidic hydrolysis promote paint films conditioning performance. manufactured artists paint films manganese selected. dried films burnt umber paints winsor newton® grumbacher® gamblin® speedball® presented. inorganic formulation ftir tensile obtained. evident paint films
|
exact_dup
|
[
"53182682"
] |
42968084
|
10.1016/j.chb.2014.09.065
|
In the past decades, online learning has transformed the educational landscape with the emergence of new ways to learn. This fact, together with recent changes in educational policy in Europe aiming to facilitate the incorporation of graduate students into the labor market, has provoked a shift in the delivery of instruction and in the roles played by teachers and students, stressing the need for development of both basic and cross-curricular competencies. In parallel, the last years have witnessed the emergence of new educational disciplines that can take advantage of the information retrieved by technology-based online education in order to improve instruction, such as learning analytics.

This study explores the applicability of learning analytics for prediction of development of two cross-curricular competencies – teamwork and commitment – based on the analysis of Moodle interaction data logs in a Master's Degree program at Universidad a Distancia de Madrid (UDIMA) where the students were education professionals. The results from the study question the suitability of a general interaction-based approach and show no relation between online activity indicators and teamwork and commitment acquisition. The discussion of results includes multiple recommendations for further research on this topic
|
Assessing the suitability of student interactions from Moodle data logs as predictors of cross-curricular competencies
|
assessing the suitability of student interactions from moodle data logs as predictors of cross-curricular competencies
|
decades transformed educational landscape emergence ways learn. educational europe aiming facilitate incorporation graduate labor provoked delivery instruction played teachers stressing curricular competencies. witnessed emergence educational disciplines advantage retrieved instruction analytics. explores applicability analytics curricular competencies teamwork commitment moodle logs master’s universidad distancia madrid udima professionals. suitability indicators teamwork commitment acquisition. recommendations topic
|
exact_dup
|
[
"148681351"
] |
43610271
|
10.1002/9781118914458.ch3
|
This chapter presents a review of the role of several additives in POM processing (lubricating agents, processing aids, nucleating agents), performance (fillers, impact modifiers), lifetime increase (antioxidants, compounds reacting with secondary reaction products such as acid scavengers, UV stabilizers and flame retardants) and aspect properties (pigments). Existing models describing the efficiency of these additives in POM are reviewed in order to predict the effect of other comparable additives not included in this review. Last, since POM compounding can be relatively complex and additives are scarcely used alone, some side effects of these additives, and the possible synergistic or antagonistic effects of combinations of additives, are also reported
|
Polyoxymethylene additives
|
polyoxymethylene additives
|
presents additives lubricating aids nucleating performances fillers modifiers lifetime antioxidants reacting scavengers stabilizers flame retardants aspect pigments tried permitting additives predict comparable additives review. compounding additives scarcely tried additives synergistic antagonistic combinations additives
|
exact_dup
|
[
"143695329"
] |
46778634
|
10.1016/j.gca.2006.06.1568
|
The sorption of Eu(III) onto kaolinite and montmorillonite was investigated up to 150 °C. The clays were purified samples, saturated with Na in the case of montmorillonite. Batch experiments were conducted at 25, 40, 80 and 150 °C in 0.5 M NaClO4 solutions to measure the distribution coefficients (Kd) of Eu as a trace element (<10−6 mol/L) between the solution and kaolinite. For the Na-montmorillonite, we used Kd results from a previous study [Tertre, E., Berger, G., Castet, S., Loubet, M., Giffaut, E., 2005. Experimental study of adsorption of Ni2+, Cs+ and Ln3+ onto Na-montmorillonite up to 150 °C. Geochim. Cosmochim. Acta 69, 4937–4948] obtained under exactly the same conditions. The number and nature of the Eu species sorbed onto both clay minerals were investigated by time resolved laser fluorescence spectroscopy (TRLFS) in specific experiments in the same temperature range. We identified a unique inner-sphere complex linked to the aluminol sites in both clays, assumed to be =AlOEu2+ at the edge of the particles, and a second exchangeable outer-sphere complex for montmorillonite, probably in an interlayer position. The Kd values were used to adjust the parameters of a surface complexation model (DLM: diffuse layer model) from 25 to 150 °C. The number of Eu complexes and the stoichiometry of reactions were constrained by TRLFS. The acidity constants of the amphoteric aluminol sites were taken from another study [Tertre, E., Castet, S., Berger, G., Loubet, M., Giffaut, E. Acid/base surface chemistry of kaolinite and Na-montmorillonite at 25 and 60 °C: experimental study and modelling. Geochim. Cosmochim. Acta, in press], which integrates the influence of the negative structural charge of clays on the acid/base properties of edge sites as a function of temperature and ionic strength.
The results of the modelling show that the observed shift of the sorption edge towards low pH with increasing temperature results solely from the contribution of the double bond; length as m-dashAlOEu2+ edge complexes. Finally, we successfully tested the performance of our model by confronting the predictions with experimental Kd data. We used our own data obtained at lower ionic strength (previous study) or higher suspension density and higher starting concentration (TRLFS runs, this study), as well as published data from other experimental studies [Bradbury, M.H., Baeyens, B., 2002. Sorption of Eu on Na and Ca-montmorillonite: experimental investigations and modeling with cation exchange and surface complexation. Geochim. Cosmochim. Acta 66, 2325–2334; Kowal-Fouchard, A., 2002. Etude des mécanismes de rétention des ions U(IV) et Eu(III) sur les argiles: influence des silicates. Ph.D. Thesis, Université Paris Sud, France, 330p]
|
Europium retention onto clay minerals from 25 to 150 °C: Experimental measurements, spectroscopic features and sorption modelling
|
europium retention onto clay minerals from 25 to 150 °c: experimental measurements, spectroscopic features and sorption modelling
|
sorption kaolinite montmorillonite clays purified saturated montmorillonite. batch naclo trace kaolinite. montmorillonite tertre berger castet loubet giffaut adsorption montmorillonite geochim. cosmochim. acta conditions. sorbed clay minerals resolved fluorescence spectroscopy trlfs range. sphere aluminol clays bond dashaloeu exchangeable outer sphere montmorillonite probably interlayer position. adjust complexation diffuse complexes stoichiometry constrained trlfs. acidity amphoteric aluminol tertre castet berger loubet giffaut kaolinite montmorillonite modelling. geochim. cosmochim. acta integrates clays ionic strength. sorption solely bond dashaloeu complexes. successfully confronting data. ionic suspension trlfs runs bradbury m.h. baeyens sorption montmorillonite investigations cation complexation. geochim. cosmochim. acta kowal fouchard etude mécanismes rétention argiles silicates. ph.d. thesis université paris
|
exact_dup
|
[
"152331045"
] |
46781060
|
10.1051/0004-6361:20053263
|
To study the evolution of silicate dust in different astrophysical environments we simulate, in the laboratory, interstellar and circumstellar ion irradiation and thermal annealing processes. An experimental protocol that follows different steps in the dust life-cycle was developed. Using the silicate 10 $\mu$m band as an indicator, the evolution of the structural properties of an ion-irradiated olivine-type silicate sample, as a function of temperature, is investigated and an activation energy for crystallization is determined. The obtained value of ${E_{\rm a}}/k$ = 41 700 $\pm$ 2400 K is in good agreement with previous determinations of the activation energies of crystallization reported for non-ion-irradiated, amorphous silicates. This implies that the crystallization process is independent of the history of the dust. In particular, the defect concentration due to irradiation appears not to play a major role in stimulating, or hindering, crystallization at a given temperature. This activation energy is an important thermodynamical parameter that must be used in theoretical models which aim to explain the dust evolution from its place of birth in late type stars to its incorporation into young stellar environments, proto-stellar discs and proto-planetary systems after long passage through the interstellar medium
|
First determination of the (re)crystallization activation energy of an irradiated olivine-type silicate
|
first determination of the (re)crystallization activation energy of an irradiated olivine-type silicate
|
silicate astrophysical environments simulate interstellar circumstellar irradiation annealing processes. developed. silicate indicator irradiated olivine silicate crystallization determined. determinations crystallization irradiated amorphous silicates. crystallization dust. defect irradiation stimulating hindering crystallization temperature. thermodynamical birth incorporation environments proto discs proto planetary passage interstellar
|
exact_dup
|
[
"152385713"
] |
47081533
|
10.1017/CBO9781139236157.006
|
International audienceThe semantics of dialogue is a fundamental topic for a number of reasons. First, dialogue is the primary medium of language use, phylogenetically and ontogenetically. Second, studying dialogue forces one to a particularly careful study of the nature of context. The context has a role to play in determining what one can or should say at a given point and also how to say it. Conversely, it affords the interlocutors a very impressive economy of expression – there is much subtlety that can be achieved with relatively little effort drawing simply on material that is in the context. Consequently, two themes will drive this article, relating to two fundamental problems a semantic analysis in dialogue has to tackle: (1) Conversational relevance: given that a conversation is in a certain state, what utterances can be produced coherently by each conversational participant in that state? Conversational meaning: what conversational states are appropriate for a given word/construction and what import will that word have in such a state
|
The Semantics of dialogue
|
the semantics of dialogue
|
audiencethe semantics dialogue topic reasons. dialogue phylogenetically ontogenetically. studying dialogue forces careful context. determining conversely affords interlocutors impressive economy subtlety effort drawing context. themes drive relating semantic dialogue tackle conversational relevance conversation utterances coherently conversational participant conversational meaning conversational word import word
|
exact_dup
|
[
"51440217",
"52190722"
] |
47081755
|
10.1103/PhysRevE.91.053007
|
International audienceBoiling crisis is a transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As under such conditions the surface tension is very small, the experiments are carried out in the reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of the dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. It was observed that, in agreement with theoretical predictions, saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough
|
Criticality in the slowed-down boiling crisis at zero gravity
|
criticality in the slowed-down boiling crisis at zero gravity
|
audienceboiling crisis nucleate film boiling. heater boiling crisis hindered vapor slow resolved. tension preserve bubble geometry. weightlessness created artificially compensating forces. reveal fractal contour percolating heater precedes boiling crisis. spot confirms boiling crisis phenomenon. saturated boiling tends precision boiling crisis lasts
|
exact_dup
|
[
"51221966",
"51930578",
"52670222",
"52895644"
] |
47085349
|
10.1016/j.ic.2012.11.005
|
International audienceThe model checking problem for finite-state open systems (module checking) has been extensively studied in the literature, both in the context of environments with perfect and imperfect information about the system. Recently, the perfect information case has been extended to infinite-state systems (pushdown module checking). In this paper, we extend pushdown module checking to the imperfect information setting; i.e., to the case where the environment has only a partial view of the system's control states and push-down store content. We study the complexity of this problem with respect to the branching-time temporal logics CTL, CTL* and the propositional µ-calculus. We show that pushdown module checking, which is by itself harder than pushdown model checking, becomes undecidable when the environment has imperfect information. We also show that undecidability relies on hiding information about the pushdown store. Indeed, we prove that with imperfect information about the control states, but a visible pushdown store, the problem is decidable and its complexity is 2Exptime-complete for CTL and the propositional µ-calculus, and 3Exptime-complete for CTL*
|
Pushdown Module Checking with Imperfect Information
|
pushdown module checking with imperfect information
|
audiencethe checking module checking extensively environments perfect imperfect system. perfect infinite pushdown module checking extend pushdown module checking imperfect i.e. push store content. branching logics propositional calculus. pushdown module checking harder pushdown checking undecidable imperfect information. undecidability relies hiding pushdown store. imperfect visible pushdown store decidable exptime propositional calculus exptime
|
exact_dup
|
[
"48159370"
] |
47092652
|
10.1053/j.gastro.2014.03.051
|
International audienceBACKGROUND & AIMS: We investigated the effectiveness of the protease inhibitors peginterferon and ribavirin in treatment-experienced patients with hepatitis C virus (HCV) genotype 1 infection and cirrhosis. METHODS: In the Compassionate Use of Protease Inhibitors in Viral C Cirrhosis study, 511 patients with HCV genotype 1 infection and compensated cirrhosis who did not respond to a prior course of peginterferon and ribavirin (44.3% relapsers or patients with viral breakthrough, 44.8% partial responders, and 8.0% null responders) were given either telaprevir (n = 299) or boceprevir (n = 212) for 48 weeks. We assessed percentages of patients with sustained viral responses 12 weeks after therapy and safety. This observational study did not allow for direct comparison of the 2 regimens. RESULTS: Among patients given telaprevir, 74.2% of relapsers, 40.0% of partial responders, and 19.4% of null responders achieved SVR12. Among those given boceprevir, 53.9% of relapsers, 38.3% of partial responders, and none of the null responders achieved SVR12. In multivariate analysis, factors associated with SVR12 included prior response to treatment response, no lead-in phase, HCV subtype 1b (vs 1a), and baseline platelet count greater than 100,000/mm(3). Severe adverse events occurred in 49.9% of cases, including liver decompensation, severe infections in 10.4%, and death in 2.2%. In multivariate analysis, baseline serum albumin level less than 35 g/L and baseline platelet counts of 100,000/mm(3) or less predicted severe side effects or death. CONCLUSIONS: Relatively high percentages of real-life, treatment-experienced patients with HCV genotype 1 infection and cirrhosis respond to the combination of peginterferon and ribavirin with telaprevir or boceprevir. However, side effects are frequent and often severe. Baseline levels of albumin and platelet counts can be used to guide treatment decisions. ClinicalTrials.gov number: NCT01514890
|
Effectiveness of telaprevir or boceprevir in treatment-experienced patients with HCV genotype 1 infection and cirrhosis: Triple therapy in HCV genotype 1 cirrhotics
|
effectiveness of telaprevir or boceprevir in treatment-experienced patients with hcv genotype 1 infection and cirrhosis. : triple therapy in hcv genotype 1 cirrhotics
|
audiencebackground aims effectiveness protease inhibitors peginterferon ribavirin experienced hepatitis genotype cirrhosis. compassionate protease inhibitors viral cirrhosis genotype compensated cirrhosis respond peginterferon ribavirin relapsers viral breakthrough responders responders telaprevir boceprevir weeks. percentages sustained viral safety. observational regimens. telaprevir relapsers responders responders boceprevir relapsers responders none responders multivariate subtype platelet count adverse occurred decompensation infections multivariate albumin platelet counts death. percentages experienced genotype cirrhosis respond peginterferon ribavirin telaprevir boceprevir. frequent severe. albumin platelet counts guide decisions.
|
exact_dup
|
[
"48186900",
"49282514",
"51442135",
"51948372",
"52193669",
"52778205",
"52864153",
"54033561"
] |
47117801
|
10.1002/art.23253
|
International audienceOBJECTIVE: To assess the impact, in terms of statistical power and bias of treatment effect, of approaches to dealing with missing data in randomized controlled trials of rheumatoid arthritis with radiographic outcomes. METHODS: We performed a simulation study. The missingness mechanisms we investigated copied the process of withdrawal from trials due to lack of efficacy. We compared 3 methods of managing missing data: all available data (case-complete), last observation carried forward (LOCF), and multiple imputation. Data were then analyzed by classic t-test (comparing the mean absolute change between baseline and final visit) or F test (estimation of treatment effect with repeated measurements by a linear mixed-effects model). RESULTS: With a missing data rate close to 15%, the treatment effect was underestimated by 18% as estimated by a linear mixed-effects model with a multiple imputation approach to missing data. This bias was lower than that obtained with the case-complete approach (-25%) or LOCF approach (-35%). This statistical approach (combination of multiple imputation and mixed-effects analysis) was moreover associated with a power of 70% (for a 90% nominal level), whereas LOCF was associated with a power of 55% and a case-complete power of 58%. Analysis with the t-test gave qualitatively equivalent but poorer quality results, except when multiple imputation was applied. CONCLUSION: Our simulation study demonstrated multiple imputation, offering the smallest bias in treatment effect and the highest power. These results can help in planning trials, especially in choosing methods of imputation and data analysis
|
Missing data in randomized controlled trials of rheumatoid arthritis with radiographic outcomes: A simulation study.
|
missing data in randomized controlled trials of rheumatoid arthritis with radiographic outcomes: a simulation study.
|
audienceobjective dealing missing randomized rheumatoid arthritis radiographic outcomes. study. missingness copied withdrawal efficacy. managing missing locf imputation. classic visit repeated missing underestimated imputation missing data. locf imputation nominal locf gave qualitatively poorer imputation applied. imputation offering smallest power. planning choosing imputation
|
exact_dup
|
[
"52199607"
] |
47139180
|
10.1007/s00284-013-0364-z
|
The effect of intracellular reduced glutathione (GSH) in the lead stress response of Saccharomyces cerevisiae was investigated. Yeast cells exposed to Pb, for 3 h, lost the cell proliferation capacity (viability) and decreased intracellular GSH level. The Pb-induced loss of cell viability was compared among yeast cells deficient in GSH1 (∆gsh1) or GSH2 (∆gsh2) genes and wild-type (WT) cells. When exposed to Pb, ∆gsh1 and ∆gsh2 cells did not display an increased loss of viability, compared with WT cells. However, the depletion of cellular thiols, including GSH, by treatment of WT cells with iodoacetamide (an alkylating agent, which binds covalently to thiol group), increased the loss of viability in Pb-treated cells. In contrast, GSH enrichment, due to the incubation of WT cells with amino acids mixture constituting GSH (l-glutamic acid, l-cysteine and glycine), reduced the Pb-induced loss of proliferation capacity. The obtained results suggest that intracellular GSH is involved in the defence against the Pb-induced toxicity; however, at physiological concentration, GSH seems not to be sufficient to prevent the Pb-induced loss of cell viability
|
Evaluation of the role of glutathione in the lead-induced toxicity in saccharomyces cerevisiae
|
evaluation of the role of glutathione in the lead-induced toxicity in saccharomyces cerevisiae
|
intracellular glutathione saccharomyces cerevisiae investigated. yeast exposed lost proliferation viability intracellular level. viability yeast deficient ∆gsh ∆gsh cells. exposed ∆gsh ∆gsh display viability cells. depletion thiols iodoacetamide alkylating agent binds covalently thiol viability cells. enrichment incubation mixture constituting glutamic cysteine glycine proliferation capacity. intracellular defence toxicity physiological prevent viability
|
exact_dup
|
[
"55625730"
] |
47309709
|
10.1103/PhysRevB.77.125433
|
International audienceThe high pressure phase diagram of CsC8 graphite intercalated compound has been investigated at ambient temperature up to 32 GPa. Combining X-ray and neutron diffraction, Raman and X- ray absorption spectroscopies, we report for the first time that CsC8, when pressurized, undergoes phase transitions around 2.0, 4.8 and 8 GPa. Possible candidate lattice structures and the transition mechanism involved are proposed. We show that the observed transitions involve the structural re- arrangement in the Cs sub-network while the distance between the graphitic layers is continuously reduced at least up to 8.9 GPa. Around 8 GPa, important modifications of signatures of the electronic structure measured by Raman and X-ray absorption spectroscopies evidence the onset of a new transition
|
High pressure behavior of CsC8 graphite intercalation compound
|
high pressure behavior of csc8 graphite intercalation compound
|
audiencethe graphite intercalated compound ambient gpa. combining neutron diffraction raman spectroscopies pressurized undergoes gpa. candidate proposed. involve arrangement graphitic continuously gpa. modifications signatures raman spectroscopies onset
|
exact_dup
|
[
"52328433",
"52761031"
] |
47326854
|
10.1007/978-3-662-43459-8_2
|
Part 1: Creating ValueInternational audienceManaging creativity has proven to be one of the most important drivers in software development and use. The continuous changing market environment drives companies like Google, SAS Institute and LEGO to focus on creativity as an increasing necessity when competing through sustained innovations. However, creativity in the information systems (IS) environment is a challenge for most organizations that is primarily caused by not knowing how to strategize creative processes in relation to IS strategies, thus, causing companies to act ad hoc in their creative endeavors. In this paper, we address the organizational challenges of creativity in software organizations. Grounded in a previous literature review and a rigorous selection process, we identify and present a model of seven important factors for creativity in software organizations. From these factors, we identify 21 challenges that software organizations experience when embarking on creative endeavors and transfer them into a comprehensive framework. Using an interpretive research study, we further study the framework by analyzing how the challenges are integrated in 27 software organizations. Practitioners can use this study to gain a deeper understanding of creativity in their own business while researchers can use the framework to gain insight while conducting interpretive field studies of managing creativity
|
The Challenges of Creativity in Software Organizations
|
the challenges of creativity in software organizations
|
creating valueinternational audiencemanaging creativity proven drivers use. changing drives companies google lego creativity necessity competing sustained innovations. creativity challenge organizations primarily knowing strategize creative causing companies creative endeavors. organizational challenges creativity organizations. grounded rigorous seven creativity organizations. challenges organizations embarking creative endeavors comprehensive framework. interpretive analyzing challenges organizations. practitioners deeper creativity researchers insight conducting interpretive managing creativity
|
exact_dup
|
[
"47291115"
] |
47339047
|
10.1016/j.labeco.2014.05.002
|
International audienceCombining large (up to 25%) extracts of five French censuses and data from Labor Force Surveys for 1968-1999, we use Borjas (2003)'s factor proportions methodology for France and find that a 10 p.p. increase in the immigrant share raises natives' wages by 3.3%, which is in stark contrast with the results in Borjas (2003) for the U.S. The positive impact of immigration on natives' wages and employment is shown to hold also at the regional level. We find evidence that this positive correlation partly comes from the imperfect substitutability of natives and immigrants within education/experience cells. Specifically, (i) the occupational distribution of natives and immigrants within these cells is more dissimilar when there are more immigrants in the cell; (ii) natives tend to perform more abstract tasks when there are more immigrants in the cell; and (iii) an important part of the positive relation between immigration and wages comes from a reallocation of natives to better-paid occupations within the cells. However, we argue that this positive correlation is also likely to be related to the inability of the Borjas (2003) model to perfectly account for the important changes in the wage distribution and the educational level characterizing the French economy in this period
|
The impact of immigration on the French labor market: Why so different?
|
the impact of immigration on the french labor market: why so different?
|
audiencecombining extracts french censuses labor surveys borjas proportions methodology p.p. immigrant share raises natives wages stark borjas u.s. immigration natives wages employment hold level. partly comes imperfect substitutability natives immigrants cells. occupational natives immigrants dissimilar immigrants natives tend tasks immigrants immigration wages comes reallocation natives paid occupations cells. argue inability borjas perfectly wage educational characterizing french economy
|
exact_dup
|
[
"52810118"
] |
47773973
|
10.1016/j.aap.2009.10.022
|
Unintentional injuries continue to be a serious public-health problem for children and are higher for boys than for girls, from infancy through adulthood. Literature on differential socialization concerning risky behaviors and gender stereotypes suggests that sex differences in unintentional injuries could be explained by children's differential feedback to social pressure, leading to behaviors which conform to masculine and feminine stereotypes. We made the prediction that boys' and girls' conformity with masculine stereotypes influences injury-risk behaviors among preschoolers. Masculinity scores, femininity scores, and injury-risk behaviors of 170 three- to six-year-old children (89 boys and 81 girls) were measured indirectly on two scales filled out by their parents. Results show that boys' and girls' injury-risk behaviors are predicted by masculine stereotype conformity and that girls' masculine behaviors decline with increasing age. These results underline the impact of gender roles - and of the differential socialization associated with those roles - on sex differences in children's risky behaviors as early as the preschool period. Injury, Child, Gender, Preschool, Socialization, Stereotype
|
Gender Stereotype Conformity and Age as Determinants of Preschoolers' Injury-Risk Behaviors
|
gender stereotype conformity and age as determinants of preschoolers' injury-risk behaviors
|
unintentional injuries continue serious boys girls infancy adulthood. socialization concerning risky behaviors gender stereotypes unintentional injuries behaviors conform masculine feminine stereotypes. boys girls conformity masculine stereotypes influences injury behaviors preschoolers. masculinity femininity injury behaviors boys girls indirectly filled parents. boys girls injury behaviors masculine stereotype conformity girls masculine behaviors decline age. underline gender roles socialization roles risky behaviors preschool period. injury gender preschool socialization stereotyp
|
exact_dup
|
[
"47810186"
] |
47811133
|
10.1007/s12008-008-0051-7
|
International audienceIn this paper, we propose a prototype of a collaborative teleassistance system for mechanical repairs based on Augmented Reality (AR). This technology is generally used to implement specific assistance applications for users, which consist of providing all the information, known as augmentations, required to perform a task. For teletransmission applications, operators are equipped with a wearable computer and a technical support expert can accurately visualize what the operator sees thanks to the teletransmission of the corresponding video stream. Within the framework of remote communication, our aim is to foster collaboration, especially informal collaboration, between the operator and the expert in order to make teleassistance easier and more efficient. To do this we rely on classical repair technologies and on collaborative systems to introduce a new human-machine interaction: the Picking Outlining Adding interaction (POA interaction). With this new interaction paradigm, technical information is provided by directly Picking, Outlining and Adding information to an item in an operator's video stream
|
A New AR Interaction Paradigm for Collaborative TeleAssistance system: The P.O.A
|
a new ar interaction paradigm for collaborative teleassistance system: the p.o.a
|
audiencein propose prototype collaborative teleassistance repairs augmented reality implement assistance consist augmentations task. teletransmission equipped wearable expert accurately visualize sees thanks teletransmission video stream. remote foster informal expert teleassistance easier efficient. rely repair technologies collaborative machine picking outlining adding paradigm picking outlining adding item video stream
|
exact_dup
|
[
"47847331",
"50542753"
] |
47883130
|
10.1016/j.jneuroling.2008.07.004
|
International audienceSpecific Language Impairment (SLI) is a disorder characterised by slow, abnormal language development. Most children with this disorder do not present any other cognitive or neurological deficits. There are many different pathological developmental profiles and switches from one profile to another often occur. An alternative would be to consider SLI as a generic name covering three developmental language disorders: developmental verbal dyspraxia, linguistic dysphasia, and pragmatic language impairment. The underlying cause of SLI is unknown and the numerous studies on the subject suggest that there is no single cause. We suggest that SLI is the result of an abnormal development of the language system, occurring when more than one part of the system fails, thus blocking the system's natural compensation mechanisms. Since compensation also hinders linguistic evaluation, one possibility for diagnosis and remediation control is to assess basic cognitive abilities by non-linguistic means whenever possible. Neurological plausible bases for language and language development should also be taken into account to offer new hypotheses and research issues for future work on SLI
|
Specific language impairment as systemic developmental disorders
|
specific language impairment as systemic developmental disorders
|
audiencespecific impairment disorder characterised slow abnormal development. disorder neurological deficits. pathological developmental switches occur. generic name covering developmental disorders developmental verbal dyspraxia linguistic dysphasia pragmatic impairment. unknown numerous cause. abnormal occurring fails blocking compensation mechanisms. compensation hinders linguistic remediation abilities linguistic whenever possible. neurological plausible bases offer hypotheses
|
exact_dup
|
[
"47846830"
] |
48162417
|
10.1016/j.jallcom.2015.08.039
|
International audienceShort range order of glassy Ge20Ga10Se70 and Ge20Ga5Se75 was investigated by neutron diffraction and extended X-ray absorption fine structure spectroscopy (EXAFS) at Ge, Ga and Se K-edges. For each composition large scale structural models were obtained by fitting simultaneously the four experimental datasets in the framework of the reverse Monte Carlo simulation technique. It was found that both Ge and Ga are predominantly fourfold coordinated. The quality of the fits was strongly improved by introducing Ge-Ga bonding. Models giving the best agreement with experimental data show that Ga has a complex effect on the Ge-Se host matrix: i) it enters the covalent network by forming Ga-Ge bonds ii) by decreasing the number of Se atoms around Ge, it contributes to the formation of Se-Se bonds, which may explain the higher solubility of lanthanide ions iii) the average coordination number of Se increases due to the Ga-Se ‘extra’ bonds. The higher average coordination of the network may be responsible for the increase of Tg upon adding Ga to Ge-Se glasses
|
Short range order in Ge-Ga-Se glasses
|
short range order in ge-ga-se glasses
|
audienceshort glassy neutron diffraction fine spectroscopy exafs edges. fitting simultaneously datasets reverse monte carlo technique. predominantly fourfold coordinated. fits introducing bonding. giving enters covalent forming bonds decreasing contributes bonds solubility lanthanide coordination ‘extra’ bonds. coordination adding glasse
|
exact_dup
|
[
"52674785"
] |
48166813
|
10.1016/j.tsf.2015.08.019
|
International audienceIn this paper, the properties of an optimized BaSrTiO3 thin film, deposited on an alumina substrate using a sol-gel process, are presented. The real and imaginary parts of the permittivity and the tunability have been measured over 7 decades of frequency and in a temperature interval of 320 °C, which provides a good knowledge of the material properties for microwave applications. The dielectric properties of the films show a good stability in frequency and in temperature. From −80 °C to 20 °C, the permittivity changes less than 2% and from −75 °C to 100 °C, the tunability stays higher than 90% of its maximum value. The frequency dependence of the relative permittivity of the thin film is rather small since it only varies from 375 at 1 kHz to 350 at 5 GHz. As a main consequence, the tunability which attains almost 60% under a bias field of 400 kV/cm, is very stable in frequency up to 5 GHz. The dielectric losses tan δ, measured up to 1 GHz, stay below 0.02 for the complete frequency range. Although the material is in the ferroelectric phase, the hysteresis effect is quite negligible, which results in a well-determined permittivity value for a given electric bias field. The characterized thin film has been integrated into a reflectarray cell allowing a dynamic control of the reflected phase. The measured phase-shift value is close to the simulated one, showing the performance of the material
|
Temperature stable BaSrTiO3 thin films suitable for microwave applications
|
temperature stable basrtio3 thin films suitable for microwave applications
|
audiencein optimized basrtio film deposited mina presented. imaginary permittivity tunability decades microwave applications. dielectric films temperature. permittivity tunability stays value. permittivity film varies ghz. tunability attains ghz. dielectric losses stay range. ferroelectric hysteresis negligible permittivity field. film reflectarray allowing reflected phase.
|
exact_dup
|
[
"52995256"
] |
48227513
|
10.1016/j.micpro.2011.08.007
|
International audienceIn the Software Radio context, the parametrization is becoming an important topic especially when it comes to multistandard designs. This paper capitalizes on the Common Operator technique to present new common structures for the FFT and FEC decoding algorithms. A key benefit of exhibiting common operators is the regular architecture it brings when implemented in a Common Operator Bank (COB). This regularity makes the architecture open to future function mapping and adapted to accommodated silicon technology variability through dependable design
|
A common operator for FFT and FEC decoding
|
a common operator for fft and fec decoding
|
audiencein parametrization becoming topic comes multistandard designs. capitalizes decoding algorithms. benefit exhibiting architecture brings implemented bank regularity architecture adapted accommodated silicon dependable
|
exact_dup
|
[
"52689666",
"52800991",
"53009982"
] |
48337122
|
10.1007/978-3-642-36065-7_13
|
International audienceIn this paper we consider a graph parameter called contiguity which aims at encoding a graph by a linear ordering of its vertices. We prove that the contiguity of cographs is unbounded but is always dominated by O(log n), where n is the number of vertices of the graph. And we prove that this bound is tight in the sense that there exists a family of cographs on n vertices whose contiguity is Omega(log n). In addition to these results on the worst-case contiguity of cographs, we design a linear-time constant-ratio approximation algorithm for computing the contiguity of an arbitrary cograph, which constitutes our main result. As a by-product of our proofs, we obtain a min-max theorem, which is worth of interest in itself, stating equality between the rank of a tree and the minimum height of its path partitions
|
Linear-time Constant-ratio Approximation Algorithm and Tight Bounds for the Contiguity of Cographs
|
linear-time constant-ratio approximation algorithm and tight bounds for the contiguity of cographs
|
audiencein contiguity aims encoding ordering vertices. contiguity cographs unbounded dominated graph. tight cographs contiguity omega worst contiguity cographs contiguity cograph constitutes result. proofs worth stating equality partitions
|
exact_dup
|
[
"52313777"
] |
48342840
|
10.1007/978-3-642-15396-9_8
|
International audienceA new interval constraint propagation algorithm, called MOnotonic Hull Consistency (Mohc), has recently been proposed. Mohc exploits monotonicity of functions to better filter variable domains. Embedded in an interval-based solver, Mohc shows very high performance for solving systems of numerical constraints (equations or inequalities) over the reals. However, the main drawback is that its revise procedure depends on two user-defined parameters. This paper reports a rigorous empirical study resulting in a variant of Mohc that avoids a manual tuning of the parameters. In particular, we propose a policy to adjust in an auto-adaptive way, during the search, the parameter sensitive to the monotonicity of the revised function
|
Making Adaptive an Interval Constraint Propagation Algorithm Exploiting Monotonicity
|
making adaptive an interval constraint propagation algorithm exploiting monotonicity
|
audiencea propagation monotonic hull consistency mohc proposed. mohc exploits monotonicity filter domains. embedded solver mohc solving inequalities reals. drawback revise parameters. rigourous variant mohc avoids manual tuning parameters. propose adjust auto adaptive monotonicity revised
|
exact_dup
|
[
"52784663"
] |
48343606
|
10.1007/s11238-010-9220-9
|
International audienceThe objective of this article is to investigate the impact of agent heterogeneity (as regards their attitude towards cooperation) and payoff structure on cooperative behaviour, using an experimental setting with incomplete information. A game of chicken is played considering two types of agents: 'unconditional cooperators', who always cooperate, and 'strategic cooperators', who do not cooperate unless it is in their interest to do so. Overall, our data show a much higher propensity to cooperate than predicted by theory. They also suggest that agent heterogeneity matters: the higher the proportion of 'strategic cooperators' in the population, the higher their probability to cooperate. Finally, our data confirm that higher rewards to cooperation (embedded in the payoff structure) tend to lower defection. Taken together, our results suggest that the subjects might be non-expected utility maximizers, dealing with both outcomes and probabilities in a non-linear manner
|
The puzzle of cooperation in a game of chicken: An experimental study
|
the puzzle of cooperation in a game of chicken: an experimental study
|
audiencethe agent heterogeneity regards attitude cooperation payoff cooperative incomplete information. game chicken played unconditional cooperators cooperate strategic cooperators cooperate unless propensity cooperate theory. agent heterogeneity matters proportion strategic cooperators cooperate. confirm rewards cooperation embedded payoff tend defection. utility maximizers dealing probabilities manner
|
exact_dup
|
[
"47737812",
"52630570",
"52825521"
] |
50568990
|
10.1016/j.it.2016.04.002
|
As the first line of innate immune defense, neutrophils need to mount a rapid and robust antimicrobial response. Recent studies implicate various positive feedback amplification processes in achieving that goal. Feedback amplification ensures effective migration of neutrophils in shallow chemotactic gradients, multiple waves of neutrophil recruitment to the site of inflammation, and the augmentation of various effector functions of the cells. We review here such positive feedback loops including intracellular and autocrine processes, paracrine effects mediated by lipid (LTB4), chemokine, and cytokine mediators, and bidirectional interactions with the complement system and with other immune and non-immune cells. These amplification mechanisms are not only involved in antimicrobial immunity but also contribute to neutrophil-mediated tissue damage under pathological conditions. © 2016 Elsevier Ltd
|
Feedback Amplification of Neutrophil Function
|
feedback amplification of neutrophil function
|
innate immune defense neutrophils mount robust antimicrobial response. implicate amplification achieving goal. amplification ensures migration neutrophils shallow chemotactic gradients neutrophil recruitment inflammation augmentation effector cells. loops intracellular autocrine paracrine chemokine cytokine mediators bidirectional complement immune immune cells. amplification antimicrobial immunity neutrophil pathological conditions.
|
exact_dup
|
[
"78471183"
] |
50617305
|
10.1007/s10601-009-9072-5
|
International audienceInter-block backtracking (IBB) computes all the solutions of sparse systems of nonlinear equations over the reals. This algorithm, introduced by Bliek et al. (1998) handles a system of equations previously decomposed into a set of (small) k × k sub-systems, called blocks. Partial solutions are computed in the different blocks in a certain order and combined together to obtain the set of global solutions. When solutions inside blocks are computed with interval-based techniques, IBB can be viewed as a new interval-based algorithm for solving decomposed systems of non-linear equations. Previous implementations used Ilog Solver and its IlcInterval library as a black box, which implied several strong limitations. New versions come from the integration of IBB with the interval-based library Ibex. IBB is now reliable (no solution is lost) while still gaining at least one order of magnitude w.r.t. solving the entire system. On a sample of benchmarks, we have compared several variants of IBB that differ in the way the contraction/filtering is performed inside blocks and is shared between blocks. We have observed that the use of interval Newton inside blocks has the most positive impact on the robustness and performance of IBB. This modifies the influence of other features, such as intelligent backtracking. Also, an incremental variant of inter-block filtering makes this feature more often fruitful
|
Improving Inter-Block Backtracking with Interval Newton
|
improving inter-block backtracking with interval newton
|
audienceinter backtracking computes sparse reals. bliek handles decomposed blocks. blocks solutions. blocks viewed solving decomposed equations. implementations ilog solver ilcinterval library implied limitations. versions come library ibex. reliable lost gaining w.r.t. solving system. benchmarks variants contraction filtering blocks shared blocks. newton blocks robustness ibb. modifies intelligent backtracking. incremental variant filtering fruitful
|
exact_dup
|
[
"48352805",
"53013953"
] |
51223017
|
10.1016/j.jmatprotec.2015.11.002
|
International audienceTwo hot cracking criteria have been tested: the RDG criterion, based on the prediction of liquid cavitation as a precursor of crack formation, and a strain-based solid mechanics criterion. Both criteria have been implemented in a finite element thermo-mechanical simulation of gas tungsten arc welding. After comparison with experimental results obtained in a test campaign on stainless steel AISI 321, both criteria have shown good ability to predict crack occurrence. Yet, the best response in terms of cracking prediction was obtained with the strain-based solid mechanics criterion
|
Comparison of two hot tearing criteria in numerical modelling of arc welding of stainless steel AISI 321
|
comparison of two hot tearing criteria in numerical modelling of arc welding of stainless steel aisi 321
|
audiencetwo cracking criterion cavitation precursor crack mechanics criterion. implemented thermo tungsten welding. campaign stainless steel aisi predict crack occurrence. cracking mechanics criterion
|
exact_dup
|
[
"52674211"
] |
51226183
|
10.1016/j.matchemphys.2013.06.020
|
International audienceCrystallization and morphological features of syndiotactic-b-atactic polystyrene stereodiblock copolymers (sPS-b-aPS), atactic/syndiotactic polystyrene blends (aPS/sPS), and aPS/sPS blends modified with sPS-b-aPS, with different compositions in aPS and sPS, have been investigated using differential scanning calorimetry (DSC), polarized light optical microscopy (POM) and wide angle X-ray diffraction (WAXRD) techniques. For comparative purposes, the properties of parent pristine sPS samples were also studied. WAXRD analyses revealed for all the samples, independently from their composition (aPS/sPS ratio) and structure (blends, block copolymers, blends modified with block copolymers), the same polymorphic β form of sPS. The molecular weight of aPS and sPS showed opposite effects on the crystallization of 50:50 aPS/sPS blends: the lower the molecular weight of aPS, the slower the crystallization while the lower the molecular weight of sPS, the faster the crystallization. DSC studies performed under both isothermal and non-isothermal conditions, independently confirmed by POM studies, led to a clear trend for the crystallization rate at a given sPS/aPS ratio (ca. 50:50 and 20:80): sPS homopolymers > sPS-b-aPS block copolymers ∼ sPS/aPS blends modified with sPS-b-aPS copolymers > sPS/aPS blends. Interestingly, sPS-b-aPS block copolymers not only crystallized faster than blends, but also affected positively the crystallization behavior of blends. At 50:50 sPS/aPS ratio, blends (Blend-2), block copolymers (Cop-1) and blends modified with block copolymers (Blend-2-mod) crystallized via spherulitic crystalline growth controlled by an interfacial process. In all cases, an instantaneous nucleation was observed. The density of nuclei in block copolymers (160,000–190,000 nuclei mm⁻³) was always higher than that in blends and modified blends (30,000–60,000 nuclei mm⁻³), even for quite different sPS/aPS ratio.
At 20:80 sPS/aPS ratio, the block copolymers (Cop-2) preserved the same crystallization mechanism than at 45:55 ratio (Cop-1). On the other hand, the 20:80 sPS/aPS blend (Blend-4) and blend modified with block copolymers (Blend-4-mod) showed a spinodal decomposition
|
On the crystallization behavior of syndiotactic-b-atactic polystyrene stereodiblock copolymers, atactic/syndiotactic polystyrene blends, and aPS/sPS blends modified with sPS-b-aPS
|
on the crystallization behavior of syndiotactic-b-atactic polystyrene stereodiblock copolymers, atactic/syndiotactic polystyrene blends, and aps/sps blends modified with sps-b-aps
|
audiencecrystallization morphological syndiotactic atactic polystyrene stereodiblock copolymers atactic syndiotactic polystyrene blends blends compositions scanning calorimetry polarized microscopy diffraction waxrd techniques. comparative purposes parent pristine studied. waxrd independently blends copolymers blends copolymers polymorphic sps. opposite crystallization blends slower crystallization faster crystallization. isothermal isothermal independently confirmed crystallization homopolymers copolymers ∼sps blends copolymers blends. interestingly copolymers crystallized faster blends positively crystallization blends. blends blend copolymers blends copolymers blend crystallized spherulitic crystalline interfacial process. instantaneous nucleation observed. nuclei copolymers nuclei blends blends nuclei ratio. copolymers preserved crystallization blend blend blend copolymers blend spinodal decomposition
|
exact_dup
|
[
"48205527"
] |
51441956
|
10.1016/j.tcs.2015.06.052
|
International audienceIn this paper, we study some average properties of hypergraphs and the average complexity of algorithms applied to hypergraphs under different probabilistic models. Our approach is both theoretical and experimental since our goal is to obtain a random model that is able to capture the real-data complexity. Starting from a model that generalizes the Erdös-Renyi model [9, 10], we obtain asymptotic estimations on the average number of transversals, minimals and minimal transversals in a random hypergraph. We use those results to obtain an upper bound on the average complexity of algorithms to generate the minimal transversals of an hypergraph. Then we make our random model more complex in order to bring it closer to real-data and identify cases where the average number of minimal transversals is at most polynomial, quasi-polynomial or exponential
|
An average study of hypergraphs and their minimal transversals
|
an average study of hypergraphs and their minimal transversals
|
audiencein hypergraphs plexity hypergraphs probabilistic models. goal capture complexity. generalizes erdös renyi asymptotic estimations transversals minimals transversals pergraph. transversals hypergraph. bring closer tranversals quasi exponential
|
exact_dup
|
[
"51945668"
] |
51957566
|
10.1080/13691066.2012.667907
|
International audienceHow to value a new venture is critical in entrepreneurial financing. This article develops an integrated theoretical framework to examine whether venture capitalists' valuation of a new venture can be explained by factors identified in the strategy theories as important to firm performance. Empirical results from the analyses of 184 rounds of early-stage venture capital investments in 102 new ventures support the central proposition that venture capitalists do take into consideration those factors that are important to firm performance in their valuation of new ventures. More specifically, this article finds that attractiveness of the industry, the quality of the founder and top management team, as well as external relationships of a new venture significantly and positively affect its valuation by venture capitalists when it seeks venture capital financing in its early stages of development. These empirical findings help to establish an initial linkage between the well-developed theories in strategic management and underresearched venture capital valuation practice. It brings more theoretical rigor to the venture capital investment literature by introducing a systematic approach to identify and measure factors important to new venture valuation. It explores a possibility to develop a supplementary method to value an early-stage new venture when extant valuation methods fail to yield consistent results because these methods require accounting information that a new venture typically cannot provide
|
Startup valuation by venture capitalists: an empirical study
|
startup valuation by venture capitalists: an empirical study
|
audiencehow venture entrepreneurial financing. develops examine venture capitalists valuation venture firm performance. rounds venture capital investments ventures venture capitalists consideration firm valuation ventures. finds attractiveness founder team venture positively valuation venture capitalists seeks venture capital financing development. establish linkage strategic underresearched venture capital valuation practice. brings rigor venture capital investment introducing venture valuation. explores supplementary venture extant valuation fail accounting venture
|
exact_dup
|
[
"47280058"
] |
52130276
|
10.1080/00291951.2011.598551
|
Many second home owners demand rights, benefits, and influence in their host community, and the article examines how second home owners in pursuit of their interests can gain acceptance among local residents. The analysis is based on interviews with local residents in four rural Norwegian second home municipalities. The findings show that local residents’ attitudes towards second home owners’ pursuit of their own interests in the host community depend to a large degree upon the residents’ perceptions of the outcome of second home tourism in their municipality. Local residents can tolerate second home owners’ demands as long as the second home owners satisfy some of the community's significant economic-material or social needs. When second home owners make demands while their presence does not bring any evident benefits to the host community they are perceived as trying to take without giving. Based on these findings, the author argues that it is not second home owners’ (objective) otherness from locals that is the main problem in cases of a conflictual climate between the two parties. Rather, it is the local structural context that constitutes the main problem if it does not make it possible for second home owners to contribute to the host community
|
Rural residents' opinions about second home owners' pursuit of own interests in the host community
|
rural residents' opinions about second home owners' pursuit of own interests in the host community
|
home owners rights benefits examines home owners pursuit interests acceptance residents. interviews residents rural norwegian home municipalities. residents’ attitudes home owners’ pursuit interests residents’ perceptions home tourism municipality. residents tolerate home owners’ demands home owners satisfy needs. home owners demands bring evident benefits perceived trying giving. argues home owners’ otherness locals conflictual parties. constitutes home owners
|
exact_dup
|
[
"154669132"
] |
52433180
|
10.1103/PhysRevLett.108.075004
|
International audienceExperimental measurements of backward accelerated protons are presented. The beam is produced when an ultrashort (5 fs) laser pulse, delivered by a kHz laser system, with a high temporal contrast (10⁸), interacts with a thick solid target. Under these conditions, proton cutoff energy dependence with laser parameters, such as pulse energy, polarization (from p to s), and pulse duration (from 5 to 500 fs), is studied. Theoretical model and two-dimensional particle-in-cell simulations, in good agreement with a large set of experimental results, indicate that proton acceleration is directly driven by Brunel electrons, in contrast to conventional target normal sheath acceleration that relies on electron thermal pressure
|
Brunel-Dominated Proton Acceleration with a Few-Cycle Laser Pulse
|
brunel-dominated proton acceleration with a few-cycle laser pulse
|
audienceexperimental backward accelerated protons presented. ultrashort delivered interacts thick target. proton cutoff studied. proton acceleration brunel sheath acceleration relies
|
exact_dup
|
[
"52676510",
"52897460"
] |
52660381
|
10.1051/0004-6361/201322130
|
International audienceWe have developed two independent methods for measuring the one-dimensional power spectrum of the transmitted flux in the Lyman-α forest. The first method is based on a Fourier transform and the second on a maximum-likelihood estimator. The two methods are independent and have different systematic uncertainties. Determination of the noise level in the data spectra was subject to a new treatment, because of its significant impact on the derived power spectrum. We applied the two methods to 13 821 quasar spectra from SDSS-III/BOSS DR9 selected from a larger sample of over 60 000 spectra on the basis of their high quality, high signal-to-noise ratio (S/N), and good spectral resolution. The power spectra measured using either approach are in good agreement over all twelve redshift bins from ⟨z⟩ = 2.2 to ⟨z⟩ = 4.4, and scales from 0.001 (km/s)⁻¹ to 0.02 (km/s)⁻¹. We determined the methodological and instrumental systematic uncertainties of our measurements. We provide a preliminary cosmological interpretation of our measurements using available hydrodynamical simulations. The improvement in precision over previously published results from SDSS is a factor 2–3 for constraints on relevant cosmological parameters. For a ΛCDM model and using a constraint on H0 that encompasses measurements based on the local distance ladder and on CMB anisotropies, we infer σ8 = 0.83 ± 0.03 and ns = 0.97 ± 0.02 based on H i absorption in the range 2.1 < z < 3.7
|
The one-dimensional Lyα forest power spectrum from BOSS
|
the one-dimensional lyα forest power spectrum from boss
|
audiencewe measuring transmitted lyman forest. fourier transform likelihood estimator. uncertainties. spectrum. quasar sdss boss resolution. twelve bins methodological andinstrumental measurements. preliminary cosmological hydrodynamical simulations. precision sdss cosmological parameters. λcdm encompasses ladder anisotropies infer
|
exact_dup
|
[
"46756681",
"52677546"
] |
52673077
|
10.1051/0004-6361/201323056
|
International audienceWe test whether or not the orbital poles of the systems in the solar neighbourhood are isotropically distributed on the celestial sphere. The problem is plagued by the ambiguity on the position of the ascending node. Of the 95 systems closer than 18 pc from the Sun with an orbit in the 6th Catalogue of Orbits of Visual Binaries, the pole ambiguity could be resolved for 51 systems using radial velocities collected in the literature and the CORAVEL database or acquired with the HERMES/Mercator spectrograph. For several systems, we can correct the erroneous nodes in the 6th Catalogue of Orbits and obtain new combined spectroscopic/astrometric orbits for seven systems [WDS 01083+5455Aa,Ab; 01418+4237AB; 02278+0426AB (SB2); 09006+4147AB (SB2); 16413+3136AB; 17121+4540AB; 18070+3034AB]. We used spherical statistics to test for possible anisotropy. After ordering the binary systems by increasing distance from the Sun, we computed the false-alarm probability for subsamples of increasing sizes, from N = 1 up to the full sample of 51 systems. Rayleigh-Watson and Beran tests deliver a false-alarm probability of 0.5% for the 20 systems closer than 8.1 pc. To evaluate the robustness of this conclusion, we used a jackknife approach, for which we repeated this procedure after removing one system at a time from the full sample. The false-alarm probability was then found to vary between 1.5% and 0.1%, depending on which system is removed. The reality of the deviation from isotropy can thus not be assessed with certainty at this stage, because only so few systems are available, despite our efforts to increase the sample. However, when considering the full sample of 51 systems, the concentration of poles toward the Galactic position l = 46.0°, b = 37°, as observed in the 8.1 pc sphere, totally vanishes (the Rayleigh-Watson false-alarm probability then rises to 18%)
|
Are the orbital poles of binary stars in the solar neighbourhood anisotropically distributed?
|
are the orbital poles of binary stars in the solar neighbourhood anisotropically distributed?
|
audiencewe orbital poles neighbourhood isotropically celestial sphere. plagued ambiguity ascending node. closer orbit catalogue orbits binaries pole ambiguity resolved coravel acquired hermes mercator spectrograph. erroneous catalogue orbits spectroscopic astrometric orbits seven spherical anisotropy. ordering false alarm subsamples sizes systems. rayleigh watson beran deliver false alarm closer robustness jackknife repeated removing sample. false alarm vary removed. reality isotropy certainty efforts sample. poles toward galactic sphere totally vanishes rayleigh watson false alarm rises
|
exact_dup
|
[
"52773976"
] |
52673333
|
10.1063/1.4936512
|
International audienceThe design of the WEST (Tungsten-W Environment in Steady-state Tokamak) Ion cyclotron resonance heating antennas is based on a previously tested conjugate-T Resonant Double Loops prototype equipped with internal vacuum matching capacitors. The design and construction of three new WEST ICRH antennas are being carried out in close collaboration with ASIPP, within the framework of the Associated Laboratory in the fusion field between IRFM and ASIPP. The coupling performance to the plasma and the load-tolerance have been improved, while adding Continuous Wave operation capability by introducing water cooling in the entire antenna. On the generator side, the operation class of the high power tetrodes is changed from AB to B in order to allow high power operation (up to 3 MW per antenna) under higher VSWR (up to 2:1). Reliability of the generators is also improved by increasing the cavity breakdown voltage. The control and data acquisition system is also upgraded in order to resolve and react on fast events, such as ELMs. A new optical arc detection system comes in reinforcement of the Vr/Vf and SHAD systems
|
Ion cyclotron resonance heating systems upgrade toward high power and CW operations in WEST
|
ion cyclotron resonance heating systems upgrade toward high power and cw operations in west
|
audiencethe west tungsten steady tokamak cyclotron heating antennas conjugate resonant loops prototype equipped matching capacitors. west icrh antennas asipp fusion irfm asipp. tolerance adding capability introducing cooling antenna. generator tetrodes changed antenna vswr reliability generators cavity breakdown voltage. acquisition upgraded resolve react elms. comes reinforcement shad
|
exact_dup
|
[
"52677300"
] |
52679309
|
10.1103/PhysRevB.90.060505
|
International audienceStructural and transport properties of thin Nb layers in Si/Nb/Si trilayers with Nb layer thickness d from 1.1 nm to 50 nm have been studied. With decreasing thickness, the structure of the Nb layer changes from polycrystalline to amorphous at d ≈ 3.3 nm, while the superconducting temperature T c monotonically decreases. The Hall coefficient varies with d systematically but changes sign into negative in ultrathin films with d < 1.6 nm. The influence of boundary scattering on the relaxation rate of carriers, and band broadening in the amorphous films, may contribute to this effect
|
Negative Hall coefficient of ultrathin niobium in Si/Nb/Si trilayers
|
negative hall coefficient of ultrathin niobium in si/nb/si trilayers
|
audiencestructural trilayers studied. decreasing polycrystalline amorphous superconducting monotonically decreases. hall varies systematically ultrathin films relaxation carriers broadening amorphous films
|
exact_dup
|
[
"52898958"
] |
52685200
|
10.1016/j.nima.2013.03.038
|
During therapeutic treatment with heavier ions like carbon, the beam undergoes nuclear fragmentation and secondary light charged particles, in particular protons and alpha particles, are produced. To estimate the dose deposited into the tumors and the surrounding healthy tissues, the accuracy must be higher than (±3% and ±1 mm). Therefore, measurements are performed to determine the double differential cross section for different reactions. In this paper, the analysis of data from 12C + 12C reactions at 95 MeV/u is presented. The emitted particles are detected with ΔE_thin−ΔE_thick−E telescopes made of a stack of two silicon detectors and a CsI crystal. Two different methods are used to identify the particles. One is based on graphical cuts onto the ΔE−E maps, the second is based on the so-called KaliVeda method using a functional description of ΔE versus E. The results of the two methods will be presented in this paper as well as the comparison between both
|
Comparison of two analysis methods for nuclear reaction measurements of 12C +12C interactions at 95 MeV/u for hadrontherapy
|
comparison of two analysis methods for nuclear reaction measurements of 12c +12c interactions at 95 mev/u for hadrontherapy
|
therapeutic heavier undergoes fragmentation protons alpha produced. deposited tumors surrounding healthy tissues and± reactions. presented. emitted δethin−δethick−e telescopes stack silicon detectors crystal. particles. graphical cuts δe−e kaliveda
|
exact_dup
|
[
"46762186"
] |
52685967
|
10.1016/j.neuroimage.2012.12.051
|
International audienceMagnetoencephalography (MEG) and electroencephalography (EEG) allow functional brain imaging with high temporal resolution. While solving the inverse problem independently at every time point can give an image of the active brain at every millisecond, such a procedure does not capitalize on the temporal dynamics of the signal. Linear inverse methods (Minimum-norm, dSPM, sLORETA, beamformers) typically assume that the signal is stationary: regularization parameter and data covariance are independent of time and the time varying signal-to-noise ratio (SNR). Other recently proposed non-linear inverse solvers promoting focal activations estimate the sources in both space and time while also assuming stationary sources during a time interval. However such an hypothesis only holds for short time intervals. To overcome this limitation, we propose time-frequency mixed-norm estimates (TF-MxNE), which use time-frequency analysis to regularize the ill-posed inverse problem. This method makes use of structured sparse priors defined in the time-frequency domain, offering more accurate estimates by capturing the non-stationary and transient nature of brain signals. State-of-the-art convex optimization procedures based on proximal operators are employed, allowing the derivation of a fast estimation algorithm. The accuracy of the TF-MxNE is compared to recently proposed inverse solvers with help of simulations and by analyzing publicly available MEG datasets
|
Time-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with non-stationary source activations
|
time-frequency mixed-norm estimates: sparse m/eeg imaging with non-stationary source activations
|
audiencemagnetoencephalography electroencephalography resolution. solving independently millisecond capitalize signal. norm dspm sloreta beamformers stationary regularization covariance solvers promoting focal activations stationary interval. intervals. overcome limitation propose norm mxne regularize posed problem. structured sparse priors offering capturing stationary transient signals. convex proximal allowing derivation algorithm. mxne solvers analyzing publicly datasets
|
exact_dup
|
[
"52799591"
] |
52688547
|
10.1080/14786435.2011.573815
|
International audienceDislocation climb mobilities, assuming vacancy bulk diffusion, are derived and implemented in dislocation dynamics simulations to study the coarsening of vacancy prismatic loops in fcc metals. When loops cannot glide, the comparison of the simulations with a coarsening model based on the line tension approximation shows a good agreement. Dislocation dynamics simulations with both glide and climb are then performed. Allowing for glide of the loops along their prismatic cylinders leads to faster coarsening kinetics, as direct coalescence of the loops is now possible
|
Dislocation dynamics simulations with climb: kinetics of dislocation loop coarsening controlled by bulk diffusion
|
dislocation dynamics simulations with climb: kinetics of dislocation loop coarsening controlled by bulk diffusion
|
audiencedislocation climb mobilities vacancy implemented dislocation coarsening vacancy prismatic loops metals. loops glide coarsening tension agreement. dislocation glide climb performed. allowing glide loops prismatic cylinders faster coarsening kinetics coalescence loops
|
exact_dup
|
[
"52691428"
] |
52696821
|
10.1103/PhysRevLett.103.072701
|
5 pages, submitted to PRL. International audience. The charge distribution of the heaviest fragment detected in the decay of quasi-projectiles produced in intermediate-energy heavy-ion collisions has been observed to be bimodal. This feature is expected as a generic signal of a phase transition in non-extensive systems. In this paper we present new analyses of experimental data from Au on Au collisions at 60, 80 and 100 MeV/nucleon, showing that bimodality is largely independent of the data selection procedure and of entrance channel effects. An estimate of the latent heat of the transition is extracted
|
Bimodal behavior of the heaviest fragment distribution in projectile fragmentation
|
bimodal behavior of the heaviest fragment distribution in projectile fragmentation
|
pages submitted prlinternational audiencethe heaviest fragment quasi projectiles collisions bimodal. generic extensive systems. collisions nucleon bimodality largely entrance effects. latent
|
exact_dup
|
[
"46772033"
] |
52696879
|
10.1016/J.PSS.2006.06.016
|
International audience. Titan is one of the primary scientific objectives of the NASA ESA ASI Cassini Huygens mission. Scattering by haze particles in Titan's atmosphere and numerous methane absorptions dramatically veil Titan's surface in the visible range, though it can be studied more easily in some narrow infrared windows. The Visual and Infrared Mapping Spectrometer (VIMS) instrument onboard the Cassini spacecraft successfully imaged its surface in the atmospheric windows, taking hyperspectral images in the range 0.4–5.2 μm. On 26 October (TA flyby) and 13 December 2004 (TB flyby), the Cassini Huygens mission flew over Titan at an altitude lower than 1200 km at closest approach. We report here on the analysis of VIMS images of the Huygens landing site acquired at TA and TB, with a spatial resolution ranging from 16 to 14.4 km/pixel. The pure atmospheric backscattering component is corrected by using both an empirical method and a first-order theoretical model. Both approaches provide consistent results. After the removal of scattering, ratio images reveal subtle surface heterogeneities. A particularly contrasted structure appears in ratio images involving the 1.59 and 2.03 μm images north of the Huygens landing site. Although pure water ice cannot be the only component exposed at Titan's surface, this area is consistent with a local enrichment in exposed water ice and seems to be consistent with DISR/Huygens images and spectra interpretations. The images also show a morphological structure that can be interpreted as a 150 km diameter impact crater with a central peak
|
Cassini/VIMS hyperspectral observations of the HUYGENS landing site on Titan
|
cassini/vims hyperspectral observations of the huygens landing site on titan
|
audiencetitan objectives nasa cassini huygens mission. haze titan atmosphere numerous methane absorptions dramatically veil titan visible narrow infrared windows. infrared spectrometer vims instrument onboard cassini spacecraft successfully imaged windows hyperspectral october flyby december flyby cassini huygens mission flew titan altitude closest approach. vims huygens landing acquired ranging pixel. backscattering corrected model. results. removal reveal subtle heterogeneities. contrasted involving huygens landing site. exposed titan enrichment exposed disr huygens interpretations. morphological interpreted crater
|
exact_dup
|
[
"47112082",
"52743327",
"53017414"
] |
52700520
|
10.1016/j.jcp.2007.01.026
|
International audience. This study addresses a new fictitious domain method for elliptic problems in order to handle general and possibly mixed embedded boundary conditions (E.B.C.): Robin, Neumann and Dirichlet conditions on an immersed interface. The main interest of this fictitious domain method is to use simple structured meshes, possibly uniform Cartesian nested grids, which do not generally fit the interface but define an approximate one. A cell-centered finite volume scheme with a non-conforming structured mesh is derived to solve the set of equations with additional algebraic transmission conditions linking both flux and solution jumps through the immersed approximate interface. Hence, a local correction is devised to take account of the relative surface ratios in each control volume for the Robin or Neumann boundary condition. Then, the numerical scheme conserves the first-order accuracy with respect to the mesh step. This opens the way to combining the E.B.C. method with a multilevel mesh refinement solver to increase the precision in the vicinity of the interface. Such a fictitious domain method is very efficient: the L2- and L1-norm errors vary like O(hl*), where hl* is the grid step of the finest refinement level around the interface, until the residual first-order discretization error of the non-refined zone is reached. The numerical results reported here for convection–diffusion problems with Dirichlet, Robin and mixed (Dirichlet and Robin) boundary conditions confirm the expected accuracy as well as the performance of the present method
|
A general fictitious domain method with immersed jumps and multilevel nested structured meshes
|
a general fictitious domain method with immersed jumps and multilevel nested structured meshes
|
audiencethis addresses fictitious elliptic handle possibly embedded e.b.c. robin neumann dirichlet immersed interface. fictitious structured meshes possibly cartesian nested grids approximate one. centered conforming structured mesh solve algebraic inking jumps immersed approximate interface. devised robin neumann condition. conserves mesh step. opens combine e.b.c. multilevel mesh refinement solver precision vicinity interface. fictitious norm vary finest refinement residual discretization refined reached. convection–diffusion dirichlet robin dirichlet robin confirm performances
|
exact_dup
|
[
"52464177"
] |
52710053
|
10.1016/j.jhydrol.2016.06.001
|
International audience. This study aims at establishing groundwater residence times, identifying mineralization processes and determining groundwater origins within a carbonate coastal aquifer with a thick unsaturated zone and lying on a granitic depression. A multi-tracer approach (major ions, SiO2, Br-, Ba+, Sr2+, 18O, 2H, 13C, 3H, Ne, Ar) combined with a groundwater residence time determination using CFCs and SF6 allows defining the global setting of the study site. A typical mineralization conditioned by the sea sprays and the carbonate matrix helped to validate the weighted groundwater residence times obtained using a binary mixing model. Terrigenic SF6 excesses have been detected and quantified, which permits the identification of a groundwater flow from the surrounding fractured granites principally towards the lower aquifer. The use of CFCs and SF6 as a first hydrogeological investigation tool is possible and very relevant despite the thick unsaturated zone and the hydraulic connection with a granitic environment
|
Residence time, mineralization processes and groundwater origin within a carbonate coastal aquifer with a thick unsaturated zone
|
residence time, mineralization processes and groundwater origin within a carbonate coastal aquifer with a thick unsaturated zone
|
audiencethis aims establishing groundwater residence identifying mineralization determining groundwater origins carbonate coastal aquifer thick unsaturated lying granitic depression. tracer groundwater residence cfcs defining site. mineralization conditioned sprays carbonate helped validate groundwater weighted residence model. terrigenic excesses quantified permits groundwater surrounding fractured granites aquifer principally. cfcs hydrogeological thick unsaturated hydraulic connexion granitic
|
exact_dup
|
[
"48152432",
"54030370"
] |
52724531
|
10.1063/1.4816620
|
10 pages. International audience. We report on the evolution of the structure and composition of a Pt(3 nm)/Co(0.6 nm)/AlOx(2 nm) trilayer sputtered on Si/SiO2 under oxidation and annealing processes, by combined x-ray reflectivity and x-ray absorption studies. We describe the progressive and inhomogeneous oxidation of the layers with increasing oxidation time. Before annealing, the layers have lower density than bulk samples and noticeable roughness. After thermal annealing, a significant improvement of the quality of the alumina layer goes along with the formation of a CoPt alloy that reduces the number of Co-O bonds. These structural outcomes clarify the evolution of the magnetic and transport properties reported at room temperature in these samples
|
Competition between CoOx and CoPt phases in Pt/Co/AlOx semi tunnel junctions
|
competition between coox and copt phases in pt/co/alox semi tunnel junctions
|
pagesinternational audiencewe alox trilayer sputtered oxidation annealing reflectivity studies. progressive inhomogeneous oxidation oxidation time. annealing noticeable roughness. annealing alumina goes copt alloy reduces bonds. clarify room
|
exact_dup
|
[
"51964773",
"52684005"
] |
52725438
|
10.1016/j.jconhyd.2013.02.006
|
International audience. Field quantitative estimation of reaction kinetics is required to enhance our understanding of biogeochemical reactions in aquifers. We extended the analytical solution developed by Haggerty et al. (1998) to model an entire first-order reaction chain and estimate the kinetic parameters for each reaction step of the denitrification process. We then assessed the ability of this reaction chain to model biogeochemical reactions by comparing it with experimental results from a push-pull test in a fractured crystalline aquifer (Ploemeur, French Brittany). Nitrates were used as the reactive tracer, since denitrification involves the sequential reduction of nitrates to nitrogen gas through a chain reaction (NO3− → NO2− → NO → N2O → N2) under anaerobic conditions. The kinetics of nitrate consumption and by-product formation (NO2−, N2O) during autotrophic denitrification were quantified by using a reactive tracer (NO3−) and a non-reactive tracer (Br−). The formation of reaction by-products (NO2−, N2O, N2) has not been previously considered using a reaction chain approach. Comparison of Br− and NO3− breakthrough curves showed that 10% of the injected NO3− molar mass was transformed during the 12 h experiment (2% into NO2−, 1% into N2O and the rest into N2 and NO). Similar results, but with slower kinetics, were obtained from laboratory experiments in reactors. The good agreement between the model and the field data shows that the complete denitrification process can be efficiently modeled as a sequence of first-order reactions. The first-order kinetics coefficients obtained through modeling were as follows: k1 = 0.023 h−1, k2 = 0.59 h−1, k3 = 16 h−1, and k4 = 5.5 h−1. A next step will be to assess the variability of field reactivity using the methodology developed for modeling push-pull tracer tests
|
Reaction chain modeling of denitrification reactions during a push-pull test
|
reaction chain modeling of denitrification reactions during a push-pull test
|
audiencefield kinetics enhance biogeochemical aquifers. haggerty denitrification process. biogeochemical push pull fractured crystalline aquifer ploemeur french brittany nitrates reactive tracer denitrification involves sequential nitrates nitrogen anaerobic conditions. kinetics nitrate autotrophic denitrification quantified reactive tracer reactive tracer approach. breakthrough injected molar transformed slower kinetics reactors. denitrification efficiently modeled reactions. kinetics reactivity methodology push pull tracer
|
exact_dup
|
[
"48207423"
] |
52782009
|
10.1051/proc/201446015
|
Editors: Witold Jarczyk, Daniele Fournier-Prunaret, João Manuel Goncalves Cabral. We propose a new mechanism for undersampling chaotic numbers obtained by the ring coupling of one-dimensional maps. In the case of 2 coupled maps, this mechanism allows the building of a PRNG which passes all NIST tests. This new geometric undersampling is very effective for generating 2 parallel streams of pseudo-random numbers, as we show by carefully computing their properties, up to sequences of one trillion consecutive iterates of the ring-coupled mapping, which provides more than 33.5 billion random numbers in a very short time
|
From Chaos to Randomness via Geometric Undersampling
|
from chaos to randomness via geometric undersampling
|
editors witold jarczyk daniele fournier prunaret joão manuel goncalves cabralwe propose undersampling chaotic maps. prng passes nist test. geometric undersampling generating streams pseudo carefully trillion consecutives iterates billions
|
exact_dup
|
[
"53006321"
] |
52785713
|
10.1016/j.epsl.2010.12.027
|
International audience. We surveyed the Owen Fracture Zone at the boundary between the Arabia and India plates in the NW Indian Ocean using a high-resolution multibeam echo-sounder (Owen cruise, 2009) in search of active faults. Bathymetric data reveal a previously unrecognized submarine fault scarp system running for over 800 km between the Sheba Ridge in the Gulf of Aden and the Makran subduction zone. The primary plate boundary structure is not the bathymetrically high Owen Ridge, but is instead a series of clearly delineated strike-slip fault segments separated by several releasing and restraining bends. Despite an abundant sedimentary supply by the Indus River flowing from the Himalaya, fault scarps are not obscured by recent deposits and can be followed over hundreds of kilometres, pointing to very active tectonics. The total strike-slip displacement of the fault system is 10-12 km, indicating that it has been active for the past ~3 to 6 Ma if its current rate of motion of 3±1 mm yr−1 has remained stable. We describe the geometry of this recent fault system, including a major pull-apart basin at latitude 20°N, and we show that it closely follows an arc of a small circle centred on the Arabia-India pole of rotation, as expected for a transform plate boundary
|
Owen Fracture Zone: The Arabia-India plate boundary unveiled
|
owen fracture zone: the arabia-india plate boundary unveiled
|
audiencewe surveyed owen fracture arabia india plates indian ocean multibeam echo sounder owen cruise faults. bathymetric reveal unrecognized submarine fault scarp running sheba ridge gulf aden makran subduction zone. plate bathymetrically owen ridge delineated strike slip fault segments separated releasing restraining bends. abundant sedimentary supply indus river flowing himalaya fault scarps obscured deposits hundreds kilometres pointing tectonics. strike slip displacement fault remained stable. fault pull apart basin latitude closely circle centred arabia india pole transform plate
|
exact_dup
|
[
"52735275"
] |
52825624
|
10.1016/j.regsciurbeco.2010.07.004
|
International audience. China's export performance over the past fifteen years has been phenomenal. Is this performance going to last? Wages are rising rapidly, but a population in excess of one billion represents a large reservoir of labor. Firms in export-intensive provinces may draw on this reservoir to increase competition in their labor market and keep wages low for many years to come. We develop a wage equation from a New Economic Geography model to capture the upward pressure from national and international demand and the downward pressure from migration. Using panel data at the province level, we find that migration has moderately slowed down Chinese wage increases over the period 1995-2007
|
How are wages set in Beijing
|
how are wages set in beijing
|
audiencechina export fifteen phenomenal. going wages rising rapidly excess billion reservoir labor. firms export intensive provinces draw reservoir competition labor keep wages come. wage geography capture upward downward migration. province migration moderately slowed chinese wage
|
exact_dup
|
[
"47739545"
] |
52896091
|
10.1007/s00382-012-1408-y
|
International audience. Global aerosol and ozone distributions and their associated radiative forcings were simulated between 1850 and 2100 following a recent historical emission dataset and under the representative concentration pathways (RCP) for the future. These simulations were used in an Earth System Model to account for the changes in both radiatively and chemically active compounds when simulating the climate evolution. The past negative stratospheric ozone trends result in a negative climate forcing culminating at −0.15 W m<sup>−2</sup> in the 1990s. In the meantime, the tropospheric ozone burden increase generates a positive climate forcing peaking at 0.41 W m<sup>−2</sup>. The future evolution of ozone strongly depends on the RCP scenario considered. In RCP4.5 and RCP6.0, the evolution of both stratospheric and tropospheric ozone generates relatively weak radiative forcing changes until 2060-2070, followed by a relative 30 % decrease in radiative forcing by 2100. In contrast, RCP8.5 and RCP2.6 model projections exhibit strongly different ozone radiative forcing trajectories. In the RCP2.6 scenario, both effects (stratospheric ozone, a negative forcing, and tropospheric ozone, a positive forcing) decline towards 1950s values, while they both get stronger in the RCP8.5 scenario. Over the twentieth century, the evolution of the total aerosol burden is characterized by a strong increase after World War II until the middle of the 1980s, followed by a stabilization during the last decade due to the strong decrease in sulfates in OECD countries since the 1970s. The cooling effects reach their maximal values in 1980, with −0.34 and −0.28 W m<sup>−2</sup> respectively for the direct and indirect total radiative forcings. According to the RCP scenarios, the aerosol content, after peaking around 2010, is projected to decline strongly and monotonically during the twenty-first century for the RCP8.5, 4.5 and 2.6 scenarios; for RCP6.0 the decline occurs later, after peaking around 2050. As a consequence, the relative importance of the total cooling effect of aerosols becomes weaker throughout the twenty-first century compared with the positive forcing of greenhouse gases. Nevertheless, both surface ozone and aerosol content show very different regional features depending on the future scenario considered. Hence, in 2050, surface ozone changes vary between −12 and +12 ppbv over Asia depending on the RCP projection, whereas the regional direct aerosol radiative forcing can locally exceed −3 W m<sup>−2</sup>
|
Aerosol and ozone changes as forcing for climate evolution between 1850 and 2100
|
aerosol and ozone changes as forcing for climate evolution between 1850 and 2100
|
audienceglobal aerosol ozone radiative forcings historical dataset pathways future. earth radiatively chemically simulating evolution. stratospheric ozone forcing culminating meantime tropospheric ozone burden generates forcing peaking ozone considered. stratospheric tropospheric ozone radiative forcing radiative forcing projections exhibit ozone radiative forcing trajectories. stratospheric ozone forcing tropospheric ozone forcing decline stronger scenario. twentieth century aerosol burden stabilization decade sulfates oecd cooling maximal indirect radiative forcings. scenarios aerosol peaking projected decline monotonically twenty century scenarios. decline peaking cooling aerosols weaker twenty century forcing greenhouse gases. nevertheless ozone aerosol considered. ozone vary ppbv asia projection aerosol radiative forcing locally exceed
|
exact_dup
|
[
"52671684",
"52710687"
] |