# Effects of galaxy mergers on the faint IRAS source counts and the background
## 1 Introduction
Over the last few years, observational progress in probing the faint Universe at ultraviolet (UV)/optical and infrared wavelengths via ground-based redshift surveys, Hubble Space Telescope (HST) multi-color images and infrared satellites such as IRAS (Infrared Astronomical Satellite), ISO (Infrared Space Observatory) and COBE (Cosmic Background Explorer) has challenged and renewed our present understanding of cosmological structure formation and the evolution of luminous galaxies, which is usually based on two competing scenarios: “traditional monolithic collapse” versus “hierarchical clustering”. The main difference between these two kinds of models lies in the formation mechanism and the epoch of first formation of massive galaxies.
More recently it has been shown that the emission history (e.g. the Madau plot) derived from UV/optical detections agrees quite well with the star formation history indicated by hierarchical clustering theories (Madau 1997, Madau et al. 1996, 1998, Ellis et al. 1996). Since FIR/submm observations have unveiled a dust-shrouded early active phase, and deep NICMOS/VLT images have recently detected evolving galaxies at high redshift (Lilly et al. 1998, Benitez et al. 1998), the discussion of structure formation and the scheme of galaxy evolution now requires more subtlety and sophistication. For a clear picture of the star formation history and of galaxy evolution, we probably need more information at high redshift, especially from FIR/submm deep surveys.
A successful interpretation of the excess of faint blue galaxies within the merger-triggered starburst scenario in a hierarchical clustering universe has been presented by Cavaliere & Menci (1997) using binary aggregation dynamics.
The binary aggregation dynamical approach to galaxy evolution presented by Cavaliere & Menci (1993, 1997), which adds dynamics describing a further step in galaxy–galaxy interactions within the scheme of direct hierarchical clustering (DHC), can probably help to alleviate some intrinsic problems of the DHC scenario, such as the overproduction of small objects and the difficulty of reconciling the excess faint counts with the flat local luminosity function. Binary merging, which plays a different and complementary role in structure formation, can continue to flatten the mass distribution function $`N(m,z)`$, and hence the luminosity function $`N(L,z)`$, down to moderate redshifts ($`z<1`$).
Although there are many other possible evolutionary scenarios which could interpret the present observations in one way or another, we are encouraged to explore here a merger-driven galaxy evolution picture with binary aggregation dynamics simply because the IRAS database does show that most luminous infrared sources are actually interacting/merging systems (Kleinmann & Keel 1987, Sanders & Mirabel 1996, Hutchings & Neff 1987, Vader & Simon 1987a,b), and that many merger events can still occur at moderate redshift ($`z<1`$).
In this paper, starting from the basic picture that galaxy evolution is driven by galaxy–galaxy interactions complementary to the standard hierarchical clustering scenario (DHC), we investigate a simple model with the following basic concepts: 1) galaxy–galaxy interactions can still occur at quite moderate redshifts ($`z<1`$) within large scale structures, which offer the best combination of volume and density contrast; such interactions can erase the dwarf part of the mass distribution and produce the massive tail, thus flattening the luminosity functions; 2) starburst/AGN activities may be triggered by the merger events during structure formation and evolution. This is the point that distinguishes our model from others in which only the starburst is considered a consequence of merger events (Cavaliere & Menci 1997, Somerville & Primack 1998, Somerville et al. 1998); 3) gas rich merger events at high redshift may trigger drastic starburst and AGN activities in the central region of the merging galaxies and thus dramatically enhance the FIR luminosity, because of the accumulation of dust grains from the progenitor galaxies as well as the dust newly formed in the star forming regions. Galaxies in this FIR luminosity burst phase are usually called ultraluminous infrared galaxies (ULIGs); 4) in our model, we assume a redshift dependent infrared burst phase for the gas rich merger events at high redshift and the gas poor mergers at low redshift, which means that the enhancement of the infrared luminosities by the high redshift gas rich mergers is much higher than that of the low redshift gas poor mergers. The motivation for this picture is: 1) the direct observation of enhanced starburst–AGN activities in interacting galaxies, especially the ULIGs. This extremely infrared-bright burst phase is believed to be due to starburst merger events with the far infrared luminosity $`L_{fir}`$ enhanced both by the accumulation of the dust mass $`M_d`$ and by the increase of the dust temperature $`T_d`$, following the relation $`L_{fir}\propto M_dT_d^5`$; this burst phase can increase the infrared luminosity by a factor of about 20 over that of normal starburst galaxies (Ashby et al. 1992, Terlevich & Melnick 1985, Heckman 1997, Perry & Williams 1993, Taniguchi & Ohyama 1998); 2) numerical simulations of starburst/AGN evolution driven by galaxy mergers, especially of their correlation and burst phase (Wang & Biermann 1998, 1999, Wang 1999); 3) the successful interpretation of the excess faint blue counts by galaxy merging and the consequent starburst activities (Rocca-Volmerange & Guiderdoni 1990, Cavaliere & Menci 1997, Carlberg 1992, Carlberg & Charlot 1992); 4) the pair production absorption of high energy $`\gamma `$ rays by intergalactic low energy photons, which is expected to produce a high energy cutoff in extragalactic $`\gamma `$ ray spectra. The new data on Mkn 501 appear to show an extension of the TeV $`\gamma `$ ray spectrum up to about 24 TeV, which sets an upper limit on the intergalactic infrared emission history during structure formation (Aharonian et al. 1999).
Since a dust-shrouded geometry can strongly affect the AGN spectrum and cause significant radiation at infrared wavelengths in the framework of a Unification Scheme, we add the statistics of nuclear activity in galaxies as one additional constraint in the context of the various models already in the literature. Nuclear activity is detected out to very high redshift and provides a strong constraint on cosmological evolution models. We are therefore less conservative than Malkan & Stecker (1998), but use more constraints than, e.g., Somerville & Primack (1998). Obviously, any such modelling depends on the basic concepts used, and thus our results provide a strong lower limit to the far-infrared background.
The outline of this paper is as follows: 1) in Sect. 2, we introduce the binary aggregation dynamics and the models of the interaction kernel; 2) in Sect. 3, we discuss the prescriptions of the mass-to-light ratio in our model for the luminous infrared galaxies, whose luminosity is enhanced by the starburst and AGN activities, especially the modelling of the redshift dependent infrared burst phase for ultraluminous infrared galaxies (ULIGs) from gas rich mergers at high redshift and of a suppressed burst phase for gas poor mergers at low redshift; 3) in Sect. 4, we discuss the numerical simulations and compare the Monte Carlo results with the IRAS $`60\mu m`$ source counts from the three major infrared source populations (starburst galaxies, Seyferts and spirals). We also check the redshift distribution of the infrared sources brighter than $`S_{60}\sim 10`$ mJy, calculate the integrated background level at $`60\mu m`$, and make an extrapolation to $`25\mu m`$ and $`100\mu m`$ based on the model spectrum and source counts. Finally, we give our conclusions.
## 2 Binary aggregation theory
The classical approach to aggregation phenomena is based on the Smoluchowski (1916) equation. We start with the continuous form:
$$\frac{\partial N(m,t)}{\partial t}=\frac{1}{2}\int _{m_{\ast}}^{m-m_{\ast}}dm^{\prime}\,K(m^{\prime},m-m^{\prime},t)\,N(m^{\prime},t)\,N(m-m^{\prime},t)-N(m,t)\int _{m_{\ast}}^{m^{\ast}-m}dm^{\prime}\,K(m,m^{\prime},t)\,N(m^{\prime},t)$$
(1)
where $`N(m,t)`$ is the mass distribution function in “comoving” form, which describes the number density of galaxies within the mass range ($`m`$, $`m+dm`$) at cosmic time $`t`$. The mass $`m\equiv M/M_{\ast}`$ is normalized in terms of $`M_{\ast}`$, the mass corresponding locally to the standard characteristic luminosity $`L_{\ast}`$ (see Peebles 1993). $`m_{\ast}`$ and $`m^{\ast}`$ represent the lower and upper limits on the masses of galaxies. Usually we can set $`m_{\ast}=0`$, $`m^{\ast}=\infty `$ when the individual galaxy masses are actually much smaller than the total mass of the considered system.
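As a concrete illustration (not part of the original analysis), Eq. (1) can be integrated numerically on a uniform mass grid. The explicit Euler step below is only a minimal sketch: the grid, the time step and the kernel callable are illustrative assumptions.

```python
import numpy as np

def smoluchowski_step(N, dm, t, K, dt):
    """One explicit Euler step of the discretized Eq. (1).

    N : array of number densities N(m_i, t) on the grid m_i = (i + 1) * dm
    K : callable K(m, mp, t) giving the aggregation kernel
    """
    n = len(N)
    m = (np.arange(n) + 1) * dm
    dNdt = np.zeros(n)
    for i in range(n):
        # gain term: pairs (m_j, m_i - m_j) merging into bin i
        for j in range(i):
            dNdt[i] += 0.5 * K(m[j], m[i] - m[j], t) * N[j] * N[i - j - 1] * dm
        # loss term: bin i merging with any other bin
        dNdt[i] -= N[i] * sum(K(m[i], m[j], t) * N[j] * dm for j in range(n))
    return N + dt * dNdt
```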
The kernel $`K(m,m^{\prime},t)=n_g(t)\langle \mathrm{\Sigma }V(t)\rangle `$ reflects the interaction rate for each pair of masses $`(m,m^{\prime})`$ and depends on the mechanism and environment of the aggregations. $`n_g`$ is the number density of galaxies, which scales on average with the expansion of the Universe as $`n_g\propto (1+z)^3`$; $`V(t)`$ is the relative velocity of the interacting pairs, and $`\mathrm{\Sigma }`$ is the cross section. The average runs over the galaxy velocity distribution.
In aggregation dynamics, the interaction kernel $`K=n_g\langle \mathrm{\Sigma }V\rangle `$ is the key quantity driving the whole evolutionary process, and it depends strongly on the structure of the environment.
In this context, we simply discuss the case in which the colliding galaxies reside within certain structures; the cross section can then be assumed to follow the hyperbolic encounter prescription of Saslaw (1985) and Cavaliere et al. (1991, 1992), which has the form:
$$\mathrm{\Sigma }=ϵ\left(\frac{V}{v_m}\right)\pi \left(r_m^2+r_{m^{\prime}}^2\right)\left[1+\frac{G(m+m^{\prime})}{r_mV^2}\right]$$
(2)
with $`r_m=r_{\ast}m^{1/3}`$ and $`v_m=v_{\ast}m^{1/3}`$, where $`v_{\ast}`$ and $`r_{\ast}`$ are the velocity dispersion and the dark halo radius of the present $`M_{\ast}`$ galaxy. The function $`ϵ\left(\frac{V}{v_m}\right)`$ describes the decreasing efficiency of the aggregations with increasing relative velocity, which can be determined from N–body simulations (see Richstone & Malumuth 1983).
Because of the uncertainty and complexity of the components of the interaction kernel, such as the average density of galaxies in the environment $`n_g(t)`$, the relative velocity distribution of the aggregating pairs $`V(t)`$ and the interaction cross section $`\mathrm{\Sigma }`$, the exact prescription of the interaction kernel is still poorly known. We thus adopt in our simulation a simplified formula for the aggregation kernel $`K(m,m^{\prime},t)`$ with separate time evolution and cross section terms. We write the interaction kernel $`K(m,m^{\prime},t)`$ as:
$$K(m,m^{\prime},t)\propto t^k\left(m^{2/3}+m^{\prime\,2/3}\right)\left[1+\alpha \left(m^{2/3}+m^{\prime\,2/3}\right)\right]$$
(3)
where $`t^k`$ is the cosmic time evolution term. The free parameter $`k`$ depends on the specific structure, such as clusters, filaments or sheets; $`\alpha `$ is a free parameter in our model, which describes the relative importance of the two kinds of encounter (geometric collisions and focused interactions) in the cross section. The aggregation time scale is $`\tau `$, and thus the aggregation rate is $`\tau ^{-1}`$, which is proportional to the interaction kernel: $`\tau ^{-1}\propto K(m,m^{\prime},t)`$.
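For concreteness, Eq. (3) translates directly into the short function below; the defaults $`k=-4/3`$ and $`\alpha =0.9`$ anticipate the best-fit values quoted in Sect. 4, while the overall normalization (which absorbs $`n_g`$ and the velocity average) is left schematic.

```python
def aggregation_kernel(m1, m2, t, k=-4.0 / 3.0, alpha=0.9):
    """Aggregation kernel of Eq. (3), up to an overall normalization;
    m1, m2 are galaxy masses in units of M_*, t is cosmic time."""
    s = m1 ** (2.0 / 3.0) + m2 ** (2.0 / 3.0)
    return t ** k * s * (1.0 + alpha * s)
```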
## 3 Modelling the ultraluminous infrared phase at high redshift
The binary aggregation dynamics and Monte Carlo simulations give us information about the evolution of the galaxy mass distribution function $`N(m,t)`$. What we can actually observe and use to constrain the galaxy evolution model are: 1) the luminosity functions of galaxies, in general or for certain morphologies; 2) the source counts, redshift distributions and background intensities from the various deep surveys (such as the Hubble Deep Field) and from the background measurements.
In this section, we discuss the conversion of the mass distribution function $`N(m,t)`$, obtained from the Monte Carlo simulation of a merger-driven galaxy evolution scenario, into the observable luminosity function $`N(L,t)`$.
For this we need to know i) a simple prescription for the mass-to-light ratio of starburst galaxies and AGNs; and ii) the bolometric correction for certain spectral characteristics, especially at infrared wavelengths.
A simple prescription for the mass-to-light ratio of starburst galaxies was given by Cavaliere & Menci (1997) for the faint blue galaxies; they estimate the luminosity of starburst galaxies from the gaseous mass of the galaxies and the dynamical time scales. The mass-to-light relation of these blue starburst galaxies is given as:
$$\frac{L}{L_{\ast}}=\left(\frac{M}{M_{\ast}}\right)^\eta \qquad \mathrm{and}\qquad L_{\ast}(z)\propto f(z,\lambda _0,\mathrm{\Omega }_0)$$
(4)
where $`\eta =4/3`$; $`L`$ is the bolometric luminosity and $`M`$ is the mass of the galaxy; $`L_{\ast}`$ is the local standard characteristic luminosity, with corresponding mass $`M_{\ast}`$ (Peebles 1993). $`L_{\ast}(z)\propto f(z,\lambda _0,\mathrm{\Omega }_0)`$ represents a cosmological redshift dimming. Considering the large scale structure, Cavaliere et al. (1997) gave a prescription for $`f(z)`$ as a function of the dimensionality $`D`$ of the large scale structure, $`L_{\ast}\propto f(z)\propto (1+z)^{(3+D)/2}`$. Studies of the origin of ultraluminous infrared galaxies such as Arp 220, NGC 1614, NGC 3256 and IRAS 18293-3413 show that the infrared luminosity of this kind of galaxy is about a factor of 20 higher than that of normal starburst galaxies, with a statistical relation $`L_{fir}\propto M_dT_d^5`$ (Taniguchi & Ohyama 1998). The increase in both the dust mass $`M_d`$ and the dust temperature $`T_d`$ by starburst merger events can dramatically enhance the far infrared luminosities of starburst merging galaxies; we thus call them ultraluminous infrared galaxies (ULIGs). Because of the uncertainties in the gas/dust ratio and the complexity of the consequent heating of the dust grains, the mass–infrared luminosity correlation for the ultraluminous infrared galaxies is still unclear. In our calculation, we thus simply adopt a power law mass-to-infrared-luminosity relation similar to that of the starburst galaxies of Cavaliere & Menci (1997), with the exponent $`\eta `$ increased by a factor of about two to simulate an enhancement of the infrared luminosity by a factor of nearly 20 for a typical ULIG of mass $`10^{12}M_{\odot}`$.
In our model, we basically assume that the starburst and AGN activities triggered by merger events at high redshift are more drastic than those at low redshift, simply because at high redshift the mergers are usually between gas rich systems, while the progenitors of low redshift mergers are already poor in cold gas. We therefore assume in our model a redshift dependent infrared burst phase, meaning that the enhancement of the infrared radiation from gas rich mergers at high redshift is much higher than that at low redshift. We simulate this effect by suppressing the infrared burst luminosity with a power law, $`L_{ir}\rightarrow L_{ir}^{\zeta}`$ with $`\zeta <1`$, below a transition redshift $`z\simeq 1`$, in addition to the normal redshift dimming defined in Eq. (4). This power law suppression means that the infrared luminous galaxies at the bright tail of the luminosity function become gas poor faster than the less luminous ones. The choice of redshift $`z\simeq 1`$ as the transition between the gas rich mergers at high redshift and the gas poor mergers at low redshift is based on the consideration that the cosmic time scale at $`z\simeq 1`$ is about $`3\times 10^9`$ years, which is approximately the time scale for disk evolution (Lin & Pringle 1987); it probably indicates a stage at which galaxies are becoming gas poor.
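A hedged sketch of this prescription, in code form, is given below; the doubled burst exponent follows the discussion above, but the suppression index $`\zeta =0.5`$ and the use of $`z_{in}=15`$ in the cosmological term are illustrative assumptions, not values fitted in this paper.

```python
def burst_luminosity(m, z, eta_burst=8.0 / 3.0, zeta=0.5, z_t=1.0,
                     D=1, z_in=15.0):
    """Infrared burst luminosity (in units of L_*) of a merger remnant of
    mass m (in units of M_*): gas rich mergers at z > z_t get the full
    burst with exponent eta_burst ~ 2 * eta, while gas poor mergers below
    the transition redshift z_t ~ 1 are suppressed as L -> L**zeta."""
    L = m ** eta_burst
    if z <= z_t:
        L = L ** zeta                  # power-law suppression, zeta < 1
    # cosmological term f(z) of Eqs. (4)-(5)
    return L * ((1.0 + z) / (1.0 + z_in)) ** ((3.0 + D) / 2.0)
```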
While the basic concept is clearly correct, the details of these assumptions are a bit crude and arbitrary, especially the choice of the transition redshift $`z\simeq 1`$ and the consequent power law suppression of the infrared burst phase. Nevertheless, they appear to be the important effects for fitting the IRAS source counts in our simulation, especially the strong evolution in the flux range 10 mJy – 1 Jy. Varying any of the model parameters influences, to a greater or lesser extent, the evolution of the luminosity functions and ultimately the source counts. Since all of the model parameters combine to shape the luminosity functions, the impact of varying one of them, such as $`\alpha `$, $`D`$ or $`\eta `$, can probably still be compensated by adjusting other related parameters. For the assumed transition redshift and the differential burst phase, however, there seems to be little room for variation. We checked the cases in which the transition redshift and the burst-phase enhancement $`\eta `$ are both twice and half the values in our model. Although we adjusted other parameters in order to approximately fit the observed local luminosity functions of the infrared luminous sources, these cases are far from giving an acceptable fit to the IRAS deep surveys (see Fig. 3). What we show here is therefore only one plausible, realistic set of model parameters giving the best fit to the available observations. Since our model is still crude, we probably need further information about the evolution of infrared bright sources at high redshift, or their redshift distributions out to high redshift, to obtain more robust model constraints. We will return to this point later.
## 4 Numerical simulation and discussion
We use a Monte Carlo inverse-cascading process to simulate the binary aggregation evolution of galaxies described by Eq. (1). In our simulation, we take as an approximation an initial galaxy mass distribution that is a $`\delta `$-function at mass $`M=2.5\times 10^{10}M_{\odot}`$, starting from redshift $`z=15`$. Since binary aggregation dynamics does not strongly depend on the initial condition, the memory of the initial condition disappears after the transients, and the mass distribution evolves self-similarly, independent of the initial details. This process was discussed analytically and numerically by Cavaliere et al. (1991, 1992).
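Since the inverse-cascading scheme is not spelled out in detail here, the following is only a crude sketch of one possible Monte Carlo step: galaxies are paired at random and each pair merges with a probability set by the kernel. The normalization `rate_norm` is an assumption standing in for the $`n_g`$ and volume factors.

```python
import random

def merger_step(masses, t, dt, kernel, rate_norm):
    """One crude Monte Carlo aggregation step: randomly pair the galaxies
    and merge each pair with probability rate_norm * K(m1, m2, t) * dt,
    assumed to be << 1; survivors and merger remnants are returned together."""
    pool = list(masses)
    random.shuffle(pool)
    out = []
    while len(pool) >= 2:
        m1, m2 = pool.pop(), pool.pop()
        if random.random() < rate_norm * kernel(m1, m2, t) * dt:
            out.append(m1 + m2)        # aggregation event
        else:
            out.extend([m1, m2])
    out.extend(pool)                   # unpaired leftover, if any
    return out
```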
In this case, the evolution of the dwarf galaxies with masses less than $`10^{10}M_{\odot}`$ is not relevant to our results. Even for dwarf galaxies of mass $`10^{10}M_{\odot}`$, their influence is not very important, since our results depend strongly only on the evolution of the infrared-luminous galaxies.
The large scale structures (LSSs), which have the advantage of higher density and lower velocity dispersions compared with the field and with virialized clusters, present an ideal environment for galaxy–galaxy interactions to take place in. However, they are still quantitatively less well understood, from both the observational and the theoretical point of view. In their paper on the evolution of faint blue galaxies, Cavaliere & Menci (1997) discussed the relation between the evolution of the merger rate, represented by the term $`t^k`$ in Eq. (3), and different environments, such as the homogeneous “field”, the virialized clusters or groups, and the large scale structures (sheets and filaments). For the large scale structures they include the dimensionality $`D`$ as a free parameter in the expansion term:
$$f(z)=\left[\frac{1+z}{1+z_{in}}\right]^{(3+D)/2}$$
(5)
where $`z_{in}`$ is the redshift when galaxy aggregation becomes effective; $`D`$ is the dimensionality of the large scale structure, with $`D=2`$ for sheetlike structures and $`D=1`$ for filaments.
We can transform $`f(z)`$ into a function of cosmic time $`t`$ using the conversion between $`z`$ and $`t`$ for a flat universe (see Peebles 1993):
$$1+z=\left(\frac{2}{3}\frac{t^{-1}}{H_0\mathrm{\Omega }^{1/2}}\right)^{2/3}$$
(6)
so we get $`f(t)\propto t^k`$, with $`k`$ in the range ($`-4/3`$, $`-5/3`$) corresponding to $`D=1`$ (filaments) and $`D=2`$ (sheets).
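Explicitly, since Eq. (6) gives $`1+z\propto t^{-2/3}`$, substituting into Eq. (5) yields

$$f(t)\propto \left(t^{-2/3}\right)^{(3+D)/2}=t^{-(3+D)/3}\equiv t^k,\qquad k=\{\begin{array}{cc}-4/3,\hfill & D=1\hfill \\ -5/3,\hfill & D=2.\hfill \end{array}$$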
In our simulation, the best-fit structure for the luminous infrared galaxies is more like filaments, with $`D\simeq 1`$ and $`k\simeq -4/3`$. The free parameter $`\alpha `$ in Eq. (3), which describes the relative weight of the two kinds of interactions (geometric collisions and gravitationally focused interactions), is close to 0.9. We know from our simulation, however, that although varying these parameters influences the final results, such effects can probably still be compensated by adjusting other relevant model parameters. So what we show here is only one set of the most plausible values, the one giving the best fit to the present observations.
In our model, we assumed a redshift dependent infrared burst phase for the merger-triggered luminous infrared sources below a transition redshift $`z\simeq 1`$, meaning that the enhancement of the far infrared luminosity of ULIGs from merging galaxies at high redshift is much higher than that at low redshift, simply because the high redshift mergers are usually between gas rich systems, while at low redshift they are between gas poor systems. We found in our simulation that, although the assumption of the “differential burst” phase and the consequent power law suppression is rather arbitrary and crude, it is actually very important for fitting the strong evolution of the IRAS $`60\mu m`$ source counts in the flux range 10 mJy – 1 Jy. We show the results in Fig. 1, where both the starburst galaxies and the Seyferts are assumed to undergo evolution driven by galaxy–galaxy mergers/interactions, while the spirals basically keep a constant star formation history since their formation. We normalized our Monte Carlo simulation to the observed local luminosity functions at $`60\mu m`$ of starburst galaxies from Saunders et al. (1990) and of Seyferts from Rush et al. (1993). Varying this transition redshift by a factor of two already has significant effects on the fit to the source counts (see Fig. 3). But, since our model is still crude, we can probably only say that what we present here is a most probable scenario giving the best fit to the IRAS deep surveys and other available observations.
We consider in our simulation both types of AGNs, with the assumption that the abundances of Seyfert I and Seyfert II at low redshift are approximately equal, as suggested by the extended $`12\mu m`$ galaxy sample (Rush et al. 1993) and by the recent Hubble Space Telescope imaging survey of nearby AGNs (Malkan et al. 1998). At high redshift ($`z>1`$), we basically assume that the dust shrouded phase dominates, with a fraction of about $`80\%`$, as also suggested by recent modelling of the cosmic X–ray background (Gilli et al. 1999).
We choose the NGC 1068 IR spectrum as the standard template for all obscured AGNs at low redshift, while all “Type I” objects have an SED well represented by the mean SED (spectral energy distribution) of Seyfert I galaxies. We also assume that the early phase of these AGNs shows typical spectra such as that of the dust shrouded F10214+4724, and a phase poor in cold gas like the Cloverleaf quasar. The templates of all these spectra were well modelled by Rowan-Robinson (1992), Rowan-Robinson et al. (1993) and Granato et al. (1994, 1996, 1997).
In order to understand the sources contributing to the faint slope of the IRAS $`60\mu m`$ source counts, we also plot the redshift distributions in the flux range 10 mJy – 1 Jy for the three important populations (starburst galaxies, Seyferts and spirals). We see from Fig. 2 that this faint slope comes from the low redshift starburst galaxies, which peak at $`z\simeq 0.5`$, the local spirals with mean redshift about 0.1, and the infrared burst phase of high redshift gas rich mergers, which peaks at $`z\simeq 1.5`$. Fig. 2 can be used to make powerful predictions about the redshifts of faint infrared sources. It seems to imply that about two-thirds of the faint $`60\mu m`$ sources should have redshifts from a little less than 1 to 2 (and that a fair fraction of those will be Seyferts). Recent ISO and NICMOS results on high redshift ultraluminous infrared galaxies further increase our motivation to consider this model (Dole et al. 1998, Benitez et al. 1998).
We calculate the background level $`\nu I_\nu `$ at $`60\mu m`$; it is approximately 1.9 nW m<sup>-2</sup> sr<sup>-1</sup>. We then extrapolate our simulation to wavelengths of $`25\mu m`$ and $`100\mu m`$. The background intensities are all shown in Fig. 4. Clearly, our extrapolation to $`25\mu m`$ is too low compared to other models, which suggests that we have too low a mixture of intermediate dust temperatures in our templates for the emission spectra. However, our $`60\mu m`$ background is very close to the lower end of the range empirically derived by Malkan & Stecker (1998), and so can be considered a firm lower limit.
## 5 Conclusion
We described in this paper a Monte Carlo simulation of the inverse-cascading process in a merger-driven galaxy evolution scenario, where the evolution of infrared luminous starburst galaxies and AGNs may be triggered by galaxy–galaxy interactions down to moderate redshift (say, $`z<1`$) in the Large Scale Structures. We assume in our model a redshift dependent infrared burst phase, based on the concept that the starburst and AGN activities triggered by gas rich mergers at high redshift are more drastic than those at low redshift, so that the enhancement of the far infrared luminosities of the ULIGs produced by high redshift merger events is higher than that at low redshift. We simulate this effect in our calculation by a power law suppression of the infrared burst phase below a transition redshift $`z\simeq 1`$. We adopt the transition redshift $`z\simeq 1`$ simply because the cosmic time scale at $`z\simeq 1`$ ($`3\times 10^9`$ years) is approximately the disk evolution time scale (Lin & Pringle 1987). Varying any of the model parameters influences the evolution of the luminosity functions of the infrared luminous sources and thus, to some extent, the source counts. But no matter how we adjust the relevant parameters, such as $`\alpha `$, $`D`$ and $`\eta `$, in order to obtain a strong decrease of the merger rate with cosmic time around redshift 1–2 for the source count fitting, the quick fading and suppression of the infrared burst phase at redshift $`z\simeq 1`$ appears to be the critical effect in our simulation for interpreting the strong evolution of the IRAS $`60\mu m`$ source counts in the flux range 10 mJy – 1 Jy. The impact of varying model parameters such as $`\alpha `$, $`D`$ and $`\eta `$ can probably be compensated by adjusting other related parameters, so their particular values in our model are not the critical points of this evolutionary scenario. However, varying the transition redshift or the infrared enhancement $`\eta `$ by a factor of two strongly influences the final results. So it appears that there is not much room for varying any of these parameters. Since our model is still simple and crude, we probably need further information about the evolution of the luminosity functions of the infrared bright sources at high redshift in order to obtain strong model constraints.
We also checked the redshift distributions of the three major infrared source populations (starburst galaxies, Seyferts and spirals). We see from Fig. 2 that the mean redshift of the starburst galaxies and AGNs brighter than $`S_{60}\sim 10`$ mJy is around $`z\simeq 0.5`$ and that their number quickly diminishes toward $`z\simeq 1`$; a new population, the high redshift ultraluminous infrared burst phase at mean redshift $`z\simeq 1.5`$, then takes over. This seems to be consistent with the present groundbased NIR and optical/UV HDF surveys, which failed to detect enough starburst galaxies near $`z\simeq 1`$ (Ashby et al. 1992, Koo & Kron 1992). On the other hand, recent NICMOS/VLT and FIR/submm surveys have indeed found a certain number of infrared bright galaxies at high redshift, and the newest result of the FIRBACK (Far Infrared Background) survey with ISO shows that more than half of the ultraluminous infrared galaxies are at redshift $`z>1`$ (Lilly et al. 1998, Benitez et al. 1998, Dole et al. 1998). This provides further motivation to consider their contribution to the strong evolution of the IRAS faint source counts in the flux range 10 mJy – 1 Jy in Fig. 1.
Meanwhile, the new data on nearby blazars like Mkn 501 from the HEGRA team appear to show an extension of the TeV $`\gamma `$ ray spectrum up to about 24 TeV, which sets an upper limit on the intergalactic infrared emission history during structure formation (Aharonian et al. 1999). We calculated the background level at $`60\mu m`$ from a possible merger-driven starburst and AGN scheme plus a simple constant star formation history for the spiral galaxies. The infrared background level at $`60\mu m`$ is only 1.9 nW m<sup>-2</sup> sr<sup>-1</sup>, which is about half of the values estimated in some previous papers and consistent with the upper limit from the new TeV $`\gamma `$ ray spectrum of Mkn 501 (Stecker 1999, Funk et al. 1998). Clearly, this is a strong lower limit, because any variation of our model would produce a higher background. Forthcoming data on direct source counts at infrared wavelengths will allow us to further constrain the evolution of both AGN and starbursts, as well as the absorption of gamma rays near 10 TeV photon energy.
###### Acknowledgements.
We thank Profs. Martin Harwit, Hinrich Meyer, Drs. Norbert Magnussen, Karl Mannheim and Wolfgang Rode for their helpful comments and suggestions. In addition PLB would like to thank Profs. Ocker de Jager, Todor Stanev, Floyd Stecker, Joel Primack and Amri Wandel for intense discussions of the theme of this work. YPW was supported by a PhD fellowship of Wuppertal University, Germany, and finished the calculations at MPIfR; YPW is indebted to their hospitality and kind help. We are very grateful to the anonymous referee for her/his helpful comments and for suggestions to consider the model further.
# Unconventional Quantum Computing Devices
Seth Lloyd
Mechanical Engineering
MIT 3-160
Cambridge, Mass. 02139
Abstract: This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than ‘conventional’ quantum computers.
Computers are physical: what they can and cannot do is determined by the laws of physics. When scientific progress augments or revises those laws, our picture of what computers can do changes. Currently, quantum mechanics is generally accepted as the fundamental dynamical theory of how physical systems behave. Quantum computers can in principle exploit quantum coherence to perform computational tasks that classical computers cannot [1-21]. If someday quantum mechanics should turn out to be incomplete or faulty, then our picture of what computers can do will change. In addition, the set of known quantum phenomena is constantly increasing: essentially any coherent quantum phenomenon involving nonlinear interactions between quantum degrees of freedom can in principle be exploited to perform quantum logic. This paper discusses how the revision of fundamental laws and the discovery of new quantum phenomena can lead to new technologies and algorithms for quantum computers.
Since new quantum effects are discovered seemingly every day, let’s first discuss two basic tests that a phenomenon must pass to be able to function as a basis for quantum computation. These are 1) the phenomenon must be nonlinear, and 2) it must be coherent. To support quantum logic, the phenomenon must involve some form of nonlinearity, e.g., a nonlinear interaction between quantum degrees of freedom. Without such a nonlinearity quantum devices, like linear classical devices, cannot perform even so simple a nonlinear operation as an AND gate. Quantum coherence is a prerequisite for performing tasks such as factoring using Shor’s algorithm, quantum simulation à la Feynman and Lloyd, or Grover’s data-base search algorithm, all of which require extended manipulations of coherent quantum superpositions.
The requirements of nonlinearity and coherence are not only necessary for a phenomenon to support quantum computation, they are also in principle sufficient. As shown in [14-15], essentially any nonlinear interaction between quantum degrees of freedom suffices to construct universal quantum logic gates that can be assembled into a quantum computer. In addition, the work of Preskill et al. on robust quantum computation shows that an error rate of no more than $`10^{-4}`$ per quantum logic operation allows one to perform arbitrarily long quantum computations in principle.
In practice, of course, few if any quantum phenomena are likely to prove sufficiently controllable to provide extended quantum computation. Promising devices under current experimental investigation include ion traps, high finesse cavities for manipulating light and atoms using quantum electrodynamics, and molecular systems that can be made to compute using nuclear magnetic resonance [8-9]. Such devices store quantum information on the states of quantum systems such as photons, atoms, or nuclei, and accomplish quantum logic by manipulating the interactions between the systems via the application of semiclassical potentials such as microwave or laser fields. We will call such devices ‘conventional’ quantum computers, if only because such devices have actually been constructed.
There is another sense in which such computers are conventional: although the devices described above have already been used to explore new regimes in physics and to create and investigate the properties of new and exotic quantum states of matter, they function according to well established and well understood laws of physics. Perhaps the most striking examples of the ‘conventionality’ of current quantum logic devices are NMR quantum microprocessors that are operated using techniques that have been refined for almost half a century. Ion-trap and quantum electrodynamic quantum computers, though certainly cutting edge devices, operate in a quantum electrodynamic regime where the fundamental physics has been understood for decades (that is not to say that new and unexpected physics does not arise frequently in this regime, rather that there is general agreement on how to model the dynamics of such devices).
Make no mistake about it: a conventional quantum logic device is the best kind of quantum logic device to have around. It is exactly because the physics of nuclear magnetic resonance and quantum electrodynamics are well understood that devices based on this physics can be used systematically to construct and manipulate the exotic quantum states that form the basis for quantum computation. With that recognition, let us turn to ‘unconventional’ quantum computers.
Perhaps the most obvious basis for an unconventional quantum computer is the use of particles with non-Boltzmann statistics in a regime where these statistics play a key role in the dynamics of the device. For example, Lloyd has proposed the use of fermions as the fundamental carriers of quantum information, so that a site or state occupied by a fermion represents a 1 and an unoccupied site or state represents a 0. It is straightforward to design a universal quantum computer using a conditional hopping dynamics on an array of sites, in which a fermion hops from one site to another if and only if other sites are occupied.
If the array is one-dimensional, then such a fermionic quantum computer is equivalent to a conventional quantum computer via the well-known technique of bosonization. If the array is two or more dimensional, however, a local operation involving fermions on the lattice cannot be mocked up by a local operation on a conventional quantum computer, which must explicitly keep track of the phases induced by Fermi statistics. As a result, such a fermionic computer can perform certain operations more rapidly than a conventional quantum computer. An obvious example of a problem that can be solved more rapidly on a fermionic quantum computer is the problem of simulating a lattice fermionic system in two or more dimensions. To get the antisymmetrization right in second quantized form, a conventional ‘Boltzmann’ quantum computer takes time proportional to $`T\ell ^{d-1}`$, where $`T`$ is the time over which the simulation is to take place, $`\ell `$ is the length of the lattice and $`d`$ is the dimension, while a fermionic quantum computer takes time proportional to $`T`$. (Here we assume that the computations for both conventional and fermionic quantum computers can take advantage of the intrinsic parallelizability of such simulations: if the computations are performed serially, an additional factor of $`\ell ^d`$ is required for both types of computer to update each site sequentially.)
As the lattice size $`\ell `$ and the dimension $`d`$ grow large, the difference between the two types of computer also grows large. Indeed, the problem of simulating fermions hopping on a hypercube of dimension $`d`$ as $`d\rightarrow \infty `$ is evidently exponentially harder on a conventional quantum computer than on a fermionic quantum computer. Since a variety of difficult problems such as the travelling-salesman problem and the data-base search problem can be mapped to particles hopping on a hypercube, it is interesting to speculate whether fermionic computers might provide an exponential speed-up on problems of interest in addition to quantum simulation. No such problems are currently known, however. Fermionic computers could be realized in principle by manipulating the ways in which electrons and holes hop from site to site on a semiconductor lattice (though problems of decoherence are likely to be relatively severe for such systems).
It might also be possible to construct bosonic computers using photons, phonons, or atoms in a Bose-Einstein condensate. Such systems can be highly coherent and support nonlinear interactions: phonons and photons can interact in a nonlinear fashion via their common nonlinear interaction with matter, and atoms in a Bose condensate can be made to interact via quantum electrodynamics (by introduction of a cavity) or by collisions. So far, however, the feature of Bose condensates that makes them so interesting from the point of view of physics — all particles in the same state — makes them less interesting from the point of view of quantum computation. Many particles in the same state, which can be manipulated coherently by a variety of techniques, explore the same volume of Hilbert space as a single particle in that state. As a result, it is unclear how such a bosonic system could provide a speed-up over conventional quantum computation. More promising than Bose condensates from the perspective of quantum computation and quantum communications is the use of cavity quantum electrodynamics to ‘dial up’ or synthesize arbitrary states of the cavity field. Such a use of bosonic states is important for the field of quantum communications, which requires the ability to create and manipulate entangled states of the electromagnetic field.
A third unconventional design for a quantum computer relies on ‘exotic’ statistics that are neither fermionic nor bosonic. Kitaev has recently proposed a quantum computer architecture based on ‘anyons,’ particles that acquire an arbitrary phase when exchanged. Examples of anyons include two-dimensional topological defects in lattice systems of spins with various symmetries. Kitaev noted that such anyons could perform quantum logic via Aharonov-Bohm type interactions. Preskill et al. have shown explicitly how anyonic systems could compute in principle, and Lloyd et al. have proposed methods of realizing anyons using superconducting circuits (they could also in principle be constructed using NMR quantum computers to mock up the anyonic dynamics in an effectively two-dimensional space of spins). The advantage of using anyons for quantum computation is that their nonlocal topological nature can make them intrinsically error-correcting and virtually immune to the effects of noise and interference.
As the technologies of the microscale become better developed, more and more potential designs for quantum computers, both conventional and unconventional, are likely to arise. Additional technologies that could prove useful for the construction of quantum logic devices include photonic crystals, optical hole-burning techniques, electron spin resonance, quantum dots, superconducting circuits in the quantum regime, etc. Since every quantum degree of freedom can in principle participate in a computation, one cannot a priori rule out the possibility of using currently hard-to-control degrees of freedom, such as quarks and gluons in complex nuclei, to process information. Needless to say, most if not all of the designs inspired by these technologies are likely to fail. There is room for optimism that some such quantum computer designs will prove practicable, however.
The preceding unconventional designs for quantum computers were based on existing, experimentally confirmed physical phenomena (except in the case of non-abelian anyons). Let us now turn to designs based on speculative, hypothetical, and not yet verified phenomena. (One of the most interesting of these phenomena is large-scale quantum computation itself: can we create and systematically transform entangled states involving hundreds or thousands of quantum variables?) A particularly powerful hypothesis from the point of view of quantum computation is that of nonlinear quantum mechanics.
The conventional picture of quantum mechanics is that it is linear in the sense that the superposition principle is obeyed exactly. (Of course, quantum systems can still exhibit nonlinear interactions between degrees of freedom while continuing to obey the superposition principle.) Experiment confirms that the superposition principle is indeed obeyed to a high degree of accuracy. Nonetheless, a number of scientists including Weinberg have proposed nonlinear versions of quantum mechanics in which the superposition principle is violated. Many of these proposals exhibit pathologies such as violations of the second law of thermodynamics or the capacity for superluminal communication. Despite such theoretical difficulties, it is still possible that quantum mechanics does indeed possess a small nonlinearity, even if it currently seems unlikely. If a nonlinear operation such as that proposed by Weinberg can be incorporated in a quantum logic operation, then the consequences are striking: NP-complete problems can be solved easily in polynomial time. Indeed, NP-oracle problems and all problems in $`\mathrm{\#}P`$ can be solved in polynomial time on such a nonlinear quantum computer.
A general proof of this result is given elsewhere; however, a simple argument for why this is so runs as follows. Suppose that it is possible to perform a non-unitary operation on a single qubit that has a positive Lyapunov exponent over some region: i.e., somewhere on the unit sphere there exists a line of finite extent along which application of the operation causes nearby points with angular separation $`\delta \theta `$ to move apart exponentially, as $`\delta \theta e^{\lambda t}`$. Now consider a function $`f(x)`$ from $`N`$ bits to one bit. We wish to determine whether or not there exists an $`x`$ such that $`f(x)=1`$, and if so, how many such $`x`$’s there are. Using the nonlinear operation with positive Lyapunov exponent, it is straightforward to construct a mapping that leaves a point on the exponentially expanding line (call this point $`|0\rangle `$) fixed if there are no solutions to the equation $`f(x)=1`$, and that maps the point to a nearby point $`\mathrm{cos}(n/2^N)|0\rangle +\mathrm{sin}(n/2^N)|1\rangle `$ along the line if there are exactly $`n`$ solutions to the equation $`f(x)=1`$. Repeated application of the nonlinear map can then be used to drive the points apart at an exponential rate: eventually, at a time determined by the number of qubits $`N`$, the number of solutions $`n`$, and the rate of spreading $`\lambda `$, the two points become macroscopically distinguishable, allowing one to determine whether or not there is a solution and, if there is, how many solutions there are. The map $`f`$ need only be applied once, and the amount of time it takes to reveal the number of solutions is proportional to $`N`$.
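As a purely classical toy version of this counting argument (an illustration only; standard quantum mechanics forbids such a map), one can estimate how many applications of the expanding map are needed before the angle encoding $`n`$ becomes macroscopic:

```python
import math

def iterations_to_resolve(N, n, lam, theta_macro=0.1):
    """Number of applications of a map with Lyapunov exponent lam needed to
    amplify the initial angle theta_0 = n / 2**N, which encodes the number
    of solutions n, to a macroscopic angle theta_macro; the count grows
    linearly in N, as stated in the text."""
    theta_0 = n / 2.0 ** N
    return math.ceil(math.log(theta_macro / theta_0) / lam)

# e.g. iterations_to_resolve(50, 1, 0.5) -> 65 applications for N = 50 bits
```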
The fact that nonlinear quantum mechanics allows the straightforward solution of NP-complete and $`\mathrm{\#}P`$ problems should probably be regarded as yet another strike against nonlinear quantum mechanics. Whether or not quantum mechanics is linear is a question to be resolved experimentally, however. In the unlikely event that quantum mechanics does turn out to be nonlinear, all our problems may be solved.
Finally, let us turn our attention to hypothetical quantum Theories of Everything, such as string theory. Such a theory must clearly support quantum computation since it supports cavity quantum electrodynamics and nuclear magnetic resonance. The obvious question to ask is then, does a Theory of Everything need to support anything more than quantum computation? So far as experimental evidence is concerned the answer to this question is apparently No: we have no evident reason to doubt that the universe is at bottom anything more than a giant, parallel, quantum information processing machine, and that the phenomena that we observe and attempt to characterize are simply outputs of this machine’s ongoing computation. Of course, just how the universe is carrying out this computation is likely to remain a question of great interest for some time.
To summarize: Computers are physical systems, and what they can do in practice and in principle is circumscribed by the laws of physics. The laws of physics in turn permit a wide variety of quantum computational devices including some based on nonconventional statistics and exotic effects. Modifications made to the laws of physics have the consequence that what can be computed in practice and in principle changes. A particularly intriguing variation on conventional physics is nonlinear quantum mechanics which, if true, would allow hard problems to be solved easily.
References
1. P. Benioff, ‘Quantum Mechanical Models of Turing Machines that Dissipate No Energy,’ Physical Review Letters, Vol. 48, No. 23, pp. 1581-1585 (1982)
2. D. Deutsch, ‘Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer,’ Proceedings of the Royal Society of London, A, Vol. 400, pp. 97-117 (1985).
3. R.P. Feynman, ‘Quantum Mechanical Computers,’ Optics News, Vol. 11, pp. 11-20 (1985); also in Foundations of Physics, Vol. 16, pp. 507-531 (1986).
4. S. Lloyd, ‘A Potentially Realizable Quantum Computer,’ Science, Vol. 261, pp. 1569-1571 (1993).
5. J.I. Cirac and P. Zoller, ‘Quantum Computations with Cold Trapped Ions,’ Physical Review Letters, Vol. 74, pp. 4091-4094 (1995).
6. Q.A. Turchette, C.J. Hood, W. Lange, H. Mabuchi, H.J. Kimble, ‘Measurement of Conditional Phase Shifts for Quantum Logic,’ Physical Review Letters, Vol. 75, pp. 4710-4713 (1995).
7. C. Monroe, D.M. Meekhof, B.E. King, W.M. Itano, D.J. Wineland, ‘Demonstration of a Fundamental Quantum Logic Gate,’ Physical Review Letters, Vol. 75, pp. 4714-4717 (1995).
8. D.G. Cory, A.F. Fahmy, T.F. Havel, ‘Nuclear Magnetic Resonance Spectroscopy: an experimentally accessible paradigm for quantum computing,’ in PhysComp96, Proceedings of the Fourth Workshop on Physics and Computation, T. Toffoli, M. Biafore, J. Leão, eds., New England Complex Systems Institute, 1996, pp. 87-91.
9. N.A. Gershenfeld and I.L. Chuang, ‘Bulk Spin-Resonance Quantum Computation,’ Science, Vol. 275, pp. 350-356 (1997).
10. P. Shor, ‘Algorithms for Quantum Computation: Discrete Log and Factoring,’ in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, S. Goldwasser, Ed., IEEE Computer Society, Los Alamitos, CA, 1994, pp. 124-134.
11. R.P. Feynman, ‘Simulating Physics with Computers,’ International Journal of Theoretical Physics, Vol. 21, pp. 467-488 (1982).
12. S. Lloyd, ‘Universal Quantum Simulators,’ Science, Vol. 273, pp. 1073-1078 (1996).
13. L.K. Grover, ‘Quantum Mechanics Helps in Searching for a Needle in a Haystack,’ Physical Review Letters, Vol. 79, pp. 325-328 (1997).
14. D. Deutsch, A. Barenco, A. Ekert, ‘Universality in Quantum Computation,’ Proceedings of the Royal Society of London A, Vol. 449, pp. 669-677 (1995).
15. S. Lloyd, ‘Almost Any Quantum Logic Gate is Universal,’ Physical Review Letters, Vol. 75, pp. 346-349 (1995).
16. S. Lloyd, ‘Fermionic Quantum Computers,’ talk delivered at the Santa Barbara workshop on Physics of Information, November 1996.
17. D. Abrams and S. Lloyd, to be published.
18. J. Preskill et al., to be published.
19. A. Yu. Kitaev, to be published.
20. J. Preskill et al., to be published.
21. S. Lloyd et al. to be published.
# BeppoSAX observation of the transient X–ray pulsar GS 1843+00
## 1 Introduction
The transient X–ray pulsar GS 1843+00 was discovered on 1988 April 3 during a galactic plane scan observation near the Scutum region by the Ginga satellite (Makino et al. 1988a; Makino et al. 1988b). The Large Area Counter (LAC; Turner et al. 1989) on board Ginga (Makino et al. 1987) detected, at a J2000 position of $`\alpha =18^\mathrm{h}45^\mathrm{m}\pm 1^\mathrm{m}`$, $`\delta =0^{\circ}55^{\prime}\pm 7^{\prime}`$, a coherent pulsation with a period of 29.5 s and a 2–37 keV X–ray intensity of 50 mCrab (Koyama et al. 1990a). On 1988 April 19 and 20 Ginga carried out a pointed observation of GS 1843+00, measuring a highly variable X–ray flux, ranging from 30 to 60 mCrab, on a wide range of time scales. In addition to a coherent oscillation with P$`=29.5056\pm 0.0002`$ s, an energy–dependent aperiodic variation was found (Koyama et al. 1990b).
On 1997 March 3, the Burst and Transient Source Experiment (BATSE) on board CGRO detected a new outburst from this peculiar source (Wilson et al. 1997). The mean 20–50 keV rms pulsed flux was $`37\pm 2`$ mCrab, while the mean barycentric pulse period at an epoch of March 6.0 was P$`=29.5631\pm 0.0003`$ s. The period variation during this observation implies a spin–up rate, $`\dot{\mathrm{P}}`$, of $`(3.65\pm 0.11)\times 10^{-8}`$ s s<sup>-1</sup>. The data confirmed the low pulsed fraction ($`\sim 7\%`$) observed by Ginga.
Between 1997 February 1 and March 19 the All Sky Monitor (ASM) on board the Rossi X–ray Timing Explorer (RXTE) observed the source at a flux level of $`\mathrm{F}_{2-10}\simeq 15-30`$ mCrab (Takeshima 1997).
A pointed observation carried out on 1997 March 5 with the RXTE Proportional Counter Array (PCA) detected the source at a 2–60 keV flux level of 62 mCrab, measuring a barycentric pulse period of $`29.565\pm 0.002`$ s at an epoch of March 5.1712 UT (Takeshima (1997)).
On 1997 April 4 the BeppoSAX Narrow Field Instruments (NFIs) performed a pointed observation of GS 1843+00 (Piraino et al. 1998). The source flux was $`(2.9\pm 0.3)\times 10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 0.3–100 keV energy range.
Using the capability of the BeppoSAX imaging instruments, the 90% confidence J2000 position of GS 1843+00 was constrained to lie within a $`30^{\prime \prime }`$ radius circular error region centered on $`\alpha =18^\mathrm{h}45^\mathrm{m}34^\mathrm{s}`$, $`\delta =0^{\circ}52.5^{\prime}`$ (Santangelo et al. 1997). On the same day, the source was also observed by the ROSAT High Resolution Imager (HRI), which found a J2000 position of $`\alpha =18^\mathrm{h}45^\mathrm{m}36.9^\mathrm{s}`$, $`\delta =0^{\circ}51^{\prime}45^{\prime \prime }`$ (90% confidence error radius $`10^{\prime \prime }`$; Dennerl & Greiner 1997).
In this paper we present the results of both a timing and spectroscopic analysis of the BeppoSAX observation of GS 1843+00.
## 2 Observation
The Satellite for X–ray Astronomy BeppoSAX is described in detail in Boella et al. 1997a . It carries a pay–load of four co–aligned Narrow Field Instruments: the Low Energy Concentrator Spectrometer (LECS, 0.1–10 keV; Parmar et al. (1997)), the Medium Energy Concentrator Spectrometers (MECS, 1.6–10 keV; Boella et al. 1997b ), the High Pressure Gas Scintillation Proportional Counter (HPGSPC, 4–100 keV; Manzo et al. (1997)) and the Phoswich Detector System (PDS, 15–200 keV; Frontera et al. (1997)).
The LECS and MECS are imaging instruments with an angular resolution of $`1.2^{\prime}`$. The HPGSPC and PDS are collimated instruments with fields of view (FOVs) of $`1^{\circ}\times 1^{\circ}`$.
The BeppoSAX Target of Opportunity observation of GS 1843+00 started on 1997 April 4 at 02:17 and ended at 15:00 UTC. Good data were selected from intervals when the instrument configurations were nominal and the elevation angle with respect to the Earth limb was greater than $`5^{\circ}`$. The total on–source exposure times were 8.5 ks for the LECS, which is operated only during night time, 22 ks for the MECS, 9.7 ks for the HPGSPC and 7.4 ks for the PDS.
The MECS light curves and spectra have been extracted, following the standard procedure, from a circular region of $`4^{\prime}`$ radius centered on the source, while an $`8^{\prime}`$ radius was used for the LECS.
For both the LECS and MECS, the background subtraction was performed using the background obtained from long blank-sky observations, rescaled by a correction factor given by the ratio of the count rates extracted from the blank-sky and GS 1843+00 images in a region of the detector far from the source location. For the two collimated instruments, HPGSPC and PDS, the background was subtracted using the standard procedure (Segreto et al. 1997; Frontera et al. 1997) based on the rocking collimator technique.
## 3 Temporal Analysis
The arrival times of the photons were first converted to the solar system barycenter. In Fig. 1 we show the background subtracted X–ray light curves, obtained in three energy ranges, with a 300 s time bin size. The maximum intensity variation is $`\sim 30\%`$ in the 1.6–10 keV and 20–60 keV energy ranges, and 60% in the 10–20 keV range.
The (1.6–10 keV) GS 1843+00 power spectrum is shown in Fig. 2. An outstanding peak at 0.0339 Hz is clearly observed. The GS 1843+00 pulse period was obtained with an epoch–folding technique using barycentric corrected 1.6–10 keV MECS data, while the (1$`\sigma `$) uncertainty was determined by fitting the arrival times of sets of 9 averaged profiles, each of 16 phase bins. The best–fit period is $`29.477\pm 0.001`$ s. There is no evidence for any change in spin–period during the observation, with an upper limit of $`7.5\times 10^{-8}`$ s s<sup>-1</sup>.
Using this period value, we folded the light curves in different energy bands. The pulse profiles in five energy ranges are shown in Fig. 3. At lower energies the pulse profile is clearly asymmetric with a double peak shape, whilst at higher energies it becomes a simple sinusoid.
The variation with energy of the pulsed fraction, defined as the semi–amplitude of the modulation divided by the average intensity, is shown in Fig. 4. There is no evidence for an increase in the fractional periodic variation with energy.
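For concreteness, a minimal sketch of the epoch-folding and pulsed-fraction computations of this section follows; the bin counts, the trial-period grid and the $`\chi ^2`$ statistic are standard choices, not details taken from this analysis.

```python
import numpy as np

def epoch_fold(times, period, nbins=16):
    """Fold barycentric photon arrival times (s) on a trial period (s) and
    return the binned pulse profile."""
    phases = (times / period) % 1.0
    profile, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return profile

def chi2_constant(profile):
    """Chi^2 of the folded profile against a constant level; it is maximal
    at the true pulse period."""
    mean = profile.mean()
    return ((profile - mean) ** 2 / mean).sum()

def pulsed_fraction(profile):
    """Semi-amplitude of the modulation divided by the average intensity,
    the definition used for Fig. 4."""
    return 0.5 * (profile.max() - profile.min()) / profile.mean()

# scan trial periods around 1 / 0.0339 Hz ~ 29.5 s:
# periods = np.linspace(29.3, 29.6, 3001)
# best = periods[np.argmax([chi2_constant(epoch_fold(times, p)) for p in periods])]
```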
In the BeppoSAX observation, GS 1843+00 may show the opposite behavior to that of the Ginga observation, where the pulsed fraction was found to increase with energy and the pulse profile was clearly single peaked at lower energies and more structured at higher energies (Koyama et al. 1990b).
From the BATSE (data provided online by the BATSE pulsar team at http://www.batse.msfc.nasa.gov/data/pulsar/sources), RXTE (Takeshima 1997) and BeppoSAX data, a clear spin–up trend ($`\dot{\mathrm{P}}=(3.79\pm 0.10)\times 10^{-8}`$ s s<sup>-1</sup>) over 30 days is evident (Fig. 5). The mean spin–up timescale, $`\mathrm{P}/\dot{\mathrm{P}}`$, is a very rapid 24.6 years. However, a difference $`\mathrm{\Delta }P`$ of 0.01 s is observed between the BeppoSAX period and the one expected from the extrapolation of the BATSE data. This could be due to the Doppler effect of orbital motion. Indeed, the change of the pulse period, $`\mathrm{\Delta }P`$, due to orbital motion is constrained to be
$$\mathrm{\Delta }P\le \frac{P}{c}\frac{2\mathrm{sin}i}{\sqrt{1-e^2}}\left[\frac{2\pi G}{P_{orb}}(M_{NS}+M_c)\right]^{\frac{1}{3}}$$
(1)
where $`P_{orb}`$ is the orbital period, $`e`$ the eccentricity, $`i`$ the inclination angle, $`M_{NS}`$ the mass of the neutron star, $`M_c`$ the mass of the companion star, $`G`$ the gravitational constant and $`c`$ the speed of light. Using the Corbet (1986) relation to estimate the orbital period ($``$50 d) and assuming a mass of 15 $`M_{}`$ for the companion star, typical of a Be star, the upper limit on $`\mathrm{\Delta }P`$ turns out to be $``$0.02 s for circular motion.
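A quick numerical evaluation of Eq. (1) with the numbers quoted above (the 1.4 $`M_{}`$ neutron-star mass is our own assumption; inclination and eccentricity also enter the exact value) reproduces the order of magnitude of this limit:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

P       = 29.477             # pulse period [s]
P_orb   = 50 * 86400.0       # orbital period from the Corbet relation [s]
M_ns    = 1.4 * Msun         # assumed neutron-star mass
M_c     = 15.0 * Msun        # Be companion mass used in the text
e, sini = 0.0, 1.0           # circular orbit, edge-on (most conservative)

v = (2 * np.pi * G * (M_ns + M_c) / P_orb) ** (1.0 / 3.0)
dP = (P / c) * 2 * sini / np.sqrt(1 - e**2) * v
print(f"Delta P <~ {dP:.3f} s")   # of order 10^-2 s, as quoted above
```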
To study the aperiodic variability, first reported by Koyama et al. (1990b ), the (1.6–10 keV) and (10–37 keV) power spectra were fitted with a power law. The Poisson white noise was subtracted from the Leahy normalized power spectra (Leahy et al. (1983)). The power-law indices, 1.44$`\pm `$0.17 and 0.9$`\pm `$0.25 respectively, are consistent with those found in the Ginga observation (Koyama et al. 1990b ). The relative amplitude of the aperiodic variation, calculated by dividing the square root of the power spectrum integrated over $`4\times 10^{-3}`$ Hz to 10 Hz by the average intensity, is larger in the lower energy band (18$`\%`$) than in the higher energy band (2$`\%`$).
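The normalization and rms computation described here can be sketched as follows (a schematic illustration; the rebinning and deadtime corrections of the real analysis are omitted):

```python
import numpy as np

def leahy_psd(counts, dt):
    # Leahy-normalized periodogram of an evenly binned counts series;
    # pure Poisson noise averages to a power of 2 in this normalization.
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
    freq = np.fft.rfftfreq(len(counts), dt)
    return freq[1:], power[1:]          # drop the zero-frequency term

def fractional_rms(freq, power, rate, fmin=4e-3, fmax=10.0):
    # Relative amplitude of the aperiodic variability: square root of
    # the Poisson-subtracted power integrated over [fmin, fmax],
    # divided by the average intensity (the division by the count rate
    # converts Leahy power to the rms^2/Hz normalization).
    sel = (freq >= fmin) & (freq <= fmax)
    df = freq[1] - freq[0]
    var = np.sum(power[sel] - 2.0) * df / rate      # (rms/mean)^2
    return np.sqrt(max(var, 0.0))
```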
## 4 Spectral Analysis
Data were selected in the energy ranges 0.1–4 keV, 1.6–10 keV, 8–40 keV and 15–40 keV, respectively for the LECS, MECS, HPGSPC and PDS, where the instrument responses are well determined and there are sufficient counts. All spectra have been rebinned to at least 20 counts per energy channel, in order to ensure the applicability of the $`\chi ^2`$ test in the spectral fits.
Exploiting the BeppoSAX spectral capability we were able to obtain the simultaneous broad band spectrum (0.1–200 keV) of GS 1843+00. The source shows a very hard spectrum strongly absorbed at lower energies. No deviation from a smooth continuum is observed. This can be seen in Fig. 6 in which the Crab ratio, upper panel, and the ratio times the functional form of the Crab (a featureless power–law with $`\alpha =2.1`$ in this energy range), lower panel, are reported. To extract more physical information we fitted the phase averaged spectra obtained from the four co–aligned instruments simultaneously.
The conventional model used to describe the spectra of X–ray pulsars (Pravdo et al. (1978); White et al. (1983)) is an absorbed power law with an exponential cut–off at higher energies, i.e. a photon spectrum of the form
$$\mathrm{f}(\mathrm{E})=\mathrm{AE}^{-\alpha }\mathrm{exp}\{-\mathrm{N}_\mathrm{H}\sigma (\mathrm{E})-\mathrm{H}(\mathrm{E})\}$$
(2)
where E is the photon energy, $`\alpha `$ is the power–law photon index, $`\mathrm{N}_\mathrm{H}`$ is the absorbing column and $`\sigma (\mathrm{E})`$ is the photoelectric absorption cross section due to cold matter (Morrison & McCammon (1983)). The high–energy cut–off is modeled by a function of the form:
$$\mathrm{H}(\mathrm{E})=\{\begin{array}{cc}0\hfill & \text{ }E<E_c\hfill \\ \frac{\mathrm{E}-\mathrm{E}_\mathrm{c}}{\mathrm{E}_\mathrm{f}}\hfill & \text{ }E>E_c\hfill \end{array}$$
(3)
where $`\mathrm{E}_\mathrm{c}`$ is the cut–off energy and $`\mathrm{E}_\mathrm{f}`$ is the e–folding energy.
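The model of Eqs. (2)–(3) translates directly into code; in this sketch the Morrison & McCammon cross sections are assumed to be supplied by an external callable `sigma_E`:

```python
import numpy as np

def cutoff_powerlaw(E, A, alpha, NH, Ec, Ef, sigma_E):
    # Absorbed power law with high-energy cut-off, Eqs. (2)-(3):
    #   f(E) = A E^-alpha exp{-NH sigma(E) - H(E)}
    H = np.where(E < Ec, 0.0, (E - Ec) / Ef)
    return A * E ** (-alpha) * np.exp(-NH * sigma_E(E) - H)
```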
Using this model, we obtained a $`\chi _\nu ^2`$ of 1.08 for 477 degrees of freedom (dof). The best–fit parameters are summarized in Table 1.
The spectrum, together with the best–fit model, is shown in the upper panel of Fig. 7.
Fit residuals, in units of $`\sigma `$, are reported in the lower panel and show no clear evidence of any absorption or emission features. Normalization factors between the instruments were left free in the fits. Setting the MECS as reference, the relative normalizations are 0.83 for the LECS, 1.02 for the HPGSPC and 0.79 for the PDS. These values are in good agreement with the ones obtained from the intercalibration analysis of the four Narrow Field Instruments (Fiore et al. (1999)). The inclusion in the model of a Gaussian line gives a marginal improvement in fit quality (at less than the 90$`\%`$ confidence level) for a fluorescent K<sub>α</sub> line at 6.4 keV with a flux of $`(2.4\pm 1.0)\times 10^{-4}`$ photons cm<sup>-2</sup> s<sup>-1</sup>.
## 5 Discussion
After its discovery in 1988, GS 1843+00 was detected again in 1997 March as a bright X–ray source, with a (0.3–100 keV) flux of $`2.9\times 10^{-9}`$ erg cm<sup>-2</sup>s<sup>-1</sup>. Thanks to the spatial capabilities of the BeppoSAX imaging instruments an improved position was obtained. The BeppoSAX position is within the Ginga (Koyama et al. 1990b ) and RXTE (Chakrabarty et al. (1997)) error boxes, and is also consistent with that measured by the ROSAT HRI (Dennerl & Greiner (1997)). An accurate measurement of the position of the source is important in order to carry out a systematic search for the still unidentified optical counterpart. Pulsations were found with a period $`P=29.477\pm 0.001`$ s, together with a mean pulse period change $`\dot{\mathrm{P}}/\mathrm{P}=4.1\times 10^{-2}`$ yr<sup>-1</sup>, in good agreement with the value measured by Ginga. Koyama et al. (Koyama89 (1989)) suggested that such a high spin-up rate could be due, at least partly, to orbital Doppler motion. Pulse period variations observed over the 30 days of monitoring obtained by combining data from BATSE, RXTE–PCA and BeppoSAX confirmed the presence of a high intrinsic spin–up rate. Moreover, assuming a Be transient system with an orbital period between 50 and 60 days, inferred from the pulse–orbital period relation of Corbet (Corbet86 (1986)), a Doppler modulation may be superimposed on this intrinsic spin–up rate.
The source spectrum, which is well described by an absorbed power law with a high energy cut–off, is typical of accreting X–ray pulsars. The very high absorption, $`\mathrm{N}_\mathrm{H}=2.3\times 10^{22}`$ cm<sup>-2</sup>, is consistent with that reported by Koyama et al. (1990b ). The hypothesis that the absorption is mainly interstellar rather than circumstellar (Koyama et al. 1990a ) is supported by the marginal detection of a fluorescent K<sub>α</sub> iron line in the source spectrum.
Assuming a distance of 10 kpc (Koyama et al. 1990a ; Hayakawa et al. (1977)) the 0.3–100 keV luminosity is $``$$`3\times 10^{37}`$ erg s<sup>-1</sup>.
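This number follows from the quoted flux via $`L=4\pi d^2F`$; a one-line check:

```python
import numpy as np

kpc  = 3.086e21                      # cm
flux = 2.9e-9                        # 0.3-100 keV flux [erg cm^-2 s^-1]
d    = 10 * kpc                      # assumed distance
L    = 4 * np.pi * d**2 * flux
print(f"L(0.3-100 keV) ~ {L:.1e} erg/s")   # ~3.5e37, i.e. ~3e37 erg/s
```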
It is unclear if cyclotron resonance scattering features are present in the hard X–ray spectrum of the source. Koyama et al. (1990b ) suggested that the cut–off in the spectrum observed at $``$18 keV could be related to a very intense magnetic field, typical of this class of source. Moreover, Mihara (Mihara (1995)), fitting the phase resolved spectrum with an absorption–like feature at $``$20 keV, classified GS 1843+00 as a possible cyclotron source. Although the spectrum is observed with good statistics up to $``$100 keV, no evidence of any cyclotron feature is observed in the BeppoSAX pulse phase averaged spectrum of GS 1843+00. The “Crab–ratio technique” (Dal Fiume et al. (1998)), successfully exploited in detecting Resonance Cyclotron Features (RCFs) in other X–ray pulsars, likewise does not reveal any sign of cyclotron features. Moreover, no evidence of cyclotron absorption features was found in the phase resolved spectra below 100 keV. However, we found an upper limit of 0.15 on the depth of the possible 20 keV feature. This value is compatible with that found by Mihara (Mihara (1995)).
Manchanda (Manchanda99 (1999)), using data from the LASE experiment, a balloon–borne large area scintillation counter, recently suggested the possibility of an absorption feature around 100 keV or an emission feature at 140 keV. Unfortunately, the statistics of the BeppoSAX spectra are quite poor at those energies and a much deeper analysis, which is underway, is required.
There are $``$80 known accreting X–ray pulsars (see Bildsten et al. (1997) for a recent review). Until recently only the relatively bright nearby pulsars were visible due to the limited sensitivity of previous detectors. This is changing with the discovery by ASCA, ROSAT, BeppoSAX and RXTE of a population of faint, absorbed pulsars (e.g., Angelini et al. (1998); Kinugasa et al. (1998), Torii et al. (1998)). The search for faint pulsars is one of the main scientific objectives of the ASCA galactic plane survey (e.g., Sugizaki et al. (1997); Torii et al. (1998)).
###### Acknowledgements.
We wish to thank the referee, Toshiaki Takeshima, for the precise and detailed comments which improved the quality of the paper. We thank, also, the BeppoSAX Scientific Data Center staff for their support during the observation and data analysis. SP acknowledges support from CNR PhD grant. This research has been partially funded by the Italian Space Agency.
# A 695-Hz quasi-periodic oscillation in the low-mass X-ray binary EXO 0748–676
## 1 Introduction
High frequency (kHz) quasi-periodic oscillations (QPOs) have been found in many neutron-star low-mass X-ray binaries (see van der Klis 2000 for a recent review). They are observed in the 300–1300 Hz range, and are often found in pairs with a nearly constant frequency separation of $``$250–350 Hz. In addition to kHz QPOs, some sources have shown slightly drifting oscillations in the 330–590 Hz range, during type-I X-ray bursts (Strohmayer, Swank, & Zhang 1998).
In this paper we present our search for both kHz QPOs and burst oscillations in the low-mass X-ray binary EXO 0748–676. This source shows periodic (P=3.82 hr) eclipses, irregular intensity dips, and type-I X-ray bursts (Parmar et al. 1986). From the eclipse duration a source inclination of 75° to 82° was derived (Parmar et al. 1986). Based on its bursting behavior (e.g. burst rate and peak flux vs. persistent flux; see Gottwald et al. 1986) EXO 0748–676 may be a member of the atoll class (Hasinger & van der Klis 1989) of the neutron-star low-mass X-ray binaries. Recently, a variable 0.58–2.44 Hz QPO was found by Homan et al. (1999). This QPO was found in all observations, except in the only observation taken with the Rossi X-ray Timing Explorer (RXTE) during an outburst of the source (early 1996), and is probably caused by an orbiting structure in the accretion disk, which modulates the radiation of the central source (Jonker et al. 1999; Homan et al. 1999).
## 2 Observations and Analysis
The data used in this paper were obtained with the Proportional Counter Array (PCA; Jahoda et al. 1996) onboard RXTE (Bradt, Rothschild, & Swank 1993), between March 12 1996 and October 11 1998. Most PCA observations were done in sets of five or six $``$2 ks segments that were centered around successive eclipses. Data of these five or six segments were taken together and treated as single observations, which resulted in a total of 20 observations. The times of the observations are indicated in Figure 1, which also shows the RXTE All Sky Monitor (ASM) light curve of EXO 0748–676. As can be seen, only the first observation was done during the early 1996 outburst of the source.
The PCA data were obtained in several different modes. The Standard 1 and 2 modes, which were always active, had 1/8 s time resolution in one energy channel (1–60 keV, representing the full energy range covered by the PCA), and 16 s time resolution in 129 energy bands (1–60 keV), respectively. In addition to the two Standard modes, another mode was always active which had a time resolution better than 1/8192 s in at least 32 energy channels (1–60 keV).
The Standard 2 data were used to produce light curves and hard color curves. The hard color was defined as the ratio of the count rates in the 6.3–11.7 keV and 5.2–6.3 keV bands; the light curves were produced in the 1.6–14.4 keV band, i.e. the energy band in which, during the first observation, the count rate spectrum of the source exceeded that of the background. Using the high time resolution data, 0.0625–2048 Hz power spectra were created in several energy bands to search for kHz QPOs. The power spectra were selected on time, count rate, or hardness, before they were averaged. The average power spectrum was rms renormalized (van der Klis 1995), and fitted (in the 100-1500 Hz range) with a constant for the Poisson level, and a Lorentzian for any QPO (only one QPO was found). Errors on the fit parameters were determined using $`\mathrm{\Delta }\chi ^2=1`$ (1$`\sigma `$, single parameter). As significance of each power spectral feature we quote the inverse relative error on the integrated power of each feature, as measured from the power spectrum. Note that the inverse relative error on the fractional amplitude (the parameter we give) is larger than the true significance by a factor 2. The energy dependence of the QPO was determined by fixing the frequency and width of the QPO to their values obtained in the band where the QPO was most significant (6.6–18.7 keV). Upper limits on kHz QPOs were determined in the 100–1500 Hz range by fixing the width of the QPO to 10, 20, 50 or 100 Hz, and using $`\mathrm{\Delta }\chi ^2=2.71`$ (95% confidence). Upper limits were only determined in the total (1–60 keV) band, and in the band where the detected QPO was most significant. To determine upper limits for oscillations in the type-I X-ray bursts, 2–1024 Hz power spectra were created. The width of the QPO was fixed to 2 Hz.
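The Lorentzian-plus-constant fit can be sketched as follows (starting values are illustrative; the rms²/Hz normalization is assumed, so that the fractional amplitude of the QPO is the square root of the integrated Lorentzian power):

```python
import numpy as np
from scipy.optimize import curve_fit

def psd_model(f, noise, r2, f0, fwhm):
    # Constant (Poisson) level plus a Lorentzian whose integral over
    # all frequencies equals r2; in the rms^2/Hz normalization the
    # fractional amplitude of the QPO is then sqrt(r2).
    return noise + r2 * (fwhm / (2 * np.pi)) / ((f - f0)**2 + (fwhm / 2)**2)

# Fit in the 100-1500 Hz range, e.g. with initial guesses near the peak:
# popt, pcov = curve_fit(psd_model, f, p, p0=[p.mean(), 3e-3, 700.0, 10.0])
# rms = np.sqrt(popt[1])
```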
## 3 Results
Our search for kHz QPOs in EXO 0748–676 resulted in only one significant detection: a $``$695 Hz QPO was found in the 1996 March 12 observation, the only observation done during the early 1996 outburst. In the 6.6–18.7 keV band, where it was found to be most significant, it had a frequency of 695.0$`\pm `$1.2 Hz, a full-width-at-half-maximum (FWHM) of $`14_{-3}^{+4}`$ Hz, and an rms amplitude of $`15.2_{-1.3}^{+1.4}`$% (6.0$`\sigma `$, single trial). The 6.6–18.7 keV power spectrum of the 1996 March 12 observation is shown in Figure 2. In the 1–60 keV band the QPO had a frequency of $`693.0_{-0.8}^{+0.5}`$ Hz, a FWHM of $`3.9_{-0.4}^{+3.2}`$ Hz, and an rms amplitude of $`5.4_{-0.7}^{+1.0}`$% (3.7$`\sigma `$, single trial). The 1.6–14.4 keV light curve and the hard color curve of the 1996 March 12 observation are shown in Figure 3. In order to examine the variability of the QPO in the 6.6–18.7 keV band, selections were made on time, 1.6–14.4 keV count rate, and hard color. The selections for count rate and hard color were performed only for the data before the dip ($`t<7000`$ s). The results for the different selections are shown in Table 1. The QPO frequency increased slightly with time and perhaps count rate, but it did not depend on hard color. Note that the detection of the QPO in the dip, at 708 Hz, is only at a 1.8$`\sigma `$ significance level (in the 6.6–18.7 keV band). We made some subselections on the last time interval (which includes the dip) to test whether the QPO amplitude varied with decreasing count rate; only upper limits could be determined for those subselections, with values higher than those obtained for the whole selection. The energy dependence of the QPO, in the part before the dip, was determined using four energy bands and is shown in Figure 4. The QPO energy spectrum is rather steep, with an upper limit of 3.6% in the lowest energy band and an rms amplitude of 18% in the highest energy band. For comparison we also plotted the energy dependence of the lower and upper kHz QPOs of 4U 1608–52 (Berger et al. 1996; Méndez et al. 1998; Méndez et al. 2000).
Upper limits (6.6–18.7 keV) on any second kHz QPO (as often observed in other low-mass X-ray binaries) were determined in the 100–1500 Hz range. They were 9.5%, 10.5%, 14.3%, and 16.7% rms, for fixed widths of 10 Hz, 20 Hz, 50 Hz, and 100 Hz, respectively. Since the frequency of the QPO varied little, the “shift and add” method (Méndez et al. 1998) could not usefully be applied. No $``$1 Hz QPO was found either, as was already reported by Homan et al. (1999). Upper limits (1–60 keV) are $``$7% rms in the 0.001–1 Hz range, $``$2% rms in the 1–10 Hz range, and $``$4% rms in the 10–50 Hz range.
For the other observations only upper limits to the presence of a kHz QPO could be determined. This was done in the 1–60 keV and 6.6–18.7 keV bands. The upper limits are given in Table 2, for four different fixed widths. Most of the upper limits are comparable to or larger than the values found for the QPO in the 1996 March 12 observation, and are therefore not very constraining. Note that the count rate in these observations was a factor 2 to 3 smaller than that during the 1996 March 12 observation, resulting in a greatly reduced sensitivity. A kHz QPO similar to the one seen there would, if present, have been significant only at the $`1\sigma `$ level. In all these observations a $``$1 Hz QPO was found, with a frequency between $``$0.4 and $``$3 Hz.
Ten type-I X-ray bursts were observed, and they were examined for the presence of burst oscillations. This was done in the 100–1000 Hz frequency range, for a fixed width of 2 Hz. None were found, with upper limits during the rise of the bursts between 4% and 11% rms in the 1–60 keV band, and between 6% and 14% rms in the 6.6–18.7 keV band. This is well below the amplitudes of burst oscillations observed in some other sources (e.g. Strohmayer et al. (1998); see also van der Klis 2000 and references therein). It should be noted that the rise times of the bursts were rather long, between 2 and 12 s. This could be an indication of the presence of a scattering medium surrounding the neutron star, which might wash out the rapid burst oscillations. The $``$1 Hz QPO was observed in all the bursts (see also Homan et al. 1999).
## 4 Discussion
The properties of the 695 Hz QPO are similar to those of the kHz QPOs in atoll sources; the QPO is relatively narrow (5–18 Hz) and has an rms amplitude of $``$6.5% (1–60 keV, outside the dip). Since only a single peak is observed, we can not tell whether it corresponds to the lower or the upper peak of a kHz QPO pair. However, comparison with kHz QPOs in atoll sources (see van der Klis 2000) suggests that the observed QPO is the lower QPO of a kHz pair, for the following reasons: (1) of the 11 kHz QPO pairs found in atoll sources, 8 have ranges of lower peak frequencies that include 695 Hz, which is the case for only 3 of the upper peaks. (2) The upper peaks in atoll sources have widths in the 50–200 Hz range, although occasionally peaks with widths of only 10 to 20 Hz have been observed. On the other hand, the 4–18 Hz width we find is much more common for lower peaks. (3) When comparing the energy dependence of the QPO with that of the two kHz peaks in 4U 1608–52, which have rather different energy dependencies (Berger et al. 1996; Méndez et al. 1998; Méndez et al. 2000), we find that it was very similar (i.e. steep) to that of the lower peak (see Figure 4). Hence three of the QPO properties hint towards the QPO being the lower peak.
The properties of the QPO varied on a time scale of a few $`10^3`$ s, as can be seen from Table 1. Comparing the first time selection with the second, one can see that a relatively small frequency change is accompanied by a factor 3 (2$`\sigma `$) increase in width, and an almost 50% (2$`\sigma `$) increase in fractional rms amplitude.
The other source in which only a single kHz QPO has been observed is XTE J1723–376 (Marshall & Markwardt 1999). However, most sources in which kHz QPO pairs have been found, have at times also shown single kHz QPOs. The fact that EXO 0748–676 and XTE J1723–376 have only shown single kHz QPOs is therefore most likely a matter of a small amount of data and coincidence.
With $`i=75^{\circ }`$–$`82^{\circ }`$, EXO 0748–676 is probably the source with the highest inclination angle of the $``$20 sources that have shown kHz QPOs. Twin kHz QPOs were already found in 4U 1915-05 (Barret et al. 1997, 2000; Boirin et al. 2000), a source that also shows dips (but no eclipses, which for a similar mass ratio would imply a lower inclination than EXO 0748–676). The fact that kHz QPOs are found over a large range of inclinations means that the radiation modulated by the kHz QPO mechanism should to a large extent be isotropic. The kHz QPO was detected during the dip, but at a significance of only 1.8$`\sigma `$. This means that with $``$90% confidence we can say that either the source producing the kHz QPO was not fully covered by the dipping material, or that a considerable amount of the modulated radiation went through the dipping material unperturbed, indicating that it has a scattering optical depth of at most a few. Also, the fact that the rms amplitude changes only a little in the dip suggests that the kHz QPO and the bulk of the flux are produced at the same site.
The outburst of EXO 0748–676 in early 1996 (see Fig. 1) may have been a transition from the island state to the banana state, and back, as is common for atoll sources. In addition to the increase in count rate, there are several power spectral properties that seem to confirm this idea: (1) The strength of the 0.1–1.0 Hz noise during the outburst was lower than in the non-outburst observations (see Homan et al. 1999). Most atoll sources show a decrease of the noise strength when they move from the island to the banana state (Hasinger & van der Klis 1989). (2) The $``$1 Hz QPO was not observed during the only outburst observation. In 4U 1746–37, one of the other two sources where a similar $``$1 Hz QPO was found, the QPO was observed only in the island state, and not in the banana state (Jonker et al. 2000). (3) Although there are a few exceptions, in most atoll sources kHz QPOs are found only in the lower banana state (van der Klis 2000).
We find the kHz QPO in the only observation where the $``$1 Hz QPO was absent. The $``$1 Hz QPO is thought to be due to obscuration of the central source by an orbiting structure in the accretion disk at a distance of $``$1000 km from the central source (Jonker et al. 1999; Homan et al. 1999). It is interesting to see that in two of the sources where the $``$1 Hz QPOs are found, they are not observed in the banana state, indicating a change in the accretion disk geometry (at least in the area where the $``$1 Hz QPO is formed). This, together with the fact that in most atoll sources kHz QPOs are only found in the banana state, suggests that changes in the accretion disk geometry (at $``$1000 km from the central source) may affect the production of kHz QPOs close to the central source.
The authors would like to thank Mariano Méndez and Peter Jonker for their help and stimulating discussions. This work was supported by NWO Spinoza grant 08-0 to E.P.J. van den Heuvel, by the Netherlands Organisation for Scientific Research (NWO) under contract number 614-51-002, and by the Netherlands Research School for Astronomy (NOVA). This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
## 1 Introduction
One of the generic properties of a second-order phase transition is clustering. In the case of a magnetic system, it means that there are regions of all sizes in which spins point in the same direction, and that the probability of having a cluster of a certain size satisfies a scaling law. The implication of those properties for the quark-hadron phase transition (PT) is that hadronization does not occur uniformly at the critical temperature $`(T_c)`$. At any instant during the entire course of the hadronization process of a quark-gluon plasma system, there are then clusters of hadrons separated by regions of no hadrons. In the case of heavy-ion collisions it is the surface of the expanding cylinder that is around $`T_c`$, and the clusters appear on the two-dimensional cylindrical surface. We call the non-hadronic regions between the clusters voids. The detection of voids would be a signature of the second-order PT. In this paper we study the properties of the voids and discuss how to identify them in heavy-ion experiments.
It should be understood that voids are not frozen on the plasma surface for all times. If one could take instantaneous pictures of the cylindrical surface at time intervals of 1 fm/c, one would see the clustering patterns fluctuate from picture to picture. Simulation on a 2D lattice using the Ising model shows that at $`T_c`$ the clustering patterns differ from one configuration to another. An example of the clusters of hadrons formed can be seen in Fig. 2 of Ref. . There are regions of high hadron density, low hadron density, and no hadrons. A region without hadrons (i.e. a void) consists of quarks and gluons in the deconfined state at a particular instant in time; they are likely to form hadrons a little later in the evolution process. The reason for the voids to exist is that at the critical point the system is torn between being in the ordered state of confinement and the disordered state of deconfinement. It is this tension of coexistence of the two states at $`T_c`$ that is responsible for the many interesting behaviors of critical phenomena . Thus if the plasma volume created in a heavy-ion collision is hot in the interior and cools to $`T_c`$ on the surface, a second-order PT, which is assumed here, would imply that the hadrons are not produced smoothly in time or space. Whereas hadronic clusters in the $`\eta `$-$`\varphi `$ space may be hard to quantify, voids are relatively easy to define, though not trivially. Our first problem will be to characterize the voids. To identify them in the theoretical laboratory is, however, much simpler than to find them in the experimental data. The latter problem will also be addressed in this paper.
A simple way to appreciate voids is to examine the exclusive distribution in rapidity space for a hadronic collision process. Since the rapidity of each produced particle can be precisely determined, one can calculate the rapidity gaps between each pair of neighboring particles. Those rapidity gaps are the 1D version of the voids in 2D. An extensive consideration of the properties of gaps in hadronic processes is given in Ref. . Although it is easy to gain a mental picture of gaps in hadronic collisions, it is nontrivial to define a measure of voids in heavy-ion collisions, in which the hadronization process extends over a long period of time.
To gain some physical insight into the fluctuation phenomenon in particle production at the critical point, it is helpful to consider the production of photons at the threshold of lasing. The physics of single-mode lasers being completely understood, their operation near the threshold is known to behave as in a second-order PT . When the pump parameter is set at the threshold of lasing, the system does not continuously produce photons as a function of time. The photons are produced in spurts with gaps of quiescence between spurts. Such fluctuations have been measured and the results in terms of factorial moments agree with the prediction based on the Ginzburg-Landau description of PT . Those gaps in the time series in the photo-count problem are similar to the voids in the hadron-count problem in heavy-ion collisions. Since lattice QCD cannot be applied efficiently to the simulation of spatial patterns in the PT problems, we make use of the universal features of the Ginzburg-Landau theory and, in particular, employ the 2D Ising model to simulate the hadronic clusters in the $`\eta `$-$`\varphi `$ plane. This approach to the problem was initiated in , where the scaling properties of cluster production were examined. More recently, an effort was made to find observable critical behavior in quark-hadron PT . Here, we use the same formalism to study the properties of voids and search for observable measures that can signal a PT in heavy-ion collisions.
## 2 Hadron Production in the Ising Model
It is known through studies in lattice QCD that the nature of the phase transition depends on the number of flavors and the quark masses . For $`m_u=m_d=0`$, the PT is first-order for low $`m_s`$, but second-order at high $`m_s`$. For nonzero $`m_u`$ and $`m_d`$, the former region remains first-order (but for even smaller values of $`m_s`$), while the latter region of high $`m_s`$ becomes a cross-over. The two regions are separated by a phase boundary that is second-order and has possibly the Ising critical exponents . For realistic quark masses we are probably in the region of the cross-over, which is what we shall assume in this study.
A cross-over means that no calculable or measurable quantities and their derivatives undergo any discontinuity as $`T`$ is varied across $`T_c`$. In the 2D Ising model the situation is like having a small external magnetic field so that the average magnetization of the system varies smoothly across $`T_c`$. The phase diagrams of the order parameter versus $`T`$ are very similar for the Ising model and the QCD problem. Since lattice QCD is so much more difficult to study compared to the Ising model, we shall hereafter concentrate only on the use of the Ising model to simulate hadron production.
Following the formalism already described in Refs. , let us briefly summarize how the hadron density is defined on the Ising lattice in 2D. For a lattice of size $`L\times L`$ with each site having spin $`\sigma _j`$, we define the spin aligned along the overall magnetization $`m_L=\sum _{jL^2}\sigma _j`$ by
$`s_j=sgn(m_L)\sigma _j`$ (1)
where $`sgn(m_L)`$ stands for the sign of $`m_L`$. We then define the spin of a cell of size $`ϵ\times ϵ`$ at location $`i`$ by
$`c_i={\displaystyle \sum _{jA_i}}s_j`$ (2)
where $`A_i`$ is the cell block of $`ϵ^2`$ sites at $`i`$. Since $`c_i`$ sums over all site spins in a cell, it should approach zero at high $`T`$, for which the system is in a disordered state, and should approach $`ϵ^2`$ as $`T0`$, for which the system is in an ordered state. Note that even in the absence of an external magnetic field, $`c_i`$ approaches +$`ϵ^2`$ at low $`T`$, never $`ϵ^2`$, because of our definition of $`s_j`$. Thus unlike the average magnetization $`m_L`$ of the usual Ising model without external field, which is zero for all $`T>T_c`$, the average cell spin $`c_i`$ varies smoothly from high to low $`T`$, similar to the behavior of $`m_L`$ in the presence of an external field.
The hadron density, being proportional to the order parameter, can now be defined as
$`\rho _i=\lambda c_i^2\theta (c_i),`$ (3)
where $`\lambda `$ is an unspecified factor relating the lattice spins to the number of particles in a cell. In any configuration $`c_i`$ may still fluctuate from cell to cell. We identify only the positive $`c_i`$ cells with hadron formation, and associate the hadron density with $`c_i^2`$, just as the order parameter in the Ginzburg-Landau formalism is associated with the square of the Ising spins . If $`\rho `$ denotes the average density, i.e., $`\rho _i`$ averaged over all cells on the lattice and over all configurations, then the dependence of $`\rho `$ on $`T`$ is as shown in Fig. 1. It is typical of a cross-over, shown in Fig. 1 of Ref. for small quark masses. The essence of that dependence is that $`\rho `$ decreases precipitously but smoothly, as $`T`$ is increased across $`T_c`$, but remains nonzero for a range of $`T`$ above $`T_c`$. Evidently, we have succeeded in simulating a cross-over without the explicit introduction of an external field in the Ising Hamiltonian
$`H=-J{\displaystyle \sum _{ij}}\sigma _i\sigma _j.`$ (4)
Since the dependence of $`\rho `$ on $`T`$ is smooth, it is a nontrivial problem to determine the precise value of $`T_c`$. That has been done in Ref. by examining the scaling behavior of the normalized factorial moments $`F_q`$. It is found that $`F_q`$ behaves as $`M^{\phi _q}`$ only at $`T=2.315`$ (in units of $`J/k_B`$), where $`M`$ is the number of bins on the lattice. Since the critical point is characterized by the formation of clusters of all sizes in a scale independent way, we identify the critical temperature at $`T_c=2.315`$. Indeed, it has been shown in Ref. that in the neighborhood of $`T_c`$ with $`T<T_c`$, $`\rho `$ behaves as
$`\rho -\rho _c\left(T_c-T\right)^\eta ,\eta =1.67,`$ (5)
where $`\rho _c`$ is $`\rho `$ at $`T_c`$. There exist other measures that exhibit critical behaviors, $`\left(T_c-T\right)^\zeta `$, with negative exponents. They are discussed in .
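For illustration, the construction of Eqs. (1)–(3) can be coded in a few lines; the sketch below assumes a spin configuration produced by some standard Ising sampler (e.g. the Wolff algorithm used later in the text) and is not the original simulation code:

```python
import numpy as np

L, eps = 256, 4       # lattice and cell sizes used in the text

def hadron_density(sigma, lam=1.0):
    # Eqs. (1)-(3): align spins with the overall magnetization,
    # block-sum them into eps x eps cells, and assign lam*c^2 to
    # cells with positive cell spin c (zero otherwise).
    s = np.sign(sigma.sum()) * sigma
    c = s.reshape(L // eps, eps, L // eps, eps).sum(axis=(1, 3))
    return lam * np.where(c > 0, c ** 2, 0.0)
```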
## 3 Scaling Behavior of Voids
We choose to work with a lattice having $`L=256`$ and cells having $`ϵ=4`$. Thus the total number of cells on the lattice is $`N_c=(L/ϵ)^2=64^2`$. We divide the lattice into bins of size $`\delta ^2`$ so that each bin can contain $`\nu =(\delta /ϵ)^2`$ cells and the lattice can contain $`M=(L/\delta )^2`$ bins. The average density of hadrons in a bin is therefore
$`\overline{\rho }_b={\displaystyle \frac{1}{\nu }}{\displaystyle \sum _{i=1}^{\nu }}\rho _i,`$ (6)
where $`b`$ denotes the $`b\mathrm{th}`$ bin. Near $`T_c`$, $`\overline{\rho }_b`$ fluctuates from bin to bin, especially for small $`\delta `$. We define a bin to be “empty” when
$`\overline{\rho }_b<\rho _0,`$ (7)
where $`\rho _0`$ is a floor level greater than zero. This criterion is chosen to eliminate the effect of small fluctuations on gross behavior. That is, for the purpose of defining a void, hadron clusters are counted only when the hadron density is above a threshold $`\rho _0`$. Bins with very low hadron density, i.e., where (7) holds, are then regarded as empty. A void is a contiguous collection of empty bins. Fig. 2 illustrates a pattern of voids in a configuration generated at $`T_c`$ for $`M=24^2`$ and $`\rho _0=20`$ (in units of $`\lambda `$). An open square indicates an empty bin, while a black square contains hadrons with $`\overline{\rho }_b\rho _0`$. In that configuration there are 26 voids, the sizes of which are 76, 35, 9, 8, 7, $`\mathrm{}`$ in descending order. It should be recognized that the maximum hadron density that a cell can have is $`\rho _{\mathrm{max}}=\left(ϵ^2\right)^2=256`$, so $`\rho _0=20`$ represents a threshold that is less than 8% of the maximum.
Let $`V_k`$ be the size of the $`k\mathrm{th}`$ void (in units of bins). That is, let
$`V_k={\displaystyle \sum _{bk}}\theta \left(\rho _0-\overline{\rho }_b\right),`$ (8)
where the sum runs over all empty bins that are connected to one another by at least one side; $`k`$ simply labels a particular void. We can then define $`x_k`$ to be the fraction of bins on the lattice that the $`k\mathrm{th}`$ void occupies:
$`x_k=V_k/M.`$ (9)
For each configuration we thus have a set $`𝒮=\{x_1,x_2,\mathrm{}\}`$ of void fractions that characterizes the spatial pattern.
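Identifying the voids of Eqs. (8)–(9) amounts to a connected-component search over the empty bins; a possible implementation (side-connectivity, matching the definition above):

```python
import numpy as np
from scipy.ndimage import label

def void_fractions(rho_cells, nu_side, rho0=20.0):
    # Average the cell densities over delta x delta bins (nu_side cells
    # per bin side), mark bins below rho0 as empty (Eq. (7)), and return
    # the fractions x_k = V_k / M of the side-connected empty regions.
    n = rho_cells.shape[0] // nu_side
    rho_bin = rho_cells.reshape(n, nu_side, n, nu_side).mean(axis=(1, 3))
    labels, m = label(rho_bin < rho0)     # default: 4-connectivity
    M = n * n
    return np.array([(labels == k).sum() / M for k in range(1, m + 1)])
```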
Since the pattern fluctuates from configuration to configuration, $`𝒮`$ cannot be used to compare patterns in an efficient way. For a good measure to facilitate the comparison, let us first define the moments $`g_q`$ for each configuration
$`g_q={\displaystyle \frac{1}{m}}{\displaystyle \sum _{k=1}^{m}}x_k^q,`$ (10)
where the sum is over all voids in the configuration, and $`m`$ denotes the total number of voids. We then define the normalized $`G`$ moments
$`G_q=g_q/g_1^q,`$ (11)
which depends not only on the order $`q`$, but also on the total number of bins $`M`$. Thus by definition $`G_0=G_1=1`$. This $`G_q`$ is defined in the same spirit as that in for rapidity gaps, but they are not identical because the $`x_k`$ here for voids do not satisfy any sum rule. It is also unrelated to the $`G`$ moments defined earlier for fractal analysis. Now, $`G_q`$ as defined in Eq. (11) is a number for every configuration for chosen values of $`q`$ and $`M`$. With $`q`$ and $`M`$ fixed, $`G_q`$ fluctuates from configuration to configuration and is our quantitative measure of the void patterns, which in turn are the characteristic features of phase transition.
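From the void fractions of a single configuration, the moments of Eqs. (10)–(11) follow immediately; a sketch, with the ensemble averages used in the following sections indicated in the comments:

```python
import numpy as np

def G_moments(x, qs=range(2, 9)):
    # Eqs. (10)-(11): g_q is the mean of x_k^q over the voids of one
    # configuration; G_q = g_q / g_1^q is the normalized moment.
    g1 = x.mean()
    return np.array([(x ** q).mean() / g1 ** q for q in qs])

# With Gq_all of shape (n_configs, n_q) collected over the ensemble:
#   G_mean = Gq_all.mean(axis=0)                      # <G_q>
#   S_q    = (Gq_all * np.log(Gq_all)).mean(axis=0)   # <G_q ln G_q>
# log-log fits of these against M yield gamma_q and sigma_q.
```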
In Fig. 3 we show the probability distribution of $`G_q`$ for $`q=6`$, $`M=36^2`$ and $`\rho _0=20`$ at three different values of $`T`$. The Wolff algorithm has been used in the Monte Carlo simulation to reduce the correlation between configurations . The distribution in Fig. 3 and other quantities to be calculated below are the results obtained using $`5\times 10^3`$ uncorrelated configurations. Since the value of $`G_q`$ fluctuates widely from configuration to configuration, covering a range that exceeds 3 orders of magnitude, we have plotted the distribution in $`\mathrm{ln}G_q`$. Since both the mean and the dispersion of $`\mathrm{ln}G_q`$ shown in Fig. 3 vary significantly with $`T`$, and with $`q`$ and $`M`$ not shown in Fig. 3, it is necessary for us to search for simple regularities in the nature of the fluctuations of $`G_q`$.
Our first step in that search is to study the $`M`$ dependence of the average of $`G_q`$ over all configurations, i.e.,
$`\langle G_q\rangle ={\displaystyle \frac{1}{𝒩}}{\displaystyle \sum _{e=1}^{𝒩}}G_q^{(e)},`$ (12)
where the superscript $`(e)`$ denotes the $`e\mathrm{th}`$ event (or configuration) and $`𝒩`$ is the total number of events. In Fig. 4 we show $`G_q`$ versus $`M`$ in a log-log plot for $`T=T_c`$, $`2q8`$, and $`\rho _0=20`$. We find very good linear behavior; consequently, we may write
$`\langle G_q\rangle M^{\gamma _q}.`$ (13)
This scaling behavior implies that voids of all sizes occur at the PT. Since the moments at different $`q`$ are highly correlated, we expect the scaling exponent $`\gamma _q`$ to depend on $`q`$ in some simple way. Fig. 5 shows the dependence of $`\gamma _q`$ on $`q`$, and we find remarkable linearity. Thus we may write
$`\gamma _q=c_0+cq,`$ (14)
where $`c=0.8`$. There is no obvious reason why the $`q`$-dependence of $`\gamma _q`$ should be so simple. We should regard (14) only as a convenient parameterization of $`\gamma _q`$ that allows us to focus on $`c`$ as a numerical description of the scaling behavior of the voids at PT.
It is evident from Fig. 3 that studying the behavior of $`G_q`$ extracts only a limited amount of information about the distribution $`P(G_q)`$. The fluctuation of $`G_q`$ from event to event can be quantified by the various moments of $`G_q`$
$`C_{p,q}={\displaystyle \frac{1}{𝒩}}{\displaystyle \sum _{e=1}^{𝒩}}\left(G_q^{(e)}\right)^p={\displaystyle \int 𝑑G_qG_q^pP(G_q)},`$ (15)
among which $`\langle G_q\rangle `$ corresponds only to $`C_{1,q}`$. Instead of examining a collection of $`C_{p,q}`$ for various values of $`p`$, we consider the derivative of $`C_{p,q}`$ at $`p=1`$, and define
$`S_q={\displaystyle \frac{d}{dp}}C_{p,q}|_{_{p=1}}=\langle G_q\mathrm{ln}G_q\rangle ,`$ (16)
where $`\langle \mathrm{}\rangle `$ stands for averaging over all configurations. Despite its appearance, $`S_q`$ is not entropy, but is a measure of the fluctuations of $`G_q`$, when compared with $`\langle G_q\rangle \mathrm{ln}\langle G_q\rangle `$.
In Fig. 6 we show the power-law behavior of $`S_q`$ at $`T_c`$
$`S_qM^{\sigma _q},`$ (17)
where the scaling exponents $`\sigma _q`$ are the slopes of the straight lines in the figure. Because of Eq.(13), $`\langle G_q\rangle \mathrm{ln}\langle G_q\rangle `$ is not power behaved, so $`S_q-\langle G_q\rangle \mathrm{ln}\langle G_q\rangle `$ would not have a scaling behavior as in Eq.(17). For that reason we focus on the simple properties of $`S_q`$. The dependence of $`\sigma _q`$ on $`q`$ is shown in Fig. 7, where a remarkable linear behavior is found. We use the parameterization
$`\sigma _q=s_0+sq`$ (18)
and find $`s=0.76`$.
Among the quantities that are under our control in the analysis, we have studied the dependences on $`M`$ and $`q`$. The remaining such quantity is $`\rho _0`$, while $`T`$ is beyond experimental control, although it can be varied in the simulation. We now study the dependence of $`c`$ and $`s`$ on $`\rho _0`$ and $`T`$, which is shown in Figs. 8 and 9. Evidently, at higher values of $`\rho _0`$ the dependences on $`T`$ are more pronounced than at $`\rho _0=20`$. Similar behavior has been found for $`c_0`$ and $`s_0`$, defined in Eqs. (14) and (18). This result is very interesting, and provides a possible avenue toward learning more about the nature of the PT in a realistic heavy-ion experiment.
If a quark-gluon plasma is formed in a heavy-ion collision, the expanding cylinder has high $`T`$ in the interior and low $`T`$ on the surface. Our modeling has been to investigate the properties of hadronization on the surface. Since the PT is a smooth cross-over in the neighborhood of $`T_c`$, and also since hydrodynamical flow can lead to local fluctuations in temperature and radial velocity on the surface, it is realistic to expect the hadrons to form in a small range of $`T`$ around $`T_c`$, a possibility that cannot be controlled experimentally nor excluded theoretically. To learn whether the hadronization takes place over a range of $`T`$, we suggest on the basis of Figs. 8 and 9 to use $`\rho _0`$ as a device to probe the properties of the PT.
In an analysis of the experimental data one can use a phenomenological density threshold to play the role of $`\rho _0`$ and vary it in the determination of the patterns of voids. From our study we have learned that for a wide range of $`\rho _0`$ the values of $`c`$ and $`s`$ are not independent of $`T`$. It means that if the PT occurs over a range of $`T`$ so that the hadrons detected are formed at various $`T`$ around $`T_c`$ even within one event, then for $`\rho _0`$ in that range of nonuniform $`c`$ and $`s`$ the dependences of $`G_q`$ and $`S_q`$ on $`M`$ would not exhibit simple power-law behavior as in Figs. 4 and 6, since nonuniform values of $`\gamma _q`$ and $`\sigma _q`$ for any $`q`$ would lead to $`M`$-dependences that are not simple power-laws. If, by varying $`\rho _0`$ to a point that corresponds to 20 in this study, simple scaling behaviors can be found for $`G_q`$ and $`S_q`$ with $`c`$ and $`s`$ having roughly the values 0.8 and 0.76 respectively, then we can be assured that the PT is a cross-over of the type that we have studied here in the Ising model. The signature for having a unique $`T_c`$ for hadronization is that the scaling behavior persists at any $`\rho _0`$ and the lowest values of $`c`$ and $`s`$ are at 0.7 and 0.67, respectively, when $`\rho _0=50`$.
## 4 Configuration Mixing
The possibility of hadronization occurring at a narrow range of temperatures discussed at the end of the previous section is not the only complication that may occur in a heavy-ion collision process. A configuration that we simulate on the Ising lattice corresponds to the cluster and void pattern of one instant (with uncertainty 1 fm/c) in the hadronization history of a plasma cylinder. The final state of a collision process registered at the detector is a collection of all the particles produced throughout the whole evolution process in excess of 10 fm/c. The clusters and voids produced at different times overlap one another in the integration process, resulting in a smooth spatial distribution. Thus to identify the clustering patterns in an experiment, it is necessary to make cuts in the phase space. Since $`\eta `$ and $`\varphi `$ are needed to exhibit the 2D patterns, $`p_T`$ is the only remaining variable in which cuts can be made. Since there is some correlation between $`p_T`$ and the evolution time, the selection of particles having their $`p_T`$ lying in a very narrow interval, $`\mathrm{\Delta }p_T`$, has the effect of selecting a small interval, $`\mathrm{\Delta }\tau `$, in evolution time . However, the correspondence is not one-to-one. In every $`\mathrm{\Delta }p_T`$ interval, patches of hadrons produced at neighboring times can contribute. That is what we mean by configuration mixing: the experimental configuration detected in a small $`\mathrm{\Delta }p_T`$ interval may be a mixture of parts of configurations produced at different times, the latter being the pure configurations that we simulate on the Ising lattice.
To simulate a mixed configuration, we make the following choice for definiteness. We divide the lattice into four quadrants. In each of the quadrants we place the corresponding quadrant of a new and independent configuration so that the mixed configuration consists of four parts of four configurations. On $`5\times 10^3`$ such mixed configurations we then performed the same analysis as in the preceding section. The results are summarized in Figs. 10 and 11 where, for comparison, the straight lines are reproduced from those in Figs. 5 and 7 for the pure configurations. The squares are the results for $`\gamma _q`$ and $`\sigma _q`$ calculated from the mixed configurations. Evidently, configuration mixing does not introduce any discernible deviation from the results of the pure configurations. The agreement being so good, we see no point in trying out other ways of mixing. The implication of the result is remarkable, but not surprising. The cluster and void patterns fluctuate so much that it does not matter whether some pieces of the patterns come from different configurations, provided that the appropriate measure of the fluctuations is extracted. What we have extracted is the scaling behavior in $`M`$. The scaling exponents are then found to be independent of the configuration mixing.
## 5 Conclusion
We have shown in this paper that the study of voids can be very fruitful in finding signals of the quark-hadron phase transition in heavy-ion collisions. The use of the 2D Ising model has been effective in simulating a cross-over in the hadronization process. $`G_q`$ moments have been defined to quantify the dependence of the voids on the bin sizes. The scaling behavior that has been found provides an efficient way to use the scaling exponents $`\gamma _q`$ and $`\sigma _q`$ to characterize the properties of the phase transition.
The temperature at which hadronization occurs is not under experimental control. We have found a way to learn whether hadronization occurs over a range of $`T`$ or at a unique $`T`$. That is achieved by varying the density threshold $`\rho _0`$. A bin whose average density is $`<\rho _0`$ is identified as belonging to a void. The quantity $`\rho _0`$ is under the control of the analyst of the experimental data. If by varying $`\rho _0`$ one finds that a scaling behavior can be tuned out, i.e. the power-law dependence on $`M`$ becomes invalid for a range of $`\rho _0`$, then hadronization does not occur at a unique $`T`$. On the other hand, if scaling remains manifest for a range of $`\rho _0`$, then there is only one temperature at which hadrons are formed.
Even in the case where hadronization takes place in a range of $`T`$, it is possible to tune $`\rho _0`$ to a value where strict scaling can be observed. Then $`\gamma _q`$ and $`\sigma _q`$ provide the slope parameters $`c`$ and $`s`$ that can be checked as numerical constants characteristic of the cross-over PT. Since there are no numerical inputs in our analysis, the values $`c=0.8`$ and $`s=0.76`$ are predictions in this study. Experimental verification of those numbers would, of course, lend significant support to this line of study. If the experimental numbers for $`c`$ and $`s`$ turn out not to have those values, or if there is no scaling behavior at all, one could conclude that the hadrons are formed without the system having gone through a phase transition.
We have further found that the study of the scaling behavior of the void moments has the additional virtue of being independent of configuration mixing. That property strengthens the argument that a small $`\mathrm{\Delta }p_T`$ cut in the data can provide us with a window to look into the hadronization process at a small $`\mathrm{\Delta }\tau `$ time interval where hadron clusters and voids are formed. The effect of randomization by the possible hadron gas in the final state on the scaling behavior has already been found in to be minimal. Thus, we have here a promising procedure to investigate the properties of quark-hadron phase transition that should be undertaken by the heavy-ion experiments.
### Acknowledgment
We are grateful to Z. Cao and Y.F. Wu for providing us with the original code on the Ising model. This work was supported, in part, by U. S. Department of Energy under Grant No. DE-FG03-96ER40972.
## Figure Captions
Fig. 1. Average hadron density (in units of $`\lambda `$) versus temperature (in units of $`J/k_B`$).
Fig. 2. Spatial pattern of a configuration on the 2D lattice at $`T_c`$. Open squares indicate voids and filled squares indicate hadrons, with $`\rho _0=20`$.
Fig. 3. Probability distributions of ln$`G_q`$ (with $`q=6`$) at two different $`T`$.
Fig. 4. Scaling behavior of $`\langle G_q\rangle `$ vs $`M`$ at $`T_c`$.
Fig. 5. The dependence of $`\gamma _q`$ on $`q`$.
Fig. 6. Scaling behavior of $`S_q`$ vs $`M`$ at $`T_c`$.
Fig. 7. The dependence of $`\sigma _q`$ on $`q`$.
Fig. 8. The dependences of the slope parameter $`c`$ on $`\rho _0`$ and $`T`$.
Fig. 9. The dependences of the slope parameter $`s`$ on $`\rho _0`$ and $`T`$.
Fig. 10. A comparison of $`\gamma _q`$ between the mixed configurations (squares) and the pure configurations (straight line taken from Fig. 5).
Fig. 11. A comparison of $`\sigma _q`$ between the mixed configurations (squares) and the pure configurations (straight line taken from Fig. 7).
hep-th/0003173
# Fuzzy Gravitons From Uncertain Spacetime
Miao Li
Institute of Theoretical Physics
Academia Sinica
Beijing 100080
and
Department of Physics
National Taiwan University
Taipei 106, Taiwan
[email protected]
The recently proposed remarkable mechanism explaining the “stringy exclusion principle” on an Anti de Sitter space is shown to be another beautiful manifestation of the spacetime uncertainty principle in string theory as well as in M theory. Put another way, once it is realized that a graviton of a given angular momentum is represented by a spherical brane, we deduce the maximal angular momentum directly from either the relation $`\mathrm{\Delta }t\mathrm{\Delta }x^2>l_p^3`$ in M theory or $`\mathrm{\Delta }t\mathrm{\Delta }x>\alpha ^{\prime }`$ in string theory. We also show that the result of hep-th/0003075 is similar to results on D2-branes in the $`SU(2)`$ WZW model. Using the dual D2-brane representation of a membrane, we obtain the quantization condition for the size of the membrane.
Mar. 2000
In addition to the by now well-known holographic principle, the spacetime uncertainty principle, first put forward in \[1,2\] and verified also in the D-brane dynamics , is another important principle underlying the yet mysterious grand framework of string/M theory. This was later generalized to M theory in . There are many manifestations of the uncertainty relations \[4,5,6\]. It is therefore useful to explore as much as possible the physical consequences of this principle. It has been suspected for some time by the present author that the so-called stringy exclusion principle is actually a consequence of the spacetime uncertainty principle. With a remarkable mechanism proposed in a recent paper , we will be able to show that indeed this is the case. For the purpose of uncovering the underlying structure of string/M theory, it is a good thing to reduce the number of principles.
It was already pointed out in that with increasing angular momentum of a massless graviton on $`S^n`$, the size of the graviton increases and eventually reaches the size of $`S^n`$. And the authors of emphasize rightfully that this is a manifestation of space noncommutativity on $`S^n`$. Whatever that noncommutativity is, we feel it is worthwhile to point out that this phenomenon fits very nicely with the spacetime uncertainty relations, which must hold regardless of the background. We will work with $`AdS_7\times S^4`$, $`AdS_4\times S^7`$ and $`AdS_5\times S^5`$ in that order. In the end, to lend support to the mechanism of , we show that the result of is similar to results on D-branes in a WZW model of $`SU(2)`$. To be more accurate, we will use the dual representation of a membrane in M theory as a D2-brane to rederive the result of . In addition, we derive the quantization condition for the size of the membrane.
The spacetime uncertainty relation is
$$\mathrm{\Delta }t\mathrm{\Delta }x>\alpha ^{\prime }.$$
And the version in M theory reads
$$\mathrm{\Delta }t\mathrm{\Delta }x\mathrm{\Delta }y>l_p^3,$$
where $`l_p`$ is the Planck length in M theory. The above relation asserts that in M theory any physical process will necessarily involves space uncertainty in two orthogonal directions.
The mechanism of is based on Myers’ recent observation that a D0-brane bound state in a constant field $`F^{(4)}=dC^{(3)}`$ is polarized to become a spherical membrane . In the case of $`AdS_7\times S^4`$, a spherical membrane moves on a hemi-sphere parametrized by an angle $`\varphi `$ and the membrane size $`r(0,R)`$. As shown in , the metric on the round sphere $`S^4`$ can be written as
$$ds^2=\frac{R^2}{R^2-r^2}dr^2+(R^2-r^2)d\varphi ^2+r^2d\mathrm{\Omega }_2^2,$$
where the two sphere metric $`d\mathrm{\Omega }_2^2`$ is chosen to coincide with that on the spherical membrane. To see that $`(r,\varphi )`$ parametrize a round hemi-sphere with radius $`R`$, introduce a new angle associated to $`r`$ through $`r=R\mathrm{sin}\psi `$, the above metric reads
$$ds^2=R^2(d\psi ^2+\mathrm{cos}^2\psi d\varphi ^2)+R^2\mathrm{sin}^2\psi d\mathrm{\Omega }_2^2.$$
Since the range of $`\psi `$ is $`(0,\pi /2)`$, clearly the metric on the hemi-sphere parametrized by $`(\psi ,\varphi )`$ is the round metric with radius $`R`$. At the north pole $`\psi =\pi /2`$, the two-sphere parameterizing the membrane has the maximal size $`R`$, and at the boundary of the hemi-sphere $`\psi =0`$, the size of the membrane vanishes. This is the reason why the collection of all coordinates parameterizes $`S^4`$, not $`D_2\times S^2`$. The form (1) will be used later to interpret the membrane as a D2-brane.
The authors of show that if the membrane is allowed to move along $`\varphi `$ with a fixed size $`r=R\mathrm{sin}\psi `$, the maximal allowed angular momentum $`N`$ is achieved when $`r=R`$. The mechanism for this to happen is similar to a dipole moving on a sphere $`S^2`$ with a magnetic field: There is a field strength $`F^{(4)}`$ on $`S^4`$, the membrane is electrically charged with respect to this magnetic field. Now if one half of the membrane has the orientation such that it is positively charged, then the other half of the membrane has an opposite orientation thus is negatively charged, so the whole membrane is equivalent to a dipole.
Now a spherical membrane with angular momentum $`n`$ in the $`\varphi `$ direction is an almost BPS state with energy $`n/R`$. According to the Heisenberg uncertainty relation, the uncertainty in time is given by $`\mathrm{\Delta }t=R/n`$. This graviton has size $`r`$ in two directions, so
$$\mathrm{\Delta }t\mathrm{\Delta }x^2=\frac{R}{n}r^2\le \frac{R^3}{n},$$
since the maximal size of this graviton is $`R`$. The spacetime uncertainty relation (1) implies then
$$\frac{R^3}{n}\gtrsim l_p^3,$$
or
$$n\lesssim \frac{R^3}{l_p^3}.$$
Now $`R=l_p(\pi N)^{1/3}`$ , so we have
$$n\lesssim N.$$
We have dropped a factor of $`\pi `$, since in the uncertainty relation (1) we can not take a numerical factor seriously. We see that the stringy exclusion relation $`n\lesssim N`$ is a direct consequence of the M theory spacetime uncertainty relation and the fact that in the background $`F^{(4)}`$, a graviton becomes a round membrane.
According to , the size of the membrane $`r`$ is quantized. If the analysis of in flat spacetime is directly applicable here, then we would conclude $`rn`$. We will later give an explanation of this fact using the D2-brane representation.
We have thus far ignored the movement of the graviton in the AdS part. To justify our use of spacetime uncertainty relation, we need to show that the movement in AdS does not change the kinematics drastically. Let $`z`$ be the AdS radial coordinate. We now show that if the graviton starts at $`z_0`$ with zero velocity in this direction, its acceleration toward the center of AdS is suppressed by a factor $`1/R`$. The relevant metric on AdS is
$$ds^2=\frac{z^2}{R^2}dt^2-\frac{R^2}{z^2}dz^2.$$
The graviton looks like a massive particle in AdS. So its action is
$$S=-m\int \left(\frac{z^2}{R^2}-\frac{R^2}{z^2}\dot{z}^2\right)^{1/2}𝑑t.$$
Energy conservation implies
$$\left(1-(R/z)^4\dot{z}^2\right)^{1/2}=z/z_0.$$
Thus the graviton will move toward the center of AdS. Introducing the proper time via $`d\tau =(z/R)dt`$ and the proper radial coordinate via $`d\rho =Rd\mathrm{ln}z`$, the acceleration with respect to $`\tau `$ is easily calculated using the above solution:
$$\frac{d^2\rho }{d\tau ^2}=-(z/z_0)^2\frac{1}{R}.$$
It is seen that this acceleration becomes smaller and smaller as the graviton moves toward $`z=0`$.
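This acceleration can be checked symbolically from the energy-conservation relation above; a short verification (the infalling branch of the velocity is assumed):

```python
import sympy as sp

z, z0, R = sp.symbols('z z0 R', positive=True)

# Radial velocity from energy conservation: zdot^2 = (z/R)^4 (1 - z^2/z0^2)
zdot = -(z / R)**2 * sp.sqrt(1 - z**2 / z0**2)

# Proper radial velocity: drho/dtau = (R/z) * zdot * (R/z) = R^2 zdot / z^2
drho = R**2 * zdot / z**2

# Chain rule: d^2 rho/dtau^2 = d(drho)/dz * dz/dtau, with dz/dtau = (R/z) zdot
accel = sp.simplify(sp.diff(drho, z) * (R / z) * zdot)
print(accel)    # -> -z**2/(R*z0**2), i.e. -(z/z0)^2 / R
```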
The case of $`AdS_4\times S^7`$ is quite similar. Here one postulates that a graviton is polarized to become an M5-brane. One parameterizes $`S^7`$ in a similar way as (1), namely one simply replaces $`r^2d\mathrm{\Omega }_2^2`$ in (1) by $`r^2d\mathrm{\Omega }_5^2`$, the metric on the spherical M5-brane. Again for an M5-brane with angular momentum $`n`$, the uncertainty in time is $`\mathrm{\Delta }tR/n`$. We can not directly use the uncertainty relation (1) in this case, since the M5-brane has extension in 5 spatial directions. A natural replacement of (1) seems to be $`\mathrm{\Delta }t\mathrm{\Delta }x^5>l_p^6`$. Using this and $`\mathrm{\Delta }t\mathrm{\Delta }x^5=Rr^5/n\le R^6/n`$ we find $`n<(R/l_p)^6`$. Since $`Rl_pN^{1/6}`$, we deduce $`n<N`$, again the right answer.
It remains to make the new relation $`\mathrm{\Delta }t\mathrm{\Delta }x^5>l_p^6`$ compatible with (1), since if one naively uses the data for the spherical M5-brane, one finds (1) violated. One possible way out of this paradox is to imagine that a five dimensional object consists of a stack of lower dimensional objects, so that the effective extension in two out of the five spatial directions becomes much larger than it appears. A simple example of this picture is a string behaving like a random walk, whose actual size is much larger than it appears. We believe it is one of the major future challenges to formulate a precise mathematical framework incorporating all these features of uncertainty relations. Curiously, the new relation used for M5-branes is similar to the one valid in the world-volume theory . If there is any connection between the version we used in spacetime physics and the world-volume version, we suspect that this connection is related to our remark on the microscopic structure of M5-branes in terms of more fundamental degrees of freedom.
We now turn to the case $`AdS_5\times S^5`$, where the spherical brane is a D3-brane. Since three spatial directions are involved, we again cannot directly apply (1). Note that the D3-brane is invariant under S-duality. The appropriate relation respecting S-duality is $`\mathrm{\Delta }t\mathrm{\Delta }x^3>l_p^4`$, where $`l_p`$ is the Planck length in 10 dimensions, given by $`l_p^4=g_s\alpha ^{\prime 2}`$. The spacetime uncertainty relation applied to a moving spherical D3-brane results in $`n<R^4/(g_s\alpha ^{\prime 2})`$. Now $`R^4\sim Ng_s\alpha ^{\prime 2}`$, so again we find $`n<N`$.
One may ask what happens if there is no corresponding flux on the sphere $`S^n`$. For this hypothetical background, there would be no dipole mechanism, and thus no restriction on the maximal angular momentum; the spacetime uncertainty relation would be violated. The answer is that such a background is not a consistent solution, and the spacetime uncertainty relation ought to hold for consistent backgrounds only. Indeed the original proposal of the spacetime uncertainty relation in is based on observations made in a few consistent backgrounds including AdS spaces.
We now show that the result on the giant gravitons in the case of $`AdS_7\times S^4`$ is similar to recent results on D2-branes in the $`SU(2)`$ WZW model . It is shown there that there are $`k+1`$ distinct D2-branes on the group manifold $`S^3`$ if the level is $`k`$. There is a puzzle about this result: there is no $`S^2`$ of minimal area in $`S^3`$. This puzzle was recently resolved in , where it is argued that the stabilization of a D2-brane is due to turning on an $`F`$ field flux on the D2-brane. Let us briefly recall a few details. In the large $`k`$ limit, the metric on $`S^3`$ is
$$ds^2=k\alpha ^{\prime }\left(d\psi ^2+\mathrm{sin}^2\psi d\mathrm{\Omega }_2^2\right).$$
To have a 2D CFT, there must be a $`B`$ field, and it can be written in a certain gauge as
$$B=k\alpha ^{\prime }\left(\psi -\frac{\mathrm{sin}2\psi }{2}\right)d\mathrm{\Omega }_2,$$
where $`d\mathrm{\Omega }_2`$ is the volume form on the unit $`S^2`$. In order to stabilize a D2-brane wrapped on $`S^2`$ with a constant $`\psi `$, one needs to switch on a world-volume F flux:
$$F=\frac{n}{2}d\mathrm{\Omega }_2.$$
$`n`$ is an integer, since the flux must be quantized. Now the world-volume DBI action is extremized if
$$\psi =\frac{n\pi }{k}.$$
D2-branes are interpreted as M2-branes transverse to the eleventh circle in M theory . The quantization condition on the flux $`F`$ as in (1) is simply longitudinal momentum quantization, as $`F`$ is dualized to $`\partial _tX_{11}`$. The $`B`$ field (1) is simply the $`C^{(3)}`$ field in M theory. Thus a D2-brane is interpreted as a membrane moving in the 4-manifold $`S^3\times S^1`$ with angular momentum $`n`$ along $`S^1`$. Now it is clear that the situation is similar to what is discussed in , and to a membrane moving in $`S^4`$. The stabilization of the D2-brane parallels the stabilization of a collection of $`n`$ D0-branes moving in a constant $`F^{(4)}`$ field as a spherical membrane.
The maximal angular momentum of a membrane on $`S^3\times S^1`$ is $`k`$, and at this point the membrane shrinks to a point. This is quite different from the $`S^4`$ case, where the membrane has its maximal size when the maximal angular momentum is achieved. To make this analogy work better, we need to work with the metric (1). We re-interpret $`S^4`$ as the tensor product of a hemisphere of $`S^3`$ with $`S^1`$. The former is parametrized by $`(\psi ,d\mathrm{\Omega }_2)`$, and the latter by $`\varphi `$. Now $`S^3`$ has a round metric with radius $`R`$, and $`S^1`$ has a $`\psi `$-dependent radius $`R\mathrm{cos}\psi `$, as can be seen from (1). The circle shrinks to zero at the equator of $`S^3`$: $`\psi =\pi /2`$. If we interpret $`S^1`$ as the M theory circle, then the string coupling constant vanishes there. If we take the maximal radius $`R`$ of the $`\varphi `$ circle as the canonical eleventh radius, then we have $`\alpha ^{\prime }=l_p^3/R`$. Re-interpreted in string theory, the radius of $`S^3`$ is $`R^2=(R^2/\alpha ^{\prime })\alpha ^{\prime }=N\pi \alpha ^{\prime }`$. Due to the nontrivial dilaton, the string metric on the half $`S^3`$ is
$$ds^2=-\mathrm{cos}\psi dt^2+N\pi \alpha ^{\prime }\mathrm{cos}\psi \left(d\psi ^2+\mathrm{sin}^2\psi d\mathrm{\Omega }_2^2\right),$$
with the dilaton field
$$e^{\frac{2\varphi }{3}}=\mathrm{cos}\psi .$$
Note the overall coefficient $`N\pi \alpha ^{\prime }`$ in (1) is quite different from $`k\alpha ^{\prime }`$ as in (1).
The $`C^{(3)}`$ field reads
$$C^{(3)}=\frac{N}{4\pi }\mathrm{sin}^3\psi d\varphi d\mathrm{\Omega }_2.$$
Since the relation between the $`B`$ field and $`C^{(3)}`$ is $`B_{\mu \nu }=4\pi ^2\alpha ^{\prime }C_{11\mu \nu }`$, we have
$$B=N\pi \alpha ^{\prime }\mathrm{sin}^3\psi d\mathrm{\Omega }_2.$$
To check that we have the right $`B`$ field, note that the flux $`\int dB/(2\pi \alpha ^{\prime })`$ on the half $`S^3`$ is $`2\pi N`$, the correct quantization condition.
Consider a D2-brane in the half $`S^3`$ with a fixed $`\psi `$. Again let a constant F flux be switched on as in (1). The DBI action is
$$\begin{array}{cc}\hfill S& =-T_2\int e^{-\varphi }\sqrt{det(G+B+2\pi \alpha ^{\prime }F)}\hfill \\ & =-\frac{1}{R}\int dt\mathrm{cos}^{-1}\psi \left(N^2\mathrm{sin}^4\psi -2nN\mathrm{sin}^3\psi +n^2\right)^{1/2},\hfill \end{array}$$
where we used $`T_2R=1/(4\pi ^2\alpha ^{\prime })`$. This action is extremized if
$$\left(\frac{n}{N}-\mathrm{sin}\psi \right)\left(\mathrm{sin}^3\psi -2\mathrm{sin}\psi +\frac{n}{N}\right)=0.$$
We find the solution
$$\mathrm{sin}\psi =\frac{n}{N}.$$
Indeed the allowed maximal angular momentum is $`n=N`$. We have dropped a factor $`\mathrm{cos}^{-2}\psi `$ in (1). When $`\mathrm{sin}\psi =1`$, this is a singular factor. But it is easy to see that $`\mathrm{sin}\psi =1`$ is a zero of order 3 in (1).
The static action (1) is just $`-\int dtV`$, where $`V`$ is the static potential; thus $`E=V`$. Substituting $`\mathrm{sin}\psi =n/N`$ into $`E`$ we find $`E=n/R`$. This is just the statement that the energy of the D2-brane is equal to its longitudinal momentum in the $`\varphi `$ direction, a consistency check that we indeed have the right graviton. It can be checked, though a little tediously, that at $`\mathrm{sin}\psi =n/N`$ the second derivative of $`E(\psi )`$ is positive, thus these solutions are stable:
$$\frac{d^2E(\psi )}{d\psi ^2}=\frac{1}{R}\frac{n^3}{N^2-n^2}.$$
This quantity diverges at $`n=N`$, indicating that the maximal membrane is very stable. We have ignored the other solutions coming from $`\mathrm{sin}^3\psi -2\mathrm{sin}\psi +n/N=0`$, since they are unstable.
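These statements are easy to cross-check numerically. The sketch below (the choices $`R=1`$, $`N=50`$ and the finite-difference step are ours) evaluates the energy read off from the action near $`\mathrm{sin}\psi =n/N`$ and confirms a stable extremum with $`E=n/R`$:

```python
import numpy as np

R, N, eps = 1.0, 50.0, 1e-4

def energy(psi, n):
    """E(psi) = cos^-1(psi) * sqrt(N^2 sin^4 - 2nN sin^3 + n^2) / R."""
    s, c = np.sin(psi), np.cos(psi)
    return np.sqrt(N**2 * s**4 - 2 * n * N * s**3 + n**2) / (R * c)

for n in (5.0, 25.0, 45.0):
    psi0 = np.arcsin(n / N)                       # candidate solution sin(psi) = n/N
    dE  = (energy(psi0 + eps, n) - energy(psi0 - eps, n)) / (2 * eps)
    d2E = (energy(psi0 + eps, n) - 2 * energy(psi0, n) + energy(psi0 - eps, n)) / eps**2
    print(f"n = {n:4.0f}:  E = {energy(psi0, n):.6f} (n/R = {n / R:.1f}),"
          f"  E' = {dE:+.2e},  stable (E'' > 0): {d2E > 0}")
```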
Back to the M theory metric, the radius of the membrane is $`r=R\mathrm{sin}\psi =nR/N`$. When the membrane has the maximal angular momentum, it has the maximal size. This agrees with the result of . This is not a surprise, since we know that the D2-brane picture is dual to the membrane picture, although the calculational procedure here is quite different. Although we are considering a curved manifold $`S^4`$, the quantization of $`r`$ seems to be the same as for a membrane moving in flat spacetime with a constant $`F^{(4)}`$ field.
Much more can be done with our D2-brane approach; for instance, we can analyze the spectrum of the world-volume theory and the noncommutativity of $`S^2`$. This approach therefore seems to have advantages over the original approach of .
Without much effort, the above discussion can be generalized to $`AdS_4\times S^7`$ and $`AdS_5\times S^5`$.
We end this paper with a discussion of what kind of fuzzy spheres we obtain from AdS. It is proposed in \[15,16\] that the spheres are actually quantum deformed spheres with deformation parameter $`q=\mathrm{exp}(i2\pi /N)`$. This proposal is based on the phenomenological observation that the representations of the associated quantum group terminate. The mechanism of seems to suggest a different picture. Focus on the case $`AdS_7\times S^4`$. As we already explained in the beginning, $`S^4`$ can be viewed as the tensor product of a hemisphere $`S^2`$ with a constant radius and a sphere $`S^2`$ with variable radius $`r`$. The latter is wrapped by a membrane with a quantized radius $`r=nR/N`$. Viewed in M theory, this is a fuzzy sphere whose fuzziness is determined by $`n`$. In terms of the D2-brane picture, the effective flux is $`\mathcal{F}=B/(2\pi \alpha ^{\prime })+F\sim n(1-n^2/N^2)`$. Thus the world-volume theory on this D2-brane is a noncommutative field theory. The membrane behaves like a dipole on the hemisphere with a magnetic field. So this hemisphere is another fuzzy sphere, whose fuzziness is determined by $`N`$. We seem to obtain a fuzzy $`S^4`$ as the tensor product of a fuzzy hemisphere and a fuzzy sphere. It remains to construct a precise mathematical framework for this fuzzy $`S^4`$.
Acknowledgments. This work was supported by a grant of NSC and by a “Hundred People Project” grant of Academia Sinica. I thank P.M. Ho and Y.S. Wu for discussions.
Note added: We are informed that refs. , predate in emphasizing the noncommutativity of spheres. Also, discusses general features of space uncertainties which may be relevant to the situation discussed here.
References
1. T. Yoneya, p. 419 in “Wandering in the Fields”, eds. K. Kawarabayashi and A. Ukawa (World Scientific, 1987); see also p. 23 in “Quantum String Theory”, eds. N. Kawamoto and T. Kugo (Springer, 1988).
2. T. Yoneya, Mod. Phys. Lett. A4, 1587 (1989).
3. M. Li and T. Yoneya, hep-th/9611072, Phys. Rev. Lett. 78 (1997) 1219.
4. M. Li and T. Yoneya, “Short-distance Space-time Structure and Black Holes in String Theory: A Short Review of the Present Status”, hep-th/9806240, Jour. Chaos, Solitons and Fractals (1999).
5. L. Susskind, Phys. Rev. D49, 6606 (1994).
6. D. Minic, “On the Spacetime Uncertainty Principle and Holography”, hep-th/9808035, Phys. Lett. B442 (1998) 102.
7. J. Maldacena and A. Strominger, “AdS3 Black Holes and a Stringy Exclusion Principle”, hep-th/9804085, JHEP 9812 (1998) 005.
8. J. McGreevy, L. Susskind and N. Toumbas, “Invasion of the Giant Gravitons from Anti-de Sitter Space”, hep-th/0003075.
9. R. Myers, “Dielectric-Branes”, hep-th/9910053.
10. J. Maldacena, “The Large N Limit of Superconformal Field Theories and Supergravity”, hep-th/9711200.
11. C. S. Chu, P. M. Ho and Y. C. Kao, “Worldvolume Uncertainty Relations for D-Branes”, hep-th/9904133, Phys. Rev. D60 (1999) 126003.
12. There are numerous papers on this subject; we cite the most relevant one: A. Yu. Alekseev, A. Recknagel and V. Schomerus, “Noncommutative World-volume Geometries: Branes on SU(2) and Fuzzy Spheres”, hep-th/9908040, JHEP 9909 (1999) 023; for more papers see the reference list of the next reference.
13. K. Bachas, M.R. Douglas and C. Schweigert, “Flux Stabilization of D-branes”, hep-th/0003037.
14. P. Townsend, “D-branes from M-branes”, hep-th/9512062, Phys. Lett. B373 (1996) 68; C. Schmidhuber, “D-brane Actions”, hep-th/9601003, Nucl. Phys. B467 (1996) 146.
15. A. Jevicki and S. Ramgoolam, “Noncommutative gravity from AdS/CFT correspondence”, hep-th/9902059, JHEP 9904 (1999) 032.
16. P.M. Ho, S. Ramgoolam and R. Tatar, “Quantum Spacetimes and Finite N Effects in 4D Yang-Mills Theories”, hep-th/9907145.
17. A. Kempf, “A generalized Shannon sampling theorem, fields at the Planck scale as bandlimited signals”, hep-th/9905114.
# A Beam Splitter for Guided Atoms on an Atom Chip
## Abstract
We have designed and experimentally studied a simple beam splitter for atoms guided on an Atom Chip, using a current carrying Y-shaped wire and a bias magnetic field. This beam splitter and other similar designs can be used to build atom optical elements on the mesoscopic scale, and integrate them in matterwave quantum circuits.
Beam splitters are key elements in optics and its applications. In atom optics, beam splitters have so far only been demonstrated for atoms moving in free space, interacting either with periodic potentials (spatial and temporal) or with semi-transparent mirrors . On the other hand, guiding of atoms has attracted much attention in recent years, and different guides have been realized using magnetic potentials , hollow fibers , and light potentials .
In this letter we describe experiments which join these two lines of research, demonstrating a nanofabricated beam splitter for guided atoms using microscopic magnetic guides on an Atom Chip (see Fig. 1).
By bringing atoms close to electric and magnetic structures, one can achieve high gradients to create microscopic potentials with a size comparable to the de Broglie wavelength of the atoms, in analogy to mesoscopic quantum electronics . It is possible to design quantum wells, quantum wires and quantum dots for neutral atoms and to further combine these elements into more complex structures. A large variety of these microscopic potentials can be designed by using the interaction $`V=-\stackrel{}{\mu }\cdot \stackrel{}{B}`$ between a neutral atom with magnetic moment $`\stackrel{}{\mu }`$ and the magnetic field $`\stackrel{}{B}`$ generated by current carrying structures . Mounting the wires on a surface allows elaborate designs with thin wires which can sustain sizable currents . Such surface mounted atom optical elements were recently demonstrated for large structures (wire size $`100\mu m`$) , and for nanofabricated structures , the latter achieving the scales required for mesoscopic physics and quantum information proposals with microtraps .
The simplest configuration for a magnetic guide is a straight current-carrying wire . The magnetic field at a distance $`r`$ from the wire is given by $`B=\frac{\mu _o}{2\pi }\frac{I}{r}`$, where $`I`$ is the wire current. Atoms in the high-field-seeking state are guided in Kepler orbits around the wire (Kepler guide). By adding a homogeneous bias field one can produce a two-dimensional minimum of the potential at a distance $`\frac{\mu _o}{2\pi }\frac{I}{B_{bias}}`$ from the wire and guide atoms in the low-field-seeking state (side guide).
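The side-guide position is easily made concrete. The following sketch (SI units; the current and bias values are our own choices near the experimental range) scans the net field above the wire and recovers the minimum at $`\frac{\mu _o}{2\pi }\frac{I}{B_{bias}}`$:

```python
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability [T m / A]
I, B_bias = 0.8, 12e-4        # wire current [A] and 12 G bias in tesla (our choices)

# On the line above the wire, the wire field opposes the bias, so the
# trap bottom sits where the two cancel.
y = np.linspace(1e-6, 4e-4, 8000)                 # height above the wire [m]
B = np.abs(mu0 * I / (2 * np.pi * y) - B_bias)    # net field magnitude
y_min = y[np.argmin(B)]
r0 = mu0 * I / (2 * np.pi * B_bias)
print(f"numerical minimum at {y_min * 1e6:.1f} um, "
      f"analytic mu0*I/(2*pi*B_bias) = {r0 * 1e6:.1f} um")
```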
By combining two of these guides, it is possible to design potentials where at some point two different paths are available for the atom. This can be realized using different configurations, among which the simplest and most advantageous is a Y-shaped wire (Fig. 1a) . Such a beam splitter has one accessible input for the atoms, that is the central wire of the Y, and two accessible outputs corresponding to the right and left wires. Depending on how the current $`I`$ in the input wire is sent through the Y, atoms can be directed to the output arms of the Y with any desired ratio (Fig. 1b).
The Y beam splitter can be created either as a Kepler guide or as a side guide. We previously performed preliminary experiments studying such a beam splitting potential using free standing wires . In the experiment reported here, we study a beam splitter created by a Y-shaped wire on a nano-fabricated *Atom Chip*.
Our experiments are carried out using laser cooled Li atoms. A detailed description of the apparatus and the atom trapping procedure is given in .
The *Atom Chip* consists of a 2.5$`\mu m`$ thick gold layer deposited onto a GaAs substrate. This gold layer is patterned using standard nanofabrication techniques. A schematic of the wires on the *Atom Chip* used for this experiment is shown in Fig. 1a. It includes, besides the beam splitter, a series of magnetic traps to transfer atoms into smaller and smaller potentials: the large U-shaped wires are 200$`\mu `$m wide and provide a quadrupole potential if combined with a homogeneous bias field , while the thin Y-shaped wire is 10$`\mu m`$ wide. An additional U-shaped 1mm thick wire is located underneath the chip in order to assist with the loading of the chip.
The atoms are loaded onto the *Atom Chip* using our standard procedure (see details in ): Typically $`10^8`$ cold $`{}_{}{}^{7}Li`$ atoms are accumulated in a ”reflection MOT” and transferred to the splitting potential in the following steps: Atoms are first transferred into the MOT generated by the quadrupole field of the U-wire (I=17A, $`B_{bias}=`$6G) underneath the chip. Then, the laser light is switched off, leaving the atoms confined only by the magnetic quadrupole field of the U-wire. Atoms are then further compressed and transferred into a magnetic trap generated by the two 200$`\mu m`$ wires on the chip (I=2A, $`B_{bias}`$=12G), compressed again and transferred into the $`10\mu m`$ guide (I=0.8A, $`B_{bias}`$=12G). Each compression is achieved by decreasing the current generating the larger trap to zero and simultaneously switching on the current generating the smaller trap over a time of 10ms. Typically we transfer $`>10^6`$ atoms into the 10$`\mu m`$ guide , which has a typical transverse trap frequency of $`\omega =2\pi \times 6kHz`$.
The properties of the beam splitter are investigated by letting the atoms propagate along the guide for some time due to their longitudinal thermal velocity. The resulting atom distribution is measured by fluorescence images taken by a CCD camera looking at the atom chip surface from above. For this a short ($`<0.5`$ms) molasses pulse is applied. The pictures shown in Fig. 1b are such images taken after 16ms of guiding in the beam splitter. The first two pictures are obtained at $`B_{bias}`$=12G by sending 0.8A only through one of the output wires; atoms can therefore turn either left or right. In the third and fourth pictures the atoms experience a splitting potential, the current being sent equally through both out-going arms of the Y-shaped wire. The images are taken at bias field 12G and 8G respectively. At 12G the atoms are clearly more compressed.
By changing the current ratio between the two outputs, and simultaneously keeping the total current constant, it is possible to control the probability of going left and right. Typical data for a beam splitter experiment using 8G bias field are shown in Fig. 2. Here, the number of atoms in each arm is determined by summing over the density distribution. When the current is not balanced, the side carrying more current is preferred due to the larger transverse size of the guiding potential. It can be noted that the 50/50 atomic splitting ratio occurs for a current ratio different from one half. This is due to an additional 3G field directed along the input guide to make a Ioffe-Pritchard configuration and prevent Majorana spin flips; such a field introduces a difference in the output guides which can be compensated with different currents. The solid lines shown in Fig. 2 are obtained with Monte Carlo simulations of an atomic sample at T=250$`\mu `$K propagating in the Y beam splitter.
Before discussing the Y beam splitter in detail, one should note some properties of the beam splitting potential created by the Y-shaped wire and a homogeneous bias field, as shown in Fig. 3: (1) For the in-coming arm of the Y and for the two out-going arms, far away from the splitting point, we have simple side guide potentials. (2) The potential for the two out-going guides is tighter than for the in-coming guide, and its minimum lies at half the distance from the chip surface. This is because the in-coming guide is formed by a current which is twice that of the out-going guides. It should also be noted that due to the change in direction of the output wires, the bias field now has a component along the guides which contributes to the Ioffe-Pritchard field. (3) The splitting point of the potentials is not at the geometrical splitting point of the wires. This can be seen in the pictures of Fig. 1b. The actual split point of the potential is located after the geometrical split. Precisely, it occurs when the distance between the output wires is given by $`d_{split}=\frac{\mu _o}{2\pi }\frac{I}{B_{bias}}`$, which is equal to the height of the input guide above the chip. (4) An additional potential minimum appears between the geometric splitting point of the Y-wire and the splitting point of the potential, forming a fourth port.
The different location of the potential split, and the additional inaccessible fourth port of the beam splitting potential, can be explained simply by taking two parallel wires with current in the same direction and adding a homogeneous bias field along the plane containing the wires and directed orthogonal to them. Depending on the distance $`d`$ between the wires one observes three different cases: (i) if $`d<d_{split}`$, two minima are created one above the other on the axis between the wires. In the limit of $`d`$ going to zero, the barrier potential between the two minima goes to infinity (in approximation of infinitely thin wires) and the minimum closer to the wires plane falls onto it. (ii) if $`d=d_{split}`$, the two minima fuse into one. (iii) if $`d>d_{split}`$ two minima are created one above each wire. The barrier between them increases with the wire distance and we eventually obtain two independent guides. In the Y beam splitter one encounters all three cases moving along the beam splitter axis. This is shown in detail in Fig. 3b and c, which present two projections, onto the beam splitter plane and orthogonal to it respectively.
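These three regimes can be reproduced with a short numerical sketch (the currents and bias below are our own illustrative values): the deepest minimum of $`|B|`$ sits on the symmetry axis for $`dd_{split}`$, i.e. $`d\le d_{split}`$, and jumps to above the wires once $`d>d_{split}`$:

```python
import numpy as np

mu0_2pi = 2e-7                        # mu0/(2 pi) [T m / A]
I2, B_bias = 0.4, 12e-4               # current per wire [A] and bias [T] (our choices)
d_split = 2 * mu0_2pi * I2 / B_bias   # mu0 I / (2 pi B_bias) with I = 2*I2, ~133 um here

def Bmag(X, Y, d):
    bx, by = B_bias * np.ones_like(X), np.zeros_like(X)
    for xw in (-d / 2, d / 2):        # both wires along z, carrying I2 in +z
        r2 = (X - xw) ** 2 + Y ** 2
        bx -= mu0_2pi * I2 * Y / r2   # circular field of each wire
        by += mu0_2pi * I2 * (X - xw) / r2
    return np.hypot(bx, by)

x = np.linspace(-4e-4, 4e-4, 801)
y = np.linspace(1e-6, 5e-4, 500)
X, Y = np.meshgrid(x, y)
for f in (0.5, 1.0, 2.0):
    B = Bmag(X, Y, f * d_split)
    iy, ix = np.unravel_index(np.argmin(B), B.shape)
    print(f"d = {f:.1f} d_split: deepest minimum at x = {x[ix] * 1e6:+7.1f} um, "
          f"y = {y[iy] * 1e6:6.1f} um")
```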
The dynamics of an atom propagating through the Y beam splitter potential is best described by a scattering process in restricted space from in-coming modes into out-going modes. As in most scattering processes in free space, we expect some back scattering into the in-coming mode. Additional back scattering mechanisms due to the guides also occur: for instance, the output guides have higher transverse gradients because each of them carries half of the current. This gives rise to a reflection probability due to a mismatch of modes. Another contribution comes from the change of direction of the input guide as it gets closer to the chip surface. From the atomic distribution observed in the experiment we could estimate a back-reflection of less than $`20\%`$ at the splitting point. This may be further minimized by varying the potential shape and choosing different geometries. In addition, the fourth port, caused by the second minimum before the split point, induces a loss rate, since atoms taking that route will hit the surface.
The Y configuration enables a 50/50 splitting over a wide range of experimental parameters due to its inherent symmetry relative to the incoming guide axis, where by inherent we mean that the symmetry of the potential is maintained for different magnitudes of current and bias field, and for different incoming transverse modes. This was also numerically confirmed up to the first 35 modes. The atom arriving at the splitting junction encounters a symmetric scattering potential, and will thus have equal right-left amplitudes regardless of the specific current and bias field in use. Therefore, such a beam splitter should allow inherently coherent splitting for multi-mode propagation. This symmetric splitting may only be corrupted by breaking the symmetry of the potential, for example by a rotation of the bias field direction, or with a current imbalance.
This is an advantage over beam splitter designs for guided matter waves which rely on tunneling . In such a configuration, two side guides coming close together and going apart again, the potential at the closest point exhibits two guides separated by a potential barrier. Here the disadvantage is that the splitting ratio for an incoming wave packet is vastly different for different propagating modes, since it depends strongly on the tunneling probability. A further disadvantage is that, even for single mode splitting, the barrier width and height and consequently the splitting amplitudes are extremely sensitive to changes in the current and bias field.
The back scattering and the inaccessible fourth port of the Y beam splitter may be, at least partially, overcome using different beam splitter designs like the ones shown in Fig. 4. The configuration shown in Fig. 4a has two wires which run parallel until a given point and then go apart. If the bias field is chosen to exactly fulfill case (ii) of the above discussion, the splitting point of the potential is ensured to be that of the wires, and the height of the potential minimum above the chip surface is maintained throughout the device (in the limit of small opening angle). Furthermore, no fourth port appears in the splitting region. In Fig. 4b we present a more advanced design. Here a wave guide is realized with two parallel wires with currents in opposite directions and a bias field perpendicular to the chip surface. The splitting potential is designed to have input and output guides with identical characteristics, thereby eliminating reflections due to different guide gradients. On the other hand, this multi-wire configuration might be more difficult to integrate into a complex network.
In conclusion, we have realized a beam splitter for guided atoms, with a design that ensures symmetry under a wide range of experimental parameters, and which we have shown can be further developed to bypass the main drawbacks. This device could find applications in atom interferometry and in the study of decoherence processes close to a surface. Furthermore, this basic element could be integrated into more complex quantum networks which would form the base for advanced applications such as quantum information processing.
We would like to thank A. Chenet, A. Kasper, S. Schneider and A. Mitterer for help in the experiments. Atom Chips used in the preparation of this work and in the actual experiments were fabricated at the Institut für Festkörperelektronik, Technische Universität Wien, Austria, and the Sub-micron center, Weizmann Inst. of Science, Israel. We thank E. Gornik, C. Unterrainer and I. Bar-Joseph of these institutions for their assistance. This work was supported by the Austrian Science Foundation (FWF), project SFB F15-07, and by the ACQUIRE collaboration (EU, contract Nr. IST-1999-11055). B.H. acknowledges financial support from Svenska Institutet.
# COMMENTS ON THE PAPER ”ON THE UNIFICATION OF THE FUNDAMENTAL FORCES…”
## Abstract
In this brief paper we justify observations made in El Naschie’s paper ”On the Unification of the Fundamental Forces…”, on the Planck scale, fractal space time and the unification of interactions, from different standpoints.
In a recent paper El Naschie has emphasized the intimate connection between the Planck length, the Compton wavelength and a unification of the fundamental interactions within the framework of complex time and a fractal Cantorian space. We will now make some observations which justify the contentions made in the above paper, from different standpoints.
1. Complex Time: It has been pointed out that Fermions can be thought of as Kerr-Newman Black Holes, in the context of quantized space time: There are minimum space time intervals, and when we average over these, Physics arises. Within the minimum intervals, we encounter unphysical Zitterbewegung effects, which also show up as a complexification of coordinates - indeed they are the double Wiener process discussed by Abbott and Wise, Nottale and others. It may be mentioned that the transition from the Kerr metric in General Relativity to the Kerr-Newman metric is obtained by precisely such a complex shift of coordinates, a circumstance which has no clear meaning in Classical Physics. On the other hand it is this ”Classical” Kerr-Newman metric which describes the field of an electron including the Quantum Mechanical anomalous gyromagnetic ratio, $`g=2`$. This has been discussed in detail in references.
2. The Unification of Electromagnetism and Gravitation and the Planck Scale: It is in the context of point 1 that we arrive at a unified picture of electromagnetism and gravitation (Cf.ref.). The point is that at the Compton wavelength scale we have purely Quantum Mechanical effects like Zitterbewegung, spin half and electromagnetism, while at the Planck scale we have a purely classical Schwarzschild Black Hole. However the Planck scale is the extreme limit of the Compton scale, where electromagnetism and gravitation meet (Cf.ref., and ). This is because for a Planck mass $`m_P\sim 10^{-5}gms`$ we have
$$\frac{Gm_P^2}{e^2}\sim 1$$
(1)
whereas for an elementary particle like an electron we have the well known equation
$$\frac{Gm^2}{e^2}\sim \frac{1}{\sqrt{N}}\sim 10^{-40}$$
(2)
where $`N\sim 10^{80}`$ is the number of elementary particles in the universe.
Another way of expressing this result is that the Schwarzschild radius for the Planck mass equals its Compton wavelength. This is where Quantum Mechanics and Classical Physics meet.
This point can be analysed further. From equations (1) and (2) it can be seen that we obtain the Planck mass when the number of particles in the universe is 1. Indeed, as has been pointed out by Rosen, the Planck mass can be considered to be a mini universe, in the context of the Schrödinger equation with the gravitational interaction.
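The orders of magnitude in equations (1) and (2) are quickly verified; the sketch below uses standard constants in Gaussian (CGS) units, and the agreement is of course meant only at the coarse logarithmic level at which such large-number arguments operate:

```python
# Order-of-magnitude check of Eqs. (1) and (2) in Gaussian (CGS) units.
G   = 6.67e-8          # gravitational constant [cm^3 g^-1 s^-2]
e   = 4.80e-10         # electron charge [esu]
m_e = 9.11e-28         # electron mass [g]
m_P = 2.18e-5          # Planck mass [g]
N   = 1e80             # particle number used in the text

# G m_P^2 = hbar c, so this ratio is 1/alpha ~ 137: of order unity on this scale.
print("G m_P^2 / e^2 =", G * m_P**2 / e**2)
# For the electron the ratio falls roughly 40 decades below unity, as in Eq. (2).
print("G m_e^2 / e^2 =", G * m_e**2 / e**2)
print("1/sqrt(N)     =", 1.0 / N**0.5)
```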
3. The above brings us to another interesting aspect discussed by El Naschie in. This is the fact that the Planck mass is intimately related to the Hawking radiation, and in fact from the latter consideration we can deduce that a Planck mass ”evaporates” within about $`10^{-42}secs`$, which also happens to be its Compton time!
On the other hand, as pointed out in, an elementary particle like the pion is intimately related to Hagedorn radiation, which leads to a lifetime of the order of the age of the universe.
The above two conclusions have been obtained on the basis of a background Zero Point Field, the Langevin equation and space-time cutoffs, leading to a fluctuational creation of particles at the Planck scale and the Compton scale respectively.
4. Resolution and the Unification of Interactions: El Naschie has referred to the fact that there is no apriori fixed length scale (the Biedenharn conjecture). Indeed it has been argued by the author in the above context that depending on our scale of resolution, we encounter electromagnetism well outside the Compton wavelength, strong interactions at the Compton wavelength or slightly below it and only gravitation at the Planck scale. The differences between the various interactions are a manifestation of the resolution.
5. The Universe as a Black Hole: As pointed out in by El Naschie and the author, the universe can indeed be considered to be a black hole. Prima facie this is clear from the fact that the radius of the universe is of the order of the Schwarzschild radius of a black hole with the same mass as the universe. Also, as pointed out in , the age of the universe coincides with the time taken by a ray of light to travel from the horizon of a black hole to its centre or vice versa.
6. The ”Core” of the Electron: El Naschie refers to the core of the electron, $`10^{-20}cms`$, as indeed has been experimentally noticed by Dehmelt and Co-workers. It is interesting that this can be deduced in the context of the electron as a Quantum Mechanical Kerr-Newman Black Hole.
It was shown that for distances of the order of the Compton wavelength the potential is given in its QCD form
$$V\approx -\frac{\beta M}{r}+8\beta M\left(\frac{Mc^2}{\mathrm{\hbar }}\right)^2r$$
(3)
For small values of $`r`$ the potential (3) can be written as
$$V\approx -\frac{A}{r}e^{-\mu ^2r^2},\mu =\frac{Mc^2}{\mathrm{\hbar }}$$
(4)
It follows from (4) that
$$r\sim \frac{1}{\mu }\sim 10^{-21}cm.$$
(5)
Curiously enough in (4), $`r`$ appears as a time, which is to be expected because at the horizon of a black hole $`r`$ and $`t`$ interchange roles.
One could reach the same conclusion, as given in equation (5), from a different angle. In the Schrödinger equation which is used in QCD, with the potential given by (3), one can verify that the wave function is of the type $`f(r)e^{-\frac{\mu r}{2}}`$, where the same $`\mu `$ appears as in (4). Thus, once again one has a wave packet which is negligible outside the distance given by (5).
It may be noted that Brodsky and Drell had suggested, from a very different viewpoint, viz., the anomalous magnetic moment of the electron, that its size would be limited by $`10^{-20}cm`$. The result (5) was experimentally confirmed by Dehmelt and co-workers .
Finally, it may be remarked that it is the fractal double Wiener process referred to earlier that leads from the real space coordinate, $`x`$ say, to the complex coordinate $`x+ict`$ (Cf.ref.), which is the space and time divide: As pointed out by Hawking and others, an imaginary time would lead to a ”static” Euclidean four-geometry, rather than the Minkowski world, a concept that has been criticised by Prigogine. It is in the above fractal formulation (Cf.ref.), on the contrary, that we see the emergence of the space and time divide, that is, time itself.
Thus in conclusion, it may be said that the recognition of a fractal quantized underpinning of space time ties together several apparently disparate facts.
# Noise Sustained Propagation: local vs. global noise
## I Introduction
Information transfer through nonlinear systems in the presence of fluctuations has been extensively studied in the context of stochastic resonance (SR) . The frontier of this research has shifted towards systems with spatial degrees of freedom over the past few years. While the efforts initially were directed towards enhancing the basic SR effect , recent work has demonstrated that noise can also sustain wave propagation . In early studies, Jung et al. showed that noise can sustain spiral waves in a caricature model of excitable media. The assisting role of noise for one- and two-dimensional, nonlinear wave propagation was experimentally confirmed by Kádár et al. in a chemical medium and by Löcher et al. in an array of coupled electronic resonators. Noise enhanced propagation (NEP) for periodic signals was explored by Lindner et al. in a chain of coupled, overdamped bistable oscillators. The authors of Ref. furnish evidence for self-organized criticality underlying the creation and propagation of waves by noise in a chemical subexcitable medium. So far, all experiments and simulations on NEP have utilized local and additive noise. To our knowledge, no comparative studies on the effectiveness of local vs. global, or additive vs. multiplicative noise have been attempted.
In this paper, we investigate experimentally the effectiveness of global noise as compared to local noise for the propagation of a signal in a chain of coupled bistable elements. Expanding on earlier reports , the experiments are performed using a $`16\times 16`$ array of diode resonators driven in the stable period-2 regime. A bias consisting of a second drive at half the main frequency renders one phase more stable, and a phase kink can be made to propagate across the array. For an intermediate value of the coupling resistors and small bias, we observe propagation failure. Adding noise to the drive of each element, either from a single source or from individual sources - corresponding to global and local noise, respectively - greatly increases the chances of successful signal transmission. Digital simulations of NEP in a simple model of cardiac tissue compare favorably with these observations and allow us to investigate the effects of parameter mismatch between the elements. Finally, we present theoretical insights borrowed from discrete kink statistics which describe the phenomena in reasonable agreement.
## II Experimental Results
The experimental setup consists of 256 coupled diode resonators, of which one element is shown in Fig.1. The elements are arranged in a $`16\times 16`$ array with periodic boundary conditions connected along two opposite edges. Each diode resonator works as a bistable element when driven in its period-2 state. We break the phase symmetry by adding to the drive a second sinusoidal signal at half its frequency. This bias renders one phase more stable. We refer to the less stable phase as the metastable state. By inducing a phase change at one edge of the array a one-dimensional wavefront comprised of phase kinks will travel towards the detector at the other edge. The noise generators were constructed using the shot noise generated by a current through a pn junction diode as a source.
It is clear that in the absence of noise, a local phase jump will lead to a “domino effect” only if the bias and coupling are strong enough. The energetically lower phase then propagates into the metastable phase in the form of a moving kink. For identical elements, the speed of this moving interface depends on both the coupling strength and the amplitude of the applied bias. If the latter two parameters are chosen low enough, kinks in discrete systems will fail to propagate. In the experiment, there is a third factor contributing to kink trapping, namely heterogeneity of the chain. In our system, the variation of key parameters of the diode resonators and the difference in local noise power is as high as 10%. Arranging the array with periodic boundary conditions in one dimension and preserving the motion of wavefronts in the other dimension, we considerably reduce inhomogeneities by effectively averaging over 16 elements.
Figure 2 displays the measured kink velocities in the absence of noise as a function of bias for an intermediate value of the coupling resistors. The velocity decreases approximately linearly with the bias down to a cutoff value of 0.9 units, where it rapidly falls off to zero. We operate the array at a bias of 0.6 units, for which the system is not capable of deterministic kink motion.
Our experimental results are given in terms of the arrival-probability curves shown in Fig. 3. The procedure for obtaining these are analogous to those described in : We reset all resonators to be in the metastable state and then induce a phase flip at one edge of the array. Any successful kink propagation generates a step function at the detector, which we average over approximately 100 events, resulting in a smooth rise curve (solid lines). In order to quantify transmission degeneration by noise nucleated spurious signals we repeat the same experiment without inducing a phase flip initially (dashed lines).
The arrival-probability curves for both local and global noise at five different noise strengths are shown in Fig. 3. For local, i.e. independent from site to site, noise we obtain results analogous to those previously reported . For a local noise background of less than 0.0025 $`mV^2/Hz`$ (Fig. 3(a)) kinks remained trapped longer than the measurement time of 5000 drive cycles. At 0.0121 $`mV^2/Hz`$ the signals arrive with a wide distribution of travel times and a slow mean (Fig. 3(b)). The arrival times become shorter with increasing noise strengths (Figs. 3(c) and 3(d)). Finally, at 0.0625 $`mV^2/Hz`$ a substantial number of false starts (dashed lines) corrupt the detection of the original input signal.
We found that the qualitative behavior of the chain under the influence of global noise is similar, but the onset of detectable kink propagation is found at much lower noise strengths. We observe no kink propagation below noise levels of 0.001 $`mV^2/Hz`$. Slow and disperse kink motion occurs for noise levels of 0.0025 $`mV^2/Hz`$ (Fig. 3(a)). Higher average kink speeds and less fluctuations are recorded for levels of 0.0121 and 0.0256 $`mV^2/Hz`$ (Fig. 3(b) and 3(c)), while the signal is severely corrupted for values $``$ 0.04 $`mV^2/Hz`$ (Figs. 3(d) and 3(e)).
Fig. 4 compares the velocities of the propagating wavefront as a function of the noise strength for both global and local noise. The velocity of the propagating wavefront shows an approximately linear increase with increasing noise strength in both cases. Note that the velocity is simply the inverse of the measured arrival times, multiplied by the number of sites. It is hence well-defined only for low noise levels; in the case of substantial nucleation of additional thermal kink-antikink pairs this calculated “velocity” should be interpreted with caution. Therefore, the data points in Figs. 4 and 7 which correspond to significant noise corruption should be considered as outliers. For the coupling resistors used, the velocity for the global noise case is about 15% greater than that seen for local noise.
Besides the earlier onset in the case of global noise, there are also notable differences in the mechanism that leads to spurious signals. Clearly, for identical elements - unlike spatially uncorrelated noise - global noise cannot induce spurious kinks, which then compete with the deterministic kink due to the signal. The only way of generating a “false alarm” at the detector would be to phase flip the entire chain, thus requiring a single large fluctuation. This picture is not fully correct if there are mismatches between the elements, but it illustrates the dominant mechanism of creating the signal-masking noise background. Independent noise sources, however, easily spawn spurious kinks but rarely ever cause a global phase flip of the array. We postulate that the mechanism of noise sustained propagation per se is not very different for local and global noise, provided the kink width is small (i.e. involving a few elements only). The reason for this conjecture is the local nature of the stochastic escape processes that provide for the average effective kink displacement. As long as the spatial correlation length of the noise is not substantially smaller than the kink width, the noise induced “propulsion” of the kink will be similar for global and local noise. Note that the findings of Kádár et al., who briefly addressed the issue of noise correlation lengths, are consistent with this conjecture.
## III Simulations of a model of cardiac tissue
Propagation failure of signals due to discreteness of the supporting medium has previously been observed in theoretical and experimental studies of cardiac tissue. In particular, Keener introduced a modified cable theory, which incorporates the discretizing effects of the so-called gap junctions . Gap junctions, characterized by the (relatively high) intercellular resistance $`r_g`$, provide the electrical coupling between cardiac cells. Mathematically, the propagation of action potential along cardiac cells is described by various cable theories, which are analogous to wave propagation in one-dimensional conductors (cables). Formally, continuous and discrete models describe the wave propagation by either a partial differential equation or - in the latter case - via coupled ordinary differential equations. Continuous cable theory either ignores the effects of the gap junctions or replaces the cytoplasmic resistance with an effective resistance; in either case the electrical resistance is assumed to be spatially homogeneous. Here, we focus on the opposite assumption that gap junctional resistance is much more important than cytoplasmic resistance. We thus neglect the dynamics within a cell and assume that the propagation of the action potential is dominated by the delay caused by the gap junctions . Within the context of this discrete cable theory, we can write the current balance as
$$C_mS\frac{d\varphi _n}{dt}=\frac{1}{r_g}\left(\varphi _{n+1}-2\varphi _n+\varphi _{n-1}\right)+SI_m(\varphi _n)$$
(1)
where $`\varphi _n`$ is the transmembrane potential for the $`n`$th cell, $`S`$ the surface area of cell membrane, and $`C_m`$ is the membrane capacitance per unit area of membrane. $`I_m`$ specifies the inward ionic currents per unit area of membrane and is generally postulated to be a function of $`\varphi _n`$ having three zeros. For simplicity, we choose a simple cubic polynomial $`SI_m(\varphi )=12\sqrt{3}\varphi (1-\varphi )(\varphi -0.5)+0.5`$ . Note that though similar in appearance, Eqn. (1) is not simply a discretization of its continuous analog
$$C_mS\frac{\partial \mathrm{\Phi }}{\partial t}=\frac{L^2}{r_g}\frac{\partial ^2\mathrm{\Phi }}{\partial x^2}+SI_m(\mathrm{\Phi })$$
(2)
(where L is the size of the cardiac cell), but stands in its own right as a spatially discrete nonlinear wave equation. The most important observation is that propagation can fail in model (1) if $`r_g`$ is sufficiently large, but increasing resistances in Eq. (2) can never lead to propagation failure. Note that $`r_g`$ depends on the excitability of the tissue.
Fig. 5 shows a plot of the numerically determined speed (solid line) of propagation for model (1) as a function of the coupling strength $`d=1/r_g`$, as well as the analytically obtained kink speed $`c=\frac{c_0L}{C_mR_m}\sqrt{d}`$ (dashed line) for Eq. (2). The reader is referred to Ref. for an explanation of $`c_0`$ and $`R_m`$. It is evident that propagation is impossible for $`r_g`$ larger than a certain critical value $`r^{*}`$, which turns out to be a monotonically increasing function of excitability .
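The onset of propagation failure in Eq. (1) can be reproduced with a few lines. In the sketch below (our own discretization, with $`C_mS=1`$, the cubic $`SI_m`$ quoted above, and trial couplings merely chosen to bracket the transition), a front launched from one end either crosses the chain or pins:

```python
import numpy as np

def simulate(d, n=100, T=400.0, dt=0.02):
    """Euler integration of eq. (1) with C_m*S = 1 and coupling d = 1/r_g."""
    f = lambda u: 12 * np.sqrt(3) * u * (1 - u) * (u - 0.5) + 0.5
    phi = np.zeros(n)
    phi[:5] = 1.0                          # excite one end of the chain
    for _ in range(int(T / dt)):
        lap = np.roll(phi, 1) + np.roll(phi, -1) - 2 * phi
        lap[0], lap[-1] = phi[1] - phi[0], phi[-2] - phi[-1]   # zero-flux ends
        phi = phi + dt * (d * lap + f(phi))
    return phi

for d in (0.05, 0.5, 2.0):                 # trial couplings around the pinning transition
    print(f"d = {d:4.2f}: {int(np.sum(simulate(d) > 0.5))} of 100 cells switched")
```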
We have performed digital simulations of the stochastic modification of (1)
$$\frac{d\varphi _n}{dt}=ϵ\left(\varphi _{n+1}-2\varphi _n+\varphi _{n-1}\right)+SI_m(\varphi _n)(1+\xi _M(t))+\xi _A(t)$$
(3)
using the Euler-Maruyama algorithm with a time-step of $`dt=0.05`$ and coupling strength $`ϵ=0.07`$ (40 elements). $`\xi _A(t)`$ and $`\xi _M(t)`$ are additive and multiplicative Gaussian white noise, bandlimited in practice by the Nyquist frequency $`f_N=\frac{1}{2dt}`$. We quantify the noise by its dimensionless variance $`\sigma ^2=2Df_N`$, where $`2D`$ is the height of the one-sided noise spectrum.
Here, we only consider the case of purely additive noise, $`\xi _M(t)=0`$. Following a procedure analogous to that of the experiment, we obtain the probabilities for successful signal transmission as illustrated in Fig. 6. As in the experiment, global noise provides for kink propagation at much lower values of $`\sigma ^2`$ than local noise does. Fig. 7 compares the velocities of the noise-propelled kink as a function of $`\sigma ^2`$ for both global and local noise. The velocity of the propagating wavefront shows an approximately parabolic dependence on the noise power in both cases. For the coupling strength employed, global noise leads to speeds about 15% greater than those observed for local noise.
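A minimal sketch of this integration is given below for the purely additive case. It follows Eq. (3) with $`\xi _M=0`$ and the stated $`\sigma ^2=2Df_N`$ bookkeeping, but the rest state, injection recipe, run length and noise levels are our own illustrative choices, so the printed arrival times are not meant to reproduce Figs. 6 and 7 quantitatively:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, dt = 40, 0.07, 0.05
f = lambda u: 12 * np.sqrt(3) * u * (1 - u) * (u - 0.5) + 0.5

def arrival_time(sigma2, mode, T=600.0):
    """Euler-Maruyama for eq. (3) with xi_M = 0; sigma^2 = 2 D f_N -> D = sigma^2 * dt."""
    D = sigma2 * dt
    phi = np.full(n, 0.06)                 # metastable rest state
    phi[:3] = 1.0                          # inject the signal kink
    for step in range(int(T / dt)):
        lap = np.roll(phi, 1) + np.roll(phi, -1) - 2 * phi
        lap[0], lap[-1] = phi[1] - phi[0], phi[-2] - phi[-1]
        if mode == "local":
            xi = rng.normal(size=n)        # independent source at every site
        else:
            xi = np.full(n, rng.normal())  # one source drives all sites
        phi += dt * (eps * lap + f(phi)) + np.sqrt(2 * D * dt) * xi
        if phi[-1] > 0.5:                  # kink reached the far end
            return step * dt
    return np.inf

for sigma2 in (0.0, 0.05, 0.15):
    tl = np.median([arrival_time(sigma2, "local") for _ in range(5)])
    tg = np.median([arrival_time(sigma2, "global") for _ in range(5)])
    print(f"sigma^2 = {sigma2:.2f}: median arrival, local = {tl}, global = {tg}")
```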
## IV Detection Criteria
The decision whether a kink arriving at the last element corresponds to a signal injected at the first site constitutes a simple binary hypothesis-testing problem . We assign the null hypothesis $`H_0`$ to “no signal injected” and the opposite to the alternative hypothesis $`H_1`$. Denote by $`D_i`$ the decision that hypothesis $`H_i`$ was in effect. Clearly, there are two possible errors: a so-called Type I error occurs when making the decision that $`H_1`$ was in effect while the contrary is true. Borrowing notation from radar detection, we refer to this as the probability of false alarm $`P_f=P(D_1|H_0)`$. On the other hand, if $`H_1`$ was in effect to generate the data, and we decide $`D_0`$, then we have committed a Type II error, which in radar is referred to as a missed detection . This happens with some probability $`P(D_0|H_1)`$, which is related to the probability of detection $`P_d=P(D_1|H_1)`$ in an inverse fashion: $`P_d=1-P(D_0|H_1)`$.
In our experiment, there is no uniquely defined, objectively “best” noise level without first defining a decision strategy. Two suitable approaches are (i) the Neyman-Pearson strategy, in which the probability of detection $`P_d`$ is maximized while specifying an upper bound for the false alarm probability $`P_f`$, and (ii) Bayes’ rule which assigns costs to the various outcomes of the decision process, such as correctly detecting a signal or being “deceived” by a spurious kink. The optimum noise level in the latter case would be the one, which minimizes the total average cost.
In communication systems one is usually interested in the total probability of error $`P_e=P(D_1|H_0)+P(D_0|H_1)=P_f+1-P_d`$. Though in the experimental setup there is no inherent time-scale, i.e. the decision when to reset the chain is rather arbitrary, in digital communication applications we would expect information bits to be sent at a constant rate. Hence, we choose a reasonable time interval at the end of which we measure the probabilities of false alarm $`P_f`$ and missed detection $`1-P_d`$ as a function of noise power. Then, $`P_f`$ is simply the value of the dashed line in Fig. 6 at time = 300, and $`P_d`$ is the corresponding value for the solid line. For local and global noise, the total probability of error $`P_e`$ is displayed in Fig. 8; for very low and rather high values of the noise power, $`P_e`$ is almost one. However, there exists an optimal noise strength for which both $`P_f`$ and $`1-P_d`$ nearly vanish, resulting in a sharp minimum of $`P_e`$.
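The bookkeeping behind Fig. 8 amounts to a few lines; in the sketch below the $`(P_d,P_f)`$ pairs are invented placeholders for illustration, not the measured values:

```python
import numpy as np

# Hypothetical arrival probabilities at the decision time, one pair per noise level.
sigma2 = np.array([0.02, 0.05, 0.10, 0.15, 0.25])
P_d    = np.array([0.05, 0.60, 0.98, 0.99, 1.00])   # solid curves, signal injected
P_f    = np.array([0.00, 0.01, 0.02, 0.30, 0.95])   # dashed curves, no signal

P_e = P_f + 1.0 - P_d                               # total probability of error
for s2, pe in zip(sigma2, P_e):
    print(f"sigma^2 = {s2:.2f}:  P_e = {pe:.2f}")
print(f"optimal noise level: sigma^2 = {sigma2[np.argmin(P_e)]:.2f}")
```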
## V Theory
The results of Sects. II - IV touch upon two central properties of kink statistics in a discrete bistable chain, namely the Brownian motion of an individual kink and the nucleation of kink-antikink pairs in the presence of an external static bias (or d.c. forcing term). In the absence of a precise model for the 2-state potential that describes the phase shifts in the diode resonators of Sec. II or drives the transmembrane potential of Sec. III, we speculate that the in situ bistability of our arrays can be rendered satisfactorily in terms of a Double Quadratic (DQ) potential, that is, by two parabolas displaced by a distance $`2a`$ (see Fig. 9a). A discrete DQ model is likely to capture, at least qualitatively, the essential features of the array dynamics investigated above, while affording substantial simplifications in its analytical treatment. The analysis of this Section can be carried over, with more mathematical effort, to the $`\varphi ^4`$ model of Sec. III as well.
The DQ model has been studied both in the continuum and in the discrete case . In dimensionless units the DQ Hamiltonian reads
$$\frac{H}{H_0}=l\sum _n\left\{\frac{\dot{\varphi }_n^2}{2}+\frac{c_0^2}{4l^2}[(\varphi _n-\varphi _{n-1})^2+(\varphi _n-\varphi _{n+1})^2]+\frac{\omega _0^2}{2}(|\varphi _n|-1)^2\right\},$$
(4)
with $`H_0=ma^2/l`$. Each $`\varphi _n`$ can be regarded as the displacement (in units of $`2a`$) of the n-th chain site with mass $`m`$, $`c_0`$ and $`\omega _0`$ represent respectively the limiting speed and frequency of the phonon modes propagating along the chain and $`l`$ denotes the chain lattice constant ( for instance, in Eqn. (1) $`l`$ was set to one). The ratio $`c_0^2/l^2`$, which quantifies the effectiveness of the coupling between two adjacent bistable units, is the coupling constant of our model. The importance of the discreteness effects is measured by the discreteness parameter
$$\gamma =\frac{c_0}{\omega _0l}=\frac{d}{l},$$
(5)
namely the ratio of the kink length $`d`$ to the chain spacing $`l`$.
### A The continuum limit
In the continuum (or displacive) limit $`\gamma \to \mathrm{\infty }`$ the Hamiltonian (4) can be expressed as the line integral of the Hamiltonian density
$$H[\varphi ]=\frac{\varphi _t^2}{2}+c_0^2\frac{\varphi _x^2}{2}+V[\varphi ],$$
(6)
where the string field $`\varphi (x,t)`$ is defined as $`\mathrm{lim}_{l\to 0}\varphi _{x/l}(t)`$ and $`V[\varphi ]=\frac{\omega _0^2}{2}(|\varphi |-1)^2`$. The statistical mechanics of the continuum DQ model can be worked out analytically in great detail . In particular, we know that the kink ($`\varphi _{+}`$) and the antikink ($`\varphi _{-}`$) solutions
$$\varphi _\pm (x,t)=\pm \text{sgn}[x-X(t)]\left[1-\mathrm{exp}\left(-\frac{|x-X(t)|}{d\sqrt{1-u^2/c_0^2}}\right)\right]$$
(7)
can be regarded as relativistic quasi-particles with size $`d=c_0/\omega _0`$, mass $`M_0=E_0/c_0^2=1/d`$ (or rest energy $`E_0=\omega _0c_0`$) and center of mass $`X(t)=X_0+ut`$.
At low temperatures, $`kT\ll E_0`$, any string configuration can be represented as a linear superposition of randomly distributed kinks and antikinks floating on a phonon bath. A DQ string in equilibrium at temperature $`T`$ and with boundary conditions $`\varphi (-\mathrm{\infty },t)=\varphi (+\mathrm{\infty },t)`$ bears naturally a dilute gas of thermal kink-antikink pairs with density
$$n_0(T)=\frac{1}{2\sqrt{2}}\frac{1}{d}\left(\frac{E_0}{kT}\right)^{1/2}e^{-E_0/kT}.$$
(8)
The qualification thermal underscores the fact that $`n_0`$ pairs per unit of length (with $`n_0d\ll 1`$) are being generated by thermal fluctuations alone, irrespective of any geometric constraint at the boundaries (see discussion of Fig. 2 in Sec. II).
### B Kink Brownian motion
The $`\varphi _\pm (x,t)`$ solutions (7) tend to travel with arbitrary constant speed $`u<c_0`$, unless perturbed by the coupling to a heat bath or by an external field of force. The simplest heat-bath model was obtained by adding a viscous term $`\alpha \varphi _t`$ and a zero-mean, Gaussian local noise source $`\zeta (x,t)`$ to the string equation of motion corresponding to the Hamiltonian density (6), that is
$$\varphi _{tt}-c_0^2\varphi _{xx}+\omega _0^2\text{sgn}[\varphi ](|\varphi |-1)=-\alpha \varphi _t+F+\zeta (x,t).$$
(9)
Note that all our experiments and simulations have been carried out in the overdamped limit, $`\alpha \gg \omega _0`$, and in the presence of an additional sub-threshold force $`F`$ with $`F<\omega _0^2`$, also incorporated in Eq. (9). Thermalization is imposed here by choosing the noise autocorrelation function $`\langle \zeta (x,t)\zeta (x^{\prime },t^{\prime })\rangle =2\alpha kT\delta (t-t^{\prime })\delta (x-x^{\prime })`$.
A single kink (antikink) subjected to thermal fluctuations undergoes driven Brownian motion with Langevin equation
$$\dot{X}=\frac{2F}{\alpha M_0}+\eta (t),$$
(10)
where $`\eta (t)`$ is a zero-mean-valued Gaussian noise with strength $`D=kT/\alpha M_0`$ and autocorrelation function $`\langle \eta (t)\eta (0)\rangle =2D\delta (t)`$. As apparent from Eq. (10), the external bias pulls $`\varphi _\pm `$ in opposite directions with average speed $`\pm u_F`$, where $`u_F=2F/\alpha M_0`$.
If the local fluctuations are spatially correlated, say
$$\langle \zeta (x,t)\zeta (x^{\prime },t^{\prime })\rangle =2\alpha kT\delta (t-t^{\prime })[e^{-|x-x^{\prime }|/\lambda }/2\lambda ],$$
(11)
the noise strength $`D`$ changes into
$$D(\lambda )=\frac{Dd}{\lambda +d}\left(1+\frac{\lambda }{\lambda +d}\right).$$
(12)
As speculated in Sec. II, for noise correlation length $`\lambda `$ smaller than the kink size $`d`$ possible spatial inhomogeneities become negligible, i.e. $`D(\lambda )\simeq D`$ for $`\lambda \ll d`$ . The global noise regime simulated both experimentally in Sec. II and numerically in Sec. III corresponds to the limit $`\lambda \to \mathrm{\infty }`$ of the source $`\zeta (x,t)`$ rescaled by the normalization factor $`\sqrt{2\lambda }`$; the Langevin equation (10) still applies, but the relevant noise strength is now $`\mathrm{lim}_{\lambda \to \mathrm{\infty }}2\lambda D(\lambda )=4D`$. This accounts for the observation that global noise sustains kink propagation more effectively than local noise. Note that the enhancement factor of 4, more exactly $`4a`$, is nothing but twice the distance between the DQ potential minima (in dimensionless units).
Another important property of global noise is that it cannot trigger the nucleation of a kink-antikink pair and, therefore, minimizes the chances of a “false alarm” (see Fig. 3). For this to occur it would be necessary that a spatial deformation of a stable string configuration (vacuum state) be generated large enough for the external bias to succeed in making it grow indefinitely. Such a 2-body nucleation mechanism would require a local breach of the $`\varphi \to -\varphi `$ symmetry of the DQ equation (9), which can be best afforded in the presence of uncorrelated in situ fluctuations .
The nucleation rate, namely the number of kink-antikink pairs generated per unit of time and unit of length, can be easily computed by combining the nucleation theory of Ref. with the analytical results of Ref. for the DQ theory. For values of the string parameters relevant to Secs. II - IV, that is for $`kT`$ and $`Fd`$ both $`\ll E_0`$, the stationary DQ nucleation rate can be approximated by
$$\mathrm{\Gamma }_1(T)=\frac{2n_0(T)}{\tau (T)}=2u_Fn_0^2(T),$$
(13)
if $`Fd\gg kT`$, or $`\mathrm{\Gamma }_2(T)=\frac{1}{2}\sqrt{\frac{kT}{Fd}}\mathrm{\Gamma }_1(T)`$, if $`kT\gg Fd`$ (still with $`kT\ll E_0`$). For an overdamped string, $`\alpha \gg \omega _0`$, the time constant $`\tau (T)`$ amounts to the kink (antikink) lifetime prior to a destructive collision with an antikink (kink). Both estimates for the DQ nucleation rate clearly show that spontaneous nucleation of thermal pairs may degrade appreciably local-noise sustained propagation of injected (or geometric) kinks only for thermal energy fluctuations of the order of the kink rest energy.
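In practice the two estimates are straightforward to evaluate once $`n_0(T)`$ and $`u_F`$ are known; a hedged helper follows (the numbers passed in are placeholders, not the parameters of the string studied here):

```python
import math

def gamma_1(n0, u_F):
    """Stationary DQ nucleation rate, Eq. (13), valid for Fd >> kT."""
    return 2.0 * u_F * n0 ** 2

def gamma_2(n0, u_F, kT, F, d):
    """Corrected rate for kT >> Fd (prefactor 0.5*sqrt(kT/Fd))."""
    return 0.5 * math.sqrt(kT / (F * d)) * gamma_1(n0, u_F)

print(gamma_1(n0=1e-3, u_F=0.1))                        # pairs per unit time and length
print(gamma_2(n0=1e-3, u_F=0.1, kT=0.5, F=0.01, d=1.0))
```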
### C The Peierls-Nabarro potential
Let us go back now to the case of a discrete DQ chain. Discreteness (with parameter $`\gamma `$) affects the kink dynamics on two accounts:
(i) The profile of a static kink (antikink) $`\varphi _\pm (x,0)`$ is deformed into
$$\varphi _{\pm ,n}^{(s)}=\pm \text{sgn}[n-N]\left[1-Z_\nu \nu ^{|n-N|}\right],$$
(14)
with $`Z_\nu =2\sqrt{\nu }/(1+\nu )`$, $`N=m+1/2`$, $`m=0,\pm 1,\pm 2,\dots `$, and $`\nu =[\sqrt{1+4\gamma ^2}-1]/[\sqrt{1+4\gamma ^2}+1]`$. To make contact with the displacive solution $`\varphi _\pm (x,0)`$ one must replace $`nl`$ with $`x`$, $`Nl`$ with $`X_0`$ and take the continuum limit $`\gamma \to \infty `$ (so that $`\nu \to 1-1/\gamma `$). Note that the spatial extension of the discrete kink solutions $`\varphi _{\pm ,n}^{(s)}`$ increases monotonically with $`\gamma `$, as illustrated by the numerical sketch after this list. As $`\gamma `$ decreases below unity, $`\varphi _{\pm ,n}^{(s)}`$ approaches a step function (order-disorder limit);
(ii) $`\varphi _{\pm ,n}^{(s)}`$ is centered midway between two chain sites due to the confining action of an effective \[or Peierls-Nabarro (PN)\] potential. The PN potential describes the spatial modulation of the $`\varphi _{\pm ,n}^{(s)}`$ rest energy as its center of mass is moved across one chain unit cell, say from $`ml`$ up to $`(m+1)l`$.
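As an illustration of point (i), the discrete profile of Eq. (14) can be evaluated directly; the short sketch below (ours, with arbitrary chain parameters) shows the kink width growing with $`\gamma `$ and the step-function shape emerging for $`\gamma <1`$:

```python
import numpy as np

# Discrete static kink of Eq. (14) for several discreteness parameters gamma;
# nu and Z_nu follow the definitions in the text, with the kink centered at
# N = m + 1/2 (m = 0 here).
def static_kink(gamma, n, m=0, sign=+1):
    nu = (np.sqrt(1 + 4 * gamma**2) - 1) / (np.sqrt(1 + 4 * gamma**2) + 1)
    Z = 2 * np.sqrt(nu) / (1 + nu)
    N = m + 0.5
    return sign * np.sign(n - N) * (1 - Z * nu ** np.abs(n - N))

n = np.arange(-10, 11)
for gamma in (0.5, 2.0, 10.0):       # order-disorder limit -> near-continuum
    print(f"gamma={gamma:5.1f}:", np.round(static_kink(gamma, n), 3))
```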
As a result, according to the Langevin equation approach of Sec. V B, the $`\varphi _{\pm ,n}^{(s)}`$ center of mass $`X(t)`$ diffuses on a periodic, piece-wise harmonic potential with period $`l`$ and angular frequency $`\omega _{PN}`$, that is
$$\alpha \dot{X}=-\omega _{PN}^2\left[X-l\left(\text{Int}[X/l]-1/2\right)\right]+2F/M_0+\alpha \eta (t),$$
(15)
where $`\omega _{PN}^2\simeq (1+\nu )\omega _0^2`$ and Int$`[X/l]`$ denotes the integer part of $`X`$ in units of $`l`$. Note that $`\omega _{PN}\to \omega _0`$ and $`\omega _{PN}\to \sqrt{2}\omega _0`$ in the highly discrete and continuum limit, respectively. The energy barriers of the PN potential are thus (almost) quadratic in $`l`$.
The one-dimensional Langevin equation (15) has been studied in great detail by Risken and coworkers. In the noiseless limit, $`\eta (t)\equiv 0`$, the process $`X(t)`$ is to be found either in a locked state with $`\dot{X}=0`$, for $`4F/M_0<\omega _{PN}^2`$, or in a running state with $`\dot{X}\simeq u_F`$, for $`4F/M_0>\omega _{PN}^2`$. This is indeed the depinning (or locked-to-running) transition described in Fig. 2. At finite temperature the stationary velocity $`\langle \dot{X}\rangle =u(T)`$ can be cast in the following form,
$$\frac{u(T)}{u_F}=\frac{1}{\delta }\frac{1-e^{-\delta }}{A-B(1-e^{-\delta })},$$
(16)
where $`\delta =2Fl/kT`$ and the quantities $`A`$ and $`B`$ can be computed numerically with minimum effort. The ratio $`u(T)/u_F`$ is the rescaled $`\varphi _{\pm ,n}^{(s)}`$ mobility; it crosses from 0 (locked state) over to 1 (running state) continuously in a relatively narrow neighborhood of the threshold value $`F_{th}=M_0\omega _{PN}^2/4`$; moreover, $`u(T)/u_F`$ increases monotonically with $`T`$ at fixed bias. Such a temperature dependence of the kink mobility explains the sequences of rise curves in Figs. 3 and 6, where kink propagation seems to speed up on raising the noise level.
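The locked-to-running crossover of Eq. (16) can also be reproduced by direct Brownian-dynamics integration of Eq. (15). A minimal sketch, assuming units with $`\alpha =M_0=l=\omega _{PN}=1`$ (so that $`F_{th}=1/4`$); all values are illustrative:

```python
import numpy as np

# Brownian dynamics for the tilted PN potential, Eq. (15), in units with
# alpha = M0 = l = omega_PN = 1; the noise strength is D = kT and the
# depinning threshold is F_th = omega_PN^2/4 = 0.25. Wells are placed at
# half-integer X (kinks sit midway between chain sites).
rng = np.random.default_rng(1)

def mobility(F, kT, dt=1e-3, n_steps=200_000):
    u_F = 2.0 * F
    X = X0 = 0.5                                  # start at a PN well bottom
    for _ in range(n_steps):
        pin = -(X - (np.floor(X) + 0.5))          # piecewise-harmonic force
        X += (pin + 2.0 * F) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    return (X - X0) / (n_steps * dt) / u_F        # ~0 locked, ~1 running

for F in (0.1, 0.25, 0.4):
    print(F, round(mobility(F, kT=0.05), 2))
```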
## VI Summary
In conclusion, the present analysis confirms our speculation that the apparent SR behavior of the efficiency of noise-sustained transmission of kink-like signals along a bistable chain results from two competing mechanisms, both controlled by noise: the driven diffusion of stable, noninteracting kinks, whose mobility increases exponentially with temperature in the vicinity of the depinning transition (propulsion mechanism); and the detection of spurious signals, as thermal kink-antikink pairs nucleate at exponentially increasing rates, thus corrupting the propagated signal (garbling mechanism).
If the spatial distribution of the noise were constrained to a small neighborhood around the kink and zero along the rest of the chain, fast and efficient noise-supported signal transmission without false alarms would be realizable. This seemingly artificial scenario can be achieved naturally by considering the case of purely multiplicative noise. A detailed study of noise-sustained propagation in the presence of multiplicative fluctuations is beyond the scope of this work.
During the preparation of this manuscript the authors learned about recent results on propagation failure in the context of cell differentiation. Utilizing a highly simplified model composed of coupled bistable elements, the authors furnish evidence for the discrete nature of chemical signaling waves propagating through a chain of cells. We speculate that fluctuations, inherent in biological systems, might play a significant role in the details of cell differentiation processes.
We acknowledge the Office of Naval Research for financial support. ML, NC and EH warmly thank D. Cigna for very significant contributions to the experimental setup.
# High resolution near-infrared polarimetry of $`\eta `$ Carinae and the Homunculus Nebula
## 1 Introduction
$`\eta `$ Carinae, situated in the Carina complex at about 2.3 kpc (Davidson & Humphreys 1997), is one of the most massive stars known in the Galaxy and is going through the Luminous Blue Variable phase (Humphreys & Davidson 1994) of unsteady mass loss (Davidson et al. 1986). During the 1840’s it underwent an outburst and reached visual magnitude -1; since then it has been emerging from the dust which condensed after this ejection (Walborn & Liller 1977). Long term monitoring of optical, IR, radio and X-ray spectra has revealed evidence of periodicity, perhaps related to a binary or multiple star at the core of the nebula (Damineli et al. 1997).
The compact nebula around $`\eta `$ Carinae (HD 93308), called the Homunculus, was first shown by Thackeray (1961) to be highly polarized. The initial polarimetry was confirmed by Wesselink (1969), who measured linear polarization of around 40%. Visvanathan (1967) observed that the polarization centred on $`\eta `$ Carinae was almost constant with wavelength from U to R and increased with increasing aperture size. In a small aperture, higher polarization was observed on the NW side of the nebula than on the SE. The first systematic polarization maps were made by Warren-Smith et al. (1979) in the V band and demonstrated a centro-symmetric pattern of polarization vectors with a marked asymmetry in the polarization values along the major axis (position angle $`\sim `$130°), with values up to 40% in the NW lobe. To produce such high values of polarization in a reflection nebula, Mie scattering by silicate particles with a size distribution weighted to smaller particles was invoked and modelled by Carty et al. (1979). In the near ($`\sim `$0.5<sup>′′</sup>) vicinity of $`\eta `$ Car itself, speckle masking polarimetry in the H$`\alpha `$ line and local continuum has revealed evidence for a compact equatorial disc aligned with the minor axis of the Homunculus (Falcke et al. 1996). Within $`<`$1<sup>′′</sup> of $`\eta `$ Car the polarization vector pattern does not remain centrosymmetric in the R band, suggesting that local structures and perhaps intrinsic emission may contribute to the morphology and scattered light (Falcke et al. 1996). Polarimetry in the mid-infrared, where the dust emits rather than scatters radiation, shows an entirely different pattern of polarization vectors with a trend to be oriented radially, particularly near the boundary of the emission (Aitken et al. 1995). Such a pattern can be interpreted in terms of emission from aligned grains; Aitken et al. (1995) suggest that the alignment mechanism may be gas-grain streaming, driven by the high outflow velocity, or a remnant magnetic field from a dense magnetized disc.
There is a wealth of IR observations of $`\eta `$ Car and the Homunculus on account of its intrinsic IR brightness, first observed by Westphal & Neugebauer (1969), and astrophysical interest. The IR spectrum is characterized by a peak around 10$`\mu `$m, indicative of silicate grains (Mitchell & Robinson 1978). There is a central IR point source together with a second peak on the minor axis of the nebula, whose separation increases from 1.1 to 2.2<sup>′′</sup> from 3.6 to 11.2$`\mu `$m (Hyland et al. 1979). The near-IR spectrum of $`\eta `$ Car shows a steep increase with wavelength and prominent hydrogen lines of the Paschen and Brackett series as well as He I lines (Whitelock et al. 1983) and weaker Fe II and \[Fe II\] lines (Altamore et al. 1994). Maps in the J, H and K bands show that the structure is dominated by scattering, but beyond about 3$`\mu `$m dust emission dominates (Allen 1989), with many clumps present. High spatial resolution observations have reported an unresolved central source (at L and M band, Bensammar et al. 1985), with filaments and unresolved knots within 1<sup>′′</sup> detected in many IR bands (Gehring 1992). Maps in the mid-IR show a similar structure, and the compact central source has a dust temperature $`\sim `$650 K and dust mass of 10<sup>-4</sup> M<sub>⊙</sub>, with nearby dusty clouds associated into loop features (Smith et al. 1995). This source has been so prodigiously studied at so many wavelengths that it possesses its own review article in Annual Review of Astronomy and Astrophysics (Davidson & Humphreys 1997).
$`\eta `$ Car can be considered an ideal source for adaptive optics on account of its very bright central, almost point, source (V $`\sim `$ 7 mag; van Genderen et al. 1994) and the limited radial extent ($`\pm `$10<sup>′′</sup>) of the Homunculus, which means that the source itself can be used as a reference star for the wavefront sensor. As a consequence, off-axis anisoplanatism does not significantly affect the adaptive optics (AO) correction out to the edges of the nebula. Previous near-infrared AO imaging of $`\eta `$ Car was obtained (Rigaut & Gehring 1995), including some limited polarimetry (Gehring 1992), using the COME-ON AO instrument. Here we report on dedicated high resolution near-IR AO imaging polarimetry conducted at J, H, K, and in a continuum band at 2.15$`\mu `$m, using the ADONIS system and SHARP II camera, with the aim of studying the small-scale polarization structure of the Homunculus. The observations are described in Sect. 2; the reductions and polarization data are presented in Sect. 3 and the relevance of the results for the structure and dust properties of this remarkable reflection nebula are discussed in Sect. 4.
## 2 Observations
Imaging polarimetry of $`\eta `$ Carinae was obtained with the ADONIS Adaptive Optics instrument mounted at the F/8.1 Cassegrain focus of the ESO 3.6m telescope. ADONIS is the ESO common-user adaptive optics instrument; it employs a 64 element deformable mirror and wave-front sensor (Beuzit et al. 1997). For the observations of $`\eta `$ Car, a Reticon detector was used as the wave-front sensor. The camera is SHARP II - a Rockwell 256<sup>2</sup> HgCdTe NICMOS 3 array (Hofmann et al. 1995). The pixel scale was chosen as 0.050<sup>′′</sup> per pixel, giving a field of view of 12.8$`\times `$12.8<sup>′′</sup>. Although this field does not encompass the full extent of the Homunculus nebula, it was chosen so that well-sampled diffraction limited imaging would be possible in at least the H and K bands. Table 1 lists the observations; the two orientations, referred to as A and B, had $`\eta `$ Car in the lower right and upper left of the array respectively (east is up; north to the right), enabling full coverage of the Homunculus. The narrow band 2.15 $`\mu `$m (henceforth K<sub>c</sub>) observations were made with $`\eta `$ Car in the centre of the array. For each filter and orientation, exposures were made at 8 position angles of the polarizer: 0.0°, 22.5°, 45.0°, 67.5°, 90.0°, 112.5°, 135.0°, 157.5° in sequence. In addition a repeat of the 0° exposure was made with the polarizer angle set to 180° in order to check the photometry and repeatability of the polarimetric measurements. Each full sequence of polarization measurements at the 8+1 position angles was repeated as specified in column 6 of Table 1. Offset sky chopping was employed and the relative position of the offset sky is listed in column 5 of Table 1; the exposure on sky was equal to the on-source time. As discussed in Ageorges & Walsh (1999), it was not possible to calculate reliable K band polarimetry; these data will therefore only be discussed in terms of their high resolution imaging. On 1996 March 03, due to a technical problem, the computer control of the chopper malfunctioned and the offset sky had to be observed subsequent to each sequence of polarizer angles, and in some cases the exposure time on source was greater than on background sky. The last column of Table 1 gives an indication of the external seeing as measured by the Differential Image Motion Monitor at La Silla during the observations. Photometric standards were not observed and no attempt has been made to determine accurate magnitudes in the J, H, K and K<sub>c</sub> filters.
Polarized and unpolarized standards were observed in the course of the observations to determine the instrumental polarization and any rotation of the instrumental plane of polarization. The star HD 64299, which is relatively close to $`\eta `$ Car, was observed at J, H and K as an unpolarized standard (polarization 0.15% at B; Turnshek et al. 1990) and a point source for deconvolution; its polarization is assumed to be low in the IR as well, although the actual values are not measured. In the K<sub>c</sub> filter, the star HD 94510 was observed. These stars were chosen primarily to be bright but not so bright as to saturate the SHARP II camera. ADONIS observations of these stars and the polarized reference sources are fully described in Ageorges & Walsh (1999), where the observational details are also given.
## 3 Reductions and Results
### 3.1 Basic reduction
The data cubes at each position of the polarizer consist of M $`\times `$ 256$`\times `$256 pixel frames, where M is given as the number of frames in Table 1. To produce a single image for each of the nine selected polarizer angles, the data were flat fielded using images obtained at the beginning of the night on the twilight sky with an identical set of polarizer angles. The data cubes were used to derive a bad pixel map, as described by Ageorges & Walsh (1999), using a sky variation method. The sky from the offset position was subtracted separately from each of the M frames before combination into a single image for each polarizer angle. These reductions were performed with the dedicated adaptive optics reduction package ‘eclipse’ (Devillard 1997). The data frames at position angle 157.5° were not used in any computations of polarization on account of the discrepancy noted by Ageorges & Walsh (1999). The reduced data then consisted of 2 sets of K band image pairs; 2 sets of H band image pairs; 1 set of J band images, all with $`\eta `$ Car displaced from the centre of the detector; and 1 set of K<sub>c</sub> filter images centred on $`\eta `$ Car.
### 3.2 Registration of images
The rotation of the polarizer induces a small shift of up to 3 pixels in the position of the images (see Ageorges & Walsh 1999, Fig. 2) and, coupled with the (intended) shifts of $`\eta `$ Car across the SHARP II field, it is necessary to carefully align all images to a common centre in order to calculate precise colour or polarization maps for the whole of the Homunculus. On the J, H and K frames, the image of $`\eta `$ Car was saturated (overflow of the A-to-D converter). The centroids of all the J, H and K images were determined using a large radius (30 pixels = 1.5<sup>′′</sup>); it was found that with such a large radius the centroid was not sensitive to the saturated core (typically a few pixel radius). Combined images of the coverage of the whole nebula, at each position of the polarizer, were formed by shifting, and rotating by 90°, each image pair (e.g. JA and JB, which were taken consecutively) to a common centre and averaging the pixels in common. The rotation was required to produce astronomical orientation. Shifts were restricted to integer pixels, thus the alignment can have a maximum error of 0.050<sup>′′</sup>. The alignment procedure was carefully checked by examining the coincidence of features in the nebula and in the core when saturation was not severe (e.g. in the J band images in particular). Where there were repeats of the full combined image (column 6 of Table 1), the two sets of images were averaged. The result was an image of dimensions 326$`\times `$326 pixels (16.3$`\times `$16.3<sup>′′</sup>) with no data in the top left and lower right corners. Figure 1 shows the J, H, K and K<sub>c</sub> total flux images (i.e. Stokes I) on a logarithmic scale. All images have identical scale and orientation. In the J, H and K images the saturation of the central source is indicated by the zero flagged pixels (region of radius 4 pixels about the position of the peak). There are a variety of artifacts: diffraction spikes along the principal axes caused by the secondary support; low level changes consequent on merging images (the overlap regions were used to scale the image pairs); a doughnut shaped feature caused by a hot pixel cluster, which occurs at equal declination values at the extremity of the NW and SE lobes and near the rim of the NW lobe. These features, which are most apparent on the colour maps (Fig. 2), have not been masked out but are obviously not interpreted.
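The alignment step can be summarized in a few lines of code; the sketch below is our paraphrase of the procedure (function names and array conventions are ours, not those of the actual reduction software):

```python
import numpy as np

# Centroid inside a large (30 pixel) radius: insensitive to the few
# saturated pixels at the core. Alignment uses integer-pixel shifts only,
# so the maximum registration error is one 0.050'' pixel.
def centroid(img, guess_yx, r=30):
    y, x = np.indices(img.shape)
    mask = (y - guess_yx[0])**2 + (x - guess_yx[1])**2 <= r**2
    w = np.where(mask, img, 0.0)
    return np.array([(y * w).sum(), (x * w).sum()]) / w.sum()

def shift_to(img, c_yx, target_yx):
    dy, dx = np.rint(np.asarray(target_yx) - c_yx).astype(int)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```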
### 3.3 Colour maps
‘Colour’ maps were made by ratioing the J, H and K images. A cut-off in the form of a mask was applied to each colour map in order to prevent division by small numbers; this produces the sharp bulbous edges in the maps. Figure 2 shows the J/H and H/K images on a logarithmic scale. On account of saturation, the values over the core do not hold any colour information and have been set to zero. The range of valid ratio values is: (J-H) 0.02 - 2.5 mag.; (H-K) 0.02 - 1.5 mag.
### 3.4 Polarization maps
The linear polarization and position angle were calculated for the combined maps by fitting a cos 2$`\theta `$ curve to the values at each point as a function of polarizer angle, as fully described in Ageorges & Walsh (1999). The discrepant point at PA 157.5° was not included in these fits (Ageorges & Walsh 1999). The input images were binned to improve the signal-to-noise in the polarization determination at the expense of spatial resolution. In addition a cut-off in polarization signal-to-noise (i.e. $`p/\sigma _p`$) was applied to exclude points with large errors, such as at the edges of the Homunculus. Figure 3 shows the J, H and K<sub>c</sub> band polarization vector maps superposed on logarithmic intensity contour maps, to be directly compared to the images in Fig. 1. The data were binned into 4$`\times `$4 pixels (0.2$`\times `$0.2<sup>′′</sup>) before calculating the polarization; the polarization cut-off was set at errors of 2% for the J and K<sub>c</sub> maps and 1.7% for the H band map.
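For reference, the per-pixel fit amounts to linear least squares on the basis $`\{1,\mathrm{cos}2\theta ,\mathrm{sin}2\theta \}`$; a minimal sketch of such a fit (ours, not the actual code used, which is described in Ageorges & Walsh 1999):

```python
import numpy as np

# Fit I(theta) = A + B*cos(2*theta) + C*sin(2*theta) at every pixel, then
# p = sqrt(B^2 + C^2)/A and position angle = 0.5*atan2(C, B). The 157.5 deg
# frame is excluded, as in the text.
angles = np.deg2rad([0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0])

def fit_polarization(stack):
    """stack: array of shape (7, ny, nx), one image per polarizer angle."""
    M = np.column_stack([np.ones_like(angles),
                         np.cos(2 * angles), np.sin(2 * angles)])
    coeff, *_ = np.linalg.lstsq(M, stack.reshape(len(angles), -1), rcond=None)
    A, B, C = (c.reshape(stack.shape[1:]) for c in coeff)
    return np.hypot(B, C) / A, 0.5 * np.degrees(np.arctan2(C, B))
```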
### 3.5 Restoration of 2.15$`\mu `$m images
As described in Ageorges & Walsh (1999), the K<sub>c</sub> images were restored using the blind deconvolution algorithm IDAC (Jeffries & Christou 1993) to determine the PSF of the images. Only the central 128$`\times `$128 pixel area was restored, to save computer time, and only the images at polarizer angles of 0°, 45°, 90° and 135° were employed. Figure 4 shows the logarithmic total intensity map over the central 1$`\times `$1<sup>′′</sup> area resulting from restoring the four reduced images using the Richardson-Lucy algorithm (Lucy 1974) with the IDAC PSF; the resulting image was reconvolved with a Gaussian of 3 pixel FWHM (0.15<sup>′′</sup>), since this is about the expected diffraction limited resolution at this wavelength.
A polarization map was calculated from the four restored 1$`\times `$1<sup>′′</sup> images and is shown in Fig. 4 for direct comparison with the logarithmic image. The data were binned 2$`\times `$2 pixels before calculating polarization and the polarization cut-off error was 4%. An attempt was made to calculate the polarization at the positions of the speckle knots discovered by Weigelt et al. (Weigelt & Ebersberger 1986; Hofmann & Weigelt 1988), whose positions are shown in Fig. 4. Knot A is the central (assumed) point source, whilst knots B, C and D are to the NW at offsets of 0.114 (B), 0.177 (C) and 0.211<sup>′′</sup> (D); these offsets correspond to only 2.3, 3.5 and 4.2 pixels in the images. In the restored images no distinct knots could be discerned at these positions, but it is clear from Fig. 4 that there is an apron of IR radiation in the NW direction, strongly hinting at an area of elevated brightness in the vicinity of these knots.
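With only four polarizer angles, the normalized Stokes parameters follow directly from the standard estimator (again a sketch of the standard formulae, not the pipeline code):

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    """Degree of polarization and position angle from images taken at
    polarizer angles 0, 45, 90 and 135 deg."""
    q = (i0 - i90) / (i0 + i90)
    u = (i45 - i135) / (i45 + i135)
    return np.hypot(q, u), 0.5 * np.degrees(np.arctan2(u, q))
```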
Aperture photometry of the Weigelt et al. knots in a 2$`\times `$2 pixel area was performed for the three sets of images - restored with IDAC, Richardson-Lucy restored with the IDAC PSF, and Richardson-Lucy restored. All the restorations were convolved with a Gaussian of 3 pixel FWHM. For the three images the aperture polarization determinations showed that knot B could not be distinguished from knot A (identical polarization within errors). Knot C showed very differing results depending on the method (it lies on a diffraction spike); only for knot D could a fairly consistent value of polarization be determined. From the three methods a mean polarization of 18 $`\pm `$ 7% and a position angle of 17 $`\pm `$ 14° was derived for knot D. Given the position angle of knot D from knot A of 336° (Hofmann & Weigelt 1988), a position angle of the polarization vector of about 60° is expected. To reconcile this discrepancy, it is suggested that knot D may not be directly illuminated by $`\eta `$ Carinae, i.e. there is multiple scattering within this core region, which would not be too surprising given the high (gas) densities (Davidson et al. 1997). The mean total intensity ratio knot A/knot D was 10:1, to be compared with the value of about 12:1 given by Hofmann & Weigelt (1988) for a wavelength of $`\sim `$8500Å. It is justified to attempt polarimetry at these positions since the images of Morse et al. (1998, Fig. 5) and Ebbets et al. (1994) show no obvious indication that the knots have substantial proper motion. This supposition is partly supported by the low radial velocities measured by Davidson et al. (1997), who classify these knots as ‘compact slow’ ejecta from $`\eta `$ Car.
## 4 Discussion
### 4.1 Morphology
In the near-IR, scattering still dominates the structure of the Homunculus nebula, as it does in the optical, and the general appearance is similar. The features in the colour maps in Fig. 2 correspond to those well known in the Homunculus (see for example the sketch of the various morphological features in Fig. 3 of Currie et al. 1996). The paddle to the NW, which is bluer, and the two knots at PA 0° and 280° are interpreted as belonging to the disk in which $`\eta `$ Carinae resides (see e.g. the sketch of Smith et al. 1998, Fig. 10). The jet NN to the NE is weaker at H and is barely detected at K, probably on account of lower extinction towards this jet in comparison to the Homunculus (assuming that scattering dominates this structure). The skirt to the SW is less striking in all the IR maps compared with the optical (see the beautiful HST images reproduced in Morse et al. 1998) and does not show a distinct colour in Fig. 2. The SE lobe presents a more speckled appearance than the NW one, where there are some radial features which show up well in the colour maps. The most prominent large scale feature in the H-K colour map (Fig. 2) is the dark trench extending over most of the NW lobe at PA $`\sim `$320°. This feature is barely visible in the J and H maps but is much brighter at K. The ‘hole’ in the SE lobe, detected in the mid-IR by Smith et al. (1998), has rather red J-H and H-K colours; its western edge is noticeably brighter at J. The hole in the NW lobe, detected in the mid-IR images of Smith et al. (1998), is not visible in the near-IR images. The rim of the SE lobe is notably blue in the J-H map whilst it is barely discernible in the H-K map; the rim of the NW lobe is notably redder. These differences must be primarily due to differing amounts of line of sight extinction at the periphery of the lobes: the SE lobe, which is tilted toward the observer, suffers less extinction than the NW lobe, which is tilted away.
The trend noted by Morse et al. (1998), that structures become less pronounced with increasing wavelength, continues into the near-IR. Figure 5 (upper) shows a cut in log flux along the major axis (taken as PA 132°) of the Homunculus from the J, H, K and K<sub>c</sub> maps shown in Fig. 1. The central 2<sup>′′</sup> is not shown for the J, H and K maps since the images of $`\eta `$ Carinae are saturated in this region. The effect of smoothing out of features is clear from this plot. This is more strikingly seen in Fig. 6, where an attempt has been made to display the near-IR flux distribution (linear scale) along the same cut shown by Morse et al. (1998) \[their Fig. 6\]. Note in particular the depth of the feature centred at offset $`+`$1.3<sup>′′</sup>, which shows a central depression of 40% of the peak value at $`+`$1.9<sup>′′</sup> for a wavelength of 1.25$`\mu `$m, compared to 94% at 0.63$`\mu `$m. The lower contrast with increasing wavelength can be attributed to lower extinction, through the Homunculus lobes, of many small ($`\lesssim `$0.3<sup>′′</sup>) optically thick knots. Such knots block the transmission of scattered radiation from $`\eta `$ Carinae through the front side of the lobes and, on account of their optical thickness, do not show much scattered light from their near sides. This can account for the more dappled appearance of the nebula in the J-H than the H-K colour map (Fig. 2).
Figure 6 reveals that at some positions of the nebula there can be substantial differences in the structure with wavelength: the peak at $`+`$3.3<sup>′′</sup> in the K band, which is hardly noticeable at J, is the most prominent feature in this comparison. This peak is seen on the colour maps in Fig. 2 as the dark region in the H-K map south of $`\eta `$ Carinae. Whilst there are some colour differences over the near-IR range towards the edge of the nebula, the most prominent are in the central ($`\sim `$4$`\times `$4<sup>′′</sup>) area. The knots in the NW lobe within 3.5<sup>′′</sup> of $`\eta `$ Carinae are stronger in the J image than at longer wavelengths. This region is also marked out by having a distinctly lower polarization and corresponds to the disc (e.g. Smith et al. 1998), which has a very different orientation to the Homunculus. The axis of the Homunculus is assumed to be tilted at about 35° to the plane of the sky (e.g. Meaburn et al. 1993; Davidson et al. 1997). However the HST proper motion studies favour a higher inclination of about 50° (Currie 1999, priv. comm.); the double flask model of Schulte-Ladbeck et al. (1999) has a 60° tilt to the plane of the sky. Differences in structure between the optical and near-IR can be understood in terms of increasing optical thinness with wavelength; at K the outer regions of the disk are optically thin and the sightline extends to the core of the Homunculus. This is also confirmed by features of the polarization maps (Sect. 4.2).
The J band image was compared in some detail with the HST F1042M image presented by Morse et al. (1998) as their Fig. 4; the 1.04$`\mu `$m HST image differs by only 0.24$`\mu `$m in central wavelength from the J band image. These images make an excellent image pair for comparing HST with ground-based adaptive optics, although the AO image has not been deconvolved. The J band image is definitely ‘fuzzier’. This must partly be due to the trend for features to be smoother at longer wavelengths, but probably is primarily due to the differing character of the AO PSF compared with HST, since the diffraction limits are comparable (0.07<sup>′′</sup> at J for the ESO 3.6m and 0.09<sup>′′</sup> at 1.04$`\mu `$m for HST). Clearly the ground-based AO image is approaching the HST image in terms of resolving sharp features close to the diffraction limit. The only difference noticed between the images was the presence of a narrow bright feature running through the dark region at approximately $`\mathrm{\Delta }\alpha `$=1, $`\mathrm{\Delta }\delta `$=-2<sup>′′</sup> on the J band image. Presumably this distinct feature corresponds to a shaft of radiation escaping from the central disc.
The restored K<sub>c</sub> map of the central region shown in Fig. 4 definitely shows an extension in surface brightness in the direction of the speckle knots (Weigelt & Ebersberger 1986; Hofmann & Weigelt 1988). As discussed by Ageorges & Walsh (1999), several methods were used for restoring this image and all showed the presence of this feature, so its reality seems probable. That it is visible at K, in the UV and optical, strongly suggests that dust scattering is the common spectral feature, although these knots are known to have extraordinary line emission (Davidson et al. 1995, 1997, 1999). In Fig. 7 of Davidson et al. (1997), the brightness profile of the continuum in the NW shows an extension in comparison with the SE direction and this was suggested as scattered light from dust in the speckle condensations C and D. The detection of an extension in this direction at K<sub>c</sub> (the filter avoids the Brackett $`\gamma `$ and He I lines, so is presumably pure continuum) confirms this interpretation. The polarization value derived for knot D in Sect. 3.5, although rather uncertain, suggests a scattering origin. Comparing the images in Fig. 4 with those presented in Fig. 5 of Morse et al. (1998), over almost identical regions, shows similarities to the NW of the central source but no detailed correspondence to the SE. This can be understood in the canonical picture that here the disk, tilted by some 35° to the line of sight, is being viewed obliquely and the radiation escapes preferentially towards the observer to the NW. That any UV continuum is visible at all to the NW indicates that the extinction must be fairly low, confirming the knots B-D as features on the nearside of the obscuring disc material.
The only ‘new’ morphological feature to come from the high resolution IR maps of the Homunculus is the linear feature at PA 320°. This is seen on the H and K images and well seen in the H-K colour map (Fig. 2), but not in the J band (Fig. 1). The position angle of the linear feature points back to the position of $`\eta `$ Carinae and coincides with that of one of the whiskers detected outside the Homunculus by Morse et al. (1998) - WSK320 (see also Weis & Duschl 1999). This ‘whisker’ has a high positive velocity (Weis & Duschl 1999) and a suggested high proper motion (Morse et al. 1998). The linear feature is also aligned with the only region of IR flux detected beyond the extent of the Homunculus, designated as NW-IR. This knot appears to be a rather diffuse cloud of extent $`\sim `$2<sup>′′</sup>, which is very red: the K/H flux ratio of the knot is twice that of the nearby region of the NW lobe of the Homunculus. NW-IR is highly polarized at H and well detected; the value of linear polarization is 36% in a 0.5<sup>′′</sup> aperture. This value is slightly larger than the corresponding values for the edge of the Homunculus nearest the knot (see Fig. 3, where the feature is apparent, and Fig. 5 for the polarization at the edge of the Homunculus). It is a real feature, as it was observed on both H and K band images (the signal-to-noise was not large enough to detect it on the K<sub>c</sub> image), and is presumably a dusty knot on the symmetry axis of the NW lobe. It was not however seen on the 2.15$`\mu `$m image of Smith et al. (1998), perhaps on account of lower signal-to-noise.
If the whiskers are high velocity ejecta then, considering their high length-to-width ratio, an extreme collimation mechanism is implied. The detection of an aligned feature inside the Homunculus suggests that this could be a spatially continuous jet feature extending from close to $`\eta `$ Carinae. There is slightly elevated polarization (3-4% above the mean of the surroundings) on this narrow feature, with a trend to lower polarization on both sides of its length (1-2% less). The elevated polarization suggests that the feature cannot be intrinsic line emission, which would dilute the polarized flux. From the HST WFPC2 images, the whiskers are however bright in \[N II\] line emission. The detectable polarization within the Homunculus and the presence of a dust cloud at the end of this linear feature suggest a confined dust+gas feature. The NW-IR dust cloud is however in strong contrast with the highly confined line emission. It cannot be directly claimed that both are aspects of the same phenomenon, although it is highly suggestive. It is suggested that the reason the linear feature is not seen at optical wavelengths and in the J band is on account of its being confined to the inside of the Homunculus, where there is enough extinction to mask it at shorter wavelengths. Clearly this feature would repay further study at high spatial resolution and with spectroscopy. No IR features were convincingly seen associated with any of the other whiskers. The jet NN is probably ballistic (e.g. Currie et al. 1996) and the whiskers may be also, so detectable remnants may be expected extending back to $`\eta `$ Carinae itself. The confusion by extinction and dust scattering however makes this a difficult task.
### 4.2 Linear Polarization structure
The polarization maps shown in Fig. 3 have a smooth appearance. The structure of the polarization vectors shows no strong evidence for diverging from the characteristic centro-symmetric pattern indicating illumination by a central source. This result is in contrast to the optical polarization maps of Warren-Smith et al. (1979), which show a slightly elliptical pattern of polarization vectors. This difference indicates that the dust must be substantially optically thick in the central waist in the optical but thin in the J to K region. The overall smoothness of the polarization maps indicates that there cannot be substantial variations in the positions of the scattering centres along the line of sight through the lobes, otherwise the polarization would vary between, say, the back and front of the lobe (assuming a Mie scattering origin, in which the polarization depends solely on scattering angle). However, if the nebula is composed of optically thick small clouds, then the scattering is always from the side facing the observer and no singly scattered flux is received from the rear side of a dense cloud. The overall smoothness of the centro-symmetric pattern also shows that there cannot be much, if any, multiple scattering occurring in the Homunculus itself. The central region in the restored K<sub>c</sub> polarization map (Fig. 4) appears to show some regions which depart from the centrosymmetric pattern. However these regions coincide with the positions of the telescope spider, where the polarization determination is unreliable.
Comparison of the polarized and unpolarized images at J and H does not show intermediate scale features ($`\sim `$ a few times the diffraction limit) with lower polarization, corresponding to emission regions. The dominant features are the lower polarization central region and the more highly polarized lobes. The polarization structure along the NN jet to the NE is very similar in both J and H and shows a plateau at 20% at distances from 3.6 to 5.8<sup>′′</sup> offset (in PA 33°). The demarcation between the regions of the higher polarization lobes and the lower polarization central region is rather abrupt, especially to the NW and W (the change from black to white on Fig. 7); to the SE it merges into the lower polarization of the lobe tilted towards the observer. The lower polarization region centred on $`\eta `$ Carinae is roughly rectangular in shape (see the H band polarization map - Fig. 7), with dimensions 5.5$`\times `$5.0<sup>′′</sup>; the longer axis is perpendicular to the major axis of the Homunculus. If this region is interpreted as the disc, with an inclination $`\sim `$90° to that of the Homunculus, then the NW region has the smaller inclination to the line of sight and the low polarization could arise on account of small scattering angles. The SE section of the disc should be behind the SE lobe and would therefore have low polarization on account of large scattering angles, be faint and be relatively obscured by dust in the foreground lobe. For an inclination of the disc of $`\sim `$35° to the line of sight, the deprojected diameter is 10<sup>′′</sup>, which is similar to the projected minor axis diameter of the nebula (PA 42°), excluding the NN jet and the skirt to the SW. It has been suggested that the central low polarization region reflects a change in grain properties (Smith et al. 1998). However, if the grains were smaller in the disc than in the lobes, then the polarization should be larger at long wavelengths.
The values of the linear polarization along the projected major axis at J, H and K<sub>c</sub> (Fig. 5) are strikingly similar. This can be compared to the polarization along the same axis at V shown in Warren-Smith et al. (1979) \[Fig. 2, with direction reversed\] and Schulte-Ladbeck et al. (1999), Fig. 9. From a low plateau of about 5% in the central region, the polarization rapidly increases to about 20% at 2.5<sup>′′</sup> NW and less steeply to 13% at 2.5<sup>′′</sup> SE. In the SE lobe the polarization values are identical to within 2% at J and H (i.e. within the errors). The shape of the polarization profile is also identical to that at V (Schulte-Ladbeck et al. 1999), but the V values are 3% higher on average, with the SE edge about 6% higher, i.e. definitely larger than the typical errors. In the NW lobe the J polarization has a pronounced peak at $`+`$3.3<sup>′′</sup>; this peak is also apparent at V but less pronounced at H. The J and H values both increase steadily from 25% at 4<sup>′′</sup> offset to 33% in J and 30% in H band at 8<sup>′′</sup> offset. At V the polarization has a different behaviour: it is higher (around 40%), flatter with offset and shows a decrease beyond 7<sup>′′</sup>.
The change in the magnitude of the peak at 3<sup>′′</sup> offset NW with wavelength can be interpreted as caused by different extinction optical depths in the various bands. At J (and V), the scattering arises predominantly in the equatorial disc, which has an inclination of about 35° and thus gives rise to polarization from scattering angles around this value; at H the optical depth is lower and the disc begins to become transparent at this wavelength, so there is a greater contribution of scattered flux from the rear lobe of the Homunculus. Further out in the NW lobe the H band polarization is systematically a few percent higher than at J, whilst the V band polarization is about 10% higher. The difference between the V and J band polarizations is readily understood in terms of the scattering at longer wavelength arising from deeper within the lobe, since the line of sight extinction optical depth is lower. The V band polarization in the NW lobe, which is tilted away from the line of sight, arises predominantly from the nearside of the lobe, where the scattering angle is closer to 90°. The elevated polarization in the H band over the J band is the reverse of the trend of polarization decreasing with wavelength, but consistent with the presence of grains small with respect to the wavelength. However the deduction from the mid-IR data and colour temperature maps of Smith et al. (1998) suggests the hotter ($`\sim `$400 K) core dust is caused by smaller ($`\sim `$0.2$`\mu `$m) grains and the cooler ($`\sim `$200 K) outer lobes by larger (1-2$`\mu `$m) silicate grains (see also Robinson et al. 1987). None of these suggested grain sizes can explain the small changes of polarization with wavelength in the optical-IR range.
In Fig. 6 (upper) the linear polarization at J, H and K<sub>c</sub> across the cut of Morse et al. (1998) is shown. The similarity of overall values is again apparent, as for the projected major axis (Fig. 5), but there are local differences. At offsets $`-`$1.5 to $`-`$2.0<sup>′′</sup>, for example, there is a distinct dip in the polarization at J by $`\sim `$4%, and at offsets $`+`$3 to $`+`$4<sup>′′</sup> the peak in emission at K<sub>c</sub> has lower polarization than at J and H. These differences can be interpreted as due to scattering from material at different depths within the nebula suffering differing amounts of extinction, giving rise to different scattering angles and hence lower polarization. The dip at $`-`$1.5 to $`-`$2.0<sup>′′</sup> is accountable by extinction at J biasing the polarization to regions nearer to the observer; the region at $`+`$3.5<sup>′′</sup> comes from a more extincted rearward region, perhaps in the equatorial disc.
### 4.3 Dust properties and structure
The most surprising result of the IR polarization measurements is that the polarization along the long axis shown in Fig. 5 is so similar in J and H. In the SE lobe the polarization values are also within a few percent of those at V. This was totally unexpected. For scattering by grains small enough to produce 30% polarization at V, the H band polarization should be 80-90%, since the particles are then much smaller than the wavelength (approaching the Rayleigh scattering regime). Alternatively the grains are very small and Rayleigh scattering occurs at all wavelengths; however it is then not clear why the polarization values are not larger. If there is a substantial unpolarized component which dilutes the polarized flux, say from a different grain size population, then it would be expected that this is wavelength dependent. The only strong wavelength dependent difference is the higher polarization at V than in J and H by about 10% in the NW lobe, the reverse of the behaviour expected for scattering by grains small with respect to the wavelength.
One possibility for the lower than expected polarization at J and H from the Mie scattering prediction could be dust emission from the warm grains. This would become more significant with longer wavelength; thus some depolarization would be expected at K. The similarity of the K<sub>c</sub> and H polarization shows that little depolarization is detected, thus refuting any influence on the J and H polarization values. If the grain properties were changing with position in the nebula, this would be expected to have an effect on the behaviour of polarization at different wavelengths. In the NW lobe the differences between V and near-IR polarization can be attributed to differing scattering angles. If the line-of-sight optical depth is low, the scattering region dominating the observed flux is deeper inside the lobe than for a high line-of-sight optical depth, and the scattering angle for the lower line-of-sight optical depth will be larger. In the SE lobe there is no large-scale difference in polarization from V to K, corresponding to a situation where the line-of-sight optical depth is low or does not vary much.
In order to highlight this conclusion, the polarization at four positions in the nebula, defined by square 0.5$`\times `$0.5<sup>′′</sup> apertures, is plotted in Fig. 8. These positions were chosen in regions where the polarization distribution is fairly flat, to give a representative estimate rather than an average over a wide range of values. The positions were chosen at ($`\mathrm{\Delta }\alpha `$, $`\mathrm{\Delta }\delta `$): ($`+`$4.75,$`+`$4.85), ($`+`$1.15,$`+`$0.55), ($`-`$0.55,$`-`$1.70) and ($`-`$2.05,$`+`$0.55) arcsec, representing offset distances of 6.8, 1.3, 1.8 and 4.9<sup>′′</sup> from $`\eta `$ Carinae respectively. Assuming that the axis of the Homunculus is tilted by about 35° to the plane of the sky (Meaburn et al. 1993; Davidson & Humphreys 1997), the scattering angles of the first and last regions are 125° and 65° (or 140° and 40° for a 50° tilt to the plane of the sky). The region at ($`+`$1.15,$`+`$0.55) arcsec is expected to be in the disc and thus have a scattering angle of about 35°. The scattering angle for the third region ($`-`$0.55,$`-`$1.70<sup>′′</sup>) is not easily predicted and the polarization was used to place it at an appropriate scattering angle by interpolation; a value of about 45° is suggested.
An attempt was also made to plot the scattered flux in Fig. 8, by scaling the total counts in the images to the near-IR photometry of Whitelock et al. (1994). Since the near-IR magnitudes are decreasing with time, an approximate extrapolation was made to the year of observation; the following total magnitudes for $`\eta `$ Carinae and the Homunculus were adopted: J 2.7; H 1.9; K 0.6. Comparison with the plot of the J magnitude of $`\eta `$ Carinae from 1970 to 1999 in Davidson et al. (1999, Fig. 3) shows the J band estimate to be reliable. The zero points for the magnitude system were taken from Koornneef (1983). No correction was attempted for the slight saturation on the central point source which affected the J and H images. The K<sub>c</sub> image was used to scale the saturated K band flux. No attempt was made to correct for line of sight extinction. The lower panel of Fig. 8 shows the resulting fluxes in the apertures. The scattered flux decreases with scattering angle, as expected for Mie scattering. The H and K fluxes generally agree fairly well whilst the J band fluxes are higher; this would be consistent with the effective scattering angles being smaller at J, since line of sight extinction means that the predominant scattering is viewed at shallower depths.
One way to attempt to visualize the extinction is to examine the wavelength variation of scattered light emerging at a position where there is an extinction feature. A position such as the pronounced drop in surface brightness at $`+`$1.3<sup>′′</sup> on the Morse et al. (1998) cut (Fig. 6) appears to be promising. The fractional depth of this feature was measured relative to the peaks by linearly interpolating between the values at $`+`$0.5 and $`+`$2.0<sup>′′</sup>. This is clearly dependent on the spatial resolution, especially when it comes to estimating the peaks, which are much sharper than the extinction hole. The result is given in Table 2, expressed in magnitudes (i.e. A<sub>λ</sub> from the WFPC2 and ADONIS measurements). For comparison, the expected extinction for a Galactic law matched to A<sub>4100A</sub> is listed in column 3 (using the Seaton (1979) extinction law as parametrized by Howarth (1983) and R=3.1). For the Morse et al. (1998) data, the F336W point appears anomalous; it may be that the peaks also suffer extinction, so that the measurement of the extinction to the globule is underestimated. Over the range 4100 - 10420Å the extinction towards the nebula drops, but less sharply than for the Galactic extinction law. There is clearly a jump in values between the HST 1.042$`\mu `$m and the ADONIS J band extinctions, probably on account of differing spatial resolution and PSF’s causing differing degrees of infilling. Treating the measurements from J to K separately in Table 2 shows that the extinction also drops less steeply than the Galactic extinction curve, strengthening the suggestion of a flatter extinction law in this region of the Homunculus. This somewhat greyer extinction favours particles larger than those typically found in the ISM, in agreement with the conclusions of Smith et al. (1998) and others on dust emission. Davidson et al. (1999) also suggested grey extinction from a comparison of the modestly wavelength dependent brightening of $`\eta `$ Carinae and the Homunculus in the optical and near-IR.
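The depth-to-extinction conversion used for Table 2 is simply the magnitude form of the measured flux ratio; a sketch with placeholder numbers (the 94% depression at 0.63$`\mu `$m quoted above corresponds to about 3 mag):

```python
import numpy as np

# A_lambda = -2.5 log10(f_dip / f_interp), with f_interp obtained by linear
# interpolation between the two flanking peaks. Offsets and fluxes below are
# placeholders, not the measured values of Table 2.
def extinction_mag(x_dip, f_dip, x1, f1, x2, f2):
    f_interp = f1 + (f2 - f1) * (x_dip - x1) / (x2 - x1)
    return -2.5 * np.log10(f_dip / f_interp)

print(extinction_mag(1.3, 0.06, 0.5, 0.9, 2.0, 1.0))   # ~3.0 mag for a 94% dip
```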
There is a strong discrepancy between the predictions of the extinction and emission of grains in the Homunculus and their polarization properties. If the grains were typically 1-2$`\mu `$m in the Homunculus, Mie theory for spherical particles predicts a maximum polarization at 1.65$`\mu `$m of 38% for a scattering angle of 120° (assuming that the size distribution is flat from 1-2$`\mu `$m and using the Draine (1985) optical constants for silicate grains). However the V band polarization from Mie theory is only 13% for such a size range of particles at the same scattering angle. Whilst Rayleigh scattering from very small ($`\lesssim `$0.1$`\mu `$m) grains produces a similar polarization at all wavelengths, it produces only a small variation in scattered flux with scattering angle (by a factor $`\sim `$2). Thus Rayleigh scattering is not capable of matching the points, as shown by the scattered intensity and polarization curves in Fig. 8 (see caption for details). It has not so far been possible to find a single size distribution which would explain the consistent polarization value over a wide wavelength range and the similar variation in scattered flux with scattering angle from J-K (see Fig. 8). To explain the consistency of the polarization, other suggestions involving a grain size distribution together with optical depth effects which ‘tune’ the scattering properties with wavelength must be invoked.
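Mie numbers of this kind can be reproduced with any standard Mie code; the sketch below uses the third-party `miepython` package (an assumption on our part, not the code used for the figures here) and rough placeholder silicate refractive indices rather than Draine's tabulated optical constants:

```python
import numpy as np
import miepython   # assumed available: pip install miepython

# Degree of linear polarization of singly scattered light, averaged over a
# flat 1-2 micron grain-size distribution, P = (|S1|^2-|S2|^2)/(|S1|^2+|S2|^2).
# m = n - ik is a rough placeholder value for astronomical silicate.
def mie_polarization(wavelength_um, m, theta_deg, radii_um):
    mu = np.array([np.cos(np.radians(theta_deg))])
    s1sq = s2sq = 0.0
    for a in radii_um:
        x = 2.0 * np.pi * a / wavelength_um          # size parameter
        S1, S2 = miepython.mie_S1_S2(m, x, mu)
        s1sq += abs(S1[0]) ** 2
        s2sq += abs(S2[0]) ** 2
    return (s1sq - s2sq) / (s1sq + s2sq)

radii = np.linspace(1.0, 2.0, 41)                    # flat size distribution
for lam in (0.55, 1.65):                             # V and H bands
    print(lam, round(mie_polarization(lam, 1.7 - 0.03j, 120.0, radii), 3))
```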
Three possibilities are suggested to explain the dust scattering structure in the Homunculus:
a) the grains possess a range in size which is similar at all positions within the nebula, but the extinction of this grain distribution at a given wavelength is such that the particles which contribute most to the scattering have the lowest extinction. In other words, when the extinction cross section is low, the scattering cross section is high. From Mie theory this is not possible for a single grain species but could occur for some grain mixture. The extinction acts to fine tune the size range contributing to scattering. It is assumed here that the scattering angle changes rather little with wavelength (hence with extinction). For the polarization to stay constant with wavelength, there would need to be a grain size distribution inside small clumps. The unit optical depth scattering surface would then be deeper at longer wavelengths on account of the lower extinction. Grain-gas or grain-grain collisions in the high velocity clouds could perhaps explain the size distribution, which would affect the surface regions more strongly;
b) the effective scattering angle alters with wavelength on account of the differing extinction. To longer wavelengths the small dust globules become more transparent, resulting in an increase in scattering angle, and a greater proportion of the scattered flux arises from towards the rear side of the lobes. This could compensate for the increase of polarization with wavelength by providing less polarized flux. Whilst this could work for the NW lobe, it is not easily able to explain the polarization behaviour in the SE lobe. Here the scattering angle increases with increasing penetration (lower line-of-sight extinction) into the lobe, and so the polarization should be expected to increase with increasing wavelength. In this case the relevant parameter is again the line of sight extinction, but it affects the scattering angle;
c) the grains are aligned by the macroscopic velocity field of the Homunculus such that it is their alignment that controls the polarization, rather than the individual grains. A rather extreme alignment, such as strings of dust particles, would be required, so that it would be the direction of the incident radiation on a grain, rather than its intrinsic scattering properties, which would have the greater effect. Given that there are highly collimated ejecta observed outside the Homunculus (Weis & Duschl 1999; Morse et al. 1998), and that such features appear to extend inwards towards $`\eta `$ Carinae, the suggestion of an influence of macroscopic grain alignment on the scattered radiation may not be completely ruled out.
Suggestion (c) finds support in the detection of 12.5$`\mu `$m polarization by Aitken et al. (1995), who first showed that there is organized grain alignment in the Homunculus. The maximum values of 12.5$`\mu `$m polarization were about 5% and the E-vectors are oriented mostly radially at the edges (Aitken et al. 1995, Fig. 2), although the pattern is complex. It is notable that the 12.5$`\mu `$m polarization is largest in each lobe where the near-IR polarization is greatest - viz. at the ends of the lobes. This suggests an intrinsic connection between the grains at the two wavelengths, rather than the polarization arising in different grain groups at the different wavelengths. Aitken et al. (1995) discuss radiation streaming as a mechanism to provide suprathermal grain spin of paramagnetic grains, which then precess about the magnetic field direction. Field strengths up to milli-Gauss were suggested and a field orientation in the lobes orthogonal to the major axis was favoured (Aitken et al. 1995). One possible scenario which could relate the presence of aligned grains and the constancy of optical-IR polarization with wavelength has the grains in the foreground lobes of the Homunculus acting as the aligning medium for the scattered light from the rear side. By comparison with the case of Galactic grain alignment, the amount of extinction needed to produce 30% polarization (V band) is E<sub>B-V</sub> $`\gtrsim `$3.3 (see e.g. Whittet 1992, Fig. 4.2). This is a large extinction but not ruled out, given the deduced extinction of a few magnitudes for the ‘dark’ regions of the Homunculus (see Table 2). In addition the grains in the Homunculus are probably very different from those in the general interstellar medium, having been recently ejected from a star with anomalous abundances. It is clear that grain alignment cannot be wholly responsible for the constancy of the polarization with wavelength but may be a contributor. Further observations to explore the polarization in the 3-5$`\mu `$m region, where there is a mixture of scattering and emission, would be particularly valuable.
## 5 Conclusions
The first high spatial resolution adaptive optics near-IR polarization maps of $`\eta `$ Carinae and the Homunculus nebula have been presented. Since the Homunculus is dominated by scattering, its appearance in the near-IR is rather similar to that observed in the optical, and a comparison of the AO results with an HST 1.04$`\mu `$m WFPC2 image was presented showing essentially the same features. The most important single result from this work is the overall similarity of the linear polarization from the V band to 2.2$`\mu `$m, within a few percent for the SE lobe, and the lower values at J and H compared with V for the NW lobe. Image restoration was applied to a set of 2.15$`\mu `$m continuum images to determine the polarization distribution in the near vicinity of $`\eta `$ Carinae. A tentative value of 18% for the polarization of the Weigelt et al. speckle knot D was determined, suggesting that it is a dust cloud within the equatorial disc strongly illuminated by $`\eta `$ Carinae. Various models are discussed in order to explain the flat distribution of polarization with wavelength over the Homunculus. A possible association of a narrow feature within the NW lobe of the Homunculus with one of the highly collimated emission line ‘whiskers’ outside the nebula deserves further investigation.
# Testing the multiple supernovae versus $`\gamma `$-ray burst scenarios for giant HI supershells
## 1 Introduction
For several decades, 21 cm surveys of spiral galaxies have revealed the puzzling existence of expanding giant HI supershells (see e.g. Tenorio-Tagle & Bodenheimer 1988 for a review). These nearly spherical structures have very low density in their interiors and high HI density at their boundaries, and they expand at velocities of several tens of $`\mathrm{km}\mathrm{s}^{-1}`$. The radii of these shells are much larger than those of ordinary supernova remnants and often exceed $`1`$ kpc; their ages are typically in the range of $`10^6`$–$`10^8`$ years. Heiles (1979) denominated as supershells those whose inferred kinetic energies are $`\gtrsim 3\times 10^{52}`$ ergs. The Milky Way contains several tens of them (Heiles 1979; Heiles, Reach, & Koo 1996), and in one case the estimated kinetic energy is as high as $`10^{54}`$ ergs. Similar supershells are also observed in other nearby galaxies.
Whereas it is clear that these HI supershells result from deposition of an enormous amount of energy in the interstellar medium, the energy source is still a subject of debate. Collisions with high-velocity clouds (Tenorio-Tagle 1981) could account for those cases where only one hemisphere is present, and the required input energy is not too large. However, it is unclear how such collisions could produce the near-complete ringlike appearance observed in some cases (Rand & van der Hulst 1993).
Small shells of radii $`\sim `$200–400 pc and energies $`\lesssim 3\times 10^{52}`$ ergs are often explained as a consequence of the collective action of stellar winds and supernova explosions originating from OB star associations (McCray & Kafatos 1987; Shull & Saken 1995). The winds from the stars of the association create a bubble in the interstellar medium (ISM) that is filled with hot gas. The bubble further grows when the stars explode as supernovae, releasing their energy into the ISM. Multiple SN explosions are in principle a viable scenario even for the largest supershells, although this would require very large OB associations, not typically observed in nearby galaxies (Kennicutt, Edgar & Hodge 1989).
Another possibility that has been put forward is that giant supershells could be the remnants of gamma-ray bursts (GRBs) (Efremov, Elmegreen & Hodge 1998; Loeb & Perna 1998). In fact, if GRBs occur in galaxies and can have energies $`10^{53}`$ ergs, then remnants in the form of giant bubbles are unavoidable. Notice, however, that this conclusion relies on the assumption that the ratio of $`\gamma `$-ray energy to kinetic energy of the ejecta is very small, as required by the popular ’internal shock’ models for GRBs. If, on the other hand, this were not the case, as the analysis of GRB 970508 seems to imply, then the kinetic energy of GRBs would not be sufficient to produce a giant remnant (Paczyński 1999).
The nature of the energy source can be more easily identified in young supershells. The ones due to multiple SNe would still show ongoing activity. Bubbles powered by a GRB explosion could instead be identified by signatures of the radiation emitted by the cooling gas, which had been heated and ionized by the GRB afterglow (Perna, Raymond & Loeb 2000). However, after a time $`t\gtrsim 10^5`$ yr, the imprints of this radiation have faded away. Old supershells remain, therefore, the most difficult to understand (among the observed supershells, only about 10% seem to contain OB associations and could therefore be more naturally attributed to multiple SNe). However, given their ages, they are by far the most abundant in galaxies. An attempt to identify their energy source has been recently made by Rhode et al. (1999). Assuming that the HI holes are created by multiple SNe, and that the SNe represent the high-mass population (OB stars) of a cluster with a normal initial mass function, they observed that the upper main-sequence stars (late B, A and F) should still be present in the cluster. However, their observations showed that in several of the holes the observed upper limits for the remnant cluster brightness are inconsistent with the expected values. Therefore their test suggested problems with the multiple SNe scenario. On the other hand, no evidence that the holes could be due to GRBs was found either. More recently, Efremov, Ehlerova & Palous (1999) discussed possible differences between the structures produced by a GRB and by an OB association, based on their shapes, expansion velocities, and fragmentation times.
Here we propose a new way of testing the multiple SNe versus GRB model to power supershells. This is based on the fact that SNe inject metals in the ISM in which they explode. As a result, if a supershell has been powered by multiple SNe, the abundances of some specific metals in its interior should be enhanced with respect to the typical values in the ISM surrounding the shell (this is commonly observed in young supernova remnants; e.g. Canizares & Winkler 1981). As the high-mass stars which power the supershell explode as Type II SNe, the enhancement should be particularly pronounced in elements such as Oxygen, Silicon, Neon, Magnesium, but not in others (e.g. Nomoto et al. 1997). We present line diagnostics that could help detect such unusual abundances.
If a supershell has been powered by a GRB, on the other hand, no peculiar metal enhancement is expected. The highly relativistic expansion of the ejecta requires that the baryonic load be very small ($`M\lesssim 10^{-4}M_{}`$; even if GRBs were associated with SNe, as has been suggested in the case of SN 1998bw \[Galama et al. 1998\], and there were some mass ejected at later times, it would be just that of a single SN, and therefore highly diluted within the large volume of the supershell). Therefore, detection of peculiar abundances in the medium within a supershell could provide a clue to the energy source that powered it. Knowledge of the fraction of HI supershells that is likely to be associated with a GRB event would lead to important constraints on the energetics and rates of GRBs, as well as on their location within a galaxy.
## 2 Evolution of the remnant and metal injection
We consider a model in which the ambient ISM consists of gas of uniform density $`n_0`$, and treat the dynamical evolution of the supershell in a similar fashion to that of supernova remnants (SNRs). Whereas the initial stages are very complex and depend on details of the environment and on how the energy is injected, the late phases of the evolution are very similar in the two cases and do not depend much on details. It is these late evolutionary phases that we are interested in.
Once the mass of the swept-up material exceeds the initial mass of the ejecta (but while radiative losses are still negligible by comparison with the initial energy), the remnant enters a phase of adiabatic expansion (Spitzer 1978). This is described by the self-similar solution derived by Sedov:
$$R_s=1.15\left[\frac{E_0t^2}{\rho _0}\right]^{1/5},$$
(1)
where $`R_s`$ is the radius of the shock, $`E_0`$ the energy of the explosion, and $`\rho _0=\mu m_pn_0`$. Let $`r`$ be the radial coordinate in the interior of the remnant, $`x\equiv r/R_s`$, and $`v_s`$ the shock velocity; then the density profile can be approximated analytically by (Cox & Anderson 1982)
$$\rho =4\rho _0\left[\frac{5}{8}+\frac{3}{8}x^8\right]x^{4.5}\mathrm{exp}\left[-\frac{9}{16}(1-x^8)\right],$$
(2)
the pressure by
$$p=\frac{3}{4}\rho _0v_s^2\left[\frac{5}{8}+\frac{3}{8}x^8\right]^{5/3}\mathrm{exp}\left[-\frac{3}{8}(1-x^8)\right],$$
(3)
and the velocity of an element of the shell at position $`x`$ by
$$v=\frac{3}{4}v_s\frac{x}{2}\left[\frac{1+x^8}{\frac{5}{8}+\frac{3}{8}x^8}\right].$$
(4)
The temperature in the interior is then found from Equations (2) and (3) and the use of the equation of state, $`p=nkT`$.
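As a concrete illustration, the following sketch evaluates Equations (1)–(4) and recovers the interior temperature from $`p=nkT`$. The input values ($`E_0`$, $`n_0`$, $`\mu `$, and the age) are our own illustrative choices, not numbers taken from the text:

```python
import numpy as np

# Illustrative inputs -- our own choices, not values quoted in the text
E0 = 5e52                     # energy driving the remnant [erg]
n0 = 1.0                      # ambient number density [cm^-3]
mu = 1.4                      # mean mass per particle in units of m_p (assumed)
m_p, k_B = 1.6726e-24, 1.3807e-16
rho0 = mu * m_p * n0          # ambient mass density [g cm^-3]
t = 1.0e6 * 3.156e7           # age: 10^6 yr, assumed still in the adiabatic phase

# Equation (1); in the Sedov phase v_s = dR_s/dt = (2/5) R_s / t
R_s = 1.15 * (E0 * t**2 / rho0) ** 0.2
v_s = 0.4 * R_s / t

x = np.linspace(0.2, 1.0, 5)                      # x = r / R_s
bracket = 5.0 / 8.0 + 3.0 / 8.0 * x**8
rho = 4.0 * rho0 * bracket * x**4.5 * np.exp(-9.0 / 16.0 * (1.0 - x**8))              # Eq. (2)
p = 0.75 * rho0 * v_s**2 * bracket ** (5.0 / 3.0) * np.exp(-3.0 / 8.0 * (1.0 - x**8))  # Eq. (3)
v = 0.75 * v_s * (x / 2.0) * (1.0 + x**8) / bracket                                    # Eq. (4)

# Equation of state p = n k T with n = rho / (mu m_p)
T = p * mu * m_p / (rho * k_B)
print(f"R_s = {R_s / 3.086e18:.1f} pc")
for xi, Ti in zip(x, T):
    print(f"x = {xi:.2f}   T = {Ti:.3e} K")
```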
The remnant continues to expand adiabatically up to the time at which radiative cooling begins to dominate, that is when the gas temperature behind the front reaches the value $`T\simeq (5-6)\times 10^5`$ K, corresponding to the maximum of the cooling curve. By the time the remnant has arrived at this stage, approximately half of its thermal energy has been radiated away, and a cold dense shell is formed, containing about half of the mass of the swept-up gas. The cavity bounded by this shell contains hot, low-density gas that continues to expand nearly adiabatically (Lozinskaya 1992; Cui & Cox 1992 for the cases where thermal conduction is neglected). The evolution of the remnant following the formation of the cold shell is well described by a pressure-driven snowplough (PDS; Cox 1972; McKee & Ostriker 1977; Lozinskaya 1992). The time at which the PDS phase starts is given by Cioffi, McKee & Bertschinger (1988),
$$t_{\mathrm{PDS}}=4\times 10^4E^{0.23}n_0^{-0.3}\eta ^{-0.35}\mathrm{yr},$$
(5)
where $`\eta `$ is the metallicity ($`\eta =1`$ for solar abundances). This value is similar to that derived by other authors (e.g. Chevalier 1974; Falle 1981). Differences are mainly due to the use of different cooling functions, although the shell velocity predicted at the same radius is very similar for all calculations. In the PDS phase, the radius of the shell evolves as $`R_s\propto t^{0.31}`$ (Chevalier 1974). During this phase the remnant radiates most of its energy away, and therefore the physical variables describing the shell evolution are expected to deviate from the self-similar solution given in Equations (2–4) (see e.g. Weaver, McCray & Castor 1977). However, as already mentioned, when thermal conduction is neglected the hot center of the bubble continues to evolve nearly adiabatically, though cooling eats into it from the outside. Here we only consider lines arising from gas at temperatures $`\gtrsim 10^6`$ K, for which cooling is not yet important, and therefore it is a good approximation to keep the evolution as given by Equations (2–4) in the interior of the bubble, where these lines are produced. Moreover, notice that metallicity effects can significantly alter the early evolution of the remnants; however, during the late PDS phase the differences due to metallicity are found to be negligible (Thornton et al. 1998; Goodwin, Pearce & Thomas 2000).
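A minimal numerical rendering of Equation (5) and of the PDS radius scaling might look as follows; we assume here that $`E`$ is the input energy in units of $`10^{51}`$ ergs, a reading that the text does not state explicitly:

```python
def t_pds_yr(E51: float, n0: float, eta: float = 1.0) -> float:
    """Onset of the pressure-driven snowplough phase, Eq. (5).
    Denser or more metal-rich gas cools sooner, hence the negative exponents."""
    return 4.0e4 * E51**0.23 * n0**(-0.3) * eta**(-0.35)

def r_shell(t_yr: float, t_pds: float, r_pds: float) -> float:
    """Shell radius in the PDS phase, R_s ~ t^0.31 (Chevalier 1974),
    matched at t_pds to the Sedov radius r_pds obtained from Eq. (1)."""
    return r_pds * (t_yr / t_pds) ** 0.31

print(t_pds_yr(E51=1.0, n0=1.0))   # ~4e4 yr for fiducial values
```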
Observations of supershells yield their radii and expansion velocities, from which their kinetic energies can be inferred (e.g. Heiles 1979). The kinetic energy, however, is only a small fraction of the total energy released in the ISM. A large fraction of the energy is, in fact, radiated away by the cooling bubble. Numerical simulations of supernova explosions show that only a fraction $`f\simeq 4\%`$ of the energy of the explosion is found as kinetic energy in the very late phase of evolution of the remnant (Chevalier 1974; Goodwin, Pearce & Thomas 2000). Therefore, if an old supershell has an inferred kinetic energy $`E_\mathrm{K}`$, and its energy input is provided by multiple SNe, the number of SNe required is $`N_{}\simeq E_\mathrm{K}/(fE_{\mathrm{SN}})`$. The energy released by a SN explosion is typically taken to be $`E_{\mathrm{SN}}=10^{51}`$ ergs, in agreement with the value inferred from the modelling of SN 1987A and SN 1993J (Shigeyama & Nomoto 1990; Shigeyama et al. 1994). However, before they explode, the most massive stars of the OB association contribute to the mechanical energy of the bubble with their winds (McCray & Kafatos 1987; Heiles 1987; Shull & Saken 1995). The wind energy varies with optical luminosity (Abbott 1982), but, as an average, Heiles (1987) assumes a value of $`1.17\times 10^{51}`$ ergs per star. This brings the number of required stars to $`N_{}\simeq E_\mathrm{K}/(fE_{\mathrm{SN}})`$ with $`f\simeq 0.05`$. Because of the uncertainties in these estimates, we prefer to adopt a more conservative value, and therefore we take $`f=10\%`$ in our calculations.
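The resulting estimate is a one-line calculation; for the fiducial supershell considered in §3 it gives:

```python
E_K  = 5e53    # inferred kinetic energy of the supershell [erg] (fiducial value of Sec. 3)
E_SN = 1e51    # energy released per SN [erg]
f    = 0.10    # fraction surviving as kinetic energy at late times

N_star = E_K / (f * E_SN)
print(N_star)  # -> 5000 OB stars
```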
If multiple SNe have provided the power for a supershell of energy $`E_\mathrm{K}`$, we assume that initially there was a cluster of $`N_{}`$ OB stars distributed according to an initial mass function (IMF). The IMF for such stars can be written as (Garmany, Conti & Chiosi 1982)
$$f_{\mathrm{IMF}}(M_{})\equiv dN_{}/dM_{}\propto M_{}^{-\beta },$$
(6)
where $`\beta \simeq 2.0`$–$`2.7`$. Here we adopt $`\beta =2.3`$, and normalize the distribution so that $`\int _{M_{\mathrm{min}}}^{M_{\mathrm{max}}}f_{\mathrm{IMF}}(M_{})𝑑M_{}=N_{}`$.
The main-sequence lifetimes of massive stars are given approximately by (Stothers 1972; Chiosi, Nasi, & Sreenivasan 1978)
$$t_{}\simeq \{\begin{array}{cc}3\times 10^7(M_{}/10M_{})^{-1.6}\mathrm{yr}\hfill & \text{if }7M_{}\le M_{}\le 30M_{}\hfill \\ 9\times 10^6(M_{}/10M_{})^{-0.5}\mathrm{yr}\hfill & \text{if }30M_{}\le M_{}\le 80M_{}\hfill \end{array}.$$
(7)
The least massive star that is expected to terminate as a Type II SN has initial mass $`M_{\mathrm{min}}=7M_{}`$ (Trimble 1982). We take $`M_{\mathrm{max}}=100M_{}`$ as the mass of the most massive star of the association (Shull & Saken 1995). We consider a model of an OB association with coeval star formation (see e.g. Shull & Saken 1995); that is, all stars are assumed to be formed at once with no age spread. We do not expect very sensitive variations in our results with the introduction of a spread in birth dates, as long as most of the stars explode in the early phase of the supershell. This assumption is consistent with the observation that most old supershells do not show any more signs of an OB association within them.
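A compact sketch of Equations (6) and (7): the IMF normalization constant and the inversion $`M_{}(t)`$ of the lifetime relation. The closed-form inversion is ours, but follows directly from Equation (7), whose two branches join smoothly at $`30M_{}`$:

```python
BETA, M_MIN, M_MAX = 2.3, 7.0, 100.0      # adopted IMF slope and mass limits [M_sun]

def imf_norm(n_star: float) -> float:
    """Constant A in f_IMF = A * M**-BETA such that the integral
    from M_MIN to M_MAX equals n_star, Eq. (6)."""
    integral = (M_MAX**(1.0 - BETA) - M_MIN**(1.0 - BETA)) / (1.0 - BETA)
    return n_star / integral

def lifetime_yr(m: float) -> float:
    """Main-sequence lifetime, Eq. (7)."""
    if m <= 30.0:
        return 3.0e7 * (m / 10.0) ** (-1.6)
    return 9.0e6 * (m / 10.0) ** (-0.5)

def mass_dying_at(t_yr: float) -> float:
    """M_*(t): invert Eq. (7) branch by branch (mass decreases as t grows)."""
    if t_yr >= lifetime_yr(30.0):                 # stars below 30 M_sun
        return 10.0 * (t_yr / 3.0e7) ** (-1.0 / 1.6)
    return 10.0 * (t_yr / 9.0e6) ** (-2.0)        # stars above 30 M_sun

A = imf_norm(5000.0)                              # the N_* estimated above
print(mass_dying_at(1.0e7))                       # ~20 M_sun star dies at 10^7 yr
```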
Metallicity yields in SNe have been obtained in numerical simulations by a number of authors. Here we use the results of the computation made by Nomoto et al. (1997), who have calculated metallicity yields for several values of the progenitor mass between 13 and 70 $`M_{}`$. For other values of masses between our $`M_{\mathrm{min}}`$ and $`M_{\mathrm{max}}`$, we interpolate and extrapolate their values. Metals are injected in the ISM when the stars of the association become SNe. Let $`X(M_{})`$ be the yield of a star of mass $`M_{}`$ in element $`X`$, and let $`M_{}(t)`$ be the mass of a star with lifetime $`t`$, as given by Equation (7). The amount of mass of element $`X`$ that is injected in the ISM between the times $`t_1`$ and $`t_2=t_1+\mathrm{\Delta }t`$ is then
$$\mathrm{\Delta }M_X=\int _{M_{}(t_2)}^{M_{}(t_1)}f_{\mathrm{IMF}}(M_{})X(M_{})𝑑M_{}.$$
(8)
We take the initial time of the supershell to be that at which the first supernova goes off. Our final results are not very sensitive to this particular choice as long as the time at which the first star explodes is much smaller than the lifetime of the supershell. We assume spherical symmetry, and slice the volume of the bubble into a number of shells. Each shell is followed in time, and the concentrations of the ions of each element are computed, allowing for time-dependent ionization. The stepsize $`\mathrm{\Delta }t`$ is chosen so that $`\mathrm{\Delta }t/t\ll 1`$ at every time. After each time increment $`\mathrm{\Delta }t`$, a new shell is added while the others evolve according to Equations (2-4). During each $`\mathrm{\Delta }t`$, an amount of mass $`\mathrm{\Delta }M_X`$ as given by Equation (8) is injected into the expanding bubble. How these extra elements precisely mix with the medium in the supershell is a very complicated problem, and its solution depends on details of the model. However, we consider it reasonable to assume that mixing is negligible between SN material that is injected in the supershell at a given time, and ISM that is accreted by the supershell at much later times (notice that the lifetime of each SN, i.e. the time that it takes the shock to slow down, is much smaller than the lifetime of the supershell in the case where $`N_{}\gg 1`$). Within the supershell, we then make the simplest assumption of uniform mixing of the ejecta with the medium.
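The injection step then reduces to a quadrature of Equation (8) between the masses dying at $`t_1`$ and $`t_2`$. The sketch below reuses mass_dying_at from the previous example; the yield table is a placeholder with invented numbers, standing in for the actual Nomoto et al. (1997) values:

```python
import numpy as np

# Placeholder oxygen-yield table (progenitor mass [M_sun] -> ejected O [M_sun]).
# These numbers are illustrative only, not the real Nomoto et al. (1997) table.
M_grid = np.array([13.0, 15.0, 20.0, 25.0, 40.0, 70.0])
X_grid = np.array([0.2, 0.45, 1.4, 2.8, 7.0, 15.0])

def yield_X(m):
    # np.interp clips outside the table, i.e. flat extrapolation at both ends
    return np.interp(m, M_grid, X_grid)

def delta_M_X(t1_yr: float, t2_yr: float, A: float, n_sub: int = 200) -> float:
    """Eq. (8): mass of element X injected between t1 and t2,
    with A the IMF normalization of the previous sketch."""
    m_hi = mass_dying_at(t1_yr)     # M_*(t1) > M_*(t2): mass decreases with time
    m_lo = mass_dying_at(t2_yr)
    m = np.linspace(m_lo, m_hi, n_sub)
    f = A * m**(-2.3) * yield_X(m)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m)))   # trapezoid rule
```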
Finally, a modified version of the Raymond-Smith emission code (Raymond & Smith 1977) is used to compute, at each time and for each shell, ionization and recombination rates, the time-dependent ionization state, and the $`X`$-ray spectrum.
## 3 Diagnostics of metal enhancements
Figure 1 shows the enhancements (relative to standard solar values and for $`n_0=1`$ cm<sup>-3</sup>) in the abundances of Oxygen, Silicon and Neon for a supershell of age $`t=5\times 10^7`$ yr and kinetic energy $`E_\mathrm{K}=5\times 10^{53}`$ ergs (as long as $`N_{}\gg 1`$, so that we can apply our assumption about mixing, our results at late times are roughly independent of $`E_\mathrm{K}`$ or, equivalently, $`E_0`$: since $`R_s\propto E_0^{0.32}`$, the volume of the shell is $`V\propto E_0`$; the mass injected in element $`X`$ is $`\mathrm{\Delta }M_X\propto E_0`$, and therefore the number density $`n_X=\mathrm{\Delta }M_X/V`$ is independent of $`E_0`$). As explained in §2, the kinetic energy as measured at late times is assumed to be on the order of 10% of the total input energy in SNe. The total amount of Oxygen mass injected by the SNe is then $`\sim 10^4M_{}`$, for the assumed $`E_\mathrm{K}`$. The enhancements in the abundances are more pronounced in the inner regions of the supershell, as a consequence of the fact that most of the extra mass is injected at early times, due to the shorter lifetimes of the most massive stars. Notice that, whereas these results are shown for bubbles accreting from an ISM with solar metallicity, a stronger enhancement in the abundances could be observed for bubbles growing in a medium with low metallicity, such as that of the Large Magellanic Cloud, where the abundance of Oxygen is about half the solar value (Vancura et al. 1992). On the other hand, if mixing is more efficient than assumed, then more Oxygen would be found in the outer colder regions of the supershell, therefore reducing its enhancement. As already emphasized, unusual metal enhancements are only expected in supershells due to multiple SNe, but not in those powered by GRBs.
Figure 2 shows the emitted power in some of the strongest $`X`$-ray lines as a function of the position within the supershell. This is shown at various ages of the supershell. Due to the overall cooling of the shell with time, the hot region ($`T\gtrsim 10^6`$ K) in which these lines are produced moves towards its inner part. As a consequence, the enhancements inferred from measurements of these lines will increase with time. This is illustrated by the solid lines of Figure 3, where several line ratios are shown at various ages of a supershell powered by multiple SNe. Here we have plotted ratios between lines of elements (such as O, Si, Ne, Mg) which are particularly enhanced by the SN explosion, and lines of elements (such as Fe, N) which are less affected. The enhancements are best inferred by using ratios of two lines of similar energy from different elements. These depend on the relative abundances of the two elements and on the ionization fractions for each element, but have no other significant dependence on the electron temperature $`T`$ (ratios between two lines close in energy are not much affected by interstellar absorption either). Thus the abundances can be determined once the ionization fractions are known. These, in turn, can be found from ratios of lines at approximately the same energy from different ionization stages of the same element. In cases where the continuum is observable, measurements of line strengths relative to the local continuum might permit the determination of absolute abundances for an ionic species (Winkler et al. 1981).
The dotted lines in Figure 3 show the same line ratios used for the case of multiple SNe, but for standard solar abundances, as they would appear if the supershell had been powered by a GRB. We find that enhancements in some specific line ratios by a factor of a few are expected in a supershell produced by multiple SNe with respect to a supershell due to a GRB. However, we need to emphasize that the precise value of the enhancement in each ion of each element will of course vary depending on the details of mixing within the shell. Nonetheless, what we hoped to identify are general features that a supershell due to multiple SNe is expected to have, as opposed to a supershell powered by a GRB. That is, a strong enhancement in the abundances of some specific elements such as O, Si, Ne, Mg, but not others. Moreover, among ions of the same element, a higher enhancement is expected to be seen in lines from a high ionization state as compared to lines from a low ionization state, the latter being produced in the outer cold shell, which has most of the mass accreted from the ISM at later times.
An issue that we have neglected in our model is that of thermal conduction across the interface between the dense outer shell and hot interior. Fast electrons from the hot interior can penetrate significant distances in the cold shell before depositing their energy in collisions with the gas, thus transferring heat across the contact discontinuity. The resulting heating raises the pressure of the inner edge of the shell, which then expands into the hot interior. It has been shown that tangled magnetic fields are able to partially suppress thermal conduction, but Slavin & Cox (1993) showed that even a small amount of conduction can lead to effective cooling in the end. If thermal conduction operates, in fact, bubbles and superbubbles would be colder but denser in their interiors, and therefore their $`X`$-ray emission would be suppressed. The importance of the effect of thermal conduction in bubbles is still an open issue, and the observational evidence appears mixed indeed. While some bubbles are fainter in $`X`$-ray than predicted by the theory, others are brighter, by up to an order of magnitude (Mac Low 1999). A detailed modelling of the $`X`$-ray emission under the various circumstances is not within the scope of our paper, and therefore we have adopted a simple model.
Within the framework of this model, the brightest $`X`$-ray lines in the late phase of a supershell of $`E_\mathrm{K}=5\times 10^{53}`$ ergs are expected to have luminosities in the range of $`10^{31}`$–$`10^{32}`$ ergs s<sup>-1</sup>. For supershells at galactic distances of a few kpc, these lines are within the detection capability of CHANDRA or XMM. In cases where the emission lines are too faint to be detected, it would be useful to probe supershells in absorption. In fact, given their sizes on the sky ($`\sim `$ a few deg<sup>2</sup> \[Heiles 1979\] for those in our galaxy), it is likely that a bright source can be found behind them. Metal enhancements could then be detected by measuring the equivalent widths of absorption lines in the spectrum of the source. Again, it would be useful to compare strengths of absorption lines of the most enriched elements with those of elements which are not affected by SNe yields, and, among ions of the same element, to compare strengths of absorption lines from different ionization states. It would be worthwhile to attempt this test, either in emission or in absorption, especially with the most energetic supershells. Several have been observed which require an input energy $`\gtrsim 10^{54}`$ ergs, both in our Galaxy (Heiles 1979), and in nearby ones, such as, for example, NGC 4631 (Rand & van der Hulst 1993) or NGC 3556 (Giguere & Irwin 1996).
## 4 Conclusions
The energy source which powers giant HI supershells is still a subject of debate. Its identification is particularly difficult in the late phases of evolution of the remnant. While hemispherical supershells could perhaps be attributed to collisions with high-velocity clouds, the near-complete ringlike ones could be more easily explained either by multiple SNe from an OB association or by a GRB.
In this paper we have identified signatures that could help discriminate between the two models. Namely, we have shown that supershells powered by multiple SNe are likely to show enhanced abundances of the metals produced by the SNe themselves, and we have proposed some line diagnostics that could help reveal these unusual features.
Being able to discriminate between the multiple SNe and the GRB scenario for the production of HI supershells would help constrain GRB rates and energetics, as well as their location within a galaxy.
# A new approach to the ground state of quantum-Hall systems
## Abstract
I present variational solutions of the many-body Schrödinger equation for a two-dimensional electron system in strong magnetic fields, which have, at $`\nu =1`$, an energy about 46% lower than that of the Laughlin liquid. At $`\nu =2/3`$, I obtain an energy about 29% lower than was reported so far.
The need to explain the integer and the fractional quantum Hall effects (QHE) resulted in tremendous theoretical efforts to understand the nature of the ground state of a system of strongly interacting two-dimensional (2D) electrons in strong magnetic fields. The modern, widely accepted explanation of the fractional QHE is based on the theory of Laughlin, who showed that a special quantum state – an incompressible quantum liquid with fractionally charged quasiparticles – has the lowest ground state energy at the Landau-level filling factors $`\nu =1/m`$, $`m=1,3,5,\dots `$. In this Letter I present an infinite set of variational many-body wave functions for a 2D electron system (ES) in strong magnetic fields $`𝐁`$, and an exact analytical method of calculating the energy of the new states. The new wave functions describe the system at all Landau-level filling factors $`\nu \lesssim 1`$, continuously as a function of magnetic field. In certain intervals of $`\nu `$ (approximately, at $`\nu \gtrsim 1/2`$) the calculated upper limits to the ground-state energy are substantially lower than was reported up till now.
The main theoretical efforts so far have been concentrated on the region of the very strong ($`\nu <1`$) magnetic fields. The nature of the ground state of a completely filled lowest Landau level ($`\nu =1`$) seemed to be well understood: the Hartree-Fock many-body wave function
$$\mathrm{\Psi }_{HF}=\frac{1}{\sqrt{N!}}det|\psi _{L_j}(𝐫_i)|,\quad L_j=0,1,\dots ,N-1,$$
(1)
– a Slater determinant built from the single-particle lowest-Landau-level states
$$\psi _L(𝐫)=\frac{(z^{\ast })^L}{\lambda \sqrt{\pi L!}}\mathrm{exp}(-zz^{\ast }/2),$$
(2)
– was proposed about 20 years ago and was considered to be the cornerstone of the theory \[here $`z=(x+iy)/\lambda `$ is a complex coordinate of an electron, $`\lambda ^2=2\hbar c/eB`$, $`L=0,1,2,\dots `$ is the angular momentum quantum number, and $`\hbar `$, $`c`$, and $`e`$ are the Planck constant, velocity of light and the electron charge, respectively\]. The state (1) is a special case of the Laughlin liquid. It is characterized by the uniform electron density $`n_s=1/\pi \lambda ^2`$ and the energy per particle
$$ϵ_{HF}=-\pi /2=-1.5708$$
(3)
in the $`B`$-independent energy units $`e^2\sqrt{n_s}`$.
The wave function (1) describes electrons rotating around the same center (arbitrarily placed at the origin) with different angular momenta. However, in an infinite and uniform 2DES all points of the 2D plane are equivalent, and none of them can be singled out as a special point around which the electrons would prefer to rotate. I consider, as trial functions for the ground state of the 2DES, an infinite set of many-body wave functions
$$\mathrm{\Psi }_L=\frac{1}{\sqrt{N!}}det|\chi _{ij}^L|,$$
(4)
$$\chi _{ij}^L\equiv \chi _L(𝐫_i,𝐑_j)=\psi _L(𝐫_i-𝐑_j)e^{i\pi 𝐫_i\cdot (𝐁\times 𝐑_j)/\varphi _0},$$
(5)
which describe a system of electrons rotating around different points $`𝐑_j`$ with the same angular momentum. Here $`\varphi _0=hc/e`$ is the flux quantum, and $`L=0,1,2,\dots `$. The points $`𝐑_j`$ are chosen in view of minimizing the Coulomb energy of the system, and in a pure 2DES (without disorder) coincide with points of a triangular (or a square) lattice, uniformly distributed over the 2D plane with the average density $`n_s`$ ($`n_sa^2=2/\sqrt{3}`$ and 1 for the triangular and the square lattice, respectively, $`a`$ is the lattice constant). All the states $`\mathrm{\Psi }_L`$ are the eigenfunctions of the kinetic energy operator, with the energy $`\hbar \omega _c/2`$ and the angular momentum $`L`$ per particle. The state $`\mathrm{\Psi }_{L=0}`$ is the Wigner crystal state. The states $`\mathrm{\Psi }_{L>0}`$ have been discussed in the literature, but only in Hartree approximation, when the neighbour single-particle states $`\psi _L`$ do not overlap. As will be seen below, the most interesting physics takes place just in the opposite case of a strong overlap of the states $`\psi _L`$ centered at different lattice points $`𝐑_j`$.
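For concreteness, Equations (2) and (5) can be transcribed directly into code. In the sketch below lengths are measured in units of $`\lambda `$, in which case $`B/\varphi _0=1/\pi \lambda ^2`$ follows from $`\lambda ^2=2\hbar c/eB`$ and $`\varphi _0=hc/e`$; the rest is a literal transcription of the formulas:

```python
import numpy as np
from math import factorial, pi

def psi_L(x, y, L, lam=1.0):
    """Single-particle lowest-Landau-level state of angular momentum L, Eq. (2)."""
    z = (x + 1j * y) / lam
    return np.conj(z)**L * np.exp(-np.abs(z)**2 / 2.0) / (lam * np.sqrt(pi * factorial(L)))

def chi_L(r, R, L, lam=1.0):
    """Orbital centered at lattice point R with the magnetic phase factor, Eq. (5).
    For B along z, r.(B x R) = B (R_x y - R_y x), and pi B / phi_0 = 1 / lam^2."""
    phase = np.exp(1j * (R[0] * r[1] - R[1] * r[0]) / lam**2)
    return psi_L(r[0] - R[0], r[1] - R[1], L, lam) * phase

# e.g. the overlap integrand of two L = 3 orbitals centered a lattice spacing apart
print(np.conj(chi_L((0.5, 0.0), (0.0, 0.0), 3)) * chi_L((0.5, 0.0), (1.9, 0.0), 3))
```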
In order to calculate expectation values of different physical parameters with the wave functions (4), I derive exact analytical formulas for the norm
$$\langle \mathrm{\Psi }_L|\mathrm{\Psi }_L\rangle =det|𝒮|,$$
(6)
and matrix elements of arbitrary single-particle, $`\langle \mathrm{\Psi }_L|\widehat{H}_1|\mathrm{\Psi }_L\rangle `$, and two-particle, $`\langle \mathrm{\Psi }_L|\widehat{H}_2|\mathrm{\Psi }_L\rangle `$, operators. The norm (6) and the matrix element of Coulomb energy,
$$\langle \mathrm{\Psi }_L|V_C|\mathrm{\Psi }_L\rangle =\frac{1}{2}\sum _{ijkl}(-1)^{i+j+k+l}\mathrm{sgn}(i-k)\mathrm{sgn}(j-l)V_{ijkl}^{LL}det|𝒮|_{\mathrm{column}\,j,l}^{\mathrm{row}\,i,k},$$
(7)
are expressed via the matrix $`𝒮`$ with the elements
$$S_{ij}\equiv S_{ij}^{LL}=\langle \chi _L(𝐫,𝐑_i)|\chi _L(𝐫,𝐑_j)\rangle ,$$
(8)
and the Coulomb matrix elements $`V_{ijkl}^{LL}=\langle \chi _{ai}^L\chi _{bk}^L|e^2|𝐫_a-𝐫_b|^{-1}|\chi _{aj}^L\chi _{bl}^L\rangle `$. The sums in (7) are taken over $`N\gg 1`$ lattice points. Both the overlap integrals $`S_{ij}^{LL}`$ and the matrix elements $`V_{ijkl}^{LL}`$ can be analytically calculated, so that I get, finally, exact closed-form analytical expressions for the energy $`ϵ_L`$ of the system (per particle) in the states $`\mathrm{\Psi }_L`$. The final result can be presented in the form $`ϵ_L=ϵ_L^H+ϵ_L^{xc}`$. The Hartree contribution $`ϵ_L^H`$ contains the sum $`_{i<j}V_{iijj}^{LL}`$ and the energy of the uniform positive background. The exchange-correlation energy contains two-site exchange terms $`V_{ijji}`$, as well as three-site ($`V_{iikl}`$, $`V_{ijki}`$) and four-site ($`V_{ijkl}`$) correlations.
The number $`N`$ of lattice points involved in calculation of sums (7) should, ideally, be infinite. I calculate the Hartree energy $`ϵ_L^H`$ exactly, in the thermodynamic limit $`N=\mathrm{\infty }`$, and the exchange-correlation energy $`ϵ_L^{xc}(N)`$ approximately, at a finite number of lattice points. It turns out that the contribution $`ϵ_L^{xc}(N)`$ is negative and grows with $`N`$ in its absolute value, Figures 1a and 2. The calculated energy $`ϵ_L(N)`$ is thus an upper limit to the true energy $`ϵ_L`$, $`ϵ_L<ϵ_L(N)`$.
Figure 1 exhibits $`B`$-dependences of the calculated energies $`ϵ_L(N)`$ at a number of different $`L`$ and $`N`$ in a system with a triangular lattice of points $`𝐑_j`$. As expected, in strong magnetic fields the states with larger $`L`$ have larger energy, due to a larger Hartree contribution. When $`B`$ decreases, however, the single-particle wave functions rotating around different lattice points begin to overlap, and exchange-correlation interaction significantly reduces the energy. Due to the overlap of electron wave functions and electron interference effects, the energy of the system in the states $`\mathrm{\Psi }_L`$ has a complex oscillating dependence on magnetic field. As a consequence, in different intervals of $`B`$ the ground state of the system is realized by states with different $`L`$. The gap between the ground and the first excited state also varies with $`B`$ and vanishes at points where two different $`L`$-states have the same energy. It seems likely that maxima in the diagonal resistivity of the system are observed at these zero-gap points, while the Hall-resistivity plateaus are seen in the regions of large gaps, when the system is in the same quantum state. This view on the integer and the fractional QHEs merits detailed consideration and requires further development of the theory.
At $`\nu =1`$, the state $`\mathrm{\Psi }_{L=3}`$ has the lowest energy. For this state I calculate $`ϵ_{L=3}^{xc}(N)`$ with inclusion of up to $`N=187`$ lattice points, Figures 1 and 2. Thus calculated upper limit to the total energy per particle,
$$ϵ_{L=3}(\nu =1)<ϵ_{L=3,N=187}(\nu =1)=-2.302,$$
(9)
is more than 46% lower than the energy (3) of the Hartree-Fock state (1). Note that the energy (9) is even lower than the energy of the classical Wigner crystal.
The state $`\mathrm{\Psi }_{L=3}`$ at $`\nu =1`$ is characterized by a very strong overlap of the single-particle wave functions of neighbour electrons, Figure 3. As a consequence, the density of electrons $`n_e(𝐫)`$ in this state is uniform (left panel). The wave function $`\mathrm{\Psi }_{L=3}(\nu =1)`$ thus describes a quantum liquid with strong exchange-correlation interaction of a very large number of electrons. \[For comparison, the right panel shows the electron density in the same state ($`L=3`$) at $`\nu =1/5`$ (the state $`\mathrm{\Psi }_{L=3}`$ is one of the excited states at $`\nu =1/5`$, Figure 1b). Qualitative difference between the liquid-type state at $`\nu =1`$ and a solid-type state at $`\nu =1/5`$ is obvious\].
The wave functions $`\mathrm{\Psi }_L`$ have a very large variational freedom. Apart from a possibility to vary the angular momentum $`L`$, one can consider configurations with a different (e.g. square) symmetry of lattice points $`𝐑_j`$. For instance, I have observed that, at $`\nu =2/3`$ the square lattice configuration gives the energy, lower than the triangular one. The upper limit to the total energy per particle at $`\nu =2/3`$, calculated with $`L=5`$ and $`N=149`$ in the square lattice configuration (the filled diamond in Figure 1b),
$$ϵ_{L=5}(\nu =2/3)<ϵ_{L=5,N=149}(\nu =2/3)=-2.050,$$
(10)
is about 29% lower than was reported so far and also lower than the energy of the classical Wigner crystal.
Can the energy of the states $`\mathrm{\Psi }_L`$ be comparable with or lower than that of the Laughlin liquid at $`\nu =1/3`$ and $`\nu =1/5`$? This important question remains open. Calculations performed so far for $`L\le 3`$ and $`N\le 91`$ gave the energy higher than that of the Laughlin liquid at these values of $`\nu `$. However, Figure 1 demonstrates two important tendencies: i) the downward cusps in the dependences $`ϵ_L(B)`$ are being shifted to the right with growing $`L`$; and ii) for given $`L`$ and $`\nu `$ the energy $`ϵ_L(N)`$ substantially decreases with the growth of $`N`$. Calculations with other $`L`$ and larger $`N`$ may lead to a reduction of energy $`ϵ_L`$ below the energy of the Wigner-crystal state $`ϵ_{L=0}`$ also at $`\nu \lesssim 1/3`$.
The new approach presented here enables one to analytically calculate any physical value characterizing the system at all $`\nu `$, continuously as a function of magnetic field. It naturally describes a transition to the Wigner crystal state at $`\nu \to 0`$, and can be straightforwardly generalized to the case of non-spin-polarized systems and hence to the region of higher Landau levels $`\nu >1`$. It also makes it possible to study the interplay between disorder and electron-electron correlations in the integer and the fractional QHE (by varying configurations of points $`𝐑_j`$, which are influenced by disorder in a real system). Due to a large variational freedom of the wave functions $`\mathrm{\Psi }_L`$, it opens up wide possibilities to search for better approximations to the ground state at all values of $`\nu `$.
This work was supported by the Max Planck Society and the Graduiertenkolleg Komplexität in Festkörpern, University of Regensburg, Germany. I also thank Peter Fulde and Ulrich Rössler for support of this work, Nadejda Savostianova for numerous helpful discussions, and Vladimir Volkov for useful comments.
# A Quantitative Study of Interacting Dark Matter in Halos
## 1. Introduction
A model based on the growth of small fluctuations through gravitational instability in a universe with cold dark matter (CDM) provides an excellent fit to a wide range of observations on large scales ($`\gtrsim 1`$ Mpc). However the nature and properties of the CDM, apart from its being cold and dark, remain mysterious. It is one of the major goals of cosmology to further constrain the properties of the dark matter and determine its nature. Recently much attention has been refocused on this question because of a suggestion by Spergel & Steinhardt (2000) that possible discrepancies with observations on kpc scales could be probes of dark matter properties, particularly its self-interaction rate. In its simplest incarnation, the self-interacting dark matter (SIDM) model provides a 1-parameter family of models with standard CDM as a limiting case, and it is thus interesting to examine this model in some detail.
In this paper we calculate the quantitative effects of SIDM on the evolution of isolated halos using an N-body code with particle scattering. We review the astrophysical arguments motivating SIDM in §2 and describe our implementation of the scattering algorithm in §3 (our method is very similar to that of Burkert (2000), but uses a more accurate approximation to the scattering rate integral). The impact of self-interaction on various astrophysical processes is discussed in §4 and we finish with some conclusions in §5.
## 2. The case for SIDM
In this section we briefly recap the arguments which led Spergel & Steinhardt (2000) to revive the SIDM model (Carlson et al. 1994; de Laix, Scherrer & Schaeffer 1995), since it is precisely these astrophysical observations that can be used to constrain the self-interaction of dark matter.
There is a growing consensus that CDM halos form with central density cusps, where $`\rho \propto r^{-\gamma }`$ as $`r\to 0`$ with $`\gamma \simeq 1`$ (Navarro et al. 1997; Moore et al. 1998; Huss et al. 1999; Klypin et al. 1999; Moore et al. 2000a; but see Kravtsov et al. 1998). Analytic arguments (Syer & White 1998; Kull 1999) suggest that such a core structure follows naturally from merging in collisionless systems. These results are (in some senses) confirmed experimentally by the similar stellar density cusps found in early-type galaxies (e.g. Faber et al. 1997), which also largely formed through the merging of collisionless matter (stars). However, such density cusps are inconsistent with the finite core radius required to reproduce the rotation curves of dwarf galaxies (Moore 1994; Flores & Primack 1994). The problem appears to be limited to very low rotation speed galaxies ($`v_c<10^2`$ km s<sup>-1</sup>) as van den Bosch et al. (2000) have now shown that the slightly more massive LSBs (low surface brightness galaxies) actually require cusped dark matter profiles. CDM simulations also produce halos which correctly reproduce the numbers of galaxies in clusters, but at first sight grossly overpredict the number of satellites orbiting the Galaxy (Moore et al. 1999).
One solution to the discrepancies is to blame the differences on the effects of the baryons. The dwarf galaxies are the systems where the baryons can most easily modify the core structure of the dark matter (e.g. Hernquist & Weinberg 1992; Navarro, Eke & Frenk 1996; Gelato & Sommer-Larsen 1999; Binney, Gerhard & Silk 2000), although it is difficult to quantitatively estimate the effects of feedback (e.g. Mac Low & Ferrara 1999). Similarly, massive spiral galaxies are probably the best environment for the effects of the baryons to modify the satellite predictions of pure dark matter simulations (e.g. Bullock, Kravtsov & Weinberg 2000). It is, however, unclear whether the baryons can provide a complete solution to the two problems. A second solution is to use hot rather than cold initial conditions for the collapsing dark matter (e.g. Hozumi, Burkert & Fujiwara 2000).
Self-interacting dark matter may be a third solution, because it can also heat the cores of dark matter halos and evaporate orbiting satellites given an appropriately tuned interaction cross section. This idea has attracted considerable attention (Ostriker 2000; Hannestad 2000; Hogan & Dalcanton 2000; Burkert 2000; Firmani et al. 2000; Mo & Mao 2000; Hannestad & Scherrer 2000; Bento et al. 2000), but the detailed quantitative consequences of the theory have yet to be derived. We have used N-body methods, discussed in the next section, to explore the formation and evolution of “galaxies” in a simple model.
## 3. Numerical Methods
Throughout we shall study $`10^5`$ particle realizations of a Hernquist profile (Hernquist 1990)
$$\rho _H(r)=\frac{M_T}{2\pi }\frac{a}{r}\frac{1}{(r+a)^3}$$
(1)
which has two parameters: the total mass $`M_T`$ and break radius $`a`$. The orbital time at the break radius $`a`$ is $`t_{\mathrm{dyn}}=4\pi (a^3/GM_T)^{1/2}`$, which we shall sometimes refer to as the “dynamical time”. With $`10^5`$ particles we can probe the halo structure down to $`0.1a`$, which is sufficient for our purposes. We evolve the system using a Tree code described in Appendix A.
In addition to the usual gravitational force, we implement the self-interaction of the dark matter using a Monte-Carlo technique. First let us define some dimensionless units. For a galaxy of mass $`M_0`$ and radius $`R_0`$ (where $`M_0=M_T`$ and $`R_0=a`$ for the Hernquist model) define a density scale $`\rho _0=M_0/R_0^3`$, a velocity scale $`v_0=(GM_0/R_0)^{1/2}`$ and a time scale $`t_0=R_0/v_0`$. The simplest self-interaction has the scattering cross section per unit mass of the dark matter, $`\sigma _{DM}`$, independent of energy. In this case we obtain the 1-parameter family of models which we shall study here. The scattering rate for a particle with velocity $`𝐯_0`$ is
$$\mathrm{\Gamma }=\frac{dn}{dt}=\int d^3𝐯_1f(𝐯_1)\rho \sigma _{DM}|𝐯_0-𝐯_1|$$
(2)
where $`f(𝐯_1)`$ is the local distribution function of velocities. In dimensionless units,
$$\widehat{\mathrm{\Gamma }}=\frac{dn}{d\widehat{t}}=\widehat{\sigma }_{DM}\int d^3\widehat{𝐯}_1f(\widehat{𝐯}_1)\widehat{\rho }|\widehat{𝐯}_0-\widehat{𝐯}_1|$$
(3)
with dimensionless cross section $`\widehat{\sigma }_{DM}=M_0\sigma _{DM}/R_0^2`$. For a Hernquist model the fraction of the particles interacting per crossing time $`t_0`$ is $`0.05\widehat{\sigma }_{DM}`$, the mean free path is $`a/\widehat{\rho }\widehat{\sigma }_{DM}`$, and these interactions have a typical radius $`a/2`$. We shall quantify the effect of self-interaction as a function of $`\widehat{\sigma }_{DM}`$, and one can then scale to cases of astrophysical interest by suitable choice of the basic dimensional parameters $`M_0`$ and $`R_0`$.
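As an illustration of this scaling (the halo numbers below are our own examples, not values used in this paper):

```python
M_SUN, KPC = 1.989e33, 3.086e21          # [g], [cm]

def sigma_hat(sigma_cm2_per_g: float, M0_msun: float, R0_kpc: float) -> float:
    """Dimensionless cross section: sigma_hat = M0 * sigma_DM / R0^2."""
    return M0_msun * M_SUN * sigma_cm2_per_g / (R0_kpc * KPC) ** 2

# e.g. a dwarf halo with M0 = 1e10 M_sun and a = 4 kpc:
print(sigma_hat(1.0, 1e10, 4.0))   # sigma_DM = 1 cm^2/g  ->  sigma_hat ~ 0.13
```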
We have simulated dimensionless cross sections of $`\widehat{\sigma }_{DM}=M_T\sigma _{DM}/a^2=0`$, $`0.3`$, $`1.0`$, $`3.0`$ and $`10.0`$. For $`\widehat{\sigma }_{DM}=1`$ we also ran a case with $`8\times 10^5`$ particles, doubling the spatial resolution, and a case using the Burkert (2000) scattering algorithm for comparison. The evolution of Navarro, Frenk & White (1997) models should be similar if we scale the two models to have the same central density distribution. For this normalization, the evolution of an NFW model will match that of our Hernquist models if $`\widehat{\sigma }_{DM}=2\pi \delta _c\rho _{crit}r_c\sigma _{DM}`$.
To implement the scattering in an N-body code we need to discretize the calculation of the integral to compute the scattering rate for each particle. Call the particle whose rate we are calculating $`0`$ and particle $`j`$ the particle from which it is scattering. We approximate the local density from the total mass and radius enclosed by the $`N=32`$ nearest neighbors, which we order in increasing distance from particle $`0`$. Writing the masses and velocities of the neighbors as $`\widehat{m}_j`$ and $`\widehat{𝐯}_j`$ with $`j=1,\dots ,N`$, we estimate the scattering rate for particle $`0`$ by
$$\widehat{\mathrm{\Gamma }}=\widehat{\sigma }_{DM}\widehat{\rho }_N\frac{\sum _{j=1}^{N}\widehat{m}_j|𝐯_j-𝐯_0|}{\sum _{j=1}^{N}\widehat{m}_j}$$
(4)
where
$$\widehat{\rho }_N=\frac{3\sum _{j=1}^{N}\widehat{m}_j}{4\pi r_N^3}$$
(5)
is the estimate of the local density. Note that this differs from the prescription of Burkert (2000), who simply approximated $`|\mathrm{\Delta }𝐯|=|𝐯_0|`$, thereby underestimating the heating rate for low velocity particles.
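A direct transcription of Equations (4) and (5), given arrays of particle positions, velocities and masses, might read as follows (the brute-force neighbor search is for clarity only; the actual code presumably reuses the tree):

```python
import numpy as np

def scattering_rate(pos, vel, mass, i0, sigma_hat, N=32):
    """Dimensionless scattering rate for particle i0, Eqs. (4)-(5):
    local density from the N nearest neighbours times the mass-weighted
    mean relative speed. pos and vel are (Npart, 3) arrays."""
    d = np.linalg.norm(pos - pos[i0], axis=1)
    nbr = np.argsort(d)[1:N + 1]                                    # exclude i0 itself
    rho_N = 3.0 * mass[nbr].sum() / (4.0 * np.pi * d[nbr[-1]]**3)   # Eq. (5)
    dv = np.linalg.norm(vel[nbr] - vel[i0], axis=1)
    return sigma_hat * rho_N * (mass[nbr] * dv).sum() / mass[nbr].sum()  # Eq. (4)
```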
For time step $`\mathrm{\Delta }\widehat{t}`$ the probability of an interaction during the time step is $`P_{\mathrm{int}}=\widehat{\mathrm{\Gamma }}\mathrm{\Delta }\widehat{t}`$. We adjust the time steps to keep the interaction probability relatively low ($`P_{\mathrm{int}}<0.5`$), and the actual number of interactions in each time step is determined from the Poisson distribution with an expectation value of $`P_{\mathrm{int}}`$. For each scattering event we interact with one of the $`N`$ nearest particles, where the probability of interacting with particle $`j`$ is
$$p_j=\frac{m_j|𝐯_j-𝐯_0|}{\sum _{i=1}^{N}m_i|𝐯_i-𝐯_0|}.$$
(6)
This makes the interaction less “point-like”, but it provides a more correct estimate of the scattering rates and their dependence on particle energy. We have run one simulation ($`\widehat{\sigma }_{DM}=1`$) with $`8`$ times as many particles to estimate the systematic error induced by this non-locality, and find that it is negligible (see Fig. 2a).
Once a particle is selected for the interaction, we assume that the scattering is isotropic. For particles with center of mass velocity $`𝐯_c`$ and velocity difference $`\mathrm{\Delta }𝐯`$, we select a random direction $`𝐞`$ and assign the particles velocities of
$`𝐯_0`$ $`=`$ $`𝐯_c+|\mathrm{\Delta }𝐯|𝐞m_j/(m_0+m_j)`$ (7)
$`𝐯_j`$ $`=`$ $`𝐯_c-|\mathrm{\Delta }𝐯|𝐞m_0/(m_0+m_j).`$ (8)
Such elastic scattering will conserve energy and linear momentum, but not angular momentum because of the finite particle separations. However, the net violation of angular momentum conservation over all interactions will be zero because the particle separation vectors have random orientations. The finite interaction range also leads to diffusion effects, but these introduce negligible error as we can see by comparing to our higher resolution simulation (Fig. 2a).
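Putting the pieces together, one Monte Carlo scattering event for particle $`0`$ can be sketched as below, where nbr is the neighbor index list of the previous example and the in-place velocity update mirrors Equations (6)–(8):

```python
import numpy as np
rng = np.random.default_rng()

def scatter(vel, mass, i0, nbr):
    """One elastic, isotropic scattering of particle i0 off a neighbour
    chosen with the probability p_j of Eq. (6); updates vel in place."""
    dv = np.linalg.norm(vel[nbr] - vel[i0], axis=1)
    p = mass[nbr] * dv
    j = rng.choice(nbr, p=p / p.sum())                 # Eq. (6)

    m0, mj = mass[i0], mass[j]
    v_c = (m0 * vel[i0] + mj * vel[j]) / (m0 + mj)     # centre-of-mass velocity
    speed = np.linalg.norm(vel[i0] - vel[j])           # |Delta v| is preserved

    e = rng.normal(size=3)
    e /= np.linalg.norm(e)                             # random isotropic direction
    vel[i0] = v_c + speed * e * mj / (m0 + mj)         # Eq. (7)
    vel[j] = v_c - speed * e * m0 / (m0 + mj)          # Eq. (8)
```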
Finally we note that several simulations have been run approximating self-interacting dark matter as a fluid and using numerical hydrodynamic techniques (Moore et al. 2000b; Yoshida et al. 2000). This is a physically different regime from the low optical depth system proposed by Spergel & Steinhardt (2000). Unlike a strongly interacting fluid, the energy transfer in SIDM is dominated by stochastic collisions in the core which rapidly transport the energy into the outskirts of the halo. The fluid results are thus not directly applicable. The physical problem SIDM most closely resembles is 2-body relaxation in globular clusters (see Spergel & Steinhardt 2000; Hannestad 2000; Burkert 2000).
## 4. Results
We considered a series of experiments comparing the evolution of N-body systems with and without SIDM parameterized by $`\widehat{\sigma }_{DM}`$. The key to understanding their evolution is the relaxation time due to the scattering. It is, however, a different regime from normal 2-body relaxation. The scattering process is direct rather than diffusive and the structure of the system evolves on the relaxation time scale rather than on a large multiple of the relaxation time scale as in globular clusters. We first define a relaxation time for SIDM, and then compare the evolution of three dynamical systems with and without SIDM. We begin with the evolution of Hernquist profile galaxies, next we consider the effects of adiabatic compression of the dark matter by the baryons in modifying this evolution, and finally we examine the collapse of a cold top-hat perturbation.
### 4.1. Relaxation time
For 2-body relaxation, the relaxation time is defined to be $`\sigma ^2/D(v_{\mathrm{para}}^2)`$ where $`\sigma `$ is the one-dimensional velocity dispersion and $`D(v_{\mathrm{para}}^2)`$ is the diffusion coefficient parallel to the particle velocity (see Binney & Tremaine 1987, §8). For SIDM, the scattering-angle average of the squared velocity change is just $`\frac{1}{2}|\mathrm{\Delta }𝐯|^2`$, so the analog of the 2-body relaxation diffusion coefficient is
$`D(v^2)`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int \rho \sigma _{DM}|\mathrm{\Delta }𝐯|^2f(𝐯_0)f(𝐯_1)d^3𝐯_0d^3𝐯_1}`$ (9)
$`=`$ $`{\displaystyle \frac{16}{\sqrt{\pi }}}\rho \sigma _{DM}\sigma ^3`$ (10)
assuming that the particle velocity distributions $`f(𝐯)`$ are Maxwellians with one-dimensional velocity dispersions of $`\sigma `$. The relaxation time is then
$$t_r\equiv \frac{3\sigma ^2}{D(v^2)}\simeq \frac{1}{3\rho \sigma _{DM}\sigma }$$
(11)
where the factor of $`3`$ arises because we considered the total velocity rather than a single component. The relaxation time associated with the mean interaction radius $`a/2`$, which we will call the core relaxation time, is $`t_{rc}=1.7t_{\mathrm{dyn}}/\widehat{\sigma }_{DM}`$. At the Hernquist break radius the relaxation time is $`8t_{\mathrm{dyn}}/\widehat{\sigma }_{DM}`$ and at the half-mass radius ($`2.4a`$) it is $`35t_{\mathrm{dyn}}/\widehat{\sigma }_{DM}`$. As expected the evolution seems to occur on the relaxation time scale associated with the mean interaction radius $`a/2`$.
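For reference, these time scales are one-liners in code (consistent units are the caller's responsibility):

```python
from math import pi, sqrt

def t_relax(rho, sigma_dm, sigma_1d):
    """Eq. (11) via Eq. (10): t_r = 3 sigma^2 / D(v^2) ~ 1/(3 rho sigma_DM sigma)."""
    D_v2 = (16.0 / sqrt(pi)) * rho * sigma_dm * sigma_1d**3
    return 3.0 * sigma_1d**2 / D_v2

def t_core_relax(t_dyn, sigma_hat):
    """Core relaxation time quoted in the text: t_rc = 1.7 t_dyn / sigma_hat."""
    return 1.7 * t_dyn / sigma_hat
```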
Unlike globular clusters, the $`\rho \propto 1/r`$ cusped density distributions evolve rapidly, and core collapse occurs after only 5 half-mass relaxation times (Quinlan 1996). The 2-body relaxation results will significantly overestimate the actual core collapse time scale because the SIDM scattering is not diffusive. A particle undergoes one scattering with $`\mathrm{\Delta }v\sim v`$ rather than slowly diffusing in energy due to many small scatterings. Crudely, we expect the system to form a constant density core of size $`a/2`$ on the time scale $`t_{rc}`$ and core collapse faster than the 2-body relaxation prediction of $`5`$ half-mass relaxation times. After core collapse, the system will have a steeper $`\rho \propto 1/r^2`$ cusp. This is just the behavior we see in the simulations, as we now describe.
### 4.2. Core formation
We first examined the formation of a core and the alteration of rotation curves as a function of the dimensionless cross section $`\widehat{\sigma }_{DM}`$. Fig. 1 shows the density distributions at $`t\simeq t_{\mathrm{dyn}}`$ for all five simulations. The scattering clearly produces a constant density core, while the simulation without any scattering roughly maintains the input density cusp. However, even after such a short time interval, the $`\widehat{\sigma }_{DM}=10`$ simulation has begun its core collapse phase and has a higher central density than any of the other SIDM cases. Our results are very similar to those of Burkert (2000), but the higher scattering rates found with the better approximation to the scattering rate integral mean that we obtain the same evolution with approximately 1/3 the SIDM cross section.
We non-parametrically estimate the core radius by the point where the density drops to $`1/4`$ the central density. The central density is found by finding the region over which the Kolmogorov-Smirnov test provides a reasonable likelihood that the particle distribution is consistent with a constant density. This tends to provide us with an upper limit to the core radius, with an accuracy that is limited primarily by the finite particle number at very small radius. Fig. 2 shows the evolution of the core radius with time and the central density with core relaxation time $`t_{rc}`$. For comparison, the core radius due to the finite particle number and numerical relaxation effects starts on the force smoothing scale ($`0.05a`$) and slowly evolves to $`0.1a`$, which represents a negligible systematic error for the much larger core radii created by the SIDM (recall our estimate of the core radius is in fact an upper limit). We also find that the evolution of the $`\widehat{\sigma }_{DM}=1`$ system with $`8\times 10^5`$ particles is identical to the standard simulation with $`10^5`$ particles (see Fig. 2a).
The system evolution is remarkably rapid, with a very narrow window between the formation of a significant core radius and its recollapse. The maximum core size is approximately $`0.4a`$ and it forms on the core relaxation time scale $`t\simeq t_{rc}=1.7t_{\mathrm{dyn}}/\widehat{\sigma }_{DM}`$. Almost immediately after reaching its maximum size the core begins to collapse again, leaving a brief window where the dark matter shows a large core radius. Although the formation of the core seems to be nearly independent of $`\widehat{\sigma }_{DM}`$ when we scale the time by the core relaxation time scale, the subsequent core collapse phase is not. The rate of recollapse in units of $`t_{rc}`$ appears to accelerate as the cross section decreases (see Fig. 2b). Empirically we find the evolution of the central density after reaching the maximum core radius is self-similar if we scale the time in units of $`t_{rc}\widehat{\sigma }_{DM}^{1/2}\propto \widehat{\sigma }_{DM}^{-1/2}`$ (see Fig. 2c), which is a different scaling from that for normal 2-body relaxation where the collapse depends only on the relaxation time ($`\propto \widehat{\sigma }_{DM}^{-1}`$). We hypothesize that the difference arises because the scattering is not a diffusive process. In each scattering $`\mathrm{\Delta }v\sim v`$, and particles escape the core in a single scattering, leading to faster evolution.
### 4.3. Adiabatic Contraction
Very careful tuning of the SIDM cross section and ages might allow dwarf galaxies to have softened cores today without undergoing core collapse. However, we must examine the effects of the baryons on the dark matter profile to estimate the appropriate cross section, as the cooling baryons will adiabatically compress the SIDM dark matter and significantly modify the evolution of the system. The adiabatic compression both raises the scattering rates and reduces the relaxation time scales in three ways. First, by raising the densities and reducing the dynamical time scale, the relaxation time drops. Very crudely, if we compress the dark matter by a factor of $`f`$, the scattering rate jumps by a factor of $`f^{7/2}`$, producing a corresponding reduction in the relaxation time. A second, more subtle factor is the difference in how density cusps and density distributions with cores are adiabatically compressed. Standard adiabatic compression models for a $`1/r`$ density cusp will maintain a $`1/r`$ density cusp after the compression (see Quinlan, Hernquist & Sigurdsson 1995) while the compression of a density distribution with a finite core radius produces a steeper ($`1/r^{3/2}`$ to $`1/r^{9/4}`$) cusp depending on the distribution function (Young 1980). Thus, if the SIDM produces a core radius before the adiabatic compression phase, the adiabatic compression produces a steeper than normal central density cusp. In 2-body relaxation, steep cusps evolve more rapidly than shallow cusps, and Quinlan (1996) found that where a $`1/r`$ density cusp reaches core collapse in 5 half-mass relaxation times, a $`1/r^2`$ density cusp reaches core collapse in 1 half-mass relaxation time. Hence the steeper central cusp created by adiabatic compression of the SIDM core further accelerates the ultimate evolution of the system to a central $`1/r^2`$ density cusp. Third, the existence of the baryonic matter provides a component of the gravitational potential whose evolution is not driven by scattering.
We investigate this with a single, somewhat contrived experiment. We put 20% of the mass of the profile in an analytic Hernquist potential with a corresponding reduction in the masses of the dark matter particles. We modeled the adiabatic compression by running the system for $`t_{\mathrm{dyn}}`$, and then slowly changing the external potential to
$$\varphi (r)=-\frac{GM_b}{\sqrt{r^2+a_d^2}}$$
(12)
between $`t_{\mathrm{dyn}}`$ and $`2t_{\mathrm{dyn}}`$, finally fixing the external potential for the further evolution. We used $`M_b=0.2M_T`$, $`a_d=0.1a`$, and reduced the force smoothing to $`0.01a`$. The final external potential was chosen to have the same circular velocity profile as a Kuzmin disk (Binney & Tremaine 1987), and underestimates the amount of compression compared to the standard approximation (Blumenthal et al. 1986). It corresponds to a baryonic density
$$\rho (r)=\frac{3M_b}{4\pi }\frac{a_d^2}{(r^2+a_d^2)^{5/2}}.$$
(13)
At the end of the compression phase, the central dark matter density has risen by a factor of $`\sim 10`$. We ran the compression sequence with either $`\widehat{\sigma }_{DM}=0`$ or $`1`$.
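For concreteness, the fixed external potential of equation (12), its density (13), and a switch-on between $`t_{\mathrm{dyn}}`$ and $`2t_{\mathrm{dyn}}`$ can be coded in a few lines. The Python sketch below uses units $`G=M_T=a=1`$; the function names and the linear form of the ramp are our own assumptions, since the text only specifies that the potential is changed slowly.

```python
import numpy as np

G, M_b, a_d = 1.0, 0.2, 0.1      # units with G = M_T = a = 1, as in the text

def phi_baryon(r):
    """External baryonic potential, Eq. (12)."""
    return -G * M_b / np.sqrt(r**2 + a_d**2)

def rho_baryon(r):
    """Corresponding baryonic density, Eq. (13)."""
    return 3.0 * M_b / (4.0 * np.pi) * a_d**2 / (r**2 + a_d**2)**2.5

def ramp(t, t_dyn=1.0):
    """Assumed linear switch-on of the external field between t_dyn and 2*t_dyn."""
    return np.clip((t - t_dyn) / t_dyn, 0.0, 1.0)

def accel_baryon(r, t):
    """Radial acceleration -d(phi)/dr felt by a dark matter particle."""
    return -ramp(t) * G * M_b * r / (r**2 + a_d**2)**1.5
```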
Fig. 3 shows the dark matter density and rotation curve profiles of the adiabatically compressed systems both at the end of the compression phase ($`3t_{\mathrm{dyn}}`$) and the end of the experiment ($`4t_{\mathrm{dyn}}`$). At the start of the compression, the scattering has begun to establish the typical SIDM core structure. The adiabatic compression then produces significantly different central density profiles. The no scattering simulation maintains a roughly $`\rho \propto 1/r`$ profile shifted upwards in density, while the $`\widehat{\sigma }_{DM}=1`$ simulation has formed a steeper, nearly $`\rho \propto 1/r^2`$ profile with a higher central dark matter density than if there had been no scattering, just as we expected from the differences in how the two profiles would be adiabatically compressed. In the subsequent evolution, the no scattering simulation maintains a stable profile while the $`\widehat{\sigma }_{DM}=1`$ simulation has a rising central density. The system appears to proceed directly into the core-collapse phase of its evolution despite the fixed baryonic potential.
### 4.4. Spherical Collapse
Our previous experiments assumed that no significant scattering occurred during the initial collapse of the dark matter halo so that the initial conditions contained a $`\rho \propto 1/r`$ density cusp. We also considered how the formation of the halo is altered in the SIDM model by considering the very simple case of the collapse of a top-hat overdensity with vacuum boundary conditions both with and without the DM scattering. These simulations are meant only to illustrate the primary effects of DM scattering on collapsing perturbations, and not as realistic models of the formation of galaxies or clusters in a cosmological context.
We generated a $`10^5`$ particle uniform density sphere with velocity dispersion $`1\%`$ of the virial velocity and allowed it to collapse. Without particle scattering the collapse formed a profile akin to the Hernquist profiles of the previous sections, with some of the particles being ejected from the system. By fitting a Hernquist profile to the resulting system we identified the scattering rate appropriate to $`\widehat{\sigma }_{DM}\simeq 1`$ and re-ran the simulation from the same initial conditions including scattering.
The simulations indicate that the final equilibrium profile is similar with and without scattering; however, at fixed dynamical time the scattering run always has a higher central density. We postulate that the scattering provides a mechanism, similar to dissipation, which allows the system to attain higher phase space densities than are possible in a purely collisionless/dissipationless collapse.
## 5. Conclusions
We combined an N-body code with a Monte Carlo model of scattering to examine the effects of self-interacting dark matter (SIDM) on the properties of dark matter halos. Our models form a one parameter family with “standard” CDM as a limiting case.
We first examined the time evolution of Hernquist models with $`1/r`$ central density cusps in the presence of SIDM. Our simulations are similar to those of Burkert (Bur (2000)), but use a better approximation to the scattering integrals. The models form a constant density core with a radius $`\simeq 0.4a`$ on the core radius relaxation time scale $`t_{rc}=1.7t_{\mathrm{dyn}}/\widehat{\sigma }_{DM}`$, where the SIDM cross section per unit mass is $`\widehat{\sigma }_{DM}=\sigma _{DM}M_T/a^2`$. The initial growth of the core radius is very rapid because the relaxation time is much shorter in the inner regions (see Quinlan Qui (1996)). Shortly after reaching this maximum core size the core begins to shrink. We find that the recollapse proceeds on a faster time scale than expected from 2-body relaxation. The central density evolves on time scales of $`t_{rc}\widehat{\sigma }_{DM}^{1/2}\propto \widehat{\sigma }_{DM}^{-1/2}`$ rather than $`t_{rc}\propto \widehat{\sigma }_{DM}^{-1}`$, probably because SIDM is not a diffusive process when the mean free paths are large. Thus, the dark matter briefly has a large, finite core radius which could be fine tuned to address the problem of dwarf galaxy rotation curves at the expense of predicting core collapsed dark matter halos with $`\rho \propto 1/r^2`$ cusps in most other galaxies. In particular, it is probably impossible to use SIDM to eliminate the dark matter cusps in the dwarf galaxies while preserving them in the low surface brightness galaxies.
Even with SIDM, however, we must also consider the role of the baryons in modifying the structure of the dark matter halo. In particular, as the baryons cool and form a disk they adiabatically compress the dark matter (Blumenthal et al. BFFP (1986); Dubinski Dub (1994)). The adiabatic compression raises the central dark matter density, increases the SIDM scattering rate, and reduces the time scale for core collapse. Moreover, adiabatic compression of a density profile with a finite core radius produces a steep central density cusp (Young Young (1980)) which relaxes to form a $`\rho \propto 1/r^2`$ density cusp even faster than the shallower $`\rho \propto 1/r`$ cusp we considered initially (Quinlan Qui (1996)). As a result, the development of a core radius due to the SIDM scattering is first reversed by the compression, and then core collapse begins on a far shorter time scale. Such a further acceleration of the time scale for producing a final density distribution with a steep $`\rho \propto 1/r^2`$ density cusp will make it even harder for SIDM to maintain a core radius in the dark matter profile for periods comparable to the presumed lifetimes of dwarf galaxies.
All of the above simulations supposed that the halo collapsed and virialized before scattering became important, which is appropriate to “low” scattering rates. Finally, we considered the effects of SIDM on the collapse of cold top-hat perturbations with $`\widehat{\sigma }`$ large enough that scattering and virialization could occur at the same time. We found that the scattering allowed higher phase space densities at fixed dynamical time than could be found in the collisionless systems.
It thus appears as though SIDM exacerbates rather than solves the “central density cusp problem”. Turning this around we can say that astrophysics almost certainly requires SIDM cross sections small enough to avoid significant scattering over the age of the universe for all galaxies, or mean free paths $`\gtrsim 1`$ Mpc.
We would like to acknowledge useful conversations with A. Burkert and L. Hernquist. M.W. thanks J. Bagla and V. Springel for numerous helpful conversations on N-body codes. M.W. was supported by NSF-9802362, and C.S.K. was supported by the Smithsonian Institution.
## Appendix A Tree N-body Code
We have used a new implementation of a Tree code to evolve our dark matter halo. Since this code is new we briefly describe it here, and also explain how we generated the initial conditions.
The basic algorithm is described in Barnes & Hut (BarHut (1986)). Space is partitioned into an oct-tree structure by bisection in the three spatial dimensions. Starting at the root node, which is the entire volume of the simulation, the tree descends until each node contains at most one particle. The force on any given particle from all of the other particles is computed by a tree walk. The speed-up over direct summation is obtained by imposing an opening criterion during the tree walk. For cells sufficiently far from the particle in question, the entire mass distribution within that cell is approximated by a point at the center of mass of the cell and with mass equal to the sum of the particle masses within the daughters of that cell.
Though the code does not require it, we have used equal mass particles throughout. The size of the computational box grows so as to always encompass all of the particles. Rather than a Plummer potential we use a spline softened force (Monaghan & Lattanzio MonLat (1985); Hernquist & Katz HerKat (1989); Springel, Yoshida & White SprYosWhi (2000)). For most of the runs we use $`h=0.05a`$ (the force was therefore exactly $`1/r^2`$ beyond $`h`$), but for the adiabatic compression runs we lower this to $`0.01a`$. Very roughly this corresponds to a Plummer law smoothing $`ϵ\simeq h/3`$, although a Plummer law gives 1% force accuracy only beyond $`10ϵ`$.
With $`10^5`$ particles it is finite particle numbers which limit our resolution, not force accuracy. For profiles with almost any central cusp, the relaxation time is less than $`10t_{\mathrm{dyn}}`$ for radii $`r<0.05a`$ (Quinlan Qui (1996)). Our numerical results with differing particle numbers and force accuracies support the scalings in Fig. 2 of Quinlan (Qui (1996)). Only for the adiabatic compression runs, where more particles are concentrated at small radius and there is a stable external potential, do we gain by lowering $`h`$.
The time integration is done with a second order leap-frog method. The time step is chosen to be the smaller of $`0.5\sqrt{h/a_{\mathrm{max}}}`$, where $`a_{\mathrm{max}}`$ is the maximum acceleration on the particle, or $`2h/v`$ where $`v`$ is the particle velocity. We have shown that this time step accurately integrates binary orbits, and in combination with the opening criterion described below conserves total energy to much better than a percent over 5 dynamical times, except during the core collapse phase where the error rises to a few percent.
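The time-step rule is straightforward to transcribe; below is a minimal Python sketch (the array layout and the numerical floors are our own choices, not the authors' code):

```python
import numpy as np

def leapfrog_timestep(h, acc, vel):
    """Per-particle step: min(0.5*sqrt(h/|a_max|), 2h/|v|), as in the text;
    h is the spline softening length."""
    a = np.linalg.norm(np.atleast_2d(acc), axis=-1)
    v = np.linalg.norm(np.atleast_2d(vel), axis=-1)
    return np.minimum(0.5 * np.sqrt(h / np.maximum(a, 1e-30)),
                      2.0 * h / np.maximum(v, 1e-30))
```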
Finally for the tree walk we do not simply use the geometrical criterion advocated by Barnes & Hut. Rather we have augmented it with a modification of the method described in Springel et al. (SprYosWhi (2000)). Starting with the known accelerations from the last time step we descend the tree. We open an internal node if the partial acceleration from the quadrupole moment of that node exceeds $`\alpha `$ times the old acceleration, where $`\alpha `$ is a tolerance parameter. Specifically if $`\mathrm{}`$ is the size of a cell containing mass $`M`$ whose center of mass is a distance $`r`$ from the particle in question, we open the node if
$$M\mathrm{}^2>\alpha \left|\stackrel{}{a}_{\mathrm{old}}\right|r^4.$$
(A1)
This ensures that the relative force accuracy is $`𝒪(\alpha )`$ while the resulting tree walk is significantly faster (Springel et al. SprYosWhi (2000)). The tree walk is also used to define the neighbor list for the density and scattering calculations. To ensure that all nearby cells are opened so the neighbor list is “complete” we augment the opening criterion to additionally require that $`\mathrm{}<\theta r`$ for cells closer than $`a`$. This provides a good estimate of the density except in the lowest density regions where the scattering probability is anyway small. Throughout we have used $`\alpha =2\%`$ and $`\theta =0.4`$, which for a Hernquist profile results in a 90th percentile relative force error of $`5\times 10^{-3}`$, i.e. 90% of the particles have force errors less than this.
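The combined opening test — the relative-acceleration criterion (A1) plus the geometric condition that keeps the neighbor list complete — can be sketched as a single predicate (a hedged illustration; the argument names are ours):

```python
def open_node(cell_size, cell_mass, r, a_old, alpha=0.02, theta=0.4, a_scale=1.0):
    """Decide whether to open a tree cell of size `cell_size` and mass
    `cell_mass` at distance r from the particle, given the particle's
    acceleration |a_old| from the previous step."""
    if cell_mass * cell_size**2 > alpha * a_old * r**4:   # Eq. (A1)
        return True
    if r < a_scale and cell_size >= theta * r:            # neighbor completeness
        return True
    return False
```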
We set up our initial conditions as a Hernquist (Her (1990)) profile
$$\rho _H(r)=\frac{M_{\mathrm{halo}}}{2\pi }\frac{a}{r}\frac{1}{(r+a)^3}$$
(A2)
where $`a`$ is the scale-length. First we choose the position of a particle with a radial probability distribution proportional to $`r^2\rho (r)`$ and an isotropic $`\widehat{r}`$. We impose an upper radial distance of $`r_{\mathrm{max}}=50a`$ at which point the density is $`10^{-5}`$ of the density at $`r=a`$. Given the position we then calculate the magnitude of the velocity from the known distribution function $`f(E)`$ for this model using an acceptance-rejection method and again distribute $`\widehat{v}`$ isotropically. This procedure is iterated $`N_p/2=5\times 10^4`$ times. Each particle and a reflected counterpart at $`-\stackrel{}{r}`$ with velocity $`-\stackrel{}{v}`$ is added to the list to obtain a realization of the halo with $`N_p`$ particles. The use of ‘reflected’ initial conditions ensures that the center of mass position and velocity of the halo are both initially zero.
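The sampling procedure just described can be sketched as follows. We assume the isotropic Hernquist (1990) distribution function $`f(E)`$ in units $`G=M_{\mathrm{halo}}=a=1`$; the 1.1 safety factor and the coarse grid used to bound the rejection envelope are our own choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hernquist_f(E):
    """Isotropic Hernquist (1990) distribution function, G = M_halo = a = 1,
    as a function of the relative energy 0 < E < 1."""
    q = np.sqrt(E)
    return ((3.0 * np.arcsin(q)
             + q * np.sqrt(1.0 - q*q) * (1.0 - 2.0*q*q) * (8.0*q**4 - 8.0*q*q - 3.0))
            / (8.0 * np.sqrt(2.0) * np.pi**3 * (1.0 - q*q)**2.5))

def sample_halo(n_half=50_000, r_max=50.0):
    # radii: invert the cumulative mass M(<r) = r^2/(1+r)^2, truncated at r_max
    u = rng.uniform(1e-10, (r_max / (1.0 + r_max))**2, n_half)
    r = np.sqrt(u) / (1.0 - np.sqrt(u))
    psi = 1.0 / (1.0 + r)                       # relative potential
    v = np.empty(n_half)
    for i, p in enumerate(psi):                 # speeds by acceptance-rejection
        v_esc = np.sqrt(2.0 * p)
        vg = np.linspace(1e-4 * v_esc, 0.999 * v_esc, 256)
        g_max = 1.1 * np.max(vg**2 * hernquist_f(p - 0.5 * vg**2))  # crude envelope
        while True:
            vt = rng.uniform(0.0, v_esc)
            if rng.uniform(0.0, g_max) < vt**2 * hernquist_f(p - 0.5 * vt**2):
                v[i] = vt
                break
    def iso(n):                                 # isotropic unit vectors
        mu, ph = rng.uniform(-1, 1, n), rng.uniform(0, 2*np.pi, n)
        s = np.sqrt(1 - mu**2)
        return np.column_stack([s*np.cos(ph), s*np.sin(ph), mu])
    pos, vel = r[:, None] * iso(n_half), v[:, None] * iso(n_half)
    # append the reflected counterparts at -r with velocity -v
    return np.vstack([pos, -pos]), np.vstack([vel, -vel])
```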
Figure 1: (a) The dimensionless string tension of the charge – anti-charge pair separated by the distance $`R`$ in 3D compact U(1) theory; (b) the string tension $`\sigma ^{string}`$ as a function of $`m_DR`$ with the corresponding best fitting function (see text).
ITEP-TH-74/99
Confinement and short distance physics
M.N. Chernodub<sup>a</sup>, F.V. Gubarev<sup>a,b</sup>, M.V. Polikarpov<sup>a</sup>, V.I. Zakharov<sup>a,b</sup>
$`a`$ Institute of Theoretical and Experimental Physics,
B.Cheremushkinskaya 25, Moscow, 117259, Russia
$`b`$ Max-Planck Institut für Physik,
Föhringer Ring 6, 80805 München, Germany
## Abstract
We consider non-perturbative effects at short distances in theories with confinement. The analysis is straightforward within the Abelian models in which the confinement arises on the classical level. In all cases considered (compact $`U(1)`$ in $`3D`$ and $`4D`$, dual Abelian Higgs model) there are non–perturbative contributions associated with short distances which are due to topological defects. In the QCD case, both classical and quantum effects determine the role of the topological defects and the theoretical analysis has not been completed so far. Generically, the topological defects would result in $`1/Q^2`$ corrections going beyond the standard Operator Product Expansion. We review existing data on the power corrections and find that the data favor the existence of the novel corrections, at least at the mass scale of (1-2) GeV. We indicate crucial experiments which could further clarify the situation on the phenomenological side.
1. Perturbative QCD describes basic features of hard processes, i.e. processes characterized by a large mass scale $`Q`$. On the other hand, perturbative QCD does not encode the effects of confinement. Hence there is a growing interest in power corrections to the parton model which might be sensitive to the nature of confinement (for a recent review see ). Moreover, in the Abelian Higgs model the leading power correction at short distances does reflect confinement . Namely, there exist short strings which are responsible for a stringy potential between confined charges at vanishing distances $`r\to 0`$.
In this note we explore Abelian models with confinement in a more systematic way by including in the consideration the compact three- and four-dimensional $`U(1)`$ theory. In all the cases the confinement mechanism can be understood classically. Moreover, we find that the confinement results in additional terms in the static potential at short distances. How topological defects can be manifested at short distances is easy to understand from the example of the Dirac string. Naively, its energy diverges quadratically in the ultraviolet, but in compact $`U(1)`$ it is normalized to zero, changing the power corrections at short distances. As for non-Abelian theories, the Dirac strings are also allowed because of the compactness of the corresponding $`U(1)`$ subgroup. In this sense, there is a similarity between the non–Abelian and Abelian cases. However, there is an important difference as well. On the classical level, the Dirac strings may end up with monopoles which have a vanishing non-Abelian action. Thus, the monopoles observed within the $`U(1)`$ projection of QCD (for a review see, e.g., ) are a result of an interplay between classical and quantum effects. In full generality, one may say that the topological defects in QCD are marked by singular potentials rather than by a large non-Abelian action. Singular gauge potentials might be artifacts of the gauge fixing and it is not a priori clear whether they can result in physical effects. Therefore, we will turn at this point to an analysis of existing data on the power corrections. The data seem to clearly favor the novel $`1/Q^2`$ corrections.
2. First, we will outline very briefly the standard approach to the power corrections which allows one to account for soft non-perturbative field configurations (for further references see, e.g., ). Consider first a QED example . Namely, let an $`e^+e^{}`$ pair be placed at distance $`r`$ near the center of a conducting cage of size $`L`$, $`L\gg r`$. Then the potential energy of the pair can be approximated as
$$V_{e\overline{e}}(r)\approx -\frac{\alpha _e}{r}+const\frac{\alpha _er^2}{L^3},\hspace{1em}L\gg r$$
(1)
and the second term is a power correction to the Coulomb interaction. The derivation of (1) is of course straightforward classically, since the correction is nothing else but the interaction of the dipole with its images. In the QCD case, one concludes by analogy that the heavy quark potential at short distances looks like (for explanations and further references see, e.g. ):
$$\underset{r\to 0}{lim}V_{Q\overline{Q}}(r)=-\frac{c_1}{r}+const\mathrm{\Lambda }_{QCD}^3r^2,$$
(2)
where $`c_1`$ is calculable perturbatively as a series in $`\alpha _s`$. Note the absence of a linear correction to the potential at short distances.
On the other hand, Eq. (1) can be derived also in terms of one-photon exchange. The power correction is then related to a change in the modes of the electromagnetic field confined in the cage as compared to the case of the infinite space. The change is of order unity at frequencies $`\omega \sim 1/L`$. Similarly, the logic behind Eq. (2) is that the perturbative gluon propagator is strongly modified by non-perturbative effects at $`\omega \sim \mathrm{\Lambda }_{QCD}`$. In case of other processes, the relevant Feynman graphs can be more complicated of course. The power corrections still correspond to the infrared sensitive part of Feynman propagators which are obviously modified by the physics of large distances. The Operator Product Expansion (OPE) allows for a regular way to parameterize such corrections (for a review see ).
3. Intuitively, the power corrections in QCD could be very different from the conducting cage case discussed above. Indeed color particles produced at short distances find no cage but rather build up the confining field configuration in the course of the interaction between themselves and with the vacuum. The complicated space-time picture of interaction in confining theories was studied by Gribov . Thus, it could be instructive to analyze the effects of the confinement at short distances in some simple models.
The first example of a theory where the OPE does not work in fact goes back to the paper in Ref. . However, since it has not been discussed in connection with the OPE, we will explain this example in some detail. The action is that of free photons:
$$S=\frac{1}{4e^2}\int d^4xF_{\mu \nu }^2$$
(3)
where $`F_{\mu \nu }`$ is the field strength tensor, $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu `$. Although the theory looks absolutely trivial, this is no longer the case if one admits Dirac strings into the theory. Naively, the energy associated with the Dirac strings is infinite:
$$E_{\text{Dirac string}}=\frac{1}{8\pi }\int d^3r𝐇^2\sim lA\left(\frac{\text{magnetic flux}}{A}\right)^2\to \mathrm{\infty }$$
(4)
where $`l,A`$ are the length and area of the string, respectively. Since the magnetic flux carried by the string is quantized and finite, the energy diverges quadratically in the ultraviolet, i.e. in the limit $`A\to 0`$. However, within the lattice regularization the action of the string is in fact zero because of the compactness of the $`U(1)`$, Ref. .
Now, the Dirac strings may end up with monopoles. The action associated with the monopoles diverges in ultraviolet,
$$\int \frac{d^3r}{8\pi }𝐇^2\sim \frac{1}{e^2a}$$
(5)
where $`a`$ is a (small) spatial cut off. If the length of a closed monopole trajectory is $`L`$, then the suppression of such a configuration due to a non-vanishing action is of order
$$e^{-S}\sim \mathrm{exp}(-constL/e^2).$$
(6)
On the other hand, there are different ways to organize a loop of length $`L`$. This is the entropy factor. It is known to grow exponentially with $`L`$ as $`\mathrm{exp}(const^{\prime }L)`$. At some $`e_{crit}^2\sim 1`$ there is a phase transition to the monopole condensation.
The potential between external test charges is Coulombic at all the distances for $`e^2<e_{crit}^2`$ and linear for $`e^2>e_{crit}^2`$. Since there are no perturbative graphs at all in the theory with the action (3), this phenomenon clearly goes beyond the OPE. In this case, however, the violation of the OPE is too strong. Indeed, the lattice spacing $`a`$ is the only dimensional parameter of the problem. As a result the Coulomb potential is not simply modified by linear corrections but rather eliminated for $`e^2>e_{crit}^2`$ at all the distances.
4. Consider next the Dual Abelian Higgs Model with the action
$$S=\int d^4x\left\{\frac{1}{4g^2}F_{\mu \nu }^2+\frac{1}{2}|(\partial -iA)\mathrm{\Phi }|^2+\frac{1}{4}\lambda (|\mathrm{\Phi }|^2-\eta ^2)^2\right\},$$
(7)
here $`g`$ is the magnetic charge, $`F_{\mu \nu }\equiv \partial _\mu A_\nu -\partial _\nu A_\mu `$. The gauge boson and the Higgs are massive, $`m_V^2=g^2\eta ^2`$, $`m_H^2=2\lambda \eta ^2`$. There is a well known Abrikosov-Nielsen-Olesen (ANO) solution to the corresponding equations of motion. The dual ANO string may end up with electric charges. As a result, the potential for a test charge–anti-charge pair grows linearly at large distances:
$$V(r)=\sigma _{\mathrm{\infty }}r,\hspace{1em}r\to \mathrm{\infty }.$$
(8)
Note that there is a Dirac string resting along the axis of the ANO string connecting monopoles and its energy is still normalized to zero.
An amusing effect occurs if one goes to distances much smaller than the characteristic mass scales $`m_{V,H}^{-1}`$. Then the ANO string is peeled off and one deals with a naked (dual) Dirac string. The manifestation of the string is that the Higgs field has to vanish along a line connecting the external charges. Otherwise, the energy of the Dirac string would jump to infinity anew.
As a result of the boundary condition that $`\mathrm{\Phi }`$ vanishes on a line connecting the charges the potential contains a stringy piece at short distances :
$$\underset{r\to 0}{lim}V(r)=-\frac{c_1}{r}+\sigma _0r.$$
(9)
The string tension $`\sigma _0`$ smoothly depends on the ratio $`m_H/m_V`$. In the Bogomol’ny limit ($`m_H=m_V`$) which is favored by the fits of the lattice simulations the string tension
$$\sigma _0\approx \sigma _{\mathrm{\infty }},$$
(10)
i.e. the effective string tension numerically is the same at all distances.
5. Consider now $`3D`$ compact electrodynamics. As is well known , the charge–anti-charge potential is then linear at large $`r`$. Below we consider the string tension $`\sigma _0`$ at small distances and show that it has a non-analytical piece.
As usual, it is convenient to perform the duality transformation, and work with the corresponding Sine-Gordon theory. The expectation value of the Wilson loop in dual variables is:
$$\left<W\right>=\frac{1}{𝒵}\int 𝒟\chi e^{-S(\chi ,\eta _𝒞)},$$
(11)
where
$$S(\chi ,\eta _𝒞)=\left(\frac{e}{2\pi }\right)^2\int d^3x\left\{\frac{1}{2}(\stackrel{}{\nabla }\chi )^2+m_D^2(1-\mathrm{cos}[\chi -\eta _𝒞])\right\},$$
(12)
$`m_D`$ is the Debye mass and $`S(\chi ,0)`$ is the action of the model. If static charge and anti-charge are placed at the points $`(-R/2,0)`$ and $`(R/2,0)`$ in the $`x_1,x_2`$ plane ($`x_3`$ is the time axis), then
$$\eta _𝒞=arctg[\frac{x_2}{x_1-R/2}]-arctg[\frac{x_2}{x_1+R/2}],\hspace{1em}-\pi \le \eta _𝒞\le \pi .$$
(13)
Below we present the results of the numerical calculations of the dimensionless string tension,
$`\sigma =\partial E/\partial (m_DR),`$ (14)
$`E={\displaystyle \int d^2x\left\{\frac{1}{2}(\stackrel{}{\nabla }\chi )^2+m_D^2(1-\mathrm{cos}[\chi -\eta _𝒞])\right\}}.`$ (15)
Note that the energy $`E`$ is measured in the units of the dimensional factor $`(\frac{e}{2\pi })^2`$ (cf. (12)). Variation of the functional (15) leads to the equation of motion $`\mathrm{\Delta }\chi =m_D^2\mathrm{sin}[\chi -\eta _𝒞]`$. For finite $`R`$ we can solve this equation numerically. The energy $`E`$ versus $`m_DR`$ is shown on Fig.1(a). At large separations between the charges ($`m_DR\gg 1`$) it tends to the asymptotic linear behavior $`E=8m_DR`$ which can be obtained also analytically .
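The equation of motion can be relaxed on a finite grid; below is a minimal Python sketch. The box size, the Dirichlet boundary condition $`\chi =0`$, the gradient-descent scheme, and the use of arctan2 to realize the branch structure of $`\eta _𝒞`$ (the $`2\pi `$ jump across the segment connecting the charges) are our own implementation choices, not those of the original calculation.

```python
import numpy as np

def eta_C(x1, x2, R):
    """Source angle of Eq. (13); arctan2 provides the 2*pi jump across
    the segment between the charges at (+/-R/2, 0)."""
    return np.arctan2(x2, x1 - 0.5 * R) - np.arctan2(x2, x1 + 0.5 * R)

def solve_chi(R, m_D=1.0, L=20.0, n=201, n_iter=5000, dt=0.2):
    """Relax Laplacian(chi) = m_D^2 sin(chi - eta_C) by descending the
    energy functional (15), with chi = 0 on the box boundary."""
    x = np.linspace(-0.5 * L, 0.5 * L, n)
    h = x[1] - x[0]
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    eta = eta_C(X1, X2, R)
    chi = np.zeros_like(eta)
    for _ in range(n_iter):
        lap = (np.roll(chi, 1, 0) + np.roll(chi, -1, 0) +
               np.roll(chi, 1, 1) + np.roll(chi, -1, 1) - 4.0 * chi) / h**2
        chi += dt * h**2 * (lap - m_D**2 * np.sin(chi - eta))
        chi[0, :] = chi[-1, :] = chi[:, 0] = chi[:, -1] = 0.0   # Dirichlet edges
    gx, gy = np.gradient(chi, h)
    dens = 0.5 * (gx**2 + gy**2) + m_D**2 * (1.0 - np.cos(chi - eta))
    return chi, dens.sum() * h**2        # E of Eq. (15), in units of (e/2pi)^2
```

The returned energy can then be differenced with respect to $`m_DR`$ to obtain the dimensionless tension of Eq. (14).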
At small distances there is a contribution of Yukawa type to the energy (15), which should be extracted explicitly. Note that in the course of rewriting the original $`3D`$ compact electrodynamics in the form (11-12) the Coulomb potential was already subtracted, so that (15) contains a Yukawa-like piece without a singularity at $`R=0`$. It is not difficult to find the corresponding coefficient:
$$E=E^{string}-2\pi (K_0[m_DR]+\mathrm{ln}[m_DR])$$
(16)
where $`K_0(x)`$ is the modified Bessel function and $`E^{string}`$ is the energy of the charge–anti-charge pair which is only due to the string formation. The corresponding string tension
$$\sigma ^{string}=\sigma +2\pi (-K_1[m_DR]+\frac{1}{m_DR})$$
(17)
is shown on Fig.1(b). We found that the best fit of numerical data for small values of $`m_DR`$ is by the function $`\sigma ^{string}=const(m_DR)^\nu `$ which gives $`\nu \approx 0.6`$.
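Given measured pairs $`(m_DR,\sigma ^{string})`$ from the relaxation solver, the power-law fit takes two lines; a sketch (scipy is assumed, and the data arrays are to be supplied by the user):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_string_tension(mdr, sigma_string):
    """Fit sigma^string = c * (m_D R)^nu at small m_D R; the text finds nu ~ 0.6."""
    (c, nu), _ = curve_fit(lambda x, c, nu: c * x**nu,
                           np.asarray(mdr), np.asarray(sigma_string), p0=(1.0, 0.6))
    return c, nu
```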
Thus the non-analytical potential associated with small distances is softer than in the case of the Abelian Higgs model. The source of the non-analyticity is the behavior of the function $`\eta _𝒞(x_1,x_2)`$ eq. (13) which is singular along the line connecting the charges, see Fig.2(a).
6. The compact electrodynamics is usually considered as the limit of Georgi–Glashow model, when the radius of the ’t Hooft – Polyakov monopole tends to zero. For a non-vanishing monopole size the problem of evaluating the potential at small distances becomes rather involved. To avoid unnecessary complications we consider the 3D Georgi–Glashow model in the BPS limit. The ’t Hooft – Polyakov monopole corresponds then to the fields:
$`\mathrm{\Phi }^a`$ $`=`$ $`{\displaystyle \frac{x^a}{r}}\left({\displaystyle \frac{1}{\mathrm{tanh}(\mu r)}}-{\displaystyle \frac{1}{\mu r}}\right),`$ (18)
$`A_i^a`$ $`=`$ $`\epsilon ^{aic}{\displaystyle \frac{x^c}{r}}\left({\displaystyle \frac{1}{r}}-{\displaystyle \frac{\mu }{\mathrm{sinh}(\mu r)}}\right),A_0^a=0.`$ (19)
The contribution of this monopole to the full non-Abelian Wilson loop $`W`$ can be calculated analytically. If the static charges are placed at points $`\pm \stackrel{}{R}/2`$ in the $`(x_1,x_2)`$ plane the result is:
$$W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu )=\mathrm{cos}h(\mu b_1)\mathrm{cos}h(\mu b_2)+\frac{(\stackrel{}{b}_1\cdot \stackrel{}{b}_2)}{b_1b_2}\mathrm{sin}h(\mu b_1)\mathrm{sin}h(\mu b_2),$$
(20)
here $`\stackrel{}{b}_{1,2}=\stackrel{}{x}_0\pm \stackrel{}{R}/2`$, $`b_k=|\stackrel{}{b}_k|`$, $`\stackrel{}{x}_0`$ is the center of the ’t Hooft – Polyakov monopole and
$$h(x)=\frac{\pi }{2}-\frac{x}{2}\underset{-\mathrm{\infty }}{\overset{+\mathrm{\infty }}{\int }}\frac{\mathrm{d}\zeta }{\sqrt{x^2+\zeta ^2}\mathrm{sinh}\sqrt{x^2+\zeta ^2}}.$$
(21)
One way to represent (20) in terms of the function $`\eta _𝒞`$ introduced earlier is:
$`\eta _𝒞(x_0,R,\mu )=\mathrm{sign}(y)\mathrm{arccos}W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu ).`$ (22)
In the limit $`\mu R\to \mathrm{\infty }`$, $`W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu )\to \mathrm{cos}\eta _𝒞`$ and $`\eta _𝒞(x_0,R,\mu )`$ coincides with the definition (13). For small $`\mu R`$ the function $`\eta _𝒞`$ of eq. (22) is singular not only between the external charges, but also outside this region (see Fig.2(b)), although the strength of the singularity gets smaller. In the limit of vanishing $`\eta _𝒞`$ the string tension at small distances $`\sigma _0`$ apparently goes to zero.
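Equations (20)–(21) are straightforward to evaluate numerically; a hedged Python sketch (the quadrature scheme is our choice):

```python
import numpy as np
from scipy.integrate import quad

def h(x):
    """Profile function of Eq. (21); h(x) -> pi/2 as x -> infinity."""
    val, _ = quad(lambda z: 1.0 / (np.sqrt(x*x + z*z) * np.sinh(np.sqrt(x*x + z*z))),
                  -np.inf, np.inf)
    return 0.5 * np.pi - 0.5 * x * val

def wilson_monopole(b1, b2, mu):
    """Single 't Hooft - Polyakov monopole contribution, Eq. (20), with
    b1, b2 = x0 +/- R/2 the 2-vectors defined after Eq. (20)."""
    n1, n2 = np.linalg.norm(b1), np.linalg.norm(b2)
    return (np.cos(h(mu * n1)) * np.cos(h(mu * n2))
            + np.dot(b1, b2) / (n1 * n2) * np.sin(h(mu * n1)) * np.sin(h(mu * n2)))
```

One can check numerically that $`h(x)\to \pi /2`$ for large $`x`$, so that $`W\to \mathrm{cos}\eta _𝒞`$ as stated above.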
To summarize, it is natural to expect that in the Georgi-Glashow model the non-analytical piece in the potential disappears at distances much smaller than the monopole size. Note, however, that to evaluate the potential consistently in this case one should also take into account the modification of the interaction due to the finite size of the monopoles.
7. Knowing the physics of the Abelian models outlined above, it is easy to argue that the perturbative vacuum of QCD is unstable as well. Indeed, let us make the lattice coarser a la Wilson until the effective coupling of QCD reaches the value at which the phase transition in the compact $`U(1)`$ occurs. Then the QCD perturbative vacuum is unstable against the monopole condensation. The actual vacuum state can of course be different from the $`U(1)`$ case but it cannot remain perturbative. A similar remark with respect to the formation of $`Z_2`$ vortices was in fact made a long time ago .
The existence of the infinitely thin topological defects in QCD makes it closely akin to the Abelian models considered above. However, the non-Abelian nature of the interaction brings in an important difference as well. Namely, the topological defects in QCD are marked by singular potentials rather than by a large non-Abelian action. Consider first the Dirac string.
Introduce to this end a potential which is a pure gauge:
$$A_\mu =\mathrm{\Omega }^{-1}\partial _\mu \mathrm{\Omega }$$
(23)
and choose the matrix $`\mathrm{\Omega }`$ in the form:
$$\mathrm{\Omega }(x)=\left(\begin{array}{cc}\mathrm{cos}\frac{\theta }{2}& \mathrm{sin}\frac{\theta }{2}e^{i\phi }\\ -\mathrm{sin}\frac{\theta }{2}e^{-i\phi }& \mathrm{cos}\frac{\theta }{2}\end{array}\right)$$
(24)
where $`\phi `$ and $`\theta `$ are azimuthal and polar angles, respectively. Then it is straightforward to check that we generated a Dirac string directed along the $`x_3`$-axis ending at $`x_3=0`$ and carrying the color index $`a=3`$. It is quite obvious that such Abelian-like strings are allowed by the lattice regularization of the theory.
The crucial point, however, is that the non-Abelian action associated with the potential (23) is identically zero. On the other hand, in its Abelian components the potential looks like that of a Dirac monopole; such monopoles are known to play an important role in the Abelian projection of QCD (for a review see, e.g., ). Thus, there is a kind of mismatch between short- and large-distance pictures. Namely, if one considers the lattice size $`a\to 0`$, then the corresponding coupling $`g(a)\to 0`$ and the solution with a zero action (23) is strongly favored at short distances. At larger distances we are aware of the dominance of the Abelian monopoles which have a non-zero action. The end-points of a Dirac string still mark the centers of the Abelian monopoles. Thus, monopoles can be defined as point-like objects topologically in terms of singular potentials, not action.
Similar remarks hold in the case of the $`Z_2`$ vortices. Namely, the $`Z_2`$ vortices, which have a typical size of order $`\mathrm{\Lambda }_{QCD}^{-1}`$, can be defined topologically in terms of the so called P-vortices which are infinitely thin but gauge dependent, see and references therein. To detect the P-vortices one uses the gauge maximizing the sum
$$\underset{l}{\sum }|TrU_l|^2$$
(25)
where $`l`$ runs over all the links on the lattice. The center projection is obtained by replacing
$$U_l\to \mathrm{sign}(TrU_l).$$
(26)
Each plaquette is marked either as $`(+1)`$ or $`(-1)`$ depending on the product of the signs assigned to the corresponding links. The P-vortex then pierces a plaquette with (-1). Moreover, the fraction $`p`$ of plaquettes pierced by the P-vortices, out of the total number of plaquettes $`N_T`$, was found to obey numerically the scaling law
$$p=\frac{N_{vor}}{N_T}\sim f(\beta )$$
(27)
where the function $`f(\beta )`$ is such that $`p`$ scales like the string tension. Assuming independence of the piercing for each plaquette one has then for the center-projected Wilson loop $`W_{cp}`$:
$$W_{cp}=[(1-p)(+1)+p(-1)]^A\approx e^{-2pA}$$
(28)
where $`A`$ is the number of plaquettes in the area stretched on the Wilson loop. Numerically, Eq. (28) reproduces the full string tension.
It is quite obvious that the P-vortices, since they are constructed on negative links, correspond in the continuum limit to singular gauge potentials of order $`a^{-1}`$. Moreover, the large potentials should mostly cancel if the corresponding field-strength tensors are calculated because of the asymptotic freedom. The argumentation is essentially the same as outlined above for the monopoles, see, e.g. and references therein.
At the moment, it is difficult to say a priori whether the topological defects defined in terms of singular potentials can be considered as infinitely thin from the physical point of view. They might be gauge artifacts. Phenomenologically, using the topologically defined point-like monopoles or infinitely thin P-vortices one can measure the non-perturbative $`Q\overline{Q}`$ potential at all the distances. It is remarkable therefore that the potentials generated both by monopoles and P-vortices turn out to be linear at all the distances measured:
$$V_{nonpert}(r)\approx \sigma _{\mathrm{\infty }}r\text{ at all }r$$
(29)
Note that the Coulomb-like part is totally subtracted out through the use of the topological defects. Moreover, the absence of a change in the slope in (29) agrees well with the dual Abelian Higgs model as discussed above (for alternative approaches see ).
The numerical observation (29) is by no means trivial. If it were so that only the non-Abelian action counts, then the non-perturbative fluctuations labeled by the Dirac strings or by P-vortices are bulky (see discussion above) and the corresponding $`Q\overline{Q}`$ potentials (29) should have been quadratic at small $`r`$. This happens, for example, in the model with finite thickness of $`Z_2`$ vortices. Similarly, if the lessons from the Georgi–Glashow model considered above apply the finite size of the monopoles would spoil linearity of the potential at short distances.
To summarize, direct measurements of the non-perturbative $`Q\overline{Q}`$ potential indicate the presence of a stringy potential at short distances. The measurements go down to distances of order $`(2\text{ GeV})^{-1}`$.
8. In view of the results (29) it is interesting to reexamine the power corrections with the question in mind of whether there is room for novel stringy corrections. From the dimensional considerations alone it is clear that the new corrections are of order $`\sigma _0/Q^2`$ where $`Q`$ is a large generic mass parameter characteristic of the problem at hand. Also, the ultraviolet renormalons in 4D indicate the same kind of correction, see and references therein. Note that unlike the case of the non-perturbative potential discussed above, other determinations of the power corrections ask for a subtraction of the dominating perturbative part and this might make the results less definitive.
(i) The first claim of observation of the non-standard $`1/Q^2`$ corrections was made in Ref. . Namely, it was found that the expectation value of the plaquette minus perturbation theory contribution shows $`1/Q^2`$ behavior. On the other hand, the standard OPE results in a $`1/Q^4`$ correction.
(ii) The lattice simulations do not show any change in the slope of the full $`Q\overline{Q}`$ potential as the distances are changed from the largest to the smallest ones where the Coulombic part becomes dominant. An explicit subtraction of the perturbative corrections at small distances from the $`Q\overline{Q}`$ potential in lattice gluodynamics was performed in ref. . This procedure gives $`\sigma _0\approx 5\sigma _{\mathrm{\infty }}`$ at very small distances.
(iii) There exist lattice measurements of the fine splitting of $`Q\overline{Q}`$ levels as a function of the heavy quark mass. The Voloshin-Leutwyler picture predicts a particular pattern of the heavy mass dependence of this splitting. Moreover, these predictions are very different from the predictions based, say, on the Buchmuller-Tye potential which adds a linear part to the Coulomb potential. The numerical results favor the linear correction to the potential at short distances.
(iv) Analytical studies of the Bethe-Salpeter equation and comparison of the results with the charmonium spectrum data favor a non-vanishing linear correction to the potential at short distances .
(v) The lattice-measured instanton density as a function of the instanton size $`\rho `$ does not satisfy the standard OPE prediction that the leading correction is of order $`\rho ^4`$. Instead, the leading correction is in fact quadratic .
(vi) One of the most interesting manifestations of short strings might be the $`1/Q^2`$ corrections to current correlation functions $`\mathrm{\Pi }_j(Q^2)`$. It is not possible to calculate the coefficient in front of the $`1/Q^2`$ terms from first principles; however, in Ref. it was suggested to simulate this correction by a tachyonic gluon mass. On one hand, the tachyonic mass imitates the stringy piece in the potential at short distances. On the other hand, it can be used in one-loop calculations of the correlation functions. Rather unexpectedly, the use of the tachyonic gluon mass ($`m_g^2=-0.5\text{ GeV}^2`$) explains well the behavior of $`\mathrm{\Pi }_j(Q^2)`$ in various channels. To check the model further, it would be very important to perform accurate calculations of various correlators $`\mathrm{\Pi }_j(Q^2)`$ on the lattice.
9. As seen from the points (i)-(vi) above, the existence of the novel quadratic corrections is strongly supported by the data. There are, however, two caveats to the statement that the novel short-distance power corrections have been detected. On the theoretical side, the existence of short strings has been proven only within the Abelian Higgs model. As for QCD itself, the analysis is so far inconclusive. On the experimental side, the data always refer to a limited range of distances. In particular, the linear non-perturbative potential has been observed at distances of order of one lattice spacing, which in physical units is about $`(1\text{-}2\text{ GeV})^{-1}`$. One cannot rule out that at shorter distances the behavior of the non-perturbative power corrections changes (see, e.g., ), which would be a remarkable phenomenon by itself.
10. M.N.Ch. and M.I.P. acknowledge the kind hospitality of the staff of the Max-Planck Institut für Physik (München), where the part of this work has been done. Work of M.N.C., F.V.G. and M.I.P. was partially supported by grants RFBR 99-01230a and INTAS 96-370.
# Metallic Spin Glass in Infinite Dimensions[1]
Metallic spin glass is a new subject for theoretical studies, while experimentally such a state has been recognized for a long time. Pioneering theoretical works have been done on the assumption of the presence of the metallic spin-glass state. However, the metallic spin-glass state remains to be derived from a microscopic model. In this Short Note we clarify the present status of the microscopic theory and develop it in comparison with the theory of the Mott transition in infinite dimensions. In the following we find a metal-insulator transition, and the transition is predominantly of the Mott type rather than the Anderson type.
We consider the random spin-fermion model in infinite dimensions introduced in ref. 2,
$$H=-\underset{ij\sigma }{\sum }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma }+J_\mathrm{K}\underset{i}{\sum }S_i^z\sigma _i^z-\underset{ij}{\sum }J_{ij}S_i^zS_j^z,$$
(1)
where $`c_{i\sigma }`$ represents the conduction electron with the spin $`\sigma `$, $`\sigma _i^z`$ the spin density of the conduction electron and $`S_i^z`$ the spin density of the localized spin at site $`i`$. The conduction electrons hop on the Bethe lattice. The density of states for bare conduction electrons is semicircular and the half-width is $`2t`$ where $`t_{ij}=t/\sqrt{d}`$ at a finite dimension $`d`$. Although we did not introduce a randomness in the transfer integral $`t_{ij}`$, Gaussian randomness in $`t_{ij}`$ leads to a similar model to ours after averaging over the randomness. The exchange interaction, $`J_{ij}`$, between localized spins is a random variable obeying the Gaussian distribution as in the case of the mean field theory of the Ising spin glass. We assume for simplicity that the localized spins form a robust spin-glass state unaffected by the coupling, $`J_\mathrm{K}`$, to the conduction electrons. Then we treat the localized spins as an environment which has a dynamics described by the mean field theory and drives the conduction electrons. We are interested in the possibility of a metallic state so that we calculate the density of states of conduction electrons in the following. For simplicity, we consider the half-filled case of the conduction electrons.
In infinite dimensions the momentum dependence of the self-energy for conduction electrons is absent so that we only calculate the frequency dependence of the self-energy. In the second order of the coupling, $`J_\mathrm{K}`$, the self-energy is given by
$$\mathrm{\Sigma }(\mathrm{i}\omega _n)=TJ_\mathrm{K}^2\underset{m}{\sum }\chi (\mathrm{i}\mathrm{\Omega }_m)G_0(\mathrm{i}\omega _n-\mathrm{i}\mathrm{\Omega }_m),$$
(2)
where $`\chi (\mathrm{i}\mathrm{\Omega }_m)`$ is the dynamical spin susceptibility of localized spins and $`G_0(\mathrm{i}\omega _n)`$ the Green function of conduction electrons. Here $`\omega _n`$ is a fermion frequency and $`\mathrm{\Omega }_m`$ a boson frequency at a temperature $`T`$. We consider the case of $`T=0`$ in the following. Since it has been clarified in the study of the Mott transition that the second-order perturbation gives a good interpolation between weak and strong coupling limits, we adopt the same approximation here and will see that it actually works well.
The spin susceptibility for localized spins has a static part $`\chi _\mathrm{s}`$ as
$$\chi _\mathrm{s}=\frac{\mathrm{\Delta }}{T}\delta _{m,0},$$
(3)
in the spin-glass state of the mean field theory. Here $`\mathrm{\Delta }`$ is the order parameter of the spin-glass state and the measure of broken ergodicity due to the rugged energy landscape. The mean field theory becomes exact in infinite dimensions. We can introduce the dynamics of localized spins, and the retarded dynamical spin susceptibility has an imaginary part proportional to $`\omega ^\nu `$, where $`\omega `$ is a small real frequency and the exponent $`\nu `$ is about $`1/4`$ at $`T=0`$.
The Green function for conduction electrons is determined as
$$G_0(\mathrm{i}\omega _n)^{-1}=\mathrm{i}\omega _n-t^2G(\mathrm{i}\omega _n),$$
(4)
in the same manner as in the case of the Mott transition where $`G(\mathrm{i}\omega _n)`$ is the renormalized Green function for conduction electrons given by
$$G(\mathrm{i}\omega _n)^{-1}=G_0(\mathrm{i}\omega _n)^{-1}-\mathrm{\Sigma }(\mathrm{i}\omega _n).$$
(5)
In infinite dimensions $`G(\mathrm{i}\omega _n)`$ plays the role of the dynamical mean field. Since we study the half-filled case, the chemical potential, $`\mu `$, can be fixed as $`\mu =0`$.
We calculate the density of states $`\rho (\omega )`$ of conduction electrons by $`\rho (\omega )=-\mathrm{Im}G(\omega +\mathrm{i0}_+)/\pi `$. The density of states at the Fermi energy, $`\rho (0)`$, serves as the order parameter of the metal-insulator transition possible in our random spin-fermion model in infinite dimensions. Namely, the transition is predominantly of the Mott type. Since our model is mapped onto a single-site model, the localization character of the Anderson transition is apparently absent at the metal-insulator transition. In the case of the Anderson transition $`\rho (0)`$ is uncritical. On the other hand, $`\rho (0)`$ is critical at the Anderson-Mott transition in three dimensions. Although our solution is obtained in infinite dimensions, it might be relevant to three-dimensional systems.
In our study the coupling $`J_\mathrm{K}`$ plays the same role as the local repulsion $`U`$ of the Hubbard model. If we neglect the dynamics of $`\chi (\omega )`$, we obtain a metal-insulator transition of the Hubbard type. The transition obtained in ref. 3 is of this type. However, such a transition is an artifact due to the insufficiency of the approximation. The Mott transition should be ascribed to the quasiparticles of the Gutzwiller type whose dynamics is driven by the dynamics of localized spins. In our dynamical mean field theory both Hubbard-type and Gutzwiller-type characters are taken into account.
For simplicity, we approximate the dynamical susceptibility in eq. (2) by the static part in eq. (3). This approximation favors the insulating state so that the actual $`\lambda _\mathrm{c}`$ is larger than the value obtained in the following. The dynamics is taken into account as the origin of the width of the density of states of quasiparticles around the Fermi energy to be determined by the self-consistent procedure of the dynamical mean field theory.
A self-consistent numerical solution for $`\rho (\omega )`$ using the fast Fourier transform is given in Fig. 1, where $`J_\mathrm{K}\sqrt{\mathrm{\Delta }}/t\equiv \lambda =0.5`$. Here we have used the unit $`t=1`$. The density of states is roughly decomposed into three parts: the quasiparticle peak around the Fermi energy, $`\omega =0`$, and the upper and the lower Hubbard bands centered around $`\omega =\pm J_\mathrm{K}\sqrt{\mathrm{\Delta }}`$. The presence of the quasiparticle peak establishes the presence of a metallic state.
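With the static approximation for the susceptibility described above, the Matsubara sum in eq. (2) collapses to $`\mathrm{\Sigma }=J_\mathrm{K}^2\mathrm{\Delta }G_0`$, and the self-consistency loop of eqs. (4) and (5) can be iterated directly on the real frequency axis. A minimal Python sketch follows; the broadening, grid, and mixing factor are our own choices (the original calculation used a fast-Fourier-transform implementation).

```python
import numpy as np

t = 1.0                        # half-width of the bare semicircular band is 2t
lam = 0.5                      # lambda = J_K * sqrt(Delta) / t
jk2_delta = (lam * t)**2       # J_K^2 * Delta
w = np.linspace(-4.0, 4.0, 8001) + 1e-3j     # retarded frequencies omega + i0+

G = 1.0 / w                    # initial guess for the local Green function
for _ in range(500):
    G0 = 1.0 / (w - t**2 * G)                # eq. (4): dynamical mean field
    Sigma = jk2_delta * G0                   # eq. (2) with the static chi of eq. (3)
    G = 0.5 * G + 0.5 / (1.0 / G0 - Sigma)   # eq. (5), with linear mixing
rho = -G.imag / np.pi                        # density of states rho(omega)
```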
When we increase the value of the parameter $`\lambda `$, we have a metal-insulator transition at a critical value $`\lambda _\mathrm{c}`$ where the quasiparticle peak vanishes. The transition point can be evaluated analytically in the same manner as in the case of the Hubbard model. In the metallic state near the transition point the self-energy at a low energy is estimated as
$$\mathrm{\Sigma }(\omega )=\frac{J_\mathrm{K}^2\mathrm{\Delta }}{1+(4t^2/J_\mathrm{K}^2\mathrm{\Delta })}\frac{\omega }{\omega ^2-\omega _0^2},$$
(6)
where $`\omega _0^2=tt^{\prime }/[1+(4t^2/J_\mathrm{K}^2\mathrm{\Delta })]`$. Here $`t^{\prime }`$ is the renormalized transfer integral for quasiparticles to be determined self-consistently. This self-energy serves as a good interpolation between weak and strong coupling limits. If $`\omega _0`$ is finite, $`\mathrm{\Sigma }(\omega )\propto \omega `$ in the limit of $`\omega \to 0`$, so that the quasiparticle peak is obtained. If $`\omega _0`$ vanishes, $`\mathrm{\Sigma }(\omega )\propto 1/\omega `$ at small $`\omega `$, so that the quasiparticle peak vanishes and the Hubbard bands are obtained. The self-consistently determined weight of the quasiparticle peak vanishes at $`\lambda =\lambda _\mathrm{c}=1`$ and this point corresponds to the metal-insulator transition.
In summary, we have combined the dynamical mean field theories of the Mott transition and the spin glass in infinite dimensions and derived a metallic spin-glass state from the random spin-fermion model. Since our parameter $`\lambda `$ is proportional to $`\sqrt{\mathrm{\Delta }}`$, which reflects the randomness in the localized spin system, our theory has relevance to the metal-insulator transition driven by randomness observed in experiments, where a spin-charge separation is established in the sense that the spin response exhibits an anomaly related to the existence of a spin-glass order, while the charge response is normal, in the metallic state.
Many things remain undone in the present work and should be clarified in future studies. For example, our theory is not fully self-consistent, since we neglect the modification of the spin susceptibility due to the coupling between conduction electrons and localized spins. In order to understand the experiments, we have to develop a theory applicable to the two- or three-dimensional case, while our present theory is formulated in infinite dimensions.
The author is grateful to Minokichi Sato, Hiroshi Kawai and Hitoshi Yamaoka for their support on establishing our computing environment.
# Wavelet Cross-Correlation Analysis of Turbulent Mixing from Large-Eddy-Simulations
## 1 Introduction
The complex interactions existing between turbulence and mixing in a bluff-body stabilised flame configuration are investigated by means of a wavelet cross-correlation analysis on Large Eddy Simulations. The combined approach allows one to better point out typical features of unsteady turbulent flows with mixing, through the characterisation of the processes involved both in time and in scale. The wavelet cross-correlation analysis of the time signals of velocity and mixture fraction fluctuations can be an effective tool to study the processes involved in turbulent mixing flows, which are of great interest in combustion problems.
## 2 Generalities on wavelet cross-correlation
The continuous wavelet transform of a function $`f(t)`$ is defined as the convolution between $`f`$ and a dilated function $`\psi `$ called wavelet mother:
$$W_f(a,\tau )=\frac{1}{\sqrt{a}}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}f(t)\psi ^{*}(\frac{t-\tau }{a})dt,$$
(1)
where $`a`$ is the dilation parameter, which plays the same role as the frequency in Fourier analysis, and $`\tau `$ indicates the translation parameter corresponding to the position of the wavelet in the physical space. In the present study we use the complex Morlet wavelet ($`\psi (t)=e^{i\omega _0t}e^{-t^2/2}`$) as the wavelet mother.
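A direct transcription of Eq. (1) for the Morlet wavelet is given below as a Python sketch; the $`\pm 4a`$ truncation of the wavelet support, the default $`\omega _0=6`$, and the assumption that the signal is longer than the truncated wavelet are our own choices (an FFT-based convolution would be faster but is omitted for clarity).

```python
import numpy as np

def morlet_cwt(f, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform of Eq. (1) with the Morlet wavelet
    psi(t) = exp(i*w0*t - t^2/2), truncated at +/- 4a at each scale a."""
    f = np.asarray(f, dtype=float)
    out = np.empty((len(scales), len(f)), dtype=complex)
    for k, a in enumerate(scales):
        m = int(np.ceil(4.0 * a / dt))
        t = np.arange(-m, m + 1) * dt
        psi = np.exp(1j * w0 * t / a - 0.5 * (t / a)**2)
        # W(a, tau_j) = (dt/sqrt(a)) * sum_i f(t_i) conj(psi)((t_i - tau_j)/a)
        out[k] = np.convolve(f, np.conj(psi[::-1]), mode="same") * dt / np.sqrt(a)
    return out
```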
Let $`W_f(a,\tau )`$ and $`W_g(a,\tau )`$ be the continuous wavelet transforms of $`f(t)`$ and $`g(t)`$. We define the *wavelet cross-scalogram* as
$$W_{fg}(a,\tau )=W_f^{*}(a,\tau )W_g(a,\tau ),$$
(2)
where the symbol $``$ indicates the complex conjugate. When the wavelet mother is complex, the wavelet cross-scalogram $`W_{fg}(a,\tau )`$ is also complex and can be written in terms of its real and imaginary parts:
$$W_{fg}(a,\tau )=CoW_{fg}(a,\tau )-iQuadW_{fg}(a,\tau ).$$
(3)
It can be shown that the following equation holds if $`f(t),g(t)\in L^2(\mathbb{R})`$
$$\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}f(t)g(t)dt=1/c_\psi \int _0^{+\mathrm{\infty }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}CoW_{fg}(a,\tau )d\tau da,$$
(4)
where $`1/c_\psi `$ is a constant depending on the choice of the wavelet mother.
## 3 Cross wavelet coherence functions
The highly redundant information from a multiscale wavelet analysis of time series must be reduced by means of suitable selective procedures and quantities, in order to extract the main features correlated to an essentially intermittent dynamics. In this study, we analysed and compared the properties of two complementary wavelet local correlation coefficients, which are able to clearly evidence peculiar and anomalous local events associated with the vortex dynamics. More precisely, given two signals $`f(t)`$ and $`g(t)`$, we refer to the so-called *Wavelet Local Correlation Coefficient* (Buresti et al. ), defined as:
$$WLCC(a,\tau )=\frac{CoW_{fg}(a,\tau )}{|W_f(a,\tau )||W_g(a,\tau )|}.$$
(5)
This quantity is essentially a measure of the phase coherence of the signals. Here we introduce the *Cross Wavelet Coherence Function* (CWCF) defined as:
$$CWCF(a,\tau )=\frac{2|W_{fg}(a,\tau )|^2}{|W_f(a,\tau )|^4+|W_g(a,\tau )|^4},$$
(6)
which is essentially a measure of the intensity coherence of the signals. Using the polar coordinates we can write the wavelet transforms of $`W_f(a,\tau )`$, $`W_g(a,\tau )`$ and $`W_{fg}(a,\tau )`$ as:
$$W_f(a,\tau )=\rho _fe^{ı\theta _f}W_g(a,\tau )=\rho _ge^{ı\theta _g}$$
(7)
$$W_{fg}(a,\tau )=\rho _f\rho _ge^{ı(\theta _g-\theta _f)},$$
(8)
and the Cross Wavelet Coherence Function can be written also as:
$$CWCF(a,\tau )=\frac{2\rho _f^2\rho _g^2}{\rho _f^4+\rho _g^4}.$$
(9)
It is easy to observe the two basic properties of the function (6):
$$CWCF(a,\tau )=0\iff \rho _f=0\text{ or }\rho _g=0$$
(10)
$$0\le CWCF\le 1\hspace{1em}\forall \,a,\tau .$$
(11)
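Both coherence measures follow immediately from the two wavelet maps; a minimal sketch (the small regularizer `eps` is our own guard against vanishing coefficients):

```python
import numpy as np

def coherence_maps(Wf, Wg, eps=1e-12):
    """WLCC (Eq. 5) and CWCF (Eq. 6) from two wavelet transforms
    sampled on the same (scale, time) grid."""
    Wfg = np.conj(Wf) * Wg                     # cross-scalogram, Eq. (2)
    rf, rg = np.abs(Wf), np.abs(Wg)
    wlcc = Wfg.real / (rf * rg + eps)          # phase coherence, in [-1, 1]
    cwcf = 2.0 * np.abs(Wfg)**2 / (rf**4 + rg**4 + eps)   # intensity coherence
    return wlcc, cwcf
```

With the `morlet_cwt` sketch above, `wlcc, cwcf = coherence_maps(morlet_cwt(u, scales), morlet_cwt(z, scales))` gives the two maps on the same (scale, time) grid.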
## 4 Numerical simulation
We considered a laboratory-scale axisymmetric methane-air flame in a non-confined bluff-body configuration. More precisely, the burner consists of a 5.4 mm diameter methane jet located in the center of a 50 mm diameter cylinder. Air is supplied through a 100 mm outer diameter coaxial jet around the 50 mm diameter bluff-body. The Reynolds number of the central jet is 7000 (methane velocity = 21 m/s) whereas the Reynolds number of the coaxial jet is 80000 (air velocity = 25 m/s). This is a challenging test case for all the turbulence models, as well documented in the ERCOFTAC report (Chatou, 1994) . Moreover, due to the highly intermittent, unsteady dynamics involved and the high turbulence level, especially for the reactive case, the Large Eddy Simulation (LES) appears to be the most adequate numerical approach (Sello et al. ).
## 5 Results and discussion
In this analysis we are mainly interested in the relations existing between the evolution of turbulence and mixing, for the reactive case. Previous DNS simulations of coaxial jets at different Reynolds numbers show the ability of the wavelet cross-correlation analysis to better investigate the relations between the mixing process and the dynamics of vorticity (Salvetti et al. ). Thus, the signals analysed here are velocity fluctuations (for Reynolds stress contributions) and mixture fraction fluctuations (for mixing evolution) from LES. As an example, Figure 1 shows the wavelet co-spectrum maps for a significant time interval in the pseudo-stationary regime of motion. The main contributions to the Reynolds stress are evidenced by high intensity correlation (red) and anti-correlation (blue) regions, which evolve intermittently. The dominant frequencies involved are located around 130 Hz. As for the mechanisms responsible for the evolution of mixing, we note that the same regions of high Reynolds stress correspond to high correlation, or cooperation, between velocity and mixture fraction fluctuations, suggesting that, at the selected location, the same events of stretching and tilting of the vorticity layer drive both the Reynolds stress and the mixing evolutions. Note that the large high-value region located at low frequencies in the right map is statistically not significant if we assume a proper red noise background spectrum. To better investigate the role of the high correlation regions, we performed a cross section in the wavelet map at the frequency 160 Hz. Figure 2 (left) shows the time behaviour of the coherence functions WLCC, eq. (5), and CWCF, eq. (6). Here the phase and intensity coherence of the signals are almost equivalent, but we can clearly point out an important anomalous event occurring at around t=0.19 s, corresponding to a loss of both intensity and phase coherence, followed by a change of the correlation sign. The link between this event and the dynamics of vorticity is evidenced by Figure 2 (right), which displays the wavelet map of the related vorticity signal. The higher frequency significant regions ($`\approx 730`$ Hz) appear strongly intermittent, with a bifurcation to lower and higher values than average, followed by a drop of activity, in phase with the anomalous event.
These few examples support the usefulness of the cross-wavelet analysis approach to better investigate turbulent mixing processes in real systems.
# The pattern speed of the nuclear disk of M31 using a variant of the Tremaine–Weinberg method
## 1 Introduction
The nucleus of M31 was first resolved by the Stratoscope II balloon–borne telescope (Light, Danielson, and Schwarzschild 1974), which showed that the peak brightness was displaced relative to the center, as inferred from the isophotes of the outer parts of the galaxy. This was confirmed, and extended by Hubble Space Telescope (HST) observations, which revealed two peaks in the brightness, separated by $`0^{\prime \prime }.49`$ (Lauer et al. 1993, King, Stanford, and Crane 1995). Ground–based, as well as HST spectroscopy, have probed the structure of the radial velocities and velocity dispersions, in increasing detail, along many strips across the nuclear region (Dressler and Richstone 1988, Kormendy 1988, Bacon et al. 1994, van der Marel et al. 1994, Gerssen, Kuijken, and Merrifield 1995, Statler et al. 1999, Kormendy and Bender 1999). These provide evidence for the presence of a massive dark object (MDO), which could be a supermassive black hole, of mass $`3\times 10^7\mathrm{M}_{\odot }`$, located very close to the fainter peak (P2). The dynamical center of the nucleus is believed to coincide with the center of the isophotes of the bulge of M31; this point has been estimated to lie between the two peaks.
Tremaine (1995) proposed that the nucleus could be a thick eccentric disk, composed of stars on nearly Keplerian orbits around the MDO, with their apoapsides aligned in the direction toward the brighter peak (P1); the brightness of P1 is then explained as the increased concentration of stars, resulting from their slow speeds near their apoapsides. He also showed that this model is broadly consistent with the kinematics, as inferred from the spectroscopic observations of Kormendy (1988), and Bacon et al. (1994). Recent work has not only produced further support for Tremaine’s model (Statler et al. 1999, Kormendy and Bender 1999), but has stimulated variations on the basic theme (Statler 1999). Tremaine also suggested that the alignment could be maintained by the self–gravity of the disk, wherein the eccentric distortion could arise as a discrete, nonlinear eigenmode, with some nonzero pattern speed ($`\mathrm{\Omega }_p`$), equal to the common apsidal precession rate. The dynamical question is yet to be resolved in a self–consistent manner, although explorations of orbits in model potentials have identified a family of resonant, aligned loop orbits, which could serve as building blocks (Sridhar and Touma 1999, Statler 1999); reasonably faithful reproduction of the nuclear rotation curve adds some degree of confidence (Statler 1999). If the nuclear disk is indeed a steadily rotating, nonlinear eigenmode, what is $`\mathrm{\Omega }_p`$?
Tremaine and Weinberg (1984, hereafter TW) invented a method of estimating the pattern speed of a barred disk galaxy, which uses measurements of the surface brightness and radial velocity along a strip parallel to the line of nodes (defined as the line of intersection of the disk and sky planes). That this method works was demonstrated when the pattern speed of the bar in NGC 936 was estimated by Merrifield and Kuijken (1995). Unfortunately, the radial velocity measurements of the nucleus of M31 (see references above on spectroscopy) are available on strips that either do not coincide with the line of nodes, or possess too poor an angular resolution for direct application of the TW method. In this Letter, we show that the HST observations of Statler et al. (1999, hereafter SKCJ), together with Tremaine’s (1995) model, can be used to estimate $`\mathrm{\Omega }_p`$.
## 2 A modified Tremaine–Weinberg method applicable to M31
We briefly recall TW’s derivation of their kinematic method. Let us assume that the disk is razor–thin, with a well–defined pattern speed, $`\mathrm{\Omega }_p`$. The plane of the disk is assumed to be inclined at angle $`i`$ with respect to the plane of the sky. Let $`(x,y)`$, and $`(r,\varphi )`$ be cartesian and polar coordinates, respectively, in the plane of the disk, with the origin coinciding with the center of mass of the system (disk plus MDO). Let the cartesian coordinates in the sky plane be $`(X,Y)=(x,y\mathrm{cos}i)`$; the $`x`$ and $`X`$ axis coincide with the line of nodes. The disk is assumed to rotate steadily, hence the surface brightness of stars, $`\mathrm{\Sigma }(x,y,t)=\stackrel{~}{\mathrm{\Sigma }}(r,\varphi -\mathrm{\Omega }_pt)`$. The surface brightness is also assumed to obey a continuity equation, without a source term. Let $`𝒗`$ be the (mean) velocity field in the disk plane. The continuity equation can be manipulated to yield,
$$\nabla \cdot \left[\left(𝒗-𝒖\right)\mathrm{\Sigma }\right]=0,$$
(1)
where $`𝒖=\mathrm{\Omega }_p(-y\widehat{x}+x\widehat{y})`$. TW proceed by integrating equation (1) over $`x`$ from $`-\infty `$ to $`+\infty `$. Assuming $`\mathrm{\Sigma }\to 0`$ sufficiently rapidly as $`|x|\to \infty `$, and integrating over $`y`$ from $`y`$ to $`\infty `$ yields $`\mathrm{\Omega }_p\int _{-\infty }^{\infty }𝑑x\mathrm{\Sigma }x=\int _{-\infty }^{\infty }𝑑x\mathrm{\Sigma }v_y`$. Noting that the sky brightness, $`\mathrm{\Sigma }_s=\mathrm{\Sigma }/\mathrm{cos}i`$, and the radial velocity, $`V_{\parallel }=v_y\mathrm{sin}i`$, the integrals may be expressed in terms of observable quantities; hence $`\mathrm{\Omega }_p`$ can be estimated when $`V_{\parallel }`$ has been measured on a strip parallel to the line of nodes.
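As a numerical sanity check of this relation, the short Python sketch below evaluates both integrals along a single strip for a mock, rigidly rotating bar; since the mock velocity field is itself the rigid rotation $`𝒖`$, the ratio of the two integrals recovers the input pattern speed exactly by construction (all values are illustrative and not tied to any data).

```python
import numpy as np

# Mock check of the TW bookkeeping on a rigidly rotating "bar".
Omega_p = 0.05                       # input pattern speed (rad / time unit)
phi0 = 0.4                           # bar position angle in the disk plane
x = np.linspace(-10.0, 10.0, 401)    # strip parallel to the line of nodes
y0 = 1.5                             # strip offset
dx = x[1] - x[0]

# elongated Gaussian brightness, rotated by phi0
xr = x * np.cos(phi0) + y0 * np.sin(phi0)
yr = -x * np.sin(phi0) + y0 * np.cos(phi0)
Sigma = np.exp(-(xr / 4.0) ** 2 - (yr / 1.0) ** 2)

v_y = Omega_p * x                    # rigid rotation: v = Omega_p (-y, x)

num = np.sum(Sigma * v_y) * dx       # RHS: integral of Sigma * v_y
den = np.sum(Sigma * x) * dx         # LHS: integral of Sigma * x
print(num / den)                     # recovers Omega_p = 0.05
```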
SKCJ measured $`V_{\parallel }`$ along the P1–P2 line, which is inclined by about $`4^{\circ }`$, in the sky plane, to the line of nodes. Therefore $`V_{\parallel }`$ is available on a strip that makes an angle, $`\theta \simeq 4^{\circ }/\mathrm{cos}77^{\circ }\simeq 18^{\circ }`$, with the $`x`$–axis, in the disk plane; the TW procedure needs some modification, before $`\mathrm{\Omega }_p`$ can be extracted. Let $`x^{\prime }=x\mathrm{cos}\theta +y\mathrm{sin}\theta `$ and $`y^{\prime }=-x\mathrm{sin}\theta +y\mathrm{cos}\theta `$ be the rotated cartesian coordinates. Let us also denote the surface brightness by $`\mathrm{\Sigma }^{\prime }(x^{\prime },y^{\prime },t)`$. The SKCJ measurements are along the strip $`y^{\prime }=0`$, that passes through the origin. Equation (1) expresses a relation that is invariant under rotation of cartesian coordinates. Hence application of the TW procedure provides an identical relationship between the integrals over the strip, defined by $`y^{\prime }=0`$:
$$\mathrm{\Omega }_p\int _{-\infty }^{\infty }𝑑x^{\prime }\mathrm{\Sigma }^{\prime }x^{\prime }=\int _{-\infty }^{\infty }𝑑x^{\prime }\mathrm{\Sigma }^{\prime }v_y^{\prime }$$
(2)
We now express the integrals in terms of observable quantities: $`x^{\prime }=X\mathrm{cos}\theta +Y\mathrm{sin}\theta /\mathrm{cos}i`$ and $`0=y^{\prime }=-X\mathrm{sin}\theta +Y\mathrm{cos}\theta /\mathrm{cos}i`$ can be used to eliminate $`Y`$, giving $`x^{\prime }=X/\mathrm{cos}\theta `$. As before, $`\mathrm{\Sigma }^{\prime }(x^{\prime },y^{\prime },t)=\mathrm{cos}i\mathrm{\Sigma }_s(X,Y,t)`$. However, the radial velocity,
$$V_{\parallel }=\mathrm{sin}i\left(v_y^{\prime }\mathrm{cos}\theta +v_x^{\prime }\mathrm{sin}\theta \right),$$
(3)
depends on $`v_y^{\prime }`$ as well as $`v_x^{\prime }`$. Hence it is, in general, not possible to express $`v_y^{\prime }`$ in equation (2) in terms of $`V_{\parallel }`$ alone (the TW method finesses this problem because $`\theta =0`$ kills the contribution to $`V_{\parallel }`$ from the $`v_x^{\prime }`$ term). However, we are able to make progress by recalling the basic elements of Tremaine’s (1995) model. If the nuclear disk is largely composed of nearly Keplerian ellipses, with their apoapsides aligned along the P1–P2 line, and we choose $`\theta `$ such that our strip lies along this line, then the strip intersects all these ellipses at right angles. Thus $`v_x^{\prime }=0`$, and we recover the familiar TW relation,
$$\mathrm{\Omega }_p\mathrm{sin}i\int _{-\infty }^{\infty }𝑑X\mathrm{\Sigma }_sX=\int _{-\infty }^{\infty }𝑑X\mathrm{\Sigma }_sV_{\parallel }.$$
(4)
It should be noted that the value of $`\theta `$ drops out of equation (4). Moreover, $`v_x^{\prime }=0`$ even when the apsides of the ellipses precess. Below we use HST photometry and kinematics to estimate $`\mathrm{\Omega }_p`$.
## 3 $`\mathrm{\Omega }_p`$ from HST photometry and spectroscopy
HST photometric data, reported in Lauer et al. (1993), were kindly supplied to us by Prof. Ivan King. Figure 1 shows the (sky) surface brightness along the P1–P2 line. The contribution from the bulge of M31 was subtracted using two different fits to the bulge brightness, namely a Nuker fit as described in Tremaine (1995), and a Sérsic fit as described in Kormendy and Bender (1999). The disks so obtained will henceforth be referred to as a “T–disk”, and a “KB–disk”, respectively.
SKCJ observed the stellar kinematics along the P1–P2 line, using the $`f/48`$ long–slit spectrograph of the HST Faint Object Camera. We obtained radial velocities along the P1–P2 line, including the errors on them, from their “de–zoomed” rotation curve (given in Figure 3, as well as Table 1 of SKCJ); these are displayed in Figure 2a. The integrals in equation (4) need to be computed, with upper and lower limits symmetrically displaced about the center of mass of the disk plus MDO. Kormendy and Bender (1999) have determined the center of mass to be displaced by $`0^{\prime \prime }.098`$ from P2 toward P1, and we use this value in the computations for Figures 2b and 2c. The stated errors on the radial velocities were used by us to generate 300 random realizations. For each of these realizations of the rotation curve, we evaluate the integrals for five different limits, ranging from $`\pm 0^{\prime \prime }.8`$ to $`\pm 1^{\prime \prime }.2`$. As is clear from equation (4), only the combination, $`\mathrm{\Omega }_p\mathrm{sin}i`$ can be determined. It is a common assumption that the nuclear disk of M31 has the same inclination, to the sky plane, as the much larger galactic disk of M31, which is inclined at $`77^{\circ }`$. We wish to state our results independent of this assumption, so we present estimates of the quantity, $`\stackrel{~}{\mathrm{\Omega }}_p=(\mathrm{sin}i/\mathrm{sin}77^{\circ })\mathrm{\Omega }_p`$, in Figures 2b—2d.
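A minimal sketch of this error-propagation procedure is given below; the arrays standing in for $`\mathrm{\Sigma }_s`$, $`V_{\parallel }`$ and the velocity errors are placeholders for the tabulated data of Lauer et al. (1993) and SKCJ, and the result comes out in km s<sup>-1</sup> arcsec<sup>-1</sup> (conversion to pc<sup>-1</sup> requires an adopted M31 scale).

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder profiles standing in for the tabulated photometry and rotation
# curve (X in arcsec, V and sigma_V in km/s).
X = np.linspace(-1.0, 1.0, 41)
dX = X[1] - X[0]
Sigma_s = np.exp(-((X - 0.2) / 0.5) ** 2)
V = 250.0 * np.tanh(X / 0.3)
sigma_V = 20.0 * np.ones_like(V)

sin77 = np.sin(np.radians(77.0))
samples = []
for _ in range(300):                          # 300 realizations, as in the text
    Vr = V + rng.normal(0.0, sigma_V)         # perturb by the stated errors
    num = np.sum(Sigma_s * Vr) * dX
    den = sin77 * np.sum(Sigma_s * X) * dX
    samples.append(num / den)                 # tilde-Omega_p, km/s per arcsec

samples = np.asarray(samples)
print(samples.mean(), samples.std())          # estimate and 1-sigma error
```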
Figures 2b and 2c display the estimates of $`\stackrel{~}{\mathrm{\Omega }}_p`$ so obtained, together with $`1\sigma `$ error bars, as a function of the limits of the integrals, for the T–disk and KB–disk, respectively. It is evident that the estimates of the pattern speed do not vary much when the limits of integration lie between $`\pm 1^{\prime \prime }.0`$ and $`\pm 1^{\prime \prime }.2`$. We also explore the dependence on the position of the center of mass of the system, assumed in the computation of the integrals. Although we used the most recent determination, due to Kormendy and Bender (1999), it must be noted that earlier work (Lauer et al. 1993, King, Stanford, and Crane 1995, Tremaine 1995) records smaller values, $`\sim 0^{\prime \prime }.05`$, away from P2 toward P1. Figure 2d plots our estimates of $`\stackrel{~}{\mathrm{\Omega }}_p`$, for a range of values of the center of mass, with limits of integration fixed at $`\pm 1^{\prime \prime }.0`$. A negative value of $`\stackrel{~}{\mathrm{\Omega }}_p`$ means that the pattern is prograde; Figures 2b—2d indicate that this is the most likely possibility, with the absolute value increasing with the separation between P2 and the center of mass of the system. This is reasonable, because a larger separation corresponds to a larger disk mass, which implies a greater contribution from the self–gravity of the disk, which is ultimately responsible for the (alignment and) precession of the disk. The errors on $`\stackrel{~}{\mathrm{\Omega }}_p`$ are large, and a non rotating pattern cannot be ruled out with certainty. Below we quote representative bounds on the absolute value of the prograde pattern speed:
$$\left|\mathrm{\Omega }_p\right|\simeq \frac{\mathrm{sin}77^{\circ }}{\mathrm{sin}i}\times \{\begin{array}{cc}34\pm 8\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^{-1},\hfill & \text{T–disk;}\hfill \\ 20\pm 12\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^{-1},\hfill & \text{KB–disk.}\hfill \end{array}$$
(5)
For each realization of the rotation curve, the zero–crossing point (the “rotation center”) was determined by a third–order spline interpolation. We found the position of the rotation center to be displaced by $`0^{\prime \prime }.17\pm 0^{\prime \prime }.01`$, from P2 toward P1. This should be compared with the value of $`0^{\prime \prime }.16\pm 0^{\prime \prime }.05`$, quoted by SKCJ. We also tested for any systematic dependence of $`\mathrm{\Omega }_p`$ on the position of the rotation center, and found none.
## 4 Conclusions
Our estimate of the pattern speed of the nuclear disk of M31 should be qualified by a discussion of possible sources of errors, most of which are difficult to estimate quantitatively. SKCJ calibrate velocities relative to an average over an $`8\mathrm{arcsec}^2`$ aperture centered on the nucleus (Ho, Filippenko, and Sargent 1993), and assure us that the errors are likely to be small. More significant, perhaps, are the systematic errors in $`V_{\parallel }`$, mentioned by SKCJ; these are shown as open squares in Figure 2a. SKCJ used a slit of width $`0^{\prime \prime }.063`$, and this will introduce contributions to $`V_{\parallel }`$ from nonzero values of $`v_x^{\prime }`$ (see equation (3)). This effect is somewhat mitigated by cancellation between positive and negative values of $`v_x^{\prime }`$, and the fact that the width of $`0^{\prime \prime }.063`$ is of much smaller scale than the minor axis, projected onto the sky plane, of the smallest ellipse used by Tremaine (1995; see Figure 2a of his paper) to represent the nuclear disk. The limits of integration are necessarily finite in numerical computation, and we have resisted the temptation to include corrections by extrapolation of the $`\mathrm{\Sigma }_s`$ and $`V_{\parallel }`$ profiles.
A basic assumption underlying the application of a TW–like method is that the surface brightness obeys a continuity equation, which would be valid for a stellar disk in the absence of star formation (or death). We expect the number of stars to be conserved, except possibly in the vicinity of P2, where the observed UV excess has been interpreted as contributions from early–type stars (King, Stanford and Crane 1995, Lauer et al. 1998). However, these stars do not contribute much to the photometric and kinematic data we have used. Tremaine (1995) argues that two–body relaxation is expected to thicken the disk, whereas we assumed that the disk was razor–thin. The original TW method is applicable to thick disks, so long as the streaming velocity normal to the disk plane is zero. In addition to the assumption of zero normal streaming velocity, let us suppose that the three dimensional density, $`\rho `$, is symmetric about the midplane of the disk. The contribution to $`V_{\parallel }`$ from $`v_x^{\prime }`$ arises from an integral along the (inclined) line of sight that runs through the thick disk. Consider two points along this line of sight that are equally displaced about the midplane of the disk. Aligned, nearly Keplerian orbits have flows such that $`\rho `$ is equal, whereas $`v_x^{\prime }`$ is equal and opposite, at these two points; in this ideal picture, there is pair–wise perfect cancellation, and no net contribution to $`V_{\parallel }`$ from $`v_x^{\prime }`$. In practice there should be some cancellation, because $`v_x^{\prime }`$ will have opposite signs at two corresponding points, but there could be a net contamination from the unequal values of $`|v_x^{\prime }|`$ and $`\rho `$.
Tremaine’s original model, which was a reasonable fit to the then available photometry and kinematics, considered a non rotating disk, and it would be appropriate to inquire about the implications of a non zero pattern speed. A pattern that is prograde with an angular speed of, say, $`20\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^{-1}`$ would contribute only about $`35\mathrm{km}\mathrm{s}^{-1}`$ to the radial velocity at P1, which is about $`250\mathrm{km}\mathrm{s}^{-1}`$, according to the measurements of SKCJ. The maximum radial velocity quoted by Tremaine (1995) is less than $`200\mathrm{km}\mathrm{s}^{-1}`$, so a non zero pattern speed could still be accommodated. Our estimates do not rule out a non rotating disk, but we would like to offer a physical argument in support of a non zero pattern speed. For the disk plus MDO to be in a steady, non rotating state, the gravitational force on the MDO should necessarily vanish. Our (unpublished) numerical computations indicate that the force is indeed non zero.
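The size of this contribution can be checked with a rough estimate, assuming an M31 distance of about 770 kpc (a conventional value, not taken from this Letter) and using the P1–P2 separation as a crude proxy for the radius of P1:

```python
import numpy as np

omega_p = 20.0                                  # km/s/pc, representative value above
pc_per_arcsec = 770e3 * np.pi / (180 * 3600)    # ~3.7 pc/arcsec at ~770 kpc (assumed)
r_P1 = 0.49 * pc_per_arcsec                     # P1-P2 separation as proxy radius
print(omega_p * r_P1)                           # ~37 km/s, close to the ~35 km/s quoted
```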
A limitation of our method is that it uses, in an essential manner, the assumption that most of the contribution to $`V_{}`$ comes from orbits which intersect the measurement strip at right angles. In comparison, the original TW method does not rely on assumptions about the geometry of the mean flow; averaging over several strips, all parallel to the line of nodes, will improve estimates of the pattern speed, as Merrifield and Kuijken (1995) demonstrated. Thus it is necessary to verify our estimates of $`\mathrm{\Omega }_p`$, by using the TW method on future observations of $`V_{}`$, together with better photometry such as Lauer et al. (1998), along strips parallel to the line of nodes. An extremely useful set of observations that could be performed would be two–dimensional spectroscopy, similar to the work of Bacon et al. (1994), with the increased angular resolution that should be available in the near future.
We are grateful to Prof. Ivan King for generously sharing with us the photometric data, an anonymous referee for pointing out a basic error in the original manuscript, and asking stimulating questions, and to R. Srianand for useful comments. NS thanks the Council of Scientific and Industrial Research, India, for financial support through grant 2–21/95(II)/E.U.II.
# A THEORETICAL LIGHT-CURVE MODEL FOR THE RECURRENT NOVA V394 CORONAE AUSTRINAE
## 1. INTRODUCTION
Type Ia supernovae (SNe Ia) are among the most luminous explosive events of stars. Recently, SNe Ia have been used as good distance indicators, providing a promising tool for determining cosmological parameters because of their almost uniform maximum luminosities (Riess et al. 1998; Perlmutter et al. 1999). Both groups derived the maximum luminosities ($`L_{\mathrm{max}}`$) of SNe Ia completely empirically from the light curve shape (LCS) of nearby SNe Ia, and assumed that the same $`L_{\mathrm{max}}`$–LCS relation holds for high red-shift SNe Ia. To guard against systematic biases, the physics of SNe Ia must be understood completely. By far one of the greatest problems facing SN Ia theorists is the lack of an identified real progenitor (see, e.g., Livio 1999 for a recent review). Finding a reliable progenitor is urgently required in SN Ia research. Recurrent novae are probably the best candidates for this target (e.g., Starrfield, Sparks, & Truran 1985; Hachisu et al. 1999b; Hachisu, Kato, & Nomoto 1999a).
Recently, the recurrent nova U Sco underwent its sixth recorded outburst on February 25, 1999. For the first time, a complete light curve has been obtained from the rising phase to the final fading phase toward quiescence through the mid-plateau phase (e.g., Matsumoto, Kato, & Hachisu 2000). Constructing a theoretical light curve of the outburst, Hachisu et al. (2000a) have estimated various physical parameters of U Sco: (1) The early linear phase of the outburst ($`t\sim 1`$—10 days after the optical maximum) is well reproduced by a thermonuclear runaway model on a $`1.37\pm 0.01M_{\odot }`$ white dwarf (WD). (2) The envelope mass at the optical maximum is estimated to be $`3\times 10^{-6}M_{\odot }`$, which results in a mass transfer rate of $`2.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> during the quiescent phase between the 1987 and 1999 outbursts. (3) About 60% of the envelope mass has been blown off in the outburst wind, but the residual 40% ($`1.2\times 10^{-6}M_{\odot }`$) of the envelope mass has been left and accumulated on the white dwarf. Therefore, the net mass increasing rate of the white dwarf is $`1.0\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, which meets the condition for SN Ia explosions of carbon-oxygen cores (Nomoto & Kondo 1991). Thus, Hachisu et al. (2000a, 2000b) have concluded that the white dwarf mass in U Sco will reach the critical mass ($`M_{\mathrm{Ia}}=1.378M_{\odot }`$, taken from Nomoto, Thielemann, & Yokoi 1984) in the quite near future and explode as an SN Ia. Therefore, we regard U Sco as a very strong candidate for the immediate progenitor of SNe Ia.
It has been suggested that the recurrent nova V394 CrA is a twin system of U Sco because of its almost identical early light-curve decline rate and similar spectral features during the outburst (e.g., Sekiguchi et al. 1989). It is very likely that the physical parameters obtained for U Sco are common to V394 CrA. In this paper, we derive various physical quantities of V394 CrA, both during the 1987 outburst and in quiescence, by constructing the same theoretical light curve models as for U Sco, and examine whether or not V394 CrA is an immediate progenitor of SNe Ia. In §2, we briefly describe our light curve model during the outburst and present the fitting results for the 1987 outburst of V394 CrA. In §3, based on the physical parameters obtained in §2, we construct theoretical light curves for the quiescent phase of V394 CrA and confirm that the parameters during the outburst are consistent with those in quiescence. Discussion follows in §4, especially regarding the relevance to SN Ia progenitors.
## 2. LIGHT CURVES FOR THE 1987 OUTBURST
The orbital period of V394 CrA has been determined to be $`P=0.7577`$ days by Schaefer (1990). Here, we adopt the ephemeris of HJD 2,447,000.250+0.7577$`\times E`$ at the epoch of the main-sequence companion in front. The orbit of the companion star is assumed to be circular. Our theoretical light curve model has already been described in Hachisu et al. (2000a) for outburst phases and in Hachisu et al. (2000b) for quiescent phases of U Sco. The total visual light is calculated from three components of the system: (1) the WD photosphere, (2) the MS photosphere, which fills its Roche lobe, and (3) the accretion disk (ACDK) surface, the size and thickness of which are simply defined by two parameters, $`\alpha `$ and $`\beta `$, as
$$R_{\mathrm{disk}}=\alpha R_1^{\ast },$$
(1)
and
$$h=\beta R_{\mathrm{disk}}\left(\frac{\varpi }{R_{\mathrm{disk}}}\right)^\nu ,$$
(2)
where $`R_{\mathrm{disk}}`$ is the outer edge of the ACDK, $`R_1^{\ast }`$ the effective radius of the inner critical Roche lobe for the WD component, $`h`$ the height of the surface from the equatorial plane, and $`\varpi `$ the distance on the equatorial plane from the center of the WD, as seen in Figure 1. Here, we adopt a $`\varpi `$-squared law ($`\nu =2`$), unless otherwise specified, to mimic the effect of flaring-up at the rim of the ACDK (e.g., Schandl, Meyer-Hofmeister, & Meyer 1997).
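As an illustration of this two-parameter shape, the following sketch evaluates equations (1) and (2) for the outburst-wind-phase parameter values adopted later in this section ($`\alpha =1.4`$, $`\beta =0.30`$, $`\nu =2`$) and the Roche-lobe radius quoted below; it is a numerical convenience only, not part of the fitting code.

```python
import numpy as np

# Disk shape of equations (1)-(2); parameter values taken from this section.
alpha, beta, nu = 1.4, 0.30, 2       # outburst-wind-phase values quoted below
R1_star = 1.84                       # R_sun, effective Roche-lobe radius (Sec. 2)

R_disk = alpha * R1_star             # eq. (1): outer edge of the ACDK
varpi = np.linspace(0.0, R_disk, 50)
h = beta * R_disk * (varpi / R_disk) ** nu   # eq. (2): flaring rim, h ~ varpi^2
print(R_disk, h[-1])                 # outer radius and rim height in R_sun
```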
It has been established that the WD photosphere expands to a giant size at the optical maximum and then it decays gradually to the original size of the WD in quiescence, with the bolometric luminosity being kept near the Eddington luminosity (e.g., Starrfield, Sparks, & Shaviv 1988). An optically thick wind is blowing from the WD during the outburst, which plays a key role in determining the nova duration because a large part of the envelope mass is carried away by the wind. The development of the WD photosphere during the outburst is followed by a series of optically thick wind solutions (Kato & Hachisu (1994)). Each envelope solution is uniquely specified by the envelope mass, which is decreasing in time due to wind and nuclear burning. We have calculated solutions for five WD masses of $`M_{\mathrm{WD}}=1.377`$, 1.37, 1.36, 1.35, and 1.3 $`M_{\odot }`$ with different hydrogen contents of $`X=`$0.04, 0.05, 0.06, 0.07, 0.08, 0.10, and 0.15, in mass weight. The numerical method and physical properties of the solutions are described in Kato and Hachisu (1994), and also in Kato (1999), where we use the revised OPAL opacity (Iglesias & Rogers (1996)).
Assuming a blackbody photosphere of the WD envelope, we have estimated the visual magnitude with a response function given by Allen (1973). For simplicity, we do not consider the limb-darkening effect. The light curve during the early 10 days of V394 CrA is mainly determined by the WD photosphere, because the photospheric radius is larger than, or as large as, the binary size and much brighter than the MS and the ACDK. The decline rate during the early 10 days ($`t\sim 1`$—10 days after the optical maximum) depends very sensitively on the WD mass but hardly on the hydrogen content (Kato (1999)) or on the MS mass, as described in Hachisu et al. (2000a). Thus, we obtain $`M_{\mathrm{WD}}=1.37\pm 0.01M_{\odot }`$ by light curve fitting.
The light curves are calculated for four companion masses, i.e., $`M_{\mathrm{MS}}=0.8`$, 1.1, 1.5 and $`2.0M_{\odot }`$. Since we obtain similar light curves for all of these four cases, we show here only the results for $`M_{\mathrm{MS}}=1.5M_{\odot }`$. For a pair $`1.37M_{\odot }`$ WD $`+`$ $`1.5M_{\odot }`$ MS, for example, we have the separation $`a=4.97R_{\odot }`$, the effective radius of the inner critical Roche lobe $`R_1^{\ast }=1.84R_{\odot }`$, and $`R_2^{\ast }=R_2=1.92R_{\odot }`$, for the WD and MS (Fig. 1), respectively.
In the plateau phase of the light curve ($`t\sim 10`$—30 days), i.e., when the WD photosphere is much smaller than the binary size, the light curve is determined mainly by the irradiation of the ACDK and the MS, as shown for the 1999 U Sco outburst (Hachisu et al. 2000a). Here, we assume that the surfaces of the MS and of the ACDK emit photons as a blackbody at the local temperature of the surfaces heated by the WD photosphere. We assume further that half of the absorbed energy is emitted from the surfaces of the MS and of the ACDK (50% efficiency, i.e., $`\eta _{\mathrm{ir},\mathrm{MS}}=0.5`$, and $`\eta _{\mathrm{ir},\mathrm{DK}}=0.5`$), while the other half is carried into the interior of the accretion disk and eventually brought onto the WD. The unheated surface temperatures are assumed to be $`T_{\mathrm{ph},\mathrm{MS}}=5000`$ K for the MS and $`T_{\mathrm{ph},\mathrm{disk}}=4000`$ K for the ACDK including the disk rim. The viscous heating of the ACDK is neglected during the outburst because it is much smaller than the irradiation effects. We have checked different temperatures of $`T_{\mathrm{ph},\mathrm{MS}}=4000`$, 6000 K and $`T_{\mathrm{ph},\mathrm{disk}}=3000`$, 5000 K, but could not find any significant differences in the light curves.
The luminosity of the accretion disk depends strongly on both the thickness $`\beta `$ and the size $`\alpha `$ in the plateau phase. We have examined a total of 160 cases for the set of $`(\alpha ,\beta )`$, which is the product of 16 cases of $`\alpha =`$ 0.5—2.0 by 0.1 step and 10 cases of $`\beta =`$ 0.05—0.50 by 0.05 step. Here, we have adopted the same set of ($`\alpha `$, $`\beta `$) as for the U Sco 1999 outburst (Hachisu et al. 2000a), i.e., $`\alpha =`$1.4 (1.2), and $`\beta =`$0.30 (0.35), for the outburst wind phase (for the static phase). The hydrogen content $`X`$ cannot be determined from the light curve fitting, mainly because the observation of the 1987 outburst does not cover the final decay phase toward quiescence. So we have adopted the same value of $`X=0.05`$ as in the U Sco 1999 outburst (Hachisu et al. 2000a). Then, the optically thick wind stops 20 days after the optical maximum, as shown in Figure 2. Because the inclination angle of the orbit is not known yet, we have fitted the light curve by changing the inclination angle, i.e., $`i=30\mathrm{°}`$, $`45\mathrm{°}`$, $`50\mathrm{°}`$, $`55\mathrm{°}`$, $`60\mathrm{°}`$, $`65\mathrm{°}`$, $`66\mathrm{°}`$, $`67\mathrm{°}`$, $`68\mathrm{°}`$, $`69\mathrm{°}`$, and $`70\mathrm{°}`$. Some of them are plotted in Figure 2. The visual magnitude of the 1987 outburst can be reproduced when the inclination angle is between $`65\mathrm{°}`$ and $`70\mathrm{°}`$, except for the first day of the outburst. It is almost certain that the visual light during the first day exceeds the Eddington limit, because our optically thick wind solutions do not produce super Eddington luminosities.
Based on the best fitted solutions, we have estimated the envelope mass at the optical maximum as $`\mathrm{\Delta }M=5.8\times 10^{-6}M_{\odot }`$, which indicates a mass accretion rate of $`\dot{M}_{\mathrm{acc}}=1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> during the quiescent phase between the 1949 and 1987 outbursts if no WD matter has been dredged up. About 77% ($`4.5\times 10^{-6}M_{\odot }`$) of the envelope mass has been blown off in the outburst wind while the residual 23% ($`1.3\times 10^{-6}M_{\odot }`$) has been left and added to the helium layer of the WD. Thus, the net mass increasing rate of the WD is $`\dot{M}_{\mathrm{He}}=0.34\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>.
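These rates follow from simple bookkeeping over the 38 yr interval between the 1949 and 1987 outbursts, e.g.:

```python
dM_env = 5.8e-6                 # M_sun, envelope mass at optical maximum
dM_kept = 1.3e-6                # M_sun, the ~23% retained on the WD
dt = 1987 - 1949                # yr between the two recorded outbursts
print(dM_env / dt)              # ~1.5e-7 M_sun/yr: mean accretion rate
print(dM_kept / dt)             # ~0.34e-7 M_sun/yr: net WD growth rate
```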
The distance to V394 CrA is estimated to be 6.1 kpc for no absorption ($`A_V=0`$). We discuss the distance to V394 CrA in more detail in the next section, to resolve the discrepancy between the distance estimates in quiescence and in outburst.
## 3. LIGHT CURVES IN QUIESCENCE
In the quiescent phase, we have adopted the same binary model as in the 1987 outburst phase except for the disk shape (see Fig. 3); that is, we assume the WD mass to be $`M_{\mathrm{WD}}=1.37M_{\odot }`$, the accretion luminosity of the WD as
$$L_{\mathrm{WD}}=\frac{1}{2}\frac{GM_{\mathrm{WD}}\dot{M}_{\mathrm{acc}}}{R_{\mathrm{WD}}}+L_{\mathrm{WD},0},$$
(3)
(e.g., Starrfield et al. (1988)) and the viscous luminosity and the irradiation effect of the ACDK as
$$\sigma T_{\mathrm{ph},\mathrm{disk}}^4=\frac{3GM_{\mathrm{WD}}\dot{M}_{\mathrm{acc}}}{8\pi \varpi ^3}+\eta _{\mathrm{ir},\mathrm{DK}}\frac{L_{\mathrm{WD}}}{4\pi r^2}\mathrm{cos}\theta ,$$
(4)
(e.g., Schandl et al. (1997)), where $`L_{\mathrm{WD}}`$ and $`L_{\mathrm{WD},0}`$ are the total and intrinsic luminosities of the WD, respectively, $`G`$ the gravitational constant, $`\dot{M}_{\mathrm{acc}}`$ is the mass accretion rate of the WD, $`R_{\mathrm{WD}}=0.0032R_{\odot }`$ is the radius of the $`1.37M_{\odot }`$ WD, $`\sigma `$ is the Stefan-Boltzmann constant, $`T_{\mathrm{ph},\mathrm{disk}}`$ is the surface temperature of the ACDK, $`r`$ is the distance from the center of the WD, and $`\mathrm{cos}\theta `$ is the incident angle of the surface. The accretion luminosity of the WD is as large as $`1000L_{\odot }`$ for $`\dot{M}_{\mathrm{acc}}\sim 1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. The unheated temperatures are assumed to be 4000 K at the disk rim and 5000 K at the MS photosphere.
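As a check of the quoted accretion luminosity, one may evaluate the first term of equation (3) in cgs units (rounded physical constants; a sketch, not the production code):

```python
G = 6.674e-8                              # cgs
Msun, Rsun, Lsun = 1.989e33, 6.96e10, 3.85e33
yr = 3.156e7

M_wd = 1.37 * Msun
R_wd = 0.0032 * Rsun
Mdot = 1.5e-7 * Msun / yr                 # g/s

L_acc = 0.5 * G * M_wd * Mdot / R_wd      # first term of eq. (3)
print(L_acc / Lsun)                       # ~1e3 L_sun, as quoted
```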
Figure 4 shows the observational points (open circles) by Schaefer (1990) together with our calculated $`B`$ light curve (thick solid line) for the suggested mass accretion rate of $`\dot{M}_{\mathrm{acc}}=1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. To fit our theoretical light curves to Schaefer’s (1990) observational points, we have calculated $`B`$ light curves by changing the parameters: $`\alpha =0.5`$—1.0 by 0.1 step, $`\beta =0.05`$—0.50 by 0.05 step, $`T_{\mathrm{ph},\mathrm{MS}}=3000`$—6000 K by 1000 K step, $`T_{\mathrm{ph},\mathrm{disk}}=3000`$ and 4000 K at the disk rim, $`L_{\mathrm{WD},0}=0`$—1000 $`L_{\odot }`$ by 100 $`L_{\odot }`$ step and 1000—5000 $`L_{\odot }`$ by 1000 $`L_{\odot }`$ step, and $`i=60`$—$`70\mathrm{°}`$ by $`1\mathrm{°}`$ step, and have sought the best fit model for each mass accretion rate. The best fit parameters obtained are shown in the figures (also see Table 1 for the other mass accretion rates of (0.1—5.0)$`\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>).
Then we have calculated the theoretical color index $`(B-V)_c`$ for these best fit models. Here, we explain only the case of $`\dot{M}_{\mathrm{acc}}=1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, because the 1987 outburst model suggests an average mass accretion rate of $`1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> during the quiescent phase between the 1949 and 1987 outbursts. By fitting, we obtain the apparent distance modulus of $`m_{B,0}=17.66`$, which corresponds to the distance of 34 kpc without absorption ($`A_B=0`$). On the other hand, we obtained a rather blue color index of $`(B-V)_c=-0.16`$ at $`m_B=19.2`$. This suggests a large color excess of $`E(B-V)=(B-V)_o-(B-V)_c=1.10`$ with the observed color of $`(B-V)_o=0.94`$ at $`m_B=19.20`$ by Schaefer (1990). Here, the suffixes $`c`$ and $`o`$ represent the theoretically calculated values and the observational values, respectively. Then, we have a large absorption of $`A_V=3.1E(B-V)=3.46`$ and $`A_B=A_V+E(B-V)=4.56`$. Thus, we obtain the distance of 4.2 kpc. Then, V394 CrA lies $`\sim 600`$ pc below the Galactic plane ($`l=352.84\mathrm{°}`$, $`b=-7.72\mathrm{°}`$).
The distance of 4.2 kpc indicates an absorption of $`A_V=0.81`$ during the outburst (6.1 kpc for $`A_V=0`$), i.e., $`A_V=5\mathrm{log}(6.1/4.2)=0.81`$. Then, we have $`E(B-V)=A_V/3.1=0.26`$ and $`A_B=A_V+E(B-V)=1.07`$. On the other hand, Duerbeck (1988) suggested an absorption of $`A_B=1`$ from the nearby distance-interstellar absorption relation by Neckel and Klare (1980), which is consistent with our estimation of the absorption during the 1987 outburst. Thus, we may suggest that the intrinsic absorber of V394 CrA is blown off during the outburst, as discussed for U Sco (Hachisu et al. 2000b).
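The 4.2 kpc value follows from the fitted apparent distance modulus and the derived absorption through the standard relation $`m-M=5\mathrm{log}d-5`$, e.g.:

```python
m_B0 = 17.66                    # apparent distance modulus in B (from the fit)
A_B = 4.56                      # total B-band absorption derived above
d_pc = 10 ** ((m_B0 - A_B) / 5 + 1)
print(d_pc / 1e3)               # ~4.2 kpc
```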
## 4. DISCUSSION
Even for much different mass accretion rates, the distance to V394 CrA has been estimated not to be so much different from 4.2 kpc, as tabulated in Table 1. For lower mass accretion rates such as $`\dot{M}_{\mathrm{acc}}=1.0\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, however, we need the 100% irradiation efficiency of the MS ($`\eta _{\mathrm{ir},\mathrm{MS}}=1.0`$) or an intrinsic luminosity of the WD as large as $`L_{\mathrm{WD},0}\sim 300L_{\odot }`$ for the 50% efficiency ($`\eta _{\mathrm{ir},\mathrm{MS}}=0.5`$), in order to reproduce the $`0.5`$ mag sinusoidal variation. We also need an intrinsic luminosity of the WD as large as 300 $`L_{\odot }`$ both for $`\dot{M}_{\mathrm{acc}}=0.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> and for $`\dot{M}_{\mathrm{acc}}=0.25\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, and 200 $`L_{\odot }`$ for $`\dot{M}_{\mathrm{acc}}=0.1\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>, as summarized in Table 1.
The brightness of the system depends on various model parameters adopted here, that is, the efficiencies of the irradiation $`\eta _{\mathrm{ir},\mathrm{DK}}`$ and $`\eta _{\mathrm{ir},\mathrm{MS}}`$, the intrinsic luminosity of the WD $`L_{\mathrm{WD},0}`$, and the power-law index $`\nu `$ of the disk shape. However, the distance estimation itself is hardly affected even if we introduce different values of these parameters, as clearly shown in Table 2. Thus, we may conclude that the determination of the distance to V394 CrA in quiescence is rather robust, as has already been shown for U Sco (Hachisu et al. 2000a, 2000b).
The roughly 0.5 mag sinusoidal variation of the $`B`$ light curve during quiescence requires a relatively large reflection effect on the companion star, as calculated in Figure 4, thus indicating a relatively large luminosity of the WD photosphere. If the intrinsic luminosity of the WD is negligibly small compared with the accretion luminosity (e.g., the nuclear burning luminosity is smaller than the accretion luminosity), the mass accretion rate should be higher than $`\dot{M}_{\mathrm{acc}}\sim 1.0\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> because the efficiency of the irradiation effect must be smaller than 100%, which is consistent with our estimation of $`\dot{M}_{\mathrm{acc}}=1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> derived from the envelope mass at the optical maximum.
These systems with relatively high mass accretion rates are exactly the same as those proposed by Hachisu et al. (1999b) as a progenitor system of SNe Ia (see also Li & van den Heuvel (1997)). Using the same simplified evolutionary model as described in Hachisu et al. (1999b), we have followed binary evolutions for various pairs with the initial sets of ($`M_{1,i}`$, $`M_{2,i}`$, $`a_i`$), i.e., for the initial primary masses of $`M_{1,i}=4`$, 5, 6, 7, and $`9M_{\odot }`$, the initial secondary masses of $`M_{2,i}=1.7`$–$`3.0M_{\odot }`$ by $`\mathrm{\Delta }M_{2,i}=0.1M_{\odot }`$ step, and the initial separations of $`a_i=80`$–$`600R_{\odot }`$ by $`\mathrm{\Delta }\mathrm{log}a_i=0.01`$ step. Starting from the initial set ($`7M_{\odot }`$, $`2.0M_{\odot }`$, $`150R_{\odot }`$), for example, we have obtained a binary system of $`M_{\mathrm{WD},0}=0.9M_{\odot }`$, $`M_{\mathrm{MS},0}=2.2M_{\odot }`$, and $`P_0=1.375`$ days, after the binary underwent the first common envelope evolution and then the primary naked helium star evolved to a helium giant and transferred helium to the secondary MS.
Then, the secondary MS has slightly evolved, expanded, and filled its Roche lobe. Mass transfer begins from the MS to the WD. We have further followed the evolution of the binary until it reaches $`M_{\mathrm{WD}}=1.37M_{\odot }`$ and $`P=0.7577`$ days at the same time; that is, we regard the binary as V394 CrA when both conditions, $`M_{\mathrm{WD}}=1.37M_{\odot }`$ and $`P=0.7577`$ days, are satisfied at the same time. Then, we obtain the present state of V394 CrA, having the secondary mass of $`M_{\mathrm{MS}}=1.39M_{\odot }`$ and the mass transfer rate of $`\dot{M}_2=1.6\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. In our evolutionary model, this binary system will soon explode as an SN Ia when the WD mass reaches $`M_{\mathrm{Ia}}=1.378M_{\odot }`$. The mass transfer rate of our evolutionary model is consistent with the $`1.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> estimated from the light curve fitting.
Finally, we may conclude that V394 CrA is the second strong candidate for Type Ia progenitors, next to U Sco (Hachisu et al. 2000a, 2000b).
This research has been supported in part by the Grant-in-Aid for Scientific Research (09640325, 11640226) of the Japanese Ministry of Education, Science, Culture, and Sports.
## 1 Introduction
During the last two years, the carriers of topological charge in a Yang-Mills field at finite temperature (calorons) have been thoroughly reconsidered by van Baal and co-workers. The progress has been summarized at this conference. It has been demonstrated that completely different caloron solutions appear once a non-trivial holonomy $`𝒫(𝐱)`$ at $`|𝐱|\to \infty `$ is admitted; the Polyakov line $`L(𝐱)=\frac{1}{2}\mathrm{tr}𝒫(𝐱)`$ is the trace of the holonomy. Prior to this development, semiclassical models at finite temperature were based exclusively on properties of periodic instantons, being classical solutions with trivial holonomy, i.e. $`𝒫(𝐱)\to 1`$ for $`|𝐱|\to \infty `$ (’t Hooft periodic instanton).
The outstanding feature of these new calorons is the fact that (within a certain parameter range) monopole constituents of an instanton can become explicit as degrees of freedom. They carry magnetic as well as electric charge (in fact, they are Bogomol’ny-Prasad-Sommerfield (BPS) monopoles or ‘dyons’) and $`1/N_{\mathrm{color}}`$ units of topological charge. Being part of classical solutions of the Euclidean field equations, one can hope that the instanton constituents can play an independent role in a semiclassical analysis of $`T\ne 0`$ Yang-Mills theory (and of full QCD).
The variety of selfdual solutions with various topological charge $`Q`$ and nontrivial holonomy arising from twisted boundary conditions has been discussed as classical solutions on the lattice.
The nontrivial holonomy per se is no obstacle to a semiclassical approach if that is not restricted to the one-instanton approximation. To what extent such a semiclassical description is reliable (and exhaustive), has to be investigated for each phase (confinement and deconfinement) of pure Yang-Mills theory.
In exploratory studies we have searched for characteristic differences between the two phases as far as quasiclassical background fields are concerned. These become visible in the result of cooling.
In a previous study we discussed finite temperature $`SU(2)`$ lattice gauge theory in a finite spatial box with specific boundary conditions. The latter were chosen such that the finite (not too large) system was put into a definite single-monopole background field. By varying the monopole scale we could study both situations: the purely magnetic ’t Hooft-Polyakov (HP) monopole and the self-dual BPS monopole (‘dyon’). We observed a specific influence of these different boundaries on the quantum state inside the box. Whereas the HP monopole turned out to favour deconfinement, the BPS monopole kept the system in the confinement phase even at temperatures higher than the usual critical one. Let us note that these boundary conditions are characterized by different holonomy values, too.
In comparison with the previous study, in the present work we have employed simpler spatial boundary conditions. We have fixed, and left untouched under cooling, only the boundary time-like link variables, in order to keep a certain value of $`𝒫(𝐱)=𝒫_{\infty }`$ everywhere on the spatial surface of the system while preserving periodicity.
In this study, the influence of the respective phase that we want to describe is twofold: (i) the cooling starts from genuine thermal Monte Carlo gauge field configurations, generated on a $`N_s^3\times N_t`$ lattice; (ii) the value of the holonomy $`𝒫_{\infty }`$ was chosen in accordance with the average of $`L`$, which is vanishing in the confinement phase and nonvanishing in deconfinement.
## 2 Results
We consider $`SU(2)`$ lattice gauge theory with the standard Wilson action. Our cooled samples are obtained from Monte Carlo ensembles on a $`16^3\times 4`$ lattice. For $`N_t=4`$, the gauge coupling $`\beta =2.2`$ stands for the confinement phase with $`L\approx 0`$ and $`\beta =2.4`$ for the deconfinement phase with $`L=0.27`$, respectively. The timelike links $`U_{x,\mu =4}`$ are frozen at the spatial boundary, equal to each other, such that $`(U_{x,\mu =4})^{N_t}=𝒫_{\infty }`$. For the holonomy itself, an ‘Abelian’ form $`𝒫_{\infty }=a_0+ia_3\tau _3`$ was chosen, with $`a_0=L`$ and $`a_3=\sqrt{1-a_0^2}`$ in correspondence with the average Polyakov line. The cooling method chosen was the simplest, rapid relaxation method keeping the Wilson action. In order to search exclusively for objects with low action, the criterion for the first stopping at some cooling step $`n`$ was that $`S_n<2S_{\mathrm{inst}}`$, the last change of action $`|S_n-S_{n-1}|<0.01S_{\mathrm{inst}}`$, and that the second derivative $`S_n-2S_{n-1}+S_{n-2}<0`$ ($`S_{\mathrm{inst}}`$ denoting the action of a single instanton). For each $`\beta `$-value we have scanned $`O(200)`$ configurations obtained by cooling.
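For definiteness, a small Python sketch of this stopping rule is given below; the toy cooling history is invented purely to illustrate how the three criteria interact.

```python
def stop_cooling(S, S_inst):
    """First cooling step n satisfying the three stopping criteria above."""
    for n in range(2, len(S)):
        low_action = S[n] < 2 * S_inst
        plateau = abs(S[n] - S[n - 1]) < 0.01 * S_inst
        concave = S[n] - 2 * S[n - 1] + S[n - 2] < 0
        if low_action and plateau and concave:
            return n
    return None

# toy cooling history (units of S_inst), invented to trigger the criteria
history = [5.0, 4.0, 3.2, 2.6, 2.15, 1.9, 1.78, 1.72, 1.70, 1.699, 1.697]
print(stop_cooling(history, S_inst=1.0))   # -> 10
```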
The cooled sample taken from the confinement phase has a clearly different composition compared to that from the deconfinement ensemble. This is listed in Table 1 which shows the relative occurrence of the few characteristic types of nonperturbative configurations. In the following let us explain these configurations in some detail.
Confinement phase. Selfdual (or antiselfdual) configurations dominate here, and among them ‘dyon-dyon’ pairs ($`DD`$), which are reminiscent of the new caloron solutions. In Fig. 1 we show, projected onto the $`x_1x_2`$-plane (i.e. summed over $`x_3,x_4`$ or $`x_3`$, resp.), the topological charge and the Polyakov line of such a ‘dyon’ pair. Notice the opposite sign of the Polyakov line near the two same-sign topological charge bumps.
Other selfdual objects, having a rather $`O(4)`$ rotationally invariant distribution of action and topological charge, are frozen out relatively infrequently. They resemble the ’t Hooft periodic instanton. We call them calorons ($`CAL`$); one is shown in Fig. 2. Under the specific boundary conditions, however, the structure of the Polyakov line around the caloron is nontrivial in the sense that it has opposite-sign peaks of the Polyakov line near the center of the action (and topological charge) distribution. Thus, this type of configuration appears as a limiting case of the ‘dyon-dyon’ pairs.
Mixed configurations with two lumps of opposite topological charge are found in a quarter of the configurations. We call them ‘dyon-antidyon’ pairs ($`D\overline{D}`$). In fact, they are typical for the deconfined phase, and below we discuss an example taken from $`\beta =2.4`$. In these configurations the Polyakov line has a same-sign maximum on top of the opposite-sign topological charge lumps. Besides this, the two sums $`Q_+=\sum _xq(x)\mathrm{\Theta }(q(x))`$ and $`Q_{-}=\sum _xq(x)\mathrm{\Theta }(-q(x))`$ are almost equal to $`+\frac{1}{2}`$ and $`-\frac{1}{2}`$, respectively, which supports an interpretation as half-instanton and half-antiinstanton.
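The two sums are simple reductions over the lattice charge density; a toy sketch with two half-unit lumps of opposite sign (array sizes and lump positions are illustrative):

```python
import numpy as np

def q_split(q):
    """Q_+ and Q_-: positive/negative parts of the topological charge."""
    q = np.asarray(q)
    return q[q > 0].sum(), q[q < 0].sum()

q = np.zeros((16, 16, 16, 4))       # toy charge density on a 16^3 x 4 lattice
q[4, 8, 8, :] = 0.5 / 4             # "dyon" lump, half a unit in total
q[12, 8, 8, :] = -0.5 / 4           # "antidyon" lump
print(q_split(q))                   # (0.5, -0.5): half-instanton pair
```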
When one allows the holonomy $`𝒫(𝐱)`$ freely to relax as in usual cooling with periodic boundary conditions, configurations like $`DD`$ and $`D\overline{D}`$ do not survive.
Deconfinement phase. It is remarkable that selfdual or antiselfdual $`DD`$ configurations are very rare in this case. $`D\overline{D}`$ mixed configurations are typical for the deconfined phase. For one example we show in Fig. 3 the topological charge and Polyakov line (similar to Fig. 1).
In the deconfined phase the next important type of cooled configuration is the purely magnetic one ($`S_{\mathrm{magnetic}}\gg S_{\mathrm{electric}}`$) with quantized action in units of $`S_{\mathrm{inst}}/2`$. We call these $`M`$ configurations. Magnetic configurations with twice as large an action ($`2M`$ type configurations) are also found, with a smaller probability. In all projections, the action is constantly distributed over the lattice with a high precision. A closer look at the different field strength components reveals that the action of $`M`$ type configurations resides only in a single magnetic field strength component (in one $`3D`$ direction), while in the case of $`2M`$ type configurations two such fluxes, generically orthogonal to each other, are present. After fixing the maximally Abelian gauge (see below) these configurations turn out to be completely Abelian. Therefore, we can identify these configurations as pure, equally distributed magnetic fluxes which should be related to world-sheets of Dirac strings (‘Dirac sheets’) on the dual lattice. With some rate they also emerge as the result of further cooling of $`D\overline{D}`$ configurations. Such configurations are present in the confinement phase as well, but occur only with tiny probability.
Maximizing with respect to gauge transformations the gauge functional $`R=\sum _{x\mu }\mathrm{tr}\left(\tau _3U_{x\mu }\tau _3U_{x\mu }^+\right)`$, we have put all these configurations into the maximally Abelian gauge and have measured their ‘Abelianicity’, i.e. $`R_{\mathrm{max}}`$ per link. The $`DD`$ and $`D\overline{D}`$ configurations have an Abelianicity of 99.8% (independent of the phase where they are found). The rotationally invariant caloron is somewhat more Abelian (99.9%). But for the purely magnetic configurations we found an Abelianicity of exactly 100%.
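In the quaternion representation $`U=a_0+i𝒂\tau `$ of an $`SU(2)`$ link one finds $`\mathrm{tr}\left(\tau _3U\tau _3U^+\right)=2(a_0^2+a_3^2-a_1^2-a_2^2)`$, so the Abelianicity (normalized here to the per-link maximum of 2, our convention) can be sketched as follows:

```python
import numpy as np

def abelianicity(links):
    """links: (..., 4) array of SU(2) links U = a0 + i a.tau (quaternion form).
    Returns <tr(tau3 U tau3 U^+)>/2, i.e. 1.0 for purely Abelian links."""
    a = np.asarray(links)
    return (a[..., 0] ** 2 + a[..., 3] ** 2
            - a[..., 1] ** 2 - a[..., 2] ** 2).mean()

rng = np.random.default_rng(2)
a = rng.normal(size=(16, 16, 16, 4, 4))           # site x direction x quaternion
a /= np.linalg.norm(a, axis=-1, keepdims=True)    # normalize onto SU(2)
print(abelianicity(a))                            # ~0 for random links
```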
Employing the Abelian projection, $`U(1)`$ monopoles can be localized. The $`DD`$ and $`D\overline{D}`$ configurations are found to be static to a high precision, and Abelian monopole worldlines were observed to coincide with the ‘dyon’ or ‘antidyon’ position, again irrespective of the phase where these background configurations have been extracted. The non-static ‘caloron’ configuration, however, is typically enclosed by a small Abelian monopole loop (of length 6).
## 3 Conclusions
We have studied finite temperature Yang-Mills lattice fields with given non-trivial holonomy at the spatial boundary of a finite box. Starting from Monte Carlo equilibrium configurations, by cooling we have found quasi-stable solutions in accordance with that (periodic) boundary condition. The ensembles of solutions obtained depend strongly on whether we are in the confinement or in the deconfinement phase. Most typically we observe ‘dyon-dyon’ pairs within the confinement phase, i.e. selfdual solutions of the type discussed by van Baal and collaborators. However, in the deconfinement phase ‘dyon-antidyon’ solutions dominate. The latter objects have yet to be understood analytically. We did not find pure magnetic HP-like monopoles as seen in earlier work, where cooling had been applied to finite temperature fields with purely periodic boundary conditions. The latter monopoles - always accompanied by a spurious oppositely charged monopole - have trivial holonomy and, thus, could not show up in the present analysis.
We feel that the development of a semiclassical approach based on solutions with non-trivial holonomy, i.e. in a mean-field-like setting, might have a chance to shed light on the mechanisms of the deconfinement transition.
## Acknowledgements
The authors are grateful to P. van Baal, B. V. Martemyanov, S. V. Molodtsov, M. I. Polikarpov, A. van der Sijs, Yu. A. Simonov, and J. Smit for useful discussions. This work was partly supported by RFBR grants N 97-02-17491 and N 99-01-01230 as well as by the joint RFBR-DFG project grant 436 RUS 113/309 (R) and the INTAS grant 96-370.
# Critical chain length and superconductivity emergence in oxygen-equalized pairs of YBa2Cu3O6.30
## Abstract
The oxygen-order dependent emergence of superconductivity in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> is studied, for the first time in a comparative way, on pair samples having the same oxygen content and thermal history, but different Cu(1)O<sub>x</sub> chain arrangements deriving from their intercalated and deintercalated nature. Structural and electronic non-equivalence of the pair samples is detected in the critical region and found to be related, on a microscopic scale, to a different average chain length, which, being experimentally determined by nuclear quadrupole resonance (NQR), sheds new light on the concept of a critical chain length for hole doping efficiency.
The peculiarity of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (123), the superconductor that still plays an important role in ongoing efforts to elucidate the mechanism of high-T<sub>c</sub> superconductivity, is the existence of a charge reservoir, the …-Cu-O-Cu-… chain system in the Cu(1)O<sub>x</sub> plane, far removed from the superconducting Cu(2)O<sub>2</sub> sheets. Its structural order drives the whole crystal structure into a variety of superstructures which have been observed and modeled theoretically in the whole compositional $`x`$-range. Beside the tetragonal (T, empty chain) and orthorhombic-I (OI, full chain) structures that characterize the end members of the compositional $`x`$-range ($`x`$=0 and 1 respectively), at least two orthorhombic modifications, ortho-II (OII) and ortho-III (OIII), occurring around the ideal $`x`$=0.5 and $`x`$=0.67 compositions and characterized by a …-full-empty-… and a …-full-full-empty-… chain sequence along the a direction, are considered thermodynamically stable. The superstructures arising from an ordering between oxygen-poor and oxygen-rich chains are well described by a simple lattice gas model, called ASYNNNI, first introduced by de Fontaine et al. Despite its simplicity the ASYNNNI model, which considers only second nearest neighbor interactions, can account for the stability region of the OII phase. Extensions of the model to include longer range interactions predict the occurrence of more complex superstructures (e.g. the OIII phase), even if they become significant only for very well equilibrated samples, as was systematically verified in the range $`0.67\le x\le 0.75`$.
The understanding of oxygen ordering in the Cu(1)O<sub>x</sub> plane and its effects on superconductivity in 123 systems has been greatly enriched during the last eleven years. It is by now clearly established that the charge transfer process in 123 systems and the related superconducting properties are rather sensitive functions not only of the oxygen content, but also of the oxygen ordering in the Cu(1)O<sub>x</sub> plane, through its induced effects on the hole density in the Cu(2)O<sub>2</sub> planes and consequently on $`T_c`$. The connection between oxygen ordering in the chains and hole behavior in the planes was already clearly manifested in the time dependent increase of $`T_c`$ during room temperature annealing of samples produced by fast quenching. The formation of the OII superstructure is responsible for the 60K plateau typically observed in the $`T_c`$ dependence on oxygen content in 123, and more recently the influence of OIII ordering on $`T_c`$ has been shown.
The variety of possible superstructures has raised the question of the existence, for each different ordering scheme, of a characteristic $`T_c`$ of its own. However, as stressed by Shaked et al., experiments that unambiguously prove this hypothesis are difficult or impossible as a result of the difficulty of stabilizing an entire sample in a particular ordered state and comparing such a sample with those having the same oxygen content but a different ordering. The major limitation is the utilization of single samples, prepared one at a time in conditions that make a comparative study extremely difficult, owing to a lack of reproducibility produced by the significant influence of experimental conditions and thermal history on the 123 properties.
To investigate the effects of oxygen ordering we recently proposed a novel strategy based on oxygen-equalized pair samples, prepared simultaneously under the same thermal conditions, one by intercalation and the other by deintercalation of oxygen, the fully oxygenated and reduced (OI and T) end-terms of the YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> system acting as oxygen donor and acceptor respectively, to arrive at the final oxygen content $`k`$ in both samples. This topotactic-like technique for low-temperature processing of oxygen-equalized ($`k`$) deintercalated \[D\]<sub>k</sub> and intercalated \[I\]<sub>k</sub> pair samples of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> allowed us to investigate unanswered questions about the relationship between structure and superconductivity in this system. On the basis of the acquired experience in controlling the process reproducibility we are now able to explore, for the first time in a comparative way, the most important (and at the same time most difficult to study) region of the 123 system: the transient T-O boundary around $`k`$=0.30, characterized by the vanishing of semiconducting antiferromagnetic (SAF) behavior and the emergence of superconductivity (SC).
Bulk polycrystalline \[D\]<sub>k</sub> and \[I\]<sub>k</sub> pair-samples, hereafter referred to as $`k`$-pairs, were prepared in a reproducible way starting with fully oxygenated OI bar-shaped samples of (3.0×2.0×14.0) mm<sup>3</sup>, each one weighing about 0.5 g, prepared twenty at a time by following conventional solid-state reactions and sintering, and fully reduced T samples obtained from the former by dynamic vacuum annealing at 650°C. From iodometric and weight-loss analyses, the quoted oxygen content in the reference (OI, $`x=0.96`$) and in the derived (T, $`x=0.07`$) samples was estimated to be accurate within 0.02 oxygen atom per formula unit. Individually weighed OI and T bars were equilibrated at a given temperature ($`T_e`$) and order-stabilized at composition-dependent temperatures ($`T_s`$) within the thermal stability domain of OII and OIII superstructures. By varying the OI/T mass ratios it is possible to prepare $`k`$-pairs in a wide range of equalized oxygen stoichiometry $`k`$. The OI (T) mass loss (gain) is due solely to a change in oxygen content in the Cu(1)O<sub>x</sub> plane, and excellent agreement between calculated and experimental oxygen content at equilibrium was systematically obtained. Details on starting materials and $`k`$-pair processing were reported elsewhere. This topotactic-like procedure yields pairs of 123 specimens under equilibrium conditions with equal oxygen content and thermal history. The $`k`$-pairs under investigation ($`0.28\le k\le 0.32`$) were obtained by thermal equilibration of OI and T samples at $`T_e=670`$°C for 1 day, slow cooling at 0.2°C/min to $`T_s=75`$°C followed by order-stabilization at this temperature for 3 days and final cooling (0.2°C/min) to room temperature. Several batches were prepared in this way and comparatively characterized by resistive ($`\rho (T)`$), electron diffraction (ED) and NQR studies.
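As an aside, the OI/T ratio required for a target $`k`$ follows from simple oxygen bookkeeping, assuming ideal equilibration and, to a good approximation, equal molar masses of the two end members; a sketch:

```python
x_OI, x_T = 0.96, 0.07          # oxygen contents of donor and acceptor bars

def oi_fraction(k):
    """Molar fraction of OI material needed to equalize at oxygen content k."""
    return (k - x_T) / (x_OI - x_T)

for k in (0.28, 0.30, 0.32):
    print(k, round(oi_fraction(k), 3))   # ~0.24-0.28 of the pair must be OI
```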
Displayed in fig.1 are the evolution of the $`\rho (T)`$ curves (A) and the representative densitometer traces of an ensemble of ED patterns (B) recorded independently on several fragments of three typical samples. Panels 1 in fig.1 show the transport (A<sub>1</sub>) and structural (B<sub>1</sub>) characteristics of the $`k`$=0.28 samples. Resistivity is thermally activated and only tetragonal peaks show up in the diffraction pattern. Both \[I\] and \[D\] are therefore characteristic of a non-superconducting tetragonal phase, for which $`k`$=0.28 defines the upper limit of existence in both samples. Panels 4 likewise show the corresponding lower limit ($`k`$=0.32) for the existence of a partially OII-ordered superconducting phase. Note the diffuse $`(\frac{1}{2}\mathrm{\hspace{0.17em}0\hspace{0.17em}0})`$ peak in diffraction patterns B<sub>4</sub> ( in agreement with the doubling of the a axis produced by a …-full-empty-… chain sequence in the Cu(1)O<sub>x</sub> plane) and the coincidence of T<sub>c</sub> in the \[I\] and \[D\] curves A<sub>4</sub>. The situation is totally different in $`k=0.30`$ pairs, which display a phase separation. Resistivity is shown in panels A<sub>2</sub> and A<sub>3</sub>. The \[D\] sample is insulating, but its curve (A<sub>2</sub>) displays a kink precisely at the same temperature where the corresponding \[I\] sample shows (A<sub>3</sub> curve) the onset of the SC transition, which percolates the bar. \[D\] grains invariably show the two kinds of patterns displayed in panel B2: most of them tetragonal (solid line) and a minority fraction OII (dotted line) characterized by very diffuse spots. A similar separation is observed for the \[I\] samples: most grains are characterized by diffuse OII superstructure spots (solid line) or by diffuse (dotted line) extra peaks at $`(\frac{h}{3}\mathrm{\hspace{0.17em}0\hspace{0.17em}0})`$, whereas only few are tetragonal.
These data indicate that the T-O phase transition in the $`k=0.30`$ pairs displays the coexistence of tetragonal and orthorhombic domains. This result is consistent with the prediction by de Fontaine et al. from a lattice-gas model. However the systematic observation of the $`(\frac{h}{3}\mathrm{\hspace{0.17em}0\hspace{0.17em}0})`$ spots adds a new detail to this picture. We believe that these spots result from domains of an orthorhombic anti-III (OIII′) structure characterized by an ideal …-empty-empty-full-… periodic arrangement of chains along the a direction. Such a sequence gives rise to a tripling of the $`a`$ axis, in analogy with the …-empty-full-full-… configuration for the ideal composition $`x`$=$`\frac{2}{3}`$ of the OIII superstructure. To our knowledge the OIII′ structure is reproducibly observed around the ideal stoichiometry $`x`$=$`\frac{1}{3}`$ for the first time by means of our equilibration technique. After long-term aging (one year) of \[I\]<sub>0.30</sub> samples at room temperature the $`(\frac{h}{3}\mathrm{\hspace{0.17em}0\hspace{0.17em}0})`$ spots disappear, the original two-phase orthorhombic state (OII+OIII′) stabilizes into the OII single-phase state and the resistive SC transition broadens considerably. Hence the OIII′ ordering appears to be a metastable precursor in the emergence of OII ordering in the \[I\]<sub>0.30</sub> SC samples.
The OIII′ phase cannot be justified by the original ASYNNNI model, due to the neglect of long range interactions. These interactions were later introduced in an extended model, limited to the $`6.5\le x\le 7`$ range, to account for the observation of the OIII phase. Our carefully equilibrated samples, which reproducibly develop both the OIII ($`k\sim 0.7`$) and the OIII′ ($`k`$=0.3) phase, call for an extension of the long range interaction models to the oxygen poor region of the phase diagram.
We investigated the local structure by NQR to determine the degree of short range order in $`k`$=0.3 samples at the SAF-SC boundary. The NQR resonance frequency, proportional to the electric field gradient (EFG) at the nucleus, is characteristic of each distinct copper site in the lattice. Since there are two Cu isotopes (63 and 65), each lattice site gives rise to an isotope doublet, with fixed frequency ($`\nu _{63}`$/$`\nu _{65}`$=1.082) and intensity (I<sub>63</sub>/I<sub>65</sub>=2.235) ratios. The Cu NQR spectra of the pair \[D\]<sub>0.30</sub> and \[I\]<sub>0.30</sub> are plotted in fig. 2 in the range 22-33 MHz, corrected for frequency dependent sensitivity and relaxation. Each sample shows two isotope doublets, the solid line being the best Gaussian fit subject to the above mentioned isotopic constraints. The EFG values of the two doublets identify them as two distinct Cu(1) sites: the 28.05-30.35 MHz doublet is 2-Cu(1), linearly coordinated with apical oxygen and neighbored by oxygen vacancies (v-Cu<sup>1+</sup>-v) in the plane, while the 22.1-23.9 MHz doublet corresponds to the chain-end configuration (O-Cu<sup>2+</sup>-v) of the 3-fold coordinated 3-Cu(1). The few 4-Cu(1) sites (O-Cu<sup>2+</sup>-O) contribute negligibly to the spectra because of their much larger EFG inhomogeneity.
The area $`A_i`$ ($`i`$=2,3) under each doublet yields the average number of oxygen atoms in the inter-Cu(1) sites (i.e. the average chain length) as $`\ell =\frac{3}{7}(2A_2/A_3+1)`$, and we obtain $`\ell _I=3.9(1)`$ and $`\ell _D=1.9(1)`$ for the two samples of the pair. The short average chain length found in \[D\]<sub>0.30</sub> is consistent with its broad NQR lines, since a short correlation length implies a broad distribution of EFG values. These results outline the role of the chain length in determining the chain hole-doping efficiency and directly confirm the theoretical prediction by Uimin et al. that there is essentially no charge transfer from chain fragments shorter than three oxygen atoms.
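For concreteness, this bookkeeping can be sketched in a few lines (the function and variable names are ours; the relation is the one quoted above):

```python
def avg_chain_length(a2, a3):
    """Average Cu(1)-O chain length from the areas of the two NQR doublets,
    using the relation quoted above: l = (3/7) * (2*A2/A3 + 1), with A2 the
    2-Cu(1) (v-Cu-v) doublet area and A3 the chain-end 3-Cu(1) doublet area."""
    return (3.0 / 7.0) * (2.0 * a2 / a3 + 1.0)

# Example: an area ratio A2/A3 of about 4 reproduces l_I ~ 3.9
print(avg_chain_length(4.0, 1.0))  # -> 3.857...
```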
The $`k`$-pair method proves itself an effective tool to extract more detailed information (inaccessible to single sample experiments) on the mechanism of short range oxygen-chain ordering which characterizes the transient SAF-SC region. The experimentally demonstrated inequivalence of the \[D\]<sub>0.30</sub> and \[I\]<sub>0.30</sub> samples of a pair agrees with previous analogous results obtained around the OII-OIII and the OIII-OI transition boundaries. This leads us to conclude that different metastable states exist near the thermodynamic equilibrium at a given oxygen content and are connected with the vacancy ordering in the Cu(1)-O<sub>x</sub> chain system. Different kinetic and thermodynamic reaction paths are realized during intercalation or deintercalation of oxygen and result in inequivalent chain growth processes, revealed by ED and NQR. Moreover, we point out that with our equilibration scheme structurally distinct domains occur in the same sample in the transient region around $`k`$=0.3, while the SC transition in \[I\]<sub>0.30</sub> and the resistive kink in \[D\]<sub>0.30</sub> (fig. 1 A<sub>2</sub>-A<sub>3</sub>) systematically occur at the same temperature. This suggests that a simultaneous electronic and structural phase separation takes place at the SAF-SC boundary, where orthorhombic (SC) and tetragonal (SAF) domains coexist. They originate nanoscopically and are critically dependent on the chain growth process. We believe that the different chain lengths observed in \[I\] and \[D\] samples represent an experimentally determined critical borderline between the vanishing of SAF behavior and the emergence of SC in 123.
# Some Problems in Defining Functional Integration over the Gauge Group
## Abstract
We find that sometimes the usual definition of functional integration over the gauge group through a limiting process may have internal difficulties.
Functional integration (or infinite dimensional integration) has become an indispensable tool for the study of modern field theories, especially gauge field theories. There is as yet no rigorous mathematical definition for this object. In practice, one usually defines functional integration as some limit of finite dimensional integrations, as is adopted in most field theory textbooks. When the integrand is of quasi-Gaussian type, one can define functional integration rigorously in the framework of perturbation theory. These are the definitions for the functional integration encountered in the process of quantising a field theory, and they have been studied intensively. In the process of quantising gauge field theory using the Faddeev-Popov technique one also encounters functional integration over the gauge group; it has been utilized to prove the gauge independence of the Green’s functions of gauge invariant operators. One should define it through limiting processes. In this paper we will show that sometimes this sort of definition may have internal difficulties.
The case we will investigate is $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)`$, where $`G=\prod _xU(1)_x`$ is the local gauge group of QED and $`\omega (x)=e^{i\theta (x)}`$ is the group element of $`G`$, and we choose the normalisation so that $`\int _GD\omega =1`$.
Following the usual recipe for defining functional integration, we first discretize space-time and replace the derivative $`\partial _\mu \omega (x)`$ by finite differences; we then carry out the finite dimensional integrations obtained through this discretization, and finally take the continuum limit.
In the course of replacing $`\partial _\mu \omega (x)`$ by finite differences there are two prescriptions. The first is to replace $`\partial _\mu \omega (x)`$ by $`\frac{\omega (x+\mathrm{\Delta }x)-\omega (x)}{\mathrm{\Delta }x^\mu }`$ (here we take $`\mathrm{\Delta }x^\mu >0`$ for definiteness), while the second is to replace $`\partial _\mu \omega (x)`$ by $`\frac{\omega (x+\mathrm{\Delta }x)-\omega (x-\mathrm{\Delta }x)}{2\mathrm{\Delta }x^\mu }`$.
In the first prescription
$$\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=\lim_{\mathrm{\Delta }x\to 0}\int \prod _id\omega (x_i)\,\omega ^{-1}(x)\,\frac{\omega (x+\mathrm{\Delta }x)-\omega (x)}{\mathrm{\Delta }x^\mu }=\lim_{\mathrm{\Delta }x\to 0}\left(-\frac{1}{\mathrm{\Delta }x^\mu }\right)=-\infty ,$$
(1)
where we have used the fact that $`\int _{U(1)}d\omega \,\omega =\int _0^{2\pi }\frac{d\theta }{2\pi }e^{i\theta }=0`$ and $`\int _{U(1)}d\omega \,\omega ^{-1}=\int _0^{2\pi }\frac{d\theta }{2\pi }e^{-i\theta }=0`$. In the second prescription
$$\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=\lim_{\mathrm{\Delta }x\to 0}\int \prod _id\omega (x_i)\,\omega ^{-1}(x)\,\frac{\omega (x+\mathrm{\Delta }x)-\omega (x-\mathrm{\Delta }x)}{2\mathrm{\Delta }x^\mu }=0.$$
(2)
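To make the contrast concrete, here is a minimal numerical sketch (ours, not part of the original argument) that checks both prescriptions on a toy three-site discretization, approximating the group integrals by Monte-Carlo averages over independent uniform phases:

```python
import numpy as np

rng = np.random.default_rng(0)
dx = 0.1  # lattice spacing Delta x^mu (an assumed toy value)

# Three independent U(1) elements omega = exp(i*theta) at x-dx, x, x+dx;
# theta uniform on [0, 2*pi) realizes the normalised group measure.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(200_000, 3))
w = np.exp(1j * theta)

# First prescription (forward difference): group average -> -1/dx
forward = np.mean((w[:, 2] - w[:, 1]) / (dx * w[:, 1]))
# Second prescription (symmetric difference): group average -> 0
symmetric = np.mean((w[:, 2] - w[:, 0]) / (2.0 * dx * w[:, 1]))

print(forward.real)    # ~ -10.0, i.e. -1/dx, divergent as dx -> 0
print(abs(symmetric))  # ~ 0, up to Monte-Carlo noise
```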
One may think that we have found two different ways of defining the functional integration $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)`$: in the first prescription it is divergent, while in the second prescription it is zero. Now we show that both assignments have internal difficulties.
In the first prescription $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=-\infty `$. We can show that this is in conflict with the property $`\int _GD\omega f(\omega )=\int _GD\omega f(\omega ^{-1})`$, because if this property holds $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)`$ should be zero:
$$\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=\int _GD\omega \,\omega (x)\partial _\mu \omega ^{-1}(x)=-\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x).$$
(3)
In the second prescription $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=0`$. We can show that this is in conflict with the property $`\int _GD\omega f(\omega _0\omega )=\int _GD\omega f(\omega )`$, because if this property holds $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)`$ should be infinite:
$$\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=\int _GD\omega \,(\omega _0(x)\omega (x))^{-1}\partial _\mu (\omega _0(x)\omega (x))=\int _GD\omega \,\left(\omega _0^{-1}(x)\partial _\mu \omega _0(x)+\omega ^{-1}(x)\partial _\mu \omega (x)\right)=\omega _0^{-1}(x)\partial _\mu \omega _0(x)+\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x).$$
(4)
From the above analysis we see that in this case both assignments are internally inconsistent, and the functional integration $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)`$ cannot be consistently defined. In the following we construct an example in which this functional integration appears, to show that the above discussion is not purely academic.
For an arbitrary operator $`O(\varphi )`$, gauge invariant or not, we can always construct an operator $`F_O(\varphi )=\int _GD\omega \,O(\varphi ^\omega )`$, where $`\varphi `$ stands for a gauge or matter field. It can easily be shown that this operator is gauge invariant:
$$F_O(\varphi ^{\omega _0})=\int _GD\omega \,O(\varphi ^{\omega _0\omega })=\int _GD\omega \,O(\varphi ^\omega )=F_O(\varphi ).$$
(5)
Note that in the above proof we have used the property $`\int _GD\omega f(\omega _0\omega )=\int _GD\omega f(\omega )`$.
Now take $`O(\varphi )=A_\mu (x)`$ and see what happens. In this case
$$F_O(\varphi )=\int _GD\omega \,A_\mu ^\omega (x)=\int _GD\omega \,\left(A_\mu (x)-\frac{i}{e}\omega ^{-1}(x)\partial _\mu \omega (x)\right)=A_\mu (x)-\frac{i}{e}\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x).$$
(6)
If we take $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=0`$, as is assigned in the second prescription, we immediately come to an absurd conclusion: the operator $`A_\mu (x)`$ is gauge invariant!
It is not hard to see where we went wrong. The assignment $`\int _GD\omega \,\omega ^{-1}(x)\partial _\mu \omega (x)=0`$ is in conflict with the property $`\int _GD\omega f(\omega _0\omega )=\int _GD\omega f(\omega )`$, which is essential for the proof of the gauge invariance of the operator $`F_O(\varphi )`$. So it is not strange that we came to a wrong conclusion.
In summary, we find that sometimes the definition of functional integration over the gauge group through a limiting process may have internal difficulties.
This work was stimulated by a seminar talk given by Prof. Hung Cheng at Nanjing University in 1999 and by his pioneering work in this field. This work is supported in part by the NSF(19675018), SED and SSTC of China, and in part by the DAAD.
# Fermi Surface Topology of $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ at $`h\nu =33eV`$: hole or electron-like?
## Abstract
We present new results from Angle-Resolved Photoemission experiments (ARPES) on overdoped $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ (BSCCO) crystals. With greatly improved energy and momentum resolution, we clearly identify the existence of electron-like portions of the Fermi Surface (FS) near $`\overline{M}`$ at $`h\nu =33eV`$. This is consistent with previously reported data and is robust against various FS crossing criteria. It is not an artifact induced by $`\vec{k}`$-dependent matrix element effects. We also present evidence for a breakage in the FS, pointing to the possible existence of two types of electronic components.
Angle-Resolved Photoemission Spectroscopy (ARPES) has become one of the most powerful tools for understanding the physics and electronic structure of high temperature superconductors (HTSC) and other correlated electron systems, since it allows one to probe the energy and momentum relations directly. Over the past decade, major discoveries have been made on both the normal and superconducting states, which include the Fermi Surface topology, superconducting gap symmetry, normal state pseudogap, etc.
Among the normal state properties, the Fermi Surface (FS) topology is one of the most important, since it needs to be determined prior to correctly predicting many physical properties. Most of our information about the FS topology of HTSC’s has come from ARPES studies on BSCCO, and the results have been widely interpreted as a hole-like barrel centered around the $`(\pi ,\pi )`$ or X(Y) points of the Brillouin zone, as illustrated in figure 1(a). This conclusion was made mainly by using incident photon energies around $`21eV`$. Recently, we showed that the spectra and the physical picture appear quite different when measured using $`33eV`$ photons: there is a strong depletion of spectral weight around $`\overline{M}`$ $`(\pi ,0)`$ and the FS appears to have electron-like portions, as illustrated in figure 1(b). This result was later confirmed by Feng et al.
This new interpretation of the data was questioned by two experimental groups in four recent papers. Fretwell et al. presented a detailed two-dimensional FS mapping of optimally doped BSCCO using $`33eV`$ photons. Although their data reproduced the salient features reported previously, they attempted to explain the data within the framework of the conventional hole-like FS centered on $`(\pi ,\pi )`$ by invoking empirically determined $`\vec{k}`$-dependent matrix element effects. They further argued that the poor energy and momentum resolutions combined with limited sampling of the Brillouin Zone in previous experiments led to incorrect interpretations. A similar interpretation was given by Mesot et al. on overdoped $`Bi_2Sr_2CuO_{6+\delta }`$ using symmetrization arguments on lower-resolution data. They stated “the FS is hole-like and is independent of photon energy.”
In order to clarify these issues, we have carefully probed the $`\vec{k}`$-space region around $`\overline{M}`$ in overdoped $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ at $`h\nu =33eV`$ with very high energy and momentum resolution. Our raw data are similar to those presented by Fretwell et al., although ours were taken at $`100K`$ in the normal state - a more ideal experimental situation (compared to their superconducting state data) for probing FS topology. We analyze the data in a number of ways, all of which give results consistent with the electron-like topology originally proposed by us. In contrast, the hole-like topology presents a much poorer fit to the data. We also show that Fretwell et al.’s data are consistent with the electron-like FS topology, further supporting our arguments. We have also performed the symmetrization method to determine FS crossings, and in contrast to Mesot et al.’s data, our symmetrized data do support the electron-like topology. In this sense our data are different from theirs, possibly due to uncertainties in their Fermi energy calibration.
Borisenko et al. and Golden et al. recently presented beautiful and highly detailed two-dimensional FS mappings of optimally doped BSCCO using $`21.22eV`$ photons. While their data give perhaps the clearest evidence yet for the hole-like topology in this photon energy range, they do not address the different behavior observed at $`33eV`$. Golden et al. did show a small portion of data at $`33eV`$ consistent with the data of Chuang et al., and, similar to Fretwell et al. and Mesot et al., they indirectly argued that these data were affected by unfavorable matrix element effects.
The experiments were done at Beamline 10.0.1 at the Advanced Light Source (ALS), Berkeley, CA using a Scienta SES 200 energy analyzer. We used the angle mode of the analyzer to simultaneously collect 89 individual spectra along $`14^o`$ wide angular slices. We present 11 of these slices, representing almost 1000 individual energy distribution curves (EDCs) from one sample. The angular resolution along these slices was about $`\pm 0.08^o`$ in the $`\theta `$ direction and about $`\pm 0.25^o`$ in the perpendicular $`\varphi `$ direction. At $`h\nu =33eV`$, the converted momentum resolution is $`(\mathrm{\Delta }k_x,\mathrm{\Delta }k_y)\simeq (0.01\pi ,0.03\pi )`$. We could map out the two-dimensional Brillouin zone by rotating the sample in either $`\theta `$ (parallel to the $`14^o`$ slice) or in $`\varphi `$ (perpendicular to the slice). The analyzer was left fixed, with the central analyzer angle making an $`83^o`$ angle of incidence relative to the photon beam. In this configuration, the $`14^o`$ slices are parallel to the incident photon polarization direction and the $`\mathrm{\Gamma }\overline{M}`$ high symmetry line.
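As a side note, the angle-to-momentum conversion behind these numbers follows from standard free-electron final-state kinematics; a minimal sketch (our own function names, and the work function value is an assumption, not from the text):

```python
import numpy as np

HV = 33.0    # photon energy used here (eV)
PHI = 4.5    # assumed work function (eV) -- hypothetical value
K0 = 0.5123  # sqrt(2*m_e)/hbar, in A^-1 per sqrt(eV)

def k_parallel(theta_deg, binding_energy=0.0):
    """In-plane momentum (A^-1) at emission angle theta:
    k_par = 0.5123 * sqrt(E_kin) * sin(theta)."""
    e_kin = HV - PHI - binding_energy
    return K0 * np.sqrt(e_kin) * np.sin(np.radians(theta_deg))

# A +/-0.08 deg angular resolution near normal emission then corresponds
# to roughly 0.004 A^-1 in k-space at E_F:
print(k_parallel(0.08))
```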
The energy resolution was better than $`10meV`$ FWHM, as determined by the 10-90$`\%`$ width of a gold reference spectrum taken at $`10K`$. The sample Bi046 used in this study was about 3mm on a side and was annealed in oxygen to overdope it, giving a $`T_c=79K`$. Throughout the whole experiment, the base pressure was maintained below $`4\times 10^{-11}`$ torr and the temperature was at $`100K`$, well above $`T_c`$. All experimental data were normalized by using the high-harmonic emission above $`E_F`$, as discussed previously.
Figure 2 shows the new data from this sample. The 11 panels of part (a) show false-color Energy Distribution Curves (EDC) taken at 2 different sets of $`\theta `$ angles (left and right panels) and 9 different $`\varphi `$ angles. The vertical axes are the binding energy, and the horizontal axes are the $`\theta `$ angle along the $`14^o`$ slice, with $`0^o`$ equal to normal emission. Each of the plots was normalized separately, i.e. the color scale can not be connected from one plot to another.
Each of these plots shows one or more features which disperse in energy as a function of angle (or wavevector $`\vec{k}`$). Most of these features can easily be followed up to $`E_F`$, at which point the features disappear due to the FS crossing. As a guide to the eye, we have overlaid a black curve on top of the data and labeled each feature as S.S. (superstructure band) or M (main band). Due to polarization selection rules, the ARPES features along the $`\mathrm{\Gamma }Y`$ line have unfavorable emission, such that in panels (ix)-(xi) the S.S. band has stronger intensity than the main band. In figures 2(b) and 2(c) we plot white dots which indicate the FS crossing points determined by looking at the dispersion in part (a), on the $`\mathrm{\Gamma }\overline{M}Y`$ quadrant of the Brillouin zone. (The positions of the 11 individual slices are indicated in panel (b).)
Another way to visualize FS crossings is to make a two-dimensional plot of the spectral intensity at $`E_F`$, i.e. $`A(\vec{k},E_F)`$. This is plotted in parts (b) and (c) with data obtained by compressing the 11 panels of part (a). For these plots the relative intensity between each of the 11 panels is critical. They were first normalized by looking at the high-harmonic emission above $`E_F`$ only. We then integrated the spectral weight of the EDCs of Figure 2(a) over a $`50meV`$ energy window centered at $`E_F`$. Figures 2(b) and 2(c) show false color plots of this spectral intensity as a function of $`\theta `$ and $`\varphi `$ with the color scale on the left. The FS should show up on this plot as the region of maximum spectral intensity.
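In code, this compression step amounts to the following minimal sketch (our own version, with hypothetical array names):

```python
import numpy as np

def fermi_intensity_map(energies, edcs, e_fermi=0.0, half_window=0.025):
    """Spectral intensity at E_F for each angle: integrate every EDC over a
    50 meV window centred on the Fermi level.  `energies` (eV) is the common
    energy axis; `edcs` has shape (n_angles, n_energies)."""
    energies = np.asarray(energies)
    mask = np.abs(energies - e_fermi) <= half_window
    return np.asarray(edcs)[:, mask].sum(axis=1)
```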
The black lines in figure 2(b) show the experimentally determined FS from these data. The thick black lines represent the main FS in the first and second zones, while the thin lines represent the superstructure-derived FS, which is obtained by shifting the main FS by $`(0.2\pi ,0.2\pi )`$ along the $`\mathrm{\Gamma }Y`$ direction. These FS’s are consistent with the white dots obtained from panel (a) as well as with the high intensity locus of panel (b). They are also consistent with the symmetrization method (figure 3(c)) and with the gradient or 50$`\%`$ point of $`n(\vec{k})`$ plots. The FS determined here is qualitatively and quantitatively (to better than 5$`\%`$) the same as that determined previously. The very slight shift of the high intensity locus away from $`\overline{M}`$ compared to the white dots is due to an effect of the finite energy integration window used to make the plot of panel (b).
In contrast, the hole-like FS topology overlaid on the data in panel (c) cannot explain many of the crossing locations. This FS is taken from Fretwell et al., who also took data at $`33eV`$. The thick black line is the main FS, while the thin black and red lines are the first and second order S.S. FS’s. The yellow lines are possible shadow bands obtained by reflecting the main and S.S. FS’s about the $`(\pi ,0)-(0,\pi )`$ line. These FS’s show dashed sections which, as proposed by Fretwell, indicate a strongly reduced spectral intensity region due to strongly $`\vec{k}`$-dependent matrix element effects. These are supposed to account for the lack of an observed FS crossing along $`\overline{M}Y`$. Even though this suggestion has many more free parameters than the electron-like topology of panel (b), it does not match the data as well. In particular, it cannot match the curvature of the high intensity portion near $`(\theta ,\varphi )=(14,2)`$ (also see cut (iii) in panel (a)), and it has trouble with the portion of the FS naturally explained by the S.S. band in panel (b) (the part circled in white). In the hole-like topology of figure 2(c) this would have to be explained by a combination of three bands - black, red, and yellow. The unlikeliness of this is amplified when a closer look at the EDC’s is taken. For example, panel (c) shows that the crossing at $`(\theta ,\varphi )=(19,5)`$ should come from the yellow shadow band. As such it should have a reversed E vs. $`\vec{k}`$ dispersion compared to the main band, while panel (vi) of part (a) shows that it does not. The experimental intensity is also too strong - it should be a shadow of a S.S. band and should also be strongly reduced by matrix element effects (dashed lines).
The $`33eV`$ FS plots presented by Fretwell et al. are quite similar to the data in figure 2, one of which is reproduced in figure 3. Panel (a) shows their data and their interpretation within the hole-like FS topology. The image plot was obtained by integrating the spectral weight over the relatively large energy window $`(-100meV,+100meV)`$ in the superconducting state at $`40K`$. We feel that the hole-like topology presented in this figure has a number of deficiencies and cannot explain the data well. First, the hole-like FS segments extend towards $`\overline{M}`$ while the data do not. Fretwell et al. attempted to reconcile this by empirically introducing a strongly $`\vec{k}`$-space dependent matrix element effect to drastically reduce the weight near $`\overline{M}`$. However, such an effect cannot explain the curvature of the FS, which is manifested in the intensity plot by the locus of high intensity points. Namely, in figure 3(a) we have circled a high intensity region which cannot be explained by a hole-like FS. Figure 3(b) shows the electron-like FS (thick white line) plus S.S. band portions (thin white lines) overlaid with the same intensity plot. Now the main band FS trace matches beautifully with the locus of high intensity spots, without having to introduce any complicated matrix-element dependent physics.
The S.S. band circled in our figure 2(b) is not apparent in Fretwell et al.’s data of figure 3(a) and (b). We suggest two possible reasons for this discrepancy: (1) the S.S. bands may be weaker in Fretwell’s data, or (2) they perhaps did not include enough dynamic range in their color scale plot, so that the S.S. bands were not apparent.
Using symmetrized EDCs, Mesot et al. also argued that their $`34eV`$ data from overdoped $`Bi_2Sr_2CuO_{6+\delta }`$ supported the hole-like topology, since they did not observe a FS crossing along $`\mathrm{\Gamma }\overline{M}`$ but did observe one along $`\overline{M}Y`$. We have applied this method to our data, and find that it is supportive of an electron-like topology, as shown in figure 3(c). Each EDC was added to a mirror image of itself reflected around $`E_F`$, which, assuming electron-hole symmetry near $`E_F`$, will remove the effect of the Fermi function cutoff. As stressed by Mesot, the symmetrized EDC will show a peak at $`E_F`$ if there is a FS crossing, otherwise it will show a dip. Figure 3(c) shows the symmetrized EDCs along $`\mathrm{\Gamma }\overline{M}Z`$ from the data of figure 2. The dip at $`E_F`$ disappears at around angle $`(\theta ,\varphi )=(15,0)`$, indicating the FS crossing - a location that agrees to within $`2\%`$ with that obtained directly from figure 2. The consistency of the results obtained by all methods gives us confidence in the electron-like topology observed at $`h\nu =33eV`$. However, we still need to worry about the different result obtained by Mesot et al. The difference might be due to an incorrect determination of $`E_F`$, since the symmetrization method is very sensitive to a shift of $`E_F`$. We have performed symmetrizations on BSCCO data from three different experimental systems and from both the single-layer and double-layer compounds. All data from overdoped or optimally doped samples have given the same result - a peak at $`E_F`$ after symmetrization, indicating a FS crossing along $`\mathrm{\Gamma }\overline{M}`$. This gives us confidence that an undetected $`E_F`$ calibration error could not have adversely affected our data.
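The symmetrization itself is a one-line operation; a minimal sketch (ours), assuming an ascending energy axis:

```python
import numpy as np

def symmetrize_edc(energy, intensity, e_fermi=0.0):
    """Return I(w) + I(2*E_F - w): assuming particle-hole symmetry near E_F,
    this removes the Fermi-function cutoff, so a peak (dip) at E_F signals
    the presence (absence) of a Fermi-surface crossing.
    `energy` must be an ascending numpy array; `intensity` the EDC on it."""
    mirrored = np.interp(2.0 * e_fermi - energy, energy, intensity,
                         left=0.0, right=0.0)
    return intensity + mirrored
```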
The increased energy and momentum resolution obtained in figure 2 brings up other subtleties which have not been previously observed. If we track the main electron-like FS from the $`\mathrm{\Gamma }Y`$ line towards the $`\mathrm{\Gamma }\overline{M}`$ line, the intensity first increases, which is understood as due to polarization effects. It then weakens around $`(\theta ,\varphi )=(14,2)`$ before strengthening again along the $`\mathrm{\Gamma }\overline{M}`$ line. This makes the FS appear as if it has two components - one nearer the $`\mathrm{\Gamma }Y`$ line and one nearer the $`\mathrm{\Gamma }\overline{M}`$ line. Further study needs to be carried out to deconvolve the origin of the separation of the FS into these components, as well as to study potential differences in the behavior of each component.
Finally, of course, there is the critical issue of connecting the electron-like FS topology observed at $`33eV`$ with the other topologies observed at other photon energies. A natural possibility is to consider coherent three-dimensional band structure effects from the $`x^2-y^2`$ band (or from the $`z^2`$ band), although this would need to be reconciled with the highly two-dimensional nature of the cuprates. Even-odd splitting between the $`CuO_2`$ bilayers may produce both an electron-like and a hole-like FS in the same sample, although this would need to be reconciled with the single-layer Bi2201 data, which also appear to show both topologies. Two FS’s may simultaneously exist in the same sample for other reasons as well, for instance due to phase separation into hole-rich and hole-poor regions or into regions with and without stripe disorder, each of which may produce its own FS portions. Within these scenarios we still need to understand why one piece of the FS is accentuated at one photon energy while another is accentuated at another. Matrix element effects may play a role in this.
In this communication we have presented high energy and momentum resolution ARPES results on the normal state of overdoped $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ at $`h\nu =33eV`$. We clearly identify the existence of electron-like portions of the Fermi Surface near $`\overline{M}`$ by looking at the high intensity locus in $`A(\vec{k},E_F)`$ plots, the dispersion of EDCs, and symmetrized EDCs. We reach the same result by looking at either the main band or the S.S. bands. In contrast, the hole-like topology cannot explain the details of the spectra, even with the ad-hoc inclusion of strong $`\vec{k}`$-dependent matrix element effects. In addition, our increased resolution reveals some new and potentially important subtleties of the data, including a break in the main FS.
We acknowledge helpful discussions with Z.-X. Shen and Y. Aiura, experimental help from S. Keller, P. Bogdanov and X.-J. Zhou, and analysis software from J. Denlinger. This work was partially supported by a grant from the ONR. The ALS is supported by the DOE.
# Thermodynamic Fingerprints of Disorder in Flux Line Lattices and other Glassy Mesoscopic Systems
## Abstract
We examine probability distributions for thermodynamic quantities in finite-sized random systems close to criticality. Guided by available exact results, a general ansatz is proposed for replicated free energies, which leads to scaling forms for cumulants of various macroscopic observables. For the specific example of a planar flux line lattice in a two dimensional superconducting film near $`H_{\mathrm{c1}}`$, we provide detailed results for the statistics of the magnetic flux density, susceptibility, heat capacity, and their cross-correlations.
Impurities in a sample are expected to modify various measurements, making it desirable to characterize the probability distribution functions (PDFs) for the outcomes. These PDFs may provide important insight into the underlying physics, as in the case of universal conductance fluctuations in mesoscopic circuits, a subject of much recent investigation. Here we consider the signature of impurities in thermodynamic systems at equilibrium. At one extreme, microscopic quantities such as two-point correlation functions are quite sensitive to disorder, in some cases their PDFs exhibiting complicated multiscaling behavior. On the other hand, the free energy, and other macroscopic properties, are expected to be self-averaging, converging to fixed values in an infinite system. If all correlation lengths in the system are finite, the PDFs for a mesoscopic (finite-sized) sample should be governed by the central limit theorem. We thus focus on systems with long-range correlations, such as close to a critical point, or in a flux line (FL) lattice.
In the most interesting cases, disorder is relevant, leading to novel correlations distinct from the pure case. However, due to the difficulty of characterizing collective behavior in such glassy disordered systems, little is known about the corresponding PDFs. Aharony and Harris recently studied the PDFs of thermodynamic observables, near critical points with relevant randomness, in finite-sized random systems. They find a lack of self-averaging, and universal, non-Gaussian PDFs. There are also some results for elastic manifolds pinned by impurities: using replicas, Mezard and Parisi relate the scaling of the PDF for the susceptibility with the size of a manifold to its roughness exponent. Exact results on PDFs have been obtained for a directed polymer on a disordered Cayley tree by Derrida and Spohn, using a mapping to a deterministic differential equation.
An important example of pinned elastic media is provided by FLs in a superconductor with point impurities. Indeed, our study was motivated by the experiment of Bolle et al. on a 2d FL lattice oriented parallel to a thin micrometer-sized film of 2H-NbSe<sub>2</sub>. Magnetic response measurements show interesting sample-dependent fine structure in $`B(H)`$: a fingerprint of the underlying pinning landscape. Interestingly, the problem of 2d lines with impurities is amenable to an exact solution using replicas, which gives not only the quenched average but also cumulants of the free energy. We shall use this exact solution to motivate an ansatz for the scaling of PDFs in general disorder-dominated thermodynamic systems.
In what follows, we first present a general ansatz for the scaling of the replicated free energy. The key assumption is treating the number of replicas as a scaling field. Consequences of this ansatz for the scaling of cumulants of thermodynamic observables in mesoscopic samples are then enumerated. We then re-examine the example of FLs in a 2d layer in more detail, proposing specific experimental tests of the theoretical picture.
The lack of translational symmetry in the presence of impurities is cured by calculating disorder averaged moments, $`[Z^n]`$, of the partition function. These moments then provide information about PDFs of the free energy, magnetization, or susceptibility. We thus focus on the scaling of $`F_n=-T\mathrm{ln}[Z^n]`$, the free energy of $`n`$ replicas of the system, with interactions resulting from the disorder average. In fact $`F_n`$ can be determined exactly for a single elastic line, or a lattice of non-crossing lines, in 2d with point impurities. While this is sufficient for the FL experiment, we would like to propose a more general scaling ansatz for random systems. Consider $`np`$ replicas of a system of size $`L_{\parallel }^{d-1}L_{\perp }`$, with $`p`$ groups of $`n`$ replicas subject to the same reduced temperature $`\tau _j`$ and scaling field $`\psi _j`$, for $`j=1,\dots ,p`$. The vector of scaling fields $`𝝍`$ can consist of, say, external magnetic fields or chemical potentials. Our scaling ansatz for the singular part of the free energy density, $`f_p(n,𝝉,𝝍)=-TL_{\parallel }^{-(d-1)}L_{\perp }^{-1}\mathrm{ln}[\prod _{j=1}^{p}Z^n(\tau _j,\psi _j)]`$, is
$$f_p^{(\mathrm{s})}(n,𝝉,𝝍)=b^{-(d-1+\zeta )}f_p^{(\mathrm{s})}(nb^{\theta _{\parallel }},𝝉b^{1/\nu _{\parallel }},𝝍b^{\delta /\nu _{\parallel }}).$$
(1)
To include FL systems, we have allowed for anisotropic scaling, with a ‘roughness exponent’ $`\zeta `$ relating the $`(d-1)`$-dimensional longitudinal ($`l_{\parallel }`$) and the 1d transversal ($`l_{\perp }`$) scales by $`l_{\perp }\sim l_{\parallel }^{\zeta }`$. (For isotropic systems, we can set $`\zeta =1`$.) Compared to the standard scaling hypothesis for pure systems, the replica structure introduces $`n`$ as an additional scaling field, with dimension $`\theta _{\parallel }=\zeta \theta _{\perp }`$. The exponent $`\theta _{\parallel }`$ appears in the modified hyperscaling relation $`2-\alpha =\nu _{\parallel }(d-1+\zeta -\theta _{\parallel })`$, which follows readily from Eq. (1) by comparing the $`𝒪(n)`$ terms. Besides its agreement with available exact results, a mean-field analysis of the $`\varphi ^4`$-model with Gaussian random fluctuations in $`T_c`$ also supports our ansatz.
Scaling of disorder averaged cumulants of the thermodynamic observables can now be extracted from the above ansatz for $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\tau _j,\psi _j)]`$. Here we enumerate the main results (details to appear elsewhere):
(1) Free energy cumulants: For $`p=1`$, we have
$$\mathrm{ln}[Z^n(\tau ,\psi )]=\sum _{j=1}^{\infty }\frac{(-n)^j}{j!}\frac{[F^j(\tau ,\psi )]_c}{T^j},$$
(2)
where $`[\dots ]_c`$ denotes the connected or cumulant disorder average. Using a scaling factor proportional to the correlation length, $`b\sim \xi _{\parallel }\sim \tau ^{-\nu _{\parallel }}`$, gives $`[F^p(\tau )]_c\sim \tau ^{\nu _{\parallel }(d-1+\zeta -p\theta _{\parallel })}`$. For mesoscopic systems of size $`L_{\perp }\sim L_{\parallel }^{\zeta }`$, close to criticality ($`\xi _{\parallel }>L_{\parallel }`$) we must set $`b\sim L_{\parallel }`$, resulting in $`[F^p]_c\sim L_{\parallel }^{p\theta _{\parallel }}`$. The exponent $`\theta _{\parallel }`$ thus characterizes all sample-to-sample fluctuations in the free energy.
(2) Thermal averages, such as the magnetization or the number of ‘particles’, are first order derivatives of the free energy, which we will denote generally by $`X\equiv -\partial F/\partial \psi |_{\psi =0}`$. For a specific realization of disorder, the partition function can be expanded as
$$Z(\psi )=Z(0)\sum _{j=0}^{\infty }\frac{(\psi /T)^j}{j!}\langle X^j\rangle ,$$
(3)
where $`\langle \dots \rangle `$ denotes the thermal average. Using this representation, one can show that the $`p^{\mathrm{th}}`$ coefficient of the expansion of $`\mathrm{ln}[Z^n(\psi )]`$ with respect to $`n\psi `$ is the connected average $`[\langle X\rangle ^p]_c\,T^{-p}/p!`$. Applying the scaling hypothesis of Eq. (1), we obtain, with $`\delta =2-\alpha -\beta `$, the critical scaling behavior $`[\langle X\rangle ^p]_c\sim \tau ^{\beta _p}`$, with $`\beta _p=p\beta -\nu _{\parallel }(d-1+\zeta )(p-1)`$, and $`\beta `$ defined by $`[\langle X\rangle ]\sim \tau ^{\beta }`$.
(3) Response functions: Cumulants of the susceptibility, $`\chi =(\langle X^2\rangle -\langle X\rangle ^2)/T`$, can be calculated in a similar way. We now consider $`p`$ sets of $`n`$ replicas, each with the same scaling field $`\psi _j`$, $`j=1,\dots ,p`$. The coefficient of the term $`n^p(\psi _1\cdots \psi _p)^2`$ of the expansion of the generating function $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\psi _j)]`$ is given by the cumulant average $`(2T)^{-p}[\chi ^p]_c`$. This gives a critical scaling of the form $`[\chi ^p]_c\sim \tau ^{-\gamma _p}`$, with $`\gamma _p=p\gamma +\nu _{\parallel }(d-1+\zeta )(p-1)`$.
(4) Cross correlations, such as $`\mathrm{\Gamma }_{XY}=\langle XY\rangle -\langle X\rangle \langle Y\rangle `$, where $`Y`$ is a derivative of the free energy with respect to another scaling field $`\stackrel{~}{\psi }`$, are also of interest. For example, the cross correlation, $`\mathrm{\Gamma }_{EM}`$, of magnetization $`M`$ and energy $`E`$ has been measured in numerical simulations, since it allows for a more accurate estimate of the exponent ratio $`\alpha /\nu `$ than a direct method. The generator of $`\mathrm{\Gamma }_{XY}`$ is $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\psi _j,\stackrel{~}{\psi }_j)]`$, with the coefficients $`T^{-2p}[\mathrm{\Gamma }_{XY}^p]_c`$ corresponding to the terms $`n^p\psi _1\stackrel{~}{\psi }_1\cdots \psi _p\stackrel{~}{\psi }_p`$. Our scaling ansatz yields $`[\mathrm{\Gamma }_{EM}^p]_c\sim \tau ^{p(\beta -1)-\nu _{\parallel }(d-1+\zeta )(p-1)}`$. There are also correlations between different susceptibilities. Along the lines presented above, it is possible to show that these cross-correlations can be generated from the expansion of $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\psi _j)Z^n(\stackrel{~}{\psi }_j)]`$, and that the cumulant $`(2T)^{-2p}[(\chi \stackrel{~}{\chi })^p]_c`$ is the coefficient of the term $`n^{2p}\psi _1^2\stackrel{~}{\psi }_1^2\cdots \psi _p^2\stackrel{~}{\psi }_p^2`$. Choosing for $`\chi `$ and $`\stackrel{~}{\chi }`$ the magnetic susceptibility and the heat capacity $`c`$, respectively, we get $`[(\chi c)^p]_c\sim \tau ^{-2\gamma _p-\nu _{\parallel }(d-1+\zeta )}`$.
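For bookkeeping, the cumulant exponents of points (2) and (3) can be collected in a couple of lines (a sketch with our own function names; $`\nu _{\parallel }`$ is written `nu_par`):

```python
def beta_p(p, beta, nu_par, d, zeta):
    """Exponent in [<X>^p]_c ~ tau^{beta_p}:
    beta_p = p*beta - nu_par*(d-1+zeta)*(p-1)."""
    return p * beta - nu_par * (d - 1 + zeta) * (p - 1)

def gamma_p(p, gamma, nu_par, d, zeta):
    """Exponent in [chi^p]_c ~ tau^{-gamma_p}:
    gamma_p = p*gamma + nu_par*(d-1+zeta)*(p-1)."""
    return p * gamma + nu_par * (d - 1 + zeta) * (p - 1)
```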
Finally, we compare with the results of Aharony and Harris on the relative cumulants $`R_{p,X}=[X^p]_c/[X]^p`$, in a $`\varphi ^4`$-model with random $`T_c`$. By a perturbative renormalization group (RG) analysis, in which they assume a Gaussian distribution for the randomness on all length scales, they obtain $`R_{p,\chi }=p!\,2^{p-3}R_{2,\chi }^{p-1}`$ for the magnetic susceptibility. Indeed, for all observables (including susceptibilities) we find $`R_{p,X}\sim R_{2,X}^{p-1}`$, but the exact coefficient cannot be obtained from the scaling ansatz. However, even for an originally Gaussian random $`T_c`$, higher than second cumulants are generated by RG transformations and yield additional (universal) contributions to the above coefficient. Assuming that randomness is relevant at the phase transition, we find for the second relative cumulant of any observable $`X`$ that $`R_{2,X}\sim (b/L_{\parallel })^{d-1}(b^{\zeta }/L_{\perp })`$. Choosing a scaling factor $`b\sim \xi _{\parallel }`$ if $`\xi _{\parallel }\ll L_{\parallel }`$, and $`b\sim L_{\parallel }`$ in the critical regime $`\xi _{\parallel }\gg L_{\parallel }`$, we reproduce the results of Aharony and Harris.
We now apply these results to the experimental study by Bolle et al. of a planar, randomly pinned vortex array penetrating a mesoscopic quasi $`2`$-dimensional thin single crystal of 2H-NbSe<sub>2</sub> with weak pinning. The crystal was glued onto a silicon micromachined mechanical resonator. By measuring jumps in the resonant frequency caused by magnetic FLs entering the sample, the number of lines was determined very accurately as a function of the applied magnetic field. Close to the lower critical field $`H_{\mathrm{c1}}`$, the jumps were irreversible, indicative of the difficulty of FLs finding optimal pinning configurations. At higher fields, the increased line density should result in a smaller pinning length, enabling the FL lattice to find its optimal pinning state. Indeed, reversible behavior is observed experimentally in this case. In both regimes, the response depends on the detailed configuration of pinning sites, the sample geometry, and the vortex interactions. Therefore, the observed discrete jumps in the constitutive relation, $`B(H)`$, provide a fingerprint of the disorder in a specific sample.
Previous theoretical and experimental work concentrated mainly on the determination of the relation $`B(H)`$ near $`H_{\mathrm{c1}}`$ for different kinds of disorder. Fluctuations in $`B(H)`$ are also of interest, characterizing more clearly the non-trivial effects of randomness in this mesoscopic system. Therefore, we study entire PDFs for observables like $`B`$, the magnetic susceptibility $`\chi `$, the heat capacity $`c`$, and correlations between them. First consider a single FL, to introduce the basic notations. The transversal wandering of such a line is described by a trajectory $`x(y)`$, which has an energy
$$\mathcal{H}=\int _0^Ldy\left[\frac{g}{2}\left(\frac{dx}{dy}\right)^2+V(x(y),y)\right].$$
(4)
Here, $`g=\varphi _0H_{\mathrm{c1}}/(4\pi )`$ is the elastic stiffness of the FL carrying a flux quantum $`\varphi _0`$. The random potential $`V(x,y)`$ mimics point-like pinning, and is characterized by the disorder average $`[V(x,y)V(x^{\prime },y^{\prime })]=\mathrm{\Delta }\delta _\xi (x-x^{\prime })\delta (y-y^{\prime })`$, with strength $`\mathrm{\Delta }`$ and a short range function, $`\delta _\xi `$, of width of the in-plane coherence length $`\xi `$. The dimensions of the sample are taken to be $`L`$ along the field direction $`\widehat{𝐲}`$, and $`W`$ in the transverse direction $`\widehat{𝐱}`$.
A single FL can wander freely in the transverse direction to take advantage of the randomly distributed pinning centers. This leads to an anomalous growth of its displacement, $`\delta x(L)\sim L^{2/3}`$. In contrast, in a lattice of lines, the non-crossing condition is a strong restriction on the possible configurations of each line. The earlier exact results for such a lattice only apply to FLs at temperatures $`T`$ larger than $`T^{*}=(g\xi \mathrm{\Delta })^{1/3}`$, the height of the smallest energy barrier due to the random potential. Recently, Korshunov and Dotsenko generalized the results for a single line to the low temperature limit, $`T\ll T^{*}`$, by using a replica interaction potential with a small but finite curvature, instead of the rectangular well which works in the high-$`T`$ limit. The generalization of their solution to an array of lines is straightforward. The non-trivial part of the free energy of $`n`$ replicas of the system, with a fixed number $`N=W/a_0`$ of lines at mean separation $`a_0`$, can be summarized in both limits as
$$\frac{F_n(N)}{T}=-\mathrm{ln}[Z^n]=-nkN^2\frac{L}{W}\frac{\mathrm{\Delta }}{T^2}𝒢\left(n\sqrt{\frac{kg\mathrm{\Delta }W}{NT^3}}\right),$$
(5)
with the parameter $`k=1`$ for $`T\gg T^{*}`$, and $`k\simeq T/T^{*}`$ for $`T\ll T^{*}`$. Here we have neglected the trivial contribution to the free energy which is linear in $`N`$, and $`𝒢`$ is an analytic function.
To study the fluctuations in the number of FLs near $`H_{\mathrm{c1}}`$, we use a grand canonical description with a chemical potential $`\mu =ghL`$, where $`h=(H-H_{\mathrm{c1}})/H_{\mathrm{c1}}`$ is the reduced magnetic field. The PDF of $`N`$ is characterized by the disorder averaged cumulants $`[N^p]_c`$ of the number of FLs, which are given by $`T^p`$ times the coefficients of the terms $`(n\mu )^p`$ in the expansion of $`\mathrm{ln}[Z^n(\mu )]`$, leading to
$$[N^p]_c\sim WL^{1-p}\left(\frac{gT}{k\mathrm{\Delta }}\right)^{2-p}h^{(5-3p)/2}.$$
(6)
The moments of the magnetic flux density follow from $`B=\varphi _0N/(dW)`$, where $`d`$ is the thickness of the sample. It is interesting to note that the second cumulant is universal, $`[N^2]_c\sim (W/L)h^{-1/2}`$, independent of temperature and disorder strength, in both the high and low temperature regimes. For $`T\ll T^{*}`$, all cumulants are independent of $`T`$, but do depend on the disorder strength. In the thermodynamic limit $`WL\to \infty `$, all cumulants $`[B^p]_c`$ approach zero for $`p>1`$. On the other hand, for mesoscopic systems, the divergence of $`[B^p]_c`$ for $`h\to 0`$ is stopped if the average distance $`a_0`$ between the FLs approaches the system size $`W`$. Deep in the glassy low density phase near $`H_{\mathrm{c1}}`$, we obtain from Eq. (5) for the free energy cumulants
$$[F^p]_c\sim WLg^2\left(\frac{T}{k\mathrm{\Delta }}\right)^{2-p}h^{(5-p)/2}.$$
(7)
This agrees with our scaling ansatz with exponents $`\zeta =2/3`$, $`\theta _{\parallel }=1/3`$, $`\nu _{\parallel }=3/2`$ and $`\delta =1`$, and gives a universal value for $`p=2`$, which still depends on the stiffness $`g`$.
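As a quick consistency check (our own sketch, not part of the original text), one can verify that these exact exponents reproduce the $`h`$-exponents of Eqs. (6) and (7) through the scaling ansatz:

```python
from fractions import Fraction as F

d, zeta, theta, nu = 2, F(2, 3), F(1, 3), F(3, 2)  # exact exponents quoted above
beta = 1  # [N] ~ h, i.e. Eq. (6) with p = 1

for p in range(1, 5):
    # Free-energy cumulants: nu*(d-1+zeta-p*theta) = (5-p)/2, as in Eq. (7)
    assert nu * (d - 1 + zeta - p * theta) == F(5 - p, 2)
    # Number cumulants: p*beta - nu*(d-1+zeta)*(p-1) = (5-3p)/2, as in Eq. (6)
    assert p * beta - nu * (d - 1 + zeta) * (p - 1) == F(5 - 3 * p, 2)
```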
Now consider the response of the FL lattice to changes in the external magnetic field, measured by the susceptibility $`\chi `$. This is related to thermal fluctuations in the number of FLs by $`\chi =(L/W)(\langle N^2\rangle -\langle N\rangle ^2)/T`$, for a fixed realization of disorder. The disorder averaged $`p^{\mathrm{th}}`$ cumulant of the number fluctuations can be obtained from the generating function $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\mu _j)]`$ for $`p`$ different chemical potentials $`\mu _j`$, giving the susceptibility cumulants
$$[\chi ^p]_c\sim (WL)^{1-p}g^{2-2p}\left(\frac{T}{k\mathrm{\Delta }}\right)^{2-p}h^{-5(p-1)/2}.$$
(8)
This result deserves a few comments. First, the disorder averaged susceptibility $`[\chi ]`$ is non-singular at the transition $`h\to 0`$, since the exponent vanishes for $`p=1`$. In fact, due to a statistical tilt symmetry the susceptibility is simply related to the compression and tilt elastic moduli ($`c_{11}`$, $`c_{44}`$) of the FL lattice by $`[\chi ]=(2\pi /a_0)^2(c_{11}c_{44})^{-1/2}`$. This is in agreement with our result $`[\chi ]\sim T/(k\mathrm{\Delta })`$, as can be seen by using $`c_{44}=g/a_0`$, rewriting $`c_{11}`$ in terms of the steric repulsion between the lines, and estimating $`a_0=W/N`$ from Eq. (6) with $`p=1`$ and a value for $`h`$ of order one. Again, the variance is the only moment of the susceptibility which shows universality, $`[\chi ^2]_c\sim (WLg^2)^{-1}h^{-5/2}`$.
Next, consider the response of a fixed number of FLs to changes in temperature. Keeping the number $`N`$ of FLs constant by adjusting the magnetic field, but changing the temperature, allows us to study thermal fluctuations around the global ground-state. Physically, the responses of the FL lattice to changes in magnetic field $`H`$ or temperature $`T`$ are quite different: a change in $`H`$ (or $`N`$) usually leads to a complete rearrangement of the whole ensemble of lines. Increasing $`T`$, however, causes stronger entropic repulsions between the $`N`$ lines, which now fluctuate around their state of optimal pinning. The response to a change in $`T`$ also depends on the detailed pinning landscape, producing a heat capacity $`c`$ which is sample specific. Therefore, the statistics of $`c`$ are also of interest; its cumulants can be obtained from the generating function $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\beta _j)]`$ with $`p`$ different temperatures $`T_j=1/\beta _j`$, as (for fixed $`N`$)
$$[c^p]_c\sim LW^{\frac{p-3}{2}}(k\mathrm{\Delta })^{\frac{p+1}{2}}g^{\frac{p-1}{2}}N^{\frac{5-p}{2}}T^{-\frac{3p+1}{2}}.$$
(9)
In the high temperature limit ($`T\gg T^{*}`$), the moments $`[c^p]_c\sim T^{-(3p+1)/2}`$ decay faster with temperature than in the low temperature limit ($`T\ll T^{*}`$), where $`[c^p]_c\sim T^{-p}`$.
We may also examine cross-correlations between the different susceptibilities, such as the heat capacity $`c`$ and the magnetic susceptibility $`\chi `$. In the thermodynamic limit, i.e. beyond a characteristic system size, different susceptibilities are expected to be statistically independent. For example, this has been demonstrated for the magnetic susceptibilities of two noninteracting FL lattices with different random potentials. The correlations between $`c`$ and $`\chi `$ can be obtained from our scaling theory by extracting the coefficients of $`n^{2p}\mu _1^2\beta _1^2\cdots \mu _p^2\beta _p^2`$ of $`\mathrm{ln}[\prod _{j=1}^{p}Z^n(\mu _j)\prod _{j=1}^{p}Z^n(\beta _j)]`$. Using the results of Eqs. (8) and (9), we get
$$\frac{[(\chi c)^p]_c}{[\chi ^p]_c[c^p]_c}\sim (WL)^{-1}\left(\frac{k\mathrm{\Delta }}{gT}\right)^2h^{-5/2}.$$
(10)
For finite systems, the divergence as $`h\to 0`$ is cut off at $`h\simeq k\mathrm{\Delta }/(gTW)`$. Then the result can be rewritten as $`[(\chi c)^p]_c/([\chi ^p]_c[c^p]_c)\sim L_c/L`$, with a characteristic length scale $`L_c=(gT/k\mathrm{\Delta })^{1/2}W^{3/2}`$ for the decay of correlations. This length scale has a simple physical interpretation: $`L_c`$ is the length of a single FL whose transverse wanderings can explore the whole sample width $`W`$. Since the transversal fluctuation of a line of length $`L`$ grows as $`\delta x(L)\sim (k\mathrm{\Delta }/gT)^{1/3}L^{2/3}`$, we get the above result for $`L_c`$ from the condition $`\delta x(L_c)\simeq W`$.
The full shape of the PDFs is determined by the relative cumulants $`R_{p,X}=[X^p]_c/[X]^p`$. For $`X=N`$, $`\chi `$, $`c`$ or $`\chi c`$, our results yield $`R_{p,X}\sim R_{2,X}^{p-1}`$ up to a numerical coefficient. Therefore, the system parameters enter the PDF shapes only through $`R_{2,X}`$, which is interestingly independent of the observable $`X`$, and given by
$$R_{2,X}\sim (WL)^{-1}\left(\frac{k\mathrm{\Delta }}{gT}\right)^2h^{-5/2}=\frac{\xi _{\perp }\xi _{\parallel }}{WL}.$$
(11)
The length scales in the final expression are $`\xi _{\perp }=a_0`$, the separation of FLs, and $`\xi _{\parallel }=(gT/k\mathrm{\Delta })^{1/2}a_0^{3/2}`$, the mean longitudinal distance between collisions of FLs. To obtain a size independent PDF, anisotropy requires looking at finite-size samples of width $`W\sim L^{\zeta }`$, with roughness exponent $`\zeta =2/3`$. The resulting PDF is, however, still non-universal, depending on $`R_{2,X}\sim (gT/k\mathrm{\Delta })^{1/2}`$.
The above predictions for the scaling of PDFs, or cumulants, can be tested experimentally by measurements on different realizations of randomness, drawn from the same distribution of impurities. Generating many such different realizations could in fact be quite easy, depending on the system under study. For example, in the case of the FL lattice of Ref. experiments can be performed on the same sample, with different realizations of randomness generated by simply rotating the sample with respect to the external magnetic field. Each (finite-size) realization of randomness yields a characteristic value for thermodynamic observables, providing a reproducible ‘fingerprint’ of the configuration. The statistics of these measurements is then described by the cumulants calculated here.
We thank D.R. Nelson for bringing the experimental system of Ref. to our attention. This work was supported by the Deutsche Forschungsgemeinschaft under grant EM70/1-1 (TE), and the National Science Foundation grant No. DMR-98-05833 (MK).
# Atomic Carbon in Galaxies
## 1 Introduction
Due to the poor transparency of the Earth’s atmosphere at submillimeter wavelengths, there are few published measurements of the ground state fine-structure lines of atomic carbon at 492 and 809 GHz in external galaxies, despite the high abundance of this species in cool interstellar gas and its importance for the overall thermal budget of molecular gas. The first detection was reported by Büttgenbach et al. (1992) in IC 342. A handful of galaxies have been detected since, including NGC 253 (Israel et al. 1995, Harrison et al. 1995), M 82 (Schilke et al. 1993, Stutzki et al. 1997), M 83 (Petitpas and Wilson 1998) and M 33 (Wilson 1997). Individual clouds have been observed in M 31 (Israel et al. 1998) and the LMC (Stark et al. 1997).
Atomic carbon can be found in all types of neutral clouds, from diffuse clouds (Jenkins & Shaya 1979) to dense molecular gas (Phillips & Huggins 1981). In diffuse clouds, atomic carbon is a minor constituent, but the intensity ratio of the two ground state fine-structure lines is a sensitive tracer of the total gas pressure. Atomic carbon has also proven to be a good tracer of molecular gas in Galactic molecular clouds, as a linear correlation is commonly found between the strength of the CI($`{}^{3}P_1\rightarrow {}^{3}P_0`$) line at 492 GHz and the <sup>13</sup>CO(2-1) line at 220 GHz (Keene et al. 1996). This correlation corresponds to a mean abundance of neutral carbon of $`10^{-5}`$ relative to H<sub>2</sub>. By comparing with extinction measurements, Frerking et al. (1989) found a similar value for the abundance of atomic carbon in dense molecular clouds, where it reaches a maximum of $`2.2\times 10^{-5}`$ for A<sub>V</sub> = 4 – 11 mag and does not deviate from this value by more than a factor of a few for larger A<sub>V</sub>. Emission from the two ground state fine-structure lines of atomic carbon is seen by COBE throughout the Milky Way and makes a significant contribution to the gas cooling (Bennett et al. 1994). Therefore, it is not surprising that these ubiquitous atomic carbon lines are present in the spectra of external galaxies.
From a theoretical point of view, the spatial distribution and line intensities of atomic carbon in molecular clouds are predicted by chemical models which include the effect of photodissociation induced by UV photons, the so-called PDR models (e.g. Tielens and Hollenbach 1985). In clouds exposed to UV radiation, carbon is mostly in the form of C<sup>+</sup> down to a depth A<sub>V</sub> = 1 magnitude. Atomic carbon appears at an intermediate depth, from A<sub>V</sub> = 1 to 5 or so magnitudes, where C<sup>+</sup> has mostly recombined with electrons and not all the gas phase carbon present is yet captured into CO. The actual depth of this zone, and thus the extent and intensity of the CI emission, is a sensitive function of some important model parameters, such as the carbon and oxygen abundances in the gas phase, and the presence of Polycyclic Aromatic Hydrocarbons (see Bakes and Tielens 1998, Le Bourlot et al. 1993). PDR models are difficult to use for quantitative results in galaxies, because they were developed to represent the structure of an individual cloud, whereas a galaxy would have to be synthesized from a suitable ensemble of PDR clouds (see e.g. Sauty et al. 1998). Nevertheless, their predictions provide a qualitative understanding of the variations of the line flux with physical conditions. Previous CII measurements of external galaxies have been compared successfully with such PDR models (e.g. Stacey et al. 1991).
Although CII, CI and CO emission, in principle, arise from different physical regions in the cloud, in actual observations of Galactic clouds it is found that CI and CO emission on average seem to come from the same physical region. This is in part due to the clumpy or fractal nature of the clouds, so that the UV causes emission from irradiated regions even deep into the cloud. The overall impression, at moderate (say 15″ at 500 pc = 0.04 pc) resolution, is that the species are coexistent. This scale must be compared to the size of Galactic clouds, which may vary from a few parsecs to a few tens of parsecs. Few large scale maps of Galactic clouds have been made for all these species, but for some giant molecular clouds (Plume et al. 1994, 1999) there is a good correspondence between the CI, CII and CO maps. For external galaxies, where individual clouds are barely resolved, we would then expect that the different carbon species will be observationally coexistent (when associated with molecular gas), including the ionized carbon. Therefore, at least from the point of view of beam dilution in the observations, the 3 species (CII, CI and CO) can be considered as occupying the same volume in the beam, so that observed line ratios can be compared with model predictions. The ionized interstellar medium, either diffuse or in HII regions, can contribute to the global CII emission of galaxies, but for starforming regions in galaxies, PDRs contribute most of the CII emission (Madden et al. 1993, Sauty et al. 1998). For M 82 the contribution of HII regions is estimated to be $`24\%-31\%`$ of the observed CII flux (Colbert et al. 1999).
The contribution of the cool diffuse regions to the atomic carbon emission will be incorporated in the PDR models. The warm diffuse gas is generally ionized and therefore barely contributes to the CI emission. Therefore, the two fine-structure lines of atomic carbon are expected to be better tracers of PDRs than is CII. The intensity ratio CI(1-0)/CO(1-0) is a function of the gas density and UV illumination factor G<sub>0</sub>, as shown in Figure 1, produced with the PDR model developed by Le Bourlot et al. (1993) and Abgrall et al. (1992). G<sub>0</sub> is defined relative to the average interstellar radiation field in the solar neighborhood as obtained by Mathis et al. (1983): G<sub>0</sub> = I<sub>UV</sub>(6-13.6 eV)/$`1.4\times 10^{-4}`$ erg cm<sup>-2</sup>s<sup>-1</sup>sr<sup>-1</sup>; n<sub>H</sub> represents the total density of hydrogen atoms.
Other ratios, CII/CI, CI/FIR and CII/FIR depend mostly on the UV illumination factor G<sub>0</sub> (see also Kaufman et al. 1999). Since atomic carbon is present in a layer of the PDR with almost the same characteristics (column density, temperature) whatever the illumination, the brightness of the CI(1-0) line is fairly constant for most of the parameter space studied. This means that the main factor affecting this line in external galaxies is the filling factor of the emission in the beam. By contrast, the FIR emission in a PDR is directly proportional to the UV illumination since nearly all the incident UV radiation is absorbed in the cloud. Finally, the CII and OI emission in PDRs, depend on both the UV illumination (due to the decrease of the efficiency of photoelectric heating at high illumination) and the gas density (for the collisional excitation of these lines). From the PDR models, it is expected that the CI/FIR ratio will be a useful tracer of the local UV illumination conditions.
In the following we present new observations of CI(1-0), as well as complementary data, <sup>13</sup>CO(2-1), <sup>12</sup>CO(2-1), (3-2) and (4-3), on nearby galaxies of various morphological types. These data are used to study the correlation of CI emission with other tracers of the ISM in galaxies.
## 2 Observations
The observations were performed with the Caltech Submillimeter Observatory (CSO) during 1996 - 1998, using SIS receivers operated in double-sideband (DSB) mode. In good atmospheric conditions ($`\tau _{225\mathrm{GHz}}\lesssim 0.06`$), the single sideband system temperature usually ranged between 1000 and 3000 K. The observations were performed using a chopping secondary mirror, with throw set to 1 to 3 arcminutes on the sky, depending on the size of the source. The spectra were analysed with two acousto-optic spectrometers, one with a total bandwidth of 1500 MHz (of which only 900 MHz is used, due to the bandwidth limit of the receivers) and a resolution of about 2 MHz, and a second with a total bandwidth of 500 MHz and a spectral resolution of about 1.5 MHz. The main beam efficiencies of the CSO were 0.72, 0.65 and 0.53 at 230, 345 and 492 GHz respectively; the corresponding conversion factors between Janskys and Kelvins ($`T_A^{}`$) are 50, 70 and 100 Jy/K, and the beam sizes 30″, 20″ and 15″. Some data have been reported previously (Gerin & Phillips 1997, 1998). The program galaxies are listed in Table 1.
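As a concrete illustration of the flux scale set by these numbers, the following minimal sketch (our own helper functions, not part of the original reduction) converts a CSO antenna temperature into a main-beam temperature and a flux density, using only the efficiencies and Jy/K factors quoted above:

```python
# Quoted CSO calibration: frequency (GHz) -> (eta_mb, Jy/K on T_A*, beam FWHM in arcsec)
CSO = {230: (0.72, 50.0, 30.0), 345: (0.65, 70.0, 20.0), 492: (0.53, 100.0, 15.0)}

def t_mb(t_a_star, freq_ghz):
    """Main-beam temperature (K) from the antenna temperature T_A* (K)."""
    eta_mb, _, _ = CSO[freq_ghz]
    return t_a_star / eta_mb

def flux_density(t_a_star, freq_ghz):
    """Flux density (Jy) from T_A* (K), using the quoted Jy/K conversion factors."""
    _, jy_per_k, _ = CSO[freq_ghz]
    return t_a_star * jy_per_k

# e.g. a 0.1 K CI(1-0) line at 492 GHz:
print(t_mb(0.1, 492), flux_density(0.1, 492))   # ~0.19 K and 10 Jy
```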
Most of the target galaxies have narrow lines which fit in a single backend spectrometer setting. For two galaxies with broad lines, Arp 220 and the center of NGC 3079, we used three different local oscillator settings to obtain a complete coverage of the lines, with central velocities at -300, 0 and +300 km s<sup>-1</sup> relative to the line center. First, we used the spectra taken at velocity offsets +300 and -300 km s<sup>-1</sup> to adjust the zero level of the baseline, with a window set by reference to published CO spectra. We then computed the baseline offset for the central spectrum, to minimize the platforming effects with the spectra taken at +300 and -300 km s<sup>-1</sup>. The overlap region is quite large, +70 to +200 km s<sup>-1</sup> at positive velocities for Arp 220 for example. The data and interpretation for Arp 220 have been reported elsewhere (Gerin & Phillips 1998).
## 3 Results
CI(1-0) was detected in all the program galaxies. Figure 2 presents a selection of CSO spectra. The measured parameters are listed in Table 2, together with complementary data found in the literature. Linear baselines have been fitted at the original spectral resolution ($`1.5`$ MHz = 0.8 km s<sup>-1</sup>). Depending on the expected line width, the spectra have then been rebinned to 5–15 km s<sup>-1</sup> resolution to increase the S/N ratio. For galaxies with broad lines and not too many channels beyond the line, we chose the line window according to published CO spectra. The uncertainty in the line flux due to the uncertainty in the position of the baseline is included in the total uncertainty listed in Table 2. Figure 3 shows the integrated intensity ratio CI(1-0)/CO(1-0) in T<sub>mb</sub> as a function of the CO(1-0) integrated intensity in K km s<sup>-1</sup>. This ratio has a mean value of $`0.2\pm 0.2`$, with a large dispersion, since the observed ratios range from $`\sim 0.04`$ to $`\sim 1`$. To get the emissivity ratio, the observed intensity ratio in T<sub>mb</sub> must be multiplied by 78, the cube of the ratio of the line frequencies. The mean emissivity ratio is therefore 16, with observed values ranging from 3 to 80. In this figure the CO(1-0) data are taken from the literature, as listed in Table 2. We also included already published CI data. We used mostly the CO(1-0) data taken with the IRAM 30m telescope in order to match the CI beam size at the CSO as closely as possible. The CO(1-0) data taken with other telescopes have been scaled to the 22” beam of the IRAM 30m telescope at 2.6 mm.
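The factor of 78 quoted above follows directly from the cube of the frequency ratio, which converts a Rayleigh-Jeans brightness-temperature ratio into an energy (emissivity) ratio; a two-line check (our own arithmetic, not from the paper):

```python
nu_ci, nu_co = 492.16, 115.27            # GHz: CI(1-0) and CO(1-0) rest frequencies
print((nu_ci / nu_co) ** 3)              # ~77.8, i.e. the quoted factor of 78
print(0.2 * (nu_ci / nu_co) ** 3)        # mean T_mb ratio 0.2 -> emissivity ratio ~16
```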
Despite the large scatter in the CI/CO ratio, there is no apparent segregation between the different galaxy types. Viewed at large scale (the 15” CSO beam represents a linear scale of 730 pc at a distance of 10 Mpc), there are no clear differences between normal spirals, merger galaxies and even low metallicity galaxies, except maybe two regions in M 33. However, there are local differences within galaxies, which we examine in the following subsection. Because <sup>13</sup>CO emission is low in active galaxies (e.g. Taniguchi and Ohyama 1998), we compare CI(1-0) with <sup>13</sup>CO(2-1) in section 3.2. We then discuss the respective contributions of atomic carbon and carbon monoxide to the gas cooling in galaxies, and finally we compare the CI(1-0) emission of nearby galaxies with the CII and FIR emission at the same angular resolution.
### 3.1 Disk galaxies
We have mapped the north-east half of the major axis of the edge-on galaxy NGC 891, made a cut through the disk of NGC 6946, and observed points in the nucleus and in the disk of several other galaxies. We investigate the difference between galaxy nuclei and the disks, and between active regions in spiral arms and the general interstellar medium.
As a point of reference, we examine in Figure 4 the distribution of CI and CO lines in the Milky Way Galaxy, as observed by FIRAS on board COBE (Bennett et al. 1994). It is clear that for the Milky Way, CI(1-0) decreases less rapidly than the CO lines with Galactic radius, moving out along the Galactic plane. The decrease is steeper for CO(4-3) than for CO(2-1). Also, neutral carbon is more excited in the nucleus, as seen from the higher CI(2-1)/(1-0) emissivity ratio: 1.3 in the nucleus versus 0.5 in the disk. The 7° beam of COBE corresponds to a linear size of 1 kpc at the Galactic Center, quite similar to the linear size of the 15” CSO beam at a distance of 10 Mpc (0.7 kpc). Also shown in Figure 4 is the dust continuum at 1mm from the same COBE/FIRAS data. It is clear that the continuum intensity does not drop as fast as the CO and CI lines outside the nucleus of the Milky Way.
Figures 5 and 6 present a comparison of the CI(1-0) intensity with CO(1-0) in NGC 891 and NGC 6946 respectively. An effect similar to that seen in the Milky Way appears in NGC 891 and NGC 6946, and also in NGC 3079: CI(1-0) is weaker, relative to CO(1-0) and (2-1), in the nucleus than in the disk. Toward NGC 891, we can compare CI(1-0) with both CO(1-0) and the dust continuum emission at 1.3mm from Guélin et al. (1993). Whereas the CI/dust continuum ratio stays constant throughout the disk, the CO/dust continuum ratio increases, and CI/CO decreases, in the nucleus. Nuclear gas is usually denser and warmer than the bulk of the interstellar gas in galactic disks, resulting in a higher excitation temperature for carbon monoxide in the nucleus. This difference in excitation has consequences for the use of <sup>12</sup>CO(1-0) as a tracer of molecular gas, since the mass of molecular gas deduced from <sup>12</sup>CO(1-0) data will be overestimated in galaxy nuclei. This has been shown for the Galactic Center by Sodroski et al. (1995).
For NGC 6946, the CI(1-0)/CO(2-1) ratio increases just outside the nucleus, in an interarm region, and decreases again on reaching a star forming region near the position (0,+140”). A similar difference between clouds close to HII regions and clouds more distant from star forming complexes is found in other galaxies as well: M 33 (Wilson 1997) and IC 10 (this work). CI(1-0) thus appears to be a tracer of the general interstellar medium, not strongly biased toward the densest and warmest places.
### 3.2 Comparison with <sup>13</sup>CO(2-1)
In the Milky Way, a remarkable linear correlation between CI(1-0) and <sup>13</sup>CO(2-1) intensities has been found in most of the clouds mapped (Keene et al. 1996). This correlation can be understood in the sense that both atomic carbon and <sup>13</sup>CO are good tracers of the molecular gas given the permeability of the clouds to the UV radiation: the two lines have similar (moderate) opacities and are close to LTE for most of the physical conditions encountered in local molecular clouds. However, compared to maps of individual molecular clouds in the Milky Way, larger linear scales are sampled in external galaxies, where the relation between CI and <sup>13</sup>CO has not yet been studied. It is well known that <sup>13</sup>CO lines are especially weak relative to <sup>12</sup>CO in some interacting and merging galaxies (Casoli et al. 1992, Aalto et al. 1995, Taniguchi and Ohyama 1998). We have therefore looked at possible variations of the CI/<sup>13</sup>CO(2-1) ratio as a function of the <sup>13</sup>CO(2-1) intensity. The data are displayed in Figure 7, with the same symbols as in Figure 3. We used <sup>13</sup>CO(2-1) data from the literature as listed previously, or took new CSO data when needed.
This plot shows a trend of decreasing CI/<sup>13</sup>CO(2-1) ratio with increasing <sup>13</sup>CO intensity, with a few exceptions: M 33, the nucleus of Centaurus A, and NGC 253. The trend is particularly clear for the disk of NGC 891 (black triangles in Fig. 7). Such a trend has been observed towards well known PDRs (Tauber et al. 1995, White & Sandell 1995, Minchin & White 1995). It can be understood as revealing a smooth variation of the abundance ratio N(C)/N(CO) with the total gas column density. Indeed, chemical models predict a decrease of the N(C)/N(CO) column density ratio with increasing H<sub>2</sub> column density in the range $`0.1`$–$`5\times 10^{21}`$ cm<sup>-2</sup>, valid for translucent clouds (Stark et al. 1996). If translucent gas represents a fair fraction of the molecular gas in external galaxies, atomic carbon lines will trace preferentially these physical conditions. As mentioned in the Introduction, from our knowledge of interstellar clouds in the Milky Way, it is expected that a significant fraction of the molecular gas is exposed to UV radiation. This seems to be the case in external galaxies as well, since the CI(1-0) line at 492 GHz shows up at a detectable level in all types of galaxies, and does not show any peculiar behavior with the morphological type. Furthermore, in NGC 891, CI emission closely follows the dust continuum emission, which is also believed to be a reliable gas tracer in spiral galaxies. We conclude that CI(1-0) can be used as a tracer of low to moderate density molecular gas in external galaxies. It is possible that CI avoids the regions with high density, which represent a small fraction of the total gas mass of galaxies.
## 4 Gas cooling in galaxies
### 4.1 C and CO cooling
We have estimated the total cooling due to the observed lines of atomic carbon, CI(1-0), and carbon monoxide, CO(1-0) - (2-1) - (3-2) and in some cases (4-3). The data are reported in Table 3. It turns out that a significant fraction of the cooling is due to the CO(3-2) and (4-3) lines, so we report only a lower limit for CO cooling in galaxies with no CO(3-2) or (4-3) data. In order to estimate the total cooling due to all carbon and CO lines, we need to correct for the missing lines: CI(2-1) at 809 GHz, and CO(5-4), (6-5),… As templates, we used well studied cases, either the COBE data towards the Galactic Center or data on M 82 and IC 342 (Güsten et al. 1993). The correction for missing lines amounts, on average, to approximately a factor 2 for carbon, and for CO a factor 4 when no CO(4-3) and higher J data are available, or a factor 2 when no CO(6-5) and higher J data are available.
Israel et al. (1995) conclude that the contributions of CO and atomic carbon to the total cooling are similar for NGC 253. We show that this conclusion holds for all the galaxies in this sample. The contributions of CO and atomic carbon represent a small fraction of the total gas cooling, which is still dominated by CII, and possibly OI. Typically, the gas cooling due to atomic carbon or CO amounts to 10<sup>-5</sup> of the FIR dust continuum, while the gas cooling due to CII and OI amounts to 10<sup>-4</sup> – 10<sup>-2</sup> of the FIR dust continuum.
### 4.2 Comparison with CII and FIR
We show in Figure 8 the line to continuum ratio CII/FIR versus CI(1-0)/FIR, and the line ratio CII/CI(1-0) versus CI(1-0)/FIR. The lines drawn in Figure 8 show PDR model predictions (using the code described in Le Bourlot et al. 1993) for hydrogen densities from 10<sup>2</sup> cm<sup>-3</sup> to 10<sup>5</sup> cm<sup>-3</sup> and G<sub>0</sub> from 1 to 10<sup>4</sup> times the average radiation field in the solar neighborhood. The far-infrared emission (FIR) is calculated from the measured fluxes at 60 and 100 $`\mu m`$, available from observations with the Kuiper Airborne Observatory (Madden et al. 1997, 1998, Stacey et al. 1991, Smith and Harvey 1996) or with the Infrared Space Observatory (ISO) (Luhman et al. 1998), with the formula $`FIR=1.26\times 10^{-14}(2.58F_{60}+F_{100})\mathrm{W}\mathrm{m}^{-2}`$, where $`F_{60}`$ and $`F_{100}`$ are the fluxes at 60 and 100 $`\mu m`$ in Janskys. Since the CII and FIR data have been taken at a lower spatial resolution than the CI data, we have smoothed or extrapolated the CI(1-0) data to a 55” beam, the spatial resolution of the CII and FIR data. For nearby spiral galaxies (NGC 891, NGC 6946), the data correspond to the central 55”. The CII/CI(1-0) ratio shows a remarkable decreasing trend with increasing CI(1-0)/FIR ratio, except for the two ultraluminous infrared galaxies, Arp 220 and Mrk 231. The line to continuum ratio CII/FIR is also well correlated with CI(1-0)/FIR (Fig. 8) for all the studied galaxies, with the exceptions of Arp 220 and Mrk 231, which again lie significantly lower than the PDR model predictions.
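For reference, the FIR estimator above is straightforward to apply; a minimal sketch (our own helper, with made-up example fluxes purely for illustration):

```python
def fir_flux(f60_jy, f100_jy):
    """FIR flux in W m^-2 from the 60 and 100 micron fluxes in Jy,
    using the combination quoted in the text."""
    return 1.26e-14 * (2.58 * f60_jy + f100_jy)

print(fir_flux(100.0, 200.0))   # ~5.8e-12 W m^-2 for F60 = 100 Jy, F100 = 200 Jy
```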
These two plots can be understood as showing variations of the mean UV radiation field in the galaxy sample: for moderate UV illumination (G<sub>0</sub> = 10 - 100), the gas heating due to the photoelectric effect on grains reaches maximum efficiency (Kaufman et al. 1999), so that the line to continuum ratios CI(1-0)/FIR and CII/FIR reach their maximum values. When the UV radiation field increases, the gas heating is less efficient, while the dust heating stays at the same level per UV photon. In that case, the far infrared radiation scales linearly with G<sub>0</sub>, while the CII emission scales only logarithmically, and the CI(1-0) emission stays nearly at the same level. The ratios CI(1-0)/FIR and CII/FIR are thus expected to decrease as G<sub>0</sub> increases, and can be used to get estimates of the intensity of the radiation field in starburst galaxies. Because CI(1-0) is less sensitive than CII to G<sub>0</sub> and the gas density $`n_H`$, a variation of the line ratio CII/CI(1-0) is predicted by PDR models (cf Figure 1), which explains the trends seen in Figure 8.
Most of the studied galaxies lie in the parameter space G<sub>0</sub> = 10<sup>2</sup> to 10<sup>3</sup>, n = 10<sup>2</sup> to 10<sup>5</sup>, as expected for molecular clouds in central regions of nearby galaxies. In M 82, a nearby starburst galaxy, the radiation field is more intense and reaches $`G_0\sim 10^3`$, in good agreement with the estimate given by Kaufman et al. (1999). The two merger galaxies, Arp 220 and Mrk 231, lie at the bottom left of the CII/FIR versus CI(1-0)/FIR plot, where both the radiation field and the gas density are high. For these extremely bright sources, the emission of individual PDR clouds cannot be simply added as in nearby spiral galaxies, and the opacity of the CII and OI lines (and possibly also the opacity of the dust thermal emission) must be taken into account, as discussed in Gerin and Phillips (1998) and Luhman et al. (1998).
Because most of the PDR CII and CI(1-0) emission arises from the surface of molecular clouds, the filling factors of the CII, CI and FIR emission are expected to be similar for starburst galaxies. Furthermore, since CI(1-0) is less likely than CII to be contaminated by other emission sources (HII regions, WIM etc.), the CI(1-0)/FIR ratio is a better measure of G<sub>0</sub> than the CII/FIR ratio. For star forming galaxies, where the dust heating is dominated by the radiation of massive stars, the line to continuum ratio CI(1-0)/FIR can thus be used to measure the strength of the UV radiation field. This conclusion is valid for spiral galaxies with the same metallicity as the Milky Way. In low metallicity galaxies, the line and continuum emission do not scale the same way with metallicity and specific models must be used (Lequeux et al. 1994, Pak et al. 1998).
## 5 Conclusions
We have shown that the ground state fine-structure line of neutral carbon can be detected in a variety of external galaxies with current instrumentation. CI(1-0) data can be used in addition to other measurements to give information on the gas column density, the thermal balance and the UV illumination conditions in external galaxies. The results will be useful as templates for distant galaxies, when detectable with advanced instrumentation.
In nearby galaxies, atomic carbon is weaker relative to CO in the nucleus, but more widely distributed in the disk. This has important consequences for the thermal balance of the molecular interstellar medium. As a whole, the contributions of C and CO to the gas cooling are of the same order of magnitude: $`\sim 2\times 10^{-5}`$ of the FIR continuum, or 5% of the CII line. C and CO cooling becomes significant, reaching $`\sim 30\%`$ of the gas total, in merger galaxies like Arp 220 where CII is abnormally faint. This conclusion rests on the current data, where only the ground state line of atomic carbon has been observed. Data on the second fine-structure line of atomic carbon at 809 GHz and on other CO lines are needed to obtain a better measurement of the gas cooling in galactic disks and nuclei.
We are grateful to F. Boulanger and G. Lagache for helping us use the COBE/FIRAS data. We thank J. Le Bourlot, G. Pineau des Forêts and E. Roueff for the use of their PDR model, and M. Guélin for sending us the continuum map of NGC 891. The CSO is funded by NSF contract AST96-15025. M. Gerin acknowledges travel grants from INSU/CNRS and NATO.
# First Results from the Large Area Lyman Alpha Survey
## 1 Introduction
More than three decades ago Partridge and Peebles (1967) predicted that galaxies in their first throes of star-formation should be strong emitters in the Lyman-$`\alpha `$ line. Their predictions were optimistic, based on converting roughly 2% of gas into stars in $`3\times 10^7`$ years in Milky Way sized galaxies, which translates into a luminosity of $`6\times 10^{44}\text{erg}\text{s}^1`$. These objects are also expected to be common - if all the $`L^{}`$ galaxies have undergone a phase of rapid star-formation one should see a surface density of about $`3\times 10^3\times (\mathrm{\Delta }t/(3\times 10^7\text{yr}))\text{deg}^2`$ (Pritchet 1994). Searches based on these expectations did not detect Lyman-$`\alpha `$ emitters (LAEs). (See review by Pritchet 1994; Koo & Kron 1980; Pritchet & Hartwick 1987, 1990; Cowie 1988; Rhea et al 1989; Smith et al 1989; Lowenthal et al 1990; Wolfe et al 1992; De Propris et al 1993; Macchetto et al 1993; Møller & Warren 1993; Djorgovski & Thompson 1992; Djorgovski, Thompson, & Smith 1993; Thompson, Djorgovski, & Trauger 1992; Thompson et al 1993; Thompson, Djorgovski, & Beckwith 1994; Thommes et al 1998.)
Only recently have Lyman-$`\alpha `$ emitters been observed, albeit at luminosity levels roughly a hundred times lower than the original prediction. These Lyman-$`\alpha `$ emitters have been found from both deep narrow band imaging surveys (Cowie & Hu 1998; Hu, Cowie & McMahon 1998; Hu, McMahon, & Cowie 1999; Kudritzki et al 2000), and from deep spectroscopic surveys (Dey et al 1998; Manning et al 2000; but see Stern et al 2000). Weak Lyman-$`\alpha `$ emitters have also been found through targeted spectroscopy of Lyman break objects (e.g., Steidel et al 1996, Lowenthal et al 1997). The lower luminosity in the Lyman-$`\alpha `$ line may be because of attenuation by dust if chemical enrichment is prompt; or because the star-forming phase is more protracted; or because the star-formation happens in smaller units which later merge. The first two scenarios will give a smaller equivalent width than early predictions, while the last scenario results in low luminosities but high equivalent width.
Dust effects are expected to be severe— even a small amount of dust can greatly attenuate this line, because it is resonantly scattered. However, two factors can help the Lyman-$`\alpha `$ photons escape. If Lyman-$`\alpha `$ photons are produced in diffuse regions of a clumpy interstellar medium, they can simply scatter off the dense clumps and escape (Neufeld 1991), and some geometries can even lead to an increase in the equivalent width of the line. Secondly, energetic winds are seen in low-$`z`$ Lyman-$`\alpha `$ emitters (Kunth et al 1998). These can displace the neutral gas and Doppler-shift the peak wavelength of the resonant scattering, thereby reducing the amount of scattering and the path length for interaction with dust.
Detailed predictions for luminosities and surface densities of LAEs using a Press-Schechter formalism and exploring a range of dust obscuration and star-formation time scales have been explored by Haiman and Spaans (1999), who are able to reproduce the surface densities of LAEs reported by Hu et al (1998) with a wide range of models. In order to narrow down the range of possibilities and characterize the high redshift Lyman-$`\alpha `$ population, better statistics over a wide range of flux and source density are needed.
## 2 Narrow Band Imaging Survey
An efficient search for Lyman-$`\alpha `$ emitters (and other emission line galaxies) was started in 1998 using the CCD Mosaic Camera at the Kitt Peak National Observatory’s 4m Mayall telescope. The Mosaic camera has eight $`2048\times 4096`$ chips in a $`4\times 2`$ array comprising a $`36^{\prime}\times 36^{\prime}`$ field of view. The final area covered by the LALA survey is $`0.72`$ square-degrees in two MOSAIC fields centered at 14:25:57 +35:32 (2000.0) and 02:05:20 -04:55 (2000.0). Five overlapping narrow band filters of width FWHM $`80`$ Å are used. The central wavelengths are $`6559`$, $`6611`$, $`6650`$, $`6692`$, and $`6730`$ Å, giving a total redshift coverage $`4.37<z<4.57`$. This translates into a surveyed comoving volume of $`8.5\times 10^5`$ comoving $`\text{Mpc}^3`$ per field for $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }=0.2`$, $`\mathrm{\Lambda }=0`$. About 70% of the imaging at $`z\sim 4.5`$ is complete, and an extension of the survey to $`z>5`$ is planned. In about 6 hours per filter per field we are able to achieve line detections of about $`2\times 10^{-17}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. The survey sensitivity varies with seeing. Broadband images of these fields in a custom $`B_w`$ filter ($`\lambda _0=4135`$ Å, $`\text{FWHM}=1278`$ Å) and the Johnson-Cousins $`R`$, $`I`$, and $`K`$ bands are being taken as part of the NOAO Deep Wide-Field Survey (Jannuzi & Dey 1999).
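The quoted volume can be checked by integrating the comoving volume element for the stated cosmology. The sketch below is our own arithmetic, not part of the survey pipeline; it assumes the full $`36^{\prime}\times 36^{\prime}`$ field with sharp filter edges at $`z=4.37`$ and $`4.57`$, and lands within roughly 10% of the quoted $`8.5\times 10^5`$ Mpc<sup>3</sup> (the residual difference plausibly reflects the exact filter profiles and usable field area).

```python
import math

c, H0 = 2.998e5, 70.0            # km/s, km/s/Mpc
om, ok = 0.2, 0.8                # Omega_matter, Omega_curvature (Lambda = 0)
dh = c / H0                      # Hubble distance, Mpc

def E(z):                        # dimensionless expansion rate
    return math.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2)

def d_comoving(z, n=20000):      # line-of-sight comoving distance, midpoint rule
    dz = z / n
    return dh * sum(dz / E((i + 0.5) * dz) for i in range(n))

def d_transverse(z):             # open-universe transverse comoving distance
    return dh / math.sqrt(ok) * math.sinh(math.sqrt(ok) * d_comoving(z) / dh)

z1, z2 = 4.37, 4.57
zm = 0.5 * (z1 + z2)
omega = (36.0 / 60.0 * math.pi / 180.0) ** 2       # 36' x 36' field, in steradians
volume = d_transverse(zm) ** 2 * dh / E(zm) * (z2 - z1) * omega
print(volume)                    # ~9e5 Mpc^3, close to the quoted 8.5e5
```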
The images were reduced using the MSCRED package (Valdes & Tody 1998; Valdes 1998) in the IRAF environment (Tody 1986, 1993), together with assorted custom IRAF scripts. In addition to standard CCD data reduction steps (overscan subtraction, bias frame subtraction, and flatfielding), it was necessary to remove crosstalk between pairs of CCDs sharing readout electronics and to remove a ghost image of the telescope pupil. Astrometry of USNO-A catalog stars was used to interpolate all chips and exposures onto a single coordinate grid prior to combining final images. Cosmic ray rejection is of particular importance in a narrowband search for emission line objects. We therefore identified cosmic rays in individual images using a digital filtering method (Rhoads 2000) and additionally applied a sigma clipping algorithm at the image stacking stage. Weights for each exposure were determined using the method of Fischer & Kochanski (1994), which accounts for sky level, transparency, and seeing to optimize the signal to noise level for compact sources in the final image.
Catalogs were generated using the SExtractor package (Bertin & Arnouts 1996). Fluxes were measured in $`2.32^{\prime \prime }`$ (9 pixel) diameter apertures, and colors were obtained using matched $`2.32^{\prime \prime }`$ apertures in registered images that had been convolved to yield matched point spread functions.
## 3 Spectroscopic Observations
Spectroscopic followup of a cross-section of emission line candidates was obtained with the LRIS instrument (Oke et al 1995) at the Keck 10m telescope on 1999 June 13 (UT). Two dithered exposures of $`1800`$ seconds each were obtained through a single multislit mask in good weather.
These data were reduced using a combination of standard IRAF ONEDSPEC and TWODSPEC tasks together with the homegrown slitmask reduction IRAF task “BOGUS” (Stern, Bunker, & Stanford, private communication) for reducing LRIS data.
## 4 The emission line source population
Our imaging data yield numbers of sources as a function of their fluxes in several filters, from which we can construct number densities as a function of magnitudes, colors, and equivalent widths. In order to gracefully handle sources that are not detected in all filters, we have chosen to use “asinh magnitudes” (Lupton, Gunn, & Szalay 1999), which are a logarithmic function of flux for well detected sources but approach a linear function of flux for weak or undetected sources. Figure 1 shows the color-magnitude diagram for the $`6559`$Å ($`\pm 40`$Å) and $`R`$ filters. The color scatter achieved for bright sources ($`R<22`$) is $`0.10`$ magnitudes (semi-interquartile range). This includes the true scatter in object colors, and is therefore a firm upper limit on the scatter introduced by any residual systematic error sources, which we expect to be a few percent at worst.
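For reference, a minimal sketch of the asinh magnitude of Lupton, Gunn & Szalay (1999), as we understand it (our own implementation; the softening parameter $`b`$ is conventionally chosen to be of order the $`1\sigma `$ flux uncertainty):

```python
import math

def asinh_mag(flux, flux0, b):
    """Asinh magnitude: flux0 is the flux of a zeroth-magnitude source and
    b is the dimensionless softening parameter (in units of flux0)."""
    x = flux / flux0
    return -2.5 / math.log(10.0) * (math.asinh(x / (2.0 * b)) + math.log(b))

# For flux >> b*flux0 this tends to the usual magnitude -2.5*log10(flux/flux0):
print(asinh_mag(1e-8, 1.0, 1e-10), -2.5 * math.log10(1e-8))  # both ~20.0
# Unlike an ordinary magnitude it stays finite for zero (or negative) flux:
print(asinh_mag(0.0, 1.0, 1e-10))                             # ~25.0
```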
To sharpen our focus on the high redshift Lyman-$`\alpha `$ population, we identify the range of parameter space occupied by known $`z>3`$ Lyman-$`\alpha `$ emitters. These sources have typical observed equivalent widths $`\sim 80`$ Å and line+continuum fluxes $`\sim 5\times 10^{-17}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (adjusted to $`z=4.5`$ and measured in an $`80`$ Å filter).
We further want to restrict attention to sources with sufficiently reliable detections that the number of false emission line candidates is a small fraction of the total candidate sample. The data set includes many continuum sources ($`1.4\times 10^4`$ in the flux range above). It is expected to contain a few hundred emission line sources, based on earlier source counts from smaller samples (Hu et al 1998) and on our own findings (below). The fraction of continuum sources with measured equivalent widths above some threshold $`\text{EW}_0`$ can be calculated as a function of signal to noise level $`n`$ and filter width $`\mathrm{\Delta }\lambda `$: The measured equivalent width for a source with no line emission or absorption in the narrowband filter will be $`0\pm \mathrm{\Delta }\lambda /n`$. Thus, a false positive becomes an $`m\sigma `$ event, where $`m=n\times \text{EW}_0/\mathrm{\Delta }\lambda `$. Our sample would be reasonably safe from contamination with $`m=3`$, which would correspond to about 20 false positives in a sample of 14000 sources; and very safe for $`m\ge 4`$, corresponding to $`<1`$ false positive in 14000 sources. We have conservatively chosen detection thresholds $`n=5`$ and $`\text{EW}_0=\mathrm{\Delta }\lambda =80`$ Å, which gives $`m=5`$. This keeps the number of false positives small even when the foregoing analysis is expanded to include errors in the continuum flux (which increase photometric error in the color by a factor $`\sqrt{2}`$) and a realistic distribution of equivalent widths for the low-$`z`$ galaxy populations. As a further check on our sample, we use the photometric color error estimates from SExtractor to demand that the source be an emission line source at the $`4\sigma `$ level.
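The quoted contamination numbers follow from the Gaussian tail probability; a quick check (our own arithmetic):

```python
import math

def gauss_tail(m):                       # one-sided Gaussian tail probability
    return 0.5 * math.erfc(m / math.sqrt(2.0))

n_continuum = 1.4e4
for m in (3, 4, 5):
    print(m, n_continuum * gauss_tail(m))
# m = 3 -> ~19 expected false positives, m = 4 -> ~0.4, m = 5 -> ~0.004,
# matching the "about 20" and "< 1 in 14000" statements above.
```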
Combining all of these requirements, the final criteria for good candidates in our survey become $`\text{EW}>80`$ Å, $`\delta (\text{EW})/\text{EW}\le 0.25`$, and $`2.6<f_{17}<5.2`$, where $`f_{17}\equiv f/(10^{-17}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1})`$. There are $`225`$ such sources detected in the $`6559`$ Å filter in a solid angle of $`0.31`$ square degree and redshift range $`\delta z=0.07`$. This corresponds to $`\sim 11000`$ good candidate Lyman-$`\alpha `$ emitters per square degree per unit redshift. The precise upper flux cutoff used is somewhat arbitrary, but including sources with larger fluxes in our list of good candidates has little effect on the total source counts.
The final $`6730`$ Å image had a broader PSF than the $`6559`$ Å image, and consequently a reduced effective sensitivity. In addition, it samples a slightly higher redshift, resulting in an $`0.08`$ magnitude increase in distance modulus ($`q_0=0.1`$, $`\mathrm{\Lambda }=0`$). Accounting for both effects, the matched flux range is $`3.45<f_{17}<5.2`$ at $`6559`$ Å and $`3.2<f_{17}<4.8`$ at $`6730`$ Å. The corresponding counts are $`70`$ candidates at $`6730`$ Å, and 104 candidates at $`6559`$ Å. Thus, the source density in directly comparable luminosity bins varies by a factor of $`1.5`$, a difference that is significant at about the $`3\sigma `$ level. We interpret this as a likely signature of large scale structure in the Lyman-$`\alpha `$ emitter distribution at $`z\sim 4.5`$. The comoving distance between the centers of the two redshift slices is $`72`$ Mpc, while the thickness of each slice is $`36`$ Mpc and the transverse comoving size $`89`$ Mpc (again assuming $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }=0.2`$, $`\mathrm{\Lambda }=0`$). Comparable variations have been observed in the density of Lyman break galaxies at $`z\sim 3.1`$ (Steidel et al 1998).
## 5 Spectroscopic results
Our spectroscopic followup sampled a wide range of flux and equivalent width in order to characterize the different populations of sources with extreme narrowband colors, and to thereby tune candidate selection criteria for future spectroscopic observations. Those sources with $`5\sigma `$ narrowband detections and photometrically measured $`\text{EW}>65`$ Å were all confirmed as emission line objects at the narrowband wavelength. In total, we detected two \[O III\] $`\lambda `$5007 emitters at $`z=0.34`$, three \[O II\] $`\lambda `$3727 emitters at $`z=0.77`$ and $`z=0.81`$, one confirmed Lyman-$`\alpha `$ ($`1215`$ Å) emitter at $`z=4.516`$, and one source that could either be Lyman-$`\alpha `$ at $`z=4.55`$ or \[O II\] $`\lambda `$3727 at $`z=0.81`$. We also found a $`z=2.57`$ galaxy with emission lines of C IV $`\lambda `$1549, He II $`\lambda `$1640, and O III $`\lambda `$1663. This source may have been included in the narrowband candidate list (with $`\text{EW}=36`$ Å) because of a weak spectral break around $`6560`$ Å. Finally, we found a few serendipitous emission line sources in the slit spectra. One of these is a single-line source with large equivalent width, possibly Lyman-$`\alpha `$ at $`z=3.99`$. Spectra of two interesting sources are shown in figure 2.
## 6 Discussion: The Lyman-$`\alpha `$ source population
By combining our imaging survey with these spectroscopic results, we can estimate the source density of Lyman-$`\alpha `$ emitters passing our selection cut. Our spectra included three sources fulfilling the criteria given above for good candidates. Of these, one was confirmed as a $`z=4.52`$ Lyman-$`\alpha `$ source. A second remains a candidate $`z=4.55`$ source, but is more conservatively interpreted as a $`z=0.81`$ \[O II\] emitter on the basis of a rather strong continuum on the blue side of the line. The third is a clear $`z=0.34`$ \[O III\] emitter. We therefore estimate that roughly $`1/3`$ to $`1/2`$ of the good candidates will be confirmed as Lyman-$`\alpha `$ sources, yielding $`\sim 4000`$ emitters per square degree per unit redshift. This is compatible with earlier measurements from smaller volumes (Hu et al 1998) after accounting for differences in flux threshold.
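The arithmetic behind these surface densities (our own check of the numbers quoted above):

```python
n_cand, area_deg2, dz = 225, 0.31, 0.07
cand_density = n_cand / (area_deg2 * dz)       # good candidates per deg^2 per unit z
print(cand_density)                            # ~1.0e4, the "~11000" quoted earlier
print(cand_density / 3.0, cand_density / 2.0)  # ~3500-5200, i.e. the "~4000" confirmed
```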
Our measurement is distinct from previous efforts in the field for its basis in a large number of candidate emitters. Poisson errors in our source counts are of order $`\pm 7\%`$. This is smaller than the variations observed in the comparison of two filters (of order $`\pm 40\%`$). By combining observations in multiple fields, we will be able to average over local fluctuations in number densities effectively. When completed, the LALA survey will yield a comoving volume of $`2\times 10^6\text{Mpc}^3`$ (§2) and a sample of several hundred LAEs, and will allow the luminosity function, equivalent width distribution, and correlation function of this population to be determined for the first time.
We thank Andy Bunker and Steve Dawson for help with the spectroscopic observations and Frank Valdes for writing and helping with the MSCRED package in IRAF. JER’s research is supported by a Kitt Peak Postdoctoral Fellowship and by an STScI Institute Fellowship. SM’s research is supported by NASA through Hubble Fellowship grant # HF-01111.01-98A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# Hydrogen Bonds in Polymer Folding
## Abstract
The thermodynamics of a homopolymeric chain with both Van der Waals and highly–directional hydrogen bond interactions is studied. The effect of hydrogen bonds is to reduce dramatically the entropy of low–lying states and to give rise to long–range order and to conformations displaying secondary structures. For compact polymers a transition is found between helix–rich states and low–entropy sheet–dominated states. The consequences of this transition for protein folding and, in particular, for the problem of prions are discussed.
PACS: 05.70.Jk,82.20.Db,87.15.By,87.10.+e
Secondary structures are a prime feature of all structured polymers and proteins. These structures are stabilized by hydrogen bonding. For example, $`\alpha `$–helices of proteins are known to be stabilized by hydrogen bonds which involve pairs of donor and acceptor atoms belonging to consecutive turns of the helix. Similarly, hydrogen bonds are responsible for stabilizing the helix conformation in the helix–coil transition in amino acid homopolymers. Recently, there has been a wide development of simplified lattice models for protein folding, where each monomer interacts with its neighbours through an isotropic Van der Waals interaction. These studies, in general, show no evidence of secondary structure formation for chains of realistic length, and mostly deal with statistical features of good folders.
In this paper we discuss the effect of including directed hydrogen bonds in simplified polymer models. As a starting point we adopt a lattice implementation widely used in the literature. This model is defined by a string of monomers placed on consecutive positions of a 3-d cubic lattice. The energy of a configuration $`\{𝐫_i\}`$ is defined by the Hamiltonian
$$H_{VW}=-\underset{i<j}{\sum }ϵ_{ij}\delta (|𝐫_i-𝐫_j|-1)$$
(1)
where the sum runs over all monomer pairs, the $`\delta `$ function ensures contributions from nearest neighbours only, and $`ϵ_{ij}`$ contains the strength of the Van der Waals interactions. In the case of homopolymers, all $`ϵ_{ij}`$ are equal, while for proteins different choices can be adopted.
In addition to $`H_{VW}`$, we introduce in the Hamiltonian a term associated with a directed interaction between pairs of monomers. To each monomer $`i`$ we assign a spin $`𝐬_i`$ representing a hydrogen donor–acceptor pair. This can easily be pictured as a spin because of the opposite directions of the $`O`$ (H acceptor) and the $`N`$ (H donor) atoms on the peptide backbone. The spin is constrained to be perpendicular to the backbone. To study the secondary structure of proteins, donors and acceptors coming from the amino acid sidechains should be taken into account as well. In this simple model we consider the minimal scenario of a homopolymer with only one acceptor and one donor for each monomer. The hydrogen bond part of the Hamiltonian reads
$$H_H=-ϵ_H\underset{ij}{\sum }\delta (𝐬_i𝐬_j-1)\delta (|𝐫_i-𝐫_j|-1),$$
(2)
where only $`𝐬_i`$ that are perpendicular to the backbone are allowed, and thus interactions along the backbone are automatically ignored.
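To make the model concrete, the following minimal sketch (our own illustration, not code from the paper) evaluates the total energy $`H_{VW}+H_H`$ for a self-avoiding chain on the cubic lattice. It assumes the attractive sign convention reconstructed above, represents each spin as a unit lattice vector (taken to be perpendicular to the local backbone direction), and skips pairs adjacent along the backbone, whose contacts would only add a constant.

```python
def energy(r, s, eps_v=1.0, eps_h=2.0):
    """Energy H(eps_v, eps_h) of a lattice chain.
    r: list of integer (x, y, z) monomer positions (self-avoiding walk);
    s: list of unit lattice vectors, one spin per monomer."""
    e = 0.0
    n = len(r)
    for i in range(n):
        for j in range(i + 2, n):                  # skip backbone-adjacent pairs
            dist = sum(abs(a - b) for a, b in zip(r[i], r[j]))
            if dist == 1:                          # nearest-neighbour contact
                e -= eps_v                         # Van der Waals term, Eq. (1)
                if s[i] == s[j]:                   # s_i . s_j = 1, i.e. aligned spins
                    e -= eps_h                     # hydrogen-bond term, Eq. (2)
    return e

# A 4-monomer "U" with one non-bonded contact and aligned spins:
r = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
s = [(0, 0, 1)] * 4
print(energy(r, s))   # -(eps_v + eps_h) = -3.0
```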
Setting $`ϵ_{ij}=ϵ_V`$ for all $`i`$ and $`j`$, the Hamiltonian $`H(ϵ_V,ϵ_H)=H_{VW}+H_H`$ specifies the energy of any homopolymer configuration including hydrogen bonds. To study the equilibrium properties we sample configurations of the chain using the simulated tempering technique suggested by Marinari and Parisi and developed in the context of proteins by Irbäck and Potthast. This method consists in examining an ensemble of different temperatures, by sampling a generalized partition function that includes the temperature as a dynamical variable. By adjusting the sampling rate associated with each temperature, one avoids trapping in local minima. The sampling rates are adiabatically adjusted according to the multiple histogram equations and, to avoid overestimation of metastable states, we test for thermodynamic compatibility.
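A minimal sketch of the simulated-tempering temperature move (our own illustration; the per-temperature weights $`g_k`$ play the role of the adjustable sampling rates mentioned above, tuned, e.g. via the multiple histogram equations, so that all temperatures are visited roughly uniformly):

```python
import math, random

def tempering_step(k, E, beta, g):
    """Propose moving the temperature index k -> k +/- 1 at fixed configuration
    energy E; accept with probability min(1, exp(-(beta'-beta)*E + g'-g))."""
    kp = k + random.choice((-1, 1))
    if kp < 0 or kp >= len(beta):
        return k                                   # reflect at the ends
    log_acc = -(beta[kp] - beta[k]) * E + (g[kp] - g[k])
    return kp if math.log(random.random()) < log_acc else k
```

Between such moves, ordinary Metropolis updates of the chain are performed at the current inverse temperature $`\beta _k`$; choosing $`g_k`$ close to $`\beta _kF_k`$, with $`F_k`$ the free energy at $`\beta _k`$, makes the random walk in temperature approximately unbiased.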
Figure 1 shows examples of structures obtained at low temperatures using $`H(0,2)`$ and $`H(1,2)`$, respectively. In the first case, one can observe helix–like structures that, however, differ from true $`\alpha `$ helices by the change in chirality along the helix. This is due to lattice constraints. We denote these structures pseudo–helices (“p–helices”), and quantify them for a given polymer conformation by counting the number of bonds involved in them.
For example, in Fig. 1(a) the monomers between 2 and 7 initiate a pseudo helix, where neighbours 2–5 and neighbours 4–7 contribute. The p–helix continues until monomer 22, which breaks it because this monomer is not a neighbour of any member of the p–helix. A new pseudo–helix is initiated at monomer 25 and lasts throughout the chain. A helix is similarly defined as a structure built as that of Fig. 1(a), but with constant chirality (as would have been the case if the positions of monomers 7 and 9 were interchanged in Fig. 1(a)).
Fig. 1(b) displays the ground state of a polymer where Van der Waals interactions are included. In this case one can observe structures resembling $`\beta `$–sheets. The sheets can be either parallel or antiparallel, as in natural proteins, in both cases quantified by identifying at least three pairs of consecutive neighbours in a line (that is, for a parallel sheet $`\{(i,j),(i+1,j+1),(i+2,j+2)\}`$ and for an antiparallel sheet $`\{(i,j),(i+1,j-1),(i+2,j-2)\}`$). In Fig. 1(b), for example, monomer pairs $`(1,10)`$, $`(2,9)`$, $`(3,8)`$ and $`(4,7)`$ contribute to an antiparallel sheet, which gets broken at monomer 10. Monomers 6-13 also participate in an antiparallel sheet with the layer above.
Both the folds displayed in Fig. 1 reveal long range order. In particular, Fig. 1(b) displays an up–down symmetry and an organization where sheets are on one side of the structure and the backbone connections between layers are concentrated on the opposite side. The key result of this analysis is, in fact, that spin interactions induce large scale organization even in the case of a homopolymer collapse.
Furthermore, the presence of hydrogen bonds causes a dramatic reduction of entropy, compared to that found for homopolymers with only isotropic interactions. The conformation displayed in Fig. 1(b) is, in fact, the unique, zero entropy ground state for a 36mer interacting through the Hamiltonian $`H(1,2)`$ (except for trivial symmetries, i.e. lattice symmetries and the flipping of lines of spins, the latter being easily removed by introducing a dihedral term in the Hamiltonian). The reduction in entropy can be appreciated from the inset to Fig. 2.
Fig. 2 also displays the number of bonds involved in the four types of secondary structures ($`I`$), as a function of the total number of bonds ($`N_B`$). The dependence on the number of bonds is obtained by thermal averaging as a function of temperature. The choice of using $`N_B`$ rather than the temperature as the free variable is more convenient, since we are comparing systems with different energy scales. The three curves represent the case where only Van der Waals energy is present ($`\nu \equiv ϵ_H/ϵ_V=0`$, full line), the case where both Van der Waals and spin coupling energy are present ($`\nu =2`$), and the case where only spin coupling is present ($`\nu =\mathrm{\infty }`$). For any degree of compactness the amount of secondary structure increases with increasing spin coupling. In general, it also increases with compactness; however, a backbending for the $`\nu =2`$ case is present at nearly maximal compactness. The backbending for $`\nu =2`$ in both plots is the mark of a phase transition. The transition takes place at almost maximum compactness (which is associated with low temperature), where entropy is reduced abruptly while compactness changes to a minor extent. In fact, this transition distinguishes between two types of compact polymers: a phase in which p–helices are predominant, and an ordered, highly symmetric one, rich in $`\beta `$-sheets (cf. Figs. 1b, 3).
Fig. 3 shows in detail how the different types of secondary structures change with respect to temperature $`T=1/\beta `$. The backbending in Fig. 2 corresponds to the transition at $`\beta =3/ϵ_V`$, in which relatively disordered states with a large fraction of pseudo–helices are replaced by ordered, sheet–dominated structures like that displayed in Fig. 1b. We notice that even the conformations at intermediate values of the temperature ($`\beta `$ in the range between $`1/ϵ_V`$ and $`2.5/ϵ_V`$) are quite ordered, in the sense that they have a large degree of helix structure. This behaviour can be compared with the case of homopolymers without spin interactions, which have significantly less structure, and in particular much less helix content (data not shown). The conformations mentioned above are rather compact, the homopolymer collapse taking place at $`\beta \sim 1/ϵ_V`$, while the helix–sheet transition takes place at $`\beta \sim 3/ϵ_V`$. The gap between the two transitions is adjustable by changing $`\nu `$; e.g., for $`\nu =3`$ we find the two transitions much closer to each other.
Most interestingly, the energy landscape of even the simple homopolymer turns out to be extremely rough, so rough that the helix–sheet transition cannot be sampled with any normal Metropolis approach. In other words, a polymer with hydrogen bonds displays a population of low energy conformations which are structurally very different from each other, separated by high energy barriers, and resembling rather closely spin glass behaviour.
It is remarkable that the present model predicts a large variety of helix structures, in contrast to a few, highly–ordered sheet structures. This fact suggests that helix–like conformations are entropically favoured in the protein folding mechanism, and consequently can act as intermediates leading the chain to its equilibrium state, while sheet–like structures first appear late in the folding process and contribute to the stabilization of the ground state. A similar pattern is found in the study of prion diseases, where the healthy, helix–dominated native form of the protein seems to be a very long-lived, easily accessible metastable state, whereas the true ground state is dominated by $`\beta `$–sheets and prone to aggregation.
Finally, we would like to stress that the current parametrization of hydrogen bonding is the simplest possible one. The driving force in protein folding is believed to be hydrophobicity, where non–polar amino acids are deficient in hydrogen bonds and thereby cause an ordered, entropically unfavourable arrangement of the hydrogen bonds in the surrounding water. On the contrary, hydrophilic amino acids contain hydrogen acceptors and donors that can replace those that water cannot build due to the presence of the protein surface. Thus, a more realistic model should explicitly include water. To model proteins realistically, one should also consider chains composed of monomers with different amounts of hydrogen donors and acceptors, reflecting the different types of amino acids. In such a way one will be able to control the sequence of secondary structures in a more specific way than with heterogeneous Van der Waals forces alone.
In summary, we have implemented a minimal model to take into account hydrogen bond effects in polymer folding. Already at the level of homopolymers we have observed pronounced secondary structures, structures which are ubiquitous in the realm of natural proteins.
Figure 1: Spectrum of the unresolved 𝛾-ray background. Points with error bars are the observed data from EGRET<sup>1</sup>. The lower shaded region shows the expected contribution from unresolved point sources, based on empirical modeling of the luminosity function of blazars<sup>2,3</sup>. The solid line shows the diffuse emission from intergalactic shocks, according to Eq. (1) with 𝜉_𝑒=0.05 and 𝑓ₛₕ𝑘𝑇=1 keV. The upper shaded region shows the sum of these contributions, which provides an excellent fit to the data. The blazar contribution was calibrated<sup>2,3</sup> only by the total flux in photons with energies 𝐸>10⁵ keV; hence, the small deviations of some data points from the upper shaded region might be due to uncertainties in the cumulative blazar spectrum. Although the background flux predicted by Eq. (1) is independent of the magnetic field strength in the intergalactic shocks, the maximum energy of scattered photons does depend on the field strength and is given by ℎ𝜈ₘₐₓ∼1.2𝐵₋₇(𝑘𝑇/keV) TeV. Photons with energies ≳1 TeV produce an electron-positron pair as they scatter on the infrared background<sup>22,23</sup>. The pair cools again by scattering microwave photons, which may in turn produce new pairs. The energy originally stored in photons with ℎ𝜈≳TeV is spread smoothly over the 100 MeV to TeV energy range<sup>24</sup>, and might raise the existing flux there. However, since in our model only a small fraction of the electrons are accelerated to the energy required for the production of ℎ𝜈≳TeV photons, we expect this effect to be small.
# Gamma-ray background from structure formation in the intergalactic medium
Harvard-Smithsonian CfA, 60 Garden Street, Cambridge, MA 02138, USA
Department of Condensed Matter Physics, Weizmann Institute, Rehovot, 76100, Israel
To appear in Nature. (Under press embargo until published.)
The universe is filled with a diffuse and isotropic extragalactic background of $`\gamma `$-ray radiation<sup>1</sup>, containing roughly equal energy flux per decade in photon energy between 3 MeV and 100 GeV. The origin of this background is one of the unsolved puzzles in cosmology. Less than a quarter of the $`\gamma `$-ray flux can be attributed to unresolved discrete sources<sup>2,3</sup>, but the remainder appears to constitute a truly diffuse background whose origin has hitherto been mysterious. Here we show that the shock waves induced by gravity during the formation of large-scale structure in the intergalactic medium produce a population of highly-relativistic electrons with a maximum Lorentz factor $`\sim 10^7`$. These electrons scatter a small fraction of the microwave background photons in the present-day universe up to $`\gamma `$-ray energies, thereby providing the $`\gamma `$-ray background. The predicted diffuse flux agrees with the observed background over more than four decades in photon energy, and implies a mean cosmological density of baryons which is consistent with Big-Bang nucleosynthesis.
The universe started from a smooth initial state, with small density fluctuations that grew in time due to the effect of gravity. The formation of structure resulted in non-relativistic, collisionless shock waves in the baryonic gas due to converging bulk flows, and raised its mean temperature to $`\sim 10^7`$ K ($`\approx 1`$ keV) at the present time<sup>4</sup>. This is indeed the characteristic temperature of the warm gas in groups of galaxies, which are the typical objects forming at the present epoch. The intergalactic shocks are usually strong, since the gravitationally-induced bulk flows are often characterized by a high Mach number. The intergalactic gas may have also been heated by shocks driven by outflows from young galaxies<sup>5-7</sup>.
Collisionless, non-relativistic shocks are known to generically accelerate a power-law distribution of relativistic electrons, with a number density per electron momentum, $`p_e`$, of $`dn_e/dp_e=Kp_e^{-\alpha }`$, where $`K`$ is a constant<sup>8</sup>. The power-law index for strong shocks in a gas with an adiabatic index $`\mathrm{\Gamma }=5/3`$, is<sup>8-10</sup>, $`\alpha =[(r+2)/(r-1)]=2`$, where $`r=[(\mathrm{\Gamma }+1)/(\mathrm{\Gamma }-1)]`$ is the shock compression factor. Such an electron distribution is found in the strong shocks surrounding supernova remnants in the interstellar medium<sup>8</sup>. Recent X-ray<sup>11,12</sup> and TeV<sup>13,14</sup> observations of the supernova remnants SN1006 and SNR RX J1713.7–3946 imply that electrons are accelerated in the remnant shocks up to an energy $`\sim 100`$ TeV, and are confined to the collisionless fluid by magnetic fields. These shocks have a velocity of order $`10^3\mathrm{km}\mathrm{s}^{-1}`$, similar to the velocity of the intergalactic shocks we consider here. The inferred energy density in relativistic electrons constitutes $`1`$–$`10\%`$ of the post-shock energy density in these remnants, a fraction consistent with the global ratio between the mean energy density of cosmic-ray electrons and the turbulent energy density in the interstellar medium of our galaxy.
Since the physics of shock acceleration can be scaled up to intergalactic distances, it appears natural to assume that a similar population of relativistic electrons was also produced in the intergalactic medium. The maximum Lorentz factor of the accelerated electrons, $`\gamma _{\mathrm{max}}`$, is set by equating their acceleration and cooling times. The $`e`$-folding time for the acceleration of the relativistic electrons is $`t_{\mathrm{acc}}\sim (r_\mathrm{L}c/v_{\mathrm{sh}}^2)`$, where $`v_{\mathrm{sh}}=(\mathrm{\Gamma }+1)[kT/(\mathrm{\Gamma }-1)m_p]^{1/2}`$ is the shock velocity relative to the unshocked gas, $`m_p`$ is the proton mass, and $`r_\mathrm{L}`$ is the electron Larmor gyration radius. For an electron with a Lorentz factor $`\gamma _e`$, $`r_\mathrm{L}=5.5\times 10^{-2}(\gamma _7/B_7)`$ pc, where $`\gamma _7=(\gamma _e/10^7)`$, and $`B_7`$ is the magnetic field strength in units of $`0.1\mu `$G. A magnetic field amplitude of $`0.1`$–$`1\mu `$G is often detected in the halos of galaxy clusters<sup>15-17</sup>. In particular, a magnetic field amplitude of $`0.1\mu `$G was inferred on the multi-Mpc scale of the Coma–A1367 supercluster<sup>18</sup>, which provides a good example for the intergalactic structures of interest here. For such a magnetic field amplitude we get $`t_{\mathrm{acc}}\sim 2\times 10^4\mathrm{yr}(\gamma _7/B_7)(kT/\mathrm{keV})^{-1}`$.
The acceleration time is much shorter than the lifetime of the intergalactic shocks, which is comparable to the age of the universe. The maximum Lorentz factor of the electrons, $`\gamma _{\mathrm{max}}`$, is therefore not limited by the lifetime of their accelerator, but rather by their cooling, primarily due to Compton scattering off the cosmic microwave background<sup>19</sup>. Synchrotron cooling is negligible for $`B_7\lesssim 10`$. The characteristic cooling time due to inverse-Compton scattering is $`t_{\mathrm{cool}}=m_ec/[(4/3)\sigma _T\gamma _eu_{\mathrm{cmb}}]=1.2\times 10^{10}\mathrm{yr}\left(\gamma _e/200\right)^{-1}`$, where $`m_e`$ is the electron mass, $`\sigma _T`$ is the Thomson cross-section, and $`u_{\mathrm{cmb}}`$ is the energy density of the cosmic microwave background. By equating the acceleration and cooling times, $`t_{\mathrm{acc}}=t_{\mathrm{cool}}`$, we find $`\gamma _{\mathrm{max}}=3.7\times 10^7[B_7(kT/\mathrm{keV})]^{1/2}`$.
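A quick numerical check of this balance, using the rounded prefactors quoted above (our own arithmetic; the unrounded constants give the slightly larger $`3.7\times 10^7`$):

```python
import math

def t_acc_yr(g7, B7=1.0, kT_keV=1.0):      # ~2e4 yr * (gamma_7/B_7) * (kT/keV)^-1
    return 2.0e4 * g7 / (B7 * kT_keV)

def t_cool_yr(g7):                          # 1.2e10 yr * (gamma_e/200)^-1
    return 1.2e10 * 200.0 / (1.0e7 * g7)

# Setting t_acc = t_cool gives gamma_7^2 = 12 * B_7 * (kT/keV):
g7 = math.sqrt(12.0)
print(g7 * 1e7, t_acc_yr(g7), t_cool_yr(g7))   # gamma_max ~3.5e7, equal timescales
```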
The estimates given above are valid as long as the electron Larmor radius is smaller than the coherence length of the magnetic field. Since the Larmor radius of the relativistic electrons is extremely small, $`r_\mathrm{L}\lesssim 0.1\mathrm{pc}`$, this condition is satisfied even if the field coherence length is much shorter than the $`\sim `$ Mpc scale of the intergalactic shocks.
All electrons with $`\gamma _e>200`$ lose their energy over the age of the universe. The energy of these electrons is converted to a diffuse background of photons, which are produced through inverse-Compton scattering of microwave background photons. The initial energy of a microwave photon, $`h\nu _0`$, is boosted by the scattering up to an average value of $`h\nu =(4/3)\gamma _e^2h\nu _0`$. Substituting the mean frequency of the microwave background photons for $`\nu _0`$, we find that the accelerated intergalactic electrons produce a diffuse background of radiation extending from the UV \[$`h\nu =36(\gamma _e/200)^2`$ eV\] up to extreme $`\gamma `$-ray energies \[$`h\nu =89\gamma _7^2`$ GeV\]. For a power-law distribution of relativistic electrons, $`dn_e/dp_e=Kp_e^{-2}`$, the energy density of the scattered radiation is predicted to be constant per logarithmic frequency interval, $`\nu (du/d\nu )=Kc/2=u_e/(2\mathrm{ln}\gamma _{\mathrm{max}})`$. Here $`u_e=Kc\mathrm{ln}\gamma _{\mathrm{max}}`$ is the energy density in relativistic electrons produced by the intergalactic shocks. If a fraction $`f_{\mathrm{sh}}`$ of the baryons in the universe were shocked to a (mass-weighted) mean temperature $`T`$, and a fraction $`\xi _e`$ of the shock thermal energy was transferred to relativistic electrons, then $`u_e=\frac{3}{2}\xi _ef_{\mathrm{sh}}n_pkT=5.1\times 10^{-16}\xi _ef_{\mathrm{sh}}\left(\mathrm{\Omega }_bh_{70}^2/0.04\right)\left(kT/\mathrm{keV}\right)\mathrm{erg}\mathrm{cm}^{-3}`$, where $`n_p`$ is the average proton density, $`\mathrm{\Omega }_b`$ is the cosmological baryon density parameter, and $`h_{70}`$ is the Hubble constant in units of $`70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. By substituting $`\mathrm{ln}\gamma _{\mathrm{max}}=\mathrm{ln}(4\times 10^7)`$, we then get
$$E^2\frac{dJ}{dE}=1.1\left(\frac{\xi _e}{0.05}\right)\left(\frac{\mathrm{\Omega }_bh_{70}^2}{0.04}\right)\left(\frac{f_{\mathrm{sh}}kT}{\mathrm{keV}}\right)\mathrm{keV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1},$$
(1)
where $`E=h\nu `$, and $`J`$ is the number flux of photons per solid angle. Adopting the result from hydrodynamic simulations<sup>4</sup> of $`f_{\mathrm{sh}}(kT/\mathrm{keV})\simeq 1`$, the Big-Bang nucleosynthesis value<sup>20,21</sup> of $`\mathrm{\Omega }_bh_{70}^2=0.04_{-0.027}^{+0.014}`$ (based on the deuterium abundance), and the value of $`\xi _e\sim 0.05`$ inferred from non-relativistic collisionless shocks in the interstellar medium, we obtain a background flux of $`\sim 1\mathrm{keV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}`$ in the photon energy range of 3 MeV to 100 GeV. As Figure 1 illustrates, this result is in excellent agreement with the $`\gamma `$-ray background detected by the EGRET instrument aboard the CGRO satellite<sup>1</sup>.
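Two quick consistency checks (our own arithmetic, not from the paper): the endpoints of the scattered spectrum, and the coefficient of Eq. (1), the latter assuming the scattered photons fill space isotropically (intensity $`=c/4\pi `$ times energy density) and neglecting redshift losses:

```python
import math

# (a) Scattered photon energies, h_nu = (4/3) * gamma^2 * <h_nu_cmb>:
e_cmb = 2.70 * 8.617e-5 * 2.73               # mean CMB photon energy, ~6.4e-4 eV
for gamma in (200.0, 1.0e7):
    print((4.0 / 3.0) * gamma ** 2 * e_cmb)  # ~34 eV and ~8.5e10 eV (85 GeV),
                                             # i.e. the quoted ~36 eV and ~89 GeV

# (b) Background intensity per logarithmic energy interval, Eq. (1):
m_p, rho_c = 1.67e-24, 9.2e-30               # proton mass; critical density (h_70 = 1), cgs
n_p = 0.04 * rho_c / m_p                     # mean proton density, ~2.2e-7 cm^-3
kT = 1.602e-9                                # 1 keV in erg
u_e = 1.5 * 0.05 * n_p * kT                  # xi_e = 0.05, f_sh * kT = 1 keV
I = (3.0e10 / (4.0 * math.pi)) * u_e / (2.0 * math.log(4.0e7))
print(I / 1.602e-9)                          # ~1.1 keV cm^-2 s^-1 sr^-1, as in Eq. (1)
```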
The sky-averaged contribution of identified extragalactic sources, such as blazars ($`\gamma `$-ray loud active-galactic-nuclei), amounts to only $`7\%`$ of the extragalactic $`\gamma `$-ray flux<sup>3</sup>. As shown in Figure 1, the recent EGRET data is consistent with a luminosity function of faint, undetected blazars, which could account for only $`25\%`$ of the unresolved $`\gamma `$-ray background<sup>2,3</sup>. The remainder of the hard $`\gamma `$-ray background is expected to be highly isotropic in our model because it is integrated over a large volume throughout the universe. Its large-angle fluctuations should be comparable (to within a factor of a few) to that of the hard X-ray background, of which<sup>25</sup> $`\sim 40\%`$ is contributed by early-type galaxies at $`z\sim 1`$, since these galaxies trace the same large scale structure as painted by the cosmic web of intergalactic shocks. This implies a root-mean-square fluctuation amplitude smaller than $`5\%`$ on angular scales larger than a degree<sup>26</sup>. The predicted high level of isotropy can serve as a test of our model, as soon as the isotropy of the $`\gamma `$-ray background is measured to a higher precision than presently available. Another possible test is the spectral distortion induced in the microwave background band by the low-energy tail of the non-thermal intergalactic electrons. This distortion is, however, well below the upper limit inferred from the COBE satellite data<sup>27</sup>.
At photon energies $`\lesssim 1`$ MeV the scattered background is sub-dominant relative to the cumulative flux produced by discrete sources, such as active galactic nuclei. In particular, the predicted scattered flux amounts only to $`10`$–$`15\%`$ of the soft X-ray background measured by ROSAT at 1–2 keV, and is consistent with the upper limit of $`20`$–$`30\%`$ on its unresolved fraction<sup>25,28</sup>.
Most of the $`\gamma `$-ray background is produced during the latest episode of shock heating of the intergalactic gas at $`z\lesssim 1`$, since the mean gas temperature is expected to decrease rapidly with increasing redshift<sup>4</sup>. Furthermore, the photons emitted at redshifts $`z\gtrsim 0.5`$ lose a significant fraction of their energy due to the expansion of the universe. A more accurate calculation of the $`\gamma `$-ray background can be achieved by simulating the hydrodynamic evolution of the intergalactic medium and injecting a population of accelerated electrons into each fluid element which passes through a shock, at all redshifts. Such a simulation can yield both the background flux and its statistical fluctuations on the sky. For a given cosmological model of structure formation, the derived background flux will be proportional to $`\xi _e`$, and will not depend on any other free parameters.
Although the warm ($`\sim 10^7`$ K) intergalactic medium is expected theoretically to include a major fraction of the baryons in the present-day universe, it has not been observed directly as of yet. The soft X-ray emission from intergalactic structures, such as sheets and filaments or galaxy groups, is often too faint to be detectable with current telescopes<sup>4</sup>. The observed cosmological density of stars<sup>21</sup>, $`\mathrm{\Omega }_{\star }h_{70}=3.5\times 10^{-3}`$, is an order of magnitude smaller than the baryon density predicted by Big-Bang nucleosynthesis. If the $`\gamma `$-ray background indeed results from keV shocks in the intergalactic medium, then we can derive a lower limit on the mean baryon density required to produce its observed flux. This minimum is obtained by substituting $`\xi _e=1/2`$ (equipartition) and $`f_{\mathrm{sh}}=1`$ in equation (1), yielding $`\mathrm{\Omega }_bh_{70}^2\gtrsim 4\times 10^{-3}`$. The inferred lower limit is larger than the observed mass density in stars. Thus, if our model is confirmed, this would not only explain the origin of the diffuse component of the $`\gamma `$-ray background, but also provide the first evidence for the “missing baryons” (note that even if the truly diffuse component amounts to only $`20\%`$ of the $`\gamma `$-ray background flux, the more plausible range of $`\xi _e\lesssim 10\%`$ yields a lower limit on the baryon density which is still larger than the mass density in stars).
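The quoted lower limit follows by inverting equation (1); a short sketch under the same fiducial normalisation (the observed-intensity default of 1.1 keV cm^-2 s^-1 sr^-1 is taken from the evaluation above):

```python
# Sketch: minimum baryon density able to supply the observed background,
# obtained by setting xi_e = 1/2 (equipartition) and f_sh = 1 in equation (1).
def min_omega_b_h70sq(observed=1.1, xi_e=0.5, fsh_kT_keV=1.0):
    return 0.04 * observed / (1.1 * (xi_e / 0.05) * fsh_kT_keV)

print(min_omega_b_h70sq())  # ~4e-3, as quoted in the text
```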
In our model, the $`\gamma `$-ray background is produced in the dense filaments and sheets which channel gas from converging bulk flows in the intergalactic medium. The densest and hottest shocks occur at the intersections of these filaments, around the locations of clusters of galaxies. Although the richest accreting clusters may be rare and contribute a small fraction of the background, they contain the brightest shocks in the sky and produce the strongest fluctuations in the diffuse $`\gamma `$-ray background. Direct detection of these shocks can be used to calibrate $`\xi _e`$ in our model. Even a statistical detection of a cross-correlation signal between background fluctuations and other sources that trace the same large scale structure, such as galaxies, X-ray gas, the Sunyaev-Zel’dovich effect, or synchrotron emission from the high energy electrons in the intergalactic medium, can be used to test our model. We predict that as cold gas goes through the strong virialization shock of an accreting cluster, it emits non-thermal radiation with photon energies between 30 eV and TeV. The shocked gas loses a fraction $`(4.5/7)\times \xi _e\approx 3\%`$ of its thermal energy in this process. For $`h\nu \gtrsim 10`$ keV, the cooling time of the corresponding relativistic electrons is shorter than a billion years. In this regime the total non-thermal luminosity of the cluster, $`L_{\mathrm{nt}}`$, is limited by the time it takes the cluster gas to cross the virialization shock, i.e. $`t_{\mathrm{vir}}\sim (2\mathrm{Mpc}/2\times 10^3\mathrm{km}\mathrm{s}^{-1})=10^9`$ yr. For a young cluster which shocks gas of total mass $`M_{\mathrm{gas}}`$ to a virial temperature $`T_{\mathrm{gas}}`$, the total (bolometric) non-thermal luminosity is
$$L_{\mathrm{nt}}\approx \left(\frac{4.5\xi _e/7}{t_{\mathrm{vir}}}\right)\left(\frac{M_{\mathrm{gas}}}{m_p}\right)\left(\frac{3}{2}kT_{\mathrm{gas}}\right)=1.5\times 10^{45}\left(\frac{\xi _e}{0.05}\right)\left(\frac{10^9\mathrm{yr}}{t_{\mathrm{vir}}}\right)\left(\frac{M_{\mathrm{gas}}}{10^{14}M_{\odot }}\right)\left(\frac{kT_{\mathrm{gas}}}{5\mathrm{keV}}\right)\mathrm{erg}\mathrm{s}^{-1}.$$
(2)
(The total cluster mass, including the dark matter, is typically an order of magnitude larger than $`M_{\mathrm{gas}}`$.) The luminosity per logarithmic interval in photon energy, anywhere between 30 eV and TeV, is $`\sim 0.1L_{\mathrm{nt}}`$. This emission component would affect estimates of the gas temperature and the thermal luminosity of young clusters which form through shocking of cold gas. However, the high-energy emission would be suppressed in the weak shocks that result from mergers of pre-existing clusters which contain pre-heated gas, since the energy distribution of the accelerated electrons is steeper for weak shocks<sup>8</sup>. As a cluster relaxes to hydrostatic equilibrium and ages, its non-thermal luminosity declines. The emission of hard radiation disappears almost entirely as soon as there is no strong shocking of gas. In a perfectly quiescent state of hydrostatic equilibrium, the non-thermal radiation spectrum cuts off above a photon energy, $`h\nu \sim 0.5\mathrm{keV}(\tau /3\times 10^9\mathrm{yr})^{-2}`$, for which the electron cooling time equals the cluster age, $`\tau `$. Detection of this cut-off can be used as a method of dating the time that has passed since the last major accretion episode of cold gas by the cluster. Most relaxed clusters, with ages $`\tau \gtrsim 10^9`$ yr, are expected to show only weak non-thermal emission at $`h\nu \gtrsim 10`$ keV. We note that the detection of non-thermal X-ray emission has been reported recently for several clusters<sup>16,17,29</sup>. In addition, some X-ray clusters possess radio halos<sup>30</sup> which may result from synchrotron emission by the residual population of relativistic electrons in the cluster magnetic field.
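Equation (2) and the cooling cut-off translate directly into numbers; a minimal sketch (parameter names are ours, defaults are the fiducial values in the text):

```python
# Sketch: non-thermal luminosity of a young cluster (equation (2)) and the
# cooling cut-off energy used to date relaxed clusters.
def l_nonthermal(xi_e=0.05, t_vir_gyr=1.0, m_gas_1e14=1.0, kT_keV=5.0):
    """Bolometric L_nt in erg/s."""
    return 1.5e45 * (xi_e / 0.05) / t_vir_gyr * m_gas_1e14 * (kT_keV / 5.0)

def cutoff_energy_keV(tau_gyr):
    """h*nu ~ 0.5 keV (tau / 3 Gyr)^-2, above which the spectrum cuts off."""
    return 0.5 * (tau_gyr / 3.0) ** -2

print(l_nonthermal())          # 1.5e45 erg/s for the fiducial young cluster
print(cutoff_energy_keV(3.0))  # 0.5 keV for a 3 Gyr old relaxed cluster
```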
* Sreekumar, P. et al. EGRET observations of the extragalactic gamma-ray emission. Astrophys. J. 494, 523-534 (1998).
* Chiang, J. & Mukherjee, R. The luminosity function of the EGRET gamma-ray blazars. Astrophys. J. 496, 752-760 (1998).
* Mukherjee, R. & Chiang, J. EGRET $`\gamma `$-ray blazars: luminosity function and contribution to the extragalactic $`\gamma `$-ray background. Astroparticle Phys. 11, 213-215 (1999).
* Cen, R. & Ostriker, J. P. Where are the baryons? Astrophys. J. 514, 1-6 (1999).
* Metzler, C. A. & Evrard, A. E. A simulation of the intracluster medium with feedback from cluster galaxies. Astrophys. J. 437, 564-583 (1994).
* Nevalainen, J., Markevitch, M. & Forman, W. R. The cluster M-T relation from temperature profiles observed with ASCA and ROSAT. Astrophys. J. in press (2000); astro-ph/9911369.
* Loewenstein, M. Heating of intergalactic gas and cluster scaling relations. Astrophys. J. in press (2000); astro-ph/9910276.
* Blandford, R. & Eichler, D. Particle acceleration at astrophysical shocks – a theory of cosmic-ray origin. Phys. Reports 154, 1-75 (1987).
* Bell, A. R. The acceleration of cosmic rays in shock fronts. II. Mon. Not. R. astr. Soc. 182, 147-156 (1978).
* Blandford, R. D., & Ostriker, J. P. Particle acceleration by astrophysical shocks. Astrophys. J. 221, L29-L32 (1978).
* Koyama, K. et al. Evidence for shock acceleration of high-energy electrons in the supernova remnant SN 1006. Nature 378, 255-258 (1995).
* Koyama, K. et al. Discovery of non-thermal X-rays from the northwest shell of the new SNR RX J1713.7-3946: the second SN 1006? PASJ 49, L7-L11 (1997).
* Tanimori, T. et al. Discovery of TeV gamma rays from SN 1006: further evidence for the supernova remnant origin of cosmic rays. Astrophys. J. 497, L25-L28 (1998).
* Muraishi, H. et al. Evidence for TeV gamma-ray emission from the shell type SNR RX J1713.7-3946. Astron. & Astrophys., in press (2000); astro-ph/0001047.
* Kronberg, P. Extragalactic magnetic fields. Rep. Prog. Phys. 57, 325-382 (1994).
* Fusco-Femiano, R. et al. Hard x-ray radiation in the Coma cluster spectrum. Astrophys. J. 513, L21-L24 (1999).
* Rephaeli, Y., Gruber, D. & Blanco, P. Rossi X-Ray Timing Explorer observations of the Coma cluster. Astrophys. J. 511, L21-L24 (1999).
* Kim, K.-T., Kronberg, P. P., Giovannini, G. & Venturi, T. Discovery of intergalactic radio emission in the Coma-A1367 supercluster. Nature 341, 720-723 (1989).
* Felten, J. E., & Morrison, P. Omnidirectional inverse Compton and synchrotron radiation from cosmic distribution of fast electrons and thermal photons. Astrophys. J. 146, 686-708 (1966).
* Tytler, D., O’Meara, J. M., Suzuki, N., & Lubin, D. Review of Big Bang nucleosynthesis and primordial abundances. Physica Scripta in press (2000); astro-ph/0001318.
* Fukugita, M., Hogan, C. J. & Peebles, P. J. E. The cosmic baryon budget. Astrophys. J. 503, 518-530 (1998).
* Primack, J. R., Bullock, J. S., Somerville, R. S. & McMinn, D. Probing galaxy formation with TeV gamma ray absorption. Astroparticle Physics 11, 93-102 (1999).
* Konopelko, A. K., Kirk, J. G., Stecker, F. W. & Mastichiadis, A. Evidence for intergalactic absorption in the TeV gamma-ray spectrum of Markarian 501. Astrophys. J. 518, L13-L15 (1999).
* Coppi, P. S. & Aharonian, F. A. Constraints on the very high energy emissivity of the universe from the diffuse GeV gamma-ray background. Astrophys. J. 487, L9-L12 (1997).
* Mushotzky, R. F., Cowie, L. L., Barger, A. J. & Arnaud, K. A. Resolving the extragalactic hard X-ray background. Nature in press (2000); astro-ph/0002313.
* Fabian, A. C. & Barcons, X. The origin of the X-ray background. Ann. Rev. of Astron. & Astrophys. 30, 429-456 (1992).
* Fixsen, D. J. et al. The cosmic microwave background spectrum from the full COBE FIRAS data set. Astrophys. J. 473, 576-587 (1996).
* Hasinger, G. et al. The ROSAT deep survey. I. X-ray sources in the Lockman field. Astron. & Astrophys. 329, 482-494 (1998).
* Kaastra, J. S. et al. High and low energy nonthermal X-ray emission from the Abell 2199 cluster of galaxies. Astrophys. J. 519, L119-L122 (1999).
* Deiss, B. M., Reich, W., Lesch, H. & Wielebinski, R. The large-scale structure of the diffuse radio halo of the Coma cluster at 1.4 GHz. Astron. & Astrophys. 321, 55-63 (1997).
ACKNOWLEDGEMENTS. This work was supported in part by the Israel-US BSF and by NSF. AL thanks the Weizmann Institute for its kind hospitality during the course of this work. EW is the incumbent of the Beracha foundation career development chair.
# Advances in domain independent linear text segmentation
## 1 Introduction
Even moderately long documents typically address several topics or different aspects of the same topic. The aim of linear text segmentation is to discover the topic boundaries. The uses of this procedure include information retrieval \[Hearst and Plaunt, 1993, Hearst, 1994, Yaari, 1997, Reynar, 1999\], summarization \[Reynar, 1998\], text understanding, anaphora resolution \[Kozima, 1993\], language modelling \[Morris and Hirst, 1991, Beeferman et al., 1997b\] and improving document navigation for the visually disabled \[Choi, 2000\].
This paper focuses on domain independent methods for segmenting written text. We present a new algorithm that builds on previous work by Reynar \[Reynar, 1998, Reynar, 1994\]. The primary distinction of our method is the use of a ranking scheme and the cosine similarity measure \[van Rijsbergen, 1979\] in formulating the similarity matrix. We propose that the similarity values of short text segments are statistically insignificant. Thus, one can only rely on their order, or rank, for clustering.
## 2 Background
Existing work falls into one of two categories, lexical cohesion methods and multi-source methods \[Yaari, 1997\]. The former stem from the work of Halliday and Hasan \[Halliday and Hasan, 1976\]. They proposed that text segments with similar vocabulary are likely to be part of a coherent topic segment. Implementations of this idea use word stem repetition \[Youmans, 1991, Reynar, 1994, Ponte and Croft, 1997\], context vectors \[Hearst, 1994, Yaari, 1997, Kaufmann, 1999, Eichmann et al., 1999\], entity repetition \[Kan et al., 1998\], semantic similarity \[Morris and Hirst, 1991, Kozima, 1993\], word distance model \[Beeferman et al., 1997a\] and word frequency model \[Reynar, 1999\] to detect cohesion. Methods for finding the topic boundaries include sliding window \[Hearst, 1994\], lexical chains \[Morris, 1988, Kan et al., 1998\], dynamic programming \[Ponte and Croft, 1997, Heinonen, 1998\], agglomerative clustering \[Yaari, 1997\] and divisive clustering \[Reynar, 1994\]. Lexical cohesion methods are typically used for segmenting written text in a collection to improve information retrieval \[Hearst, 1994, Reynar, 1998\].
Multi-source methods combine lexical cohesion with other indicators of topic shift such as cue phrases, prosodic features, reference, syntax and lexical attraction \[Beeferman et al., 1997a\] using decision trees \[Miike et al., 1994, Kurohashi and Nagao, 1994, Litman and Passonneau, 1995\] and probabilistic models \[Beeferman et al., 1997b, Hajime et al., 1998, Reynar, 1998\]. Work in this area is largely motivated by the topic detection and tracking (TDT) initiative \[Allan et al., 1998\]. The focus is on the segmentation of transcribed spoken text and broadcast news stories where the presentation format and regular cues can be exploited to improve accuracy.
## 3 Algorithm
Our segmentation algorithm takes a list of tokenized sentences as input. A tokenizer \[Grefenstette and Tapanainen, 1994\] and a sentence boundary disambiguation algorithm \[Palmer and Hearst, 1994, Reynar and Ratnaparkhi, 1997\] or EAGLE \[Reynar et al., 1997\] may be used to convert a plain text document into the acceptable input format.
### 3.1 Similarity measure
Punctuation and uninformative words are removed from each sentence using a simple regular expression pattern matcher and a stopword list. A stemming algorithm \[Porter, 1980\] is then applied to the remaining tokens to obtain the word stems. A dictionary of word stem frequencies is constructed for each sentence. This is represented as a vector of frequency counts.
Let $`f_{i,j}`$ denote the frequency of word $`j`$ in sentence $`i`$. The similarity between a pair of sentences $`x,y`$ is computed using the cosine measure as shown in equation 1. This is applied to all sentence pairs to generate a similarity matrix.
$$sim(x,y)=\frac{\sum _jf_{x,j}\times f_{y,j}}{\sqrt{\sum _jf_{x,j}^2\times \sum _jf_{y,j}^2}}$$
(1)
Figure 1 shows an example of a similarity matrix (the contrast of the image has been adjusted to highlight its features). High similarity values are represented by bright pixels. The bottom-left and top-right pixels show the self-similarity for the first and last sentence, respectively. Notice the matrix is symmetric and contains bright square regions along the diagonal. These regions represent cohesive text segments.
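As a concrete illustration of this stage, the following Python sketch builds the similarity matrix from tokenized sentences; the stopword list and one-line suffix stripper are toy stand-ins for the regular-expression matcher and Porter stemmer used in the actual implementation:

```python
import math

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in"}  # toy list

def stems(sentence):
    """Tokenized sentence -> word-stem frequency dictionary."""
    freq = {}
    for token in sentence:
        token = token.lower().strip(".,;:!?")
        if not token or token in STOPWORDS:
            continue
        stem = token[:-1] if token.endswith("s") else token  # crude stand-in stemmer
        freq[stem] = freq.get(stem, 0) + 1
    return freq

def cosine(f, g):
    """Equation (1): cosine of two stem-frequency vectors."""
    num = sum(f[w] * g[w] for w in f if w in g)
    den = math.sqrt(sum(v * v for v in f.values()) * sum(v * v for v in g.values()))
    return num / den if den else 0.0

def similarity_matrix(sentences):
    vecs = [stems(s) for s in sentences]
    n = len(vecs)
    return [[cosine(vecs[i], vecs[j]) for j in range(n)] for i in range(n)]
```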
### 3.2 Ranking
For short text segments, the absolute value of $`sim(x,y)`$ is unreliable. An additional occurrence of a common word (reflected in the numerator) causes a disproportionate increase in $`sim(x,y)`$ unless the denominator (related to segment length) is large. Thus, in the context of text segmentation, where a segment typically has $`<100`$ informative tokens, one can only use the metric to estimate the order of similarity between sentences, e.g. $`a`$ is more similar to $`b`$ than to $`c`$.
Furthermore, language usage varies throughout a document. For instance, the introduction section of a document is less cohesive than a section which is about a particular topic. Consequently, it is inappropriate to directly compare the similarity values from different regions of the similarity matrix.
In non-parametric statistical analysis, one compares the rank of data sets when the qualitative behaviour is similar but the absolute quantities are unreliable. We present a ranking scheme which is an adaptation of that described in \[O’Neil and Denos, 1992\].
Each value in the similarity matrix is replaced by its rank in the local region. The rank is the number of neighbouring elements with a lower similarity value. Figure 2 shows an example of image ranking using a $`3\times 3`$ rank mask with output range $`\{0,\mathrm{},8\}`$. For segmentation, we used an $`11\times 11`$ rank mask. The output is expressed as a ratio $`r`$ (equation 2) to circumvent normalisation problems (consider the cases when the rank mask is not contained in the image).
$$r=\frac{\text{\# of elements with a lower value}}{\text{\# of elements examined}}$$
(2)
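In code, this ranking step is straightforward, if quadratic in the mask area; a plain Python sketch (function and variable names are ours), with the edge clipping handled by the ratio of equation (2):

```python
def rank_matrix(sim, mask=11):
    """Replace each similarity value by the ratio r of equation (2),
    computed over a mask x mask neighbourhood clipped at the matrix edges."""
    n = len(sim)
    half = mask // 2
    rank = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            lower = examined = 0
            for a in range(max(0, i - half), min(n, i + half + 1)):
                for b in range(max(0, j - half), min(n, j + half + 1)):
                    if a == i and b == j:
                        continue
                    examined += 1
                    if sim[a][b] < sim[i][j]:
                        lower += 1
            rank[i][j] = lower / examined if examined else 0.0
    return rank
```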
To demonstrate the effect of image ranking, the process was applied to the matrix shown in figure 1 to produce figure 3 (here ranking was applied to the original matrix, prior to contrast enhancement, and the output image has not been enhanced). Notice the contrast has been improved significantly. Figure 4 illustrates the more subtle effects of our ranking scheme. $`r(x)`$ is the rank ($`1\times 11`$ mask) of $`f(x)`$, which is a sine wave with decaying mean, amplitude and frequency (equation 3).
$$\begin{array}{ccc}\hfill f(x)& =& g(x\times \frac{2\pi }{200})\hfill \\ & & \\ \hfill g(z)& =& \frac{1}{2}(e^{-z/2}+\frac{1}{2}e^{-z/2}(1+\mathrm{sin}(10z^{0.7})))\hfill \end{array}$$
(3)
### 3.3 Clustering
The final process determines the location of the topic boundaries. The method is based on Reynar’s maximisation algorithm \[Reynar, 1998, Helfman, 1996, Church, 1993, Church and Helfman, 1993\]. A text segment is defined by two sentences $`i,j`$ (inclusive). This is represented as a square region along the diagonal of the rank matrix. Let $`s_{i,j}`$ denote the sum of the rank values in a segment and $`a_{i,j}=(j-i+1)^2`$ be the inside area. $`B=\{b_1,\mathrm{},b_m\}`$ is a list of $`m`$ coherent text segments. $`s_k`$ and $`a_k`$ refer to the sum of rank and the area of segment $`k`$ in $`B`$. $`D`$ is the inside density of $`B`$ (see equation 4).
$$D=\frac{\sum _{k=1}^ms_k}{\sum _{k=1}^ma_k}$$
(4)
To initialise the process, the entire document is placed in $`B`$ as one coherent text segment. Each step of the process splits one of the segments in $`B`$. The split point is a potential boundary which maximises $`D`$. Figure 5 shows a working example.
The number of segments to generate, $`m`$, is determined automatically. $`D^{(n)}`$ is the inside density of $`n`$ segments and $`\delta D^{(n)}=D^{(n)}-D^{(n-1)}`$ is the gradient. For a document with $`b`$ potential boundaries, $`b`$ steps of divisive clustering generate $`\{D^{(1)},\mathrm{},D^{(b+1)}\}`$ and $`\{\delta D^{(2)},\mathrm{},\delta D^{(b+1)}\}`$ (see figures 6 and 7). An unusually large reduction in $`\delta D`$ suggests the optimal clustering has been obtained (in practice, convolution with the mask $`\{1,2,4,8,4,2,1\}`$ is first applied to $`\delta D`$ to smooth out sharp local changes; see $`n=10`$ in figure 7). Let $`\mu `$ and $`\nu `$ be the mean and variance of $`\delta D^{(n)}`$, $`n\in \{2,\mathrm{},b+1\}`$. $`m`$ is obtained by applying the threshold $`\mu +c\times \sqrt{\nu }`$ to $`\delta D`$ ($`c=1.2`$ works well in practice).
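The following Python sketch implements one plausible reading of this divisive step; it uses the sum-of-ranks table $`S`$ precomputed as in section 3.4 (sketched below) and takes the number of splits as an argument rather than reproducing the full automatic termination rule:

```python
def divisive_cluster(rank, steps):
    """Greedily split the segment list B: at each step insert the boundary
    that maximises the inside density D of equation (4)."""
    n = len(rank)
    sums = precompute_sums(rank)   # sum-of-ranks table, see section 3.4
    segs = [(0, n - 1)]            # B starts as one whole-document segment
    densities = []                 # D^(n), usable for the termination rule
    for _ in range(steps):
        best = None
        for k, (i, j) in enumerate(segs):
            for split in range(i, j):                 # boundary after `split`
                trial = segs[:k] + [(i, split), (split + 1, j)] + segs[k + 1:]
                s = sum(sums[a][b] for a, b in trial)
                area = sum((b - a + 1) ** 2 for a, b in trial)
                if best is None or s / area > best[0]:
                    best = (s / area, k, split)
        if best is None:                              # only singletons left
            break
        d, k, split = best
        i, j = segs[k]
        segs[k:k + 1] = [(i, split), (split + 1, j)]
        densities.append(d)
    return segs, densities
```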
### 3.4 Speed optimisation
The running time of each step is dominated by the computation of $`s_k`$. Given that $`s_{i,j}`$ is constant, our algorithm pre-computes all the values to improve speed performance. The procedure computes the values along diagonals, starting from the main diagonal and working towards the corner. The method has a complexity of order $`1\frac{1}{2}n^2`$. Let $`r_{i,j}`$ refer to the rank value in the rank matrix $`R`$ and $`S`$ to the sum-of-ranks matrix. Given $`R`$ of size $`n\times n`$, $`S`$ is computed in three steps (see equation 5). Figure 8 shows the result of applying this procedure to the rank matrix in figure 5.
$$\begin{array}{cccc}1.\hfill & s_{i,i}\hfill & =& r_{i,i}\hfill \\ & & \text{for}& i\in \{1,\mathrm{},n\}\hfill \\ 2.\hfill & s_{i+1,i}\hfill & =& 2r_{i+1,i}+s_{i,i}+s_{i+1,i+1}\hfill \\ & s_{i,i+1}\hfill & =& s_{i+1,i}\hfill \\ & & \text{for}& i\in \{1,\mathrm{},n-1\}\hfill \\ 3.\hfill & s_{i+j,i}\hfill & =& 2r_{i+j,i}+s_{i+j-1,i}+\hfill \\ & & & s_{i+j,i+1}-s_{i+j-1,i+1}\hfill \\ & s_{i,i+j}\hfill & =& s_{i+j,i}\hfill \\ & & \text{for}& j\in \{2,\mathrm{},n-1\}\hfill \\ & & & i\in \{1,\mathrm{},n-j\}\hfill \end{array}$$
(5)
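Equation (5) translates almost line-for-line into code; a Python sketch (the diagonal-by-diagonal order matters, since each entry reuses previously filled ones):

```python
def precompute_sums(rank):
    """Equation (5): build the sum-of-ranks table S along diagonals, where
    S[i][j] holds the sum of rank values inside the square segment (i, j)."""
    n = len(rank)
    s = [[0.0] * n for _ in range(n)]
    for i in range(n):                       # step 1: main diagonal
        s[i][i] = rank[i][i]
    for i in range(n - 1):                   # step 2: first off-diagonal
        s[i + 1][i] = 2 * rank[i + 1][i] + s[i][i] + s[i + 1][i + 1]
        s[i][i + 1] = s[i + 1][i]
    for j in range(2, n):                    # step 3: remaining diagonals
        for i in range(n - j):
            s[i + j][i] = (2 * rank[i + j][i] + s[i + j - 1][i]
                           + s[i + j][i + 1] - s[i + j - 1][i + 1])
            s[i][i + j] = s[i + j][i]
    return s
```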
## 4 Evaluation
The definition of a topic segment ranges from complete stories \[Allan et al., 1998\] to summaries \[Ponte and Croft, 1997\]. Given that the quality of an algorithm is task dependent, the following experiments focus on relative performance. Our evaluation strategy is a variant of that described in \[Reynar, 1998, 71-73\] and the TDT segmentation task \[Allan et al., 1998\]. We assume a good algorithm is one that finds the most prominent topic boundaries.
### 4.1 Experiment procedure
An artificial test corpus of 700 samples is used to assess the accuracy and speed performance of segmentation algorithms. A sample is a concatenation of ten text segments. A segment is the first $`n`$ sentences of a randomly selected document from the Brown corpus (only the news articles ca\**.pos and the informative texts cj\**.pos were used in the experiment). A sample is characterised by the range of $`n`$. The corpus was generated by an automatic procedure (all experiment data, algorithms, scripts and detailed results are available from the author). Table 1 presents the corpus statistics.
$$\begin{array}{c}p(\text{error}|\text{ref},\text{hyp},k)=\hfill \\ p(\text{miss}|\text{ref},\text{hyp},\text{diff},k)p(\text{diff}|\text{ref},k)+\hfill \\ p(\text{fa}|\text{ref},\text{hyp},\text{same},k)p(\text{same}|\text{ref},k)\hfill \end{array}$$
(6)
Speed performance is measured by the average number of CPU seconds required to process a test sample (all experiments were conducted on a Pentium II 266MHz PC with 128Mb RAM running RedHat Linux 6.0 and the Blackdown Linux port of JDK1.1.7 v3). Segmentation accuracy is measured by the error metric (equation 6; fa denotes false alarms) proposed in \[Beeferman et al., 1999\]. Low error probability indicates high accuracy. Other performance measures include the popular precision and recall metric (PR) \[Hearst, 1994\], fuzzy PR \[Reynar, 1998\] and edit distance \[Ponte and Croft, 1997\]. The problems associated with these metrics are discussed in \[Beeferman et al., 1999\].
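For reference, here is a compact Python rendering of this probe-based error metric in its commonly used form (segmentations are given as lists of segment sizes; the window $`k`$ defaults to half the mean reference segment length, a conventional choice, and is not taken from the paper itself):

```python
def pk_error(ref_sizes, hyp_sizes, k=None):
    """Probability that a probe of width k disagrees with the reference
    about two units lying in the same segment (cf. equation 6)."""
    ref = [i for i, size in enumerate(ref_sizes) for _ in range(size)]
    hyp = [i for i, size in enumerate(hyp_sizes) for _ in range(size)]
    n = len(ref)
    if k is None:
        k = max(1, round(n / (2 * len(ref_sizes))))
    errors = sum((ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
                 for i in range(n - k))
    return errors / (n - k)

# e.g. a ten-sentence text: reference segments 5+5, hypothesis 3+7
print(pk_error([5, 5], [3, 7]))  # 0.5
```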
### 4.2 Experiment 1 - Baseline
Five degenerate algorithms define the baseline for the experiments. $`B_n`$ does not propose any boundaries. $`B_a`$ reports all potential boundaries as real boundaries. $`B_e`$ partitions the sample into regular segments. $`B_{(r,\mathrm{?})}`$ randomly selects any number of boundaries as real boundaries. $`B_{(r,b)}`$ randomly selects $`b`$ boundaries as real boundaries.
The accuracy of the last two algorithms is computed analytically. We consider the status of $`m`$ potential boundaries as a bit string ($`1`$ denotes a topic boundary). The terms $`p(\text{miss})`$ and $`p(\text{fa})`$ in equation 6 correspond to $`p(\text{same}|k)`$ and $`p(\text{diff}|k)=1-p(\text{same}|k)`$. Equations 7, 8 and 9 give the general form of $`p(\text{same}|k)`$, and its form for $`B_{(r,\mathrm{?})}`$ and $`B_{(r,b)}`$, respectively (the full derivation is available from the author).
Table 2 presents the experimental results. The values in rows two and three, and in rows four and five, are not actually the same. However, their differences are insignificant according to the Kolmogorov-Smirnov, or KS-test \[Press et al., 1992\].
$$p(\text{same}|k)=\frac{\text{\# valid segmentations}}{\text{\# possible segmentations}}$$
(7)
$$p(\text{same}|k,B_{(r,\mathrm{?})})=\frac{2^{m-k}}{2^m}=2^{-k}$$
(8)
$$\begin{array}{ccc}\hfill p(\text{same}|k,m,B_{(r,b)})& =& \frac{{}_{(m-k)}C_b}{{}_{m}C_b}\hfill \\ & & \\ \hfill {}_{x}C_y& =& \frac{x!}{y!(x-y)!}\hfill \end{array}$$
(9)
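Equations (8) and (9) are cheap to evaluate directly; a Python sketch:

```python
from math import comb

def p_same_random_any(k):
    """Equation (8): B_(r,?) sets each of the m potential boundaries
    independently, so p(same|k) = 2**-k, independent of m."""
    return 2.0 ** -k

def p_same_random_b(k, m, b):
    """Equation (9): B_(r,b) chooses exactly b of the m potential boundaries."""
    return comb(m - k, b) / comb(m, b)

print(p_same_random_any(3))       # 0.125
print(p_same_random_b(3, 20, 9))  # chance no boundary falls inside a k=3 window
```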
### 4.3 Experiment 2 - TextTiling
We compare three versions of the TextTiling algorithm \[Hearst, 1994\]. $`H94_{(c,d)}`$ is Hearst’s C implementation with default parameters. $`H94_{(c,r)}`$ uses the recommended parameters $`k=6`$, $`w=20`$. $`H94_{(j,r)}`$ is my implementation of the algorithm. Experimental result (table 3) shows $`H94_{(c,d)}`$ and $`H94_{(c,r)}`$ are more accurate than $`H94_{(j,r)}`$. We suspect this is due to the use of a different stopword list and stemming algorithm.
### 4.4 Experiment 3 - DotPlot
Five versions of Reynar’s optimisation algorithm \[Reynar, 1998\] were evaluated. $`R98`$ and $`R98_{(min)}`$ are exact implementations of his maximisation and minimisation algorithm. $`R98_{(s,cos)}`$ is my version of the maximisation algorithm which uses the cosine coefficient instead of dot density for measuring similarity. It incorporates the optimisations described in section 3.4. $`R98_{(m,dot)}`$ is the modularised version of $`R98`$ for experimenting with different similarity measures.
$`R98_{(m,sa)}`$ uses a variant of Kozima’s semantic similarity measure \[Kozima, 1993\] to compute block similarity. Word similarity is a function of word co-occurrence statistics in the given document. Words that belong to the same sentence are considered to be related. Given the co-occurrence frequencies $`f(w_i,w_j)`$, the transition probability matrix $`t`$ is computed by equation 10. Equation 11 defines our spread activation scheme. $`s`$ denotes the word similarity matrix, $`x`$ is the number of activation steps and $`\text{norm}(y)`$ converts a matrix $`y`$ into a transition matrix. $`x=5`$ was used in the experiment.
$$t_{i,j}=p(w_j|w_i)=\frac{f(w_i,w_j)}{\sum _jf(w_i,w_j)}$$
(10)
$$s=\text{norm}\left(\sum _{i=1}^xt^i\right)$$
(11)
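A NumPy sketch of equations (10) and (11); here we take $`\text{norm}(\cdot )`$ to be row normalisation into a transition matrix, which is one plausible reading of the text:

```python
import numpy as np

def spread_activation(cooc, x=5):
    """Equations (10)-(11): row-normalise the co-occurrence counts into a
    transition matrix t, then sum the first x powers and row-normalise again.
    Assumes every word co-occurs with something (no all-zero rows)."""
    cooc = np.asarray(cooc, dtype=float)
    t = cooc / cooc.sum(axis=1, keepdims=True)          # equation (10)
    acc = np.zeros_like(t)
    power = np.eye(len(t))
    for _ in range(x):
        power = power @ t                               # t^1 ... t^x
        acc += power
    return acc / acc.sum(axis=1, keepdims=True)         # norm(...) of equation (11)
```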
Experimental result (table 4) shows the cosine coefficient and our spread activation method improved segmentation accuracy. The speed optimisations significantly reduced the execution time.
### 4.5 Experiment 4 - Segmenter
We compare three versions of Segmenter \[Kan et al., 1998\]. $`K98_{(p)}`$ is the original Perl implementation of the algorithm (version 1.6). $`K98_{(j)}`$ is my implementation of the algorithm. $`K98_{(j,a)}`$ is a version of $`K98_{(j)}`$ which uses a document specific chain breaking strategy. The distribution of link distances are used to identify unusually long links. The threshold is a function $`\mu +c\times \sqrt{\nu }`$ of the mean $`\mu `$ and variance $`\nu `$. We found $`c=1`$ works well in practice.
Table 5 summarises the experimental results. $`K98_{(p)}`$ performed significantly better than $`K98_{(j)}`$ and $`K98_{(j,a)}`$. This is due to the use of a different part-of-speech tagger and shallow parser. The difference in speed is largely due to the programming languages and term clustering strategies. Our chain breaking strategy improved accuracy (compare $`K98_{(j)}`$ with $`K98_{(j,a)}`$).
### 4.6 Experiment 5 - Our algorithm, $`C99`$
Two versions of our algorithm were developed, $`C99`$ and $`C99_{(b)}`$. The former is an exact implementation of the algorithm described in this paper. The latter is given the expected number of topic segments for fair comparison with $`R98`$. Both algorithms used an $`11\times 11`$ ranking mask.
The first experiment focuses on the impact of our automatic termination strategy on $`C99_{(b)}`$ (table 6). $`C99_{(b)}`$ is marginally more accurate than $`C99`$. This indicates our automatic termination strategy is effective but not optimal. The minor reduction in speed performance is acceptable.
The second experiment investigates the effect of different ranking mask sizes on the performance of $`C99`$ (table 7). Execution time increases with mask size. A $`1\times 1`$ ranking mask reduces all the elements in the rank matrix to zero. Interestingly, increasing the ranking mask size beyond $`3\times 3`$ has an insignificant effect on segmentation accuracy. This suggests the use of extrema for clustering has a greater impact on accuracy than linearising the similarity scores (figure 4).
### 4.7 Summary
Experimental result (table 8) shows our algorithm $`C99`$ is more accurate than existing algorithms. A two-fold increase in accuracy and a seven-fold increase in speed were achieved (compare $`C99_{(b)}`$ with $`R98`$). If one disregards segmentation accuracy, $`H94`$ has the best algorithmic performance (linear). $`C99`$, $`K98`$ and $`R98`$ are all polynomial time algorithms. The significance of our results has been confirmed by both the t-test and the KS-test.
## 5 Conclusions and future work
A segmentation algorithm has two key elements, a clustering strategy and a similarity measure. Our results show divisive clustering ($`R98`$) is more precise than sliding window ($`H94`$) and lexical chains ($`K98`$) for locating topic boundaries.
Four similarity measures were examined. The cosine coefficient ($`R98_{(s,cos)}`$) and dot density measure ($`R98_{(m,dot)}`$) yield similar results. Our spread activation based semantic measure ($`R98_{(m,sa)}`$) improved accuracy. This confirms that although Kozima’s approach \[Kozima, 1993\] is computationally expensive, it does produce more precise segmentation.
The most significant improvement was due to our ranking scheme which linearises the cosine coefficient. Our experiments demonstrate that given insufficient data, the qualitative behaviour of the cosine measure is indeed more reliable than the actual values.
Although our evaluation scheme is sufficient for this comparative study, further research requires a large scale, task independent benchmark. It would be interesting to compare $`C99`$ with the multi-source method described in \[Beeferman et al., 1999\] using the TDT corpus. We would also like to develop a linear time and multi-source version of the algorithm.
## Acknowledgements
This paper has benefitted from the comments of Mary McGee Wood and the anonymous reviewers. Thanks are due to my parents and department for making this work possible; Jeffrey Reynar for discussions and guidance on the segmentation problem; Hideki Kozima for help on the spread activation measure; Min-Yen Kan and Marti Hearst for their segmentation algorithms; Daniel Oram for references to image processing techniques; Magnus Rattray and Stephen Marsland for help on statistics and mathematics.
# On the helium content of Galactic globular clusters via the R parameter

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, and on observations retrieved from the ESO ST-ECF Archive.
## 1. Introduction
The He abundance is fundamental in several astrophysical problems. Big bang nucleosynthesis models supply tight predictions on the primordial He content, and therefore empirical estimates of this parameter are crucial for constraining their plausibility (Hogan, Olive, & Scully 1997). At the same time, stellar evolutionary and pulsational models do require the assumption of a He to metal enrichment ratio $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ in order to reproduce the observed properties of both metal-poor and metal-rich stellar structures (Bono et al. 1997a). Observational constraints on this parameter can improve the accuracy of several theoretical observables, and in particular of the stellar yields predicted by Galactic chemical evolution models (Tsujimoto et al. 1997; Pagel & Portinari 1998).
One of the most widely used methods for estimating the He abundance is to measure fluxes of nebular emission lines in planetary nebulae (Peimbert 1995) or in extragalactic H II regions (Pagel et al. 1992; Izotov, Thuan, & Lipovetsky 1997; Olive, Steigman, & Skillman 1997). Independent estimates based on high signal-to-noise measurements of the He abundance give very similar results ($`Y=0.23-0.24`$), thus suggesting that the empirical uncertainties are quite small. However, by adopting detailed radiative transfer calculations of H and He, Sasselov & Goldwirth (1995) supported the evidence that current He measurements could be affected by large systematic errors. The He content can also be obtained by direct spectroscopic measurements in hot Horizontal Branch (HB) stars. Unfortunately, these stars are affected by gravitational settling and by radiation levitation (Michaud et al. 1983; Moehler et al. 1999), and therefore they might present peculiar abundance patterns. Nevertheless, there seems to be a consensus that the primordial He abundance should not be lower than Y=0.22 (Olive, Steigman, & Walker 1999).
Absolute and relative He abundances can also be estimated from the evolution of population II stars, since this evolution is sensitive to the primordial helium abundance. The first helium-sensitive indicator to be identified was the R parameter, defined as the ratio between the number of stars along the HB and the number of red giant branch (RGB) stars brighter than the HB luminosity ($`R=N_{HB}/N_{RGB}`$, Iben 1968). Additional parameters also use the helium burning stars of the horizontal branch as abundance indicators. The $`\mathrm{\Delta }`$ parameter is the magnitude difference between HB stars and main sequence (MS) stars (Carney 1980) and the $`A`$ parameter is the mass-luminosity exponent of RR Lyrae stars (Caputo, Cayrel, & Cayrel de Strobel 1983). The fine structure of the main sequence locus of population II stars (Faulkner 1967) is also a potentially powerful method to constrain $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$. On the basis of Hipparcos parallaxes Pagel & Portinari (1998) investigated the fine structure of solar neighborhood MS stars and found that current estimates of $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ are still affected by large uncertainties. The other three methods have been recently applied by Sandquist (1999, hereinafter S99) to a sample of 42 Galactic Globular Clusters (GGCs). Sandquist also comprehensively discusses the pros and cons of these abundance indicators and in particular the statistical and systematic errors affecting both absolute and relative He estimates. The results by S99 suggest that both the $`\mathrm{\Delta }`$ and the $`A`$ parameter can only give reliable relative He abundances, due to current uncertainties on the metallicity scale and on the RR Lyrae temperature scale. At the same time, S99 showed that absolute He abundances based on the R parameter ($`Y\sim 0.2`$) could also be affected by additional systematic errors, and that both relative and absolute estimates do not show, within current uncertainties, any clear evidence of a trend with metallicity. The latter finding does not support the results by Renzini (1994), Minniti (1995), Bertelli et al. (1996), and Desidera, Bertelli, & Ortolani (1998), who suggest that in the Galactic bulge the He abundance scales with metallicity according to a slope ranging from 2 to 3.5. Moreover, detailed comparisons between solar standard models and accurate helioseismic data (Ciacio, Degl’Innocenti & Ricci 1997; Degl’Innocenti et al. 1997; Christensen-Dalsgaard 1998) are more consistent with $`\mathrm{\Delta }Y/\mathrm{\Delta }Z\sim 2`$. The large spread in the empirical values suggests that current He estimates are still hampered by large uncertainties which do not allow us to disentangle the intrinsic variation, if any, from systematic effects.
The empirical evaluation of the R parameter relies only on star counts. Nevertheless, misleading effects can be introduced by the method adopted for fixing the Zero Age Horizontal Branch (ZAHB) luminosity, by differential reddening, as well as by the occurrence of population gradients inside the cluster (Buzzoni et al. 1983; Caputo, Martinez Roger, & Paez 1987; Djorgovski & Piotto 1993; Bono et al. 1995; S99). The He abundance is estimated by comparing observed values with the ratio of HB and RGB evolutionary times, which relies on evolutionary predictions characterized by a negligible dependence on stellar age (Iben & Rood 1969). The He burning lifetimes do depend on input physics such as equation of state, opacity, and nuclear cross sections (Brocato, Castellani, & Villante 1998; Cassisi et al. 1998) adopted to construct HB models as well as on the algorithm adopted for treating the mixing processes (Sweigart 1990, and references therein).
The main aim of this investigation is to derive the R parameter for a sample of 26 GGCs, and to compare empirical values with theoretical predictions in order to gather information on both the He content and its trend with metallicity. In order to accomplish this goal we specifically calculate a large set of HB models, adopting the most up-to-date input physics. We rely on the high number of stars sampled in each cluster, on the wide metallicity range covered by the clusters, and on the high homogeneity of theoretical predictions and data to constrain the behavior of both the R and $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ parameters.
## 2. The cluster database
We evaluate the R parameter in 26 GGCs of our HST database (Zoccali et al. 1999, hereinafter Z99, and references therein) including images from the HST projects GO-6095 and GO-7470, and similar data retrieved from the HST archive. To avoid systematic uncertainties in the star counts, we exclude from the sample the clusters affected by strong foreground contamination (NGC 6522, NGC 6441), as well as those having hot HB stars close to the magnitude limit and for which the completeness correction was not estimated (NGC 6205).
The R parameter is defined as $`N_{HB}/N_{RGB}`$, where $`N_{RGB}`$ is the number of RGB stars brighter than a reference luminosity, generally fixed according to the luminosity of RR Lyrae stars (Buzzoni et al. 1983) or to the ZAHB luminosity (Bono et al. 1995). However, the accurate determination of this luminosity is often a thorny problem. In fact, together with the well-known difficulties in estimating the RR Lyrae luminosity and/or the ZAHB luminosity for clusters with only red or blue HB morphologies, we are also dealing with the problem of the differential bolometric correction between HB and RGB stars. This means that, as soon as the HB luminosity/magnitude is fixed, the proper RGB magnitude at this luminosity can be estimated only by accounting for the change in the bolometric correction between HB and RGB structures. Current estimates have been derived by adopting different assumptions on the value of the bolometric correction and on its variation with metallicity. Therefore, the comparison among the R values available in the literature is not straightforward, even for the same clusters.
TABLE 1. Cluster Data

| NGC | \[M/H\] | $`V_{\mathrm{ZAHB}}^a`$ | $`N_{\mathrm{HB}}^c`$ | $`N_{\mathrm{RGB}}^d`$ | R<sup>e</sup> | $`N_\mathrm{B}/N_\mathrm{R}^f`$ |
| --- | --- | --- | --- | --- | --- | --- |
| 104 | –0.54 | 14.26 | 358 | 235 | 1.52$`\pm `$0.13 | 0/358 |
| 362 | –0.84 | 15.66<sup>b</sup> | 247 | 197 | 1.25$`\pm `$0.12 | 17/215 |
| 1851 | –0.93 | 16.33<sup>b</sup> | 297 | 237 | 1.24$`\pm `$0.11 | 89/180 |
| 1904 | –1.19 | 16.31 | 177 | 116 | 1.53$`\pm `$0.18 | 0/172 |
| 2808 | –1.03 | 16.50 | 851 | 606 | 1.26$`\pm `$0.06 | 462/389 |
| 4590 | –1.68 | 15.72 | 34 | 35 | 0.93$`\pm `$0.23 | 23/11 |
| 5634 | –1.45 | 18.04 | 146 | 105 | 1.33$`\pm `$0.17 | 131/2 |
| 5694 | –1.49 | 18.73 | 249 | 159 | 1.50$`\pm `$0.15 | 241/5 |
| 5824 | –1.48 | 18.55 | 520 | 372 | 1.46$`\pm `$0.10 | 480/20 |
| 5927 | –0.17 | 16.94 | 195 | 136 | 1.47$`\pm `$0.17 | 0/195 |
| 5946 | –1.24 | 17.45 | 113 | 96 | 1.13$`\pm `$0.11 | 113/0 |
| 5986 | –1.31 | 16.54 | 237 | 154 | 1.48$`\pm `$0.15 | 224/4 |
| 6093 | –1.27 | 16.46 | 263 | 221 | 1.25$`\pm `$0.11 | 251/12 |
| 6139 | –1.29 | 18.50 | 299 | 223 | 1.28$`\pm `$0.11 | 290/9 |
| 6235 | –1.06 | 17.42<sup>b</sup> | 35 | 37 | 0.90$`\pm `$0.22 | 31/2 |
| 6284 | –0.99 | 17.36<sup>b</sup> | 132 | 104 | 1.33$`\pm `$0.17 | 132/0 |
| 6287 | –1.67 | 17.29 | 92 | 59 | 1.49$`\pm `$0.25 | 82/9 |
| 6293 | –1.55 | 16.56 | 137 | 101 | 1.30$`\pm `$0.17 | 130/4 |
| 6342 | –0.43 | 17.66 | 73 | 42 | 1.74$`\pm `$0.34 | 0/73 |
| 6356 | –0.30 | 18.15 | 370 | 231 | 1.60$`\pm `$0.13 | 0/370 |
| 6362 | –0.82 | 15.63<sup>b</sup> | 38 | 33 | 1.15$`\pm `$0.27 | 6/27 |
| 6388 | –0.39 | 17.41 | 1353 | 747 | 1.74$`\pm `$0.07 | 202/1151 |
| 6624 | –0.22 | 16.30 | 123 | 86 | 1.43$`\pm `$0.20 | 0/123 |
| 6652 | –0.72 | 16.21 | 62 | 47 | 1.32$`\pm `$0.26 | 0/62 |
| 6981 | –1.19 | 17.32 | 65 | 56 | 1.16$`\pm `$0.21 | 21/24 |
| 7078 | –1.82 | 15.93 | 390 | 242 | 1.57$`\pm `$0.13 | 292/50 |
<sup>a</sup> For the $`V_{ZAHB}`$ determination see the discussion in Z99.
<sup>b</sup> Due to a misidentification of some blue RR Lyrae stars, the $`V_{ZAHB}`$ values of these clusters were underestimated by a few hundredths of a magnitude in Z99.
<sup>c,d</sup> The total number of HB and RGB stars, respectively.
<sup>e</sup> The error budget on R includes: the Poisson error on the raw star counts, the completeness correction, and the weighted mean of the three radial determinations.
<sup>f</sup> Number of stars bluer/redder than the RR Lyrae gap.
To overcome systematic errors introduced both by metallicity and gravity variations, in the present analysis we choose to “save the observables”, i.e. we define $`N_{RGB}`$ as the number of RGB stars brighter than the ZAHB $`V`$ magnitude ($`V\le V_{ZAHB}`$). As a consequence, both $`t_{HB}`$ and $`t_{RGB}`$ values are estimated after theoretical predictions are transformed to the observational plane. Table 1 lists the cluster name, its global metallicity, and the other observed quantities. Owing to calibration problems with WFPC2 images (Stetson 1998, 1999), the values of $`V_{ZAHB}`$ adopted for this work could be affected by an uncertainty of $`\sim 0.1`$ mag, and therefore the values in Table 1 have to be considered only as relative evaluations. Cluster metallicities are based on the Cohen et al. (1999) metallicity scale, while global metallicities were estimated by assuming a mean $`\alpha `$-element enhancement of \[$`\alpha `$/Fe\]=$`0.3`$ for \[Fe/H\]$`\le -1`$, and \[$`\alpha `$/Fe\]=$`0.2`$ for \[Fe/H\]$`>-1`$ (see Z99).
The observed star counts have been corrected for completeness. Since crowding effects depend on the distance from the cluster center, we divide each field into three radial annuli, and then correct the counts in each annulus by using the completeness correction appropriate for each magnitude level. As a consequence, we compute three independent radial values of R, and our final R value is their weighted mean. This approach also gives a check for spurious radial trends which could be caused either by population gradients (Djorgovski & Piotto 1993) or by an overestimate (underestimate) of the completeness correction. Interestingly enough, two clusters (NGC 6273 and NGC 6934) show a strong variation of R with the distance from the cluster center. Due to the small area covered by the WFPC2 field, and to the different pixel size — hence different sampling — of the most central chip (PC) compared with the outer ones (WFs), we cannot firmly assess whether this behavior is intrinsic — i.e. caused by a radial population gradient — or caused by systematic errors in the completeness correction. Radial color gradients in NGC 6934 have already been found by Sohn, Byun & Chun (1996); we are not aware of wide field investigations on NGC 6273. The peculiar behavior of star counts in these clusters deserves a detailed investigation; therefore we exclude both of them from the sample.
The completeness correction for red HB stars is assumed to be identical to that for RGB stars at the same magnitude. However, the completeness correction for blue HB stars could be significantly different from that derived for the RGB stars. In fact, at fixed $`V`$ magnitude, blue HB stars have brighter $`B`$ magnitudes, and thus a higher probability of being detected. For some very blue HB clusters, namely NGC 2808, NGC 6273, and NGC 7078, we performed several direct experiments by adding to the frames artificial stars along the HB fiducial line. We find that the completeness correction for blue HB stars is a factor of 1.046 higher than the completeness for RGB and MS stars located at the same $`V`$ magnitude. This difference appears fairly constant in different clusters and over a large magnitude range. As a consequence, the artificial-star tests for the other clusters were performed only for RGB/MS stars, and the completeness correction for HB stars was scaled according to the same factor of 1.046. It is worth noting that such a completeness correction is significant only for extreme blue HB stars and that the correction to the R parameter is always smaller than 5%. The only exception is NGC 2808, a cluster which shows a very long and populated HB blue tail, and a very high central density. By applying the completeness correction to this cluster the R parameter changed by $`11\%`$.
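Schematically, the star-count bookkeeping described above reduces to the following Python sketch (the bin structure, variable names and error model are illustrative stand-ins for the actual reduction pipeline; the Poisson term is propagated from the raw counts as in the notes to Table 1):

```python
import math

def corrected_count(raw_counts, completeness, extra=1.0):
    """Completeness-corrected star count, summed over magnitude bins."""
    return sum(n / (c * extra) for n, c in zip(raw_counts, completeness))

def annulus_R(blue_hb, red_hb, rgb, completeness):
    """R = N_HB/N_RGB in one radial annulus; each argument lists raw counts
    per magnitude bin.  Blue HB stars get the extra factor of 1.046 found
    in the artificial-star tests."""
    n_hb = (corrected_count(blue_hb, completeness, extra=1.046)
            + corrected_count(red_hb, completeness))
    n_rgb = corrected_count(rgb, completeness)
    R = n_hb / n_rgb
    err = R * math.sqrt(1.0 / (sum(blue_hb) + sum(red_hb)) + 1.0 / sum(rgb))
    return R, err

def cluster_R(annuli):
    """Weighted mean of the (R, error) pairs from the three radial annuli."""
    weights = [1.0 / err ** 2 for _, err in annuli]
    R = sum(w * r for (r, _), w in zip(annuli, weights)) / sum(weights)
    return R, math.sqrt(1.0 / sum(weights))
```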
## 3. Discussion
Figure 1 shows the comparison between empirical R values and theoretical predictions at fixed age (14 Gyr) for three different assumptions about the He content and for metal abundances ranging from \[M/H\]=$`-2.2`$ to solar chemical composition. At fixed composition, a large set of evolutionary models for both H and He burning phases was constructed by adopting the input physics already discussed by Cassisi & Salaris (1997, hereinafter CS). As usual, the HB lifetime is estimated on the basis of the HB model located at $`\mathrm{log}T_e=3.85`$, i.e. by assuming as representative of $`t_{HB}`$ the central He burning time of a structure whose ZAHB is inside the RR Lyrae instability strip. Our calculations confirm Iben’s (1968) original finding that the R parameter is virtually independent of the adopted cluster age. The comparison between theory and observations also confirms the finding by S99 that the absolute He content resulting from the R-method is $`Y\sim 0.20`$. As discussed in Section 1, this value is significantly lower than the canonical He abundance expected from primordial nucleosynthesis and measured in HII regions, showing that there is some systematic uncertainty affecting the calibration of R as a function of Y.
As discussed in Brocato et al. (1998) and Cassisi et al. (1998), among the physical input parameters that govern the HB lifetime, the nuclear cross section for the $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ reaction is affected by the largest uncertainty. In order to investigate the dependence of $`t_{HB}`$ on the poorly measured $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ cross section, we performed several numerical experiments. Figure 2 displays the dependence of the theoretical R values on an increase/decrease by a factor of two in the efficiency of the quoted reaction when compared with the value provided by Caughlan et al. (1985). Even though the range of the nuclear cross section values used in Fig. 2 is quite large, it is still inside the current error of its empirical measurements (Buchmann 1996). This fact clearly shows the sensitivity of the predicted R values to input physics, and suggests that the R method cannot presently be used for the determination of the absolute He abundance. Instead, we can use the R parameter to constrain the input parameters of the model. If we rely on primordial He abundance measurements in extragalactic HII regions, and we assume that mixing processes have been properly accounted for in current HB models, the data plotted in Fig. 2 suggest that the current value of the $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ nuclear cross section should be roughly a factor of two smaller.
This notwithstanding, we can still use the R parameter to constrain the still controversial $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ value. First of all, we note that even for constant Y, the R values plotted in Fig. 1 present a trend with metallicity. For \[M/H\]$`>-1`$ the empirical R values show an increase and then a flat distribution. As already noted by Desidera et al. (1997), this behavior is due to the fact that an increase in metallicity causes the luminosity of the RGB bump to become fainter than the ZAHB luminosity, as shown by Z99, and it is well reproduced by the models (Fig. 4). In order to investigate the trend of the relative He content with the global metallicity, in Fig. 3 we plot the residuals of the measured R values with respect to the model that best reproduces the data in Fig. 1, i.e. Y=0.20. We emphasize that the absolute value of Y adopted as reference only changes the vertical zero point of Fig. 3. We would obtain the same distribution using a Y=0.23 model with a $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ cross section smaller by a factor of two. The two dashed lines show the expected variation in the He abundance for two different assumed values of $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$. The present data suggest that the upper limit to the He to metal enrichment ratio should be $`\sim 2.5`$. In particular, the data would appear to exclude $`\mathrm{\Delta }Y/\mathrm{\Delta }Z\sim 5`$, because this would require, at metallicity \[M/H\]$`\sim -0.3`$, a number of HB stars lower than that observed by about $`30\%`$, or, conversely, a number of RGB stars higher by the same amount. Although toward higher metallicity our counts could be contaminated by upper AGB stars, the stellar lifetimes of these objects cannot account for such a large effect. On the other hand, the number of red HB stars can be affected by systematic errors, although not as large as $`30\%`$, only for the clusters affected by strong differential reddening, such as NGC 6388 (because the HB red clump merges into the RGB). The CMDs of other metal-rich clusters (NGC 5927 and NGC 6624) show very well separated HB and RGB sequences (Sosin et al. 1997) and cannot be affected by such a problem. Our finding of $`\mathrm{\Delta }Y/\mathrm{\Delta }Z\lesssim 2.5`$ confirms the original results by Peimbert & Torres-Peimbert (1976), and more recently by Peimbert & Peimbert (2000, and references therein).
Figure 3 shows another interesting feature: metal-poor GGCs present on average a larger R value. A similar conclusion was recently reached by S99, who also noted that GGCs with blue HB morphology have systematically large R parameters; they offer no explanation for this effect. Keeping this problem in mind, we investigate the behavior of each single term in the predicted time ratio. Figure 4 shows the key theoretical ingredients of this parameter, i.e. $`t_{HB}`$ (top), $`t_{RGB}`$ (middle), and the magnitudes of both ZAHB and RGB bump (bottom) as a function of metallicity. The dependence of both $`M_V(ZAHB)`$ and $`M_V(bump)`$ on metallicity accounts for the slope disclosed by both empirical and theoretical R values (see Fig. 1). In fact, for metallicities larger than \[M/H\]$`\sim -1`$, $`t_{RGB}`$ presents a sudden decrease caused by the fact that the RGB bump becomes fainter than the ZAHB. This occurrence implies a strong decrease in $`t_{RGB}`$, since, for clusters in this metallicity range, the RGB bump phase alone contributes $`\sim 20\%`$ of the total $`t_{RGB}`$ value. This means that small changes in $`M_V(ZAHB)`$ can cause substantial variations in the R value. This effect explains why the R values of the intermediate metallicity clusters present a large scatter.
The values of $`t_{HB}`$ shown in Fig. 4 refer to a star located in the middle of the RR Lyrae instability strip ($`\mathrm{log}T_e=3.85`$). However, current evolutionary calculations (Castellani et al. 1994) suggest that the lifetime of blue tail HB stars is roughly 30% longer than the lifetimes of HB stars located inside the instability strip. As a consequence, to assess whether predicted R values are affected by systematic uncertainties we must explore in more detail the dependence of the R parameter on $`t_{HB}`$ and in particular on HB morphology. We split the HB into three different regions, namely the HB stars bluer than RR Lyrae variables (B), the RR Lyrae variables (V), and the HB stars redder than RR Lyrae variables (R) (Lee, Demarque, & Zinn 1994). As plausible estimates of the blue and red edges of the RR Lyrae instability strip we adopted, according to Bono et al. (1997b), $`T_e\sim 7300`$ and 5900 K, respectively. On the basis of these ingredients, and by assuming a linear mass distribution along the HB, we estimated the average HB lifetime, $`\overline{t_{HB}}`$, at fixed He abundance $`Y=0.20`$, for stars belonging to the B, V, and R regions respectively. Figure 5 shows the theoretical R parameters for the three different HB morphologies. The R values based on the HB lifetime of RR Lyrae stars — $`\overline{t_{HB}}(V)`$ — are almost identical to those based on $`\overline{t_{HB}}(R)`$, thus suggesting that $`t_{HB}(\mathrm{log}T_e=3.85)`$ is representative of the central He-burning lifetime for GCs characterized by HB stars with $`\mathrm{log}T_e\lesssim 3.86`$. On the other hand, we find that the lifetimes of blue HB stars — $`\overline{t_{HB}}(B)`$ — are approximately 20% longer than $`\overline{t_{HB}}(V)`$ and $`\overline{t_{HB}}(R)`$. This means that the R values of GGCs with blue HB morphologies are expected to be $`\sim 0.25`$ units higher than those with red HBs. Therefore the observed high R values in metal-poor clusters are not due to a real increase in the He abundance, but are likely the consequence of their blue HB morphology. The large scatter among metal-poor clusters is mainly due to the different stellar distributions along the blue tail (see Fig. 9 in Piotto et al. 1999).
In conclusion, a correct measure of the absolute He content on the basis of the R parameter requires the construction, for any given cluster, of a synthetic CMD which properly reproduces the distribution of the stars along the HB and in turn a meaningful evaluation of the $`t_{HB}`$. However, this approach is beyond the scope of the present investigation.
## 4. Conclusions
We have measured the helium-sensitive R parameter in 26 Galactic globular clusters imaged with WFPC2 on board the Hubble Space Telescope. Our calculated R values are based on star counts that are corrected for completeness and tested for radial variations within each cluster. The high quality HST photometry also permits a clearer separation of the HB, AGB, and RGB stars.
The comparison between predicted and empirical R values appears to be consistent with the absolute He abundance being lower than that found from the observations of HII regions and from primordial nucleosynthesis models. One approach to overcome this discrepancy is to adopt a $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ nuclear cross section about a factor of two smaller than current canonical values. We note that HB lifetimes depend not only on nuclear reaction rates, but also on the efficiency of mixing processes and on the algorithms adopted for handling these physical mechanisms. In fact, as recently suggested by Cassisi et al. (2000), current algorithms adopted for quenching the "breathing pulses" introduce a $`\sim 5\%`$ uncertainty on $`t_{HB}`$. As a consequence, the R parameter cannot presently be absolutely calibrated in terms of a helium abundance.
The only trend in our data set is an unphysical trend toward higher helium abundance in the clusters of lowest metallicity. These clusters tend to have blue horizontal branches, and we argue that longer HB lifetimes in high temperature HB stars likely account for this trend. In fact, the global trend in R with metallicity is well accounted for by changes in HB lifetime as a function of metal abundance.
The trend in the R values of the metal-rich globular clusters in our sample is consistent with an upper limit of 2.5 for the helium to metal enrichment ratio ($`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$). The increased dispersion in R for the intermediate metallicity clusters may be caused by the RGB bump fading below the HB luminosity at \[Fe/H\]$`\sim -1`$, causing a drop in the calculated RGB lifetime. We conclude that these factors make the R values of low and intermediate metallicity globular clusters less useful in constraining the helium abundance. Accurate photometric data for metal-rich globular clusters, however, do place an interesting constraint on $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$.
The GGCs in the Galactic bulge are key targets for such measurements. Collecting high quality data for these clusters is not trivial, since they are often affected by high absolute and differential extinction (Ortolani et al. 1999), but the new NIR detectors should allow us to overcome these difficulties and to secure accurate data for a sizable fraction of cluster stars. A reconsideration of the bulge clusters and field population would also be in order given the results of Minniti (1995); when his counts are corrected for the contribution of AGB stars, the bulge fields have R values of 1.7 to 2, higher than those of the most metal-rich clusters in our sample.
The promise of the R parameter as a constraint on the primordial helium abundance remains frustratingly unfulfilled. Two further thorny problems affect sound empirical estimates of this parameter, namely foreground contamination and radial population gradients. The former can be overcome by including only the innermost regions, whereas the latter remains an open problem, since we still lack a systematic and quantitative estimate of this effect in GGCs from the very center to the tidal radius (Walker 1999). Precise comparison of synthetic H-R diagrams with observational data may be required to achieve further progress.
We thank an anonymous referee for his/her pertinent comments and suggestions that improved the content and the readability of the paper. This work was supported by MURST under the projects: "Stellar Dynamics and Stellar Evolution in Globular Clusters" ( G. P. & M. Z.) and "Stellar Evolution" (G. B. & S. C.). S.G.D., and R.M.R. acknowledge support by NASA through grant GO-6095 and GO-7470 from the Space Telescope Science Institute. Partial support by ASI and CNAA is also acknowledged.
# Michelson Interferometry with the Keck I Telescope
## 1 Introduction
Developments in detector technology and opto-electronic hardware over the past decade have meant that real-time adaptive optical systems have now become a common feature of large ground-based optical and near-infrared telescopes (recent reviews may be found in Bonaccini & Tyson 1998, Hardy 1998, and Roddier 1999). However, while adaptive optics has enjoyed considerable recent success, other techniques that utilize post-detection data processing, rather than real-time compensation, have remained valuable for imaging at the very highest angular resolutions. The best known, and most straightforward of these to implement, is speckle imaging Labeyrie (1970); Weigelt (1991); Negrete-Regagnon (1996), in which sequences of short exposures of a target and an unresolved calibrator are used to recover high-resolution maps beyond the natural seeing limit. Although this method in principle allows the recovery of images of arbitrary complexity, the difficulty of attaining an adequate signal-to-noise ratio has meant that it has mainly been confined to studies of binary stars (see, for example, Patience et al. 1998) and other astronomical sources with similarly simple geometries (though Weigelt et al. 1998 is a recent counterexample).
One solution to this signal-to-noise problem is to modify the pupil geometry of the telescope using a mask so as to mimic the operation of a separated-element interferometer array such as the VLBA. This process can be considered as finding an optimal balance between the level of atmospheric perturbations, the number of photons, and the amount of structural information measured about the source – all of which increase as the pupil area rises. When an aperture mask is being used, the data collection and analysis methods are similar to those utilized for speckle imaging, but with a reduction in the number of independent spatial frequencies measured, which is balanced by an improved signal-to-noise ratio on the data which are obtained. This post-processing approach has been widely exploited at optical wavelengths where it has established itself as the only method by which reliable images of the surfaces of nearby stars at the diffraction limit have been recovered for ground-based telescopes (see, for example, Buscher et al. 1990; Wilson Dhillon & Haniff 1997; Tuthill Haniff & Baldwin 1999a).
In this paper, we report the first aperture masking experiments to exploit the new generation of 10 m-class telescopes. We have used the Keck I telescope with a variety of sparse multi-aperture pupil masks both to verify the signal-to-noise and calibration advantages of these pupil geometries, and to demonstrate the ability of this method to provide diffraction-limited imaging of resolved targets in the near infrared with excellent dynamic range. We have used multi-epoch measurements to establish the reliability of our imaging, and present near infrared maps of the highly-structured dust shells of a number of evolved stars at resolutions exceeding 50 milli-arcseconds.
## 2 Experimental Design
Aperture masking and conventional speckle interferometry share much in common, including the ultimate goal of recovering the complex visibility function of the target of interest (i.e. the Fourier transform of its brightness distribution) at all spatial frequencies up to the telescope diffraction limit. Generally, although Fourier amplitudes can be measured Fizeau (1868); Michelson (1890), phases are scrambled by the atmosphere necessitating the use of an observable known as the closure phase Jennison (1958). More recently, closure phase concepts have been generalized into the mathematical formalism of bispectral or triple correlation analysis Lohmann Weigelt & Wirnitzer (1983).
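To make the phase-cancellation property of the triple correlation concrete, the short Python sketch below (our own illustrative code; the function names and the toy point-source test are not from any cited work) forms the bispectrum on one triangle of sub-apertures and shows that station-based atmospheric phases drop out:

```python
import numpy as np

def closure_phase(vis, tri):
    """Closure phase (degrees) for one baseline triangle.

    vis : dict mapping ordered station pairs (i, j), i < j, to the
          complex visibility measured on that baseline.
    tri : tuple of three station indices (i, j, k), i < j < k.
    The triple product V_ij * V_jk * conj(V_ik) cancels the
    atmospheric piston phase at each station.
    """
    i, j, k = tri
    bispectrum = vis[(i, j)] * vis[(j, k)] * np.conj(vis[(i, k)])
    return np.degrees(np.angle(bispectrum))

# Toy check: corrupt a point source (true V = 1 on all baselines) with
# random station phases; the closure phase stays zero.
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, size=3)            # atmospheric phases
vis = {(0, 1): np.exp(1j * (phi[0] - phi[1])),
       (1, 2): np.exp(1j * (phi[1] - phi[2])),
       (0, 2): np.exp(1j * (phi[0] - phi[2]))}
print(closure_phase(vis, (0, 1, 2)))                # -> ~0.0
```

Because each station phase enters one baseline with a plus sign and another with a minus sign, the product is independent of the atmosphere; only source-intrinsic phase survives.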
Apart from the mask itself, the instrumental set-up required for masking is almost identical to that for speckle, with sequences of short-exposure data frames being recorded at high magnification, allowing for the image degradation caused by atmospheric turbulence to be removed during post-processing. The principal difference between the two types of experiments is that in aperture masking the pupil geometry can be adjusted to optimize the signal-to-noise ratio (SNR). The implementation of a masking interferometer at the Keck I telescope is discussed in more detail in the following sections.
### 2.1 Optical Setup
Unlike most earlier aperture masking experiments (e.g. Haniff et al. 1987; Readhead et al. 1988), which used screens placed in a small re-imaged pupil, masking of the Keck primary was achieved by placing large hexagonal aluminium masks ($`30`$ cm on a side) directly in front of the f/25 infrared secondary mirror as shown in Figure 1. At this location, beams propagating to the detector are sufficiently well separated that masks can be treated as selecting discrete portions of the pupil, despite this not being a true pupil plane. With this design, however, the masks intercept radiation from the source twice: once on the way from the primary to the secondary, and a second time when traveling back towards the detector. As a consequence, masks selecting $`N`$ sub-apertures on the primary mirror in principle require $`2N`$ holes. In order to accommodate this double pass correctly, ray tracing software was used to confirm that rays passing through each sub-aperture could indeed be traced back to discrete regions on the primary mirror. The masks were fabricated of 3/16” aluminium sheet, and were mounted on a custom-built cylindrical post that protruded through the central hole of the f/25 secondary. At this location, the masks could be accessed during the night, permitting changes from one mask to another to optimize the configuration for a given source (this swap procedure took approximately 10 min). A typical desired pupil shape and the mask required to implement it are shown in Figure 2.
After passing through the mask, the beams were focused on the Keck facility Near-IR Camera, NIRC, using external magnifying optics (the so-called image converter; Matthews et al. 1996). The plate scale was 20.57 milli-arcsec/pixel on the 256$`\times `$256 pixel InSb array; sufficient to Nyquist sample data collected in the K-band or longer wavelength bands. The observing wavelengths were selected from NIRC’s standard complement of interference filters, which offered a range of bandwidths (1 $``$ 20%) covering the 1 – 3.5 $`\mu `$m region.

Fig 1.— Optical ray-trace of starlight from the primary mirror of the Keck telescope (right) to the Near InfraRed Camera (NIRC). Aperture masks placed in front of the secondary mirror as shown must be designed to take account of complications arising from the double light path, and the effects of the converging beam at this point in the optical train.
Although the masks were not cooled, this had little impact on the experiment. At wavelengths shorter than the K band, the fast exposures (typically 140 ms or less) ensured that there was little contribution from thermal emission. At longer wavelengths, thermal emission from the masks should have been a significant factor. However, the masks produced only a negligible increase in the thermal background, primarily because of the design of the NIRC magnifying optics. Because NIRC contains only a single cooled Lyot stop, when the image converter (Matthews et al. 1996) is in use, the cold stop is significantly oversized. The array is exposed to room-temperature radiation (mostly from baffles and other camera structures), so the presence of the masks had little effect on the thermal background, which was already dominated by ambient flux.
### 2.2 Mask Design
Most pupil mask designs used non-redundant configurations of sub-apertures, each with an effective size when projected onto the primary mirror of 20 – 35 cm. This was tailored to be of order the seeing scale size, $`r_o`$. The lack of redundancy ensured that any Fourier component measured could be uniquely identified with a particular pair of sub-apertures, while the small sizes minimized the effects of wavefront perturbations across each sub-aperture ($`<1`$ radian r.m.s). However, the segmented nature of the Keck primary mirror, and the undersized infrared secondary significantly complicated the task of locating the sub-apertures. In particular when designing the masks, geometries where sub-apertures were crossed by the telescope spider or panel boundaries in the segmented primary mirror had to be avoided where possible. Designs were also driven by the desire that snapshot Fourier plane coverage be uniform and isotropic. The specific nature of these constraints meant that it was not possible to exploit the results of Keto (1997) or Cornwell (1988), both of whom explored optimum snapshot array configurations for radio interferometers.
Mask designs were based on the approach of Golay (1970), in which 3-fold symmetric spaces were searched for non-redundant array solutions with compact and dense uv-coverage. Examples of arrays developed for 15- and 21-hole masks are shown in Figures 2 and 3. As can be seen in the 21-hole mask, the Fourier plane coverage has a densely filled core out to baselines of approximately 3 m, and then adequate, but not isotropic coverage out to the edge of the telescope pupil. Solutions with numbers of apertures as large as 36 were found, but for many experiments a 21-hole mask was more than adequate: this allowed the simultaneous measurement of 210 baselines and 1330 closure phases, yielding an excellent snapshot imaging capability.
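A minimal sketch of the non-redundancy test underlying such searches is given below (illustrative Python, not the design software actually used); it checks that no two hole pairs of a candidate geometry share a baseline vector to within roughly a sub-aperture diameter:

```python
import numpy as np
from itertools import combinations

def is_nonredundant(holes, tol=0.05):
    """True if no two hole pairs share the same baseline vector.

    holes : sequence of (x, y) sub-aperture centers on the primary [m].
    tol   : matching tolerance [m], of order the sub-aperture size.
    Baselines b and -b sample the same Fourier component, so each
    baseline is first flipped into a canonical half-plane.
    """
    baselines = []
    for (x1, y1), (x2, y2) in combinations(holes, 2):
        b = np.array([x2 - x1, y2 - y1])
        if b[0] < 0 or (b[0] == 0 and b[1] < 0):
            b = -b                       # canonical half-plane
        baselines.append(b)
    for bi, bj in combinations(baselines, 2):
        if np.linalg.norm(bi - bj) < tol:
            return False
    return True

# Example: a redundant 3-hole geometry on a line fails the test.
print(is_nonredundant([(0, 0), (1, 0), (2, 0)]))   # -> False
```

For a 21-hole non-redundant geometry this test passes with 21×20/2 = 210 distinct baselines and 1330 closure triangles, the figures quoted above.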
Apart from defining the spatial frequencies measured by the telescope, the aperture mask also served to limit the total amount of flux collected. For observations of bright sources (M supergiants and Miras can have K magnitudes as high as $`4`$ mag) this was an important feature, as the use of the unobscured pupil would have saturated NIRC in a small fraction of the minimum available exposure time, despite the use of the narrowest available filters.
For dimmer sources, non-redundant masks such as those described above did not transmit enough flux to overcome the array readout noise. The use of partially-redundant annular masks, as suggested by Haniff & Buscher (1992), provides continuous Fourier plane coverage and enhanced throughput at the expense of only twofold redundancy. This approach has been adopted here. An example of such a mask with a throughput of approximately 10% is shown in Figure 3, together with a short exposure interferogram and its power spectrum. For sources with K magnitudes fainter than about 4 mag, the SNR from masking interferometry was dominated by readout noise, undermining the advantages offered by sparse pupils and necessitating the use of the full pupil.
One unusual feature of our non-redundant Golay masks is that they allowed measurements of interference fringes that were sub-Nyquist sampled. As mentioned in section 2.1, the pixel scale of NIRC only allowed for Nyquist sampling of the longest available baselines at wavelengths greater than 2 $`\mu `$m. At shorter wavelengths, power at the corresponding spatial frequencies is “aliased” back into the power spectrum at lower spatial frequencies, and in most experiments, this signal will overlap and become confused with other shorter baseline signals. With the sparse pupils used here, however, these aliased signals were often mapped back onto otherwise unsampled frequencies and so measurements of the long baseline interference signals could still be made. In practice, the undersampling attenuated the signals significantly, and the recovery of useful data was only possible in cases where the signal-to-noise was initially very high. Nevertheless, we have used this device to recover the J band visibility functions of a handful of the brightest supergiants and Miras at the highest spatial frequencies where the fringes were sub-Nyquist sampled.
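The folding of super-Nyquist fringe power can be written down directly; the sketch below uses illustrative numbers computed from the 20.57 mas pixel scale given earlier (the baseline and wavelength chosen here are examples, not a specific observation):

```python
def aliased_frequency(u, u_nyquist):
    """Map a true spatial frequency u (cycles/arcsec) to the frequency
    at which it appears in the sampled power spectrum. Power beyond the
    Nyquist limit folds back symmetrically about it."""
    u = abs(u)
    period = 2.0 * u_nyquist
    u_fold = u % period
    return period - u_fold if u_fold > u_nyquist else u_fold

u_nyq = 1.0 / (2.0 * 0.02057)            # ~24.3 cycles/arcsec
u_true = (10.0 / 1.25e-6) * 4.8481e-6    # 10 m baseline at 1.25 um, J band
print(aliased_frequency(u_true, u_nyq))  # -> ~9.8 cycles/arcsec
```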
## 3 Observing Procedure
In view of the close relationship between aperture masking and conventional filled aperture speckle interferometry, observations were secured using a schedule very similar to that used in other high resolution imaging experiments (see, e.g. Matthews et al. 1996). For each target 100 short exposure interferograms were collected, after which a smaller number (usually 10) of identical exposures were secured on nearby blank sky. The exposure times were limited by the array readout, with times of 140 ms possible for the full $`256\times 256`$ NIRC array. This is longer than the typical atmospheric coherence time ($`t_0`$) expected at $`2.2`$$`\mu `$m ($`40`$ ms) resulting in added atmospheric noise for the bispectral measurements. However, integration times of a few $`t_0`$ preserve significant fringe power and actually deliver higher SNR in the photon-noise limited (faint source) regime Buscher (1988), while the closure phase is even more robust and can be measured with integrations of many $`t_0`$ Readhead et al. (1988). Shorter exposure times could be obtained by reading out smaller sub-frames of the array, but in general the large sizes of the interferograms, resulting from the small dimensions of the pupil sub-apertures, meant that sub-framing was problematic for reasons of data calibration and loss of field of view.
Calibration of the mean telescope-atmosphere transfer function was performed by interleaving the 100 source + 10 sky datasets with identical exposures of nearby similarly bright calibrator stars. Sets of these “matched pairs” of data were secured for each source and wavelength of interest, giving a total elapsed time for each complete observation of order ten minutes. Currently this time remains limited by the $``$10% duty cycle of the real-time archiving software available at NIRC, which was not designed with this mode of operation in mind: custom designed hardware would likely increase the data collection rate by an order of magnitude. Despite the low duty cycle of the camera, the use of non-redundant pupils allowed high-signal-to-noise image reconstructions of resolved targets to be achieved with relatively small numbers of specklegrams (i.e. as few as a hundred). This should be contrasted with the thousands typically used in filled-aperture speckle experiments where the atmospheric redundancy noise limits the signal-to-noise per frame of the Fourier measurements to unity even at high light levels.
Where possible, filter bandpasses of $`\stackrel{<}{_{}}5`$% were used in order to minimize the effects of the expected differences in spectral type between the targets and their calibrator stars. Without such precautions, the differing spectral shapes over the bandpass could affect the calibrated data. The most interesting sources – those with suspected well resolved structure – were observed at two well separated times during the night. This allowed independent checks to be made on the reliability of source structure determinations since spurious signals associated with the mask and detector array are not expected to rotate with the sky. Additional observations were also secured each night of a number of binary stars with well determined orbits to allow independent calibration of the detector orientation and scale and to assess the reliability of the subsequent data reduction pipeline.
## 4 Data Reduction
Analysis of the data followed standard methods for aperture masking experiments and involved the accumulation of the power spectra and bispectra of each set of interferograms. As a first step, each short exposure image was dark-subtracted, flat fielded, and cleaned of pattern noise arising from transients in the readout electronics and variations in the behavior of the four different readout amplifiers. Images were subsequently windowed with a two-dimensional Hanning function tapered to zero so as to eliminate edge effects. Power spectra could now be computed frame by frame from the squared modulus of the Fourier transform. Stellar fringe signals appeared as power at discrete locations in such spectra, with the origin occupied by a peak whose height was proportional to the squared flux in the frame, while the remaining areas were filled with a signal caused by a combination of photon and readout noise (for illustration, see Figure 3).
The noise power level, which would otherwise bias the measurements, could be obtained by averaging over those regions where no stellar signal was expected (usually, but not always, at the edges of the power spectra). Having subtracted off the noise bias, squared visibilities (Fourier amplitudes) for the stellar interference signal were found by taking the ratio of the power at the spatial frequency of the fringes to the power at the origin, and then normalizing with the corresponding signal from the calibrator spectrum. Error estimates were derived from the spread in values amongst each ensemble of 100 exposures. Some typical results from this procedure are shown in Figure 4.
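A schematic, per-frame version of this reduction is sketched below in Python (an illustrative re-implementation, not the original pipeline; the array names and region definitions are placeholders). The target's calibrated squared visibility is the ratio of this quantity to the same quantity measured on the reference star:

```python
import numpy as np

def frame_power_spectrum(frame, dark, flat):
    """Power spectrum of one short-exposure interferogram.

    Calibration steps follow the text: dark subtraction, flat
    fielding, Hanning apodization, then |FFT|^2. Noise-bias removal
    and normalization are done on the ensemble average by the caller.
    """
    img = (frame - dark) / flat
    ny, nx = img.shape
    window = np.outer(np.hanning(ny), np.hanning(nx))
    f = np.fft.fftshift(np.fft.fft2(img * window))
    return np.abs(f) ** 2

def calibrated_v2(power_avg, bias_mask, peak_xy, origin_xy):
    """Squared visibility: bias-subtracted fringe power over the
    bias-subtracted total-flux (zero-frequency) power.

    bias_mask : boolean array selecting a signal-free region.
    peak_xy, origin_xy : integer (row, col) tuples.
    Dividing by the calibrator's value gives the final V^2.
    """
    bias = power_avg[bias_mask].mean()
    return (power_avg[peak_xy] - bias) / (power_avg[origin_xy] - bias)
```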
Fourier triple products, or bispectral data, could also be computed for each frame. Closure phase information was recovered from the argument of the complex bispectral data accumulated over the ensembles of short exposures. Again the measured closure phases for the reference stars were used to calibrate the measurements of the targets; such corrections, indicative of optical aberrations in the telescope/camera, were however small ($`\stackrel{<}{_{}}`$ a few degrees). Unlike filled aperture speckle experiments, the use of an aperture mask ensured that only a very small subset of the full bispectral hypervolume needed to be accumulated, dramatically reducing computational and data storage requirements. A standard workstation was adequate for all data processing.
Having obtained sets of calibrated Fourier amplitudes and closure phases, diffraction-limited images were recovered using standard radio astronomical self-calibration methods. These techniques, originally developed for dilute phase-unstable radio arrays such as the VLBA, are readily transferable to this application since our data, i.e. sampled Fourier amplitudes and closure phases, are almost identical to those delivered by arrays such as the VLBA. The imaging results reported here were obtained using a “Maximum Entropy Method” based implementation of self-calibration (Gull & Skilling 1984, Sivia 1987), but in all cases reconstructions from CLEAN-based methods (Högbom 1974) gave similar results. In many instances the extraction of quantitative information was achieved in a more robust and precise fashion from model-fitting directly to the Fourier data than from mapping, particularly when source structure was relatively simple and/or partially resolved.
## 5 Wavefront Coherence
Since the success of interferometric imaging is critically dependent on the coherence properties of the incoming wavefront, experiments such as the ones reported here can in principle yield valuable information on the stability and aberrations introduced by mechanical deformation of the telescope, and by fluctuations in the atmosphere. The former of these is particularly relevant in consideration of the performance of the segmented primary mirror. Quantitative investigations require a numerical simulation of the phase irregularities introduced by both the atmosphere and telescope pupil. Although we have performed such computations, a full discussion of this work lies beyond the scope of the present report. Instead, a number of general conclusions and difficulties experienced, some of which may be peculiar to the Keck, are outlined below.
The most obvious manifestation of optical aberrations related to the segmented primary is the loss of spectral power (or fringe visibility), implying decorrelation of the wavefronts, at spatial frequencies which can be traced back to the locations of the edges of the hexagonal primary panels. This effect, sometimes referred to as “print-through” from the segmented pattern which appears in the power spectra, has also been seen by workers undertaking full-pupil speckle observations (Ghez 1997). We have studied this effect in two ways. First, numerical simulations involving turbulence-degraded wavefronts have verified that significant phase discontinuities ($`\stackrel{>}{_{}}`$0.5 $`\lambda `$) at the segment boundaries do indeed result in a loss of spectral power which mimics the observations. Second, experimental confirmation of this effect has been obtained by deliberately displacing selected mirror segments in small increments with respect to their neighbors. The overall finding of these studies is that from 1995 through 1997, the Keck I primary was poorly phased when working in the infrared, with most segment edges exhibiting $`\stackrel{>}{_{}}`$0.5 $`\lambda `$ phase steps. This can probably be traced to sub-optimal performance of the “malign” alignment procedure. In 1997, however, the primary mirror alignment procedures Troy Chanan Sirko & Leffert (1998); Chanan et al. (1998) were refined, and since then phase discontinuities, while still present, have been greatly ameliorated.
A further problem we have experienced that is possibly related to the segmented primary concerns the anomalously low visibilities measured at long baselines. Measured point-source visibilities were lower by a factor of 2 – 3 than the values expected on the basis of numerical simulations. One likely candidate for this loss is the active control system responsible for maintaining the relative alignment between segments. Significant oscillation or jitter of the system is known to occur (Wizinowich 1999), which perturbs the segments at frequencies of tens of Hertz – rapidly enough to blur the fringes and introduce a loss in visibility comparable to that we have observed.
Perhaps most damaging of the limitations encountered was the difficulty in ensuring the intermediate-term ($`\sim `$10 min) stability of the optical transfer function necessary for reliable calibration of the measurements. Mis-calibration effects, introduced where there had been a change in the seeing-averaged system transfer function between measurements of source and calibrator, were the major obstacle in recovering high fidelity image reconstructions. One can envisage many sources of this problem. Without a doubt, changes in the local seeing could have occurred over the $`\stackrel{>}{_{}}`$ 4 min time lag between source and calibrator observations. Although the use of an aperture mask is known to limit the sensitivity of spatial interferometry to seeing variations, some mis-calibration must be traced to this. Alternatively, any flexure of the telescope structure or changes in the phasing of the primary mirror could contribute to a non-stationary transfer function. In practice, the reference stars used were invariably at different elevations in the sky (worst cases could be greater than ten degrees away), and inevitable changes in the telescope wind loading and temperature will all have contributed to the difficulty of maintaining tolerances for interferometric observations. Further work to identify and remedy this calibration problem would dramatically enhance the precision of this high spatial resolution experiment.
## 6 Results and Discussion
Observing in the near-infrared with the 10 m baselines available at the Keck yields an angular resolution appropriate for large targets. The excellent snapshot Fourier coverage of the telescope provides a densely sampled stellar visibility function with orders of magnitude greater efficiency than a separated-element array with a small number of stations. The ability to secure data rapidly and through a wide range of narrow bandpasses has been particularly useful for investigating the atmospheric structure of cool supergiants. Stellar diameters are common science targets in high-angular resolution astrophysics (see, e.g. van Belle et al. 1999 for some recent results). Figure 5 shows the azimuthally averaged visibility function of Betelgeuse ($`\alpha `$ Ori) at four different wavelengths as measured in 1997 December. Although a uniform disk model appears to be quite a good fit to the data at all the measured wavelengths, the star appears to exhibit an anomalously large diameter at $`3.08`$$`\mu `$m ($`\delta \lambda =0.10`$$`\mu `$m), probably related to the presence of atomic or molecular absorption within the bandpass changing the optical depth of the stellar atmosphere.
In cases where the sources were significantly resolved, very high quality images could be obtained. Figure 6 shows maps of the binary stars 126 Tau and $`\alpha `$ Com recovered from 100 interferograms. For both maps the dynamic range achieved, as measured by the ratio of the noise in the map to the peak intensity, was considerably better than 0.5% or 1:200. The nonlinear contour levels in the plots of Figure 6 were chosen to highlight features at the level of the noise. The ultimate limitations to the dynamic range can be traced back to systematic miscalibration of the visibility amplitudes and poor handling of noise on the closure phases by the mapping software. Insufficient Fourier sampling and low signal flux – problems which are exacerbated by the use of a mask – were rarely an important contribution to the mapping error budget.
More powerful arguments for the use of sparse pupil geometries in near infrared speckle imaging are provided by our results for complex resolved sources. Figure 7 shows two such images of the dust-enshrouded carbon star CIT 6 and the IR-bright Wolf-Rayet star WR 104. Both of the sources are clearly resolved at the tens-of-milliarcsecond scale, and their complex structures, which have been confirmed through independent repeated observations (see, e.g., Tuthill et al. 1999b), demonstrate the unique facility that the combination of interferometric methods with the large Keck primary provides. Both maps have dynamic ranges better than 100:1, and are of comparable quality to the images routinely produced by modern radio VLBI arrays. Illustrations of the high level of repeatability for maps taken over separate epochs can be found in Monnier et al. (1999a).
Scientific results from the masking program at Keck have encompassed a range of astronomical topics from measurements of stellar photospheres (e.g. Monnier et al. 1997; Tuthill Monnier & Danchi 1998b; Tuthill et al. 2000) to imaging circumstellar dust shells in evolved stars (e.g. Tuthill et al. 1998a,1999c; Danchi et al. 1998; Monnier et al. 1999a) and enshrouded Wolf-Rayets (Tuthill et al. 1999b,c; Monnier Tuthill & Danchi 1999). As is discussed further in Monnier (1999), the parameter space addressed by this experiment – bright objects ($`m_k\stackrel{<}{_{}}4`$ mag) with resolvable structure on 10 m baselines – is a particularly rich one since the combined resolution and magnitude limits are fortuitously matched for targets containing hot astrophysical dust at around $`1000`$ K. Further discussion of the program stars and techniques may be found in Tuthill et al. (1998c) and Monnier (1999).
In contrast to the situation only decades ago, astronomers are now armed with a number of different techniques all aimed at overcoming the seeing limit in astronomical observations. These include aperture masking, separate element interferometry, speckle interferometry, observations from space, and adaptive optics (AO), with the distinctions between these areas becoming increasingly blurred. Non-redundant, or partially-redundant masking occupies a particular niche in this parameter space, and has been successful for bright objects resolvable with relatively modest baselines. The primary competing technologies here are speckle interferometry and adaptive optics, which we discuss in turn below.
Although there has been much debate on the relative merits of masking versus speckle interferometry, there is agreement that for the faintest objects (where “faint” is ultimately defined as a small number of photons in a coherent patch per coherence time, but in practice is usually governed by readout noise in the IR array detector before this limit is attained) then speckle interferometry is the superior choice. For bright sources, there is little doubt that masking offers dramatic signal-to-noise advantages for discrete measurements of Fourier amplitudes and closure phases (e.g. Roddier 1987). However, filled-pupil speckle interferometry is capable of recovering far greater volumes of data, completely filling the bispectral volume, albeit at low signal-to-noise. A number of studies have addressed the comparison of non-redundant versus filled pupil from theoretical, numerical simulation, and observational approaches Readhead et al. (1988); Haniff & Buscher (1992); Buscher & Haniff (1993), and have concluded that there are clear regions where masking can outperform filled-pupil techniques in recovering diffraction limited images of arbitrary celestial objects. In practice, superb images have been recovered using both techniques (for examples of recent speckle images, see Weigelt et al. 1998). A choice of which technique should be used is often also driven by more mundane considerations, which in our case included saturation of the camera on bright stars and the availability of a well characterized and mature image reconstruction code, both of which favor a masking strategy.
In making a comparison of the relative merits of adaptive optics, a separate set of considerations presents itself. Many of the masking program stars, in addition to being bright, also exhibit a compact core and therefore might be thought ideal targets for natural guide-star adaptive optical systems. However, a strong note of caution needs to be sounded against the optimistic projection that such systems will render post-processing techniques such as masking or speckle rapidly obsolete. Obtaining true diffraction-limited images from current AO data requires careful deconvolution of the point-spread function (PSF). It has been our experience (Monnier et al. 1999a) that the PSF of an AO system is difficult to fully characterize and, worse, is not stationary when the telescope is moved to a calibration star, which will usually be of different brightness, elevation and spectrum. In his recent review of AO, Ridgway (1999) notes that the problem of erratic PSF artifacts, which can easily masquerade as genuine source structure, is endemic to current-generation AO systems. For all its unappealing appearance, an uncompensated speckle cloud has the advantage that it results from simpler underlying processes – the atmosphere and telescope optics only – with the result that (in the high light limit) it is easier to calibrate. We hasten to point out that AO offers numerous and dramatic advantages across a wide spectrum of observational problems (e.g. Ridgway 1999); our discussion here is limited only to bright stars with structure at the highest angular resolutions.
## 7 Conclusions
Results from the first aperture masking experiment performed on a 10 m class telescope are presented. A suitable choice of non- or low-redundancy pupil geometries has been found to dramatically improve the signal-to-noise on recovered bispectral data. Reliable images with complex and asymmetric structure at the diffraction limit have been routinely produced in the near-infrared JHK and L bands. With dynamic ranges in excess of 200:1, and demonstrated repeatability of map structure over multiple observing epochs, the expected advantages of sparse-aperture interferometry for bright targets have been confirmed. In a comparison of aperture masking, full-pupil speckle, and adaptive optics, the most reasonable conclusion appears to be that each technique has regions of the parameter space of source brightness and spatial structure where it offers superior performance. The existence of such complementary observational techniques will certainly be beneficial in addressing a range of problems in high resolution astronomy, with masking being at its most effective for the brightest objects at the highest angular resolutions. The robust reconstruction of complex brightness distributions from sparsely sampled Fourier data augurs well for the future of the next generation of separate-element ground-based imaging arrays with baselines in excess of an order of magnitude larger than those available here.
The Authors would like to thank Gary Chanan and Mitchell Troy for their help with performing the segment phasing experiments. We would also like to thank Everett Lipman, Charles Townes, Peter Gillingham, and Terry McDonald, all of whom have contributed to the success of our endeavors. Devinder Sivia kindly provided the maximum entropy mapping program “VLBMEM”, which we have used to reconstruct our diffraction limited images. This work has been supported by grants from the National Science Foundation (AST-9321289 and AST-9731625). CAH is grateful to the Royal Society for financial support. EHW was supported in part under the auspices of the University Relations Program at Lawrence Livermore National Laboratory using UCDRD funds, and by the U.S. Department of Energy at LLNL under the contract no. W-7405-ENG-48.
# What are sterile neutrinos good for?

Talk presented by the first author at the American Physical Society (APS) Division of Particles and Fields Conference (DPF’99), hosted by the University of California, Los Angeles, from January 5-9, 1999.
## I Introduction
In 1930, after inferring the existence of the neutrino from the continuous electron spectrum of nuclear $`\beta `$-decay, W. Pauli remarked , “I have done a terrible thing. I have postulated a particle that cannot be detected.” Twenty-three years later, F. Reines and C. L. Cowan reported the first detection of neutrinos via inverse $`\beta `$-decay.
Encouraged by these events, the propensity of history to repeat itself, and a steady stream of positive experimental data, contemporary particle physicists and astrophysicists have recently explored the ramifications of so-called “sterile” neutrinos, denoted $`\nu _s`$. The existence of such Standard Model-singlet fermions, which couple to the conventional (or “active”) neutrinos $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$ solely through effective mass terms, is implied by the confluence of several neutrino oscillation experiments:
* Atmospheric neutrinos: The Super-Kamiokande Collaboration has reported convincing evidence for the suppression of the flux of $`\nu _\mu `$ and $`\overline{\nu }_\mu `$ produced by cosmic ray collisions with the Earth’s upper atmosphere. (The measured flux of $`\nu _e`$ and $`\overline{\nu }_e`$ is within expectation.) In particular, there is a statistically significant zenith angle dependence of the high energy muon-like events which is consistent with neutrino oscillations. (Continued observations and future long base-line accelerator experiments will help rule out other explanations for the anomaly; see Ref. for a catalog and discussion of these “non-standard” solutions.) A two-neutrino vacuum mixing fit yields $`\delta m^2\sim 10^{-3}-10^{-2}\mathrm{eV}^2`$ and $`\mathrm{sin}^2(2\theta )\sim 1`$, if the neutrino mixing maximally with $`\nu _\mu `$ is $`\nu _\tau `$. There are also matter-enhanced (Mikheyev-Smirnov-Wolfenstein or MSW) solutions with $`\delta m^2\sim \pm 5\times 10^{-3}\mathrm{eV}^2`$ and $`\mathrm{sin}^2(2\theta )\sim 1`$ if the mixing partner is $`\nu _s`$. (More recent data favors the $`\nu _\mu \to \nu _\tau `$ channel over $`\nu _\mu \to \nu _s`$.)
* Solar neutrinos: An array of solar neutrino experiments has observed an energy-dependent deficit of $`\nu _e`$ emitted by nuclear reactions in the sun. The Kamiokande and Super-Kamiokande experiments observe about one-half of the expected flux of the highest energy solar neutrinos. The Homestake chlorine experiment, sensitive to intermediate and higher energy neutrinos, sees approximately one-third of the expected flux. Further, the Soviet/Russian-American Gallium Experiment (SAGE) and Gallex record roughly one-half of the expected flux integrated over nearly the entire solar spectrum. The combined result is a distorted spectrum which is extremely difficult to reconcile with the standard solar model. Global two-neutrino fits to these observations yield the large angle ($`\delta m^2\sim 10^{-5}\mathrm{eV}^2`$, $`\mathrm{sin}^2(2\theta )\sim 1`$) and small angle ($`\delta m^2\sim 10^{-5}\mathrm{eV}^2`$, $`\mathrm{sin}^2(2\theta )\sim 5\times 10^{-3}`$) MSW solutions and the “just-so” vacuum oscillation solution ($`\delta m^2\sim 10^{-10}\mathrm{eV}^2`$, $`\mathrm{sin}^2(2\theta )\sim 1`$). The neutrino mixing with $`\nu _e`$ may be $`\nu _\mu `$, $`\nu _\tau `$, or $`\nu _s`$, depending on the solution. (Of course, requiring compatibility with other experiments and astrophysical constraints (see below) restricts the allowed oscillation channels.)
* Accelerator neutrinos: The Los Alamos Liquid Scintillator Neutrino Detector (LSND) experiment has recorded an excess of $`\nu _e`$ and $`\overline{\nu }_e`$ events in accelerator-produced beams of $`\nu _\mu `$ and $`\overline{\nu }_\mu `$ respectively. There are several allowed regions in the oscillation parameter space, all of which fall in the ranges $`0.2\mathrm{eV}^2\lesssim \delta m^2\lesssim 8\mathrm{eV}^2`$ and $`10^{-3}\lesssim \mathrm{sin}^2(2\theta )\lesssim 10^{-1}`$, assuming two-neutrino mixing. The agreement between the regions for the neutrino ($`\nu _\mu \to \nu _e`$) and antineutrino ($`\overline{\nu }_\mu \to \overline{\nu }_e`$) channels reinforces the oscillation interpretation. The Karlsruhe Rutherford Medium Energy Neutrino (KARMEN) experiment has searched for excess events in the same channels, and despite a null result, a joint analysis of the KARMEN and LSND data preserves some of the LSND solution space.
The mutual incompatibility of these disparate sets of results is prima facie evidence for the existence of a light sterile neutrino $`\nu _s`$, since the number of light weakly interacting neutrino species is known to be three (namely, $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$), an effectively three-neutrino mass matrix yields at most two independent $`\delta m^2`$’s, and the active neutrinos are known to be very light compared to their charged leptonic counterparts. (If the additional neutrino is not a Standard Model (SM) singlet, then it must have very weak interactions with the SM particles in order to meet these constraints; see Ref. for an astrophysical limit on such interactions.) Global fits of the data to three-neutrino mass matrices have borne out this conclusion, and several authors have indicated how to accommodate all of the data in a four-neutrino mixing matrix.
Assuming that the current data is explained by oscillations and that future neutrino experiments confirm the reality of an additional light neutrino species, a number of workers have begun constructing theoretical models which yield the naturally small Dirac and Majorana neutrino masses required for appreciable active-sterile neutrino mixing. Some of these models are generalizations of the traditional see-saw mechanism for generating light active neutrinos in the presence of very heavy sterile neutrinos . Others involve restricted or extended couplings in the context of the Standard Model (SM) . Many methods rely on new symmetries to ensure that both active and sterile neutrino masses are small and comparable. All of them can be classified as simple extensions of the SM gauge group and matter content (including Grand Unified Theories or GUTs), supersymmetric models, or superstring-inspired scenarios.
The phenomenological consequences and uses of sterile neutrinos are equally interesting. In particular, resonant transitions among active and sterile neutrinos can alter significantly the dynamics of early universe cosmology and various astrophysical venues. Indeed, significant consequences are almost guaranteed in phenomena such as Big Bang nucleosynthesis (BBN) and core-collapse supernovae, whose outcome is determined or dominated by neutrino physics. Transitions to and from sterile neutrinos can distort severely the active neutrinos’ energy spectra, resulting, for example, in nucleosynthetic abundances markedly different from the commonly accepted values.
These effects are not invariably unfavorable. Indeed, sterile neutrinos have been variously invoked to explain the origin of pulsar kicks , provide a new dark matter candidate , account for the diffuse ionization in the Milky Way galaxy , resolve the “crisis” in BBN , and help enable the synthesis of heavy elements in Type II supernovae . In these proceedings, we describe the interesting phenomenological implications of two of these scenarios. We recapitulate in Sec. II the physics of sterile neutrino dark matter, concentrating on the production of the cold, non-thermal variety. In Sec. III, we summarize our recent work on and the status of matter-enhanced active-sterile neutrino transformation solutions to heavy-element nucleosynthesis. We give conclusions in Sec. IV.
## II Sterile neutrino dark matter
The number of suitable candidates for the dark matter of the universe is staggering! In Eq. (7) we have listed some of the possible constituents — without regard to their potential mutual exclusivity — of the total mass-energy of the universe, measured as a fraction $`\mathrm{\Omega }`$ of the Friedmann-Robertson-Walker (FRW) closure energy density. (A number of these dark matter hopefuls have been culled from Ref. and preprint archive listings; see Ref. for a lucid discussion of the dark matter problem and structure formation.)
$$\begin{aligned}
\mathrm{\Omega }_{\mathrm{TOTAL}}={}&\mathrm{\Omega }_{\mathrm{baryon}}+\mathrm{\Omega }_{\mathrm{\Lambda }}+\mathrm{\Omega }_{\mathrm{axion}}+\mathrm{\Omega }_{\nu _{e,\mu ,\tau }}+\mathrm{\Omega }_{\mathrm{Q}\text{-}\mathrm{ball}}\\
&+\mathrm{\Omega }_{\mathrm{LSP}}+\mathrm{\Omega }_{\mathrm{WIMP}}+\mathrm{\Omega }_{\mathrm{WIMPZILLA}}+\mathrm{\Omega }_{\mathrm{SUSY}}+\mathrm{\Omega }_{\mathrm{CMBR}}\\
&+\mathrm{\Omega }_{\mathrm{monopole}}+\mathrm{\Omega }_{\mathrm{starlight}}+\mathrm{\Omega }_{\mathrm{cosmic\ string}}+\mathrm{\Omega }_{\mathrm{primordial\ black\ hole}}\\
&+\mathrm{\Omega }_{\stackrel{~}{\chi }^0}+\mathrm{\Omega }_{\mathrm{majoron}}+\mathrm{\Omega }_{\mathrm{moron}}+\mathrm{\Omega }_{\stackrel{~}{\gamma }}+\mathrm{\Omega }_{\mathrm{Newtorite}}+\mathrm{\Omega }_{\mathrm{quark\ nugget}}\\
&+\mathrm{\Omega }_{\stackrel{~}{\nu }}+\mathrm{\Omega }_{\mathrm{pyrgon}}+\mathrm{\Omega }_{\stackrel{~}{g}}+\mathrm{\Omega }_{\mathrm{gravitational\ wave}}+\mathrm{\Omega }_{\mathrm{maximon}}\\
&+\mathrm{\Omega }_{\stackrel{~}{h}}+\mathrm{\Omega }_{\mathrm{familon}}+\mathrm{\Omega }_{\mathrm{tetron}}+\mathrm{\Omega }_{\mathrm{penton}}+\mathrm{\Omega }_{\mathrm{hexon}}+\mathrm{\Omega }_{\mathrm{crypton}}\\
&+\mathbf{\Omega }_{\nu _s}
\end{aligned}\qquad (7)$$
The astute reader will have noticed the addition of sterile neutrino dark matter to this list. If one light sterile species $`\nu _s`$ is required by the neutrino oscillation experiments, then the multi-generational structure of the SM argues strongly in favor of two additional steriles $`\nu _s^{}`$ and $`\nu _s^{\prime \prime }`$. While more massive than their lighter sibling, these additional neutrinos could also have relatively small masses which are easily compatible with the data.
If there is a $`\nu _s^{}`$ in the 200 eV to 10 keV mass range and a primordial lepton number $`L\sim 10^{-3}-10^{-1}`$ in any of the active neutrino flavors, Shi & Fuller have shown that this asymmetry can drive resonant production of sterile neutrino dark matter in the early universe. Furthermore, since the MSW mechanism in this environment favors efficient conversion from active to sterile neutrinos only for the lowest energy neutrinos, and the process itself destroys lepton number, the resulting $`\nu _s^{}`$ spectrum is non-thermal and essentially “cold.”
Unlike conventional neutrino dark matter (Hot Dark Matter or HDM), this Cold Dark Matter (CDM) candidate has a relatively short free-streaming length at the epoch of structure formation, so density fluctuations may grow unimpeded on galactic scales. For example, a $`500`$ eV sterile neutrino can contribute a fraction $`\mathrm{\Omega }_{\nu _s}\sim 0.4`$ of the critical density and stream freely over only $`\sim 0.4`$ Mpc, which corresponds to a structure cut-off $`\sim 10^{10}M_{\odot }`$, about the size of dwarf galaxies. For further details, please consult Ref. .
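The quoted cut-off scale can be checked at the order-of-magnitude level; the sketch below (illustrative constants only, not Shi & Fuller's full calculation) estimates the matter mass enclosed within the free-streaming scale:

```python
import numpy as np

lam_fs = 0.4        # free-streaming length [Mpc], from the text
omega_m = 0.4       # matter fraction in sterile neutrinos, from the text
rho_crit = 2.78e11  # FRW critical density [h^2 M_sun / Mpc^3]

# Mass of matter within a sphere of radius lam_fs / 2:
mass = (4.0 / 3.0) * np.pi * (lam_fs / 2.0) ** 3 * omega_m * rho_crit
print(f"cut-off mass ~ {mass:.1e} h^2 M_sun")   # -> ~3.7e+09
# comparable to the ~1e10 M_sun dwarf-galaxy scale quoted above
```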
## III Active-sterile neutrino transformation and heavy-element nucleosynthesis
What is the origin of the heavy elements? The light elements (H, He, D, and Li) are produced (for the most part) in the early universe and the intermediate mass elements (up to Fe) during the “main sequence” evolution of stars (see Ref. for a detailed exposition on stellar structure and dynamics), but the source of at least half of the nuclides heavier than iron is unknown.
It is not difficult to understand why ordinary stellar evolution cannot produce nuclei more massive than iron. The thermonuclear processes which take place in stellar interiors work gradually from hydrogen toward iron by fusing together ever heavier nuclei. Since these thermal reactions can proceed only if the products are more tightly bound than the reactants, the path of stellar nucleosynthesis ends at iron, which is the most tightly bound nucleus.
It is clear, then, that elements heavier than iron must form via some other process(es) in some other environment(s). One pathway from iron to heavy nuclides such as plutonium, iodine, tin, lead, and gold is the r-process, so named because it proceeds via the rapid capture of neutrons on iron-sized “seed” nuclei. In the r-process, seed nuclei capture neutrons so rapidly that they can leap past iron on the curve of binding energy versus atomic mass. After the r-process is complete, the now extremely neutron-rich nuclei $`\beta `$-decay to more stable elements on the periodic table, yielding some mass distribution of nuclides (preferably a distribution which matches the observed abundances!). In all but extreme astrophysical environments, a key necessary condition for a successful r-process is a preponderance of neutrons over protons, as the heavy elements are quite neutron-rich. (Neutron-richness of the heavy nuclides can be understood roughly as a compromise among Pauli’s exclusion principle, long-range Coulomb forces between protons, and short-range nuclear (strong) forces between nucleons.) Once seed nuclei are available for neutron capture to proceed, this requirement becomes having a large number of free neutrons for each seed nucleus.
Although the physics of the r-process is relatively well-known, the astrophysical environment in which it takes place has not been determined conclusively. The major contenders include Type II (or core-collapse) supernovae and binary neutron star mergers. The former is currently the most promising r-process site, and we now restrict ourselves to a discussion of supernova physics and how non-standard particle physics like neutrino oscillations can help enable the r-process.
A Type II supernova results from the catastrophic gravitational collapse of a massive star . Specifically, a star with mass $`M\gtrsim 8M_{\odot }`$ burns successively heavier fuels throughout its life, eventually accumulating an inner iron core. As indicated above, thermal fusion processes cannot extract energy from iron to provide pressure support against gravity, so the core eventually collapses under its own gravity. Abetted by the photodissociation of the iron nuclei, the core rapidly neutronizes via electron capture on protons. When the central core density reaches the saturation density of nuclear matter, the nucleons touch, and the core “bounces,” yielding an outgoing shockwave. Unfortunately, the shock wave eventually dies out, as it loses all of its energy dissociating the mantle of the star. Meanwhile, however, thermally produced neutrinos, trapped in the dense core, diffuse out and revive the shock (the supernova explodes!). The remaining neutrinos continue to leak out on a time scale of some 10-20 seconds, driving significant mass loss from the newly formed proto-neutron star.
It is in this neutrino-driven “wind” that the r-process is believed to occur . The hot proto-neutron star emits neutrinos of all flavors, with average energies $`E_{\nu _\delta }\simeq E_{\overline{\nu }_\delta }\gtrsim E_{\overline{\nu }_e}\gtrsim E_{\nu _e}`$, where $`\delta =\mu ,\tau `$ refers to the muon and tau neutrinos, and comparable luminosities. They stream nearly freely through a plasma of electrons, positrons, neutrons, and protons. By exchanging energy with the cooler plasma, the neutrinos drive the r-process ingredients out of the gravitational potential well of the neutron star. As a fluid element of this neutrino-heated ejecta travels away from the surface of the star, its temperature gradually drops, its velocity increases, and the following sequence ensues:
* The reactions
$$\nu _e+n\rightleftharpoons p+e^{-}\qquad (8)$$
$$\overline{\nu }_e+p\rightleftharpoons n+e^{+}\qquad (9)$$
set the ratio $`n/p`$ of the number densities of neutrons and protons. (The reactions $`n\rightleftharpoons p+e^{-}+\overline{\nu }_e`$ contribute only negligibly to $`n/p`$ for the relevant neutrino energies.) This number is usually written in terms of the electron fraction $`Y_e`$, the net number of electrons per baryon. Charge neutrality of the plasma gives $`Y_e=1/(1+n/p)`$; thus a higher (lower) $`n/p`$ corresponds to a lower (higher) $`Y_e`$ (a minimal numerical sketch of the equilibrium $`Y_e`$ appears after this list). The reactions in Eqs. (8-9) freeze out at a temperature $`T\sim 0.8\mathrm{MeV}`$, when their rates fall below the local expansion rate.
* When the plasma cools to $`T\sim 0.75\mathrm{MeV}`$, $`\alpha `$-particles ($`{}_{}{}^{4}\mathrm{He}`$ nuclei) start to form from the ambient neutrons and protons.
* At lower plasma temperatures, some of the $`\alpha `$-particles combine via three-body interactions into seed nuclei with mass number $`A=50-100`$.
* Finally, at sufficiently low temperatures ($`T\sim 0.25`$ MeV), the r-process occurs.
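A minimal numerical sketch of the equilibrium set by Eqs. (8-9) is given below, in Python. It uses the standard leading-order approximation of Qian & Woosley (1996) rather than a full rate network, and the luminosities and spectral energies are illustrative placeholders:

```python
DELTA = 1.293  # neutron-proton mass difference [MeV]

def ye_equilibrium(L_nue, E_nue, L_nuebar, E_nuebar):
    """Leading-order equilibrium electron fraction set by nu_e + n and
    nu_e-bar + p captures (Qian & Woosley 1996 approximation). Here E
    stands for the spectrum-averaged energy <E^2>/<E> [MeV] and L for
    the luminosity of each species (any common units)."""
    ratio = (L_nuebar * (E_nuebar - 2.0 * DELTA)) / \
            (L_nue * (E_nue + 2.0 * DELTA))
    return 1.0 / (1.0 + ratio)

# Equal luminosities with a hotter nu_e-bar spectrum (illustrative
# values) yield neutron-rich ejecta, Y_e < 0.5:
print(ye_equilibrium(1.0, 11.0, 1.0, 20.0))   # -> ~0.44
```

Because the proto-neutron star surface emits a hotter $`\overline{\nu }_e`$ than $`\nu _e`$ spectrum, this equilibrium naturally falls below 0.5, i.e. the ejecta start out neutron-rich.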
Unfortunately, detailed numerical simulations have found that the r-process is precluded in neutrino-heated ejecta by a phenomenon termed the “$`\alpha `$-effect.” The $`\alpha `$-effect occurs when nucleons are taken up into $`\alpha `$-particles, leaving nearly inert $`{}_{}{}^{4}\mathrm{He}`$ nuclei and excess neutrons (the plasma is already neutron-rich). Since the $`\nu _e`$ flux is still sizable in the region beyond freeze-out, the forward reaction in Eq. (8) proceeds to convert the leftover neutrons into protons. These protons and remaining neutrons quickly form additional $`\alpha `$-particles before the forward reaction in Eq. (9) can reconvert the newly-formed protons. Ultimately, nearly all of the free neutrons end up in $`{}_{}{}^{4}\mathrm{He}`$ nuclei, the electron fraction is driven to 0.5, and the final neutron-to-seed ratio is too small to give even an anemic r-process!
We have shown in Fig. 1 the radial evolution of the electron fraction in the hot, high-entropy “bubble” above the proto-neutron star. We have used a simple model for the wind and taken representative values of the neutron star radius ($`R=10\mathrm{km}`$), expansion time scale ($`\tau =0.3\mathrm{s}`$), entropy per baryon ($`s/k_B=100`$), and neutrino spectral parameters . The $`\alpha `$-effect begins when the temperature in the plasma drops to $`\sim 0.75\mathrm{MeV}`$. Comparison of the behavior with and without the $`\alpha `$-effect illustrates its negative impact.
Various calculations have shown that this problem persists for all reasonable variations in wind outflow and neutrino spectral parameters. Since there is mounting evidence that the r-process must occur in neutrino-heated ejecta in Type II supernovae, a number of authors have attempted to circumvent the $`\alpha `$-effect in order to enable heavy-element nucleosynthesis.
The high neutron-to-seed ratio requirement for a successful r-process is met if $`Y_e`$ is sufficiently small ($`Y_e<1/2`$ corresponds to neutron-rich ejecta); the expansion rate of the ejecta is sufficiently high that the three-body reactions making seed nuclei are inefficient; and/or the entropy of the plasma is high enough that nucleons prefer to remain free rather than bound in nuclei . Increasing the expansion rate will not redress the $`\alpha `$-effect, since $`\alpha `$-particles form via relatively fast two-body processes. Raising the entropy in the ejecta substitutes for the $`\alpha `$-effect another process which decimates the neutron-to-seed ratio: neutrino neutral current spallation of $`{}_{}{}^{4}\mathrm{He}`$ nuclei . Since the severity of the effect depends on the $`\nu _e`$ flux in the region of $`\alpha `$-particle production, lowering $`Y_e`$ without changing the $`\nu _e`$ flux also cannot help enable the r-process.
Non-standard neutrino physics such as neutrino oscillations is an elegant solution to this problem, for resonant transformation between $`\nu _e`$ and some other species can directly influence the $`\nu _e`$ flux. Given the hierarchy of the energies of the active neutrinos, matter-enhanced transformation between $`\nu _e`$ and $`\nu _\mu `$ or $`\nu _\tau `$ will actually enhance the $`\alpha `$-effect. Transformations among the active antineutrinos will affect only $`\overline{\nu }_e`$. As discussed in the Introduction, however, various neutrino experiments imply the existence of a sterile neutrino which mixes appreciably with the active neutrinos. As long as the neutron star emits a negligible number of sterile neutrinos, effective active-sterile mixing in the form of $`\nu _e\nu _s`$ and $`\overline{\nu }_e\overline{\nu }_s`$ has the potential to forestall the $`\alpha `$-effect by removing the offending $`\nu _e`$’s .
The authors of Ref. and we have recently investigated this mixing scheme in neutrino-heated ejecta. Shown in Fig. 1 is the evolution of $`Y_e`$ with radius for a representative set of mixing parameters. In Ref. , the authors included all contributions to the effective mass of electron neutrinos except neutrino forward scattering on the “background” of other neutrinos emitted by the neutron star . In Ref. , we have also included this many-body effect. The behavior of $`Y_e`$ for both analyses is indicated in the figure, both with and without the $`\alpha `$-effect. In Fig. 2, we have shown final electron fraction attained (at $`r=35\mathrm{km}`$ in Fig. 1) without neutrino background effects.
In the absence of the neutrino background, active-sterile mixing is a viable solution to the r-process problem. For a relatively large range of neutrino mixing parameters, the final electron fraction is less than 0.3, yielding a highly neutron-rich r-process environment. Calculations with various choices of the outflow and neutrino spectral parameters have confirmed that this is also a robust solution : a high neutron-to-seed ratio obtains for a wide range of supernova and mixing parameters. An additional benefit is that the $`\delta m^2`$-$`\mathrm{sin}^2(2\theta )`$ region favored for enabling the r-process coincides with one of the four-neutrino mixing schemes explaining the current experiments .
The presence of the neutrino background unfortunately undermines this remedy, but the background gradually dies away as neutrinos diffuse out of the neutron star. At sufficiently late times after the supernova bounce and explosion, the probability of neutrino-neutrino forward scattering is sufficiently small that the scenario of Ref. prevails. At these times, however, the neutrino fluxes have fallen sufficiently that there may not be a need for neutrino transformation to relieve the $`\alpha `$-effect. If the total neutron-rich mass ejected at late times is sufficient to account for all of the r-process material in the galaxy, then there is no need to invoke non-standard neutrino physics. Settling this issue requires a self-consistent calculation of the wind coupled with neutrino oscillations and including neutrino background effects. On the other hand, if future experiments conclusively establish the existence of a light sterile neutrino in the mass range relevant for Type II supernovae, active-sterile mixing may have profound consequences for the synthesis of the heavy elements.
## IV Conclusion
The present solar, atmospheric, and accelerator neutrino experiments suggest the existence of a light sterile neutrino. While this is a radical departure from the folklore that sterile neutrinos (if they exist) are very heavy, future experiments may confirm their reality and force theorists to modify or discard cherished models of neutrino mass. As we have indicated in this paper, the mixing of sterile and active neutrinos has potentially far-reaching consequences for cosmology and astrophysics. They may account for much of the dark matter of the universe. They may even be the reason why we have gold rings, tin cans, atomic bombs, and lead shielding!
MP is supported in part by a NASA GSRP fellowship. This work was partially supported by NSF grant PHY98-00980.
# Observation of a linear temperature dependence of the critical current density in a Ba0.63K0.37BiO3 single crystal∗
## Abstract
For a Ba<sub>0.63</sub>K<sub>0.37</sub>BiO<sub>3</sub> single crystal with $`T_c\simeq 31`$ K, $`H_{c1}\simeq 750\,Oe`$ at 5 K, and dimensions 3$`\times `$3$`\times `$1 $`mm^3`$, the temperature and field dependences of magnetic hysteresis loops have been measured within 5-25 K in magnetic fields up to 6 Tesla. The critical current density is $`J_c(0)\simeq 1.5\times 10^5A/cm^2`$ at zero field and $`1\times 10^5A/cm^2`$ at 1 $`kOe`$ at 5 K. $`J_c`$ decreases exponentially with increasing field up to 10 $`kOe`$. A linear temperature dependence of $`J_c`$ is observed below 25 K, which differs from the exponential and the power-law temperature dependences in high-$`T_c`$ superconductors including the BKBO. The linear temperature dependence can be regarded as an intrinsic effect in superconductors.
It is well known that Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> (BKBO) with $`T_c\simeq 30`$ K is very suitable for research on high-$`T_c`$ superconductivity, because it has a simple perovskite structure and characteristics similar to cuprate superconductors. The superconductivity mechanism and the metal-insulator transition of BKBO still remain to be clarified. Up to now, although there has been much research on the superconductivity mechanism of BKBO, little of it was carried out on very high quality crystals. In this paper, the critical current and its temperature dependence are investigated by observing the magnetic properties of a high quality BKBO single crystal. The results are compared with other published work on BKBO and with cuprate data characterized by power-law and exponential temperature dependences.
The Ba<sub>0.63</sub>K<sub>0.37</sub>BiO<sub>3</sub> single crystal was synthesized by the electro-chemical method reported elsewhere.<sup>1</sup> The size of the crystal, with $`T_c\simeq 31`$ K, was $`3\times 3\times 1mm^3`$. The potassium concentration was found to be $`x\simeq 0.37`$ by electron-probe microanalysis. The value $`H_{c1}\simeq 750\,Oe`$ was determined at 5 K. The paramagnetic Meissner effect of the crystal was investigated at low fields.<sup>2</sup> The zero-field-cooled (ZFC) and field-cooled (FC) susceptibilities and the magnetic hysteresis loops were measured using a Quantum Design magnetometer (MPMS7). Before measuring the hysteresis loops, the magnetic field was zeroed to remove any remanent field in the superconducting magnet. A magnetic field of 6 Tesla was applied in the c-direction.
Figure 1 shows the ZFC and FC susceptibilities measured at 4 $`Oe`$ in the virgin-charged superconducting magnet for the crystal with $`T_c\simeq 31`$ K. The ZFC absolute value evidently exhibits no temperature dependence up to 24 K, indicated by arrow A, and decreases rapidly between 24 K and $`T_c`$. In the case of the ZFC susceptibility, the transition width is $`\Delta T=7`$ K (defined from $`T_c`$ to 24 K). In the Meissner state, the susceptibility is defined as $`4\pi \chi _m\rho =-V/(1-D)`$, where $`\chi _m`$, $`V`$ and $`D`$ are the mass susceptibility, volume fraction and demagnetization factor, respectively, while the X-ray density is $`\rho \simeq 8g/cm^3`$. If $`V`$ is assumed to be unity and independent of the field orientation, $`D\simeq 0.68`$ is calculated from the ZFC susceptibility. If an ellipsoid of revolution is used to approximate the crystal shape with the dimensions mentioned above, then $`D\simeq 0.65`$. This agrees fairly well with the value calculated from the ZFC susceptibility. This agreement indicates that the crystal is fully superconducting, nearly single and homogeneous.
Figures 2 (a) and (b) show the temperature dependence of one half of the magnetic hysteresis loops measured up to 6 Tesla in the 5 - 25 K range; here data are shown only up to 1 Tesla, beyond which all curves are very nearly constant. The loops are typical of those observed in high-$`T_c`$ superconductors. The field and temperature dependences of $`J_c`$ are shown in Figs. 2 (c) and (d). $`J_c`$ at 5 K at zero field and at 1 $`kOe`$ was evaluated as $`1.5\times 10^5A/cm^2`$ and $`1\times 10^5A/cm^2`$, respectively, by using the Bean critical state model<sup>3</sup> applicable to bulk samples; $`J_c=\frac{10\Delta M}{a-\frac{a^2}{3b}}`$ in $`CGS`$ units, where $`a`$, $`b`$ and $`\Delta M`$ are the dimensions of the bulk crystal and the magnetic moment corrected by the demagnetization factor, respectively. $`J_c`$ decreases exponentially with increasing field up to 10 $`kOe`$. The temperature dependence of $`J_c`$, as shown in Fig. 2 (d), is linear below $`\sim `$25 K, the temperature indicated by arrow A in Fig. 1. This indicates that the superconducting condensed state below $`T_c`$ has a linear temperature dependence of $`J_c`$. The linear dependence differs from the published results, which follow a power law<sup>4,5</sup> for BKBO and an exponential dependence<sup>6</sup> for a Hg system. The cause of the difference is that the absolute values of the ZFC susceptibilities at low temperatures in those papers are not constant but decrease with increasing temperature<sup>4,6</sup>; $`i.e.`$, the nonlinearity (or power law) may be attributed to a pinning effect due to impurities in the crystals. In addition to the above crystal, the same linearity was observed for another high quality crystal by the same experimental method.
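As a cross-check of the numbers quoted above, the Bean-model estimate can be evaluated directly; a minimal sketch, in which the loop width $`\Delta M`$ and the sample dimensions are illustrative placeholders rather than the measured values:

```python
def bean_jc(delta_M, a, b):
    """Bean critical-state estimate J_c = 10*delta_M/(a - a^2/(3b)), CGS units:
    delta_M in emu/cm^3, sample dimensions a <= b in cm, result in A/cm^2."""
    return 10.0 * delta_M / (a - a**2 / (3.0 * b))

# Illustrative numbers only (not the measured loop width of the crystal):
print(f"J_c ~ {bean_jc(delta_M=1.3e3, a=0.1, b=0.3):.2e} A/cm^2")
```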
Finally, we suggest that the linear temperature dependence of $`J_c`$ is an intrinsic effect in superconductors.
## 1 Introduction
In three different approaches (using the mass Lagrangian , the Dirac equation , and the operator formalism ) I have discussed the problem of mass generation in the standard weak interactions. The result was that the standard weak interactions cannot generate fermion masses, since the right-handed components of the fermions do not participate in these interactions. Using this result, in works I have then shown that the effect of resonance enhancement of neutrino oscillations in matter cannot exist.
At present a number of works has been published (see and references therein) where, by using the Green's function method, it was obtained that the weak interactions can generate the resonance enhancement of neutrino oscillations in matter (which would mean that the weak interactions can generate masses). As we show below, this result is a consequence of using the weak interaction term $`H_\mu ^{int}=V_\mu \frac{1}{2}\left(1-\gamma _5\right)`$ in an incorrect manner; as a result, the right-handed components of the fermions were made to participate in the weak interactions.
Let us consider the equation for Green’s function of fermions taking into account the standard weak interactions.
## 2 Equation for Green’s Function in Weak Interactions
The Green’s function method is frequently used for taking into account electromagnetic interactions and strong interactions (chromodynamics) effects . The equation for Green’s function has the following form:
$$\left[\gamma ^\mu \left(i\partial _\mu -V_\mu \right)-M\right]G(x,y)=\delta ^4\left(x-y\right),$$
$`\left(1\right)`$
where $`V_\mu `$ characterizes electromagnetic or strong interactions and
$$iG(x,y)=\left<T\mathrm{\Psi }\left(x\right)\overline{\mathrm{\Psi }}\left(y\right)\right>_0.$$
It is worth mentioning that the Green's function method is very convenient for studying electromagnetic and strong interaction effects, since these interactions are left-right symmetric.
At present a number of works has been published where the Green's function was used to take the weak interactions into account. There it was shown that the weak interactions can generate masses, i.e. the masses of fermions are changed by the weak interactions, and then the resonance enhancement of neutrino oscillations appears in matter . In this work we want to show that this result is a consequence of an incorrect use of a specific feature of the standard weak interactions, namely, that the right-handed components of fermions do not participate in these interactions (i.e. $`\mathrm{\Psi }_R=\overline{\mathrm{\Psi }}_R\equiv 0`$).
Usually the equation for Green’s function for fermion (neutrino) with weak interactions is taken in the following form:
$$\left[\gamma ^\mu \left(i\partial _\mu -V_\mu \right)-M\right]G(x,y)=\delta ^4\left(x-y\right),$$
$`\left(2\right)`$
where $`V_\mu `$ is
$$V_\mu =V_\mu \frac{1}{2}\left(1-\gamma _5\right)=V_\mu P_L.$$
$`\left(3\right)`$
It is supposed that the term (3) in Eq.(2) reproduces a specific feature of the weak interactions
$$V_\mu G(x,y)\sim V_\mu \frac{1}{4}\left(1-\gamma _5\right)^2T\left(\mathrm{\Psi }\left(x\right)\overline{\mathrm{\Psi }}\left(y\right)\right)=V_\mu T\left(\mathrm{\Psi }_L\left(x\right)\overline{\mathrm{\Psi }}_R\left(y\right)\right).$$
However, this operation is not correct since it does not reproduce the standard weak interaction. We see that, if we use the specific feature of these interactions directly, then we must rewrite the equation for the Green's function in the form
$$\left[\gamma ^\mu \left(i\partial _\mu -V_\mu \left[\begin{array}{c}\mathrm{\Psi }_R=0\\ \overline{\mathrm{\Psi }}_R=0\end{array}\right]\right)-M\right]G(x,y)=\delta ^4\left(x-y\right),$$
$`\left(4\right)`$
Then the interaction term in Eq.(4) is
$$V_\mu \left[\begin{array}{c}\mathrm{\Psi }_R=0\\ \overline{\mathrm{\Psi }}_R=0\end{array}\right]\sim T\left(\mathrm{\Psi }_L\overline{\mathrm{\Psi }}_R\left(\overline{\mathrm{\Psi }}_R=0\right)+\left(\mathrm{\Psi }_R=0\right)\mathrm{\Psi }_R\overline{\mathrm{\Psi }}_L\right)=V_\mu \cdot 0=0,$$
$`\left(5\right)`$
and then Eq.(4) is transformed into the following equation:
$$\left[\gamma ^\mu \left(i\partial _\mu \right)-M\right]G(x,y)=\delta ^4\left(x-y\right),$$
$`\left(6\right)`$
which coincides with the equation for the free Green's function (i.e. the equation without interactions). So we see that the equation for the Green's function with weak interactions (in matter) coincides with the equation for the Green's function in vacuum.
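As a small self-contained check of the chirality-projector algebra used above (an illustration added here, not part of the original derivation), one can verify numerically that $`\frac{1}{4}\left(1-\gamma _5\right)^2=\frac{1}{2}\left(1-\gamma _5\right)=P_L`$ and that $`P_L`$ is idempotent, using an explicit Dirac-representation $`\gamma _5`$:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
gamma5 = np.block([[Z2, I2], [I2, Z2]])   # gamma^5 in the Dirac representation
P_L = 0.5 * (np.eye(4) - gamma5)          # left-handed projector

assert np.allclose(P_L @ P_L, P_L)        # idempotence: P_L^2 = P_L
assert np.allclose(0.25 * (np.eye(4) - gamma5) @ (np.eye(4) - gamma5), P_L)
print("projector identities verified")
```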
## 3 Impossibility to Realize the Mechanism of Resonance Enhancement of Neutrino Oscillations in Matter
In the previous section we obtained that the equation for the Green's function of fermions with weak interactions has the form (6). It is a consequence of the fact that the right-handed components of fermions (neutrinos) do not participate in the weak interactions. This means that the weak interactions cannot generate masses (see also works \[1-4\]) and, correspondingly, do not contribute to the effective masses of fermions (neutrinos); therefore, the mixing angle cannot be changed by the weak interactions (in matter) and coincides with the mixing angle in vacuum. Hence, the mechanism of resonance enhancement of neutrino oscillations in matter (MSW effect) cannot exist.
Probably, the same result holds for the renormalized charge $`Q^2\left(t\right)`$ (where $`t`$ is the squared transfer momentum) of the weak interactions , i.e. the renormalized charge $`Q^2\left(t\right)`$ of the weak interactions does not change, $`Q^2\left(t\right)=const`$, in contrast to the renormalized charges $`e^2\left(t\right),g^2\left(t\right)`$ of the electromagnetic and strong interactions .
## 4 Conclusion
It was shown that the equation for the Green's function of fermions (neutrinos) with weak interactions (i.e. in matter) coincides with the equation for the Green's function of fermions in vacuum. This result is a consequence of the fact that the right-handed components of fermions do not participate in the weak interactions. As a result we come to the conclusion that the mechanism of resonance enhancement of neutrino oscillations in matter (i.e. the MSW effect) cannot exist.
In conclusion we would like to stress that in the experimental data from there is no visible change in the spectrum of the solar $`B^8`$ neutrinos. The measured neutrino spectrum lies lower than the computed spectrum of the $`B^8`$ neutrinos . If the resonance enhancement mechanism were realized, this spectrum would have to be distorted. Also, the day-night effect of neutrino regeneration in the bulk of the Earth remains within the experimental errors , i.e. it is not observed.
References

1. Kh.M. Beshtoev, JINR Communication P2-93-44, Dubna, 1993.

2. Kh.M. Beshtoev, Fiz. Elem. Chastits At. Yadra (Phys. Part. Nucl.), 1996, v.27, p.53.

3. Kh.M. Beshtoev, JINR Communication E2-93-167, Dubna, 1993.

4. Kh.M. Beshtoev, hep-ph/9912532, 1999; Turkish Journ. of Physics (to be published).

5. C.Y. Cardall, D.J. Chung, Phys. Rev. D, 1999, v.60, p.073012.

6. J. Schwinger, Phys. Rev., 1949, v.76, p.790; F.J. Dyson, Phys. Rev., 1952, v.85, p.631; N.N. Bogolubov, D.V. Shirkov, Introduction to the Theory of Quantized Fields, Moscow, 1994, p.372, 437; Y. Nambu and G. Jona-Lasinio, Phys. Rev., 1961, v.122, p.345; M.K. Volkov, Fiz. Elem. Chastits At. Yadra, 1986, v.17, p.433; A.A. Abrikosov et al., Methods of Quantum Field Theory in Statistical Physics, Moscow, 1962.

7. Kh.M. Beshtoev, JINR Communication E2-94-31, Dubna, 1994.

8. C. Itzykson and J.-B. Zuber, Quantum Field Theory, Moscow, 1984; N.N. Bogolubov, D.V. Shirkov, Introduction to the Theory of Quantized Fields, Moscow, 1994, p.469.

9. K.S. Hirata et al., Phys. Rev. Lett., 1991, v.65, p.1297; Phys. Rev. D, 1991, v.44, p.2341; Y. Totsuka, Rep. Prog. Phys., 1992, p.377; Y. Suzuki, Proceedings of the Intern. Conf. Neutrino-98, Japan, 1998.

10. J.N. Bahcall, Neutrino Astrophysics, Cambridge U.P., Cambridge, 1989.
# Fine structure of alpha decay in odd nuclei
## Abstract
Using an $`\alpha `$ decay level scheme, the fine structure in odd nuclei is explained by taking into account the radial and rotational couplings between the unpaired valence nucleon and the core of the decaying system. It is shown that the experimental behavior of the $`\alpha `$ decay fine structure phenomenon is governed by the dynamical characteristics of the system.
Keywords: alpha decay fine structure
The $`\alpha `$ decay fine structure was discovered by Rosenblum in 1929 by measuring the ranges of the emitted particles in air. Usually, theoretical attempts to investigate this phenomenon are based on the calculation of the overlaps between, on one hand, the ground–state wave function of the parent and, on the other hand, the antisymmetrized product of the wave functions of the nascent fragments in different configurations after the scission . However, this phenomenon has not been explained rigorously in quantitative terms.
It was evidenced , at least formally, that the preformation of the emitted particle and its penetrability through the barrier, calculated from the ground–state configuration of the parent up to the scission point, play equivalent roles. This equivalence supports attempts to investigate the $`\alpha `$ decay process using fission theories.
Recently, a theory based on the Landau–Zener effect was developed with the aim of describing quantitatively the cluster decay fine structure phenomenon. This new formalism succeeds in reproducing the values of the fine structure hindrance factors for the <sup>14</sup>C emission from <sup>223</sup>Ra, being in better agreement with experiment than other microscopic theories. In this alternative description, the cluster decay fine structure is caused by the promotion of the unpaired valence nucleon onto some excited daughter levels during the disintegration process. It was claimed that the same promotion effect can also govern the fine structure in the case of $`\alpha `$ decay. The first step in such a treatment is the elaboration of a realistic two–center level diagram covering the whole disintegration process, that is, starting from the parent single–particle energy distribution and following the variations of the level energies up to their asymptotic configuration attributed to the separated daughter nucleus and the alpha particle. In this picture, it is only intended to treat the $`\alpha `$ cluster with a smooth potential in order to estimate its influence upon the levels of the daughter during the disintegration process, like a polarization effect. The particle–core couplings are produced merely between levels belonging to the nascent heavy nucleus, so that the influence of the potential attributed to the $`\alpha `$ particle needs only to be simulated. Of course, in this context, it is not assumed that the oscillator well is an appropriate tool to describe an alpha nucleus. So, to attain this purpose in a realistic manner, the superasymmetric two–center shell model (STCSM) described in Ref. and improved in Ref. was modified in order to reproduce the single–particle levels assumed to describe an $`\alpha `$ cluster in the final stage of the disintegration. Consequently, the alpha oscillator stiffness $`\hbar \omega _{2\alpha }`$ was forced to vary gradually from the usual STCSM value $`\hbar \omega _2=41A_2^{-1/3}`$ at normalized coordinate $`R_n=0`$ to the value extracted from Ref. , $`\hbar \omega _\alpha =21.8`$ MeV, when the scission point is reached, that is, at $`R_n=1`$. The nuclear shape parametrization being defined by two intersected spheres of different radii, $`R_1`$ for the daughter and $`R_2`$ for the emitted fragment, the single generalized coordinate remains the elongation $`R`$, denoting the distance between the centers of the nuclei, or the normalized coordinate $`R_n=(R-R_i)/(R_f-R_i)`$, where $`R_i=R_0-R_2`$, $`R_f=R_1+R_2`$, and $`R_0`$ denotes the radius of the parent. Subsequently, another three modifications were introduced in the present version of the STCSM in order to obtain good $`\alpha `$–level energies associated with the value $`\hbar \omega _\alpha =21.8`$ MeV. The first term characterizing the mass asymmetry along the $`\rho `$–axis in Eq. (43) of Ref. (proportional to $`\hbar \omega _2-\hbar \omega _1`$) was multiplied by the ratio $`(\omega _{2\alpha }-\omega _1)/(\omega _2-\omega _1)`$. Another mass-asymmetry term along the $`z`$–axis was diagonalized using the eigenvalues of the asymmetric two–center potential:
$$\begin{array}{cc}\hfill \left<\nu ^{\prime },n_\rho ^{\prime },m^{\prime },s^{\prime }|\frac{m_0}{2}(\omega _{2\alpha }^2-\omega _2^2)(z-z_2)^2|\nu ,n_\rho ,m,s\right>=& \int _0^{\infty }\frac{m_0}{2}(\omega _{2\alpha }^2-\omega _2^2)(z-z_2)^2Z_{\nu _2^{\prime }}Z_{\nu _2}dz\,\delta _{s^{\prime }s}\delta _{n_\rho ^{\prime }n_\rho }\delta _{m^{\prime }m}\hfill \\ \hfill =& \frac{C_{\nu _2^{\prime }}C_{\nu _2}}{2\alpha _2}I_{\nu _2^{\prime },2,\nu _2,\zeta _0^{\prime }}(\hbar \omega _{2\alpha }^2/\omega _2-\hbar \omega _2)\delta _{s^{\prime }s}\delta _{n_\rho ^{\prime }n_\rho }\delta _{m^{\prime }m}.\hfill \end{array}$$
(1)
In this way, energy shifts in the single–particle levels associated with the change from $`\omega _2`$ to $`\omega _{2\alpha }`$ are produced, so that a good two–center oscillator with $`\hbar \omega _1=41A_1^{-1/3}`$ and $`\hbar \omega _2=21.8`$ MeV is obtained after the scission. Finally, the spin–orbit and $`L^2`$ interaction matrix elements associated with the light fragment were multiplied by the ratio $`\omega _{2\alpha }/\omega _2`$ in order to be coherent with the formal definitions of these interaction terms, which assume proportionality to $`\hbar \omega _2`$. The meaning of the notations can be found in Ref. .
The spin–orbit and $`L^2`$ coefficients, together with the depths of the potentials for the <sup>211</sup>Po and <sup>207</sup>Pb used in this work, are $`\kappa =5.75\times 10^{-2}`$, $`\eta =0.43`$ and $`V_c`$=55.80 MeV, respectively. These values for the smooth spherical oscillator potentials were deduced from a fit of the <sup>208</sup>Pb single–particle energies given in Ref. : $`E_{1i_{11/2}}=`$-3.16 MeV, $`E_{2g_{9/2}}=`$-3.94 MeV, $`E_{3p_{1/2}}=`$-7.37 MeV, $`E_{2f_{5/2}}=`$-7.94 MeV, $`E_{3p_{3/2}}=`$-8.36 MeV and $`E_{1i_{13/2}}=`$-9.00 MeV. The single–particle energies of <sup>4</sup>He are: -12 MeV, 1.38 MeV, 1.68 MeV, 5.34 MeV, 10.43 MeV and 11.86 MeV for the levels 1$`s_{1/2}`$, 1$`p_{3/2}`$, 1$`p_{1/2}`$, 2$`s_{1/2}`$, 1$`d_{5/2}`$ and 1$`d_{3/2}`$, respectively. A good fit for the lower energies in the frame of the oscillator model with $`\hbar \omega _\alpha `$=21.8 MeV was found to be: $`\kappa =4.51\times 10^{-3}`$, $`\eta =0`$ and $`V_c`$=52.91 MeV.
The STCSM level scheme is plotted in Fig. 1. In the following, the condition of consistency is also achieved, the same shape being used for the microscopic and phenomenological models involved. The parent and the daughter do not have pronounced deformations, so that their ground state nuclear shapes can be approximated by spheres. The ground state configuration of <sup>211</sup>Po is $`(\pi (h_{9/2})^2\nu (g_{9/2})^1)_{9/2^+}`$ . For clarity, the levels of the parent will be labeled with the superscript P, while the superscripts D and $`\alpha `$ will be used for the daughter and the $`\alpha `$ nuclei, respectively. Up to now, the model evidences the variation of the levels for a modification of the nuclear shape, and indicates the origin of the nucleons belonging to the $`\alpha `$–particle. As an interesting feature, the STCSM predicts that the linked level 1$`s_{1/2}^\alpha `$ of the $`\alpha `$ particle emerges from the orbital 1$`g_{9/2}^\mathrm{P}`$ of the spherical <sup>211</sup>Po, which is deeply located in the parent potential well. The present formalism intends to explain the fine structure by considering single–particle transitions due to the radial and the rotational couplings. Levels with the same good quantum numbers associated with the symmetries of the system cannot in general intersect, but exhibit quasi–crossings, or pseudo–crossings, or avoided level crossings. The system is characterized by axial symmetry, so the good quantum numbers are the projections $`\mathrm{\Omega }`$ of the nucleon spin. The radial coupling causes transitions of the unpaired nucleon near the avoided level crossings. True crossings can also be obtained between levels characterized by different quantum numbers. Generally, the rotational coupling has a maximum strength in the vicinity of the true crossings. Transitions due to both couplings are taken into account in order to explain the excitations of the unpaired nucleon. It can be considered that the last unpaired neutron, initially located in the orbital 2$`g_{9/2}^\mathrm{P}`$, has the same chance to choose any of the levels with projection $`\mathrm{\Omega }`$ between 1/2 and 9/2 when the nucleus starts to deform; therefore the occupation probabilities are the same. Comparing the diagrams (a)–(d) of Fig. 2, it is clear that only the level emerging from $`2g_{9/2}^\mathrm{P}`$ with spin projection $`\mathrm{\Omega }=1/2`$ finally reaches adiabatically the 3$`p_{1/2}^\mathrm{D}`$ ground state of the daughter, which means a disintegration performed without excitations. Due to the rotational coupling, a nucleon initially in the state 2$`g_{9/2}^\mathrm{P}`$ $`\mathrm{\Omega }=3/2`$ can jump into the state $`\mathrm{\Omega }=1/2`$ during the disintegration, contributing to a smaller extent to the final daughter ground state. Even taking into account the rotational coupling, the other levels with $`\mathrm{\Omega }>3/2`$ emerging from $`2g_{9/2}^\mathrm{P}`$ have a negligible contribution in the ground–state channel. Moreover, if the Coriolis coupling is not taken into account (using only the radial coupling), the levels with $`\mathrm{\Omega }\ne 1/2`$ emerging from 2$`g_{9/2}^\mathrm{P}`$ finally attain higher daughter levels with at least 3 MeV excess in energy (for example the orbital 2$`g_{9/2}^\mathrm{D}`$ of the daughter).
As the penetrability decreases dramatically with the height of the barrier, it becomes clear that processes in which the nucleon starts from 2$`g_{9/2}^\mathrm{P}`$ with $`\mathrm{\Omega }>3/2`$ at the initial moment of the disintegration are unlikely. This discussion allows us to fix the initial conditions for our process: initially, the valence nucleon of the decaying system can be considered to occupy the states 2$`g_{9/2}^\mathrm{P}`$ with $`\mathrm{\Omega }`$=1/2 and 3/2. To show how the unpaired nucleon can arrive on allowed excited states of the daughter due to the radial effect alone, an arrow in Fig. 2 (a) indicates some avoided level crossings between the levels which reach adiabatically the 3$`p_{1/2}^\mathrm{D}`$, 2$`f_{5/2}^\mathrm{D}`$ and 3$`p_{3/2}^\mathrm{D}`$ daughter orbitals. The transition probabilities are strongly enhanced in the avoided level crossing regions in accordance with the Landau–Zener effect. The transition probability between two adiabatic single–particle states depends strongly on the velocity of passage through the avoided crossing regions $`v_{\mathrm{tun}}`$, and, implicitly, on the tunneling time through the barrier. When the velocity increases, the transition probability is enhanced and the nucleon follows with a larger probability the so-called diabatic state. Using the above description, it is intended to reproduce the fine structure exhibited by the $`\alpha `$ decay of <sup>211</sup>Po: 98.9 % transitions to the 1/2<sup>-</sup> (level 3$`p_{1/2}^\mathrm{D}`$) ground state of the daughter, 0.55% transitions to the 5/2<sup>-</sup> (level 2$`f_{5/2}^\mathrm{D}`$) first single–particle excited state and 0.54% transitions to the 3/2<sup>-</sup> (level 3$`p_{3/2}^\mathrm{D}`$) second single–particle excited state.
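To make the quoted velocity dependence concrete, the single-passage Landau–Zener probability of remaining on the diabatic level is $`P=\mathrm{exp}[-2\pi ϵ_{12}^2/(\hbar v_{\mathrm{tun}}|d(ϵ_1-ϵ_2)/dR|)]`$. The sketch below evaluates this standard formula; the coupling $`ϵ_{12}`$ and the slope of the diabatic levels are illustrative placeholders, not the STCSM values:

```python
import numpy as np

HBAR = 6.582e-7   # MeV fs
eps12 = 0.1       # interaction energy between the diabatic states (MeV), illustrative
slope = 2.0       # |d(eps1 - eps2)/dR| at the crossing (MeV/fm), illustrative

def p_diabatic(v_tun):
    """Single-passage Landau-Zener probability of staying on the diabatic
    level, for a radial velocity v_tun given in fm/fs."""
    return np.exp(-2.0 * np.pi * eps12**2 / (HBAR * v_tun * slope))

for v in (1.0e5, 9.0e6, 1.0e8):   # fm/fs
    print(f"v_tun = {v:.1e} fm/fs -> P_diabatic = {p_diabatic(v):.3f}")
```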
As briefly mentioned, both the radial and the rotational couplings caused by the relative motion of the nascent fragments are taken into account in order to calculate the occupation probabilities of several levels of the daughter by the unpaired nucleon. For simplicity, the effect due to the radial coupling is considered to be well reproduced by the Landau–Zener promotion mechanism in the avoided crossing regions. Therefore, the fine structure of the process is strongly related to the dynamical characteristics of the system. The rotational or Coriolis coupling causes transitions between two levels for which the values of $`\mathrm{\Omega }`$ differ by one unit, and it is proportional to the matrix element of the single–particle angular momentum operator $`j_\pm `$. The STCSM provides the ingredients for calculating the single–particle transition probabilities due to the Landau–Zener effect and to the Coriolis couplings: the interaction energies $`ϵ_{ij}^{\mathrm{\Omega }_k}`$ between the diabatic states in the avoided crossing regions, the diabatic level energies $`ϵ_i^{\mathrm{\Omega }_k}`$ (using spline interpolations) and the wave functions required to compute $`\left<\mathrm{\Omega }_k|j_\pm |\mathrm{\Omega }_k\mp 1\right>`$ as described in Refs. . The behavior of these ingredients is plotted in Fig. 3. The relative velocity between the nascent fragments can also be calculated quantum mechanically using a method similar to that of the variation of constants , but in the following it will be considered as a fit parameter as in Refs. . To obtain the final occupation probabilities of the daughter levels by the unpaired nucleon, a system of coupled differential equations must be solved:
$$\begin{array}{cc}\dot{c}_i^{\mathrm{\Omega }_k}=\hfill & \frac{1}{i\hbar }\sum _{j\ne i}ϵ_{ij}^{\mathrm{\Omega }_k}\mathrm{exp}\left(i\alpha _{ij}^{\mathrm{\Omega }_k\mathrm{\Omega }_k}\right)c_j^{\mathrm{\Omega }_k}+\hfill \\ & \frac{1}{i\hbar }\sum _l\frac{\hbar ^2}{2B_RR^2}\sqrt{I(I+1)-\mathrm{\Omega }_k(\mathrm{\Omega }_k+1)}\left<i,\mathrm{\Omega }_k|j_+|l,\mathrm{\Omega }_k-1\right>\mathrm{exp}\left(i\alpha _{il}^{\mathrm{\Omega }_k\mathrm{\Omega }_k-1}\right)c_l^{\mathrm{\Omega }_k-1}+\hfill \\ & \frac{1}{i\hbar }\sum _l\frac{\hbar ^2}{2B_RR^2}\sqrt{I(I+1)-\mathrm{\Omega }_k(\mathrm{\Omega }_k+1)}\left<i,\mathrm{\Omega }_k|j_{-}|l,\mathrm{\Omega }_k+1\right>\mathrm{exp}\left(i\alpha _{il}^{\mathrm{\Omega }_k\mathrm{\Omega }_k+1}\right)c_l^{\mathrm{\Omega }_k+1}\hfill \end{array}$$
(2)
with $`\alpha _{ij}^{\mathrm{\Omega }_l\mathrm{\Omega }_m}=\int _0^t(ϵ_i^{\mathrm{\Omega }_l}-ϵ_j^{\mathrm{\Omega }_m})dt/\hbar `$, where $`B_R`$ is the effective mass along the elongation $`R`$, taken approximately equal to the reduced mass of the system, and $`I`$=9/2 is the total spin of the system. The time dependence in the above equation can be removed by using the relations $`\dot{c}_i^{\mathrm{\Omega }_k}=v_{\mathrm{tun}}dc_i^{\mathrm{\Omega }_k}/dR`$ and $`R=v_{\mathrm{tun}}t`$. The coefficients $`(c_j^{\mathrm{\Omega }_k})^2`$ give the occupation probabilities of the diabatic levels $`\{j,\mathrm{\Omega }_k\}`$. To solve this system, following the above discussion and inspecting Fig. 2, it was considered sufficient to choose the initial conditions so that the levels with $`\mathrm{\Omega }`$=1/2 and 3/2 emerging from 2$`g_{9/2}^\mathrm{P}`$ have the same initial occupation probabilities, which means that the equality $`(c_{g9/2}^{1/2})^2+(c_{g9/2}^{3/2})^2=1`$ is fulfilled. Also, in solving the system (2), it is satisfactory to take into account the levels with $`\mathrm{\Omega }`$=1/2 emerging from $`1i_{11/2}^\mathrm{P}`$, 2$`g_{9/2}^\mathrm{P}`$, 3$`p_{1/2}^\mathrm{P}`$, 2$`f_{5/2}^\mathrm{P}`$, 3$`p_{3/2}^\mathrm{P}`$, those with $`\mathrm{\Omega }`$=3/2 emerging from 2$`g_{9/2}^\mathrm{P}`$, 2$`f_{5/2}^\mathrm{P}`$, 3$`p_{3/2}^\mathrm{P}`$, and those with $`\mathrm{\Omega }`$=5/2 emerging from 2$`g_{9/2}^\mathrm{P}`$, 1$`i_{13/2}^\mathrm{P}`$. All the avoided level crossings between the selected adiabatic states are taken into account. Levels with $`\mathrm{\Omega }>`$5/2 do not reach the final channels we are interested in.
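A minimal numerical sketch of how a system of this type can be integrated, reduced here to a single pair of diabatic levels with a constant coupling; all parameter values are illustrative, not the STCSM inputs, and the result can be checked against the Landau–Zener formula quoted earlier:

```python
import numpy as np

HBAR = 6.582e-7                   # MeV fs
v = 9.0e6                         # tunneling velocity (fm/fs), illustrative
eps12, slope, Rc = 0.1, 2.0, 5.0  # coupling (MeV), level slope (MeV/fm), crossing (fm)

def deriv(R, c):
    """dc/dR for two diabatic levels with eps1 - eps2 = slope*(R - Rc)."""
    alpha = 0.5 * slope * (R - Rc)**2 / (HBAR * v)   # phase integral / (hbar v)
    f = eps12 / (1j * HBAR * v)
    return np.array([f * np.exp(1j * alpha) * c[1],
                     f * np.exp(-1j * alpha) * c[0]])

# Fourth-order Runge-Kutta across the crossing, starting on level 1.
R, dR = 0.0, 1.0e-3
c = np.array([1.0 + 0j, 0.0 + 0j])
while R < 10.0:
    k1 = deriv(R, c); k2 = deriv(R + dR / 2, c + dR * k1 / 2)
    k3 = deriv(R + dR / 2, c + dR * k2 / 2); k4 = deriv(R + dR, c + dR * k3)
    c += dR * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    R += dR
print(f"diabatic survival (c_1)^2 = {abs(c[0])**2:.3f}")   # ~ Landau-Zener value
```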
For each channel, the penetrability $`P_{L_{im_i}}(Q_i)`$ of the barrier was obtained using the numerical superasymmetric fission model , the nuclear part being given by the Yukawa–plus–exponential approximation. This penetrability depends on the $`Q_i`$–value of the channel $`i`$ (here $`i`$ labels the single–particle state of the daughter) and on the relative motion orbital momentum $`L_{im_i}`$. In the final channel $`3p_{1/2}^\mathrm{D}`$, due to the conservation laws, $`L_{im_i}`$ has the value 5 ($`m_i`$=1); in the final channel 2$`f_{5/2}^\mathrm{D}`$, $`L_{im_i}`$ can be either 3, 5 or 7 ($`m_i`$=1,2,3); and in the final channel 3$`p_{3/2}^\mathrm{D}`$, $`L_{im_i}`$ can be either 3 or 5 ($`m_i`$=1,2). For a specific final single–particle state, for example 2$`f_{5/2}^\mathrm{D}`$, it is not possible to discriminate between the possible values of the relative motion orbital momentum $`L_{im_i}`$=3, 5 and 7 in order to compute only one barrier penetrability. In these circumstances, by analogy with Mang's formulae of Refs. concerning the radial motion and the associated angular momentum, we consider that the angular momentum $`L_{im_i}`$, used in calculating the penetrabilities, has a probability to be obtained in the final channel directly proportional to the square of the Clebsch–Gordan coefficient $`(jI\mathrm{\Omega }-\mathrm{\Omega }|L_{im_i}0)^2`$. Thus, the spectroscopic amplitude in the channel $`i`$ associated with the spin $`\mathrm{\Omega }_k`$ and the momentum $`L_{im_i^{\prime }}`$ is $`p_i^{\mathrm{\Omega }_kL_{im_i^{\prime }}}=(c_i^{\mathrm{\Omega }_k})^2(j_iI\mathrm{\Omega }_k-\mathrm{\Omega }_k|L_{im_i^{\prime }}0)^2/\sum _{m_i}(j_iI\mathrm{\Omega }_k-\mathrm{\Omega }_k|L_{im_i}0)^2`$, where the summation over $`m_i`$ runs over the allowed values of $`L_{im_i}`$. The partial half–life $`T_i^{\mathrm{\Omega }_k}`$ for the channel $`\{i,\mathrm{\Omega }_k\}`$ becomes proportional to the quantity:
$$T_i^{\mathrm{\Omega }_k}\propto \frac{1}{\sum _{m_i}p_i^{\mathrm{\Omega }_kL_{im_i}}P_{L_{im_i}}(Q_i)},$$
(3)
the proportionality factor being given by the barrier assault frequency. The partial half–lives for the transitions to the ground–state $`T_{3p_{1/2}^\mathrm{D}}`$, to the first excited state $`T_{2f_{5/2}^\mathrm{D}}`$ and to the second excited state $`T_{3p_{3/2}^\mathrm{D}}`$ are:
$$\begin{array}{c}\frac{1}{T_{3p_{1/2}^\mathrm{D}}}=\frac{1}{T_{3p_{1/2}^\mathrm{D}}^{1/2}}\hfill \\ \frac{1}{T_{2f_{5/2}^\mathrm{D}}}=\frac{1}{T_{2f_{5/2}^\mathrm{D}}^{1/2}}+\frac{1}{T_{2f_{5/2}^\mathrm{D}}^{3/2}}+\frac{1}{T_{2f_{5/2}^\mathrm{D}}^{5/2}}\hfill \\ \frac{1}{T_{3p_{3/2}^\mathrm{D}}}=\frac{1}{T_{3p_{3/2}^\mathrm{D}}^{1/2}}+\frac{1}{T_{3p_{3/2}^\mathrm{D}}^{3/2}}\hfill \end{array}$$
(4)
The barrier assault frequency being the same for all the channels, the relative intensities $`T_{3p_{1/2}^\mathrm{D}}/T_{2f_{5/2}^\mathrm{D}}`$ and $`T_{3p_{1/2}^\mathrm{D}}/T_{3p_{3/2}^\mathrm{D}}`$ for the fine structure can be obtained. Several tunneling velocities, considered here as a fit parameter, have been tried. For a tunneling velocity of 9$`\times 10^6`$ fm/fs, the ratio between the intensity for transitions to the first excited state and that to the ground state was found to be 0.0071, and the corresponding ratio between the second excited state and the ground state was 0.0062. These results are in good agreement with the experimental values presented before. Moreover, calculations of Ref. show that in the quantum time–dependent approach the tunneling velocity is of the order of 1$`\times 10^7`$ fm/fs. These calculations suggest that the $`\alpha `$ decay fine structure phenomenon can be explained quantitatively by describing the decaying system with molecular models, and it can be stated that the quantitative characteristics of this phenomenon are ruled by dynamical effects. In an avoided crossing region, the two eigenfunctions of the adiabatic levels exchange their characteristics. If the relative distance $`R`$ changes infinitely slowly, the unpaired nucleon will remain in the same adiabatic single–particle state after the passage through a quasi–crossing, any other available single–particle excited state being unfavoured. For a large tunneling velocity, the unpaired nucleon will follow the diabatic state after the passage through a quasi–crossing, all the other states being unfavoured. For a finite tunneling velocity, an intermediate situation arises, some single–particle states being favoured and others being unfavoured. The model proposed here, like the usual picture that consists of calculating the overlaps between the parent and the channel wave functions, is also based on the existence of favoured and unfavoured transitions in odd nuclei. The proposed formalism offers a competitive description of the alpha decay mechanism, by investigating for the first time the way in which the levels, initially bunched in shells, are reorganized during the disintegration to realize the final energy configuration.
# How the Replica-Symmetry-Breaking Transition Looks Like in Finite-Size Simulations
## I Introduction
The concept of replica-symmetry breaking (RSB) gives us new insight into the character of the ordered state of complex systems such as spin glasses (SG) and real structural glasses. Systems exhibiting the RSB can roughly be divided into two categories depending on their breaking patterns: One is a full or hierarchical RSB and the other is a one-step RSB. In both cases, there are many different equilibrium states unrelated by global symmetry of the Hamiltonian, and an overlap between these states plays an important role in describing the ordered state.
In the case of one-step RSB, the overlap $`q`$ takes only two values in the thermodynamic limit, namely, either a self-overlap equal to the Edwards-Anderson order parameter, $`q=q_{\mathrm{EA}}`$, or a non-self-overlap usually equal to zero, $`q=0`$. The overlap distribution function $`P(q)`$ consists of two distinct delta-function peaks, one at $`q=q_{\mathrm{EA}}`$ and the other at $`q=0`$. One-step RSB transitions could be either continuous or first-order, either with or without a finite discontinuity in $`q_{\mathrm{EA}}`$ at the transition. Examples of the first-order one-step RSB transition may be the mean-field $`p`$-spin glass with $`p>2`$, the random energy model, and the mean-field $`p`$-state Potts glass with $`p>4`$, while those of the continuous one-step RSB transition may be the mean-field $`p`$-state Potts glass with $`2.8<p\le 4`$.
In the case of the full RSB, by contrast, possible values of the overlap are distributed continuously in a certain range, and the states are organized in a hierarchical manner. The overlap distribution function has a continuous plateau at $`q<q_{\mathrm{EA}}`$ in addition to the delta-function peak at $`q=q_{\mathrm{EA}}`$. A well-known example of this category is the standard mean-field Ising SG, namely, the Sherrington-Kirkpatrick (SK) model. In some special cases, the admixture of the above two, where the overlap distribution function has a continuous plateau together with the delta-function peak at $`q=0`$ (and the one at $`q=q_{\mathrm{EA}}`$), is also possible. An example of this may be the mean-field $`p`$-state Potts glass with $`2<p<2.8`$.
Recent interest in SG studies has been focused largely on the validity of applying the RSB idea established in some mean-field models to more realistic short-range SG models. In almost all such studies, the three-dimensional (3D) Ising SG model has been employed. Indeed, some researchers have claimed that the full or hierarchical RSB as observed in the SK model is also realized in realistic 3D SG, while other researchers have claimed, based on an alternative droplet picture, that the ordered state of realistic 3D SG is unique up to global symmetry of the Hamiltonian, without showing RSB of any kind. Thus, intensive debate has continued between these two scenarios as to the true nature of the SG ordered state of 3D short-range systems.
Meanwhile, the one-step RSB has been discussed mainly with interest in its close connection to structural glasses rather than SG magnets. Recently, however, one-step RSB features have been found unexpectedly by the present authors in the chiral-glass state of a 3D Heisenberg SG. According to the chirality mechanism of experimental SG transitions based on the spin-chirality decoupling-recoupling scenario , the SG ordered state and the SG phase transition of real Heisenberg-like SG magnets possessing weak but nonzero magnetic anisotropy are governed by the chirality ordering of the fully isotropic system which is “revealed” by the weak magnetic anisotropy, not by the spin ordering which has been “separated” in the fully isotropic case from the chirality ordering. Then, the observation of Ref. means that the SG ordered state of most of real SG magnets should also exhibit such one-step RSB-like features. Note that such a picture of the SG ordered state contrasts with the standard pictures discussed so far, either the droplet picture without the RSB or the SK picture with the full RSB.
Under such circumstances, further studies of the nature of the possible RSB in 3D short-range SG models are clearly required. Since we are usually forced to employ numerical simulations to investigate 3D short-range models, and since numerical simulations are often hampered by severe finite-size effects, we feel it worthwhile to further clarify by numerical simulations the finite-size effects in some mean-field models which are exactly known to exhibit RSB transitions in the thermodynamic limit. In particular, the question how the one-step and full RSB transitions look like in finite-size simulations is of both fundamental and practical interest. Such information would be of much help as a reference in interpreting the numerical data obtained for finite-dimensional short-range SG models.
In the present paper, we choose two mean-field SG models exactly known to exhibit a continuous (second-order) phase transition in the thermodynamic limit: One is the SK model which shows the full RSB, and the other is the mean-field three-state ($`p=3`$) Potts-glass model which shows the one-step RSB. We calculate by Monte Carlo simulations several quantities which have widely been used in identifying the phase transition, including the spin-glass order parameter and the Binder parameter, together with the quantities recently introduced to represent the non-self-averaging character of the ordered state. By carefully examining the size dependence of these quantities, comparison is made between the two types of RSB. Our results have revealed that the Binder parameter of the one-step RSB system shows the behavior very different from the standard behavior, giving warning about the interpretation of the numerical data for relevant short-range systems.
## II Models
The mean-field $`p`$-state Potts-glass model is defined by the Hamiltonian,
$$\mathcal{H}=-p\sum _{i<j}^{N}J_{ij}\delta _{n_i,n_j},$$
(1)
where $`n_i`$ denotes a Potts-spin variable at the $`i`$-th site which takes $`p`$ distinct states, and $`N`$ is the total number of Potts spins. The exchange interaction $`J_{ij}`$ is an independent random Gaussian variable with zero mean and variance $`J^2/N`$. The model with $`p=2`$ is equivalent to the SK model. In the present study, we focus our attention on the standard SK model corresponding to $`p=2`$ and the three-state Potts-glass model corresponding to $`p=3`$. Although the thermodynamic properties of an infinite system have been rather well understood by the calculation based on a replica technique, its finite-size properties have been much less understood.
It is convenient to use an equivalent simplex spin representation where the Potts spin $`n_i`$ is written in terms of a $`p-1`$ dimensional unit vector $`\vec{S}_i`$, which satisfies $`\vec{S}_i\cdot \vec{S}_j=\frac{p\delta _{n_i,n_j}-1}{p-1}`$,
$$\mathcal{H}=-(p-1)\sum _{i<j}^{N}J_{ij}\vec{S}_i\cdot \vec{S}_j.$$
(2)
In the particular case of $`p=2`$, $`\stackrel{}{S_i}`$ simply reduces to the one-component Ising variable $`S_i=\pm 1`$, and the Hamiltonian (2) is equivalent to the standard SK Hamiltonian.
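As an illustration added in this edited version (not part of the original text), the $`p=3`$ simplex vectors can be taken as three planar unit vectors at mutual angles of 120 degrees, and the dot-product rule above can be checked numerically:

```python
import numpy as np

p = 3
# p = 3 simplex: three planar unit vectors separated by 120 degrees.
S = np.array([[np.cos(2 * np.pi * n / p), np.sin(2 * np.pi * n / p)] for n in range(p)])

for a in range(p):
    for b in range(p):
        expected = (p * (a == b) - 1) / (p - 1)   # (p*delta - 1)/(p - 1)
        assert np.isclose(S[a] @ S[b], expected)
print("simplex dot products verified")
```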
In terms of the simplex spin $`S_i^\mu `$ ($`1\le \mu \le p-1`$), the parameter $`q`$ may be defined by
$$q=\sqrt{\sum _{\mu ,\nu }^{p-1}(q^{\mu \nu })^2},$$
(3)
where $`q^{\mu \nu }`$ denotes an overlap tensor between two replicas 1 and 2,
$$q^{\mu \nu }=\frac{1}{N}\sum _{i=1}^{N}S_{i,1}^\mu S_{i,2}^\nu .$$
(4)
The Binder parameter is then given by
$$g(T,N)=\frac{(p-1)^2}{2}\left(1+\frac{2}{(p-1)^2}-\frac{[\langle q^4\rangle ]}{[\langle q^2\rangle ]^2}\right),$$
(5)
where $`\langle \cdots \rangle `$ denotes the thermal average and $`[\cdots ]`$ denotes the average over the quenched randomness $`\{J_{ij}\}`$. The Binder parameter is normalized so as to vanish above the transition temperature $`T_\mathrm{g}`$ in the thermodynamic limit. Recall that at $`T>T_\mathrm{g}`$ each component $`q^{\mu \nu }`$ should behave as an independent Gaussian variable. Below $`T_\mathrm{g}`$, $`g`$ is normalized to give unity in the thermodynamic limit for a nondegenerate ordered state where $`P(q)`$ has only the trivial peak at $`q=q_{\mathrm{EA}}`$. Of course, this is not the case for the SG models showing the RSB, including the present mean-field SG models, for which $`g`$ takes nontrivial values different from unity even in the thermodynamic limit. Hence, at least in the case where a continuous phase transition occurs into the trivial ordered state, $`g`$ for various finite sizes is expected to cross at $`T=T_\mathrm{g}`$. Indeed, this aspect has widely been used for locating the transition temperature from numerical data for finite systems.
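As a concrete illustration (added in this edited version; the data below are synthetic stand-ins for MC measurements), Eq. (5) can be estimated from overlap samples as follows. For $`p=2`$ and Gaussian $`q`$, as expected above $`T_\mathrm{g}`$, the estimate is close to zero:

```python
import numpy as np

def binder_parameter(q, p):
    """Estimate Eq. (5); q has shape (n_bond_realizations, n_measurements)."""
    q2 = (q**2).mean(axis=1)            # <q^2> for each bond realization
    q4 = (q**4).mean(axis=1)            # <q^4> for each bond realization
    ratio = q4.mean() / q2.mean()**2    # [<q^4>] / [<q^2>]^2
    return 0.5 * (p - 1)**2 * (1.0 + 2.0 / (p - 1)**2 - ratio)

rng = np.random.default_rng(1)
q_hot = rng.normal(scale=0.1, size=(200, 5000))   # mimics T >> T_g for p = 2
print(binder_parameter(q_hot, p=2))               # close to 0
```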
## III Monte Carlo Results
We perform MC simulations based on a version of the extended ensemble method, called the exchange method. As in other SG models, extremely slow relaxation becomes a serious problem for MC simulations of the present mean-field SG models. This difficulty can partly be overcome by using the exchange method, which has turned out to be quite efficient in thermalizing various hardly relaxing systems. The method enables us to study larger sizes and/or lower temperatures than those attained previously. Our MC simulations have been performed up to $`N=512`$ at $`T/J=0.25`$ for the SK model, and $`N=256`$ at $`T/J=0.4`$ for the mean-field $`p=3`$ Potts-glass model, where $`T_\mathrm{g}/J=1`$ in both models. Sample averages are taken over 200-1792 independent bond realizations depending on the size $`N`$. We note that the minimum temperatures studied here are considerably lower than previous ones; e.g., $`N=512`$ at $`T/J=0.75`$ for the SK model and $`N=120`$ at $`T/J=0.98`$ for the mean-field $`p=3`$ Potts-glass model.
The temperature and size dependence of the calculated Binder parameter $`g`$ is shown in Figs. 1 and 2 for the SK and the $`p=3`$ Potts-glass models, respectively. As is evident from these figures, the Binder parameters of the two mean-field models show considerably different behaviors from each other.
In the SK model, as shown in Fig. 1, a clear crossing of $`g`$ is observed at $`T=T_\mathrm{g}`$, which looks similar to the ones seen in standard continuous transitions. In fact, the behavior of $`g`$ found here also resembles the ones observed in the short-range Ising SG models in 3D and in 4D, though the crossing tendency is less pronounced in 3D than in 4D. As mentioned, $`g`$ of the SK model takes a nontrivial value below $`T_\mathrm{g}`$ even in the thermodynamic limit due to its RSB. We show in Fig. 1 the behavior of $`g(T,\infty )`$ evaluated in the replica formalism by numerically solving the Parisi equation \[1(a)\]. Note that, as the temperature approaches $`T_\mathrm{g}`$ from below, the limiting value $`g(T_\mathrm{g}^{-},\infty )`$ goes to unity as in the case of ordinary continuous phase transitions. Hence, with increasing $`N`$, $`g(T_\mathrm{g}^{-},N)`$ just below $`T_\mathrm{g}`$ is expected to approach unity from below, while $`g(T_\mathrm{g}^{+},N)`$ just above $`T_\mathrm{g}`$ approaches zero from above, which entails a crossing of $`g`$ at $`T=T_\mathrm{g}`$. With lowering temperature, $`g(T,\infty )`$ first decreases, reaching a minimum around $`T/J=0.5`$, and increases again, tending to unity at $`T=0`$. Note that, for any model with a nondegenerate ground state, $`P(q)`$ becomes trivial at $`T=0`$ irrespective of the occurrence of RSB, and $`g`$ tends to unity. As can be seen in Fig. 1, the present MC results for finite $`N`$ gradually approach the $`g(T,\infty )`$ curve of the infinite system.
In the mean-field $`p=3`$ Potts glass, as shown in Fig. 2, no crossing of $`g`$ is observed at $`T=T_\mathrm{g}`$, at least not of the type observed in the SK model. Instead, unlike the case of the SK model, a shallow negative dip develops above $`T_\mathrm{g}`$ for larger $`N`$, which becomes deeper as the system gets larger. Although the existence of a negative dip was not reported in the previous numerical works, we note that a negative dip appears only for larger $`N`$, which accounts for its absence in the previous data. Perhaps, on looking at Fig. 2, one would hardly imagine that a continuous phase transition occurs at $`T/J=1`$: nevertheless, the occurrence of a continuous transition at $`T/J=1`$ is an exactly established property of the model. We also note that, while the appearance of a growing negative dip in the Binder parameter is often related to the occurrence of a first-order transition, this is not always the case: here, the transition is established to be continuous.
It might be instructive to examine here the behavior of $`g`$ in the thermodynamic limit. As the temperature approaches $`T_\mathrm{g}`$ from below, $`g(T_\mathrm{g}^{-},\infty )`$ tends to a negative value, $`-1`$ in the present case. Such a negative value of $`g(T_\mathrm{g}^{-},\infty )`$ is in sharp contrast to the system showing the full RSB, where $`g(T_\mathrm{g}^{-},\infty )=1`$. Indeed, this negativity is closely related to the occurrence of the one-step RSB in the model.
Then, one expects that the negative dip of $`g(T,N)`$ observed in Fig. 2 further deepens with increasing $`N`$, and eventually approaches $`-1`$ from above at $`T=T_\mathrm{g}^{-}`$, in sharp contrast to the SK case where $`g(T_\mathrm{g}^{-},N)`$ approaches 1 from below. Therefore, the crossing of $`g`$ in the $`g>0`$ region as observed in the SK model hardly occurs in the $`p=3`$ Potts-glass model. Rather, if one considers the fact that $`g(T,N)`$ above $`T_\mathrm{g}`$ is negative for moderately large $`N`$, approaching zero from below, the crossing of $`g`$ is expected to occur in the $`g<0`$ region, not in the $`g>0`$ region as in the case of the SK model. The data of Fig. 2 are certainly consistent with such a behavior. In any case, our present results for the mean-field $`p=3`$ Potts glass reveal that Binder-parameter data have to be interpreted with special care, particularly when the ordered state has one-step RSB features.
Next, we study the so-called Guerra parameter which was originally introduced to detect the RSB transition,
$$G(T,N)=\frac{[\langle q^2\rangle ^2]-[\langle q^2\rangle ]^2}{[\langle q^4\rangle ]-[\langle q^2\rangle ]^2}.$$
(6)
Since the numerator represents a sample-to-sample fluctuation of the overlap, non-vanishing of $`G`$ means a lack of self-averaging so long as the denominator remains nonzero. In the mean-field SG models studied here, their RSB indeed gives rise to the lack of self-averaging, i.e., the occurrence of a non-trivial probability distribution of the overlap over quenched disorder. It has been rigorously proven, without using the replica trick, that in the SG phase of the SK model the $`G`$ parameter in the thermodynamic limit is equal to 1/3 independent of the temperature. Meanwhile, it has been pointed out in Ref. that, even when $`P(q)`$ is trivial and the ordered state is self-averaging, the $`G`$ parameter can still take a nonzero value due to the possible vanishing of the denominator, leading to a crossing at $`T_\mathrm{g}`$. Hence, the crossing of $`G`$ does not necessarily mean the lack of the self-averaging, although it can still be used as an indicator of a phase transition. As an indicator of the non-self-averageness in the ordered state, one may use the $`A`$ parameter defined by
$$A(T,N)=\frac{[\langle q^2\rangle ^2]-[\langle q^2\rangle ]^2}{[\langle q^2\rangle ]^2}.$$
(7)
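In the same spirit as the Binder-parameter sketch above (again an illustration added in this edited version), both quantities can be estimated from the same array of overlap samples; for uncorrelated high-temperature samples both estimates are close to zero:

```python
import numpy as np

def guerra_and_A(q):
    """Eqs. (6) and (7) from q of shape (n_bond_realizations, n_measurements)."""
    q2 = (q**2).mean(axis=1)                  # <q^2> for each realization
    q4 = (q**4).mean(axis=1)                  # <q^4> for each realization
    var_q2 = (q2**2).mean() - q2.mean()**2    # [<q^2>^2] - [<q^2>]^2
    return var_q2 / (q4.mean() - q2.mean()**2), var_q2 / q2.mean()**2

rng = np.random.default_rng(2)
print(guerra_and_A(rng.normal(scale=0.1, size=(200, 5000))))   # both ~ 0
```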
We calculate these two parameters, $`G`$ and $`A`$, both for the SK and the mean-field $`p=3`$ Potts-glass models. The temperature and size dependences of the $`G`$ and $`A`$ parameters of the SK model are shown in Figs. 3 and 4, respectively. Although the error bars are still large, both $`G`$ and $`A`$ show a clear crossing at $`T_\mathrm{g}`$, remaining positive at any temperature. As expected, with increasing $`N`$, the $`G`$ parameter approaches $`1/3`$ independently of $`T`$ below $`T_\mathrm{g}`$. By contrast, the $`A`$ parameter for various sizes merges into a single curve below $`T_\mathrm{g}`$, which clearly stays nonzero, indicating the non-self-averageness of the ordered state. Here it should be noticed that, just at the transition point, non-self-averageness is expected to occur in any random system, even including the ones not showing the RSB in the ordered state. Hence, in the type of random systems which do not show the RSB in the ordered state, $`A(T,\infty )`$ stays nonzero only just at $`T=T_\mathrm{g}`$ and vanishes on both sides of $`T_\mathrm{g}`$. By contrast, in the present SK model, $`A(T,\infty )`$ should stay finite even below $`T_\mathrm{g}`$ due to its RSB, which explains the observed merging behavior seen in Fig. 4 at $`T<T_\mathrm{g}`$. As can be seen from Fig. 4, on further lowering the temperature toward $`T=0`$, $`A(T,N)`$ tends to vanish, in contrast to the behavior of $`G(T,N)`$. This aspect is consistent with the fact that at $`T=0`$ the overlap distribution becomes trivial and self-averageness is recovered irrespective of the occurrence of RSB.
The $`G`$ and $`A`$ of the mean-field $`p=3`$ Potts glass are presented in Figs. 5 and 6, respectively. Unlike the case of the Binder parameter $`g`$ shown in Fig. 2, the $`G`$ and $`A`$ parameters remain positive at any $`T`$ and show a clear crossing at $`T=T_\mathrm{g}`$: They behave more like the Binder parameter of standard systems, e.g., like the one shown in Fig. 1. In fact, the behaviors of the $`G`$ and $`A`$ parameters shown in Figs. 5 and 6 are similar to those of the SK model shown in Figs. 4 and 4, suggesting that $`G`$ and $`A`$ are less sensitive to the kind of breaking pattern of replica symmetry. Hence, one could use the $`G`$ and $`A`$ parameters to identify the SG transition based on the standard crossing method even for systems showing a one-step RSB.
Once the transition temperature is established, the next task would be to determine critical exponents. Here we wish to examine a finite-size scaling hypothesis concerning the SG order parameter for the present mean-field models. Similar analyses have widely been used for extracting the critical exponents from numerical data. According to Ref. , finite-size scaling of the mean-field models can be derived by assuming that the “coherence number” behaves as $`\xi ^{d_u}`$, where $`d_u`$ is the upper critical dimension of the corresponding short-range model, while the “coherence length” $`\xi `$ diverges at $`T=T_\mathrm{g}`$ with the correlation-length exponent at the upper critical dimension, $`\xi \sim |T-T_\mathrm{g}|^{-\nu _{\mathrm{MF}}}`$. Then, the squared order parameter can be written as
$`[\langle q^2\rangle ]`$ $`\sim |T-T_\mathrm{g}|^{2\beta _{\mathrm{MF}}}f(N|T-T_\mathrm{g}|^{d_u\nu _{\mathrm{MF}}}),`$ (8)
$`\sim N^{-2\beta _{\mathrm{MF}}/d_u\nu _{\mathrm{MF}}}f^{\prime }(N|T-T_\mathrm{g}|^{d_u\nu _{\mathrm{MF}}}),`$ (9)
where $`\beta _{\mathrm{MF}}=1`$ is the mean-field order-parameter exponent whereas $`f`$ and $`f^{\prime }`$ are the scaling functions. Noting the fact that the upper critical dimension of the SG models is $`d_u=6`$ and the correlation-length exponent at $`d=d_u=6`$ is equal to $`\nu _{\mathrm{MF}}`$=1/2, it follows
$`[\langle q^2\rangle ]\sim N^{-2/3}f^{\prime \prime }(|T-T_\mathrm{g}|N^{1/3}).`$ (10)
The resulting finite-size scaling plots are shown in Figs. 7 and 8 for the SK and the $`p=3`$ Potts-glass models, respectively. In both models, the scaling form (10) turns out to work fairly well both below and above $`T_\mathrm{g}`$, as long as the temperature is sufficiently close to $`T_\mathrm{g}`$. We note that a similar finite-size-scaling analysis has already been reported for the SK model just at $`T_\mathrm{g}`$ and for the $`p=3`$ Potts-glass model above $`T_\mathrm{g}`$ . In particular, the scaling turns out to be reasonably good even for the $`p=3`$ Potts glass, where the Binder parameter does not exhibit a clear crossing in the range of sizes studied. This implies that the standard finite-size scaling analysis of the order parameter could still be useful even in RSB systems, including the one-step RSB systems.
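For readers who wish to reproduce such a collapse, the following sketch applies the rescaling of Eq. (10) to synthetic data generated to obey it exactly (the scaling function chosen below is arbitrary and purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x):                                    # arbitrary smooth scaling function
    return 1.0 / (1.0 + np.exp(x))

Tg = 1.0
T = np.linspace(0.6, 1.4, 25)
for N in (64, 128, 256, 512):
    q2 = N**(-2/3) * f((T - Tg) * N**(1/3))  # synthetic [<q^2>] obeying Eq. (10)
    plt.plot((T - Tg) * N**(1/3), q2 * N**(2/3), 'o', label=f'N = {N}')
plt.xlabel(r'$(T - T_g)\,N^{1/3}$')
plt.ylabel(r'$[\langle q^2\rangle]\,N^{2/3}$')
plt.legend()
plt.show()
```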
## IV Discussion and Remarks
In this section, with our present results for the mean-field models in mind, we wish to comment on the possible RSB in some short-range SG models.
As mentioned, one-step RSB-like features were recently observed in the chiral-glass state of the 3D short-range Heisenberg SG. There, the Binder parameter for the chirality, the order parameter of the chiral-glass transition, did not cross in the $`g>0`$ region and developed a negative dip which deepened with the system size. Instead, a crossing of $`g`$ was observed in the $`g<0`$ region close to the negative dip (see Fig. 1 of Ref. ). Meanwhile, the $`G`$ parameter always remained positive and showed a clear crossing at $`T=T_\mathrm{g}`$ (see Fig. 3 of Ref. ). All these features are similar to the ones observed here in the mean-field $`p=3`$ Potts glass, suggesting that the chiral-glass state of the 3D Heisenberg SG has a one-step RSB-like character.
Another obvious question is the nature of the possible phase transition of the short-range three-state ($`p=3`$) Potts-glass model in 3D. It is widely believed that there is no finite-temperature phase transition in the 3D $`p=3`$ Potts glass, which has been investigated by MC simulations and other numerical methods. In particular, MC results of Refs. revealed that the Binder parameter decreased monotonically with system size without showing a crossing, which was taken as evidence of the absence of a finite-temperature transition. However, the behavior of $`g`$ observed in Refs. was not dissimilar to the one observed here in the mean-field $`p=3`$ Potts glass, and we feel that the possibility of the occurrence of a one-step RSB-like transition at finite $`T_\mathrm{g}`$ still cannot be ruled out.
Recently, short-range $`p`$-spin glass models whose mean-field versions have been known to show the one-step RSB were studied by MC simulations. For example, according to the calculation of Ref. for the 4D $`3`$-spin model, the Binder parameter did not exhibit a crossing of the standard type, while the $`G`$ and $`A`$ parameters showed a clear crossing at $`T=T_\mathrm{g}>0`$, strongly suggesting the occurrence of a finite-temperature transition. Thus, from our present study, the occurrence of a one-step RSB transition at $`T=T_\mathrm{g}>0`$ is suspected there as well. Meanwhile, a closer inspection reveals that the negative dip observed in $`g`$ becomes shallower with increasing system size, in contrast to the case of the mean-field $`p=3`$ Potts glass studied here. Further studies seem to be required to clarify the nature of the RSB in the short-range $`p`$-spin glass.
In conclusion, we have investigated by MC simulations the finite-size effects of two mean-field SG models whose replica-symmetry-breaking properties in the thermodynamic limit are well established. In the mean-field Ising spin glass (the SK model), the Binder parameter $`g`$ of various sizes always remains positive and crosses at $`T=T_\mathrm{g}`$, while in the mean-field three-state Potts glass it develops a negative dip which deepens as the system size increases, without the crossing in the $`g>0`$ region that is observed in the SK model. Instead, a crossing of $`g`$ occurs in the $`g<0`$ region near the negative dip. This difference in the behaviors of $`g`$ reflects the different types of associated RSB of the two models, i.e., full versus one-step RSB. By contrast, the Guerra parameter $`G`$ as well as the non-self-averaging parameter $`A`$ always remain positive and show a crossing of the standard type at $`T=T_\mathrm{g}`$ for both the SK and $`p=3`$ Potts-glass models. We have also discussed implications of the present results for the possible interpretation of the numerical results for some short-range SG models.
###### Acknowledgements.
The authors would like to thank H. Takayama and H. Yoshino for valuable discussions. One of the present authors (KH) was supported by Grant-in-Aid for the Encouragement of Young Scientists from the Ministry of Education, Science, Sports and Culture of Japan (No. 11740220). Numerical simulation was performed on several workstations at ISSP, University of Tokyo.
# Two Stages in Evolution of Binary Alkali BEC Mixtures towards Phase Segregation
## I Introduction
Spinodal decomposition in a binary-solution system is a typical example of phase-ordering dynamics: the growth of order through domain coarsening when a system is quenched from the homogeneous phase into a broken-symmetry phase . Systems quenched from a disordered phase into an ordered phase do not order instantaneously. Instead, different length scales set in as the domains form and grow with time, and different broken-symmetry phases compete to select the equilibrium state. In the dynamical equation for spinodal decomposition, the Cahn-Hilliard equation, the diffusion constant of the material plays a decisive role and determines both the length and time scales. It would appear that a system without a diffusion constant could not show all the main features of spinodal decomposition. With the recent realization of binary alkali Bose-Einstein condensates (BEC’s) we demonstrate in the present article that it is indeed possible to have a spinodal decomposition without a diffusion constant, and that this may have been realized experimentally. The binary alkali BEC mixtures provide new systems for studying non-equilibrium phenomena in new parameter regimes. Their mathematical description is simple enough that a theoretical treatment of the whole process is feasible, and a direct comparison between theoretical calculations and experimental observations can be made. Here we shall study the gross features of the dynamical evolution process, starting from the homogeneous unstable state. To differentiate the present situation from the usual ones, we shall call the present one the quantum spinodal decomposition, and the previous ones the classical spinodal decomposition.
In a classical spinodal decomposition process, the particle number of each species is conserved separately. The process can be classified into two stages: the initial stage of fast growth from the homogeneous unstable state, and the late stage towards the equilibrium of true ordering. The initial stage is dominated by the fastest growth mode determined by the dynamical equation; it is a highly non-equilibrium process. There is a length scale associated with this time scale, which gives the characteristic domain size after the initial stage. In the late stage the domains grow and merge, and their number becomes smaller and smaller. This is a relaxation process towards equilibrium, dominated by the slowest time scale. For an infinitely large system this time scale may be infinite, so the system may never achieve true equilibrium. We shall show below that the quantum spinodal decomposition occurring in the binary BEC mixtures shares all the features of the classical spinodal decomposition. The main difference is that the size of a binary BEC mixture is finite, so it is possible to achieve true equilibrium within a finite time. To further differentiate the present process from the classical one, we shall call the initial stage in the dynamical evolution of the binary BEC mixtures stage I, and the second stage towards equilibrium stage II.
In the following, we first formulate the problem in Sec. II. The coupled nonlinear Schrödinger equations and the parameter regime to realize the quantum spinodal decomposition will be specified. In Sec. III the stage I is studied in detail. The fastest growth mode is explicitly given. Interspersed regions or domains of coexisting condensate 1 and 2 are formed at this stage, characterized by a length scale. Stage II is studied in Sec. IV. We first demonstrate that the scenarios in classical spinodal decompositions are not possible here. We propose and analyze a new mechanism for the approach towards equilibrium: the Josephson effect between different domains of same condensate. This is in accordance with the present quantum spinodal decomposition concept. A comparison to experimental situations is discussed in Sec. V. We conclude there that the quantum spinodal decomposition can be realized. We summarize in Sec. VI.
## II Formulation of Problem
We start from the Hamiltonian formulation of a binary BEC mixture at zero temperature:
$`H`$ $`=`$ $`{\displaystyle \int d^3x\left[\psi _1^{*}(x)\left(-\frac{\hbar ^2\nabla ^2}{2m_1}\right)\psi _1(x)+\psi _1^{*}(x)U_1(x)\psi _1(x)\right]}`$ (1)
$`+{\displaystyle \int d^3x\left[\psi _2^{*}(x)\left(-\frac{\hbar ^2\nabla ^2}{2m_2}\right)\psi _2(x)+\psi _2^{*}(x)U_2(x)\psi _2(x)\right]}`$
$`+{\displaystyle \frac{G_{11}}{2}}{\displaystyle \int d^3x\,\psi _1^{*}(x)\psi _1(x)\psi _1^{*}(x)\psi _1(x)}`$
$`+{\displaystyle \frac{G_{22}}{2}}{\displaystyle \int d^3x\,\psi _2^{*}(x)\psi _2(x)\psi _2^{*}(x)\psi _2(x)}`$
$`+G_{12}{\displaystyle \int d^3x\,\psi _1^{*}(x)\psi _1(x)\psi _2^{*}(x)\psi _2(x)}.`$
Here $`\psi _j`$, $`m_j`$, $`U_j`$ with $`j=1,2`$ are the effective wave function, the mass, and the trapping potential of the $`j`$th condensate. The interaction between the $`j`$th condensate atoms is specified by $`G_{jj}`$, and that between 1 and 2 by $`G_{12}`$. In the present paper all $`G`$'s will be taken to be positive. The corresponding time dependent equations of motion are the well-known non-linear Schrödinger equations , obtained here by minimization of the action, $`S=\int dt\left\{\sum _{j=1,2}\int d^3x\,\psi _j^{*}i\hbar \frac{\partial }{\partial t}\psi _j-H\right\}`$,
$`i\hbar {\displaystyle \frac{\partial }{\partial t}}\psi _1(x,t)`$ $`=`$ $`-{\displaystyle \frac{\hbar ^2}{2m_1}}\nabla ^2\psi _1(x,t)+(U_1(x)-\mu _1)\psi _1(x,t)+G_{11}|\psi _1(x,t)|^2\psi _1(x,t)`$ (2)
$`+G_{12}|\psi _2(x,t)|^2\psi _1(x,t),`$
and
$`i\hbar {\displaystyle \frac{\partial }{\partial t}}\psi _2(x,t)`$ $`=`$ $`-{\displaystyle \frac{\hbar ^2}{2m_2}}\nabla ^2\psi _2(x,t)+(U_2(x)-\mu _2)\psi _2(x,t)+G_{22}|\psi _2(x,t)|^2\psi _2(x,t)`$ (3)
$`+G_{12}|\psi _1(x,t)|^2\psi _2(x,t).`$
The Lagrange multipliers, the chemical potentials $`\mu _1`$ and $`\mu _2`$, are fixed by the relations $`\int d^3x\,|\psi _j(x,t)|^2=N_j`$, $`j=1,2`$, with $`N_j`$ the number of atoms in the $`j`$th condensate. Eqs. (2) and (3) are mean-field equations, since we treat the effective wave functions as $`c`$ numbers, corresponding to the Hartree-Fock-Bogoliubov approximation.
Experimentally, the trapping potentials $`\{U_j\}`$ are simple harmonic in nature. For the sake of simplicity and to illustrate the physics we shall consider a square well trapping potential $`U_j=U`$: zero inside and large (infinite) outside, unless otherwise explicitly specified. We will come back to this question in the discussion of experimental feasibility.
We shall consider the strong mutual repulsive regime
$$G_{12}>\sqrt{G_{11}G_{22}}.$$
(4)
In this regime the equilibrium state of the two Bose-Einstein condensates is a spatial segregation of the two condensates; two phases, the weakly and the strongly segregated phase, characterized by the healing length and the penetration depth, have been predicted . We shall use Eqs. (2) and (3) under the condition (4) to study a highly non-linear dynamical process: the two condensates are initially in a homogeneously mixed state and eventually approach the phase-segregated state. In the same mean-field manner as in Ref. , we find that this dynamical process can be classified into two main stages: the initial, highly non-equilibrium dynamical growth of stage I, where the dynamics is governed by the fastest growth mode, and stage II of approach to equilibrium, where the dynamics is governed by the slowest mode. Stage II is typical of a relaxation process near equilibrium. We shall show, however, that it too is governed by a quantum effect, namely the Josephson effect.
## III Stage I: Fastest Growth Mode
With the square well trapping potential specified in Sec. II, the coupled non-linear Schrödinger equations have an obvious homogeneous solution: inside the trap the condensate densities are $`|\psi _j|^2=\rho _{j0}`$, $`\rho _{j0}=N_j/V`$, with $`V`$ the volume of the square well potential trap, and the chemical potentials are $`\mu _1=G_{11}\rho _{10}+G_{12}\rho _{20}`$ and $`\mu _2=G_{22}\rho _{20}+G_{12}\rho _{10}`$. This is the initial condition of the present problem. It is known that for a large enough mutual repulsive interaction, that is, if $`G_{12}>\sqrt{G_{11}G_{22}}`$, this initial state is not the ground state. Rich physics has been displayed by various theoretical studies . Among the many unstable and growing modes in this parameter regime, we will find the fastest growth mode in the initial stage of the process towards the equilibrium state.
To look for the fastest growth mode out of the homogeneous state, we start with small fluctuations from the homogeneous state. This is consistent with the usual stability analysis . Our approach here is to emphasize the connection with the physics of the classical spinodal decomposition and the role played by the Josephson relationships. Define
$$\psi _1(x,t)=\sqrt{\rho _1(x,t)}e^{i\theta _1(x,t)},$$
(5)
and
$$\psi _2(x,t)=\sqrt{\rho _2(x,t)}e^{i\theta _2(x,t)}.$$
(6)
and define the density fluctuations $`\delta \rho _j=\rho _j-\rho _{j0}`$ and the phase fluctuations $`\theta _j`$, assumed to be small: $`|\delta \rho _j|/\rho _j,|\theta _j|\ll 1`$. The definition of the phase fluctuations here makes use of the implicit assumption that there is no average current. Then, to linear order, we have from Eqs. (2) and (3), for condensate 1,
$`\delta \dot{\rho }_1+\rho _{10}{\displaystyle \frac{\hbar }{m_1}}\nabla ^2\theta _1=0,`$ (7)
$`\hbar \dot{\theta }_1={\displaystyle \frac{\hbar ^2}{4m_1}}{\displaystyle \frac{\nabla ^2\delta \rho _1}{\rho _{10}}}-G_{11}\delta \rho _1-G_{12}\delta \rho _2.`$ (8)
In terms of hydrodynamics, Eq. (7) is the continuity equation and Eq. (8) is the Bernoulli equation, the first term on the right-hand side of which is the so-called ‘quantum pressure’. Similarly, for condensate 2 we have
$`\delta \dot{\rho }_2+\rho _{20}{\displaystyle \frac{\hbar }{m_2}}\nabla ^2\theta _2=0,`$ (9)
$`\hbar \dot{\theta }_2={\displaystyle \frac{\hbar ^2}{4m_2}}{\displaystyle \frac{\nabla ^2\delta \rho _2}{\rho _{20}}}-G_{22}\delta \rho _2-G_{12}\delta \rho _1.`$ (10)
Eliminating the phase variables from Eqs. (7-10), we have
$$\frac{\partial ^2}{\partial t^2}\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right)=\left(\begin{array}{cc}-\frac{\hbar ^2}{4m_1^2}\nabla ^4+\frac{\rho _{10}}{m_1}G_{11}\nabla ^2&\frac{\rho _{10}}{m_1}G_{12}\nabla ^2\\ \frac{\rho _{20}}{m_2}G_{12}\nabla ^2&-\frac{\hbar ^2}{4m_2^2}\nabla ^4+\frac{\rho _{20}}{m_2}G_{22}\nabla ^2\end{array}\right)\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right).$$
(11)
We look for the solution of the form
$`\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right)=\left(\begin{array}{c}A\\ B\end{array}\right)e^{i(𝐪\cdot 𝐫-\omega t)},`$
with $`A,B`$ constants. Eq. (11) then becomes
$`\left(\begin{array}{cc}\frac{\hbar ^2}{4m_1^2}q^4+\frac{\rho _{10}}{m_1}G_{11}q^2-\omega ^2&\frac{\rho _{10}}{m_1}G_{12}q^2\\ \frac{\rho _{20}}{m_2}G_{12}q^2&\frac{\hbar ^2}{4m_2^2}q^4+\frac{\rho _{20}}{m_2}G_{22}q^2-\omega ^2\end{array}\right)\left(\begin{array}{c}A\\ B\end{array}\right)=0.`$
In order to have a non-zero solution for $`A`$ and $`B`$, the determinant must be zero:
$$det\left(\begin{array}{cc}\frac{\hbar ^2}{4m_1^2}q^4+\frac{\rho _{10}}{m_1}G_{11}q^2-\omega ^2&\frac{\rho _{10}}{m_1}G_{12}q^2\\ \frac{\rho _{20}}{m_2}G_{12}q^2&\frac{\hbar ^2}{4m_2^2}q^4+\frac{\rho _{20}}{m_2}G_{22}q^2-\omega ^2\end{array}\right)=0,$$
(12)
which determines the dispersion relation between the frequency and the wave number. One can easily check that for the one component condensate case, by putting $`G_{12}=0`$, Eq.(12) indeed gives the usual phonon spectrum. In the present binary BEC mixture case, we have
$$\omega _\pm ^2=\frac{q^2}{2}\left[b_{11}+b_{22}\pm \sqrt{(b_{11}-b_{22})^2+4b_{12}b_{21}}\right],$$
(13)
with
$`b_{11}`$ $`=`$ $`{\displaystyle \frac{\hbar ^2}{4m_1^2}}q^2+{\displaystyle \frac{\rho _{10}}{m_1}}G_{11},\qquad b_{12}={\displaystyle \frac{\rho _{10}}{m_1}}G_{12},`$
$`b_{21}`$ $`=`$ $`{\displaystyle \frac{\rho _{20}}{m_2}}G_{12},\qquad b_{22}={\displaystyle \frac{\hbar ^2}{4m_2^2}}q^2+{\displaystyle \frac{\rho _{20}}{m_2}}G_{22}.`$
Obviously, both $`\omega _+^2`$ and $`\omega _{}^2`$ are real as given by Eq.(13). They give the two phonon velocities in the binary BEC mixture:
$`c_\pm =\sqrt{\left[\rho _{10}G_{11}/m_1+\rho _{20}G_{22}/m_2\pm \sqrt{(\rho _{10}G_{11}/m_1-\rho _{20}G_{22}/m_2)^2+4\rho _{10}\rho _{20}G_{12}^2/(m_1m_2)}\right]/2}.`$
However, if
$$b_{11}b_{22}-b_{12}b_{21}<0,$$
(14)
the branch $`\omega _{}^2`$ can be negative. This implies an imaginary frequency $`\omega _{}`$ and an imaginary phonon velocity, which shows that the initial homogeneously mixed state is unstable. One can verify that the sufficient condition for the inequality Eq.(14) to hold for small enough wave number $`q`$ is the validity of the inequality Eq.(4). The modes defined by Eq. (13) with imaginary frequencies will then grow exponentially with time. Unlike the usual situation near equilibrium, this growth from the present non-equilibrium homogeneously mixed state will be dominated by the fastest growth mode. This is precisely the same case as in the initial stage of the classical spinodal decomposition.
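To make the instability concrete, the lower branch $`\omega _{}^2(q)`$ of Eq. (13) can be evaluated numerically. The following sketch (not from the original work; the mass, scattering length, density, and the ratio $`G_{12}/\sqrt{G_{11}G_{22}}`$ are assumptions chosen to mimic a <sup>87</sup>Rb mixture) locates the unstable window and the fastest growth mode:

```python
import numpy as np

hbar = 1.0545718e-34          # J s
m = 1.443e-25                 # kg, 87Rb mass (equal masses assumed)
a = 50e-10                    # m, intra-species scattering length (assumed)
r = 1.04                      # assumed ratio G12/sqrt(G11 G22) > 1
rho = 1e20                    # m^-3, i.e. 10^14 cm^-3, equal densities
G = 4 * np.pi * hbar**2 * a / m        # G11 = G22
G12 = r * G

def omega2_minus(q):
    """Lower branch omega_-^2 of Eq. (13)."""
    b11 = b22 = hbar**2 * q**2 / (4 * m**2) + rho * G / m
    b12b21 = (rho / m) ** 2 * G12**2
    return 0.5 * q**2 * (b11 + b22 - np.sqrt((b11 - b22) ** 2 + 4 * b12b21))

q = np.linspace(1e3, 2e6, 4000)        # wavenumbers, 1/m
w2 = omega2_minus(q)
print("unstable for q < %.2e 1/m" % q[w2 < 0].max())
i = np.argmin(w2)                      # fastest growth mode
print("q_max ~ %.2e 1/m, growth rate ~ %.0f 1/s" % (q[i], np.sqrt(-w2[i])))
```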
Now we look for the fastest growth mode. This is equivalent to finding the most negative value of $`\omega _{}^2`$. The condition $`\partial \omega _{}^2/\partial (q^2)=0`$ leads to
$`{\displaystyle \frac{\left[\left(b_{11}-b_{22}\right)^2+\left(b_{11}-b_{22}\right)\left(\frac{\hbar ^2}{4m_1^2}-\frac{\hbar ^2}{4m_2^2}\right)q_{max}^2+4\frac{\rho _{10}}{m_1}\frac{\rho _{20}}{m_2}G_{12}^2\right]^2}{\left[\left(b_{11}-b_{22}\right)^2+4\frac{\rho _{10}}{m_1}\frac{\rho _{20}}{m_2}G_{12}^2\right]}}`$
$`=\left[2q_{max}^2\left({\displaystyle \frac{\hbar ^2}{4m_1^2}}+{\displaystyle \frac{\hbar ^2}{4m_2^2}}\right)+{\displaystyle \frac{\rho _{10}}{m_1}}G_{11}+{\displaystyle \frac{\rho _{20}}{m_2}}G_{22}\right]^2,`$ (15)
with
$`b_{11}-b_{22}=\left({\displaystyle \frac{\hbar ^2}{4m_1^2}}-{\displaystyle \frac{\hbar ^2}{4m_2^2}}\right)q_{max}^2+{\displaystyle \frac{\rho _{10}}{m_1}}G_{11}-{\displaystyle \frac{\rho _{20}}{m_2}}G_{22}.`$
Eq. (15) is actually a quartic equation for $`q_{max}^2`$, whose algebraic solutions can be found in the standard way. The corresponding imaginary-frequency branch is
$$\omega _{max}^2=-\frac{q_{max}^4}{2}\left[\frac{\hbar ^2}{4m_1^2}+\frac{\hbar ^2}{4m_2^2}-\frac{\left(b_{11}-b_{22}\right)\left(\frac{\hbar ^2}{4m_1^2}-\frac{\hbar ^2}{4m_2^2}\right)}{\sqrt{\left(b_{11}-b_{22}\right)^2+4\frac{\rho _{10}}{m_1}\frac{\rho _{20}}{m_2}G_{12}^2}}\right].$$
(16)
The detailed general expressions for both $`q_{max}`$ and $`\omega _{max}`$ are neither physically transparent nor particularly illuminating, and we will not explore them here.
To get a better understanding of the physical implications of Eqs. (15,16), we consider a special case relevant to recent experiments where particles of the two condensates have the same mass, $`m_1=m_2=m`$. In this case we find the wavenumber corresponding to the most negative $`\omega _{max}^2`$ is
$$q_{max}^2=\frac{m}{\hbar ^2}\left[\sqrt{(\rho _{10}G_{11}+\rho _{20}G_{22})^2+4\rho _{10}\rho _{20}G_{11}G_{22}\left(\frac{G_{12}^2}{G_{11}G_{22}}-1\right)}-(\rho _{10}G_{11}+\rho _{20}G_{22})\right],$$
(17)
and
$$\omega _{max}=i\frac{\hbar }{2m}q_{max}^2.$$
(18)
The physics implied in Eqs. (15,16) or Eqs. (17,18) is as follows. Starting from the initial homogeneous mixture of the two condensates, on the time scale given by
$`t_I=1/|\omega _{max}|,`$
domain patterns of the phase segregation with the characteristic length
$`l_I=1/q_{max}`$
will appear. In particular, for the weakly segregated phase, $`\frac{G_{12}^2}{G_{11}G_{22}}-1\rightarrow 0`$, we have the length scale
$`l_I^{-1}={\displaystyle \frac{\sqrt{m}}{\hbar }}\sqrt{{\displaystyle \frac{2\rho _{10}\rho _{20}G_{11}G_{22}}{(\rho _{10}G_{11}+\rho _{20}G_{22})}}\left({\displaystyle \frac{G_{12}^2}{G_{11}G_{22}}}-1\right)}={\displaystyle \frac{\sqrt{2}}{\sqrt{\mathrm{\Lambda }_1^2+\mathrm{\Lambda }_2^2}}},`$
and for the strongly segregated phase, $`\frac{G_{12}^2}{G_{11}G_{22}}-1\rightarrow \mathrm{\infty }`$, we have
$`l_I^{-1}={\displaystyle \frac{2m^{1/2}}{\hbar }}\left(\rho _{10}\rho _{20}G_{11}G_{22}\left({\displaystyle \frac{G_{12}^2}{G_{11}G_{22}}}-1\right)\right)^{1/4}={\displaystyle \frac{\sqrt{2}}{\sqrt{\mathrm{\Lambda }_1\mathrm{\Lambda }_2}}}.`$
Here $`\mathrm{\Lambda }_j=\xi _j/\sqrt{G_{12}/\sqrt{G_{11}G_{22}}-1}`$ and $`\xi _j=\sqrt{(\hbar ^2/2m_j)(1/\rho _{j0}G_{jj})}`$ are the penetration depth and healing length in the binary BEC mixture . These length and time scales can be measured experimentally. We will come back to the experimental situation below.
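As a rough numerical illustration (again a sketch with assumed <sup>87</sup>Rb-like parameters, not part of the original analysis), Eqs. (17) and (18) can be evaluated directly for the two values of $`G_{12}/\sqrt{G_{11}G_{22}}`$ used in Sec. V:

```python
import numpy as np

hbar = 1.0545718e-34
m = 1.443e-25                   # kg, 87Rb
a = 50e-10                      # m, scattering length (a11 = a22 assumed)
rho = 1e20                      # m^-3 for each condensate
G = 4 * np.pi * hbar**2 * a / m
for r in (1.04, 1.0004):        # assumed values of G12/sqrt(G11 G22)
    s = 2 * rho * G             # rho10*G11 + rho20*G22 for equal parameters
    # Eq. (17): fastest-growing wavenumber for equal masses
    q2 = (m / hbar**2) * (np.sqrt(s**2 + 4 * rho**2 * G**2 * (r**2 - 1)) - s)
    l_I = 1.0 / np.sqrt(q2)             # characteristic domain size
    t_I = 2 * m / (hbar * q2)           # 1/|omega_max|, from Eq. (18)
    print("r = %.4f: l_I = %.1f um, t_I = %.0f ms" % (r, l_I * 1e6, t_I * 1e3))
```

With these inputs one obtains $`l_I`$ of order 1 $`\mu `$m and $`t_I`$ of order several ms for $`r=1.04`$, and roughly tenfold and hundredfold larger values for $`r=1.0004`$, matching the orders of magnitude quoted in Sec. V.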
After stage I of fast growth into the domain pattern characterized by the length scale $`l_I`$, the system gradually approaches the true ground state of complete phase segregation: one condensate in one region and the second condensate in another region. This stage is slow, dominated by the slowest mode, and is the subject of the next section.
It is now evident that stage I of the growth of binary BEC’s shares the phenomenology of the initial stage of the classical spinodal decomposition: the domination of the fastest growth mode, the appearance of domains of segregated phases, and the conservation of particle numbers. There are, however, two important differences. First, the dynamical evolution of the binary BEC’s is governed by coupled nonlinear time-dependent Schrödinger equations, not by a nonlinear diffusion equation supplemented with the continuity equation (the Cahn-Hilliard equation); there is no external relaxation process for the present wave functions. Secondly, the energy of the binary BEC’s is conserved during the growth process, unlike the case of classical spinodal decomposition, where the system energy always decreases.
We wish to point out that the initial stage of spinodal decomposition without a diffusion constant has been explored in heavy ion collisions for Fermi systems, based on the kinetic equation approach , and qualitatively similar results have been obtained there. The study in alkali BEC mixtures should shed light on this important process, because the observation of the real-time evolution is in principle attainable experimentally.
## IV Stage II: Merging and Oscillating between Domains
The BEC binary mixture occurs in a trap. The finite size of the droplet leads to broken symmetry of the condensate profiles , which tends to separate the condensates into mutually isolated regions. This implies that there is no contact between different domains of the same condensate formed in stage I. The classical spinodal process involves diffusion. An estimate of the diffusion constant for the BEC system can be made from kinetic theory: the ratio of the time scales for quantum and classical particle transport is of the order of the ratio of the BEC cloud size to the de Broglie wavelength, which is much smaller than one for the experimental systems of interest. The classical diffusion process is thus not important. Because the domains are not connected and because diffusion in the BEC mixture is extremely slow, all the mechanisms of the late-stage classical spinodal decomposition are inapplicable. We propose that it is the Josephson effect that is responsible for the approach to equilibrium in stage II. Two models for the Josephson effect, the ‘rigid pendulum’ model and the ‘soft pendulum’ model, will be discussed. They give the same time scale when the ‘Rabi’ frequency is small. Those model studies were performed for a single BEC separated by an external potential barrier; we adapt their analysis to the present situation of one condensate sitting in the potential wells formed by the other condensate during the phase segregation process.
For a classical spinodal decomposition in a binary fluid mixture, there are several scenarios of growth in stage II. (1) For the domains to grow by the diffusion of material between them, a finite diffusion constant is needed. For the binary BEC mixture, the temperature is too low for such diffusion to occur; the diffusion constant is practically zero. Furthermore, particles of one condensate would have to diffuse into the domains of the other condensate, because domains of the same condensate are not connected. This requires a finite activation energy, so this process is not possible at low enough temperatures. (2) The same argument also rules out diffusion-enhanced collisions, by which the diffusion field around the domains leads to an attraction between them. (3) Another mechanism is noise-induced growth. However, there is no external noise in the BEC described by the time-dependent nonlinear Schrödinger equations, so this mechanism is irrelevant here. (4) A seemingly relevant one is hydrodynamic growth, driven by the pressure difference between points of different curvature; this would be a rather fast process, but it also requires a connected phase of one of the species. Hence, at zero temperature and for a finite system, the classical scenarios for the late stage will not work here. We need to find a scenario that works under the present conditions and is at the same time consistent with the time-dependent nonlinear Schrödinger equations.
The only scenario which can transport particles across a forbidden region at zero temperature is the tunneling process. Let us consider the specific case of two domains of condensate 1 separated by a domain with width $`d`$ of condensate 2. The ability of condensate 1 to tunnel through condensate 2 is described by the penetration depth $`\mathrm{\Lambda }`$, as discussed in Ref. . Hence the probability of condensate 1 to tunnel through condensate 2 can be estimated as
$$p=e^{-2\frac{d}{\mathrm{\Lambda }}},$$
(19)
when $`p`$ is much smaller than 1. The finite, though small, tunneling probability suggests that the Josephson effect is responsible for the relaxation process in stage II. The Josephson effect leads to the merging of two domains of the same condensate at sufficiently low temperatures. The dynamics of such motion may be governed by the ‘rigid pendulum’ Hamiltonian for a Josephson junction :
$`H(\varphi ,n)=E_J(1-\mathrm{cos}\varphi )+{\displaystyle \frac{1}{2}}E_Cn^2,`$
where $`E_J`$ is the Josephson coupling energy determined by the tunneling probability, $`n=(n_x-n_y)/2`$ is half the difference between the numbers of particles, $`n_x`$ and $`n_y`$, in the two domains, and $`E_C\equiv \partial \mu /\partial n`$ is the ‘capacitive’ energy due to interactions. In the absence of external constraints, $`\mu =E_Cn`$. The phase difference $`\varphi `$ between the two domains is conjugate to $`n`$, as in usual Josephson junctions. Under appropriate conditions, such as low temperature and smallness of the capacitive energy, there may be an oscillation between the two domains of condensate 1 separated by condensate 2. In such a case, we may estimate the oscillation period as $`\tau _{II}=2\pi /\omega _{JP}`$, with the so-called Josephson plasma frequency
$$\omega _{JP}=\frac{\sqrt{E_CE_J}}{\hbar }.$$
(20)
For small tunneling probability, the Josephson junction energy may be estimated as
$`E_J=n_T^{1/3}\hbar \omega _0e^{-2\frac{d}{\mathrm{\Lambda }}},`$
and the capacitive energy as
$`E_C={\displaystyle \frac{2}{5}}\left({\displaystyle \frac{n_T}{2}}\right)^{-0.6}\left({\displaystyle \frac{15a_{11}}{a_0}}\right)^{0.4}\hbar \omega _0.`$
Here $`\omega _0`$ is the harmonic-oscillator frequency of condensate 1 in a harmonic trap, $`a_0=\sqrt{\hbar /m_1\omega _0}`$ is the corresponding oscillator length, $`a_{11}`$ is the scattering length of condensate 1, and $`n_T=n_x+n_y`$ is the total number of particles in domains $`x`$ and $`y`$. The oscillation time scale between the domains is then determined by the Josephson plasma frequency,
$$\tau _{II}^{-1}=\left(\frac{2a_0}{15a_{11}n_T}\right)^{2/15}\frac{\omega _0}{2\pi }e^{-\frac{d}{\mathrm{\Lambda }}}.$$
(21)
The rigid pendulum Hamiltonian would give a good description when $`n\ll n_T`$.
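A rough numerical illustration of Eq. (21) (a sketch only; the trap frequency, atom number, and the ratio $`d/\mathrm{\Lambda }`$ are all assumptions) shows the strong, exponential sensitivity of the oscillation period to $`d/\mathrm{\Lambda }`$:

```python
import numpy as np

hbar = 1.0545718e-34
m = 1.443e-25                        # kg, 87Rb
omega0 = 2 * np.pi * 65.0            # rad/s, assumed trap frequency
a0 = np.sqrt(hbar / (m * omega0))    # oscillator length
a11 = 50e-10                         # m, scattering length of condensate 1
n_T = 1.0e6                          # assumed total atom number in the two domains
for d_over_L in (1.0, 2.0, 4.0):     # assumed domain width over penetration depth
    # Eq. (21): inverse oscillation period from the Josephson plasma frequency
    rate = ((2 * a0 / (15 * a11 * n_T)) ** (2.0 / 15.0)
            * omega0 / (2 * np.pi) * np.exp(-d_over_L))
    print("d/Lambda = %.0f: tau_II ~ %.0f ms" % (d_over_L, 1e3 / rate))
```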
Another description of the Josephson effect uses the ‘soft pendulum’ Hamiltonian proposed in Ref. :
$`H(\varphi ,n)=-E_J(n)\mathrm{cos}\varphi +{\displaystyle \frac{1}{2}}E_Cn^2,`$
with the number-dependent Josephson coupling energy $`E_J(n)=n_T\hbar \omega _R\sqrt{1-(n/n_T)^2}`$. Here the Rabi frequency $`\omega _R`$ is determined by the overlap integral between the two wave functions in the absence of the mutual repulsive interaction $`G_{12}`$. The appropriate frequency scale in this case is
$$\omega _{sp}^2=\omega _{JP}^2+\omega _R^2,$$
(22)
which determines the time scale for the two domains to merge. The Josephson plasma frequency in Eq. (22) of the soft pendulum model is $`\omega _{JP}=\sqrt{n_TE_C\omega _R/\hbar }`$. For the experimental situation of interest, $`\omega _R\ll \omega _{JP}`$, so the two approaches give essentially the same answer. The detailed derivation of $`E_J`$ was given in Ref. .
Given that the Josephson effect is the dominant mechanism in stage II, the time scale to arrive at the ground state will be determined by the Josephson effect across the final domain walls, where two domains of condensate 1 are separated by a domain of condensate 2, in a $`\mathrm{1\hspace{0.17em}2\hspace{0.17em}1\hspace{0.17em}2}`$ spatial configuration for the case of equal condensate numbers. The width of each domain is then $`D/4`$, with $`D`$ the size of the trap. According to the above analysis, the largest time scale in stage II, set by Eq. (22) and corresponding to the slowest mode, is:
$$t_{II}=2\pi /\omega _{sp}.$$
(23)
The arrangement of $`\mathrm{1\hspace{0.17em}2\hspace{0.17em}1}`$ spatial configuration may also occur here, in which it is more likely for condensate 2 to tunnel through 1 to the edge of the trap, because of the larger tunneling probability.
## V Discussions
The first question is whether the quantum spinodal decomposition discussed above can actually happen. In terms of the atomic scattering lengths $`a_{jj}`$, the interactions are $`G_{jj}=4\pi \hbar ^2a_{jj}/m_j`$. The typical value of $`a_{jj}`$ for <sup>87</sup>Rb is about $`50`$ Å. The typical density realized in a binary BEC mixture is about $`\rho _{i0}\sim 10^{14}/\mathrm{cm}^3`$. Hence the healing length is $`\xi =\sqrt{(\hbar ^2/2m)(1/G_{jj}\rho _{i0})}=\sqrt{1/(8\pi a_{jj}\rho _{i0})}\approx 3000`$ Å. For the different hyperfine states of <sup>87</sup>Rb, it is now known that $`G_{12}/\sqrt{G_{11}G_{22}}>1`$. Hence the phase-segregated ground state is realizable, and therefore the quantum spinodal decomposition can happen.
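The healing-length estimate quoted above is elementary to check (a short sketch with the stated numbers):

```python
import numpy as np

a = 50e-8       # cm, scattering length (about 50 Angstrom)
rho = 1e14      # cm^-3, typical condensate density
xi = 1.0 / np.sqrt(8 * np.pi * a * rho)       # healing length in cm
print("xi = %.0f Angstrom" % (xi * 1e8))      # ~2800 Angstrom
```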
Experimentally, starting from the initially homogeneous state, a domain pattern does appear after a short period of time; damped oscillations of the domain pattern have then been observed, and eventually the binary BEC mixture settles into the segregated phase . If we take $`G_{12}/\sqrt{G_{11}G_{22}}=1.04`$, the penetration depth is $`\mathrm{\Lambda }=\xi /\sqrt{G_{12}/\sqrt{G_{11}G_{22}}-1}\approx 1.5\mu `$m. The length scale $`l_I`$ determined by Eq. (17) in stage I is $`1.5\mu `$m, the same order of magnitude as the experimental data; this length is also comparable to the domain wall width seen experimentally. The corresponding time scale $`t_I`$ (Eq. (18)) is then 6 ms, again the same order of magnitude, though both estimated $`l_I`$ and $`t_I`$ are somewhat smaller. If $`G_{12}/\sqrt{G_{11}G_{22}}=1.0004`$, the corresponding length and time scales of stage I according to the present analysis are $`l_I=15\mu `$m and $`t_I=600`$ ms, respectively; time and length scales calculated in this way are larger than the numbers extracted from the experiment . If we assume that the damped oscillation to equilibrium observed experimentally is the stage II discussed here, then taking the Rabi frequency $`\omega _R=0.2`$ nK = 4 Hz and the total number of particles $`N_1=10^6`$, we find that the period according to Eq. (23) is 30 ms, comparable to the experimental value. The estimate from Ref. gives a larger value of about 200 ms; we think this larger time scale is due to the use of a larger estimate of $`d/\mathrm{\Lambda }`$. In our view, $`d`$ and $`\mathrm{\Lambda }`$ are comparable in stage I, so for stage II $`d/\mathrm{\Lambda }`$ is of the order of $`n_d`$, the number of domains formed in stage I. For the experiment of interest to us, $`n_d\sim 2`$; thus, for the case at hand, we think the first estimate is more reasonable.
The present analysis shows that stage I is insensitive to damping; a similar conclusion has been reached in the study of heavy ion collisions for fermions . At this moment we do not have a reliable estimate of the damping of the Josephson oscillation indicated experimentally. Nevertheless, given the uncertainty in the value of $`G_{12}`$ or $`a_{12}`$, we conclude that stage II of the quantum spinodal decomposition may have been observed.
## VI Conclusion
In the present paper we have studied the dynamical evolution of binary BEC mixtures starting from the homogeneously mixed unstable state. We have found a close analogy to the usual spinodal decomposition process, and accordingly call the present one the quantum spinodal decomposition. It consists of two stages: stage I of initial fast growth of domains, characterized by a time and a length scale, and stage II of slow relaxation towards equilibrium dominated by the Josephson effect. The coupled non-linear Schrödinger equations provide a good theoretical description, which enables us to calculate the time and length scales. In comparison with recent experiments, we conclude that the quantum spinodal decomposition can be, and may have been, realized.
The length and time scales for Rb mixtures are controlled by the difference between $`r=G_{12}/\sqrt{G_{11}G_{22}}`$ and 1. Since $`r`$ is close to 1 experimentally, the data can provide a very sensitive estimate of $`r-1`$. Since this parameter determines both the time and the length scale of stage I, it offers a self-consistent check on the physical picture provided here.
Periodic-like structures have also been observed in the phase segregation of spin 1 Na mixtures . We think a similar picture of quantum spinodal decomposition applies to that case as well.
This work was supported in part by a grant from NASA (NAG8-1427). One of us (PA) thanks H. Heiselberg and C.J. Pethick for discussions, and the Bartol Research Institute as well as the Department of Physics at the University of Delaware for the pleasant hospitality, where the main body of the work was completed. We thank D. S. Hall for sending us their data and informing us about measurement details.
# A Comparative Study of the Depth of Maximum of Simulated Air Shower Longitudinal Profiles
## 1 Introduction
The energy spectrum of cosmic rays is a power law with the flux falling by three orders of magnitude for each decade increase in energy. At $`10^{14}`$ eV the flux becomes so low that current balloon and satellite experiments lack the exposure required to detect a significant sample of events. This is unfortunate as the nature of the primaries remains of great astrophysical interest. Where direct measurements are possible the cosmic rays are known to be mostly protons and atomic nuclei. The most plausible acceleration site is at the shock fronts produced by supernova explosions. However, theoretical considerations predict a maximum energy from this process of $`10^{15}`$ eV, whereas the energy spectrum is observed to continue with only small deviations up to $`>10^{19}`$ eV. The origin of the particles at $`>10^{15}`$ eV is somewhat mysterious.
It has long been supposed that insight would result if the composition of the primaries could be measured. Due to the extremely low flux the only way to get information on these particles is to study the extensive atmospheric cascades which they initiate. When a cosmic ray enters the atmosphere it collides with the nucleus of an air atom, producing a number of secondaries. These go on to make further collisions, and the number of particles grows. Eventually the energy of the shower particles is degraded to the point where ionization losses dominate, and their number starts to decline.
It is a coincidence that at the energy where direct detection of the cosmic rays becomes impractical, the resulting air showers become big enough to be easily detectable at ground level. The number of particles in the cascade also becomes large enough that the longitudinal profile, or number of particles versus atmospheric depth, becomes a smooth curve, with a well defined maximum. This maximum depth, referred to as $`X_{max}`$, is often regarded as the most basic parameter of an air shower, and much effort has been expended to measure and interpret it.
The depth of maximum increases with primary energy as more cascade generations are required to degrade the secondary particle energies. For given total energy $`X_{max}`$ is related to the energy per nucleon of the primary. To first order the interactions occur between individual nucleons of the primary, and the target air nuclei. Therefore a shower initiated by a compound nucleus can be thought of as the superposition of many proton initiated showers, with correspondingly lower energy.
Unfortunately, of course, the detail is not so simple. For a number of reasons extracting information on the nature of the cosmic ray primaries from the air showers they produce has proved to be exceedingly difficult. The most fundamental problem is that the initial interactions are subject to large inherent fluctuations. This limits the event-by-event mass resolution of even an ideal detector. However, progress can still be made by looking at mean parameter values, or better, their distributions.
The second major problem is that sophisticated modeling is required to predict the absolute value of an observable parameter which is expected for a primary of given type and energy. Nucleus-nucleus interactions at the energies of the first few cascade steps are well beyond the reach of accelerator experiments. Therefore it is necessary to rely on hadronic interaction models which attempt to extrapolate from the available data using different mixtures of theory and phenomenology. The lower energy part of the cascade can be modeled using well known physics, although the programs are complex with corresponding scope for errors.
The depth of shower maximum has been determined by a number of experiments. In the energy range $`10^{14}`$ to $`10^{16}`$ eV it has been measured with varying degrees of directness using Čerenkov light . The range $`10^{17}`$ to $`10^{19}`$ eV has been observed rather directly by the Fly’s Eye detector through fluorescence light . Finally the region above $`10^{19}`$ eV is the focus of the HiRes Fly’s Eye , Auger Project , and others. In the past the experimental resolution and statistics have often been so poor that the mean value of $`X_{max}`$ has been discussed rather than its distribution — this is changing.
The simulations required to interpret the data from any given experiment have usually been performed only for the energy range accessible to it. This is unfortunate since checking the consistency of experiments in adjacent energy ranges is critically important, given the uncertainty of the high energy hadronic interaction models. Additionally the exact value of $`X_{max}`$ for a given model can depend on the way in which the longitudinal profiles are recorded and fit. The purpose of this paper is to provide $`X_{max}`$ values with good statistical precision, over a wide energy range, and computed in a consistent way using several hadronic models and two different cascade “framework” programs; for a more detailed discussion see .
The process of air shower simulation can be broken up into several parts. A framework program is required which handles the mechanics of the process and calls appropriate subroutines to model the interaction and propagation of the particles. Some fraction of the required transport and interaction modeling may be provided using third party code. In this paper two air shower simulation packages which have been heavily used in the literature are considered. The first is MOCCA written by Hillas . This originally used a simple, built-in hadronic interaction model, but has also been linked to SIBYLL ; all other modeling is handled internally. The second program is CORSIKA, a well documented and thorough program prepared originally for the Kascade experiment . It is linked to a number of high energy hadronic models, two of which are suitable for use over the very wide energy range of this study; SIBYLL and QGSjet . An attractive feature of this program is the use of the well established High Energy Physics codes EGS4 and GHEISHA , for the electromagnetic, and lower energy hadronic modeling respectively. See Table 1 for a summary.
Due to the inherent limitations of air shower fluctuations, and also because of poor experimental resolution and statistics, $`X_{max}`$ data is often compared only to simulated values for proton and iron nuclei primaries. These are generally regarded as the extreme ends of the possible range. At lower energies the composition of cosmic rays tracks the general abundances of solar system material, with some modifications due to propagation spallation effects. Iron is the heaviest significantly abundant element.
## 2 Technical Details
At the highest cosmic ray energies it is absolutely necessary to use techniques which accelerate the simulation process. A popular approach is called thinning: below a threshold energy only a sub-set of the particles are tracked, with weightings to compensate for those discarded . The threshold is usually specified as a fraction of the primary energy, and referred to as the “thinning level”.
For this study MOCCA92 and CORSIKA 5.62 were used. This version of CORSIKA includes a very similar thinning algorithm to MOCCA. In all cases the high energy hadronic interaction model was used with the set of inelastic cross sections provided by its authors. Electromagnetic particle energy cutoff was a uniform 0.2 MeV.
When considering gross quantities, like the depth of shower maximum, it is possible to run the simulation codes with very severe thinning, and still obtain results of adequate quality. This means that many showers can be generated, over a multidimensional grid of primary parameters and shower models, within an acceptable computing time. The thinning process leads to longitudinal profiles which have large non-statistical fluctuations. The magnitude of these fluctuations increases with the severity of the thinning; see Figure 1.
To recover the depth of shower maximum from “noisy” thinned profiles it is customary to fit them to an empirical cascade shape function. This is also necessary when analyzing experimental data as the quality is often poor. The following function was introduced by Gaisser and Hillas as a “simple analytic parameterization” of the longitudinal profile of air showers:
$$N(X)=N_{max}\left(\frac{X}{X_{max}}\right)^{X_{max}/\lambda }\mathrm{exp}\left[(X_{max}-X)/\lambda \right],$$
(1)
where $`X`$ is the atmospheric depth (in g cm<sup>-2</sup>), $`N_{max}`$ is the number of cascade particles at shower maximum, $`X_{max}`$ is the depth of maximum, and $`\lambda `$ is a characteristic length parameter (in the above reference fixed at 70 g cm<sup>-2</sup>). This is a Gamma Function, a form which naturally arises in cascade theory, and assumes that the first interaction is at $`X=0`$.
A fourth parameter $`X_0`$ is often introduced into Equation 1, ostensibly to allow for a variable first interaction point,
$$N(X)=N_{max}\left(\frac{X-X_0}{X_{max}-X_0}\right)^{(X_{max}-X_0)/\lambda }\mathrm{exp}\left[(X_{max}-X)/\lambda \right].$$
(2)
This seems somewhat inelegant as varying $`X_0`$ does not correspond to a translation along the $`X`$ axis, unless $`X_{max}`$ is also changed. The following (equivalent) form is physically clearer, where $`X_{rise}`$ is the distance from first interaction to shower maximum,
$$N(X)=N_{max}\left(\frac{X-X_0}{X_{rise}}\right)^{X_{rise}/\lambda }\mathrm{exp}\left[(X_{rise}+X_0-X)/\lambda \right].$$
(3)
In practice, when fitting simulated hadronic cascade profiles to either Equation 2 or 3, it turns out that $`X_0`$ correlates poorly with the actual depth of first interaction, and frequently takes unphysical negative values. Experiments were made performing the fit with $`X_0`$ fixed at the actual depth of first interaction $`X_1`$. This produces a significantly poorer goodness of fit, and reduces the $`X_{max}`$ results by $`\sim 10`$ g cm<sup>-2</sup>. The choice of Equation 2 or 3 also influences the $`X_{max}`$ results. For the remainder of this paper, in the interests of compatibility with other published results, Equation 2 was used with all four parameters free. Note that $`X_0`$ is best regarded as simply an additional arbitrary shape parameter.
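As an illustration of this fitting step, a minimal sketch in Python (not the code used in the paper; the toy profile, noise level, and starting values are assumptions) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaisser_hillas(X, Nmax, Xmax, X0, lam):
    """Four-parameter Gaisser-Hillas profile, Equation 2."""
    z = np.clip((X - X0) / (Xmax - X0), 1e-12, None)   # keep the base positive
    return Nmax * z ** ((Xmax - X0) / lam) * np.exp((Xmax - X) / lam)

# toy thinned profile: depths in g/cm^2, 5% multiplicative fluctuations
X = np.arange(100.0, 1100.0, 10.0)
rng = np.random.default_rng(3)
N = gaisser_hillas(X, 1e9, 700.0, -50.0, 70.0)
N *= 1.0 + 0.05 * rng.standard_normal(X.size)

p0 = (N.max(), X[np.argmax(N)], 0.0, 70.0)             # starting values
popt, pcov = curve_fit(gaisser_hillas, X, N, p0=p0, maxfev=10000)
print("fitted Xmax = %.1f g/cm^2" % popt[1])
```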
To determine an appropriate thinning level for this study, sets of 500 proton showers were generated and fit at each of 5 thinning levels: $`10^{-3.5}`$, $`10^{-4.0}`$, $`10^{-4.5}`$, $`10^{-5.0}`$ and $`10^{-5.5}`$. Taking the $`10^{-5.5}`$ thinned distributions as reference, a Kolmogorov test (as implemented in the HBOOK function HDIFF) was performed on each of the more heavily thinned sets, and for each of the fit parameters. This is a statistical test of the compatibility in shape between two histograms: it yields the probability that they are drawn from the same parent distribution. All the fit parameters returned a high probability at thinning levels of $`10^{-5.0}`$ and $`10^{-4.5}`$, i.e. the results are indistinguishable within the statistics of a 500 event set. $`X_{max}`$ itself is extraordinarily robust, remaining unbiased even at $`10^{-3.5}`$ thinning. To be conservative a value of $`10^{-4.5}`$ was selected for the main study.
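The same compatibility check can be reproduced with a standard two-sample Kolmogorov-Smirnov test; the sketch below uses scipy rather than the HBOOK HDIFF routine of the original study, with synthetic stand-in data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# stand-ins for the fitted Xmax values of 500 showers at two thinning
# levels; in practice these arrays come from the fits described above
xmax_ref = 700.0 + rng.gumbel(0.0, 60.0, 500)    # 10^-5.5 thinning (reference)
xmax_test = 700.0 + rng.gumbel(0.0, 60.0, 500)   # 10^-4.5 thinning
stat, p = ks_2samp(xmax_ref, xmax_test)
print("Kolmogorov-Smirnov p-value: %.2f" % p)    # large p => compatible shapes
```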
The compatibility of the parameter distributions was also checked when varying the zenith angle of the primary from 0 to 60 deg. $`X_{max}`$ was unaffected, but interestingly $`N_{max}`$ showed a small systematic increase with angle .
## 3 $`X_{max}`$ Results
### 3.1 Mean value of $`X_{max}`$
For the main study sets of 500 events were run at 14 half-decade energy steps between $`10^{14}`$ and $`10^{20.5}`$ eV, with the 4 combinations of framework program and high energy hadronic interaction model given in Table 1. Sets were generated with both proton and iron nucleus primaries. This gives $`500\times 14\times 4\times 2=56,000`$ showers. The showers were run at a thinning level of $`10^{-4.5}`$ of primary energy, and a zenith angle of 45 deg. The resulting profiles were fit to Equation 2. Figure 2 shows the mean value of $`X_{max}`$ plotted against energy over the complete range; numerical values are given in Table 3. MOCCA-SIBYLL and CORSIKA-SIBYLL proton results are in good agreement, and the iron results are also close. This is very encouraging: the framework programs are complex and entirely independent, yet they produce the same result. At higher energies the older MOCCA-Internal model diverges strongly to deeper $`X_{max}`$. CORSIKA-QGSjet produces much shallower results than SIBYLL at all energies; so much so that at $`10^{20.5}`$ eV MOCCA-Internal iron is equal to CORSIKA-QGSjet proton.
Figure 3 shows a comparison between published mean $`X_{max}`$ data from two experiments and the CORSIKA calculations. The data for $`E<10^{17}`$ eV are from the BLANCA experiment , and for $`E>10^{17}`$ eV from the Fly’s Eye <sup>2</sup><sup>2</sup>2 These points are used rather than the newer ones in since they have a much wider energy range, and their quoted errors are comparable, or better.. The Fly’s Eye data contains a small un-corrected experimental bias, the removal of which would shift the lower energy points higher in the atmosphere by around 20 g cm<sup>-2</sup> . Both SIBYLL and QGSjet are consistent with the data, in that the value remains within the proton-iron bounds. Also the general trends in composition versus energy are the same under the two models, although the absolute value and size of the changes differ. There is some evidence at $`10^{16}`$ eV that QGSjet is a more realistic model than SIBYLL ; the extrapolation to the highest energies in both models must be regarded as tentative at best.
### 3.2 Influence of the First Interaction on $`X_{max}`$
Why do the results shown in Figure 2 differ between the models? The proton-air cross sections used by SIBYLL and QGSjet are sufficiently similar that the mean free paths differ by $`<5`$ g cm<sup>-2</sup> over the complete energy range. This is to be compared to the 30–50 g cm<sup>-2</sup> difference in mean $`X_{max}`$.
When investigating the way in which the first interaction controls $`X_{max}`$ it is natural to subtract out the position of the first interaction: $`X_{rise}=X_{max}-X_1`$. Elasticity is defined as the energy fraction carried by the most energetic secondary. Normally a large fraction of the primary energy continues in the form of a “leading nucleon” and the remainder is split between many secondary pions and nucleons. Figure 4 shows $`X_{rise}`$ versus first interaction elasticity at $`10^{19}`$ eV. For events where the first interaction is catastrophic (small elasticity), the resulting shower takes fewer generations to reach maximum, and the correlation is strong. As the elasticity becomes larger, the first interaction is no longer the controlling factor, and the relationship weakens. Interestingly, both models exhibit approximately the same correlation between elasticity and $`X_{rise}`$.
Figure 5 shows the elasticity distributions versus energy. The reason the models produce different mean $`X_{max}`$ values is primarily because of their different elasticity distributions; QGSjet produces many more “hard” events, which lead to less deeply penetrating showers. SIBYLL has rather constant behaviour versus energy, while QGSjet is a more radical model, showing a stronger change; this is why the corresponding mean $`X_{max}`$ results diverge with increasing energy.
Hadronic interactions models are complex and esoteric, with many parameters which can potentially be compared. The significance of Figure 4 is the realization that, for calculations of $`X_{max}`$ at least, the most important characteristic is also a very simple one: how much of the primary energy is expended in the first interaction?
### 3.3 Distribution of $`X_{max}`$
In Figure 2 it can be seen that QGSjet predicts that the difference between proton and iron mean $`X_{max}`$ decreases significantly with energy. However, the fluctuations do not decrease correspondingly, and the proton and iron distributions overlap to an increasing extent. If this model is correct, greater experimental statistics would be required to determine the mean composition with given accuracy. The situation is illustrated in Figure 6 where the bands contain 68% of the data (i.e. spanning the 16% and 84% points of the integral distribution). CORSIKA-SIBYLL shows stronger separation improving with increasing energy, while CORSIKA-QGSjet has weaker separation degrading with energy.
Proton primaries are deflected less by magnetic fields than more highly charged particles of the same total energy. It has been suggested that attempts to locate the origin of the highest energy cosmic rays, by studying their arrival directions, could be enhanced by making cuts on composition sensitive parameters, to increase the fraction of protons in the data sample. This would clearly work much less well if QGSjet is a more realistic model than SIBYLL.
For proton primaries the distribution of $`X_{max}`$ is strongly asymmetric, with a tail to deep $`X_{max}`$. This is presumably the result of fluctuations in the first interaction point, and is therefore connected to the proton-air cross section, which is a quantity of fundamental interest. Earlier simulations have suggested that,
$$\mathrm{\Lambda }=c\lambda _{p\mathrm{-}air},$$
(4)
where $`\mathrm{\Lambda }`$ is the exponential slope, or “decrement”, of the trailing edge of the $`X_{max}`$ distribution, $`\lambda _{p\mathrm{-}air}`$ is the proton-air mean free path, and $`c`$ is a constant of proportionality with a value between 1.2 and 1.6 dependent on hadronic interaction model .
The $`X_{max}`$ distributions were fit to an exponential starting at 100 g cm<sup>-2</sup> beyond the peak. An example distribution, with the fit, is shown in Figure 7. To avoid biasing the results it is essential to use a maximum likelihood algorithm since the bins on the far tail necessarily contain few events. Figure 8 shows the value of the ratio $`c`$, plotted versus energy, for one of the models. Testing each set of results against the hypothesis of energy independence yields the values given in Table 2. With the available statistics, the reduced $`\chi ^2`$ numbers show little evidence for energy dependence. The SIBYLL based models give values close to $`1.15`$, while the other two are around $`1.30`$; this difference appears to have significance.
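For an exponential tail the unbinned maximum-likelihood estimate of the decrement has a simple closed form (the mean excess over the fit start point), which avoids the empty-bin bias noted above. A minimal sketch (not the analysis code of the paper; the synthetic data and the crude peak finder are assumptions):

```python
import numpy as np

def tail_decrement(xmax, offset=100.0, bins=50):
    """Unbinned ML estimate of the exponential decrement Lambda of the
    deep tail of an Xmax distribution, starting 'offset' g/cm^2 beyond
    the peak. For x > x_start with density ~ exp(-(x - x_start)/Lambda),
    the ML estimator is the mean excess above x_start."""
    xmax = np.asarray(xmax)
    counts, edges = np.histogram(xmax, bins)
    ipk = np.argmax(counts)
    peak = 0.5 * (edges[ipk] + edges[ipk + 1])    # crude peak position
    x_start = peak + offset
    tail = xmax[xmax > x_start]
    lam = np.mean(tail - x_start)
    return lam, lam / np.sqrt(tail.size)          # estimate and its error

# toy check on synthetic data with a known decrement of 55 g/cm^2
rng = np.random.default_rng(1)
fake = 650.0 + rng.exponential(55.0, 5000) + rng.normal(0.0, 30.0, 5000)
print(tail_decrement(fake))
```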
## 4 Conclusions
When running shower simulations to study $`X_{max}`$ it is better to generate heavily thinned showers, with explicit low energy hadronic and electromagnetic cascades, than to rely on analytic approximations for these parts of the calculation. The latter has frequently been done in the past, leading to concerns that the results are biased to an unknown extent. When working with an explicit, but thinned, cascade simulation it is possible to determine an appropriate thinning level empirically, by comparing against more lightly thinned results.
Carefully calculated $`X_{max}`$ results have been presented, over a wide energy range, for proton & iron primaries, using four combinations of framework program and high energy hadronic interaction model. It is hoped that these will be of use for future comparisons with experimental data, and with other simulation results.
The way in which the first interaction controls $`X_{max}`$ has been investigated. The influence is strong — if one were to use model $`A`$ for the first few interactions, and model $`B`$ thereafter, the mean $`X_{max}`$ results would be close to using model $`A`$ throughout. QGSjet predicts that the separation between proton and iron $`X_{max}`$ declines at the highest energies. If this is true it is unfortunate from an experimental perspective.
It would be very useful if a common reference set of showers were made available by the authors of new, or modified, hadronic interaction models. For the purposes of longitudinal profile comparison the set used here seems adequate; the raw and processed output is available online .
The Fermilab computing department are thanked for the use of their machines.
LIEB-THIRRING INEQUALITIES – Inequalities concerning the negative eigenvalues of the Schrödinger operator
$$H=-\mathrm{\Delta }+V(x)$$
on $`L^2(𝐑^n),n1`$. With $`e_1e_2\mathrm{}<0`$ denoting the negative eigenvalues of $`H`$ (if any), the Lieb-Thirring inequalities state that for suitable $`\gamma 0`$ and constants $`L_{\gamma ,n}`$
$$\underset{j\ge 1}{\sum }|e_j|^\gamma \le L_{\gamma ,n}\int _{𝐑^n}V_{-}(x)^{\gamma +n/2}dx$$
(1)
with $`V_{-}(x):=\mathrm{max}\{-V(x),0\}`$. When $`\gamma =0`$ the left side is just the number of negative eigenvalues. Such an inequality (1) can hold if and only if
$`\gamma \ge {\displaystyle \frac{1}{2}}`$ $`\mathrm{for}`$ $`n=1`$
$`\gamma >0`$ $`\mathrm{for}`$ $`n=2`$ (2)
$`\gamma \ge 0`$ $`\mathrm{for}`$ $`n\ge 3.`$
The cases $`\gamma >\frac{1}{2}`$, $`n=1`$, and $`\gamma >0`$, $`n\ge 2`$, were established by E.H. Lieb and W.E. Thirring in connection with their proof of stability of matter. The case $`\gamma =\frac{1}{2}`$, $`n=1`$ was established by T. Weidl . The case $`\gamma =0`$, $`n\ge 3`$ was established independently by M. Cwikel , Lieb and G.V. Rosenbljum by different methods and is known as the CLR bound; the smallest known value for $`L_{0,n}`$ is in .
Closely associated with the inequality (1) is the semi-classical approximation for $`\sum _{j\ge 1}|e_j|^\gamma `$, which serves as a heuristic motivation for (1). It is (cf. )
$`{\displaystyle \underset{j\ge 1}{\sum }}|e_j|^\gamma `$ $`\approx `$ $`(2\pi )^{-n}{\displaystyle \int _{𝐑^n\times 𝐑^n}}\left[p^2+V(x)\right]_{-}^\gamma dpdx`$
$`=`$ $`L_{\gamma ,n}^c{\displaystyle \int _{𝐑^n}}V_{-}(x)^{\gamma +n/2}dx`$
with
$$L_{\gamma ,n}^c=2^{-n}\pi ^{-n/2}\mathrm{\Gamma }(\gamma +1)/\mathrm{\Gamma }(\gamma +1+n/2).$$
Indeed, $`L_{\gamma ,n}^c<\mathrm{\infty }`$ for all $`\gamma \ge 0`$ whereas (1) holds only for the range given in (2). It is easy to prove (by considering $`V(x)=\lambda W(x)`$ with $`W`$ smooth and $`\lambda \rightarrow \mathrm{\infty }`$) that
$$L_{\gamma ,n}\ge L_{\gamma ,n}^c.$$
An interesting, and mostly open, problem is to determine the sharp value of the constant $`L_{\gamma ,n}`$, especially to find those cases in which $`L_{\gamma ,n}=L_{\gamma ,n}^c`$. M. Aizenman and Lieb proved that the ratio $`R_{\gamma ,n}=L_{\gamma ,n}/L_{\gamma ,n}^c`$ is a monotonically non-increasing function of $`\gamma `$. Thus, if $`R_{\mathrm{\Gamma },n}=1`$ for some $`\mathrm{\Gamma }`$ then $`L_{\gamma ,n}=L_{\gamma ,n}^c`$ for all $`\gamma \ge \mathrm{\Gamma }`$. The equality $`L_{\frac{3}{2},n}=L_{\frac{3}{2},n}^c`$ was proved for $`n=1`$ in and for $`n>1`$ in by A. Laptev and T. Weidl. (See also .)
The following sharp constants are known:
$`L_{\gamma ,n}`$ $`=`$ $`L_{\gamma ,n}^c\text{ for all }\gamma \ge 3/2,\text{[13]},\text{[1]},\text{[8]}`$
$`L_{1/2,1}`$ $`=`$ $`1/2\text{[7]}`$
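As a concrete illustration (an addition, not part of the original entry): for integer $`\nu `$ the one-dimensional Pöschl–Teller potential $`V(x)=-\nu (\nu +1)\mathrm{sech}^2x`$ has the negative eigenvalues $`e_n=-(\nu -n)^2`$, $`n=0,\mathrm{},\nu -1`$, and for $`\gamma =3/2`$ it saturates (1) with the sharp constant $`L_{3/2,1}=3/16`$. A minimal numerical check in Python, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.integrate import quad

# Poschl-Teller well: V(x) = -nu*(nu+1)/cosh(x)^2. For integer nu the
# negative eigenvalues of H = -d^2/dx^2 + V are e_n = -(nu-n)^2, n = 0..nu-1.
nu = 2
eigenvalues = [-(nu - n) ** 2 for n in range(nu)]      # [-4, -1]

gamma = 1.5
lhs = sum(abs(e) ** gamma for e in eigenvalues)        # sum_j |e_j|^gamma = 9.0

L_sharp = 3.0 / 16.0                                   # L_{3/2,1} = L^c_{3/2,1}
integrand = lambda x: (nu * (nu + 1) / np.cosh(x) ** 2) ** (gamma + 0.5)
rhs = L_sharp * quad(integrand, -40.0, 40.0)[0]        # also 9.0

print(lhs, rhs)                                        # the bound is saturated
```

Equality at $`\gamma =3/2`$ holds more generally for reflectionless potentials; for a generic $`V`$ the inequality is strict.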
There is strong support for the conjecture that
$$L_{\gamma ,1}=\frac{1}{\sqrt{\pi }(\gamma -\frac{1}{2})}\frac{\mathrm{\Gamma }(\gamma +1)}{\mathrm{\Gamma }(\gamma +1/2)}\left(\frac{\gamma -\frac{1}{2}}{\gamma +\frac{1}{2}}\right)^{\gamma +1/2}$$
(3)
for $`\frac{1}{2}<\gamma <\frac{3}{2}`$.
Instead of considering all the negative eigenvalues as in (1), one can consider just $`e_1`$. Then for $`\gamma `$ as in (2)
$$|e_1|^\gamma \le L_{\gamma ,n}^1\int _{𝐑^n}V_{-}(x)^{\gamma +n/2}dx.$$
Clearly, $`L_{\gamma ,n}^1\le L_{\gamma ,n}`$, but equality can hold, as in the cases $`\gamma =1/2`$ and $`3/2`$ for $`n=1`$. Indeed, the conjecture in (3) amounts to $`L_{\gamma ,1}^1=L_{\gamma ,1}`$ for $`1/2<\gamma <3/2`$. The sharp value (3) of $`L_{\gamma ,n}^1`$ is obtained by solving a differential equation . It has been conjectured that for $`n\ge 3`$, $`L_{0,n}=L_{0,n}^1`$. In any case, B. Helffer and D. Robert showed that for all $`n`$ and all $`\gamma <1`$, $`L_{\gamma ,n}>L_{\gamma ,n}^c`$.
The sharp constant $`L_{0,n}^1`$, $`n\ge 3`$ is related to the sharp constant $`S_n`$ in the Sobolev inequality
$$\|\nabla f\|_{L^2(𝐑^n)}\ge S_n\|f\|_{L^{2n/(n-2)}(𝐑^n)}$$
(4)
by $`L_{0,n}^1=(S_n)^{-n}`$.
By a ‘duality argument’ the case $`\gamma =1`$ in (1) can be converted into the following bound for the Laplacian, $`-\mathrm{\Delta }`$. This bound is referred to as a Lieb-Thirring kinetic energy inequality and its most important application is to the stability of matter , . Let $`f_1,f_2,\mathrm{}`$ be any orthonormal sequence (finite or infinite) in $`L^2(𝐑^n)`$ such that $`\nabla f_j\in L^2(𝐑^n)`$ for all $`j\ge 1`$. Associated with this sequence is a ‘density’
$$\rho (x)=\sum _{j\ge 1}|f_j(x)|^2.$$
(5)
Then, with $`K_n:=n(2/L_{1,n})^{2/n}(n+2)^{-1-2/n},`$
$$\sum _{j\ge 1}\int _{𝐑^n}|\nabla f_j(x)|^2dx\ge K_n\int _{𝐑^n}\rho (x)^{1+2/n}dx.$$
(6)
This can be extended to antisymmetric functions in $`L^2(𝐑^{nN})`$. If $`\mathrm{\Phi }=\mathrm{\Phi }(x_1,\mathrm{},x_N)`$ is such a function we define, for $`x\in 𝐑^n`$,
$$\rho (x)=N\int _{𝐑^{n(N-1)}}|\mathrm{\Phi }(x,x_2,\mathrm{},x_N)|^2dx_2\mathrm{}dx_N.$$
Then, if $`\int _{𝐑^{nN}}|\mathrm{\Phi }|^2=1`$,
$$\int _{𝐑^{nN}}|\nabla \mathrm{\Phi }|^2\ge K_n\int _{𝐑^n}\rho (x)^{1+2/n}dx.$$
(7)
Note that the choice $`\mathrm{\Phi }=(N!)^{-1/2}\mathrm{det}f_j(x_k)|_{j,k=1}^N`$ with $`f_j`$ orthonormal reduces the general case (7) to (6).
If the conjecture $`L_{1,3}=L_{1,3}^c`$ is correct then the bound in (7) equals the Thomas-Fermi kinetic energy ansatz, and hence it is a challenge to prove this conjecture. In the meantime, see for the best available constants to date (1998).
Of course, $`\int |\nabla f|^2=\int \overline{f}(-\mathrm{\Delta }f)`$. Inequalities of the type (7) can be found for other powers of $`-\mathrm{\Delta }`$ than the first power. The first example of this kind, due to I. Daubechies , and one of the most important physically, is to replace $`-\mathrm{\Delta }`$ by $`\sqrt{-\mathrm{\Delta }}`$ in $`H`$. Then an inequality similar to (1) holds with $`\gamma +n/2`$ replaced by $`\gamma +n`$ (and with a different $`L_{\gamma ,n}`$, of course). Likewise there is an analogue of (7) with $`1+2/n`$ replaced by $`1+1/n`$.
All proofs of (1) (except and ) actually proceed by finding an upper bound to $`N_E(V)`$, the number of eigenvalues of $`H=-\mathrm{\Delta }+V(x)`$ that are below $`-E`$. Then, for $`\gamma >0`$,
$$\sum _{j\ge 1}|e_j|^\gamma =\gamma \int _0^{\infty }N_E(V)E^{\gamma -1}dE$$
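To see this identity (a one-line justification added here), note that $`N_E(V)`$ counts exactly those $`j`$ with $`|e_j|>E`$, so that by Fubini’s theorem
$$\gamma \int _0^{\infty }N_E(V)E^{\gamma -1}dE=\sum _{j\ge 1}\gamma \int _0^{|e_j|}E^{\gamma -1}dE=\sum _{j\ge 1}|e_j|^\gamma .$$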
Assuming $`V=-V_{-}`$ (since $`V_+`$ only raises the eigenvalues), $`N_E(V)`$ is most accessible via the positive semidefinite Birman-Schwinger kernel (cf. )
$$K_E(V)=\sqrt{V_{-}}(-\mathrm{\Delta }+E)^{-1}\sqrt{V_{-}}.$$
$`e<0`$ is an eigenvalue of $`H`$ if and only if 1 is an eigenvalue of $`K_{|e|}(V)`$. Furthermore, $`K_E(V)`$ is operator monotone decreasing in $`E`$, and hence $`N_E(V)`$ equals the number of eigenvalues of $`K_E(V)`$ that are greater than 1.
An important generalization of (1) is to replace $`-\mathrm{\Delta }`$ in $`H`$ by $`|i\nabla +A(x)|^2`$, where $`A(x)`$ is some arbitrary vector field in $`𝐑^n`$ (called a magnetic vector potential). Then (1) still holds but it is not known if the sharp value of $`L_{\gamma ,n}`$ changes. What is known is that all presently known values of $`L_{\gamma ,n}`$ are unchanged. It is also known that $`(-\mathrm{\Delta }+E)^{-1}`$, as a kernel in $`𝐑^n\times 𝐑^n`$, is pointwise greater than the absolute value of the kernel $`(|i\nabla +A|^2+E)^{-1}`$.
There is another family of inequalities for orthonormal functions, which is closely related to (1) and to the CLR bound . As before, let $`f_1,f_2,\mathrm{},f_N`$ be $`N`$ orthonormal functions in $`L^2(𝐑^n)`$ and set
$`u_j`$ $`=`$ $`(-\mathrm{\Delta }+m^2)^{-1/2}f_j`$
$`\rho (x)`$ $`=`$ $`{\displaystyle \sum _{j=1}^{N}}|u_j(x)|^2.`$
$`u_j`$ is a Riesz potential ($`m=0`$) or Bessel potential ($`m>0`$) of $`f_j`$. If $`n=1`$ and $`m>0`$ then $`\rho \in C^{0,1/2}(𝐑)`$ and $`\|\rho \|_{L^{\infty }(𝐑)}\le L/m`$.
If $`n=2`$ and $`m>0`$ then for all $`1\le p<\infty `$, $`\|\rho \|_{L^p(𝐑^2)}\le B_pm^{-2/p}N^{1/p}.`$
If $`n\ge 3`$, $`p=n/(n-2)`$ and $`m\ge 0`$ (including $`m=0`$) then $`\|\rho \|_{L^p(𝐑^n)}\le A_nN^{1/p}.`$
Here, $`L,B_p,A_n`$ are universal constants. Without the orthogonality, $`N^{1/p}`$ would have to be replaced by $`N`$. Further generalizations are possible.
Elliott H. Lieb
Departments of Mathematics and Physics
Princeton University
©1998 by Elliott H. Lieb
# 1 Image of the region studied in the HH46/47 complex. The observed slit position is shown superposed on a [SII] 0.673 𝜇m plus H2 1-0 S(1) 2.12 𝜇m (contours) map. Adapted from Eislöffel et al. (1994).
# The Galactic Exoplanet Survey Telescope: A Proposed Space-Based Microlensing Survey for Terrestrial Extra-Solar Planets
## 1. Microlensing Planet Detection: GEST’s Main Goal
The main strength of the gravitational microlensing planet search technique is that it is sensitive to lower mass planets than other techniques. Observed from space, signals for planets down to the mass of Mars are detectable, but they are much rarer and have a shorter duration than higher mass planetary signals. Thus, a large number of stars must be followed with a high sampling frequency in order to detect low mass planets. With GEST, we are able to monitor $`2\times 10^8`$ stars once every 30 minutes or so with a photometric accuracy of $`1`$%. The microlensing event will not repeat, so high quality photometric data must be obtained while the event is in progress. Our proposed GEST mission will accomplish this.
The planetary systems studied by the microlensing technique are located 1–8 kpc away towards the Galactic center rather than in the local neighborhood. The planetary signals will usually be detected as modifications of the single lens light curve due to the gravitational effect of the planet. Microlensing is most sensitive to planets near the Einstein ring radius, which corresponds to a distance of 1–10 AU from the lens star, but because GEST does not require the discovery of a stellar microlensing event to begin intensive monitoring for planets, GEST will be able to detect planets at arbitrarily large separations from their host stars. One such isolated planet may have already been observed (Bennett et al 1997) by the MACHO Collaboration.
## 2. Required Features of the Satellite Design
The main challenge for a microlensing search for low mass planets is that, although the planetary signals can be strong, they are both very rare and have durations as short as $`2`$ hours (Mao & Paczynski 1991; Gould & Loeb 1992). Furthermore, the relatively large angular size of giant source stars makes them poor targets for a low mass microlensing planet search project as finite source effects (Bennett & Rhie 1996) tend to wash out the microlensing signal. Thus, GEST must be able to monitor large areas of sky rapidly while attaining $`1`$% photometry on main sequence source stars. In the central Galactic bulge fields where the microlensing rate is highest, ground based images are seriously incomplete at or above the bulge main sequence turn-off, so there is a great advantage to be gained from higher resolution imaging from space which will allow many of the main sequence stars to be separately resolved. Since areas of high star density must be observed, we need high angular resolution to maintain photometric accuracy. So, we are led to a requirement of a large field of view and a high angular resolution, and hence a very large number of CCD pixels. Our baseline design calls for a $`1`$ square degree field of view with pixels of $`0.1`$” or smaller, for a total of $`\gtrsim 1.3\times 10^9`$ pixels. Our suggested observing strategy will be to cycle over 4-6 selected fields near the Galactic center every 20-30 minutes. Given the high data rate implied by this strategy, a geosynchronous orbit seems sensible.
Our selected Galactic bulge fields have high reddening, so it is advantageous to observe at long wavelengths. Fortunately, recent developments in CCD technology have produced devices with a quantum efficiency of $`\gtrsim 80`$% at 750-950nm (Groom et al 2000). This is $`4`$ times better than the CCDs used by HST’s WFPC2 camera.
## 3. A Simulation of the GEST Mission
We have carried out a detailed simulation of the expected performance of the GEST mission. Here are the relevant features of our simulation: We assume that 6 square degrees of the Galactic bulge are observed once every 27 minutes. Photometry is assumed to be done using a difference imaging technique which is photon noise limited with a 0.3% systematic error added in quadrature. The photon noise is due to the target star and also any neighboring stars which have a PSF that overlaps with the target star. The effect of the neighboring stars was taken into account by constructing an artificial image with a resolution of 0.16” (the diffraction limit for a 1.5m telescope at $`\lambda =900`$nm). We assume a signal to noise of 60 for a source with $`I=22`$, or about 16 detected photo-electrons per second for a 225 sec exposure. This estimate assumes the use of high sensitivity CCD’s and a broad passband of 750-950nm.
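For orientation, this noise model is simple to evaluate. The following Python sketch reproduces it (an illustrative addition: the 16 photo-electrons per second at $`I=22`$, the 225 sec exposure, and the 0.3% systematic floor are taken from the text above, while the neighboring-star photon noise is omitted, so real errors would be somewhat larger):

```python
import numpy as np

# Fractional photometric error: photon noise from the source alone,
# combined in quadrature with a 0.3% systematic floor.
def frac_error(I_mag, t_exp=225.0, rate_at_22=16.0, sys_floor=0.003):
    n_photons = rate_at_22 * t_exp * 10.0 ** (-0.4 * (I_mag - 22.0))
    return np.sqrt(1.0 / n_photons + sys_floor ** 2)

for I in (18.0, 20.0, 22.0, 24.0):
    print(I, frac_error(I))   # ~1.7% at I = 22, i.e. S/N = 60
```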
We assume that the luminosity function found by Holtzman et al. (1998) for Baade’s window applies to the entire 6 square degrees. This is conservative in that most of our fields will be closer to the Galactic Center with a higher star density, but this will be partially compensated for by the higher reddening in these fields. We conservatively assume a microlensing optical depth of $`\tau =3\times 10^{-6}`$ which is slightly below the measured values (Udalski et al, 1994; Alcock et al 1997), and we assume the event timescale distribution given by Alcock et al. (2000) as shown in Figure 1.
## 4. GEST Simulation Results
Our simulated GEST mission observes a total of 18,000 microlensing events with peak magnification $`>1.34`$ for source stars down to $`I=24.5`$ over an assumed 3 year mission which observes the Galactic bulge for 8 months per year. Our detection threshold is a cut on $`\mathrm{\Delta }\chi ^2`$ which is the $`\chi ^2`$ difference between a single lens and planetary binary lens fit. We’ve simulated planetary microlensing lightcurves for planets at separations ranging from $`0.7`$ to $`30(M_{\mathrm{star}}/\mathrm{M}_{\odot })`$ AU, and the left hand panel of Figure 2 shows the number of detected planets if each lens system has a planet of the assumed mass fraction ($`ϵ`$) at the assumed separation. The right hand panel of Figure 2 shows the number of detected “isolated” planets as a function of the planetary mass fraction, $`ϵ`$, which is now compared to the average stellar mass. Figure 2 indicates that we’ll be able to detect 50-100 Earth mass planets if they are common at distances of a few AU or more, and 10-20 Earth mass planets if they are common at 1 AU. We also expect to detect $`\sim `$ 10 Mars mass planets if they are common at a few AU. The right panel of Figure 1 shows the distribution of low mass planet detections as a function of our signal-to-noise parameter $`\mathrm{\Delta }\chi ^2`$, and it indicates that most of the detections are well above our nominal detection threshold of $`\mathrm{\Delta }\chi ^2=160`$. Examples of simulated GEST low mass planetary lightcurves are shown in Figures 3-7.
For more massive planets, our sensitivity is much better. For solar system analogs, our simulations indicate that GEST would detect $`5000`$ gas giant planets if our solar system is typical. If the typical planetary system resembles the solar system with Saturn and Jupiter replaced by Neptune mass planets, then GEST would detect $`1300`$ of these Neptune-like planets.
## 5. Comparison to Ground Based Surveys and Conclusions
In addition to detecting the planetary perturbation to the microlensing lightcurve, it is also important to determine the characteristics of the planet that has been detected. Microlensing generally allows the determination of the planetary mass fraction, $`ϵ`$, and the transverse separation of the planet from the lens star in units of the Einstein radius, which is typically about $`3(M_{\mathrm{star}}/\mathrm{M}_{\odot })`$ AU. Gaudi and Gould (1997) have shown that these parameters can be accurately determined if the lightcurve deviations are well sampled. For caustic crossing planetary microlensing events which comprise a large fraction of the low mass planet detections, it is also possible to determine the planetary mass to about a factor of 2 or 3.
The gravitational microlensing planet search technique has previously been considered for ground based observations (Peale 1997; Sackett 1997; Rhie et al 2000; Albrow et al 2000), but ground based surveys face some difficulties due to the requirement of continuous lightcurve monitoring. This can be accomplished with a network of microlensing follow-up telescopes spanning the globe at southern latitudes, but this requires that the survey rely upon observing sites that often have poor weather or seeing conditions. Sackett (1997) has argued that a dedicated 2.5m telescope like the VST with a wide field camera at an excellent site like Paranal could efficiently search for low mass planets where the deviations are expected to last only a few hours. We have done a simulation of such a survey (optimistically) assuming observations in consistent 0.7” seeing for 8 hours every night for 3 bulge seasons, and we find that low mass planets are not easily detected by such a ground based survey because the highest signal-to-noise events generally last longer than a few hours. If we demand that more than 90% of the $`\mathrm{\Delta }\chi ^2`$ signal occur in the 8 hour observing window for an event to have measurable parameters, then we find that this “VST” survey is 30-50 times less sensitive to low mass planets than GEST as shown in Figure 2. Also, even the “best” planet detection in the VST survey misses a significant part of the planetary deviation as seen in Figure 6. A study by Peale (1997) of a ground based microlensing planet search program with follow-up telescopes in Chile, Australia, and South Africa, also only manages a handful of low mass planet detections after eight years of observations and is not sensitive to isolated or Mars mass planets. So, the proposed GEST mission is $`\gtrsim 50`$ times more sensitive to low mass planets than both types of proposed ground based surveys.
## References
Albrow, M.D., et al. 2000, ApJ, in press
Alcock, C.A., et al. 1997, ApJ, 479, 119
Alcock, C.A., et al. 2000, ApJ, submitted (astro-ph/0002510)
Bennett, D.P., et al. 1997, in ASP Conf. Proc. 119: Planets Beyond the Solar System and the Next Generation of Space Missions, D.R. Soderblom, ed., 95
Bennett, D.P., & Rhie, S.H. 1996, ApJ, 472, 660
Gaudi, B.S., & Gould, A. 1997, ApJ, 486, 85
Gould, A., & Loeb, A. 1992, ApJ, 396, 104
Groom, D.E., et al. 2000, Nucl. Instrum. Methods A, in press
Holtzman, J.A., et al. 1998, AJ, 115, 1946
Mao, S., & Paczynski, B. 1991, ApJ, 374, L37
Peale, S.J. 1997, Icarus, 127, 269
Rhie, S.H., et al. 2000, ApJ, in press
Sackett, P. 1997, in Appendix C of the Final Report of the ESO Working Group on the Detection of Extrasolar Planets (astro-ph/9709269)
Udalski, A., et al. 1994, Acta Astron., 44, 165
# Photometric and kinematic studies of open star clusters. II. NGC 1960 (M 36) and NGC 2194 (partly based on data observed at the German-Spanish Astronomical Centre, Calar Alto, operated by the Max-Planck-Institute for Astronomy, Heidelberg, jointly with the Spanish National Commission for Astronomy)
## 1 Introduction
The shape of the initial mass function (IMF) is an important parameter to understand the fragmentation of molecular clouds and therefore the formation and development of stellar systems. Besides studies of the Solar neighbourhood (Salpeter salpeter (1955), Tsujimoto et al. tsuji (1997)), work on star clusters plays a major role (Scalo scalo1 (1986)) in this field, as age, metallicity, and distance of all stars of a star cluster can generally be assumed to be equal.
Restricted to certain mass intervals, the IMF can be described by a power law in the form
$$\text{d}\mathrm{log}N(m)\propto m^{\mathrm{\Gamma }}\text{d}\mathrm{log}m.$$
(1)
In this notation the “classical” value found by Salpeter (salpeter (1955)) for the Solar neighbourhood is $`\mathrm{\Gamma }=-1.35`$. Average values for $`\mathrm{\Gamma }`$ from more recent studies, mostly of star clusters, can be found, e.g., in Scalo (scalo2 (1998)):
$`\mathrm{\Gamma }=-1.3\pm 0.5`$ for $`m>10M_{\odot },`$
$`\mathrm{\Gamma }=-1.7\pm 0.5`$ for $`1M_{\odot }<m<10M_{\odot },\text{ and}`$ (2)
$`\mathrm{\Gamma }=-0.2\pm 0.3`$ for $`m<1M_{\odot },`$
where the “$`\pm `$” values refer to a rough range of the slopes derived for the corresponding mass intervals, caused by empirical uncertainties or probable real IMF variations.
Knowledge of membership is essential to derive the IMF especially of open star clusters, where the contamination of the data with field stars presents a major challenge. Two methods for field star subtraction are in use nowadays: separating field and cluster stars by means of membership probabilities from stellar proper motions on one hand, statistical field star subtraction on the other hand. Our work combines these two methods: The proper motions are investigated for the bright stars of the clusters, down to the completeness limit of the photographic plates used, whereas the fainter cluster members are selected with statistical considerations.
From the cleaned data we derive the luminosity and mass functions of the clusters. Including the proper motions, we expect to receive a more reliable IMF, since the small number of bright stars in open clusters would lead to higher uncertainties, if only statistical field star subtraction were applied.
This is the second part of a series of studies of open star clusters, following Sanner et al. (n0581paper (1999)). Here we present data on two clusters of the northern hemisphere, NGC 1960 (M 36) and NGC 2194.
NGC 1960 (M 36) is located at $`\alpha _{2000}=5^\mathrm{h}36^\mathrm{m}6^\mathrm{s}`$, $`\delta _{2000}=+34^{\circ }8^{\prime }`$ and has a diameter of $`d=10^{\prime }`$ according to the Lyngå (lynga (1987)) catalogue. Morphologically, NGC 1960 is dominated by a number of bright ($`V\approx 11\text{ mag}`$) stars, whereas the total stellar density is only marginally enhanced compared to the surrounding field. The cluster has not yet been studied by means of CCD photometry. Photographic photometry was published by Barkhatova et al. (barkhatova (1985)), photoelectric photometry of 50 stars in the region of the cluster by Johnson & Morgan (johnsmorg (1953)). The most recent proper motion studies are from Meurers (meurers (1958)) and Chian & Zhu (chianzhu (1966)). As their epoch differences between first and second epoch plates (36 and 51 years, respectively) are smaller than ours and today’s measuring techniques can be assumed to be more precise, we are confident to gain more reliable results.
Tarrab (tarrab (1982)) published an IMF study of 75 open star clusters, among them NGC 1960, and found an extreme value for the slope of (in our notation) $`\mathrm{\Gamma }=-0.24\pm 0.05`$ for this object. Her work includes only 25 stars in the mass range $`3.5M_{\odot }\le m\le 9M_{\odot }`$, so that a more detailed study covering more members and reaching towards smaller masses is necessary.
For NGC 2194 (located at $`\alpha _{2000}=6^\mathrm{h}13^\mathrm{m}48^\mathrm{s}`$, $`\delta _{2000}=+12^{\circ }48^{\prime }`$, diameter $`d=9^{\prime }`$), our work is the first proper motion study according to van Leeuwen (vanleeuwen (1985)). The RGU photographic photometry of del Rio (delrio (1980)) is the most recent publication on NGC 2194 including photometric work.
The cluster is easily detectable as it contains numerous intermediate magnitude ($`13\text{ mag}\lesssim V\lesssim 15\text{ mag}`$) stars, although bright stars ($`V\lesssim 10\text{ mag}`$) are lacking.
In Sect. 2, we present the data used for our studies and the basic steps of data reduction and analysis. Sects. 3 and 4 include the proper motion studies, an analysis of the colour magnitude diagrams (CMDs), and determination of the IMF of the clusters. We conclude with a summary and discussion in Sect. 5.
## 2 The data and data reduction
### 2.1 Photometry
CCD images of both clusters were taken with the 1.23 m telescope at Calar Alto Observatory on October 15, 1998, in photometric conditions. The seeing was of the order of $`3^{\prime \prime }`$. The telescope was equipped with the $`1024\times 1024`$ pix CCD chip TEK 7_12 with a pixel size of $`24\text{ }\mu \text{m}\times 24\text{ }\mu \text{m}`$ and the WWFPP focal reducing system (Reif et al. wwfpp (1995)). This leads to a resolution of $`1.0^{\prime \prime }\text{ pix}^{-1}`$ and a field of view of $`17^{\prime }\times 17^{\prime }`$. Both clusters were observed in Johnson $`B`$ and $`V`$ filters, the exposure times were 1 s, 10 s, and 600 s in $`V`$, and 2 s, 20 s, and 900 s in $`B`$. Figs. 1 and 2 show CCD images of both clusters.
The data were reduced with the DAOPHOT II software (Stetson daophot (1991)) running under IRAF. From the resulting files, we deleted all objects showing too high photometric errors as well as sharpness and $`\chi `$ values. The limits were chosen individually for each image, typical values are $`0.03\text{ mag}`$ to $`0.05\text{ mag}`$ for the magnitudes, $`\pm 0.5`$ to $`1`$ for sharpness, and $`2`$ to $`4`$ for $`\chi `$.
The photometric errors of the calibrated magnitudes in different $`V`$ ranges, as given by the PSF fitting routine and valid for both clusters, are listed in Table 1.
The data were calibrated using 44 additional observations of a total of 27 Landolt (landolt (1992)) standard stars. After correcting the instrumental magnitudes for atmospheric extinction and to exposure times of 1 s, we used the following equations for transformation from instrumental to apparent magnitudes:
$`v-V`$ $`=`$ $`z_V-c_V(B-V)`$ (3)
$`(b-v)-(B-V)`$ $`=`$ $`z_{BV}-c_{BV}(B-V)`$ (4)
where capital letters represent apparent and lower case letters (corrected as described above) instrumental magnitudes. The extinction coefficients $`k_V`$ and $`k_{BV}`$, zero points $`z_V`$ and $`z_{BV}`$ as well as the colour terms $`c_V`$ and $`c_{BV}`$ were determined with the IRAF routine fitparams as:
$`k_V=0.14\pm 0.02,`$ $`k_{BV}=0.19\pm 0.03`$
$`z_V=2.52\pm 0.04,`$ $`z_{BV}=0.88\pm 0.04`$ (5)
$`c_V=0.09\pm 0.01,`$ $`c_{BV}=0.19\pm 0.01.`$
We checked the quality of these parameters by reproducing the apparent magnitudes of the standard stars from the measurements. The standard deviations derived were $`\sigma _V=0.02\text{ mag}`$ and $`\sigma _{B-V}=0.06\text{ mag}`$.
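For illustration, Eqs. (3) and (4) can be inverted in closed form to calibrate a single measurement. A short Python sketch using the coefficients above (note that the minus signs in Eqs. (3) and (4) are reconstructed from context, so the signs below are an assumption):

```python
# Invert Eqs. (3) and (4) for the apparent V and B-V; v_inst and b_inst are
# the extinction-corrected instrumental magnitudes scaled to 1 s exposures.
Z_V, Z_BV = 2.52, 0.88
C_V, C_BV = 0.09, 0.19

def calibrate(v_inst, b_inst):
    bv = ((b_inst - v_inst) - Z_BV) / (1.0 - C_BV)   # Eq. (4) solved for B-V
    V = v_inst - Z_V + C_V * bv                      # Eq. (3) solved for V
    return V, bv
```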
Johnson & Morgan (johnsmorg (1953)) published photoelectic photometry of 50 stars in the region of NGC 1960. Their results coincide with ours with a standard deviation of approx. $`0.03\text{ mag}`$ in $`V`$ and $`0.02\text{ mag}`$ in $`B-V`$, respectively. There is only one exception, star 110 (Boden’s (boden (1951)) star No. 46) for which we found $`V=14.25\text{ mag}`$, $`B-V=0.66\text{ mag}`$, which differs by $`\mathrm{\Delta }V\approx 2\text{ mag}`$ and $`\mathrm{\Delta }(B-V)\approx 0.3\text{ mag}`$ from the value of Johnson & Morgan (johnsmorg (1953)). In their photographic photometry, Barkhatova et al. (barkhatova (1985)) found values for this star which coincide with ours. We therefore assume the difference most likely to be caused by a mis-identification of this star by Johnson & Morgan (johnsmorg (1953)).
All stars for which $`B`$ and $`V`$ magnitudes could be determined are listed in Tables 2 (NGC 1960, 864 stars) and 3 (NGC 2194, 2120 stars), respectively. We derived the CMDs of the two clusters which are shown in Figs. 3 and 4. A detailed discussion of the diagrams is given in Sects. 3 and 4.
### 2.2 Actual cluster sizes
Mass segregation might lead to a larger “true” cluster size than stated, e.g., in the Lyngå (lynga (1987)) catalogue: While the high mass stars are concentrated within the inner part of the cluster, the lower mass stars might form a corona which can reach as far out as the tidal radius of the cluster (see, e.g., the recent work of Raboud & Mermilliod raboud2 (1998)). Therefore, the range of the cluster stars had to be checked. We applied star counts in concentric rings around the centre of the clusters.
Star counts in the vicinity of NGC 2194 show no significant variations of the stellar density outside a circle with a diameter of $`10^{\prime }`$ (corresponding to $`8.5`$ pc at the distance of the object) around the centre of the cluster. For NGC 1960, this point is more difficult to verify, since its total stellar density is much lower than for NGC 2194, so that it is not as easy to see at which point a constant level is reached, and on the other hand, its smaller distance lets us reach fainter absolute magnitudes so that the effect of mass segregation might be more prominent within the reach of our photometry. However, our tests provided evidence, too, that the cluster diameter is no larger than $`12^{\prime }`$. It must be stressed that these figures can only provide lower limits for the real cluster sizes: Members fainter than the limiting magnitude of our photometry might reach further out from the centres of the clusters.
### 2.3 Proper motions
For our proper motion studies we used photographic plates which were taken with the Bonn Doppelrefraktor, a 30 cm refractor ($`f=5.1\text{ m}`$, scale: $`40\stackrel{\prime \prime }{.}44\text{ mm}^{-1}`$) which was located in Bonn from 1899 to 1965 and at the Hoher List Observatory of Bonn University thereafter. The 16 cm $`\times `$ 16 cm plates cover a region of $`1.6^{\circ }\times 1.6^{\circ }`$. They were completely digitized with $`10\text{ }\mu \text{m}`$ linear resolution with the Tautenburg Plate Scanner, TPS (Brunzendorf & Meusinger TPS (1998), TPS99 (1999)). The positions of the objects detected on the photographic plates were determined using the software search and profil provided by the Astronomisches Institut Münster (Tucholke aim (1992)).
In addition, we used the 1 s to 20 s Calar Alto exposures to improve data quality and — for NGC 2194 — to extend the maximum epoch difference. Furthermore, a total of 16 CCD frames of NGC 1960 which were taken with the 1 m Cassegrain telescope ($`f/3`$ with a focal reducing system) of the Hoher List Observatory were included in the proper motion study. The latter observations cover a circular field of view of $`28\mathrm{}`$ in diameter which provides a sufficiently large area for the cluster itself and the surrounding field. The astrometric properties of this telescope/CCD camera system were proven to be suitable for this kind of work in Sanner et al. (holicam (1998)). The stellar $`(x,y)`$ positions were extracted from the CCD frames with DAOPHOT II routines (Stetson daophot (1991)). A list of the plates and Hoher List CCD images included in our study can be found in Table 4.
The fields of the photographic plates contain only a very limited number of HIPPARCOS stars (ESA hipp (1997)), as summarized in Table 5. Therefore, we decided to use the ACT catalogue (Urban et al. act (1998)) as the basis for the transformation of the plate coordinates $`(x,y)`$ to celestial coordinates $`(\alpha ,\delta )`$. For NGC 2194 this decision is evident, for NGC 1960 we preferred the ACT data, too, as the brightest HIPPARCOS stars are overexposed on several plates, thus lowering the accuracy of positional measurements: It turned out that only three of the HIPPARCOS stars were measured well enough to properly derive their proper motions from our data. The celestial positions of the stars were computed using an astrometric software package developed by Geffert et al. (geffert97 (1997)). We obtained good results using quadratic polynomials in $`x`$ and $`y`$ for transforming $`(x,y)`$ to $`(\alpha ,\delta )`$ for the photographic plates and cubic polynomials for the CCD images, respectively.
Initial tests in the fields of both clusters revealed that the proper motions computed for some ten ACT stars disagreed with the ACT catalogue values. We assume that this is caused by the varying accuracy of the Astrographic Catalogue which was used as the first epoch material of the ACT proper motions or by unresolved binary stars (see Wielen et al. wielen (1999)). We eliminated these stars from our input catalogue.
The proper motions were computed iteratively from the individual positions: Starting with the ACT stars to provide a calibration for the absolute proper motions and using the resulting data as the input for the following step, we derived a stable solution after four iterations. Stars with less than two first and second epoch positions each or a too high error in the proper motions ($`>8\text{ mas yr}^{-1}`$ in $`\alpha `$ or $`\delta `$) were not taken into further account.
To determine the membership probabilities from the proper motions, we selected $`18^{\prime }`$ wide areas around the centres of the clusters. This dimension well exceeds the proposed diameter of both clusters so that we can assume to cover all member stars for which proper motions were determined. Furthermore, this region covers the entire field of view of the photometric data. The membership probabilities were computed on the base of the proper motions using the method of Sanders (sanders (1971)): We fitted a sharp (for the members) and a wider spread (for the field stars) Gaussian distribution to the distribution of the stars in the vector point plot diagram and computed the parameters of the two distributions with a maximum likelihood method. From the values of the distribution at the location of the stars in the diagram we derived the membership probabilities. The positions of the stars did not play any role in the derivation of the membership probabilities. In the following, we assumed stars to be cluster members in case their membership probability is $`0.8`$ or higher.
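A minimal sketch of such a two-component fit (a simplified EM-type stand-in with circular Gaussians; the actual Sanders (1971) implementation may differ in detail):

```python
import numpy as np

# mu: (N, 2) array of proper motions [mas/yr]. Model: narrow (cluster) plus
# wide (field) 2-D Gaussian; iterate to the maximum-likelihood parameters.
def membership_probabilities(mu, n_iter=100):
    w = 0.5                                  # cluster fraction
    c_mean = f_mean = mu.mean(axis=0)
    c_sig, f_sig = 1.0, 5.0                  # starting widths [mas/yr]
    for _ in range(n_iter):
        d_c = ((mu - c_mean) ** 2).sum(axis=1)
        d_f = ((mu - f_mean) ** 2).sum(axis=1)
        g_c = w * np.exp(-0.5 * d_c / c_sig ** 2) / c_sig ** 2
        g_f = (1.0 - w) * np.exp(-0.5 * d_f / f_sig ** 2) / f_sig ** 2
        p = g_c / (g_c + g_f)                # membership probabilities
        w = p.mean()
        c_mean = (p[:, None] * mu).sum(axis=0) / p.sum()
        f_mean = ((1.0 - p)[:, None] * mu).sum(axis=0) / (1.0 - p).sum()
        c_sig = np.sqrt((p * d_c).sum() / (2.0 * p.sum()))
        f_sig = np.sqrt(((1.0 - p) * d_f).sum() / (2.0 * (1.0 - p).sum()))
    return p                                 # members: p >= 0.8, as used here
```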
### 2.4 Colour magnitude diagrams
Before analysing the CMDs in detail, we had to distinguish between field and cluster stars to eliminate CMD features which may result from the field star population(s). After cross-identifying the stars in the photometric and astrometric measurements, we found that our proper motion study is virtually complete down to $`V=14\text{ mag}`$ (NGC 1960) and $`V=15\text{ mag}`$ (NGC 2194). Therefore we used these magnitudes as the limits of our membership determination by proper motions. For the fainter stars we statistically subtracted the field stars:
We assumed a circular region with a diameter of 806 pixels or $`13\stackrel{\prime }{.}4`$ to contain all cluster member stars. As seen in Sect. 2.2, this exceeds the diameters of the clusters. The additional advantage of this diameter of the “cluster” region is that this circle corresponds to exactly half of the area covered by the CCD images so that it was not necessary to put different weights on the star counts in the inner and outer regions. We compared the CMDs of the circular regions containing the clusters with the diagrams derived from the rest of the images to determine cluster CMDs without field stars. The method is described in more detail in, e.g., Dieball & Grebel (dieball (1998)).
We fitted isochrones based on the models of Bono et al. (isoteramo (1997)) and provided by Cassisi (private communication) to the cleaned CMDs. We assumed a Solar metallicity of $`Z=0.02`$ and varied the distance modulus, reddening, and ages of the isochrones. Comparison with the isochrones of other groups (Schaller et al. schaller (1992), Bertelli et al. padua (1994)) does not show any significant differences in the resulting parameters.
### 2.5 Mass function
For the IMF study it is important to correct the data for the incompleteness of our photometry. With artificial star experiments using the DAOPHOT II routine addstar we computed $`B`$-magnitude dependent completeness factors for both clusters. The $`B`$ photometry was favoured for these experiments since its completeness decreases earlier as a consequence of its brighter limiting magnitude. According to Sagar & Richtler (sagricht (1991)), the final completeness of the photometry after combining the $`B`$ and $`V`$ data is well represented by the least complete wavelength band, hence $`V`$ completeness was not studied. The results, which are approximately the same for both NGC 1960 and NGC 2194, are plotted in Fig. 5: The sample is — except for a few stars which likely are missing due to crowding effects — complete down to $`B=19\text{ mag}`$, and for stars with $`B\le 20\text{ mag}`$, we still found more than 60 % of the objects. In general, we found that the completeness in the cluster regions does not differ from the values in the outer parts of the CCD field. We therefore conclude that crowding is not a problem for our star counts, even at the faint magnitudes. However, crowding may lead to an increase in the photometric errors, especially in the region of NGC 2194, in which the stellar density is considerably higher than for NGC 1960.
Several objects remained far red- or bluewards of the lower part of the main sequence after statistical field star subtraction. We assume that this results from the imperfect statistics of the sample. For a formal elimination of these stars we had to define a region of the CMD outside of which all objects can be considered to be non-members. This was achieved by shifting the fitted isochrones by two times the errors listed in Table 1 in $`V`$ and $`B-V`$ to the lower left and the upper right in the CMD (Since this procedure applies only to stars within the range of the statistical field star subtraction, we used the errors given for the faint stars in our photometry.). To take into account probable double or multiple stars we added another $`0.75\text{ mag}`$ to the shift to the upper right, and for NGC 2194 we allowed another $`\mathrm{\Delta }(B-V)=0.2\text{ mag}`$ in the same direction as a consequence of the probably higher photometric errors due to crowding in the central part of the cluster. All stars outside the corridor defined in this way are not taken into account for our further considerations. The shifted isochrones are plotted as dotted lines in Figs. 10 and 14, respectively. It may be remarked that according to Iben (iben (1965)) we can exclude objects with a distance of several magnitudes in $`V`$ or a few tenths of magnitudes in $`B-V`$ from the isochrone to be pre-main sequence members of neither NGC 1960 nor NGC 2194.
We furthermore selected all objects below the turn-off point of the isochrones. For the remaining stars, we calculated their initial masses on the base of their $`V`$ magnitudes. We used the mass-luminosity relation provided with the isochrone data. $`V`$ was preferred for this purpose as the photometric errors are smaller in $`V`$ compared to the corresponding $`B`$ magnitudes. The mass-luminosity relation was approximated using $`6^{\text{th}}`$ order polynomials
$$m[M_{\odot }]=\sum _{i=0}^{6}a_i(V[\text{mag}])^i$$
(6)
which resulted in an rms error of less than 0.01. Using $`5^{\text{th}}`$ or lower order polynomials caused higher deviations especially in the low mass range. The values of the parameters $`a_i`$ are listed in Table 6.
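In code, Eq. (6) is a direct polynomial evaluation. The coefficients below are hypothetical placeholders, since the fitted $`a_i`$ of Table 6 are cluster-specific and not reproduced here:

```python
# Eq. (6): initial mass [solar masses] from V [mag]. The a_i below are
# placeholder values only; the actual coefficients are given in Table 6.
a = [1.0, -0.1, 0.01, 0.0, 0.0, 0.0, 0.0]          # hypothetical a_0 ... a_6

def initial_mass(V_mag):
    return sum(a_i * V_mag ** i for i, a_i in enumerate(a))
```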
Taking into account the incompleteness of the data, we determined the luminosity and initial mass functions of the two clusters. The IMF slope was computed with a maximum likelihood technique. We preferred this method instead of the “traditional” way of a least square fit of the mass function to a histogram, because those histogram fits are not invariant to size and location of the bins: Experiments with shifting the location and size of the bins resulted in differences of the exponent of more than $`\mathrm{\Delta }\mathrm{\Gamma }=0.2`$. Fig. 6 shows the results of such an experiment with the NGC 1960 data. The fitted IMFs show an average $`\mathrm{\Gamma }`$ value of around $`-1.2`$ with individual slopes ranging from $`-1.1`$ down to $`-1.4`$. This can be explained by the very small number of stars in the higher mass bins which contain only between one and ten stars. In case only one member is mis-interpreted as a non-member or vice versa, the corresponding bin height might be affected by up to $`\mathrm{\Delta }\mathrm{log}N(m)=0.3`$ in the worst case which will heavily alter the corresponding IMF slope. In addition, all bins of the histogram obtain the same weight in the standard least square fit, no matter how many stars are included. For very populous or older objects (globular or older open star clusters, see, e.g., the IMF of NGC 2194) this effect plays a minor role, because in these cases the number of stars per bin is much higher. On the other hand, the maximum likelihood method does not lose information (as is done by binning), and each star obtains the same weight in the IMF computation.
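A minimal sketch of such an unbinned fit (completeness weighting omitted for brevity; since $`\text{d}N\propto m^{\mathrm{\Gamma }}\text{d}\mathrm{log}m`$ implies $`\text{d}N/\text{d}m\propto m^{\mathrm{\Gamma }-1}`$, the exponent of the mass density is $`\mathrm{\Gamma }-1`$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximum-likelihood power-law slope on the mass interval [m_lo, m_hi]:
# every star enters with equal weight, and no binning is involved.
def fit_gamma(masses, m_lo, m_hi):
    m = masses[(masses >= m_lo) & (masses <= m_hi)]

    def neg_loglike(gamma):
        alpha = gamma - 1.0                         # exponent of dN/dm
        if abs(alpha + 1.0) < 1e-9:                 # alpha = -1 edge case
            norm = np.log(m_hi / m_lo)
        else:
            norm = (m_hi ** (alpha + 1) - m_lo ** (alpha + 1)) / (alpha + 1)
        return -(alpha * np.log(m).sum() - m.size * np.log(norm))

    return minimize_scalar(neg_loglike, bounds=(-4.0, 2.0), method="bounded").x
```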
Nevertheless, for reasons of illustration we sketch the IMF of our two clusters with an underlying histogram in Figs. 12 and 16.
## 3 NGC 1960
### 3.1 Proper motion study
With the method described above, we determined the proper motions of 1,190 stars within the entire field of the photographic plates. We found that the limiting magnitude of the second epoch plates is brighter than that of the first epoch plates. This effect is compensated by the addition of the CCD data, so that in the cluster region we reach fainter stars than in the outer region of the field. Therefore, the limiting magnitude of the proper motion study is fainter in the area for which the CCD data were available.
After four iterations of proper motion determination, the comparison of the computed proper motions with ACT led to systematic positional differences of the order of $`\mathrm{\Delta }\alpha =0\stackrel{\prime \prime }{.}04`$ and $`\mathrm{\Delta }\delta =0\stackrel{\prime \prime }{.}10`$ and for the proper motions of around $`0.4\text{ mas yr}^{-1}`$. The internal dispersion of the proper motions was computed from the deviations of the positions from a linear fit of $`\alpha `$ and $`\delta `$ as functions of time. We derived mean values of $`\sigma _{\mu _\alpha \mathrm{cos}\delta }=1.1\text{ mas yr}^{-1}`$ and $`\sigma _{\mu _\delta }=0.9\text{ mas yr}^{-1}`$ for individual stars.
We detected a slight, but systematic slope of the proper motions in $`\delta `$ depending on the magnitude of the stars resulting in a difference of approximately $`2\text{ mas yr}^{-1}`$ for the proper motions between the brightest and faintest members. As the magnitude range of the ACT catalogue within our field of view is limited to approximately 11 mag, we used the positions provided by the Guide Star Catalog (GSC) Version 1.2 (Röser et al. gsc (1998)), which covers stars over the entire range of our proper motion study, for further analysis. The disadvantage of GSC is the fact that it does not provide proper motions. Therefore, all results obtained with this catalogue are relative proper motions only. Fig. 7 shows the proper motions in $`\delta `$ and their dependence on the magnitudes derived from this computation for the stars in the inner region of the photographic plates. The diagram shows that only the stars brighter than $`10.5\text{ mag}`$ are influenced by a clear magnitude term leading to deviations of up to $`2\text{ mas yr}^{-1}`$ with respect to the stars fainter than $`10.5\text{ mag}`$ which do not show any systematic trend. We included a magnitude term into our transformation model, however, since the behaviour is not linear with magnitudes and different from plate to plate we were unable to completely eliminate the effect. Furthermore, taking into account that many of the ACT stars are brighter than $`10.5\text{ mag}`$, it is clear that this deviation was extrapolated over the entire range of the proper motion study.
Meurers (meurers (1958)), who had used the same first epoch material for his proper motion study, found a similar phenomenon and suggested that the bright and the faint stars in the region of NGC 1960 form independent stellar “aggregates”. In his study the proper motion difference between bright and faint stars is much more prominent. Taking into account his smaller epoch difference of 36 years this could be explained assuming that the effect is caused by (at least some of) the first epoch plates on which the positions of the brighter stars seem to be displaced by an amount of approximately $`0.1^{\prime \prime }`$ to $`0.2^{\prime \prime }`$ compared to the fainter objects, whereas both his and our second epoch data are unaffected. This proposition would also explain why we did not detect this inaccuracy during the determination of the positions on the plates, since the uncertainties of single positional measurements are of the same order of magnitude.
The proper motions in $`\alpha `$ proved to be unaffected by this phenomenon.
We found that when using the ACT based proper motions, the membership determination is not affected by this problem, since the magnitude trend in $`\mu _\delta `$ is smoothed over the magnitude range (in comparison with Fig. 7). On the other hand, in the GSC solution, the bright stars have proper motions differing too much from the average so that almost all of them are declared non-members. Therefore we used the results based on the ACT data for the computation of the membership probabilities. Table 7 shows a list of all proper motions determined on the base of the ACT catalogue.
The vector point plot diagram as determined on the base of ACT for the stars in the central region of the plates is presented in Fig. 8. Membership determination resulted in 178 members and 226 non-members of NGC 1960. The distribution of membership probabilities sketched in Fig. 9 shows a clear separation of members and non-members with only a small number of stars with intermediate membership probabilities. The centre of the proper motion distribution of the cluster members in the vector point plot diagram is determined to be
$`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`3.2\pm 1.1\text{ mas yr}^{-1}\text{ and}`$ (7)
$`\mu _\delta `$ $`=`$ $`9.8\pm 0.9\text{ mas yr}^{-1}.`$ (8)
The width of the Gaussian distribution of the proper motions is around $`1\text{ mas yr}^{-1}`$ and hence the same as the $`1\sigma `$ error of the proper motion of a single object. The field moves very similarly with
$`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`4.4\pm 5\text{ mas yr}^{-1}\text{ and}`$ (9)
$`\mu _\delta `$ $`=`$ $`11.4\pm 5\text{ mas yr}^{-1}.`$ (10)
The similarity of field and cluster proper motions makes membership determination a difficult task: Several field stars which by chance have the same proper motion as the cluster stars will be taken for members.
These results cannot be used for a determination of the absolute proper motion of the cluster, since the centre of the distribution can be assumed to be displaced upwards in the vector point plot diagram as a consequence of the magnitude dependence of the $`\mu _\delta `$ values. To obtain reliable absolute proper motions, nevertheless, we used the fainter ($`>10.5\text{ mag}`$) part of the proper motions computed on the base of GSC 1.2 which are stable with magnitudes and compared their relative proper motions with the values given for the corresponding stars in the ACT. We found a difference of $`\mathrm{\Delta }(\mu _\alpha \mathrm{cos}\delta )=1.4\pm 2.6`$ and $`\mathrm{\Delta }\mu _\delta =6.5\pm 2.4`$ and centres of the GSC based proper motion distributions of $`\mu _\alpha \mathrm{cos}\delta =1.5\pm 0.7`$ and $`\mu _\delta =1.5\pm 0.7`$. As a consequence we determined the absolute proper motions of NGC 1960 to be
$`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`2.9\pm 2.7\text{ mas yr}^{-1}\text{ and}`$ (11)
$`\mu _\delta `$ $`=`$ $`8.0\pm 2.5\text{ mas yr}^{-1}.`$ (12)
As expected, the value of the proper motion in right ascension is — compared to Eq. (7) — unaffected within the errors, whereas $`\mu _\delta `$ is different from the value of Eq. (8) by a value which corresponds to the $`2\sigma `$ error of $`\mu _\delta `$.
### 3.2 Colour magnitude diagram properties
The CMD of NGC 1960 (Fig. 10) shows a clear and narrow main sequence with an indication of a second main sequence including approximately 15 stars from $`V=11.5\text{ mag}`$ to $`V=14\text{ mag}`$ (corresponding to masses from $`m\approx 3.5M_{\odot }`$ to $`m\approx 1.4M_{\odot }`$). These stars might be Be stars (see, e.g., Zorec & Briot zorec (1991)) or unresolved binaries (see, e.g., Abt abt (1979) or the discussion in Sagar & Richtler sagricht (1991)).
Slettebak (slettebak (1985)) reports on two Be stars in NGC 1960. One of them, our star 1374 (Boden’s (boden (1951)) star No. 505, erroneously named No. 504 by Slettebak), clearly fails to fulfil our membership criterion with a proper motion of $`\mu _\alpha \mathrm{cos}\delta =0.7\text{ mas yr}^{-1}`$ and $`\mu _\delta =0.3\text{ mas yr}^{-1}`$. In addition, it is located so far off the centre of the cluster that it is even outside the field of our CCD images. On the other hand, star 4 (Boden’s (boden (1951)) star No. 101, $`V=9.050\text{ mag}`$, $`B-V=0.106\text{ mag}`$) shows a proper motion of $`\mu _\alpha \mathrm{cos}\delta =3.9\text{ mas yr}^{-1}`$ and $`\mu _\delta =8.8\text{ mas yr}^{-1}`$ which makes it a very likely cluster member with a membership probability of $`0.95`$. In Fig. 10, this object is marked with a circle. A third Be star in the region is mentioned in the WEBDA database (Mermilliod webda (1999)): Boden’s (boden (1951)) star No. 27, or our star 8. We obtained a proper motion of $`\mu _\alpha \mathrm{cos}\delta =3.36\text{ mas yr}^{-1}`$, $`\mu _\delta =8.02\text{ mas yr}^{-1}`$. From these figures we computed a membership probability of $`0.89`$ so that this star is a likely cluster member. We marked this object in Fig. 10 with a cross. As there is no evidence for any further Be stars, it is plausible to assume that the other stars forming the second main sequence most likely are unresolved binary stars.
The star at $`V=11.3\text{ mag}`$, $`B-V=0.46\text{ mag}`$ (star 29 of our sample, marked with a triangle in Fig. 10) does not fit to any isochrone which properly represents the other stars in this magnitude range. It shows a proper motion of $`\mu _\alpha \mathrm{cos}\delta =2.9\text{ mas yr}^{-1}`$ and $`\mu _\delta =9.7\text{ mas yr}^{-1}`$ resulting in a membership probability of $`0.97`$. This object may be an example for a star which coincidentally shows the same proper motion as the cluster, but being in fact a non-member.
From our isochrone fit, we derived the parameters given in Table 8. Age determination was quite a difficult task for NGC 1960, as there are no significantly evolved (red) stars present in the CMD. We found that the 16 Myr isochrone might be the optimal one, since it represents the brightest stars better than the (in terms of ages) neighbouring isochrones. The comparably large error we adopted reflects this consideration.
### 3.3 Initial mass function
The determination of the IMF slope from the completeness corrected data obtained from the CMD (Fig. 10) leads to the value of $`\mathrm{\Gamma }=-1.23\pm 0.17`$ for NGC 1960 in a mass interval from $`m=9.4M_{\odot }`$ down to $`m=0.725M_{\odot }`$ (corresponding to $`V=8.9\text{ mag}`$ to $`V=19\text{ mag}`$). This restriction was chosen to guarantee a completeness of the photometry of at least 60 %. To test the stability of the IMF concerning the probable double star nature of several objects, we assumed the stars above the brighter part of the main sequence (a total of 18 objects) to be unresolved binary stars with a mass ratio of 1 and computed the IMF of this modified sample, as well. The slope increased to the value of $`\mathrm{\Gamma }=-1.19\pm 0.17`$ within the same mass range, representing a slightly shallower IMF. Anyway, the influence of a binary main sequence is negligible within the errors. We also experimented with leaving out the magnitude range critical for membership determination (see Sect. 3.1 and Fig. 7), i.e. the stars brighter than $`V=10.5\text{ mag}`$ ($`m>5.75M_{\odot }`$), and derived $`\mathrm{\Gamma }=-1.26\pm 0.2`$ — a result which coincides well within the errors with the above ones. This shows once more that the membership determination — and therefore the IMF — was almost not affected by the magnitude term of our proper motion study. Fig. 12 sketches the IMF of NGC 1960.
## 4 NGC 2194
### 4.1 Proper motion study
For stars brighter than $`V=9\text{ mag}`$, some of the plates of NGC 2194 showed a systematic shift of the computed $`\delta `$ positions with respect to the ACT values. We therefore excluded all those stars from our input catalogue. However, this effect — which is different from the one described before for NGC 1960, since it perceptibly affects the positions — does not influence our membership determination as the region of NGC 2194 does not cover any stars of this brightness (see also Table 3).
Proper motions of 2,233 stars could be computed from the plates of NGC 2194. This figure is significantly higher than for NGC 1960, since this time the second epoch plates are of much higher quality, so that we reach fainter stars over the entire $`1.6^{\circ }\times 1.6^{\circ }`$ field. After four iterations of our proper motion determination, the systematic difference between ACT and our results were $`0\stackrel{\prime \prime }{.}07`$ for the positions and $`0.07\text{ mas yr}^{-1}`$ for the proper motions. The standard deviation of the proper motions were $`\sigma _{\mu _\alpha \mathrm{cos}\delta }=1.6\text{ mas yr}^{-1}`$ and $`\sigma _{\mu _\delta }=1.5\text{ mas yr}^{-1}`$. The vector point plot diagram (Fig. 13) of the stars in the region of NGC 2194 shows that the stars are not as highly concentrated in the centre of the distribution of the cluster stars as for NGC 1960. This also explains the less distinct peak for high membership probabilities in the histogram shown in Fig. 9.
Although more stars are found on the plates as a whole, proper motions of fewer objects were detected in the inner region. This is caused by the lower number of sufficiently bright stars in and around the cluster (see Figs. 2 and 4). On the other hand, the low part of the main sequence is much more densely populated. We will see in Sect. 4.2 that the total number of members detected is higher for NGC 2194 than for NGC 1960.
We classified 149 members and 81 non-members. For this cluster, the separation between members and non-members was even more difficult as can be seen from the membership probability histogram plotted in Fig. 9: Approximately 50 stars show intermediate membership probabilities between 0.2 and 0.8. The result for the absolute proper motion of the cluster is
$`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`2.3\pm 1.6\text{ mas yr}^{-1}\text{ and}`$ (13)
$`\mu _\delta `$ $`=`$ $`0.2\pm 1.5\text{ mas yr}^{-1}`$ (14)
and for the field
$`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`0.4\pm 5.5\text{ mas yr}^{-1}\text{ and}`$ (15)
$`\mu _\delta `$ $`=`$ $`0.1\pm 5.5\text{ mas yr}^{-1}.`$ (16)
The measured standard deviation of the cluster proper motion distribution again is the same as was determined for one individual object.
Table 9 shows a list of all proper motions computed.
### 4.2 Colour magnitude diagram properties
According to the field star subtracted CMD, presented as Fig. 14, NGC 2194 shows a prominent main sequence with a turn-off point near $`V=14.5\text{ mag}`$ and a sparsely populated red giant branch. For this cluster, the proper motion study is not of great value for the isochrone fitting process: As the main sequence turn-off is located around $`V=14.5\text{ mag}`$, it is clear that the bright blue stars either do not belong to the cluster or do not help in finding the best isochrone fit because of their non-standard evolution like blue stragglers (see, e.g., Stryker stryker (1993)). We assume that the presence of the blue bright stars is mainly caused by the coincidence of the field and cluster proper motion centres which causes a certain number of field stars to be mis-identified as cluster members. As expected in Sect. 4.1, this effect is more dramatic here than in the case of NGC 1960. From the comparison of the isochrones we derived the parameters for NGC 2194 given in Table 8.
Star No. 38 of our sample was first mentioned by del Rio (delrio (1980), star 160 therein) who considers this object a field star as a consequence of its location in the CMD. For the same reason, but with the opposite conclusion, Ahumada & Lapasset (bluestrag (1995)) mention this object as a cluster member in their “Catalogue of blue stragglers in open star clusters”. We find for this star a proper motion of $`\mu _\alpha \mathrm{cos}\delta =2.5\text{ mas yr}^{-1}`$ and $`\mu _\delta =0.1\text{ mas yr}^{-1}`$ leading to a membership probability of 0.34. Therefore, we agree with del Rio and assume star 38 to be a field star, too. So far, no further information is known about the bright blue stars in Fig. 14, so that we cannot give any definite statement about the nature of these objects.
### 4.3 Initial mass function
The age of NGC 2194 of $`550`$ Myr — together with its distance of almost $`3`$ kpc — implies that the range of the observable main sequence is very limited. We therefore could compute the IMF only over the mass interval from $`m\approx 1M_{\odot }`$ (corresponding to $`V\approx 19\text{ mag}`$) to $`m\approx 2.1M_{\odot }`$ (or $`V\approx 15.0\text{ mag}`$). The slope determined is $`\mathrm{\Gamma }=-1.33\pm 0.29`$. The comparatively large error is a consequence of the small mass interval.
Comparing the resulting IMF with a histogram (see Fig. 16), one finds good agreement for three of the four bins (The bin width is the same as for NGC 1960). As the completeness drops rapidly within the leftmost bin, it has — in total — a high uncertainty. However, only stars with masses of $`m>1M_{\odot }`$ (corresponding to a completeness of higher than 60%) were taken into consideration for the IMF computation, which is indicated by the limits of the IMF line in Fig. 16.
Note that as a consequence of the age of NGC 2194, the cluster may have encountered dynamical evolution during its lifetime, which has to be kept in mind when using the term “initial mass function”. However, we follow Scalo’s (scalo1 (1986)) nomenclature, who uses the expression for intermediate age clusters, to discriminate between a mass function based on the initial and the present day stellar masses.
## 5 Summary and discussion
With our work we found NGC 1960 to be a young open star cluster with an age of 16 Myr. It is located at a distance of 1300 pc from the Sun. These results confirm the findings of Barkhatova et al. (barkhatova (1985)) obtained with photographic photometry.
We derived proper motions of 404 stars in the region of the cluster down to $`14\text{ mag}`$. 178 of those can be considered members of NGC 1960. Despite the problems with our proper motion determination (see Sect. 3.1), we are able to state that our results do not support the values given as the absolute proper motion of NGC 1960 by Glushkova et al. (glush (1997)) on the basis of the “Four Million Star Catalog” (Gulyaev & Nesterov 4M (1992)): They found $`\mu _\delta =8.2\pm 1\text{ mas yr}^{-1}`$ which is in agreement with our study, but $`\mu _\alpha \mathrm{cos}\delta =14.7\pm 1\text{ mas yr}^{-1}`$ which differs from our result by more than $`10\text{ mas yr}^{-1}`$.
Our study of the IMF of NGC 1960 led to a power law with a slope of $`\mathrm{\Gamma }=-1.23\pm 0.17`$. This value is very high (i.e. the IMF is shallow) compared to other studies; however, it still matches the interval for $`\mathrm{\Gamma }`$ suggested by Scalo (scalo2 (1998)) for intermediate mass stars ($`-2.2\le \mathrm{\Gamma }\le -1.2`$).
Although we should stress that we cannot say anything about the shape of the IMF in the very low mass range ($`m\lesssim M_{\odot }`$), we do not see any evidence for a flattening of the IMF of NGC 1960 below $`1M_{\odot }`$.
NGC 2194 — with an age of 550 Myr — belongs to the intermediate age galactic open star clusters. Our findings from the photometric study are in good agreement with the photographic $`RGU`$ photometry published by del Rio (delrio (1980)).
As the cluster is located at a distance of almost 3 kpc, we could only cover its mass spectrum down to $1\,M_\odot$. Nevertheless, we were able to determine the IMF on the basis of 623 main sequence stars, which led to a slope of $\Gamma = -1.33 \pm 0.29$, almost Salpeter’s (salpeter (1955)) value, but still close to the shallow end of the interval given by Scalo (scalo2 (1998)).
In our previous paper (Sanner et al. n0581paper (1999)), we studied the open star cluster NGC 581 (M 103), for which we found the same age of $16\pm 4$ Myr as for NGC 1960, but a much steeper IMF slope of $\Gamma = -1.80 \pm 0.19$. We can therefore state that our method of IMF determination does not systematically lead to steep or shallow mass functions.
With our as yet very small sample, it is not possible to find evidence for a dependence of the IMF of open star clusters on any parameter of the cluster. We will therefore have to investigate further clusters and also compare our results with other studies.
###### Acknowledgements.
The authors thank Wilhelm Seggewiss for allocating observing time at the telescopes of Hoher List Observatory, Santi Cassisi for providing the isochrones necessary for our study and Andrea Dieball and Klaas S. de Boer for carefully reading the manuscript. J.S. thanks Georg Drenkhahn for his valuable hints concerning the maximum likelihood analysis software. M.A. and J.B. acknowledge financial support from the Deutsche Forschungsgemeinschaft under grants Bo 779/21 and ME 1350/3-2, respectively. This research has made use of NASA’s Astrophysics Data System Bibliographic Services, the CDS data archive in Strasbourg, France, and J.-C. Mermilliod’s WEBDA database of open star clusters.
# Diffusion as mixing mechanism in granular materials
## I Introduction
Granular media are notoriously difficult to mix. For a variety of reasons and under rather general conditions, they tend to form segregated steady states. For example, segregation can occur during granular flow, the bigger particles moving farther than the smaller ones. Differences in size (polydispersity of the grains) or in material (different kinds of grains) produce different geometrical or physical properties. Segregation can also be due to percolation, where the small grains fall through the holes between the big grains, leaving only the bigger particles behind. Shear and vibration can also produce segregation. One of the best known examples of vibration segregation is perhaps the “Brazil nut effect”. In this case, the geometrical properties are responsible for the upward movement of big particles, although convection processes near the boundaries can also be very important. All these processes (flow, shearing and convection) are very common in industrial applications such as in mixers. For these reasons such mixers are efficient only for rather homogeneous materials. In the polydisperse case it is very hard to avoid segregation.
For gases and liquids, the thermal agitation of molecules is a natural and efficient mechanism leading to thoroughly mixed systems with homogeneous equilibrium steady states. We propose here a study of a system of agitated grains, in analogy with liquid or gas molecules at the microscopic scale.
Two major differences between granular materials and fluids are: (1) the particle size compared to the mean free path and (2) the inelastic properties and friction responsible for energy dissipation during collisions. The question then is whether these differences will alter the system’s natural tendency to mix by diffusion. In other words, is it possible to use diffusion to mix grains? In spite of the dissipative collisions, it is possible to keep a granular system agitated, for example on an air-table or a vibrating bed. To simulate numerically such constantly agitated granular systems, we add an external random force to the equations of motion (see section II). We then analyze the grain diffusion and its dependence on the various parameters of the system, such as grain size. We find that in spite of the dissipative nature of the collisions, diffusion is still a good mixing mechanism, just like in fluids.
In section II we detail the algorithm and summarize the principal dynamical equations and parameters of our system. We verify our procedure with the study of a monodisperse system in section III and establish relations which characterize the temporal evolution of an initially segregated system. The bidisperse case is studied in section IV. We show in particular the evolution of a system with homogeneous agitation, and also the effects of a gradient in this agitation. Our conclusions and discussion are in section V.
## II Algorithm and review
We use an Event Driven Molecular Dynamics algorithm. The simulated grains have the same characteristics (e.g. normal and tangential restitution coefficients) as measured experimentally. To thermalize the system, we add at regular time intervals, $dt$, external random forces which act on every particle. There are several choices one can make for this force. Our choice is the following:
$$F_i^t=m\left(\sqrt{\eta _0^2/dt}\right)\zeta _i$$
(1)
with $i=x,y$ (corresponding to the two directions), $\zeta_i$ is a Gaussian noise characterized by $<\zeta_i\zeta_j>=\delta_{i,j}$ and $m$ is the particle mass. $\eta_0^2$ is the control parameter which we use to increase or decrease the agitation. Experimentally, on the air table, plastic disks of radius $R$ move due to the fluctuations of the air flux acting on their surfaces. Therefore, since their mass is proportional to $R^2$, we expect the acceleration to be independent of $R$. That is why we have chosen an external force proportional to the particle mass, Eq. (1). The equation of motion of a particle between two collisions becomes:
$$\frac{d\mathrm{v}_\mathrm{i}}{dt}=\sqrt{\eta _0^2}\zeta _i.$$
(2)
In this paper $`\mathrm{v}_\mathrm{i}`$ denotes the instantaneous velocity in the $`\mathrm{i}`$ direction and $`v^2`$ the mean square velocity. The system is two dimensional and is enclosed in a square box whose walls are made of grains of radius $`r_w`$ and are infinitely massive. The particle-wall collisions are taken to be elastic. Note that in these two-dimensional simulations, the particles are represented as spheres interacting at their equators.
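For concreteness, a minimal sketch of the thermalizing kick applied between collision events, written in Python with an assumed $(N,2)$ velocity array layout (variable names are ours, not from any specific code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kick(vel, eta0_sq, dt):
    """Apply the thermalizing kick of Eqs. (1)/(2) to all particles.

    Integrating dv_i/dt = sqrt(eta_0^2) zeta_i over one interval dt of
    Gaussian white noise gives a velocity increment of variance
    eta_0^2 * dt per component, so the mean kinetic energy grows as
    m * eta_0^2 * dt per particle and time step, as in Eq. (4).
    """
    vel += np.sqrt(eta0_sq * dt) * rng.standard_normal(vel.shape)
    return vel
```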
### A Macroscopic characteristics of the steady state
The above model leads to a steady state characterized by a constant mean square velocity, $v^2(t\to\infty)$, for all particles. The energy loss during collisions is compensated for by the random force. For a monodisperse gas, the energy balance is easily calculated. The energy loss, $\Gamma$, per unit time in the steady state is given by
$$\Gamma \propto \omega m v^2,$$
(3)
where $`\omega `$ is the frequency of collisions. On the other hand, the average gain in energy due to the random force during $`dt`$ is easily obtained from Eq. (2):
$$\frac{1}{2}m\left[v^2(t+dt)-v^2(t)\right]=m\eta_0^2\,dt.$$
(4)
In the steady state we can thus write
$$\frac{1}{2}m\frac{\partial v^2}{\partial t}=-\Gamma+m\eta_0^2.$$
(5)
We can write $\omega\sim\frac{\sqrt{v^2}}{l}$, where $l$ is a characteristic length depending only on the packing fraction and on the radius of the grains. With this assumption and the fact that $\frac{\partial v^2}{\partial t}(t\to\infty)=0$, Eq. (5) gives
$$v^2(\infty)\propto(l\eta_0^2)^{2/3}.$$
(6)
This power law is independent of the coefficients of restitution and friction if the dissipation is not too large. The results of the kinetic theory of inelastic gases can be applied; in particular, the velocity distribution can be approximated by a Maxwellian. The parameter $\eta_0^2$ allows us to change the granular temperature, $T$, since $v^2\propto T$. Therefore, in the steady state, $T$ is independent of the initial conditions. For polydisperse gases the problem becomes more complicated, as will be seen below.
### B Coefficient of diffusion
Since the mean square velocity is constant, and consequently the collision frequencies too, particles have a simple diffusive behavior. The mean square displacement, $<(r(t+t_0)-r(t_0))^2>$, gives the coefficient of diffusion
$$<(r(t+t_0)-r(t_0))^2>=4Dt,$$
(7)
where $t_0$ is large enough to ensure thermalization of the system. Clearly, if the system is examined at a time scale $t\lesssim 1/\omega$ we will not observe the true diffusive behaviour. At short time, $\mathrm{v}_\mathrm{i}$ is not constant due to the action of $F_i^t$ and so the mean square displacement is not yet linear in $t$. For $t\gg 1/\omega$ we verify the linear dependence of the mean square displacement on time, for all particles. The value of the coefficient of diffusion $D$ found from the simulation is thus larger than the theoretical value predicted by a Langevin description, due to the dynamics of the particle at short time.
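As an illustration, the following sketch (Python, with a hypothetical trajectory array layout) extracts $D$ by fitting the late-time part of the mean square displacement, Eq. (7):

```python
import numpy as np

def diffusion_coefficient(pos, times, t0_index):
    """Estimate D from the long-time mean square displacement, Eq. (7).

    pos : (T, N, 2) array of trajectories sampled at `times`
    t0_index : index of a reference time t0 beyond thermalization
    """
    disp = pos[t0_index:] - pos[t0_index]          # r(t0+t) - r(t0)
    msd = (disp ** 2).sum(axis=2).mean(axis=1)     # average over particles
    t = times[t0_index:] - times[t0_index]
    tail = slice(len(t) // 2, None)                # keep only t >> 1/omega
    slope = np.polyfit(t[tail], msd[tail], 1)[0]
    return slope / 4.0                             # <dr^2> = 4 D t in 2D
```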
### C Simulation procedure
We study systems made of two species of grains, $s$ and $b$. The radii of the particles are $R_s$ and $R_b$, respectively. The system is a square box of length $L$ and we use the boundary conditions discussed above. The number of particles of each species is calculated based on the desired packing fraction $C$ and the relative proportion of $s$ particles $x_s$.
$$\{\begin{array}{c}C=\frac{n_s\pi R_s^2+n_b\pi R_b^2}{L^2},\hfill \\ x_s=\frac{n_s\pi R_s^2}{n_s\pi R_s^2+n_b\pi R_b^2},\hfill \end{array}$$
(8)
where $n_s$ and $n_b$ are respectively the numbers of particles $s$ and $b$.
For all mixtures, we performed two types of simulations. In the first one, the two species $s$ and $b$ are already mixed and the initial position of each particle is chosen randomly in the box by using a classical Random Sequential Adsorption (R.S.A.) algorithm. We are careful that this algorithm does not introduce segregation in the initial configuration. For the second type of simulation, the two species are initially separated, with the $s$ particles on the right and the $b$ particles on the left. The system is prepared such that the packing fraction is homogeneous in the whole system.
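A minimal sketch of the R.S.A. placement for one species (Python; names and the simple rejection loop are illustrative assumptions, the bidisperse case loops over both radii):

```python
import numpy as np

def rsa_configuration(n_particles, radius, box, rng, max_tries=10**6):
    """Random Sequential Adsorption of equal disks in a square box.

    Candidate centers are drawn uniformly; a candidate is accepted only
    if it overlaps neither a wall nor a previously placed disk.
    """
    centers = []
    for _ in range(max_tries):
        if len(centers) == n_particles:
            break
        c = rng.uniform(radius, box - radius, size=2)
        if all(np.hypot(*(c - p)) >= 2 * radius for p in centers):
            centers.append(c)
    return np.array(centers)
```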
In the following section we present our results in the simple case where the two species have the same mechanical and geometrical properties, i.e. $`s`$ and $`b`$ grains are of the same type.
## III Monodisperse case
To test the validity of our algorithm we start with identical grains, i.e. $R_s=R_b$. In this case, we do not expect segregation because the grains are identical, but we would like to verify that the thermal process is efficient and that particles do not aggregate. In other words, after some time each grain will have visited all regions of the box. Figure 1 presents the temporal evolution of the system for the two different initial configurations specified above. In figure 1a the system is already mixed and in 1b the particles are initially separated. As one can see in this figure, the system does not collapse and the grains are homogeneously distributed in the box. To analyse the dynamics of the mixture we measure the quantity $N_{s,b}(t)$, defined as the number of collisions between $s$ and $b$ grains per unit time. The evolution of $N_{s,b}(t)$ with time gives two important results. For the initial configuration corresponding to figure 1a, the quantity $N_{s,b}(t)$ fluctuates around a mean value, $N_{s,b}(\infty)$, as seen in figure 2. This means that the system has reached a steady state in which the mean square velocity of the particles is constant and equal to $v^2(\infty)$. Note that the system evolves very quickly into this steady state. We have checked that all the configurations at different times $t$ are statistically identical and that the system remains homogeneous. There is no evidence of collapse or cluster formation. The second observation is that for the initial configuration corresponding to figure 1b, the quantity $N_{s,b}(t)$ increases and then stabilizes at large time at the value $N_{s,b}(\infty)$ defined above. However, the mean square velocity of the grains, $v^2(t)$, reaches the steady state value $v^2(\infty)$ much more quickly, since the grains are identical. The knowledge of the mean square velocity is not sufficient to define the state of the system since it gives no information about the spatial distribution of the two species. $N_{s,b}(t)$ is therefore the only pertinent quantity to characterize the homogeneity of the system.
A few more comments about $N_{s,b}(t)$ are in order. In the system studied above, the packing fraction and the velocity distribution are spatially homogeneous and constant in time (except at very short time). As a consequence the quantity $N_{s,b}(t)$ depends only on the spatial distribution of the two types of grains. The evolution of $N_{s,b}(t)$ allows us to define a mixing time, $\tau_{mix}$. We have already mentioned that the velocity is the same for all particles and independent of position. This is also true for the local density. We conclude that the frequency of collisions is also the same for all particles and is space independent. In this monodisperse case, the dynamics are purely diffusive and can be characterized by a coefficient of diffusion $D$ which is independent of the horizontal spatial position $x$. Let us call $\delta N_{s,b}(x,t)$ the number of collisions between the two species occurring at a position between $x$ and $x+dx$ at time $t$. Clearly, $\delta N_{s,b}(x,t)$ is directly proportional to $d_s(x,t)$ and $d_b(x,t)$, the densities of $s$ and $b$ grains at position $x$. The densities $d_s$ and $d_b$ do not depend on the vertical position since the system is invariant along this direction. We will define $d_0$ as the total density and can thus write $d_0=d_s(x,t)+d_b(x,t)$. $d_0$ is of course independent of $x$ because the system remains homogeneous. We then obtain an expression for $N_{s,b}(t)$:
$$N_{s,b}(t)\propto\int_0^L d_b(x,t)\left(d_0-d_b(x,t)\right)dx.$$
(9)
The density of big particles at $`(x,t)`$, $`d_b(x,t)`$, is described by Fick’s equation,
$$\frac{\partial d_b(x,t)}{\partial t}=D\frac{\partial^2 d_b(x,t)}{\partial x^2},$$
(10)
with the following boundary conditions:
$$\{\begin{array}{cc}d_b(x)=d_0\hfill & \text{for }0\le x<L/2\text{ and }t=0,\hfill \\ d_b(x)=0\hfill & \text{for }L/2\le x\le L\text{ and }t=0,\hfill \\ d_b(x)=d_0/2\hfill & \text{for all }x\text{ and }t\to\infty.\hfill \end{array}$$
(11)
We assume the solution of Eq. (10) has the form:
$$d_b(x,t)=\sum_{m=0}^{\infty}\left(B_m\sin(\lambda_m x)+A_m\cos(\lambda_m x)\right)\exp(-\lambda_m^2Dt)+\frac{d_0}{2}$$
(12)
where the $`\lambda _m`$ are constants. Using the conditions Eq. (11) gives:
$$d_b(x,t)=\sum_{k=0}^{\infty}a_k\cos(\lambda_k x)\exp(-\lambda_k^2Dt)+\frac{d_0}{2}$$
(13)
with
$$\begin{array}{c}a_k=\frac{2d_0(-1)^k}{\pi(2k+1)},\hfill \\ \lambda_k=\frac{(2k+1)\pi}{L}.\hfill \end{array}$$
(14)
Eq. (9) then gives the final expression for $`N_{s,b}(t)`$,
$$\begin{array}{c}N_{s,b}(t)\propto\frac{d_0^2L}{4}\left(1-\sum_{k=0}^{\infty}\frac{8}{\pi^2(2k+1)^2}\exp(-2\lambda_k^2Dt)\right),\\ \lambda_k^2=\frac{(2k+1)^2\pi^2}{L^2}.\end{array}$$
(15)
As a first approximation, we may keep only the first mode and write:
$$\begin{array}{c}N_{s,b}(t)\simeq N_{s,b}(\infty)\left(1-\exp(-t/\tau_{mix})\right),\\ \tau_{mix}=\frac{L^2}{2\pi^2D},\end{array}$$
(16)
where $`\tau _{mix}`$ can be taken as the typical time for mixing.
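For illustration, Eqs. (15) and (16) can be evaluated as follows (a short Python sketch with hypothetical function names; the overall proportionality constant $d_0^2L/4$ is dropped):

```python
import numpy as np

def n_sb(t, D, L, kmax=50):
    """Normalized collision rate N_sb(t)/N_sb(inf) from Eq. (15)."""
    k = np.arange(kmax)
    lam2 = (2 * k + 1) ** 2 * np.pi ** 2 / L ** 2
    series = 8.0 / (np.pi ** 2 * (2 * k + 1) ** 2) * np.exp(-2 * lam2 * D * t)
    return 1.0 - series.sum()

def tau_mix(D, L):
    """Mixing time of Eq. (16), first Fourier mode only."""
    return L ** 2 / (2 * np.pi ** 2 * D)
```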
To check the validity of the theoretical expressions for $N_{s,b}(t)$ and $\tau_{mix}$ established above, we have performed simulations for different values of $x_s$ and $C$. For a given set of parameters, we have performed five simulations corresponding to different initial positions and velocities of the particles for the case where the $s$ and $b$ grains are initially separated. Figure 3 shows $N_{s,b}(t)$ versus $t$ averaged over the 5 simulations. We see that the agreement between theory and numerical simulation is very good. Figure 4 shows the dependence of the mixing time, $\tau_{mix}$, on the coefficient of diffusion $D$. To get this, we performed several simulations changing the packing fraction and the radius of the particles in order to vary the coefficient of diffusion. Note that $D$ was estimated using Eq. (7). The slope of the curve is exactly that predicted by the theory. As the coefficient $D$ can be calculated from the parameters of the system ($R_s=R_b$, $\eta_0^2$, $C$), we can estimate analytically and with high accuracy the mixing time. We have also verified the dependence of $\tau_{mix}$ on $L^2$, Eq. (16), and have found very good agreement there too.
## IV Bidisperse case
We now discuss the case of a binary mixture. We will see that the size difference between the grains drastically changes the dynamics of the system. The grains $s$ and $b$ are taken to have equal density and identical coefficients of restitution and friction, and we take $R_s<R_b$.
We first present the case where the system is thermalized uniformly, i.e. $F_i^t$ does not depend on the position of the grain. Then we will examine the case where a gradient is imposed on the agitation force.
### A Case of homogeneous agitation
We will show that, in the bidisperse case, the system also evolves into a homogeneous steady state. We will see here that the form of the thermalization force and the initial conditions determine the evolution of the system towards the steady state. In the simulations the packing fraction is fixed at $40\%$ and $x_s$, which represents the relative proportion of small grains (Eq. (8)), is the only parameter to be varied.
#### 1 Evolution at short time
Figure 5 shows the evolution of the system with time $`t`$. In the initial configuration, the two species are separated and the two populations have the same initial velocity distribution and therefore $`v_s^2=v_b^2`$. The initial local packing fraction, as one can see in figure 5, is the same in the whole box.
Recall that in our simulations the surface occupied by a particle in the plane is proportional to $R^2$ and its mass to $R^3$, since the particles are spherical. Since the pressure is $P\propto m\,d\,v^2$, where $d$ is the density of grains, the initial pressure is larger for the bigger particles. The system therefore has an initial pressure difference which will govern its behaviour immediately after the partition is removed, whereby the bigger particles, $b$, compress the small ones, $s$. As $t$ increases (see figure 5), the density of the $b$ particles decreases and so does their pressure. On the other hand, the density and pressure of the $s$ particles increase. During this compression period we can consider the system as two interacting monodisperse systems. In the left part of the box (occupied by the larger particles), as the density decreases, the mean free path $l$ increases. We have seen in section II A (see Eq. (6)) that $v^2$ increases with $l$. The velocity $v_b^2$ is then increasing with time. For the same reason, in the right part of the box, $v_s^2$ is decreasing. As a consequence, the pressure, which is proportional to the product of the square velocity and the density, is maintained almost constant in each subsystem. The pressure difference between the two subsystems therefore remains important and favors the compression of small particles. The packing fraction of the $s$ grains increases up to a value around $68\%$.
It is worth noting that if the walls were inelastic, collapse would occur, whereby the small grains would be squeezed near the wall and would lose all their energy due to dissipation. In our simulations we use elastic walls and thus observe a reflection of the compression wave. To illustrate this we show in figure 6 the packing fraction of the $s$ grains as a function of $t$ and $x$. We observe a compression wave which traverses the system. On average, the small particles remain compressed and the big ones dilute. During this process the diffusion between the big and the small particles is very efficient due to the high concentration gradient. The evolution of the quantities $v_s^2(t)$ and $v_b^2(t)$ as a function of time is illustrated in figure 7. One can see that the velocity of small particles decreases at short time and then increases when the mixing process starts. At long time the mean square velocities of both species reach a constant value corresponding to a steady state.
#### 2 Mixing time
After the compression phase, the system starts to mix. As we have done for the monodisperse case, we examine the quantity $N_{s,b}(t)$. To have a good estimate of $N_{s,b}(t)$, we take (as in the previous section) the mean value obtained over 5 simulations. Figure 8 shows that we can approximate $N_{s,b}(t)$ by $N_{s,b}(\infty)\left(1-\exp(-t/\tau_{mix})\right)$. Note that the compression phase occurs during a short time compared to $\tau_{mix}$. We obtain in this way $\tau_{mix}$ for different values of $x_s$ for $R_s=0.4$ and $R_b=0.6$. It appears that this time $\tau_{mix}$ can be considered as independent of $x_s$ (see Figure 9). This mixing time obtained in a bidisperse system is smaller than that obtained for the same packing fraction in the monodisperse case.
#### 3 Steady state
Even though the collisions are dissipative, the system does reach an out of equilibrium stationary state due to the random agitation force. This stationary state should be characterized by macroscopic functions which should be independent of time.
Using thermalized configurations (long evolution times), we performed a geometrical analysis using Voronoï tessellation to check that no segregation exists. We calculated the number of $`s`$ neighbours for a $`b`$ particle and found that the distributions of neighbours are roughly identical to the distributions obtained from static configurations generated by an R.S.A. algorithm. We should point out that the distances between particles can be different from the static case but the neighbourhood of a grain is the same in the dynamic and static situations. This demonstrates that there is no segregation.
We now consider the distribution of the kinetic energy as a function of the radius of the grains. In the case of elastic collisions, we can define a kinetic temperature $T$ even in the polydisperse case. In a forced inelastic system the distribution of energy seems to be very different and depends on the type of forcing used.
In our system, the mean square velocity $`v_i^2`$ of particle $`i`$ depends on its mass $`m_i`$ and also on the proportion of all species $`j`$ and their masses $`m_j`$. In all cases $`v_i^2`$ is constant at large time. Energy balance in a bidisperse system means that the agitation energy per unit time for a given species equals the energy lost in collisions with particles from all species. This can be written as follows:
$$\{\begin{array}{c}P(m_s,m_s)w_{ss}m_sv_s^2+P(m_s,m_b)w_{sb}m_sv_s^2=m_s\eta _0^2,\hfill \\ P(m_b,m_b)w_{bb}m_bv_b^2+P(m_b,m_s)w_{bs}m_bv_b^2=m_b\eta _0^2,\hfill \end{array}$$
(17)
where $`P(m_i,m_j)`$ is the mean relative loss of kinetic energy by particle $`i`$ when particles $`i`$ and $`j`$ collide. Clearly, this term depends both on mass and relative velocity. We also expect that $`P(m_s,m_s)`$ should be equal to $`P(m_b,m_b)`$. Therefore the quantities $`P(m_s,m_s)`$ and $`P(m_b,m_b)`$ calculated in a bidisperse system must be the same as those obtained for the monodisperse case. This term $`P(m_i,m_i)`$ is therefore only a function of coefficients of restitution and friction. The term $`w_{ij}`$ represents the frequency of collisions of the grains $`i`$ with $`j`$ grains. We note that $`n_sw_{sb}=n_bw_{bs}`$. Using Enskog theory , the frequencies of collisions of the grains in our two-dimensional system are given by:
$$\{\begin{array}{c}w_{ss}=\chi\sqrt{\pi}(2R_s)\frac{Cx_s}{\pi R_s^2}\sqrt{2v_s^2},\hfill \\ w_{sb}=\chi\sqrt{\pi}(R_s+R_b)\frac{C(1-x_s)}{\pi R_b^2}\sqrt{v_s^2+v_b^2},\hfill \\ w_{bb}=\chi\sqrt{\pi}(2R_b)\frac{C(1-x_s)}{\pi R_b^2}\sqrt{2v_b^2},\hfill \\ w_{bs}=\chi\sqrt{\pi}(R_s+R_b)\frac{Cx_s}{\pi R_s^2}\sqrt{v_s^2+v_b^2}.\hfill \end{array}$$
(18)
$\chi$ is a correction factor and corresponds to the local radial distribution around a particle. We have previously shown that $\chi$ does not depend on the type of particles but only on the packing fraction.
The two limiting cases, $x_s=0$ and $x_s=1$, correspond to monodisperse situations with $R=R_b$ and $R=R_s$, respectively. In these two cases, we have determined numerically the four parameters $v_s^2(x_s=0)$, $v_s^2(x_s=1)$, $v_b^2(x_s=0)$ and $v_b^2(x_s=1)$ by simulating a particle of radius $R_i$ in a sea of particles of radius $R_j$. Using Eq. (17), we can calculate for these limiting values of $x_s$ the four parameters $P(m_i,m_j)$. We have verified that $P(m_i,m_i)$ is independent of the type of particle. We have found that $P(m_s,m_s)=P(m_b,m_b)=0.145$, $P(m_b,m_s)=0.229$ (at $x_s=1$) and $P(m_s,m_b)=0.066$ (at $x_s=0$).
To a first approximation, we consider the $`P(m_i,m_j)`$ to be independent of the relative velocity. We compare in figure 10 the values of $`v_s^2`$ and $`v_b^2`$ obtained from the numerical simulations and those deduced from Eq. (17). These values were calculated for different $`x_s`$ at a packing fraction of $`40\%`$. The dashed lines correspond to the theoretical values and the symbols to the simulations. The agreement between simulations and theory is quite good, in particular for the big particles.
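A sketch of this first approach (Python, assuming SciPy's fsolve; the constant loss coefficients are the limiting values quoted above, and all names are our own), solving Eqs. (17) and (18) for the two mean square velocities:

```python
import numpy as np
from scipy.optimize import fsolve

def steady_velocities(eta0_sq, C, xs, Rs, Rb, chi,
                      P_ss=0.145, P_bb=0.145, P_sb=0.066, P_bs=0.229):
    """Solve the energy balance, Eqs. (17)-(18), for v_s^2 and v_b^2.

    First approach of the text: the relative energy losses P(m_i, m_j)
    are treated as constants. chi is the Enskog correction factor.
    """
    def balance(v2):
        vs2, vb2 = v2
        pref = chi * np.sqrt(np.pi) * C
        w_ss = pref * 2 * Rs * xs / (np.pi * Rs**2) * np.sqrt(2 * vs2)
        w_sb = pref * (Rs + Rb) * (1 - xs) / (np.pi * Rb**2) * np.sqrt(vs2 + vb2)
        w_bb = pref * 2 * Rb * (1 - xs) / (np.pi * Rb**2) * np.sqrt(2 * vb2)
        w_bs = pref * (Rs + Rb) * xs / (np.pi * Rs**2) * np.sqrt(vs2 + vb2)
        # Eq. (17) divided by the particle mass on each line
        return [P_ss * w_ss * vs2 + P_sb * w_sb * vs2 - eta0_sq,
                P_bb * w_bb * vb2 + P_bs * w_bs * vb2 - eta0_sq]

    return fsolve(balance, x0=[1.0, 1.0])
```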
However, the energy lost in a collision does depend on the relative velocity, and therefore on $v_s^2$ and $v_b^2$. To treat this correctly in Eq. (17), we show in figure 11 that both the kinetic energy of the system and $v_s^2/v_b^2$ decrease linearly with $x_s$. Recalling that $x_s$ lies in the interval $[0,1]$, we can thus write:
$$\{\begin{array}{c}\frac{Cx_s}{\pi R_s^2}m_sv_s^2(x_s)+\frac{C(1-x_s)}{\pi R_b^2}m_bv_b^2(x_s)=\frac{C}{\pi R_b^2}m_bv_b^2(x_s=0)+\left[\frac{C}{\pi R_s^2}m_sv_s^2(x_s=1)-\frac{C}{\pi R_b^2}m_bv_b^2(x_s=0)\right]x_s,\hfill \\ \frac{v_s^2(x_s)}{v_b^2(x_s)}=\frac{v_s^2(x_s=0)}{v_b^2(x_s=0)}+\left[\frac{v_s^2(x_s=1)}{v_b^2(x_s=1)}-\frac{v_s^2(x_s=0)}{v_b^2(x_s=0)}\right]x_s.\hfill \end{array}$$
(19)
Using the four equations (19) and (17), we can calculate directly $v_s^2$, $v_b^2$, $P(m_s,m_b)$ and $P(m_b,m_s)$ as a function of $x_s$. Note that different values of $x_s$ correspond to different values of the ratio $v_s^2/v_b^2$. The solid lines in figure 10 correspond to the theoretical velocities squared calculated with this approach. The agreement with simulations is now very good. The values of $P(m_s,m_b)$ and $P(m_b,m_s)$ from the solution of our four equations are shown in figure 12. The values of $P(m_s,m_b)$ for $x_s$ near 1 and $P(m_b,m_s)$ for $x_s$ near $0$ should be taken with caution, because their weight in the energy balance (Eq. (17)) is negligible for $x_s\to 1$ and $x_s\to 0$. Indeed the energy loss in these limiting cases corresponds to rare collisions.
Note that the approximation of $`P(m_b,m_s)`$ by a constant (as done in the first approach) is fairly good. However, the value of $`P(m_s,m_b)`$ varies significantly with $`x_s`$. This explains the slight difference between simulation and theory in the first approach. Finally we see that at large $`x_s`$ (corresponding to small ratio $`v_s^2/v_b^2`$) the small particles gain energy in collisions with big particles. This phenomenon occurs only in dissipative forced gases where $`v^2`$ is no longer proportional to $`1/m`$.
### B Non-uniform agitation
We will now treat the case of non-uniform agitation. To do this we use exactly the same algorithm but introduce a gradient in the agitation by imposing the following spatial dependence for $`\eta _0^2`$:
$$\eta _0^2(x)=a+bx,$$
(20)
where $`a`$ and $`b`$ are constants. The initial configuration of the system is taken to be the stationary state found in the case of homogeneous agitation. The gradient of the agitation is chosen such that the mean value of $`\eta _0^2(x)`$ over the whole system corresponds to the agitation of the initial state. The simulations show that a concentration gradient appears in the system. The system reaches a non-uniform steady state where the density gradient remains present in the course of time. Note, however, that if we follow the motion of a particle, we find that it does visit the whole system. All the previous relations and equations are still valid in this case but one should consider a local “equilibrium”. Figure 13 represents a typical configuration obtained at large time. The hot agitation is on the right side of the system. In the stationary state, the gradient of concentration balances the gradient of agitation such that the pressure is homogeneous throughout the system.
The other main observation is that segregation appears in the system. The local proportion of $s$ and $b$ grains is no longer the initial one. The big particles are more sensitive to the gradient of agitation than the small ones. We show in figure 14 the local packing fraction for both species as a function of the position $x$ in the system. We have found in all cases we have investigated so far that the stationary state always exhibits segregation. Our numerical results on segregation (the big particles are more concentrated in the colder region) are in complete agreement with the theoretical calculations based on the granular kinetic theory.
## V Conclusion
The main purpose of this paper is to investigate granular mixtures numerically. We used an algorithm which keeps the dissipative particles agitated by applying to the grains an external random acceleration independent of their mass. In the case where the external acceleration is independent of the position of the particles (homogeneous agitation) we have shown that the system reaches a well mixed homogeneous stationary state even in the bidisperse case. We have therefore shown that agitation, and therefore diffusion, is an efficient mechanism for mixing. We have established a theoretical relation between the typical time of mixing, $\tau_{mix}$, and the number $N_{s,b}$ of collisions between $s$ and $b$ grains per unit of time which is valid for monodisperse as well as bidisperse systems. For the monodisperse case we have given the exact expression of $\tau_{mix}$ as a function of the diffusion coefficient. For the bidisperse case we have found that $\tau_{mix}$ depends strongly on the initial configuration of the system. We have characterized the steady state reached by bidisperse assemblies and in particular we have established energetic relations which allow us to evaluate the square velocities of the grains as a function of $x_s$ (the relative proportion of both species).
In addition we have investigated the case where there is a gradient of agitation through the system and have shown that segregation appears. The main cause of this segregation is not related directly to the nature of the grains (size, dissipation, roughness) but originates from the presence of a temperature gradient. In many mixing experiments (rotating drum, vibrated system) there often exists a gradient of agitation, which then leads to a segregation process. Of course there are other types of segregation mechanisms which are sometimes more efficient, but this one is in some sense intrinsic.
Acknowledgments
This work was partially funded by the CNRS Programme International de Cooperation Scientifique PICS $\#753$. C. H. thanks Alexandre Valance for his support during this work and his help with the writing of this paper.
# Neutrino survival probabilities in magnetic fields
## I INTRODUCTION
In the past few years neutrino physics has generated tremendous interest and has been looked upon as a window to physics beyond the standard model. Recent results from Super-Kamiokande have boosted this search for new physics. Both the solar and atmospheric neutrino anomalies agree well with the neutrino oscillation hypothesis (for fits to the solar neutrino data see Ref.), which requires neutrinos to be massive. For recent reviews see, e.g., Ref.. However, it should be kept in mind that neutrino flavour oscillations are not the only possibility to describe the data. It may be that massive neutrinos have non-zero magnetic moments (MM) and electric dipole moments (EDM), which could play an important role in the solar neutrino puzzle if large magnetic fields are present in the interior of the sun. In this framework, the attractive scenario of resonant spin–flavour transitions has received attention and good fits to the solar neutrino data have been obtained (for recent fits see Refs. and for reviews see Ref.). In solutions of the solar neutrino puzzle with non-zero MMs and EDMs, it is assumed that the solar neutrinos interact with the magnetic field in the sun to produce right-handed neutrinos via a helicity flip. Right-handed neutrinos are sterile in the case of Dirac neutrinos and behave like antineutrinos in the Majorana case. In the latter case, a flavour transition simultaneous with the helicity flip produces the suppression in solar neutrino detection with elastic neutrino–electron scattering.
So far, it is not known whether neutrinos possess MMs and/or EDMs. The most stringent laboratory bounds come from elastic neutrino–electron scattering: for $\bar{\nu}_e$ reactor neutrinos the limit is $1.8\times 10^{-10}\mu_B$, whereas for $\overset{(-)}{\nu}_\mu$ the limit is $7.4\times 10^{-10}\mu_B$, where $\mu_B$ is the Bohr magneton. More stringent limits are obtained with astrophysical considerations. For a collection of references on limits on neutrino MMs see also Ref..
In this paper we focus on neutrino survival probabilities. Apart from the three active neutrino flavours ($\nu_\alpha$ with $\alpha=e,\mu,\tau$), we allow an arbitrary number of additional neutrinos of the sterile type ($\nu_s$). In vacuum, the survival probabilities of left-handed and right-handed (anti)neutrinos are equal:
$$P^D(\nu_{\alpha L}\to\nu_{\alpha L})=P^D(\bar{\nu}_{\alpha R}\to\bar{\nu}_{\alpha R})\quad\text{and}\quad P^M(\nu_{\alpha L}\to\nu_{\alpha L})=P^M(\nu_{\alpha R}\to\nu_{\alpha R}),$$
(1)
where the superscripts $D$ and $M$ refer to Dirac and Majorana neutrinos, respectively. These equalities are not valid if matter effects become important, but matter effects in neutrino oscillations do not distinguish between the Dirac and Majorana nature. Here we concentrate on the situation that matter effects are negligible but the MM and EDM interaction of neutrinos with magnetic fields becomes important. Thus our discussion does not apply to solar neutrinos. We will show that for constant magnetic fields the equality (1) remains valid for Majorana neutrinos, whereas in general it is lost for Dirac neutrinos. This is to be contrasted with the matter effects. We will also consider a situation where the magnetic field depends on $\vec{x}$ in which this equality for Majorana neutrinos persists. Though our discussion will be rather formal, we hope that it can shed some light on a possible distinction between Dirac and Majorana neutrinos. Up to now such efforts have mainly concentrated on neutrinoless double beta decay.
Our paper is organized as follows. In Section II, we discuss the general formalism for an oscillating neutrino (both, left and right-handed neutrinos) propagating in an external electromagnetic field, in the absence of matter effects, and prove the equality (1) for Majorana neutrinos in a constant magnetic field. In Section III we study some conditions where equality (1) also holds for Dirac neutrinos, but we show that it is violated in the general Dirac case. In Section IV we consider non-constant magnetic fields and in Section V we summarize our results.
## II GENERAL FORMALISM
In the following we will work with $`n`$ neutrino flavours or types. We will take into account neutrino mixing and electromagnetic interactions through MMs and EDMs but we will not consider matter effects.
### A DIRAC CASE
Let us assume that the neutrino MMs, EDMs and transition MMs and EDMs are given for the neutrino mass eigenfields $`\nu _j`$. Then the MM and EDM interaction of the Dirac neutrinos is expressed by the Hamiltonian density
$$\mathcal{H}_{\mathrm{em}}^D=\frac{1}{2}\overline{\nu}(\mu+id\gamma_5)\sigma_{\alpha\beta}F^{\alpha\beta}\nu\quad\text{with}\quad\mu^{\dagger}=\mu,\;d^{\dagger}=d$$
(2)
being the magnetic moment and electric dipole moment matrices, respectively. $`F^{\alpha \beta }`$ is the antisymmetric electromagnetic field tensor. Whereas $`\nu _j`$ denotes the fields in the mass basis, we denote the chiral fields in the flavour basis where the charged lepton matrix is diagonal by $`\nu _L`$, $`\nu _R`$. The mass matrix $`M`$ in the neutrino mass term
$$\mathcal{L}_m^D=-\overline{\nu}_RM\nu_L+\text{h.c.}$$
(3)
is bidiagonalized by
$$U_R^{\dagger}MU_L=\widehat{M}$$
(4)
such that
$$\nu_{\alpha L}=\sum_j U_{L\alpha j}\nu_{jL}\quad\text{and}\quad\nu_{\alpha R}=\sum_j U_{R\alpha j}\nu_{jR},$$
(5)
where the indices $`\alpha `$ denote the neutrino flavours or types. The diagonalizing matrix $`U_L`$ is the usual neutrino mixing matrix.
For massive Dirac neutrinos with mixing, the Hamiltonian in the *flavour basis* describing the neutrino interacting with a magnetic field is given as
$$H_\nu^D=\left(\begin{array}{cc}\frac{1}{2E_\nu}U_L\widehat{M}^2U_L^{\dagger}& B_+U_L(\mu+id)U_R^{\dagger}\\ B_{-}U_R(\mu-id)U_L^{\dagger}& \frac{1}{2E_\nu}U_R\widehat{M}^2U_R^{\dagger}\end{array}\right),$$
(6)
where the upper half of the matrix $`H_\nu ^D`$ corresponds to negative helicity while the lower half corresponds to positive helicity. Assuming that the neutrino is propagating along the $`z`$ direction, the magnetic fields $`B_\pm `$ are defined as
$$B_\pm =B_x\pm iB_y=Be^{\pm i\beta }\text{with}B=\sqrt{B_x^2+B_y^2}.$$
(7)
In the approximation we are working with, the longitudinal magnetic field is negligible. The neutrino energy is denoted by $`E_\nu `$. Note that for the right-handed Dirac neutrino states there is no preferred flavour basis because these states do not couple to the charged leptons. The Hamiltonian matrix $`H_{\overline{\nu }}^D`$ for Dirac antineutrinos is obtained by making the replacements
$$U_{L,R}\to U_{L,R}^{*},\quad\mu\to-\mu^T=-\mu^{*},\quad d\to-d^T=-d^{*}$$
(8)
in the matrix (6). The superscript $*$ on the MM and EDM matrices indicates complex conjugation of all elements of the matrices. The upper half of $H_{\overline{\nu}}^D$ maps onto positive and the lower half onto negative helicity.
The Hamiltonian (6) is re-expressed in the mass basis as $`\stackrel{~}{H}_\nu `$, where
$$\stackrel{~}{H}_\nu =\left(\begin{array}{cc}\frac{1}{2E_\nu }\widehat{M}^2& B_+(\mu +id)\\ B_{}(\mu id)& \frac{1}{2E_\nu }\widehat{M}^2\end{array}\right),$$
(9)
and the corresponding Hamiltonian for antineutrinos is given by
$$\stackrel{~}{H}_{\overline{\nu}}=\left(\begin{array}{cc}\frac{1}{2E_\nu}\widehat{M}^2& -B_+(\mu^{*}+id^{*})\\ -B_{-}(\mu^{*}-id^{*})& \frac{1}{2E_\nu}\widehat{M}^2\end{array}\right).$$
(10)
In order to obtain the survival probabilities of neutrinos (which have negative helicity) and antineutrinos (which have positive helicity), one needs to diagonalize the matrices $`\stackrel{~}{H}_\nu `$ and $`\stackrel{~}{H}_{\overline{\nu }}`$, respectively. Observing that
$$J^{\dagger}\stackrel{~}{H}_{\overline{\nu}}J=\stackrel{~}{H}_\nu^{*}\quad\text{with}\quad J=\left(\begin{array}{cc}\hfill 0& \hfill \mathrm{𝟏}\\ \hfill -\mathrm{𝟏}& \hfill 0\end{array}\right),$$
(11)
where $\mathrm{𝟏}$ is the $n\times n$ unit matrix, we find that $H_\nu^D$ and $H_{\overline{\nu}}^D$ have the same eigenvalues $E_1,\dots,E_{2n}$. Furthermore, there are unitary matrices $W_\nu$ and $W_{\overline{\nu}}$ such that
$$W_\nu^{\dagger}\stackrel{~}{H}_\nu W_\nu=W_{\overline{\nu}}^{\dagger}\stackrel{~}{H}_{\overline{\nu}}W_{\overline{\nu}}=\mathrm{diag}(E_1,E_2,\dots,E_{2n}),$$
(12)
which, according to Eq.(11), are related by
$$W_{\overline{\nu}}=JW_\nu^{*}.$$
(13)
The survival probabilities for Dirac neutrinos (helicity $=-1$) and Dirac antineutrinos (helicity $=+1$) are then given as
$$P^D(\nu_{\alpha L}\to\nu_{\alpha L})=\left|\sum_{j=1}^{2n}\left|U_{\alpha j}^\nu\right|^2e^{-iE_jL}\right|^2$$
(14)
and
$$P^D(\overline{\nu}_{\alpha R}\to\overline{\nu}_{\alpha R})=\left|\sum_{j=1}^{2n}\left|U_{\alpha j}^{\overline{\nu}}\right|^2e^{-iE_jL}\right|^2,$$
(15)
respectively. In the above probability expressions, $L$ is the distance between neutrino source and detector. From the $2n\times 2n$ matrices $U^\nu$ and $U^{\overline{\nu}}$ diagonalizing $H_\nu^D$ and $H_{\overline{\nu}}^D$, respectively, we only need the first $n$ lines labelled by the neutrino flavours or types, which can be expressed by $U_L$ and $W\equiv W_\nu$:
$$U_{\alpha j}^\nu=\sum_{k=1}^{n}U_{L\alpha k}W_{kj},\qquad U_{\alpha j}^{\overline{\nu}}=\sum_{k=1}^{n}U_{L\alpha k}^{*}W_{n+k\,j}^{*},\qquad j=1,\dots,2n.$$
(16)
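As an illustration of how Eqs. (9), (14) and (16) combine in practice, the following Python sketch (with our own naming and natural units assumed) computes a Dirac survival probability numerically:

```python
import numpy as np

def dirac_survival(alpha, L, E_nu, Mhat2, mu, d, Bp, UL):
    """P(nu_{alpha L} -> nu_{alpha L}) from Eqs. (9), (14) and (16).

    Mhat2 : diagonal array of squared masses; mu, d : Hermitian MM/EDM
    matrices in the mass basis; Bp = B_x + i*B_y; UL : mixing matrix.
    """
    n = len(Mhat2)
    H = np.zeros((2 * n, 2 * n), dtype=complex)
    H[:n, :n] = H[n:, n:] = np.diag(Mhat2) / (2 * E_nu)
    H[:n, n:] = Bp * (mu + 1j * d)
    H[n:, :n] = np.conj(Bp) * (mu - 1j * d)
    energies, W = np.linalg.eigh(H)         # diagonalization, Eq. (12)
    U_row = UL[alpha, :] @ W[:n, :]         # flavour row of U^nu, Eq. (16)
    amp = np.sum(np.abs(U_row) ** 2 * np.exp(-1j * energies * L))
    return np.abs(amp) ** 2
```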
### B MAJORANA CASE
For Majorana neutrinos we start with the same electromagnetic Hamiltonian density (2) as in the Dirac case, except that the factor $`1/2`$ is replaced by $`1/4`$ to account for the charge conjugation property $`(\nu _j)^c=\nu _j`$ of the Majorana fields. Then the Hamiltonian $`H_M`$, corresponding to $`H_\nu ^D`$, is given by
$$H_M=\left(\begin{array}{cc}\frac{1}{2E_\nu}U_L\widehat{M}^2U_L^{\dagger}& B_+U_L(\mu+id)U_L^T\\ B_{-}U_L^{*}(\mu-id)U_L^{\dagger}& \frac{1}{2E_\nu}U_L^{*}\widehat{M}^2U_L^T\end{array}\right).$$
(17)
Note that $U_R$ in $H_\nu^D$ (6) is replaced by $U_L^{*}$ in the Majorana case. Furthermore, we have to keep in mind that the matrices $\mu$ and $d$ are antisymmetric, i.e.,
$$\mu^T=\mu^{*}=-\mu,\qquad d^T=d^{*}=-d$$
(18)
in $`H_M`$. This equation formulates the well known properties of Majorana neutrinos that only transition moments are non-zero and that $`\mu `$ and $`d`$ are purely imaginary. The latter property allows to check immediately that
$$J^{\dagger}H_MJ=H_M^{*}$$
(19)
holds (compare with Eq.(11)).
Denoting the diagonalizing $`2n\times 2n`$ unitary matrix of $`H_M`$ by $`U_M`$, the survival probabilities of Majorana neutrinos with negative and positive helicities are given by
$$P^M(\nu_{\alpha L}\to\nu_{\alpha L})=\left|\sum_{j=1}^{2n}\left|U_{M-\alpha j}\right|^2e^{-iE_jL}\right|^2$$
(20)
and
$$P^M(\nu_{\alpha R}\to\nu_{\alpha R})=\left|\sum_{j=1}^{2n}\left|U_{M+\alpha j}\right|^2e^{-iE_jL}\right|^2,$$
(21)
respectively. The subscript $-$ of $U_M$ in Eq.(20) indicates that the index $\alpha$ refers to the first $n$ lines of this matrix (negative helicity), whereas in Eq.(21) the subscript $+$ indicates the use of the last $n$ lines (positive helicity).
The relation (19) allows us to be more specific with respect to the matrix $U_M$.
Lemma: The matrix $`U_M`$ which diagonalizes $`H_M`$ can be chosen to be of the form
$U_M=\left(\begin{array}{cc}A& B^{*}\\ B& -A^{*}\end{array}\right).$
Furthermore, diagonalizing $`H_M`$ with this $`U_M`$ we obtain
$E_j=E_{n+j}$ for $j=1,\dots,n$.
Proof of the lemma: Suppose we have an eigenvector $`\psi `$ of $`H_M`$, i.e.,
$$H_M\psi =E\psi .$$
(22)
Then with Eq.(19) we find that
$$H_MJ\psi^{*}=J\left(J^{\dagger}H_MJ\right)\psi^{*}=JH_M^{*}\psi^{*}=EJ\psi^{*}\quad\text{and}\quad J\psi^{*}\perp\psi.$$
(23)
This observation allows to construct an orthonormal basis of eigenvectors of $H_M$ in the following way. Starting with an eigenvector $\psi_1$ normalized to unit length and with eigenvalue $E_1$, then $J\psi_1^{*}$ is orthogonal to $\psi_1$ and has the same eigenvalue. Next we can find a normalized eigenvector $\psi_2$ with eigenvalue $E_2$ such that $\psi_2\perp\{\psi_1,J\psi_1^{*}\}$. Then it is easy to show that $J\psi_2^{*}$ is orthogonal to all three previously constructed eigenvectors. We continue by finding $\psi_3$ with eigenvalue $E_3$ orthogonal to all four previously constructed eigenvectors and so on. After $n$ steps, two orthonormal systems $\{\psi_j\}_{j=1,\dots,n}$ and $\{J\psi_j^{*}\}_{j=1,\dots,n}$ with the same eigenvalues $E_1,\dots,E_n$ are found, which together form an orthonormal basis of eigenvectors of $H_M$. Thus $U_M$ is given by
$$U_M=\left(\psi_1\,\dots\,\psi_n\;J\psi_1^{*}\,\dots\,J\psi_n^{*}\right),$$
(24)
which is of the form announced in the lemma. $\square$
An immediate consequence of the lemma and of Eqs.(20) and (21) is the following theorem.
Theorem 1: Without matter effects and in a constant magnetic field the survival probabilities for left-handed and right-handed Majorana neutrinos are equal, i.e.,
$P^M(\nu_{\alpha L}\to\nu_{\alpha L})=P^M(\nu_{\alpha R}\to\nu_{\alpha R})=\left|\sum_{j=1}^{n}\left(|A_{\alpha j}|^2+|B_{\alpha j}|^2\right)e^{-iE_jL}\right|^2.$
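Theorem 1 is easy to verify numerically. The following Python sketch (all parameters are arbitrary illustrative numbers) builds a random Majorana Hamiltonian of the form (17), with $\mu$ and $d$ purely imaginary and antisymmetric as in Eq. (18), and checks the equality of the two survival probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)

def majorana_check(n=3, B=0.5, beta=0.2, L=10.0, E_nu=5.0):
    def imag_antisym():
        a = rng.standard_normal((n, n))
        return 1j * (a - a.T)        # purely imaginary, antisymmetric, Eq. (18)

    mu, d = imag_antisym(), imag_antisym()
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    UL, _ = np.linalg.qr(z)          # a random unitary mixing matrix
    Mhat2 = np.sort(rng.uniform(0.0, 1.0, n))
    Bp = B * np.exp(1j * beta)

    H = np.zeros((2 * n, 2 * n), dtype=complex)
    H[:n, :n] = UL @ np.diag(Mhat2) @ UL.conj().T / (2 * E_nu)
    H[n:, n:] = UL.conj() @ np.diag(Mhat2) @ UL.T / (2 * E_nu)
    H[:n, n:] = Bp * UL @ (mu + 1j * d) @ UL.T
    H[n:, :n] = np.conj(Bp) * UL.conj() @ (mu - 1j * d) @ UL.conj().T
    energies, UM = np.linalg.eigh(H)

    phases = np.exp(-1j * energies * L)
    amp_L = (np.abs(UM[:n, :]) ** 2) @ phases   # amplitudes of Eq. (20)
    amp_R = (np.abs(UM[n:, :]) ** 2) @ phases   # amplitudes of Eq. (21)
    return np.allclose(np.abs(amp_L) ** 2, np.abs(amp_R) ** 2)
```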
## III ASYMMETRIES OF SURVIVAL PROBABILITIES
Let us define an asymmetry
$$\mathrm{\Delta}_\alpha^D=P^D(\nu_{\alpha L}\to\nu_{\alpha L})-P^D(\overline{\nu}_{\alpha R}\to\overline{\nu}_{\alpha R})$$
(25)
for every Dirac neutrino flavour or type $`\alpha `$. Note that we have just discussed in Theorem 1 that the corresponding asymmetry for Majorana neutrinos,
$$\mathrm{\Delta}_\alpha^M=P^M(\nu_{\alpha L}\to\nu_{\alpha L})-P^M(\nu_{\alpha R}\to\nu_{\alpha R}),$$
(26)
vanishes with the assumptions in this theorem. If we set $\mu=d=0$ and assume vanishing matter effects, then we have vacuum oscillations and both asymmetries (25) and (26) are zero. Clearly, the asymmetry (25) is not a CP asymmetry due to a KM type of phase. If it is non-zero, it is because the presence of a magnetic field represents a CP-violating situation.
As a passing remark, matter effects would induce such an asymmetry for the same reason: background matter is not CP-invariant or, in other words, neutrinos and antineutrinos interact differently with matter. The matter potentials enter in the diagonal of the Hamiltonian matrices as follows:
$$\begin{array}{cc}\hfill (V_L,0)& \text{for Dirac neutrinos,}\hfill \\ \hfill (-V_L,0)& \text{for Dirac antineutrinos,}\hfill \\ \hfill (V_L,-V_L)& \text{for Majorana neutrinos,}\hfill \end{array}$$
(27)
with the diagonal matrix $V_L=\sqrt{2}G_F\,\mathrm{diag}(N_\alpha)$, where, for ordinary matter, $N_e=n_e-n_n/2$, $N_{\mu,\tau}=-n_n/2$, $N_s=0$ and $n_e$ and $n_n$ are the electron and neutron densities, respectively.
However, there is a fundamental difference between matter effects and the effects of MMs, EDMs and a magnetic field: For $B_\pm=0$, but matter effects becoming important, one has $\mathrm{\Delta}_\alpha^D=\mathrm{\Delta}_\alpha^M\neq 0$ in general, whereas with $V_L=0$ but $B_\pm\neq 0$ and a constant magnetic field one has $\mathrm{\Delta}_\alpha^D\neq 0$ in general, as we will see, but $\mathrm{\Delta}_\alpha^M=0$.
In view of Theorem 1 and in order to elaborate under which conditions Majorana and Dirac neutrinos behave differently, it is necessary to study the asymmetry (25) in more detail. The following theorem describes under which sufficient conditions the asymmetry is still zero.
Theorem 2: For the case of Dirac neutrinos, without matter effects and with a constant magnetic field, if the electric and magnetic moment matrices obey the proportionality $\mu=cd$ (or $d=c\mu$), where $c$ is a real number, then the survival probability of a neutrino with flavour (type) $\alpha$ is equal to the survival probability of the corresponding antineutrino, i.e., the asymmetry (25) is zero.
Corollary: If $`d=0`$ or $`\mu =0`$ or $`n=1`$ the asymmetry (25) is zero.
Proof of the theorem: We define (see Eq.(7))
$$e^{i\beta}(1+ic)\equiv z=|z|e^{i\zeta}.$$
(28)
We can write $`\stackrel{~}{H}_\nu `$ (9) as
$$\stackrel{~}{H}_\nu=\left(\begin{array}{cc}\frac{1}{2E_\nu}\widehat{M}^2& Bz\mu\\ Bz^{*}\mu& \frac{1}{2E_\nu}\widehat{M}^2\end{array}\right).$$
(29)
The phase $`\zeta `$ can be removed by the unitary transformation
$$H_\nu^{\prime}\equiv\left(\begin{array}{cc}\mathrm{𝟏}& 0\\ 0& \mathrm{𝟏}e^{i\zeta}\end{array}\right)\stackrel{~}{H}_\nu\left(\begin{array}{cc}\mathrm{𝟏}& 0\\ 0& \mathrm{𝟏}e^{-i\zeta}\end{array}\right)=\left(\begin{array}{cc}\frac{1}{2E_\nu}\widehat{M}^2& B|z|\mu\\ B|z|\mu& \frac{1}{2E_\nu}\widehat{M}^2\end{array}\right)=\left(\begin{array}{cc}\hfill H_1& \hfill H_2\\ \hfill H_2& \hfill H_1\end{array}\right).$$
(30)
The individual block matrices $`H_1`$ and $`H_2`$ are independent Hermitian matrices.
We consider the eigenvector equations
$(H_1-H_2)X_j$ $=$ $E_jX_j,$ (31)
$(H_1+H_2)Y_j$ $=$ $E_{n+j}Y_j,$ (32)
where $j=1,\dots,n$. The sets $\{X_j\}_{j=1,\dots,n}$ and $\{Y_j\}_{j=1,\dots,n}$ form orthonormal bases of eigenvectors of the Hermitian matrices $H_1-H_2$ and $H_1+H_2$, respectively. Therefore, the structure of $W_\nu$ is given by
$$W_\nu=\frac{1}{\sqrt{2}}\left(\begin{array}{cccccc}X_1& \dots& X_n& Y_1& \dots& Y_n\\ -e^{-i\zeta}X_1& \dots& -e^{-i\zeta}X_n& e^{-i\zeta}Y_1& \dots& e^{-i\zeta}Y_n\end{array}\right)\equiv\left(\begin{array}{cc}C& D\\ -e^{-i\zeta}C& e^{-i\zeta}D\end{array}\right).$$
(33)
Furthermore, using the relations (16) and the expressions (14) and (15) for the Dirac neutrino survival probabilities, Theorem 2 follows. $\square$
Now we want to show that Theorem 2 describes an exceptional situation and that indeed in general the asymmetry $`\mathrm{\Delta }_\alpha ^D`$ is different from zero. It is sufficient to see this in the case of vanishing neutrino masses in $`H_\nu ^D`$ (6) and $`H_{\overline{\nu }}^D`$ (8). Defining a matrix
$$\lambda=\mu-id,$$
(34)
we notice that this is a completely general matrix which can be bidiagonalized with unitary matrices $`R`$ and $`S`$:
$$\lambda=R\widehat{\lambda}S^{\dagger}\quad\text{with}\quad S=(x_1,\dots,x_n),\;R=(y_1,\dots,y_n),$$
(35)
where $\widehat{\lambda}$ is diagonal and positive and $\{x_j\}_{j=1,\dots,n}$ and $\{y_j\}_{j=1,\dots,n}$ are orthonormal bases. Dropping the neutrino masses we get the Hamiltonian matrix
$$H_\nu^D=B\left(\begin{array}{cc}0& e^{i\beta}S\widehat{\lambda}R^{\dagger}\\ e^{-i\beta}R\widehat{\lambda}S^{\dagger}& 0\end{array}\right),$$
(36)
which has the following eigenvectors:
$\varphi_j={\displaystyle\frac{1}{\sqrt{2}}}\left(\begin{array}{c}x_j\\ e^{-i\beta}y_j\end{array}\right)$ with eigenvalue $B\widehat{\lambda}_j,$ (39)
$\psi_j={\displaystyle\frac{1}{\sqrt{2}}}\left(\begin{array}{c}x_j\\ -e^{-i\beta}y_j\end{array}\right)$ with eigenvalue $-B\widehat{\lambda}_j.$ (42)
These eigenvectors of the Hamiltonian matrix (36) form the matrix $`W_\nu `$ (12), and thus with Eqs.(14), (15) and (16) we obtain
$P^D(\nu_{\alpha L}\to\nu_{\alpha L})$ $=$ $\left|\sum_{j=1}^{n}|x_{\alpha j}|^2\cos(B\widehat{\lambda}_jL)\right|^2,$ (43)
$P^D(\overline{\nu}_{\alpha R}\to\overline{\nu}_{\alpha R})$ $=$ $\left|\sum_{j=1}^{n}|y_{\alpha j}|^2\cos(B\widehat{\lambda}_jL)\right|^2.$ (44)
From these expressions it is obvious that in general the probabilities $P^D(\nu_{\alpha L}\to\nu_{\alpha L})$ and $P^D(\overline{\nu}_{\alpha R}\to\overline{\nu}_{\alpha R})$ are different, because the orthonormal bases $\{x_j\}_{j=1,\dots,n}$ and $\{y_j\}_{j=1,\dots,n}$ are independent of each other.
To elaborate this in more detail we consider the case of two neutrino flavours ($`n=2`$). Since we neglect here the neutrino masses all phases in $`\lambda `$ (34) can be removed and the matrices $`S`$ and $`R`$ (35) are characterized by the angles $`\theta `$ and $`\theta ^{}`$, respectively:
$$S=\left(\begin{array}{cc}\hfill \cos\theta& \hfill \sin\theta\\ \hfill -\sin\theta& \hfill \cos\theta\end{array}\right)=(x_1,x_2),\qquad R=\left(\begin{array}{cc}\hfill \cos\theta^{\prime}& \hfill \sin\theta^{\prime}\\ \hfill -\sin\theta^{\prime}& \hfill \cos\theta^{\prime}\end{array}\right)=(y_1,y_2).$$
(45)
Inserting $x_j$ and $y_j$ into the survival probabilities (44), it is evident that $P^D(\nu_{\alpha L}\to\nu_{\alpha L})\neq P^D(\overline{\nu}_{\alpha R}\to\overline{\nu}_{\alpha R})$ holds as long as $\widehat{\lambda}_1\neq\widehat{\lambda}_2$ and $\cos^2\theta\neq\cos^2\theta^{\prime}$.
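For illustration, the asymmetry (25) in this massless two-flavour case can be evaluated directly from Eqs. (44) and (45) (a Python sketch with our own function name):

```python
import numpy as np

def massless_asymmetry(alpha, theta, theta_p, lam1, lam2, B, L):
    """Delta_alpha^D of Eq. (25) in the massless two-flavour case.

    theta, theta_p parametrize S and R of Eq. (45); lam1, lam2 are the
    singular values of the moment matrix lambda = mu - i d.
    """
    S = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])   # columns x_1, x_2
    R = np.array([[np.cos(theta_p), np.sin(theta_p)],
                  [-np.sin(theta_p), np.cos(theta_p)]])  # columns y_1, y_2
    lam = np.array([lam1, lam2])
    P_nu = np.abs((S[alpha, :] ** 2 * np.cos(B * lam * L)).sum()) ** 2
    P_nubar = np.abs((R[alpha, :] ** 2 * np.cos(B * lam * L)).sum()) ** 2
    return P_nu - P_nubar   # vanishes only for degenerate lam or equal angles
```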
## IV MAJORANA NEUTRINOS AND $`z`$-DEPENDENT MAGNETIC FIELDS
Even if we assume that $`V_L=0`$ but $`B_x`$, $`B_y`$ depend on $`z`$, the survival probabilities for left-handed and right-handed Majorana neutrinos will in general be different, i.e., Theorem 1 will not hold anymore. This provides some obstacle for an application of the results of this paper with the aim to find a way to distinguish between the Dirac and Majorana nature of neutrinos. We briefly discuss the general formalism for the case of $`z`$-dependent $`H_M`$. One needs to find $`2n`$ linearly independent solutions of the differential equation
$$i\frac{d\phi (z)}{dz}=H_M(z)\phi (z).$$
(46)
The survival probabilities now depend also on the location $z_0$ of the neutrino source and the location $z_1$ of the neutrino detection. Assuming we know a complete orthonormal set $\{\phi_j(z)\}_{j=1,\dots,2n}$ of solutions of Eq.(46), we can formulate the transition and survival probabilities as
$P^M(\nu_{\alpha L}(z_0)\to\nu_{\beta L}(z_1))$ $=$ $\left|\sum_{j=1}^{2n}\phi_{-\alpha j}^{*}(z_0)\phi_{-\beta j}(z_1)\right|^2,$ (47)
$P^M(\nu_{\alpha R}(z_0)\to\nu_{\beta R}(z_1))$ $=$ $\left|\sum_{j=1}^{2n}\phi_{+\alpha j}^{*}(z_0)\phi_{+\beta j}(z_1)\right|^2,$ (48)
and similarly those with $L\to R$ and $R\to L$ transitions. We use the indices $-\alpha$ and $+\alpha$ to indicate the $n$ upper and $n$ lower components of $\phi$, respectively.
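A simple way to obtain such a solution set numerically is to slice the interval and multiply matrix exponentials, as in the following Python sketch (assuming SciPy; the step size and midpoint rule are our own choices, and adaptive integrators would do better for rapidly varying fields):

```python
import numpy as np
from scipy.linalg import expm

def evolve(H_of_z, z0, z1, phi0, nsteps=2000):
    """Step-wise solution of Eq. (46), i dphi/dz = H_M(z) phi.

    H_M(z) is treated as constant over each small slice dz, so the
    propagator is a product of matrix exponentials.
    """
    z, dz = np.linspace(z0, z1, nsteps, retstep=True)
    phi = phi0.astype(complex)
    for zk in z[:-1]:
        phi = expm(-1j * H_of_z(zk + dz / 2) * dz) @ phi  # midpoint slice
    return phi
```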
If the matter potential can be neglected ($V_L=0$), the relation (19) for the Majorana Hamiltonian matrix holds, irrespective of whether $H_M$ depends on $z$ or not. It is then easy to check that, given a complete orthonormal system of solutions $\{\phi_j(z)\}_{j=1,\dots,2n}$, then $\{J\phi_j^{*}(-z)\}_{j=1,\dots,2n}$ is a complete orthonormal set of solutions of the differential equation (46) with $H_M(z)$ replaced by $H_M(-z)$. This suggests that with a Hamiltonian matrix $H_M$ symmetric in $z$ one can arrive at a situation where Theorem 1 is still valid. Indeed, using Eq.(48) and replacing $\phi_j(z)$ by $J\phi_j^{*}(-z)$, one arrives at the following relations for the Majorana transition and survival probabilities.
Theorem 3:
$$V_L=0,\;H_M(-z)=H_M(z)\Rightarrow P^M(\nu_{\alpha L}(z_0)\to\nu_{\beta L}(z_1))=P^M(\nu_{\beta R}(-z_1)\to\nu_{\alpha R}(-z_0)),$$
(49)
and, therefore, with $z_1=-z_0$, we have
$$P^M(\nu_{\alpha L}(-z_1)\to\nu_{\alpha L}(z_1))=P^M(\nu_{\alpha R}(-z_1)\to\nu_{\alpha R}(z_1)).$$
(50)
Therefore, Theorem 1 holds also for a $`z`$-dependent magnetic field, provided it is symmetric between the neutrino source and detection points.
Note that from Eq.(48) by exchanging complex conjugation within the absolute values, for any $`V_L`$ and $`z`$-dependent magnetic field, we obtain the relations
$P^M(\nu_{\alpha L}(z_0)\to\nu_{\beta L}(z_1))$ $=$ $P^M(\nu_{\beta L}(z_1)\to\nu_{\alpha L}(z_0)),$ (51)
$P^M(\nu_{\alpha R}(z_0)\to\nu_{\beta R}(z_1))$ $=$ $P^M(\nu_{\beta R}(z_1)\to\nu_{\alpha R}(z_0)),$ (52)
which express CPT invariance. However, they do not allow us to relate the survival probabilities for different helicities and are, therefore, not useful in our context.
## V SUMMARY
In this paper we have studied the survival probabilities of neutrinos and antineutrinos which possess magnetic moments and electric dipole moments and propagate in magnetic fields. In particular, given a neutrino flavour (type) $\alpha$, we have studied the difference $\mathrm{\Delta}_\alpha^{D,M}$ (25), (26) between the survival probabilities for $\nu_{\alpha L}$ and $\overset{(-)}{\nu}_{\alpha R}$. The bar on $\nu_{\alpha R}$ refers to Dirac neutrinos; without the bar a Majorana neutrino is understood. It is well known that matter effects lead to non-zero asymmetries $\mathrm{\Delta}_\alpha^{D,M}$, but without electromagnetic neutrino interactions one has $\mathrm{\Delta}_\alpha^D=\mathrm{\Delta}_\alpha^M$, expressing the fact that with neutrino oscillations one cannot distinguish between Dirac and Majorana neutrinos.
Here we have considered the opposite situation: we have assumed that matter effects are negligible but neutrinos have MMs and EDMs and propagate in magnetic fields. Assuming a *constant* magnetic field, we have shown that in this case we have $`\mathrm{\Delta }_\alpha ^M=0`$ (Theorem 1), but $`\mathrm{\Delta }_\alpha ^D\neq 0`$ in general. However, if the MM and EDM matrices are proportional to each other in the case of Dirac neutrinos, then the asymmetry of survival probabilities is zero as well, i.e., $`\mathrm{\Delta }_\alpha ^D=0`$ (Theorem 2). What happens if the magnetic field is not constant along the neutrino path between source and detector? In general, this will result in $`\mathrm{\Delta }_\alpha ^M\neq 0`$ for Majorana neutrinos, but if the transverse magnetic field is symmetric with respect to the center $`z_c`$ of the line connecting source and detector ($`B_{x,y}(z_c+z)=B_{x,y}(z_c-z)`$) one still has $`\mathrm{\Delta }_\alpha ^M=0`$ (Theorem 3).
The observations summarized here indicate a fundamental difference in the behaviour of Dirac and Majorana neutrinos with respect to magnetic fields, which is a consequence of the fact that the Hermitian MM and EDM matrices $`\mu `$ and $`d`$ have to be antisymmetric for Majorana neutrinos. Further research is necessary to see if the results of this paper lead to realistic possibilities to distinguish between the Dirac and Majorana nature of neutrinos.
###### Acknowledgements.
One of us (Balaji) gratefully acknowledges the hospitality at the Institute for Theoretical Physics, University of Vienna, Austria, where this work was initiated.
# ITEP-11/00 March 2000 On decay of large amplitude bubble of disoriented chiral condensate
Abstract
The time evolution of an initially formed large-amplitude bubble of disoriented chiral condensate (DCC) is studied. It is found that the evolution of this object may have a relatively long pre-decay stage. A simple explanation of this delay of the DCC bubble decay is given. The delay is related to the existence of approximate multi-soliton-type solutions of the corresponding radial sine-Gordon equation in (3+1) dimensions at large bubble radius.
In our previous paper we discussed the time evolution of initially formed DCC bubbles in the simplified chiral two-component classical sigma-model ($`\sigma ^2+\pi ^2=f_\pi ^2`$). In such a model it is convenient to introduce a new field variable $`\varphi `$: $`\pi =f_\pi \mathrm{sin}\varphi `$, $`\sigma =f_\pi \mathrm{cos}\varphi `$. The equation of motion in terms of $`\varphi `$, studied in Ref. , has the form
$$\varphi _{tt}-\varphi _{rr}-\frac{2}{r}\varphi _r+\frac{m^2}{n}\mathrm{sin}(n\varphi )=0,$$
$`(1)`$
where $`\varphi \in [0,2\pi ]`$, $`m`$ is the mass of the field $`\varphi `$ and $`n`$ is an integer. If $`m=0`$, we get the case of unbroken chiral symmetry. If $`m\neq 0`$, the chiral symmetry is broken. In the case $`n=1`$ the theory has the only vacuum state at $`\varphi =0`$. In terms of the field variable $`\varphi `$ the bubble of DCC corresponds to the following field configuration: everywhere inside a spherically symmetric domain $`\varphi =\mathrm{const}\neq 0`$; everywhere outside this domain $`\varphi \equiv 0`$ (true vacuum, i.e. $`<\sigma >=1`$, $`<\pi >=0`$). The decay of a DCC bubble in the model with $`n=1`$ leads finally to the formation of a breather-like solution , located at the center of the initial bubble. Formation of a long-living breather solution is a typical phenomenon for a wide class of nonlinear systems, including the one described by Eq. (1). It is worth mentioning that, because of the nonlinearity of the problem, the mean life-time of large amplitude DCC bubbles significantly exceeds that of linearized DCC systems with external sources, see, e.g. Ref. .
The case $`n\geq 2`$ corresponds to the theory with $`n`$ degenerate vacua at $`\varphi =0`$, $`2\pi /n`$, $`4\pi /n`$, …, $`2(n-1)\pi /n`$. As was demonstrated in Ref. , the evolution picture of DCC bubbles in this case depends crucially on the initial amplitude. The case of small amplitude also leads to the formation of a breather-like solution. But an initially formed bubble of large amplitude behaves in a different way, characterized by a rather long pre-decay stage with relatively low radiation. This first stage of evolution consists of the splitting of the shell of the initial bubble into several concentric shells of smaller amplitudes. The next step is the interaction of these shells. In the end we observed emission of the main part of the initial energy of the bubble in the form of waves of small amplitude, followed by formation of a long-living localized breather-like solution in the center. This rather complicated picture may be called the delayed decay of the DCC bubble. In this paper we continue studying the process of DCC bubble decay for the special initial conditions
$$\varphi (r,0)=\frac{2\pi }{1+(r/r_0)^K},\varphi _t(r,0)\equiv 0,$$
$`(2)`$
where $`K`$ is a large positive number; in our calculations we assign $`K=20`$. As was discussed in Ref. , the evolution picture of the initial bubble depends crucially on the dimensionless parameter $`\xi =mr_0`$. In the case of small $`\xi <\xi _{cr}\approx 2`$ in the model with $`n=2`$ we observed a prompt decay of DCC bubbles followed by formation of a breather-like solution. But for $`\xi >\xi _{cr}`$ we observed splitting of the $`2\pi `$-shell of the initial bubble into a pair of concentric $`1\pi `$-shells. The transition from prompt decay to delayed decay is clearly seen in Fig. 1. In this Figure we give the energy flux through the sphere of radius $`R>r_0`$, in units of the total energy, at two typical moments of time as a function of the parameter $`\xi `$. As is seen from Fig. 1, at $`\xi <\xi _{cr}`$ the main part of the released energy is emitted from the region of the bubble during a relatively short period $`T<50`$ (in dimensionless units). But at $`\xi >\xi _{cr}`$ we observed that only some part of the released energy is emitted from the region of the bubble during the same period of time. This phenomenon we call the delayed decay of the DCC bubble. In Fig. 2 we give pictures of the field configurations ($`n=2`$, $`\xi =10`$) at some typical moments of time $`t_i`$. As is seen from Fig. 2, further evolution of the split $`1\pi `$-shells leads to their secondary interaction. This interaction is of repulsive character and it takes place when their radii coincide. After several collisions of the $`1\pi `$-shells the total configuration transforms into a localized breather-like solution at the center of the initial bubble.
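A minimal integrator for Eq. (1) with the initial condition (2) can reproduce this behaviour qualitatively. The sketch below works in dimensionless units $`m=1`$ with $`n=2`$, $`\xi =mr_0=10`$ and $`K=20`$ as in the text; the grid, box size and crude outer boundary are ad hoc choices of this sketch, not taken from Ref. .

```python
import numpy as np

n, r0, K = 2, 10.0, 20
R, N = 300.0, 6000                   # radial box and grid (illustrative choices)
r = np.linspace(0.0, R, N + 1)
dr = r[1] - r[0]
dt = 0.5 * dr                        # CFL-stable time step

phi = 2.0 * np.pi / (1.0 + (r / r0) ** K)        # initial bubble, Eq. (2)

def accel(f):
    """phi_rr + (2/r) phi_r - (1/n) sin(n*phi), with regularity at r = 0."""
    a = np.zeros_like(f)
    a[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dr**2 \
            + (f[2:] - f[:-2]) / (r[1:-1] * dr)
    a[0] = 6.0 * (f[1] - f[0]) / dr**2           # phi_tt -> 3*phi_rr as r -> 0
    return a - np.sin(n * f) / n

phi_old = phi.copy()                              # phi_t(r,0) = 0
phi = phi_old + 0.5 * dt**2 * accel(phi_old)      # consistent first leapfrog step
for _ in range(int(60.0 / dt) - 1):               # evolve to t ~ 60
    phi_new = 2.0 * phi - phi_old + dt**2 * accel(phi)
    phi_new[-1] = phi_new[-2]                     # crude outer boundary; box is large
    phi_old, phi = phi, phi_new

print(phi[0])   # field at the bubble center after the shell has split and recollapsed
```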
The qualitative explanation of the observed splitting of the $`2\pi `$-boundary of the initial bubble (2) into a pair of concentric $`1\pi `$-shells may be the following. Assume that, for a sufficiently large radius $`r_0`$ of the bubble, the term $`\frac{2}{r}\frac{\partial \varphi }{\partial r}`$ in the left-hand side of Eq. (1) becomes inessential and may be dropped. Doing so, we get from Eq. (1) the one-dimensional sine-Gordon equation on the semi-axis $`0\le r<\mathrm{\infty }`$. Solutions of this equation at large $`r`$ look like solutions of the integrable sine-Gordon equation. Multi-soliton solutions of the one-dimensional sine-Gordon equation may be obtained analytically. In particular, the double-soliton solution for $`n=2`$ looks like
$$\varphi (x,t)=2\mathrm{arctan}\left(\frac{v\mathrm{sinh}(mx/\sqrt{1-v^2})}{\mathrm{cosh}(mvt/\sqrt{1-v^2})}\right),$$
$`(3)`$
where $`v`$ is the solitons’ velocity at infinity. At an arbitrary moment of time $`t`$ this solution looks like a superposition of two solitons with the integral of motion
$$\varphi (+\mathrm{\infty },t)-\varphi (-\mathrm{\infty },t)\equiv 2\pi ,$$
called the ”topological charge”. At $`t=0`$ solution (3) is a $`2\pi `$-jump of characteristic size $`x_{char}\sim \sqrt{1-v^2}/mv`$. In the relativistic limit $`(1-v)\ll 1`$ the size $`x_{char}`$ is small, $`x_{char}\ll m^{-1}`$. Because of the similarity of the shapes of the initial condition (2) at $`K\gg 1`$ and the solution (3) at $`t=0`$ in the relativistic case, the time evolution of both solutions should also be similar, at least at small positive $`t`$. Looking at solution (3) at $`t>0`$, we see that the $`2\pi `$-kink splits into a pair of $`1\pi `$-kinks moving in opposite directions. Just the same happens at $`t>0`$ with the solution of Eq. (1) with initial conditions (2). So, we conclude that the 3-dimensional $`2\pi `$-shell behaves like the double-kink solution (3), at least for small positive $`t`$.
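The splitting is easy to see by evaluating the solution (3) directly; in the short sketch below the velocity $`v`$ is chosen arbitrarily close to 1 (any such choice will do).

```python
import numpy as np

def phi_pair(x, t, v=0.95, m=1.0):
    """Double-soliton solution (3) of the 1D sine-Gordon equation (n = 2)."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    return 2.0 * np.arctan(v * np.sinh(m * g * x) / np.cosh(m * g * v * t))

x = np.linspace(-40.0, 40.0, 9)
for t in (0.0, 10.0, 20.0):
    print(t, np.round(phi_pair(x, t), 2))
# At t = 0 the profile is a single 2*pi jump of width ~ sqrt(1-v^2)/(m v);
# at later t two 1*pi kinks are visible near x = +/- v*t, receding in
# opposite directions.
```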
Further evolution of the two separated $`1\pi `$-shells looks different. The internal $`1\pi `$-bubble behaves as the usual large amplitude bubble in the model with $`n=1`$, studied long ago by Bogolubsky and Makhankov , see also the review . It shrinks and expands again, emitting part of its energy. The external $`1\pi `$-shell expands, then stops and goes back. The maximal radius of this $`1\pi `$-shell may be estimated from simple energy conservation considerations. The dynamics of both shells is illustrated by the pictures of Fig. 2. The splitting of the initial shell into several separated shells is better seen from the pictures of the radial field energy density $`\epsilon (r)`$ shown in Fig. 3. Notice that $`\epsilon (r)`$ is related to the total energy by $`E=\int \epsilon (r)𝑑r`$.
We also observed analogous splitting of the $`2\pi `$-shell (2) in the model with $`n=3`$. In this case the initial $`2\pi `$-shell splits into two shells of amplitudes $`2\pi /3`$ and $`4\pi /3`$, and some time later the $`4\pi /3`$-shell splits into a pair of $`2\pi /3`$-shells. In the further evolution all these $`2\pi /3`$-shells interact. The corresponding field configurations and radial energy densities at some typical moments of time are shown in Figs. 4 and 5.
Notice that the skipped term $`\frac{2}{r}\frac{\partial \varphi }{\partial r}`$ in Eq. (1) has an essential influence on the evolution of the solutions. In particular, it is responsible for the instability of the bubble with respect to collapse. To study such configurations it is convenient to apply the effective Lagrangian method. The spherically symmetric bubble collapse in the $`\lambda \varphi ^4`$-theory was first analyzed by this method in the thin-wall approximation in Ref. , see also . In the nearest future we are planning to discuss the form of the effective Lagrangian and the corresponding equations of motion for the multi-shell configurations studied in the present paper.
In conclusion it is worth stressing again that the observed splitting of the initial shell of a DCC bubble of large size and large amplitude leads to an essential prolongation of its mean life-time. That is why the emission of waves from the region of the DCC looks not like an instantaneous but rather like a prolonged process. The main reason for this prolongation of the mean life-time is the nonlinearity of the decay process.
Acknowledgments
This work was supported in part by the Russian Foundation for Basic Research under grants No 98-02-17316 and No 96-15-96578. The work of V. A. Gani was also supported by the INTAS Grant No 96-0457 within the research program of the International Center for Fundamental Physics in Moscow.
Figure captions
Fig. 1. Energy flux (in units of the total energy) that has passed through the sphere of radius $`R=20`$ by the moment of time $`T`$, as a function of the dimensionless parameter $`\xi `$.
Fig. 2. Typical field configurations at some moments of time $`t_i`$ in the model with $`n=2`$.
Fig. 3. Radial energy densities corresponding to the field configurations of Fig. 2.
Fig. 4. Typical field configurations at some moments of time $`t_i`$ in the model with $`n=3`$.
Fig. 5. Radial energy densities corresponding to the field configurations of Fig. 4.
# An equation of state à la Carnahan-Starling for a five-dimensional fluid of hard hyperspheres
## Abstract
The equation of state for five-dimensional hard hyperspheres arising as a weighted average of the Percus-Yevick compressibility ($`\frac{3}{5}`$) and virial ($`\frac{2}{5}`$) equations of state is considered. This Carnahan-Starling-like equation turns out to be extremely accurate, despite the fact that both Percus-Yevick equations of state are rather poor.
Although not present in nature, fluids of hard hyperspheres in high dimensions ($`d\geq 4`$) have attracted the attention of a number of researchers over the last twenty years. Among these studies, one of the most important outcomes was the realization by Freasier and Isbister and, independently, by Leutheusser that the Percus-Yevick (PY) equation admits an exact solution for a system of hard spheres in $`d=\text{odd}`$ dimensions. In the special case of a five-dimensional system ($`d=5`$), the virial series representation of the compressibility factor $`Z\equiv p/\rho k_BT`$ (where $`p`$ is the pressure, $`\rho `$ is the number density, $`k_B`$ is the Boltzmann constant, and $`T`$ is the temperature) is $`Z(\eta )=\sum _{n=0}^{\mathrm{\infty }}b_{n+1}\eta ^n`$, where $`\eta =(\pi ^2/60)\rho \sigma ^5`$ is the volume fraction ($`\sigma `$ being the diameter of a sphere) and $`b_n`$ are (reduced) virial coefficients. The exact values of the first four virial coefficients are $`b_1=1`$, $`b_2=16`$, $`b_3=106`$, and $`b_4=311.18341(2)`$. The fifth virial coefficient was estimated by Monte Carlo integration to be $`b_5\approx 970`$. More recent and accurate Monte Carlo calculations yield $`b_5=843(4)`$ and $`b_6=988(28)`$. The exact knowledge of the virial coefficients $`b_1`$–$`b_4`$ and in some cases of the Monte Carlo values for $`b_5`$ and $`b_6`$ has been exploited to construct several approximate equations of state (EOS), a number of which are reviewed in Ref. .
One of the simplest proposals is that of Song, Mason, and Stratt (SMS), who, by viewing the Carnahan-Starling (CS) EOS for $`d=3`$ as arising from a kind of mean-field theory, arrived at a generalization for $`d`$ dimensions that makes use of the first three virial coefficients. Baus and Colot (BC) proposed a rescaled (truncated) virial expansion that explicitly accounts for the first four virial coefficients. A slightly more sophisticated EOS is the rescaled Padé approximant proposed by Maeso et al. (MSAV), which reads
$$Z_{\text{MSAV}}(\eta )=\frac{1+p_1\eta +p_2\eta ^2}{(1-\eta )^5(1+q_1\eta )},$$
(1)
where $`p_1=(776-b_4)/36`$, $`p_2=(5476-11b_4)/36`$, and $`q_1=(380-b_4)/36`$. One of the most accurate proposals to date is the semi-empirical EOS proposed by Luban and Michels. These authors first introduce a function $`\zeta (\eta )`$ defined by
$$Z_{\text{LM}}(\eta )=1+\frac{b_2\eta \left\{1+\left[b_3/b_2-\zeta (\eta )b_4/b_3\right]\eta \right\}}{1-\zeta (\eta )(b_4/b_3)\eta +\left[\zeta (\eta )-1\right](b_4/b_2)\eta ^2}.$$
(2)
Equation (2) is consistent with the exact first four virial coefficients, regardless of the choice of $`\zeta (\eta )`$. The approximation $`\zeta (\eta )=1`$ is equivalent to assuming a Padé approximant for $`Z(\eta )`$. Instead, Luban and Michels observed that the computer simulation data of Ref. , $`\{Z_{\text{sim}}(\eta _i),i=1,\mathrm{\dots },8\}`$ \[cf. Table I\], favor a linear approximation for $`\zeta (\eta )`$ and, by a least-squares fit, they found $`\zeta (\eta )=1.074(16)+0.350(96)\eta `$. Another semi-empirical EOS (not included in Ref. ) was proposed by Amorós et al. (ASV):
$$Z_{\text{ASV}}(\eta )=\underset{n=0}{\overset{4}{\sum }}\beta _{n+1}\eta ^n+\frac{5\eta _0}{\eta _0-\eta }+C\eta ^4\left[\frac{1}{(1-\eta )^4}-1\right].$$
(3)
This equation imposes a single pole at the close-packing fraction $`\eta _0=\sqrt{2}\pi ^2/30`$. The parameters $`\beta _n=b_n-5\eta _0^{-(n-1)}`$ are fixed so as to reproduce the first five virial coefficients, while $`C`$ is determined by a fit to simulation data. By using the presently known values of $`b_4`$ and $`b_5`$ and minimizing $`\sum _{i=1}^8[1-Z_{\text{ASV}}(\eta _i)/Z_{\text{sim}}(\eta _i)]^2`$ one finds $`C=276.88`$. Finally, the Padé approximants $`Z_{[2,3]}`$ and $`Z_{[3,2]}`$ have also been considered.
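As a quick consistency check of Eq. (1) and of the coefficients quoted below it, the following sketch evaluates $`Z_{\text{MSAV}}`$ and recovers the exact $`b_2=16`$ and $`b_3=106`$ from its low-density behaviour; the small expansion parameter and tolerances are choices of this sketch.

```python
import numpy as np

b4 = 311.18341
p1 = (776 - b4) / 36
p2 = (5476 - 11 * b4) / 36
q1 = (380 - b4) / 36

def Z_MSAV(eta):
    """Rescaled Pade approximant, Eq. (1)."""
    return (1 + p1 * eta + p2 * eta**2) / ((1 - eta)**5 * (1 + q1 * eta))

eta = 1e-4
Z = Z_MSAV(eta)
b2_est = (Z - 1) / eta                 # -> ~16
b3_est = (Z - 1 - 16 * eta) / eta**2   # -> ~106 (up to an O(eta) b4 correction)
print(b2_est, b3_est, Z_MSAV(0.2))
```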
All the previous EOS rely upon some extra information, such as known virial coefficients and simulation data. On the other hand, (approximate) integral equation theories provide the correlation functions describing the structure of the fluid. From these functions one can obtain the EOS, which usually adopts a different form depending on the route followed. As said above, the PY integral equation has an exact solution for a system of hard spheres in odd dimensions. In particular, the analytical expressions of the EOS obtained from the virial route, $`Z_{\text{PY-v}}(\eta )`$, and from the compressibility route, $`Z_{\text{PY-c}}(\eta )`$, are known for $`d=5`$. Nevertheless, these two EOS are highly inconsistent with each other. This inconsistency problem is also present in lower dimensions (except in the one-dimensional case, where the PY theory becomes exact), but to a lesser extent. This led Freasier and Isbister to conclude that “the PY approximation for hard cores is an increasingly bad approximation as the dimensionality of the system grows larger.”
As is well known, the CS EOS for three-dimensional hard spheres plays a prominent role in liquid state theory. While originally derived from the observation that the numerical values of the known virial coefficients came remarkably close to fitting a simple algebraic expression, the CS equation is usually viewed as a suitable linear combination of the compressibility and virial EOS resulting from the PY theory, namely
$$Z_{\text{CS}}(\eta )=\alpha ^{(d)}Z_{\text{PY-c}}(\eta )+(1-\alpha ^{(d)})Z_{\text{PY-v}}(\eta )$$
(4)
with $`\alpha ^{(3)}=\frac{2}{3}`$. Since, as happened in the case $`d=3`$, both PY routes keep bracketing the true values in the case $`d=5`$, it seems natural to wonder whether the simple interpolation formula (4) works in this case as well. This question was addressed by González et al., who kept the value $`\alpha ^{(5)}=\frac{2}{3}`$. The main goal of this Note is to propose a different choice for $`\alpha ^{(5)}`$. The virial coefficients corresponding to $`Z_{\text{CS}}(\eta )`$ are $`b_n^{\text{CS}}=\alpha ^{(d)}b_n^{\text{PY-c}}+(1-\alpha ^{(d)})b_n^{\text{PY-v}}`$. By using the known values of $`b_4`$–$`b_6`$ one gets, however, conflicting estimates for the mixing parameter $`\alpha ^{(5)}`$, namely $`\alpha ^{(5)}=(b_4-b_4^{\text{PY-v}})/(b_4^{\text{PY-c}}-b_4^{\text{PY-v}})\simeq 0.68`$, $`\alpha ^{(5)}=(b_5-b_5^{\text{PY-v}})/(b_5^{\text{PY-c}}-b_5^{\text{PY-v}})\simeq 0.02`$, and $`\alpha ^{(5)}=(b_6-b_6^{\text{PY-v}})/(b_6^{\text{PY-c}}-b_6^{\text{PY-v}})\simeq 0.40`$. On the other hand, minimization of $`\sum _{i=1}^8[1-Z_{\text{CS}}(\eta _i)/Z_{\text{sim}}(\eta _i)]^2`$ yields $`\alpha ^{(5)}=0.62`$. For simplicity, here I take the rational number $`\alpha ^{(5)}=\frac{3}{5}`$ and propose the corresponding EOS (4).
Table I compares the MSAV, LM, ASV, and CS EOS with available computer simulations. This table complements Table II of Ref. , where $`Z_{\text{MSAV}}`$, $`Z_{\text{ASV}}`$, and $`Z_{\text{CS}}`$ (the latter being proposed in this Note) were not included. In general, the accuracy of the EOS with adjusted virial coefficients improves as the degree of complexity increases. More specifically, the average relative deviations from the simulation data are, from worse to better, as follows: 4.7% ($`Z_{[3,2]}`$), 3.7% ($`Z_{\text{SMS}}`$), 3.0% ($`Z_{[2,3]}`$), 2.3% ($`Z_{\text{BC}}`$), 0.4% ($`Z_{\text{MSAV}}`$), 0.17% ($`Z_{\text{LM}}`$), and 0.15% ($`Z_{\text{ASV}}`$). Concerning the two PY EOS, both are quite poor, with average relative deviations equal to 7.1% ($`Z_{\text{PY-v}}`$) and 4.2% ($`Z_{\text{PY-c}}`$). The most interesting point, however, is that the CS-like EOS (4) (with the choice $`\alpha ^{(5)}=\frac{3}{5}`$) presents an excellent agreement with the simulation data (the average relative deviation being 0.3%), only slightly inferior to that of the semi-empirical EOS $`Z_{\text{LM}}`$ and $`Z_{\text{ASV}}`$. This is especially remarkable if one considers that only the first three virial coefficients of $`Z_{\text{CS}}`$ are exact, a circumstance also occurring in the case of the original CS equation. The choice $`\alpha ^{(5)}=\frac{2}{3}`$, on the other hand, yields an average relative deviation of 0.5%.
Let me conclude with some speculations. It seems interesting to conjecture about the existence of possible “hidden” regularities in the PY theory for hard hyperspheres explaining the paradox that, although the virial and the compressibility EOS strongly deviate from each other, a simple linear combination of them might be surprisingly accurate. Since the adequate value of the mixing parameter is $`\alpha ^{(3)}=\frac{2}{3}`$ for $`d=3`$ and $`\alpha ^{(5)}=\frac{3}{5}`$ for $`d=5`$, it is then tempting to speculate that its generalization to $`d`$ dimensions might be $`\alpha ^{(d)}=(d+1)/2d`$, so that $`\alpha ^{(\mathrm{\infty })}=\frac{1}{2}`$ in the limit of high dimensionality, in contrast to other proposals. For $`d=7`$ the above implies that, while $`Z_{\text{PY-v}}`$ and $`Z_{\text{PY-c}}`$ would dramatically differ, the recipe (4) with $`\alpha ^{(7)}=\frac{4}{7}`$ could be very close to the true EOS. The confirmation or rebuttal of this expectation would require the availability of simulation data for $`d=7`$, which, to the best of my knowledge, are absent at present.
I would like to thank Drs. S. B. Yuste and M. López de Haro for a critical reading of the manuscript and Dr. J. R. Solana for providing some useful references. Partial financial support by the DGES (Spain) through grant PB97-1501 and by the Junta de Extremadura-Fondo Social Europeo through grant IPR99C031 is also gratefully acknowledged.
# Metallization of molecular hydrogen
## Abstract
We study metallization of molecular hydrogen under pressure using exact exchange (EXX) Kohn-Sham density-functional theory in order to avoid well-known underestimates of band gaps associated with standard local-density or generalized-gradient approximations. Compared with the standard methods, the EXX approach leads to considerably (1 - 2 eV) higher gaps and significant changes in the relative energies of different structures. Metallization is predicted to occur at a density of $`\stackrel{>}{}`$ 0.6 mol/cm<sup>3</sup> (corresponding to a pressure of $`\stackrel{>}{}`$ 400 GPa), consistent with all previous measurements.
Despite great efforts starting with the first theoretical predictions in 1935 , the determination of the electronic and structural properties of hydrogen at high pressure is still extremely incomplete . Experimentally, it is established that hydrogen transforms to high pressure phases, but remains molecular up to pressures of at least $`\sim `$ 200 GPa . Metallization of solid hydrogen has been actively sought, but not yet observed, with one experimental team reporting no sign of metallization up to 342 GPa . It is widely assumed that metallization would occur either through a structural transformation to an atomic metallic phase, which involves dissociation of the H<sub>2</sub> molecules, or through band overlap within the molecular phase itself. This latter mechanism is supported by a recent experiment-based equation of state that, combined with Quantum Monte Carlo (QMC) calculations for metallic atomic hydrogen , yields an estimate for the dissociation pressure of as much as 620 GPa .
The theoretical situation is complicated by the fact that the structures at high pressures are not known, together with the well-known difficulties of quantitative predictions for metal-insulator transitions. Various candidate structures for the high-pressure phases (called “phase II” or “BSP” below $`\sim `$ 150 GPa and “phase III” or “HA” above $`\sim `$ 150 GPa) have been proposed based on static , and dynamic density-functional calculations and on QMC investigations. Most of these structures have hexagonal and orthorhombic unit cells with up to four molecules. However, there are serious difficulties associated with the estimates of metallization pressures. The major problem is the well-known fact that the local-density (LDA) or generalized gradient (GGA) approximations of density-functional theory cause drastic underestimates of band gaps (by typically 50 - 100 %). This leads to much too low metallization pressures and also affects the quality of LDA and GGA total energies that are needed for the prediction of energetically favorable structures. Previous work beyond the LDA and GGA was carried out within the X$`\alpha `$ approximation , the many-body GW framework in a first-principles and an approximate formulation, and QMC simulations . The first two studies were limited to a simple hcp structure with two molecules per cell oriented along the c-axis (called “mhcp” hereafter), which has been found to be energetically unfavorable ; the GW calculations are not able to determine relative energies of structures. The QMC calculations indicated qualitatively the problems with the LDA calculations but did not determine gaps.
In this Letter, we present a first-principles investigation of band-gap closure within the molecular phase. We employ the framework of exact exchange density-functional theory (EXX), which has been shown recently to yield very accurate band gaps and total energies for a large set of semiconductors . The EXX method has crucial advantages for the present study. Since it treats exactly all exchange-related quantities of Kohn-Sham density-functional theory , it is inherently self-interaction free. This remedies largely the band-gap underestimates that plague all LDA and GGA calculations, without an artificial band-gap correction and in a parameter-free way. Second, it yields band structures and total energies from the same calculation, which we believe to be a key requirement for the solution of the hydrogen problem.
In the EXX scheme, which is explained in detail in Ref. , total energies $`E_{tot}`$ are obtained from the expression
$$E_{tot}=T_0+E_{elprot}+E_H+E_x+E_c.$$
(1)
Here $`T_0`$ is the noninteracting kinetic energy, $`E_{elprot}`$ is the interaction energy between the electrons and the protons, $`E_H`$ is the Hartree energy, $`E_x`$ the exact exchange energy, and $`E_c`$ denotes the correlation energy, which is the only quantity that has to be approximated (the LDA is used in this work). Band structures $`\{\epsilon _{n𝐤}\}`$ and wavefunctions $`\varphi _{n𝐤}`$ for states with band index $`n`$ and wavevector $`𝐤`$ are obtained from the Kohn-Sham equations
$$\left(-\frac{\mathrm{\nabla }^2}{2}+V_{prot}(𝐫)+V_H(𝐫)+V_x(𝐫)+V_c(𝐫)\right)\varphi _{n𝐤}(𝐫)=\epsilon _{n𝐤}\varphi _{n𝐤}(𝐫),$$
(2)
where $`V_{prot}(𝐫)`$ is the potential due to the protons, $`V_H(𝐫)`$ is the Hartree potential, and $`V_c(𝐫)=\delta E_c/\delta n(𝐫)`$, with the density $`n(𝐫)=2\sum _{n𝐤}^{occ}|\varphi _{n𝐤}(𝐫)|^2`$. The crucial part of the EXX scheme is the construction of the exact local exchange potential $`V_x(𝐫)=\delta E_x/\delta n(𝐫)`$ as the functional derivative of the exact exchange energy with respect to the density . Since this requires repeated computation of nonlocal-exchange integrals and a linear-response function, an EXX calculation is much more demanding than standard LDA or GGA methods. The present calculations were performed using the bare proton potential for hydrogen and plane-wave basis sets with a kinetic energy cutoff of 36 Ry that has been employed in previous calculations on H<sub>2</sub> . We have also performed tests with cutoff energies of 60 Ry and observed that band gaps $`\epsilon _{gap}`$ and total energy differences $`\mathrm{\Delta }E_{tot}`$ among different structures were changed only by a few hundredths of an eV and a few tenths of a mRy/molecule, respectively. Dense $`𝐤`$-point meshes with $`N_𝐤\simeq 3500/N_{at}`$ special points in the Brillouin zone were employed ($`N_{at}`$ denotes the number of protons in the unit cell). This guarantees convergence of $`\mathrm{\Delta }E_{tot}`$ to better than 1 mRy/molecule.
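None of the EXX machinery is reproduced here — the construction of $`V_x`$ is precisely the demanding step — but a toy one-dimensional Kohn-Sham-type problem in a plane-wave basis illustrates how, once the local potentials are given, Eq. (2) yields band energies $`\epsilon _{n𝐤}`$ and a gap. The lattice constant, potential strength and basis size below are invented for illustration only.

```python
import numpy as np

a = 2.0        # lattice constant (arbitrary units)
V0 = -1.5      # strength of a single cosine Fourier component of the local potential
Gmax = 8       # plane waves G = 2*pi*m/a with |m| <= Gmax
m = np.arange(-Gmax, Gmax + 1)
G = 2.0 * np.pi * m / a

def bands(k, nbands=3):
    """Diagonalize H_{GG'} = 0.5*(k+G)^2 delta_{GG'} + V(G-G') for one k.
    V(x) = V0*cos(2*pi*x/a) couples only neighbouring G components."""
    H = np.diag(0.5 * (k + G) ** 2)
    for i in range(len(G) - 1):
        H[i, i + 1] = H[i + 1, i] = 0.5 * V0
    return np.linalg.eigvalsh(H)[:nbands]

ks = np.linspace(-np.pi / a, np.pi / a, 41)      # first Brillouin zone
eps = np.array([bands(k) for k in ks])
gap = eps[:, 1].min() - eps[:, 0].max()          # gap between bands 1 and 2
print(f"toy band gap: {gap:.3f}")                # ~|V0| in the weak-potential limit
```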
First we show how the EXX band gaps compare with LDA and GW band gaps over a wide range of densities, as depicted in Fig. 1 for the mhcp structure with the bond length and c/a ratio fixed at the values determined from LDA and extrapolations of X-ray data (the only case for which GW calculations have been done). We can recognize several salient features: (i) EXX gaps are about 1.5 - 2 eV larger than the LDA gaps for all densities. Consequently, the EXX metallization density is considerably higher than for LDA. Similar corrections to band gaps (1 - 2 eV) have previously been reported for semiconductors . (ii) EXX and LDA gaps are almost linear functions of density, which holds also for the gaps of the other structures considered below. (iii) a linear extrapolation of the EXX data to zero density (isolated H<sub>2</sub> molecules) yields a gap of 11.4 eV, close to the weighted average of the lowest experimental singlet and triplet excitation energies of the H<sub>2</sub> molecule , 11.6 eV (indicated by the left cross in Fig. 1).
The last point is in agreement with recent work on isolated noble-gas atoms : the differences between the highest occupied eigenvalue and the unoccupied EXX Kohn-Sham eigenvalues are very good approximations to excitation energies. This can be attributed to (i) the correct asymptotic $`-1/r`$ behavior of the exact exchange potential $`V_x(𝐫)`$ in Eq. (2), which causes the EXX spectrum of the unoccupied states to be a Rydberg series (with energies $`\epsilon _1<\epsilon _2<\mathrm{\dots }<\epsilon _{\mathrm{\infty }}=0`$, see inset of Fig. 1) and (ii) $`\epsilon _{HOMO}`$ (with $`\epsilon _{\mathrm{\infty }}=0`$) equaling the negative of the ionization energy $`I`$ . Indeed, we find $`\epsilon _1-\epsilon _{HOMO}`$ in EXX to agree very well with the lowest experimental excitation energies (crosses in Fig. 1), both for the isolated hydrogen molecule and the molecular solid at low density. Thus, the quasiparticle band gap $`E_{gap}`$, defined as the difference of the ionization energy and the electron affinity , $`E_{gap}=I_{H_2}-A_{H_2}\simeq I_{H_2}`$, differs from the lowest EXX gap by an exciton binding energy $`\epsilon _{\mathrm{\infty }}-\epsilon _1`$ . At high densities excitonic effects are reduced, so that we expect the real quasiparticle gaps to deviate only slightly from the EXX gaps, just as has been demonstrated for semiconductors .
For densities greater than 0.35 mol/cm<sup>3</sup>, we have also carried out EXX calculations on other structures with hexagonal and orthorhombic unit cells (see Fig. 2) that have been proposed previously as possible lowest-energy structures on the basis of LDA and GGA total-energy calculations. Here, the H<sub>2</sub> molecules are tilted with respect to the z direction by an angle $`\alpha \simeq 55^o`$ and the $`c/a`$ ratio is approximately 1.58 (at high pressures). In the structures denoted by $`Cmc2_1^\delta `$, the centers of the molecules are displaced from ideal hcp sites by a distance $`\delta `$ (we normalize $`\delta `$ such that $`Cmc2_1^1`$ coincides with the $`Cmca`$ structure). Proton coordinates derived from LDA calculations for these structures were used as input for the present EXX calculations since a complete unit-cell relaxation within the EXX scheme is computationally too demanding at present.
Figure 3 depicts the fundamental EXX band gaps of the structures of Fig. 2 for densities between $`n_1`$ = 0.35 and $`n_2`$ = 0.60 mol/cm<sup>3</sup>. \[The corresponding pressures can be specified by our theoretical calculations or by using an extrapolated experimental equation of state . We find that at the densities $`n_1`$ and $`n_2`$, the theoretical pressures ($`P_1`$ and $`P_2`$) are close to those of Ref. , corresponding to $`\sim 100`$ GPa and $`\sim 400`$ GPa; Ref. leads to much lower pressures at high density ($`P_1=100`$ GPa, $`P_2=325`$ GPa), whereas Ref. gives higher pressures ($`P_1=115`$ and $`P_2=500`$ GPa).\] For the $`Cmc2_1^{0.5}`$, $`Pca2_1`$, $`Cmc2_1^0`$, and $`P2_1/c`$ structures, we obtain metallization densities of 0.468, 0.535, 0.537, and 0.542 mol/cm<sup>3</sup>, respectively. Note that the use of LDA coordinates means at high pressure a bondlength of $`r_0\simeq 1.45`$ a.u. We have verified that using the experimental ($`r_0^{Expt.}=1.40`$ a.u.) or EXX ($`r_0^{EXX}\simeq 1.38`$ a.u.) bondlength of the isolated H<sub>2</sub> molecule increases calculated band gaps by about 0.6 and 0.9 eV, respectively. For the $`P2_1/c`$ structure, this is indicated by the thin dashed lines in Fig. 3. The larger bondlength causes the predicted metallization density to increase up to 0.58 mol/cm<sup>3</sup>, corresponding to a calculated pressure of 375 GPa.
The EXX scheme predicts that the three structures with the largest gaps ($`P2_1/c`$, $`Cmc2_1^0`$, and $`Pca2_1`$) are the most stable ones. Our results are reported in Fig. 4 which shows the total energy differences among the structures for densities up to 0.62 mol/cm<sup>3</sup>. A key result of the utilization of the EXX functional is that the metallic $`Cmca`$ structure becomes more stable than the insulating phases only above a density of 0.61 mol/cm<sup>3</sup> (calculated pressure 415 GPa). In contrast, LDA and GGA calculations find this to be the most stable structure at much lower density (at quoted pressures of $`P\sim `$ 140 and 180 GPa). Such a low metallization pressure is in severe disagreement with experiment, and we believe the problem is a consequence of the erroneous LDA and GGA band gaps that indirectly affect the total energy. The energy is decreased by populating the conduction states, an effect that occurs in the EXX calculations only upon much higher compression. However, the rule “the lower the energy, the wider the band gap” is not exactly obeyed: the most stable structure $`Pca2_1`$ has only the third highest band gap. We find $`Pca2_1`$ to be more stable than $`P2_1/c`$ for all densities, in agreement with the LDA results of Ref. , but in disagreement with the LDA and GGA calculations of Refs. and that slightly favor $`P2_1/c`$.
Zero point motion of the protons is a very difficult problem that has been the subject of much debate. For the present purposes there are three effects. First, the pressure is increased (by approximately 10% ). Second, the band gaps may change. The three most stable structures have gaps that differ only by a few tenths of an eV. One might interpret this as an estimate of the influence of zero-point motion which is expected to average over low-energy structures. Tight-binding calculations on large cells with hydrogen molecules in disordered arrangements indicate effects that are similarly small. However, as the gaps become small, of order of the vibron energies $`\sim `$ 0.5 eV, we expect the zero point motion to increase the gaps by a dynamic level-repulsion effect. Third, relative energies of different structures are changed. QMC calculations and work by Straus and Ashcroft suggest that zero-point motion favors isotropic structures (in our case, $`Pca2_1`$, $`P2_1/c`$, and $`Cmc2_1^0`$) with respect to anisotropic ones like $`Cmca`$. Including all these effects, we expect the metallization pressure to increase to $`\stackrel{>}{}`$ 400 GPa.
Another possibility is the metallization by a structural transition to a possible monatomic phase. A comparison of enthalpies derived from various experimental equations of state with QMC calculations for hydrogen in the diamond phase yields dissociation pressures between 300 and 620 GPa . The large range is due to the extreme sensitivity of the transition point to the form of the equation of state. Thus we can only conclude that our calculated metallization pressure is in the same range as possible transitions to other structures. However, this does not affect our main point that up to pressures of at least 400 GPa, molecular hydrogen is predicted to be stable and insulating.
In summary, we have investigated band gaps and total energies of possible candidate structures for compressed molecular hydrogen using a Kohn-Sham density-functional scheme (EXX) that treats exchange interactions exactly. EXX leads to band gaps that are 1 - 2 eV higher than in LDA (similar to gaps found recently using an approximate GW approach ) and, in addition, predicts changes of the relative energies of structures near the metal-insulator transition. In contrast to LDA and GGA calculations, the energetically preferred structure has $`Pca2_1`$ symmetry up to density 0.61 mol/cm<sup>3</sup> (pressure $`\sim `$ 400 - 450 GPa). In this structure there is the possibility of metallization via band overlap, which is here found to occur at $`\sim `$ 400 GPa. Above this pressure there are three possibilities: a metallic molecular phase as described here; some new molecular phase that is more stable and insulating; or a transition to an atomic phase expected to be metallic.
We acknowledge interesting discussions with W. Evans, A. Görling, R. J. Hemley, J. Kohanoff, J. B. Krieger, P. Loubeyre, J. P. Perdew, I. F. Silvera, I. Souza, and B. Tuttle. This work has been supported by the Office of Naval Research under Grant No. N00014-98-1-0604.
## 1 Introduction
The simplest candidate for a string representing pure Yang-Mills theory is a non-critical string with *curved Liouville* action:
$$L=\left(\frac{a(\varphi )}{l_s}\right)^2(\partial x)^2+(\partial \varphi )^2+T(\varphi )+\mathrm{\Phi }(\varphi )R_2$$
(1)
where $`x`$ stands for the four dimensional space-time coordinates, $`\varphi `$ for the Liouville field, and $`T(\varphi )`$ and $`\mathrm{\Phi }(\varphi )`$ for the closed tachyon and dilaton backgrounds. The factor $`\frac{a(\varphi )}{l_s}`$ is interpreted as an effective *running* string tension for the four dimensional non critical string . The scale $`l_s`$ then plays the rôle of a *bare* string tension. The space-time metric associated with the preceding action (1) is:
$$ds^2=a(\varphi )^2dx^2+l_c^2d\varphi ^2$$
(2)
where we have introduced an extra scale $`l_c`$ for dimensional reasons. The physical meaning of this scale will become clear as we proceed.
The physical backgrounds $`a(\varphi )`$, $`T(\varphi )`$ and $`\mathrm{\Phi }(\varphi )`$ should be restricted by demanding the vanishing of the two dimensional sigma model beta functions.
The soft dilaton theorem for vanishing dilaton tadpoles (owing to conformal invariance) reads:
$$\left(\sqrt{\alpha ^{\prime }}\frac{\partial }{\partial \sqrt{\alpha ^{\prime }}}-\frac{1}{2}(d-2)g\frac{\partial }{\partial g}\right)A(p_1,p_2,\mathrm{\dots },p_n)=0$$
(3)
This equation (3) can be considered, for $`d=4`$, as a renormalization group equation <sup>2</sup><sup>2</sup>2 In closed string field theory the soft dilaton theorem becomes equivalent to the invariance of the string field action under space-time dilatations and changes of the string coupling (see and ) with:
$$\sqrt{\alpha ^{\prime }}\frac{\partial g}{\partial \sqrt{\alpha ^{\prime }}}=\beta (g)=g$$
(4)
Using the definition of $`g`$ in terms of the dilaton field:
$$e^\mathrm{\Phi }=g$$
(5)
and interpreting, as discussed above, $`\sqrt{\alpha ^{\prime }}`$ as $`\frac{l_s}{a(\varphi )}`$, we get from (4) <sup>3</sup><sup>3</sup>3After these identifications (3) becomes equivalent to the holographic renormalization group :
$$\mathrm{\Phi }=log(\frac{a(\varphi )}{l_s})$$
(6)
We will take (6) as the starting point to determine the backgrounds in (1). Vanishing of the sigma model beta functions (to first order) leads to the following solution:
$$ds^2=\varphi d\stackrel{\rightarrow }{x}^2+l_c^2d\varphi ^2$$
(7)
$$\mathrm{\Phi }(\varphi )=\frac{1}{2}log(\varphi )$$
(8)
where we have fine-tuned the closed string tachyon vacuum expectation value to compensate the central charge deficiency in the dilaton beta function equation. A first analysis of the stability of this solution was presented in .
In this letter we will consider the problem of confinement for the background metric (7), by explicit computation in the semiclassical approximation of the Wilson loop vacuum expectation value.
## 2 Wilson Loop
An interesting feature of the preceding metric (7) is the existence of a *naked* singularity at $`\varphi =0`$. From (5) and (7) this corresponds to the weakly coupled regime of the dual gauge theory. The boundary of space-time (7) is at $`\varphi =\mathrm{\infty }`$.
In order to compute the Wilson loop we will follow a Nambu-Goto semiclassical approximation in static gauge . We then identify
$`t`$ $`=`$ $`\tau `$
$`x`$ $`=`$ $`\sigma `$ (9)
The induced metric on the world sheet will then be, for static configurations,
$$ds^2=\varphi d\tau ^2+(\varphi +l_c^2\varphi ^{\prime 2})d\sigma ^2$$
(10)
and the action reads:
$$S=\frac{T}{l_s^2}\int _0^1d\sigma \sqrt{\varphi (\varphi +l_c^2\varphi ^{\prime 2})}$$
(11)
We will consider U-shape string configurations with Dirichlet boundary conditions at the codimension one hypersurface $`\varphi =\mathrm{\Lambda }`$. Denoting by $`\varphi _0`$ the tip of the U-shape string we get:
$$\frac{\varphi ^2}{\sqrt{\varphi (\varphi +l_c^2\varphi ^{\prime 2})}}=\varphi _0$$
(12)
From (12) we easily get for a loop of size $`L`$ the relation:
$$L=2l_c\varphi _0^{\frac{1}{2}}\int _1^{\sqrt{\frac{\mathrm{\Lambda }}{\varphi _0}}}\frac{d\xi }{\sqrt{\xi ^4-1}}$$
(13)
The action is then given by:
$$S=\frac{2Tl_c\varphi _0^{\frac{3}{2}}}{l_s^2}\int _1^{\sqrt{\frac{\mathrm{\Lambda }}{\varphi _0}}}\frac{\xi ^4d\xi }{\sqrt{\xi ^4-1}}$$
(14)
Please notice that the integral (14) is divergent in the limit $`\mathrm{\Lambda }=\mathrm{\infty }`$. In terms of elliptic functions we get:
$$L=\frac{2l_c\varphi _0^{\frac{1}{2}}}{\sqrt{2}}F(\mathrm{cos}^{-1}\sqrt{\frac{\varphi _0}{\mathrm{\Lambda }}},\frac{1}{\sqrt{2}})$$
(15)
$$S=\frac{2Tl_c\varphi _0^{\frac{3}{2}}}{l_s^2}\left(\frac{1}{3\sqrt{2}}F(\mathrm{arccos}\sqrt{\frac{\varphi _0}{\mathrm{\Lambda }}},\frac{1}{\sqrt{2}})+\frac{1}{3}\sqrt{\frac{\mathrm{\Lambda }}{\varphi _0}}\sqrt{\frac{\mathrm{\Lambda }^2}{\varphi _0^2}-1}\right)$$
(16)
From the first equation (15) we can read the relationship between $`\frac{L}{\sqrt{\mathrm{\Lambda }}}`$ and $`\sqrt{\frac{\varphi _0}{\mathrm{\Lambda }}}`$ as plotted in Fig 1.
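A short numerical sketch reproduces this curve from Eq. (15). Writing it in the dimensionless form $`L/(l_c\sqrt{\mathrm{\Lambda }})=\sqrt{2}\,u\,F(\mathrm{cos}^{-1}u,1/\sqrt{2})`$ with $`u=\sqrt{\varphi _0/\mathrm{\Lambda }}`$ is our reading of the axes of Fig 1; the grid is an ad hoc choice.

```python
import numpy as np
from scipy.special import ellipkinc    # incomplete elliptic integral F(phi | m)

m = 0.5                                # parameter m = k^2 with k = 1/sqrt(2)
u = np.linspace(1e-3, 0.999, 2000)     # u = sqrt(phi0 / Lambda)

# Eq. (15): L / (l_c sqrt(Lambda)) = sqrt(2) * u * F(arccos(u), k = 1/sqrt(2))
Lhat = np.sqrt(2.0) * u * ellipkinc(np.arccos(u), m)

i = Lhat.argmax()
print(f"max of L/(l_c sqrt(Lambda)) = {Lhat[i]:.4f} at sqrt(phi0/Lambda) = {u[i]:.3f}")
# Every L below the maximum is reached for two values of phi0: one at small u
# (region I) and one at u close to 1 (region II), as in Fig. 2.
```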
There are several interesting features. First of all, the existence of a maximum indicates that the size of the loop $`L`$ must necessarily be smaller than $`\sqrt{\mathrm{\Lambda }}`$ in $`l_c`$ units. Secondly, for a given $`L`$ we get two different U-shape string configurations (see Fig 2).
This phenomenon is similar to the one found in for Schwarzschild-anti de Sitter (S-AdS) space-time (see Fig 3 for a qualitative comparison). In fact a maximum was also obtained in (in the context of the S-AdS space-time), as well as two possible U-string configurations for each loop size $`L`$. However it is important to stress that in our case this phenomenon depends on having the cutoff $`\mathrm{\Lambda }`$. In fact the *naïve* limit $`\mathrm{\Lambda }=\mathrm{\infty }`$ would produce the relationship:
$$L=\frac{2l_c\varphi _0^{\frac{1}{2}}}{\sqrt{2}}F(\frac{\pi }{2},\frac{1}{\sqrt{2}})$$
(17)
Relying upon the similarity with the Schwarzschild-anti de Sitter example we can think of the vertical dashed line in Fig 3 as a sort of *effective horizon* covering the singularity at $`\varphi =0`$. Please notice that the limit $`\sqrt{\frac{\varphi _0}{\mathrm{\Lambda }}}=0`$ corresponds to pushing this horizon on top of the singularity itself. Hence it looks as if the system sort of regularizes the singularity through a non trivial dependence of $`\varphi _0`$ on $`\mathrm{\Lambda }`$.
Coming back to Fig 3, let us briefly recall the physical meaning of the two branches. Region $`I`$ corresponds (see ) to a confining behavior for the quark potential while region $`II`$ describes the Coulomb phase typical of $`AdS_5`$. Qualitatively, in the deep region $`II`$, Wilson loops are defined in terms of electric flux tubes that enter only in the asymptotically AdS region, reproducing the result of $`N=4`$ super Yang-Mills.
In our case we can also differentiate between regions $`I`$ and $`II`$ of Fig 2. The static potential in region $`I`$, corresponding to $`\varphi _0\ll \mathrm{\Lambda }`$, is given by:
$$V=\frac{L^3}{24K^2l_s^2l_c^2}+\frac{4}{3}\frac{l_c}{l_s^2}\mathrm{\Lambda }^{3/2}$$
(18)
(where $`K\equiv K(k=1/\sqrt{2})`$ is the complete elliptic integral of the first kind). This physically means an *overconfining* $`L^3`$ potential between static probes. It is important to stress that the divergent part in (18) cannot be directly interpreted as a *mass renormalization*.
In the region $`II`$, for $`\varphi _0`$ close to $`\mathrm{\Lambda }`$ and $`L\ll \mathrm{\Lambda }`$, in $`l_c`$ units <sup>4</sup><sup>4</sup>4 Notice that here $`l_c`$ is playing the role of $`1/\mathrm{\Lambda }_{QCD}`$., we get:
$$V=\frac{L\mathrm{\Lambda }}{l_s^2}(1+\frac{\pi \sqrt{2}}{2K})$$
(19)
i.e. a linear confining behavior. Using equations (5) and (8), the leading term of the potential can be rewritten in terms of the *running* effective coupling $`g`$ as:
$$V=\frac{Lg^2}{l_s^2}(1+\frac{\pi \sqrt{2}}{2K})$$
(20)
i.e. a string tension of order $`\frac{g^2}{l_s^2}`$.
Given a physical value of $`L`$, it is possible, for each $`\mathrm{\Lambda }`$, to determine which of the two allowed values of $`\varphi _0`$ (corresponding to region $`I`$ or region $`II`$, respectively) gives the smaller potential energy. In the region in which the approximations are valid one can write
$$L=ϵl_c\mathrm{\Lambda }^{1/2}$$
(21)
in such a way that
$$\frac{V_{II}}{V_I}=\frac{ϵ}{\frac{ϵ^3}{24K^2}+\frac{4}{3}}$$
(22)
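From Eq. (22) one can locate numerically the values of $`ϵ`$ at which the two configurations are degenerate, $`V_{II}=V_I`$; a minimal sketch follows (the bracketing intervals are ad hoc choices).

```python
import numpy as np
from scipy.special import ellipk       # complete elliptic integral K(m), m = k^2
from scipy.optimize import brentq

K = ellipk(0.5)                        # K(k = 1/sqrt(2)) ~ 1.8541

# Eq. (22): V_II / V_I = eps / (eps^3/(24 K^2) + 4/3).  The ratio equals 1
# at the roots of f(eps) = eps^3/(24 K^2) - eps + 4/3.
f = lambda eps: eps**3 / (24.0 * K**2) - eps + 4.0 / 3.0
eps1 = brentq(f, 0.1, 5.0)
eps2 = brentq(f, 5.0, 20.0)
print(eps1, eps2)   # ~1.37 and ~8.3; between the roots V_I < V_II, i.e. the
                    # region-I configuration is energetically preferred there
```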
## 3 Comments
It is natural to expect a renormalization group equation for the Wilson loop of the type:
$$\left(\lambda \frac{\partial }{\partial \lambda }+\beta (g)\frac{\partial }{\partial g}\right)W(C)=0$$
(23)
for dilatations $`x\to \lambda x`$. The Wilson loop we obtain satisfies this equation for the beta function (4). This is the beta function governing the string field theory. It would be extremely interesting to unravel the relation between equations of the type (23) and the loop equations.
In summary, in this letter we have presented a gravitational framework to study confinement in non supersymmetric Yang-Mills theory. The gravitational background is dictated by the closed-open relation in string theory as encoded in the soft dilaton theorem. The dynamics of the Wilson loop on this background shares many of the features previously found in descriptions of confinement using generalizations of the AdS/CFT correspondence .
## Acknowledgments
This work has been partially supported by the European Union TMR programs FMRX-CT96-0012 Integrability, Non-perturbative Effects, and Symmetry in Quantum Field Theory and ERBFMRX-CT96-0090 Beyond the Standard model as well as by the Spanish grants AEN96-1655 and AEN96-1664.
# Total cross section for 𝑝-𝑑 breakup below 30 MeV
## Abstract
The total cross section for p-d breakup is studied in terms of the elastic $`𝒮`$–matrix through the unitary condition. Calculations using the complex Kohn variational method along with the Pair Correlated Hyperspherical Harmonic basis are presented. The results have been restricted to energies below $`E_p=30`$ MeV where Coulomb effects are expected to be sizable and are compared to the existing data. Two different measurements have been found in the literature: 40 years ago, Gibbons and Macklin (1959); and 26 years ago, Carlson et al. (1973). The calculations are found to be in reasonable agreement with these old data, though a discrepancy is observed near the deuteron breakup threshold. Moreover, a detailed analysis of the contributions to the observable from different partial waves has been performed. Unexpectedly, the main contribution for a wide range of energies has been detected in the $`J=3/2^{}`$ state.
PACS: 21.45.+v, 25.10.+s, 25.55.Ci
keywords: N-d scattering; breakup cross section
corresponding author: Alejandro Kievsky
Istituto Nazionale di Fisica Nucleare, Via Buonarroti 2, 56100 Pisa, Italy
tel. +39-050-844546, FaX +39-050-844538, e-mail: [email protected]
Studies in the three-nucleon (3N) continuum are based mainly on N-d scattering experiments. The deuteron is the only existing bound state in the two-nucleon (2N) system with a binding energy $`B_d=2.225`$ MeV. Therefore, for incident nucleon energies below $`3.337`$ MeV, or incident deuteron energies below $`6.675`$ MeV, there are no open channels (disregarding the very low probability for radiative capture or bremsstrahlung) and the reaction goes through the elastic channel. Accordingly, the elastic scattering matrix, the $`𝒮`$–matrix, is unitary.
For energies above the deuteron breakup threshold (DBT) three free nucleons are present in the outgoing channel. For example, in p-d scattering a free neutron can be observed only above the DBT (neutron production) whereas for the n-d reaction the observation of a free proton is only possible above the DBT. The elastic $`𝒮`$–matrix is no longer unitary and the missing flux in the elastic channel is related to the total breakup cross section.
The total breakup cross section accounts for all possible configurations of the three free outgoing particles and its expression is given for example in Ref. in terms of the $`𝒯`$-matrix elements connecting the elastic channels to the inelastic ones:
$$\sigma _b(Nd)=\frac{\pi }{k^2}\frac{1}{(2I_1+1)(2I_2+1)}\underset{J}{\sum }(2J+1)\underset{\alpha \beta }{\sum }|T_{\alpha \beta }^J|^2.$$
(1)
The index $`\alpha `$ labels the elastic channels while $`\beta `$ runs over all possible configurations of the three outgoing nucleons. In addition $`k^2=\frac{2\mu }{\mathrm{\hbar }^2}E_0`$, where $`\mu `$ is the reduced nucleon-deuteron mass, $`E_0`$ is the center of mass energy, and $`I_1`$ ($`I_2`$) is the spin of the incident particle (target). For each state with total angular momentum $`J`$ the flux conservation in channel $`\alpha `$ imposes the unitary condition
$$\underset{\alpha ^{\prime }}{\sum }|𝒮_{\alpha \alpha ^{\prime }}^J|^2+\underset{\beta }{\sum }|T_{\alpha ,\beta }^J|^2=1.$$
(2)
Experimental measurements of the total breakup cross section $`\sigma _b`$ are scarce and have been performed many years ago with limited accuracy. For the n-d system an indirect determination is possible through the subtraction of the total elastic cross section $`\sigma _{el}(nd)`$ from the total nuclear cross section $`\sigma _{tot}(nd)`$. Existing $`\sigma _{el}(nd)`$ data lack the desired accuracy, producing a large uncertainty in $`\sigma _b(nd)`$, particularly at low energies . Direct measurements of $`\sigma _b(nd)`$ have also been done more than 20 years ago . The interest at that time was to produce data for comparison with the first theoretical attempts to solve the 3N continuum using semi-realistic N-N potentials. These early potential models gave a qualitative description of the data, but the uncertainties of $`10\%`$ were too large to make definitive conclusions. The present status of this observable is briefly given in Ref. (see page 184), where modern theoretical predictions are compared to the data. The calculations in Ref. have been performed by solving the Faddeev equations in momentum space using different realistic potential models. The discrepancies observed at low energies have been attributed to the low quality of the data. In fact, since the total and elastic cross sections are well described by theory, there is no reason to expect a discrepancy in the total breakup cross section. On the other hand, since the total, elastic, and breakup cross sections are related by the unitary condition and the optical theorem, it is highly desirable to have a consistent description of all three observables. New determinations of $`\sigma _b(nd)`$ could come from either direct or indirect measurements.
The situation of the p-d total breakup cross section is somewhat different. In this case no indirect measurement is possible due to the Coulomb divergence in the elastic amplitude. Two direct measurements have been found in the literature. A very old one by Gibbons and Macklin (1959) below $`E_p=5.6`$ MeV , with systematic uncertainties estimated to be $`40\%`$ at $`E_p=3.5`$ MeV and decreasing to $`5\%`$ at $`E_p=5.5`$ MeV. A second experiment was performed by Carlson et al. (1973) for proton energies between $`22`$ and $`46`$ MeV , with uncertainties between $`8\%`$ and $`5\%`$. The experimental techniques utilized were quite different in the two measurements. In the first, $`\sigma _b`$ has been determined by measuring the total neutron yield, while in the second an attenuation technique has been used. These two measurements provide valuable information to be used in the comparison to a theoretical description of p-d scattering including the long-range Coulomb force. In particular, Coulomb effects are expected to be important in the Gibbons and Macklin experiment performed at energies near threshold. For example $`\sigma _b(nd)=(34\pm 6)`$ mb at $`E_n=4.9`$ MeV (Holmberg (1969) ) to be compared to the measurement of Gibbons and Macklin, $`\sigma _b(pd)=(21.5\pm 1.4)`$ mb at the same energy. The Coulomb force reduces the breakup cross section by approximately $`30\%`$. At higher energies the effect is less evident: at nucleon energies of $`22`$ MeV the error bars of the n-d and p-d measurements start to overlap, and the effect seems to be extremely small above $`40`$ MeV. A collection of n-d and p-d data is given in Fig. 1.
Recently the variational technique based on an expansion in terms of the Pair Correlated Hyperspherical Harmonic (PHH) basis has been extended to describe elastic N-d scattering above the DBT . The method consists of an expansion of the 3N wave function onto the PHH basis, and the elastic elements of the $`𝒮`$–matrix are obtained through the application of the Kohn variational principle in complex form (see Ref. and references therein). The inclusion of the Coulomb interaction in the context of the variational method using the PHH expansion presents no particular difficulties provided the correct boundary conditions are imposed on the wave function in the asymptotic region. An extensive discussion of the asymptotic form of the wave function in connection with the Kohn variational principle and the application of the PHH expansion is given in Ref. . In Ref. it was shown that the method reproduces the benchmarks of Ref. for n-d scattering whereas in Ref. the elastic cross section and polarization observables have been calculated for n-d and p-d scattering at nucleon energies of $`5`$ and $`10`$ MeV using realistic potentials. In the latter case the applications to n-d scattering have been shown to be in close agreement with those given by the Bochum-Cracow group .
Applications to describe p-d scattering above the DBT are of particular interest due to the historical difficulties in managing the distortion introduced by the Coulomb force in the asymptotic region. In this context, observables sensitive to the long-range Coulomb interaction give a unique opportunity to test techniques devised to take into account such distortion. The p-d total breakup cross section is a well-suited observable to make comparisons. The existing amount of data, though small, is sufficient to extract conclusions about the capability of the PHH technique to describe the charged 3N system above threshold. Moreover, as we shall see, it will be possible to evaluate the need for new and higher-precision measurements of $`\sigma _b`$. These measurements would increase our understanding of $`\sigma _b`$ and impose stringent conditions on the theoretical methods and phase-shift analyses (PSA’s).
The breakup cross section $`\sigma _b(Nd)`$ is given in Eq. (1) in terms of the $`𝒯`$-matrix elements connecting the elastic and inelastic channels. Using the fact that those elements are related to the elastic $`𝒮`$–matrix elements through the unitary condition of Eq. (2), it is possible to re-write the above equation in terms of the elastic matrix elements. In the case of N-d scattering the elastic $`𝒮`$–matrix corresponding to a state with total angular momentum $`J`$ is a $`3\times 3`$ matrix. Each matrix element $`{}_{}{}^{J}𝒮_{LS}^{L^{\prime }S^{\prime }}`$ is labeled by the sets of quantum numbers $`[\alpha LS]`$ coupled to $`J`$. Here $`L`$ is the relative angular momentum between the incident nucleon and the deuteron and $`S`$ is the total spin coming from the coupling of the spin of the deuteron $`S_d=1`$ to the spin $`S_N=1/2`$ of the nucleon. Accordingly, $`S=1/2`$ or $`3/2`$ and there are three possible couplings of $`[LS]`$ giving $`J`$ and conserving parity \[the parity of the state is given by $`(-1)^L`$\]. In the case of $`J=1/2^\pm `$ there are two possible couplings and the $`𝒮`$–matrix is a $`2\times 2`$ matrix. Moreover, for N-d scattering $`(2I_1+1)(2I_2+1)=6`$.
$$\sigma _b(Nd)=\frac{\pi }{k^2}\frac{1}{6}\underset{J}{\sum }(2J+1)tr\{I_J-𝒮_J𝒮_J^{\mathrm{\dagger }}\},$$
(3)
where $`I_J`$ is the $`3\times 3`$ identity matrix except for $`J=1/2^\pm `$, for which it is the $`2\times 2`$ identity matrix. Eq. (3) gives $`\sigma _b`$ in terms of the elastic matrix elements at fixed $`J`$; the sum runs over all possible values of $`J`$ and parity (the sum over different parities is understood). Although the sum runs from $`J=0`$ to infinity, there is a rapid convergence due to the fact that each $`𝒮_J`$ matrix approaches unitarity as $`J`$ increases. High $`J`$ values correspond to high $`L`$ values and the centrifugal barrier prevents peripheral waves from participating actively in the breakup process. Moreover, quartet states are also peripheral due to Pauli blocking. For each $`J`$ there are two quartet states and one doublet state, which in combination with the increasing $`L`$ values limits the number of terms in the sum to a tractable number within the energy range considered here. In fact, below $`30`$ MeV states with $`J>11/2^\pm `$ make a negligible contribution to $`\sigma _b`$. Eq. (3) gives a very simple picture of the observable, as the diagonal elements of the product matrix $`𝒮_J𝒮_J^{\mathrm{\dagger }}`$ are always in the range $`(0,1)`$. Therefore the quantity $`A_J=tr\{I_J-𝒮_J𝒮_J^{\mathrm{\dagger }}\}`$ divided by $`tr\{I_J\}`$ gives a measure of the inelasticity in that state. The complete contribution to the breakup cross section is obtained after taking into account the spin degeneracy $`(2J+1)`$.
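Given the elastic $`𝒮`$–matrices, Eq. (3) is straightforward to evaluate. The sketch below implements the trace formula for a toy set of sub-unitary matrices whose phases and inelasticities are invented for illustration (they are not the AV18 results of this paper).

```python
import numpy as np

def sigma_b(k2, S_by_twoJ):
    """Total breakup cross section, Eq. (3).  S_by_twoJ maps 2J (an odd
    integer, J being half-integral) to a list of elastic S-matrices, one per
    parity; k2 is k^2 (fm^-2), so the result is in fm^2 (1 fm^2 = 10 mb)."""
    total = 0.0
    for twoJ, S_list in S_by_twoJ.items():
        for S in S_list:
            A_J = np.trace(np.eye(len(S)) - S @ S.conj().T).real   # inelasticity
            total += (twoJ + 1) * A_J                              # 2J+1 degeneracy
    return (np.pi / k2) / 6.0 * total

# Toy input: a strongly inelastic 3/2 state and an almost unitary 5/2 state
# (diagonal S with inelasticities eta_j < 1 and arbitrary phases).
S32 = np.diag([0.80 * np.exp(2j * 0.5), 0.97 * np.exp(2j * 0.1),
               0.99 * np.exp(-2j * 0.2)])
S52 = np.diag([0.999 * np.exp(2j * 0.05)] * 3)
print(sigma_b(k2=0.05, S_by_twoJ={3: [S32], 5: [S52]}))
```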
In this paper we present calculations of $`\sigma _b(Nd)`$ based on the PHH technique. Thorough details of the method are given in Refs. , so only a brief discussion is given here. The starting point of the present calculations is the Kohn variational principle in complex form, which states that the functional
$$[{}_{}{}^{J}𝒮_{LS}^{L^{\prime }S^{\prime }}]={}_{}{}^{J}𝒮_{LS}^{L^{\prime }S^{\prime }}+i\langle \mathrm{\Psi }_{LSJ}^{-}|H-E|\mathrm{\Psi }_{L^{\prime }S^{\prime }J}^{+}\rangle ,$$
(4)
is stationary with respect to the trial parameters in the three-nucleon scattering wave function. For an incident state with relative angular momentum $`L`$, spin $`S`$ and total angular momentum $`J`$ it is
$$\mathrm{\Psi }_{LSJ}^{+}=\sum _{i=1,3}\left[\mathrm{\Psi }_C(𝐱_i,𝐲_i)+\mathrm{\Omega }_{LSJ}^{+}(𝐱_i,𝐲_i)\right],$$
(5)
and $`\mathrm{\Psi }_{LSJ}^{-}`$ is its complex conjugate. The first term, $`\mathrm{\Psi }_C`$, describes the system when the three nucleons are close to each other and corresponds, asymptotically, to an outgoing three-particle state. Each amplitude $`\mathrm{\Psi }_C(𝐱_i,𝐲_i)`$, where $`𝐱_i,𝐲_i`$ are the Jacobi coordinates corresponding to the $`i`$-th permutation, has total angular momentum $`JJ_z`$ and total isospin $`TT_z`$ and is decomposed into channels labelled by angular-spin-isospin quantum numbers ($`\beta `$–channels). The remaining two-dimensional amplitude is expanded in terms of the PHH basis
$$\varphi _\beta (x_i,y_i)=\rho ^{\ell _\beta +L_\beta -5/2}f_\beta (x_i)\left[\sum _Ku_K^\beta (\rho )\,{}_{}{}^{(2)}P_{K}^{\ell _\beta ,L_\beta }(\varphi _i)\right],$$
(6)
where the hyperspherical variables are defined by the relations $`x_i=\rho \mathrm{cos}\varphi _i`$ and $`y_i=\rho \mathrm{sin}\varphi _i`$, $`f_\beta (x_i)`$ is a pair correlation function and $`{}_{}{}^{(2)}P_{K}^{\ell ,L}(\varphi )`$ is a hyperspherical polynomial. The second term in the variational scattering wave function describes the asymptotic motion of a deuteron relative to the third nucleon. It can be written in terms of the ingoing and outgoing solutions of the asymptotic N-d Schroedinger equation:
$$\mathrm{\Omega }_{LSJ}^{+}(𝐱_i,𝐲_i)=\mathrm{\Omega }_{LSJ}^{in}(𝐱_i,𝐲_i)-\sum _{L^{\prime }S^{\prime }}{}_{}{}^{J}𝒮_{LS}^{L^{\prime }S^{\prime }}\mathrm{\Omega }_{L^{\prime }S^{\prime }J}^{out}(𝐱_i,𝐲_i).$$
(7)
For energies above the DBT the hyperradial functions $`u_K^\beta (\rho )`$ in Eq. (6) should describe an outgoing wave distorted by the Coulomb interaction between the two protons. Accordingly, for $`\rho \rightarrow \infty `$, the hyperradial functions behave as
$$u_K^\beta (\rho )\rightarrow \sum _{\beta ^{\prime }K^{\prime }}\left(e^{-i\chi \mathrm{log}2Q\rho }\right)_{\beta \beta ^{\prime }}^{KK^{\prime }}B_{K^{\prime }}^{\beta ^{\prime }}\,e^{iQ\rho },$$
(8)
where $`Q^2=ME/\hbar ^2`$, with $`M`$ the nucleon mass and $`E`$ the total energy ($`E=E_0-B_d`$), and the $`\chi `$–matrix originates from the Coulomb potential. Extending the index $`\beta `$ to include the hyperspherical quantum number $`K`$, the numerical constants $`B_K^\beta `$ are the $`𝒯`$-matrix elements $`T_{\alpha ,\beta }^J`$ defined in Eq. (1).
We have studied the convergence of the quantity $`A_J`$ in terms of the partial wave channels used in the decomposition of the wave function for each $`J^\pm `$ state. Examples of the convergence can be found in Ref. . The accuracy of the results given here is estimated to be better than $`1\%`$. The calculations have been done with the Argonne $`v_{18}`$ (AV18) interaction and in some cases the three-nucleon force (3NF) of Urbana (UR) has been included for the study of the sensitivity to 3NF’s.
In Fig. 2 the PHH results for $`\sigma _b(pd)`$ are shown together with the experimental p-d data. The theoretical calculations (open squares) are in good agreement with the data. There is a window between $`6`$ MeV and $`22`$ MeV with no data, just in the region where the observable varies rapidly and reaches its maximum. Above $`20`$ MeV the calculations have been done at $`22.7`$ MeV and $`28`$ MeV, and are in close agreement with the measurements from Carlson et al. In Fig. 3 a detailed plot with semilogarithmic scale is given at the very low energies explored by Gibbons and Macklin . The theoretical results with the AV18 interaction (open triangles) and AV18+UR interaction (open squares) are in agreement with the data at the higher energies but start to deviate as the energy approaches the DBT. This region is of particular interest since Coulomb effects are large and the breakup amplitude is attenuated by the Coulomb penetration factor $`\mathrm{exp}(-2\pi \eta )`$, with $`\eta \propto 1/\sqrt{E_0-B_d}`$. At $`5`$ MeV the PHH n-d results are also given in Fig. 3 for the sake of comparison (diamond). The n-d data from Holmberg closest to $`5`$ MeV are also plotted (open circles). The effect of the Coulomb interaction is clearly evident. Moreover, the calculations at $`5`$ MeV based on the variational method reproduce both the n-d and p-d breakup reactions.
Differences between theory and experiment are observed around $`4`$ MeV and below. These discrepancies could originate from an incomplete treatment of the electromagnetic interaction: the calculations included the Coulomb interaction but not other small electromagnetic parts of the interaction, such as the vacuum polarization or the magnetic moment potentials. On the other hand, systematic errors in the measurements are not unlikely; a new set of measurements would be highly desirable.
In Fig. 3 the effects of the 3NF’s are also studied (open squares). At these very low energies the breakup cross section is dominated by doublet states with $`L=0,1`$, since the probability of finding the three nucleons close to each other is reduced neither by Pauli blocking nor by a high centrifugal barrier. The doublet $`{}_{}{}^{2}S_{1/2}^{}`$ state is sensitive to 3NF’s. For example, calculations with AV18 or AV18+UR at $`3`$ MeV differ by $`20\%`$ for the $`{}_{}{}^{2}S_{1/2}^{}`$ phase-shift . In this case we are referring to energies below the DBT, where the phase-shifts are real, but we can expect sensitivity to 3NF’s also in the imaginary parts above the DBT. The results given in Fig. 3 show a reduction of $`\sigma _b`$ of the order of $`3\%`$–$`4\%`$ when the AV18+UR interaction is used. The inclusion of the 3NF increases the binding energy of the three-nucleon system, and accordingly the inelasticity of the $`J=1/2^+`$ state is reduced, diminishing the total breakup cross section.
Finally we want to discuss the contribution of the different states to the total breakup cross section. Each state contributes to $`\sigma _b`$ an amount equal to $`(2J+1)A_J`$. In Fig. 4 we plot this quantity up to $`J=9/2^\pm `$. It is interesting to note that above $`6`$ MeV the state $`J=3/2^{}`$ gives by far the main contribution to the observable. This state has the optimum combination of a large $`A_J`$ value, coming mainly from $`{}_{}{}^{2}P_{3/2}^{}`$, and the spin degeneracy $`2J+1=4`$. This degeneracy is twice the value for $`J=1/2`$ and in fact the $`J=3/2^{}`$ contribution is double the $`J=1/2^{}`$ one. Although the $`J=1/2^+`$ state has the highest $`A_J`$ value, its degeneracy is too low and its contribution is the largest one only below $`6`$ MeV. Above $`24`$ MeV other states start to make important contributions, showing the weakening of the centrifugal barrier as the energy increases. In Fig. 5 the contributions to $`\sigma _b`$ are given for energies below $`7`$ MeV. The contributions calculated using the AV18+UR potential are also given (dotted line). As expected, the $`J=1/2^+`$ contribution is the principal one below $`6`$ MeV and, more importantly, is essentially the only one below $`4`$ MeV.
The reduced number of parameters in the description of $`\sigma _b`$ at low energy is of particular interest. At $`4`$ and $`5`$ MeV a complete set of p-d vector and tensor analyzing powers, as well as the p-d differential cross section, have been measured . These high-precision data can be used to perform a single-energy PSA in which some of the phases must be allowed to be complex. From the above analysis it is clear that only a few phases or mixing parameters have sizable imaginary parts. In addition, the inclusion of $`\sigma _b`$ in the database will impose further restrictions on the imaginary parts.
PSA is extremely useful for performing comparisons between theory and experiment. Discrepancies at the level of observables can be traced back to single phases or mixing parameters, which in turn are directly related to specific parts of the interaction. Above the DBT, phases and mixing parameters are complex, doubling the number of variables in the search procedure. Old PSA’s encountered difficulties in precisely determining the real and imaginary parts of the phases and mixing parameters . The high-precision p-d data obtained over the past years have allowed for a better determination of the parameters through single-energy PSA . These analyses were limited to energies below the DBT. The study of the breakup cross section in terms of partial waves will be helpful when PSA’s are extended to higher energies. In this light it would be particularly useful to perform new experimental determinations of $`\sigma _b(pd)`$ at energies between 6 and 22 MeV, where there are presently no experimental results. It would also be most advantageous to perform these measurements at the same energies where highly-accurate differential cross section and analyzing power data already exist .
###### Acknowledgements.
One of the authors (A. K.) would like to thank the Triangle Universities Nuclear Laboratory, where much of this work was performed, for hospitality and support. Moreover, the authors would like to thank W. Tornow, E. J. Ludwig, H. J. Karwowski and S. Rosati for many useful discussions.
FIGURE CAPTIONS
Fig. 1. Total breakup cross section for n-d (filled circles) and p-d (open circles) scattering up to $`50`$ MeV. Experimental data are from Ref. .
Fig. 2. Theoretical calculations of $`\sigma _b(pd)`$ up to $`30`$ MeV (open squares). The experimental results of Gibbons and Macklin (open triangles) and Carlson et al. (open circles) are given for the sake of comparison.
Fig. 3. Total breakup cross section below 6 MeV. Theoretical calculations for $`\sigma _b`$ are given at four energies using the AV18 potential (p-d:open triangles, n-d:diamond) and AV18+UR potential (p-d:open squares). The experimental data are from Gibbons and Macklin (p-d:filled circles) and Holmberg (n-d:open circles). The solid line is a fit to the p-d data.
Fig. 4. The quantity $`(2J+1)A_J`$ for different values of $`J`$ and parity between 5 and 28 MeV. Calculations have been done at the same energies of Fig. 2. The solid lines are linear interpolations.
Fig. 5. Same as Fig. 4 below $`7`$ MeV. Calculations have been done using AV18 (solid line) and AV18+UR (dotted line).
# A phenomenological glass model for vibratory granular compaction
## I Introduction
It is well known that, with the appropriate driving and boundary conditions, granular matter can approximate each of the three major states of matter: gas, liquid and solid . Conspicuous by its absence is a glass state; that is, a state where the relaxation times far exceed the observational timeframe . However, it is becoming increasingly clear that the granular analogue of glass has been found in a recent series of experiments performed at the University of Chicago . They measured the density of a system that was weakly perturbed or ‘tapped’ by the application of a driving pulse to the container. A first indication of glass-like relaxation processes came from analysis of the density $`\rho (t)`$, where $`t`$ is the number of times the system had been tapped, which was found to increase only logarithmically slowly ,
$$\rho (t)=\rho _\mathrm{f}-\frac{\mathrm{\Delta }\rho }{1+B\mathrm{ln}(1+t/\tau )}.$$
(1)
The fitting parameters $`\rho _\mathrm{f}`$ , $`\mathrm{\Delta }\rho `$, $`B`$ and $`\tau `$ are functions of the control parameter $`\mathrm{\Gamma }`$, defined as the peak acceleration of the driving pulse scaled by gravity, $`\mathrm{\Gamma }=a_{\mathrm{max}}/g`$. Subsequent experiments in which $`\mathrm{\Gamma }`$ was varied during a run also behaved in a similar manner to glasses under a variable temperature , suggesting a relationship between $`\mathrm{\Gamma }`$ and some elusive temperature-like quantity.
Theoretical attempts to understand the experiments have ranged from the construction of toy microscopic models to higher level, coarse grained descriptions . The general consensus has been that the slow relaxation is due to frustrated dynamics resulting from excluded volume effects. More insightful are the free volume arguments postulated by the Chicago group and Boutreux and de Gennes , which derive the logarithmic compaction with only a small number of assumptions. Provocatively, these assumptions are also key components in established phenomenological models of glass-forming liquids, namely those of Adam and Gibbs and Cohen and Turnbull , respectively. This further suggests that the analogy with glasses is a valid one. However, both of the granular free volume descriptions currently lack any mention of the experimental control parameter $`\mathrm{\Gamma }`$ and hence must be regarded as incomplete.
In this paper we demonstrate how one of the free volume arguments, namely that of the Chicago group, can be expanded into a full model that incorporates $`\mathrm{\Gamma }`$. This is made possible by postulating a loose analogy between $`\mathrm{\Gamma }`$ in the granular system and temperature in supercooled liquids, and then using this analogy to incorporate elements of the Adam and Gibbs theory. The result of this process is a master equation for weakly excited granular media that is capable of reproducing a wide range of known experimental behaviour. The motivations behind this work are twofold. Firstly, by focusing on only a small number of physical mechanisms, the success of the model in emulating the experiments indicates that the dominant mechanisms may have been correctly identified. It is further hoped that this work may help to strengthen the relationship between granular matter and glasses. This second goal is easily achieved once we show that the derived master equation is identical to that of a simple glass model due to Bouchaud .
This paper is arranged as follows. In Sec. II the Chicago group’s free volume argument is summarised and then expanded to a full model by importing elements of the Adam and Gibbs theory. The resulting master equation that describes the evolution of the system in time is specified. Numerical integration of this master equation, plus analytical results wherever possible, are compared to the experimental data in Sec. III. Of particular importance here is an explanation for the apparent contradiction between the experiments and any model based on the free volume approach, concerning the supposed $`\mathrm{\Gamma }`$–dependence of the projected final density, $`\rho _\mathrm{f}`$ in (1). Further discussion on the physical interpretation of the model parameters is given in Sec. IV, as well as suggested ways in which the various assumptions behind the model may be more rigorously checked. Finally, we summarise our findings in Sec. V and make some tentative predictions for future experiments that may help to further elucidate the relevant physical mechanisms in granular compaction.
## II Description of the model
The relationship between the Chicago group’s free volume description and the Adam and Gibbs theory is that they both regard the dominant relaxation process to be the cooperative rearrangement of particles. The correspondence between the two theories can be taken further by postulating the existence of a temperature-like noise parameter $`\eta (\mathrm{\Gamma })`$ for weakly excited granular matter. This procedure forms the basis of our work, and is described in full below. For current purposes it is sufficient to provide a somewhat heuristic description of the model; a fuller discussion of the various parameters can be found in Sec. IV.
### A First principles derivation
The Chicago group’s , and also Boutreux and de Gennes’ , arguments employ the concept of the mean free volume per particle, here denoted $`\overline{v_\mathrm{f}}`$ . For a system of $`N`$ particles occupying a total volume $`V`$, $`\overline{v_\mathrm{f}}`$ is defined as
$$\overline{v_\mathrm{f}}=\frac{V-V_{\mathrm{min}}}{N}=v_\mathrm{g}\left(\frac{1}{\rho }-\frac{1}{\rho _{\mathrm{max}}}\right),$$
(2)
where $`v_\mathrm{g}`$ is the volume of a single particle and $`\rho `$ is the volume fraction $`Nv_\mathrm{g}/V`$. Units are chosen so that the density of a single grain is unity, hence $`\rho `$ is also the density of the system. Following , $`\rho _{\mathrm{max}}\equiv Nv_\mathrm{g}/V_{\mathrm{min}}`$ is identified with the most compact state possible in a disordered system, i.e. the random close-packing limit. In what follows we shall fix $`\rho _{\mathrm{max}}=0.64`$, believed to be the random close packed density for a system of monodisperse spheres .
The Chicago group postulated that the compaction process is dominated by the cooperative rearrangement of local domains of particles. If $`z`$ is the number of particles in a region that can rearrange independently of its environment, they argued there is a lower cut-off
$$z\geq z^{*}=a\frac{v_\mathrm{g}}{\overline{v_\mathrm{f}}}$$
(3)
below which there is not enough free volume available to allow reconfiguration. Roughly speaking, $`z^{*}`$ is the number of particles that, by adding up their individual free volumes, can make a single ‘hole’ big enough to allow exactly one particle to fit through. The explicit dependence of $`z^{*}`$ on $`\rho `$ is found by combining (2) and (3),
$$z^{*}=a\left(\frac{1}{\rho }-\frac{1}{\rho _{\mathrm{max}}}\right)^{-1}.$$
(4)
By assuming that the density increases at a rate proportional to $`\mathrm{e}^{-z^{*}}`$, it is now possible to derive the logarithmic compaction law $`\rho (t)\sim 1/\mathrm{ln}(t)`$ .
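As a rough numerical check of this argument (a minimal sketch with illustrative values $`a=1`$, $`\omega _0=1`$; it is not taken from the original derivation), one can integrate the rate equation $`\mathrm{d}\rho /\mathrm{d}t=\omega _0\mathrm{e}^{-z^{*}(\rho )}`$ and verify that the late-time approach to $`\rho _{\mathrm{max}}`$ is indeed inverse-logarithmic:

```python
import numpy as np

rho_max, a, omega0 = 0.64, 1.0, 1.0    # illustrative parameter values
rho, t, dt = 0.58, 0.0, 1e-3

while t < 1e10:
    z_star = a / (1.0 / rho - 1.0 / rho_max)   # Eq. (4)
    rho += omega0 * np.exp(-z_star) * dt       # rate proportional to exp(-z*)
    t += dt
    dt *= 1.001                                # geometric steps reach long times

# At late times (rho_max - rho) * ln(t) should approach a constant,
# i.e. the density deficit shrinks like 1/ln(t).
print(t, rho, (rho_max - rho) * np.log(t))
```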
As mentioned in the introduction, the theory just outlined is incomplete as it does not incorporate the experimental control parameter $`\mathrm{\Gamma }`$. In an attempt to resolve this deficiency, we observe that a similar description for cooperative relaxation is also central to the theory proposed by Adam and Gibbs for structural relaxation in supercooled liquids . An intermediate stage of their calculations is of interest here, namely that the relaxation rate $`W`$ can be expressed as a function of temperature $`T`$ as
$$W(T)\propto \mathrm{exp}\left(-\frac{z^{*}\mathrm{\Delta }E}{k_\mathrm{B}T}\right),$$
(5)
where $`\mathrm{\Delta }E`$ is the free energy barrier per particle, $`k_\mathrm{B}`$ is Boltzmann’s constant and $`z^{*}`$ is again the smallest number of particles that can rearrange independently of their environment (which was ultimately related to the configurational entropy).
The principal assumption behind our current work is that an expression analogous to (5) also holds for weakly excited granular media. More precisely, we propose that a region with local density $`\rho `$ reconfigures at a rate
$$W(\rho ,\mathrm{\Gamma })\propto \mathrm{exp}\left(-\frac{z^{*}(\rho )\mathrm{\Delta }E}{\eta (\mathrm{\Gamma })}\right),$$
(6)
where $`z^{*}`$ is related to $`\rho `$ via (4). $`\mathrm{\Delta }E`$ can be interpreted as a gravitational potential energy barrier per particle, and $`\eta (\mathrm{\Gamma })`$ gives some measure of the degree of excitation of the system. Note that although $`\eta (\mathrm{\Gamma })`$ plays the role of $`k_\mathrm{B}T`$, we stop short of referring to it as a ‘granular temperature’ and instead regard it as a noise parameter which is defined by (6), with the only restriction that $`\eta (\mathrm{\Gamma })`$ should be a monotonic increasing function of $`\mathrm{\Gamma }`$. In what follows $`\eta (\mathrm{\Gamma })`$ is essentially treated as a fitting parameter. The physical meaning of $`\eta (\mathrm{\Gamma })`$ and $`\mathrm{\Delta }E`$ is discussed further in Sec. IV.
To fully specify the model, some rule is required that gives the density of a region after it has reconfigured. In general this will depend on its density before reconfiguration as well as $`\eta (\mathrm{\Gamma })`$, but for simplicity we shall ignore such considerations here and simply assume that the density after reconfiguration is given by the fixed probability density function $`\mu ^{*}(\rho )`$. Specifically, $`\mu ^{*}(\rho )\mathrm{d}\rho `$ is the probability that a region ‘falls’ into a configuration with a density in the range $`[\rho ,\rho +\mathrm{d}\rho )`$. The prior distribution $`\mu ^{*}(\rho )`$ (re-expressed in terms of the total energy barrier $`E`$ – see below) will play a central role in our model, although it shall be demonstrated that, over timescales relevant to the experiments, the model is essentially robust to the particular choice of $`\mu ^{*}(\rho )`$. This is fortuitous, as the precise form of $`\mu ^{*}(\rho )`$ is unknown and we have instead considered a range of plausible functional forms.
### B Summary of the model
Since the reconfiguration rate $`W(\rho ,\mathrm{\Gamma })`$ given in (6) depends on $`\rho `$ only via the total energy barrier $`E=z^{*}(\rho )\mathrm{\Delta }E`$, it is convenient to now make the change of variables $`\rho \rightarrow E`$, where
$`E`$ $`=`$ $`z^{*}\mathrm{\Delta }E=A\left({\displaystyle \frac{1}{\rho }}-{\displaystyle \frac{1}{\rho _{\mathrm{max}}}}\right)^{-1},`$ (8)
$`A`$ $`=`$ $`a\mathrm{\Delta }E,`$ (9)
which is one-to-one and hence invertible for all $`\rho \in [0,\rho _{\mathrm{max}})`$ and $`E\in [0,\infty )`$. Thus, in what follows, the state of the system at any given time $`t`$ will in the first place be defined by the distribution of energy barriers $`P(E,t)`$, and only then shall the mean density $`\rho (t)`$ be found by inverting the mapping (II B) and averaging over $`P(E,t)`$, i.e.
$$\rho (t)=\int _0^{\infty }\frac{P(E,t)}{\frac{A}{E}+\frac{1}{\rho _{\mathrm{max}}}}\mathrm{d}E.$$
(10)
Note that, in principle, small values of $`E`$ should be disallowed to reflect the fact that low density configurations are not mechanically stable and will not arise. For the sake of simplicity we choose to ignore this subtlety here.
The master equation for $`P(E,t)`$ can be derived as follows. The rate at which a region with a local barrier $`E(\rho )`$ reconfigures is given by $`\omega _0\mathrm{e}^{-E/\eta }`$, where the constant $`\omega _0`$ fixes the timescale. After reconfiguring, the region falls into a state with a new barrier $`E_{\mathrm{new}}`$ with a probability $`\mu (E_{\mathrm{new}})`$, where $`\mu (E)`$ is just $`\mu ^{*}(\rho )`$ after the change of variables, $`\mu (E)\mathrm{d}E=\mu ^{*}(\rho )\mathrm{d}\rho `$. Assuming that the number of taps $`t`$ can be well approximated as a continuous variable, $`P(E,t)`$ evolves in time according to
$`{\displaystyle \frac{1}{\omega _0}}{\displaystyle \frac{\partial P(E,t)}{\partial t}}`$ $`=`$ $`-\mathrm{e}^{-E/\eta }P(E,t)+\omega (t)\mu (E),`$ (12)
$`\omega (t)`$ $`=`$ $`{\displaystyle \int _0^{\infty }}\mathrm{e}^{-E/\eta }P(E,t)\mathrm{d}E.`$ (13)
The first and second terms on the right hand side of (12) correspond to regions with barriers $`E`$ before and after a reconfiguration event, respectively. Conservation of probability is ensured by $`\omega (t)`$, which is the total rate of reconfiguration events at time $`t`$.
Remarkably, the coupled equations (12) and (13) are identical to the trap model of Bouchaud, which is known to qualitatively reproduce many features of spin glasses and supercooled liquids . Thus the model we have derived can also be viewed as Bouchaud’s trap model, with a mapping from the energy barrier $`E`$ to density $`\rho `$ that is reached via the two-stage process of first assuming that $`E`$ is proportional to the smallest region that can rearrange independently of its environment, à la Adam and Gibbs, and then using the Chicago group’s free volume argument to relate the size of this region to its density. The relationship with Bouchaud’s trap model is useful as it allows known analytical results to be transferred to this application, as described in the following section.
## III Comparison to the experiments
In this section we compare the predictions of the model to the experimental results given in . The general procedure employed throughout was to numerically integrate $`P(E,t)`$ in time according to the master equation (II B) from an initial state $`P(E,0)`$, using the method described in Appendix A. Ideally $`P(E,0)`$ would be chosen to mimic the distribution of density in the apparatus after the preparation phase, but since such information is not available we have instead employed the natural choice of $`P(E,0)=\mu (E)`$, which formally corresponds to an instantaneous ‘quench’ from $`\eta =\infty `$. No significant deviations are expected for other initial conditions after an initial transient. Once $`P(E,0)`$ was fixed, the constant $`A`$ in (II B) was chosen by trial-and-error to give an initial density close to the experimental value $`\rho (0)\approx 0.58`$. The density $`\rho (t)`$ was extracted at regular intervals by numerical evaluation of (10).
Each simulation was repeated for two different choices of $`\mu (E)`$, namely an exponential $`\mu (E)=\frac{1}{E_0}\mathrm{e}^{-E/E_0}`$ and a Gaussian $`\mu (E)=\sqrt{2/\pi \sigma ^2}\mathrm{e}^{-E^2/2\sigma ^2}`$, where without loss of generality we now choose units such that $`E_0=\sigma =1`$. Other $`\mu (E)`$ were also considered for the compaction under constant $`\eta `$ described in Sec. III A and were found to give the same behaviour for $`t\lesssim 10^4`$ taps, indicating that the model is robust to the particular choice of $`\mu (E)`$ over the experimental timeframe. However, this robustness does not extend to the $`t\rightarrow \infty `$ limit, where it is already known that different $`\mu (E)`$ can give qualitatively different behaviour. This is discussed thoroughly in , but in brief, an exponential tail $`\mu (E)\sim \mathrm{e}^{-E}`$ gives rise to a glass transition at $`\eta =1`$ , in the sense that an equilibrium solution only exists for $`\eta >1`$ . This can be seen by simply setting $`\partial P/\partial t=0`$ in the master equation (II B),
$$P_{\mathrm{eqm}}(E)\equiv \lim _{t\rightarrow \infty }P(E,t)=\omega (\infty )\,\mathrm{e}^{E/\eta }\mu (E),$$
(14)
which is not normalisable for $`\eta \leq 1`$ if $`\mu (E)\sim \mathrm{e}^{-E}`$, and hence equilibrium cannot be reached. By contrast, if $`\mu (E)`$ decays more rapidly than exponentially, e.g. if it has a Gaussian tail, then an equilibrium solution exists for all $`\eta >0`$, although the equilibration time may be excessively large for small $`\eta `$. Note that this model quite generally predicts that the limiting density $`\rho _{\infty }=\lim _{t\rightarrow \infty }\rho (t)`$ is a monotonic decreasing function of $`\eta `$. A proof of this is given in Appendix B.
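These equilibrium properties are easy to explore numerically. The following sketch (illustrative only; the value $`A=0.1`$ is an arbitrary choice, not fitted to the experiments) evaluates $`\rho _{\infty }(\eta )`$ for the Gaussian prior by direct quadrature of (14) and (10); repeating the normalisation integral with the exponential prior shows it diverging as $`\eta \rightarrow 1^+`$:

```python
import numpy as np
from scipy.integrate import quad

A, rho_max = 0.1, 0.64                      # A is an arbitrary illustrative value

def rho_infinity(eta, mu):
    """Average rho(E) = E/(A + E/rho_max) over P_eqm(E) ~ exp(E/eta) mu(E)."""
    norm, _ = quad(lambda E: np.exp(E / eta) * mu(E), 0.0, np.inf)
    avg, _ = quad(lambda E: np.exp(E / eta) * mu(E) * E / (A + E / rho_max),
                  0.0, np.inf)
    return avg / norm

mu_gauss = lambda E: np.sqrt(2.0 / np.pi) * np.exp(-E**2 / 2.0)
for eta in (2.0, 1.0, 0.5):
    print(eta, rho_infinity(eta, mu_gauss))  # rho_inf rises as eta is lowered
```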
### A Constant excitation intensity
Simulation results for the mean density $`\rho (t)`$ over a range of $`\eta `$ are given in Fig. 1. Also given are fits to the empirical law (1), demonstrating that it is well obeyed with either an exponential or a Gaussian $`\mu (E)`$. We have also checked and found similar logarithmic behaviour for a selection of other $`\mu (E)`$, such as uniform on $`[E_0,E_1]`$, both with $`E_0=0`$ and $`E_0>0`$, Gaussian with a non-zero mean, Cauchy, and exponential limited to the range $`[E_0,E_1]`$. However, logarithmic relaxation is not expected for pathological $`\mu (E)`$ such as $`\delta (E-E_0)`$ or $`\mathrm{exp}(-\mathrm{e}^E)`$.
The logarithmic behaviour can be understood by considering the scaling solution to the master equation (II B) already found by Monthus and Bouchaud . They demonstrated that, after a short transient, $`P(E,t)`$ can be expressed in terms of a single scaling variable $`u`$ as
$$P(E,t)=\frac{1}{\eta }u\varphi (u),\hspace{1em}u=\frac{\mathrm{e}^{E/\eta }}{\omega _0t}.$$
(15)
Strictly speaking this is only true for an exponential $`\mu (E)`$ below the glass point, but it was also demonstrated that a Gaussian $`\mu (E)`$ admits a similar scaling solution until a time $`t^{*}\sim \omega _0^{-1}\mathrm{exp}(1/\eta ^2)`$, which may lie well beyond the experimental timeframe when $`\eta `$ is small. The physical picture underlying this scaling behaviour is that the sizes of the cooperatively rearranging regions, which are proportional to $`E=\eta \mathrm{ln}(\omega _0tu)`$, increase logarithmically in time. A logarithmic increase in domain size has also been found in the Tetris model .
Over timescales for which the scaling solution (15) holds, the density can be expressed in terms of $`\varphi (u)`$ by changing variables from $`E`$ to $`u`$ in (10),
$$\frac{\rho (t)}{\rho _{\mathrm{max}}}=1-\int _{\frac{1}{\omega _0t}}^{\infty }\frac{\varphi (u)}{1+{\displaystyle \frac{\eta }{A\rho _{\mathrm{max}}}}\mathrm{ln}(\omega _0tu)}\mathrm{d}u.$$
(16)
The similarity of this expression to the empirical law (1) is striking. The primary difference is that here we must integrate over a distribution of $`u`$, which will in general introduce corrections to the simple logarithmic law. The simulation results in Fig. 1 demonstrate that any such corrections are at most small.
The form of the theoretical prediction (16) makes it difficult to calculate the fitting parameters $`\mathrm{\Delta }\rho `$, $`B`$ and $`\tau `$ in the empirical law (1). However, one parameter that can trivially be fixed is the projected final density $`\rho _\mathrm{f}`$ , which is always equal to $`\rho _{\mathrm{max}}`$ here, regardless of $`\eta (\mathrm{\Gamma })`$. In contrast, the experiments seem to indicate that $`\rho _\mathrm{f}`$ is a non-monotonic function of $`\mathrm{\Gamma }`$ . There is no easy way to resolve this discrepancy. For instance, one cannot simply assume that $`\rho _{\mathrm{max}}`$ is itself a function of $`\mathrm{\Gamma }`$, i.e. $`\rho _{\mathrm{max}}=\rho _{\mathrm{max}}(\mathrm{\Gamma })`$. Quite apart from the conceptual difficulties this would invoke for the physical meaning of $`\rho _{\mathrm{max}}`$ , it would allow situations in which negative free volume could arise, for instance by first allowing a system to relax arbitrarily close to $`\rho _{\mathrm{max}}(\mathrm{\Gamma })`$ and then suddenly changing to a $`\mathrm{\Gamma }^{\prime }`$ for which $`\rho _{\mathrm{max}}(\mathrm{\Gamma }^{\prime })<\rho _{\mathrm{max}}(\mathrm{\Gamma })`$. By definition, $`\overline{v_\mathrm{f}}`$ would then be negative. Note that this contradiction is not specific to this model but will arise whenever the definition of free volume (2) is used.
We believe the solution to this problem lies in the range of $`t`$ over which the data fitting has been performed. As mentioned previously, the scaling solution (15), and hence the logarithmic relaxation, only applies after a short transient, typically $`t\gtrsim 10^2`$–$`10^3`$ taps. However, we have found that it is still possible to attain a very reasonable fit to the empirical law over the whole range $`0\leq t\leq 10^4`$, but only at the expense of predicting the wrong $`\rho _\mathrm{f}`$ . This is clearly demonstrated in Fig. 2, which shows that a fit that works well for $`0\leq t\leq 10^4`$ fails when extrapolated to larger $`t`$, whereas fixing $`\rho _\mathrm{f}=\rho _{\mathrm{max}}`$ gives an initially poorer fit but recovers the correct asymptotic behaviour. Transferring this insight to the experiments suggests that discarding the first 1-10% of the experimental data points and then repeating the fitting procedure would result in a similar logarithmic compaction law as before, but with $`\rho _\mathrm{f}`$ independent of $`\mathrm{\Gamma }`$. The various time regimes in this model are summarised schematically in Fig. 3.
### B Annealing curve
Further insight into the nature of the system’s relaxation properties can be gained by allowing the tap intensity to vary in time, which roughly corresponds to varying the temperature in other slowly relaxing systems . Two time-dependencies will be considered in this paper. The first is the ‘annealing curve,’ which was experimentally attained by cyclically ramping $`\mathrm{\Gamma }`$ in a stepwise fashion between some high value $`\mathrm{\Gamma }=\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }=0`$ . Slowly decreasing $`\mathrm{\Gamma }`$ removes low density local configurations without creating many new ones, hence the term ‘annealing.’ The second protocol for varying $`\mathrm{\Gamma }`$ will be investigated in the next subsection.
The annealing curve for this model is obtained by allowing $`\eta `$ to smoothly vary from 0 to some value $`\eta _1`$ to 0 to $`\eta _1`$ again, where the duration of each leg is denoted by $`t_{\mathrm{leg}}`$ . Simulation results for $`t_{\mathrm{leg}}=10^6`$ are given in Fig. 4. The experimental annealing curve has a similar shape, except that the initial density increase for small $`\mathrm{\Gamma }`$ is noticeably slower than that for small $`\eta `$ . This may simply be due to a non-trivial mapping from $`\mathrm{\Gamma }`$ to $`\eta `$, as discussed in Sec. IV. Note that the second and third legs in Fig. 4 form a reversible curve which is nonetheless out of equilibrium for small $`\eta `$. Observe also the presence of a narrow hysteresis loop, which is also present in microscopic models but has never been systematically searched for in experiments. The area of this hysteresis loop decreases for slower cooling rates, as demonstrated in Fig. 5.
To interpret these results in a glassy context, recall that the initial conditions were chosen to conform to the equilibrium state at $`\eta =\infty `$, i.e. $`P(E,0)=P_{\mathrm{eqm}}(E)|_{\eta =\infty }=\mu (E)`$. This would be valid if the initial low density configuration in the experiments corresponded to an equilibrium state for very large tapping intensity $`\mathrm{\Gamma }`$, which seems plausible. Thus the start of the first leg corresponds to a rapid quench from high $`\eta `$ to $`\eta \approx 0`$, leaving the system far from equilibrium. The rate of compaction is initially rapid but slows as the density, and hence the relaxation times, increase. For sufficiently high $`\eta `$, the density reaches and starts to follow the equilibrium curve, rapidly erasing the memory of its history. As $`\eta `$ is lowered a second time, this time corresponding to a slow quench, the system remains near equilibrium until some value $`\eta _0`$ (which depends on the cooling rate) when the relaxation time rapidly increases and the system essentially freezes. Thus the difference between the first and third legs in Fig. 4 can be understood as the recovery from a rapid and a slow quench, respectively.
### C Shift in the excitation intensity
Recent experiments have investigated the effect of allowing $`\mathrm{\Gamma }`$ to ‘shift’ from a constant value $`\mathrm{\Gamma }_0`$ to another constant value $`\mathrm{\Gamma }_1`$ at a given time $`t_0`$ . It was found that the system evolved in a way that depended on its history as well as its current density and excitation intensity $`\mathrm{\Gamma }`$, representing a form of memory akin to that in glassy systems . Plotted in Fig. 6 are the corresponding results for this model, where $`\eta =\eta _0=0.5`$ until $`t=t_0=50`$ taps, when it changes to $`\eta _1=\eta _0+\mathrm{\Delta }\eta `$. The sign of the initial density change is opposite to the sign of $`\mathrm{\Delta }\eta `$, as in the experiments, although this is not entirely general and the behaviour is reversed if $`t_0`$ is too small. Also shown in the inset is the case when $`\eta `$ is changed from different $`\eta _0`$ to the same value $`\eta _1=0.3`$ when the density reaches a predetermined value. Again there is qualitative agreement with the experiments.
The analogy with glass systems suggests that the timescale of the response to a shift in $`\eta `$ at $`t_0`$ should scale with $`t_0`$ in some manner . With this insight, we now make the following prediction, in the hope it may be tested experimentally. Let $`\mathrm{\Delta }\rho (t-t_0)`$ be the difference in density at time $`t`$ between the perturbed system and an unperturbed one, i.e. one with $`\mathrm{\Delta }\eta =0`$ . Plotted in Fig. 7 is $`\mathrm{\Delta }\rho (t-t_0)`$ for a shift from a low to a high $`\eta `$ at times $`t_0=10^3`$, $`10^4`$, $`10^5`$, $`10^6`$ and $`10^7`$. In each case there is a well-defined time for the peak response $`t^{\mathrm{resp}}`$, which increases with $`t_0`$ . Known results for the trap model with an exponential $`\mu (E)`$ suggest that $`t^{\mathrm{resp}}\sim t_0^{\eta _0/\eta _1}`$, where the exponent is independent of $`t_0`$ . We find this to be a good first approximation to our data, as demonstrated by the inset to Fig. 7, although there are corrections arising from the non-linear mapping from $`E`$ to $`\rho `$, which distorts the underlying scaling behaviour in $`E`$. There are also additional small corrections when using a Gaussian $`\mu (E)`$.
Finally, the experiments briefly investigated what happens when $`\mathrm{\Gamma }`$ is allowed to return to its initial value after $`\delta t`$ taps at a higher value $`\mathrm{\Gamma }_1`$ . For comparison, the equivalent results from this model are given in Fig. 8. The observed trend is in accord with the experimental observations. A full study of this variation in $`\eta (t)`$ for all $`\eta _0`$ , $`\eta _1`$ , $`t_0`$ and $`\delta t`$ is beyond the scope of this paper and will not be discussed further here.
### D Fluctuations and power spectra
In a finite system the density in equilibrium is not constant but fluctuates about its mean value. To investigate density fluctuations in this model, a different version of the code was employed which explicitly simulates a system consisting of $`N`$ separate subsystems (details given in Appendix A). Fig. 9 shows the probability distribution $`Q(\mathrm{\Delta }\rho )`$ of fluctuations $`\mathrm{\Delta }\rho \equiv \rho (t)-\rho _{\mathrm{modal}}`$ for $`N=500`$ and different $`\eta `$. To first approximation $`Q(\mathrm{\Delta }\rho )`$ is Gaussian, but it is slightly skewed towards lower densities, becoming more so as $`\eta `$ is lowered. The skewness arises from the non-linear mapping from $`E`$ to $`\rho `$, which exaggerates fluctuations to lower densities whilst suppressing those to higher densities. There is also a cut-off for very large $`|\mathrm{\Delta }\rho |`$ , when $`P(E,t)`$ has deviated significantly from $`P_{\mathrm{eqm}}(E)`$. The experiments exhibited Gaussian fluctuations with some anomalous deviations for $`\mathrm{\Delta }\rho >0`$ ; this is discussed below.
The power spectra $`S(f)`$ of density fluctuations in equilibrium for various $`\eta `$ and a Gaussian $`\mu (E)`$ are given in Fig. 10. $`S(f)\sim 1/f^2`$ for $`f`$ greater than a high frequency shoulder $`f_\mathrm{H}`$ , where $`f_\mathrm{H}`$ is only weakly dependent on $`\eta `$, although it should be stressed that this model focuses on cooperative relaxation modes and is not intended to describe the high frequency, single particle dynamics. For low frequencies, $`S(f)`$ appears to obey non-trivial power law behaviour $`S(f)\sim 1/f^\delta `$, where the exponent $`\delta `$ can be very approximately fitted to $`\delta \approx 1-\eta `$ over the range $`10^{-5}<f<10^{-3}`$. However, the analysis given in Appendix C shows that this is not the true asymptotic behaviour and $`S(f)\rightarrow 1/f^0`$ as $`f\rightarrow 0`$. The crossover to $`1/f^0`$ behaviour occurs around a low frequency shoulder $`f_\mathrm{L}`$ , where $`f_\mathrm{L}\rightarrow 0`$ rapidly as $`\eta \rightarrow 0`$. For an exponential $`\mu (E)`$ there is only one shoulder frequency separating the high frequency, $`1/f^2`$ regime from a low frequency regime in which $`S(f)\sim 1/f^\delta `$, where $`\delta =2-\eta `$ for $`1<\eta <2`$ and $`\delta =0`$ for $`\eta \geq 2`$. Note that there is no $`1/f^0`$ region for $`\eta <2`$, even though the system is in equilibrium. This apparent anomaly is explained in Appendix C.
In the experiments, the power spectra were found to obey non-trivial power law behaviour $`S(f)\sim 1/f^\delta `$, with $`\delta =0.9\pm 0.2`$, between two corner frequencies $`f_\mathrm{L}`$ and $`f_\mathrm{H}`$ that both decreased as $`\mathrm{\Gamma }`$ was lowered . More complex behaviour was observed for larger $`\mathrm{\Gamma }`$ towards the bottom of the apparatus. The results from this model are in partial agreement; for instance, it is still one of the few that can exhibit a $`1/f^\delta `$ regime with $`\delta \approx 1`$ (see also ). There are some discrepancies, but these may simply be due to processes not currently incorporated into the model, such as single particle dynamics or the existence of metastable, high-density crystalline domains.
## IV Discussion of the model parameters
Given the success of this model in reproducing the experimental phenomenology, it is natural to ask if its principal assumptions can be placed on a firmer foundation. In particular, a number of parameters introduced in Sec. II A have so far been treated somewhat heuristically. To redress the balance, we now discuss the physical interpretation of some of these parameters. A more thorough analysis may be possible by detailed comparison with a microscopic model, for instance.
#### 1 The noise parameter $`\eta (\mathrm{\Gamma })`$
It was stressed during the derivation of this model that the noise parameter $`\eta `$ need not bear any relation to the concept of granular temperature . By the same token, the use of the term ‘equilibrium’ to describe the statistical steady state merely refers to a dynamic equilibrium, without supposing any analogy with a thermodynamic one. Instead, $`\eta `$ was defined in the broadest sense of simply giving some measure of the degree of excitation of the system during a single tap. This loose definition makes finding the precise relationship with $`\mathrm{\Gamma }`$ difficult. Nonetheless it is still possible to predict the overall shape of $`\eta (\mathrm{\Gamma })`$, as we now argue.
For $`\eta `$ to be non-zero, the particles must at the very least separate from their nearest neighbours. Experiments on vibrated granular systems often claim to find some critical $`\mathrm{\Gamma }_\mathrm{c}`$ such that the relative motion of the particles is either minimal or non-existent for $`\mathrm{\Gamma }<\mathrm{\Gamma }_\mathrm{c}`$ (usually $`1\lesssim \mathrm{\Gamma }_\mathrm{c}\lesssim 2`$, see e.g. ). A facile explanation for this is to suppose that a granular body is held together by frictional forces, and that relative motion between adjacent particles is not possible until some static friction threshold has been overcome. Since all the normal contact forces are proportional to $`g`$, the distribution of threshold forces will also scale with $`g`$ and thus the relevant parameter would indeed be $`\mathrm{\Gamma }=a_{\mathrm{max}}/g`$. However, friction is not the only relevant mechanism. For instance, particle separation will still occur in vertical one dimensional columns, where friction clearly plays no part. Even in this case theory suggests that the relevant parameter is again $`\mathrm{\Gamma }`$, at least in the limit of hard spheres .
Assuming that a well-defined $`\mathrm{\Gamma }_\mathrm{c}`$ exists, the overall shape of $`\eta (\mathrm{\Gamma })`$ will obey $`\eta (\mathrm{\Gamma })\approx 0`$ for small $`\mathrm{\Gamma }`$, only significantly deviating from zero for $`\mathrm{\Gamma }\gtrsim \mathrm{\Gamma }_\mathrm{c}`$ . Just this qualitative behaviour has been found in simulations of horizontally vibrated systems . As a corollary, the annealing curves presented in Figs. 4 and 5 will be flatter for small $`\mathrm{\Gamma }`$ when plotted against $`\mathrm{\Gamma }`$ rather than $`\eta `$, in better agreement with the experimental graphs .
#### 2 The prior distribution $`\mu (E)`$
The Gaussian and exponential $`\mu (E)`$ employed in the simulations were chosen as plausible first guesses of the real $`\mu (E)`$. To calculate the actual $`\mu (E)`$ is a non-trivial problem, but a first step might be to re-express $`\mu (E)`$ in terms of $`\psi (\overline{v_\mathrm{f}})`$, the distribution of free volume after reconfiguration. Using $`E=z^{*}\mathrm{\Delta }E=Av_\mathrm{g}/\overline{v_\mathrm{f}}`$ (3), this gives
$$\mu (E)=\frac{Av_\mathrm{g}}{E^2}\psi \left(\frac{Av_\mathrm{g}}{E}\right).$$
(17)
In principle, $`\psi (\overline{v_\mathrm{f}})`$ could be found from a microscopic model, such as the parking lot model , for which the free volume is also the void volume.
Note that, from (17), the tail of $`\mu (E)`$, which is so important to the long-time relaxational properties of Bouchaud’s trap model, can be related to the $`\overline{v_\mathrm{f}}\rightarrow 0^+`$ behaviour of $`\psi (\overline{v_\mathrm{f}})`$. For example, if $`\psi (\overline{v_\mathrm{f}})`$ vanishes according to $`\psi (\overline{v_\mathrm{f}})\sim \mathrm{exp}(-\alpha /\overline{v_\mathrm{f}})`$, then $`\mu (E)`$ will have an exponential tail and the trap model predicts a glass transition at a finite noise intensity $`\eta =Av_\mathrm{g}/\alpha `$. Similarly, $`\psi (\overline{v_\mathrm{f}})\sim \mathrm{exp}(-\alpha /\overline{v_\mathrm{f}}^2)`$ corresponds to a $`\mu (E)`$ with a Gaussian tail.
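This correspondence is simple to verify numerically; the sketch below (with arbitrary illustrative values for $`\alpha `$ and the product $`Av_\mathrm{g}`$) applies the change of variables (17) to $`\psi (\overline{v_\mathrm{f}})\sim \mathrm{exp}(-\alpha /\overline{v_\mathrm{f}})`$ and checks that the logarithmic slope of $`\mu (E)`$ tends to $`-\alpha /Av_\mathrm{g}`$, i.e. an exponential tail:

```python
import numpy as np

alpha, Avg = 2.0, 1.0                   # illustrative values; Avg stands for A*v_g
E = np.linspace(5.0, 50.0, 10)
v_f = Avg / E                           # inverse of E = A v_g / v_f
mu = (Avg / E**2) * np.exp(-alpha / v_f)     # Eq. (17) with psi ~ exp(-alpha/v)
slope = np.gradient(np.log(mu), E)
print(slope)                            # tends to -alpha/Avg = -2 at large E
```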
#### 3 The constant $`A=a\mathrm{\Delta }E`$
Even though $`A`$ has been treated as an arbitrary constant and fixed by the initial conditions, its component factors $`a`$ and $`\mathrm{\Delta }E`$ have a physical interpretation, as we now discuss. In (3), $`a`$ is defined as the constant of proportionality between $`z^{*}`$, the smallest number of particles that can cooperatively rearrange, and the ratio $`v_\mathrm{g}/\overline{v_\mathrm{f}}`$ . In the Chicago group’s original argument , $`a`$ was set to 1; however, we prefer not to fix $`a`$ at any particular value and suggest that it may depend upon particle properties such as their shape. For instance, highly irregular particles will obstruct motion more effectively than rounder particles of the same volume, and so should have a higher value of $`a`$.
$`\mathrm{\Delta }E`$ was originally defined as a gravitational potential energy barrier. Indeed, assuming that the particles interact via hard core repulsion, this is the only available potential energy scale in the system. This suggests that $`\mathrm{\Delta }E`$ is proportional to the mean vertical displacement between adjacent particles. If so, then our implicit assumption that $`\mathrm{\Delta }E`$ is independent of $`\eta `$ and $`\rho `$ is compatible with the hard sphere Monte Carlo simulations of Barker and Mehta , who demonstrated that the distribution of contact angles between particles is roughly constant over a wide range of shaking amplitudes.
More importantly, if $`\mathrm{\Delta }E`$ is gravitational potential energy, then inspection of the reconfiguration rate (6) indicates that $`\eta `$ must also have units of energy, with an energy scale that is presumably coupled to the driving. For definiteness, suppose $`\eta `$ can be written as $`\eta =mgA_0f(\mathrm{\Gamma })`$, where $`m`$ is the typical mass of the particles, $`A_0`$ is the amplitude of the driving and $`f(\mathrm{\Gamma })`$ is some function of the dimensionless parameter $`\mathrm{\Gamma }=a_{\mathrm{max}}/g`$ (possibly with a threshold around $`\mathrm{\Gamma }\approx \mathrm{\Gamma }_\mathrm{c}`$ as discussed earlier). Similarly, write $`\mathrm{\Delta }E\sim mgr`$, where $`r`$ is the typical particle radius. Since $`\mathrm{\Delta }E`$ and $`\eta `$ only appear in the ratio $`\mathrm{\Delta }E/\eta `$, $`m`$ and $`g`$ will cancel and the dynamics of the model will depend on two dimensionless quantities, namely $`\mathrm{\Gamma }`$ and a dimensionless displacement $`A_0/r`$. The existence of a second relevant dimensionless parameter implies that the behaviour in response to high amplitude, low frequency driving may be qualitatively different from a low amplitude, high frequency driving with the same value of $`\mathrm{\Gamma }`$. This possibility has not yet been explored in the experiments, which seem to have focused on the low frequency regime.
Note that we could equally have expressed $`\eta `$ in terms of the kinetic energy supplied by the driving, i.e. $`\eta =mv_0^2\stackrel{~}{f}(\mathrm{\Gamma })`$, where $`v_0`$ is the typical driving velocity. However, this is not an independent energy scale as $`v_0`$ can be dimensionally related to $`a_{\mathrm{max}}`$ and $`A_0`$ by $`v_0\sim \sqrt{a_{\mathrm{max}}A_0}`$ . We only mention this latter alternative because simulations often show scaling plots in terms of $`\mathrm{\Gamma }`$ and $`v_0`$ (see e.g. and references therein).
## V Summary and conclusions
To summarise, we have constructed a simple model for weakly excited granular media that combines the Chicago group’s free volume argument with elements of the supercooled liquid theory of Adam and Gibbs. Integration of the master equation has shown that the model behaves in a similar manner to the experiments for each of the situations considered. Some slight discrepancies remain with the power spectra, but these may be due to mechanisms currently lacking from the model, such as ordering effects and crystallinity, depth dependency or wall effects. It would be interesting to see if any of these mechanisms could be incorporated into an extended version of the model. It may also be possible to introduce orientational degrees of freedom and compare the results to recent experiments on nylon rods .
The model has also been used to predict the manner in which the time of the peak response to a shift in $`\mathrm{\Gamma }`$ at $`t=t_0`$ scales with $`t_0`$ , as discussed in Sec. III C. This prediction could be tested experimentally and may help to differentiate between the large number of models that have so far been proposed , as it seems unlikely that they will all give the same scaling behaviour. Further insight into the physical mechanisms underlying the compaction process could be gained by measuring the typical size of reconfiguring regions as a function of time, or by seeing if the locations of such regions are spatiotemporally correlated. Such measurements could be performed in simulations, or by direct visualisation of two dimensional experiments , for instance.
Finally, we note that the relationship between granular media and glasses can be given a more intuitive appeal by the following simple argument. Consider sand poured from a great height into a container. When the particles first hit the surface of the forming sandpile, the direction in which they bounce will essentially be random, giving rise to a large random velocity component. This corresponds to a highly excited state with (in our notation) a high $`\eta `$. However, the particles will rapidly lose their kinetic energy by inelastic collisions and will soon come to rest, jamming under gravity into a static, disordered configuration with $`\eta =0`$. It is not difficult to see how this sequence of events can be related to the rapid ‘quench’ of a supercooled liquid or other glass-forming material.
Just after the initial submission of this work, we became aware of a master equation for the glass transition due to Dyre , which is similar to Bouchaud’s equation studied in this paper but with a built-in cut-off in the range of allowed energies. Also, it has been brought to our attention that the two regimes of vibration mentioned in Sec. IV 3 have previously been discussed in the context of size segregation by Mehta and Barker .
## Acknowledgements
The author would like to thank Mike Cates for helpful discussions and careful reading of the manuscript, and also Joachim Wittmer, Mario Nicodemi, Alan Bray, Jean-Philippe Bouchaud, Robin Stinchcombe and Suzanne Fielding for stimulating discussions on the experiments and this model. We would also like to thank Jeppe Dyre for bringing our attention to reference , and Anita Mehta and Gary Barker for reference . This work was funded by UK EPSRC grant no. GR/M09674.
## A Simulation details
The bulk of the simulation results were obtained by numerical integration of the continuous master equation (II B). $`P(E,t)`$ was defined on a mesh of points $`P_{ij}=P(i\delta E,j\delta t)`$, where $`0\leq i\leq i_{\mathrm{max}}`$ and $`j\geq 0`$. Care was taken to ensure that $`E_{\mathrm{max}}\equiv i_{\mathrm{max}}\delta E`$ was set sufficiently high that there was no significant cut-off to $`P(E,t)`$ at large $`E`$. To iterate over a single time step $`\delta t`$, $`\omega (t)`$ was found from numerical integration of (13) and then assumed to remain constant over the required time interval. This allowed the time evolution equation (12) to be solved and $`P_{ij+1}`$ found from $`P_{ij}`$ for all $`i`$. The whole distribution was then renormalised by a factor $`\left(\sum _iP_{ij}\right)^{-1}`$ to correct for the non-conservation of probability resulting from the assumption of a constant $`\omega (t)`$. For relaxation under constant $`\eta `$, simulation times were improved by employing a geometric mesh with a linearly increasing time step $`\delta t\propto t`$. This allowed times up to $`t=10^{10}`$ to be reached with only modest CPU time.
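A minimal rendering of this scheme (a sketch with assumed, illustrative parameter values, not the code actually used for the figures) might look as follows:

```python
import numpy as np

eta, omega0, A, rho_max = 0.5, 1.0, 0.1, 0.64   # illustrative parameters
dE = 0.01
E = np.arange(dE, 30.0, dE)                     # mesh with E_max well above the tail
mu = np.exp(-E); mu /= mu.sum() * dE            # exponential prior, normalised
P = mu.copy()                                   # 'quench' from eta = infinity
rate = omega0 * np.exp(-E / eta)

t, dt = 0.0, 0.01
while t < 1e4:
    omega = np.sum(rate * P) * dE               # Eq. (13), held fixed over dt
    # exact step of Eq. (12) at constant omega; expm1 is safe for tiny rates
    P = P * np.exp(-rate * dt) - omega * mu / rate * np.expm1(-rate * dt)
    P /= P.sum() * dE                           # restore normalisation
    t += dt
    dt *= 1.001                                 # geometric time mesh

rho = np.sum(P / (A / E + 1.0 / rho_max)) * dE  # Eq. (10)
print(t, rho)
```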
For the density fluctuations investigated in Sec. III D, the continuous master equation was of no use and an alternative method was employed which explicitly includes finite size effects. This involved assigning $`N`$ array elements a barrier $`E_i`$ , $`i=1,\mathrm{},N`$, according to the chosen initial conditions. At every time step $`\delta t=1`$, each element was assigned a new barrier with probability $`\omega _0\mathrm{e}^{-E_i/\eta }`$, where the new barrier values were drawn from the prior $`\mu (E)`$. The density $`\rho _i`$ of each element was found by inverting the mapping (II B), and the mean density calculated by straightforward summation, $`\rho (t)=\frac{1}{N}\sum _i\rho _i`$ .
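The stochastic version admits an equally short sketch (again with assumed, illustrative parameter values and a Gaussian prior; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, omega0, A, rho_max = 500, 0.5, 1.0, 0.1, 0.64   # illustrative values

def draw_barriers(n):
    return np.abs(rng.normal(size=n))   # half-normal draws realise the Gaussian prior

E = draw_barriers(N)
densities = []
for tap in range(100_000):
    jump = rng.random(N) < omega0 * np.exp(-E / eta)    # which traps renew this tap
    E[jump] = draw_barriers(jump.sum())
    densities.append(np.mean(1.0 / (A / E + 1.0 / rho_max)))  # (1/N) sum of rho_i

late = np.array(densities[-50_000:])
print(late.mean(), late.std())          # late-time mean and fluctuation size
```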
## B Monotonicity of $`\rho _{\infty }(\eta )`$ in $`\eta `$
In this appendix it is shown that the asymptotic density $`\rho _{\infty }=\lim _{t\rightarrow \infty }\rho (t)`$ is a monotonic decreasing function of $`\eta `$ for essentially any $`\mu (E)`$. If an equilibrium state exists, it takes the form $`P_{\mathrm{eqm}}(E)=\omega _{\infty }(\eta )\mathrm{e}^{E/\eta }\mu (E)`$ and hence from (10),
$`{\displaystyle \frac{\rho _{\infty }(\eta )}{\rho _{\mathrm{max}}}}`$ $`=`$ $`1-{\displaystyle \int _0^{\infty }}{\displaystyle \frac{\omega _{\infty }(\eta )\mathrm{e}^{E/\eta }\mu (E)}{1+E/A\rho _{\mathrm{max}}}}\mathrm{d}E,`$ (B1)
$`\omega _{\infty }(\eta )`$ $`\equiv `$ $`\underset{t\rightarrow \infty }{lim}\omega (\eta ,t)=\left({\displaystyle \int _0^{\infty }}\mathrm{e}^{E^{\prime }/\eta }\mu (E^{\prime })\mathrm{d}E^{\prime }\right)^{-1}.`$ (B2)
Differentiating (B1) with respect to $`\eta `$ and rearranging gives
$`{\displaystyle \frac{\eta ^2}{\omega _{\infty }^2(\eta )\rho _{\mathrm{max}}}}{\displaystyle \frac{\partial \rho _{\infty }(\eta )}{\partial \eta }}=`$ (B4)
$`{\displaystyle \int _0^{\infty }}{\displaystyle \int _0^{\infty }}(E-E^{\prime }){\displaystyle \frac{\mu (E)\mu (E^{\prime })\mathrm{e}^{(E+E^{\prime })/\eta }}{1+E/A\rho _{\mathrm{max}}}}\mathrm{d}E\mathrm{d}E^{\prime }.`$
After the change of variables $`u=E+E^{\prime }`$ and $`v=E-E^{\prime }`$ and substituting $`v\rightarrow -v`$ over the domain $`v<0`$, the right hand side of (B4) transforms to
$`-{\displaystyle \int _{u=0}^{\infty }}{\displaystyle \int _{v=0}^{u}}v^2\mathrm{e}^{u/\eta }\mu \left({\displaystyle \frac{u+v}{2}}\right)\mu \left({\displaystyle \frac{u-v}{2}}\right)`$ (B6)
$`\times \left[(2A\rho _{\mathrm{max}}+u+v)(2A\rho _{\mathrm{max}}+u-v)\right]^{-1}\mathrm{d}u\mathrm{d}v.`$
Since all the factors inside the integral (B6) are positive, it can be trivially deduced that
$$\frac{\partial \rho _{\infty }}{\partial \eta }\leq 0\text{ for all }\eta \text{.}$$
(B7)
Equality is attained in only two cases. The first is if the integrand is strictly zero over the entire range $`v>0`$, which can only happen in the trivial case of a single valued distribution $`\mu (E)=\delta (E-E_0)`$. The second and more important situation is over a range of $`\eta `$ for which no equilibrium state exists. In this case, all the moments of $`P(E,t)`$ diverge as $`t\rightarrow \infty `$ and $`\rho (t)\rightarrow \rho _{\mathrm{max}}`$ from (II B). A plot of $`\rho _{\infty }(\eta )`$ versus $`\eta `$ for an exponential and a Gaussian $`\mu (E)`$ is given in Fig. 12.
## C Analysis of the power spectra
The small $`f`$ behaviour of the power spectrum of density fluctuations $`S(f)`$ is analytically derived in this appendix, which extends the range of the numerical observations discussed in Sec. III D. The time-dependent spectrum near the glass point has already been derived in ; here we consider the $`f\rightarrow 0^+`$ limit in equilibrium for general $`\mu (E)`$ over a wider range of $`\eta `$.
Since each local region is assumed to relax independently of its environment, the total power spectrum $`S(f)`$ is just the spectrum for a single region with a relaxation time $`\tau `$ averaged over $`\mathrm{\Phi }(\tau )`$, the distribution of relaxation times in equilibrium,
$$S(f)\propto \int \frac{\tau \mathrm{\Phi }(\tau )}{1+(2\pi f\tau )^2}\mathrm{d}\tau .$$
(C1)
The small $`f`$ behaviour of (C1) depends on the asymptotic behaviour of $`\mathrm{\Phi }(\tau )`$ for large $`\tau `$. If $`\mathrm{\Phi }(\tau )`$ decays faster than $`\tau ^{-2}`$, then the $`f\rightarrow 0`$ limit exists and $`S(f)`$ exhibits the expected $`1/f^0`$ noise for low frequencies. However, if $`\mathrm{\Phi }(\tau )\sim \tau ^{-x}`$ with $`1<x<2`$, then $`S(f)\sim 1/f^{2-x}`$, as can be readily seen by substituting for $`f\tau `$ in (C1) (note that $`x>1`$ since $`\mathrm{\Phi }(\tau )`$ is normalisable).
For the trap model, $`\mathrm{\Phi }(\tau )`$ can be found for any given $`\mu (E)`$ by simply making the change of variables $`\tau =\frac{1}{\omega _0}\mathrm{e}^{E/\eta }`$ into the expression for $`P_{\mathrm{eqm}}(E)`$, equation (14). Thus, an exponential $`\mu (E)\propto \mathrm{e}^{-E/\eta _{\mathrm{g}}}`$ gives $`\mathrm{\Phi }(\tau )\propto \tau ^{-\eta /\eta _{\mathrm{g}}}`$, implying that $`S(f)\sim 1/f^{2-\eta /\eta _{\mathrm{g}}}`$ for $`\eta _{\mathrm{g}}<\eta <2\eta _{\mathrm{g}}`$ . This confirms that $`S(f)`$ does not reduce to $`1/f^0`$ noise for this range of $`\eta `$, even though the system is in equilibrium. The usual $`1/f^0`$ behaviour is recovered when $`\eta \ge 2\eta _{\mathrm{g}}`$ , which also applies for all $`\eta `$ when $`\mu (E)`$ decays faster than exponentially. In particular, a Gaussian $`\mu (E)\propto \mathrm{e}^{-E^2/2\sigma ^2}`$ leads to an equilibrium distribution of relaxation times of the form
$$\mathrm{\Phi }(\tau )\propto \tau ^{-\frac{\eta ^2}{2\sigma ^2}\mathrm{ln}(\omega _0\tau )},$$
(C2)
which is suggestive of a power law with a slowly varying exponent $`x=\frac{\eta ^2}{2\sigma ^2}\mathrm{ln}(\omega _0\tau )`$. Thus one would expect $`S(f)`$ to exhibit approximate power law behaviour over a wide range of $`f`$, reverting to $`1/f^0`$ only for frequencies below the inverse of the ‘largest’ relaxation time $`\tau ^{}`$, where $`\omega _0\tau ^{}\sim \mathrm{e}^{\sigma ^2/\eta ^2}`$. Any attempt to fit $`S(f)`$ to a power law will give an exponent that depends on the range of $`f`$ considered as well as the ratio $`\eta /\sigma `$.
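For completeness, here is a short derivation of (C2) reconstructed from the definitions above (the intermediate steps are ours, not the original text's): with a Gaussian $`\mu (E)`$ and $`E=\eta \mathrm{ln}(\omega _0\tau )`$,
$$\mathrm{\Phi }(\tau )=P_{\mathrm{eqm}}(E)\,\frac{dE}{d\tau }=\frac{\eta }{\tau }\,\omega _\infty (\eta )\,\mathrm{e}^{E/\eta }\,\mu (E)\propto \mathrm{e}^{-\eta ^2\mathrm{ln}^2(\omega _0\tau )/2\sigma ^2}=(\omega _0\tau )^{-\frac{\eta ^2}{2\sigma ^2}\mathrm{ln}(\omega _0\tau )},$$
which is (C2) once $`\tau `$ is measured in units of $`\omega _0^{-1}`$.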
# Primordial black holes, phase transitions, and the fate of the universe<sup>1</sup><sup>1</sup>1Talk presented by the first author at the Primordial Black Hole Workshop/Dark Matter 1998, held in Marina del Rey, CA, from February 17-20, 1998.
## 1 Introduction
Although they have not yet been observed, primordial black holes (PBHs) already deserve a special place in the temple of modern theoretical physics, for they have spawned many creative ideas at the intersection of cosmology, astrophysics, and particle physics. In cosmology, for example, they may affect the outcome of Big Bang nucleosynthesis (BBN) . Primordial black holes may be of astrophysical interest through their evaporative production of the highest energy cosmic rays . Their formation during cosmic phase transitions (e.g., the electroweak (EW) and quantum chromodynamic (QCD) transitions) also may help constrain the relevant particle physics. Further examples of this rich spectrum of applications abound in these proceedings. We consider below the example of Massive Compact Halo Object (MACHO) black holes and discuss two bulk cosmological constraints on the production of such holes.
## 2 An example: MACHO black holes formed at the QCD epoch
One of the recent uses of PBHs involves the dark matter in the halo of our Galaxy. The MACHO Project reports that the Galactic halo contains condensed objects of about 0.2 to 0.8 solar masses . Since candidates such as ordinary stars, white dwarfs, neutron stars, etc., are constrained by various observations such as the Hubble Deep Field, stellar and cosmological nucleosynthesis, etc., we must consider more exotic possibilities to explain the MACHO events. Primordial black holes evade these constraints, because they would have formed in the early universe before BBN.
In order to produce MACHO-sized black holes, the existence of particle horizons requires that we study an epoch during which a horizon volume contains at least about one solar mass in mass-energy. The early universe is characterized by a very high degree of homogeneity and isotropy of spacetime. Further, any isocurvature fluctuations larger than a horizon length are “frozen,” and those smaller are damped. The only reasonable way to produce PBHs, therefore, must involve the amplification of pre-existing curvature fluctuations (those laid down by inflation, for example) or the creation of such perturbations as a result of phase transitions (PTs). Typical first-order PTs, however, nucleate bubbles of the broken phase that are small compared to the horizon, so they cannot make MACHO-sized black holes. Second-order PTs have even less spectacular consequences, so we focus on the possibility that a first-order PT amplifies pre-existing fluctuations to the point of gravitational collapse into roughly horizon-sized black holes. Since the mass in the horizon at the QCD epoch is roughly one solar mass, there is a chance that some horizon volumes will “go down” into MACHO-sized black holes during or just after the QCD transition (see, for example, Ref. for detailed discussions of specific formation mechanisms).
## 3 Two constraints on PBH formation
The mechanism of black hole formation had better not be too efficient, for energy density in black holes redshifts like ordinary matter (i.e., more slowly than radiation), and copious PBH production would make the universe prematurely dominated by non-relativistic matter, precluding crucial cosmological events like BBN. Another way to state this is that the universe becomes “overclosed” (i.e., $`\mathrm{\Omega }_{\mathrm{PBH}}`$ today is inconsistent with the present observational bounds on $`\mathrm{\Omega }_0`$) unless there is at most one PBH formed per $`10^7`$ horizon volumes . This is a powerful constraint on building models of PBH formation.
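A rough order-of-magnitude version of this bound (our sketch, with illustrative round numbers not taken from the text) runs as follows. If a fraction $`\beta `$ of horizon volumes collapses into roughly horizon-mass holes at the QCD epoch, then $`\rho _{\mathrm{PBH}}/\rho _{\mathrm{rad}}\approx \beta `$ at formation, and since this ratio grows like the scale factor $`a\propto 1/T`$,
$$\frac{\rho _{\mathrm{PBH}}}{\rho _{\mathrm{rad}}}\bigg|_{\mathrm{eq}}\sim \beta \,\frac{T_{\mathrm{QCD}}}{T_{\mathrm{eq}}}\sim \beta \,\frac{10^2\mathrm{MeV}}{1\mathrm{eV}}\sim \beta \times 10^8,$$
so requiring that the universe not become matter dominated before the usual epoch of equality forces $`\beta \lesssim 10^{-8}`$ to $`10^{-7}`$, i.e., of order one PBH per $`10^7`$ horizon volumes, in line with the constraint quoted above.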
Another possible constraint involves the manifest breaking of Friedmann–Robertson–Walker (FRW) symmetry when a PBH forms. Using the previous constraint, we can view the universe as a lattice of comoving black holes with a lattice separation of at least $`215`$ horizons (since $`(10^7)^{1/3}\approx 215`$) at the epoch of formation. The horizon eventually expands to encompass many holes, at which time they may attract each other and cluster together. Since black holes are the ultimate curvature fluctuations, spacetime expands non-uniformly, and the overall Hubble expansion of the universe is valid only in an averaged sense. In particular, these fluctuations effect a “back-reaction” on the expansion of the universe and may change the values of $`\mathrm{\Omega }_0`$ and $`H_0`$ observed today (i.e., the fate of the universe). If the resulting theoretical values disagree significantly with the current values, then this is a valid constraint on PBH formation.
The latter constraint is quantitatively calculable using a general relativistic perturbation theory in a suitable gauge but, with minimal perspiration, we can conclude that this effect is utterly negligible. First, we expect the effect of inhomogeneities on the expansion or age of the universe to be more significant during the epoch of matter domination than radiation domination, since the former lasts much longer than the latter and since nonlinear structures such as galaxies and clusters may form only during the former epoch. Using a general relativistic analog of Zel’dovich’s pancake approximation for gravitational collapse, Russ and collaborators have calculated the change in the age of the universe due to growing inhomogeneities. They find that the age of the universe decreases from the usual FRW value by a part in $`10^3`$ to $`10^4`$, depending on the composition of the dark matter. Therefore, the potential constraint on PBH production described in the previous paragraph is very weak. To put it another way: the constraint may become important only when we know the age of the universe to at least three significant figures.
This work was supported in part by grants from the NSF and NASA.
# Photonic crystals of coated metallic spheres
(FOM Institute AMOLF, Kruislaan 407, 1098 SJ Amsterdam, The Netherlands)
abstract
It is shown that simple face-centered-cubic (fcc) structures of both metallic and coated metallic spheres are ideal candidates to achieve a tunable complete photonic bandgap (CPBG) for optical wavelengths using currently available experimental techniques. For coated microspheres with the coating width to plasma wavelength ratio $`l_c/\lambda _p\lesssim 10\%`$ and the coating and host refractive indices $`n_c`$ and $`n_h`$, respectively, between $`1`$ and $`1.47`$, one can always find a sphere radius $`r_s`$ such that the relative gap width $`g_w`$ (gap width to the midgap frequency ratio) is larger than $`5\%`$ and, in some cases, $`g_w`$ can exceed $`9\%`$. Using different coatings and supporting liquids, the width and midgap frequency of a CPBG can be tuned considerably.
PACS numbers: 42.70.Qs - Photonic bandgap materials
PACS numbers: 82.70.Dd - Colloids
Introduction. - Photonic crystals are characterized by a periodically modulated dielectric constant. Some of such structures occur in nature, for instance, opals and nanostructured colour wings of butterflies . There is a common belief that in the near future photonic crystals will give us the same control over photons as ordinary crystals give us over electrons . At the same time, photonic structures are of great promise to become a laboratory for testing fundamental processes involving interactions of radiation with matter in novel conditions. This promise originates from the fact that, in analogy to the case of an electron moving in a periodic potential, certain photon frequency modes within a photonic crystal can become forbidden, independent of the photon polarization and direction of propagation - a complete photonic bandgap (CPBG) . Consequently, the density of states (DOS) and the local DOS (LDOS) of photons are significantly changed compared to their vacuum value (see for the exact results in one-dimensional photonic crystals). If the LDOS is sufficiently smooth, the spontaneous emission (SE) rate $`\mathrm{\Gamma }`$ of atoms and molecules embedded in a photonic crystal is directly proportional to the LDOS . On the other hand, if the LDOS exhibits sharp features (as a function of frequency and position in the unit cell), one expects the Wigner-Weisskopf approximation to break down and novel phenomena to occur, such as non-Markovian behaviour and non-exponential SE accompanied by the change of the spectrum from a single Lorentzian peak into a two-peaked structure .
Unfortunately, the problems in the fabrication of three-dimensional CPBG structures increase rapidly with decreasing wavelengths for which a CPBG is required - mainly because of the simultaneous requirements on the modulation (the total number and the length of periodicity steps) and dielectric contrast. In order to achieve a CPBG below infrared wavelengths, the modulation is supposed to be on the scale of optical wavelengths or even smaller and, as for any CPBG structure, has to be achieved with roughly ten periodicity steps in each direction, a task currently beyond the reach of reactive ion and chemical etching techniques. Fortunately, such a modulation occurs naturally in colloidal crystals formed by monodisperse colloidal suspensions of microspheres. The latter are known to self-assemble into three-dimensional crystals with excellent long-range order on the optical scale , removing the need for complex and costly microfabrication. They form a face-centered-cubic (fcc) or (for small sphere filling fraction) a body-centered-cubic (bcc) lattice . Thus, it comes as no surprise that the best photonic crystals in the visible (although without any CPBG) are colloidal based fcc structures . The latter are purely dielectric structures composed of spheres with the dielectric constant $`\epsilon _s`$ embedded in a host with the dielectric constant $`\epsilon _h`$. For such structures the dielectric contrast $`\delta `$ is defined as $`\delta =\text{max}(\epsilon _h/\epsilon _s,\epsilon _s/\epsilon _h)`$. Then $`\delta \gtrsim 8.2`$ is required to open a CPBG . Colloidal crystals suffer from the same kinds of defects as ordinary electronic crystals. Therefore, practical crystals should have such a dielectric contrast which for an ideal crystal yields a CPBG with the gap width-to-midgap frequency ratio (the relative gap width), $`g_w=\mathrm{\Delta }\omega /\omega _c`$, of at least $`5\%`$ \- to leave a margin for gap edge distortions due to impurities and yet to have a CPBG useful for applications. Then, for an fcc structure, $`\delta \gtrsim 12`$ is required, which makes fabrication of photonic crystals with an operational CPBG at optical wavelength seemingly hypothetical . The requirements on $`\delta `$ are less restrictive (although still rather strong in the visible) for colloidal crystals with a diamond structure . The latter, however, have yet to be fabricated. As a result of the above, no three-dimensional CPBG structure below the infrared wavelengths has been fabricated thus far.
Recently we have shown that a way to avoid the requirements on $`\delta `$ is to use spheres with a Drude-like behaviour of the dielectric function
$$\epsilon _s(\omega )=1-\omega _p^2/\omega ^2,$$
(1)
where $`\omega _p`$ is called the plasma frequency . In the following, $`r_s`$ is the sphere radius and $`\lambda _p=2\pi c/\omega _p`$ the plasma wavelength, $`c`$ being the speed of light in vacuum. For notational simplicity we often refer to the spheres having such a dielectric function (1) as metallic spheres, although we are aware that (i) not all metals show a Drude-like behaviour and (ii) such a behaviour can also be found in certain semiconductors and in new artificial structures . Fcc structures of metallic spheres exhibit several exceptional properties . For frequencies within $`0.6\omega _p\lesssim \omega \lesssim 1.1\omega _p`$, where the bulk metal absorption can be negligible , a CPBG opens in the spectrum with $`g_w`$ up to $`10\%`$, and that already for a host dielectric constant $`\epsilon _h=1`$. Moreover, up to four CPBG’s can open in the spectrum. A CPBG with $`g_w\ge 5\%`$ can be achieved for sphere filling fractions from $`f=0.56`$ till the close-packed case ($`f\approx 0.74`$). For the wavelengths $`\lambda `$ within the CPBG’s the size parameter of spheres $`x=2\pi r_s/\lambda `$ satisfies $`x\gtrsim 5`$. Consequently, absorption is governed entirely by the bulk absorption, since the so-called plasmon-induced absorption is negligible . Another unexpected feature is that for some values of $`r_s/\lambda _p`$, for example $`r_s/\lambda _p=1.35`$, extremely narrow almost dispersionless bands ‘within a bandgap’ appear. These bands can to a certain extent perform many functions of an impurity band since they involve photons with extremely small group and phase velocities (less than $`c/200`$) .
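A minimal numerical sketch of the Drude window in which these gaps live (the sample frequencies below are illustrative points, not band-structure results):

```python
import numpy as np

# Drude dielectric function (1): eps_s = 1 - (omega_p/omega)^2.
def eps_drude(w_over_wp):
    return 1.0 - 1.0 / w_over_wp**2

for w in (0.25, 0.55, 0.8, 1.0, 1.1):
    print(f"omega/omega_p = {w:4.2f}   eps_s = {eps_drude(w):7.2f}")
# eps_s runs from about -15 at omega ~ 0.25 omega_p up through 0 at omega_p,
# spanning the negative-epsilon range discussed later in the text.
```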
Motivation. - In this letter we study the question how the photonic-bandgap structure is affected by coating monodisperse metallic microspheres with a semiconductor or an insulator. This question is not only of theoretical but also of significant experimental interest. Coating can actually facilitate the preparation of photonic colloidal crystals made up from metallic spheres because it can (i) stabilize metallic microparticles by preventing, or, at least, significantly reducing, their oxidation; (ii) prevent aggregation of metallic particles by reducing Van der Waals forces between them. In the latter case, a coating of roughly $`30`$ nm is required. Also a suitable coating can enlarge some of the stop gaps (gaps in a fixed direction of propagation) by as much as $`50\%`$ . From the application point of view, optically nonlinear Bragg diffracting nanosecond optical switches have been fabricated by doping the silica (SiO<sub>2</sub>) shell with an absorptive dye . On the other hand, coating with an optically nonlinear material can reduce the required intensity for the onset of optical bistability due to the enhancement of local fields near the surface-plasmon resonance. Last but not least, using a semiconductor coating may allow a matching of the photonic and electronic bandgaps, which is important for many applications involving photonic crystals .
Let $`r_c`$ be the core radius and $`l_c`$ be the coating width, i.e., $`r_s=r_c+l_c`$. We assumed that the refractive indices $`n_c`$ and $`n_h=\sqrt{\epsilon _h}`$ of the coating material and host, respectively, are constant within the frequency range considered. The latter was taken to be roughly $`0.55\omega _p\lesssim \omega \lesssim 1.1\omega _p`$, where the bulk absorption of the metal is assumed to be small. This is a good approximation, for instance, for structures made of silica coated silver microspheres . For silver $`\lambda _p=328`$ nm and the bulk absorption is rather small in the region $`310`$-$`520`$ nm . The dependence of the refractive index of silica on frequency in this frequency region is very weak and is described by a Cauchy model. The actual value of $`n_c`$ depends on the method used to synthesize silica. We took $`n_c=1.47`$. In view of their experimental relevance, we only investigated simple fcc structures. Band-structure calculations were performed using a photonic analogue of the familiar Korringa-Kohn-Rostoker (KKR) method . Compared to the plane-wave method, dispersion does not bring any difficulties to the KKR method and computational time is the same as without dispersion. In order to ensure precision within $`0.1\%`$, spherical waves were included with the angular momentum up to $`l_{max}=10`$. Further discussion of convergence and errors can be found in . The values of the angular frequency are, unless otherwise stated, in the units $`2c/A`$, where $`A`$ is the length of the conventional unit cell of a cubic lattice (not to be confused with the lattice spacing ).
Results. - At first glance it seems that coating destroys the CPBG’s. For example, if one begins with noncoated metallic spheres with $`\omega _p=9.5`$ and filling fraction $`f=0.6`$, one finds two CPBG’s with $`g_w\approx 4.07\%`$ and $`2.94\%`$ at midgap frequencies $`\omega _c\approx 0.874\omega _p`$ and $`0.813\omega _p`$, respectively. As the coating width $`l_c`$ for $`n_c=1.47`$ increases from zero till the spheres are close-packed, the two CPBG’s steadily decrease to zero. Nevertheless, when also the host refractive index is allowed to vary, one can recover almost all the exceptional features of the photonic band structure of noncoated metallic spheres. Generically, three CPBG’s open in the spectrum (see Fig. 1), i.e., one less than for noncoated metallic spheres, however, still two CPBG’s more than for purely dielectric structures . The relative gap width $`g_w`$ can exceed $`9\%`$. Fig. 2 shows how the CPBG’s of the close-packed fcc lattice of silica coated metallic spheres with a coating width $`l_c/\lambda _p\approx 9.15\%`$ (corresponding to $`l_c=30`$ nm for silver), change when the host refractive index $`n_h`$ is varied between $`1`$ and the coating refractive index $`n_c=1.47`$. Data on the right y-axis correspond to the case where $`n_h`$ matches $`n_c`$, i.e., they are identical to those for an fcc structure of purely metallic spheres embedded in the host $`n_h`$. It is clear from Fig. 2 that the answer to the question whether coating enhances the relative gap width $`g_w`$ of a CPBG strongly depends on $`n_h`$ and $`r_s/\lambda _p`$.
Let us denote $`\omega _r`$ as the midgap to plasma frequency ratio, i.e., $`\omega _r=\omega _c/\omega _p`$. Surprisingly enough, the ratio $`\omega _r`$ (for the CPBG’s shown in Fig. 2) as a function of $`n_h`$ can be described with high precision by a simple linear relation
$$\omega _r(n_h)=\omega _r(n_0)-C\,(n_h-n_0),$$
(2)
with a universal constant $`C=0.2478\pm 0.0075`$. Here $`n_0`$ is the lowest host refractive index for which a given CPBG opens in the spectrum and $`\omega _r(n_0)`$ is the corresponding value of $`\omega _r`$ for $`n_h=n_0`$. This shows that changing $`n_h`$ allows one to tune not only the width $`g_w`$ of a CPBG but also, within $`8\%`$, the corresponding midgap frequency.
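For illustration, a minimal sketch of how (2) can be used to tune a gap (only the slope $`C`$ is taken from the text; the onset index $`n_0`$ and $`\omega _r(n_0)`$ below are made-up placeholders that would, in practice, be read off from Fig. 2):

```python
# Empirical tuning relation (2): omega_r(n_h) = omega_r(n_0) - C (n_h - n_0).
C = 0.2478  # universal slope reported in the text (+/- 0.0075)

def omega_r(n_h, n0=1.0, omega_r0=0.85):
    """Midgap-to-plasma frequency ratio predicted by the linear fit (2)."""
    return omega_r0 - C * (n_h - n0)

for n_h in (1.0, 1.2, 1.47):
    print(f"n_h = {n_h:.2f}  ->  omega_c/omega_p ~ {omega_r(n_h):.3f}")
```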
The tuning of the midgap frequency $`\omega _c`$ by changing $`n_h`$ is more pronounced in the absence of coating. In the latter case, for example for sphere filling fraction $`f=0.6`$, $`\omega _p=9`$, and $`n_h=1`$, two CPBG’s appear with $`g_w=3.27\%`$ and $`g_w=2.55\%`$ at the midgap frequencies $`\omega _c\approx 0.904\omega _p`$ and $`0.839\omega _p`$, respectively. As $`n_h`$ increases from $`1`$ to $`1.47`$, the midgap frequencies can be tuned down to $`65\%`$ of their original values and their respective values reach $`\omega _c\approx 0.593\omega _p`$ and $`0.556\omega _p`$. At the same time, the relative gap width gradually increases up to $`g_w=5.1\%`$ and $`g_w=6.43\%`$, respectively (see Figs. 2, 3).
The above results are not specific for the case of silica coating when $`n_c=1.47`$. For example, for $`n_c=1.4`$ and $`l_c/\lambda _p\approx 9.15\%`$ one can find a region of parameters ($`n_h=1.3`$ and $`\omega _p=12`$ (the metallic filling fraction $`f_m\approx 0.6`$)) for which $`g_w`$ can be as large as $`8.9\%`$.
It is important to realize that, because the metallic core size parameter $`x=2\pi r_c/\lambda `$ satisfies $`x\gtrsim 3.4`$ for all wavelengths within a CPBG for all CPBG’s considered here in the frequency region $`0.55\omega _p\lesssim \omega \lesssim 1.1\omega _p`$, the absorption is still dominated by bulk properties, i.e., can be negligible , since the plasmon-induced absorption becomes relevant only for particle sizes much smaller than the wavelength . One cannot get rid of absorption completely. Nevertheless moderate absorption was shown to cause only a slight perturbation of the band structure calculated in the absence of absorption .
It is worthwhile to mention that the exact Drude-like dispersion (1) of $`\epsilon _s`$ is not necessary to reproduce the exceptional properties of metallo-dielectric photonic crystals. It is enough if, for a sufficiently large frequency window, $`-15\lesssim \epsilon _s(\omega )<0`$ . Many of the above features (except for the extremely narrow almost dispersionless bands ‘within a bandgap’) can also be reproduced for a constant and sufficiently small negative $`\epsilon _s`$ . This is important since, in real systems, a deviation from the ideal Drude behaviour can occur in the proximity of the zero crossing of Re $`\epsilon `$ at some $`\lambda _z`$. If $`\lambda _p`$ is the plasma wavelength extracted from the fit (1) to material data, then $`\lambda _z`$ is red-shifted compared to $`\lambda _p`$ and the band structure between $`\lambda _z`$ and $`\lambda _p`$ can be modified compared to the ideal Drude behaviour (1).
Outlook and conclusions. - Our calculations show that simple fcc structures of both metallic and coated metallic spheres are ideal candidates to achieve a CPBG for optical wavelengths. For coated microspheres with a coating width $`l_c/\lambda _p\lesssim 10\%`$ (up to $`l_c=30`$ nm for silver) and the coating and host refractive indices $`n_c`$ and $`n_h`$, respectively, between $`1`$ and $`1.47`$, one can always find a sphere radius $`r_s`$ such that the relative gap width $`g_w`$ is larger than $`5\%`$ and, in some cases, $`g_w`$ can even exceed $`9\%`$. This provides a sufficiently large margin for gap-edge distortions due to omnipresent imperfections and impurities to allow both technological and experimental applications involving the proposed structures. Using different coatings and by changing the refractive index $`n_h`$ of the supporting liquid (this can be easily achieved), one can tune the width and midgap frequency of a CPBG considerably. In principle, the midgap frequency $`\omega _c`$ can be tuned to whatever frequency within a nonabsorptive window ($`0.6\omega _p\lesssim \omega \lesssim 1.1\omega _p`$ for silver ). Using a procedure in which fluorescent organic groups are placed inside the silica shell with nm control over the radial position makes it, in principle, possible to perform a precise position-dependent testing of the spontaneous emission within a photonic crystal. By applying an electric field one can switch in ms from an fcc colloidal crystal to a body centered tetragonal (bct) crystal: a so-called martensitic transition . Hence, the proposed structures are also promising candidates for the CPBG structures with tunable bandgaps. Last but not least, since metals are known to possess large nonlinear susceptibilities, switching and optical bistability can be studied in the presence of a CPBG. It is interesting to note that many of these ideas also apply to two-dimensional photonic structures .
The region of plasma frequencies of conventional materials ranges from the near-infrared to the ultraviolet . However, in a recent interesting paper , it has been shown that a whole new class of artificial materials can be fabricated in which the plasma frequency may be reduced by up to 6 orders of magnitude compared to conventional materials, down to GHz frequencies. Correspondingly, the proposed structures can provide CPBG structures from the GHz up to ultraviolet frequencies. Apparently, the main experimental problem in fabricating the proposed photonic structures, using colloidal systems of metallic microspheres, is to synthesize large enough spheres in order to reach the threshold value $`r_sn_h/\lambda _p\gtrsim 0.9`$ to open a CPBG. However, a method to produce monodisperse gold colloids of several hundred nm radius and larger has been developed . Recent results on the fabrication of such spheres from silver, i.e., material with a Drude-like behaviour of the dielectric function, are promising . The only remaining problem is to control the size polydispersity of spheres and reduce it below $`5\%`$ to trigger crystallization .
I should like to thank Dr. H. Bakker, Prof. A. van Blaaderen, Dr. A. Tip, and Dr. K. P. Velikov for careful reading of the manuscript and useful comments and M.J.A. de Dood for help with figures. SARA computer facilities are also gratefully acknowledged. This work is part of the research program by the Stichting voor Fundamenteel Onderzoek der Materie (Foundation for Fundamental Research on Matter) which was made possible by financial support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Netherlands Organization for Scientific Research).
Figure captions
Fig. 1.- Calculated photonic-band-structure of a close-packed fcc lattice of silica coated ($`n_c=1.47`$) metallic spheres embedded in a host dielectric with refractive index $`n_h=1.4`$ for $`\omega _p=11`$ ($`r_s/\lambda _p\approx 1.238`$). For convenience, on the right y-axis the angular frequency is shown in units $`\omega _p`$. The coating width $`l_c`$ to the sphere radius ratio is $`l_c/r_s\approx 7.39\%`$ ($`l_c/\lambda _p\approx 9.15\%`$). This corresponds to a metallic (core) filling fraction of $`f_m\approx 0.588`$ (to be compared with the fcc close-packed filling fraction $`f\approx 0.7405`$). Note that there are three CPBG’s, one with $`g_w=3.2\%`$ at midgap frequency $`7.01`$, the second with $`g_w=8.4\%`$ at midgap frequency $`6.47`$, and the third with $`g_w=2.59\%`$ at midgap frequency $`6.1`$.
Fig. 2.- The relative gap width $`g_w`$ of the CPBG’s of a close-packed fcc lattice of silica coated metallic spheres as a function of the host refractive index $`n_h`$ for $`l_c/\lambda _p\approx 9.15\%`$ (i.e., $`l_c=30`$ nm for silver). For a close-packed fcc lattice, the value of $`r_s/\lambda _p`$ can be recovered by multiplying the numerical values of $`\omega _p`$ by $`1/(2\pi \sqrt{2})`$, i.e., $`r_s/\lambda _p=1.013`$, $`1.125`$, $`1.238`$, and $`1.35`$ for $`\omega _p=9`$, $`10`$, $`11`$, $`12`$ in this order. The metallic filling fractions in the same order are $`f_m\approx 0.557`$, $`0.574`$, $`0.588`$, and $`0.6`$. The symbols $``$, $`\mathrm{}`$, and $`\times `$ are for the upper, middle and the lower CPBG, respectively (cf. Fig. 3). Data on the right y-axis correspond to the case when $`n_h`$ matches $`n_c`$.
Fig. 3.- The ratio $`\omega _c/\omega _p`$, corresponding to the CPBG’s shown in Fig. 2, as a function of the host refractive index $`n_h`$. With a high precision the curves can be described by a simple linear relation \[see Eq. (2)\].
# A critical review of techniques for Term Structure analysis
## 0. Introduction
Many ideas and techniques developed for the analysis of financial markets can be applied both to equities and to fixed income securities. For instance the distribution and the dynamical evolution of price variations are studied under the assumption that such variations can be described by a random process, regardless of the asset category (equity or fixed income) in exam.
However fixed income markets have many special features, thus specific concepts and tools have been introduced in the past to study (and possibly forecast) the behavior of such markets. For instance, a correct pricing of fixed income securities with fixed cash-flows requires the knowledge of the term structure of interest rates.
A number of “numerical recipes” have been proposed for estimating and interpreting the term structure of interest rates, yet a solid theoretical foundation of such concept and a comparative assessment of the results produced by the different techniques is not available.
In this paper we define formally the concept of absence of arbitrage opportunities and derive rigorously the existence of discount factors. Moreover the most widely used approaches in estimating the term structure are presented and their performance is checked with an extensive set of experiments.
The structure of the paper is the following: section 1 is a reminder of the terminology used in fixed income markets; section 2 defines a mathematical framework for the discount function; section 3 describes the major techniques currently in use to estimate the term structure of interest rates; section 4 presents the results of our numerical experiments; section 5 concludes the work with the future perspectives.
## 1. Fixed income securities
A bond is a credit instrument issued by a public institution or a corporate company on raising a debt. When the size and the timing of the payments due to the investor are fully specified in advance, the bond falls within the fixed-income securities category.
The main source of uncertainty for a bond markets investor is the default risk of the issuer, that is the chance that the issuer becomes unable to fulfill the commitment of paying all promised cash-flows on time. For this reason the only (true) fixed-income securities are those issued by the governments of some developed countries or by “well-known” large corporate institutions to which credit rating agencies also give their highest rating: these are the borrowers least likely to default on their debt repayments.
Likewise should not be regarded as fixed-income securities those bonds having cash-flows indexed to the rate of inflation (index-linked bonds) or those with embedded optionality giving either the issuer or the holder some discretion to redeem early or to convert to another security <sup>1</sup><sup>1</sup>1To give some examples, we recall the US, UK and Japanese government callable bonds, giving the Treasury the option to redeem the bond at face value at any time between two dates specified at the time of issue; or the Italian government putable bonds (CTO) giving the holder the option to sell the bond back to the Treasury at face value on pre-specified dates; or the UK government and corporate convertible bonds giving the holder the option to convert the bond into another pre-specified bond at a pre-specified ratio on one or more pre-specified dates (see e.g. ) .
The lack of any uncertainty makes fixed-income bonds a useful means for measuring market interest rates.
An investor can purchase “new” bonds in the primary market (i.e. directly from the issuer) or formerly issued bonds from other investors in the secondary market. Furthermore large institutional investors can sell bonds to the issuer as well, that is they can pay the price of the bond and undertake to repay the corresponding future cash-flows. Hence the bid and ask prices of a bond <sup>2</sup><sup>2</sup>2respectively the price received when selling and paid when buying that bond are set continuously, as long as it remains outstanding, by transactions occurring between market participants.
In principle a bond can promise any pattern of future cash-flows. However there are two main categories.
Zero-coupon bonds, known also as discount bonds, make a single payment at a date in the future known as the maturity date. The size of this payment is called the face value, or par value, of the bond. The period of time up to the maturity date is the maturity of the bond. US Treasury bills (Treasury obligations with maturity at issue of up to twelve months) take this form.
Coupon bonds make interest payments (often referred to as coupon payments) of a given fraction of face value, called the coupon rate, at equally spaced dates up to and including the maturity date when the face value is also paid. The frequency at which interest payments are made varies from market to market, but generally they are made either annually or semi-annually. US Treasury notes and bonds (Treasury obligations with maturity at issue of twelve months up to ten years and above ten years, respectively) take this form. Coupon payments on US Treasury notes and bonds are made every six months.
In the following, for sake of simplicity, we will focus on US Treasury bond markets only. There are a number of advantages in this choice. First of all these markets are extremely large regardless of whether size is measured by outstanding or traded quantities, are very liquid and have uniform rules. Moreover the number of issues is quite high (at present there are about two hundreds obligations outstanding). These features make the study of such markets somewhat easier.
### 1.1. The Law of One Price
The key idea behind asset valuation in fixed-income markets is the absence of arbitrage opportunities, also known as the ‘Law of One Price’.
The ‘Law of One Price’ states that two portfolios of fixed-income securities (that is two positions taken in the marketplace through buying and selling fixed-income securities) that guarantee the investor the same future cash-flows and give him the same future liabilities, must sell for the same net price. Phrased another way, a portfolio of fixed-income securities giving the investor neither the right to receive any future cash-flow nor any future liability, must have zero initial net cost. Conversely a portfolio requiring no initial net investment must give the investor some “real” liability, that is some liability whose size is larger than the net amount of money earned till then.
If any violation of these constraints on the prices of all fixed-income securities outstanding at a given time occurs, then a profitable and risk-less investment opportunity, called an arbitrage opportunity, or simply an arbitrage, arises.
Arbitrage opportunities may arise (sporadically) in financial markets but they cannot last long. In fact, as soon as an arbitrage becomes known to sufficiently many investors, the prices will be affected as they move to take advantage of such an opportunity. As a consequence, prices will change and the arbitrage will disappear. This principle can be stated as follows: in an efficient market <sup>3</sup><sup>3</sup>3we recall that according to Malkiel “A capital market is said to be efficient if it fully and correctly reflects all relevant information in determining security prices” there are no permanent arbitrage opportunities.
The prices of all fixed-income securities (and in particular those of US government bonds) as they are quoted at any given time, cannot, then, be independent one of the other and the existence of (moderate) arbitrage opportunities in financial markets can be viewed as a relative mispricing between correlated assets. Such mispricing can be ascribed to taxation, transaction costs and commissions associated with trading that make the net price of a bond different from its quoted gross price (and, hence, make the corresponding arbitrage opportunity not really profitable), to liquidity effects and other market frictions, or to non-synchronous quotations.
## 2. The mathematical setting
The cash-flows that a US government bond holder is entitled to receive are completely determined by the bond face value that, unless otherwise specified, will always be supposed equal to $`\$100`$, by its coupon rate and by its maturity. For this reason an outstanding bond can be identified with a triplet $`(c,m,p)`$, where $`c\ge 0`$ is its (semi-annual) coupon rate in percentage points, $`m>0`$ is its maturity in years (computed using ‘$`30/360`$’ convention) and $`p>0`$ is its gross price in dollars. For a US Treasury bill $`c=0`$ and $`m\le 1`$; for a US Treasury note or bond $`c>0`$.
We will state, now, formally the condition of absence of arbitrage opportunities and derive its consequences on the prices of US government bonds. This will lead us to the definition of the discount factors or zero-coupon bond prices. We start by giving some definitions.
Let $`\mathcal{B}:=[0,+\infty )\times \mathbb{R}_{+}\times \mathbb{R}_{+}`$ be the set of all bonds, actually issued by the US Treasury or not, and still outstanding at the date under examination.
Secondly, given a subset $`𝒮`$ of $`\mathcal{B}`$, denote by $`T(𝒮)`$ the vector of the maturities of all coupon payments of all bonds belonging to $`𝒮`$, sorted in increasing order. This amounts to saying that if $`T(𝒮)=(t_1,\ldots ,t_N)`$ for some $`N\in \mathbb{N}`$, then
$$\{t_1,\ldots ,t_N\}:=\bigcup _{(c,m,p)\in 𝒮}\{m-i/2\,;\;i\in \mathbb{N}\cup \{0\}\text{ and }m-i/2>0\}$$
and $`t_i<t_{i+1}`$ for every $`i`$. Moreover for every $`(c,m,p)\in 𝒮`$ define the cash-flows vector $`𝝋(c,m)=(\phi _1(c,m),\ldots ,\phi _N(c,m))\in \mathbb{R}^N`$ by
$$\phi _i(c,m):=\{\begin{array}{cc}100+c\hfill & ,\text{if }m=t_i\hfill \\ c\hfill & ,\text{if }t_i<m\text{ and }m-t_i\in \frac{\mathbb{N}}{2}\hfill \\ 0\hfill & ,\text{otherwise}\hfill \end{array}(i=1,\ldots ,N).$$
Thus $`\phi _i(c,m)`$ represents the cash-flow (in dollars) that an investor holding bond $`(c,m,p)`$ is entitled to receive in $`t_i`$ years’ time.
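As an illustration, a minimal sketch that builds these cash-flow vectors for a toy bond set (the coupon rates and maturities below are made up; exact rational arithmetic keeps the date comparisons safe):

```python
from fractions import Fraction

def coupon_dates(m):
    """All coupon payment dates m - i/2 > 0 of a bond maturing in m years."""
    dates, i = [], 0
    while m - Fraction(i, 2) > 0:
        dates.append(m - Fraction(i, 2))
        i += 1
    return dates

def cash_flow_vector(c, m, grid):
    """phi_i(c, m) on the sorted grid t_1 < ... < t_N of all payment dates."""
    return [100 + c if t == m
            else (c if t in coupon_dates(m) else 0)
            for t in grid]

bonds = [(0, Fraction(1, 2)), (4, 1), (5, Fraction(3, 2))]   # toy (c, m) pairs
grid = sorted({t for _, m in bonds for t in coupon_dates(m)})
for c, m in bonds:
    print(f"c={c}, m={m}: phi = {cash_flow_vector(c, m, grid)}")
```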
Thirdly, we say that a subset $`𝒯`$ of $`\mathcal{B}`$ is a complete coupon term structure if and only if it contains at least a bond maturing on each coupon payment day (see e.g. ). Observe that for a complete coupon term structure $`𝒯`$ the set of the maturities of all coupon payments of all bonds belonging to $`𝒯`$ is the same as the set of the maturities of all bonds belonging to $`𝒯`$, that is, if $`T(𝒯)=(t_1,\ldots ,t_N)`$, then
$$\{t_1,\ldots ,t_N\}:=\bigcup _{(c,m,p)\in 𝒯}\{m\}.$$
Finally we represent a portfolio of $`B`$ bonds $`(c_1,m_1,p_1),\ldots ,(c_B,m_B,p_B)`$ by a vector $`(q_1,\ldots ,q_B)\in \mathbb{R}^B`$ such that for each $`j`$, $`q_j`$ represents the traded quantity of bond $`(c_j,m_j,p_j)`$. By definition, $`q_j>0`$ means that the investor has bought $`\$100q_j`$ face value of bond $`j`$ whereas $`q_j<0`$ means that he has sold $`\$100|q_j|`$ face value of that bond.
We are, now, in a position to state formally the condition of absence of arbitrage opportunities among the bonds belonging to a given set $`𝒮`$, and to show which constraints on their prices such condition implies (see Theorem 2.5 below).
###### Definition 2.1.
Let $`𝒮`$ be a subset of $`\mathcal{B}`$ and let $`T(𝒮)=(t_1,\ldots ,t_N)`$ for some $`N\in \mathbb{N}`$. We say that $`𝒮`$ satisfies the hypothesis of Absence of Arbitrage Opportunities if and only if, given $`(c_1,m_1,p_1),\ldots ,(c_B,m_B,p_B)\in 𝒮`$ ($`B\in \mathbb{N}`$) the following three conditions are fulfilled:
1. if $`(q_1,\ldots ,q_B)\in \mathbb{R}^B`$ is such that $`\sum _{j=1}^{B}q_j𝝋(c_j,m_j)=\mathrm{𝟎}`$, then $`\sum _{j=1}^{B}q_jp_j=0`$;
2. if $`(q_1,\ldots ,q_B)\in \mathbb{R}^B`$ is such that
$$\sum _{j=1}^{B}q_j\phi _i(c_j,m_j)=\{\begin{array}{cc}f_{\overline{ı}}\hfill & ,\text{if }i=\overline{ı}\hfill \\ 0\hfill & ,\text{otherwise}\hfill \end{array},$$
for some $`f_{\overline{ı}}>0`$, then $`0<\sum _{j=1}^{B}q_jp_j<f_{\overline{ı}}`$;
3. if $`(q_1,\ldots ,q_B)\in \mathbb{R}^B\backslash \{\mathrm{𝟎}\}`$ is such that $`\sum _{j=1}^{B}q_jp_j=0`$, then there exists $`\overline{ı}\in \{1,\ldots ,N\}`$ such that $`\sum _{i=1}^{\overline{ı}}\sum _{j=1}^{B}q_j\phi _i(c_j,m_j)<0`$.
###### Remark 2.2.
The $`i`$th component of the vector $`\sum _{j=1}^{B}q_j𝝋(c_j,m_j)`$ represents, if it is positive, the total cash-flow that an investor holding portfolio $`(q_1,\ldots ,q_B)`$ will receive in $`t_i`$ years’ time; if it is negative, the total liability that he will have at that date. The quantity $`\sum _{j=1}^{B}q_jp_j`$ is the (net) price of the portfolio.
Hence, condition $`(NA_1)`$ means that a portfolio of bonds giving the investor neither the right to receive any future cash-flow nor any future liability must have zero initial net cost. Conversely $`(NA_3)`$ means that a non-trivial portfolio requiring no initial net investment must, sooner or later, give the investor a liability of larger size than the total net amount of money received till then.
The meaning of condition $`(NA_2)`$ is self-evident and will not be discussed any further.
###### Definition 2.3.
Given $`B+1`$ bonds $`(c,m,p),(c_1,m_1,p_1),\ldots ,(c_B,m_B,p_B)`$ in $`\mathcal{B}`$, a portfolio $`(q_1,\ldots ,q_B)`$ of the last $`B`$ bonds such that $`\sum _{j=1}^{B}q_j𝛗(c_j,m_j)=𝛗(c,m)`$, is called a replicating portfolio of bond $`(c,m,p)`$. In fact it entitles an investor to receive exactly the same cash-flows as if he held bond $`(c,m,p)`$. If, furthermore, each bond $`(c_j,m_j,p_j)`$ expires on a different coupon payment date of bond $`(c,m,p)`$, then we will refer to such a portfolio as a minimal replicating portfolio.
###### Remark 2.4.
Since the case $`q_j\equiv 0`$ is trivial, condition $`(NA_1)`$ amounts to saying that the price of each bond $`(c,m,p)`$ in $`𝒮`$ equals the ones of all its replicating portfolios, whenever they exist.
###### Theorem 2.5.
Let $`𝒯`$ be a complete coupon term structure and let $`T(𝒯)=(t_1,\ldots ,t_N)`$ for some $`N\in \mathbb{N}`$. If $`\mathrm{Card}(𝒯)\ge N`$ and if condition $`\mathit{(}NA_\mathit{1}\mathit{)}`$ is satisfied, then there exist $`d_1,\ldots ,d_N`$ such that for every $`(c,m,p)\in 𝒯`$
(2.1)
$$p=\sum _{i=1}^{N}d_i\phi _i(c,m).$$
If, furthermore, conditions $`\mathit{(}NA_\mathit{2}\mathit{)}`$ and $`\mathit{(}NA_\mathit{3}\mathit{)}`$ are fulfilled as well, then $`1>d_1>d_2>\mathrm{}>d_N>0`$.
A proof of this theorem is reported in the Appendix.
###### Remark 2.6.
In order to prove Theorem 2.5 we need to find $`N`$ bonds in $`𝒯`$ such that the matrix whose columns are their cash-flows vectors has maximum rank $`N`$. The hypothesis that $`𝒯`$ forms a complete coupon term structure just helps to ensure that this choice is possible. However this is not the only case in which this is true. To realize this it is sufficient to think that if $`𝒮=\{(c_1,m_1,p_1),(c_2,m_2,p_2),(c_3,m_3,p_3)\}`$ with $`m_1=\frac{1}{2}`$, $`m_2=m_3=\frac{3}{2}`$ and (of course) $`c_2\ne c_3`$, then the matrix whose columns are the cash-flows vectors of these three bonds has maximum rank $`3`$, even if $`𝒮`$ is not a complete coupon term structure.
###### Remark 2.7.
By equation (2.1) with $`c=0`$ it is apparent that each $`d_i`$ represents the price of a (hypothetical) zero-coupon bond in $`𝒯`$ with unitary face value and maturity $`m=t_i`$. For this reason $`d_1,\mathrm{},d_N`$ are called the discount factors, or zero-coupon bond prices, corresponding to the maturities $`t_1,\mathrm{},t_N`$.
###### Definition 2.8.
Given $`N`$ discount factors $`d_1,\mathrm{},d_N`$ corresponding to maturities $`t_1,\mathrm{},t_N`$, we define the spot rate (of return), or zero-coupon yield, $`s_i`$, by
$$s_i:=-\frac{1}{t_i}\mathrm{ln}\,d_i$$
Furthermore, given $`P\in \mathbb{N}`$, if there exists $`j(i)>i`$ such that $`t_{j(i)}-t_i=\frac{P}{2}`$, we define the $`P`$-periods forward rate (of return) <sup>4</sup><sup>4</sup>4as it is common in financial literature, we will refer to any six months’ time interval as a period , $`f_i^P`$, by
$$f_i^P:=-\frac{2}{P}\mathrm{ln}\frac{d_{j(i)}}{d_i}.$$
Since $`\frac{1}{d_i}=\mathrm{exp}\left(\int _0^{t_i}s_i\,dt\right)`$, then $`s_i`$ represents the continuously compounded rate of return per unit time on a $`t_i`$-years investment, made today, in the bond markets. Analogously $`f_i^P`$ represents the continuously compounded rate of return per unit time on a $`P`$-periods investment to be made $`t_i`$ years ahead.
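A minimal sketch of these definitions (the discount factors below are illustrative numbers, not estimates from the paper):

```python
import math

t = [0.5, 1.0, 1.5, 2.0]            # maturities t_i in years
d = [0.985, 0.968, 0.950, 0.931]    # toy discount factors d_i (decreasing, < 1)

spot = [-math.log(di) / ti for di, ti in zip(d, t)]   # s_i = -(1/t_i) ln d_i

def forward(i, P):
    """f_i^P = -(2/P) ln(d_{j(i)}/d_i) where t_{j(i)} - t_i = P/2."""
    j = t.index(t[i] + P / 2.0)
    return -(2.0 / P) * math.log(d[j] / d[i])

print("spot rates:", [round(s, 4) for s in spot])
print("1-period forward at t_1:", round(forward(0, 1), 4))
```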
As it is common in financial literature, we will refer to the set of the spot rates corresponding to all maturities as the (cross-sectional) term structure of interest rates.
### 2.1. The method of Carleton and Cooper
If a complete coupon term structure is available and if conditions $`(NA_1)`$-$`(NA_3)`$ are (strictly) satisfied, then Theorem 2.5 tells us how to compute a complete set of decreasing discount factors (one for each coupon payment date) that summarize the whole information carried about nominal discount rates by bonds in our data set.
When small violations of condition $`(NA_1)`$ are detected in the bond set $`𝒯`$ forming a complete coupon term structure, Carleton and Cooper suggested considering the prices of the bonds in $`𝒯`$ as random variables with expected values satisfying $`(NA_1)`$ \- $`(NA_3)`$. According to their suggestion, equation (2.1) must be modified by adding a bond-specific error term to (say) its right-hand side:
(2.2)
$$p_j=\sum _{i=1}^{N}d_i\phi _i(c_j,m_j)+\epsilon _j,$$
where $`j`$ ranges over all bonds in the set $`𝒯`$, the $`\epsilon _j`$’s are random variables with $`E\{\epsilon _j\}=0`$, and $`1>d_1>d_2>\ldots >d_N>0`$.
Estimators, $`\widehat{d}_1,\mathrm{},\widehat{d}_N`$, of the discount factors $`d_1,\mathrm{},d_N`$, subjected to the condition $`1>\widehat{d}_1>\widehat{d}_2>\mathrm{}>\widehat{d}_N>0`$, can, then, be attained by a constrained least squares procedure with all bonds in the set $`𝒯`$.
We recall, however, that the least-squares estimators $`\widehat{d}_1,\mathrm{},\widehat{d}_N`$ of the $`d_i`$’s exist and are unique provided that the regressors’ matrix, $`\mathrm{\Phi }=((\phi _i(c_j,m_j)))`$, has maximum rank $`N`$, which is certainly true if $`𝒯`$ is a complete coupon term structure (see Remark 2.6). If, on the contrary, the rank of the matrix $`\mathrm{\Phi }`$ is less than $`N`$, then the problem of estimating the discount factors $`d_1,\mathrm{},d_N`$ by a least squares procedure is improperly posed, or ill-posed, in the sense of Hadamard (see, e.g., ) and does not admit any solution. Hence the completeness hypothesis of the term structure of $`𝒯`$ (or a weaker version of its) is necessary to apply the method of Carleton and Cooper.
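A minimal numerical sketch of such a constrained regression follows (the three toy bonds are made up, and non-negative least squares is only one way to impose the constraints, not necessarily the authors' original procedure). The monotonicity $`1>d_1>\ldots >d_N>0`$ is enforced through the reparametrisation $`d_i=1-(\delta _1+\ldots +\delta _i)`$ with $`\delta _k\ge 0`$; strictly, $`\sum _k\delta _k<1`$ is also needed for $`d_N>0`$ (satisfied here, but to be checked in general).

```python
import numpy as np
from scipy.optimize import nnls

Phi = np.array([[100.0,   0.0,   0.0],    # rows: bonds, columns: payment dates
                [  4.0, 104.0,   0.0],
                [  5.0,   5.0, 105.0]])
p = np.array([98.5, 102.3, 104.1])        # observed gross prices (toy data)

# p_j = sum_i Phi_ji d_i + eps_j with d_i = 1 - cumsum(delta)_i becomes
# p_j - sum_i Phi_ji = -sum_k delta_k * (sum_{i>=k} Phi_ji) + eps_j
r = p - Phi.sum(axis=1)
G = -np.flip(np.cumsum(np.flip(Phi, axis=1), axis=1), axis=1)  # G_jk = -sum_{i>=k} Phi_ji
delta, _ = nnls(G, r)                     # least squares subject to delta >= 0
d = 1.0 - np.cumsum(delta)
print("estimated discount factors:", np.round(d, 4))
```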
## 3. Estimation of the term structure
In general, the bond set you decide to use for your analysis does not form a complete coupon term structure. This is true, for example, for the set we used in the present paper. To realize this look at the next figure 1, where we have reported the maturity spectrum of all $`206`$ bonds in our sample.
Since no bond has maturity between $`10`$ and $`15`$ years and the maturity of the longest (coupon) bond reaches about $`30`$ years, the term structure of coupon bonds in our data set cannot be complete.
In $`1971`$ McCulloch introduced a new technique for studying the term structure of interest rates based on the following assumption <sup>5</sup><sup>5</sup>5Actually McCulloch’s statement of Hypothesis 3.1 differs somewhat from ours: he assumed that coupon payments arrive in a continuous stream instead of semiannually. However our formulation corresponds to the one preferred by most authors after McCulloch .
###### Hypothesis 3.1.
If the bond set $`𝒮`$ satisfies conditions $`\mathit{(}NA_\mathit{1}\mathit{)}`$-$`\mathit{(}NA_\mathit{3}\mathit{)}`$, then there exists a decreasing function $`d:[0,+\infty )\to (0,1]`$ such that $`d(0)=1`$, $`\lim _{t\to +\infty }d(t)=0`$ and that for every bond $`(c,m,p)\in 𝒮`$
$$p=\sum _{i=1}^{N}d(t_i)\phi _i(c,m).$$
###### Remark 3.2.
Function $`d`$ is called the discount function for the bond set $`𝒮`$. For each $`t0`$, $`d(t)`$ represents, then, the present value of $`\$1`$ to be received in $`t`$ years’ time. When $`𝒮=𝒯`$ forms a complete coupon term structure, the discount factors $`d_1,\mathrm{},d_N`$, whose existence is guaranteed by Theorem 2.5, are just the values that $`d`$ takes on the corresponding maturities $`t_1,\mathrm{},t_N`$, that is $`d_i=d(t_i)`$ for every $`i`$.
The definitions of spot rate and of $`P`$-periods forward rate can be easily extended, by means of the discount function $`d`$, to all maturities $`t\ge 0`$. In particular the spot rate function $`s:(0,+\infty )\to \mathbb{R}_+`$ is defined by
(3.1)
$$s(t):=-\frac{1}{t}\mathrm{ln}\,d(t)$$
and the $`P`$-periods forward rate function $`f^P:[0,+\infty )\to \mathbb{R}_+`$ by
$$f^P(t):=-\frac{2}{P}\mathrm{ln}\frac{d(t+P/2)}{d(t)}.$$
Furthermore, if $`d`$ is differentiable, we can define the instantaneous forward rate (of return) function $`f:[0,+\infty )\to \mathbb{R}_+`$ by
(3.2)
$$f(t):=-\frac{d}{dt}\mathrm{ln}\,d(t).$$
Then $`f(t)`$ represents the simple net rate of return per unit time of an investment in the bond markets over an infinitesimal time interval starting $`t`$ years ahead. Moreover the $`P`$-periods forward rate $`f^P(t)`$ equals the average of the instantaneous forward rate $`f`$ over the length $`P/2`$ interval $`[t,t+P/2]`$, that is
$$f^P(t)=\frac{2}{P}\int _t^{t+P/2}f(s)\,ds.$$
It is important to understand, however, that the existence of a discount function, even when conditions $`(NA_1)`$-$`(NA_3)`$ are strictly satisfied, is nothing but a hypothesis that scholars and practitioners assume in order to fill the gaps in the maturity spectrum of bond samples that do not form a complete coupon term structure. It is by no means a consequence of $`(NA_1)`$-$`(NA_3)`$. As we have already pointed out at the end of section 2, building a complete set of discount factors is an ill-conditioned problem when the term structure of coupon bonds in the sample is incomplete. This explains the difficulties met with by people when trying to estimate a discount function from a generic bond data set.
As to the regularity properties of the discount function, some researchers (see, e.g., ) assert that on the ground of mere economic considerations we can only require that $`d`$ be monotonic decreasing, any further restrictions being not justified. On the contrary, most authors assume also that $`d`$ be twice differentiable so attaining a smooth forward rate curve. In particular Langetieg and Smoot argue that a non-smooth forward rate function $`f`$ should give rise to arbitrage opportunities and that, consequently, any irregularity of $`f`$ should be quickly priced out in an efficient market.
If we assume Hypothesis 3.1, then the problem of estimating the term structure of interest rates from a bond data set reduces to choosing an $`n`$-parameter family of functions $`d(\cdot ;𝜶):[0,+\infty )\to \mathbb{R}`$ that we consider capable of capturing the main characteristics of the bond markets under examination and determining the parameter vector $`𝜶\in \mathbb{R}^n`$ by minimizing the sum of squared residuals (least squares fitting procedure)
(3.3)
$$\underset{𝜶}{\mathrm{min}}\sum _j\left|p_j-\sum _{i=1}^{N}d(t_i;𝜶)\phi _i(c_j,m_j)\right|^2,$$
where $`j`$ ranges over all bonds $`(c_j,m_j,p_j)`$ in our data set. The choice of the functions $`d(\cdot ;𝜶)`$, which in the end is always a matter of judgment, is crucial to the quality of our fit.
The test functions $`d(\cdot ;𝜶)`$ that are usually employed in the fitting procedure (3.3) may be roughly divided in two categories: piecewise (polynomial or exponential) functions, or splines, and functions generated by “parsimonious” models.
Spline functions are more capable of capturing genuine bends in the discount function but have the drawback of over-fitting risk; parsimonious models, on the contrary, are less flexible but do a better job in removing noise from the data. So the choice of spline functions is better for getting a “good fit” whereas parsimonious models should be privileged for the sake of “smoothness”.
### 3.1. Spline based techniques
The most widely used spline method, especially among practitioners, is due to McCulloch (see also ) and is based on the use of polynomial spline functions.
We recall that, given $`-\infty <K_1<K_2<\ldots <K_k<+\infty `$ (for some $`k>2`$) and $`r\in \mathbb{N}\cup \{0\}`$, we say that $`f:[K_1,K_k]\to \mathbb{R}`$ is an $`r`$-degree polynomial spline function with knot points $`K_1,K_2,\ldots ,K_k`$, if and only if $`f`$ is an $`r`$-degree polynomial in each interval $`[K_i,K_{i+1}]`$ and if it is continuous, along with all its derivatives up to order $`r-1`$ (if $`r>0`$), in $`[K_1,K_k]`$.
The reasons underlying the choice of polynomial spline functions are as follows. It is apparent that a very simple, but somehow naïve, approach to the problem of fitting a discount function to a bond data sample is to use polynomial test functions. However such functions have uniform resolving power, that is they tend to fit better at the short end of the maturity spectrum where the greatest concentration of bond maturities occurs (see figure 1), and worse at the long end. A possible way out is, of course, to increase the order of the polynomial but this can cause instability in the parameter estimates. On the other hand, practitioners often require methods endowed with a sufficiently high flexibility to have the chance to choose a priori how good their fit of the discount function will be in various regions of the maturity spectrum. For instance, they may desire to improve the goodness of their fit in those maturity ranges where many bonds are traded.
A fitting procedure that meets all these requirements involves the use of polynomial spline functions. By means of a convenient choice of the number and the position of the knots, such method allows to achieve the desired resolution degree along the whole maturity spectrum, even employing relatively low order polynomials (which means more stable curves).
So the choice of the number and the position of knot points affects dramatically the quality of the estimated term structure of interest rates.
McCulloch suggested choosing a number $`k`$ of knots equal (approximately) to the square root of the number of bonds in the sample and placing them in such a way that $`K_1=0`$, $`K_k`$ equals the maturity of the longest bond and that each subinterval $`[K_i,K_{i+1}]`$ contains approximately the same number of observed maturities. This choice should provide an increasing resolving power as the number of observations increases and should avoid over-fitting phenomena.
Currently, practitioners and financial analysts prefer to place knots at $`0`$, $`1`$, $`3`$, $`5`$, $`7`$, $`11=10^+`$ and $`30`$ years, so splitting up the whole maturity range into the intervals $`(0,1)`$, $`(1,3)`$, $`(3,5)`$, $`(5,7)`$, $`(7,11)`$, and $`(11,30)`$. The rationale behind this choice is that most asset management companies divide their assets into groups (or categories) that always lie in one of the above intervals of the maturity spectrum. This fact is reflected also in the existence of several benchmark securities with maturity of about $`1`$, $`3`$, $`5`$, $`7`$, $`10`$ and $`30`$ years.
Given $`k`$ knot points $`K_1,\ldots ,K_k`$, the set of all $`r`$-degree polynomial spline functions on $`[K_1,K_k]`$ forms a linear space of dimension $`k+r-1`$. As a matter of fact we have the chance of choosing arbitrarily the $`r+1`$ coefficients of the polynomial in the first subinterval $`[K_1,K_2]`$ and one coefficient for each polynomial in the $`k-2`$ remaining subintervals $`[K_i,K_{i+1}]`$ with $`i=2,\ldots ,k-1`$. The other parameters are fixed by the conditions $`f^{(n)}(K_i-0)=f^{(n)}(K_i+0)`$ for all $`i\in \{2,\ldots ,k-1\}`$ and all $`n\in \{0,\ldots ,r-1\}`$.
Let $`\{f_0,f_1,\ldots ,f_{r+k-2}\}`$ be a base for the linear space of $`r`$-degree polynomial spline functions with knot points $`K_1,\ldots ,K_k`$. Without loss of generality we can assume that $`f_0(t)\equiv 1`$ (as a constant function is a polynomial spline function) and that $`f_1(0)=\ldots =f_{r+k-2}(0)=0`$. Then $`𝜶=(\alpha _1,\ldots ,\alpha _{r+k-2})`$ and the McCulloch model for the discount function is
$$d(t;𝜶)=1+\sum _{j=1}^{r+k-2}\alpha _jf_j(t).$$
The corresponding least squares problem is therefore linear in the parameter vector $`𝜶`$.
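To make the linearity concrete, here is a minimal sketch of such a fit. The four toy bonds, the cubic order $`r=3`$ and the single interior knot are our illustrative assumptions, not data or choices from the paper; a truncated-power basis is used, which automatically satisfies $`f_j(0)=0`$.

```python
import numpy as np

knots = [1.0]                                  # interior knots K_2, ..., K_{k-1}
pay_dates = np.array([0.5, 1.0, 1.5, 2.0])     # grid t_1 < ... < t_N
Phi = np.array([[100.0,   0.0,   0.0,   0.0],  # rows: bonds, cols: payment dates
                [  4.0, 104.0,   0.0,   0.0],
                [  5.0,   5.0, 105.0,   0.0],
                [  6.0,   6.0,   6.0, 106.0]])
p = np.array([98.5, 102.3, 104.1, 108.0])      # observed gross prices (toy data)

def basis(t):
    """Basis f_1, ..., f_{r+k-2} with f_j(0) = 0: t, t^2, t^3, (t - K)^3_+."""
    cols = [t, t**2, t**3] + [np.clip(t - K, 0.0, None)**3 for K in knots]
    return np.column_stack(cols)

F = basis(pay_dates)                 # N x (r+k-2) matrix of basis values
X = Phi @ F                          # regressors sum_i Phi_ji f_j(t_i)
y = p - Phi.sum(axis=1)              # p_j minus the contribution of the '1' term
alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted d(t_i):", np.round(1.0 + F @ alpha, 4))
```

With more bonds than parameters the same code performs a genuine regression; as noted next, monotonicity of the fitted $`d`$ is not guaranteed by this procedure.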
A major limitation of this method is that it does not force a monotonous decreasing behavior of the resulting discount function. As a consequence, meaningless results, like negative forward rates, may be obtained. In this respect Barzanti and Corradi (see also and ) have recently proposed the introduction of a set of linear constraints on the parameters $`\alpha _1,\ldots ,\alpha _{r+k-2}`$ that ensure the desired monotonicity of the employed spline functions.
A further criticism directed by Vasicek and Fong at McCulloch’s method is that most general-equilibrium models of the term structure of interest rates (see, e.g., and ) predict an exponential form for the discount function. However piecewise polynomials have a different curvature compared to an exponential function. Hence they conclude that “conventional” polynomial splines cannot provide a good fit to an exponential-like discount function.
Following the arguments of Vasicek and Fong, Langetieg and Smoot and Coleman, Fisher and Ibbotson proposed, respectively, to fit a polynomial spline to the spot rate function and to the instantaneous forward rate function rather than to the discount function <sup>6</sup><sup>6</sup>6Actually Coleman, Fisher and Ibbotson proposed only the use of piecewise constant functions, that is of $`0`$-degree polynomial splines. However their method can be easily extended to polynomial splines of any order. . They argued that, assuming an exponential form for the discount function, their regression procedure should give better results (i.e. more “realistic” from the financial viewpoint) compared to McCulloch’s technique.
By (3.1) and (3.2), $`d(t)=\mathrm{exp}(-t\,s(t))`$ and $`d(t)=\mathrm{exp}\left(-\int_0^tf(s)\,ds\right)`$. Hence the method by Langetieg and Smoot amounts to taking $`𝜶=(\alpha _0,\ldots,\alpha _{r+k-2})`$ (for some $`r\ge 1`$ and some $`k\ge 2`$) and
$$d(t;𝜶)=\mathrm{exp}\left(-t\sum_{j=0}^{r+k-2}\alpha _jf_j(t)\right),$$
where $`\{f_0,f_1,\ldots,f_{r+k-2}\}`$ is a basis for the linear space of $`r`$-degree polynomial spline functions with knot points $`K_1,\ldots,K_k`$.
Analogously, since the integral of an $`r`$-degree polynomial spline function is an $`(r+1)`$-degree polynomial spline, the method by Coleman, Fisher and Ibbotson amounts to taking $`𝜶=(\alpha _1,\ldots,\alpha _{r+k-1})`$ and
$$d(t;𝜶)=\mathrm{exp}\left(-\sum_{j=1}^{r+k-1}\alpha _jf_j(t)\right),$$
where, as above, $`\{f_0,f_1,\ldots,f_{r+k-1}\}`$ is a basis for the linear space of $`(r+1)`$-degree polynomial spline functions with knot points $`K_1,\ldots,K_k`$ and with the property that $`f_0(t)\equiv 1`$ and that $`f_1(0)=\cdots =f_{r+k-1}(0)=0`$ (so that $`d(0;𝜶)=1`$ automatically).
The methods by Langetieg and Smoot and by Coleman, Fisher and Ibbotson share the same limitation as the method by McCulloch, in that it is impossible to force a monotonically decreasing behavior of the resulting discount function by simply introducing linear constraints on the parameters to be determined. The only exception is the method by Coleman, Fisher and Ibbotson with $`r=0`$, that is when piecewise constant functions are used to fit the instantaneous forward rate function. Hence, again, financially unacceptable results may be found.
### 3.2. Parsimonious models
It is readily apparent that, although spline based techniques are still widely used to estimate the term structure of interest rates, they present many drawbacks (as pointed out, for example, by Shea). As a matter of fact several choices, such as the number and position of the knots and the degree of the basic polynomials, dramatically affect the quality of the fit.
An alternative approach to the estimation of the term structure of interest rates is the use of the so-called parsimonious models. The main idea behind this class of models is to postulate a unique functional form on the whole range of maturities for the discount function. This implies that in parsimonious models some basic properties of the discount function we are interested in, such as monotonicity, can be imposed a priori. Furthermore, the number of parameters to estimate is typically much smaller than that needed for a typical spline based approach.
Let us describe briefly some of the parsimonious models proposed in the literature.
In one of the first proposals the functional form for the discount function is taken to be
$$d(t;𝜶)=\mathrm{exp}\left(-\sum_{j=1}^{n}\alpha _jt^j\right).$$
The unknown parameters $`\alpha _1,\ldots,\alpha _n`$ can be estimated either by (non-linear) least squares regression or by a maximum likelihood approach. The latter is found to perform better, probably because of the explicit consideration of the heteroskedasticity in the data. Notice that this model is a particular case of the ones by Langetieg and Smoot and by Coleman, Fisher and Ibbotson with just two knots.
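A minimal sketch of the corresponding non-linear least squares estimation (the maximum likelihood variant is omitted) could look as follows; the bond data and the starting values are invented for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def price_residuals(alpha, bonds):
    """Residuals p_j - sum_i d(t_i) phi_i, with d(t) = exp(-sum_j alpha_j t^j)."""
    res = []
    for times, flows, price in bonds:
        t = np.asarray(times, float)
        d = np.exp(-sum(a * t**(j + 1) for j, a in enumerate(alpha)))
        res.append(price - np.dot(flows, d))
    return np.asarray(res)

bonds = [([1.0], [100.0], 95.0),
         ([2.0], [100.0], 89.0),
         ([1.0, 2.0], [5.0, 105.0], 98.2)]
fit = least_squares(price_residuals, x0=np.array([0.05, 0.0]), args=(bonds,))
print(fit.x)    # estimated alpha_1, alpha_2
```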
A particularly relevant approach is the one proposed by Nelson and Siegel. They attempted to model explicitly the implied instantaneous forward rate function $`f`$. The Expectation Hypothesis provides a heuristic motivation for their model, since $`f`$ is modeled as the solution of a second order differential equation. The functional form suggested for $`f`$ is
(3.4)
$$f(t;𝜶)=\beta _0+\beta _1e^{-t/\tau }+\beta _2\frac{t}{\tau }e^{-t/\tau },$$
where $`𝜶=(\beta _0,\beta _1,\beta _2,\tau )`$. It implies the following model for the discount function:
$$d(t;𝜶)=\mathrm{exp}\left(-t\left(\beta _0+(\beta _1+\beta _2)\frac{\tau }{t}\left(1-e^{-t/\tau }\right)-\beta _2e^{-t/\tau }\right)\right).$$
The flexibility of this model is explained by the authors by observing that the instantaneous forward rate is the linear combination of three components, $`f_s(t):=e^{-t/\tau }`$, $`f_m(t):=\frac{t}{\tau }e^{-t/\tau }`$ and $`f_l(t):=1`$, modeling respectively the short-, medium-, and long-term behavior of $`f`$. The parameters $`\beta _1`$, $`\beta _2`$ and $`\beta _0`$ measure the strength of the short-, medium-, and long-term component, respectively, whereas $`\tau `$ is a time-scale factor.
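For concreteness, the two Nelson and Siegel curves can be evaluated directly; the sketch below uses arbitrary parameter values and handles the $`t\to 0`$ limit of the spot rate explicitly.

```python
import numpy as np

def ns_forward(t, beta0, beta1, beta2, tau):
    """Nelson-Siegel instantaneous forward rate, equation (3.4)."""
    x = np.asarray(t, float) / tau
    return beta0 + beta1 * np.exp(-x) + beta2 * x * np.exp(-x)

def ns_discount(t, beta0, beta1, beta2, tau):
    """Discount function d(t) = exp(-t * spot(t)); the spot rate tends to
    beta0 + beta1 as t -> 0, handled here by a small positive floor."""
    t = np.asarray(t, float)
    x = np.where(t > 0, t, 1e-12) / tau
    spot = beta0 + (beta1 + beta2) * (1 - np.exp(-x)) / x - beta2 * np.exp(-x)
    return np.exp(-t * spot)

t = np.linspace(0.0, 30.0, 7)
print(ns_discount(t, beta0=0.06, beta1=-0.02, beta2=0.01, tau=2.0))
```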
An extended model for the instantaneous forward rate $`f`$ was later proposed to increase the flexibility of the original Nelson and Siegel model. It is obtained by adding a fourth term, with two extra parameters, to equation (3.4), which allows the instantaneous forward rate curve to display a second “hump”:
$$f(t;𝜶)=\beta _0+\beta _1e^{-t/\tau _1}+\beta _2\frac{t}{\tau _1}e^{-t/\tau _1}+\beta _3\frac{t}{\tau _2}e^{-t/\tau _2}.$$
The model proposed by a J. P. Morgan group and reported by Wiseman is the so-called Exponential Model. The instantaneous forward rate function is modeled by
$$f(t;𝜶)=\sum_{j=0}^{n}a_j\mathrm{exp}(-b_jt),$$
where $`a_0,b_0,\ldots,a_n,b_n`$ are the parameters to be estimated. It has been noticed that this model is able to capture the macro shape of the forward rate curve rather than the features of a single data set.
## 4. Numerical Experiments
In this section we report the results of a set of numerical experiments for estimating the term structure of interest rates from a cross-sectional data set of US government bonds.
The data set was made available by Datastream. It contains the annual coupon rate, the maturity date and the gross (or dirty or tel quel) price (i.e. the sum of the quoted, or clean, price and the accrued interest, computed by the ’$`30/360`$’ convention) for all (fixed-income) US Treasury bills, notes and bonds outstanding at the dates from May $`31^{\text{st}}`$ $`1999`$ to June $`11^{\text{th}}`$ $`1999`$ (two weeks).
Hereafter, for economy of presentation and since the results are very similar across dates, only the results for June $`3^{\text{rd}}`$ $`1999`$ (when $`33`$ bills and $`173`$ notes and bonds were outstanding) will be presented and analyzed.
As a first step, we built all minimal replicating portfolios (if any) of each available bond to double check whether condition $`(NA_1)`$ is actually violated by bonds in the data set. As a “measure” of the arbitrage opportunities for a bond, we took the largest absolute value of the difference between the price of the bond and the price of all its minimal replicating portfolios. The results of this test are reported in figure 2.
Note that just $`69`$ bonds, out of $`206`$, admit a (minimal) replicating portfolio, the maturity of such bonds reaching at most $`7`$ years. This is easily seen by comparing figures 1 and 2. Moreover arbitrage opportunities amount to less than $`\$0.1`$ for bonds with maturity up to $`3`$ years, and range between $`\$0.2`$ and $`\$1`$ for bonds with maturity within the range $`3`$–$`7`$ years.
To estimate the term structure of interest rates we have resorted to the approach suggested by Carleton and Cooper, to the spline approaches by McCulloch and by Coleman, Fisher and Ibbotson, and to the parsimonious model by Nelson and Siegel (see section 3 for a description of these methods). The technique proposed by Svensson gave us the same kind of results as the one proposed by Nelson and Siegel, hence this set of experiments will not be reported in the present paper.
To apply the method of Carleton and Cooper to a bond sample, $`𝒮`$, whose maturity spectrum contains “gaps”, it is necessary to shrink the sample itself to a suitable subset, $`𝒯`$, that forms a complete coupon term structure. After that we can estimate the discount factors by a constrained least squares procedure as described in section 3. In our case the subset $`𝒯`$ contains $`152`$ (out of $`173`$) US Treasury notes and bonds (the longest of which has a maturity of about $`7`$ years) and, obviously, all of the $`33`$ US Treasury bills.
For our numerical experiments, neither the covariance matrix $`\sigma _{ij}=E\{\epsilon _i\epsilon _j\}`$ nor any other information about the statistical properties of the errors $`\epsilon _j`$ was available. For this reason we have resorted to a straight least squares procedure, instead of a weighted one, so as to obtain unbiased, even though not minimum variance, estimators of the discount factors. Our results are reported in figure 3.
Note that the discount factors approach one as their maturity tends to zero. The discount factors vary quite regularly with maturity, unlike spot rates and, even more significantly, one-period forward rates. These present irregularities for maturities of about $`1`$ year and above $`3`$ years, where the largest violations of condition $`(NA_1)`$ occur. The plot of the residuals
$$p_j-\sum_{i=1}^{N}\widehat{d}_i\phi _i(c_j,m_j),$$
shows that, as expected, the fitting procedure performs worse where condition $`(NA_1)`$ is more severely violated.
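The constrained regression itself is too long for a snippet, but the core mechanism behind the Carleton and Cooper estimates, namely recovering discount factors from a complete coupon term structure through the (triangular) cash-flow matrix (exactly the construction used in the proof given in the appendix), can be sketched in a few lines; the two-bond data below are invented.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Cash-flow matrix Phi (upper triangular: bond j pays nothing after t_j) and prices.
# Toy complete coupon term structure: a 1-year zero and a 2-year 5% coupon bond.
Phi = np.array([[100.0, 5.0],      # payments at t_1 = 1 year
                [0.0, 105.0]])     # payments at t_2 = 2 years
p = np.array([95.0, 98.2])

# d_i = sum_j p_j (Phi^{-1})_{ji}  is equivalent to solving  Phi^T d = p
d = solve_triangular(Phi.T, p, lower=True)
print(d)   # discount factors: decreasing and between 0 and 1 for arbitrage-free data
```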
McCulloch’s method has been tested on our data sample with two different sets of knot points. The first set (hereafter referred to as “knots à la McCulloch”) is built following McCulloch’s indications and consists of $`14`$ knots placed at $`0`$, $`0.2`$, $`0.4`$, $`0.6`$, $`0.9`$, $`1.4`$, $`1.8`$, $`2.4`$, $`3.4`$, $`4.3`$, $`6.3`$, $`16.6`$, $`22.8`$, $`29.7`$ years. The second set is made of seven knots placed at $`0`$, $`1`$, $`3`$, $`5`$, $`7`$, $`11`$ and $`30`$ years. The basis we used (see section 3.1) consists of the following functions:
$`f_i(t)=t^i`$ for $`i=0,\ldots,r-1`$, and, for $`i=r,\ldots,r+k-2`$,
$$f_i(t)=\{\begin{array}{cc}0\hfill & \text{if }t\le K_{i-r+1},\hfill \\ (t-K_{i-r+1})^r\hfill & \text{if }K_{i-r+1}<t\le K_{i-r+2},\hfill \\ (t-K_{i-r+1})^r-(t-K_{i-r+2})^r\hfill & \text{if }K_{i-r+2}<t.\hfill \end{array}$$
Polynomial spline functions up to the fifth degree have been used. Our results are reported in figure 4 for the first set of knots, and in figure 5 for the second set.
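The basis displayed above can be coded directly; the following sketch is illustrative only (the knots are the “practitioners’” set of section 3.1, and the degree is arbitrary).

```python
import numpy as np

def spline_basis(knots, r):
    """Truncated-power basis f_0, ..., f_{r+k-2} for r-degree splines on the given
    knots, as displayed above (f_0 = 1 and f_i(0) = 0 for i >= 1)."""
    k = len(knots)
    fs = [lambda t, i=i: np.asarray(t, float)**i for i in range(r)]   # t^0 .. t^(r-1)
    for i in range(r, r + k - 1):
        K1, K2 = knots[i - r], knots[i - r + 1]
        def f(t, K1=K1, K2=K2):
            t = np.asarray(t, float)
            return np.clip(t - K1, 0.0, None)**r - np.clip(t - K2, 0.0, None)**r
        fs.append(f)
    return fs

basis = spline_basis([0, 1, 3, 5, 7, 11, 30], r=3)   # cubic: 3 + 7 - 1 = 9 functions
print(len(basis), basis[4](np.linspace(0, 30, 5)))
```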
These graphs show all the drawbacks of McCulloch’s method. First, the outcome of the regression procedure depends strongly on the positions of the knots and on the degree of the polynomial spline employed. This feature is apparent in the forward rate curves (compare figures 4 and 5), whereas the fits for the discount function and the spot rate are more stable. Moreover, a polynomial spline is not, in general, a decreasing function, hence negative forward rates may occur. It is interesting to note how the results for one-period forward rates obtained both by the method of McCulloch (or of Coleman, Fisher and Ibbotson, as we will see soon) and by the method of Carleton and Cooper share the same oscillating behaviour between $`3`$ and $`7`$ years. This suggests that such behaviour actually depends on the violations of the condition of absence of arbitrage opportunities that occur at those maturities (see figure 2).
To assess the criticism directed by Vasicek and Fong at McCulloch’s technique (see section 3), we assumed an exponential discount function of the form
(4.1)
$$d(t)=e^{-0.06t},$$
which corresponds to a spot rate function and a forward rate function identically equal to $`6\%`$ per year, and generated two sets of artificial data as follows. For each bond $`(c_j,m_j,p_j)`$ in the original data set, $`𝒮`$, we defined a fake price:
$$\widehat{p}_j:=\sum_{i=1}^{N}d(t_i)\phi _i(c_j,m_j)+\epsilon _j,$$
where $`T(𝒮)=(t_1,\ldots,t_N)`$ and $`\epsilon _j=0`$ for the first set of artificial data (named hereafter “exact” data) whereas $`\epsilon _j\sim 𝒩(0,1)`$ for the second set (“noisy” data).
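A sketch of this data-generating step, under the same assumptions (the bond list is hypothetical and the perturbation is a standard normal draw):

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_prices(bonds, noisy=False):
    """Artificial prices from d(t) = exp(-0.06 t), equation (4.1):
    'exact' data if noisy=False, 'noisy' data with eps_j ~ N(0, 1) otherwise."""
    out = []
    for times, flows, _market_price in bonds:
        p = float(np.dot(flows, np.exp(-0.06 * np.asarray(times, float))))
        out.append(p + rng.normal(0.0, 1.0) if noisy else p)
    return out

bonds = [([1.0], [100.0], 95.0), ([1.0, 2.0], [5.0, 105.0], 98.2)]
print(fake_prices(bonds))               # "exact" prices
print(fake_prices(bonds, noisy=True))   # "noisy" prices
```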
After that, McCulloch’s method, with knots à la McCulloch, has been applied to both “exact” and “noisy” bond data, so attaining an estimate, $`\widehat{d}`$, of the discount function $`d`$ defined in (4.1). The results reported in figures 6 and 7 seem to confirm Vasicek and Fong’s opinion. McCulloch’s method performs pretty well on the “exact” data, whereas it provides a bad fit when “noisy” data are used, even if the artificial perturbations introduced are quite small compared to the typical price of a bond ($`<1\%`$ on average).
The method of Coleman, Fisher and Ibbotson has been tested with the same sets of knot points and the same basis of spline functions that we used to test McCulloch’s method. Polynomial spline functions of various degrees have been used. For $`0`$-degree spline functions, that is when a piecewise constant function is fitted to the instantaneous forward rate function, the least squares procedure has been constrained so as to attain a monotonically decreasing discount function. Our results are reported in figure 8 for knots à la McCulloch, and in figure 9 for knots placed at $`0`$, $`1`$, $`3`$, $`5`$, $`7`$, $`11`$ and $`30`$ years.
These graphs show that the method of Coleman, Fisher and Ibbotson shares the same drawbacks as McCulloch’s method. However the risk of getting negative forward rates seems to be reduced.
The graphs in figure 10 are produced following the approach of Nelson and Siegel. The main features of the estimates based on parsimonious models are readily seen. Although for the discount function the differences with other approaches are pretty mild, the constraints imposed on the functional form of the instantaneous forward rate function get rid of the oscillating behavior peculiar to spline based techniques.
We conclude this section by showing a table where the Root Mean Square Error (RMSE, i.e. the square root of the arithmetic mean of the squared residuals; see equation (3.3)) obtained in our numerical experiments is reported. This allows us to compare the quality of the fit among all employed methods (for various choices of the parameters, such as knot points or degree).
As expected the quality of the fit is higher for spline based techniques and lower for parsimonious models. Moreover it is worth noting that it is not true that increasing the order of the spline or the number of knots yields a better fit.
## 5. Conclusions
We have presented a critical review of the main methods for the estimation of the term structure of interest rates in fixed income markets.
It is apparent that most of the activity in this field lacks a solid mathematical basis and there is not a single technique which is definitely better than the others.
A number of elements contribute to form this scenario:
* there is a general trend of recycling existing techniques without any critical evaluation of their real effectiveness.
* errors can be present in the input data (wrong prices). A large error is immediately detected, whereas smaller errors may lead one to take into account “fake” arbitrage opportunities.
* fixed income markets are not homogeneous. The number of data available for a cross-section analysis of the U.S. Treasury securities is very different, for instance, from the number of outstanding Italian government bonds. The features of a market should be considered very carefully before applying any technique.
We expect to extend our work in two directions:
* The development of algorithms to find the best position of the knots for spline-based interpolation methods. We consider genetic algorithms very promising for this kind of problem.
* A mathematical analysis of equilibrium term structure models. This is pretty important since there is already evidence of relevant differences between the predictions of existing models and the experimental data (an example is the volatility term structure).
The importance of “intuition” in financial markets analysis cannot be overstated. However a more precise distinction among hypotheses, mathematical implications of these hypotheses and empirical observations is definitely required in this field.
## 6. Appendix
This appendix is entirely devoted to the proof of Theorem 2.5, which we restate here for convenience.
###### Theorem 6.1.
Let $`𝒯`$ be a complete coupon term structure and let $`T(𝒯)=(t_1,\ldots,t_N)`$ for some $`N`$. If $`\mathrm{Card}(𝒯)\ge N`$ and if the following condition:
1. $`(NA_1)`$: if $`(q_1,\ldots,q_B)\in \mathbb{R}^B`$ are such that $`\sum_{j=1}^{B}q_j𝝋(c_j,m_j)=0`$, then $`\sum_{j=1}^{B}q_jp_j=0`$;
is satisfied, then there exist $`d_1,\ldots,d_N`$ such that for every $`(c,m,p)\in 𝒯`$
(6.1)
$$p=\sum_{i=1}^{N}d_i\phi _i(c,m).$$
If, furthermore, conditions:
1. $`(NA_2)`$: if $`(q_1,\ldots,q_B)\in \mathbb{R}^B`$ is such that
$$\sum_{j=1}^{B}q_j\phi _i(c_j,m_j)=\{\begin{array}{cc}f_{\overline{ı}}\hfill & \text{if }i=\overline{ı}\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array},$$
for some $`f_{\overline{ı}}>0`$, then $`0<\sum_{j=1}^{B}q_jp_j<f_{\overline{ı}}`$;
2. $`(NA_3)`$: if $`(q_1,\ldots,q_B)\in \mathbb{R}^B\backslash \{0\}`$ are such that $`\sum_{j=1}^{B}q_jp_j=0`$, then there exists $`\overline{ı}\in \{1,\ldots,N\}`$ such that $`\sum_{i=1}^{\overline{ı}}\sum_{j=1}^{B}q_j\phi _i(c_j,m_j)<0`$;
are fulfilled as well, then $`1>d_1>d_2>\cdots >d_N>0`$.
###### Proof.
Let $`(c_1,m_1,p_1),\ldots,(c_N,m_N,p_N)\in 𝒯`$ be such that $`m_j=t_j`$ for every $`j\in \{1,\ldots,N\}`$. The existence of $`N`$ such bonds in the set $`𝒯`$ is ensured by the hypotheses that $`\mathrm{Card}(𝒯)\ge N`$ and that $`𝒯`$ forms a complete coupon term structure.
Let $`\mathrm{\Phi }=((\mathrm{\Phi }_{ij}))`$ be an $`N\times N`$ matrix such that
(6.2)
$$\mathrm{\Phi }_{ij}:=\phi _i(c_j,m_j)$$
for all $`i,j\in \{1,\ldots,N\}`$.
Finally let $`(c,m,p)`$ be an arbitrary bond of $`𝒯`$.
Since $`\mathrm{\Phi }`$ is an upper triangular matrix with no vanishing element on its principal diagonal, it is non-singular.
Now let $`q_1,\ldots,q_N`$ be such that
$$q_j=\sum_{i=1}^{N}(\mathrm{\Phi }^{-1})_{ji}\phi _i(c,m)\qquad (j=1,\ldots,N).$$
Then
$$\sum_{j=1}^{N}q_j𝝋(c_j,m_j)=𝝋(c,m).$$
By condition $`(NA_1)`$, this implies equation (6.1) with
$$d_i:=\sum_{j=1}^{N}p_j(\mathrm{\Phi }^{-1})_{ji}\qquad (i=1,\ldots,N).$$
In order to conclude the proof of the first part of the theorem, we have to show that the $`d_i`$’s are independent of the choice of $`(c_1,m_1,p_1),\ldots,(c_N,m_N,p_N)`$ such that $`m_j=t_j`$ for every $`j`$, if several such choices are possible. To do this, let $`\overline{ȷ}\in \{1,\ldots,N\}`$ and let $`(\stackrel{~}{c}_{\overline{ȷ}},\stackrel{~}{m}_{\overline{ȷ}},\stackrel{~}{p}_{\overline{ȷ}})\in 𝒯\backslash \{(c_{\overline{ȷ}},m_{\overline{ȷ}},p_{\overline{ȷ}})\}`$ be such that $`\stackrel{~}{m}_{\overline{ȷ}}=t_{\overline{ȷ}}`$.
Let $`\stackrel{~}{\mathrm{\Phi }}=((\stackrel{~}{\mathrm{\Phi }}_{ij}))`$ be an $`N\times N`$ matrix such that
(6.3)
$$\stackrel{~}{\mathrm{\Phi }}_{ij}:=\{\begin{array}{cc}\phi _i(c_j,m_j)\hfill & \text{if }j\ne \overline{ȷ}\hfill \\ \phi _i(\stackrel{~}{c}_{\overline{ȷ}},\stackrel{~}{m}_{\overline{ȷ}})\hfill & \text{if }j=\overline{ȷ}\hfill \end{array}$$
for all $`i,j\in \{1,\ldots,N\}`$, and let
$$\stackrel{~}{d}_i:=\sum_{\substack{j=1\\ j\ne \overline{ȷ}}}^{N}p_j(\stackrel{~}{\mathrm{\Phi }}^{-1})_{ji}+\stackrel{~}{p}_{\overline{ȷ}}(\stackrel{~}{\mathrm{\Phi }}^{-1})_{\overline{ȷ}i}\qquad (i=1,\ldots,N).$$
We want to show that $`\stackrel{~}{d}_i=d_i`$ for every $`i\in \{1,\ldots,N\}`$. Since both $`\overline{ȷ}`$ and $`(\stackrel{~}{c}_{\overline{ȷ}},\stackrel{~}{m}_{\overline{ȷ}},\stackrel{~}{p}_{\overline{ȷ}})`$ have been arbitrarily chosen, this will conclude the proof of the first part of the theorem.
Observe that $`\stackrel{~}{\mathrm{\Phi }}_{ij}=\mathrm{\Phi }_{ij}`$ for all $`i`$ and all $`j\ne \overline{ȷ}`$, and that by equations (6.1), (6.2) and (6.3)
$$p_j=\sum_{i=1}^{N}d_i\mathrm{\Phi }_{ij}\qquad (j=1,\ldots,N)$$
and
$$\stackrel{~}{p}_{\overline{ȷ}}=\sum_{i=1}^{N}d_i\stackrel{~}{\mathrm{\Phi }}_{i\overline{ȷ}}.$$
Then for every $`i\in \{1,\ldots,N\}`$
$`\stackrel{~}{d}_i`$ $`=\sum_{\substack{j=1\\ j\ne \overline{ȷ}}}^{N}\left[(\stackrel{~}{\mathrm{\Phi }}^{-1})_{ji}\sum_{h=1}^{N}d_h\mathrm{\Phi }_{hj}\right]+(\stackrel{~}{\mathrm{\Phi }}^{-1})_{\overline{ȷ}i}\left(\sum_{h=1}^{N}d_h\stackrel{~}{\mathrm{\Phi }}_{h\overline{ȷ}}\right)`$
$`=\sum_{h=1}^{N}d_h\left[\sum_{j=1}^{N}\stackrel{~}{\mathrm{\Phi }}_{hj}(\stackrel{~}{\mathrm{\Phi }}^{-1})_{ji}\right]=d_i.`$
In order to prove the second part of the theorem, let $`(c_1,m_1,p_1),\ldots,(c_N,m_N,p_N)\in 𝒯`$ be such that $`m_j=t_j`$ for every $`j\in \{1,\ldots,N\}`$ and let $`(q_1,\ldots,q_N)\in \mathbb{R}^N`$ be a portfolio of these bonds such that
$$\sum_{j=1}^{N}q_j\phi _i(c_j,m_j)=\{\begin{array}{cc}f_{\overline{ı}}\hfill & \text{if }i=\overline{ı}\hfill \\ f_{\overline{ı}+1}\hfill & \text{if }i=\overline{ı}+1\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array},$$
for some $`\overline{ı}\in \{1,\ldots,N\}`$ and some $`f_{\overline{ı}},f_{\overline{ı}+1}\in \mathbb{R}`$. By (6.1), the price of this portfolio is
$$\sum_{j=1}^{N}q_jp_j=d_{\overline{ı}}f_{\overline{ı}}+d_{\overline{ı}+1}f_{\overline{ı}+1}.$$
If $`f_{\overline{ı}}>0`$ and $`f_{\overline{ı}+1}=0`$, then by condition $`(NA_2)`$, $`0<d_{\overline{ı}}f_{\overline{ı}}<f_{\overline{ı}}`$, which implies that $`0<d_{\overline{ı}}<1`$ and, since $`\overline{ı}`$ has been chosen arbitrarily, that
$$0<d_i<1\qquad (i=1,\ldots,N).$$
Now suppose, by contradiction, that $`d_{\overline{ı}}\le d_{\overline{ı}+1}`$ and let $`f_{\overline{ı}},f_{\overline{ı}+1}`$ be such that
$$d_{\overline{ı}}f_{\overline{ı}}+d_{\overline{ı}+1}f_{\overline{ı}+1}=\sum_{j=1}^{N}q_jp_j=0.$$
Since $`d_{\overline{ı}},d_{\overline{ı}+1}>0`$, we can assume, without loss of generality, that $`f_{\overline{ı}}\ge 0`$ and that $`f_{\overline{ı}}+f_{\overline{ı}+1}\ge 0`$. However condition $`(NA_3)`$ implies that either $`f_{\overline{ı}}<0`$ or $`f_{\overline{ı}}+f_{\overline{ı}+1}<0`$. This is the desired contradiction. ∎
# Does the unification of BL Lac and FR I radio galaxies require jet velocity structures?
## 1 Introduction
Unification models adduce the main differences between the observed properties of different classes of AGNs to the anisotropy of the radiation emitted by the active nucleus (see Antonucci (1993) and Urry & Padovani (1995) for reviews). In particular, for low luminosity radio-loud objects, namely BL Lacs and FR I radio galaxies (Fanaroff & Riley 1974), it is believed that this effect is mainly due to relativistic beaming. In fact, there is growing evidence that obscuration does not play a significant role in these objects, contrary to other classes of AGNs. This is indicated by optical (Chiaberge et al. 1999, hereafter Paper I), radio (Henkel et al. 1998), and X-ray information (e.g. Fabbiano et al. 1984, Worrall & Birkinshaw 1994, Trussoni et al. 1999). Within this scenario, the emission from the inner regions of a relativistic jet dominates the observed radiation in BL Lacs, while in FR I, whose jet is observed at larger angles with respect to the line of sight, this component is strongly debeamed. Evidence for this unification scheme includes the power and morphology of the extended radio emission of BL Lacs (e.g. Antonucci & Ulvestad 1985, Kollgaard et al. 1992, Murphy et al. 1993) and the properties of their host galaxies (e.g. Ulrich 1989, Stickel et al. 1991, Urry et al. 1999), which are similar to those of FR I. Furthermore, there is a quantitative agreement among the amount of beaming required by different observational properties (e.g. Ghisellini et al. 1993), the number densities and luminosity functions of the parent and beamed populations in different bands (e.g. Urry & Padovani 1995, Celotti et al. 1993) and the comparison of the radio core emission of beamed and unbeamed objects with similar total radio power (Kollgaard et al. 1996).
Despite this global agreement, it should be stressed that beaming factors inferred from the broad band spectral properties of blazars, more specifically superluminal motions, transparency to the $`\gamma `$–ray emission, shape of the SED and time–lags among variations at different frequencies, are significantly and systematically larger than those suggested by radio luminosity data (Dondi & Ghisellini 1995, Ghisellini et al. 1998, Tavecchio et al. 1998).
Thanks to the Hubble Space Telescope (HST), faint optical nuclear components have recently been detected in FR I galaxies (Chiaberge et al. 1999). A strong linear correlation is found between this optical and the radio core emission, which strongly argues for a common non-thermal origin. This suggests that the optical cores can be identified with synchrotron radiation produced in a relativistic jet, qualitatively supporting the unifying model for FR I and BL Lacs.
This information offers a new possibility of verifying the unification scheme, by directly comparing the properties of the optical and radio cores of radio galaxies with those of their putative aligned (beamed) counterparts, analogously to the procedure followed for the radio cores. X-ray observations also provide useful constraints on the nuclear emission of FR I sources (e.g. Hardcastle & Worrall 1999).
The main advantage of using multifrequency data is the possibility of directly comparing the full broad band spectral distributions of these two classes of sources, eventually shedding light on the apparent discrepancy in the Lorentz factors inferred from different approaches.
The paper is organized as follows. The (complete) samples of BL Lacs and radio galaxies are presented in Sect. 2. In Sect. 3 we compare separately the core radio and optical emission of beamed and unbeamed objects with similar extended radio power. From this we infer the Lorentz factors requested by the unification scheme within the simplest scenario in which the radiation is emitted by a single uniform region of the relativistic jet. In Sect. 4 the radio and optical data are considered together and, starting from the observed SED of BL Lacs, we derive the expected properties of the nuclear emission of FR I, by taking into account the spectral dependence of the relativistic transformations. As the single–region picture does not account for the observed properties, in Sect. 5 we explore a (simple) alternative scenario and test it also against the X-ray information. Summary and conclusions are presented in Sect. 6.
## 2 The samples
### 2.1 FR I radio galaxies
Our complete sample of radio galaxies comprises all the sources belonging to the 3CR catalogue (Spinrad et al. 1985) morphologically identified as FR I. The redshifts of these objects span the range $`z=0.0037`$–$`0.29`$, with a median value of $`z=0.03`$, and the total radio luminosities at 1.4 GHz are between $`10^{30.2}`$ and $`10^{34.2}`$ $`\mathrm{erg}\ \mathrm{s}^{-1}\ \mathrm{Hz}^{-1}`$ ($`H_0=75\ \mathrm{km}\ \mathrm{s}^{-1}\ \mathrm{Mpc}^{-1}`$ and $`q_0=0.5`$ are adopted hereafter). We exclude from the original sample the peculiar object 3C 386, as discussed in Paper I. The optical and radio data are from Paper I, while the X-ray ones are from Hardcastle & Worrall (1999). The optical data are extrapolated to the V band using a spectral index $`\alpha _o=1`$ ($`F_\nu \propto \nu ^{-\alpha }`$).
### 2.2 Radio and X-ray selected BL Lac samples
We consider both the complete sample of 34 radio selected BL Lacs derived from the 1 Jy catalog (Stickel et al. 1991, Kühr et al. 1981), and the BL Lac sample selected from the Einstein Slew survey (Elvis et al. 1992, Perlman et al. 1996), which comprises 48 objects and is nearly complete. The extended radio power (at 1.4 GHz) $`L_{ext}`$ (in $`\mathrm{erg}\ \mathrm{s}^{-1}\ \mathrm{Hz}^{-1}`$) spans the ranges $`10^{30.1}`$–$`10^{33.8}`$ (1 Jy BL Lacs, Kollgaard et al. 1996) and $`10^{29.1}`$–$`10^{33.4}`$ (Slew survey BL Lacs, Kollgaard et al. 1996, Perlman et al. 1996); the redshifts are between 0.049 and 1.048 (median $`z=0.501`$) and between 0.031 and 0.513 (median $`z=0.188`$) for the two samples, respectively (the redshifts of the Slew BL Lacs are taken from the data collected by Fossati et al. 1998).
Instead of classifying BL Lacs according to their selection spectral band, in the following we adopt the definitions of high and low energy peaked BL Lacs (HBL and LBL, respectively), which are based on the position of the (synchrotron) emission peak in the spectrum and are therefore more indicative of the physical characteristics of the objects (Giommi & Padovani 1994, Fossati et al. 1998). Of the 34 objects belonging to the 1 Jy sample, 32 are classified as LBL and 2 as HBL, while of the 48 X-ray selected BL Lacs, 40 are HBL and 8 are LBL.
The spectral data for both samples of BL Lacs are taken from Fossati et al. (1998).
## 3 Core versus extended luminosity
According to the unification models the beamed and unbeamed populations must cover the same range of extended luminosity, as this is considered to be isotropic. On the contrary, emission from the core is affected by beaming: radio galaxies should have a fainter central component, whose intensity depends on the Doppler factor $`\delta =[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]^{-1}`$, where $`\mathrm{\Gamma }=(1-\beta ^2)^{-1/2}`$, $`\beta c`$ is the bulk velocity of the emitting plasma and $`\theta `$ is the angle between the direction of the jet and the line of sight. The transformation law for the specific flux density is in fact $`F_\nu =\delta ^{p+\alpha }F_{\nu ^{\prime}}^{\prime}`$, where the primed quantity refers to the comoving frame, $`\alpha `$ is the local spectral index, $`p=2`$ for a continuous jet and $`p=3`$ for a moving sphere.
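The numerical size of the effect is easy to evaluate; the Python sketch below simply implements the two formulas above, with illustrative parameter values.

```python
import numpy as np

def doppler(gamma, theta):
    """Doppler factor delta = [Gamma (1 - beta cos(theta))]^(-1)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

def flux_ratio(gamma, theta1, theta2, p=3, alpha=1.0):
    """Ratio of observed flux densities at two viewing angles, F ~ delta^(p + alpha)."""
    return (doppler(gamma, theta1) / doppler(gamma, theta2))**(p + alpha)

g = 5.0
# aligned (theta ~ 1/Gamma, as for BL Lacs) versus misaligned (60 deg, as for FR I)
print(flux_ratio(g, 1.0 / g, np.radians(60.0)))   # roughly 4 dex of beaming contrast
```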
Therefore the comparison of the core emission of beamed objects and their parent population with similar extended emission provides a direct estimate of the Lorentz factor of the radiating plasma, if the typical observing angles are known. With this aim, and similarly to what has been done in the radio band (e.g. Kollgaard et al. 1996), we plot the optical V band luminosity ($`L_o`$) vs the extended radio luminosity at 1.4 GHz ($`L_{ext}`$) for the three samples (Fig. 1).
First we should note that the HBL objects do not fully share the range of extended radio power of the 3CR radio galaxies (the HBL total luminosities are in fact more similar to those of the objects belonging to the B2 sample of low power radio galaxies). Conversely, the $`L_{ext}`$ of LBL match well those of the FR I of the 3CR catalog.
Also notice that the regions occupied by the two samples of BL Lacs appear to be continuously connected, the lower radio power BL Lacs (which are HBLs) and the higher radio power ones (LBLs) having an optical luminosity which weakly increases with increasing extended luminosity. Because of this trend, in order to compare sources with the same $`L_{ext}`$ we have sub-divided the samples into three bins, namely: $`\mathrm{log}L_{ext}`$ (in $`\mathrm{erg}\ \mathrm{s}^{-1}\ \mathrm{Hz}^{-1}`$) $`<31.5`$, $`\mathrm{log}L_{ext}`$ between 31.5 and 32.5, and $`\mathrm{log}L_{ext}>32.5`$.
We thus calculate the median values of the observed nuclear luminosity of FR I and BL Lacs in each interval of extended power. BL Lacs are on average 4 orders of magnitude brighter than FR I cores. We can assume that BL Lacs are observed at $`\theta \simeq 1/\mathrm{\Gamma }`$ (at this particular angle one obtains $`\delta =\mathrm{\Gamma }`$) and FR Is at $`\theta =60^{\circ}`$: in fact, for an isotropic distribution of objects, $`\theta =60^{\circ}`$ corresponds to the median angle if, as is the case for FR I, the scatter in the optical luminosity is dominated by relativistic beaming. Bulk Lorentz factors $`\mathrm{\Gamma }\simeq 4`$ for the case of an emitting sphere and $`\simeq 6`$ for a continuous jet are required in order to account for the different core luminosities of FR I and BL Lacs in each bin of extended power. An optical spectral index $`\alpha _o=1`$ is assumed for all sources (independent of beaming).
An alternative method to estimate $`\mathrm{\Gamma }`$ relies on the fact that, for a randomly oriented sample, the best fit regression line of a luminosity distribution corresponds to the behavior of sources observed at $`60^{\circ}`$, once the most core dominated objects are excluded (Kollgaard et al. 1996). We thus determine the best fit regression of FR I in the $`L_{ext}`$–$`L_o`$ plane, after excluding from the sample the 5 objects in which optical jets are detected. These sources, in fact, have the most luminous optical cores, are among the most core dominated objects in the radio band, and their radio jets are shorter, indicating that they point towards the observer (Sparks et al. 1995). Interestingly, we obtain that there is a remarkable correlation ($`P>99.9\%`$) between $`\mathrm{log}L_{ext}`$ and $`\mathrm{log}L_o`$ among the remaining 20 “highly misoriented” objects, although with a slope ($`\simeq 0.6`$) marginally steeper than the correlation between $`L_{ext}`$ and core radio luminosity ($`L_r`$) found by Giovannini et al. (1988) for a larger sample of radio galaxies. In Fig. 2 we show the regions in which the three samples are located in the $`L_{ext}`$–$`L_o`$ plane, and the dashed lines represent the “beamed” FR I population as observed under an angle $`1/\mathrm{\Gamma }`$ in the case of $`p=3`$. Also with this method $`\mathrm{\Gamma }\simeq 5`$ ($`\simeq 7`$) is required to displace the FR I to the regions occupied by both HBL and LBL for $`p=3`$ ($`p=2`$).
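The quoted values of $`\mathrm{\Gamma }`$ can be recovered numerically: assuming $`\delta =\mathrm{\Gamma }`$ at $`\theta =1/\mathrm{\Gamma }`$, one solves for the $`\mathrm{\Gamma }`$ producing the observed luminosity contrast between the aligned and the $`60^{\circ}`$ views. A minimal sketch follows (the 4 dex target is the average contrast quoted above; $`p`$ and $`\alpha `$ as in the text).

```python
import numpy as np
from scipy.optimize import brentq

def log_contrast(gamma, p=3, alpha=1.0, theta_off=np.radians(60.0)):
    """log10 of the flux ratio between theta = 1/Gamma (delta = Gamma) and theta_off."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    d_off = 1.0 / (gamma * (1.0 - beta * np.cos(theta_off)))
    return (p + alpha) * np.log10(gamma / d_off)

# Gamma such that the core appears 4 orders of magnitude brighter when aligned
gamma = brentq(lambda g: log_contrast(g) - 4.0, 1.01, 50.0)
print(gamma)    # ~4.5, consistent with the Gamma ~ 4 estimate in the text (p = 3)
```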
Let us now consider $`L_r`$ (at 5 GHz) versus $`L_{ext}`$ (Fig. 3), analogously to what is shown by Kollgaard et al. (1996) for a larger sample of radio galaxies (which also includes our objects). The typical radio core luminosities of HBL and LBL are significantly different, the latter objects being on average about one order of magnitude more luminous than the former ones. Conversely, as we have already pointed out, no substantial difference between the two classes is found in the case of $`L_o`$.
These results were initially attributed to a different amount of beaming for X-ray and radio-selected BL Lacs, i.e. different angles of sight and/or different jet velocities (if so, the inferred Lorentz factors, relative to our sample, are $`\mathrm{\Gamma }=4`$ (3) for HBL and $`\mathrm{\Gamma }=10`$ (5) for LBL, in the case of $`p=2`$ ($`p=3`$)), while more recently a consistent picture has emerged where this diversity can be accounted for by the different shape of their intrinsic SED (e.g. Padovani 1992, Ghisellini & Maraschi 1989, Giommi & Padovani 1994, Fossati et al. 1998). The role of these two scenarios will be further explored in the next section, through the comparison of the SED of both types of BL Lacs with their parents.
We conclude that the Lorentz factors inferred from the comparison of the radio, but also optical, emission of FR I and BL Lacs are consistent with those previously estimated from the statistics of these sources within the unifying scheme. However, as already mentioned, such values are significantly and systematically lower than those required by other independent means, such as superluminal motions and high energy spectral constraints (fit to the overall SED and time–lags) in both LBLs and HBLs (Maraschi et al. 1992, Sikora et al. 1994, Celotti et al. 1998, Tavecchio et al. 1998). These latter methods require a value of the Doppler factor $`\delta `$ in the range 15–20 for the region emitting most of the radiation in both HBLs and LBLs. The need for high degrees of beaming will constitute a crucial point in the following.
## 4 FR I and BL Lac in the $`L_o`$ and $`L_r`$ plane
Since for the first time multifrequency data are available also for the nuclei of radio galaxies, we can now directly compare the spectral properties of beamed objects and their parent population: this new approach can thus combine information from the SED of BL Lacs with that inferred from their relation with FR I. In particular in Paper I and in Chiaberge et al. (1999) we showed how the location of sources in the $`L_o`$–$`L_r`$ plane represents a very useful tool to discuss their nuclear properties. In Fig. 4 we show the optical vs radio core luminosity for the three samples. The dashed line represents the (almost linear) correlation found between these two quantities among the FR I sources (Paper I). Radio galaxies, HBL and LBL occupy different regions of this plane: LBLs are located only marginally above the continuation of this correlation, while HBL are $`\simeq 2`$ orders of magnitude brighter in the optical with respect to other objects for a given radio luminosity.
In order to determine how beaming affects the observed luminosities and thus how objects could be connected in this plane, we consider the SED of BL Lacs, observationally much better determined, and calculate the observed spectrum of the misaligned objects, by taking into account relativistic transformations.
In fact, an important point, previously neglected, is that these transformations depend on the spectral index in the band considered, which might itself change as a function of the degree of beaming. Therefore, in order to correctly de–beam the SED of BL Lacs, a continuous representation of it and an estimate of the bulk Lorentz factor of the emitting region are needed (it is again assumed $`\theta =1/\mathrm{\Gamma }`$). While any continuous description of the SED and typical Lorentz factors can be used, we derive both of them by adopting a homogeneous synchrotron self–Compton emission model to reproduce the observed SEDs (e.g. Chiaberge & Ghisellini 1999, Ghisellini et al. 1998, Mastichiadis & Kirk 1997). This approach has the advantage of considering both the emission and the dynamical ($`\mathrm{\Gamma }`$) properties self–consistently: these models, in fact, satisfy time variability constraints, and assume continuous injection of particles, radiative and adiabatic cooling, $`\gamma `$–$`\gamma `$ collisions and pair production, so that the continuous curves shown in Fig. 5 are not only interpolating curves, but physically possible fits to the data. The bulk velocities obtained in this way are fully compatible with those inferred from the already mentioned constraints (Sect. 3).
Let us firstly consider single objects for which the SED is well sampled, namely Mkn 421 (a typical HBL) and PKS 0735+178 (a typical LBL). We model the observed SED as explained, derive the value of $`\mathrm{\Gamma }`$ ($`=\delta `$) for the two sources and then calculate their corresponding observed SEDs for different orientations. Clearly the net effect of debeaming is a “shift” of the SED towards lower luminosities and energies (see Fig. 5).
Notice that, as the model is appropriate for the optically thin part of the spectrum, in order to account for the radio emission, which necessarily has to be produced on larger scales, we linearly extrapolate the fit from the infrared-mm spectral region. However, at an angle of $`\theta =60^{\circ}`$ and for the Lorentz factors derived from the model, $`\mathrm{\Gamma }\simeq 15`$–$`20`$, the observed (debeamed) radiation at 5 GHz corresponds to what is seen in BL Lacs at far infrared frequencies (respectively $`\simeq 500`$–$`300\ \mu `$m, see Fig. 5) and therefore the debeamed points in Fig. 6 represent the correct predicted luminosities of the BL Lac component at 5 GHz.
The resulting debeamed optical and radio luminosities are reproduced in Fig. 6. The dash–dotted lines represent “debeaming trails” and the filled circles the calculated debeamed luminosities for $`\theta =1/\mathrm{\Gamma }`$ (i.e. the BL Lac itself), $`10^{\circ}`$, $`30^{\circ}`$ and $`60^{\circ}`$. Most noticeably, for $`\theta =60^{\circ}`$ – which is the mean angle of sight for the misaligned population – the BL Lac component is about four orders of magnitude below the radio galaxy region in the optical, and two to four in the radio band.
While equally incompatible with the FR I population, the HBL and LBL move on different trails. This is due to the different shapes of their SED (see Fig. 5), and in particular to the position of the synchrotron peak frequency: if – for increasing values of $`\theta `$ – the peak crosses the optical band in the rest frame, the spectral index steepens and the optical flux drops more rapidly than the radio one.
Another remarkable result is that the debeaming trail of the HBL does not even cross the region occupied by radio galaxies in the $`L_r`$–$`L_o`$ plane. As this might be a serious problem for the unified scheme, we further examine this issue. In particular we closely examine the effect of the spectral shape and its relation with the intrinsic luminosity by considering three different SED, which represent the whole family of BL Lacs, from HBLs to LBLs (the three SED correspond to different bins of radio luminosity at 5 GHz, which appears correlated with the bolometric luminosity and the position of the peak frequency; Fossati et al. 1998, Ghisellini et al. 1998). In Fig. 7 we plot the resulting trails: once again, as in the cases of Mkn 421 and PKS 0735+178, the expected nuclear luminosity is 10–$`10^4`$ times fainter than what is observed in FR I, and the debeaming trail for the lower luminosity object (a typical HBL) does not even cross the FR I region. Note that if the luminosity is indeed related to the shape of the SED, this discrepancy would be exacerbated for even fainter BL Lacs. In fact for HBL the radio and optical spectral slopes can be considered constant as the viewing angle increases, resulting in a linear (one to one) debeaming trail, parallel to the FR I correlation.
Summarizing: the radio and optical luminosities of BL Lacs and FR I are not consistent with the simplest predictions of the unifying scheme, if a single emitting region is responsible for the different broad band spectral properties of the beamed and parent populations. More specifically: i) Lorentz factors $`\simeq 15`$, as derived from the high energy spectral properties, underestimate the predicted emission from the parent population; ii) the relative ratio of radio to optical luminosity of HBL is inconsistent with the observed FR I spectra. In the next section we discuss and test a possible solution to these discrepancies.
## 5 A jet velocity structure
We have shown that the high bulk Lorentz factors required by the emission models of BL Lacs imply that, if such objects are observed at $`60^{\circ}`$, the resulting spectral properties are not compatible with what is observed in the nuclei of radio galaxies. Indeed, the previous comparison of the core emission of FR I and BL Lacs (see Sect. 3) led to lower values of $`\mathrm{\Gamma }`$. How can these results be reconciled within the unifying scenario? A possible and plausible effect which could account for this discrepancy is provided by the existence of a distribution in the bulk velocity of the flow, with the emission from plasma moving at different speeds dominating the flux observed at different viewing angles.
Let us consider this hypothesis in the frame of the unification scheme and examine the simplest case, i.e. a model with two axisymmetric components having the same intrinsic luminosity and spectra. In other words, the only difference between the center and the layer of the jet is the bulk Lorentz factor which is determined for the spine ($`\mathrm{\Gamma }_{spine}`$) by modeling the BL Lac SED, while for the layer ($`\mathrm{\Gamma }_{layer}`$) by requiring that the debeamed BL Lac match the FR I distributions in the $`L_{ext}L_o`$ and $`L_rL_o`$ planes.
The monochromatic intensity emitted by the jet is therefore calculated as
$$I(\nu ,\theta )=\delta _{spine}^3(\theta )I^{\prime}(\nu /\delta _{spine})+\delta _{layer}^3(\theta )I^{\prime}(\nu /\delta _{layer}),$$
where $`I^{\prime}`$ is the comoving intensity.
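A sketch of how this two-component intensity behaves with viewing angle is given below; the comoving spectrum $`I^{\prime}`$ is a hypothetical smooth bump, not a fit to any real source, and the two Lorentz factors are illustrative.

```python
import numpy as np

def doppler(gamma, theta):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

def observed_intensity(nu, theta, I_com, g_spine=15.0, g_layer=2.0):
    """Two-velocity jet: I(nu, theta) = sum over components of delta^3 I'(nu/delta)."""
    return sum(doppler(g, theta)**3 * I_com(nu / doppler(g, theta))
               for g in (g_spine, g_layer))

I_prime = lambda nu: nu**(-0.5) * np.exp(-nu / 1e15)   # toy synchrotron-like bump

for deg in (4.0, 10.0, 30.0, 60.0):
    # the fast spine dominates at small angles, the slow layer at large angles
    print(deg, observed_intensity(5e14, np.radians(deg), I_prime))
```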
The predicted luminosity trails for the two specific BL Lacs are shown in Fig. 8. Values of $`\mathrm{\Gamma }_{layer}`$ are set to 1.2 and 1.5 for Mkn 421 and PKS 0735+178, respectively, so that the point of each trail corresponding to the angle $`\theta =60^{\circ}`$ falls approximately onto the median of the FR I optical core luminosity in each $`L_{ext}`$ bin (Fig. 9).
By using the same value of $`\mathrm{\Gamma }_{layer}`$ this two-velocity model satisfactorily predicts the properties of the debeamed counterparts of LBL and in particular it reproduces the FR I location in the $`L_rL_o`$ plane.
Conversely this picture cannot account for the observed optical–radio properties of debeamed HBL. The luminosity of objects seen at $`\theta =60^{\circ}`$ is close to the lower limit of the FR I region in the optical, but still one order of magnitude fainter in the radio. We must stress however that the extended radio powers of HBLs correspond more closely to the range covered by the B2 radio galaxies. Clearly, any firm statement on this issue must await the analysis of the nuclear properties of the B2 sample, but the extrapolation of the 3CR radio-optical correlation does not match the debeamed predicted luminosities of HBL. This result does not depend on the specific value of $`\mathrm{\Gamma }_{layer}`$ adopted since, as already discussed in Sect. 4, the HBL trails run almost parallel to the radio-galaxies locus.
A further modification of this model is thus required for the HBL unification. Without altering the comoving spectra of the two components, the simplest change is to assume a lower Doppler factor for the spine in the radio emitting region, as might be the case if the flow slows down between the optical and the radio emitting sites. This would increase the initial slope of the debeaming trail, which would rapidly reach the FR I region.
### 5.1 Constraints from the X–ray observations
The limited angular resolution makes the analysis of X-ray observations of FR I sources less straightforward than in the radio and optical bands. In particular it is necessary to disentangle any non thermal nuclear radiation from the often dominant emission of the hot gas associated with the galactic corona and/or galaxy cluster. Nonetheless they provide useful constraints to high energy nuclear emission of these radio-galaxies.
We can test the validity of the two-velocity jet scenario by considering also this X-ray emission. In Fig. 10 we report the debeaming trails in the $`L_X`$–$`L_o`$ plane for the average SEDs assuming the same jet parameters as before: the predicted powers in both bands appear to be consistent with the observed properties of radio galaxies, supporting the presence of a less beamed plasma component (layer) dominating the emission in the parent population also in the X-ray band. Conversely, there is no need for a different amount of beaming in these two bands. This is somewhat reassuring, as the optical and X-ray emission are believed to originate co-spatially.
In addition, X-ray data can be used to define the location of radio galaxies also in the broad band spectral indices plane. The spectral characteristics of blazars are often represented in the plane defined by $`\alpha _{ro}`$ (5 GHz–5500 Å) and $`\alpha _{ox}`$ (5500 Å–1 keV). It is therefore worthwhile to determine how relativistic beaming affects the position of the objects also in this plane. An approximate relation between the BL Lac and FR I broad band spectral slopes is derived in Appendix A in the case of constant local spectral indices; here, instead, changes in the local spectral slopes are properly taken into account.
In this plane (Fig. 11), as already well established, HBL and LBL occupy the left (i.e. flatter $`\alpha _{ro}`$) and the top-center (i.e. steeper $`\alpha _{ox}`$) regions, respectively, and their different position is accounted for by their different SEDs (e.g. Fossati et al. 1998), i.e. it reflects the position of the peak of the emission. The FR I region is instead well defined at the center of the diagram. The debeaming trails for Mkn 421 and PKS 0735+178 are also shown: the empty circles correspond to $`\theta =1/\mathrm{\Gamma }`$, $`10^{\circ}`$, $`30^{\circ}`$, $`60^{\circ}`$ in the case of the two components model, while the two asterisks represent each source as observed at $`60^{\circ}`$ in the case of a single emitting region. (Note that again the two trails differ because of the different SEDs: for Mkn 421 the peak of the synchrotron component is between the optical and the X-ray bands, and therefore the effect of debeaming is, initially, i.e. for increasing $`\theta `$, to steepen both $`\alpha _{ro}`$ and $`\alpha _{ox}`$; when the Compton peak enters the X-ray band, $`\alpha _{ox}`$ flattens. Instead, for PKS 0735+178 the synchrotron component peaks between the radio and the optical, the 1 keV flux is due to the Compton component and, for the range of angles of sight considered here, the change in the spectral indices turns out to be monotonic.) PKS 0735+178 falls in the radio galaxy region for $`\mathrm{\Gamma }_{layer}=3`$ or less, while Mkn 421 does not intersect this area either in the single or in the two component model, confirming the results of the analysis presented above.
## 6 Summary and conclusions
With the aim of exploring the viability of the unification scenario between (HBL, LBL) BL Lacs and FR I radio galaxies we have compared their nuclear emission in the radio, optical and X–ray bands.
We have firstly considered these spectral regions separately, comparing the nuclear emission of the two classes of objects for similar extended radio power. As the core radiation of BL Lacs is enhanced by relativistic beaming, we derived the bulk Lorentz factors required to account for the observed distribution. The values of $`\mathrm{\Gamma }`$ thus inferred are not compatible with the higher bulk velocities required by theoretical arguments, such as the pair production opacity and the spectral modeling of the SED of BL Lacs.
We then examined the core emission of the three samples in the $`L_r`$–$`L_o`$ plane. In the frame of the simplest one-zone emission model, we calculated debeaming trails of the BL Lac broad band emission, as predicted by the relativistic transformations for an increasing angle of sight. We found that the model does not account for the observed spectral properties of FR I, as expected from the above inconsistency of the Lorentz factors.
The simplest and rather plausible hypothesis to account for this discrepancy within the unification scenario is to assume a structure in the jet velocity field, in which a fast spine is surrounded by a slow layer. Note however that the slower jet component must still be relativistic in order to explain the anisotropic radiation of radio galaxy cores (e.g. Capetti & Celotti 1999). The observed flux is dominated by the emission from either the spine or the slower layer, in the case of aligned and misaligned objects, respectively.
Interestingly, the existence of velocity structures in jets has been suggested by various authors (Komissarov 1990, Laing 1993) in order to explain some observed properties of FR I (and FR II) jets, such as the structure of the magnetic field in FR I, which appears to be longitudinal close to the jet axis and transverse at the edges. Swain et al. (1998) obtained VLA images of 3C 353 (an FR II with straight jets), finding that a model consisting of a fast relativistic spine ($`\beta >0.8`$) plus a slower outer layer ($`\beta <0.5`$, but still relativistic in order to produce the observed jet-counterjet intensity asymmetry) could account for the apparently lower emissivity near the jet axis. Similar behaviours have been inferred for the two low luminosity radio galaxies M 87 (Owen et al. 1989) and B2 1144+35 (Giovannini et al. 1999). Furthermore, Laing et al. (1999) showed that the jet asymmetries in FR I can be explained by means of a two-speed model. As a consequence, they argued that the lower velocity component dominates in the cores of the edge-on sources, while the fast spine emission dominates the end-on ones. This possibility might also be supported by recent numerical simulations of relativistic jets (Aloy et al. 2000).
The same indication has been found through different approaches. Capetti & Celotti (1999) reveal a trend in the radio galaxy/BL Lac relative powers with the line of sight, which is consistent with a slower (less beamed) component dominating at the largest angles. Capetti et al. (2000) consider the same issue by examining the more detailed SED of five radio galaxies and their beamed counterparts. They found that while the spectral shapes of 3C 264 and 3C 270 can be traced back to those of BL Lacs, the required ratio of beaming factors, i.e. $`\delta _{\mathrm{BLLac}}/\delta _{\mathrm{FR}\mathrm{I}}\simeq 10`$–$`100`$, implies that the corresponding BL Lacs would be overluminous. The inclusion of a slower (less beamed) jet component seems to be a plausible explanation.
We found that Lorentz factors of the layer $`\mathrm{\Gamma }_{layer}\simeq 2`$ can account for the unification of FR I (of the 3CR) with LBL and intermediate luminosity BL Lacs. Instead the debeaming trails for the lowest luminosity HBL do not cross the FR I region in the $`L_r`$–$`L_o`$ plane. While the HBL behavior should be compared with that of radio galaxies with which they share the extended radio power (e.g. those of the B2 catalogue), our simple two-component jet model could not account for the observed properties if the cores of such low-power FR I radio galaxies lay on the extrapolation of the 3CR radio-optical correlation. The properties of such weak sources can instead be reproduced if their radio emitting region is less beamed than the optical one, as could be expected if the jet decelerates after the higher energy emitting zone.
Finally, the presence of velocity structures in jets of course affects the number counts of beamed and unbeamed sources: for example, the lack of BL Lacs in clusters (Owen et al. 1996) could be attributed to values of the typical bulk Lorentz factors higher than those derived from statistical arguments (Urry et al. 1991). Intriguingly, the very same authors had to require a wide distribution of Lorentz factors to account for the number densities of FR I and BL Lacs in the radio band.
Much has still to be understood about the dynamics and emitting properties of relativistic jets. Multifrequency studies of the nuclear properties of beamed sources and their parent populations, and their comparison according to unification scenarios (which are well supported by other independent indications), constitute a new and powerful tool to achieve that, both for well studied individual sources and for complete samples. Near IR observations by HST, mm data and the higher resolution and sensitivity of Chandra in X-rays will further open this possibility.
Concluding, the radio, optical and X-ray nuclear emission of FR I and BL Lacs strongly indicates the presence of a velocity structure in the jet, if indeed these sources are intrinsically identical. In other words, by considering the indications of trends in the SED of blazars emerged in the last few years (Giommi & Padovani 1994, Fossati et al. 1998) together with the constraints derived from their unification with radio galaxies, it appears that the phenomenology of these sources is characterized and determined by differences both in the intrinsic SED and in the beaming properties.
###### Acknowledgements.
The authors thank the anonymous referee for his/her useful comments. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The Italian MURST is thanked for financial support.
## Appendix A Debeaming and the broad band spectral slopes
In the case of a one-component model, and under the assumption that the local spectral indices are constant, we can derive the transformation law for the change of the broad band spectral slope due to relativistic beaming. If the flux density in the frame comoving with the emitting region is $`F_{\nu ^{\prime}}^{\prime}`$, the observed one is
$$F_\nu ^{\mathrm{object}}(\nu )=\delta _{\mathrm{object}}^{p+\alpha }F_{\nu ^{\prime}}^{\prime}(\nu ),$$
where $`\delta _{\mathrm{object}}`$ is the beaming factor of the same object for different lines of sight (i.e. observed as BL Lac or as radio galaxy). Substituting these transformations in the definition of the broad-band spectral index $`\alpha _{12}`$ (where 1 and 2 refer to radio, optical or X-ray flux), one obtains
$$\alpha _{12}^{\mathrm{BLLac}}=-\frac{\mathrm{log}\left\{\frac{F_2^{\mathrm{FR}\mathrm{I}}(\nu _2)\,(\delta _{\mathrm{BLLac}}/\delta _{\mathrm{FR}\mathrm{I}})^{p+\alpha _2}}{F_1^{\mathrm{FR}\mathrm{I}}(\nu _1)\,(\delta _{\mathrm{BLLac}}/\delta _{\mathrm{FR}\mathrm{I}})^{p+\alpha _1}}\right\}}{\mathrm{log}(\nu _2/\nu _1)}$$
which can be written as
$$\alpha _{12}^{\mathrm{BLLac}}\alpha _{12}^{\mathrm{FR}\mathrm{I}}=(\alpha _1\alpha _2)\frac{\mathrm{log}(\delta _{\mathrm{BLLac}}/\delta _{\mathrm{FR}\mathrm{I}})}{\mathrm{log}(\nu _2/\nu _1)},$$
where $`\alpha _1`$ and $`\alpha _2`$ are the local spectral indices.
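As an illustrative numerical check (our sketch, not part of the original analysis), the following Python fragment evaluates the slope shift both from the formula above and directly from beamed fluxes; the frequencies, local indices, $`p`$ and the beaming ratio below are arbitrary placeholder values.

```python
import numpy as np

def alpha12(F1, F2, nu1, nu2):
    """Broad-band spectral index: alpha_12 = -log(F2/F1) / log(nu2/nu1)."""
    return -np.log10(F2 / F1) / np.log10(nu2 / nu1)

# Placeholder values (not from the paper):
nu1, nu2 = 5e9, 5.5e14          # radio and optical frequencies [Hz]
alpha1, alpha2 = 0.0, 1.0       # assumed local spectral indices
p = 2.0                         # beaming index (continuous jet)
d_ratio = 10.0                  # delta_BLLac / delta_FRI

# Analytic slope shift from the last equation:
shift = (alpha1 - alpha2) * np.log10(d_ratio) / np.log10(nu2 / nu1)

# The same shift obtained directly from the transformed fluxes:
F1, F2 = 1.0, 1e-3              # arbitrary FR I band fluxes
direct = (alpha12(F1 * d_ratio**(p + alpha1),
                  F2 * d_ratio**(p + alpha2), nu1, nu2)
          - alpha12(F1, F2, nu1, nu2))

print(f"formula: {shift:+.4f}, direct: {direct:+.4f}")  # identical
```

Note that the beaming index $`p`$ cancels in the difference, as the analytic expression shows: only the mismatch of the local spectral indices matters.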
# Self-organization in the Concert Hall: the Dynamics of Rhythmic Applause
An audience expresses appreciation for a good performance by the strength and nature of its applause. The initial thunder often turns into synchronized clapping - an event familiar to many who frequent concert halls. Synchronized clapping has a well-defined scenario: the initial strong incoherent clapping is followed by a relatively sudden synchronization process, after which everybody claps simultaneously and periodically. This synchronization can disappear and reappear several times during the applause. The phenomenon is a delightful expression of social self-organization that provides a human-scale example of the synchronization processes observed in numerous systems in nature, ranging from the synchronized flashing of south-east Asian fireflies to oscillating chemical reactions.
Here we investigate the mechanism and the development of synchronized clapping by performing a series of measurements focused both on the collective aspects of the self-organization process and on the behavior of the individuals in the audience. We recorded several theater and opera performances in Eastern Europe (Romania and Hungary) utilizing a microphone placed at the ceiling of the hall (Fig. 1a). Typically, after a few seconds of random clapping a periodic signal develops (a signature of synchronized clapping), clearly visible in Fig. 1a as pronounced peaks in the signal. This transition is also captured by the order parameter (Fig. 1c), which increases as the periodic signal develops, and decreases as it disappears. While synchronization increases the strength of the signal at the moment of the clapping, it leads to a decrease in the average noise intensity in the room (see Fig. 1d). This is rather surprising, since one would expect that the driving force for synchronization is the desire of the audience to express its enthusiasm by increasing the average noise intensity. The origin of this conflict between the average noise and synchronization can be understood by correlating the global signal with the behavior of an individual in the audience. For this we recorded the local sound intensity in the vicinity of a group of individuals who were unaware of the recording process (Fig. 1b). In the incoherent phase the local signal is periodic with a short period corresponding to the fast clapping of an individual in the audience. However, the clapping period suddenly doubles at the beginning of the synchronized phase (at approximately 12 s in Fig. 1a and b), and slowly decreases as synchronization is lost (Fig. 1e). Thus, the decrease in the average noise intensity is a consequence of the period doubling, since there is less clapping per unit time. An increase in the average noise intensity is possible only by decreasing the clapping period, which indeed does take place, as shown in Fig. 1e. However, the decreasing clapping period gradually brings the synchronized clapping back to the fast clapping observed in the early asynchronous phase, and synchronization disappears. Apparently, this conflicting desire of the audience to simultaneously increase the average noise intensity and to maintain synchronization leads to the sequence of appearing and disappearing synchronized regimes.
These results indicate that the transition from random to synchronized clapping is accompanied by a period doubling process. Next we argue that in fact period doubling is a necessary condition for synchronization. To address this question, we investigated the internal frequency of several individuals by controlled clapping experiments. Individual students, isolated in a room, were instructed to clap in the manner they usually do right after a good performance (Mode I clapping), or during the rhythmic applause (Mode II clapping). As Fig. 1f shows, the frequencies of the Mode I and Mode II clapping are clearly separated and the average period doubles from Mode I to Mode II clapping. Most important, however, we find that the width of the frequency distribution and the relative dispersion of the Mode II clapping are considerably smaller, a result that is reproducible for a single individual as well (Fig. 1g).
These results indicate that after an initial asynchronous phase, characterized by high-frequency clapping (Mode I), the individuals synchronize by eliminating every second beat, suddenly shifting to a clapping mode with double period (Mode II) where dispersion is smaller. As shown by Winfree and Kuramoto, for a group of globally coupled oscillators the necessary condition for synchronization is that the dispersion be smaller than a critical value. Consequently, period doubling emerges as a condition of synchronization, since it leads to slower clapping modes during which significantly smaller dispersion can be maintained. Thus our measurements offer a key insight into the mechanism of synchronized clapping: during fast clapping synchronization is not possible due to the large dispersion in the clapping frequencies. After period doubling, as Mode II clapping with small dispersion appears, synchronization can be and is achieved. However, as the audience gradually decreases the period to enhance the average noise intensity, it gradually slips back to the fast clapping mode with larger dispersion, destroying synchronization.
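The role of frequency dispersion can be made concrete with a minimal Kuramoto-type simulation (a sketch of ours, not part of the measurements; the coupling strength and the two dispersion values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def final_order_parameter(sigma, n=200, K=1.0, dt=0.01, steps=4000):
    """Globally coupled Kuramoto oscillators; returns the order
    parameter r = |<exp(i*theta)>| after integration (r ~ 1: sync)."""
    omega = rng.normal(2 * np.pi, sigma, n)   # natural angular frequencies
    theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))       # complex mean field
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

# Mode I-like: fast clapping, broad dispersion; Mode II-like: narrow.
print("broad dispersion :", round(final_order_parameter(sigma=1.0), 2))
print("narrow dispersion:", round(final_order_parameter(sigma=0.05), 2))
```

At fixed coupling, only the narrow, Mode II-like frequency distribution drives the order parameter close to unity, consistent with the scenario described above.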
In summary, the individuals in the audience have to be aware that by doubling their clapping period they can achieve synchronization, which perhaps explains why in the smaller and culturally more homogeneous Eastern European communities synchronized clapping is a daily event, but it is only sporadically observed in Western Europe and the U.S. In general, our results offer evidence of a novel route to synchronization, not yet observed in physical or biological systems.
Z. Néda, E. Ravasz, Y. Brechet<sup>∗∗</sup>, T. Vicsek<sup>∗∗∗</sup>, A.-L. Barabási<sup>&</sup>,
Babeş-Bolyai University, Department of Theoretical Physics, str. Kogălniceanu 1, RO-3400, Cluj-Napoca, Romania
<sup>∗∗</sup> ENSEEG-LTPCM, INP-Grenoble Saint-Martin D’Heres, CEDEX, France
<sup>∗∗∗</sup> Department of Biological Physics, Eötvös-Loránd University, Budapest, Hungary
<sup>&</sup> Department of Physics, University of Notre Dame, IN 46556, USA
E-mail: [email protected]
# MARSHALL-PEIERLS SIGN RULE IN FRUSTRATED HEISENBERG CHAINS
## 1 Introduction
The Marshall-Peierls sign rule (MPSR) determines the signs of the Ising basis states that build up the ground-state wave function of a Heisenberg Hamiltonian; it has been proven exactly for bipartite lattices and arbitrary site spins by Lieb, Schultz and Mattis. As pointed out in several papers, knowledge of the sign is of great importance in different numerical methods, e.g. for the construction of variational wave functions, in quantum Monte Carlo methods (which suffer from the sign problem in frustrated systems), and also in the density matrix renormalization group method, where the application of the MPSR has substantially improved the method in a frustrated spin system.
The MPSR has been analyzed so far for systems with s=1/2. Zeng and Parkinson studied the frustrated chain and, using exact diagonalization data, found a critical value of frustration for the breakdown of the MPSR in the infinite-chain limit. For the $`J_1`$-$`J_2`$ model on the square lattice the violation of the MPSR was considered an indication of the breakdown of long-range order. In a recent paper we extended these investigations to higher subspaces of $`S^z`$. For linear chains we have shown that for the lowest eigenstates in every subspace of $`S^z`$ there is a finite region of frustration where the MPSR holds.
In this paper we analyze the frustrated spin chain with s=1. This spin system has attracted a lot of attention because of the well-known Haldane conjecture. The unfrustrated s=1 spin chain shows a spin gap and exponentially decaying correlations, whereas the s=1/2 spin chain has no gap and a power-law decay of correlations. Since the two systems are qualitatively different, one might also expect a different influence of frustration on the MPSR.
## 2 The Model and the Marshall-Peierls sign rule
In the following we study the MPSR for the frustrated antiferromagnetic s=1 Heisenberg quantum spin chain:
$$\widehat{H}=J_1\sum _{\mathrm{NN}}\mathbf{s}_i\cdot \mathbf{s}_j+J_2\sum _{\mathrm{NNN}}\mathbf{s}_i\cdot \mathbf{s}_j,$$
(1)
$`\mathrm{NN}`$ and $`\mathrm{NNN}`$ denote nearest-neighbor and next-nearest-neighbor bonds on the linear chain. We set $`J_1=1`$ for the rest of the paper. For this model the MPSR can be proved exactly only for $`J_2\le 0`$.
The Marshall-Peierls sign rule can be described as follows: In the unfrustrated limit of $`J_2=0`$, the lowest eigenstate of the Hamiltonian (1) in each subspace of fixed eigenvalue $`M`$ of the spin operator $`S_{total}^z`$ reads
$$\mathrm{\Psi }_M=\sum _m c_m^{(M)}\,|m\rangle ,\qquad c_m^{(M)}>0.$$
(2)
Here the Ising-states $`|m`$ are defined by
$$|m\rangle \equiv (-1)^{S_A-M_A}\,|m_1\rangle |m_2\rangle \cdots |m_N\rangle ,$$
(3)
where $`|m_i\rangle `$, $`i=1,\dots ,N`$, are the eigenstates of the site spin operator $`S_i^z`$ ($`-s_i\le m_i\le s_i`$), $`S_A=\sum _{i\in A}s_i`$, $`M_{A(B)}=\sum _{i\in A(B)}m_i`$, and $`M=M_A+M_B`$. The lattice consists of two equivalent sublattices $`A`$ and $`B`$; $`s_i\equiv s`$, $`i=1,\dots ,N`$, are the site spins. The summations in Eq.(2) are restricted by the condition $`\sum _{i=1}^Nm_i=M`$. The wave function (2) is not only an eigenstate of the unfrustrated Hamiltonian ($`J_2=0`$) and $`S_{total}^z`$ but also of the square of the total spin $`𝐒_{total}^2`$ with quantum number $`S=M`$. Because $`c_m^{(M)}>0`$ holds for each $`m`$ from the basis set (3), it is impossible to build up other orthonormal states without using negative amplitudes $`c_m^{(M)}`$. Hence the ground-state wave function $`\mathrm{\Psi }_M`$ is nondegenerate. As it turns out, the MPSR is fulfilled not only for the ground-state but also for every lowest eigenstate in the subspace $`M`$ in the unfrustrated case. We emphasize that for $`J_2>0`$ no proof of the above statements can be given and that a frustrating $`J_2>0`$ can destroy the MPSR.
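As a concrete illustration (our sketch, not from the paper), the following Python fragment assigns the sign $`(-1)^{S_A-M_A}`$ to an Ising basis state of an s=1 chain; sublattice $`A`$ is taken to be the even sites, and the example state is arbitrary.

```python
import numpy as np

def marshall_sign(m, s=1):
    """Sign (-1)**(S_A - M_A) of the Ising basis state |m_1 ... m_N>;
    sublattice A is taken to be the even sites of the bipartite chain."""
    m = np.asarray(m)
    on_A = np.arange(len(m)) % 2 == 0
    S_A = s * on_A.sum()          # S_A = sum_{i in A} s_i
    M_A = m[on_A].sum()           # M_A = sum_{i in A} m_i
    return (-1) ** int(S_A - M_A)

# Example: an N = 6, s = 1 state in the M = 0 subspace
print(marshall_sign([1, -1, 0, 1, 0, -1]))   # -> +1 here
```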
## 3 Results
We have calculated by exact diagonalization the ground-state of the model (1) for N=8,…,14, varying the frustration parameter $`J_2`$. By analyzing the ground-state wave function according to the MPSR, we found for every system a critical value of frustration $`J_2^{crit}`$ at which the MPSR starts to be violated. We apply the scaling law proposed by Zeng and Parkinson and extrapolate our data as a function of $`1/N^2`$. We find for the infinite chain limit the value $`J_2^{crit}(\mathrm{})=0.016\pm 0.003`$. In Fig.1 we compare these data with the values for the s=1/2 systems (N=8,…,26), where the extrapolation yields $`J_2^{crit}(\mathrm{})=0.027\pm 0.003`$. It is also interesting to note that this value is slightly lower than the value of 0.032 found by Zeng and Parkinson using data for N=8,…,20 only.
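The $`1/N^2`$ extrapolation amounts to a simple linear least-squares fit; in the sketch below the finite-size values of $`J_2^{crit}(N)`$ are placeholders for illustration only, since the actual N-dependent data are shown in Fig.1 rather than listed in the text.

```python
import numpy as np

# Placeholder finite-size values J2_crit(N); the real numbers are the
# exact-diagonalization results plotted in Fig.1.
N = np.array([8, 10, 12, 14])
j2_crit = np.array([0.060, 0.045, 0.036, 0.031])

# Scaling ansatz: J2_crit(N) = J2_crit(inf) + b / N^2
slope, j2_inf = np.polyfit(1.0 / N**2, j2_crit, 1)
print(f"extrapolated J2_crit(inf) ~ {j2_inf:.3f}")
```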
We argue that the s=1 chain is more sensitive to frustration, and therefore the MPSR is violated for smaller values of $`J_2`$. Nevertheless, in numerical methods the MPSR can be used at least approximately for much larger values of frustration. This can be justified by examining the ground-state wave function with respect to the Ising basis states that violate the MPSR. We call these states non-Marshall states and denote their weight by $`W_{nM}`$. In Fig.2 we show $`W_{nM}`$ as a function of frustration $`J_2`$.
As can be seen in Fig.2, the weight of the non-Marshall states remains smaller than 1% (1E-2) up to $`J_2\approx 0.36`$. This result seems to be more or less size independent, because the three lines for the systems with N=8, 10 and 12 all cross at this point. The points at the bottom line denote the first violation of the MPSR in a given system and coincide with the points given in Fig.1. The examination of $`W_{nM}`$ indicates that even for quite large frustration the predominant part of the ground-state wave function fulfills the MPSR. Therefore, the MPSR can be used in numerical methods even if it does not hold strictly.
## 4 Conclusions
We have shown that in the frustrated antiferromagnetic s=1 Heisenberg quantum spin chain the Marshall-Peierls sign rule is violated by frustration. By extrapolation to the infinite chain limit we found a critical value of frustration $`J_2^{crit}\approx 0.016\pm 0.003`$ below which the MPSR still holds exactly. By calculating the weight of the Ising basis states of the ground-state wave function which do not fulfill the MPSR, we conclude that the MPSR can be used in numerical methods at least approximately up to a large frustration of $`J_2\approx 0.36`$.
## Acknowledgments
We would like to thank Nedko Ivanov for many useful discussions. This work has been supported by the DFG (Project Nr. Ri 615/6-1).
# Untitled Document
NEUTRINO OSCILLATIONS: SOME THEORETICAL IDEAS<sup>1</sup><sup>1</sup>1Talk given at Orbis Scientiae Conference, Coral Gables, FLA, Dec. 16-19, 1999
Stephen M. Barr
Bartol Research Institute
University of Delaware
Newark, DE 19711
INTRODUCTION
Over the years, and especially since the discovery of the large mixing of $`\nu _\mu `$ seen in atmospheric neutrino experiments, there have been numerous models of neutrino masses proposed in the literature. In the last two years alone, as many as one hundred different models have been published. One of the goals of this talk is to give a helpful classification of these models. Such a classification is possible because in actuality there are only a few basic ideas that underlie the vast majority of published neutrino mixing schemes. After some preliminaries, I give a classification of three-neutrino models, and then in the last part of the talk I discuss in more detail one category of models — those with “lopsided” charged-lepton mass matrices. Finally, I talk about a specific very predictive model based on lopsided mass matrices that I have worked on with Albright and Babu.
THE DATA
There are four indications of neutrino mass that guide recent attempts to build models: (1) the solar neutrino problem, (2) the atmospheric neutrino anomaly, (3) the LSND experiment, and (4) dark matter. Several excellent reviews of the evidence for neutrino mass have appeared recently.<sup>1</sup>
(1) The three most promising solutions to the solar neutrino problem are based on neutrino mass. These are the small-angle MSW solution (SMA), the large-angle MSW solution (LMA), and the vacuum oscillation solution (VO). All these solutions involve $`\nu _e`$ oscillating into some other type of neutrino, in the models we shall consider predominantly $`\nu _\mu `$. In the SMA solution the mixing angle and mass-squared splitting between $`\nu _e`$ and the neutrino into which it oscillates are roughly $`\mathrm{sin}^22\theta \simeq 5.5\times 10^{-3}`$ and $`\delta m^2\simeq 5.1\times 10^{-6}\mathrm{eV}^2`$. For the LMA solution one has $`\mathrm{sin}^22\theta \simeq 0.79`$, and $`\delta m^2\simeq 3.6\times 10^{-5}\mathrm{eV}^2`$. (The numbers are best-fit values from a recent analysis.<sup>2</sup>) And for the VO solution $`\mathrm{sin}^22\theta \simeq 0.93`$, and $`\delta m^2\simeq 4.4\times 10^{-10}\mathrm{eV}^2`$. (Again, these are best-fit values from a recent analysis.<sup>3</sup>)
(2) The atmospheric neutrino anomaly strongly implies that $`\nu _\mu `$ is oscillating with nearly maximal angle into either $`\nu _\tau `$ or a sterile neutrino, with the data preferring the former possibility.<sup>4</sup> One has $`\mathrm{sin}^22\theta \simeq 1.0`$, and $`\delta m^2\simeq 3\times 10^{-3}\mathrm{eV}^2`$.
(3) The LSND result, which would indicate a mixing between $`\nu _e`$ and $`\nu _\mu `$ with $`\delta m^2\sim 0.1`$–$`1\mathrm{eV}^2`$, is regarded with more skepticism for two reasons. The experimental reason is that KARMEN has failed to corroborate the discovery, though it has not excluded the entire LSND region. The theoretical reason is that to account for the LSND result and also for both the solar and atmospheric anomalies by neutrino oscillations would require three quite different mass-squared splittings, and that can only be achieved with four species of neutrino. This significantly complicates the problem of model-building. In particular, it is regarded as not very natural, in general, to have a fourth sterile neutrino that is extremely light compared to the weak scale. For these reasons, the classification given in this talk will assume that the LSND results do not need to be explained by neutrino oscillations, and will include only three-neutrino models.
(4) The fourth possible indication of neutrino mass is the existence of dark matter. If a significant amount of this dark matter is in neutrinos, it would imply a neutrino mass of order several eV. In order then to achieve the small $`\delta m^2`$’s needed to explain the solar and atmospheric anomalies one would have to assume that $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$ were nearly degenerate. We shall not focus on such models in our classification, which is primarily devoted to models with “hierarchical” neutrino masses. However, in most models with nearly degenerate masses, the neutrino mass matrix consists of a dominant piece proportional to the identity matrix and a much smaller hierarchical piece. Since the piece proportional to the identity matrix would not by itself give oscillations, such models can be classified together with hierarchical mass models in most instances.
In sum, the models we shall classify are those which assume (a) three flavors of neutrino that oscillate ($`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$), (b) a hierarchical pattern of neutrino masses, (c) the atmospheric anomaly explained by $`\nu _\mu `$-$`\nu _\tau `$ oscillations with nearly maximal angle, and (d) the solar anomalies explained by $`\nu _e`$ oscillating primarily with $`\nu _\mu `$ with either small angle (SMA) or large angle (LMA, VO).
MAJOR DIVISIONS
There are several major divisions among models. One is between models in which the neutrino masses arise through the see-saw mechanism,<sup>5</sup> and those in which the neutrino masses are generated directly at low energy. In see-saw models, there are both left- and right-handed neutrinos. Consequently, there are five fermion mass matrices to explain: the four Dirac mass matrices, $`U`$, $`D`$, $`L`$, and $`N`$ of the up quarks, down quarks, charged leptons, and neutrinos, respectively, and the Majorana mass matrix $`M_R`$ of the right-handed neutrinos. The four Dirac mass matrices are all roughly of the weak scale, while $`M_R`$ is many orders of magnitude larger than the weak scale. After integrating out the superheavy right-handed neutrinos, the mass matrix of the left-handed neutrinos is given by $`M_\nu =N^TM_R^{-1}N`$. Typically, in see-saw models, the four Dirac mass matrices are closely related to each other, either by grand unification or by flavor symmetries. That means that in see-saw models neutrino masses and mixings are just one aspect of the larger problem of quark and lepton masses, and are likely to shed great light on that problem, and perhaps even be the key to solving it. On the other hand, in most see-saw models $`M_R`$ is either unrelated or only tenuously related to the Dirac mass matrices of the quarks and leptons. The freedom in $`M_R`$ is the major obstacle to making precise predictions of neutrino masses and mixings in most see-saw schemes.
In non-see-saw schemes, there are no right-handed neutrinos. Consequently, there are only four mass matrices to consider, the Dirac mass matrices of the quarks and charged leptons, $`U`$, $`D`$, and $`L`$, and the Majorana mass matrix of the light left-handed neutrinos $`M_\nu `$. Typically in such schemes $`M_\nu `$ has nothing directly to do with the matrices $`U`$, $`D`$, and $`L`$, but is generated at low-energy by completely different physics.
The three most popular possibilities in recent models for generating $`M_\nu `$ at low energy in a non-see-saw way are (a) triplet Higgs, (b) variants of the Zee model,<sup>6</sup> and (c) R-parity violating terms in low-energy supersymmetry. (a) In triplet-Higgs models, $`M_\nu `$ arises from a renormalizable term of the form $`\lambda _{ij}\nu _i\nu _jH_T^0`$, where $`H_T`$ is a Higgs field in the $`(1,3,+1)`$ representation of $`SU(3)\times SU(2)\times U(1)`$. (b) In the Zee model, the Standard Model is supplemented with a scalar, $`h`$, in the $`(1,1,+1)`$ representation and having weak-scale mass. This field can couple to the lepton doublets $`L_i`$ as $`L_iL_jh`$ and to the Higgs doublets $`\varphi _a`$ (if there is more than one) as $`\varphi _a\varphi _bh`$. Clearly it is not possible to assign a lepton number to $`h`$ in such a way as to conserve it in both these terms. The resulting lepton-number violation allows one-loop diagrams that generate a Majorana mass for the left-handed neutrinos. (c) In supersymmetry the presence of such R-parity-violating terms in the superpotential as $`L_iL_jE_k^c`$ and $`Q_iD_j^cL_k`$, causes lepton-number violation, and allows one-loop diagrams that give neutrino masses.
It is clear that in all of these schemes the couplings that give rise to neutrino masses have nothing to do with the physics that gives mass to the other quarks and leptons. While this allows more freedom to the neutrino masses, it would from one point of view be very disappointing, as it would mean that the observation of neutrino oscillations is almost irrelevant to the burning question of the origin of quark and charged lepton masses.
Another major division among models has to do with the kinds of symmetries that constrain the forms of mass matrices and that, in some models, relate different mass matrices to each other. There are two main approaches: (a) grand unification, and (b) flavor symmetry. Many models use both.
(a) The simplest grand unified group is $`SU(5)`$. In minimal $`SU(5)`$ there is one relation among the Dirac mass matrices, namely $`D=L^T`$, coming from the fact that the left-handed charged leptons are unified with the right-handed down quarks in a $`\overline{\mathrm{𝟓}}`$, while the right-handed charged leptons and left-handed down quarks are unified in a $`\mathrm{𝟏𝟎}`$. In $`SU(5)`$ there do not have to be right-handed neutrinos, though they may be introduced. In $`SO(10)`$, which in several ways is a very attractive group for unification, the minimal model gives the relations $`N=U`$ and $`D=L`$. In realistic models these relations are modified in various ways, for example by the appearance of Clebsch coefficients in certain entries of some of the mass matrices. It is clear that unified symmetries are so powerful that very predictive models are possible. Most of the published models which give sharp predictions for masses and mixings are unified models.
(b) Flavor symmetries can be either abelian or non-abelian. Non-abelian symmetries are useful for obtaining the equality of certain elements of the mass matrix, as in models where the neutrino masses are nearly degenerate, and in the so-called “flavor democracy” schemes. Abelian symmetries are useful for explaining hierarchical mass matrices through the so-called Froggatt-Nielsen mechanism.<sup>7</sup> The idea is that different fermion multiplets can differ in charge under a $`U(1)`$ flavor symmetry that is spontaneously broken by some “flavon” expectation value (or values), $`f_i`$. Thus, different elements of the fermion mass matrices would be suppressed by different powers of $`f_i/M\equiv ϵ_i\ll 1`$, where $`M`$ is the scale of flavor physics. This kind of scheme can explain small mass ratios and mixings in the sense of predicting them to arise at certain orders in the small quantities $`ϵ_i`$. A drawback of such models compared to many grand unified models, is that actual numerical predictions, as opposed to order of magnitude estimates, are not possible. On the other hand, models based on flavor symmetry involve less of a theoretical superstructure built on top of the Standard Model than do unified models, and could therefore be considered more economical in a certain sense. Unified models put more in but get more out than flavor symmetry.
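A minimal numerical sketch of this suppression (ours; the $`U(1)`$ charges and the value of $`ϵ`$ are made up) shows how a hierarchical matrix arises:

```python
import numpy as np

rng = np.random.default_rng(1)

eps = 0.2                        # flavon ratio f/M (hypothetical value)
qL = np.array([3, 2, 0])         # made-up U(1) charges, left-handed fields
qR = np.array([3, 2, 0])         # and right-handed fields

# Each entry is suppressed as eps**(qL[i] + qR[j]) times an order-one
# coefficient; random O(1) factors stand in for unknown couplings.
C = rng.uniform(0.5, 1.5, (3, 3))
M = C * eps ** (qL[:, None] + qR[None, :])

print(np.round(M, 5))
print("hierarchy:", np.round(np.linalg.svd(M, compute_uv=False), 5))
```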
THE PUZZLE OF LARGE $`\nu _\mu `$–$`\nu _\tau `$ MIXING
The most significant new fact about neutrino mixing is the largeness of the mixing between $`\nu _\mu `$ and $`\nu _\tau `$. This comes as somewhat of a surprise from the point of view of both the grand unification and the flavor symmetry approaches. Since grand unification relates leptons to quarks, one might expect lepton mixing angles to be small like those of the quarks. In particular, the mixing between the second and third family of quarks is given by $`V_{cb}`$, which is known to be $`\simeq 0.04`$. That is to be compared to the nearly maximal mixing of the second and third families of leptons: $`U_{\mu 3}\simeq 1/\sqrt{2}\simeq 0.7`$. It is true that even in the early 1980’s some grand unified models predicted large neutrino mixing angles. (Especially noteworthy is the remarkably prophetic 1982 paper of Harvey, Ramond, and Reiss,<sup>8</sup> which explicitly predicted and emphasized that there should be large $`\nu _\mu `$–$`\nu _\tau `$ mixing. However, in those days the top mass was expected to be light, and Ref. 8 chose it to be 25 GeV. That gave $`V_{cb}`$ in that model to be about $`0.22`$. The corresponding lepton mixing was further boosted by a Clebsch of 3. With the actual value of $`m_t`$ that we now know, the model of Ref. 8 would predict $`U_{\mu 3}`$ to be 0.12.) What makes the largeness of $`U_{\mu 3}`$ a puzzle in the present situation is the fact that we now know that both $`V_{cb}`$ and $`m_c/m_t`$ are exceedingly small.
The same puzzle exists in the context of flavor symmetry. The fact that the quark mixing angles are small suggests that there is a family symmetry that is only weakly broken, while the large mixings of some of the neutrinos suggests that family symmetries are badly broken.
The chief point of interest in looking at any model of neutrino mixing is how it explains the large mixing of $`\nu _\mu `$ and $`\nu _\tau `$. This will be the feature that I will use to organize the classification of models.
CLASSIFICATION OF THREE-NEUTRINO MODELS
Virtually all published models fit somewhere in the simple classification now to be described. The main divisions of this classification are based on how the large $`\nu _\mu \nu _\tau `$ mixing arises. This mixing is described by the element $`U_{\mu 3}`$ of the so-called MNS matrix (analogous to the CKM matrix for the quarks).
The mixing angles of the neutrinos are the mismatch between the eigenstates of the neutrinos and those of the charged leptons, or in other words between the mass matrices $`L`$ and $`M_\nu `$. Thus, there are two obvious ways of obtaining large $`U_{\mu 3}`$: either $`M_\nu `$ has large off-diagonal elements while $`L`$ is nearly diagonal, or $`L`$ has large off-diagonal elements and $`M_\nu `$ is nearly diagonal. Of course this distinction only makes sense in some preferred basis. But in almost every model there is some preferred basis given by the underlying symmetries of that model. This distinction gives the first major division in the classification, between models of what I shall call class I and class II. (It is also possible that the large mixing is due almost equally to large off-diagonal elements in $`L`$ and $`M_\nu `$, but this possibility seems to be realized in very few published models. I will put them into class II.)
If the large $`U_{\mu 3}`$ is due to $`M_\nu `$ (class I), then it becomes important whether $`M_\nu `$ arises from a non-see-saw mechanism or the see-saw mechanism. We therefore distinguish these cases as class I(1) and class I(2) respectively. In the see-saw models, $`M_\nu `$ is given by $`N^TM_R^{-1}N`$, so a further subdivision is possible: between models in which the large off-diagonal elements are in $`M_R`$ and those in which they are in $`N`$. We call these class I(2A) and I(2B) respectively.
If $`U_{\mu 3}`$ is due to large off-diagonal elements in $`L`$, while $`M_\nu `$ is nearly diagonal (class II), then the question to ask is why, given that $`L`$ has large off-diagonal elements, there are not also large off-diagonal elements in the Dirac mass matrices of the other charged fermions, namely $`U`$ and $`D`$, causing large CKM mixing of the quarks. In the literature there seem to be two ways of answering this question. One way involves the CKM angles being small due to a cancellation between large angles that are nearly equal in the up and down quark sectors. We call this class II(1). The main examples of this idea are the so-called “flavor democracy models”. The other idea is that the matrices $`L`$ and $`D^T`$ (related by unified or flavor symmetry) are “lopsided” in such a way that the large off-diagonal elements only affect the mixing of fermions of one handedness: left-handed for the leptons, making $`U_{\mu 3}`$ large, and right-handed for the quarks, leaving $`V_{cb}`$ small. We call this approach class II(2).
Schematically, one then has
$$\begin{array}{cc}I\hfill & \mathrm{Large}\mathrm{mixing}\mathrm{from}M_\nu \hfill \\ & (1)\mathrm{Non}\mathrm{see}\mathrm{saw}\hfill \\ & (2)\mathrm{See}\mathrm{saw}\hfill \\ & \mathrm{A}.\mathrm{Large}\mathrm{mixing}\mathrm{from}M_R\hfill \\ & \mathrm{B}.\mathrm{Large}\mathrm{mixing}\mathrm{from}N\hfill \\ II\hfill & \mathrm{Large}\mathrm{mixing}\mathrm{from}L\hfill \\ & (1)\mathrm{CKM}\mathrm{small}\mathrm{by}\mathrm{cancellation}\hfill \\ & (2)\mathrm{lopsided}L.\hfill \end{array}$$
(1)
Now let us examine the different categories in more detail, giving examples from the literature.
I(1) Large mixing from $`M_\nu `$, non-see-saw.
This kind of model gives a natural explanation of the discrepancy between the largeness of $`U_{\mu 3}`$ and the smallness of $`V_{cb}`$. $`V_{cb}`$ comes from Dirac mass matrices, which are all presumably nearly diagonal like $`L`$, whereas $`U_{\mu 3}`$ comes from the matrix $`U_\nu `$ that diagonalizes $`M_\nu `$; and since in non-see-saw models $`M_\nu `$ comes from completely different physics than do the Dirac mass matrices, it is not at all surprising if it has a very different form from the others, containing some large off-diagonal elements. While this basic idea is very simple and appealing, these models have the drawback that in non-see-saw models the form of $`M_\nu `$, since it comes from new physics unrelated to the origin of the other mass matrices, is highly unconstrained. Thus, there are few definite predictions, in general, for masses and mixings in such schemes. However, in some schemes constraints can be put on the new physics responsible for $`M_\nu `$.
As we saw, there are a variety of attractive ideas for generating a non-see-saw $`M_\nu `$ at low energy, and there are published models of neutrino mixing corresponding to all these ideas.<sup>9-13</sup> $`M_\nu `$ comes from triplet Higgs in Ref. 9; from the Zee mechanism in Ref. 10; and from R-parity and lepton-number-violating terms in a SUSY model in Ref. 11. In Ref. 12 a “democratic form” of $`M_\nu `$ is enforced by a family symmetry. Several other models in class I(1) exist in the literature.<sup>13</sup>
I(2A) See-saw $`M_\nu `$, large mixing from $`M_R`$
In these models, $`M_\nu `$ comes from the see-saw mechanism and therefore has the form $`N^TM_R^{-1}N`$. The large off-diagonal elements in $`M_\nu `$ are assumed to come from $`M_R`$, while the Dirac neutrino matrix $`N`$ is assumed to be nearly diagonal and hierarchical like the other Dirac matrices $`L`$, $`U`$, and $`D`$. As with the models of class I(1), these models have the virtue of explaining in a natural way the difference between the lepton angle $`U_{\mu 3}`$ and the quark angle $`V_{cb}`$. The quark mixings all come from Dirac matrices, while the lepton mixings involve the Majorana matrix $`M_R`$, which it is quite reasonable to suppose might have a very different character, with large off-diagonal elements.
However, there is a general problem with models of this type, which not all the examples in the literature convincingly overcome. The problem is that if $`N`$ has a hierarchical and nearly diagonal form, it tends to communicate this property to $`M_\nu `$. For example, suppose we take $`N=\mathrm{diag}(ϵ^{\prime },ϵ,1)M`$, with $`1\gg ϵ\gg ϵ^{\prime }`$. And suppose that the $`ij`$-th element of $`M_R^{-1}`$ is called $`a_{ij}`$. Then the matrix $`M_\nu `$ will have the form
$$M_\nu \sim \left(\begin{array}{ccc}ϵ^{\prime 2}a_{11}& ϵ^{\prime }ϵa_{12}& ϵ^{\prime }a_{13}\\ ϵ^{\prime }ϵa_{12}& ϵ^2a_{22}& ϵa_{23}\\ ϵ^{\prime }a_{13}& ϵa_{23}& a_{33}\end{array}\right).$$
(2)
If all the non-vanishing elements $`a_{ij}`$ were of the same order of magnitude, then obviously $`M_\nu `$ is approximately diagonal and hierarchical. The contribution to the leptonic angles coming from $`M_\nu `$ would therefore typically be proportional to the small parameters $`ϵ`$ and $`ϵ^{\prime }`$. This suggests that to get a value of $`U_{\mu 3}`$ that is of order 1, it is necessary to have the small parameter coming from $`N`$ get cancelled by a correspondingly large parameter from $`M_R^{-1}`$. The trouble is that to have such a conspiracy between the magnitudes of parameters in $`N`$ and $`M_R`$ is unnatural, in general, since these matrices have very different origins. This problem has been pointed out by various authors.<sup>14</sup> We shall call it the Dirac-Majorana conspiracy problem.
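The suppression pattern of Eq. (2) is straightforward to verify symbolically; the following sympy sketch (ours, not from the talk) builds $`M_\nu =N^TM_R^{-1}N`$ with a generic symmetric $`M_R^{-1}`$:

```python
import sympy as sp

eps, epsp = sp.symbols("epsilon epsilon'", positive=True)

# Generic symmetric M_R^{-1} with entries a_ij:
a = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{min(i, j) + 1}{max(i, j) + 1}"))

# Hierarchical, nearly diagonal Dirac matrix (in units of M):
N = sp.diag(epsp, eps, 1)

M_nu = N.T * a * N        # see-saw result; reproduces Eq. (2)
sp.pprint(M_nu)
```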
There are several models in the literature that fall into class I(2A).<sup>15-17</sup> Of these, an especially interesting paper is that of Jezabek and Sumino,<sup>15</sup> because it shows that a Dirac-Majorana conspiracy can be avoided. Jezabek and Sumino consider the case that the Dirac and Majorana matrices of the neutrinos have the forms
$$N=\left(\begin{array}{ccc}x^2y& 0& 0\\ 0& x& x\\ 0& O(x^2)& 1\end{array}\right)m_D,M_R=\left(\begin{array}{ccc}0& 0& A\\ 0& 1& 0\\ A& 0& 0\end{array}\right)m_R,$$
(3)
where $`x`$ is a small parameter. If one computes $`M_\nu =N^TM_R^{-1}N`$ one finds that
$$M_\nu =\left(\begin{array}{ccc}0& O(x^4y/A)& x^2y/A\\ O(x^4y/A)& x^2& x^2\\ x^2y/A& x^2& x^2\end{array}\right)m_D^2/m_R.$$
(4)
Note that this gives a maximal mixing of the second and third families, without having to assume any special relationship between the small parameter in $`N`$ (namely $`x`$) and the parameter in $`M_R`$ (namely $`A`$). Altarelli and Feruglio<sup>16</sup> generalize this example, showing that the same effect occurs if $`M_R`$ is taken to have a triangular symmetric form.
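One can check Eq. (4), and the resulting near-maximal 2-3 mixing, with a few lines of sympy (our sketch; the symbols below simply reproduce Eq. (3)):

```python
import sympy as sp

x, y, A = sp.symbols("x y A", positive=True)

N = sp.Matrix([[x**2 * y, 0, 0],
               [0,        x, x],
               [0,        0, 1]])      # the O(x^2) entry is set to zero
MR = sp.Matrix([[0, 0, A],
                [0, 1, 0],
                [A, 0, 0]])

M_nu = sp.simplify(N.T * MR.inv() * N)  # in units of m_D^2 / m_R
sp.pprint(M_nu)                         # reproduces Eq. (4) up to O(x^4)
```

The 2-3 block comes out proportional to a matrix of ones, which is diagonalized by a rotation of exactly $`\pi /4`$, independently of any relation between $`x`$ and $`A`$.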
An interesting point about the form of $`M_\nu `$ in Eq. (4) is that it gives bimaximal mixing. This is easily seen by doing a rotation of $`\pi /4`$ in the 2-3 plane, bringing the matrix to the form
$$M_\nu ^{\prime }=\left(\begin{array}{ccc}0& z& z^{\prime }\\ z& 0& 0\\ z^{\prime }& 0& 2x^2\end{array}\right).$$
(5)
In the 1-2 block this matrix has a Dirac form, giving nearly maximal mixing of $`\nu _e`$.
Other published models that fall into class I(2) are given in Ref. 17.
I(2B) See-saw $`M_\nu `$, large mixing from $`N`$
At least at first glance, this seems to be a less natural approach. The point is that if the large $`U_{\mu 3}`$ is due to large off-diagonal elements in $`N`$, it might be expected that the other Dirac mass matrices, $`U`$, $`D`$, and $`L`$, would also have large off-diagonal elements, giving large CKM angles. There are ways around this objection, and a few interesting models that fall into this class have been constructed. However, experience seems to show that this approach is harder to make work than the others, and fewer models of this type exist in the literature.<sup>18</sup>
II(1) Large mixing from $`L`$, CKM small by cancellation
If the large value of $`U_{\mu 3}`$ comes from large off-diagonal elements in the mass matrix $`L`$ of the charged leptons, then it is most natural to assume that the other Dirac mass matrices have large off-diagonal elements also. Why, then, are the CKM angles small? One possibility is that the CKM angles are small because of an almost exact cancellation between large angles needed to diagonalize $`U`$ and $`D`$. That, in turn, would imply that $`U`$ and $`D`$, even though highly non-diagonal, have nearly identical forms. This is the idea realized in so-called “flavor democracy” models.
In flavor democracy models, a permutation symmetry $`S_3\times S_3`$ among the left- and right-handed fermions causes the Dirac mass matrices $`L`$, $`D`$, and $`U`$ to have the form
$$L,D,U\sim \left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right).$$
(6)
Smaller contributions that break the permutation symmetry cause deviations from this form. These flavor-democratic forms are of rank 1, explaining why one family is much heavier than the others. On the other hand, the mass matrix of the neutrinos $`M_\nu `$ is assumed to have, by an $`S_3`$ symmetry acting on the left-handed neutrinos, the approximate form
$$M_\nu \sim \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right).$$
(7)
If $`M_\nu `$ were exactly proportional to the identity, then the basis of neutrino mass eigenstates would be undefined, and so then would be the MNS angles. However, once the small $`S_3`$-violating effects are taken into account, a neutrino basis is picked out. It is not surprising that, typically, the neutrino angles that are predicted are of order unity. On the other hand, the fact that $`U`$ and $`D`$ are nearly the same in form leads to a cancellation that tends to make the quark mixing angles small.
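Both statements, the rank-one nature of the democratic form and the CKM cancellation between nearly identical $`U`$ and $`D`$, are easy to check numerically (our sketch; the small S3-breaking perturbation is an arbitrary illustrative choice):

```python
import numpy as np

demo = np.ones((3, 3))                 # flavor-democratic form, Eq. (6)
print(np.linalg.eigvalsh(demo))        # [0, 0, 3]: rank 1, one heavy family

# Small S3-breaking perturbations (arbitrary illustrative numbers);
# U and D are taken nearly identical in form:
pert = np.diag([1e-4, 1e-2, 0.0])
U = demo + pert
D = demo + 1.5 * pert

# Left-handed rotations diagonalize M M^T; the CKM-like product is V_u^T V_d
_, Vu = np.linalg.eigh(U @ U.T)
_, Vd = np.linalg.eigh(D @ D.T)
print(np.round(np.abs(Vu), 2))         # individual rotations: large angles
print(np.round(np.abs(Vu.T @ Vd), 2))  # product: ~ identity, angles cancel
```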
Exactly what angles are predicted for the neutrinos depends on the form of the small contributions to the mass matrices that break the permutation symmetries. There are many simple forms that might be assumed, and the possibilities are rich. There exists a large and growing literature on these models.<sup>19</sup>
The idea of flavor democracy is an elegant one, especially in that it uses one basic idea to explain the largeness of the leptonic angles, the smallness of the quark angles, and the fact that one family is much heavier than the others. On the other hand, it requires the very specific forms given in Eqs. (6) and (7), which come from very specific symmetries. It is in this sense a narrower approach to the problem of fermion masses than some of the others I have mentioned.
It would be interesting to know whether models of class II(1), in which the CKM angles are small by cancellations of large angles, can be constructed using ideas other than flavor democracy.
II(2) Large mixing from “lopsided” $`L`$
We now come to what I regard as the most elegant way to explain the largeness of $`U_{\mu 3}`$: “lopsided” $`L`$. The basic idea is that the charged-lepton and down-quark mass matrices have the approximate forms
$$L\sim \left(\begin{array}{ccc}0& 0& 0\\ 0& 0& ϵ\\ 0& \sigma & 1\end{array}\right)m_D,\qquad D\sim \left(\begin{array}{ccc}0& 0& 0\\ 0& 0& \sigma \\ 0& ϵ& 1\end{array}\right)m_D.$$
(8)
The “$`\sim `$” sign is used because in realistic models these $`\sigma `$ and $`ϵ`$ entries could have additional factors of order unity, such as from Clebsches. The fact that $`L`$ is related closely in form to the transpose of $`D`$ is a very natural feature from the point of view of $`SU(5)`$ or related symmetries, and is a crucial ingredient in this approach. The assumption is that $`ϵ\ll 1`$, while $`\sigma \sim 1`$. In the case of the charged leptons $`ϵ`$ controls the mixing of the second and third families of right-handed fermions (which is not observable at low energies), while $`\sigma `$ controls the mixing of the second and third families of left-handed fermions, which contributes to $`U_{\mu 3}`$ and makes it large. For the quarks the reverse is the case because of the “$`SU(5)`$” feature: the small $`O(ϵ)`$ mixing is in the left-handed sector, accounting for the smallness of $`V_{cb}`$, while the large $`O(\sigma )`$ mixing is in the right-handed sector, where it cannot be observed and does no harm.
In this approach the three crucial elements are these: (a) Large mixing of neutrinos (in particular of $`\nu _\mu `$ and $`\nu _\tau `$) caused by large off-diagonal elements in the charged-lepton mass matrix $`L`$; (b) this off-diagonal element appearing in a highly asymmetric or lopsided way; and (c) $`L`$ being similar to the transpose of $`D`$ by $`SU(5)`$ or a related symmetry.
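The asymmetry can be made concrete with a small numerical exercise (ours; it uses the values $`\sigma \simeq 1.8`$ and $`ϵ\simeq 0.14`$ quoted below for the ABB model, and the convention that the column index of the mass matrix acts on the left-handed fields):

```python
import numpy as np

sigma, eps = 1.8, 0.14        # ABB-model values quoted later in the talk

# 2-3 blocks of Eq. (8); convention: the column index acts on
# left-handed fields, the row index on right-handed ones.
L = np.array([[0.0,   eps],
              [sigma, 1.0]])
D = L.T                       # the SU(5)-like relation D ~ L^T

def left_mixing_deg(M):
    """2-3 mixing angle of the left-handed fields (diagonalizes M^T M)."""
    _, V = np.linalg.eigh(M.T @ M)
    heavy = V[:, -1]          # eigenvector of the heaviest state
    return np.degrees(np.arctan2(abs(heavy[0]), abs(heavy[1])))

print(f"charged leptons: {left_mixing_deg(L):.0f} deg (large -> U_mu3)")
print(f"down quarks:     {left_mixing_deg(D):.1f} deg (small -> V_cb)")
```

The same large entry $`\sigma `$ produces an O(1) left-handed angle in the lepton sector and only an $`O(ϵ)`$ left-handed angle for the down quarks, exactly as described above.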
To my knowledge the first place that all the elements of this approach appear is in a paper by Babu and Barr<sup>20</sup> and a sequel by Barr.<sup>21</sup> In those papers the emphasis was on a particular mechanism (in $`SU(5)`$ and $`SO(10)`$) by which the lopsidedness of $`L`$ and $`D`$ can arise. So perhaps it was not noticed by some readers that the scheme described in those papers was an instance of a more general mechanism.
The next time that this general idea can be found is in three papers that appeared almost simultaneously: Sato and Yanagida,<sup>22</sup> Albright, Babu, and Barr,<sup>23</sup> and Irges, Lavignac, and Ramond.<sup>24</sup>
It is interesting that the same mechanism was arrived at independently by these three groups from completely different points of view. In Sato and Yanagida the model is based on $`E_7`$, and the structure of the matrices is determined by the Froggatt-Nielsen mechanism. In Albright, Babu, and Barr, the model is based on $`SO(10)`$ and does not use the Froggatt-Nielsen approach. Rather, the constraints on the form of the mass matrices come from assuming a “minimal” set of Higgs for $`SO(10)`$ and choosing the smallest and simplest set of Yukawa operators that can give realistic matrices. Though both papers assume a unified symmetry larger than $`SU(5)`$, in both it is the $`SU(5)`$ subgroup that plays the critical role in relating $`L`$ to $`D^T`$. The model of Irges, Lavignac, and Ramond, like that of Sato and Yanagida, uses the Froggatt-Nielsen idea, but is not based on a grand unified group. Rather, the fact that $`L`$ is related to $`D^T`$ follows ultimately from the requirement of anomaly cancellation for the various $`U(1)`$ flavor symmetries of the model. However, it is well known that anomaly cancellation typically enforces charge assignments that can be embedded in unified groups. So even though the model does not contain an explicit $`SU(5)`$, it could be said to be “$`SU(5)`$-like”.
In the last two years, the same mechanism has been employed by a large number of authors using a variety of approaches.<sup>25</sup>
A PREDICTIVE SO(10) MODEL WITH LOPSIDED L
The model that I shall now describe briefly was not constructed to explain neutrino phenomenology; rather it emerged from the attempt to find a realistic model of the masses of the charged leptons and quarks in the context of $`SO(10)`$. In particular, the idea was to take the Higgs sector of $`SO(10)`$ to be as minimal as possible, and then to find what this implied for the mass matrices of the quarks and leptons. In fact, in the first paper we wrote, we did not pay any attention to the neutrino spectrum. Then we noticed that the model in that paper actually predicted a large mixing of $`\nu _\mu `$ with $`\nu _\tau `$ and published a follow-up paper.<sup>23</sup> The reason for the large mixing of the mu and tau neutrinos was precisely the fact that the charged lepton mass matrix has a lopsided form.
The reason this lopsided form was built into this model (which I shall refer to as the ABB model henceforth) was that it was necessary to account for certain well-known features of the mass spectrum of the quarks. In particular, the mass matrix entry that is denoted $`\sigma `$ in Eq. (8) above plays three crucial roles in the ABB model that have nothing to do with neutrino mixing. (1) It is required to get the Georgi-Jarlskog<sup>26</sup> factor of 3 between $`m_\mu `$ and $`m_s`$. (2) It explains the value of $`V_{cb}`$. (3) It explains why $`m_c/m_t\ll m_s/m_b`$. Remarkably, it turns out not only to perform these three tasks, but also gives mixing of order 1 between $`\nu _\mu `$ and $`\nu _\tau `$. Not often are four birds killed with one stone!
In constructing the model, several considerations guided us. First, we assumed the “minimal” set of Higgs for $`SO(10)`$. It has been shown<sup>27</sup> that the smallest set of Higgs that will allow a realistic breaking of $`SO(10)`$ down to $`SU(3)\times SU(2)\times U(1)`$, with natural doublet-triplet splitting,<sup>28</sup> consists of a single adjoint ($`\mathrm{𝟒𝟓}`$), two pairs of spinors ($`\mathrm{𝟏𝟔}+\overline{\mathrm{𝟏𝟔}}`$), a pair of vectors ($`\mathrm{𝟏𝟎}`$), and some singlets. The adjoint, in order to give the doublet-triplet splitting, must have a VEV proportional to the $`SO(10)`$ generator $`B-L`$. This fact is an important constraint. Second, we assumed that the qualitative features of the quark and lepton spectrum should not arise by artificial cancellations or numerical accidents. Third, we required that the Georgi-Jarlskog factor arise in a simple and natural way. Fourth, we assumed that the entries in the mass matrices should come from operators of low-dimension that arise in simple ways from integrating out small representations of fermions.
Having imposed these conditions of economy and naturalness on the model we were led to a structure coming from just six effective Yukawa terms (just five if $`m_u`$ is allowed to vanish). These gave the following mass matrices:
$$\begin{array}{cc}U^0=\left(\begin{array}{ccc}\eta & 0& 0\\ 0& 0& \frac{1}{3}ϵ\\ 0& \frac{1}{3}ϵ& 1\end{array}\right)m_U,\hfill & D^0=\left(\begin{array}{ccc}0& \delta & \delta ^{\prime }\\ \delta & 0& \sigma +\frac{1}{3}ϵ\\ \delta ^{\prime }& \frac{1}{3}ϵ& 1\end{array}\right)m_D\hfill \\ & \\ N^0=\left(\begin{array}{ccc}\eta & 0& 0\\ 0& 0& ϵ\\ 0& ϵ& 1\end{array}\right)m_U,\hfill & L^0=\left(\begin{array}{ccc}0& \delta & \delta ^{\prime }\\ \delta & 0& ϵ\\ \delta ^{\prime }& \sigma +ϵ& 1\end{array}\right)m_D.\hfill \end{array}$$
(9)
(The first papers<sup>23</sup> gave only the structures of the second and third families, while this was extended to the first family in a subsequent paper.<sup>29</sup>) Here $`\sigma 1.8`$, $`ϵ0.14`$, $`\delta |\delta ^{}|0.008`$, $`\eta 0.6\times 10^5`$. The patterns that are evident in these matrices are due to the $`SO(10)`$ group-theoretical characteristics of the various Yukawa terms. Notice several facts about the crucial parameter $`\sigma `$ that is responsible for the lopsidedness of $`L`$ and $`D`$. First, if $`\sigma `$ were not present, then instead of the Georgi-Jarlskog factor of 3, the ratio $`m_\mu /m_s`$ would be given by 9. (That is, the Clebsch of $`\frac{1}{3}`$ that appears in $`D`$ due to the generator $`BL`$ gets squared in computing $`m_s`$.) Since the large entry $`\sigma `$ overpowers the small entries of order $`ϵ`$, the correct Georgi-Jarlskog factor emerges. Second, if $`\sigma `$ were not present, $`U`$ and $`D`$ would be proportional, as far as the two heavier families are concerned, and $`V_{cb}`$ would vanish. Third, by having $`\sigma 1`$ one ends up with $`V_{cb}`$ and $`m_s/m_b`$ being of the same order ($`ϵ`$) as is indeed observed. And since $`\sigma `$ does not appear in $`U`$ (for group-theoretical reasons) the ratio $`m_c/m_t`$ comes out much smaller, of order $`ϵ^2`$, also as observed. In fact, with this structure, the mass of charm is predicted correctly to within the level of the uncertainties.
Thus, for several reasons that have nothing to do with neutrinos one is led naturally to the very lopsided form that we found gives an elegant explanation of the mixing seen in atmospheric neutrino data!
From the very small number of Yukawa terms, and from the fact that $`SO(10)`$ symmetry gives the normalizations of these terms, and not merely order of magnitude estimates for them, it is not surprising that many precise predictions result. In fact there are altogether nine predictions.<sup>29</sup> Some of these are post-dictions (including the highly non-trivial one for $`m_c`$). But several predictions will allow the model to be tested in the future, including predictions for $`V_{ub}`$ and the mixing angles $`U_{e2}`$ and $`U_{e3}`$.
In the first papers it appeared that the model only gave the small-angle MSW solution to the solar neutrino problem. In fact, if $`\eta =0`$, or if forms for $`M_R`$ are chosen that do not involve much mixing of the first-family right-handed neutrino with the others, then a very precise prediction for $`U_{e2}`$ results that is beautifully consistent with the small-angle MSW solution.<sup>29</sup> However, in a subsequent paper<sup>30</sup> we showed that for other simple forms of $`M_R`$ the model gives bi-maximal mixing. (This happens in a way similar to what we saw above in Eqs. (4) and (5) for the Jezabek-Sumino model.)
For more details of the ABB model and its predictions I refer you the papers I have mentioned.
(The classification given in this talk has been somewhat expanded in a paper by Barr and Dorsner.<sup>31</sup> That paper also contains a much more complete listing of three-neutrino models that have been published in the last few years. It also gives a general discussion of expectations for the parameter $`U_{e3}`$.)
REFERENCES
1. J.W.F. Valle, Neutrino physics at the turn of the millenium (hep-ph/9911224); S.M. Bilenky, Neutrino masses, mixings, and oscillations, Lectures at the 1999 European School of High Energy Physics, Casta Papiernicka, Slovakia, Aug. 22-Sept. 4, 1999 (hep-ph/0001311).
2. M.C. Gonzalez-Garcia, P.C. de Holanda, C. Peña-Garay, and J.W.F. Valle, Status of the MSW solutions to the solar neutrino problem, hep-ph/9906469.
3. V. Barger and K. Whisnant, Seasonal and energy dependence of solar neutrino vacuum oscillations, hep-ph/9903262
4. M.C. Gonzalez-Garcia, talk at International Workshop on Particles in Astrophysics and Cosmology: From Theory to Observation, Valencia, Spain, May 3-8, 1999.
5. M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, Proc. Supergravity Workshop at Stony Brook, ed. P. Van Nieuwenhuizen and D.Z. Freedman (North-Holland, Amsterdam, 1979); T. Yanagida, Proc. Workshop on Unified theory and the baryon number of the universe, ed. O. Sawada and A. Sugamoto (KEK, 1979).
6. A. Zee, Phys. Lett. B93 (1980) 389; Phys. Lett. B161 (1985) 141.
7. C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B147 (1979) 277.
8. J.A. Harvey, D.B. Reiss, and P. Ramond, Mass relations and neutrino oscillations in an $`SO(10)`$ model, Nucl. Phys. B199 (1982) 223-268.
9. R.N. Mohapatra and S. Nussinov, Bimaximal neutrino mixing and neutrino mass matrix, Phys. Rev. D60 (1999) 013002 (hep-ph/9809415).
10. C. Jarlskog, M. Matsuda, S. Skadhauge, and M. Tanimoto, Zee mass matrix and bimaximal neutrino mixing, Phys. Lett. B449 (1999) 240-252 (hep-ph/9812282).
11. M. Drees, S. Pakvasa, X. Tata, T. terVeldhuis, A supersymmetric resolution of solar and atmospheric neutrino puzzles, Phys. Rev. D57 (1998) 5335-5339 (hep-ph/9712392).
12. K. Fukuura, T. Miura, E. Takasugi, and M. Yoshimura, Maximal CP violation, large mixings of neutrinos and democratic type neutrino mass matrix, Osaka Univ. preprint, OU-HET-326 (hep-ph/9909415).
13. G.K. Leontaris and J. Rizos, New fermion mass textures from anomalous $`U(1)`$ symmetries with baryon and lepton number conservation, CERN-TH-99-268 (hep-ph/9909206).
14. M. Jezabek and Y. Sumino, Neutrino mixing and seesaw mechanism, Phys. Lett. B440 (1998) 327-331 (hep-ph/9807310); G. Altarelli and F. Feruglio, Neutrino mass textures from oscillations with maximal mixing, Phys. Lett. B439 (1998) 112-118 (hep-ph/9807353).
15. M. Jezabek and Y. Sumino, Neutrino mixing and seesaw mechanism, Phys. Lett. B440 (1998) 327-331 (hep-ph/9807310).
16. G. Altarelli and F. Feruglio, Phys. Lett. B439 (1998) 112-118 (hep-ph/9807353).
17. B. Stech, Are the neutrino masses and mixings closely related to the masses and mixings of quarks?, talk at 23rd Johns Hopkins Workshop on Current Problems in Particle Theory: Neutrinos in the Next Millenium, Baltimore, MD, 10-12 June 1999 (hep-ph/9909268). M. Bando, T. Kugo, and K. Yoshioki, Neutrino mass textures with large mixing, Phys. Rev. Lett. 80 (1998) 3004-3007 (hep-ph/9710417). M. Abud, F. Buccella, D. Falcone, G. Ricciardi, and F. Tramontano, Neutrino masses and mixings in $`SO(10)`$, DSF-T-99-36 (hep-ph/9911238).
18. Q. Shafi and Z. Tavartkiladze, Proton decay, neutrino oscillations and other consequences from supersymmetric $`SU(6)`$ with pseudogoldstone Higgs, BA-99-39 (hep-ph/9905202); D.P. Roy, talk at 6th Topical Seminar on Neutrino and AstroParticle Physics, San Miniato, Italy, 17-21 May 1999 (hep-ph/9908262).
19. M. Fukugita, M. Tanimoto, and T. Yanagida, Atmospheric neutrino oscillation and a phenomenological lepton mass matrix, Phys. Rev. D57 (1998) 4429-4432 (hep-ph/9709388); M. Tanimoto, Vacuum neutrino oscillations of solar neutrinos and lepton mass matrix, Phys. Rev. D59 (1999) 017304 (hep-ph/9807283); H. Fritzsch and Z.-z. Xing, Large leptonic flavor mixing and the mass spectrum of leptons, Phys. Lett. B440 (1998) 313-318 (hep-ph/9808272); S.K. Kang and C.S. Kim, Bimaximal lepton flavor mixing and neutrino oscillation, Phys. Rev. D59 (1999) 091302 (hep-ph/9811379).
20. K.S. Babu and S.M. Barr, Phys. Lett. B381 (1996) 202 (hep-ph/9511446).
21. S.M. Barr, Phys. Rev. D55 (1997) 1659 (hep-ph/9607419).
22. J. Sato and T. Yanagida, Large lepton mixing in a coset space family unification on $`E(7)/SU(5)\times U(1)^3`$, Phys. Lett. B430 (1998) 127-131 (hep-ph/9710516).
23. C.H. Albright, K.S. Babu, and S.M. Barr, Phys. Rev. Lett. 81 (1998) 1167 (hep-ph/9802314); C.H. Albright and S.M. Barr, Fermion masses in $`SO(10)`$ with a single adjoint Higgs field, Phys. Rev. D58 (1998) 013002 (hep-ph/9712488).
24. N. Irges, S. Lavignac, and P. Ramond, Predictions from an anomalous $`U(1)`$ model of Yukawa hierarchies, Phys. Rev. D58 (1998) 035003 (hep-ph/9802334).
25. Y. Nomura and T. Yanagida, Bimaximal neutrino mixing in $`SO(10)`$ (GUT), Phys. Rev. D59 (1999) 017303 (hep-ph/9807325); Z. Berezhiani and A. Rossi, Grand unified textures for neutrino and quark mixings, JHEP 9903:002 (1999) (hep-ph/9811447); K. Hagiwara and N. Okamura, Quark and lepton flavor mixings in the $`SU(5)`$ grand unification theory, Nucl. Phys. B548 (1999) 60-86 (hep-ph/9811495); G. Altarelli and F. Feruglio, A simple grand unification view of neutrino mixing and fermion mass matrices, Phys. Lett. B451 (1999) 388-396 (hep-ph/9812475); K.S. Babu, J. Pati, and F. Wilczek, Fermion masses, neutrino oscillations and proton decay in the light of SuperKamiokande (hep-ph/9812538); R. Barbieri, L.J. Hall, G.L. Kane, and G.G. Ross, Nearly degenerate neutrinos and broken flavor symmetry, OUTP-9901-P (hep-ph/9901228); K.I. Izawa, K. Kurosawa, Y. Nomura, and T. Yanagida, Grand unification scale generation through anomalous $`U(1)`$ breaking, Phys. Rev. D60 (1999) 115016 (hep-ph/9904303); E. Ma, Permutation symmetry for neutrino and charged lepton mass matrices, Phys. Rev. D61 (2000) 033012 (hep-ph/9909249); Q. Shafi and Z. Tavartkiladze, Bimaximal neutrino mixings and proton decay in $`SO(10)`$ with anomalous flavor $`U(1)`$, BA-99-63 (hep-ph/9910314); P. Frampton and A. Rasin, Non-abelian discrete symmetries, fermion mass textures and large neutrino mixing, IFP-777-UNC (hep-ph/9910522).
26. H. Georgi and C. Jarlskog, Phys. Lett. B86 (1979) 297.
27. S.M. Barr and S. Raby, Minimal $`SO(10)`$ unification, Phys. Rev. Lett. 79 (1997) 4748-4751.
28. S. Dimopoulos and F. Wilczek, report No. NSF-ITP-82-07 (1981), in The unity of fundamental interactions Proceedings of the 19th Course of the International School of Subnuclear Physics, Erice, Italy, 1981 ed. A. Zichichi (Plenum Press, New York, 1983); K.S. Babu and S.M. Barr, Phys. Rev. D48 (1993) 5354 (hep-ph/9306242); K.S. Babu and S.M. Barr, Phys. Rev. D50 (1994) 3529 (hep-ph/9402291).
29. C.H. Albright and S.M. Barr, Phys. Lett. B452 (1999) 287 (hep-ph/9901318).
30. C.H. Albright and S.M. Barr, minimal Higgs model, Phys. Lett. B461 (1999) 218 (hep-ph/9906297).
31. S.M. Barr and Ilja Dorsner, hep-ph/0003058.
# Where do we stand?
## 1 What is a bulge?
Classical bulges are centrally-concentrated, high surface density, three-dimensional stellar systems. Their high density could arise either because significant gaseous dissipation occurred during their formation, or could simply reflect formation at very high redshift (or some combination of these two, depending on the density). For illustration, equating the mean mass density within the luminous parts of a galaxy (assumed to have circular velocity $`v_c`$ and radius $`r_c`$) with the cosmic mean mass density at a given redshift, $`z_f`$, gives (e.g. Peebles 1989)
$$z_f\simeq 30\frac{1}{f_c\mathrm{\Omega }^{1/3}}\left(\frac{v_c}{250\mathrm{km}/\mathrm{s}}\right)^{2/3}\left(\frac{10\mathrm{kpc}}{r_c}\right)^{2/3},$$
where $`f_c`$ is the collapse factor of the proto-galaxy, being at least the factor 2 of dissipationless collapse, and probably higher so that bulges, as observed, are self-gravitating, meaning that they have collapsed relative to their dark haloes.
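As a rough numerical check of this estimate, the short Python sketch below evaluates the formula above; the values adopted for $`v_c`$, $`r_c`$, $`f_c`$ and $`\mathrm{\Omega }`$ are illustrative assumptions for a Milky-Way-like system, not fits.

```python
# Hedged illustration of the collapse-redshift estimate above.
def z_form(v_c_kms, r_c_kpc, f_c=2.0, omega=0.3):
    """Redshift at which the mean density within (v_c, r_c), boosted by
    the collapse factor f_c, equals the cosmic mean mass density."""
    return (30.0 / (f_c * omega ** (1.0 / 3.0))
            * (v_c_kms / 250.0) ** (2.0 / 3.0)
            * (10.0 / r_c_kpc) ** (2.0 / 3.0))

# Assumed Milky-Way-like numbers: v_c = 220 km/s, r_c = 10 kpc,
# dissipationless collapse factor f_c = 2, Omega = 0.3:
print(z_form(220.0, 10.0))   # ~20, i.e. formation at very high redshift
```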
The majority view at the meeting, consistent with the observations, is that indeed proto-bulges radiated away binding energy, but also that at least their stars formed at relatively high redshift. One must always be careful to distinguish between the epoch at which the stars now in a bulge formed, and the epoch of formation of the bulge system itself (as emphasized by Pfenniger, this volume). Of course if the bulge formed with significant dissipation, meaning gas physics dominated, then the star formation and bulge formation probably occurred together.
The small length-scale of bulges, combined with their modest rotation velocity, leads to a low value of their angular momentum per unit mass. Indeed, in the Milky Way Galaxy, the angular momentum distribution of the bulge is similar to that of the slowly-rotating stellar halo, and different from that of the disk, strongly suggestive of a bulge–halo connection, perhaps via gas ejection from halo star-forming regions (e.g. Wyse & Gilmore 1992). One can appeal to bulges forming from the low angular momentum regions of the proto-galaxy, a variant on the Eggen, Lynden-Bell & Sandage (1967) ‘monolithic collapse’ scenario, explored further by van den Bosch (1998 and this volume). Or one can posit angular momentum transport prior to the formation of the bulge, taking angular momentum away from the central regions, and depositing it in the outer regions. Such transport of angular momentum could perhaps occur during hierarchical merging, by dynamical friction and gravitational torques, although one must be careful not to end up with too small a disk due to over-efficient angular momentum re-arrangement (e.g. Zurek, Quinn & Salmon 1988; Navarro & Benz 1991; Navarro & Steinmetz 1997). More modest amounts of angular momentum transport may be achieved by some viscosity in the early disk (e.g. Zhang & Wyse 1999).
A recurring theme of the meeting was that large bulges (of early-type disk galaxies?) are related to ellipticals while small bulges (intermediate–late-type disk galaxies?) are more closely allied to disks. We need to be very clear about the observational selection criteria used in the definition of samples, and how this could bias our conclusions. As we will see below, the Milky Way bulge shows characteristics of both early- and late-type bulges, and will feature in both bulge–elliptical connections and bulge–disk connections.
### 1.1 The elliptical–bulge connection
There has been remarkably little new kinematic data for representative samples of bulges (as opposed to detailed study of particular individual bulges, chosen for their unusual characteristics) since the pioneering work of the 1970s and 1980s. As demonstrated by Davies et al. (1983), the bulges of early-type spirals are like ellipticals of equal luminosity in terms of rotational support, and are consistent with being isotropic oblate rotators, i.e. with having an isotropic stellar velocity dispersion tensor and being flattened by rotation about their minor axis. This sample was biased towards early-type spirals to facilitate bulge–disk decomposition, by observing edge-on systems with a prominent bulge. The bulge of the Milky Way Galaxy can be observed in a manner that matches the techniques employed in the study of the bulges of external galaxies, and it too then has stellar kinematics consistent with being an isotropic rotator (Ibata & Gilmore 1995a,b; Minniti 1996), as shown in Figure 1 here.
The trend apparent in Figure 1, and discussed more fully in Davies et al. (1983), is that the level of rotational support in ellipticals increases as the luminosity of the elliptical decreases. The surface brightness of ellipticals also increases with decreasing luminosity, at least down to the luminosity of M32 (the dwarf spheroidal galaxies are another matter), as noted by Kormendy (1977), Wirth & Gallagher (1984) and many subsequent papers. These two relations are consistent with an increasing level of importance of dissipation in ellipticals with decreasing galaxy luminosity (Wyse & Jones 1984).
Further, the bulges of S0-Sc disk galaxies follow the general trend of the Kormendy (1977) relations, in that smaller bulges are denser (de Jong 1996; Carlberg, this volume; see Figure 3 below for details). Thus one interpretation of Figure 1 is then that (some) bulges too formed with significant dissipation.
As discussed by several speakers, the bulges of S0-Sc disk galaxies have approximately the same Mg2 – velocity dispersion relation as do ellipticals (Jablonka et al. 1996; Idiart et al. 1997; see Renzini this volume), although the actual physics behind this correlation is not uniquely constrained. The properties of line-strength gradients in ellipticals of a range of luminosities are consistent with lower luminosity ellipticals forming with more dissipation than the more luminous ellipticals (Carollo, Danziger & Buson 1993). Again these results are suggestive that bulges are similar to low-luminosity ellipticals, and that gas dissipation was important.
The detailed interpretation of the line-strength data in terms of the actual age and metallicity distributions of the stars is extremely complex and as yet no definitive statements can be made. There is a clear need for more data, including radial gradients, and for more models (see Trager, this volume). The broad-band colors of (some) bulges are consistent with those of the stellar populations in early-type galaxies in the Coma cluster (Peletier & Davies, this volume). We still need better models to interpret even broad-band colors.
### 1.2 The disk–bulge connection
The surface-brightness profiles of bulges in later-type disk galaxies are better fit by an exponential law than by the steeper de Vaucouleurs profile, which in turn is a better fit for the bulges of early-type disk galaxies (Andredakis, Peletier & Balcells 1995; de Jong 1996). The sizes of bulges are statistically related to those of the disks in which they are embedded, and indeed the (exponential) scale-lengths of bulges are around one-tenth that of their disk; this correlation is better for late-type spirals than for early types (Courteau, de Jong & Broeils 1996). The projected starlight of the bulge of the Milky Way can be reasonably well-approximated by exponentials (vertically and in the plane); the Milky Way then fits within the scatter of the correlation of the external galaxies.
The optical colors of bulges are approximately the same as those of the inner disk, for the range of Hubble types S0-Sd (Balcells & Peletier 1994; de Jong 1996), but as ever the decomposition of the light profiles is difficult, as is correction for dust. This correlation implies similar stellar populations in bulges and their disks, as may be expected if bulges form from their disks (see Pfenniger, this volume). Thus, should there be a variation of mean stellar age from disk to disk, as may be expected from the range of colors observed, and indeed from observations of gas fraction etc., together with models of star formation in disks, one would expect a corresponding range in the mean stellar age of the different bulges. However, Peletier & Davies (this volume) find only a narrow range in bulge ages for their sample, based on optical–IR colors. More data are clearly needed.
Figure 1 demonstrated the similarity in their kinematics between bulges and ellipticals of the same luminosity; Figure 2 (taken from Franx 1993) illustrates some of the complexity of bulge kinematics, and emphasizes the need to be aware of the selection criteria – not all bulges are the same. The left-hand panel shows that in terms of the ratio of stellar velocity dispersion to true circular velocity (not the rotational streaming velocity), bulges scatter below ellipticals. Further, the right-hand panel shows that bulges of late-type disk galaxies have values of this ratio similar to that typical of inner disks (from Bottema 1993). The Milky Way bulge in this plot is quite typical ($`\sigma /\mathrm{V}_\mathrm{c}\sim 0.5`$, B/T $`\sim 0.25`$).
Complexity in the relationship between surface brightness and scale-length for bulges is illustrated in Figure 3, based on WFPC2 data from Carollo (1999). The plot shows that while the large, $`R^{1/4}`$-law bulges follow the same scaling as ellipticals, the smaller, exponential-profile bulges are offset to lower surface brightnesses and occupy the extension to smaller scalelengths (by about a factor of ten, as noted above) of the locus of late-type disks. This strengthens the disk–bulge connection for these small bulges. However, Carollo (1999) finds both R<sup>1/4</sup> and exponential bulges in apparently very similar disks, so some additional parameter is important.
Association of ‘peanut’ bulges with bars, which are essentially a disk phenomenon, was made in several contributions, using both gas and stellar kinematics (Kuijken; Bureau). However, the pronounced ‘peanut’ in the early COBE images of the Milky Way was apparently largely an artefact of patchy dust, and the amplitude of such a morphology in the bulge of the Milky Way is not reliably established (Binney, Gerhard & Spergel 1997). As emphasized by Pfenniger (this volume), the kinematical and dynamical effects of bars are 3-dimensional; they can scatter stars by resonances, and/or themselves go unstable, fatten and dissolve, leading to a bulge. Which process dominates? There is a wealth of fascinating physics to explore. The modellers need to make more contact with observations, including predictions for direct comparison with the stellar kinematics, ages of stars, surface brightness profiles etc.
M33 has neither a bulge nor a bar, but does have a central nucleus, and of course a substantial disk. Such systems need to be discussed in this context. The central nucleus of the Milky Way contains a black hole and star clusters of mass fraction well below the 1% or so estimated to destroy a bar (Norman, Sellwood & Hasan 1996), if we associate all the $`10^{10}M_{\odot }`$ of the bulge with the bar. Indeed it is somewhat of a curiosity that the Milky Way does not fit the relationship between black hole mass and bulge mass found by Magorrian et al. (1998).
## 2 When do Bulges form?
The fossil evidence from Local Group galaxies constrains the ages of the stars presently in bulges, which could be rather different from the age of the morphological system.
The overwhelming evidence (contributions by Gilmore, Frogel, Renzini, Rich) for the Milky Way bulge is that its stars are old, except for a very small scaleheight young component – and since all components of the Galaxy have their peak surface brightnesses in the center (note that some of the decompositions of the COBE data have modelled the disk with a hole in the central regions, which I believe points to continuing uncertainty in the interpretation of those data in terms of the parameterizations of the different components along the line-of-sight), this is as likely to be associated with the disk. Further, as discussed by Rich (this volume) the dominant stellar population even in the nuclear regions is apparently old.
The situation in external bulges in the local universe is more uncertain, but is consistent with stars in large bulges being ‘old’, which means forming perhaps 10 Gyr ago.
Direct studies of morphology at high redshift require HST and are at present based on small samples and must be treated with caution if attempting to draw general conclusions. The Hubble Deep Field (HDF) has provided much of the field galaxy sample (as opposed to members of galaxy clusters). Recently, Abraham et al. (1999a) analysed the spatially-resolved colors of galaxies of known redshift in the HDF. In contrast to the case of cluster ellipticals discussed by Renzini (this volume), they find that almost half of their (small) sample of field ellipticals at intermediate redshift ($`0.4<z<1`$) show evidence for a range of stellar ages. The color gradients in the galaxies for which they could derive a reasonable bulge–disk decomposition are consistent with the mean stellar ages of the bulges being older than those of their disks. These authors argue that this presents difficulties for secular evolution models, but again one must remember the possible selection biases. Abraham et al. (1999b) further find a significant deficit of barred galaxies for redshifts above 0.5; as those authors note, more data for a wider range of rest-frame colors and redshifts are needed to confirm this result, and then to decide on a robust interpretation. As discussed by Lilly (these proceedings), there is strong evidence from SCUBA data for the existence of compact galaxies with high star-formation rates at high redshift, consistent with proto-spheroids forming in a starburst.
The age distribution of inner disks is of obvious importance for constraining scenarios of disk–bulge formation. Unfortunately, we do not know this well, even in the Milky Way. Indeed, we do not have a good understanding of the star formation history even at the solar neighborhood. We do know that out to a few kpc from the Sun there are stars in the thin and thick disks that are as old as the globular clusters (Edvardsson et al. 1993; Gilmore, Wyse & Jones 1995). The stellar color–magnitude data, the chemical abundances and the white dwarf luminosity function data are all broadly consistent with a local (solar neighborhood) star formation rate that has been approximately constant, back to $`12`$Gyr (e.g. Noh & Scalo 1990; Rocha-Pinto & Maciel 1997). Most models of star formation in disks predict that the central regions should evolve faster, and hence the mean stellar age should be older in the inner disk than in the outer disk. Thus perhaps indeed predominantly-old bulges can be formed recently, from old stars in the central parts of disks. But one really has to be careful to avoid a significant age range in the bulge, reflecting the continuing star formation in the disk up to the time of bulge formation.
Simulations of hierarchical clustering galaxy formation predict ‘bulges’ to form stars at redshift of $`z\sim 2`$ (peak) even if assembled later (Frenk, oral presentation this meeting; Baugh, Cole, Frenk & Lacey 1998). In these scenarios, bulges (and ellipticals) form from mergers between pre-existing disk galaxies, and consist of a mix of the disk stars, plus, in some versions, new star formation in the central regions resulting from the disk gas being driven there during the merger. Disks are then (re-)accreted around these bulges. Thus bulges in galaxies with relatively big disks (i.e. Scs) should be the oldest bulges, and bulges with small disks should be the youngest (Baugh, Cole & Frenk 1996; Kauffmann 1996). This is not obviously consistent with the observations presented at the meeting.
A preliminary attempt to make detailed predictions and see if the ‘bulges’ in these models fit the observed scaling between size and luminosity was presented at the meeting by Lacey; the models did not include dissipation and failed to produce small enough bulges. This is further evidence that the high phase space densities of bulges require dissipation (cf. Wyse 1998).
## 3 What are the Timescales – duration of bulge formation?
The finest time-resolution in studies of stellar populations is available from study of the patterns of elemental abundances in individual stars, as discussed in this volume by Renzini; the elemental signature of a short duration of star formation is a pattern of enhanced $`\alpha `$-elements, as produced by Type II supernovae alone. The bulge of the Milky Way is surprisingly under-studied and we really do need more data for field stars; for the extant small sample, different $`\alpha `$-elements show different patterns (McWilliam & Rich 1994), unexplained within the context of solely Type II supernovae yields (e.g. Worthey 1998) or in comparison with the element ratios of stars in the stellar halo. It is worth noting however that the elemental abundances seen in the bulge field stars and in the bulge (or thick-disk?) globular clusters are consistent with a normal massive-star stellar IMF (cf. Wyse & Gilmore 1992), as also seen via star counts for the lower mass stars in Baade’s window (Holtzman et al. 1998).
Color-magnitude diagrams of old populations can only constrain the duration of star formation to be less than many Gyr, due to the crowding of the isochrones (reflecting the long main-sequence lifetimes of low-mass stars). Further, one needs to know the metallicity distributions, and crucially for the Milky Way bulge, the distance distribution, since foreground disk stars are a difficult contaminant.
As mentioned above, hierarchical-clustering and merging scenarios predict a many Gyr spread in ages of bulge stars, but we need a better quantification of ‘many’. And again, a significant age spread is predicted in the simpler secular evolution models, although the restriction to only one early disk–bar–bulge episode would minimise it.
The shortest durations of star formation are predicted by the starburst models wherein pre-assembled gas forms stars on only a few free-fall times, but the physics of the assembly of the gas will also play a role (Carlberg, this volume, who favors wind-regulated accretion of gas-rich satellites; Elmegreen, this volume, who favors unregulated, monolithic collapse). That very high star formation rates happened in some systems at high redshift is supported by the SCUBA observations (Lilly, this volume), but important aspects of the model obviously need to be worked out (e.g. is there or isn’t there a dominant supernova-driven wind?)
## 4 Constraints from Physical Properties
### 4.1 Angular momentum distributions
The hierarchical-clustering and merging scenario predicts misalignment in the angular momentum vector of different shells of material around a peak. This may be expected to translate into some persistent misalignment between disk and bulge, and even counter-rotating components. While examples of such systems exist (see Bertola et al. this volume), these would appear to be the exception rather than the rule (see Kuijken, this volume).
Quantification of the specific angular momentum distributions of disks and bulges is obviously desirable, but the observational determinations are dependent on not only detailed kinematic data, but also the decomposition of the light profile (and M/L). Note that in the Milky Way the determination of the kinematic properties of the bulge – and in particular any gradients – requires very careful treatment of contamination by the disk (see Ibata & Gilmore 1995a,b; Tiede & Terndrup 1997 for details). The extant theoretical predictions of angular momentum distributions of bulge and of disk are also not sufficient.
### 4.2 Central star clusters and bars
‘Secular evolution’ models for forming bulges from inner disks naively predict an anti-correlation between significant central mass concentrations and bars, since in these models the clusters destroy the bar. There is a particular need to determine how many cycles of bar formation/dissolution are expected theoretically, and how many are allowed by the observations. The uniform old age of bulges, including that of the Milky Way, suggested by most of the evidence presented at this meeting (but again remember possible selection effects) argues strongly for only one such episode, and as noted by Gilmore (this volume), the disk must still continue into the central regions. The relative frequency of bars, exponential versus R<sup>1/4</sup> bulges, central star clusters etc. is as yet poorly quantified. The initial results of an HST WFPC2 and NICMOS imaging survey of nearby spiral galaxies (Carollo 1999) have revealed some of the complexity of the inner regions of these systems, finding a high fraction of photometrically-distinct compact sources sitting at the galactic centers. These ‘nuclei’ have surface brightnesses and radii ranging from those typical of the old Milky Way globular clusters to those of the young star-clusters found in interacting galaxies (e.g. Whitmore et al. 1993; Whitmore & Schweizer 1995), with typical half-light radii of a few pc up to $`20`$pc. Many of the nuclei are embedded in bulge-less disks or in bulge-like structures whose light distribution is too dusty/star-forming to be meaningfully modelled. Every exponential bulge was found to contain a nucleus, and further the luminosity of the nucleus was consistent with its being sufficiently massive to have destroyed a bar of the same mass as the (exponential) bulge. Are these nuclei the central mass concentrations of the models?
The $`\mathrm{V}-\mathrm{H}`$ color distribution of the exponential bulges is rather broad, and peaks at $`\mathrm{V}-\mathrm{H}\sim 0.96`$, significantly bluer, by about 0.4 mag, than the value typical of the R<sup>1/4</sup> bulges (Carollo et al. 1999). If this bluer color can be ascribed to a younger age, this would indicate that exponential bulges are the preferred mode for bulges forming more recently. The relatively massive central clusters found in these exponential bulges could theoretically prevent subsequent bar formation, removing the possibility of successive cycles of the bar formation – gas inflow – formation of central object – bar dissolution mechanism (as was discussed also by Rix in his oral presentation at this meeting).
### 4.3 Chemical abundance distributions
The K-giants in the Milky Way bulge have a very broad metallicity distribution, both in Baade’s window (at a projected Galactocentric distance of around 500pc; Rich 1988) and at projected distances of several kpc (Ibata & Gilmore 1995a,b). The breadth of the metallicity distribution in the bulge contrasts with the narrow distribution observed in the disk at the solar neighborhood. The lack of metal-poor stars in the local disk conflicts with predictions of the ‘simple model’ of chemical evolution, and is the famous ‘G-dwarf problem’. One hastens to add that the fact that the Milky Way bulge has a broad distribution, and indeed fits the predictions of the ‘simple model’ of chemical evolution, does not mean that any or all of the many assumptions of the ‘simple model’ are valid; another example of a stellar system with a metallicity distribution that is well-fit by the simple model (albeit with a reduced yield) is the stellar halo of the Milky Way. The G-dwarf problem has many solutions, the most popular of which is to postulate gas inflows (e.g. Tinsley 1980). The width of a metallicity distribution is related to the ratio of inflow time to star formation time, and perhaps the wider metallicity distribution in the bulge can be interpreted in terms of very rapid star formation, occurring too fast for inflow to affect the metallicity structure.
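For reference, a minimal sketch of the closed-box ‘simple model’ prediction invoked here is given below; the adopted yield is an arbitrary placeholder, and the point is only the shape of the cumulative metallicity distribution.

```python
import numpy as np

# Closed-box model with instantaneous recycling: the mass fraction of
# long-lived stars formed with metallicity below Z is 1 - exp(-Z/p),
# where p is the yield. The broad tail to low Z is what the local disk
# lacks (the G-dwarf problem) but the bulge and halo appear to show.
p = 0.02                                  # assumed yield (placeholder)
for Z in np.linspace(0.005, 0.05, 5):
    print(f"Z < {Z:.3f}: stellar mass fraction = {1 - np.exp(-Z/p):.2f}")
```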
The M-giants studied by Frogel (this volume) in the inner 100pc or so of the bulge do appear to have a narrow metallicity distribution, but this may reflect the bias inherent in the sample selection by such a late spectral type; data for K-giants are desirable, both because they are a more representative evolutionary phase of low-mass stars, and because their spectra are easier to interpret and use to determine metallicities, than are M-giants.
From the width of the giant branch in color-magnitude diagrams, the M31 bulge is inferred to have a rather broad metallicity distribution in its outer parts, but a narrow metallicity distribution interior to 1kpc (Renzini, this volume; Rich, this volume). Perhaps this variation in width also reflects a variation of the ratio of star-formation rate to gas inflow rate, this time a variation with radius within the bulge. At face value, the opposite trend – one with a broader metallicity distribution in the inner, more dense parts – may be expected in models where the local star formation rate is determined by a non-linear function of gas density, but the flow rate is given by the inverse of the dynamical time (proportional to the square root of density), so that the ratio of star formation time to flow time decreases with increasing density.
The stellar populations of the resolved bulges in the Local Group are not compatible with their formation via accretion and assimilation of satellites and/or globulars like those remaining today – the bulges are too metal-rich, and have too narrow an age distribution. However, perhaps some part of the metal-poor tail in the Milky Way bulge could be due to accretion of the dense, metal-poor and old globular clusters. Note that for stellar satellite systems with a realistic density profile, a significant fraction of the stars will be tidally removed far out in the halo, and only a fraction will make it into the center (Syer and White 1998; see Kuijken, this volume). Kuijken (this volume) notes that the timescale of satellite accretion is rather long, so that any bulge-building by this means should be on-going. This raises a further difficulty, in that the old, metal-rich bulge stars are unlike those in typical satellites. A graphic illustration of the difference in stellar populations between the bulge of the Milky Way and the Sagittarius dwarf, one of the more massive satellite galaxies of the Milky Way, is shown in Figure 4.
### 4.4 Chemical Abundance Gradients
A strong chemical abundance gradient is a signature of slow, dissipative collapse. Such gradients are weakened, but not erased, by any subsequent mergers (e.g. White 1980; Barnes & Hernquist 1992). There are no clear predictions for secular evolution models (but they are needed).
Observationally, there are weak or minimal amplitude gradients in mean metallicity in resolved bulges (Milky Way Galaxy – Gilmore, Frogel, this volume; M31 – Frogel, Rich, Renzini, this volume). As mentioned, the interpretation of absorption line-strengths remains ambiguous, and we need more data and models.
## 5 Summary
Bulges are diverse in their properties, and probably in their formation mechanisms, or at least in the dominant physics at the epochs of star formation and/or assembly. Perhaps the differences are just a matter of degree, since, for example, even ‘monolithic collapse’ involves fragmentation, with subsequent star formation in the fragments. A centrally-concentrated profile appears to match ‘maximum entropy’ arguments (Tremaine, Henon & Lynden-Bell 1986) for the end-point of violent relaxation of a cold, clumpy system, independently of the details of the evolution to that end-point.
The overall trends of the observations are that small bulges, of late-type disk galaxies, show a strong connection to their disk, while big bulges, of early-type disk galaxies, are more like the low-luminosity extension of the elliptical galaxy sequence. The bulge of the Milky Way appears to straddle these two generalities, having an affinity for its disk in terms of structure, but having the old, metal-rich population associated with ‘spheroids’.
What does this mean? Even the casual reader should have noted the not-infrequent occurrence of the sentiment ‘more data and models are needed’ in the text above. We are at the stage of requiring robust quantitative results from both theory and observations.
More specifically, for the Milky Way, we require good HST color-magnitude diagrams for more lines-of-sight towards the Milky Way bulge, following the work of Feltzing & Gilmore (1999) in establishing the association of a younger stellar population with foreground disk. We also require good reddening maps and metallicity data to aid the interpretation of these color-magnitude diagrams. The inner disk of the Milky Way is remarkably under-studied, and again age and metallicity distributions – and stellar kinematics – are obviously crucial in determining the similarity or otherwise of inner disk and bulge. Further, we need to understand the relationship between the ‘bulge’ globular clusters and the bulge field population; present models of globular-cluster formation appeal to pre-enrichment to provide the uniform enrichment within a given cluster, so it is not obvious that the enrichment signatures of cluster stars and field stars should be the same. Elemental abundances for statistically-significant samples of unbiased tracers of the field in a variety of lines-of-sight are required to understand the history of star formation.
A combination of HST and ground-based (to probe both small- and large-scale structure) broad-band optical and IR colors, and surface brightness profiles, is still lacking for large samples, including the whole range of spiral Hubble types. These data should allow a robust quantification of the correlations between morphologies. Basic kinematic data, including gradients, should be obtained for a representative sample of bulges and disks. While we may lack the means at present for a unique interpretation of absorption line-strength data, the straightforward test for continuity in the line strengths from bulges to their disks is meaningful.
The redshift of statistically-significant samples of galaxies is being continually pushed back (at what point will this pose a real problem for CDM?) and HST and the next generation of telescopes should provide robust morphological classifications. We will no doubt see evolution, but need to have the model predictions to be able to distinguish the underlying physics behind the evolution.
‘Secular-evolution’ models are in their early stages of development, but several key questions may be posed. While it may be reasonable to comment that a correlation between bulge scale-length and disk scale-length points to a connection between bulge and disk, can the models ‘post’-dict the factor of ten that is observed? Can they predict the frequency with which one should see barred spirals today, even ones with big bulges? Are all bars the same? Are there too many bars and/or central concentrations observed for the models of bar dissolution? Or is the dominant mechanism of bulge-building in this scenario actually scattering of disk stars through resonant coupling, rather than bar dissolution? How can this be compatible with uniformly old bulges? But are exponential bulges (apart from the Milky Way bulge) composed of old stars?
Cold-dark-matter dominated cosmologies gained popularity partially because of their robust predictive power, a requirement for a good theory, in terms of the large-scale structure formed by the dissipationless dark haloes (e.g. Davis, Efstathiou, Frenk & White 1992). The predictions for the luminous components, the galaxies as we observe them, have not yet achieved the same level of maturity. Advocates of merging and hierarchical clustering should quantify further the ages of stars now in bulges, and the epoch of assembly into bulges. What is predicted for the age spread within a typical bulge like the Milky Way? What fraction of bulges should have their angular momentum vector misaligned with their disk? Should colors of bulge and disk be correlated?
If bulges form in a ‘star-burst’, what is the role of a supernova-driven wind? In this context, the X-ray properties of bulges, including the Milky Way, should constrain the ability of the bulge potential well to retain hot gas.
Where do we stand? – inspired to get to work!
###### Acknowledgements.
I acknowledge support from NASA ATP grant NAG5-3928.
|
no-problem/0003/cond-mat0003363.html
|
ar5iv
|
text
|
# Output from Bose condensates in tunnel arrays: the role of mean-field interactions and of transverse confinement
## 1 Introduction
The dynamics of Bose-Einstein condensates of alkali vapours in very elongated traps is a matter of wide experimental and theoretical interest. An example is the mode–locked, pulsed atom laser which has been realized by Anderson and Kasevich by pouring a condensate of <sup>87</sup>Rb atoms in a vertical optical lattice. Drops of coherent matter leave the condensate under the effect of gravity. In this and other similar situations a theorist would like to reduce the problem to one-dimensional (1D) transport along the axial direction, taking account of the interactions through a renormalization of the scattering length embodying the transverse confinement of the 3D condensate.
A numerical study of atomic transport in a model relevant to the above-mentioned system has been based on the solution of a 1D time-dependent Gross-Pitaevskii equation (GPE). This was obtained by freezing out the transverse motions of the condensate and by renormalizing the mean-field interactions according to a proposal made by Jackson et al. The main result of this study was to show that, independently of the strength of the interactions and in complete agreement with the experimental data of Anderson and Kasevich, the separation between successive matter drops is determined by the period of Bloch oscillations of the condensate in the 1D periodic potential of the optical lattice. It was further seen from the numerical results that the shape of the drops is closely related to that of the parent condensate, thus supporting a mechanism of coherent emission and suggesting a practical way to tailor matter-wave laser pulses. Nevertheless, the question remains of how far a specific 1D schematization may quantitatively capture other features of the phenomenon.
In this Letter we address this question by carrying out a 3D numerical study of gravity-driven transport in a cylindrically symmetric condensate inside an optical lattice and by testing against these data the results of 1D reductions of the problem. In addition to the proposal of Jackson et al., we test a reduction previously proposed for condensates in anisotropic harmonic traps, in which we fix the effective 1D scattering length by adjusting the chemical potential of the 1D model to that of the 3D condensate. As a further proof that the phenomenon of drop emission reflects the Bloch oscillations of the condensate, the spacing between the drops remains the same in all these models. The focus of our study then is on the shape and size of the matter pulses.
In our numerical simulations we use a fast, explicit time-marching scheme for the solution of the GPE in cylindrical geometry, which has been developed by Cerimele et al. This method is briefly described in Section 2. Our extensive results of 3D simulation are presented in Section 3 and summarized in a simple diagram reporting the fractional number of particles in the first drop as a function of a single, suitably defined combination of system parameters. The corresponding results from the 1D reductions are reported in Section 4 and critically compared with those of the 3D simulation in Section 5. This final section also presents our conclusions.
## 2 The numerical method
The time-dependent GPE for the condensate wave function $`\mathrm{\Psi }(𝐫,t)`$ is
$$i\mathrm{}\frac{\partial \mathrm{\Psi }(𝐫,t)}{\partial t}=\left(-\frac{\mathrm{}^2\nabla _𝐫^2}{2M}+U_{ext}(𝐫)+U_I|\mathrm{\Psi }(𝐫,t)|^2\right)\mathrm{\Psi }(𝐫,t),$$
(1)
where $`M`$ is the atomic mass, $`U_I=4\pi \mathrm{}^2aN/M`$ is the interaction strength and $`U_{ext}`$ is the external potential, $`a`$ being the scattering length and $`N`$ the number of particles in the condensate. In what follows $`U_{ext}(𝐫)`$ is due to the optical lattice and to gravity, namely
$$U_{ext}(r,z)=U_l^0[1-\mathrm{exp}(-r^2/r_{lb}^2)\mathrm{cos}^2(2\pi z/\lambda )]-Mgz,$$
(2)
Here, $`U_l^0`$ is the well depth, $`r_{lb}`$ is the transverse size, the wavelength $`\lambda `$ yields the lattice period $`d=\lambda /2`$ and $`g`$ is the acceleration of gravity. Finally, the normalization condition is $`\int |\mathrm{\Psi }(𝐫,t)|^2𝑑𝐫=1`$.
We numerically solve Eq. (1) by using an explicit time-marching technique, in contrast with the alternating-direction-implicit solver methods previously used in the context of Bose-Einstein condensation. The present algorithm extends a fast time-staggered scheme proposed by Visscher to solve the Schrödinger equation in an external potential, with the aims of preserving norm conservation in the presence of non-linear mean-field interactions and of handling cylindrical symmetry. In brief, in solving Eq. (1) we synchronously advance the real and imaginary parts of the scaled wave function in units of two time steps, using their intermediate centred value. The space derivatives are approximated by using centred differentiation.
The following crucial points deserve special comment. It is proven a priori and numerically verified that the algorithm preserves the norm of the wave function at each time step, provided that the boundary conditions are such as to annihilate surface terms. It can also be shown that the numerical stability of the algorithm is preserved as long as the simulation time-step does not exceed a critical value $`\mathrm{\Delta }\tau _c`$, which is limited either by the grid spacing or by the magnitude of the product $`aN`$ entering $`U_I`$. In our simulations the actual time-step is consistently kept well below the marginal stability threshold. Finally, the cylindrical symmetry is handled by an accurate treatment of the space derivatives near the symmetry axis.
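To make the structure of such a scheme concrete, here is a minimal 1D sketch of a staggered leapfrog update of the real and imaginary parts of the wave function; the grid, the scaled parameters and the periodic boundary are assumptions chosen for illustration, and the actual code handles cylindrical geometry with the boundary treatment just described.

```python
import numpy as np

# Assumed scaled 1D grid and tilted-lattice potential (cf. Eq. (2) at r = 0)
nz, dz, dt = 512, 0.05, 1e-4
z = (np.arange(nz) - nz // 2) * dz
U0, k_lat, g_tilt, u_I = 1.4, 2.0, 0.05, 1.0   # placeholder scaled values
V = U0 * np.sin(k_lat * z) ** 2 - g_tilt * z

def H(phi, dens):
    """Apply the scaled GP Hamiltonian: kinetic + potential + mean field."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dz ** 2
    return -lap + (V + u_I * dens) * phi

# Gaussian initial state, split into real (R) and imaginary (I) parts
psi = np.exp(-z ** 2)
psi /= np.sqrt(np.sum(psi ** 2) * dz)
R, I = psi.copy(), np.zeros_like(psi)

# Leapfrog marching: dR/dt = H I, dI/dt = -H R, each part advanced using
# the centred value of its partner; the norm is conserved to high accuracy
# as long as dt stays below the stability threshold.
I -= 0.5 * dt * H(R, R ** 2 + I ** 2)          # half step to stagger I
for _ in range(1000):
    dens = R ** 2 + I ** 2
    R += dt * H(I, dens)
    I -= dt * H(R, dens)

print("norm =", np.sum(R ** 2 + I ** 2) * dz)  # stays close to 1
```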
## 3 Results for a 3D condensate in cylindrical geometry
Equation (1) is made dimensionless by adopting the scale units $`S_l=\sqrt{\mathrm{}/2M\omega }`$, $`S_t=1/\omega `$ and $`S_E=\mathrm{}\omega `$ for length, time and energy, $`\omega `$ being the radial frequency of the original magnetic trap. We also rescale the wave function by the dimensionless radial coordinate $`\rho `$. We use a grid resolution of $`21\times 16`$ on each single well, the time step being $`\mathrm{\Delta }\tau =2\times 10^{6}`$.
The initial value $`\mathrm{\Psi }(r,z;t=0)`$ of the wave function is chosen so as to approximate the condensate realized in the experiment of Anderson and Kasevich. Namely,
$$\mathrm{\Psi }(r,z;t=0)=A\mathrm{exp}[-M(\omega _rr^2+\stackrel{~}{\omega }z^2)/2\mathrm{}]\underset{l}{\sum }\mathrm{exp}[-M\omega _z(z-ld)^2/2\mathrm{}].$$
(3)
Here, $`A`$ is a normalization factor and $`l`$ labels the occupied sites, their total number being $`n_w`$. In constructing Eq. (3) we have assumed (i) constant phase of the condensate in space; (ii) a Gaussian transverse profile with a frequency $`\omega _r=\sqrt{2U_l^0/(Mr_{lb}^2)}`$ given by the harmonic approximation to the transverse shape of the optical potential; and (iii) an overall axial profile reflecting that of the condensate inside the magnetic trap before loading the optical lattice and taken as a Gaussian having frequency $`\stackrel{~}{\omega }=4\pi ^{3/5}/\zeta ^2\omega `$, with $`\zeta =(32\pi Na/S_l)^{1/5}`$. Finally, the lowest state at each lattice site is occupied by a portion of condensate, having Gaussian shape with frequency $`\omega _z=2\sqrt{U_l^0E_R}/\mathrm{}`$, where $`E_R=h^2/2M\lambda ^2`$ is the recoil energy.
Before entering the presentation of the simulation results, we list below the system parameters relevant to the experiment on <sup>87</sup>Rb. These are $`a=110a_0`$ with $`a_0`$ the Bohr radius, $`N=10^4`$, $`\lambda =850nm`$, $`r_{lb}=80\mu m`$, $`U_l^0=1.4E_R`$ and $`n_w=31`$. The parameters in this reference list provide our reference run.
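A sketch of how the initial state of Eq. (3) can be assembled from this reference list follows; note that the radial frequency $`\omega `$ of the original magnetic trap is not quoted above, so the value used here is a placeholder assumption.

```python
import numpy as np

hbar = 1.054571817e-34            # J s
M    = 86.909 * 1.66053907e-27    # kg, mass of 87Rb
a    = 110 * 5.29177e-11          # m, scattering length
N    = 1.0e4
lam  = 850e-9                     # m
r_lb = 80e-6                      # m
n_w  = 31
E_R  = (2*np.pi*hbar)**2 / (2*M*lam**2)   # recoil energy
U0   = 1.4 * E_R
d    = lam / 2

omega = 2*np.pi * 100.0           # rad/s: assumed magnetic-trap frequency
S_l   = np.sqrt(hbar / (2*M*omega))
zeta  = (32*np.pi * N * a / S_l) ** 0.2

omega_r = np.sqrt(2*U0 / (M*r_lb**2))     # transverse optical frequency
omega_z = 2*np.sqrt(U0*E_R) / hbar        # per-well axial frequency
omega_t = 4*np.pi**0.6 / zeta**2 * omega  # overall axial envelope

r = np.linspace(0.0, 5e-6, 64)
zz = np.linspace(-n_w*d, n_w*d, 1024)
Rg, Zg = np.meshgrid(r, zz, indexing="ij")

wells = sum(np.exp(-M*omega_z*(Zg - l*d)**2 / (2*hbar))
            for l in range(-(n_w//2), n_w//2 + 1))
psi0 = np.exp(-M*(omega_r*Rg**2 + omega_t*Zg**2) / (2*hbar)) * wells

# Normalize with the cylindrical volume element 2*pi*r dr dz
norm = np.trapz(np.trapz(np.abs(psi0)**2 * 2*np.pi*Rg, r, axis=0), zz)
psi0 /= np.sqrt(norm)
```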
Fig. 1 shows four pictures of drop emission for different values of the coupling strength, by plotting the contour density profiles taken with $`U_l^0=1.4E_R`$ after 5.3 $`ms`$. The first panel displays the behaviour of the non-interacting gas, namely the case $`a=0`$ and $`n_w=31`$. The other panels report the behaviour of the interacting gas with $`a=110a_0`$; from left to right, the cases $`N=10^4`$ with $`n_w=31`$, $`N=10^5`$ with $`n_w=49`$ and $`N=2\times 10^5`$ with $`n_w=57`$. All the other parameters are as listed above.
A number of the results obtained in the earlier 1D simulation are recovered in the present 3D runs. Each drop in figure 1 extends over a number of wells equal to that occupied by the parent condensate. In all cases the drops are equally spaced by seventy wells from centre to centre. This spacing corresponds to 1.1 $`ms`$ of simulation time, in agreement with experiment and with the value of 1.09 $`ms`$ for the period $`T_B=2h/Mg\lambda `$ of Bloch oscillations of the condensate in the periodic optical potential. The time lag between successive drops is independent of the amplitude of the periodic potential, of the strength of the interactions and of the dimensionality of the simulation sample, as expected if Bloch oscillations provide the correct interpretation of the observations. Finally, both the axial width and the fine structure of each drop reproduce those of the parent condensate, as expected for coherent emission from all lattice wells.
Again in analogy with the case of the 1D simulation, the transport behaviour of the 3D system as a function of its governing parameters can be summarized in a single diagram. We introduce a scaling parameter $`g_s`$, having the dimensions of the acceleration of gravity, by setting
$$Mg_sd=U_l^0-U_i$$
(4)
where $`U_i\equiv 4\pi \mathrm{}^2aN/MR^3`$ with $`R=\zeta S_l`$ is a measure of the mean-field interaction strength. In figure 2 we plot the fractional number $`N_{drop}/N`$ of atoms in the first drop as a function of the ratio $`g/g_s`$, each symbol representing a run with different input parameters. Symbols of different shape represent runs at different values of $`U_l^0/E_R`$, as shown in the legend. For each symbol we report the values of $`N_{drop}/N`$ corresponding to increasing coupling strength $`U_i`$, starting from the non-interacting gas and then increasing $`N`$ in the sequence $`N=10^4`$, $`10^5`$ and $`2\times 10^5`$ (from left to right).
It is seen from figure 2 that there is a critical value for the onset of drop formation, which is $`g/g_s\simeq 0.14`$ (the onset is marked by an arrow). In effect there is no emission of drops for subcritical values of $`g/g_s`$, since the condition of resonance between the bound state in the well and the continuum is not satisfied. Well defined drops are instead emitted at supercritical values of $`g/g_s`$. In this regime $`N_{drop}/N`$ increases rather regularly with $`g/g_s`$ and shows little sensitivity to the strength of the mean-field interactions at fixed $`U_l^0`$. Ultimately, with decreasing $`U_l^0`$ the potential wells become too shallow and regular drop emission turns into a discharge of the whole condensate.
On the other hand, an increase in the mean-field interactions affects the axial width of the drops. In particular the width of the first drop, as measured by its second moment, is close to $`n_w/2`$, with an appreciable scatter from case to case, while the centre-to-centre distance of neighbouring drops is about 70 wells. As a result, overlap between drops starts at $`N\simeq 3\times 10^5`$.
In summary, our 3D simulations confirm that the transport behaviour of a Bose condensate in a vertical optical lattice is described by a diagram as shown in figure 2. This diagram could yield useful predictions even for atomic species different from <sup>87</sup>Rb.
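As a worked illustration of the scaling variable of Eq. (4), the snippet below reuses the parameter definitions of the previous sketch (and hence inherits the assumed trap frequency $`\omega `$); with these assumptions the reference run falls in the supercritical regime of figure 2.

```python
# Reusing hbar, M, a, N, d, U0, S_l and zeta from the sketch above.
R_mf = zeta * S_l                              # radius R entering U_i
U_i  = 4*np.pi * hbar**2 * a * N / (M * R_mf**3)
g_s  = (U0 - U_i) / (M * d)                    # from Eq. (4)
print("g/g_s =", 9.81 / g_s)                   # compare with the ~0.14 onset
```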
## 4 One-dimensional modelling
The time-dependent GPE for a 1D reduction of the present transport problem is
$$i\mathrm{}\frac{\partial \psi (z,t)}{\partial t}=\left(-\frac{\mathrm{}^2\partial _z^2}{2M}+u_{ext}(z)+u_I|\psi (z,t)|^2\right)\psi (z,t),$$
(5)
where $`u_{ext}(z)=U_l^0\mathrm{sin}^2(2\pi z/\lambda )-Mgz`$ and $`u_I=4\pi \mathrm{}^2\stackrel{~}{a}N/M`$, $`\stackrel{~}{a}`$ being a renormalized coupling parameter with the dimensions of an inverse length.
In the renormalization proposed by Jackson et al. (hereafter referred to as model I) one assumes that the coherence length of the condensate in the axial direction is much larger than its transverse radius. The 3D wave function is factorized as $`\mathrm{\Psi }(𝐫,t)=g(r,\sigma (z(t)))`$ where $`\sigma (z)`$ is the axial density. Using a harmonic approximation for the radial part of the optical potential, one obtains in the specific problem an effective scattering length $`\stackrel{~}{a}^I=\gamma ^Ia`$ with $`\gamma ^I=\sqrt{U_l^0/E_R}/(r_{lb}\lambda )`$. A similar renormalization of $`U_l^0`$ is negligible, since $`r_{lb}\gg \lambda `$.
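Numerically, the model I renormalization is a one-liner; the sketch below reuses the parameters defined earlier (so the quoted dimensionless value rests on the assumed $`\omega `$ through $`S_l`$).

```python
# Model I effective coupling, to be compared with Table 1:
gamma_I = np.sqrt(U0 / E_R) / (r_lb * lam)     # units of 1/length^2
print("gamma_I * S_l^2 =", gamma_I * S_l**2)   # dimensionless combination
```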
As an alternative (hereafter referred to as model II) we impose that the value of the chemical potential of the 3D system be preserved upon reduction to 1D. As is shown in a detailed calculation given in Appendix A within the Thomas-Fermi approximation, we find $`\stackrel{~}{a}^{II}=\gamma ^{II}a`$ with $`\gamma ^{II}`$ given (for $`U_l^0>\mu `$) by
$$\gamma ^{II}=\frac{1}{\pi r_{lb}^2}+\frac{U_l^0-\mu }{4E_R}\frac{1}{\lambda aN}I(\mu ),$$
(6)
an explicit expression for the positive quantity $`I(\mu )`$ being given in the Appendix. We remark for later discussion that $`\gamma ^{II}`$ in Eq. (6) depends implicitly on the product $`aN`$ which determines the interaction strength.
Table 1 reports the values of $`\gamma ^IS_l^2`$ and $`\gamma ^{II}S_l^2`$ for a number of values of $`U_l^0/E_R`$ and of $`N`$, all other parameters being as in the reference list. The values of $`\mu /E_R`$ are also shown. The important point of Table 1 is that while $`\gamma ^I`$ is independent of $`N`$, $`\gamma ^{II}`$ decreases with increasing $`N`$. This means that in model I an increase in the number of particles leads to a much more rapid increase of the mean-field interaction parameter. For instance, in the case $`U_l^0/E_R=1.4`$ an increase of $`N`$ by a factor $`30`$ implies that $`\gamma ^{II}aN`$ increases by only a factor $`6.8`$.
In the next Section we discuss the consequences of these different behaviours of the two 1D models. We compare the results of the 3D simulation reported in Section 3 with those obtained by solving the 1D GPE with $`\stackrel{~}{a}^I`$ or $`\stackrel{~}{a}^{II}`$ inserted in turn into the mean-field term $`u_I`$.
## 5 Discussion and concluding remarks
Table 2 collects the results for the fractional number $`N_{drop}/N`$ of atoms in the first drop for different values of $`U_l^0/E_R`$ and $`N`$ in the interacting gas, as calculated from (i) the 3D simulation in cylindrical geometry (third column), (ii) the 1D simulation in model I (fourth column), and (iii) the 1D simulation in model II (last column).
It is immediately seen that model II works better than model I and quantitatively reproduces the data of the 3D simulation. The interactions lift the bound state towards the continuum by an amount which may be measured by the mean interaction energy $`E_I`$ per particle. This is proportional to the product of the effective scattering length times the particle density. In the 1D simulation according to model I we have $`E_I\propto \stackrel{~}{a}/\lambda \propto a/(\lambda ^2r_{lb})`$. In the 3D simulation we have instead $`E_I\propto a/(\lambda r_{lb}^2)`$, which is significantly smaller since $`r_{lb}\gg \lambda `$.
Thus a picture emerges in which axial transport is accompanied by a transverse breathing of the condensate, due to the vanishing of radial confinement at $`z=(2n+1)\lambda /4`$ (see Eq. (2)). Model I cannot account for this behaviour, since it has been derived by freezing the transverse motion. It also follows that an increase in coupling strength is partially accommodated by a transverse spreading of the condensate, so that the value of $`N_{drop}/N`$ is rather insensitive to the mean-field interactions in the present range of system parameters.
In summary, a Bose condensate in an optical lattice, subject to a constant driving field in the linear transport regime, behaves as a coherent blob of matter which executes Bloch oscillations through band states and can undergo Zener tunnelling at the Brillouin zone edge. The simple diagram shown in Figure 2 describes the drops of coherent matter which are emitted via tunnelling into the continuum, as a function of the governing parameters of the system. The effect of the mean-field interactions is very small within the range of system parameters in which regular pulses are emitted. A quantitative reduction of this behaviour to a 1D model can be achieved by imposing a simple condition of constancy of the chemical potential. We expect that this method of dimensionality reduction will be useful in other similar problems and applications.
Acknowledgements
We gratefully thank Dr. M. M. Cerimele, Dr. F. Pistella and Professor S. Succi of the Istituto Applicazioni Calcolo “M. Picone” of the Italian National Research Council for making the simulation programs available to us and for useful discussions. Special thanks are due to Dr. F. Pistella for her help with the graphics. This work has been sponsored by the Istituto Nazionale di Fisica della Materia under the Advanced Research Project on BEC.
## Appendix A Calculation of the renormalization factor $`\gamma ^{II}`$
We set $`\mathrm{\Psi }(𝐫,t)=\mathrm{exp}(-i\mu t/\mathrm{})\mathrm{\Psi }(𝐫)`$ in Eq. (1) and $`\psi (z,t)=\mathrm{exp}(-i\mu t/\mathrm{})\psi (z)`$ in Eq. (5). We then have to solve the stationary GPE’s
$$\mu \mathrm{\Psi }(𝐫)=\left(\frac{\mathrm{}^2_𝐫^2}{2M}+U_{ext}(𝐫)+U_I|\mathrm{\Psi }(𝐫)|^2\right)\mathrm{\Psi }(𝐫)$$
(A.1)
for $`\mathrm{\Psi }(𝐫)`$ in the 3D case and
$$\mu \psi (z)=\left(\frac{\mathrm{}^2_z^2}{2M}+u_{ext}(z)+u_I|\psi (z)|^2\right)\psi (z).$$
(A.2)
for $`\psi (z)`$ in 1D. To this end we use the Thomas-Fermi approximation, which amounts to neglecting the kinetic energy terms. We then impose the normalization condition on $`\mathrm{\Psi }(𝐫)`$ and on $`\psi (z)`$, thereby obtaining the expressions for the chemical potential $`\mu `$.
After introducing dimensionless quantities as indicated in Section 3 and assuming confinement ($`U_l^0>\mu `$), this procedure yields the relations
$$\frac{32\pi ^2aN\gamma ^{II}S_l^2}{U_l^0\lambda }=\left(1-2\beta \right)\mathrm{arccos}\left(2\beta -1\right)+\sqrt{1-\left(2\beta -1\right)^2}$$
(A.3)
in the 1D case and
$$\frac{32\pi ^2aNS_l^2}{U_l^0\lambda }=\frac{32\pi ^2aN\gamma ^{II}S_l^2}{U_l^0\lambda }-2\beta \int _0^{\mathrm{arccos}(2\beta -1)}w\mathrm{tan}(w/2)𝑑w$$
(A.4)
in the 3D case, where $`\beta \equiv (U_l^0-\mu )/U_l^0`$. These equations yield Eq. (6) in the main text, where
$$I(\mu )=\int _0^{f(\mu )}w\mathrm{tan}(w/2)𝑑w>0$$
(A.5)
with
$$f(\mu )=\mathrm{arccos}\left(2(U_l^0-\mu )/U_l^0-1\right).$$
(A.6)
In our numerical calculations we first determine $`\mu `$ from Eq. (A.1) and then $`\gamma ^{II}`$ from Eq. (6).
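A sketch of this two-step procedure in dimensionless form is given below; the value fed in for the left-hand side of Eq. (A.4) is an arbitrary test number, since the physical combination depends on the trap parameters.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def F(beta):
    """Right-hand side of Eq. (A.3), with beta = (U_l^0 - mu)/U_l^0."""
    c = 2.0*beta - 1.0
    return (1.0 - 2.0*beta)*np.arccos(c) + np.sqrt(max(1.0 - c*c, 0.0))

def I_int(beta):
    """I(mu) of Eq. (A.5), with upper limit f(mu) = arccos(2*beta - 1)."""
    return quad(lambda w: w*np.tan(0.5*w), 0.0, np.arccos(2.0*beta - 1.0))[0]

def solve_appendix(lhs_3d):
    """Solve Eq. (A.4), with Eq. (A.3) substituted on its right-hand side,
    for beta; return mu/U_l^0 and the 1D combination F(beta)."""
    beta = brentq(lambda b: F(b) - 2.0*b*I_int(b) - lhs_3d, 1e-4, 1 - 1e-8)
    return 1.0 - beta, F(beta)

mu_over_U0, rhs_1d = solve_appendix(1.5)   # 1.5 is an arbitrary test value
# gamma_II then follows from Eq. (A.3):
#   32 pi^2 a N gamma_II S_l^2 / (U_l^0 lambda) = rhs_1d
print(mu_over_U0, rhs_1d)
```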
Figure captions
Fig. 1: Contour plots of the condensate density after 5.3 $`ms`$ for $`U_l^0=1.4E_R`$, as functions of the axial coordinate $`z/d`$ and of the transverse distance in $`\mu m`$. The first panel on the left refers to the non-interacting gas with a number of particles $`N=10^4`$. The other panels refer to the interacting gas, for $`a=110a_0`$ and various values of $`N`$ (from left to right, $`N=10^4`$, $`10^5`$ and $`2\times 10^5`$).
Fig. 2: Diagram for drop formation from 3D simulation in cylindrical geometry. We plot the fractional number $`N_{drop}/N`$ of particles in the first drop against $`g/g_s`$ for $`g=981`$ cm$`/`$s<sup>2</sup>. The meaning of the symbols is explained in the legend and in the text.
|
no-problem/0004/astro-ph0004245.html
|
ar5iv
|
text
|
# Discovery and Photometric Observation of the Optical Counterpart in a Possible Galactic Halo X-ray Transient, XTE J1118+480
## 1. Introduction
Soft X-ray transients (SXTs), or X-ray novae (e.g. Chen et al. 1997), are binary systems which exhibit luminous X-ray and optical outbursts (Tanaka, Shibazaki 1996). They uniquely provide the most compelling evidence for the existence of stellar-mass black holes through radial velocity studies, giving mass functions exceeding the maximum mass of a stable neutron star ($`3M_{\odot }`$), and we know eight such black hole candidates (van Paradijs, McClintock 1995; Bailyn et al. 1998; Orosz et al. 1998). Their outburst light curves often have a common feature: after a rise of a few days, the X-ray intensity reaches its maximum, which is typically followed by an exponential decay with an e-folding time of $`\sim 40\mathrm{d}`$ (Chen et al. 1997). At the maximum, the X-ray luminosity reaches $`10^{3839}\mathrm{erg}\mathrm{s}^{1}`$ and an X-ray to optical flux ratio of $`\sim 500`$ is typical in SXTs (Tanaka, Shibazaki 1996).
It is now widely believed that the compact object of SXTs in quiescence is surrounded by an accretion disk whose inner part is a geometrically thick and optically thin advection-dominated accretion flow (ADAF) and whose outer part is a geometrically thin and optically thick disk which becomes thermally unstable when it becomes too hot for hydrogen to remain neutral (Narayan, Yi 1995; Shakura, Sunyaev 1973; Osaki 1974). This model of the outburst mechanism is called the disk instability model and satisfactorily explains the outburst cycle of a few tens of years and the outburst duration by the viscous diffusion time scale of the accretion disk in the cool and hot states, respectively (Mineshige 1996).
Almost all observed SXTs are distributed on the galactic disk and few have been discovered in the galactic halo (Chen et al. 1997; Bradt et al. 2000). White, van Paradijs (1996) have studied the galactic distribution of BHC low-mass X-ray binaries and found an rms value for the distance from the galactic disk of $`0.4`$ kpc. We can easily understand this biased distribution because massive stars, which are responsible for producing neutron stars and black holes, have been generated in the disk rather than the halo.
A new SXT, XTE J1118+480, whose galactic latitude is high ($`62^{\circ }`$), was discovered at an intensity of 39 mCrab with the All-Sky Monitor (ASM; Levine et al. 1996) on the Rossi X-Ray Timing Explorer (RXTE) on 2000 March 29 (Remillard et al. 2000). The X-ray spectrum just after the discovery is similar to that of Cyg X-1 in its hard state (Remillard et al. 2000). The hard X-ray spectrum is well characterized by a power law with a photon index of 2.1 and the source is visible up to 120 keV, which implies that it is a possible black hole X-ray transient (Wilson, McCollough 2000).
## 2. Discovery and Observations
Following communication of the X-ray discovery of XTE J1118+480 to the Variable Star NETwork (VSNET, http://www.kusastro.kyoto-u.ac.jp/vsnet) on 2000 March 30, we started CCD time-series observations, which immediately revealed the presence of the optical counterpart at 12.92 mag, within the error circle of $`6^{\prime }`$ proposed by RXTE/ASM (Uemura et al. 2000a). The position of the optical counterpart is R.A. = $`11^\mathrm{h}\mathrm{\hspace{0.17em}18}^\mathrm{m}\mathrm{\hspace{0.17em}10}^\mathrm{s}.85`$, Decl. = $`+48^{\circ }\mathrm{\hspace{0.17em}02}^{\prime }\mathrm{\hspace{0.17em}12}^{\prime \prime }.9`$ (equinox 2000.0; accuracy $`0.2^{\prime \prime }`$). A star of 18.8 mag in the USNO A1.0 and A2.0 catalogues (USNO A1.0 1350.08089912 = USNO A2.0 1350.07924726) lies within $`2^{\prime \prime }`$ of this position. No other object can be seen on the second-generation DSS images available from the Space Telescope Science Institute within $`10^{\prime \prime }`$, indicating that this star is the quiescent optical counterpart.
Figure 1 gives the finding chart of XTE J1118+480. The left and right panels show the DSS image and the outburst image taken on March 30.83 (UT), respectively. After the detection of the March outburst at optical and X-ray wavelengths, reanalysis of previous data revealed the presence of another X-ray outburst and simultaneous optical activity in 2000 January (Remillard et al. 2000; Uemura et al. 2000a; Wren, McKay 2000).
The CCD photometric observations were made using the following cameras and telescopes: unfiltered ST-7 and 25-cm Schmidt-Cassegrain (Kyoto), unfiltered SXL8 and 33-cm Newtonian (Conder Brow Observatory), CB245 with clear filter and 44-cm Newtonian (California), unfiltered ST-7 and 28-cm Schmidt-Cassegrain (Ceccano), and unfiltered ST-9e and 28-cm Schmidt-Cassegrain (Nayoro), whose integration times are $`30\mathrm{s}`$, $`30\mathrm{s}`$, $`16\mathrm{s}`$, $`40\mathrm{s}`$, and $`25\mathrm{s}`$, respectively. After correcting for the standard de-biasing and flat fielding, we processed object frames with the PSF and aperture photometry packages. We performed differential photometry relative to the comparison star, C, shown in figure 1, whose constancy was confirmed using the check star GSC3451.938. Photographic observations were performed using an unfiltered T-Max400 film with 10-cm camera at Nagano and Aichi.
## 3. Results
We observed no activity brighter than 15 mag from HJD 2449660 (1994 November) to 2451547 (1999 December) with 17 negative observations. Figure 2 shows the light curve of XTE J1118+480 after it became active in 2000 January. The abscissa denotes time in heliocentric julian date. The filled circles, filled diamonds, open circles, and open triangle denote the photographic magnitude, nightly averaged unfiltered CCD magnitude which almost equals to $`R_c`$-magnitude, the $`V`$-magnitude, and upper-limit reported by ROTSE collaboration, respectively (Uemura et al. 2000a; Wren, McKay 2000). The error of each photometric observation is typically less than 0.1 mag. We also depict the X-ray flux observed with RXTE/ASM shown in gray crosses with errorbars (http://xte.mit.edu/ASM\_lc.html). The discovery date corresponds to the first of the filled diamonds on HJD 2451634.
In figure 2, we can see the two distinct X-ray outbursts reported by Remillard et al. (2000) and two optical outbursts which correlate with the X-ray intensity. At the beginning of the second outburst, figure 2 clearly shows an optical precursor which began to rise about 10 days prior to the X-ray outburst. We also see a fast optical flare around HJD 2451572 which anti-correlates with the X-ray intensity. The flare occurred just before the first X-ray outburst ended and lasted only a few days. A possible optical flare is also seen around HJD 2451627.
Figure 3 shows the light curve of XTE J1118+480 from HJD 2451634 to 2451647. In figure 3, we can see a sinusoidal periodic modulation upon the plateau at 12.89 mag; no general tendency of rise or decay was observed in the limited data available. The data on HJD 2451643 were special in that they showed a rise over $`0.2\mathrm{d}`$, which was not seen in the other data. Although we can see a possible hump feature during this rise, we have excluded this segment from the following period analysis. We performed a period analysis using the PDM method (Stellingwerf 1978).
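As a minimal illustration of the PDM statistic (a sketch of ours, not the actual analysis code; the variable names, bin count, and period grid are hypothetical), one trial period is scored as follows:

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Stellingwerf (1978) theta: pooled within-bin variance of the
    phase-folded data divided by the total variance; a deep minimum
    of theta marks the best candidate period."""
    phase = (t / period) % 1.0
    total_var = np.var(mag, ddof=1)
    pooled, dof = 0.0, 0
    for j in range(n_bins):
        sel = (phase >= j / n_bins) & (phase < (j + 1) / n_bins)
        if sel.sum() > 1:
            pooled += (sel.sum() - 1) * np.var(mag[sel], ddof=1)
            dof += sel.sum() - 1
    return (pooled / dof) / total_var

# Hypothetical usage with t (HJD) and mag from the time-series photometry:
# periods = np.linspace(0.16, 0.18, 2001)
# best = periods[np.argmin([pdm_theta(t, mag, p) for p in periods])]
```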
The resultant period–theta diagram is shown in figure 4. As can be seen in figure 4, the best candidate period is $`0.17078\pm 0.00004\mathrm{d}`$. This period was also confirmed by the subsequent observations of the VSNET collaboration team (Uemura et al. 2000b). We see a remarkable dip around HJD 2451643.0, while all the other data fit well with the single-peaked averaged light curve. The amplitude of 0.055 mag did not change during our time-series CCD observations. We noted that the scatter seemed large for a star of this brightness even after subtracting the periodic modulation, indicating the presence of fluctuations on a short (seconds) time scale.
## 4. Discussion
XTE J1118+480 shows some features different from those of typical X-ray transients, namely, the multiple peaks within three months observed both in X-rays and in the optical, an extremely low optical to X-ray flux ratio, and the high galactic latitude of $`62^{\circ }`$. For the first outburst shown in figure 2, the light curve profile around the peak is relatively sharp compared with the second one, and no clear time lag is seen between the X-ray and optical outbursts. We can interpret this correlation through the occurrence of an “inside-out” type outburst in the accretion disk, which generates a rapid rise in X-rays with simultaneous optical brightening; this picture, however, cannot account for the fast optical flare at the end of the outburst (Smak 1984). On the other hand, we suggest an “outside-in” type outburst for the second outburst to explain the optical precursor. The observation of these two different types of outburst indicates that they may be caused by different instability mechanisms, such as the normal and super outbursts of SU UMa-type dwarf novae (Warner 1995).
From the X-ray flux of 39 mCrab reported by Remillard et al. (2000) and the simultaneous optical magnitude of $``$ 13 mag, the optical to X-ray flux ratio is calculated to be about $`5`$, which is two orders of magnitude smaller than the typical value of about $`500`$. The X-ray intensity is particularly low compared with the optical, which suggests that it may be a high-inclination system (Garcia et al. 2000). The HST observation revealed a somewhat flat continuum slope, which suggests that the intrinsic X-ray flux is relatively low and hence weakens the case for a high-inclination system (Haswell et al. 2000). A high-inclination system would generally show eclipses by the secondary, whereas we have not detected any.
It is almost certain that the periodicity seen in figure 3 reflects the orbital motion of the underlying binary, through reprocessing of the X-rays by the secondary, or possibly superhump modulations as seen in SU UMa-type dwarf novae (O’Donoghue, Charles 1996). We therefore suggest an orbital period of $`0.17078`$ d, which is relatively short compared with other low mass X-ray binaries (Ritter, Kolb 1998).
Since the quiescent magnitude is 18.8 mag, if we assume a K dwarf secondary star and no other optical source in quiescence, the distance is at least 1.5 kpc, that is, 1.3 kpc above the galactic plane. The color of the quiescent counterpart in the USNO catalog ($`b-r=+0.6`$) is much bluer than that of a K or M dwarf, implying a substantial contribution from the accretion disk. The above distance estimate should thus be regarded as a lower limit, implying that XTE J1118+480 is a galactic halo X-ray transient. We can generally expect such a galactic halo object to be older than one in the disk, and this is consistent with the short orbital period of 0.17078 d, which implies a more evolved binary system. It should be noted, however, that with such a short orbital period the secondary may be a smaller star than in other typical SXTs. This case may be comparable to the system GRO J0422+32, whose orbital period and secondary type are 0.212 d and M2V, respectively (Orosz, Bailyn 1995; Filippenko et al. 1995). An M2V secondary leads to a lower limit on the distance of 0.55 kpc. Optical or infrared spectroscopic observations are definitely needed to unambiguously determine the orbital period and the type of the secondary.
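The distance limits quoted above follow from the distance modulus; the sketch below reproduces them, with the caveat that the absolute magnitudes adopted for the K and M2V dwarfs are illustrative assumptions of ours, not values taken from this work.

```python
import math

def distance_kpc(m_apparent, M_absolute):
    # distance modulus: m - M = 5 log10(d / 10 pc)
    return 10 ** ((m_apparent - M_absolute + 5.0) / 5.0) / 1000.0

m_quiescent = 18.8
print(distance_kpc(m_quiescent, 7.9))   # K dwarf, M_V ~ 7.9 (assumed): ~1.5 kpc
print(distance_kpc(m_quiescent, 10.0))  # M2V dwarf, M_V ~ 10 (assumed): ~0.6 kpc
```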
Regarding high galactic latitude SXTs, 3U 0042+32 is similar to XTE J1118+480 (Ricketts, Cooke 1977). Its outburst in 1977 was detected by Uhuru, which classified it as a high galactic latitude ($`30^{\circ }`$) X-ray transient. During the active phase in 1977 February, 3U 0042+32 experienced at least four distinct outbursts whose interval is estimated as 11.6 d (Watson, Ricketts 1978). Compared with XTE J1118+480, the outburst duration seems to be much shorter (decay e-folding time $`=2.8`$ d). Another noteworthy, possibly related, high galactic latitude X-ray binary is MS 1603+2600 = UW CrB (Morris et al. 1990). This object is a persistent source whose orbital period is 111 min. Although Hakala et al. (1998) suggested that this X-ray binary may correspond to a quiescent state of an SXT, the discovery of the short-period transient system XTE J1118+480 may suggest that these two systems comprise a new variety of halo X-ray binaries.
## 5. Summary
We discovered the optical counterpart of the high galactic latitude soft X-ray transient XTE J1118+480, whose position is R.A. = $`11^\mathrm{h}\mathrm{\hspace{0.17em}18}^\mathrm{m}\mathrm{\hspace{0.17em}10}^\mathrm{s}.85`$, Decl. = $`+48^{\circ }\mathrm{\hspace{0.17em}02}^{\prime }\mathrm{\hspace{0.17em}12}^{\prime \prime }.9`$. An 18.8 mag star lies within the $`2^{\prime \prime }`$ error circle in the USNO catalogue, and we consider it to be the optical counterpart in quiescence. We revealed two distinct optical outbursts before the discovery. From our time-series CCD photometry, we detected a 0.17078 d periodicity, which we consider to be the orbital period. If we assume a K dwarf secondary star, XTE J1118+480 is possibly the first firmly identified black hole candidate X-ray transient in the galactic halo.
## References
Bailyn C. D., Jain R. K., Coppi P., Orosz J. A. 1998, ApJ 499, 367
Bradt H., Levine A., Remillard R., Donald A. S. astro-ph/0003438
Chen W., Shrader C. R., Livio M. 1997, ApJ 491, 312
Filippenko A. V., Matheson T., Ho L. C. 1995, ApJ 455, 614
Garcia M., Brown W., Pahre M., McClintock J. 2000, IAU Circ. 7392
Hakala P. J., Chaytor D. H., Vihu O., Piirola V., Morris S. L., Muhli P. 1998, ApJ 333, 540
Haswell C. A., Hynes R. I., King A. R. 2000, IAU Circ. 7407
Levine A. M., Bradt H., Cui W., Jernigan J. G., Morgan E. H., Remillard R., Shirey R. E., Smith D. A. 1996, ApJ 469, L33
Mineshige S. 1996, PASJ 48, 93
Morris S. L., Liebert J., Stocke J. T., Gioia I. M., Schild R. E., Wolter A. 1990, ApJ 365, 686
Narayan R., Yi I. 1995, ApJ 452, 710
O’Donoghue D., Charles P. A. 1996, MNRAS 282, 191
Orosz J. A., Bailyn C. D. 1995, ApJ 446, L59
Orosz J. A., Jain R. K., Bailyn C. D., McClintock J. E., Remillard R. A. 1998, ApJ 499, 375
Osaki Y. 1974, PASJ 26, 429
Remillard R., Morgan E., Smith D., Smith E. 2000, IAU Circ. 7389
Ricketts M. J., Cooke B. A. 1977, IAU Circ. 3039
Ritter H., Kolb U. 1998, A&AS 129, 83
Shakura N. I., Sunyaev R. A. 1973, A&A 24, 337
Smak J. 1984, Acta Astron. 34, 161
Stellingwerf R. F. 1978, ApJ 224, 953
Tanaka Y., Shibazaki N. 1996, ARA&A 34, 607
Uemura M., Kato T., Yamaoka H. 2000a, IAU Circ. 7390
Uemura M., Kato T., Matsumoto K., Honkawa M., Cook L. M., Martin B., Masi G., Oksanen A., Moilanen M., Novak R., Sano Y., Ueda Y. 2000b, IAU Circ. 7418
van Paradijs J., McClintock J. E. 1995, in X-ray Binaries, ed. W. H. G. Lewin, J. van Paradijs, E. P. J. van den Heuvel (Cambridge: Cambridge Univ. Press), 58
White N., van Paradijs J. 1996, ApJ 473, L25
Warner B. 1995, Cataclysmic Variable Stars, p126–215 (Cambridge Univ. Press, Cambridge)
Watson M. G., Ricketts M. J. 1978, MNRAS 183, 35P
Wilson C. A., McCollough M. L. 2000, IAU Circ. 7390
Wren J., McKay T. 2000, IAU Circ. 7394
## 1 Introduction
Astrophysical systems in the Universe are characterized by gravitation. The structures formed through this long-range force are quite different from those formed through short-range forces. If the system does not strongly depend on the initial conditions of the Universe, we can apply statistical mechanics to describe such self-gravitating systems (SGS). However, we cannot directly apply standard Boltzmann statistical mechanics to SGS, since the long-range nature of gravity strongly violates the extensivity of the system, which is the premise of statistical mechanics. Indeed, the total energy increases much faster than the particle number $`N`$, and the partition function $`Z`$ often becomes complex, reflecting the fact that there is no absolutely stable state in SGS.
In order to seek a workable statistical mechanics for SGS, we try an approach based on a new definition of entropy whose extensivity is violated from the beginning: Tsallis statistical mechanics.
We formulate the count-in-cell method for large scale galaxy distributions in this new statistical mechanics. First we derive the expression for the composite entropy and the generalized Euler relation in this new statistical mechanics. These are then applied to the data of the CfA II South redshift survey. The parameter $`q`$ becomes negative, which represents the instability of gravity.
## 2 Tsallis Statistical Mechanics
The ordinary Boltzmann statistical mechanics is characterized by the entropy $`S=-\sum _ip_i\mathrm{ln}p_i`$. The distribution function $`p_{N,E}=\mathrm{exp}\left[-\left(E-\mu N\right)/T\right]/\mathrm{\Xi }`$ maximizes this entropy under the constraints of probability conservation, energy conservation, and particle number conservation. Tsallis statistical mechanics, on the other hand, was originally aimed at describing multi-fractal and chaotic structures. It is characterized by the entropy of the form $`S_q\left[p\right]=\left(\sum _ip_i^q-1\right)/(1-q)`$, where $`q`$ is a real parameter. The Tsallis distribution function is obtained so that it maximizes this entropy with the same constraints. The solution has a power law tail:
$$p_{N,E}=\frac{1}{\mathrm{\Xi }_q}\left\{1-\frac{1-q}{\tilde{T}}\left(E-\overline{E}-\mu \left(N-\overline{N}\right)\right)\right\}^{\frac{1}{1-q}},$$
(1)
where $`\tilde{T}\equiv CT`$, and $`C=\sum _ip_i^q`$. The partition function is defined as
$$\mathrm{\Xi }_q=\sum _{N,E}\left\{1-\frac{1-q}{\tilde{T}}\left(E-\overline{E}-\mu \left(N-\overline{N}\right)\right)\right\}^{\frac{1}{1-q}}.$$
(2)
Note that $`p_{N,E}`$ reduces to the ordinary Boltzmann form for $`q\to 1`$. For a consistent formulation, it is important to notice that observable expectation values are calculated with the escort distribution: $`P_i=p_i^q/C`$, $`\langle Q\rangle _q=\sum _iP_iQ_i`$. The averaged quantities $`\overline{E},\overline{N}`$ in Eqs.(1) and (2) should be understood in this sense.
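As a minimal numerical illustration of these definitions (a sketch of ours, with a hypothetical discrete distribution), the Tsallis entropy and the escort distribution can be evaluated as follows; note the recovery of the Boltzmann entropy as $`q\to 1`$.

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q[p] = (sum_i p_i^q - 1)/(1 - q); tends to -sum_i p_i ln p_i as q -> 1."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-10:
        return float(-np.sum(p * np.log(p)))
    return float((np.sum(p ** q) - 1.0) / (1.0 - q))

def escort(p, q):
    """Escort distribution P_i = p_i^q / C with C = sum_j p_j^q."""
    w = np.asarray(p, dtype=float) ** q
    return w / w.sum()

p = np.array([0.5, 0.3, 0.2])        # hypothetical probabilities
print(tsallis_entropy(p, 0.999999))  # close to the Boltzmann value
print(escort(p, -5.7))               # escort weights for a negative q
```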
## 3 Galaxy Distribution in Count-in-Cell Method
There are many approaches to describing the large scale structure of the Universe. In this paper, we concentrate on the distribution of galaxies. One well-known method to describe the distribution of galaxies is the two-point correlation function. However, the equation of motion for the two-point correlation function involves the three-point correlation function, and so on; this hierarchy of equations is known as the BBGKY chain. Since we have to cut the BBGKY chain somewhere, some kind of approximation is required.
On the other hand, the count-in-cell method is often used to describe the distribution of galaxies. In this method, we use an analytic formula for the probability $`f(N)`$ of finding $`N`$ galaxies in a randomly positioned volume $`V`$. As we do not have to use any approximation in the count-in-cell method, we can apply it even to strongly clustered systems. Saslaw and Hamilton, and S. Inagaki introduced the virial parameter $`b`$ = (gravitational correlation energy)/(kinetic energy of random motion), which measures the deviation from dynamical equilibrium. They found that in thermal equilibrium their theoretical form of $`f(N)`$ is consistent with N-body simulations and observational catalogues. Strictly speaking, we should not apply thermal-equilibrium theory to the expanding Universe. However, this consistency encourages us to study the thermal-equilibrium statistical description further. In the evolution of the Universe, dynamical equilibrium $`b=1`$ must be realized before the Universe reaches thermal equilibrium. Therefore, in this paper we consider the dynamical-equilibrium case $`b=1`$. We believe that there exists an adequate fitting parameter other than the virial parameter $`b`$; that is the reason why we consider non-extensive statistics, which provides the parameter $`q`$.
Supposing that the galaxies are distributed according to Tsallis statistics, we consider a system described by equilibrium thermodynamics, that is, the grand canonical ensemble, characterized by a given temperature and chemical potential.
The probability to find no galaxy in the volume $`V`$ is
$$f\left(0\right)\equiv \sum _EP_{0,E}=P_{0,0}=\frac{\left(p_{0,0}\right)^q}{C},$$
(3)
where
$$p_{0,0}=\mathrm{\Xi }_q^{-1}\left[1+\frac{1-q}{\tilde{T}}\left(\overline{E}-\mu \overline{N}\right)\right]^{\frac{1}{1-q}}.$$
(4)
The partition function can be decomposed as
$$\mathrm{\Xi }_q=\sum _{N,E}\left(1-\frac{1-q}{\tilde{T}}\left(E-\overline{E}-\mu \left(N-\overline{N}\right)\right)\right)^{1+q/\left(1-q\right)}=\left(\mathrm{\Xi }_q\right)^qC$$
(5)
and therefore $`\mathrm{\Xi }_q=C^{1/\left(1-q\right)}=\left(1+\left(1-q\right)S\right)^{1/\left(1-q\right)}`$. Thus we obtain
$$f\left(0\right)=\left(1+\left(1-q\right)S\right)^{-\frac{1}{1-q}}\left(1+\frac{1-q}{\tilde{T}}\left(\overline{E}-\mu \overline{N}\right)\right)^{\frac{q}{1-q}}.$$
(6)
In order to reduce the above expression, we need to calculate the composite entropy and the Euler relation in Tsallis statistics. When we compose two systems A and B, the distribution function is given by $`p_{ij}(A,B)=p_i(A)p_j(B)`$ and the composite entropy is $`S_{A+B}=S_A+S_B+\left(1-q\right)S_AS_B`$. Sequentially using this composition rule, $`S_{N+1}=s+S_N+\left(1-q\right)sS_N`$ (where $`s`$ is the entropy of a single system), we obtain the total entropy of N identical systems as
$$S_N=\frac{\left(1+\left(1-q\right)s\right)^N-1}{1-q}.$$
(7)
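Equation (7) can be checked directly against the sequential composition rule; the following few lines (our own consistency check, using the best-fit parameter values reported in section 4) verify the closed form numerically.

```python
def S_closed(N, s, q):
    # Eq.(7): total entropy of N identical subsystems
    return ((1.0 + (1.0 - q) * s) ** N - 1.0) / (1.0 - q)

s, q = 0.164142, -5.66847
S = 0.0
for N in range(1, 8):
    S = s + S + (1.0 - q) * s * S  # composition: S_{N+1} = s + S_N + (1-q) s S_N
    assert abs(S - S_closed(N, s, q)) < 1e-9 * max(1.0, abs(S))
```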
The generalized Euler relation is given by the following argument. The variables $`E,V,N`$ are the natural arguments of the entropy: $`S_N=S(E,V,N)`$. Differentiating $`S_{\alpha N}=S(\alpha E,\alpha V,\alpha N)`$ with respect to $`\alpha `$ and setting $`\alpha \to 1`$, we obtain
$$\frac{\partial S_{\alpha N}}{\partial \alpha }\Big|_{\alpha \to 1}=\frac{\partial S(\alpha E,\alpha V,\alpha N)}{\partial \alpha }\Big|_{\alpha \to 1}=\frac{E}{T}+\frac{pV}{T}-\frac{\mu N}{T},$$
(8)
where we used $`\partial S/\partial V=p/T`$, $`\partial S/\partial E=1/T`$, $`\partial S/\partial N=-\mu /T`$, which are guaranteed by the Legendre structure of Tsallis thermodynamics. In the above, we can put any other value of $`\alpha `$ as well. In general, the $`\alpha `$-dependence is inherited from $`S`$ to $`T`$ on the right hand side of Eq.(8). Non-extensivity of $`S`$ necessarily accompanies non-intensivity of the Legendre-conjugate variable $`T`$. Actually, the fact that the variables $`E,V,N`$ are extensive and $`p,\mu `$ are intensive in Eq.(8) fixes the $`\alpha `$-dependence of the temperature:
$$T\left(\alpha \right)=T_1\left(1+\left(1-q\right)s\right)^{1-\alpha N}$$
(9)
where $`T_1`$ is the temperature of the one particle system. Note that $`\tilde{T}\equiv CT`$, defined just after Eq.(1), is $`\alpha `$-independent! Probably this quantity $`\tilde{T}`$ should be related to the physical temperature as defined from the velocity dispersion of the system. However, at present we do not have a clear idea of the meaning of the quantity $`T`$<sup>1</sup><sup>1</sup>1 This situation is similar to the argument on the physical probability distributions at the end of section two. Among $`P_i`$ and $`p_i`$, related to each other by the factor $`C`$, we have chosen the escort distribution $`P_i`$ as the physical distribution. . This point will be further discussed in the last section.
Then we obtain the Euler relation in non-extensive statistical mechanics
$$\frac{\left(1+\left(1-q\right)S\right)\mathrm{ln}\left[1+\left(1-q\right)S\right]}{1-q}=\frac{E+pV-\mu N}{T}.$$
(10)
This temperature on the right hand side is $`T\left(\alpha =1\right)`$ in Eq.(9), though the explicit form of the temperature does not appear in the final expression for $`f\left(N\right)`$. Using this Euler relation, we can now eliminate $`\overline{E}-\mu \overline{N}`$ in $`f\left(0\right)`$, and Eq.(6) reduces to
$$f\left(0\right)=\left(1+\left(1-q\right)S\right)^{-\frac{1}{1-q}}\left(1-\frac{pV}{\tilde{T}}\left(1-q\right)+\mathrm{ln}\left[1+\left(1-q\right)S\right]\right)^{\frac{q}{1-q}}.$$
(11)
Since we consider a dynamical-equilibrium system, the system is fully virialized ($`b=1`$) and the pressure $`p`$ must therefore be $`0`$; we thus obtain:
$$f\left(0\right)=\left(1+\left(1-q\right)S\right)^{-\frac{1}{1-q}}\left(1+\mathrm{ln}\left[1+\left(1-q\right)S\right]\right)^{\frac{q}{1-q}}.$$
(12)
Moreover using Eq.(7), we finally obtain
$$f(0)=\left\{1+\left(1-q\right)s\right\}^{-\frac{N}{1-q}}\left[1+N\mathrm{ln}\left\{1+\left(1-q\right)s\right\}\right]^{\frac{q}{1-q}}.$$
(13)
Our parameters are $`s`$ (the unit of non-extensive entropy per galaxy) and $`q`$ (the Tsallis statistical parameter).
The probability of finding $`N`$ galaxies in the volume V is generated from $`f\left(0\right)`$:<sup>2</sup><sup>2</sup>2 Note that this expression is slightly different from that of Saslaw and Hamilton .
$$f\left(N\right)\equiv \frac{(-n)^N}{N!}\frac{d^N}{dn^N}f(0)$$
(14)
where $`n`$ is the galaxy number density. This is because the void probability $`f\left(0\right)`$ contains all the information of the whole correlation functions:
$$f\left(0\right)=1-P\left\{X_1\right\}+P\left\{X_1X_2\right\}-P\left\{X_1X_2X_3\right\}\pm \mathrm{},$$
(15)
where $`P\left\{X_1X_2\right\}`$ is the probability that there are galaxies in $`dV_1`$ at $`X_1`$ and in $`dV_2`$ at $`X_2`$. The above expression guarantees that the probability is properly normalized: $`\sum _{N=0}^{\mathrm{\infty }}f\left(N\right)=1`$.
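The generating relation (14) lends itself to symbolic evaluation. The sketch below is our own; it assumes that the $`N`$ appearing in Eq.(13) is the mean count $`nV`$ in the cell, and the numerical values are merely illustrative (the best-fit parameters are those reported in the next section).

```python
import sympy as sp

n, V, s = sp.symbols('n V s', positive=True)
q = sp.Symbol('q', real=True)
Nbar = n * V  # assumption: the N of Eq.(13) is the mean count nV in the cell
base = 1 + (1 - q) * s
f0 = base ** (-Nbar / (1 - q)) * (1 + Nbar * sp.log(base)) ** (q / (1 - q))

def f_N(N_gal):
    """Eq.(14): f(N) = (-n)^N / N! * d^N f(0) / dn^N."""
    return (-n) ** N_gal / sp.factorial(N_gal) * sp.diff(f0, n, N_gal)

vals = {q: -5.66847, s: 0.164142, V: 1.0, n: 3.0}  # hypothetical cell, nV = 3
probs = [float(f_N(k).subs(vals)) for k in range(8)]
print(probs, sum(probs))  # partial sum of the normalization sum_N f(N) = 1
```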
## 4 Comparison with observations
We use the data of the CfA II South redshift survey, which includes 4392 galaxies . We have to reduce the data to a uniform sample. First we restrict the data to the galaxies whose absolute luminosity is brighter than magnitude -19.1 and whose distance lies in the range $`4,000-8,000[\mathrm{km}/\mathrm{sec}]`$. The distance is measured by the cosmic redshift. We further exclude the edge of the observation region. We applied the K-correction to compensate for the reddening. Finally the data are reduced to 870 galaxies. The number density is $`n=4.44\times 10^{-9}[\left(\mathrm{km}/\mathrm{sec}\right)^{-3}]`$.
We first fit the void probability $`f\left(0\right)`$ by varying the parameters q and s. The best fit is realized by $`q=-5.66847`$ and $`s=0.164142`$ (Fig.1). We fix these values and do not change them hereafter in this paper. With these parameters, the general probability $`f\left(N\right)`$ is given. In Figs.1-2, we compare our calculations with the CfA data.
We have checked that the normalization of the probabilities, $`\sum _{N=0}^{\mathrm{\infty }}f\left(N\right)=1`$, is realized.
The negative value of $`q`$ we obtained may not be so surprising. In reference , the value of $`q`$ appears to decrease monotonically from 1 to $`-\mathrm{\infty }`$ for one-dimensional logistic maps. Their model exhibits an unstable onset-to-chaos attractor. For negative $`q`$, the entropy functional loses its convexity and the distribution becomes unstable. Thus the intrinsic instability of SGS is faithfully represented in this formalism and in this fit.
## 5 Conclusions and Discussions
We have constructed a non-extensive statistical mechanics based on the non-extensive entropy. In particular, by calculating the entropy of composite systems and deriving the generalized Euler relation in thermodynamics, we could evaluate the void probability $`f\left(0\right)`$ and the probability $`f\left(N\right)`$ of finding $`N`$ galaxies. This result was applied to the CfA II South galaxy observations and we obtained a negative parameter $`q`$. This is thought to be another representation of the intrinsic instability of SGS. It is also interesting to notice that multi-fractal scaling is observed in this CfA II South data within the scale region from $`500[\mathrm{km}/\mathrm{sec}]`$ to $`3000[\mathrm{km}/\mathrm{sec}]`$. We would like to clarify the possible connection between non-extensive distributions and the multi-fractal nature in the context of gravity.
In deriving $`f\left(N\right)`$, we encountered a “scale ($`\alpha `$) dependent temperature $`T`$”. If we put the values we obtained, $`q=-5.66847`$ and $`s=0.164142`$, into Eq.(9), we can explicitly plot the scale dependence of the temperature. It turns out to decrease with increasing scale and abruptly drops to zero at about $`r\simeq 600[\mathrm{km}/\mathrm{sec}]`$ or, assuming the cosmic expansion rate $`H=72[\mathrm{km}/\mathrm{sec}/\mathrm{Mpc}]`$, at about $`8.3[\mathrm{Mpc}]`$, which is almost the scale at which the galaxy correlation function becomes unity. On the other hand, it is apparent that the galaxies do have peculiar velocities of order $`1000[\mathrm{km}/\mathrm{sec}]`$ at this scale. Therefore, at least, the quantity $`T`$ cannot be interpreted as the ordinary temperature defined from the velocity dispersions. One of our next tasks will be to elucidate the meaning of $`T`$.
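The statement above can be reproduced in a few lines; this sketch of ours assumes that $`\alpha N`$ is the mean galaxy count $`n\frac{4\pi }{3}r^3`$ within a sphere of radius $`r`$.

```python
import numpy as np

q, s, n = -5.66847, 0.164142, 4.44e-9  # best-fit parameters and number density
r = np.linspace(100.0, 1000.0, 10)     # scale in km/sec
N = n * (4.0 / 3.0) * np.pi * r ** 3   # assumed mean count within radius r
T_ratio = (1.0 + (1.0 - q) * s) ** (1.0 - N)  # Eq.(9): T(alpha) / T_1
for ri, Ti in zip(r, T_ratio):
    print(f"r = {ri:6.0f} km/s   T/T_1 = {Ti:.3e}")
```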
In relation to the astronomical velocity distributions, the authors claim that the velocity distribution of clusters of galaxies can be well fitted by the Tsallis distribution with the parameter $`q=0.23_{-0.05}^{+0.07}`$, which is apparently different from our negative value for galaxy distributions. Further study of the velocity distributions on various scales (galaxies, clusters, super-clusters) would reveal the origin and evolution of the large-scale structure of the Universe. We would like to report these results in the near future.
## Acknowledgment
I. J. would like to thank Prof. T. Yokobori and Prof. A. T. Yokobori, Jr for their hospitality. All of us would like to thank S. Abe, M. Hotta and K. Sasaki for useful discussions and valuable comments.
# Roughness of fracture surfaces
## Abstract
We study the roughness of fracture surfaces of three dimensional samples through numerical simulations of a model for quasi-static cracks known as the Born Model. We find for the roughness exponent a value $`\zeta \simeq 0.5`$, as measured at “small length scales” in microfracturing experiments. Our simulations confirm that at small length scales the fracture can be considered as quasi-static. The isotropy of the roughness exponent on the crack surface is also shown. Finally, considering the crack front, we compute the roughness exponents of the longitudinal and transverse fluctuations of the crack line ($`\zeta _{\parallel }\simeq \zeta _{\perp }\simeq 0.5`$). They are in agreement with experimental data, and support the possible application of the model of line depinning in the case of long-range interactions.
A large number of studies has been devoted to the problem of material strength and to the study of fractures in disordered media . In this paper we focus our attention on the self-affine properties of the fracture surface . By self-affinity one means that the surface coordinate $`z`$ in the direction perpendicular to the crack or fracture (x-y) plane has the following scaling properties:
$$\begin{array}{ccc}\hfill z(\lambda x,y)& \hfill =& \hfill \lambda ^{\zeta _x}z(x,y)\\ \hfill z(x,\lambda y)& \hfill =& \hfill \lambda ^{\zeta _y}z(x,y)\end{array}$$
(1)
where the two $`\zeta `$ exponents are known as roughness exponents, and the $`\widehat{y}`$ direction is chosen along the direction of propagation of the crack. Even though these two directions seem to play different roles in the morphological description of the fracture surface, experimental measurements showed that they have similar scaling properties for very different materials . A unique roughness exponent is then generally considered (therefore $`\zeta _x=\zeta _y\equiv \zeta `$), and it has been claimed that it has a universal value of $`\zeta \simeq 0.8`$ . This behaviour has been confirmed for a large variety of experimental situations . However, more extended studies have also shown that in some experimental conditions, for fracture surfaces of metallic materials, one finds a different value, $`\zeta \simeq 0.5`$ . These two values of the roughness exponent are connected to the length scale at which the crack is examined. In particular, at small length scales one observes a roughness exponent $`\zeta \simeq 0.5`$, whereas at large length scales the larger value $`\zeta \simeq 0.8`$ is found. These results have recently been connected with the velocity of the crack front , and interpreted in terms of a quasi-static regime and a dynamic one: the value $`\zeta \simeq 0.5`$ should be expected in the quasi-static regime. This connection comes from the suggestion that the crack surface can be thought of as the trace left by the crack front, so that the problem of the surface roughness can be mapped onto the problem of the evolution of a line moving in a random medium. Ertaş and Kardar introduce a couple of non-linear Langevin equations to describe this evolution, and their model can be usefully considered to evaluate the statistical properties of the evolution of a crack surface line. In this case, these equations describe longitudinal and transverse fluctuations with respect to the fracture plane containing the line velocity and the pulling force . The values obtained are close to the roughness measured at large length scales. At small length scales, however, the crack front behaves as a moving line undergoing a depinning transition, and in this case the results from the Ertaş and Kardar model should be revised to account for long-range interactions, giving $`\zeta _{\parallel }`$ and $`\zeta _{\perp }`$ equal to $`0.5`$ for longitudinal and transverse fluctuations, respectively. Another interesting model has been proposed by Roux et al. , where the fracture surface is expected to be a minimal energy surface. In two dimensions this problem maps directly onto the directed polymer problem: the polymer with the minimum energy is the collection of the “weakest” monomers in the medium that form a directed path. This corresponds to the surface crack of a fuse model, and for brittle fractures it gives a roughness exponent $`\zeta =2/3`$ in 2 dimensions. Similar arguments hold for $`d=3`$ and are discussed for a scalar model of fracture in . In this case one has $`\zeta =0.42\pm 0.01`$, differently from experimental results .
In this paper we present a numerical study of fracture propagation for a model of quasi-static fractures, to show that in such a regime the roughness exponent is in agreement with the one found at small length scales. Comparisons with theoretical models are discussed at the end of this work. To model the fracture propagation we use a mesoscopic model known as the Born Model (BM), describing the sample through a discrete collection of sites connected by springs . The statistical properties of the two dimensional BM have been previously considered in . In particular, for $`d=2`$ a value of the roughness of the fracture surface $`\zeta \simeq 0.64(3)`$, in agreement with other measurements , has been found. In the BM the elastic energy of the sample under load is given by the energy of deformation of the springs connecting the sites of the sample. The elastic potential energy consists of two different terms, describing, respectively, a central force and a non-central force contribution:
$$V=\frac{1}{2}\sum _{i,j}V_{i,j}=\frac{1}{2}\sum _{i,j}\left\{(\alpha -\beta )[(𝐮_i-𝐮_j)\cdot \widehat{𝐫}_{i,j}]^2+\beta [𝐮_i-𝐮_j]^2\right\}$$
(2)
where $`𝐮_i`$ is the displacement vector of site $`i`$, $`\widehat{𝐫}_{i,j}`$ is the unit vector between $`i`$ and $`j`$, $`\alpha `$ and $`\beta `$ are force constants tuning the central and non-central force contributions, and the sum runs over nearest neighbour sites connected by a non-broken spring. By imposing the condition $`\nabla _𝐮V=0`$ one obtains a set of equations for the fields $`𝐮_i`$; solving them yields the equilibrium configuration of the lattice. It has to be noticed that, since one is interested in the equilibrium configuration, only the ratio of the two parameters $`\beta /\alpha `$ matters (hereafter we set $`\alpha =1`$ and vary $`\beta `$). Simulations show that varying $`\beta `$ between $`1`$ and $`0`$ corresponds to varying the Poisson coefficient between $`0`$ and $`1/2`$, as expected from the theory of elasticity. At this point, with a probability proportional to $`\sqrt{V_{i,j}}`$, which represents a generalized elongation, one selects a spring on the fracture boundary to remove, with the result of obtaining a connected fracture ; by doing so, the boundary conditions of the system change and one has to compute a new equilibrium position. Breaking a new bond only after the complete relaxation of the lattice results in a slow velocity for the fracturing process, which mimics a quasi-static process.
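As an illustration of this breaking rule, the following sketch (our own, with hypothetical bond energies) selects a boundary spring with probability proportional to its generalized elongation $`\sqrt{V_{i,j}}`$:

```python
import numpy as np

def pick_bond(bond_energies, rng):
    """Index of the spring to break: probability proportional to sqrt(V_ij)."""
    w = np.sqrt(np.asarray(bond_energies, dtype=float))
    return rng.choice(len(w), p=w / w.sum())

rng = np.random.default_rng(0)
V_boundary = [0.2, 1.5, 0.7, 3.1]  # hypothetical elastic energies on the boundary
print(pick_bond(V_boundary, rng))  # spring removed at this step
```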
The elastic springs can be arranged in different kinds of networks; however, in two dimensions one has to consider a triangular lattice, since a square lattice does not correctly describe the response of the system to the applied stress. In three dimensions as well, one has to consider a network with the correct response, which results in a more complex arrangement than the simple cube. In our case we chose a sample described by a Face Centered Cubic (FCC) structure and applied a mode I loading on two opposite faces by fixing their displacement field. We then applied periodic boundary conditions on a second couple of opposite faces, and on one of the two remaining faces we put a starting notch (see Fig.1).
Starting from this setup we performed different simulations, stopping the algorithm when the sample is divided into two parts. At this point we considered the surface of the fracture and analyzed its statistical properties. We performed simulations for different values of the $`\beta `$ parameter, for 20 different samples of $`32\times 32\times 16`$ cells (each cell contains 4 sites, for a total number of $`2^{16}`$ sites) for each value of $`\beta `$, from which we obtained all the relevant results. Further simulations on lattices of $`40\times 40\times 20`$ and $`50\times 50\times 25`$ FCC cells were performed to verify the generality of the results. Simulations lasted from a minimum of 18 hours of CPU time for each $`32\times 32\times 16`$ cells lattice, up to more than 180 hours for a $`50\times 50\times 25`$ cells lattice, on a Digital alpha-station (500 MHz).
An example of a final fracture surface is shown in Fig.2, whereas a typical broken sample is shown in Fig.3: different colors show damaged (with at least one broken bond) and undamaged sites, and the structure of the FCC lattice.
To compute the roughness exponent, we considered different cuts of the fracture surface, some along the $`\widehat{x}`$ direction and some along the $`\widehat{y}`$ direction, which is the direction of propagation of the crack. We did not assume a priori that the fracture is isotropic, but instead determined the roughness exponents $`\zeta _x`$ and $`\zeta _y`$ separately. To measure the roughness of the surface, we followed the procedure described in , by introducing the two spatial correlation functions
$$\begin{array}{ccc}\hfill C_x(\rho )& \hfill =& \hfill \langle [z(x_i+\rho ,y_i)-z(x_i,y_i)]^2\rangle \\ \hfill C_y(\rho )& \hfill =& \hfill \langle [z(x_i,y_i+\rho )-z(x_i,y_i)]^2\rangle \end{array}$$
(3)
where the average $`\langle \mathrm{}\rangle `$ is taken over the different $`x`$ and $`y`$ of the sites on the surface, and over different realizations of the surfaces. Then we considered the power spectra $`\stackrel{~}{C}_{x,y}(k)`$ of the profile, that is to say, we studied the Fourier transforms of the previously introduced correlation functions. In this way the boundary effects are confined to the large $`k`$ modes . For self-affine profiles these power spectra are expected to scale as
$$\stackrel{~}{C}_{x,y}(k)\sim k^{-(1+2\zeta _{x,y})}$$
(4)
Fits to the Fourier transforms are shown in Fig.4. The results show that the value of $`\zeta _x`$ is equal, within the error bars, to the value of $`\zeta _y`$. The two directions on the fracture surface show the same statistical properties, as expected for a large variety of materials; the surface can then be described by a unique roughness exponent $`\zeta `$. Moreover, the value of $`\zeta `$ does not seem to depend on the value of $`\beta `$, and is in complete agreement with the value expected for fractures in a quasi-static regime . All the results are summarized in Tab.I.
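The extraction of $`\zeta `$ from Eq.(4) can be sketched as follows (our own illustration; the self-test profile is a simple random walk, whose roughness exponent is known to be $`\zeta =0.5`$):

```python
import numpy as np

def roughness_exponent(z):
    """Fit the power spectrum of a 1D profile to k^-(1+2*zeta), cf. Eq.(4)."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    spec = np.abs(np.fft.rfft(z)) ** 2
    k = np.fft.rfftfreq(z.size)
    sel = (k > 0) & (spec > 0)
    slope, _ = np.polyfit(np.log(k[sel]), np.log(spec[sel]), 1)
    return -(slope + 1.0) / 2.0

rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(4096))  # random walk: zeta = 0.5
print(roughness_exponent(profile))              # should come out close to 0.5
```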
To test this measurement (as suggested by Ref.) we also studied the scaling behaviour of the surface width in direct space, along cuts perpendicular to the direction of the crack propagation: the same behaviour for $`\zeta `$ is recovered (see Fig.5). In this case the results are quite striking, since a small deviation from the value of 0.5 leads to visibly displaced curves. This observation allows us to be reasonably confident in these results even though they extend over only about one decade.
An interesting analysis is the computation of the roughness of the fluctuations of the crack front, to be compared with experiments and with the theory of line depinning by Ertaş and Kardar. It has to be noticed that the definition of the crack front contains some ambiguity, because during the fracturing process more than one fracturing plane can develop, but just one among these will belong to the final fracture surface. Experimentally, this exponent can be found by arresting the fracture during propagation and injecting indian ink into the cracks under moderate vacuum. Samples are then dried and the process of fracturing is continued until complete separation is reached . From this point of view, one has to look at the border of the fracture belonging to what will be the final surface. This also corresponds to considering the fracture surface as the trace left by the crack front, which therefore belongs to the final surface.
Following this idea, we measured the roughness of the fluctuations of the crack front along the direction parallel to the line velocity ($`\zeta _{\parallel }`$) and perpendicular to it ($`\zeta _{\perp }`$) during the crack evolution. In analogy with the experimental procedure, we mark the sites reached by the fracturing process up to a certain timestep. The process then continues up to complete separation. We then look at the marked part of the final surface to recover the crack front. The final results come from the Fourier transform of the autocorrelation function, averaged over subsequent steps in the steady state. In both cases we again obtained values close to $`0.5`$ (see Tab.II), in good agreement with experiments.
As regards the roughness exponent, we can conclude from our simulations that its value is the one characteristic of the quasi-static evolution of cracks. This result is confirmed by quantitative analysis and is to be compared with results from the different approaches. Our conclusions seem to differ from those of the directed polymer approach, as presented for the scalar problem in . The value of $`\zeta =0.42\pm 0.01`$ found with that model could be related to the different physics of fuse networks. The use of that model, in fact, rests on the assumption that in two dimensions a fracture can be described by means of a scalar model. This no longer holds in three dimensions, where a description consistent with the theory of elasticity can be obtained only through a vectorial model. This could also explain the difference between our results, which agree with experimental values, and those for the fuse network in .
The analysis of the roughness of the longitudinal and transverse fluctuations of the crack front gives results in agreement with experiments, and seems to confirm the mapping of the crack front onto a line undergoing a pinning-depinning transition. Our result is also to be compared with the one from . In that paper it is stated that an explanation of the roughness in terms of a quasi-static fracturing process seems unlikely. Our simulations suggest a different conclusion: elastic waves are cut off through relaxation of the lattice after each bond breaking, yet fluctuations are still contained in the stochastic process for the fracture.
In conclusion, we have presented numerical simulations of the fracture of a three dimensional sample. Our result supports the idea that the fracture roughness exponent is related to the length scale at which the sample is analysed, and hence to the different dynamics of the crack. In particular, for short length scales, where the fracture can be identified as quasi-static, the roughness exponent is $`\zeta \simeq 0.5`$. We also show that for elastic fractures one can expect isotropic behaviour in the development of the surface: our results show no dependence on the direction on the crack surface. As regards the crack front, our results agree with experimental measurements, and support the mapping onto line depinning in the case of long-range interactions.
We wish to thank A.Petri and R.C.Ball for useful discussions. We also acknowledge support of EU contract N. ERBFMRXCT980183
# A new approach to texture analysis of quark mass matrices
## 1 Mass matrices in the triangular basis
Texture analysis of quark mass matrices has been a subject of interest for over two decades, and the advent of new data has consigned many popular textures to oblivion. A general and systematic approach based on the observed mass and mixing spectrum becomes both welcome and necessary. What we summarize here is one such approach, using triangular matrices.
It has recently been observed that, as a result of the chiral nature of the electroweak force and the observed hierarchical structure of quark masses and mixing angles, the ten physical parameters of the quark mass matrices can be most simply encoded in hierarchical matrices of upper triangular form. For example, in the basis where $`M^U`$ is diagonal, $`M^D`$ is accurately given by,
$$M^U=\left(\begin{array}{ccc}m_u& & \\ & m_c& \\ & & m_t\end{array}\right)M^D=\left(\begin{array}{ccc}m_d/V_{ud}^{*}& m_sV_{us}& m_bV_{ub}\\ 0& m_sV_{cs}& m_bV_{cb}\\ 0& 0& m_bV_{tb}\end{array}\right)(1+𝒪(\lambda ^4)),$$
(1)
where $`\lambda =|V_{us}|=0.22`$. Note that in this form, the unphysical right-handed rotations have been eliminated to a good approximation.
Eq.(1) is actually one of ten triangular pairs in the minimal-parameter basis (m.p.b.), where each matrix element consists of a simple product of a quark mass and certain CKM elements, and the CP-violating weak-phase is a linear combination of the phases of certain matrix elements. These ten pairs are simply related by weak basis transformation. Given any quark mass matrices, one can read off their physical content after converting them into one of the ten triangular pairs. Alternatively, one can start from upper triangular matrices and obtain mass matrices in other form, e.g. hermitian, and analyze texture zeros in the new basis.
## 2 Texture zeros of hermitian mass matrices
Based on the weak-scale quark mass relations $`m_u:m_c:m_t\simeq \lambda ^8:\lambda ^4:1`$ and $`m_d:m_s:m_b\simeq \lambda ^4:\lambda ^2:1`$, and on the hierarchical CKM mixings $`V_{us}=\lambda `$, $`V_{cb}\sim \lambda ^2`$, $`V_{ub}\sim \lambda ^4`$, and $`V_{td}\sim \lambda ^3`$, we can write the properly normalized Yukawa matrices in the general hierarchical triangular form,
$$T^U=\left(\begin{array}{ccc}a_U\lambda ^8& b_U\lambda ^6& c_U\lambda ^4\\ 0& d_U\lambda ^4& e_U\lambda ^2\\ 0& 0& 1\end{array}\right),T^D=\left(\begin{array}{ccc}a_D\lambda ^4& b_D\lambda ^3& c_D\lambda ^3\\ 0& d_D\lambda ^2& e_D\lambda ^2\\ 0& 0& 1\end{array}\right).$$
(2)
Note that the diagonal elements are essentially the quark masses. The left-handed (LH) rotations are directly related to the off-diagonal elements, whose coefficients are of order one or much smaller to avoid fine-tuning in getting the CKM mixing. This direct correspondence to masses and LH rotations is a unique feature of upper triangular matrices, and this feature makes the triangular matrices especially useful in analyzing quark mass matrices. As an example, we now turn to the analysis of texture zeros of hermitian quark mass matrices.
Starting from the triangular matrices of Eq. (2), we can easily write down their corresponding hermitian form:
$`Y^U`$ $`=`$ $`\left(\begin{array}{ccc}(a_U+c_Uc_U^{*}+b_Ub_U^{*}/d_U)\lambda ^8& (b_U+c_Ue_U^{*})\lambda ^6& c_U\lambda ^4\\ (b_U^{*}+c_U^{*}e_U)\lambda ^6& (d_U+e_Ue_U^{*})\lambda ^4& e_U\lambda ^2\\ c_U^{*}\lambda ^4& e_U^{*}\lambda ^2& 1\end{array}\right)(1+𝒪(\lambda ^4))`$ (6)
$`Y^D`$ $`=`$ $`\left(\begin{array}{ccc}(a_D+b_Db_D^{*}/d_D)\lambda ^4& b_D\lambda ^3& c_D\lambda ^3\\ b_D^{*}\lambda ^3& d_D\lambda ^2& e_D\lambda ^2\\ c_D^{*}\lambda ^3& e_D^{*}\lambda ^2& 1\end{array}\right)(1+𝒪(\lambda ^2)).`$ (10)
It is seen that diagonal zeros in the hermitian matrices imply definite relations between the diagonal and off-diagonal triangular parameters (i.e. masses and LH rotation angles). As a result of the different mass hierarchies in the up and down quark sector, there is a clear asymmetry between Eqs. (6) and (10) regarding their texture zeros. For example, the $`(2,2)`$ element can be zero for $`Y^U`$ but not for $`Y^D`$; both the $`(1,1)`$ and $`(1,2)`$ elements can vanish with $`Y^U`$ but not with $`Y^D`$.
Hermitian mass matrix textures can then be analyzed as follows. We first list all possible texture pairs ($`M^U`$, $`M^D`$) directly from Eqs. (6) and (10). Each pair is then transformed into one of the ten triangular forms in the m.p.b. so that we can read off possible relations between quark masses and mixing. Finally, the viability of each texture pair is tested by confronting its predictions with data. In this way, we find no viable hermitian pairs with six zeros, including, e.g., the Fritzsch texture. We are thus led to consider hermitian mass matrices with five texture zeros.
## 3 Hermitian mass matrices with 5 texture zeros
Following the procedure outlined above, we identify five candidates for viable hermitian pairs with five texture zeros, as was first obtained by Ramond, Roberts, and Ross (RRR) . We now examine these five pairs analytically using the m.p.b. triangular matrices, and confront their predictions with data. The running quark mass values are taken from , and we use a recent update on $`V_{ub}/V_{cb}`$ and $`V_{td}/V_{ts}`$: $`\left|V_{ub}/V_{cb}\right|_{\mathrm{exp}}=0.093\pm 0.014`$, and $`0.15<\left|V_{td}/V_{ts}\right|_{\mathrm{exp}}<0.24`$. The results of this analysis can be summarized as follows.
RRR patterns 1, 2, and 4 lead to the same predictions: $`\left|V_{td}/V_{ts}\right|\simeq \sqrt{m_d/m_s}=0.224\pm 0.022`$ and $`\left|V_{ub}/V_{cb}\right|\simeq \sqrt{m_u/m_c}=0.059\pm 0.006`$. Note that the numbers are independent of the scale at which the texture is valid, and that the latter prediction is on the low side and is disfavored by data.
The 5th RRR pattern also gives rise to two relations, each with two solutions depending on the sign of $`d_U`$. For the first relation,
$$\left|V_{us}\right|\simeq \left|\sqrt{\frac{m_d}{m_s}}-e^{i\delta }\sqrt{\frac{m_u}{m_c}}\sqrt{\frac{V_{cb}^2}{(m_c/m_t)\pm V_{cb}^2}}\right|,$$
(11)
where the relative phase is free as CP violation depends on additional phases in the mass matrices. Thus Eq. (11) is valid with a properly chosen phase. For the second relation,
$$\left|\frac{V_{ub}}{V_{cb}}\right|\simeq \sqrt{\frac{m_u}{m_c}\left(\frac{m_c}{m_tV_{cb}^2}\pm 1\right)}=\left\{\begin{array}{cc}0.107\pm 0.012& (d_U>0,\mathrm{5a})\\ 0.068\pm 0.011& (d_U<0,\mathrm{5b})\end{array}\right.$$
(12)
where the numbers are given at $`M_Z`$ with $`V_{cb}=0.040`$. If the texture is valid at a much higher scale like $`M_{\mathrm{GUT}}`$, the weak scale predictions for the quark mixing will decrease by only a few percent. Comparing to data, we see that these predictions are marginally acceptable.
Finally, assuming the 3rd RRR texture to be valid at the weak scale, two predictions result: $`\left|V_{ub}\right|\simeq \sqrt{m_u/m_t}=0.0036\pm 0.0004`$, and $`\left|V_{us}/V_{cs}\right|\simeq \sqrt{m_d/m_s}=0.224\pm 0.022`$. Assuming the texture to be valid at the GUT scale decreases the prediction for $`V_{ub}`$ by a few percent. These predictions are in excellent agreement with data.
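For concreteness, the predictions quoted in this section can be reproduced with the short sketch below; the running mass values are illustrative assumptions of ours (only the mass ratios matter here), not the precise inputs of the original fits.

```python
import math

m_u, m_c, m_t = 0.0022, 0.60, 170.0  # assumed running masses (GeV) near M_Z
m_d, m_s = 0.0047, 0.093
V_cb = 0.040

print(math.sqrt(m_d / m_s))  # patterns 1,2,4 (and 3): |V_td/V_ts| ~ 0.22
print(math.sqrt(m_u / m_c))  # patterns 1,2,4: |V_ub/V_cb| ~ 0.06
for sign, label in ((+1, "5a"), (-1, "5b")):  # pattern 5, Eq.(12)
    print(label, math.sqrt(m_u / m_c * (m_c / (m_t * V_cb ** 2) + sign)))
print(math.sqrt(m_u / m_t))  # pattern 3: |V_ub| ~ 0.0036
```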
In summary, the 3rd RRR five-texture-zero pattern is currently most favorable.
## 4 Remarks on parallel textures
There are some recent studies of 4-zero hermitian hierarchical mass matrices with parallel textures between $`M^U`$ and $`M^D`$ (i.e. $`M^U`$ and $`M^D`$ have texture zeros in the same locations). In particular, the generalized Fritzsch texture with zeros at (1,1), (1,3) and (3,1) has been a focus of interest. Using triangular matrices, one can easily see that the generalized Fritzsch texture does not change the predictions for $`V_{ub}/V_{cb}`$ and $`V_{td}/V_{ts}`$ from those of its 5-zero hermitian counterparts (i.e. the 1st, 2nd, and 4th RRR patterns), and it is thus disfavored by the data. In fact, no parallel hermitian hierarchical texture with 4 zeros is found to be favorable given current data. In other words, the viable hermitian pairs display an asymmetry between the up and down textures. This asymmetry could serve as a useful guideline in building realistic models for quark-lepton masses.
## Acknowledgments
I would like to thank T.K. Kuo, S. Mansour, and S.-H. Chiu for very enjoyable collaboration.
# Spontaneous pattern formation in driven nonlinear lattices
## Abstract
We demonstrate the spontaneous formation of spatial patterns in a damped, ac-driven cubic Klein-Gordon lattice. These patterns are composed of arrays of intrinsic localized modes characteristic of nonlinear lattices. We analyze the modulational instability leading to this spontaneous pattern formation. Our calculation of the modulational instability is applicable in one- and two-dimensional lattices; however, in the analysis of the emerging patterns we concentrate particularly on the two-dimensional case.
Complex spatial patterns are often observed in systems driven away from equilibrium . Typically, the patterns emerge when relatively simple systems are driven into unstable states that deform dramatically in response to small perturbations. As the patterns arise from an instability, the pattern-forming behavior is likely to be extremely sensitive to small changes in system parameters. The description of deterministic pattern forming systems is typically accomplished in the form of partial differential equations, such as the Navier-Stokes equations for fluids and reaction-diffusion equations for chemical systems. These phenomena have been studied almost exclusively in continuum systems such as hydrodynamical, optical, chemical systems and liquid crystals, although more recently pattern formation of a similar type has also been reported in periodically vibrated granular media .
Complementing the development of the theoretical understanding of pattern formation in continuum systems, the localized mode forming ability of discrete lattices has also received significant recent attention. There is now a fairly complete understanding of the existence and stability of localized structures (often referred to as intrinsic localized modes (ILM’s) or discrete breathers) in a variety of nonlinear lattices, undriven as well as driven. It is fair to claim that these collective patterns are well understood while the process of their creation and interaction remains relatively unexplored.
In the present communication we study the pattern forming abilities of a damped and periodically driven nonlinear lattice. Specifically, we demonstrate how a driven nonlinear (cubic) Klein-Gordon lattice forms a variety of patterns via modulational instabilities. We analyse the modulational instabilities and show how they relate to the length-scale of the patterns that are formed. Normally, the spatial extent (characteristic length-scale) of ILM’s is directly related to the frequency of the ILM’s . However, in our case the (generally different) length-scale emerging from the instability may lead to a length-scale competition, the results of which we will explore.
Model and stability of homogeneous solution. First, we study the Klein-Gordon lattice
$`\ddot{x}_n+\gamma \dot{x}_n+\omega _0^2x_n=\mathrm{\Delta }_nx_n+\lambda x_n^3+ϵ\mathrm{cos}\omega t,`$ (1)
where $`\gamma `$ is the damping parameter, $`\omega _0`$ the natural frequency of the oscillators, $`\lambda =\pm 1`$ the nonlinearity parameter, and finally $`ϵ`$ is the amplitude of the ac-drive at frequency $`\omega `$. In one dimension the nearest neighbor coupling is $`\mathrm{\Delta }_nx_n=x_{n+1}-2x_n+x_{n-1}`$. The amplitude $`A_0`$ (and phase $`\delta _0`$) of the spatially homogeneous solution $`x_n=A_0\mathrm{cos}(\omega t+\delta _0)`$ of Eq.(1) can, within the rotating wave approximation, be shown to satisfy
$$A_0^2\left(\gamma ^2\omega ^2+(\omega ^2-\omega _0^2+\frac{3}{4}\lambda A_0^2)^2\right)=ϵ^2.$$
(2)
The amplitude $`A_0`$ of the response to a driving amplitude $`ϵ`$ is shown for the soft ($`\lambda =+1`$) potential in Fig. 1.
For $`\omega <\omega _0`$ (dashed line) three solutions are possible, while for $`\omega >\omega _0`$ (solid line) a single solution is possible. A similar picture is valid for the hard ($`\lambda =-1`$) potential, except that the multiple solutions then appear in the case $`\omega >\omega _0`$.
Analyzing the stability of the homogeneous solution with respect to spatial perturbations, we introduce $`x_n=y+z_n`$ into Eq.(1). Assuming periodic boundary conditions, we may expand $`z_n`$ in its Fourier components $`z_n=\sum _k\mathrm{exp}(ikn)\xi _k(t)`$, where the mode amplitude $`\xi _k(t)`$ is then governed by
$`\ddot{\xi }_k+\gamma \dot{\xi }_k+\omega _k^2\xi _k`$ $`=`$ $`{\displaystyle \frac{3}{2}}\lambda A_0^2\left[1+\mathrm{cos}(2\omega t+2\delta _0)\right]\xi _k`$ (3)
with $`\omega _k^2=\omega _0^2+4\mathrm{sin}^2(k/2)`$ denoting the linear dispersion relation of the system.
Finally, the transformation $`\xi _k(t)=\zeta _k(\omega t+\delta _0)\mathrm{exp}\left(-\frac{\gamma }{2\omega }(\omega t+\delta _0)\right)\equiv \zeta _k(\tau )\mathrm{exp}\left(-\frac{\gamma }{2\omega }\tau \right)`$ reduces Eq.(3) to a standard Mathieu equation
$`{\displaystyle \frac{d^2\zeta _k}{d\tau ^2}}+a\zeta _k-2q\mathrm{cos}(2\tau )\zeta _k=0,`$ (4)
where
$$a=\frac{1}{4\omega ^2}\left(4\omega _k^2-6\lambda A_0^2-\gamma ^2\right)\text{and}q=\frac{3\lambda A_0^2}{4\omega ^2}.$$
(5)
As is well-known, the Mathieu equation exhibits parametric resonances when $`\sqrt{a}\simeq i`$, where $`i=1,2,3,\mathrm{}`$. The width of the resonance regions depends on the ratio $`q/a`$ (see, e.g. ). In the framework of Eq.(4) the extent of the primary resonance $`a\simeq 1`$ can easily be estimated to be $`(a-1)^2<q^2`$. However, in the presence of the damping $`\gamma `$ the resonance condition for Eq.(3) becomes
$$q^2>\frac{\gamma ^2}{\omega ^2}+(a-1)^2.$$
(6)
With $`a`$ and $`q`$ defined in Eqs.(5), given $`\lambda ,\gamma ,\omega ,\omega _0`$, and $`ϵ`$, this translates into an instability band of certain wavenumbers $`k`$.
Figure 2 shows this instability band as given by Eq.(6) for parameters corresponding to the solid curve in Fig. 1. The shaded region is the band of wavenumbers that are unstable according to Eq.(6) and the dashed line indicates the most unstable wavenumber, i.e. $`a=1`$. The effect of the damping clearly is to pinch off the instability region at a finite driving $`ϵ>0`$. Similar, using Eq.(6), instability regions can be determined for the solutions indicated by the dashed curve in Fig.1.
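The band of Fig. 2 follows directly from Eqs.(2), (5), and (6). The sketch below is our own; it assumes the single-valued branch of Eq.(2) and takes the smallest positive root of the cubic in $`A_0^2`$.

```python
import numpy as np

def unstable_band(eps, lam, gamma, omega, omega0, nk=2000):
    # Homogeneous amplitude: Eq.(2) is a cubic in u = A_0^2
    delta = omega ** 2 - omega0 ** 2
    cubic = np.polynomial.Polynomial(
        [-eps ** 2, gamma ** 2 * omega ** 2 + delta ** 2,
         1.5 * lam * delta, (0.75 * lam) ** 2])
    u = min(r.real for r in cubic.roots()
            if abs(r.imag) < 1e-10 and r.real > 0)

    k = np.linspace(0.0, np.pi, nk)
    wk2 = omega0 ** 2 + 4.0 * np.sin(k / 2.0) ** 2                     # dispersion
    a = (4.0 * wk2 - 6.0 * lam * u - gamma ** 2) / (4.0 * omega ** 2)  # Eq.(5)
    q = 3.0 * lam * u / (4.0 * omega ** 2)
    return k[q ** 2 > gamma ** 2 / omega ** 2 + (a - 1.0) ** 2]        # Eq.(6)
```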
We have verified the presence and location of the instability band by direct simulations of Eq.(1), starting from the initial condition $`x_n(t=0)=A_0\mathrm{cos}(\delta _0)+\eta _n`$, where $`\eta _n`$ represents a small ($`|\eta _n|\ll A_0`$) spatially random perturbation. This initial condition injects energy into all wavenumbers, and in the presence of an unstable region of wavenumbers the dynamics will enhance the energy content in this region and thereby identify the unstable region.
The above analysis is easily extended into the case of two spatial dimensions, the only required change being that the dispersion relation now is $`\omega _\stackrel{}{k}^2=\omega _0^2+4\mathrm{sin}^2(k_x/2)+4\mathrm{sin}^2(k_y/2)`$, where the wavevector is $`\stackrel{}{k}=(k_x,k_y)`$. The instability in this case appears on an annulus in the wavevector plane, with a radius given by $`a=1`$ (see, Eq.(5)) and a width determined by Eq.(6).
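In two dimensions the same condition is evaluated on a grid of wavevectors; reusing the homogeneous amplitude $`u=A_0^2`$ obtained as in the sketch above, the unstable annulus is a simple boolean mask (again our own illustration):

```python
import numpy as np

def unstable_annulus(u, lam, gamma, omega, omega0, nk=400):
    """Mask of unstable (kx, ky) for a given homogeneous amplitude u = A_0^2."""
    kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, nk),
                         np.linspace(-np.pi, np.pi, nk))
    wk2 = omega0 ** 2 + 4.0 * np.sin(kx / 2.0) ** 2 + 4.0 * np.sin(ky / 2.0) ** 2
    a = (4.0 * wk2 - 6.0 * lam * u - gamma ** 2) / (4.0 * omega ** 2)
    q = 3.0 * lam * u / (4.0 * omega ** 2)
    return q ** 2 > gamma ** 2 / omega ** 2 + (a - 1.0) ** 2
```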
Pattern formation. Numerical simulations allow us not only to verify the predicted instability band, but also to follow the full nonlinear development and saturation initiated by the instability. In particular, in regions of parameter space we obtain the spontaneous formation of patterns of distinct spatial geometry. Although we have observed this phenomenon in one as well as in two dimensions, in the present communication we focus on the two-dimensional system, where the pattern formation is very rich.
Although the dynamics show different features according to the specific region of parameter space, it is possible to trace a typical behavior as follows: Initializing the system in the spatially homogeneous state described above, with a small amount of randomness added, the instability sets in after a certain number of cycles of the ac-drive, depending on the strength of the parametric resonance, i.e. on the value of $`\sqrt{a}\simeq 1,2,3,\mathrm{}`$. Thereafter the system usually evolves through a sequence of different patterns (rhombi, stripes, etc.), composed of localized regions of high amplitude oscillations, before reaching a final configuration that may or may not result in a structure of definite symmetry.
Due to the sensitive response to very small changes of the parameters, determining stability regions for the different pattern geometries is a difficult task. However, in the case of a hard potential ($`\lambda =-1`$), Fig. 3 shows a limited area of ($`ϵ`$,$`\omega `$)-space in which distinct spatial patterns emerge and remain stable.
This diagram is constructed following the full dynamics of the system for thousands of cycles. As our study concentrates on patterns arising from instabilities of the homogeneous solutions, we do not discuss possible hysteretic behavior as is sometimes observed in similar systems (see, e.g. Ref.).
Figure 4 shows representative examples of the spontaneously emerging patterns corresponding to the points marked in Fig. 3.
The patterns consist of localized regions of high amplitude oscillations (ILM’s) residing on a background that oscillates at the frequency, $`\omega `$, of the ac-drive. In all the considered cases we have observed the natural result that patterns are only energetically sustained when the ILM frequency, $`\omega _{ILM}`$, and the driving frequency, $`\omega `$, are commensurate, i.e. $`\omega _{ILM}=n\omega `$, where $`n`$ is an integer. For the patterns displayed in Fig. 4, $`n=2`$. Further, the motion of the ILM’s is out of phase with the background: at the points in time where the background oscillation reaches its maximal excursion, the ILM’s attain their minimal amplitude, such that at these points the state is completely homogeneous.
At a fixed driving amplitude $`ϵ`$ and for increasing values of the frequency $`\omega `$, as in Fig. 4, we observe the following behavior: due to the presence of the damping $`\gamma `$, at sufficiently small $`\omega `$ the spatially homogeneous solution is stable towards all possible spatial modulations, such that the flat state is sustained. However, for values near point (a) the system becomes unstable with respect to certain spatial modulations, and spatial patterns in the form of large stripes emerge (Fig. 4(a)). Increasing the frequency, these stripes become thinner and denser and show an increasingly clear modulation (Fig. 4(b)). The characteristic length scale of these patterns is set by the size of the unstable $`\stackrel{}{k}`$-vector according to the above analysis. The nonlinear character of the system results in a transition towards a more isotropic (rhombic) geometry as the driving frequency is increased further (Fig. 4(c)). As in the case of the stripes, stronger localization of the ILM’s arranged in the rhombic pattern (Fig. 4(d)) is observed for even larger driving frequencies. The angle between the sides of the rhombus unit cell varies, but for the hard potential it is always close to $`\pi /2`$. For values of $`ϵ`$ larger than those displayed in Fig. 3, the final mesoscopic patterns are spatially disordered, much like the phenomena observed in granular media. It is important to realize that the length of the unstable wavevector determines the length scale of the final patterns, while their symmetry is determined by the nonlinear character of the system.
As a result of the periodic boundary conditions, the length scale of the emerging patterns must be commensurate with the system size. However, we have observed that the existence of a band (variability in length and angle) of unstable $`\stackrel{}{k}`$-vectors (see Fig. 2 and related discussion) allows continuous accommodation of this constraint except for the discontinuous changes in length scales occurring when it is energetically favorable for the system to add (or subtract) an additional stripe (or row of ILM’s).
For the soft potential ($`\lambda =-1`$) the variation of the amplitude $`ϵ`$ and the frequency $`\omega `$ of the ac-drive is particularly delicate, as the dynamics in this case can develop catastrophic instabilities when one or several oscillators overcome the finite barrier of the quartic potential. In all the cases we have been able to simulate, the early-stage time evolution of the system is characterized by the formation of ILM’s regularly arranged in a square pattern. This spatial configuration, which is sustained for up to hundreds of cycles, seems always to suffer from a weak instability and eventually deforms into a rhombic pattern, as shown in Fig. 5.
In contrast to the case of the hard potential, for the soft potential the angle between the sides of the rhombus unit cell is always close to $`2\pi /3`$, i.e. almost hexagonal.
We now analyze this pattern more closely. The pattern shown in Fig. 5 consists of ILM’s spontaneously organized into a regular rhombic pattern. The ILM’s are spatially localized and perform harmonic temporal oscillations (at the frequency, $`\omega `$, of the ac-drive) and are therefore objects that are well described in the literature. A particular feature of these ILM’s is that they reside on a background that oscillates at the same frequency. To expose the dynamics of the ILM’s more closely, we show in Fig. 6 a time sequence along a one-dimensional cut of the two-dimensional system (the cut is indicated by the line in Fig. 5). It should be noted that in Fig. 6 we have removed the oscillations of the background in order to expose the ILM dynamics most clearly.
Although we have focused on the ILM dynamics in the case of a soft potential, the features are analogous in the case of the hard potential and only the symmetry of the patterns is different.
In summary, we have studied the modulational instability in the damped and ac-driven cubic nonlinear Klein-Gordon lattice. The analysis applies to one as well as two spatial dimensions. We have further demonstrated how these instabilities lead to a variety of mesoscopic patterns of intrinsic localized modes. In the case of a hard potential we characterized the patterns in a stability diagram, and in the case of the soft potential we showed that the dynamics always results in a rhombic pattern with an angle close to $`2\pi /3`$. Such rhombic patterns were never observed in the case of the hard potential. This difference in the shapes of the rhombic patterns in the hard and soft cases can be understood by exploiting the analogy between changes in the steady states of dissipative systems and phase transitions in systems at thermodynamic equilibrium (see, e.g., Ref.). In terms of this analogy, the appearance of spatially periodic structures in a nonequilibrium system can be connected to a perturbation of the translational symmetry of thermodynamic states in equilibrium. A study based on this philosophy was pursued in Ref., with the result that the angle defining rhombic patterns is given by the coefficient of the cubic term in the system, which is precisely our observation. From the present analysis it appears that experimental studies of the pattern-forming abilities of discrete systems present an excellent opportunity to study ILM’s, their mutual interactions, and mesoscopic patterning. For example, optical systems and spin systems appear to be good candidates for such studies.
Research at Los Alamos National Laboratory is performed under the auspices of the US DoE.
# Three-dimensional modelling of edge-on disk galaxies

Based on observations collected at the European Southern Observatory, Chile, and Lowell Observatory, Flagstaff (AZ), USA.
## 1 Introduction
Global parameters of galactic disks can be used to constrain the formation process as well as the evolution of disk galaxies. Recently it has become possible to deduce parameters such as scalelength or central surface density from numerical or semi-analytical self-consistent galaxy formation models (Syer et al. syer (1999), Mo et al. mo (1998), Dalcanton et al. dal (1997)) and compare the results with observed values. Observationally the Hubble Deep Field gives the opportunity to study morphological features and simple structural parameters for galaxies even at high redshifts (Takamiya taka (1999), Marleau & Simard hdfmorph (1998), Fasano & Filippi fasan (1998)).
Former statistical studies providing sets of homogeneously derived parameters for nearby galaxy samples are those of de Jong (1996b ) using a two-dimensional and of Courteau (cour (1996)) with a one-dimensional decomposition technique.
However, so far only a few statistical studies based on high quality CCD data (de Grijs dgmn (1998), Barteldrees & Dettmar bd (1994), hereafter Paper I) have addressed the actual three-dimensional structure of disk galaxies with regard to the stellar distribution perpendicular to the plane (z-direction) and taking into account an outer truncation (cut-off radius) for the disk, first introduced by van der Kruit & Searle (1981a ).
While the stellar distribution perpendicular to the plane (z-profile) could result from various ”heating” processes during the galactic evolution (Toth & Ostriker toth (1992), Hernquist & Mihos hern (1995)), the cut-off radius can be used to constrain either the angular momentum of a protocloud (van der Kruit vdk87 (1987)) or possible starformation thresholds in the gaseous disk (Kennicutt kenni (1989)).
In the following, we have compiled parameters of galactic disks for a total of 31 edge-on galaxies in different optical filters, of which 17 objects have already been discussed in Paper I. One goal of our detailed comparison of several independent fitting procedures is to study the influence of the applied techniques on the resulting disk parameters. It also provides the database for statistical analyses addressing some of the above-mentioned questions (e.g., Pohlen et al. pol (2000)). For 14 objects surface photometric data are given for the first time in Appendix A.
## 2 Observations and data reductions
### 2.1 Observations
The observations were carried out at the 42-inch (1.06 m) telescope of the Lowell Observatory, located at the Anderson Mesa dark site, during several nights in December 1988 (run identification: L1) and January 1989 (L2), and at the 2.2 m telescope at ESO/La Silla during three runs in June 1985 (E1), March (E2) and June (E3) 1987. At the 42-inch telescope we used a 2:1 focal reducer with the f/8 secondary, equipped with a CCD camera based on a thinned TI 800x800 WFPC 1 CCD with 15 $`\mu `$m pixel size, resulting in a field of approximately 9′ with a scale of 0.7″ pixel<sup>-1</sup>. Images were taken with a standard Johnson R filter. Observations at the 2.2 m telescope were carried out with the ESO CCD adapter using a 512x320 RCA chip, giving an effective field size of 3′ x 2′ and a scale of 0.36″ pixel<sup>-1</sup>. For the ESO observing runs we used the g, r, and i filters of the Thuan and Gunn (tg (1976)) system. Exposures were mainly taken in the g or r band, and only seven galaxies were observed in all three filters.
### 2.2 Sample selection
The northern sample observed at Lowell was selected automatically in an electronic version of the UGC-catalog (Nilson ugc (1973)) searching for galaxies with an inclination class 7 matching the field size. After visual inspection to check the inclination and remove interacting and disturbed galaxies, the observed sample was chosen out of the remaining galaxies according to the allocated observing time.
For the southern sky there is no comparable catalog providing inclination information directly. Using the axial ratios given, e.g., in the ESO-Lauberts & Valentijn catalog (lv (1989)) would introduce a selection bias favouring late-type galaxies with lower B/D ratios (Guthrie guthri (1992), Bottinelli et al. bot (1983)). One way to avoid this is to extend the first selection to much lower axis ratios, corresponding to $`i\gtrsim 65\mathrm{°}`$, and then to check the inclinations by eye. We therefore selected the galaxies according to the field size of about 2′ from a visual inspection of film copies of the southern sky survey (see Paper I).
Table 1 gives a list of the resulting sample used during our fitting process, with a serial number (1), the principal galaxy name (2), the filter used (3), the integration time in minutes (4), and the run identification label (5), where an asterisk marks images already published in Paper I. Further parameters are taken from the RC3 catalogue (de Vaucouleurs et al. rc3 (1991)): the right ascension (6) and declination (7), the RC3 coded Hubble type (8), the Hubble parameter T (9), and the D<sub>25</sub> diameter in arcminutes (10). In the case of ESO 578-025 the parameters are taken from the ESO-Uppsala catalogue (Lauberts eso (1982)).
### 2.3 Data reduction
We applied standard reduction techniques for bias subtraction, bad pixel correction and flatfielding. Following the procedure described in Paper I the sky background was fitted for each image using the edges of the individual frames to reduce any large scale inhomogeneity in the field of view. For part of the data we tried to remove the foreground stars from the image, but even with sophisticated PSF fitting using IRAF-DAOphot routines we were not able to remove stars without any confusion. The remaining residuals were always of the order of the discussed signal. This technique could only be used to mask out the regions affected by stars. In order to increase the signal-to-noise ratio near the level of the sky background part of the data was filtered using a weighted smoothing algorithm (see Paper I). Thereby the noise was reduced by about one magnitude measured with a three sigma deviation on the background, whereas the interpretation of the faint structure is hampered by this process.
We therefore conclude that the best way is to omit any additional image modifications, besides a rotation of the image to the major axis of the disk.
### 2.4 Photometric calibration
Most of the images were taken during non-photometric nights; we therefore performed the photometric calibration in a different way. Comparing simulated aperture measurements with published integrated aperture data led to the best possible homogeneous calibration of the whole sample. Most of the southern galaxies were calibrated using the catalogue of Lauberts & Valentijn (lv (1989)), whereas NED<sup>1</sup><sup>1</sup>1NASA/IPAC Extragalactic Database (NED) was used for all northern galaxies. We used equation (1) derived in Paper I for the colour transformation between the R and B literature values and the g measurements, and no correction between the R and r, and I and i bands, respectively. Since the photometric errors of the input catalogues of Lauberts & Valentijn (lv (1989)) as well as of the RC3 catalogue (de Vaucouleurs et al. rc3 (1991)) are of the order of $`0.1`$ mag, we do not apply any further corrections. Galaxies calibrated in this way are marked with l in column (5) of Table 2. For galaxies without published magnitudes in the observed filter, we interpolated from calibrated images of the same night by comparing the count rates of the sky value; these galaxies are marked with an i. For a few nights no galaxy with published photometry was observed, and in these cases we used interpolated night-sky values from the same observing run; images calibrated in this way are marked with e in Table 2. The resulting zero points and central surface brightness values in these cases should be interpreted carefully, although the derived structural parameters, like scalelength and scaleheight, are not affected by any uncertainty in the flux calibration.
Appendix A shows the contour plots and selected radial profiles for the 14 objects (25 images) not already published in Paper I.
### 2.5 Distance estimates
In order to derive the intrinsic values of the scale parameters and to compare physical dimensions, we estimated distances for our galaxies. We took published radial velocities, corrected for Virgocentric infall, from the LEDA<sup>2</sup><sup>2</sup>2Lyon/Meudon Extragalactic Database (LEDA) database and estimated the distance following the Hubble relation with a Hubble constant of $`H_0=75`$ km s<sup>-1</sup>Mpc<sup>-1</sup>. Table 1 gives the heliocentric radial velocities (11) according to LEDA and our estimated distances (12).
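A minimal sketch of this distance estimate (the input velocity is purely illustrative):

```python
H0 = 75.0                                # Hubble constant, km/s/Mpc

def hubble_distance(v_km_s):
    """Distance in Mpc from an infall-corrected radial velocity."""
    return v_km_s / H0

print(hubble_distance(2500.0))           # e.g. 2500 km/s -> 33.3 Mpc
```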
## 3 Disk models
### 3.1 Background
Our disk model is based upon the fundamental work of van der Kruit and Searle (1981a , 1981b , 1982a , 1982b ), who sought a fitting function for the light distribution in disks of edge-on galaxies. Compared to the face-on view, edge-on galaxies are preferred for studying galactic disks because in this geometry it is possible to disentangle the radial and vertical stellar distributions. Their model includes an exponential radial light distribution, found for face-on galaxies (de Vaucouleurs, devau59 (1959); Freeman, free (1970)), a sech<sup>2</sup> behaviour in $`z`$, as expected for an isothermal population in a plane-parallel system (Camm, camm (1950); Spitzer, spitzer (1942)), and a sharp edge of the disk, first observed by van der Kruit (vdk79 (1979)) in radial profiles of edge-on galaxies. The resulting luminosity density distribution for this symmetric disk model is (van der Kruit & Searle 1981a ):
$$\widehat{L}(R,z)=\widehat{L}_0\mathrm{exp}\left(-\frac{R}{h}\right)\mathrm{sech}^2\left(\frac{z}{z_0}\right),\qquad R<R_{\mathrm{co}}$$
(1)
$`\widehat{L}`$ is the luminosity density in units of \[L<sub>⊙</sub> pc<sup>-3</sup>\], $`\widehat{L}_0`$ the central luminosity density, $`R`$ and $`z`$ are the radial and vertical coordinates, respectively, in a cylindrical system, $`h`$ is the radial scalelength, $`z_0`$ the scaleheight, and $`R_{\mathrm{co}}`$ the cut-off radius.
The empirically found exponential radial light distribution is now well accepted and it is proposed that viscous dissipation could be responsible (Firmani et al. firmani (1996), Struck-Marcel struck (1991); Saio & Yoshii saio (1990), Lin & Pringle lp (1987)), although there is so far no unique explanation for the disk being exponential. An alternative description of the form $`1/R`$ proposed by Seiden et al. (seiden (1984)) did not get much attention, although it emphasizes the empirical nature of the exponential fitting function.
To avoid the strong dust lane and to follow the light distribution down to the region $`z\approx 0`$, Wainscoat (wain (1986)) and Wainscoat et al. (wainetal (1989)) carried out NIR observations, exploiting the much lower extinction in this wavelength regime compared to the optical. They found a clear excess over the isothermal distribution and proposed that the z-distribution is better fitted by an exponential function $`f(z)=\mathrm{exp}(-z/z_0)`$. According to van der Kruit (vdk88 (1988)) such a distribution would lead to a sharp minimum of the velocity dispersion in the plane, which is not observed (Fuchs & Wielen fw (1987)). He therefore proposed $`f(z)=\mathrm{sech}(z/z_0)`$ as an intermediate solution. De Grijs (dg (1997)) extended this to a family of density laws $`g_m(z,z_0)=2^{-2/m}g_0\mathrm{sech}^{2/m}\left(mz/2z_0\right)(m>0)`$, following van der Kruit (vdk88 (1988)), where the isothermal ($`m=1`$) and the exponential ($`m=\infty `$) cases represent the two extremes.
Therefore the luminosity density distribution can be written as:
$$\widehat{L}(R,z)=\widehat{L}_0\mathrm{exp}\left(-\frac{R}{h}\right)f_n(z,z_0)\mathrm{H}(R_{\mathrm{co}}-R)$$
(2)
with $`\mathrm{H}(x_0-x)`$ being the Heaviside step function.
In order to limit the choice of parameters we restrict our models to the three main density laws for the z-distribution (exponential, sech, and sech<sup>2</sup>). Due to the choice of our normalised isothermal case $`z_0`$ is equal to $`2h_z`$, where $`h_z`$ is the usual exponential vertical scale height:
$`f_1(z)`$ $`=`$ $`4\mathrm{exp}\left(-2{\displaystyle \frac{z}{z_0}}\right)`$
$`f_2(z)`$ $`=`$ $`2\mathrm{sech}\left({\displaystyle \frac{2z}{z_0}}\right)`$
$`f_3(z)`$ $`=`$ $`\mathrm{sech}^2\left({\displaystyle \frac{z}{z_0}}\right)`$
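For reference, a short sketch of these three laws; with the normalisation chosen above, all three approach $`4\mathrm{exp}(-2z/z_0)`$ for $`z\gg z_0`$:

```python
import numpy as np

def f1(z, z0):                    # exponential
    return 4.0 * np.exp(-2.0 * np.abs(z) / z0)

def f2(z, z0):                    # sech (intermediate)
    return 2.0 / np.cosh(2.0 * z / z0)

def f3(z, z0):                    # sech^2 (isothermal)
    return 1.0 / np.cosh(z / z0)**2

z = np.linspace(0.0, 5.0, 6)
for f in (f1, f2, f3):            # all converge to 4*exp(-2z/z0) at large z
    print(f.__name__, np.round(f(z, 1.0), 4))
```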
In contrast to Paper I and Barteldrees & Dettmar (bdold (1989)), we define the cut-off radius at the position where the radial profiles become nearly vertical, corresponding to the mathematical description. There, to avoid confusion caused by the lower signal-to-noise ratio in the outer parts, the cut-off radius was fixed where the measured radial profile begins to deviate significantly from the pure exponential fit.
### 3.2 Numerical realisation
The model of the two-dimensional surface photometric intensity results from an integration of the three-dimensional luminosity density distribution (2) along the line of sight, taking into account the inclination $`i`$ of the galaxy. Describing the luminosity density of the disk in a Cartesian coordinate grid K($`x`$-$`y`$-$`z`$) with $`R=\sqrt{(x^2+y^2)}`$ leads to the following coordinate transformation into the observed, inclined system K($`x^{}`$-$`y^{}`$-$`z^{}`$) with $`x^{}`$ pointing towards the observer, where the rotation angle between the two systems corresponds to $`90\mathrm{°}-i`$:
$`x`$ $`=`$ $`x^{}\mathrm{sin}(i)-z^{}\mathrm{cos}(i)`$
$`y`$ $`=`$ $`y^{}`$
$`z`$ $`=`$ $`x^{}\mathrm{cos}(i)+z^{}\mathrm{sin}(i)`$
Taking into account this transformation we have to integrate equation (2), obtaining an equation for the intensity of the model disk depending on the observed radial and vertical axes $`y^{}`$ and $`z^{}`$ on the CCD:
$$I(y^{\prime},z^{\prime})=\int_{-\infty}^{+\infty}\widehat{L}\left(x(x^{\prime},z^{\prime},i),y^{\prime},z(x^{\prime},z^{\prime},i)\right)dx^{\prime}$$
(3)
Together with equation (2) this gives:
$$I(y^{\prime},z^{\prime})=\widehat{L}_0\int_{-\infty}^{+\infty}\exp\left(-\frac{\sqrt{(x^{\prime}\sin i-z^{\prime}\cos i)^2+y^{\prime 2}}}{h}\right)f_n\left(x^{\prime}\cos i+z^{\prime}\sin i,z_0\right)\mathrm{H}\left(R_{\mathrm{co}}-\sqrt{(x^{\prime}\sin i-z^{\prime}\cos i)^2+y^{\prime 2}}\right)dx^{\prime}$$ (4)
The model of the observed surface intensity on the chip (the $`y^{}`$, $`z^{}`$ plane) thus has six free parameters:
$$I=I(y^{},z^{},i,n,\widehat{L}_0,R_{\mathrm{co}},h,z_0)$$
(5)
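A minimal numerical sketch of this line-of-sight integration (Eq. 4), here with the isothermal law $`f_3`$ and a simple Riemann sum in place of the Gaussian quadrature used in Sect. 3.3.2; all units are arbitrary and the parameter values are illustrative only:

```python
import numpy as np

def model_image(yp, zp, i_deg, L0, Rco, h, z0, nx=801):
    """Model intensity I(y', z') of an inclined, truncated disk (Eq. 4)."""
    i = np.deg2rad(i_deg)
    xp = np.linspace(-1.5 * Rco, 1.5 * Rco, nx)      # along the line of sight
    X, Y, Z = np.meshgrid(xp, yp, zp, indexing="ij")
    x = X * np.sin(i) - Z * np.cos(i)                # rotate into disk frame
    z = X * np.cos(i) + Z * np.sin(i)
    R = np.hypot(x, Y)
    lum = L0 * np.exp(-R / h) / np.cosh(z / z0)**2   # f_3: isothermal law
    lum[R > Rco] = 0.0                               # sharp outer truncation
    return lum.sum(axis=0) * (xp[1] - xp[0])         # integrate over x'

yp = np.linspace(0.0, 30.0, 61)
zp = np.linspace(0.0, 8.0, 33)
I = model_image(yp, zp, i_deg=90.0, L0=1.0, Rco=25.0, h=8.7, z0=2.2)
print(I.shape)                                       # (61, 33) model grid
```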
Figure 1 shows a sequence of computed models with an isothermal z-distribution ($`n=3`$) for the exact edge-on case ($`i=90\mathrm{°}`$) with characteristic values for the ratio $`R_{\mathrm{co}}/h`$: $`1.40,2.88,5.00`$ and for $`h/z_0`$: $`2.0,4.0,7.3`$, keeping the cut-off radius $`R_{\mathrm{co}}`$ and the total luminosity $`L_{tot}\propto z_0h^2\widehat{L}_0`$ constant. The latter causes a different central surface brightness $`\mu _0`$ for each model, ranging from 23.8 to 19.3 mag arcsec<sup>-2</sup>, starting with $`\mu _0=21.2`$ mag arcsec<sup>-2</sup> for the reference model with $`R_{\mathrm{co}}/h=2.88`$ and $`h/z_0=4.0`$. All contour lines falling within the interval $`\mu =25.0`$–$`19.5`$ mag arcsec<sup>-2</sup> are plotted, with a spacing of 0.5 mag arcsec<sup>-2</sup>.
### 3.3 Method 1
#### 3.3.1 Determination of fitting area
The first step is to divide the galaxy image into its four quadrants. These are then averaged according to their orientation, following van der Kruit & Searle (1981a ) and Shaw (s93 (1993)). Larger foreground stars and asymmetric perturbations in the intensity distribution are eliminated by omitting the affected regions during the averaging; smaller foreground stars are removed by median filtering. The averaged quadrant should result from at least three quadrants to give a representative image of the galaxy, and the averaging additionally increases the signal-to-noise ratio. In order to determine the fitting area for modelling the disk component on the final quadrant, one has to avoid the disturbing influence of the bulge component and the dust lane. The region dominated by the bulge is fixed following Wyse et al. (wgf (1997)), who define the bulge component as “light that is in excess of an inward extrapolation of a constant scale-length exponential disk”. The clear increase of the intensity towards the center seen in radial cuts therefore determines an inner fitting boundary $`R_{\mathrm{min}}`$. We minimized the dust influence (cf. Section 4.4.2) by placing a lower limit $`z_{\mathrm{min}}`$ in the vertical direction, chosen by visual inspection. Additionally we restricted the remaining image by a limiting contour line $`\mu _{\mathrm{lim}}`$, where the intensity drops below a limit of 3$`\sigma `$ above the background.
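A minimal sketch of such a fitting mask, with $`R_{\mathrm{min}}`$, $`z_{\mathrm{min}}`$ and the sky noise taken as given (their values would follow from the visual inspection described above):

```python
import numpy as np

def fitting_mask(img, y, z, R_min, z_min, sky_sigma):
    """Boolean mask of pixels used in the fit (True = keep)."""
    Y, Z = np.meshgrid(y, z, indexing="ij")
    keep = (np.abs(Y) > R_min) & (np.abs(Z) > z_min)  # cut bulge and dust lane
    keep &= img > 3.0 * sky_sigma                     # limiting isophote mu_lim
    return keep
```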
We are aware that these are rather rough definitions, difficult to reproduce without quoting the exact values of $`R_{\mathrm{min}},z_{\mathrm{min}},\mu _{\mathrm{lim}}`$. However, the final choice of the fitting area is a complex and subjective procedure depending on the intrinsic shape of each individual galaxy, the influence of its environment, and the quality of the image itself. It is therefore not possible to quote exact general selection criteria and to derive the structural parameters straightforwardly. One solution is to proceed in a consistent way for a large sample, which leaves the problem of comparing results from different methods (cf. Section 4.2).
#### 3.3.2 Numerical fitting
The numerical realisation of the fitting procedure minimizes the difference ($`SQ`$) between the averaged quadrant and a modelled quadrant based on equation (4).
$$SQ=\sum _j\left(\mathrm{log}\left(I_{O_j}(y_j,z_j)\right)-\mathrm{log}\left(I_{M_j}(y_j,z_j)\right)\right)^2$$
$`I_{O_j}`$ is the intensity within the averaged quadrant (observed intensity) and $`I_{M_j}`$ the modelled intensity. In contrast to Shaw & Gilmore (sg89 (1989), sg90 (1990)) and Shaw (s93 (1993)), who use a similar approach for their models, we do not weight individual pixels. They weight the difference by the error in the surface brightness measure, which Shaw (s93 (1993)) derives from the averaging of the quadrants. However, this method implies an absolute symmetry of the disk in $`z`$ and $`y`$, which is not the case for real galaxies; such errors only reflect the asymmetry of the galaxy. Using the observed errors in the surface brightness for weighting individual pixels does not bring a considerable advantage, because they are nearly the same after smoothing. The minimal $`SQ`$ is found by varying five of the six free parameters of the model (cf. Eq. 5), whereas the parameter $`R_{\mathrm{co}}`$ is determined from cuts parallel to the major axis: a significant decrease of the intensity, extrapolated to $`I=0`$, gives the value of $`R_{\mathrm{co}}`$ (van der Kruit & Searle, 1981a ); it is therefore important that the intensity at the cut-off radius is well above the noise limit. The other five parameters are determined by locating the smallest $`SQ`$. For the three different functions $`f(z)`$ and every possible $`i`$ ($`\mathrm{\Delta }i=0.5\mathrm{°}`$), the remaining parameters ($`\widehat{L}_0`$, $`h`$, and $`z_0`$) are varied with the “downhill simplex” method (Nelder & Mead, nm (1965); Press et al., pftv (1988)) until the global minimum of $`SQ`$ is found. $`I_{M_j}`$ is calculated by a numerical Gaussian integration of equation (4). The possible inclination angles can be restricted from the dust lane of the galaxy (Paper I). Tests of model disks with added noise show that the downhill simplex method recovers the input inclination $`i`$, the input function $`f(z)`$, and the other disk parameters within errors of $`\delta h,\delta z_0`$, and $`\delta \widehat{L}_0<`$ 1%. The errors of the parameters of the best model disk can be estimated by inspecting the $`SQ`$ parameter space around the smallest $`SQ`$, using slightly different fitting areas and different values of $`R_{\mathrm{co}}`$. $`f(z)`$ is in almost all cases the same, and the variation of $`i`$ is only small ($`\pm 1\mathrm{°}`$). The differences in $`h`$ and $`z_0`$ are about 15% (in some cases up to 25%), and $`\widehat{L}_0`$ varies by about a factor of 2.
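A minimal sketch of this minimisation, reusing the `model_image` helper from the sketch after Eq. (5); the synthetic "observed" quadrant below merely stands in for the real averaged data quadrant, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

yp = np.linspace(0.5, 30.0, 60)
zp = np.linspace(0.5, 8.0, 16)
Rco = 25.0                                  # determined beforehand from cuts

# Synthetic "observed" quadrant: a noisy model disk (placeholder for data).
rng = np.random.default_rng(2)
obs = model_image(yp, zp, 88.0, 1.0, Rco, 8.7, 2.2, nx=201)
obs *= rng.normal(1.0, 0.02, obs.shape)     # 2% multiplicative noise
mask = obs > 1e-4 * obs.max()               # stand-in for the fitting mask

def SQ(params, i_deg):
    L0, h, z0 = np.abs(params)              # keep parameters positive
    mod = model_image(yp, zp, i_deg, L0, Rco, h, z0, nx=201)
    d = np.log10(obs[mask]) - np.log10(mod[mask] + 1e-30)
    return np.sum(d**2)

best = None
for i_deg in np.arange(80.0, 90.5, 0.5):    # Delta i = 0.5 deg
    res = minimize(SQ, x0=[1.0, 8.0, 2.0], args=(i_deg,),
                   method="Nelder-Mead")    # downhill simplex
    if best is None or res.fun < best[0]:
        best = (res.fun, i_deg, res.x)
print("best SQ, i, (L0, h, z0):", best)
```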
### 3.4 Method 2
Within this method all disk parameters as well as the choice of the optimum function $`f(z)`$ are estimated by a direct comparison of calculated and observed disk profiles by eye.
As a first step, the inclination $`i`$ is determined by using the axis ratio of the dust lane. Depending on the shape of the dust lane it is possible to restrict the inclination to $`\pm 1.5\mathrm{°}`$. The central luminosity density $`\widehat{L}_0`$ is calculated automatically for each new parameter set by using a number of preselected reference points along the disk. Given a sufficient signal-to-noise ratio, the cut-off radius $`R_{\mathrm{co}}`$ can be determined from the major axis profile. Thus, the remaining fitting parameters are the disk scale length and height, $`h`$ and $`z_0`$, as well as the set of 3 functions $`f(z)`$. The scale length $`h`$ is fitted to a number (usually between 4 and 6) of major axes profiles (left panel of Fig. 2). The quality of the fit for the vertical profiles along the disk can be used as a cross-check for the estimated scalelengths (right panel of Fig. 2). The disk scaleheight $`z_0`$ is estimated by fitting the $`z`$-profiles, outside a possible bulge or bar contamination. The vertical disk profiles of most of the galaxies investigated enable a reliable choice of the quantitatively best fitting function $`f(z)`$. This is due to the fact that the deviations between different functions become visible at vertical distances larger than that of the most sharply-peaked dust regions. The first raw fitting steps are usually carried out by using a reduced number of both major and minor axis profiles simultaneously. Afterwards, when a good first fitting quality is reached, a complete set of major and minor calculated and observed axes profiles are investigated in detail. At the end of this procedure a final, complete disk model is calculated using all previously estimated disk parameters.
## 4 Results
### 4.1 Distribution of disk parameters
Table 2 contains the best-fit model for each image. Together with the galaxy name (1), the filter (2), and the referring image (3), with integration time and run ID, we list the inclination (4), the best-fitting function for the z-distribution (5), the calibration index (6) (cf. Section 2.4), and the central surface brightness of the model (7), not corrected for inclination. According to the distance tabulated in Table 1, the cut-off radius $`R_{\mathrm{co}}`$ (8) is given in kpc and arcsec, as well as the scalelength $`h`$ (9) and the vertical scaleheight $`z_0`$ (10), which is normalised to the isothermal case, being two times an exponential scaleheight $`h_z`$. For the seven galaxies with images available in more than one filter, we do not see any correlation of the fitted parameters with wavelength, although we find the same inclination angle for the best-fitted disk within the errors. Appendix A shows the best-fitting model as an overlay to selected radial profiles for each image. The subsequent analysis of the distributions of the different parameters, concerning the formation and evolution of galaxies, will be given in forthcoming papers.
### 4.2 Comparison of different methods
The different fitting methods were independently developed within two diploma theses (Lütticke 1996, Schwarzkopf 1996). The quality of the data basis for each project was the same. From the sample presented here there were five objects in common. These are used to compare the two methods and determine the quantitative difference of the derived parameters.
Table 3 shows the results for the five images. The mean deviation in the determined inclination is $`1\mathrm{°}`$ and 12.4% for the scaleheight (ranging from 5.0%-26.6%) whereas for three images different functions for the z distribution were used. The mean difference for the radial scalelength is 20.6% (2.1%-47.2%) and 4.2% for the determination of the cut-off radius.
A subsequent analysis shows that it is not possible to ascribe the sometimes quite large discrepancies to the quality of an individual method. It turns out that the main problem is the non-uniform determination of the fitting area: the intrinsic asymmetric deviations of a real galactic disk from the model enforce a more subjective restriction of the galaxy image to the fitting region, whereby for example the bulge area and the dust lane have to be excluded.
This finding is in agreement with the study of Knapen & van der Kruit (knap (1991)), who compared published values of the scalelength and found an average discrepancy of 23% between different sources. As already mentioned by Schombert & Bothun (sb (1987)), the limiting factor for the accuracy of the decomposition is neither the typical S/N of the CCD-telescope combination nor the error in the determination of the sky background, but the deviation of real galaxies from the standard model.
### 4.3 Comparison with the literature
In our former study (Paper I) with an earlier method to adapt equation (4), 20 of our 45 galaxy images have already been used. We decided to re-use them in this study to get models for as many galaxies as possible in a homogeneous way. Additionally, Paper I only presents the best fit values for the isothermal model, and uses a different definition of the cut-off radius.
Only three galaxies are in common with the sample of de Grijs (dgmn (1998)): ESO 564-027, ESO 321-010, and ESO 446-018. The mean difference for the scalelength is 10.4 % (ranging from 1.2%-20.3%) and for the scale height (normalised to the isothermal case) 4.0% (0.0%-8.2%). For the remaining galaxies there are no models in the literature.
### 4.4 Model limitations
Our model is a rather simple axisymmetric three-dimensional description of a galactic disk, consisting of a one-component radially exponential disk with three different laws for the density distribution in the z-direction and a sharp outer truncation. It therefore does not include additional components, such as bulges, bars, thick disks, or rings, and cannot deal with any asymmetries. Features like spiral structure or warps are not included either, whereas Reshetnikov & Combes (rescom (1998)) multiply their exponential disk by a spiral function and introduce an expression characterising an intrinsic warp, depending on the position angle outside a critical radius.
The choice of our fitting area tries to avoid the dust lane, possible only for almost edge-on galaxies, as a first step to account for the dust influence (cf. Section 4.4.2). Examples of models including a radiative transfer with an extinction coefficient $`\kappa _\lambda (R,z)`$ can be found in Xilouris et al. (xil99 (1999)). However, introducing more and more new components and features automatically increases the amount of free parameters. Therefore we restricted our model to the described six parameters, to obtain statistically meaningful characteristics for galactic disks.
In the following we demonstrate that a simple disk model omitting the bulge component and the dust lane indeed gives reasonable parameters.
#### 4.4.1 The influence of the bulge component
We have studied the influence of the bulge for some of our objects including the earliest type galaxy in our sample (ESO 575-059) presented here. We have subtracted our derived disk model from the galaxy and then tried to find the best representation for the remaining bulge by a de Vaucouleurs $`r^{1/4}`$ or an exponential model. Taking the slope of the vertical profile at $`R=0`$ and a fixed axis ratio, we have constructed the 2-dimensional model of the bulge. In agreement with Andredakis et al. (andre (1995)) we find, that bulges of early type galaxies are better fitted by an exponential profile than by a $`r^{1/4}`$. Figure 3 shows the resulting vertical and radial cuts for ESO 575-059 together with the models.
Despite the deviation between $`R=10^{\prime \prime }`$ and $`R=20^{\prime \prime }`$ which could be attributed to an additional component (inner disk or bar), we do not find any evidence for changing our disk model due to the influence of the bulge.
Therefore we conclude, that it is possible to nearly avoid any influence of the central component by fitting outside the clearly visible bulge region.
#### 4.4.2 The influence of dust
Dust disturbs the light profile by a combination of absorption and scattering, and the net effect has to be calculated by radiative transfer models. It is therefore not obvious that outside the “visible” dust lane, which is excluded from the fitting area as a first step, the dust does not play a major role in shaping the light distribution. Xilouris et al. (xil99 (1999)), Bianchi et al. (simone (1996)), de Jong (1996a ), and Byun et al. (byun (1994)) have recently addressed this problem in more detail. Although they investigated the influence of the dust on the light distribution by quoting best-fit structural parameters for the stellar disk as well as the dust disk, they did not quantify the influence on the stellar-disk parameters derived by standard fitting methods without dust. Even Kylafis & Bahcall (kylafis (1987)) state, within their fundamental paper on finding the dust distribution of NGC 891, that “in order to avoid duplication of previous work, we will take …” (the values for the stellar distribution estimated by standard fitting methods).
We checked the influence of the dust on our determined parameter set by studying simulated galaxy images with three different dust distributions. These dusty galaxies were kindly provided by Simone Bianchi, who calculated images with known input parameters for the star and dust distributions with his Monte Carlo radiative transfer method (Bianchi et al. simone (1996)). We defined a worst, a best, and a transparent case, following the dust distributions presented by Xilouris et al. (xil99 (1999)), for our mean stellar disk ($`R_{\mathrm{co}}/h=2.9`$, $`h/z=4`$, and $`f_1(z)`$). The worst case is calculated with $`\tau _R=0.51`$, $`h_\mathrm{d}/h_{}=1.55`$, and $`z_\mathrm{d}/z_{}=0.75`$, the best case with $`\tau _\mathrm{R}=0.20`$, $`h_\mathrm{d}/h_{}=1.08`$, and $`z_\mathrm{d}/z_{}=0.32`$, and the transparent case without dust. To be comparable we used the same method for selecting the fitting region, masking the “visible” dust lane and reserving a typical area for a possible bulge component, using mean values for the transparent case. In contrast to our standard procedure we do not restrict the inclination range from the appearance of the dust lane, but specify the best $`SQ`$ model in the range $`i=80\mathrm{°}`$–$`90\mathrm{°}`$ in Table 4; for the models marked with an asterisk we prescribe the correct input inclination. Table 4 demonstrates that even in the worst case we are able to reproduce the input parameters within the typical 20% error discussed in Section 4.2. It should be mentioned that in each case we overestimate the input scalelength and scaleheight, whereas the determination of $`R_{\mathrm{co}}`$ does not depend on the dust distribution. The implication for the distribution of the ratio $`R_{\mathrm{co}}/h`$ will be discussed in a forthcoming paper.
### 4.5 Comments on individual galaxies
Trying to adapt a simple, perfect, and exact symmetric model to real galaxies always implies a compromise between the degree of any deviation and the final model (Section 4.6). The following list will provide some typical caveats found during the fit procedure which will characterise the quality of the specified model for individual galaxies.
ESO 112-004: warped, asymmetric, central part slightly tilted compared to disk, after fitting still remaining residuals
ESO 150-014: slightly warped, minor flatfield problems
NGC 585: remaining residuals
ESO 244-048: possible two component system, slope of inner radial profile significantly higher than of an outer one, final model fits the inner parts
NGC 973: one side disturbed by stray light of nearby star, seems to be radially asymmetric, remaining residuals
UGC 3425: superimposed star on one edge
NGC 2424: model does not fit very well without obvious reason
ESO 436-034: strong bulge component, possibly barred, hard to pinpoint final model, remaining residuals
ESO 319-026: outer parts show u-shaped behaviour, remaining residuals, therefore large ($`\pm 2\mathrm{°}`$) difference in inclination angle
ESO 321-010: u-shaped, no clear major axis visible, therefore uncertain rotation angle, bar visible, bulge rotated against disk
NGC 4835A: strong residuals
ESO 446-018: the different sides of the disk are asymmetric visible in radial profiles and on the contour plot
IC 4393: similar to NGC 4835A
ESO 581-006: galaxy shows typical late type profile, $`R_{\mathrm{co}}`$ questionable, but nevertheless final model seems to fit well
ESO 583-008: disturbed by superimposed star, shows warp feature and a bar structure, $`R_{\mathrm{co}}`$ questionable, remaining residuals
UGC 10535: one side slightly extended
NGC 6722: only one side observed, bulge rotated against disk, barred, strongly disturbed by dust absorption, radial extension visible, therefore $`R_{\mathrm{co}}`$ should be treated with caution
ESO 461-006: minor flatfield problem seems to cause asymmetry, although model looks fine
IC 4937: similar to NGC 6722, dominating bulge, small disk, model significantly different compared to the i and r image, model possibly hampered by strong dust lane
ESO 578-025: bar visible
ESO 466-001: maybe two components, final model represents only inner part, outer part clearly different from normal disk component
ESO 189-012: slightly warped
ESO 533-004: similar to NGC 4835A, model fits the whole galaxy, leaving more or less no bulge component
IC 5199: slightly radial asymmetric
ESO 604-006: only one side observed, bar structure visible
### 4.6 Comments on some rejected galaxies
The model limitations described above constrain the application of our fitting process. Therefore we had to exclude about 20 galaxies from our original sample. They all show significant deviations from the simple geometry and an inclusion of their parameters obtained by forcing the model to fit the data will spoil the resulting parameter distribution.
One larger group, classified mainly as S0 galaxies (e.g. NGC 2549, ESO 376-009, NGC 7332, ESO 383-085, ESO 506-033), shows a completely different behaviour of the luminosity distribution in the outer parts compared to the other galaxies. They all exhibit an additional component, mainly characterised as an elliptical envelope. This is already visible in the contour plot, but becomes even more evident in a radial cut parallel to the major axis. In these cases the usual smoothly curved decline of the profile (e.g. ESO 578-025) is missing and is replaced by a more or less straight decline into the noise level, sometimes even by an upwards-curved profile. Fitting these luminosity distributions with our one-component exponential disk with cut-off will therefore naturally yield parameters qualitatively different from those of late-type disks. This will be discussed in detail in a forthcoming paper.
Another group consists of galaxies dominated mainly by their bulges, whereas the disk is only an underlying component, partly characterized as having thick boxy bulges (Dettmar & Lütticke db (1999)), e.g. IC 4745, ESO 383-005, although there are also pure elliptical bulges (e.g. ESO 445-049, NGC 6948).
In the case of ESO 383-048 and ESO 510-074 the radial profiles clearly indicate that a more complex model is needed to fit these kinds of multicomponent galaxies. Galaxies like UGC 7170 or ESO 113-006 were excluded because of their strong warps, which made it impossible to fit the model in a consistent way. Mainly late-type galaxies, such as ESO 385-008, IC 4871, UGC 1281, or ESO 376-023, show such a patchy and asymmetric light distribution that any attempt to fit the profiles would give only very crude, low-quality parameters. UGC 11859 and UGC 12423 were rejected because of their thin, faint disks, a problem which may be overcome by taking new images with longer integration times to reach a higher signal-to-noise ratio, whereas NGC 5193A is completely embedded in the surface brightness distribution of its nearby companion.
###### Acknowledgements.
This work was supported by the *Deutsche Forschungsgemeinschaft, DFG*. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We have made use of the LEDA database (www-obs.univ-lyon1.fr). The authors wish to thank Simone Bianchi, who kindly provided us dusty-galaxy images produced with his radiative transfer code.
## Appendix A Contour plots and radial profiles
The following figures show in the top panel selected cuts parallel to the major axis (full line) as well as the best-fit model (dashed line) according to Table 2. The bottom panel shows isophote maps of the surface brightness of the galaxies, rotated to the major axis. The faintest plotted contour is defined by a 3$`\sigma `$ criterion on the background. Table 5 lists the name (1), image (2), and the vertical positions of the plotted profiles parallel to the major axis in arcsec (3). The contours are plotted equally spaced at 0.5 mag arcsec<sup>-2</sup>, starting from the limiting surface brightness $`\mu _{\mathrm{lim}.}`$ (4).
# Sub-Saturn Planet Candidates to HD 16141 and HD 46375¹
## 1 Introduction
The two thousand nearest and brightest dwarf stars ranging in spectral type from late F through M are currently being surveyed by groups with a Doppler precision of $``$10 m s<sup>-1</sup> or better. To date these surveys have resulted in the discoveries of 32 extrasolar planets (cf. Marcy, Cochran, and Mayor 2000, Marcy & Butler 2000, Vogt et al. 2000, Queloz et al. (2000), Udry et al. (2000), Noyes et al. 1997), including the first system of multiple planets (Butler et al. 1999) and the first detection of a transiting planet (Henry et al. 2000; Charbonneau et al. 2000).
Remarkably, all 32 companions found by precision Doppler surveys have $`M\mathrm{sin}i`$ less than 8 M<sub>JUP</sub>, though companions with masses of 10–80 M<sub>JUP</sub> would have been much easier to detect. The companion mass function rises steeply toward smaller masses (Marcy & Butler 2000), but it turns over near 0.5 M<sub>JUP</sub>, presumably due to poor detectability. With conventional precision of 10 m s<sup>-1</sup>, companions of 0.5 M<sub>JUP</sub> in 4-day orbits induce stellar motion only a few times greater than such Doppler errors. The stars 51 Peg and HD 75289 have the lowest known values of $`M\mathrm{sin}i`$, both 0.46 M<sub>JUP</sub> (Mayor & Queloz 1995, Udry et al. 2000). However, with a precision of 3 m s<sup>-1</sup>, one may explore the mass distribution below 1 M<sub>SAT</sub> (= 0.298 M<sub>JUP</sub>). If sub-Saturn-mass companions occur less frequently than Jupiter-mass companions, the status of all Jupiter-mass companions as “planets” would be cast in doubt. Such a distribution of masses, peaked at $`\sim `$1 M<sub>JUP</sub>, is not the case in our Solar System, nor do theories of planet formation predict such a peak (e.g., Lissauer 1995, Boss 1995, Levison et al. 1998).
Here we report Doppler variations in HD 16141 and HD 46375 exhibiting $`M\mathrm{sin}i`$$`<`$ 1 M<sub>SAT</sub>. Stellar characteristics and observations are discussed in the second section. Orbital solutions are presented in the third section, followed by discussion of results.
## 2 Observations and Stellar Characteristics
HD 16141 (79 Cet, HIP 12048, G5 IV) and HD 46375 (HIP 31246, K1 IV-V) are slowly rotating, chromospherically inactive stars, based on the weak Ca II H&K emission in their spectra. Their Hipparcos (Perryman et al. 1997) distances are 35.9 and 33.4 pc, respectively, giving them absolute magnitudes of M<sub>V</sub>=4.05 and M<sub>V</sub>= 5.29, which places both stars 1.0 mag above the zero-age main sequence. The Hipparcos mission made 60 and 63 photometric observations of HD 16141 and HD 46375, respectively, revealing that both stars are photometrically stable at the level of errors, $``$0.01 mag.
The mass of HD 16141 is $`M`$=1.01 M<sub>⊙</sub>, and it has \[Fe/H\] = 0.02, as determined by high-resolution spectral synthesis (Fuhrmann 1998). The mass of HD 46375 is probably $`\sim `$1.0 M<sub>⊙</sub> as well. Its spectral type of K1 IV-V and color, $`B-V=0.86`$, would normally indicate a mass of 0.9 M<sub>⊙</sub>. But its metallicity appears to be high, \[Fe/H\] = +0.34, as judged from narrow-band photometry (Apps, priv. comm. 2000). This high metallicity implies a higher mass than is associated with solar-metallicity stars of its color, giving it an estimated mass of 0.9–1.0 M<sub>⊙</sub>. Here we adopt a mass of 1.0$`\pm `$0.1 M<sub>⊙</sub> for the computation of $`M\mathrm{sin}i`$ and the semimajor axis. HD 46375 resides only a few arcminutes northeast of the bright nebulosity of the Rosette Nebula (which is over 1 kpc in the background), leaving some concern about contamination of the optical spectra and photometry of the star.
The velocities of HD 16141 and HD 46375 have been monitored since 1996 and 1998, respectively. The technique is the same as described in Vogt et al. (2000), using the Keck 1 HIRES spectrometer (Vogt et al. 1994). The H&K lines near 3950 Å provide a simultaneous chromospheric diagnostic (Saar et al. 1998), giving log$`R^{}`$(HK) = $`-5.05`$ and $`-4.94`$ for HD 16141 and HD 46375, respectively, typical for chromospherically quiet stars (Noyes et al. 1984). The photospheric Doppler “jitter” of such stars is less than 3 m s<sup>-1</sup> (Saar et al. 1998).
## 3 Orbital Solutions
The 46 observations of HD 16141 are shown in Figure 1 and listed in Table 1. These observations have an RMS of 7.0 m s<sup>-1</sup>, 2.5 times larger than the measurement error. A periodogram of these velocities (Figure 2a), reveals a dominant period at 75.6 days. The reality of this periodicity is supported by two tests. We broke the velocity set into its first half and second half, yielding separate periodograms with highest peaks at 75 d and 77 d, respectively, both having a false alarm probability under 1%. Thus, both halves of the measurements reveal the 76-day periodicity. We also computed the false alarm probability of the 76 d periodicity in the full velocity set by using a Monte Carlo approach (Gilliland & Baliunas 1987). We generated 10<sup>5</sup> sets of artificial velocities drawn from a Gaussian error distribution while retaining the actual times of observation. None of these artificial velocity sets yielded a periodogram peak as high as that actually found for HD 16141 (Fig 2a). Thus, the false alarm probability is less than 1$`\times `$10<sup>-5</sup>.
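A minimal sketch of this Monte Carlo false-alarm test: the observing times are kept, the velocities are replaced by pure Gaussian noise, and the highest periodogram peak of each synthetic set is compared with the peak found in the real data. Periodogram normalisation details differ between implementations, so this is illustrative only.

```python
import numpy as np
from scipy.signal import lombscargle

def max_power(t, v, freqs):
    return lombscargle(t, v - v.mean(), freqs).max()

def false_alarm_prob(t, v, sigma, freqs, ntrials=100000, seed=0):
    """Fraction of pure-noise sets whose highest peak beats the data peak."""
    rng = np.random.default_rng(seed)
    peak = max_power(t, v, freqs)
    hits = sum(max_power(t, rng.normal(0.0, sigma, t.size), freqs) >= peak
               for _ in range(ntrials))
    return hits / ntrials

# Angular frequencies covering periods of, e.g., 2-1000 days:
freqs = 2.0 * np.pi / np.linspace(2.0, 1000.0, 5000)
```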
The best–fit Keplerian model to the velocities of HD 16141 is shown in Figure 1 and yields a period $`P`$=75.82 d. The amplitude is $`K`$=10.8 m s<sup>-1</sup>, and the eccentricity is $`e`$=0.28. The RMS of the residuals to the Keplerian fit is 3.2 m s<sup>-1</sup>, similar to the median internal error of 2.8 m s<sup>-1</sup>. Adopting a stellar mass of 1 M<sub>⊙</sub>, the companion has $`M\mathrm{sin}i`$ = 0.22 M<sub>JUP</sub>, and the semimajor axis is 0.35 AU. A periodogram of the velocity residuals, shown in Figure 2b, exhibits no significant peaks, indicating that no additional companions are evident.
The 24 observations of HD 46375 are listed in Table 2. Figure 3 shows observations obtained between 6 and 11 Feb 2000, revealing a three-day period. A phased version of the entire data set, spanning 515 d, is shown in Figure 4. The same best–fit sinusoid is shown in both Figures 3 and 4. The RMS of the sinusoidal fit is 2.59 m s<sup>-1</sup>, slightly greater than the velocity uncertainty of 2.2 m s<sup>-1</sup>. The best–fit Keplerian gives $`e`$=0.04$`\pm `$0.04 (i.e. not significant) and yields an RMS of 2.44 m s<sup>-1</sup>. The period is 3.024 d and the velocity semiamplitude $`K`$ = 35 m s<sup>-1</sup>. Assuming the host star is 1 M<sub>⊙</sub>, the companion has $`M\mathrm{sin}i`$ = 0.25 M<sub>JUP</sub> and the semimajor axis is 0.041 AU. The orbital parameters for HD 16141 and HD 46375 are listed in Table 3.
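For reference, the minimum masses and semimajor axes quoted above follow from the fitted elements via the standard small-mass approximation ($`M_p\mathrm{sin}iM_{}`$); a short sketch:

```python
import numpy as np

G, M_SUN, M_JUP, AU = 6.674e-11, 1.989e30, 1.898e27, 1.496e11  # SI units

def msini_and_a(P_days, K_ms, e, Mstar_Msun):
    """Minimum mass (M_JUP) and semimajor axis (AU) from orbital elements."""
    P = P_days * 86400.0
    Ms = Mstar_Msun * M_SUN
    msini = K_ms * (P / (2 * np.pi * G))**(1 / 3) * Ms**(2 / 3) * np.sqrt(1 - e**2)
    a = (G * Ms * (P / (2 * np.pi))**2)**(1 / 3)   # Kepler's third law
    return msini / M_JUP, a / AU

print(msini_and_a(75.82, 10.8, 0.28, 1.0))   # HD 16141 -> (~0.22, ~0.35)
print(msini_and_a(3.024, 35.0, 0.04, 1.0))   # HD 46375 -> (~0.25, ~0.041)
```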
## 4 Discussion
HD 16141 exhibits the smallest velocity amplitude, $`K`$=10.8 m s<sup>-1</sup>, reported for any planet candidate, thus warranting an assessment of stellar and instrumental sources of error. Chromospherically inactive stars are intrinsically stable at the level of 3 m s<sup>-1</sup>(Saar et al. 1998). Further, the majority of our program stars exhibit no instrumental “drift” in the velocity zero–point at the 2 m/s level during the last four years (Vogt et al. 2000). There is no plausible stellar time scale near 75 d except rotation. But spots are not important as HD 16141 is chromospherically quiet at Ca II H&K, and photometrically stable at 0.01 mag from Hipparcos. Thus, the most likely explanation for the Doppler variations is an orbiting planet.
The eccentricity for the planet around HD 16141 is not yet well determined, $`e`$=0.28$`\pm `$0.15, and indeed cannot be reliably distinguished from circular without further measurements. However, all 21 planets orbiting beyond 0.2 AU (Marcy & Butler 2000) have eccentricities above 0.1, and thus this planet will constitute an interesting test case of eccentricities of a possibly low mass planet.
The companion to HD 46375 is similar to the other “51 Peg–like” planets, with their orbital periods of 3 to 5 days and circular orbits. Circular orbits are expected from tidal coupling with the primary star (Rasio & Ford 1996; Marcy et al. 1997; Ford et al. 1998). As the primary in the HD 46375 system has not been spun up, the mass of the planet is constrained to be less than $`\sim `$15 M<sub>JUP</sub> (Ford et al. 1998, Marcy et al. 1997). The suspected high metallicity of HD 46375, \[Fe/H\]=0.34 (Apps, priv. comm.), supports the suggestion that “51 Peg–like” planets are associated with high-metallicity stars (Gonzalez et al. 1999, Queloz et al. 2000).
The value of $`M\mathrm{sin}i`$=0.25 M<sub>JUP</sub> for the companion to HD 46375 is smaller than that of any known “51 Peg–like” extrasolar planet, with the smallest having been 51 Peg (0.46 M<sub>JUP</sub>, Mayor & Queloz 1995) and HD 75289 (0.46 M<sub>JUP</sub>, Udry et al. 2000). This suggests that if orbital migration brings such planets inward (Lin et al. 1996), the process may continue to operate at masses near 1 M<sub>SAT</sub>, pending knowledge of $`\mathrm{sin}i`$.
With a demonstrated precision of 3 m s<sup>-1</sup> (Vogt et al. 2000), the Keck survey is currently capable of making 3$`\sigma `$ detections of “51 Peg–like” planets down to the Neptune-mass range. Knowledge of the companion mass function of “51 Peg–like” planets down into this range will provide useful constraints on models that explain the formation and subsequent dynamics of such planets.
The effective temperature of a planet in a 3.024-day orbit about HD 46375 would be $`\sim `$1400 K (Burrows et al. 1998). Henry (2000) reports that no transit occurs, implying that $`\mathrm{sin}i<`$ 0.992. The expected amplitude of a transit signal is about 0.015 mag (Burrows et al. 1998, Henry et al. 2000, Charbonneau et al. 2000).
The companions to HD 16141 and HD 46375 have the lowest values of $`M\mathrm{sin}i`$ (0.22 M<sub>JUP</sub>, 0.25 M<sub>JUP</sub>) found to date for extrasolar planets. The observed histogram of $`M\mathrm{sin}i`$ shows a steep rise toward the lowest masses, consistent with a power law, $`dN/dM\propto M^{-1}`$ (Marcy and Butler 2000). These new planets support the suggestion that the mass distribution continues rising down to 1 M<sub>SAT</sub>. Verification of any rise in the planetary mass function below 1 M<sub>SAT</sub> will require more detections to account properly for incompleteness.
We thank Kevin Apps for assessment of stellar characteristics. We acknowledge support by NASA grant NAG5-8299 and NSF grant AST95-20443 (to GWM), by NSF grant AST-9619418 and NASA grant NAG5-4445 (to SSV), travel support from the Carnegie Institution of Washington (to RPB), and by Sun Microsystems. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
# STUDY OF NUCLEATING EFFICIENCY OF SUPERHEATED DROPLETS BY NEUTRONS
B.ROY, B.K. CHATTERJEE, M. DAS and S.C. ROY
Department of Physics, Bose Institute, Calcutta 700009, India
Superheated droplets have proven to be excellent detectors for neutrons and could be used as a neutron dosimeter. To measure accurately the volume of superheated droplets nucleated, an air displacement system has been developed. Here the air expelled by the volume change upon nucleation displaces a column of water through a narrow horizontal glass tube; the displacement of the water is linearly related to the nucleated volume, and the system has the added advantage of being leak free.
In the presence of neutrons, the rate of nucleation (the rate of decrease in the volume of superheated droplets) is proportional to the residual volume of superheated droplets and to the neutron flux ($`\varphi `$). Hence the volume of accumulated vapour (or the volume of the displaced air) is given as:
$$V=V_0\left(1e^{t/\tau }\right)$$
(1)
where $`\tau =M/(\varphi \rho \eta N_Ad\sum _in_i\sigma _i)`$, $`M`$ is the molecular weight and $`\rho `$ the density of the superheated liquid, $`N_A`$ is the Avogadro number, $`n_i`$ is the abundance of the $`i^{th}`$ species of nucleus (whose neutron elastic scattering cross section is $`\sigma _i`$) in the molecule, $`\eta `$ is the efficiency of nucleation of a droplet by a recoil nucleus, $`d`$ is the average droplet volume, and $`V_0`$ is the vapour volume of the entire superheated liquid. By least-squares fitting of the volume of displaced air ($`V`$) as a function of time ($`t`$), $`V_0`$ and $`\tau `$ are obtained. From $`\tau `$ one may obtain $`\eta `$ if the other parameters are known. Results of an experiment performed with Freon-12 samples using an Am-Be neutron source are presented in Fig. 1.
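A minimal sketch of this least-squares fit; the (t, V) values below are illustrative placeholders for the measured displacement data:

```python
import numpy as np
from scipy.optimize import curve_fit

def vapour_volume(t, V0, tau):
    """Eq. (1): saturating growth of the accumulated vapour volume."""
    return V0 * (1.0 - np.exp(-t / tau))

t = np.array([0.0, 60.0, 120.0, 240.0, 480.0, 960.0])   # s (placeholder)
V = np.array([0.0, 0.9, 1.6, 2.7, 3.8, 4.6])            # arbitrary units

(V0, tau), cov = curve_fit(vapour_volume, t, V, p0=[V.max(), 300.0])
print("V0 =", V0, " tau =", tau, "s")
```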
Figure Captions
Fig.1. Variation of volume (scaled by the mass of sample and the area of cross section of the tube) of displaced air as a function of time for Freon-12; solid curve is the least-squares-fit to experimental data.
# Strange stars - linear approximation of the EOS and maximum QPO frequency
## 1 Introduction
The idea of compact stars built of strange matter was presented by Witten (witten (1984)), and models of such stars were calculated using various models of strange matter by Haensel et al. (hzs (1986)) and Alcock et al. (afo (1986)). The main idea is that u,d,s matter is the ground state of matter at zero pressure (self-bound strange quark matter), i.e.:
$$\mu _0\equiv \mu (P=0)<M(^{56}\mathrm{Fe})/56=930.4\mathrm{MeV}$$
(1)
There have been two main phases of the rather extensive studies of strange star properties. The first, in the context of the maximum rotational velocity of the star (Frieman & Olinto 1989, Glendenning 1989, Prakash et al. 1990, Zdunik & Haensel 1990), was related to the announcement of the detection of a half-millisecond pulsar in 1989 (which after one year turned out to be erroneous). This observation stimulated detailed investigations of the limits on the rotational frequency of dense stars, which excluded nearly all neutron star models, leaving strange stars as a possible explanation.
At present the increasing interest in strange stars results from the observations of QPOs in low-mass X-ray binaries (LMXBs) and from some difficulties in explaining their properties with neutron star models under the assumption that QPOs represent Keplerian frequencies of particles in an accretion disk (Kaaret et al. 1997, Kluźniak 1998, Zhang et al. 1998, Miller et al. 1998, Thampan et al. 1999, Schaab & Weigel 1999). In this paper the maximum values of these frequencies are found for a broad set of the parameters describing a strange matter EOS. Strange star models are consistent with the maximum observed QPO frequency of $`1.33`$ kHz in 4U 0614+09 (van Straaten et al. 2000).
Frieman & Olinto (1989) mentioned the approximation of the EOS of strange matter by a linear function (see also Prakash et al. 1990). I present here the linear form of the EOS of strange matter, with parameters expressed as polynomial functions of the physical constants describing the MIT bag model (the strange quark mass and the QCD coupling constant). These approximate formulae allow us to write down the pressure vs. density dependence in the simple form $`P=a(\rho -\rho _0)`$. The consequence of this form is a set of scaling laws for all stellar quantities with appropriate powers of $`\rho _0`$. This linear EOS is complete in the sense that it contains not only pressure and energy density, which enter the relativistic stress-energy tensor and are sufficient for the determination of the main parameters of the star (mass, radius), but also the formula for the baryon number density (or equivalently the baryon chemical potential), which is necessary in microscopic stability considerations and in the determination of the total baryon number of the star.
## 2 Strange stars and maximum QPO frequency
We describe the quark matter using the phenomenological MIT bag model (see, e.g., Baym 1978). The quark matter is a mixture of massless $`u`$ and $`d`$ quarks, electrons and massive $`s`$ quarks. The model is described in detail in Farhi & Jaffe (1984), where the formulae for the physical parameters of strange matter are also presented. The following physical quantities enter this model: $`B`$ – the bag constant, $`\alpha _c`$ – the QCD coupling constant and $`m_s`$ – the mass of the strange quark. It is necessary to introduce also the parameter $`\rho _N`$ – the renormalization point. Following Farhi & Jaffe (1984) we choose $`\rho _N=313`$ MeV.
The consequence of this model of strange matter is the scaling of all thermodynamic functions with appropriate powers of $`B`$. Knowing the EOS for given $`\alpha _c`$, $`m_s`$ and $`B_0`$, we can obtain the thermodynamic quantities for another value $`B`$ from the following formulae:
$$\begin{array}{ccccccc}P_{[B]}\hfill & =& P_{[B_0]}(B/B_0),\hfill & \rho _{[B]}& =\hfill & \rho _{[B_0]}(B/B_0),& \\ n_{[B]}\hfill & =& n_{[B_0]}(B/B_0)^{3/4},\hfill & \mu _{[B]}& =\hfill & \mu _{[B_0]}(B/B_0)^{1/4}& \end{array}$$
(2)
where the resulting EOS for $`B`$ is determined for the same value of $`\alpha _c`$ but for the strange quark mass given by the relation:
$$m_s(B)=m_s(B_0)(B/B_0)^{1/4}$$
(3)
The advantage of these approximations is the elimination of one parameter ($`B`$) from the calculations of the EOS. The dependence of all parameters on $`B`$ is very well defined and one can take it into account using simple scaling laws. It is therefore sufficient to study the EOS in the two-parameter space of $`\alpha _c`$ and $`m_s`$ for a chosen value $`B_0`$ and then obtain results for other $`B`$ from Eqs (2).
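As an illustration, the rescaling can be packaged in a few lines of code; the sketch below is ours (the function name and packaging are not taken from any published code):

```python
def rescale_eos(B, B0, P0, rho0, n0, mu0, ms0):
    """Rescale EOS quantities from bag constant B0 to B (Eqs. 2-3).

    The returned values refer to the same alpha_c but to a strange
    quark mass ms0*(B/B0)**0.25, as required by Eq. (3).
    """
    r = B / B0
    return {"P": P0 * r, "rho": rho0 * r,
            "n": n0 * r**0.75, "mu": mu0 * r**0.25,
            "ms": ms0 * r**0.25}

# Example: the B0 = 60 MeV fm^-3, m_s = 200 MeV model rescaled to
# B = 90 MeV fm^-3 corresponds to m_s = 221.33 MeV (cf. Fig. 1 below).
print(rescale_eos(90.0, 60.0, 0.0, 0.0, 0.0, 0.0, 200.0)["ms"])
```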
Strictly speaking, the scaling relations (2) hold exactly only if the parameter $`\rho _N`$ is rescaled by the relation $`\rho _N(B)=\rho _N(B_0)(B/B_0)^{1/4}`$. However, the renormalization point is fixed, and this is the source of a small discrepancy between the true results for different values of the bag constant and the scaling relations presented in this paper. This difference is presented in Fig. 1 in the case of the baryon chemical potential at zero pressure, calculated directly and using the scaling relation (2). We choose $`\mu _0`$ because this thermodynamic function is crucial in microscopic stability considerations. The models of strange quark matter correspond to two values of the bag constant: $`B_0=60\mathrm{MeV}\mathrm{fm}^{-3}`$ and $`B=90\mathrm{MeV}\mathrm{fm}^{-3}`$. The strange quark masses in these two cases fulfill the relation (3), namely $`m_s(60)=200\mathrm{MeV}`$ and $`m_s(90)=221.33\mathrm{MeV}`$.
We see that scaling formulae give exact results for $`\alpha _c=0`$ because then $`\rho _N`$ does not enter the EOS. The maximum error of the scaling formulae (2) is less than 0.5%.
From the microscopic point of view, strange quark matter is unstable at zero pressure for some values of $`\alpha _c`$ and $`m_s`$ discussed in this paper (large $`\alpha _c`$ and $`m_s`$), i.e. the equation of state does not fulfill the relation (1). But from the presented results we can obtain, using the scaling formulae, configurations which correspond to a smaller value of $`B`$ and are microscopically stable. The maximum value of $`B`$ for which the strange quark matter is stable can be obtained from the equation:
$$\mu _0[B_0,\alpha _c,m_s(B_{\mathrm{max}}/B_0)^{1/4}]\left(\frac{B_{\mathrm{max}}}{B_0}\right)^{1/4}=930.4\mathrm{MeV}$$
(4)
The strange star configurations are calculated by solving the Oppenheimer-Volkoff equations in the case of spherical symmetry. For a given central pressure $`P_{\mathrm{centr}}`$ we obtain stellar parameters such as the gravitational mass $`M`$, baryon number $`A`$, moment of inertia for slow rigid rotation $`I`$ and gravitational surface redshift $`z`$. All these quantities are subject to scaling formulae similar to those describing the EOS (Eq. 2). If we calculate the star for $`B_0`$ with central pressure equal to $`P_{\mathrm{centr}}[B_0]`$, we know all parameters of the corresponding star with $`P_{\mathrm{centr}}=P_{\mathrm{centr}}[B_0]B/B_0`$ in the model with bag constant $`B`$ (cf. Witten 1984, Haensel et al. 1986). In general for the stellar parameter $`X`$ the following equality holds:
$$X[B,\alpha _c,m_s]=X[B_0,\alpha _c,m_s(B/B_0)^{1/4}](B/B_0)^k$$
(5)
where $`k=-1/2`$ in the case of mass and radius ($`X=R,M`$), $`k=-3/4`$ for $`X=A`$, $`k=-3/2`$ for $`I`$ and $`k=0`$ for $`z`$.
In Fig. 1 we see that scaling formulae (5) reproduce the stellar parameters $`M`$ vs. $`R`$ with an error less than 0.3%.
These scaling laws so far have been mainly exploited in the case of maximum mass of the star (or equivalently maximum rotational frequency). Of course these relations refer to all stellar configurations (e.g. the curve $`M(R)`$) and as an interesting example we can consider the point defined by the equality:
$$R=R_{\mathrm{ms}}=3R_g=6\frac{GM}{c^2}$$
(6)
which corresponds to the maximum possible frequency of a particle in a stable circular orbit, $`\nu _{\mathrm{max}}`$, around a nonrotating star. Because the scaling properties of $`R`$ and $`M`$ are the same (both scale as $`B^{-1/2}`$), one can solve Eq. 6 independently of $`B`$. For stellar masses higher than the solution of Eq. 6 the maximum Keplerian frequency of an orbiting particle is defined by the marginally stable orbit at $`R_{\mathrm{ms}}`$, and for lower masses by the stellar radius $`R`$ (Fig. 2a). In both cases this frequency scales as $`B^{1/2}`$, and for other values of $`B`$ the patterns of Fig. 2a do not change, provided one rescales the axes and $`m_s`$ (Eqs 3, 5, 7).
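For orientation, this prescription is easy to evaluate numerically. The following back-of-envelope sketch (ours, with illustrative input values) uses the Keplerian orbital frequency $`\nu =(GM/r^3)^{1/2}/2\pi `$ at $`r=\mathrm{max}(R,R_{\mathrm{ms}})`$:

```python
# Maximum orbital frequency around a nonrotating star: the orbit sits at
# the marginally stable radius r_ms = 6GM/c^2, or at the stellar surface
# if the star is larger than r_ms (Eq. 6 marks the transition).
import math

G, C, MSUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def nu_max(M_solar, R_km):
    """Maximum Keplerian frequency [Hz] for mass M and radius R."""
    M = M_solar * MSUN
    r_ms = 6.0 * G * M / C**2
    r = max(r_ms, R_km * 1.0e3)             # orbit cannot lie inside the star
    return math.sqrt(G * M / r**3) / (2.0 * math.pi)

# Illustrative values: a 1.6 M_sun star with R = 9 km has R < r_ms,
# so the marginally stable orbit sets nu_max (about 1.4 kHz).
print(f"{nu_max(1.6, 9.0):.0f} Hz")
```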
The maximum values of $`\nu `$ as a function of the strange matter parameters are presented in Fig. 2b for $`B_0=60\mathrm{MeV}\mathrm{fm}^{-3}`$. For other values of $`B`$ these results scale as $`B^{1/2}`$, i.e.:
$$\nu _{\mathrm{max}}[B,\alpha _c,m_s]=\nu _{\mathrm{max}}[B_0,\alpha _c,m_s(B/B_0)^{1/4}]\left(\frac{B}{B_0}\right)^{1/2}$$
(7)
which allows us to determine the absolute maximum value of $`\nu _{\mathrm{max}}`$ consistent with the requirement of the stability of strange matter at zero pressure (Eq. 4) by putting the maximum allowed value of $`B`$ into Eq. (7). The result of this procedure, $`\nu _{\mathrm{MAX}}\equiv \nu _{\mathrm{max}}(B_{\mathrm{max}})`$, is presented in Fig. 3a. We see that this limit is well above the highest observed QPO frequency $`\nu _{\mathrm{obs}}=1.33\mathrm{kHz}`$ recently reported by van Straaten et al. (2000). Thus at present the observations of the highest QPO frequencies do not constrain strange matter models, unless we make assumptions about the mass of the star in the LMXB. To set some limits on the strange matter parameters, QPOs would have to be observed at frequencies larger than $`1.8`$ kHz. Our conclusion holds also for moderate rotation rates of the strange star, because of the initial increase of the frequency at the marginally stable orbit, which in the case of stars with mass slightly above the solution of Eq. 6 results in $`\nu _{\mathrm{max}}`$ very close to the value for nonrotating stars (Zdunik et al. 2000, Datta et al. 2000).
It should be mentioned that in principle the analysis of observational data can strongly support the existence of the marginally stable orbit. Such a conclusion has recently been presented by Kaaret et al. (1999) in the case of 4U 1820-30. This interpretation is equivalent to the condition $`R<R_{ms}`$, setting a lower bound on the mass of the star. These minimum mass limits are presented in Fig. 3b as a function of $`m_s`$ and $`\alpha _c`$. Below this mass the innermost stable orbit is located at the surface of the star.
## 3 Linear interpolation of EOS
In the present section we discuss the interpolation of the equation of state of strange quark matter by a linear function of $`\rho `$. For a broad set of parameters $`\alpha _c`$ and $`m_s`$ the dependence $`P(\rho )`$ can be very well approximated by a linear function, becoming exactly linear in the $`m_s=0`$ limit. The EOS for strange quark matter is presented in Fig. 4.
The main parameters of the matter are described by the following formulae:
$$\begin{array}{ccc}P(\rho )\hfill & =& \frac{1}{3}(1+\epsilon _{\mathrm{fit}})(\rho -\rho _0)c^2\hfill \\ n(P)\hfill & =& n_0\left[1+\left(4-\frac{3\epsilon _{\mathrm{fit}}}{1+\epsilon _{\mathrm{fit}}}\right)\frac{P}{\rho _0c^2}\right]^{3/(4+\epsilon _{\mathrm{fit}})}\hfill \end{array}$$
(8)
where $`\rho _0`$ and $`n_0`$ are energy and number density of the strange quark matter at zero pressure. The second equation is implied by the first law of thermodynamics.
Equations (8) allow us to determine all thermodynamic quantities characterizing strange quark matter in the linear interpolation of the EOS, e.g. the baryon chemical potential:
$$\mu (P)=\mu _0\left(1+\frac{4+\epsilon _{\mathrm{fit}}}{1+\epsilon _{\mathrm{fit}}}\frac{P}{\rho _0c^2}\right)^{(1+\epsilon _{\mathrm{fit}})/(4+\epsilon _{\mathrm{fit}})}$$
(9)
where $`\mu _0=\rho _0c^2/n_0`$.
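For convenience, Eqs. (8)-(9) translate directly into code. The sketch below is ours; it takes $`\rho `$ as a mass-equivalent density in kg m<sup>-3</sup>, so pressures come out in Pa:

```python
# Linear EOS of strange quark matter, Eqs. (8)-(9).
C2 = (2.998e8)**2   # c^2 in m^2 s^-2

def pressure(rho, rho0, eps):
    """P(rho) = (1/3)(1 + eps_fit)(rho - rho0) c^2."""
    return (1.0 + eps) * (rho - rho0) * C2 / 3.0

def baryon_density(P, rho0, n0, eps):
    """n(P), implied by the first law of thermodynamics."""
    x = P / (rho0 * C2)
    return n0 * (1.0 + (4.0 - 3.0 * eps / (1.0 + eps)) * x) ** (3.0 / (4.0 + eps))

def chemical_potential(P, rho0, n0, eps):
    """mu(P), with mu0 = rho0 c^2 / n0."""
    mu0 = rho0 * C2 / n0
    x = P / (rho0 * C2)
    return mu0 * (1.0 + (4.0 + eps) / (1.0 + eps) * x) ** ((1.0 + eps) / (4.0 + eps))
```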
Because the scaling relations with $`B`$ hold for all thermodynamic quantities, we can calculate the parameters needed for the linear EOS for a chosen value of $`B`$ and then rescale them using equations (2). Thus the main point is to determine the three parameters entering equations (8) as functions of $`\alpha _c`$ and $`m_s`$: $`\rho _0(\alpha _c,m_s)`$, $`n_0(\alpha _c,m_s)`$ and $`\epsilon _{\mathrm{fit}}(\alpha _c,m_s)`$. These functions can be very well approximated by polynomials in $`m_s`$ with coefficients depending on the value of $`\alpha _c`$. The formulae obtained by a least-squares fit to the exact results are the following:
$$\begin{array}{rl}n_0& =(a_0^n+a_2^nm_{s100}^2+a_3^nm_{s100}^3)\overline{B}^{3/4}\\ a_0^n& =0.28660C_\alpha ^{1/4}\\ a_2^n& =(0.010788+0.0032046\alpha _c)/\overline{B}^{1/2}\\ a_3^n& =-0.0044248\sqrt{C_\alpha }/\overline{B}^{3/4}\end{array}$$

$$\begin{array}{rl}\mu _0& =(a_0^\mu +a_2^\mu m_{s100}^2+a_3^\mu m_{s100}^3)\overline{B}^{1/4}\\ a_0^\mu & =837.260/C_\alpha ^{1/4}\\ a_2^\mu & =(46.616-16.848/C_\alpha )/\overline{B}^{1/2}\\ a_3^\mu & =(-10.482+4.5211/C_\alpha )/\overline{B}^{3/4}\end{array}$$

$$\begin{array}{rl}\epsilon _{\mathrm{fit}}& =a_2^ϵm_{s100}^2+a_3^ϵm_{s100}^3\\ a_2^ϵ& =-(0.035745+0.013366\alpha _c+0.023246\alpha _c^2)/\overline{B}^{1/2}\\ a_3^ϵ& =(0.0055488-0.0054232\alpha _c-0.0069193\alpha _c^2)/\overline{B}^{3/4}\end{array}$$
where $`m_{s100}`$ is the strange quark mass in units of 100 MeV, i.e. $`m_{s100}=m_sc^2[\mathrm{MeV}]/100`$, $`\overline{B}=B[\mathrm{MeV}\mathrm{fm}^{-3}]/60`$ and $`C_\alpha =1-2\alpha _c/\pi `$.
The energy-matter density at zero pressure is equal to $`\rho _0=n_0\mu _0/c^2`$. The functions $`\rho _0(m_s)`$ and $`\epsilon _{\mathrm{fit}}(m_s)`$ entering directly Eq. 8 and $`\mu _0(m_s)`$ for chosen values of $`\alpha _c`$ and accuracy of the above interpolations are presented in Fig. 5.
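The fits are straightforward to evaluate. The helper below is our transcription (the function name and packaging are invented for illustration); it returns $`n_0`$ [fm<sup>-3</sup>], $`\mu _0`$ [MeV], $`\epsilon _{\mathrm{fit}}`$ and $`\rho _0=n_0\mu _0`$ [MeV fm<sup>-3</sup>]:

```python
from math import pi, sqrt

def linear_eos_parameters(ms, alpha_c, B):
    """Fitted n0 [fm^-3], mu0 [MeV], eps_fit and rho0 = n0*mu0 [MeV fm^-3]
    for strange quark mass ms [MeV], coupling alpha_c and bag constant
    B [MeV fm^-3]."""
    m = ms / 100.0                     # m_s100
    Bb = B / 60.0                      # B-bar
    Ca = 1.0 - 2.0 * alpha_c / pi      # C_alpha

    n0 = (0.28660 * Ca**0.25
          + (0.010788 + 0.0032046 * alpha_c) / sqrt(Bb) * m**2
          - 0.0044248 * sqrt(Ca) / Bb**0.75 * m**3) * Bb**0.75
    mu0 = (837.260 / Ca**0.25
           + (46.616 - 16.848 / Ca) / sqrt(Bb) * m**2
           + (-10.482 + 4.5211 / Ca) / Bb**0.75 * m**3) * Bb**0.25
    eps = (-(0.035745 + 0.013366 * alpha_c + 0.023246 * alpha_c**2)
           / sqrt(Bb) * m**2
           + (0.0055488 - 0.0054232 * alpha_c - 0.0069193 * alpha_c**2)
           / Bb**0.75 * m**3)
    return n0, mu0, eps, n0 * mu0

# Massless limit check: alpha_c = 0, B = 60 MeV fm^-3 gives mu0 = 837.26 MeV,
# n0 = 0.2866 fm^-3 and rho0 = 4B = 240 MeV fm^-3, as expected.
print(linear_eos_parameters(0.0, 0.0, 60.0))
```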
One can test the accuracy of the linear interpolation of the EOS by calculating the stellar configurations built of such matter and comparing the results with stars calculated using the exact equation of state. Such a comparison is presented in Fig. 6. The relative error of this procedure is always less than 1% in the case of the mass, radius and moment of inertia.
Except for the case of strange matter with massless quarks, studied in detail in the literature, the linear form of the equation of state of self-bound matter has so far been used mainly for the determination of stellar masses and radii (e.g. Haensel & Zdunik 1989, Lattimer et al. 1990). The authors exploited the relation $`P=a(\rho -\rho _0)`$, because the quantities sufficient in these considerations are the pressure and energy density, which enter the stress-energy tensor and the TOV equations of hydrostatic equilibrium. However, for a complete study of the properties of the star the full microscopic description of the matter is needed, including the baryon chemical potential and particle number density. In our EOS the formula for $`n(P)`$ (Eq. 8) enables us to determine the total baryon number of a stellar configuration and to use this model of strange matter, for example, in discussions of the conversion of neutron stars into strange stars (Olinto 1987, Cheng & Dai 1996, Bombaci & Datta 2000). During this process the total baryon number of the star is fixed. Using the relation (3) one can find the ranges of the values of $`m_s`$, $`\alpha _c`$ and $`B`$ consistent with the requirement that strange matter is the ground state of matter at zero pressure (Eq. 1). As a result the allowable parameters of strange stars (e.g. $`M`$, $`\nu _{\mathrm{MAX}}`$) can be found without complicated and time-consuming calculations of the microscopic properties of strange matter.
## 4 Discussion and conclusions
In the present paper I analyzed the accuracy of the scaling of strange matter parameters with the value of the bag constant $`B`$ in the framework of the MIT bag model of quark matter. The scaling formulae reproduce the main stellar parameters with an error of less than 1%. This allows us to use them to determine the maximum possible frequency of a particle in a stable circular orbit around a strange star. The absolute maximum QPO frequency that can be accommodated by strange stars ranges from 1.7 to 2.4 kHz, depending on the values of the strange matter parameters: the strange quark mass and the QCD coupling constant. Thus the present status of the observational data ($`\nu _{\mathrm{max}}^{\mathrm{obs}}=1.33`$ kHz) cannot exclude strange stars as the source of QPOs in LMXBs. QPO frequencies in the very high range (larger than $`1.8`$ kHz), if observed, may set some bounds on the parameters of strange matter, excluding large $`m_s`$ and large $`\alpha _c`$. We should mention that such high values of $`\nu _{\mathrm{QPO}}`$ cannot be understood in terms of neutron stars (Thampan et al. 1999).
The high accuracy of the simple scaling laws with $`B`$ is strictly connected with the possibility of approximating the equation of state by the linear function $`P=a(\rho -\rho _0)`$, for which such scaling properties are well known and exact. For this linear EOS the parameters of strange stars scale with powers of $`\rho _0`$ for a fixed value of $`a`$ (analogous to the scaling laws with $`B`$, Eqs. 5, 7). It should be stressed that these scaling properties are valid not only for static stellar configurations but also in some dynamical problems (e.g. the parameters of rotating stars, Gourgoulhon et al. 1999). For fixed $`\rho _0`$ one can also apply the approximate scalings of the stellar parameters at the maximum mass point with appropriate powers of $`a`$ (Lattimer et al. 1990), which have recently been confirmed by Stergioulas et al. (1999) for stars rotating at the Keplerian frequency.
The presented linear approximation of the equation of state allows us to determine the dependence of many properties of strange stars on the physical parameters of the matter ($`m_s`$, $`\alpha _c`$) using a very simple form of the EOS. For a broad set of $`m_s`$ and $`\alpha _c`$ the values of $`\epsilon _{\mathrm{fit}}`$ and $`\rho _0`$ entering the linear EOS can be obtained very accurately from polynomial formulae. In particular, the formula for the baryon chemical potential enables us to study the microscopic stability of strange matter and to make a complete discussion of the resulting constraints on strange star parameters. The expression for the baryon number density is necessary in considerations of the conversion of neutron stars into strange stars with the total baryon number conserved.
It is worth noticing that the linear approximation of the $`P(\rho )`$ dependence (Eq. 8) can also be used for other models of strange matter that are self-bound at high density $`\rho _0`$. The expressions for $`n(P)`$ and $`\mu (P)`$ allow us to determine all microscopic properties of the matter given the values of $`\rho _0`$ and $`n_0`$.
###### Acknowledgements.
This research was partially supported by the KBN grant No. 2P03D.014.13. I am very grateful to P. Haensel for careful reading of manuscript and helpful comments and suggestions.
# On The Cosmic Origins Of Carbon And Nitrogen
## 1 Introduction
“It is quite a three pipe problem” (S. Holmes, quoted in Doyle 1891).
Carbon and nitrogen are among the most abundant of the chemical elements, and of obvious importance for life. Carbon is also a major constituent of interstellar dust. The main nuclear processes which generate these two elements are reasonably well understood – the carbon must come predominantly from the triple-alpha reaction of helium, and nitrogen by the conversion of carbon and oxygen that occurs during the CNO cycles of hydrogen burning. A lingering problem, though, has been the lack of knowledge of which sites are most important for their generation – in particular do they come mainly from short-lived massive stars, or from longer-lived progenitors of asymptotic giant branch stars? Coupled with this is uncertainty over the form and magnitude of the dependence of their production on the metallicity of the stars in which the reactions take place. Metallicity can essentially be tracked by its principal component – oxygen – whose dominant source is the Type II supernova explosion of massive stars. There is observed variation in the ratios of carbon and nitrogen to oxygen (i.e. C/O, N/O) in both stars (e.g. Gustafsson et al. 1999) and the gas in galactic systems (e.g. Garnett et al. 1999; Henry & Worthey 1999), and it is these variations that we use as clues to pin down the synthesis sites.
The threshold temperature for He burning and the production of <sup>12</sup>C via the triple alpha process is ∼10<sup>8</sup> K, a temperature accessible in both massive (M > 8 M<sub>☉</sub>) and intermediate mass (1 < M < 8 M<sub>☉</sub>) stars. Thus, these broad stellar groups represent two possible sites for carbon production. Likewise, nitrogen production via the CNO cycles may occur in either of these sites. However, discovering the origin of nitrogen is further complicated by the fact that the seed carbon needed for its production may either have been present when the star was born or is synthesized within the star during its lifetime. We now explore this idea in more detail.
Nitrogen is mainly produced in the six steps of the CN branch of the CNO cycles within H burning stellar zones, where <sup>12</sup>C serves as the reaction catalyst (see a textbook like Clayton 1983 or Cowley 1995 for a nucleosynthesis review). Three reactions occur to transform <sup>12</sup>C to <sup>14</sup>N: <sup>12</sup>C(p,$`\gamma `$)<sup>13</sup>N($`\beta `$<sup>+</sup>,$`\nu `$)<sup>13</sup>C(p,$`\gamma `$)<sup>14</sup>N, while the next step, <sup>14</sup>N(p,$`\gamma `$)<sup>15</sup>O, depletes nitrogen and has a relatively low cross-section. The final two reactions in the cycle transform <sup>15</sup>O to <sup>12</sup>C. Since the fourth reaction runs much slower than the others, the cycle achieves equilibrium only when <sup>14</sup>N accumulates to high levels, and so one effect of the CN cycle is to convert <sup>12</sup>C to <sup>14</sup>N. The real issue in nitrogen evolution is to discover the source of the carbon which is converted into nitrogen, and of any oxygen which can contribute through the (slow) side chain <sup>16</sup>O(p,$`\gamma `$)<sup>17</sup>F($`\beta `$<sup>+</sup>,$`\nu `$)<sup>17</sup>O(p,$`\alpha `$)<sup>14</sup>N.
The conventional meaning of “primary” applied to nitrogen is that its production is independent of the initial composition of the star in which it is synthesized. An example is where stars produce their own carbon (and some oxygen) during helium burning, and the carbon (and perhaps oxygen) is subsequently processed into <sup>14</sup>N via the CN(O) cycle. Stars beyond the first generation in a galactic system already contain some carbon and oxygen, inherited from the interstellar medium out of which they formed. The amount of nitrogen formed from CNO cycling of this material will be proportional to its C abundance (and also its O abundance, if the CNO cycling proceeds long enough to deplete the oxygen) and is known as “secondary” nitrogen. In general, then, primary nitrogen production is independent of metallicity, while secondary production is a linear function of it.
However, these conventional definitions become rather blurred if the evolution of the synthesizing stars, and/or the release of nucleosynthesis products to the interstellar medium, depends on the initial composition of the star - as well it might if stellar wind generation is metallicity-dependent. Thus effective production of carbon could depend on the initial metallicity of the star, although no actual “seed nucleus” is involved, and the production of nitrogen might differ from a simple primary or secondary process. Also important is the mass of star in which production takes place, since if most production takes place in stars of low mass, a significant delay will occur between formation of the source stars and release of the products into the interstellar medium.
Detailed discussion and review of computed stellar yields is left until §§3 and 5 below, but we now refer to current interpretations of the observed carbon and nitrogen abundances in stars and galaxies. The literature on the origin of carbon has recently been reviewed by Gustafsson et al. (1999), and Garnett et al. (1999). The former conclude that their own stellar results are “consistent with carbon enrichment by superwinds of metal-rich massive stars, but inconsistent with a main origin of carbon in low mass stars”, which is pretty much echoed by the latter authors who state that the behavior of C/O ratios as a function of O/H is “best explained \[by\] … effects of stellar mass loss on massive star yields”. They note that theoretical chemical evolution models (Carigi 1994) in which carbon comes from intermediate stars apparently predict too shallow a C/O relation to fit their observed galaxy abundance gradients. Other recent discussions of C/O gradients across our own or other galaxies, or of C/O ratios in low metallicity galaxies, have been given by Götz and Köppen (1992), Prantzos, Vangioni-Flam & Chauveau (1994), Kunth et al. (1995), Garnett et al. (1995), Carigi et al. (1995), Mollá, Ferrini, and Díaz (1997) and Chiappini, Matteucci & Gratton (1997).
The literature on the origin of nitrogen was briefly reviewed by Vila-Costas & Edmunds (1993). A major problem with N/O ratios has been to try to explain the spread in N/O at a given O/H (see Figure 1B of Section 2), although the reality of a spread at low O/H (12+log(O/H) ≲ 7.6) has been questioned by Thuan et al. (1995) and Izotov & Thuan (1999). Two major mechanisms have been proposed for generating a spread - mechanisms that could also apply (with different timescales etc) to C/O ratios. One mechanism invokes a significant time delay between formation of the star which will produce the nitrogen and the delivery of the nitrogen to the interstellar medium. Thus, since oxygen is expected to be produced predominantly in the SNII explosions of short-lived massive stars, the N/O ratio in the ISM will at first decrease, and then rise again as the nitrogen is released. A delay could affect both primary and secondary nitrogen, and Edmunds & Pagel (1978) suggested that N/O ratios might perhaps be an indicator of the age of a galactic system, in the sense that it indicated the time since the bulk of star formation has taken place. This idea continues to find some support (e.g. Kobulnicky & Skillman 1996; van Zee, Haynes & Salzer 1997). A suspicion of low N/S or N/Si ratios (the S and Si being expected to follow O) in damped Lyman alpha absorption systems (Lu et al. 1996; Pettini, Lipman & Hunstead 1995; Pilyugin 1999) has been invoked as evidence of the youth of the systems on the basis of delayed N release. However, if the delay mechanism is to be effective in altering N/O ratios, the delay must be reasonably long - otherwise the probability of catching a system at low N/O, perhaps after a burst of star formation, will be too small. A time scale of several 10<sup>8</sup> or of order 10<sup>9</sup> years would seem to be necessary. We shall argue later that the dominant source of nitrogen may always be in intermediate mass stars of too high a mass to allow a strong, observable, systematic effect.
The second mechanism for causing N/O variation (or C/O variation, but to avoid repetition we only discuss nitrogen here) is variation in the flow of gas into or out of the galaxy. If the nitrogen is primary, then neither inflow of unenriched gas nor outflow of interstellar medium will affect the N/O ratio except in the case where the outflow is different for nitrogen and oxygen (e.g. Marconi, Matteucci & Tosi 1994; Pilyugin 1993). If the nitrogen is secondary (or of any composition behavior other than primary) the N/O ratio is still unaffected by any non-differential outflow, but can be affected by unenriched inflow (Serano & Peimbert 1983; Edmunds 1990). Köppen & Edmunds (1999) were able to place the useful constraint that variation in N/O caused by inflow and secondary nitrogen can be at most a factor of two if the inflow is time-decreasing (as most chemical evolution models tend to assume).
We will find - with no surprise - that nitrogen can be interpreted as having both primary and secondary components. But we will suggest that the nitrogen is (or acts as if it is) secondary on carbon, rather than on oxygen. This will allow a rather steep dependence of N/O on O/H at high O/H, possibly aided by inflow effects. That the CNO cycle might not go to completion, but effectively stop at CN equilibrium, was noted in LMC planetary nebulae by Dopita et al. (1996) and (at low metallicity) from observations of old stars by Langer & Kraft (1984). We shall not discuss isotope ratios such as <sup>15</sup>N/<sup>14</sup>N here, although they are a subject of some recent interest (Chin et al. 1999; Wielen & Wilson 1997).
After reviewing the observational data in Section 2 and stellar yields in Section 3, we give an elementary analytic model which can account well for the general trends of the observational data. The yield parameters of this analytic model can then be compared with published stellar evolution and nucleosynthesis predicted yields with the result that it is possible to choose which are the most realistic yield calculations – and to identify the stellar sources of carbon and nitrogen. The general form of the analytic models is confirmed by calculations generated by detailed chemical evolution codes in Section 5. After some discussion in Section 6, our identification of the sources and metallicity-dependence of carbon and nitrogen production are summarized in §7.
## 2 Data
We begin by considering the observed trends of carbon and nitrogen abundances with metallicity as gauged by oxygen abundance. Abundance data used for our study were taken directly from the literature, and the references are summarized in Table 1, where we indicate the type of object observed along with the spectral range, the first author of the study, and the number of objects. The first five studies pertain to objects within the Milky Way only, while the rest refer to objects in external galaxies. The data for carbon and nitrogen are presented in Figs. 1A and 1B, respectively. Below we discuss the data sources and the observed trends.
Carbon is an element whose abundance has lately become more measurable in extra-galactic H II regions, thanks to the Hubble Space Telescope and its UV capabilities, since the strong carbon lines of C III\] and C IV appear in that spectral region. Fig. 1A is a plot of log(C/O) versus 12+log(O/H) for numerous Galactic and extragalactic objects. Results for extragalactic H II regions are taken from Garnett et al. (1995; 1997; 1999), Izotov & Thuan (1999), and Kobulnicky & Skillman (1998), and are shown with symbols ‘G’, ‘I’, and ‘K’, respectively. The filled circles correspond to stellar data from Gustafsson et al. (1999) for a sample of F and G stars, the filled boxes correspond to B star data from Gummersbach et al. (1998), and the filled diamonds are halo star data points from Tomkin et al. (1992). The Galactic H II regions M8 and the Orion Nebula have been measured by Peimbert et al. (1993) and Esteban et al. (1998), respectively, where the point for Orion is indicated with an ‘O’ and M8 with an ‘M’. The sun’s position (Grevesse et al. 1996) is shown with an ‘S’. Garnett et al. (1999) calculated two C/O ratios each for six objects, using two different reddening laws. The common values of 12+log(O/H) for these pairs are: 8.06, 8.16, 8.39, 8.50, 8.51, and 8.58. We note that the two points at the lowest 12+log(O/H) values in the Garnett et al. and Izotov & Thuan samples are for I Zw 18.
Data in Fig. 1A suggest a direct correlation between C/O and O/H which has been noted before (cf. Garnett et al. 1999), although the result is weakened somewhat by the two points for I Zw 18 around 12+log(O/H)=7.25 along with the data for the halo stars. Assuming that the trend is robust, it clearly implies that carbon production is favored at higher metallicities. One promising explanation (Prantzos, Vangioni-Flam, & Chauveau 1994; Gustafsson et al. 1999) is that mass loss in massive stars is enhanced by the presence of metals in their atmospheres, which increases the UV cross-section to stellar radiation. Stellar yield calculations by Maeder (1992) appear to support this claim. The contributions to carbon production by different stellar mass ranges are discussed by both Prantzos et al. and Gustafsson et al., who conclude that the bulk of carbon is produced by massive stars. However, it is also clear that stars less massive than about 5 M<sub>☉</sub> produce and expel carbon as well (van den Hoek & Groenewegen 1997; Marigo et al. 1996, 1998; Henry et al. 2000), and the relative significance of massive and intermediate mass stars has not been established.
Fig. 1B shows log(N/O) versus 12+log(O/H). The numerous data sources are identified in the figure caption. The most striking feature in Fig. 1B is the apparent threshold running from the lower left to upper right beginning around 12+log(O/H)=8.25 and breached by only a few objects. For the remainder of the paper, we shall refer to this threshold as the NO envelope. Behind the NO envelope the density of objects drops off toward lower values of 12+log(O/H) and higher values of log(N/O). A second feature is the apparent bimodal behavior of N/O. At values of 12+log(O/H)$`<`$8, N/O appears constant, a trend which seems to be consistent with the upper limits provided by the damped Ly$`\alpha `$ objects of Lu et al. (1996; L) and with the observed abundances of blue compact galaxies found by Izotov & Thuan (1999) at very low metallicity. Then, at 12+log(O/H)$`>`$8 N/O turns upward and rises steeply. The bimodal behavior of the NO envelope was emphasized by Kobulnicky & Skillman (1996), and is consistent with the idea that nitrogen and oxygen rise in lockstep at low metallicities, while nitrogen production becomes a function of metallicity at values of 12+log(O/H) greater than 8.3. In summary, the shape of the NO envelope and the scatter of points behind it represent two related problems in understanding nitrogen synthesis. In our analysis, we address primarily the first of these problems, while mentioning the second one briefly and deferring a detailed discussion of it to a future paper.
We employ analytical (§4) and numerical (§5) models in order to understand the observed trends in carbon and nitrogen buildup with that of oxygen. An important component of the numerical models, however, are the stellar yields which enter into the calculations, and these are discussed in detail next in §3.
## 3 Yields
In this section we compare yield predictions by various authors for those elements in whose evolution we are interested, namely carbon, nitrogen, and oxygen. Furthermore, we state clearly at the outset that throughout our discussion we are only considering the most abundant isotope of each of these elements, i.e. <sup>12</sup>C, <sup>14</sup>N, and <sup>16</sup>O. Here, we present and compare modern yields which are available in the literature and which can be used to calculate both analytical and numerical models for the purpose of understanding the observations of carbon and nitrogen. There are two important production sites which we shall consider: (1) intermediate mass stars (IMS; 1 ≤ M ≤ 8 M<sub>☉</sub>), potentially important for both carbon and nitrogen synthesis; and (2) massive stars (M > 8 M<sub>☉</sub>), which are important sites for oxygen production and perhaps for carbon and nitrogen as well.
Recent IMS yield calculations have been carried out by van den Hoek and Groenewegen (1997; VG) and Marigo, Bressan, & Chiosi (1996; 1998; MBC). (Because of their use of modern opacity values and more detailed mass loss processes, the studies by these two teams supersede the earlier work of Renzini & Voli (1981). However, we do not mean to imply that current use of these latter yields necessarily gives a spurious result, given that uncertainties even for contemporary yield calculations are relatively large.) These models include the conversion of carbon to nitrogen at the base of the convective envelope during third dredge-up, the process known as “hot bottom burning”. The quality of the yield predictions by VG and Marigo et al. has recently been assessed empirically by Henry, Kwitter, & Bates (2000), who compared predictions of in situ planetary nebula abundances from these yield studies with their observed abundances in a sample of planetaries and found modest but encouraging agreement between theory and observation.
Yields for massive stars which we shall consider are those by Maeder (1992), Woosley & Weaver (1995) and Nomoto et al. (1997). The last two teams include element production from both quiescent as well as explosive burning stages. The Maeder calculations are unique in that they include the effects of metallicity on mass loss. We do not consider any yields of Type Ia supernovae in our study.
We compare theoretical yields by first defining the integrated yield of an element $`\mathrm{x}`$ as:
$$P_x\equiv \int _{m_{down}}^{m_{up}}mp_x(m)\varphi (m)dm,$$
(1)
where $`\mathrm{p}_\mathrm{x}`$(m) is the stellar yield, $`\varphi `$(m) is the initial mass function, and $`\mathrm{m}_{\mathrm{up}}`$, $`\mathrm{m}_{\mathrm{down}}`$ are, respectively, upper and lower limits to the mass range of all stars formed. Assuming a Salpeter initial mass function (see §5.1) and a stellar mass range from 0.1 to 120 M<sub>☉</sub>, $`\mathrm{P}_\mathrm{x}`$ is then the mass fraction of all stars formed which is eventually expelled as new element $`\mathrm{x}`$.
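For illustration, Eq. 1 is easy to evaluate for any tabulated yield set. The sketch below (ours) uses the mass-normalized Salpeter IMF of §5.1 and placeholder yields, not the published VG or Maeder tables:

```python
import numpy as np

M_DOWN, M_UP, B_SLOPE = 0.1, 120.0, 1.35

def phi(m):
    """Salpeter IMF, normalized so that the integral of m*phi(m) dm is 1."""
    norm = (1.0 - B_SLOPE) / (M_UP**(1.0 - B_SLOPE) - M_DOWN**(1.0 - B_SLOPE))
    return norm * m**(-(1.0 + B_SLOPE))

def integrated_yield(masses, yields):
    """P_x of Eq. 1: the mass fraction of a stellar generation eventually
    expelled as new element x; p_x(m) is interpolated from the table and
    set to zero outside the tabulated mass range."""
    m = np.linspace(M_DOWN, M_UP, 20000)
    p = np.interp(m, masses, yields, left=0.0, right=0.0)
    return np.trapz(m * p * phi(m), m)

# Hypothetical carbon yields p_C(m) at a few progenitor masses [M_sun]:
print(integrated_yield(np.array([1.0, 3.0, 5.0, 15.0, 40.0, 85.0]),
                       np.array([0.0, 4.0e-3, 2.0e-3, 3.0e-3, 1.0e-2, 2.0e-2])))
```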
Our calculated integrated yields are given in Table 2A, where column 1 indicates the source of values for $`\mathrm{p}_\mathrm{x}`$(m) in eq. 1 as defined in the footnote, columns 2 and 3 show the upper and lower mass limits of the range of stars considered in each study \[note these limits are not the same as the integration limits in eq. 1, which refers to all stars\], $`\mathrm{z}`$ indicates the metallicity for which the yields are relevant, and the last three columns give the integrated yields for carbon, nitrogen, and oxygen, respectively, according to eq. 1.
The VG and MBC studies refer to intermediate-mass stars where only carbon and nitrogen yields are of interest to us, since these stars do not synthesize oxygen. We see that results from both studies suggest that, with increasing metallicity, the integrated carbon yield decreases while the nitrogen yield increases. According to a recent study of thermally pulsing AGB models by Buell (1997), stars of relatively low metallicity have smaller total radii (higher surface gravity) and less effective mass loss, increasing the lifetime of the AGB phase and allowing these stars to experience more 3rd dredge-up when carbon is mixed out into the envelope. As metallicity rises, then, carbon production drops off while secondary nitrogen production becomes significant. In the balance, then, as stellar metallicity goes up, the models predict that carbon production declines while nitrogen production rises.
In the case of massive stars, Maeder (1992) predicts a sizable increase in carbon production with metallicity as the result of a mass loss process which is metallicity-sensitive. Carbon yields from Woosley & Weaver and Nomoto predict significantly less carbon than Maeder and show no apparent sensitivity to metallicity. Woosley & Weaver predict higher nitrogen production than does Nomoto for solar metallicity; in addition, the Woosley & Weaver results indicate that nitrogen production increases with metallicity. Note that Maeder did not include the contribution from supernova ejection, only from winds, in his calculation of nitrogen yields; the relevant numbers in Table 2A from his paper are therefore lower limits. Oxygen yields are predicted by Maeder to correlate inversely with metallicity while Woosley & Weaver predict them to correlate directly, albeit weakly, with metallicity. We note that this inverse sensitivity in Maeder’s results will be seen below to influence significantly the C/O and N/O behavior in our models. We also point out that yield calculations for massive stars which account for stellar rotation effects have been carried out by Langer & Henkel (1995) and Langer et al. (1997). Their predicted yields appear to be similar to those of Maeder for <sup>12</sup>C and <sup>14</sup>N.
## 4 Analytical Models
Our aim in this section is to fit reasonable analytical models to the data presented in §2 by altering input yield parameters. These latter values are then used to select the most appropriate sets of published yields to be employed as input for our follow-up numerical models.
An inspection of C/O versus O/H abundances shown in Fig. 1A suggests that carbon production increases with metallicity. We will model this analytically by the formal assumption of a primary and secondary component to the carbon yield, but we must emphasize that this does not necessarily imply that carbon is a “secondary” product of a varying initial seed. The metallicity dependence of the yield may well come about because stellar evolution and the release of nucleosynthesis products from a star are affected by its mass-loss, and this mass-loss can be metallicity dependent. For the results presented here, we will assume that there is no time delay in the delivery of carbon to the interstellar medium compared to the delivery of oxygen. (Our subsequent identification of the major site of carbon synthesis with massive stars confirms the validity of this assumption.) To allow for the effects of gas flow (Edmunds 1990) we note that outflow should have no effect on primary/primary or secondary/primary element ratios, but accretion (i.e. inflow) of unenriched gas can have some effect. As shown in Köppen & Edmunds (1999), useful limits on the effects of time-decreasing inflows can be set by considering the elementary “linear” inflow model in which the rate of gas accretion is equal to a constant times the star formation rate. The analytic models show that inflow can steepen the N/O relation, and - as inflow is frequently evoked in Galactic chemical evolution models as a mechanism for solving the G-dwarf problem - we allow some modest inflow in our numerical models.
Following the notation of Edmunds (1990, as modified by Köppen & Edmunds 1999) for a system with gas mass $`\mathrm{g}`$, gas metallicity $`\mathrm{z}`$ (by mass – and simply proportional to the oxygen abundance), carbon abundance $`\mathrm{z}_\mathrm{c}`$ (by mass), and a linear accretion rate $`\alpha \mathrm{a}`$ times the star formation rate, then considering the change in element abundances in the interstellar medium when a mass $`\mathrm{ds}`$ is formed into stars (from op. cit. Eq. 11 with 6):
$$\frac{dz}{ds}=\frac{p-az}{g}$$
(2)
$$\frac{dz_c}{ds}=\frac{p_{pc}+p_{sc}z-az_c}{g},$$
(3)
where the general metallicity yield is $`\mathrm{p}`$, and that of carbon is $`\mathrm{p}_{\mathrm{pc}}+\mathrm{p}_{\mathrm{sc}}\mathrm{z}`$, where $`\mathrm{p}_{\mathrm{pc}}`$ is the primary component yield and $`\mathrm{p}_{\mathrm{sc}}`$ the secondary. It has also been (with justification) assumed that $`\mathrm{z},\mathrm{z}_\mathrm{c}\ll 1`$. Note that $`\alpha `$ is that fraction of interstellar material going into a generation of star formation that remains locked up in long-lived stars and remnants. The parameter characterizing accretion obeys $`0\le \mathrm{a}`$, and for the “simple” closed-box model $`\mathrm{a}\to 0`$ (limiting forms are given in Appendix A).
Dividing the two equations gives
$$\frac{dz_c}{dz}+\frac{az_c}{p-az}=\frac{p_{pc}+p_{sc}z}{p-az},$$
(4)
which may be solved to give
$$z_c=\frac{p_{pc}z}{p}+\frac{pp_{sc}}{a^2}\left\{\frac{az}{p}+\left(1-\frac{az}{p}\right)\mathrm{ln}\left(1-\frac{az}{p}\right)\right\};$$
(5)
suitable values for an (eyeball) fit to the data of Fig. 1A are $`\mathrm{p}=0.01`$ (fixed by requiring solar oxygen abundance for a simple model of the solar neighbourhood), $`\mathrm{p}_{\mathrm{pc}}=0.0012`$, $`\mathrm{p}_{\mathrm{sc}}=0.9`$, and $`\mathrm{a}=0.1`$, as shown with a bold curve in Fig. 2A. We also show with faint curves the simple models for stronger accretion, i.e. $`\mathrm{a}=0.5`$, $`\mathrm{a}=0.9`$, to indicate the spread in C/O values that can be generated by different accretion rates. (Accretion parameters of less than 0.1 produce curves which are nearly coincident with that for 0.1, so we do not show them in Fig. 2A.) Note that $`\mathrm{z}_\mathrm{c}/\mathrm{z}`$ and $`\mathrm{z}_\mathrm{n}/\mathrm{z}`$ represent mass ratios, and must be multiplied by the relevant atomic weight ratios for comparison with observed abundance number ratios. As explained in Köppen & Edmunds (1999), the effect of different accretion rates on the “secondary” component is no more than a factor of two in the $`\mathrm{z}_\mathrm{c}/\mathrm{z}`$ ratio at a given metallicity $`\mathrm{z}`$, if the accretion is decreasing with time. It is clear that the model gives a reasonable account of the data, with the effects of different accretion rates allowing some (small) spread in the element ratios at high abundance.
The next step is to extend the model to nitrogen abundances. The proposal here is that the yield of nitrogen has both a primary and a secondary component – but the latter is (or behaves as if it were) secondary on the carbon abundance, rather than on the overall metallicity. In nucleosynthesis terms this might occur because the CN cycle comes into equilibrium much faster than the ON cycle. Complete conversion of the C into N could take place long before much O is converted into N. So provided the CNO cycling stops at this point, the major source of secondary nitrogen will have come from carbon. For the primary component, it is of little consequence whether the N comes from CNO cycling of C or O, since the C and O will have to have been produced in the star by the triple alpha reaction and $`{}_{}{}^{12}\mathrm{C}(\alpha ,\gamma )^{16}\mathrm{O}`$ and hence be insensitive to the star’s initial abundance of either C or O. Some support for the cycling only converting the carbon may come from interpreting <sup>17</sup>O/<sup>16</sup>O ratios in the interstellar medium if overproduction of <sup>17</sup>O is to be avoided, unless the standard nuclear reaction rate for <sup>17</sup>O(p,$`\alpha `$)<sup>14</sup>N is seriously in error (see Edmunds 2000). But it could also be the case that the stellar evolution (with mass loss) mimics the effect of nitrogen being secondary on carbon because the overall effect is that the resulting N/O has a steeper metallicity than the proportionality to $`\mathrm{z}`$ expected for a secondary process on oxygen. Taking for the moment the simpler view of secondary processing on the carbon, then we have a nitrogen yield $`\mathrm{p}_{\mathrm{pn}}+\mathrm{p}_{\mathrm{sn}}\mathrm{z}_\mathrm{c}`$, and
$$\frac{dz_n}{ds}=\frac{p_{pn}+p_{sn}z_c-az_n}{g},$$
(6)
which may be combined with Eqs. 2 and 5, and solved to give
$$\begin{array}{rl}\mathrm{z}_\mathrm{n}=& \left({\displaystyle \frac{\mathrm{p}_{\mathrm{pn}}}{\mathrm{p}}}+{\displaystyle \frac{\mathrm{p}_{\mathrm{pc}}\mathrm{p}_{\mathrm{sn}}}{\mathrm{ap}}}+{\displaystyle \frac{\mathrm{p}_{\mathrm{sc}}\mathrm{p}_{\mathrm{sn}}}{\mathrm{a}^2}}\right)\mathrm{z}+\left({\displaystyle \frac{\mathrm{p}_{\mathrm{pc}}}{\mathrm{p}}}+{\displaystyle \frac{\mathrm{p}_{\mathrm{sc}}}{\mathrm{a}}}\right)\left({\displaystyle \frac{\mathrm{p}\mathrm{p}_{\mathrm{sn}}}{\mathrm{a}^2}}\right)\left(1-{\displaystyle \frac{\mathrm{az}}{\mathrm{p}}}\right)\mathrm{ln}\left(1-{\displaystyle \frac{\mathrm{az}}{\mathrm{p}}}\right)\\ & -{\displaystyle \frac{\mathrm{p}\mathrm{p}_{\mathrm{sc}}\mathrm{p}_{\mathrm{sn}}}{2\mathrm{a}^3}}\left(1-{\displaystyle \frac{\mathrm{az}}{\mathrm{p}}}\right)\left[\mathrm{ln}\left(1-{\displaystyle \frac{\mathrm{az}}{\mathrm{p}}}\right)\right]^2.\end{array}$$
(7)
Assuming the values of $`\mathrm{p}`$, $`\mathrm{p}_{\mathrm{pc}}`$, and $`\mathrm{p}_{\mathrm{sc}}`$ given above, suitable fitting values to the data of Fig. 1B are $`\mathrm{p}_{\mathrm{pn}}=0.00022`$ and $`\mathrm{p}_{\mathrm{sn}}=0.285`$, as shown in Fig. 2B for the same accretion rate values as in Fig. 2A. As can be seen, this “double secondary” model gives the characteristic knee, and a steeper N/O dependence as O/H increases. Again, variation in the inflow rate would give rise to some spread at moderate-to-high O/H values. The limiting form of Eq. 7 for the simple closed-box model is given in Appendix A, in which the “tertiary” or “double secondary” component due to the processing of carbon is more apparent.
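For readers who wish to reproduce these curves, Eqs. 5 and 7 can be evaluated directly; the sketch below (ours) uses the eyeball-fit parameters quoted above. The resulting $`\mathrm{z}_\mathrm{c}/\mathrm{z}`$ and $`\mathrm{z}_\mathrm{n}/\mathrm{z}`$ are mass ratios and still require the atomic-weight factors mentioned earlier for comparison with number ratios.

```python
import numpy as np

P, P_PC, P_SC = 0.01, 0.0012, 0.9      # oxygen and carbon yields
P_PN, P_SN, A = 0.00022, 0.285, 0.1    # nitrogen yields, accretion parameter

def z_carbon(z):
    """Eq. 5: carbon mass fraction versus metallicity z (valid for a*z < p)."""
    x = A * z / P
    L = np.log1p(-x)                   # ln(1 - az/p)
    return P_PC * z / P + (P * P_SC / A**2) * (x + (1.0 - x) * L)

def z_nitrogen(z):
    """Eq. 7: nitrogen, primary plus 'secondary on carbon'."""
    x = A * z / P
    L = np.log1p(-x)
    lin = (P_PN / P + P_PC * P_SN / (A * P) + P_SC * P_SN / A**2) * z
    mid = (P_PC / P + P_SC / A) * (P * P_SN / A**2) * (1.0 - x) * L
    quad = -(P * P_SC * P_SN / (2.0 * A**3)) * (1.0 - x) * L**2
    return lin + mid + quad

z = np.linspace(1.0e-5, 0.02, 5)
print(z_carbon(z) / z)                 # mass-ratio analogue of C/O
print(z_nitrogen(z) / z)               # mass-ratio analogue of N/O
```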
Comparing our analytic fit yield coefficients ($`\mathrm{p}`$, $`\mathrm{p}_{\mathrm{pc}}`$, $`\mathrm{p}_{\mathrm{sc}}`$, $`\mathrm{p}_{\mathrm{pn}}`$, $`\mathrm{p}_{\mathrm{sn}}`$) with the results of the detailed stellar yield calculations in §3 ($`\mathrm{P}_\mathrm{C}`$, $`\mathrm{P}_\mathrm{N}`$, $`\mathrm{P}_\mathrm{O}`$), we find that for carbon our analytical primary and secondary values agree closely with Maeder’s yields, and that the analytical values for nitrogen are similar to the intermediate-mass star predictions of VG and MBC.
An elementary analytic treatment of the possible effect of a time delay in the nitrogen production is given in Appendix B. For illustration, the N/O versus O/H track for a closed system with a particularly rapid rate of star formation is plotted as a dashed line in Fig. A1B.
To summarize our analytic conclusions, we propose that the observational data on carbon and nitrogen abundance ratios relative to oxygen suggest that carbon has components that behave as if they were primary and secondary – the secondary behavior most likely being due to dependence of stellar evolution and mass loss on metallicity. The nitrogen shows both primary and secondary components – the latter behaving as if it were secondary on carbon abundance, i.e. demonstrating a steeper dependence on metallicity than would be expected if it were secondary on oxygen. Again, some of this behavior may be due to dependence of stellar evolution and mass loss on metallicity, rather than a purely nuclear seed effect. Different rates of infall of unenriched gas in galactic systems could explain some modest spread in secondary contribution to C/O ratios and (slightly greater effect) N/O ratios at high O/H. Our analytical model has also suggested that Maeder’s massive star yields for carbon and oxygen are the most appropriate, while the yields of VG or MBC are needed for nitrogen. We explore this in much greater detail in the next section, where we present results of numerical models which incorporate these yields in them.
## 5 Numerical Models
We now wish to use the implications and results concerning yields of the last two sections to calculate numerical models which will assist us in closely identifying the important production site(s) of carbon and nitrogen in the Universe. Specifically, we want to quantify the relative contributions of intermediate-mass and massive stars to the total mass buildup of these two elements as a function of time. To do this, we have constructed a numerical model of a generic one-zone galactic region and allowed it to evolve while we keep track of the contributions of intermediate-mass and massive stars to carbon and nitrogen synthesis at each timestep. The galaxy’s mass builds from zero by infall (accretion) occurring at a time-varying rate, while the rate of star formation is determined by the gas fraction. We do not assume instantaneous recycling, and thus stars eject matter after a time lag appropriate for their birth masses. The amounts of new helium, carbon, nitrogen, and oxygen synthesized and ejected by stars is assumed to be a function of progenitor mass and metallicity, using adopted yields inferred from analytical studies in §4. We then trace the buildup in the interstellar gas of each element with time. Obviously, our final results will be directly linked to our choice of yields.
The abundance patterns of C/O and N/O shown in Figs. 1A,B are the result of plotting abundance ratios determined in H II regions and stars from a broad sample of galaxies, including the Milky Way. Therefore, in computing our numerical models to fit the data, we make the following reasonable assumptions: (1) stellar yields are strictly functions of progenitor mass and metallicity, and as such are not influenced by a star’s local environment; the yields are universal and do not vary from galaxy to galaxy; (2) likewise, the initial mass function is universal; and (3) the star formation rate is a function of environment, since it is related to the local density and gas fraction. We now describe the details of our numerical code.
### 5.1 Details Of Code
Our numerical code for chemical evolution is a one-zone program written by one of us (R.B.C.H.) which, for specified infall and star formation rates, follows the buildup of carbon, nitrogen, and oxygen over time. The code utilizes the formalism for chemical evolution described in Tinsley (1980), and we now show and describe the relevant equations.
We imagine our generic galactic region as an open box, originally empty but accreting metal-free gas. As this material accumulates and forms stars, the total mass of the region becomes partitioned into interstellar gas of mass $`\mathrm{g}`$ and stars of mass $`\mathrm{s}`$ such that
$$M=g+s,$$
(8)
where $`\mathrm{M}`$ is total mass inside the box. The time derivative of eq. 8 is
$$\dot{M}=\dot{g}+\dot{s},$$
(9)
and if $`\dot{\mathrm{M}}`$ is taken to be the rate of infall $`\mathrm{f}(\mathrm{t})`$, $`\psi (\mathrm{t})`$ the star formation rate, and $`\mathrm{e}(\mathrm{t})`$ the mass ejected by stars, then:
$$\dot{s}=\psi (t)-e(t)$$
(10)
and
$$\dot{g}=f(t)-\psi (t)+e(t).$$
(11)
The interstellar mass of element $`\mathrm{x}`$ in the zone is $`\mathrm{gz}_\mathrm{x}`$, whose time derivative is:
$$\dot{g}z_x+g\dot{z}_x=-z_x(t)\psi (t)+z_x^ff(t)+e_x(t),$$
(12)
where $`\mathrm{z}_\mathrm{x}(\mathrm{t})`$ and $`\mathrm{z}_\mathrm{x}^\mathrm{f}`$ are the mass fractions of $`\mathrm{x}`$ in the gas and in the infalling material, respectively, and $`\mathrm{e}_\mathrm{x}(\mathrm{t})`$ is the stellar ejection rate of $`\mathrm{x}`$. Solving for $`\dot{\mathrm{z}}_\mathrm{x}(\mathrm{t})`$ yields:
$$\dot{z}_x(t)=\{f(t)[z_x^f-z_x(t)]+e_x(t)-e(t)z_x(t)\}g^{-1}.$$
(13)
The second term on the right hand side of eq. 13 accounts for the injection of metals into the gas by stars, while the first and third terms account for effects of dilution due to infall and ejected stellar gas.
In our models we take the infall rate to be:
$$f(t)=\mathrm{\Sigma }_{t_o}\left\{\tau _{scale}\left[1-exp\left(-\frac{t_o}{\tau _{scale}}\right)\right]\right\}^{-1}exp\left(-\frac{t}{\tau _{scale}}\right)M_{\odot }Gyr^{-1}pc^{-2},$$
(14)
where $`\mathrm{t}_\mathrm{o}`$ and $`\mathrm{\Sigma }_{\mathrm{t}_\mathrm{o}}`$ are the current epoch and surface density; the latter is taken to be 75 M<sub>☉</sub> pc<sup>-2</sup>, with $`\tau _{\mathrm{scale}}=4`$ Gyr and $`\mathrm{t}_\mathrm{o}`$=15 Gyr. This formulation of the infall rate is the one discussed and employed by Timmes, Woosley, & Weaver (1995) for their models of the Galactic disk. The rates of total mass ejection, $`\mathrm{e}(\mathrm{t})`$, and of ejection of element $`\mathrm{x}`$, $`\mathrm{e}_\mathrm{x}(\mathrm{t})`$, are:
$$e(t)=\int _{m_{\tau _m}}^{m_{up}}[m-w(m)]\psi (t-\tau _m)\varphi (m)dmM_{\odot }Gyr^{-1}pc^{-2}$$
(15)
and
$$e_x(t)=\int _{m_{\tau _m}}^{m_{up}}\{[m-w(m)]z_x(t-\tau _m)+mp_{x,z_{t-\tau _m}}\}\psi (t-\tau _m)\varphi (m)dmM_{\odot }Gyr^{-1}pc^{-2}.$$
(16)
In eqs. 15 and 16 $`\mathrm{m}_{\tau _\mathrm{m}}`$ is the turn-off mass, i.e. the stellar mass whose main sequence lifetime corresponds to the age of the system; this quantity was determined using results from Schaller et al. (1992). $`\mathrm{m}_{\mathrm{up}}`$ is the upper stellar mass limit, taken to be 120 M<sub>☉</sub>, and $`\mathrm{w}(\mathrm{m})`$ is the remnant mass corresponding to ZAMS mass $`\mathrm{m}`$, taken from Yoshii, Tsujimoto, & Nomoto (1996). $`\mathrm{p}_\mathrm{x}(\mathrm{z})`$ is the stellar yield, i.e. the mass fraction of a star of mass $`\mathrm{m}`$ which is converted into element $`\mathrm{x}`$ and ejected, and $`\varphi (\mathrm{m})`$ is the initial mass function. The initial mass function is the normalized Salpeter (1955) relation:
$$\varphi (m)=\left[\frac{1-b}{m_{up}^{(1-b)}-m_{down}^{(1-b)}}\right]m^{-(1+b)},$$
(17)
where b=1.35. Finally, the star formation rate $`\psi (\mathrm{t})`$ is given by:
$$\psi (t)=\nu M\left(\frac{g}{M}\right)^2M_{\odot }Gyr^{-1}pc^{-2}$$
(18)
where $`\nu =\nu _\mathrm{o}\left(1+\frac{\mathrm{z}}{0.001}\right)`$ is the star formation efficiency, which we note is metallicity-sensitive. Again we have followed Timmes et al. here, but we have introduced the metal-enhancement factor to help fit the data at high O/H. We also note that empirical studies of the star formation rate by Kennicutt (1998) suggest that the SFR exponent is $`1.4\pm 0.15`$ rather than the purely quadratic form in eq. 18. However, test models run with different star formation laws suggested that the outcome is much more sensitive to $`\nu `$ than to the exponent, and so we continue to adopt the law given in eq. 18.
In choosing stellar yields, we assumed at the beginning that carbon and nitrogen levels may be affected by both IMS and massive stars, while oxygen abundance is controlled by massive stars only. Following our analysis in §4, stellar yields for IMS were taken from VG, while those for massive stars were taken from Maeder (1992). During the numerical calculations, yields were interpolated to find the value relevant for the stellar mass and metallicity under consideration. On the other hand, yields were not extrapolated outside of the ranges for which they were determined by VG and Maeder. We began by assuming that the VG yields correctly quantify the contributions of IMS to the evolution of these two elements. (We did not experiment with the MBC yields because they have not calculated yields for stars between 5 and 8 M<sub>⊙</sub>.) Then for the contribution of massive stars, we adopted Maeder’s (1992) yields for carbon, nitrogen, and oxygen but scaled his carbon yields slightly to force agreement between observations and our model predictions. Our model carbon yields are discussed more in §5.2.1.
Our calculations assumed a timestep of length one million years for the first billion years, after which the timestep length was increased one hundred-fold. At each timestep, the increment in $`\mathrm{z}_\mathrm{x}`$ was calculated by solving eq. 13 along with the required subordinate equations 14-18. This increment was then added to the current value and the program advanced to the next time step. Finally, the total metallicity at each point was taken as the sum of the mass fractions of carbon, nitrogen, and oxygen.
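For concreteness, a minimal sketch of this integration scheme is given below, assuming forward-Euler updates of eqs. 10, 11, and 13. The ejection integrals of eqs. 15 and 16 are stubbed out, since they require the full lifetime and yield tables; the initial conditions and print statement are illustrative placeholders.

```python
import math

# Schematic forward-Euler integration of eqs. 10, 11, 13, 14, and 18.
# The ejection integrals of eqs. 15-16 are stubbed out (they need the
# full lifetime and yield tables); initial conditions are placeholders.
SIGMA_TO, TAU, T_O, NU_O = 75.0, 4.0, 15.0, 0.03   # M_sun pc^-2, Gyr, Gyr

def infall(t):
    """Infall rate of eq. 14 in M_sun Gyr^-1 pc^-2."""
    norm = TAU * (1.0 - math.exp(-T_O / TAU))
    return SIGMA_TO / norm * math.exp(-t / TAU)

def e_tot(t):   # stub for eq. 15 (total mass ejection rate by stars)
    return 0.0

def e_x(t):     # stub for eq. 16 (ejection rate of element x)
    return 0.0

g, s, z_x, z_x_f = 1e-3, 0.0, 0.0, 0.0   # gas, stars, Z_x(gas), Z_x(infall)
t, dt = 0.0, 1e-3                        # Gyr; 1 Myr steps for the first Gyr
while t < T_O:
    if t >= 1.0:
        dt = 0.1                         # hundred-fold longer steps afterwards
    m = g + s
    psi = NU_O * (1.0 + z_x / 0.001) * m * (g / m) ** 2  # eq. 18 (z_x as proxy for z)
    dz_x = (infall(t) * (z_x_f - z_x) + e_x(t) - e_tot(t) * z_x) / g * dt  # eq. 13
    s += (psi - e_tot(t)) * dt                           # eq. 10
    g += (infall(t) - psi + e_tot(t)) * dt               # eq. 11
    z_x += dz_x
    t += dt

print(f"t = {t:.1f} Gyr: gas = {g:.1f}, stars = {s:.1f} M_sun pc^-2")
```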
### 5.2 Results Of Numerical Experiments
Our best numerical model is shown with a bold line and identified with a ‘B’ in Figs. 3A,B along with the data. Models ‘A’ and ‘C’ differ from ‘B’ and each other only in their star formation efficiency and will be discussed later. For model ‘B’, $`\nu _\mathrm{o}=0.03`$. In both figures this model reproduces the general trends quite well. Note especially the match between the observed NO envelope and model ‘B’ in Fig. 3B. The flat primary, metal-insensitive region below $`12+\mathrm{log}(\mathrm{O}/\mathrm{H})8.3`$ as well as the upward-turning metal-sensitive secondary region at oxygen abundances above this value are reproduced successfully. Given the uncertainties in the observations, model ‘B’ would seem to provide an adequate fit to both the C/O and N/O patterns in Figs. 3A,B. We now discuss these patterns individually.
#### 5.2.1 Carbon
The C/O ratio predicted by our model ‘B’ rises sharply in Fig. 3A as a result of the increase of carbon production and decrease in oxygen production in massive stars, as predicted by Maeder’s (1992) yields. This metal-sensitivity of carbon production was strongly implied by our analytical models as well. We note that it was necessary to adjust Maeder’s carbon yields to values somewhat above the published ones in order to achieve a good fit to the data; our final empirical carbon yields for massive stars are listed in Table 3. We point out that these yields have been scaled up by a factor whose value is slightly and positively sensitive to metallicity. (Note that IMS yields would have to have been scaled by more than an order of magnitude to produce the same effect, a change which is much less tenable, given extant planetary nebula abundance constraints.) Column 1 gives the progenitor mass in solar units, columns 2 and 3 compare our and Maeder’s yields at $`\mathrm{z}=0.001`$ and columns 4 and 5 do likewise for $`\mathrm{z}=0.02`$. The values listed are in solar masses, i.e. $`\mathrm{mp}_\mathrm{C}(\mathrm{m})`$, where $`\mathrm{p}_\mathrm{C}(\mathrm{m})`$ is the stellar yield discussed in §3 (see eq. 1).
In contrast to massive stars, intermediate-mass stars were found to play an insignificant role in carbon production, based upon the use of VG yields. To confirm this, we recalculated model B but excluded the IMS carbon contributions. This trial model predicted a track in Fig. 3A only slightly offset from the one shown for model ‘B’, and so we do not show it. It would seem that IMS have little impact on the model results for carbon. This is interesting because, despite the fact that carbon enrichment in some planetary nebulae definitely indicates carbon production in the progenitors of these objects, it is clear from our analysis, based upon our chosen yield sets, that carbon production by IMS is swamped by massive star production of this element, which exceeds it by an order of magnitude. This result was anticipated by our comparison of integrated yields from IMS and massive stars in §3 and Table 2.
#### 5.2.2 Nitrogen
By far the most difficult obstacle to obtaining a reasonable fit to the observations was in reproducing the trends in the nitrogen data. Integrated yields in Table 2 strongly imply that intermediate-mass stars play a major role in nitrogen evolution. Further support for the idea that IMS are an important source of nitrogen is that the integrated yields in Table 2 for nitrogen by VG and oxygen by Maeder for $`\mathrm{z}=0.001`$ indicate that $`\mathrm{log}(\mathrm{N}/\mathrm{O})=-1.41`$, in excellent agreement with observed values at low O/H.
But the problem with assuming straightaway that the IMS nitrogen source is the dominant one concerns the delay which these stars undergo in ejecting their products because of their lower masses. Since oxygen is produced by massive stars, a delay in nitrogen ejection by IMS might be too great to explain the N/O value that is observed at low O/H (at ostensibly early times). Indeed the alleged dominant role of IMS in nitrogen production has been questioned especially by Izotov & Thuan (1999; their data are indicated with ‘i’ in Fig. 1B), who suggest that because of IMS delay, faster-evolving massive stars must be a significant source of primary nitrogen in order to raise the log(N/O) ratio to a value of -1.5 at 12+log(O/H) ≈ 7.2, a metallicity which they assume to be commensurate with a galactic age too young to allow for nitrogen ejection by IMS. Umeda, Nomoto, & Nakamura (2000) have suggested that synthesis in metal-free population III stars can give rise to some primary nitrogen, but their implied production ratios give a log(N/O) that is well below -1.5.
To analyze this problem, let us first look at the expected delay times for IMS nitrogen ejection by imagining, for simplicity, a star cluster which forms during a burst with mass distributions conforming to a Salpeter IMF. Figure 4 then follows the increase in the integrated yield (eq. 1) of nitrogen versus time, as predicted by the VG calculations for IMS for the five metallicities indicated by line type. Also shown above the curves with short vertical lines are the main sequence turnoff ages of stars of mass 2, 3, 4, 5, 6, 7, and 8 M<sub>⊙</sub>, respectively, right to left. The correspondence between age and mass was determined using the results of Schaller et al. (1992). As the cluster ages, stars of increasingly lower mass eject their products and contribute to the total integrated yield. We see in Fig. 4 that nearly all of the nitrogen is produced within roughly 250 Myr of when the population forms, corresponding to the lifetime of a star of 4 M<sub>⊙</sub>. Nitrogen yields decline sharply (and the curve flattens) below this mass, because hot bottom burning is less significant. Now to consider the possible significance of this delay time, we compare it with the delay time in the ejection of oxygen by massive stars. Assuming a representative stellar mass of 25 M<sub>⊙</sub>, we find that Schaller et al. predict a main sequence lifetime of 6 Myr, considerably shorter than the IMS delay. On short time scales, then, the delay in nitrogen production by IMS appears to be significant, and therefore the suggestion by Izotov & Thuan that primary nitrogen must come principally from massive stars and its interstellar abundance must increase in lockstep with that of oxygen seems credible.
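As a rough numerical illustration of these lag times, one can use the common power-law approximation $`\tau (m)10\,\mathrm{Gyr}\,(m/M_{\odot })^{2.5}`$ for main-sequence lifetimes; this scaling is a textbook stand-in for the Schaller et al. (1992) tracks used in the text and reproduces the quoted timescales only to order of magnitude.

```python
# Rough main-sequence lifetimes from the textbook scaling
# tau(m) ~ 10 Gyr * (m / M_sun)**-2.5 -- an approximation standing in
# for the Schaller et al. (1992) tracks used in the text.
def lifetime_gyr(m_solar):
    return 10.0 * m_solar ** -2.5

for m in (25.0, 8.0, 4.0, 2.0):
    print(f"m = {m:5.1f} M_sun  ->  tau ~ {1e3 * lifetime_gyr(m):8.1f} Myr")
# A 25 M_sun star lives a few Myr, while a 4 M_sun star lives ~300 Myr,
# reproducing the order-of-magnitude gap discussed above.
```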
But the above picture rests on the assumption that the age-metallicity relation for all galaxies is similar. Thurston (1998) has questioned this and has shown that under conditions of a reduced star formation rate, the relation flattens, so that the time required by a system to reach even the lowest metallicity levels observed in H II regions in metal-poor systems is equal to or greater than the lag time for IMS to begin ejecting nitrogen. Then, assuming continuous star formation thereafter, nitrogen and oxygen will continue to increase in lockstep.
Bursting systems present a slightly more complicated picture. Yet even here, as long as the time between bursts is significantly larger than the 250 Myr lag time for IMS nitrogen ejection, probability favors observing systems which have advanced at least one lag time beyond the burst, so that by plotting these objects in Fig. 1B we should still find a constant N/O value over a range in metallicity.
Our approach, therefore, was to calculate models with reduced star formation efficiency, i.e. $`\nu _\mathrm{o}`$ in eq. 18, which produced both the N/O and C/O behavior observed in Figs. 1A,B while adopting the yields of VG and Maeder, as discussed in §4, and assuming a significant role of IMS in nitrogen production. For simplicity, we chose to consider only the continuous star formation scenario and ignore bursting, since we are testing the simplest explanation possible. (An analytical model for time delay of the nitrogen is given in Appendix B.)
Our best numerical model result for N/O, indicated by curve B in Fig. 3B, provides a good fit to the NO envelope. Notice in particular that the steep rise in log(N/O) at low metallicity coincides with the release of nitrogen by IMS. However, by lowering the star formation efficiency, we have forced this rise to occur at very low oxygen abundances, so that the relatively flat behavior of N/O begins at metallicities below those of the most metal-poor objects observed. At high metallicities, N/O once again rises steeply, this time because nitrogen synthesis in IMS is increasing while oxygen production in massive stars is decreasing with metallicity. Thus, our numerical model shown by curve B explains the general trend in the NO envelope while preserving the role of IMS in nitrogen production that is expected from the comparison of integrated yields in §3.
The effect of adjusting the star formation efficiency is shown by curves A and C in Fig. 1B, where efficiency has been lowered and raised, respectively, by a factor of five, as explained at the beginning of §5.2. As anticipated, the curve enters the plot at a lower metallicity when the efficiency is lower (curve A); the opposite is seen with the higher efficiency (curve C). The general shapes of curves A, B, and C are nearly the same at low metallicity, but they are simply shifted horizontally. To explicitly show the time relation here, we have placed five plus symbols corresponding to 0.25, 0.50, 0.75, 1.0, and 2.0 Gyr beginning at the lower left and moving up and to the right on each curve. So, for example, at 0.25 Gyr the value of log N/O is nearly the same for all three curves, while the metallicity \[12+log(O/H)\] is clearly different.
Note also that when star formation efficiency is high, as in curve C, the N/O curve rises more steeply than in the lower efficiency cases of curves A and B. This result is mainly due to the reduced rate of oxygen buildup at higher metallicities, where the reduced rate occurs for two reasons. First, the massive star oxygen yield decreases with metallicity. Second, the dilution effect of infall is now greater at higher metallicities, since high metallicity is achieved at earlier times, when the infall rate is greater. It is interesting that under certain conditions these two effects can conspire to produce a reduction in the interstellar oxygen fraction, since reduced massive star oxygen production at high metallicities cannot entirely compensate for the effect of dilution by infall. Notice that curve C does indeed indicate that oxygen abundance briefly goes down (the curve slants slightly leftward) when its 12+log(O/H) reaches a value of about 8.7; the positive trend in oxygen is then resumed when the effects of yield and infall are reversed with metallicity change and time. Such variation may contribute to the scatter observed in the behavior of N/O (see §6), and if stretched far enough might explain the paradoxical situation presently seen in the comparison of the solar metallicity with the significantly lower value in the younger Orion Nebula (Esteban et al. 1998). Of course this regressive behavior of oxygen which is hinted at in our models is heavily dependent on a negative trend of the oxygen yield with metallicity and the existence of a suitable gas flow. The decrease of interstellar metallicity with time would formally require a gas inflow that is a strongly increasing function of time.
### 5.3 Conclusions From Our Numerical Models
Our models strongly indicate that objects located along the flat region of the N/O curve at low metallicities are entirely and naturally explained if their environment is characterized by historically low star formation rates, with nearly all of the nitrogen being produced by intermediate mass stars. For higher metallicities, the upturn in N/O shows the dependence of nitrogen yield on metallicity, augmented by the decrease in oxygen yield. At the same time, C/O increases with O/H because of the positive metallicity effect on carbon synthesis by massive stars.
## 6 Discussion
We have satisfactorily and simultaneously reproduced the C/O and N/O behavior in Figs. 3A,B by adjusting the star formation rate to suppress oxygen buildup until nitrogen can be ejected by IMS. Our results clearly suggest that carbon rises relative to oxygen because the former is produced in massive stars in a manner which is directly sensitive to metallicity through the mass loss process. On the other hand, the bimodal behavior of N/O is consistent with primary IMS nitrogen production at metallicities below 12+log(O/H) of 8.3, with metal-sensitive secondary production contributing significantly above this value. Both the C/O and high-metallicity N/O trends are amplified by the decrease in oxygen synthesis by massive stars as metallicity increases. This picture is in complete agreement with the yield predictions for massive and intermediate mass stars discussed in §3.
The low star formation rate at early times is necessary to allow log(N/O) to rise to about -1.4 at metallicities below those observed, as primary nitrogen production for an IMS population builds up to its maximum level. As metallicities continue to climb, IMS production of nitrogen and massive star production of oxygen remain in equilibrium and the element ratio remains constant. Prior to 250 Myr in our model, the gas surface density is roughly equal to 4 M<sub>⊙</sub>/pc<sup>2</sup>, corresponding to a star formation rate of 10<sup>-3</sup> M<sub>⊙</sub> yr<sup>-1</sup> kpc<sup>-2</sup>. A reasonable interpretation of this, then, is that the galaxies containing the low metallicity systems studied by Izotov & Thuan and Kobulnicky & Skillman (1996) and shown in Fig. 1B historically have had relatively low star formation rates due to their low surface densities. Indeed, results in Papaderos et al. (1996) indicate that a typical surface density for a BCG is 2-5 M<sub>⊙</sub>/pc<sup>2</sup>, consistent with the surface gas densities in our model at early times. Observations by van Zee et al. (1997) for low surface brightness galaxies are consistent with this level, as are the total surface densities for the outer disk regions of spirals presented in Vila-Costas & Edmunds (1992). Of course the star formation rate in these systems could be reduced because of the lack of environmental effects or spiral structure, or the inability of the gas to cool efficiently enough. But whatever the cause, it seems clear that the data are consistent with low star formation rates in these objects.
Thus, our results nicely accommodate the popular picture of blue compact galaxies having an underlying old metal-poor population with star bursts superimposed on the systems. The N/O levels in these systems are set by maximized primary nitrogen production of IMS and oxygen production by massive stars. Our picture contrasts with the explanation by Izotov & Thuan which claims that these systems are necessarily very young because of their low metallicities. Then, since their IMS have not had time to begin releasing nitrogen, this forces the conclusion that the N/O ratio at low metallicities must be set by massive star primary nitrogen production. However, this picture neglects the influence of the star formation efficiency on the rate of buildup of metals. According to our analysis, these systems are simply evolving relatively slowly or intermittently due to historically low star formation rates, so that the delay in nitrogen release by IMS becomes insignificant.
More generally, any galaxy or galactic region which evolves slowly will maintain a relatively low metallicity over a significant fraction of a Hubble time, since total metallicity is directly related to the star formation rate integrated over time. This opens up the possibility that objects such as blue compact galaxies and outer regions of spiral disks are in fact several Gyr in age despite their low metallicities. This is consistent with the recent conclusions of Legrand (2000), who employed chemical evolution models to study abundances and continuum colors of I Zw 18. Legrand found that models characterized by a star formation rate of $`10^{-4}`$ M<sub>⊙</sub> yr<sup>-1</sup> over 14 Gyr explain the observations. This general explanation may also apply to the outer regions of spiral disks, where CNO abundance ratios are similar to those observed in dwarf galaxies such as blue compact objects (Garnett et al. 1999).
Our numerical models allow us to track the separate contributions of IMS and massive stars to carbon and nitrogen evolution. The left and right sides of Fig. 5 show our predictions for carbon and nitrogen, respectively, for our best numerical model, model B. For each element the fraction of the total mass of carbon or nitrogen being ejected at a point in time is shown in the upper panels for IMS and massive stars versus time. Meanwhile, the lower panels plot the fraction of accumulated mass of carbon and nitrogen ejected by IMS and massive stars, also versus time.
To gauge the final contributions of IMS and massive stars to carbon and nitrogen synthesis, we can simply read the late-time values off the lower panels in Fig. 5. Clearly, in our models carbon production is dominated by massive stars, with massive stars providing 97% of the total carbon. On the other hand, the roles of the stellar types appear to be reversed in the case of nitrogen, with IMS providing roughly 90% of this element.
Our analysis has led to the conclusion that carbon and nitrogen production in the Universe are essentially decoupled from one another, with the former produced in massive stars, while the latter is produced in IMS. Consensus toward this claim in the case of carbon has been building for several years. For example, work by Prantzos et al. (1994), Garnett et al. (1999), and Gustafsson et al. (1999) shows that carbon is largely produced by massive stars, to which we now add our support. In the case of nitrogen, the predicted large IMS yields in conjunction with the large extant database of planetary nebula abundances suggest strongly that the production of this element is dominated by IMS. The most recent problem has been the one presented by the observed constant N/O value at low metallicities, suggesting that massive stars must play at least an important role here, due to supposed significant IMS nitrogen release delays. However, our models have shown that low star formation rates can accommodate IMS nitrogen production with no problem.
Finally, during this investigation we have chosen to ignore until now the large scatter in N/O that is observed at a single O/H value. This matter has been addressed by a number of authors including Garnett (1990), Pilyugin (1993; 1999), and Marconi, Matteucci, & Tosi (1994), with the consensus being that at least some of the scatter is real and due to bursts which momentarily lower N/O in the observed H II regions with sudden injections of fresh oxygen; as IMS eject nitrogen after the customary lag time, N/O rises again. Therefore, the scatter comes about by observing a large sample of H II regions in various stages of oxygen and nitrogen enrichment. This picture implies that most points should be concentrated at relatively high N/O values, with fewer points, representing those objects experiencing sudden oxygen enrichment, located below the main concentration, since presumably bursts are followed by relatively long periods of quiescence during which the end point N/O is characteristic.
However, a close look at the vertical distribution of points in N/O in Fig. 1B reveals that most points seem to be clustered along the NO envelope at relatively low values with the concentration falling off as one considers higher N/O values, i.e. exactly the reverse of the standard interpretation of the scatter described above. Indeed, this empirical finding strongly suggests that the “equilibrium” or unperturbed locus where most H II regions reside is the NO envelope, and thus the excursions caused by sudden injections of material are actually upward toward the region of fewer points.
Barring an unidentified selection effect, then, the distribution of data points in Fig. 1B would seem to challenge the conventional picture. In fact the data are consistent with the lack of evidence for localized oxygen contamination from massive stars in H II regions described in Kobulnicky & Skillman (1997b). Furthermore, the falloff in points above the NO envelope is more consistent with injections of nitrogen rather than oxygen; in this case the nitrogen source might be Wolf-Rayet stars or luminous blue variable stars, both of which were considered by Kobulnicky & Skillman (1997a) in their study of nitrogen-enriched H II regions in NGC 5253. Their explanation suggests a simultaneous enrichment of helium, and thus H II regions exhibiting high values of N/O in Fig. 1B should be checked for evidence of helium enrichment. Our general results, though, imply that the contributions of WR and luminous blue variable stars to nitrogen enrichment must be small. We postpone further investigation of this aspect of the N/O distribution, as we plan to take it up in another paper.
Finally, we note the upper limits of log(N/O) for four damped Ly$`\alpha `$ systems taken from Lu et al. (1996). While two of the points lie within the NO envelope, the other two points lie about 0.25 dex below it. Since the position of the rising track of a numerical model in Fig. 3B can be forced to the right (higher star formation rates) and left (lower star formation rates), such objects and others at even lower N/O values (if eventually observed) may be explained as representing very early stages of N/O evolution when IMS are just beginning to release nitrogen, i.e. roughly 250 Myr or less after an initial star burst.
## 7 Summary
We have compiled and analyzed several sets of published stellar yields for both massive and intermediate mass stars. Using analytical models with accretion but no time delay we have chosen yields which seem most plausible for explaining the data and have employed these yields in numerical models which predict the buildup of carbon, nitrogen, and oxygen over time. Our analysis suggests the following:
1. The most appropriate yields appear to be those of van den Hoek & Groenewegen (1997) for intermediate-mass stars and Maeder (1992) for massive stars.
2. Carbon is produced mainly by massive stars, with at most only a slight contribution from intermediate-mass stars. Observations of C/O versus O/H are consistent with a metallicity-enhanced carbon yield from massive stars, in agreement with Maeder’s (1992) predictions.
3. Nitrogen, conversely, is produced principally in intermediate-mass stars, specifically those stars between 4 and 8 M<sub>⊙</sub> which undergo hot bottom burning and expel large amounts of primary nitrogen at low metallicities, and secondary/tertiary nitrogen at higher z levels. This conclusion agrees with the relatively large size of the nitrogen yields of intermediate mass stars predicted by van den Hoek & Groenewegen (1997).
4. Carbon and nitrogen production appear to be essentially decoupled from one another, since they are produced in two entirely different sites, i.e. massive stars and intermediate-mass stars.
5. The characteristic delay in nitrogen release by intermediate mass stars is approximately 250 Myr.
6. Time delay for both carbon and nitrogen production does not appear to be an important factor in the evolution of these elements.
7. Oxygen is produced entirely by massive stars. Maeder’s (1992) oxygen yields, which decrease with increasing metallicity, are consistent with observed behavior of C/O and N/O.
8. The observed behavior of N/O versus O/H is bimodal and is related to the primary/secondary nature of nitrogen production as well as the inverse sensitivity of oxygen yields to metallicity. At low metallicities the relation is flat since, in this region, nitrogen is primary in origin and simply increases in lockstep with oxygen. Beginning at roughly 12+log(O/H)=8.3, secondary/tertiary nitrogen production becomes significant and produces the upturn in the data seen at higher metallicities. This same upturn is augmented by the decrease in massive star oxygen production at higher metallicities.
9. Nitrogen production at low metallicities is consistent with production in intermediate mass stars despite the delay which these stars experience in the release of their nitrogen. Since the relevant stars have progenitor masses between 4-8 M<sub>⊙</sub> and maximum lifetimes of roughly 250 Myr, a low star formation rate and the consequent slow rise in metallicity are consistent with the data.
10. Our findings allow for objects such as blue compact galaxies and regions such as the outer parts of spiral disks, all characterized by low metallicity, to nevertheless have formed many Gyr ago.
Finally, because of the relatively short delay in nitrogen release by intermediate-mass stars, it may not be possible to employ N/O as a useful galaxy age indicator (as proposed by Edmunds & Pagel 1978; a time delay of 1 to 2 Gyr or so would be needed), except perhaps for extremely young systems. Nevertheless, we believe that our identification of the important C and N sources now allows a reasonable explanation of the systematics of the behavior of C/O and N/O ratios in galactic systems.
Future work should include a closer look at the N/O scatter by performing a consistent abundance determination analysis on the various data in the literature. Analysis should include consideration of H II region structure and excitation, and other means for explaining scatter. The possibility that the scatter is related to nitrogen contamination from Wolf-Rayet or luminous blue variable stars should be investigated further, although our general results imply that their impact on the overall yield of nitrogen from a stellar population must be small.
R.B.C.H would like to thank the members of the Physics & Astronomy Department of Cardiff University, Wales, for their hospitality and support during an extended visit. Travel to Cardiff was made possible by a generous travel grant from the University of Oklahoma. We also thank Bernard Pagel, Trinh Thuan, and Evan Skillman for useful and informative discussions.
## Appendix A
## Limiting Analytical Models
We give here additional limiting forms of the analytic equations from §4.
Carbon and nitrogen abundances in the “simple” closed-box model, with no time delays (limiting forms of eqs. 5 and 7):
$$z_c=\frac{p_{pc}}{p}z+\frac{p_{sc}}{2p}z^2$$
()
$$z_n=\frac{p_{pn}}{p}z+\frac{p_{sn}p_{pc}}{2p^2}z^2+\frac{p_{sn}p_{sc}}{6p^2}z^3$$
()
## Appendix B
## Analytical Models With Time Delay
To give some indication of the effect of time delays on the element ratios we now give an elementary model. This must inevitably make assumptions about the time evolution of star formation, so the model is not general. For convenience (and to avoid an epidemic of parameters) we neglect infall and just consider the simple closed-box model (although the model can be straightforwardly extended to include accretion). Suppose that all primary nitrogen is delayed by a time $`\tau _\mathrm{n}`$ and then suddenly released. Let the star formation rate in the closed system be simply proportional to the gas density (with a constant $`\mathrm{k}`$), let the total mass of the system be unity, and start from unenriched gas at $`\mathrm{t}=0`$; then:
$$\frac{ds}{dt}=k\alpha g=k\alpha e^{-\alpha kt},$$
()
implying $`g=e^{-\alpha kt}`$, $`s=1-e^{-\alpha kt}`$, and $`z=p\alpha kt`$, so that overall metallicity (including oxygen) increases linearly with time. This allows us to map time into metallicity, and vice versa. We may represent the time delay as a metallicity increment $`z_{\tau _n}=p\alpha k\tau _n`$. Now $`(\alpha k)^{-1}`$ represents the timescale for star formation (see eq. B1), so if we denote $`\tau _{*}=(\alpha k)^{-1}`$, then $`\frac{z_{\tau _n}}{p}=\frac{\tau _n}{\tau _{*}}`$, and choosing different values of $`\frac{z_{\tau _n}}{p}`$ therefore corresponds to different rates of star formation for the system. We define a step function
$`H(x-x_o)`$ $`=`$ $`1\text{ if }x-x_o\ge 0`$
$`=`$ $`0\text{ if }x-x_o<0`$
and then
$`z_n`$ $`=`$ $`{\displaystyle \frac{p_{pn}}{p}}\mathrm{exp}\left({\displaystyle \frac{z_{\tau _n}}{p}}\right)(z-z_{\tau _n})H(z-z_{\tau _n})`$
$`+{\displaystyle \frac{\mathrm{p}_{\mathrm{sn}}\mathrm{p}_{\mathrm{pc}}}{2\mathrm{p}^2}}\mathrm{z}^2+{\displaystyle \frac{\mathrm{p}_{\mathrm{sn}}\mathrm{p}_{\mathrm{sc}}}{6\mathrm{p}^2}}\mathrm{z}^3`$
We can see the effect of a time delay on the primary nitrogen in a system with fast star formation by setting $`\mathrm{z}_{\tau _\mathrm{n}}=0.01`$. This is plotted as a dashed line in Fig. A1B.
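A short numerical rendering of this delayed solution may be helpful; the yield values below are placeholders chosen only to exhibit the shape of the curve, with $`z_{\tau _n}=0.01`$ as in the fast-star-formation example above.

```python
import math

# Numerical rendering of the delayed-nitrogen solution (eq. B2 above);
# the yield values are placeholders for illustration only.
p, p_pn, p_pc, p_sn, p_sc = 0.01, 1.0e-3, 3.0e-3, 4.0e-2, 4.0e-2

def z_n(z, z_tau):
    primary = 0.0
    if z >= z_tau:                        # H(z - z_tau) step function
        primary = (p_pn / p) * math.exp(z_tau / p) * (z - z_tau)
    secondary = (p_sn * p_pc / (2.0 * p**2)) * z**2 \
              + (p_sn * p_sc / (6.0 * p**2)) * z**3
    return primary + secondary

for z in (0.001, 0.005, 0.01, 0.02):
    print(f"z = {z:.3f}  ->  z_n = {z_n(z, z_tau=0.01):.2e}")
```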
## Appendix C
## Analytic Model of Variable Oxygen Yield
We show here that our general conclusions about the sources of carbon and nitrogen are not affected by the possibility that the oxygen yield is a function of metallicity. The effect of a decrease with increasing metallicity is to steepen the relations of C/O and N/O with O/H at high O/H, but not to introduce any qualitatively new effects.
Suppose that the yield for oxygen has the form $`p_1-p_2z_o`$ (note the minus sign; $`p_2`$ is positive); then for a simple closed-box model with no accretion we have:
$$z_c=-\frac{1}{p_2}\left(p_{pc}+\frac{p_{sc}p_1}{p_2}\right)\mathrm{ln}\left(1-\frac{p_2z_o}{p_1}\right)-\frac{p_{sc}z_o}{p_2}$$
()
and
$$z_n=-\frac{1}{p_2}\left(p_{pn}-\frac{p_{sc}p_{sn}p_1}{p_2^2}\right)\mathrm{ln}\left(1-\frac{p_2z_o}{p_1}\right)+\frac{p_{sn}}{2p_2^2}\left(p_{pc}+\frac{p_{sc}p_1}{p_2}\right)\mathrm{ln}^2\left(1-\frac{p_2z_o}{p_1}\right)+\frac{p_{sc}p_{sn}z_o}{p_2^2}$$
()
These are plotted in Fig. A1A,B, with $`\mathrm{p}_1`$=0.017, $`\mathrm{p}_2`$=0.5, similar to the Maeder models. The steepening effect can be seen, but there is no incentive to revise our basic conclusions on the sources of C and N.
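As a consistency check, eq. C1 should reduce to eq. A1 (with $`p=p_1`$) in the limit $`p_20`$; the following sketch verifies this numerically, with placeholder values for the carbon yields.

```python
import math

# Check that eq. C1 approaches eq. A1 (with p = p1) as p2 -> 0.
# The carbon yields p_pc and p_sc are placeholders; p1 = 0.017 follows
# the value quoted for the Maeder-like models above.
p_pc, p_sc, p1 = 1.0e-3, 5.0e-2, 0.017

def z_c_variable(z_o, p2):          # eq. C1
    return -(1.0 / p2) * (p_pc + p_sc * p1 / p2) \
           * math.log(1.0 - p2 * z_o / p1) - p_sc * z_o / p2

def z_c_simple(z_o):                # eq. A1 with p = p1, z = z_o
    return (p_pc / p1) * z_o + (p_sc / (2.0 * p1)) * z_o ** 2

z_o = 0.005
for p2 in (0.5, 0.05, 0.005):
    print(p2, z_c_variable(z_o, p2), z_c_simple(z_o))
```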
|
no-problem/0004/astro-ph0004402.html
|
ar5iv
|
text
|
# Konus catalog of SGR activity to 2000
## 1 INTRODUCTION
Recurrent short gamma-ray bursts with soft energy spectra have been known for twenty years. The first two sources of repeating bursts were discovered and localized in March 1979 with the Konus experiment aboard the Venera 11 and Venera 12 missions (Mazets & Golenetskii 1981). The extraordinarily intense gamma-ray outburst on 1979 March 5 (Mazets et al. 1979a) was followed by a series of 16 weaker short bursts from the source FXP 0526-66 during the next few years (Golenetskii et al. 1984). At the same time, in March 1979, three short bursts were detected from another source, B1900+14 (Mazets, Golenetskii, & Guryan 1979). It was suggested that repeated soft bursts represent a distinct class of events different in their origin from the majority of gamma-ray bursts (Mazets & Golenetskii 1981; Mazets et al. 1982). In 1983, the Prognoz 9 and ICE spacecraft observed a long series of soft recurrent bursts from the third source, 1806-20 (Atteia et al. 1987; Laros et al. 1987). The sources of recurrent soft bursts have become known as soft gamma repeaters, SGRs.
Curiously, a retrospective analysis of Venera 11 and Prognoz-9 data showed that the short burst of January 7, 1979 (Mazets et al. 1981) belonged to SGR 1806-20 (Laros et al. 1986). Thus, the first three SGRs were detected within only three months. However, as has become clear, SGRs are actually a very rare class of astrophysical objects (Norris et al. 1991; Kouveliotou et al. 1994; Hurley et al. 1994). Indeed, the fourth source SGR 1627-41 was discovered and localized only in 1998 (Hurley et al. 1999a; Woods et al. 1999; Smith et al. 1999; Mazets et al. 1999a). In 1997 two bursts were observed coming from the fifth SGR to be identified, SGR 1801-23 (Hurley et al. 1997; Cline et al. 2000).
The five known SGRs have displayed different levels of activity. SGR 0526-66 has been silent since 1983. SGR 1900+14 emitted three bursts in 1979 (Mazets, Golenetskii, & Guryan 1979) and three bursts in 1992 (Kouveliotou et al. 1993). After a long period of silence, the burst activity of SGR 1900+14 resumed in May 1998 at a very high and irregular level and lasted until January 1999 (Hurley et al. 1998; Hurley et al. 1999b; Mazets et al. 1999b). On 1998 August 27 SGR 1900+14 emitted a giant outburst with a complex time history which exhibited a striking similarity to the 1979 March 5 event (Cline, Mazets, & Golenetskii 1998; Mazets et al. 1999c; Hurley et al. 1999c). SGR 1806-20 was a prolific source in 1979-1998 both in terms of active periods and in the numbers of emitted bursts (Laros et al. 1987; Atteia et al. 1987; Kouveliotou et al. 1994; Frederiks et al. 1997; Marsden et al. 1997; Göǧüs et al. 2000). SGR 1627-41 was active in June-July 1998 (Hurley et al. 1999a; Woods et al. 1999a; Mazets et al. 1999a). Only two bursts from SGR 1801-23 were detected, on June 29, 1997 (Cline et al. 2000).
X-ray studies of SGR 0526-66 (Rothschild et al. 1994), SGR 1806-20 (Cooke 1993), SGR 1900+14 (Hurley et al. 1996), and SGR 1627-41 (Woods et al. 1999a) have revealed quiescent soft X-ray counterparts to all these objects. Measurements of a rapid spin-down of SGR 1806-20 and SGR 1900+14 have been interpreted (Kouveliotou et al. 1998, 1999) as establishing SGRs as “magnetars”, i.e. as young isolated neutron stars with a superstrong magnetic field up to 10<sup>14</sup>-10<sup>15</sup> G (Duncan & Thompson 1992; Thompson & Duncan 1995). At the same time, arguments for significantly lower magnetic fields in SGRs have also been made (Marsden, Rothschild, & Lingenfelter 1999; Harding, Contopoulos, & Kazanas 1999).
This catalog contains data on soft bursts from five soft gamma repeaters observed with the Konus experiments aboard the Venera 11, 12, 13, 14 missions in 1978-1983, with the Konus-Wind experiment on the Wind spacecraft in 1994-1999, and with the Konus-A experiment aboard Kosmos-2326 in 1995-1997. Our data on the time histories and energy spectra of bursts presented here should be useful for further SGR studies, especially when compared with results from similar instruments. The catalog, including the initial processed data, is also available electronically at: http://www.ioffe.rssi.ru/LEA/SGR/Catalog/
## 2 INSTRUMENTS AND OBSERVATIONS
### 2.1 The Konus experiment on the Venera missions
The Konus instrument consisted of six identical gamma-ray detectors. Each sensor had a NaI(Tl) scintillator 80 mm in diameter and 30 mm in height viewed by a PM tube through a thick lead-glass window. An additional passive shield on the lateral surface of the crystal resulted in a cosine-like angular sensitivity of the detector. The axes of the six detectors were aligned along orthogonal axes of the spacecraft. This arrangement gave the instrument the capability to localize a burst with an accuracy of between one and a few degrees, depending on the intensity of the burst (Golenetskii, Il’inskii, & Mazets 1974). The instrument operated in a triggered mode. The trigger threshold was set to be about $`6\sigma `$ above the current background level in the energy window 50-150 keV. When triggered, the instrument recorded a burst time history with resolutions of 1/64, 1/4, and 1 s as well as 8 energy spectra in the energy region 30 keV–2 MeV measured with an accumulation time of 4 s (Mazets at al. 1979b).
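The localization principle can be illustrated with a toy reconstruction: for six detectors along the $`\pm x`$, $`\pm y`$, $`\pm z`$ axes with a cosine-like response, the count-rate differences along each axis estimate the arrival direction. This is only a schematic of the principle described above, not the actual Konus localization procedure.

```python
import math

# Toy direction reconstruction for six orthogonal detectors with a
# cosine-like angular response: rate_i ~ max(0, n_i . d). This sketches
# the localization principle only, not the Konus flight algorithm.
def reconstruct(rates):
    """rates: counts in the +x, -x, +y, -y, +z, -z detectors."""
    dx = rates[0] - rates[1]
    dy = rates[2] - rates[3]
    dz = rates[4] - rates[5]
    norm = math.sqrt(dx*dx + dy*dy + dz*dz)
    return (dx / norm, dy / norm, dz / norm)

# A burst arriving along (1, 1, 0)/sqrt(2) illuminates +x and +y equally:
print(reconstruct([0.707, 0.0, 0.707, 0.0, 0.0, 0.0]))
```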
Observations of gamma-ray bursts (GRBs) from Veneras 11 and 12 were made from September 1978 until February 1980. The experiment was continued on Veneras 13 and 14 from November 1981 to April 1983 with improved spectral and time resolution (Mazets et al. 1983).
### 2.2 The Konus-Wind experiment
On the U.S. GGS Wind spacecraft, two scintillation gamma-ray detectors monitor the northern and southern ecliptic hemispheres. Each detector contains a NaI(Tl) crystal 130 mm in diameter and 75 mm in height in a housing with an entrance window made of beryllium. A burst time history is recorded in three energy windows G1(10-50 keV), G2(50-200 keV), G3(200-750 keV) with a variable time resolution from 2 ms up to 256 ms. These records include prehistory sections of event time profiles recorded with 2 ms time resolution. Up to 64 energy spectra are measured in the energy range 10 keV–10 MeV. The accumulation time for each spectrum is automatically adapted to the current burst intensity within the range from 64 ms to 8 s. Time history records are of fixed duration 229.632 s. Spectral measurements may take significantly shorter or longer times, depending on the total accumulation time for the set of 64 energy spectra (Aptekar et al. 1995). Observations of GRBs from Wind have been made since November 1994.
### 2.3 The Konus-A experiment
The near-Earth orbiting Kosmos-2326 spacecraft was equipped with the Konus-A experiment for the study of GRBs. This experiment consisted of two gamma-ray burst detectors with associated electronic units. The first was a spectroscopic gamma-ray detector identical to the burst detectors flown on the Wind spacecraft. The second was an ensemble of four directionally sensitive gamma-ray detectors which were arranged to give the instrument the capability to localize a burst arriving from the zenith-centered hemisphere. Studying GRBs by means of two identical instruments on two spacecraft significantly increased the reliability of identifying faint details and features in burst time histories and energy spectra. Observations from Kosmos-2326 were carried out from December 1995 to October 1997.
## 3 CATALOG
### 3.1 Catalog structure
The catalog presents data from three experiments in chronological order but separately for each SGR. The information for each SGR is displayed in the same sequence.
First, a Table is given which contains the main characteristics of the events. The first three columns specify the burst order numbers, the burst names according to date of appearance, and the trigger times T<sub>0</sub>. The burst duration $`\mathrm{\Delta }`$T is in the fourth column. Burst rise and fall times are both determined using the criterion that a burst is considered present only while the observed counts exceed the background level by over three $`\sigma `$ in three successive time bins of the time history. The next two columns present the values of peak fluxes and fluences. The spectral parameter kT is given in the seventh column. To ensure uniformity of data presentation, the kT values were obtained by fitting the photon energy spectra to an optically-thin thermal bremsstrahlung form using:
$$F(E_\gamma )\propto E_\gamma ^{-1}\mathrm{exp}\left(-\frac{E_\gamma }{kT}\right).$$
(1)
The last column contains remarks related mainly to measurements on the Wind and Kosmos spacecraft. Events detected in the trigger mode are marked with an index ‘T’. They provide the most complete temporal and spectral data. In some cases more than one event can be detected during a trigger mode record, which lasts about four minutes. These events are marked with index ‘S’. The values T<sub>0</sub> shown in the third column in these cases denote the time of appearance of a serial event. The mark ‘B’ indicates weaker events observed in the background mode, which provides a coarse time resolution and no direct spectral data. In this case, the kT value was evaluated from the accumulated “hardness ratio” G2/G1. For the weakest events, only an upper limit for kT could be obtained. Only coarse G2 count rate values are available in the spacecraft housekeeping data for events that occurred during the “dead time” of the instrument, when information about previous triggered events is read out. Such events are denoted by the mark ‘H’.
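A minimal sketch of the OTTB fit of eq. 1 is given below; the synthetic spectrum, the normalization parameter A, and the starting values are illustrative placeholders, not Konus data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of an OTTB fit (eq. 1) to a deconvolved photon spectrum;
# the synthetic data and starting values are illustrative only.
def ottb(E, A, kT):
    return A * E**-1 * np.exp(-E / kT)

E = np.linspace(30.0, 300.0, 20)                         # keV
flux = ottb(E, 100.0, 25.0) * np.random.normal(1.0, 0.05, E.size)
popt, pcov = curve_fit(ottb, E, flux, p0=[50.0, 20.0])
print("kT = %.1f +/- %.1f keV" % (popt[1], np.sqrt(pcov[1, 1])))
```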
Second, a set of figures is presented showing time histories and energy spectra for the observed bursts. Time histories are usually given for the two energy windows G1 and G2 together with the count rate ratio G2/G1. Deconvolved background-subtracted photon spectra were accumulated in the proper time intervals. On the time history graphs, these intervals are shown enclosed between two vertical dotted lines. For some weak events, an important fraction of the burst falls into the prehistory period. A spectrum accumulated after T<sub>0</sub> is often too uncertain to be shown on the graph. In this case, the parameter kT must be evaluated from the hardness ratio G2/G1 integrated over the whole burst. In some cases, e.g. for serial events, the spectrum accumulation time is much longer than the burst duration. Photon spectra averaged formally over an accumulation time interval and shown in the Figures then exhibit strongly reduced intensity. If the burst duration is known, the flux can be easily corrected for the given $`\mathrm{\Delta }`$T.
Finally, there are several special events whose study is expected to be of great importance for a deeper understanding of the origin of SGRs. More extensive information concerning these outbursts and event series is presented together with comments and explanations.
The data presented have been corrected for interferences, dead time, and all other known distortions. After background subtraction, instrumental energy-loss spectra were deconvolved to incident photon spectra.
### 3.2 SGR 0526-66
This is the first known source of soft repeated bursts. It was discovered and localized with the Konus experiment on board the Venera 11 and Venera 12 missions in 1979, when the giant outburst of March 5, accompanied by a long pulsating (8 s) tail, was followed a few days later by several weak recurrent bursts (Mazets et al. 1979a). A much more precise localization performed by Cline et al. (1982) resulted in a very small error box which projected onto the supernova remnant N 49 located in the Large Magellanic Cloud (LMC) at a distance of 55 kpc. This source continued moderate activity until 1983 (Golenetskii et al. 1984). A weak persistent soft X-ray flux from SGR 0526-66 in its quiescent state has been detected with ROSAT (Rothschild et al. 1994). The data for 17 events are presented in Table 2.1 and displayed in Figures 2.1–2.18.
### 3.3 SGR 1900+14
The second source of soft recurrent bursts was also discovered and localized in March 1979 (Mazets, Golenetskii & Guryan 1979). Several bursts from this SGR were observed during its reactivation period in 1992 by BATSE (Kouveliotou et al. 1993). SGR 1900+14 is believed to be associated with the supernova remnant G42.8+0.6 (Kouveliotou et al. 1994), situated at a distance of 10.4 kpc (Case & Bhattacharya 1998; but see also Vrba et al. 2000). ASCA observations in April 1998 revealed a 5.16 s periodicity in persistent X-ray emission from this source (Hurley et al. 1999d). A further episode of bursting activity began in May 1998 and continued until January 1999. BATSE (Göǧüs et al. 1999), BeppoSAX (Feroci et al. 1999), and Konus-Wind (Mazets et al. 1999b) investigated numerous bursts.
On August 27, 1998, a giant outburst in SGR 1900+14 was observed (Cline, Mazets, & Golenetskii 1998; Hurley et al. 1999c; Feroci et al. 1999). This event strikingly resembled the March 5 outburst from SGR 0526-66. It also consisted of a huge brief initial pulse followed by a slowly decaying and coherently pulsating (5.16 s) tail (Mazets et al. 1999c). The close similarity between these two giant outbursts indicates their common nature.
The appearance rate of repeated bursts was very irregular, and clusters of bursts were observed. During the most crowded cluster, events followed one after another at intervals as short as a few burst lengths.
The data related to SGR 1900+14 are presented in Table 3.1 and shown in Fig. 3.1–3.66.
### 3.4 SGR 1806-20
The third repeater, SGR 1806-20 was discovered and localized during a period of high activity in 1979-1983 in observations from the ICE and Prognoz-9 spacecraft (Laros et al. 1986; Laros et al. 1987; Atteia et al. 1987).
In subsequent years, the source continued a moderate level of activity. Beginning in 1996, the source became very active again with many clusters of events. A persistent soft X-ray source was observed by Murakami et al. (1994). Kouveliotou et al. (1998) discovered 7.47 s pulsations in the X-ray flux. Kulkarni and Frail (1993) located SGR 1806-20 inside the supernova remnant G10.0-0.3 at a distance of $`14`$ kpc. Statistical properties of the repeating bursts were studied by Göǧüs et al. (2000).
Results of observations of SGR 1806-20 with Konus experiments are presented in Table 4.1 and Fig. 4.1–4.20.
### 3.5 SGR 1627-41
This source was discovered in summer 1998 with observations from CGRO (Woods et al. 1999), Ulysses (Hurley et al. 1999a), Wind (Mazets et al. 1999a), and RXTE (Smith et al. 1999). The source was precisely localized by the IPN. Its position coincides with the supernova remnant G337.0-0.1 at a distance of $`11`$ kpc (Hurley et al. 1999a). Some evidence was obtained in support of a possible periodicity of 6.7 s (Dieters et al. 1998; Woods et al. 1999), but the observations from ASCA did not confirm this result (Hurley et al. 2000).
An enormously intense outburst from this source was observed on 1998 June 18. It was a short single pulse without any evidence of a pulsating decay (Mazets et al. 1999a). Calculation of the outburst intensity required major dead-time corrections. Remarkably, at least one more such event occurred on June 18; however, it was detected only in housekeeping data, which prevented us from making reliable dead-time corrections. Hence, only a lower limit on the intensity of this burst could be obtained.
Our data from the Konus-Wind experiment are presented in Table 5.1 and Fig. 5.1–5.17.
### 3.6 SGR 1801-23
Only two short bursts from this source were detected on June 29, 1997 by CGRO, Ulysses, Wind and Kosmos-2326 (Cline et al. 2000). However, it should be remembered that the first active period of SGR 1900+14 observed in March 1979 consisted of only three events. An association of SGR 1801-23 with the supernova remnant G6.4-0.1 is possible.
The data obtained are presented in Table 6.1 and Fig. 6.1.
## 4 CONCLUSION
This catalog confirms the existence of many similarities between the five known SGRs. The most obvious are the short duration of repeated bursts and their soft energy spectra. The two giant outbursts are strikingly similar. Moreover, all SGRs appear to be associated with young supernova remnants. Weak persistent soft X-ray fluxes were detected from four SGRs in a quiescent state. A regular periodicity of 5-8 s was well established for three SGRs in the X-ray and/or gamma-ray energy ranges. A fast spin-down of $`10^{-10}`$ s s<sup>-1</sup> was determined for SGR 1806-20 and SGR 1900+14. If the SGRs are at distances of $`10`$ kpc, the energy release averages $`10^{40}`$-$`10^{41}`$ erg for repeated bursts and $`10^{44}`$ erg for giant outbursts. Correspondingly, the luminosity of SGRs exceeds the Eddington limit by factors of thousands and millions, respectively. All these common features have provided an observational basis for the magnetar model of SGRs. At the same time, some distinctions between the properties of repeated bursts can also be seen in the data presented in this catalog. For example, note the different patterns of spectral variability for SGR 1627-41, SGR 1900+14, and SGR 1806-20.
Studying such similarities, distinctions, and individual features should lead to a deeper understanding of the fleeting but extremely powerful processes operating in SGRs.
This work was supported by Russian Aviation and Space Agency Contract and RFBR grant N 99-02-17031.
|
no-problem/0004/math-ph0004021.html
|
ar5iv
|
text
|
# A New Geometric Probability Technique for an N-dimensional Sphere and Its Applications to Physics
## I Introduction
In a recent paper, geometric probability techniques were developed to calculate the probability density functions (PDFs) which describe the probability density of finding a distance $`s`$ separating two points distributed in a uniform $`3`$-dimensional sphere and in a uniform ellipsoid. Our focus in the present paper will be on the probability density functions $`P_n(s)`$ for an $`n`$-dimensional sphere of radius $`R`$ characterized by $`x_1^2+x_2^2+\mathrm{\cdots }+x_n^2\le R^2`$, where $`x_1,x_2,\mathrm{\dots },x_n`$ are the corresponding Cartesian coordinates. (In the mathematical literature this is sometimes termed an $`n`$-dimensional ball.) As discussed in Refs., these results are of interest both as pure mathematics and as tools in mathematical physics. Specifically, it was demonstrated in Ref. that geometric probability techniques greatly facilitate the calculation of the self-energies for spherical matter distributions arising from electromagnetic, gravitational, or weak interactions. The functional form of $`P_n(s)`$ is known for a sphere of uniform density, and hence the object of the present paper is to generalize the results of Refs. to the case of an arbitrarily non-uniform density distribution by using our method. As an application of these results we will consider the neutrino-exchange contribution to the self-energy of a neutron star modeled as a series of concentric shells of different constant density. Other applications will also be discussed.
In this paper we present a new technique for obtaining the analytical probability density function $`P_n(s)`$ for a sphere of $`n`$ dimensions having an arbitrary density distribution. To illustrate this technique, we begin by deriving the PDF, $`P_n(s)`$, for an $`n`$-dimensional uniform sphere of radius $`R`$, and compare our results to those obtained earlier by other means. We then extend this technique to an $`n`$-dimensional sphere with a non-uniform but spherically symmetric density distribution. We explicitly evaluate the analytical probability density functions for certain specific density distributions, and then use numerical Monte Carlo simulations to verify the analytical results.
Finally our formalism is generalized to an $`n`$-dimensional sphere with an arbitrary density distribution, and leads to a general-purpose master formula. This formula allows one to evaluate the PDF for a sphere in $`n`$ dimensions with an arbitrary density distribution. After verifying that the master formula reproduces the results for uniform and spherically symmetric density distributions, we analytically evaluate the probability density functions for 2-, 3-, and 4-dimensional spheres having non-uniform density distributions. The analytical results are then verified by the use of Monte Carlo simulations. The outline of this paper is as follows. In Sec. II we present our new formalism and illustrate it by rederiving the well-known results for a circle and for a sphere of uniform density. In Sec. III we extend this formalism to the case of non-uniform but spherically symmetric density distributions. In Sec. IV we develop the formalism for the most general case of an arbitrary non-uniform density distribution. In Sec. V we present some applications of our formalism to physics. These include the $`m`$th moment $`\langle s^m\rangle `$ for a sphere of uniform and Gaussian density distribution, the Coulomb self-energy for a collection of charges, $`\nu \overline{\nu }`$-exchange interactions, the probability density functions for multiple-shell density distributions found in neutron star models, and the evaluation of some geometric probability constants.
## II Uniform density distributions
### A Theory for a uniform circle
In this section we illustrate our formalism by deriving the PDF for a circle of radius $`R`$ having a spatially uniform density distribution characterized by $`\rho =`$ constant. For two points randomly sampled inside the circle located at $`\vec{r}_1`$ and $`\vec{r}_2`$ measured from the center, define $`\vec{s}=\vec{s}(\vec{r}_1,\vec{r}_2)=\vec{r}_2-\vec{r}_1`$ and $`s=\left|\vec{s}\right|`$. To simplify the discussion, we translate the center of the circle to the origin so that the equation for the circle is $`x^2+y^2=R^2`$. It is sufficient to initially consider those vectors $`\vec{s}`$ which are aligned in the positive $`\widehat{x}`$ direction, since rotational symmetry can eventually be used to extend our results to those vectors $`\vec{s}`$ with arbitrary orientations. We begin by identifying those pairs of points, $`\vec{r}_1`$ and $`\vec{r}_2`$, which satisfy $`\vec{s}=\vec{r}_2-\vec{r}_1=s\widehat{x}`$, where $`0\le s\le 2R`$. One set of points is obtained if the points 1 are uniformly located on a line $`L_1`$ given by $`x=-s/2`$ and the corresponding points 2 are on a line $`L_2`$ given by $`x=+s/2`$ as shown in Fig. 1. Notice that $`x=\pm s/2`$ are two symmetric lines with respect to reflection about $`x=0`$. Another set of points is obtained if $`\vec{r}_1`$ is uniformly located in the area $`A_1`$ and $`\vec{r}_2`$ is located in the area $`A_2`$ as shown in Fig. 2(a). Note that $`A_1`$ and $`A_2`$ are congruent. The only remaining set of points that need to be considered are those for which $`\vec{r}_1`$ is uniformly located in the area $`A_3`$ while $`\vec{r}_2`$ is in the area $`A_4`$ as shown in Fig. 2(b). As before $`A_3`$ and $`A_4`$ are congruent. Furthermore, $`A_4`$ is a reflection of $`A_2`$ about $`x=s/2`$, while $`A_3`$ is a reflection of $`A_1`$ about $`x=-s/2`$. As discussed in Appendix A, the union of $`A_2`$ and $`A_4`$ ($`A_2\cup A_4`$) is the overlap area between the original circle $`C_0`$ and an identical circle $`C_1`$ whose center is shifted from $`(0,0)`$ to $`(\left|\vec{s}\right|,0)`$. Similarly, $`A_1\cup A_3`$ is the overlap area between the original circle and an identical circle $`C_2`$ whose center is shifted from $`(0,0)`$ to $`(-\left|\vec{s}\right|,0)`$ as shown in Fig. 3. Since $`A_1\cup A_3`$ and $`A_2\cup A_4`$ are identical, it follows that the probability of finding a given $`s=\left|s\widehat{x}\right|`$ in a uniform circle is proportional to
$$A_2\cup A_4=\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}dy+\int_{s-R}^{\frac{s}{2}}dx\int_{-\sqrt{R^2-(x-s)^2}}^{\sqrt{R^2-(x-s)^2}}dy.$$
(1)
Notice that
$`{\displaystyle \int_{\frac{s}{2}}^{R}}dx{\displaystyle \int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}}dy={\displaystyle \int_{s-R}^{\frac{s}{2}}}dx{\displaystyle \int_{-\sqrt{R^2-(x-s)^2}}^{\sqrt{R^2-(x-s)^2}}}dy.`$
Using rotational symmetry of $`C_0`$, this result can apply to any orientation of $`\vec{s}`$ between $`0`$ and $`2\pi `$, and hence the probability of finding a given $`s=\left|\vec{s}\right|`$ is proportional to $`|\vec{s}|\int_0^{2\pi }d\varphi =2\pi s`$. We denote this PDF for a uniform circle by $`P_2(s)`$, and impose the normalization requirement
$$\int_0^{2R}P_2(s)\,ds=1.$$
(2)
We then have
$`P_2(s)`$ $`=`$ $`{\displaystyle \frac{2\pi s\left(\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}dy+\int_{s-R}^{\frac{s}{2}}dx\int_{-\sqrt{R^2-(x-s)^2}}^{\sqrt{R^2-(x-s)^2}}dy\right)}{\int_0^{2R}2\pi s\left(\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}dy+\int_{s-R}^{\frac{s}{2}}dx\int_{-\sqrt{R^2-(x-s)^2}}^{\sqrt{R^2-(x-s)^2}}dy\right)ds}}.`$ (3)
Equation (3) can be simplified to
$`P_2(s)`$ $`=`$ $`{\displaystyle \frac{s_{\frac{s}{2}}^R𝑑x_{\sqrt{R^2x^2}}^{\sqrt{R^2x^2}}𝑑y}{_0^{2R}\left(s_{\frac{s}{2}}^R𝑑x_{\sqrt{R^2x^2}}^{\sqrt{R^2x^2}}𝑑y\right)𝑑s}}`$ (4)
$`=`$ $`{\displaystyle \frac{2s}{R^2}}{\displaystyle \frac{s^2}{\pi R^4}}\sqrt{4R^2s^2}{\displaystyle \frac{4s}{\pi R^2}}\mathrm{sin}^1\left({\displaystyle \frac{s}{2R}}\right).`$ (5)
Equation (5) is identical to the results obtained in Refs. by other means.
The conclusion that emerges from this formalism is that the probability of finding two random points separated by a vector $`\stackrel{}{s}`$ in a uniform circle can be derived by simply calculating the overlap region of that circle with an identical circle obtained by shifting the center from the origin to $`\stackrel{}{s}`$. In the following sections, we show that this result generalizes to higher dimensions, and provides a simple way of calculating $`P_n(s)`$ for $`n3`$.
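Before generalizing, Eq. (5) is easy to check numerically. The following sketch is our illustration (not part of the original derivation), assuming NumPy is available; the helper name `uniform_disk` is hypothetical:

```python
import numpy as np

R, N = 1.0, 200_000

def uniform_disk(n, R=1.0):
    # Uniform sampling in a disk: angle uniform, radius ~ R*sqrt(U).
    phi = np.random.uniform(0.0, 2.0 * np.pi, n)
    r = R * np.sqrt(np.random.uniform(0.0, 1.0, n))
    return np.column_stack((r * np.cos(phi), r * np.sin(phi)))

s = np.linalg.norm(uniform_disk(N, R) - uniform_disk(N, R), axis=1)

def P2(s, R=1.0):
    # Closed-form PDF of Eq. (5).
    return (2*s/R**2
            - s**2/(np.pi*R**4)*np.sqrt(4*R**2 - s**2)
            - 4*s/(np.pi*R**2)*np.arcsin(s/(2*R)))

hist, edges = np.histogram(s, bins=50, range=(0.0, 2*R), density=True)
mid = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - P2(mid, R))))  # small, O(N**-0.5) per bin
```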
### B Theory for a uniform sphere
The above formalism for a $2$-dimensional uniform circle can be extended to a $3$-dimensional uniform sphere with radius $R$. For a given $s$, we arbitrarily select the positive $\hat{z}$ direction and study the distribution of vectors $\vec{s}=\vec{r}_2-\vec{r}_1$ in this direction. The areas $A_1$, $A_2$, $A_3$, and $A_4$ in Sec. II A are replaced by the volumes $V_1$, $V_2$, $V_3$, and $V_4$. The PDF $P_3(s)$ is therefore proportional to the overlap volume $V_1\cup V_3=V_2\cup V_4$, where
$$V_2\cup V_4=\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}dy+\int_{s-R}^{\frac{s}{2}}dz\int_{-\sqrt{R^2-(z-s)^2}}^{\sqrt{R^2-(z-s)^2}}dx\int_{-\sqrt{R^2-(z-s)^2-x^2}}^{\sqrt{R^2-(z-s)^2-x^2}}dy.$$
(6)
Notice that
$$\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}dy=\int_{s-R}^{\frac{s}{2}}dz\int_{-\sqrt{R^2-(z-s)^2}}^{\sqrt{R^2-(z-s)^2}}dx\int_{-\sqrt{R^2-(z-s)^2-x^2}}^{\sqrt{R^2-(z-s)^2-x^2}}dy.$$
(7)
In $3$-dimensional space the orientation of $\vec{s}$ is specified by a polar angle $\theta$ ranging from $0$ to $\pi$ and an azimuthal angle $\varphi$ ranging from $0$ to $2\pi$. For a given $s$ of arbitrary orientation, the rotational symmetry of a uniform sphere implies that $P_3(s)$ is proportional to the surface factor $4\pi s^2$. Following the previous discussion, we thus arrive at the following expression for the PDF for a uniform sphere:
$$P_3(s)=\frac{4\pi s^2\left(\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}dy\right)}{\int_0^{2R}4\pi s^2\left(\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}dy\right)ds}$$ (8)
$$=3\frac{s^2}{R^3}-\frac{9}{4}\frac{s^3}{R^4}+\frac{3}{16}\frac{s^5}{R^6}.$$ (9)
The result in Eq. (9) agrees exactly with the expression obtained previously in Refs. .
### C Representations, general properties, recursion relations, and generating functions of $P_n(s)$
The present formalism can be readily generalized to obtain $`P_n(s)`$ for an $`n`$-dimensional sphere ($`n`$-sphere or $`n`$-ball) of radius $`R`$. From Eqs. (4) and (8) we find
$$P_n(s)=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\sqrt{R^2-x_n^2}}^{\sqrt{R^2-x_n^2}}dx_1\int_{-\sqrt{R^2-x_n^2-x_1^2}}^{\sqrt{R^2-x_n^2-x_1^2}}dx_2\cdots\int_{-\sqrt{R^2-x_n^2-\cdots-x_{n-2}^2}}^{\sqrt{R^2-x_n^2-\cdots-x_{n-2}^2}}dx_{n-1}}{\int_0^{2R}\left(s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\sqrt{R^2-x_n^2}}^{\sqrt{R^2-x_n^2}}dx_1\int_{-\sqrt{R^2-x_n^2-x_1^2}}^{\sqrt{R^2-x_n^2-x_1^2}}dx_2\cdots\int_{-\sqrt{R^2-x_n^2-\cdots-x_{n-2}^2}}^{\sqrt{R^2-x_n^2-\cdots-x_{n-2}^2}}dx_{n-1}\right)ds}.$$
(10)
We can rewrite Eq. (10) as
$$P_n(s)=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\overline{R}}^{\overline{R}}dx_1\int_{-\sqrt{\overline{R}^2-x_1^2}}^{\sqrt{\overline{R}^2-x_1^2}}dx_2\cdots\int_{-\sqrt{\overline{R}^2-x_1^2-\cdots-x_{n-2}^2}}^{\sqrt{\overline{R}^2-x_1^2-\cdots-x_{n-2}^2}}dx_{n-1}}{\int_0^{2R}\left(s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\overline{R}}^{\overline{R}}dx_1\int_{-\sqrt{\overline{R}^2-x_1^2}}^{\sqrt{\overline{R}^2-x_1^2}}dx_2\cdots\int_{-\sqrt{\overline{R}^2-x_1^2-\cdots-x_{n-2}^2}}^{\sqrt{\overline{R}^2-x_1^2-\cdots-x_{n-2}^2}}dx_{n-1}\right)ds},$$
(11)
where
$$\overline{R}=\sqrt{R^2-x_n^2}.$$
(12)
As shown in Refs. , the volume $V(n,R)$ of an $n$-dimensional sphere of radius $R$ is given by
$$V(n,R)=\int_{-R}^{R}dx_1\int_{-\sqrt{R^2-x_1^2}}^{\sqrt{R^2-x_1^2}}dx_2\cdots\int_{-\sqrt{R^2-x_1^2-x_2^2-\cdots-x_{n-1}^2}}^{\sqrt{R^2-x_1^2-x_2^2-\cdots-x_{n-1}^2}}dx_n=\frac{\pi^{\frac{n}{2}}R^n}{\Gamma\left(1+\frac{n}{2}\right)}.$$
(13)
Equations (10) and (11) can then be reduced to a general simplified expression for an $n$-dimensional uniform sphere of radius $R$:
$$P_n(s)=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}\frac{\pi^{\frac{n-1}{2}}}{\Gamma\left(1+\frac{n-1}{2}\right)}\left(\sqrt{R^2-x_n^2}\right)^{n-1}dx_n}{\int_0^{2R}\left[s^{n-1}\int_{\frac{s}{2}}^{R}\frac{\pi^{\frac{n-1}{2}}}{\Gamma\left(1+\frac{n-1}{2}\right)}\left(\sqrt{R^2-x_n^2}\right)^{n-1}dx_n\right]ds}$$ (14)
$$=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx}{\int_0^{2R}\left[s^{n-1}\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx\right]ds}.$$ (15)
If $`n`$ is an even number,
$$P_n(s)=n\times\frac{s^{n-1}}{R^n}\left[\frac{2}{\pi}\cos^{-1}\left(\frac{s}{2R}\right)-\frac{s}{\pi}\sum_{k=1}^{\frac{n}{2}}\frac{(n-2k)!!}{(n-2k+1)!!}\left(R^2-\frac{s^2}{4}\right)^{\frac{n-2k+1}{2}}R^{2k-2-n}\right],$$
(16)
where $0!=0!!=1$. If $n$ is an odd number,
$$P_n(s)=n\times\frac{s^{n-1}}{R^n}\frac{n!!}{(n-1)!!}\sum_{k=0}^{\frac{n-1}{2}}\frac{(-1)^k}{2k+1}\frac{\left(\frac{n-1}{2}\right)!}{k!\left(\frac{n-1}{2}-k\right)!}\left[1-\left(\frac{s}{2R}\right)^{2k+1}\right].$$
(17)
Using Eqs. (16) and (17) the explicit functional forms of $P_n(s)$ for $n=2$, $3$, $4$, and $5$ are as follows:
$$P_2(s)=\frac{4}{\pi}\frac{s}{R^2}\cos^{-1}\left(\frac{s}{2R}\right)-\frac{2}{\pi}\frac{s^2}{R^3}\left(1-\frac{s^2}{4R^2}\right)^{1/2},$$ (18)
$$P_4(s)=\frac{8}{\pi}\frac{s^3}{R^4}\cos^{-1}\left(\frac{s}{2R}\right)-\frac{8}{3\pi}\frac{s^4}{R^5}\left(1-\frac{s^2}{4R^2}\right)^{3/2}-\frac{4}{\pi}\frac{s^4}{R^5}\left(1-\frac{s^2}{4R^2}\right)^{1/2},$$ (19)
$$P_3(s)=3\frac{s^2}{R^3}-\frac{9}{4}\frac{s^3}{R^4}+\frac{3}{16}\frac{s^5}{R^6},$$ (20)
$$P_5(s)=5\frac{s^4}{R^5}-\frac{75}{16}\frac{s^5}{R^6}+\frac{25}{32}\frac{s^7}{R^8}-\frac{15}{256}\frac{s^9}{R^{10}}.$$ (21)
The Monte Carlo results for $n\ge 4$, and the simulation techniques for producing random points uniformly inside an $n$-sphere, will be presented elsewhere .
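Although the full simulation techniques are deferred, a standard recipe (our hedged sketch, not the deferred reference's method) draws a uniform point in an $n$-ball from an isotropic Gaussian direction and a radius distributed as $RU^{1/n}$. The sketch below, assuming NumPy, checks Eq. (19) for $n=4$; the helper name `uniform_n_ball` is hypothetical:

```python
import numpy as np

def uniform_n_ball(num, n, R=1.0):
    # Isotropic direction from a normalized Gaussian vector,
    # radius R*U**(1/n) so the ball is filled uniformly.
    g = np.random.normal(size=(num, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = R * np.random.uniform(size=(num, 1)) ** (1.0 / n)
    return g * r

n, R, N = 4, 1.0, 200_000
s = np.linalg.norm(uniform_n_ball(N, n, R) - uniform_n_ball(N, n, R), axis=1)

def P4(s, R=1.0):
    # Closed form of Eq. (19).
    x = s / (2.0 * R)
    return (8/np.pi*s**3/R**4*np.arccos(x)
            - 8/(3*np.pi)*s**4/R**5*(1 - x**2)**1.5
            - 4/np.pi*s**4/R**5*np.sqrt(1 - x**2))

hist, edges = np.histogram(s, bins=50, range=(0.0, 2*R), density=True)
mid = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - P4(mid, R))))
```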
It is of interest to verify that Eq. (15), obtained via the present formalism, agrees with results obtained earlier by other means . This is most easily done by introducing the function $C(a;m,n)$ defined by
$$C(a;m,n)=\int_0^a s^mT_n(s)\,ds=\int_0^a s^{m+n-1}Q_n(s)\,ds=\int_0^a s^{m+n-1}ds\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx,$$
(22)
where
$$Q_n(s)=\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx,$$ (23)
$$T_n(s)=s^{n-1}\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx.$$ (24)
It follows that the denominator of Eq. (15), which is the normalization constant, can be written as $C(2R;\,0,n)$ where
$$C(2R;\,0,n)=\int_0^{2R}T_n(s)\,ds.$$ (25)
When $n$ is an even integer, $C(2R;\,0,n)$ can be expressed in terms of the gamma and beta functions by noting that
$$C(2R;\,0,n)=\frac{\pi}{2n}\frac{(n-1)!!}{n!!}R^{2n}=\frac{1}{2n}\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n}{2}+\frac{1}{2}\right)}{\Gamma\left(\frac{n}{2}+1\right)}R^{2n}=\frac{1}{2n}B(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2})R^{2n}.$$
(26)
Similarly, when $n$ is an odd integer,
$$C(2R;\,0,n)=\frac{1}{n}\frac{(n-1)!!}{n!!}R^{2n}=\frac{1}{2n}B(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2})R^{2n}.$$
(27)
We thus find that the normalization constant has the same functional form irrespective of whether $`n`$ is even or odd, when expressed in terms of the beta function.
If we introduce the variable $t=R^2-x^2$ and note that
$$Q_n(s)=\frac{R^n}{2}B_{1-\frac{s^2}{4R^2}}(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2})=\frac{R^n}{2}B(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2})I_{1-\frac{s^2}{4R^2}}(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2}),$$
(28)
we can then rewrite $P_n(s)$ in the form
$$P_n(s)=n\frac{s^{n-1}}{R^n}\frac{B_x(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}$$ (29)
$$=n\frac{s^{n-1}}{R^n}I_x\left(\frac{n}{2}+\frac{1}{2},\frac{1}{2}\right),$$ (30)
where
$$x=1-\frac{s^2}{4R^2}.$$
(31)
$B_x(p,q)$ is the incomplete beta function defined by
$$B_x(p,q)=\int_0^x t^{p-1}\left(1-t\right)^{q-1}dt,$$
(32)
and $I_x$ is the normalized incomplete beta function defined by
$$I_x(p,q)=\frac{\Gamma\left(p+q\right)}{\Gamma\left(p\right)\Gamma\left(q\right)}\int_0^x t^{p-1}\left(1-t\right)^{q-1}dt=\frac{B_x(p,q)}{B(p,q)}.$$
(33)
Eq. (30) is the expression for $`P_n(s)`$ obtained in Refs. , and hence we have demonstrated that the classical results can be reproduced by using the formalism developed here.
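The incomplete beta representation of Eq. (30) is also convenient for numerical evaluation. The following sketch (our addition, assuming SciPy is available) evaluates it with the regularized incomplete beta function and compares against the explicit $n=3$ polynomial of Eq. (9):

```python
import numpy as np
from scipy.special import betainc  # regularized incomplete beta I_x(p, q)

def Pn(s, n, R=1.0):
    # Eq. (30): P_n(s) = n s^(n-1)/R^n * I_x(n/2+1/2, 1/2), x = 1 - s^2/(4R^2).
    x = 1.0 - (s / (2.0 * R)) ** 2
    return n * s ** (n - 1) / R ** n * betainc(0.5 * (n + 1), 0.5, x)

s = np.linspace(0.0, 2.0, 201)
p3_closed = 3*s**2 - 2.25*s**3 + (3.0/16.0)*s**5   # Eq. (9) with R = 1
print(np.max(np.abs(Pn(s, 3) - p3_closed)))        # ~1e-15
```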
We find the following general properties of $P_n(s)$ and its derivative $P_n^{\prime}(s)$ at the lower bound $s=0$ and at the upper bound $s=2R$: $P_1(0)=1/R$, $P_n(0)=0$ for $n\ge 2$, $P_n(2R)=0$, $P_1^{\prime}(0)=-1/2R^2$, $P_2^{\prime}(0)=2/R^2$, $P_n^{\prime}(0)=0$ for $n\ge 3$, $P_1^{\prime}(2R)=-1/2R^2$, and $P_n^{\prime}(2R)=0$ for $n\ge 2$.
It is of interest to express the probability density functions $`P_n(s)`$ in terms of generating functions from which several representations and recursion relations for $`P_n(s)`$ can be derived. This can be achieved by observing that
$$B_x(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2})=\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^nF(h=0,x),$$
(34)
where
$$F(h,x)=\frac{2}{\sqrt{1-h^2}}\left[\sin^{-1}(h)-\sin^{-1}\left(\frac{h-\sqrt{x}}{1-h\sqrt{x}}\right)\right],$$
(35)
and where $0\le x\le 1$ and $-1\le h\le 1$. Similarly, $Q_n(s)$ in Eq. (23) can be defined in terms of a generating function $F_1(h,s)$ of the form
$$F_1(h,s)=\sum_{n=0}^{\infty}Q_n(s)h^n=\frac{1}{\sqrt{1-h^2R^2}}\left[\sin^{-1}(hR)-\sin^{-1}\left(\frac{hR-\sqrt{1-\frac{s^2}{4R^2}}}{1-hR\sqrt{1-\frac{s^2}{4R^2}}}\right)\right],$$
(36)
where $\left|hR\right|<1$. $Q_n(s)$ is then given by
$$Q_n(s)=\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^nF_1(h=0,s).$$
(37)
$T_n(s)$ in Eq. (24) can also be defined in terms of a generating function $F_2(h,s)$,
$$F_2(h,s)=\sum_{n=0}^{\infty}T_n(s)h^n=\left(\frac{1}{s}\right)\frac{1}{\sqrt{1-h^2R^2s^2}}\left[\sin^{-1}(hRs)-\sin^{-1}\left(\frac{hRs-\sqrt{1-\frac{s^2}{4R^2}}}{1-hRs\sqrt{1-\frac{s^2}{4R^2}}}\right)\right],$$
(38)
where $\left|hRs\right|<1$. $T_n(s)$ is then given by
$$T_n(s)=\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^nF_2(h=0,s).$$
(39)
It follows from the previous results that $P_n(s)$ can be expressed in terms of two unique elementary functions, $F_1(h,s)$ in Eq. (36) and $F_2(h,s)$ in Eq. (38), such that
$$P_n(s)=\frac{\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^nF_2(h=0,s)}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^{2n}}=\frac{s^{n-1}\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^nF_1(h=0,s)}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^{2n}}.$$
(40)
It is convenient to summarize the different expressions we have obtained for the PDF of a uniform $`n`$-dimensional sphere with radius $`R`$:
1. Integral representation:
$$P_n(s)=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}\left(R^2-x^2\right)^{\frac{n-1}{2}}dx}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^{2n}},$$
(41)
2. Normalized incomplete beta function representation:
$$P_n(s)=n\frac{s^{n-1}}{R^n}I_{1-\frac{s^2}{4R^2}}(\tfrac{n}{2}+\tfrac{1}{2},\tfrac{1}{2}),$$
(42)
3. Incomplete beta function representation:
$$P_n(s)=n\frac{s^{n-1}B_{1-\frac{s^2}{4R^2}}(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^n},$$
(43)
4. Odd-integer finite series expansion representation ($`n=`$ odd):
$$P_n(s)=n\times\frac{s^{n-1}}{R^n}\frac{n!!}{(n-1)!!}\sum_{i=0}^{\frac{n-1}{2}}\frac{(-1)^i}{2i+1}\frac{\left(\frac{n-1}{2}\right)!}{i!\left(\frac{n-1}{2}-i\right)!}\left[1-\left(\frac{s}{2R}\right)^{2i+1}\right],$$
(44)
5. Even-integer finite series expansion representation ($`n=`$ even):
$$P_n(s)=n\times\frac{s^{n-1}}{R^n}\left[\frac{2}{\pi}\cos^{-1}\left(\frac{s}{2R}\right)-\frac{s}{\pi}\sum_{i=1}^{\frac{n}{2}}\frac{(n-2i)!!}{(n-2i+1)!!}\left(R^2-\frac{s^2}{4}\right)^{\frac{n-2i+1}{2}}R^{2i-2-n}\right],$$
(45)
6. Infinite series expansion representation:
$$P_n(s)=\frac{s^{n-1}\sum_{i=0}^{\infty}\left(-1\right)^i\frac{2^i}{2i+1}\frac{1}{\left(2i\right)!!}\frac{\left(\frac{n-1}{2}\right)!}{\left(\frac{n-1}{2}-i\right)!}\left[1-\left(\frac{s}{2R}\right)^{2i+1}\right]}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^n},$$
(46)
7. Generating function representation I:
$$P_n(s)=\frac{\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^n_{h=0}\left\{\left(\frac{1}{s}\right)\frac{1}{\sqrt{1-h^2R^2s^2}}\left[\sin^{-1}(hRs)-\sin^{-1}\left(\frac{hRs-\sqrt{1-\frac{s^2}{4R^2}}}{1-hRs\sqrt{1-\frac{s^2}{4R^2}}}\right)\right]\right\}}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^{2n}},$$ (47)
8. Generating function representation II:
$$P_n(s)=\frac{s^{n-1}\frac{1}{n!}\left(\frac{\partial}{\partial h}\right)^n_{h=0}\left\{\frac{1}{\sqrt{1-h^2R^2}}\left[\sin^{-1}(hR)-\sin^{-1}\left(\frac{hR-\sqrt{1-\frac{s^2}{4R^2}}}{1-hR\sqrt{1-\frac{s^2}{4R^2}}}\right)\right]\right\}}{\frac{1}{2n}B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})R^{2n}},$$
(48)
9. Hypergeometric function representation:
$$P_n(s)=\frac{2n}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}\frac{s^{n-1}}{R^{n+1}}\left[R\,{}_2F_1(\tfrac{1}{2},\tfrac{1}{2}-\tfrac{n}{2};\tfrac{3}{2};1)-\frac{s}{2}\,{}_2F_1(\tfrac{1}{2},\tfrac{1}{2}-\tfrac{n}{2};\tfrac{3}{2};\tfrac{s^2}{4R^2})\right],$$
(49)
where
$$\int\left(R^2-x^2\right)^{\frac{n-1}{2}}dx=R^{n-1}x\,{}_2F_1(\tfrac{1}{2},\tfrac{1}{2}-\tfrac{n}{2};\tfrac{3}{2};\tfrac{x^2}{R^2}),$$
(50)
and
$$y(x)\equiv{}_2F_1(a,b;c;x)$$ (51)
$$=1+\frac{ab}{c}\frac{x}{1!}+\frac{a(a+1)b(b+1)}{c(c+1)}\frac{x^2}{2!}+\frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)}\frac{x^3}{3!}+\cdots,$$ (52)
is one of the solutions for the hypergeometric equation
$$x(1-x)y^{\prime\prime}(x)+\left[c-(a+b+1)x\right]y^{\prime}(x)-ab\,y(x)=0.$$
(53)
Using the previous results one can obtain a number of identities and recursion relations for the probability density functions $`P_n(s)`$, as we discuss in Appendix B.
## III Spherically symmetric density distributions
In this section we generalize the previous results to the case of an $n$-dimensional sphere of radius $R$ with a variable (but spherically symmetric) density distribution of the form $\rho=\rho(r)$, where $r=\sqrt{x_1^2+x_2^2+\cdots+x_n^2}$ is measured from the center and
$$x_1^2+x_2^2+\cdots+x_n^2\le R^2.$$
As before we begin with the example of a circle ($n=2$) and generalize to a sphere ($n\ge 3$) later. Following the derivation presented in the previous section, the positive $\hat{x}$ direction is chosen to specify the distribution of those $\vec{s}$ vectors that are aligned along the positive $\hat{x}$ direction. At this stage we must consider the differences between uniform and non-uniform density distributions. For a given $s$, if point $2$ carries the density information $\rho(x,y)$, then point $1$ should have the density information $\rho(x-s,y)$. It follows that to incorporate the effects of a spherically symmetric density distribution the following substitution should be made:
$$\rho(\vec{r}_2)\times\rho(\vec{r}_1)\rightarrow\rho(x,y)\times\rho(x-s,y).$$
(54)
Since the density distributions considered are spherically symmetric, the probability of finding a given $`s`$ in any orientation is still proportional to $`2\pi s.`$ The PDF $`P_2(s)`$ can then be expressed in the form
$$P_2(s)=\frac{n_d(s)\times\left(n_1(s)+n_2(s)\right)}{\int_0^{2R}n_d(s)\times\left(n_1(s)+n_2(s)\right)ds},$$
(55)
where
$$n_d(s)=2\pi s,$$ (56)
$$n_1(s)=\int_{s-R}^{\frac{s}{2}}dx\int_{-\sqrt{R^2-(x-s)^2}}^{\sqrt{R^2-(x-s)^2}}\rho(x,y)\times\rho(x-s,y)\,dy,$$ (57)
$$n_2(s)=\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(x,y)\times\rho(x-s,y)\,dy.$$ (58)
Substituting $x-s=-x^{\prime}$ and using $\rho(-x,y)=\rho(x,y)$ it can be shown that $n_1(s)=n_2(s)$. The expression for $P_2(s)$ can then be simplified to read
$$P_2(s)=\frac{s\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(x,y)\times\rho(x-s,y)\,dy}{\int_0^{2R}\left(s\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(x,y)\times\rho(x-s,y)\,dy\right)ds}.$$ (59)
The formalism leading to Eq. (59) can be extended to a $3$-dimensional sphere of radius $R$. For a given $s$, the $z$-axis is chosen arbitrarily as our reference axis to examine the distribution of those $\vec{s}$ vectors that are aligned along the positive $\hat{z}$ direction. If the density at point $2$ is $\rho(x,y,z)$, then the density at point $1$ will be $\rho(x,y,z-s)$. In analogy with the $2$-dimensional case discussed above, the expression in Eq. (54) must be replaced by
$$\rho(\vec{r}_2)\times\rho(\vec{r}_1)=\rho(x,y,z)\times\rho(x,y,z-s).$$
(60)
Since the density distributions considered here are spherically symmetric, the probability of finding a given $s$ in any orientation is proportional to $4\pi s^2$. Hence $P_3(s)$ can be expressed as
$$P_3(s)=\frac{s^2\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}\rho(x,y,z)\times\rho(x,y,z-s)\,dy}{\int_0^{2R}\left(s^2\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}\rho(x,y,z)\times\rho(x,y,z-s)\,dy\right)ds}$$ (61)
$$=\frac{s^2\int_{\frac{s}{2}}^{R}dx_3\int_{-\sqrt{R^2-x_3^2}}^{\sqrt{R^2-x_3^2}}dx_1\int_{-\sqrt{R^2-x_3^2-x_1^2}}^{\sqrt{R^2-x_3^2-x_1^2}}dx_2\,\rho(x_1,x_2,x_3)\rho(x_1,x_2,x_3^{\prime})}{\int_0^{2R}s^2ds\int_{\frac{s}{2}}^{R}dx_3\int_{-\sqrt{R^2-x_3^2}}^{\sqrt{R^2-x_3^2}}dx_1\int_{-\sqrt{R^2-x_3^2-x_1^2}}^{\sqrt{R^2-x_3^2-x_1^2}}dx_2\,\rho(x_1,x_2,x_3)\rho(x_1,x_2,x_3^{\prime})},$$ (62)
where $x_3^{\prime}=x_3-s$.
Up to this point our discussion has been completely general. To continue we next evaluate $`P_3(s)`$ for a $`3`$-dimensional sphere of radius $`R`$ using two different spherically symmetric density distributions. Consider first
$$\rho (r)=\frac{5N}{4\pi R^5}r^2,$$
(63)
where $N=4\pi\int_0^R r^2\rho(r)\,dr$, and $r$ is measured from the center of the spherical distribution. Combining Eqs. (62) and (63) we find
$$P_3(s)=\frac{25}{7}\frac{s^2}{R^3}-\frac{25}{4}\frac{s^3}{R^4}+5\frac{s^4}{R^5}-\frac{25}{16}\frac{s^5}{R^6}+\frac{5}{448}\frac{s^9}{R^{10}}.$$
(64)
A plot of Eq. (64) when $`R=1`$, along with the corresponding Monte Carlo results, is shown in Fig. 4. The second spherically symmetric distribution we consider is
$$\rho(r)=\frac{N}{4\pi\left(\frac{1}{3}-\frac{\alpha}{5}\right)R^3}\left[1-\alpha\left(\frac{r}{R}\right)^2\right],$$
(65)
where $N=4\pi\int_0^R r^2\rho(r)\,dr$, $0\le\alpha\le 1$, and $r$ is measured from the center. Combining Eqs. (62) and (65) we find
$$P_3(s)=\frac{15(35-42\alpha+15\alpha^2)s^2}{7(5-3\alpha)^2R^3}-\frac{225(1-\alpha)^2s^3}{4(5-3\alpha)^2R^4}-\frac{15\alpha s^4}{(5-3\alpha)R^5}+\frac{75(1+6\alpha-3\alpha^2)s^5}{16(5-3\alpha)^2R^6}-\frac{15\alpha s^7}{8(5-3\alpha)^2R^8}+\frac{45\alpha^2s^9}{448(5-3\alpha)^2R^{10}}.$$ (67)
A plot of Eq. (67) when $`R=\alpha =1`$ is shown in Fig. 5, along with the corresponding Monte Carlo results.
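A Monte Carlo cross-check of Eq. (67) is straightforward for $R=\alpha=1$: the radial probability density is then $\propto r^2(1-r^2)$, which can be sampled by rejection. The sketch below is our illustration (assuming NumPy; the helper names are hypothetical):

```python
import numpy as np

N = 200_000

def sample_radius(num):
    # Rejection sampling of p(r) ~ r^2 (1 - r^2) on [0, 1]; the envelope
    # constant 0.25 is the maximum of r^2 (1 - r^2).
    out = np.empty(0)
    while out.size < num:
        r = np.random.uniform(0.0, 1.0, num)
        keep = np.random.uniform(0.0, 0.25, num) < r**2 * (1.0 - r**2)
        out = np.concatenate((out, r[keep]))
    return out[:num]

def sample_points(num):
    g = np.random.normal(size=(num, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * sample_radius(num)[:, None]

s = np.linalg.norm(sample_points(N) - sample_points(N), axis=1)

def P3_eq67(s):
    # Eq. (67) evaluated at R = alpha = 1.
    return 30/7*s**2 - 7.5*s**4 + 75/16*s**5 - 15/32*s**7 + 45/1792*s**9

hist, edges = np.histogram(s, bins=50, range=(0.0, 2.0), density=True)
mid = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - P3_eq67(mid))))
```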
A general formula for the probability density function for an $`n`$-dimensional sphere with radius $`R`$ having a spherically symmetric density distribution can be derived from the previous results. We find
$$P_n(s)=\frac{s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\sqrt{R^2-x_n^2}}^{\sqrt{R^2-x_n^2}}dx_1\cdots\int_{-\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}^{\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}\rho\left(\mathbf{X}\right)\rho\left(\mathbf{X}^{\prime}\right)dx_{n-1}}{\int_0^{2R}\left[s^{n-1}\int_{\frac{s}{2}}^{R}dx_n\int_{-\sqrt{R^2-x_n^2}}^{\sqrt{R^2-x_n^2}}dx_1\cdots\int_{-\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}^{\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}\rho\left(\mathbf{X}\right)\rho\left(\mathbf{X}^{\prime}\right)dx_{n-1}\right]ds},$$
(68)
where
$$\rho(\mathbf{X})=\rho(x_1,x_2,x_3,\ldots,x_n),$$ (69)
$$\rho(\mathbf{X}^{\prime})=\rho(x_1,x_2,x_3,\ldots,x_n-s).$$ (70)
Another density distribution we wish to study is a Gaussian. As an example, consider the case of an $n$-dimensional sphere of radius $R\to\infty$ with a Gaussian density distribution $\rho(r)$ given by
$$\rho_n(r)=\frac{N}{(2\pi)^{\frac{n}{2}}\sigma^n}e^{-\frac{1}{2}\frac{r^2}{\sigma^2}},$$
(71)
where
$$N=\lim_{R\to\infty}n\frac{\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}+1\right)}\int_0^R\rho_n(r)r^{n-1}dr.$$
(72)
In Eq. (72) $`r`$ is measured from the center of the spherical distribution and the integral is over all space. Recall that
$$\int_0^{\infty}x^ne^{-\frac{x^2}{2\sigma^2}}dx=2^{\frac{n-1}{2}}\Gamma\left(\frac{n+1}{2}\right)\sigma^{n+1}.$$ (73)
Combining Eqs. (71) and (73), the PDF for an $`n`$-dimensional sphere in an infinite space with a Gaussian density distribution can be expressed as
$$P_n(s)=\lim_{R\to\infty}\frac{s^{n-1}e^{-\frac{s^2}{4\sigma^2}}}{\int_0^{2R}s^{n-1}e^{-\frac{s^2}{4\sigma^2}}ds}=\frac{1}{2^{n-1}\Gamma\left(\frac{n}{2}\right)\sigma^n}s^{n-1}e^{-\frac{s^2}{4\sigma^2}}.$$
(74)
For $n=3$, Eq. (74) agrees with the result obtained earlier in Ref. . Finally, we note that $P_n(s)$ attains its maximum at the point, denoted by $S_{max}$, given by
$$S_{max}=\sqrt{2(n-1)}\,\sigma.$$
(75)
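The Gaussian case is particularly easy to simulate, since each coordinate of each point is an independent normal deviate. The sketch below (our addition, assuming NumPy) checks both Eq. (74) and the location of the maximum, Eq. (75):

```python
import numpy as np
from math import gamma

n, sigma, N = 3, 1.0, 500_000
p1 = np.random.normal(scale=sigma, size=(N, n))
p2 = np.random.normal(scale=sigma, size=(N, n))
s = np.linalg.norm(p2 - p1, axis=1)

def Pn_gauss(s, n, sigma):
    # Eq. (74).
    return s**(n-1)*np.exp(-s**2/(4*sigma**2))/(2**(n-1)*gamma(n/2)*sigma**n)

hist, edges = np.histogram(s, bins=60, range=(0.0, 6*sigma), density=True)
mid = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - Pn_gauss(mid, n, sigma))))
print(mid[np.argmax(hist)], np.sqrt(2*(n-1))*sigma)  # S_max, Eq. (75)
```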
## IV Arbitrary density distributions
We consider in this section the probability density functions for an $`n`$-dimensional sphere of radius $`R`$ having an arbitrary density distribution,
$$\rho =\rho (𝐗)=\rho (x_1,x_2,\mathrm{},x_n),$$
(76)
where
$$x_1^2+x_2^2+\cdots+x_n^2\le R^2.$$
(77)
The proportionality factors, $`2\pi s`$ ($`n=2`$) and $`4\pi s^2`$ ($`n=3`$), cannot be applied here directly because the density function is not spherically symmetric. For a given $`s,`$ each direction of $`\stackrel{}{s}`$ carries different information specified by the density distribution function.
We begin with a circle of radius $R$ and the conventional notation for polar coordinates, $x=r\cos\varphi$ and $y=r\sin\varphi$. For a given $\vec{s}=\vec{r}_2-\vec{r}_1$, the PDF, $P_2(s)$, is proportional to $A_{\vec{s}}\times\rho(\vec{r}_2)\times\rho(\vec{r}_1)$, where $A_{\vec{s}}$ is the overlapping area between the original circle and a second identical one whose center is shifted to $\vec{s}$, as described in the previous sections. In $2$-dimensional space $\vec{s}$ can be characterized by an angle $\varphi$ in the range $0\le\varphi\le 2\pi$. One can understand the new features that arise for a non-uniform density distribution by referring back to Fig. 1. In the case of a uniform density distribution the picture formed by the vector $\vec{s}$ extending between $L_1$ and $L_2$ is unchanged by a rotation of the entire pattern about the positive $x$-axis. However, for a non-uniform distribution the effect of such a rotation is to shift the vectors into a new region for which the density of points is not the same as it was initially. Stated another way, for a fixed $|\vec{s}|$ the shape of the overlapping area or volume is the same, but will contain a different fraction of the points depending on the orientation of $\vec{s}$. To deal with this effect, one can rotate the coordinate system so that the pattern remains as shown in Fig. 1, but with an appropriately transformed density distribution. To specify this transformation, we associate $\vec{s}$ with a rotation operator $\mathbf{R}(\vec{s})$ such that the direction of $\vec{s}$ is the new $\hat{x}$ direction where $\hat{s}=\hat{x}^{\prime}$, $\hat{x}^{\prime}\cdot\hat{x}=\cos\varphi$, $\hat{x}^{\prime}\cdot\hat{y}=\sin\varphi$, $\hat{y}^{\prime}\cdot\hat{x}=-\sin\varphi$ and $\hat{y}^{\prime}\cdot\hat{y}=\cos\varphi$. We utilize the transformation matrix for a $2$-dimensional rotation
$$R_{2\times 2}(\varphi)=\left[\begin{array}{cc}\cos\varphi&\sin\varphi\\ -\sin\varphi&\cos\varphi\end{array}\right]$$
(78)
to describe $\mathbf{R}(\vec{s})$. Notice that $R_{2\times 2}(\varphi)$ is an orthogonal matrix which satisfies $R_{2\times 2}^{-1}(\varphi)=R_{2\times 2}^{T}(\varphi)$, where $T$ denotes the transpose. Recall that the functional form of the density distribution in Eq. (76) is written in the original coordinate system. The inverse transformation matrix, $R_{2\times 2}^{-1}(\varphi)=R_{2\times 2}^{T}(\varphi)$, should therefore be used to carry the correct density information into the new coordinate system defined by $\hat{s}$ ($\hat{x}^{\prime}$).
It is convenient to introduce the following general notations,
$$\vec{\mathbf{X}}^{\prime}=R_{2\times 2}^{T}(\varphi)\vec{\mathbf{X}},$$ (79)
$$\left[\begin{array}{c}x^{\prime}\\ y^{\prime}\end{array}\right]=\left[\begin{array}{cc}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{array}\right]\left[\begin{array}{c}x\\ y\end{array}\right],$$ (86)
and
$$\vec{\mathbf{X}}^{\prime\prime}=R_{2\times 2}^{T}(\varphi)(\vec{\mathbf{X}}-\vec{\mathbf{S}}),$$ (87)
$$\left[\begin{array}{c}x^{\prime\prime}\\ y^{\prime\prime}\end{array}\right]=\left[\begin{array}{cc}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{array}\right]\left[\begin{array}{c}x-s\\ y\end{array}\right],$$ (94)
where
$$\vec{\mathbf{X}}=\left[\begin{array}{c}x\\ y\end{array}\right],\quad\vec{\mathbf{X}}^{\prime}=\left[\begin{array}{c}x^{\prime}\\ y^{\prime}\end{array}\right],\quad\vec{\mathbf{X}}^{\prime\prime}=\left[\begin{array}{c}x^{\prime\prime}\\ y^{\prime\prime}\end{array}\right],\quad\vec{\mathbf{S}}=\left[\begin{array}{c}s\\ 0\end{array}\right].$$ (95)
We can then express the PDF for a circle of radius $`R`$ with an arbitrary density distribution as
$$P_2(s)=\frac{s\int_0^{2\pi}d\varphi\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})\,dy}{\int_0^{2R}\left[s\int_0^{2\pi}d\varphi\int_{\frac{s}{2}}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})\,dy\right]ds},$$
(96)
where
$$\rho(\mathbf{X}^{\prime})=\rho(\cos\varphi\,x-\sin\varphi\,y,\;\sin\varphi\,x+\cos\varphi\,y),$$
(97)
$$\rho(\mathbf{X}^{\prime\prime})=\rho(\cos\varphi\,(x-s)-\sin\varphi\,y,\;\sin\varphi\,(x-s)+\cos\varphi\,y).$$
(98)
As an example , consider a circle of radius $R$ and a non-uniform density distribution $\rho(x,y)$ given by
$$\rho (x,y)=\frac{640N}{3\pi R^{10}}x^4y^4=\frac{640N}{3\pi R^{10}}r^8\mathrm{cos}^4\varphi \mathrm{sin}^4\varphi ,$$
(99)
where $`N`$ is a normalization constant
$$N=\int_{-R}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}\rho(x,y)\,dy.$$
(100)
For this example the $`2`$-dimensional PDF $`P_2(s)`$ is then given by
$$P_2(s)=\frac{875}{81}\frac{s}{R^2}+\frac{500}{3}\frac{s^3}{R^4}+\frac{7400}{21}\frac{s^5}{R^6}+\frac{400}{3}\frac{s^7}{R^8}+10\frac{s^9}{R^{10}}-\frac{\sqrt{4R^2-s^2}}{\pi}\left[f_1(s)+f_2(s)-f_3(s)\right]-\frac{\sin^{-1}\left(\frac{s}{2R}\right)}{\pi}\left(\frac{1750}{81}\frac{s}{R^2}+\frac{1000}{3}\frac{s^3}{R^4}+\frac{14800}{21}\frac{s^5}{R^6}+\frac{800}{3}\frac{s^7}{R^8}+20\frac{s^9}{R^{10}}\right),$$ (103)
where
$`f_1(s)`$ $`=`$ $`{\displaystyle \frac{14875}{162}}{\displaystyle \frac{s^2}{R^4}}+{\displaystyle \frac{92500}{243}}{\displaystyle \frac{s^4}{R^6}}+{\displaystyle \frac{553985}{1701}}{\displaystyle \frac{s^6}{R^8}},`$ (104)
$`f_2(s)`$ $`=`$ $`{\displaystyle \frac{260315}{10206}}{\displaystyle \frac{s^{10}}{R^{12}}}+{\displaystyle \frac{113693}{47628}}{\displaystyle \frac{s^{14}}{R^{16}}}+{\displaystyle \frac{2509}{142884}}{\displaystyle \frac{s^{18}}{R^{20}}},`$ (105)
$`f_3(s)`$ $`=`$ $`{\displaystyle \frac{2725}{1134}}{\displaystyle \frac{s^8}{R^{10}}}+{\displaystyle \frac{1438825}{142884}}{\displaystyle \frac{s^{12}}{R^{14}}}+{\displaystyle \frac{89189}{285768}}{\displaystyle \frac{s^{16}}{R^{18}}}.`$ (106)
Figure 6 exhibits $`P_2(s)`$ when $`R=1`$, and illustrates the agreement between the Monte Carlo simulation and the analytical result given above.
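A non-uniform density such as Eq. (99) can be sampled by simple rejection against a uniform proposal, which is how one can reproduce the Monte Carlo points of Fig. 6. The sketch below is our illustration (assuming NumPy; the helper name `sample_x4y4` is hypothetical):

```python
import numpy as np

R, N = 1.0, 300_000

def sample_x4y4(num, R=1.0):
    # Rejection sampling for rho ~ x^4 y^4 on the disk of radius R.
    # On the disk |xy| <= r^2/2 <= R^2/2, so (xy)^4 <= (R^2/2)^4.
    out = np.empty((0, 2))
    bound = (R**2 / 2.0) ** 4
    while len(out) < num:
        p = np.random.uniform(-R, R, size=(num, 2))
        p = p[(p**2).sum(axis=1) <= R**2]
        keep = np.random.uniform(0.0, bound, len(p)) < (p[:, 0]*p[:, 1])**4
        out = np.vstack((out, p[keep]))
    return out[:num]

s = np.linalg.norm(sample_x4y4(N) - sample_x4y4(N), axis=1)
hist, edges = np.histogram(s, bins=50, range=(0.0, 2*R), density=True)
# Compare this histogram against Eq. (103) / Fig. 6.
```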
The preceding discussion can be extended to a $3$-dimensional sphere of radius $R$ with an arbitrary density distribution $\rho=\rho(x,y,z)$, where $x=r\sin\theta\cos\varphi$, $y=r\sin\theta\sin\varphi$, and $z=r\cos\theta$ are the usual $3$-dimensional spherical coordinates, and $x^2+y^2+z^2\le R^2$. For a given $\vec{s}=\vec{r}_2-\vec{r}_1$, the PDF $P_3(s)$ is proportional to $\rho(\vec{r}_2)\times\rho(\vec{r}_1)\times V_{\vec{s}}$, where $V_{\vec{s}}$ is the overlapping volume between the original sphere and a second identical one whose center is shifted to $\vec{s}$. In $3$-dimensional space $\vec{s}$ can be oriented at any angle $\varphi$ between $0$ and $2\pi$, and the angle $\theta$ can lie between $0$ and $\pi$. A rotation matrix $R_{3\times 3}(\theta,\varphi)$ is used to represent the rotation operator $\mathbf{R}(\vec{s})$ associated with a given $\vec{s}$ such that
$$R_{3\times 3}(\theta,\varphi)=R_{3\times 3}(\theta)\times R_{3\times 3}(\varphi)=\left[\begin{array}{ccc}\cos\theta\cos\varphi&\cos\theta\sin\varphi&-\sin\theta\\ -\sin\varphi&\cos\varphi&0\\ \sin\theta\cos\varphi&\sin\theta\sin\varphi&\cos\theta\end{array}\right],$$
(107)
where
$$R_{3\times 3}(\theta)=\left[\begin{array}{ccc}\cos\theta&0&-\sin\theta\\ 0&1&0\\ \sin\theta&0&\cos\theta\end{array}\right],$$ (111)
$$R_{3\times 3}(\varphi)=\left[\begin{array}{ccc}\cos\varphi&\sin\varphi&0\\ -\sin\varphi&\cos\varphi&0\\ 0&0&1\end{array}\right].$$ (115)
We observe the following:
1. The rotation matrices $`R_{3\times 3}(\theta ,\varphi )`$, $`R_{3\times 3}(\theta )`$, and $`R_{3\times 3}(\varphi )`$ are orthogonal so that
$$R_{3\times 3}^{-1}(\theta,\varphi)=R_{3\times 3}^{T}(\theta,\varphi)=R_{3\times 3}^{T}(\varphi)R_{3\times 3}^{T}(\theta).$$
(116)
2. The purpose of $`R_{3\times 3}(\varphi )`$ is to transform the coordinate system from $`(x,y,z)`$ to a second coordinate system $`(x^1,y^1,z^1)`$ given by
$$\left[\begin{array}{c}x^1\\ y^1\\ z^1\end{array}\right]=\left[\begin{array}{ccc}\cos\varphi&\sin\varphi&0\\ -\sin\varphi&\cos\varphi&0\\ 0&0&1\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\end{array}\right].$$
(117)
3. The purpose of $`R_{3\times 3}(\theta )`$ is to transform the coordinate system from $`(x^1,y^1,z^1)`$ to a third coordinate system $`(x^2,y^2,z^2)`$ where
$$\left[\begin{array}{c}x^2\\ y^2\\ z^2\end{array}\right]=\left[\begin{array}{ccc}\cos\theta&0&-\sin\theta\\ 0&1&0\\ \sin\theta&0&\cos\theta\end{array}\right]\left[\begin{array}{c}x^1\\ y^1\\ z^1\end{array}\right].$$
(118)
Define the following notations,
$$\vec{\mathbf{X}}^{\prime}=R_{3\times 3}^{T}(\theta,\varphi)\vec{\mathbf{X}}$$ (119)
$$=R_{3\times 3}^{T}(\varphi)R_{3\times 3}^{T}(\theta)\vec{\mathbf{X}},$$ (120)
$$\vec{\mathbf{X}}^{\prime\prime}=R_{3\times 3}^{T}(\theta,\varphi)(\vec{\mathbf{X}}-\vec{\mathbf{S}})$$ (121)
$$=R_{3\times 3}^{T}(\varphi)R_{3\times 3}^{T}(\theta)(\vec{\mathbf{X}}-\vec{\mathbf{S}}),$$ (122)
where
$$\vec{\mathbf{X}}=\left[\begin{array}{c}x\\ y\\ z\end{array}\right],\quad\vec{\mathbf{X}}^{\prime}=\left[\begin{array}{c}x^{\prime}\\ y^{\prime}\\ z^{\prime}\end{array}\right],\quad\vec{\mathbf{X}}^{\prime\prime}=\left[\begin{array}{c}x^{\prime\prime}\\ y^{\prime\prime}\\ z^{\prime\prime}\end{array}\right],\quad\vec{\mathbf{S}}=\left[\begin{array}{c}0\\ 0\\ s\end{array}\right].$$ (123)
The PDF $`P_3(s)`$ can then be expressed in the form
$$P_3(s)=\frac{s^2\int_0^{\pi}\sin\theta\,d\theta\int_0^{2\pi}d\varphi\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})\,dy}{\int_0^{2R}\left[s^2\int_0^{\pi}\sin\theta\,d\theta\int_0^{2\pi}d\varphi\int_{\frac{s}{2}}^{R}dz\int_{-\sqrt{R^2-z^2}}^{\sqrt{R^2-z^2}}dx\int_{-\sqrt{R^2-z^2-x^2}}^{\sqrt{R^2-z^2-x^2}}\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})\,dy\right]ds},$$
(124)
where
$$\rho(\mathbf{X}^{\prime})=\rho(x^{\prime},y^{\prime},z^{\prime}),$$ (125)
$$x^{\prime}=\cos\theta\cos\varphi\,x-\sin\varphi\,y+\sin\theta\cos\varphi\,z,$$ (126)
$$y^{\prime}=\cos\theta\sin\varphi\,x+\cos\varphi\,y+\sin\theta\sin\varphi\,z,$$ (127)
$$z^{\prime}=-\sin\theta\,x+\cos\theta\,z,$$ (128)
$$\rho(\mathbf{X}^{\prime\prime})=\rho(x^{\prime\prime},y^{\prime\prime},z^{\prime\prime}),$$ (129)
$$x^{\prime\prime}=\cos\theta\cos\varphi\,x-\sin\varphi\,y+\sin\theta\cos\varphi\,(z-s),$$ (130)
$$y^{\prime\prime}=\cos\theta\sin\varphi\,x+\cos\varphi\,y+\sin\theta\sin\varphi\,(z-s),$$ (131)
$$z^{\prime\prime}=-\sin\theta\,x+\cos\theta\,(z-s).$$ (132)
As an example, consider a $`3`$-dimensional sphere of radius $`R`$ and non-uniform density distribution $`\rho (x,y,z)`$ given by
$$\rho (x,y,z)=\frac{945N}{4\pi R^9}x^2y^2z^2=\frac{945N}{4\pi R^9}r^6\mathrm{sin}^4\theta \mathrm{cos}^2\theta \mathrm{cos}^2\varphi \mathrm{sin}^2\varphi ,$$
(133)
where $`N`$ is the normalizing factor
$$N=\int_{-R}^{R}dx\int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}}dy\int_{-\sqrt{R^2-x^2-y^2}}^{\sqrt{R^2-x^2-y^2}}\rho(x,y,z)\,dz.$$
(134)
Then
$$P_3(s)=\frac{1701}{143}\frac{s^2}{R^3}-\frac{25515}{572}\frac{s^3}{R^4}+\frac{8505}{143}\frac{s^4}{R^5}-\frac{8505}{208}\frac{s^5}{R^6}+\frac{567}{11}\frac{s^6}{R^7}-\frac{6237}{104}\frac{s^7}{R^8}+9\frac{s^8}{R^9}+\frac{201285}{9152}\frac{s^9}{R^{10}}-\frac{181629}{18304}\frac{s^{11}}{R^{12}}+\frac{16443}{6656}\frac{s^{13}}{R^{14}}-\frac{6075}{18304}\frac{s^{15}}{R^{16}}+\frac{10899}{585728}\frac{s^{17}}{R^{18}}.$$ (136)
Figure 7 is the plot of $`P_3(s)`$ for $`R=1`$, and illustrates the agreement between Monte Carlo simulation and the analytical result.
We can extend the discussion to a $`4`$-dimensional sphere of radius $`R`$ and arbitrary density distribution function,
$$\rho =\rho (x_1,x_2,x_3,x_4),$$
(137)
where
$$x_1^2+x_2^2+x_3^2+x_4^2\le R^2.$$
(138)
The $`4`$-dimensional hyperspherical coordinates that are a generalization of the conventional $`3`$-dimensional spherical coordinates are defined as follows:
$`x_1`$ $`=`$ $`r\mathrm{sin}\theta _2\mathrm{sin}\theta _1\mathrm{cos}\varphi ,`$ (139)
$`x_2`$ $`=`$ $`r\mathrm{sin}\theta _2\mathrm{sin}\theta _1\mathrm{sin}\varphi ,`$ (140)
$`x_3`$ $`=`$ $`r\mathrm{sin}\theta _2\mathrm{cos}\theta _1,`$ (141)
$`x_4`$ $`=`$ $`r\mathrm{cos}\theta _2,`$ (142)
and
$`r`$ $`=`$ $`\sqrt{x_1^2+x_2^2+x_3^2+x_4^2},`$ (143)
$`\theta _1`$ $`=`$ $`\mathrm{tan}^1{\displaystyle \frac{\sqrt{x_1^2+x_2^2}}{x_3}},`$ (144)
$`\theta _2`$ $`=`$ $`\mathrm{tan}^1{\displaystyle \frac{\sqrt{x_1^2+x_2^2+x_3^2}}{x_4}},`$ (145)
$`\varphi `$ $`=`$ $`\mathrm{tan}^1{\displaystyle \frac{x_2}{x_1}},`$ (146)
where
$$0\le r\le R,\qquad 0\le\theta_1,\theta_2\le\pi,\qquad 0\le\varphi\le 2\pi,$$
(147)
and the volume element dV is given by
$$dV=dx_1dx_2dx_3dx_4=r^3\mathrm{sin}^2\theta _2\mathrm{sin}\theta _1drd\theta _2d\theta _1d\varphi .$$
(148)
The representation of the rotation operator $`𝐑(\stackrel{}{s})`$ for a given $`\stackrel{}{s}`$ is a $`4`$-dimensional rotation matrix
$$R_{4\times 4}(\theta_2,\theta_1,\varphi)=R_{4\times 4}(\theta_2)\times R_{4\times 4}(\theta_1)\times R_{4\times 4}(\varphi)$$ (149)
$$=\left[\begin{array}{cccc}\cos\theta_1\cos\varphi&\cos\theta_1\sin\varphi&-\sin\theta_1&0\\ -\sin\varphi&\cos\varphi&0&0\\ \cos\theta_2\sin\theta_1\cos\varphi&\cos\theta_2\sin\theta_1\sin\varphi&\cos\theta_2\cos\theta_1&-\sin\theta_2\\ \sin\theta_2\sin\theta_1\cos\varphi&\sin\theta_2\sin\theta_1\sin\varphi&\sin\theta_2\cos\theta_1&\cos\theta_2\end{array}\right],$$ (154)
where
$$R_{4\times 4}(\theta_2)=\left[\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos\theta_2&-\sin\theta_2\\ 0&0&\sin\theta_2&\cos\theta_2\end{array}\right],$$ (159)
$$R_{4\times 4}(\theta_1)=\left[\begin{array}{cccc}\cos\theta_1&0&-\sin\theta_1&0\\ 0&1&0&0\\ \sin\theta_1&0&\cos\theta_1&0\\ 0&0&0&1\end{array}\right],$$ (164)
$$R_{4\times 4}(\varphi)=\left[\begin{array}{cccc}\cos\varphi&\sin\varphi&0&0\\ -\sin\varphi&\cos\varphi&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right].$$ (169)
It is convenient to introduce the notations,
$$\vec{\mathbf{X}}^{\prime}=R_{4\times 4}^{T}(\theta_2,\theta_1,\varphi)\vec{\mathbf{X}}$$ (170)
$$=R_{4\times 4}^{T}(\varphi)R_{4\times 4}^{T}(\theta_1)R_{4\times 4}^{T}(\theta_2)\vec{\mathbf{X}},$$ (171)
$$\vec{\mathbf{X}}^{\prime\prime}=R_{4\times 4}^{T}(\theta_2,\theta_1,\varphi)(\vec{\mathbf{X}}-\vec{\mathbf{S}}),$$ (172)
$$=R_{4\times 4}^{T}(\varphi)R_{4\times 4}^{T}(\theta_1)R_{4\times 4}^{T}(\theta_2)(\vec{\mathbf{X}}-\vec{\mathbf{S}}),$$ (173)
where
$$\stackrel{}{𝐗}=\left[\begin{array}{c}x_1\\ x_2\\ x_3\\ x_4\end{array}\right],\stackrel{}{𝐗}^{}=\left[\begin{array}{c}x_1^{}\\ x_2^{}\\ x_3^{}\\ x_4^{}\end{array}\right],\stackrel{}{𝐗}^{\prime \prime }=\left[\begin{array}{c}x_1^{\prime \prime }\\ x_2^{\prime \prime }\\ x_3^{\prime \prime }\\ x_4^{\prime \prime }\end{array}\right],\stackrel{}{𝐒}=\left[\begin{array}{c}0\\ 0\\ 0\\ s\end{array}\right].$$
(174)
The PDF $`P_4(s)`$ can then be written as
$$P_4(s)=\frac{s^3\int[\theta_2,\theta_1,\varphi]\times\int[x_1,x_2,x_3,x_4]\times\rho(\mathbf{X}^{\prime})\times\rho(\mathbf{X}^{\prime\prime})}{\int_0^{2R}\left\{s^3\int[\theta_2,\theta_1,\varphi]\times\int[x_1,x_2,x_3,x_4]\times\rho(\mathbf{X}^{\prime})\times\rho(\mathbf{X}^{\prime\prime})\right\}ds},$$
(175)
where
$$\int[\theta_2,\theta_1,\varphi]\equiv\int_0^{\pi}\sin^2\theta_2\,d\theta_2\int_0^{\pi}\sin\theta_1\,d\theta_1\int_0^{2\pi}d\varphi,$$ (176)
$$\int[x_1,x_2,x_3,x_4]\equiv\int_{\frac{s}{2}}^{R}dx_4\int_{-\sqrt{R^2-x_4^2}}^{\sqrt{R^2-x_4^2}}dx_1\int_{-\sqrt{R^2-x_4^2-x_1^2}}^{\sqrt{R^2-x_4^2-x_1^2}}dx_2\int_{-\sqrt{R^2-x_4^2-x_1^2-x_2^2}}^{\sqrt{R^2-x_4^2-x_1^2-x_2^2}}dx_3,$$ (177)
$$\rho(\mathbf{X}^{\prime})=\rho(x_1^{\prime},x_2^{\prime},x_3^{\prime},x_4^{\prime}),$$ (178)
$$x_1^{\prime}=\cos\theta_1\cos\varphi\,x_1-\sin\varphi\,x_2+\cos\theta_2\sin\theta_1\cos\varphi\,x_3+\sin\theta_2\sin\theta_1\cos\varphi\,x_4,$$ (179)
$$x_2^{\prime}=\cos\theta_1\sin\varphi\,x_1+\cos\varphi\,x_2+\cos\theta_2\sin\theta_1\sin\varphi\,x_3+\sin\theta_2\sin\theta_1\sin\varphi\,x_4,$$ (180)
$$x_3^{\prime}=-\sin\theta_1\,x_1+\cos\theta_2\cos\theta_1\,x_3+\sin\theta_2\cos\theta_1\,x_4,$$ (181)
$$x_4^{\prime}=-\sin\theta_2\,x_3+\cos\theta_2\,x_4,$$ (182)
$$\rho(\mathbf{X}^{\prime\prime})=\rho(x_1^{\prime\prime},x_2^{\prime\prime},x_3^{\prime\prime},x_4^{\prime\prime}),$$ (183)
$$x_1^{\prime\prime}=\cos\theta_1\cos\varphi\,x_1-\sin\varphi\,x_2+\cos\theta_2\sin\theta_1\cos\varphi\,x_3+\sin\theta_2\sin\theta_1\cos\varphi\,(x_4-s),$$ (184)
$$x_2^{\prime\prime}=\cos\theta_1\sin\varphi\,x_1+\cos\varphi\,x_2+\cos\theta_2\sin\theta_1\sin\varphi\,x_3+\sin\theta_2\sin\theta_1\sin\varphi\,(x_4-s),$$ (185)
$$x_3^{\prime\prime}=-\sin\theta_1\,x_1+\cos\theta_2\cos\theta_1\,x_3+\sin\theta_2\cos\theta_1\,(x_4-s),$$ (186)
$$x_4^{\prime\prime}=-\sin\theta_2\,x_3+\cos\theta_2\,(x_4-s).$$ (187)
As an example, consider a $`4`$-dimensional sphere of radius $`R`$ and non-uniform density distribution
$$\rho(x_1,x_2,x_3,x_4)=\frac{32N}{\pi^2R^8}x_1^4=\frac{32N}{\pi^2R^8}r^4\sin^4\theta_2\sin^4\theta_1\cos^4\varphi,$$
(188)
where the normalizing factor $`N`$ is given by
$$N=\int_{-R}^{R}dx_1\int_{-\sqrt{R^2-x_1^2}}^{\sqrt{R^2-x_1^2}}dx_2\int_{-\sqrt{R^2-x_1^2-x_2^2}}^{\sqrt{R^2-x_1^2-x_2^2}}dx_3\int_{-\sqrt{R^2-x_1^2-x_2^2-x_3^2}}^{\sqrt{R^2-x_1^2-x_2^2-x_3^2}}\rho(x_1,x_2,x_3,x_4)\,dx_4.$$
(189)
Then
$$P_4(s)=\frac{56}{3}\frac{s^3}{R^4}+48\frac{s^5}{R^6}+8\frac{s^7}{R^8}-\frac{1}{\pi}\left(\frac{196}{3}\frac{s^4}{R^6}+\frac{114}{5}\frac{s^6}{R^8}+\frac{28}{15}\frac{s^8}{R^{10}}-\frac{4}{5}\frac{s^{10}}{R^{12}}+\frac{2}{9}\frac{s^{12}}{R^{14}}-\frac{1}{45}\frac{s^{14}}{R^{16}}\right)\times\sqrt{4R^2-s^2}-\frac{1}{\pi}\left(\frac{112}{3}\frac{s^3}{R^4}+96\frac{s^5}{R^6}+16\frac{s^7}{R^8}\right)\times\sin^{-1}\left(\frac{s}{2R}\right).$$ (192)
Figure 8 is the plot of $`P_4(s)`$ for $`R=1`$, and illustrates the agreement between the Monte Carlo simulation and the analytical result.
We turn next to the general case of an $`n`$-dimensional sphere of radius $`R`$ and arbitrary density distribution,
$$\rho=\rho(x_1,x_2,\ldots,x_n),$$
(193)
where
$$x_1^2+x_2^2+\cdots+x_n^2\le R^2.$$
(194)
Define the following $n$-dimensional spherical coordinates $x_1,\ldots,x_n$ :
$$x_1=r\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\sin\theta_1\cos\varphi,$$ (195)
$$x_2=r\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\sin\theta_1\sin\varphi,$$ (196)
$$x_3=r\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\cos\theta_1,$$ (197)
$$\vdots$$ (198)
$$x_i=r\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_{i-1}\cos\theta_{i-2},$$ (199)
$$\vdots$$ (200)
$$x_{n-2}=r\sin\theta_{n-2}\sin\theta_{n-3}\cos\theta_{n-4},$$ (201)
$$x_{n-1}=r\sin\theta_{n-2}\cos\theta_{n-3},$$ (202)
$$x_n=r\cos\theta_{n-2},$$ (203)
where
$$dV=dx_1dx_2\cdots dx_n$$ (204)
$$=r^{n-1}\sin^{n-2}\theta_{n-2}\sin^{n-3}\theta_{n-3}\cdots\sin^2\theta_2\sin\theta_1\,dr\,d\theta_{n-2}\,d\theta_{n-3}\cdots d\theta_2\,d\theta_1\,d\varphi,$$ (205)
and
$$0\le r\le R,\qquad 0\le\theta_1,\theta_2,\ldots,\theta_{n-3},\theta_{n-2}\le\pi,\qquad 0\le\varphi\le 2\pi.$$
(206)
The rotation operator $`𝐑(\stackrel{}{s})`$ for a given $`\stackrel{}{s}`$ is
$$R_{n\times n}(\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi)=R_{n\times n}(\theta_{n-2})R_{n\times n}(\theta_{n-3})\cdots R_{n\times n}(\theta_2)R_{n\times n}(\theta_1)R_{n\times n}(\varphi).$$
(207)
The matrix $R_{n\times n}(\varphi)$ appearing in Eq. (207) has the following elements: $R_{11}(\varphi)=\cos\varphi$, $R_{12}(\varphi)=\sin\varphi$, $R_{21}(\varphi)=-\sin\varphi$, $R_{22}(\varphi)=\cos\varphi$, and $R_{lm}(\varphi)=\delta_{lm}$, where $l,m\neq 1,2$ and $1\le l,m\le n$. The matrix $R_{n\times n}(\theta_1)$ has the following elements: $R_{11}(\theta_1)=\cos\theta_1$, $R_{13}(\theta_1)=-\sin\theta_1$, $R_{31}(\theta_1)=\sin\theta_1$, $R_{33}(\theta_1)=\cos\theta_1$, and $R_{lm}(\theta_1)=\delta_{lm}$, where $l,m\neq 1,3$ and $1\le l,m\le n$. The matrix $R_{n\times n}(\theta_i)$ has the following elements: $R_{i+1,i+1}(\theta_i)=\cos\theta_i$, $R_{i+1,i+2}(\theta_i)=-\sin\theta_i$, $R_{i+2,i+1}(\theta_i)=\sin\theta_i$, $R_{i+2,i+2}(\theta_i)=\cos\theta_i$, and $R_{lm}(\theta_i)=\delta_{lm}$, where $2\le i\le n-2$, $l,m\neq i+1,i+2$, and $1\le l,m\le n$. Notice that $R_{n\times n}(\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi)$ has the following properties:
1. $R_{n\times n}(\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi)$ is an orthogonal matrix such that $R_{n\times n}^{-1}=R_{n\times n}^{T}$.
2. The $1$st row matrix elements are $R_{11}=\cos\theta_1\cos\varphi$, $R_{12}=\cos\theta_1\sin\varphi$, $R_{13}=-\sin\theta_1$, and $R_{1j}=0$ for $4\le j\le n$.
3. The $2$nd row matrix elements are $R_{21}=-\sin\varphi$, $R_{22}=\cos\varphi$, and $R_{2j}=0$ for $3\le j\le n$.
4. The $i$th row matrix elements, where $3\le i\le n-1$, are $R_{i1}=\cos\theta_{i-1}\times x_1[i]$, $R_{i2}=\cos\theta_{i-1}\times x_2[i]$, $R_{im}=\cos\theta_{i-1}\times x_m[i]$ for $1\le m\le i$, $R_{ii}=\cos\theta_{i-1}\times x_i[i]$, $R_{i,i+1}=-\sin\theta_{i-1}$, and $R_{ij}=0$ for $i+2\le j\le n$, where $x_m[i]$ is the $m$th component of the $i$-dimensional Cartesian coordinate system in the representation of the $i$-dimensional spherical coordinate system for a unit vector. Some examples are $x_3[3]=\cos\theta_1$, $x_3[4]=\sin\theta_2\cos\theta_1$, $x_3[5]=\sin\theta_3\sin\theta_2\cos\theta_1$, and $x_3[6]=\sin\theta_4\sin\theta_3\sin\theta_2\cos\theta_1$.
5. The $n$th row matrix elements are $R_{nj}=x_j[n]$ for $1\le j\le n$, where $x_j[n]$ is the $j$th component of the $n$-dimensional Cartesian coordinate in the representation of the $n$-dimensional spherical coordinate system for a unit vector. Some examples are
$$x_1[n]=\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\sin\theta_1\cos\varphi,$$ (208)
$$x_2[n]=\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\sin\theta_1\sin\varphi,$$ (209)
$$x_3[n]=\sin\theta_{n-2}\sin\theta_{n-3}\cdots\sin\theta_2\cos\theta_1,$$ (210)
$$x_n[n]=\cos\theta_{n-2}.$$ (211)
The final master probability density function formula $`P_n(s)`$ for an $`n`$-dimensional sphere of radius $`R`$ and arbitrary density distribution has the following mathematical representation:
$$P_n(s)=\frac{s^{n-1}\int[\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi]\int[x_1,x_2,\ldots,x_n]\,\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})}{\int_0^{2R}\left\{s^{n-1}\int[\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi]\int[x_1,x_2,\ldots,x_n]\,\rho(\mathbf{X}^{\prime})\rho(\mathbf{X}^{\prime\prime})\right\}ds},$$
(212)
where
$$\int[\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi]\equiv\int_0^{\pi}\sin^{n-2}\theta_{n-2}\,d\theta_{n-2}\int_0^{\pi}\sin^{n-3}\theta_{n-3}\,d\theta_{n-3}\cdots\int_0^{\pi}\sin^2\theta_2\,d\theta_2\int_0^{\pi}\sin\theta_1\,d\theta_1\int_0^{2\pi}d\varphi,$$ (214)
$$\int[x_1,x_2,\ldots,x_n]\equiv\int_{\frac{s}{2}}^{R}dx_n\int_{-\sqrt{R^2-x_n^2}}^{\sqrt{R^2-x_n^2}}dx_1\int_{-\sqrt{R^2-x_n^2-x_1^2}}^{\sqrt{R^2-x_n^2-x_1^2}}dx_2\cdots\int_{-\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-3}^2}}^{\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-3}^2}}dx_{n-2}\int_{-\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}^{\sqrt{R^2-x_n^2-x_1^2-\cdots-x_{n-2}^2}}dx_{n-1},$$ (216)
$$\rho(\mathbf{X}^{\prime})=\rho(x_1^{\prime},x_2^{\prime},\ldots,x_n^{\prime}),$$ (217)
$$\rho(\mathbf{X}^{\prime\prime})=\rho(x_1^{\prime\prime},x_2^{\prime\prime},\ldots,x_n^{\prime\prime}).$$ (218)
Additionally, we introduce the following notations:
$$\vec{\mathbf{X}}^{\prime}=R_{n\times n}^{T}(\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi)\vec{\mathbf{X}}$$ (219)
$$=R_{n\times n}^{T}(\varphi)R_{n\times n}^{T}(\theta_1)R_{n\times n}^{T}(\theta_2)\cdots R_{n\times n}^{T}(\theta_{n-3})R_{n\times n}^{T}(\theta_{n-2})\vec{\mathbf{X}},$$ (220)
$$\vec{\mathbf{X}}^{\prime\prime}=R_{n\times n}^{T}(\theta_{n-2},\theta_{n-3},\ldots,\theta_2,\theta_1,\varphi)(\vec{\mathbf{X}}-\vec{\mathbf{S}})$$ (221)
$$=R_{n\times n}^{T}(\varphi)R_{n\times n}^{T}(\theta_1)R_{n\times n}^{T}(\theta_2)\cdots R_{n\times n}^{T}(\theta_{n-3})R_{n\times n}^{T}(\theta_{n-2})(\vec{\mathbf{X}}-\vec{\mathbf{S}}),$$ (222)
where
$$\vec{\mathbf{X}}=\left[\begin{array}{c}x_1\\ x_2\\ \vdots\\ x_{n-1}\\ x_n\end{array}\right],\quad\vec{\mathbf{X}}^{\prime}=\left[\begin{array}{c}x_1^{\prime}\\ x_2^{\prime}\\ \vdots\\ x_{n-1}^{\prime}\\ x_n^{\prime}\end{array}\right],\quad\vec{\mathbf{X}}^{\prime\prime}=\left[\begin{array}{c}x_1^{\prime\prime}\\ x_2^{\prime\prime}\\ \vdots\\ x_{n-1}^{\prime\prime}\\ x_n^{\prime\prime}\end{array}\right],\quad\vec{\mathbf{S}}=\left[\begin{array}{c}0\\ 0\\ \vdots\\ 0\\ s\end{array}\right].$$
(223)
The technique for generating random points within an $`n`$-dimensional sphere having an arbitrary density distribution will be discussed elsewhere .
## V Applications
### A $`m`$th moment
We first calculate the $m$th moment $\langle s^m\rangle$ for the case of an $n$-dimensional uniform sphere, where
$$\langle s^m\rangle=\int_0^{2R}s^mP_n(s)\,ds,$$
(224)
and $m$ is a positive integer. Evidently $\langle s^m\rangle$ gives the expectation (average) value of the $m$th power of the distance between two independent random points generated inside a uniform $n$-dimensional sphere. By utilizing the function $C(a;m,n)$ defined in Eq. (22), we can write $\langle s^m\rangle$ as
$$\langle s^m\rangle=\frac{C(2R;m,n)}{C(2R;0,n)}=2^{m+n}\left(\frac{n}{m+n}\right)\frac{B(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+\frac{1}{2}+\frac{m}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}R^m,$$
(225)
where we have used the following identity:
$$C(a;m,n)=\frac{1}{2}\frac{a^{m+n}}{m+n}B(\tfrac{1}{2},\tfrac{n}{2}+\tfrac{1}{2})R^n-\frac{1}{2}\frac{a^{m+n}}{m+n}B_{\left(\frac{a}{2R}\right)^2}(\tfrac{1}{2},\tfrac{n}{2}+\tfrac{1}{2})R^n+\frac{1}{2}\frac{(2R)^{m+n}}{m+n}B_{\left(\frac{a}{2R}\right)^2}(\tfrac{n}{2}+\tfrac{1}{2}+\tfrac{m}{2},\tfrac{n}{2}+\tfrac{1}{2})R^n.$$ (227)
Furthermore we can rewrite Eq. (225) in terms of the gamma function by replacing the beta functions as follows:
$$\langle s^m\rangle=(2R)^m\left(\frac{n}{m+n}\right)\frac{\Gamma\left(\frac{n}{2}+\frac{m}{2}+\frac{1}{2}\right)\Gamma\left(n+1\right)}{\Gamma\left(n+1+\frac{m}{2}\right)\Gamma\left(\frac{n}{2}+\frac{1}{2}\right)}$$ (228)
$$=\left(\frac{n}{m+n}\right)^2\frac{\Gamma\left(n+m+1\right)\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n}{2}+\frac{m}{2}\right)\Gamma\left(n+1+\frac{m}{2}\right)}R^m.$$ (229)
The results in Eqs. (228) and (229) are identical to those given in Refs. . They can be extended to evaluate $\langle 1/s^m\rangle$ and we find
$$\left\langle\frac{1}{s^m}\right\rangle=\frac{n}{n-m}\frac{2^{n-m}}{R^m}\frac{B(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+\frac{1}{2}-\frac{m}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})},$$
(230)
where
$$m\le n-1.$$
(231)
Combining Eqs. (225) and (230), the $`m`$th moment $`s^m`$ has the general form
$$\langle s^m\rangle=2^{n+m}\left(\frac{n}{n+m}\right)\frac{B(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+\frac{1}{2}+\frac{m}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}R^m,$$
(232)
where
$$m=-(n-1),\,-(n-2),\,\ldots,\,-2,\,-1,\,0,\,1,\,2,\,\ldots.$$
(233)
Following is a short list of $\langle s^m\rangle$ in $3$ dimensions: $\langle 1/s^2\rangle=9/4$, $\langle 1/s\rangle=6/5$, $\langle s\rangle=36/35$, $\langle s^2\rangle=6/5$, $\langle s^3\rangle=32/21$, $\langle s^4\rangle=72/35$, and $\langle s^5\rangle=32/11$, where the radius $R$ has been set to unity.
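These values are quickly reproduced from Eq. (232). The sketch below is our addition (standard library only; the helper names are hypothetical):

```python
from math import gamma

def beta(p, q):
    return gamma(p) * gamma(q) / gamma(p + q)

def moment(m, n, R=1.0):
    # Eq. (232): <s^m> for a uniform n-ball, valid for m >= -(n-1).
    return (2.0**(n + m) * n / (n + m)
            * beta(n/2 + 0.5, n/2 + 0.5 + m/2) / beta(n/2 + 0.5, 0.5)
            * R**m)

# Reproduce the n = 3, R = 1 list: -2 -> 9/4, -1 -> 6/5, 1 -> 36/35, ...
for m in (-2, -1, 1, 2, 3, 4, 5):
    print(m, moment(m, 3))
```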
Additionally, $\langle s^m\rangle$ can be evaluated for a sphere having a Gaussian density distribution and radius $R\to\infty$. From Eq. (74) we have,
$$\langle s^m\rangle=\lim_{R\to\infty}\int_0^{2R}s^mP_n(s)\,ds=(2\sigma)^m\frac{\Gamma\left(\frac{n+m}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}.$$
(234)
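Eq. (234) is also easy to verify by sampling Gaussian points directly; the short check below is ours, with $`\sigma =1`$ and $`n=5`$ assumed as arbitrary test values.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
n, sigma, N = 5, 1.0, 1_000_000
x = rng.normal(scale=sigma, size=(N, n))
y = rng.normal(scale=sigma, size=(N, n))
s = np.linalg.norm(x - y, axis=1)
for m in (-2, -1, 1, 2, 3):
    print(m, np.mean(s**m), (2*sigma)**m * gamma((n + m) / 2) / gamma(n / 2))
```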
In some applications involving low-energy interactions among nucleons the lower limit (zero) should be replaced by the hard-core radius $`r_c\approx 0.5\times 10^{-13}`$ cm. In such cases the expressions for $`P_n(s)`$ and $`\langle s^m\rangle `$ assume the form:
$$P_n(s)=\frac{s^{n-1}\int _{\frac{s}{2}}^R\left(R^2-x^2\right)^{\frac{n-1}{2}}dx}{\int _{r_c}^{2R}ds\int _{\frac{s}{2}}^Rdx\,s^{n-1}\left(R^2-x^2\right)^{\frac{n-1}{2}}}=\frac{s^{n-1}\int _{\frac{s}{2}}^R\left(R^2-x^2\right)^{\frac{n-1}{2}}dx}{C(2R;0,n)-C(r_c;0,n)}$$
(235)
and
$$\langle s^m\rangle =\int _{r_c}^{2R}s^mP_n(s)\,ds=\frac{H(R,r_c;m,n)}{H(R,r_c;\mathrm{\hspace{0.17em}0},n)},$$
(236)
where
$$H(R,r_c;m,n)=\frac{(2R)^{n+m}}{n+m}\left[B(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+\frac{1}{2}+\frac{m}{2})-B_{\left(\frac{r_c}{2R}\right)^2}(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+\frac{1}{2}+\frac{m}{2})\right]-\frac{r_c^{n+m}}{n+m}\left[B(\frac{1}{2},\frac{n}{2}+\frac{1}{2})-B_{\left(\frac{r_c}{2R}\right)^2}(\frac{1}{2},\frac{n}{2}+\frac{1}{2})\right],$$
(238)
and $`m`$ is an integer.
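A small numerical sketch of our own comparing the closed form (236) with direct integration; the function `B_x(a, b)` below is built from SciPy's regularized incomplete beta function, and the test values of $`R`$, $`r_c`$, $`m`$, $`n`$ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as B, betainc

def B_x(a, b, x):
    return betainc(a, b, x) * B(a, b)      # unregularized incomplete beta

def H(R, rc, m, n):
    x = (rc / (2 * R)) ** 2
    return ((2 * R) ** (n + m) / (n + m)
            * (B(n/2 + 0.5, n/2 + 0.5 + m/2) - B_x(n/2 + 0.5, n/2 + 0.5 + m/2, x))
            - rc ** (n + m) / (n + m)
            * (B(0.5, n/2 + 0.5) - B_x(0.5, n/2 + 0.5, x)))

def P_unnorm(s, R, n):    # s^(n-1) * Int_{s/2}^R (R^2 - x^2)^((n-1)/2) dx
    return s ** (n - 1) * quad(lambda x: (R*R - x*x) ** ((n - 1) / 2), s / 2, R)[0]

R, rc, n, m = 1.0, 0.05, 3, -2
norm = quad(lambda s: P_unnorm(s, R, n), rc, 2 * R)[0]
direct = quad(lambda s: s ** m * P_unnorm(s, R, n), rc, 2 * R)[0] / norm
print(direct, H(R, rc, m, n) / H(R, rc, 0, n))   # the two values should agree
```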
### B Coulomb Self-Energy of a Collection of Charges
As an application of the preceding formalism we evaluate the electrostatic energy $`W_n`$ of a collection of $`Z`$ charges in $`n`$ dimensions by applying geometric probability techniques. Consider first the familiar case of a spherical charge distribution in $`3`$ dimensions. For each pair of charges the potential energy due to the Coulomb interaction in Gaussian units is
$$V\left(\left|\stackrel{}{r}_2-\stackrel{}{r}_1\right|\right)=\frac{e_0^2}{\left|\stackrel{}{r}_2-\stackrel{}{r}_1\right|},$$
(239)
where $`e_0`$ is the elementary charge ($`e_0^2/\mathrm{\hbar }c\approx 1/137`$). Hence if we assume that the charges in each pair are uniformly distributed within the same spherical volume of radius $`R`$ then the average Coulomb energy $`U_3`$ of each pair of charges is
$$U_3=e_0^2\int _0^{2R}\frac{1}{s}P_3(s)\,ds=\frac{6}{5}\frac{e_0^2}{R}.$$
(240)
For a collection of $`Z`$ charges there are $`Z(Z-1)/2`$ such pairs, and hence the total Coulomb energy $`W_3`$ is
$$W_3=\frac{Z(Z-1)}{2}U_3=\frac{3}{5}Z(Z-1)\frac{e_0^2}{R}.$$
(241)
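Eq. (241) can be confirmed by an independent Monte Carlo estimate of $`\langle 1/s\rangle `$; the check below is ours, in units $`e_0=1`$ with arbitrary test values of $`Z`$ and $`R`$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, R, Z = 1_000_000, 2.0, 10

def uniform_ball(n_points, radius):
    v = rng.normal(size=(n_points, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v * (rng.random(n_points) ** (1 / 3))[:, None]

inv_s = 1.0 / np.linalg.norm(uniform_ball(N, R) - uniform_ball(N, R), axis=1)
W3_mc = Z * (Z - 1) / 2 * np.mean(inv_s)          # <1/s> = 6/(5R)
print(W3_mc, 0.6 * Z * (Z - 1) / R)               # both should be close
```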
For $`n\ge 3`$ the Coulomb potential energy between two charges has the general form
$$V_n\left(\left|\stackrel{}{r}_2-\stackrel{}{r}_1\right|\right)=\frac{q_n^2}{\left|\stackrel{}{r}_2-\stackrel{}{r}_1\right|^{n-2}},$$
(242)
where $`q_n`$ is a suitably defined charge with appropriate dimensions. Hence Eq. (240) generalizes to
$$U_n=q_n^2\int _0^{2R}\frac{1}{s^{n-2}}P_n(s)\,ds=q_n^2\left\langle \frac{1}{s^{n-2}}\right\rangle .$$
(243)
Using the results of Eq. (232) with $`m=-(n-2)`$ we then find
$$W_n=nZ(Z-1)\frac{B(\frac{n}{2}+\frac{1}{2},\frac{3}{2})}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}\frac{q_n^2}{R^{n-2}}=\left(\frac{n}{n+2}\right)\frac{Z(Z-1)q_n^2}{R^{n-2}}.$$
(244)
For $`n`$ very large $`W_n`$ assumes the limiting form
$$\underset{n\to \mathrm{\infty }}{lim}W_n\simeq \frac{Z(Z-1)q_n^2}{R^{n-2}}.$$
(245)
If the density distribution $`\rho _n(r)`$ of the charges is Gaussian rather than uniform, where
$$\rho _n(r)=\frac{q_n}{(2\pi )^{\frac{n}{2}}\sigma _n^n}e^{-\frac{1}{2}\frac{r^2}{\sigma _n^2}},$$
(246)
then
$$W_n=\frac{1}{2^{n-2}\mathrm{\Gamma }\left(\frac{n}{2}\right)}\frac{q_n^2}{\sigma ^{n-2}}.$$
(247)
In the limit $`n\to \mathrm{\infty }`$,
$$W_n\simeq \frac{2}{\pi }e^{n/2}n^{-n/2}\frac{q_n^2}{\sigma ^{n-2}}.$$
(248)
These results are of interest in the context of recent work on modifications to the Newtonian inverse-square law arising from the existence of extra spatial dimensions. It is well known that the gravitational interaction is weaker in a space with $`n>3`$ spatial dimensions, this weakness being manifested by an inverse-power-law for the force $`F_n`$, $`F_n\propto 1/r^{n-1}`$.
### C Neutrino-Pair Exchange Interactions
A second example of interest is the $`\nu \overline{\nu }`$-exchange (neutrino-pair exchange) contribution to the self energy of a nucleus or a neutron star. For two point masses the $`2`$-body potential energy is given by
$$V_{\nu \overline{\nu }}\left(\left|\stackrel{}{r}_i-\stackrel{}{r}_j\right|\right)=\frac{G_F^2a_ia_j}{4\pi ^3\left|\stackrel{}{r}_i-\stackrel{}{r}_j\right|^5},$$
(249)
where $`a_i`$ and $`a_j`$ are coupling constants which characterize the strength of the neutrino coupling to fermions $`i`$ and $`j`$ ($`i`$, $`j`$ = electron, proton, or neutron). In the standard model,
$`a_e`$ $`=`$ $`{\displaystyle \frac{1}{2}}+2\mathrm{sin}^2\theta _W=0.964`$
$`a_p`$ $`=`$ $`{\displaystyle \frac{1}{2}}-2\mathrm{sin}^2\theta _W=0.036`$
$`a_n`$ $`=`$ $`-{\displaystyle \frac{1}{2}}.`$
In contrast to the Coulomb interaction, the functional form of $`V_{\nu \overline{\nu }}(r)`$ cannot be determined in a space of arbitrary dimensions on the basis of a general argument utilizing Gauss’ law. Hence we restrict our attention here to $`3`$ spatial dimensions and consider the case of a sphere of radius $`R`$ containing $`N`$ neutrons. For the case of a uniform density distribution we then find
$$W_3=\frac{N(N-1)}{2}\left(\frac{3}{2r_c^2R^3}-\frac{9}{4r_cR^4}+\frac{9}{8R^5}-\frac{3r_c}{16R^6}\right)\frac{G_F^2}{4\pi ^3}\approx \frac{3N(N-1)G_F^2}{16\pi ^3r_c^2R^3},$$
(250)
where $`r_c`$ is the hard-core radius. The analogous result for a Gaussian density distribution is
$$W_3=\left[\frac{1}{r_c^2}e^{-r_c^2/4\sigma ^2}-\frac{\mathrm{\Gamma }(0,r_c^2/4\sigma ^2)}{4\sigma ^2}\right]\frac{N(N-1)G_F^2}{32\sigma ^3\pi ^{7/2}},$$
(251)
where $`\mathrm{\Gamma }(a,b)`$ is the incomplete gamma function. The expression in Eq. (250) agrees with the result obtained in Refs.
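The bracketed expression in Eq. (250) is just the integral $`\int _{r_c}^{2R}s^{-5}P_3(s)\,ds`$ taken with the uniform-sphere PDF, which the following short check of ours (in units $`G_F=R=1`$) verifies numerically; the integration range is split near $`r_c`$ to tame the $`1/s^3`$ behaviour there.

```python
import numpy as np
from scipy.integrate import quad

R, rc = 1.0, 1e-3

def P3(s):   # uniform-sphere PDF in 3 dimensions
    return 3*s**2/R**3 - 9*s**3/(4*R**4) + 3*s**5/(16*R**6)

f = lambda s: P3(s) / s**5
bracket = quad(f, rc, 0.1, limit=200)[0] + quad(f, 0.1, 2*R)[0]
exact = 3/(2*rc**2*R**3) - 9/(4*rc*R**4) + 9/(8*R**5) - 3*rc/(16*R**6)
print(bracket, exact)    # W_3 = N(N-1)/2 * bracket * G_F^2 / (4 pi^3)
```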
### D Neutron Star Models
Another application of current interest is the self-energy of a neutron star arising from the exchange of $`\nu \overline{\nu }`$ pairs. Here we evaluate the probability density functions in $`3`$ dimensions for neutron stars with a multiple-shell uniform density distribution, which is what is typically assumed in neutron star models. For illustrative purposes, we discuss spherically symmetric models with $`2`$, $`3`$, and $`4`$ spherical shells, where for simplicity we assume shells of equal thickness. Some other multiple-shell models and their $`n`$-dimensional probability density functions can be found in Refs.
For a $`2`$-shell model with a uniform density in each shell define $`\rho =\rho _1`$ for $`0\le r\le R/2`$ and $`\rho =\rho _2`$ for $`R/2\le r\le R`$, where $`\rho _1`$ and $`\rho _2`$ are constants and $`r`$ is measured from the center of the neutron star in $`3`$ dimensions. Using the preceding formalism we can show that the PDF has $`4`$ different functional forms specified by $`4`$ regions:
1. $`0\le s\le \frac{1}{2}R`$:
$$P_3(s)=\frac{24(\rho _1^2+7\rho _2^2)s^2}{\left(\rho _1+7\rho _2\right)^2R^3}-\frac{36(\rho _1^2-2\rho _1\rho _2+5\rho _2^2)s^3}{\left(\rho _1+7\rho _2\right)^2R^4}+\frac{12(\rho _1^2-2\rho _1\rho _2+2\rho _2^2)s^5}{\left(\rho _1+7\rho _2\right)^2R^6},$$
(252)
2. $`\frac{1}{2}R\le s\le R`$:
$$P_3(s)=-\frac{81(\rho _1-\rho _2)\rho _2s}{2\left(\rho _1+7\rho _2\right)^2R^2}+\frac{24\rho _1s^2}{\left(\rho _1+7\rho _2\right)R^3}-\frac{36\rho _1(\rho _1+3\rho _2)s^3}{\left(\rho _1+7\rho _2\right)^2R^4}+\frac{12\rho _1^2s^5}{\left(\rho _1+7\rho _2\right)^2R^6},$$
(254)
3. $`R\le s\le \frac{3}{2}R`$:
$$P_3(s)=-\frac{81(\rho _1-\rho _2)\rho _2s}{2\left(\rho _1+7\rho _2\right)^2R^2}+\frac{24(9\rho _1-\rho _2)\rho _2s^2}{\left(\rho _1+7\rho _2\right)^2R^3}-\frac{36(5\rho _1-\rho _2)\rho _2s^3}{\left(\rho _1+7\rho _2\right)^2R^4}+\frac{12(2\rho _1-\rho _2)\rho _2s^5}{\left(\rho _1+7\rho _2\right)^2R^6},$$
(256)
4. $`\frac{3}{2}R\le s\le 2R`$:
$$P_3(s)=\frac{192\rho _2^2s^2}{\left(\rho _1+7\rho _2\right)^2R^3}-\frac{144\rho _2^2s^3}{\left(\rho _1+7\rho _2\right)^2R^4}+\frac{12\rho _2^2s^5}{\left(\rho _1+7\rho _2\right)^2R^6}.$$
(257)
We observe that the PDFs defined in adjacent regions are continuous across the boundaries separating the regions.
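The $`2`$-shell PDF above can also be checked by simulation. The sketch below is ours: it draws radii from the piecewise density $`\rho (r)r^2`$ for arbitrary test values of $`\rho _1`$, $`\rho _2`$ and compares a windowed density estimate of the pair distances against the four-branch $`P_3(s)`$.

```python
import numpy as np

rng = np.random.default_rng(3)
rho1, rho2, R, N = 3.0, 1.0, 1.0, 1_000_000
D = (rho1 + 7*rho2)**2

def P3(s):  # the four branches above
    if s <= 0.5*R:
        return (24*(rho1**2 + 7*rho2**2)*s**2/(D*R**3)
                - 36*(rho1**2 - 2*rho1*rho2 + 5*rho2**2)*s**3/(D*R**4)
                + 12*(rho1**2 - 2*rho1*rho2 + 2*rho2**2)*s**5/(D*R**6))
    if s <= R:
        return (-81*(rho1 - rho2)*rho2*s/(2*D*R**2)
                + 24*rho1*s**2/((rho1 + 7*rho2)*R**3)
                - 36*rho1*(rho1 + 3*rho2)*s**3/(D*R**4)
                + 12*rho1**2*s**5/(D*R**6))
    if s <= 1.5*R:
        return (-81*(rho1 - rho2)*rho2*s/(2*D*R**2)
                + 24*(9*rho1 - rho2)*rho2*s**2/(D*R**3)
                - 36*(5*rho1 - rho2)*rho2*s**3/(D*R**4)
                + 12*(2*rho1 - rho2)*rho2*s**5/(D*R**6))
    return (192*rho2**2*s**2/(D*R**3) - 144*rho2**2*s**3/(D*R**4)
            + 12*rho2**2*s**5/(D*R**6))

def points(n):  # radii from the piecewise density rho(r) r^2, isotropic directions
    w1, w2 = rho1*0.125, rho2*0.875          # shell masses up to a common factor
    inner = rng.random(n) < w1/(w1 + w2)
    u = rng.random(n)
    r = np.where(inner, 0.5*R*u**(1/3), R*(0.125 + 0.875*u)**(1/3))
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * r[:, None]

s = np.linalg.norm(points(N) - points(N), axis=1)
for s0 in (0.3, 0.8, 1.2, 1.8):
    mc = np.mean(np.abs(s - s0) < 0.01) / 0.02   # density in a +-0.01 window
    print(s0, mc, P3(s0))
```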
For a $`3`$-shell model of a sphere of radius $`R`$ in $`3`$ dimensions, with a different uniform density in each shell, define $`\rho =\rho _1`$ for $`0\le r\le R/3`$, $`\rho =\rho _2`$ for $`R/3\le r\le 2R/3`$, and $`\rho =\rho _3`$ for $`2R/3\le r\le R`$, where $`\rho _1`$, $`\rho _2`$, and $`\rho _3`$ are constants and $`r`$ is measured from the center of the neutron star. In this case the PDF has $`6`$ different functional forms specified by $`6`$ regions:
1. $`0\le s\le \frac{1}{3}R`$:
$$P_3(s)=\frac{81(\rho _1^2+7\rho _2^2+19\rho _3^2)s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{729(\rho _1^2-2\rho _1\rho _2+5\rho _2^2-8\rho _2\rho _3+13\rho _3^2)s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187(\rho _1^2-2\rho _1\rho _2+2\rho _2^2-2\rho _2\rho _3+2\rho _3^2)s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6},$$
(259)
2. $`\frac{1}{3}R\le s\le \frac{2}{3}R`$:
$$P_3(s)=-\frac{81(\rho _2-\rho _3)(9\rho _1-9\rho _2+25\rho _3)s}{8(\rho _1+7\rho _2+19\rho _3)^2R^2}+\frac{81(\rho _1^2+7\rho _1\rho _2-7\rho _1\rho _3+26\rho _2\rho _3)s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{729(\rho _1^2+3\rho _1\rho _2-5\rho _1\rho _3+10\rho _2\rho _3)s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187(\rho _1^2-2\rho _1\rho _3+2\rho _2\rho _3)s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6},$$
(261)
3. $`\frac{2}{3}R\le s\le R`$:
$$P_3(s)=-\frac{81(9\rho _1\rho _2-9\rho _2^2+55\rho _1\rho _3-30\rho _2\rho _3-25\rho _3^2)s}{8(\rho _1+7\rho _2+19\rho _3)^2R^2}+\frac{81(9\rho _1\rho _2-\rho _2^2+19\rho _1\rho _3)s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{729(5\rho _1\rho _2-\rho _2^2+5\rho _1\rho _3)s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187(2\rho _1-\rho _2)\rho _2s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6},$$
(263)
4. $`R\le s\le \frac{4}{3}R`$:
$$P_3(s)=-\frac{81(64\rho _1-39\rho _2-25\rho _3)\rho _3s}{8(\rho _1+7\rho _2+19\rho _3)^2R^2}+\frac{81(8\rho _2^2+28\rho _1\rho _3-9\rho _2\rho _3)s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{729(4\rho _2^2+10\rho _1\rho _3-5\rho _2\rho _3)s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187(\rho _2^2+2\rho _1\rho _3-2\rho _2\rho _3)s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6},$$
(265)
5. $`\frac{4}{3}R\le s\le \frac{5}{3}R`$:
$$P_3(s)=-\frac{2025(\rho _2-\rho _3)\rho _3s}{8(\rho _1+7\rho _2+19\rho _3)^2R^2}+\frac{81(35\rho _2-8\rho _3)\rho _3s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{729(13\rho _2-4\rho _3)\rho _3s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187(2\rho _2-\rho _3)\rho _3s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6},$$
(267)
6. $`\frac{5}{3}R\le s\le 2R`$:
$$P_3(s)=\frac{2187\rho _3^2s^2}{(\rho _1+7\rho _2+19\rho _3)^2R^3}-\frac{6561\rho _3^2s^3}{4(\rho _1+7\rho _2+19\rho _3)^2R^4}+\frac{2187\rho _3^2s^5}{16(\rho _1+7\rho _2+19\rho _3)^2R^6}.$$
(268)
As in the previous case, the various functional forms for $`P_3(s)`$ are continuous across the boundaries separating the regions.
The $`3`$-shell model is of interest since actual models of neutron stars often invoke a $`3`$-shell picture. We note to start with that although our $`3`$-shell model assumes that all shells have the same thickness (shell radii of $`r`$, $`2r`$, and $`3r`$), we can relax this assumption by substituting $`r\to r_1`$, $`2r\to r_2`$, and $`3r\to r_3`$, where $`r_1`$, $`r_2`$, and $`r_3`$ are arbitrary numbers. Similarly, the densities $`\rho _1`$, $`\rho _2`$, and $`\rho _3`$ can also assume arbitrary values, so that the results of Eqs. (259)–(268) can be applied to any realistic neutron star model.
The preceding formalism can be extended to any number of shells. For a $`4`$-shell model define $`\rho =\rho _1`$ for $`0\le r\le R/4`$, $`\rho =\rho _2`$ for $`R/4\le r\le R/2`$, $`\rho =\rho _3`$ for $`R/2\le r\le 3R/4`$, and $`\rho =\rho _4`$ for $`3R/4\le r\le R`$. Here $`\rho _1`$, $`\rho _2`$, $`\rho _3`$, and $`\rho _4`$ are constants, and $`r`$ is measured from the center of the neutron star. The PDF has $`8`$ different functional forms specified by $`8`$ regions:
1. $`0\le s\le \frac{1}{4}R`$:
$$P_3(s)=\frac{192(\rho _1^2+7\rho _2^2+19\rho _3^2+37\rho _4^2)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(\rho _1^2-2\rho _1\rho _2+5\rho _2^2-8\rho _2\rho _3+13\rho _3^2-18\rho _3\rho _4+25\rho _4^2)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(\rho _1^2-2\rho _1\rho _2+2\rho _2^2-2\rho _2\rho _3+2\rho _3^2-2\rho _3\rho _4+2\rho _4^2)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(271)
2. $`\frac{1}{4}R\le s\le \frac{1}{2}R`$:
$$P_3(s)=-\frac{18(9\rho _1\rho _2-9\rho _2^2-9\rho _1\rho _3+34\rho _2\rho _3-25\rho _3^2-25\rho _2\rho _4+74\rho _3\rho _4-49\rho _4^2)s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(\rho _1^2+7\rho _1\rho _2-7\rho _1\rho _3+26\rho _2\rho _3-19\rho _2\rho _4+56\rho _3\rho _4)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(\rho _1^2+3\rho _1\rho _2-5\rho _1\rho _3+10\rho _2\rho _3-13\rho _2\rho _4+20\rho _3\rho _4)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(\rho _1^2-2\rho _1\rho _3+2\rho _2\rho _3-2\rho _2\rho _4+2\rho _3\rho _4)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(275)
3. $`\frac{1}{2}R\le s\le \frac{3}{4}R`$:
$$P_3(s)=\frac{18(9\rho _2^2-9\rho _1\rho _2-55\rho _1\rho _3+30\rho _2\rho _3+25\rho _3^2)s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{18(64\rho _1\rho _4-183\rho _2\rho _4+70\rho _3\rho _4+49\rho _4^2)s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(9\rho _1\rho _2-\rho _2^2+19\rho _1\rho _3-26\rho _1\rho _4+63\rho _2\rho _4)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(5\rho _1\rho _2-\rho _2^2+5\rho _1\rho _3-10\rho _1\rho _4+17\rho _2\rho _4)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(2\rho _1\rho _2-\rho _2^2-2\rho _1\rho _4+2\rho _2\rho _4)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(280)
4. $`\frac{3}{4}R\le s\le R`$:
$$P_3(s)=-\frac{18(64\rho _1\rho _3-39\rho _2\rho _3-25\rho _3^2+161\rho _1\rho _4-42\rho _2\rho _4-70\rho _3\rho _4-49\rho _4^2)s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(8\rho _2^2+28\rho _1\rho _3-9\rho _2\rho _3+37\rho _1\rho _4)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(4\rho _2^2+10\rho _1\rho _3-5\rho _2\rho _3+7\rho _1\rho _4)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(\rho _2^2+2\rho _1\rho _3-2\rho _2\rho _3)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(284)
5. $`R\le s\le \frac{5}{4}R`$:
$$P_3(s)=-\frac{18(25\rho _2\rho _3-25\rho _3^2+225\rho _1\rho _4-106\rho _2\rho _4-70\rho _3\rho _4-49\rho _4^2)s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(35\rho _2\rho _3-8\rho _3^2+65\rho _1\rho _4-28\rho _2\rho _4)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(13\rho _2\rho _3-4\rho _3^2+17\rho _1\rho _4-10\rho _2\rho _4)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(2\rho _2\rho _3-\rho _3^2+2\rho _1\rho _4-2\rho _2\rho _4)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(288)
6. $`\frac{5}{4}R\le s\le \frac{3}{2}R`$:
$$P_3(s)=-\frac{18(144\rho _2-95\rho _3-49\rho _4)\rho _4s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(27\rho _3^2+72\rho _2\rho _4-35\rho _3\rho _4)s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(9\rho _3^2+20\rho _2\rho _4-13\rho _3\rho _4)s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(\rho _3^2+2\rho _2\rho _4-2\rho _3\rho _4)s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(290)
7. $`\frac{3}{2}R\le s\le \frac{7}{4}R`$:
$$P_3(s)=-\frac{882(\rho _3-\rho _4)\rho _4s}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^2}+\frac{192(91\rho _3-27\rho _4)\rho _4s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{576(25\rho _3-9\rho _4)\rho _4s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768(2\rho _3-\rho _4)\rho _4s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6},$$
(292)
8. $`\frac{7}{4}R\le s\le 2R`$:
$$P_3(s)=\frac{12288\rho _4^2s^2}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^3}-\frac{9216\rho _4^2s^3}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^4}+\frac{768\rho _4^2s^5}{(\rho _1+7\rho _2+19\rho _3+37\rho _4)^2R^6}.$$
(294)
A statistically tested random number generator and a special simulation technique have been employed to numerically simulate the probability density functions, and the numerical results agree with our analytical results as discussed in Refs.
### E Geometric Probability Constants
We can also apply the preceding geometric probability techniques to evaluate the expectation values of $`\stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}`$ in $`n`$ dimensions, where $`\stackrel{}{r}_{12}=\stackrel{}{r}_2-\stackrel{}{r}_1`$, $`\stackrel{}{r}_{23}=\stackrel{}{r}_3-\stackrel{}{r}_2`$, and $`\stackrel{}{r}_1`$, $`\stackrel{}{r}_2`$ and $`\stackrel{}{r}_3`$ are three independent points produced randomly inside an $`n`$-dimensional uniform sphere of radius $`R`$. The quantity $`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n`$ is one of the geometric probability constants of interest and has important applications in physics. In $`3`$ dimensions,
$$\stackrel{}{r}_{12}\stackrel{}{r}_{23}_3=\left(x_2x_1\right)\left(x_3x_2\right)+\left(y_2y_1\right)\left(y_3y_2\right)+\left(z_2z_1\right)\left(z_3z_2\right).$$
(295)
Following Refs.,
$$\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n=-n\frac{\int _{-R}^Rx^2V(n-1,\sqrt{R^2-x^2})\,dx}{V(n,R)}=-\frac{n}{n+2}R^2,$$
(296)
where
$$V(n,R)=\frac{\pi ^{\frac{n}{2}}R^n}{\mathrm{\Gamma }\left(\frac{n}{2}+1\right)}.$$
(297)
We can verify Eq. (296) by applying the geometric probability techniques directly so that
$$\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n=-\frac{1}{2}\int _0^{2R}s^2P_n(s)\,ds=-\frac{n}{n+2}R^2.$$
(298)
Following is a short list of $`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n`$ for selected values of $`n`$:
$`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _1`$ $`=`$ $`-{\displaystyle \frac{1}{3}}R^2,`$ (299)
$`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _2`$ $`=`$ $`-{\displaystyle \frac{1}{2}}R^2,`$ (300)
$`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _3`$ $`=`$ $`-{\displaystyle \frac{3}{5}}R^2,`$ (301)
$`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _4`$ $`=`$ $`-{\displaystyle \frac{2}{3}}R^2,`$ (302)
$`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _5`$ $`=`$ $`-{\displaystyle \frac{5}{7}}R^2.`$ (303)
Notice that as $`n`$ becomes large $`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n\to -R^2`$. If we evaluate $`\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n`$ in an $`n`$-dimensional spherical space with a Gaussian density distribution as defined in Eq. (71) and radius $`R\to \mathrm{\infty }`$, then
$$\langle \stackrel{}{r}_{12}\cdot \stackrel{}{r}_{23}\rangle _n=-n\sigma ^2.$$
(304)
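A three-point Monte Carlo check of our own confirms both results; the uniform-ball case is sampled below, and the Gaussian case follows by replacing the ball sampling.

```python
import numpy as np

rng = np.random.default_rng(4)
N, R = 1_000_000, 1.0
for n in (1, 2, 3, 4, 5):
    v = rng.normal(size=(3 * N, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    p = (R * v * (rng.random(3 * N) ** (1 / n))[:, None]).reshape(3, N, n)
    dot = np.sum((p[1] - p[0]) * (p[2] - p[1]), axis=1)
    print(n, np.mean(dot), -n * R**2 / (n + 2))
# Gaussian case: draw p = rng.normal(scale=sigma, size=(3, N, n)) instead;
# the expectation then tends to -n * sigma**2.
```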
An interesting application of our formalism in biomedical physics is the calculation of the dynamic probability density function $`P(s,t)`$ in $`3`$ dimensions, as discussed in Ref.
## VI Conclusions
A new formalism has been presented in this paper for evaluating the analytical probability density function of distance between two randomly sampled points in an $`n`$-dimensional sphere with an arbitrary density. We illustrate the power of this new geometric probability technique by demonstrating how $`n`$-dimensional integrals can be reduced to $`1`$-dimensional integrals, even in the presence of $`r_c`$. We have shown that the classical result of Eq. (30) can be rederived from a simple intuitive picture, and that it can also be extended to an $`n`$-dimensional sphere with a non-uniform density distribution. Several density examples (Eqs. (63), (65), (99), (133), and (188)) were presented, and the analytical results were shown to be in agreement with Monte Carlo simulations. Applications to a variety of physical systems, such as neutron star models, have also been discussed.
## VII Acknowledgments
The authors wish to thank A. W. Overhauser, Michelle Parry, David Schleef, and Christopher Tong for helpful discussions. One of the authors (S. J. T.) also wishes to thank the Purdue University Computing Center for computing support. Conversations on numerical algorithms with Dave Seaman and Chinh Le are also acknowledged. This work was supported in part by the U.S. Department of Energy under Contract No. DE-AC02-76ER01428.
## A Geometry of Intersecting Circles
Here we briefly show why circles $`O_1`$ and $`O_2`$ are identical with the center of $`O_1`$ located at $`(0,0)`$ and $`O_2`$ located at $`(s,0)`$ as shown in Fig. 9. Following the discussion in Sec. II A, define a Cartesian coordinate system for points 1, 2, 3, and 4 as
$`(x_1,y_1)`$ $`=`$ $`(s/2,\sqrt{R^2-s^2/4}),`$ (A1)
$`(x_2,y_2)`$ $`=`$ $`(s/2,-\sqrt{R^2-s^2/4}),`$ (A2)
$`(x_3,y_3)`$ $`=`$ $`(s-R,\mathrm{\hspace{0.17em}0}),`$ (A3)
$`(x_4,y_4)`$ $`=`$ $`(R,\mathrm{\hspace{0.17em}0}).`$ (A4)
Assume that the equation for the circle $`O_2`$ is
$$(x-\alpha )^2+y^2=r^2,$$
(A5)
where $`(\alpha ,0)`$ is the center and $`r`$ is the radius. Inserting Eqs. (A1), (A2), and (A3) into Eq. (A5) expresses the fact that the circle $`O_2`$ contains points 1, 2, and 3. If $`s\le 2R`$, then the only solution is $`\alpha =s`$ and $`r=R`$. This result means that circles $`O_1`$ and $`O_2`$ have identical radii and the center of $`O_2`$ is located at $`(s,0)`$.
## B Recursion Relations of $`𝑷_𝒏\mathbf{(}𝒔\mathbf{)}`$
We present in this Appendix some recursion relations for the probability density functions $`P_n(s)`$ which follow from the results in the text.
$$P_n^{\prime }(s)=\frac{n-1}{s}P_n(s)-\frac{n}{B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}\frac{s^{n-1}}{R^{n+1}}\left(1-\frac{s^2}{4R^2}\right)^{\frac{n-1}{2}}.$$
(B1)
$$P_n^{\prime \prime }(s)=-\frac{n(n-1)}{s^2}P_n(s)+\frac{2(n-1)}{s}P_n^{\prime }(s)+\frac{n(n-1)}{4B(\frac{n}{2}+\frac{1}{2},\frac{1}{2})}\frac{s^n}{R^{n+3}}\left(1-\frac{s^2}{4R^2}\right)^{\frac{n-3}{2}}.$$
(B2)
$$\left(1-\frac{s^2}{4R^2}\right)P_n^{\prime \prime }(s)-\frac{n-1}{s}\left(2-\frac{3}{4}\frac{s^2}{R^2}\right)P_n^{\prime }(s)+\frac{n-1}{s^2}\left[n-\left(\frac{2n-1}{4}\right)\frac{s^2}{R^2}\right]P_n(s)=0.$$
(B3)
$$\left(1-\frac{s^2}{4R^2}\right)\left[P_n^{\prime \prime }(s)-\frac{2(n-1)}{s}P_n^{\prime }(s)+\frac{n(n-1)}{s^2}P_n(s)\right]-(n-1)\left(\frac{s^2}{4R^2}\right)\left[-\frac{1}{s}P_n^{\prime }(s)+\frac{n-1}{s^2}P_n(s)\right]=0.$$
(B5)
$$P_{n+2}(s)=\frac{n+2}{n}\frac{s^2}{R^2}P_n(s)-\frac{1}{\pi }\frac{(n+2)!!}{(n+1)!!}\frac{s^{n+2}}{R^{n+3}}\left(1-\frac{s^2}{4R^2}\right)^{\frac{n+1}{2}}(n=\mathrm{even}).$$
(B6)
$$2P_n(s)=\frac{n}{n+2}\frac{R^2}{s^2}P_{n+2}(s)+\frac{n}{n-2}\frac{s^2}{R^2}P_{n-2}(s)-\frac{1}{\pi }\frac{n!!}{(n+1)!!}\frac{s^n}{R^{n+1}}\left(1+n\frac{s^2}{4R^2}\right)\left(1-\frac{s^2}{4R^2}\right)^{\frac{n-1}{2}}(n=\mathrm{even}).$$
(B8)
$$P_{n+2}(s)=\frac{n+2}{n}\frac{s^2}{R^2}P_n(s)-\frac{1}{2}\frac{(n+2)!!}{(n+1)!!}\frac{s^{n+2}}{R^{n+3}}\left(1-\frac{s^2}{4R^2}\right)^{\frac{n+1}{2}}(n=\mathrm{odd}).$$
(B9)
$$2P_n(s)=\frac{n}{n+2}\frac{R^2}{s^2}P_{n+2}(s)+\frac{n}{n-2}\frac{s^2}{R^2}P_{n-2}(s)-\frac{1}{2}\frac{n!!}{(n+1)!!}\frac{s^n}{R^{n+1}}\left(1+n\frac{s^2}{4R^2}\right)\left(1-\frac{s^2}{4R^2}\right)^{\frac{n-1}{2}}(n=\mathrm{odd}).$$
(B11)
$$P_{n+2}(s)=\frac{n+2}{n}\frac{s^2}{R^2}P_n(s)-\frac{1}{B(\frac{n}{2}+\frac{3}{2},\frac{1}{2})}\frac{s^{n+2}}{R^{n+3}}\left(1-\frac{s^2}{4R^2}\right)^{\frac{n+1}{2}}.$$
(B12)
$$2P_n(s)=\frac{n}{n+2}\frac{R^2}{s^2}P_{n+2}(s)+\frac{n}{n-2}\frac{s^2}{R^2}P_{n-2}(s)-\frac{1}{n+2}\frac{1}{B(\frac{n}{2}+\frac{3}{2},\frac{1}{2})}\frac{s^n}{R^{n+1}}\left(1+n\frac{s^2}{4R^2}\right)\left(1-\frac{s^2}{4R^2}\right)^{\frac{n-1}{2}}.$$
(B14)
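Recursion (B12) can be spot-checked numerically (our own test) using the closed forms of $`P_1`$ and $`P_3`$ for a uniform sphere:

```python
import numpy as np
from scipy.special import beta as B

R, n = 1.0, 1
P1 = lambda s: (2*R - s) / (2*R**2)
P3 = lambda s: 3*s**2/R**3 - 9*s**3/(4*R**4) + 3*s**5/(16*R**6)

for s in np.linspace(0.1, 1.9, 5):
    rhs = ((n + 2)/n * s**2/R**2 * P1(s)
           - 1/B(n/2 + 1.5, 0.5) * s**(n + 2)/R**(n + 3)
             * (1 - s**2/(4*R**2))**((n + 1)/2))
    print(f"s={s:.2f}  P3={P3(s):.6f}  recursion={rhs:.6f}")
```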
# Matching Conditions on Capillary Ripples: Polarization
## I Introduction
The boundary conditions constitute the key feature in any theory of surface waves. It is through them that one introduces in the analysis the physical consequences of specific surface effects, such as local changes of mass or stresses, thus going beyond the simple case in which one merely matches two semi-infinite bulk media at an interface (which, in particular, can be a free surface).
In recent years, increased attention has been paid to the properties of capillary waves by physicists and chemists. Ripples represent one of the few cases in which the relation between the dynamical properties of a surface and liquid flows can be predicted completely. The study of capillary ripples has clarified which properties of a liquid surface determine the surface's resistance against deformation.
The boundary conditions for the stress at the interface are derived from the principle that the forces acting upon such an "interfacial" element result not only from viscous stresses in the liquid but also from stresses existing in the deformed interface. The cause of the difference is that an interface, unlike a three-dimensional liquid, cannot enjoy the property of incompressibility. Work done on an element of liquid is partially degraded into heat by viscous friction and partially transformed into kinetic energy which is transmitted to adjoining elements. Work done on an interfacial element leads, at least partially, to an increment of the surface potential energy. It is this potential energy of the deformed interface which enables the whole system, including the interface, to carry out an oscillatory motion.
In this article we first show a non-traditional way to obtain the matching conditions at the interface between two immiscible fluids at rest, by considering the equation of motion of the whole medium directly. University courses usually do not take this approach, stating instead the boundary conditions separately from the constitutive equations governing the problem. It is important from a pedagogical point of view to make evident that the matching conditions at the interfaces are in fact contained, in almost all cases, in the equation of motion for the whole system taken as the composition of all media. The exceptions arise when the fluid interfaces have intrinsic properties not included in the equations of the bulk media.
This point of view was already used in to elaborate a formalism, based on Surface Green Functions, which establishes an isomorphism between solids and fluids related to the interface normal modes. The aim of this paper is to emphasize the fact that this way of establishing the matching conditions allows us to introduce the physical characteristics related to the boundary conditions on a firmer footing.
## II Equations for the bulk in a viscous incompressible fluid.
To achieve a system of equations which describes the oscillatory motion of a viscous incompressible fluid, the starting point is the linearized Navier-Stokes equation for a viscous fluid, which reads:
$$\rho \frac{\partial V_i}{\partial t}=\frac{\partial \tau _{ij}}{\partial x_j}+\rho F_{i,\text{ext}}$$
(1)
where $`\rho `$ is the equilibrium density, $`V_i`$ are the components of the fluid velocity, $`F_{i,\text{ext}}`$ are the external forces and $`\tau _{ij}`$ is the stress tensor which, for a viscous incompressible fluid, has the following form
$$\tau _{ij}=-p\delta _{ij}+\eta \left(\frac{\partial V_i}{\partial x_j}+\frac{\partial V_j}{\partial x_i}\right)$$
(2)
Here $`p`$ denotes the small pressure variations, $`\eta `$ is the viscosity parameter and $`\delta _{ij}`$ are the unit matrix elements. The subscripts $`i`$ and $`j`$ take the values $`y`$ and $`z`$. In (1) the sum over repeated subscripts is understood. It has been assumed that the system is symmetric with respect to the $`x`$ direction.
Substituting (2) into (1) one obtains
$$-\rho \frac{\partial V_y}{\partial t}+\frac{\partial }{\partial y}\left(-p+2\eta \frac{\partial V_y}{\partial y}\right)+\frac{\partial }{\partial z}\left[\eta \left(\frac{\partial V_y}{\partial z}+\frac{\partial V_z}{\partial y}\right)\right]=0$$
(4)
$$-\rho \frac{\partial V_z}{\partial t}+\frac{\partial }{\partial y}\left[\eta \left(\frac{\partial V_z}{\partial y}+\frac{\partial V_y}{\partial z}\right)\right]+\frac{\partial }{\partial z}\left(-p+2\eta \frac{\partial V_z}{\partial z}\right)=0$$
(6)
where we have neglected the external forces.
Also the continuity equation is needed, which expresses that the volume of an element of the incompressible fluid does not change during the motion. It has the form
$$\frac{\partial V_y}{\partial y}+\frac{\partial V_z}{\partial z}=0$$
(7)
The first step is to obtain the equation of motion, and its solution, for each medium taken as infinite. The parameters such as density and viscosity are then constant throughout the medium, and equations (4) and (6) reduce to:
$$-\rho \frac{\partial V_y}{\partial t}-\frac{\partial p}{\partial y}+\eta \nabla ^2V_y=0$$ (8)
$$-\rho \frac{\partial V_z}{\partial t}-\frac{\partial p}{\partial z}+\eta \nabla ^2V_z=0$$ (9)
where $`\nabla ^2=(\partial ^2/\partial y^2+\partial ^2/\partial z^2)`$.
The solution of the system (7), (8) and (9), written as a velocity vector field, can be expressed as the sum of an irrotational field (related to the longitudinal mode) and a divergence-free field (related to the transverse modes), i.e.:
$$𝑽=𝑽_1+𝑽_2$$
(10)
which satisfy:
$`\nabla \times 𝑽_1`$ $`=`$ $`0`$ (11)
$`\nabla \cdot 𝑽_2`$ $`=`$ $`0`$ (12)
Any irrotational field is characterized by a scalar function, the “potential” function $`\phi (y,z,t)`$, such that
$$𝑽_1=\nabla \phi $$
(13)
and the divergence-free field can be described by a vector function, the "stream" or vorticity function $`\psi (y,z,t)`$, such that
$$𝑽_2=\left(-\frac{\partial \psi }{\partial z},\frac{\partial \psi }{\partial y}\right)$$
(14)
The velocity components can thus be written in terms of the potential and the stream functions, as:
$`V_y`$ $`=`$ $`{\displaystyle \frac{\partial \phi }{\partial y}}-{\displaystyle \frac{\partial \psi }{\partial z}}`$ (15)
$`V_z`$ $`=`$ $`{\displaystyle \frac{\partial \phi }{\partial z}}+{\displaystyle \frac{\partial \psi }{\partial y}}`$ (16)
Substitution of eqns. (15) and (16) in the continuity condition (7) gives
$$\nabla ^2\phi =0$$
(17)
and substitution into eqns. (8) and (9) leads to
$`{\displaystyle \frac{\partial }{\partial y}}\left\{-\rho {\displaystyle \frac{\partial \phi }{\partial t}}-p\right\}+{\displaystyle \frac{\partial }{\partial z}}\left\{\rho {\displaystyle \frac{\partial \psi }{\partial t}}-\eta \nabla ^2\psi \right\}`$ $`=`$ $`0`$ (18)
$`{\displaystyle \frac{\partial }{\partial z}}\left\{-\rho {\displaystyle \frac{\partial \phi }{\partial t}}-p\right\}-{\displaystyle \frac{\partial }{\partial y}}\left\{\rho {\displaystyle \frac{\partial \psi }{\partial t}}-\eta \nabla ^2\psi \right\}`$ $`=`$ $`0`$ (19)
Equations (18) and (19) are simultaneously satisfied if one considers:
$`-\rho {\displaystyle \frac{\partial \phi }{\partial t}}-p`$ $`=`$ $`C_1`$ (20)
$`\rho {\displaystyle \frac{\partial \psi }{\partial t}}-\eta \nabla ^2\psi `$ $`=`$ $`C_2`$ (21)
The constants $`C_1`$ and $`C_2`$ are obtained from the condition of zero flow, which gives rise to $`C_1=-p_o`$ and $`C_2=0`$, where $`p_o`$ is the reference atmospheric pressure.
The solutions of equations (17), (20) and (21) are sought in the form
$`\phi (y,z,t)`$ $`=`$ $`\mathrm{\Phi }(z)\text{e}^{i\left(\kappa y-\omega t\right)}`$ (22)
$`\psi (y,z,t)`$ $`=`$ $`\mathrm{\Psi }(z)\text{e}^{i\left(\kappa y-\omega t\right)}`$ (23)
where the parameters $`\kappa `$ and $`\omega `$ are the wavevector and frequency of the wave respectively.
Substituting eqns. (22) and (23) in (17) and (21) gives the $`z`$-dependence of the functions $`\phi `$ and $`\psi `$, which satisfy:
$`{\displaystyle \frac{\text{d}^2\mathrm{\Phi }(z)}{\text{d}z^2}}-\kappa ^2\mathrm{\Phi }(z)`$ $`=`$ $`0`$ (24)
$`{\displaystyle \frac{\text{d}^2\mathrm{\Psi }(z)}{\text{d}z^2}}-\text{q}_\text{t}^2\mathrm{\Psi }(z)`$ $`=`$ $`0`$ (25)
with $`\text{q}_\text{t}^2=\kappa ^2-i\rho \omega /\eta `$
Equations (24) and (25) lead to the solutions:
$`\mathrm{\Phi }(z)`$ $`=`$ $`C_1\text{e}^{\kappa z}+C_2\text{e}^{-\kappa z}`$ (26)
$`\mathrm{\Psi }(z)`$ $`=`$ $`C_3\text{e}^{\text{q}_\text{t}z}+C_4\text{e}^{-\text{q}_\text{t}z}`$ (27)
and in combination with eqns. (22) and (23) they give a solution of the form:
$`\phi (y,z,t)`$ $`=`$ $`\left(C_1\text{e}^{\kappa z}+C_2\text{e}^{-\kappa z}\right)\text{e}^{i\left(\kappa y-\omega t\right)}`$ (28)
$`\psi (y,z,t)`$ $`=`$ $`\left(C_3\text{e}^{\text{q}_\text{t}z}+C_4\text{e}^{-\text{q}_\text{t}z}\right)\text{e}^{i\left(\kappa y-\omega t\right)}`$ (29)
where $`C_1,C_2,C_3`$ and $`C_4`$ are constants to be determined by boundary and matching conditions.
Then, the expressions for the varying velocity components and the pressure are finally obtained by substitution of eqns. (28) and (29) into (15), (16) and (20), respectively.
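As a consistency check of our own, the following SymPy snippet verifies that the $`z`$-growing pieces of (28) and (29) indeed satisfy Eq. (17) and Eq. (21) with $`\text{q}_\text{t}^2=\kappa ^2-i\rho \omega /\eta `$; both printed expressions should reduce to zero.

```python
import sympy as sp

y, z, t = sp.symbols('y z t', real=True)
kappa, omega, rho, eta, C = sp.symbols('kappa omega rho eta C', positive=True)
qt = sp.sqrt(kappa**2 - sp.I*rho*omega/eta)

phase = sp.exp(sp.I*(kappa*y - omega*t))
phi = C * sp.exp(kappa*z) * phase      # potential function, Eq. (28)
psi = C * sp.exp(qt*z) * phase         # stream function, Eq. (29)

laplace = sp.diff(phi, y, 2) + sp.diff(phi, z, 2)                              # Eq. (17)
vortic = rho*sp.diff(psi, t) - eta*(sp.diff(psi, y, 2) + sp.diff(psi, z, 2))   # Eq. (21)
print(sp.simplify(laplace), sp.simplify(vortic))
```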
## III Interface problem: matching conditions.
Now we will match the two media. We consider a surface which at rest coincides with the plane $`z=0`$ and it separates medium $`M_1`$ at $`z<0`$ from medium $`M_2`$ at $`z>0`$. Each one is viscous and incompressible.
First of all, any solution has to fulfill the continuity of the velocity field across the surface according to
$`V_y^{(1)}`$ $`=`$ $`V_y^{(2)}\text{ at }z=0`$ (30)
$`V_z^{(1)}`$ $`=`$ $`V_z^{(2)}\text{ at }z=0`$ (31)
Superscript $`1`$ denotes medium $`M_1`$ and superscript $`2`$ denotes medium $`M_2`$. The rest of the matching conditions at the interface are derived from the system of equations which governs the whole system. This is not usually done in standard university courses, and we consider it important to state that the matching conditions are, in almost all cases, contained in the equation of motion for the whole system taken as the composition of the two media. These are eqns. (4) and (6), where the parameters $`\eta `$ and $`\rho `$ are constant but take different values in each medium. Integrating these equations through the surface about $`z=0`$, from $`-ϵ`$ to $`+ϵ`$, and then taking $`ϵ\to 0`$, it is obtained from eq. (4)
$$\left[\eta _1\left(\frac{\partial V_y^{(1)}}{\partial z}+\frac{\partial V_z^{(1)}}{\partial y}\right)\right]_{z=0^{-}}=\left[\eta _2\left(\frac{\partial V_y^{(2)}}{\partial z}+\frac{\partial V_z^{(2)}}{\partial y}\right)\right]_{z=0^{+}}$$
(33)
and from eq. (6)
$$\left[-p_1+2\eta _1\frac{\partial V_z^{(1)}}{\partial z}\right]_{z=0^{-}}=\left[-p_2+2\eta _2\frac{\partial V_z^{(2)}}{\partial z}\right]_{z=0^{+}}-p_\gamma $$
(35)
where $`p_\gamma `$ is the jump due to the surface tension according to the Laplace law. The other terms in eqns. (4) and (6) vanish when $`ϵ\to 0`$ because they are continuous or have a finite jump at $`z=0`$.
It is easily seen according to eq. (2) that eqns. (33) and (35) are the conditions of the continuity of the stress tensor components through the interface, as expected.
On the other hand, the boundary conditions of the problem are regularity at $`z\to \pm \mathrm{\infty }`$, which leads to the elimination of 4 of the 8 constants appearing in (28) and (29) for the two media. Using the resulting functions $`\phi `$ and $`\psi `$ for each medium in conditions (30)-(35), the following system is obtained:
$$\left(\begin{array}{cccc}i\kappa & -i\kappa & -\text{q}_{\text{t1}}& -\text{q}_{\text{t2}}\\ 1& 1& i& -i\\ 2i\kappa ^2\eta _1& 2i\kappa ^2\eta _2& -\eta _1Q_{t1}& \eta _2Q_{t2}\\ d_{41}& -\eta _2Q_{t2}& d_{43}& 2i\kappa \eta _2\text{q}_{\text{t2}}\end{array}\right)\left(\begin{array}{c}C_1\\ C_2\\ C_3\\ C_4\end{array}\right)=0$$
(36)
to determine the remaining constants, with the definitions $`d_{41}=\eta _1Q_{t1}-i\mathrm{{\rm Y}}`$, $`d_{43}=2i\kappa \eta _1\text{q}_{\text{t1}}-\mathrm{{\rm Y}}`$, $`\mathrm{{\rm Y}}=\gamma \kappa ^3/\omega `$, $`Q_{t1}=2\kappa ^2-i\rho _1\omega /\eta _1`$ and $`Q_{t2}=2\kappa ^2-i\rho _2\omega /\eta _2`$.
In deducing the expression in system (36) which comes from (35) it was considered that
$$p_\gamma =\gamma \frac{\partial ^2z}{\partial y^2}$$
(37)
on the surface; since we are dealing with the velocity field, it follows that:
$$\frac{\partial p_\gamma }{\partial t}=\gamma \frac{\partial ^2V_z}{\partial y^2}$$
(38)
which finally leads to:
$$p_\gamma =\frac{\gamma \kappa ^2}{i\omega }\left(\kappa C_1+i\kappa C_3\right)\text{e}^{i\left(\kappa y-\omega t\right)}$$
(39)
according to the substitutions $`\partial /\partial t\to -i\omega `$ and $`\partial ^2/\partial y^2\to -\kappa ^2`$
The vanishing of the determinant of (36) leads to the equation for the dispersion relation of the existing surface modes. It can be written as:
$$\omega ^2\left[(\rho _1+\rho _2)(\rho _1\text{q}_{\text{t2}}+\rho _2\text{q}_{\text{t1}})-\kappa (\rho _1-\rho _2)^2\right]+\gamma \kappa ^3\left[\rho _1(\kappa -\text{q}_{\text{t2}})+\rho _2(\kappa -\text{q}_{\text{t1}})\right]+4\kappa ^3(\eta _2-\eta _1)^2(\kappa -\text{q}_{\text{t1}})(\kappa -\text{q}_{\text{t2}})+4i\kappa ^2\omega (\eta _2-\eta _1)(\rho _1\kappa -\rho _2\kappa -\rho _1\text{q}_{\text{t2}}+\rho _2\text{q}_{\text{t1}})=0$$
(43)
This is the dispersion relation for capillary waves for the interface of two viscous incompressible fluids.
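Although the paper does not pursue the roots of (43), they are straightforward to obtain numerically. The following sketch of ours uses mpmath's complex root finder, arbitrary illustrative parameter values, and the inviscid capillary result $`\omega ^2=\gamma \kappa ^3/(\rho _1+\rho _2)`$ (the limit of (43) as $`\eta _1,\eta _2\to 0`$) as a starting guess.

```python
import mpmath as mp

rho1, rho2, eta1, eta2, gam, kappa = 1.0, 0.8, 0.01, 0.02, 0.05, 2.0

def disp(omega):  # left-hand side of the dispersion relation (43)
    qt1 = mp.sqrt(kappa**2 - 1j*rho1*omega/eta1)
    qt2 = mp.sqrt(kappa**2 - 1j*rho2*omega/eta2)
    return (omega**2*((rho1 + rho2)*(rho1*qt2 + rho2*qt1) - kappa*(rho1 - rho2)**2)
            + gam*kappa**3*(rho1*(kappa - qt2) + rho2*(kappa - qt1))
            + 4*kappa**3*(eta2 - eta1)**2*(kappa - qt1)*(kappa - qt2)
            + 4j*kappa**2*omega*(eta2 - eta1)*(rho1*kappa - rho2*kappa
                                               - rho1*qt2 + rho2*qt1))

omega0 = mp.sqrt(gam*kappa**3/(rho1 + rho2))   # inviscid capillary frequency
print(mp.findroot(disp, omega0*(1 - 0.1j)))    # complex root: frequency + damping
```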
In order to obtain the constants $`C_1`$, $`C_2`$, $`C_3`$ and $`C_4`$, an initial stimulus is needed, as in an initial value problem; but this method, simple at the beginning, becomes rather complicated later and is not suitable for a general study.
As was said in the introduction, the aim of this paper is not to study in depth the dispersion relation of the normal modes at the interface of two viscous fluids at rest. For a better study of this subject we recommend paper . We have also shown a way of solution different from the usual one, in which the velocity vector is written in terms of a single stream function. From now on, we will focus on the matching conditions, and we will show that more than purely mathematical information about the matching can be found in them: the physics of the polarization can also be deduced, as well as how the interface moves as it oscillates.
More information about the polarization of the modes at the interface can be obtained from eqns. (33) and (35). This will be done in the next section.
## IV Matching conditions and polarization.
There are two possible modes on the fluid: one in which the fluid particle moves in the direction of the wave propagation called longitudinal with notation L($`V_y`$,$`0`$) and another transverse to the direction of the wave propagation and normal to the interface with notation TN($`0`$,$`V_z`$).
The longitudinal mode is related to the fluid compressibility because this motion of the fluid particles is only possible when their volume changes. This analysis also holds when the mode is on the interface, but this does not mean that there are no longitudinal modes of oscillation on the interface when the fluids involved are incompressible. It has been shown in that the interface, when oscillating, must be considered as a compressible one, because precisely its change in area is responsible for the increase of its potential energy and therefore for its oscillation. It is important to state that this is a fundamental argument to understand the movement of any interface in hydrodynamics.
Nevertheless, it can now be shown that, in spite of the compressibility of the interface, no pure longitudinal mode exists if we are dealing with incompressible media. Let us demonstrate this.
If a point $`y_o`$ on the interface is considered moving with velocity, say $`V_{yo}^S`$, in the $`y`$-axis direction, then, according to the continuity of velocity, the point ($`y_o`$,$`-ϵ`$) in $`M_1`$ and the point ($`y_o`$,$`+ϵ`$) in $`M_2`$ must have the same velocity $`V_{yo}^S`$ if $`ϵ`$ is small enough. As the interface is compressible, at a point $`y_1`$ near enough to $`y_o`$ the velocity can be, for instance, $`V_{y1}^S`$, different in general from $`V_{yo}^S`$; but as the media are incompressible, at the points ($`y_1`$,$`-ϵ`$) and ($`y_1`$,$`+ϵ`$) the velocity must be $`V_{yo}^S`$. See Fig. 1. This is not in conformity with the continuity of the velocity through the surface, and hence the pure longitudinal mode is not possible; only the TN mode seems to be valid when the media are incompressible.
After the above demonstration, the student may keep the idea that the interface oscillations can only occur along the $`z`$-axis. This is the right moment to show the student that things are not always as they seem, because that assumption does not take into account the different properties of each medium, whose response depends on its fundamental parameters, such as density and viscosity, which are different for each medium. Hence, it is evident that the interface matching conditions themselves must be analyzed.
It is useful to compare and to support the previous qualitative analysis with a quantitative and more profound one regarding the interface matching conditions.
Recalling carefully eqns. (33) and (35) and supposing that such a wave propagates in $`y`$ direction with movement only in $`z`$ direction (TN mode), then $`V_y^{(1)}=V_y^{(2)}=0`$ and eq. (33) becomes
$$\eta _1\frac{\partial V_z^{(1)}}{\partial y}|_{z=0^{-}}=\eta _2\frac{\partial V_z^{(2)}}{\partial y}|_{z=0^{+}}$$
(44)
It is known that $`V_z`$ is continuous along the interface at all points. The derivative with respect to $`y`$ is then also the same on both sides of (44), and this expression only holds if $`\eta _1=\eta _2`$, i.e., if the media have the same viscosity. This does not mean that the interface disappears, because the densities of the two media can still be different. Thus, if the viscosities of the media do not have the same value, the velocity component $`V_y`$ along the interface must be different from zero to compensate the inequality (44) and to fulfill the continuity of the stress tensor in the $`y`$ direction given by eq. (33), yielding a component of movement along the $`y`$ direction. This mode will be called the Sagittal mode, or S$`(V_y,V_z)`$.
The above analysis was done for the general case. Now we are able to take the particular case in which one of the media is vacuum, for instance, $`M_2`$, with $`\eta _2=0`$. Then, condition (33) becomes
$$\left[\eta _1\left(\frac{\partial V_y^{(1)}}{\partial z}+\frac{\partial V_z^{(1)}}{\partial y}\right)\right]_{z=0^{-}}=0$$
(45)
and it can be seen that $`V_y`$ must also be nonzero on the surface for eq. (45) to hold, with the corresponding Sagittal polarization movement.
With this analysis of the continuity conditions for the stress components in the $`y`$ direction along the interface, it can be seen that if at least one of the two media is viscous, the fluid particles of the interface move in a Sagittal mode which combines movement in both directions: along the wave propagation in the $`y`$ direction, and normal to the interface in the $`z`$ direction.
Then, it is qualitatively clear that the viscosity of each medium plays a fundamental role in the coupling of modes even for incompressible fluids. In spite of that, it would be a mistake to say that the modes decouple if the viscosities are equal. It should not be forgotten that eq. (35) is also important in the characterization of the behaviour of the interface particles, and it includes the pressure on each side of the surface. According to eq. (20), the pressure is associated with the longitudinal mode and with the inertial effect of the fluid particle through the density of the media. It therefore contains information on both components of the velocity and also on the density, and according to eq. (28) the pressure has a jump through the interface. This result, in combination with the analysis of eq. (33), makes it difficult to understand the role played by the densities of each medium in the surface polarization movement, and it cannot be settled from this analysis alone. This point is still a matter of investigation.
## V Conclusions
The present work is an attempt to give an example, using hydrodynamics, of how the study of the interface matching conditions allows a plentiful discussion, rich in details; moreover, of how the interface matching conditions contain sufficient information to conclude that the interface oscillation must be a Sagittal mode, and neither a pure longitudinal nor a pure transverse one. This movement has been shown to be closely related to the physical properties of the media, such as the viscosities, and that fact allows us to establish rigorously that those are the parameters which characterize the interface movement and the response of each medium to a stimulus coming from the other one.
It was seen that viscosity is the main parameter in the coupling of the two modes into a Sagittal one; nevertheless, within the framework of this formalism it is difficult to determine the respective roles of the viscosity and density ratios in the coupling of the modes. This aspect needs further investigation.
# Switching and symmetry breaking behaviour of discrete breathers in Josephson ladders
(April 4, 2000)
We investigate the roto-breathers recently observed in experiments on Josephson ladders subjected to a uniform transverse bias current. We describe the switching mechanism in which the number of rotating junctions increases. In the region close to switching we find that frequency locking, period doubling, quasi-periodic behaviour and symmetry breaking all occur. This suggests that a chaotic dynamic occurs in the switching process. Close to switching the induced flux increases sharply and clearly plays an important role in the switching mechanism. We also find three critical frequencies which are independent of the dissipation constant $`\alpha `$, provided that $`\alpha `$ is not too large.
The recent discovery of discrete breathers in Josephson ladder arrays (Fig. 1) driven by a transverse dc bias current has shown not only that these localised excitations exist but that they exhibit remarkable behaviour, which is the subject of this paper.
The Josephson ladder has been studied theoretically for many years. In the absence of a driving current the interaction between vortices has been found to be exponential and this leads to the vortex density exhibiting a devil's staircase as the magnetic field is increased. Quantum fluctuations, meta-stable states and inductance effects have been studied, as has the interaction of vortices with a transverse dc bias current.
In the case of a Josephson ladder, a roto-breather is a stable group of vertical junctions rotating together ($`\theta _j^{\prime }-\theta _j\approx \omega t`$, see Fig. 1). Until recently, the interest has focussed on the case of a sinusoidal bias current, where it has been shown that roto-breather solutions exist and that a discrete breather may even include a chaotic trajectory.
Roto-breathers should also be stable in the experimentally easier case of a ladder array subjected to a uniform dc transverse driving current. The case of rotation occurring at a single vertical junction has been studied numerically. Two experimental groups have now independently confirmed the existence of such solutions and discovered further unpredicted behaviour. In an annular ladder (Fig. 1a) and in a linear ladder (Fig. 1b) breathers were observed with various numbers of “vertical” (i.e. radial) junctions rotating. However the most interesting features of the observations were the switching between breathers and their symmetry breaking behaviour. Once a breather has been initialised it is stable, but if the bias current is slowly decreased then, at some critical value of the current, the number of rotating junctions switches to a larger number and a new breather is formed. Furthermore, the new breather may have a different centre of symmetry from the old breather despite the symmetry of the ladder itself and the symmetry of the driving currents about a single vertical junction; this is particularly remarkable in the case of the annular ladder where translational invariance should be exact. The production of breathers with an even number of rotating junctions is, by itself, a demonstration of symmetry breaking. The symmetry breaking effect was not mentioned by the experimenters, presumably because it could be due to experimental imperfections. However we show that it should also arise in a perfect experiment, the apparent mechanism being the occurrence of chaotic dynamics in the switching region.
First we construct a new model, similar to but significantly different from those proposed previously. The current through a junction is determined by the RCSJ model
$$\frac{I}{I_c}=\frac{d^2\phi }{dt^2}+\alpha \frac{d\phi }{dt}+\mathrm{sin}\phi $$
(1)
where $`I_c`$ is the critical current and
$$\phi =\mathrm{\Delta }\theta -\frac{2\pi }{\mathrm{\Phi }_0}\int 𝐀\cdot \mathrm{d}𝐥$$
(2)
where $`\mathrm{\Delta }\theta `$ is the change in superconducting order parameter $`\theta `$ across the junction and $`𝐀`$ is the vector potential. The “vertical” (i.e. radial) junctions may differ in area from the “horizontal” junctions by an anisotropy parameter $`\eta =I_{ch}/I_{cv}=C_h/C_v=R_v/R_h`$ where $`I_{ch}`$ ($`I_{cv}`$), $`C_h`$ ($`C_v`$) and $`R_h`$ ($`R_v`$) are, respectively, the critical current, capacitance and resistance of a horizontal (vertical) junction. From Fig. 1a we see that there are three unknowns per plaquette: $`\theta _j`$, $`\theta _j^{}`$ and $`f_j`$, where $`f_j`$ is the total flux threading the $`j`$th plaquette. To solve for these unknowns we construct three equations per plaquette as follows. The first two equations are obtained from current conservation at the top (inner) and bottom (outer) rails. The third equation is obtained by making the approximation that the induced flux $`f_jf_a`$ (where $`f_a`$ is the applied flux) is produced solely by the currents flowing around the immediate perimeter of the $`j`$th plaquette (Fig. 1c):
$$\frac{f_j-f_a}{\mathrm{\Phi }_0}=\frac{\beta _L}{8\pi }\left(I_j^v+I_j^h-I_{j+1}^v-I_j^{h^{\prime }}\right)$$
(3)
where $`\beta _L`$ is an inductance parameter. This last equation makes the plausible assumption that the plaquettes are more or less square and that it is only the currents flowing around the plaquette that produce the induced field. This differs from, and for square plaquettes is more accurate than, the common assumption that the induced field is proportional to the loop (or “mesh”) current circulating the plaquette.
We impose the initial conditions, mimicking the experiments, that at $`t=0`$, $`\theta _j=\theta _j^{\prime }=0`$, $`f_j=0`$ and $`I_j=0`$ for all $`j`$. This means that $`\theta _j+\theta _j^{\prime }=0`$ for all time, i.e. there is a symmetry between the inner and outer rails. Using the Landau gauge we then have the following pair of coupled differential equations for each plaquette:
$$\left\{\frac{d^2}{dt^2}+\alpha \frac{d}{dt}\right\}\left(-\phi _{j-1}+4\phi _j-\phi _{j+1}\right)=2I_j+\mathrm{sin}\phi _{j-1}-4\mathrm{sin}\phi _j+\mathrm{sin}\phi _{j+1}+\frac{8\pi }{\beta _L}(f_{j-1}-f_j)$$
(4)
$$\left\{\frac{d^2}{dt^2}+\alpha \frac{d}{dt}\right\}\left(2\pi \eta f_j+(1-\eta )(\phi _{j+1}-\phi _j)\right)=\mathrm{sin}\phi _j-\mathrm{sin}\phi _{j+1}+\frac{8\pi }{\beta _L}(f_j-f_a)+2\eta \mathrm{sin}\chi _j$$
(5)
where $`\phi _j=\theta _j^{\prime }-\theta _j`$ and $`\chi _j=\frac{1}{2}(\phi _{j+1}-\phi _j+2\pi f_j)`$.
We now focus on determining whether or not our model exhibits the interesting switching and symmetry breaking behaviour observed in the data of Binder $`et`$ $`al`$. All parameter values are chosen to mimic the experimental setup i.e. $`\eta =0.44`$, $`\beta _L=2.7`$, $`\alpha =0.07`$ and $`f_a=0`$. Let
$$I_j=\{\begin{array}{cc}I_B+I_\mathrm{\Delta }\hfill & \text{ if }j=0\hfill \\ I_B\hfill & \text{ otherwise}\hfill \end{array}$$
(6)
where $`I_B`$ is called the bias current. Again following the experiment (Fig. 1a), we slowly increase $`I_\mathrm{\Delta }`$ while keeping $`I_B=0`$ until rotation starts at site $`j=0`$. $`I_\mathrm{\Delta }`$ is then slowly decreased while at the same time increasing $`I_B`$ to keep $`I_0`$ constant. When $`I_B`$ has reached the desired value it is then held fixed while $`I_\mathrm{\Delta }`$ is slowly reduced to zero. Finally $`I_B`$ is slowly reduced while keeping $`I_\mathrm{\Delta }=0`$. Note that the dynamical equations, initial conditions and injected currents are exactly symmetrical about site $`j=0`$. One would expect only solutions which are also symmetrical about $`j=0`$. However, like the experiments, the simulations also produce breathers which are not symmetrical about $`j=0`$. Fig. 2 shows how the number $`N_R`$ of rotating junctions changes as the bias current $`I_B`$ is slowly reduced from various starting values. In agreement with experiment, breathers with even $`N_R`$ are commonly produced (these cannot be symmetrical about $`j=0`$), and $`N_R`$ switches to larger and larger values until eventually all junctions are rotating. While we have found that similar behaviour occurs in a previously published model, our model gives considerably better agreement with the experimentally observed switching currents, thus indicating the importance of inductance effects in the switching mechanism. We believe the origin of the symmetry breaking is the occurrence of chaotic dynamics in the switching region.
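A minimal integration sketch of our own for the reconstructed equations (4) and (5) on an annular ladder with periodic indexing is given below; the slow current ramps of the full experimental protocol are not reproduced here, only a static bias with extra current at $`j=0`$. The auxiliary combinations $`w_1=-\phi _{j-1}+4\phi _j-\phi _{j+1}`$ and $`w_2=2\pi \eta f_j+(1-\eta )(\phi _{j+1}-\phi _j)`$ obey $`\ddot{w}+\alpha \dot{w}=\mathrm{RHS}`$, so $`\phi `$ and $`f`$ are recovered by linear solves at each step.

```python
import numpy as np
from scipy.integrate import solve_ivp

Np, eta, betaL, alpha, fa = 10, 0.44, 2.7, 0.07, 0.0
I = np.full(Np, 0.6); I[0] += 0.4               # bias plus extra current at j = 0

L = 4*np.eye(Np) - np.roll(np.eye(Np), 1, 0) - np.roll(np.eye(Np), -1, 0)
Linv = np.linalg.inv(L)                          # eigenvalues 4 - 2cos(2 pi k/Np) > 0
D = np.roll(np.eye(Np), -1, 0) - np.eye(Np)      # (D phi)_j = phi_{j+1} - phi_j

def recover(w1, w2):
    phi = Linv @ w1
    f = (w2 - (1 - eta)*(D @ phi)) / (2*np.pi*eta)
    return phi, f

def rhs(t, y):
    w1, w2, v1, v2 = y.reshape(4, Np)
    phi, f = recover(w1, w2)
    sphi = np.sin(phi)
    chi = 0.5*(np.roll(phi, -1) - phi + 2*np.pi*f)
    r1 = (2*I + np.roll(sphi, 1) - 4*sphi + np.roll(sphi, -1)
          + (8*np.pi/betaL)*(np.roll(f, 1) - f))                  # Eq. (4)
    r2 = (sphi - np.roll(sphi, -1) + (8*np.pi/betaL)*(f - fa)
          + 2*eta*np.sin(chi))                                    # Eq. (5)
    return np.concatenate([v1, v2, r1 - alpha*v1, r2 - alpha*v2])

sol = solve_ivp(rhs, (0, 100), np.zeros(4*Np), rtol=1e-8, atol=1e-8)
phi, f = recover(sol.y[:Np, -1], sol.y[Np:2*Np, -1])
print(phi[:3], f[:3])
```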
The chaotic nature of the switching is further suggested by the fact that when the same computer code which produced Fig. 2 is performed on a different computer (different floating point processor) we see significant changes in the switching behaviour of nearly all trajectories although the overall pattern remains identical. The minute differences in the handling of floating point numbers are amplified in the chaotic regime to produce significantly altered trajectories.
Fig. 3 shows the voltage-current characteristics obtained from the same simulations used in producing Fig. 2. At high frequencies ($`V=d\phi /dt>4.3`$) most of the breathers show purely resistive behaviour (i.e. $`V=d\phi /dt=I_BR`$), the value of the resistance $`R`$ increasing with $`N_R`$ according to an expression deduced from experiment:
$$\alpha R\simeq 1/(1+\eta /N_R)$$
(7)
This is the relationship expected if the current through each rotating vertical junction is $`\alpha V`$ and the four horizontal junctions surrounding the rotating region each carry current $`\frac{1}{2}\eta \alpha V`$ (to satisfy Kirchhoff's rule). For $`N_R=1`$ this expression was derived in ref.
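For the experimental anisotropy $`\eta =0.44`$, Eq. (7) predicts the following resistance plateaus (our own quick evaluation, dimensionless units), showing how $`\alpha R`$ grows toward unity as $`N_R`$ increases:

```python
eta = 0.44
for NR in (1, 2, 3, 5, 9):
    print(NR, 1.0 / (1.0 + eta / NR))   # alpha * R for a breather of N_R junctions
```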
A typical example of a resistive breather (far away from any switching region) is shown in Fig. 4. Although the breather is not symmetric about $`j=0`$ it shows exact symmetry about the midpoint of the rotating region and also appears to be exactly periodic, the period being two revolutions of a vertical junction (we call this “period 2”). The non-rotating junctions oscillate, the amplitude decreasing exponentially with distance from the rotating region. Far away from switching the magnitude of the flux $`f_j`$ is everywhere small ($`<0.1`$) and is limited to the edges of the rotating region.
Fig. 3 shows that there are at least two critical frequencies, $`d\phi /dt=6.5`$ and $`d\phi /dt=5.0`$, at which breathers become unstable and switching occurs. As the current is reduced the frequency falls until it reaches the critical value, at which point the breather becomes unstable and switches to a larger breather with a larger resistance (Eq. (7)) and therefore a higher frequency. The process then repeats as the current continues to be ramped down. We identify the larger of these two critical frequencies with that observed experimentally. Although, at fixed $`I_B`$, the frequency of rotation depends on the dissipation constant $`\alpha `$, we find that these two critical frequencies are more or less independent of $`\alpha `$, as is the critical frequency below which breathers cease to show constant resistance. This of course breaks down when the bias current required to achieve the upper critical frequency exceeds unity, i.e. when $`\alpha >0.15`$.
Note also that in the above simulations: (i) no breathers are seen for $`I_B>0.85`$, (ii) only the single site breather is seen for $`0.68<I_B<0.85`$ and (iii) all rotating junctions normally stop together when the current $`I_B`$ is reduced below 0.17, although sometimes a new breather containing only a few rotating junctions is produced which is then destroyed when $`I_B`$ falls below 0.14.
As well as resistive period 2 breathers we have found frequency-locked breathers. In fact the usual $`N_R=1`$ resistive breather makes a transition to a frequency-locked breather as the bias current is reduced below $`I_B=0.699`$. At this point the period doubles, and the breather becomes asymmetric and jumps to a higher rotation frequency $`d\theta /dt=6.32`$. At $`I_B=0.6895`$ the period doubles again and at $`I_B=0.6892`$ it becomes quasi-periodic (Fig. 5). Switching to multi-site breathers occurs at $`I_B\approx 0.6891`$. Such frequency-locked breathers have not been reported experimentally but have probably been overlooked, as they are only stable in narrow current intervals.
While the maximum flux $`f^{max}`$ is normally small ($`f^{max}<0.1`$), it increases sharply at switching, to $`f^{max}\sim 1`$. It would appear that an important part of the switching mechanism is the instability caused by having a large flux concentrated in a small area. The importance of the flux in the switching process is further confirmed by the fact that when $`I_B`$ has finally been reduced all the way to zero we find that the ladder may contain one or more stable vortex-antivortex pairs.
The occurrence of frequency locking, period doubling, quasi-periodic behaviour and symmetry breaking as switching is approached strongly suggests that the origin of the symmetry breaking is the occurrence of chaotic dynamics in the switching region. From the theory of chaotic dynamics it is known that two or more coupled Josephson junctions (or, equivalently, two or more coupled pendula) may exhibit three types of motion: oscillatory, rotational and chaotic. Chaotic motion arises for particular initial conditions. An analogous situation arises in non-linear coupled lattices, which also display both breathers and chaotic dynamics. In a large array of coupled Josephson junctions (or, analogously, an array of coupled pendula) we must also expect these three types of motion. While roto-breathers are the most characteristic stable solutions, it appears that the switching between breathers occurs only in the chaotic regime. Of course, the chaotic dynamics has an influence on the selection of breather type. Any small difference in the initial conditions leads to a completely different breather. Thus, in this chaotic regime, any small experimental imperfections or small inaccuracies in a computer simulation may lead to different chaotic trajectories and hence different roto-breathers. Symmetry breaking arises if the perturbation itself breaks symmetry.
We conclude that our model exhibits most of the main features of the roto-breathers recently observed in Josephson ladder arrays, sheds light on the switching mechanism and predicts further observable behaviour. The occurrence of frequency locking, period doubling, quasi-periodic behaviour and symmetry breaking in the switching region suggests that switching occurs in the chaotic regime. We find that the maximum flux increases sharply in the switching region and that the flux clearly plays an important role in the switching mechanism. We also find two critical switching frequencies and a critical frequency below which no resistive behaviour is observed. All three frequencies are approximately independent of the dissipation constant $`\alpha `$ (for $`\alpha <0.15`$).
We are grateful to A.V. Ustinov and P. Binder for kindly giving us their data and explaining it prior to publication. We are also grateful for discussions with M.V. Fistul and S. Flach and for the hospitality of the Max-Planck-Institut, Dresden and the Universität Erlangen-Nürnberg.
# Fast fluctuating fields as the source of low-frequency conductance fluctuations in many-electron systems and failure of quantum kinetics
## I Introduction
The electrical 1/f-noise (flicker noise) was discovered in 1925. Low-frequency 1/f-noise has since been found in a variety of physical, chemical, biological, cosmic, geophysical and social systems \[1-5\], but there is still no conventional explanation of this phenomenon. In electronics, 1/f conductance fluctuations frequently limit the quality of real devices. Importantly, 1/f-noise is much more sensitive to the mesoscopic structure of conductors, as well as to external influences, than the conductivity itself; hence, it may carry rich and delicate information on the microscopic mechanisms of conductivity.
Traditionally, 1/f-noise is thought to be reducible to some ”slow” random processes with a broad distribution of large life-times (relaxation times). The standard idea is that the conductivity, determined by ”fast” mechanisms, is modulated by ”slowly” varying parameters (number of charge carriers, local disorder, occupation of electron traps, Coulomb potentials, lattice magnetic moments, etc.; for some hypotheses see \[1-4,6-14,17-18\]). But the concrete origin of the long life-times remains mysterious even in the best experimentally investigated situations, such as the electron mobility 1/f fluctuations in semiconductors (besides, this theory could scarcely explain why 1/f-noises in liquid and solid metals are comparable , or why the hypothetical activation energies required to fit observed 1/f spectra much exceed the actual energies determined from the response to charge injections ).
The problem is especially sharp in the case of ”bad” (narrow-band, variable-range hopping, tunnel, magnetically disturbed) conductivity, as in doped semiconductors , oxides , amorphous silicon \[11-14\], materials with colossal magnetoresistance \[15-16,19,20,32\], cermets , etc. In these systems long-range Coulombian (or magnetic) interactions are important, stimulating attempts to reduce 1/f-noise to slow random charge redistributions \[4,11-14,17-18\], but then it is hard to explain why real 1/f spectra do not saturate below 100 Hz . The interesting concept of self-organized criticality also connects low-frequency noise with large spatial and temporal scales.
An alternative way of understanding 1/f-noise has been developed since 1982 \[5,21-25,28-30\]. It attributes 1/f-noise not to ”slow” processes but, on the contrary, to the ”fast” microscopic kinetic processes responsible for resistivity. The logic is simple: if a dynamical system producing stochastic behaviour constantly forgets its own history, then it is indifferent to the number of kinetic events that happened in the past, and hence it has neither stimulus nor possibility to follow some absolutely certain ”probability of event per unit time”. Therefore, a dynamical (Hamiltonian) system that produces relaxation (irreversibility) and noise does this, generally speaking, without keeping definite (well time-averageable) ”probabilities per unit time”, and consequently produces fluctuations of the relaxation (dissipation) and noise rates. These fluctuations do not destroy detailed balance and do not cause any compensating reaction; hence, they have no characteristic time scale (see for more details and examples). In this theory, the long-living statistical correlations associated with the 1/f spectrum are a manifestation of the pure freedom of the random flow of kinetic events, not of long memory \[21-25\]. In this respect, as N.S.Krylov argued in 1950 , we all should get rid of the erroneous opinion that statistical correlations always reflect some actual causality.
Kinetic theory misses this 1/f-noise just because it assumes quite definite ”probabilities per unit time”, although this ansatz is masked by ansatzes like ”molecular chaos”, ”random phases”, ”thermodynamical limit”, ”continuous spectra”, etc. But a correctly performed derivation of gas kinetics from Hamiltonian dynamics demonstrates violation of ”molecular chaos” and 1/f fluctuations of the diffusivities and mobilities of gas particles. Thus 1/f-noise arises even in a system where certainly nothing like slow processes and giant life-times exists.
While a gas is characterized by strong but short (well separated in time and space) kinetic events (collisions), many systems possess weak but long-lasting (badly separated) interactions, for example phononic systems (dielectric crystals). Nevertheless, as was shown in , the 1/f-type fluctuations of dissipation (inner friction) and of Raman light scattering observed in these systems (quartz) can also be deduced just from the basic (phonon-phonon) kinetics, if one only takes into account that any particular interaction interplays parametrically with other simultaneous interactions. Thus it is wrong to suppose that ”elementary” kinetic events are statistically independent, contrary to what is usually assumed in kinetics.
The purpose of the present paper is to show that a similar situation may be realized in quantum conducting many-electron systems. Concretely, the very evolution of the quantum amplitude and probability of one particular electron transfer may depend on the fast fluctuating potentials and fields (electric, magnetic, etc.) induced by other simultaneously occurring transfers, i.e. many elementary kinetic events (quantum jumps) are essentially interplaying. As far as we know, this effect has not yet been considered. It is expected to be especially strong if the characteristic transfer time (the time needed for the quantum transfer probability to evolve right up to certainty) noticeably exceeds the correlation time of the fluctuating fields. Such a condition seems natural if any conducting electron feels the displacements of many other electrons (or the changes of many magnetic moments) by means of long-range interactions.
Our first principal statement is that short-correlated random fields, if they influence the formation of quantum transfer amplitudes, lead to a strong uncertainty of the transfer probabilities; thus well-defined ”probabilities per unit time” do not exist. As a consequence, long-correlated conductance fluctuations arise whose principal scale properties coincide with those of 1/f-noise. Hence, to explain 1/f-noise we have no need of extremely slow charge redistributions; instead, the very fast ones are sufficient. The second statement is that these effects cannot be captured if one neglects the actual quantum discreteness of energy spectra.
For simplicity, to convey the main idea we concentrate here on the case of tunnel conduction (some aspects of the discreteness in tunnel junctions were touched in , but only in relation to low temperatures and without accounting for Coulombian effects). More extended argumentation, including a rigorous analysis of Hamiltonian models, will be published separately.
## II Characteristic times of tunnel conductivity
Let us consider the characteristic time scales relating to electron tunneling between metallic sides. Under a small voltage $`U`$ applied to a tunnel junction, the mean charge transported during time $`\Delta t`$ can be phenomenologically expressed as
$`\mathrm{\Delta }Q=e{\displaystyle \frac{Ue}{\delta E}}{\displaystyle \frac{\mathrm{\Delta }t}{\tau _{trans}}}`$
Here $`\delta E`$ is the mean separation of electron energy levels, $`Ue/\delta E`$ is the number of levels effectively contributing to the electric current, and $`\tau _{trans}`$ is the mean transmission time required for one electron jump from a given level at one side to anywhere at the opposite side. Though quantum jumps can be realized in a moment, that moment is random and may lie approximately equally anywhere in the interval $`0<\Delta t<\tau _{trans}`$, with $`\tau _{trans}`$ being the time necessary to accumulate the quantum transmission probability up to a value $`\sim 1`$. Clearly, $`e/\tau _{trans}`$ serves as the mean current per level. From here one gets the tunnel conductance as
$$G=\mathrm{\Delta }Q/U\mathrm{\Delta }t=e^2\nu \gamma ,\nu =1/\delta E,\gamma =1/\tau _{trans}$$
(1)
with $`\nu `$ being electron density of states and $`\gamma `$ mean jump probability per unit time.
Of course, any real junction possesses a finite capacity $`C`$ and thus a finite characteristic time
$`\tau _{rel}\simeq RC=C/G`$
Its physical role can be identified (as usual in RC-circuits) as the relaxation time of junction charging and the correlation time of thermal voltage fluctuations (if one supposes that the Coulombian interaction between the sides manifests itself in stochastic form). Comparing the times defined above:
$$\tau _{trans}/\tau _{rel}\simeq e^2\nu /C=E_C/\delta E$$
(2)
with $`E_C=e^2/C`$ being the Coulomb energy. Clearly, (2) is merely the number of levels promoting charge relaxation.
Now let us demonstrate that this ratio may essentially exceed unity,
$$\tau _{trans}/\tau _{rel}\gg 1$$
(3)
even if Coulombian effects are weak in the trivial sense $`E_C\ll T`$. For concreteness, consider a flat junction with area $`S`$, side thickness $`w`$, and with $`d`$ and $`\epsilon `$ being the thickness and dielectric constant of the insulating barrier, respectively, and use the formula $`C\simeq \epsilon S/4\pi d`$. The estimate for the metallic density of states is $`\nu \sim N_e/\mu \sim Sw/(\hbar v_Fa^2)`$, where $`\mu `$ is the Fermi energy, $`N_e`$ the number of metallic electrons, $`v_F`$ the Fermi velocity and $`a`$ the atomic size ($`Sw/N_e=a^3`$ is the volume per electron), thus
$`e^2\nu /C\sim \frac{e^2}{\hbar c}\frac{c}{v_F}\frac{4\pi dw}{\epsilon a^2}\sim \frac{dw}{a^2}`$
where $`c`$ is the speed of light (introduced to show that $`c/v_F`$ overcompensates the fine structure constant) and the typical value $`\epsilon \approx 20`$ has been substituted.
Obviously, the inequality (3) is satisfied if either $`d`$ or $`w`$ (all the more if both) exceeds $`a`$ a few times, i.e. practically always (in this sense Coulombian effects can never be neglected). Therefore the quantum probability of some particular electron jump inevitably grows under the influence of the relatively fast varying inter-side voltage $`u(t)`$, $`u(t)\sim \sqrt{T/C}`$, produced by many other jumps actually happening at the same time in both directions between other levels. Not only is the moment of jump realization random, but the jump probability itself turns out to be random; that is, different kinetic events become entangled. This may be named quantum (Coulombian) interaction of electron transfers. It cannot be completely described in one-electron language, but inserting the fluctuating voltage gives a quasi one-particle approximation which, to some extent, may substitute for the real picture.
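A rough numerical check of this estimate, assuming $`c/v_F\sim 300`$ (typical for a metal), $`\epsilon =20`$ as above, and $`d=w=3a`$; the geometry here is an illustrative assumption only.

```python
import math

alpha_fs = 1.0 / 137.0            # fine-structure constant e^2/(hbar*c)
c_over_vF = 300.0                 # assumed, typical metallic value
eps = 20.0                        # barrier dielectric constant, as in the text
d_over_a = w_over_a = 3.0         # assumed thicknesses in units of the atomic size

ratio = alpha_fs * c_over_vF * (4.0 * math.pi / eps) * d_over_a * w_over_a
print(f"tau_trans/tau_rel = e^2*nu/C ~ {ratio:.0f}")   # ~ 12, i.e. >> 1 already here
```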
## III Time ratios connected with energy discreteness
The standard kinetic scheme deals with the tunnel Hamiltonian
$`H=H_0+H_{tun},\qquad H_{tun}=\sum _{kq}g_{kq}(b_q^+a_k+a_k^+b_q)`$
and with three ansatzes invoked to avoid the formal difficulties brought in by the discreteness of the electron energy levels, namely: i) the energy spectrum in the sides is so dense that the continuous limit is possible, $`\sum _k\dots \to \int \dots \nu (E)\,dE`$; ii) Fermi’s golden rule, $`p_{kq}(\Delta t)\approx 2\pi \Delta t(g_{kq}^2/\hbar )\delta (E_{kq})`$, where $`p_{kq}(t)`$ is the jump probability and $`E_{kq}`$ the energy difference between states; iii) $`g_{kq}^2`$ is a sufficiently smooth function of $`E_{kq}`$.
Here $`\Delta t`$ is the time interval necessary to adequately evaluate jump probabilities for kinetic equations. This scheme requires the restriction $`\Delta t\ll \tau _{gold}`$, where $`\tau _{gold}=2\pi \hbar /\delta E`$. However, if we wanted to account for the effects of the voltage fluctuations $`u(t)`$ we would need $`\Delta t`$ at least comparable with $`\tau _{trans}`$, i.e. the additional condition $`\tau _{trans}/\tau _{gold}<1`$. But in an adequate model just the opposite relation must be expected,
$$\tau _{trans}/\tau _{gold}>1$$
(4)
This is easily seen by noting that $`\tau _{trans}/\tau _{gold}=\delta E/\Delta E`$, $`\Delta E\equiv 2\pi \hbar /\tau _{trans}`$, where $`\Delta E`$ is the energy uncertainty associated with the instability of intra-side electron states. Hence, if the desired condition were valid, then close states would be indistinguishable; in other words, the electron spectra in the sides would undergo mutual rebuilding because of the too good transparency of the tunnel barrier (see also on the relation $`R/R_0>1`$).
To analyse the ratio (4) more carefully, estimate $`\tau _{trans}`$ . The mean transported charge $`\mathrm{\Delta }Q`$ is expressed by
$`\Delta Q=e(\Delta N_+-\Delta N_-)=e\sum _{kq}[f(E_k^-)-f(E_q^+)]p_{kq},\qquad \Delta N_\pm =\sum _{kq}f(E_k^{\mp })[1-f(E_q^\pm )]p_{kq}`$
where $`\Delta N_\pm `$ is the number of electrons tunneling in the left (right) direction, $`E^\pm `$ are the energy levels in the sides, $`f(E)=1/[1+\mathrm{exp}((E-\mu )/T)]`$, and
$$p_{kq}=p_{kq}(\Delta t,U)\approx 4g_{kq}^2\mathrm{sin}^2(E_{kq}\Delta t/2\hbar )/E_{kq}^2,\qquad E_{kq}=E_q^+-E_k^--eU$$
(5)
is the tunneling probability evaluated by ordinary perturbation theory. The corresponding low-field conductance
$`G=[\Delta Q/U\Delta t]_{U\to 0}=e^2\sum _{kq}[-\partial f(E_k)/\partial E_k]\,p_{kq}(\Delta t,0)/\Delta t`$
turns into Eq.1 with
$$\gamma =1/\tau _{trans}=\sum _qp_{kq}(\Delta t,0)/\Delta t\approx 2\pi g^2\nu /\hbar ,\qquad (E_k,E_q\approx \mu )$$
(6)
and $`g`$ being the characteristic magnitude of $`g_{kq}`$. Combining this relation with Eq. (1) one obtains
$$\frac{\tau _{trans}}{\tau _{gold}}=\frac{e^2}{2\pi \hbar G}=\frac{R}{R_0}\approx \left(\frac{\delta E}{2\pi g}\right)^2,\qquad R_0=\frac{2\pi \hbar }{e^2}\approx 20\text{ kOhm}$$
(7)
Hence, the condition for really weak interaction is the smallness of transfer matrix elements as compared with the energy level spacing.
## IV Discreteness, phase decoherence and fluctuations of quantum transfer probabilities
Since the requirement $`\tau _{trans}/\tau _{gold}<1`$ was invoked only by the continuous spectrum approximation, we may suspect that the discreteness must be involved in explicit form to describe the fluctuations of transfer probabilities, while perturbation theory remains applicable. Consider the picture where quantum transfers from a fixed level ”k” at one side to any level ”q” at the other side are influenced by a fast fluctuating field (FFF), here the voltage noise $`u(t)`$, in its turn induced by the electron jumps beyond our attention. Thus we use a quasi one-electron picture, based on the ansatz that in a many-electron surrounding with a sufficiently rich energy spectrum $`u(t)`$ behaves as a random process. The modern theory of quantum chaos gives powerful support for this statement (bearing in mind, though, that in specific systems coherent collective charge oscillations are possible instead of stochasticity and relaxation).
In this section we may consider thermodynamic equilibrium, taking $`U=0`$. First introduce the randomly accumulating diffusive phase shift
$`\phi (t)=\frac{e}{\hbar }\int _0^tu(t^{\prime })\,dt^{\prime }`$
Instead of (5), the standard Schrödinger equations for the transfer amplitudes yield
$$p_{kq}=\left|A_{kq}\right|^2,\qquad A_{kq}\approx \frac{g_{kq}}{\hbar }\int _0^{\Delta t}\mathrm{exp}(iE_{kq}t/\hbar )Z(t)\,dt,\qquad Z(t)=\mathrm{exp}[i\phi (t)]$$
(8)
Clearly, $`p_{kq}`$ now become random values governed (parametrically excited or damped) by the phase shift, in its turn produced by the fluctuating potential difference between the ”in” and ”out” states. Further, introduce the phase correlation function, the phase correlation time and the corresponding energetic measure of quantum coherence:
$$K(t_1-t_2)=\langle Z(t_1)Z^{*}(t_2)\rangle ,\qquad \tau _{phase}=\int _0^{\infty }\left|K(\tau )\right|d\tau ,\qquad \epsilon _{coh}=2\pi \hbar /\tau _{phase}$$
(9)
where angle brackets denote averaging with respect to FFF.
As demonstrated below, $`\tau _{phase}`$ is expectedly rather short as compared with $`\tau _{trans}`$, and is naturally limited by $`\tau _{rel}`$. Therefore, under the integral in (8), $`Z(t)`$ can be treated as complex shot noise. Consequently, at $`\Delta t\gg \tau _{rel}`$ the transfer amplitudes $`A_{kq}`$ behave approximately as complex Brownian paths, and regardless of the details of $`Z(t)`$’s statistics we have reasons to write
$$\langle p_{kq}^2\rangle =\langle \left|A_{kq}\right|^4\rangle \approx 2\langle \left|A_{kq}\right|^2\rangle ^2=2\langle p_{kq}\rangle ^2,\qquad \langle p_{kq},p_{kq}\rangle \approx \langle p_{kq}\rangle ^2$$
(10)
Here Malakhov’s cumulant brackets are used,
$`\langle x,y\rangle \equiv \langle xy\rangle -\langle x\rangle \langle y\rangle `$
This is our first finding: considered on time intervals of the order of the actual transition time, the quantum amplitudes and probabilities may become 100% uncertain due to the phase decoherence (phase diffusion) forced by the FFF. The second is that the mean probabilities grow linearly with time:
$`\langle p_{kq}\rangle \approx \Delta t\,(g/\hbar )^2\int K(\tau )\mathrm{exp}(iE_{kq}\tau /\hbar )\,d\tau \propto \Delta t,\qquad (E_{kq}=E_q^+-E_k^-)`$
i.e. the FFF removes the need for the golden rule to obtain uniform growth of the probabilities even at $`\Delta t\gg \tau _{gold}`$. Note for what follows that, according to this formula, it is now $`\epsilon _{coh}`$ (instead of $`2\pi \hbar /\tau _{gold}`$) that serves as the width of the energy region accessible from a given level (”k” or ”q”).
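A minimal Monte Carlo sketch of the statement behind Eq. (10): modelling the amplitude as the endpoint of a walk of phase-randomized increments (independent phases per step stand in here for the short-correlated $`Z(t)`$; the step and sample counts are arbitrary), one indeed recovers $`\langle p^2\rangle \approx 2\langle p\rangle ^2`$, i.e. a 100% uncertain transfer probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_samples = 200, 20000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_steps))
# endpoint of a random phasor walk, normalized so that <|A|^2> ~ 1
A = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_steps)
p = np.abs(A) ** 2
print("<p>         :", round(p.mean(), 3))                         # ~ 1
print("<p^2>/<p>^2 :", round((p ** 2).mean() / p.mean() ** 2, 3))  # ~ 2
```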
But the most interesting subject is the total transition rate $`\gamma =p/\Delta t`$, with $`p=p_k`$,
$$p_k\equiv \sum _qp_{kq}=\int _0^{\Delta t}\!\!\int _0^{\Delta t}\Gamma (t_1-t_2)Z(t_1)Z^{*}(t_2)\,dt_1dt_2,\qquad \Gamma (\tau )=\sum _q\left(\frac{g_{kq}}{\hbar }\right)^2\mathrm{exp}(i\tau E_{kq}/\hbar )$$
(11)
Here the kernel $`\Gamma (\tau )`$ represents the discreteness. Its analytical properties are of much importance for the whole theory . In the continuous approximation, given ansatzes i)-iii), one would have $`\Gamma (\tau )=\delta (\tau )/\tau _{trans}`$, and $`p`$ would be definitely nonrandom, $`p=\Delta t/\tau _{trans}`$. In reality, because of the discreteness this kernel is quite non-local and does not decay completely even at arbitrarily long times. For visuality only, let us choose an equidistant spectrum on the right-hand side, $`E_q^+-E_k^-=n\,\delta E+\epsilon `$ ($`n`$ an integer, $`\epsilon <\delta E`$); then
$$\Gamma (\tau )=\frac{1}{\tau _{trans}}\mathrm{exp}(i\epsilon \tau /\hbar )\sum _n\delta (\tau -n\tau _{gold})$$
(12)
Now the third important point is easily seen: if $`\epsilon _{coh}>\delta E`$ then the mean probability $`\langle p\rangle \approx \Delta t/\tau _{trans}`$ practically coincides with what is given by the usual kinetics, even in spite of the formal violation of the golden rule (in this sense, the FFF effectively expands the applicability of the usual scheme).
And the main, fourth, point is that the phase decoherence produced by the FFF, combined with the discreteness, results in randomness of the total quantum jump probabilities. Indeed, owing to the possibility of treating the $`A_{kq}`$’s as Brownian walks, we obtain
$$\langle p,p\rangle \approx \int _0^{\Delta t}\!\!\cdots \!\!\int _0^{\Delta t}\Gamma (t_1-t_2)\Gamma (t_3-t_4)K(t_1-t_4)K(t_3-t_2)\,dt_1\cdots dt_4$$
(13)
Here, under the same condition $`\epsilon _{coh}>\delta E`$, only the regions $`t_1\approx t_4`$, $`t_3\approx t_2`$ are significant, but many delta functions from (12) contribute. The resulting transfer probability variance is
$$\langle p,p\rangle \approx \frac{\Delta t^2}{\tau _{trans}^2\tau _{gold}}\int \left|K(\tau )\right|^2d\tau \sim \frac{\tau _{phase}}{\tau _{gold}}\langle p\rangle ^2$$
(14)
(we took into account that the ”width” of the delta functions, determined by the inverse width of the whole energy band, is certainly smaller than $`\tau _{phase}`$).
If the inequality (4) were inverted because of too small a $`\delta E`$, the expression (13) would formally turn into zero, but in fact it always remains nonzero for any realistic, slightly non-equidistant spectrum. Nevertheless, we may predict that violation of (4), as well as of (3), leads to suppression of the probability fluctuations. Conversely, large $`\delta E>\epsilon _{coh}`$ means the strongest fluctuations, with variance $`\sim \langle p\rangle ^2`$ or greater. But these are accompanied by decreasing correlations between transfers from different levels:
$$\langle p_{k_1},p_{k_2}\rangle \propto \frac{1}{\tau _{gold}}\int \mathrm{exp}[i(E_{k_1}^--E_{k_2}^-)\tau /\hbar ]\left|K(\tau )\right|^2d\tau $$
(15)
Evidently, while at $`\epsilon _{coh}>\delta E`$ the currents from many levels fluctuate concordantly, at $`\epsilon _{coh}<\delta E`$ even close levels inject independently of one another. Besides, even $`\langle p\rangle `$ becomes sensitive to the energy shift $`\epsilon `$ in (12); therefore, this extreme case should be carefully analysed taking into account realistic (incommensurate) discrete energy spectra (and better outside the frame of the quasi one-particle picture).
To end this section, let us estimate the phase correlation time $`\tau _{phase}`$ (which may also be called the phase decoherence time). Notice that $`K(\tau )`$ is nothing but the characteristic function of the phase. In principle, it is determined just by the transfer statistics, in a complicated self-consistent picture. Since the latter is beyond our present scope, we confine ourselves to rough reasoning. For instance, at $`E_C\ll T`$ it is natural to treat $`u(t)`$ as an Ornstein-Uhlenbeck random process; then
$`K(\tau )\approx \mathrm{exp}\left[\frac{T}{C}\left(\frac{e}{\hbar }\tau _{rel}\right)^2\left\{1-\frac{\tau }{\tau _{rel}}-\mathrm{exp}\left(-\frac{\tau }{\tau _{rel}}\right)\right\}\right]`$
The corresponding decoherence time is $`\tau _{phase}\sim (\hbar /e)\sqrt{C/T}`$ $`(\ll \tau _{gold})`$. Perhaps this is a lower bound for it (too rigid in the sense that it does not include the conductance $`G`$). At $`E_C\sim T`$ (which qualifies small junctions) the charge quantization is essential, and the better model for $`\phi (t)`$ is an infinitely divisible random walk (on this subject see ) formed by rare increments $`\Delta \phi =(e/\hbar )(\pm e/C)\theta `$, where $`\theta `$ is the random duration of a charged stay, distributed with some probability density $`W(\theta )`$ and total time fraction $`\sim 1`$. A suitable expression is $`W(\theta )=\tau _{rel}^{-1}\mathrm{exp}(-\theta /\tau _{rel})`$, which corresponds to the characteristic function
$`\Xi (\eta ,\tau )\equiv \langle \mathrm{exp}[i\eta \int _0^\tau u(t)\,dt]\rangle \approx \mathrm{exp}\left\{\frac{|\tau |}{\tau _{rel}}\int _0^{\infty }\left[\mathrm{cos}\left(\frac{\eta e}{C}\theta \right)-1\right]W(\theta )\,d\theta \right\}`$
(here the Lévy-Khinchin representation was applied). Taking $`\eta =e/\hbar `$ and assuming $`2\pi R/R_0>1`$, we obtain $`K(\tau )=\Xi (e/\hbar ,\tau )\approx \mathrm{exp}(-|\tau |/\tau _{rel})`$. Thus, in this opposite extreme case $`\tau _{phase}`$ may be of the order of $`\tau _{rel}`$ (likely representing an upper bound for $`\tau _{phase}`$) and possibly larger than $`\tau _{gold}`$.
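The Ornstein-Uhlenbeck case can be checked by direct simulation in dimensionless units ($`e/\hbar =1`$, $`\tau _{rel}=1`$); the variance parameter $`X=(T/C)(e\tau _{rel}/\hbar )^2`$ below is an assumed magnitude. The sampled $`K(\tau )=\langle e^{i\phi (\tau )}\rangle `$ reproduces the closed form quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
X = 1.0                         # (T/C)*(e*tau_rel/hbar)^2, an assumed value
dt, n_steps, n_paths = 0.01, 400, 4000
u = rng.normal(0.0, np.sqrt(X), size=n_paths)   # stationary start, <u^2> = X
phi = np.zeros(n_paths)
taus = dt * np.arange(n_steps)
K_sim = np.empty(n_steps)
for n in range(n_steps):
    K_sim[n] = np.exp(1j * phi).mean().real
    phi += u * dt                               # phi(t) = (e/hbar) * int u dt'
    u += -u * dt + np.sqrt(2.0 * X * dt) * rng.normal(size=n_paths)

K_theory = np.exp(X * (1.0 - taus - np.exp(-taus)))
for i in (50, 150, 300):
    print(f"tau = {taus[i]:.1f}   K_sim = {K_sim[i]:+.3f}   K_theory = {K_theory[i]:+.3f}")
```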
## V Low-frequency conductance fluctuations
In this section we return to the externally driven junction, so the total voltage will be $`U(t)=U+u(t)`$, with the applied voltage $`U`$ interpreted as its average value.
Of course, fluctuations of the electron jump probabilities eventually result in more or less analogous conductance fluctuations. The conductance is understood as $`\Delta Q/U\Delta t`$, where $`\Delta Q`$ is the conditional quantum average value of the transported charge taken under a fixed realization of $`u(t)`$. The charge transport depends also on the occupancies of the energy levels. If $`U=0`$ then $`\Delta Q`$, by its definition, must turn into zero regardless of $`u(t)`$. This implies the existence of natural statistical correlations between the voltage noise and the occupancies, whose accurate description would require a self-consistent many-electron picture. To avoid this difficulty, let us rework the expression
$$\Delta Q=e\sum _{kq}[f(E_k^-)-f(E_q^+)]\left|A_{kq}\right|^2$$
(16)
performing time integration by parts in the amplitudes as if $`u(t)`$ were absent, to transform (16) into
$$\Delta Q=e^2U\int _0^{\Delta t}\!\!\int _0^{\Delta t}\Lambda (t_1-t_2)\mathrm{exp}[ieU(t_1-t_2)/\hbar ]Z(t_1)Z^{*}(t_2)\,dt_1dt_2$$
(17)
where a new kernel is introduced,
$$\Lambda (\tau )=\sum _{kq}\left(\frac{g_{kq}}{\hbar }\right)^2\frac{f(E_k^-)-f(E_q^+)}{E_q^+-E_k^-}\mathrm{exp}[i\tau (E_q^+-E_k^-)/\hbar ]$$
(18)
This form is consistent with the detailed balance $`\mathrm{\Delta }Q(U=0)=0`$ and hence may serve for estimates.
In general, if the electron spectra in the sides are much wider than $`T`$, then at $`U<T/e`$ we believe that, approximately, i) the conductivity obeys Ohm’s law and ii) its relative fluctuations do not depend on $`U`$. Then Eqs. (17)-(18) can be simplified by linearization into
$$\Delta Q\approx e^2U\sum _k[-\partial f(E_k^-)/\partial E_k^-]\,p_k$$
(19)
with $`p_k`$ defined by (11). Naturally, the conductance fluctuations essentially depend on the relation between the decoherence and the level spacing. At sufficiently small phase decoherence time (”large” junction, widely correlated jumps from different levels), Eq. (15) helps one easily obtain
$$\langle G,G\rangle \sim \frac{\delta E}{T}\langle G\rangle ^2,\qquad (\tau _{phase}\ll \tau _{gold})$$
(20)
At large decoherence time (”small” junction, non-correlated jumps) the conductance fluctuations are very sensitive to the concrete structure of the energy spectra in the sides, first of all to the degree of their relative commensurability. Omitting the details, the result is that the relative conductance variance may prove to lie anywhere in the interval
$$\frac{\tau _{phase}}{\tau _{gold}}\frac{\delta E}{T}<\frac{\langle G,G\rangle }{\langle G\rangle ^2}<1,\qquad (\tau _{phase}>\tau _{gold})$$
(21)
i.e. it can reach a magnitude as great as $`\sim 1`$. In this case, accurate accounting for the fluctuations of the occupancies may be especially important.
## VI Comparison with experiment
A good experimental illustration of the possibly exciting properties of quantum transfers was given in , where 1/f-noise in the cermet (granular composite) $`Ni`$-nanoparticles $`(25\%)`$-$`Al_2O_3`$ was investigated. In this system the parameters of a typical elementary tunnel junction formed by neighbouring metal particles are $`\delta E\approx 0.2`$ meV, $`d\approx 2`$ nm, $`C\approx 5\cdot 10^{-6}`$ cm, $`E_C\approx T`$ (at room temperature), and $`R\approx 30`$ MOhm, which means that $`\tau _{gold}\approx 3\cdot 10^{-11}`$ s, $`\tau _{rel}\approx 1.5\cdot 10^{-10}`$ s, $`\tau _{trans}/\tau _{rel}=E_C/\delta E\approx 200`$ and $`\tau _{trans}\approx 3\cdot 10^{-8}`$ s. Both the inequalities (3) and (4) are well satisfied, thus giving us all grounds (see also the Discussion) to suspect that the 1/f-noise could be attributed to quantum Coulombian interactions in the above mentioned sense (possibly with a contribution from electron-phonon processes). Moreover, $`\tau _{gold}`$ is even noticeably smaller than $`\tau _{rel}`$, thus indicating, from our point of view, the possibility of highly maintained conductance fluctuations.
In fact, such a cermet is characterized by giant 1/f conductance noise with relative spectral density $`S_{\delta G}(f)\approx \alpha /N_gf`$, where $`\alpha \approx 6\cdot 10^{-3}`$ and $`N_g`$ is the number of metal particles in a sample. In view of $`E_C\approx T`$, $`N_g`$ approximately represents the number of active (simultaneously transporting) electrons; hence this is almost standard noise with the classical Hooge constant ($`\alpha =2\cdot 10^{-3}`$) . It corresponds to $`S_{\delta G}(f)\approx \alpha /f`$ noise in a separate elementary junction, demonstrating at least $`100\%\cdot \sqrt{\alpha \mathrm{ln}(1\text{ s}/\tau _{rel})}`$, i.e. $`\approx 40\%`$, uncertainty of its conductance.
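These numbers follow from simple arithmetic (Gaussian units, $`k_B=1`$), a sketch of which is given below; the inputs are the junction parameters listed above, and the outputs reproduce the quoted time scales and the $`\approx 40\%`$ conductance uncertainty to order of magnitude.

```python
import math

hbar = 1.05e-27                 # erg s
e = 4.8e-10                     # esu
deltaE = 0.2e-3 * 1.6e-12       # 0.2 meV in erg
C = 5e-6                        # cm (Gaussian capacitance)
R = 30e6 / 9e11                 # 30 MOhm in Gaussian units (s/cm)
alpha = 6e-3                    # relative 1/f spectral amplitude per junction

tau_gold = 2.0 * math.pi * hbar / deltaE      # ~ 2e-11 s (quoted ~3e-11 s)
tau_rel = R * C                               # ~ 1.7e-10 s
E_C = e ** 2 / C                              # ~ 29 meV, about room temperature
tau_trans = (E_C / deltaE) * tau_rel          # ~ 2.4e-8 s (quoted ~3e-8 s)
unc = math.sqrt(alpha * math.log(1.0 / tau_rel))   # ~ 0.37, i.e. ~40%
print(tau_gold, tau_rel, E_C / deltaE, tau_trans, unc)
```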
But the most beautiful observation was the visible sensitivity of the 1/f-noise to the discreteness of the electron energy spectra in the metal granules. When the applied voltage per elementary junction exceeds $`\delta E/e`$, and thus the corresponding current exceeds $`e/\tau _{trans}`$ (see Eq. (1)), the quadratic (low-field) dependence of the variance of the voltage 1/f-noise on the bias current transforms into a ”non-Ohmic” linear one (although the mean conductance obeys Ohm’s law up to $`T/\delta E\approx 100`$ times larger currents).
The experimental appearance of the factor $`\delta E/e`$ is an evident manifestation of the role of the discreteness, and by itself gives clear confirmation of our approach to 1/f-noise in this system. At the same time, this means that hypothesis ii) (of the previous section) fails, i.e. the linearized expression (19) becomes invalid if applied to the fluctuations, and the estimates (20)-(21) should be multiplied by a decreasing factor $`D(U)`$ ($`D(0)=1`$, $`D(U)<1`$). Indeed, as we underlined, any transfer from one side embraces $`\sim \epsilon _{coh}/\delta E`$ levels at the other side. If this number is smaller than the number of charge transporting levels, $`eU/\delta E`$, then the latter act as uncorrelated quantum channels. Hence, the $`\Delta Q`$ variance may become a linear function of $`U`$, being proportional to the number of channels, which corresponds in small junctions to $`D\sim \delta E/eU`$ at $`eU>\delta E`$.
Definite formal support for this heuristic reasoning (literally appropriate at zero temperature only) can be found already within the quasi one-electron picture, if one considers the full non-linear dependence (17) of $`\Delta Q`$ on $`U`$. The decreasing factor that follows from Eqs. (17)-(18) is
$$D(U)=S(eU)/S(0),\qquad S(E)\equiv \int K_\Lambda (\tau )\mathrm{exp}(iE\tau /\hbar )\left[\int K(\theta )K(\tau -\theta )\,d\theta \right]d\tau $$
(22)
where the correlation function $`K_\mathrm{\Lambda }(\tau )`$ is defined by
$$K_\Lambda (\tau )=\frac{1}{2\Delta t}\int _{-\Delta t}^{\Delta t}\Lambda (t+\frac{\tau }{2})\Lambda ^{*}(t-\frac{\tau }{2})\,dt$$
(23)
In the case of small junctions, averaging over an ensemble of probable electron spectra must be added, which automatically takes place in cermets. In theory, we need some statistics of the energy levels (see on the known variants). At present, notice only that the absence of a rigid measure of the local spacing of energy levels (except the mean value $`\delta E`$, related to whole spectra only) might naturally result in a weak dependence of $`K_\Lambda (\tau )`$ on $`\tau `$, characterized by the logarithm $`\mathrm{ln}(\tau _{gold}/\tau )`$, which implies just the inverse proportionality $`D(U)\sim \epsilon _{coh}/eU`$ at $`eU>\epsilon _{coh}`$.
## VII Discussion and resume
The principal peculiarity of the above considered fluctuations of quantum transfer probabilities, and of the corresponding conductance fluctuations, is that their relative measure seems independent of the time under observation, $`\Delta t`$. It looks as if the conductance undergoes fluctuations with non-decaying statistical correlations. Outwardly these resemble the static fluctuations investigated in the theory of disordered conductors (so-called universal conductance fluctuations) . But in essence they are different things: one starts from the decoherence just where the other ends at it.
Though our analysis was limited to times $`\Delta t\lesssim \tau _{trans}`$, there is a feeling that similar statistics extend to arbitrarily longer time scales. A formal proof that this is really true, pertaining to Hamiltonian models of quantum channels (with ”weak links” like tunnel barriers) subjected to FFF (Coulombian or magnetic), will be given separately. For the present, the semi-formal arguments are: i) all that was obtained results from the trivial rule that, in general, one should manipulate quantum amplitudes, obtaining quantum probabilities only as a final product; ii) all that was obtained results from ”fast” noise and phase decoherence, not from some causal correlations which could not be continued to longer times. We believe that a self-consistent analysis connecting the FFF and the electron transport statistics will result in some slow (nearly logarithmic) dependence of $`\langle G,G\rangle `$ on the observation time, reflecting the non-static character of the fluctuations and described by a 1/f spectrum, instead of a $`\delta (f)`$ having the same formal dimensionality (such a dependence may show up already as a weak $`\Delta t`$ dependence of the correlator (23)).
An accurate approach to a tunnel junction would deal with the non-equilibrium steady state governed by the Hamiltonian
$`H=H_0+H_{tun},\qquad H_0=H_-+H_++H_C-U\Delta Q`$
where $`H_C`$ describes the inter-side Coulombian interaction, $`\Delta Q`$ is the operator of the transported charge, and $`H_\pm `$ describe the two sides together with their leads, which also serve as thermal baths ensuring relaxation to equilibrium occupancies. As in practice, in theory it is rather hard to ”solder the leads”, especially if one wishes to avoid appeals to standard kinetic schemes. But, regardless of technical difficulties, we may state that no processes in the leads could influence the formation of the non-diagonal (inter-side) elements of the operator $`\Delta Q`$, since these are determined by the tunneling itself and the charging of the tunnel barrier only. Therefore, the better the relaxation in the leads, the more grounds we have to extend Eq. (16) to arbitrary time intervals (now with $`\left|A_{kq}\right|^2`$ representing the number of passing electrons). The comparatively non-principal corrections to be performed are accounting for fluctuations of the occupancies and including the conductance fluctuations into the higher-order statistics of the FFF (voltage noise).
To summarize, we have demonstrated that if one does not neglect the actual quantum discreteness when constructing kinetic models of transport processes, then a possibly strong sensitivity of quantum transfer amplitudes and probabilities to fast fluctuating fields becomes visible (in particular, to fields created by the transfers themselves), which may result in fundamental 1/f-type low-frequency fluctuations of transport rates. Hence, the now reigning quantum kinetic models call for evident general comments. All of them descend from the well known Pauli kinetic master equations. Van Hove developed their formal foundation under the so-called $`\lambda ^2t`$-limit . But, without doubt, this theory does not foresee anything like 1/f-noise. The question of why arises naturally (some comments on this issue were suggested in ). We hope that our above consideration highlights the possible principal answer: in fact, Van Hove’s formalism supposes the limit $`\delta E\to 0`$ (i.e. the continuous spectrum idealization) to be performed before the limit $`g\to 0`$, with $`g`$ ($`g=\lambda `$) representing (as above) the magnitude of the weak interactions. Thus the influence of the quantum discreteness on the statistics of quantum jumps (between eigenstates of the unperturbed Hamiltonian) is lost. We think that it is not hopeless to properly improve the present kinetics (see the Introduction).
ACKNOWLEDGEMENTS
I acknowledge Dr. Yu.V.Medvedev and the participants of his seminar at the Department of kinetic properties of disordered and nonlinear systems of DonPTI NAS of Ukraine for support, stimulating criticism and helpful discussions.
REFERENCES
1. P.Dutta, P.Horn, Rev.Mod.Phys., 53(3), 497 (1981).
2. F.N.Hooge, T.G.M.Kleinpenning, L.K.J.Vandamme, Rep.Prog.Phys., 44, 481 (1981).
3. M.B.Weissman, Rev.Mod.Phys., 60, 537 (1988).
4. M.B.Weissman, Rev.Mod.Phys., 65, 829 (1993).
5. G.N.Bochkov and Yu.E.Kuzovlev. UFN, 141, 151 (1983), English translation: Sov.Phys.-Usp., 26, 829 (1983).
6. B.Raquet, J.M.D.Coey, S.Wirth and S. Von Molnar, Phys.Rev., B59, 12435 (1999).
7. J.G.Massey and M.Lee, Phys.Rev.Lett., 79, 3986 (1997).
8. M.J.C. van den Homberg, A.H.Verbruggen, P.F.A.Alkemade, S.Radelaar, E.Ochs, K.Armbruster-Dagge, A.Seeger and H.Stoll, Phys.Rev., B 57, 53 (1998).
9. A.Ghosh, A.K.Raychaudhuri, R.Streekala, M.Rajeswari and T.Venkatesan, Phys.Rev., B 58, R14666 (1998).
10. X.Y.Chen, P.M.Koenrad, F.N.Hooge, J.H.Wolter and V.Aninkevicius, Phys.Rev., B 55, 5290 (1997).
11. G.M.Khera and J.Kakalios, Phys.Rev., B 56, 1918 (1997).
12. M.Gunes, R.E.Johanson and S.O.Kasap, Phys.Rev., B 60, 1477 (1999).
13. K.M.Abkemeier, Phys.Rev., B 55, 7005 (1997).
14. G.Snyder, M.B.Weisman and H.T.Hardner, Phys.Rev., B 56, 9205 (1997).
15. A.Lisauskas, S.I.Khartsev and A.M.Grishin, Studies of 1/f-noise in $`La_{1-x}M_xMnO_3`$ (M=Sr,Pb) epitaxial thin films, in J.Low.Temp.Phys. as MOS-99 Proceedings.
16. A.Lisauskas, S.I.Khartsev, A.M.Grishin and V.Palenskis, Electrical noise in ultra thin giant magnetoresistors, in Mat.Res.Soc.Proc. Spring-99 Meeting.
17. B.I.Shklovskii and A.L.Efros. Electronic properties of doped semiconductors, Springer-Verlag, Berlin, 1984.
18. V.I.Kozub, Solid State Commun., 97, 843 (1996).
19. V.Podzorov, M.Uehara, M.E.Gershenson and S.-W.Cheong, lanl arXiv cond-mat/9912064.
20. M.Viret, L.Ranno and J.M.D.Coey, Phys.Rev., B 55, 8067 (1997).
21. Yu.E.Kuzovlev and G.N.Bochkov. On the nature and statistics of 1/f-noise. Preprint No.157, NIRFI, Gorkii, USSR, 1982.
22. Yu.E.Kuzovlev and G.N.Bochkov.Izv.VUZov.-Radiofizika, 26, 310 (1983), transl. in Radiophysics and Quantum Electronics (RPQEAC, USA), No 3 (1983).
23. G.N.Bochkov and Yu.E.Kuzovlev. Izv.VUZov.-Radiofizika, 27, 1151 (1984), transl. in Radiophysics and Quantum Electronics (RPQEAC, USA), No 9 (1984).
24. G.N.Bochkov and Yu.E.Kuzovlev. On the theory of 1/f-noise. Preprint N 195, NIRFI, Gorkii, USSR, 1985.
25. Yu.E.Kuzovlev, lanl arXiv cond-mat/9903350 .
26. N.S.Krylov. Works on the foundations of statistical mechanics. Princeton U.P., Princeton, 1979 (Russian original book published in 1950).
27. Yu.A.Genenko and Yu.M.Ivanchenko, Teor.Mat.Fiz., 69, 142 (1986).
28. Yu.E.Kuzovlev, Zh. Eksp. Teor. Fiz, 94, No.12, 140 (1988), transl. in Sov.Phys.-JETP, 67 (12), 2469 (1988).
29. Yu.E.Kuzovlev, Phys.Lett., A 194, 285 (1994).
30. Yu.E.Kuzovlev, Zh. Eksp. Teor. Fiz, 111, No.6, 2086 (1997), transl. in JETP, 84(6), 1138 (1997).
31. P.Bak. Self-organized criticality: why nature is complex. Springer, N-Y, 1996.
32. J.M.D.Coey, M.Viret and S. von Molnar, Adv.Phys., 48, 167 (1999).
33. G.Casati and B.Chirikov. Fluctuations in quantum chaos. Preprint, Budker Inst. of Nuclear Physics SB RAS, 1993.
34. C.W.J.Beenakker, Rev.Mod.Phys., 69, N 3, 731 (1997).
35. L.Van Hove, Physica, 21, 517 (1955).
36. J.V.Mantese, W.I.Goldburg, D.H.Darling, H.G.Craighead, U.J.Gibson, R.A.Buhrman and W.W.Webb, Solid State Commun., 37, 353 (1981).
37. W.Feller, Introduction to probability theory and its applications, John Wiley, N-Y, 1966.
38. Yu.A.Genenko and Yu.M.Ivanchenko, Phys.Lett., 126, 201 (1987).
# Small-World Rouse Networks as models of cross-linked polymers
## I Introduction
Topological properties of polymers can dramatically affect their dynamical properties, such as their collapse in bad solvents and their response to external forces . Such forces can be applied microscopically, either by having charged polymers (polyelectrolytes, polyampholytes) in electrical fields, or via optical tweezers or magnetic beads. In this communication we study the stretching of cross-linked objects whose backbones are regular lattices (we will consider for simplicity a ring), a few elements of the backbone being chemically connected to each other via cross-links. An experimental realization may be a very dilute solution of linear chains which are then cross-linked by irradiation or chemically. Our model is a realization of the so-called small-world networks (SWN), and the disorder (the cross-links) is, in statistical terms, quenched. Considering all bonds to be equal, we study the dynamics in the framework of the Rouse model, in which the monomers are connected by harmonic springs; we term this structure the small-world Rouse network (SWRN). Our SWRN is built out of an N-monomer ring with superimposed fixed links between randomly chosen monomers; no additional links are generated or broken in external fields. The SWRN is a new intermediate between linear chains and networks. Distinct from Cayley trees, which model hyperbranched polymers without loops, loops are a fundamental ingredient here. As such the SWRN is interesting in its own right as a study of the interplay between dynamics and topology, and it belongs to the class of generalized Gaussian structures.
## II Dynamics of small-world Rouse networks
The construction of the SWN which we consider here (see also ) is slightly different from the original one by Watts and Strogatz, but preserves its main SWN characteristics. Starting from a ring of $`N`$ monomers, we cross-link with probability $`p`$ each monomer randomly to any of the monomers of the network. Such cross-links thus connect monomers far apart along the chemical backbone (the ring), rendering them close in Euclidean space. The SWRN monomers are, in accordance with the Rouse model, connected by harmonic springs of strength $`k`$. The position $`𝐑_n(t)`$ of the $`n`$th bead under the action of an external force $`𝐅_n(t)`$ and in the presence of thermal noise $`𝜼_n(t)`$ is governed by the Langevin equation
$$\gamma \frac{d𝐑_n(t)}{dt}=-k\sum _{j=1}^{N}A_{nj}𝐑_j(t)+𝐅_n(t)+𝜼_n(t).$$
(1)
Here $`\gamma `$ is the coefficient of friction, and the matrix $`A_{ij}`$ is the connectivity matrix of the network. It is defined as follows: every connection between sites $`i`$ and $`j`$ contributes $`-1`$ to $`A_{ij}`$, while $`A_{ii}`$ and $`A_{jj}`$ are determined from the condition that $`\sum _jA_{ij}=\sum _iA_{ij}=0`$. Monasson recently published a detailed study of the spectrum of the small-world connectivity matrix $`A`$ . Among his findings was the existence of a “pseudo-gap” in the SWN density of states $`\rho (E)`$, which has the form
$$\rho (E)\sim E^{-1/2}\mathrm{exp}\left(-\frac{C}{\sqrt{E}}\right),\qquad E\to 0$$
(2)
This behavior could not be confirmed numerically through direct diagonalization, and appeared in the data as a real gap. There are, as we shall see, other ways of probing this behavior numerically, and our results support that $`\rho (E)`$ behaves as Eq. (2).
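A minimal sketch of this construction and diagonalization (the random seed and the handling of occasional repeated links are incidental implementation choices):

```python
import numpy as np

def swrn_matrix(N, p, rng):
    """Connectivity matrix of an N-ring plus random cross-links (sketch)."""
    A = np.zeros((N, N))
    for i in range(N):                        # the ring backbone
        A[i, (i + 1) % N] -= 1.0
        A[(i + 1) % N, i] -= 1.0
    for i in range(N):                        # cross-link with probability p
        if rng.random() < p:
            j = int(rng.integers(N))
            if j != i:
                A[i, j] -= 1.0
                A[j, i] -= 1.0
    A[np.diag_indices(N)] = 0.0
    A[np.diag_indices(N)] = -A.sum(axis=1)    # rows sum to zero
    return A

rng = np.random.default_rng(42)
lam = np.linalg.eigvalsh(swrn_matrix(1000, 0.05, rng))
print(np.round(lam[:4], 4))   # one zero mode, then the small-eigenvalue region
```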
Writing
$$𝐑(t)\equiv (𝐑_1(t),𝐑_2(t),\mathrm{\dots },𝐑_N(t))^T$$
(3)
and similarly for the other quantities in Eq. (1), we can rewrite the equation of motion in the more condensed form
$$\frac{d𝐑(t)}{dt}=-\sigma A𝐑(t)+\frac{𝐅(t)}{\gamma }+𝐰(t).$$
(4)
Here we have introduced $`\sigma \equiv k/\gamma `$ and $`𝐰\equiv 𝜼/\gamma `$. In the case of a spatially constant external force, the solution of Eq. (4) is obtained as
$$𝐑(t)=\int _{-\infty }^tds\,e^{-\sigma A(t-s)}\left(\frac{𝐅(s)}{\gamma }+𝐰(s)\right)$$
(5)
We now specialize to the following situation: the force is switched on at time $`t=0`$ and pulls only the $`m`$th bead in the $`y`$-direction, i.e. $`𝐅_i(t)=\theta (t)\delta _{i,m}F\widehat{𝐲}`$. Here $`\theta (t)`$ is the Heaviside step-function, $`\delta _{i,j}`$ Kronecker’s delta and $`\widehat{𝐲}`$ is a unit vector pointing in the $`y`$-direction. We focus on the displacement of the $`m`$th bead along the $`y`$-axis, and average over the thermal noise, using $`\langle 𝐰(t)\rangle =0`$. Finally we perform a structural average over $`m`$ and end up with (see e.g. and references therein for details)
$$Y(t)\equiv \frac{1}{N}\sum _m\langle 𝐑_{m,y}(t)\rangle =\frac{F}{N\gamma }\int _0^tds\sum _ie^{-\sigma \lambda _is}=\frac{Ft}{N\gamma }+\frac{F}{N\gamma \sigma }\sum _{i=2}^{N}\frac{1-e^{-\sigma \lambda _it}}{\lambda _i}.$$
(6)
In this equation $`𝐑_{m,y}`$ is the $`y`$ component of $`𝐑_m`$ and the $`\lambda _i`$ with $`i=1,\mathrm{\dots },N`$ are the eigenvalues of the connectivity matrix $`A`$. The last expression follows from the fact that for a connected structure only one eigenvalue vanishes (say, $`\lambda _1`$). At times $`t`$ much smaller than the time scale set by the largest eigenvalue $`\lambda _{max}`$, i.e. when $`\sigma \lambda _{max}t\ll 1`$, $`Y(t)`$ increases linearly in time: $`Y(t)\approx Ft/\gamma `$. That is, only the monomer being pulled moves with a constant speed, not yet feeling the influence of the other monomers. Likewise, at late times $`\sigma \lambda _{min}t\gg 1`$, where $`\lambda _{min}`$ is the lowest non-vanishing eigenvalue, the entire polymer is being pulled with a constant speed, $`Y(t)\approx Ft/(N\gamma )`$. These observations are independent of the specific structure being pulled, and only in the intermediary regime $`\lambda _{max}^{-1}\ll \sigma t\ll \lambda _{min}^{-1}`$ does the particular topology of the polymer affect the dynamics, namely through the spectrum of the connectivity matrix.
The numerical computation of the quantity $`Y(t)`$ above proceeds as follows. From a specific realization of an $`N=1000`$ small-world network, we construct the corresponding connectivity matrix. Then we find the $`N`$ eigenvalues using standard routines and implement Eq. (6). To get an idea of the importance of sample-to-sample fluctuations, we first consider $`10`$ different realisations of the SWRN for $`p=0.05`$. In Fig. 1 we plot, on double logarithmic scales, $`Y`$ as a function of $`t`$, where here and in the following we use the dimensionless variables $`Y^{*}(t)\equiv \sigma \gamma Y(t)/F`$ and $`t^{*}\equiv \sigma t`$. In Fig. 1 we display the envelope of all $`10`$ realizations, i.e. the two curves are the two extremal “worst” cases. We see that the difference in the results is quite small (and appears, as it should, only at intermediary times), and we therefore regard the results from any specific realization as being typical. In Fig. 2 we analyze the dependence of $`Y(t)`$ on the cross-linking, by varying $`p`$ from $`p=0`$ (the standard Rouse model of the ring) to $`p=0.01`$ and $`p=0.05`$. We note first that the differences are now considerably larger than in Fig. 1. Second, for $`p=0`$, i.e. for the Rouse chain, we have the standard picture: a subdiffusive $`\sqrt{t}`$ behavior at intermediary times is followed by a diffusive $`t`$ behavior at longer times. At very early times we also have a linear behavior, albeit not visible in the range of the figure. The initial and final dynamics are in accordance with the explanation given above. The intermediate behavior reflects the structure of the spectrum of $`A`$, and is also well understood in the Rouse case; it is a result of the rather slow propagation of disturbances through the chain (here the ring).
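A sketch of this procedure in the dimensionless variables, reusing the swrn_matrix() helper from the construction sketch above:

```python
import numpy as np

def Y_star(tstar, lam, N):
    """Dimensionless Eq. (6): Y* = t*/N + (1/N) * sum (1 - exp(-lam t*)) / lam."""
    lam_pos = lam[lam > 1e-10]        # discard the single vanishing eigenvalue
    return tstar / N + np.sum((1.0 - np.exp(-lam_pos * tstar)) / lam_pos) / N

N = 1000
rng = np.random.default_rng(7)
lam = np.linalg.eigvalsh(swrn_matrix(N, 0.05, rng))   # from the sketch above
for tstar in (1e-2, 1e0, 1e2, 1e4):
    print(f"t* = {tstar:8.2e}   Y* = {Y_star(tstar, lam, N):.4f}")
```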
We infer from the other curves in Fig. 2 that even a very small but nonvanishing $`p`$ affects the intermediate behavior of $`Y(t)`$ quite strongly. For increasing $`p`$ the curves bend downwards from the $`p=0`$ case, mirroring the increased stiffness of the polymer due to the additional links. Thus a ring with cross-links can easily be distinguished experimentally (say through NMR or electronic energy transfer) from one without cross-links, whose $`Y(t)`$ dynamics under $`𝐅`$ is never slower than $`\sqrt{t}`$. As expected, the very early and very late behaviors in all three cases coincide, being independent of the specific structure under scrutiny.
In Fig. 3 we plot on logarithmic scales $`Y^{*}(t)`$ as a function of time $`t^{*}`$ for several values of $`p`$. Increasing $`p`$ increases the stiffness of the polymer, and this is reflected in the intermediary regime, which becomes almost flat for large $`p`$. Moreover, the long-time behavior of the polymer is reached much earlier for polymers with a higher number of cross-links. In line with the discussion above of the range of the intermediary regime ($`\lambda _{max}^{-1}\ll \sigma t\ll \lambda _{min}^{-1}`$), this feature means that the lowest non-vanishing eigenvalue $`\lambda _{min}`$ gets quite large, and it is hence related to the appearance of a (pseudo) gap in the spectrum of $`A`$.
## III Theoretical Analysis
As indicated earlier, the initial as well as the asymptotic behavior of $`Y(t)`$ are well understood. Thus we concentrate here on the richer and much more complex intermediate behavior. We shall rewrite Eq. (6) in a continuous picture, based on the density of states $`\rho (\lambda )=\lim _{N\to \infty }(1/N)\sum _i\delta (\lambda -\lambda _i)`$, but continue to separate out the vanishing eigenvalue from the rest. Hence, with $`\epsilon `$ very small, $`\epsilon \to 0^+`$:
$$Y(t)=\frac{F}{N\gamma }\int _0^tds\sum _ie^{-\sigma \lambda _is}=\frac{Ft}{N\gamma }+\frac{F}{\gamma }\int _0^tds\int _\epsilon ^{\infty }d\lambda \,\rho (\lambda )e^{-\sigma \lambda s},$$
(7)
an expression which is a fortiori correct in the presence of a gap, where one can take $`0<\epsilon <\lambda _{min}`$. It will also be convenient to consider the stretching (relative motion) $`\Delta (t)`$ separately:
$$\Delta (t)\equiv Y(t)-\frac{Ft}{N\gamma }=\frac{F}{\gamma }\int _0^tds\int _{0^+}^{\infty }d\lambda \,\rho (\lambda )e^{-\sigma \lambda s}.$$
(8)
We remark that the inner integral in Eq. (8) is related to the probability for a random walker to be present at the original site. We therefore first analyse the behavior of this quantity:
$$P_0(t)\equiv \int _0^{\infty }d\lambda \,\rho (\lambda )e^{-\lambda t}$$
(9)
The asymptotic temporal behavior is accessed through the behavior of $`\rho (\lambda )`$ at small $`\lambda `$. Inserting Monasson’s expression, Eq. (2), into Eq. (9) we obtain:
$$P_0(t)\propto \int _0^{\infty }d\lambda \,\lambda ^{-1/2}\mathrm{exp}\left(-\frac{C}{\sqrt{\lambda }}-\lambda t\right)=-t^{-2/3}\frac{d}{dC}\int _0^{\infty }dy\,\mathrm{exp}\left(-t^{1/3}\left(\frac{C}{\sqrt{y}}+y\right)\right)$$
(10)
The asymptotic behavior of the integral follows readily from a saddle-point procedure , so that we end up with
$$P_0(t)\sim t^{-1/2}\mathrm{exp}(-C^{\prime }t^{1/3}),$$
(11)
with $`C^{\prime }=3\left(C/2\right)^{2/3}`$. This is by itself a quite interesting and novel result, and it compares favourably to our numerical simulations for $`P_0(t)`$. The result is close in form to that for Cayley trees, where $`P_0(t)\sim t^{-3/2}\mathrm{exp}(-ct)`$. Inserting Eq. (11) in Eq. (8) and reintroducing $`\sigma `$, we get
$$\Delta (t)\approx \frac{3F}{C^{\prime }\gamma \sigma }\left(\frac{1}{2}\sqrt{\frac{\pi }{C^{\prime }}}-(\sigma t)^{1/6}e^{-C^{\prime }(\sigma t)^{1/3}}\right).$$
(12)
Notice that for very large $`t`$ the stretching $`\Delta (t)`$ of the SWRN tends to a constant $`\Delta _{\infty }`$. In the units of our figures this constant depends mainly on $`C^{\prime }`$ (since $`F`$, $`\gamma `$ and $`\sigma `$ drop out); theoretically one may obtain $`C^{\prime }`$, and also $`C`$, from $`\Delta _{\infty }`$.
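The saddle-point asymptotics can be checked by direct quadrature: the logarithmic decay rate of $`P_0(t)`$ approaches $`C^{\prime }=3(C/2)^{2/3}`$ from above as the $`t^{-1/2}`$ prefactor becomes negligible. The value $`C=1`$ below is an arbitrary illustrative choice.

```python
import numpy as np

C = 1.0
Cp = 3.0 * (C / 2.0) ** (2.0 / 3.0)      # C' = 3*(C/2)^(2/3)
lam = np.logspace(-8, 3, 200000)         # dense log grid over the support

def P0(t):
    f = lam ** -0.5 * np.exp(-C / np.sqrt(lam) - lam * t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))   # trapezoid rule

for t in (1e1, 1e2, 1e3):
    print(f"t = {t:6.0f}   -ln(P0)/t^(1/3) = {-np.log(P0(t)) / t ** (1.0/3.0):.3f}"
          f"   (C' = {Cp:.3f})")
```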
In Fig. 4 we plot the dimensionless stretching $`\Delta ^{*}(t)\equiv Y^{*}(t)-\sigma t/N`$ for $`p=0.05`$ and compare it to the analytical form Eq. (12). We do this by fitting $`a-bt^{1/6}\mathrm{exp}(-ct^{1/3})`$ to the data, and as can be inferred from Fig. 4, the agreement is very convincing. From the least-squares fit we obtain $`a=5.09`$, $`b=7.67`$ and $`c=0.54`$. We remark that the agreement at short times may be rendered even better by also keeping the next term in the expansion of the integral of $`P_0(t)`$, a term which is proportional to $`t^{-1/6}\mathrm{exp}\left(-C^{\prime }t^{1/3}\right)`$. Furthermore we remark that for Cayley trees the intermediate behavior of $`Y(t)`$ can also be determined in a similar manner: the saddle-point approach yields to leading order:
$$Y(t)\approx \stackrel{~}{a}-\stackrel{~}{b}\,t^{-3/2}\mathrm{exp}(-\stackrel{~}{c}t),$$
(13)
with $`\stackrel{~}{a}`$, $`\stackrel{~}{b}`$ and $`\stackrel{~}{c}`$ being constants.
## IV Conclusions
In this communication we have studied the behavior of a small-world network model (the SWRN) of a cross-linked ring polymer, focusing on its dynamics under external forces. The motion of a monomer pulled by such a force is vastly different depending on whether the monomer belongs to a SWRN or to a simple ring without cross-links. This may enable one, via NMR or electronic energy transfer, to distinguish clearly between cross-linked and non-cross-linked polymers. Our numerical results for the stretching of the SWRN under external forces are in excellent agreement with our analytical expressions, which used the pseudo-gap behavior of the SWN density of states, as postulated in former work. As discussed in the present communication, these results are directly connected to expressions for the return to the origin of a random walker on the SWN.
###### Acknowledgements.
The support of the DFG, of the GIF through grant I0423-061.14, and of the Fonds der Chemischen Industrie are gratefully acknowledged.
FIGURE CAPTIONS
FIG. 1. Two different realizations give rise to similar behavior of $`Y(t)`$, here plotted on logarithmic-logarithmic scales for $`p=0.05`$.
FIG. 2. On double logarithmic scales we plot the position $`Y(t)`$ as a function of time. From upper to lower curve, $`p=0`$, $`p=0.01`$ and $`p=0.05`$.
FIG. 3. On double logarithmic scales we plot the position $`Y(t)`$ as a function of time. From upper to lower curve, $`p=0.01`$, $`p=0.05`$, $`p=0.1`$, $`p=0.2`$, $`p=0.5`$ and $`p=0.8`$.
FIG. 4. Comparison of the theoretical prediction (dashed) with the data (dash-dotted), for $`p=0.05`$.
# The Influence of Galactic Outflows on the Formation of Nearby Dwarf Galaxies
## 1. Introduction
Outflows from dwarf starbursting galaxies have long been the subject of theoretical and observational investigation. Since the 1970s, theoretical work has shown that supernovae (SNe) and OB winds in these low-mass objects produce energetic outflows that shock and enrich the intergalactic medium (IGM) in which they are embedded (Larson 1974; Dekel & Silk 1986; Vader 1986). This behavior has been clearly identified in studies of both local starbursting galaxies (Axon & Taylor 1978; Heckman 1997; Martin 1998) and their environments (della Ceca et al. 1996; Bomans, Chu, & Hopp 1997), as well as with spectroscopy of high-$`z`$ galaxies (Franx et al. 1997; Pettini et al. 1998; Frye, Broadhurst & Spinrad 1999).
Whether these outflows lead to a catastrophic loss of the interstellar gas in these objects is much more uncertain, with recent simulations and observations suggesting that dwarfs are very inefficient in removing gas from their cores (Mac Low & Ferrara 1999; Murakami & Babul 1999). However, the issue of whether dwarfs retain a sizeable fraction of their gas is to some degree decoupled from the formation of outflows. In galaxies in the mass range $`10^7M_{}\lesssim M\lesssim 10^9M_{}`$, Mac Low & Ferrara (1999) have shown that a “blowout” occurs in which the superbubbles produced by multiple SNII explosions punch out of the galaxy, shocking the surrounding IGM while failing to excavate the interstellar medium of the galaxy as a whole.
In the generally investigated hierarchical models of structure formation, low-mass galaxies form in large numbers at early times (e.g. White & Frenk 1991) and are highly clustered (Kaiser 1984). The existence of a large number of small and clustered galaxies at early times is also favored observationally by the steep number counts and low luminosities of faint galaxies (Broadhurst, Ellis & Glazebrook 1992), the sizes of faint galaxies in the Hubble Deep Field (HDF) (Bouwens, Broadhurst, & Silk 1999a,b), and the clustering properties of Lyman-break galaxies (Adelberger et al. 1998). Locally, however, the number density of low-mass objects is far less than predicted theoretically (Ferguson & Binggeli 1994; Klypin et al. 1999; Moore et al. 1999), leading to theoretical studies of the disruption of dwarfs by tidal forces from neighboring objects (Moore, Lake, & Katz 1998) and external UV radiation (Norman & Spaans 1997; Corbelli, Galli, & Palla 1997; Ferrara & Tolstoy 2000).
Little attention has been directed, however, toward the influence of outflows on the formation of neighboring galaxies, despite the fact that typical outflow velocities exceed the virial velocities of objects in the mass range in which the overabundance is most severe. In a companion article (Scannapieco & Broadhurst 2000) we describe the effects of heating and enrichment by outflows on galaxy formation using a Monte Carlo treatment of hierarchical structure formation. Here we show, using simple scaling arguments, that suppression of galaxy formation by outflows from dwarf galaxies is important over a large range of halo masses, irrespective of the details of cosmological simulations.
The structure of this work is as follows. In §2 we consider two scenarios for the suppression of dwarf galaxy formation by outflows. In §3 we show that the outflow models considered fall well within the bounds of current observations. Conclusions are given in §4.
## 2. Suppression of Galaxy Formation
We consider two processes by which outflows from neighboring dwarfs inhibit the formation of a galaxy. In the “mechanical evaporation” scenario, the gas associated with an overdense region is heated by a shock to above its virial temperature. The thermal pressure of the gas then overcomes the dark matter potential and the gas expands out of the halo, preventing galaxy formation. In this case, the cooling time of the collapsing cloud must be longer than its sound crossing time; otherwise the gas will cool before it expands out of the gravitational well and will continue to collapse.
Alternatively, the gas may be stripped from a perturbation by a shock from a nearby source. In this case, the momentum of the shock is sufficient to carry with it the gas associated with the neighbor, thus emptying the halo of its baryons and preventing a galaxy from forming. In the following we evaluate the importance of these two effects in turn.
### 2.1. Mechanical Evaporation
The first mechanism we consider is the shock heating of the halo gas to a temperature sufficient to cause the gas to evaporate into the IGM. This will happen for cases in which the virial temperature of the object $`T_v`$ is less than the postshock temperature $`T_s`$, where $`T_s`$ can be written for an adiabatic shock as
$$T_s=\frac{3\mu m_p}{16k}v_s^2=14v_{s,km/s}^2\mathrm{K},$$
(1)
where $`\mu `$ is the molecular weight of the gas, which we take to be 0.6, $`m_p`$ is the proton mass, $`k`$ is Boltzmann’s constant, and $`v_s`$ is the velocity of the expanding shock.
For the typical outflows we will be considering, the blast wave reaches the neighboring perturbation of interest within one cooling time $`t_c\equiv \frac{kT_s}{\overline{n}_e\mathrm{\Lambda }}`$, where $`\overline{n}_e=4.5\times 10^{-7}(1+z)^3h^2\mathrm{cm}^{-3}`$ is the average number density of electrons, $`\mathrm{\Lambda }`$ is the cooling function, which we normalize to $`\mathrm{\Lambda }_{22}\equiv \mathrm{\Lambda }/10^{-22}`$ erg cm<sup>3</sup> s<sup>-1</sup>, and $`h`$ is the Hubble constant in units of 100 km/s/Mpc. Here and below we take the baryonic density in units of the critical density to be $`\mathrm{\Omega }_b=0.05`$. Thus the shock velocity can be approximated by a Sedov-Taylor blast wave solution,
$$v_s=1600(ϵ𝒩h)^{1/2}d_{c,\mathrm{kpc}}^{-3/2}\mathrm{km}/\mathrm{s},$$
(2)
where $`𝒩`$ is the number of supernovae driving the bubble (each with a total energy $`2\times 10^{51}`$ erg to take into account the contribution from stellar winds), $`ϵ`$ is the total-to-kinetic energy conversion efficiency, and $`d_{c,\mathrm{kpc}}`$ is the comoving distance from the explosion site in units of $`h^{-1}`$ kpc. The Compton drag on the expanding shock as well as the Hubble expansion are easily shown to be negligible for our purposes in this paper (Ferrara 1998; Scannapieco & Broadhurst 2000). From the relation $`T_v=70M_6^{2/3}(1+z)\mathrm{\Omega }_0^{1/3}`$ K, where $`M_6\equiv M/(10^6M_{}/h)`$ and $`\mathrm{\Omega }_0`$ is the matter density in units of the critical density, it follows that collapsed objects with total masses lower than $`M_6\simeq 3.5\times 10^8(1+z)^{-3/2}(ϵ𝒩h)^{3/2}d_{c,\mathrm{kpc}}^{-9/2}\mathrm{\Omega }_0^{-1/2}`$ will be affected by shock heating and evaporation. We estimate $`d_{c,\mathrm{kpc}}`$ as the mean spacing between objects of a mass scale $`M`$, which is appropriate as the outflowing objects and the forming density perturbations of interest are of roughly the same mass scales:
$$d_{c,\mathrm{kpc}}\simeq \left(\frac{3M}{4\pi \rho _c\mathrm{\Omega }_0}\right)^{1/3}\simeq 10M_6^{1/3}\mathrm{\Omega }_0^{-1/3},$$
(3)
which gives
$$M_6\lesssim 41(1+z)^{-3/5}(ϵ𝒩h)^{3/5}\mathrm{\Omega }_0^{2/5}.$$
(4)
Assuming that one SN occurs for every 100 $`M_{}`$ of baryons which form stars (see e.g. Gibson 1997), we can relate the number of SNe and the mass of the halo as $`𝒩h=500M_6ϵ_{sf}\mathrm{\Omega }_0^{-1},`$ where $`ϵ_{sf}`$ is the initial star formation efficiency. Assuming an initial star formation efficiency of $`ϵ_{\mathrm{sf}}=0.1`$ and a kinetic energy conversion efficiency of $`ϵ=0.2`$, we can estimate $`ϵ𝒩h`$ as approximately $`10M_6\mathrm{\Omega }_0^{-1}`$, and thus we expect $`ϵ𝒩h\simeq 5000\mathrm{\Omega }_0^{-1}`$ for typical dwarf-galaxy sized objects of $`\sim 5\times 10^8M_{}`$.
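The arithmetic of the last two sentences can be chained together explicitly; the following sketch (ours, with $`\mathrm{\Omega }_0=1`$ inserted only as an example value) reproduces the quoted estimate:

```python
# illustrative check of eps*N*h ~ 5000/Omega_0 for a 5e8 Msun halo
eps = 0.2        # kinetic energy conversion efficiency
eps_sf = 0.1     # initial star formation efficiency
Omega0 = 1.0     # example value; the text keeps Omega_0 general
M6 = 500.0       # halo mass in units of 1e6 Msun/h, i.e. 5e8 Msun
Nh = 500.0 * M6 * eps_sf / Omega0   # N*h = 500 M6 eps_sf / Omega_0
print(eps * Nh)                     # -> 5000.0
```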
The relation given by Eq. (4) is shown by the upper lines in Fig. 1 for $`ϵ𝒩h=10^4`$. Also shown for reference on this plot is the mass below which effects due to photoevaporation are important ($`M_6\lesssim 4.4\times 10^3(1+z)^{-3/2}\mathrm{\Omega }_0^{-1/2}`$). Galaxies with masses below this limit are readily evaporated by the UV background after the reionization of the universe (Barkana & Loeb 1999; Ferrara & Tolstoy 2000).
#### 2.1.1 Cooling
To suppress galaxy formation, however, it is not enough only to heat the halo gas, because the gas could cool rapidly and fall back towards the center of the object. Therefore Eq. (4) is a necessary but not sufficient condition for mechanical evaporation. The second condition requires that the gas remain hot for the time required to escape the system, i.e., that the cooling time be longer than the sound crossing time, $`\frac{kT_s}{n_e\mathrm{\Lambda }}\gtrsim \ell /c_s`$, where $`\ell `$ is the size of the halo, $`c_s=(kT_s/\mu m_h)^{1/2}`$ is the sound speed, and $`n_e=\overline{n}_e\delta `$, where $`\delta \equiv \rho /\rho _0`$ is the overdensity of the region and $`\rho _0`$ is the mean density of the universe. This gives
$$M_6\lesssim 2.3(ϵ𝒩h)^{9/11}(1+z)^{-12/11}\delta ^{-4/11}h^{-6/11}\mathrm{\Omega }_0\mathrm{\Lambda }_{22}^{-6/11},$$
(5)
which forms the upper bound of the cross-hatched region in Figure 1. Here we see that for both cosmological models the cooling condition vastly undercuts the mass limit for disruption by heating, dropping below the photoevaporation limit in the $`\mathrm{\Lambda }`$CDM model. While it is relatively easy for galactic outflows to shock low-density gas to above the virial temperature of the potential, this excess energy is efficiently radiated away for most large clouds.
### 2.2. Baryonic Stripping
The second possibility is that all the gas is stripped from the perturbation by the impinging shock. We may estimate that this will occur when the shock moves through the center of the halo with sufficient momentum to accelerate the gas to the escape velocity of the halo: $`fM_sv_s>M_cv_e`$, where $`f=\ell ^2/4d^2`$ is the solid-angle fraction of the shell impinging on the lump, $`M_s`$ is the mass of material swept up by the expanding shock, $`M_c=(4\pi /3)\rho _c\mathrm{\Omega }_b\delta \ell ^3`$ is the baryonic mass of the cloud whose radius is $`\ell `$, and $`v_e=(GM/\ell )^{1/2}`$ is the escape speed from the cloud. Solving for the mass that can be swept up as a function of redshift and replacing $`d_{c,\mathrm{kpc}}`$ by the mean separation as estimated from Eq. (3) gives
$$M_6\lesssim 53\delta ^{-1}(1+z)^{-3/5}(ϵ𝒩h)^{3/5}\mathrm{\Omega }_0^{2/5},$$
(6)
which is also plotted in Figure 1. Here we see that while it is significantly easier to heat a cloud than to sweep away its baryons, the short cooling times of most halos cause baryonic stripping to have the greatest impact on the formation of nearby galaxies. Note that in this plot the density is taken to be only twice that of the mean density of the universe. Thus it is important that the perturbation be in the linear or weakly nonlinear regime for this mechanism to be effective. As the time between turn-around and virialization is relatively short, however, most protogalaxies are likely to be at these low density contrasts when shocked by neighboring objects.
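To make the comparison between the three suppression channels concrete, the sketch below (illustrative only; $`\mathrm{\Omega }_0`$ and $`h`$ are assumed example values, while $`ϵ𝒩h=10^4`$, $`\delta =2`$ and $`\mathrm{\Lambda }_{22}=1`$ follow the text) evaluates Eqs. (4)–(6) as functions of redshift:

```python
# Illustrative evaluation of the suppression thresholds, Eqs. (4)-(6).
import numpy as np

eNh = 1.0e4      # epsilon * N * h (fiducial value from the text)
Omega0 = 0.35    # matter density parameter (assumption for this sketch)
h = 0.65         # Hubble parameter (assumption for this sketch)
delta = 2.0      # overdensity of the forming perturbation
Lam22 = 1.0      # cooling function in units of 1e-22 erg cm^3/s

z = np.arange(0.0, 16.0, 3.0)

# Eq. (4): shock heating above T_v (masses in units of 1e6 Msun/h)
M_evap = 41.0 * (1 + z) ** (-3 / 5) * eNh ** (3 / 5) * Omega0 ** (2 / 5)

# Eq. (5): heating plus the requirement that cooling stays slow
M_cool = (2.3 * eNh ** (9 / 11) * (1 + z) ** (-12 / 11) * delta ** (-4 / 11)
          * h ** (-6 / 11) * Omega0 * Lam22 ** (-6 / 11))

# Eq. (6): baryonic stripping by the shock's momentum
M_strip = 53.0 / delta * (1 + z) ** (-3 / 5) * eNh ** (3 / 5) * Omega0 ** (2 / 5)

for zi, me, mc, ms in zip(z, M_evap, M_cool, M_strip):
    print(f"z={zi:4.1f}  M6_evap={me:8.0f}  M6_cool={mc:8.0f}  M6_strip={ms:8.0f}")
```

At all redshifts the cooling-limited mass falls far below the pure heating threshold, while the stripping threshold stays close to it, which is the behavior described above.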
## 3. Observational Constraints
### 3.1. Compton y-parameter
While widespread baryonic stripping of dwarf galaxies is a generic consequence of associating outflows typical of local dwarfs with primordial dwarf galaxies, it is important to check that this extrapolation to higher redshift is consistent with observational constraints. An unavoidable result of widespread IGM heating is the presence of spectral distortions in the cosmic microwave background (CMB). The degree of these distortions is given by the Compton $`y`$ parameter, which is simply the convolution of the optical depth with the electron temperature along the line of sight (Zel’dovich & Sunyaev 1969; Sunyaev & Zel’dovich 1972). To calculate the mean $`y`$ we make use of the fact that the total optical depth within ionized regions must be within the observational limit of $`\tau \lesssim 0.5`$ (Griffiths, Barbosa, & Liddle 1999). As the total optical depth within outflows must be less than the total optical depth overall, we can estimate this convolution as
$$y\approx \frac{\tau }{V_4}\left(\int _{r_\mathrm{i}}^{r_{10^4}}\frac{kT_s(r)}{m_ec^2}4\pi r^2𝑑r\right),$$
(7)
where $`r_i`$ is the initial size of the blast, which we take to be $`1h^{-1}`$ comoving kpc, and $`V_4`$ is the volume defined by the radius outside which the temperature of the blast drops below the ionization temperature of hydrogen, $`10^4`$ K; this radius is readily estimated from Eqs. (1) and (2) to be equal to $`15(ϵ𝒩h)^{1/3}h^{-1}(1+z)^{-1}`$ kpc.
This gives
$$y\approx \tau \frac{k10^4\mathrm{K}}{m_ec^2}\mathrm{ln}(15(ϵ𝒩h)^{1/3})\approx 1.4\times 10^{-5},$$
(8)
where we take $`(ϵ𝒩h)=10^4`$. This is within the bounds set by the COBE data of $`y\lesssim 1.5\times 10^{-5}`$ (Fixsen et al. 1996), but only marginally. This suggests that future CMB experiments may help to constrain the degree of IGM heating by outflows.
### 3.2. Point Source Luminosities
A second consequence of widespread dwarf outflows is the presence of objects seen as point sources in deep surveys. A SN number of $`𝒩\approx 5\times 10^4`$ implies that the local star formation rate is roughly $`0.5M_{}`$/yr. The question arises whether objects with these luminosities could have escaped detection as point sources in the HDF. Assuming a Salpeter initial mass function, Ciardi et al. (2000) give the luminosity per unit frequency at the Lyman limit obtained from the adopted spectrum of a primordial galaxy at early evolutionary times as $`j_0=4\times 10^{20}M_{}\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1},`$ where $`M_{}`$ is the stellar mass of the object. Scaling to the stellar mass value implied by the assumed efficiency of massive star formation, $`M_{}=5\times 10^6M_{}`$, we find for the luminosity of the SN parent cluster of stars $`L_\nu \approx 6\times 10^{42}`$ erg/s at the Lyman limit. The observed flux is then $`F(\nu _0)=L_\nu (1+z)/4\pi d_L^2,`$ where $`\nu _0=\nu /(1+z)`$ is the observed frequency. This gives approximately $`F\approx 12(1+z)^{-1}h^2`$ nJy. In order to be observed in the HDF optical filter centered at 6060Å (V<sub>606</sub>), the object should be located at redshift $`1+z=6.6`$, thus with a flux equal to $`2h^2`$ nJy. This value corresponds to an AB magnitude $`V_{AB}=30.6`$, which is well below the limiting magnitude of the HDF for point sources (typically 28 mag; see Mannucci & Ferrara 1999). However, these objects may be detectable by the Next Generation Space Telescope, which should reach AB magnitudes of order 32 in the near infrared, although dust extinction may complicate this analysis.
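The magnitude quoted above follows from the standard AB zero point; a short check (with $`h=1`$, as in the quoted flux) reads:

```python
# AB magnitude of a ~2 nJy point source (AB zero point: 3631 Jy)
import math

F_Jy = 2.0e-9                            # 2 nJy, the flux quoted in the text
V_AB = -2.5 * math.log10(F_Jy / 3631.0)
print(f"V_AB = {V_AB:.1f}")              # ~30.6, as in the text
```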
## 4. Conclusions
Using simple scaling arguments, we have shown that outflows from low-mass galaxies play an important role in the suppression of the formation of nearby dwarf galaxies. While the details of this process depend on the spatial distribution of forming galaxies, and can only be studied in the context of cosmological simulations (Scannapieco & Broadhurst 2000), the overall implications of this mechanism can be understood from the general perspective of hierarchical structure formation.
In principle, outflows can suppress the formation of nearby galaxies both by shock heating and evaporation and by stripping of the baryonic matter from collapsing dark matter halos; in practice, the short cooling times for most dwarf-scale collapsing objects suggest that the baryonic stripping scenario is almost always dominant. This mechanism has the largest impact on forming dwarfs in the $`\sim 10^9M_{}`$ range, which is sufficiently large to resist photoevaporation by UV radiation, but too small to avoid being swept up by nearby dwarf outflows.
It is interesting to note that numerical studies working from a different perspective have similarly shown that momentum transfer is the most important feedback mechanism in determining the internal structure of larger galaxies. While SN heating is relatively ineffective at regulating star formation in numerical models, accounting for the kinetic component of SN feedback has resulted in quasi-stationary models of disc galaxies with reasonable star formation rates (see eg. Katz 1992; Navarro & White 1993; Mihos & Hernquist 1994; Springel 2000).
Various analytical and N-body studies (Kauffmann, White, & Guiderdoni 1993; Klypin et al. 1999; Moore et al. 1999) have shown that $`\sim 50`$ satellites with circular velocities $`\sim 20`$ km/s and masses $`\sim 10^9M_{}`$ should be found within 600 kpc of the Galaxy, while only 11 are observed. While the fraction of objects suppressed is dependent on the assumed cosmology and spatial distribution of galaxies, the partial suppression of the formation of objects in this mass range is a general consequence of baryonic stripping by outflows. Note that dark matter halos which are subject to sweeping are likely to accrete some gas at later times, and thus this scenario provides a natural mechanism for the formation of “dark” Milky Way satellites, which may be associated with the abundant High-Velocity Clouds as discussed by Blitz et al. (1999). Also, this population of halos could be identified with the low-mass tail of the distribution of the dark galaxies that reveal their presence through gravitational lensing of quasars (Hawkins 1997).
The existence of an era of widespread IGM heating through outflows is consistent with current CMB constraints, but causes a mean spectral distortion in the range that will be probed by the next generation of experiments. Similarly, the luminosities of the majority of the starbursting dwarfs that contribute to this process are beyond the limiting magnitudes of the Hubble Deep Field, but may be observable with the Next Generation Space Telescope. Thus it may be that in the near future we can directly observe the era in which dwarf galaxies were first formed and the impact of galactic outflows on this process.
We would like to thank Marc Davis, Joseph Silk, and an anonymous referee for helpful comments and discussions. This research was supported in part by the National Science Foundation under Grant PHY94-07194.
# A finite-energy solution in Yang-Mills theory and quantum fluctuations.
## 1 Introduction.
This paper is devoted to the construction of a stable, finite-energy and compact gluon object in Yang-Mills theory with a nonstandard lagrangian. This lagrangian consists of the pure Yang-Mills lagrangian $`\mathcal{L}_{YM}=-\frac{1}{4}(F_{\mu \nu })^a(F^{\mu \nu })^a`$ and a higher derivative term $`\mathrm{\Delta }\mathcal{L}`$ related to quantum fluctuations of the gluon field. Such an approach to the investigation of quantum fluctuations was introduced in a series of papers in the middle of the 80’s. In these papers the IR low-energy limit of QCD was studied, and it was demonstrated that the contribution from quantum fluctuations can be taken into account by modifying the QCD lagrangian, namely by introducing into it additional higher derivative terms like $`ϵ^{abc}(F_{\mu \nu })^a(F^{\nu }{}_{\rho })^b(F^{\rho \mu })^c`$ or $`(D_\rho F_{\mu \nu })^a(D^\rho F^{\mu \nu })^a`$. From a methodological point of view, the lagrangians obtained within this approach are very similar to the well-known Euler-Heisenberg effective lagrangian in QED. As a result, in the leading approximation for the gluon field one obtains a theory with a new lagrangian $`\mathcal{L}=\mathcal{L}_{YM}+\mathrm{\Delta }\mathcal{L}`$ for the $`c`$-number field $`A_{eff}`$, and this classical Yang-Mills field is the average of the initial gluon field $`A_0`$ over quantum fluctuations: $`A_{eff}=\langle A_0\rangle `$. The investigation of classical solutions in such effective theories was discussed in the context of the color confinement problem; a very similar problem has also been discussed elsewhere.
There are many approaches to obtaining finite-energy compact gluon objects from QCD. The crucial point here stems from the fact that the pure classical Yang-Mills theory has no such solutions, by reason of scale invariance. It was shown that the typical spherically symmetrical solution of pure Yang-Mills theory is an infinite-energy solution with a singularity on a sphere of finite radius. Therefore, any approach to finding gluon clusters is based on various modifications of the lagrangian. For example, the well-known monopole solution exists thanks to the interaction of the Yang-Mills field with a matter field. Furthermore, some kinds of finite-energy gluon objects appear in the lattice approach to QCD. Such solutions are called glueballs. A glueball on the lattice is a quantum object having no analogue in classical field theory. Unfortunately, such colorless gluon objects have not yet been observed experimentally, and only model predictions exist for physical characteristics like the mass, the effective radius and so on.
In the present work we try to find similar quantum compact finite-energy objects by using the effective low-energy approach to QCD discussed above.
## 2 Finite-energy gluon clusters.
In this section we investigate the classical Yang-Mills theory with a nonstandard modified lagrangian
$$\mathcal{L}_{YM}^{\epsilon }=-\frac{1}{4}(F_{\mu \nu })^a(F^{\mu \nu })^a-\frac{\epsilon ^2}{6}ϵ^{abc}(F_{\mu \nu })^a(F^{\nu }{}_{\rho })^b(F^{\rho \mu })^c$$
(1)
where $`\epsilon =1/M`$ is an inverse-mass dimensional parameter characterizing the intensity of the quantum fluctuations; $`(F_{\mu \nu })^a=\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a+ϵ^{abc}A_\mu ^bA_\nu ^c`$ and $`(D_\mu )^{ab}=\delta ^{ab}\partial _\mu +ϵ^{abc}A_\mu ^c`$. Here we deal with the $`SU(2)`$ Yang-Mills field.
This form of the modification of the Yang-Mills lagrangian is chosen because the resulting theory contains only second-order derivative terms. Thus the dynamics of this field theory can be studied in detail.
Using the variation principle, we get the equation of motion
$$D_\mu ^{ab}(F^{\mu \nu }-\epsilon ^2G^{\mu \nu })^b=0.$$
(2)
where $`(G^{\mu \nu })^a=ϵ^{abc}(F^{\nu }{}_{\rho })^b(F^{\rho \mu })^c`$.
Adding the divergence
$$\partial _\rho [(F^{\nu \rho }-\epsilon ^2G^{\nu \rho })^aA_\mu ^a],$$
(3)
to the energy-momentum tensor
$$T^{\nu }{}_{\mu }=\partial _\mu A_\rho ^a\frac{\partial \mathcal{L}_{YM}^{\epsilon }}{\partial (\partial _\nu A_\rho ^a)}-\delta ^{\nu }{}_{\mu }\mathcal{L}_{YM}^{\epsilon }=-(F^{\nu \rho }-\epsilon ^2G^{\nu \rho })^a\partial _\mu A_\rho ^a-\delta ^{\nu }{}_{\mu }\mathcal{L}_{YM}^{\epsilon },$$
(4)
we obtain the symmetrical form of this tensor
$$T^{\nu }{}_{\mu }=-(F^{\nu \rho }-\epsilon ^2G^{\nu \rho })^a(F_{\mu \rho })^a-\delta ^{\nu }{}_{\mu }\mathcal{L}_{YM}^{\epsilon }.$$
(5)
Now we consider the spherically symmetrical chromomagnetic field configuration. Substituting the well-known Wu-Yang ansatz
$$A_0^a=0,A_i^a=ϵ_{aij}n_j\frac{1-H(r)}{r},n_i=x_i/r,r=\sqrt{x_i^2},$$
(6)
in (2), we get the following equation for the amplitude $`H(r)`$
$$\left(1-\frac{\epsilon ^2}{r^2}(H(r)^2-1)\right)r^2H(r)^{\prime \prime }=H(r)\left(H(r)^2-1\right)+$$
$$+\frac{\epsilon ^2}{r^2}\left((rH(r)^{\prime })^2H(r)-2rH(r)^{\prime }(H(r)^2-1)\right).$$
(7)
The energy of the field configuration generated by a solution $`H(r)`$ of equation (7) is the functional
$$E^\epsilon [H]=\int T^{00}d^3x=4\pi \int _0^{\mathrm{\infty }}\left[\left(1-\frac{\epsilon ^2}{r^2}(H(r)^2-1)\right)\left(H(r)^{\prime }\right)^2+\frac{(H(r)^2-1)^2}{2r^2}\right]dr=\int _0^{\mathrm{\infty }}E(r)dr.$$
(8)
The next aim of our investigation is finding the solutions of equation (7). Notice that only finite-energy solutions are interesting for us. Hence the functional $`E^\epsilon [H]`$ (8) should be finite on such solutions.
Equation (7) is a very complicated nonlinear differential equation. In order to solve it only numerical or approximation methods seem applicable. The crucial point of such analysis is that the leading derivative term in this equation contains the factor
$$\mathrm{\Phi }[H](r)=\left(r^2-\epsilon ^2(H(r)^2-1)\right).$$
(9)
If $`H_s(r)`$ is a solution of equation (7) and there is a point $`r=R`$ such that $`\mathrm{\Phi }[H_s](R)=0`$, then this solution $`H_s`$ has singular behavior in a neighborhood of this point $`r=R`$ due to smallness of the factor $`\mathrm{\Phi }[H_s]`$. Using the standard procedure, one obtains the asymptotic behavior near this point
$$H_s(r)\stackrel{r\to R\pm 0}{\longrightarrow }\pm \sqrt{1+R^2/\epsilon ^2}\mp C|R-r|^{2/3}+\underset{¯}{O}(R-r),$$
(10)
where $`C`$ is a constant. Of course, $`H_s(R)`$ is finite, but its derivative at the point $`r=R`$ is singular. Indeed,
$$H_s^{\prime }(r)\stackrel{r\to R\pm 0}{\longrightarrow }\frac{2}{3}C|R-r|^{-1/3}+\underset{¯}{O}(1)\to \mathrm{\infty }.$$
(11)
Such singular behavior is analogous to the singular behavior on a finite-radius sphere of the pure Yang-Mills solutions discussed above, but there is a principal difference. The energy of such solutions in the pure Yang-Mills case is infinite, but in the modified Yang-Mills case the energy (and other physical characteristics) of a solution with the singular behavior (11) is finite:
$$E(r)|_{r=R}\approx 4\pi \left(\pm \frac{8\epsilon ^2}{9R^2}\sqrt{1+R^2/\epsilon ^2}C^3+\frac{R^2}{2\epsilon ^4}\right)<\mathrm{\infty }.$$
(12)
Therefore such solutions are physical.
Now we should discuss the numerical investigation of solutions of equation (7) that have the asymptotic behavior (11) at some point $`r=R`$.
To guarantee stability of our solutions we should choose the following asymptotics: at the origin ($`r\to 0`$)
$$H(r)\simeq 1+a_1r^2+a_1^2\frac{2\epsilon ^2a_1+3}{10(1-2a_1\epsilon ^2)}r^4+\underset{¯}{O}(r^6),$$
(13)
and at infinity ($`r\to \mathrm{\infty }`$)
$$H(\rho )\simeq -\left[1+a_2\rho +\frac{3}{4}a_2^2\rho ^2+\frac{11}{20}a_2^3\rho ^3+\frac{193a_2^2+240\epsilon ^2}{480}a_2^2\rho ^4+\right.$$
$$\left.+\frac{329a_2^2+1280\epsilon ^2}{1120}a_2^3\rho ^5\right]+\underset{¯}{O}(\rho ^6),\rho =1/r,$$
(14)
where $`a_1>0`$ and $`a_2>0`$ are constants. Solutions with such asymptotics are stable because the vacuum states $`H(0)=1`$ and $`H(\mathrm{\infty })=-1`$ are different.
Notice that equation (7) has very useful symmetries. First of all, this equation is symmetrical with respect to the change $`H\to -H`$. So, if we have a solution $`H(r)`$, then $`-H(r)`$ is a solution too.
Now let $`\epsilon =\epsilon _1`$ and we have a set of solutions $`\{H_{\epsilon _1}(r)\}`$. If we perform the change of variable
$$r\to \frac{\epsilon _1}{\epsilon _2}r,\{H_{\epsilon _1}(r)\}\stackrel{r\to \epsilon _1r/\epsilon _2}{\longrightarrow }\{H_{\epsilon _2}(r)=H_{\epsilon _1}(\frac{\epsilon _1}{\epsilon _2}r)\},$$
(15)
we get equation (7) again but with new $`\epsilon =\epsilon _2`$. If we know a solution for some $`\epsilon >0`$, say, $`\epsilon =1`$, then using (15) we can obtain a solution for any other $`\epsilon _1>0`$.
The numerical investigation of equation (7) is presented in Fig.1. The solution of pure Yang-Mills theory ($`\epsilon =0`$) is shown by the line B. The function A is the solution of equation (7) with $`\epsilon =1`$. In Fig.2 we can see the energy density (8) corresponding to this solution.
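For concreteness, a minimal shooting integration of Eq. (7) might look as follows (our sketch, not the code used for the figures; SciPy is assumed, and the value of $`a_1`$ is only illustrative). It starts from the series (13) near the origin and stops where the factor $`\mathrm{\Phi }[H](r)`$ of Eq. (9) vanishes:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1.0  # epsilon = 1/M; other values follow from the rescaling (15)

def rhs(r, y):
    H, Hp = y
    num = H * (H**2 - 1) + (eps**2 / r**2) * (
        (r * Hp)**2 * H - 2 * r * Hp * (H**2 - 1))
    den = r**2 - eps**2 * (H**2 - 1)      # Phi[H](r) of Eq. (9)
    return [Hp, num / den]

def phi(r, y):                            # integration ends where Phi = 0
    return r**2 - eps**2 * (y[0]**2 - 1)
phi.terminal = True

a1 = 0.1                                  # free shooting parameter of Eq. (13)
r0 = 1e-4
sol = solve_ivp(rhs, (r0, 100.0), [1 + a1 * r0**2, 2 * a1 * r0],
                events=phi, rtol=1e-10, atol=1e-12)
print("Phi vanishes at R =", sol.t_events[0])   # singular sphere of Eq. (10)
```

In the same spirit, the external solution can be integrated inward from large $`r`$ using the series (14), and $`a_1`$, $`a_2`$ can then be adjusted until the matching condition discussed below is met.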
The solution $`H_<`$ starting at the origin (the internal one) increases monotonically and its energy density increases too. Evidently, as the energy density grows, the role of quantum fluctuations grows too. At the point $`r=R`$, the energy density attains its critical value $`E^{cr}`$ and $`H(r)`$ becomes singular (10). The solution $`H_>`$ starting at infinity (the external one) demonstrates an absolutely similar behavior. Now, a very essential question arises: how to connect these two sets of solutions, and how to determine such solutions on the whole space?
These questions have no purely mathematical answer, because in this case we deal with solutions that cannot be extended to the right (to the left): the point of singularity $`r=R`$ is essential.
Obviously, this nonuniqueness of the solution in the whole space is due to underdetermination of our effective model. It is necessary to introduce some additional physical condition that would allow one to choose a physically reasonable solution from the broad class of solutions described above.
Since this solution of the model (1) has to be an effective approximation to an existing gluon object, the general properties of the latter should be represented by the former. Thus, if energy density of this gluon object is continuous everywhere, then it should be continuous for the approximating solution of equation (7) as well. We show below that the condition of continuity of energy density is sufficient for the construction of a unique solution and investigation of its properties.
According to the mathematical structure of solutions of this model, the condition of continuity of the energy density can be formulated as follows: there exists a critical energy density for classical solutions in our effective Yang-Mills theory, and the value of this critical density $`E^{cr}`$ is a physical property of the theory. Therefore $`E^{cr}`$ shouldn't depend on the kind of solution (internal or external). It follows that
$$E(r)_<|_{r=R}=E(r)_>|_{r=R}\Leftrightarrow C_<=C_>.$$
(16)
It is easily shown that condition (16) uniquely determines our solution and its properties ($`a_1`$, $`a_2`$ and $`R`$) for any $`\epsilon `$. The solution is shown in Fig.1 (function A); it looks like a shell with radius $`R`$.
Using (15), one obtains the following expression for energy of such gluon cluster
$$E^\epsilon =\frac{1}{\epsilon }E^{\epsilon =1}=ME^{\epsilon =1},$$
(17)
where $`E^{\epsilon =1}=110.75`$ is the energy of the field configuration for $`\epsilon =1`$. Expression (17) is intuitively clear. Indeed, the pure Yang-Mills theory is scale invariant and has no mass-dimensional parameters. The modified Yang-Mills theory (1) has such a parameter, $`\epsilon =1/M`$, and the mass of the gluon objects under investigation is proportional to this parameter.
Now, to predict the physical mass $`M_{cluster}`$ and the effective radius we should have a prediction of the value of the parameter $`\epsilon =1/M`$. In this paper, following earlier estimates, we propose that $`M\simeq 0.59\pi `$ GeV, and our model gives the following prediction of the mass and effective radius of the investigated gluon clusters:
$$M=1/\epsilon \simeq 0.59\pi \mathrm{GeV},M_{cluster}\simeq 205\mathrm{GeV},R\simeq 0.15\mathrm{fm}$$
(18)
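The arithmetic behind Eq. (18) is elementary; in the sketch below the dimensionless shell radius for $`\epsilon =1`$ is back-inferred from the quoted $`0.15`$ fm and is therefore an assumption of ours, not a number taken from the text:

```python
# quick arithmetic behind Eqs. (17)-(18)
import math

E1 = 110.75                 # E^{eps=1} quoted above
M = 0.59 * math.pi          # GeV
hbar_c = 0.19733            # GeV * fm

M_cluster = E1 * M          # Eq. (17): E^eps = M * E^{eps=1}
unit_length = hbar_c / M    # fm per unit of dimensionless radius
print(f"M_cluster = {M_cluster:.0f} GeV")     # ~205 GeV
print(f"length unit = {unit_length:.3f} fm")  # R ~ 0.15 fm => R1 ~ 1.4 (inferred)
```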
In the next section we give some conclusions and discuss perspectives of such investigations.
## 3 Conclusions.
The aim of this paper is to show that quantum fluctuations of the nonabelian Yang-Mills field can lead to the generation of a stable finite-energy cluster solution. In our work we used the gauge-invariant approach in which such quantum fluctuations should be taken into account by adding higher-derivative terms to the pure Yang-Mills lagrangian. In the present work, we investigated the effective $`SU(2)`$ Yang-Mills theory and chromomagnetic spherically symmetrical field configurations.
One of the interesting consequences of this effective theory is the fact that for the investigated field configuration there exists a critical value of the energy density. This fact follows from the physical condition of continuity of the energy density. This condition allowed us to construct the cluster solution at all space points. We predicted that the mass of such an object should be about two hundred GeV and its effective radius should be about $`0.2`$ fm.
Of course, we do not give a comprehensive investigation of this effective Yang-Mills theory. The questions about dyon solutions, or about the role of contributions from other higher-derivative modified terms in the pure Yang-Mills lagrangian, are not clear now. But maybe the most important question in such an investigation concerns the physical consequences of the existence of such gluon cluster objects and their experimental status. All of these questions should be themes for future investigation.
## Acknowledgments.
The author is grateful to Dr. A.N. Sobolevskiĭ and Dr. I.O. Cherednikov for useful discussions. This work was supported in part by RFBR under Grant No.96-15-96674.
# Transport mean free path for Magneto-Transverse Light Diffusion: an alternative approach
## I Introduction
Magneto-transverse light diffusion - also known as the “Photonic Hall Effect” (PHE) - was theoretically predicted five years ago by Van Tiggelen, and was experimentally confirmed one year later by Rikken. This effect is analogous to the electronic Hall effect, well known in semiconductor physics. The evident driver behind the electronic Hall effect is the Lorentz force acting on charged particles. The PHE finds its origin in the Faraday effect, present inside the dielectric scatterers, which slightly changes their scattering amplitude. This is the reason for the similarities between the PHE and the so-called Beenakker-Senftleben effect, which concerns the transport coefficients of dilute paramagnetic gases.
Motivated by the experimental observation of the PHE, theoretical work was first started on the single scattering of spherical magneto-optical particles. The scattering matrix and the scattering cross-section were calculated exactly for a single Faraday-active dielectric sphere using perturbation theory . From this solution, the Stokes parameters, which completely describe the intensity and polarization of the scattered light, were derived . Perturbational methods for the scattering by weakly anisotropic particles were first used by Kuzmin et al. for the single scattering case .
The PHE in multiple scattering is controlled by a length which has been defined as the mean free path for magneto-transverse light diffusion, $`\ell _{\perp }^{*}`$. A simple expression for this length was obtained and successfully compared to experiments using a formulation based on the ladder approximation of the Bethe-Salpeter equation. The experiments investigated the dependence of the PHE on the volume fraction of the scatterers, first on the real part of the dielectric constant of the scatterers, and more recently on their imaginary part. However, some parts of the derivation of Ref. are rather technical and fail to give a satisfying explanation of the origin of the dependence of $`\ell _{\perp }^{*}`$ on the anisotropy of the scattering. This article presents a simpler derivation of $`\ell _{\perp }^{*}`$, based on the radiative transfer equation, which should clarify this point. The radiative transfer equation is a Boltzmann-type equation which describes the transport of light in multiple light scattering. From the radiative transfer equation, the diffusion equation is derived in an infinite medium when the scattering is not highly anisotropic. In other types of anisotropic materials, such as single-domain nematic liquid crystals, similar methods were developed, using either the Bethe-Salpeter equation or the radiative transfer equation.
## II Single scattering
The single scattering of light by one dielectric sphere made of a Faraday-active material embedded in an isotropic medium with no magneto-optical properties is first considered. In a magnetic field, the dielectric constant of the sphere, $`ϵ_B`$, is a tensor of rank two. It depends on the distance to the center of the sphere of radius $`R`$ via the Heaviside function, $`\mathrm{\Theta }(R-|𝐫|)`$, which equals $`1`$ inside the sphere and $`0`$ outside,
$$ϵ_B(𝐁,𝐫)_{ij}=\left[(ϵ_0-1)\delta _{ij}+iϵ_Fϵ_{ijk}\widehat{B_k}\right]\mathrm{\Theta }(R-|𝐫|),$$
(1)
where $`ϵ_0`$ is the value of the normal isotropic dielectric constant of the sphere, $`ϵ_F=2ϵ_0^{1/2}V_0B/\omega `$ is the coupling constant of the Faraday effect, and $`\delta `$ and $`ϵ`$ are respectively the Kronecker and Levi-Civita tensors. The Verdet constant of the Faraday effect is denoted by $`V_0`$, $`B`$ is the amplitude of the magnetic field and $`\omega `$ is the frequency. The intensity of the scattered light, in single scattering, is characterized by the phase function, which is also the essential ingredient for the transport theory of light. The phase function, $`F(\widehat{𝐤},\widehat{𝐤}^{},𝐁)`$, is proportional to the differential scattering cross-section averaged with respect to incoming and outgoing polarizations. It depends on the direction of the incoming plane wave, $`\widehat{𝐤}`$, on the direction of the scattered wave, $`\widehat{𝐤}^{}`$, and on the direction of the magnetic field, $`𝐁`$. The hat above the vectors denotes normalized vectors. For spherical magneto-optical particles and to linear order in the applied magnetic field, this phase function can be written as
$$F(\widehat{𝐤},\widehat{𝐤}^{},𝐁)=F_0(\widehat{𝐤},\widehat{𝐤}^{})+\mathrm{det}(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})F_1(\widehat{𝐤},\widehat{𝐤}^{}),$$
(2)
where $`\mathrm{det}(𝐀,𝐁,𝐂)=𝐀(𝐁\times 𝐂)`$ denotes the scalar determinant. Due to the rotational symmetry of the scatterer, the functions, $`F_0`$ and $`F_1`$, only depend on the scattering angle, $`\theta `$, which is the angle between $`\widehat{𝐤}`$ and $`\widehat{𝐤}^{}`$. The phase function $`F_1`$ only depends on the difference in the azimuthal angles associated with $`\widehat{𝐤}`$ and $`\widehat{𝐤}^{}`$, because of the axial symmetry of the scatterer around the direction of the magnetic field. These symmetry properties simplify considerably the use of the radiative transfer equation . The amplitude of $`F_1`$ was found to be proportional to the dimensionless parameter, $`ϵ_F`$ .
The albedo is introduced as the ratio of the scattering cross-section over the extinction cross-section
$$a=\frac{Q_{scatt}}{Q_{ext}}.$$
(3)
It is related to the phase function defined in Eq. (2),
$$a=𝑑\widehat{𝐤}F(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})=𝑑\widehat{𝐤}F_0(\widehat{𝐤},\widehat{𝐤}^{}),$$
(4)
where the last equality follows from Eq. (2). Thus in the absence of absorption the phase function is normalized $`(a=1)`$.
## III Transport in magneto-optical diffuse media
In an isotropic medium much larger than the transport mean free path, $`\ell ^{*}`$, Fick's law relates the diffusion current, $`𝐉`$, to the energy-density gradient, $`\mathbf{\nabla }I`$, by $`𝐉=-D_0\mathbf{\nabla }I`$. The conventional diffusion constant for the radiative transfer equation is denoted $`D_0`$ and is usually related to the transport mean free path, $`\ell ^{*}`$, and the transport velocity, $`v_E`$, by $`D_0=\frac{1}{3}v_E\ell ^{*}`$. In the presence of a magnetic field, the diffusion constant is a second-rank tensor. The part of this tensor linear in the magnetic field is responsible for the PHE, whereas the part quadratic in the magnetic field generates a photonic magneto-resistance, which could also be observed experimentally. By Onsager's relation, $`D_{ij}(𝐁)=D_{ji}(-𝐁)`$, the part linear in the external magnetic field must be an antisymmetric tensor, and Fick's law becomes,
$$𝐉=-𝐃(𝐁)\cdot \mathbf{\nabla }I=-D_0\mathbf{\nabla }I-D_{\perp }\widehat{𝐁}\times \mathbf{\nabla }I.$$
(5)
The term containing $`D_{\perp }`$ describes the magneto-transverse diffusion current responsible for the PHE. In analogy to the definition of $`\ell ^{*}`$, the transport mean free path, $`\ell _{\perp }^{*}`$, for magneto-transverse diffusion is defined as $`D_{\perp }=\frac{1}{3}v_E\ell _{\perp }^{*}`$. For the electronic Hall effect, $`\ell _{\perp }^{*}`$ is proportional to the Hall conductivity, $`\sigma _{xy}`$, the sign of which is determined by the charge of the current carriers. Similarly, a positive $`\ell _{\perp }^{*}`$ means that the PHE has the same sign as the Verdet constant of the scatterers, and a negative $`\ell _{\perp }^{*}`$ means that the PHE has an opposite sign, which is also possible depending on the scattering. This point has been carefully checked experimentally.
The phase function introduced above can be used to describe not only the single scattering of light by independently scattering particles but also light scattering by a large collection of such particles in the regime of multiple light scattering. It is assumed that only magneto-optical Mie scatterers are present, and included in a matrix with no magneto-optical properties. A transport equation for a medium comprising randomly distributed spherical particles embedded in an isotropic medium can be written as follows :
$$\frac{\ell }{v_E}\partial _t\mathcal{I}(𝐫,\widehat{𝐤},t)+\ell \widehat{𝐤}\cdot \mathbf{\nabla }\mathcal{I}(𝐫,\widehat{𝐤},t)+\mathcal{I}(𝐫,\widehat{𝐤},t)=\int 𝑑\widehat{𝐤}^{}F(\widehat{𝐤},\widehat{𝐤}^{},𝐁)\mathcal{I}(𝐫,\widehat{𝐤}^{},t),$$
(6)
where $`\ell `$ denotes the elastic mean free path, which is the average distance between two subsequent scattering events. The specific intensity $`\mathcal{I}(𝐫,\widehat{𝐤},t)`$ is defined as the density of radiation at position $`𝐫`$, time $`t`$, in the direction $`\widehat{𝐤}`$. The specific intensity has the dimension of a monochromatic radiance: energy per unit solid angle, time, wavelength and surface area. When supplemented by appropriate boundary conditions, the radiative transfer equation can be solved for a particular problem. It is useful to introduce the average specific intensity, $`I(𝐫,t)`$, and the current, $`𝐉(𝐫,t)`$, which are defined as
$$I(𝐫,t)=\int \mathcal{I}(𝐫,\widehat{𝐤},t)𝑑\widehat{𝐤},𝐉(𝐫,t)=v_E\int \widehat{𝐤}\mathcal{I}(𝐫,\widehat{𝐤},t)𝑑\widehat{𝐤}.$$
(7)
To simplify the notations, the dependence of the specific intensity or of related quantities upon the magnetic field, $`𝐁`$, is not explicitly indicated. The integration of Eq. (6) with respect to $`\widehat{𝐤}`$ leads to the continuity equation
$$\partial _tI(𝐫,t)+\mathbf{\nabla }\cdot 𝐉(𝐫,t)=-\frac{(1-a)v_E}{\ell }I(𝐫,t),$$
(8)
where $`a`$ denotes the albedo defined in Eq. (4).
If Eq. (6) is multiplied by $`\widehat{𝐤}`$ and integrated over the direction $`\widehat{𝐤}`$, the right-hand side of Eq. (6) will contain two integrals which depend on $`F_0`$ and $`F_1`$. These integrals are the two dimensionless quantities:
$$<\mathrm{cos}\theta >=\int 𝑑\widehat{𝐤}F_0(\widehat{𝐤},\widehat{𝐤}^{})\widehat{𝐤}\cdot \widehat{𝐤}^{},$$
(9)
and
$$A_1=\int 𝑑\widehat{𝐤}F_1(\widehat{𝐤},\widehat{𝐤}^{})\mathrm{det}(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})^2.$$
(10)
By choosing a system of coordinates linked with $`\widehat{𝐤}^{}`$, it can be proved quite generally that
$$\int 𝑑\widehat{𝐤}F(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})\widehat{𝐤}=<\mathrm{cos}\theta >\widehat{𝐤}^{}-A_1\widehat{𝐁}\times \widehat{𝐤}^{}.$$
(11)
After the integration over $`\widehat{𝐤}^{}`$ the following equation is obtained
$$\frac{\ell }{v_E}\partial _t𝐉+𝐉\left(1-<\mathrm{cos}\theta >\right)+A_1\widehat{𝐁}\times 𝐉=-\ell v_E\mathbf{\nabla }\cdot \int 𝑑\widehat{𝐤}\mathcal{I}(𝐫,\widehat{𝐤},t)\widehat{𝐤}\widehat{𝐤}.$$
(12)
If it is assumed that the angular distribution of the specific intensity is almost isotropic, the current is much smaller than the average specific intensity. In that case the following approximation is valid:
$$\mathcal{I}(𝐫,\widehat{𝐤},t)\simeq I(𝐫,t)+\frac{3}{v_E}𝐉(𝐫,t)\cdot \widehat{𝐤}+\mathrm{\cdots }.$$
(13)
Substituting this relation into Eq. (12), and neglecting $`\partial _t𝐉`$ for processes which are slow with respect to the characteristic time between two scatterings, $`\ell /v_E`$, gives Fick's law:
$$𝐉(𝐫,t)=-𝐃\cdot \mathbf{\nabla }I(𝐫,t).$$
(14)
The diffusion tensor is given by,
$$D(𝐁)_{ij}=\frac{1}{3}v_E\ell \left[\left(1-<\mathrm{cos}\theta >\right)\delta _{ij}-A_1\epsilon _{ijk}\widehat{B_k}\right]^{-1}.$$
(15)
It is important to note that the magnetic correction to the scattering cross-section plays a role similar to the role of the asymmetry factor, $`<\mathrm{cos}\theta >`$, present when no magnetic field is applied. The dependence of $`<\mathrm{cos}\theta >`$ can be obtained from standard Mie theory . Both anisotropy factors, $`<\mathrm{cos}\theta >`$ and $`A_1`$, are non-zero provided that the symmetry between forward and backward scattering is broken, which is the case for a finite size scatterer. This is the reason why the PHE of a Rayleigh scatterer vanishes whereas the PHE of a Mie scatterer does not .
In the case where $`A_1\ll (1-<\mathrm{cos}\theta >)`$, an expansion of the bracket gives
$$D_{ij}(𝐁)=\frac{1}{3}v_E\frac{\ell }{1-<\mathrm{cos}\theta >}\delta _{ij}+\frac{1}{3}v_EA_1\frac{\ell }{\left(1-<\mathrm{cos}\theta >\right)^2}\epsilon _{ijk}\widehat{B}_k.$$
(16)
This equation identifies $`\ell ^{*}=\ell /(1-<\mathrm{cos}\theta >)`$ with the transport mean free path, and $`\ell _{\perp }^{*}=A_1\ell ^{*}/(1-<\mathrm{cos}\theta >)`$ as the transport mean free path for magneto-transverse diffusion. The final result of Ref. is thus recovered with little effort. The order of magnitude of the ratio $`\ell _{\perp }^{*}/\ell ^{*}`$ is $`10^{-5}`$ for an applied magnetic field of $`1`$ T and for samples made of rare-earth materials. The presence in Eq. (16) of the coefficient $`A_1`$, which is directly connected with the PHE of a single scatterer, states that the PHE in multiple scattering is directly proportional to the normalized PHE of a single Mie sphere, is of the same sign, and is amplified by the factor $`1/(1-<\mathrm{cos}\theta >)^2`$. The expansion to first order in the magnetic field involved in Eq. (16) clarifies the origin of the factor $`1/(1-<\mathrm{cos}\theta >)^2`$, which makes $`\ell _{\perp }^{*}`$ even more dependent on the asymmetry factor $`<\mathrm{cos}\theta >`$ than the transport mean free path $`\ell ^{*}`$.
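The expansion can be checked numerically; the following sketch (with purely illustrative values of $`<\mathrm{cos}\theta >`$ and $`A_1`$) inverts the bracket of Eq. (15) for $`\widehat{𝐁}`$ along $`z`$ and recovers the two terms of Eq. (16):

```python
import numpy as np

g = 0.6            # <cos theta>, an assumed example value
A1 = 1e-5          # magnetic anisotropy factor, A1 << 1 - g

# epsilon_{ijk} Bhat_k with Bhat along z: antisymmetric generator in xy
E = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

M = (1 - g) * np.eye(3) - A1 * E   # bracket of Eq. (15), up to v_E*l/3
Dinv = np.linalg.inv(M)

print(Dinv[0, 0], 1 / (1 - g))         # diagonal term: l*/l
print(Dinv[0, 1], A1 / (1 - g) ** 2)   # magneto-transverse term: l_perp*/l
```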
Substituting Eq. (14) into the continuity equation (8) yields the diffusion equation for the density of radiation
$$\partial _tI(𝐫,t)=D_0\nabla ^2I(𝐫,t)-\frac{(1-a)v_E}{\ell }I(𝐫,t)=D_0\left[\nabla ^2I(𝐫,t)-\frac{1}{L_a^2}I(𝐫,t)\right].$$
(17)
As expected, the part of the diffusion tensor which is linear in the magnetic field, and therefore antisymmetric, does not enter the diffusion equation, which depends only on the symmetric part, $`D_0`$. In addition to the transport mean free path, $`\ell ^{*}`$, Eq. (17) defines the characteristic length for absorption in multiple light scattering, $`L_a`$
$$L_a=\sqrt{\frac{\ell ^{*}\ell }{3(1-a)}}=\sqrt{\frac{\ell ^{*}\ell _{abs}}{3}},$$
(18)
as a function of the characteristic length for the absorption of the coherent beam, $`\ell _{abs}`$. Therefore there is no dependence of $`L_a`$ on the magnetic field to linear order. As discussed in Ref., the diffusion coefficient $`D_0`$ does not depend on the absorption cross-section, which is proportional to $`1-a`$. Eq. (16) shows that this property also holds for $`\ell _{\perp }^{*}`$ when the medium is dissipative.
## IV Transport in mixtures of Faraday-active and non Faraday-active spheres
The main result of Eq. (16) can be extended to the case of a homogeneous mixture of Faraday-active spheres with volume fraction, $`n_1`$, and non-Faraday-active spheres with volume fraction, $`n_0`$ (the labels are chosen so that the magneto-optical term below carries the weight $`n_1`$). The phase function of Eq. (2) is now modified for the case of the mixtures as follows
$`F(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})_{mix}`$ $`=`$ $`{\displaystyle \frac{n_0}{n_0+n_1}}F_0(\widehat{𝐤},\widehat{𝐤}^{})+{\displaystyle \frac{n_1}{n_0+n_1}}\left[F_0(\widehat{𝐤},\widehat{𝐤}^{})+\mathrm{det}(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})F_1(\widehat{𝐤},\widehat{𝐤}^{})\right],`$ (19)
$`=`$ $`F_0(\widehat{𝐤},\widehat{𝐤}^{})+{\displaystyle \frac{n_1}{n_0+n_1}}\mathrm{det}(\widehat{𝐤},\widehat{𝐤}^{},\widehat{𝐁})F_1(\widehat{𝐤},\widehat{𝐤}^{}).`$ (20)
The equation (15) can be generalized to
$$D(𝐁)_{ij}=\frac{1}{3}v_E\ell \left[(1-<\mathrm{cos}\theta >)\delta _{ij}-\frac{n_1}{n_0+n_1}A_1\epsilon _{ijk}\widehat{B_k}\right]^{-1}.$$
(21)
In the case when the last term of the bracket is small with respect to $`1-<\mathrm{cos}\theta >`$, the following expression for the ratio of the transport mean free path for magneto-transverse diffusion over the transport mean free path is obtained
$$\frac{\ell _{\perp }^{*}}{\ell ^{*}}=\frac{n_1}{n_0+n_1}\frac{A_1}{1-<\mathrm{cos}\theta >}.$$
(22)
## V Conclusion
In conclusion, the present transport theory for magneto-optical spheres of arbitrary size confirms the results of Ref. concerning the magneto-transverse light diffusion obtained from the Bethe-Salpeter equation. This new approach clarifies the origin of the dependence of the transport mean free path for magneto-transverse diffusion on the anisotropy of the scattering. It also shows that the magnetic anisotropy in the scattering cross-section plays a role similar to that of $`\mathrm{cos}\theta `$ in the transport mean free path. This derivation can be generalized to the case of mixtures of Faraday-active and non-Faraday-active scatterers.
## VI Acknowledgments
I thank B. Van Tiggelen, G. Rikken, S. Wiebel and J-J. Greffet for the many stimulating discussions. This work has been made possible by the Groupement de Recherches POAN.
# The Elastic Behavior of Entropic “Fisherman’s Net”
## Abstract
A new formalism is used for a Monte Carlo determination of the elastic constants of a two-dimensional net of fixed connectivity. The net is composed of point-like atoms, each of which is tethered to six neighbors by a bond limiting the distance between them to a certain maximal separation, but having zero energy at all smaller lengths. We measure the elastic constants for many values of the ratio $`\gamma `$ between the maximal and actual extensions of the net. When the net is very stretched ($`\gamma \to 1`$), a simple transformation maps the system into a triangular hard-disk solid, and we show that the elastic properties of both systems coincide. We also show that the crossover to the Gaussian elastic behavior expected for the non-stressed net occurs when the net is looser ($`\gamma \simeq 3`$).
Materials like rubber and gels are formed when polymers or monomers are cross-linked into macroscopically large networks. Due to the small energetic differences (of the order of $`kT`$) between the allowed microscopic configurations of these materials, their physics is primarily determined by entropy, rather than energy. This has been recognized long ago, and the peculiar physical properties of rubber and gels, in particular their great flexibility, are attributed to this microscopic feature. The classical theories of rubber elasticity, for instance, deal with Gaussian networks in which the internal elastic energy is completely ignored and the strands between cross-links are viewed as entropic springs. These theories, however, do not explain well the elastic behavior of networks of certain types. Perhaps the best-known unresolved problem in this field of research is the question of the critical elastic behavior of random systems near the connectivity threshold. Most of the numerical works which aimed to investigate this issue during the last twenty years concerned the energetic elasticity. Recent studies, however, suggested that close to the gel point elasticity is dominated by its entropic component. A completely different aspect of entropic elasticity, which has been studied much less, is the behavior of highly connected networks, well above their connectivity threshold. The classical theories are inappropriate in this connectivity regime, since the strands between the junctions are very short and do not resemble Gaussian springs.
When the boundary of a thermodynamic system is homogeneously deformed, the distance between any two boundary points which prior to the deformation were separated by $`\stackrel{\to }{R}`$, becomes
$$r=[R_iR_j(\delta _{ij}+2\eta _{ij})]^{1/2},$$
(1)
where the subscripts denote Cartesian coordinates and summation over repeated indices is implied. The quantities $`\eta _{ij}`$ are the components of the Lagrangian strain tensor, while $`\delta _{ij}`$ is the Kronecker delta. The elastic behavior of the system is characterized by the stress tensor, $`\sigma _{ij}`$, and the tensor of elastic constants, $`C_{ijkl}`$, which are the coefficients of the free energy density expansion in the strain variables
$$f(\{\eta \})=f(\{0\})+\sigma _{ij}\eta _{ij}+\frac{1}{2}C_{ijkl}\eta _{ij}\eta _{kl}+\mathrm{}.$$
(2)
Measuring the elastic constants is much more difficult in entropy-dominated systems than in energy-dominated ones. In the latter one needs to calculate energy variations around well defined ground states. In the former, on the other hand, different microscopic configurations possess similar energies. Entropy in this case is essentially the (logarithm of the) number of allowed microscopic configurations. Measuring the variations of this quantity in response to external deformations applied on the system is usually very complicated. In order to simplify this task, and due to the fact that the exact energy details are quite irrelevant in entropy-dominated systems, the inter-atomic interactions in such systems are often modeled by “hard” potentials. Excluded volume effects, for instance, can be modeled by the hard spheres repulsion, while chemical bonds can be replaced by inextensible (“tether”) bonds which limit the distance between the bonded monomers, but have zero energy at all permitted distances. The energy of all the microscopic configurations in such models, which are called “athermal”, is the same, and their physics, therefore, is exclusively governed by entropy considerations. It is interesting to note that although athermal models have been investigated quite extensively in polymer and soft matter physics, the elastic properties of many of them are not well understood. Hard spheres systems, for instance, have been studied for more than 40 years already. They were, in fact, the first systems for which Metropolis et al. performed the first Monte Carlo (MC) simulations in 1953. The phase diagram of hard spheres, which is a function of a single parameter, their volume fraction, has been fully explored both in simulations and experiments. Yet, despite the numerous works dedicated to the elasticity of hard spheres, the accuracy of the values of their elastic constants still leaves much to be desired.
In the canonical ensemble, the elastic constants can be related to the mean squared thermal fluctuations of the stress tensor components (just as the heat capacity is proportional to the mean squared energy fluctuation). This relation, first expressed by Squire et al., can be used for a Monte Carlo determination of the elastic constants. The method is known as the “fluctuation method”. The instantaneous stress, measured at a given microscopic configuration, is associated with the mean force (averaged over the entire volume) acting on the atoms. The local forces originate from external potentials and inter-particle interactions. In entropy-dominated systems, these forces are usually very small. They become extremely large only over very short time intervals, when atoms come to the close vicinity of each other or when bonds are sufficiently stretched. Models with hard potentials can be regarded as the limiting case in which these time intervals vanish, while at the same time the instantaneous forces become infinitely large, keeping the rate of momentum exchange between atoms fixed. It is obvious that the stress in such systems must be related to the two-point probability densities of contact between spheres and of occurrence of bond stretching, while the elastic constants (stress fluctuations) must be related to the corresponding four-point probability densities. Indeed, we have recently succeeded in formulating the exact relations. We obtained expressions enabling a direct measurement of the entropic contribution to the elastic constants, and demonstrated the accuracy and efficiency of the method using this formalism on three-dimensional hard spheres systems. In this paper we apply this new formalism to measure, by means of MC simulations, the stress and elastic constants of topologically simple networks. We consider a “toy model” consisting of a two-dimensional (2D) network of atoms forming a triangular “fisherman’s net” (FN): atoms are point-like, i.e., have no excluded volume, and each one of them is connected to six neighbors by a “tether” limiting the maximal distance between the atoms, but otherwise not exerting any force. The FN is a highly connected network whose physical behavior is entropy-dominated. Very few studies were devoted to systems of this type, and it is indeed quite clear that the determination of the elastic constants of systems similar to the FN is far from being trivial.
The FN is six-fold symmetric when it is equally stretched along all the spatial directions. Its elastic properties in this reference state should be those of an isotropic system: its stress tensor is diagonal with the elements $`\sigma =\sigma _{xx}=\sigma _{yy}=-P`$, where $`P`$ is the negative external pressure (stretching) one needs to apply to the boundaries in order to balance the forces exerted by the net. Only four elastic constants of the net do not vanish: $`C_{11}=C_{xxxx}`$, $`C_{22}=C_{yyyy}`$, $`C_{12}=C_{xxyy}=C_{yyxx}`$ and $`C_{44}=C_{xyxy}=C_{yxyx}=C_{xyyx}=C_{yxxy}`$. Due to the isotropic nature of the system, only two of them are independent, and they satisfy the relations $`C_{11}=C_{22}`$ and $`2C_{44}=C_{11}-C_{12}`$. It is quite common to describe the elastic properties of isotropic systems in terms of the shear modulus, $`\mu `$, and the bulk modulus, $`\kappa `$, defined by $`\mu =C_{44}-P`$ and $`\kappa =\frac{1}{2}(C_{11}+C_{12})`$. When these quantities are positive, the isotropic system is mechanically stable.
Our simulations were performed on systems consisting of 1600 atoms which were bonded to form a triangular 2D net. The topology of the net is such that the mean positions of the atoms form a regular triangular lattice with lattice spacing $`b_0`$, while each pair of nearest-neighbor atoms is connected by a tether whose maximal extension is $`b\ge b_0`$. Periodic boundary conditions, which fixed the volume and prevented the net from collapsing, were applied. We denote by $`\gamma \equiv b/b_0`$ the ratio between the maximal and actual extensions of the net. Typical equilibrium configurations corresponding to two values of $`\gamma `$ are depicted in Fig. 1. We generated the MC configurations using a new updating scheme, recently proposed by Jaster, in which the conventional Metropolis step of a single particle is replaced by a collective step of a chain of particles. At each MC time unit we made 1600 move attempts (with acceptance probability $`\simeq 0.7`$), where at each attempt a new atom was selected randomly. (On the average, each atom was chosen once in a MC time unit.) Correlations between subsequent configurations were estimated from the autocorrelation function of the amplitude of the longest-wavelength phonon in the systems (both longitudinal and transverse phonons were checked). For all $`\gamma `$ values, we found that after less than 1000 MC time units the memory of the initial configuration is completely lost. We measured the stress and elastic constants for many values of $`\gamma `$. For each $`\gamma `$, we averaged the relevant quantities over a set of $`1.5\times 10^7`$ configurations separated from each other by 3 MC time units. We also evaluated the standard deviations of the averages. The error bars appearing in the graphs which present our results correspond to one standard deviation. More technical details of the simulations, as well as a detailed explanation of the formalism used in this work, were given in another publication.
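For illustration, a strongly simplified version of such a constraint Monte Carlo sweep is sketched below (our toy version: single-particle moves and open boundaries, whereas the actual runs used Jaster's collective chain moves and periodic boundary conditions). Since a bond costs no energy at any permitted length, a move is accepted if and only if none of the tethers of the moved atom exceeds $`b`$:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10                        # L x L triangular patch (toy size, not 1600 atoms)
b0, gamma = 1.0, 1.5
b = gamma * b0                # maximal tether extension
step = 0.2 * b0               # trial displacement amplitude

# triangular lattice via oblique basis vectors a1, a2
a1 = np.array([1.0, 0.0]) * b0
a2 = np.array([0.5, np.sqrt(3) / 2]) * b0
pos = np.array([i * a1 + j * a2 for j in range(L) for i in range(L)])

# six nearest neighbors on the triangular lattice (open boundaries here)
shifts = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
bonds = [[] for _ in range(L * L)]
for j in range(L):
    for i in range(L):
        for di, dj in shifts:
            ii, jj = i + di, j + dj
            if 0 <= ii < L and 0 <= jj < L:
                bonds[j * L + i].append(jj * L + ii)

def mc_sweep(pos):
    accepted = 0
    for _ in range(L * L):
        n = rng.integers(L * L)
        trial = pos[n] + step * rng.uniform(-1.0, 1.0, 2)
        # zero energy everywhere inside the allowed region: accept iff
        # every tether of atom n stays within its maximal extension b
        if all(np.linalg.norm(trial - pos[m]) <= b for m in bonds[n]):
            pos[n] = trial
            accepted += 1
    return accepted / (L * L)

for _ in range(200):
    rate = mc_sweep(pos)
print("acceptance rate ~", rate)
```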
When the net is fully extended ($`\gamma =1`$), atoms cannot leave their mean lattice positions. The entropy, therefore, vanishes, while the stress and elastic constants diverge. For slightly larger values of $`\gamma `$, atoms are restricted to small thermal fluctuations around their lattice positions, as in Fig. 1 (a). A similar atomic-level picture appears in hard-disk (2D “hard sphere”) solids at densities close to the close-packing density. In fact, the FN and the hard-disk (HD) problems are closely related: in the latter (HD) the centers of the disks are not allowed to approach their neighbors to a distance smaller than $`a`$, the diameter of the disks, while in the former (FN) atoms are not allowed to depart from their neighbors to a distance larger than the maximal extension of the bond, $`b`$. For HD solids, one can define the ratio $`\delta =a/b_0\le 1`$ between the diameter of the disks, $`a`$, and the mean lattice separation, $`b_0`$. In the limits $`\gamma \rightarrow 1`$ and $`\delta \rightarrow 1`$ (corresponding to the FN and HD problems, respectively), the elastic constants of both systems coincide, as can be seen from the following argument: Let $`\mathrm{\Pi }_{\mathrm{FN}}`$ and $`\mathrm{\Pi }_{\mathrm{HD}}`$ be the phase spaces of allowed configurations of a FN with a certain value of $`\gamma `$ and of a HD solid with $`\delta =1/\gamma `$, respectively. Each configuration in one of these phase spaces can be described by the set $`\{𝐮_i\}`$ of deviations of either the atoms of the net or the centers of the disks from their mean lattice positions. In the $`\gamma `$, $`\delta \rightarrow 1`$ asymptotic regimes, we can assume that the size of all the deviations is much smaller than the lattice spacing, $`b_0`$. One can easily check that if the set $`\{𝐮_i\}`$ represents an allowed microscopic configuration of the FN, then the set $`\{-𝐮_i\}`$ almost always corresponds to an allowed configuration of the HD system. Moreover, by this transformation we can generate almost all the configurations of $`\mathrm{\Pi }_{\mathrm{HD}}`$. The measure of the subgroup of configurations for which the mapping $`\{𝐮_i\}\rightarrow \{-𝐮_i\}`$ between the two problems does not apply diminishes proportionally to $`u_i^2/b_0^2`$. Thus, the mapping $`\{𝐮_i\}\rightarrow \{-𝐮_i\}`$ is asymptotically a one-to-one transformation from $`\mathrm{\Pi }_{\mathrm{FN}}`$ onto $`\mathrm{\Pi }_{\mathrm{HD}}`$. Since for both systems the Helmholtz free energy $`F`$ is equal to $`-kT\mathrm{ln}|\mathrm{\Pi }|`$, where $`|\mathrm{\Pi }|`$ is the volume of the $`2N`$-dimensional configuration phase space ($`N`$ is the number of atoms), and since the Jacobian of the above transformation is unity, we readily find that the free energies $`F_{\mathrm{HD}}`$ and $`F_{\mathrm{FN}}`$ of the HD and FN systems, respectively, are related by
$`F_{\mathrm{FN}}(N,\gamma )\approx F_{\mathrm{HD}}(N,\delta =1/\gamma ),\quad \mathrm{for}\gamma \rightarrow 1.`$ (3)
Suppose now that both systems are slightly deformed from their reference states. The displacements of the atoms from their mean lattice positions can be divided into the set $`\{𝐮_i\}`$ of thermal fluctuations and the set $`\{𝐯_i\}`$ of small changes in the mean lattice positions caused by the deformation. The transformation between $`\mathrm{\Pi }_{\mathrm{FN}}`$ and $`\mathrm{\Pi }_{\mathrm{HD}}`$, in this case, maps both $`\{𝐮_i\}`$ to $`\{-𝐮_i\}`$ and $`\{𝐯_i\}`$ to $`\{-𝐯_i\}`$. The $`\{𝐯_i\}\rightarrow \{-𝐯_i\}`$ mapping is equivalent to the reversal of the strain applied on the system. We therefore find that $`F_{\mathrm{HD}}`$ and $`F_{\mathrm{FN}}`$ are equally modified, provided that opposite strains are applied on the FN and HD systems. The following asymptotic relations between the stress and elastic constants of these systems follow immediately: $`\sigma _{\mathrm{FN}}(\gamma )\approx P_{\mathrm{HD}}(1/\gamma )`$, $`\kappa _{\mathrm{FN}}(\gamma )\approx \kappa _{\mathrm{HD}}(1/\gamma )`$ and $`C_{44}^{\mathrm{FN}}(\gamma )\approx C_{44}^{\mathrm{HD}}(1/\gamma )`$. These relations are very useful since the asymptotic expressions for $`P_{\mathrm{HD}}`$, $`\kappa _{\mathrm{HD}}`$ and $`C_{44}^{\mathrm{HD}}`$ are available , and can be used to find the stress and bulk modulus of the FN. This gives us the exact expressions
$`\sigma _{\mathrm{FN}}(\gamma )`$ $`\approx `$ $`{\displaystyle \frac{4/\sqrt{3}}{(\gamma ^2-1)}}{\displaystyle \frac{kT}{b_0^2}},`$ (4)
$`\kappa _{\mathrm{FN}}(\gamma )`$ $`\approx `$ $`{\displaystyle \frac{4/\sqrt{3}}{(\gamma ^2-1)^2}}{\displaystyle \frac{kT}{b_0^2}}.`$ (5)
For the elastic constant $`C_{44}`$, Ref. finds only the asymptotic functional form, and therefore for our problem we have
$$C_{44}^{\mathrm{FN}}(\gamma )\approx \frac{A}{(\gamma ^2-1)^2}\frac{kT}{b_0^2},$$
(6)
with an unknown constant $`A`$. Our numerical results, presented in Fig. 2, confirm these relations, which appear to be accurate over quite a large range of $`\gamma `$ values. In Eq. (6), we use the value $`A=1.80\pm 0.02`$ obtained by fitting the asymptotic expression for $`C_{44}`$ to the three data points corresponding to the smallest $`\gamma `$ values. Note that while in Eqs. (4)–(6) $`\sigma _{\mathrm{FN}}`$, $`\kappa _{\mathrm{FN}}`$ and $`C_{44}^{\mathrm{FN}}`$ are expressed in units of $`kT/b_0^2`$, in Fig. 2 they are given in units of $`kT/b^2`$. In this representation, the stress and elastic constants of the FN are scaled so as to depend on the parameter $`\gamma `$ alone.
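Since Eqs. (4)–(6) are closed-form, they are easy to tabulate. The following sketch (an illustration, not the analysis code; it uses the fitted $`A=1.80`$ and works in units of $`kT/b_0^2`$) evaluates them:

```python
import numpy as np

A = 1.80                     # fitted constant in Eq. (6)
kT_over_b0sq = 1.0           # work in units of kT/b0^2

def fn_asymptotics(gamma):
    """Asymptotic stress, bulk modulus and C44 of the FN, Eqs. (4)-(6),
    valid in the strongly stretched limit gamma -> 1."""
    g2 = gamma ** 2 - 1.0
    sigma = (4.0 / np.sqrt(3.0)) / g2 * kT_over_b0sq
    kappa = (4.0 / np.sqrt(3.0)) / g2 ** 2 * kT_over_b0sq
    c44 = A / g2 ** 2 * kT_over_b0sq
    return sigma, kappa, c44

for gamma in (1.05, 1.1, 1.2):
    s, k, c = fn_asymptotics(gamma)
    print(f"gamma = {gamma}: sigma = {s:.2f}, kappa = {k:.2f}, C44 = {c:.2f}")
```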
Fig. 3 shows the dependence of the stress and elastic constants on $`\gamma `$ for weakly stretched nets. We observe a spectacular decay of the elastic constants to almost zero for $`\gamma \gtrsim 3`$, while at the same time the stress becomes independent of $`\gamma `$. The very fact that the elastic constants decrease with increasing $`\gamma `$ should not be surprising, because it is intuitively clear that larger $`\gamma `$ represents a more “loose” and more “weak” solid. However, almost vanishing values already at $`\gamma \approx 3`$ are not a direct consequence of the “weakness” of the solid, but of the fact that a “loose” network can be approximated by a network of Gaussian springs. A Gaussian spring is a linear spring of vanishing unstressed length. The energy of such a spring, $`E=\frac{1}{2}Kr^2`$, is simply proportional to its squared end-to-end distance, $`r^2`$. We will show that an elastic solid formed by such springs has exactly vanishing elastic constants, independently of the value of the spring constant $`K`$. Thus, the effect observed in Fig. 3 is an indication of the Gaussian nature of the system.
It is easy to calculate the elastic properties of Gaussian networks at $`T=0`$: The stresses of such networks depend on their topologies, namely on the details of the connectivity between the atoms and on the values of the spring constants between them. For 2D networks the stresses are not modified by homogeneous changes in the size of the net, since the force exerted on the boundaries grows (diminishes) linearly with their length. Moreover, at $`T=0`$, the free energy, $`F`$, coincides with the internal energy $`E=\sum _{\mathrm{bonds}\alpha \beta }\frac{1}{2}K_{\alpha \beta }(r^{\alpha \beta })^2`$, where $`r^{\alpha \beta }`$ is the length of the bond connecting atoms “$`\alpha `$” and “$`\beta `$”, and $`K_{\alpha \beta }`$ is the spring constant assigned to this bond. From Eq. (1) it is obvious that the energy expansion in the strain variables includes only linear terms in $`\eta `$, and hence, by comparing with Eq. (2), $`C_{ijkl}(T=0)\equiv 0`$. This identity, as well as the size independence of the stresses, holds at any other temperature, since Gaussian networks have the interesting feature that their stress and elastic constants are temperature independent! For the stresses this feature is readily understood: the stresses can be expressed as averages of quantities which are linear in the coordinates of the atoms. When the statistical weights of the distribution are Gaussian, i.e., an exponent of a quadratic form of the coordinates, these averages coincide with the most probable values, namely their values at equilibrium. The temperature independence of the elastic constants then follows immediately, since the latter are just the derivatives of the stress components.
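The temperature independence is easy to demonstrate on the simplest Gaussian network, a 1D chain with pinned ends. The following minimal Metropolis sketch (written for this presentation; all parameters are hypothetical) measures the tension at several temperatures and recovers the exact, $`T`$-independent value $`KL/N`$:

```python
import numpy as np

rng = np.random.default_rng(0)

def chain_tension(N=20, L=10.0, K=1.0, kT=1.0, sweeps=20000):
    """MC estimate of the tension in a 1D chain of N Gaussian springs
    (E = 0.5*K*sum dx^2) with ends pinned at 0 and L.  The exact result,
    K*L/N, is independent of temperature."""
    x = np.linspace(0.0, L, N + 1)          # atoms 0..N; 0 and N are pinned
    step = np.sqrt(kT / K)                  # natural trial-step scale
    f_sum, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for i in range(1, N):               # interior atoms only
            xi = x[i] + step * rng.uniform(-1, 1)
            dE = 0.5 * K * ((xi - x[i-1])**2 + (xi - x[i+1])**2
                            - (x[i] - x[i-1])**2 - (x[i] - x[i+1])**2)
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                x[i] = xi
        if sweep > sweeps // 5:             # discard equilibration
            f_sum += K * (x[1] - x[0])      # force on the pinned end
            n_meas += 1
    return f_sum / n_meas

# Exact tension: K*L/N = 1.0*10.0/20 = 0.500, at every temperature.
for kT in (0.5, 1.0, 2.0):
    print(f"kT = {kT}: tension = {chain_tension(kT=kT):.3f}")
```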
The similarity between non-stressed tethered and Gaussian one-dimensional (1D) nets, i.e., linear polymers, is a consequence of the central limit theorem . For topologically two-dimensional regular (non-random) nets, such similarity was demonstrated by Kantor et al. : In both tethered and Gaussian two-dimensional nets, the mean squared distance in the embedding space, $`r_{𝐱𝐱^{}}^2=\langle |𝐫(𝐱)-𝐫(𝐱^{})|^2\rangle `$, between two distant points whose internal positions in the net (measured in lattice constants) are $`𝐱`$ and $`𝐱^{}`$, grows proportionally to $`\mathrm{ln}|𝐱-𝐱^{}|`$. One can define the effective spring constant, $`K_{\mathrm{eff}}`$, as the value of $`K`$ of a Gaussian network with the same connectivity and statistical properties as those of the tethered network. The value of $`K_{\mathrm{eff}}`$ is extracted from the ratio of the mean squared distance, $`r_{𝐱𝐱^{}}^2`$, between two points $`𝐱`$ and $`𝐱^{}`$ on the FN, and the mean squared distance $`\stackrel{~}{r}_{𝐱𝐱^{}}^2`$ between the same two points on a Gaussian network of unit spring constants:
$$K_{\mathrm{eff}}=\stackrel{~}{r}_{𝐱𝐱^{}}^2/r_{𝐱𝐱^{}}^2.$$
(7)
$`\stackrel{~}{r}_{𝐱𝐱^{}}^2`$ can be calculated exactly, while the value of the corresponding $`r_{𝐱𝐱^{}}^2=\langle |𝐫(𝐱)-𝐫(𝐱^{})|^2\rangle `$ can be extracted from MC simulations of the FN with free boundary conditions (i.e., in the absence of external pressure). We simulated a FN of $`56^2=3136`$ atoms and measured (using $`10^7`$ different configurations) $`r_{𝐱𝐱^{}}^2`$ for several pairs of points $`𝐱`$ and $`𝐱^{}`$ at different lattice separations. With these measurements we evaluated the effective spring constants \[using Eq. (7)\], and found, as shown in Fig. 4, that for the FN model $`K_{\mathrm{eff}}\approx 1.96kT/b^2`$. In order to support our conclusion about the crossover into the Gaussian regime, we need to show that the constant value to which the stress drops in Fig. 3 is just the stress applied by a Gaussian net with the spring constant $`K_{\mathrm{eff}}`$ calculated for the non-stressed FN. For a Gaussian net with $`K\approx 1.96`$, one finds that $`\sigma =\sqrt{3}K\approx 3.39kT/b^2`$, which indeed coincides with the value of $`3.4kT/b^2`$ extracted from Fig. 3.
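In code, Eq. (7) and the subsequent stress check reduce to a few lines. The sketch below is illustrative only: the two mean-squared distances are hypothetical placeholders standing in for the exact Gaussian value and the MC measurement, chosen so that the quoted numbers are reproduced:

```python
import numpy as np

def effective_spring_constant(r2_fn, r2_gauss_unit_k):
    """Eq. (7): K_eff is the ratio of the exactly known mean squared distance
    on a unit-spring Gaussian net to the measured FN value."""
    return r2_gauss_unit_k / r2_fn

# Hypothetical inputs for one pair of distant points (units of b^2):
r2_fn_measured = 2.55      # <r^2> from MC on the free-boundary FN
r2_gauss_exact = 5.00      # same pair on a unit-K Gaussian net (exact)
K_eff = effective_spring_constant(r2_fn_measured, r2_gauss_exact)  # ~1.96

# Stress of a triangular Gaussian net with spring constant K (see text):
sigma = np.sqrt(3.0) * K_eff                                       # ~3.39
print(f"K_eff = {K_eff:.2f} kT/b^2, sigma = {sigma:.2f} kT/b^2")
```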
The persistence of the Gaussian regime up to intermediate values of $`\gamma `$ ($`\gamma \approx 3`$) is not unique to 2D nets. Such behavior is also found, for instance, in 1D polymers. Let us consider, for a moment, a chain of $`N\gg 1`$ tethers of maximal length $`b`$, which is stretched by a force $`f`$ to an end-to-end length $`l=Nb_0`$. It is a well-known fact that this chain will be Gaussian, i.e., $`f`$ and $`l`$ will be proportional to each other, provided that $`l`$ does not exceed the order of magnitude of the root mean square size of the chain:
$$l=Nb_0\lesssim \sqrt{N}b.$$
(8)
Yet, one must understand that in order to observe Gaussian elastic behavior it is not essential to apply this criterion (8) to the whole chain, but only to small segments of it. If there exists a certain length scale at which the potential between the atoms becomes effectively quadratic, i.e., can be replaced by a Gaussian spring, then the whole chain is like a chain of Gaussian springs, and is therefore itself Gaussian. For a linear polymer chain, the effective potential between non-neighboring atoms is calculated by integrating out the spatial degrees of freedom of the atoms located between them. Such calculations are usually done iteratively, where at each “rescaling” step every second atom is integrated out. It appears that even elementary potentials which are very different from a parabola are brought into parabolic form within a few “rescaling” steps. For the specific potential used in this work, three steps are sufficient, which means that a segment of $`N\approx 10`$ tethers may justly be considered as an effective Gaussian spring. As for a macroscopically large chain, we expect that the Gaussian nature of this segment will persist as long as it is stretched to a length which does not exceed its root mean square size, namely, as long as $`10b_0\lesssim \sqrt{10}b`$ \[see criterion (8)\]. This relation gives the lower limit, $`\gamma =b/b_0\gtrsim \sqrt{10}\approx 3`$, of the Gaussian regime of a 1D chain of tethers. For a 2D regular phantom net, the effective potential also becomes approximately parabolic at a distance of $`N\approx 10`$ bonds . The root mean square distance between two such points is $`b\sqrt{\mathrm{ln}N}`$. Thus, in order to observe Gaussian elastic behavior, we require that $`10b_0\lesssim b\sqrt{\mathrm{ln}10}`$, or $`\gamma =b/b_0\gtrsim 10/\sqrt{\mathrm{ln}10}\approx 4`$, which is consistent with the value $`\gamma \approx 3`$ observed in Fig. 3.
In summary, we have applied a new “fluctuation” formalism to the MC determination of the stress and elastic constants of stretched tethered networks. These systems provide a convenient framework for studying the entropic contribution to elasticity in real polymeric systems. The Gaussian nature of entropic elasticity, observed for non-stressed phantom nets, was also found when stress was applied. It breaks down only for highly extended networks, close to their full extension. This point has interesting implications for the problem of the critical elastic behavior of gels just above the gel point. As already mentioned in the first paragraph of this paper, it was recently suggested by Plischke, Joós and co-workers that this behavior is dominated by entropy . These authors studied numerically (using a different technique) the elastic behavior, at $`T\ne 0`$, of bond-diluted (percolating) systems in which only a fraction $`p`$ of the bonds is present. Their results in 2D for the critical exponent $`f`$ characterizing the growth of the shear modulus above the percolation threshold $`p_c`$, $`\mu \sim (p-p_c)^f`$, match (within the range of error) the known result for the exponent $`t`$ describing the conductivity of random resistor networks, $`\mathrm{\Sigma }\sim (p-p_c)^t`$. The question is whether this result is universal. For Gaussian networks the identity $`f=t`$ can be proven rigorously . One can further argue that this result also applies to other types of interactions, provided that above a certain finite length scale the network is effectively Gaussian. We have shown here that this property is not always ensured. In a percolation problem, the elastic backbone is inhomogeneous and includes very tenuous parts where the tension applied to the network is distributed among very few strands. Such strands may deviate from Gaussian behavior when high stress is applied. Further complications can arise from excluded-volume effects, which have not been discussed here at all.
This work was supported by the Israel Science Foundation through Grant No. 177/99.
# Evidence for a Black Hole and Accretion Disk in the LINER NGC 4203
## 1 Introduction
Spectroscopic surveys have revealed that a large fraction of nearby early-type galaxies harbor low-ionization nuclear emission-line regions, or LINERs (Heckman 1980; Ho et al. 1997a and references therein). The physical understanding of these sources remains rudimentary, but there are strong indications that at least some LINERs are weak versions of the Seyfert or QSO phenomenon, and hence powered by accretion onto a massive black hole. LINERs are nonetheless distinct from classical AGNs in terms of their characteristic (low) luminosity, emission-line properties, and broad-band spectral energy distribution (Ho 1999). These differences may follow from fundamental disparities in the accretion process operative in luminous and weak AGNs.
The study of emission-line behavior on small spatial scales within galaxy nuclei provides one strategy for probing the energetics, dynamics, and structure of LINERs and related objects. Here we report on observations acquired with the Hubble Space Telescope (HST) for the LINER NGC 4203. The data provide dynamical constraints on a black hole, and reveal line emission that may directly trace an accretion flow in this source. These observations and future follow-up studies will provide an important framework for testing physical models for the structure of LINERs, and the nature of black holes in galaxy nuclei.
## 2 Observations
NGC 4203 was observed with the Space Telescope Imaging Spectrograph (STIS) as part of a spectroscopic survey of nearby weakly active galaxy nuclei. The full details of this program will be reported elsewhere (Rix et al., in preparation). NGC 4203 is of S0 morphological type, seen nearly face-on, and its nucleus was classified spectroscopically by Ho et al. (1997b) as a LINER 1.9 source. NGC 4203 exhibits a recession velocity of 1088 km s<sup>-1</sup>, and we assume a distance for this source of 9.7 Mpc (Tully 1988).
HST observations were acquired on 1999 April 18 UT, with the 0.2″ × 52″ slit placed across the nucleus at position angle (PA) = 105.6°. Two exposures totaling 1630 s and three exposures totaling 2779 s were obtained with the G430L and G750M gratings, resulting in spectra spanning 3300 – 5700 Å and 6300 – 6850 Å with full width at half maximum (FWHM) spectral resolution for extended sources of 10.9 and 2.2 Å, respectively. The telescope was offset by 0.05″ (≈1 pixel) along the slit direction between repeated exposures. The two-dimensional (2-D) spectra were bias- and dark-subtracted, flat-fielded, shifted to a common alignment, combined with repeated exposures to obtain single blue and red spectra, cleaned of residual cosmic rays and hot pixels, and corrected for geometrical distortion. The data were wavelength- and flux-calibrated via standard STSDAS procedures.
## 3 Results
### 3.1 The Unresolved Nucleus
Spectra of the nucleus of NGC 4203, obtained by coadding the 2-D spectra over the central 0.25″ along the slit, are shown in Figure 1. This extraction width represents approximately twice the FWHM of the STIS point-spread function (PSF; i.e., FWHM = 0.12″). The data show strong, low-ionization forbidden-line emission, consistent with the classification of this source as a LINER. The red spectrum displays an additional striking feature in the form of a distinct broad component of H$`\alpha `$, which contributes prominent high-velocity shoulders to the line profile. Ground-based observations of this source obtained with the Palomar 5-m telescope in 1985 and reported by Ho et al. (1997c) also show evidence of broad H$`\alpha `$ emission, but in the earlier spectrum the broad component is well represented by a Gaussian profile with a FWHM of 1500 km s<sup>-1</sup>, far less than the velocity difference of ≈7200 km s<sup>-1</sup> between the profile shoulders evident in Figure 1a.
We examined the properties of the broad H$`\alpha `$ emission in more detail by removing the narrow contributions to the blended line. A synthetic profile for the narrow emission was constructed from the \[S ii\] $`\lambda \lambda `$6716, 6731 lines, and used to model the lines of \[N ii\] $`\lambda \lambda `$6548, 6583, and narrow H$`\alpha `$. The overall emission feature has a central core that strongly resembles the broad H$`\alpha `$ line seen previously in the Palomar spectrum, and we consequently included such a component of “normal” broad-line emission, matched to the profile parameters and redshift of the ground-based observation, while fitting the overall blend. The \[N ii\] doublet was constrained to the flux ratio of 1:3.0 predicted by atomic parameters, and the line strengths were otherwise adjusted in order to produce the smoothest residual profile.
Removal of the narrow lines and the central peak of broad H$`\alpha `$ emission results in the H$`\alpha `$ line profile shown at the bottom of Figure 1a. The profile of the remaining broad pedestal in the 6540 – 6590 Å region is sensitive to the details of the decomposition; however, the profile remains distinctly double-peaked, independent of the scaling of the subtracted components. The broad feature has a full width near zero intensity of at least 12,500 km s<sup>-1</sup>, and a total flux of $`1.2\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. The line appears centered near the narrow-line redshift, although the profile of the broad feature is highly asymmetric, with stronger emission in the blue peak and a more extended wing on the red side. Similar emission is clearly visible in the profile of the H$`\beta `$ feature (Fig. 1b).
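For concreteness, a decomposition of this kind can be sketched as a constrained multi-Gaussian fit. The fragment below (Python/scipy, written for this presentation) is a simplified stand-in: the analysis above used an empirical \[S ii\]-based template for the narrow profiles, whereas a single shared Gaussian width is assumed here for brevity, and the initial-guess values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

C = 2.998e5  # speed of light, km/s

def gauss(x, center, fwhm_kms, flux):
    """Gaussian emission line with its width given as FWHM in km/s."""
    sigma = center * (fwhm_kms / C) / 2.3548
    return flux / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def blend(x, z, fwhm_n, f_ha_n, f_n2, f_ha_b, fwhm_b):
    """Narrow Halpha + [N II] doublet (flux ratio fixed at 1:3) + one
    'normal' broad Halpha Gaussian, all at a common redshift z."""
    model  = gauss(x, 6562.8 * (1 + z), fwhm_n, f_ha_n)       # narrow Halpha
    model += gauss(x, 6548.0 * (1 + z), fwhm_n, f_n2 / 3.0)   # [N II] 6548
    model += gauss(x, 6583.4 * (1 + z), fwhm_n, f_n2)         # [N II] 6583
    model += gauss(x, 6562.8 * (1 + z), fwhm_b, f_ha_b)       # broad core
    return model

# Usage sketch on hypothetical arrays wave, flux (continuum already removed):
# p0 = [1088.0 / C, 300.0, 1e-14, 1e-14, 1e-13, 1500.0]
# popt, pcov = curve_fit(blend, wave, flux, p0=p0)
# residual = flux - blend(wave, *popt)   # double-peaked pedestal remains here
```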
The scaling of the “normal” broad H$`\alpha `$ component employed in the profile decomposition is 84% of that seen in the earlier Palomar observation, which represents reasonable agreement when allowance is made for photometric uncertainties in the ground-based data and ambiguities in the fitting procedure. The degree to which the broad wings have varied is of considerable interest, given the variability of double-peaked emission reported in some other LINERs (e.g., Storchi-Bergmann, Baldwin, & Wilson 1993; Halpern & Eracleous 1994; Bower et al. 1996). To address this question, we conducted tests of the observability of extended H$`\alpha `$ wings in the Palomar spectrum. Shoulders on the H$`\alpha `$ profile resembling those in the HST data might have eluded detection in the Palomar spectrum if this component was present in 1985, due to wavelength-dependent focus variations and the strong stellar continuum measured through the relatively large (2″ $`\times `$ 4″) ground-based effective aperture.
Since we cannot say for sure whether the broad shoulders on the Balmer lines were present in 1985, it is unclear whether the appearance of this emission in 1999 represents a transient event or a more stable attribute of NGC 4203. The fact that the central core of emission has remained roughly constant suggests that the central source has not changed appreciably, in which case the detection of the broad pedestal of emission probably results primarily from the diminished contamination by starlight in the HST aperture, rather than intrinsic variability. HST has revealed similar behavior in NGC 4450, observed as part of our survey (Ho et al. 2000a). The fact that two sources out of a relatively small sample of objects surveyed (24 galaxies total, of which 8 are LINERs) were discovered to have double-peaked emission lines suggests that small-aperture spectroscopy is an important complement to variability studies for uncovering this phenomenon in LINERs.
### 3.2 Gas Kinematics and Mass Modeling
The 2-D spectra of NGC 4203 exhibit spatially resolved narrow emission, which is most readily apparent in H$`\alpha `$ and the \[N ii\] lines. These features are detected out to a distance of ≈1″ (47 pc) in both directions from the nucleus. Line fluxes and radial velocities were measured as a function of position along the slit by performing a simultaneous fit with $`\chi ^2`$ minimization, assuming Gaussian line profiles. The \[S ii\] lines are visible interior to ±0.5″ from the nucleus, and are included in the fits for that region. The \[O i\] $`\lambda \lambda `$6300, 6364 lines are prominent in the nucleus but were not included in the fits; emission in these lines is highly concentrated spatially, and the \[O i\] profiles are also distinctly broader than those of the other forbidden lines in the red spectrum (consistent with the linewidth vs. critical density correlations reported for other LINERs; e.g., Filippenko & Halpern 1984). The continuum underlying the emission features was represented by a straight line, and for columns near the nucleus the broad H$`\alpha `$ emission feature was represented by the combination of three Gaussian profiles, which was successful in isolating the narrow-line emission.
The radial velocity of the narrow-line gas is plotted as a function of position in the top panel of Figure 2. A distinct gradient is seen through the central ±0.5″, which is steepest in the innermost ±0.1″. Velocities level off and show evidence of a significant decline in amplitude at larger distances, possibly due to a warp in the gas disk.
The measurements illustrated in Figure 2 make it possible to probe the central mass distribution in this galaxy, which is of particular interest given the activity seen in the nucleus. With only one slit position, a number of assumptions are necessary to obtain a well-constrained kinematic model. Here, we assumed (1) that the gas is orbiting the nucleus in a coplanar disk at the local circular velocity, (2) that the stellar mass-to-light ratio $`\mathrm{{\rm Y}}`$ is spatially constant and the stellar orbit distribution is approximately isotropic, and (3) that the stars dominate the total mass inside a 1.5″ × 4″ aperture (see Sarzi et al. 2000 for a thorough description of the modeling). We then used the acquisition image taken with STIS to derive the deprojected, PSF-corrected light distribution: it is well described by a cusp with $`\rho _{}(r)\propto r^{-1.55}`$ and a central, presumably nonstellar point source. Using the isotropic, spherical Jeans equation, we can predict from $`\rho _{}(r)`$ the stellar velocity dispersion $`\sigma _{}`$ over a 1.5″ × 4″ aperture as a function of $`\mathrm{{\rm Y}}`$; matching the observed $`\sigma _{}=124`$ km s<sup>-1</sup> within this aperture (Dalle Ore et al. 1991) then dictates the value of $`\mathrm{{\rm Y}}`$, to which we assign a 30% error bar reflecting the geometrical uncertainties in this derivation. For any assumed $`M_{BH}`$ one can then solve for the combination of disk inclination and disk major axis that best matches the data. The lines in the bottom panel of Figure 2 represent such a model sequence in $`M_{BH}`$, where the disk orientation was chosen to optimize the match to the data. While the ultra-broad H$`\alpha `$ emission implies the presence of a black hole, the formal best fit to the extended rotation curve is for a small black-hole mass (i.e., $`\chi ^2`$ is minimized at $`M_{BH}=0`$ M<sub>⊙</sub>). We thus obtain a formal upper limit of $`M_{BH}\lesssim 6\times 10^6`$ M<sub>⊙</sub> at 99.7% confidence.
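The structure of such a kinematic model can be sketched compactly. The fragment below (an illustration, not the Sarzi et al. machinery) combines a point mass with the enclosed mass of an $`r^{-1.55}`$ cusp, for which $`M_{}(<r)\propto r^{1.45}`$, and projects the circular speed along the line of sight; the stellar-mass normalization, inclination, and azimuth in the example are hypothetical:

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, m_bh, m_star_1kpc, slope=1.45):
    """Circular speed from a central point mass plus a power-law cusp
    rho ~ r^-1.55, i.e. enclosed stellar mass M(<r) = m_star_1kpc * r^slope."""
    m_enc = m_bh + m_star_1kpc * r_kpc ** slope
    return np.sqrt(G * m_enc / r_kpc)

def v_los(r_kpc, m_bh, m_star_1kpc, incl_deg, phi_deg):
    """Line-of-sight velocity for a circular orbit in a thin inclined disk,
    at azimuth phi (measured in the disk plane from the major axis)."""
    return (v_circ(r_kpc, m_bh, m_star_1kpc)
            * np.sin(np.radians(incl_deg)) * np.cos(np.radians(phi_deg)))

# Hypothetical comparison at r = 10 pc, with a made-up stellar normalization:
r = 0.010  # kpc
for m_bh in (0.0, 6e6):
    v = v_los(r, m_bh, 1e9, incl_deg=50.0, phi_deg=75.0)
    print(f"M_BH = {m_bh:.0e} Msun: v_los = {v:.1f} km/s at r = 10 pc")
```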
Examination of an archival WFPC2 $`V`$-band image reveals asymmetric patchy absorption suggestive of an inclined (≈50°) dusty disk with a major-axis PA of ≈0°. If this orientation applies to the nebular gas (as suggested by HST imaging studies of other LINERs; Pogge et al. 2000), our slit PA is 15° ± 20° from the disk minor axis. Over this PA range the projection factors change dramatically, and we cannot use the dust-lane information directly to constrain $`M_{BH}`$. Formally, the $`\chi ^2`$ obtained with minor-axis PA = 15° is much worse than for the best fit above. We thus must await spectra at additional PAs or offsets from the nucleus in order to obtain more direct constraints on the disk orientation and black-hole mass (e.g., Bower et al. 1998).
## 4 Discussion
### 4.1 The Black Hole and its Galactic Host
Studies of the velocity field of stars, and occasionally gas, in the centers of nearby bulge-dominated galaxies have generated a list of strong candidates for supermassive black holes (e.g., Kormendy & Richstone 1995; Ho 1998). These studies increasingly suggest that the black-hole mass is correlated with the mass of the stellar bulge of the host galaxy. We consequently examined the black-hole mass in relation to the bulge mass in the case of NGC 4203. This galaxy has a total apparent magnitude of $`B_T=11.61`$, with $`B-V=0.97`$ (de Vaucouleurs et al. 1991). Burstein (1979) decomposed the $`B`$-band photometric profile of NGC 4203 into an exponential disk and an $`r^{1/4}`$-law bulge, and obtained a disk/bulge luminosity ratio of 2.0. For a representative mass-to-light ratio of $`\mathrm{{\rm Y}}_V\approx 6`$, the corresponding bulge mass is $`9\times 10^9`$ M<sub>⊙</sub>; our upper limit of $`M_{BH}\lesssim 6\times 10^6`$ M<sub>⊙</sub> thus implies a black-hole/bulge mass ratio of $`7\times 10^{-4}`$. Although this number is rather uncertain, it falls near the low end of previous estimates for normal galaxies (≈0.002 – 0.006; Magorrian et al. 1998; van der Marel 1998) and overlaps with estimates derived from reverberation measurements of Seyfert nuclei (Wandel 1999). A measurement or a more stringent limit on $`M_{BH}`$ in NGC 4203 would be of interest for gauging whether the black-hole/bulge correlation applies in this source.
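The arithmetic behind these numbers is compact enough to write out. The sketch below is a rough check added here; the solar absolute magnitude $`M_{V,\odot }=4.83`$ and the application of the $`B`$-band disk/bulge ratio to the $`V`$ band are my assumptions, not statements from the sources cited above:

```python
import math

B_T, B_minus_V = 11.61, 0.97
dist_mpc = 9.7
disk_to_bulge = 2.0                 # Burstein (1979), B band
upsilon_V = 6.0                     # representative mass-to-light ratio

mu = 5.0 * math.log10(dist_mpc * 1.0e5)              # distance modulus
V_bulge = (B_T - B_minus_V) + 2.5 * math.log10(1.0 + disk_to_bulge)
M_V_bulge = V_bulge - mu                             # bulge is 1/3 of the light
L_V = 10.0 ** (-0.4 * (M_V_bulge - 4.83))            # bulge luminosity, Lsun
m_bulge = upsilon_V * L_V                            # ~9e9 Msun
print(f"M_bulge ~ {m_bulge:.1e} Msun; M_BH/M_bulge < {6e6 / m_bulge:.0e}")
```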
### 4.2 The Black Hole and its AGN
The luminous output of the LINER in NGC 4203 can be used as a diagnostic of accretion onto the central black hole. Ho et al. (2000b) have quantified the broad-band spectral energy distribution of this source, and estimate a total bolometric luminosity for the nucleus of $`9.5\times 10^{40}`$ erg s<sup>-1</sup>. For the limit on $`M_{BH}`$ obtained from the nebular rotation curve, this result implies a ratio to the Eddington luminosity of $`L/L_{Edd}\gtrsim 1\times 10^{-4}`$.
This result and the broad Balmer profiles in the NGC 4203 nucleus have an appealing consistency in the framework of advection-dominated accretion flow (ADAF) models for accretion onto the central black hole. ADAFs are expected to occur naturally in systems with relatively low accretion rates (e.g., see Narayan, Mahadevan, & Quataert 1998 for a review). For an $`\alpha `$-disk prescription for the accretion flow, an ion torus will form in the inner disk, and advect a significant fraction of the accretion energy into the black hole, when the accretion rate falls below a critical value, $`\dot{M}_{crit}\approx \alpha ^2\dot{M}_{Edd}`$. Here the Eddington accretion rate is defined so that $`L_{Edd}=0.1\dot{M}_{Edd}c^2`$. Adopting a plausible value of $`\alpha \approx 0.3`$ implies that $`\dot{M}_{crit}\approx 0.1\dot{M}_{Edd}`$, corresponding to $`L\approx 0.1L_{Edd}`$, below which the luminosity is expected to scale as $`\dot{M}^2`$. The limiting value of $`L/L_{Edd}`$ in NGC 4203 is in the advection-dominated regime, and corresponds to an accretion rate $`\dot{M}\approx 5\times 10^{-4}M_{\odot }`$ yr<sup>-1</sup>; an improved constraint on $`M_{BH}`$ is of obvious interest for testing this physical picture.
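These numbers can be checked in a few lines. In the sketch below, the quadratic scaling is parametrized as $`L/L_{Edd}=\dot{m}^2/\dot{m}_{crit}`$ with $`\dot{m}`$ in Eddington units; this is one plausible reading of the scaling quoted above, adopted here as an assumption rather than the authors' exact prescription:

```python
# Minimal arithmetic sketch of the Eddington-scaled quantities quoted above.
L_bol = 9.5e40                          # erg/s (Ho et al. 2000b)
m_bh  = 6e6                             # Msun (rotation-curve upper limit)
L_edd = 1.26e38 * m_bh                  # erg/s
l_ratio = L_bol / L_edd                 # ~1e-4, deep in the ADAF regime

mdot_edd = L_edd / (0.1 * (3.0e10) ** 2)            # g/s, L_Edd = 0.1*Mdot_Edd*c^2
mdot_edd_msun_yr = mdot_edd * 3.156e7 / 1.989e33    # ~0.13 Msun/yr

# Assumed parametrization of the quadratic regime: L/L_Edd = mdot^2/mdot_crit.
mdot_crit = 0.1
mdot = (mdot_crit * l_ratio) ** 0.5                 # in Eddington units
print(f"L/L_Edd ~ {l_ratio:.1e}; Mdot ~ {mdot * mdot_edd_msun_yr:.1e} Msun/yr")
```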
ADAFs can account for many properties of luminous radio galaxies, which often resemble NGC 4203 in displaying broad, double-peaked Balmer emission lines (Eracleous & Halpern 1994). These profiles have been modeled in terms of a relativistic disk (Chen & Halpern 1989), with predicted properties notably similar to those seen in NGC 4203 – specifically, stronger emission in the blue peak than in the red, and a sharper cutoff to the blue shoulder than in the red. The line emission presumably arises from an outer part of the accretion flow that remains geometrically thin and may be irradiated by the inner ion torus that is characteristic of ADAFs. Modeling of the emission profile can potentially yield an estimate of the radius at which the transition from thin disk to ion torus occurs, and we defer more detailed exploration of this possibility to a later paper.
Alternative explanations exist for the double-peaked Balmer lines. Possibilities include emission from bipolar outflows or anisotropically illuminated clouds, from gas surrounding binary black holes, or from a tidally disrupted star or other asymmetric circumnuclear structure (Eracleous et al. 1999 and references therein). Studies of double-peaked emission profiles in other LINERs have revealed evolution in the profiles over time, and such observations can potentially be used to distinguish between these possibilities, and to probe the structure of the accretion flow if a disk is in fact responsible. In the case of NGC 1097, variations in the relative heights of the red and blue line peaks are suggestive of emission from an orbiting ring that is azimuthally asymmetric in its emissivity (Storchi-Bergmann et al. 1997). If the elevated blue wing in NGC 4203 arises from an asymmetry of this type, we might expect the peak in emission to eventually oscillate to the red wing. If the timescale for the asymmetry to propagate matches the orbital timescale, then we can predict a rise in the red wing in ≈4 years, based on the velocity width of the emission-line shoulders and our bound on the black-hole mass. Periodic variability on other (probably longer) timescales may be relevant if the asymmetry propagates as a wave or via precession. Follow-up observations thus may yield further important constraints on the structure and evolution of the accretion disk.
## 5 Conclusions
NGC 4203 is the latest addition to a small set of LINERs that exhibit broad, double-shouldered Balmer lines. The discovery of two such objects in our HST survey (NGC 4450 being the other; Ho et al. 2000a) suggests that such emission may be common in LINERs, but often eludes detection in ground-based apertures for which the contrast with the stellar continuum is weak. The line profile in NGC 4203 is noteworthy for its resemblance to emission profiles seen in broad-line radio galaxies, which have been interpreted as the signature of a thin outer accretion disk irradiated by an inner ion torus, representing an advection-dominated flow onto a black hole.
The nucleus of NGC 4203 also exhibits spatially resolved emission that can be used to provide kinematic information on the underlying mass distribution. The existing observations at a single PA restrict the black-hole mass to $`M_{BH}\lesssim 6\times 10^6`$ M<sub>⊙</sub>. The availability of a mass estimate for the black hole is of fundamental importance for studying the physics of the accretion process. Our limiting value for $`M_{BH}`$ is consistent with a sub-Eddington accretion rate and the formation of an ADAF, although a more stringent limit could challenge this picture. Follow-up observations from space, or with small apertures under good seeing conditions on the ground, will make it possible to improve the estimate of the black-hole mass and to study the time evolution of the broad H$`\alpha `$ emission.
###### Acknowledgements.
This research was supported financially through NASA grant NAG 5-3556, and by GO-07361-96A, awarded by STScI, which is operated by AURA, Inc., for NASA under contract NAS5-26555. We thank T. Statler for helpful discussions, and the referee, M. Eracleous, for valuable comments.
# Astrometry of the 𝜔 Centauri Hubble Space Telescope Calibration Field

Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555.
## 1 INTRODUCTION
Walker (1994; hereafter W94) presented ground-based $`BV`$ (Johnson) $`RI`$ (Kron-Cousins) photometry of the Harris et al. (1993; hereafter H93) Hubble Space Telescope (HST) Wide-Field/Planetary Camera (WF/PC) calibration field in the Galactic globular cluster $`\omega `$ Centauri (=NGC 5139=C 1323-1472). This small (100″$`\times `$100″) field is located 13.2 arcmin southwest of the center of $`\omega `$ Cen at 13<sup>h</sup> 25<sup>m</sup> 37<sup>s</sup> -47° 35′ 38″ (epoch J2000.0) \[see Fig. 1 (Plate 59) of H93, and Figs. 1 and 2 of W94\]. Although this field has been observed over 1100 times with the HST, there is no astrometry available for the W94 stars in the published literature. In an effort to encourage various ongoing WFPC2 calibration efforts, I now present astrometry (J2000.0) of the W94 stars in the $`\omega `$ Cen HST calibration field on the International Celestial Reference Frame.
## 2 THE HARRIS ET AL. (1993) STARS
The first step of the astrometric calibration of the W94 stars in the $`\omega `$ Cen HST calibration field is to identify which H93 stars given in Table 4 of W94 are present on a WFPC2 observation of that field.
Table 1 combines all the stars in Table 4 of W94 that have $`x`$ and $`y`$ CCD positions with their astrometry as given in Table 1 of H93. Columns 1–7 of Table 1 are from Table 4 of W94; columns 8 and 9 give the right ascensions and declinations (J2000.0) as listed in Table 1 of H93; column 10 is the identification number given in Table 5 of W94.
I determined a plate solution for the W94 observations using the data in Table 1 and the iraf ccmap task. A simple linear model in $`x`$ and $`y`$ was fitted (6 unknowns) to a tangent sky-projection geometry. The ccmap fit results are presented in the first section of Table 2. Column 1 of Table 2 gives information about the input positions and astrometry; column 2 gives the number of stars in the input dataset; columns 3 and 4 give the fit solution for the plate scale in units of arcsec per pixel (arcsec px<sup>-1</sup>); columns 5 and 6 give the fit solutions for axis rotation in degrees; columns 7 and 8 give the fit rms for the standard astrometric coordinates $`\xi `$ and $`\eta `$ in units of arcsec.
Walker states that the size of his pixels was 0.38 arcsec. This agrees well with the fitted plate scale of 0.39 arcsec px<sup>-1</sup> given in the first section of Table 2. We see in Fig. 1 of W94 that the orientation of the W94 data is North to the left (low values of $`x`$) with East at the bottom (low values of $`y`$). This agrees well with the fitted axis rotations of ≈88° and ≈268° given in the first section of Table 2, and suggests that the CCD camera used by Walker was just slightly out of alignment from a perfect East-West orientation, which would have given values of 90° and 270°, respectively. The respective fit rms values for the standard astrometric coordinates $`\xi `$ and $`\eta `$ were only 73.0 and 40.7 milliarcsec (mas) for the first fit with all 22 stars in Table 1; a smaller subset of 12 stars improves the respective rms fit values to 21.3 and 33.0 mas.
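For readers without iraf at hand, the core of such a fit is a six-parameter linear least-squares solution on the tangent plane. The following sketch (Python, written for this presentation; it is not the ccmap implementation) shows the structure:

```python
import numpy as np

def tangent_project(ra, dec, ra0, dec0):
    """Gnomonic (tangent-plane) projection of (ra, dec) about (ra0, dec0);
    all angles in radians, output standard coordinates (xi, eta) in radians."""
    d = np.sin(dec) * np.sin(dec0) + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0)
    xi = np.cos(dec) * np.sin(ra - ra0) / d
    eta = (np.sin(dec) * np.cos(dec0)
           - np.cos(dec) * np.sin(dec0) * np.cos(ra - ra0)) / d
    return xi, eta

def plate_solution(x, y, xi, eta):
    """Six-parameter linear model xi = a + b*x + c*y (and likewise for eta),
    solved by least squares; returns both coefficient triples and the rms."""
    A = np.column_stack([np.ones_like(x), x, y])
    cx, *_ = np.linalg.lstsq(A, xi, rcond=None)
    cy, *_ = np.linalg.lstsq(A, eta, rcond=None)
    rms_xi = np.sqrt(np.mean((A @ cx - xi) ** 2))
    rms_eta = np.sqrt(np.mean((A @ cy - eta) ** 2))
    return cx, cy, rms_xi, rms_eta

# The plate scale follows from the linear coefficients, e.g.
# scale_x = hypot(cx[1], cy[1]) radians per pixel (times 206265 for
# arcsec px^-1), and the axis rotation from atan2 of the same coefficients.
```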
I chose two WFPC2 exposures, U4PH0101R.C0H (14-s F814W) and U4PH0102R.C0H (100-s F814W), which observed the $`\omega `$ Cen HST calibration field with the WF2 aperture (Biretta et al. 1996) at the same position and the same roll angle. These observations were secured as part of the HST Cycle 7 WFPC2 calibration program CAL/WF2 7929 (PI: Casertano) and were placed in the public data archive at the Space Telescope Science Institute on 1998 March 31. The datasets were recalibrated at the Canadian Astronomy Data Centre and retrieved electronically using a guest account which was established for this purpose.
The astrometric calibration of the W94 stars began by identifying which of the H93 stars in Table 1 were present on the WF2 section of WFPC2 observation U4PH0101R.C0H. This was accomplished by applying the previously derived plate solutions to Table 1 using the iraf cctran task. Even the first fit, with all 22 H93 stars, produced a plate solution which allowed for the unambiguous identification of the H93 stars present on the WF2 section of this 14-s WFPC2 observation. Some of the H93 stars in Table 1 (e.g., H93 star 917 = W94 star 30) do not appear on the WF2 section of this WFPC2 observation; some are probable ground-based blends (e.g., H93 star 1368 = W94 star 23); some have WFPC2 stellar images distorted by CCD defects (e.g., H93 star 1200 = W94 star 33); and some are so bright that they show diffraction spikes even on a short 14-s exposure (e.g., H93 star 1629 = W94 star 1).
The following list of 12 H93 stars is a clean subset of Table 1 which produced the best plate solution in the first section of Table 2, with rms fitting errors of $`\sigma _\xi =21.3`$ mas (0.055 px) and $`\sigma _\eta =33.0`$ mas (0.085 px): 903, 1035, 1056, 1063, 1249, 1461, 1462, 1512, 1522, 1565, 1655, 1686. This fit had a plate scale which was identical in both directions (0.388 arcsec px<sup>-1</sup>) with axis rotation values of 88.056° and 268.079°.
## 3 RELATIVE ASTROMETRY OF THE WALKER (1994) STARS
The next step of the astrometric calibration of the W94 stars in the $`\omega `$ Cen HST calibration field is to obtain relative astrometry of all the stars in Table 5 of W94.
Table 3 gives the W94 $`x`$ and $`y`$ CCD positions and the relative astrometry of the clean subset of 12 H93 stars identified in the previous section. Columns 1–4 of Table 3 are from Table 1; columns 7 and 8 give the $`x`$ and $`y`$ positions of these stars on the WF2 section of the WFPC2 observation U4PH0101R.C0H as determined using the iraf imexamine task; columns 5 and 6 give the right ascensions and declinations (J2000.0) as determined with the stsdas metric task using the U4PH0101R.C0H observation with the WF2 CCD positions given in columns 7 and 8.
I determined a plate solution for the W94 observations using ccmap with the data in Table 3. A simple linear model in $`x`$ and $`y`$ was fitted (6 unknowns) to a tangent sky-projection geometry. The ccmap fit results are presented in the second section of Table 2. This plate solution had rms fitting errors of $`\sigma _\xi =20.3`$ mas (0.052 px) and $`\sigma _\eta =22.3`$ mas (0.058 px); fitted plate scales of 0.387 arcsec px<sup>-1</sup> for both $`x`$ and $`y`$; and axis rotation values of 88.536° and 268.583°. Note that this result is nearly identical to that of the best plate solution derived in the previous section.
I determined relative astrometry (J2000.0) for all of the W94 stars by using cctran with this plate solution and the $`x`$ and $`y`$ CCD positions given in Table 5 of W94. Figure 1 shows the W94 stars (circles) which are present on the WF2 section of the 100-s WFPC2 U4PH0102R.C0H observation; the plotted positions of these W94 stars were determined with the stsdas invmetric task. The two WFPC2 observations, U4PH0101R.C0H and U4PH0102R.C0H, have identical astrometry; I used the deeper observation in Fig. 1 in order to better display the location of the faintest W94 stars.
## 4 ASTROMETRY OF THE WALKER (1994) STARS ON THE ICRF
The final step of the astrometric calibration of the W94 stars in the $`\omega `$ Cen HST calibration field is to place the relative astrometry of the W94 stars on the International Celestial Reference Frame (ICRF).
The plate solution derived in the previous section is ultimately based on the HST Guide Star Catalog, which is known to contain stars with large positional errors. I demonstrate below that this plate solution systematically deviates from the ICRF (J2000.0) by about 1.1 arcsec.
I used the USNO-A2.0 finder chart website (http://ftp.nofs.navy.mil/data/fchpix/) to find the 97 USNO-A2.0 (Monet et al. 1998) astrometric catalog stars within a 2′$`\times `$2′ box centered on the field center of the $`\omega `$ Cen HST calibration field (see Table 4). Columns 2 and 3 of Table 4 give the right ascensions and declinations (J2000.0) on the International Celestial Reference Frame (ICRF); column 4 gives the red plate magnitude; column 1 gives my identification number for these nearby USNO-A2.0 stars.
Table 5 combines the W94 $`x`$ and $`y`$ CCD positions with astrometry from the USNO-A2.0 catalog for 20 W94 stars with probable USNO-A2.0 counterparts. Columns 1–3 of Table 5 are from Table 5 of W94; column 6 gives my identification number of the USNO-A2.0 star in Table 4 which I associate with the W94 star; columns 4 and 5 give the right ascensions and declinations (J2000.0) as listed in Table 4.
I determined plate solutions for the W94 observations using the data in Table 5 and the ccmap task. A simple linear model in $`x`$ and $`y`$ was fitted (6 unknowns) to a tangent sky-projection geometry. The ccmap fit results are presented in the third section of Table 2. The plate solution with 20 W94 stars had rms fitting errors of $`\sigma _\xi =280`$ mas (0.72 px) and $`\sigma _\eta =467`$ mas (1.2 px). The fitting error for $`\eta `$ is considerably larger than the nominal 250 mas positional error of the USNO-A2.0 catalog. The plate solution based on an unambiguous subset of 12 W94 stars had rms fitting errors of $`\sigma _\xi =225`$ mas (0.58 px) and $`\sigma _\eta =248`$ mas (0.64 px), which does agree with the nominal USNO-A2.0 positional errors.
The plate solution based on Table 5 predicts a position for W94 star 2 (= H93 star 1438) of 13<sup>h</sup> 25<sup>m</sup> 35<sup>s</sup>.869 -47° 35′ 27″.66 (J2000.0). I associate this star with the 38th star in Table 4, which has the USNO-A2.0 catalog position of 13<sup>h</sup> 25<sup>m</sup> 35<sup>s</sup>.9327 -47° 35′ 28″.6580 (J2000.0). The differences between these positions in right ascension and declination are $`\mathrm{\Delta }\alpha _{2000}=`$ 0<sup>s</sup>.0637 and $`\mathrm{\Delta }\delta _{2000}=`$ -0″.92, respectively, giving an angular separation of ≈1.12 arcsec.
Observing that the USNO-A2.0 stars in Fig. 1 are systematically offset from the stellar images by ≈1.1 arcsec, we come to the odd conclusion from Table 2 that the plate solutions based on H93 astrometry (Table 1) and HST metric astrometry (Table 3) are considerably more precise (i.e., they have excellent fitting errors) but significantly less accurate (i.e., they have large systematic offsets) than the plate solutions based on USNO-A2.0 astrometry (Table 5).
The precise relative astrometry derived in the previous section can be placed on the International Celestial Reference Frame by simply adding 0 $`\stackrel{\mathrm{s}}{\mathrm{.}}`$0637 and -0 $`\stackrel{}{\mathrm{.}}`$92 to the right ascensions and declinations derived from the plate solution based on metric astrometry of the WF2 section of HST WFPC2 observation U4PH0101R.C0H. The squares shown on Fig. 1 indicate the ICRF positions of the W94 stars; these are offset from the stellar images by $`\sim `$1.1 arcsec on this particular image. The absolute rms errors of the J2000.0 right ascensions and declinations given in Table 6 are estimated to be 0.35 arcsec. This estimate is the sum of the nominal 0.25 arcsec positional error of the USNO-A2.0 catalog and the estimated relative rms positional error of approximately 0.10 arcsec due to the uncertainty of the WFPC2 astrometric calibration as a function of time.
Other WFPC2 observations of this field obtained with different guide stars, roll angles, and apertures may exhibit different offsets between the stellar images on a given WFPC2 CCD and the computed $`(x,y)`$ pixel locations of the W94 stars on that chip as derived using the invmetric task with the Table 6 positions.
In order to identify the W94 stars on any given WFPC2 observation of the $`\omega `$ Cen HST calibration field, the reader should first find a given W94 star (preferably W94 star 2) somewhere in the observation and then measure its $`(x,y)`$ centroid on the particular chip where the star is present<sup>3</sup><sup>3</sup>3For example, the center of W94 star 2 is found at the $`(x,y)`$ location of $`(484.92,266.90)`$ of the WF2 section of the WFPC2 observation U4PH0102R.C0H . . Convert that CCD position to a right ascension and declination using the metric task<sup>4</sup><sup>4</sup>4The following iraf command, metric "u4ph0101r.c0h" x=484.92 y=266.90 , gives $`\alpha _{2000}=`$13<sup>h</sup> 25<sup>m</sup> 35 $`\stackrel{\mathrm{s}}{\mathrm{.}}`$8656 and $`\delta _{2000}=`$-47° 35′ 27 $`\stackrel{}{\mathrm{.}}`$792 . . Next, determine the difference in right ascension ($`\mathrm{\Delta }\alpha _{2000}`$) and declination ($`\mathrm{\Delta }\delta _{2000}`$) between the metric value and the ICRF position of that star as given in Table 6<sup>5</sup><sup>5</sup>5Using iraf sexagesimal notation, we find that $`\mathrm{\Delta }\alpha _{2000}=\alpha _{\text{metric}}-\alpha _{\text{table6}}=(\text{13:25:35.8656})-(\text{13:25:35.933})=-0\stackrel{\mathrm{s}}{\mathrm{.}}067`$ and $`\mathrm{\Delta }\delta _{2000}=\delta _{\text{metric}}-\delta _{\text{table6}}=(\text{-47:35:27.792})-(\text{-47:35:28.58})=+0\stackrel{}{\mathrm{.}}788`$ . . Add $`\mathrm{\Delta }\alpha _{2000}`$ and $`\mathrm{\Delta }\delta _{2000}`$ to all the right ascensions and declinations in Table 6<sup>6</sup><sup>6</sup>6Continuing with the example, $`\alpha _{\text{table6}}+\mathrm{\Delta }\alpha _{2000}=\text{13}\text{h}\text{ 25}\text{m}\text{ 35 }\stackrel{\mathrm{s}}{\mathrm{.}}\text{866}`$ and $`\delta _{\text{table6}}+\mathrm{\Delta }\delta _{2000}=\text{-47°\hspace{0.17em}35′\hspace{0.17em}27 }\stackrel{}{\mathrm{.}}\text{79}`$ . . Finally, determine the $`(x,y)`$ locations of the W94 stars on the individual WFPC2 CCDs using the invmetric task with the modified Table 6 positions<sup>7</sup><sup>7</sup>7 The following iraf command, invmetric u4ph0102r.c0h ra="13:25:35.866" dec="-47:35:27.79" , gives the $`(x,y)`$ CCD location of $`(484.95,266.93)`$ on the WF2 section of U4PH0102R.C0H. 
The small difference (0.058 px) between the computed pixel location and the measured value of $`(484.92,266.90)`$ is due to (1) centroiding errors, (2) round-off errors, and (3) the small difference between the metric value, 13<sup>h</sup> 25<sup>m</sup> 35 $`\stackrel{\mathrm{s}}{\mathrm{.}}`$8656 -47° 35′ 27 $`\stackrel{}{\mathrm{.}}`$792 (determined above), and the plate solution value for W94 star 2, 13<sup>h</sup> 25<sup>m</sup> 35 $`\stackrel{\mathrm{s}}{\mathrm{.}}`$869 -47° 35′ 27 $`\stackrel{}{\mathrm{.}}`$66, which was derived on a chip-by-chip basis in the previous section.
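The offset bookkeeping in this recipe is simple arithmetic on sexagesimal coordinates. A minimal Python sketch, using the example numbers quoted in the footnotes above (the helper functions are hypothetical, not part of iraf):

```python
# Sketch of the offset computation for W94 star 2, assuming the Table 6
# (ICRF) position and the metric-task output quoted in the footnotes.
def hms_to_s(h, m, s):       # right ascension -> seconds of time
    return 3600.0 * h + 60.0 * m + s

def dms_to_arcsec(d, m, s):  # declination -> arcsec (d carries the sign)
    sign = -1.0 if d < 0 else 1.0
    return sign * (3600.0 * abs(d) + 60.0 * m + s)

ra_metric  = hms_to_s(13, 25, 35.8656)
ra_table6  = hms_to_s(13, 25, 35.933)
dec_metric = dms_to_arcsec(-47, 35, 27.792)
dec_table6 = dms_to_arcsec(-47, 35, 28.58)

dra  = ra_metric  - ra_table6    # ~ -0.067 s of time
ddec = dec_metric - dec_table6   # ~ +0.788 arcsec
print(f"dRA = {dra:+.3f} s, dDec = {ddec:+.3f} arcsec")

# These offsets would then be added to every Table 6 position before
# feeding the coordinates to invmetric.
```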
I am grateful to Alistair Walker for sending me a glossy reprint of W94 as well as electronic versions of Tables 4 and 5 from W94. I would also like to thank Lindsey Davis, Mike Fitzpatrick, Stephen Levine, Hugh Harris, Stefano Casertano, Brad Whitmore, J. C. Hsu, and Andy Fruchter for discussions and advice on various aspects of this project. This research was supported by a grant from the National Aeronautics and Space Administration (NASA), Order No. S-67046-F, which was awarded by the Long-Term Space Astrophysics Program (NRA 95-OSS-16). This research has made use of NASA’s Astrophysics Data System Abstract Service.
# Photon frequency conversion induced by gravitational radiation
## I Introduction
Recently there has been an increased interest in gravitational waves, mainly due to the possibility of direct detection by LIGO (Laser Interferometer Gravitational-Wave Observatory) . Naturally, the effects of gravitational waves on Earth are very small – which is illustrated by the large dimensions required for detection. Closer to the source the influence of the gravitational waves may be larger, but generally it is nontrivial to predict the possible influence of the emitted radiation – in particular, the coupling to the electromagnetic field complicates the description. For a discussion of the interaction between electromagnetic fields and gravitational radiation in an astrophysical context, see for example Refs. , and references therein.
In the present paper we will study the propagation of gravitational perturbations in a magnetized plasma, with the direction of propagation perpendicular to the magnetic field. It turns out that large density gradients driven by the gravitational perturbation can be generated, even for small deviations from flat space, provided the cyclotron frequency is much larger than the plasma frequency. Furthermore, as is well known from laboratory plasmas (see, e.g., ), moving density gradients can increase (or decrease) the frequency of electromagnetic wave packets, so-called photon acceleration. The density gradients in our case are propagating with exactly the speed of light, in contrast to the laboratory application . In principle this means that a given photon may increase its energy by several orders of magnitude, independent of its initial energy. Applying our results to gravitational radiation generated by binary systems, it turns out that the regime of most interest is the infrared regime. In this case frequency conversion by an order of magnitude is possible, for a binary system close to merging.
## II Plasma response to a gravitational wave pulse
### A Basic equations
The metric of a linearized gravitational wave propagating in the $`z`$-direction can be written as
$$\mathrm{d}s^2=-\mathrm{d}t^2+[1+h(u)]\mathrm{d}x^2+[1-h(u)]\mathrm{d}y^2+\mathrm{d}z^2,$$
(1)
where we have assumed linear polarization, and $`u\equiv z-t`$. For an observer comoving with the time coordinate, the natural frame for measurements is given by
$$e_0=\partial _t,e_1=\left(1-\frac{1}{2}h\right)\partial _x,e_2=\left(1+\frac{1}{2}h\right)\partial _y,e_3=\partial _z.$$
(2)
It can be shown that in such a frame, Maxwell’s equations can be written
$`\nabla \mathbf{\cdot }𝑬`$ $`=`$ $`\rho /ϵ_0,`$ (4)
$`\nabla \mathbf{\cdot }𝑩`$ $`=`$ $`0,`$ (5)
$`{\displaystyle \frac{\partial 𝑬}{\partial t}}-\nabla \mathbf{\times }𝑩`$ $`=`$ $`-𝒋_E-\mu _0𝒋,`$ (6)
$`{\displaystyle \frac{\partial 𝑩}{\partial t}}+\nabla \mathbf{\times }𝑬`$ $`=`$ $`-𝒋_B,`$ (7)
where the effective gravitational current densities are defined as
$`j_E^1`$ $`=`$ $`j_B^2=\frac{1}{2}(E^1-B^2){\displaystyle \frac{\partial h}{\partial z}},`$ (9)
$`j_E^2`$ $`=`$ $`j_B^1=\frac{1}{2}(E^2+B^1){\displaystyle \frac{\partial h}{\partial z}},`$ (10)
and $`\nabla \equiv (e_1,e_2,e_3)`$.
To first order in $`h`$, the fluid equations become
$`{\displaystyle \frac{\partial n}{\partial t}}+\nabla \mathbf{\cdot }(n𝒗)`$ $`=`$ $`0,`$ (12)
$`\left({\displaystyle \frac{\partial }{\partial t}}+𝒗\mathbf{\cdot }\nabla \right)\gamma 𝒗`$ $`=`$ $`{\displaystyle \frac{q}{m}}(𝑬+𝒗\mathbf{\times }𝑩),`$ (13)
where $`\gamma \equiv (1-v_\parallel ^2)^{-1/2}`$, $`v_\parallel \equiv v_3`$, and $`n=\gamma \stackrel{~}{n}`$, where $`\stackrel{~}{n}`$ is the proper number density. These equations hold for each particle species. Note that, in general, terms proportional to $`v_1h`$ and $`v_2h`$ appear in the equations . Throughout this paper, we will assume that $`v_1,v_2\ll 1`$, and thus neglect terms of order $`v_1h`$, $`v_2h`$.
### B Electromagnetic fields driven by a gravitational perturbation
From now on we assume $`\partial /\partial t\ll \omega _\mathrm{c}`$, where $`\omega _\mathrm{c}\equiv qB/m`$ is the cyclotron frequency, for all particle species (since the gravitational perturbation is assumed to be the driver of all perturbations, this scaling thereby holds for $`\partial /\partial t`$ acting on all fields). Furthermore, we assume the presence of an external magnetic field: $`𝑩_0=B_0e_1`$ \[where the total field is $`𝑩=(B_0+\delta B)e_1`$\]. The electric field takes the form $`𝑬=E_\perp e_2`$.
Looking for solutions driven by the gravitational perturbation, and thus using $`\partial /\partial t=-\partial /\partial z`$, we first consider Faraday’s law for $`\delta B\ll B_0`$, which gives
$$\delta B=-E_\perp +hB_0$$
(14)
Next we note that if the excited fields $`E_\perp `$ and $`\delta B`$ grow (invalidating $`\delta B\ll B_0`$), the quantity $`E_\perp +B`$ that appears in the effective current still satisfies $`E_\perp +\delta B=hB_0`$, and thereby the above formula holds for arbitrary electromagnetic amplitude. Taking the time derivative of Ampere’s law, using Eq. (14), we obtain
$$\left[\frac{\partial ^2}{\partial t^2}-\frac{\partial ^2}{\partial z^2}\right]E_\perp +\mu _0\sum _i\frac{\partial j_{\perp (i)}}{\partial t}=2\frac{\partial ^2h}{\partial t^2}B_0,$$
(15)
where the sum is over particle species, and $`j_\perp \equiv j_2`$. For $`\partial /\partial t=-\partial /\partial z`$, the term (explicitly) involving $`E_\perp `$ vanishes. The currents are determined by the equation of motion, noting that the condition $`\partial /\partial t\ll \omega _\mathrm{c}`$ means that the current contributions from different particle species cancel to lowest order in an expansion in the operator $`\omega _\mathrm{c}^{-1}\partial /\partial t`$. The equation of motion gives
$$v_\parallel =-\frac{E_\perp }{B_0+\delta B}$$
(16)
to lowest order. Note that, using Eq. (14), we can now approximate the denominator in Eq. (16) by $`B_0-E_\perp `$. The error this approximation introduces will not have any noticeable effects. This is because $`v_\parallel `$ can only be altered significantly by the omitted term if $`\delta B\approx -B_0`$, but this regime is inaccessible, since – from Eq. (16) – it corresponds to superluminal speeds. From the parallel component of Eq. (13) we can calculate the first order correction to the induced velocity, which subsequently determines the current. We obtain
$$v_\perp =\frac{m}{q}\frac{1-v_\parallel }{B_0-E_\perp }\frac{\partial (\gamma v_\parallel )}{\partial t}$$
(17)
Furthermore, the continuity equation gives
$$\delta n=\frac{n_0v_\parallel }{1-v_\parallel }$$
(18)
where we have divided the density into a perturbed and an unperturbed part, $`n=n_0+\delta n`$.
From (15) and the relations above we can thus determine the induced velocity and density in terms of the metric perturbation $`h`$. The result (for all particle species) is
$`v_\parallel `$ $`=`$ $`{\displaystyle \frac{1-\left(1-𝒢\right)^2}{1+\left(1-𝒢\right)^2}},`$ (20)
$`\delta n`$ $`=`$ $`{\displaystyle \frac{n_0}{2}}\left[{\displaystyle \frac{1}{\left(1-𝒢\right)^2}}-1\right],`$ (21)
where $`𝒢\equiv 2h/\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$, and $`\omega _{\mathrm{p}(i)}\equiv (q_{(i)}^2n_0/ϵ_0m_{(i)})^{1/2}`$ is the plasma frequency for the unperturbed plasma species $`i`$. Thus it is clear that even a moderate or small value of the gravitational perturbation may cause a significant density perturbation, provided the plasma is strongly magnetized in the sense that $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)\ll 1`$. This is because the fast magnetosonic (or compressional Alfvén) wave fulfills approximately the same dispersion relation as the gravitational wave, with the mismatch being proportional to $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$ . The divergence that occurs for $`𝒢\to 1`$ is clearly unphysical, and its removal will be discussed in the next subsection.
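For a quick feel for how steeply Eqs. (20)–(21) respond as $`𝒢`$ approaches unity, the following Python sketch evaluates them directly, assuming the notation $`𝒢\equiv 2h/\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$ used above (the numerical values of $`h`$ and the magnetization parameter are invented for illustration):

```python
def v_parallel(G):
    """Induced parallel velocity, Eq. (20), in units of c."""
    return (1.0 - (1.0 - G)**2) / (1.0 + (1.0 - G)**2)

def dn_over_n0(G):
    """Relative density perturbation, Eq. (21)."""
    return 0.5 * (1.0 / (1.0 - G)**2 - 1.0)

# Even a tiny metric perturbation h gives G ~ 1 when sum(wp^2/wc^2) ~ 2h,
# i.e. for a sufficiently magnetized plasma (hypothetical numbers below).
h = 1.0e-4
wp2_over_wc2 = 2.5e-4          # assumed magnetization parameter
G = 2.0 * h / wp2_over_wc2     # = 0.8
print(v_parallel(G), dn_over_n0(G))   # -> ~0.92, 12.0
```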
For future considerations it will also be useful to have the relation between the relative magnetic field perturbation and the relative density perturbation. When $`\left|\delta B\right|\gg \left|hB_0\right|`$, which is the case of most interest, the last term of Eq. (14) can be neglected and the desired relation can be derived by combining the resulting formula with Eqs. (16) and (18). The simple result is
$$\frac{\delta n}{n_0}=\frac{\delta B}{B_0}$$
(22)
### C Removal of the divergence
The purpose of this subsection is to explain the reason for the occurrence of the divergence when $`𝒢`$ approaches unity, and to discuss various modifications of the assumptions that lead to a more physical behavior. From Eq. (17) we note that for infinitesimal velocity perturbations, $`v_\perp `$ (and thereby $`j_\perp `$) depends linearly on $`v_\parallel `$, but for large parallel velocities, in particular when $`v_\parallel \to 1`$, $`v_\perp `$ remains finite due to the factor $`1-v_\parallel `$. From Eq. (15) it is thus clear that we cannot have a stationary solution where $`E_\perp `$ depends only on $`z-t`$ for large enough $`h`$, and from Eq. (20) we see that this limit for the gravitational perturbation is reached when $`𝒢`$ becomes unity. Basically, the physical reason is the following: In vacuum the electromagnetic and gravitational modes obey the same dispersion relation, and therefore – due to the mode coupling provided by the unperturbed magnetic field – the system evolves in a non-stationary way. In particular, gravitational wave energy may be continuously converted into electromagnetic wave energy, as will be examined in more detail below. In the presence of a plasma, however, the induced currents change the dispersion relation of the electromagnetic wave, and the resulting detuning of the modes saturates the conversion of energy between them, making a steady state solution (in a frame moving with the velocity of light) possible in principle. For a strongly magnetized plasma, on the other hand, the induced plasma currents cannot grow continuously with $`h`$, as we have seen above. For sufficiently high gravitational amplitude this means that the plasma currents are of little significance; practically, the plasma appears as vacuum for $`𝒢\gtrsim 1`$, and in particular solutions depending only on $`z-t`$ are impossible. This conclusion is not dependent on the absence of thermal effects in our calculations in section II B. Generally, the addition of thermal motion only modifies our expressions (II B) by a factor of the order $`1+(v_t/c)^2`$, where $`v_t`$ is the thermal velocity. In particular, the divergence of (21) still occurs for a finite value of $`𝒢`$.
On the other hand, it is clear that our omission of the back reaction of the electromagnetic wave on the gravitational pulse could in principle change this picture, since obviously certain components of the energy momentum tensor also diverge when $`𝒢\to 1`$, implying that the gravitational wave amplitude could indeed be diminished due to the influence of the generated EM-wave. The effects of the selfconsistent gravitational field caused by the plasma perturbations are discussed in the Appendix, but will be omitted here since it turns out that the backreaction on the gravitational wave is negligible in the application to be discussed in this article.
Since it is clear that for $`𝒢\gtrsim 1`$ the generated currents cannot stop the growth of the EM-wave, we simplify the picture from now on by setting the density to zero and thus totally ignoring the plasma effects. The general solution to Eq. (15) for the electric field in the presence of a monochromatic gravitational wave $`h=\stackrel{~}{h}\mathrm{cos}[k(z-t)]`$ can then be written
$`\delta B`$ $`=`$ $`-E_\perp =\frac{1}{2}k(C_zz+C_tt)B_0\stackrel{~}{h}\mathrm{sin}[k(z-t)]`$ (24)
$`+E_+(z-t)+E_-(z-t),`$
where $`C_z+C_t=1`$ and $`E_+`$ and $`E_-`$ are arbitrary functions. For an initial value problem where the plasma is unperturbed in the absence of the pulse, $`C_z=0,`$ $`C_t=1`$ and $`E_+=E_-=0`$, i.e. the electromagnetic amplitude grows linearly with time. For a boundary value problem, on the other hand, where the external magnetic field $`B_0`$ occupies a region $`z\ge 0`$ and there is a gravitational wave but no EM-waves propagating into the magnetized region, clearly $`C_z=1,`$ $`C_t=0`$ and $`E_+=E_-=0`$, i.e. we have a linear spatial growth instead. For the applications to be discussed later on we will be interested in a situation where $`B_0`$ is not necessarily static. We thus note that qualitatively the solution given by Eq. (24) still applies for a quasi-static situation, i.e. where the dependence of $`B_0`$ on time is slow enough such that the electric field $`E`$ associated with the time variations fulfills $`E/B_0\ll 1`$.
In principle we can achieve very large EM-wave amplitudes also when we abandon the specific solutions depending on $`z-t`$. However, since the growth is only linear in $`t`$ and/or $`z`$, apparently we need large times/distances of coherent interaction. For a boundary value problem we can roughly define the effective distance of interaction $`z_{\mathrm{eff}}`$ through
$$\delta B_{\mathrm{max}}\sim z_{\mathrm{eff}}B_{0,\mathrm{char}}h_{\mathrm{char}}^{\prime }$$
(25)
where the index $`\mathrm{char}`$ denotes the characteristic values of the various quantities in the region of interest, and the prime denotes differentiation with respect to the argument.
To summarize: Eq. (21) has a class of physically sound solutions, but also unphysical ones with the property $`\delta n\to \mathrm{}`$ as $`𝒢\to 1.`$ The singular behavior is caused by the insistence on looking for solutions that move with a specific velocity, together with the omission of the selfconsistent gravitational field from the plasma perturbations. The divergent solutions can be removed either by considering a boundary or an initial value problem, as discussed in this subsection, or by considering the backreaction of the plasma perturbations on the gravitational wave, as discussed in the Appendix. The alternative considered here is the most relevant one with regard to astrophysical applications. Real astrophysical systems have finite distances of interaction between gravitational waves and plasma waves, which can be estimated on physical grounds. Thus when estimating the maximum magnetic field perturbation that can be produced by a gravitational wave in a given situation, we can in principle apply solutions (21) together with (22), but we must note the upper bound for $`\delta B_{\mathrm{max}}`$ that exists for a given $`z_{\mathrm{eff}}`$ and is given by Eq. (25).
## III Photon frequency shift
We now consider the effect of the gravitational wave perturbations on high frequency photons in a plasma. For simplicity we assume that the photons propagate parallel to the gravitational waves and let them be represented by the vector potential $`𝑨=\stackrel{\mathbf{~}}{𝑨}\mathrm{exp}(\mathrm{i}\theta )+\mathrm{c}.\mathrm{c}.`$, where $`\mathrm{c}.\mathrm{c}.`$ stands for complex conjugate. Adopting the approach of geometrical optics , the wave number $`k\equiv \partial _z\theta `$ and frequency $`\omega \equiv -\partial _t\theta `$ satisfy some local dispersion relation $`\omega =W(z,t,k)`$. The amplitude of the vector potential is assumed small, and by high frequency photons we mean $`\omega \gg \omega _{\mathrm{p}(i)},\omega _{\mathrm{c}(i)}`$.
Due to the gravitational waves the plasma has a background of possibly large fields $`\delta n`$, $`v_\parallel `$, $`\delta B`$, and $`E_\perp `$, all being functions of $`z-t`$ and varying on time and length scales much longer than those of $`𝑨`$.
Since $`\omega \gg \omega _{\mathrm{c}(i)}`$, the high frequency pulse approximately behaves as if the plasma were unmagnetized. The equation of motion linearized in the high frequency (hf) variables reads
$$\left[\frac{\partial }{\partial t}+v_\parallel \frac{\partial }{\partial z}\right]𝒗_{(i)}^{\mathrm{hf}}=-\frac{q_{(i)}}{\gamma m_{(i)}}\left(\frac{\partial 𝑨}{\partial t}+v_\parallel \frac{\partial 𝑨}{\partial z}\right),$$
(26)
and thus $`𝒗_{(i)}^{\mathrm{hf}}=-q_{(i)}𝑨/\gamma m_{(i)}`$, where the large scale variations have been neglected. The induced high frequency current is therefore $`𝒋^{\mathrm{hf}}=-\omega _\mathrm{p}^2𝑨/\mu _0`$, where the plasma frequency is $`\omega _\mathrm{p}\equiv (\sum _iq_{(i)}^2n/ϵ_0m_{(i)}\gamma )^{1/2}`$. Taking the time derivative of Ampere’s law gives the following wave equation for the photons
$$\left[\frac{\partial ^2}{\partial t^2}-\frac{\partial ^2}{\partial z^2}+\omega _\mathrm{p}^2\right]𝑨=0.$$
(27)
We recognize the dispersion relation as $`\omega =[k^2+\omega _\mathrm{p}^2(z-t)]^{1/2}`$, where we assume that the variations in the plasma frequency are determined from Eqs. (II B) together with $`\gamma =(1-v_\parallel ^2)^{-1/2}`$.
The change in the wave number and frequency as the wave propagates through the nonuniform and time varying medium with velocity $`v_g=\partial \omega /\partial k`$ is given by the ray equations
$$\frac{dk}{dt}=-\frac{\partial W}{\partial z},\frac{d\omega }{dt}=\frac{\partial W}{\partial t}.$$
(28)
Note that $`W`$ is a function of $`z-t`$, and introduce coordinates $`\xi =z-v_gt`$, $`\tau =t`$ locally moving with the photons, i.e. it should be understood that $`v_g=v_g(\tau =\tau _0)`$ for some $`\tau _0`$. Then, in a small neighborhood of $`\tau _0`$ it holds that $`d\omega /d\tau =-\partial W/\partial \xi `$. Using $`\partial _\xi =-(1-v_g)^{-1}\partial _\tau `$, this can be integrated from time 1 to 2 (which need not be a small interval), noting that $`1-v_g\approx \omega _\mathrm{p}^2/2\omega ^2`$. The result is
$$\frac{\omega _1}{\omega _2}=\frac{\omega _{\mathrm{p1}}^2}{\omega _{\mathrm{p2}}^2},$$
(29)
where the indices $`1`$ and $`2`$ denote the values at $`\tau _1`$ and $`\tau _2`$, respectively. An interesting aspect of Eq. (29) is that the frequency conversion factor $`N=\omega _1/\omega _2`$ is independent of the frequency regime of the EM-wave. Thus, in principle, x-rays can be turned into gamma rays, just as well as infra-red waves can be converted into the visible regime. This is in contrast to laser excited wake fields , where efficient frequency shifts can only take place provided the frequency of the converted pulse roughly lies in the same frequency regime as the exciting laser pulse. The reason for the difference is that the density gradients propagates with exactly the speed of light in our case, whereas, naturally, the corresponding velocity is slightly less than $`c`$ in the laboratory experiments. The necessary distance of acceleration for a given conversion factor $`N`$ is proportional to $`\omega ^2`$, however, and this puts certain limits for the applicability to the highest frequency regimes.
## IV Example
We have found that large density perturbations traveling with the velocity of light can be induced by small gravitational wave perturbations, provided the cyclotron frequency is much larger than the plasma frequency, as described by Eqs. (II B). Furthermore, photons propagating in a moving density gradient can undergo frequency up-conversion (or down-conversion), as described by Eq. (29). In principle the effects can be large, even for a moderate deviation from flat space-time. It is not yet clear that the predicted frequency conversion can be observed under reasonable conditions, however, and our aim in this section is to provide estimates to shed light on this question. In this section we reinstate the speed of light in all expressions.
As a source of gravitational radiation we consider a binary system. At least one of the objects should have a moderate to strong magnetic field (in order to make the parameter $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$ small), and the objects should be compact (so as to make the gravitational wave frequency and amplitude before merging large). Thus, for definiteness (and calculational simplicity due to symmetries) we assume that the system consists of two neutron stars of equal mass $`M_{\odot }`$, separated by a distance of $`40R_S`$, where $`R_S=2GM_{\odot }/c^2\approx 3`$ km. Furthermore, the surface magnetic field of each neutron star is assumed to be $`10^6`$ T. For the unperturbed plasma density profile, see Fig. 1.
The area surrounding the binary system can loosely be divided into three regions (Fig. 1). The interval $`20R_S`$–$`30R_S`$ from the center of mass (CM) roughly constitutes region I, which is the region where most of the gravitational energy is gained by the EM-wave. Using a Newtonian approximation, with $`d=\alpha R_S`$ and $`r=\beta R_S`$, it is straightforward to show that $`|h|\sim (2\alpha \beta )^{-1}`$, where $`d`$ is the separation distance between the binary objects and $`r`$ is the observation distance from the center of mass of the system. In order to obtain an estimate of the amplitude of the generated EM-wave, we combine the above expression for the gravitational wave amplitude with Eq. (25) and the data given above. The result is
$$\frac{\delta B}{B_0}\approx 7\times 10^{-5}$$
(30)
at the end of region I. In region II (approximate interval $`30R_S`$–$`3500R_S`$ from the CM) $`\delta B/B_0`$ is still small, and – as seen from Eq. (22) – the relative density perturbation is thereby small as well, which limits the frequency conversion effect in this region. However, the gravitationally induced EM-wave suffers spherical attenuation, whereas the unperturbed magnetic field is that of a dipole, and consequently the relative density perturbation grows quadratically with distance. The end of region II is defined as the necessary distance to make $`\delta B/B_0`$ of the order unity due to this increase. (For pulsars with periods longer than $`\sim 35`$ ms, regions I and II lie in the near zone, and thus the unperturbed magnetic field indeed decays cubically in the region of interest, although the unperturbed field becomes a radiation field outside the light cylinder of the pulsar.) In region III (approximate interval $`3500R_S`$–$`10^6R_S`$), the relative density perturbation is appreciable, and thus the main frequency conversion occurs here .
At the beginning of region III the relative density perturbation is $`\delta n/n_0\sim 1`$, in agreement with Eq. (22). An EM-wave with initial frequency $`\omega \sim \omega _{\mathrm{min}}=10^{12}\mathrm{rad}/\mathrm{s}`$ can move from a density minimum to a density maximum during a “laboratory system distance” $`L_{\mathrm{freq}}=cT_{\mathrm{freq}}=cL_{\mathrm{grad}}/(c-v_g)\sim \omega _{\mathrm{max}}^2L_{\mathrm{grad}}/\omega _\mathrm{p}^2`$, where $`L_{\mathrm{grad}}`$ is a typical density gradient scale length. For definiteness we assume that the pulsars have periods of the order of 350 ms, in which case $`\delta B/B_0`$ may increase to $`\delta B/B_0\sim 10`$ for most of region III. In our example the maximum frequency magnification $`N`$ thus is
$$N=\frac{\omega _{\mathrm{max}}}{\omega _{\mathrm{min}}}=\frac{\omega _{\mathrm{p},\mathrm{max}}^2}{\omega _{\mathrm{p},\mathrm{min}}^2}\approx 10.$$
(31)
Inserting $`\omega _{\mathrm{max}}=10^{13}\mathrm{rad}/\mathrm{s}`$, and letting $`\omega _{\mathrm{p},\mathrm{max}}=10^{11}\mathrm{rad}/\mathrm{s}`$ (corresponding to $`n_0\sim 10^{12}\mathrm{cm}^{-3}`$), we obtain $`L_{\mathrm{freq}}\sim 10^6R_\mathrm{S}`$, i.e. the acceleration can take place within region III. Strictly applying our one-dimensional calculations of Sec. III means that frequency up-converted EM-waves will be down-converted and vice versa, if the gravitational source and the induced density perturbation are indeed periodic. In our example, on the other hand, the successive frequency conversion effects will decrease with the distance from the source, and thus for an earth based observer the radiation generated in region III should show periodic up- and down-conversions. The frequency conversion ratio of Eq. (31) is of course a maximum value of our example, which occurs for radiation generated at a density extremum, but all radiation generated in region III will be up- or down-converted by a factor in the interval $`1`$–$`N`$, and consequently the effect should be observable provided the object is close enough for radiation generated in region III, in the approximate frequency interval $`10^{11}\mathrm{rad}/\mathrm{s}\lesssim \omega \lesssim 10^{14}\mathrm{rad}/\mathrm{s}`$, to be detected, where the upper limit is imposed by the fact that the system has a finite distance of interaction. If we try to increase the interaction efficiency by considering higher plasma densities, the electromagnetic wave damping due to Thomson scattering becomes prohibitive.
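These regional estimates are easy to sanity-check numerically. A short sketch follows; assumed values are flagged in the comments, and $`L_{\mathrm{grad}}`$ in particular is a free assumption here, not a number given in the text:

```python
import numpy as np

# Gravitational wave amplitude in region I: |h| ~ 1/(2*alpha*beta),
# with alpha = d/R_S = 40 and beta = r/R_S.
alpha = 40.0
for beta in (20.0, 30.0):
    print(f"|h|(r = {beta:.0f} R_S) ~ {1.0/(2.0*alpha*beta):.1e}")

# Region II: delta_B/B_0 ~ 7e-5 at 30 R_S, growing as r^2 because the
# EM wave falls off as 1/r against a dipole background field ~ 1/r^3.
r_end = 30.0 * np.sqrt(1.0 / 7e-5)
print(f"delta_B ~ B_0 at r ~ {r_end:.0f} R_S")   # ~3.6e3, start of region III

# Region III: distance needed for the full frequency shift,
# L_freq ~ (omega_max/omega_p)^2 * L_grad.
omega_max, omega_p = 1.0e13, 1.0e11          # rad/s
L_grad = 100.0                               # hypothetical, in units of R_S
print(f"L_freq ~ {(omega_max/omega_p)**2 * L_grad:.0e} R_S")   # ~1e6 R_S
```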
## V Summary and discussion
We have considered the generation of traveling density perturbations in a magnetized plasma induced by gravitational radiation. Provided $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)\ll 1`$, significant density perturbations, i.e. $`\delta n/n_0\sim 1`$, can be induced even by a small gravitational wave with $`h\ll 1`$, provided $`𝒢\sim 1`$. Basically the large effect is possible because of the approximate agreement of the dispersion relations of the fast magnetosonic and gravitational modes in the regime $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)\ll 1`$, which in turn allows for a long distance of coherent interaction.
In order to find a mechanism where the induced density perturbations may give rise to earth-based observational effects, we have studied frequency conversion of electromagnetic wave packets traveling in the moving density gradients. The formula (29), relating the frequency of the wave packet for two different positions in the moving density profile, is in conceptual agreement with the corresponding results of Ref. , which considered an analogous situation but where the density perturbation was due to plasma oscillations traveling with a phase velocity slightly less than the speed of light $`c`$. In our case the gradients move with exactly $`c`$, however, and thereby the maximum frequency conversion factor $`N`$ does not decrease with the initial frequency \[as for conventional photon acceleration\], in principle allowing for up-conversion even of $`\gamma `$-rays.
The idealizations made in Secs. II and III are somewhat too strong for our results to be directly applicable to a situation of astrophysical relevance. In particular, we cannot consider the unperturbed plasma as homogeneous and the geometry as one-dimensional when making estimates. In our example with a binary system as a source of gravitational radiation, we have thus been forced to divide the neighborhood of the system into three regions: region I where most of the energy transfer into electromagnetic wave energy occurs, region II where the relative density perturbation grows, and region III where the frequency conversion takes place. In order to describe the physics in region I adequately we must abandon solutions that depend on $`z-ct`$ only, and the basis for this has been discussed in Sec. II C. By making estimates based on our analytical calculations, we conclude that the gravitational waves emitted by a system of binary pulsars close to merging may result in periodic frequency up- and down-conversions of electromagnetic radiation in the infrared part of the spectrum. The frequency of the up- and down-conversions coincides with the gravitational wave frequency, i.e. it is twice the orbital frequency.
## VI Appendix
In this appendix we are going to investigate the regime of validity of the multi-component test fluid approach. Normally we think that by continually decreasing the parameters proportional to the unperturbed energy density, at some point the fluid in an external gravitational field can be treated as a test fluid. In our case the situation is not quite that simple, since we can decrease the electromagnetic ($`B_0^2`$) and the rest mass energy density ($`n_0`$) at the same rate, keeping $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$ constant. Since our solution in section II B has a diverging energy-momentum tensor whenever $`𝒢=2h/\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)\to 1`$, clearly we cannot justify the test fluid approach simply by assuming a sufficiently low unperturbed energy density. To shed light on the physical effects due to the selfconsistent gravitational field, we will first consider the linearized theory. This will provide a guide for making estimates of the regime of validity of our (nonlinear) test matter solution in section II B, and also makes it possible to justify the omission of selfconsistent gravitational effects in section IV .
We divide all quantities into an unperturbed part (i.e. the value in the absence of the gravitational perturbation) and a perturbed part. We note that the only variables that are nonzero in the unperturbed state are the density ($`=n_0`$), the magnetic field ($`=B_0e_1`$) and the metric ($`=\eta _{\mu \nu }`$). It should be emphasized that in addition to the direct effect on the dispersion relation from the matter, which we will study below, there is also an indirect contribution (that will be omitted here) to the dispersion relation from the background curvature produced by the (unperturbed) matter. In the regime where the gravitational wave length is much shorter than the background curvature, however, the shortwave approximation can be applied, which implies that these two effects can be studied separately and their contributions to the dispersion relation of the gravitational wave can be added, see e.g. Ref. . In the above scenario (provided thermal effects are still neglected) the only effects from the gravitational wave on the plasma perturbations are from the effective currents in (9)-(10), where, in the present case, we have $`j_E^2=j_B^1=(1/2)B_0(\partial h/\partial z)`$ and the other components are zero. Thus, using Maxwell’s equations and the set of fluid equations for each particle species and the same approximations as in section II (but avoiding the ansatz $`\partial /\partial t=-\partial /\partial z`$), we obtain a wave equation for the fast magnetosonic wave, modified from the standard textbook form by allowing for an arbitrary value of $`\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)`$, and with a gravitational ”source term” due to the effective gravitational currents above. The result is
$$\left(-\frac{\partial ^2}{\partial t^2}+\frac{C_A^2}{1+C_A^2}\frac{\partial ^2}{\partial z^2}\right)\delta B=-2\frac{\partial ^2h}{\partial t^2}B_0$$
(32)
where we have introduced the Alfvén velocity $`C_A=\left(\sum _i(\omega _{\mathrm{p}(i)}^2/\omega _{\mathrm{c}(i)}^2)\right)^{-1/2}`$. (Note that $`C_A`$ may be larger than unity, but, as can be seen above, the actual magnetosonic wave velocity is smaller than or equal to $`C_A`$.) The system is closed selfconsistently by Einstein’s field equations, which, after linearization, reduce to (cf. Eq. 4.9 in Ref. )
$$\left(-\frac{\partial ^2}{\partial t^2}+\frac{\partial ^2}{\partial z^2}\right)h=16\pi G[T_{11}-T_{22}]_{\mathrm{lin}}=\frac{16\pi G}{\mu _0}B_0\delta B$$
(33)
where $`\mathrm{lin}`$ stands for ”linear part of”. It is simple to combine Eqs. (32) and (33) into a single wave equation for the coupled fast magnetosonic and gravitational mode. However, it is probably more illustrative to proceed by considering the corresponding dispersion relation. Making a plane wave ansatz, $`\delta B=\stackrel{~}{\delta B}\mathrm{exp}[i(kz-\omega t)]`$ and $`h=\stackrel{~}{h}\mathrm{exp}[i(kz-\omega t)]`$, we directly find the dispersion relation:
$$\omega ^2-k^2=\frac{32\pi GB_0^2}{\mu _0}\left(\frac{\omega ^2}{\omega ^2-k^2C_A^2/(1+C_A^2)}\right)$$
(34)
from Eqs. (32) and (33). Thus the presence of matter causes a phase velocity $`\omega /k>1`$ and a group velocity $`d\omega /dk<1`$. A further consequence is that the gravitational wave also becomes dispersive. Apparently the relation between $`\stackrel{~}{\delta B}`$ and $`\stackrel{~}{h}`$ is
$$\stackrel{~}{\delta B}=2B_0\stackrel{~}{h}\left(\frac{\omega ^2}{\omega ^2-k^2C_A^2/(1+C_A^2)}\right)$$
(35)
where the omission of the selfconsistent gravitational field is a valid approximation only if we can use the vacuum dispersion relation $`\omega ^2-k^2=0`$ as an approximation instead of Eq. (34) when calculating $`\stackrel{~}{\delta B}`$ from (35). From now on we will focus on the regime $`C_A\gg 1`$, which makes the magnetosonic phase velocity close to unity. Since the (typically small) right hand side of (34) now must be compared to the small phase velocity difference of the (uncoupled) magnetosonic and gravitational waves, the condition for omitting the selfconsistent gravitational field is significantly stronger if one is to obtain an approximately correct magnetic field, and not just a small contribution from the right hand side in the dispersion relation (34). For $`C_A\gg 1`$ the condition for omitting the selfconsistent gravitational field, and still obtaining an approximate expression for $`\stackrel{~}{\delta B},`$ becomes:
$$\frac{32\pi GB_0^2}{\mu _0}\ll \frac{\omega ^2}{C_A^4}$$
(36)
The above validity condition is obtained by comparing the magnetic field obtained from the full selfconsistent dispersion relation with its vacuum approximation. A much simpler way to arrive at the same condition as in (36) is to demand that the relative contribution from the energy momentum tensor terms in Einstein’s equations should be much smaller than the relative velocity difference between the magnetosonic and gravitational waves. The advantage of this latter formulation of the validity condition is that it can easily be applied also when the relation between $`\delta B`$ and $`h`$ as well as the expression for the energy momentum tensor are nonlinear. Adopting this condition for omitting the selfconsistent gravitational field when the plasma response to the metric perturbation is nonlinear, we write
$$32\pi G\mathrm{max}(\delta T)\ll \frac{\omega _{\mathrm{char}}^2}{C_A^2}h_{\mathrm{char}}$$
(37)
where $`\mathrm{max}(\delta T)`$ denotes the maximum deviation from the unperturbed value of the perturbed energy momentum tensor for any of its components, and the index ”char” denotes the characteristic value of the gravitational wave frequency and metric perturbation, respectively. For the regime where Eq. (37) is violated, obviously our solution in section II B must be modified to take the selfconsistent gravitational field into account, and this may result in new types of solutions describing, for example, nonlinear solitary gravitational pulses. This problem is outside the scope of our present article, however. We note that our example in section IV fulfills the validity condition (37) with a margin of several orders of magnitude.
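Since Eq. (34) is a quadratic in $`\omega ^2`$, its two branches can be evaluated directly. The following sketch adopts the sign conventions of the reconstruction above and lumps $`32\pi GB_0^2/\mu _0`$ into a single free parameter kappa, so it should be read as illustrative, not definitive:

```python
import numpy as np

def branches(k, kappa, CA):
    """Roots in omega of Eq. (34), written as a quadratic in w2 = omega^2:
    (w2 - k^2)(w2 - k^2*cms2) = kappa*w2, with cms2 = CA^2/(1+CA^2), c = 1."""
    cms2 = CA**2 / (1.0 + CA**2)
    b = k**2 * (1.0 + cms2) + kappa
    disc = np.sqrt(b**2 - 4.0 * k**4 * cms2)
    return np.sqrt((b + disc) / 2.0), np.sqrt((b - disc) / 2.0)

# kappa stands for 32*pi*G*B_0^2/mu_0 in these units (assumed tiny here).
k, CA = 1.0, 10.0
w_grav, w_ms = branches(k, kappa=1.0e-6, CA=CA)
print(w_grav / k)                              # slightly superluminal branch
print(w_ms / k, CA / np.sqrt(1.0 + CA**2))     # magnetosonic branch vs cms
```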
# PLANET observations of microlensing event OGLE-1999-BUL-23: limb darkening measurement of the source star
## 1 Introduction
In point-source-point-lens (PSPL) microlensing events, the light curve yields only one physically interesting parameter, the characteristic time scale of the event, $`t_\mathrm{E}`$, which is a combination of the mass of the lens and the source-lens relative parallax and proper motion. In reality, however, a greater variety of events than PSPL has been observed, and using deviations from the standard light curve, one can deduce more information about the lens and the source. The Probing Lensing Anomalies NETwork (PLANET) is an international collaboration that monitors events in search of such anomalous light curves using a network of telescopes in the southern hemisphere (Albrow et al., 1998).
One example of information that can be extracted from anomalous events is the surface brightness profile of the source star (Witt, 1995). In a binary or multiple lens system, the caustic is an extended structure. If the source passes near or across the caustic, drastic changes in magnification near the caustics can reveal the finite size of the source (Gould, 1994; Nemiroff & Wickramasinghe, 1994; Witt & Mao, 1994; Alcock et al., 1997), and one can even extract its surface-brightness profile (Bogdanov & Cherepashchuk, 1996; Gould & Welch, 1996; Sasselov, 1997; Valls-Gabaud, 1998).
The fall-off of the surface brightness near the edge of the stellar disk with respect to its center, known as limb darkening, has been extensively observed in the Sun. Theories of stellar atmospheres predict limb darkening as a general phenomenon and give models for different types of stars. Therefore, measurement of limb darkening in distant stars other than the Sun would provide important observational constraints on the study of stellar atmospheres. However, such measurements are very challenging with traditional techniques and have usually been restricted to relatively nearby stars or extremely large supergiants. As a result, only a few attempts have been made to measure limb darkening to date. The classical method of tracing the stellar surface brightness profile is the analysis of the light curves of eclipsing binaries (Wilson & Devinney, 1971; Twigg & Rafert, 1980). However, the current practice in eclipsing-binary studies usually takes the opposite approach to limb darkening (Claret, 1998a) – constructing models of light curves using theoretical predictions of limb darkening. This came to dominate after Popper (1984) demonstrated that the uncertainty of limb darkening measurements from eclipsing binaries is substantially larger than the theoretical uncertainty. Since the limb-darkening parameter is highly correlated with other parameters of the eclipsing binary, fitting for limb darkening could seriously degrade the measurement of these other parameters. Multi-aperture interferometry and lunar occultation, which began as measurements of the angular sizes of stars, have also been used to resolve the surface structures of stars (Hofmann & Scholz, 1998). In particular, a large wavelength dependence of the interferometric size of a stellar disk has been attributed to limb darkening, and higher order corrections to account for limb darkening have been widely adopted in the interferometric angular size measurement of stars. Several recent investigations using optical interferometry extending beyond the first null of the visibility function have indeed confirmed that the observed patterns of the visibility function contradict a uniform stellar disk model and favor a limb-darkened disk (Quirrenbach et al., 1996; Hajian et al., 1998) although these investigations have used a model prediction of limb darkening inferred from the surface temperature rather than deriving the limb darkening from the observations. However, at least in one case, Burns et al. (1997) used interferometric imaging to measure the stellar surface brightness profile with coefficients beyond the simple linear model. In addition, developments of high resolution direct imaging in the last decade using space telescopes (Gilliland & Dupree, 1996) or speckle imaging (Kluckers et al., 1997) have provided a more straightforward way of detecting stellar surface irregularities. However, most studies of this kind are still limited to a few extremely large supergiants, such as $`\alpha `$ Ori. Furthermore, they seem to be more sensitive to asymmetric surface structures such as spotting than to limb darkening.
By contrast, microlensing can produce limb-darkening measurements for distant stars with reasonable accuracy. To date, limb darkening (more precisely, a set of coefficients of a parametrized limb-darkened profile) has been measured for source stars in three events, two K giants in the Galactic bulge and an A dwarf in the Small Magellanic Cloud (SMC). MACHO 97-BLG-28 was a cusp-crossing event of a K giant source with extremely good data, permitting Albrow et al. (1999a) to make a two-coefficient (linear and square-root) measurement of limb darkening. Afonso et al. (2000) used data from five microlensing collaborations to measure linear limb darkening coefficients in five filter bandpasses for MACHO 98-SMC-1, a metal-poor A star in the SMC. Although the data for this event were also excellent, the measurement did not yield a two-parameter determination because the caustic crossing was a fold-caustic rather than a cusp, and these are less sensitive to the form of the stellar surface brightness profile. Albrow et al. (2000a) measured a linear limb-darkening coefficient for MACHO 97-BLG-41, a complex rotating-binary event with both a cusp crossing and a fold-caustic crossing. In principle, such an event could give very detailed information about the surface brightness profile. However, neither the cusp nor the fold-caustic crossing was densely sampled, so only a linear parameter could be extracted.
In this paper, we report a new limb-darkening measurement of a star in the Galactic bulge by a fold-caustic crossing event, OGLE-1999-BUL-23, based on the photometric monitoring of PLANET.
## 2 OGLE-1999-BUL-23
OGLE-1999-BUL-23 was originally discovered towards the Galactic bulge by the Optical Gravitational Lensing Experiment (OGLE) <sup>1</sup><sup>1</sup>1The OGLE alert for this event is posted at http://www.astrouw.edu.pl/~ftp/ogle/ogle2/ews/bul-23.html (Udalski et al., 1992; Udalski, Kubiak, & Szymański, 1997). The PLANET collaboration observed the event as a part of our routine monitoring program after the initial alert, and detected a sudden increase in brightness on 12 June 1999.<sup>2</sup><sup>2</sup>2 The PLANET anomaly and caustic alerts are found at http://www.astro.rug.nl/~planet/OB99023cc.html Following this anomalous behavior, we began dense (typically one observation per hour) photometric sampling of the event. Since the source lies close to the (northern) winter solstice ($`\alpha =18^h07^m45\stackrel{\mathrm{s}}{\mathrm{.}}14`$, $`\delta =27\mathrm{°}33\mathrm{}15\stackrel{}{\mathrm{.}}4`$), while the caustic crossing occurred nearly at the summer solstice (19 June 1999), and since good weather at all four of our southern sites prevailed throughout, we were able to obtain nearly continuous coverage of the second caustic crossing without any significant gaps. Visual inspection and initial analysis of the light curve revealed that the second crossing was due to a simple fold caustic crossing (see §2.2).
### 2.1 Data
We observed OGLE-1999-BUL-23 with I and V band filters at four participant telescopes: the Elizabeth 1 m at South African Astronomical Observatory (SAAO), Sutherland, South Africa; the Perth/Lowell 0.6 m telescope at Perth, Western Australia; the Canopus 1 m near Hobart, Tasmania, Australia; and the Yale/AURA/Lisbon/OSU 1 m at Cerro Tololo Inter-American Observatory (CTIO), La Serena, Chile. From June to August 1999 ($`1338<\mathrm{HJD}^{\prime }<1405`$), PLANET obtained almost 600 images of the field of OGLE-1999-BUL-23. In addition, baseline points were taken at SAAO ($`\mathrm{HJD}^{\prime }\approx 1440`$) and Perth ($`\mathrm{HJD}^{\prime }\approx 1450`$; $`\mathrm{HJD}^{\prime }\approx 1470`$). Here $`\mathrm{HJD}^{\prime }\equiv \mathrm{HJD}-2450000`$, where HJD is the Heliocentric Julian Date at center of exposure. The data reduction and photometric measurements of the event were performed relative to non-variable stars in the same field using DoPHOT. After several re-reductions, we recovered the photometric measurements from a total of 476 frames.
We assumed independent photometric systems for different observatories and thus explicitly included the determination of independent (unlensed) source and background fluxes for each different telescope and filter band in the analysis. This provides both determinations of the photometric offsets between different systems and independent estimates of the blending factors. The final results demonstrate satisfactory alignment among the data sets (see §2.3), and we therefore believe that we have reasonable relative calibrations. Our previous studies have shown that the background flux (or blending factors) may correlate with the size of seeing disks in some cases (Albrow et al., 2000a, b). To check this, we introduced linear seeing corrections in addition to constant backgrounds.
From previous experience, it is expected that the formal errors reported by DoPHOT underestimate the actual errors (Albrow et al., 1998), and consequently that $`\chi ^2`$ is overestimated. Hence, we renormalize photometric errors to force the final reduced $`\chi ^2/dof=1`$ for our best fit model. Here, $`dof`$ is the number of degrees of freedom (the number of data points less the number of parameters). We determine independent rescaling factors for the photometric uncertainties from the different observatories and filters. The process involves two steps: the elimination of bad data points and the determination of error normalization factors. In this as in all previous events that we have analyzed, there are outliers discrepant by many $`\sigma `$ that cannot be attributed to any specific cause even after we eliminate some points whose source of discrepancy is identifiable. Although, in principle, whether particular data points are faulty or not should be determined without any reference to models, we find that the light curves of various models that yield reasonably good fits to the data are very similar to one another, and furthermore, there is no indication of temporal clumping of highly discrepant points. We, therefore, identify outlier points with respect to our best model and exclude them from the final analysis.
For the determination of outliers, we follow an iterative approach using both steps of error normalization. First, we calculate the individual $`\chi ^2`$’s of data sets from different observatories and filter bands with reference to our best model without any rejection or error scaling. Then, the initial normalization factors are determined independently for each data set using those individual $`\chi ^2`$’s and the number of data points in each set. If the deviation of the most discrepant outlier is larger than what is predicted based on the number of points and the assumption of a normal distribution, we classify the point as bad and calculate the new $`\chi ^2`$’s and the normalization factors again. We repeat this procedure until the largest outlier is comparable with the prediction of a normal distribution. Although the procedure appears somewhat arbitrary, the actual result indicates that there is a rather large decrease in $`\sigma `$ between the last rejected and the first included data points. After rejection of bad points, 429 points remain (see Table 1 and Fig. 1).
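Schematically, one pass of this rejection-and-rescaling loop looks as follows (a simplified sketch with synthetic residuals and a single data set, not the actual PLANET reduction code):

```python
import numpy as np

def renormalize(resid, sigma, nsig_expected):
    """One pass of the scheme described above: rescale errors so that
    chi^2/N = 1, then flag the worst point if it exceeds the largest
    deviation expected for N Gaussian draws."""
    chi2 = np.sum((resid / sigma)**2)
    f = np.sqrt(chi2 / resid.size)          # per-dataset scaling factor
    dev = np.abs(resid) / (f * sigma)
    worst = np.argmax(dev)
    return f, (worst if dev[worst] > nsig_expected else None)

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 0.02, 200)          # fake model residuals (mag)
resid[17] = 0.30                            # one injected outlier
sigma = np.full(200, 0.01)                  # underestimated formal errors
# The expected extreme for N = 200 normal draws is roughly 2.8 sigma.
f, bad = renormalize(resid, sigma, nsig_expected=2.8)
print(f, bad)                               # scaling ~3, flags index 17
```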
### 2.2 Analysis: searching for $`\chi ^2`$ minima
We use the method of Albrow et al. (1999b, hereafter Paper I), which was devised to fit the light curve of fold-caustic crossing binary-lens events, to analyze the light curve of this event and find an appropriate binary-lens solution. This method consists of three steps: (1) fitting of caustic-crossing data using an analytic approximation of the magnification, (2) searching for $`\chi ^2`$ minima over the whole parameter space using the point-source approximation and restricted to the non-caustic crossing data, and (3) $`\chi ^2`$ minimization using all data and the full binary-lens equation in the neighborhood of the minima found in the second step.
For the first step, we fit the I-band caustic crossing data ($`1348.5\le \mathrm{HJD}^{\prime }\le 1350`$) to the six-parameter analytic curve shown in equation (1) that characterizes the shape of the second caustic crossing (Paper I; Afonso et al. 2000),
$$F(t)=\left(\frac{Q}{\mathrm{\Delta }t}\right)^{1/2}\left[G_0\left(\frac{t-t_{\mathrm{cc}}}{\mathrm{\Delta }t}\right)+\mathrm{\Gamma }H_{1/2}\left(\frac{t-t_{\mathrm{cc}}}{\mathrm{\Delta }t}\right)\right]+F_{\mathrm{cc}}+(t-t_{\mathrm{cc}})\stackrel{~}{\omega },$$
(1a)
$$G_n(\eta )\equiv \pi ^{-1/2}\frac{(n+1)!}{(n+1/2)!}\int _{\mathrm{max}(\eta ,-1)}^1𝑑x\frac{(1-x^2)^{n+1/2}}{(x-\eta )^{1/2}}\mathrm{\Theta }(1-\eta ),$$
(1b)
$$H_{1/2}(\eta )\equiv G_{1/2}(\eta )-G_0(\eta ).$$
(1c)
Figure 2 shows the best-fit curve and the data points used for the fit. This caustic-crossing fit essentially constrains the search for a full solution to a four-dimensional hypersurface instead of the whole nine-dimensional parameter space (Paper I).
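The profile functions of equation (1b) are one-dimensional integrals with an integrable square-root singularity at the lower endpoint, so they can be tabulated with standard quadrature. A sketch in Python (numpy/scipy assumed; the substitution $`x=\eta +u^2`$ removes the singularity):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def G(n, eta):
    """Fold-caustic profile functions of Eq. (1b), evaluated by direct
    quadrature; (n+1)! and (n+1/2)! are written via the gamma function."""
    if eta >= 1.0:
        return 0.0                           # Theta(1 - eta)
    pref = gamma(n + 2.0) / (np.sqrt(np.pi) * gamma(n + 1.5))
    if eta >= -1.0:
        # integrate over u = sqrt(x - eta), absorbing the 1/sqrt(x - eta)
        f = lambda u: 2.0 * (1.0 - (eta + u*u)**2)**(n + 0.5)
        val, _ = quad(f, 0.0, np.sqrt(1.0 - eta))
    else:
        f = lambda x: (1.0 - x*x)**(n + 0.5) / np.sqrt(x - eta)
        val, _ = quad(f, -1.0, 1.0)
    return pref * val

H_half = lambda eta: G(0.5, eta) - G(0.0, eta)   # Eq. (1c)
print(G(0.0, -0.5), H_half(-0.5))
```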
We construct a grid of point-source light curves with model parameters spanning a large subset of the hypersurface and calculate $`\chi ^2`$ for each model using the I-band non-caustic crossing data. After an extensive search for $`\chi ^2`$-minima over the four-dimensional hypersurface, we find the positions of two apparent local minima, each in a local valley of the $`\chi ^2`$-surface. The smaller $`\chi ^2`$ of the two is found at $`(d,q,\alpha )\approx (2.4,\mathrm{\hspace{0.17em}0.4},\mathrm{\hspace{0.17em}75}\mathrm{°})`$, where $`d`$ is the projected binary separation in units of the Einstein ring radius, $`q`$ is the mass ratio of the binary system, and $`\alpha `$ is the angle between the binary axis and the path of the source, defined so that the geometric center of the lens system lies on the right hand side of the moving source. The other local minimum is $`(d,q,\alpha )\approx (0.55,\mathrm{\hspace{0.17em}0.55},\mathrm{\hspace{0.17em}260}\mathrm{°})`$. The results appear to suggest a rough symmetry $`d\leftrightarrow d^{-1}`$ and $`(\alpha <\pi )\leftrightarrow (\alpha >\pi )`$, as was found for MACHO 98-SMC-1 (Paper I; Afonso et al. 2000). Besides these two local minima, there are several isolated $`(d,q)`$-grid points at which $`\chi ^2`$ is smaller than at neighboring grid points. However, on a finer grid they appear to be connected with one of the two local minima specified above. We include the two local minima and some of the apparently isolated minimum points as well as points in the local valley around the minima as starting points for the refined search of $`\chi ^2`$-minimization in the next step.
### 2.3 Solutions: $`\chi ^2`$ minimization
Starting from the local minima found in §2.2 and the points in the local valleys around them, we perform a refined search for the $`\chi ^2`$ minimum. The $`\chi ^2`$ minimization includes all the I and V data points for successive fitting to the full expression for magnification, accounting for effects of a finite source size and limb darkening.
As described in Paper I, the third step makes use of a variant of equation (1) to evaluate the magnified flux in the neighborhood of the caustic crossing. Paper I found that, for MACHO 98-SMC-1, this analytic expression was an extremely good approximation to the results of numerical integration, and assumed that the same would be the case for any fold crossing. Unfortunately, we find that, for OGLE-1999-BUL-23, this approximation deviates from the true magnification as determined using the method of Gould & Gaucherel (1997) by as much as 4%, which is larger than our typical photometric uncertainty in the region of the caustic crossing. To maintain the computational efficiency of Paper I, we continue to use the analytic formula (1), but correct it by pre-tabulated amounts given by the fractional difference (evaluated close to the best solution) between this approximation and the values found by numerical integration. We find that this correction works quite well even at the local minimum for the other (close-binary) solution – the error is smaller than 1%, and in particular, the calculations agree within 0.2% for the region of primary interest. The typical (median) photometric uncertainties for the same region are 0.015 mag (Canopus after the error normalization) and 0.020 mag (Perth). In addition, we test the correction by running the fitting program with the exact calculation at the minimum found using the corrected approximation, and find that the measured parameters change less than the precision of the measurement. In particular, the limb-darkening coefficients change by an order of magnitude less than the measurement uncertainty due to the photometric errors.
The results of the refined $`\chi ^2`$ minimization are listed in Table 2 for three discrete “solutions” and in Table 3 for grid points neighboring the best-fit solution whose $`\mathrm{\Delta }\chi ^2`$ is less than one. The first seven columns describe seven of the standard parameters of the binary-lens model (the remaining two parameters are the source and background flux). The eighth column is the time of the second caustic crossing ($`t_{\mathrm{cc}}`$) – the time when the center of the source crossed the caustic. The limb darkening coefficients for I and V bands are shown in the next two columns. The final column is $`\mathrm{\Delta }\chi ^2`$,
$$\mathrm{\Delta }\chi ^2\equiv \frac{\chi ^2-\chi _{\mathrm{best}}^2}{\chi _{\mathrm{best}}^2/dof},$$
(2)
as in Paper I. The light curve (in magnification) of the best-fit model is shown in Figure 3 together with all the data points used in the analysis.
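In code, the normalization of equation (2) is simply the following (the numbers in the example call are hypothetical):

```python
def delta_chi2(chi2, chi2_best, dof):
    """Excess chi^2 in units of chi2_best/dof, as in eq. (2)."""
    return (chi2 - chi2_best) / (chi2_best / dof)

print(delta_chi2(1510.0, 1400.0, 1370))  # hypothetical values -> ~107.6
```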
#### 2.3.1 “Degeneracy”
For typical binary-lens microlensing events, more than one solution often fits the observations reasonably well. In particular, Dominik (1999) predicted a degeneracy between close and wide binary lenses resulting from a symmetry in the lens equation itself, and such a degeneracy was found empirically for MACHO 98-SMC-1 (Paper I; Afonso et al. 2000).
We also find two distinct local $`\chi ^2`$ minima (§2.2) that appear to be closely related to such degeneracies. However, in contrast to the case of MACHO 98-SMC-1, our close-binary model for OGLE-1999-BUL-23 has substantially higher $`\chi ^2`$ than the wide-binary model ($`\mathrm{\Delta }\chi ^2=127.86`$). Figure 4 shows the predicted light curves in the SAAO instrumental I band. The overall geometries of these two models are shown in Figures 5 and 6. The similar morphologies of the caustics with respect to the path of the source are responsible for the degenerate light curves near the caustic crossing (Fig. 6). However, the close-binary model requires a higher blending fraction and a lower baseline flux than the wide-binary solution because the former displays a higher peak magnification ($`A_{\mathrm{max}}\simeq 50`$ vs. $`A_{\mathrm{max}}\simeq 30`$). Consequently, a precise determination of the baseline can contribute significantly to discriminating between the two models, and in fact, the actual data did constrain the baseline well enough to produce a large difference in $`\chi ^2`$.
A fair number of pre-event baseline measurements are available via OGLE, and those data can further help discriminate between these two “degenerate” models. We fit the OGLE measurements to the two models with all model parameters held fixed except the baseline flux and the blending fraction. We find that the PLANET wide-binary model produces $`\chi ^2=306.83`$ for 169 OGLE points ($`\chi ^2/dof=1.83`$; compare Table 1) while the close-binary model yields $`\chi ^2=608.22`$ for the same 169 points (Fig. 7). That is, $`\mathrm{\Delta }\chi ^2=164.04`$, so that the addition of the OGLE data by itself discriminates between the two models approximately as well as all the PLANET data combined. The largest contribution to this large $`\mathrm{\Delta }\chi ^2`$ appears to come from the period about a month before the first caustic crossing, which is well covered by the OGLE data but not by the PLANET data. In particular, the close-binary model predicts a bump in the light curve around $`\mathrm{HJD}^{}\simeq 1290`$ due to a triangular caustic (see Fig. 5), but the data show no abnormal feature in that region, although it is possible that rotation of the binary moved the caustic far from the source trajectory (e.g. Afonso et al. 2000). In brief, the OGLE data strongly favor the wide-binary model.
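Because, at fixed lens parameters, the observed flux is linear in the two free parameters, $`F(t)=F_sA(t)+F_b`$, the two-parameter OGLE fit above reduces to weighted linear least squares. A minimal sketch with synthetic data (not the actual OGLE photometry):

```python
import numpy as np

def fit_source_and_background(A, flux, sigma):
    """Weighted least squares for F(t) = F_s * A(t) + F_b,
    with the magnification model A(t) held fixed."""
    w = 1.0 / sigma**2
    X = np.column_stack([A, np.ones_like(A)])
    cov = np.linalg.inv(X.T @ (w[:, None] * X))
    fs, fb = cov @ (X.T @ (w * flux))
    chi2 = np.sum(w * (flux - fs * A - fb)**2)
    return fs, fb, chi2

rng = np.random.default_rng(1)
A = np.linspace(1.0, 30.0, 169)                      # model magnifications
flux = 2.5 * A + 1.0 + rng.normal(0.0, 0.1, A.size)  # synthetic fluxes
print(fit_source_and_background(A, flux, np.full(A.size, 0.1)))

# Consistency check with eq. (2): (608.22 - 306.83) / (306.83 / 167) ~ 164.0,
# matching the quoted Delta chi^2 = 164.04 if dof = 169 points - 2 parameters.
```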
#### 2.3.2 Limb-Darkening Coefficients
The limb darkening of the source is parametrized using a normalized linear model of the source surface brightness profile, which was introduced in Appendix B of Paper I,
$`S_\lambda (\vartheta )=\overline{S}_\lambda \left[1-\mathrm{\Gamma }_\lambda \left(1-{\displaystyle \frac{3}{2}}\mathrm{cos}\vartheta \right)\right]=\overline{S}_\lambda \left[(1-\mathrm{\Gamma }_\lambda )+{\displaystyle \frac{3}{2}}\mathrm{\Gamma }_\lambda \mathrm{cos}\vartheta \right],`$
$`\mathrm{where}\mathrm{sin}\vartheta \equiv {\displaystyle \frac{\theta }{\theta _\ast }}\mathrm{and}\overline{S}_\lambda \equiv {\displaystyle \frac{F_{s,\lambda }}{\pi \theta _\ast ^2}},`$ (3)
while linear limb darkening is usually parametrized by,
$$S_\lambda (\vartheta )=S_\lambda (0)\left[1-c_\lambda (1-\mathrm{cos}\vartheta )\right]=S_\lambda (0)\left[(1-c_\lambda )+c_\lambda \mathrm{cos}\vartheta \right].$$
(4)
The relationship between the two expressions of linear limb-darkening coefficients is
$$c_\lambda =\frac{3\mathrm{\Gamma }_\lambda }{2+\mathrm{\Gamma }_\lambda }.$$
(5)
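The conversion of equation (5), together with its inverse, in code; the printed values reproduce the standard coefficients quoted in equations (6a) and (6b) below:

```python
def c_from_gamma(gamma):
    # eq. (5): standard linear coefficient from the normalized one
    return 3.0 * gamma / (2.0 + gamma)

def gamma_from_c(c):
    # inverse of eq. (5)
    return 2.0 * c / (3.0 - c)

print(c_from_gamma(0.534))  # ~0.632, cf. eq. (6a)
print(c_from_gamma(0.711))  # ~0.786, cf. eq. (6b)
```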
Amongst our six data sets, the SAAO data contain no points affected by limb darkening, i.e. no caustic-crossing points. Since the filters used at the different PLANET observatories do not differ significantly from one another, we use the same limb-darkening coefficient for the three remaining I-band data sets. The V-band coefficient is determined from the Canopus data alone, so a single coefficient is used automatically.
For the best-fit lens geometry, the measured values of linear limb-darkening coefficients are $`\mathrm{\Gamma }_I=\mathrm{\hspace{0.17em}0.534}\pm 0.020`$ and $`\mathrm{\Gamma }_V=\mathrm{\hspace{0.17em}0.711}\pm 0.089`$, where the errors include only uncertainties in the linear fit due to the photometric uncertainties at *fixed* binary-lens model parameters. However, these errors underestimate the actual uncertainties of the measurements because the measurements are correlated with the determination of the seven lens parameters shown in Tables 2 and 3. Incorporating these additional uncertainties in the measurement (see the next section for a detailed discussion of the error determination), our final estimates are
$$\mathrm{\Gamma }_I=0.534_{-0.040}^{+0.050}\left(c_I=0.632_{-0.037}^{+0.047}\right),$$
(6a)
$$\mathrm{\Gamma }_V=0.711_{-0.095}^{+0.098}\left(c_V=0.786_{-0.078}^{+0.080}\right).$$
(6b)
This is consistent with the result of the caustic-crossing fit of §2.2 ($`\mathrm{\Gamma }_I=0.519\pm 0.043`$). Our result indicates that the source is more limb-darkened in $`V`$ than in $`I`$, as theory generally predicts. Figure 8 shows the I-band residuals (in magnitudes) at the second caustic crossing from our best-fit models for a linearly limb-darkened and a uniform disk. It is clear that the uniform disk model exhibits larger systematic residuals near the peak than the linearly limb-darkened disk. From the residual pattern – the uniform disk model produces a shallower slope over most of the falling side of the second caustic crossing than the data require – one can infer that the source surface brightness must be more centrally concentrated than a uniform disk, i.e. that limb darkening is present. The linearly limb-darkened disk reduces the systematic residuals by a factor of $`\sim 5`$. Formally, the difference in $`\chi ^2`$ between the two models is 172.8 with two additional parameters for the limb-darkened disk model, i.e. the data favor a limb-darkened disk over a uniform disk at very high confidence.
## 3 Error Estimation for Limb Darkening Coefficients
Due to the multi-parameter character of the fit, a measurement of any parameter is correlated with other parameters of the model. The limb-darkening coefficients obtained with the different model parameters shown in Table 3 exhibit a considerable scatter, and in particular, for the I-band measurement, the scatter is larger than the uncertainties due to the photometric errors. This indicates that, in the measurement of the limb-darkening coefficients, we need to examine errors that correlate with the lens model parameters in addition to the uncertainties resulting from the photometric uncertainties at fixed lens parameters. This conclusion is reinforced by the fact that the error in the estimate of $`\mathrm{\Gamma }`$ from the caustic-crossing fit (see Fig. 2), which includes the correlation with the parameters of the caustic-crossing, is substantially larger than the error in the linear fit, which does not.
Since limb darkening manifests itself mainly around the caustic crossing, its measurement is most strongly correlated with $`\mathrm{\Delta }t`$ and $`t_{\mathrm{cc}}`$. To estimate the effects of these correlations, we fit the data to models with $`\mathrm{\Delta }t`$ or $`t_{\mathrm{cc}}`$ fixed at several values near the best fit, with the global geometry of the best fit, i.e. $`d`$ and $`q`$, held fixed as well. The resulting distributions of $`\mathrm{\Delta }\chi ^2`$ are parabolic as a function of the fitted limb-darkening coefficient and are centered on the best-fit measurement. (Fixing $`\mathrm{\Delta }t`$ and fixing $`t_{\mathrm{cc}}`$ produce essentially the same parabola, which suggests that the uncertainties associated with the two correlations share the same underlying nature.) We interpret the half width of the parabola at $`\mathrm{\Delta }\chi ^2=1`$ ($`\delta \mathrm{\Gamma }_I=0.031`$, $`\delta \mathrm{\Gamma }_V=0.032`$) as the uncertainty due to the correlation with the caustic-crossing parameters at a given global lens geometry of fixed $`d`$ and $`q`$.
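The parabola-width procedure is straightforward to implement; a sketch with illustrative numbers (not the actual fit output):

```python
import numpy as np

# Best-fit Gamma_I at several fixed values of Delta t (illustrative),
# with the resulting Delta chi^2 of each constrained fit.
gamma = np.array([0.493, 0.512, 0.534, 0.556, 0.575])
dchi2 = np.array([1.74, 0.50, 0.00, 0.50, 1.74])

a, b, c = np.polyfit(gamma, dchi2, 2)   # parabola a*g^2 + b*g + c
g0 = -b / (2.0 * a)                     # vertex: central value
half_width = 1.0 / np.sqrt(a)           # half width at Delta chi^2 = 1
print(g0, half_width)                   # ~0.534 and ~0.031, cf. delta Gamma_I
```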
Although the global lens geometry should not *directly* affect the limb-darkening measurement, the overall correlation between local and global parameters can contribute an additional uncertainty. This turns out to be the dominant source of the scatter found in Table 3. To incorporate it into our final determination of errors, we examine the range over which the measured coefficients vary among models with $`\mathrm{\Delta }\chi ^2<1`$. The resulting error is asymmetric between the directions of increasing and decreasing limb darkening. We believe that this asymmetry is real, and we therefore report asymmetric error bars for the limb-darkening measurements.
The final errors of the measurements reported in §2.3.2 are determined by adding these two sources of error to the photometric uncertainty in quadrature. The dominant source of errors in the I-band coefficient measurement is the correlation between the global geometry and the local parameters whereas the photometric uncertainty is the largest contribution to the uncertainties in the V-band coefficient measurement.
Although the measurements of the V- and I-band limb darkening at fixed model parameters are independent, the final estimates of the two coefficients are not, for the same reason discussed above. (The correlation between the V- and I-band limb-darkening coefficients is clearly demonstrated in Table 3.) Hence, the complete description of the uncertainty requires a covariance matrix:
$$C=C_{\mathrm{phot}}+\stackrel{~}{C}_{\mathrm{cc}}^{1/2}\left(\begin{array}{cc}1& \xi \\ \xi & 1\end{array}\right)\stackrel{~}{C}_{\mathrm{cc}}^{1/2}+\stackrel{~}{C}_{\mathrm{geom}}^{1/2}\left(\begin{array}{cc}1& \xi \\ \xi & 1\end{array}\right)\stackrel{~}{C}_{\mathrm{geom}}^{1/2},$$
(7a)
$$C_{\mathrm{phot}}\left(\begin{array}{cc}\sigma _{V,\mathrm{phot}}^2& 0\\ 0& \sigma _{I,\mathrm{phot}}^2\end{array}\right),$$
(7b)
$$\stackrel{~}{C}_{\mathrm{cc}}^{1/2}\left(\begin{array}{cc}\sigma _{V,\mathrm{cc}}& 0\\ 0& \sigma _{I,\mathrm{cc}}\end{array}\right),$$
(7c)
$$\stackrel{~}{C}_{\mathrm{geom}}^{1/2}\left(\begin{array}{cc}\overline{\sigma }_{V,\mathrm{geom}}& 0\\ 0& \overline{\sigma }_{I,\mathrm{geom}}\end{array}\right),$$
(7d)
where the subscript (phot) denotes the uncertainties due to the photometric errors; (cc), the correlation with $`\mathrm{\Delta }t`$ and $`t_{\mathrm{cc}}`$ at fixed $`d`$ and $`q`$; (geom), the correlation with the global geometry; and $`\xi `$ is the correlation coefficient between the $`\mathrm{\Gamma }_V`$ and $`\mathrm{\Gamma }_I`$ measurements. We derive the correlation coefficient using each measurement of $`\mathrm{\Gamma }_V`$ and $`\mathrm{\Gamma }_I`$, and the result indicates that the two measurements are almost perfectly correlated $`(\xi =0.995)`$. We accommodate the asymmetry of the errors by making the error ellipse off-centered with respect to the best estimate. (See §5 for more discussion of the error ellipses.)
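A sketch assembling the covariance matrix of equations (7a)-(7d) from the three error components; the numbers below are illustrative values of the right magnitude, not the exact output of the fit:

```python
import numpy as np

def total_covariance(sig_phot, sig_cc, sig_geom, xi=0.995):
    """Covariance of (Gamma_V, Gamma_I) per eqs. (7a)-(7d).
    Each argument is a pair (sigma_V, sigma_I)."""
    corr = np.array([[1.0, xi], [xi, 1.0]])
    C = np.diag(np.asarray(sig_phot)**2)   # eq. (7b): uncorrelated photometry
    for s in (sig_cc, sig_geom):           # eqs. (7c) and (7d): correlated terms
        S = np.diag(s)
        C += S @ corr @ S
    return C

C = total_covariance(sig_phot=(0.089, 0.020),   # (V, I), illustrative
                     sig_cc=(0.032, 0.031),
                     sig_geom=(0.020, 0.030))
print(C)
print(np.sqrt(np.diag(C)))  # combined 1-sigma errors on (Gamma_V, Gamma_I)
```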
## 4 Physical Properties of the Source Star
Figure 9 shows color-magnitude diagrams (CMDs) derived from a 2′ × 2′ SAAO field and a 4′ × 4′ Canopus field centered on OGLE-1999-BUL-23, with positions marked for the unmagnified source (S), the baseline (B), the blended light (BL) at median seeing, and the center of the red clump giants (RC). The source position in these CMDs is consistent with a late G or early K subgiant in the Galactic bulge (see below). Using the color and magnitude of red clump giants in the Galactic bulge reported by Paczyński et al. (1999) ($`I_{\mathrm{RC}}=14.37\pm 0.02`$, $`[V-I]_{\mathrm{RC}}=1.114\pm 0.003`$), we measure the reddening-corrected color and magnitude of the source in the Johnson-Cousins system from the position of the source relative to the center of the red clump in our CMDs, and obtain:
$$(V-I)_{\mathrm{S},0}=1.021\pm 0.044,$$
(8a)
$$V_{\mathrm{S},0}=\mathrm{\hspace{0.17em}18.00}\pm 0.06,$$
(8b)
where the errors include the difference of the source positions in the two CMDs, but may still be somewhat underestimated because the uncertainty in the selection of red clump giants in our CMDs has not been quantified exactly.
From this information we derive the surface temperature of the source, $`T_{\mathrm{eff}}=(4830\pm 100)`$ K, using the color calibration of Bessell, Castelli, & Plez (1998) and assuming $`\mathrm{log}g=3.5`$ and solar abundance. This estimate of the temperature depends only weakly on the assumed surface gravity and on the choice of stellar atmosphere model. To determine the angular size of the source, we use equation (4) of Albrow et al. (2000a), which is derived from the surface brightness-color relation of van Belle (1999). We first convert $`(V-I)_{\mathrm{S},0}`$ into $`(V-K)_{\mathrm{S},0}=2.298\pm 0.113`$ using the same color calibration of Bessell et al. (1998) and then obtain the angular radius of the source,
$`\theta _\ast `$ $`=`$ $`(1.86\pm 0.13)\mu \mathrm{as}`$ (9)
$`=`$ $`(0.401\pm 0.027)R_\odot \mathrm{kpc}^{-1}.`$
If the source is at the Galactocentric distance (8 kpc), this implies that the radius of the source is roughly 3.2 $`R_\odot `$, which is consistent with the size of a $`1M_\odot `$ subgiant ($`\mathrm{log}g=3.4`$).
Combining this result with the parameters of the best-fit model yields
$`\mu =\theta _\ast /(\mathrm{\Delta }t\mathrm{sin}\varphi )`$ $`=`$ $`(13.2\pm 0.9)\mu \mathrm{as}\mathrm{day}^{-1}`$ (10)
$`=`$ $`(22.8\pm 1.5)\mathrm{km}\mathrm{s}^{-1}\mathrm{kpc}^{-1},`$
$`\theta _\mathrm{E}=\mu t_\mathrm{E}`$ $`=`$ $`(0.634\pm 0.043)\mathrm{mas},`$ (11)
where $`\varphi =123.9\mathrm{°}`$ is the angle at which the source crossed the caustic (see Fig. 6). This corresponds to a projected relative velocity of $`(182\pm 12)\mathrm{km}\mathrm{s}^{-1}`$ at the Galactocentric distance, which is generally consistent with what is expected for typical bulge/bulge or bulge/disk (source/lens) events, but inconsistent with disk/disk lensing. Hence we conclude that the source is in the bulge. As for the properties of the lens, the projected separation of the binary lens is $`(1.53\pm 0.10)`$ AU $`\mathrm{kpc}^{-1}`$, and the combined mass of the lens is given by
$$M_\mathrm{L}=\frac{c^2D_\mathrm{S}D_\mathrm{L}}{4G(D_\mathrm{S}-D_\mathrm{L})}\theta _\mathrm{E}^2=(0.395\pm 0.053)\left(\frac{x}{1-x}\right)\left(\frac{D_\mathrm{S}}{8\mathrm{kpc}}\right)M_\odot ,$$
(12)
where $`x\equiv D_\mathrm{L}/D_\mathrm{S}`$, $`D_\mathrm{L}`$ is the distance to the lens, and $`D_\mathrm{S}`$ is the distance to the source.
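A sketch evaluating equations (10)-(12) numerically. The radius crossing time Δt and Einstein time t_E below are illustrative assumptions chosen to be consistent with the quoted μ and θ_E (the fitted values themselves are in Table 2, not reproduced here):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                 # SI units
kpc, Msun = 3.086e19, 1.989e30
mas = np.pi / 180.0 / 3600.0 / 1000.0     # milliarcsecond in radians

theta_star = 1.86e-3 * mas                # source angular radius (eq. [9])
dt, phi = 0.17, np.radians(123.9)         # illustrative Delta t (days) and crossing angle
tE = 48.0                                 # illustrative Einstein time (days)

mu = theta_star / (dt * np.sin(phi))      # eq. (10), radians per day
thetaE = mu * tE                          # eq. (11)
print(mu / mas * 1e3, "uas/day ;", thetaE / mas, "mas")   # ~13.2 ; ~0.63

def lens_mass(x, DS=8.0 * kpc):
    # eq. (12): combined lens mass in solar masses, with x = D_L / D_S
    return c**2 * thetaE**2 * DS / (4.0 * G) * x / (1.0 - x) / Msun

print(lens_mass(0.5))                     # ~0.395 Msun at x = 0.5
```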
## 5 Limb Darkening of the Source
We compare our determination of the linear limb-darkening coefficients to the model calculations of Claret, Díaz-Cordovés, & Giménez (1995) and Díaz-Cordovés, Claret, & Giménez (1995). For an effective temperature of $`T_{\mathrm{eff}}=(4830\pm 100)`$ K and a surface gravity of $`\mathrm{log}g=3.5`$, interpolation of the V-band linear limb-darkening coefficients, $`c_V`$, of Díaz-Cordovés et al. (1995) predicts $`c_V=0.790\pm 0.012`$, in excellent agreement with our measurement. However, for the I-band coefficient, the prediction of Claret et al. (1995), $`c_I=0.578\pm 0.008`$, is only marginally consistent with our measurement, at the $`1.46\sigma `$ level. Adopting a slightly different gravity does not qualitatively change this result. Since we believe that the uncertainty in the color of the source is larger than that in the limb-darkening coefficients, we also examine the opposite approach: using the measured limb-darkening coefficients to derive the effective temperature of the source. If the source is a subgiant ($`\mathrm{log}g\simeq 3.5`$) as our CMDs suggest, the measured coefficients would be expected in stars of effective temperature $`T_{\mathrm{eff}}=(4850_{-670}^{+650})`$ K for $`c_V`$ or $`T_{\mathrm{eff}}=(4200_{-490}^{+390})`$ K for $`c_I`$. As before, the estimate from the V-band measurement agrees better with the measured color than the estimate from the I band. Considering that the I-band data are of higher quality than the V-band data (the estimated uncertainty is smaller in $`I`$ than in $`V`$), this result calls for an explanation.
In Figure 10, we plot theoretical calculations of $`(c_I,c_V)`$ together with our measured values. In addition to Díaz-Cordovés et al. (1995) and Claret et al. (1995) (A), we also include the calculations of linear limb-darkening coefficients by van Hamme (1993) (B) and Claret (1998b) (C). For all three calculations, the V-band linear coefficients are generally consistent with the measured coefficients and the color, although van Hamme (1993) predicts a slightly smaller amount of limb darkening than the others. On the other hand, the calculations of the I-band linear coefficients are somewhat smaller than the measurement except for Claret (1998b) with $`\mathrm{log}g=\mathrm{\hspace{0.17em}4.0}`$. (However, to be consistent with a higher surface gravity while maintaining its color, the source star should be in the disk, which is inconsistent with our inferred proper motion.) Since $`c_V`$ and $`c_I`$ are not independent (in both the theories and in our measurement), it is more reasonable to compare the I and V band measurements to the theories simultaneously. Using the covariance matrix of the measurement of $`\mathrm{\Gamma }_I`$ and $`\mathrm{\Gamma }_V`$ (see §3), we derive error ellipses for our measurements in the $`(c_I,c_V)`$ plane and plot them in Figure 10. Formally, at the $`1\sigma `$ level, the calculations of the linear limb-darkening coefficients in any of these models are not consistent with our measurements. In principle, one could also constrain the most likely stellar types that are consistent with the measured coefficients, independent of *a priori* information on the temperature and the gravity, with a reference to a model. If we do this, the result suggests either that the surface temperature is cooler than our previous estimate from the color or that the source is a low-mass main-sequence ($`\mathrm{log}g\mathrm{\hspace{0.17em}4.0}`$) star. However, the resulting constraints are not strong enough to place firm limits on the stellar type even if we assume any of these models to be “correct”.
One possible explanation of our general result – that the measured V-band coefficients are in nearly perfect agreement with theory while the I-band coefficients are only marginally consistent – is the non-linearity of stellar limb darkening. Many authors have pointed out the inadequacy of linear limb darkening as a high-accuracy approximation to real stellar surface brightness profiles (Wade & Rucinski, 1985; Díaz-Cordovés & Giménez, 1992; van Hamme, 1993; Claret, 1998b). Indeed, Albrow et al. (1999a) measured two-coefficient square-root limb darkening for a cusp-crossing microlensing event and found that the single-coefficient model gives a marginally poorer fit to the data. The quality of the linear parameterization has been investigated for most theoretical limb-darkening calculations, and the results seem to support this explanation. van Hamme (1993) defined quality factors (Q in his paper) for his calculations of limb-darkening coefficients, and for 4000 K $`\le T_{\mathrm{eff}}\le `$ 5000 K and 3.0 $`\le \mathrm{log}g\le `$ 4.0, his results indicate that the linear parameterization is a better approximation in V band than in I band. Similarly, Claret (1998b) provided plots of summed residuals ($`\sigma `$ in his paper) for the fits used to derive his limb-darkening coefficients, showing that the V-band linear limb darkening has lower $`\sigma `$ than the I band and is as good as the V-band square-root limb darkening near the temperature range we estimate for the source of OGLE-1999-BUL-23. In fact, Díaz-Cordovés et al. (1995) reported that the V-band limb darkening is closest to the linear law in the temperature range $`T_{\mathrm{eff}}=4500`$–4750 K. In summary, the source happens to lie very close to the temperature at which linear limb darkening is a very good approximation in $`V`$, but less good in $`I`$.
The actual value of the coefficient in the linear parameterization of a non-linear profile may vary depending on the method of calculation and sampling. To determine the linear coefficients, models (A) and (C) used a least-squares fit to the theoretical (non-parametric) profile, sampling uniformly over $`\mathrm{cos}\vartheta `$ (see eq. ), while model (B) utilized the principle of total flux conservation between the parametric and non-parametric profiles. A fold-caustic crossing event, on the other hand, samples the stellar surface brightness by convolving it with a rather complicated magnification pattern (Gaudi & Gould, 1999). Therefore, it is very likely that none of the above samplings and calculations is entirely suitable for representing a limb-darkening measurement made by microlensing, unless the real intensity profile of the star is actually the same as the assumed parametric form (here, the linear parameterization). In fact, the most appropriate way to compare the measurement to stellar atmosphere models would be a direct fit to the (non-parametric) theoretical profile after convolution with the magnification patterns near the caustics. This has not been done in the present paper, but we hope to make such a direct comparison in the future.
We thank A. Udalski for re-reducing the OGLE data on very short notice after we noticed an apparent discrepancy between the PLANET data and the original OGLE reductions. This work was supported by grants AST 97-27520 and AST 95-30619 from the NSF, by grant NAG5-7589 from NASA, by a grant from the Dutch ASTRON foundation through ASTRON 781.76.018, by a Marie Curie Fellowship from the European Union, and by “coup de pouce 1999” award from the Ministère de l’Éducation nationale, de la Recherche et de la Technologie.
# No CPT Violation from Tilted Brane in Neutral Meson-Antimeson Systems
## Acknowledgement
The work of G.C. was supported by the KOSEF Brainpool Program. The work of C.S.K. was supported in part by grant No. 1999-2-111-002-5 from the Interdisciplinary Research Program of the KOSEF, in part by the BSRI Program, Ministry of Education, Project No. 99-015-DI0032, and in part by the Sughak program of the Korean Research Foundation, Project No. 1997-011-D00015. The work of K.L. was supported by the KOSEF 1998 Interdisciplinary Research Program.
# On the late spectral types of cataclysmic variable secondaries
## 1 Introduction
Over last two decades, numerous studies have considered the evolution of the low–mass secondaries in cataclysmic variable (CV) systems. The calculations aimed mainly at reproducing the observed distribution of orbital periods $`P`$ of CVs, in particular the minimum period at 80 min and the dearth of systems in the 2-3 h period range (see e.g. King 1988 for a review). Based on the most popular explanation for this period gap, the disrupted magnetic braking model (cf. Rappaport, Verbunt and Joss 1983; Spruit and Ritter 1983), evolutionary sequences constructed with full stellar models or simplified bipolytrope models (cf. Hameury et al. 1988; Hameury 1991; Kolb and Ritter 1992; and references therein) reproduce the broad features of the observed $`P`$ distribution. Despite the significant uncertainties in the underlying stellar input physics (e.g. opacities, equation of state, etc…), the lack of a reliable description of angular momentum losses $`\dot{J}`$ that drive the mass transfer provided sufficient freedom that it was always possible to find a set of secondary star models which reasonably fit the period gap.
Input physics like atmospheric opacities and the outer boundary condition are essential for a comparison with observable quantities such as colours or spectral types.
The spectral type and the orbital period ($`P\propto (R^3/M)^{1/2}`$; $`R`$ and $`M`$ are the secondary radius and mass) provide a unique set of constraints that test the structure both of the secondary’s interior and of its outermost layers. As we shall show below, these can be used to constrain the actual orbital braking strength $`\dot{J}`$ as well.
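The period-density relation invoked here is easy to make quantitative. A minimal sketch, assuming Paczyński's Roche-lobe approximation $`R_L\simeq 0.462a(M_2/M)^{1/3}`$ (valid for small mass ratios), in which the primary mass cancels so that $`P`$ depends only on the donor's mean density:

```python
import numpy as np

G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8   # SI units

def roche_period_hours(M2, R2):
    """Orbital period (hours) of a Roche-lobe-filling donor of mass M2
    (solar masses) and radius R2 (solar radii).  With Paczynski's
    approximation R_L = 0.462 a (M2/M)**(1/3), Kepler's third law gives
    P = sqrt(3*pi / (0.462**3 * G * rho)) for mean donor density rho."""
    rho = M2 * Msun / (4.0 / 3.0 * np.pi * (R2 * Rsun)**3)
    return np.sqrt(3.0 * np.pi / (0.462**3 * G * rho)) / 3600.0

print(roche_period_hours(0.21, 0.21))  # ~1.9 h for a ZAMS-like 0.21 Msun donor
print(roche_period_hours(0.21, 0.29))  # ~3 h for a donor bloated by mass loss
```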
Previously, an analysis of this kind was hampered by the lack of knowledge, both theoretical and observational, of stars in the low–mass range populated by CV secondaries. Here we make use of the marked and independent progress in the study of low–mass stars over the past decade. A wealth of ground-based and spaced-based observations has significantly improved our knowledge of the photometric and spectroscopic properties of stars at the bottom of the main sequence. In parallel, recent theoretical work has demonstrated the necessity to use accurate internal physics and outer boundary conditions based on non–grey atmosphere models to correctly describe the structure and evolution of low–mass objects (Burrows et al. 1993; Baraffe et al. 1995, 1997, 1998; Chabrier and Baraffe 1997, 2000, and references therein). These interior models, combined with recent, much improved models of cool atmospheres and synthetic spectra (see the review of Allard et al. 1997, and references therein) allow one to calculate self–consistent magnitudes and synthetic colours of low–mass stars. These can be compared directly to observed quantities, avoiding the use of uncertain empirical effective temperature $`T_{\mathrm{eff}}`$ and bolometric correction scales. The agreement between models and observations for (i) eclipsing binary systems (Chabrier and Baraffe 1995), (ii) mass–magnitude and mass–spectral type relations (Baraffe and Chabrier, 1996; Chabrier et al. 1996), (iii) colour–magnitude diagrams (Baraffe et al. 1997, 1998) and (iv) synthetic spectra (Leggett et al. 1996; Allard et al. 1997), demonstrate how reliable the theory of low–mass stars already is.
Recently, Beuermann et al. (1998) used our preliminary calculations based on these next-generation low-mass star models and compared calculated and observed spectral types (SpT) of CV secondaries. Beuermann et al. (1998) found that CVs with donors that are nuclearly evolved, with a substantial amount of central hydrogen depletion, can account for the late spectral types in long-period systems ($`P/\mathrm{h}\gtrsim 6`$), but not in those close to the upper edge of the period gap ($`3\lesssim P/\mathrm{h}\lesssim 6`$). Higher mass transfer rates than usually assumed can account for the latter.
The purpose of the present paper is to give a detailed account of the full set of our CV evolutionary calculations, a preliminary subset of which has been used by Beuermann et al. 1998. We discuss implications of the need for both high mass transfer rates and a significant fraction of systems with nuclearly evolved donors. We focus on CVs where the spectral type can be determined fairly accurately, i.e. on systems with orbital period above 3 h. In a previous paper (Kolb and Baraffe 1999), we considered aspects specific to the evolution of systems below the period gap ($`P2`$ h).
In §2 we summarize the input physics and present evolutionary sequences with initial ZAMS donors. Nuclearly evolved sequences are presented in §3. A discussion and conclusions follow in §4.
## 2 Evolutionary models with initial ZAMS donor
A brief description of the input physics for our stellar code can be found in Kolb and Baraffe (1999) and Baraffe et al. (1998). More details are given by Chabrier and Baraffe (1997) for the interior physics and by Hauschildt et al. (1999) for the atmosphere models. Here we just recall the main improvements compared to earlier models applied to CV donors: (i) the new models are based on the equation of state (EOS) of Saumon et al. (1995) which is specially devoted to low–mass stars and brown dwarfs, and to the description of strong non–ideal effects characterizing the interior stellar plasma; (ii) the outer boundary condition is based on non-grey atmosphere models (the use of a grey boundary condition overestimates the effective temperature $`T_{\mathrm{eff}}`$ and luminosity $`L`$ for a given mass $`M`$); (iii) mass–colour and mass–magnitude relations are derived self–consistently from the synthetic spectra of the same atmosphere models which provide the boundary condition.
We use the empirical SpT–$`(I-K)`$ relation established by Beuermann et al. (1998) to determine the spectral type SpT for a given model ($`M`$, $`T_{\mathrm{eff}}`$, $`L`$) from the calculated colour $`I-K`$. The accuracy and relevance of this conversion are discussed in Beuermann et al. (1998).
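A sketch of such a colour-to-SpT conversion; the piecewise-linear calibration nodes below are invented placeholders, the actual relation being the tabulated empirical one of Beuermann et al. (1998):

```python
import numpy as np

# Invented calibration nodes: I-K colour -> numeric M subtype (negative
# values denote late K types).  Stand-in for the empirical relation of
# Beuermann et al. (1998).
ik_nodes = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
spt_nodes = np.array([-2.0, 0.0, 1.5, 3.0, 4.5, 6.0])

def spectral_type(i_minus_k):
    s = np.interp(i_minus_k, ik_nodes, spt_nodes)
    return "K%.1f" % (5.0 + s) if s < 0 else "M%.1f" % s

for colour in (1.8, 2.6, 3.4):
    print(colour, "->", spectral_type(colour))
```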
Because of the uncertainty of current descriptions for orbital angular momentum losses by magnetic braking and the substantial computer time (cf. Kolb and Ritter, 1990; Hameury 1991) required for evolutionary sequences if mass transfer is treated explicitly and self–consistently, we restrict our analysis to sequences calculated with constant mass transfer rate. We have checked that sequences with mass transfer driven by angular momentum losses $`\dot{J}`$ with either constant $`\dot{J}`$, constant $`\dot{J}/J`$, or $`\dot{J}`$ according to Verbunt & Zwaan (1981), have the same main properties as the sequences with constant $`\dot{M}`$ on which we base our conclusions.
In this section we consider sequences where mass transfer starts with donors of mass $`M_2=1M_\odot `$ that are initially on the ZAMS, with solar composition (X = 0.70, Z = 0.02). We define a “standard sequence” which reproduces the width and location of the period gap (2.1–3.2 h) in the framework of the disrupted magnetic braking model. This sequence is calculated with $`\dot{M}=1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ until the donor becomes fully convective at $`M_2=0.21M_\odot `$ and $`P`$ = 3.2 h. At this point mass transfer ceases and the secondary shrinks back to its thermal equilibrium configuration in $`2`$–$`3\times 10^8`$ years. Mass transfer resumes when the Roche lobe radius catches up with the donor radius at a period $`P`$ = 2.1 h. The transfer rate in this second phase is $`5\times 10^{-11}M_\odot \mathrm{yr}^{-1}`$, a value typical for systems driven by gravitational wave emission (cf. Kolb and Baraffe 1999). This standard sequence is shown in Fig. 1 (thick solid line), together with sequences calculated with higher transfer rates ($`3\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$, $`10^{-8}M_\odot \mathrm{yr}^{-1}`$ and $`10^{-7}M_\odot \mathrm{yr}^{-1}`$).
Mass-losing stars deviate from thermal equilibrium. The deviation is large if the secondary’s thermal timescale $`t_{\mathrm{KH}}\sim GM^2/RL`$ is long compared to the mass transfer timescale $`t_M=M/(-\dot{M})`$. As shown in Fig. 2a, mass transfer causes a contraction of the donor radius (e.g. Whyte and Eggleton 1980; Stehle et al. 1996) relative to the corresponding ZAMS radius for predominantly radiative objects, $`M\gtrsim 0.6M_\odot `$, where the radiative core exceeds 80% of the total mass (cf. Chabrier and Baraffe 1997, their Fig. 9). Predominantly convective stars ($`M\lesssim 0.6M_\odot `$) expand relative to the ZAMS. Although a large mass transfer rate causes a significant departure of R from its equilibrium value (see Fig. 2a), Fig. 2b shows that $`T_{\mathrm{eff}}`$ is rather insensitive to $`\dot{M}`$ (see also Singer et al. 1993; Kolb et al. 2000).
The outer boundary condition of the stellar model is more important than $`\dot{M}`$ for the determination of $`T_{\mathrm{eff}}`$, as indicated by the dotted curve in Fig. 1 and Fig. 2b. This curve corresponds to a sequence with $`\dot{M}=1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$, but calculated with the Eddington approximation, which implicitly assumes greyness and the diffusion approximation for radiative transfer in the atmosphere. Chabrier and Baraffe (1997) have shown that such an approximation overestimates $`T_{\mathrm{eff}}`$ for a given mass when molecules form in the atmosphere at $`T_{\mathrm{eff}}\lesssim 4000`$–4500 K. Note that the Eddington approximation sequence reproduces the period gap as well as the corresponding non-grey sequence does, with a width and location of 2.2–3 h. However, the Eddington sequence consistently gives earlier spectral types than observed, and dramatically fails to reproduce the location of objects with $`P\lesssim 5`$ h in Fig. 1. This example highlights the significant improvements of our new generation of low-mass star models.
Note that the period $`P_{\mathrm{turn}}`$ at which the period derivative changes from negative to positive, due to departure from thermal equilibrium, increases with the mass transfer rate. Remarkably, the standard sequence reaches period bounce at $`P`$ = 3.2 h, the same period at which the secondary star becomes fully convective. Sequences with higher mass transfer rates bounce before the donor becomes fully convective. In other words, for mass transfer rates $`\dot{M}\gtrsim 1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$, the minimum period reached by CVs just corresponds to or exceeds the observed upper edge of the period gap at $`\simeq 3`$ h.
As mentioned in Beuermann et al. (1998), a large spread of the secular mean mass transfer rates, from $`1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ to $`10^{-8}M_\odot \mathrm{yr}^{-1}`$, could account for the observed range of late spectral type objects with $`P\lesssim 5`$ h. It is well known that such a spread of $`\dot{M}`$ in systems above the period gap cannot be excluded by observations (cf. e.g. Warner 1995, his Fig. 9.8; Sproats et al. 1996).
Although the sharp observed gap boundaries seem to imply uniform values of the secular mean $`\dot{M}`$ near the upper edge of the period gap (e.g. Ritter 1996), a spread of $`\dot{M}`$ cannot be dismissed unambiguously. The sharp upper edge is preserved if disrupted magnetic braking holds and $`\dot{M}`$ adopts only values $`\gtrsim 1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$. With increasing $`\dot{M}`$ the critical mass at which the secondary becomes fully convective becomes smaller (cf. Fig. 2a). Thus, if the system detaches at this point, by analogy with the disrupted magnetic braking model, mass transfer resumes after the secondary has reached thermal equilibrium at a period shorter than 2 h. As an example, for the sequence with $`\dot{M}=10^{-8}M_\odot \mathrm{yr}^{-1}`$ the donor becomes fully convective at $`M_2\simeq 0.15M_\odot `$, and the system reappears as a CV at $`P\simeq 1.7`$ h. Test calculations with a population synthesis code (e.g. Kolb 1993) show that such a spread of systems reappearing below the gap still gives a reasonably sharp lower edge of the period gap, with only a mild over-accumulation of systems in the period interval 1.7–2.1 h. This is perfectly consistent with the observed distribution, which does show a slight, although statistically probably not significant, accumulation of systems there.
Despite this consistency, accepting the $`\dot{M}`$ spread is not an attractive solution to the late SpT problem. It is hard to see why a presumably global angular momentum loss mechanism should drive such different transfer rates in otherwise similar systems.
Finally we note that there are no CV secondaries above the ZAMS line in the $`P`$–SpT diagram for periods longer than $`\sim 6`$ h. This suggests that in these systems the secular mean $`\dot{M}`$ must be smaller than $`10^{-8}M_\odot \mathrm{yr}^{-1}`$. But whatever the value of $`\dot{M}`$, sequences with unevolved donors never drop below the ZAMS line for $`P\gtrsim 6`$ h.
## 3 Evolved sequences
We now consider sequences with initial donors which have evolved off the ZAMS prior to mass transfer. When mass transfer turns on, the star is old enough (cf. Table 1) to have burned a substantial amount of hydrogen in the centre. Only stars with mass $`M>0.35M_\odot `$, which develop a radiative core (cf. Chabrier and Baraffe 1997), start to deplete central H within a Hubble time. But only if $`M>0.8M_\odot `$ is more than $`50\%`$ of the initial H depleted in less than 10 Gyr. Therefore, this donor type is restricted to systems that form with a fairly massive secondary, $`\gtrsim 1M_\odot `$.
Figure 3 depicts sequences with such nuclearly evolved donors — hereafter “evolved sequences” — in the $`P`$–SpT diagram, for different central H mass fractions $`X_\mathrm{c}`$ at the time mass transfer starts. The initial secondary mass is $`M_2=1M_\odot `$ or $`M_2=1.2M_\odot `$ and the mass transfer rate is fixed at $`1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$. Table 2 gives for each of these sequences the secondary mass $`M`$, spectral type, $`T_{\mathrm{eff}}`$ and radius $`R`$ as a function of $`P`$, for $`3\le P/h\le 10`$. The evolutionary track of ZAMS donor sequences in the $`P`$–SpT diagram is insensitive to the initial donor mass, while for evolved sequences with the same initial $`X_\mathrm{c}`$, $`T_{\mathrm{eff}}`$ at a given $`P`$ decreases (the spectral type becomes later) with increasing initial donor mass. This is illustrated in Fig. 3 where we plot sequences starting with $`M_2=1M_\odot `$ (long-dashed curve) and $`M_2=1.2M_\odot `$ (dash-dotted curve), both for $`X_\mathrm{c}=0.05`$.
As already emphasized by Beuermann et al. (1998), evolved sequences result in later spectral types at a given $`P\gtrsim 5`$ h than the standard sequence and can explain amazingly well the spread of data observed above 6 h. The reason for this is illustrated in Figures 4 and 5, which display several diagnostic quantities along the sequences as a function of secondary mass (Fig. 4) and orbital period (Fig. 5).
When mass transfer turns on, more evolved donors with smaller $`X_\mathrm{c}`$ have larger $`R`$, $`T_{\mathrm{eff}}`$ and $`T_\mathrm{c}`$ than less evolved donors. This follows from the well known properties of the H burning phase of solar–type stars with a central radiative core: as hydrogen is depleted in the centre, the increase of the central molecular weight $`\mu `$ yields an increase of the central temperature and thus $`L`$, causing an expansion of the star. Hence the main–sequence evolution of a star with constant mass proceeds towards larger $`L`$ and $`T_{\mathrm{eff}}`$. The same effect exists for mass–losing main–sequence stars. Consequently, for a given mass, $`R`$ is larger for more evolved sequences, and $`P`$ is longer, while for a given $`P`$, the secondary mass and $`T_{\mathrm{eff}}`$ is smaller, hence the spectral type is later (cf. Fig. 3). As a consequence, the donor mass at any given $`P`$ is smallest for the most evolved sequence, cf. Table 2.
Although the evolved sequences shown in Fig. 3 provide an attractive explanation for the observed data above 5–6 h, they are obviously in conflict with the standard explanation of the period gap. Indeed, with the adopted value $`\dot{M}=1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ we find that these sequences enter the period gap before the donor becomes fully convective or before the system bounces due to departure from thermal equilibrium. Table 1 gives the mass of the secondary and the mass $`M_{\mathrm{core}}`$ of its radiative core at $`P`$ = 3 h for the different sequences. Period bounce occurs for the ($`M_2=1M_\odot `$, $`X_\mathrm{c}=0.16`$) sequence at $`P`$ = 2.60 h, corresponding to $`M=0.1M_\odot `$ and $`M_{\mathrm{core}}=0.04M_\odot `$. The donor becomes fully convective only at $`M\simeq 0.05M_\odot `$ ($`P`$ = 2.9 h). For more evolved sequences, period bounce would occur at even smaller periods.
This difference in the size of the radiative core is caused by the small central H abundance. The smaller $`X_\mathrm{c}`$ and the higher central temperature of evolved donors compared to unevolved donors (cf. Figs. 4 and 5) imply lower radiative opacities in the central region, and consequently favour radiative transport. As a result, the inward progression of the bottom of the donor’s convective envelope as the mass decreases proceeds at a slower rate. Hence in evolved sequences the donor has a larger radiative core $`M_{\mathrm{core}}`$ than a donor of the same mass in an unevolved sequence (see Fig. 4). In terms of $`P`$, Fig. 5 shows that $`M_{\mathrm{core}}`$ is larger for the standard sequence at any given $`P\gtrsim 3.5`$ h. The situation then reverses at the upper edge of the gap, which corresponds to $`M_{\mathrm{core}}=0`$ for the standard sequence. The effect of a lower central H abundance described above is similar to that reported by Laughlin et al. (1997), who followed the central H burning phase of very low-mass stars with $`M\lesssim 0.25M_\odot `$ (at constant mass). They note that stars which are fully convective on the ZAMS develop a radiative core towards the end of the main sequence because of the depletion of H and the subsequent lowering of the central radiative opacities.
The $`P`$–SpT diagram suggests that at long orbital periods a fairly large fraction of systems has a significantly evolved donor. If these carried on evolving with a transfer rate $`\simeq 1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ they would clearly overpopulate the period gap regime. This is made worse by the fact that some of these sequences reach their minimum period inside the period gap (Fig. 3): at period bounce $`\dot{P}`$ = 0, and the detection probability $`d\propto 1/\dot{P}`$ increases sharply. We find a similar behaviour if mass transfer is treated self-consistently, with an angular momentum loss rate $`\dot{J}=`$ const. or $`\dot{J}\propto J`$, calibrated such that the corresponding unevolved sequence reproduces the observed period gap (giving $`\dot{M}=1.5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ at $`P\simeq 3`$ h).
An obvious way to prevent evolved sequences from evolving into the period gap region is to increase the mass transfer rate. This increases the deviation from thermal equilibrium and leads to period bounce above the upper edge of the observed gap. Test calculations with constant mass transfer rate show that period bounce occurs at $`P>3`$ h with $`\dot{M}\gtrsim 3\times 10^{-9}`$ and $`5\times 10^{-9}`$ $`M_\odot \mathrm{yr}^{-1}`$ for sequences with $`X_\mathrm{c}=0.16`$ and 0.05, respectively, for an initial donor mass of $`1M_\odot `$. The same behaviour is found for donors with higher initial mass. In other words, sequences with smaller $`X_\mathrm{c}`$ require a higher $`\dot{M}`$ near the upper edge of the period gap than less evolved donors, consistent with the fact that the thermal time is shorter if $`X_\mathrm{c}`$ is small. These evolved sequences with high transfer rates ($`\dot{M}>5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$) also match the observed spectral types in the region $`P`$ = 3–6 h, just as high-$`\dot{M}`$ sequences with ZAMS donors do (see §2 and Fig. 1).
Note however that for long periods $`P\gtrsim 5`$ h a higher $`\dot{M}`$ has the same effect as for the unevolved sequences discussed in §2 (Fig. 1). The effective temperature is larger and therefore the spectral type earlier than for a sequence with smaller $`\dot{M}`$. Thus, in order to account for the full range of rather cool spectral types at long periods, $`\dot{M}`$ cannot always be high along all evolved sequences.
To summarize: evolved sequences require a high $`\dot{M}`$ ($`\gtrsim 5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$) near the upper gap region to avoid crossing the gap, but a low mass transfer rate ($`\dot{M}\lesssim 5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$) for $`P\gtrsim 6`$ h to explain the late spectral type CVs in this region. This suggests that the mean mass transfer rate increases during the secular evolution of nuclearly evolved donors.
## 4 Discussion and conclusions
We performed calculations of the long–term evolution of CVs with up–to–date stellar models that fit observed properties of single low–mass stars exceptionally well. Our focus was on the orbital period– spectral type ($`P`$–SpT) diagram, and the observed location of CV secondaries which populate a band with spectral types significantly later than ZAMS stars at the same orbital period.
Our calculations tested if the observed spectral types can be reproduced by varying two main parameters, the mass transfer rate $`\dot{M}`$ (assumed constant), and the degree of nuclear evolution of the secondary before mass transfer starts (measured in terms of the initial central H fraction $`X_\mathrm{c}`$). Only CVs forming with massive ($`1`$–$`1.2M_\odot `$) donors, i.e. at long periods, can have donors in which H is significantly depleted in the centre.
We summarize our results as follows: (i) In the framework of the standard discontinuous orbital braking model for the period gap, the observed gap between 2–3 h is well reproduced by sequences starting from ZAMS donors and proceeding at a mass transfer rate $`\dot{M}\simeq 1`$–$`2\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ near the upper edge of the gap. (ii) Higher transfer rates ($`\dot{M}\gtrsim 5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$) than assumed in (i) near the upper edge of the gap are required to explain the full range of late spectral types of secondaries in CVs with periods $`P=3`$–6 h. This is true whatever the evolutionary state of the donor at turn-on of mass transfer. (iii) A family of evolutionary sequences starting mass transfer from an evolved donor with varying $`X_\mathrm{c}<0.5`$ and low mass transfer rate ($`\dot{M}\lesssim 5\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$) covers the observed locations of CVs in the $`P`$–SpT diagram for $`P\gtrsim 6`$ h. (iv) For these sequences, higher mass transfer rates than in (i) are required near 3 h, otherwise they would evolve into the period gap and predict too early spectral types at shorter $`P`$.
If $`\dot{M}`$ is sufficiently large, the sequences reach their minimum period above the upper edge of the period gap. The donor becomes fully convective at a mass smaller than the canonical $`0.21\mathrm{M}_{}`$ of the standard (unevolved) sequence, but at a period longer than 3 h. If the system detaches at this point and the donor can re–establish thermal equilibrium, then mass transfer would resume at a period well below 2 h. This guarantees consistency with the standard discontinuous orbital braking model.
Points (ii) to (iv) show that if the mass transfer rate increases significantly along the secular evolution of evolved donors, these sequences can explain the observed scatter in the $`P`$SpT diagram both above 6 h, through the effect a lower $`X_\mathrm{c}`$ has on the evolutionary track (see §3 and Fig. 3), and below 6 h, through the effect of a higher $`\dot{M}`$ (see §2 and Fig. 1).
A large range of the secular mean mass transfer rate, as suggested by (ii), could be reconciled with the sharp boundaries of the observed period gap. In the standard model these are a result of the dominance of sequences with the canonical value $`1`$–$`2\times 10^{-9}M_\odot \mathrm{yr}^{-1}`$ at periods close to 3 h. The apparent scatter of data in the $`P`$–SpT diagram does not support this dominance. Reasonably sharp boundaries could be maintained if the canonical value is the minimum of the range of transfer rates. The observed gap edges would then be defined by this minimum mass transfer rate sequence. Sequences with higher $`\dot{M}`$ would bounce before they reach the upper edge of the gap. They detach at longer periods and reattach at shorter periods than the standard sequence.
A more serious problem is that the large $`\dot{M}`$ range for otherwise similar system parameters, as suggested by (ii) for unevolved sequences, leaves the nature of the main control parameter determining the strength of orbital angular momentum losses completely undetermined. Our experiments show that $`X_\mathrm{c}`$ could be this parameter.
The full observed range of data in the $`P`$-SpT diagram could be explained if the orbital braking strength increases both with decreasing $`P`$ and decreasing $`X_\mathrm{c}`$. This ensures that period bounce prevents evolved sequences from overpopulating the period gap, while they still pass through rather late spectral types at long periods. The resulting $`\dot{J}`$ law must give the canonical value of $`\dot{M}`$ at 3 h for $`X_\mathrm{c}=0.7`$. A large $`\dot{M}`$ range for unevolved systems is not required.
With standard magnetic braking laws (e.g. by Verbunt & Zwaan 1981, Mestel & Spruit 1987) the transfer rate usually increases with period, and is smaller for sequences with more evolved donors (see e.g. Pylyser & Savonije 1989, Singer et al. 1993, Ritter 1994), i.e. has just the opposite differential behaviour than the one we propose. Given the significant uncertainty in these magnetic braking models and our poor knowledge of the underlying stellar magnetic dynamo, the suggestion that observations favour a non–standard law does not appear unrealistic. Further considerations of the theory of magnetic braking to assess our finding are clearly beyond the scope of this paper. We note, however, that there are magnetic braking scenarios in the literature that lead to an increase of $`\dot{M}`$ with progressing evolution. Kolb & Ritter (1992) obtained this for a Verbunt & Zwaan (1981)–type law when only the donor’s convective envelope is coupled to the orbital motion. Zangrilli et al. (1997) found this property for their boundary–layer dynamo.
At this point we re-emphasize that the input physics on which our stellar models are based provides an excellent description of observed properties of isolated low-mass stars. Unless there are effects peculiar to CV secondaries that are not taken into account by the models (e.g., effects due to the rapid rotation, or to irradiation by the white dwarf and accretion disc), one is forced to accept that a large fraction of CVs have a nuclearly evolved donor star. This is in stark contrast to predictions of standard models for the formation of CVs, where most CVs form with a secondary which is too low-mass ($`\lesssim 0.6M_\odot `$) to be evolved (Politano 1996, King et al. 1994). But this predominance of unevolved donors is mainly due to the neglect of systems forming with a secondary that is significantly more massive than the white dwarf (Ritter 2000). These undergo thermal-timescale mass transfer at a rate $`\sim 10^{-7}M_\odot \mathrm{yr}^{-1}`$ and are usually associated with supersoft sources (e.g. di Stefano and Rappaport 1994). It is perfectly possible that these systems reappear as standard CVs once the mass ratio ($`q=`$ donor mass/WD mass) is sufficiently small. In descendants of this thermal-timescale evolution any degree of nuclear evolution prior to mass transfer is possible. De Kool (1992) finds that the impact of these survivor systems depends on the assumed initial mass ratio distribution, which ultimately determines the secondary mass distribution in post-common envelope binaries. For de Kool’s model 1 (d$`N\propto `$d$`q`$) survivor systems completely dominate the CV population. A full appraisal of the viability of the apparent predominance of nuclearly evolved systems is beyond the scope of our study. This requires a full population synthesis with a self-consistent treatment of $`X_\mathrm{c}`$-dependent angular momentum losses.
Finally, the accurate determination of donor masses could provide an observational test of our hypothesis that a significant fraction of CV donors is nuclearly evolved. Observational errors are still too large (e.g. Smith & Dhillon 1998) for this test. Systems with the most discrepant (late) spectral type should have the smallest mass at any given $`P`$.
### Acknowledgments
We are grateful to K. Beuermann, H. Ritter, H. Spruit, F. and E. Meyer for valuable discussions. We thank the referee, R. Smith, for a careful reading of the manuscript. I.B. thanks the Max–Planck–Institut für Astrophysik (Garching) and the University of Leicester for hospitality during the realization of part of this work. The calculations were performed on the T3E at Centre d’Etudes Nucléaires de Grenoble. We thank A. Norton for improving the language of the manuscript.
# Symmetry breaking and coarsening in spatially distributed evolutionary processes including sexual reproduction and disruptive selection
## Abstract
Sexual reproduction presents significant challenges to formal treatment of evolutionary processes. A starting point for systematic treatments of ecological and evolutionary phenomena has been provided by the gene centered view of evolution which assigns effective fitness to each allele instead of each organism. The gene centered view can be formalized as a dynamic mean field approximation applied to genes in reproduction / selection dynamics. We show that the gene centered view breaks down for symmetry breaking and pattern formation within a population; and show that spatial distributions of organisms with local mating neighborhoods in the presence of disruptive selection give rise to such symmetry breaking and pattern formation in the genetic composition of local populations. Global dynamics follows conventional coarsening of systems with nonconserved order parameters. The results have significant implications for the ecology of genetic diversity and species formation.
The dynamics of evolution can be studied by statistical models that reflect properties of general models of the statistical dynamics of interacting systems. Research on this topic can affect the conceptual foundations of evolutionary biology, and many applications in ecology, population biology, and conservation biology. Among the central problems is understanding the creation, persistence, and disappearance of genetic diversity. In this paper, we describe a model of sexual reproduction which illustrates mean field approaches (the gene-centered view of evolution) and the relevance of symmetry breaking and pattern formation in spatially distributed populations as an example of the breakdown of these approximations.
Pattern formation in genomic space has been of increasing interest in theoretical studies of sympatric speciation. These papers advance our understanding of the mechanisms of forming two species from one. However, they do not address the fundamental and practical problems of genetic diversity and spatial inhomogeneity within one species—a population whose evolution continues to be coupled by sexual reproduction. Moreover, and significantly, these papers do not address the implication of symmetry breaking and pattern formation for the gene centered view as a fundamental framework of evolutionary theory. In the following, we demonstrate that symmetry breaking and pattern formation invalidate the gene centered view (whether or not speciation occurs), and that they are important for the spatio-temporal behavior of the genetic composition of sexually reproducing populations. This has a wide range of implications for ecology, conservation biology, and evolutionary theory.
Before introducing the complications of sexual reproduction, we start with the simplest iterative model of exponential growth of asexually reproducing populations:
$$N_i(t+1)=\lambda _iN_i(t)$$
(1)
where $`N_i`$ is the population of type $`i`$ and $`\lambda _i`$ is its fitness. If the total population is normalized, so that only the proportion of each type is dynamically relevant, we obtain
$$P_i(t+1)=\frac{\lambda _i}{\sum _j\lambda _jP_j(t)}P_i(t)$$
(2)
where $`P_i`$ is the proportion of type $`i`$. The addition of mutations to the model, $`N_i(t+1)=\sum _j\lambda _{ij}N_j(t)`$, gives rise to the quasi-species model, which has attracted significant attention in the physics community. Recent research has focused on such questions as determining the rate of environmental change that can be followed by evolutionary change.
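Equation (2) iterates directly; a minimal sketch:

```python
import numpy as np

def iterate_proportions(P, lam, steps):
    """Iterate eq. (2): P_i(t+1) = lam_i P_i(t) / sum_j lam_j P_j(t)."""
    P = np.asarray(P, dtype=float)
    for _ in range(steps):
        P = lam * P / np.dot(lam, P)
    return P

lam = np.array([1.0, 1.1, 1.3])                       # fitnesses of three types
print(iterate_proportions([1/3, 1/3, 1/3], lam, 50))  # the fittest type takes over
```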
Sexual reproduction causes offspring to depend on the genetic makeup of two parents. This leads to conceptual problems (not just mathematical problems) in evolutionary theory because the offspring of an organism may be as different from the parent as organisms it is competing against. A partial solution to this problem is recognizing that it is sufficient for offspring traits to be correlated to parental traits for the principles of evolution to apply. However, the gene centered view is a simpler perspective in which the genes serve as indivisible units that are preserved from generation to generation. In effect, different versions of the gene, i.e. alleles, compete rather than organisms. This view simplifies the interplay of selection and heredity in sexually reproducing organisms.
We will show, formally, that the gene centered view corresponds to a mean field approximation. This clarifies the domain of its applicability and the conditions in which it should not be applied to understanding evolutionary processes in real biological systems. We will then describe the breakdown of the gene centered view in the case of symmetry breaking and pattern formation and its implications for the study of ecological systems.
It is helpful to explain the gene centered view using the “rowers analogy” introduced by Dawkins. In this analogy boats of mixed English- and German-speaking rowers are filled from a common rower pool. Boats compete in heats and it is assumed that a speed advantage exists for boats with more same-language rowers. The successful rowers are then returned to the rower pool for the next round. Over time, a predominantly and then totally same language rower pool will result. Thus, the selection of boats serves, in effect, to select rowers who therefore may be considered to be competing against each other. In order to make the competition between rowers precise, an effective fitness can be assigned to a rower. We will make explicit the rowers model (in the context of genes and sexual reproduction) and demonstrate the assignment of fitness to rowers (genes).
The rowers analogy can be directly realized by considering genes with selection in favor of a particular combination of alleles on genes. Specifically, for two genes, after selection, when allele $`A_1`$ appears on the first gene, allele $`B_1`$ must appear on the second gene, and when allele $`A_{-1}`$ appears on the first gene, allele $`B_{-1}`$ must appear on the second gene. We can write these high fitness organisms with the notation $`(1,1)`$ and $`(-1,-1)`$, and the organisms with lower fitness as $`(1,-1)`$ and $`(-1,1)`$. For simplicity, we assume below that the lower fitness organisms are non-reproducing. Models which allow them to reproduce, but with lower probabilities than the high fitness organisms, give similar results.
The assumption of placing rowers into the rower pool and taking them out at random is equivalent to assuming that there are no correlations in reproduction (i.e. no correlations in mate pairing) and that there is a sufficiently dense sampling of genomic combinations by the population (in this case only a few possibilities). Then the offspring genetic makeup can be written as a product of the probability of each allele in the parent population. This assumption describes a “panmictic population” which forms the core of the gene centered view often used in population biology. The assumption that the offspring genotype frequencies can be written as a product of the parent allele frequencies is a dynamic form of the usual mean field approximation neglect of correlations in interacting statistical systems. While the explicit dynamics of this system is not like the usual treatment of mean-field theory, e.g. in the Ising model, many of the implications are analogous.
In our case, the reproducing parents (either $`(1,1)`$ or $`(-1,-1)`$) must contain the same proportion of the correlated alleles ($`A_1`$ and $`B_1`$), so that $`p(t)`$ can represent the proportion of either $`A_1`$ or $`B_1`$ and $`1-p(t)`$ can represent the proportion of either $`A_{-1}`$ or $`B_{-1}`$. The reproduction equations specifying the offspring (before selection) for the gene pool model are:
$`P_{1,1}(t+1)`$ $`=`$ $`p(t)^2`$ (3)
$`P_{1,-1}(t+1)`$ $`=`$ $`P_{-1,1}(t+1)=p(t)(1-p(t))`$ (4)
$`P_{-1,-1}(t+1)`$ $`=`$ $`(1-p(t))^2`$ (5)
where $`P_{1,1}`$ is the proportion of $`(1,1)`$ among the offspring, and similarly for the other cases.
The proportion of the alleles in generation $`t+1`$ is given by the selected organisms. Since the less fit organisms $`(1,-1)`$ and $`(-1,1)`$ do not survive, this is given by $`p(t+1)=P_{1,1}^{\prime }(t+1)+P_{1,-1}^{\prime }(t+1)=P_{1,1}^{\prime }(t+1)`$, where primes indicate the proportion of the selected organisms. Thus
$$p(t+1)=\frac{P_{1,1}(t+1)}{P_{1,1}(t+1)+P_{-1,-1}(t+1)}$$
(6)
This gives the update equation:
$$p(t+1)=\frac{p(t)^2}{p(t)^2+(1-p(t))^2}$$
(7)
There are two stable states of the population with all organisms $`(1,1)`$ or all organisms $`(-1,-1)`$. If we start with exactly 50% of each allele, then there is an unstable steady state. In every generation 50% of the organisms reproduce and 50% do not. Any small bias in the proportion of one or the other will cause there to be progressively more of one type over the other, and the population will eventually have only one set of alleles. This problem is reminiscent of an Ising ferromagnet at low temperature: A statistically biased initial condition leads to alignment.
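This bistability is easy to verify directly; the following lines (an illustrative Python sketch written for this discussion, not part of the original analysis) iterate the update map of Eq. (7) from initial proportions at and slightly away from the unstable point:

```python
def update(p):
    # One generation of the mean-field (gene centered) dynamics, Eq. (7).
    return p**2 / (p**2 + (1.0 - p)**2)

for p0 in (0.50, 0.51, 0.49):
    p = p0
    for _ in range(20):          # iterate 20 generations
        p = update(p)
    print(f"p(0) = {p0:.2f}  ->  p(20) = {p:.6f}")

# p(0) = 0.50 remains at the unstable fixed point; any initial bias is
# amplified toward fixation at p = 1 or p = 0.
```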
This model can be reinterpreted by assigning a mean fitness (analogous to a mean field) to each allele as in Eq. (2). The fitness coefficient for allele $`A_1`$ or $`B_1`$ is $`\lambda _1=p(t)`$, with the corresponding $`\lambda _{-1}=1-\lambda _1`$. The assignment of a fitness to an allele reflects the gene centered view. The explicit dependence on the population composition (an English-speaking rower in a predominantly English-speaking rower pool has higher fitness than one in a predominantly German-speaking rower pool) has been objected to on grounds of biological appropriateness. For our purposes, we recognize this dependence as the natural outcome of a mean field approximation.
We can describe more specifically the relationship between this picture and the mean field approximation by recognizing that the assumption of no correlations in reproduction (a random mating pattern of parents) is the same as a long-range interaction in an Ising model. If there is a spatial distribution of organisms with mating correlated by spatial location and fluctuations so that the starting population has more of the alleles represented by $`1`$ in one region and more of the alleles represented by $`-1`$ in another region, then patches of organisms that have predominantly $`(1,1)`$ or $`(-1,-1)`$ form after several generations. This symmetry breaking, like in a ferromagnet, is the usual breakdown of the mean field approximation. Here, it creates correlations / patterns in the genetic makeup of the population. When correlations become significant then the species has two types, though they are still able to cross-mate and are doing so at the boundaries of the patches. Thus the gene centered view breaks down when multiple organism types form.
Understanding the spatial distribution of organism genotype is a central problem in ecology and conservation biology. The spatial patterns that can arise from spontaneous symmetry breaking through sexual reproduction, as implied by the analogy with other models, may be relevant. A systematic study of the relevance of symmetry breaking to ecological systems begins from a study of spatially distributed versions of the model just described. This is the simplest model of disruptive selection, which corresponds to selection in favor of two genotypes whose hybrids are less viable. Assuming overlapping local reproduction neighborhoods, called demes, the relevant equations are:
$`p(x,t+1)`$ $`=`$ $`D(\overline{p}(x,t))`$ (8)
$`D(p)`$ $`=`$ $`{\displaystyle \frac{p^2}{p^2+(1-p)^2}}`$ (9)
$`\overline{p}(x,t)`$ $`=`$ $`{\displaystyle \frac{1}{N_R}}{\displaystyle \sum _{|x_j|\le R}}p(x+x_j,t)`$ (10)
$`N_R`$ $`=`$ $`\left|\{x_j:|x_j|\le R\}\right|`$ (11)
where the organisms are distributed over a two-dimensional grid and the local genotype averaging is performed over a preselected range of grid cells around the central cell. Under these conditions the organisms locally tend to assume one or the other type. In contrast to conventional insights in ecology and population biology, there is no need for either complete separation of organisms or environmental variations to lead to spatially varying genotypes. However, because the organisms are not physically isolated from each other, the boundaries between neighboring domains will move, and the domains will follow conventional coarsening behavior for systems with non-conserved order parameters.
A simulation of this model starting from random initial conditions is shown in Fig. 1. This initial condition can arise when selection becomes disruptive after being non-disruptive due to environmental change. The formation of domains of the two different types that progressively coarsen over time can be seen. While the evolutionary dynamics describing the local process of organism selection is different, the spatial dynamics of domains is equivalent to the process of coarsening / pattern formation that occurs in many other systems such as an Ising model or similar cellular automata models. Fourier transformed power spectra (Figs. 2–4) confirm the correspondence to conventional coarsening by showing that the correlation length grows as $`t^{1/2}`$ after initial transients. In a finite sized system, it is possible for one type to completely eliminate the other type. However, the time scale over which this takes place is much longer than the results assuming complete reproductive mixing, i.e. the mean field approximation. Since flat boundaries do not move except by random perturbations, a non-uniform final state is possible. The addition of noise will cause slow relaxation of flat boundaries but they can also be trapped by quenched (frozen) inhomogeneity.
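A minimal NumPy sketch of the dynamics of Eqs. (8)–(11) is given below for readers who wish to reproduce this qualitative behavior; the grid size, deme radius, and number of generations are illustrative choices, not the parameters behind Figs. 1–5:

```python
import numpy as np

rng = np.random.default_rng(0)
L, R, steps = 128, 3, 200            # grid size, deme radius, generations

# Local proportion p(x) of the +1 alleles; random initial conditions.
p = rng.random((L, L))

# Offsets within the mating neighborhood |x_j| <= R of Eq. (11).
offsets = [(dx, dy) for dx in range(-R, R + 1) for dy in range(-R, R + 1)
           if dx * dx + dy * dy <= R * R]

def deme_average(p):
    # Neighborhood average of Eq. (10), with periodic boundaries.
    return sum(np.roll(np.roll(p, dx, axis=0), dy, axis=1)
               for dx, dy in offsets) / len(offsets)

for _ in range(steps):
    pbar = deme_average(p)
    p = pbar**2 / (pbar**2 + (1.0 - pbar)**2)   # Eqs. (8)-(9)

# Imaging p (thresholded at 1/2) at successive times shows growing
# single-type domains, as in Fig. 1.
print("fraction of sites with p > 1/2:", (p > 0.5).mean())
```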
The results have significant implications for ecology of genetic diversity and species formation. The observation of harlequin distribution patterns of sister forms is generally attributed to nonhomogeneities in the environment, i.e. that these patterns reflect features of the underlying habitat (=selective) template. Our results show that disruptive selection can give rise to spontaneously self-organized patterns of spatial distribution that are independent of underlying habitat structure. At a particular time, the history of introduction of disruptive selection events would be apparent as a set of overlapping patterns of genetic diversity that exist on various spatial scales.
More specific relevance of these results to the theoretical understanding of genetic diversity can be seen in Fig. 5 where the population averaged time dependence of $`p`$ is shown. The gene centered view / mean field approximation predicts a rapid homogenization over the entire population. The persistence of diversity in simulations with symmetry breaking, as compared to its disappearance in mean field approximation, is significant. Implications for experimental tests and methods are also important. Symmetry breaking predicts that when population diversity is measured locally, rapid homogenization similar to the mean field prediction will apply, while when they are measured over areas significantly larger than the expected range of reproduction, extended persistence of diversity should be observed.
The divergence of population traits in space studied in our work can also couple to processes of speciation, i.e., processes that prevent interbreeding or doom the progeny of such breedings. These may include assortative mating, whereby organism traits inhibit interbreeding. Such divergences can potentially lead to the formation of multiple species from a single connected species (sympatric speciation). By contrast, allopatric speciation, where disconnected populations diverge, has traditionally been the more accepted process even though experimental observations suggest sympatric speciation is important.
Recent studies have begun to connect the process of symmetry breaking to sympatric speciation. Without considering pattern formation in physical space, we and other researchers have been investigating the role of pattern formation in genomic space as a mechanism or description of sympatric speciation. These studies include: a model of stochastic branching and fixation of subpopulations due to genetic drifts and local reproduction in genome space, general reaction-diffusion Turing pattern formation models in genomic space, and specific individual-based models of reproductive isolation involving assortative mating and disruptive selection (intrinsic disruptive selection, or disruptive selection arising from competition or sexual selection). Our work, presented here, is unique in discussing spatial inhomogeneity and genetic diversity within one species.
In conclusion, in formalizing sexual reproduction in evolutionary theory, we have found fundamental justification for rejecting the widespread application of the gene centered view. The formal mathematical analysis we presented to demonstrate the lack of applicability of the gene centered view is an essential step toward developing a sound conceptual foundation for evolution. We also showed that the gene centered view breaks down for species where local mating and disruptive selection give rise to symmetry breaking and pattern formation, which correspond to genetic inhomogeneity and trait divergence of subpopulations. The patterns formed undergo coarsening, following the usual universal spatio-temporal scaling behavior. The slow movement of boundaries between types cause long term persistence of genetic diversity through the local survival of (partially) incompatible types. This provides a new understanding of the development and persistence of spatio-temporal patterns of genetic diversity within a single species.
One should note that the context in which the gene centered view breaks down is of profound significance in applied aspects of modern ecology and conservation biology. The preservation of endangered species and ecosystems is currently at risk due to a dramatic decrease in their genetic diversity. We have described the implications of our results for the experimental observation of genetic diversity in endangered species. Our study of spatial patterns of genetic diversity in populations may also help guide the design of conservation areas and human directed breeding programs for endangered organisms.
# Critical velocity in cylindrical Bose-Einstein condensates
## Abstract
We describe a dramatic decrease of the critical velocity in elongated cylindrical Bose-Einstein condensates which originates from the non-uniform character of the radial density profile. We discuss this mechanism with respect to recent measurements at MIT.
Superfluidity is one of the striking manifestations of quantum statistics, and the occurrence of this phenomenon depends on the excitation spectrum of the quantum liquid. The key physical quantity is the critical velocity $`v_c`$, that is, the maximum velocity at which the flow of the liquid is still non-dissipative (superfluid). The well-known Landau criterion gives the critical velocity as the minimum ratio of energy to momentum in the excitation spectrum:
$$v_c=\mathrm{min}\left(\frac{ϵ(k)}{k}\right)$$
(1)
(we put $`\hbar =1`$ and the particle mass $`M=1`$). Actually, in liquid $`{}_{}{}^{4}\text{He}`$ the Landau criterion overestimates the critical velocity. The explanation of this fact was put forward by Feynman (see e.g. ) who suggested that superfluidity is destroyed by spontaneous creation of complex excitations (vortex lines, vortex rings, etc.). Extensive theoretical and experimental studies on this subject are reviewed in .
Bose-Einstein condensation of dilute trapped clouds of alkali atoms offers new possibilities for the investigation of superfluidity . In the spatially homogeneous case, the spectrum of elementary excitations of a Bose-condensed gas is given by the Bogolyubov dispersion law :
$$ϵ(k)=\sqrt{\left(\frac{k^2}{2}\right)^2+2c_s^2\frac{k^2}{2}},$$
(2)
and the Landau critical velocity is equal to the speed of sound $`c_s=\sqrt{gn_0}`$ ($`n_0`$ is the condensate density, $`g=4\pi a`$, and $`a>0`$ is the scattering length). The first experimental observation of the critical velocity in trapped gaseous condensates, recently reported by the MIT group , gives a significantly smaller value of $`v_c`$.
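Since $`ϵ(k)/k`$ is monotonically increasing for the spectrum (2), the Landau minimum is attained as $`k\to 0`$ and equals $`c_s`$; a few illustrative Python lines (written for this discussion, in the units $`\hbar =M=1`$ used above) make this explicit:

```python
import numpy as np

c_s = 1.0                            # speed of sound (units with hbar = M = 1)
k = np.linspace(1e-4, 10.0, 100_000)
eps = np.sqrt((k**2 / 2)**2 + 2 * c_s**2 * (k**2 / 2))   # Bogolyubov law, Eq. (2)

print((eps / k).min())               # Landau criterion, Eq. (1): -> c_s = 1
```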
The analyses in recent theoretical publications (see references therein) employ the Feynman hypothesis and provide a qualitative explanation of the MIT experimental result. In this Letter we point out a simple geometrical effect which is characteristic for elongated cylindrical traps. Due to the non-uniform character of the radial density profile, the spectrum of axially propagating excitations in these traps is very different from the Bogolyubov dispersion law (2), and this difference leads to a strong decrease of the critical velocity. We show that this effect can at least partially explain the small critical velocity measured in the MIT experiment .
We consider an infinitely long cylindrical condensate which is harmonically trapped in the radial ($`\rho `$) direction. Then the condensate wave function $`\psi _0(\rho )`$ satisfies the Gross-Pitaevskii equation
$`\left(-{\displaystyle \frac{\mathrm{\Delta }_\rho }{2}}+V(\rho )-\mu +g|\psi _0(\rho )|^2\right)\psi _0(\rho )=0,`$
where $`\mu `$ is the chemical potential, $`V(\rho )=\omega ^2\rho ^2/2`$ is the trapping potential, and $`\omega `$ the trap frequency. In the Thomas-Fermi regime, where the ratio $`\eta =\mu /\omega \gg 1`$, the density profile is given by $`n_0(\rho )\equiv |\psi _0|^2=(\mu -V(\rho ))/g`$ and the chemical potential is related to the maximum condensate density as $`\mu =n_{0max}g`$.
Elementary excitations can be regarded as quantized fluctuations of the condensate wavefunction . In our trapping geometry they are characterized by the axial ($`z`$) wavevector $`k`$ and radial angular momentum $`m`$. The corresponding part of the field operator reads
$`\delta \widehat{\psi }={\displaystyle \sum _{m,k}}(u_{mk}\widehat{b}_{mk}-v_{mk}^{\ast }\widehat{b}_{mk}^{\dagger }),`$
where $`\widehat{b}_{mk}`$ ($`\widehat{b}_{mk}^{\dagger }`$) are annihilation (creation) operators of the excitations. The excitation wave functions can be written in the form
$`(u,v)_{mk}(\rho ,z)=(u,v)_{mk}(\rho )\mathrm{exp}(im\varphi )\mathrm{exp}(ikz),`$
where $`\varphi `$ is the angle in the $`x,y`$ plane, and the radial functions $`u_{mk}(\rho )`$ and $`v_{mk}(\rho )`$ are solutions of the Bogolyubov-de Gennes equations (see e.g. )
$`ϵu`$ $`=`$ $`\left(-{\displaystyle \frac{\mathrm{\Delta }_\rho }{2}}+{\displaystyle \frac{k^2}{2}}+V-\mu +2n_0g\right)u-n_0gv,`$ (3)
$`-ϵv`$ $`=`$ $`\left(-{\displaystyle \frac{\mathrm{\Delta }_\rho }{2}}+{\displaystyle \frac{k^2}{2}}+V-\mu +2n_0g\right)v-n_0gu.`$ (4)
Equations (3) and (4) constitute an eigenvalue problem. For given $`m`$ and $`k`$, they lead to a set of frequencies $`ϵ_{nm}(k)`$ characterized by the radial quantum number $`n`$ which takes integer values from zero to infinity. In the limit $`ϵ_{nm}(k)\ll \mu `$, these modes were found for $`m=0`$ within the hydrodynamic approach. For $`kR\ll 1`$, where $`R=(2\mu /\omega ^2)^{1/2}`$ is the Thomas-Fermi radial size of the condensate, the dispersion relation can be expanded in powers of $`k^2`$ :
$$ϵ_{n0}^2(k)=2\omega ^2n(n+1)+\frac{\omega ^2}{4}(kR)^2+O(k^4).$$
(5)
The lowest mode ($`n=0`$) represents axially propagating phonons: For this mode we have $`ϵ_{00}(k)=c_Zk`$, where the sound velocity $`c_Z=\sqrt{n_{0max}g/2}`$ is smaller by a factor of $`\sqrt{2}`$ than the Bogolyubov speed of sound at maximum condensate density, $`c_s`$ . The velocity $`c_s/\sqrt{2}`$ of axially propagating phonons has been measured in the MIT experiment .
The first correction to the linear behavior of the dispersion law $`ϵ_{00}(k)`$ at $`kR\ll 1`$ reveals its negative curvature: $`\delta ϵ_{00}=-\omega (kR)^3/192`$ . The perturbative analysis leading to Eq.(5) was extended numerically to $`k\sim 1/R`$, still assuming that $`ϵ_{n0}(k)\ll \mu `$. The calculations show that the group velocity of the first mode, $`dϵ_{00}/dk`$, decreases monotonically with increasing $`k`$ and can become significantly smaller than $`c_Z`$, the value characteristic of $`kR\ll 1`$. This indicates that the critical velocity (1) associated with creating axial excitations ($`m=0`$, $`n=0`$) is smaller than $`c_s`$ and can also be reduced to below $`c_Z`$. The physical reason is that the decrease of the condensate density with increasing $`\rho `$ makes the axial superfluid flow less stable (see below).
The hydrodynamic approach is not valid for $`ϵ\gtrsim \mu `$, where the excitation spectrum is no longer phonon-like and is dominated by the single particle dispersion relation $`ϵ(k)=k^2/2`$ (see Eq.(2)). The crossover between the two regimes occurs at $`k\sim k_c=\mu ^{1/2}`$ and prevents the decrease of the group velocity with further increase in $`k`$. Obviously, the decrease of the critical velocity due to the radial inhomogeneity of the density profile can be dramatic only for $`k_c\gg 1/R`$, i.e. in large condensates with $`\eta \gg 1`$.
The Bogolyubov-de Gennes equations (3) and (4) are valid for arbitrary $`k`$ and hence allow us to find the excitation spectrum in the crossover regime (see Fig.1) and establish the value of the critical velocity as a function of the Thomas-Fermi parameter $`\eta `$. Since the spectrum of excitations consists of a number of independent branches characterized by the quantum numbers $`n`$ and $`m`$, Eq.(1) gives a value $`v_c^{(nm)}`$ for the critical velocity corresponding to each mode. The results of our numerical calculations for the two lowest modes ($`n=0,m=0`$ and $`n=1,m=0`$) are presented in Fig.2. The breakdown of superfluidity occurs when the velocity of the flow matches the lowest of the velocities $`v_c^{(nm)}`$, which proves to be $`v_c^{(00)}`$. The corresponding curve in Fig.2 indicates a significant decrease of the critical velocity compared to $`c_Z`$ already at $`\eta \sim 10`$.
The decrease of the critical velocity with increasing ratio $`\mu /\omega `$ seems counterintuitive. One can increase this ratio by decreasing $`\omega `$ and keeping $`\mu `$ constant. Then, at a constant density on the axis of the cylinder, for radially larger condensates one gets smaller $`v_c`$. However, this phenomenon has a clear physical explanation. The key point is the non-uniform character of the radial density profile. With increasing axial wavevector $`k`$, the wave functions of the excitations with $`k\lesssim k_c`$ are more localized in the outer spatial region of the condensate and are thus more sensitive to the small value of $`n_0`$ in this region. It is this feature that provides a decrease of $`v_c`$ with increasing $`\eta `$, since for larger $`\eta `$ one has more possibilities to increase $`k`$ and still satisfy the condition $`k\lesssim k_c`$. The described situation is not met in liquid helium where the density is practically constant and changes only in the region very close to the border of the sample.
We now turn to the discussion of the MIT experiment . The radial frequency in this experiment was $`\omega =2\pi \times 65`$ s<sup>-1</sup> and the chemical potential $`\mu =110`$ nK, so that $`\eta \approx 35`$. The sound velocity $`c_s=6.2`$ mm/s was measured by observing a ballistic expansion. For these parameters we find $`v_c\approx 0.42c_s=2.6`$ mm/s, which is not far from the observed value $`v_c^{(exp)}\approx 1.6`$ mm/s . We do not pretend to explain the MIT data as the condensate in the experiment is not an infinite cylinder. The ratio of the radial to axial frequency for the MIT trap is $`\approx 3.3`$, which indicates that the effect of a finite axial size can be important. Nevertheless, our calculations may partially explain the small critical velocity observed in the experiment. Moreover, a smaller value of $`v_c`$ originating from our geometrical effect can facilitate the nucleation of vortex objects and thus provide a larger decrease of the critical velocity.
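These numbers follow directly from the quoted trap parameters; a quick check (physical constants from CODATA, $`\omega `$ and $`\mu `$ as cited above):

```python
import math

hbar = 1.054571817e-34       # J s
k_B  = 1.380649e-23          # J / K

omega = 2 * math.pi * 65.0   # radial trap frequency (s^-1)
mu    = 110e-9 * k_B         # chemical potential, 110 nK, in J

eta = mu / (hbar * omega)    # Thomas-Fermi parameter mu / omega
print(f"eta = {eta:.0f}")                # -> ~35

c_s = 6.2                    # measured sound velocity, mm/s
print(f"v_c = {0.42 * c_s:.1f} mm/s")    # -> ~2.6 mm/s (vs 1.6 mm/s observed)
```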
We would like to thank M. Lewenstein, A. Sanpera and A. Muryshev for fruitful discussions and numerical insights. The work was supported by the Austrian Science Foundation, by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), by INTAS, and by the Russian Foundation for Basic Studies (grant 99-02-1802).
# Northwestern University: NUHEP 706 University of Wisconsin: MADPH-00-1172 April 2000 Extending the Frontiers—Reconciling Accelerator and Cosmic Ray p-p Cross Sections
We simultaneously fit a QCD-inspired parameterization of all accelerator data on forward proton-proton and antiproton-proton scattering amplitudes, together with cosmic ray data (using Glauber theory), to predict proton-air and proton-proton cross sections at energies near $`\sqrt{s}\approx 30`$ TeV. The p-air cosmic ray measurements provide a strong constraint on the inclusive particle production cross section, as well as greatly reducing the errors on the fit parameters—in turn, greatly reducing the errors in the high energy proton-proton and proton-air cross section predictions.
The energy range of cosmic ray experiments covers not only the energy of the Large Hadron Collider (LHC), but extends beyond it. Cosmic ray experiments can measure the penetration in the atmosphere of these very high energy protons—however, extracting proton-proton cross sections from cosmic ray observations is far from straightforward . By a variety of experimental techniques, cosmic ray experiments map the atmospheric depth at which cosmic ray initiated showers develop. The measured quantity is the shower attenuation length ($`\mathrm{\Lambda }_m`$), which is not only sensitive to the interaction length of the protons in the atmosphere ($`\lambda _{p\mathrm{air}}`$), with
$$\mathrm{\Lambda }_m=k\lambda _{p\mathrm{air}}=k\frac{14.5m_p}{\sigma _{p\mathrm{air}}^{\mathrm{inel}}},$$
(1)
but also depends critically on the inelasticity, which determines the rate at which the energy of the primary proton is dissipated into electromagnetic shower energy observed in the experiment. The latter effect is taken into account in Eq. (1) by the parameter $`k`$; $`m_p`$ is the proton mass and $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ the inelastic proton-air cross section. The departure of $`k`$ from unity depends on the inclusive particle production cross section in nucleon and meson interactions on the light nuclear target of the atmosphere and its energy dependence.
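Given a measured attenuation length and a value of $`k`$, Eq. (1) inverts directly to the inelastic cross section; a small sketch (the numerical value of $`\mathrm{\Lambda }_m`$ below is purely illustrative, not a quoted measurement):

```python
m_p = 1.6726e-24           # proton mass in grams
MB_PER_CM2 = 1.0e27        # 1 cm^2 = 1e27 mb

def sigma_inel_mb(Lambda_m, k):
    # Invert Eq. (1): Lambda_m = k * 14.5 m_p / sigma_inel,
    # with Lambda_m in g/cm^2 and the result in mb.
    lam = Lambda_m / k                 # p-air interaction length, g/cm^2
    return 14.5 * m_p / lam * MB_PER_CM2

# Illustrative attenuation length, with the fitted k = 1.349:
print(f"{sigma_inel_mb(73.0, 1.349):.0f} mb")   # -> ~450 mb
```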
The extraction of the pp cross section from the cosmic ray data is a two stage process. First, one calculates the $`p`$-air total cross section from the inelastic cross section inferred in Eq. (1), where
$$\sigma _{p\mathrm{air}}^{\mathrm{inel}}=\sigma _{p\mathrm{air}}-\sigma _{p\mathrm{air}}^{\mathrm{el}}-\sigma _{p\mathrm{air}}^{q\mathrm{el}}.$$
(2)
Next, the Glauber method transforms the value of $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ into a proton-proton total cross section $`\sigma _{pp}`$; all the necessary steps are calculable in the theory, but depend sensitively on a knowledge of $`B`$, the slope of $`\frac{d\sigma _{pp}^{\mathrm{el}}}{dt}`$, the $`pp`$ differential elastic scattering cross section, where
$$B=\left[\frac{d}{dt}\left(\mathrm{ln}\frac{d\sigma _{pp}^{\mathrm{el}}}{dt}\right)\right]_{t=0}.$$
(3)
In Eq. (2) the cross section for particle production is supplemented with $`\sigma _{p\mathrm{air}}^{\mathrm{el}}`$ and $`\sigma _{p\mathrm{air}}^{q\mathrm{el}}`$, the elastic and quasi-elastic cross section, respectively, as calculated by the Glauber theory, to obtain the total cross section $`\sigma _{p\mathrm{air}}`$. We show in Fig. 1 plots of $`B`$ as a function of $`\sigma _{pp}`$, for 5 curves of different values of $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$. This summarizes the reduction procedure from the measured quantity $`\mathrm{\Lambda }_m`$ (of Eq. 1) to $`\sigma _{pp}`$. Also plotted in Fig. 1 is a curve (dashed) of $`B`$ vs. $`\sigma _{pp}`$ which will be discussed later. Two significant drawbacks of this extraction method are that one needs:
1. a model of proton-air interactions to complete the loop between the measured attenuation length $`\mathrm{\Lambda }_m`$ and the cross section $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$, i.e., the value of $`k`$ in Eq. (1).
2. a simultaneous relation between $`B`$ and $`\sigma _{pp}`$ at very high energies—well above the region currently accessed by accelerators.
A proposal to minimize the impact of theory on these needs is the topic of this note.
We have constructed a QCD-inspired parameterization of the forward proton-proton and proton-antiproton scattering amplitudes which is analytic, unitary and fits all accelerator data of $`\sigma _{\mathrm{tot}}`$, $`B`$ and $`\rho `$, the ratio of the real-to-imaginary part of the forward scattering amplitude; see Fig. 2.
In addition, the high energy cosmic ray data of the Fly’s Eye and AGASA experiments are also simultaneously used, i.e., $`k`$ from Eq. (1) is also a fitted quantity—we refer to this fit as a global fit . We emphasize that in the global fit, all 4 quantities, $`\sigma _{\mathrm{tot}}`$, $`B`$, $`\rho `$ and $`k`$, are simultaneously fitted. Because our parameterization is both unitary and analytic, its high energy predictions are effectively model-independent, if you require that the proton is asymptotically a black disk. Using vector meson dominance and the additive quark models, we find further support for our QCD fit—it accommodates a wealth of data on photon-proton and photon-photon interactions without the introduction of new parameters. In particular, it also simultaneously fits $`\sigma _{pp}`$ and $`B`$, forcing a relationship between the two. Specifically, the $`B`$ vs. $`\sigma _{pp}`$ prediction of our fit completes the relation needed (using the Glauber model) between $`\sigma _{pp}`$ and $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$. The percentage error in the prediction of $`\sigma _{pp}`$ at $`\sqrt{s}=30`$ TeV is $`\approx 1.2`$%, due to the statistical error in the fitting parameters (see references ). A major difference between the present result, in which we simultaneously fit the cosmic ray and accelerator data, and our earlier result, in which only accelerator data are used, is a significant reduction (about a factor of 2.5) in the errors of $`\sigma _{pp}`$ at $`\sqrt{s}=30`$ TeV.
In Fig. 3 we have plotted the values of $`\sigma _{pp}`$ vs. $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ that are deduced from the intersections of our $`B`$-$`\sigma _{pp}`$ curve with the $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ curves of Fig. 1. Figure 3 allows the conversion of measured $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ cross sections to $`\sigma _{pp}`$ total cross sections. The percentage error in $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ is $`\approx 0.8`$% near $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}=450`$ mb, due to the errors in $`\sigma _{pp}`$ and $`B`$ resulting from the errors in the fitting parameters. Again, the global fit gives an error a factor of about 2.5 smaller than our earlier result, a distinct improvement.
When we confront our predictions of the p-air cross sections ($`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$) as a function of energy with published cross section measurements of the Fly’s Eye and AGASSA groups, we find that the predictions systematically are about one standard deviation below the published cosmic ray values. It is at this point important to recall Eq. (1) and remind ourselves that the measured experimental quantity is $`\mathrm{\Lambda }_m`$ and not $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$. We emphasize that the extraction of $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ from the measurement of $`\mathrm{\Lambda }_m`$ requires knowledge of the parameter $`k`$. The measured depth $`X_{\mathrm{max}}`$ at which a shower reaches maximum development in the atmosphere, which is the basis of the cross section measurement in Ref. , is a combined measure of the depth of the first interaction, which is determined by the inelastic cross section, and of the subsequent shower development, which has to be corrected for. $`X_{\mathrm{max}}`$ increases logarithmically with energy with elongation rate ($`\mathrm{\Delta }X_{\mathrm{max}}`$ per decade of Lab energy) of 50–60 g/cm<sup>2</sup> in calculations with QCD-inspired hadronic interaction models. The position of $`X_{\mathrm{max}}`$ directly affects the rate of shower attenuation with atmospheric depth, which is the alternative procedure for extracting $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$. The rate of shower development and its fluctuations are the origin of the deviation of $`k`$ from unity in Eq. (1). Its predicted values range from 1.5 for a model where the inclusive cross section exhibits Feynman scaling, to 1.1 for models with large scaling violations . The comparison between prediction and experiment is further confused by the fact that the AGASA and Fly’s Eye experiments used different values of $`k`$ in the analysis of their data, i.e., AGASA used $`k=1.5`$ and Fly’s Eye used $`k=1.6`$.
We therefore decided to let $`k`$ be a free parameter and to make a global fit to the accelerator and cosmic ray data, as emphasized earlier. This neglects the possibility that $`k`$ may show a weak energy dependence over the range measured. Recently, Pryke has made Monte Carlo model simulations that indicate that $`k`$ is compatible with being energy-independent. Using an energy-independent $`k`$, we find that $`k=1.349\pm 0.045`$, where the error in $`k`$ is the statistical error of the global fit. By combining the results of Fig. 2 (a) and Fig. 3, we can predict the variation of $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ with energy, $`\sqrt{s}`$. In Fig. 4 we have rescaled the published high energy data for $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ (using the common value of $`k=1.349`$), and plotted the revised data against our prediction of $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ vs. $`\sqrt{s}`$.
The plot of $`\sigma _{pp}`$ vs. $`\sqrt{s}`$, including the rescaled cosmic ray data, is shown in Fig. 5. Clearly, we have an excellent fit, with good agreement between AGASA and Fly’s Eye. In order to extract the cross sections’ energy dependence from the cosmic ray data, the experimenters of course assigned energy values to their cross sections. Since the cosmic ray spectra vary so rapidly with energy, we must allow for systematic errors in $`k`$ due to possible energy misassignments. At the quoted experimental energy resolutions, $`\mathrm{\Delta }\mathrm{Log}_{10}(E_{\mathrm{lab}}(\mathrm{eV}))=0.12`$ for AGASA and $`\mathrm{\Delta }\mathrm{Log}_{10}(E_{\mathrm{lab}}(\mathrm{eV}))=0.4`$ for Fly’s Eye, where $`E_{\mathrm{lab}}`$ is in electron volts, we find from the curve in Fig. 4 that $`\mathrm{\Delta }k/k=0.0084`$ for AGASA and $`\mathrm{\Delta }k/k=0.0279`$ for Fly’s Eye. We estimate conservatively that experimental energy resolution introduces a systematic error in $`k`$ such that $`\mathrm{\Delta }k_{\mathrm{systematic}}=\sqrt{(\mathrm{\Delta }k_{\mathrm{AGASA}}^2+\mathrm{\Delta }k_{\mathrm{FLYSEYE}}^2)/2}=0.028`$. Thus, we write our final result as $`k=1.349\pm 0.045\pm 0.028`$, where the first error is statistical and the last error is systematic.
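The quoted systematic error is simply the quadrature average of the two resolution-induced contributions; a quick check of the arithmetic:

```python
from math import sqrt

k = 1.349
dk_agasa   = 0.0084 * k    # AGASA energy-resolution contribution to Delta k
dk_flyseye = 0.0279 * k    # Fly's Eye contribution

dk_systematic = sqrt((dk_agasa**2 + dk_flyseye**2) / 2)
print(f"{dk_systematic:.3f}")          # -> 0.028
```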
Recently, Pryke has published a comparative study of high statistics simulated air showers for proton primaries, using four combinations of the MOCCA and CORSIKA program frameworks, and SIBYLL and QGSjet high energy hadronic interaction models. He finds $`k=1.30\pm 0.04`$ and $`k=1.32\pm 0.03`$ for the CORSIKA-QGSjet and MOCCA-Internal models, respectively, which are in excellent agreement with our measured result, $`k=1.349\pm 0.045\pm 0.028`$.
Further, Pryke obtains $`k=1.15\pm 0.03`$ and $`k=1.16\pm 0.03`$ for the CORSIKA-SIBYLL and MOCCA-SIBYLL models, respectively, whereas the SIBYLL group finds $`k=1.2`$, which is not very different from the Pryke value. However, the SIBYLL-based models, with $`k=`$1.15–1.20, are significantly different from our measurement of $`k=1.349\pm 0.045\pm 0.028`$. At first glance, this appears somewhat strange, since our model for forward scattering amplitudes and SIBYLL share the same underlying physics. The increase of the total cross section with energy to a black disk of soft partons is the shadow of increased particle production which is modeled in SIBYLL by the production of (mini)-jets in QCD. The difference between the $`k`$ values of 1.15–1.20 and 1.349 results from the very rapid rise of the $`pp`$ cross section in SIBYLL at the highest energies. This is an artifact of the fixed cutoff in transverse momentum used to compute the mini-jet production cross section, and is not a natural consequence of the physics in the model. There are ways to remedy this.
In conclusion, the overall agreement between the accelerator and the cosmic ray $`pp`$ cross sections with our QCD-inspired fit, as shown in Fig. 5, is striking. We find that the accelerator and cosmic ray $`pp`$ cross sections are readily reconcilable using a value of $`k=1.349\pm 0.045\pm 0.028`$, which is both model independent and energy independent—this determination of $`k`$ severely constrains any model of high energy hadronic interactions. We predict high energy $`\sigma _{pp}`$ and $`\sigma _{p\mathrm{air}}^{\mathrm{inel}}`$ cross sections that are accurate to $`\approx `$ 1.2% and 0.8%, respectively, at $`\sqrt{s}=30`$ TeV.
At the LHC ($`\sqrt{s}=14`$ TeV), we predict $`\sigma _{\mathrm{tot}}=107.9\pm 1.2`$ mb for the total cross section, $`B=19.59\pm 0.11`$ (GeV/c)<sup>-2</sup> for the nuclear slope and $`\rho =0.117\pm 0.001`$, where the quoted errors are due to the statistical errors of the fitting parameters.
In the near term, we look forward to the possibility of repeating this analysis with the higher statistics of the HiRes cosmic ray experiment that is currently in progress and the Auger Observatory.
# Comment on “Viscous cosmology in the Kasner metric”.
## Abstract
Abstract: We show in this comment that in an anisotropic Bianchi type I model of the Kasner form, it is not possible to describe the growth of entropy, if we want to keep the thermodynamics together with the dominant energy conditions. This consequence disagrees with the results obtained by Brevik and Pettersen \[Phys. Rev. D 56, 3322 (1997)\].
PACS number(s): 98.80.Hw, 98.80.Bp
Brevik and Pettersen have studied the consequences that a Bianchi type I metric of the Kasner form
$`ds^2=-dt^2+t^{2p_1}dx^2+t^{2p_2}dy^2+t^{2p_3}dz^2`$ (1)
has for the equation of state of the cosmic fluid, characterized by a shear viscosity $`\eta `$ and a bulk viscosity $`\xi `$. In their work, they concluded that for a viscous fluid, with $`\eta \ne 0`$ and $`\xi \ne 0`$, and from the Einstein equations, the requirement that the three Kasner parameters $`p_i`$ ($`i=1,2,3`$) be constant implies that $`\eta \propto 1/t`$ and $`\xi \propto 1/t`$.
From their equation (29) one obtains an explicit expression for the shear viscosity, given by
$`\eta ={\displaystyle \frac{1}{2\kappa t}}(1-S),`$ (2)
where $`\kappa =8\pi G`$ and $`S={\displaystyle \sum _{i=1}^{3}}p_i`$. It can be shown from thermodynamics that we should impose the condition $`\eta \ge 0`$ . Therefore, we see from expression (2) that we must require that
$`S-1\le 0.`$ (3)
On the other hand, the entropy production, in an anisotropic Kasner type universe, becomes
$`\dot{\sigma }\simeq {\displaystyle \frac{2S^2}{nk_BTt^2}}\eta A,`$ (4)
where $`A=\frac{1}{3}{\displaystyle \sum _{i=1}^{3}}(1-H_i/H)^2=3Q/S^2-1\ge 0`$, $`Q={\displaystyle \sum _{i=1}^{3}}p_i^2`$, $`n`$ is the baryon number density, $`k_B`$ is the Boltzmann constant and $`T`$ the temperature. Here $`H_i=p_i/t`$ are the directional Hubble parameters of the metric (1) and $`H=S/(3t)`$ is their average.
In expression (4) we have restricted ourselves to the case in which the shear viscosity $`\eta `$ is vastly greater than the bulk viscosity $`\xi `$, as was considered in Ref. .
Thus, following the result obtained by Brevik and Pettersen, we conclude that in an anisotropic Kasner type model the parameters entering the metric have to satisfy the bound $`S\le 1`$ in order to deal with an appropriate physical model.
On the other hand, if we require the model to satisfy the dominant energy conditions, specified by $`-\rho \le P_j\le \rho `$ , where $`\rho `$ is the energy density and $`P_j`$ (with $`j=x,y,z`$) are the effective pressures along the corresponding coordinate axes, we can show that the shear viscosity $`\eta `$ in this sort of model necessarily becomes negative, since these dominant energy conditions imply that $`S\ge 1`$, and not $`S\le 1`$, as was specified by Brevik and Pettersen.
To see this, let us write the dominant energy condition explicitly in terms of the parameters that enter the metric (1), i.e. $`p_1,p_2`$ and $`p_3`$.
The Einstein field equations can be written in comoving coordinates as (with $`\kappa =8\pi G=1`$)
$$\frac{\dot{a}}{a}\frac{\dot{b}}{b}+\frac{\dot{a}}{a}\frac{\dot{c}}{c}+\frac{\dot{b}}{b}\frac{\dot{c}}{c}=\rho ,$$
(5)
$$\frac{\ddot{b}}{b}+\frac{\ddot{c}}{c}+\frac{\dot{b}}{b}\frac{\dot{c}}{c}=-P_x,$$
(6)
$$\frac{\ddot{a}}{a}+\frac{\ddot{c}}{c}+\frac{\dot{a}}{a}\frac{\dot{c}}{c}=-P_y$$
(7)
and
$$\frac{\ddot{a}}{a}+\frac{\ddot{b}}{b}+\frac{\dot{a}}{a}\frac{\dot{b}}{b}=-P_z,$$
(8)
where $`a`$, $`b`$ and $`c`$ are the anisotropic expansion factors. From the metric (1) they are given by $`a=t^{p_1}`$, $`b=t^{p_2}`$ and $`c=t^{p_3}`$. Thus, Einstein's field equations (5)-(8) reduce to
$$\rho =\frac{p_1p_2+p_1p_3+p_2p_3}{t^2},$$
(9)
$$P_x=-\frac{p_2^2+p_3^2-p_2-p_3+p_2p_3}{t^2},$$
(10)
$$P_y=-\frac{p_1^2+p_3^2-p_1-p_3+p_1p_3}{t^2}$$
(11)
and
$$P_z=-\frac{p_1^2+p_2^2-p_1-p_2+p_1p_2}{t^2}.$$
(12)
Here, as was mentioned above, $`P_j`$, with $`j=x,y,z`$, represent the effective pressures along the corresponding coordinate axes. Note that both $`\rho `$ and $`P_j`$ (with $`j=x,y,z`$) scale as $`t^{-2}`$. Thus, the dominant energy conditions will give some specific relations between the Kasner parameters $`p_i`$.
The conditions $`P_j\le \rho `$, with $`j=x,y,z`$, yield three inequations given by
$$(S-p_1)(S-1)\ge 0,$$
(13)
$$(S-p_2)(S-1)\ge 0$$
(14)
and
$$(S-p_3)(S-1)\ge 0,$$
(15)
which, after adding them, reduce to just one inequation given by
$`2S(S-1)\ge 0.`$ (16)
In a similar way, from $`P_j\ge -\rho `$ we get the inequations
$`S(1+p_1)-p_1\ge Q,`$ (17)
$`S(1+p_2)-p_2\ge Q,`$ (18)
and
$`S(1+p_3)-p_3\ge Q.`$ (19)
After adding them we get
$`S\ge {\displaystyle \frac{AS^2}{2}}.`$ (20)
From expression (20), we see that $`S\ge 0`$, since by definition $`A\ge 0`$. With this condition on $`S`$, we obtain from expression (16) that necessarily $`S`$ should be greater than or equal to one (apart from the trivial case $`S=0`$). This means that a Bianchi type I metric of the Kasner form always gives rise to a negative shear viscosity, where expression (2) applies. From this result, and from expression (4), we observe that we end up with the unfavored situation in which $`\dot{\sigma }\le 0`$, meaning that the entropy in this sort of universe decreases instead of increasing.
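The algebra connecting Eqs. (9)–(12) to the inequalities (13) and (17) can be verified symbolically; the following illustrative sympy check was written for this comment and is not part of the original derivation:

```python
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
S = p1 + p2 + p3
Q = p1**2 + p2**2 + p3**2

# rho and P_x from Eqs. (9) and (10), with the common 1/t^2 factor dropped.
rho = p1*p2 + p1*p3 + p2*p3
P_x = -(p2**2 + p3**2 - p2 - p3 + p2*p3)

# P_x <= rho  is equivalent to  (S - p1)(S - 1) >= 0, Eq. (13):
print(sp.expand(rho - P_x - (S - p1)*(S - 1)))          # -> 0

# P_x >= -rho  is equivalent to  S(1 + p1) - p1 >= Q, Eq. (17):
print(sp.expand(rho + P_x - (S*(1 + p1) - p1 - Q)))     # -> 0
```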
In conclusion, we have shown in this comment that it is not possible to describe the growth of entropy in the universe for a viscous anisotropic Bianchi type I metric of the Kasner form, if we want to keep the thermodynamic conditions together with the dominant energy conditions.
MC and SdC were supported by COMISION NACIONAL DE CIENCIA Y TECNOLOGIA through Grants FONDECYT N<sup>0</sup> 1990601 and N<sup>0</sup> 1971157. Also MC was supported by Dirección de Promoción y Desarrollo de la Universidad del Bío-Bío, and SdC was supported by UCV-DGIP 123.744/99.
# Angular Distributions in Higgs Decays
## Acknowledgement
This work was supported by the Robert A. Welch Foundation.
# Stable ⁸⁵"Rb" Bose-Einstein Condensates with Widely Tunable Interactions
## Abstract
Bose-Einstein condensation has been achieved in a magnetically trapped sample of $`{}_{}{}^{85}\text{Rb}`$ atoms. Long-lived condensates of up to $`10^4`$ atoms have been produced by using a magnetic-field-induced Feshbach resonance to reverse the sign of the scattering length. This system provides many unique opportunities for the study of condensate physics. The variation of the scattering length near the resonance has been used to magnetically tune the condensate self-interaction energy over a very wide range. This range extended from very strong repulsive to large attractive self-interactions. When the interactions were switched from repulsive to attractive, the condensate shrank to below our resolution limit, and after $`5\text{ms}`$ emitted a burst of high-energy atoms.
Atom-atom interactions have a profound influence on most of the properties of Bose-Einstein condensation (BEC) in dilute alkali gases. These interactions are well described in a mean-field model by a self-interaction energy that depends only on the density of the condensate $`(n)`$ and the s-wave scattering length $`(a)`$ . Strong repulsive interactions produce stable condensates with a size and shape determined by the self-interaction energy. In contrast, attractive interactions ($`a<0`$) lead to a condensate state where the number of atoms is limited to a small critical value determined by the magnitude of $`a`$ . The scattering length also determines the formation rate, the spectrum of collective excitations, the evolution of the condensate phase, the coupling with the noncondensed atoms, and other important properties.
In the vast majority of condensate experiments the scattering length has been fixed at the outset by the choice of atom. However, it was proposed that the scattering length could be controlled by utilizing the strong variation expected in the vicinity of a magnetic-field-induced Feshbach resonance in collisions between cold $`(\mu \text{K})`$ alkali atoms . Recent experiments on cold $`{}_{}{}^{85}\text{Rb}`$ and Cs atoms and Na condensates have demonstrated the variation of the scattering length via this approach . However, extraordinarily high inelastic losses in the Na condensates were found to severely limit the extent to which the scattering length could be varied and precluded an investigation of the interesting negative scattering length regime . These findings prompted the subsequent proposal of several exotic coherent loss processes that remain untested in other alkali species .
Here we report the successful use of a Feshbach resonance to readily vary the self-interaction of long-lived condensates over a large range. In $`{}_{}{}^{85}\text{Rb}`$ there exists a Feshbach resonance in collisions between two atoms in the $`\text{F}=2,\text{m}_\text{f}=-2`$ hyperfine ground state at a magnetic field $`B\approx 155`$ Gauss (G) . Near this resonance the scattering length varies dispersively as a function of magnetic field and, in principle, can have any value between $`-\infty `$ and $`+\infty `$ (see inset in Fig. 1). This has allowed us to reach novel regimes of condensate physics. These include producing very large repulsive interactions $`(n_{pk}|a|^3\approx 10^{-2})`$, where effects beyond the mean-field approximation should be readily observable. We can also make transitions between repulsive and attractive interactions (or vice-versa). This now makes it possible to study condensates in the negative scattering length regime, including the anticipated “collapse” of the condensate , with a level of control that has not been possible in other experiments . In fact, this ability to change the sign of the scattering length is essential for the existence of our $`{}_{}{}^{85}\text{Rb}`$ condensate. Away from the resonance the large negative scattering length ($`-400a_0`$ ) limits the maximum number of atoms in a condensate to $`\sim 80`$ . However, we have produced condensates with up to $`10^4`$ atoms by operating in a region of the Feshbach resonance where $`a`$ is positive.
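For orientation, the dispersive variation sketched in the inset of Fig. 1 follows the standard Feshbach form $`a(B)=a_{bg}[1-\mathrm{\Delta }/(B-B_0)]`$. In the illustrative Python sketch below the parameters (background scattering length $`-400a_0`$, resonance near 155 G, width chosen to put the zero crossing at 166.8 G) are read off the numbers quoted in this paper rather than fitted:

```python
a_bg  = -400.0           # background scattering length (units of a_0); from the text
B_0   = 155.0            # approximate resonance position (G); from the text
Delta = 166.8 - B_0      # width chosen so that a = 0 at the 166.8 G zero crossing

def a_of_B(B):
    # Standard dispersive Feshbach form of the scattering length.
    return a_bg * (1.0 - Delta / (B - B_0))

for B in (160.3, 162.3, 164.3, 166.8, 170.0):
    print(f"B = {B:5.1f} G   a = {a_of_B(B):8.1f} a_0")

# a > 0 between the resonance peak and the zero crossing, where stable
# condensates can be formed; a < 0 above 166.8 G.
```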
The experimental apparatus was similar to the double magneto-optical trap (MOT) system used in our earlier work . Atoms collected in a vapor cell MOT were transferred to a second MOT in a low-pressure chamber. Once sufficient atoms had accumulated in the low-pressure MOT, both MOTs were turned off and the atoms were loaded into a purely magnetic, Ioffe-Pritchard “baseball” trap. During the loading sequence the atom cloud was compressed, cooled and optically pumped, resulting in a typical trapped sample of about $`3\times 10^8`$ $`{}_{}{}^{85}\text{Rb}`$ atoms in the $`\text{F}=2,\text{m}_\text{f}=-2`$ state at a temperature of $`T=45\mu \text{K}`$. The lifetime of atoms in the magnetic trap due to collisions with background atoms was $`450\text{s}`$.
Forced radio-frequency evaporation was employed to cool the sample of atoms. Unfortunately, $`{}_{}{}^{85}\text{Rb}`$ is plagued with pitfalls for the unwary evaporator familiar only with $`{}_{}{}^{87}\text{Rb}`$ or Na. In contrast to those atoms, the elastic collision cross section for $`{}_{}{}^{85}\text{Rb}`$ exhibits strong temperature dependence due to a zero in the s-wave scattering cross section at a collision energy of $`E/k_B\approx 350\mu \text{K}`$ . This decrease in the elastic collision cross section with temperature means that the standard practice of adiabatic compression to increase the initial elastic collision rate does not work. $`{}_{}{}^{85}\text{Rb}`$ also suffers from unusually high inelastic collision rates. We recently investigated these losses and observed a mixture of two and three-body processes which varied with $`B`$ . The overall inelastic collision rate displayed several orders of magnitude variation across the Feshbach resonance, with a dependence on $`B`$ similar to that of the elastic collision rate. However, the inelastic rate increased more rapidly than the elastic rate towards the peak of the Feshbach resonance, and was found to be significantly lower in the high field wing of the resonance than on the low field side. This knowledge of the loss rates, together with the known field dependence of the elastic cross section , has enabled us to successfully devise an evaporation path to reach BEC. This begins with evaporative cooling at a field $`(B=250\text{G})`$ well above the Feshbach resonance. To maintain a relatively low density, and thereby minimize the inelastic losses, a relatively weak trap is used. The low initial elastic collision rate means that about $`120\text{s}`$ are needed to reach $`T\approx 2\mu \text{K}`$, putting a stringent requirement on the trap lifetime. As the atoms cool, the elastic rate increases and it becomes advantageous to trade some of this increase for a reduced inelastic collision rate by moving to $`B=162.3\text{G}`$ where the magnitude of the scattering length is decreased. The remainder of the evaporation is performed at this field (with a radial (axial) trap frequency of $`17.5\text{Hz}`$ $`(6.8\text{Hz})`$). In contrast to field values away from the Feshbach resonance, the scattering length is positive at this field and stable condensates may therefore be produced.
The density distribution of the trapped atom cloud was probed using absorption imaging with a $`10\mu \text{s}`$ laser pulse $`1.6\text{ms}`$ after the rapid $`(0.2\text{ms})`$ turn-off of the magnetic trap. The shadow of the atom cloud was magnified by about a factor of 10 and imaged onto a CCD array to determine the spatial size and the number of atoms. The emergence of the BEC transition was observed at $`T\approx 15\text{nK}`$. Typically, we were able to produce “pure” condensates of up to $`10^4`$ atoms with peak number densities of $`n_{pk}\approx 1\times 10^{12}\text{cm}^{-3}`$. The lifetime of the condensate at $`B=162.3\text{G}`$ was about $`10\text{s}`$ . This lifetime is consistent with that expected from the inelastic losses we have measured in cold thermal clouds. It is notable that our evaporation trajectory suffered a near-catastrophic decline prior to the observation of the BEC transition. We approached the required BEC phase space density at 100 nK with about $`10^6`$ atoms, but then lost a factor of about $`50`$ in the atom number before the characteristic two-component density distribution was visible. Over this part of the trajectory, the cooling efficiency became low (and the phase space density remained approximately constant). This is because the mean free path is comparable to the cloud size and the high density results in high losses. This situation improves when the number of atoms becomes sufficiently low, and we are then able to obtain a significant fraction of the atoms in a condensate. The number of atoms in the condensate after we reach this favorable low-density regime is obviously a delicate balance between the elastic (cooling) and inelastic (loss) collision processes in the cloud. Both of these are strongly field dependent near the Feshbach resonance. Although we are able to decrease the loss rate by moving to higher fields, the ratio of elastic to inelastic collisions actually decreases and it becomes harder to form condensates. For example, at $`B=164.3\text{G}`$ we can only produce condensates of a few thousand atoms. Conversely, moving to lower fields does not help because we reach the favorable low-density cooling regime at smaller numbers of atoms. This restriction together with the larger loss rate means that at $`B=160.3\text{G}`$, for example, we are unable to form condensates.
One of the features of the high inelastic loss rates reported in the Na experiments was an anomalously high decay rate when the condensate was swept rapidly through the Feshbach resonance . In light of this work, it was essential to determine to what extent the $`{}_{}{}^{85}\text{Rb}`$ condensate was perturbed in being swept across the Feshbach peak during the trap turn-off. These measurements also provide an additional test of coherent loss mechanisms such as those in references . We applied a linear ramp to the current in the baseball coil to sweep the magnetic field experienced by the atoms from $`B=162.3\text{G}`$ across the Feshbach peak to $`B\approx 132\text{G}`$ and then immediately turned off the trap and imaged the atom cloud. From the images we determined the fraction of condensate atoms lost as a function of the inverse ramp speed (Fig. 2). The loss for the fastest ramp, which corresponds to the direct turn-off of the magnetic trap, is less than $`9\%`$. This was determined in a separate experiment where the condensate was imaged directly in the magnetic trap both before and after the ramp. For comparison, the experiment was repeated using a cloud of thermal atoms much hotter than the BEC transition temperature. The results for the thermal atoms are consistent with the known inelastic loss rates in the vicinity of the Feshbach resonance . The strong and poorly characterized temperature dependence of these known loss rates near the Feshbach peak makes it difficult to determine what fraction of the observed condensate loss can be attributed to the usual inelastic loss processes and we cannot, therefore, rule out a coherent aspect to the loss process. There have been several models of coherent loss processes put forward to explain the corresponding sodium results. However, these calculations are based on the Timmermans theory of coupled atomic and molecular Gross-Pitaevskii equations which is unlikely to be applicable to the conditions of the present experiment .
We also changed the self-interaction energy by varying the magnetic field and observed the resulting change in the condensate shape. By applying a linear ramp to the magnetic field we have varied the magnitude of $`a`$ in the condensate by almost three orders of magnitude. The duration of this ramp was sufficiently long $`(500\text{ms})`$ to ensure that the condensate responded adiabatically. Fig. 3 shows a series of condensate images for various magnetic fields. They illustrate how we are able to easily change $`a`$ over a very wide range of positive values. Moving towards the Feshbach peak the condensate size increases due to the increased self-interaction energy. The density distribution approaches the parabolic distribution with an aspect ratio of $`\lambda _{TF}=\omega _z/\omega _r`$ expected in the TF regime . Moving in the opposite direction the cloud size becomes smaller than our 7 micron resolution limit shortly before we reach the noninteracting limit where the condensate density distribution is a Gaussian whose dimensions are set by the harmonic oscillator lengths ($`l_i=(\hbar /m\omega _i)^{1/2}`$ where $`i=r,z`$) . We took condensate images similar to those shown at many field values between 156 and 166 G. From the images the full widths at half-maximum (FWHM) of the column density distributions were determined. The scattering length was then derived by assuming a TF column density distribution with the same FWHM (which scales as $`(Na)^{1/5}`$). In Fig. 1 we plot the scattering length derived in this manner versus the magnetic field. It shows that these values agree with the predicted field dependence of the Feshbach resonance.
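The inversion from measured widths to scattering lengths rests on the standard Thomas-Fermi relations; a minimal sketch of such an estimate (trap frequencies from the text; the atom number and scattering length are illustrative inputs, and the constants are CODATA values):

```python
import math

hbar = 1.054571817e-34                # J s
m    = 84.911789 * 1.66053907e-27     # 85Rb atomic mass (kg)
a0   = 5.29177e-11                    # Bohr radius (m)

omega_r = 2 * math.pi * 17.5          # radial trap frequency (s^-1), from the text
omega_z = 2 * math.pi * 6.8           # axial trap frequency (s^-1), from the text
N, a = 1.0e4, 250 * a0                # illustrative atom number and scattering length

wbar = (omega_r**2 * omega_z)**(1 / 3)        # geometric-mean trap frequency
abar = math.sqrt(hbar / (m * wbar))           # mean oscillator length
mu   = 0.5 * hbar * wbar * (15 * N * a / abar)**0.4   # TF chemical potential
R_r  = math.sqrt(2 * mu / m) / omega_r        # TF radial radius, ~ (N a)^{1/5}

print(f"R_r = {R_r * 1e6:.1f} microns")       # -> ~8 microns for these inputs
```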
The ability to tune the atom-atom interactions in a condensate presents several exciting avenues for future research. One is to explore the breakdown of the dilute-gas approximation near the Feshbach peak. The lifetime of the condensate decreases with larger $`a`$, but for a lifetime of about 100 ms, which is sufficient for many experiments, we have created static condensates with $`n_{pk}|a|^3\sim 10^{-2}`$. For such values, effects beyond the mean field approximation, such as shifts in the frequencies of the collective excitations, are about 10%.
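As a rough consistency check on the quoted 10% figure (not a calculation taken from the text), the standard first-order beyond-mean-field result for the shift of the collective-mode frequencies, with the Pitaevskii-Stringari coefficient assumed here, gives

```latex
\frac{\delta\omega}{\omega} \simeq \frac{63\sqrt{\pi}}{128}\,\sqrt{n_{pk}|a|^{3}}
\approx 0.87\sqrt{10^{-2}} \approx 9\% .
```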
A second avenue is the behavior of the condensate when the scattering length becomes negative. When we increased the magnetic field beyond $`B=166.8\text{G}`$, where $`a`$ was expected to change sign, a sudden departure from the smooth behavior in Figs. 1 and 3 was observed. As $`a`$ was decreased, the condensate width decreased; then, about 5 ms after the change in sign, there was a sudden explosion that ejected a large fraction of the condensate. This left a small observable remnant condensate surrounded by a “hot” cloud at a temperature on the order of 100 nK. These preliminary results on switching from repulsive to attractive interactions suggest a violent, but highly reproducible, destruction of the condensate. The ability to control the precise moment of the onset of the $`a<0`$ instability is a distinct advantage over existing methods for studying this regime, which rely on analyzing ensemble averages of “post-collapse” condensates. The dynamical response of the condensate to a sudden change in the sign of the interactions can now be investigated in a controlled manner, probing the rich physics of this dramatic condensate “collapse” process.
We are pleased to acknowledge useful discussions with Murray Holland, Jim Burke, Josh Milstein and Marco Prevedelli. This research has been supported by the NSF and ONR. One of us (S. L. Cornish) acknowledges the support of a Lindemann Fellowship.
# Harmonically dancing space-time nodes: quantitatively deriving relativity, mass, and gravitation
## Figure Captions
Figure 1: The space-time lattices of stationary ($`\mathrm{\Sigma}`$) and moving ($`\mathrm{\Sigma}^{\prime}`$) observers are illustrated here for the case of distance measurements. The ‘tickmarks’ of the ruler of $`\mathrm{\Sigma}`$ are marked as the topmost set of black dots. The rod to be measured is the short bar immediately beneath, and is at rest with respect to $`\mathrm{\Sigma}`$. Observer $`\mathrm{\Sigma}^{\prime}`$ measures the length of this rod while in motion, by simultaneously acquiring data on the positions of the front and rear ends of the rod. It is postulated that $`\mathrm{\Sigma}^{\prime}`$ is effectively using a moving set of ‘tickmarks’; if microscopically these are connected by oscillators which vibrate while in motion, the ‘tickmarks’ widen as depicted in the lower half of the diagram. Consequently $`\mathrm{\Sigma}^{\prime}`$ obtains a smaller value for the length of the rod.
Figure 2: The space-time lattices of inertial observers $`\mathrm{\Sigma}`$ (top) and $`\mathrm{\Sigma}^{\prime}`$ (bottom), the latter ‘moving’ at velocity $`𝐯`$ with respect to the former, who is regarded as ‘stationary’. If $`\mathrm{\Sigma}^{\prime}`$ measures $`\mathrm{\Delta}x`$ between two ‘stationary’ events at the same time, or $`\mathrm{\Delta}t`$ at the same position, in each case the result is less than that obtained by $`\mathrm{\Sigma}`$. This means that for ‘orthogonal’ measurements of space and time performed by $`\mathrm{\Sigma}^{\prime}`$, the ‘tickmarks’ of $`\mathrm{\Sigma}`$ are more widely separated, as indicated by the lower grid.
Figure 3: Illustrating the origin of mass and gravitation. Top: a region $`\mathrm{\Delta}\mathrm{\Lambda}`$ of $`\mathrm{\Lambda}`$ (the rest-frame lattice of observer $`\mathrm{\Sigma}`$) has higher than ambient temperature. Mass is proportional to the incumbent extra energy. The physical boundary of the mass (shown here in the space dimension only) is drawn as a rectangular box, inside of which all the energy surplus resides; as a result the nodes are much more widely spaced than outside. Middle: The same region as it appears in the lattice of observer $`\mathrm{\Sigma}^{\prime}`$ who ‘moves’ with respect to $`\mathrm{\Sigma}`$ at velocity $`𝐯`$. Separation between any pair of nodes is now increased by the factor $`\gamma `$, meaning fewer oscillators within $`\mathrm{\Delta}\mathrm{\Sigma}`$, but more energy per oscillator. The net increase in enclosed energy (hence mass) is $`\gamma `$ (see text). Bottom: Energy is conducted outwards from $`\mathrm{\Delta}\mathrm{\Lambda}`$ to the ambient lattice by phonons, which causes the separation between nodes to gradually reach the natural minimum at asymptotically large distances. Note that the inner boundary from which the energy transport commences is an abstract quantity which need not be the physical size of the mass; in fact, the former is usually within the latter.
# The CNOC2 Field Galaxy Redshift Survey I: The Survey and the Catalog for the Patch CNOC 0223+00
## 1 Introduction
Fundamental to our understanding of the Universe is the formation and evolution of structures, from galaxies to clusters of galaxies to large-scale structures such as sheets, filaments, and voids. Theoretical advances, often in the form of increasingly larger and more sophisticated N-body simulations, have laid much of the groundwork in interpreting the development and evolution of dark matter clustering (e.g., Davis et al. 1985; Colin, Carlberg, & Couchman 1997; and Jenkins et al. 1998). The modeling of the connection between the clustering of galaxies (which are the most easily observed component) and dark matter (which provides the gravitational field), however, is complex and much less well understood. In essence, deriving such a connection requires a full understanding of galaxy formation and evolution.
On the observational side, progressively larger redshift surveys of galaxies have provided relatively robust measurements of the present-epoch galaxy correlation function or power spectrum (e.g., the CfA survey \[Davis & Peebles 1983, Vogeley et al. 1992\]; and the LCRS \[Lin et al. 1996\]). Although there have been a number of investigations attempting to measure the evolution of galaxy clustering out to $`z\sim 0.5`$ to 1.0, they were all based on small surveys which were not specifically designed for this purpose (e.g., LeFèvre et al. 1996, Shepherd et al. 1997, Carlberg et al. 1997). These surveys cover very small areas, with sample sizes of hundreds of objects. Furthermore, the substantial galaxy population evolution over this redshift range must be taken into account to ensure that similar samples of galaxies at different redshifts are being used to measure the clustering evolution.
The second Canadian Network for Observational Cosmology (CNOC2) Redshift Survey is the first large redshift survey of faint field galaxies carried out with the explicit goal of investigating the evolution of clustering of galaxies. Such an investigation requires a sample with a large number of galaxies spanning a significant redshift range and covering a sufficiently large area on the sky. The redshift range over which the evolution is to be measured is 0.1 to 0.6, chosen to maximize the efficiency of a 4m class telescope. The survey is also designed specifically to provide a large database of both spectroscopy and multicolor photometry data for the study of the evolution of galaxy populations over the same epochs. This is particularly important as such information will allow us to attempt to disentangle any inter-dependence between the correlation function and galaxy types and evolution.
The current knowledge of field galaxy evolution at $`z\sim 0.5`$ is based mostly on the measurement of the luminosity function (LF) (e.g., Lilly et al. 1995; Ellis et al. 1996; Lin et al. 1997), with the primary conclusion being that the most rapid evolution is seen in blue, star-forming galaxies. A much larger redshift sample of several thousand galaxies with multi-color photometry will vastly improve on the current results. The multicolor data can provide sufficient information to allow a classification of the spectral energy distributions of the sample galaxies. This, along with a large sample size, will make possible a much more detailed analysis of the evolution of the LF as a function of galaxy population, perhaps enabling a definitive discrimination between luminosity and density evolution.
In this paper, we describe the general strategy and design of the CNOC2 survey, the data reduction methods, and the creation of weight functions. Also presented is the first data catalog from the survey. In Section 2 we present the survey strategy, the field selections and the observations. Sections 3 and 4 describe the data reduction methods for the photometric and spectroscopic data, respectively. The issues of completeness and weights are discussed in Section 5. Finally, Section 6 presents the catalog for the patch CNOC 0223+00. First results on galaxy population evolution and clustering evolution are presented in papers by Lin et al. (1999) and Carlberg et al. (2000), respectively. Reports on close-pair merger evolution and the serendipitous active galactic nuclei sample from the survey are given in papers by Patton et al. (2000) and Hall et al. (2000). Additional papers on more detailed studies of galaxy evolution, the dependence of clustering evolution on galaxy types, and other related subjects are in preparation.
## 2 The Survey
The CNOC2 Field Galaxy Redshift Survey was conducted using the MOS arm of the MOS/SIS spectrograph (LeFèvre et al. 1994) at the 3.6m Canada-France-Hawaii Telescope (CFHT). The primary goal of the survey is to obtain a large, well-defined sample of galaxies with high quality spectroscopic and photometric data for the purpose of studying the evolution of galaxy clustering in the redshift range of $`0.1`$ to 0.6. To first order, such a goal requires a survey at these intermediate redshifts which is comparable to the local universe CfA redshift survey (Huchra et al. 1983, Geller & Huchra 1989) in the number of galaxies ($`\sim 10^4`$), the volume covered ($`\sim 10^6h^{-3}`$ Mpc<sup>3</sup>), and velocity accuracy ($`\sim 50`$ km s<sup>-1</sup>).
The redshift range for the survey is chosen to maximize the spectroscopic efficiency of a 4m class telescope and the MOS spectrograph. Based on the experience from the CNOC1 Cluster Redshift Survey (Yee, Ellingson, & Carlberg 1996; hereafter YEC), a relatively high success rate of $`\sim 85`$% can be obtained with a reasonable exposure time of the order of an hour for galaxies at $`R_c\sim 21.5`$. Such a sample would have an average galaxy density on the sky of $`\sim 145`$ galaxies per MOS field (of about 70 square arcminutes), and a peak at $`z\sim 0.35`$ in the redshift distribution, well-matched to the number of slits available in the MOS field and the redshift range.
The robust measurement of the spectral energy distributions (SED) of the sample galaxies in the form of multi-color photometry is also an integral part of the survey. Furthermore, the availability of multicolor photometry will allow us to produce a photometric-redshift training set of unprecedented size, and also provide important consistency checks on the spectroscopic redshift measurements. Hence, a significant amount of telescope time is also devoted to imaging in the survey. In the following subsections, we describe the main features of the survey in detail.
### 2.1 Field Selections
There are several considerations in choosing the survey areas on the sky. To avoid being dominated by a small number of large structures, the survey covers four widely separated regions, called patches, on the sky. There are a number of advantages in splitting the sample area into 4 regions. First, this allows one to obtain a reasonable sampling of the clustering; this is particularly important because each of the patches still covers a relatively small area on the sky. Second, having 4 patches provides a rough estimate of the cosmic variance in the determination of the clustering and statistical properties of galaxies such as the luminosity function, luminosity density, and star formation rate. In the case of clustering, the different patches will also provide an indication as to whether there are significant clustering signals at scales larger than the area covered by a single patch. Finally, having 4 patches allows us to distribute them in Right Ascension to maximize observing efficiency.
For a sample of the order of $`10^4`$ galaxies over a volume of $`10^6h^{-3}`$ Mpc<sup>3</sup>, we need to cover a total of at least 1.5 square degrees. Hence, each patch should have an area of about 0.4 square degrees. One pointing of the MOS field, after considerations of spectral coverage on the CCD, provides a spectroscopically defined area of about $`9^{\prime}\times 7^{\prime}`$, with about 15<sup>′′</sup> of overlapping area with the adjacent fields. Thus, each patch is equivalent to about 20 MOS fields.
To obtain a sufficiently large length scale for determining the correlation function reliably, we also need each patch to extend about 10 to 20 $`h^{-1}`$ Mpc over our redshift range. These considerations motivated us to design the geometric shape of each patch to have a central block of about 0.5<sup>o</sup>$`\times `$0.5<sup>o</sup> with two orthogonal “legs” extending outward, as illustrated in Figure 1. Each patch, as designed, spans $`80^{\prime}`$ and $`63^{\prime}`$ in the North-South and East-West directions, respectively. We have named the fields in each patch as indicated in Figure 1 (along with the sequential field numbers), with the a fields representing the North arm, the b fields representing the East portion of the central block, and the c fields representing the West edge and arm. The somewhat peculiar naming of the b fields is due to the initial design of the patch having an inverted $`Y`$ shape, and the name b was used to designate the East arm. The patch design was altered after the first run in order to include a central block, so that the b arm has effectively “curled” into the middle to form the central block.
The properties of the patches are listed in Table 1. The four patches are selected based on several criteria. They are chosen to avoid bright stars ($`m\lesssim 12`$), low-redshift clusters (e.g., Abell clusters) and other known low-redshift bright galaxies, and known quasars and AGN. The patches have Galactic latitudes between 45<sup>o</sup> and 60<sup>o</sup>. The lower bound guards against excessive Galactic extinction ($`<0.15`$ mag in $`A_B`$). The extinction in the direction of each patch is obtained from the dust maps of Schlegel, Finkbeiner, & Davis (1998) and listed in Table 1 (see Lin et al. 1999 for details). The upper bound is chosen to ensure that there are a sufficient number of stars in each MOS field to provide proper star-galaxy classification. A significant number of stars are needed because of the variable point-spread function (PSF) over the MOS field (see Section 3.2).
The positions of the patches are chosen so that they are 5 to 7 hours apart in Right Ascension. This allows two patches to be accessible on the sky at any time of the year, and ensures that observations can be made within reasonable hour angles at any part of a given night. A total of 74 fields out of the intended 80 were observed for the survey, meeting 92.5% of the initial design goals.
### 2.2 Observations
We use the observational techniques developed for the CNOC1 Cluster Redshift Survey (YEC), with some additional improvements to increase the efficiency and in the sample selection method. The survey was carried out at the CFHT 3.6m telescope using the MOS imaging spectrograph. The survey was completed over a total of 53 nights in 7 runs from February 1995 to May 1998, with approximately 32 usable nights. Three different CCDs were used over the lifetime of the survey. Some pertinent properties of the CCDs are listed in Table 2 and a journal of the observations is presented in Table 3.
Three CCDs were used for several reasons. Initially, the ORBIT1 CCD, a high quantum efficiency (QE) and blue sensitive detector, was used for run no. 1. However, it died just before run no. 2, and the older LORAL3, which has lower QE and poor blue sensitivity, was pressed back into service for both runs no. 2 and 3. For runs no. 4 to 7, the new STIS2 CCD became available. This is a high QE and blue sensitive detector with very clean cosmetic characteristics, but with a larger pixel size. Although the STIS2 CCD is a 2048$`\times `$2048 detector, the total size is significantly larger than the available field of view of MOS, and hence only a portion of the CCD was used. We note that the majority of the data (78%) were obtained using the STIS2 CCD, as the first 3 runs were plagued by bad weather. The exposure times for both spectroscopic and direct images were adjusted to compensate for the different QEs of the CCDs.
The MOS spectrograph has a usable field of about $`10^{\prime}`$ diameter, with the corners being in poor focus. An area smaller than the total imaging area is defined as the spectroscopic field, which is the area used for the survey. This defined spectroscopic area is limited by the CCD detector’s extent in spectral coverage, in that slits placed on different parts of the chip must all produce a complete spectrum. The spectroscopic field size for each detector is listed in Table 2. The defined areas of adjacent fields are nominally overlapped by 10<sup>′′</sup> to 20<sup>′′</sup> to provide consistency checks on astrometry, photometry, and redshift determination. Note that the STIS2 CCD has a significantly larger defined area because of the larger physical CCD size allowing for additional areas for the spectra of objects at the edge of the imaging field.
As in the CNOC1 survey, we use a band-limiting filter to increase significantly the multiplexing efficiency of the survey. The shortened spectra allow for multi-tiering of spectra on the CCD image, increasing the number of slits per spectroscopic mask from the order of 30 to about 100. For the CNOC2 survey, we have designed a filter with band limits of $`\sim 4300`$Å to 6300Å. These limits were chosen with various compromises in mind. The range must be short enough to allow for significant multi-tiering, but long enough to be able to sample the key galaxy spectral features over a redshift range commanded by an apparent magnitude limit which is optimal for a 4m class telescope. Furthermore, the filter limits were chosen to coincide with the onset of the grism/CCD inefficiency in the blue, and with the first prominent atmospheric OH emission complex in the red. The transmission curve of the band-limiting filter is shown in Figure 2. The half-power limits for the curve are 4387Å and 6285Å with a peak transmission of $`\sim 0.8`$.
The band limits effectively define the redshift completeness boundaries of the survey. The most prominent spectral features in galaxies over these wavelengths are \[OII\]$`\lambda 3727`$ for late-type galaxies and the Ca II H and K features at $`\lambda \lambda `$3933,3969Å for early-type galaxies. For the H and K lines, the effective redshift range is about 0.12 to 0.55. Although the \[OII\] line disappears off the blue edge below $`z\sim 0.16`$, the effective low redshift boundary for emission-line galaxies is $`z\sim 0`$, since for most emission-line objects, detectable \[OIII\]$`\lambda \lambda `$4959,5007Å do not move off the red end of the spectrum until $`z\sim 0.25`$. Hence, the effective redshift range for the full sample is 0.12 to 0.55, whereas it is 0.0 to 0.68 for emission-line galaxies.
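These windows follow directly from the filter’s half-power limits. A minimal sketch of the calculation (rest wavelengths are standard values; the slightly tighter ranges quoted above presumably allow for the soft filter edges and line detectability):

```python
# Redshift window over which a set of rest-frame features stays within
# the band-limiting filter: (1+z)*lambda_rest must lie between the
# half-power points 4387 A and 6285 A quoted above.
BLUE, RED = 4387.0, 6285.0

features = {
    "[OII] 3727":       [3727.0],
    "CaII H and K":     [3933.7, 3968.5],   # both lines required
    "[OIII] 4959,5007": [4958.9, 5006.8],
}

for name, lams in features.items():
    zmin = max(BLUE / min(lams) - 1.0, 0.0)  # bluest line clears blue edge
    zmax = RED / max(lams) - 1.0             # reddest line clears red edge
    print("%-18s  %.2f < z < %.2f" % (name, zmin, zmax))
```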
The imaging observations are carried out using 5 filters: $`I_c`$, $`R_c`$, $`V`$, $`B`$, and $`U`$, with typical integration times for each CCD listed in Table 2. (For simplicity, the subscript denoting the Cousins $`R`$ and $`I`$ filters will be dropped for the remainder of the paper.) The $`V`$ images are actually obtained through a Gunn $`g`$ filter, but calibrated to the Johnson $`V`$ system (see Section 3.1). The $`R`$ and $`B`$ images are utilized for designing the masks used for the spectroscopic observations. Image quality ranges from 0.7<sup>′′</sup> FWHM to 1.2<sup>′′</sup> in the best focused part of the $`R`$ images, with a deterioration up to about 20% near the edges due to the focal reducer optics.
The imaging data are reduced at the telescope, and preliminary catalogs of each field are produced and used to design the masks for multi-object spectroscopy. The mask design procedure, using a computerized algorithm, is described in Section 2.3. The MOS/SIS system at CFHT has a computer-controlled laser slit cutting machine (LAMA), which allows spectroscopic masks with as many as 100 slits to be fabricated in 20 to 40 minutes. The elapsed time from obtaining a set of direct images to having a mask for spectroscopic observation, including the procedures for preprocessing, catalog creation, mask design, and mask cutting, was pushed to as low as about 2 hours by the end of the survey.
The new B300 grism is used for spectroscopy. This grism has enhanced blue response compared to the O300 grism used for CNOC1. Along with the blue sensitive CCDs, this produces a significant improvement in the blue region of CNOC2 spectra. The grism is blazed at 4584Å, and has a dispersion of 233.6Å/mm, giving 3.55Å/pixel for the ORBIT1 and LORAL3 CCDs, and 4.96Å/pixel for the STIS2 CCD. A slit width of 1.3<sup>′′</sup> is used for the spectroscopic observation, giving a spectral resolution of $`\sim 14.8`$Å.
The observing procedure is similar to that for the CNOC1 survey (YEC). Two masks are observed for each field, with nominal integration times differing by a factor of $`\sim 2`$ between the A and B masks. Table 2 lists the spectroscopic integration times used for each CCD. Because of the need to have a set of masks prepared in advance and continuously available for spectroscopic observations, and to maximize the observing efficiency, a computer-aided schedule of the imaging and spectroscopic observation sequence for each night is prepared ahead of time, and adjusted as weather, equipment failures, and varying acquisition and set-up times require. With the STIS2 CCD, which has higher QE and faster readout time, on average $`\sim 155`$ minutes were needed per field for the spectroscopic observation of two masks, including overhead for acquisition, focusing, mask alignment, and arc lamp calibration, allowing as many as 800 spectra to be obtained in a single 10-hour clear night. For the direct imaging, $`\sim 60`$ minutes were required for each field.
### 2.3 Galaxy Sample and Mask Design
A computerized mask design algorithm is used for generating the positional information of the slits for the fabrication of masks. Besides allowing one to optimize the placement of as many as 100 slits per mask, an objective design algorithm based on properly calibrated photometric catalogs also serves the very important task of producing a well-understood and well-defined spectroscopic galaxy redshift sample. The algorithm for optimal slit placement is identical to that used for CNOC1 (YEC); however, the selection process for the galaxy sample is different, and is specifically designed to generate a fair field-galaxy sample based on both the $`R`$ and $`B`$ photometry. The mask design uses only objects classified as galaxies with occasional stars included, either serendipitously (e.g., falling into a slit designed for a galaxy), or due to misclassification (see Sections 3.1 and 3.2).
Two masks, A and B, are designed for each field. For 4 of the 74 fields, a third mask C is also observed when either mask A or B are deemed insufficient due to poor observing conditions. The total number of masks observed in each patch is listed in Table 2. The nominal spectroscopic limit is $`R=21.5`$ and $`B=22.5`$. The defined sample for the spectroscopic survey is the union set of the two limits. Objects fainter than both of these limits belong to the secondary sample, for which slits are placed only if sufficient room for the spectrum is available on the detector.
The two-mask strategy for multi-object spectroscopy offers three important advantages. First, by choosing objects with different average brightnesses for the two masks, the integration time for each mask can be geared to the brightness of the objects, resulting in a significant saving of exposure time. Second, the second mask allows for the compensation of under-representation of close pairs in selecting objects for spectroscopic observation. This under-representation arises from the fact that once a slit is placed, a certain area on the CCD is blocked by the resulting spectrum of the object. This blockage as a function of the distance from a chosen object is illustrated by the thin solid line in Figure 3, which is derived from the area blocked by the spectrum as a function of radius from the center of the slit. The discontinuity arises from the spectral coverage not being symmetric with respect to the central wavelength. This function is equivalent to the pair fraction expected in a spectroscopic sample created by a single mask from a parent photometric sample distributed uniformly on the sky. The actual situation is worse in that galaxies are clustered on the sky. The use of the second mask allows objects blocked by the spectra in the first mask to be observed. This is a particularly important feature, in that without such compensation, a fair sampling of object separations is not possible, introducing severe selection effects into the correlation function. Finally, a second mask allows for the redundant observations of a significant number of objects, which is important both for quality control and for obtaining empirical estimates of redshift measurement uncertainties.
Mask A is designed with the emphasis on brighter galaxies, and has a total integration time of about 1/2 of that for Mask B. The design of mask A is based strictly on the apparent magnitude of the galaxies. A priority list of galaxies brighter than $`R=20.25`$ mag is made, with the objects closer to $`R=20.25`$ mag having the higher priorities. Slits which produce non-overlapping spectra are placed based on the order from this list.
This simple algorithm, however, produces the undesirable effect that objects near the upper and lower edges in the dispersion direction (i.e., the north and south edges) will preferentially have a higher probability of being placed. This arises because the spectra of these objects will extend beyond the defined field area on the CCD, effectively reducing the probability of their spectrum being blocked by those belonging to objects already chosen. To compensate for this edge effect, whenever a galaxy is chosen, the fraction of its spectrum falling outside the defined field is computed. The object is not accepted, and placed at the end of the priority queue, if a random number drawn between 0 and 1 is smaller than this fraction. This edge-effect correction is applied to all subsamples in the design process. Note that this additional step does not prevent a larger fraction of objects near the edges being selected, simply for the reason that there is more area there to place spectra. However, this procedure ensures that the selection of slits in the remainder of the field is not unduly driven by the excess placements of slits near the edges, since the latter objects have their priority systematically suppressed.
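The rejection step described above reduces to a few lines of logic; the following is a schematic sketch (the field and spectrum coordinates are placeholders):

```python
import random

def accept_candidate(spec_lo, spec_hi, field_lo, field_hi, rand=random.random):
    """Edge-effect correction: compute the fraction of the candidate's
    spectrum (spanning spec_lo..spec_hi in the dispersion direction)
    that falls outside the defined field (field_lo..field_hi), and
    reject the object with that probability.  Rejected objects are
    returned to the end of the priority queue."""
    length = spec_hi - spec_lo
    inside = max(0.0, min(spec_hi, field_hi) - max(spec_lo, field_lo))
    frac_outside = 1.0 - inside / length
    return rand() >= frac_outside   # not accepted if rand() < fraction

# An object fully inside the field is always accepted; one with half its
# spectrum off-field survives this test only half the time.
```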
Once all possible simple placements have been done, the number of placements is optimized by shifting existing slits along the spatial direction (i.e., so that the object may no longer be in the center of its slit) in order to place additional slits (for details, see YEC). When no more slits from this sample can be placed, a secondary sample is produced from the remainder of the photometric sample based on an ordered list of increasing magnitude starting from $`R=20.25`$, and the whole procedure is repeated.
The design goals for the B masks are considerably more complex. The B mask is designed to complement the A mask in both the magnitude priority and close-pair selection. The primary sample for the B mask contains objects not already placed in the A mask with $`R=20.25`$ as the lower bound and $`R=21.50`$ or $`B=22.5`$ as the upper bound. These objects are ordered in increasing $`R`$ magnitude. Note that this produces an effect that bluer objects of similar $`B`$ magnitudes will be slightly undersampled compared to the redder objects. However, the net effect is expected to be slight, since faint blue galaxies typically have emission-line spectra which are more likely to yield measured redshifts.
A second ranking, designed to compensate for the close-pair selection effect, is produced in the following manner. First, the ratio of the number of pairs, as a function of separation, between the objects already assigned to the A mask and the $`R<21.5`$ sample is derived. For each object in the sample that is not already in mask A, the distance to the nearest object that is already assigned to mask A is determined. The ratio of the pair distributions at the appropriate separation is then used to set the pair-compensation ranking, in that the lower the ratio, the higher the ranking. The final priority ranking of the object is then created by adding the magnitude and pair-compensation rankings in quadrature. Slits are placed on the mask based on these rankings, and again, additional target placements produced by shifting the existing slits are made to maximize the number of slits.
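The combination step can be written compactly; a minimal sketch with invented rankings (a lower combined value means a higher priority):

```python
import numpy as np

def mask_b_priority(mag_rank, pair_rank):
    """Mask-B priority: the magnitude ranking and the pair-compensation
    ranking added in quadrature."""
    return np.hypot(np.asarray(mag_rank), np.asarray(pair_rank))

# Toy example: five candidates ranked independently by magnitude and by
# pair compensation; argsort yields the slit-placement order.
mag_rank = np.array([1, 2, 3, 4, 5])
pair_rank = np.array([4, 1, 5, 2, 3])
print(np.argsort(mask_b_priority(mag_rank, pair_rank)))
```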
The pair-compensation applied to mask B, however, leaves one possible selection effect. The objects fainter than the mask A primary magnitude limit ($`R=20.25`$) which are placed on mask A will have a shorter exposure time than those in the same magnitude range in mask B. Hence, even though both objects of a close pair may be chosen (i.e., one in mask A and the other in B), one of these will have a lower chance of having the redshift measured. This problem is partially alleviated by the slit position shifting algorithm to add additional objects (on the same mask), which is also done with the pair-compensation priorities. The slit-shifting procedure has the effect of freeing up some of the blocked areas.
Once the primary sample for mask B is exhausted, a second sample, consisting of all objects with $`R<20.25`$ that have not been placed in mask A (i.e., those in the primary sample of A that were missed), is created, and the same procedure (with the pair-compensation ranking) is applied, with the exception that the magnitude ranking is done from faint ($`R=20.25`$) to bright. The third and fourth samples are redundant observations of objects already placed in mask A: galaxies with $`R>20.25`$ and $`R<21.5`$ or $`B<22.5`$, and galaxies with $`R<20.25`$, respectively. Finally, any additional available space is filled with the final sample of galaxies with $`R>21.5`$, ranked by brightness.
A typical mask design is shown in Figure 4. In Figures 5 and 6, we show the magnitude distribution and fraction of objects selected as a function of magnitude for masks A and B for the fields in the Patch CNOC 0223+00. Note that bright galaxies have a higher sampling rate relative to the fainter galaxies. This is part of the design philosophy, as the smaller total number of bright galaxies requires a higher sampling rate to provide sufficient statistics.
Figure 3 also presents the pair fraction as a function of separation for objects in the A masks (dot-dashed line) and in the A and B masks (dashed line) for the whole survey, summed field by field, demonstrating the corrective action of having two masks. Note that the two-mask and pair-compensation procedures are still not sufficient to completely correct the lower sampling rate at separations up to about 100<sup>′′</sup>. This is simply due to the fact that with only two masks, high galaxy density regions will always be undersampled. Furthermore, at small angles, close triples will be severely undersampled in a two-mask system. The better-than-expected coverage at the smallest angular bin ($`<3^{\prime \prime }`$) is due to serendipitous observations of very close doubles in a single slit. At large separations ($`>300^{\prime \prime }`$), the pair fraction is oversampled. This is due to the edge effect of objects near the North/South edges having a higher probability of being placed. Also plotted in Figure 3 (thick solid line) is the pair distribution of the final redshift sample, which follows the mask pair distribution except at large radii, showing a significant drop despite more objects being sampled by the masks at these separations. This drop is due to the poorer image quality at the edges and corners of MOS. The undersampling effect at the arcminute scale is partially corrected by applying a geometric weight (see Section 5) to each object. Nevertheless, the redshift sample pair distribution from an entire patch shows periodic spatial features at an amplitude of about 10 to 20%, with a period of about the width of the fields ($`\sim 550^{\prime \prime }`$), in the East-West direction. These features are noticeable in the E-W direction but not in the N-S direction because extra objects are sampled by the masks at the North/South edges, but not the East/West edges.
## 3 Photometric Data Reduction
### 3.1 Photometry
Photometric reduction is performed using the program PPP (Yee 1991) as described in YEC. Detailed descriptions of object finding, star-galaxy classification, and “total” photometry via growth curve analysis can be found in YEC and Yee (1991).
The photometric catalog in 5 filters is produced using a master object list created from the $`R`$ image. Although in principle this procedure will miss extremely blue objects, or very red objects in $`R-I`$, in practice such an effect is not expected to be significant, since the $`R`$ band image is by far the deepest in the set. Visual inspection of the images from the other filters indicates that at worst only the occasional very faint blue object is missed, and the $`R`$-selected catalog (which is 100% complete up to about 23.2 mag) should be complete for objects redder than the extreme color of $`B-R=-0.7`$ at the $`B=22.5`$ limit of the primary spectroscopic sample. We note that the photometric catalog contains no objects as blue as $`B-R=0.0`$, well above the cut-off limit. However, it is clear that these catalogs should not be used to search for $`R`$-band drop-outs!
The object list is visually inspected, and spurious objects arising from cosmetic defects such as bleeding columns and diffraction spikes, and small structures (e.g., HII regions) within large, well-resolved, low-redshift galaxies are removed. We also pay close attention to close doubles that are missed by the object finding routine, and manually add these to the list. This adjustment is important in preventing stars with a close companion from being misclassified as galaxies.
Because of the slightly different distortion from the optics of the focal reducer of MOS over the large wavelength region covered by the 5 filters, and the fact that three different CCDs with different scales have been used, the master list of objects from the $`R`$ image is corrected to the co-ordinate system of the other filters via a geometric transformation determined by comparing positions of bright objects in the images.
The photometry from the longer exposure $`U`$ band images suffers significantly more severely from cosmic-ray hits than that from the other bands. This is due to both the longer exposure time and the much lower relative flux of the desired signals. Hence, a cosmic-ray detection and removal algorithm (part of the PPP package) is applied to all the $`U`$ band images before photometry is done. This algorithm detects cosmic-ray hits by first identifying all pixels that are more than 9$`\sigma `$ above the local sky root-mean-squared fluctuation. These pixels are then tested to see if they are consistent with being part of a real object via a sharpness test. If they are not, they are tagged as cosmic-ray detections, and a padding of an additional layer of pixels around them is applied to mask any residual lower-level extension of the detection. This simple algorithm works extremely well, and does not accidentally tag any real object pixels. The tagged pixels are then “fixed” by simple linear interpolation. It is found that by applying this algorithm, the $`U`$ band photometry improves for about 5% of the galaxies, based on improved SED fits for objects in the redshift sample.
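A minimal sketch of this detect/pad/interpolate sequence is given below; the 9$`\sigma `$ threshold is from the text, but the sharpness test here is a crude median-ratio stand-in for PPP’s actual test, and the patching uses a median rather than the linear interpolation described above.

```python
import numpy as np
from scipy import ndimage

def clean_cosmic_rays(img, nsigma=9.0, sharp_ratio=4.0):
    """Flag pixels more than nsigma above the local sky rms, keep only
    those failing a crude sharpness test (much brighter than their 3x3
    median, unlike a smooth real-object profile), pad the mask by one
    pixel, and patch over the flagged pixels."""
    sky = np.median(img)
    rms = 1.4826 * np.median(np.abs(img - sky))    # robust sky rms
    high = img > sky + nsigma * rms
    local_med = ndimage.median_filter(img, size=3)
    sharp = img > sharp_ratio * np.maximum(local_med, sky + rms)
    cr = ndimage.binary_dilation(high & sharp)     # one-pixel padding
    fixed = img.copy()
    patch = ndimage.median_filter(np.where(cr, local_med, img), size=5)
    fixed[cr] = patch[cr]
    return fixed, cr
```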
We note that the photometry catalogs created at the summit in real time (in $`R`$ and $`B`$ only), which are used for designing the masks, are preliminary. These catalogs are updated later with a more careful inspection and adjustment of the star-galaxy classification parameters. Hence, there are small improvements in the final catalogs compared to the summit catalogs. In general, the final catalogs feature fewer spurious objects, fewer missed close neighbors, and more reliable star-galaxy classifications (see Section 3.2), especially for objects situated near edges and corners of the image where the defocusing is significant. However, because the mask design uses the summit catalogs, a number of stars misclassified as galaxies are unwittingly selected for spectroscopic observation. Typically these stars are either near corners or edges, or have a faint, close neighbor. Some of these misclassified objects are active galaxies and quasars; they are discussed in Hall et al. (2000).
The calibration to standard systems is achieved using three to four Landolt (1994) standard fields, plus M67 whenever possible, per each run in the 5 filters. It is found that the calibration of the $`g`$ band photometry requires a rather large and uncertain color term. This is due to the fact that the $`g`$ filter used is actually significantly redder than the original $`g`$ band definition (Thuan & Gunn 1976), and that there are only a small number of standard stars available (compared to the Landolt system). Hence, we have calibrated these images to the $`V`$ system based on Landolt standards. This results in a smaller color term and in general more stable results. Nevertheless, for completeness we also produce catalogs with the red and the green filters calibrated to the Gunn $`r`$ and $`g`$ system. Typical systematic uncertainties in the zero point calibration constants are 0.04, 0.04, 0.05, 0.05, 0.07 for $`IRVBU`$. Most of the data are obtained under photometric conditions. Small adjustments to compensate for field-to-field differences are made using newly obtained large format CCD images (KPNO 0.9m MOSAIC camera and CFH12K camera) and using the overlapping regions between adjacent fields. While this does not eliminate the systematic uncertainties, it does put every field in a single patch on a consistent calibration.
The final “total” $`R`$ magnitude for each object is created using the following algorithm. The magnitude from the adopted optimal aperture determined from the $`R`$-band growth curve (see Yee 1991) is used as the primary magnitude for an object. If the aperture is smaller than $`12^{\prime \prime }`$, the magnitude is extrapolated to the $`12^{\prime \prime }`$ aperture by using the stellar PSF shape. This nominally produces a correction of less than 0.05 mag. This procedure provides an exact correction for faint stars, and a first order correction for faint galaxies to compensate for seeing smearing. For galaxies brighter than about 19 mag, a second pass for the growth curve is performed using a maximum aperture size of $`24^{\prime \prime }`$. The few very bright galaxies ($`R\lesssim 13.5`$) are typically larger than this aperture and their magnitudes can be underestimated by 0.1 to 0.2 magnitudes. The x-y position and star-galaxy classification for each object are also adopted from the $`R`$ image.
To derive the object magnitude for another filter, we use the color of the object relative to a reference filter. A default color aperture, which is the largest aperture used for color determination between a pair of filters, is set at a relatively large value of 4<sup>′′</sup> to account for the non-uniform PSF in the MOS image. The color between the two filters is formed using the flux inside a color aperture which is the minimum of the adopted optimal apertures for the two filters and the default color aperture. Note that the adopted color aperture radius in general is not a specific aperture used in the tabulated growth curve (which uses apertures with integral pixel diameters). The magnitudes within the color aperture are derived by interpolation of the quantized growth curve. The total magnitude for the filter in question is then the difference between the optimal magnitude for the reference filter, which is nominally chosen to be $`R`$, and the color. For the $`U`$ band, the color is formed using $`B`$ and $`U`$ to avoid using the long baseline between $`R`$ and $`U`$ which may increase the uncertainty. This method of determining the “total magnitude” in each filter has the advantage of always having colors defined from the same aperture for the two filters, and also decreasing the uncertainty in the flux measurement of the fainter filter. However, it does make the assumption that there is a negligible color gradient in the object. The photometry uncertainties for each filter in the color pair at the adopted color aperture are recorded, and the uncertainty of the color is the quadrature sum of the two.
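The bookkeeping described above amounts to a short calculation; a sketch with invented numbers is given below (for the $`U`$ band, the same function would be applied with $`B`$ as the reference filter):

```python
import math

def total_mag(m_ref_total, m_ref_ap, m_x_ap, e_ref_ap, e_x_ap):
    """'Total' magnitude in filter X from the reference-filter total
    magnitude and the color measured in the common aperture:
        m_X(total) = m_ref(total) - [m_ref(ap) - m_X(ap)].
    The color uncertainty is the quadrature sum of the two aperture
    photometry errors."""
    color = m_ref_ap - m_x_ap
    return m_ref_total - color, math.hypot(e_ref_ap, e_x_ap)

# Example: R total = 20.10; in the common aperture R = 20.30, V = 20.95,
# so V(total) = 20.10 - (20.30 - 20.95) = 20.75.
mV, eV = total_mag(20.10, 20.30, 20.95, 0.03, 0.05)
print("V_total = %.2f +/- %.2f" % (mV, eV))
```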
The positions of objects on the CCD frame are determined using an iterative intensity centroid method (see Yee 1991). The uncertainty ranges from better than 0.01 pixels for bright objects to about a pixel for objects near the 5$`\sigma `$ detection limit. However, the major uncertainty in the relative positions of objects from the same field arises from the pincushion distortion of the focal-reduced image, which could be as large as 5 to 6 pixels at the corners. The distortion is corrected using an image taken through a mask with a grid of pinholes of known separations. In general, the relative positions of objects near the edges and corners may be uncertain systematically up to 1<sup>′′</sup>. The star cluster M67 is used as the astrometry reference (Girardi et al. 1989) for determining the scale and rotation of the CCD set-up, to an accuracy of about 0.0004 and 0.05<sup>o</sup>, respectively. Note that these uncertainties compound the inaccuracy of the absolute position determinations of objects over the large distances spanned by the fields.
### 3.2 Star-Galaxy Classification
Star-galaxy classification for the MOS imaging data is particularly challenging due to the large variation in the image quality due to the focal reducer optics. The focus variation across the field is often sufficient to produce crescent- or even donut-shaped PSFs at the corners. We adopt the observational procedure of performing focusing consistently at the same predetermined region about 1/3 of the way outwards from the center of the CCD, so that within the defined area the image quality variation is minimized. A variable PSF classification scheme, as outlined in YEC, is used. This procedure essentially compares the growth curve shape of each object with the four nearest PSF standards, one in each quadrant centered on the object (see Yee 1991 and YEC). The PSF standards are chosen automatically; however, manual intervention in the choices of PSFs is often required near the corners and edges of the image. Typically 20 to 40 reference PSFs are used per field.
Stars having a very close neighbor of either a galaxy or another star are occasionally misclassified as non-stellar. These misclassifications are corrected by hand during visual inspection of the classifier plot. In general, objects down to a brightness of $`R\sim 22`$ mag have robust star-galaxy classification, beyond which some faint galaxies begin to merge into the stellar sequence (e.g., see Figure 2 in YEC). A statistical variable star-galaxy classification criterion is used for the faint objects (see Yee 1991) to compensate for the merging of the star and galaxy sequences in the classifier space. While the separation between stars and galaxies is excellent at magnitudes brighter than the nominal spectroscopic limit of $`R=21.5`$, quasars, luminous distant active galactic nuclei, and a small number of compact galaxies may be missed in the spectroscopic sample. A more detailed discussion of the effect of star-galaxy classification on compact extragalactic objects is provided in Hall et al. (2000), which analyzes the sample of serendipitous AGN and quasars in the CNOC2 survey.
## 4 Spectroscopic Reduction
### 4.1 Spectral Extraction
The spectrum of each object is extracted and wavelength and flux calibrated using semiautomated IRAF reduction procedures. (The Image Reduction and Analysis Facility, IRAF, is distributed by the National Optical Astronomy Observatories, which is operated by AURA, Inc., under contract to the National Science Foundation.) Many of the procedures are the same as those used for the CNOC1 survey as described in Section 6.2 of YEC. Here we summarize those procedures and discuss in detail only those changes made for CNOC2. The scripts and programs used are available from G. Wirth ([email protected]). A major conceptual change is that instead of extracting all the spectra from the same large image, each individual spectrum is copied to a smaller subimage upon which the extraction is performed.
Each mask has a flatfield and arc lamp image and typically two individual spectroscopic exposures. The individual spectroscopic exposures are cleaned of cosmic rays (see YEC), overscan subtracted and summed after applying small shifts to correct for flexure in a handful of cases. Regions of zero-order contamination are interpolated over on the flatfield after being marked interactively.
Since the relative position of each spectrum on the CCD is known exactly in advance, for each mask it is easy to construct an automatic extraction file containing the spatial and dispersion position of the objects and the relative positions of the edges of each slit. The automatic extraction file is used to create an IRAF aperture database containing the aperture position, width, and sky background ranges for each object, including serendipitous objects not targeted in the mask design. Three subimages are then created for each object by copying the same region from the summed spectroscopic exposure, flatfield image, and arc lamp image. Each flatfield subimage is normalized using a cubic spline fit to the mean flat field wavelength profile, leaving only pixel-to-pixel variations. The resulting response subimage is divided into the object subimage to yield a flattened object subimage.
Each object subimage is interactively examined and the default object and background apertures adjusted if necessary. Notes are made at this stage of any salient features of the spectrum, including emission line(s), a very faint or invisible continuum, overlapping spectra, multiple objects per slit, etc. The spectrum is then extracted using variance weighting and an arc spectrum is extracted from the arc lamp subimage using the same extraction aperture and profile weighting. The arc lamp images contain lines from He, Ne and Ar, and give 11 lines strong enough and sufficiently unblended to be used for wavelength calibration. There is a gap with no arc lines between 5015Å and 5875Å where the wavelength calibration is less certain. The wavelength calibration solution for each subimage is found by interactively identifying several arc lines, using a line database to identify the rest, and then fitting a cubic polynomial to the data. The resulting wavelength solution is non-linear (with deviations of over 10Å from linear). Typical rms residuals to the fit for the 11 arc lines are less than 0.1Å. The wavelength solution changes significantly for spectra at differing locations on the CCD, with the mean dispersion ranging from 4.9Å per pixel at the bottom to 5.1Å per pixel at the top (for the STIS2 CCD). The object spectra are wavelength calibrated, and all linearized to run from 4390Å to 6292.21Å with a uniform dispersion of 4.89Å/pixel. The wavelength solutions are checked by visual inspection of the wavelengths of the bright sky lines. The data are then extinction corrected and flux calibrated to $`F_\lambda `$. The flux calibration uses observations of standard stars, generally taken through one of the central apertures. Using these stars, the end-to-end throughput (photons hitting the telescope primary, through to electrons detected by the CCD) for the typical seeing is fairly flat across the blocking filter bandpass, with measured values between 10 and 15%. Slits near the corners of the mask suffer from significant vignetting, which has not been corrected for in the current data. As with CNOC1, the relative flux calibration should be considered accurate to $`\sim 20`$% across large wavelength ranges. The absolute flux calibration of the spectra is considerably more uncertain.
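To make the dispersion-solution step concrete, a minimal sketch follows. The pixel centroids are fabricated for illustration (note the line-free gap); only the cubic fit and the rebinning onto the uniform output grid reflect the actual procedure.

```python
import numpy as np

# Illustrative He/Ne arc-line identifications: pixel centroid vs.
# wavelength (A).  The centroids are invented; the wavelengths are
# standard He and Ne lines, with no lines in the 5016-5852 A gap.
pix = np.array([ 16.5,  65.3, 107.5, 126.4, 301.4, 314.1,
                331.3, 340.3, 344.7, 354.2, 369.2])
lam = np.array([4471.5, 4713.1, 4921.9, 5015.7, 5881.9, 5944.8,
                6030.0, 6074.3, 6096.2, 6143.1, 6217.3])

coeff = np.polyfit(pix, lam, deg=3)          # cubic dispersion solution
rms = np.sqrt(np.mean((np.polyval(coeff, pix) - lam) ** 2))
print("arc-fit rms = %.4f A" % rms)          # < 0.1 A on the real data

# Rebin onto the survey's uniform grid: 390 pixels starting at 4390 A
# with 4.89 A/pixel end exactly at 6292.21 A.
grid = 4390.0 + 4.89 * np.arange(390)
# flux_lin = np.interp(grid, np.polyval(coeff, np.arange(nspec)), flux)
```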
Regions 45Å wide around the bright night sky emission lines at 5577Å and 5892Å are automatically interpolated over. Finally, the spectra are examined and residual cosmic rays or other bad regions are interactively marked and interpolated over, including regions 125Å wide around zero order contamination. These interpolated spectra are the final versions used for redshift determination, but we also found it advantageous to keep copies of the uninterpolated spectra to help confirm redshifts where a feature falls within an interpolated region. A total of 14932 spectra were extracted during the course of the CNOC2 data analysis.
### 4.2 Cross-Correlations
We determine redshifts using the cross-correlation technique described by YEC (see also Ellingson & Yee 1994). Our method is similar to standard techniques (e.g., Tonry & Davis 1979), except for a second calculation of the cross-correlation function, which is used to remove biases resulting from the large redshift range we need to consider ($`0<z\lesssim 0.7`$), coupled with the finite spectral coverage of both our object and template spectra. The reader is referred to YEC for details.
The object spectra are cross-correlated against three galaxy templates, specifically elliptical, Sbc, and Scd galaxy spectra taken from the spectrophotometric galaxy atlas of Kennicutt (1992). All object spectra, along with their associated cross-correlation functions against each template, are visually inspected to verify or reject the redshifts first determined automatically by the cross-correlation program on the basis of the values of the cross-correlation coefficient $`R_{cor}`$ and peak heights (Tonry & Davis 1979). Generally the redshift corresponding to the template with the highest $`R_{cor}`$ value is adopted, but occasionally a template with a somewhat lower $`R_{cor}`$ value may be chosen if visual inspection deems it a better spectral match to the object galaxy. Low signal-to-noise spectra with indeterminable or uncertain redshifts, typically with $`R_{cor}\lesssim 3`$, are rejected. Figure 7 shows example spectra and cross-correlation functions for galaxies of different spectral types and different $`R_{cor}`$ values. We assign the galaxy spectral type, denoted “Scl,” according to the cross-correlation template chosen, where Scl=2, 4, and 5 correspond to the elliptical, Sbc, and Scd templates, respectively. The visual inspection also includes examination of the two-dimensional spectral images, both before and after cosmic ray removal, which allows us to reject any remaining cosmic rays that might otherwise masquerade as emission lines in the extracted one-dimensional spectra. Contaminating stellar spectra, as well as unusual spectra (e.g., AGN or spectroscopic gravitational lens candidates, see Hall et al. 2000) are also identified during the visual inspection process. Objects classified as AGN or QSO are designated as Scl=6. In addition, objects without assigned redshifts but with spectra of signal-to-noise ratios judged sufficient to have detected spectral features if they were present are flagged and denoted by Scl=88. Spectroscopically identified stars are designated as Scl=77. The IRAF add-on radial velocity package RVSAO (Kurtz & Mink 1998) is used to aid in graphical display and visual redshift assessment of our spectra.
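A bare-bones version of the correlation step is sketched below. Real templates, proper filtering, and the survey’s second bias-removing correlation pass (YEC) are all omitted, and the polynomial continuum removal is a simple stand-in.

```python
import numpy as np

def xcor_redshift(lam, flux, lam_t, flux_t, nbins=1024):
    """Skeleton cross-correlation redshift (Tonry & Davis 1979 style):
    rebin object and template onto a common log-wavelength grid,
    subtract a smooth continuum, correlate, and convert the peak lag
    into a redshift."""
    loglam = np.linspace(np.log(4400.0), np.log(6280.0), nbins)
    o = np.interp(loglam, np.log(lam), flux)
    t = np.interp(loglam, np.log(lam_t), flux_t)
    for s in (o, t):                       # crude continuum removal
        s -= np.polyval(np.polyfit(loglam, s, 5), loglam)
    o /= np.sqrt(np.sum(o ** 2))
    t /= np.sqrt(np.sum(t ** 2))
    cc = np.correlate(o, t, mode="full")
    lag = np.argmax(cc) - (nbins - 1)      # pixels of log-lambda shift
    dloglam = loglam[1] - loglam[0]
    return np.exp(lag * dloglam) - 1.0     # z = exp(Delta ln lambda) - 1
```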
### 4.3 Redshift Verification and Uncertainty
The redshifts determined during the “first pass” visual inspection described above are subjected to confirmation in several subsequent steps. A “second pass” visual inspection of those spectra assigned a first-pass redshift is made to verify the original redshift determination and to flag problem cases for final inspection. This is done by one of us (H. Lin) on the spectra in bulk, after all the survey data has been acquired and all first-pass inspections completed, in order to provide a single reasonably uniform categorization of the redshift quality. During second-pass inspection, the original first-pass redshifts are assigned to three categories: good, probable, and questionable/bad. Questionable/bad redshifts are subjected to a final visual inspection, as described below. Note this questionable/bad category also includes cases where a clerical mistake was made during first pass on an otherwise obviously good redshift. For probable redshifts, we apply an additional, more objective photometric-redshift (photo-$`z`$) verification test to confirm the redshift or to flag the spectrum for final visual inspection.
The photo-$`z`$ calibration is done using the empirical polynomial fitting method of Connolly et al. (1995). Separate fits are determined for Scl=2, 4, and 5 galaxies, using the full 4-patch CNOC2 data set, but the calibration sample is restricted to those galaxies with second-pass “good” redshifts and high cross-correlation $`R_{cor}`$ values ($`\geq 6`$ for Scl=2,4; $`\geq 12`$ for Scl=5). For Scl=2 and 4 objects, the fits include up to linear-order terms in the $`UBVRI`$ magnitudes, but for Scl=5, we also include terms up to quadratic order to improve the fit. We take out any redshift-dependent residuals in our photo-$`z`$ fit by subtracting the median difference between photometric and spectroscopic redshift, in bins of width $`\mathrm{\Delta }z=0.1`$ in spectroscopic redshift. Note that since we are interested in checking the consistency between photometric and spectroscopic redshifts, rather than trying to derive photo-$`z`$’s for objects with no spectroscopy in the first place, it is entirely valid to use existing spectroscopic redshift and spectral class (Scl) information to optimize the photo-$`z`$ fits. Using the calibration sample of second-pass good redshifts, we find that only about 10% of these galaxies have $`|z(\mathrm{photometric})-z(\mathrm{spectroscopic})|>0.065`$, 0.09, or 0.15 for Scl=2, 4, or 5, respectively. We then adopt these simple cuts to flag for final inspection those second-pass probable redshifts which show a large discrepancy with their respective photometric redshift estimates.
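A minimal sketch of such a fit is shown below; the array handling and binning are illustrative, and details of the actual Connolly et al. implementation may differ.

```python
import numpy as np

def fit_photoz(mags, z_spec, quad=False):
    """Empirical photo-z: least-squares polynomial in the UBVRI
    magnitudes (mags is an (N, 5) array); quad=True adds the quadratic
    terms used for the Scl=5 fits."""
    cols = [np.ones(len(z_spec))] + list(mags.T)
    if quad:
        n = mags.shape[1]
        cols += [mags[:, i] * mags[:, j]
                 for i in range(n) for j in range(i, n)]
    A = np.vstack(cols).T
    coeff = np.linalg.lstsq(A, z_spec, rcond=None)[0]
    z_phot = A @ coeff
    # Remove redshift-dependent residuals: subtract the median offset in
    # bins of width 0.1 in spectroscopic redshift (valid here because
    # the fit is used only as a consistency check on spectroscopic z's).
    for lo in np.arange(0.0, 0.7, 0.1):
        sel = (z_spec >= lo) & (z_spec < lo + 0.1)
        if sel.any():
            z_phot[sel] -= np.median(z_phot[sel] - z_spec[sel])
    return coeff, z_phot
```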
The third and final pass is then carried out by two of us (P. Hall, H. Lin), on the following objects: those with second-pass questionable/bad redshifts, those with second-pass probable redshifts failing the photo-$`z`$ verification test, those with $`R_{cor}<4`$, those morphologically classified as galaxies or probable galaxies but spectroscopically classified as stars (and vice versa), and all objects which were not initially assigned redshifts but which do have spectra of reasonable signal-to-noise ratios (i.e., Scl=88). Redshifts for which both inspectors agree are good (bad) are retained (rejected); those for which the inspectors disagree are resolved by a tie-breaking vote cast by a third person (H. Yee). We also correct any obvious mistakes in the redshift at this point (e.g., a redshift based on an \[OIII\]$`\lambda `$5007 emission line that was mistakenly or accidentally taken to be an \[OII\]$`\lambda `$3727 line during first pass). As an example, overall about 250 redshifts out of $`\sim 1550`$ (i.e., $`\sim 15`$%) from the 0223+00 patch are third-pass inspected, and 36 of these are ultimately excluded from the final redshift catalog. Thus, the final catalog contains objects that have secure redshifts, with their quality approximately ranked by the correlation $`R_{cor}`$ values.
Finally, we use redundant redshift measurements, typically from independent pairs of A and B mask spectra, to empirically calibrate the redshift errors calculated by our cross-correlation program. Specifically, we examine the distribution of velocity differences for the redundant pairs, appropriately normalized by the quadrature sum of the formal velocity errors returned by the program. We find that the original program errors need to be multiplied by a factor of 1.3 in order to match the empirical velocity differences, but once that is done a K-S test shows that the resulting normalized velocity difference distribution is indeed consistent with a Gaussian. In Figure 8 we plot the velocity difference distribution for the 303 redundant pairs in the 0223+00 patch; the rms velocity difference, divided by $`\sqrt{2}`$, is 103 km s<sup>-1</sup>, which indicates our typical random velocity error on a single redshift measurement. We estimate the systematic error in the velocity zero point for our templates to be approximately 30 km s<sup>-1</sup>, as detailed in YEC.
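Schematically, the calibration amounts to the following (hypothetical names; v1, v2 are the redundant velocity pairs and e1, e2 their formal errors):

```python
import numpy as np
from scipy.stats import kstest

def calibrate_velocity_errors(v1, v2, e1, e2):
    dv_norm = (v1 - v2) / np.hypot(e1, e2)  # normalized by the quadrature sum
    scale = np.std(dv_norm)                 # ~1.3 for the CNOC2 pairs
    # after rescaling, the normalized differences should be ~N(0, 1)
    p_gauss = kstest(dv_norm / scale, "norm").pvalue
    # rms of the raw differences, divided by sqrt(2), estimates the random
    # error of a single redshift measurement (~103 km/s here)
    sigma_single = np.std(v1 - v2) / np.sqrt(2.0)
    return scale, sigma_single, p_gauss
```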
## 5 Completeness and Weights
The practical difficulty of obtaining spectroscopic observations for every single galaxy in a faint galaxy sample requires such a survey to adopt a sparse sampling strategy. In the case of the CNOC2 survey, the sampling is slightly under 2 to 1, and the sampling itself is not necessarily uniform. For example, objects near the edges in the spectral direction, and objects in low galaxy density regions, are more likely to be sampled simply due to geometric considerations. Furthermore, the success rate of obtaining a redshift when a galaxy spectrum is obtained depends on many factors, the foremost being the signal-to-noise ratio and the strength of the spectral features. For some galaxies, because of the relatively short spectrum, there may not be any useful identification features for deriving a robust redshift. Hence, converting an observed redshift sample to a complete sample requires a detailed understanding of the various effects and methods of accounting for them. For the CNOC1 Cluster Redshift Survey, YEC described in detail their method of compensating for these various factors by attaching a series of statistical weights to each galaxy. The CNOC2 survey uses the same method to assign weights to each galaxy in the redshift sample. Here, we will describe very briefly the determination of weights, with emphasis on the small variations from the method used by YEC.
For each galaxy, a selection function $`S=S_mS_{xy}S_cS_z`$ is computed, where $`S_m(m)`$ is the magnitude ($`m`$) selection function, $`S_{xy}(x,y,m)`$ is the geometric selection function based on the location ($`x,y`$) of the object, $`S_c(c,m)`$ is the color ($`c`$) selection function, and $`S_z(z,m)`$ is the redshift selection function. The weight, $`W`$, for each galaxy is then $`1/S`$. $`S_m`$ is chosen as the primary selection function which has a value between 0 and 1; whereas the other selection functions are considered as modifiers to $`S_m`$ and are normalized to have a mean over the sample of $`1.0`$ (with the exception of $`S_z`$). This description of the weights of the galaxies allows one to omit any of the secondary selection functions if they are deemed unnecessary for the analysis.
Selection functions are computed for each filter for every object with a redshift. $`S_m`$ is derived by counting galaxies in a running bin of $`\pm `$0.25 magnitudes around each object, taking the ratio of the number with redshifts to that in the photometric sample. Examples of the magnitude selection function for individual fields are shown in Figure 9 for the patch CNOC 0223+00. Because there are significant variations of coverage from field to field due to observing conditions and object density – the nominal 20% selection limit (i.e., $`S_m=0.2`$) varies from $`R_c=21.1`$ to 21.8 mag – the magnitude selection function is computed in individual fields rather than over the whole patch (which was done in the CNOC1 catalogs, which have many fewer fields per cluster). This is equivalent to applying a large geometric filter to the magnitude selection function. Variations over smaller angular scales are corrected by $`S_{xy}`$, which is computed by counting galaxies within $`\pm `$0.5 mag over an area of 2.0 arcminutes around the object. The color selection function is computed similarly, by counting galaxies over a color range of $`\pm `$0.25 mag. The colors used for $`I`$, $`R`$, $`V`$, $`B`$, and $`U`$ are $`R-I`$, $`B-R`$, $`V-R`$, $`B-R`$, and $`U-B`$, respectively.
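A sketch of the running-bin estimate of $`S_m`$ and the resulting weight (hypothetical names; the same counting scheme applies to $`S_{xy}`$ and $`S_c`$ with the appropriate windows):

```python
import numpy as np

def S_m(m_obj, m_phot, m_spec, half_width=0.25):
    """Ratio of redshift-sample to photometric-sample counts in a running
    bin of +/- half_width mag around each object with a redshift."""
    sm = np.empty(len(m_obj))
    for i, m in enumerate(m_obj):
        n_phot = np.sum(np.abs(m_phot - m) <= half_width)
        n_spec = np.sum(np.abs(m_spec - m) <= half_width)
        sm[i] = n_spec / n_phot if n_phot > 0 else 1.0
    return sm

def weight(sm, sxy=1.0, sc=1.0, sz=1.0):
    # W = 1/S with S = S_m * S_xy * S_c * S_z; the secondary selection
    # functions default to their normalized mean of 1.0 if omitted
    return 1.0 / (sm * sxy * sc * sz)
```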
The calculations of $`S_m`$, $`S_{xy}`$, and $`S_c`$ are all performed strictly empirically using the galaxy catalog, and they can be easily recomputed with different bin sizes, sampling areas, or reference colors. However, the computation of $`S_z`$ is model dependent, as it requires an estimate of the number of galaxies of a certain magnitude and color which are outside the measurable redshift range. This requires knowledge of the luminosity function of galaxies as a function of color and redshift, which is in fact one of the goals in the study of galaxy evolution. If the LF of the galaxies is known, then a simple correction to $`S_m`$, the magnitude selection function, is $`f_z(z1,z2)/f_{LF}(z1,z2)`$, where $`f_z(z1,z2)`$ is the fraction of galaxies in the redshift sample between $`z1`$ and $`z2`$, and $`f_{LF}(z1,z2)`$ is the fraction of galaxies in the same $`z`$ range as modeled by the LF. Using the LF derived by Lin et al. (1999) from two of the 4 CNOC patches, we can estimate the redshift selection function, $`S_z`$, as a function of magnitude. We note that Lin et al. derived the LF using a different redshift weight correction based on colors of galaxies, although the LF and $`S_z`$ can be derived iteratively using the above method for determining $`S_z`$. As an example, Table 4 lists $`W_z`$ derived from count models in half magnitude bins for the patch CNOC 0223+00. The correction to $`W_m`$ due to the redshift range effect is significant at $`R\text{ }\stackrel{>}{}\text{ }20.5`$, rising to as much as 30% at the nominal spectroscopic sample limit of $`R=21.5`$. A large correction is expected at the faint end, as fainter galaxies have a higher mean redshift, resulting in fewer galaxies having their redshift measured. Values of $`W_z`$ for individual objects in the catalogs are derived by interpolating the results from binned data. Because of the near 100% success rate and small number statistics at bright magnitudes, $`W_z`$ is fixed at 1.0 at $`R<18.0`$.
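The model-dependent piece can be sketched in the same spirit (hypothetical names; z1, z2 bound the measurable redshift range, both values below being illustrative, and the LF fraction comes from a count model such as that of Lin et al. 1999):

```python
import numpy as np

def S_z(z_sample, f_lf, z1=0.0, z2=0.7):
    """Redshift selection function for one magnitude bin: the observed
    fraction of redshifts inside [z1, z2] over the LF-model fraction."""
    f_z = np.mean((z_sample >= z1) & (z_sample <= z2))
    return f_z / f_lf   # W_z for the bin is 1/S_z, fixed to 1 at bright mags
```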
Another independent method for accurately accounting for the redshift selection effect is to compare the redshift sample to a photometric redshift sample derived from the 5 color photometric data. This method will be discussed in a future paper.
## 6 The Data Catalog for the Patch CNOC 0223+00
The patch CNOC 0223+00 contains 19 fields. An example of a $`R`$ MOS image of a single field is shown as a gray scale plot in Figure 10, with objects having measured redshifts marked. The layout of the 19 fields of the patch is presented in Figure 11, with all galaxies brighter than $`R=23.0`$ (100% complete) marked. The total area in the defined region for the patch is 1408.7 square arcminutes. To this limit, the patch contains 9554 objects classified as galaxies and 2574 classified as stars. The sky and redshift distributions of the redshift sample are shown in Figures 12 and 13. A total of 1541 redshifts are measured from 3820 spectra (with a redundancy rate of 32.9%). The number of galaxies with $`R21.5`$ is 2692, of which 1293 have a measured redshift. The cumulative sampling rate at $`R=21.5`$ is 48.0%, with a raw success rate of 67.3%. Figure 9 illustrates the differential sampling rate (i.e. $`S_m`$) of individual objects as a function of magnitude. The raw and redshift-selection-corrected success rates as a function of $`R`$ magnitude are tabulated in Table 4 and plotted in Figure 14. The corrected success rate is close to 100% for galaxies brighter than 20.0, and close to 70% at the nominal primary spectroscopy limit of $`R=21.5`$. We note that the success rates tabulated reflect the sum of the A and B masks, with the A masks having half the exposure time of that of the B masks. Typical spectra and their correlation functions are plotted in Figure 7. Figure 8 illustrates the histogram of $`\mathrm{\Delta }v`$ from redundant spectroscopic observations. The distribution produces an estimate of the average uncertainty for the redshift measurements of 103 km s<sup>-1</sup>.
Some statistics for each field are presented in Table 5. These include the $`5\sigma `$ detection limits for the $`R`$ and $`B`$ filters. These are fiducial limits determined using stellar PSFs; the typical 100% completeness limit for galaxies is 0.6 to 0.8 mag brighter (see Yee 1991). The numbers in brackets indicate the runs (and hence the CCD used, see Table 2) from which the $`R`$ and $`B`$ images were obtained. The mean $`5\sigma `$ magnitude limits are $`23.87\pm 0.13`$ and $`24.49\pm 0.13`$ for $`R`$ and $`B`$, respectively, where the uncertainty is the rms width of the distribution. The 100% photometric completeness limit is approximately 0.7 to 0.9 magnitudes brighter than the $`5\sigma `$ limit (see Yee 1991). For the other 3 filters: $`I`$, $`V`$, and $`U`$, the mean 5$`\sigma `$ magnitude limits are 22.80$`\pm `$0.22, 24.09$`\pm `$0.12, and 23.04$`\pm `$0.15, respectively. Also listed in Table 5 are the number of redshifts measured in each field, with the bracketed numbers indicating the runs from which masks A, B, and C were obtained. The integrated selection function $`S_m`$ at $`R=21.5`$ and the differential selection rate at $`21.0<R<21.5`$ are also shown in Table 5. These numbers provide an idea of the variations from field to field expected in a patch.
The fields are merged into a patch by matching a reference object which appears in both adjacent fields. Table 6 is a sample catalog for the patch CNOC 0223+00 which contains every 25th galaxy with redshifts from the full catalog to demonstrate some of the information tabulated for each object in the full catalog. Listed in the sample catalog are the following columns:
Column 1: the PPP object number – the first two digits represent the field number, while the last 4 digits are the sequential object number within the field ordered from South to North.
Column 2: the offset in RA in arc seconds from the fiducial origin in the a0 field with West being positive.
Column 3: the offset in Dec in arc seconds with North being positive. The object positions, both $`\mathrm{\Delta }`$RA and $`\mathrm{\Delta }`$Dec, have not been put on a proper astrometric grid. The relative position between two objects is in general accurate to 1 to 2<sup>′′</sup> within the same field, and could be uncertain by as much as 5<sup>′′</sup> or more across the patch (see Section 3.1).
Columns 4 to 8 : the $`I`$, $`R`$, $`V`$, $`B`$, and $`U`$ magnitudes. The photometric uncertainties for $`R`$ and $`B`$ are shown to illustrate the typical errors. The errors are tabulated in 1/100 magnitude units. Note that a 0.03 mag “aperture” uncertainty is added in quadrature to the photon noise error for all objects. This accounts for the uncertainty in determining the optimal aperture for the object (See YEC).
Column 9: the redshift and uncertainty. The uncertainty has been scaled by the empirically determined 1.3 factor from that produced by the cross-correlation algorithm (see Section 4.3) and is tabulated in units of 0.00001 in redshift.
Column 10: the spectroscopic classification, with 2=elliptical spectrum, 4=intermediate-type spectrum, 5=emission-line spectrum, 6=active galactic nuclei.
Column 11: cross-correlation coefficient value ($`R_{cor}`$).
Column 12: Spectral energy distribution (SED) class determined by fitting the 5 color photometry to the empirical SEDs of galaxies of different spectral types from Coleman, Wu, & Weedman (1980). The template SEDs of E/S0, Sbc, Scd, and Im classes from Coleman, Wu, & Weedman are designated as 0.0, 1.0, 2.0, and 3.0. An additional very blue template, denoted as 4.0, is created from the GISSEL library (Bruzual & Charlot 1996) for modeling late-type strongly star-forming galaxies. Intermediate SED classes are interpolations of these templates.
Column 13: k-correction for the filter $`R`$ determined using the SED class fit and spectral model.
Column 14: magnitude weight for the filter $`R`$.
In the full electronic version of the catalog, the color aperture error, the k-correction and the magnitude, geometric, and color weights for the remaining 4 filters, and the redshift weight for each object are also listed. The central position in RA and Dec of the catalog (i.e., $`\mathrm{\Delta }`$RA=0.0 and $`\mathrm{\Delta }`$Dec=0.0) is 00:23:29.2, +00:05:14 (1950). The complete catalog and detailed explanatory notes for the catalog can be found at a number of websites: http://adc.gsfc.nasa.gov/, http://www.astro.utoronto.ca/~hyee/CNOC2/<sup>2</sup><sup>2</sup>2 The full electronic version of the catalog will be initially available at this website on or around August 1, 2000. , and http://cadcwww.hia.nrc.ca.
Also available electronically is a “field-area map”, which is a two-dimensional array containing 0’s and 1’s, mapping the sampled area of the patch with a 2<sup>′′</sup>/pixel resolution. This map allows the determination of the exact area on the sky covered, including the effect of blockage by bright stars.
We wish to thank the Canadian Time Assignment Committee and the CFHT for generous allocation of observing time and the CFHT organization for the technical support. We especially extend our thanks to the telescope operators: Ken Barton, John Hamilton, Norman Purves, Dave Woodworth, and Marie-Claire Hainaut who, through their dedication and skill, helped immensely in maximizing the observing efficiency for this large project. This project was supported by a Collaborative Program grant from NSERC, as well as individual operating grants from NSERC to RGC and HKCY. HL acknowledges support provided by NASA through Hubble Fellowship grant #HF-01110.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
## 1 Introduction
Non-commutative field theories have an unconventional perturbative behaviour. New infrared singularities in the correlation functions appear even for massive theories. This phenomenon is due to an interplay between the UV and IR induced by the Moyal phase appearing in the vertices. Recently some of the amplitudes of these non-commutative theories have been derived from string theory.
In this note we analyze at the perturbative level the non-commutative version of a non-relativistic scalar field theory in $`2+1`$ dimensions in order to gather more information about the UV/IR mixing and the structure of degrees of freedom of non-commutative theories. Our motivation for investigating non-relativistic non-commutative field theories is twofold. In non-relativistic quantum theory, non-commutativity of space often arises in the effective description of charged particles carrying a dipole moment moving in strong magnetic fields. It seems natural then to look for the by now well-known UV/IR mixing in the context of non-relativistic quantum field theory. Another, more theoretical, motivation is that it might be easier to understand the effects of non-commutativity of space in simpler setups than relativistic quantum field theory. We also would like to emphasize that, due to an ordering ambiguity in the interaction vertices, the particular model we are considering cannot be obtained as the non-relativistic limit of a relativistic field model!
As we will see, also in this non-relativistic example there is an interplay between the IR and UV behaviour due to the Moyal phases. For the two-point function a singularity of delta-function type appears. This should be contrasted with the pole-like singularities found for relativistic theories. For the four-point function we find a singularity of a more familiar, logarithmic type.
We study the system at finite temperature and non-zero chemical potential $`\mu `$. We compute the thermodynamical potential up to two loops. The presence of the chemical potential provides another scale besides the non-commutativity scale and temperature. If $`\mu >>T`$, the non-planar contribution is strongly suppressed with respect to the planar one for thermal wavelengths smaller than the non-commutativity scale. This suggests a reduction of the degrees of freedom running in the non-planar graphs at high temperature. The limit of $`\mu <<T`$ is more involved. The non-planar graph does not appear to be strongly suppressed as a function of the temperature. It depends crucially on the ratio between the chemical potential and the non-commutativity scale.
The paper is organized as follows. In section 2, we study the two and four-point function up to one-loop. In section 3 we study the free energy up to two loops. We give some conclusions in section 4.
## 2 Two and Four-Point Function at one-loop
We will start by introducing the model. We will work in $`2+1`$ dimensions, where the non-commutativity affects only the spatial directions. Non-commutative $`R^2`$ is defined by the commutation relations
$$[x^\mu ,x^\nu ]=i\theta ^{\mu \nu },$$
(2.1)
with $`\theta ^{\mu \nu }=\theta ϵ^{\mu \nu }`$. The algebra of functions on non-commutative $`R^2`$ is defined through the star product
$$(f\star g)(x):=\underset{y\to x}{lim}e^{\frac{i}{2}\theta ^{\mu \nu }\partial _\mu ^x\partial _\nu ^y}f(x)g(y)$$
(2.2)
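As a quick consistency check of this definition, the star product of two coordinates terminates at first order in $`\theta `$:

$$x^\mu \star x^\nu =x^\mu x^\nu +\frac{i}{2}\theta ^{\mu \nu },$$

so that $`[x^\mu ,x^\nu ]_\star =x^\mu \star x^\nu -x^\nu \star x^\mu =i\theta ^{\mu \nu }`$, reproducing (2.1).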
Here $`x^\mu `$ are taken to be ordinary c-numbers. We will study a self-interacting non-relativistic scalar field model, defined by the Lagrangian
$$=\varphi ^{\dagger }\left(i\partial _t+\frac{\stackrel{}{\nabla }^2}{2}\right)\varphi -\frac{g}{4}\varphi ^{\dagger }\star \varphi ^{\dagger }\star \varphi \star \varphi .$$
(2.3)
The star product has been dropped in the term bilinear in the fields. This is consistent since we can always delete one star in monomials of fields in the action. This is equivalent to neglecting total derivative terms. In ordinary space-time this model arises as the low energy limit of a real relativistic scalar field with $`\varphi ^4`$ self-interaction. It has been studied as a model for applying renormalization to quantum mechanics with a $`\delta `$-function potential. The model is scale invariant in ordinary space-time since scale transformations in a non-relativistic theory take the form $`t\to \lambda ^2t,\stackrel{}{x}\to \lambda \stackrel{}{x}`$. The scaling of $`t`$ is due to the fact that in (2.3) the mass has been scaled out by redefining $`t\to mt`$. It has been shown that the theory acquires a scale anomaly upon quantization, quite analogous to what happens in relativistic quantum field theory. Of course, in the case considered here scale invariance is already broken at tree level by the non-commutativity scale $`\sqrt{\theta }`$.
In going from ordinary space-time to the non-commutative one, an ordering ambiguity for the interaction term arises. We fix that ambiguity in (2.3) by putting the $`\varphi ^{\dagger }`$ fields to the left. The other possible ordering would have been to choose $`_{int}=-\frac{g^{\prime }}{4}\varphi ^{\dagger }\star \varphi \star \varphi ^{\dagger }\star \varphi `$. A relativistic complex scalar field model with both interaction vertices has been considered in the literature. There the authors showed that the theory was renormalizable at one-loop level only when $`g=g^{\prime }`$ or $`g=0`$. We will later on show that no such restriction arises in the non-relativistic model (2.3).
The solutions to the free field equations can be written as Fourier transforms
$$\begin{array}{ccc}\varphi (\stackrel{}{x},t)\hfill & =& \int \frac{d^2k}{(2\pi )^2}a(\stackrel{}{k})e^{-i(\omega _kt-\stackrel{}{k}\cdot \stackrel{}{x})},\hfill \\ \varphi ^{\dagger }(\stackrel{}{x},t)\hfill & =& \int \frac{d^2k}{(2\pi )^2}a^{\dagger }(\stackrel{}{k})e^{i(\omega _kt-\stackrel{}{k}\cdot \stackrel{}{x})},\hfill \end{array}$$
(2.4)
where $`\omega _k=\frac{\stackrel{}{k}^2}{2}`$. The propagator of the theory is given by
$$\langle T\varphi (x)\varphi ^{\dagger }(y)\rangle =\int \frac{d^2kd\omega }{(2\pi )^3}\frac{ie^{-i(\omega t-\stackrel{}{k}\cdot \stackrel{}{x})}}{\omega -\frac{\stackrel{}{k}^2}{2}+iϵ}.$$
(2.5)
We now want to compute the one-loop correction to the two-point function. This is given by the tadpole diagram of figure 1. In ordinary space-time we can employ a normal ordering prescription setting the tadpole to zero. In the non-commutative theory we expect a dependence of the tadpole on the external momentum due to the contribution of the non-planar diagrams. The planar and non-planar contributions are given by
$$i\mathrm{\Sigma }(E,\stackrel{}{p})=I_{planar}+I_{nonplanar},$$
(2.6)
and
$$\begin{array}{ccc}I_{planar}\hfill & =& \frac{g}{2}\int \frac{d\omega d^2k}{(2\pi )^3}\frac{1}{\omega -\frac{\stackrel{}{k}^2}{2}+iϵ}\hfill \\ I_{nonplanar}\hfill & =& \frac{g}{2}\int \frac{d\omega d^2k}{(2\pi )^3}\frac{\mathrm{exp}(i\stackrel{~}{p}k)}{\omega -\frac{\stackrel{}{k}^2}{2}+iϵ},\hfill \end{array}$$
(2.7)
where $`\stackrel{~}{p}^\mu =\theta ^{\mu \nu }p_\nu `$. In order to do the $`\omega `$-integration we recall that $`(x+iϵ)^{-1}=𝒫\frac{1}{x}-i\pi \delta (x)`$. This leaves us with the $`\stackrel{}{k}`$-integrations
$$\begin{array}{ccc}I_{planar}\hfill & =& \frac{ig}{8\pi }\int _0^\mathrm{\Lambda }k\,dk=\frac{ig}{16\pi }\mathrm{\Lambda }^2\hfill \\ I_{nonplanar}\hfill & =& \frac{ig}{16\pi ^2}\int d\phi \int _0^\mathrm{\Lambda }e^{i\stackrel{~}{p}k\mathrm{cos}(\phi )}k\,dk=\frac{ig}{8\pi \stackrel{~}{p}}\mathrm{\Lambda }J_1(\stackrel{~}{p}\mathrm{\Lambda }),\hfill \end{array}$$
(2.8)
where we introduced a UV-cutoff $`\mathrm{\Lambda }`$ and $`J_1(x)`$ denotes a Bessel function. The quadratic divergence from the planar part can be removed by adding a corresponding counterterm to the action, $`_c=\delta \mu \varphi ^{\dagger }\varphi `$. The non-planar part reproduces the quadratic divergence for $`\stackrel{~}{p}\to 0`$ since $`lim_{x\to 0}\frac{J_1(x)}{x}=\frac{1}{2}`$. In the limit $`\mathrm{\Lambda }\to \infty `$, using $`\int _0^{\infty }J_1(x)dx=1`$, it is straightforward to show that the result from the non-planar diagram represents a delta-function in polar coordinates in $`\stackrel{~}{p}`$-space. We find then
$$\mathrm{\Sigma }(E,p)=\frac{g}{4\theta ^2}\delta ^2(\stackrel{}{p}).$$
(2.9)
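To verify the normalization of this limit (a one-line check, given the conventions of (2.8)), one can integrate the non-planar result over $`\stackrel{~}{p}`$-space:

$$\int d^2\stackrel{~}{p}\frac{\mathrm{\Lambda }}{8\pi \stackrel{~}{p}}J_1(\stackrel{~}{p}\mathrm{\Lambda })=\frac{\mathrm{\Lambda }}{4}\int _0^{\infty }d\stackrel{~}{p}J_1(\stackrel{~}{p}\mathrm{\Lambda })=\frac{1}{4},$$

so that $`\frac{\mathrm{\Lambda }}{8\pi \stackrel{~}{p}}J_1(\stackrel{~}{p}\mathrm{\Lambda })\to \frac{1}{4}\delta ^2(\stackrel{~}{p})=\frac{1}{4\theta ^2}\delta ^2(\stackrel{}{p})`$ as $`\mathrm{\Lambda }\to \infty `$, which is precisely the coefficient appearing in (2.9).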
Thus the situation is rather analogous to what happens in relativistic field theories. The limits $`\mathrm{\Lambda }\to \infty `$ and $`p\to 0`$ do not commute.
It is interesting to see that we can recover the delta-type singularity of the non-planar diagram as a limit of the relativistic case. The relativistic theory is given by the Lagrange density
$$_{rel}=\frac{1}{2}(\partial \varphi _r)^2-\frac{1}{2}m^2\varphi _r^2-\frac{\lambda }{4!}\varphi _r\star \varphi _r\star \varphi _r\star \varphi _r.$$
(2.10)
The non-relativistic limit can be obtained as a $`1/m`$ expansion. We strip off the fast oscillation due to the large mass and introduce dimensionless non-relativistic fields by defining
$$\varphi _r=\frac{1}{\sqrt{2m}}(e^{-imt}\varphi +e^{imt}\varphi ^{\dagger }).$$
(2.11)
To extract the non-relativistic limit we have to expand the vertex in (2.10) and compare with the vertex in (2.3). From the relativistic vertex we obtain
$$_{rel}=-\frac{\lambda }{4!m^2}\left(\varphi ^{\dagger }\star \varphi ^{\dagger }\star \varphi \star \varphi +\frac{1}{2}\varphi ^{\dagger }\star \varphi \star \varphi ^{\dagger }\star \varphi \right).$$
(2.12)
Note that the non-relativistic limit produces both possible orderings in the interaction. Therefore in the non-commutative case our model (2.3) is not the non-relativistic limit of a real relativistic scalar field. It turns out however that only the first vertex in (2.12) contributes to the non-planar tadpole diagram. We should be able then to obtain the result (2.9) from the relativistic case. Comparing (2.3) with (2.12) we see that $`\lambda =6mg`$ <sup>1</sup><sup>1</sup>1Recall that in (2.3) we have rescaled $`t\to mt`$ and absorbed $`m`$ in order to factor out the mass dependence.. The non-planar contribution to the tadpole diagram in the relativistic theory is given by
$$I_{rel}=\frac{\lambda }{6(2\pi )^3}\int d^3k\frac{e^{i\stackrel{~}{p}k}}{k^2-m^2}.$$
(2.13)
In order to evaluate the integral we switch to Euclidean momentum and use Schwinger parameterization. We obtain
$$I_{rel}=\frac{i\lambda \pi ^{\frac{3}{2}}}{6(2\pi )^3}\int _0^{\infty }\frac{d\alpha }{\alpha ^{3/2}}e^{-\frac{\stackrel{~}{p}^2}{4\alpha }-\alpha m^2}=\frac{i\lambda e^{-\stackrel{~}{p}m}}{24\pi \stackrel{~}{p}}.$$
(2.14)
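The $`\alpha `$-integral here is a standard one; using $`\int _0^{\infty }d\alpha \alpha ^{\nu -1}e^{-a/\alpha -b\alpha }=2(a/b)^{\nu /2}K_\nu (2\sqrt{ab})`$ with $`\nu =-1/2`$, $`a=\stackrel{~}{p}^2/4`$, $`b=m^2`$, together with $`K_{1/2}(z)=\sqrt{\pi /2z}e^{-z}`$, one finds

$$\int _0^{\infty }\frac{d\alpha }{\alpha ^{3/2}}e^{-\frac{\stackrel{~}{p}^2}{4\alpha }-\alpha m^2}=2\left(\frac{2m}{\stackrel{~}{p}}\right)^{1/2}K_{1/2}(\stackrel{~}{p}m)=\frac{2\sqrt{\pi }}{\stackrel{~}{p}}e^{-\stackrel{~}{p}m},$$

which, combined with the prefactor, reproduces the right-hand side of (2.14).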
There also arises a factor of $`\frac{1}{2}`$, which can be seen by noting that the integral $`I_{rel}`$ defines the relativistic self-energy $`\mathrm{\Sigma }_{rel}`$. The relativistic dispersion relation is $`(p_0-m)(p_0+m)-\stackrel{}{p}^2-\mathrm{\Sigma }_{rel}=0`$. Setting $`p_0=m+E`$, where $`E`$ is the non-relativistic energy, we can go to the non-relativistic limit by scaling $`m\to \infty `$ and $`E\to 0`$ keeping the product $`Em=\omega `$ fixed. This is the non-relativistic energy of dimension $`2`$. In this way the relativistic dispersion relation becomes twice the non-relativistic one if we identify $`lim_{m\to \infty }\mathrm{\Sigma }_{rel}=2\mathrm{\Sigma }_{nonrel}`$.
$$\underset{m\to \infty }{lim}I_{rel}=i\frac{g}{2}\delta ^2(\stackrel{~}{p}).$$
(2.15)
Thus we reproduce precisely the non relativistic result (2.9).
If we formally sum all the tadpole diagrams contributing to the two-point function we obtain the following modified dispersion relation
$$\omega =\frac{p^2}{2}+\frac{g}{4\theta ^2}\delta ^2(\stackrel{}{p}).$$
(2.16)
In the resummation one encounters arbitrarily high powers of $`\delta `$-functions. Thus the resummation is highly ill-defined. On the other hand we just showed that the dispersion relation (2.16) arises also as the limit of the relativistic one. Therefore we expect it to be correct on physical grounds. Alternatively we could keep the cutoff and arrive at (2.16) with a suitably smeared $`\delta `$-function. The meaning of (2.16) is that the energy of the zero momentum states is shifted by an infinite amount. However, it is important to note that the delta-function in the dispersion relation is integrable. Thus wavepackets containing zero momentum components will still have finite energy.
We would now like to evaluate the four-point function at one loop. In a non-relativistic theory only the s-channel contributes, since the t- and u-channels contain internal lines flowing both forward and backward in time, and these evaluate to zero in a non-relativistic theory (see figure 2). As shown in the literature, the contributions from the u- and t-channels make the relativistic complex scalar field non-renormalizable if one does not also include the second possible ordering for the vertex. It is thus the vanishing of the u- and t-channels that allows us to ignore the second possible ordering in the vertex. The non-relativistic one-loop four-point function is
$$\mathrm{\Gamma }_4(\omega _i,\stackrel{}{p}_i,\mathrm{\Lambda })=\frac{\lambda ^2}{8\pi }\left(\text{log}\frac{\mathrm{\Lambda }^2}{E-\frac{P^2}{4}}+i\pi \right),$$
(2.17)
where $`E=\omega _1+\omega _2=\omega _1^{}+\omega _2^{}`$ and $`P=p_1+p_2=q_1+q_2`$ are the center of mass energy and momentum, and $`\omega _i,p_i`$ and $`\omega _i^{},p_i^{}`$ the energy and momentum of the incoming and outgoing particles respectively; $`\mathrm{\Lambda }`$ is an UV cutoff.
The one-loop four-point function for the non-commutative case is given by
$$\mathrm{\Gamma }_4=\frac{i\lambda ^2}{2}\mathrm{cos}\frac{\stackrel{~}{p}_1p_2}{2}\mathrm{cos}\frac{\stackrel{~}{q}_1q_2}{2}\int \frac{d^2kd\omega }{(2\pi )^3}\frac{\text{cos}^2\frac{\stackrel{~}{P}k}{2}}{(\omega -\frac{k^2}{2}+iϵ)(E-\omega -\frac{(k-P)^2}{2}+iϵ)},$$
(2.18)
Using $`\text{cos}^2\frac{\stackrel{~}{P}k}{2}=\frac{1+\mathrm{cos}\stackrel{~}{P}k}{2}`$ and writing $`\mathrm{cos}\stackrel{~}{P}k`$ in terms of exponentials, we can separate the planar and non-planar contributions. After doing the $`\omega `$ integration and shifting $`k\to k+\frac{P}{2}`$ we get for the non-planar part
$$\mathrm{\Gamma }_4^{nonplanar}=\frac{\lambda ^2}{4}\mathrm{cos}\frac{\stackrel{~}{p}_1p_2}{2}\mathrm{cos}\frac{\stackrel{~}{q}_1q_2}{2}\int \frac{d^2k}{(2\pi )^2}\frac{e^{i\stackrel{~}{P}k}}{k^2-E+\frac{P^2}{4}-2iϵ}.$$
(2.19)
This integral can be analyzed by changing to polar coordinates in momentum space. For angles such that $`\stackrel{~}{P}\cdot k>0`$, we can evaluate (2.19) by using a contour encircling the first quadrant of the $`|k|`$-complex plane. For angles such that $`\stackrel{~}{P}\cdot k<0`$, it is convenient to use a contour in the $`|k|`$-plane encircling the fourth quadrant. Adding both contributions, we obtain the following result
$$\mathrm{\Gamma }_4^{nonplanar}=\frac{\lambda ^2}{16}\mathrm{cos}\frac{\stackrel{~}{p}_1p_2}{2}\mathrm{cos}\frac{\stackrel{~}{q}_1q_2}{2}\left[Y_0\left(\stackrel{~}{P}\sqrt{E-\frac{P^2}{4}}\right)+iJ_0\left(\stackrel{~}{P}\sqrt{E-\frac{P^2}{4}}\right)\right],$$
(2.20)
where $`J_0`$ and $`Y_0`$ denote Bessel functions of the first and second kind respectively. In order to better understand this expression, it is convenient to expand the Bessel functions for small $`\stackrel{~}{P}`$. Up to a real constant and $`O(\stackrel{~}{P})`$ terms, the result is
$$\mathrm{\Gamma }_4^{nonplanar}=\frac{\lambda ^2}{16\pi }\text{cos}\frac{\stackrel{~}{p}_1p_2}{2}\text{cos}\frac{\stackrel{~}{q}_1q_2}{2}\left(\text{ln}\frac{1/\stackrel{~}{P}^2}{E-\frac{P^2}{4}}+i\pi \right).$$
(2.21)
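The expansions used here are the standard small-argument forms of the Bessel functions,

$$Y_0(x)=\frac{2}{\pi }\left(\mathrm{ln}\frac{x}{2}+\gamma _E\right)+O(x^2),J_0(x)=1+O(x^2),$$

so that, with $`x=\stackrel{~}{P}\sqrt{E-\frac{P^2}{4}}`$, the $`Y_0`$ term produces the logarithm in (2.21).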
The non-commutative phases regulate the otherwise divergent contribution coming from high momentum. The resulting dependence of the non-planar diagram on the external momentum is smoother than for the two-point function. The external momentum $`\stackrel{~}{P}`$ acts as a UV cutoff, very much in the same way as in previously analyzed examples of relativistic theories.
## 3 Finite Temperature Behaviour
In this section we analyze the thermodynamics of our model. The physical reason to consider this system in a heat bath is to check if there is a reduction of degrees of freedom for the non-planar sector of the theory. In the case of relativistic theories this was shown to happen for thermal wavelengths smaller than the non-commutative length scale.
Before we embark on doing the calculation we remind the reader of the following formula
$$\underset{n}{\sum }\frac{1}{i\omega _n-x}=-\frac{\beta }{2}-\frac{\beta }{e^{\beta x}-1}.$$
(3.1)
where $`\omega _n=\frac{2\pi n}{\beta }`$. The first term on the r.h.s. represents the zero temperature contributions. The resulting zero temperature divergences can be canceled by the introduction of appropriate counterterms.
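Formula (3.1) is the standard bosonic Matsubara sum: pairing the $`\pm n`$ terms gives

$$\underset{n}{\sum }\frac{1}{i\omega _n-x}=-\underset{n}{\sum }\frac{x}{\omega _n^2+x^2}=-\frac{\beta }{2}\mathrm{coth}\frac{\beta x}{2}=-\frac{\beta }{2}-\frac{\beta }{e^{\beta x}-1}.$$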
We will compute the thermodynamic potential up to two loops. In order to cure infrared divergences we will introduce a chemical potential term $`\mu \varphi ^{\dagger }\varphi `$ in our Lagrangian (2.3). The introduction of a chemical potential seems natural taking into account the renormalization properties of the theory at zero temperature. Notice that now three scales are present. Correspondingly we have two regimes of high temperature. By high temperature we mean thermal wavelength <sup>2</sup><sup>2</sup>2 The thermal wavelength of a non-relativistic system is given by $`\lambda _T=\frac{2\pi }{\sqrt{T}}`$ ($`m=1`$). much smaller than the scale of non-commutativity, or equivalently, $`T\theta >>1`$. The physics is then still dependent on the chemical potential<sup>3</sup><sup>3</sup>3In non-relativistic theory the chemical potential takes values in $`(\mathrm{},0)`$.. In the regime $`\mu >>T`$ we expect a classical particle picture to be valid. We will also investigate the regime $`T>>\mu `$, where classical field theory is a good approximation to quantum statistical mechanics.
The one-loop contribution to the thermodynamic potential $`F=-T\mathrm{log}Z`$ is given by
$$T\int \frac{d^2k}{(2\pi )^2}\mathrm{log}\left(1-e^{-\beta (\frac{k^2}{2}-\mu )}\right)=T^2\text{Li}_2\left(1-e^{\frac{\mu }{T}}\right),$$
(3.2)
where $`\text{Li}_2`$ denotes the dilogarithm. The two-loop contribution is given by
$$I=\frac{g}{2}T^2\underset{l,n}{\sum }\int \frac{d^2p}{(2\pi )^2}\int \frac{d^2k}{(2\pi )^2}\frac{\mathrm{cos}^2\frac{\stackrel{~}{p}\stackrel{}{k}}{2}}{(i\omega _l-\frac{p^2}{2}+\mu )(i\omega _n-\frac{k^2}{2}+\mu )}$$
(3.3)
As in the zero temperature case we substitute $`\mathrm{cos}^2\frac{\stackrel{~}{p}\stackrel{}{k}}{2}=\frac{1+\mathrm{cos}\stackrel{~}{p}\stackrel{}{k}}{2}`$, separating the planar and non-planar parts.
Using formula (3.1) we obtain three contributions to the planar part. The $`(T=0,T=0)`$ piece is a temperature independent divergence. The $`(T=0,T)`$ contributions are divergent. They can be canceled by adding a counterterm of the form of the chemical potential, $`\delta \mu \varphi ^{\dagger }\varphi `$. The $`(T,T)`$ contribution can be easily integrated:
$$I_{planar}=\frac{g}{8\pi ^2}T^2\left[\mathrm{ln}\left(1-e^{\frac{\mu }{T}}\right)\right]^2.$$
(3.4)
The non-planar contribution to the free energy contains again three pieces. The first one is temperature independent and finite. The $`(T=0,T)`$ contribution is
$$I_{nonplanar}^{T=0}=\frac{g}{2}\int \frac{d^2p}{(2\pi )^2}\int \frac{d^2k}{(2\pi )^2}\frac{e^{i\stackrel{~}{p}k}}{e^{\beta (\frac{k^2}{2}-\mu )}-1}=\frac{g}{8\pi ^2\theta ^2}\frac{1}{e^{-\frac{\mu }{T}}-1}.$$
(3.5)
This can be interpreted as a one-loop contribution due to the shift in the dispersion relation (2.16). The $`(T,T)`$ contribution is
$$I_{nonplanar}^T=\frac{g}{8\pi ^2}\int p\,dp\int k\,dk\frac{J_0(\stackrel{~}{p}k)}{(e^{\beta (\frac{p^2}{2}-\mu )}-1)(e^{\beta (\frac{k^2}{2}-\mu )}-1)}.$$
(3.6)
Since $`J_0\le 1`$, we see that the non-planar contribution is suppressed with respect to the planar one. The strength of the suppression will depend on the value of the two dimensionless quantities $`\theta T`$ and $`\mu /T`$.
We will first analyze the regime $`\mu /T>>1`$. In this limit we can substitute the Bose-Einstein distribution by the Maxwell-Boltzmann distribution. This corresponds to considering low densities for the thermal gas. This is the particle approximation to the quantum field theory. In this limit we can evaluate the integral explicitly,
$$I_{nonplanar}^T=\frac{g}{8\pi ^2}\frac{T^2}{1+(\theta T)^2}e^{\frac{2\mu }{T}}.$$
(3.7)
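In deriving this, the double integral in (3.6) factorizes thanks to the Gaussian-Bessel identity $`\int _0^{\infty }k\,dkJ_0(ak)e^{-\beta k^2/2}=\frac{1}{\beta }e^{-a^2/2\beta }`$:

$$\int _0^{\infty }p\,dpe^{-\beta p^2/2}\frac{1}{\beta }e^{-\theta ^2p^2/2\beta }e^{2\beta \mu }=\frac{e^{2\mu /T}}{\beta ^2+\theta ^2}=\frac{T^2}{1+(\theta T)^2}e^{2\mu /T},$$

which is (3.7) up to the overall $`g/8\pi ^2`$ prefactor.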
For $`\theta T<<1`$ planar and non-planar graphs give the same contribution. For $`\theta T>>1`$ there is a very strong suppression of the non-planar sector. The $`T^2`$ dependence of (3.4) is replaced by $`1/\theta ^2`$. When $`T`$ is larger than $`1/\theta `$ the thermal wavelength $`\lambda _T\sim 1/\sqrt{T}`$ becomes smaller than the radius of a Moyal cell. Equation (3.7) seems to indicate that the effective wavelength of the modes that circulate in the non-planar loop cannot be smaller than the radius of the Moyal cell.
We now analyze the regime of small $`\frac{\mu }{T}<<1`$. The classical thermal field theory approximation consists in dimensionally reducing the system along the Euclidean time direction, or equivalently, considering only the zero mode in the sum over Matsubara frequencies. In the limit of small $`\frac{\mu }{T}`$ this approximation is valid up to modes of momentum $`k^2<T`$. On the other hand the non-commutative phases suppress modes of momentum $`k^2>\frac{1}{\theta }`$, as can be explicitly seen from the Bessel function appearing in (3.6). Therefore when $`\theta T>>1`$ and $`\frac{\mu }{T}<<1`$ we expect that the classical field approximation will describe the leading behaviour of the non-planar sector <sup>4</sup><sup>4</sup>4Notice that in our case the two spatial directions are non-commutative. Therefore we expect the suppression of high momenta by $`\theta `$ to be more effective than in the cases studied previously, where the classical approximation was applied to a system with odd spatial dimensions.. The integral (3.6) can be evaluated in this limit with the result
$$I_{nonplanar}^T=\frac{g}{8\pi ^2}T^2\text{G}\left((\mu \theta )^2\right),$$
(3.8)
where $`G(z)=G_{13}^{31}\left(z|_{000}^0\right)=\frac{1}{2\pi i}\int \mathrm{\Gamma }(1+s)^3\mathrm{\Gamma }(-s)z^sds`$ denotes a Meijer G-function. The suppression of the non-planar sector with respect to the planar one appears in this case to be only logarithmic with the temperature. However, contrary to the previous case, the ratio between planar and non-planar contributions depends also on $`\mu \theta `$. For $`\mu \theta `$ large the function $`G`$ tends to zero, implying an additional suppression of the non-planar sector. For $`\mu \theta `$ small $`G`$ diverges. This divergence is associated with the infrared problems of the theory at small chemical potential.
## 4 Discussion and Conclusions
We have seen that the phenomenon of UV/IR mixing is not only a characteristic of relativistic theories but also occurs in non-relativistic theories. The model we have considered is a non-commutative version of a 2+1 dimensional model that describes many-particle quantum mechanics with a delta-function interaction. For the two-point function we have seen the appearance of an IR singularity of delta-function type which changes the dispersion relation. For the four-point function we found a logarithmic singularity. Thus the non-relativistic model has a UV/IR mixing similar to the relativistic field theories studied so far. Since our model cannot be embedded in a natural way in a string theory, one might interpret this as slight evidence that the IR singularities are not connected to closed string states that do not decouple from the field theory.
The renormalizability of non-commutative field theories to all loop orders is still an open problem. The non-relativistic scalar field model might prove to be a simple and interesting toy model for such a study. The fact that some diagrams vanish identically (such as the t- and u-channel contributions to the four-point function) could simplify a systematic study of renormalizability. That in resumming the self-energy insertions in the propagator one has to deal with powers of delta-functions should not a priori be considered an insurmountable obstacle. As we argued, such a formal resummation is physically well motivated. Indeed the delta function appears also in the non-relativistic limit of the resummed propagator of relativistic $`\varphi ^4`$ theory.
We have also studied the two-loop correction to the free energy and we have seen that the non-planar part of the theory is very sensitive to the value of the chemical potential. At large negative values it turns out that the non-planar part is strongly suppressed compared to the planar part. In this regime the behaviour is similar to what has been found in relativistic theories. The thermal wavelength of the degrees of freedom in non-planar diagrams cannot become smaller than the non-commutativity scale. Therefore these degrees of freedom are suppressed at high temperature.
This interpretation is less clear at high temperature and small chemical potential. It turned out that the non-planar part is at most logarithmically suppressed. Given that these two regimes behave so differently it should be an interesting direction of further research to study the effects of a chemical potential also in relativistic, non-commutative field theories.
## Acknowledgements
We would like to thank Luis Alvarez-Gaumé, Herbert Balasin, José Barbón, César Gómez, Harald Grosse, Cristina Manuel, Antonio Pineda, Toni Rebhan and Miguel Angel Vazquez-Mozo for helpful discussions. The work of J.G. is partially supported by AEN 98-0431, GC 1998SGR (CIRIT). K.L. and E.L. would like to thank the Erwin Schrödinger Institut for Mathematical Physics, Vienna for its hospitality.
# Radio recombination lines from starburst galaxies : high and low density ionized gas
## 1. Introduction
RRL and radio continuum studies of nuclear starbursts in galaxies are proving to be useful not only because of the absence of extinction but also because different density components of the ionized gas can be accessed through observations of RRLs at different frequencies (Zhao et al. 1996). We report a multi-frequency RRL and continuum study of four starburst galaxies.
## 2. Observations and Modeling
We have observed RRLs at 1.4 GHz, 8.3 GHz and 15 GHz using the VLA<sup>1</sup><sup>1</sup>1The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. from four starburst galaxies: NGC 253, NGC 3628, and NGC 3690 and IC694 in the Arp 299 system. The two higher frequency lines were detected in all four galaxies and the line at 1.4 GHz was detected only in NGC 253. The line emission arises in the central 5–8<sup>′′</sup> nuclear region in these galaxies. The peak line strengths for NGC 253 are about 5–8 mJy. The 8 and 15 GHz lines are typically 0.5–1.0 mJy in the other three galaxies and the 3$`\sigma `$ upper limits to the 1.4 GHz lines are $`\sim `$1 mJy. We model the ionized gas as a collection of spherical uniform HII regions characterised by a single temperature, density and size. The total number of such HII regions is obtained by comparing the computed continuum and line emission for each model with the observed values. Other derived parameters are the flux of ionizing photons and the mass of the gas. Valid solutions are chosen based on constraints which are described in Anantharamaiah et al. (1993).
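As an illustration of this procedure, the following is a minimal sketch of such a single-density model, assuming the standard Altenhoff/Mezger approximation for the free-free optical depth and a Rayleigh-Jeans source function; all function names and parameter values are hypothetical and purely illustrative, not those of our actual modeling code.

```python
import numpy as np

K_B, C = 1.380649e-23, 2.998e8                 # SI units
PC, MPC, JY = 3.086e16, 3.086e22, 1e-26

def freefree_tau(Te, nu_GHz, EM):
    """Free-free optical depth (Altenhoff/Mezger approximation);
    Te in K, EM in pc cm^-6."""
    return 8.235e-2 * Te**-1.35 * nu_GHz**-2.1 * EM

def continuum_flux(Te, nu_GHz, ne, size_pc, D_Mpc, n_regions):
    """Free-free flux density (Jy) from n_regions identical uniform spheres."""
    EM = ne**2 * size_pc                       # emission measure through one sphere
    tau = freefree_tau(Te, nu_GHz, EM)
    nu = nu_GHz * 1e9
    omega = np.pi * (0.5 * size_pc * PC / (D_Mpc * MPC))**2  # solid angle, one sphere
    b_rj = 2.0 * K_B * Te * nu**2 / C**2       # Rayleigh-Jeans brightness
    return n_regions * b_rj * (1.0 - np.exp(-tau)) * omega / JY

# Scale the number of HII regions so the model matches an observed continuum
# flux density (the 0.05 Jy target here is illustrative), then confront the
# implied RRL emission of the same population with the measured line strengths.
n_fit = 0.05 / continuum_flux(7500.0, 8.3, 1e4, 1.0, 3.0, 1)
```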
## 3. Properties of the Ionized Gas and Conclusions
The models show that the 8.3 GHz and 15 GHz lines originate in a population of compact (0.1–5 pc) high density (5000–50000 cm<sup>-3</sup>) HII regions with low total volume filling factor ($`<`$ 10<sup>-4</sup>). These lines arise from internal emission in the HII regions. Since the NIR and optical data imply much lower densities, this component is probably not detected in these bands due to high extinction. In all four galaxies, the photon flux necessary for ionizing this gas is equal to or greater than that derived using conventional means. This result could lead to an upward revision of their star formation rates.
The above gas is practically undetectable at frequencies $`\lesssim `$4 GHz. Modeling the detected H166$`\alpha `$ line at 1.4 GHz from NGC 253 indicates that this line arises from low density (10–100 cm<sup>-3</sup>) diffuse (5–100 pc) HII regions with an area filling factor $`>`$ 0.1. The upper limits to the 1.4 GHz line for the other three galaxies give very similar results. Although the detection of lines from this component in the other three galaxies might be difficult with current instruments, measuring the continuum flux densities at $`\nu <`$1 GHz from the nuclear region using the GMRT will enable us to strongly constrain the properties of this gas.
These observations are a first step towards deriving the properties of the ionized gas at different densities in starburst regions and hence towards studying star formation on different time scales, since the lifetime of an HII region depends on its density.
## References
Anantharamaiah, K. R., Zhao, J. H., Goss, W. M., & Viallefond, F. 1993, ApJ, 419, 585
Zhao, J. H., Anantharamaiah, K. R., Goss, W. M., & Viallefond, F. 1996, ApJ, 472, 54
# Brane Worlds, the Cosmological Constant and String Theory
## 1 Introduction
The traditional method of compactification of superstring theory involves writing the ten dimensional manifold as a direct product of a non-compact Minkowskian four manifold and a compact Euclidean six manifold, $`M_{10}=M_4\times M_6`$, with the size of the compact space being set by the string scale. The four manifold is identified with the space we observe around us and is therefore taken to be flat. The compact manifold is taken to be a Ricci flat Kähler manifold (i.e. a Calabi-Yau space or an orbifold) so that we obtain $`𝒩=1`$ supersymmetry (SUSY) in the four manifold. The latter is required for at least two reasons. The first is the so-called hierarchy problem involving the stabilization of the weak scale to quantum corrections. With (softly broken) $`𝒩=1`$ SUSY one can protect this hierarchy. The other issue is that of the cosmological constant (CC). Again supersymmetry offers the hope of stabilizing this against quantum corrections, i.e. if the ultraviolet theory (say at the string scale) has zero cosmological constant then as long as SUSY is preserved it will remain zero. In this case, however, once SUSY is broken, it is difficult to see how one can prevent a CC, at least of order the mass splitting between fermions and bosons, from being generated.
Actually getting a realistic model where this is achieved would be progress, since there would also be standard model phase transitions which are of the same order and could in principle cancel the SUSY breaking effects. Nevertheless at this point it is, we believe, fair to say that even this has not yet been achieved in a natural way, since the typical cosmological constants that are generated upon SUSY breaking are at an intermediate scale of $`𝒪(10^{11}GeV)`$ (which is the actual scale of SUSY breaking in the so-called hidden sector).
To get a zero (or small) CC requires fine tuning of some parameter in the scalar potential of $`𝒩=1`$ supergravity. However, string theory has no tunable parameters. In fact it is not very clear how to even generate the potential for the moduli (the dilaton, and the size and shape of the compact manifold), but it is generally agreed that either stringy or low energy field theoretic non-perturbative effects will give such a potential and stabilize these moduli. <sup>1</sup><sup>1</sup>1For a recent discussion highlighting the problems involved, with references to the earlier literature, see the recent discussions in the literature.
Let us first think about the field theoretic stabilization mechanism. One possibility is to have several gauge groups in the hidden sector whose gauginos condense. This is the so-called race track model. The effective superpotential needs to be a sum of three exponentials of the moduli fields: two are needed to get a weak gauge coupling minimum and an intermediate scale of SUSY breaking, and a third is needed in order to fine tune the cosmological constant to zero. This model has various problems associated with the steepness of the potential (see the literature for a recent discussion), but even apart from that it seems unlikely that the sort of fine tuning that is required is allowed by string theory. An alternative would be to have the moduli stabilized by string scale physics. In this case however one needs to find field theoretic models which solve the practical cosmological constant problem, i.e. obtain a model with a CC only at the weak scale, as has been advocated. Even this seems a non-trivial task; in particular, it seems that one needs a constant in the superpotential to do so, and to get a zero (or small) cosmological constant would seem to require a fine tuning that would really be at variance with what one expects from string theory. Similar remarks apply to all models proposed so far for getting standard model physics out of string theory (including many brane world type scenarios based on type I string theory) once one tries to get flat four dimensional space after SUSY breaking.
The main point of this paper is to propose a string theoretic brane world scenario for getting a zero cosmological constant, based on arguments sketched earlier. This would imply a radical departure from what happens in the standard string compactification models discussed in the above paragraphs. Instead of compactifying to four flat dimensions with $`𝒩=1`$ supersymmetry, we compactify the ten dimensional theory (here we will concentrate on type IIB) on a five sphere (or squashed sphere) to five dimensions. In contrast to the standard compactifications we will now have a potential for at least the T-moduli (i.e. the size and shape of the squashed sphere) and of course a bulk cosmological constant. What we have is five dimensional gauged supergravity, but the four dimensional world, instead of being a further compactification of this, is viewed as a brane sitting in this five dimensional bulk. As discussed earlier, there are various possibilities for getting flat four dimensional space on the brane without fine tuning<sup>2</sup><sup>2</sup>2For a discussion of a case which apparently does not involve a choice of integration constant but involves a bulk singularity, see the literature. Apart from the difficulty of interpreting this naked singularity (which is common to all models of this type), this model has the problem that the brane tension chosen is not renormalization group (RG) invariant. For comments on these models along these lines, see the literature. none of which explain why the cosmological constant is zero. But unlike in the standard compactifications, where it is difficult to understand even how the CC is tuned to zero, in our case we believe there is a viable scenario whereby, with a choice of integration constants and a compactification parameter (which is not determined by the ten dimensional theory), flat four dimensional solutions can be obtained.
Although our discussion is confined to an analysis of a five dimensional effective action coming from compactifying a ten dimensional action for type IIB supergravity on a (squashed) sphere, we believe that it is quite likely that there is a string construction in such a Ramond-Ramond (RR) background. In fact, recently there has been considerable progress in constructing such theories. Thus we think that the model that we propose here could be promoted to the complete string theory.
Another issue that we should discuss here is that of singular brane configurations versus smooth configurations. We take the point of view that our branes are D-branes and/or orientifold planes, so that as far as the low energy effective action is concerned their thickness is infinitesimal.<sup>3</sup><sup>3</sup>3We stress that although flat space considerations imply that gravity cannot be confined to a D-brane (as one wants for a one brane RS type scenario) in a warped background this is not necessarily the case. Their transverse size, in other words, is set by the string scale, and to the extent that we are working in the low energy effective action we can ignore their thickness. Thus we will be looking for solutions to the bulk equations in the presence of these branes, which will act as singular source terms, as in earlier treatments. We believe that the considerations of authors who look for brane scenarios that try to mimic the Randall - Sundrum (RS) scenarios and/or work without putting in sources (and find a negative result) are not relevant to our construction. On either side of our branes we will have smooth solutions to the supergravity equations which will then be matched at the position of the branes. It should also be pointed out here that a model for the real world would in fact require a brane which carried gauged degrees of freedom, and so we really would want to think in terms of D-branes rather than smooth brane-like configurations. The picture that we have (i.e. basically the one just outlined) actually implies the physical existence of a brane (on which the standard model should live) placed in the bulk supergravity space.
Our model has the explicit D-brane/orientifold planes at the fixed points of the $`\frac{S^1}{Z_2}`$ orbifold in the fifth ($`x^4`$) direction. We assume that the ten dimensional dilaton is frozen by string scale dynamics. When the squashing modes are constant but at a certain critical point with non-zero values, the bulk is an $`𝒩=2`$ background. Turning on the breathing mode $`\phi `$ of the sphere should not affect the supersymmetry. The supersymmetry of the D/orientifold 3-plane would be $`𝒩=1`$. This should give a solution of the RS type without fine tuning and with $`𝒩=1`$ supersymmetry. After supersymmetry breaking, however, one does not expect a solution for arbitrary values of the radius of the $`S^1`$. In this case, in addition to the radius of the circle, there are also two integration constants from the metric factor and the breathing mode and the constant coming from the compactification, four constants in all, that will be determined by the two pairs of matching conditions at each brane. Thus all the constants are completely fixed.
As stressed earlier, this is of course not a solution to the cosmological constant problem, since there is no explanation of why the integration constants are chosen to give the values that they need to have in order to get flat branes. Nevertheless we believe it is progress, in that at least there is an explanation as to how one might get a zero CC in a string theoretic scenario after SUSY breaking.<sup>4</sup><sup>4</sup>4While this paper was being prepared for publication a paper appeared which discusses an alternate scenario for two flat branes without fine tuning. Our mechanism is replaced there with an assumption about supersymmetry in the bulk and on the “Planck brane”. However we do not understand how a fine tuning in the bulk potential is avoided in this case.
It should also be pointed out that the brane world scenario appears to be the only context, within string theory, in which the idea of using the free parameter (from the point of view of the ten dimensional theory) in the potential of gauged supergravity to adjust the cosmological constant to zero can be made use of. This is because this mechanism can only be used in type IIA or IIB theories (and orientifolds thereof), and the standard model/real world in such theories necessarily lives on a D-brane.
In the next section we discuss a possible string theory set up in which the scenario we have in mind may be realized. In section 3 we discuss special bulk solutions to the five dimensional equations and in section 3.1 we display approximate general solutions with all the allowed integration constants that are necessary to obtain a solution with two branes. In section 4 we summarize the physics of the construction and its significance, and then speculate on possible phenomenological applications.
## 2 The String Theory Setup
We wish to consider type IIB string theory compactified on a five sphere or squashed sphere. Typically this background is obtained by turning on the five form RR flux. We assume that string theory on such a background exists, perhaps along the lines discussed by Berkovits et al. If this is indeed justified, it should follow that the entire D-brane machinery can be taken over.
Thus we look at the $`S^1/Z_2`$ orientifold of the five dimensional theory resulting from the (squashed) sphere compactification, with $`S^1`$ being a circle of radius $`R`$ and the $`Z_2`$ action being $`x^4\to -x^4`$. The theory is a five dimensional gauged supergravity which contains, in addition to the dilaton-axion, 40 other scalars, some of which can be interpreted as squashing modes of the five sphere. These fields are massless in the limit when the gauge coupling is zero. In addition there is the so-called (compactification scale) “breathing mode” and of course the gauge fields, which along with the two 2-form fields associated with the F and D strings will be set to zero in the following. Also we assume that the ten dimensional dilaton is frozen by string scale dynamics.
Let us first consider the round sphere compactification. The ten dimensional metric is written as
$$ds_{10}^2=e^{2\alpha \phi }ds_5^2+e^{2\beta \phi }ds^2(S^5)$$
with $`\alpha =\frac{1}{4}\sqrt{\frac{5}{3}},\beta =-\frac{3}{5}\alpha `$. The ansatz for the self-dual 5-form is
$$H_{(5)}=4me^{8\alpha \phi }ϵ_{(5)}+4mϵ_{(5)}(S^5),$$
(1)
where $`ϵ_{(5)}`$ and $`ϵ_{(5)}(S^5)`$ are the volume forms of the non-compact and compact spaces respectively. The effective five dimensional action is then
$$S=\int d^5x\sqrt{-G_5}\left[-\frac{1}{2}(\partial \phi )^2+e^{\frac{16\alpha \phi }{5}}R_5-8m^2e^{8\alpha \phi }\right],$$
(2)
where $`R_5`$ is the scalar curvature of the five sphere and we have set to zero all fields that are irrelevant to our discussion. Note that this action admits the $`AdS_5`$ solution with constant breathing mode $`\phi =\phi _0`$ given by
$$e^{\frac{24\alpha }{5}\phi _0}=\frac{R_5}{20m^2}$$
(3)
and
$$R_{MN}=-4m^2e^{8\alpha \phi _0}(G_5)_{MN}.$$
(4)
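Note that equation (3) is simply the stationarity condition of the breathing-mode potential read off from (2): writing $`V(\phi )=8m^2e^{8\alpha \phi }-R_5e^{\frac{16\alpha \phi }{5}}`$, one has

$$V^{\prime }(\phi _0)=64\alpha m^2e^{8\alpha \phi _0}-\frac{16\alpha }{5}R_5e^{\frac{16\alpha \phi _0}{5}}=0\Rightarrow e^{\frac{24\alpha }{5}\phi _0}=\frac{R_5}{20m^2},$$

and substituting the constant $`\phi _0`$ back into the Einstein equations yields the $`AdS_5`$ condition (4).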
Now the fifth dimension is a $`\frac{S^1}{Z_2}`$ orbifold. The fixed points would be orientifold planes and we place D-branes at one or other (or both) fixed points so as to have two (composite) branes at the ends of the fifth dimension. The picture is just like that in the ten dimensional type IA situation analyzed by Polchinski and Witten. In the original ten dimensional framework, the self dual five form field strength would satisfy, in the presence of a collection of $`n_0`$ branes located near $`x^4=0`$ and extending in the $`x^1,x^2,x^3`$ directions (counting -16 for the orientifold fixed plane), the equation $`dH_{(5)}=n_0\tau \delta (x^4)dx^4\wedge ϵ_{(5)}(S^5)`$, where $`\tau `$ is the charge of a single brane. We take $`x^0`$ to be the time like direction and $`x^5,\dots ,x^9`$ the directions along the five sphere. Substituting expression (1) into this gives $`\partial _{x^4}m(x^4)=n_0\tau \delta (x^4)`$. The solution for $`H_{(5)}`$ now takes the same form as before, i.e. (1), but with $`m`$ being now a piecewise continuous function of $`x^4`$ with a jump $`\mathrm{\Delta }m=n_0\tau `$ at $`x^4=0`$ and a similar jump at $`x^4=\pi R`$. Using also the symmetry under $`x^4\to -x^4`$, we find the sort of situations illustrated in figs. 1, 2, 3.
Note that the quantization of $`H_5`$ also follows directly from its coupling to the $`D3`$ brane. Consistent coupling requires by a standard argument $`\tau \int _{S^5}H_5=2\pi n`$ where $`n`$ is an integer and $`\tau =2\pi M_s^4`$ is the $`D_3`$ brane charge in terms of the string scale $`M_s`$. Using equation (1) for $`H_5`$ and $`\mathrm{Vol}(S^5)=\pi ^3r^5`$ we then get
$$m=n\left(\frac{1}{4\pi ^3M_s^4r^5}\right).$$
The unit for $`m`$ given in the figures is then the quantity in parentheses in the above equation.
In the above, we should point out that we have assumed not only that there is a valid string description as in Berkovits et al but also that the corresponding five dimensional orientifolds have charges $`-16`$. These values are just given for illustrative purposes, and of course if the general scenario is valid with different charges, then the examples would have to be modified accordingly. We should also point out that alternatively, one might try to justify such a picture by considering the following procedure. First, compactify type I string theory on a five torus. Taking the T-dual picture one has a type IIB orientifold with $`2^5`$ orientifold 3-planes at the fixed points of the $`Z_2`$ orbifold symmetry of the dual torus and 32 D3-branes to cancel the tadpoles. Now let the size of the dual torus go to infinity while keeping some number of the D-branes at the fixed point at the origin. Adding the point at infinity should then presumably give an $`S^5/Z_2`$. It is not quite clear to us which compactification makes sense in the complete string theory, but given that the 5D gauged supergravity models appear to come from five sphere or squashed sphere compactifications, we will just consider these, assuming that a theory along the lines of Berkovits et al will allow our scenario.
As we will see below, the bulk is taken to be at the $`𝒩=2`$ critical point of the gauged supergravity potential. In the presence of the branes this is then broken down to $`𝒩=1`$ so that one may have a phenomenologically acceptable model on one or the other brane. Let us pick the one at $`x^4=0`$ to situate the standard model. The other one can then play the role of a hidden sector (as in the Horava-Witten theory ) where supersymmetry can be broken and then communicated to the visible brane. The only new point here is that any cosmological constant that is generated as a result of the SUSY breaking is compensated by adjustment of integration constants and $`R_5`$.
It should be remarked here that $`R_5`$ is on the same footing as the other integration constants (which occur in the five dimensional theory) from the point of view of the ten dimensional theory. It is not a modulus. The relevant modulus is the field $`\phi `$. $`R_5`$ is, from the point of view of the ten dimensional theory, and presumably of string theory, an integration constant appearing in the ansatz for the ten dimensional metric. For compactification on a Ricci flat metric such as a Calabi-Yau metric, this constant can be absorbed into $`\phi `$. In our case as long as $`m`$ is non-zero i.e. the potential is not runaway, $`R_5`$ cannot be absorbed into the modulus $`\phi `$. We stress again that this is not a mechanism that was available with standard compactification scenarios (Calabi-Yau, Orbifold etc).<sup>5</sup><sup>5</sup>5Recently, other warped compactifications, in the context of M/F theory, have appeared where a RS type scenario with explicit branes is constructed. However for these Calabi-Yau compactifications with G-flux, it is not possible to use our mechanism, since the compact space is Ricci flat, so one does not have the required freedom in the choice of constants to cancel the CC.
The next issue is what is the appropriate 5D effective theory. Our choice is the $`𝒩=2`$ supersymmetric vacuum of , which corresponds to two non trivial scalars relaxed to their critical values plus a non trivial breathing mode, a generalization of the model with one squashing mode and a non trivial breathing mode of . The scalar potential of this model is expected to have a contribution from the curvature of the compact space and a number of contributions from non-trivial 5-form and 3-form fluxes. For simplicity, we will truncate the potential to only the curvature term and the 5-form flux term, since these are sufficient to demonstrate our point. A more complete treatment should include all the other terms. The potential is then just as in the round sphere case (2), and it is:
$$V(\phi )=-R_5e^{a\phi }+8m^2e^{b\phi }.$$
(5)
## 3 Flat Domain Wall Solutions - Brane Worlds
The equations of motion corresponding to the action (2), with $`\phi =\phi (x^4)`$ and the flat four dimensional domain wall $`5D`$ metric
$$ds_5^2=e^{2A(x^4)}\eta _{\mu \nu }dx^\mu dx^\nu +(dx^4)^2,$$
(6)
where $`\eta _{\mu \nu }=diag(-1,+1,+1,+1)`$, are
$$\phi ^{\prime \prime }+4A^{\prime }\phi ^{\prime }=\frac{\partial V(\phi )}{\partial \phi },$$
(7)
$$A^{\prime \prime }=-\frac{1}{6}\phi ^{\prime 2},$$
(8)
$$A^{\prime 2}=-\frac{1}{12}V(\phi )+\frac{1}{24}\phi ^{\prime 2}.$$
(9)
The prime denotes differentiation with respect to $`x^4`$. We emphasize that our domain wall ansatz is that we are looking for solutions with flat four dimensional slices of the five dimensional geometry.
It has been shown that by introducing a “superpotential” $`W`$, the second order differential equations reduce to , :
$$\phi ^{\prime }=\frac{\partial W(\phi )}{\partial \phi },$$
(10)
$$A^{\prime }=-\frac{1}{6}W,$$
(11)
$$V(\phi )=\frac{1}{2}\left(\frac{\partial W}{\partial \phi }\right)^2-\frac{1}{3}W^2.$$
(12)
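This reduction is easy to confirm symbolically. The following sketch (ours, with sympy) checks that any $`W(\phi )`$ obeying (12), together with the flow equations (10) and (11), reproduces the second order system (7)–(9) identically:

```python
# Check (ours): the first-order flow (10)-(11) with relation (12) implies (7)-(9).
import sympy as sp

p = sp.symbols('phi')
W = sp.Function('W')(p)
Wp = W.diff(p)

V = sp.Rational(1, 2) * Wp**2 - sp.Rational(1, 3) * W**2   # eq. (12)
phi1 = Wp                  # eq. (10): phi' = dW/dphi
A1 = -W / 6                # eq. (11): A'  = -W/6
phi2 = Wp.diff(p) * phi1   # chain rule along the flow: phi'' = W''(phi) phi'
A2 = A1.diff(p) * phi1     # A'' = -(1/6) W' phi'

assert sp.simplify(phi2 + 4 * A1 * phi1 - V.diff(p)) == 0   # eq. (7)
assert sp.simplify(A2 + sp.Rational(1, 6) * phi1**2) == 0   # eq. (8)
assert sp.simplify(A1**2 + V / 12 - phi1**2 / 24) == 0      # eq. (9) constraint
```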
This is a system of decoupled first order differential equations instead of the coupled second order differential equations that we had before. A particular ansatz for $`W`$ that satisfies the last condition for $`V(\phi )`$ of the form (5), is
$$W=p_1e^{\frac{a}{2}\phi }+p_2e^{\frac{b}{2}\phi },$$
(13)
with
$$p_1=\pm 2\sqrt{\frac{2R_5}{a(b-a)}}\quad \mathrm{and}\quad p_2=\pm 2\sqrt{\frac{16m^2}{b(b-a)}}.$$
(14)
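One can check this particular solution directly; the sketch below (ours) substitutes (13)–(14) into (12) and recovers the potential (5). The cross term in $`\frac{1}{2}(\partial W/\partial \phi )^2-\frac{1}{3}W^2`$ cancels precisely because $`ab=8/3`$ — which, as noted above, is our reading of the round-sphere exponents $`a=16\alpha /5`$, $`b=8\alpha `$:

```python
# Check (ours): the ansatz (13) with coefficients (14) satisfies (12),
# provided a*b = 8/3 (true for a = 16 alpha/5, b = 8 alpha).
import sympy as sp

p, a, R5, m = sp.symbols('phi a R5 m', positive=True)
b = sp.Rational(8, 3) / a                 # enforce a*b = 8/3

p1 = 2 * sp.sqrt(2 * R5 / (a * (b - a)))
p2 = 2 * sp.sqrt(16 * m**2 / (b * (b - a)))
W = p1 * sp.exp(a * p / 2) + p2 * sp.exp(b * p / 2)

V = sp.Rational(1, 2) * W.diff(p)**2 - sp.Rational(1, 3) * W**2
assert sp.simplify(V - (-R5 * sp.exp(a * p) + 8 * m**2 * sp.exp(b * p))) == 0
```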
This ansatz essentially solves the system, but it seems that we have to pay a price. Namely, we have lost one integration constant because (13) is a particular solution of (12) and does not have the integration constant that a general solution should have.
Note that the original second order system apparently has four integration constants. However, (9) is a constraint equation which gives one relation between these constants. Also the zero mode of $`A`$ can always be absorbed in a rescaling of coordinates. Thus the original system has two independent integration constants. As stressed by , the first order system must be completely equivalent to the second order system and the parameter space should have the same dimension. This is of course possible only if we use the general solution for $`W`$ in which case the first order system will also have two independent integration constants, one in the solution for $`W`$ and one in the solution for $`\phi `$. Of course if we work with the particular solution (13), we would be restricting ourselves to a one parameter subspace of the solution space. We will come back to the problem of the lost integration constant later.
For the moment, we have to solve the following system of first order differential equations:
$$\phi ^{\prime }=\frac{1}{2}ap_1e^{\frac{a}{2}\phi }+\frac{1}{2}bp_2e^{\frac{b}{2}\phi },$$
(15)
$$A^{\prime }=-\frac{1}{6}p_1e^{\frac{a}{2}\phi }-\frac{1}{6}p_2e^{\frac{b}{2}\phi }.$$
(16)
Integrating (15) gives the Lerch transcendent as the solution,
$$x^4+c_1^\phi =\frac{4}{ap_1}\sum _{n=0}^{\infty }\frac{\left(-\frac{bp_2}{ap_1}\right)^n}{(a-b)n+a}\left[1-e^{-\left((a-b)n+a\right)\phi /2}\right],$$
(17)
which is quite hard to invert for general $`\phi `$, so as to obtain an exact solution for the warp factor. It is much easier to work in certain interesting limits:
* $`\phi \rightarrow \phi _0`$.
Let us first solve (15) and (16) for $`\phi =\phi _0`$, the value of $`\phi `$ at the critical point of $`V`$. We have $`\phi ^{\prime }=0`$ and therefore $`\frac{\partial W}{\partial \phi }=0`$, which implies through $`(\text{12})`$ that this vacuum characterizes a critical point (in fact a minimum) of the scalar potential. Furthermore, from (16) we see that it corresponds to an exact $`AdS`$ vacuum, since $`A`$ has a linear dependence on $`x^4`$. The condition, therefore, that determines $`\phi _0`$, is
$$ap_1e^{\frac{a}{2}\phi _0}+bp_2e^{\frac{b}{2}\phi _0}=0.$$
(18)
Let us now solve the equations for a vacuum that is near the $`\phi =\phi _0`$ vacuum. For that, we assume
$$\phi =\phi _0+ϵf(x^4),$$
(19)
with $`ϵ`$ a small number. Substituting this ansatz into (15), we find that $`f`$ satisfies the differential equation $`f^{\prime }=4kf`$ and therefore
$$\phi =\phi _0+c_1^\phi e^{4kx^4},\qquad k^2=\frac{1}{32}a(b-a)R_5e^{a\phi _0},$$
(20)
where $`k`$ and $`p_1`$ are of opposite signs and $`c_1^\phi `$ is the integration constant with $`ϵ`$ absorbed into it. Solving (16) for $`A`$ yields the (almost $`AdS`$) warp factor:
$$A(x^4)=kx^4+𝒪(ϵ^2).$$
(21)
The first term in the above corresponds to the $`AdS`$ part. The solution (20) and (21), however, is valid only for large $`|x^4|`$. In particular, if $`k<0`$ (that is, if $`p_1>0`$) then the solution is valid when $`x^4\rightarrow +\infty `$ and if $`k>0`$ ($`p_1<0`$) then the solution is valid when $`x^4\rightarrow -\infty `$. Finally, (18) implies that $`p_1`$ and $`p_2`$ must come with opposite signs.
* $`\phi \rightarrow +\infty `$.
In this limit, we have
$$\phi ^{\prime }\simeq \frac{1}{2}bp_2e^{\frac{1}{2}b\phi },$$
(22)
which can be integrated to give
$$\phi =-\frac{2}{b}\left[\mathrm{ln}\left(-\frac{b^2}{4}p_2x^4+c_1^\phi \right)\right],$$
(23)
where $`c_1^\phi `$ is an integration constant. The above solution is valid for $`-\frac{b^2}{4}p_2x^4+c_1^\phi \rightarrow 0^+`$. The solution for the warp factor turns out to be
$$e^{2A(x^4)}=\left(-\frac{b^2}{4}p_2x^4+c_1^\phi \right)^{\frac{4}{3b^2}},$$
(24)
up to an (irrelevant) overall integration constant. Thus in this limit, we have $`e^{2A}\rightarrow 0^+`$.
* $`\phi \rightarrow -\infty `$.
In this limit, we have
$$\phi ^{\prime }\simeq \frac{1}{2}ap_1e^{\frac{1}{2}a\phi },$$
(25)
giving
$$\phi =-\frac{2}{a}\left[\mathrm{ln}\left(-\frac{a^2}{4}p_1x^4+c_1^\phi \right)\right],$$
(26)
which implies that $`-\frac{a^2}{4}p_1x^4+c_1^\phi \rightarrow +\infty `$ for any finite $`c_1^\phi `$.
The warp factor is
$$e^{2A(x^4)}=\left(-\frac{a^2}{4}p_1x^4+c_1^\phi \right)^{\frac{4}{3a^2}},$$
(27)
up to the usual overall constant. Thus, we have $`e^{2A}\rightarrow +\infty `$ in this limit.
As was already noticed in , the above solution has two separate branches, one which corresponds to $`\phi \in (\phi _0,+\infty )`$ for $`x^4\in (-\infty ,\frac{4}{b^2p_2}c_1^\phi )`$ and one which corresponds to $`\phi \in (\phi _0,-\infty )`$ for $`x^4\in (-\infty ,+\infty )`$. We will see later that the separation of the solution in two distinct branches is probably special to our specific choice (13) of $`W`$ and that this choice is equivalent to choosing a gauge where $`c_2^\phi =0`$. The first branch is singular at $`x^4=\frac{4}{b^2p_2}c_1^\phi `$ because the warp factor vanishes, so we neglect it.
As an example, in fig. 4, we plot the first branch solutions ($`I`$ and $`II`$) for the warp factor found in . They correspond to $`p_1<0`$ and $`p_1>0`$ respectively, i.e. solution $`II`$ can be obtained from solution $`I`$ by a reflection around the origin $`x^4=0`$. Solution $`I`$ corresponds to patching together in a smooth way the above found solutions for $`\phi \rightarrow \phi _0`$ and $`\phi \rightarrow -\infty `$, in which case the singularity encountered in the $`\phi \rightarrow +\infty `$ regime is avoided. Solutions $`I`$ and $`II`$ individually do not possess reflection symmetry around the origin, but they are the ones appropriate for constructing models with explicit branes inserted in the bulk. More specifically, we can construct a brane world scenario as follows: Consider the solutions of fig. 4 and choose a region in the $`x^4`$ coordinate, symmetric around $`x^4=0`$. The region is, say, the interval $`[-\pi R,+\pi R]`$. If $`-\pi R<x^4<0`$, take solution $`I`$ as it is, and if $`0\le x^4\le +\pi R`$, take solution $`II`$ to get a $`Z_2`$ symmetric situation. The orbifold will then be the region $`0<x^4<\pi R`$. A one brane world can be obtained by taking $`R`$ to infinity. This construction, for the warp factor, is illustrated in fig. 5.
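The qualitative shape of solution $`I`$ is easy to reproduce numerically. The following sketch (ours, with $`R_5=8m^2=1`$ and our round-sphere exponents) integrates the flow (15)–(16) from just below $`\phi _0`$ and recovers both asymptotic regimes — the $`AdS`$ plateau $`\phi \rightarrow \phi _0`$ for $`x^4\rightarrow -\infty `$ and the power-law warp $`e^{2A}\sim (x^4)^{4/3a^2}`$ of (27) for $`x^4\rightarrow +\infty `$:

```python
# Numerical sketch (ours): "solution I" of the flow equations (15)-(16),
# with R5 = 8 m^2 = 1, p1 < 0, p2 > 0 (so k > 0).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.25 * np.sqrt(5.0 / 3.0)
a, b = 16 * alpha / 5, 8 * alpha
p1 = -2.0 * np.sqrt(2.0 / (a * (b - a)))     # from (14) with R5 = 1
p2 = +2.0 * np.sqrt(2.0 / (b * (b - a)))     # from (14) with 16 m^2 = 2
phi0 = np.log(a / b) / (b - a)               # eq. (18); about -0.59

def flow(x, y):
    phi, A = y
    ea, eb = np.exp(0.5 * a * phi), np.exp(0.5 * b * phi)
    return [0.5 * a * p1 * ea + 0.5 * b * p2 * eb,   # eq. (15)
            -(p1 * ea + p2 * eb) / 6.0]              # eq. (16)

fwd = solve_ivp(flow, [0.0, 2000.0], [phi0 - 0.1, 0.0], dense_output=True, rtol=1e-10)
bwd = solve_ivp(flow, [0.0, -40.0], [phi0 - 0.1, 0.0], rtol=1e-10)

print("phi(-40) - phi0:", bwd.y[0, -1] - phi0)       # -> 0 : AdS region
A1, A2 = fwd.sol(1000.0)[1], fwd.sol(2000.0)[1]
print("d(2A)/d ln x4:", 2 * (A2 - A1) / np.log(2.0),
      "vs 4/(3 a^2) =", 4 / (3 * a**2))              # power-law warp region
```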
With the $`D`$3-branes present, the $`5`$ dimensional action is modified to:
$`S(\phi )={\displaystyle \int d^5x\sqrt{-G_5}\left[-\frac{1}{2}(\partial \phi )^2-V(\phi )\right]}-{\displaystyle \int d^4x\sqrt{-G_4^{(1)}}T^{(1)}(\phi )}`$
$`-{\displaystyle \int d^4x\sqrt{-G_4^{(2)}}T^{(2)}(\phi )},`$ (28)
where $`G_{4\mu \nu }^{(i)}`$ is the induced metric on the $`(i)`$’th brane by $`G_{5\mu \nu }`$, and for our metric it is simply $`G_{4\mu \nu }^{(i)}=\eta _{\mu \nu }`$, and $`T^{(1)}(\phi )`$ and $`T^{(2)}(\phi )`$ are the tensions of the branes. The dependence of $`T`$ on $`\phi `$ is kept arbitrary since it will in general be affected by quantum effects on the brane when supersymmetry is broken. The action for the branes is written in the static gauge so that the embedding functions are $`x^\mu (\xi )=\xi ^\mu `$, $`\mu =0,\mathrm{},3`$ and we ignore their fluctuations. Also of course we have set all other fields on the brane to the minimum of the quantum effective action so that what we have kept is just an effective description of the ground state of the theory on the brane.
Now, in addition to the equations of motion, we have to satisfy the jump conditions at $`x^4=0`$ (and $`x^4=\pi R`$ in the two brane case), which constitute the connection between the two solutions at opposite sides of the brane(s). They are:
$$2A^{\prime }(x^4)=+\frac{1}{6}T^{(1)}(\phi (x^4))|_{x^4=0},$$
(29)
$$2A^{\prime }(x^4)=-\frac{1}{6}T^{(2)}(\phi (x^4))|_{x^4=\pi R},$$
(30)
$$2\phi ^{\prime }(x^4)=-\frac{\partial T^{(1)}}{\partial \phi }(\phi (x^4))|_{x^4=0},$$
(31)
$$2\phi ^{\prime }(x^4)=+\frac{\partial T^{(2)}}{\partial \phi }(\phi (x^4))|_{x^4=\pi R}.$$
(32)
### 3.1 More General Solutions
In a two-brane world, we need both nontrivial integration constants of the second order equations of motion (7), (8) and (9), because we have only $`R_5`$ and the size of the orbifold $`R`$ to satisfy four jump conditions. One might think a solution with two branes and a pure $`AdS`$ bulk is possible , since there are only two non trivial jump conditions to satisfy (one for each brane) and two available constants, $`R_5`$ and $`R`$. However, as was pointed out in , the two branes have equal and opposite tensions at every point of the RG flow and the tension for a flat brane solution is fixed only in terms of $`R_5`$. The orbifold size does not enter the jump conditions and therefore it cannot be used. In order to have a solution without fine tuning, we need a nontrivial scalar field in the bulk. But then, our initial choice of $`W`$ does not provide us with the two required integration constants since, as we explained, we have lost one integration constant in the choice (13). We therefore look for more general solutions to the equations of motion.
We make the ansatz
$$\phi =\phi _0+ϵf_1(x^4)+ϵ^2f_2(x^4)+ϵ^3f_3(x^4)+\mathrm{}$$
(33)
$$A=kx^4+ϵg_1(x^4)+ϵ^2g_2(x^4)+ϵ^3g_3(x^4)+\mathrm{}$$
(34)
with $`k>0`$, and we substitute it into (7), (8) and (9). We obtain an infinite set of differential equations. We show the result for (7) and (8), up to order $`ϵ^3`$:
$$f_1^{\prime \prime }+4kf_1^{\prime }+pf_1=0,\qquad g_1^{\prime \prime }=0$$
(35)
$$f_2^{\prime \prime }+4kf_2^{\prime }+pf_2-\frac{1}{2}a(b^2-a^2)R_5e^{a\phi _0}f_1^2+4f_1^{\prime }g_1^{\prime }=0,\qquad g_2^{\prime \prime }+\frac{1}{6}f_1^{\prime 2}=0$$
(36)
$`f_3^{\prime \prime }+4kf_3^{\prime }+pf_3-a(b^2-a^2)R_5e^{a\phi _0}f_1f_2-{\displaystyle \frac{1}{6}}a(b^3-a^3)R_5e^{a\phi _0}f_1^3+`$
$`+4f_1^{\prime }g_2^{\prime }+4f_2^{\prime }g_1^{\prime }=0,\qquad g_3^{\prime \prime }+{\displaystyle \frac{1}{3}}f_1^{\prime }f_2^{\prime }=0,`$ (37)
where $`p=-32k^2`$. The above pattern (which presumably continues) suggests that we can write the general solution for the field $`\phi `$ as:
$$\phi (x^4)=\phi _0+c_1^\phi e^{4kx^4}+c_2^\phi e^{-8kx^4}+P(x^4,c_1^\phi ,c_2^\phi ),$$
(38)
where the part of the expression with the exponentials is the general solution to the equation $`f^{\prime \prime }+4kf^{\prime }+pf=0`$ and $`P(x^4,c_1^\phi ,c_2^\phi )`$ is some function of $`x^4`$ containing also the two integration constants, and $`ϵ`$ has been absorbed into the integration constants. By looking at (35), (36) and (37), one can see that the solution is a small deviation from $`AdS`$ in two cases. One case is when $`x^4\rightarrow -\infty `$. Then, the expansion in $`ϵ`$ makes sense only if $`c_2^\phi =0`$ and therefore this case is just the solution (20). The second case is when $`x^4\approx 0`$ and $`c_1^\phi ,c_2^\phi <<1`$. This is a satisfactory solution with two explicit integration constants as long as the jump conditions allow them to be small. Another observation we can make here is that when $`c_2^\phi \ne 0`$, it is not clear that $`\phi `$ has two separate branches. Nevertheless, we can still apply the method for the construction of the orbifold with the only caveat that if the jump conditions require integration constants of order one or larger, we still do not have an explicit solution.
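The coefficient $`p=-32k^2`$ and the two exponents in (38) can be checked quickly. The sketch below (ours) verifies that $`V^{\prime \prime }(\phi _0)=a(b-a)R_5e^{a\phi _0}=32k^2`$, and that the characteristic roots of $`f^{\prime \prime }+4kf^{\prime }-32k^2f=0`$ are exactly $`4k`$ and $`-8k`$:

```python
# Check (ours): the linearization of (7) about phi0.
import sympy as sp

lam = sp.symbols('lam')
k = sp.symbols('k', positive=True)
assert set(sp.solve(lam**2 + 4 * k * lam - 32 * k**2, lam)) == {4 * k, -8 * k}

a, b, R5, m, phi = sp.symbols('a b R5 m phi', positive=True)
V = -R5 * sp.exp(a * phi) + 8 * m**2 * sp.exp(b * phi)
phi0 = sp.log(a * R5 / (8 * b * m**2)) / (b - a)     # V'(phi0) = 0
diff = V.diff(phi, 2).subs(phi, phi0) - a * (b - a) * R5 * sp.exp(a * phi0)
assert abs(float(diff.subs({a: 1.03, b: 2.58, R5: 2.0, m: 0.75}))) < 1e-9
```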
We will now find a solution to the second order equations of motion in the $`\phi <\phi _0`$ region which will be valid for a larger range of the integration constants. Define $`H\equiv A^{\prime }`$ and denote $`\frac{\partial }{\partial \phi }`$ with a dot. Also, for simplicity we will assume that $`R_5=8m^2=1`$, even though we have to keep in mind that $`R_5`$ is really a parameter determined by the integration constants. Combining equations (8) and (9), we obtain the equation $`\dot{H}^2=\frac{1}{36}(24H^2+2V)`$. Let us assume that $`24H^2>>2|V|`$. We will verify soon that this is a valid assumption in the regime of interest. Then, we can easily solve for $`H`$: $`H=e^{\sqrt{\frac{2}{3}}\phi }`$, where we have chosen the positive branch of the square root. Using the value of $`\phi _0`$ from the minimization of $`V`$, we deduce that $`2V`$ could be consistently dropped provided that
$$\frac{1}{12}e^{a\phi }\left(1-\frac{a}{b}e^{(b-a)(\phi -\phi _0)}\right)\ll e^{2\sqrt{\frac{2}{3}}\phi }.$$
(39)
Since our solution corresponds to $`\phi \in (-\infty ,\phi _0)`$, a numerical estimate tells us that the above is true if, approximately, $`-3<\phi <-0.6`$ (for $`R_5=8m^2=1`$, we get from (18) $`\phi _0\approx -0.6`$). Next, using the above solution for $`H`$, equation (7) becomes $`\phi ^{\prime \prime }+4e^{\sqrt{\frac{2}{3}}\phi }\phi ^{\prime }-\dot{V}=0`$. In the $`\phi <\phi _0`$ region the potential $`V`$ approaches zero exponentially and its shape in this regime is rather flat, which means that $`\dot{V}\approx 0`$. Indeed, one can neglect $`\dot{V}`$ in the equation for $`\phi `$ provided that
$$\left|-ae^{a\phi }+be^{b\phi }\right|\ll 4e^{\sqrt{\frac{2}{3}}\phi }|\phi ^{\prime }|\mathrm{and}|\phi ^{\prime \prime }|.$$
(40)
Then we can solve for $`\phi `$ and we obtain
$$\phi (x^4)=\frac{\sqrt{6}}{2}\mathrm{ln}\frac{c_2^\phi }{1+12e^{\frac{\sqrt{6}}{3}c_2^\phi (c_1^\phi +x^4)}}+c_2^\phi (c_1^\phi +x^4)+\frac{\sqrt{6}}{4}\mathrm{ln}6.$$
(41)
By computing $`\phi ^{\prime }`$ and $`\phi ^{\prime \prime }`$ from the above expression, one can verify that (40) is true in the relevant range of $`\phi `$ for a large range of $`c_1^\phi `$ and $`c_2^\phi `$, when $`x^4`$ is of order 0–10. <sup>6</sup><sup>6</sup>6(40) is in fact true for any finite $`x^4`$ for an appropriate choice of the order of magnitude of the integration constants, but the regime around $`x^4=0`$ is the interesting one since it is where the branes are located.
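The profile (41) can also be verified directly. The following sketch (ours) confirms with sympy that it solves $`\phi ^{\prime \prime }+4e^{\sqrt{2/3}\phi }\phi ^{\prime }=0`$ — the equation obtained above once $`\dot{V}`$ is dropped — exactly, for arbitrary $`c_1^\phi ,c_2^\phi `$:

```python
# Check (ours): eq. (41) solves phi'' + 4 e^{sqrt(2/3) phi} phi' = 0 identically.
import sympy as sp

x, c1, c2 = sp.symbols('x4 c1 c2', positive=True)
s = c2 * (c1 + x)
phi = (sp.sqrt(6) / 2) * sp.log(c2 / (1 + 12 * sp.exp(sp.sqrt(6) * s / 3))) \
      + s + (sp.sqrt(6) / 4) * sp.log(6)

residual = phi.diff(x, 2) + 4 * sp.exp(sp.sqrt(sp.Rational(2, 3)) * phi) * phi.diff(x)
assert abs(complex(residual.subs({c1: 0.3, c2: 0.7, x: 1.2}))) < 1e-12  # spot check
print(sp.simplify(residual))   # -> 0, i.e. an exact solution
```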
For $`\phi >>\phi _0`$, we can take similar steps. We can again assume for simplicity that $`R_5=8m^2=1`$. The derivative of the warp factor is obtained by solving the equation $`\dot{H}=\frac{\sqrt{2}}{6}e^{\frac{b}{2}\phi }`$, which yields $`H=\frac{\sqrt{2}}{3b}e^{\frac{b}{2}\phi }`$ up to an irrelevant integration constant. The equation for $`\phi `$ becomes $`\phi ^{\prime \prime }=be^{b\phi }`$, where we have ignored the smaller terms. The solution for $`\phi `$ is then
$$\phi (x^4)=\frac{\sqrt{15}}{10}\mathrm{ln}\left[\frac{c_1^\phi }{2}\left[1+\mathrm{tan}^2\left(\sqrt{\frac{5c_1^\phi }{3}}(x^4+c_2^\phi )\right)\right]\right].$$
(42)
Finally, we note that one can solve the equations of motion in an analogous fashion for arbitrary $`R_5`$.
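As with (41), the solution (42) can be checked directly; the sketch below (ours) verifies that it solves $`\phi ^{\prime \prime }=be^{b\phi }`$ with $`b=8\alpha =2\sqrt{15}/3`$ (note that the prefactor $`\sqrt{15}/10`$ is just $`1/b`$):

```python
# Check (ours): eq. (42) solves phi'' = b e^{b phi} with b = 2 sqrt(15)/3.
import sympy as sp

x, c1, c2 = sp.symbols('x4 c1 c2', positive=True)
b = 2 * sp.sqrt(15) / 3
phi = (sp.sqrt(15) / 10) * sp.log(
    (c1 / 2) * (1 + sp.tan(sp.sqrt(5 * c1 / 3) * (x + c2))**2))

residual = phi.diff(x, 2) - b * sp.exp(b * phi)
assert abs(complex(residual.subs({c1: 0.4, c2: 0.2, x: 0.3}))) < 1e-12  # spot check
print(sp.simplify(residual))   # -> 0, i.e. an exact solution
```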
We can now construct the orbifold as in fig. 5 and satisfy the four jump conditions. An interesting fact is that to do so, along with $`c_1^\phi `$ and $`c_2^\phi `$, we have to use $`R_5`$ and $`R`$, which provides us with an RG-scale dependent determination of the $`5D`$ dilaton and the orbifold size.
## 4 Conclusions
In this paper we have suggested a string theoretic framework for a two brane scenario. The main issue we have addressed is the possibility of obtaining a flat four dimensional world on the branes. The supersymmetric case is a degenerate one in which the matching conditions (equations (29)–(32)) are automatically satisfied. When supersymmetry is broken, however, these conditions are non-trivial. The solutions to the equations of motion and constraint have two independent integration constants. In addition there is the distance between the branes, yielding in general three adjustable constants in all. As was observed in , this would mean that there would have to be a tunable parameter either in the bulk potential or the brane tension.
The main observation of this paper is that there is a possible string construction in which the appearance of such a tunable parameter in the five dimensional theory is manifest. Indeed at the ten dimensional level there is no tuning; the parameter appears in the compactification and is in this sense an integration constant. Its appearance is related to the fact that we are dealing with a Ricci non-flat compactification (which gives a potential with a critical point for the breathing mode).
Solutions in the presence of supersymmetry breaking must necessarily have the complete set of free parameters that we have enumerated above. In particular the solutions for the warp factor and the breathing mode must contain the two independent integration constants. Unfortunately it was not possible to obtain exact analytic solutions displaying this, but we have shown how to obtain approximate solutions that contain the two integration constants. In other words we have demonstrated the existence of (approximate) solutions that can be used to support flat branes after supersymmetry breaking, justifying the picture illustrated in figure 5.
There is one major unsolved problem though. What we have done strictly speaking is to demonstrate the existence of a flat two brane scenario after supersymmetry breaking in the context of type IIB supergravity. In the paper we have suggested that this ought to imply that there is a corresponding microscopic i.e. string theoretic implementation but this was not explicitly demonstrated. In particular in the supersymmetric situation, the D-brane tension in five dimensions should be obtained from the usual formula for D-brane tension in ten dimensions. However, as was demonstrated in recent papers <sup>7</sup><sup>7</sup>7These appeared after the first version of this paper was published on the e-print archive hep-th. , five dimensional supersymmetry requires that the tension $`T(\phi )`$ is essentially the superpotential $`W`$ of equation (13). It is not at all clear how this comes from the usual ten dimensional action for the D3 brane though it is possible that this action needs to be modified for string theory in the presence of RR flux and Ricci-non-flat compactifications. Perhaps one should expect a simple relation to the 10 dimensional tension only at the maximally supersymmetric point, the $`𝒩`$=8 critical point (3). At this point one finds that the tension in five dimensions $`T(\phi _0)=\frac{3}{4}T_{D_3}`$, a relation that was first found by . However this apparently can be justified from the string point of view . Perhaps this is all that is needed, but the matter is not entirely clear to us and is currently under investigation.
The phenomenological importance of our scenario is that we can have an $`𝒩=2`$ bulk five dimensional theory (by compactifying on a squashed five sphere as for example in ), so that the brane theory would be $`𝒩=1`$ in four dimensions. The implication in four dimensions is that we have an adjustable constant (essentially an integration constant) in the superpotential that can be used to set the cosmological constant to zero after supersymmetry breaking (say by gaugino condensation). If this scenario can be completely justified (i.e. if the problem mentioned above can be solved), then it would be the first time that a string theoretic justification could be given for adding an adjustable constant to the superpotential.
###### Acknowledgments.
This work is partially supported by the Department of Energy contract No. DE-FG02-91-ER-40672.
# On unentangled Gleason theorems for quantum information theory
## I Introduction
In an interesting recent paper Wallach obtained an *unentangled* Gleason theorem. His work was motivated by fundamental problems in quantum information theory, in particular: to what extent do local operations and measurements on multipartite quantum systems suffice to guarantee the validity of a theorem of Gleason-type and thus a Born-type rule for probabilities? His positive result is formulated in terms of partially defined frame functions, defined only on the *unentangled states* of a finite product of finite dimensional Hilbert spaces. We shall show that Wallach’s theorem, and also its generalisation to infinite dimensions, can readily be derived from results which were obtained by us in our investigations of generalised Gleason theorems for quantum bi-measures and multi-measures . The physical motivation for our earlier work arose from the so-called histories approach to quantum mechanics . Our more general approach relies on the generalised Gleason theorem obtained by Bunce and one of us .
## II Preliminaries
Throughout this note ℋ is a Hilbert space, $`𝒮(ℋ)`$ is the set of unit vectors in ℋ, and the sets of projections, compact operators or bounded operators are denoted by $`𝒫(ℋ)`$, $`𝒦(ℋ)`$, or $`ℬ(ℋ)`$ respectively.
A *quantum measure* for ℋ is a map $`m:𝒫(ℋ)→ℝ`$ such that $`m(p+q)=m(p)+m(q)`$ whenever $`p`$ and $`q`$ are orthogonal. If $`m`$ takes only positive values and $`m(1)=1`$, then $`m`$ is a *quantum probability measure*. If, whenever $`\{p_i\}_{i∈I}`$ is a family of mutually orthogonal projections, $`\sum _im(p_i)`$ is absolutely convergent and $`m(\sum _ip_i)=\sum _im(p_i)`$, then $`m`$ is said to be *completely additive*.
The essential content of Gleason’s original theorem is that if $`m`$ is a positive, completely additive quantum measure on $`𝒫(ℋ)`$, then it has a unique extension to a positive normal functional $`\varphi _m`$ on $`ℬ(ℋ)`$, whenever the Hilbert space ℋ is not of dimension 2. It then follows from routine functional analysis that there exists a unique positive, self-adjoint trace class operator $`T`$ on ℋ such that $`\varphi _m(x)=\mathrm{Tr}(Tx)`$ for each $`x∈ℬ(ℋ)`$, i.e., $`m(p)=\mathrm{Tr}(Tp)`$ for each $`p∈𝒫(ℋ)`$. As a tool to help him prove his theorem, Gleason introduced the notion of a frame function. A (positive) *frame function* for ℋ is a function $`f:𝒮(ℋ)→ℝ^+`$ such that there exists a real number $`w`$ (the *weight* of $`f`$) such that, for any orthonormal basis $`\{x_i\}_{i∈I}`$ of ℋ, $`\sum _if(x_i)=w`$. There is a bijective correspondence between frame functions for ℋ and positive, completely additive quantum measures on $`𝒫(ℋ)`$, see .
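As a small numerical illustration (ours, not part of the original argument) of this correspondence in finite dimensions: for a positive operator $`T`$, the function $`f(x)=⟨Tx,x⟩`$ is a frame function whose weight $`\sum _if(x_i)=\mathrm{Tr}(T)`$ is the same for every orthonormal basis.

```python
# Demo (ours): f(x) = <Tx, x> is a frame function; its weight is Tr(T)
# for every orthonormal basis of C^d.
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
T = B @ B.conj().T                      # a positive operator on C^5

for _ in range(3):
    # a random orthonormal basis from a QR decomposition
    Q, _r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    weight = sum(np.vdot(Q[:, i], T @ Q[:, i]).real for i in range(d))
    print(round(weight, 10), "== Tr(T) =", round(np.trace(T).real, 10))
```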
## III Unentangled frame functions and quantum multi-measures
Let $`ℋ_1,\dots ,ℋ_n`$ be Hilbert spaces. An *unentangled* element of $`ℋ_1\otimes \cdots \otimes ℋ_n`$ is a vector which can be expressed in the form $`x_1\otimes \cdots \otimes x_n`$. (Unentangled elements are sometimes referred to as *simple tensors*.) Let $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ be the set of all unentangled vectors of norm 1 in $`ℋ_1\otimes \cdots \otimes ℋ_n`$. Every element in $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ can be expressed as a tensor product of unit vectors in $`ℋ_1,\dots ,ℋ_n`$ respectively. Following Wallach , an *unentangled frame function* for $`ℋ_1,\dots ,ℋ_n`$ is a function $`f:\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)→ℝ^+`$ such that, for some positive real number $`w`$ (the *weight* of $`f`$), whenever $`\{\xi _i\}_{i∈I}`$ is an orthonormal basis of $`ℋ_1\otimes \cdots \otimes ℋ_n`$ with each $`\xi _i∈\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$, then $`\sum _if(\xi _i)=w`$. The physical idea behind this definition is that the elements of $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ represent the outcomes of elementary local operations or measurements.
It turns out that unentangled frame functions have natural links with quantum multi-measures. For the purposes of this note we define a (positive) *quantum multi-measure* for $`ℋ_1,\dots ,ℋ_n`$ to be a function $`m:𝒫(ℋ_1)\times \cdots \times 𝒫(ℋ_n)→ℝ^+`$, such that $`m`$ is completely orthoadditive in each variable separately, see . (Our results in apply to more general, vector valued quantum multi-measures.) When $`n=2`$, a multi-measure is called a *bi-measure*. These arise naturally in the study of quantum decoherence functionals .
###### Lemma III.1
Let $`ℋ_1,\dots ,ℋ_n`$ be Hilbert spaces, none of which is of dimension 2. Let $`m`$ be a (positive) quantum multi-measure for $`ℋ_1,\dots ,ℋ_n`$. Then there exists a unique bounded, multi-linear map $`M:ℬ(ℋ_1)\times \cdots \times ℬ(ℋ_n)→ℂ`$, such that
$`M(p_1,p_2,\dots ,p_n)=m(p_1,p_2,\dots ,p_n)\text{ for each }p_j∈𝒫(ℋ_j).`$ Furthermore, given $`r`$, with $`1\le r\le n`$ and assuming $`n\ge 2`$, for each positive $`x_j∈ℬ(ℋ_j)`$, with $`1\le j\le n`$ and $`j\ne r`$, the map $`y↦M(x_1,\dots ,x_{r-1},y,x_{r+1},\dots ,x_n)`$ is a positive normal functional on $`ℬ(ℋ_r)`$.
*Proof*: The existence and uniqueness of $`M`$ is a consequence of results obtained in . Whenever $`x`$ is a positive operator in $`ℬ(ℋ)`$, there exists a sequence of commuting projections $`\{p_j\}_{j=1,2,\mathrm{}}`$ such that $`x=‖x‖\sum _j\frac{1}{2^j}p_j`$ (for a proof see, e.g., page 27). This observation, together with the positivity of $`m`$, shows that if $`x_r`$ is positive for $`r=1,2,\dots ,n`$, then $`M(x_1,\dots ,x_n)\ge 0`$. It now follows from the results of that given $`r`$, with $`1\le r\le n`$ and assuming $`n\ge 2`$, for each positive $`x_j∈ℬ(ℋ_j)`$, with $`1\le j\le n`$ and $`j\ne r`$, the map $`y↦M(x_1,\dots ,x_{r-1},y,x_{r+1},\dots ,x_n)`$ is a positive normal functional on $`ℬ(ℋ_r)`$. $`\mathrm{}`$
Let us recall that the algebraic tensor product $`ℬ(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}ℬ(ℋ_n)`$ may be identified with the linear span of $`\{x_1\otimes x_2\otimes \cdots \otimes x_n:x_j∈ℬ(ℋ_j)\}`$ in (the von Neumann tensor product) $`ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)=ℬ(ℋ_1)\otimes \cdots \otimes ℬ(ℋ_n)`$. Let $`m`$ and $`M`$ be as in Lemma III.1; then, by the basic property of the algebraic tensor product, there exists a unique linear functional $`𝔐`$ on $`ℬ(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}ℬ(ℋ_n)`$ such that
$`𝔐(x_1\otimes x_2\otimes \cdots \otimes x_n)=M(x_1,x_2,\dots ,x_n)`$.
###### Corollary III.2
Let $`ℋ_1,\dots ,ℋ_n`$ be finite dimensional Hilbert spaces, none of which has dimension 2. Let $`m`$ be a positive quantum multi-measure for $`ℋ_1,\dots ,ℋ_n`$. Then there exists an unentangled frame function $`f`$ for $`ℋ_1,\dots ,ℋ_n`$ such that, whenever $`\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n`$ is in $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ and $`p_j`$ is the projection of $`ℋ_j`$ onto the one-dimensional subspace generated by $`\nu _j`$,
$`f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)=m(p_1,p_2,\dots ,p_n).`$
*Proof*: Fix a unit vector $`\nu _j`$ in $`ℋ_j`$ for $`j=1,2,\dots ,n`$. Then the projection from $`ℋ_1\otimes \cdots \otimes ℋ_n`$ onto the subspace spanned by $`\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n`$ can be identified with the projection $`p_1\otimes p_2\otimes \cdots \otimes p_n`$ in $`ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)=ℬ(ℋ_1)\otimes \cdots \otimes ℬ(ℋ_n)`$. Define $`f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)`$ to be $`𝔐(p_1\otimes p_2\otimes \cdots \otimes p_n)=M(p_1,p_2,\dots ,p_n)=m(p_1,p_2,\dots ,p_n)`$. $`\mathrm{}`$
The following technical lemma allows us to associate a canonical multi-measure with each unentangled frame function.
###### Lemma III.3
Let $`ℋ_1,\dots ,ℋ_n`$ be Hilbert spaces of arbitrary dimension and let $`f:\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)→ℝ^+`$ be an unentangled frame function. Then there is a (positive, completely additive) quantum multi-measure $`m`$ for $`ℋ_1,\dots ,ℋ_n`$ such that whenever $`\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n`$ is in $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ and $`p_j`$ is the projection of $`ℋ_j`$ onto the one-dimensional subspace generated by $`\nu _j`$,
$`m(p_1,p_2,\dots ,p_n)=f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n).`$
*Proof*: To simplify our notation we shall prove this for $`n=2`$, but the method is perfectly general.
Let $`e_1`$ and $`e_2`$ be projections in $`𝒫(ℋ_1)`$ and $`𝒫(ℋ_2)`$, respectively. Let $`E_1`$ and $`E_2`$ be the subspaces of $`ℋ_1`$ and $`ℋ_2`$ which are the respective ranges of $`e_1`$ and $`e_2`$. Let $`\{\xi _j\}_{j∈J}`$ and $`\{\psi _i\}_{i∈I}`$ be orthonormal bases of $`E_1`$ and $`E_2`$, respectively. We wish to define $`m(e_1,e_2)`$ to be
$`{\displaystyle \underset{j∈J}{\sum }}{\displaystyle \underset{i∈I}{\sum }}f(\xi _j\otimes \psi _i).`$ The only difficulty here is that we do not know that this number is independent of the choice of orthonormal bases for $`E_1`$ and $`E_2`$, respectively. To establish this we argue as follows.
Let $`w`$ be the weight of $`f`$. Let $`\{\xi _j\}_{j∈J^{\prime }}`$ and $`\{\psi _i\}_{i∈I^{\prime }}`$ be orthonormal bases for $`E_1^{\perp }`$ and $`E_2^{\perp }`$, respectively. Then $`\{\xi _j\otimes \psi _i\}_{j∈J\cup J^{\prime },i∈I\cup I^{\prime }}`$ is an orthonormal basis for $`ℋ_1\otimes ℋ_2`$. So
$`{\displaystyle \underset{(j,i)∈J\times I}{\sum }}f(\xi _j\otimes \psi _i)+{\displaystyle \underset{(j,i)∈J^{\prime }\times (I\cup I^{\prime })}{\sum }}f(\xi _j\otimes \psi _i)+{\displaystyle \underset{(j,i)∈J\times I^{\prime }}{\sum }}f(\xi _j\otimes \psi _i)=w.`$ Let $`\{\xi _j^{\prime }\}_{j∈J}`$ and $`\{\psi _i^{\prime }\}_{i∈I}`$ be orthonormal bases of $`E_1`$ and $`E_2`$, respectively. Note that the mixed family obtained by replacing only the $`J\times I`$ part is still an orthonormal basis of $`ℋ_1\otimes ℋ_2`$ consisting of product vectors, so the defining property of an unentangled frame function applies to it. Then
$`{\displaystyle \underset{(j,i)∈J\times I}{\sum }}f(\xi _j^{\prime }\otimes \psi _i^{\prime })+{\displaystyle \underset{(j,i)∈J^{\prime }\times (I\cup I^{\prime })}{\sum }}f(\xi _j\otimes \psi _i)+{\displaystyle \underset{(j,i)∈J\times I^{\prime }}{\sum }}f(\xi _j\otimes \psi _i)=w.`$ Hence
$`{\displaystyle \underset{(j,i)∈J\times I}{\sum }}f(\xi _j^{\prime }\otimes \psi _i^{\prime })`$ $`=`$ $`w-{\displaystyle \underset{(j,i)∈J^{\prime }\times (I\cup I^{\prime })}{\sum }}f(\xi _j\otimes \psi _i)-{\displaystyle \underset{(j,i)∈J\times I^{\prime }}{\sum }}f(\xi _j\otimes \psi _i)`$
$`=`$ $`{\displaystyle \underset{(j,i)∈J\times I}{\sum }}f(\xi _j\otimes \psi _i).`$
So $`m`$ is well-defined. It is straightforward to verify that $`m`$ has all the required properties. $`\mathrm{}`$
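The well-definedness step is easy to see numerically. The sketch below (ours) builds $`f`$ from a positive operator $`T`$ on $`ℂ^3\otimes ℂ^4`$ and checks that the double sum defining $`m(e_1,e_2)`$ does not change when the orthonormal bases of $`E_1`$ and $`E_2`$ are rotated:

```python
# Demo (ours): the double sum in Lemma III.3 is basis-independent (n = 2).
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 3, 4
B = rng.normal(size=(d1 * d2, d1 * d2))
T = B @ B.T                                        # positive T on H1 (x) H2

def f(u, v):                                       # unentangled frame function
    w = np.kron(u, v)
    return w @ T @ w

def m(basis1, basis2):                             # candidate value of m(e1, e2)
    return sum(f(u, v) for u in basis1 for v in basis2)

E1 = np.linalg.qr(rng.normal(size=(d1, 2)))[0].T   # basis of a 2-dim E1
E2 = np.linalg.qr(rng.normal(size=(d2, 3)))[0].T   # basis of a 3-dim E2
R1 = np.linalg.qr(rng.normal(size=(2, 2)))[0]      # orthogonal rotations
R2 = np.linalg.qr(rng.normal(size=(3, 3)))[0]
print(round(m(E1, E2), 10), "==", round(m(R1 @ E1, R2 @ E2), 10))
```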
*Remark*: In the above argument we made essential use of the property that $`f`$ is an unentangled frame function. Suppose that we only knew that $`f`$ satisfied the weaker property: for some positive real number $`w`$, whenever $`\{\xi _i\}_{i∈I}`$ is a product orthonormal basis of $`ℋ_1\otimes \cdots \otimes ℋ_n`$, then $`\sum _if(\xi _i)=w`$. Then the proof of the preceding lemma would break down. This throws fresh light on the counterexample constructed in Proposition 5 in .
In our investigations on quantum decoherence functionals we were led to obtain results on generalised quantum bi-measures and multi-measures . The statement of the next theorem is Wallach’s Theorem 1 . Our proof shows that Wallach’s Theorem is a natural consequence of our earlier results on quantum multi-measures.
###### Proposition III.4 (Wallach, Theorem 1 )
Let $`ℋ_1,\dots ,ℋ_n`$ be finite dimensional Hilbert spaces, each of dimension at least 3. Let $`f:\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)→ℝ^+`$ be an unentangled frame function. Then there exists a self-adjoint operator $`T`$ in $`ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)`$ such that whenever $`\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n`$ is in $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ and $`p_j`$ is the projection of $`ℋ_j`$ onto the one-dimensional subspace generated by $`\nu _j`$,
$`f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)=\mathrm{Tr}((p_1\otimes p_2\otimes \cdots \otimes p_n)T).`$
*Proof*: Since each of the Hilbert spaces $`ℋ_1,\dots ,ℋ_n`$ is finite dimensional, $`ℬ(ℋ_1)\otimes \cdots \otimes ℬ(ℋ_n)=ℬ(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}ℬ(ℋ_n)`$. Let $`m`$ be the quantum multi-measure constructed from $`f`$ as in Lemma III.3. Let $`𝔐`$ be the linear functional on $`ℬ(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}ℬ(ℋ_n)=ℬ(ℋ_1)\otimes \cdots \otimes ℬ(ℋ_n)=ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)`$ such that $`𝔐(q_1\otimes q_2\otimes \cdots \otimes q_n)=M(q_1,q_2,\dots ,q_n)=m(q_1,q_2,\dots ,q_n)`$ for each $`q_j∈𝒫(ℋ_j)`$. Since $`𝔐`$ is a linear functional on a finite dimensional space, it is bounded. Hence there is a unique bounded operator $`T`$ in $`ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)`$ such that $`𝔐(x)=\mathrm{Tr}(xT)`$ for all $`x`$. Thus
$$f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)=𝔐(p_1\otimes p_2\otimes \cdots \otimes p_n)=\mathrm{Tr}((p_1\otimes p_2\otimes \cdots \otimes p_n)T).$$
(1)
On taking complex conjugates of Equation (1) we find that $`T`$ may be replaced by $`T^{\ast }`$. So in (1) we may replace $`T`$ by $`\frac{1}{2}(T+T^{\ast })`$. Hence we may suppose in (1) that $`T`$ is self-adjoint. $`\mathrm{}`$
The work of shows that Wallach’s Theorem can be generalised to the situation where the Hilbert spaces are not required to be finite dimensional provided an appropriate boundedness condition is imposed. More precisely:
###### Theorem III.5
Let $`ℋ_1,\dots ,ℋ_n`$ be Hilbert spaces, each of dimension at least 3.
Let $`f:\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)→ℝ^+`$ be an unentangled frame function. Let $`𝔐`$ be the associated linear functional on $`ℬ(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}ℬ(ℋ_n)`$. If the restriction of $`𝔐`$ to $`𝒦(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}𝒦(ℋ_n)`$ is bounded, then there exists a unique bounded self-adjoint, trace class operator $`T`$ in $`ℬ(ℋ_1\otimes \cdots \otimes ℋ_n)`$ such that whenever $`\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n`$ is in $`\mathrm{\Sigma }(ℋ_1,\dots ,ℋ_n)`$ and $`p_j`$ is the projection of $`ℋ_j`$ onto the one-dimensional subspace generated by $`\nu _j`$,
$`f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)=\mathrm{Tr}((p_1\otimes p_2\otimes \cdots \otimes p_n)T).`$
*Proof*: Let $`𝔐_0`$ be the restriction of $`𝔐`$ to $`𝒦(ℋ_1)\otimes _{\mathrm{alg}}\cdots \otimes _{\mathrm{alg}}𝒦(ℋ_n)`$. By hypothesis $`𝔐_0`$ is bounded and so has a unique bounded extension, also denoted by $`𝔐_0`$, to $`𝒦(ℋ_1\otimes \cdots \otimes ℋ_n)`$. By standard functional analysis, there exists a trace class operator $`T`$ such that $`𝔐_0(z)=\mathrm{Tr}(zT)`$ for each $`z`$ in $`𝒦(ℋ_1\otimes \cdots \otimes ℋ_n)`$. Since each one-dimensional projection in $`ℬ(ℋ_j)`$ is in $`𝒦(ℋ_j)`$,
$`m(p_1,p_2,\dots ,p_n)=M(p_1,p_2,\dots ,p_n)=𝔐_0(p_1\otimes p_2\otimes \cdots \otimes p_n)=\mathrm{Tr}((p_1\otimes p_2\otimes \cdots \otimes p_n)T).`$ So
$`f(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n)=\mathrm{Tr}((p_1\otimes p_2\otimes \cdots \otimes p_n)T)=⟨T(\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n),\nu _1\otimes \nu _2\otimes \cdots \otimes \nu _n⟩,`$ where $`⟨\cdot ,\cdot ⟩`$ denotes the inner product on $`ℋ_1\otimes \cdots \otimes ℋ_n.`$ It now follows from Lemma 5.9 that $`T`$ is unique. But, arguing as in the proof of Proposition III.4, we can replace $`T`$ by $`\frac{1}{2}(T+T^{\ast })`$. So, by uniqueness, $`T`$ is self-adjoint. $`\mathrm{}`$
*Remark*: When the spaces $`ℋ_1,\dots ,ℋ_n`$ are finite dimensional, then the boundedness condition of Theorem III.5 is automatically satisfied. So Wallach’s Theorem is a corollary of Theorem III.5 which, in turn, follows from the work of .
Moreover, it can be shown along the lines of that, for $`n=2`$, there exists a self-adjoint operator $`T`$, not necessarily of trace class, on $`ℋ=ℋ_1\otimes ℋ_2`$, such that
$`𝔐(p)=\mathrm{Tr}(Tp)`$ for all finite rank projections $`p`$ in $`𝒫(ℋ)`$ if, and only if, $`𝔐`$ is bounded on one dimensional projection operators whose ranges are generated by vectors in the algebraic tensor product $`ℋ_1\otimes _{\mathrm{alg}}ℋ_2`$.
# TGRS OBSERVATIONS OF GAMMA-RAY LINES FROM NOVAE. II. CONSTRAINING THE GALACTIC NOVA RATE FROM A SURVEY OF THE SOUTHERN SKY DURING 1995–1997
## 1 Introduction
Classical novae are rather frequently observed in our Galaxy (Liller & Mayer 1987, Shafter 1997), and have also been studied in external galaxies ; typically $`N`$ ≃ 3–4 per year are detected in our Galaxy (Duerbeck 1995, Warner 1995). Most of the discoveries and observations of Galactic novae have been made by amateur astronomers with little access to spectroscopic and photometric equipment. Sky coverage has been episodic and extremely hard to calculate. Classification attempts have also been hindered. As a result, many of the most basic properties involving their global rate and distribution are surprisingly uncertain. For example, a number of arguments suggest that the Galactic rate of novae must be much higher than $`N`$:
(a) The typical limiting apparent magnitude obtainable with amateur apparatus and methods has been increasing steadily in recent years, but for the period covered by this paper may be taken to be $`m_V8`$, within a very wide range, and with extremely uneven coverage. Application of the expanding-photosphere method to a subset of relatively nearby and bright novae has yielded the empirical relation
$`M_V`$ $`=`$ $`2.41\mathrm{log}t_2-10.7\text{for 5 d }<t_2<50\text{ d}`$ (1)
$`=`$ $`-9\text{for }t_2\le 5\text{ d}`$ (2)
$`=`$ $`-6.6\text{for }t_2>50\text{ d}`$ (3)
(Warner 1995) for the absolute magnitude, where $`t_2`$ (the speed class) is the time taken for $`m_V`$ to increase by 2 from discovery. It follows that the distance out to which amateur astronomers are detecting typical novae is ∼10 kpc (see the short calculation after point (d) below), or only about one-half the volume of the Galaxy. Furthermore, the rate of discoveries at the faintest magnitudes ($`m_V>6`$) is greater than what would be extrapolated from brighter novae. This indicates that a new population — presumably associated with the Galactic bulge rather than the disk — is present and poorly sampled (Duerbeck 1990; see below).
(b) Even within that part of the Galaxy which is effectively searched for novae, the discovery rate is blatantly incomplete. Not only does the discovery rate for novae with $`3<m_V\le 6`$ fall below the extrapolated rate for brighter events (thus, in contrast to the preceding argument, suggesting that many events in this range are missed: Duerbeck 1990), but there is a marked deficiency of discoveries in the southern celestial hemisphere (Warner 1995). This is relevant to our work, since the TGRS detector is permanently pointed at the southern sky (§2.1). During its period of operation (1995–1997) five novae were discovered in the southern hemisphere (Harris et al. 1999, hereafter Paper I), but there is no way of knowing how many were missed.<sup>2</sup><sup>2</sup>2 The discovery process is particularly liable to miss ”fast” novae ($`t_2`$ a few days) which rise and fall in between successive visits to a given location. The possibility of detecting undiscovered novae as bright as $`m_V=3`$ (marginally within TGRS’s capabilities) is one of the justifications for the present work.
(c) In Galactic latitude, the distribution of classical novae is somewhat concentrated toward the equatorial plane (scale heights for disk and bulge populations 125 and 500 pc respectively: Duerbeck 1984, 1990). They must therefore be affected to some degree by interstellar extinction, and a deficiency of discoveries close to the plane is indeed observed (Warner 1995).
In terms of the composition of their ejecta, novae are classified into CO-rich and ONe-rich; it is thought that the distinction reflects the composition of the underlying white dwarf material, with the ONe class coming from more massive progenitors whose cores burned beyond the early He-burning stage which yields C and O. Different levels of positron annihilation line flux are expected from each class (§4). If the progenitors of the ONe subclass are really more massive, they will tend to lie closer to the Galactic plane, and the resulting novae will be more strongly affected by extinction and relatively under-represented in the discovered sample (of which they compose $`1/3`$: Gehrz et al. 1998). Evidence of this has been detected by Della Valle et al. (1992).
(d) The three preceding factors would all tend to enhance the true Galactic nova rate above that observed. However, a second, quite distinct approach to the problem tends to produce systematically lower rates. In this approach, several external galaxies (particularly the Magellanic Clouds, M31 and M33) have been monitored for novae, and their observed rates extrapolated in some fashion to the Milky Way (Ciardullo et al. 1987, Della Valle & Livio 1994). The usual basis for extrapolation is absolute blue luminosity (Della Valle & Claudi 1990). As can be seen in Table 1, the results from this approach are systematically smaller than attempts to correct for the missing Galactic novae directly. The original explanation for this effect was provided by Duerbeck (1990), who postulated two different classes of event by spatial distribution — disk and bulge novae. It was claimed that the bulge population has a systematically slower speed class, and is therefore generally less luminous by Equations (1–3), which might account for the discrepancy, given a larger bulge in the main external source of novae, M31. As will be seen (§4.1), our search method is probably relevant only to a disk population.
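As a concrete illustration of point (a), the ∼10 kpc detection radius quoted there follows from Equations (1)–(3) and the distance modulus. This is our own back-of-envelope calculation, neglecting interstellar extinction and with an illustrative speed class:

```python
# Illustration (ours) of point (a): amateur detection radius for a typical nova.
import numpy as np

t2 = 25.0                                # speed class in days (illustrative)
M_V = 2.41 * np.log10(t2) - 10.7         # eq. (1): about -7.3
m_lim = 8.0                              # assumed limiting apparent magnitude
d_pc = 10 ** ((m_lim - M_V + 5) / 5)     # distance modulus, no extinction
print(f"M_V = {M_V:.2f}, detection radius ~ {d_pc / 1e3:.1f} kpc")  # ~12 kpc
```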
A third approach to the problem is theoretically possible, by which classical nova outbursts are assumed to be part of a life-cycle of which other cataclysmic variables are manifestations. The Galactic nova rate is then derived from the assumed space densities of these related objects, together with some model for the outburst recurrence time (Warner 1995). This approach is more reliable at predicting the Galactic space density rather than the global rate, which is more directly related to the measurements we shall present.
It is important to correct for and combine these various factors into an overall global Galactic nova rate, which would govern the input of novae into Galactic chemical evolution, dust grains and interstellar radioactivity (Gehrz et al. 1998). However attempts to do so have yielded wildly discordant results, ranging from 11–260 novae yr<sup>-1</sup> (see Table 1).
We have therefore adopted in this work yet a fourth (and simplest) approach which is to make an unbiased search for novae in our Galaxy. The detection of $`\gamma `$-ray lines from radioactive decays of the nucleosynthesis products produced in novae is such an approach; these decays in general emit positrons, whose annihilation with electrons produces a line at 511 keV. An obvious advantage of this approach is the very small absorption of $`\gamma `$-rays in the Galaxy. We will also see that problems of uneven coverage and sensitivity are minimal. These advantages are realised when the $`\gamma `$-ray detector TGRS, on board the WIND mission, is used (§2.1).
In Paper I we determined that TGRS does indeed have the capability to perform a sky survey for classical novae. The target of Paper I was to detect the positron annihilation line in five known novae; although none was detected, the viability of such a method was established. The key to the method (see §2 below) is that the line arises in nova material expanding towards the observer, and is therefore broadened and blueshifted (Leising & Clayton 1987). Its peak is therefore shifted away from a strong background line at exactly 511 keV, which arises in the instrument itself from decays of unstable nuclei produced by cosmic ray spallation.
In the next section we give a brief description of the detector and data, and of our analysis. None of these is substantially different from that of Paper I, where the reader may find a more detailed description.
## 2 Observations and Analysis
### 2.1 Spacecraft and Instrument
The TGRS experiment is very well suited to a search for the 511 keV line, for several reasons. First, it is located on board the WIND spacecraft whose orbit is so elliptical that it has spent virtually all of its mission since 1994 November in interplanetary space, where the $`\gamma `$-ray background level is relatively low. Second, these backgrounds are not only low but very stable over time. Third, TGRS is attached to the south-facing surface of the rotating cylindrical WIND body, which points permanently toward the south ecliptic pole. The detector is unshielded, and TGRS therefore has an unobstructed view of the entire southern ecliptic hemisphere. Taken together, these three facts make possible a continuous and complete survey of the southern sky. Fourth, and most importantly, the TGRS Ge detector has sufficient spectral resolution to detect a 511 keV line which is slightly Doppler-shifted away from the background 511 keV line mentioned in §1. The Doppler blueshift in the nova line, for the epochs $`<`$12 hr which we consider, is predicted to be 2–5 keV (Hernanz 1999, Kudryashov 2000), which compares with the TGRS energy resolution at 511 keV of 3–4 keV FWHM (Harris et al. 1998 and Paper I).
The TGRS detector is a radiatively cooled 35 cm<sup>2</sup> n-type Ge crystal sensitive to energies between 20 keV–8 MeV. Since the launch of WIND in 1994 November, TGRS has accumulated count rates continuously in this energy range. The few gaps in the data stream are due either to perigee passes, which are rare (lasting ∼1 d at several month intervals) thanks to WIND’s very eccentric orbit, or to memory readouts following solar flare or $`\gamma `$-ray burst triggers, which may last for ∼2 hr. The data were binned in 1 keV energy bins during 24 min intervals.
We searched in data covering a period of nearly three years, from 1995 January to 1997 October. In the fall of 1997 the performance of the detector began to degrade seriously, and the energy resolution became too coarse to resolve the 511 keV background and nova lines. This degradation is believed to result from crystal defects induced by accumulated cosmic ray impacts, which trap semiconductor holes and reverse the impurity charge status. A region of the crystal thus becomes undepleted and the effective area is reduced (Kurczynski et al. 1999). We terminated our search of the data when the photopeak effective area at 511 keV fell below an estimated 80% of its original value. The total live time accumulated was about $`7.7\times 10^7`$ s, which was nominally 88% of the entire interval. In fact, the distribution of live times among the 6 hr intervals was such that 41% of all intervals had the full 6 hr of live time, and almost 99% of intervals contained some live time.
### 2.2 Analysis
Our analysis procedure relies heavily on the most recently theoretically-predicted properties of the 511 keV line (Hernanz et al. 1999, Kudryashov 2000), mainly its light-curve, energy and shape. The timescale over which the background spectra described above are summed is set by the predicted $`\gamma `$-ray light-curve from the ”thermonuclear flash” which powers a nova. In this process a degenerate accreted H layer on the surface of a CO or ONe white dwarf ignites proton capture reactions involving both accreted material and some material dredged up from the interior of the white dwarf. The timescale for this process is set by the $`\beta `$-decay timescales of the unstable nucleosynthesis products of rapid proton capture on C, O and Ne. These unstable species fall into two groups, one having very rapid decays (∼minutes: e.g. <sup>13</sup>N, <sup>14,15</sup>O, <sup>17</sup>F) and the more slow-decaying <sup>18</sup>F ($`\tau _{1/2}=110`$ min). The light-curve results from the convolution of these decays with the reduction of opacity to 511 keV $`\gamma `$-rays due to envelope expansion; it thus tends to be double-peaked at values ∼10–100 s and ∼3–6 hr (Gómez-Gomar et al. 1998), with significant emission lasting ≲12 hr (Hernanz et al. 1999). The 10–100 s peak is ultimately due to the decay of the very short-lived group of isotopes, and is thus especially prominent in the CO nova light-curve (though these isotopes are essential to the energetics of both classes). The 3–6 hr peak reflects the survival of slower-decaying <sup>18</sup>F in both classes (Gómez-Gomar et al. 1998).
With these timescales in mind, we summed the 24-min background spectra into 6 hr intervals <sup>3</sup><sup>3</sup>3 Not only does this accumulation time contain most of the predicted line fluence, but detection of the line in the following 6 hr period would provide an independent confirmation of a detection. However, we found in Paper I that detection of the line after 12 hr is hindered not only by low flux, but also by worsened blending with the background 511 keV line. The details of how measurements in the 6–12 hr interval may be combined with those in the 0–6 hr interval are given in Paper I. The 4005 resulting 6-hr spectra were fit by a model (described in Paper I) containing the strong background 511 keV line at rest, and a broadened blueshifted nova line. The energies of the nova line were fixed at the predicted values (516 keV after 6 hr, dropping to 513 keV after 12 hr: Gómez-Gomar et al. 1998, Hernanz et al. 1999, Kudryashov 2000). The widths were taken to be 8 keV FWHM and the shapes to be Gaussian, as in Paper I; the shapes are poorly documented in published models, but the approximation is probably reasonable at an epoch of a few hours (Leising & Clayton 1987). Instrumental broadenings of these lines and of the background 511 keV line were very small during 1995–1997 (Harris et al. 1998). Although our analysis is somewhat sensitive to the departure of the actual line parameters from these predictions, we believe that it should be adequate to detect lines in the parameter range appropriate for fast novae. For example, we estimate that lines with energies in the range 513–522 keV are detected with ≳50% of true amplitude, corresponding to expansion velocities 1200–6500 km s<sup>-1</sup> which bracket the range observed in fast novae (Warner 1995).
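The velocity bracket quoted above is just the first-order Doppler relation $`v/c=(E-511)/511`$; a one-line check (ours):

```python
# Check (ours): line energies 513-522 keV <-> expansion velocities ~1200-6500 km/s.
c = 2.998e5                                # speed of light, km/s
for E_keV in (513.0, 516.0, 522.0):
    print(f"{E_keV:.0f} keV -> {c * (E_keV - 511.0) / 511.0:.0f} km/s")
```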
The 4005 count spectra were fit to the above model (plus an underlying constant term) and the line amplitudes were divided by the photopeak effective area at 511 keV. This photopeak efficiency was determined from Monte Carlo simulations as a function of energy and zenith angle (Seifert et al. 1997), taking into account the effects of hole-trapping in the detector (§2.1); we found that the efficiency remained extremely stable until the fall of 1997, whereupon it rapidly fell to 80% of its value at launch. The effective area is a slowly varying function of the zenith angle of the source. To calculate the average effective area, we assumed the Galactic distribution of the synthetic population of several thousand novae computed by Hatano et al. (1997), for the southern part of which the mean TGRS zenith angle is ∼60°, corresponding to an effective area 13.6 cm<sup>2</sup>. The fits were performed by the standard method of varying the model parameters to minimize the quantity $`\chi ^2`$, with errors on the parameters computed from the parameter range where $`\chi ^2`$ exceeded the minimum by +1 (Paper I).
With a sufficiently large sample of spectra, there is a probability that a fitted line of any given amplitude may be produced by chance. We therefore imposed a rather high value of significance as the threshold above which a detection would be established. If the significances are normally distributed (see §3 below) then our sample size of 4005 spectra implies that a threshold level of $`4.6\sigma `$ yields a probability $`<1`$% of a single false detection by chance (Abramowitz & Stegun 1964).
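The 4.6σ figure follows from the Gaussian tail probability; a quick check (ours) using scipy:

```python
# Check (ours): threshold for < 1% chance of any false detection in 4005 trials.
from scipy.stats import norm

n_trials, p_total = 4005, 0.01
z = norm.isf(p_total / n_trials)        # one-sided Gaussian tail threshold
print(f"{z:.2f} sigma")                 # ~4.56, i.e. the 4.6 sigma adopted
```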
## 3 Results
A typical fit to a 6 hr spectrum is shown in Figure 1 (there is a more detailed discussion in Paper I). The fits are generally acceptable, with values of $`\chi ^2`$/d.o.f. close to 1. The amplitudes of the nova lines are significantly positive in all cases (see below). This arises from a significant departure of the blue wing of the 511 keV background line from the Gaussian shape assumed in the fits, whose origin is unclear (Paper I).
The full series of measurements for a nova line of FWHM 8 keV and blueshift 5 keV (parameters corresponding to typical predicted values after 6 hr: Hernanz et al. 1999 and Paper I) is shown in Figure 2. It can be seen that the systematic positive offset mentioned above was extremely stable throughout the mission; there are very weak linear trends on ∼year timescales which are almost invisible in Fig. 2. We subtracted this quasi-constant systematic value from all nova line measurements.
A very similar time series was obtained for a nova line at position predicted for 12 hr after the explosion (513 keV), except that the error bars were very much larger (see Paper I, §3.4). Each fitted 6 hr line was combined with the following 12 hr fit in the proportions suggested by the light curve of Hernanz et al. (1999). The results closely resembled those of Fig. 2 after subtraction of the quasi-constant systematic, since the 12 hr lines contributed little on account of their large error bars.
It is also clear from inspection of Fig. 2 that there are no highly significant line amplitudes lying above the mean. We further show in Figure 3 that the distribution of significant deviations from the mean is very close to normal. The variability in the error bars comes almost entirely from the variability in live times, which is small (§2.1). There is therefore a well-defined mean $`1\sigma `$ error of $`8.2\times 10^{-4}`$ photon cm<sup>-2</sup> s<sup>-1</sup> (compare results of Paper I for zenith angle ∼60°). The $`4.6\sigma `$ threshold based on this average error is shown by a dotted line in Fig. 2. The only points lying above this line are a few 6 hr periods with low live time and large errors. We therefore conclude that no previously-undetected novae were discovered by TGRS during 1995–1997, in an almost unbiased search covering a live time of $`7.7\times 10^7`$ s.
## 4 Discussion
### 4.1 Limit on the Galactic nova rate
Recent developments in the theory of nucleosynthesis in classical novae (Hernanz et al. 1999) have been discouraging for our purpose of a positron annihilation $`\gamma `$-ray search, since new measurements of nuclear reaction rates have led to much lower predictions of the flux in this line after 6 and 12 hr. The discussion in Paper I of the capability of constraining the global Galactic nova rate using our present results was therefore over-optimistic. Nevertheless, we will discuss the application of our method in general terms, so that even though important constraints cannot now be derived, the method may still prove useful for more sensitive future experiments (e.g. INTEGRAL) or for more optimistic theoretical predictions.
A formal expression for the number of novae detectable by TGRS is
$`N_{obs}`$ $`=`$ $`R_{gal}T_{tot}{\displaystyle \int _0^{1.4M_{\odot }}}\mathrm{\Phi }(M){\displaystyle \int f(\varphi >\varphi _{min})w(\varphi >\varphi _{min})d\varphi _{min}dM}`$ (4)
where $`R_{gal}`$ is the Galactic nova rate; $`\varphi _{min}`$ is a given (time varying) threshold flux for detection by TGRS; $`f(\varphi >\varphi _{min})`$ is the fraction of the mass of the Galaxy within TGRS’s detection radius $`r_d`$, where $`r_d=\sqrt{\varphi _{pred}(M)/\varphi _{min}}`$; $`T_{tot}`$ is the total TGRS live time; $`w(\varphi >\varphi _{min})`$ is the fraction of TGRS live time for which $`\varphi >\varphi _{min}`$; $`\mathrm{\Phi }`$ is the distribution of white dwarf masses in classical novae; and $`\varphi _{pred}(M)`$ is the predicted 511 keV line flux at 1 kpc for white dwarf mass $`M`$.
The white dwarf mass distribution in novae, $`\mathrm{\Phi }(M)`$, is very poorly known. Whereas field white dwarf masses appear to peak at $`0.6M_{\odot }`$ and to decline in number for higher masses up to the Chandrasekhar limit $`1.4M_{\odot }`$ (Warner 1990), the mass distribution in nova systems must be weighted towards higher masses. This is because the thermonuclear runaway occurs when the basal pressure of the material accreted onto the white dwarf exceeds some critical value. The critical pressure is proportional to the white dwarf mass, to the accreted mass, and to the inverse fourth power of the white dwarf radius. Since white dwarf radii decrease with increasing white dwarf mass, the accreted mass necessary to reach the critical pressure is a strongly decreasing function of white dwarf mass. If the accretion rate from the secondary star is roughly independent of white dwarf mass, it follows that explosions on more massive white dwarfs will recur after much shorter intervals (Gehrz et al. 1998). This effect has not been reliably measured, although theory indicates that the observed ONe:CO nova ratio of 1:2 is compatible with a mass distribution peaking at about $`1.2M_{\odot }`$ (Truran & Livio 1986). Further, the mass ranges corresponding to the CO and ONe compositions are poorly known and may well overlap (Livio & Truran 1994).
Theoretical predictions of 511 keV line emission are only available for a few values of $`M`$. In Table 2 we show the parameters of the most recent models suitable for use in Eq. (4) (Hernanz et al. 1999). Earlier models suggest that emission from lower-mass CO white dwarf events is considerably less (Gómez-Gomar et al. 1998). In view of the remarks above about the ONe:CO ratio, we will make the crude assumption that the ratio of “low-mass” CO objects to “high-mass” CO objects to ONe objects is 1:1:1, where “high-mass” CO objects have the properties given in Table 2 and “low-mass” CO objects are assumed to produce no 511 keV line emission at all. This eliminates the integral over $`M`$ in Eq. (4).
The remaining integral in Eq. (4) can be approximated by the value of the integrand when $`\varphi _{min}`$ has its mean value — this follows from our result in §§2.1, 3 that the variation of live times in our sample (and therefore of the errors in Fig. 2) is very small. For a given model in Table 2, therefore, taking $`\varphi _{min}=4.6\times 8.2\times 10^{-4}=3.8\times 10^{-3}`$ photon cm<sup>-2</sup> s<sup>-1</sup>, the problem is reduced to the computation of the fraction $`f`$ of Galactic mass which lies within the radius $`r_d=\sqrt{\varphi _{pred}/3.8\times 10^{-3}}`$ kpc.
As an example, let us consider the Hernanz et al. (1999) model of a $`1.15M_{\odot }`$ CO nova from Table 2. Here $`\varphi _{pred}=3.1\times 10^{-3}`$ photon cm<sup>-2</sup> s<sup>-1</sup>, so that $`r_d=0.9`$ kpc. According to the widely used Bahcall-Soneira model of the Galaxy (Bahcall & Soneira 1984), 0.61% of the Galaxy’s mass resides within this value of $`r_d`$; one-half of this value (i.e. the southern hemisphere) gives $`f=0.00305`$. For the $`1.15M_{\odot }`$ CO model, Eq. (4) then reduces to $`R_{gal}=\frac{N_{obs}}{T_{tot}f}`$. Our upper limit for $`N_{obs}`$ is $`<1`$, with 63% probability (for Poisson-distributed events: Gehrels 1986), and the live time is $`7.7\times 10^7`$ s (§2.1), giving us a 63% upper limit on the rate of “high-mass” CO novae of $`R_{gal}<134`$ yr<sup>-1</sup>. This is quite close to the exact value 123 yr<sup>-1</sup> obtained by explicitly integrating Eq. (4) over $`\varphi _{min}`$ (Table 2).
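The reduced calculation is easily reproduced (a sketch using only the values quoted above):

```python
# Sketch: 63% upper limit on the "high-mass" CO nova rate from the reduced Eq. (4).
import math

YEAR = 3.156e7                      # seconds per year

phi_pred = 3.1e-3                   # predicted flux at 1 kpc (photon cm^-2 s^-1)
phi_min = 4.6 * 8.2e-4              # 4.6 sigma threshold flux
r_d = math.sqrt(phi_pred / phi_min) # detection radius, ~0.9 kpc

f = 0.5 * 0.0061                    # southern half of the 0.61% mass fraction
T_tot = 7.7e7 / YEAR                # live time, ~2.4 yr
N_obs = 1.0                         # 63% Poisson upper limit on detections

R_gal = N_obs / (T_tot * f)
print(f"r_d = {r_d:.2f} kpc, R_gal < {R_gal:.0f} yr^-1")   # ~0.90 kpc, ~134 yr^-1
```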
In the same way we obtain an upper limit of 238 yr<sup>-1</sup> on the rate of novae occurring on ONe white dwarfs from Hernanz et al.’s (1999) prediction. Our best result comes from the CO model in Table 2; assuming, as above, that such novae constitute one-third of the total, we derive a global Galactic nova rate of $`<369`$ yr<sup>-1</sup>.
Uncertainties in this value clearly arise from uncertainties in the nova models, in the fraction of white dwarfs in nova systems of each type, in the Bahcall-Soneira model, and in the possibility of distinct spheroid and disk nova populations having differing rates, since our typical detection radius $`<1`$ kpc includes almost none of the Bahcall-Soneira spheroid. Since our results do not significantly constrain previous measurements of the nova rate (Table 1), we do not make estimates of these errors, which will require attention from other, more sensitive experiments (see next section).
### 4.2 Implications for nova detection by other instruments
An attempt has been made, using the BATSE instrument on the Compton Observatory, to detect 511 keV line emission from a recent nearby nova (V382 Vel) by a method similar to that used here and in Paper I (Hernanz et al. 2000). The advantage of observing with BATSE over TGRS is its much larger effective area. Its disadvantages are the much poorer energy resolution of its NaI spectrometers, and a background varying on very short ($`\ll 90`$ min) timescales. The sensitivities achieved are comparable to those obtained here.
Degradation of the Ge detector (§2.1) prevented TGRS from achieving comparable sensitivity on V382 Vel (Harris et al. 2000), so future efforts in this field will rely on BATSE and on the INTEGRAL mission, which is scheduled for launch in 2001 September carrying a Ge spectrometer (SPI) with resolution comparable to TGRS but a much larger effective area. Hernanz et al. (1999) estimated that SPI could detect the model novae of Table 2 out to $`\sim 3`$ kpc. However, they also pointed out that the short duration of the 511 keV line emission would make it difficult for INTEGRAL to slew to a candidate event. Thus the detection rate would be limited to novae within the SPI field of view, which is $`25\mathrm{deg}`$ FWHM.
The search method which we have used, i.e. an ex post facto search in background spectra, ought to be perfectly feasible with INTEGRAL. The chief requirements for this method are very high energy resolution and a sufficiently low and stable background. While the SPI detector has excellent resolution, the background level in it has not yet been rigorously computed. Nevertheless, qualitative arguments suggest that the background will be no worse than that in TGRS. Like WIND, INTEGRAL will be in a high-altitude elliptical orbit which avoids extensive exposure to Earth’s trapped radiation belts and to albedo $`\gamma `$-rays from Earth’s atmosphere. The main disadvantages of INTEGRAL for nova detection are the small SPI field of view and the planned observing strategy which cuts down the amount of time spent pointing towards the main concentration of novae near the Galactic center.
We can make use of the planned program of INTEGRAL observations of the central Galactic radian in the first year of operation (Winkler et al. 1999) to estimate the rate at which novae might be detected in the SPI data. As before, we assume that novae follow the Bahcall & Soneira (1984) Galactic distribution. The planned first-year INTEGRAL observations may be approximated by a $`31\times 11`$ grid of pointings with $`2\mathrm{deg}`$ spacing spanning $`-30\mathrm{deg}\le l\le 30\mathrm{deg}`$ and $`-10\mathrm{deg}\le b\le 10\mathrm{deg}`$, the exposure to each point being 1180 s per pass, with 12 passes per year covering the whole grid. Thus the live time for the whole grid is 0.153 yr. From the Bahcall-Soneira model, the pointing geometry, and the SPI aperture of $`25\mathrm{deg}`$, we calculated that the INTEGRAL detection radius of $`\sim 3`$ kpc intercepts $`\sim `$0.75% of the Galactic nova distribution. The live time 0.153 yr is then multiplied by a typical Galactic nova rate $`\sim 50`$ yr<sup>-1</sup> (of which 2/3 are practically detectable, as assumed in §4.1), and by the intercepted fraction, to imply that INTEGRAL ought to detect only 0.04 novae yr<sup>-1</sup>. Unless theoretical estimates of the 511 keV line flux turn out to be considerably larger, the prospects for such a detection appear to be small. The same conclusion probably applies to a different method of detecting 511 keV line emission indirectly, by observing the 170–470 keV continuum produced by Compton scattering in the nova envelope using SPI’s large-area CsI shield (Jean et al. 1999).
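The arithmetic behind this estimate is collected below (a sketch; every input is a value quoted in this section):

```python
# Sketch: expected INTEGRAL/SPI nova detection rate in the first-year program.
YEAR = 3.156e7

n_points = 31 * 11                          # pointing grid over the central radian
live_time = n_points * 1180.0 * 12 / YEAR   # ~0.153 yr for the whole grid

nova_rate = 50.0        # assumed Galactic nova rate (yr^-1)
detectable = 2.0 / 3.0  # fraction assumed to emit at 511 keV (Sec. 4.1)
intercepted = 0.0075    # fraction of novae within ~3 kpc in the SPI aperture

rate = live_time * nova_rate * detectable * intercepted
print(f"live time = {live_time:.3f} yr, detections = {rate:.2f} yr^-1")  # ~0.04
```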
We are grateful to M. Hernanz and A. Kudryahov for helpful discussions and for providing pre-publication results, and to J. Jordi (the referee) for constructive comments. Peter Kurczynski (University of Maryland) helped in assessing the instrument performance. Theresa Sheets (LHEA) and Sandhia Bansal (HSTX) assisted with the analysis software.
# Baryon Spectrum in the Large $`N_c`$ Limit

(Invited talk at NSTAR2000, Newport News, VA, Feb. 19, 2000. William and Mary preprint no. WM-00-103.)
## 1 Introduction
It has been known for some time that QCD admits a useful and elegant expansion in powers of $`1/N_c`$, where $`N_c`$ is the number of colors. Given this expansion, it is possible to determine the order in $`N_c`$ of any Feynman diagram or matrix element. The $`1/N_c`$ expansion has been utilized successfully in baryon effective field theories to isolate the leading and subleading contributions to a variety of physical observables.
Here we study the mass spectrum of the nonstrange, $`\ell =1`$ baryons (associated with the SU(6) 70-plet for $`N_c=3`$) in a large-$`N_c`$ effective theory. We describe the states as a symmetrized “core” of $`(N_c-1)`$ quarks in the ground state plus one excited quark in a relative $`P`$ state. “Quarks” in the effective theory refer to eigenstates of the spin-flavor-orbit group, SU(6) $`\times `$ O(3), such that an appropriately symmetrized collection of $`N_c`$ of them has the quantum numbers of the physical baryons. Baryon wave functions are antisymmetric in color and symmetric in the spin-flavor-orbit indices of the quark fields. While this construction assures that we obtain states with the correct total quantum numbers, we do not assume that SU(6) is an approximate symmetry of the effective Lagrangian. Rather, we parameterize the most general way in which spin and flavor symmetries are broken by introducing a complete set of quark operators that act on the baryon states. Matrix elements of these operators are hierarchical in $`1/N_c`$, so that predictivity can be obtained without recourse to ad hoc phenomenological assumptions.
The nonstrange 70-plet states which we consider in this analysis consist of two isospin-$`3/2`$ states, $`\mathrm{\Delta }_{1/2}`$ and $`\mathrm{\Delta }_{3/2}`$, and five isospin-$`1/2`$ states, $`N_{1/2}`$, $`N_{1/2}^{}`$, $`N_{3/2}`$, $`N_{3/2}^{}`$, and $`N_{5/2}^{}`$. The subscript indicates total baryon spin; unprimed states have quark spin $`1/2`$ and primed states have quark spin $`3/2`$. These quantum numbers imply that two mixing angles, $`\theta _{N1}`$ and $`\theta _{N3}`$, are necessary to specify the total angular momentum $`1/2`$ and $`3/2`$ nucleon mass eigenstates, respectively. Thus we may write
$$\left[\begin{array}{c}N(1535)\\ N(1650)\end{array}\right]=\left[\begin{array}{cc}\mathrm{cos}\theta _{N1}& \mathrm{sin}\theta _{N1}\\ -\mathrm{sin}\theta _{N1}& \mathrm{cos}\theta _{N1}\end{array}\right]\left[\begin{array}{c}N_{1/2}\\ N_{1/2}^{}\end{array}\right]$$
(1)
and
$$\left[\begin{array}{c}N(1520)\\ N(1700)\end{array}\right]=\left[\begin{array}{cc}\mathrm{cos}\theta _{N3}& \mathrm{sin}\theta _{N3}\\ -\mathrm{sin}\theta _{N3}& \mathrm{cos}\theta _{N3}\end{array}\right]\left[\begin{array}{c}N_{3/2}\\ N_{3/2}^{}\end{array}\right],$$
(2)
where the $`N(1535)`$, $`N(1650)`$, $`N(1520)`$ and $`N(1700)`$ are the appropriate mass eigenstates observed in experiment.
## 2 Operator Analysis
To parameterize the complete breaking of SU(6)$`\times `$O(3), it is natural to write all possible mass operators in terms of the generators of this group. The generators of orbital angular momentum are denoted by $`\ell ^i`$, while $`S^i`$, $`T^a`$, and $`G^{ia}`$ represent the spin, flavor, and spin-flavor generators of SU(6), respectively. The generators $`S_c^i`$, $`T_c^a`$, $`G_c^{ia}`$ refer to those acting upon the $`N_c-1`$ core quarks, while separate SU(6) generators $`s^i`$, $`t^a`$, and $`g^{ia}`$ act only on the single excited quark. Factors of $`N_c`$ originate either as coefficients of operators in the Hamiltonian, or through matrix elements of those operators. An $`n`$-body operator, which acts on $`n`$ quarks in a baryon state, has a coefficient of order $`1/N_c^{n-1}`$, reflecting the minimum number of gluon exchanges required to generate the operator in QCD. Compensating factors of $`N_c`$ arise in matrix elements if sums over quark lines are coherent. For example, the unit operator $`1`$ contributes at $`O(N_c^1)`$, since each core quark contributes equally in the matrix element. The core spin of the baryon $`S_c^2`$ contributes to the masses at $`O(1/N_c)`$, because the matrix elements of $`S_c^i`$ are of $`O(N_c^0)`$ for baryons that have spins of order unity as $`N_c\to \mathrm{\infty }`$. Similarly, matrix elements of $`T_c^a`$ are $`O(N_c^0)`$ in the two-flavor case since the baryons considered have isospin of $`O(N_c^0)`$, but the operator $`G_c^{ia}`$ has matrix elements on this subset of states of $`O(N_c^1)`$. This means that the contributions of the $`O(N_c)`$ quarks add incoherently in matrix elements of the operator $`S_c^i`$ or $`T_c^a`$ but coherently for $`G_c^{ia}`$. Thus, the full large $`N_c`$ counting of the matrix element is $`O(N_c^{1-n+m})`$, where $`m`$ is the number of coherent core quark generators. A complete operator basis for the nonstrange 70-plet masses is shown in Table 1 (some of these operators have been studied previously). Index contractions are left implicit wherever they are unambiguous, and the $`c_i`$ are operator coefficients. The tensor $`\ell _{ij}^{(2)}`$ represents the rank-two tensor combination of $`\ell ^i`$ and $`\ell ^j`$ given by $`\ell _{ij}^{(2)}=\frac{1}{2}\{\ell _i,\ell _j\}-\frac{\ell ^2}{3}\delta _{ij}`$. Note in Table 1 that operators $`1`$, $`2`$–$`3`$, and $`4`$–$`9`$ have matrix elements of order $`N_c^1`$, $`N_c^0`$, and $`N_c^{-1}`$, respectively.
## 3 Results
Since the operator basis in Table 1 completely spans the $`9`$-dimensional space of observables, we can solve for the $`c_i`$ given the experimental data. For each baryon mass, we assume that the central value corresponds to the midpoint of the mass range quoted in the Review of Particle Properties; we take the one-standard-deviation error as half of the stated range. To determine the off-diagonal mass matrix elements, we use the mixing angles extracted from the analysis of strong decays, $`\theta _{N1}=0.61\pm 0.09`$ and $`\theta _{N3}=3.04\pm 0.15`$. These values are consistent with those obtained from radiative decays as well. Solving for the operator coefficients, we obtain the values shown in Table 2.
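Schematically, the off-diagonal mass matrix elements follow from the measured eigenvalues and the quoted mixing angles by undoing the rotation of Eq. (1) (a sketch with nominal central values; the full analysis propagates the stated errors):

```python
# Sketch: mass matrix in the (N_1/2, N'_1/2) basis from Eq. (1).
import numpy as np

theta = 0.61                        # theta_N1 from the strong-decay analysis
m = np.diag([1535.0, 1650.0])       # nominal eigenmasses (MeV)

c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, s], [-s, c]])     # rotation of Eq. (1)

M = R.T @ m @ R                     # mass matrix in the quark-spin basis
print(M)                            # off-diagonal entries feed the fit
```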
Naively, one expects the $`c_i`$ to be of comparable size. Using the value of $`c_1`$ as a point of comparison, it is clear that there are no operators with anomalously large coefficients. Thus, we find no conflict with the naive $`1/N_c`$ power counting rules. However, only three operators of the nine, $`𝒪_1`$, $`𝒪_3`$, and $`𝒪_6`$, have coefficients that are statistically distinguishable from zero! A fit including those three operators alone is shown in Table 3, with a $`\chi ^2`$ per degree of freedom of $`1.87`$. Fits involving other operator combinations are studied elsewhere. Clearly, large $`N_c`$ power counting is not sufficient by itself to explain the $`\ell =1`$ baryon masses—the underlying dynamics plays a crucial role.
## 4 Interpretation and Conclusions
We will now show that the preference in Table 2 for two nontrivial operators, $`\frac{1}{N_c}\ell ^{(2)}gG_c`$ and $`\frac{1}{N_c}S_c^2`$, can be understood in a constituent quark model with a single pseudoscalar meson exchange, up to corrections of order $`1/N_c^2`$. The argument goes as follows:

The pion couples to the quark axial-vector current, so that the $`\overline{q}q\pi `$ coupling introduces the spin-flavor structure $`\sigma ^i\tau ^a`$ on a given quark line. In addition, pion exchange respects the large $`N_c`$ counting rules given in Section 2. A single pion exchange between the excited quark and a core quark is mapped to the operators $`g^{ia}G_c^{ja}\ell _{ij}^{(2)}`$ and $`g^{ia}G_c^{ia}`$, while pion exchange between two core quarks yields $`G_c^{ia}G_c^{ia}`$. These exhaust the possible two-body operators that have the desired spin-flavor structure. The first operator is one of the two in our preferred set. The third operator may be rewritten
$$2G_c^{ia}G_c^{ia}=C_2 1-\frac{1}{2}T_c^aT_c^a-\frac{1}{2}S_c^2$$
(3)
where $`C_2`$ is the SU(4) quadratic Casimir for the totally symmetric core representation (the 10 of SU(4) for $`N_c=3`$). Since the core wavefunction involves two spin and two flavor degrees of freedom, and is totally symmetric, it is straightforward to show that $`T_c^2=S_c^2`$. Then Eq. (3) implies that one may exchange $`G_c^{ia}G_c^{ia}`$ in favor of the identity operator and $`S_c^2`$, the second of the two operators suggested by our fits.
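Both statements can be checked numerically for the $`N_c=3`$ core (two quarks) by constructing the spin-flavor generators explicitly (a sketch; the value $`C_2=9/2`$ for the symmetric two-index representation of SU(4) is a standard result, assumed here):

```python
# Sketch: numerical check of Eq. (3) and of T_c^2 = S_c^2 on the two-quark core.
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
id2, id4 = np.eye(2), np.eye(4)

# Single-quark generators s^i, t^a, g^{ia} on spin (x) isospin
s = [np.kron(p / 2, id2) for p in sig]
t = [np.kron(id2, p / 2) for p in sig]
g = [np.kron(pi / 2, pa / 2) for pi in sig for pa in sig]

def core(op):                       # two-body core operator: O (x) 1 + 1 (x) O
    return np.kron(op, id4) + np.kron(id4, op)

S2 = sum(core(x) @ core(x) for x in s)
T2 = sum(core(x) @ core(x) for x in t)
G2 = sum(core(x) @ core(x) for x in g)

# Projector onto the symmetric (10-dimensional) two-quark subspace
P = np.zeros((16, 16))
for i in range(4):
    for j in range(4):
        P[4 * i + j, 4 * j + i] = 1.0   # quark-exchange operator
Psym = (np.eye(16) + P) / 2

C2 = 9 / 2   # SU(4) Casimir of the symmetric two-index rep (the 10)
lhs = Psym @ (2 * G2) @ Psym
rhs = Psym @ (C2 * np.eye(16) - T2 / 2 - S2 / 2) @ Psym
print(np.allclose(lhs, rhs))                     # True: Eq. (3)
print(np.allclose(Psym @ (T2 - S2) @ Psym, 0))   # True: T_c^2 = S_c^2
```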
The remaining operator, $`g^{ia}G_c^{ia}`$, is peculiar in that its matrix element between two nonstrange, mixed symmetry states is given by
$$\frac{1}{N_c}\langle gG\rangle =\frac{N_c+1}{16N_c}+\delta _{S,I}\frac{I(I+1)}{2N_c^2},$$
(4)
which differs from a multiple of the identity only at order $`1/N_c^2`$. Thus, to order $`1/N_c`$, one may make the replacements
$$\{1\text{ , }g^{ia}G_c^{ja}\ell _{ij}^{(2)}\text{ , }g^{ia}G_c^{ia}\text{ , }G_c^{ia}G_c^{ia}\}\to \{1\text{ , }g^{ia}G_c^{ja}\ell _{ij}^{(2)}\text{ , }S_c^2\}.$$
(5)
We conclude that the operator set suggested by the data may be understood in terms of single pion exchange between quark lines. This is consistent with the interpretation of the mass spectrum advocated by Glozman and Riska. Other simple models, such as single gluon exchange, do not directly select the operators suggested by our analysis and may require others that are disfavored by the data.
## Acknowledgments
C.D.C. thanks the National Science Foundation for support under Grant Nos. PHY-9800741 and PHY-9900657, and the Jeffress Memorial Trust for support under Grant No. J-532.
# Gamma-Ray Bursts

(Invited talk presented at the 7th International Symposium on Particles, Strings and Cosmology, Dec. 1999, Lake Tahoe, California.)
## 1 Introduction
The origin of GRBs, bursts of 0.1–1 MeV photons lasting for a few seconds, remained unknown for over 20 years, primarily because GRBs were not detected prior to 1997 at wave-bands other than $`\gamma `$-rays. The isotropic distribution of bursts over the sky suggested that GRB sources lie at cosmological distances, and general phenomenological considerations were used to argue that the bursts are produced by the dissipation of the kinetic energy of a relativistic expanding fireball (see reviews).
Adopting the cosmological fireball hypothesis, it was shown that the physical conditions in the fireball dissipation region allow Fermi acceleration of protons to energy $`>10^{20}\mathrm{eV}`$, and that the average rate at which energy is emitted as $`\gamma `$-rays by GRBs is comparable to the energy generation rate of UHECRs in a model where UHECRs are produced by a cosmological distribution of sources. Based on these two facts, it was suggested that GRBs and UHECRs have a common origin (see recent reviews).
In the last two years, afterglows of GRBs have been discovered in X-ray, optical, and radio wave bands. Afterglow observations confirmed the cosmological origin of the bursts, through the redshift determination of several GRB host-galaxies, and confirmed standard model predictions of afterglows that result from the collision of an expanding fireball with its surrounding medium. These observations therefore provide strong support for the GRB model of UHECR production.
In this review, UHECR and neutrino production in GRBs is discussed in the light of recent GRB and UHECR observations. The fireball model is briefly described in §2.1, and proton acceleration in GRB fireballs is discussed in §2.2. Implications of recent afterglow observations to high energy particle production are discussed in §3. Model predictions are shown to be consistent with the observed UHECR spectrum in §4. Predictions of the GRB model for UHECR production, that can be tested with future UHECR experiments, are discussed in §5. High energy neutrino production in fireballs and its implications for future high energy neutrino detectors are discussed in §6.
## 2 UHECR from GRB fireballs
### 2.1 The fireball model
In the fireball model of GRBs, a compact source, of linear scale $`r_0\sim 10^7`$ cm, produces a wind characterized by an average luminosity $`L\sim 10^{52}\mathrm{erg}\mathrm{s}^{-1}`$ and mass loss rate $`\dot{M}=L/\eta c^2`$. At small radius, the wind bulk Lorentz factor, $`\mathrm{\Gamma }`$, grows linearly with radius, until most of the wind energy is converted to kinetic energy and $`\mathrm{\Gamma }`$ saturates at $`\mathrm{\Gamma }\approx \eta \sim 300`$. Variability of the source on a time scale $`\mathrm{\Delta }t\sim r_0/c\sim 1`$ ms, resulting in fluctuations in the wind bulk Lorentz factor $`\mathrm{\Gamma }`$ on a similar time scale, leads to internal shocks in the expanding fireball at a radius $`r_i\approx \mathrm{\Gamma }^2c\mathrm{\Delta }t`$. These shocks reconvert part of the kinetic energy to internal energy, which is then radiated as $`\gamma `$-rays by synchrotron emission of shock-accelerated electrons. The $`\gamma `$-ray flux is observed to vary on a time scale $`t_{\mathrm{var}}\approx r_i/\mathrm{\Gamma }^2c\approx \mathrm{\Delta }t`$.
As the fireball expands, it drives a relativistic shock (blast wave) into the surrounding gas. At a radius $`r\approx \mathrm{\Gamma }^2cT`$, where $`T\sim 10`$ s is the wind duration, most of the fireball energy is transferred to the surrounding gas, and the flow approaches self-similar expansion. The shock driven into the ambient medium at this stage continuously heats new gas, and accelerates relativistic electrons that produce by synchrotron emission the delayed radiation, “afterglow,” observed on time scales of days to months. As the shock-wave decelerates, the emission shifts with time to lower frequency.
### 2.2 Fermi acceleration in GRBs
The observed GRB and afterglow radiation is produced by synchrotron emission of shock-accelerated electrons. In the region where electrons are accelerated, protons are also expected to be shock accelerated. This is similar to what is thought to occur in supernova remnant shocks. Internal shocks are generally expected to be “mildly” relativistic in the fireball rest frame, i.e. characterized by a Lorentz factor $`\gamma _i-1\sim 1`$, since adjacent shells within the wind are expected to expand with Lorentz factors which do not differ by more than an order of magnitude. We therefore expect results related to particle acceleration in sub-relativistic shocks to be valid for the present scenario. In particular, the predicted energy distribution of accelerated protons is $`dN_p/dE_p\propto E_p^{-2}`$.
Two constraints must be satisfied by fireball wind parameters in order to allow proton acceleration to $`E_p>10^{20}`$ eV in internal shocks:
$$\xi _B/\xi _e>0.02\mathrm{\Gamma }_{300}^2E_{p,20}^2L_{\gamma ,52}^{-1},$$
(1)
in order for the proton acceleration time $`t_a`$ to be smaller than the wind expansion time, and
$$\mathrm{\Gamma }>130E_{20}^{3/4}\mathrm{\Delta }t_{10\mathrm{m}\mathrm{s}}^{-1/4},$$
(2)
in order for the synchrotron energy loss time of the proton to be larger than $`t_a`$. Here, $`\mathrm{\Gamma }=300\mathrm{\Gamma }_{300}`$, $`\mathrm{\Delta }t=10\mathrm{\Delta }t_{10\mathrm{m}\mathrm{s}}`$ ms, $`E_p=10^{20}E_{p,20}`$ eV, $`L_\gamma =10^{52}L_{\gamma ,52}\mathrm{erg}/\mathrm{s}`$ is the $`\gamma `$-ray luminosity, $`\xi _B`$ is the fraction of the wind energy density which is carried by magnetic field, $`4\pi r^2c\mathrm{\Gamma }^2(B^2/8\pi )=\xi _BL`$, and $`\xi _e`$ is the fraction of wind energy carried by shock accelerated electrons.
Eqs. (1) and (2) imply that protons may be accelerated in a GRB wind to energy $`>10^{20}`$ eV, provided that $`\mathrm{\Gamma }>100`$ and that the magnetic field is close to equipartition with electrons. The former condition, $`\mathrm{\Gamma }>100`$, is remarkably similar to that inferred from $`\gamma `$-ray spectra. There is no theory at present that allows a first-principles calculation of the strength of the magnetic field. However, a magnetic field close to equipartition, $`\xi _B\sim 1`$, is required in order to account for the observed $`\gamma `$-ray emission.
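As a quick check, both conditions can be evaluated for the fiducial parameters used above (a sketch; the scalings are just those of Eqs. (1) and (2)):

```python
# Sketch: the proton-acceleration conditions of Eqs. (1) and (2).
def xi_ratio_min(Gamma=300.0, E20=1.0, L52=1.0):
    """Minimum xi_B/xi_e required by Eq. (1)."""
    return 0.02 * (Gamma / 300.0)**2 * E20**2 / L52

def Gamma_min(E20=1.0, dt_10ms=1.0):
    """Minimum bulk Lorentz factor required by Eq. (2)."""
    return 130.0 * E20**0.75 * dt_10ms**-0.25

print(xi_ratio_min())  # 0.02: satisfied if the field is near equipartition
print(Gamma_min())     # 130: comparable to the Gamma inferred from spectra
```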
We have assumed in the discussion so far that the fireball is spherically symmetric. However, since a jet-like fireball behaves as if it were a conical section of a spherical fireball as long as the jet opening angle is larger than $`\mathrm{\Gamma }^{-1}`$, our results apply also to a jet-like fireball (we are interested only in processes that occur when the wind is ultra-relativistic, $`\mathrm{\Gamma }\sim 300`$, prior to significant fireball deceleration). For a jet-like wind, $`L`$ in our equations should be understood as the luminosity the fireball would have carried had it been spherically symmetric.
## 3 Implications of afterglow observations
In addition to providing support to the validity of the qualitative fireball scenario described in §2.1, afterglow observations provide quantitative constraints on fireball model parameters. The determination of GRB redshifts implies that the characteristic GRB $`\gamma `$-ray luminosity and emitted energy are $`L_\gamma \sim 10^{52}\mathrm{erg}/\mathrm{s}`$ and $`E_\gamma \sim 10^{53}\mathrm{erg}`$ respectively, an order of magnitude higher than the values assumed prior to afterglow detection. Afterglow observations also indicate that $`\xi _e\sim \xi _B\sim 0.1`$. This suggests that the constraint (1) is indeed satisfied, allowing proton acceleration to $`>10^{20}`$ eV.
The observed GRB redshift distribution implies a GRB rate of $`R_{\mathrm{GRB}}\sim 10/\mathrm{Gpc}^3\mathrm{yr}`$ at $`z\sim 1`$. The present, $`z=0`$, rate is less well constrained, since most observed GRBs originate at redshifts $`1\lesssim z\lesssim 2.5`$. Present data are consistent with both no evolution of GRB rate with redshift, and with strong evolution (following, e.g., the luminosity density evolution of QSOs or the evolution of star formation rate), in which $`R_{\mathrm{GRB}}(z=1)/R_{\mathrm{GRB}}(z=0)\approx 8`$. The energy observed in $`\gamma `$-rays reflects the fireball energy in accelerated electrons. If shock accelerated protons and electrons carry similar energy, as indicated by afterglow observations, then the $`z=0`$ rate of cosmic-ray production by GRBs is similar to the generation rate of $`\gamma `$-ray energy,
$$E^2(d\dot{n}_{CR}/dE)_{z=0}\approx 10^{44}\zeta \mathrm{erg}/\mathrm{Mpc}^3\mathrm{yr},$$
(3)
where $`\zeta `$ is in the range of $`1`$ to $`8`$.
## 4 Comparison with UHECR observations
In Fig. 1 we compare the UHECR spectrum, reported by the Fly’s Eye, the Yakutsk, and the AGASA experiments, with that predicted by the GRB model.
The flattening of the cosmic-ray spectrum at $`\sim 10^{19}`$ eV, combined with the lack of anisotropy and the evidence for a change in composition from heavy nuclei at low energy to light nuclei (protons) at high energy, suggests that an extra-Galactic source of protons dominates the flux at $`E>10^{19}`$ eV. The UHECR flux predicted by the GRB model is in remarkable agreement with the observed extra-Galactic flux.
The suppression of model flux above $`10^{19.7}`$ eV is due to energy loss of high energy protons in interaction with the microwave background, i.e. to the “GZK cutoff”. Both Fly’s Eye and Yakutsk data show a deficit in the number of events, consistent with the predicted suppression. The deficit is, however, only at a $`2\sigma `$ confidence level. The AGASA data is consistent with Fly’s Eye and Yakutsk results below $`10^{20}`$ eV. A discrepancy may be emerging at higher energy, $`>10^{20}`$ eV, where the Fly’s Eye and Yakutsk experiments detect 1 event each, and the AGASA experiment detects 6 events for similar exposure.
The flux above $`10^{20}\mathrm{eV}`$ is dominated by sources at distances $`<40\mathrm{Mpc}`$. Since the distribution of known astrophysical systems (e.g. galaxies, clusters of galaxies) is inhomogeneous on scales of tens of Mpc, significant deviations from the model predictions presented in Fig. 1 for a uniform source distribution are expected above $`10^{20}\mathrm{eV}`$. Clustering of cosmic-ray sources leads to a standard deviation, $`\sigma `$, in the expected number, $`N`$, of events above $`10^{20}`$ eV, given by $`\sigma /N=0.9(d_0/10\mathrm{M}\mathrm{p}\mathrm{c})^{0.9}`$, where $`d_0`$ is the unknown scale length of the source correlation function and $`d_0\approx 10`$ Mpc for field galaxies.
An order of magnitude increase in the exposure of UHECR experiments, compared to that available at present, is required to test for the existence of the GZK cutoff. Such an exposure would allow this test through an accurate determination of the spectrum in the energy range of $`10^{19.7}`$ eV to $`10^{20}`$ eV, where the effects of source inhomogeneities are expected to be small. Moreover, an order of magnitude increase in exposure will also allow one to determine the source correlation length $`d_0`$, through the detection of anisotropies in the arrival directions of $`\sim 10^{19.5}`$ eV cosmic-rays over angular scales of $`\mathrm{\Theta }\sim d_0/30`$ Mpc.
## 5 GRB model predictions for planned UHECR experiments
The rate at which GRBs occur within a distance of $`\sim 100`$ Mpc from Earth, the distance to which $`>10^{20}`$ eV proton propagation is limited due to interaction with the microwave background, is $`\sim 1`$ per 100 yr. This rate can be reconciled with the detection of several $`>10^{20}`$ eV events over a period of a few years only if there is a large dispersion, $`\gtrsim 100`$ yr, in the arrival time of protons produced in a single burst. The required dispersion is likely to result from deflection by random magnetic fields. A proton of energy $`E`$ propagating over a distance $`D`$ through a magnetic field of strength $`B`$ and correlation length $`\lambda `$ is deflected by an angle $`\theta _s\approx (D/\lambda )^{1/2}\lambda /R_L`$, which results in a time delay, compared to propagation along a straight line, $`\tau (E,D)\approx \theta _s^2D/4c\propto B^2\lambda `$. The random energy loss suffered by $`>10^{20}\mathrm{eV}`$ protons as they propagate, owing to the production of pions, implies that protons observed at Earth with a given energy have different energy histories along their propagation path. Thus, magnetic field deflection results not only in a delay, but also in a spread in arrival time of protons of fixed energy, comparable to the delay $`\tau `$.
The current upper bound on the inter-galactic magnetic field, $`B\lambda ^{1/2}\le 10^{-9}\mathrm{G}\mathrm{Mpc}^{1/2}`$, allows a spread as large as $`\tau (E=10^{20}\mathrm{eV},D=100\mathrm{M}\mathrm{p}\mathrm{c})\sim 10^5`$ yr, well above the minimum, $`\tau \sim 100`$ yr, required in the model. The magnetic field upper bound implies an upper bound on the number of GRBs contributing to the $`>10^{20}`$ eV flux at any given time, $`10^5/100=10^3`$. The upper bound on the number of sources contributing to the flux above $`E`$ decreases rapidly as $`E`$ increases beyond $`10^{20}`$ eV, as the propagation distance decreases with energy due to the increase in pion production energy loss rate. This rapid decrease implies that at $`E\gtrsim 3\times 10^{20}`$ eV there can be only a few sources contributing to the flux at any given time.
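The size of the allowed spread can be sketched numerically in Gaussian units (assuming $`\lambda =1`$ Mpc, so that $`B\lambda ^{1/2}`$ saturates the quoted bound; the result agrees with the $`\sim 10^5`$ yr figure to order of magnitude):

```python
# Sketch: deflection delay tau ~ theta_s^2 D / (4c) for a 1e20 eV proton.
import math

MPC = 3.086e24            # cm per Mpc
YEAR = 3.156e7            # s per yr
c = 3.0e10                # cm/s
e_esu = 4.803e-10         # electron charge in esu
erg_per_eV = 1.602e-12

E = 1e20 * erg_per_eV     # proton energy (erg)
B = 1e-9                  # field strength (G), saturating the bound
lam = 1.0 * MPC           # assumed correlation length
D = 100.0 * MPC           # propagation distance

R_L = E / (e_esu * B)                      # Larmor radius, ~100 Mpc
theta_s = math.sqrt(D / lam) * lam / R_L   # accumulated deflection angle
tau = theta_s**2 * D / (4.0 * c) / YEAR    # arrival-time spread (yr)

print(f"R_L = {R_L/MPC:.0f} Mpc, theta_s = {theta_s:.3f} rad, tau = {tau:.1e} yr")
```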
The GRB model therefore makes a unique prediction: the UHECR flux at energy $`E\gtrsim 3\times 10^{20}`$ eV should be dominated by a few sources on the sky. These sources should have narrowly peaked energy spectra, and the brightest sources should be different at different energies. This is due to the fact that at any fixed time a given burst is observed in UHECRs only over a narrow range of energy: if a burst is currently observed at some energy $`E`$ then UHECRs of much lower (higher) energy from this burst will arrive (have arrived) mainly in the future (past). Testing the GRB model predictions requires an exposure 10 times larger than that of present experiments. Such an increase is expected to be provided by the planned Auger detectors.
## 6 $`10^{14}`$ eV Neutrinos
Protons accelerated in the fireball to high energy lose energy through photo-pion production in interactions with fireball photons. The decay of charged pions results in the production of high energy neutrinos. The observed energy of a proton, for which the observed 1 MeV photons are at the threshold of the $`\mathrm{\Delta }`$-resonance, is $`\approx 0.2\mathrm{GeV}^2\mathrm{\Gamma }^2/(1\mathrm{MeV})`$. Typically, the neutrino receives $`\sim 5\%`$ of the proton energy. Thus, the typical energy of neutrinos resulting from interaction of accelerated protons with GRB photons is
$$E_\nu ^b\approx 5\times 10^{14}\mathrm{\Gamma }_{300}^2\mathrm{eV}.$$
(4)
The flux normalization is determined by the efficiency of pion production. The fraction of energy lost to pion production by protons producing the neutrino flux above $`E_\nu ^b`$ is essentially independent of energy and is given by
$$f_\pi \approx 0.2\frac{L_{\gamma ,52}}{\mathrm{\Gamma }_{300}^4\mathrm{\Delta }t_{10\mathrm{m}\mathrm{s}}}.$$
(5)
If GRBs are the sources of UHECRs, then using Eqs. (5) and (3) with $`\zeta \approx 1`$, the expected GRB neutrino flux is
$$E_\nu ^2\mathrm{\Phi }_{\nu _x}\approx 1.5\times 10^{-9}\left(\frac{f_\pi }{0.2}\right)\times \mathrm{min}\{1,E_\nu /E_\nu ^b\}\frac{\mathrm{GeV}}{\mathrm{cm}^2\mathrm{s}\mathrm{sr}},$$
(6)
where $`\nu _x`$ stands for $`\nu _\mu `$, $`\overline{\nu }_\mu `$ and $`\nu _e`$.
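For quick numerical reference, Eqs. (4)–(6) can be bundled into a short script (a sketch; the scalings are exactly those displayed above, with the fiducial parameters of §2 as defaults):

```python
# Sketch: fiducial neutrino break energy, pion efficiency, and flux.
def E_nu_break(Gamma=300.0):
    """Break energy of Eq. (4), in eV."""
    return 5e14 * (Gamma / 300.0)**2

def f_pi(L52=1.0, Gamma=300.0, dt_10ms=1.0):
    """Fraction of proton energy lost to pions, Eq. (5)."""
    return 0.2 * L52 / ((Gamma / 300.0)**4 * dt_10ms)

def nu_flux(E_nu, L52=1.0, Gamma=300.0, dt_10ms=1.0):
    """E_nu^2 Phi_nu of Eq. (6), in GeV / (cm^2 s sr)."""
    return 1.5e-9 * (f_pi(L52, Gamma, dt_10ms) / 0.2) \
           * min(1.0, E_nu / E_nu_break(Gamma))

print(E_nu_break())      # 5e14 eV
print(f_pi())            # 0.2
print(nu_flux(5e14))     # 1.5e-9 GeV / (cm^2 s sr) at and above the break
```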
The flux of $`\sim 10^{14}`$ eV neutrinos given in Eq. (6) implies that large area, $`\sim 1\mathrm{km}^2`$, high-energy neutrino telescopes, which are being constructed to detect cosmologically distant neutrino sources, would observe several tens of events per year correlated in time and in arrival direction with GRBs. Detection of neutrinos from GRBs could be used to test the simultaneity of neutrino and photon arrival to an accuracy of $`\sim 1\mathrm{s}`$ ($`\sim 1\mathrm{ms}`$ for short bursts), checking the assumption of special relativity that photons and neutrinos have the same limiting speed to one part in $`10^{16}`$, and the weak equivalence principle, according to which photons and neutrinos should suffer the same time delay as they pass through a gravitational potential, to one part in $`10^6`$ (considering the Galactic potential alone).
The model discussed above predicts the production of high energy muon and electron neutrinos. However, if the atmospheric neutrino anomaly has the explanation it is usually given, oscillation to $`\nu _\tau `$’s with mass $`\sim 0.1\mathrm{eV}`$, then one should detect equal numbers of $`\nu _\mu `$’s and $`\nu _\tau `$’s. Up-going $`\tau `$’s, rather than $`\mu `$’s, would be a distinctive signature of such oscillations. Since $`\nu _\tau `$’s are not expected to be produced in the fireball, looking for $`\tau `$’s would be an “appearance experiment.” To allow flavor change, the difference in squared neutrino masses, $`\mathrm{\Delta }m^2`$, should exceed a minimum value proportional to the ratio of neutrino energy to source distance. A burst at $`\sim 100\mathrm{Mpc}`$ producing $`10^{14}\mathrm{eV}`$ neutrinos can test for $`\mathrm{\Delta }m^2\gtrsim 10^{-16}\mathrm{eV}^2`$, 5 orders of magnitude more sensitive than solar neutrinos.
# “Velocities” in Quantum Mechanics
## Example 1. The free particle
Let us first consider the free particle in two dimensions as an example of a stationary state. The plane wave is of the form
$$u_{p_xp_y}(x,y)=ae^{i(p_xx+p_yy)/\hbar },$$
(16)
where $`a`$ is independent of $`t`$, $`x`$ and $`y`$. The probability current of the plane wave is
$$j_x(x,y)=|a|^2p_x/m,j_y(x,y)=|a|^2p_y/m,$$
and hence their “velocity” is
$$\text{}v_x\text{}=p_x/m,\text{}v_y\text{}=p_y/m.$$
Equations (9) and (12) are easily seen to hold for the plane wave. Therefore the velocity potential of the plane wave, satisfying (10) or (11), is
$$\mathrm{\Phi }=(p_xx+p_yy)/m,$$
(17)
and the stream function satisfying (13) is
$$\mathrm{\Psi }=(p_xy-p_yx)/m.$$
(18)
For the state represented by (16), the complex velocity potential (14) gives, from (17) and (18)
$`W`$ $`=(p_xx+p_yy)/m+i(p_xy-p_yx)/m`$
$`=(p_x-ip_y)z/m.`$ (19)
According to hydrodynamics, the flow round the angle $`\pi /n`$ is expressed by the complex velocity potential
$$W=Az^n,$$
(20)
$`A`$ being a number. Equation (19) is of the form (20) with $`n=1`$, and it shows that the complex velocity potential of the plane wave just expresses uniform flow.
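This identification can be checked symbolically: the real and imaginary parts of (19) reproduce (17) and (18), and $`dW/dz`$ returns the uniform flow “$`v_x`$”$`-i`$“$`v_y`$”, the standard hydrodynamic relation (a sketch using sympy):

```python
# Sketch: verifying that W = (p_x - i p_y) z / m encodes the plane-wave flow.
import sympy as sp

x, y, px, py, m = sp.symbols('x y p_x p_y m', real=True)
z = x + sp.I * y

W = (px - sp.I * py) * z / m
Phi, Psi = sp.re(W), sp.im(W)   # velocity potential and stream function

print(sp.simplify(Phi - (px*x + py*y)/m))    # 0: Eq. (17)
print(sp.simplify(Psi - (px*y - py*x)/m))    # 0: Eq. (18)

v = sp.diff(W, x)               # dW/dz, since dz/dx = 1
print(sp.simplify(sp.re(v) - px/m))          # 0: "v_x" = p_x/m
print(sp.simplify(sp.im(v) + py/m))          # 0: "v_y" = p_y/m
```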
## Example 2. The harmonic oscillator
We shall now consider the eigenstate of the two-dimensional harmonic oscillator as an example of a closed state of the stable system. (The $`\omega `$ here, denoting the angular frequency, is, of course, to be distinguished from the $`\omega `$ denoting the vorticity.) The eigenfunction is, in terms of the Cartesian coordinates $`x`$, $`y`$,
$$u_{n_xn_y}(x,y)=N_{n_x}N_{n_y}e^{-\alpha ^2(x^2+y^2)/2}H_{n_x}(\alpha x)H_{n_y}(\alpha y)\left(\alpha \equiv \sqrt{m\omega /\hbar }\right),$$
(21)
where $`N_{n_x}`$, $`N_{n_y}`$ are the normalizing constants. But the Hermite polynomials $`H_{n_x}(\alpha x)`$, $`H_{n_y}(\alpha y)`$ are real functions of $`x`$, $`y`$, respectively. Thus the probability current of the harmonic oscillator vanishes, and with it the “velocity”. The velocity potential and the stream function therefore vanish as well. For the state represented by (21), the complex velocity potential gives
$$W=0,$$
(22)
which expresses fluid at rest in hydrodynamics. This fluid at rest, however, is not the only one that is physically permissible for a closed state in quantum mechanics, as we can also have flows which are vortical. For these flows the vorticity may contain singularities in the $`xy`$-plane. Such flows will be dealt with in Example 3.
## Example 3. Flows in a central field of force
As a final example we shall consider the bound state in a certain central field of force. The eigenfunction is, in terms of the polar coordinates $`r`$, $`\theta `$, $`\phi `$,
$$u_{nlm_l}(r,\theta ,\phi )=R_{nl}(r)Y_{lm_l}(\theta ,\phi ),$$
(23)
where the spherical harmonics $`Y_{lm_l}(\theta ,\phi )`$ are of the form
$$Y_{lm_l}(\theta ,\phi )=C_{lm_l}P_l^{|m_l|}(\mathrm{cos}\theta )e^{im_l\phi },$$
(24)
and $`C_{lm_l}`$ are the normalizing constants. But $`R_{nl}(r)`$ for the bound state and the associated Legendre polynomials $`P_l^{|m_l|}(\mathrm{cos}\theta )`$ are real functions. The polar coordinates $`j_r`$, $`j_\theta `$, $`j_\phi `$ of (2) in a central field of force are thus
$$j_r(r,\theta ,\phi )=j_\theta (r,\theta ,\phi )=0,j_\phi (r,\theta ,\phi )=|u_{nlm_l}(r,\theta ,\phi )|^2\frac{m_l\hbar }{mr\mathrm{sin}\theta }.$$
In consequence, a simple treatment becomes possible: we may consider the “velocity” for a definite direction $`\theta `$, introduce the radius $`\rho =r\mathrm{sin}\theta `$ in the above equations, and obtain a problem in two degrees of freedom $`\rho `$, $`\phi `$. The two-dimensional polar coordinates “$`v_\rho `$”, “$`v_\phi `$” of (5) give
$$\text{“}v_\rho \text{”}=0,\text{“}v_\phi \text{”}=\frac{m_l\hbar }{m\rho }.$$
Their divergence readily vanishes. If we transform to two-dimensional polar coordinates $`\rho `$, $`\phi `$, equations (13) become
$$\text{}v_\rho \text{}=\frac{1}{\rho }\frac{\mathrm{\Psi }}{\phi },\text{}v_\phi \text{}=\frac{\mathrm{\Psi }}{\rho },$$
(25)
and the stream function in a central field of force is thus
$$\mathrm{\Psi }=-\frac{m_l\hbar }{m}\mathrm{log}\rho .$$
(26)
On the other hand, the vorticity (8) satisfies, with the help of (25),
$$\omega =\frac{1}{\rho }\frac{\partial }{\partial \rho }\left(\rho \text{“}v_\phi \text{”}\right)-\frac{1}{\rho }\frac{\partial }{\partial \phi }\text{“}v_\rho \text{”}=-\nabla ^2\mathrm{\Psi },$$
(27)
where $`\nabla ^2`$ is written for the two-dimensional Laplacian operator
$$\nabla ^2\equiv \frac{1}{\rho }\frac{\partial }{\partial \rho }\left(\rho \frac{\partial }{\partial \rho }\right)+\frac{1}{\rho ^2}\frac{\partial ^2}{\partial \phi ^2}.$$
On substituting (26) in (27) we obtain
$$\omega =\frac{m_l\hbar }{m}\nabla ^2\mathrm{log}\rho =2\pi \frac{m_l\hbar }{m}\delta (𝝆),$$
(28)
where $`\delta (𝝆)`$ is the two-dimensional Dirac $`\delta `$ function. Thus the vorticity in a central field of force vanishes everywhere except at the origin $`\rho =0`$. This singularity lies along the quantization axis, $`\theta =0`$ and $`\pi `$, in three-dimensional space. The velocity potential satisfying (10) or (11) in a central field of force is
$$\mathrm{\Phi }=\frac{m_l\hbar }{m}\phi .$$
(29)
For the state represented by (23), the complex velocity potential (14) gives, from (29) and (26)
$`W`$ $`={\displaystyle \frac{m_l\hbar }{m}}\phi -i{\displaystyle \frac{m_l\hbar }{m}}\mathrm{log}\rho `$
$`=-i{\displaystyle \frac{m_l\hbar }{m}}\mathrm{log}z,`$ (30)
since $`z=\rho e^{i\phi }`$. According to hydrodynamics, this expresses a vortex filament. The strength of the vortex filament is defined by the circulation round a closed contour $`C`$ encircling the singularity at the origin $`\rho =0`$
$$\mathrm{\Gamma }\equiv \oint _C\text{“}v_\phi \text{”}ds.$$
(31)
We make use of Stokes’ theorem,
$$\mathrm{\Gamma }=\int _S\omega dS,$$
(32)
where $`S`$ is a two-dimensional surface whose boundary is the closed contour $`C`$. On substituting (28) in (32) we obtain
$$\mathrm{\Gamma }=2\pi \frac{m_l\hbar }{m},$$
(33)
where the eigenvalue $`m_l`$ of a component of the angular momentum is an integer. Equation (33) informs us that the circulations are quantized in units of $`2\pi \hbar /m`$ for the state (23) moving in a central field of force. Equation (33) is known as Onsager’s quantization of circulations in superfluidity.
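The circulation integral (31) can also be evaluated directly on a circle of radius $`\rho `$, where $`ds=\rho d\phi `$ (a symbolic sketch):

```python
# Sketch: circulation of "v_phi" = m_l*hbar/(m*rho) around the vortex filament.
import sympy as sp

rho, phi, m, ml, hbar = sp.symbols('rho phi m m_l hbar', positive=True)

v_phi = ml * hbar / (m * rho)
Gamma = sp.integrate(v_phi * rho, (phi, 0, 2 * sp.pi))  # ds = rho*dphi

print(sp.simplify(Gamma))   # 2*pi*m_l*hbar/m, independent of rho: Eq. (33)
```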
## Concluding remarks
The above examples show the great superiority of the “velocities” in dealing with flows in quantum mechanics. In particular, two-dimensional quantum flows can be expressed by complex velocity potentials and their analytical properties. Up to the present we have considered only stable systems. For a non-stationary state of an unstable system the probability density (1) and the probability current (2) are not simple, i.e. they generally depend on the time $`t`$ and the coordinate $`𝒓`$. However, the work will be simple even for such unstable systems, since the “velocity” (5) does not involve $`t`$. In fact, for the two-dimensional parabolic potential barrier, as an example of an unstable system, there is a flow round a right angle that is expressed by the complex velocity potential (20) with $`n=2`$.