Wednesday, 30 November 2011

One step closer to dark matter in universe

ScienceDaily (Oct. 31, 2011) — Scientists all over the world are working feverishly to find the dark matter in the universe. Now researchers at Stockholm University have taken one step closer to solving the enigma with a new method.

The universe is still a mystery. We know what only about 5 percent of it consists of; the rest is simply unknown. Researchers have established that a major portion, about 23 percent of the universe, consists of a new kind of matter. No one has seen this matter, and no one knows what it consists of. The remaining roughly 72 percent of the universe is made up of something even more enigmatic, called dark energy. Jan Conrad and Maja Llena Garde are scientists at Fysikum, Stockholm University, and the Oskar Klein Center for Cosmoparticle Physics, and they are part of the international research team that has taken a giant step toward finding dark matter with the help of a new method.

"With our new method, for the first time we have been able to exclude models regarded by many as the most natural ones. Previous attempts did not achieve the same sensitivity. What's more, our results are especially reliable," says Jan Conrad.

"We can't see dark matter because it doesn't interact with the matter we know about. Nor does it emit any light. It's virtually invisible. But we can determine that it affects the matter we're familiar with."

"We see how the rotation of galaxies is affect by something that weighs a lot but is invisible. We also see how the gas in galaxy clusters doesn't move as it would if there were only visible matter present. So we know it's there. The question is simply what it is. Many theoretical models have been developed to predict particles that meet the requirements for being identified as dark matter. But experiments are needed if we are to determine whether any of these models are correct," says Jan Conrad.

Since dark matter is invisible, we can only see traces of it, and one way to do this is to look at light with extremely high energy, so-called gamma radiation. With the help of the satellite-borne Fermi Large Area Telescope, scientists can study gamma radiation and look for traces of dark matter.

"We've looked at gamma radiation from dwarf galaxies. These galaxies are small and dim, but extremely massive, so they seem to consist largely of dark matter. Unfortunately we still haven't detected a gamma signal from the dark matter in these objects, but we are definitely getting closer. Our new method involves looking at several dwarf galaxies at the same time and combining the observations in a new way, which yields excellent results. This is an exciting time for dark matter research, because we're getting closer and closer," says Maja Llena Garde.

"This is truly a giant step forward in our pursuit of dark matter," says the director of the Oskar Klein Center, Lars Bergström. "With my colleague Joakim Edsjö, I've studied these processes theoretically for more than ten years, but this is the first time important experimental breakthroughs are being seen. Now we just hope that Jan, Maja, and the Fermi team will continue this exciting quest using their new method."

Story Source:

The above story is reprinted from materials provided by Expertanswer, via AlphaGalileo.

Journal Reference:

The Fermi-LAT Collaboration: M. Ackermann, M. Ajello, A. Albert, W. B. Atwood, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, K. Bechtol, R. Bellazzini, B. Berenji, R. D. Blandford, E. D. Bloom, E. Bonamente, A. W. Borgland, J. Bregeon, M. Brigida, P. Bruel, R. Buehler, T. H. Burnett, S. Buson, G. A. Caliandro, R. A. Cameron, B. Canadas, P. A. Caraveo, J. M. Casandjian, C. Cecchi, E. Charles, A. Chekhtman, J. Chiang, S. Ciprini, R. Claus, J. Cohen-Tanugi, J. Conrad, S. Cutini, A. de Angelis, F. de Palma, C. D. Dermer, S. W. Digel, E. do Couto e Silva, P. S. Drell, A. Drlica-Wagner, L. Falletti, C. Favuzzi, S. J. Fegan, E. C. Ferrara, Y. Fukazawa, S. Funk, P. Fusco, F. Gargano, D. Gasparrini, N. Gehrels, S. Germani, N. Giglietto, F. Giordano, M. Giroletti, T. Glanzman, G. Godfrey, I. A. Grenier, et al. Constraining dark matter models from a combined analysis of Milky Way satellites with the Fermi Large Area Telescope. Physical Review Letters, 2011

Structure, not scientists to blame for Los Alamos failings, article says

ScienceDaily (Nov. 1, 2011) — Policy decisions and poor management have substantially undermined the US Los Alamos National Laboratory -- and, consequently, national security, according to an article available November 1 in the current issue of the Bulletin of the Atomic Scientists, published by SAGE. The article calls into question media and government stereotypes that have blamed Los Alamos's scientists for the decline.

According to George Mason University professor of anthropology and sociology Hugh Gusterson, who has studied America's nuclear weapons scientists since the 1980s, morale at Los Alamos is the worst it has ever been in the lab's seven-decade history. Its ability to function as an institution and to superintend the nuclear stockpile has been substantially eroded, he writes. Driven by a mistaken belief that Los Alamos's organizational culture is characterized by arrogance and carelessness, congressmen and government officials are to blame for framing Los Alamos as an institution in need of reform and for implementing deleterious management practices, which have reduced effectiveness, Gusterson writes.

Gusterson is an expert on nuclear culture, international security, and the anthropology of science. His article, "The assault on Los Alamos National Laboratory: A drama in three acts," highlights the decline of Los Alamos, the famous nuclear laboratory originally established by J. Robert Oppenheimer in the high desert of New Mexico during World War II.

The first phase began with a media circus when Chinese-American scientist Wen Ho Lee downloaded secret computer codes in 1999. Lee was arrested and charged on 59 counts of mishandling national security information, 58 of which were dropped.

The media reinforced the perception that Lee's behavior was symptomatic of a culture of laxness at Los Alamos. Security was tightened, yet additional disks were misplaced. FBI agents descended on Los Alamos, administering polygraphs to weapons scientists, commandeering their offices, and dragging some from their beds at night for interrogations. The National Nuclear Security Administration was created to superintend weapons labs and General Eugene Habiger was put in charge of security at Los Alamos and the nation's other weapons lab, Lawrence Livermore.

The 2003 appointment of Pete Nanos as director of Los Alamos marked the next phase of decline. After more disks apparently went missing and a student was hit in the eye by a laser beam, Nanos called for swift and extreme action. Calling lab employees "cowboys and buttheads" who thought they were above the rules -- and describing "a culture of arrogance" and "suicidal denial" at a news conference -- he suspended lab operations for up to seven months, forcing employees to retrain and reflect on security practices.

The shutdown cost $370 million. Both Nanos and his actions were deeply unpopular with lab staff. Nanos abruptly resigned in 2005. It turned out the disks had not gone missing, but had in fact never existed. It was an inventory management error. Extreme and destructive acts of cultural reengineering had cost the Los Alamos National Laboratory and, presumably, national security dearly.

Next, instead of renewing the University of California's management contract, the federal government put the contract out to bid. Los Alamos National Security (LANS), a consortium headed by the Bechtel Corporation with the University of California as a junior partner, won the contract in 2005. A year later, it also won the contract to run the lab at Livermore.

To boost profits, Bechtel increased the management fee tenfold, rewarding its senior LANS officials. The budget was static but costs increased, resulting in heavy job losses at the Livermore Laboratory. New managers did not establish the same rapport with scientists as previous managers who had risen through the ranks. Peer-reviewed publication output by scientists dropped sharply. But the number and quality of articles published, papers given, and experiments conducted by lab scientists were now irrelevant to the government's evaluation of managerial effectiveness. Scientists were discouraged from raising concerns, which could impact management bonuses.

Gusterson concludes that misattribution of Los Alamos's problems to a pathological organizational culture involved at least two misreadings of the situation: The actions of a rogue individual (Lee) were confused with the informal norms of an entire organization, and the organizational dysfunction at Los Alamos has been misdiagnosed as a problem of culture when it is more likely a problem of structure.

"Having survived the antinuclear protests of the 1980s and the end of the Cold War a few years later, American nuclear weapons scientists are now finding that the main threat to their craft comes from an unexpected source: politicians and administrators who are supposed to be on their side," says Gusterson. "As so often seems to be the case, well-meaning attempts to make the country more secure are having the opposite effect."

Story Source:

The above story is reprinted from materials provided by SAGE Publications, via AlphaGalileo.

Journal Reference:

H. Gusterson. The assault on Los Alamos National Laboratory: A drama in three acts. Bulletin of the Atomic Scientists, 2011; 67 (6): 9 DOI: 10.1177/0096340211426631

First-of-a-kind tension wood study broadens biofuels research

ScienceDaily (Oct. 25, 2011) — Taking a cue from Mother Nature, researchers at the Department of Energy's BioEnergy Science Center have undertaken a first-of-its-kind study of a naturally occurring phenomenon in trees to spur the development of more efficient bioenergy crops.

Tension wood, which forms naturally in hardwood trees in response to bending stress, is known to possess unique features that render it desirable as a bioenergy feedstock. Although individual elements of tension wood have been studied previously, the BESC team is the first to use a comprehensive suite of techniques to systematically characterize tension wood and link the wood's properties to sugar release. Plant sugars, stored in the form of cellulose, are fermented into alcohol for use as biofuel.

"There has been no integrated study of tension stress response that relates the molecular and biochemical properties of the wood to the amount of sugar that is released," said Oak Ridge National Laboratory's Udaya Kalluri, a co-author on the study.

The work, published in Energy & Environmental Science, describes tension wood properties including an increased number of woody cells, thicker cell walls, more crystalline forms of cellulose and lower lignin levels, all of which are desired in a biofuel crop.

"Tension wood in poplar trees has a special type of cell wall that is of interest because it is composed of more than 90 percent cellulose, whereas wood is normally composed of 40 to 55 percent cellulose," Kalluri said. "If you increase the cellulose in your feedstock material, then you can potentially extract more sugars as the quality of the wood has changed. Our study confirms this phenomenon."

The study's cohesive approach also provides a new perspective on the natural plant barriers that prevent the release of sugars necessary for biofuel production, a trait scientists term recalcitrance.

"Recalcitrance of plants is ultimately a reflection of a series of integrated plant cell walls, components, structures and how they are put together," said co-author Arthur Ragauskas of Georgia Institute of Technology. "This paper illustrates that you need to use an holistic, integrated approach to study the totality of recalcitrance."

Using the current study as a model, the researchers are extending their investigation of tension wood down to the molecular level and hope to eventually unearth the genetic basis behind its desirable physical features. Although tension wood itself is not considered to be a viable feedstock option, insight gleaned from studying its unique physical and molecular characteristics could be used to design and select more suitably tailored bioenergy crops.

"This study exemplifies how the integrated model of BESC can bring together such unique research expertise," said BESC director Paul Gilna. "The experimental design in itself is reflective of the multidisciplinary nature of a DOE Bioenergy Research Center."

The research team also includes Georgia Institute of Technology's Marcus Foston, Chris Hubbell, Reichel Samuel, Seokwon Jung and Hu Fan; National Renewable Energy Laboratory's Robert Sykes, Shi-You Ding, Yining Zeng, Erica Gjersing and Mark Davis; and ORNL's Sara Jawdy and Gerald Tuskan.

Story Source:

The above story is reprinted from materials provided by DOE/Oak Ridge National Laboratory.

Journal Reference:

Marcus Foston, Christopher A. Hubbell, Reichel Samuel, Seokwon Jung, Hu Fan, Shi-You Ding, Yining Zeng, Sara Jawdy, Mark Davis, Robert Sykes, Erica Gjersing, Gerald A. Tuskan, Udaya Kalluri, Arthur J. Ragauskas. Chemical, ultrastructural and supramolecular analysis of tension wood in Populus tremula x alba as a model substrate for reduced recalcitrance. Energy & Environmental Science, 2011; DOI: 10.1039/C1EE02073K

Nanoparticles and their size may not be big issues

ScienceDaily (Oct. 24, 2011) — If you've ever eaten from silverware or worn copper jewelry, you've been in a perfect storm in which nanoparticles were dropped into the environment, say scientists at the University of Oregon.

Since the emergence of nanotechnology, researchers, regulators and the public have been concerned that the potential toxicity of nano-sized products might threaten human health by way of environmental exposure.

Now, with the help of high-powered transmission electron microscopes, chemists have captured never-before-seen views of minuscule metal nanoparticles naturally being created by silver articles such as wire, jewelry and eating utensils in contact with other surfaces. It turns out, researchers say, nanoparticles have been in contact with humans for a long, long time.

The project involved researchers in the UO's Materials Science Institute and the Safer Nanomaterials and Nanomanufacturing Initiative (SNNI), in collaboration with UO technology spinoff Dune Sciences Inc. SNNI is an initiative of the Oregon Nanoscience and Microtechnologies Institute (ONAMI), a state signature research center dedicated to research, job growth and commercialization in the areas of nanoscale science and microtechnologies.

The research -- detailed in a paper placed online in advance of regular publication in the American Chemical Society's journal ACS Nano -- focused on understanding the dynamic behavior of silver nanoparticles on surfaces when exposed to a variety of environmental conditions.

Using a new approach developed at UO that allows for the direct observation of microscopic changes in nanoparticles over time, researchers found that silver nanoparticles deposited on the surface of their SMART Grids electron microscope slides began to transform in size, shape and particle populations within a few hours, especially when exposed to humid air, water and light. Similar dynamic behavior and new nanoparticle formation were observed when the study was extended to look at macro-sized silver objects such as wire or jewelry.

"Our findings show that nanoparticle 'size' may not be static, especially when particles are on surfaces. For this reason, we believe that environmental health and safety concerns should not be defined -- or regulated -- based upon size," said James E. Hutchison, who holds the Lokey-Harrington Chair in Chemistry. "In addition, the generation of nanoparticles from objects that humans have contacted for millennia suggests that humans have been exposed to these nanoparticles throughout time. Rather than raise concern, I think this suggests that we would have already linked exposure to these materials to health hazards if there were any."

Any potential federal regulatory policies, the research team concluded, should allow for the presence of background levels of nanoparticles and their dynamic behavior in the environment.

Because copper behaved similarly, the researchers theorize that their findings represent a general phenomenon for metals readily oxidized and reduced under certain environmental conditions. "These findings," they wrote, "challenge conventional thinking about nanoparticle reactivity and imply that the production of new nanoparticles is an intrinsic property of the material that is not strongly size dependent."

While not addressed directly, Hutchison said, the naturally occurring and spontaneous activity seen in the research suggests that exposure to toxic metal ions, for example, might not be reduced simply by using larger particles in the presence of living tissue or organisms.

Co-authors with Hutchison on the paper were Richard D. Glover, a doctoral student in Hutchison's laboratory, and John M. Miller, a research associate. Hutchison and Miller were co-founders of Dune Sciences Inc., a Eugene-based company that specializes in products and services geared toward the development and commercialization of nano-enabled products. Miller currently is the company's chief executive officer; Hutchison is chief science officer.

The electron microscopes used in this study are located at the Center for Advanced Materials Characterization in Oregon in the underground Lorry I. Lokey Laboratories at the UO. The U.S. Air Force Research Laboratory and W.M. Keck Foundation supported the research. Glover's participation also was funded by the National Science Foundation's STEM (science, technology, engineering, mathematics) Fellows in K-12 Education Program.

Story Source:

The above story is reprinted from materials provided by University of Oregon.

Journal Reference:

Richard D. Glover, John M. Miller, James E. Hutchison. Generation of Metal Nanoparticles from Silver and Copper Objects: Nanoparticle Dynamics on Surfaces and Potential Sources of Nanoparticles in the Environment. ACS Nano, 2011; 111019095813007 DOI: 10.1021/nn2031319

VISTA finds new globular star clusters and sees right through the heart of the Milky Way

ScienceDaily (Oct. 20, 2011) — Two newly discovered globular clusters have been added to the total of just 158 known globular clusters in our Milky Way. They were found in new images from ESO's VISTA survey telescope as part of the VISTA Variables in the Via Lactea (VVV) survey. This survey has also turned up the first star cluster that is far beyond the centre of the Milky Way and whose light has had to travel right through the dust and gas in the heart of our galaxy to get to us.

The dazzling globular cluster called UKS 1 dominates the right-hand side of the first of the new infrared images from ESO's VISTA survey telescope at the Paranal Observatory in Chile. But if you can drag your gaze away, there is a surprise lurking in this very rich star field -- a fainter globular cluster that was discovered in the data from one of VISTA's surveys. You will have to look closely to see the other star cluster, which is called VVV CL001: it is a small collection of stars in the left half of the image.

But VVV CL001 is just the first of VISTA's globular discoveries. The same team has found a second object, dubbed VVV CL002, which appears in image b [1]. This small and faint grouping may also be the globular cluster that is the closest known to the centre of the Milky Way. The discovery of a new globular cluster in our Milky Way is very rare. The last one was discovered in 2010, and only 158 globular clusters were known in our galaxy before the new discoveries.

These new clusters are early discoveries from the VISTA Variables in the Via Lactea (VVV) survey that is systematically studying the central parts of the Milky Way in infrared light. The VVV team is led by Dante Minniti (Pontificia Universidad Católica de Chile) and Philip Lucas (Centre for Astrophysics Research, University of Hertfordshire, UK).

As well as globular clusters, VISTA is finding many open, or galactic, clusters, which generally contain fewer, younger stars than globular clusters and are far more common (eso1128). Another newly announced cluster, VVV CL003, seems to be an open cluster that lies in the direction of the heart of the Milky Way, but much further away, about 15 000 light-years beyond the centre. This is the first such cluster to be discovered on the far side of the Milky Way.

Given the faintness of the newly found clusters, it is no wonder that they have remained hidden for so long; up until a few years ago, UKS 1 (seen in image a), which easily outshines the newcomers, was actually the dimmest known globular cluster in the Milky Way. Because of the absorption and reddening of starlight by interstellar dust, these objects can only be seen in infrared light and VISTA, the world's largest survey telescope, is ideally suited to searching for new clusters hidden behind dust in the central parts of the Milky Way [2].

One intriguing possibility is that VVV CL001 is gravitationally bound to UKS 1 -- making these two stellar groups the Milky Way's first binary globular cluster pair. But this could just be a line-of-sight effect with the clusters actually separated by a vast distance.

These VISTA pictures were created from images taken through near-infrared filters J (shown in blue), H (shown in green), and Ks (shown in red). The images show only a small fraction of the full VISTA field of view.

Notes

[1] The discovery of the additional new clusters was just announced in San Juan, Argentina, during the first bi-national meeting of the Argentinian and Chilean astronomical associations.

[2] The tiny dust grains that form huge clouds within galaxies scatter blue light much more strongly than red and infrared light. As a result astronomers can see through the dust much more effectively if they study infrared light rather than the usual visible radiation that our eyes are sensitive to.
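
For a rough sense of scale, the idealized Rayleigh limit (scattering efficiency proportional to 1/wavelength^4, valid only for grains much smaller than the wavelength) gives the comparison below. Real interstellar extinction follows a shallower law, and the wavelengths used are merely representative values for blue light and VISTA's Ks band.

# Idealized Rayleigh-limit comparison of blue versus near-infrared scattering
# (representative wavelengths; real interstellar dust follows a shallower law).
blue_nm = 450        # blue visible light
ks_nm   = 2150       # approximate centre of VISTA's Ks band

ratio = (ks_nm / blue_nm) ** 4
print(f"Rayleigh limit: blue light is scattered ~{ratio:.0f}x more strongly than Ks-band light")
# roughly 500x, which is why the infrared view punches through the dust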

Story Source:

The above story is reprinted from materials provided by ESO.

New device measures viscosity of ketchup and cosmetics

ScienceDaily (Oct. 24, 2011) — A device that can measure and predict how liquids flow under different conditions will ensure consumer products -- from make-up to ketchup -- are of the right consistency.

The technology developed at the University of Sheffield enables engineers to monitor, in real time, how the viscous components (rheology) of liquids change during a production process, making it easier, quicker and cheaper to control the properties of the liquid.

The research is a joint project between the University's Department of Chemical and Biological Engineering and the School of Mathematics and Statistics. A paper describing the innovation is published Oct. 24, 2011 in the journal Measurement Science and Technology.

Dr Julia Rees from the University's Department of Applied Mathematics, who co-authored the study, said: "Companies that make liquid products need to know how the liquids will behave in different circumstances because these different behaviours can affect the texture, the taste or even the smell of a product."

The viscosity of most liquids changes under different conditions and designers often use complicated mathematical equations to determine what these changes might be.

The team from Sheffield has now developed a way of predicting these changes using a non-invasive sensor system that the liquid simply flows through. The sensor feeds information back through an electronic device that calculates a range of likely behaviours.

Dr Rees explains: "Measuring the individual components of a liquid's viscosity is called rheometry. We can produce equations to measure a liquid's total viscosity, but the rheology of most liquids is very complicated. Instead, we look at properties in a liquid that we can measure easily, and then apply maths to calculate the viscosity. The sensor device we have developed will be able to make these calculations for companies using a straightforward testing process."
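
As an illustration of that "measure something simple, then apply maths" idea, the sketch below fits the standard power-law (Ostwald-de Waele) fluid model referred to in the paper's title to a handful of made-up viscosity readings, then predicts the viscosity at an unmeasured shear rate. It is not the Sheffield group's algorithm, only the generic shape of such an inverse calculation.

# Fit a power-law fluid model to invented apparent-viscosity data and
# predict behaviour at a new shear rate (not the Sheffield method).
import numpy as np

shear_rate = np.array([1.0, 5.0, 20.0, 80.0])    # 1/s
viscosity  = np.array([12.0, 5.5, 2.6, 1.2])     # Pa.s, shear-thinning like ketchup

# power-law model: eta = K * shear_rate**(n - 1); taking logs makes it linear
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n, K = slope + 1.0, np.exp(intercept)
print(f"fitted flow index n = {n:.2f}, consistency K = {K:.1f}")

# predict how the fluid behaves at a shear rate that was never measured
gamma_new = 200.0
print(f"predicted viscosity at {gamma_new}/s: {K * gamma_new**(n - 1):.2f} Pa.s")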

Companies developing new products will be able to incorporate the device into their development process, meaning there will no longer be a need for 'grab samples' to be taken away for expensive laboratory testing, providing cost and efficiency savings.

The device can be made to any scale and can even be etched onto a microchip, with channels about the width of a human hair. This will be useful for testing where only small samples of fluid are available, for example in biological samples.

Dr Rees' team have developed a laboratory prototype of the system and are currently working to refine the technology and develop a design prototype.

Will Zimmerman, Professor of Biochemical Dynamical Systems in the Department of Chemical and Biological Engineering at the University of Sheffield, worked on the project alongside Dr Rees. He says: "Because the microrheometer works in real time, materials, time and energy will not be wasted when processing flaws are detected. Conservation is one of the best ways to 'green' industrial processing with greater efficiency. Ben Franklin's maxim, 'waste not, want not' is just as true today."

Story Source:

The above story is reprinted from materials provided by University of Sheffield.

Journal Reference:

H C Hemaka Bandulasena, William B Zimmerman, Julia M Rees. An inverse method for rheometry of power-law fluids. Measurement Science and Technology, 2011; 22 (12): 125402 DOI: 10.1088/0957-0233/22/12/125402

Tuesday, 29 November 2011

Laser light used to cool object to quantum ground state

ScienceDaily (Oct. 9, 2011) — For the first time, researchers at the California Institute of Technology (Caltech), in collaboration with a team from the University of Vienna, have managed to cool a miniature mechanical object to its lowest possible energy state using laser light. The achievement paves the way for the development of exquisitely sensitive detectors as well as for quantum experiments that scientists have long dreamed of conducting.

"We've taken a solid mechanical system -- one made up of billions of atoms -- and used optical light to put it into a state in which it behaves according to the laws of quantum mechanics. In the past, this has only been achieved with trapped single atoms or ions," says Oskar Painter, professor of applied physics and executive officer for applied physics and materials science at Caltech and the principal investigator on a paper describing the work that appears in the October 6 issue of the journal Nature.

As described in the paper, Painter and his colleagues have engineered a nanoscale object -- a tiny mechanical silicon beam -- such that laser light of a carefully selected frequency can enter the system and, once reflected, can carry thermal energy away, cooling the system.

By carefully designing each element of the beam as well as a patterned silicon shield that isolates it from the environment, Painter and colleagues were able to use the laser cooling technique to bring the system down to the quantum ground state, where mechanical vibrations are at an absolute minimum. Such a cold mechanical object could help detect very small forces or masses, whose presence would normally be masked by the noisy thermal vibrations of the sensor.

"In many ways, the experiment we've done provides a starting point for the really interesting quantum-mechanical experiments one wants to do," Painter says. For example, scientists would like to show that a mechanical system could be coaxed into a quantum superposition -- a bizarre quantum state in which a physical system can exist in more than one position at once. But they need a system at the quantum ground state to begin such experiments.

To reach the ground state, Painter's group had to cool its mechanical beam to a temperature below 100 millikelvin (about -273.05°C). That's because the beam is designed to vibrate at gigahertz frequencies (corresponding to a billion cycles per second) -- a range where a large number of phonons are present at room temperature. Phonons are the most basic units of vibration, just as photons are the most basic units, or packets, of light. All of the phonons in a system have to be removed to cool it to the ground state.
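
The 100-millikelvin figure can be traced to the Bose-Einstein occupancy of a vibrational mode: the mean number of thermal phonons is 1/(exp(hbar*omega/kT) - 1), and "ground state" in practice means pushing that number well below one. The short calculation below evaluates this for a representative 4 GHz mechanical frequency; that frequency is an assumption chosen for illustration, not a number quoted in this article.

# Mean thermal phonon number of a mechanical mode at frequency f and
# temperature T (Bose-Einstein distribution). 4 GHz is only a
# representative gigahertz-range value, not a figure from the article.
import numpy as np

hbar = 1.054_571_8e-34   # J*s
kB   = 1.380_649e-23     # J/K

def mean_phonons(f_hz, T_kelvin):
    x = hbar * 2 * np.pi * f_hz / (kB * T_kelvin)
    return 1.0 / np.expm1(x)

f = 4e9                                  # assumed GHz-range mechanical frequency
for T in (300, 20, 0.1):                 # room temperature, ~15x colder, 100 mK
    print(f"T = {T:>5} K : mean thermal phonons ~ {mean_phonons(f, T):.3g}")
# room temperature leaves well over a thousand phonons in the mode; only
# near 100 mK does the occupancy approach one, which is why additional
# laser cooling is needed to reach the ground state.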

Conventional means of cryogenically cooling to such temperatures exist but require expensive and, in some cases, impractical equipment. There's also the problem of figuring out how to measure such a cold mechanical system. To solve both problems, the Caltech team used a different cooling strategy.

"What we've done is used the photons -- the light field -- to extract phonons from the system," says Jasper Chan, lead author of the new paper and a graduate student in Painter's group. To do so, the researchers drilled tiny holes at precise locations in their mechanical beam so that when they directed laser light of a particular frequency down the length of the beam, the holes acted as mirrors, trapping the light in a cavity and causing it to interact strongly with the mechanical vibrations of the beam.

Because a shift in the frequency of the light is directly related to the thermal motion of the mechanical object, the light -- when it eventually escapes from the cavity -- also carries with it information about the mechanical system, such as the motion and temperature of the beam. Thus, the researchers have created an efficient optical interface to a mechanical element -- or an optomechanical transducer -- that can convert information from the mechanical system into photons of light.

Importantly, since optical light, unlike microwaves or electrons, can be transmitted over large, kilometer-length distances without attenuation, such an optomechanical transducer could be useful for linking different quantum systems -- a microwave system with an optical system, for example. While Painter's system involves an optical interface to a mechanical element, other teams have been developing systems that link a microwave interface to a mechanical element. What if those two mechanical elements were the same? "Then," says Painter, "I could imagine connecting the microwave world to the optical world via this mechanical conduit one photon at a time."

The Caltech team isn't the first to cool a nanomechanical object to the quantum ground state; a group led by former Caltech postdoctoral scholar Andrew Cleland, now at the University of California, Santa Barbara, accomplished this in 2010 using more conventional refrigeration techniques, and, earlier this year, a group from the National Institute of Standards and Technology in Boulder, Colorado, cooled an object to the ground state using microwave radiation. The new work, however, is the first in which a nanomechanical object has been put into the ground state using optical light.

"This is an exciting development because there are so many established techniques for manipulating and measuring the quantum properties of systems using optics," Painter says.

The other cooling techniques used starting temperatures of approximately 20 millikelvin -- more than 10,000 times cooler than room temperature. Ideally, to simplify designs, scientists would like to initiate these experiments at room temperature. Using laser cooling, Painter and his colleagues were able to perform their experiment at a much higher temperature -- only about 10 times lower than room temperature.

The work was supported by Caltech's Kavli Nanoscience Institute; the Defense Advanced Research Projects Agency's Microsystems Technology Office through a grant from the Air Force Office of Scientific Research; the European Commission; the European Research Council; and the Austrian Science Fund.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by California Institute of Technology, via EurekAlert!, a service of AAAS.

Journal Reference:

Jasper Chan, T. P. Mayer Alegre, Amir H. Safavi-Naeini, Jeff T. Hill, Alex Krause, Simon Gröblacher, Markus Aspelmeyer, Oskar Painter. Laser cooling of a nanomechanical oscillator into its quantum ground state. Nature, 2011; 478 (7367): 89 DOI: 10.1038/nature10461

Nano funnel used to generate extreme ultraviolet light pulses

ScienceDaily (Oct. 17, 2011) — If you want to avoid spilling when you are pouring liquids in the kitchen, you may appreciate a funnel. Funnels are not only useful tools in the kitchen. Light can also be efficiently concentrated with funnels. In this case, the funnels have to be about 10,000 times smaller.

An international team of scientists from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon (South Korea), the Max Planck Institute of Quantum Optics (MPQ) in Garching (Germany), and the Georgia State University (GSU) in Atlanta (USA) has now managed to concentrate the energy of infrared light pulses with a nano funnel and use the concentrated energy to generate extreme ultraviolet light flashes. These flashes, which repeated 75 million times per second, lasted only a few femtoseconds. The new technology can help in the future to measure the movement of electrons with the highest spatial and temporal resolution.

Light is convertible. The wavelengths composing the light can change through interactions with matter, where both the type of material and shape of the material are important for the frequency conversion. An international team of scientists from the Korea Advanced Institute of Science and Technology (KAIST), the Max Planck Institute of Quantum Optics (MPQ), and the Georgia State University (GSU) has now modified light waves with a nano funnel made out of silver. The scientists converted femtosecond laser pulses in the infrared spectral range to femtosecond light flashes in the extreme ultraviolet (EUV). Ultrashort, pulsed EUV light is used in laser physics to explore the inside of atoms and molecules. A femtosecond lasts only a millionth of a billionth of a second.

Light in the infrared (IR) can be converted to the EUV by a process known as high-harmonic generation, whereby the atoms are exposed to a strong electric field from the IR laser pulses. These fields have to be as strong as the fields holding the atom together. With these fields, electrons can be extracted from the atoms and accelerated back onto them with full force. Upon impact, highly energetic radiation in the EUV is generated.

To reach the necessary strong electric fields for the production of EUV light, the team of scientists has now combined this scheme with a nano funnel in order to concentrate the electric field of the light. With their new technology, they were able to create a powerful EUV light source with wavelengths down to 20 nanometers. The light source exhibits a so far unreached high repetition rate: the few femtoseconds lasting EUV light flashes are repeated 75 million times per second.

The core of the experiment was a small, only a few micrometers long, slightly elliptical funnel made out of silver and filled with xenon gas. The tip of the funnel was only about 100 nanometers wide. The infrared light pulses were sent into the funnel entrance and traveled through it towards the small exit. The electromagnetic forces of the light result in density fluctuations of the electrons on the inside of the funnel: a small patch of the metal surface becomes positively charged, the next one negative, and so on, resulting in new electromagnetic fields on the inside of the funnel, which are called surface plasmon polaritons. The surface plasmon polaritons travel towards the tip of the funnel, where the conical shape of the funnel concentrates their fields. "The field on the inside of the funnel can become a few hundred times stronger than the field of the incident infrared light. This enhanced field results in the generation of EUV light in the xenon gas," explains Prof. Mark Stockman from GSU.

The nano funnel has yet another function. Its small opening at the exit acts as a "doorman" for light wavelengths. Not every opening is passable for light: if the opening is smaller than half of a wavelength, the other side remains dark. The 100-nanometer opening of the funnel did not allow the infrared light at 800 nm to pass. The generated EUV pulses with wavelengths down to 20 nanometers passed, however, without problems. "The funnel acts as an efficient wavelength filter: at the small opening only EUV light comes out," explains Prof. Seung-Woo Kim from KAIST, where the experiments were conducted.
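
The numbers in the text line up with that half-wavelength rule, as the rough check below shows (a crude cut-off estimate, not a full waveguide or near-field calculation).

# Rough check of the half-wavelength "doorman" rule with the numbers
# quoted in the text.
opening_nm = 100                         # exit opening of the funnel
for wavelength_nm, label in [(800, "infrared driver"), (20, "generated EUV")]:
    passes = opening_nm > wavelength_nm / 2
    print(f"{label:>16}: lambda/2 = {wavelength_nm / 2:>5.0f} nm ->",
          "passes" if passes else "blocked")
# 800 nm light would need a ~400 nm opening and is blocked; 20 nm EUV passes.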

"Due to their short wavelength and potentially short pulse duration reaching into the attosecond domain, extreme ultraviolet light pulses are an important tool for the exploration of electron dynamics in atoms, molecules and solids," explains Seung-Woo Kim. Electrons are extremely fast, moving on attosecond timescales (an attosecond is a billionth of a billionth of a second). In order to capture a moving electron, light flashes are needed that are shorter than the timescale of the motion. Attosecond light flashes have become a familiar tool in the exploration of electron motion, but with conventional techniques they can only be repeated a few thousand times per second. This can change with the nano funnel. "We assume that the few-femtosecond light flashes consist of trains of attosecond pulses," argues Matthias Kling, group leader at MPQ. "With such pulse trains, we should be able to conduct experiments with attosecond time resolution at very high repetition rate."

The repetition rate is important, for example, for the application of EUV pulses in electron spectroscopy on surfaces. Electrons repel each other through Coulomb forces, so it may be necessary to restrict the experimental conditions such that only a single electron is generated per laser shot. With low repetition rates, long data acquisition times would then be required to achieve sufficient experimental resolution. "In order to conduct experiments with high spatial and temporal resolution within a sufficiently short time, a high repetition rate EUV source is needed," explains Kling. The novel combination of laser technology and nanotechnology can help in the future to record movies of ultrafast electron motion on surfaces with so far unreached temporal and spatial resolution in the attosecond-nanometer domain.

Story Source:

The above story is reprinted from materials provided by Max Planck Institute of Quantum Optics.

Journal Reference:

In-Yong Park, Seungchul Kim, Joonhee Choi, Dong-Hyub Lee, Young-Jin Kim, Matthias F. Kling, Mark I. Stockman, Seung-Woo Kim. Plasmonic generation of ultrashort extreme-ultraviolet light pulses. Nature Photonics, 2011; DOI: 10.1038/NPHOTON.2011.258

Faraway Eris is Pluto's twin

ScienceDaily (Oct. 26, 2011) — Astronomers have accurately measured the diameter of the faraway dwarf planet Eris for the first time by catching it as it passed in front of a faint star. This event was seen at the end of 2010 by telescopes in Chile, including the Belgian TRAPPIST telescope at ESO's La Silla Observatory. The observations show that Eris is an almost perfect twin of Pluto in size. Eris appears to have a very reflective surface, suggesting that it is uniformly covered in a thin layer of ice, probably a frozen atmosphere. The results will be published in the 27 October 2011 issue of the journal Nature.

In November 2010, the distant dwarf planet Eris passed in front of a faint background star, an event called an occultation. These occurrences are very rare and difficult to observe as the dwarf planet is very distant and small. The next such event involving Eris will not happen until 2013. Occultations provide the most accurate, and often the only, way to measure the shape and size of a distant Solar System body.

The candidate star for the occultation was identified by studying pictures from the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory. The observations were carefully planned and carried out by a team of astronomers from a number of (mainly French, Belgian, Spanish and Brazilian) universities using -- among others -- the TRAPPIST [1] (TRAnsiting Planets and PlanetesImals Small Telescope, eso1023) telescope, also at La Silla.

"Observing occultations by the tiny bodies beyond Neptune in the Solar System requires great precision and very careful planning. This is the best way to measure Eris's size, short of actually going there," explains Bruno Sicardy, the lead author.

Observations of the occultation were attempted from 26 locations around the globe on the predicted path of the dwarf planet's shadow -- including several telescopes at amateur observatories -- but only two sites were able to observe the event directly, both of them located in Chile. One was at ESO's La Silla Observatory using the TRAPPIST telescope, and the other was located in San Pedro de Atacama and used two telescopes [2]. All three telescopes recorded a sudden drop in brightness as Eris blocked the light of the distant star.

The combined observations from the two Chilean sites indicate that Eris is close to spherical. These measurements should pin down its shape and size accurately, as long as they are not distorted by the presence of large mountains. Such features are, however, unlikely on such a large icy body.

Eris was identified as a large object in the outer Solar System in 2005. Its discovery was one of the factors that led to the creation of a new class of objects called dwarf planets and the reclassification of Pluto from planet to dwarf planet in 2006. Eris is currently three times further from the Sun than Pluto.

While earlier observations using other methods suggested that Eris was probably about 25% larger than Pluto with an estimated diameter of 3000 kilometres, the new study proves that the two objects are essentially the same size. Eris's newly determined diameter stands at 2326 kilometres, with an accuracy of 12 kilometres. This makes its size better known than that of its closer counterpart Pluto, which has a diameter estimated to be between 2300 and 2400 kilometres. Pluto's diameter is harder to measure because the presence of an atmosphere makes its edge impossible to detect directly by occultations. The motion of Eris's satellite Dysnomia [3] was used to estimate the mass of Eris. It was found to be 27% heavier than Pluto [4]. Combined with its diameter, this provided Eris's density, estimated at 2.52 grams per cm³ [5].
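
As a quick consistency check, the quoted density follows directly from the measured diameter and the mass given in note [4]:

# Reproduce the quoted bulk density of Eris from its measured diameter
# and mass (figures taken from the text and note [4]).
import math

diameter_km = 2326
mass_kg     = 1.66e22

radius_m  = diameter_km / 2 * 1e3
volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
density   = mass_kg / volume_m3            # kg per cubic metre

print(f"volume  ~ {volume_m3:.2e} m^3")
print(f"density ~ {density / 1000:.2f} g/cm^3")   # ~2.52, matching the article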

"This density means that Eris is probably a large rocky body covered in a relatively thin mantle of ice," comments Emmanuel Jehin, who contributed to the study [6].

The surface of Eris was found to be extremely reflective, reflecting 96% of the light that falls on it (a visible albedo of 0.96 [7]). This is even brighter than fresh snow on Earth, making Eris one of the most reflective objects in the Solar System, along with Saturn's icy moon Enceladus. The bright surface of Eris is most likely composed of a nitrogen-rich ice mixed with frozen methane -- as indicated by the object's spectrum -- coating the dwarf planet's surface in a thin and very reflective icy layer less than one millimetre thick.

"This layer of ice could result from the dwarf planet's nitrogen or methane atmosphere condensing as frost onto its surface as it moves away from the Sun in its elongated orbit and into an increasingly cold environment," Jehin adds. The ice could then turn back to gas as Eris approaches its closest point to the Sun, at a distance of about 5.7 billion kilometres.

The new results also allow the team to make a new measurement for the surface temperature of the dwarf planet. The estimates suggest a temperature for the surface facing the Sun of -238 degrees Celsius at most, and an even lower value for the night side of Eris.

"It is extraordinary how much we can find out about a small and distant object such as Eris by watching it pass in front of a faint star, using relatively small telescopes. Five years after the creation of the new class of dwarf planets, we are finally really getting to know one of its founding members," concludes Bruno Sicardy.

Notes:

[1] TRAPPIST is one of the latest robotic telescopes installed at the La Silla Observatory. With a main mirror just 0.6 metres across, it was inaugurated in June 2010 and is mainly dedicated to the study of exoplanets and comets. The telescope is a project funded by the Belgian Fund for Scientific Research (FRS-FNRS), with the participation of the Swiss National Science Foundation, and is controlled from Liège.

[2] The Caisey Harlingten and ASH2 telescopes.

[3] Eris is the Greek goddess of chaos and strife. Dysnomia is Eris' daughter and the goddess of lawlessness.

[4] Eris's mass is 1.66 x 10²² kg, corresponding to 22% of the mass of the Moon.

[5] For comparison, the Moon's density is 3.3 grams per cm³, and water's is 1.00 gram per cm³.

[6] The value of the density suggests that Eris is mainly composed of rock (85%), with a small ice content (15%). The latter is likely to be a layer, about 100 kilometres thick, that surrounds the large rocky core. This very thick layer of mostly water ice is not to be confused with the very thin layer of frozen atmosphere on Eris's surface that makes it so reflective.

[7] The albedo of an object represents the fraction of the light that falls on it that is scattered back into space rather than absorbed. An albedo of 1 corresponds to perfect reflecting white, while 0 is totally absorbing black. For comparison, the Moon's albedo is only 0.136, similar to that of coal.

Story Source:

The above story is reprinted from materials provided by European Southern Observatory (ESO).

Journal Reference:

B. Sicardy, J. L. Ortiz, M. Assafin, E. Jehin, A. Maury, E. Lellouch, R. Gil Hutton, F. Braga-Ribas, F. Colas, D. Hestroffer, J. Lecacheux, F. Roques, P. Santos-Sanz, T. Widemann, N. Morales, R. Duffard, A. Thirouin, A. J. Castro-Tirado, M. Jelínek, P. Kubánek, A. Sota, R. Sánchez-Ramírez, A. H. Andrei, J. I. B. Camargo, D. N. da Silva Neto, A. Ramos Gomes, R. Vieira Martins, M. Gillon, J. Manfroid, G. P. Tozzi, C. Harlingten, S. Saravia, R. Behrend, S. Mottola, E. García Melendo, V. Peris, J. Fabregat, J. M. Madiedo, L. Cuesta, M. T. Eibe, A. Ullán, F. Organero, S. Pastor, J. A. de los Reyes, S. Pedraz, A. Castro, I. de la Cueva, G. Muler, I. A. Steele, M. Cebrián, P. Montañés-Rodríguez, A. Oscoz, D. Weaver, C. Jacques, W. J. B. Corradi, F. P. Santos, W. Reis, A. Milone, M. Emilio, L. Gutiérrez, R. Vázquez, H. Hernández-Toledo. A Pluto-like radius and a high albedo for the dwarf planet Eris from an occultation. Nature, 2011; 478 (7370): 493 DOI: 10.1038/nature10550

Shaken, not stirred: Scientists spy molecular maneuvers

ScienceDaily (Oct. 27, 2011) — Stir this clear liquid in a glass vial and nothing happens. Shake this liquid, and free-floating sheets of protein-like structures emerge, ready to detect molecules or catalyze a reaction. This isn't the latest gadget from James Bond's arsenal -- rather, it is the latest research from scientists at the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab), unveiling how slim sheets of protein-like structures self-assemble. This "shaken, not stirred" mechanism provides a way to scale up production of these two-dimensional nanosheets for a wide range of applications, such as platforms for sensing, filtration and templating growth of other nanostructures.

"Our findings tell us how to engineer two-dimensional, biomimetic materials with atomic precision in water," said Ron Zuckermann, Director of the Biological Nanostructures Facility at the Molecular Foundry, a DOE nanoscience user facility at Berkeley Lab. "What's more, we can produce these materials for specific applications, such as a platform for sensing molecules or a membrane for filtration."

Zuckermann, who is also a senior scientist at Berkeley Lab, is a pioneer in the development of peptoids, synthetic polymers that behave like naturally occurring proteins without degrading. His group previously discovered peptoids capable of self-assembling into nanoscale ropes, sheets and jaws, accelerating mineral growth and serving as a platform for detecting misfolded proteins.

In this latest study, the team employed a Langmuir-Blodgett trough -- a bath of water with Teflon-coated paddles at either end -- to study how peptoid nanosheets assemble at the surface of the bath, called the air-water interface. By compressing a single layer of peptoid molecules on the surface of water with these paddles, said Babak Sanii, a post-doctoral researcher working with Zuckermann, "we can squeeze this layer to a critical pressure and watch it collapse into a sheet."

"Knowing the mechanism of sheet formation gives us a set of design rules for making these nanomaterials on a much larger scale," added Sanii.

To study how shaking affected sheet formation, the team developed a new device called the SheetRocker to gently rock a vial of peptoids from upright to horizontal and back again. This carefully controlled motion allowed the team to precisely control the process of compression on the air-water interface.

"During shaking, the monolayer of peptoids essentially compresses, pushing chains of peptoids together and squeezing them out into a nanosheet. The air-water interface essentially acts as a catalyst for producing nanosheets in 95% yield," added Zuckermann. "What's more, this process may be general for a wide variety of two-dimensional nanomaterials."

This research is reported in a paper titled, "Shaken, not stirred: Collapsing a peptoid monolayer to produce free-floating, stable nanosheets," appearing in the Journal of the American Chemical Society (JACS) and available in JACS online. Co-authoring the paper with Zuckermann and Sanii were Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria Olivier, Alexander Olson, Helen Tran, Marika Harada and Li Tan.

This work at the Molecular Foundry was supported by DOE's Office of Science and the Defense Threat Reduction Agency.

Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Babak Sanii, Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria K. Olivier, Alexander M. Olson, Helen Tran, R. Marika Harada, Li Tan, Ronald N. Zuckermann. Shaken, Not Stirred: Collapsing a Peptoid Monolayer To Produce Free-Floating, Stable Nanosheets. Journal of the American Chemical Society, 2011; : 111012114427004 DOI: 10.1021/ja206199d

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Shaken, not stirred: Scientists spy molecular maneuvers

ScienceDaily (Oct. 27, 2011) — Stir this clear liquid in a glass vial and nothing happens. Shake this liquid, and free-floating sheets of protein-like structures emerge, ready to detect molecules or catalyze a reaction. This isn't the latest gadget from James Bond's arsenal -- rather, the latest research from the U. S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) scientists unveiling how slim sheets of protein-like structures self-assemble. This "shaken, not stirred" mechanism provides a way to scale up production of these two-dimensional nanosheets for a wide range of applications, such as platforms for sensing, filtration and templating growth of other nanostructures.

"Our findings tell us how to engineer two-dimensional, biomimetic materials with atomic precision in water," said Ron Zuckermann, Director of the Biological Nanostructures Facility at the Molecular Foundry, a DOE nanoscience user facility at Berkeley Lab. "What's more, we can produce these materials for specific applications, such as a platform for sensing molecules or a membrane for filtration."

Zuckermann, who is also a senior scientist at Berkeley Lab, is a pioneer in the development of peptoids, synthetic polymers that behave like naturally occurring proteins without degrading. His group previously discovered peptoids capable of self-assembling into nanoscale ropes, sheets and jaws, accelerating mineral growth and serving as a platform for detecting misfolded proteins.

In this latest study, the team employed a Langmuir-Blodgett trough -- a bath of water with Teflon-coated paddles at either end -- to study how peptoid nanosheets assemble at the surface of the bath, called the air-water interface. By compressing a single layer of peptoid molecules on the surface of water with these paddles, said Babak Sanii, a post-doctoral researcher working with Zuckermann, "we can squeeze this layer to a critical pressure and watch it collapse into a sheet."

"Knowing the mechanism of sheet formation gives us a set of design rules for making these nanomaterials on a much larger scale," added Sanii.

To study how shaking affected sheet formation, the team developed a new device called the SheetRocker to gently rock a vial of peptoids from upright to horizontal and back again. This carefully controlled motion let the team precisely regulate compression at the air-water interface.

"During shaking, the monolayer of peptoids essentially compresses, pushing chains of peptoids together and squeezing them out into a nanosheet. The air-water interface essentially acts as a catalyst for producing nanosheets in 95% yield," added Zuckermann. "What's more, this process may be general for a wide variety of two-dimensional nanomaterials."

This research is reported in a paper titled, "Shaken, not stirred: Collapsing a peptoid monolayer to produce free-floating, stable nanosheets," appearing in the Journal of the American Chemical Society (JACS) and available in JACS online. Co-authoring the paper with Zuckermann and Sanii were Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria Olivier, Alexander Olson, Helen Tran, Marika Harada and Li Tan.

This work at the Molecular Foundry was supported by DOE's Office of Science and the Defense Threat Reduction Agency.

Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Babak Sanii, Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria K. Olivier, Alexander M. Olson, Helen Tran, R. Marika Harada, Li Tan, Ronald N. Zuckermann. Shaken, Not Stirred: Collapsing a Peptoid Monolayer To Produce Free-Floating, Stable Nanosheets. Journal of the American Chemical Society, 2011; : 111012114427004 DOI: 10.1021/ja206199d


Geothermal mapping report confirms vast coast-to-coast clean energy source in U.S.

ScienceDaily (Oct. 25, 2011) — New research from SMU's Geothermal Laboratory, funded by a grant from Google.org, documents significant geothermal resources across the United States capable of producing more than three million megawatts of green power -- 10 times the installed capacity of coal power plants today.

Sophisticated mapping produced from the research, viewable via Google Earth at www.google.org/egs, demonstrates that vast reserves of this green, renewable source of power generated from Earth's heat are realistically accessible using current technology.

The results of the new research, from SMU Hamilton Professor of Geophysics David Blackwell and Geothermal Lab Coordinator Maria Richards, confirm and refine locations for resources capable of supporting large-scale commercial geothermal energy production under a wide range of geologic conditions, including significant areas in the eastern two-thirds of the United States. The estimated amounts and locations of heat stored in Earth's crust included in this study are based on nearly 35,000 data sites -- approximately twice the number used for Blackwell and Richards' 2004 Geothermal Map of North America, leading to improved detail and contouring at a regional level.

Based on the additional data, primarily drawn from oil and gas drilling, larger local variations can be seen in temperatures at depth, highlighting more detail for potential power sites than was previously evident in the eastern portion of the U.S. For example, eastern West Virginia has been identified as part of a larger Appalachian trend of higher heat flow and temperature.

Conventional U.S. geothermal production has been restricted largely to the western third of the country in geographically unique and tectonically active locations. For instance, The Geysers Field north of San Francisco is home to more than a dozen large power plants that have been tapping naturally occurring steam reservoirs to produce electricity for more than 40 years.

However, newer technologies and drilling methods can now be used to develop resources in a wider range of geologic conditions, allowing reliable production of clean energy at temperatures as low as 100°C (212°F) -- and in regions not previously considered suitable for geothermal energy production. Preliminary data released from the SMU study in October 2010 revealed the existence of a geothermal resource under the state of West Virginia equivalent to the state's existing (primarily coal-based) power supply.

"Once again, SMU continues its pioneering work in demonstrating the tremendous potential of geothermal resources," said Karl Gawell, executive director of the Geothermal Energy Association. "Both Google and the SMU researchers are fundamentally changing the way we look at how we can use the heat of the Earth to meet our energy needs, and by doing so are making significant contributions to enhancing our national security and environmental quality."

"This assessment of geothermal potential will only improve with time," said Blackwell. "Our study assumes that we tap only a small fraction of the available stored heat in the Earth's crust, and our capabilities to capture that heat are expected to grow substantially as we improve upon the energy conversion and exploitation factors through technological advances and improved techniques."

Blackwell is releasing a paper with details of the results of the research to the Geothermal Resources Council on October 25, 2011.

Blackwell and Richards first produced the 2004 Geothermal Map of North America using oil and gas industry data from the central U.S. Blackwell and the 2004 map played a significant role in a 2006 Future of Geothermal Energy study sponsored by the U.S. Department of Energy that concluded geothermal energy had the potential to supply a substantial portion of the future U.S. electricity needs, likely at competitive prices and with minimal environmental impact. SMU's 2004 map has been the national standard for evaluating heat flow, temperature and thermal conductivity for potential geothermal energy projects.

In this newest SMU estimate of resource potential, researchers used additional temperature data and in-depth geological analysis for the resulting heat flow maps to create the updated temperature-at-depth maps from 3.5 kilometers to 9.5 kilometers (11,500 to 31,000 feet). This update revealed that some conditions in the eastern two-thirds of the U.S. are actually hotter than some areas in the western portion of the country, an area long-recognized for heat-producing tectonic activity. In determining the potential for geothermal production, the new SMU study considers the practical considerations of drilling, and limits the analysis to the heat available in the top 6.5 km (21,500 ft.) of crust for predicting megawatts of available power. This approach incorporates a newly proposed international standard for estimating geothermal resource potential that considers added practical limitations of development, such as the inaccessibility of large urban areas and national parks. Known as the 'technical potential' value, it assumes producers tap only 14 percent of the 'theoretical potential' of stored geothermal heat in the U.S., using currently available technology.
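As a back-of-the-envelope check on the numbers quoted here, the short sketch below compares the release's 3,000,000 MW "technical potential" figure with an assumed installed U.S. coal capacity of roughly 300,000 MW (an outside figure, not from the release) and backs out the implied "theoretical potential" from the stated 14 percent factor.

```python
# Back-of-the-envelope check of the geothermal figures quoted in the text.
# The installed coal capacity (~300,000 MW circa 2011) is an assumption;
# the 3,000,000 MW technical potential and the 14% factor come from the release.
technical_potential_mw = 3_000_000
technical_fraction = 0.14                 # share of theoretical potential tapped

implied_theoretical_mw = technical_potential_mw / technical_fraction
installed_coal_mw = 300_000               # assumed, for comparison only

print(f"implied theoretical potential: {implied_theoretical_mw:,.0f} MW")
print(f"technical potential vs. installed coal: "
      f"{technical_potential_mw / installed_coal_mw:.0f}x")
```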

Three recent technological developments already have sparked geothermal development in areas with little or no tectonic activity or volcanism:

Low Temperature Hydrothermal -- Energy is produced from areas with naturally occurring high fluid volumes at temperatures ranging from less than boiling to 150°C (300°F). This application is currently producing energy in Alaska, Oregon, Idaho and Utah.

Geopressure and Coproduced Fluids Geothermal -- Oil and/or natural gas are produced together with electricity generated from hot geothermal fluids drawn from the same well. Systems are installed or being installed in Wyoming, North Dakota, Utah, Louisiana, Mississippi and Texas.

Enhanced Geothermal Systems (EGS) -- Areas with low fluid content, but high temperatures of more than 150°C (300°F), are "enhanced" with injection of fluid and other reservoir engineering techniques. EGS resources are typically deeper than hydrothermal and represent the largest share of total geothermal resources capable of supporting larger capacity power plants.

A key goal in the SMU resource assessment was to aid in evaluating these nonconventional geothermal resources on a regional to sub-regional basis.

Areas of particular geothermal interest include the Appalachian trend (Western Pennsylvania, West Virginia, to northern Louisiana), the aquifer heated area of South Dakota, and the areas of radioactive basement granites beneath sediments such as those found in northern Illinois and northern Louisiana. The Gulf Coast continues to be outlined as a huge resource area and a promising sedimentary basin for development. The Raton Basin in southeastern Colorado possesses extremely high temperatures and is being evaluated by the State of Colorado along with an area energy company.

SMU's Geothermal Laboratory in Dedman College of Humanities and Sciences conducted this research through funding provided by Google.org, which is dedicated to using the power of information and innovation to advance breakthrough technologies in clean energy.

Story Source:

The above story is reprinted from materials provided by Southern Methodist University.


Monday, 28 November 2011

Graphene grows better on certain copper crystals

ScienceDaily (Oct. 27, 2011) — New observations by University of Illinois engineers could improve industrial production of high-quality graphene, hastening the era of graphene-based consumer electronics.

By combining data from several imaging techniques, the team found that the quality of graphene depends on the crystal structure of the copper substrate it grows on. Led by electrical and computer engineering professors Joseph Lyding and Eric Pop, the researchers published their findings in the journal Nano Letters.

"Graphene is a very important material," Lyding said. "The future of electronics may depend on it. The quality of its production is one of the key unsolved problems in nanotechnology. This is a step in the direction of solving that problem."

To produce large sheets of graphene, methane gas is piped into a furnace containing a sheet of copper foil. When the methane strikes the copper, the carbon-hydrogen bonds crack. Hydrogen escapes as gas, while the carbon sticks to the copper surface. The carbon atoms move around until they find each other and bond to make graphene. Copper is an appealing substrate because it is relatively cheap and promotes single-layer graphene growth, which is important for electronics applications.

"It's a very cost-effective, straightforward way to make graphene on a large scale," said Joshua Wood, a graduate student and the lead author of the paper.

"However, this does not take into consideration the subtleties of growing graphene," he said. "Understanding these subtleties is important for making high-quality, high-performance electronics."

While graphene grown on copper tends to be better than graphene grown on other substrates, it remains riddled with defects and multi-layer sections, precluding high-performance applications. Researchers have speculated that the roughness of the copper surface may affect graphene growth, but the Illinois group found that the copper's crystal structure is more important.

Copper foils are a patchwork of different crystal structures. As the methane falls onto the foil surface, the shapes of the copper crystals it encounters affect how well the carbon atoms form graphene.

Different crystal shapes are assigned index numbers. Using several advanced imaging techniques, the Illinois team found that patches of copper with higher index numbers tend to have lower-quality graphene growth. They also found that two common crystal structures, numbered (100) and (111), have the worst and the best growth, respectively. The (100) crystals have a cubic shape, with wide gaps between atoms. Meanwhile, (111) has a densely packed hexagonal structure.

"In the (100) configuration the carbon atoms are more likely to stick in the holes in the copper on the atomic level, and then they stack vertically rather than diffusing out and growing laterally," Wood said. "The (111) surface is hexagonal, and graphene is also hexagonal. It's not to say there's a perfect match, but that there's a preferred match between the surfaces."

Researchers now face a trade-off between the cost of all-(111) copper foil and the value of high-quality, defect-free graphene. It is possible to produce single-crystal copper, but doing so is difficult and prohibitively expensive.

The U. of I. team speculates that it may be possible to improve copper foil manufacturing so that it has a higher percentage of (111) crystals. Graphene grown on such foil would not be ideal, but may be "good enough" for most applications.

"The question is, how do you optimize it while still maintaining cost effectiveness for technological applications?" said Pop, a co-author of the paper. "As a community, we're still writing the cookbook for graphene. We're constantly refining our techniques, trying out new recipes. As with any technology in its infancy, we are still exploring what works and what doesn't."

Next, the researchers hope to use their methodology to study the growth of other two-dimensional materials, including insulators to improve graphene device performance. They also plan to follow up on their observations by growing graphene on single-crystal copper.

"There's a lot of confusion in the graphene business right now," Lyding said. "The fact that there is a clear observational difference between these different growth indices helps steer the research and will probably lead to more quantitative experiments as well as better modeling. This paper is funneling things in that direction."

Lyding and Pop are affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I. The Office of Naval Research, the Air Force Office of Scientific Research, and the Army Research Office supported this research.

Story Source:

The above story is reprinted from materials provided by University of Illinois at Urbana-Champaign.

Journal Reference:

Joshua D. Wood, Scott W. Schmucker, Austin S. Lyons, Eric Pop, Joseph W. Lyding. Effects of Polycrystalline Cu Substrate on Graphene Growth by Chemical Vapor Deposition. Nano Letters, 2011; : 111004083339002 DOI: 10.1021/nl201566c


A particulate threat to diabetics

In a new study of people with diabetes, blood pressure rose in rough lockstep with short-term increases in soot and other microscopic air pollutant particles. Such transient increases in blood pressure can place the health of the heart, arteries, brain and kidneys at risk, particularly in people with chronic disease.

In contrast, when ozone levels climbed, blood pressure tended to fall among these people, independent of particulate levels. "And that was certainly not what we expected," notes study coauthor Barbara Hoffmann of the Leibniz Research Institute for Environmental Medicine in Düsseldorf, Germany.

Temperature also had an independent effect: A five-day average increase of 11.5 degrees Celsius, for instance, was associated with a small drop in blood pressure, Hoffmann and her colleagues report online October 21 in Environmental Health Perspectives.

Earlier studies suggested that particulates of the size measured in this study — no more than 2.5 micrometers in diameter — can hike blood pressure, particularly in people with diabetes.

To further investigate, Hoffmann and her colleagues followed 70 Boston-area men and women, ages 40 to 85, with long-standing type 2 diabetes. All lived within 25 kilometers of a major air pollution monitoring station. Each participant submitted to repeated health tests at intervals of several weeks, which the researchers matched up with air pollution values from the preceding five days.
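The data alignment described here, pairing each health test with pollution readings from the preceding five days, can be sketched as follows; the file names, column names and averaging details are hypothetical, not taken from the study.

```python
# Hypothetical sketch of the exposure-to-visit matching described above:
# each clinic visit is paired with the mean PM2.5 over the five preceding days.
# File and column names are made up for illustration.
import pandas as pd

pm25 = pd.read_csv("daily_pm25.csv", parse_dates=["date"]).set_index("date")
visits = pd.read_csv("clinic_visits.csv", parse_dates=["visit_date"])

def five_day_mean(visit_date):
    """Average PM2.5 (micrograms per cubic meter) over the 5 days before a visit."""
    start = visit_date - pd.Timedelta(days=5)
    end = visit_date - pd.Timedelta(days=1)
    return pm25.loc[start:end, "pm25"].mean()

visits["pm25_5day_mean"] = visits["visit_date"].apply(five_day_mean)
print(visits[["visit_date", "pm25_5day_mean"]].head())
```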

The team found pollution-related changes primarily in systolic blood pressure, the pressure exerted by the pumping action of each heartbeat. Systolic pressure is the top number in a blood pressure reading.

Since levels of particulates and ozone don’t necessarily track, one type of air pollutant cannot be expected to cancel out blood pressure alterations posed by the other, the researchers say. And ozone-associated drops in blood pressure aren’t necessarily beneficial. In fact, Hoffmann says, they offer additional evidence of a diabetes-related impairment in the ability of blood vessels to quickly adjust to changing environmental conditions by relaxing or constricting.

Changes in ozone and air pollution levels had no effect on people whose blood sugar was well controlled. Similarly, people with healthy baseline blood pressure readings exhibited little vulnerability to pollution.

"So especially if you want to positively influence your risk from air pollution," Hoffmann says, "it seems a very good idea to tightly control your blood pressure and your blood sugar."

The fact that a rise of as little as 3.5 micrograms per cubic meter of air in the concentration of these near-nanoscale particulates could raise systolic blood pressure “corroborates that current levels of particulate matter disrupt blood pressure control,” says physician Robert Brook of the Division of Cardiovascular Medicine at the University of Michigan Medical School in Ann Arbor. The new data, he maintains, confirm that short-term inhalation of fine airborne particulates at ambient levels — and perhaps traffic-related soot in particular — "have small but potentially clinically meaningful effects."



Planets smashed into dust near supermassive black holes

ScienceDaily (Oct. 28, 2011) — Fat doughnut-shaped dust shrouds that obscure about half of supermassive black holes could be the result of high speed crashes between planets and asteroids, according to a new theory from an international team of astronomers.

The scientists, led by Dr. Sergei Nayakshin of the University of Leicester, are publishing their results in the journal Monthly Notices of the Royal Astronomical Society.

Supermassive black holes reside in the central parts of most galaxies. Observations indicate that about 50% of them are hidden from view by mysterious clouds of dust, the origin of which is not completely understood. The new theory is inspired by our own Solar System, where the so-called zodiacal dust is known to originate from collisions between solid bodies such as asteroids and comets. The scientists propose that the central regions of galaxies contain not only black holes and stars but also planets and asteroids.

Collisions between these rocky objects would occur at colossal speeds as large as 1000 km per second, continuously shattering and fragmenting the objects, until eventually they end up as microscopic dust. Dr. Nayakshin points out that this harsh environment -- radiation and frequent collisions -- would make the planets orbiting supermassive black holes sterile, even before they are destroyed. "Too bad for life on these planets," he says, "but on the other hand the dust created in this way blocks much of the harmful radiation from reaching the rest of the host galaxy. This in turn may make it easier for life to prosper elsewhere in the rest of the central region of the galaxy."

He also believes that understanding the origin of the dust near black holes is important in our models of how these monsters grow and how exactly they affect their host galaxies. "We suspect that the supermassive black hole in our own Galaxy, the Milky Way, expelled most of the gas that would otherwise turn into more stars and planets," he continues. "Understanding the origin of the dust in the inner regions of galaxies would take us one step closer to solving the mystery of the supermassive black holes."

Story Source:

The above story is reprinted from materials provided by Royal Astronomical Society (RAS).

Journal Reference:

Sergei Nayakshin, Sergey Sazonov, Rashid Sunyaev. Are SMBHs shrouded by 'super-Oort' clouds of comets and asteroids? Monthly Notices of the Royal Astronomical Society, 2011; (submitted) [link]


Dark matter mystery deepens

ScienceDaily (Oct. 17, 2011) — Like all galaxies, our Milky Way is home to a strange substance called dark matter. Dark matter is invisible, betraying its presence only through its gravitational pull. Without dark matter holding them together, our galaxy's speedy stars would fly off in all directions. The nature of dark matter is a mystery -- a mystery that a new study has only deepened.

"After completing this study, we know less about dark matter than we did before," said lead author Matt Walker, a Hubble Fellow at the Harvard-Smithsonian Center for Astrophysics.

The standard cosmological model describes a universe dominated by dark energy and dark matter. Most astronomers assume that dark matter consists of "cold" (i.e. slow-moving) exotic particles that clump together gravitationally. Over time these dark matter clumps grow and attract normal matter, forming the galaxies we see today.

Cosmologists use powerful computers to simulate this process. Their simulations show that dark matter should be densely packed in the centers of galaxies. Instead, new measurements of two dwarf galaxies show that they contain a smooth distribution of dark matter. This suggests that the standard cosmological model may be wrong.

"Our measurements contradict a basic prediction about the structure of cold dark matter in dwarf galaxies. Unless or until theorists can modify that prediction, cold dark matter is inconsistent with our observational data," Walker stated.

Dwarf galaxies are composed of up to 99 percent dark matter and only one percent normal matter like stars. This disparity makes dwarf galaxies ideal targets for astronomers seeking to understand dark matter.

Walker and his co-author Jorge Peñarrubia (University of Cambridge, UK) analyzed the dark matter distribution in two Milky Way neighbors: the Fornax and Sculptor dwarf galaxies. These galaxies hold one million to 10 million stars, compared to about 400 billion in our galaxy. The team measured the locations, speeds and basic chemical compositions of 1500 to 2500 stars.

"Stars in a dwarf galaxy swarm like bees in a beehive instead of moving in nice, circular orbits like a spiral galaxy," explained Peñarrubia. "That makes it much more challenging to determine the distribution of dark matter."

Their data showed that in both cases, the dark matter is distributed uniformly over a relatively large region, several hundred light-years across. This contradicts the prediction that the density of dark matter should increase sharply toward the centers of these galaxies.

"If a dwarf galaxy were a peach, the standard cosmological model says we should find a dark matter 'pit' at the center. Instead, the first two dwarf galaxies we studied are like pitless peaches," said Peñarrubia.

Some have suggested that interactions between normal and dark matter could spread out the dark matter, but current simulations don't indicate that this happens in dwarf galaxies. The new measurements imply that either normal matter affects dark matter more than expected, or dark matter isn't "cold." The team hopes to determine which is true by studying more dwarf galaxies, particularly galaxies with an even higher percentage of dark matter.

The paper discussing this research was accepted for publication in The Astrophysical Journal.

Story Source:

The above story is reprinted from materials provided by Harvard-Smithsonian Center for Astrophysics.


Mathematically detecting stock market bubbles before they burst

ScienceDaily (Oct. 31, 2011) — From the dotcom bust in the late nineties to the housing crash in the run-up to the 2008 crisis, financial bubbles have been a topic of major concern. Identifying bubbles is important in order to prevent collapses that can severely impact nations and economies.

A paper published this month in the SIAM Journal on Financial Mathematics addresses just this issue. Opening fittingly with a quote from New York Federal Reserve President William Dudley emphasizing the importance of developing tools to identify and address bubbles in real time, authors Robert Jarrow, Younes Kchia, and Philip Protter propose a mathematical model to detect financial bubbles.

A financial bubble occurs when prices for assets, such as stocks, rise far above their actual value. Such an economic cycle is usually characterized by rapid expansion followed by a contraction, or sharp decline in prices.

"It has been hard not to notice that financial bubbles play an important role in our economy, and speculation as to whether a given risky asset is undergoing bubble pricing has approached the level of an armchair sport. But bubbles can have real and often negative consequences," explains Protter, who has spent many years studying and analyzing financial markets.

"The ability to tell when an asset is or is not in a bubble could have important ramifications in the regulation of the capital reserves of banks as well as for individual investors and retirement funds holding assets for the long term. For banks, if their capital reserve holdings include large investments with unrealistic values due to bubbles, a shock to the bank could occur when the bubbles burst, potentially causing a run on the bank, as infamously happened with Lehman Brothers, and is currently happening with Dexia, a major European bank," he goes on to explain, citing the significance of such inflated prices.

Using sophisticated mathematical methods, Protter and his co-authors answer the question of whether the price increase of a particular asset represents a bubble in real time. "[In this paper] we show that by using tick data and some statistical techniques, one is able to tell with a large degree of certainty, whether or not a given financial asset (or group of assets) is undergoing bubble pricing," says Protter.

This question is answered by estimating an asset's price volatility, which is stochastic or randomly determined. The authors define an asset's price process in terms of a standard stochastic differential equation, which is driven by Brownian motion. Brownian motion, based on a natural process involving the erratic, random movement of small particles suspended in gas or liquid, has been widely used in mathematical finance. The concept is specifically used to model instances where previous change in the value of a variable is unrelated to past changes.
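Written out, the setup described here takes roughly the following form. This is a minimal sketch consistent with the article's description and with the strict-local-martingale characterization of bubbles standard in this literature; the precise technical conditions are in the paper itself.

```latex
% Minimal sketch of the model described above (precise conditions are in the paper).
% Price process under the pricing measure, driven by a Brownian motion W:
\[
  dS_t = \sigma(S_t)\, dW_t , \qquad S_0 > 0 .
\]
% The asset is in a bubble (S is a strict local martingale) exactly when the
% volatility function grows fast enough at large prices that
\[
  \int_{a}^{\infty} \frac{x}{\sigma^{2}(x)}\, dx < \infty
  \qquad \text{for some } a > 0 .
\]
% For example, if \( \sigma(x) \sim c\, x^{\alpha} \) for large \( x \), the integral
% converges precisely when \( \alpha > 1 \): super-linear volatility growth signals
% a bubble, while linear or slower growth does not.
```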

The key characteristic in determining a bubble is the volatility of an asset's price, which, in the case of bubbles, is very high. The authors estimate the volatility by applying state-of-the-art estimators to real-time tick price data for a given stock. They then extrapolate the estimated volatility function to large price values, where no tick data exist, using reproducing kernel Hilbert spaces (RKHS), a widely used framework in statistical learning.

"First, one uses tick price data to estimate the volatility of the asset in question for various levels of the asset's price," Protter explains. "Then, a special technique (RKHS with an optimization addition) is employed to extrapolate this estimated volatility function to large values for the asset's price, where this information is not (and cannot be) available from tick data. Using this extrapolation, one can check the rate of increase of the volatility function as the asset price gets arbitrarily large. Whether or not there is a bubble depends on how fast this increase occurs (its asymptotic rate of increase)."

If it does not increase fast enough, there is no bubble within the model's framework.
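A minimal sketch of this decision rule, under the assumptions above, is given below. It is not the authors' implementation: the RKHS extrapolation step is replaced by a simple power-law fit of binned volatility estimates, and the asymptotic test reduces to asking whether the fitted exponent exceeds one (which is what makes the integral of x / sigma(x)^2 converge). The simulated tick series is hypothetical.

```python
# Minimal sketch of the bubble test described above (not the authors' code).
# sigma(S) is estimated in price bins from realized tick variance, then fit to
# a power law sigma(S) ~ c * S^alpha; alpha > 1 (super-linear growth) flags a
# bubble, since it makes the integral of x / sigma(x)^2 converge.
import numpy as np

def bubble_flag(prices, dt):
    """Estimate the volatility growth exponent alpha from tick prices."""
    prices = np.asarray(prices, dtype=float)
    increments = np.diff(prices)
    levels = prices[:-1]

    # Estimate sigma(S) within deciles of the observed price range.
    edges = np.quantile(levels, np.linspace(0.0, 1.0, 11))
    centers, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (levels >= lo) & (levels < hi)
        if mask.sum() > 20:
            centers.append(levels[mask].mean())
            sigmas.append(np.sqrt(np.mean(increments[mask] ** 2) / dt))

    # Power-law fit on a log-log scale; the slope is alpha.
    alpha, _ = np.polyfit(np.log(centers), np.log(sigmas), 1)
    return alpha, alpha > 1.0

# Hypothetical tick series whose volatility scales like S^1.5 (a "bubble" regime).
rng = np.random.default_rng(1)
dt, prices = 1.0 / 390.0, [100.0]
for _ in range(20000):
    sigma = 0.005 * prices[-1] ** 1.5          # assumed sigma(S) = 0.005 * S^1.5
    prices.append(max(prices[-1] + sigma * np.sqrt(dt) * rng.standard_normal(), 1.0))

alpha, flagged = bubble_flag(prices, dt)
print(f"estimated alpha ~ {alpha:.2f}; bubble flagged: {flagged}")
```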

The authors test their methodology by applying the model to several stocks from the dot-com bubble of the nineties. They find fairly successful rates in their predictions, with higher accuracies in cases where market volatilities can be modeled more efficiently. This helps establish the strengths and weaknesses of the method.

The authors have also used the model to test more recent price increases to detect bubbles. "We have found, for example, that the IPO [initial public offering] of LinkedIn underwent bubble pricing at its debut, and that the recent rise in gold prices was not a bubble, according to our models," Protter says.

It is encouraging to see that mathematical analysis can play a role in the detection and diagnosis of bubbles, which have contributed significantly to economic upheavals over the past few decades.

Robert Jarrow is a professor at the Johnson Graduate School of Management at Cornell University in Ithaca, NY, and managing director of the Kamakura Corporation. Younes Kchia is a graduate student at Ecole Polytechnique in Paris, and Philip Protter is a professor in the Statistics Department at Columbia University in New York.

Professor Protter's work was supported in part by NSF grant DMS-0906995.

Story Source:

The above story is reprinted from materials provided by Society for Industrial and Applied Mathematics.

Journal Reference:

Robert Jarrow, Younes Kchia, and Philip Protter. How to Detect an Asset Bubble. SIAM J. Finan. Math., 2011; 2, pp. 839-865 [link]


Sunday, 27 November 2011

Cyber war might never happen

ScienceDaily (Oct. 18, 2011) — Cyber war, long considered by many experts within the defence establishment to be a significant threat, if not an ongoing one, may never take place, according to Dr Thomas Rid of King's College London.

In a paper published in The Journal of Strategic Studies, Dr Thomas Rid, from the Department of War Studies, argues that cyber warfare has never taken place, is not taking place now, and is unlikely to take place in the future.

Dr Rid said: 'The threat intuitively makes sense: almost everybody has an iPhone, an email address and a Facebook account. We feel vulnerable to cyber attack every day. Cyberwar seems the logical next step.

'Cyber warfare is of increasing concern to governments around the world, with many nations developing defensive -- and reportedly offensive -- capabilities.'

Recent events, such as a highly sophisticated computer worm known as Stuxnet, which was reported to have damaged the Iranian nuclear enrichment programme, have fuelled speculation that cyber warfare is imminent. There have also been alleged acts of cyber warfare originating from Russia aimed at Estonia and Georgia.

However, Dr Rid states that to constitute cyber warfare an action must be a potentially lethal, instrumental and political act of force, conducted through the use of software. Yet no single cyber attack has ever been classed as such and no act alone has ever constituted an act of war.

Dr Rid concludes: 'Politically motivated cyber attacks are simply a more sophisticated version of activities that have always occurred within warfare: sabotage, espionage and subversion.'

Dr Rid specialises in cyber security and conflict, irregular conflict and counterterrorism. He is currently researching how armies use social media and is working on a project on the subject of cyber security.

Story Source:

The above story is reprinted from materials provided by King's College London.

Journal Reference:

Thomas Rid. Cyber War Will Not Take Place. Journal of Strategic Studies, 2011; 1 DOI: 10.1080/01402390.2011.608939

