Thursday 30 June 2011

'Catch and release' program could improve nanoparticle safety assessment

ScienceDaily (June 8, 2011) — Depending on whom you ask, nanoparticles are potentially either one of the most promising or one of the most perilous creations of science. These tiny objects can deliver drugs efficiently and enhance the properties of many materials, but what if they are also hazardous to your health in some way? Now, scientists at the National Institute of Standards and Technology (NIST) have found a way to manipulate nanoparticles so that questions like this can be answered.

The team has developed a method of attracting and capturing metal-based nanoparticles on a surface and releasing them at the desired moment. The method, which uses a mild electric current to influence the particles' behavior, could allow scientists to expose cell cultures to nanoparticles so that any lurking hazards they might cause to living cells can be assessed effectively.

The method also has the advantage of collecting the particles in a layer only one particle thick, which allows them to be evenly dispersed into a fluid sample, thereby reducing clumping -- a common problem that can mask the properties they exhibit when they encounter living tissue. According to NIST physicist Darwin Reyes, these combined advantages should make the new method especially useful in toxicology studies.

"Many other methods of trapping require that you modify the surface of the nanoparticles in some way so that you can control them more easily," Reyes says. "We take nanoparticles as they are, so that you can explore what you've actually got. Using this method, you can release them into a cell culture and watch how the cells react, which can give you a better idea of how cells in the body will respond."

Other means of studying nanoparticle toxicity do not enable such precise delivery of the particles to the cells. In the NIST method, the particles can be released in a controlled fashion into a fluid stream that flows over a colony of cells, mimicking the way the particles would encounter cells inside the body -- allowing scientists to monitor how cells react over time, for example, or whether responses vary with changes in particle concentration.

For this particular study, the team used a gold surface covered by long, positively charged molecules, which stretch up from the gold like wheat in a field. The nanoparticles, which are also made of gold, are coated with citrate molecules that carry a slight negative charge; this charge draws them to the surface covering, and the attraction can be broken with a slight electric current. Reyes says that because the surface covering can be designed to attract different materials, a variety of nanoparticles could be captured and released with the technique.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Institute of Standards and Technology (NIST).

Journal Reference:

Darwin R. Reyes, Geraldine I. Mijares, Brian Nablo, Kimberly A. Briggman, Michael Gaitan. Trapping and Release of Citrate-Capped Gold Nanoparticles. Applied Surface Science, 2011; DOI: 10.1016/j.apsusc.2011.04.030


New driving force for chemical reactions

ScienceDaily (June 9, 2011) — New research just published in the journal Science by a team of chemists at the University of Georgia and colleagues in Germany shows for the first time that a mechanism called tunneling control may drive chemical reactions in directions unexpected from traditional theories.

The finding has the potential to change how scientists understand and devise reactions in everything from materials science to biochemistry.

The discovery was a complete surprise and came following the first successful isolation of a long-elusive molecule called methylhydroxycarbene by the research team. While the team was pleased that it had "trapped" the prized compound in solid argon through an extremely low-temperature experiment, they were surprised when it vanished within a few hours. That prompted UGA theoretical chemistry professor Wesley Allen to conduct large-scale, state-of-the-art computations to solve the mystery.

"What we found was that the change was being controlled by a process called quantum mechanical tunneling," said Allen, "and we found that tunneling can supersede the traditional chemical reactivity processes of kinetic and thermodynamic control. We weren't expecting this at all."

What had happened? Clearly, a chemical reaction had taken place, but only inert argon atoms surrounded the compound, and essentially no thermal energy was available to create new molecular arrangements. Moreover, said Allen, "the observed product of the reaction, acetaldehyde, is the least likely outcome among conceivable possibilities."

Other authors of the paper include Professor Peter Schreiner and his group members Hans Peter Reisenauer, David Ley and Dennis Gerbig of the Justus-Liebig University in Giessen, Germany. Graduate student Chia-Hua Wu at UGA undertook the theoretical work with Allen.

Quantum tunneling isn't new. It was first recognized as a physical process decades ago in early studies of radioactivity. In classical mechanics, molecular motions can be understood in terms of particles roaming on a potential energy surface. Energy barriers, visualized as mountain passes on the surface, separate one chemical compound from another.

For a chemical reaction to occur, a molecular system must have enough energy to "get over the top of the hill," or it will come back down and fail to react. In quantum mechanics, particles can get to the other side of the barrier by tunneling through it, a process that seemingly requires imaginary velocities. In chemistry, tunneling is generally understood to provide secondary correction factors for the rates of chemical reactions but not to provide the predominant driving force.
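
A back-of-the-envelope calculation shows why tunneling matters so much for light atoms: for a simple rectangular barrier, the transmission probability falls off exponentially with the square root of the particle's mass. The sketch below is not taken from the Science paper; the barrier height, width and energies are made-up illustrative values, chosen only to compare a hydrogen atom with a heavier deuterium atom.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg
EV = 1.602176634e-19     # electron volt, J

def transmission(mass_kg, barrier_height_ev, particle_energy_ev, width_m):
    """Crude estimate of tunneling probability through a rectangular barrier.

    T ~ exp(-2 * L * sqrt(2 * m * (V - E)) / hbar). Illustrative only: real
    reaction barriers are not rectangular, and the values fed in below are
    placeholders, not numbers from the methylhydroxycarbene study.
    """
    v_minus_e = (barrier_height_ev - particle_energy_ev) * EV
    kappa = math.sqrt(2.0 * mass_kg * v_minus_e) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Hypothetical numbers: a 1 eV barrier, 0.5 angstrom wide, particle near rest.
barrier_ev, energy_ev, width_m = 1.0, 0.0, 0.5e-10
t_hydrogen = transmission(1.0 * AMU, barrier_ev, energy_ev, width_m)   # H atom
t_deuterium = transmission(2.0 * AMU, barrier_ev, energy_ev, width_m)  # D atom

print(f"T(H) ~ {t_hydrogen:.2e}")
print(f"T(D) ~ {t_deuterium:.2e}")
print(f"H tunnels ~{t_hydrogen / t_deuterium:.0f} times more readily than D")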

(The strange world of quantum mechanics has been subject to considerable interest and controversy over the last century, and Austrian physicist Erwin Schrödinger's thought-experiment called "Schrödinger's Cat" illustrates how perplexing it is to apply the rules and laws of quantum mechanics to everyday life.)

"We knew that the rate of a reaction can be significantly affected by quantum mechanical tunneling," said Allen. "It becomes especially important at low temperatures and for reactions involving light atoms. What we discovered here is that tunneling can dominate a reaction mechanism sufficiently to redirect the outcome away from traditional kinetic control. Tunneling can cause a reaction that does not have the lowest activation barriers to occur exclusively."

Allen suggests a vivid analogy between the behavior of methylhydroxycarbene and Schrödinger's iconic cat.

"The cat cannot jump out of its box of deadly confinement because the walls are too high, so it conjures a Houdini-like escape by bursting through the thinnest wall," he said.

The fact that new ideas about tunneling came from the isolation of methylhydroxycarbene was the kind of serendipity that runs through the history of science. Schreiner and his team had snagged the elusive compound, and that was reason enough to celebrate, Allen said. But the surprising observation that it vanished within a few hours raised new questions that led to even more interesting scientific discoveries.

"The initiative to doggedly follow up on a 'lucky observation' was the key to success," said Allen. "Thus, a combination of persistent experimentation and exacting theoretical analysis on methylhydroxycarbene and its reactivity led to the concept I dubbed tunneling control, which may be characterized as `a type of nonclassical kinetic control wherein the decisive factor is not the lowest activation barrier'."

While the process was unearthed for the specific case of methylhydroxycarbene at extremely low temperatures, Allen said that tunneling control "can be a general phenomenon, especially if hydrogen transfer is involved, and such processes need not be restricted to cryogenic temperatures."

Allen's research was funded by the U.S. Department of Energy.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Georgia.

Journal Reference:

Peter R. Schreiner, Hans Peter Reisenauer, David Ley, Dennis Gerbig, Chia-Hua Wu, Wesley D. Allen. Methylhydroxycarbene: Tunneling Control of a Chemical Reaction. Science, 10 June 2011: Vol. 332 no. 6035 pp. 1300-1303 DOI: 10.1126/science.1203761


'Biological circuit' components developed; New microscope technique for measuring them

ScienceDaily (June 9, 2011) — Electrical engineers have long been toying with the idea of designing biological molecules that can be directly integrated into electronic circuits. University of Pennsylvania researchers have developed a way to form these structures so they can operate in open-air environments, and, more important, have developed a new microscope technique that can measure the electrical properties of these and similar devices.

The research was conducted by Dawn Bonnell, Trustee Chair Professor and director of the Nano/Bio Interface Center, graduate students Kendra Kathan-Galipeau and Maxim Nikiforov and postdoctoral fellow Sanjini Nanayakkara, all of the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science. They collaborated with assistant professor Bohdana Discher of the Department of Biophysics and Biochemistry at Penn's Perelman School of Medicine and Paul A. O'Brien, a graduate student in Penn's Biotechnology Masters Program.

Their work was published in the journal ACS Nano.

The development involves artificial proteins, bundles of peptide helices with a photoactive molecule inside. These proteins are arranged on electrodes, which are a common feature of circuits that transmit electrical charges between metallic and non-metallic elements. When light is shined on the proteins, they convert photons into electrons and pass them to the electrode.

"It's a similar mechanism to what happens when plants absorb light, except in that case the electron is used for some chemistry that creates energy for the plant," Bonnell said. "In this case, we want to use the electron in electrical circuits."

Similar peptide assemblies had been studied in solution before by several groups and had been tested to show that they indeed react to light. But there was no way to quantify their ambient electrical properties, particularly capacitance, the amount of electrical charge the assembly holds.

"It's necessary to understand these kinds of properties in the molecules in order to make devices out of them. We've been studying silicon for 40 years, so we know what happens to electrons there," Bonnell said. "We didn't know what happens to electrons on dry electrodes with these proteins; we didn't even know if they would remain photoactive when attached to an electrode."

Designing circuits and devices with silicon is inherently easier than with proteins. The electrical properties of a large chunk of a single element can be measured and then scaled down, but complex molecules like these proteins cannot be scaled up. Diagnostic systems that could measure their properties with nanometer sensitivity simply did not exist.

The researchers therefore needed to invent both a new way of measuring these properties and a controlled way of making the photovoltaic proteins that would resemble how they might eventually be incorporated into devices in open-air, everyday environments, rather than swimming in a chemical solution.

To solve the first problem, the team developed a new kind of atomic force microscope technique, known as torsional resonance nanoimpedance microscopy. Atomic force microscopes operate by bringing an extremely narrow silicon tip very close to a surface and measuring how the tip reacts, providing a spatial sensitivity of a few nanometers down to individual atoms.

"What we've done in our version is to use a metallic tip and put an oscillating electric field on it. By seeing how electrons react to the field, we're able to measure more complex interactions and more complex properties, such as capacitance," Bonnell said.

Bohdana Discher's group designed the self-assembling proteins much as they had done before but took the additional step of stamping them onto sheets of graphite electrodes. This manufacturing principle and the ability to measure the resulting devices could have a variety of applications.

"Photovoltaics -- solar cells -- are perhaps the easiest to imagine, but where this work is going in the shorter term is biochemical sensors," Bonnell said.

Instead of reacting to photons, proteins could be designed to produce a charge in the presence of certain toxins, either changing color or acting as a circuit element in a human-scale gadget.

This research was supported by the Nano/Bio Interface Center and the National Science Foundation.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania.

Journal Reference:

Kendra Kathan-Galipeau, Sanjini Nanayakkara, Paul A. O’Brian, Maxim Nikiforov, Bohdana M. Discher, Dawn A. Bonnell. Direct Probe of Molecular Polarization in De Novo Protein–Electrode Interfaces. ACS Nano, 2011; DOI: 10.1021/nn200887n


Germy with a chance of hail

BUGS IN ICE: Three adjacent ice crystals (borders resemble a forked road) contain green-stained Pseudomonas syringae bacteria isolated from precipitation. This plant pathogen, one of the most efficient bacteria at nucleating ice, is commonly found in clouds. Credit: B. Christner/LSU

Bacteria often leave their hosts feeling under the weather. And even when the hosts are high-altitude parcels of air, microbes can be a source of inclement conditions, a Montana research team finds. Cloudborne bacteria might even pose climate threats by boosting the production of a greenhouse gas, another team proposes.

Both groups reported their findings May 24 at the American Society for Microbiology meeting in New Orleans.

These data add to a growing body of evidence that biological organisms are affecting clouds, notes Anthony Prenni of Colorado State University in Fort Collins, an atmospheric scientist who did not participate in the new studies. Right now, he cautions, “We still don’t know on a global scale how important these processes are.” But research into microbial impacts on weather and climate is really heating up, he adds, so “within a few years, I think we’re going to have a much better handle on it.”

Alexander Michaud’s new research was triggered by a June storm that pummeled Montana State University’s campus in Bozeman last year with golf ball–sized and larger hailstones. The microbial ecologist normally studies subglacial aquatic environments in Antarctica. But after saving 27 of the hailstones, he says, “I suddenly realized, no one had really ever thought about studying hailstones — in a layered sense — for biology.”

So his team dissected the icy balls, along with hundreds of smaller ones collected during a July hail storm south of campus. Michaud now reports finding germs throughout, with the highest concentrations by far — some 1,000 cells per milliliter of meltwater — in the hailstones’ cores.

Since at least the 1980s, scientists have argued that some share of clouds, and their precipitation, likely traces to microbes. Their reasoning: Strong winds can loft germs many kilometers into the sky. And since the 1970s, agricultural scientists have recognized that certain compounds made by microbes serve as efficient water magnets around which ice crystals can form at relatively high temperatures (occasionally leading to frost devastation of crops).

In 2008, Brent Christner of Louisiana State University in Baton Rouge and his colleagues reported isolating ice-nucleating bacteria from rain and snow. A year later, Prenni’s group found microbes associated with at least a third of the cloud ice crystals they sampled at an altitude of 8 kilometers.

“But finding ice-nucleating bacteria in snow or hail is very different from saying they were responsible for the ice,” says Noah Fierer of the University of Colorado at Boulder. “I say that,” he admits, “even though as a microbiologist, I’d love to believe that bacteria control weather.”

Pure water molecules won’t freeze in air at temperatures above about –40° Celsius, Christner notes. Add tiny motes of mineral dust or clay, and water droplets may coalesce around them — or nucleate — at perhaps –15°. But certain bacteria can catalyze ice nucleation at even –2°, he reported at the meeting in New Orleans.

Through chemical techniques, Michaud’s group determined that the ice nucleation in their hail occurred around –11.5° for the June hailstones and at roughly –8.5° for the July stones.

Michaud’s data on the role of microbes in precipitation “is pretty strong evidence,” Prenni says.

Also at the meeting, Pierre Amato of Clermont University in Clermont-Ferrand, France, reported biological activity in materials sampled from a cloud at an altitude of 1,500 meters. The air hosted many organic pollutants, including formaldehyde, acetate and oxalate. Sunlight can break these down to carbon dioxide, a greenhouse gas, something Amato’s group confirmed in the lab. But sunlight didn’t fully degrade some organics unless microbes were also present.

Moreover, certain cloudborne bacteria — the French team identified at least 17 types — degraded organic pollutants to carbon dioxide at least as efficiently as the sun did. Amato’s team reported these findings online February 9 in Atmospheric Chemistry and Physics Discussions.

This microbial transformation of pollutants to carbon dioxide occurs even in darkness. Amato has calculated the total nighttime microbial production of carbon dioxide in clouds and pegs it “on the order of 1 million tons per year.” Though not a huge sum (equal to the carbon dioxide from perhaps 180,000 cars per year), he cautions that this amount could increase based on airborne pollutant levels, temperatures and microbial populations.
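
The cars comparison is easy to check with a rough conversion; the per-car figure below is an assumption (about 5.5 metric tons of CO2 per car per year), not a number taken from Amato's report.

# Rough check of the comparison in the story: 1 million tons of CO2 per year
# versus the output of roughly 180,000 cars.
cloud_co2_tons_per_year = 1_000_000
co2_per_car_tons_per_year = 5.5  # assumed average passenger car

equivalent_cars = cloud_co2_tons_per_year / co2_per_car_tons_per_year
print(f"~{equivalent_cars:,.0f} cars")  # ~181,818, consistent with 'perhaps 180,000'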


News in Brief: Earth/Environment

Flameproofing baby products, early tectonics, the future of tomatoes and more in this week's news. Web edition: Saturday, June 4th, 2011

Early plate tectonics

Slivers of ancient rock may show that plate tectonics, the movement of large chunks of crust that is a defining feature of Earth’s surface today, has been happening for at least 3.7 billion years. Researchers in England, Australia and China say they’ve identified intact pieces of the Earth’s mantle in the ancient, highly squished rocks of southern Greenland. If so, the slivers must have come from one early crustal plate diving beneath another, scraping up pieces of the deeper Earth, the team writes in a paper appearing May 24 in Geology. —Alexandra Witze

Tomatoes: CO2 is galling

As concentrations of carbon dioxide rise, certain plants may lose more battles with pests, Chinese scientists report in the May 24 PLoS One. Researchers grew tomatoes at concentrations of carbon dioxide representing current levels (390 parts per million) and at those predicted to exist near the end of the century (780 ppm). When exposed to tiny nematodes, which are common soil parasites, tomatoes breathing the elevated carbon dioxide suffered most. They developed more tumorlike galls in their roots than did plants at current carbon dioxide levels and produced fewer flavonoids and other antioxidant compounds. The researchers linked this greater parasite damage to the reduced activity of genes managing certain plant defense compounds. —Janet Raloff

Babies face high flame-retardant exposures

Eighty percent of baby products containing polyurethane foam are treated with flame retardant chemicals, a U.S. team of chemists finds. After testing 101 common products, they calculate that babies’ exposure to TDCPP, the flame retardant that turned up most, would be higher than adults encounter — and “higher than acceptable daily intake levels of TDCPP set by the Consumer Product Safety Commission.” The testing found five products still in use that had been treated with a flame retardant that has since been banned: decabrominated diphenyl ether, the scientists reported May 18 in Environmental Science & Technology. And two products contained organophosphate flame retardants never before reported in the environment or commercial products. —Janet Raloff

China’s pollution cuts downstream rains

Industrial growth in China since the early 1980s has darkened its skies with plenty of pollutant particles. A team of U.S. researchers now reports that satellite data show the concentrations of tiny aerosol particles climbed some 40 percent throughout that period over the Shanghai-Nanjing and Jinan industrial regions. Downwind rains over the East China Sea, meanwhile, diminished by one-third over the same period, they report May 10 in Geophysical Research Letters. The scientists say that learning why pollution had this effect might suggest weather impacts of possible geoengineering tactics, like seeding skies with particles to slow global warming. —Janet Raloff


Novel geothermal technology packs a one-two punch against climate change

ScienceDaily (June 7, 2011) — Two University of Minnesota Department of Earth Sciences researchers have developed an innovative approach to tapping heat beneath Earth's surface. The method is expected to not only produce renewable electricity far more efficiently than conventional geothermal systems, but also help reduce atmospheric carbon dioxide (CO2) -- dealing a one-two punch against climate change.

The approach, termed CO2-plume geothermal system, or CPG, was developed by Earth sciences faculty member Martin Saar and graduate student Jimmy Randolph in the university's College of Science and Engineering. The research was published in the most recent issue of Geophysical Research Letters. The researchers have applied for a patent and plan to form a start-up company to commercialize the new technology.

Established methods for transforming Earth's heat into electricity involve extracting hot water from rock formations several hundred feet below Earth's surface at the few natural hot spots around the world, then using the hot water to turn power-producing turbines. The university's novel system was born in a flash of insight on a northern Minnesota road trip and jump-started with $600,000 in funding from the U of M Institute on the Environment's Initiative for Renewable Energy and the Environment (IREE). The CPG system uses high-pressure CO2 instead of water as the underground heat-carrying fluid.

CPG provides a number of advantages over other geothermal systems, Randolph said. First, CO2 travels more easily than water through porous rock, so it can extract heat more readily. As a result, CPG can be used in regions where conventional geothermal electricity production would not make sense from a technical or economic standpoint.

"This is probably viable in areas you couldn't even think about doing regular geothermal for electricity production," Randolph said. "In areas where you could, it's perhaps twice as efficient."

CPG also offers the benefit of preventing CO2 from reaching the atmosphere by sequestering it deep underground, where it cannot contribute to climate change. In addition, because pure CO2 is less likely than water to dissolve the material around it, CPG reduces the risk that a geothermal system will stop working over time because of "short-circuiting" or plugging of the fluid flow through the hot rocks. Moreover, the technology could be used in parallel to boost fossil fuel production by pushing natural gas or oil from partially depleted reservoirs as CO2 is injected.

Saar and Randolph first hit on the idea behind CPG in the fall of 2008 while driving to northern Minnesota together to conduct unrelated field research. The two had been conducting research on geothermal energy capture and separately on geologic CO2 sequestration.

"We connected the dots and said, 'Wait a minute -- what are the consequences if you use geothermally heated CO2?'" recalled Saar. "We had a hunch in the car that there should be lots of advantages to doing that."

After batting the idea around a bit, the pair applied for and received a grant from the Initiative for Renewable Energy and the Environment, which disburses funds from Xcel Energy's Renewable Development Fund to help launch potentially transformative projects in emerging fields of energy and the environment. The IREE grant paid for preliminary computer modeling and allowed Saar and Randolph to bring on board energy policy, applied economics and mechanical engineering experts from the University of Minnesota as well as modeling experts from Lawrence Berkeley National Laboratory. It also helped leverage a $1.5 million grant from the U.S. Department of Energy to explore subsurface chemical interactions involved in the process.

"The IREE grant was really critical," Saar said. "This is the kind of project that requires a high-risk investment. I think it's fair to say that there's a good chance that it wouldn't have gone anywhere without IREE support in the early days."

Saar and Randolph have recently applied for additional DOE funding to move CPG forward to the pilot phase.

"Part of the beauty of this is that it combines a lot of ideas but the ideas are essentially technically proven, so we don't need a lot of new technology developed," Randolph said.

"It's combining proven technology in a new way," Saar said. "It's one of those things where you know how the individual components work. The question is, how will they perform together in this new way? The simulation results suggest it's going to be very favorable."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Minnesota.

Journal Reference:

Jimmy B. Randolph, Martin O. Saar. Combining geothermal energy capture with geologic carbon dioxide sequestration. Geophysical Research Letters, 2011; 38 (10) DOI: 10.1029/2011GL047265


Physicists store antimatter atoms for 1,000 seconds -- and still counting

ScienceDaily (June 5, 2011) — The ALPHA Collaboration, an international team of scientists working at CERN in Geneva, Switzerland, has created and stored a total of 309 antihydrogen atoms, some for up to 1,000 seconds (almost 17 minutes), with an indication of much longer storage time as well.

ALPHA announced in November 2010 that they had succeeded in storing antimatter atoms for the first time ever, having captured 38 atoms of antihydrogen and stored each for a sixth of a second. In the weeks following, ALPHA continued to collect anti-atoms and hold them for longer and longer times.

Scientists at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California at Berkeley, including Joel Fajans and Jonathan Wurtele of Berkeley Lab's Accelerator and Fusion Research Division (AFRD), both UC Berkeley physics professors, are members of the ALPHA Collaboration.

Says Fajans, "Perhaps the most important aspect of this result is that after just one second these antihydrogen atoms had surely already decayed to ground state. These were likely the first ground state anti-atoms ever made." Since almost all precision measurements require atoms in the ground state, ALPHA's achievement opens a path to new experiments with antimatter.

A principal component of ALPHA's atom trap is a superconducting octupole magnet proposed and prototyped in Berkeley Lab's AFRD. It takes ALPHA about 15 minutes to make and capture atoms of antihydrogen in their magnetic trap.

"So far, the only way we know whether we've caught an anti-atom is to turn off the magnet," says Fajans. "When the anti-atom hits the wall of the trap it annihilates, which tells us that we got one. In the beginning we were turning off our trap as soon as possible after each attempt to make anti-atoms, so as not to miss any."

Says Wurtele, "At first we needed to demonstrate that we could trap antihydrogen. Once we proved that, we started optimizing the system and made rapid progress, a real qualitative change."

Initially ALPHA caught only about one anti-atom in every 10 tries, but Fajans notes that at its best the ALPHA apparatus trapped one anti-atom with nearly every attempt.

Although the physical set-ups are different, ALPHA's ability to hold anti-atoms in a magnetic trap for 1,000 seconds, and presumably longer, compares well to the length of time ordinary atoms can be magnetically confined.

"A thousand seconds is more than enough time to perform measurements on a confined anti-atom," says Fajans. "For instance, it's enough time for the anti-atoms to interact with laser beams or microwaves." He jokes that, at CERN, "it's even enough time to go for coffee."

The ALPHA Collaboration not only made and stored the long-lived antihydrogen atoms, it was able to measure their energy distribution.

"It may not sound exciting, but it's the first experiment done on trapped antihydrogen atoms," Wurtele says. "This summer we're planning more experiments, with microwaves. Hopefully we will measure microwave-induced changes of the atomic state of the anti-atoms." With these and other experiments the ALPHA Collaboration aims to determine the properties of antihydrogen and measure matter-antimatter asymmetry with precision.

A program of upgrades is being planned that will allow experiments not possible with the current ALPHA apparatus. At present the experimenters don't have laser access to the trap. Lasers are essential for performing spectroscopy and for "cooling" the antihydrogen atoms (reducing their energy and slowing them down) to perform other experiments.

Fajans says, "We hope to have laser access by 2012. We're clearly ready to move to the next level."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

ALPHA Collaboration: G. B. Andresen, M. D. Ashkezari, M. Baquero-Ruiz, W. Bertsche, P. D. Bowe, E. Butler, C. L. Cesar, M. Charlton, A. Deller, S. Eriksson, J. Fajans, T. Friesen, M. C. Fujiwara, D. R. Gill, A. Gutierrez, J. S. Hangst, W. N. Hardy, R. S. Hayano, M. E. Hayden, A. J. Humphries, R. Hydomako, S. Jonsell, S. L. Kemp, L. Kurchaninov, N. Madsen, S. Menary, P. Nolan, K. Olchanski, A. Olin, P. Pusa, C. Ø. Rasmussen, F. Robicheaux, E. Sarid, D. M. Silveira, C. So, J. W. Storey, R. I. Thompson, D. P. van der Werf, J. S. Wurtele, Y. Yamazaki. Confinement of antihydrogen for 1,000 seconds. Nature Physics, 2011; DOI: 10.1038/nphys2025


Wednesday 29 June 2011

Material turns hard or soft at the touch of a button

ScienceDaily (June 6, 2011) — A world premiere: a material which changes its strength, virtually at the touch of a button. This transformation can be achieved in a matter of seconds through changes in the electron structure of a material; thus hard and brittle matter, for example, can become soft and malleable. What makes this development revolutionary is that the transformation can be controlled by electric signals.

This world-first has its origins in Hamburg. Jörg Weißmüller, a materials scientist at both the Technical University of Hamburg and the Helmholtz Center Geesthacht, has carried out research on this groundbreaking development, working in cooperation with colleagues from the Institute for Metal Research in Shenyang, China.

The 51-year-old researcher from the Saarland referred to his fundamental research, which opens the door to a multitude of diverse applications, as "a breakthrough in the material sciences." The new metallic high-performance material is described by Prof. Dr. Jörg Weißmüller and the Chinese research scientist Hai-Jun Jin in the latest issue of the scientific journal Science. Their research findings could, for example, lead to future intelligent materials with the ability to heal themselves, smoothing out flaws autonomously.

The firmness of a boiled egg can be adjusted at will through the cooking time. Some decisions are, however, irrevocable -- a hard-boiled egg can never be reconverted into a soft-boiled one. There would be less annoyance at the breakfast table if we could simply switch back and forth between the different degrees of firmness of the egg.

Similar issues arise in the making of structural materials such as metals and alloys. The materials properties are set once and for all during production. This forces engineers to make compromises in the selection of the mechanical properties of a material. Greater strength is inevitably accompanied by increased brittleness and a reduction of the damage tolerance.

Professor Weißmüller, head of the Institute of Materials Physics and Technology at the Technical University of Hamburg and also of the department for Hybrid Material Systems at the Helmholtz Center Geesthacht, stated: "This is a point where significant progress is being made. For the first time we have succeeded in producing a material which, while in service, can switch back and forth between a state of strong and brittle behavior and one of soft and malleable. We are still at the fundamental research stage but our discovery may bring significant progress in the development of so-called smart materials."

A Marriage of Metal and Water

In order to produce this innovative material, material scientists employ a comparatively simple process: corrosion. The metals, typically precious metals such as gold or platinum, are placed in an acidic solution. As a consequence of the onset of the corrosion process, minute ducts and holes are formed in the metal. The emerging nanostructured material is pervaded by a network of pore channels.

The pores are impregnated with a conductive liquid, for example a simple saline solution or a diluted acid, and a true hybrid material of metal and liquid is thus created. It is the unusual "marriage," as Weißmüller calls this union of metal and water, which, when triggered by an electric signal, enables the properties of the material to change at the touch of a button.

As ions are dissolved in the liquid, the surfaces of the metal can be electrically charged. In other words, the mechanical properties of the metallic partner are changed by the application of an electric potential in the liquid partner. The effect can be traced back to a strengthening or weakening of the atomic bonding in the surface of the metal when extra electrons are added to or withdrawn from the surface atoms. The strength of the material can be as much as doubled when required. Alternatively, the material can be switched to a state which is weaker, but more damage tolerant, energy-absorbing and malleable.

Specific applications are still a matter for the future. However, researchers are already thinking ahead. In principle, the material can create electric signals spontaneously and selectively, so as to strengthen the matter in regions of local stress concentration. Damage, for instance in the form of cracks, could thereby be prevented or even healed. This has brought scientists a great step closer to their objective of 'intelligent' high performance materials.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Helmholtz Association of German Research Centres.

Journal Reference:

H.-J. Jin, J. Weissmuller. A Material with Electrically Tunable Strength and Flow Stress. Science, 2011; 332 (6034): 1179 DOI: 10.1126/science.1202190


Making complex fluids look simple

ScienceDaily (June 1, 2011) — An international research team has successfully developed a widely applicable method for discovering the physical foundations of complex fluids for the first time. Researchers at the University of Vienna and University of Rome have developed a microscopic theory that describes the interactions between the various components of a complex polymer mixture. This approach has now been experimentally proven by physicists from Jülich, who conducted neutron scattering experiments in Grenoble.

The results have been published in the June issue of the journal Physical Review Letters.

Some important materials from technology and nature are complex fluids: polymer melts for plastics production, mixtures of water, oil and amphiphiles, which can be found in both living cells and in your washing machine, or colloidal suspensions such as blood or dispersion paints. They are quite different from simple fluids consisting of small molecules, such as water, because they are made of mixtures of particles between a nanometre and a micrometre in size, and have a large number of so-called degrees of freedom. The latter include vibrations, movements of the functional groups of molecules or joint movements of several molecules. They can appear on widely varied length, time, and energy scales. This makes experimental and theoretical studies difficult and, so far, has impeded understanding of the properties of these systems and the targeted development of new materials with improved properties.

A method developed and tested by physicists at Forschungszentrum Jülich, the Institut Laue-Langevin in Grenoble, and the Universities of Vienna and Rome now permits realistic modelling of complex fluids for the first time. "Our microscopic theory describes the interactions between the various components of a complex mixture and in turn, enables us to draw realistic conclusions about their macroscopic properties, such as their structure or their flow properties," said Prof. Christos Likos of the University of Vienna, an expert on theory and simulation.

The team from Vienna and Rome developed the theory model. Since the researchers were unable to include all the details of the real system -- a mixture of larger star-shaped polymers and smaller polymer chains -- they systematically eliminated the rapidly moving degrees of freedom and focused on the relevant slow degrees of freedom, a time-consuming and challenging task. "To do this, we use a relatively new method called coarse graining and replace each complex macromolecule with a sphere of the appropriate size. The challenge involves integrating the degrees of freedom that have been eliminated in the simplified systems as averages so that the characteristics of the substances are retained," Likos explained.
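
As a toy illustration of the coarse-graining idea, the sketch below collapses a randomly generated chain of monomers onto a single effective sphere described by its center of mass and radius of gyration. It is meant only to convey the concept; the actual work derives effective interaction potentials between star polymers and linear chains, which is far more involved.

import numpy as np

def coarse_grain(monomer_positions):
    """Collapse one macromolecule to a single effective sphere.

    Returns the center of mass and the radius of gyration of the monomer
    cloud. A toy stand-in for the coarse-graining step described in the text.
    """
    r = np.asarray(monomer_positions, dtype=float)
    center_of_mass = r.mean(axis=0)
    radius_of_gyration = np.sqrt(((r - center_of_mass) ** 2).sum(axis=1).mean())
    return center_of_mass, radius_of_gyration

# Hypothetical molecule: a random-walk chain of 100 monomers in 3D.
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(size=(100, 3)), axis=0)
center, radius = coarse_grain(chain)
print("sphere center:", np.round(center, 2))
print("effective radius (Rg):", round(float(radius), 2))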

The team from Jülich used elaborate small angle neutron scattering experiments with the instrument D11 at the Institut Laue-Langevin in Grenoble to prove that the interactions between the spheres of the coarse-grained model realistically simulate the conditions in the real system. "We were faced with the proverbial challenge of visualizing the needle in a haystack," explained Dr. Jörg Stellbrink, a physicist and neutron scattering expert at the Jülich Centre for Neutron Science (JCNS). For neutrons, the individual polymers of the mixture cannot be readily distinguished. For this reason, the physicists "coloured" the components they were interested in, so that they stood out of the crowd. This is one of the Jülich team's specialities. In this way, they were able to selectively examine the structures and interactions on a microscopic length scale.

The physicists are especially proud of the excellent agreement between theoretical predictions and experimental results. The method will now open up a spectrum of possibilities for studying the physical properties of a whole range of different complex mixtures.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Helmholtz Association of German Research Centres, via EurekAlert!, a service of AAAS.

Journal Reference:

B. Lonetti, M. Camargo, J. Stellbrink, C. Likos, E. Zaccarelli, L. Willner, P. Lindner, D. Richter. Ultrasoft Colloid-Polymer Mixtures: Structure and Phase Diagram. Physical Review Letters, 2011; 106 (22) DOI: 10.1103/PhysRevLett.106.228301


CERN group traps antihydrogen atoms for more than 16 minutes

ScienceDaily (June 5, 2011) — Trapping antihydrogen atoms at the European Organization for Nuclear Research (CERN) has become so routine that physicists are confident that they can soon begin experiments on this rare antimatter equivalent of the hydrogen atom, according to researchers at the University of California, Berkeley.

"We've trapped antihydrogen atoms for as long as 1,000 seconds, which is forever" in the world of high-energy particle physics, said Joel Fajans, UC Berkeley professor of physics, faculty scientist at Lawrence Berkeley National Laboratory and a member of the ALPHA (Antihydrogen Laser Physics Apparatus) experiment at CERN in Geneva, Switzerland.

The ALPHA team is hard at work building a new antihydrogen trap with "the hope that by 2012 we will have a new trap with laser access to allow spectroscopic experiments on the antiatoms," he said.

Fajans and the ALPHA team, which includes Jonathan Wurtele, UC Berkeley professor of physics, will publish their latest successes online on June 5 in advance of print publication in the journal Nature Physics. Fajans, Wurtele and their graduate students played major roles in designing the antimatter trap and other aspects of the experiment.

Their paper reports that in a series of measurements last year, the team trapped 112 antiatoms for times ranging from one-fifth of a second to 1,000 seconds, or 16 minutes and 40 seconds.

Since the experiment first successfully trapped antihydrogen atoms in 2009, the researchers have captured 309 antiatoms in total.

"We'd prefer being able to trap a thousand atoms for a thousand seconds, but we can still initiate laser and microwave experiments to explore the properties of antiatoms," Fajans said.

In November 2010, Fajans, Wurtele and the ALPHA team reported their first data on trapped antihydrogen: 38 antiatoms trapped for more than one-tenth of a second each. They succeeded in capturing an antiatom in only about one in 10 attempts, however.

Toward the end of last year's experiments, they were capturing an antiatom in nearly every attempt, and were able to keep the antiatoms in the trap as long as they wanted. Realistically, trapping for 10-30 minutes will be sufficient for most experiments, as long as the antiatoms are in their lowest energy state, or ground state.

"These antiatoms should be identical to normal matter hydrogen atoms, so we are pretty sure all of them are in the ground state after a second," Wurtele said.

"These were likely the first ground state antiatoms ever made," Fajans added.

Antimatter is a puzzle because it should have been produced in equal amounts with normal matter during the Big Bang that created the universe 13.6 billion years ago. Today, however, there is no evidence of antimatter galaxies or clouds, and antimatter is seen rarely and for only short periods, for example during some types of radioactive decay before it annihilates in a collision with normal matter.

Hence the desire to measure the properties of antiatoms in order to determine whether their electromagnetic and gravitational interactions are identical to those of normal matter. One goal is to check whether antiatoms abide by CPT symmetry, as do normal atoms. CPT (charge-parity-time) symmetry means that a particle would behave the same way in a mirror universe if it had the opposite charge and moved backward in time.

"Any hint of CPT symmetry breaking would require a serious rethink of our understanding of nature," said Jeffrey Hangst of Aarhus University in Denmark, spokesperson for the ALPHA experiment. "But half of the universe has gone missing, so some kind of rethink is apparently on the agenda."

ALPHA captures antihydrogen by mixing antiprotons from CERN's Antiproton Decelerator with positrons -- antielectrons -- in a vacuum chamber, where they combine into antihydrogen atoms. The cold neutral antihydrogen is confined within a magnetic bottle, taking advantage of the tiny magnetic moments of the antiatoms. Trapped antiatoms are detected by turning off the magnetic field and allowing the particles to annihilate with normal matter, which creates a flash of light.

Because the confinement depends on the antihydrogen's magnetic moment, if the spin of the antiatom flips, it is ejected from the magnetic bottle and annihilates with an atom of normal matter. This gives the experimenters an easy way to detect the interaction of light or microwaves with antihydrogen, because photons at the right frequency make the antiatom's spin flip up or down.

Though the team has trapped up to three antihydrogen atoms at once, the goal is to trap even more for long periods of time in order to achieve greater statistical precision in the measurements.

The ALPHA collaboration also will report in the Nature Physics paper that the team has measured the energy distribution of the trapped antihydrogen atoms.

"It may not sound exciting, but it's the first experiment done on trapped antihydrogen atoms," Wurtele said. "This summer, we're planning more experiments, with microwaves. Hopefully, we will measure microwave-induced changes of the atomic state of the antiatoms."

The work of the ALPHA collaboration is supported by numerous international organizations, including the Department of Energy and the National Science Foundation in the United States.

Among the paper's 38 authors are UC Berkeley graduate students Marcelo Baquero-Ruiz and Chukman So.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Berkeley. The original article was written by Robert Sanders, Media Relations.

Journal Reference:

ALPHA Collaboration: G. B. Andresen, M. D. Ashkezari, M. Baquero-Ruiz, W. Bertsche, P. D. Bowe, E. Butler, C. L. Cesar, M. Charlton, A. Deller, S. Eriksson, J. Fajans, T. Friesen, M. C. Fujiwara, D. R. Gill, A. Gutierrez, J. S. Hangst, W. N. Hardy, R. S. Hayano, M. E. Hayden, A. J. Humphries, R. Hydomako, S. Jonsell, S. L. Kemp, L. Kurchaninov, N. Madsen, S. Menary, P. Nolan, K. Olchanski, A. Olin, P. Pusa, C. Ø. Rasmussen, F. Robicheaux, E. Sarid, D. M. Silveira, C. So, J. W. Storey, R. I. Thompson, D. P. van der Werf, J. S. Wurtele, Y. Yamazaki. Confinement of antihydrogen for 1,000 seconds. Nature Physics, 2011; DOI: 10.1038/nphys2025


Phase change memory-based 'Moneta' system points to the future of computer storage

ScienceDaily (June 3, 2011) — A University of California, San Diego faculty-student team is about to demonstrate a first-of-its-kind, phase-change memory solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs).

The device was developed in the Computer Science and Engineering department at the UC San Diego Jacobs School of Engineering and will be on exhibit June 7-8 at DAC 2011, the world's leading technical conference and trade show on electronic design automation, with the support of several industry partners, including Micron Technology, BEEcube and Xilinx. The storage system, called "Moneta," uses phase-change memory (PCM), an emerging data storage technology that stores data in the crystal structure of a metal alloy called a chalcogenide. PCM is faster and simpler to use than flash memory -- the technology that currently dominates the SSD market.

Moneta marks the latest advancement in solid state drives (SSDs). Unlike conventional hard disk drives, solid state storage drives have no moving parts. Today's SSDs use flash memory and can be found in a wide range of consumer electronics such as iPads and laptops. Although faster than hard disk, flash memory is still too slow to meet modern data storage and analysis demands, particularly in the area of high performance computing where the ability to sift through enormous volumes of data quickly is critical. Examples include storing and analyzing scientific data collected through environmental sensors, or even web searches through Google.

"As a society, we can gather all this data very, very quickly -- much faster than we can analyze it with conventional, disk-based storage systems," said Steven Swanson, professor of Computer Science and Engineering and director of the Non-Volatile Systems Lab (NVSL). "Phase-change memory-based solid state storage devices will allow us to sift through all of this data, make sense of it, and extract useful information much faster. It has the potential to be revolutionary."

PCM Memory Chips

To store data, the PCM memory chips switch the alloy between a crystalline and amorphous state based on the application of heat through an electrical current. To read the data, the chips use a smaller current to determine which state the chalcogenide is in.

Moneta uses Micron Technology's first-generation PCM chips and can read large sections of data at a maximum rate of 1.1 gigabytes per second and write data at up to 371 megabytes per second. For smaller accesses (e.g., 512 B), Moneta can read at 327 megabytes per second and write at 91 megabytes per second, or between two and seven times faster than a state-of-the-art, flash-based SSD. Moneta also provides lower latency for each operation and should reduce energy requirements for data-intensive applications.
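
To put those throughput figures in perspective, the short calculation below converts them into the time needed to read a large data set. The Moneta rate is the one quoted above; the flash-SSD baseline and the data-set size are assumed values for comparison only.

def seconds_to_read(dataset_gb, throughput_mb_per_s):
    # Time to stream a data set of dataset_gb gigabytes at a steady rate.
    return dataset_gb * 1024.0 / throughput_mb_per_s

dataset_gb = 500                       # hypothetical scientific data set
moneta_read_mb_s = 1.1 * 1024          # 1.1 GB/s large sequential reads (quoted)
flash_ssd_read_mb_s = 250              # assumed 2011-era flash SSD, for comparison

print(f"Moneta: ~{seconds_to_read(dataset_gb, moneta_read_mb_s) / 60:.1f} min")
print(f"Flash SSD (assumed): ~{seconds_to_read(dataset_gb, flash_ssd_read_mb_s) / 60:.1f} min")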

A Glimpse at Computers of the Future

Swanson hopes to build the second generation of the Moneta storage device in the next six to nine months and says the technology could be ready for market in just a few years as the underlying phase-change memory technology improves. The development has also revealed a new technology challenge.

"We've found that you can build a much faster storage device, but in order to really make use of it, you have to change the software that manages it as well. Storage systems have evolved over the last 40 years to cater to disks, and disks are very, very slow," said Swanson. "Designing storage systems that can fully leverage technologies like PCM requires rethinking almost every aspect of how a computer system's software manages and accesses storage. Moneta gives us a window into the future of what computer storage systems are going to look like, and gives us the opportunity now to rethink how we design computer systems in response."

In addition to Swanson, the Moneta team includes Computer Science and Engineering Professor and Chair Rajesh Gupta, who is also associate director of UC San Diego's California Institute for Telecommunications and Information Technology. Student team members from the Department of Computer Science and Engineering include Ameen Akel, Adrian Caulfield, Todor Mollov, Arup De, and Joel Coburn.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - San Diego, Jacobs School of Engineering.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Just four percent of galaxies have neighbors like the Milky Way

ScienceDaily (May 23, 2011) — How unique is the Milky Way?

To find out, a group of researchers led by Stanford University astrophysicist Risa Wechsler compared the Milky Way to similar galaxies and found that just four percent are like the galaxy Earth calls home.

"We are interested in how the Milky Way fits into the broader context of the universe," said Wechsler. "This research helps us understand whether our galaxy is typical or not, and may provide clues to its formation history."

The research team compared the Milky Way to similar galaxies in terms of luminosity--a measure of how much light is emitted--and distance to other bright galaxies. They found that galaxies with two satellites as bright and as close as the Milky Way's two nearest companions, the Large and Small Magellanic Clouds, are rare.

Published in the May 20 issue of the Astrophysical Journal, the findings are based on analyses of data collected from the Sloan Digital Sky Survey (SDSS). The work is the first of three papers that study the properties of the Milky Way's two most massive satellites.

Supported in part by the National Science Foundation (NSF), the SDSS is the most extensive survey of the optical sky performed to date.

In more than eight years of operations, SDSS has obtained images covering more than a quarter of the sky, and created 3-dimensional maps containing more than 930,000 galaxies and 120,000 quasars. For this analysis, Wechsler's group studied more than 20,000 galaxies with properties similar to the Milky Way and investigated the galaxies surrounding these Milky Way "twins," to create a "census" of galaxies similar to the Milky Way in the universe.

The work represents one of the most extensive studies of this kind ever performed.

Scientists can also compare the SDSS data to galaxies simulated by a computer model. Since they are currently unable to see all the way back to the Big Bang, this is one way researchers are trying to understand how the universe as we see it today began.

In order to learn more about possible conditions in the early universe, the group performed computer simulations to recreate the universe from specific sets of starting conditions. Then they compared their simulations to the SDSS data set. In this way, the group was able to test different theories of galaxy formation to determine whether or not each would result in a universe that matches what we see today. The results of their simulation matched the result found in the SDSS data set: just four percent of the simulated galaxies had two satellites like the Magellanic Clouds.
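The counting step of such a census can be sketched in a few lines of code. The sketch below is schematic only: the toy satellite catalogue and the rate parameter it uses are placeholders chosen so the fraction lands in the few-percent range, not the selection criteria or statistics of the actual study.

import numpy as np

# Schematic of the "census" counting step: for each Milky Way-like host, count
# satellites passing brightness and proximity cuts, then report the fraction
# of hosts with at least two such satellites.
rng = np.random.default_rng(0)
n_hosts = 20_000

# Toy catalogue: number of satellites per host that pass the (hypothetical) cuts.
# In the real analysis this would come from SDSS photometry around each host;
# the Poisson rate below is arbitrary and only makes the toy answer plausible.
satellites_passing_cuts = rng.poisson(lam=0.3, size=n_hosts)

fraction = np.mean(satellites_passing_cuts >= 2)
print(f"Fraction of hosts with >= 2 Magellanic-Cloud-like satellites: {fraction:.1%}")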

"This is an excellent example of data-enabled science," said Nigel Sharp, of NSF's Division of Astronomical Sciences. "Comparing the 'fake' and 'real' Universes is how we discriminate between successful and unsuccessful theories. This work interconnects three of the four legs of science: theory, observation and simulation, for a powerful scientific result."

Their results also lend support to a leading theory of galaxy formation called the Cold Dark Matter (CDM) theory. This theory provides what many consider to be the simplest explanation for the arrangement of galaxies throughout the universe following the Big Bang. It assumes that most of the matter in the Universe consists of material that cannot be observed by its electromagnetic radiation (dark) and whose constituent particles move slowly (cold). Dark matter, an invisible and exotic material of unknown composition, is believed to influence the distribution of galaxies in space and the overall expansion of the universe. The rareness of this aspect of the Milky Way may provide clues to its formation history.

"Because the presence of two galaxies like the Magellanic Clouds is unusual, we can use them to learn more about our own galaxy," said Wechsler. Using their simulation, the team identified a sample of simulated galaxies that had satellites matching the Milky Way's in terms of their locations and speeds.

"The combination of large surveys of the sky like the SDSS and large samples of simulated galaxies provides a new opportunity to learn about the place of our galaxy in the Universe," said Wechsler. "Future surveys will allow us to extend this study to even dimmer satellite galaxies, to build a full picture of the formation of our galaxy."

The theoretical and numerical work that produced the simulations used as a comparison for the SDSS data was supported by an award funded under the American Recovery and Reinvestment Act of 2009.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Science Foundation.

Journal Reference:

Lulu Liu, Brian F. Gerke, Risa H. Wechsler, Peter S. Behroozi, Michael T. Busha. How Common Are the Magellanic Clouds? The Astrophysical Journal, 2011; 733 (1): 62 DOI: 10.1088/0004-637X/733/1/62

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Tuesday 28 June 2011

Iceland's Citizens Are Writing Its New Constitution Online

It turns out the Web really is democratic
[Image: Iceland's Parliament Building. It may not look very Web 2.0, but Iceland is tapping the power of the Web to engage its citizens in the drafting of a new constitution. Credit: Dr. Jaus via Flickr]

In the 18th century, if you wanted to draft a democratic constitution you crowded a handful of men into a room and hashed out the finer points of policy and philosophy until you had a document that was declared the law of the land. Same for the 19th and 20th centuries. But nowadays, the Internet--that great democratizer--is bringing a new kind of power to the people. Icelandic authorities overhauling that country’s constitution in the wake of the financial meltdown are tapping the power of the Web to allow citizens to give their two cents on how a new governing document should look.

There is still the small collection of leaders in a room drafting the actual document--25 of them to be exact. But they are reaching out to Iceland’s 320,000 people--one of the world’s more computer-literate populations--through Facebook, Twitter, and YouTube (but mostly Facebook, let’s be honest).

A thorough review and rewriting of the constitution (which is more or less Denmark’s constitution with a few minor tweaks) has been on the legislative agenda since Iceland gained independence in 1944. The new crowdsourced document could be put before the entire voting population in a referendum before parliament decides on the final draft. We’re not sure why. It seems like parliament should have a pretty good notion of how the public feels about the final draft based on how many “likes” it gets.

[PhysOrg]


View the original article here

Using magnets to help prevent heart attacks: Magnetic field can reduce blood viscosity, physicist discovers

ScienceDaily (June 8, 2011) — If a person's blood becomes too thick, it can damage blood vessels and increase the risk of heart attacks. But a Temple University physicist has discovered that he can thin human blood by subjecting it to a magnetic field.

Rongjia Tao, professor and chair of physics at Temple University, has pioneered the use of electric or magnetic fields to decrease the viscosity of oil in engines and pipelines. Now, he is using the same magnetic fields to thin human blood in the circulatory system.

Because red blood cells contain iron, Tao has been able to reduce a person's blood viscosity by 20-30 percent by subjecting it to a magnetic field of 1.3 Tesla (about the same as an MRI) for about one minute.

Tao and his collaborator tested numerous blood samples in a Temple lab and found that the magnetic field polarizes the red blood cells, causing them to link together in short chains, streamlining the movement of the blood. Because these chains are larger than the single blood cells, they flow down the center, reducing the friction against the walls of the blood vessels. The combined effects reduce the viscosity of the blood, helping it to flow more freely.

When the magnetic field was taken away, the blood's original viscosity state slowly returned, but over a period of several hours.

"By selecting a suitable magnetic field strength and pulse duration, we will be able to control the size of the aggregated red-cell chains, hence to control the blood's viscosity," said Tao. "This method of magneto-rheology provides an effective way to control the blood viscosity within a selected range."

Currently, the only method for thinning blood is through drugs such as aspirin; however, these drugs often produce unwanted side effects. Tao said that the magnetic field method is not only safer, it is repeatable. The magnetic fields may be reapplied and the viscosity reduced again. He also added that the viscosity reduction does not affect the red blood cells' normal function.

Tao said that further studies are needed and that he hopes to ultimately develop this technology into an acceptable therapy to prevent heart disease.

Tao and his former graduate student, Ke "Colin" Huang, now a medical physics resident in the Department of Radiation Oncology at the University of Michigan, are publishing their findings in the journal Physical Review E.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Temple University, via EurekAlert!, a service of AAAS.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Engineers envision 2-dimensional graphene metamaterials and 1-atom-thick optical devices

ScienceDaily (June 9, 2011) — Two University of Pennsylvania engineers have proposed the possibility of two-dimensional metamaterials. These one-atom-thick metamaterials could be achieved by controlling the conductivity of sheets of graphene, which is a single layer of carbon atoms.

Professor Nader Engheta and graduate student Ashkan Vakil, both of the Department of Electrical and Systems Engineering in Penn's School of Engineering and Applied Science, published their theoretical research in the journal Science.

The study of metamaterials is an interdisciplinary field of science and engineering that has grown considerably in recent years. It is premised on the idea that materials can be designed so that their overall wave qualities rely not only upon the material they are made of but also on the pattern, shape and size of irregularities, known as "inclusions," or "meta-molecules" that are embedded within host media.

"By designing the properties of the inclusions, as well as their shapes and density, you achieve in the bulk property something that may be unusual and not readily available in nature," Engheta said.

These unusual properties generally have to do with manipulating electromagnetic (EM) or acoustic waves; in this case, it is EM waves in the infrared spectrum.

Changing the shape, speed and direction of these kinds of waves is a subfield of metamaterials known as "transformation optics" and may find applications in everything from telecommunications to imaging to signal processing.

Engheta and Vakil's research shows how transformation optics might now be achieved using graphene, a lattice of carbon a single atom thick.

Researchers, including many at Penn, have devoted considerable effort to developing new ways to manufacture and manipulate graphene, as its unprecedented conductivity would have many applications in the field of electronics. Engheta and Vakil's interest in graphene, however, is due to its capability to transport and guide EM waves in addition to electrical charges, and the fact that its conductivity can be easily altered.

Applying a direct voltage to a sheet of graphene, by way of a ground plate running parallel to the sheet, changes how conductive the graphene is to EM waves. Varying the voltage or the distance between the ground plate and the graphene alters the conductivity, "just like tuning a knob," Engheta said.

"This allows you to change the conductivity of different segments of a single sheet of graphene differently from each other," he said. And if you can do that, you can navigate and manipulate a wave with those segments. In other words, you can do transformation optics using graphene."

In this marriage between graphene and metamaterials, the different regions of conductivity on the effectively two-dimensional, one-atom-thick sheet function as the physical inclusions present in three-dimensional versions.

The examples Engheta and Vakil have demonstrated with computer models include a sheet of graphene with two areas that have different conductivities, one that can support a wave, and one that cannot. The boundary between the two areas acts as a wall, capable of reflecting a guided EM wave on the graphene much like a physical wall would in three-dimensional space.

Another example involves three regions, one that can support a wave surrounded by two that cannot. This produces a "waveguide," which functions like a one-atom-thick fiber optic cable. A third example builds on the waveguide, adding another non-supporting region to split the waveguide into two.

"We can 'tame' the wave so that it moves and bends however we like," Engheta said. "Rather than playing around with the boundary between two media, we're thinking about changes of conductivity across a single sheet of graphene."

Other applications include lensing and the ability to do "flatland" Fourier transforms, a fundamental aspect of signal processing that is found in nearly every piece of technology with audio or visual components.

"This will pave the way to the thinnest optical devices imaginable," Engheta said. "You can't have anything thinner than one atom!"

Support for this research came from the U.S. Air Force Office of Scientific Research.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Researchers create nanoscale waveguide for future photonics

ScienceDaily (May 31, 2011) — The creation of a new quasiparticle called the "hybrid plasmon polariton" may throw open the doors to integrated photonic circuits and optical computing for the 21st century. Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) have demonstrated the first true nanoscale waveguides for next generation on-chip optical communication systems.

"We have directly demonstrated the nanoscale waveguiding of light at visible and near infrared frequencies in a metal-insulator-semiconductor device featuring low loss and broadband operation," says Xiang Zhang, the leader of this research. "The novel mode design of our nanoscale waveguide holds great potential for nanoscale photonic applications, such as intra-chip optical communication, signal modulation, nanoscale lasers and bio-medical sensing."

Zhang, a principal investigator with Berkeley Lab's Materials Sciences Division and director of the University of California at Berkeley's Nano-scale Science and Engineering Center (SINAM), is the corresponding author of a paper published in Nature Communications that describes this work, titled "Experimental Demonstration of Low-Loss Optical Waveguiding at Deep Sub-wavelength Scales." Co-authoring the paper with Zhang were Volker Sorger, Ziliang Ye, Rupert Oulton, Yuan Wang, Guy Bartal and Xiaobo Yin.

In this paper, Zhang and his co-authors describe the use of the hybrid plasmon polariton, a quasi-particle they conceptualized and created, in a nanoscale waveguide system that is capable of shepherding light waves along a metal-dielectric nanostructure interface over sufficient distances for the routing of optical communication signals in photonic devices. The key is the insertion of a thin low-dielectric layer between the metal and a semiconductor strip.

"We reveal mode sizes down to 50-by-60 square nanometers using Near-field scanning optical microscopy (NSOM) at optical wavelengths," says Volker Sorger a graduate student in Zhang's research group and one of the two lead authors on the Nature Communications paper. "The propagation lengths were 10 times the vacuum wavelength of visible light and 20 times that of near infrared."

The high-technology world is eagerly anticipating the replacement of today's electronic circuits in microprocessors and other devices with circuits based on the transmission of light and other forms of electromagnetic waves. Photonic technology, or "photonics," promises to be superfast and ultrasensitive in comparison to electronic technology.

"To meet the ever-growing demand for higher data bandwidth and lower power consumption, we need to reduce the energy required to create, transmit and detect each bit of information," says Sorger. "This requires reducing physical photonic component sizes down beyond the diffraction limit of light while still providing integrated functionality."

Until recently, the size and performance of photonic devices were constrained by the interference that arises between closely spaced light waves. This diffraction limit results in weak photonic-electronic interactions that can only be avoided through the use of devices much larger in size than today's electronic circuits. A breakthrough came with the discovery that it is possible to couple photons with electrons by squeezing light waves through the interface of a metal-dielectric nanostructure whose dimensions are smaller than half the wavelengths of the incident photons in free space.

Directing waves of light across the surface of a metal nanostructure generates electronic surface waves -- called plasmons -- that roll through the metal's conduction electrons (those loosely attached to molecules and atoms). The resulting interaction between plasmons and photons creates a quasi-particle called a surface plasmon polariton (SPP) that can serve as a carrier of information. Hopes were high for SPPs in nanoscale photonic devices because their wavelengths can be scaled down below the diffraction limit, but problems arose because any light signal loses strength as it passes through the metal portion of a metal-dielectric interface.

"Until now, the direct experimental demonstration of low-loss propagation of deep sub-wavelength optical modes was not realized due to the huge propagation loss in the optical mode that resulted from the electromagnetic field being pushed into the metal," Zhang says. "With this trade-off between optical confinement and metallic losses, the use of plasmonics for integrated photonics, in particular for optical interconnects, has remained uncertain."

To solve the problem of optical signal loss, Zhang and his group proposed the hybrid plasmon polariton (HPP) concept. A semiconductor (high-dielectric) strip is placed on a metal interface, just barely separated by a thin oxide (low-dielectric) layer. This new metal-oxide-semiconductor design results in a redistribution of an incoming light wave's energy. Instead of being concentrated in the metal, where optical losses are high, some of the light wave's energy is squeezed into the low dielectric gap where optical losses are substantially less compared to the plasmonic metal.

"With this design, we create an HPP mode, a hybrid of the photonic and plasmonic modes that takes the best from both systems and gives us high confinement with low signal loss," says Ziliang Ye, the other lead authors of the Nature Communications paper who is also a graduate student in Zhang's research group. "The HPP mode is not only advantageous for down-scaling physical device sizes, but also for delivering novel physical effects at the device level that pave the way for nanolasers, as well as for quantum photonics and single-photon all-optical switches."

The HPP waveguide system is fully compatible with current semiconductor/CMOS processing techniques, as well as with the Silicon-on-Insulator (SOI) platform used today for photonic integration. This should make it easier to incorporate the technology into low-cost, large-scale integration and manufacturing schemes. Sorger believes that prototypes based on this technology could be ready within the next two years and the first actual products could be on the market within five years.

"We are already working on demonstrating an all-optical transistor and electro-optical modulator based on the HPP waveguide system," Sorger says. "We're also now looking into bio-medical applications, such as using the HPP waveguide to make a molecular sensor."

This research was supported by the National Science Foundation's Nano-Scale Science and Engineering Center.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Volker J. Sorger, Ziliang Ye, Rupert F. Oulton, Yuan Wang, Guy Bartal, Xiaobo Yin, Xiang Zhang. Experimental demonstration of low-loss optical waveguiding at deep sub-wavelength scales. Nature Communications, 2011; 2: 331 DOI: 10.1038/ncomms1315

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Quantum knowledge cools computers: New understanding of entropy

ScienceDaily (June 1, 2011) — From a laptop warming a knee to a supercomputer heating a room, the idea that computers generate heat is familiar to everyone. But theoretical physicists have discovered something astonishing: not only do computational processes sometimes generate no heat, under certain conditions they can even have a cooling effect. Behind this finding are fundamental considerations relating to knowledge and a lack of knowledge. The researchers publish their findings in the journal Nature.

When computers compute, the energy they consume eventually ends up as heat. This isn't all due to the engineering of the computer -- physics has something to say about the fundamental energy cost of processing information.

Recent research by a team of physicists reveals a surprise at this fundamental level. ETH Professor Renato Renner, Vlatko Vedral of the Centre for Quantum Technologies at the National University of Singapore and the University of Oxford, UK, and their colleagues describe in the scientific journal Nature how the deletion of data, under certain conditions, can create a cooling effect instead of generating heat. The cooling effect appears when the strange quantum phenomenon of entanglement is invoked. Ultimately, it may be possible to harness this effect to cool supercomputers that have their performance held back by heat generation. "Achieving the control at the quantum level that would be required to implement this in supercomputers is a huge technological challenge, but it may not be impossible. We have seen enormous progress in quantum technologies over the past 20 years," says Vedral. With the technology in quantum physics labs today, it should be possible to do a proof of principle experiment on a few bits of data.

Landauer's principle is given a quantum twist

The physicist Rolf Landauer calculated back in 1961 that during the deletion of data, some release of energy in the form of heat is unavoidable. Landauer's principle implies that when a certain number of arithmetical operations per second have been exceeded, the computer will produce so much heat that the heat is impossible to dissipate. In supercomputers today other sources of heat are more significant, but Renner thinks that the critical threshold where Landauer's erasure heat becomes important may be reached within the next 10 to 20 years. The heat emission from the deletion of a ten terabyte hard-drive amounts in principle to less than a millionth of a joule. However, if such a deletion process were repeated many times per second then the heat would accumulate correspondingly.
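That ten-terabyte figure follows directly from Landauer's bound of k_B T ln 2 of heat per erased bit. The short calculation below checks it, assuming room temperature (300 K) and decimal terabytes; both are assumptions for illustration rather than details from the article.

import math

# Landauer's bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2) of heat.
k_B  = 1.380649e-23          # Boltzmann constant, J/K
T    = 300.0                 # assumed room temperature, K
bits = 10e12 * 8             # 10 TB (decimal terabytes assumed) = 8e13 bits

per_bit = k_B * T * math.log(2)
total   = per_bit * bits

print(f"per bit: {per_bit:.2e} J")   # ~2.9e-21 J
print(f"10 TB:   {total:.2e} J")     # ~2.3e-7 J, i.e. well under a millionth of a joule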

The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.

Similar formulas -- two disciplines

In order to reach this result, the scientists combined ideas from information theory and thermodynamics about a concept known as entropy. Entropy appears differently in these two disciplines, which are, to a large extent, independent of each other. In information theory, entropy is a measurement of the information density. It describes, for instance, how much memory capacity a given set of data would take up when compressed optimally. In thermodynamics, on the other hand, entropy relates to the disorder in systems, for example to the arrangement of molecules in a gas. In thermodynamics, adding entropy to a system is usually equivalent to adding energy as heat.

The ETH physicist Renner says "We have now shown that in both cases, the term entropy is actually describing the same thing even in the quantum mechanical regime." As the formulas for the two entropies look the same, it had already been assumed that there was a connection between them. "Our study shows that in both cases, entropy is considered to be a type of lack of knowledge," says Renner. The new paper in Nature builds on work published earlier in the New Journal of Physics.

In measuring entropy, one should bear in mind that an object does not have a certain amount of entropy per se, instead an object's entropy is always dependent on the observer. Applied to the example of deleting data, this means that if two individuals delete data in a memory and one has more knowledge of this data, she perceives the memory to have lower entropy and can then delete the memory using less energy. Entropy in quantum physics has the unusual property of sometimes being negative when calculated from the information theory point of view. Perfect classical knowledge of a system means the observer perceives it to have zero entropy. This corresponds to the memory of the observer and that of the system being perfectly correlated, as much as allowed in classical physics. Entanglement gives the observer "more than complete knowledge" because quantum correlations are stronger than classical correlations. This leads to an entropy less than zero. Until now, theoretical physicists had used this negative entropy in calculations without understanding what it might mean in thermodynamic terms or experimentally.
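The "more than complete knowledge" statement can be checked numerically in the simplest case: for a maximally entangled pair of qubits, the conditional entropy of the memory given the observer comes out to minus one bit, whereas perfect classical knowledge would give zero. The sketch below is a generic quantum-information calculation, not code from the study.

import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: -sum_i p_i log2 p_i over the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2) on memory (S) tensor observer (O).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_SO = np.outer(psi, psi.conj())

# Reduced state of the observer: trace out the memory qubit.
rho_O = rho_SO.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_SO = von_neumann_entropy(rho_SO)   # 0 bits (pure joint state)
S_O  = von_neumann_entropy(rho_O)    # 1 bit  (maximally mixed)
print(f"S(S|O) = {S_SO - S_O:+.2f} bits")   # -1.00: negative conditional entropy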

No heat, even a cooling effect

In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that "more than complete knowledge" from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.

Renner emphasizes, however, "This doesn't mean that we can develop a perpetual motion machine." The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what's known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says "We're working on the edge of the second law. If you go any further, you will break it."

Fundamental findings

The scientists' new findings relating to entropy in thermodynamics and information theory may have usefulness beyond calculating the heat production of computers. For example, methods developed within information theory to handle entropy could lead to innovations in thermodynamics. The connection made between the two concepts of entropy is fundamental.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by ETH Zurich/Swiss Federal Institute of Technology.

Journal Reference:

Lídia del Rio, Johan Åberg, Renato Renner, Oscar Dahlsten, Vlatko Vedral. The thermodynamic meaning of negative entropy. Nature, 2011; 474 (7349): 61 DOI: 10.1038/nature10123

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Monday 27 June 2011

New method to make sodium ion-based battery cells could lead to better, cheaper batteries for the electrical grid

ScienceDaily (June 8, 2011) — By adding the right amount of heat, researchers have developed a method that improves the electrical capacity and recharging lifetime of sodium ion rechargeable batteries, which could be a cheaper alternative for large-scale uses such as storing energy on the electrical grid.

To connect solar and wind energy sources to the electrical grid, grid managers require batteries that can store large amounts of energy created at the source. Lithium ion rechargeable batteries -- common in consumer electronics and electric vehicles -- perform well, but are too expensive for widespread use on the grid because many batteries will be needed, and they will likely need to be large. Sodium is the next best choice, but the sodium-sulfur batteries currently in use run at temperatures above 300 degrees Celsius, or three times the boiling point of water, making them less energy efficient and less safe than batteries that run at ambient temperatures.

Battery developers want the best of both worlds -- inexpensive sodium combined with the type of electrodes found in lithium rechargeables. A team of scientists at the Department of Energy's Pacific Northwest National Laboratory and visiting researchers from Wuhan University in Wuhan, China, used nanomaterials to make electrodes that can work with sodium, they reported June 3 online in the journal Advanced Materials.

"The sodium-ion battery works at room temperature and uses sodium ions, an ingredient in cooking salt. So it will be much cheaper and safer," said PNNL chemist Jun Liu, who co-led the study with Wuhan University chemist Yuliang Cao.

The electrodes in lithium rechargeables that interest researchers are made of manganese oxide. The atoms in this metal oxide form many holes and tunnels that lithium ions travel through when batteries are being charged or are in use. The free movement of lithium ions allows the battery to hold electricity or release it in a current. But simply replacing the lithium ions with sodium ions is problematic -- sodium ions are 70 percent bigger than lithium ions and don't fit in the crevices as well.

To find a way to make bigger holes in the manganese oxide, PNNL researchers went much, much smaller. They turned to nanomaterials -- materials made on the nanometer-sized scale, or about a million times thinner than a dime -- that have surprising properties due to their smallness. For example, the short distances that sodium ions have to travel in nanowires might make the manganese oxide a better electrode in ways unrelated to the size of the tunnels.

To explore, the team mixed two different kinds of manganese oxide atomic building blocks -- one whose atoms arrange themselves in pyramids, and another one whose atoms form an octahedron, a diamond-like structure from two pyramids stuck together at their bases. They expected the final material to have large S-shaped tunnels and smaller five-sided tunnels through which the ions could flow.

After mixing, the team treated the materials with temperatures ranging from 450 to 900 degrees Celsius, then examined the materials and tested which treatment worked best. Using a scanning electron microscope, the team found that different temperatures created material of different quality. Treating the manganese oxide at 750 degrees Celsius created the best crystals: too low and the crystals appeared flakey, too high and the crystals turned into larger flat plates.

Zooming in even more using a transmission electron microscope at EMSL, DOE's Environmental Molecular Sciences Laboratory on PNNL's campus, the team saw that manganese oxide heated to 600 degrees had pockmarks in the nanowires that could impede the sodium ions, but the 750 degree-treated wires looked uniform and very crystalline.

But even the best-looking material is just window-dressing if it doesn't perform well. To find out if it lived up to its good looks, the PNNL-Wuhan team dipped the electrode material in electrolyte, the liquid containing sodium ions that will help the manganese oxide electrodes form a current. Then they charged and discharged the experimental battery cells repeatedly.

The team measured a peak capacity of 128 milliamp-hours per gram of electrode material as the experimental battery cell discharged. This result surpassed earlier ones reported by other researchers, one of which achieved a peak capacity of 80 milliamp-hours per gram for electrodes made from manganese oxide with a different production method. The researchers think the lower capacity is due to sodium ions causing structural changes in that manganese oxide that do not occur, or occur less frequently, in the heat-treated nano-sized material.

In addition to high capacity, the material held up well to cycles of charging and discharging, as would occur in consumer use. Again, the material treated at 750 Celsius performed the best: after 100 cycles of charging-discharging, it lost only 7 percent of its capacity. Material treated at 600 Celsius or 900 Celsius lost about 37 percent and 25 percent, respectively.

Even after 1,000 cycles, the capacity of the 750 Celsius-treated electrodes only dropped about 23 percent. The researchers thought the material performed very well, retaining 77 percent of its initial capacity.
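For illustration, the quoted retention figures can be converted into an average per-cycle fade rate if one assumes, purely as a simplification, that capacity decays geometrically with cycle number. The numbers below are the article's; the geometric-fade assumption is not.

# Average per-cycle capacity fade implied by the quoted cycling results,
# assuming (for illustration only) geometric decay of capacity.
results = {
    "750 C, 100 cycles":  (0.07, 100),    # 7% loss after 100 cycles
    "600 C, 100 cycles":  (0.37, 100),
    "900 C, 100 cycles":  (0.25, 100),
    "750 C, 1000 cycles": (0.23, 1000),   # 23% loss after 1,000 cycles
}

for label, (loss, cycles) in results.items():
    retention = 1.0 - loss
    per_cycle = 1.0 - retention ** (1.0 / cycles)   # implied average fade per cycle
    print(f"{label}: ~{per_cycle * 100:.3f}% capacity lost per cycle")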

Last, the team charged the experimental cell at different speeds to determine how quickly it could take up electricity. The team found that the faster they charged it, the less electricity it could hold. This suggested to the team that the speed with which sodium ions could diffuse into the manganese oxide limited the battery cell's capacity -- when charged fast, the sodium ions couldn't enter the tunnels fast enough to fill them up.

To compensate for the slow sodium ions, the researchers suggest making even smaller nanowires in the future to speed up charging and discharging. Grid batteries need fast charging so they can collect as much newly made energy coming from renewable sources as possible. And they need to discharge fast when demand shoots up as consumers turn on their air conditioners and television sets, and plug in their electric vehicles at home.

Such high performing batteries could take the heat off an already taxed electrical power grid.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Pacific Northwest National Laboratory.

Journal Reference:

Yuliang Cao, Lifen Xiao, Wei Wang, Daiwon Choi, Zimin Nie, Jianguo Yu, Laxmikant V. Saraf, Zhenguo Yang, Jun Liu. Reversible Sodium Ion Insertion in Single Crystalline Manganese Oxide Nanowires with Long Cycle Life. Advanced Materials, 2011; DOI: 10.1002/adma.201100904

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Free-floating planets may be more common than stars

ScienceDaily (May 18, 2011) — Astronomers, including a NASA-funded team member, have discovered a new class of Jupiter-sized planets floating alone in the dark of space, away from the light of a star. The team believes these lone worlds were probably ejected from developing planetary systems.

The discovery is based on a joint Japan-New Zealand survey that scanned the center of the Milky Way galaxy during 2006 and 2007, revealing evidence for up to 10 free-floating planets roughly the mass of Jupiter. The isolated orbs, also known as orphan planets, are difficult to spot, and had gone undetected until now. The newfound planets are located at an average approximate distance of 10,000 to 20,000 light-years from Earth.

"Although free-floating planets have been predicted, they finally have been detected, holding major implications for planetary formation and evolution models," said Mario Perez, exoplanet program scientist at NASA Headquarters in Washington.

The discovery indicates there are many more free-floating Jupiter-mass planets that can't be seen. The team estimates there are about twice as many of them as stars. In addition, these worlds are thought to be at least as common as planets that orbit stars. This would add up to hundreds of billions of lone planets in our Milky Way galaxy alone.

"Our survey is like a population census," said David Bennett, a NASA and National Science Foundation-funded co-author of the study from the University of Notre Dame in South Bend, Ind. "We sampled a portion of the galaxy, and based on these data, can estimate overall numbers in the galaxy."

The study, led by Takahiro Sumi from Osaka University in Japan, appears in the May 19 issue of the journal Nature.

The survey is not sensitive to planets smaller than Jupiter and Saturn, but theories suggest lower-mass planets like Earth should be ejected from their stars more often. As a result, they are thought to be more common than free-floating Jupiters.

Previous observations spotted a handful of free-floating, planet-like objects within star-forming clusters, with masses three times that of Jupiter. But scientists suspect the gaseous bodies form more like stars than planets. These small, dim orbs, called brown dwarfs, grow from collapsing balls of gas and dust, but lack the mass to ignite their nuclear fuel and shine with starlight. It is thought the smallest brown dwarfs are approximately the size of large planets.

On the other hand, it is likely that some planets are ejected from their early, turbulent solar systems, due to close gravitational encounters with other planets or stars. Without a star to circle, these planets would move through the galaxy as our sun and other stars do, in stable orbits around the galaxy's center. The discovery of 10 free-floating Jupiters supports the ejection scenario, though it's possible both mechanisms are at play.

"If free-floating planets formed like stars, then we would have expected to see only one or two of them in our survey instead of 10," Bennett said. "Our results suggest that planetary systems often become unstable, with planets being kicked out from their places of birth."

The observations cannot rule out the possibility that some of these planets may have very distant orbits around stars, but other research indicates Jupiter-mass planets in such distant orbits are rare.

The survey, the Microlensing Observations in Astrophysics (MOA), is named in part after a giant wingless, extinct bird family from New Zealand called the moa. A 5.9-foot (1.8-meter) telescope at Mount John University Observatory in New Zealand is used to regularly scan the copious stars at the center of our galaxy for gravitational microlensing events. These occur when something, such as a star or planet, passes in front of another, more distant star. The passing body's gravity warps the light of the background star, causing it to magnify and brighten. Heftier passing bodies, like massive stars, will warp the light of the background star to a greater extent, resulting in brightening events that can last weeks. Small planet-size bodies will cause less of a distortion, and brighten a star for only a few days or less.
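The contrast between week-long stellar events and day-scale planetary events follows from the fact that the Einstein-crossing time of a microlensing event scales roughly as the square root of the lens mass. The sketch below assumes, for illustration only, a 30-day event for a one-solar-mass lens; neither number is a survey parameter.

import math

# Why stellar lenses brighten a background star for weeks while Jupiter-mass
# lenses do so for only a day or so: event duration scales as sqrt(lens mass).
sun_event_days     = 30.0            # assumed duration for a ~1 solar-mass lens
jupiter_mass_ratio = 1.0 / 1047.0    # Jupiter's mass in solar masses

t_jupiter = sun_event_days * math.sqrt(jupiter_mass_ratio)
print(f"Jupiter-mass lens event duration: ~{t_jupiter:.1f} days")   # ~0.9 days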

A second microlensing survey group, the Optical Gravitational Lensing Experiment (OGLE), contributed to this discovery using a 4.2-foot (1.3 meter) telescope in Chile. The OGLE group also observed many of the same events, and their observations independently confirmed the analysis of the MOA group.

NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages NASA's Exoplanet Exploration program office. JPL is a division of the California Institute of Technology in Pasadena.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Journal Reference:

T. Sumi, K. Kamiya, D. P. Bennett, I. A. Bond, F. Abe, C. S. Botzler, A. Fukui, K. Furusawa, J. B. Hearnshaw, Y. Itow, P. M. Kilmartin, A. Korpela, W. Lin, C. H. Ling, K. Masuda, Y. Matsubara, N. Miyake, M. Motomura, Y. Muraki, M. Nagaya, S. Nakamura, K. Ohnishi, T. Okumura, Y. C. Perrott, N. Rattenbury, To. Saito, T. Sako, D. J. Sullivan, W. L. Sweatman, P. J. Tristram, P. C. M. Yock, A. Udalski, M. K. Szymanski, M. Kubiak, G. Pietrzynski, R. Poleski, I. Soszynski, L. Wyrzykowski, K. Ulaczyk. Unbound or distant planetary mass population detected by gravitational microlensing. Nature, 2011; 473 (7347): 349 DOI: 10.1038/nature10092

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here