Tuesday, 31 May 2011

'I'm a tumor and I'm over here!' Nanovaults used to prod immune system to fight cancer

ScienceDaily (May 4, 2011) — UCLA scientists have discovered a way to wake up the immune system to fight cancer by delivering an immune system-stimulating protein in a nanoscale container called a vault directly into lung cancer tumors, harnessing the body's natural defenses to fight disease growth.

The vaults, barrel-shaped nanoscale capsules found in the cytoplasm of all mammalian cells, were engineered to slowly release a protein, the chemokine CCL21, into the tumor. Pre-clinical studies in mice with lung cancer showed that the protein stimulated the immune system to recognize and attack the cancer cells, potently inhibiting cancer growth, said Leonard Rome, a researcher at UCLA's Jonsson Comprehensive Cancer Center, associate director of the California NanoSystems Institute and co-senior author of the study.

"Researchers have been working for many years to develop effective immune therapies to treat cancer, with limited success," said Rome, who has been studying vaults for decades. "In lung tumors, the immune system is down-regulated and what we wanted to do was wake it up, find a way to have the cancer say to the immune system, 'Hey, I'm a tumor and I'm over here. Come get me.' "

The study appears in the May 3, 2011 issue of PLoS ONE, a peer-reviewed journal of the Public Library of Science.

The new vault delivery system, which Rome characterized as "just a dream" three years ago, is based on a 10-year, ongoing research effort focused on using a patient's white blood cells to create dendritic cells, cells of the immune system that process antigen material and present it on their surface to other immune system cells. A Phase I study that is part of the effort, led by UCLA's Dr. Steven Dubinett, used a replication-deficient adenovirus to infect the dendritic cells and prompt them to over-secrete CCL21, the first time the chemokine has been administered to humans. The engineered cells -- 10 million at a time -- were then injected directly into the patient's lung cancer to stimulate an immune response.

The early phase study has shown the dendritic cell method is safe, has no side effects and seems to boost the immune response -- Dubinett and his team found T lymphocytes circulating in the blood stream with specific cytokine signatures, indicating that the lymphocytes were recognizing the cancer as a foreign invader.

However, the process to generate dendritic cells from the white blood cells and engineer them to over-secrete CCL21 is cumbersome, expensive and time-consuming. It also requires a Good Manufacturing Practice (GMP) suite, a specialized laboratory critical for the safe growth and manipulation of cells, which many research institutions do not have.

"It gets complicated," said Dubinett, director of the Lung Cancer Program at UCLA's Jonsson Comprehensive Cancer Center, a professor of pathology and laboratory medicine, member of the California NanoSystems Institute and a co-senior author of the paper. "You have to have a confluence of things happen -- the patient has to be clinically eligible for the study and healthy enough to participate, we have to be able to grow the cells and then genetically modify them and give them back."

There also was the challenge of patient-to-patient variability, said Sherven Sharma, a researcher at both the Jonsson Cancer Center and the California NanoSystems Institute, professor of pulmonary and critical care medicine and co-senior author of the study. It was easier to isolate and grow the dendritic cells in some patients than in others, so results were not consistent.

"We wanted to create a simpler way to develop an environment that would stimulate the immune system," Sharma said.

In the Phase I study, it takes more than a week to differentiate the white blood cells into dendritic cells and grow them to the millions required for the therapy. The dendritic cells are infected with a virus engineered to carry a gene that causes them to secrete CCL21, and they are then injected into the patient's tumor using guided imaging.

"We thought if we could replace the dendritic cells with a nano-vehicle to deliver the CCL21, we would have an easier and less expensive treatment that also could be used at institutions that don't have GMP," Dubinett said.

If successful, the vault delivery method would add a desperately needed weapon to the arsenal in the fight against lung cancer, which accounts for nearly one-third of all cancer deaths in the United States and kills one million people worldwide every year.

"It's crucial that we find new and more effective therapies to fight this deadly disease," Dubinett said. "Right now we don't have adequate options for therapies for advanced lung cancer."

The vault nanoparticles containing the CCL21 have been engineered to slowly release the protein into the tumor over time, producing an enduring immune response. The vaults protect the packaged CCL21 while acting like a time-release capsule, Rome said.

Rome, Dubinett and Sharma plan to test the vault delivery method in human studies within the next three years and hope the promising results found in the pre-clinical animal tumor models will be replicated. If such a study is approved, it would be the first time a vault nanoparticle is used in humans for a cancer immunotherapy.

The vault nanoparticle would require only a single injection into the tumor because of the slow-release design, and it eventually could be designed to be patient specific by adding the individual's tumor antigens into the vault, Dubinett said. The vaults may also be targeted by adding antibodies to their surface that recognize receptors on the tumor. The injection could then be delivered into the blood stream and the vault would navigate to the tumor, a less invasive process that would be easier on the patients. The vault could also seek out and target tumors and metastases too small to be detected with imaging.

Rome cautioned that the vault work is at a much earlier stage than Dubinett's dendritic cell research, but he is encouraged by the early results. The goal is to develop an "off-the-shelf" therapy using vaults.

"In animals, the vault nanoparticles have proven to be as effective, if not more effective, than the dendritic cell approach," he said. "Now we need to get the vault therapy approved by the FDA for use in humans."

Because a vault is a naturally occurring particle, it causes no harm to the body and is potentially an ideal vehicle for delivering personalized therapies, Rome said.

The study was funded by a University of California Discovery Grant, a Jonsson Cancer Center fellowship grant, the National Institutes of Health, the UCLA Lung Cancer Program, the Department of Veterans Affairs Medical Research Funds and the University of California's Tobacco-related Disease Program Award.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Los Angeles Health Sciences.

Journal Reference:

Upendra K. Kar, Minu K. Srivastava, Åsa Andersson, Felicita Baratelli, Min Huang, Valerie A. Kickhoefer, Steven M. Dubinett, Leonard H. Rome, Sherven Sharma. Novel CCL21-Vault Nanocapsule Intratumoral Delivery Inhibits Lung Cancer Growth. PLoS ONE, 2011; 6 (5): e18758 DOI: 10.1371/journal.pone.0018758


Galaxy NGC 4214: A star formation laboratory

ScienceDaily (May 13, 2011) — Size isn't everything ... in astronomy, at least. Dwarf galaxy NGC 4214 may be small, but what it lacks in size it makes up for in content. It is packed with everything an astronomer could ask for, from hot, young star-forming regions to old clusters with red supergiants.

The intricate patterns of glowing ionised hydrogen gas, cavities blown clear of gas by stellar wind, and bright stellar clusters of NGC 4214 can be seen in this optical and near-infrared image, taken using the Wide Field Camera 3 (WFC3) instrument on the NASA/ESA Hubble Space Telescope.

A huge heart-shaped cavity -- possibly the galaxy's most eye-catching feature -- can be seen at the centre of the image. Inside this hole lies a large cluster of massive, young stars ranging in temperature from 10 000 to 50 000 degrees Celsius. Their strong stellar winds are responsible for the creation of this hollow area. The resulting lack of gas prevents any further star formation from occurring in this region.

The galaxy is located around 10 million light-years away in the constellation of Canes Venatici (The Hunting Dogs). Its relatively close proximity to us, combined with the wide variety of evolutionary stages among its stars, makes it an ideal laboratory for researching what triggers star formation and evolution. By chance, there is relatively little interstellar dust between us and NGC 4214, making our measurements of it more accurate.

NGC 4214 contains a large amount of gas, some of which can be seen glowing red in the image, providing abundant material for star formation. The area with the most hydrogen gas, and consequently, the youngest clusters of stars (around two million years old), lies in the upper part of this Hubble image. Like most of the features in the image, this area is visible due to ionisation of the surrounding gas by the ultraviolet light of a young cluster of stars within.

Observations of this dwarf galaxy have also revealed clusters of much older red supergiant stars, seen at a late stage in their evolution, and additional older stars can be seen dotted all across the galaxy. While these older stars dominate in infrared emission, they shine only faintly in this visible-light image. The variety of stars at different stages of evolution indicates that the recent and ongoing starburst periods are by no means the first, and the galaxy's numerous ionised hydrogen regions suggest they will not be the last.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by ESA/Hubble Information Centre.


Toward optical computing in handheld electronics: Graphene optical modulators could lead to ultrafast communications

ScienceDaily (May 9, 2011) — Scientists at the University of California, Berkeley, have demonstrated a new technology for graphene that could break the current speed limits in digital communications.

The team of researchers, led by UC Berkeley engineering professor Xiang Zhang, built a tiny optical device that uses graphene, a one-atom-thick layer of crystallized carbon, to switch light on and off. This switching ability is the fundamental characteristic of a network modulator, which controls the speed at which data packets are transmitted. The faster the data pulses are sent out, the greater the volume of information that can be sent. Graphene-based modulators could soon allow consumers to stream full-length, high-definition, 3-D movies onto a smartphone in a matter of seconds, the researchers said.

"This is the world's smallest optical modulator, and the modulator in data communications is the heart of speed control," said Zhang, who directs a National Science Foundation (NSF) Nanoscale Science and Engineering Center at UC Berkeley. "Graphene enables us to make modulators that are incredibly compact and that potentially perform at speeds up to ten times faster than current technology allows. This new technology will significantly enhance our capabilities in ultrafast optical communication and computing."

In this latest work, described in the May 8 advance online publication of the journal Nature, researchers were able to tune the graphene electrically to absorb light in wavelengths used in data communication. This advance adds yet another advantage to graphene, which has gained a reputation as a wonder material since 2004, when it was first extracted from graphite, the material found in pencil lead. That achievement earned University of Manchester scientists Andre Geim and Konstantin Novoselov the Nobel Prize in Physics last year.

Zhang worked with fellow faculty member Feng Wang, an assistant professor of physics and head of the Ultrafast Nano-Optics Group at UC Berkeley. Both Zhang and Wang are faculty scientists at Lawrence Berkeley National Laboratory's Materials Science Division.

"The impact of this technology will be far-reaching," said Wang. "In addition to high-speed operations, graphene-based modulators could lead to unconventional applications due to graphene's flexibility and ease in integration with different kinds of materials. Graphene can also be used to modulate new frequency ranges, such as mid-infrared light, that are widely used in molecular sensing."

Graphene is the thinnest, strongest crystalline material yet known. It can be stretched like rubber, and it has the added benefit of being an excellent conductor of heat and electricity. This last quality of graphene makes it a particularly attractive material for electronics.

"Graphene is compatible with silicon technology and is very cheap to make," said Ming Liu, post-doctoral researcher in Zhang's lab and co-lead author of the study. "Researchers in Korea last year have already produced 30-inch sheets of it. Moreover, very little graphene is required for use as a modulator. The graphite in a pencil can provide enough graphene to fabricate 1 billion optical modulators."

It is the behavior of photons and electrons in graphene that first caught the attention of the UC Berkeley researchers.

The researchers found that the energy of the electrons, referred to as the Fermi level, can be easily altered by the voltage applied to the material. The graphene's Fermi level in turn determines whether the light is absorbed or not.

When a sufficient negative voltage is applied, electrons are drawn out of the graphene and are no longer available to absorb photons. The light is "switched on" because the graphene becomes totally transparent as the photons pass through.

Graphene is also transparent at certain positive voltages because, in that situation, the electrons become packed so tightly that they cannot absorb the photons.

The researchers found a sweet spot in the middle where there is just enough voltage applied so the electrons can prevent the photons from passing, effectively switching the light "off."
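
The voltage-controlled switching described above follows the standard Pauli-blocking rule for interband absorption in graphene. The short sketch below illustrates that rule with textbook values; it is not code or data from the Nature paper, and the only physics it encodes is that absorption is blocked once the Fermi level sits more than half a photon energy away from the Dirac point.

```python
# Illustrative sketch of Pauli blocking in graphene (textbook optics, not code or
# parameters from the Nature paper). An interband transition needs a filled state at
# -E_photon/2 and an empty state at +E_photon/2, so absorption is blocked once the
# Fermi level moves further than half the photon energy from the Dirac point.

PLANCK_EV_NM = 1239.84  # h*c in eV*nm; photon energy in eV = 1239.84 / wavelength in nm

def interband_absorption_allowed(fermi_level_ev: float, wavelength_nm: float) -> bool:
    """True if graphene at this Fermi level can absorb light of this wavelength."""
    photon_energy_ev = PLANCK_EV_NM / wavelength_nm
    return abs(fermi_level_ev) < photon_energy_ev / 2

# Telecom light near 1550 nm carries roughly 0.8 eV per photon.
for fermi_level_ev in (-0.6, 0.0, +0.6):  # strong negative bias, the "sweet spot", strong positive bias
    if interband_absorption_allowed(fermi_level_ev, 1550):
        print(f"E_F = {fermi_level_ev:+.1f} eV: absorbing, light switched off")
    else:
        print(f"E_F = {fermi_level_ev:+.1f} eV: transparent, light switched on")
```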

"If graphene were a hallway, and electrons were people, you could say that, when the hall is empty, there's no one around to stop the photons," said Xiaobo Yin, co-lead author of the Nature paper and a research scientist in Zhang's lab. "In the other extreme, when the hall is too crowded, people can't move and are ineffective in blocking the photons. It's in between these two scenarios that the electrons are allowed to interact with and absorb the photons, and the graphene becomes opaque."

In their experiment, the researchers layered graphene on top of a silicon waveguide to fabricate optical modulators. The researchers were able to achieve a modulation speed of 1 gigahertz, but they noted that the speed could theoretically reach as high as 500 gigahertz for a single modulator.

While components based upon optics have many advantages over those that use electricity, including the ability to carry denser packets of data more quickly, attempts to create optical interconnects that fit neatly onto a computer chip have been hampered by the relatively large amount of space required in photonics.

Light waves are less agile in tight spaces than their electrical counterparts, the researchers noted, so photon-based applications have been primarily confined to large-scale devices, such as fiber optic lines.

"Electrons can easily make an L-shaped turn because the wavelengths in which they operate are small," said Zhang. "Light wavelengths are generally bigger, so they need more space to maneuver. It's like turning a long, stretch limo instead of a motorcycle around a corner. That's why optics require bulky mirrors to control their movements. Scaling down the optical device also makes it faster because the single atomic layer of graphene can significantly reduce the capacitance -- the ability to hold an electric charge -- which often hinders device speed."

Graphene-based modulators could overcome the space barrier of optical devices, the researchers said. They successfully shrank a graphene-based optical modulator down to a relatively tiny 25 square microns, a size roughly 400 times smaller than a human hair. The footprint of a typical commercial modulator can be as large as a few square millimeters.

Even at such a small size, graphene packs a punch in bandwidth capability. Graphene can absorb a broad spectrum of light, ranging over thousands of nanometers from ultraviolet to infrared wavelengths. This allows graphene to carry more data than current state-of-the-art modulators, which operate at a bandwidth of up to 10 nanometers, the researchers said.

"Graphene-based modulators not only offer an increase in modulation speed, they can enable greater amounts of data packed into each pulse," said Zhang. "Instead of broadband, we will have 'extremeband.' What we see here and going forward with graphene-based modulators are tremendous improvements, not only in consumer electronics, but in any field that is now limited by data transmission speeds, including bioinformatics and weather forecasting. We hope to see industrial applications of this new device in the next few years."

Other UC Berkeley co-authors of this paper are graduate student Erick Ulin-Avila and post-doctoral researcher Thomas Zentgraf in Zhang's lab; and visiting scholar Baisong Geng and graduate student Long Ju in Wang's lab.

This work was supported through the Center for Scalable and Integrated Nano-Manufacturing (SINAM), an NSF Nanoscale Science and Engineering Center. Funding from the Department of Energy's Basic Energy Science program at Lawrence Berkeley National Laboratory also helped support this research.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Berkeley. The original article was written by Sarah Yang, Media Relations.

Journal Reference:

Ming Liu, Xiaobo Yin, Erick Ulin-Avila, Baisong Geng, Thomas Zentgraf, Long Ju, Feng Wang, Xiang Zhang. A graphene-based broadband optical modulator. Nature, 2011; DOI: 10.1038/nature10067


New way to control conductivity: Reversible control of electrical and thermal properties could find uses in storage systems

ScienceDaily (May 5, 2011) — A team of researchers at MIT has found a way to manipulate both the thermal conductivity and the electrical conductivity of materials simply by changing the external conditions, such as the surrounding temperature. And the technique they found can change electrical conductivity by factors of well over 100, and heat conductivity by more than threefold.

"It's a new way of changing and controlling the properties" of materials -- in this case a class called percolated composite materials -- by controlling their temperature, says Gang Chen, MIT's Carl Richard Soderberg Professor of Power Engineering and director of the Pappalardo Micro and Nano Engineering Laboratories. Chen is the senior author of a paper describing the process that was published online on April 19 and will appear in a forthcoming issue of Nature Communications. The paper's lead authors are former MIT visiting scholars Ruiting Zheng of Beijing Normal University and Jinwei Gao of South China Normal University, along with current MIT graduate student Jianjian Wang. The research was partly supported by grants from the National Science Foundation.

The system Chen and his colleagues developed could be applied to many different materials for either thermal or electrical applications. The finding is so novel, Chen says, that the researchers hope some of their peers will respond with an immediate, "I have a use for that!"

One potential use of the new system, Chen explains, is for a fuse to protect electronic circuitry. In that application, the material would conduct electricity with little resistance under normal, room-temperature conditions. But if the circuit begins to heat up, that heat would increase the material's resistance, until at some threshold temperature it essentially blocks the flow, acting like a blown fuse. But then, instead of needing to be reset, as the circuit cools down the resistance decreases and the circuit automatically resumes its function.

Another possible application is for storing heat, such as from a solar thermal collector system, and later using it to heat water or homes or to generate electricity. The system's much-improved thermal conductivity in the solid state helps it transfer heat.

Essentially, what the researchers did was suspend tiny flakes of one material in a liquid that, like water, forms crystals as it solidifies. For their initial experiments, they used flakes of graphite suspended in liquid hexadecane, but they showed the generality of their process by demonstrating the control of conductivity in other combinations of materials as well. The liquid used in this research has a melting point close to room temperature -- advantageous for operations near ambient conditions -- but the principle should be applicable for high-temperature use as well.

The process works because when the liquid freezes, the pressure of its forming crystal structure pushes the floating particles into closer contact, increasing their electrical and thermal conductance. When it melts, that pressure is relieved and the conductivity goes down. In their experiments, the researchers used a suspension that contained just 0.2 percent graphite flakes by volume. Such suspensions are remarkably stable: Particles remain suspended indefinitely in the liquid, as was shown by examining a container of the mixture three months after mixing.
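
As a rough illustration of this freeze/melt switching, and of the self-resetting fuse idea described earlier, the toy model below treats the composite as having just two conductivity states. The melting point and the exact ratios are placeholder values consistent with the figures quoted in the article, not the authors' measured data.

```python
# Toy model of the switching described above (an illustration only, not the authors'
# data): below the liquid's freezing point the suspended graphite flakes are squeezed
# into contact and the composite conducts well; above it, conduction drops sharply.

MELTING_POINT_C = 18.0    # assumed: hexadecane melts near room temperature
ELECTRICAL_DROP = 100.0   # article: electrical conductivity changes by a factor well over 100
THERMAL_DROP = 3.0        # article: thermal conductivity changes by more than threefold

def relative_conductivity(temperature_c: float) -> tuple[float, float]:
    """Return (electrical, thermal) conductivity relative to the frozen state."""
    if temperature_c < MELTING_POINT_C:  # frozen: flakes pressed into a percolating network
        return 1.0, 1.0
    return 1.0 / ELECTRICAL_DROP, 1.0 / THERMAL_DROP  # melted: particle contacts relax

# A circuit protected by such a "fuse" conducts when cool, chokes off when it overheats,
# and recovers on its own once the composite refreezes.
for temperature in (10.0, 30.0):
    print(temperature, relative_conductivity(temperature))
```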

By selecting different fluids and different materials suspended within that liquid, the critical temperature at which the change takes place can be adjusted at will, Chen says.

"Using phase change to control the conductivity of nanocomposites is a very clever idea," says Li Shi, a professor of mechanical engineering at the University of Texas at Austin. Shi adds that as far as he knows "this is the first report of this novel approach" to producing such a reversible system.

"I think this is a very crucial result," says Joseph Heremans, professor of physics and of mechanical and aerospace engineering at Ohio State University. "Heat switches exist," but involve separate parts made of different materials, whereas "here we have a system with no macroscopic moving parts," he says. "This is excellent work."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Massachusetts Institute of Technology. The original article was written by David L. Chandler, MIT News Office.

Journal Reference:

Ruiting Zheng, Jinwei Gao, Jianjian Wang, Gang Chen. Reversible temperature regulation of electrical and thermal conductivity using liquid–solid phase transitions. Nature Communications, 2011; 2: 289 DOI: 10.1038/ncomms1288


New online mechanism for electric vehicle charging

ScienceDaily (May 8, 2011) — Researchers at the University of Southampton have designed a new pricing mechanism that could change the way in which electric vehicles are charged. It is based on an online auction protocol that makes it possible to charge electric vehicles without overloading the local electricity network.

The paper, entitled Online Mechanism Design for Electric Vehicle Charging, was presented this week at AAMAS 2011, the Tenth Conference on Autonomous Agents and Multiagent Systems. It outlines a system in which electric vehicle owners use computerised agents to bid for the power to charge their vehicles and to organise time slots when a vehicle is available for charging.

Dr Alex Rogers, University of Southampton computer scientist and one of the paper's authors, says: "Plug-in hybrid electric vehicles are expected to place a considerable strain on local electricity distribution networks. If many vehicles charge simultaneously, they may overload the local distribution network, so their charging needs to be carefully scheduled."

To address this issue, Dr Rogers and his team turned to the field of online mechanism design. They designed a mechanism that allows vehicle owners to specify their requirements (for example, when they need the vehicle and how far they expect to drive). The system then automatically schedules charging of the vehicles' batteries. The mechanism ensures that there is no incentive to 'game the system' by reporting that the vehicle is needed earlier than is actually the case, and users who place a higher demand on the system are automatically charged more than those who can wait.

University of Southampton computer scientist Dr Enrico Gerding, the lead author of the paper, adds: "The mechanism leaves some available units of electricity un-allocated. This is counter-intuitive since it seems to be inefficient but it turns out to be essential to ensure that the vehicle owners don't have to delay plugging-in or misreport their requirements, in an attempt to get a better deal."
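
To make the scheduling problem concrete, the sketch below allocates a limited number of charging units per time slot among vehicles that report an arrival time, a deadline and a remaining requirement. It is a deliberately simplified illustration, not the Southampton mechanism itself, which also prices the units and intentionally withholds some of them so that truthful reporting stays in each owner's interest; all names and numbers are invented.

```python
# Simplified sketch of capacity-constrained, online charging allocation (an
# illustration of the problem, NOT the auction mechanism described in the paper).

from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    arrival: int    # first time slot the vehicle is plugged in
    deadline: int   # last time slot it is available for charging
    needed: int     # charging units still required
    value: float    # reported value per unit of charge

def schedule(vehicles, horizon, capacity_per_slot):
    """Greedy online allocation: each slot, give one unit each to the highest-value
    plugged-in vehicles that still need charge, up to the local network capacity."""
    plan = {v.name: [] for v in vehicles}
    for slot in range(horizon):
        eligible = [v for v in vehicles if v.arrival <= slot <= v.deadline and v.needed > 0]
        eligible.sort(key=lambda v: v.value, reverse=True)
        for v in eligible[:capacity_per_slot]:
            v.needed -= 1
            plan[v.name].append(slot)
    return plan

fleet = [Vehicle("A", 0, 5, 3, 0.9), Vehicle("B", 1, 3, 2, 0.5), Vehicle("C", 0, 7, 4, 0.2)]
print(schedule(fleet, horizon=8, capacity_per_slot=1))
```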

In a study based on the performance of currently available electric vehicles, performed by Dr Valentin Robu and Dr Sebastien Stein, the mechanism was shown to increase the number of electric vehicles that can be charged overnight, within a neighbourhood of 200 homes, by as much as 40 per cent.

This research follows on from Dr Rogers' and Professor Nick Jennings' work on developing agents that can trade on the stock market and manage crisis communications, and from Dr Rogers' iPhone application, GridCarbon, which measures the carbon intensity of the UK grid.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Southampton, via EurekAlert!, a service of AAAS.


Monday, 30 May 2011

Quantum simulation with light: Frustrations between photon pairs

ScienceDaily (May 6, 2011) — Researchers from the Vienna Center for Quantum Science and Technology at the University of Vienna and the Institute of Quantum Optics and Quantum Information (IQOQI) at the Austrian Academy of Sciences used a quantum mechanical system in the laboratory to simulate complex many-body systems. The experiment, which is published in Nature Physics, points toward future quantum simulators with enormous potential to yield insights into unknown quantum phenomena.

Even the behavior of relatively small quantum systems cannot be calculated classically, because quantum states contain much more information than their classical counterparts. However, if another quantum system is used to simulate the quantum system of interest, then answers about the properties of the complex quantum system can be obtained.

When is a quantum system frustrated?

Currently, many international groups are focusing their research on frustrated quantum systems, which have been conjectured to explain high-temperature superconductivity. A quantum system is frustrated if competing requirements cannot be satisfied simultaneously. The Viennese research group realized for the first time an experimental quantum simulation, where the frustration regarding the "pairing" of correlations was closely investigated.

Using two pairs of entangled photons, the researchers were able to simulate a frustrated quantum system consisting of four particles. "Only the recent development of our quantum technology allows us not only to rebuild other quantum systems, but also to simulate their dynamics," says Philip Walther (University of Vienna). "Now we can prepare quantum states of individual photons to gain insights into other quantum systems," explains Xiao-song Ma (Austrian Academy of Sciences). Two polarization-entangled photons exhibit in many ways the same quantum physical properties as, for example, electrons in matter.

Conflict over partnerships

The research team of international scientists from China, Serbia, New Zealand and Austria prepared single photons that face a conflict over partnerships with one another: each photon can establish a bond with only one partner exclusively, yet "wants" to be correlated with several partners; obviously, this leads to frustration. As a result, the quantum system resorts to "tricks" that allow quantum fluctuations in which different pairings coexist as a superposition.

The work of the Viennese group underlines that quantum simulations are a very good tool for calculating quantum states of matter and are thus opening the path for the investigation of more complex systems.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Vienna.

Journal Reference:

Xiao-song Ma, Borivoje Dakic, William Naylor, Anton Zeilinger, Philip Walther. Quantum simulation of the wavefunction to probe frustrated Heisenberg spin systems. Nature Physics, 2011; 7 (5): 399 DOI: 10.1038/nphys1919


What electric car convenience is worth

ScienceDaily (May 18, 2011) — Want a Nissan Leaf? Join the 20,000 people on the waiting list to get one. The Chevy Volt got your eye? General Motors ramped up availability earlier this year to try and meet demand. With the latest generation of electric vehicles gaining traction, new findings from University of Delaware researchers are informing automakers' and policymakers' decisions about the environmentally friendly cars.

Results of one study show the electric car attributes that are most important for consumers: driving range, fuel cost savings and charging time. The results are based on a national survey conducted by the researchers, UD professors George Parsons, Willett Kempton and Meryl Gardner, and Michael Hidrue, who recently graduated from UD with a doctoral degree in economics. Lead author Hidrue conducted the research for his dissertation.

The study, which surveyed more than 3,000 people, showed what individuals would be willing to pay for various electric vehicle attributes. For example, as battery charging time decreases from 10 hours to five hours for a 50-mile charge, consumers are willing to pay about $427 for each hour of reduction. Drop charging time from five hours to one hour, and consumers would pay an estimated $930 an hour. Decrease the time from one hour to 10 minutes, and they would pay $3,250 per hour.

For driving range, consumers value each additional mile of range at about $75 per mile up to 200 miles, and $35 a mile from 200-300 miles. So, for example, if an electric vehicle has a range of 200 miles and an otherwise equivalent gasoline vehicle has a range of 300, people would require a price discount of about $3,500 for the electric version. That assumes everything else about the vehicle is the same, and clearly there is lower fuel cost with an electric vehicle and often better performance. So all the attributes have to be accounted for in the final analysis of any car.
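
The range figures translate into a simple worked example. The sketch below uses only the per-mile values quoted above and reproduces the $3,500 discount for a 100-mile shortfall; it illustrates the arithmetic, not the researchers' statistical model.

```python
# Worked example of the survey's range valuation (arithmetic only, using the
# per-mile figures quoted in the article): roughly $75 per mile of range up to
# 200 miles, and $35 per mile from 200 to 300 miles.

def range_value_usd(miles: float) -> float:
    """Approximate dollar value consumers place on the first `miles` of driving range."""
    first_tier = min(miles, 200) * 75
    second_tier = max(0, min(miles, 300) - 200) * 35
    return first_tier + second_tier

# A 200-mile electric car versus an otherwise identical 300-mile gasoline car:
discount = range_value_usd(300) - range_value_usd(200)
print(f"Implied price discount for the 100-mile shortfall: ${discount:,.0f}")  # $3,500
```

The charging-time figures can be read the same way: cutting a 50-mile charge from ten hours to five would be worth roughly five times $427, or a little over $2,100, to the average respondent.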

"This information tells the car manufacturers what people are willing to pay for another unit of distance," Parsons said. "It gives them guidance as to what cost levels they need to attain to make the cars competitive in the market."

The researchers found that, without subsidies and at current gas prices, battery costs would need to decrease substantially for electric cars to become competitive in the market. However, the researchers said, the current $7,500 government tax credit could bridge the gap between electric car costs and consumers' willingness to pay if battery costs decline to $300 a kilowatt hour, the Department of Energy's projected cost level for 2014. Many analysts believe that goal is within reach.

The team's analysis could also help guide automakers' marketing efforts -- it showed that an individual's likelihood of buying an electric vehicle increases with characteristics such as youth, education and an environmental lifestyle. Income was not important.

In a second recently published study, UD researchers looked at electric vehicle driving range using second-by-second driving records. That study, which is based on a year of driving data from nearly 500 instrumented gasoline vehicles, showed that 9 percent of the vehicles never exceeded 100 miles in a day. For those who are willing to make adaptations six times a year -- borrow a gasoline car, for example -- the 100-mile range would work for 32 percent of drivers.

"It appears that even modest electric vehicles with today's limited battery range, if marketed correctly to segments with appropriate driving behavior, comprise a large enough market for substantial vehicle sales," the authors concluded.

Kempton, who published the driving patterns article with UD marine policy graduate student Nathaniel Pearre and colleagues at the Georgia Institute of Technology, pointed out that U.S. car sales are around 12 million in an average, non-recession year. Nine percent of that would be a million cars per year -- for comparison, Chevy plans to manufacture just 10,000 Volts in 2011.

By this measure, the potential market would justify many more plug-in cars than are currently being produced, Kempton said.

The findings of the two studies were reported online in March and February in Resource and Energy Economics and Transportation Research, respectively.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Delaware. The original article was written by Elizabeth Boyle.

Journal References:

Nathaniel S. Pearre, Willett Kempton, Randall L. Guensler, Vetri V. Elango. Electric vehicles: How much range is required for a day’s driving? Transportation Research Part C: Emerging Technologies, 2011; DOI: 10.1016/j.trc.2010.12.010

Michael K. Hidrue, George R. Parsons, Willett Kempton, Meryl P. Gardner. Willingness to pay for electric vehicles and their attributes. Resource and Energy Economics, 2011; DOI: 10.1016/j.reseneeco.2011.02.002


Portable tech might provide drinking water, power to villages

ScienceDaily (May 4, 2011) — Researchers have developed an aluminum alloy that could be used in a new type of mobile technology to convert non-potable water into drinking water while also extracting hydrogen to generate electricity.

Such a technology might be used to provide power and drinking water to villages and also for military operations, said Jerry Woodall, a Purdue University distinguished professor of electrical and computer engineering.

The alloy contains aluminum, gallium, indium and tin. Immersing the alloy in freshwater or saltwater causes a spontaneous reaction that splits the water molecules, releasing hydrogen gas while the oxygen binds to the aluminum. The hydrogen could then be fed to a fuel cell to generate electricity, producing water in the form of steam as a byproduct, he said.
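
For orientation, the chemistry at work is the textbook aluminum-water reaction, 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2, which matches the products Woodall lists later (aluminum hydroxide, heat and hydrogen). The sketch below works out the hydrogen yield per kilogram of aluminum from that stoichiometry; the heating value of hydrogen is a standard figure, and none of these numbers come from the Purdue team.

```python
# Textbook stoichiometry for the aluminum-water reaction described above
# (2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2); a back-of-the-envelope sketch, not
# figures from the Purdue team.

M_AL = 26.98               # g/mol, aluminum
M_H2 = 2.016               # g/mol, hydrogen gas
H2_LHV_MJ_PER_KG = 120.0   # lower heating value of hydrogen, approximate

def hydrogen_from_aluminum(kg_al: float) -> tuple[float, float]:
    """Return (kg of H2 produced, MJ of chemical energy in that H2) for kg_al of aluminum."""
    mol_al = kg_al * 1000 / M_AL
    mol_h2 = mol_al * 3 / 2           # 3 mol H2 for every 2 mol Al
    kg_h2 = mol_h2 * M_H2 / 1000
    return kg_h2, kg_h2 * H2_LHV_MJ_PER_KG

kg_h2, energy_mj = hydrogen_from_aluminum(1.0)
print(f"1 kg Al -> {kg_h2:.3f} kg H2, about {energy_mj:.0f} MJ ({energy_mj / 3.6:.1f} kWh) before fuel-cell losses")
```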

"The steam would kill any bacteria contained in the water, and then it would condense to purified water," Woodall said. "So, you are converting undrinkable water to drinking water."

Because the technology works with saltwater, it might have marine applications, such as powering boats and robotic underwater vehicles. The technology also might be used to desalinate water, said Woodall, who is working with doctoral student Go Choi.

A patent on the design is pending.

Woodall envisions a new portable technology for regions that aren't connected to a power grid, such as villages in Africa and other remote areas.

"There is a big need for this sort of technology in places lacking connectivity to a power grid and where potable water is in short supply," he said. "Because aluminum is a low-cost, non-hazardous metal that is the third-most abundant metal on Earth, this technology promises to enable a global-scale potable water and power technology, especially for off-grid and remote locations."

The potable water could be produced for about $1 per gallon, and electricity could be generated for about 35 cents per kilowatt hour of energy.

"There is no other technology to compare it against, economically, but it's obvious that 34 cents per kilowatt hour is cheap compared to building a power plant and installing power lines, especially in remote areas," Woodall said.

The unit, including the alloy, the reactor and the fuel cell, might weigh less than 100 pounds.

"You could drop the alloy, a small reaction vessel and a fuel cell into a remote area via parachute," Woodall said. "Then the reactor could be assembled along with the fuel cell. The polluted water or the seawater would be added to the reactor and the reaction converts the aluminum and water into aluminum hydroxide, heat and hydrogen gas on demand."

The aluminum hydroxide waste is non-toxic and could be disposed of in a landfill.

The researchers have a design but haven't built a prototype.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Purdue University. The original article was written by Emil Venere.


Scientists achieve guiding of electrons by purely electric fields

ScienceDaily (May 10, 2011) — The investigation of the properties of electrons plays a key role in the understanding of the fundamental laws of nature. However, being extremely small and fast, electrons are difficult to control. Physicists led by Dr. Peter Hommelhoff, head of the Max Planck Research Group "Ultrafast Quantum Optics" at the Max Planck Institute of Quantum Optics (Garching near Munich), have now demonstrated efficient guiding of slow electrons by applying a microwave voltage to electrodes fabricated on a planar substrate.

The research is published online in Physical Review Letters.

This new technique of electron guiding -- which resembles the guiding of light waves in optical fibres -- promises a variety of applications, from guided matter-wave experiments to non-invasive electron microscopy.

Electrons were the first elementary particles to reveal their wave-like properties and were therefore of great importance in the development of the theory of quantum mechanics. Even now, the observation of electrons leads to new insight into the fundamental laws of physics. Measurements involving confined electrons have so far mainly been performed in so-called Penning traps, which combine a static magnetic field with a static electric field.

For a number of experiments with propagating electrons, such as interferometry with slow electrons, it would be advantageous to confine the electrons with purely electric fields. This can be done in an alternating quadrupole potential, similar to the standard technique used for ion trapping. These so-called Paul traps are based on four electrodes to which a radiofrequency voltage is applied. The resulting field gives rise to a force that keeps the particle in the centre of the trap. Wolfgang Paul received the 1989 Nobel Prize in Physics for the invention of these traps.

For several years now, scientists have been realizing Paul traps with microstructured electrodes on planar substrates, using standard microelectronic chip technology. Dr. Hommelhoff and his group have now applied this method to electrons for the first time. Since the mass of these point-like particles is only about one ten-thousandth of the mass of an ion, electrons react much faster to electric fields than the comparatively heavy ions. Hence, in order to guide electrons, the frequency of the alternating voltage applied to the electrodes has to be much higher than for the confinement of ions; it lies in the microwave range, at around 1 GHz.
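
The jump from radio frequencies to microwaves follows from the standard Paul-trap scaling: for the same electrode geometry, drive voltage and stability parameter, the required drive frequency grows as one over the square root of the particle's mass. The comparison ion below is an assumed example for illustration, not a parameter taken from the MPQ experiment.

```python
# Sketch of the standard Paul-trap mass scaling (the comparison ion and its drive
# frequency are assumed values, not parameters of the MPQ experiment): for fixed
# geometry, voltage and stability parameter, drive frequency scales as 1/sqrt(mass).

import math

ATOMIC_MASS_UNIT = 1.66054e-27  # kg
ELECTRON_MASS = 9.10938e-31     # kg

def electron_drive_frequency(f_ion_hz: float, ion_mass_u: float) -> float:
    """Drive frequency needed to confine electrons in a trap that holds an ion of
    ion_mass_u atomic mass units when driven at f_ion_hz."""
    return f_ion_hz * math.sqrt(ion_mass_u * ATOMIC_MASS_UNIT / ELECTRON_MASS)

# Assumed example: a trap that confines a 40 u ion with a 5 MHz drive.
print(f"{electron_drive_frequency(5e6, 40) / 1e9:.1f} GHz")  # about 1.4 GHz, i.e. microwaves
```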

In the experiment, electrons are generated in a thermal source (in which a tungsten wire is heated, as in a light bulb) and the emitted electrons are collimated into a parallel beam with an energy of a few electron volts. From there the electrons are injected into the "wave-guide," which is generated by five electrodes on a planar substrate to which an alternating voltage with a frequency of about 1 GHz is applied. This produces an oscillating quadrupole field at a distance of half a millimetre above the electrodes, confining the electrons in the radial direction. In the longitudinal direction there is no force acting on the particles, so they are free to travel along the "guide tube." As the confinement in the radial direction is very strong, the electrons are forced to follow even small directional changes of the electrodes.

In order to make this effect more visible, the 37 mm long electrodes are bent into a curve with an opening angle of 30 degrees and a bending radius of 40 mm. At the end of the structure the guided electrons are ejected and registered by a detector. A bright spot caused by the guided electrons appears on the detector right at the exit of the guide tube, in the left part of the detector image. When the alternating field is switched off, a more diffusely illuminated area shows up on the right side, caused by electrons spreading out from the source and propagating on straight trajectories over the substrate.

"With this fundamental experiment we were able to show that electrons can be efficiently guided be purely electric fields," says Dr. Hommelhoff. "However, as our electron source yields a rather poorly collimated electron beam we still lose many electrons." In the future the researchers plan to combine the new microwave guide with an electron source based on field emission from an atomically sharp metal tip. These devices deliver electron beams with such a strong collimation that their transverse component is limited by the Heisenberg uncertainty principle only.

Under these conditions it should be feasible to investigate the individual quantum mechanical oscillations of the electrons in the radial potential of the guide. "The strong confinement of electrons observed in our experiment means that a 'jump' from one quantum state to the neighbouring higher state requires a lot of energy and is therefore not very likely to happen," explains Johannes Hoffrogge, doctoral student at the experiment. "Once a single quantum state is populated it will remain so for an extended period of time and can be used for quantum experiments." This would make it possible to conduct quantum physics experiments such as interferometry with guided slow electrons. Here the wave function of an electron is first split up; later on, its two components are brought together again, whereby characteristic superpositions of the electron's quantum states can be generated. The new method could also be applied to a new form of electron microscopy.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Max Planck Institute of Quantum Optics.

Journal Reference:

J. Hoffrogge, R. Fröhlich, M. Kasevich, P. Hommelhoff. Microwave Guiding of Electrons on a Chip. Physical Review Letters, 2011; 106 (19) DOI: 10.1103/PhysRevLett.106.193001


Antibody-based biosensor can guide environmental clean-ups, provide early warning system for spills

ScienceDaily (May 9, 2011) — Tests of a new antibody-based "biosensor" developed by researchers at the Virginia Institute of Marine Science show that it can detect marine pollutants like oil much faster and more cheaply than current technologies. The device is small and sturdy enough to be used from a boat.

Testing of the biosensor in the Elizabeth River and Yorktown Creek, which both drain into lower Chesapeake Bay, shows that the instrument can process samples in less than 10 minutes, detect pollutants at levels as low as just a few parts per billion, and do so at a cost of just pennies per sample. Current technology requires hours of lab work, with a per-sample cost of up to $1,000.

"Our biosensor combines the power of the immune system with the sensitivity of cutting-edge electronics," says Dr. Mike Unger of VIMS. "It holds great promise for real-time detection and monitoring of oil spills and other releases of contaminants into the marine environment."

The biosensor was developed and tested by Unger, fellow VIMS professor Steve Kaattari, and their doctoral student Candace Spier, with assistance from marine scientist George Vadas. The team's report of field tests with the sensor appears in this month's issue of Environmental Toxicology and Chemistry.

The instrument was developed in conjunction with Sapidyne Instruments, Inc., with funding from the state of Virginia, the Office of Naval Research, and the Cooperative Institute for Coastal and Estuarine Environmental Technology, a partnership between NOAA and the University of New Hampshire.

The tests in the Elizabeth River took place during the clean-up of a site contaminated by polycyclic aromatic hydrocarbons (PAHs), byproducts of decades of industrial use of creosote to treat marine pilings. The U.S. Environmental Protection Agency considers PAHs highly toxic and lists 17 as suspected carcinogens.

The biosensor allowed the researchers to quantify PAH concentrations while the Elizabeth River remediation was taking place, gaining on-site knowledge about water quality surrounding the remediation site. Spier says the test was "the first use of an antibody-based biosensor to guide sampling efforts through near real-time evaluation of environmental contamination."

In the Yorktown Creek study, the researchers used the biosensor to track the runoff of PAHs from roadways and soils during a rainstorm.

Biosensor development

Kaattari says "Our basic idea was to fuse two different kinds of technologies -- monoclonal antibodies and electronic sensors -- in order to detect contaminants."

Antibodies are proteins produced by the immune system of humans and other mammals. They are particularly well suited for detecting contaminants because they have, as Kaattari puts it, "almost an infinite power to recognize the 3-dimensional shape of any molecule."

Mammals produce antibodies that recognize and bind with large organic molecules such as proteins or with viruses. The VIMS team took this process one step further, linking proteins to PAHs and other contaminants, then exposing mice to these paired compounds in a manner very similar to a regular vaccination.

"Just like you get vaccinated against the flu, we in essence are vaccinating our mice against contaminants," says Kaattari. "The mouse's lymphatic system then produces antibodies to PAHs, TNT, tributyl tin [TBT, the active ingredient in anti-fouling paints for boats], or other compounds."

Once a mouse has produced an antibody to a particular contaminant, the VIMS team applies standard clinical techniques to produce "monoclonal antibodies" in sufficiently large quantities for use in a biosensor.

"This technology allows you to immortalize a lymphocyte that produces only a very specific antibody," says Kaattari. "You grow the lymphocytes in culture and can produce large quantities of antibodies within a couple of weeks. You can preserve the antibody-producing lymphocyte forever, which means you don't have to go to a new animal every time you need to produce new antibodies."

From antibody to electrical signal

The team's next step was to develop a sensor that can recognize when an antibody binds with a contaminant and translate that recognition into an electrical signal. The Sapidyne® sensor used by the VIMS team works via what Kaattari calls a "fluorescence-inhibitory, spectroscopic kind of assay."

In the sensor used on the Elizabeth River and Yorktown Creek, antibodies designed to recognize a specific class of PAHs were joined with a dye that glows when exposed to fluorescent light. The intensity of that light is in turn recorded as a voltage. The sensor also houses tiny plastic beads that are coated with what Spier calls a "PAH surrogate" -- a PAH derivative that retains the shape that the antibody recognizes as a PAH molecule.

When water samples with low PAH levels are added to the sensor chamber (which is already flooded with a solution of anti-PAH antibodies), the antibodies have little to bind with and are thus free to attach to the surrogate-coated beads, providing a strong fluorescent glow and electric signal. In water samples with high PAH concentrations, on the other hand, a large fraction of the antibodies bind with the environmental contaminants. That leaves fewer to attach to the surrogate-coated beads, which consequently provides a fainter glow and a weaker electric signal.
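
The read-out of such a competitive assay can be sketched as a standard-curve inversion: the instrument is first calibrated with samples of known PAH concentration, and the signal from an unknown sample is then interpolated on that curve. The calibration points below are invented for illustration and are not data from the VIMS instrument.

```python
# Minimal sketch of reading out a competitive immunoassay (illustrative calibration
# numbers, not data from the VIMS biosensor): the fluorescence signal falls as PAH
# concentration rises, so an unknown sample is quantified by inverting a standard curve.

# (PAH concentration in parts per billion, relative fluorescence signal), assumed values
STANDARD_CURVE = [(0.0, 1.00), (0.5, 0.85), (1.0, 0.70), (2.0, 0.48), (4.0, 0.25), (8.0, 0.12)]

def concentration_from_signal(signal: float) -> float:
    """Invert the monotonically decreasing standard curve by linear interpolation."""
    points = STANDARD_CURVE
    if signal >= points[0][1]:
        return points[0][0]
    if signal <= points[-1][1]:
        return points[-1][0]
    for (c_lo, s_lo), (c_hi, s_hi) in zip(points, points[1:]):
        if s_hi <= signal <= s_lo:
            fraction = (s_lo - signal) / (s_lo - s_hi)
            return c_lo + fraction * (c_hi - c_lo)

print(f"Estimated PAH concentration: {concentration_from_signal(0.60):.1f} ppb")
```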

During the Elizabeth River study, the biosensor measured PAH concentrations that ranged from 0.3 to 3.2 parts per billion, with higher PAH levels closer to the dredge site. In Yorktown Creek, the biosensor showed that PAH levels in runoff peaked 1 to 2 hours after the rain started, with peak concentration of 4.4 parts per billion.

Comparison of the biosensor's field readings with later readings from a mass spectrometer at VIMS showed that the biosensor is just as accurate as the more expensive, slower, and laboratory-bound machine.

A valuable field tool

Spier says "Using the biosensor allowed us to quickly survey an area of almost 900 acres around the Elizabeth River dredge, and to provide information about the size and intensity of the contaminant plume to engineers monitoring the dredging from shore. If our results had shown elevated concentrations, they could have halted dredging and put remedial actions in place."

Unger adds "measuring data in real-time also allowed us to guide the collection of large-volume water samples right from the boat. We used these samples for later analysis of specific PAH compounds in the lab. This saved time, effort, and money by keeping us from having to analyze samples that might contain PAHs at levels below our detection limit."

"Biosensors have their constraints and optimal operating conditions," says Kaattari, "but their promise far outweighs any limitations. The primary advantages of our biosensor are its sensitivity, speed, and portability. These instruments are sure to have a myriad of uses in future environmental monitoring and management."

One promising use of the biosensor is for early detection and tracking of oil spills. "If biosensors were placed near an oil facility and there was a spill, we would know immediately," says Kaattari. "And because we could see concentrations increasing or decreasing in a certain pattern, we could also monitor the dispersal over real time."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Virginia Institute of Marine Science. The original article was written by David Malmquist.

Journal Reference:

Candace R. Spier, George G. Vadas, Stephen L. Kaattari, Michael A. Unger. Near real-time, on-site, quantitative analysis of PAHs in the aqueous environment using an antibody-based biosensor. Environmental Toxicology and Chemistry, 2011; DOI: 10.1002/etc.546


Sunday, 29 May 2011

Dawn spacecraft reaches milestone approaching asteroid Vesta

ScienceDaily (May 4, 2011) — NASA's Dawn spacecraft has reached its official approach phase to the asteroid Vesta and will begin using cameras for the first time to aid navigation for an expected July 16 orbital encounter. The large asteroid is known as a protoplanet -- a celestial body that almost formed into a planet.

At the start of this three-month final approach to this massive body in the asteroid belt, Dawn is 1.21 million kilometers (752,000 miles) from Vesta, or about three times the distance between Earth and the moon. During the approach phase, the spacecraft's main activity will be thrusting with a special, hyper-efficient ion engine that uses electricity to ionize and accelerate xenon. The 12-inch-wide ion thrusters deliver less thrust than conventional engines, but they can operate for years during the mission and offer far greater total capability to change the spacecraft's velocity.
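
The advantage of ion propulsion can be made concrete with the standard rocket equation: for a given propellant load, the achievable change in velocity grows in proportion to the engine's specific impulse, which is roughly ten times higher for an ion thruster than for a chemical engine. The specific impulses and masses below are typical assumed values for illustration, not Dawn's published figures.

```python
# Rough ion-versus-chemical comparison using the Tsiolkovsky rocket equation
# (typical, assumed specific impulses and masses; not parameters published for Dawn).

import math

G0 = 9.81  # m/s^2, standard gravity

def delta_v(isp_seconds: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Achievable change in velocity, in m/s, from the rocket equation."""
    return isp_seconds * G0 * math.log(wet_mass_kg / dry_mass_kg)

# Assumed numbers: a 1,200 kg spacecraft carrying 400 kg of propellant.
wet, dry = 1200.0, 800.0
print(f"chemical engine (Isp ~ 300 s): {delta_v(300, wet, dry) / 1000:.1f} km/s")
print(f"ion engine      (Isp ~ 3000 s): {delta_v(3000, wet, dry) / 1000:.1f} km/s")
```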

"We feel a little like Columbus approaching the shores of the New World," said Christopher Russell, Dawn principal investigator, based at the University of California in Los Angeles (UCLA). "The Dawn team can't wait to start mapping this Terra Incognita."

Dawn previously navigated by measuring the radio signal between the spacecraft and Earth, and used other methods that did not involve Vesta. But as the spacecraft closes in on its target, navigation requires more precise measurements. By analyzing where Vesta appears relative to stars, navigators will pin down its location and enable engineers to refine the spacecraft's trajectory. Using its ion engine to match Vesta's orbit around the sun, the spacecraft will spiral gently into orbit around the asteroid. When Dawn gets approximately 16,000 kilometers (9,900 miles) from Vesta, the asteroid's gravity will capture the spacecraft in orbit.

"After more than three-and-a-half years of interplanetary travel, we are finally closing in on our first destination," said Marc Rayman, Dawn's chief engineer, at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "We're not there yet, but Dawn will soon bring into focus an entire world that has been, for most of the two centuries scientists have been studying it, little more than a pinpoint of light."

Scientists will search the framing camera images for possible moons around Vesta. None of the images from ground-based and Earth-orbiting telescopes have seen any moons, but Dawn will give scientists much more detailed images to determine whether small objects have gone undiscovered.

The gamma ray and neutron detector instrument also will gather information on cosmic rays during the approach phase, providing a baseline for comparison when Dawn is much closer to Vesta. Simultaneously, Dawn's visible and infrared mapping spectrometer will take early measurements to ensure it is calibrated and ready when the spacecraft enters orbit around Vesta.

Dawn's odyssey, which will take it on a journey of 4.8 billion kilometers (3 billion miles), began on Sept. 27, 2007, with its launch from Cape Canaveral Air Force Station in Florida. It will stay in orbit around Vesta for one year. After another long cruise phase, Dawn will arrive at its second destination, an even more massive body in the asteroid belt, called Ceres, in 2015.

These two icons of the asteroid belt will help scientists unlock the secrets of our solar system's early history. The mission will compare and contrast the two giant bodies, which were shaped by different forces. Dawn's science instrument suite will measure surface composition, topography and texture. In addition, the Dawn spacecraft will measure the tug of gravity from Vesta and Ceres to learn more about their internal structures.

The Dawn mission to Vesta and Ceres is managed by JPL for NASA's Science Mission Directorate in Washington. Dawn is a project of SMD's Discovery Program, which is managed by NASA's Marshall Space Flight Center in Huntsville, Ala. UCLA is responsible for overall Dawn mission science. Orbital Sciences Corp. of Dulles, Va., designed and built the Dawn spacecraft. The framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research in Katlenburg-Lindau in Germany, with significant contributions by the German Aerospace Center (DLR) Institute of Planetary Research in Berlin, and in coordination with the Institute of Computer and Communication Network Engineering in Braunschweig. The framing camera project is funded by NASA, the Max Planck Society and DLR.

JPL is a division of the California Institute of Technology, Pasadena.

For more information about Dawn, visit: http://www.nasa.gov/dawn and http://dawn.jpl.nasa.gov

To learn more about Dawn's approach phase, read the latest Dawn Journal at http://blogs.jpl.nasa.gov/2011/05/dawn-begins-its-vesta-phase/

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.


Mars Express sees deep fractures on Mars

ScienceDaily (May 6, 2011) — Newly released images from the European Space Agency's Mars Express show Nili Fossae, a system of deep fractures around the giant Isidis impact basin. Some of these incisions into the martian crust are up to 500 m deep and probably formed at the same time as the basin.

Nili Fossae is a 'graben' system on Mars, northeast of the Syrtis Major volcanic province, on the northwestern edge of the giant Isidis impact basin. A graben is the block of lowered terrain between two parallel faults or fractures that drops down when tectonic forces pull the area apart. The Nili Fossae system contains numerous graben concentrically oriented around the edges of the basin.

It is thought that flooding of the basin with basaltic lava after the impact that created it caused the basin floor to subside, adding stress to the planet's crust; that stress was released through the formation of the fractures.

A strongly eroded impact crater is visible to the bottom right of the image. It measures about 12 km across and exhibits an ejecta blanket, formed by material thrown out during the impact. Two landslides have taken place to the west of the crater. Whether they were a direct result of the impact or occurred later is unknown.

A smaller crater, measuring only 3.5 km across, can be seen to the left of centre in the image; this one does not exhibit any ejecta blanket material, which has either been eroded away or buried.

The surface material to the top left of the image is much darker than the rest of the area. It is most likely formed of basaltic rock or volcanic ash originating from the Syrtis Major region. Such lava blankets form when large amounts of low-viscosity basaltic magma flow across long distances before cooling and solidifying. On Earth, the same phenomenon can be seen in the Deccan Traps in India.

Nili Fossae interests planetary scientists because observations taken with telescopes on Earth and published in 2009 have shown that there is a significant enhancement in Mars' atmospheric methane over this area, suggesting that methane may be being produced there. Its origin remains mysterious, however, and could be geological or perhaps even biological.

As a result, understanding the origin of methane on Mars is high on the priority list, and in 2016 ESA and NASA plan to launch the ExoMars Trace Gas Orbiter to investigate further. Nili Fossae will be observed with great interest.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by European Space Agency.


NASA's Galileo reveals magma 'ocean' beneath surface of Jupiter's moon

ScienceDaily (May 12, 2011) — A new analysis of data from NASA's Galileo spacecraft has revealed that beneath the surface of Jupiter's volcanic moon Io is an "ocean" of molten or partially molten magma.

The finding, from a study published May 13 in the journal Science, is the first direct confirmation of such a magma layer on Io and explains why the moon is the most volcanic object known in the solar system. The research was conducted by scientists from UCLA, UC Santa Cruz and the University of Michigan-Ann Arbor.

"The hot magma in Io's ocean is millions of times better at conducting electricity than rocks typically found on the Earth's surface," said the study's lead author, Krishan Khurana, a former co-investigator on Galileo's magnetometer team and a research geophysicist with UCLA's Institute of Geophysics and Planetary Physics. "Just like the waves beamed from an airport metal detector bounce off metallic coins in your pocket, betraying their presence to the detector, Jupiter's rotating magnetic field continually bounces off the molten rocks in Io's interior. The bounced signal can be detected by a magnetometer on a passing spacecraft.

"Scientists are excited that we finally understand where Io's magma is coming from and have an explanation for some of the mysterious signatures we saw in some of Galileo's magnetic field data," Khurana added. "It turns out Io was continually giving off a 'sounding signal' in Jupiter's rotating magnetic field that matched what would be expected from molten or partially molten rocks deep beneath the surface."

Io's volcanoes are the only known active magma volcanoes in the solar system other than those on Earth; Io produces about 100 times more lava each year than all of Earth's volcanoes. While those on Earth occur in localized hotspots like the "Ring of Fire" around the Pacific Ocean, Io's volcanoes are distributed all over its surface. A global magma ocean lying beneath about 20 to 30 miles (30 to 50 km) of Io's crust helps explain the moon's activity.

"It has been suggested that both the Earth and moon may have had similar magma oceans billions of years ago, at the time of their formation, but they have long since cooled," said Torrence Johnson, who was Galileo's project scientist, based at NASA's Jet Propulsion Laboratory in Pasadena, Calif., and who was not directly involved in the study. "Io's volcanism informs us how volcanoes work and provides a window in time to styles of volcanic activity that may have occurred on the Earth and moon during their earliest history."

Io's volcanoes were discovered by NASA's Voyager spacecraft in 1979. The energy for the volcanic activity comes from the squeezing and stretching of the moon by Jupiter's gravity as Io orbits the immense planet, the largest in the solar system.

Galileo was launched in 1989 and began orbiting Jupiter in 1995. After a successful mission, the spacecraft was intentionally sent into Jupiter's atmosphere in 2003. The unexplained signatures appeared in the magnetic-field data taken from Galileo fly-bys of Io in October 1999 and February 2000, during the final phase of the mission.

"But at the time, models of the interaction between Io and Jupiter's immense magnetic field, which bathes the moon in charged particles, were not yet sophisticated enough for us to understand what was going on in Io's interior," said study co-author Xianzhe Jia of the University of Michigan.

Recent work in mineral physics showed that a group of what are known as "ultramafic" rocks become capable of carrying substantial electrical current when melted. These rocks are igneous in origin -- that is, they are formed through the cooling of magma. On Earth, ultramafic rocks are believed to derive from the mantle. The finding led Khurana and colleagues to test the hypothesis that the strange signature was produced by an electrical current flowing in a molten or partially molten layer of this kind of rock.
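A rough sense of why a molten, conducting layer produces a detectable magnetic response comes from the electromagnetic skin depth, the distance over which an oscillating field is attenuated inside a conductor. The sketch below uses the standard skin-depth formula; the conductivity value for molten rock and the period of Jupiter's field as seen from Io are ballpark assumptions for illustration, not figures from the study.

```python
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m

def skin_depth_m(conductivity_s_per_m, period_s):
    """Skin depth for a field oscillating with the given period."""
    omega = 2 * math.pi / period_s
    return math.sqrt(2.0 / (MU0 * conductivity_s_per_m * omega))

# Ballpark assumptions: molten silicate conductivity of a few S/m, and a
# roughly 13-hour period for Jupiter's rotating field as experienced by Io.
sigma = 5.0                      # S/m (assumed)
period = 13 * 3600.0             # s (approximate)

print(f"skin depth ~ {skin_depth_m(sigma, period) / 1000:.0f} km")
# A skin depth of a few tens of kilometres is comparable to the inferred
# magma layer thickness (more than 50 km), so such a layer can carry the
# induced currents that 'bounce' the field back toward a passing magnetometer.
```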

Tests showed that the signatures detected by Galileo were consistent with a rock like lherzolite, an igneous rock rich in silicates of magnesium and iron found, for example, in Spitzbergen, Norway. The magma ocean layer on Io appears to be more than 30 miles (50 km) thick, making up at least 10 percent of the moon's mantle by volume. The blistering temperature of the magma ocean probably exceeds 2,200 degrees Fahrenheit (1,200 degrees Celsius).

Additional co-authors on the paper are Christopher T. Russell, professor of geophysics and space physics in UCLA's Department of Earth and Space Sciences; Margaret Kivelson, professor emeritus of space physics in UCLA's Department of Earth and Space Sciences; Gerald Schubert, professor of geophysics and planetary physics in UCLA's Department of Earth and Space Sciences; and Francis Nimmo, associate professor of Earth and planetary sciences at UC Santa Cruz.

The Galileo mission was managed by the Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, for NASA.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Los Angeles. The original article was written by Jia-Rui Cook.

Journal Reference:

Krishan K. Khurana, Xianzhe Jia, Margaret G. Kivelson, Francis Nimmo, Gerald Schubert, Christopher T. Russell. Evidence of a Global Magma Ocean in Io’s Interior. Science, 2011; DOI: 10.1126/science.1201425


New calculations on blackbody energy set the stage for clocks with unprecedented accuracy

ScienceDaily (May 14, 2011) — A team of physicists from the United States and Russia announced that it has developed a means for computing, with unprecedented accuracy, a tiny, temperature-dependent source of error in atomic clocks. Although small, the correction could represent a big step towards atomic timekeepers' longstanding goal of a clock with a precision equivalent to one second of error every 32 billion years -- longer than the age of the universe.

Precision timekeeping is one of the bedrock technologies of modern science and technology. It underpins precise navigation on Earth and in deep space, synchronization of broadband data streams, precision measurements of motion, forces and fields, and tests of the constancy of the laws of nature over time.

"Using our calculations, researchers can account for a subtle effect that is one of the largest contributors to error in modern atomic timekeeping," says lead author Marianna Safronova of the University of Delaware, the first author of the presentation. "We hope that our work will further improve upon what is already the most accurate measurement in science: the frequency of the aluminum quantum-logic clock," adds co-author Charles Clark, a physicist at the Joint Quantum Institute, a collaboration of the National Institute of Standards and Technology (NIST) and the University of Maryland.

The paper was presented at the 2011 Conference on Lasers and Electro-Optics in Baltimore, Md.

The team studied an effect that is familiar to anyone who has basked in the warmth of a campfire: heat radiation. Any object at any temperature, whether the walls of a room, a person, the Sun or a hypothetical perfect radiant heat source known as a "black body," emits heat radiation. Even a completely isolated atom senses the temperature of its environment. Just as heat swells the air in a hot-air balloon, so-called "blackbody radiation" (BBR) enlarges the size of the electron clouds within the atom, though to a much lesser degree -- by one part in a hundred trillion, a size that poses a severe challenge to precision measurement.

This effect comes into play in the world's most precise atomic clock, recently built by NIST researchers. This quantum-logic clock, based on atomic energy levels in the aluminum ion, Al+, has an uncertainty of 1 second per 3.7 billion years -- a fractional frequency uncertainty of 8.6 x 10^-18 -- due to a number of small effects that shift the actual tick rate of the clock.

To correct for the BBR shift, the team used the quantum theory of atomic structure to calculate the BBR shift of the atomic energy levels of the aluminum ion. To gain confidence in their method, they successfully reproduced the energy levels of the aluminum ion, and also compared their results against a predicted BBR shift in a strontium ion clock recently built in the United Kingdom. Their calculation reduces the relative uncertainty due to room-temperature BBR in the aluminum ion to 4 x 10^-19, or better than 18 decimal places, and a factor of 7 better than previous BBR calculations.
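Those headline figures can be sanity-checked with simple unit conversion: a fractional frequency uncertainty is roughly one divided by the clock's error-free run time in seconds. A quick sketch using only the numbers quoted above:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

def fractional_uncertainty(years_per_second_of_error):
    """One second of accumulated error over the given span of years."""
    return 1.0 / (years_per_second_of_error * SECONDS_PER_YEAR)

print(fractional_uncertainty(3.7e9))   # ~8.6e-18, the Al+ clock's current uncertainty
print(fractional_uncertainty(32e9))    # ~1e-18, the "age of the universe" goal
# The new BBR calculation pushes that particular error term down to ~4e-19,
# so it is no longer the limiting contribution.
```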

Current aluminum-ion clocks have larger sources of uncertainty than the BBR effect, but next-generation aluminum clocks are expected to greatly reduce those larger uncertainties and benefit substantially from better knowledge of the BBR shift.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Institute of Standards and Technology (NIST).


Mixing fluids efficiently in confined spaces: Let the fingers do the working

ScienceDaily (May 13, 2011) — Getting two fluids to mix in small or confined spaces is a big problem in many industries where, for instance, the introduction of one fluid can help extract another -- as when water pumped underground releases oil trapped in porous rock -- or where the mixing of liquids is the essential point of the process. A key example of the latter is microfluidics technology, which allows for the controlled manipulation of fluids in minuscule channels often only a few hundred nanometers wide.

Microfluidic devices were first introduced in the 1980s and for many years were best known for their use in ink-jet printers, but have since been introduced in other fields, including the chemical analysis of blood or other sera in lab-on-a-chip technologies. These devices -- usually not much larger than a stick of chewing gum -- sometimes rely on nano-sized moving components, the geometry of the grooved channels or pulsed injections to induce a mixing of the fluids. But researchers in MIT's Department of Civil and Environmental Engineering suggest that a simpler method might be equally, if not more, effective.

"Getting two fluids to mix in a very tight space is difficult because there's not much room for a disorderly flow," said Professor Ruben Juanes, the ARCO Associate Professor in Energy Studies and principal investigator on the research. "But with two fluids of highly contrasting viscosity, the thinner fluid naturally creates disorder, which proves to be a marvelously efficient means of mixing."

In an analysis published online May 12 in Physical Review Letters (PRL), the researchers show that the injection of a thin or low-viscosity fluid into a much more viscous fluid (think of water spurting into molasses) will cause the two fluids to mix very quickly via a physical process known as viscous fingering. The thinner liquid, say the researchers, will form fingers as it enters the thicker liquid, and those fingers will form other fingers, and so on until the two liquids have mixed uniformly.

They also found that for maximum mixing to occur quickly, the ideal ratio of the viscosity of any two fluids depends on the speed at which the thinner liquid is injected into the thicker one.

The research team of Juanes, postdoctoral associate Luis Cueto-Felgueroso and graduate students Birendra Jha and Michael Szulczewski conducted a series of controlled experiments using mixtures of water and glycerol, a colorless liquid generally about a thousand times more viscous than water. By varying the viscosity of the liquids and the velocity of the injection flows, Jha was able to create a mathematical model of the process and use it to determine the best viscosity ratio for a particular velocity. He is lead author on the PRL paper.

"It's been known for a very long time that a low viscosity fluid will finger through the high viscosity fluid," said Juanes. "What was not known is how this affects the mixing rate of the two fluids. For instance, in the petroleum industry, people have developed increasingly refined models of how quickly the low viscosity fluid will reach the production well, but know little about how it will mix once it makes contact with the oil."

Similarly, Juanes said, in microfluidics technology the use of fluids of different viscosities has not been seriously proposed as a mixing mechanism, but the new study indicates it could work very efficiently in the minuscule channels.

"We can now say that on average, the viscosity of the fluid injected should be about 10 times lower than that of the fluid into which it is injected," said Juanes. "If the contrast is greater than 10, then the injection should be done more slowly to achieve the fastest maximum mixing. Otherwise, the low viscosity fluid will create a single channel through the thicker fluid, which is not ideal."

Cueto-Felgueroso said a similar process is at work in the engraved channels of a microfluidic device and in subsurface rock containing oil. "Mixing fluids at small scales or velocities is difficult because you can't rely on turbulence: it would be hard to stir milk into your coffee if you were using a microscopic cup," Cueto-Felgueroso said. "With viscous fingering, you let the fluids do the job of stirring."

This work was funded by the Italian energy company, Eni, and the ARCO Chair in Energy Studies.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Massachusetts Institute of Technology, Department of Civil and Environmental Engineering. The original article was written by Denise Brehm.

Journal Reference:

Birendra Jha, Luis Cueto-Felgueroso, Ruben Juanes. Fluid Mixing from Viscous Fingering. Physical Review Letters, 2011; 106 (19) DOI: 10.1103/PhysRevLett.106.194502


Video: German Researchers Smash Robot Arm With a Baseball Bat


Come On, Human, I Can Take It (Image credit: DLR)

The robotics engineers at DLR, the German Aerospace Center, have a history of violent behavior with their mechanical creations — earlier this year, we saw them smash a robot’s hand with a hammer, and last year we watched brave engineers give a robot a knife and let themselves be stabbed. Now they’ve taken to whaling on the ‘bots with a baseball bat.

The point is to test the DLR Hand Arm System, an ultra-tough system with 52 motors and joints that can absorb energy the way human ones do. The robot’s toughness could prevent breakdowns in industrial settings, home use or any other place where a robot might bump into something.

After a whack with a baseball bat, the arm worked just as well as before, gently touching a yellow ball.

The arm consists of newly designed floating spring joints, which help dissipate energy better than a rigid structure could. They have two motors, one to control the joint and another small one to adjust its stiffness. The hand also has 38 tendons tougher than Kevlar, according to IEEE Spectrum. The tendons are attached to a spring-based elastic mechanism, which also allows the fingers to release and store energy.
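The description above -- a main motor driving each joint and a second small motor adjusting its stiffness through an elastic element -- is the general recipe for a variable-stiffness, series-elastic joint. The following is only a generic sketch of that idea (all names and numbers are illustrative, not DLR's actual design or controller): the spring sits between motor and link, so a bat strike deflects the spring and stores energy rather than shocking the gearbox.

```python
# Generic variable-stiffness joint model (illustrative only, not DLR's design).
# Spring torque = k * (theta_motor - theta_link); a second actuator changes k.

class VariableStiffnessJoint:
    def __init__(self, stiffness_nm_per_rad: float):
        self.k = stiffness_nm_per_rad   # set by the small "stiffness" motor
        self.theta_motor = 0.0          # setpoint of the main position motor

    def set_stiffness(self, k: float) -> None:
        self.k = k

    def torque_on_link(self, theta_link: float) -> float:
        """Spring torque on the link; an impact that moves the link deflects
        the spring instead of back-driving the motor and gearbox."""
        return self.k * (self.theta_motor - theta_link)

    def stored_energy(self, theta_link: float) -> float:
        deflection = self.theta_motor - theta_link
        return 0.5 * self.k * deflection ** 2   # energy absorbed by the spring

joint = VariableStiffnessJoint(stiffness_nm_per_rad=50.0)
joint.theta_motor = 0.2
# A strike knocks the link 0.5 rad past the setpoint:
print(joint.torque_on_link(theta_link=0.7))    # bounded spring torque, not a rigid shock
print(joint.stored_energy(theta_link=0.7))     # energy the elastic element can later release
```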

IEEE Spectrum explains in more detail, reporting on the robot from the IEEE International Conference on Robotics and Automation in Shanghai.

Watch the arm take a beating below, to the apparent glee of its human handler.

[IEEE]



Saturday, 28 May 2011

Exotic behavior when mechanical devices reach the nanoscale

ScienceDaily (May 15, 2011) — Mechanical resonators are extensively used in high-tech industry, to mark time in electronic components, and to stabilize radio transmissions. Most mechanical resonators damp (slow down) in a well-understood linear manner, but ground-breaking work by Prof. Adrian Bachtold and his research group at the Catalan Institute of Nanotechnology has shown that resonators formed from nanoscale graphene and carbon nanotubes exhibit nonlinear damping, opening up exciting possibilities for super-sensitive detectors of force or mass.

In an article recently published in Nature Nanotechnology Prof. Bachtold and his co-researchers describe how they formed nano-scale resonators by suspending tiny graphene sheets or carbon nanotubes and clamping them at each end. These devices, similar to guitar strings, can be set to vibrate at very specific frequencies.

In all mechanical resonators studied to date, from large objects several metres in size down to tiny components just a few tens of nanometers in length, damping has always been observed to occur in a highly predictable, linear manner. However, Prof. Bachtold's research demonstrates that this linear damping paradigm breaks down for resonators with critical dimensions on the atomic scale. In particular, they have shown that the damping is strongly nonlinear for resonators based on nanotubes and graphene, a characteristic that facilitates amplification of signals and dramatic improvements in sensitivity.
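In a linearly damped resonator the energy loss rate is proportional to velocity, so the ringdown decays exponentially at a fixed rate. A nonlinear damping term grows with the amplitude of the motion, and the decay is no longer a simple exponential. The sketch below contrasts the two cases with a generic amplitude-dependent damping term; the functional form and all parameter values are illustrative assumptions, not the model fitted in the Nature Nanotechnology paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA0 = 2 * np.pi * 1.0   # resonance frequency (arbitrary units)
GAMMA = 0.02               # linear damping rate (assumed)
ETA = 0.5                  # amplitude-dependent damping coefficient (assumed)

def linear(t, y):
    x, v = y
    return [v, -OMEGA0**2 * x - GAMMA * v]

def nonlinear(t, y):
    # Extra damping term proportional to x^2 * v: dissipation grows with amplitude.
    x, v = y
    return [v, -OMEGA0**2 * x - GAMMA * v - ETA * x**2 * v]

t_eval = np.linspace(0, 200, 4000)
for label, rhs in [("linear", linear), ("nonlinear", nonlinear)]:
    sol = solve_ivp(rhs, (0, 200), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
    # Crude amplitude estimate over the last 10% of the ringdown:
    tail = np.abs(sol.y[0][int(0.9 * len(t_eval)):]).max()
    print(f"{label:9s} damping: final amplitude ~ {tail:.3f}")
# With nonlinear damping the decay is fast while the amplitude is large and
# slows as the amplitude shrinks, so the ringdown is not a simple exponential.
```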

The finding has profound consequences. Damping is central to the physics of nanoelectromechanical resonators, lying at the core of quantum and sensing experiments. Therefore many predictions that have been made for nanoscale electro-mechanical devices now need to be revisited when considering nanotube and graphene resonators.

This new insight into the dynamics of nano-scale resonators will also enable dramatic improvements in the performance of numerous devices. Prof. Bachtold's group has already achieved a new record quality factor for graphene resonators and ultra-weak force sensing with a nanotube resonator.

The work is particularly timely because an increasing number of research groups around the world with diverse backgrounds are choosing to study nanotube/graphene resonators, which have a number of uniquely useful properties.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Institut Català de Nanotecnologia, via EurekAlert!, a service of AAAS.

Journal Reference:

A. Eichler, J. Moser, J. Chaste, M. Zdrojek, I. Wilson-Rae, A. Bachtold. Nonlinear damping in mechanical resonators made from carbon nanotubes and graphene. Nature Nanotechnology, 2011; DOI: 10.1038/NNANO.2011.71


Robots learn to share: Why we go out of our way to help one another

ScienceDaily (May 4, 2011) — Using simple robots to simulate genetic evolution over hundreds of generations, Swiss scientists provide quantitative proof of kin selection and shed light on one of the most enduring puzzles in biology: Why do most social animals, including humans, go out of their way to help each other? In the online, open access journal PLoS Biology, EPFL robotics professor Dario Floreano teams up with University of Lausanne biologist Laurent Keller to weigh in on the oft-debated question of the evolution of altruism genes.

Altruism, the sacrificing of individual gains for the greater good, appears at first glance to go against the notion of "survival of the fittest." But altruistic gene expression is found in nature and is passed on from one generation to the next. Worker ants, for example, are sterile and make the ultimate altruistic sacrifice by not transmitting their genes at all in order to ensure the survival of the queen's genetic makeup. The sacrifice of the individual in order to ensure the survival of a relative's genetic code is known as kin selection. In 1964, biologist W.D. Hamilton proposed a precise set of conditions under which altruistic behavior may evolve, now known as Hamilton's rule of kin selection. Here's the gist: If an individual family member shares food with the rest of the family, it reduces his or her personal likelihood of survival but increases the chances of family members passing on their genes, many of which are common to the entire family. Hamilton's rule simply states that whether or not an organism shares its food with another depends on its genetic closeness (how many genes it shares) with the other organism.
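In symbols, Hamilton's rule says an altruism gene is favoured when r x b > c: the relatedness r between actor and recipient, times the benefit b to the recipient, must exceed the cost c to the actor. A minimal numeric sketch (the relatedness coefficients are the textbook values for the group types used in the study; the cost and benefit figures are made up for illustration):

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an altruistic act can spread when r * b > c."""
    return relatedness * benefit > cost

# Textbook coefficients of relatedness; cost/benefit values are illustrative.
groups = {"clones": 1.0, "siblings": 0.5, "cousins": 0.125, "non-relatives": 0.0}
cost, benefit = 1.0, 3.0   # sharing costs the forager 1 unit, gives the recipient 3

for name, r in groups.items():
    print(f"{name:13s} r={r:5.3f}  share food: {altruism_favored(r, benefit, cost)}")
# -> clones and siblings share (r*b of 3.0 and 1.5 exceed the cost of 1.0),
#    cousins and non-relatives do not.
```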

Testing the evolution of altruism using quantitative studies in live organisms has been largely impossible because experiments need to span hundreds of generations and there are too many variables. However, Floreano's robots evolve rapidly using simulated gene and genome functions and allow scientists to measure the costs and benefits associated with the trait. Additionally, Hamilton's rule has long been a subject of much debate because its equation seems too simple to be true. "This study mirrors Hamilton's rule remarkably well to explain when an altruistic gene is passed on from one generation to the next, and when it is not," says Keller.

Previous experiments by Floreano and Keller showed that foraging robots doing simple tasks, such as pushing seed-like objects across the floor to a destination, evolve over multiple generations. Those robots not able to push the seeds to the correct location are selected out and cannot pass on their code, while robots that perform comparatively better see their code reproduced, mutated, and recombined with that of other robots into the next generation -- a minimal model of natural selection. The new study by EPFL and UNIL researchers adds a novel dimension: once a foraging robot pushes a seed to the proper destination, it can decide whether it wants to share it or not. Evolutionary experiments lasting 500 generations were repeated for several scenarios of altruistic interaction -- how much is shared and to what cost for the individual -- and of genetic relatedness in the population. The researchers created groups of relatedness that, in the robot world, would be the equivalent of complete clones, siblings, cousins and non-relatives. The groups that shared along the lines of Hamilton's rule foraged better and passed their code onto the next generation.

The quantitative results matched the predictions of Hamilton's rule surprisingly well, even in the presence of multiple interactions. Hamilton's original theory takes a limited and isolated vision of gene interaction into account, whereas the genetic simulations run in the foraging robots integrate effects of one gene on multiple other genes, with Hamilton's rule still holding true. The findings are already proving useful in swarm robotics. "We have been able to take this experiment and extract an algorithm that we can use to evolve cooperation in any type of robot," explains Floreano. "We are using this altruism algorithm to improve the control system of our flying robots and we see that it allows them to effectively collaborate and fly in swarm formation more successfully."

This research was funded by the Swiss National Science Foundation, the European Commission ECAgents and Swarmanoids projects, and the European Research Council.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.

Journal Reference:

Markus Waibel, Dario Floreano, Laurent Keller. A Quantitative Test of Hamilton's Rule for the Evolution of Altruism. PLoS Biology, 2011; 9 (5): e1000615 DOI: 10.1371/journal.pbio.1000615


Doppler effect found even at molecular level -- 169 years after its discovery

ScienceDaily (May 11, 2011) — Whether they know it or not, anyone who's ever gotten a speeding ticket after zooming by a radar gun has experienced the Doppler effect -- a measurable shift in the frequency of radiation based on the motion of an object, which in this case is your car doing 45 miles an hour in a 30-mph zone.

But for the first time, scientists have experimentally shown a different version of the Doppler effect at a much, much smaller level -- the rotation of an individual molecule. Prior to this, such an effect had been theorized, but it took a complex experiment with a synchrotron to prove it is real.

"Some of us thought of this some time ago, but it's very difficult to show experimentally," said T. Darrah Thomas, a professor emeritus of chemistry at Oregon State University and part of an international research team that just announced its findings in Physical Review Letters, a professional journal.

Most illustrations of the Doppler effect are called "translational," meaning the change in frequency of light or sound when one object moves away from the other in a straight line, like a car passing a radar gun. The basic concept has been understood since an Austrian physicist named Christian Doppler first proposed it in 1842.

But a similar effect can be observed when something rotates as well, scientists say.

"There is plenty of evidence of the rotational Doppler effect in large bodies, such as a spinning planet or galaxy," Thomas said. "When a planet rotates, the light coming from it shifts to higher frequency on the side spinning toward you and a lower frequency on the side spinning away from you. But this same basic force is at work even on the molecular level."

In astrophysics, this rotational Doppler effect has been used to determine the rotational velocity of things such as planets. But in the new study, scientists from Japan, Sweden, France and the United States provided the first experimental proof that the same thing happens even with molecules.

At this tiny level, the study showed, the rotational Doppler effect can be even more important than the shift caused by the linear motion of the molecules.

The findings are expected to have application in a better understanding of molecular spectroscopy, in which the radiation emitted from molecules is used to study their makeup and chemical properties. It is also relevant to the study of high energy electrons, Thomas said.

"There are some studies where a better understanding of this rotational Doppler effect will be important," Thomas said. "Mostly it's just interesting. We've known about the Doppler effect for a very long time but until now have never been able to see the rotational Doppler effect in molecules."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Oregon State University, via EurekAlert!, a service of AAAS.

Journal Reference:

T. D. Thomas, E. Kukk, K. Ueda, T. Ouchi, K. Sakai, T. X. Carroll, C. Nicolas, O. Travnikova, and C. Miron. Experimental observation of rotational Doppler broadening in a molecular system. Physical Review Letters, Accepted Apr 12, 2011


Proton dripping tests a fundamental force in nature

ScienceDaily (May 11, 2011) — Like gravity, the strong interaction is a fundamental force of nature. It is the essential "glue" that holds atomic nuclei -- composed of protons and neutrons -- together to form atoms, the building blocks of nearly all the visible matter in the universe. Despite its prevalence in nature, researchers are still searching for the precise laws that govern the strong force. However, the recent discovery of an extremely exotic, short-lived nucleus called fluorine-14 in laboratory experiments may indicate that scientists are gaining a better grasp of these rules.

Fluorine-14 comprises nine protons and five neutrons. It exists for a tiny fraction of a second before a proton "drips" off, leaving an oxygen-13 nucleus behind. A team of researchers led by James Vary, a professor of physics at Iowa State University, first predicted the properties of fluorine-14 with the help of scientists in Lawrence Berkeley National Laboratory's (Berkeley Lab's) Computational Research Division, as well as supercomputers at the National Energy Research Scientific Computing Center (NERSC) and the Oak Ridge Leadership Computing Facility. These fundamental predictions served as motivations for experiments conducted by Vladilen Goldberg's team at Texas A&M's Cyclotron Institute, which achieved the first sightings of fluorine-14.

"This is a true testament to the predictive power of the underlying theory," says Vary. "When we published our theory a year ago, fluorine-14 had never been observed experimentally. In fact, our theory helped the team secure time on their newly commissioned cyclotron to conduct their experiment. Once their work was done, they saw virtually perfect agreement with our theory."

He notes that the ability to reliably predict the properties of exotic nuclei with supercomputers helps pave the way for researchers to cost-effectively improve designs of nuclear reactors, to predict results from next generation accelerator experiments that will produce rare and exotic isotopes, as well as to better understand phenomena such as supernovae and neutron stars.

"We will never be able to travel to a neutron star and study it up close, so the only way to gain insights into its behavior is to understand how exotic nuclei like fluorine-14 behave and scale up," says Vary.

Developing a Computer Code to Simulate the Strong Force

Including fluorine-14, researchers have so far discovered about 3,000 nuclei in laboratory experiments and suspect that 6,000 more could still be created and studied. Understanding the properties of these nuclei will give researchers insights into the strong force, which could in turn be applied to develop and improve future energy sources.

With these goals in mind, the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program brought together teams of theoretical physicists, applied mathematicians, computer scientists and students from universities and national laboratories to create a computational project called the Universal Nuclear Energy Density Functional (UNEDF), which uses supercomputers to predict and understand behavior of a wide range of nuclei, including their reactions, and to quantify uncertainties. In fact, fluorine-14 was simulated with a code called Many Fermion Dynamics-nuclear (MFDn) that is part of the UNEDF project.

According to Vary, much of this code was developed on NERSC systems over the past two decades. "We started by calculating how two or three neutrons and protons interact, then built up our interactions from there to predict the properties of exotic nuclei like fluorine-14 with nine protons and five neutrons," says Vary. "We actually had these capabilities for some time, but were waiting for computing power to catch up. It wasn't until the past three or four years that computing power became available to make the runs."

Through the SciDAC program, Vary's team partnered with Esmond Ng and other scientists in Berkeley Lab's CRD, who brought discrete and numerical mathematics expertise to improve a number of aspects of the code. "The prediction of fluorine-14 would not have been possible without SciDAC. Before our collaboration, the code had some bottlenecks, so performance was an issue," says Ng, who heads Berkeley Lab's Scientific Computing Group. Vary and Ng lead teams that are part of the UNEDF collaboration.

"We would not have been able to solve this problem without help from Esmond and the Berkeley Lab collaborators, or the initial investment from NERSC, which gave us the computational resources to develop and improve our code," says Vary. "It just would have taken too long. These contributions improved performance by a factor of three and helped us get more precise numbers."

He notes that a single simulation of fluorine-14 would have taken 18 hours on 30,000 processor cores, without the improvements implemented with the Berkeley Lab team's help. However, thanks to the SciDAC collaboration, each final run required only 6 hours on 30,000 processors. The final runs were performed on the Jaguar system at the Oak Ridge Leadership Computing Facility with an Innovative and Novel Computational Impact on Theory and Experiment (INCITE) allocation from the Department of Energy's Office of Advanced Scientific Computing Research (ASCR).
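Expressed in core-hours, the standard accounting unit for allocations like INCITE, the improvement is straightforward arithmetic on the numbers above:

```python
cores = 30_000

before = 18 * cores   # hours per run x cores, without the SciDAC optimizations
after = 6 * cores     # hours per run x cores, with them

print(f"before: {before:,} core-hours per run")   # 540,000
print(f"after:  {after:,} core-hours per run")    # 180,000
print(f"speed-up: {before / after:.0f}x")         # the factor of ~3 cited above
```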

The paper that predicts fluorine-14 was published in Physical Review C Rapid Communications. In addition to Vary, Pieter Maris, also of Iowa State, and Andrey Shirokov of Moscow State University were co-authors on the paper. In addition to Ng, Chao Yang and Philip Sternberg (a former postdoc), also of Berkeley Lab, and Masha Sosonkina of Ames Laboratory at Iowa State University contributed to the project.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

P. Maris, J. P. Vary, P. Navratil, W. E. Ormand, H. Nam, D. J. Dean. Origin of the anomalous long lifetime of 14C. arXiv.org, 2011; [link]
