Monday 31 October 2011

Borrowing from brightly-colored birds: Physicists develop lasers inspired by nature

ScienceDaily (Oct. 13, 2011) — Researchers at Yale University are studying how two types of nanoscale structures on the feathers of birds produce brilliant and distinctive colors. The researchers are hoping that by borrowing these nanoscale tricks from nature they will be able to produce new types of lasers -- ones that can assemble themselves by natural processes.

The team will present their findings at the Optical Society's (OSA) Annual Meeting, Frontiers in Optics (FiO) 2011 (http://www.frontiersinoptics.com/), taking place in San Jose, Calif. next week.

Many of the colors displayed in nature are created by nanoscale structures that scatter light strongly at specific frequencies. In some cases, these structures create iridescence, where colors change with the angle of view -- like the shifting rainbows on a soap bubble. In other cases, the hues produced by the structures are steady and unchanging. The mechanism by which angle-independent colors are produced stumped scientists for 100 years: at first glance, these steady hues appeared to have been produced by a random jumble of proteins. But when researchers zoomed in on small sections of the protein at a time, quasi-ordered patterns began to emerge. The scientists found that it is this short-range order that scatters light preferentially at specific frequencies to produce the distinctive hues of a bluebird's wings, for example.
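The idea that short-range order alone can pick out a preferred scattering wavelength can be illustrated with a quick numerical sketch (this shows the general principle, not the Yale group's model; the box size, point counts and spacings below are invented). Points placed completely at random scatter all wavelengths roughly equally, while points that merely keep a minimum distance from one another, with no long-range crystalline order, produce a pronounced peak in the structure factor S(q) -- that is, preferential scattering at one characteristic length scale:

    # Illustration only: short-range order (a minimum spacing between scatterers,
    # with no long-range periodicity) produces a peak in the structure factor S(q),
    # i.e. strong scattering at one characteristic wavelength. Parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N, d_min = 50.0, 300, 2.0        # box size, number of scatterers, minimum spacing

    def random_points(n):
        return rng.uniform(0, L, size=(n, 2))

    def short_range_ordered_points(n, dmin):
        pts = []
        while len(pts) < n:                              # simple rejection sampling
            p = rng.uniform(0, L, size=2)
            if all(np.linalg.norm(p - q) >= dmin for q in pts):
                pts.append(p)
        return np.array(pts)

    def structure_factor(pts, q_values, n_dirs=60):
        # S(q) = <|sum_j exp(i q . r_j)|^2> / N, averaged over directions of q
        angles = np.linspace(0, np.pi, n_dirs, endpoint=False)
        dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
        S = []
        for q in q_values:
            phases = pts @ (q * dirs).T                  # shape (N, n_dirs)
            amp = np.exp(1j * phases).sum(axis=0)        # coherent sum over scatterers
            S.append(np.mean(np.abs(amp) ** 2) / len(pts))
        return np.array(S)

    q = np.linspace(0.5, 6.0, 100)
    S_rand = structure_factor(random_points(N), q)
    S_sro = structure_factor(short_range_ordered_points(N, d_min), q)
    q_peak = q[np.argmax(S_sro)]
    print(f"random points:      S(q) stays near {S_rand.mean():.2f} everywhere (broadband scattering)")
    print(f"short-range order:  S(q) peaks at q = {q_peak:.2f} (value {S_sro.max():.2f}), one preferred length scale")

The random pattern scatters all wavelengths about equally, while the short-range-ordered pattern scatters most strongly near one wavevector -- the same kind of selectivity that gives a bluebird's feathers their steady, angle-independent blue.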

Inspired by feathers, the Yale physicists created two lasers that use this short-range order to control light. One model is based on feathers with tiny spherical air cavities packed in a protein called beta-keratin. The laser based on this model consists of a semiconductor membrane full of tiny air holes that trap light at certain frequencies. Quantum dots embedded between the holes amplify the light and produce the coherent beam that is the hallmark of a laser. The researchers also built a network laser using a series of interconnecting nano-channels, based on their observations of feathers whose beta-keratin takes the form of interconnecting channels in "tortuous and twisting forms." The network laser produces its emission by blocking certain colors of light while allowing others to propagate. In both cases, researchers can manipulate the lasers' colors by changing the width of the nano-channels or the spacing between the nano-holes.

What makes these short-range-ordered, bio-inspired structures different from traditional lasers is that, in principle, they can self-assemble, through natural processes similar to the formation of gas bubbles in a liquid. This means that engineers would not have to worry about the nanofabrication of the large-scale structure of the materials they design, resulting in cheaper, faster, and easier production of lasers and light-emitting devices.

Potential applications for this work include more efficient solar cells that can trap photons before converting them into electrons. The technology could also yield long-lasting paint, with uses in products such as cosmetics and textiles. "Chemical paint will always fade," says lead author Hui Cao. But a physical "paint" whose nanostructure determines its color will never change. Cao describes a 40-million-year-old beetle fossil that her lab examined recently, which still had its color-producing nanostructures. "With my eyes I can still see the color," she said. "It really lasts for a very long time."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Optical Society of America, via EurekAlert!, a service of AAAS.

Pendulums and floating film: Two seemingly unrelated phenomena share surprising link

ScienceDaily (Oct. 11, 2011) — A coupled line of swinging pendulums apparently has nothing in common with an elastic film that buckles and folds under compression while floating on a liquid, but scientists at the University of Chicago and Tel Aviv University have discovered a deep connection between the two phenomena.

Energy carried in ordinary waves, like those seen on the ocean near a beach, quickly disperses. But the energy in the coupled pendulums and in compressed elastic film concentrates into different kinds of waves, ones with discrete packets of energy called "solitons."

Solitons express themselves in other realms as well. In telecommunications, for example, pulsing light travels as solitons through optical fibers. "What is special about solitons, and this very special way of localizing energy, is that it does not disperse," said Haim Diamant, associate professor of chemistry at Tel Aviv University. "It remains a very well-defined, focused packet."

The connection, which Diamant and UChicago's Thomas Witten report Oct. 14 in the journal Physical Review Letters, is subtle and more easily appreciated in visual form. Witten graphed the swing angle of the pendulums as time progresses. Then he made a curve whose angle with the horizontal varies to match the swing angles along the diagonal edge of the graph. The resulting curve takes the same shape as the profile of a folded elastic film floating on a liquid.

Witten and Diamant began collaborating while the latter worked at UChicago's James Franck Institute as a postdoctoral scientist from 1999 to 2002. They have been working together on puzzles emerging from the laboratory of another collaborator, Ka Yee Lee, UChicago professor in chemistry, ever since. Lee's research group studies the complex mixture of lipids and proteins that lines the sacs of the lung. These molecular linings fold and unfold as we inhale and exhale; the folding appears important for normal breathing.

Slowly growing energy

The energy applied to the film's deformation starts out weak, then grows stronger. Once the wrinkling energy in the film grows strong enough, it concentrates itself into a fold shaped like the folding curve described above.

Though the fold appears in a specific place on the film, "the motion resulting from folding extends over a big region. Usually big things are slow," said Witten, the Homer J. Livingston Professor in Physics. "But this is a big thing that is not slow. It's a rapid jerk, and we want to see what enables such rapid, large-scale motion."

Lee and her associates aim to understand breathing mechanics using synthetic films only one layer of molecules thick to simulate the surfactant that lines the microscopic air sacs found in the lungs. Diamant and Witten sought to solve the equation that exactly describes the fold shape of such a film. They knew it would be a difficult task, given that it was a nonlinear equation, one in which simple changes produce complicated effects.

A typical nonlinear problem might absorb decades of work without yielding a solution; this one seemed different. "There were strange hints that told us this problem might be solved exactly," Diamant said. "It's very rare that a nonlinear problem can be solved exactly."

Miracle solution

These hints had appeared in numerical simulations of the folding process generated by Enrique Cerda, a collaborator at the University of Santiago in Chile. "It was a miracle that we found an exact solution, but we had a strong feeling that it existed," Diamant said.

Once Diamant and Witten solved the problem, they realized that the solution resembled that of the sine-Gordon equation, well known among mathematicians and physicists, which describes how a coupled line of swinging pendulums concentrates its energy.
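For reference, the sine-Gordon equation in its standard dimensionless textbook form, together with its single-soliton (kink) solution, is

    \[
    \frac{\partial^2 \phi}{\partial t^2} - \frac{\partial^2 \phi}{\partial x^2} + \sin\phi = 0,
    \qquad
    \phi_{\mathrm{kink}}(x,t) = 4\arctan\!\left[\exp\!\left(\frac{x - vt}{\sqrt{1 - v^2}}\right)\right],
    \]

where, for the pendulum chain, phi is the swing angle along the line and v is the soliton's speed in scaled units. The kink is a localized twist that travels without spreading out -- exactly the non-dispersing packet of energy described above. (This is the generic form of the equation; the precise correspondence to the fold profile is worked out in the Physical Review Letters paper.)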

Their resulting paper lays out the first example the authors know of in which soliton motion of a dynamical system can help scientists understand material deformation. The materials in this instance involve a thin, rigid layer floating on a fluid surface, a structure commonly found in biological tissues and synthetic coatings.

The finding has still-undetermined technological or biomedical applications, but it offers a way to control the film's deformation, including making the fold stick down into or up out of the water, forming a groove.

"This groove is controllable," Witten said. "You can shape the groove; you can make it come; you can make it go away." One also could control the location of the groove on the film, making it possible to manipulate the film on the scale of a few microns -- a fraction the width of a human hair.

"If there was some other material in the water that was attracted to the surface, we could make it nestle into this shape and we could capture it," Witten explained. "We think that this shape could have some potential that people don't realize."

As a next step, Diamant and Witten wonder if the dynamics of swinging pendulums can tell them anything about the dynamics of a folding elastic film. "All we have described at the moment is this static shape of the fold, but folding too is a dynamic phenomenon," Witten said.

Squeeze the film, and it will begin to fold after a period of time. Witten and Diamant would like to further describe how that process works based on what the swinging pendulums do.

"It seems only natural, but things like that are dangerous and they don't necessarily work," Witten noted. "But we do know that there is a lot known about the solitons that we can potentially harness to understand the folds."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Chicago. The original article was written by Steve Koppes.

Journal Reference:

Haim Diamant, Thomas Witten. Compression Induced Folding of a Sheet: An Integrable System. Physical Review Letters, 2011; 107 (16) DOI: 10.1103/PhysRevLett.107.164302

Redox flow batteries, a promising technology for renewable energies integration

ScienceDaily (Oct. 14, 2011) — Today there is a wide variety of energy storage technologies at very different stages of development. Among them, the Redox Flow Battery (RFB) is an innovative solution based on the use of liquid electrolytes stored in tanks and pumped through a reactor to produce energy. Tecnalia is currently working on the development of high-performance RFBs.

The RFB is, by its very nature, a modular and highly flexible technology with very rapid response, little environmental impact and considerable potential for cutting costs. This is why Redox Flow Batteries are emerging as a very promising option for stationary storage in general and for renewable applications in particular.
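In idealized terms (a rough sketch, not Tecnalia's figures), the reason for this modularity is that energy and power are set by different parts of the system:

    \[
    E \;\approx\; z\,F\,c\,V_{\mathrm{tank}}\,\bar{U}, \qquad
    P \;\approx\; j\,A\,U_{\mathrm{cell}}\,N_{\mathrm{cells}},
    \]

where E is the stored energy, z the electrons exchanged per active ion, F the Faraday constant, c the concentration of active species, V_tank the electrolyte volume and U-bar the average cell voltage, while the power P depends on the current density j, the electrode area A, the cell voltage and the number of cells in the stack. Energy capacity therefore scales with the size of the tanks and power with the size of the stack, so the two can be sized independently -- the key to the technology's flexibility.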

Renewable energies

There is no doubt that the development of renewable energies will be a key milestone on the way towards a new, environmentally friendly energy model. However, their variability and limited predictability pose a problem for the operation of the system and, as a result, a barrier to their large-scale penetration.

A clear example of these difficulties is the need to maintain backup systems that generate energy during periods of low wind or low solar irradiance. On the other hand, high renewable generation can lead to wasted energy during periods of low demand.

Redox Flow Batteries are considered a highly suitable technology for mitigating the variability of renewable energies and improving their dispatchability -- that is, for providing the capability to regulate output in a way similar to conventional power stations. The energy stored during periods of high renewable production can be used to compensate for the lack of generation when the weather conditions are less favourable.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Elhuyar Fundazioa.

Carbon nanotube muscles generate giant twist for novel motors

ScienceDaily (Oct. 14, 2011) — New artificial muscles that twist like the trunk of an elephant, but provide a thousand times higher rotation per length, have been developed by a team of researchers from The University of Texas at Dallas, The University of Wollongong in Australia, The University of British Columbia in Canada, and Hanyang University in Korea.

The research appears in the journal Science.

These muscles, based on carbon nanotube yarns, accelerate a paddle 2,000 times heavier than the yarn up to 590 revolutions per minute in 1.2 seconds, and then reverse this rotation when the applied voltage is changed. The demonstrated rotation of 250 degrees per millimeter of muscle length is over a thousand times that of previous artificial muscles, which are based on ferroelectrics, shape memory alloys, or conducting organic polymers. The output power per yarn weight is comparable to that of large electric motors, whereas the weight-normalized performance of conventional electric motors degrades severely when they are downsized to the millimeter scale.

These muscles exploit strong, tough, highly flexible yarns of carbon nanotubes, which consist of nanoscale cylinders of carbon roughly ten thousand times smaller in diameter than a human hair. Important for success, these nanotubes are spun into helical yarns, which means that they have left- and right-handed versions (like our hands), depending on the direction of twist applied when the nanotubes are spun into yarn. Rotation is torsional, meaning that twist occurs in one direction until a limiting rotation results, and then rotation can be reversed by changing the applied voltage. Left- and right-handed yarns rotate in opposite directions when electrically charged, but in both cases the effect of charging is to partially untwist the yarn.

Unlike conventional motors, whose complexity makes them difficult to miniaturize, the torsional carbon nanotube muscles are simple and inexpensive to construct, in either very long or millimeter-scale lengths. The nanotube torsional motors consist of a yarn electrode and a counter-electrode, which are immersed in an ionically conducting liquid. A low-voltage battery can serve as the power source, which enables electrochemical charge and discharge of the yarn to provide torsional rotation in opposite directions. In the simplest case, the researchers attach a paddle to the nanotube yarn, which enables torsional rotation to do useful work -- like mixing liquids on "micro-fluidic chips" used for chemical analysis and sensing.

The mechanism of torsional rotation is remarkable. Charging the nanotube yarns is like charging a supercapacitor -- ions migrate into the yarns to electrostatically balance the electronic charge injected onto the nanotubes. Although the yarns are porous, this influx of ions causes the yarn to increase in volume, shrink in length by up to one percent, and torsionally rotate. This surprising shrinkage in yarn length as its volume increases is explained by the yarn's helical structure, which is similar in structure to finger cuff toys that trap a child's fingers when elongated, but free them when shortened.
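The finger-cuff picture can be made precise with a little idealized helix geometry (a sketch, not the authors' full analysis). If a fiber of fixed length L_f winds n times around a yarn of length h and radius r, then

    \[
    L_f^2 = h^2 + (2\pi r n)^2 .
    \]

When the electrolyte swells the yarn, r grows while L_f stays essentially fixed, so the right-hand side can only stay balanced if h shrinks (the yarn shortens) and/or n drops (the yarn untwists) -- and that untwisting is the torsional rotation that spins the paddle.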

Nature has used torsional rotation based on helically wound muscles for hundreds of millions of years, and exploits this action for such tasks as twisting the trunks of elephants and octopus limbs. In these natural appendages, helically wound muscle fibers cause rotation by contracting against an essentially incompressible, bone-less core. On the other hand, the helically wound carbon nanotubes in the nanotube yarns are undergoing little change in length, but are instead causing the volume of liquid electrolyte within the porous yarn to increase during electrochemical charging, so that torsional rotation occurs.

The combination of mechanical simplicity, giant torsional rotations, high rotation rates, and micron-size yarn diameters is attractive for applications such as microfluidic pumps, valve drives, and mixers. In a fluidic mixer demonstrated by the researchers, a yarn 15 microns in diameter rotated a paddle with 200 times its radius and 80 times its weight in flowing liquids at up to one rotation per second.

"The discovery, characterization, and understanding of these high performance torsional motors shows the power of international collaborations," said Ray H. Baughman, a corresponding author of the author of the Science article and Robert A. Welch Professor of Chemistry and director of The University of Texas at Dallas Alan G. MacDiarmid NanoTech Institute. "Researchers from four universities in three different continents that were born in eight different countries made critically important contributions."

Other co-authors of this article are Javad Foroughi (first author and research fellow), Geoffrey M. Spinks (a corresponding author and professor), and Gordon G. Wallace (professor) of the University of Wollongong in Australia; Jiyoung Oh (postdoctoral fellow), Mikhail E. Kozlov (research professor), and Shaoli Fang (research professor) at The University of Texas at Dallas; Tissaphern Mirfakhrai (postdoctoral fellow) and John D. W. Madden (professor) at The University of British Columbia; and Min Kyoon Shin (postdoctoral fellow) and Seon Jeong Kim (professor) at Hanyang University.

Funding for this research was provided by grants from the Air Force Office of Scientific Research, the Air Force AOARD program, the Office of Naval Research MURI program, and the Robert A. Welch Foundation in the United States; the Creative Research Initiative Center for Bio-Artificial Muscle in Korea; the Natural Sciences and Engineering Research Council of Canada; and the Australian Research Council.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Texas at Dallas, via EurekAlert!, a service of AAAS.

Journal Reference:

Javad Foroughi, Geoffrey M. Spinks, Gordon G. Wallace, Jiyoung Oh, Mikhail E. Kozlov, Shaoli Fang, Tissaphern Mirfakhrai, John D. W. Madden, Min Kyoon Shin, Seon Jeong Kim, Ray H. Baughman. Torsional Carbon Nanotube Artificial Muscles. Science, 2011; DOI: 10.1126/science.1211220

Venus has an ozone layer too, space probe discovers

ScienceDaily (Oct. 9, 2011) — The European Space Agency's Venus Express spacecraft has discovered an ozone layer high in the atmosphere of Venus. Comparing its properties with those of the equivalent layers on Earth and Mars will help astronomers refine their searches for life on other planets.

The results are being presented at the Joint Meeting of the European Planetary Science Congress and the American Astronomical Society's Division for Planetary Sciences.

Venus Express made the discovery while watching stars seen right at the edge of the planet set through its atmosphere. Its SPICAV instrument analysed the starlight, looking for the characteristic fingerprints of gases in the atmosphere as they absorbed light at specific wavelengths.

The ozone was detectable because it absorbed some of the ultraviolet from the starlight. Ozone is a molecule containing three oxygen atoms. According to computer models, the ozone on Venus is formed when sunlight breaks up carbon dioxide molecules, releasing oxygen atoms.

These atoms are then swept around to the nightside of the planet by winds in the atmosphere: they can then combine to form two-atom oxygen molecules, but also sometimes three-atom ozone molecules.
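Written as reactions, the chain described above is the standard one (M denotes any third molecule that carries away excess energy):

    \[
    \mathrm{CO_2} + h\nu \rightarrow \mathrm{CO} + \mathrm{O}, \qquad
    \mathrm{O} + \mathrm{O} + \mathrm{M} \rightarrow \mathrm{O_2} + \mathrm{M}, \qquad
    \mathrm{O} + \mathrm{O_2} + \mathrm{M} \rightarrow \mathrm{O_3} + \mathrm{M}.
    \]

The first step happens on the dayside, where sunlight photolyzes carbon dioxide; the recombination steps can proceed on the nightside once the winds have carried the oxygen atoms there.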

"This detection gives us an important constraint on understanding the chemistry of Venus' atmosphere," says Franck Montmessin, who led the research.

It may also offer a useful comparison for searching for life on other worlds. Ozone has only previously been detected in the atmospheres of Earth and Mars. On Earth, it is of fundamental importance to life because it absorbs much of the Sun's harmful ultraviolet rays. Not only that, it is thought to have been generated by life itself in the first place.

The build-up of oxygen, and consequently ozone, in Earth's atmosphere began 2.4 billion years ago. Although the exact reasons for it are not entirely understood, microbes excreting oxygen as a waste gas must have played an important role.

Along with plant life, they continue to do so, constantly replenishing Earth's oxygen and ozone. As a result, some astrobiologists have suggested that the simultaneous presence of carbon dioxide, oxygen and ozone in an atmosphere could be used to tell whether there could be life on the planet.

This would allow future telescopes to target planets around other stars and assess their habitability. However, as these new results highlight, the amount of ozone is crucial.

The small amount of ozone in Mars' atmosphere has not been generated by life. There, it is the result of sunlight breaking up carbon dioxide molecules. Venus, too, now supports this view of a modest ozone build-up by non-biological means. Its ozone layer sits at an altitude of 100 km, about four times higher in the atmosphere than Earth's, and is a hundred to a thousand times less dense.

Theoretical work by astrobiologists suggests that a planet's ozone concentration must reach 20% of Earth's value before life should be considered as a cause. These new results support that conclusion, because Venus clearly remains below this threshold.

"We can use these new observations to test and refine the scenarios for the detection of life on other worlds," says Dr Montmessin.

Yet, even if there is no life on Venus, the detection of ozone there brings Venus a step closer to Earth and Mars. All three planets have an ozone layer.

"This ozone detection tells us a lot about the circulation and the chemistry of Venus' atmosphere" says Håkan Svedhem, ESA Project Scientist for the Venus Express mission. "Beyond that, it is yet more evidence of the fundamental similarity between the rocky planets, and shows the importance of studying Venus to understand them all."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Europlanet Media Centre, via AlphaGalileo.

Journal Reference:

F. Montmessin, J.-L. Bertaux, F. Lefèvre, E. Marcq, D. Belyaev, J.-C. Gérard, O. Korablev, A. Fedorova, V. Sarago, A.C. Vandaele. A layer of ozone detected in the nightside upper atmosphere of Venus. Icarus, 2011; 216 (1): 82 DOI: 10.1016/j.icarus.2011.08.010

Sunday 30 October 2011

Trees have a tipping point

Amount of forest cover can shift suddenly and unexpectedly
Web edition: Thursday, October 13th, 2011

Image caption: Fires like this one in South Africa burn rapidly and regularly through savanna ecosystems, a process that allows grasslands to remain open and helps prevent trees from establishing themselves. Credit: Carla Staver

Like Coke versus Pepsi, tropical land ecosystems come in two choices: forest or grassland. New research shows these two options can switch abruptly, and there’s rarely any in-between.

If so, then many of these ecosystems are particularly vulnerable to future changes such as rising temperature, scientists say. With just slight shifts in rainfall or other factors, people living in what is now tropical rainforest might suddenly find themselves in scrubland populated by a different mix of plants and animals — where people’s livelihoods might have to change dramatically.

“That transition is not going to happen smoothly,” says Milena Holmgren, an ecologist at Wageningen University in the Netherlands. “The evidence is showing there are these big jumps.”

Holmgren and her colleagues describe the finding in the Oct. 14 Science. Another group, from Princeton University and South Africa's national research council, reports similar conclusions in a second paper in the same journal.

In theory, the relationship between rainfall and tree cover should be straightforward: The more rain a place has, the more trees that will grow there. But small studies have suggested that changes can occur in discrete steps. Add more rain to a grassy savanna, and it stays a savanna with the same percentage of tree cover for quite some time. Then, at some crucial amount of extra rainfall, the savanna suddenly switches to a full-fledged forest.

But no one knew whether such rapid transformations happened on a global scale. Separately, both research groups decided to look at data gathered by the MODIS instruments on board NASA’s Terra and Aqua satellites, which sense vegetation cover and other features of the land surface. This information included how much of each square kilometer of land was covered by trees, grasses or other vegetation. Both teams focused on the tropics and subtropics of Africa, South America and Australia, because those areas are thought to be least disturbed by human activity.

Looking at the numbers, Holmgren’s group identified three distinct ecosystem types: forest, savanna, and a treeless state. Forests typically had 80 percent tree cover, while savannas had 20 percent trees and the “treeless” about 5 percent or less. Intermediate states — with, say, 60 percent tree cover — are extremely rare, Holmgren says. Which category a particular landscape fell into depended heavily on rainfall.

Fire may be another important factor in determining tree cover, as the second group found. Led by Princeton ecology graduate student Carla Staver, this team studied how fire helped differentiate between forest and savanna. Fire spreads quickly in savannas because of all the grasses and slowly in tree-dense forests. “There’s a tipping point between where you get fires spreading easily and where you don’t,” Staver says.

That point, she and her colleagues found, sits at a tree cover of about 40 to 45 percent. Below that number, fires spread easily and prevent new trees from establishing themselves. Above that number, trees work to maintain a thick canopy that acts as a barrier to stop fire from spreading.
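A toy calculation (not the Princeton or Wageningen models; every number below is invented for illustration) shows how a fire feedback of this kind turns a smooth rainfall gradient into abrupt jumps in tree cover:

    # Toy model: tree cover grows logistically at a rainfall-dependent rate, and
    # fire knocks it back strongly only while cover stays below a ~45% threshold
    # (above that, the closed canopy suppresses fire). Parameters are invented.
    import numpy as np

    def equilibrium_tree_cover(rainfall, T0=0.2, steps=4000, dt=0.01):
        T = T0
        for _ in range(steps):
            growth = rainfall * T * (1.0 - T)                    # faster growth when wetter
            mortality = (0.35 if T < 0.45 else 0.02) * T         # fire below threshold, background above
            T = min(max(T + dt * (growth - mortality), 0.0), 1.0)
        return T

    print("rainfall    equilibrium tree cover")
    for rain in np.linspace(0.1, 0.8, 15):
        print(f"{rain:8.2f}    {equilibrium_tree_cover(rain):.2f}")

Scanning the rainfall parameter upward, the model settles first near zero cover, then at savanna-like values below the fire threshold, and then jumps almost discontinuously to forest-like cover -- intermediate amounts of tree cover never appear as stable states, echoing the pattern in the satellite data.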

“These two papers tell us that these feedbacks really do operate at all scales,” says Audrey Mayer, an ecologist at Michigan Technological University in Houghton. “They’ll make us have to redo some of our assumptions about how things are going to change in the future.”

Many global climate models, for instance, assume a smooth transition between savanna and forest as temperature and rainfall change. But the new work suggests that forests could appear or disappear quickly, Mayer says, especially if people complicate the picture. “You can’t just plant a couple of trees and they’ll grow up and the forest will come back,” she says. “You have to fight those internal feedbacks.”

Staver and her colleagues are now searching for savanna-forest transitions that are occurring right now. “These things are definitely happening,” she says, “and the new work tells us it could be even more widespread than we’d thought.” Studying where landscapes are changing could help the scientists better understand what causes ecosystems to tip from one category to the other.

For their part, the Dutch scientists have developed “resilience maps” that show which places are most likely to tip from savanna to forest or vice versa. Farmers scratching out a living in western Africa or ranchers running cattle on the fringes of the Amazon might use such maps to learn how viable their livelihoods are likely to be in coming decades.

Locals could thus spend more time and energy working to keep the ecosystem the way it is, perhaps by building extra capacity for storing water or by cutting back on logging. Or residents could cut down more trees to tip a forest into a grassy rangeland for their animals. “These maps can be a tremendous tool for all kinds of organizations,” Holmgren says.

Mayer says she’d like to see the analysis extended into the Northern Hemisphere, where she suspects the results might be the same. Across parts of Illinois and Indiana, for instance, stretches a narrow strip of tallgrass prairie surrounded by forests dubbed the “prairie peninsula.” The peninsula was probably kept grassy by centuries of fire and grazing management — because otherwise it, too, would revert to forest.


Monkeys 'move and feel' virtual objects using only their brains

ScienceDaily (Oct. 6, 2011) — In a first ever demonstration of a two-way interaction between a primate brain and a virtual body, two monkeys trained at the Duke University Center for Neuroengineering learned to employ brain activity alone to move an avatar hand and identify the texture of virtual objects.

"Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton," said Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering, who was senior author of the study.

Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and, upon contact, were able to differentiate their textures.

Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain's electrical activity.

The texture of the virtual objects was expressed as a pattern of minute electrical signals transmitted to the monkeys' brains. Three different electrical patterns corresponded to each of three different object textures.

Because no part of the animal's real body was involved in the operation of this brain-machine-brain interface (BMBI), these experiments suggest that in the future patients severely paralyzed due to a spinal cord lesion may take advantage of this technology, not only to regain mobility, but also to have their sense of touch restored, said Nicolelis, who was senior author of the study published in the journal Nature on Oct. 5.

"This is the first demonstration of a brain-machine-brain interface that establishes a direct, bidirectional link between a brain and a virtual body," Nicolelis said. "In this BMBI, the virtual body is controlled directly by the animal's brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal's cortex."

"We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world," Nicolelis said.

"This is also the first time we've observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects 'touched' by the monkey's newly acquired virtual hand," Nicolelis said. "Such an interaction between the brain and a virtual avatar was totally independent of the animal's real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It's almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves."

The combined electrical activity of populations of 50-200 neurons in the monkey's motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand's palm that let the monkey discriminate between objects, based on their texture alone.
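The summary above does not spell out the decoding mathematics, but the general idea of reading a movement command out of a population of motor-cortex neurons can be sketched generically (a standard linear-decoding illustration with invented tuning curves and noise, not the Duke group's actual algorithm):

    # Generic population-decoding sketch (not the Duke group's method): simulate
    # ~100 cosine-tuned motor-cortex neurons, then fit a ridge-regression decoder
    # that maps their firing rates back to 2D hand velocity. All numbers invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, n_samples = 100, 3000

    preferred = rng.uniform(0, 2 * np.pi, n_neurons)       # each neuron's preferred direction
    baseline = rng.uniform(5, 20, n_neurons)                # baseline firing rate (spikes/s)
    depth = rng.uniform(2, 10, n_neurons)                   # modulation depth

    velocity = rng.normal(0, 1, size=(n_samples, 2))        # intended 2D hand velocity
    speed = np.linalg.norm(velocity, axis=1, keepdims=True)
    angle = np.arctan2(velocity[:, 1], velocity[:, 0])[:, None]
    rates = baseline + depth * speed * np.cos(angle - preferred) \
            + rng.normal(0, 2.0, size=(n_samples, n_neurons))    # noisy observed rates

    X = np.hstack([rates, np.ones((n_samples, 1))])         # add a bias column
    lam = 1.0                                               # ridge penalty
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ velocity)

    pred = X @ W
    corr = [np.corrcoef(pred[:, d], velocity[:, d])[0, 1] for d in range(2)]
    print(f"decoded vs. true velocity correlation: x = {corr[0]:.2f}, y = {corr[1]:.2f}")

With a hundred or so tuned neurons, even this simple linear read-out recovers the intended velocity well, which is one way to see why populations of 50-200 motor-cortex cells carry enough information to steer an avatar arm.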

"The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future," Nicolelis said.

It took one monkey only four attempts, and another just nine, to learn how to select the correct object during each trial. Several tests demonstrated that the monkeys were actually sensing the objects and not selecting them randomly.

The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. Such an exoskeleton would be directly controlled by the patient's voluntary brain activity in order to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient's brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.

This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.

The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.

Other authors include Joseph E. O'Doherty, Mikhail A. Lebedev, Peter J. Ifft and Katie Z. Zhuang, all from the Duke University Center for Neuroengineering, and Solaiman Shokur and Hannes Bleuler from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland.

This work was funded by the U.S. National Institutes of Health.

A video illustrating the experiment is available at: http://www.youtube.com/watch?v=WTTTwvjCa5g

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Duke University Medical Center.

Journal Reference:

Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, Miguel A. L. Nicolelis. Active tactile exploration using a brain–machine–brain interface. Nature, 2011; DOI: 10.1038/nature10489

Science & the Public: Arctic ozone: ‘Hole’ or just not whole?

Some scientists argue the far North's ozone merely thinned.
Web edition: Tuesday, October 4th, 2011

This past spring, the Arctic stratosphere's ozone layer suffered unprecedented depletion. But whether the record loss constituted a "hole" depends on which experts you consult.

In a Nature paper published online earlier this week, Gloria Manney of NASA’s Jet Propulsion Laboratory in Pasadena and more than two dozen coauthors describe the 2011 loss as “an Arctic ozone hole.” Other renowned scientists have been weighing in — and some argue that as dramatic as this year's thinning was, a hole it wasn't.

Reports of a putative hole in the far North’s ozone are far from new. A quarter century ago to this day, Science News ran a story noting that “while everyone’s attention has been riveted on the atmosphere above Antarctica, a NASA researcher has discovered what he believes is another ozone cavity that forms each [winter] on the other side of the world. . . . This Arctic ozone hole is ‘not as large in magnitude, but it’s unquestionably there.'"

Since then, descriptions of the recurrent depletion of Arctic ozone have been scaled back to more of just a demonstrable thinning. There's been little question that its triggers, however, are identical to those that seasonably eat away huge portions of Antarctica’s stratospheric ozone.

What made 2011 different — and a watershed — argues Michelle Santee (a JPL colleague of Manney's and coauthor on the new paper), is that at long last, “the magnitude of the [Arctic] loss is comparable to that in the early Antarctic ozone holes in the mid 1980s.”

Santee observes that "the actual definition of an ozone hole has never been codified, even for the Antarctic." Over the past several decades, however, a de facto rule of thumb has developed. It's measured in terms of the total ozone throughout a column of the atmosphere spanning from Earth's surface up to satellite height. Such composite measurements are logged in Dobson units, and a reading below about 220 is generally taken to constitute a hole.

Ordinarily, Santee says, 450 Dobson units “is sort of the canonical value for the Northern polar region’s ozone — what would be your sort of basal level.” This year, she notes, total-column Arctic ozone values “were below 250 Dobson units for nearly a month — and reached 220 to 230 for about a week.”
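To put those numbers on a physical scale: one Dobson unit corresponds to a layer of pure ozone 0.01 mm thick if the whole column were compressed to standard surface temperature and pressure (about 2.7 x 10^16 molecules per square centimetre), so

    \[
    220\ \mathrm{DU} \approx 2.2\ \mathrm{mm}, \qquad 450\ \mathrm{DU} \approx 4.5\ \mathrm{mm}
    \]

of compressed ozone. The de facto "hole" criterion of 220 DU is thus roughly half of the canonical 450 DU Arctic column that Santee describes.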

Geir Braathen, senior scientific officer with the World Meteorological Organization in Geneva, concurs that “scientists have not agreed on any threshold ozone loss, like 250 or 260 Dobson units [for a hole].” Still, this atmospheric chemist cautions, “I would be careful about calling the Arctic depletion an ozone hole” because it might lead people to think it's comparable to what emerges in the Antarctic. And it isn’t.

Antarctica's hole recurs annually, whereas mega-thinning in Arctic ozone is novel. Antarctica's ozone also thins at some point to zero in a band many kilometers high. At no altitude has Arctic ozone ever fallen to zero -- even in 2011. Finally, Braathen points out, the areal extent and depth of the Antarctic hole greatly dwarf those of the Arctic region that experienced substantial thinning earlier this year.

“Going into this Arctic spring, many of us — myself included — really thought this might be the year that we would see a real Arctic ozone hole,” observed Susan Solomon, of the University of Colorado, Boulder, at the recent American Chemical Society meeting in Denver. "But in the end," she says, "I think it’s fair to say that we didn’t.”

It may be a matter of semantics, she concedes, but there was a rapid resupply of ozone from outside the Arctic vortex (that swirling wall of winds in the stratosphere that largely corrals a patch of atmosphere, rendering it vulnerable to ozone-destroying chemical reactions). Such a resupply does not occur in the Antarctic vortex, she notes; and that's what permits its stratospheric ozone concentrations to plummet to zero over a several-kilometer height.

So, although the new paper clearly demonstrates that at some altitudes Arctic ozone was efficiently destroyed, Solomon says, “I wouldn’t call this an ozone hole.” 

Whatever you call 2011's Arctic ozone depletion, “I consider this to be a ‘big deal,’” says Ross Salawitch of the University of Maryland in College Park. Moreover, there’s no reason to suspect that in some future years, the losses won't be substantially worse, he says. 

Warming surface temperatures can cool the stratosphere, promoting conditions that accelerate ozone depletion, he notes. It takes prolonged cold temperatures in the winter and spring stratosphere to maximize ozone losses. Although such conditions have not been recurring annually in the Arctic, they have returned at three- to four-year intervals. And each new cold snap has been more extreme than the last, Salawitch points out.

Reinforcing concerns about future ozone depletion, Braathen says, is recognition that for many years to come there will be more than enough chlorine- and bromine-based pollutants in the stratosphere to allow for the possibility of “complete destruction of ozone — even in the Arctic — if it’s cold enough.”


Graphene's 'Big Mac' creates next generation of chips

ScienceDaily (Oct. 10, 2011) — The world's thinnest, strongest and most conductive material, discovered in 2004 at the University of Manchester by Professor Andre Geim and Professor Kostya Novoselov, has the potential to revolutionize material science.

Demonstrating the remarkable properties of graphene won the two scientists the Nobel Prize for Physics last year and Chancellor of the Exchequer George Osborne has just announced plans for a £50m graphene research hub to be set up.

Now, writing in the journal Nature Physics, the University of Manchester team have for the first time demonstrated what graphene inside electronic circuits will probably look like in the future.

By sandwiching two sheets of graphene with another two-dimensional material, boron nitride, the team created the graphene 'Big Mac' -- a four-layered structure which could be the key to replacing the silicon chip in computers.

Because the two layers of graphene are completely surrounded by the boron nitride, the researchers have been able for the first time to observe how graphene behaves when unaffected by the environment.

Dr Leonid Ponomarenko, the leading author on the paper, said: "Creating the multilayer structure has allowed us to isolate graphene from negative influence of the environment and control graphene's electronic properties in a way it was impossible before.

"So far people have never seen graphene as an insulator unless it has been purposefully damaged, but here high-quality graphene becomes an insulator for the first time."

The two layers of boron nitride are used not only to separate the two graphene layers but also to see how graphene reacts when it is completely encapsulated by another material.

Professor Geim said: "We are constantly looking at new ways of demonstrating and improving the remarkable properties of graphene."

"Leaving the new physics we report aside, technologically important is our demonstration that graphene encapsulated within boron nitride offers the best and most advanced platform for future graphene electronics. It solves several nasty issues about graphene's stability and quality that were hanging for long time as dark clouds over the future road for graphene electronics.

"We did this on a small scale, but the experience shows that everything with graphene can be scaled up."

"It could be only a matter of several months before we have encapsulated graphene transistors with characteristics better than previously demonstrated."

Graphene is a novel two-dimensional material which can be seen as a monolayer of carbon atoms arranged in a hexagonal lattice.

Its remarkable properties could lead to bendy, touch screen phones and computers, lighter aircraft, wallpaper-thin HD TV sets and superfast internet connections, to name but a few.

The £50m Graphene Global Research and Technology Hub will be set up by the Government to commercialise graphene. Institutions will be able to bid for the money via the Engineering and Physical Sciences Research Council (EPSRC), which funded the work leading to the award of the Nobel Prize long before the applications were realised.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Manchester, via EurekAlert!, a service of AAAS.

Journal Reference:

L. A. Ponomarenko, A. K. Geim, A. A. Zhukov, R. Jalil, S. V. Morozov, K. S. Novoselov, I. V. Grigorieva, E. H. Hill, V. V. Cheianov, V. I. Fal’ko, K. Watanabe, T. Taniguchi, R. V. Gorbachev. Tunable metal–insulator transition in double-layer graphene heterostructures. Nature Physics, 2011; DOI: 10.1038/nphys2114

New technique offers enhanced security for sensitive data in cloud computing

ScienceDaily (Oct. 13, 2011) — Researchers from North Carolina State University and IBM have developed a new, experimental technique to better protect sensitive information in cloud computing -- without significantly affecting the system's overall performance.

Under the cloud-computing paradigm, the computational power and storage of multiple computers are pooled and can be shared by multiple users. Hypervisors are programs that create the virtual workspace that allows different operating systems to run in isolation from one another -- even though each of these systems is using computing power and storage capability on the same computer. A longstanding concern in cloud computing is that attackers could take advantage of vulnerabilities in a hypervisor to steal or corrupt confidential data from other users in the cloud.

The NC State research team has developed a new approach to cloud security, which builds upon existing hardware and firmware functionality to isolate sensitive information and workloads from the rest of the functions performed by a hypervisor. The new technique, called "Strongly Isolated Computing Environment" (SICE), introduces a separate layer of protection.

"We have significantly reduced the 'surface' that can be attacked by malicious software," says Dr. Peng Ning, a professor of computer science at NC State and co-author of a paper describing the research. "For example, our approach relies on a software foundation called the Trusted Computing Base, or TCB, that has approximately 300 lines of code, meaning that only these 300 lines of code need to be trusted in order to ensure the isolation offered by our approach. Previous techniques have exposed thousands of lines of code to potential attacks. We have a smaller attack surface to protect."

SICE also lets programmers dedicate specific cores on widely-available multi-core processors to the sensitive workload -- allowing the other cores to perform all other functions normally. A core is the brain of a computer chip, and many computers now use chips that have between two and eight cores. By confining the sensitive workload to one or a few cores with strong isolation, and allowing other functions to operate separately, SICE is able to provide both high assurance for the sensitive workload and efficient resource sharing in a cloud.
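SICE does its core dedication in hardware and firmware, below the operating system, but the basic idea of reserving cores for a sensitive workload can be illustrated at user level with ordinary CPU affinity (a loose analogy only, not the SICE mechanism; the core numbers are examples and the call is Linux-specific):

    # User-level analogy only: pin a "sensitive" task to a reserved core, then
    # release the process back to all cores. SICE enforces a much stronger,
    # hardware-backed version of this separation, below the OS and hypervisor.
    import os

    SENSITIVE_CORES = {0}                                   # example: reserve core 0
    ALL_CORES = set(range(os.cpu_count() or 1))             # everything available

    def run_sensitive(task):
        os.sched_setaffinity(0, SENSITIVE_CORES)            # confine this process (pid 0 = self)
        try:
            return task()
        finally:
            os.sched_setaffinity(0, ALL_CORES)              # hand the cores back afterwards

    if __name__ == "__main__":
        result = run_sensitive(lambda: sum(i * i for i in range(10_000)))
        print("sensitive task finished on reserved core(s):", result)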

In testing, the SICE framework generally imposed a performance overhead of approximately 3 percent on multi-core processors for workloads that do not require direct network access. "That is a fairly modest price to pay for the enhanced security," Ning says. "However, more research is needed to further speed up the workloads that require interactions with the network."

The paper, "SICE: A Hardware-Level Strongly Isolated Computing Environment for x86 Multi-core Platforms," was co-authored by Ning; NC State Ph.D. student Ahmed Azab; and Dr. Xiaolan Zhang of IBM's T.J. Watson Research Center. The paper will be presented at the 18th ACM Conference on Computer and Communications Security, Oct. 17-21 in Chicago, Ill. The research was funded by the National Science Foundation, U.S. Army Research Office and IBM.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by North Carolina State University.

Saturday 29 October 2011

Erasing history? Temporal cloaks adjust light's throttle to hide an event in time

ScienceDaily (Oct. 13, 2011) — Researchers from Cornell University in Ithaca, N.Y., have demonstrated for the first time that it's possible to cloak a singular event in time, creating what has been described as a "history editor." In a feat of Einstein-inspired physics, Moti Fridman and his colleagues sent a beam of light traveling down an optical fiber and through a pair of so-called "time lenses." Between these two lenses, the researchers were able to briefly create a small bubble, or gap, in the flow of light. During that fleetingly brief moment, lasting only the tiniest fraction of a second, the gap functioned like a temporal hole, concealing the fact that a brief burst of light ever occurred.

The team is presenting their findings at the Optical Society's (OSA) Annual Meeting, Frontiers in Optics (FiO) 2011 (http://www.frontiersinoptics.com/), taking place in San Jose, Calif. next week.

Their ingenious system, which is the first physical demonstration of a phenomenon originally described theoretically a year ago by Martin McCall and his colleagues at Imperial College London in the Journal of Optics, relies on the ability to use short, intense pulses of light to alter the speed of light as it travels through optical materials, in this case an optical fiber. (In a vacuum, light maintains its predetermined speed limit of about 186,000 miles per second.) As the beam passes through a split-time lens (a silicon device originally designed to speed up data transfer), it accelerates near the center and slows down along the edges, causing it to balloon out toward the edges, leaving a dead zone around which the light waves curve. A similar lens a little farther along the path produces the exact but opposite velocity adjustments, resetting the speeds and reproducing the original shape and appearance of the light rays.

To test the performance of their temporal cloak, the researchers created pulses of light directly between the two lenses. The pulses repeated like clockwork at a rate of 41 kilohertz. When the cloak was off, the researchers were able to detect a steady beat. By switching on the temporal cloak, which was synchronized with the light pulses, all signs that these events ever took place were erased from the data stream.

Unlike spatial optical cloaking, which typically requires the use of metamaterials (specially created materials engineered to have specific optical properties), the temporal cloak designed by the researchers relies more on the fundamental properties of light and how it behaves under highly constrained space and time conditions. The area affected by the temporal cloak is a mere 6 millimeters long, and the cloak can last only 20 trillionths of a second. The length of the cloaked area and the length of time it is able to function are tightly constrained -- primarily by the extreme velocity of light. Cloaking for a longer duration would create turbulence in the system, essentially pulling back the curtain and hinting that an event had occurred. Also, to achieve any measurable macroscopic effects, an experiment of planetary or even interplanetary scale would be necessary.
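The quoted length and duration are consistent with each other: taking a typical group index of about 1.5 for silica fiber (an assumed round number, not a figure from the paper), light needs

    \[
    t \approx \frac{n\,\ell}{c} \approx \frac{1.5 \times 6\times 10^{-3}\ \mathrm{m}}{3\times 10^{8}\ \mathrm{m/s}} \approx 3\times 10^{-11}\ \mathrm{s} = 30\ \mathrm{ps}
    \]

to cross the 6-millimeter cloaked region -- the same order of magnitude as the 20-picosecond (20 trillionths of a second) window quoted above.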

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Optical Society of America, via EurekAlert!, a service of AAAS.


New mystery on Mars' forgotten plains

ScienceDaily (Oct. 12, 2011) — One of the supposedly best understood and least interesting landscapes on Mars is hiding something that could rewrite the planet's history. Or not. In fact, about all that is certain is that decades of assumptions regarding the wide, flat Hesperia Planum are not holding up very well under renewed scrutiny with higher-resolution, more recent spacecraft data.

"Most scientists don't want to work on the flat things," noted geologist Tracy Gregg of University at Buffalo, State University of New York. So after early Mars scientists decided Hesperia Planum looked like a lava-filled plain, no one really revisited the matter and the place was used to exemplify something rather important: The base of a major transitional period in the geologic time scale of Mars. The period is aptly called the Hesperian and it is thought to have run from 3.7 to 3.1 billion years ago.

But when Gregg and her student Carolyn Roberts started looking at this classic Martian lava plain with modern data sets, they ran into trouble.

"There's a volcano in Hesperia Planum that not many people pay attention to because it's very small," Gregg said. "As I started looking closer at the broader region -- I can't find any other volcanic vents, any flows. I just kept looking for evidence of lava flows. It's kind of frustrating. There is nothing like that in the Hesperia Planum."

"A likely cause of this trouble is the thick dust that blankets Hesperia Planum," she said. "It covers everywhere like a snowfall."

So she turned her attention to what could be discerned on Hesperia Planum: about a dozen narrow, sinuous channels, called rilles, just a few hundred meters wide and up to hundreds of kilometers long. These rilles have no obvious sources or destinations and it is not at all clear they are volcanic.

"The question I have is what made the channels," said Gregg. Was it water, lava, or something else? "There are some lavas that can be really, really runny. And both are liquids that run downhill." So either is a possibility.

To begin to sort the matter out, Gregg and Roberts are now looking for help on the Moon. Their preliminary findings are being presented at the Annual Meeting of The Geological Society of America in Minneapolis.

"On the Moon we see these same kinds of features and we know that water couldn't have formed them there," Gregg said. So they are in the process of comparing channels on the Moon and Mars, using similar data sets from different spacecraft, to see if that sheds any light on the matter. She hopes to find evidence that will rule out water or lava on Hesperia Planum.

"Everybody assumed these were huge lava flows," said Gregg. "But if it turns out to be a lake deposit, it's a very different picture of what Mars was doing at that time." It would also make Hesperia Planum a good place to look for life, because water plus volcanic heat and minerals is widely believed to be a winning combination for getting life started.

"The 'volcanic' part is an interpretation that's beginning to fall apart," said Gregg. "What is holding up is that the Hesperian marks a transition between the Noachian (a time of liquid water on the surface and the formation of lots of impact craters) and the Amazonian (a drier, colder Mars)."

She has found that other scientists are interested in her work because of its possible implications for the Martian geological time scale. Gregg is not worried that Mars history will need to be rewritten, but she does suspect that Hesperia Planum is a lot more complicated than has long been thought.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by The Geological Society of America.


Point defects in super-chilled diamonds may offer stable candidates for quantum computing bits

ScienceDaily (Oct. 13, 2011) — Diamond, nature's hardest known substance, is essential for our modern mechanical world -- drills, cutters, and grinding wheels exploit the durability of diamonds to power a variety of industries. But diamonds have properties that may also make them excellent materials to enable the next generation of solid-state quantum computers and electrical and magnetic sensors.

To further explore diamonds' quantum computing potential, researchers from the University of Science and Technology of China tested the properties of a common defect found in diamond: the nitrogen-vacancy (NV) center.

Consisting of a nitrogen atom impurity paired with a 'hole' where a carbon atom is absent from the crystal lattice, the NV center has the potential to store information because of the predictable way in which electrons confined in the center interact with electromagnetic waves. The research team probed the energy level properties of the trapped electrons by cooling the diamonds to an extremely chilly 5.6 kelvin and then measuring the magnetic resonance and fluorescent emission spectra. The team also measured the same spectra at gradually warmer increments, up to 295 kelvin.

The results, as reported in the AIP's journal Applied Physics Letters, show that at temperatures below 100 kelvin the electrons' transition energies, or the energies required to get from one energy level to the next, were stable. Shifting transition energies could make quantum mechanical manipulations tricky, so cooler temperatures may aid the study and development of diamonds for quantum computation and ultra-sensitive detectors, the authors write.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Institute of Physics, via EurekAlert!, a service of AAAS.

Journal Reference:

X.-D. Chen, C.-H. Dong, F.-W. Sun, C.-L. Zou, J.-M. Cui, Z.-F. Han, G.-C. Guo. Temperature dependent energy level shifts of nitrogen-vacancy centers in diamond. Applied Physics Letters, 2011


Genome duplication encourages rapid adaptation of plants

ScienceDaily (May 3, 2011) — Plants adapt to the local weather and soil conditions in which they grow, and these environmental adaptations are known to evolve over thousands of years as mutations slowly accumulate in plants' genetic code. But a University of Rochester biologist has found that at least some plant adaptations can occur almost instantaneously, not by a change in DNA sequence, but simply by duplication of existing genetic material.

Justin Ramsey's findings were recently published in the Proceedings of the National Academy of Sciences.

While nearly all animals have two sets of chromosomes -- one set inherited from the maternal parent and the other inherited from the paternal parent -- many plants are polyploids, meaning they have four or more chromosome sets. "Some botanists have wondered if polyploids have novel features that allow them to survive environmental change or colonize new habitats," says Assistant Professor Justin Ramsey. "But this idea had not been rigorously tested."

Plant breeders have previously induced polyploidy in crop plants, like corn and tomato, and evaluated its consequences in greenhouses or gardens. Such an experimental approach had never been taken in wild plant species, Ramsey said, so it was unknown how polyploidy affected plant survival and reproduction in nature.

Ramsey decided to perform his own test by studying wild yarrow (Achillea borealis) plants that are common on the coast of California. Yarrow with four chromosome sets (tetraploids) occupy moist, grassland habitats in the northern portion of Ramsey's study area; yarrow with six sets of chromosomes (hexaploids) grow in sandy, dune habitats in the south.

Ramsey transplanted tetraploid yarrow from the north into the southern habitat and discovered that the native hexaploid yarrow had a five-fold survival advantage over the transplanted tetraploid yarrow. This experiment showed that the southern plants are intrinsically adapted to dry conditions; however, it was unclear whether the change in chromosome number, per se, was responsible. Over time, natural hexaploid populations could have accumulated differences in DNA sequence that improved their performance in the dry habitats where they now reside.

To test that idea, Ramsey took first-generation mutant hexaploid yarrow that had been screened from a tetraploid population and transplanted them to the sandy habitat in the south. Ramsey compared the performance of the transplanted yarrows and found that the hexaploid mutants had a 70 percent survival advantage over their tetraploid siblings. Because the tetraploid and hexaploid plants had a shared genetic background, the difference in survivorship was directly attributable to the number of chromosome sets rather than to the DNA sequences contained on the chromosomes.

Ramsey offers two theories for the greater survivorship of the hexaploid plants. It may be that DNA content alters the size and shape of the cells regulating the opening and closing of small pores on the leaf surface. As a result, the rate at which water passes through yarrow leaves may be reduced by an increase in chromosome set number (ploidy). Another possibility, according to Ramsey, is that the addition of chromosome sets masks the effects of deleterious plant genes, similar to those that cause cystic fibrosis and other genetic diseases in humans.

"Sometimes the mechanism of adaptation isn't a difference in genes," said Ramsey, "it's the number of chromosomes." While scientists previously believed polyploidy played a role in creating gene families -- groups of genes with related functions -- they were uncertain whether chromosome duplication itself had adaptive value.

Now, Ramsey says, scientists "should pay more attention to chromosome number, not only as an evolutionary mechanism, but as a form of genetic variation to preserve rare and endangered plants."
Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Rochester.

Journal Reference:

J. Ramsey. From the Cover: Polyploidy and ecological adaptation in wild yarrow. Proceedings of the National Academy of Sciences, 2011; 108 (17): 7096 DOI: 10.1073/pnas.1016631108

Clearing the 'cosmic fog' of the early universe: Massive stars may be responsible

ScienceDaily (Oct. 13, 2011) — The space between the galaxies wasn't always transparent. In the earliest times, it was an opaque, dense fog. How it cleared is an important question in astronomy. New observational evidence from the University of Michigan shows how high-energy light from massive stars could have been responsible.

Astronomers believed that early star-forming galaxies could have provided enough of the right kind of radiation to evaporate the fog, or turn the neutral hydrogen intergalactic medium into the charged hydrogen plasma that remains today. But they couldn't figure out how that radiation could escape a galaxy. Until now.

Jordan Zastrow, a doctoral astronomy student, and Sally Oey, a U-M astronomy professor, observed and imaged the relatively nearby NGC 5253, a dwarf starburst galaxy in the southern constellation Centaurus. Starburst galaxies, as their name implies, are undergoing a burst of intense star formation. While rare today, scientists believe they were very common in the early universe.

The researchers used special filters to see where and how the galaxy's extreme ultraviolet radiation, or UV light, was interacting with nearby gas. They found that the UV light is, indeed, evaporating gas in the interstellar medium. And it is doing so along a narrow cone emanating from the galaxy.

A paper on their work is published Oct. 12 in Astrophysical Journal Letters.

"We are not directly seeing the ultraviolet light. We are seeing its signature in the gas around the galaxy," Zastrow said.

In starburst galaxies, a superwind from these massive stars can clear a passageway through the gas in the galaxy, allowing the radiation to escape, the researchers said.

The shape of the cone they observed could help explain why similar processes in other galaxies have been difficult to detect.

"This feature is relatively narrow. The opening that is letting the UV light out is small, which makes this light challenging to detect. We can think of it as a lighthouse. If the lamp is pointed toward you, you can see the light. If it's pointed away from you, you can't see it," Zastrow said. "We believe the orientation of the galaxy is important as to whether we can detect escaping UV radiation."

The findings could help astronomers understand how the earliest galaxies affected the universe around them.

Also contributing were researchers from the University of Maryland, MIT's Kavli Institute for Astrophysics and Space Research, and the University of California, Berkeley. The research is funded by the National Science Foundation. Observations were conducted with the Magellan Telescopes at Las Campanas Observatory in Chile.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Michigan.

Journal Reference:

Jordan Zastrow, M. S. Oey, Sylvain Veilleux, Michael McDonald, Crystal L. Martin. An ionization cone in the dwarf starburst galaxy NGC 5253. The Astrophysical Journal, 2011; 741 (1): L17 DOI: 10.1088/2041-8205/741/1/L17


New optical signal processing to satisfy power-hungry, high-speed networks

ScienceDaily (Oct. 10, 2011) — Researchers at the University of Southampton have developed a new all-optical signal processing device that meets the demands of high-capacity optical networks and has a wide range of applications, including ultrafast optical measurement and sensing.

The work is part of the European Union Framework 7 PHASORS project, which was completed earlier this year.

In a paper entitled "Multilevel quantization of optical phase in a novel coherent parametric mixer architecture," published in Nature Photonics on October 9, a team of researchers led by Professor David Richardson at the University of Southampton's Optoelectronics Research Centre (ORC) describes a simple, reconfigurable device created to automatically adjust the phase of ultrafast light signals. This phase quantization function is analogous to the way electronic circuits adjust electrical signals to ensure their voltage matches the discrete set of values required for digital computing.
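
To make that analogy concrete, the sketch below mimics the quantization step numerically: noisy phases are snapped back to the nearest of a few allowed levels, which is the digital counterpart of what the parametric mixer does all-optically. The four-level alphabet, the noise level, and all the variable names are illustrative assumptions for this example, not details taken from the paper.

    import numpy as np

    # Conceptual illustration of multilevel phase quantization (not the authors' code).
    levels = 4                                         # assume a four-level phase alphabet
    allowed = 2 * np.pi * np.arange(levels) / levels   # the allowed phase values

    rng = np.random.default_rng(0)
    sent = rng.integers(0, levels, size=10)            # symbols we pretend were transmitted
    noisy = allowed[sent] + rng.normal(0.0, 0.2, size=sent.size)  # phase noise picked up in transit

    # Snap each noisy phase to the nearest allowed level, handling the 2*pi wrap-around.
    diff = np.abs(np.angle(np.exp(1j * (noisy[:, None] - allowed[None, :]))))
    restored = diff.argmin(axis=1)

    print("sent:    ", sent)
    print("restored:", restored)   # at this noise level the restored symbols should match `sent`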

According to Professor Richardson at the ORC, this is a significant breakthrough because the new device allows an unprecedented level of control and flexibility in processing light with light -- functionality that is needed now that ultra-high-speed optical signals can be found everywhere from the communication links between microprocessor cores in next-generation supercomputers to the sub-sea fibre links spanning continents.

"Today parametric mixers are routinely used for laser wavelength conversion, spectroscopy, interferometry and optical amplification," said Mr Joseph Kakande a PhD student at ORC who undertook most of the research "Conventional parametric mixers when operated in a phase sensitive fashion have for many decades been known to have a two-level response. We have now managed to achieve a multilevel phase response which means that we have demonstrated for the first time, a device that squeezes the classical characteristics of its input light to more than two phase levels."

As an example, the team has already used the device to remove noise picked up by a signal during transmission in optical fibre at over 100 Gbit/s. In principle, this can be done even faster, at speeds hundreds of times greater than is possible using electronics, and, crucially, using less power. The researchers envisage many as-yet-unknown deployment opportunities, given that controlling the phase of light also finds use in applications ranging from the ultrasensitive interferometers used in the hunt for gravitational waves to probes of the inner workings of cells.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Southampton, via AlphaGalileo.

Journal Reference:

Joseph Kakande, Radan Slavík, Francesca Parmigiani, Adonis Bogris, Dimitris Syvridis, Lars Grüner-Nielsen, Richard Phelan, Periklis Petropoulos, David J. Richardson. Multilevel quantization of optical phase in a novel coherent parametric mixer architecture. Nature Photonics, 2011; DOI: 10.1038/nphoton.2011.254


New Saudi Arabias of solar energy: Himalaya Mountains, Andes, Antarctica

ScienceDaily (Oct. 13, 2011) — Mention prime geography for generation of solar energy, and people tend to think of hot deserts. But a new study concludes that some of the world's coldest landscapes -- including the Himalaya Mountains, the Andes, and even Antarctica -- could become Saudi Arabias of solar. The research appears in the ACS journal Environmental Science & Technology.

Kotaro Kawajiri and colleagues explain that the potential for generating electricity from renewable solar energy depends heavily on geographic location. Arid and semi-arid areas with plenty of sunshine have long been recognized as good solar sites. However, the scientists point out that, because of the limited data available on critical weather-related conditions at a global scale, gaps still exist in knowledge about the best geographical locations for producing solar energy. To expand that knowledge, they used an established technique to estimate global solar energy potential from the data that are available. The technique takes into account the effect of temperature on the output of solar cells. Future work will consider other variables, such as transmission losses and snowfall.
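
The temperature effect folded into such estimates can be illustrated with the widely used linear derating rule for crystalline-silicon modules, whose output falls by roughly 0.4 percent for every degree the cell runs above 25 °C. The sketch below is not the model from the paper; the module rating, temperature coefficient, and site temperatures are illustrative assumptions, but it shows why a cold, sunlit mountain site can out-produce a hot desert receiving the same irradiance.

    # Minimal sketch of temperature derating for a PV module (illustrative values only).
    def pv_power(irradiance_w_m2, cell_temp_c,
                 rated_power_w=300.0,        # nameplate rating at standard test conditions (assumed)
                 temp_coeff_per_c=-0.004):   # ~-0.4 %/degree C, typical for crystalline silicon (assumed)
        """Estimate module output from plane-of-array irradiance and cell temperature."""
        derate = 1.0 + temp_coeff_per_c * (cell_temp_c - 25.0)
        return rated_power_w * (irradiance_w_m2 / 1000.0) * derate

    print(pv_power(1000, cell_temp_c=65))  # hot desert cell: ~252 W
    print(pv_power(1000, cell_temp_c=5))   # cold high-altitude cell: ~324 W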

As expected, they found that many hot regions such as the U.S. desert southwest are ideal locations for solar arrays. However, they also found that many cold regions at high elevations receive a lot of sunlight -- so much so that their potential for producing power from the sun is even higher than in some desert areas. Kawajiri and colleagues found, for instance, that the Himalayas, which include Mt. Everest, could be an ideal locale for solar fields that generate electricity for the fast-expanding economy of the People's Republic of China.

The authors acknowledge funding from the National Institute of Advanced Industrial Science and Technology and the Japan Society for the Promotion of Science.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Chemical Society, via EurekAlert!, a service of AAAS.

Journal Reference:

Kotaro Kawajiri, Takashi Oozeki, Yutaka Genchi. Effect of Temperature on PV Potential in the World. Environmental Science & Technology, 2011; DOI: 10.1021/es200635x


Friday 28 October 2011

Frustration inspires new form of graphene

ScienceDaily (Oct. 14, 2011) — They're the building blocks of graphite -- ultra-thin sheets of carbon, just one atom thick, whose discovery was recognized in 2010 with the Nobel Prize in Physics.

The seemingly simple material is graphene, and many researchers believe it has great potential for applications ranging from electronic devices to high-performance composite materials. Graphene is extremely strong, an excellent conductor, and, with no internal structure at all, it offers an abundance of surface area -- much like a sheet of paper.

When it comes to producing and utilizing graphene on a large scale, however, researchers have come upon a major problem: the material's tendency to aggregate. Like paper, graphene sheets easily stack into piles, significantly reducing their surface area and making them unprocessable.

Researchers at Northwestern University have now developed a new form of graphene that does not stack. The new material -- inspired by a trash can full of crumpled-up papers -- is made by crumpling the graphene sheets into balls.

A paper describing the findings was published Oct. 13 in the journal ACS Nano.

Graphene-based materials are very easily aggregated due to the strong interaction between the sheets, called "Van der Waals attraction." Therefore, common steps in materials processing, such as heating, solvent washing, compression, and mixing with other materials, can affect how the sheets are stacked. When the paper-like sheets band together -- picture a deck of cards -- their surface area is lost; with just a fraction of its original surface area available, the material becomes less effective. Stacked graphene sheets also become rigid and lose their processability.

Some scientists have tried to physically keep the sheets apart by inserting non-carbon "spacers" between them, but that changes the chemical composition of the material. When graphene is crumpled into balls, however, its surface area remains available and the material remains pure.

"If you imagine a trash can filled with paper crumples, you really get the idea," says Jiaxing Huang, Morris E. Fine Junior Professor in Materials and Manufacturing, the lead researcher of the study. "The balls can stack up into a tight structure. You can crumple them as hard as you want, but their surface area won't be eliminated, unlike face-to-face stacking."

"Crumpled paper balls usually express an emotion of frustration, a quite common experience in research," Huang says, "However, here 'frustration' quite appropriately describes why these particles are resistant to aggregation -- because their uneven surface frustrates or prevents tight face-to-face packing no matter how you process them."

To make crumpled graphene balls, Huang and his team created freely suspended water droplets containing graphene-based sheets, then used a carrier gas to blow the aerosol droplets through a furnace. As the water quickly evaporated, the thin sheets were compressed by capillary force into near-spherical particles.

The resulting crumpled graphene particles have the same electrical properties as the flat sheets but are more useful for applications that require large amounts of the material. The ridges formed in the crumpling process give the particles a strain-hardening property: the harder you compress them, the stronger they become. The crumpled graphene balls are therefore remarkably stable against mechanical deformation, Huang said. "We expect this to serve as a new graphene platform to investigate application in energy storage and energy conversion," Huang said.

Other authors of the paper were Jiayan Luo, Hee Dong Jang, Tao Sun, Li Xiao, Zhen He, Alexandros P. Katsoulidis, Mercouri G. Kanatzidis, and J. Murray Gibson.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Northwestern University.

Journal Reference:

Jiayan Luo, Hee Dong Jang, Tao Sun, Li Xiao, Zhen He, Alexandros P. Katsoulidis, Mercouri G. Kanatzidis, J. Murray Gibson, Jiaxing Huang. Compression and Aggregation-resistant Particles of Crumpled Soft Sheets. ACS Nano, 2011; DOI: 10.1021/nn203115u


Almahata Sitta meteorite could come from triple asteroid mash-up

ScienceDaily (Oct. 11, 2011) — Analysis of fragments of the Almahata Sitta meteorite, which landed in Sudan in 2008, has shown that the parent asteroid was probably formed through collisions of three different types of asteroids. The meteorites are of particular interest because they contain material from both primitive and evolved types of asteroids.

The results are being presented at the EPSC-DPS Joint Meeting 2011 in Nantes, France, by Dr Julie Gayon-Markt.

The meteorites are fragments of the asteroid 2008 TC3, which impacted Earth exactly three years ago on 7th October 2008. More than 600 fragments were collected from the Nubian Desert in Sudan. They are collectively known as Almahata Sitta, which is Arabic for "Station Six," a train station between Wadi Halfa and Khartoum near where the fragments were found. The impact was historic because it was the first time that an asteroid was observed in space and tracked as it descended through Earth's atmosphere.

"Because falls of meteorites of different types are rare, the question of the origin of an asteroid harbouring both primitive and evolved characteristics is a challenging and intriguing problem," said Gayon-Markt. "Our recent studies of the dynamics and spectroscopy of asteroids in the main asteroid belt shed light on the origin of the Almahata Sitta fragments. We show that the Nysa-Polana asteroid family, located in the inner Main Belt is a very good candidate for the origin of 2008TC3."

Primitive asteroids, which are relatively unchanged since the birth of the Solar System, contain high proportions of hydrated minerals and organic materials. However, many other asteroids have undergone heating at some point, probably through the decay of radioactive materials, and the molten magma has separated into an iron core surrounded by a rocky mantle.

The Nysa-Polana family is divided into three different types: relatively rare B-type asteroids, which are primitive remnants of the early Solar System; stony S-type asteroids; and intermediate X-type asteroids. Both S-type and X-type asteroids have undergone thermal evolution in their past. The spectral characteristics of all three types are found in the Almahata Sitta fragments. The Nysa-Polana family is located in the inner Main Asteroid Belt and has a low orbital inclination relative to the ecliptic plane, which corresponds to the low inclination of 2008TC3 during its journey to Earth.

The study led by Gayon-Markt suggests that 2008TC3 formed from the impact of an S-type object in the inner Main Asteroid Belt with a B-type object from the Nysa-Polana family, followed by a second impact with an X-type asteroid of the same family.

"Around seventy to eighty percent of the Almahata Sitta fragments are what we call ureilites. Although ureilites show both primitive and evolved characteristics, their spectra in visible light are very similar to B-type primitive objects. The remaining twenty to thirty percent of the Almahata Sitta fragments gather two other kinds of meteorites which are linked to S-type and X-type asteroids. A workable explanation for how asteroid 2008TC3 could have formed involves low velocity collisions between these asteroid fragments of very different mineralogies," said Gayon-Markt.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Europlanet Media Centre, via AlphaGalileo.


Watching motion of electrons in molecules during chemical reactions

ScienceDaily (Oct. 14, 2011) — A research group led by ETH Zurich has now, for the first time, visualized the motion of electrons during a chemical reaction. The new findings in the experiment are of fundamental importance for photochemistry and could also assist the design of more efficient solar cells.

In 1999, Ahmed Zewail was awarded the Nobel Prize in Chemistry for his studies of chemical reactions using ultrashort laser pulses. Zewail was able to watch the motion of atoms and thus visualize transition states at the molecular level. Watching the dynamics of single electrons was still considered a dream at that time. Thanks to the latest developments in laser technology and intense research in the field of attosecond spectroscopy (1 attosecond = 10^-18 s), the field has developed rapidly. For the first time, Prof. Hans Jakob Wörner from the Laboratory of Physical Chemistry at ETH Zurich, together with colleagues from Canada and France, was able to record electronic motion during a complete chemical reaction. The experiment is described in the latest issue of Science.

The research team irradiated nitrogen dioxide molecules (NO2) with a very short ultraviolet pulse. The molecule takes up the energy from the pulse, which sets the electrons in motion. The electrons start rearranging themselves, causing the electron cloud to oscillate between two different shapes for a very short time, before the molecule starts to vibrate and eventually decomposes into nitric oxide and an oxygen atom.

Conical intersections

Nitrogen dioxide has model character with respect to understanding electronic motion. In the NO2 molecule, two states of the electrons can have the same energy for a particular geometry -- commonly described as conical intersection. The conical intersection is very important for photochemistry and frequently occurs in natural chemical processes induced by light. The conical intersection works like a dip-switch. For example, if the retina of a human eye is irradiated by light, the electrons start moving, and the molecules of the retina (retinal) change their shape, which finally converts the information of light to electrical information for the human brain. The special aspect about conical intersections is that the motion of electrons is transferred to a motion of the atoms very efficiently.

Snapshot of an electron

In an earlier article, Hans Jakob Wörner had already shown how attosecond spectroscopy can be used to watch the motion of electrons. The first, weak ultraviolet pulse sets the electrons in motion. The second, strong infrared pulse then removes an electron from the molecule, accelerates it and drives it back to the molecule. As a result, an attosecond light pulse is emitted, which carries a snapshot of the electron distribution in the molecule. Wörner illustrates the principle of attosecond spectroscopy: "The experiment can be compared to photographs which, for example, image a bullet shot through an apple. The bullet would be too fast for the shutter of a camera, resulting in a blurred image. Therefore, the shutter is left open and the picture is illuminated with light flashes, which are faster than the bullet. That's how we get our snapshot."

From the experiment to solar cells

When the electron returns to the molecule, it releases energy in the form of light. In the experiment, Wörner and his colleagues measured this light and were therefore able to deduce detailed information on the electron distribution and its evolution over time. This information reveals details of chemical reaction mechanisms that were not accessible to most previous experimental techniques. The experiment on NO2 helps in understanding fundamental processes in molecules and is an ideal complement to computer simulations of photochemical processes: "What makes our experiment so important is that it verifies theoretical models," says Wörner. The immense interest in photochemical processes is not surprising, as this area of research aims at improving solar cells and making artificial photosynthesis possible.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by ETH Zürich.

Journal Reference:

H. J. Worner, J. B. Bertrand, B. Fabre, J. Higuet, H. Ruf, A. Dubrouil, S. Patchkovskii, M. Spanner, Y. Mairesse, V. Blanchet, E. Mevel, E. Constant, P. B. Corkum, D. M. Villeneuve. Conical Intersection Dynamics in NO2 Probed by Homodyne High-Harmonic Spectroscopy. Science, 2011; 334 (6053): 208 DOI: 10.1126/science.1208664


Intel to Mass-Produce New 3-D Transistors for Faster, More Efficient Computer Chips


3-D Transistor: This image shows the vertical fins of Intel's 22-nanometer microprocessor using 3-D Tri-Gate transistors. Credit: Intel
In a move that could remake the microchip industry, Intel announced Wednesday it will start mass-producing the first three-dimensional silicon transistors. The 3-D transistor design, which Intel says will improve efficiency by more than one-third, will be integrated into the company's 22-nanometer chip, codenamed Ivy Bridge.
It’s a major change from the two-dimensional flat transistor structure we all know and love, which has powered every computer chip for the last 50 years. The 3-D switch design and the scale of its production will allow Moore’s Law to advance apace, Intel said.
Moore’s Law holds that the number of transistors that can be placed on a circuit will double every two years, but this places limits on the circuits’ size — a growing problem as engineers cram greater numbers of transistors onto ever-tinier chips. A 3-D switch could allow computer chips to be built like skyscrapers, optimizing space by building upward, and thereby allowing uninhibited transistor growth.
The Tri-Gate transistors consist of a thin 3-D silicon fin that rises vertically from the silicon substrate, Intel explains. Each fin has three gates, one on the top and one on each side, which allows for greater control of the transistor's current. When it's on, current flow is more efficient, and when the switches are off, the flow of electrons is closer to zero. By contrast, flat transistors have a single gate, on top only.
All this leads to greater efficiency, allowing chips to operate at a lower voltage and with lower leakage — Intel claims a whopping 37 percent performance increase over its 2-D chips. Since the fins and their gates are vertical, more transistors can be packed close together. Eventually, designers will be able to make taller fins, aiming for even better performance.
“It will give product designers the flexibility to make current devices smarter and wholly new ones possible,” said Mark Bohr, a senior fellow at Intel.
More than 6 million 22-nm Tri-Gate transistors could fit inside the period at the end of this sentence, according to the company. (If you zoom in, who knows how many could fit!)
The new transistors will be integrated into Ivy Bridge-based Intel Core processors by the end of this year, which consumers will be able to get in 2012, Intel said.
Plenty of other chip designers have been talking about 3-D chips — just last month, we saw a 2-D reprogrammable one designed to behave as if it was a 3-D one. But Intel has taken it a step further by figuring out how to mass-produce them.
It’s technically 3-D because the switches are vertical and horizontal, but the transistors are not stacked, allowing electrons to flow in three dimensions — that’s a holy Grail of microprocessor design. But a new circuit design that allows more transistors on tinier spaces certainly sounds like a major breakthrough.
[IBM via PC Magazine]
