Saturday, 23 July 2011

New method for imaging molecules inside cells

ScienceDaily (June 29, 2011) — Using a new sample holder, researchers at the University of Gothenburg have further developed a method for imaging individual cells. It makes it possible to produce snapshots that show not only the cell's contours but also the various molecules inside or on the surface of the cell, and exactly where they are located -- something that is impossible with a normal microscope.

Individual human cells are small, just one or two hundredths of a millimeter in diameter. As such, special measuring equipment is needed to distinguish the various parts inside the cell. Researchers generally use a microscope that magnifies the cell and shows its contours, but does not provide any information about the molecules inside the cell or on its surface.

"The new sample holder is filled with holds cells in solution," says Ingela Lanekoff, one of the researchers who developed the new method at the University of Gothenburg's Department of Chemistry. "We then rapidly freeze the sample down to -196°C, which enables us to get a snapshot of where the various molecules are at the moment of freezing. Using this technique we can produce images that show not only the outline of the cell's contours, but also the molecules that are there, and where they are located."

Important to measure chemical processes in the body

So why do the researchers want to know which molecules are to be found in a single cell? Because the cell is the smallest living component there is, and the chemical processes that take place here play a major role in how the cell functions in our body. For example, our brain has special cells that can communicate with each other through chemical signals. This vital communication has been shown to be dependent on the molecules in the cell's membrane.

Imaging the molecules in the membrane of individual cells enables researchers to measure changes. Together with previous results, Lanekoff's findings show that the rate of communication in the studied cells is affected by a change of less than one per cent in the abundance of a specific molecule in the membrane. This would suggest that communication between cells in the brain is heavily dependent on the chemical composition of each cell's membrane, which could be an important piece of the puzzle in explaining the mechanisms behind learning and memory.

The thesis, Analysis of phospholipids in cellular membranes with LC and imaging mass spectrometry, has been successfully defended at the University of Gothenburg. Supervisors: Andrew Ewing and Roger Karlsson. Download the thesis at: hdl.handle.net/2077/25279

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Gothenburg.


Physicists observe 'campfire effect' in blinking nanorod semiconductors

ScienceDaily (June 24, 2011) — When semiconductor nanorods are exposed to light, they blink in a seemingly random pattern. By clustering nanorods together, physicists at the University of Pennsylvania have shown that their combined "on" time is increased dramatically, providing new insight into this mysterious blinking behavior.

The research was conducted by associate professor Marija Drndic's group, including graduate student Siying Wang and postdoctoral fellows Claudia Querner and Tali Dadosh, all of the Department of Physics and Astronomy in Penn's School of Arts and Sciences. They collaborated with Catherine Crouch of Swarthmore College and Dmitry Novikov of New York University's School of Medicine.

Their research was published in the journal Nature Communications.

When provided with energy, whether in the form of light, electricity or certain chemicals, many semiconductors emit light. This principle is at work in light-emitting diodes, or LEDs, which are found in any number of consumer electronics.

At the macro scale, this electroluminescence is consistent; LED light bulbs, for example, can shine for years with a fraction of the energy used by even compact-fluorescent bulbs. But when semiconductors are shrunk down to nanometer size, instead of shining steadily, they turn "on" and "off" in an unpredictable fashion, switching between emitting light and being dark for variable lengths of time. For the decade since this was observed, many research groups around the world have sought to uncover the mechanism of this phenomenon, which is still not completely understood.

"Blinking has been studied in many different nanoscale materials for over a decade, as it is surprising and intriguing, but it's the statistics of the blinking that are so unusual," Drndic said. "These nanorods can be 'on' and 'off' for all scales of time, from a microsecond to hours. That's why we worked with Dmitry Novikov, who studies stochastic phenomena in physical and biological systems. These unusual Levi statistics arise when many factors compete with each other at different time scales, resulting in a rather complex behavior, with examples ranging from earthquakes to biological processes to stock market fluctuations."

Drndic and her research team, through a combination of imaging techniques, have shown that clustering these nanorod semiconductors greatly increases their total "on" time in a kind of "campfire effect." Adding a rod to the cluster has a multiplying effect on the "on" period of the group.

"If you put nanorods together, if each one blinks in rare short bursts, you would think the maximum 'on' time for the group will not be much bigger than that for one nanorod, since their bursts mostly don't overlap," Novikov said. "What we see are greatly prolonged 'on' bursts when nanorods are very close together, as if they help each other to keep shining, or 'burning.'"

Drndic's group demonstrated this by depositing cadmium selenide nanorods onto a substrate, shining a blue laser on them, then taking video under an optical microscope to observe the red light the nanorods then emitted. While that technique provided data on how long each cluster was "on," the team needed to use transmission electron microscopy, or TEM, to distinguish each individual, 5-nanometer rod and measure the size of each cluster.

A set of gold gridlines allowed the researchers to label and locate individual nanorod clusters. Wang then accurately overlaid about a thousand stitched-together TEM images with the luminescence data that she took with the optical microscope. The researchers observed the "campfire effect" in clusters as small as two and as large as 110, when the cluster effectively took on macroscale properties and stopped blinking entirely.

While the exact mechanism that causes this prolonged luminescence can't yet be pinpointed, Drndic's team's findings support the idea that interactions between electrons in the cluster are at the root of the effect.

"By moving from one end of a nanorod to the other, or otherwise changing position, we hypothesize that electrons in one rod can influence those in neighboring rods in ways that enhance the other rods' ability to give off light," Crouch said. "We hope our findings will give insight into these nanoscale interactions, as well as helping guide future work to understand blinking in single nanoparticles."

Because nanorods can be an order of magnitude smaller than a cell yet emit a signal that is relatively easy to see under a microscope, they have long been considered potential biomarkers. Their inconsistent pattern of illumination, however, has limited their usefulness.

"Biologists use semiconductor nanocrystals as fluorescent labels. One significant disadvantage is that they blink," Drndic said. "If the emission time could be extended to many minutes it makes them much more usable. With further development of the synthesis, perhaps clusters could be designed as improved labels."

Future research will use more ordered nanorod assemblies and controlled inter-particle separations to further study the details of particle interactions.

This research was supported by the National Science Foundation.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Pennsylvania.

Journal Reference:

Siying Wang, Claudia Querner, Tali Dadosh, Catherine H. Crouch, Dmitry S. Novikov, Marija Drndic. Collective fluorescence enhancement in nanoparticle clusters. Nature Communications, 2011; 2: 364 DOI: 10.1038/ncomms1357


Video: Japanese Silicone-Skinned Dental Patient-Bot Flinches and Gags Just Like a Real Person, Only Much Creepier


Here’s one more reason to dread going to the dentist — you could run into this super-creepy patient-bot. Showa Hanako 2, unveiled at a press conference in Japan Thursday, will be sold through a dental supply company in Japan.

The robot is designed to help dental students practice, so researchers at Showa University made it as realistic as possible. That means it chokes, coughs, sneezes, moves its tongue and even gets a sore jaw. The builders worked with Japan’s leading maker of "love dolls" for adults, Orient Industry, to make the skin, tongue and mouth.

The tongue and arms have two degrees of freedom, so the robot can mimic a patient who starts to fidget while a dentist tries to perform prophylaxis. The skin is made of silicone and the tongue and cheek linings are made of one piece, so it feels more real — no obvious rubber seams, and no way for water to leak through and harm the machinery.

Koutaro Maki, a professor at Showa University School of Dentistry, explains somewhat bashfully that Orient Industry “had the technology” to produce such realistic mouths.

In seriousness, he says students get a sense that they’re working on a real patient, with all the extra tension that comes with that responsibility.

“If you don’t try to make a robot’s face look realistic, it doesn’t have the same effect on users psychologically,” he explains to DigInfo TV.

Why yes, professor, that's true. Check it out.

[via New Scientist]



When matter melts: Scientists map phase changes in quark-gluon plasma

ScienceDaily (June 24, 2011) — In its infancy, when the universe was a few millionths of a second old, the elemental constituents of matter moved freely in a hot, dense soup of quarks and gluons. As the universe expanded, this quark-gluon plasma quickly cooled, and protons and neutrons and other forms of normal matter "froze out": the quarks became bound together by the exchange of gluons, the carriers of the color force.

"The theory that describes the color force is called quantum chromodynamics, or QCD," says Nu Xu of the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), the spokesperson for the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) at DOE's Brookhaven National Laboratory. "QCD has been extremely successful at explaining interactions of quarks and gluons at short distances, such as high-energy proton and antiproton collisions at Fermi National Accelerator Laboratory. But in bulk collections of matter -- including the quark-gluon plasma -- at longer distances or smaller momentum transfer, an approach called lattice gauge theory has to be used."

Until recently, lattice QCD calculations of hot, dense, bulk matter could not be tested against experiment. Beginning in 2000, however, RHIC was able to recreate the extreme conditions of the early universe in miniature, by colliding massive gold nuclei (heavy ions) at high energies.

Experimentalists at RHIC, working with theorist Sourendu Gupta of India's Tata Institute of Fundamental Research, have recently compared lattice-theory predictions about the nature of the quark-gluon plasma with certain STAR experimental results for the first time. In so doing they have established the temperature boundary where ordinary matter and quark matter cross over and change phase. Their results appear in the journal Science.

Phase diagrams

The aim of both the theoretical and experimental work is to explore and fix key points in the phase diagram for quantum chromodynamics. Phase diagrams are maps, showing, for example, how changes in pressure and temperature determine the phases of water, whether ice, liquid, or vapor. A phase diagram of QCD would map the distribution of ordinary matter (known as hadronic matter), the quark-gluon plasma, and other possible phases of QCD such as color superconductivity.

"Plotting a QCD phase diagram requires both theory calculations and experimental effort with heavy-ion collisions," says Xu, who is a member of Berkeley Lab's Nuclear Science Division and an author of the Science paper. Experimental studies require powerful accelerators like RHIC on Long Island or the Large Hadron Collider at CERN in Geneva, while calculations of QCD using lattice gauge theory require the world's biggest and fastest supercomputers. Direct comparisons can achieve more than either approach alone.

One of the basic requirements of any phase diagram is to establish its scale. A phase diagram of water might be based on the Celsius temperature scale, defined by the boiling point of water under normal pressure (i.e., at sea level). Although the boiling point changes with pressure -- at higher altitudes water boils at lower temperatures -- these changes are measured against a fixed value.

The scale of the QCD phase diagram is defined by a transition temperature at the zero value of "baryon chemical potential." Baryon chemical potential measures the imbalance between matter and antimatter, and zero indicates perfect balance.

Through extensive calculations and actual data from the STAR experiment, the team was indeed able to establish the QCD transition temperature. Before they could do so, however, they first had to realize an equally significant result, showing that the highly dynamical systems of RHIC's gold-gold collisions, in which the quark-gluon plasma winks in and out of existence, in fact achieve thermal equilibrium. Here's where theory and experiment worked hand in hand.

"The fireballs that result when gold nuclei collide are all different, highly dynamic, and last an extremely short time," says Hans Georg Ritter, head of the Relativistic Nuclear Collisions program in Berkeley Lab's Nuclear Science Division and an author of the Science paper. Yet because differences in values of the kind observed by STAR are related to fluctuations in thermodynamic values predicted by lattice gauge theory, says Ritter, "by comparing our results to the predictions of theory, we have shown that what we measure is in fact consistent with the fireballs reaching thermal equilibrium. This is an important achievement."

The scientists were now able to proceed with confidence in establishing the scale of the QCD phase diagram. After a careful comparison between experimental data and the results from the lattice gauge theory calculations, the scientists concluded that the transition temperature (expressed in units of energy) is 175 MeV (175 million electron volts).
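
For readers who want that figure in more familiar units, the short Python sketch below (an illustration added here, not part of the STAR analysis) converts 175 MeV into kelvin using only the standard Boltzmann constant; the result is on the order of 2 x 10^12 K.

    # Back-of-the-envelope conversion of the QCD transition temperature from
    # energy units (MeV) to kelvin. Illustrative only: the 175 MeV value is
    # taken from the article above; the Boltzmann constant is the standard
    # value in eV per kelvin.
    K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant, eV/K

    def mev_to_kelvin(t_mev: float) -> float:
        """Convert a temperature expressed in MeV (energy units) to kelvin."""
        return t_mev * 1.0e6 / K_B_EV_PER_K

    print(f"175 MeV is about {mev_to_kelvin(175.0):.2e} K")  # roughly 2.03e+12 K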

Thus the team could develop a "conjectural" phase diagram that showed the boundary between the low-temperature hadronic phase of ordinary matter and the high-temperature quark-gluon phase.

In search of the critical point

Lattice QCD also predicts the existence of a "critical point." In a QCD phase diagram the critical point marks the end of a line showing where the two phases cross over, one into the other. By changing the collision energy, for example, the baryon chemical potential (the balance of matter and antimatter) can be adjusted.

Among the world's heavy-ion colliders, only RHIC can tune the energy of the collisions through the region of the QCD phase diagram where the critical point is most likely to be found -- from an energy of 200 billion electron volts per pair of nucleons (protons or neutrons) down to 5 billion electron volts per nucleon pair.

Says Ritter, "Establishing the existence of a QCD critical point would be much more significant than setting the scale." In 2010, RHIC started a program to search for the QCD critical point.

Xu says, "In this paper, we compared experimental data with lattice calculations directly, something never done before. This is a real step forward and allows us to establish the scale of the QCD phase diagram. Thus begins an era of precision measurements for heavy-ion physics."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Sourendu Gupta, Xiaofeng Luo, Bedangadas Mohanty, Hans Georg Ritter and Nu Xu. Scale for the phase diagram of quantum chromodynamics. Science, 24 June 2011 DOI: 10.1126/science.1204621


Scientists a step closer to understanding 'natural antifreeze' molecules

ScienceDaily (June 23, 2011) — Scientists have made an important step forward in their understanding of cryoprotectants -- compounds that act as natural 'antifreeze' to protect drugs, food and tissues stored at sub-zero temperatures.

Researchers from the Universities of Leeds and Illinois, and Columbia University in New York, studied a particular type of cryoprotectants known as osmolytes. They found that small osmolyte molecules are better at protecting proteins than larger ones.

The findings, published in Proceedings of the National Academy of Sciences, could help scientists develop better storage techniques for a range of materials, including human reproductive tissue used in IVF.

Biological systems can usually only operate within a small range of temperatures. If they get too hot or too cold, the molecules within the system can become damaged (denatured), which affects their structure and stops them from functioning.

But certain species of fish, reptiles and amphibians can survive for months below freezing by entering into a kind of suspended animation. They are able to survive these extreme conditions thanks to osmolytes -- small molecules within their blood that act like antifreeze, preventing damage to their vital organs.

These properties have made osmolytes attractive to scientists. They are used widely in the storage and testing of drugs and other pharmaceuticals; in food production; and to store human tissue like egg and sperm cells at very low temperatures (below -40°C) for a long period of time.

"If you put something like human tissue straight in the freezer, ice crystals start to grow in the freezing water and solutes -- solid particles dissolved in the water -- get forced out into the remaining liquid.

This can result in unwanted high concentrations of solutes, such as salt, which can be very damaging to the tissue," said Dr Lorna Dougan from the University of Leeds, who led the study. "The addition of cryoprotectants, such as glycerol, lowers the freezing temperature of water and prevents crystallisation by producing a 'syrupy' semi-solid state. The challenge is to know which cryoprotectant molecule to use and how much of it is necessary.

"We want to get this right so that we recover as much of the biological material as possible after re-thawing. This has massive cost implications, particularly for the pharmaceutical industry because at present they lose a large proportion of their viable drug every time they freeze it."

Dr Dougan and her team tested a range of different osmolytes to find out which ones are most effective at protecting the 3D structure of a protein. They used an atomic force microscope to unravel a test protein in a range of different osmolyte environments to find out which ones were most protective. They discovered that smaller molecules, such as glycerol, are more effective than larger ones like sorbitol and sucrose.

Dr Dougan said: "We've been able to show that if you want to really stabilise a protein, it makes sense to use small protecting osmolytes. We hope to use this discovery and future research to develop a simple set of rules that will allow scientists and industry to use the best process parameters for their system and in doing so dramatically increase the amount of material they recover from the freeze-thaw cycle."

The research was funded by the UK Engineering and Physical Sciences Research Council, the US National Institutes of Health and the China National Basic Research Program.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Leeds.

Journal Reference:

L. Dougan, G. Z. Genchev, H. Lu, J. M. Fernandez. Probing osmolyte participation in the unfolding transition state of a protein. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1101934108


Friday, 22 July 2011

New solar cell: Engineers crack full-spectrum solar challenge

ScienceDaily (June 27, 2011) — In a paper published in Nature Photonics, U of T Engineering researchers report a new solar cell that may pave the way to inexpensive coatings that efficiently convert the sun's rays to electricity.

The U of T researchers, led by Professor Ted Sargent, report the first efficient tandem solar cell based on colloidal quantum dots (CQD). "The U of T device is a stack of two light-absorbing layers -- one tuned to capture the sun's visible rays, the other engineered to harvest the half of the sun's power that lies in the infrared," said lead author Dr. Xihua Wang.

"We needed a breakthrough in architecting the interface between the visible and infrared junction," said Sargent, a Professor of Electrical and Computer Engineering at the University of Toronto, who is also the Canada Research Chair in Nanotechnology. "The team engineered a cascade -- really a waterfall -- of nanometers-thick materials to shuttle electrons between the visible and infrared layers."

According to doctoral student Ghada Koleilat, "We needed a new strategy -- which we call the Graded Recombination Layer -- so that our visible and infrared light-harvesters could be linked together efficiently, without any compromise to either layer."

The team pioneered solar cells made using CQD, nanoscale materials that can readily be tuned to respond to specific wavelengths of the visible and invisible spectrum. By capturing such a broad range of light waves -- wider than normal solar cells -- tandem CQD solar cells can in principle reach up to 42 per cent efficiencies. The best single-junction solar cells are constrained to a maximum of 31 per cent efficiency. In reality, solar cells that are on the roofs of houses and in consumer products have 14 to 18 per cent efficiency. The work builds on the Toronto team's world-leading colloidal quantum dot solar cells, which have reached 5.6 per cent efficiency.

"Building efficient, cost-effective solar cells is a grand global challenge. The University of Toronto is extremely proud of its world-class leadership in the field," said Professor Farid Najm, Chair of The Edward S. Rogers Sr. Department of Electrical & Computer Engineering.

Sargent is hopeful that in five years solar cells using the graded recombination layer published in the Nature Photonics paper will be integrated into building materials, mobile devices, and automobile parts.

"The solar community -- and the world -- needs a solar cell that is over 10% efficient, and that dramatically improves on today's photovoltaic module price points," said Sargent. "This advance lights up a practical path to engineering high-efficiency solar cells that make the best use of the diverse photons making up the sun's broad palette."

The publication was based in part on work supported by an award made by the King Abdullah University of Science and Technology (KAUST), by the Ontario Research Fund Research Excellence Program, and by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Equipment from Angstrom Engineering and Innovative Technology enabled the research.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Toronto Faculty of Applied Science & Engineering, via EurekAlert!, a service of AAAS.

Journal Reference:

Xihua Wang, Ghada I. Koleilat, Jiang Tang, Huan Liu, Illan J. Kramer, Ratan Debnath, Lukasz Brzozowski, D. Aaron R. Barkhouse, Larissa Levina, Sjoerd Hoogland, Edward H. Sargent. Tandem colloidal quantum dot solar cells employing a graded recombination layer. Nature Photonics, 2011; DOI: 10.1038/nphoton.2011.123


Splitsville for boron nitride nanotubes

ScienceDaily (June 29, 2011) — For Hollywood celebrities, the term "splitsville" usually means "check your prenup." For scientists wanting to mass-produce high quality nanoribbons from boron nitride nanotubes, "splitsville" could mean "happily ever after."

Scientists with the Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, working with scientists at Rice University, have developed a technique in which boron nitride nanotubes are stuffed with atoms of potassium until the tubes split open along a longitudinal seam. This creates defect-free boron nitride nanoribbons of uniform lengths and thickness. Boron nitride nanoribbons are projected to display a variety of intriguing magnetic and electronic properties that hold enormous potential for future devices.

Nanoribbons are two-dimensional single crystals (meaning only a single atom in thickness) that can measure multiple microns in length, but only a few hundred nanometers or less in width. Graphene nanoribbons, which are made from pure carbon, carry electrons at much faster speeds than silicon, and can be used to cover wide areas and a broad assortment of shapes. Boron nitride nanoribbons offer similar advantages plus an additional array of electronic, optical and magnetic properties.

"There has been a significant amount of theoretical work indicating that, depending on the ribbon edges, boron nitride nanoribbons may exhibit ferromagnetism or anti-ferromagnetism, as well as spin-polarized transport which is either metallic or semi-conducting," says physicist Alex Zettl, one of the world's foremost researchers into nanoscale systems and devices who holds joint appointments with Berkeley Lab's Materials Sciences Division (MSD) and the Physics Department at UC Berkeley, where he is the director of the Center of Integrated Nanomechanical Systems (COINS).

"The unique properties of boron nitride nanoribbons are of great fundamental scientific interest and also have implications for applications in technologies that include spintronics and optoelectronics," Zettl says. "However, the facile, scalable synthesis of high quality boron nitride nanoribbons has been a significant challenge."

Zettl and members of his research group met this challenge using the chemical process known as "intercalation," whereby atoms or molecules of one type are inserted between atoms and molecules of another type. James Tour at Rice University and his research group had demonstrated that the intercalation of potassium atoms into carbon nanotubes promotes a longitudinal splitting of the tubes. This prompted Zettl and Tour to collaborate on a study that used the same approach on boron nitride nanotubes, which are very similar in structure to nanotubes made from carbon.

Zettl and Tour reported the results of this study in the journal Nano Letters.  Co-authoring the paper were Kris Erickson, Ashley Gibb, Michael Rousseas and Nasim Alem, who are all members of Zettl's research group, and Alexander Sinitskii, a member of Tour's research group.

"The likely mechanism for the splitting of both carbon and boron nitride nanotubes is that potassium islands grow from an initial starting point of intercalation," Zettl says. "This island growth continues until enough circumferential strain results in a breakage of the chemical bonds of the intercalated nanotube. The potassium then begins bonding to the bare ribbon edge, inducing further splitting."

Alex Zettl holds joint appointments with Berkeley Lab and UC Berkeley where he directs the Center of Integrated Nanomechanical Systems.

This synthesis technique yields boron nitride nanoribbons of uniform widths that can be as narrow as 20 nanometers. The ribbons are also at least one micron in length, with minimal defects within the plane or along the edges. Zettl says the high quality of the edges points to the splitting process being orderly rather than random. This orderliness could explain why a high proportion of the boron nitride nanoribbons display the coveted zigzag or armchair-shaped edges, rather than other edge orientations.

Edges are critical determinants of a nanoribbon's properties because the electrons along the edge of one ribbon can interact with the electrons along the edge of another ribbon, resulting in the type of energy gap that is crucial for making devices. For example, zigzagged edges in graphene nanoribbons have been shown to be capable of carrying a magnetic current, which makes them candidates for spintronics, the computing technology based on the spin rather than the charge of electrons.

Kris Erickson, who was the lead author on the Nano Letters paper, says that, "Given the significant dependence upon boron nitride nanoribbon edges for imbuing particular electronic and magnetic properties, the high likelihood of synthesizing ribbons with zigzag and armchair edges makes our technique particularly suitable for addressing theoretical predictions and realizing proposed applications."

Erickson also says it should be possible to functionalize the edges of the boron nitride nanoribbons, as these edges are terminated with chemically reactive potassium atoms following synthesis and with reactive hydrogen atoms following exposure to water or ethanol.

"The potassium-terminated edge could easily be replaced with a species other than hydrogen," Erickson says. "Different chemicals could be used for quenching to impart other terminations, and, furthermore, hydrogen could be replaced after quenching by either utilizing established boron nitride functionalization routes, or by devising new routes unique to the highly reactive nanoribbon edge."

Zettl and his research group are now investigating alternative syntheses using different boron nitride nanotube precursors to increase yields and improve the purification process. They are also attempting to functionalize the edges of their nanoribbons and they are in the process of determining if the various predicted edge states for these nanoribbons can be studied.

"What we really need most right now is a better source of boron nitride nanotubes," Zettl says.

This work was supported by the U.S. Department of Energy's Office of Science, with additional support from the National Science Foundation through the Center of Integrated Nanomechanical Systems (COINS), the Office of Naval Research, and the Air Force Research Laboratory.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Kris J. Erickson, Ashley L. Gibb, Alexander Sinitskii, Michael Rousseas, Nasim Alem, James M. Tour, Alex K. Zettl. Longitudinal Splitting of Boron Nitride Nanotubes for the Facile Synthesis of High Quality Boron Nitride Nanoribbons. Nano Letters, 2011; DOI: 10.1021/nl2014857


‘Cling-film’ solar cells could lead to advance in renewable energy

ScienceDaily (July 4, 2011) — A scientific advance in renewable energy which promises a revolution in the ease and cost of using solar cells, has been announced on July 4, 2011. A new study shows that even when using very simple and inexpensive manufacturing methods -- where flexible layers of material are deposited over large areas like cling-film -- efficient solar cell structures can be made.

The study, published in the journal Advanced Energy Materials, paves the way for new solar cell manufacturing techniques and the promise of developments in renewable solar energy. Scientists from the Universities of Sheffield and Cambridge used the ISIS Neutron Source and Diamond Light Source at STFC Rutherford Appleton Laboratory in Oxfordshire to carry out the research.

Plastic (polymer) solar cells are much cheaper to produce than conventional silicon solar cells and have the potential to be produced in large quantities. The study showed that when complex mixtures of molecules in solution are spread onto a surface, like varnishing a table-top, the different molecules separate to the top and bottom of the layer in a way that maximises the efficiency of the resulting solar cell.

Dr Andrew Parnell of the University of Sheffield said, "Our results give important insights into how ultra-cheap solar energy panels for domestic and industrial use can be manufactured on a large scale. Rather than using complex and expensive fabrication methods to create a specific semiconductor nanostructure, high volume printing could be used to produce nano-scale (60 nano-meters) films of solar cells that are over a thousand times thinner than the width of a human hair. These films could then be used to make cost-effective, light and easily transportable plastic solar cell devices such as solar panels."

Dr. Robert Dalgliesh, one of the ISIS scientists involved in the work, said, "This work clearly illustrates the importance of the combined use of neutron and X-ray scattering sources such as ISIS and Diamond in solving modern challenges for society. Using neutron beams at ISIS and Diamond's bright X-rays, we were able to probe the internal structure and properties of the solar cell materials non-destructively. By studying the layers in the materials which convert sunlight into electricity, we are learning how different processing steps change the overall efficiency and affect the overall polymer solar cell performance."

"Over the next fifty years society is going to need to supply the growing energy demands of the world's population without using fossil fuels, and the only renewable energy source that can do this is the Sun," said Professor Richard Jones of the University of Sheffield. " In a couple of hours enough energy from sunlight falls on the Earth to satisfy the energy needs of the Earth for a whole year, but we need to be able to harness this on a much bigger scale than we can do now. Cheap and efficient polymer solar cells that can cover huge areas could help move us into a new age of renewable energy."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Science and Technology Facilities Council (STFC).

Journal Reference:

Paul A. Staniec, Andrew J. Parnell, Alan D. F. Dunbar, Hunan Yi, Andrew J. Pearson, Tao Wang, Paul E. Hopkinson, Christy Kinane, Robert M. Dalgliesh, Athene M. Donald, Anthony J. Ryan, Ahmed Iraqi, Richard A. L. Jones, David G. Lidzey. The Nanoscale Morphology of a PCDTBT:PCBM Photovoltaic Blend. Advanced Energy Materials, 2011; DOI: 10.1002/aenm.201100144


Hot springs microbe yields record-breaking, heat-tolerant enzyme

ScienceDaily (July 5, 2011) — Bioprospectors from the University of California, Berkeley, and the University of Maryland School of Medicine have found a microbe in a Nevada hot spring that happily eats plant material -- cellulose -- at temperatures near the boiling point of water.

In fact, the microbe's cellulose-digesting enzyme, called a cellulase, is most active at a record 109 degrees Celsius (228 degrees Fahrenheit), significantly above the 100°C (212°F) boiling point of water.

This so-called hyperthermophilic microbe, discovered in a 95°C (203°F) geothermal pool, is only the second member of the ancient group Archaea known to grow by digesting cellulose above 80°C. And the microbe's cellulase is the most heat-tolerant enzyme found in any cellulose-digesting microbe, including bacteria.

"These are the most thermophilic Archaea discovered that will grow on cellulose and the most thermophilic cellulase in any organism," said coauthor Douglas S. Clark, UC Berkeley professor of chemical and biomolecular engineering. "We were surprised to find this bug in our first sample."

Clark and coworkers at UC Berkeley are teaming with colleagues, led by Frank T. Robb, at the University of Maryland (U-Md) School of Medicine in Baltimore, to analyze microbes scooped from hot springs and other extreme environments around the United States in search of new enzymes that can be used in extreme industrial processes, including the production of biofuels from hard-to-digest plant fiber. Their team is supported by a grant from the Energy Biosciences Institute (EBI), a public-private collaboration that includes UC Berkeley, in which bioscience and biological techniques are being applied to help solve the global energy challenge.

"Our hope is that this example and examples from other organisms found in extreme environments -- such as high-temperature, highly alkaline or acidic, or high salt environments -- can provide cellulases that will show improved function under conditions typically found in industrial applications, including the production of biofuels," Clark said.

Clark, Robb and their colleagues, including UC Berkeley professor Harvey W. Blanch and postdoctoral researcher Melinda E. Clark, and U-Md postdoctoral researcher Joel E. Graham, will publish their results on July 5, in the online-only journal Nature Communications.

Many industrial processes employ natural enzymes, some of them isolated from organisms that live in extreme environments, such as hot springs. The enzyme used in the popular polymerase chain reaction to amplify DNA originally came from a thermophilic organism found in a geyser in Yellowstone National Park.

But many of these enzymes are not optimized for industrial processes, Clark said. For example, a fungal enzyme is currently used to break down tough plant cellulose into its constituent sugars so that the sugars can be fermented by yeast into alcohol. But the enzyme's preferred temperature is about 50°C (122°F), and it is not stable at the higher temperatures desirable to prevent other microbes from contaminating the reaction.

Hence the need to look in extreme environments for better enzymes, he said.

"This discovery is interesting because it helps define the range of natural conditions under which cellulolytic organisms exist and how prevalent these bugs are in the natural world," Clark said. "It indicates that there are a lot of potentially useful cellulases in places we haven't looked yet."

Robb and his colleagues collected sediment and water samples from the 95°C (203°F) Great Boiling Springs near the town of Gerlach in northern Nevada and grew microbes on pulverized Miscanthus gigas, a common biofuel feedstock, to isolate those that could grow with plant fiber as their only source of carbon.

After further growth on microcrystalline cellulose, the U-Md and UC Berkeley labs worked together to sequence the community of surviving microbes to obtain a metagenome, which indicated that three different species of Archaea were able to utilize cellulose as food. Using genetic techniques, they plucked out the specific genes involved in cellulose degradation, and linked the most active high-temperature cellulase, dubbed EBI-244, to the most abundant of the three Archaea.

Based on the structure of the enzyme, "this could represent a new type of cellulase or a very unusual member of a previously known family," Clark said.

The enzyme is so stable that it works in hot solutions approaching conditions that could be used to pretreat feedstocks like Miscanthus to break down the lignocelluloses and liberate cellulose. This suggests that cellulases may someday be used in the same reaction vessel in which feedstocks are pretreated.

The newly discovered hyperthermophilic cellulase may actually work at too high a temperature for some processes, Clark said. By collecting more hyperthermophilic cellulases, protein engineers may be able to create a version of the enzyme optimized to work at a lower temperature, but with the robust structural stability of the wild microbe.

"We might even find a cellulase that could be used as-is," he said, "but at least they will give us information to engineer new cellulases, and a better understanding of the diversity of nature."

The EBI partnership, which is funded with $500 million for 10 years from the energy company BP, includes researchers from UC Berkeley; the University of Illinois at Urbana-Champaign; and the Lawrence Berkeley National Laboratory.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Berkeley.

Journal Reference:

Joel E. Graham, Melinda E. Clark, Dana C. Nadler, Sarah Huffer, Harshal A. Chokhawala, Sara E. Rowland, Harvey W. Blanch, Douglas S. Clark, Frank T. Robb. Identification and characterization of a multidomain hyperthermophilic cellulase from an archaeal enrichment. Nature Communications, 2011; 2: 375 DOI: 10.1038/ncomms1373


Model finds optimal fiber optic network connections 10,000 times more quickly

ScienceDaily (June 28, 2011) — Designing fiber optic networks involves finding the most efficient way to connect phones and computers that are in different places -- a costly and time-consuming process. Now researchers from North Carolina State University have developed a model that can find optimal connections 10,000 times more quickly, using less computing power to solve the problem.

"Problems that used to take days to solve can now be solved in just a few seconds," says Dr. George Rouskas, computer science professor at NC State and author of a paper describing the new method. The model could solve problems more than 10,000 times faster when data is routed through larger "rings," in the network, Rouskas says.

Every time you make a phone call or visit a website, you send and receive data in the form of wavelengths of light through a network of fiber optic cables. These data are often routed through rings that ensure the information gets where it needs to go. These ring networks are faced with the constant challenge of ensuring that their system design can meet user requirements efficiently. As a result, ring network designers try to determine the best fiber optic cable route for transmitting user data between two points, as well as which wavelength of light to use. Most commercial fiber optics handle approximately 100 different wavelengths of light.

Solving these design challenges is difficult and time-consuming. Using existing techniques, finding the optimal solution for a ring can take days, even for smaller rings. And a ring's connections are modified on an ongoing basis, to respond to changing use patterns and constantly increasing traffic demands.

But the new model developed by Rouskas and his team should speed things up considerably. Specifically, the researchers have designed a mathematical model that identifies the exact optimal routes and wavelengths for ring network designers. The model creates a large graph of all the paths in a ring, and where those paths overlap. The model then breaks that graph into smaller units, with each unit consisting of the paths in a ring that do not overlap. Because these paths do not overlap, they can use the same wavelengths of light. Paths that overlap cannot use the same wavelengths of light -- because two things cannot occupy the same space at the same time.

By breaking all of the potential paths down into these smaller groups, the model is able to identify the optimal path and wavelength between two points much more efficiently than previous techniques.
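
The exact integer-programming decomposition reported in the paper is considerably more involved, but the constraint it exploits -- that lightpaths sharing no fiber link can reuse the same wavelength -- is easy to illustrate. The Python sketch below is a toy greedy grouping on a hypothetical 8-node ring with made-up demands; it is not the authors' algorithm, and it only shows how non-overlapping paths fall into common wavelength classes.

    # Toy illustration of the grouping idea described above: two lightpaths on
    # a ring can share a wavelength only if they traverse no common fiber link.
    # This greedy sketch is NOT the exact ILP decomposition from the paper; the
    # 8-node ring and the demand list are hypothetical, and every demand is
    # routed clockwise for simplicity.

    def ring_links(src: int, dst: int, n: int) -> frozenset:
        """Directed links (i, i+1 mod n) used by the clockwise path src -> dst."""
        links, node = set(), src
        while node != dst:
            links.add((node, (node + 1) % n))
            node = (node + 1) % n
        return frozenset(links)

    def greedy_wavelength_groups(demands, n):
        """Place each (src, dst) demand in the first wavelength class whose
        already-assigned paths it does not overlap; open a new class otherwise."""
        classes = []  # each entry: [set of used links, list of demands]
        for src, dst in demands:
            path = ring_links(src, dst, n)
            for used, members in classes:
                if used.isdisjoint(path):
                    used |= path
                    members.append((src, dst))
                    break
            else:
                classes.append([set(path), [(src, dst)]])
        return [members for _, members in classes]

    demands = [(0, 3), (3, 6), (6, 0), (1, 5), (2, 4)]  # hypothetical traffic
    for w, group in enumerate(greedy_wavelength_groups(demands, n=8)):
        print(f"wavelength {w}: {group}")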

"This will significantly shorten the cycle of feedback and re-design for existing rings," Rouskas says. "It also means that the ring design work can be done using fewer computer resources, which makes it less expensive. This should allow network providers to be more responsive to user demands than ever before."

The paper, "Fast Exact ILP Decompositions for Ring RWA," is published in the July issue of the Journal of Optical Communications and Networking. The paper was co-authored by Dr. Emre Yetginer, a former postdoctoral researcher at NC State now at Tubitak UEKAE, and NC State Ph.D. student Zeyu Liu.

NC State's Department of Computer Science is part of the university's College of Engineering.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by North Carolina State University.

Journal Reference:

Emre Yetginer, Zeyu Liu, George N. Rouskas. Fast Exact ILP Decompositions for Ring RWA. Journal of Optical Communications and Networking, 2011; 3 (7): 577 DOI: 10.1364/JOCN.3.000577


Scientists discover dielectron charging of water nano-droplet

ScienceDaily (June 28, 2011) — Scientists have discovered fundamental steps in the charging of nano-sized water droplets and unveiled the long-sought-after mechanism of hydrogen emission from irradiated water. Working together at the Georgia Institute of Technology and Tel Aviv University, scientists have discovered that when the number of water molecules in a cluster exceeds 83, two excess electrons may attach to it -- forming dielectrons -- making it a doubly negatively charged nano-droplet. Furthermore, the scientists found experimental and theoretical evidence that in droplets composed of 105 molecules or more, the excess dielectrons participate in a water-splitting process resulting in the liberation of molecular hydrogen and formation of two solvated hydroxide anions.

The results appear in the June 30 issue of the Journal of Physical Chemistry A.

It has been known since the early 1980s that while single electrons may attach to small water clusters containing as few as two molecules, only much larger clusters may attach more than single electrons. Size-selected, multiple-electron, negatively-charged water clusters have not been observed -- until now.

Understanding the nature of excess electrons in water has captured the attention of scientists for more than half a century, and the hydrated electrons are known to appear as important reagents in charge-induced aqueous reactions and molecular biological processes. Moreover, since the discovery in the early 1960s that the exposure of water to ionizing radiation causes the emission of gaseous molecular hydrogen, scientists have been puzzled by the mechanism underlying this process. After all, the bonds in the water molecules that hold the hydrogen atoms to the oxygen atoms are very strong. The dielectron hydrogen-evolution (DEHE) reaction, which produces hydrogen gas and hydroxide anions, may play a role in radiation-induced reactions with oxidized DNA that have been shown to underlie mutagenesis, cancer and other diseases.
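
Written out, the DEHE reaction described above is 2 H2O + 2 e- -> H2 + 2 OH-. As a simple bookkeeping check (added here for illustration, not taken from the paper), a few lines of Python confirm that this equation conserves both atoms and charge.

    # Verify that the dielectron hydrogen-evolution reaction quoted above,
    #     2 H2O + 2 e-  ->  H2 + 2 OH-,
    # balances in atoms and in charge. Species are (composition, charge) pairs.
    from collections import Counter

    H2O = (Counter({"H": 2, "O": 1}), 0)
    E   = (Counter(), -1)            # excess electron
    H2  = (Counter({"H": 2}), 0)
    OH  = (Counter({"O": 1, "H": 1}), -1)

    def totals(side):
        """Sum atoms and charge over a list of ((composition, charge), coefficient)."""
        atoms, charge = Counter(), 0
        for (comp, q), coeff in side:
            for element, count in comp.items():
                atoms[element] += coeff * count
            charge += coeff * q
        return atoms, charge

    reactants = [(H2O, 2), (E, 2)]
    products  = [(H2, 1), (OH, 2)]

    assert totals(reactants) == totals(products)  # atoms and charge both balance
    print("2 H2O + 2 e- -> H2 + 2 OH- is balanced:", totals(reactants))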

"The attachment of multiple electrons to water droplets is controlled by a fine balancing act between the forces that bind the electrons to the polar water molecules and the strong repulsion between the negatively charged electrons," said Uzi Landman, Regents' and Institute Professor of Physics, F.E. Callaway Chair and director of the Center for Computational Materials Science (CCMS) at Georgia Tech.

"Additionally, the binding of an electron to the cluster disturbs the equilibrium arrangements between the hydrogen-bonded water molecules and this too has to be counterbalanced by the attractive binding forces. To calculate the pattern and strength of single and two-electron charging of nano-size water droplets, we developed and employed first-principles quantum mechanical molecular dynamics simulations that go well beyond any ones that have been used in this field," he added.

Investigations on controlled size-selected clusters allow explorations of intrinsic properties of finite-sized material aggregates, as well as probing of the size-dependent evolution of materials properties from the molecular nano-scale to the condensed phase regime.

In the 1980s, Landman, together with CCMS senior research scientists Robert Barnett and the late Charles Cleveland, and with Joshua Jortner, professor of chemistry at Tel Aviv University, discovered that there are two ways that single excess electrons can attach to water clusters -- one in which they bind to the surface of the water droplet, and the other where they localize in a cavity in the interior of the droplet, as in the case of bulk water. Subsequently, Landman, Barnett and graduate student Harri-Pekka Kaukonen reported in 1992 on theoretical investigations concerning the attachment of two excess electrons to water clusters. They predicted that such double charging would occur only for sufficiently large nano-droplets. They also commented on the possible hydrogen evolution reaction. No other work on dielectron charging of water droplets has followed since.

That is, until recently, when Landman, now one of the world leaders in cluster and nano science, and Barnett teamed up with Ori Cheshnovsky, professor of chemistry, and research associate Rina Giniger at Tel Aviv University, in a joint project aimed at understanding the process of dielectron charging of water clusters and the mechanism of the ensuing reaction, which had not previously been observed in experiments on water droplets. Using large-scale, state-of-the-art first-principles dynamic simulations developed at the CCMS, in which all valence and excess electrons are treated quantum mechanically, together with a newly constructed high-resolution time-of-flight mass spectrometer, the researchers unveiled the intricate physical processes that govern the dielectron charging of microscopic water droplets and the detailed mechanism of the water-splitting reaction induced by double charging.

The mass spectrometric measurements, performed at Tel Aviv, revealed that singly charged clusters were formed in the size range of six to more than a couple of hundred water molecules. However, for clusters containing more than a critical size of 83 molecules, doubly charged clusters with two attached excess electrons were detected for the first time. Most significantly, for clusters with 105 or more water molecules, the mass spectra provided direct evidence for the loss of a single hydrogen molecule from the doubly charged clusters.

The theoretical analysis demonstrated two dominant attachment modes of dielectrons to water clusters. The first is a surface mode (SS'), where the two repelling electrons reside in antipodal sites on the surface of the cluster. The second is another attachment mode with both electrons occupying a wave function localized in a hydration cavity in the interior of the cluster -- the so-called II binding mode. While both dielectron attachment modes may be found for clusters with 105 molecules and larger ones, only the SS' mode is stable for doubly charged smaller clusters.

"Moreover, starting from the II, internal cavity attachment mode in a cluster composed of 105 water molecules, our quantum dynamical simulations showed that the concerted approach of two protons from two neighboring water molecules located on the first shell of the internal hydration cavity, leads, in association with the cavity-localized excess dielectron, to the formation of a hydrogen molecule. The two remnant hydroxide anions diffuse away via a sequence of proton shuttle processes, ultimately solvating near the surface region of the cluster, while the hydrogen molecule evaporates," said Landman.

"What's more, in addition to uncovering the microscopic reaction pathway, the mechanism which we discovered requires initial proximity of the two reacting water molecules and the excess dielectron. This can happen only for the II internal cavity attachment mode. Consequently, the theory predicts, in agreement with the experiments, that the reaction would be impeded in clusters with less than 105 molecules where the II mode is energetically highly improbable. Now, that's a nice consistency check on the theory," he added.

As for future plans, Landman remarked, "While I believe that our work sets methodological and conceptual benchmarks for studies in this area, there is a lot left to be done. For example, while our calculated values for the excess single electron detachment energies are found to be in quantitative agreement with photoelectron measurements in a broad range of water cluster sizes -- containing from 15 to 105 molecules -- providing a consistent interpretation of these measurements, we would like to obtain experimental data on excess dielectron detachment energies to compare with our predicted values," he said.

"Additionally, we would like to know more about the effects of preparation conditions on the properties of multiply charged water clusters. We also need to understand the temperature dependence of the dielectron attachment modes, the influence of metal impurities, and possibly get data from time-resolved measurements. The understanding that we gained in this experiment about charge-induced water splitting may guide our research into artificial photosynthetic systems, as well as the mechanisms of certain bio-molecular processes and perhaps some atmospheric phenomena."

"You know," he added. "We started working on excess electrons in water clusters quite early, in the 1980s -- close to 25 years ago. If we are to make future progress in this area, it will have to happen faster than that."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Georgia Institute of Technology, via EurekAlert!, a service of AAAS.

Journal Reference:

Robert N. Barnett, Rina Giniger, Ori Cheshnovsky, Uzi Landman. Dielectron Attachment and Hydrogen Evolution Reaction in Water Clusters. The Journal of Physical Chemistry A, 2011; DOI: 10.1021/jp201560n


Thursday, 21 July 2011

Sun and planets constructed differently, analysis from NASA mission suggests

ScienceDaily (June 30, 2011) — The sun and the solar system's rocky inner planets, including Earth, may have formed differently than previously thought, according to UCLA scientists and colleagues analyzing samples returned by NASA's Genesis mission.

The data from Genesis, which collected material from the solar wind blowing from the sun, reveal differences between the sun and planets with regard to oxygen and nitrogen, two of the most abundant elements in our solar system, the researchers report in two studies in the June 24 issue of the journal Science. And although the differences are slight, the research could help determine how our solar system evolved.

"We want to understand how rocky planets form, particularly our rocky planet," said Genesis co-investigator and UCLA professor of Earth and space sciences Kevin McKeegan, who was the lead author of the Science study on oxygen. "To understand that, we need to understand how the isotope composition of the most abundant element in the Earth came to be what it is."

On Earth, the air contains three kinds, or isotopes, of oxygen atoms, which differ in the number of neutrons they contain. All three have eight protons, and almost all oxygen atoms have eight neutrons (O-16), but a small proportion contain nine neutrons (O-17) or 10 neutrons (O-18). Although isotopes of an element behave similarly, there are subtle differences in reaction rates according to the isotopic mass, McKeegan said.

"We found that the Earth and moon, as well as Martian and other meteorites, which are samples of asteroids, have a lower concentration of the O-16 than does the sun," McKeegan said. "The implication is that we did not form out of the same solar nebula materials that created the sun. Just how and why remains to be discovered."

McKeegan and his colleagues measured, for the first time, the isotopic composition of oxygen in the solar wind. They found that the sun has about 6 percent more O-16 -- relative to both of the minor oxygen isotopes -- than Earth does. Because the sun represents the "starting composition of the entire solar system," these findings are surprising, McKeegan said.
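
For readers used to geochemists' delta notation, that 6 percent figure translates into parts-per-thousand language with a one-line calculation. The sketch below is illustrative arithmetic only -- the function and the reference values are ours, not part of the Genesis analysis.

```python
def delta_per_mil(ratio_sample, ratio_reference):
    """Delta notation: deviation of an isotope ratio from a reference,
    expressed in parts per thousand (per mil)."""
    return (ratio_sample / ratio_reference - 1.0) * 1000.0

# If the sun has ~6% more O-16 relative to the minor isotopes than Earth,
# then Earth's 18O/16O ratio is ~6% higher than the solar value.
solar_18_16 = 1.0                 # arbitrary reference units
earth_18_16 = 1.06 * solar_18_16

print(delta_per_mil(earth_18_16, solar_18_16))   # ~ +60 per mil
```

Read this way, the sun-Earth offset is on the order of +60 per mil relative to a solar reference.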

"It's the most abundant element in the Earth, and it is isotopically anomalous," he said, adding that something chemically unusual happened to the material that eventually formed Earth and other rocky planets some 4.6 billion years ago, after the sun had already formed.

"The present composition of the rocky planets is quite different from the starting composition in a way we do not fully understand," he said, "but it must have involved interesting chemistry before the planets formed in the gaseous nebula that produced the sun and planets."

The data were obtained from an analysis of material ejected from the outer portion of the sun. That material can be thought of as a "fossil of our nebula" because scientific evidence suggests that the sun's outer layer has not changed measurably in billions of years. The sample of solar material collected by Genesis was small, but there was enough to be analyzed using UCLA's MegaSIMS (secondary ion mass spectrometer).

"This is the first time the heavy elements in the sun have had their isotope composition determined with precision, directly from solar material," McKeegan said. "The Genesis mission was a success. The mission has achieved its highest priority objectives. We are learning how planets form."

Analyses of meteorites from Mars indicate that oxygen on Mars is very similar to oxygen on Earth, but not identical, McKeegan said.

Genesis launched in August 2001. The spacecraft traveled to the L1 Lagrange Point, about 1 million miles from Earth, where it remained for 886 days between 2001 and 2004, passively collecting solar wind samples.

On Sept. 8, 2004, the spacecraft released a sample return capsule that entered Earth's atmosphere. Although the capsule made a hard landing -- the result of a failed parachute -- in the Utah Test and Training Range in Dugway, Utah, it marked NASA's first sample return since the final Apollo lunar mission in 1972 and the first material collected beyond the moon.

Co-authors of the oxygen study included Veronika Heber, a UCLA research scientist in the Department of Earth and Space Sciences; George Jarzebinski, senior electronics engineer at UCLA; Chris Coath, a former UCLA researcher who designed the ion optics of the MegaSIMS; Peter Mao, a former UCLA researcher who is currently an astrophysicist at the California Institute of Technology; Antti Kallio, a former UCLA postdoctoral scholar who acquired much of the solar wind data; Takaya Kunihiro, a former UCLA postdoctoral scholar currently at Japan's Okayama University; and Don Burnett, a professor at the California Institute of Technology, who was Genesis' principal investigator. A team from Los Alamos National Laboratory led by Roger Wiens built a device on the Genesis spacecraft for the analysis of oxygen and nitrogen from the solar wind. Wiens and his colleagues are also co-authors of the study. NASA funded the research.

"The sun houses more than 99 percent of the material currently in our solar system, so it's a good idea to get to know it better," Burnett said.

A second paper in Science by different researchers details differences between the sun and planets with regard to the element nitrogen. Like oxygen, nitrogen has one isotope (N-14) that makes up nearly 100 percent of the nitrogen atoms in the solar system, but there is also a tiny amount of N-15.

Researchers studying the same Genesis samples found that compared to Earth's atmosphere, nitrogen in the sun and Jupiter had slightly more N-14 -- but 40 percent less N-15. The sun and Jupiter appear to have the same nitrogen composition, but as with oxygen, the nitrogen composition of Earth and the rest of the inner solar system is very different.

"These findings show that all solar system objects, including the terrestrial planets, meteorites and comets, are anomalous compared to the initial composition of the nebula from which the solar system formed," said Bernard Marty, a Genesis co-investigator from the Centre de Recherches Pétrographiques et Géochimiques in France and lead author of the second Science study. "Understanding the cause of such a heterogeneity will impact our view on the formation of the solar system."

The Jet Propulsion Laboratory in Pasadena, Calif., managed the Genesis mission for NASA's Science Mission Directorate in Washington, D.C. Genesis was part of the Discovery Program managed at NASA's Marshall Space Flight Center in Huntsville, Ala. Lockheed Martin Space Systems in Denver developed and operated the spacecraft. Analysis at the Centre de Recherches Pétrographiques et Géochimiques was supported by the Centre National d'Etudes Spatiales and the Centre National de la Recherche Scientifique, both in Paris.

For more information on the Genesis mission, visit http://genesismission.jpl.nasa.gov.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Los Angeles. The original article was written by Stuart Wolpert.

Journal References:

K. D. McKeegan, A. P. A. Kallio, V. S. Heber, G. Jarzebinski, P. H. Mao, C. D. Coath, T. Kunihiro, R. C. Wiens, J. E. Nordholt, R. W. Moses, D. B. Reisenfeld, A. J. G. Jurewicz, D. S. Burnett. The Oxygen Isotopic Composition of the Sun Inferred from Captured Solar Wind. Science, 2011; 332 (6037): 1528 DOI: 10.1126/science.1204636

B. Marty, M. Chaussidon, R. C. Wiens, A. J. G. Jurewicz, D. S. Burnett. A 15N-Poor Isotopic Composition for the Solar System As Shown by Genesis Solar Wind Samples. Science, 2011; 332 (6037): 1533 DOI: 10.1126/science.1204656

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Synthetic collagen from maize has human properties

ScienceDaily (June 28, 2011) — Synthetic collagen has a wide range of applications in reconstructive and cosmetic surgery and in the food industry. For proper function in animals, a certain number of prolines within the protein need to be hydroxylated. BioMed Central's open access journal BMC Biotechnology reports that, for the first time, the α1 chain of type I collagen has been produced in maize with levels of proline hydroxylation similar to those of human collagen.

Most collagen in use is derived from animals, but there are risks that animal-derived collagen may carry infectious agents or be rejected by the body. To avoid these problems, several laboratory-based systems have been developed that use plants to produce collagen. Plant-derived recombinant proteins should carry less contamination and fewer infectious agents, but these systems have been unable to make the modifications to the protein that are essential for proper function in human cells.

Working in collaboration with industrial partners, the researchers added a gene coding for the α1 chain of human collagen type I (hCI α1) to maize, along with genes for human prolyl 4-hydroxylase. This second protein hydroxylated approximately the same percentage of prolines in the maize-produced recombinant collagen α1 chain as is seen in human collagen made in human cells.

Dr Kan Wang from Iowa State University said, "Producing human collagen in maize seeds is an inexpensive alternative to using animal-derived collagen. The seeds are easy to grow, process, and store. Our transgenic plant system is also able to produce a protein with human-like modifications making it a better choice for a wide range of applications."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by BioMed Central, via EurekAlert!, a service of AAAS.

Journal Reference:

Xing Xu, Qinglei Gan, Richard C Clough, Kamesh M Pappu, John A Howard, Julio A Baez, Kan Wang. Hydroxylation of recombinant human collagen type I alpha 1 in transgenic maize co-expressed with a recombinant human prolyl 4-hydroxylase. BMC Biotechnology, 2011; 11 (1): 69 DOI: 10.1186/1472-6750-11-69

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Brain-like computing a step closer to reality

ScienceDaily (June 24, 2011) — The development of 'brain-like' computers has taken a major step forward with the publication of research led by the University of Exeter.

Published in the journal Advanced Materials, the study involved the first ever demonstration of simultaneous information processing and storage using phase-change materials. This new technique could revolutionize computing by making computers faster and more energy-efficient, as well as making them more closely resemble biological systems.

Computers currently deal with processing and memory separately, resulting in a speed and power 'bottleneck' caused by the need to continually move data around. This is totally unlike anything in biology, for example in human brains, where no real distinction is made between memory and computation. To perform these two functions simultaneously, the University of Exeter research team used phase-change materials, a kind of semiconductor that exhibits remarkable properties.

Their study demonstrates conclusively that phase-change materials can store and process information simultaneously. It also shows experimentally for the first time that they can perform general-purpose computing operations, such as addition, subtraction, multiplication and division. Perhaps more strikingly, it shows that phase-change materials can be used to make artificial neurons and synapses. This means that an artificial system made entirely from phase-change devices could potentially learn and process information in a similar way to our own brains.
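
As a rough mental model of how a single memory cell can also do arithmetic -- a toy accumulator sketch, not the Exeter group's published scheme -- imagine a cell that creeps toward its crystallization threshold with each programming pulse, so that counting pulses is the same as adding digits. The class, threshold and numbers below are invented for illustration.

```python
class PhaseChangeCell:
    """Toy accumulator model of a phase-change memory cell.

    Each identical programming pulse nudges the cell a fixed step toward a
    crystallization threshold; crossing the threshold reads out as a 'carry'
    and the cell resets. Illustrative only -- not real device physics.
    """

    def __init__(self, threshold=10):
        self.threshold = threshold   # pulses needed to fully crystallize
        self.state = 0               # accumulated partial crystallization

    def pulse(self, n=1):
        """Apply n programming pulses; return the number of carries produced."""
        self.state += n
        carries, self.state = divmod(self.state, self.threshold)
        return carries


# Base-10 addition of 7 + 6 with a threshold-10 cell:
cell = PhaseChangeCell(threshold=10)
carries = cell.pulse(7) + cell.pulse(6)
print(carries, cell.state)   # 1 carry and a remainder of 3, i.e. 7 + 6 = 13
```

The point of the toy model is that the cell's partial state is simultaneously the stored value and the intermediate result of the computation, which is the sense in which memory and processing share one physical location.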

Lead author Professor David Wright of the University of Exeter said: "Our findings have major implications for the development of entirely new forms of computing, including 'brain-like' computers. We have uncovered a technique for potentially developing new forms of 'brain-like' computer systems that could learn, adapt and change over time. This is something that researchers have been striving for over many years."

This study focused on the performance of a single phase-change cell. The next stage in Exeter's research will be to build systems of interconnected cells that can learn to perform simple tasks, such as identification of certain objects and patterns.

This research was funded by the Engineering and Physical Sciences Research Council.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Exeter.

Journal Reference:

C. David Wright, Yanwei Liu, Krisztian I. Kohary, Mustafa M. Aziz, Robert J. Hicken. Arithmetic and Biologically-Inspired Computing Using Phase-Change Materials. Advanced Materials, 2011; DOI: 10.1002/adma.201101060

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



NASA's Spitzer finds distant galaxies grazed on gas

ScienceDaily (July 1, 2011) — Galaxies once thought of as voracious tigers are more like grazing cows, according to a new study using NASA's Spitzer Space Telescope.

Astronomers have discovered that galaxies in the distant, early universe continuously ingested their star-making fuel over long periods of time. This goes against previous theories that the galaxies devoured their fuel in quick bursts after run-ins with other galaxies.

"Our study shows the merging of massive galaxies was not the dominant method of galaxy growth in the distant universe," said Ranga-Ram Chary of NASA's Spitzer Science Center at the California Institute of Technology in Pasadena, Calif. "We're finding this type of galactic cannibalism was rare. Instead, we are seeing evidence for a mechanism of galaxy growth in which a typical galaxy fed itself through a steady stream of gas, making stars at a much faster rate than previously thought."

Chary is the principal investigator of the research, appearing in the Aug. 1 issue of the Astrophysical Journal. According to his findings, these grazing galaxies fed steadily over periods of hundreds of millions of years and created an unusually large number of plump stars, up to 100 times the mass of our sun.

"This is the first time that we have identified galaxies that supersized themselves by grazing," said Hyunjin Shim, also of the Spitzer Science Center and lead author of the paper. "They have many more massive stars than our Milky Way galaxy."

Galaxies like our Milky Way are giant collections of stars, gas and dust. They grow in size by feeding off gas and converting it to new stars. A long-standing question in astronomy is: Where did distant galaxies that formed billions of years ago acquire this stellar fuel? The most favored theory was that galaxies grew by merging with other galaxies, feeding off gas stirred up in the collisions.

Chary and his team addressed this question by using Spitzer to survey more than 70 remote galaxies that existed 1 to 2 billion years after the Big Bang (our universe is approximately 13.7 billion years old). To their surprise, these galaxies were blazing with what is called H alpha, which is radiation from hydrogen gas that has been hit with ultraviolet light from stars. High levels of H alpha indicate stars are forming vigorously. Seventy percent of the surveyed galaxies show strong signs of H alpha. By contrast, only 0.1 percent of galaxies in our local universe possess this signature.

Previous studies using ultraviolet-light telescopes found about six times less star formation than Spitzer, which sees infrared light. Scientists think this may be due to large amounts of obscuring dust, through which infrared light can sneak. Spitzer opened a new window onto the galaxies by taking very long-exposure infrared images of a patch of sky called the GOODS fields, for Great Observatories Origins Deep Survey.

Further analyses showed that these galaxies furiously formed stars up to 100 times faster than the current star-formation rate of our Milky Way. What's more, the star formation took place over a long period of time, hundreds of millions of years. This tells astronomers that the galaxies did not grow due to mergers, or collisions, which happen on shorter timescales. While such smash-ups are common in the universe -- for example, our Milky Way will merge with the Andromeda galaxy in about 5 billion years -- the new study shows that large mergers were not the main cause of galaxy growth. Instead, the results show that distant, giant galaxies bulked up by feeding off a steady supply of gas that probably streamed in from filaments of dark matter.

Chary said, "If you could visit a planet in one of these galaxies, the sky would be a crazy place, with tons of bright stars, and fairly frequent supernova explosions."

NASA's Jet Propulsion Laboratory in Pasadena, Calif., manages the Spitzer Space Telescope mission for the agency's Science Mission Directorate in Washington. Science operations are conducted at the Spitzer Science Center at Caltech. Caltech manages JPL for NASA.

For more information about Spitzer, visit http://www.nasa.gov/spitzer and http://spitzer.caltech.edu/ .

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Termites' digestive system could act as biofuel refinery

ScienceDaily (July 5, 2011) — One of the peskiest household pests, while disastrous to homes, could prove to be a boon for cars, according to a Purdue University study.

Mike Scharf, the O. Wayne Rollins/Orkin Chair in Molecular Physiology and Urban Entomology, said his laboratory has discovered a cocktail of enzymes from the guts of termites that may be better at getting around the barriers that inhibit fuel production from woody biomass. The Scharf Laboratory found that enzymes in termite guts are instrumental in the insects' ability to break down the wood they eat.

The findings, published in the early online version of the journal PLoS One, are the first to measure the sugar output from enzymes created by the termites themselves and the output from symbionts, small protozoa that live in termite guts and aid in digestion of woody material.

"For the most part, people have overlooked the host termite as a source of enzymes that could be used in the production of biofuels. For a long time it was thought that the symbionts were solely responsible for digestion," Scharf said. "Certainly the symbionts do a lot, but what we've shown is that the host produces enzymes that work in synergy with the enzymes produced by those symbionts. When you combine the functions of the host enzymes with the symbionts, it's like one plus one equals four."

Scharf and his research partners separated the termite guts, testing portions that did and did not contain symbionts on sawdust to measure the sugars created.

Once the enzymes were identified, Scharf and his team worked with Chesapeake Perl, a protein production company in Maryland, to create synthetic versions. The genes responsible for creating the enzymes were inserted into a virus and fed to caterpillars, which then produced large amounts of the enzymes. Tests showed that the synthetic versions of the host termite enzymes also were very effective at releasing sugar from the biomass.

They found that the three synthetic enzymes function on different parts of the biomass.

Two enzymes are responsible for the release of glucose and pentose, two different sugars. The other enzyme breaks down lignin, the rigid compound that makes up plant cell walls.

Lignin is one of the most significant barriers that blocks the access to sugars contained in biomass. Scharf said it's possible that the enzymes derived from termites and their symbionts, as well as synthetic versions, could be more effective at removing that lignin barrier.

Sugars from plant material are essential to creating biofuels. Those sugars are fermented to make products such as ethanol.

"We've found a cocktail of enzymes that create sugars from wood," Scharf said. "We were also able to see for the first time that the host and the symbionts can synergistically produce these sugars."

Next, Scharf said his laboratory and collaborators would work on identifying the symbiont enzymes that could be combined with termite enzymes to release the greatest amount of sugars from woody material. Combining those enzymes would increase the amount of biofuel that should be available from biomass.

The U.S. Department of Energy and Chesapeake Perl funded the research.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Purdue University. The original article was written by Brian Wallheimer.

Journal Reference:

Michael E. Scharf, Zachary J. Karl, Amit Sethi, Drion G. Boucias. Multiple Levels of Synergistic Collaboration in Termite Lignocellulose Digestion. PLoS ONE, 2011; 6 (7): e21709 DOI: 10.1371/journal.pone.0021709

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Wednesday, 20 July 2011

Tapping titanium's colorful potential

ScienceDaily (June 28, 2011) — A new, cost-effective process for colouring titanium can be used in manufacturing products from sporting equipment to colour-coded nuclear waste containers.

"The new method uses an electrochemical solution to produce coloured titanium, improving on an older, time-consuming and expensive method where heat was used to develop a coloured layer," says Gregory Jerkiewicz, a professor in the Department of Chemistry.

Dr. Jerkiewicz's new technique can be finely tuned to produce over 80 different shades of basic colours. In addition, the coloured titanium produced using the new method remains crack-free and stable for many years.

Coloured titanium has the potential to be used in the production of everyday objects like spectacle frames, jewelry, golf clubs and high-performance bicycles.

Industries including healthcare, aviation and the military could use the technology to create items like colour-coded surgical tools, brightly coloured airplane parts, and stealth submarines made from blue titanium.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Queen's University.

Journal Reference:

Andrew Munro, Michael F. Cunningham, Gregory Jerkiewicz. Spectral and Physical Properties of Electrochemically Formed Colored Layers on Titanium Covered with Clearcoats. ACS Applied Materials & Interfaces, 2011; DOI: 10.1021/am2000196

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Researchers map the physics of Tibetan singing bowls

ScienceDaily (July 4, 2011) — Researchers have been investigating the connection between fifth-century Himalayan instruments used in religious ceremonies and modern physics.

In a study published July 1, 2011, in IOP Publishing's journal Nonlinearity, researchers have captured high-speed images of the dynamics of fluid-filled Tibetan bowls and quantified how droplets are propelled from the water's surface as the bowls are excited.

The first of five videos demonstrating the intriguing dynamics can be seen at: http://www.youtube.com/watch?v=oob8zENYt0g

A Tibetan bowl, generally made from a bronze alloy containing copper, tin, zinc, iron, silver, gold and nickel, is a type of standing bell played by striking or rubbing its rim with a wooden or leather-wrapped mallet. This excitation causes the sides and rim of the bowl to vibrate, producing a rich sound.

The unique singing properties of Tibetan bowls were utilised as a way of investigating a liquid's interaction with solid materials -- a situation that arises in many engineering applications such as the wind-loading of bridges and buildings.

When a fluid-filled Tibetan bowl is rubbed, the slight changes in the bowl's shape disturb the surface at the water's edge, generating waves. Moreover, when these changes are sufficiently large, the waves break, leading to the ejection of droplets.

The new findings could benefit processes such as fuel injectors and perfume sprays where droplet generation plays an important role.

The high-speed videos allowed the researchers, from Université de Liège and the Massachusetts Institute of Technology, to quantify how the droplets were formed, ejected, accelerated, and bounced on the surface of the fluid.

A similar phenomenon exists when rubbing the edge of a wine glass, which inspired the design of the glass harmonica by Benjamin Franklin. However, the Tibetan singing bowl is easier to excite than the wine glass, since its resonant frequency is much lower.

In order to generate the waves and resultant droplets, a loudspeaker that emitted sound at specific frequencies was set up adjacent to the bowls. Once the sound matched the resonant frequency of the bowl -- that is, once the sound wave vibrated in phase with the natural vibration of the bowl -- the surface waves were generated.
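
The role of resonance here can be pictured with the simplest possible model: a driven, damped oscillator whose steady-state response peaks as the drive frequency approaches its natural frequency. The sketch below is a generic illustration of that logic rather than a model of the bowls in the study, and the frequency and damping numbers are invented.

```python
import numpy as np

f0 = 188.0                           # assumed natural frequency of the "bowl", Hz
omega0 = 2.0 * np.pi * f0
gamma = 5.0                          # assumed damping rate, 1/s

drive = np.linspace(150.0, 230.0, 9)             # loudspeaker frequencies, Hz
omega = 2.0 * np.pi * drive
# Steady-state amplitude of a driven, damped harmonic oscillator
amplitude = 1.0 / np.sqrt((omega0**2 - omega**2)**2 + (2.0 * gamma * omega)**2)

for f, a in zip(drive, amplitude / amplitude.max()):
    print(f"{f:6.1f} Hz  relative rim response {a:.3f}")
```

Driving far from the natural frequency produces almost no rim motion; driving near it produces the large-amplitude motion that shakes the water surface hard enough to raise waves and, eventually, eject droplets.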

A high-speed camera was used to capture images of the droplets, from which measurements could be taken.

Senior author Professor John Bush said, "Although our system represents an example of fluid-solid interactions, it was motivated more by curiosity than engineering applications.

"We are satisfied with the results of our investigation, which we feel has elucidated the basic physics of the system. Nevertheless, one might find further surprises by changing the bowl or fluid properties."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Institute of Physics, via EurekAlert!, a service of AAAS.

Journal Reference:

Denis Terwagne, John W. M. Bush. Tibetan singing bowls. Nonlinearity, 2011; 24: R51 DOI: 10.1088/0951-7715/24/8/R01

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Properties of 'confined' water within single-walled carbon nanotube pores clarified

ScienceDaily (June 23, 2011) — Water and ice may not be among the first things that come to mind when you think about single-walled carbon nanotubes (SWCNTs), but a Japan-based research team hoping to get a clearer understanding of the phase behavior of confined water in the cylindrical pores of carbon nanotubes zeroed in on confined water's properties and made some surprising discoveries.

The team, from Tokyo Metropolitan University, Nagoya University, Japan Science and Technology Agency, and National Institute of Advanced Industrial Science and Technology, describes their findings in the American Institute of Physics' Journal of Chemical Physics.

Although carbon nanotubes consist of hydrophobic (water repelling) graphene sheets, experimental studies on SWCNTs show that water can indeed be confined in open-ended carbon nanotubes.

This discovery gives us a deeper understanding of the properties of nanoconfined water within the pores of SWCNTs, which is a key to the future of nanoscience. It's anticipated that nanoconfined water within carbon nanotubes could open the door to the development of a variety of new nanoscale devices -- nanofiltration systems, molecular nanovalves, molecular water pumps, nanoscale power cells, and even nanoscale ferroelectric devices.

"When materials are confined at the atomic scale they exhibit unusual properties not otherwise observed, due to the so-called 'nanoconfinement effect.' In geology, for example, nanoconfined water provides the driving force for frost heaves in soil, and also for the swelling of clay minerals," explains Yutaka Maniwa, a professor in the Department of Physics at Tokyo Metropolitan University. "We experimentally studied this type of effect for water using SWCNTs."

Water within SWCNTs with diameters in the range of 1.68 to 2.40 nanometers undergoes a wet-dry type of transition when the temperature is decreased. The team also discovered that when SWCNTs are extremely narrow, the water inside forms tubule ices that are quite different from any bulk ices known so far. Strikingly, their melting point rises as the SWCNT diameter decreases -- contrary to the behavior of bulk water inside a large-diameter capillary. In fact, tubule ice forms even at room temperature inside sufficiently narrow SWCNTs.
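
For context, the macroscopic expectation that the narrow-tube result overturns is the classical Gibbs-Thomson picture, in which confinement lowers the melting point and the shift grows as the pore narrows. The relation below is the standard textbook form, quoted only to show that trend; it is not taken from the paper and is known to break down at nanometre scales.

```latex
% Classical Gibbs--Thomson melting-point depression in a cylindrical pore of
% diameter d, where sigma_sl is the solid--liquid interfacial energy, rho_s the
% solid density and L_f the latent heat of fusion:
\Delta T_m \;=\; T_m^{\mathrm{bulk}} - T_m(d)
           \;\approx\; \frac{4\,\sigma_{sl}\,T_m^{\mathrm{bulk}}}{\rho_s\,L_f\,d}
```

Because the predicted depression scales as 1/d, narrower pores should hold ice only at ever lower temperatures -- the opposite of the elevated melting points reported for the tubule ices.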

"We extended our studies to the larger diameter SWCNTs up to 2.40 nanometers and successfully proposed a global phase behavior of water," says Maniwa. "This phase diagram (see image) covers a crossover from microscopic to macroscopic regions. In the macroscopic region, a novel wet-dry transition was newly explored at low temperature."

Results such as these contribute to a greater understanding of fundamental science because nanoconfined water exists and plays a vital role everywhere on Earth -- including our bodies. "Understanding the nanoconfined effect on the properties of materials is also crucial to develop new devices, such as proton-conducting membranes and nanofiltration," Maniwa notes.

Next up, the team plans to investigate the physical properties of confined water discovered so far inside SWCNTs (such as dielectricity and proton conduction). They will pursue this to obtain a better understanding of the molecular structure and transport properties in biological systems.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Institute of Physics, via EurekAlert!, a service of AAAS.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Astronomers reveal a cosmic 'axis of evil'

ScienceDaily (June 30, 2011) — Astronomers are puzzled by the announcement that the masses of the largest objects in the universe appear to depend on which method is used to weigh them. The new work was presented at a specialist discussion meeting on 'Scaling Relations of Galaxy Clusters' organised by the Astrophysics Research Institute (ARI) at Liverpool John Moores University and supported by the Royal Astronomical Society.

Clusters of galaxies are the largest gravitationally bound objects in the universe, containing thousands of galaxies like the Milky Way, and their weight is an important probe of their dark matter content and evolution through cosmic time. Measurements used to weigh these systems, carried out in three different regions of the electromagnetic spectrum -- X-ray, optical and millimetre wavelengths -- give rise to significantly different results.

Eduardo Rozo, from the University of Chicago, explained that any two of the measurements can be made to fit easily enough, but that always leaves the estimate from the third technique out of line. Dubbed the 'Axis of Evil', the discrepancy makes it seem as if the universe is being difficult, keeping back one or two pieces of the jigsaw and so deliberately preventing us from calibrating our weighing scales properly.

More than 40 of the leading cluster astronomers from the UK, Europe and the US attended the meeting to discuss the early results from the Planck satellite, which is currently scanning the heavens at millimetre wavelengths, looking for the faintest signals from clusters of galaxies and the cosmic background radiation in order to understand the birth of the universe. The Planck measurements were compared with optical images of clusters from the Sloan Digital Sky Survey and new X-ray observations from the XMM-Newton satellite.

ARI astronomers are taking a leading role in this research through participation in the X-ray cluster work and observations of the constituent galaxies using the largest ground-based optical telescopes.

One possible resolution to the 'Axis of Evil' problem discussed at the meeting is a new population of clusters which is optically bright but X-ray faint. Dr Jim Bartlett (Univ. Paris), one of the astronomers who presented the Planck results, argued that a new cluster population which responds differently would be a 'frightening prospect' because it would overturn age-old ideas about the gravitational physics being the same from cluster to cluster.

Chris Collins, LJMU Professor of Cosmology, who organised the meeting said: 'I saw this meeting as an opportunity to bring together experts who study clusters at only one wavelength and don't always talk to their colleagues working at other wavelengths. The results presented are unexpected and all three communities (optical, X-ray and millimetre) will need to work together in the future to figure out what is going on.'

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Royal Astronomical Society (RAS).

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Big dinosaurs kept their cool

Image: Oral thermometer -- The number of chemical bonds between certain oxygen and carbon atoms in this fossil tooth allowed scientists to determine the body temperature of a 150-million-year-old sauropod dinosaur called Camarasaurus. (Credit: Thomas Tütken/University of Bonn)

How do you take a dinosaur’s temperature? Very carefully.

By counting chemical bonds in 150-million-year-old fossilized teeth, scientists have done the paleontological equivalent of jamming a thermometer up a giant reptile’s rear end. Reporting online June 23 in Science, the researchers say the huge, four-legged dinosaurs known as sauropods would have registered a body temperature similar to that of any modern Homo sapiens.

Barring a nurse’s visit to Jurassic Park, the work provides perhaps the best glimpse yet at dinosaurs’ internal temperature, a key factor in understanding their metabolism. The measured temperatures come in some 4 to 7 degrees Celsius cooler than one theory of dinosaur growth has suggested.

“This approach to the old issue of warm- versus cold-bloodedness has the potential to open a new door into this controversy,” says Luis Chiappe, a paleontologist at the Natural History Museum of Los Angeles County who was not involved in the research.

Once thought to be cold-blooded and sluggish like crocodiles and alligators, dinosaurs got a reputation makeover in the 1960s and 1970s as active, possibly warm-blooded creatures. But scientists still don’t agree on exactly how dinosaurs exchanged heat with their surroundings and how warm or cold they might have been inside.

To tackle this question, a research team led by postdoc Rob Eagle of Caltech decided to look at sauropods, the biggest land animals that ever lived. Eagle’s adviser, John Eiler, had invented a way to tease out body temperature by studying the number of chemical bonds formed between rare versions of carbon and oxygen in growing teeth and bone. More of those bonds form at lower temperatures, so fossilized teeth can reveal how warm it was inside the living animal.
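
The technique is known as clumped-isotope thermometry: the excess abundance of bonds pairing the heavy isotopes carbon-13 and oxygen-18 in a mineral decreases with the temperature at which the mineral grew, roughly as 1/T^2, so measuring that excess lets you invert for body temperature. The sketch below shows the inversion using made-up calibration coefficients of plausible magnitude; it is not the calibration used in the Science paper.

```python
import math

# Assumed calibration of the form: bond excess (per mil) = A / T^2 + B.
# A and B are placeholders of roughly the right magnitude, not the
# published calibration.
A, B = 0.0592e6, -0.02

def temperature_from_bond_excess(excess_per_mil):
    """Invert the assumed calibration; returns temperature in kelvin."""
    return math.sqrt(A / (excess_per_mil - B))

excess = 0.60                          # hypothetical measured value, per mil
T = temperature_from_bond_excess(excess)
print(f"{T:.1f} K  =  {T - 273.15:.1f} deg C")
```

With these illustrative numbers, a measured excess of 0.60 per mil maps to roughly 36 degrees Celsius, which happens to sit in the range the team reported for the sauropod teeth.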

Eagle’s team analyzed roughly a dozen teeth of several sauropod species excavated in Tanzania, Oklahoma and Wyoming. The creatures’ internal temperatures clocked in between 36 and 38° Celsius. That’s warmer than cold-blooded creatures like crocodiles, cooler than birds, and just about the range of modern mammals.

Other scientists have suggested that sauropod body temperature could have reached 40° or even higher, simply because of the sheer amount of the dinosaurs’ flesh. The new study suggests instead that dinosaurs had some way of keeping cool — perhaps by using internal air sacs or long necks and tails for ventilation, Eagle says.

The work fits with other recent evidence suggesting that sauropods and modern mammals were about the same temperature, says Chiappe. But the sheer size of sauropods probably means the resemblance stops there. “They aren’t necessarily physiologically similar to modern mammals in having high metabolic rates,” Eagle says.

His team now plans to calculate body temperatures in fossils of other dinosaurs, such as smaller sauropods. Analyzing more dinosaurs — along with birds, mammals and their ancestors — could reveal important evolutionary patterns, Eagle says: “You could trace when warm-bloodedness actually evolved.” 




Subatomic quantum memory in diamond demonstrated

ScienceDaily (June 28, 2011) — Physicists working at the University of California, Santa Barbara and the University of Konstanz in Germany have developed a breakthrough in the use of diamond in quantum physics, marking an important step toward quantum computing. The results are reported in this week's online edition of Nature Physics.

The physicists were able to coax the fragile quantum information contained within a single electron in diamond to move into an adjacent single nitrogen nucleus, and then back again using on-chip wiring.

"This ability is potentially useful to create an atomic-scale memory element in a quantum computer based on diamond, since the subatomic nuclear states are more isolated from destructive interactions with the outside world," said David Awschalom, senior author. Awschalom is director of UCSB's Center for Spintronics & Quantum Computation, professor of physics, electrical and computer engineering, and the Peter J. Clarke director of the California NanoSystems Institute.

Awschalom said the discovery shows the high-fidelity operation of a quantum mechanical gate at the atomic level, enabling the transfer of full quantum information to and from one electron spin and a single nuclear spin at room temperature. The process is scalable, and opens the door to new solid-state quantum device development.

Scientists have recently shown that it is possible to synthesize thousands of these single electron states with beams of nitrogen atoms, intentionally creating defects to trap the single electrons. "What makes this demonstration particularly exciting is that a nitrogen atom is a part of the defect itself, meaning that these sub-atomic memory elements automatically scale with the number of logical bits in the quantum computer," said lead author Greg Fuchs, a postdoctoral fellow at UCSB.

Rather than using logical elements like transistors to manipulate digital states like "0" or "1," a quantum computer needs logical elements capable of manipulating quantum states that may be "0" and "1" at the same time. Even at ambient temperature, these defects in diamond can do exactly that, and have recently become a leading candidate to form a quantum version of a transistor.

However, there are still major challenges to building a diamond-based quantum computer. One of these is finding a method to store quantum information in a scalable way. Unlike a conventional computer, where the memory and the processor are in two different physical locations, in this case they are integrated together, bit-for-bit.

"We knew that the nitrogen nuclear spin would be a good choice for a scalable quantum memory -- it was already there," said Fuchs. "The hard part was to transfer the state quickly, before it is lost to decoherence."

Awschalom explained: "A key breakthrough was to use a unique property of quantum physics -- that two quantum objects can, under special conditions, become mixed to form a new composite object." By mixing the quantum spin state of the electrons in the defect with the spin state of the nitrogen nucleus for a brief time -- less than 100 billionths of a second -- information that was originally encoded in the electrons is passed to the nucleus.

"The result is an extremely fast transfer of the quantum information to the long-lived nuclear spin, which could further enhance our capabilities to correct for errors during a quantum computation," said co-author Guido Burkard, a theoretical physicist at the University of Konstanz, who developed a model to understand the storage process.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Santa Barbara, via EurekAlert!, a service of AAAS.

Journal Reference:

G. D. Fuchs, G. Burkard, P. V. Klimov, D. D. Awschalom. A quantum memory intrinsic to single nitrogen–vacancy centres in diamond. Nature Physics, 2011; DOI: 10.1038/nphys2026

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

