Sunday 28 October 2012

Astronomers Report That Dark Matter 'Halos' May Contain Stars, Disprove Other Theories


 Could it be that dark matter "halos" -- the huge, invisible cocoons of mass that envelop entire galaxies and account for most of the matter in the universe -- aren't completely dark after all but contain a small number of stars? Astronomers from UCLA, UC Irvine and elsewhere make a case for that in the Oct. 25 issue of the journal Nature.

Astronomers have long disagreed about why they see more light in the universe than it seems they should -- that is, why the infrared light they observe exceeds the amount of light emitted from known galaxies.
When looking at the cosmos, astronomers have seen what are neither stars nor galaxies nor a uniform dark sky but mysterious, sandpaper-like smatterings of light, which UCLA's Edward L. (Ned) Wright refers to as "fluctuations." The debate has centered on exactly what the source of those fluctuations is.
One explanation is that the fluctuations in the background are from very distant unknown galaxies. A second is that they're from unknown galaxies that are not so far away, faint galaxies whose light has been traveling to us for only 4 billion or 5 billion years (a rather short time in astronomy terms). In the Nature paper, Wright and his colleagues present evidence that both these explanations are wrong, and they propose an alternative.
The first explanation -- that the fluctuations are from very distant galaxies -- is nowhere close to being supported by the data the astronomers present from NASA's Spitzer Space Telescope, said Wright, a UCLA professor of physics and astronomy.
"The idea of not-so-far-away faint galaxies is better, but still not right," he added. "It's off by a factor of about 10; the 'distant galaxies' hypothesis is off by a factor of about 1,000."
Wright and his colleagues, including lead author Asantha Cooray, a UC Irvine professor of physics and astronomy, contend that the small number of stars that were kicked to the edges of space during violent collisions and mergers of galaxies may be the cause of the infrared light "halos" across the sky and may explain the mystery of the excess emitted infrared light.
As crashing galaxies became gravitationally tangled with one another, "orphaned" stars were tossed into space. It is these stars, the researchers say, that produce the diffuse, blotchy scatterings of light emitted from the galaxy halos that extend well beyond the outer reaches of galaxies.
"Galaxies exist in dark matter halos that are much bigger than the galaxies; when galaxies form and merge together, the dark matter halo gets larger and the stars and gas sink to the middle of the halo," said Wright, who holds UCLA's David Saxon Presidential Chair in Physics. "What we're saying is one star in a thousand does not do that and instead gets distributed like dark matter. You can't see the dark matter very well, but we are proposing that it actually has a few stars in it -- only one-tenth of 1 percent of the number of stars in the bright part of the galaxy. One star in a thousand gets stripped out of the visible galaxy and gets distributed like the dark matter.
"The dark matter halo is not totally dark," Wright said. "A tiny fraction, one-tenth of a percent, of the stars in the central galaxy has been spread out into the halo, and this can produce the fluctuations that we see."
In large clusters of galaxies, astronomers have found much higher percentages of intra-halo light, as large as 20 percent, Wright said.
For this study, Cooray, Wright and colleagues used the Spitzer Space Telescope to produce an infrared map of a region of the sky in the constellation Boötes. The light has been traveling to us for 10 billion years.
"Presumably this light in halos occurs everywhere in the sky and just has not been measured anywhere else," said Wright, who is also principal investigator of NASA's Wide-field Infrared Survey Explorer (WISE) mission.
"If we can really understand the origin of the infrared background, we can understand when all of the light in the universe was produced and how much was produced," Wright said. "The history of all the production of light in the universe is encoded in this background. We're saying the fluctuations can be produced by the fuzzy edges of galaxies that existed at the same time that most of the stars were created, about 10 billion years ago."
The research was funded by the National Science Foundation, NASA and NASA's Jet Propulsion Laboratory.
Future research, especially with the James Webb Space Telescope, should provide further insights, Wright said.
"What we really need to be able to do is to see and identify the galaxies that are producing all the light in the infrared background," he said. "That could be done to a much greater extent once the James Webb Space Telescope is operational because it will be able to see much more distant, fainter galaxies."
Journal Reference:
  1. Asantha Cooray, Joseph Smidt, Francesco De Bernardis, Yan Gong, Daniel Stern, Matthew L. N. Ashby, Peter R. Eisenhardt, Christopher C. Frazer, Anthony H. Gonzalez, Christopher S. Kochanek, Szymon Kozłowski, Edward L. Wright. Near-infrared background anisotropies from diffuse intrahalo light of galaxies. Nature, 2012; 490 (7421): 514.

Saturday 15 September 2012

Enough Wind to Power Global Energy Demand: New Research Examines Limits, Climate Consequences

There is enough energy available in winds to meet all of the world's demand. Atmospheric turbines that convert steadier and faster high-altitude winds into energy could generate even more power than ground- and ocean-based units. New research from Carnegie's Ken Caldeira examines the limits of the amount of power that could be harvested from winds, as well as the effects high-altitude wind power could have on the climate as a whole.

The work was published September 9 in Nature Climate Change.
Led by Kate Marvel of Lawrence Livermore National Laboratory, who began this research at Carnegie, the team used models to quantify the amount of power that could be generated from both surface and atmospheric winds. Surface winds were defined as those that can be accessed by turbines supported by towers on land or rising out of the sea. High-altitude winds were defined as those that can be accessed by technology merging turbines and kites. The study looked only at the geophysical limitations of these techniques, not technical or economic factors.
Turbines create drag, or resistance, which removes momentum from the winds and tends to slow them. As the number of wind turbines increases, the amount of energy extracted increases. But at some point, the winds would be slowed so much that adding more turbines would not generate more electricity. This study focused on finding the point at which energy extraction is highest.
Using models, the team determined that more than 400 terawatts of power could be extracted from surface winds and more than 1,800 terawatts from winds throughout the atmosphere.
Today, civilization uses about 18 TW of power. Near-surface winds could provide more than 20 times today's global power demand and wind turbines on kites could potentially capture 100 times the current global power demand.
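The ratios quoted above follow directly from the study's headline numbers. As a back-of-the-envelope check (the variable names here are mine; the terawatt figures are the article's):

```python
# Back-of-the-envelope check of the ratios quoted above; all figures in
# terawatts come from the article itself.
GLOBAL_DEMAND_TW = 18       # today's total global power use
SURFACE_WIND_TW = 400       # study's estimate for surface winds
ATMOSPHERIC_WIND_TW = 1800  # study's estimate for all atmospheric winds

surface_ratio = SURFACE_WIND_TW / GLOBAL_DEMAND_TW          # ~22x demand
atmospheric_ratio = ATMOSPHERIC_WIND_TW / GLOBAL_DEMAND_TW  # 100x demand

print(f"surface: {surface_ratio:.0f}x, atmospheric: {atmospheric_ratio:.0f}x")
```

The surface-wind figure works out to roughly 22 times current demand, matching the article's "more than 20 times."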
At maximum levels of power extraction, there would be substantial climate effects to wind harvesting. But the study found that the climate effects of extracting wind energy at the level of current global demand would be small, as long as the turbines were spread out and not clustered in just a few regions. At the level of global energy demand, wind turbines might affect surface temperatures by about 0.1 degree Celsius and affect precipitation by about 1 percent. Overall, the environmental impacts would not be substantial.
"Looking at the big picture, it is more likely that economic, technological or political factors will determine the growth of wind power around the world, rather than geophysical limitations," Caldeira said.

Friday 14 September 2012

Length of Yellow Caution Traffic Lights Could Prevent Accidents, Researchers Say


A couple of years ago, Hesham Rakha misjudged a yellow traffic light and entered an intersection just as the light turned red. A police officer handed him a ticket.

"There are circumstances, as you approach a yellow light, where the decision is easy. If you are close to the intersection, you keep going. If you are far away, you stop. If you are almost at the intersection, you have to keep going because if you try to stop, you could cause a rear-end crash with the vehicle behind you and would be in the middle of the intersection anyway," said Rakha, professor of civil and environmental engineering at Virginia Tech.
He's not trying to defend his action. Rakha, director of the Center for Sustainable Mobility (www.vtti.vt.edu/csm.php) at the Virginia Tech Transportation Institute (www.vtti.vt.edu), is describing his research. Since 2005, his research group has been studying drivers' behaviors as they approach yellow lights. Their goal is to determine signal times for intersections that are safer and still efficient.
If a driver decides to stop instead of proceeding, rear-end crashes could occur. If a driver proceeds instead of stopping, collisions with side-street traffic could occur. Although observation-based research shows that only 1.4 percent of drivers cross the stop line after the light turns red, more than 20 percent of traffic fatalities in the United States occur at intersections.
"If the yellow time is not set correctly, a dilemma zone is imminent," Rakha said.
"The dilemma zone occurs when the driver has no feasible choice," he said. "In other words the driver can neither stop nor proceed through the intersection before the light turns red. This can also occur if the approaching vehicle is traveling faster than the posted speed limit and/or if the driver's perception and reaction time is longer than the design one-second value."
In most cases, the yellow time is set for 4.2 seconds on a 45 mph road. The time is longer for higher-speed roads. "These timings are based on two assumptions," Rakha explains. "Namely, that the driver requires one second to perceive and react to the change in signal indication, and that the driver requires 3.2 seconds to stop from 45 mph at a comfortable deceleration level, assumed to be 3 meters per second squared (3 m/s²) or 10 feet per second squared."
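The timing Rakha describes is an instance of the standard kinematic design rule: yellow time equals perception-reaction time plus the braking term v/(2a). A hypothetical sketch, using the article's stated assumptions (1 second reaction, 10 ft/s² deceleration):

```python
# Sketch of the conventional yellow-interval rule described above:
# Y = reaction time + v / (2 * a). The constants follow the article's
# assumptions; this is illustrative, not the researchers' own code.
MPH_TO_MS = 0.44704  # meters per second per mile per hour

def yellow_interval(speed_mph, reaction_s=1.0, decel_ms2=3.048):
    """Perception-reaction time plus time consumed braking from speed v
    at constant deceleration a (the braking term is v / (2a))."""
    v = speed_mph * MPH_TO_MS
    return reaction_s + v / (2 * decel_ms2)

# 45 mph -> 1.0 s + 3.3 s = 4.3 s, close to the 4.2 s the article cites
print(round(yellow_interval(45), 1))
```

With these constants the formula gives about 4.3 seconds at 45 mph, in line with the 4.2-second setting the article quotes (the small gap comes from rounding the braking term).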
For his studies, Rakha used Virginia's Smart Road, located at the Virginia Tech Transportation Institute. The Smart Road intersection has a signal that can be controlled for the length of the red, yellow and green lights. "We can study driver behavior by changing the signal when the driver is a certain distance from the intersection."
Center for Sustainable Mobility researchers have determined that half of drivers make the stop-go decision three seconds before the stop line. Of those that go, few clear the intersection before the light changes to red. In Virginia, if you are in the intersection when the light turns red, you are not running a red light. However, there is still risk.
The specific findings from the Smart Road study are that 43 percent of drivers who crossed the stop line during the yellow time were not able to clear the intersection before the light turned red. At 45 mph, it takes 1.5 seconds to clear a 30-meter (98.4 feet) intersection. "If the all-red interval is the minimum conventional one second, then there is a potential risk that the legal yellow-light runners would not be able to completely clear the intersection at the instant the side-street traffic gains the right-of-way," the researchers reported at the Transportation Research Board Annual Meeting in 2010.
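The clearance figure above is simple kinematics: crossing distance divided by speed. A quick check, using the numbers from the text:

```python
# Quick check of the clearance time quoted above: how long to cross a
# 30-meter (98.4 ft) intersection at a steady 45 mph. Both numbers come
# from the text; the function is my own illustration.
MPH_TO_MS = 0.44704

def clearance_time(width_m, speed_mph):
    """Time to traverse the intersection at constant speed."""
    return width_m / (speed_mph * MPH_TO_MS)

print(round(clearance_time(30, 45), 2))  # about 1.49 s, i.e. roughly 1.5 s
```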
People over 60 years of age have a longer perception-reaction time, so they have to brake harder to stop. But they are more likely to try to stop, compared to younger drivers. However, if they keep going, they are unlikely to clear the intersection, the researchers report.
"Even if yellow timing is designed properly to avoid a dilemma zone, someone driving above the speed limit could encounter a dilemma because it takes longer to stop from a higher speed. They could speed up, but our studies show that drivers who keep going usually maintain their speed," Rakha said.
The research determined that the perception-reaction time is slightly longer than one second, but that driver deceleration levels are significantly higher than the deceleration level assumed for traffic signal design.
If the road conditions are poor, drivers react 15 percent slower because they are processing more information. Deceleration level decreases by 8 percent. "In such conditions, you need a longer yellow," Rakha said.
He and his team have developed a novel procedure to compute the yellow and change interval durations that accounts for the risk of drivers being caught in the dilemma zone. Using this procedure, they have created tables for light vehicles at various speeds on dry and wet roads, so that traffic planners can set traffic signals that adjust to roadway surface and weather conditions. They also created tables for the driving characteristics of different age groups.
One strategy to make intersections safer might be wider use of caution lights that tell drivers a green light is about to change, so the driver has a longer time to react. Such systems are now used on high-speed roads, where the stopping distance is longer, and where the lighted intersection is not visible until the last seconds.
A future strategy that researchers are investigating is in-car display systems that can be customized to each driver's reaction time. "So one person receives a four-second warning of a light change, and another person receives a five-second warning. Or, instead of a warning, the system might just tell you to stop," said Rakha.
Research to implement vehicle-to-vehicle and infrastructure-to-vehicle communication is ongoing, including at the Virginia Tech Transportation Institute Tier 1 University Transportation Center. Rakha's research group is studying vehicle-to-vehicle communication at intersections.
The research has been presented at the 2010 and 2011 Transportation Research Board annual meetings. Articles published in 2012 are:
"Novel Stochastic Procedure for Designing Yellow Intervals at Signalized Intersections," by Ahmed Amer, transportation engineer with Vanasse Hangen Brustlin Inc.; Rakha; and Ihab El-Shawarby, assistant professor at Ain-Shams University in Cairo and senior research associate with the Center for Sustainable Mobility at the Virginia Tech Transportation Institute. It appeared in the June 1, 2012, Journal of Transportation Engineering.
"Designing Yellow Intervals for Rainy and Wet Roadway Conditions," by Huan Li of Blacksburg, who has received his master of science degree in civil engineering; Rakha; and El-Shawarby. The article appeared in the spring 2012 issue of the International Journal of Transportation Science and Technology.

Saturday 8 September 2012

NASA's Kepler Discovers Multiple Planets Orbiting a Pair of Stars

Coming less than a year after the announcement of the first circumbinary planet, Kepler-16b, NASA's Kepler mission has discovered multiple transiting planets orbiting two suns for the first time. This system, known as a circumbinary planetary system, is 4,900 light-years from Earth in the constellation Cygnus.


This discovery proves that more than one planet can form and persist in the stressful realm of a binary star and demonstrates the diversity of planetary systems in our galaxy.
Astronomers detected two planets in the Kepler-47 system, a pair of orbiting stars that eclipse each other every 7.5 days from our vantage point on Earth. One star is similar to the sun in size, but only 84 percent as bright. The second star is diminutive, measuring only one-third the size of the sun and less than 1 percent as bright.
"In contrast to a single planet orbiting a single star, the planet in a circumbinary system must transit a 'moving target.' As a consequence, time intervals between the transits and their durations can vary substantially, sometimes short, other times long," said Jerome Orosz, associate professor of astronomy at San Diego State University and lead author of the paper. "The intervals were the telltale sign these planets are in circumbinary orbits."
The inner planet, Kepler-47b, orbits the pair of stars in less than 50 days. While it cannot be directly viewed, it is thought to be a sweltering world, where the destruction of methane in its super-heated atmosphere might lead to a thick haze that could blanket the planet. At three times the radius of Earth, Kepler-47b is the smallest known transiting circumbinary planet.
The outer planet, Kepler-47c, orbits its host pair every 303 days, placing it in the so-called "habitable zone," the region in a planetary system where liquid water might exist on the surface of a planet. While not a world hospitable for life, Kepler-47c is thought to be a gaseous giant slightly larger than Neptune, where an atmosphere of thick bright water-vapor clouds might exist.
"Unlike our sun, many stars are part of multiple-star systems where two or more stars orbit one another. The question always has been -- do they have planets and planetary systems? This Kepler discovery proves that they do," said William Borucki, Kepler mission principal investigator at NASA's Ames Research Center in Moffett Field, Calif. "In our search for habitable planets, we have found more opportunities for life to exist."
To search for transiting planets, the research team used data from the Kepler space telescope, which measures dips in the brightness of more than 150,000 stars. Additional ground-based spectroscopic observations using telescopes at the McDonald Observatory at the University of Texas at Austin helped characterize the stellar properties. The findings are published in the journal Science.
"The presence of a full-fledged circumbinary planetary system orbiting Kepler-47 is an amazing discovery," said Greg Laughlin, professor of Astrophysics and Planetary Science at the University of California in Santa Cruz. "These planets are very difficult to form using the currently accepted paradigm, and I believe that theorists, myself included, will be going back to the drawing board to try to improve our understanding of how planets are assembled in dusty circumbinary disks."
Ames manages Kepler's ground system development, mission operations and science data analysis. NASA's Jet Propulsion Laboratory in Pasadena, Calif., managed the Kepler mission development.
Ball Aerospace & Technologies Corp. in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder.
The Space Telescope Science Institute in Baltimore archives, hosts and distributes Kepler science data. Kepler is NASA's tenth Discovery Mission and is funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.

Saturday 7 April 2012

Standoff Sensing Enters New Realm With Dual-Laser Technique


 Identifying chemicals from a distance could take a step forward with the introduction of a two-laser system being developed at the Department of Energy's Oak Ridge National Laboratory.

In a paper published in the Journal of Physics D: Applied Physics, Ali Passian and colleagues present a technique that uses a quantum cascade laser to "pump," or strike, a target, and another laser to monitor the material's response as a result of temperature-induced changes. That information allows for the rapid identification of chemicals and biological agents.
"With two lasers, one serves as the pump and the other is the probe," said Passian, a member of ORNL's Measurement Science and Systems Engineering Division. "The novel aspect to our approach is that the second laser extracts information and allows us to do this without resorting to a weak return signal.
"The use of a second laser provides a robust and stable readout approach independent of the pump laser settings."
While this approach is similar to radar and lidar sensing techniques in that it uses a return signal to carry information about the molecules to be detected, it differs in a number of ways.
"First is the use of photothermal spectroscopy configuration where the pump and probe beams are nearly parallel," Passian said. "We use probe beam reflectometry as the return signal in standoff applications, thereby minimizing the need for wavelength-dependent expensive infrared components such as cameras, telescopes and detectors."
This work represents a proof of principle success that Passian and co-author Rubye Farahi said could lead to advances in standoff detectors with potential applications in quality control, forensics, airport security, medicine and the military. In their paper, the researchers also noted that measurements obtained using their technique may set the stage for hyperspectral imaging.
"This would allow us to effectively take slices of chemical images and gain resolution down to individual pixels," said Passian, who added that this observation is based on cell-by-cell measurements obtained with their variation of photothermal spectroscopy. Hyperspectral imaging provides not only high-resolution chemical information, but topographical information as well.
Other authors are ORNL's Laurene Tetard, a Wigner Fellow, and Thomas Thundat of the University of Alberta. Funding for this research was provided by ORNL's Laboratory Directed Research and Development program.

Friday 6 April 2012

Swarming and Transporting


On its own, an ant is not particularly clever. But in a community, the insects can solve complicated tasks. Researchers intend to put this "swarm intelligence" to use in the logistics field. Lots of autonomous transport shuttles would provide an alternative to traditional materials-handling technology.

The orange-colored vehicle begins moving with a quiet whirr. Soon afterwards the next shuttles begin to move, and before long there are dozens of mini-transporters rolling around in the hall. As if by magic, they head for the high-rack storage shelves or spin around their own axis. But the Multishuttle Moves® -- the name given to these driverless transport vehicles -- are not performing some robots' ballet. They are moving around in the service of science. At the Fraunhofer Institute for Material Flow and Logistics IML in Dortmund, Germany, researchers are working to harness swarm intelligence as a means of improving the flow of materials and goods in the warehouse environment. In a research hall 1,000 square meters in size, the scientists have replicated a small-scale distribution warehouse with storage shelves for 600 small-part carriers and eight picking stations. The heart of the testing facility is a swarm of 50 autonomous vehicles. "In the future, transport systems should be able to perform all of these tasks autonomously, from removal from storage at the shelf to delivery to a picking station. This will provide an alternative to conventional materials-handling solutions," explains Prof. Dr. Michael ten Hompel, executive director at IML.
But how do the vehicles know what they should transport, and where, and which of the 50 shuttles will take on any particular order? "The driverless transport vehicles are locally controlled. The 'intelligence' is in the transporters themselves," explains Dipl.-Ing. Thomas Albrecht, head of the Autonomous Transport Systems department, describing the researchers' approach. "We rely on agent-based software and use ant algorithms based on the work of Marco Dorigo. These are methods of combinatorial optimization modeled on the behavior of real ants searching for food." When an order is received, the shuttles are informed of it through a software agent. They then coordinate with one another via WLAN to determine which shuttle can take over the load. The job goes to whichever free transport system is closest.
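The dispatch rule described above -- the order goes to the closest free shuttle -- can be sketched in a few lines. This is my own illustrative simplification; the real system relies on agent software and ant algorithms coordinating over WLAN, and all names below are hypothetical:

```python
# Minimal sketch of "closest free shuttle wins" dispatch. Illustrative
# only: the actual Fraunhofer IML system uses distributed agent software,
# not a central function like this one.
import math

def assign_order(pickup, shuttles):
    """Return the name of the closest free shuttle, or None if all busy.

    pickup: (x, y) position of the load.
    shuttles: {name: ((x, y), busy_flag)}.
    """
    free = [(math.dist(pos, pickup), name)
            for name, (pos, busy) in shuttles.items() if not busy]
    return min(free)[1] if free else None

fleet = {"S1": ((0.0, 0.0), False),
         "S2": ((5.0, 2.0), False),
         "S3": ((1.0, 1.0), True)}   # S3 is already carrying a load
print(assign_order((1.0, 0.5), fleet))  # S1: nearest shuttle that is free
```

In the real system this decision is negotiated among the shuttles themselves rather than computed centrally, which is what makes the fleet scalable.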
The shuttles are completely unimpeded as they navigate throughout the space -- with no guide lines. Their integrated localization and navigation technology makes this possible. The vehicles have a newly developed, hybrid sensor concept with signal-based location capability, distance and acceleration sensors and laser scanners. This way, the vehicles can compute the shortest route to any destination. The sensors also help prevent collisions.
The vehicles are based on the components of the shelf-bound Multishuttle already successfully in use for several years. The researchers at IML have worked with colleagues at Dematic to develop the system further. The special feature of the Multishuttle Moves®: the transporters can navigate both in the storage area and in the hall. To accomplish this, the shuttles are fitted with an additional floor running gear. But what benefits do these autonomous transporters offer compared with conventional steady materials-handling technology with roller tracks? "The system is considerably more flexible and scalable," Albrecht points out. It can grow or shrink depending on the needs at hand, so system performance can be adapted to seasonal and daily fluctuations. Another benefit: it considerably shortens transportation paths. In conventional storage facilities, materials-handling equipment obstructs the area between high-rack storage and picking stations, so packages must travel two to three times farther than the direct route. It also makes shelf-control units and steady materials-handling technology unnecessary, Albrecht adds. Researchers are now trying to determine how these autonomous transporters can improve intralogistics. "We want to demonstrate that cellular materials-handling technology makes sense not only technically but also economically as an alternative to classic materials-handling technology and shelf-control units," institute executive director ten Hompel observes. If this succeeds, the autonomous vehicles could soon be going into service in warehouses.

Thursday 5 April 2012

New Theory On Size of Black Holes: Gas-Guzzling Black Holes Eat Two Courses at a Time


Astronomers have put forward a new theory about why black holes become so hugely massive -- claiming some of them have no 'table manners', and tip their 'food' directly into their mouths, eating more than one course simultaneously.
Researchers from the UK and Australia investigated how some black holes grow so fast that they are billions of times heavier than the sun.
The team from the University of Leicester (UK) and Monash University in Australia sought to establish how black holes got so big so fast.
Professor Andrew King from the Department of Physics and Astronomy, University of Leicester, said: "Almost every galaxy has an enormously massive black hole in its center. Our own galaxy, the Milky Way, has one about four million times heavier than the sun. But some galaxies have black holes a thousand times heavier still. We know they grew very quickly after the Big Bang."
"These hugely massive black holes were already full-grown when the universe was very young, less than a tenth of its present age."
Black holes grow by sucking in gas. This forms a disc around the hole and spirals in, but usually so slowly that the holes could not have grown to these huge masses in the entire age of the universe. "We needed a faster mechanism," says Chris Nixon, also at Leicester, "so we wondered what would happen if gas came in from different directions."
Nixon, King and their colleague Daniel Price in Australia made a computer simulation of two gas discs orbiting a black hole at different angles. After a short time the discs spread and collide, and large amounts of gas fall into the hole. According to their calculations black holes can grow 1,000 times faster when this happens.
"If two guys ride motorbikes on a Wall of Death and they collide, they lose the centrifugal force holding them to the walls and fall," says King. The same thing happens to the gas in these discs, and it falls in towards the hole.
This may explain how these black holes got so big so fast. "We don't know exactly how gas flows inside galaxies in the early universe," said King, "but I think it is very promising that if the flows are chaotic it is very easy for the black hole to feed."
The two biggest black holes ever discovered each have a mass about ten billion times that of the Sun.
Their research is due to be published in the Monthly Notices of the Royal Astronomical Society. The research was funded by the UK Science and Technology Facilities Council.

Wednesday 4 April 2012

Shiny New Tool for Imaging Biomolecules


At the heart of the immune system that protects our bodies from disease and foreign invaders is a vast and complex communications network involving millions of cells, sending and receiving chemical signals that can mean life or death. At the heart of this vast cellular signaling network are interactions between billions of proteins and other biomolecules. These interactions, in turn, are greatly influenced by the spatial patterning of signaling and receptor molecules. The ability to observe signaling spatial patterns in the immune and other cellular systems as they evolve, and to study the impact on molecular interactions and, ultimately, cellular communication, would be a critical tool in the fight against immunological and other disorders that lead to a broad range of health problems including cancer. Such a tool is now at hand.

Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, have developed the first practical application of optical nanoantennas in cell membrane biology. A scientific team led by chemist Jay Groves has developed a technique for lacing artificial lipid membranes with billions of gold "bowtie" nanoantennas. Through the phenomenon known as "plasmonics," these nanoantennas can boost the intensity of a fluorescent or Raman optical signal from a protein passing through a plasmonic "hot-spot" tens of thousands of times without the protein ever being touched.
"Our technique is minimally invasive since enhancement of optical signals is achieved without requiring the molecules to directly interact with the nanoantenna," Groves says. "This is an important improvement over methods that rely on adsorption of molecules directly onto antennas where their structure, orientation, and behavior can all be altered."
Groves holds joint appointments with Berkeley Lab's Physical Biosciences Division and UC Berkeley's Chemistry Department, and is also a Howard Hughes Medical Institute investigator. He is the corresponding author of a paper that reports these results in the journal Nano Letters. The paper is titled "Single Molecule Tracking on Supported Membranes with Arrays of Optical Nanoantennas." Co-authoring the paper were Theo Lohmuller, Lars Iversen, Mark Schmidt, Christopher Rhodes, Hsiung-Lin Tu and Wan-Chen Lin.
Fluorescent emissions, in which biomolecules of interest are tagged with dyes that fluoresce when stimulated by light, and Raman spectroscopy, in which the scattering of light by molecular vibrations is used to identify and locate biomolecules, are work-horse optical imaging techniques whose value has been further enhanced by the emergence of plasmonics. In plasmonics, light waves are squeezed into areas with dimensions smaller than half the wavelength of the incident photons, making it possible to apply optical imaging techniques to nanoscale objects such as biomolecules. Nano-sized gold particles in the shape of triangles that are paired in a tip-to-tip formation, like a bow-tie, can serve as optical antennas, capturing and concentrating light waves into well-defined hot spots, where the plasmonic effect is greatly amplified. Although the concept is well-established, applying it to biomolecular studies has been a challenge because gold particle arrays must be fabricated with well-defined nanometer spacing, and molecules of interest must be delivered to plasmonic hot-spots.
"We're able to fabricate billions of gold nanoantennas in an artificial membrane through a combination of colloid lithography and plasma processing," Groves says. "Controlled spacing of the nanoantenna gaps is achieved by taking advantage of the fact that polystyrene particles melt together at their contact point during plasma processing. The result is well-defined spacing between each pair of gold triangles in the final array with a tip-to-tip distance between neighboring gold nanotriangles measuring in the 5-to-100 nanometer range."
Until now, Groves says, it has not been possible to decouple the size of the gold nanotriangles, which determines their surface plasmon resonance frequency, from the tip-to-tip distance between the individual nanoparticle features, which is responsible for enhancing the plasmonic effect. With their colloidal lithography approach, a self-assembling hexagonal monolayer of polymer spheres is used to shadow mask a substrate for subsequent deposition of the gold nanoparticles. When the colloidal mask is removed, what remains are large arrays of gold nanoparticles and triangles over which the artificial membrane can be formed.
The unique artificial membranes, which Groves and his research group developed earlier, are another key to the success of this latest achievement. Made from a fluid bilayer of lipid molecules, these membranes are the first biological platforms that can combine fixed nanopatterning with the mobility of fluid bilayers. They provide an unprecedented capability for the study of how the spatial patterns of chemical and physical properties on membrane surfaces influence the behavior of cells.
"When we embed our artificial membranes with gold nanoantennas we can trace the trajectories of freely diffusing individual proteins as they sequentially pass through and are enhanced by the multiple gaps between the triangles," Groves says. "This allows us to study a realistic system, like a cell, which can involve billions of molecules, without the static entrapment of the molecules."
As molecules in living cells are generally in a state of perpetual motion, it is often their movement and interactions with other molecules rather than static positions that determine their functions within the cell. Groves says that any technique requiring direct adsorption of a molecule of interest onto a nanoantenna intrinsically removes that molecule from the functioning ensemble that is the essence of its natural behavior. The technique he and his co-authors have developed allows them to look at individual biomolecules but within the context of their surrounding community.
"The idea that optical nanoantennas can produce the kinds of enhanced signals we are observing has been known for years but this is the first time that nanoantennas have been fabricated into a fluid membrane so that we can observe every molecule in the system as it passes through the antenna array," Groves says. "This is more than a proof-of-concept we've shown that we now have a useful new tool to add to our repertoire."
This research was primarily supported by the DOE Office of Science.

Tuesday 3 April 2012

Laser Hints at How Universe Got Its Magnetism


Scientists have used a laser to create magnetic fields similar to those thought to be involved in the formation of the first galaxies, findings that could help to solve the riddle of how the Universe got its magnetism.

Magnetic fields exist throughout galactic and intergalactic space; what is puzzling is how they were originally created and how they became so strong.
A team, led by Oxford University physicists, used a high-power laser to explode a rod of carbon, similar to pencil lead, in helium gas. The explosion was designed to mimic the cauldron of plasma -- an ionized gas containing free electrons and positive ions -- out of which the first galaxies formed.
The team found that within a microsecond of the explosion, strong electron currents and magnetic fields formed around a shock wave. Astrophysicists took these results and scaled them up through 22 orders of magnitude to find that their measurements matched the 'magnetic seeds' predicted by theoretical studies of galaxy formation.
A report of the research is published in a recent issue of the journal Nature.
'Our experiment recreates what was happening in the early Universe and shows how galactic magnetic fields might have first appeared,' said Dr Gianluca Gregori of Oxford University's Department of Physics, who led the work at Oxford. 'It opens up the exciting prospect that we will be able to explore the physics of the cosmos, stretching back billions of years, in a laser laboratory here on Earth.'
The results closely match theories which predict that tiny magnetic fields -- 'magnetic seeds' -- precede the formation of galaxies. These fields can be amplified by turbulent motions and can strongly affect the evolution of the galactic medium from its early stages.
Dr Gregori said: 'In the future, we plan to use the largest lasers in the world, such as the National Ignition Facility at the Lawrence Livermore National Laboratory in California (USA), to study the evolution of cosmic plasma.'
The experiments were conducted at the Laboratoire pour l'Utilisation de Lasers Intenses laser facility in France.

Monday 2 April 2012

Materials Inspired by Mother Nature: One-Pound Boat That Could Float 1,000 Pounds


Combining the secrets that enable water striders to walk on water and give wood its lightness and great strength has yielded an amazing new material so buoyant that, in everyday terms, a boat made from 1 pound of the substance could carry five kitchen refrigerators, about 1,000 pounds.

One of the lightest solid substances in the world, which is also sustainable, it was among the topics of a symposium in San Diego March 25 at the 243rd National Meeting & Exposition of the American Chemical Society, the world's largest scientific society. The symposium focused on an emerging field called biomimetics, in which scientists literally take inspiration from Mother Nature, probing and adapting biological systems in plants and animals for use in medicine, industry and other fields.
Olli Ikkala, Ph.D., described the new buoyant material, engineered to mimic the water strider's long, thin feet and made from an "aerogel" composed of the tiny nano-fibrils from the cellulose in plants. Aerogels are so light that some of them are denoted as "solid smoke." The nanocellulose aerogels also have remarkable mechanical properties and are flexible.
"These materials have really spectacular properties that could be used in practical ways," said Ikkala. He is with Helsinki University of Technology in Espoo, Finland. Potential applications range from cleaning up oil spills to helping create such products as sensors for detecting environmental pollution, miniaturized military robots, and even children's toys and super-buoyant beach floats.
Ikkala's presentation was among almost two dozen reports in the symposium titled "Cellulose-Based Biomimetic and Biomedical Materials," which focused on the use of specially processed cellulose in the design and engineering of materials modeled after biological systems. Cellulose consists of long chains of the sugar glucose linked together into a polymer, a natural plastic-like material. Cellulose gives wood its remarkable strength and is the main component of plant stems, leaves and roots. Traditionally, cellulose's main commercial uses have been in producing paper and textiles -- cotton being a pure form of cellulose. But development of a highly processed form of cellulose, termed nanocellulose, has expanded those applications and sparked intense scientific research. Nanocellulose consists of fibrils with diameters so small that 50,000 would fit across the width of the period at the end of this sentence.
"We are in the middle of a Golden Age, in which a clearer understanding of the forms and functions of cellulose architectures in biological systems is promoting the evolution of advanced materials," said Harry Brumer, Ph.D., of Michael Smith Laboratories, University of British Columbia, Vancouver. He was a co-organizer of the symposium with J. Vincent Edwards, Ph.D., a research chemist with the Agricultural Research Service, U.S. Department of Agriculture in New Orleans, Louisiana. "This session on cellulose-based biomimetic and biomedical materials is really very timely due to the sustained and growing interest in the use of cellulose, particularly nanoscale cellulose, in biomaterials."
Ikkala pointed out that cellulose is the most abundant polymer on Earth, a renewable and sustainable raw material that could be used in many new ways. In addition, nanocellulose promises advanced structural materials similar to metals, such as high-tech spun fibers and films.
"It can be of great potential value in helping the world shift to materials that do not require petroleum for manufacture," Ikkala explained. "The use of wood-based cellulose does not influence the food supply or prices, like corn or other crops. We are really delighted to see how cellulose is moving beyond traditional applications, such as paper and textiles, and finding new high-tech applications."
One application was in Ikkala's so-called "nanocellulose carriers" that have such great buoyancy. In developing the new material, Ikkala's team turned nanocellulose into an aerogel. Aerogels can be made from a variety of materials, even the silica in beach sand, and some are only a few times denser than air itself. By one estimate, if Michelangelo's famous statue David were made out of an aerogel rather than marble, it would weigh less than 5 pounds.
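The 1-pound-boat figure from the lede is straightforward Archimedes arithmetic. The sketch below checks it under two stated assumptions: fresh water at standard density, and a hull that acts as a rigid displacing shell.

```python
# Archimedes' principle: the water displaced must weigh as much as the
# boat plus its cargo. Assumed values: fresh water, rigid watertight hull.
LB_TO_KG = 0.4536            # pounds to kilograms
RHO_WATER = 1000.0           # density of fresh water, kg/m^3

boat_kg = 1.0 * LB_TO_KG     # the 1-pound aerogel boat
cargo_kg = 1000.0 * LB_TO_KG # the claimed 1,000-pound load

displaced_m3 = (boat_kg + cargo_kg) / RHO_WATER
print(displaced_m3)          # ~0.45 m^3 of water must be displaced
```

So the claim is plausible as long as the hull encloses roughly half a cubic meter, about the volume of those five kitchen refrigerators; the aerogel's own weight is a negligible part of the load.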
The team incorporated into the nanocellulose aerogel features that enable the water strider to walk on water. The material is not only highly buoyant, but is capable of absorbing huge amounts of oil, opening the way for potential use in cleaning up oil spills. The material would float on the surface, absorbing the oil without sinking. Clean-up workers, then, could retrieve it and recover the oil.
The American Chemical Society is a non-profit organization chartered by the U.S. Congress. With more than 164,000 members, ACS is the world's largest scientific society and a global leader in providing access to chemistry-related research through its multiple databases, peer-reviewed journals and scientific conferences. Its main offices are in Washington, D.C., and Columbus, Ohio.

Sunday 1 April 2012

Single Molecules in a Quantum Movie


The quantum physics of massive particles has intrigued physicists for more than 80 years, since it predicts that even complex particles can exhibit wave-like behaviour -- in conflict with our everyday ideas of what is real or local. An international team of scientists has now succeeded in shooting a movie that shows the build-up of a matter-wave interference pattern from single dye molecules, a pattern so large (up to 0.1 mm) that it can easily be seen with a camera. This visualizes the dualities of particle and wave, randomness and determinism, locality and delocalization in a particularly intuitive way.

A quantum premiere with dye molecules as leading actors
Physicist Richard Feynman once claimed that interference effects caused by matter-waves contain the only mystery of quantum physics. Understanding and applying matter waves for new technologies is also at the heart of the research pursued by the Quantum Nanophysics team around Markus Arndt at the University of Vienna and the Vienna Center for Quantum Science and Technology.
The scientists now premiered a movie that shows the build-up of a quantum interference pattern from stochastically arriving single phthalocyanine particles after these highly fluorescent dye molecules traversed an ultra-thin nanograting. As soon as the molecules arrive on the screen, the researchers take live images using a spatially resolving fluorescence microscope whose sensitivity is so high that each molecule can be imaged and located individually with an accuracy of about 10 nanometers. This is less than a thousandth of the diameter of a human hair and still less than 1/60 of the wavelength of the imaging light.
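The scale of the pattern follows from the de Broglie relation and simple far-field diffraction geometry. The estimate below uses the mass of phthalocyanine (C32H18N8) but assumed round numbers for the beam velocity, grating period and flight distance, which are not given above; it is an order-of-magnitude sketch, not the experiment's exact parameters.

```python
# De Broglie wavelength and far-field fringe spacing for a heavy molecule.
H = 6.62607015e-34           # Planck constant, J*s
AMU = 1.66053907e-27         # atomic mass unit, kg

m = 514.5 * AMU              # phthalocyanine (C32H18N8) molecular mass
v = 150.0                    # assumed beam velocity, m/s
wavelength = H / (m * v)     # de Broglie wavelength: a few picometers

d = 100e-9                   # assumed grating period, 100 nm
L = 0.6                      # assumed grating-to-screen distance, m
fringe = wavelength * L / d  # fringe spacing on the screen
print(wavelength, fringe)
```

Even with a picometer-scale matter wavelength, the long flight distance and fine grating stretch the fringes to tens of microns, which is why a pattern of 0.1 mm extent is resolvable with an optical microscope.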
A breath of nothing
In these experiments van der Waals forces between the molecules and the gratings pose a particular challenge. These forces arise due to quantum fluctuations and strongly affect the observed interference pattern. In order to reduce the van der Waals interaction the scientists used gratings as thin as 10 nanometers (only about 50 silicon nitride layers). These ultra-thin gratings were manufactured by the nanotechnology team around Ori Cheshnovski at the Tel Aviv University who used a focused ion beam to cut the required slits into a free-standing membrane.
Tailored nanoparticles
In this study the experiments could be extended to heavier phthalocyanine derivatives, which were tailor-made by Marcel Mayor and his group at the University of Basel. They represent the most massive molecules in quantum far-field diffraction so far.
Motivation and continuation
The newly developed and combined micro- and nanotechnologies for generating, diffracting and detecting molecular beams will be important for extending quantum interference experiments to more and more complex molecules but also for atom interferometry.
The experiments have a strongly didactic component: they reveal the single-particle character of complex quantum diffraction patterns on a macroscopic scale that is visible to the eye. You can see them emerge in real time, and they last for hours on the screen. The experiments thus render the wave-particle duality of quantum physics particularly tangible and conspicuous.
The experiments have a practical side, too. They make it possible to access molecular properties close to solid interfaces, and they point the way toward future diffraction studies at atomically thin membranes.
Seeing is believing: the movie by Thomas Juffmann et al. is published on March 25 in Nature Nanotechnology. This project was supported by the Austrian FWF Z149-N16 (Wittgenstein), ESF/FWF/SNF MIME (I146) and the Swiss SNF in the NCCR "Nanoscale Science."

Saturday 31 March 2012

Can a Machine Tell When You're Lying? Research Suggests the Answer Is 'Yes'


Inspired by the work of psychologists who study the human face for clues that someone is telling a high-stakes lie, UB computer scientists are exploring whether machines can also read the visual cues that give away deceit.
Results so far are promising: In a study of 40 videotaped conversations, an automated system that analyzed eye movements correctly identified whether interview subjects were lying or telling the truth 82.5 percent of the time.
That's a better accuracy rate than expert human interrogators typically achieve in lie-detection judgment experiments, said Ifeoma Nwogu, a research assistant professor at UB's Center for Unified Biometrics and Sensors (CUBS) who helped develop the system. In published results, even experienced interrogators average closer to 65 percent, Nwogu said.
"What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes," said Nwogu, whose full name is pronounced "e-fo-ma nwo-gu."
The research was peer-reviewed, published and presented as part of the 2011 IEEE Conference on Automatic Face and Gesture Recognition.
Nwogu's colleagues on the study included CUBS scientists Nisha Bhaskaran and Venu Govindaraju, and UB communication professor Mark G. Frank, a behavioral scientist whose primary area of research has been facial expressions and deception.
In the past, Frank's attempts to automate deceit detection have used systems that analyze changes in body heat or examine a slew of involuntary facial expressions.
The automated UB system tracked a different trait -- eye movement. The system employed a statistical technique to model how people moved their eyes in two distinct situations: during regular conversation, and while fielding a question designed to prompt a lie.
People whose pattern of eye movements changed between the first and second scenario were assumed to be lying, while those who maintained consistent eye movement were assumed to be telling the truth. In other words, when the critical question was asked, a strong deviation from normal eye movement patterns suggested a lie.
Previous experiments in which human judges coded facial movements found documentable differences in eye contact at times when subjects told a high-stakes lie.
What Nwogu and fellow computer scientists did was create an automated system that could verify and improve upon information used by human coders to successfully classify liars and truth tellers. The next step will be to expand the number of subjects studied and develop automated systems that analyze body language in addition to eye contact.
Nwogu said that while the sample size was small, the findings are exciting.
They suggest that computers may be able to learn enough about a person's behavior in a short time to assist with a task that challenges even experienced interrogators. The videos used in the study showed people with various skin colors, head poses, lighting and obstructions such as glasses.
This does not mean machines are ready to replace human questioners, however -- only that computers can be a helpful tool in identifying liars, Nwogu said.
She noted that the technology is not foolproof: A very small percentage of subjects studied were excellent liars, maintaining their usual eye movement patterns as they lied. Also, the nature of an interrogation and interrogators' expertise can influence the effectiveness of the lie-detection method.
The videos used in the study were culled from a set of 132 that Frank recorded during a previous experiment.
In Frank's original study, 132 interview subjects were given the option to "steal" a check made out to a political party or cause they strongly opposed.
Subjects who took the check but lied about it successfully to a retired law enforcement interrogator received rewards for themselves and a group they supported; subjects caught lying incurred a penalty: they and their group received no money, but the group they despised did. Subjects who did not steal the check faced a similar penalty if judged to be lying, but received a smaller sum for being judged truthful.
The interrogators opened each interview by posing basic, everyday questions. Following this mundane conversation, the interrogators asked about the check. At this critical point, the monetary rewards and penalties increased the stakes of lying, creating an incentive to deceive and do it well.
In their study on automated deceit detection, Nwogu and her colleagues selected 40 videotaped interrogations.
They used the mundane beginning of each to establish what normal, baseline eye movement looked like for each subject, focusing on the rate of blinking and the frequency with which people shifted their direction of gaze.
The scientists then used their automated system to compare each subject's baseline eye movements with eye movements during the critical section of each interrogation -- the point at which interrogators stopped asking everyday questions and began inquiring about the check.
If the machine detected unusual variations from baseline eye movements at this time, the researchers predicted the subject was lying.
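The comparison described above can be reduced to a minimal sketch: summarize each phase by a few eye-movement rates, then flag a lie when the critical phase deviates too far from baseline. The features (blinks and gaze shifts per minute) and the 50 percent deviation threshold here are hypothetical stand-ins for the statistical model the UB team actually trained.

```python
# Baseline-vs-critical deviation check, a toy version of the UB approach.
def deviation_score(baseline, critical):
    """Largest relative change across the eye-movement features."""
    return max(abs(c - b) / b for b, c in zip(baseline, critical))

def classify(baseline, critical, threshold=0.5):
    """Flag a lie when any feature shifts by more than the threshold."""
    return "lie" if deviation_score(baseline, critical) > threshold else "truth"

# Features: (blinks per minute, gaze shifts per minute), invented numbers.
print(classify((12.0, 4.0), (22.0, 9.0)))   # large deviation -> "lie"
print(classify((12.0, 4.0), (12.5, 4.1)))   # consistent      -> "truth"
```

Note how this framing also explains the system's blind spot: a practiced liar who holds baseline eye behavior steady during the critical questions produces no deviation to detect.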

Friday 30 March 2012

New 'Thermal' Approach to Invisibility Cloaking Hides Heat to Enhance Technology


In a new approach to invisibility cloaking, a team of French researchers has proposed isolating or cloaking objects from sources of heat -- essentially "thermal cloaking." This method, which the researchers describe in the Optical Society's (OSA) open-access journal Optics Express, taps into some of the same principles as optical cloaking and may lead to novel ways to control heat in electronics and, on an even larger scale, might someday prove useful for spacecraft and solar technologies.

Recent advances in invisibility cloaks are based on the physics of transformation optics, which involves metamaterials and bending light so that it propagates around a space rather than through it. Sebastien Guenneau, affiliated with both the University of Aix-Marseille and France's Centre National de la Recherche Scientifique (CNRS), decided to investigate, with CNRS colleagues, whether a similar approach might be possible for thermal diffusion.
"Our key goal with this research was to control the way heat diffuses in a manner similar to those that have already been achieved for waves, such as light waves or sound waves, by using the tools of transformation optics," says Guenneau.
Though this technology uses the same fundamental theories as recent advances in optical cloaking, there is a key difference. Until now, he explains, cloaking research has revolved around manipulating trajectories of waves. These include electromagnetic (light), pressure (sound), elastodynamic (seismic), and hydrodynamic (ocean) waves. The biggest difference in their study of heat, he points out, is that the physical phenomenon involved is diffusion, not wave propagation.
"Heat isn't a wave -- it simply diffuses from hot to cold regions," Guenneau says. "The mathematics and physics at play are much different. For instance, a wave can travel long distances with little attenuation, whereas temperature usually diffuses over smaller distances."
To create their thermal invisibility cloak, Guenneau and colleagues applied the mathematics of transformation optics to equations for thermal diffusion and discovered that their idea could work.
In their two-dimensional approach, heat flows from a hot to a cool object, with the magnitude of the heat flux through any region in space represented by the distance between isotherms (lines of constant temperature). They then altered the geometry of the isotherms to make them go around rather than through a circular region to the right of the heat source -- so that any object placed in this region can be shielded from the flow of heat (see image).
"We can design a cloak so that heat diffuses around an invisibility region, which is then protected from heat. Or we can force heat to concentrate in a small volume, which will then heat up very rapidly," Guenneau says.
The ability to shield an area from heat, or to concentrate heat in one spot, is highly desirable for a wide range of applications. Shielding nanoelectronic and microelectronic devices from overheating, for example, is one of the biggest challenges facing the electronics and semiconductor industries, and an area in which thermal cloaking could have a huge impact. On a larger scale and far into the future, large computers and spacecraft could also benefit greatly. And the ability to concentrate heat is a characteristic that the solar industry should find intriguing.
Guenneau and colleagues are now working to develop prototypes of their thermal cloaks for microelectronics, which they expect to have ready within the next few months.

Thursday 29 March 2012

Butterfly Wings' 'Art of Blackness' Could Boost Production of Green Fuels


Butterfly wings may rank among the most delicate structures in nature, but they have given researchers powerful inspiration for new technology that doubles production of hydrogen gas -- a green fuel of the future -- from water and sunlight.

The researchers presented their findings in San Diego on March 26 at the American Chemical Society's (ACS') 243rd National Meeting & Exposition.
Tongxiang Fan, Ph.D., who reported on the use of two swallowtail butterflies -- Troides aeacus (Heng-chun birdwing butterfly) and Papilio helenus Linnaeus (Red Helen) -- as models, explained that finding renewable sources of energy is one of the great global challenges of the 21st century. One promising technology involves producing clean-burning hydrogen fuel from sunlight and water. It can be done in devices that use sunlight to kick up the activity of catalysts that split water into its components, hydrogen and oxygen. Better solar collectors are the key to making the technology practical, and Fan's team turned to butterfly wings in their search for making solar collectors that gather more useful light.
"We realized that the solution to this problem may have been in existence for millions of years, fluttering right in front of our eyes," Fan said. "And that was correct. Black butterfly wings turned out to be a natural solar collector worth studying and mimicking," Fan said.
Scientists long have known that butterfly wings contain tiny scales that serve as natural solar collectors to enable butterflies, which cannot generate enough heat from their own metabolism, to remain active in the cold. When butterflies spread their wings and bask in the sun, those solar collectors are soaking up sunlight and warming the butterfly's body.
Fan's team at Shanghai Jiao Tong University in China used an electron microscope to reveal the most-minute details of the scale architecture on the wings of black butterflies -- black being the color that absorbs the maximum amount of sunlight.
"We were searching the 'art of blackness' for the secret of how those black wings absorb so much sunlight and reflect so little," Fan explained.
Scientists initially thought it was simply a matter of the deep inky black color, due to the pigment called melanin, which also occurs in human skin. More recently, however, evidence began to emerge indicating that the structure of the scales on the wings should not be ignored.
Fan's team observed elongated rectangular scales arranged like overlapping shingles on the roof of a house. The butterflies they examined had slightly different scales, but both had ridges running the length of the scale with very small holes on either side that opened up onto an underlying layer.
The steep walls of the ridges help funnel light into the holes, Fan explained. The walls absorb longer wavelengths of light while allowing shorter wavelengths to reach a membrane below the scales. Using the images of the scales, the researchers created computer models to confirm this filtering effect. The nano-hole arrays change from wave guides for short wavelengths to barriers and absorbers for longer wavelengths, which act just like a high-pass filtering layer.
The group used actual butterfly-wing structures to collect sunlight, employing them as templates to synthesize solar-collecting materials. They chose the black wings of the Asian butterfly Papilio helenus Linnaeus, or Red Helen, and transformed them to titanium dioxide by a process known as dip-calcining. Titanium dioxide is used as a catalyst to split water molecules into hydrogen and oxygen. Fan's group paired this butterfly-wing patterned titanium dioxide with platinum nanoparticles to increase its water-splitting power. The butterfly-wing compound catalyst produced hydrogen gas from water at more than twice the rate of the unstructured compound catalyst on its own.
"These results demonstrate a new strategy for mimicking Mother Nature's elaborate creations in making materials for renewable energy. The concept of learning from nature could be extended broadly, and thus give a broad scope of building technologically unrealized hierarchical architecture and design blueprints to exploit solar energy for sustainable energy resources," he concluded.
The scientists acknowledged funding from the National Natural Science Foundation of China (Nos. 51172141 and 50972090) and the Shanghai Rising-Star Program (No. 10QH1401300).

Wednesday 28 March 2012

Tiny Reader Makes Fast, Cheap DNA Sequencing Feasible


Researchers have devised a nanoscale sensor to electronically read the sequence of a single DNA molecule, a technique that is fast and inexpensive and could make DNA sequencing widely available.

The technique could lead to affordable personalized medicine, potentially revealing predispositions for afflictions such as cancer, diabetes or addiction.
"There is a clear path to a workable, easily produced sequencing platform," said Jens Gundlach, a University of Washington physics professor who leads the research team. "We augmented a protein nanopore we developed for this purpose with a molecular motor that moves a DNA strand through the pore a nucleotide at a time."
The researchers previously reported creating the nanopore by genetically engineering a protein pore from a mycobacterium. The nanopore, from Mycobacterium smegmatis porin A, has an opening 1 billionth of a meter in size, just large enough for a single DNA strand to pass through.
To make it work as a reader, the nanopore was placed in a membrane surrounded by potassium-chloride solution, with a small voltage applied to create an ion current flowing through the nanopore. The electrical signature changes depending on the type of nucleotide traveling through the nanopore. Each type of DNA nucleotide -- cytosine, guanine, adenine and thymine -- produces a distinctive signature.
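Because each nucleotide produces a distinctive current signature, base calling can be pictured as matching each measured level to the nearest reference level. The picoampere values below are invented placeholders, not the calibrated MspA signatures the group measured, and a real caller must also cope with noise, dwell-time variation, and the fact that several neighboring nucleotides shape each level.

```python
# Nearest-level base caller, a toy illustration of nanopore readout.
REFERENCE_PA = {"A": 52.0, "C": 46.0, "G": 40.0, "T": 58.0}  # hypothetical

def call_base(current_pa):
    """Return the base whose reference current is closest to the measurement."""
    return min(REFERENCE_PA, key=lambda b: abs(REFERENCE_PA[b] - current_pa))

def call_sequence(trace):
    """Decode one measured current level per nucleotide dwell."""
    return "".join(call_base(level) for level in trace)

print(call_sequence([51.2, 45.1, 59.0, 39.5]))  # -> ACTG
```

The motor protein described below is what makes such level-by-level decoding feasible at all: without it, the strand would race through the pore too quickly for the current at each position to be measured.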
The researchers attached a molecular motor, taken from an enzyme associated with replication of a virus, to pull the DNA strand through the nanopore reader. The motor was first used in a similar effort by researchers at the University of California, Santa Cruz, but they used a different pore that could not distinguish the different nucleotide types.
Gundlach is the corresponding author of a paper published online March 25 by Nature Biotechnology that reports a successful demonstration of the new technique using six different strands of DNA. The results corresponded to the already known DNA sequence of the strands, which had readable regions 42 to 53 nucleotides long.
"The motor pulls the strand through the pore at a manageable speed of tens of milliseconds per nucleotide, which is slow enough to be able to read the current signal," Gundlach said.
Gundlach said the nanopore technique also can be used to identify how DNA is modified in a given individual. Such modifications, referred to as epigenetic DNA modifications, take place as chemical reactions within cells and are underlying causes of various conditions.
"Epigenetic modifications are rather important for things like cancer," he said. Being able to provide DNA sequencing that can identify epigenetic changes "is one of the charms of the nanopore sequencing method."
Coauthors of the Nature Biotechnology paper are Elizabeth Manrao, Ian Derrington, Andrew Laszlo, Kyle Langford, Matthew Hopper and Nathaniel Gillgren of the UW, and Mikhail Pavlenok and Michael Niederweis of the University of Alabama at Birmingham.
The work was funded by the National Human Genome Research Institute in a program designed to find a way to conduct individual DNA sequencing for less than $1,000. When that program began, Gundlach said, the cost of such sequencing was likely in the hundreds of thousands of dollars, but "with techniques like this it might get down to a 10-dollar or 15-minute genome project. It's moving fast."