Saturday 30 April 2011

Pioneering animal diabetes treatment: Researchers adapt human continuous glucose monitors for pets

ScienceDaily (Apr. 25, 2011) — Studies show the incidence of diabetes in dogs has increased 200 percent over the past 30 years. Now, University of Missouri researchers have changed the way veterinarians treat diabetes in animals by adapting a device used to monitor glucose in humans.

Dogs are susceptible to type 1, insulin-dependent diabetes. Affected animals are unable to utilize sugar in their bloodstream because their bodies do not produce enough insulin, a hormone that helps cells turn sugar into energy. Veterinarians treat animals with this type of diabetes similarly to the way humans are treated, with insulin injections and a low-carbohydrate diet.

Amy DeClue, assistant professor of veterinary internal medicine, and Charles Wiedmeyer, assistant professor of veterinary clinical pathology, have been studying the use of a "continuous glucose monitor" (CGM) on animals since 2003. A CGM is a small, flexible device that is inserted about an inch into the skin to monitor glucose concentrations continuously.

"Continuous glucose monitoring is much more effective and accurate than previous glucose monitoring techniques and has revolutionized how veterinarians manage diabetes in dogs," said DeClue. "The CGM gives us a complete view of what is happening in the animal in their natural setting. For example, it can show us if a pet's blood glucose changes when an owner gives treats, when the animal exercises or in response to insulin therapy."

CGMs have become more commonly used in dogs with diabetes that are not responding well to conventional treatment. The monitor provides detailed glucose data over the course of three days in a dog's usual environment, so veterinarians can make better treatment decisions. Previously, veterinarians would create an insulin regimen from a glucose curve built by taking blood from the animal in the veterinary hospital every two hours over the course of a single day. That curve was often inaccurate because the animals were stressed by the unnatural environment.
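
To make the contrast concrete, below is a minimal, purely illustrative sketch in Python. The glucose values are invented for illustration, not clinical data; it simply shows why a curve assembled from blood draws every two hours can miss the true glucose low point that continuous monitoring would catch.

```python
# Illustrative sketch only: hypothetical glucose readings (mg/dL), not clinical data.
# It shows why a curve built from samples every 2 hours can miss the lowest
# (nadir) glucose value that a continuous monitor would catch.

# Simulated "true" glucose measured every 15 minutes over 12 hours after an
# insulin injection; the shape and numbers are invented for illustration.
true_curve = [
    (t / 4.0, 250 - 180 * max(0.0, 1 - abs(t / 4.0 - 5) / 4))  # dip centered at hour 5
    for t in range(0, 49)  # 0 to 12 hours in 15-minute steps
]

cgm_nadir = min(glucose for _, glucose in true_curve)

# Traditional in-hospital curve: one blood draw every 2 hours.
spot_checks = [(hour, glucose) for hour, glucose in true_curve if hour % 2 == 0]
spot_nadir = min(glucose for _, glucose in spot_checks)

print(f"Nadir seen by continuous monitoring: {cgm_nadir:.0f} mg/dL")
print(f"Nadir seen by 2-hour spot checks:    {spot_nadir:.0f} mg/dL")
# The spot-check nadir can sit tens of mg/dL above the true nadir, which is one
# reason an insulin dose calibrated against it may be off.
```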

Dogs show clinical signs of diabetes similar to those in humans, including increased urination, thirst, hunger and weight loss. Typically, no direct cause is found for diabetes in dogs, but genetic predisposition and obesity are thought to play a role, according to DeClue. Just like people, dogs suffering from diabetes must be medically managed or complications can arise.

"Typically, dogs that are treated properly for diabetes go on to live a long, full life," said Wiedmeyer. "Actually, dogs with diabetes are similar to young children with diabetes but somewhat easier to manage. Dogs will eat what their owners give them at the same time each day and they won't ask for a cupcake at a friend's birthday party. With tools like the continuous glucose monitor to assist with disease management, the outlook is very good for a dog with diabetes."

In the future, Wiedmeyer expects the device to become smaller and less invasive. He also hopes manufacturers will develop a version that can monitor blood sugar levels remotely.

DeClue and Wiedmeyer's most recent article on methods for monitoring and treating diabetes in dogs was published in the journal Clinics in Laboratory Medicine.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Missouri-Columbia.

Journal Reference:

Charles E. Wiedmeyer, Amy E. DeClue. Glucose Monitoring in Diabetic Dogs and Cats: Adapting New Technology for Home and Hospital Care. Clinics in Laboratory Medicine, 2011; 31 (1): 41 DOI: 10.1016/j.cll.2010.10.010


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

SpaceX Will Send Humans To Mars In the Next 10 to 20 Years


Image: SpaceX's Dragon spacecraft separating from its Falcon 9 carrier rocket. Credit: SpaceX, courtesy of NASA

SpaceX will send humans to Mars within 10 to 20 years, according to an interview with its CEO in the Wall Street Journal. Elon Musk says his company will send people to space within three years, and he wants to colonize other planets next.

“I want SpaceX to help make life multi-planetary,” he said. “We’re going all the way to Mars, I think. Best case, 10 years, worst case, 15 to 20 years.”

In an interview with the Journal’s Alan Murray, Musk said he wants humans to build a self-sustaining base on Mars.

“A future when humanity is a spacefaring civilization, out there exploring the stars, is an incredibly exciting future,” he said.

The majority of the interview focuses on Tesla Motors’ efforts and the state of electric cars in the U.S., including price and availability and the problem of charging the vehicles. But Musk’s pronouncements about space colonization are far juicier, in our opinion.

He said SpaceX would be the transportation provider, not necessarily the colony-builder.

“We want to be like the shipping company that brought people from Europe to America, or like the Union Pacific railroad. Our goal is to facilitate the transfer of people and cargo to other planets, and then it’s going to be up to people if they want to go,” he said.

Along with developing several successful heavy-lift rockets for cargo, SpaceX aims to send astronauts to space for NASA and other clients. Last week, the company won a $75 million contract from NASA to make its Falcon 9 rocket and Dragon space capsule ready for humans. Sierra Nevada, Boeing and Blue Origin also won contracts to build capsules.

Last year, SpaceX became the first private company to launch a spacecraft into orbit and bring it home.


[Wall Street Journal]


View the original article here

Huge dry ice deposit on Mars: NASA orbiter reveals big changes in Red Planet's atmosphere

ScienceDaily (Apr. 22, 2011) — NASA's Mars Reconnaissance Orbiter has discovered the total amount of atmosphere on Mars changes dramatically as the tilt of the planet's axis varies. This process can affect the stability of liquid water, if it exists on the Martian surface, and increase the frequency and severity of Martian dust storms.

Researchers using the orbiter's ground-penetrating radar identified a large, buried deposit of frozen carbon dioxide, or dry ice, at the Red Planet's south pole. The scientists suspect that much of this carbon dioxide enters the planet's atmosphere and swells the atmosphere's mass when Mars' tilt increases. The findings are published in the journal Science.

The newly found deposit has a volume similar to that of Lake Superior: nearly 3,000 cubic miles (about 12,000 cubic kilometers). The deposit holds up to 80 percent as much carbon dioxide as today's Martian atmosphere. Collapse pits caused by dry ice sublimation and other clues suggest the deposit is in a dissipating phase, adding gas to the atmosphere each year. Mars' atmosphere is about 95 percent carbon dioxide, in contrast to Earth's much thicker atmosphere, which is less than 0.04 percent carbon dioxide.

"We already knew there is a small perennial cap of carbon-dioxide ice on top of the water ice there, but this buried deposit has about 30 times more dry ice than previously estimated," said Roger Phillips of Southwest Research Institute in Boulder, Colo. Phillips is deputy team leader for the Mars Reconnaissance Orbiter's Shallow Radar instrument and lead author of the report.

"We identified the deposit as dry ice by determining the radar signature fit the radio-wave transmission characteristics of frozen carbon dioxide far better than the characteristics of frozen water," said Roberto Seu of Sapienza University of Rome, team leader for the Shallow Radar and a co-author of the new report. Additional evidence came from correlating the deposit to visible sublimation features typical of dry ice.

"When you include this buried deposit, Martian carbon dioxide right now is roughly half frozen and half in the atmosphere, but at other times it can be nearly all frozen or nearly all in the atmosphere," Phillips said.

An occasional increase in the atmosphere would strengthen winds, lofting more dust and leading to more frequent and more intense dust storms. Another result is an expanded area on the planet's surface where liquid water could persist without boiling. Modeling based on known variation in the tilt of Mars' axis suggests several-fold changes in the total mass of the planet's atmosphere can happen on time frames of 100,000 years or less.

The changes in atmospheric density caused by the carbon-dioxide increase also would amplify some effects of the changes caused by the tilt. Researchers plugged the mass of the buried carbon-dioxide deposit into climate models for the period when Mars' tilt and orbital properties maximize the amount of summer sunshine hitting the south pole. They found that at such times, the global, year-round average air pressure is approximately 75 percent greater than the current level.
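
As a rough sanity check on that figure, the short calculation below uses only numbers quoted in this article. The assumptions are mine, not the study's: that the deposit holds the stated upper bound of 80 percent of today's atmospheric CO2 mass, that all of it sublimates, and that surface pressure scales linearly with atmospheric mass.

```python
# Back-of-envelope check of the "roughly 75 percent greater" pressure figure.
# Assumptions (mine, not the study's): the buried deposit holds the quoted upper
# bound of 80% of today's atmospheric CO2 mass, it all sublimates, and surface
# pressure scales linearly with total atmospheric mass.

current_atmosphere = 1.0   # today's atmospheric CO2 mass (normalized to 1)
buried_deposit = 0.80      # "up to 80 percent as much CO2 as today's atmosphere"

future_atmosphere = current_atmosphere + buried_deposit
increase = (future_atmosphere - current_atmosphere) / current_atmosphere

print(f"Pressure increase if the whole deposit sublimates: {increase:.0%}")
# Prints 80%, the same order as the approximately 75% increase the climate
# models give for the high-tilt periods described above.
```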

"A tilted Mars with a thicker carbon-dioxide atmosphere causes a greenhouse effect that tries to warm the Martian surface, while thicker and longer-lived polar ice caps try to cool it," said co-author Robert Haberle, a planetary scientist at NASA's Ames Research Center in Moffett Field, Calif. "Our simulations show the polar caps cool more than the greenhouse warms. Unlike Earth, which has a thick, moist atmosphere that produces a strong greenhouse effect, Mars' atmosphere is too thin and dry to produce as strong a greenhouse effect as Earth's, even when you double its carbon-dioxide content."

The Shallow Radar, one of the Mars Reconnaissance Orbiter's six instruments, was provided by the Italian Space Agency, and its operations are led by the Department of Information Engineering, Electronics and Telecommunications at Sapienza University of Rome. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter project for NASA's Science Mission Directorate at the agency's headquarters in Washington. Lockheed Martin Space Systems in Denver built the spacecraft.

For more information about the Mars Reconnaissance Orbiter mission, visit http://www.nasa.gov/mro .

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Journal Reference:

Roger J. Phillips, Brian J. Davis, Kenneth L. Tanaka, Shane Byrne, Michael T. Mellon, Nathaniel E. Putzig, Robert M. Haberle, Melinda A. Kahre, Bruce A. Campbell, Lynn M. Carter, Isaac B. Smith, John W. Holt, Suzanne E. Smrekar, Daniel C. Nunes, Jeffrey J. Plaut, Anthony F. Egan, Timothy N. Titus, and Roberto Seu. Massive CO2 Ice Deposits Sequestered in the South Polar Layered Deposits of Mars. Science, 2011; DOI: 10.1126/science.1203091


Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Self-powered, blood-activated sensor detects pancreatitis quickly and cheaply

ScienceDaily (Apr. 25, 2011) — A new low cost test for acute pancreatitis that gets results much faster than existing tests has been developed by scientists at The University of Texas at Austin.

The sensor, which could be produced for as little as a dollar, is built with a 12-cent LED light, aluminum foil, gelatin, milk protein and a few other cheap, easily obtainable materials.

The sensor could help prevent damage from acute pancreatitis, which is a sudden inflammation of the pancreas that can lead to severe stomach pain, nausea, fever, shock and in some cases, death.

"We've turned Reynold's Wrap, JELL-O and milk into a way to look for organ failure," says Brian Zaccheo, a graduate student in the lab of Richard Crooks, professor of chemistry and biochemistry.

The sensor, which is about the size of a matchbox, relies on a simple two-step process to diagnose the disease.

In step one, a bit of blood extract is dropped onto a layer of gelatin and milk protein. If there are high levels of trypsin, an enzyme that is overabundant in the blood of patients with acute pancreatitis, the trypsin will break down the gelatin in much the same way it breaks down proteins in the stomach.

In step two, a drop of sodium hydroxide (lye) is added. If the trypsin levels were high enough to break down that first barrier, the sodium hydroxide can trickle down to the second barrier, a strip of Reynolds Wrap, and go to work dissolving it.

The foil corrodes, and with both barriers now permeable, a circuit is able to form between a magnesium anode and an iron salt at the cathode. Enough current is generated to light up a red LED. If the LED lights up within an hour, acute pancreatitis is diagnosed.
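
The sketch below restates that two-step readout logic as code. The trypsin threshold and the example values are hypothetical placeholders for illustration, not figures from the Analytical Chemistry paper.

```python
# Minimal sketch of the sensor's two-step readout logic as described above.
# The trypsin threshold and timing values are hypothetical placeholders.

TRYPSIN_THRESHOLD = 1.0       # arbitrary units; "elevated" trypsin in this sketch
DIAGNOSTIC_WINDOW_MIN = 60    # the LED must light within an hour

def led_lights_up(trypsin_level: float, minutes_elapsed: float) -> bool:
    """Step 1: elevated trypsin digests the gelatin/milk-protein layer.
    Step 2: sodium hydroxide then reaches and corrodes the foil, closing the
    magnesium-iron circuit that powers the LED."""
    gelatin_breached = trypsin_level >= TRYPSIN_THRESHOLD
    foil_corroded = gelatin_breached      # NaOH reaches the foil only if step 1 succeeded
    circuit_closed = foil_corroded
    return circuit_closed and minutes_elapsed <= DIAGNOSTIC_WINDOW_MIN

def diagnose(trypsin_level: float, minutes_until_led: float) -> str:
    if led_lights_up(trypsin_level, minutes_until_led):
        return "consistent with acute pancreatitis"
    return "negative (LED did not light within the hour)"

print(diagnose(trypsin_level=2.3, minutes_until_led=40))  # positive example
print(diagnose(trypsin_level=0.2, minutes_until_led=40))  # negative example
```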

"In essence, the device is a battery having a trypsin-selective switch that closes the circuit between the anode and cathode," write Zaccheo and Crooks in a paper recently published in Analytical Chemistry.

Zaccheo and Crooks, who have a provisional patent, can envision a number of potential uses for the sensor. It might help providers in the developing world who don't have the resources to do the more complex tests for pancreatitis. It could be of use in situations where batteries are in short supply, such as after a natural disaster or in remote locations. And because of the speed of the sensor, it could be an excellent first-line measure even in well-stocked hospitals.

For Zaccheo, the most appealing aspect of the project isn't so much the specific sensor. It is the idea that we might be able to save time, money and even lives by adopting this kind of low-tech approach.

"I want to develop biosensors that are easy to use but give a high level of sensitivity," he says. "All you need for this, for instance, is to know how to use a dropper and a timer."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Texas at Austin.

Journal Reference:

Brian A. Zaccheo, Richard M. Crooks. Self-Powered Sensor for Naked-Eye Detection of Serum Trypsin. Analytical Chemistry, 2011; 83 (4): 1185 DOI: 10.1021/ac103115z


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Why biggest stellar explosions often happen in tiniest galaxies: Ultraviolet probe sheds light on mystery

ScienceDaily (Apr. 21, 2011) — Astronomers using NASA's Galaxy Evolution Explorer may be closer to knowing why some of the most massive stellar explosions ever observed occur in the tiniest of galaxies.

"It's like finding a sumo wrestler in a little 'Smart Car,'" said Don Neill, a member of NASA's Galaxy Evolution Explorer team at the California Institute of Technology in Pasadena, and lead author of a new study published in the Astrophysical Journal.

"The most powerful explosions of massive stars are happening in extremely low-mass galaxies. New data are revealing that the stars that start out massive in these little galaxies stay massive until they explode, while in larger galaxies they are whittled away as they age, and are less massive when they explode," said Neill.

Over the past few years, astronomers using data from the Palomar Transient Factory, a sky survey based at the ground-based Palomar Observatory near San Diego, have discovered a surprising number of exceptionally bright stellar explosions in so-called dwarf galaxies up to 1,000 times smaller than our Milky Way galaxy. Stellar explosions, called supernovae, occur when massive stars -- some up to 100 times the mass of our sun -- end their lives.

The Palomar observations may explain a mystery first pointed out by Neil deGrasse Tyson and John Scalo when they were at the University of Texas at Austin (Tyson is now the director of the Hayden Planetarium in New York, N.Y.). They noted that supernovae were occurring where there seemed to be no galaxies at all, and they even proposed that dwarf galaxies were the culprits, as the Palomar data now indicate.

Now, astronomers are using ultraviolet data from the Galaxy Evolution Explorer to further examine the dwarf galaxies. Newly formed stars tend to radiate copious amounts of ultraviolet light, so the Galaxy Evolution Explorer, which has scanned much of the sky in ultraviolet light, is the ideal tool for measuring the rate of star birth in galaxies.

The results show that the little galaxies are low in mass, as suspected, and have low rates of star formation. In other words, the petite galaxies are not producing that many huge stars.

"Even in these little galaxies where the explosions are happening, the big guys are rare," said co-author Michael Rich of UCLA, who is a member of the mission team.

In addition, the new study helps explain why massive stars in little galaxies undergo even more powerful explosions than stars of a similar heft in larger galaxies like our Milky Way. The reason is that low-mass galaxies tend to have fewer heavy atoms, such as carbon and oxygen, than their larger counterparts. These small galaxies are younger, and thus their stars have had less time to enrich the environment with heavy atoms.

According to Neill and his collaborators, the lack of heavy atoms in the atmosphere around a massive star causes it to shed less material as it ages. In essence, the massive stars in little galaxies are fatter in their old age than the massive stars in larger galaxies. And the fatter the star, the bigger the blast that will occur when it finally goes supernova. This, according to the astronomers, may explain why super supernovae are occurring in the not-so-super galaxies.

"These stars are like heavyweight champions, breaking all the records," said Neill.

Added Rich, "These dwarf galaxies are especially interesting to astronomers, because they are quite similar to the kinds of galaxies that may have been present in our young universe, shortly after the Big Bang. The Galaxy Evolution Explorer has given us a powerful tool for learning what galaxies were like when the universe was just a child."

Caltech leads the Galaxy Evolution Explorer mission and is responsible for science operations and data analysis. NASA's Jet Propulsion Laboratory in Pasadena manages the mission and built the science instrument. Caltech manages JPL for NASA. The mission was developed under NASA's Explorers Program managed by the Goddard Space Flight Center, Greenbelt, Md. Researchers sponsored by Yonsei University in South Korea and the Centre National d'Etudes Spatiales (CNES) in France collaborated on this mission.

Graphics and additional information about the Galaxy Evolution Explorer are online at http://www.nasa.gov/galex/ and http://www.galex.caltech.edu .

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by NASA/Jet Propulsion Laboratory.

Journal Reference:

James D. Neill, Mark Sullivan, Avishay Gal-Yam, Robert Quimby, Eran Ofek, Ted K. Wyder, D. Andrew Howell, Peter Nugent, Mark Seibert, D. Christopher Martin, Roderik Overzier, Tom A. Barlow, Karl Foster, Peter G. Friedman, Patrick Morrissey, Susan G. Neff, David Schiminovich, Luciana Bianchi, José Donas, Timothy M. Heckman, Young-Wook Lee, Barry F. Madore, Bruno Milliard, R. Michael Rich, Alex S. Szalay. The Extreme Hosts of Extreme Supernovae. The Astrophysical Journal, 2011; 727 (1): 15 DOI: 10.1088/0004-637X/727/1/15


Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Friday 29 April 2011

Buried microbes coax energy from rock

In experiments, microorganisms stimulate minerals to produce hydrogen, a key fuel for growth. Web edition: Tuesday, February 8th, 2011

Here’s yet another reason to marvel at microbes: Buried deep within Earth at temperatures and pressures that would kill most living beings, bacteria and other tiny organisms not only survive but apparently even coax the rocks around them to produce food.

Researchers have found that the mere presence of microbes triggers minerals to release hydrogen gas, which the organisms then munch. “It looks like the bacteria themselves have an integral role in liberating this energy,” says R. John Parkes, a geomicrobiologist at Cardiff University in Wales.

His team’s findings appear in the March issue of Geology.

The work helps explain how microbes can survive up to kilometers deep in a subterranean world far from any sunlight to fuel photosynthesis. Such “deep biospheres” may even exist on other planets, Parkes says, with organisms tucked safely away from frigid temperatures and lethal radiation at the surface.

On Earth, some two-thirds of all bacteria, along with another group of single-celled organisms known as archaea, are thought to lurk underground. Scientists have long wondered where these critters get their energy.

Earlier work showed that the microbes fed, in part, on decayed organic matter that settled to the seafloor and formed thick sediments there — a sort of microbial smorgasbord. Parkes and his colleagues decided to look instead at inorganic minerals that can wash offshore and also end up in those sediments.

The researchers ground up a variety of minerals, such as quartz, and put them in a sludgy sediment. In some mixtures they added a dash of microbes to start things off. The scientists then heated the mixtures to various temperatures up to 100 degrees Celsius — what might be found 3 to 4 kilometers deep — and waited to see what happened over several months.

Mixtures that contained microorganisms began giving off hydrogen gas as temperatures climbed to 70° C and above, the team found. Mixtures that had been sterilized so that nothing was living in them didn’t produce much hydrogen at all. Somehow, Parkes says, the microbes help stimulate chemical reactions within the minerals that make hydrogen.

"The results are curious, but not compelling," says Steven D'Hondt, an oceanographer and geobiologist at the University of Rhode Island in Narragansett. For instance, he says, scientists would have to run the same experiments without any organic matter in the mixtures in order to be sure that the hydrogen was coming from the minerals and not from the organic matter.

Earthquake zones and other places with lots of geological activity often produce hydrogen and other gases, Parkes says, maybe because freshly split rocks and minerals provide a surface that catalyzes chemical reactions, such as the breaking apart of water molecules to produce oxygen and hydrogen. “But people had not linked that to a direct energy source for deep-sediment bacteria, and neither had they shown that the bacteria themselves could actually catalyze this process,” he says. “The fascinating thing is that we have a mechanism of obtaining energy inorganically in the subsurface which has not really been considered before.”

Bo Barker Jørgensen, a microbiologist at the Max Planck Institute for Marine Microbiology in Bremen, Germany, says that most buried microbes probably live at shallow depths, not the 3- to 4-kilometer depths simulated in the new study. (The deepest confirmed microorganisms come from 1.6 kilometers in sediments, and 3.5 kilometers in solid rock.) But Jørgensen adds that the new work shows how various groups of subterranean microorganisms thrive at different temperature levels.

Surface-minded researchers might do well to start thinking a bit more deeply.

View the original article here

Thursday 28 April 2011

New Bill Directs NASA Back to the Moon By 2022, With Permanent Habitation In Mind


Image: Buzz Aldrin salutes the flag. Credit: NASA

After a rollercoaster year for NASA, it looks like Congress isn’t quite done tinkering with the space agency’s future. A return to the moon is back on the table after a Florida congressman introduced a moon-centric bill in the House of Representatives, which he’s calling the “Reasserting American Leadership in Space Act,” or the REAL Space Act. Really.

“The National Aeronautics and Space Administration shall plan to return to the Moon by 2022 and develop a sustained human presence on the Moon,” the bill says, in no uncertain terms. The goal is to promote exploration, commerce, science, and American “preeminence in space,” the bill says.

In fairness, the bill spells out some convincing reasons why NASA should boldly go where it went 42 years ago — chiefly as a stepping stone for the future exploration of Mars and other destinations.

Also, “space is the world's ultimate high ground, returning to the Moon and reinvigorating our human space flight program is a matter of national security.”

A moon base had been NASA’s goal since 2005, you may remember, after President Bush directed the agency to develop a new rocket and crew transportation system that could go back to the moon and eventually to Mars. President Obama ordered a review of these plans upon taking office. The Review of United States Human Space Flight Plans Committee, also known as the Augustine commission after its chairman, Norman Augustine, determined NASA didn’t have nearly enough money to accomplish the goal. Obama’s new course for NASA initially ditched the entire Constellation program, including the Ares rocket, but was later tweaked to include funding for a heavy-lift launch vehicle of some kind.

The problem is, there’s no clear destination for that heavy-lift rocket, and even the commercial spaceflight companies developing new crew vehicles on NASA’s behalf aren’t sure where they would go. Many space exploration advocates insist that NASA needs a destination, not just a journey. Obama has dismissed a moon mission, saying “We’ve been there before,” but some still believe the moon is a viable option for just that reason. Plus, it has plentiful resources — although this fact is strangely absent from the new bill, sponsored by Rep. Bill Posey, R-Fla.

Cosponsors include Rep. Rob Bishop, R-Utah; Rep. Sheila Jackson-Lee, D-Texas; Rep. Pete Olson, R-Texas; and Rep. Frank Wolf, R-Va. All of the above represent districts with an interest in ongoing NASA space exploration, but Wolf’s support is interesting because he chairs the appropriations subcommittee that covers NASA activities. H.R. 1641 has been referred to the House Science, Space and Technology committee.

The bill basically follows Obama’s vision, loosely defined as exploring elsewhere in the solar system: “A sustained human presence on the Moon will allow astronauts and researchers the opportunity to leverage new technologies in addressing the challenges of sustaining life on another celestial body, lessons which are necessary and applicable as we explore further into our solar system, to Mars and beyond,” the bill reads.

It simply states that NASA funding should be aligned in accordance with this goal.

With members of both parties still hammering out a federal budget, additional spending to go back to the moon seems as likely as, well, a trip to the moon. But Posey, advocating for the bill earlier this month, said a clear mission for NASA is necessary.

"Without a resolute vision for our human spaceflight program, our program will flounder and ultimately perish," he wrote in an op-ed published in Florida Today.

Should the moon be part of that vision? What do you think?

[Yahoo News]


View the original article here

Study tests interventions targeting multiple health-related behaviors in African American couples

ScienceDaily (Apr. 25, 2011) — Interventions to promote healthy behaviors, including eating more fruits and vegetables, increasing physical activity, and participating in cancer screenings, as well as prevention of HIV/sexually transmitted diseases (STDs), appear beneficial for African-American couples who are at high risk for chronic diseases, especially if one of the individuals is living with HIV (human immunodeficiency virus).

The report is published in the April 25 issue of Archives of Internal Medicine, one of the JAMA/Archives journals.

As background information in the article, the authors write that the medications being used to treat HIV, particularly highly active antiretroviral therapy (HAART), have been so successful for many individuals that they are now living longer and are at risk for developing other chronic diseases, such as cardiovascular disease and diabetes. "The issue of comorbid chronic disease is particularly worrisome for the 48 percent of people living with HIV in 2007 who were African American," the authors note.

Nabila El-Bassel, Ph.D., and colleagues from the National Institute of Mental Health Multisite HIV/STD Prevention Trial for African-American Couples Group tested how well an intervention would work that addressed multiple health-related behaviors in African-American heterosexual couples in which one partner was HIV-positive and the other was not. The 535 couples (1,070 participants) were randomized into two groups: 520 individuals (260 couples) took part in a couple-focused HIV/STD risk-reduction intervention aimed at preventing HIV/STD transmission and acquisition, and 550 individuals (275 couples) were assigned to an individual-focused health promotion intervention designed to influence behaviors linked to the risk of cardiovascular diseases, cerebrovascular diseases, diabetes mellitus and certain cancers, including physical activity, fruit and vegetable consumption, fat consumption, breast and prostate cancer screenings and alcohol use. Both interventions consisted of eight weekly structured two-hour sessions. Participants independently reported their health behaviors at the beginning of the study, immediately after the interventions, and six to 12 months post-intervention. The average age of the participants was about 43 years, and the HIV-positive partner was female in 60.4 percent of the couples.

"Health promotion intervention participants were more likely to report consuming five or more servings of fruits and vegetables daily and adhering to physical activity guidelines compared with HIV/STD intervention participants," the authors found. "In the health promotion intervention compared with the HIV/STD intervention, participants consumed fatty foods less frequently, more men received prostate cancer screening, and more women received a mammogram. Alcohol use did not differ between the intervention groups." The authors suggest that it is "possible that targeting couples enhanced efficacy. Research suggests that health promotion strategies that incorporate family members and support networks are more effective than individual-focused strategies and may be especially appropriate for African Americans."

"In conclusion, African Americans are at high risk for morbidity and mortality from chronic diseases and are less likely to report engaging in behaviors associated with reduced risk of such diseases and to detect them at an early stage. Moreover, the risk of chronic disease is of particular concern for African Americans living with HIV because HIV and its treatment with HAART are associated with increased risk. The present study revealed low rates of fruit and vegetable consumption, physical activity and cancer screening in African American individuals in HIV-serodiscordant couples. Accordingly, this study is important, demonstrating that a theory-based contextually appropriate intervention that teaches skills caused positive changes on multiple behaviors linked to chronic diseases in African American members of HIV-serodiscordant couples."

Editorial: Human Immunodeficiency Virus Is (Once Again) a Primary Care Disease

In an accompanying editorial, Mitchell H. Katz, M.D., from the Los Angeles Department of Health Services, Los Angeles, notes that "if specialty care is less needed than it used to be for HIV-infected patients, it turns out primary care is more needed. Owing to the advances in HIV treatment, our patients are no longer dying: They are aging!"

"Although serodiscordant couples are a highly specialized population, there is no reason to believe that their intervention would not work among other HIV-infected persons. Certainly, this study should encourage others to attempt group health promotion classes because, as I often tell my HIV-infected patients with diabetes, liver disease or uncontrolled hypertension, 'It's not HIV that's going to kill you.'"

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by JAMA and Archives Journals.

Journal References:

N. El-Bassel, J. B. Jemmott, J. R. Landis, W. Pequegnat, G. M. Wingood, G. E. Wyatt, S. L. Bellamy. Intervention to Influence Behaviors Linked to Risk of Chronic Diseases: A Multisite Randomized Controlled Trial With African-American HIV-Serodiscordant Heterosexual Couples. Archives of Internal Medicine, 2011; 171 (8): 728 DOI: 10.1001/archinternmed.2011.136

M. H. Katz. Human Immunodeficiency Virus Is (Once Again) a Primary Care Disease. Archives of Internal Medicine, 2011; 171 (8): 719 DOI: 10.1001/archinternmed.2011.130


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Chile quake didn’t reduce risk

In 2010 tremor, faults ruptured mainly outside area due for a big one. Web edition: Sunday, January 30th, 2011

Image: GAP REMAINS. A major earthquake that hit Chile in February 2010 (star marks epicenter) did not relieve seismic stress in a region known as the "Darwin gap" that lies between areas hit by quakes in 1928 and 1960. Credit: R. Stein, Lorito et al/Nature Geoscience 2011

The magnitude-8.8 earthquake that pummeled Chile in February 2010 did not relieve seismic stress the way scientists thought it might have, a new study suggests.

Quake risk thus remains high in the region, geophysicist Stefano Lorito of Italy’s National Institute of Geophysics and Volcanology in Rome and his colleagues report online January 30 in Nature Geoscience. In places, risk might even be higher than it was before last year’s quake.

The geologic stress remains because instead of the ground moving the most where stress had been building the longest, the team reports, the greatest slip occurred where a different quake had already relieved stress just eight decades earlier.

Scientists would like to be able to point at a fault segment that built up stress the longest and say it was primed to go next. But the new work shows that stress buildup does not automatically translate to an earthquake happening right in that area, says geophysicist Ross Stein of the U.S. Geological Survey in Menlo Park, Calif., who was not involved in the research. “It’s a very logical approach,” Stein says. “But I don’t think it holds up.”

Geologists weren’t surprised when the quake happened. Off the western coast of South America, the Nazca plate of Earth’s crust dives beneath the South American plate, pushing up the Andes and building up stress that gets relieved occasionally in powerful earthquakes. The biggest quake ever recorded, a magnitude-9.5 whopper, occurred along the Chilean coast in 1960. Some 300 kilometers north of that, a magnitude-8.0 quake struck in 1928.

Between those two ruptures — 1960 in the south, and 1928 in the north — lay a stretch that apparently hadn’t ruptured since 1835, when Charles Darwin visited aboard the H.M.S. Beagle and witnessed a major earthquake. Researchers had thought that this “Darwin gap” would be filled the next time a big quake struck the region.

But it wasn’t, says Lorito. His team used data on how the surface moved during the 2010 quake — from geodetic markers and tsunami observations, among others — to calculate which parts slipped the most.

The scientists found that the greatest slip occurred north of the quake’s epicenter, right around where the 1928 quake struck. South of the epicenter lay a secondary zone of slip. But right in the middle, where the Darwin gap lies, was little to no movement. “The Darwin gap is still there,” Lorito says.

Other earthquake zones, such as Sumatra in 2007, have experienced big quakes that didn’t relieve pent-up geologic stress where scientists thought it was greatest. “It is not strange to see that the rupture is complex, and that some parts can break at one time and some at another time,” Lorito says.

The new work fits with several other scenarios that scientists have developed to explain ground movement during the Chile quake. The scenarios, however, differ in their details. For example, researchers from the GFZ German Research Centre for Geosciences in Potsdam reported in Nature in September 2010 that some of the quake’s slip happened fairly close to the Darwin gap.

The teams reach different conclusions because they use different sets of quake observations and different analytical methods, says Onno Oncken of the Potsdam team. But overall, various groups agree on the broad patterns of how the ground moved — and where seismic risk remains high.

Along with the Darwin gap, another place to worry about may be a stretch between 37 degrees and 36 degrees south latitude, offshore from the city of Concepción. Lorito’s team concludes that stress was transferred there during the 2010 rupture. It could be capable of unleashing another quake of magnitude 7.5 to 8, the researchers write.

The 2010 Chile quake killed more than 500 people by causing both shaking and a tsunami.

Because of the seismic risk, the Chilean coast is one of the most studied regions in the world. For the past decade, Oncken and others have studded the area with seismometers to understand the details of how a diving plate like the Nazca causes quakes. “We now have the unique opportunity to do a detailed comparison from before and after an event,” says Oncken. “Whatever you look at, it’s fantastic data and new observations emerging at an incredible rate.”



View the original article here

Anti-helium discovered in Relativistic Heavy Ion Collider experiment

ScienceDaily (Apr. 25, 2011) — Eighteen examples of the heaviest antiparticle ever found, the nucleus of antihelium-4, have been made in the STAR experiment at RHIC, the Relativistic Heavy Ion Collider at the U.S. Department of Energy's Brookhaven National Laboratory.

"The STAR experiment is uniquely capable of finding antihelium-4," says the STAR experiment's spokesperson, Nu Xu, of the Nuclear Science Division (NSD) at Lawrence Berkeley National Laboratory (Berkeley Lab). "STAR already holds the record for massive antiparticles, last year having identified the anti-hypertriton, which contains three constituent antiparticles. With four antinucleons, antihelium-4 is produced at a rate a thousand times lower yet. To identify the 18 examples required sifting through the debris of a billion gold-gold collisions."

Collisions of energetic gold nuclei inside STAR briefly recreate conditions in the hot, dense early universe only millionths of a second after the big bang. Since equal amounts of matter and antimatter were created in the big bang they should have completely annihilated one another, but for reasons still not understood, only ordinary matter seems to have survived. Today this excess matter forms all of the visible universe we know.

Roughly equal amounts of matter and antimatter are also produced in heavy-ion (gold nuclei) collisions at RHIC. The resulting fireballs expand and cool quickly, so the antimatter can avoid annihilation long enough to be detected in the Time Projection Chamber at the heart of STAR.

Ordinary nuclei of helium atoms consist of two protons and two neutrons. Called alpha particles when emitted in radioactive decays, they were found in this form by Ernest Rutherford well over a century ago. The nucleus of antihelium-4 (the anti-alpha) contains two antiprotons bound with two antineutrons.

The most common antiparticles are generally the least massive, because it takes less energy to create them. Carl Anderson was the first to find an antiparticle, the antielectron (positron), in cosmic ray debris in 1932. The antiproton (the nucleus of antihydrogen) and the antineutron were created at Berkeley Lab's Bevatron in the 1950s. Antideuteron nuclei ("anti-heavy-hydrogen," made of an antiproton and an antineutron) were created in accelerators at Brookhaven and CERN in the 1960s.

Each extra nucleon (called a baryon) increases the particle's baryon number, and in the STAR collisions every increase in baryon number decreases the rate of yield roughly a thousand times. Nuclei of the antihelium isotope with only one neutron (antihelium-3) have been made in accelerators since 1970; the STAR experiment produces many of these antiparticles, which have baryon number 3. The antihelium nucleus with baryon number 4, just announced by STAR based on 16 examples identified in 2010 and two examples from an earlier run, contains the most nucleons of any antiparticle ever detected.

"It's likely that antihelium will be the heaviest antiparticle seen in an accelerator for some time to come," says STAR Collaboration member Xiangming Sun of Berkeley Lab's NSD. "After antihelium the next stable antimatter nucleus would be antilithium, and the production rate for antilithium in an accelerator is expected to be well over two million times less than for antihelium."

NSD's Maxim Naglis adds, "Finding even one example of antilithium would be a stroke of luck, and would probably require a breakthrough in accelerator technology."

If antihelium made by accelerators is rare, and heavier antiparticles rarer still, what of searching for these particles in space? The Alpha Magnetic Spectrometer (AMS) experiment, scheduled to be launched on one of the last space-shuttle missions to the International Space Station, is an instrument designed to do just that. A principal part of its mission is to hunt for distant galaxies made entirely of antimatter.

"Collisions among cosmic rays near Earth can produce antimatter particles, but the odds of these collisions producing an intact antihelium nucleus are so vanishingly small that finding even one would strongly suggest that it had drifted to Earth from a distant region of the universe dominated by antimatter," explains Hans Georg Ritter of Berkeley Lab's NSD. "Antimatter doesn't look any different from ordinary matter, but AMS finding just one antihelium nucleus would suggest that some of the galaxies we see are antimatter galaxies."

Meanwhile the STAR experiment at RHIC, which has shown that antihelium does indeed exist, is likely to hold the world record for finding the heaviest particle of antimatter for the foreseeable future.

This work was supported by the DOE Office of Science.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

H. Agakishiev et al. Observation of the antimatter helium-4 nucleus. Nature, 2011; DOI: 10.1038/nature10079


Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Scientists engineer nanoscale vaults to encapsulate 'nanodisks' for drug delivery

ScienceDaily (Apr. 23, 2011) — There's no question, drugs work in treating disease. But can they work better, and safer? In recent years, researchers have grappled with the challenge of administering therapeutics in a way that boosts their effectiveness by targeting specific cells in the body while minimizing their potential damage to healthy tissue.

The development of new methods that use engineered nanomaterials to transport drugs and release them directly into cells holds great potential in this area. And while several such drug-delivery systems -- including some that use dendrimers, liposomes or polyethylene glycol -- have won approval for clinical use, they have been hampered by size limitations and ineffectiveness in accurately targeting tissues.

Now, researchers at UCLA have developed a new and potentially far more effective means of targeted drug delivery using nanotechnology.

In a study to be published in the May 23 print issue of the journal Small, they demonstrate the ability to package drug-loaded "nanodisks" into vault nanoparticles, naturally occurring nanoscale capsules that have been engineered for therapeutic drug delivery. The study represents the first example of using vaults toward this goal.

The UCLA research team was led by Leonard H. Rome and included his colleagues Daniel C. Buehler and Valerie Kickhoefer from the UCLA Department of Biological Chemistry; Daniel B. Toso and Z. Hong Zhou from the UCLA Department of Microbiology, Immunology and Molecular Genetics; and the California NanoSystems Institute (CNSI) at UCLA.

Vault nanoparticles are found in the cytoplasm of all mammalian cells and are one of the largest known ribonucleoprotein complexes in the sub-100-nanometer range. A vault is essentially a barrel-shaped nanocapsule with a large, hollow interior -- properties that make it ripe for engineering into a drug-delivery vehicle. The ability to encapsulate small-molecule therapeutic compounds into vaults is critical to their development for drug delivery.

Recombinant vaults are nonimmunogenic and have undergone significant engineering, including cell-surface receptor targeting and the encapsulation of a wide variety of proteins.

"A vault is a naturally occurring protein particle and so it causes no harm to the body," said Rome, CNSI associate director and a professor of biological chemistry. "These vaults release therapeutics slowly, like a strainer, through tiny, tiny holes, which provides great flexibility for drug delivery."

The internal cavity of the recombinant vault nanoparticle is large enough to hold hundreds of drugs, and because vaults are the size of small microbes, a vault particle containing drugs can easily be taken up into targeted cells.

With the goal of creating a vault capable of encapsulating therapeutic compounds for drug delivery, UCLA doctoral student Daniel Buehler designed a strategy to package another nanoparticle, known as a nanodisk (ND), into the vault's inner cavity, or lumen.

"By packaging drug-loaded NDs into the vault lumen, the ND and its contents would be shielded from the external medium," Buehler said. "Moreover, given the large vault interior, it is conceivable that multiple NDs could be packaged, which would considerably increase the localized drug concentration."

According to researcher Zhou, a professor of microbiology, immunology and molecular genetics and director of the CNSI's Electron Imaging Center for NanoMachines, electron microscopy and X-ray crystallography studies have revealed that both endogenous and recombinant vaults have a thin protein shell enclosing a large internal volume of about 100,000 cubic nanometers, which could potentially hold hundreds to thousands of small-molecular-weight compounds.

"These features make recombinant vaults an attractive target for engineering as a platform for drug delivery," Zhou said. "Our study represents the first example of using vaults toward this goal."

"Vaults can have a broad nanosystems application as malleable nanocapsules," Rome added.

The recombinant vaults are engineered to encapsulate the highly insoluble and toxic hydrophobic compound all-trans retinoic acid (ATRA) using a vault-binding lipoprotein complex that forms a lipid bilayer nanodisk.

The research was supported by the UC Discovery Grant Program, in collaboration with the research team's corporate sponsor, Abraxis Biosciences Inc., and by the Mather's Charitable Foundation and an NIH/NIBIB Award.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of California - Los Angeles.

Journal Reference:

Daniel C. Buehler, Daniel B. Toso, Valerie A. Kickhoefer, Z. Hong Zhou, Leonard H. Rome. Vaults Engineered for Hydrophobic Drug Delivery. Small, 2011; DOI: 10.1002/smll.201002274


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Moonless twilight may cue mass spawning

Image: READY, SET. Just about to join in a synchronized mass spawning, pink bundles of eggs and sperm poke out of coral in Palau waiting for something — researchers aren’t sure what — but a blue shift in twilight might play a role. Credit: Charles A. Boch

Sentimental songs aside, maybe it’s an absence of moonlight that turns the bounding main into a sea of love.

On evenings when the moon lags below the horizon after sunset, twilight takes on an especially blue cast. That color shift, intensifying on nights after the full moon, might cue the remarkably synchronized mass spawning of some marine species, suggests Alison Sweeney of the University of California, Santa Barbara.

Corals, perhaps the most famous of the mass spawners, don’t have central nervous systems or actual eyes. Yet many corals manage to release their eggs and sperm into the water on one or just a few evenings of the year in the same few hours — sometimes just the same 20 minutes — as neighbors of the same species for miles around. Seasonal cues go into this feat, but what interests Sweeney and her colleagues is how species coordinate the fine-scale timing on a particular evening. “This 20-minute precision is pretty tough to explain,” she says.

In a first step to testing the notion of a blue-twilight cue, Sweeney and her colleagues floated sensors and a laptop wedged into an inner tube out to corals in the U.S. Virgin Islands. Measurements showed that the blue shift can be detected underwater, the researchers report in the March 1 Journal of Experimental Biology.

Just two light-sensing pigments of the opsin type, one tuned to greenish and the other to a blue wavelength, would be enough to detect such a shift, the researchers calculate. In recent genetic analyses, opsin pigments have been showing up in abundance in invertebrates, often more abundantly than in people, Sweeney says.

Marine animals often spawn in sync with some phase of the lunar cycle, and twilight’s color changes slightly around the time of the full moon, Sweeney says. Moonlight has a slight reddish tinge. So the waxing moon, which appears in the sky before the sun sets, shifts twilight a little toward the red. A full moon, however, just peeps over the horizon as the sun sets. As the moon wanes, it rises after sunset, leaving twilight bluer.

Spawning corals typically release their bundles of gametes to float toward the ocean surface and mingle in the evening. “It looks like the little pellets inside a bean bag chair,” Sweeney says. She recalls a spawning covering miles of reef in Palau that left a pink slick on the water still visible the next morning. “You come out of the water smelling like rotten flower shop,” she says.

The idea that a shift in twilight could trigger such an event does sound novel to evolutionary biologist Don Levitan of Florida State University in Tallahassee. “This is a correlation that sets up a series of exciting hypotheses that need to be tested,” he says. The idea does sound plausible, comments another biologist who has studied coral spawning, James Guest of the National University of Singapore.

There must be other lunar and daily cues, says Annie Mercier of Memorial University in St. John’s, Canada. She and her colleagues have just described lunar cycles in the reproduction of deep-sea corals and echinoderms 1,000 meters below the surface, where clues to lunar phase remain “elusive,” as she puts it.



View the original article here

Wednesday 27 April 2011

Functioning synapse created using carbon nanotubes: Devices might be used in brain prostheses or synthetic brains

ScienceDaily (Apr. 22, 2011) — Engineering researchers at the University of Southern California have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.

The team, which was led by Professor Alice Parker and Professor Chongwu Zhou in the USC Viterbi School of Engineering Ming Hsieh Department of Electrical Engineering, used an interdisciplinary approach combining circuit design with nanotechnology to address the complex problem of capturing brain function.

In a paper published in the proceedings of the IEEE/NIH 2011 Life Science Systems and Applications Workshop in April 2011, the Viterbi team detailed how they were able to use carbon nanotubes to create a synapse.

Carbon nanotubes are molecular carbon structures that are extremely small, with a diameter a million times smaller than a pencil point. These nanotubes can be used in electronic circuits, acting as metallic conductors or semiconductors.

"This is a necessary first step in the process," said Parker, who began the looking at the possibility of developing a synthetic brain in 2006. "We wanted to answer the question: Can you build a circuit that would act like a neuron? The next step is even more complex. How can we build structures out of these circuits that mimic the function of the brain, which has 100 billion neurons and 10,000 synapses per neuron?"

Parker emphasized that the actual development of a synthetic brain, or even a functional brain area is decades away, and she said the next hurdle for the research centers on reproducing brain plasticity in the circuits.

The human brain continually produces new neurons, makes new connections and adapts throughout life, and creating this process through analog circuits will be a monumental task, according to Parker.

She believes the ongoing research of understanding the process of human intelligence could have long-term implications for everything from developing prosthetic nanotechnology that would heal traumatic brain injuries to developing intelligent, safe cars that would protect drivers in bold new ways.

For Jonathan Joshi, a USC Viterbi Ph.D. student who is a co-author of the paper, the interdisciplinary approach to the problem was key to the initial progress. Joshi said that working with Zhou and his group of nanotechnology researchers provided the ideal dynamic of circuit technology and nanotechnology.

"The interdisciplinary approach is the only approach that will lead to a solution. We need more than one type of engineer working on this solution," said Joshi. "We should constantly be in search of new technologies to solve this problem."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Southern California, via EurekAlert!, a service of AAAS.


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


View the original article here

Conducting ferroelectrics may be key to new electronic memory

ScienceDaily (Apr. 25, 2011) — Novel properties of ferroelectric materials discovered at the Department of Energy's Oak Ridge National Laboratory are moving scientists one step closer to realizing a new paradigm of electronic memory storage.

A new study led by ORNL's Peter Maksymovych and published in the American Chemical Society's Nano Letters revealed that contrary to previous assumptions, domain walls in ferroelectric materials act as dynamic conductors instead of static ones.

Domain walls, the separation zones only a few atoms wide between opposing states of polarization in ferroelectric materials, are known to be conducting, but the origin of the conductivity has remained unclear.

"Our measurements identified that subtle and microscopically reversible distortions or kinks in the domain wall are at the heart of the dynamic conductivity," Maksymovych said. "The domain wall in its equilibrium state is not a true conductor like a rigid piece of copper wire. When you start to distort it by applying an electric field, it becomes a much better conductor."

Ferroelectrics, a unique class of materials that respond to the application of an electric field by microscopically switching their polarization, are already used in applications including sonar, medical imaging, fuel injectors and many types of sensors.

Now, researchers want to push the boundaries of ferroelectrics by making use of the materials' properties in areas such as memory storage and nanoelectronics. Gaining a detailed understanding of electrical conductance in domain walls is seen as a crucial step toward these next generation applications.

"This study shows for the first time that the dynamics of these defects -- the domain walls -- are a much richer source of memory functionality," Maksymovych said. "It turns out you can dial in the level of the conductivity in the domain wall, making it a tunable, metastable, dynamic memory element."

The domain wall's tunable nature stems from the delayed response of its conductivity to changes in the applied electric field: shutting off the field does not produce an immediate drop in conductance. Instead, the domain wall "remembers" the last level of conductance for a given period of time and then relaxes to its original state, a phenomenon known as memristance. This type of behavior is unlike traditional electronics, which rely on silicon transistors that act as on-off switches when electric fields are applied.
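To make the memristance described above concrete, here is a minimal toy sketch, not the authors' physical model: the wall's conductance is held at an elevated, field-induced value while the field is on, then relaxes exponentially back toward its equilibrium value once the field is switched off. All parameter values are arbitrary.

```python
import math

def toy_wall_conductance(t, t_field_off, g_set=5.0, g_rest=1.0, tau=10.0):
    """Toy memristive response of a domain wall (illustrative only).

    While the field is on (t <= t_field_off) the conductance sits at the
    elevated, field-induced value g_set; afterwards it relaxes back to its
    equilibrium value g_rest with an assumed time constant tau (arbitrary units).
    """
    if t <= t_field_off:
        return g_set
    return g_rest + (g_set - g_rest) * math.exp(-(t - t_field_off) / tau)

# The wall "remembers" the elevated conductance for a while after the field is removed.
for t in (0, 5, 6, 10, 20, 50):
    print(t, round(toy_wall_conductance(t, t_field_off=5), 3))
```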

"Finding functionality intrinsic to nanoscale systems that can be controlled in a novel way is not a path to compete with silicon, but it suggests a viable alternative to silicon for a new paradigm in electronics," Maksymovych said.

The ORNL-led team focused on bismuth ferrite samples, but researchers expect that the observed properties of domain walls will hold true for similar materials.

"The resulting memristive-like behavior is likely to be general to ferroelectric domain walls in semiconducting ferroelectric and multiferroic materials," said ORNL co-author Sergei Kalinin.

The samples used in the study were provided by the University of California at Berkeley. Other authors are ORNL's Arthur Baddorf, Jan Seidel and Ramamoorthy Ramesh of Lawrence Berkeley National Laboratory and UC Berkeley, and Pennsylvania State University's Pingping Wu and Long-Qing Chen.

Part of this work was supported by the Center for Nanophase Materials Sciences at ORNL. 

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by DOE/Oak Ridge National Laboratory.

Journal Reference:

Peter Maksymovych, Jan Seidel, Ying Hao Chu, Pingping Wu, Arthur P. Baddorf, Long-Qing Chen, Sergei V. Kalinin, Ramamoorthy Ramesh. Dynamic Conductivity of Ferroelectric Domain Walls in BiFeO3. Nano Letters, 2011; DOI: 10.1021/nl104363x


Large Hadron Collider sets world record beam intensity

ScienceDaily (Apr. 23, 2011) — CERN's Large Hadron Collider (LHC) has set a new world record for beam intensity at a hadron collider, colliding beams with a luminosity of 4.67×10³² cm⁻²s⁻¹. This exceeds the previous world record of 4.024×10³² cm⁻²s⁻¹, set by the US Fermi National Accelerator Laboratory’s Tevatron collider in 2010, and marks an important milestone in LHC commissioning.

“Beam intensity is key to the success of the LHC, so this is a very important step,” said CERN Director General Rolf Heuer. “Higher intensity means more data, and more data means greater discovery potential.”

Luminosity gives a measure of how many collisions are happening in a particle accelerator: the higher the luminosity, the more particles are likely to collide. When looking for rare processes, this is important. Higgs particles, for example, will be produced very rarely if they exist at all, so for a conclusive discovery or refutation of their existence, a large amount of data is required.
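In practical terms, luminosity converts a process's cross-section into an event rate, R = L × σ. A minimal sketch using the record luminosity quoted above and a purely illustrative one-picobarn cross-section (the cross-section is an assumption, not a figure from the article):

```python
LUMINOSITY = 4.67e32   # cm^-2 s^-1, the record figure quoted above
SIGMA_PB = 1.0         # assumed cross-section of a rare process, in picobarns
PB_TO_CM2 = 1e-36      # 1 picobarn = 1e-36 cm^2

rate_per_s = LUMINOSITY * SIGMA_PB * PB_TO_CM2
print(f"{rate_per_s:.2e} events per second")    # about 4.7e-04
print(f"{rate_per_s * 86400:.0f} events per day")  # about 40, which is why rare searches need lots of data
```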

The current LHC run is scheduled to continue to the end of 2012. That will give the experiments time to collect enough data to fully explore the energy range accessible with 3.5 TeV per beam collisions for new physics before preparing the LHC for higher energy running. By the end of the current running period, for example, we should know whether the Higgs boson exists or not.

“There’s a great deal of excitement at CERN today,” said CERN’s Director for Research and Scientific Computing, Sergio Bertolucci, “and a tangible feeling that we’re on the threshold of new discovery.”

After two weeks of preparing the LHC for this new level of beam intensity, the machine is now moving into a phase of continuous physics running scheduled to last until the end of the year. There will then be a short technical stop, before physics running resumes for 2012.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by CERN.


Corals moving north

CHANGE OF SCENE: Some Pacific coral species have shifted northward over the last 80 years, a new study has found. Mila Zinkova/Wikimedia Commons

Some Pacific corals have done the equivalent of moving from sunny Atlanta to Detroit, possibly in response to rising ocean temperatures.

A new study of reefs around Japan reveals that a handful of coral species have migrated from the balmy subtropics to temperate climate zones over the last 80 years. The study is the first to track coral reefs for such a long time and over several latitude lines, a Japanese team reports in an upcoming Geophysical Research Letters.

The team, led by geographer Hiroya Yamano of the National Institute for Environmental Studies in Tsukuba, Japan, analyzed maps of corals from four time periods starting in the 1930s. They found that of nine common coral species, four had expanded northward, and two went as far as temperate waters. The study confirms what marine biologists and fishermen have speculated for years. “There were eyewitness accounts of the occurrence, but the data wasn’t so reliable,” says Yamano. “Now we can show very solid evidence.”

Now it appears that some coral species will migrate — and fast — in response to warming waters. Some species Yamano examined migrated as fast as 8.7 miles per year; by comparison, Yamano calculated that a sample of land-traveling animals migrates only 0.4 miles per year on average. In 80 years, the fastest corals would travel nearly 700 miles. It would be like land plants making the Atlanta-to-Detroit trek between the Great Depression and today.
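The distances quoted follow from simple rate-times-time arithmetic; a quick check of the study's numbers:

```python
coral_rate = 8.7   # miles per year, fastest corals in the study
land_rate = 0.4    # miles per year, average of the land-animal sample cited
years = 80

print(round(coral_rate * years))          # 696 miles, the "nearly 700 miles" figure
print(round(coral_rate / land_rate, 1))   # the fastest corals moved roughly 22 times faster than the land average
```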

Coral reefs are important biologically because they house a diverse group of animals — about one in four marine species call a coral reef home. Reefs are built by animals called polyps, which form hard skeletons of calcium carbonate. Each year, the corals around Japan hatch larvae, which can get swept up by warm Pacific currents from the south, the Kuroshio and Tsushima currents. But most larvae normally don’t settle far from home.

Taken with other studies that report animals moving north as temperature rises, it’s a good hypothesis that the corals in this study are moving to fight the heat, says John Pandolfi, a marine biologist at the University of Queensland in Brisbane, Australia. Researchers will next need to study these species in the lab to test whether temperature is truly the culprit.

But adjusting the marine thermostat isn’t the only way to kill a coral. Too much acidity from high concentrations of carbon dioxide can also weaken coral reefs. So it’s peculiar that the Japanese corals moved north, says Pandolfi, because their new homes are likely more saturated with carbon dioxide. It appears corals are able and willing to make that trade-off, he says.

Studies like this one will be crucial for creating a global database of how marine life is reacting to climate change, says Pandolfi. He and his colleagues are collating studies for such a data repository right now. Species all react differently to changes in temperature, and it’s difficult to figure out from the published literature which ones stay put.

“The good thing about this study, they didn’t just tell us what moved, but what didn’t move,” says Pandolfi. He says one pitfall in drawing data from a pool of research papers is that it’s harder to publish a study that shows no change in species movement, so data on those species are often not available.



Solar power goes viral: Researchers use virus to improve solar-cell efficiency

ScienceDaily (Apr. 25, 2011) — Researchers at MIT have found a way to make significant improvements to the power-conversion efficiency of solar cells by enlisting the services of tiny viruses to perform detailed assembly work at the microscopic level.

In a solar cell, sunlight hits a light-harvesting material, causing it to release electrons that can be harnessed to produce an electric current. The new MIT research, published online in the journal Nature Nanotechnology, is based on findings that carbon nanotubes -- microscopic, hollow cylinders of pure carbon -- can enhance the efficiency of electron collection from a solar cell's surface.

Previous attempts to use the nanotubes, however, had been thwarted by two problems. First, making carbon nanotubes generally produces a mix of two types: some act as semiconductors (sometimes allowing an electric current to flow, sometimes not), others as metals (which act like wires, allowing current to flow easily). The new research showed, for the first time, that the two types have opposite effects: the semiconducting nanotubes can enhance the performance of solar cells, while the metallic ones degrade it. Second, nanotubes tend to clump together, which reduces their effectiveness.

And that's where viruses come to the rescue. Graduate students Xiangnan Dang and Hyunjung Yi -- working with Angela Belcher, the W. M. Keck Professor of Energy, and several other researchers -- found that a genetically engineered version of a virus called M13, which normally infects bacteria, can be used to control the arrangement of the nanotubes on a surface, keeping the tubes separated so they neither short out the circuits nor clump together.

The system the researchers tested used a type of solar cell known as dye-sensitized solar cells, a lightweight and inexpensive type where the active layer is composed of titanium dioxide, rather than the silicon used in conventional solar cells. But the same technique could be applied to other types as well, including quantum-dot and organic solar cells, the researchers say. In their tests, adding the virus-built structures enhanced the power conversion efficiency to 10.6 percent from 8 percent -- almost a one-third improvement.
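As a quick sanity check on the "almost one-third" figure, the relative gain over the 8 percent baseline works out as follows:

```python
baseline_eff = 8.0    # percent power-conversion efficiency, per the article
improved_eff = 10.6   # percent, per the article

relative_gain = (improved_eff - baseline_eff) / baseline_eff
print(f"{relative_gain:.1%}")   # 32.5%, i.e. "almost a one-third improvement"
```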

This dramatic improvement takes place even though the viruses and the nanotubes make up only 0.1 percent by weight of the finished cell. "A little biology goes a long way," Belcher says. With further work, the researchers think they can ramp up the efficiency even further.

The viruses are used to help improve one particular step in the process of converting sunlight to electricity. In a solar cell, the first step is for the energy of the light to knock electrons loose from the solar-cell material (usually silicon); then, those electrons need to be funneled toward a collector, from which they can form a current that flows to charge a battery or power a device. After that, they return to the original material, where the cycle can start again. The new system is intended to enhance the efficiency of the second step, helping the electrons find their way: Adding the carbon nanotubes to the cell "provides a more direct path to the current collector," Belcher says.

The viruses actually perform two different functions in this process. First, they possess short proteins called peptides that can bind tightly to the carbon nanotubes, holding them in place and keeping them separated from each other. Each virus can hold five to 10 nanotubes, each of which is held firmly in place by about 300 of the virus's peptide molecules. In addition, the virus was engineered to produce a coating of titanium dioxide (TiO2), a key ingredient for dye-sensitized solar cells, over each of the nanotubes, putting the titanium dioxide in close proximity to the wire-like nanotubes that carry the electrons.

The two functions are carried out in succession by the same virus, whose activity is "switched" from one function to the next by changing the acidity of its environment. This switching feature is an important new capability that has been demonstrated for the first time in this research, Belcher says.

In addition, the viruses make the nanotubes soluble in water, which makes it possible to incorporate the nanotubes into the solar cell using a water-based process that works at room temperature.

Prashant Kamat, a professor of chemistry and biochemistry at Notre Dame University who has done extensive work on dye-sensitized solar cells, says that while others have attempted to use carbon nanotubes to improve solar cell efficiency, "the improvements observed in earlier studies were marginal," while the improvements by the MIT team using the virus assembly method are "impressive."

"It is likely that the virus template assembly has enabled the researchers to establish a better contact between the TiO2 nanoparticles and carbon nanotubes. Such close contact with TiO2 nanoparticles is essential to drive away the photo-generated electrons quickly and transport it efficiently to the collecting electrode surface."

Kamat thinks the process could well lead to a viable commercial product: "Dye-sensitized solar cells have already been commercialized in Japan, Korea and Taiwan," he says. If the addition of carbon nanotubes via the virus process can improve their efficiency, "the industry is likely to adopt such processes."

Belcher and her colleagues have previously used differently engineered versions of the same virus to enhance the performance of batteries and other devices, but the method used to enhance solar cell performance is quite different, she says.

Because the process would just add one simple step to a standard solar-cell manufacturing process, it should be quite easy to adapt existing production facilities and thus should be possible to implement relatively rapidly, Belcher says.

The research team also included Paula Hammond, the Bayer Professor of Chemical Engineering; Michael Strano, the Charles (1951) and Hilda Roddey Career Development Associate Professor of Chemical Engineering; and four other graduate students and postdoctoral researchers. The work was funded by the Italian company Eni, through the MIT Energy Initiative's Solar Futures Program.

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Massachusetts Institute of Technology. The original article was written by David Chandler, MIT News Office.

Journal Reference:

Xiangnan Dang, Hyunjung Yi, Moon-Ho Ham, Jifa Qi, Dong Soo Yun, Rebecca Ladewski, Michael S. Strano, Paula T. Hammond, Angela M. Belcher. Virus-templated self-assembled single-walled carbon nanotubes for highly efficient electron collection in photovoltaic devices. Nature Nanotechnology, 2011; DOI: 10.1038/nnano.2011.50


Clouds warm things up

FLUFFY BLANKET: Clouds trap heat and warm the climate overall, says the best measurement yet of their influence on climate. Kables/Flickr

Over the past decade, during short-term climate changes, clouds trapped heat in the Earth’s atmosphere and warmed the planet, a new study suggests.

The work is the most detailed look at how clouds affect climate — one of the biggest scientific unknowns in how much global warming to expect (SN: 12/4/10, p. 24). Some researchers have suggested that clouds might cool the planet overall, but the study supports the opposite idea, that clouds make things toastier.

“This is really the first quantitative test of the total cloud feedback in climate models,” says Andrew Dessler, an atmospheric scientist at Texas A&M University in College Station. “The results suggest that our understanding of the cloud feedback, and the simulation of the cloud feedback by models, is actually quite good.”

Dessler’s paper appears in the December 10 issue of Science.

Only about a third of the roughly 3 degree Celsius warming expected over the next century will come directly from heat-trapping greenhouse gases like carbon dioxide, says Dessler. The rest comes from a variety of feedbacks, or processes that act to amplify the solar heating. Climate scientists understand almost all of these feedbacks very well, except for clouds.

That’s because clouds can both cool the planet, by reflecting sunlight back into space, and warm it, by absorbing heat reradiating from the Earth’s surface and preventing the heat from escaping. Researchers haven’t been able to figure out which of these effects is more important overall.

To tackle the question, Dessler used an instrument on NASA’s Terra satellite to track global radiation coming into and leaving Earth’s atmosphere between March 2000 and February 2010.

By subtracting complicating factors like the amount of atmospheric water vapor and how reflective the Earth’s surface was, Dessler was left with the effects of changing cloud cover. And although the data could potentially be interpreted as clouds showing a mild cooling effect, the best explanation, he says, is that clouds amplify whatever warming might be going on.

Running several computer models over the same period also produced a positive feedback, an indication that the climate simulations deal with cloud effects relatively well.

“It’s very encouraging to see analysis of observations placing some greater constraints on estimates of the magnitude of cloud feedback,” says Dennis Hartmann, an atmospheric scientist at the University of Washington in Seattle.

The biggest source of planetary climate change during the decade studied was the El Niño Southern Oscillation, a climate pattern that can temporarily raise temperatures by several tenths of a degree Celsius. More work is needed, says Hartmann, to understand how clouds will behave during long-term changes like the global warming expected from greenhouse gases.

But having some numbers on cloud feedback is better than having no numbers at all, says Dessler. He next plans to look more closely at how clouds are distributed around the globe, and how the feedback might vary locally.



Tuesday 26 April 2011

Development in fog harvesting process may make water available to the world’s poor

ScienceDaily (Apr. 25, 2011) — In the arid Namib Desert on the west coast of Africa, one type of beetle has found a distinctive way of surviving. When the morning fog rolls in, the Stenocara gracilipes species, also known as the Namib Beetle, collects water droplets on its bumpy back, then lets the moisture roll down into its mouth, allowing it to drink in an area devoid of flowing water.

What nature has developed, Shreerang Chhatre wants to refine, to help the world's poor. Chhatre is an engineer and aspiring entrepreneur at MIT who works on fog harvesting, the deployment of devices that, like the beetle, attract water droplets and corral the runoff. This way, poor villagers could collect clean water near their homes, instead of spending hours carrying water from distant wells or streams. In pursuing the technical and financial sides of his project, Chhatre is simultaneously a doctoral candidate in chemical engineering at MIT; an MBA student at the MIT Sloan School of Management; and a fellow at MIT's Legatum Center for Development and Entrepreneurship.

Access to water is a pressing global issue: the World Health Organization and UNICEF estimate that nearly 900 million people worldwide live without safe drinking water. The burden of finding and transporting that water falls heavily on women and children. "As a middle-class person, I think it's terrible that the poor have to spend hours a day walking just to obtain a basic necessity," Chhatre says.

A fog-harvesting device consists of a fence-like mesh panel, which attracts droplets, connected to receptacles into which water drips. Chhatre has co-authored published papers on the materials used in these devices, and believes he has improved their efficacy. "The technical component of my research is done," Chhatre says. He is pursuing his work at MIT Sloan and the Legatum Center in order to develop a workable business plan for implementing fog-harvesting devices.

Interest in fog harvesting dates to the 1990s, and increased when new research on Stenocara gracilipes made a splash in 2001. A few technologists saw potential in the concept for people. One Canadian charitable organization, FogQuest, has tested projects in Chile and Guatemala.

Chhatre's training as a chemical engineer has focused on the wettability of materials, their tendency to either absorb or repel liquids (think of a duck's feathers, which repel water). A number of MIT faculty have made advances in this area, including Robert Cohen of the Department of Chemical Engineering; Gareth McKinley of the Department of Mechanical Engineering; and Michael Rubner of the Department of Materials Science and Engineering. Chhatre, who also received his master's degree in chemical engineering from MIT in 2009, is co-author, with Cohen and McKinley among other researchers, of three published papers on the kinds of fabrics and coatings that affect wettability.

One basic principle of a good fog-harvesting device is that it must have a combination of surfaces that attract and repel water. For instance, the shell of Stenocara gracilipes has bumps that attract water and troughs that repel it; this way, drops collect on the bumps, then run off through the troughs without being absorbed, so that the water reaches the beetle's mouth.

To build fog-harvesting devices that work on a human scale, Chhatre says, "The idea is to use the design principles we developed and extend them to this problem."

To build larger fog harvesters, researchers generally use mesh, rather than a solid surface like a beetle's shell, because a completely impermeable object creates wind currents that will drag water droplets away from it. In this sense, the beetle's physiology is an inspiration for human fog harvesting, not a template. "We tried to replicate what the beetle has, but found this kind of open permeable surface is better," Chhatre says. "The beetle only needs to drink a few micro-liters of water. We want to capture as large a quantity as possible."

In some field tests, fog harvesters have captured one liter of water (roughly a quart) per one square meter of mesh, per day. Chhatre and his colleagues are conducting laboratory tests to improve the water collection ability of existing meshes.
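Those field-test yields suggest a rough way to size an installation. The sketch below uses the one-liter-per-square-meter-per-day figure from the article; the household size and per-person water need are assumptions for illustration, not numbers from the story:

```python
YIELD_L_PER_M2_DAY = 1.0       # field-test yield quoted in the article
NEED_L_PER_PERSON_DAY = 20.0   # assumed basic daily need per person (illustrative)
HOUSEHOLD_SIZE = 5             # assumed

mesh_area_m2 = HOUSEHOLD_SIZE * NEED_L_PER_PERSON_DAY / YIELD_L_PER_M2_DAY
print(f"Mesh needed for one household: ~{mesh_area_m2:.0f} m^2")   # ~100 m^2
```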

FogQuest workers say there is more to fog harvesting than technology, however. "You have to get the local community to participate from the beginning," says Melissa Rosato, who served as project manager for a FogQuest program that has installed 36 mesh nets in the mountaintop village of Tojquia, Guatemala, and supplies water for 150 people. "They're the ones who are going to be managing and maintaining the equipment." Because women usually collect water for households, Rosato adds, "If women are not involved, chances of a long-term sustainable project are slim."

Whatever Chhatre's success in the laboratory, he agrees it will not be easy to turn fog-harvesting technology into a viable enterprise. "My consumer has little monetary power," he notes. As part of his Legatum fellowship and Sloan studies, Chhatre is analyzing which groups might use his potential product. Chhatre believes the technology could also work on the rural west coast of India, north of Mumbai, where he grew up.

Another possibility is that environmentally aware communities, schools or businesses in developed countries might try fog harvesting to reduce the amount of energy needed to obtain water. "As the number of people and businesses in the world increases and rainfall stays the same, more people will be looking for alternatives," says Robert Schemenauer, the executive director of FogQuest.

Indeed, the importance of water-supply issues globally is one reason Chhatre was selected for his Legatum fellowship.

"We welcomed Shreerang as a Legatum fellow because it is an important problem to solve," notes Iqbal Z. Quadir, director of the Legatum Center. "About one-third of the planet's water that is not saline happens to be in the air. Collecting water from thin air solves several problems, including transportation. If people do not spend time fetching water, they can be productively employed in other things which gives rise to an ability to pay. Thus, if this technology is sufficiently advanced and a meaningful amount of water can be captured, it could be commercially viable some day."

Quadir also feels that if Chhatre manages to sell a sufficient number of collection devices in the developed world, it could contribute to a reduction in price, making it more viable in poor countries. "The aviation industry in its infancy struggled with balloons, but eventually became a viable global industry," Quadir adds. "Shreerang's project addresses multiple problems at the same time and, after all, the water that fills our rivers and lakes comes from air."

That said, fog harvesting remains in its infancy, technologically and commercially, as Chhatre readily recognizes. "This is still a very open problem," he says. "It's a work in progress."

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Massachusetts Institute of Technology. The original article was written by Peter Dizikes.


Fiber-Optic Transatlantic Cable Could Save Milliseconds, Millions by Speeding Data to Stock Traders

By Matt Dellinger Posted 04.25.2011 at 1:00 pm
Speed Sells: Fiber-optic cables allow stocks to be bought and sold in an instant. Joshua Lott/Reuters

Traders used to all buy and sell stocks in the same crowded room. Everyone received information at the same time, and the first guy to shout or signal got the sale. Today, using algorithms that exploit slightly different prices changing at slightly different speeds, and computers connected to exclusive fiber-optic lines that can buy and sell stocks within fractions of a second, high-frequency traders are able to buy low and sell slightly higher in virtually the same instant.

“A couple of milliseconds can roll out to a $20-million difference in [a trader’s] account at the end of the month,” says Nigel Bayliff, the CEO of Huawei Marine Networks, one of the companies laying down superfast fiber-optic lines.

Companies like Bayliff’s are looking for ways to shave time, and the easiest method is to build a more direct route. Last year, Mississippi-based Spread Networks opened a shorter connection between New York and Chicago that saved about three milliseconds and was estimated to have cost $300 million to develop. Huawei is working with another company, Hibernia Atlantic, to lay the first transatlantic fiber-optic submarine cable in a decade, a $400-million-plus project that will save traders five milliseconds.

To do this, Hibernia is laying nearly 3,000 miles of cable across the Grand Banks off Canada and the North Atlantic, a shorter route that most companies have avoided because it traverses relatively shallow waters. Undersea-cable companies prefer to work at greater depths; they can just drop naked cable down to the ocean floor. At less than a mile deep, though, they must bury armored cable to protect it from ship anchors, fishing trawls, dredging gear, and attacks from sharks, which are drawn to the line’s electricity.

Remotely Operated Underwater Vehicle (ROV): Courtesy Global Marine Energy

For all the money Hibernia and its clients will make from a 60-millisecond trip across the Atlantic, the installation will be slow. Crews on two ships, the Sovereign and the Cable Innovator, will deploy 24-ton ploughs to cut a trench up to six feet into the seabed, into which they will lay the cable. The top speed is about one mile an hour.

Each ship is outfitted with a dynamic positioning system that keeps it in place while laying cable, regardless of currents or winds. If something gets in the way, such as another submarine cable, the crews will use a remotely operated vehicle equipped with a pair of high-pressure water “swords” to break apart sediment. The ROV then uses a mechanical arm to bury the new cable underneath the obstacle and into the temporarily softened earth. “The seabed always throws up something unexpected,” says Stuart Wilson, the manager of cable-route engineering for Global Marine Systems, the company installing the Hibernia line.

Hibernia says its cable will go live next year, connecting it to Hibernia’s Global Financial Network, which has fiber optics running 15,000 miles between financial centers from Chicago to Frankfurt. But the New York-to-London line could be the company’s biggest draw, providing a competitive advantage of just five milliseconds—about the amount of time it takes a bee to flap its wings.
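The millisecond figures come down to propagation delay: light in glass travels at roughly the vacuum speed of light divided by the fiber's refractive index. A back-of-the-envelope sketch, using the article's roughly 3,000-mile cable length and an assumed index of about 1.47 (a typical value for optical fiber, not a figure from the article):

```python
C_KM_PER_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47     # assumed refractive index of the fiber
ROUTE_KM = 4_800       # roughly 3,000 miles of cable, per the article

one_way_ms = ROUTE_KM / (C_KM_PER_S / FIBER_INDEX) * 1_000
print(f"One-way propagation delay: {one_way_ms:.1f} ms")   # about 24 ms
print(f"Round trip: {2 * one_way_ms:.1f} ms")              # about 47 ms, before equipment and routing overhead
```

At that speed a signal covers roughly 200 kilometers per millisecond, so shaving a few hundred kilometers off the route is what buys a latency advantage of a few milliseconds.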



2010 ties record for warmest year yet

HOT TIMES: Between 2000 and 2009, more of the globe warmed (red) than cooled (blue) compared with average temperatures from the baseline period 1951–1980. Robert Simmon; NASA GISS http://earthobservatory.nasa.gov/IOTD/view.php?id=47628

2010 has tied with 2005 as the hottest year on record, according to two new studies.

On January 12 NASA and the National Oceanic and Atmospheric Administration released their independent analyses of global surface temperature data for last year. Both found that 2010 was ever-so-slightly warmer than 2005. But the difference was not statistically large enough to declare 2010 the winner.

The warmth of 2010 is “not surprising, considering that global surface temperatures have been climbing,” says Deke Arndt, chief of the climate monitoring branch of NOAA’s National Climatic Data Center in Asheville, N.C. The last decade has been the warmest since record-keeping began in 1880.

Combined land and ocean surface temperatures across the globe in 2010 were 0.62 degrees Celsius higher than the 20th century average, the NOAA analysis reports. In the contiguous United States, temperatures were 0.6 degrees above normal, making it the 23rd warmest year on record for the country. The Northern Hemisphere experienced its warmest year on record, while the Southern Hemisphere saw its sixth warmest.

The NASA analysis, produced by its Goddard Institute for Space Studies in New York City, uses the period from 1951 to 1980 as its baseline and found that, on a global scale, 2010 was about 0.74 degrees Celsius warmer than that average.

Last year saw plenty of meteorological oddities. Early in 2010, North America, Europe and other parts of the Northern Hemisphere experienced bitter cold and severe snowstorms — thanks mainly to a phenomenon known as the Arctic Oscillation, a pattern that lets cold air travel south from the pole. Summer heat waves then baked India, China and especially Russia, where at least 15,000 people died from the heat and related wildfires. Rainfall records show that it was globally the wettest year since 1900.

Hot on the heels of 2010 and 2005 is 1998, ranked by both NOAA and NASA as the third-hottest year on record. “They all kind of looked like each other,” says Arndt. All three years began with a medium-to-strong El Niño, a climate pattern characterized by warmer-than-average sea surface temperatures in the eastern tropical Pacific Ocean. And during each of those years, the El Niño tapered off, to be replaced by the cooling influence of its counterpart La Niña.

In fact, the third main group that analyzes surface temperature trends earlier ranked 1998 as edging out 2005 for the top spot. That group, based at the Met Office Hadley Centre in Exeter, England, has not yet released its final ranking for 2010.

All three groups use slightly different techniques to analyze a host of observations from ground stations, ships, buoys and satellites. For instance, if there is a gap in measurements at a particular station — say, in the Arctic, where monitoring stations are few and far between — the Hadley researchers leave that station blank when doing their analyses. But NASA and NOAA, in slightly different ways from each other, use data from the closest stations to take an informed guess as to what the missing station data might be.
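One generic way to make that kind of informed guess is distance-weighted interpolation from nearby reporting stations. The sketch below is a simple inverse-distance-weighting illustration with made-up numbers, not the actual NASA or NOAA procedure:

```python
import math

def idw_anomaly(target, neighbors, power=2):
    """Estimate a missing temperature anomaly by inverse-distance weighting.

    target: (lat, lon) of the station with a gap.
    neighbors: list of ((lat, lon), anomaly_degC) from nearby reporting stations.
    Distances are crude degree-space distances, which is fine for an illustration.
    """
    num = den = 0.0
    for (lat, lon), anomaly in neighbors:
        d = math.hypot(lat - target[0], lon - target[1]) + 1e-9  # avoid divide-by-zero
        w = 1.0 / d ** power
        num += w * anomaly
        den += w
    return num / den

# Made-up Arctic example: estimate one grid point from three nearby stations (anomalies in degrees C).
print(idw_anomaly((80.0, 20.0), [((78.0, 15.0), 1.2), ((82.0, 30.0), 0.9), ((79.0, 25.0), 1.1)]))
```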

All the bean counting about which particular year is the hottest obscures a more profound point, says Gavin Schmidt, a climatologist at the NASA Goddard center who was not involved in the recent temperature analysis. “The baseline is getting warmer and warmer every year,” he says. Indeed, 2010 was the 34th year in a row in which global temperatures were higher than the 20th century average.

Schmidt predicts that 2011 will not be as toasty as 2010 was, in part because of the La Niña that is currently cooling things down. But because levels of heat-trapping greenhouse gases continue to rise in the atmosphere, “it’s almost certain to still be a top-10 year,” he says. “Maybe even a top-five year.”



Saturn’s rings explained

The icy particles that make up Saturn's rings may owe their existence to a big moon that smacked into the planet about 4.5 billion years ago. NASA/JPL/University of Colorado

Saturn’s majestic rings are the remnants of a long-vanished moon that was stripped of its icy outer layer before its rocky heart plunged into the planet, a new theory proposes. The icy fragments would have encircled the solar system’s second largest planet as rings and eventually spalled off small moons of their own that are still there today, says Robin Canup, a planetary scientist at the Southwest Research Institute in Boulder, Colo.

“Not only do you end up with the current ring, but you can also explain the inner ice-rich moons that haven’t been explained before,” she says. Canup’s paper appears online December 12 in Nature.

The origin of Saturn’s rings, a favorite of backyard astronomers, has baffled professional scientists. Earlier ideas about how the rings formed have fallen into two categories: either a small moon plunged intact into the planet and shattered, or a comet smacked into a moon, shredding the moon to bits. The problem is that both scenarios would produce an equal mix of rock and ice in Saturn’s rings — not the nearly 95 percent ice seen today.

Canup studied what happened in the period just after Saturn (and the solar system’s other planets) coalesced from a primordial disk of gas and dust 4.5 billion years ago. In previous work, she had shown that moon after moon would be born around the infant gas giants, each growing until the planet’s gravitational tug pulled it in to its destruction. Moons would have stopped forming when the disk of gas and dust was all used up.

In the new study, Canup calculated that a moon the size of Titan — Saturn’s largest at some 5,000 kilometers across — would begin to separate into layers as it migrated inward. Saturn’s tidal pull would cause much of the moon’s ice to melt and then refreeze as an outer mantle. As the moon spiraled into the planet, Canup’s calculations show, the icy layer would be stripped off to form the rings.

A moon so large would have produced rings several orders of magnitude more massive than today’s, Canup says. That, in turn, would have provided a source of ice for new, small moons spawned from the rings’ outer edge. Such a process, she says, could explain why Saturn’s inner moons are icy, out to and including the 1,000-kilometer-wide Tethys, while moons farther from the planet contain more rock.

“Once you hear it, it’s a pretty simple idea,” says Canup. “But no one was thinking of making a ring a lot more massive than the current ring, or losing a satellite like Titan. That was the conceptual break.”

“It’s a big deal,” agrees Luke Dones, also of the Southwest Research Institute, who has worked on the comet-makes-rings theory. “It never occurred to me that the rings could be so much more massive than they are now.”

Another recent study supports the notion that today’s rings are the remnants of massive ancient rings of pure ice. In a paper in press at Icarus, Larry Esposito, a planetary scientist at the University of Colorado at Boulder, calculates that more massive rings are less likely to be polluted by dust, and hence could still be as pristine as they appear today even after 4.5 billion years.

Some questions still linger about Canup’s model, says Dones, like why some of Saturn’s inner icy moons have more rock in them than others.

The theory will be put to the test in 2017, when NASA’s Cassini mission finishes its grand tour of Saturn by making the best measurements yet of the mass of the rings. Researchers can use those and other details to better tease out how the rings evolved over time.


Found in: Atom & Cosmos and Planetary Science

View the original article here