Sunday

Maps of magnesium/silicon (left) and thermal neutron absorption (right) across Mercury's surface (red indicates high values, blue low). These maps, together with maps of other elemental abundances, reveal the presence of distinct geochemical terranes. Volcanic smooth plains deposits are outlined in white.
Two new papers from members of the MESSENGER Science Team provide global-scale maps of Mercury's surface chemistry that reveal previously unrecognized geochemical terranes -- large regions that have compositions distinct from their surroundings. The presence of these large terranes has important implications for the history of the planet.

The MESSENGER mission was designed to answer several key scientific questions, including the nature of Mercury's geological history. Remote sensing of the surface's chemical composition has a strong bearing on this and other questions. Since MESSENGER was inserted into orbit about Mercury in March 2011, data from the spacecraft's X-Ray Spectrometer (XRS) and Gamma-Ray Spectrometer (GRS) have provided information on the concentrations of potassium, thorium, uranium, sodium, chlorine, and silicon, as well as ratios relative to silicon of magnesium, aluminum, sulfur, calcium, and iron.

Until now, however, geochemical maps for some of these elements and ratios have been limited to one hemisphere and have had poor spatial resolution. In "Evidence for geochemical terranes on Mercury: Global mapping of major elements with MESSENGER's X-Ray Spectrometer," published this week in Earth and Planetary Science Letters, the authors used a novel methodology to produce global maps of the magnesium/silicon and aluminum/silicon abundance ratios across Mercury's surface from data acquired by MESSENGER's XRS.

These are the first global geochemical maps of Mercury, and the first maps of global extent for any planetary body acquired via X-ray fluorescence, a technique in which X-rays from the Sun's corona cause elements in the planet's surface to fluoresce at characteristic energies, revealing its composition. The global magnesium and aluminum maps were paired with less spatially complete maps of sulfur/silicon, calcium/silicon, and iron/silicon, as well as other MESSENGER datasets, to study the geochemical characteristics of Mercury's surface and to investigate the evolution of the planet's thin silicate shell.

The most obvious of Mercury's geochemical terranes is a large feature, spanning more than 5 million square kilometers. This terrane "exhibits the highest observed magnesium/silicon, sulfur/silicon, and calcium/silicon ratios, as well as some of the lowest aluminum/silicon ratios on the planet's surface," writes Shoshana Weider, a planetary geologist and Visiting Scientist at the Carnegie Institution. Weider and colleagues suggest that this "high-magnesium region" could be the site of an ancient impact basin. By this interpretation, the distinctive chemical signature of the region reflects a substantial contribution from mantle material that was exposed during a large impact event.

A second paper, "Geochemical terranes of Mercury's northern hemisphere as revealed by MESSENGER neutron measurements," now available online in Icarus, presents the first maps of the absorption of low-energy ("thermal") neutrons across Mercury's surface. The data used in this second study were obtained with the GRS anti-coincidence shield, which is sensitive to neutron emissions from the surface of Mercury.

"From these maps we may infer the distribution of thermal-neutron-absorbing elements across the planet, including iron, chlorine, and sodium," writes lead author Patrick Peplowski of The Johns Hopkins University Applied Physics Laboratory. "This information has been combined with other MESSENGER geochemical measurements, including the new XRS measurements, to identify and map four distinct geochemical terranes on Mercury."

According to Peplowski, the results indicate that the smooth plains interior to the Caloris basin, Mercury's largest well-preserved impact basin, have an elemental composition that is distinct from other volcanic plains units, suggesting that the parental magmas were partial melts from a chemically distinct portion of Mercury's mantle. Mercury's high-magnesium region, first recognized from the XRS measurements, also contains high concentrations of unidentified neutron-absorbing elements.

"Earlier MESSENGER data have shown that Mercury's surface was pervasively shaped by volcanic activity," notes Peplowski. "The magmas erupted long ago were derived from the partial melting of Mercury's mantle. The differences in composition that we are observing among geochemical terranes indicate that Mercury has a chemically heterogeneous mantle."

"The consistency of the new XRS and GRS maps provides a new dimension to our view of Mercury's surface," Weider adds. "The terranes we observe had not previously been identified on the basis of spectral reflectance or geological mapping."

"The crust we see on Mercury was largely formed more than three billion years ago," says Carnegie's Larry Nittler, Deputy Principal Investigator of the mission and co-author of both studies. "The remarkable chemical variability revealed by MESSENGER observations will provide critical constraints on future efforts to model and understand Mercury's bulk composition and the ancient geological processes that shaped the planet's mantle and crust."

Saturn moon's ocean may harbor hydrothermal activity, spacecraft data suggest

This cutaway view of Saturn's moon Enceladus is an artist's rendering that depicts possible hydrothermal activity that may be taking place on and under the seafloor of the moon's subsurface ocean, based on recently published results from NASA's Cassini mission.
NASA's Cassini spacecraft has provided scientists with the first clear evidence that Saturn's moon Enceladus exhibits present-day hydrothermal activity, which may resemble that seen in the deep oceans on Earth. The implications of such activity on a world other than our planet open up unprecedented scientific possibilities.

"These findings add to the possibility that Enceladus, which contains a subsurface ocean and displays remarkable geologic activity, could contain environments suitable for living organisms," said John Grunsfeld, astronaut and associate administrator of NASA's Science Mission Directorate in Washington. "The locations in our solar system where extreme environments occur in which life might exist may bring us closer to answering the question: are we alone in the universe?"

Hydrothermal activity occurs when seawater infiltrates and reacts with a rocky crust and emerges as a heated, mineral-laden solution, a natural occurrence in Earth's oceans. According to two science papers, the results are the first clear indications an icy moon may have similar ongoing active processes.

The first paper, published this week in the journal Nature, relates to microscopic grains of rock detected by Cassini in the Saturn system. An extensive, four-year analysis of data from the spacecraft, computer simulations, and laboratory experiments led researchers to the conclusion that the tiny grains most likely form when hot water containing dissolved minerals from the moon's rocky interior travels upward, coming into contact with cooler water. Temperatures required for the interactions that produce the tiny rock grains would be at least 194 degrees Fahrenheit (90 degrees Celsius).

"It's very exciting that we can use these tiny grains of rock, spewed into space by geysers, to tell us about conditions on -- and beneath -- the ocean floor of an icy moon," said the paper's lead author Sean Hsu, a postdoctoral researcher at the University of Colorado at Boulder.

Cassini's cosmic dust analyzer (CDA) instrument repeatedly detected minuscule rock particles rich in silicon, even before Cassini entered Saturn's orbit in 2004. By process of elimination, the CDA team concluded these particles must be grains of silica, which is found in sand and the mineral quartz on Earth. The consistent size of the grains observed by Cassini, the largest of which were 6 to 9 nanometers, was the clue that told the researchers a specific process likely was responsible.

On Earth, the most common way to form silica grains of this size is hydrothermal activity under a specific range of conditions; namely, when slightly alkaline and salty water that is super-saturated with silica undergoes a big drop in temperature.

"We methodically searched for alternate explanations for the nanosilica grains, but every new result pointed to a single, most likely origin," said co-author Frank Postberg, a Cassini CDA team scientist at Heidelberg University in Germany.

Hsu and Postberg worked closely with colleagues at the University of Tokyo who performed the detailed laboratory experiments that validated the hydrothermal activity hypothesis. The Japanese team, led by Yasuhito Sekine, verified the conditions under which silica grains form at the same size Cassini detected. The researchers think these conditions may exist on the seafloor of Enceladus, where hot water from the interior meets the relatively cold water at the ocean bottom.

The extremely small size of the silica particles also suggests they travel upward relatively quickly from their hydrothermal origin to the near-surface sources of the moon's geysers. From seafloor to outer space, a distance of about 30 miles (50 kilometers), the grains spend a few months to a few years in transit, otherwise they would grow much larger.
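The distance and transit times quoted above imply a rough average ascent speed for the grains. The arithmetic below is a purely illustrative back-of-the-envelope check of those figures, not a calculation from the study itself:

```python
# Rough ascent speeds implied by the article's figures: the silica grains
# travel about 50 km (seafloor to geyser sources) in a few months to a
# few years. The transit times are the article's estimates; the
# arithmetic here is only illustrative.
DISTANCE_M = 50_000.0
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def ascent_speed_mm_per_s(transit_years):
    """Average speed, in mm/s, needed to cover 50 km in the given time."""
    return DISTANCE_M / (transit_years * SECONDS_PER_YEAR) * 1000

print(round(ascent_speed_mm_per_s(0.25), 1))  # a few months: ~6.3 mm/s
print(round(ascent_speed_mm_per_s(3.0), 1))   # a few years:  ~0.5 mm/s
```

Even at the slow end, the grains would rise at around half a millimeter per second on average, consistent with the authors' point that a slower journey would leave time for the grains to grow larger.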

The authors point out that Cassini's gravity measurements suggest Enceladus' rocky core is quite porous, which would allow water from the ocean to percolate into the interior. This would provide a huge surface area where rock and water could interact.

The second paper, recently published in Geophysical Research Letters, suggests hydrothermal activity as one of two likely sources of methane in the plume of gas and ice particles that erupts from the south polar region of Enceladus. The finding is the result of extensive modeling to address why methane, as previously sampled by Cassini, is curiously abundant in the plume.

The team found that, at the high pressures expected in the moon's ocean, icy materials called clathrates could form that imprison methane molecules within a crystal structure of water ice. Their models indicate that this process is so efficient at depleting the ocean of methane that the researchers still needed an explanation for its abundance in the plume.

In one scenario, hydrothermal processes super-saturate the ocean with methane. This could occur if methane is produced faster than it is converted into clathrates. A second possibility is that methane clathrates from the ocean are dragged along into the erupting plumes and release their methane as they rise, like bubbles forming in a popped bottle of champagne.

The authors agree both scenarios are likely occurring to some degree, but they note that the presence of nanosilica grains, as documented by the other paper, favors the hydrothermal scenario.

"We didn't expect that our study of clathrates in the Enceladus ocean would lead us to the idea that methane is actively being produced by hydrothermal processes," said lead author Alexis Bouquet, a graduate student at the University of Texas at San Antonio. Bouquet worked with co-author Hunter Waite, who leads the Cassini Ion and Neutral Mass Spectrometer (INMS) team at Southwest Research Institute in San Antonio.

Cassini first revealed active geological processes on Enceladus in 2005 with evidence of an icy spray issuing from the moon's south polar region and higher-than-expected temperatures in the icy surface there. With its powerful suite of complementary science instruments, the mission soon revealed a towering plume of water ice and vapor, salts and organic materials that issues from relatively warm fractures on the wrinkled surface. Gravity science results published in 2014 strongly suggested the presence of a 6-mile- (10-kilometer-) deep ocean beneath an ice shell about 19 to 25 miles (30 to 40 kilometers) thick.


Engineers create chameleon-like artificial 'skin' that shifts color on demand

Developed by engineers from the University of California at Berkeley, this chameleon-like artificial "skin" changes color as a minute amount of force is applied.
Borrowing a trick from nature, engineers from the University of California at Berkeley have created an incredibly thin, chameleon-like material that can be made to change color -- on demand -- by simply applying a minute amount of force.

This new material-of-many-colors offers intriguing possibilities for an entirely new class of display technologies, color-shifting camouflage, and sensors that can detect otherwise imperceptible defects in buildings, bridges, and aircraft.

"This is the first time anybody has made a flexible chameleon-like skin that can change color simply by flexing it," said Connie J. Chang-Hasnain, a member of the Berkeley team and co-author on a paper published today in Optica, The Optical Society's (OSA) new high-impact journal.

By precisely etching tiny features -- smaller than a wavelength of light -- onto a silicon film one thousand times thinner than a human hair, the researchers were able to select the range of colors the material would reflect, depending on how it was flexed and bent.

A Material that's a Horse of a Different Color

The colors we typically see in paints, fabrics, and other natural substances occur when white, broad-spectrum light strikes their surfaces. The unique chemical composition of each surface then absorbs various bands, or wavelengths, of light. Those that aren't absorbed are reflected back: shorter wavelengths give objects a blue hue, longer wavelengths appear redder, and the entire rainbow of possible combinations lies in between. Changing the color of a surface, such as the leaves on the trees in autumn, requires a change in chemical make-up.

Recently, engineers and scientists have been exploring another approach, one that would create designer colors without the use of chemical dyes and pigments. Rather than controlling the chemical composition of a material, it's possible to control the surface features on the tiniest of scales so they interact and reflect particular wavelengths of light. This type of "structural color" is much less common in nature, but is used by some butterflies and beetles to create a particularly iridescent display of color.

Controlling light with structures rather than traditional optics is not new. In astronomy, for example, arrays of evenly spaced slits known as diffraction gratings are routinely used to direct light and spread it into its component colors. Efforts to control color with this technique, however, have proved impractical because the optical losses are simply too great.

The authors of the Optica paper applied a similar principle, though with a radically different design, to achieve the color control they were looking for. In place of slits cut into a film they instead etched rows of ridges onto a single, thin layer of silicon. Rather than spreading the light into a complete rainbow, however, these ridges -- or bars -- reflect a very specific wavelength of light. By "tuning" the spaces between the bars, it's possible to select the specific color to be reflected. Unlike the slits in a diffraction grating, however, the silicon bars were extremely efficient and readily reflected the frequency of light they were tuned to.

Flexibility Is the Key to Control

Since the spacing, or period, of the bars is the key to controlling the color they reflect, the researchers realized it would be possible to subtly shift the period -- and therefore the color -- by flexing or bending the material.

"If you have a surface with very precise structures, spaced so they can interact with a specific wavelength of light, you can change its properties and how it interacts with light by changing its dimensions," said Chang-Hasnain.

Earlier efforts to develop a flexible, color shifting surface fell short on a number of fronts. Metallic surfaces, which are easy to etch, were inefficient, reflecting only a portion of the light they received. Other surfaces were too thick, limiting their applications, or too rigid, preventing them from being flexed with sufficient control.

The Berkeley researchers were able to overcome both these hurdles by forming their grating bars from a semiconductor layer of silicon approximately 120 nanometers thick. Its flexibility was imparted by embedding the silicon bars in a flexible layer of silicone. As the silicone was bent or flexed, the period of the grating responded in kind.

The semiconductor material also allowed the team to create a skin that was incredibly thin, perfectly flat, and easy to manufacture with the desired surface properties. This produces materials that reflect precise and very pure colors and that are highly efficient, reflecting up to 83 percent of the incoming light.

Their initial design, subjected to a change in period of a mere 25 nanometers, created brilliant colors that could be shifted from green to yellow, orange, and red -- across a 39-nanometer range of wavelengths. Future designs, the researchers believe, could cover a wider range of colors and reflect light with even greater efficiency.
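The figures above imply a roughly linear tuning of reflected wavelength with grating period: a 25-nanometer change in period shifted the color across a 39-nanometer range of wavelengths. A minimal sketch of that relation, assuming linearity (an assumption for illustration; the real optical response need not be exactly linear):

```python
# Illustrative linear tuning estimate from the article's numbers:
# a 25 nm change in grating period shifted the reflected wavelength
# across a 39 nm range. Linearity is assumed here for illustration.
TUNING_RATE = 39 / 25  # nm of wavelength shift per nm of period change

def wavelength_shift_nm(period_change_nm):
    """Estimated reflected-wavelength shift for a given period change."""
    return TUNING_RATE * period_change_nm

print(round(wavelength_shift_nm(10), 1))  # a 10 nm period change -> ~15.6 nm
```

On this rough estimate, each nanometer of mechanical stretch in the grating period moves the reflected color by about 1.6 nanometers in wavelength.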

Chameleon Skin with Multiple Applications

For this demonstration, the researchers created a one-centimeter square layer of color-shifting silicon. Future developments would be needed to create a material large enough for commercial applications.

"The next step is to make this larger-scale and there are facilities already that could do so," said Chang-Hasnain. "At that point, we hope to be able to find applications in entertainment, security, and monitoring."

For consumers, this chameleon material could be used in a new class of display technologies, adding brilliant color presentations to outdoor entertainment venues. It also may be possible to create an active camouflage on the exterior of vehicles that would change color to better match the surrounding environment.

More day-to-day applications could include sensors that would change color to indicate that structural fatigue was stressing critical components on bridges, buildings, or the wings of airplanes.

"This is the first time anyone has achieved such a broad range of color on a one-layer, thin and flexible surface," concluded Chang-Hasnain. "I think it's extremely cool."


Underground ocean on Jupiter's largest moon, Ganymede

Observation of Aurorae on Ganymede. NASA's Hubble Space Telescope observed a pair of auroral belts encircling the Jovian moon Ganymede. The belts were observed in ultraviolet light by the Space Telescope Imaging Spectrograph and are colored blue in this illustration. They are overlaid on a visible-light image of Ganymede taken by NASA's Galileo orbiter. The locations of the glowing aurorae are determined by the moon's magnetic field, and therefore provide a probe of the moon's interior, where the magnetic field is generated. The amount of rocking of the magnetic field, caused by its interaction with Jupiter's own immense magnetosphere, provides evidence that the moon has a subsurface ocean of saline water.
NASA's Hubble Space Telescope has the best evidence yet for an underground saltwater ocean on Ganymede, Jupiter's largest moon. The subterranean ocean is thought to have more water than all the water on Earth's surface.

Identifying liquid water is crucial in the search for habitable worlds beyond Earth and for the search for life, as we know it.

"This discovery marks a significant milestone, highlighting what only Hubble can accomplish," said John Grunsfeld, associate administrator of NASA's Science Mission Directorate at NASA Headquarters, Washington, D.C. "In its 25 years in orbit, Hubble has made many scientific discoveries in our own solar system. A deep ocean under the icy crust of Ganymede opens up further exciting possibilities for life beyond Earth."

Ganymede is the largest moon in our solar system and the only moon with its own magnetic field. The magnetic field causes aurorae, which are ribbons of glowing, hot electrified gas, in regions circling the north and south poles of the moon. Because Ganymede is close to Jupiter, it is also embedded in Jupiter's magnetic field. When Jupiter's magnetic field changes, the aurorae on Ganymede also change, "rocking" back and forth.

By watching the rocking motion of the two aurorae, scientists were able to determine that a large amount of saltwater exists beneath Ganymede's crust, affecting its magnetic field.

A team of scientists led by Joachim Saur of the University of Cologne in Germany came up with the idea of using Hubble to learn more about the inside of the moon.

"I was always brainstorming how we could use a telescope in other ways," said Saur. "Is there a way you could use a telescope to look inside a planetary body? Then I thought, the aurorae! Because aurorae are controlled by the magnetic field, if you observe the aurorae in an appropriate way, you learn something about the magnetic field. If you know the magnetic field, then you know something about the moon's interior."

If a saltwater ocean were present, Jupiter's magnetic field would create a secondary magnetic field in the ocean that would counter Jupiter's field. This "magnetic friction" would suppress the rocking of the aurorae. This ocean fights Jupiter's magnetic field so strongly that it reduces the rocking of the aurorae to 2 degrees, instead of 6 degrees if the ocean were not present.

Scientists estimate the ocean is 60 miles (100 kilometers) thick -- 10 times deeper than Earth's oceans -- and is buried under a 95-mile (150-kilometer) crust of mostly ice.

Scientists first suspected an ocean in Ganymede in the 1970s, based on models of the large moon. NASA's Galileo mission measured Ganymede's magnetic field in 2002, providing the first evidence supporting those suspicions. The Galileo spacecraft took brief "snapshot" measurements of the magnetic field in 20-minute intervals, but its observations were too brief to distinctly catch the cyclical rocking of the ocean's secondary magnetic field.

The new observations were done in ultraviolet light and could only be accomplished with a space telescope high above Earth's atmosphere, which blocks most ultraviolet light.

The team's results will be published online in the Journal of Geophysical Research: Space Physics on March 12.


Monday

Diabetes, depression predict dementia risk in people with slowing minds

People with mild cognitive impairment are at higher risk of developing dementia if they have diabetes or psychiatric symptoms such as depression, finds a new review led by UCL researchers.

Mild cognitive impairment (MCI) is a state between normal ageing and dementia, where someone's mind is functioning less well than would be expected for their age. It affects 19% of people aged 65 and over, and around 46% of people with MCI develop dementia within 3 years compared with 3% of the general population.

The latest review paper, published in the American Journal of Psychiatry, analysed data from 62 separate studies, following a total of 15,950 people diagnosed with MCI. The study found that among people with MCI, those with diabetes were 65% more likely to progress to dementia and those with psychiatric symptoms were more than twice as likely to develop the condition.
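The "65% more likely" figure corresponds to a relative risk of 1.65. A minimal sketch of that arithmetic, using hypothetical per-group progression counts (the review reports pooled risk estimates, not the raw counts used below):

```python
# Illustrative relative-risk arithmetic matching the figures quoted in
# the article. The cohort counts below are hypothetical, chosen only to
# reproduce the reported 1.65 risk ratio for diabetes.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: P(progression | exposed) / P(progression | unexposed)."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical: 66 of 100 MCI patients with diabetes progress to dementia,
# versus 40 of 100 without diabetes -- i.e. 65% more likely.
rr = relative_risk(66, 100, 40, 100)
print(round(rr, 2))  # 1.65
```

The "more than twice as likely" finding for psychiatric symptoms corresponds, in the same terms, to a relative risk above 2.0.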

"There are strong links between mental and physical health, so keeping your body healthy can also help to keep your brain working properly," explains lead author Dr Claudia Cooper (UCL Psychiatry). "Lifestyle changes to improve diet and mood might help people with MCI to avoid dementia, and bring many other health benefits. This doesn't necessarily mean that addressing diabetes, psychiatric symptoms and diet will reduce an individual's risk, but our review provides the best evidence to date about what might help."

The Alzheimer's Society charity recommends that people stay socially and physically active to help prevent dementia. Their guidelines also suggest eating a diet high in fruit and vegetables and low in meat and saturated fats, such as the Mediterranean diet.

"Some damage is already done in those with MCI but these results give a good idea about what it makes sense to target to reduce the chance of dementia," says senior author Professor Gill Livingston (UCL Psychiatry). "Randomised controlled trials are now needed."

Professor Alan Thompson, Dean of the UCL Faculty of Brain Sciences, says: "This impressive Systematic Review and meta-analysis from The Faculty of Brain Science's Division of Psychiatry underlines two important messages. Firstly, the impact of medical and psychiatric co-morbidities in individuals with mild cognitive impairment and secondly, the importance and therapeutic potential of early intervention in the prevention of dementia. Confirming these findings and incorporating appropriate preventative strategies could play an important part in lessening the ever-increasing societal burden of dementia in our ageing population."



Ancient and modern cities aren't so different

Despite notable differences in appearance and governance, ancient human settlements function in much the same way as modern cities, according to new findings by researchers at the Santa Fe Institute and the University of Colorado Boulder.

Previous research has shown that as modern cities grow in population, so do their efficiencies and productivity. A city’s population outpaces its development of urban infrastructure, for example, and its production of goods and services outpaces its population. What's more, these patterns exhibit a surprising degree of mathematical regularity and predictability, a phenomenon called "urban scaling."
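Urban scaling is commonly written as a power law, Y = Y0 * N^beta, where N is population and beta is the scaling exponent. The sketch below uses exponents of roughly 0.85 for infrastructure (sublinear) and 1.15 for socioeconomic outputs (superlinear); these illustrative values come from the broader urban-scaling literature, not from this article:

```python
# A minimal sketch of the power-law form used in urban-scaling work:
# Y = y0 * N**beta. The exponents (~0.85 for infrastructure, ~1.15 for
# socioeconomic outputs) are typical literature values, assumed here
# for illustration.
def scaled_quantity(population, y0, beta):
    return y0 * population ** beta

small, large = 100_000, 1_000_000  # a tenfold difference in population

# Sublinear infrastructure: a 10x larger city needs less than 10x the roads.
infra_ratio = scaled_quantity(large, 1.0, 0.85) / scaled_quantity(small, 1.0, 0.85)
# Superlinear output: a 10x larger city produces more than 10x the goods.
output_ratio = scaled_quantity(large, 1.0, 1.15) / scaled_quantity(small, 1.0, 1.15)

print(round(infra_ratio, 1), round(output_ratio, 1))  # ~7.1 and ~14.1
```

This is the regularity the researchers tested against the archaeological record: whether ancient settlements show the same kind of exponents as modern cities.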

But has this always been the case?

SFI Professor Luis Bettencourt researches urban dynamics as a lead investigator of SFI's Cities, Scaling, and Sustainability research program. When he gave a talk in 2013 on urban scaling theory, Scott Ortman, now an assistant professor in the Department of Anthropology at CU Boulder and a former Institute Omidyar Fellow, noted that the trends Bettencourt described were not particular to modern times. Their discussion prompted a research project on the effects of city size through history.

To test their ideas, the team examined archaeological data from the Basin of Mexico (what is now Mexico City and nearby regions). In the 1960s — before Mexico City’s population exploded — surveyors examined all its ancient settlements, spanning 2000 years and four cultural eras in pre-contact Mesoamerica.

Using this data, the research team analyzed the dimensions of hundreds of ancient temples and thousands of ancient houses to estimate populations and densities, size and construction rates of monuments and buildings, and intensity of site use.

Their results indicate that the bigger the ancient settlement, the more productive it was.

“It was shocking and unbelievable,” says Ortman. “We were raised on a steady diet telling us that, thanks to capitalism, industrialization, and democracy, the modern world is radically different from worlds of the past. What we found here is that the fundamental drivers of robust socioeconomic patterns in modern cities precede all that.”

Bettencourt adds: “Our results suggest that the general ingredients of productivity and population density in human societies run much deeper and have everything to do with the challenges and opportunities of organizing human social networks.”

Though excited by the results, the researchers see the discovery as just one step in a long process. The team plans to examine settlement patterns from ancient sites in Peru, China, and Europe and study the factors that lead urban systems to emerge, grow, or collapse.



Greenland is melting: The past might tell what the future holds

A team of scientists led by Danish geologist Nicolaj Krog Larsen has managed to quantify how the Greenland Ice Sheet reacted to a warm period 8,000-5,000 years ago, when temperatures were 2-4 degrees C warmer than at present. Their results have just been published in the scientific journal Geology, and are important as we are rapidly closing in on similar temperatures.

While the world is preparing for a rising global sea-level, a group of scientists led by Dr. Nicolaj Krog Larsen, Aarhus University in Denmark and Professor Kurt Kjær, Natural History Museum of Denmark ventured off to Greenland to investigate how fast the Greenland Ice Sheet reacted to past warming.

With hard work and high spirits, the scientists spent six summers coring lakes in the ice-free land surrounding the ice sheet. The lakes act as a valuable archive, as they store glacial meltwater sediments during periods when the ice advanced. That way it is possible to study and precisely date periods when the ice was smaller than it is at present.

"It has been hard work getting all these lake cores home, but it has definitely been worth the effort. Finally we are able to describe the ice sheet's response to earlier warm periods," says Dr. Nicolaj Krog Larsen of Aarhus University, Denmark.

Evidence has disappeared

The size of the Greenland Ice Sheet has varied since the Ice Age ended 11,500 years ago, and scientists have long sought to investigate its response to the warmest period, 8,000-5,000 years ago, when temperatures were 2-4 °C warmer than at present.

"The glaciers always leave evidence about their presence in the landscape. So far the problem has just been that the evidence is removed by new glacial advances. That is why it is unique that we are now able to quantify the mass loss during past warming by combining the lake sediment records with state-of-the-art modelling," says Professor Kurt Kjær, Natural History Museum of Denmark.

16 cm of global sea-level rise from Greenland

Their results show that the ice sheet reached its smallest extent precisely during the warm period 8,000-5,000 years ago. With that knowledge in hand, the researchers reviewed all available ice sheet models and chose those that best reproduced the reality of that past warming.

The best models show that during this period the ice sheet was losing mass at a rate of about 100 gigatons per year for several thousand years, delivering the equivalent of 16 cm of global sea-level rise when temperatures were 2-4 °C warmer. For comparison, the mass loss over the last 25 years has varied between 0 and 400 gigatons per year, and the Arctic is expected to warm by 2-7 °C by the year 2100.
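Ice-mass loss converts to sea-level rise via a standard oceanographic rule of thumb: roughly 362 gigatons of ice raise global mean sea level by about 1 millimeter. That conversion factor is an assumption for illustration (it is not stated in the article):

```python
# Back-of-the-envelope conversion from ice-mass loss to global sea-level
# rise. The ~362 Gt-per-mm figure is a standard approximation, assumed
# here for illustration; it does not appear in the article.
GT_PER_MM = 362.0

def sea_level_rate_mm_per_year(mass_loss_gt_per_year):
    """Global sea-level-rise rate equivalent to a given ice-mass loss."""
    return mass_loss_gt_per_year / GT_PER_MM

# The sustained mid-Holocene rate quoted in the article, 100 Gt/yr:
print(round(sea_level_rate_mm_per_year(100), 2))  # ~0.28 mm per year
```

On this conversion, the sustained mid-Holocene loss rate corresponds to roughly a quarter of a millimeter of sea-level rise per year.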



Sunday

Key indicator for successful treatment of infertile couples

Couples have choices in infertility treatments. A recent finding by Marlene Goldman, MS, ScD of the Geisel School of Medicine at Dartmouth and colleagues, published in Fertility and Sterility, gives doctors and couples a new tool to determine which technique may be more effective for their situation.

"As a woman approaches menopause, her level of follicle stimulating hormone (FSH) rises," explained Goldman. "A higher FSH level is a key indicator that the woman may not be as fertile as necessary to conceive using certain common methods of infertility treatment."

The study determined whether FSH and estrogen levels at the upper limits of normal, as measured on day three of the menstrual cycle, could predict treatment success as measured by live birth rates. The essential question was: should women with higher levels of FSH and estrogen be "fast-tracked" to in vitro fertilization (IVF), bypassing the conventional treatment trajectory?

Goldman and collaborators recorded no live births in the group with FSH and estrogen at the upper limits of normal, yet when the couples later pursued IVF, 33% were able to have babies.

"Some women express a preference to begin treatment for infertility with controlled ovarian hyper-stimulation (COH), whether by pill or injection, along with intrauterine insemination (IUI)," said Goldman. "When counseling women with day-three testing for FSH or estrogen at the upper limits of normal, it may be helpful for them to know that COH-IUI has not been successful in others with similar levels. Fortunately, IVF is a successful treatment for many women and if we can 'fast-track' them to IVF, bypassing COH-IUI, treatments will be quicker and may be less expensive."

Insurance companies may use FSH levels to determine if they will continue payments for future treatment cycles in women with high levels.

The next steps for Goldman include probing what makes IVF so successful and how to keep the success rate while reducing costs.

Marlene Goldman, MS, ScD, is Professor of Obstetrics & Gynecology, and of Community & Family Medicine at Dartmouth's Geisel School of Medicine. Her work in cancer is facilitated by Dartmouth's Norris Cotton Cancer Center in Lebanon, NH. She is the Vice-chair for Research in the Department of Obstetrics & Gynecology at Dartmouth-Hitchcock Medical Center. This work was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and National Institutes of Health grants RO1 HD38561 and RO1 HD44547. Her collaborators included Richard Reindollar, MD, principal investigator; Daniel J. Kaser, MD, first author; June L. Fung, PhD, all from Dartmouth; and Michael M. Alper, MD, from Boston IVF.

About Norris Cotton Cancer Center at Dartmouth-Hitchcock

Norris Cotton Cancer Center combines advanced cancer research at Dartmouth and the Geisel School of Medicine with patient-centered cancer care provided at Dartmouth-Hitchcock Medical Center in Lebanon, NH, at Dartmouth-Hitchcock regional locations in Manchester, Nashua, and Keene, NH, and St. Johnsbury, VT, and at 12 partner hospitals throughout New Hampshire and Vermont. It is one of 41 centers nationwide to earn the National Cancer Institute's "Comprehensive Cancer Center" designation. Learn more about Norris Cotton Cancer Center research, programs, and clinical trials online at cancer.dartmouth.edu.



Mars exploration: NASA's MAVEN spacecraft completes first deep dip campaign

This image shows an artist concept of NASA's Mars Atmosphere and Volatile Evolution (MAVEN) mission.
NASA's Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft has completed the first of five deep-dip maneuvers designed to gather measurements closer to the lower end of the Martian upper atmosphere.

"During normal science mapping, we make measurements between an altitude of about 150 km and 6,200 km (93 miles and 3,853 miles) above the surface," said Bruce Jakosky, MAVEN principal investigator at the University of Colorado's Laboratory for Atmospheric and Space Physics in Boulder. "During the deep-dip campaigns, we lower the lowest altitude in the orbit, known as periapsis, to about 125 km (78 miles) which allows us to take measurements throughout the entire upper atmosphere."

The 25 km (16 miles) altitude difference may not seem like much, but it allows scientists to make measurements down to the top of the lower atmosphere. At these lower altitudes, the atmospheric densities are more than ten times what they are at 150 km (93 miles).
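
That tenfold-plus jump in density over only 25 km is what a simple exponential atmosphere predicts. A minimal sketch, assuming an illustrative scale height of ~10 km for Mars' upper atmosphere (the article does not give this number):

```python
import math

def density_ratio(dh_km, scale_height_km):
    """Ratio of atmospheric densities across an altitude drop dh_km,
    assuming an isothermal exponential atmosphere."""
    return math.exp(dh_km / scale_height_km)

# Assumed ~10 km scale height for Mars' upper atmosphere (illustrative).
ratio = density_ratio(25.0, 10.0)
print(f"Density increases by a factor of ~{ratio:.1f}")  # ~12.2
```

With a 10 km scale height, dropping periapsis by 25 km raises the ambient density by a factor of about 12, consistent with the "more than ten times" figure quoted above.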

"We are interested in the connections that run from the lower atmosphere to the upper atmosphere and then to escape to space," said Jakosky. "We are measuring all of the relevant regions and the connections between them."

The first deep dip campaign ran from Feb. 10 to 18. The first three days of this campaign were used to lower the periapsis. Each of the five campaigns lasts for five days allowing the spacecraft to observe for roughly 20 orbits. Since the planet rotates under the spacecraft, the 20 orbits allow sampling of different longitudes spaced around the planet, providing close to global coverage.

This month's deep-dip maneuvers began when team engineers fired the rocket motors in three separate burns to lower the periapsis. Rather than performing one large burn, which risked taking the spacecraft too deep into the atmosphere, the engineers "walked" it down gently in several smaller steps.

"Although we changed the altitude of the spacecraft, we actually aimed at a certain atmospheric density," said Jakosky. "We wanted to go as deep as we can without putting the spacecraft or instruments at risk."

Even though the atmosphere at these altitudes is very tenuous, it is thick enough to cause noticeable drag on the spacecraft. Flying into too dense an atmosphere would produce excessive drag and frictional heating that could damage the spacecraft and its instruments.

At the end of the campaign, two maneuvers were conducted to return MAVEN to normal science operation altitudes. Science data returned from the deep dip will be analyzed over the coming weeks. The science team will combine the results with what the spacecraft has seen during its regular mapping to get a better picture of the entire atmosphere and of the processes affecting it.

One of the major goals of the MAVEN mission is to understand how gas from the atmosphere escapes to space, and how this has affected the planet's climate history through time. In being lost to space, gas is removed from the top of the upper atmosphere. But it is the thicker lower atmosphere that controls the climate. MAVEN is studying the entire region from the top of the upper atmosphere all the way down to the lower atmosphere so that the connections between these regions can be understood.


Brain's iconic seat of speech goes silent when we actually talk

New findings will better help map out the brain's speech regions.
For 150 years, the iconic Broca's area of the brain has been recognized as the command center for human speech, including vocalization. Now, scientists at UC Berkeley and Johns Hopkins University in Maryland are challenging this long-held assumption with new evidence that Broca's area actually switches off when we talk out loud.

The findings, reported in the journal Proceedings of the National Academy of Sciences, provide a more complex picture than previously thought of the frontal brain regions involved in speech production. The discovery has major implications for the diagnosis and treatment of stroke, epilepsy and brain injuries that result in language impairments.

"Every year millions of people suffer from stroke, some of which can lead to severe impairments in perceiving and producing language when critical brain areas are damaged," said study lead author Adeen Flinker, a postdoctoral researcher at New York University who conducted the study as a UC Berkeley Ph.D. student. "Our results could help us advance language mapping during neurosurgery as well as the assessment of language impairments."

Flinker said that neuroscientists traditionally organized the brain's language center into two main regions: one for perceiving speech and one for producing speech.

"That belief drives how we map out language during neurosurgery and classify language impairments," he said. "This new finding helps us move towards a less dichotomous view where Broca's area is not a center for speech production, but rather a critical area for integrating and coordinating information across other brain regions."

In the 1860s, French physician Pierre Paul Broca pinpointed this prefrontal brain region as the seat of speech. Broca's area has since ranked among the brain's most closely examined language regions in cognitive psychology. People with Broca's aphasia are characterized as having suffered damage to the brain's frontal lobe and tend to speak in short, stilted phrases that often omit short connecting words such as "the" and "and."

Specifically, Flinker and fellow researchers have found that Broca's area -- which is located in the frontal cortex above and behind the left eye -- engages with the brain's temporal cortex, which organizes sensory input, and later the motor cortex, as we process language and plan which sounds and movements of the mouth to use, and in what order. However, the study found, it disengages when we actually start to utter word sequences.

"Broca's area shuts down during the actual delivery of speech, but it may remain active during conversation as part of planning future words and full sentences," Flinker said.

The study tracked electrical signals emitted from the brains of seven hospitalized epilepsy patients as they repeated spoken and written words aloud. Researchers followed that brain activity -- using event-related causality technology -- from the auditory cortex, where the patients processed the words they heard, to Broca's area, where they prepared to articulate the words to repeat, to the motor cortex, where they finally spoke the words out loud.

In addition to Flinker, other co-authors and researchers on the study are Robert Knight and Avgusta Shestyuk at the Helen Wills Neuroscience Institute at UC Berkeley, Nina Dronkers at the Center for Aphasia and Related Disorders at the Veterans Affairs Northern California Health Care System, and Anna Korzeniewska, Piotr Franaszczuk and Nathan Crone at Johns Hopkins School of Medicine.



Hubble gets best view of a circumstellar debris disk distorted by a planet

The photo at the bottom is the most detailed picture to date of a large, edge-on, gas-and-dust disk encircling the 20-million-year-old star Beta Pictoris. The new visible-light Hubble image traces the disk to within about 650 million miles of the star (inside the radius of Saturn's orbit about the Sun). When comparing the latest images to Hubble images taken in 1997 (top), astronomers find that the disk's dust distribution has barely changed over 15 years, despite the fact that the entire structure is orbiting the star like a carousel. The Hubble Space Telescope photo has been artificially colored to bring out detail in the disk's structure.
Astronomers have used NASA's Hubble Space Telescope to take the most detailed picture to date of a large, edge-on, gas-and-dust disk encircling the 20-million-year-old star Beta Pictoris.

Beta Pictoris remains the only directly imaged debris disk that has a giant planet (discovered in 2009). Because the orbital period is comparatively short (estimated to be between 18 and 22 years), astronomers can see large motion in just a few years. This allows scientists to study how the Beta Pictoris disk is distorted by the presence of a massive planet embedded within the disk.

The new visible-light Hubble image traces the disk to within about 650 million miles of the star (inside the radius of Saturn's orbit about the Sun).

"Some computer simulations predicted a complicated structure for the inner disk due to the gravitational pull by the short-period giant planet. The new images reveal the inner disk and confirm the predicted structures. This finding validates models, which will help us to deduce the presence of other exoplanets in other disks," said Daniel Apai of the University of Arizona. The gas-giant planet in the Beta Pictoris system was directly imaged in infrared light by the European Southern Observatory's Very Large Telescope six years ago.

When comparing the latest Hubble images to images taken in 1997, astronomers find that the disk's dust distribution has barely changed over 15 years, despite the fact that the entire structure is orbiting the star like a carousel. This means the disk's structure is smoothly continuous in the direction of its rotation, at least on timescales comparable to the accompanying planet's orbital period.
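
The orbital timescales quoted here can be sanity-checked with Kepler's third law, which in solar units reads P[yr]² = a[AU]³ / M[M☉]. A quick sketch, assuming ~650 million miles is about 7 AU and taking Beta Pictoris' mass as roughly 1.75 solar masses (an assumed, illustrative value):

```python
import math

def orbital_period_years(a_au, m_star_solar):
    """Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]."""
    return math.sqrt(a_au ** 3 / m_star_solar)

# ~650 million miles is about 7 AU; Beta Pictoris' mass is taken here
# as ~1.75 solar masses (assumed for illustration).
p_years = orbital_period_years(7.0, 1.75)
print(f"Orbital period near the inner disk edge: ~{p_years:.0f} years")  # ~14
```

A period of roughly 14 years at the inner disk edge is comparable to the planet's estimated 18-22-year period, consistent with the claim that appreciable motion is visible within just a few years.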

In 1984 Beta Pictoris was the very first star discovered to host a bright disk of light-scattering circumstellar dust and debris. Ever since then Beta Pictoris has been an object of intensive scrutiny with Hubble and with ground-based telescopes. Hubble spectroscopic observations in 1991 found evidence for extrasolar comets frequently falling into the star.

The disk is easily seen because it is tilted edge-on and is especially bright due to a very large amount of starlight-scattering dust. What's more, Beta Pictoris is closer to Earth (63 light-years) than most of the other known disk systems.

Though nearly all of the approximately two-dozen known light-scattering circumstellar disks have been viewed by Hubble to date, Beta Pictoris is the first and best example of what a young planetary system looks like, say researchers.

One thing astronomers have recently learned about circumstellar debris disks is that their structure, and amount of dust, is incredibly diverse and may be related to the locations and masses of planets in those systems. "The Beta Pictoris disk is the prototype for circumstellar debris systems, but it may not be a good archetype," said co-author Glenn Schneider of the University of Arizona.

For one thing the Beta Pictoris disk is exceptionally dusty. This may be due to recent major collisions among unseen planetary-sized and asteroid-sized bodies embedded within it. In particular, a bright lobe of dust and gas on the southwestern side of the disk may be the result of the pulverization of a Mars-sized body in a giant collision.

Both the 1997 and 2012 images were taken in visible light with Hubble's Space Telescope Imaging Spectrograph in its coronagraphic imaging mode. A coronagraph blocks out the glare of the central star so that the disk can be seen.


For the first time, spacecraft catch solar shockwave in the act: 'Ultrarelativistic, killer electrons' made in 60 seconds

Earth's magnetosphere is depicted with the high-energy particles of the Van Allen radiation belts (shown in red) and various processes responsible for accelerating these particles to relativistic energies indicated. The effects of an interplanetary shock penetrate deep into this system, energizing electrons to ultra-relativistic energies in a matter of seconds.
On Oct. 8, 2013, an explosion on the sun's surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth's magnetic field, setting off a magnetized sound pulse around the planet.

NASA's Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth's magnetic field, captured the effects of the solar shockwave just before and after it struck.

Now scientists at MIT's Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes' data, and observed a sudden and dramatic effect in the shockwave's aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth's radiation belts, accelerating certain particles to ultrahigh energies.

"These are very lightweight particles, but they are ultrarelativistic, killer electrons -- electrons that can go right through a satellite," says John Foster, associate director of MIT's Haystack Observatory. "These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it's exciting: We see something that, in terms of the radiation belt, is really quick."

The findings represent the first time the effects of a solar shockwave on Earth's radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

Catching a shockwave in the act

Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes' mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons -- particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.
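
The quoted drift speed and period imply the size of the region these electrons circulate in, which can be checked with simple circular-motion arithmetic (the Earth-radius value below is an assumed reference figure, not from the article):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed reference value)

# Figures quoted in the article: ~1,000 km/s drift speed, ~5-minute orbit.
speed_km_s = 1000.0
period_s = 5 * 60

circumference_km = speed_km_s * period_s      # distance covered in one lap
radius_km = circumference_km / (2 * math.pi)  # implied circular-orbit radius
print(f"Implied orbital radius: ~{radius_km / EARTH_RADIUS_KM:.1f} Earth radii")
```

The implied radius of roughly 7.5 Earth radii is broadly consistent with the outer regions of the Van Allen belts, where these drifting electrons reside.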

The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth's magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave's aftermath.

Dealing a "sledgehammer blow"

Foster and his colleagues analyzed the probes' data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck "a sledgehammer blow" to the protective barrier of the Earth's magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse -- a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.

Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles' velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed "drift resonant," and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

Foster says solar shockwaves can impact Earth's radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

"This was a relatively small shock. We know they can be much, much bigger," Foster says. "Interactions between solar activity and Earth's magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics."


Friday

First glimpse of a chemical bond being born

This illustration shows atoms forming a tentative bond, a moment captured for the first time in experiments with an X-ray laser at SLAC National Accelerator Laboratory. The reactants are a carbon monoxide molecule, left, made of a carbon atom (black) and an oxygen atom (red), and a single atom of oxygen, just to the right of it. They are attached to the surface of a ruthenium catalyst, which holds them close to each other so they can react more easily. When hit with an optical laser pulse, the reactants vibrate and bump into each other, and the carbon atom forms a transitional bond with the lone oxygen, center. The resulting carbon dioxide molecule detaches and floats away, upper right. The Linac Coherent Light Source (LCLS) X-ray laser probed the reaction as it proceeded and allowed the movie to be created.
Scientists have used an X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory to get the first glimpse of the transition state where two atoms begin to form a weak bond on the way to becoming a molecule.

This fundamental advance, reported Feb. 12 in Science Express and long thought impossible, will have a profound impact on the understanding of how chemical reactions take place and on efforts to design reactions that generate energy, create new products and fertilize crops more efficiently.

"This is the very core of all chemistry. It's what we consider a Holy Grail, because it controls chemical reactivity," said Anders Nilsson, a professor at the SLAC/Stanford SUNCAT Center for Interface Science and Catalysis and at Stockholm University who led the research. "But because so few molecules inhabit this transition state at any given moment, no one thought we'd ever be able to see it."

Bright, Fast Laser Pulses Achieve the Impossible

The experiments took place at SLAC's Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. Its brilliant, strobe-like X-ray laser pulses are short enough to illuminate atoms and molecules and fast enough to watch chemical reactions unfold in a way never possible before.

Researchers used LCLS to study the same reaction that neutralizes carbon monoxide (CO) from car exhaust in a catalytic converter. The reaction takes place on the surface of a catalyst, which grabs CO and oxygen atoms and holds them next to each other so they pair up more easily to form carbon dioxide.

In the SLAC experiments, researchers attached CO and oxygen atoms to the surface of a ruthenium catalyst and got reactions going with a pulse from an optical laser. The pulse heated the catalyst to 2,000 kelvins -- more than 3,000 degrees Fahrenheit -- and set the attached chemicals vibrating, greatly increasing the chance that they would knock into each other and connect.

The team was able to observe this process with X-ray laser pulses from LCLS, which detected changes in the arrangement of the atoms' electrons -- subtle signs of bond formation -- that occurred in mere femtoseconds, or quadrillionths of a second.

"First the oxygen atoms get activated, and a little later the carbon monoxide gets activated," Nilsson said. "They start to vibrate, move around a little bit. Then, after about a trillionth of a second, they start to collide and form these transition states."

'Rolling Marbles Uphill'

The researchers were surprised to see so many of the reactants enter the transition state -- and equally surprised to discover that only a small fraction of them go on to form stable carbon dioxide. The rest break apart again.

"It's as if you are rolling marbles up a hill, and most of the marbles that make it to the top roll back down again," Nilsson said. "What we are seeing is that many attempts are made, but very few reactions continue to the final product. We have a lot to do to understand in detail what we have seen here."

Theory played a key role in the experiments, allowing the team to predict what would happen and get a good idea of what to look for. "This is a super-interesting avenue for theoretical chemists. It's going to open up a completely new field," said report co-author Frank Abild-Pedersen of SLAC and SUNCAT.

A team led by Associate Professor Henrik Öström at Stockholm University did initial studies of how to trigger the reactions with the optical laser. Theoretical spectra were computed under the leadership of Stockholm Professor Lars G.M. Pettersson, a longtime collaborator with Nilsson.

Preliminary experiments at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL), another DOE Office of Science User Facility, also proved crucial. Led by SSRL's Hirohito Ogasawara and SUNCAT's Jerry LaRue, they measured the characteristics of the chemical reactants with an intense X-ray beam so researchers would be sure to identify everything correctly at the LCLS, where beam time is much more scarce. "Without SSRL this would not have worked," Nilsson said.

The team is already starting to measure transition states in other catalytic reactions that generate chemicals important to industry.

"This is extremely important, as it provides insight into the scientific basis for rules that allow us to design new catalysts," said SUNCAT Director and co-author Jens Nørskov.


Application of laser microprobe technology to Apollo samples refines lunar impact history

This is a photomicrograph of a petrographic thin section of a piece of a coherent, crystalline impact melt breccia collected from landslide material at the base of the South Massif, Apollo 17 (sample 73217, 84). Different mineral and lithic clasts, as well as impact melt phases are evident. Determining the ages of different melt components in such a complex rock requires carefully focused analyses within context of spatial and petrographic information such as this. In their article published in the Feb. 12 issue of Science Advances, Mercer et al. used the laser microprobe 40Ar/39Ar technique to investigate age relationships of three of the distinct generations of impact melt shown in this image.
It's been more than 40 years since astronauts returned the last Apollo samples from the moon, and since then those samples have undergone some of the most extensive and comprehensive analysis of any geological collection. A team led by ASU researchers has now refined the timeline of meteorite impacts on the moon through a pioneering application of laser microprobe technology to Apollo 17 samples.

Impact cratering is the most ubiquitous geologic process affecting the solid surfaces of planetary bodies in the solar system. The moon's scarred surface serves as a record of meteorite bombardment that spans much of solar system history. Developing an absolute chronology of lunar impact events is of particular interest because the moon is an important proxy for understanding the early bombardment history of Earth, which has been largely erased by plate tectonics and erosion, and because we can use the lunar impact record to infer the ages of other cratered surfaces in the inner solar system.

Researchers in ASU's Group 18 Laboratories, headed by Professor Kip Hodges, used an ultraviolet laser microprobe attached to a high-sensitivity mass spectrometer to analyze argon isotopes in samples returned by Apollo 17. While the laser microprobe 40Ar/39Ar technique has been applied to a large number of problems in terrestrial geochronology, including studies of texturally complex samples, this is the first time it has been applied to samples from the Apollo archive.

The samples analyzed by the ASU team are known as lunar impact melt breccias -- mash-ups of glass, rock and crystal fragments that were created by impact events on the moon's surface.

When a meteor strikes another planetary body, the impact produces very large amounts of energy, some of which goes into shock heating and melting the target rocks. These extreme conditions can 'restart the clock' for some mineral-isotopic chronometers, particularly for material melted during impact. As a result, the absolute ages of lunar craters are primarily determined through isotope geochronology of components of the target rocks that were shocked and heated to the point of melting, and which have since solidified.
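
The chronometer involved here is the 40Ar/39Ar system, whose standard age equation is t = (1/λ) ln(1 + J·R), where λ is the total decay constant of 40K, R is the radiogenic 40Ar*/39Ar ratio, and J is a neutron-irradiation parameter determined from a co-irradiated standard. A minimal sketch, with hypothetical J and R values chosen only to illustrate the ~3.8-billion-year scale of these samples:

```python
import math

LAMBDA_K40 = 5.543e-10  # total 40K decay constant, per year (Steiger & Jäger 1977)

def ar_ar_age(J, R):
    """Standard 40Ar/39Ar age equation: t = (1/lambda) * ln(1 + J*R),
    where R is the radiogenic 40Ar*/39Ar ratio and J is the
    neutron-irradiation parameter from a co-irradiated standard."""
    return math.log(1.0 + J * R) / LAMBDA_K40

# Hypothetical J and R values chosen only to land near a ~3.8-Gyr age.
age_yr = ar_ar_age(J=0.01, R=750.0)
print(f"Age: {age_yr / 1e9:.2f} billion years")
```

An impact melt that fully degasses its argon resets R to zero, so the computed age dates the melting event rather than the original rock, which is why these breccias can record the impact history directly.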

However, lunar rocks may have experienced multiple impact events over the course of billions of years of bombardment, potentially complicating attempts to date samples and relate the results to the ages of particular impact structures.

Conventional wisdom holds that the largest impact basins on the moon were responsible for generating the vast majority of impact melts, and therefore that nearly all of the samples dated must be related to the formation of those basins.

While it is true that enormous quantities of impact melt are generated by basin-scale impact events, recent images taken by the Lunar Reconnaissance Orbiter Camera confirm that even small craters, with diameters on the order of 100 meters, can generate impact melts. The team's findings bear directly on this observation. The results are published in the inaugural issue of the American Association for the Advancement of Science's newest journal, Science Advances, on Feb. 12.

"One of the samples we analyzed, 77115, records evidence for only one impact event, which may or may not be related to a basin-forming impact event. In contrast, we found that the other sample, 73217, preserves evidence for at least three impact events occurring over several hundred million years, not all of which can be related to basin-scale impacts," says Cameron Mercer, lead author of the paper and a graduate student in ASU's School of Earth and Space Exploration.

Sample 77115, collected by astronauts Gene Cernan and Harrison Schmitt at Station 7 during their third and final moonwalk, records a single melt-forming event about 3.83 billion years ago. Sample 73217, retrieved at Station 3 during the astronauts' second moonwalk, preserves evidence for at least three distinct impact melt-forming events occurring between 3.81 billion years ago and 3.27 billion years ago. The findings suggest that a single small sample can preserve multiple generations of melt products created by impact events over the course of billions of years.

"Our results emphasize the need for care in how we analyze samples in the context of impact dating, particularly for those samples that appear to have complex, polygenetic origins. This applies to both the samples that we currently have in our lunar and meteoritic collections, as well as samples that we recover during future human and robotic space exploration missions in the inner solar system," says Mercer.


Magnitude of plastic waste going into the ocean calculated: 8 million metric tons of plastic enter the oceans per year

The 192 countries with a coast bordering the Atlantic, Pacific and Indian oceans and the Mediterranean and Black seas produced a total of 2.5 billion metric tons of solid waste. Of that, 275 million metric tons was plastic, and an estimated 8 million metric tons of mismanaged plastic waste entered the ocean in 2010.
A plastic grocery bag cartwheels down the beach until a gust of wind spins it into the ocean. In 192 coastal countries, this scenario plays out over and over again as discarded beverage bottles, food wrappers, toys and other bits of plastic make their way from estuaries, seashores and uncontrolled landfills to settle in the world's seas.

How much mismanaged plastic waste is making its way from land to ocean has been a decades-long guessing game. Now, the University of Georgia's Jenna Jambeck and her colleagues in the National Center for Ecological Analysis and Synthesis working group have put a number on the global problem.

Their study, reported in the Feb. 13 edition of the journal Science, found between 4.8 and 12.7 million metric tons of plastic entered the ocean in 2010 from people living within 50 kilometers of the coastline. That year, a total of 275 million metric tons of plastic waste was generated in those 192 coastal countries.
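
Taken together, these two figures imply the fraction of that year's plastic waste that reached the ocean; a quick back-of-the-envelope check:

```python
# Figures from the study: total plastic waste generated in the 192
# coastal countries in 2010, and the estimated range entering the ocean.
plastic_generated_mt = 275.0                    # million metric tons
to_ocean_low_mt, to_ocean_high_mt = 4.8, 12.7   # million metric tons

low_pct = 100.0 * to_ocean_low_mt / plastic_generated_mt
high_pct = 100.0 * to_ocean_high_mt / plastic_generated_mt
print(f"{low_pct:.1f}%-{high_pct:.1f}% of the plastic waste generated "
      f"reached the ocean")
```

In other words, roughly 2-5 percent of the plastic waste generated in those coastal countries ended up in the sea that year.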

Jambeck, an assistant professor of environmental engineering in the UGA College of Engineering and the study's lead author, explains the amount of plastic moving from land to ocean each year using 8 million metric tons as the midpoint: "Eight million metric tons is the equivalent to finding five grocery bags full of plastic on every foot of coastline in the 192 countries we examined."

To determine the amount of plastic going into the ocean, Jambeck "started it off beautifully with a very grand model of all sources of marine debris," said study co-author Roland Geyer, an associate professor with the University of California, Santa Barbara's Bren School of Environmental Science & Management, who teamed with Jambeck and others to develop the estimates.

They began by looking at all debris entering the ocean from land, sea and other pathways. Their goal was to develop models for each of these sources. After gathering rough estimates, "it fairly quickly emerged that the mismanaged waste and solid waste dispersed was the biggest contributor of all of them," he said. From there, they focused on plastic.

"For the first time, we're estimating the amount of plastic that enters the oceans in a given year," said study co-author Kara Lavender Law, a research professor at the Massachusetts-based Sea Education Association. "Nobody has had a good sense of the size of that problem until now."

The framework the researchers developed isn't limited to calculating plastic inputs into the ocean.

"Jenna created a framework to analyze solid waste streams in countries around the world that can easily be adapted by anyone who is interested," she said. "Plus, it can be used to generate possible solution strategies."

Plastic pollution in the ocean was first reported in the scientific literature in the early 1970s. In the 40 years since, however, there had been no rigorous estimates of the amount and origin of plastic debris making its way into the marine environment -- until Jambeck's current study.

Part of the issue is that plastic is a relatively new problem coupled with a relatively new waste solution. Plastic first appeared on the consumer market in the 1930s and '40s. Waste management didn't start developing its current infrastructure in the U.S., Europe and parts of Asia until the mid-1970s. Prior to that time, trash was dumped in unstructured landfills--Jambeck has vivid memories of growing up in rural Minnesota, dropping her family's garbage off at a small dump and watching bears wander through furniture, tires and debris as they looked for food.

"It is incredible how far we have come in environmental engineering, advancing recycling and waste management systems to protect human health and the environment, in a relatively short amount of time," she said. "However, these protections are unfortunately not available equally throughout the world."

Some of the 192 countries included in the model have no formal waste management systems, Jambeck said. Solid waste management is typically one of the last urban environmental engineering infrastructure components to be addressed during a country's development. Clean water and sewage treatment often come first.

"The human impact from not having clean drinking water is acute, with sewage treatment often coming next," she said. "Those first two needs are addressed before solid waste, because waste doesn't seem to have any immediate threat to humans. And then solid waste piles up in streets and yards and it's the thing that gets forgotten for a while."

As the gross national income increases in these countries, so does the use of plastic. In 2013, the most recent year for which figures are available, global plastic resin production reached 299 million tons, a 647 percent increase over the amount recorded in 1975. Plastic resin is used to make many single-use items like wrappers, beverage bottles and plastic bags.
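As a quick sanity check on those figures, the 1975 production baseline implied by a 647 percent increase can be recovered from the 2013 total. This is a back-of-the-envelope calculation based only on the numbers quoted above, not a figure from the study itself:

```python
# Back-of-the-envelope check of the production figures quoted above.
# A "647 percent increase" means: 2013 production = baseline * (1 + 6.47).
production_2013 = 299.0  # million tons, global plastic resin production

implied_1975_baseline = production_2013 / (1 + 6.47)  # million tons
print(round(implied_1975_baseline, 1))  # roughly 40 million tons

# Verify: growing the implied baseline by 647% recovers the 2013 figure.
assert abs(implied_1975_baseline * 7.47 - production_2013) < 1e-9
```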

With the massive increase in plastic production, the idea that waste can be contained in a landfill of a few acres or dealt with later is no longer viable. That was the mindset before the onslaught of plastic, when most people piled their waste--glass, food scraps, broken pottery--on a corner of their land or burned or buried it. Now, the average American generates about 5 pounds of trash per day, 13 percent of it plastic.

But knowing how much plastic is going into the ocean is just one part of the puzzle, Jambeck said. With between 4.8 and 12.7 million metric tons going in, researchers like Law are only finding between 6,350 and 245,000 metric tons floating on the ocean's surface.

"This paper gives us a sense of just how much we're missing," Law said, "how much we need to find in the ocean to get to the total. Right now, we're mainly collecting numbers on plastic that floats. There is a lot of plastic sitting on the bottom of the ocean and on beaches worldwide."
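The mismatch Law describes can be made concrete with a rough calculation, using only the ranges quoted above (illustrative arithmetic, not a result from the study):

```python
# Rough share of the annual plastic input found floating at the surface,
# using the ranges quoted in the article (values in metric tons).
input_low, input_high = 4.8e6, 12.7e6             # entering the ocean per year
floating_low, floating_high = 6_350.0, 245_000.0  # observed floating at surface

# Most optimistic case: largest floating estimate vs. smallest input.
max_share = floating_high / input_low * 100
# Most pessimistic case: smallest floating estimate vs. largest input.
min_share = floating_low / input_high * 100

print(f"{min_share:.2f}% to {max_share:.1f}%")  # about 0.05% to 5.1%
```

In other words, even under the most generous pairing of estimates, only a few percent of the plastic entering the ocean each year is accounted for at the surface.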

Jambeck forecasts that the cumulative input of plastic to the oceans will reach 155 million metric tons by 2025. The planet is not predicted to reach global "peak waste" before 2100, according to World Bank calculations.

"We're being overwhelmed by our waste," she said. "But our framework allows us to also examine mitigation strategies like improving global solid waste management and reducing plastic in the waste stream. Potential solutions will need to coordinate local and global efforts."


Warming pushes Western U.S. toward driest period in 1,000 years: Unprecedented risk of drought in 21st century

Soil moisture 30 cm below ground projected through 2100 for high emissions scenario RCP 8.5. The soil moisture data are standardized to the Palmer Drought Severity Index and are deviations from the 20th century average.
During the second half of the 21st century, the U.S. Southwest and Great Plains will face persistent drought worse than anything seen in times ancient or modern, with the drying conditions "driven primarily" by human-induced global warming, a new study predicts.

The research says the drying would surpass in severity any of the decades-long "megadroughts" that occurred much earlier during the past 1,000 years -- one of which has been tied by some researchers to the decline of the Anasazi or Ancient Pueblo Peoples in the Colorado Plateau in the late 13th century. Many studies have already predicted that the Southwest could dry due to global warming, but this is the first to say that such drying could exceed the worst conditions of the distant past. The impacts today would be devastating, given the region's much larger population and use of resources.

"We are the first to do this kind of quantitative comparison between the projections and the distant past, and the story is a bit bleak," said Jason E. Smerdon, a co-author and climate scientist at the Lamont-Doherty Earth Observatory, part of the Earth Institute at Columbia University. "Even when selecting for the worst megadrought-dominated period, the 21st century projections make the megadroughts seem like quaint walks through the Garden of Eden."

"The surprising thing to us was really how consistent the response was over these regions, nearly regardless of what model we used or what soil moisture metric we looked at," said lead author Benjamin I. Cook of the NASA Goddard Institute for Space Studies and the Lamont-Doherty Earth Observatory. "It all showed this really, really significant drying."

The new study, "Unprecedented 21st-Century Drought Risk in the American Southwest and Central Plains," will be featured in the inaugural edition of the new online journal Science Advances, produced by the American Association for the Advancement of Science, which also publishes the leading journal Science.

Today, 11 of the past 14 years have been drought years in much of the American West, including California, Nevada, New Mexico and Arizona and across the Southern Plains to Texas and Oklahoma, according to the U.S. Drought Monitor, a collaboration of U.S. government agencies.

The current drought directly affects more than 64 million people in the Southwest and Southern Plains, according to NASA, and many more are indirectly affected because of the impacts on agricultural regions.

Shrinking water supplies have forced western states to impose water use restrictions; aquifers are being drawn down to unsustainable levels, and major surface reservoirs such as Lake Mead and Lake Powell are at historically low levels. This winter's snowpack in the Sierras, a major water source for Los Angeles and other cities, is less than a quarter of what authorities call a "normal" level, according to a February report from the Los Angeles Department of Water and Power. California water officials last year cut off the flow of water from the northern part of the state to the south, forcing farmers in the Central Valley to leave hundreds of thousands of acres unplanted.

"Changes in precipitation, temperature and drought, and the consequences it has for our society -- which is critically dependent on our freshwater resources for food, electricity and industry -- are likely to be the most immediate climate impacts we experience as a result of greenhouse gas emissions," said Kevin Anchukaitis, a climate researcher at the Woods Hole Oceanographic Institution. Anchukaitis said the findings "require us to think rather immediately about how we could and would adapt."

Much of our knowledge about past droughts comes from extensive study of tree rings conducted by Lamont-Doherty scientist Edward Cook (Benjamin's father) and others, who in 2009 created the North American Drought Atlas. The atlas recreates the history of drought over the previous 2,005 years, based on hundreds of tree-ring chronologies, gleaned in turn from tens of thousands of tree samples across the United States, Mexico and parts of Canada.

For the current study, researchers used data from the atlas to represent past climate, and applied three different measures for drought -- two soil moisture measurements at varying depths, and a version of the Palmer Drought Severity Index, which gauges precipitation against evaporation and transpiration -- the net input of water into the land. While some have questioned how accurately the Palmer drought index truly reflects soil moisture, the researchers found that it matched well with other measures, and that it "provides a bridge between the [climate] models and drought in observations," Cook said.
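The actual Palmer index rests on a full two-layer soil water-balance model, but the basic idea of a standardized drought metric -- precipitation minus evaporative demand, expressed as a deviation from the long-term average -- can be sketched roughly as follows. This is an illustrative simplification with hypothetical numbers, not the researchers' method:

```python
import statistics

def simple_drought_index(precip, evap_demand):
    """Toy standardized drought index: z-score of the monthly water
    balance (precipitation minus evaporative demand). Negative values
    indicate drier-than-average conditions. Illustrative only -- the
    real Palmer Drought Severity Index uses a two-layer soil
    water-balance model with memory of prior months."""
    balance = [p - e for p, e in zip(precip, evap_demand)]
    mean = statistics.mean(balance)
    stdev = statistics.stdev(balance)
    return [(b - mean) / stdev for b in balance]

# Hypothetical monthly values (mm): a wet start, then a drying trend.
precip = [80, 75, 60, 40, 25, 15]
evap_demand = [50, 55, 60, 65, 70, 75]
index = simple_drought_index(precip, evap_demand)
print([round(x, 2) for x in index])  # trends increasingly negative
```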

The researchers applied 17 different climate models to analyze the future impact of rising average temperatures on the regions. And, they compared two different global warming scenarios -- one with "business as usual," projecting a continued rise in emissions of the greenhouse gases that contribute to global warming; and a second scenario in which emissions are moderated.

By most of those measures, they came to the same conclusions.

"The results … are extremely unfavorable for the continuation of agricultural and water resource management as they are currently practiced in the Great Plains and southwestern United States," said David Stahle, professor in the Department of Geosciences at the University of Arkansas and director of the Tree-Ring Laboratory there. Stahle was not involved in the study, though he worked on the North American Drought Atlas.

Smerdon said he and his colleagues are confident in their results. The effects of CO2 on higher average temperature and the subsequent connection to drying in the Southwest and Great Plains emerge as a "strong signal" across the majority of the models, regardless of the drought metrics that are used, he said. And, he added, they are consistent with many previous studies.

Anchukaitis said the paper "provides an elegant and convincing connection" between reconstructions of past climate and the models pointing to the risk of future drought.

Toby R. Ault of Cornell University is a co-author of the study. Funding was provided by the NASA Modeling, Analysis and Prediction Program, NASA Strategic Science, and the U.S. National Science Foundation.


Cosmology: First stars were born much later than thought

New maps from ESA's Planck satellite uncover the 'polarised' light from the early Universe across the entire sky, revealing that the first stars formed much later than previously thought.

The history of our Universe is a 13.8 billion-year tale that scientists endeavour to read by studying the planets, asteroids, comets and other objects in our Solar System, and gathering light emitted by distant stars, galaxies and the matter spread between them.

A major source of information used to piece together this story is the Cosmic Microwave Background, or CMB, the fossil light resulting from a time when the Universe was hot and dense, only 380,000 years after the Big Bang.

Thanks to the expansion of the Universe, we see this light today covering the whole sky at microwave wavelengths.
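The stretch from the near-infrared glow at emission to microwaves today follows directly from how much the Universe has expanded since the CMB was released. The numbers below are standard textbook values (not taken from this article), combined via Wien's displacement law:

```python
# Wien's displacement law: peak wavelength of blackbody radiation
# is lambda_peak = b / T. Standard textbook values, for illustration.
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

T_emission = 3000.0  # K, roughly the temperature at recombination
T_today = 2.725      # K, measured CMB temperature today

peak_then = WIEN_B / T_emission  # ~1e-6 m: near-infrared light
peak_now = WIEN_B / T_today      # ~1e-3 m: microwaves

stretch = peak_now / peak_then   # equals T_emission / T_today
print(round(stretch))  # ~1100: expansion factor since the CMB was released
```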

Between 2009 and 2013, Planck surveyed the sky to study this ancient light in unprecedented detail. Tiny differences in the background's temperature trace regions of slightly different density in the early cosmos, representing the seeds of all future structure, the stars and galaxies of today.

Scientists from the Planck collaboration have published the results from the analysis of these data in a large number of scientific papers over the past two years, confirming the standard cosmological picture of our Universe with ever greater accuracy.

"But there is more: the CMB carries additional clues about our cosmic history that are encoded in its 'polarisation'," explains Jan Tauber, ESA's Planck project scientist.

"Planck has measured this signal for the first time at high resolution over the entire sky, producing the unique maps released today."

Light is polarised when it vibrates in a preferred direction, something that may arise as a result of photons -- the particles of light -- bouncing off other particles. This is exactly what happened when the CMB originated in the early Universe.

Initially, photons were trapped in a hot, dense soup of particles that, by the time the Universe was a few seconds old, consisted mainly of electrons, protons and neutrinos. Owing to the high density, electrons and photons collided with one another so frequently that light could not travel any significant distance before bumping into another electron, making the early Universe extremely 'foggy'.

Slowly but surely, as the cosmos expanded and cooled, photons and the other particles grew farther apart, and collisions became less frequent.

This had two consequences: electrons and protons could finally combine and form neutral atoms without them being torn apart again by an incoming photon, and photons had enough room to travel, being no longer trapped in the cosmic fog.

Once freed from the fog, the light was set on its cosmic journey that would take it all the way to the present day, where telescopes like Planck detect it as the CMB. But the light also retains a memory of its last encounter with the electrons, captured in its polarisation.

"The polarisation of the CMB also shows minuscule fluctuations from one place to another across the sky: like the temperature fluctuations, these reflect the state of the cosmos at the time when light and matter parted company," says François Bouchet of the Institut d'Astrophysique de Paris, France.

"This provides a powerful tool to estimate in a new and independent way parameters such as the age of the Universe, its rate of expansion and its essential composition of normal matter, dark matter and dark energy."

Planck's polarisation data confirm the details of the standard cosmological picture determined from its measurement of the CMB temperature fluctuations, but add an important new answer to a fundamental question: when were the first stars born?

"After the CMB was released, the Universe was still very different from the one we live in today, and it took a long time until the first stars were able to form," explains Marco Bersanelli of Università degli Studi di Milano, Italy.

"Planck's observations of the CMB polarisation now tell us that these 'Dark Ages' ended some 550 million years after the Big Bang -- more than 100 million years later than previously thought.

"While these 100 million years may seem negligible compared to the Universe's age of almost 14 billion years, they make a significant difference when it comes to the formation of the first stars."

The Dark Ages ended as the first stars began to shine. And as their light interacted with gas in the Universe, more and more of the atoms were turned back into their constituent particles: electrons and protons.

This key phase in the history of the cosmos is known as the 'epoch of reionisation'.

The newly liberated electrons were once again able to collide with the light from the CMB, albeit much less frequently now that the Universe had significantly expanded. Nevertheless, just as they had 380,000 years after the Big Bang, these encounters between electrons and photons left a tell-tale imprint on the polarisation of the CMB.

"From our measurements of the most distant galaxies and quasars, we know that the process of reionisation was complete by the time that the Universe was about 900 million years old," says George Efstathiou of the University of Cambridge, UK.

"But, at the moment, it is only with the CMB data that we can learn when this process began."

Planck's new results are critical, because previous studies of the CMB polarisation seemed to point towards an earlier dawn of the first stars, placing the beginning of reionisation about 450 million years after the Big Bang.

This posed a problem. Very deep images of the sky from the NASA-ESA Hubble Space Telescope have provided a census of the earliest known galaxies in the Universe, which started forming perhaps 300-400 million years after the Big Bang.

However, these galaxies would not have been powerful enough to end the Dark Ages within 450 million years.

"In that case, we would have needed additional, more exotic sources of energy to explain the history of reionisation," says Professor Efstathiou.

The new evidence from Planck significantly reduces the problem, indicating that reionisation started later than previously believed, and that the earliest stars and galaxies alone might have been enough to drive it.

This later end of the Dark Ages also implies that it might be easier to detect the very first generation of galaxies with the next generation of observatories, including the James Webb Space Telescope.

But the first stars are definitely not the limit. With the new Planck data released today, scientists are also studying the polarisation of foreground emission from gas and dust in the Milky Way to analyse the structure of the Galactic magnetic field.

The data have also enabled new important insights into the early cosmos and its components, including the intriguing dark matter and the elusive neutrinos, as described in papers also released today.

The Planck data have delved into the even earlier history of the cosmos, all the way to inflation -- the brief era of accelerated expansion that the Universe underwent when it was a tiny fraction of a second old. As the ultimate probe of this epoch, astronomers are looking for a signature of gravitational waves triggered by inflation and later imprinted on the polarisation of the CMB.

No direct detection of this signal has yet been achieved, as reported last week. However, when combining the newest all-sky Planck data with those latest results, the limits on the amount of primordial gravitational waves are pushed even further down to achieve the best upper limits yet.

"These are only a few highlights from the scrutiny of Planck's observations of the CMB polarisation, which is revealing the sky and the Universe in a brand new way," says Jan Tauber.

"This is an incredibly rich data set and the harvest of discoveries has just begun."

Series of publications: http://www.cosmos.esa.int/web/planck/publications
