Monday

Diabetes, depression predict dementia risk in people with slowing minds

People with mild cognitive impairment are at higher risk of developing dementia if they have diabetes or psychiatric symptoms such as depression, finds a new review led by UCL researchers.

Mild cognitive impairment (MCI) is a state between normal ageing and dementia, where someone's mind is functioning less well than would be expected for their age. It affects 19% of people aged 65 and over, and around 46% of people with MCI develop dementia within 3 years compared with 3% of the general population.
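As a quick sense of scale, the two progression figures quoted above can be combined into a relative risk. Both numbers are taken directly from the article; the calculation itself is purely illustrative:

```python
# Back-of-envelope relative risk from the figures quoted above:
# ~46% of people with MCI vs. ~3% of the general population
# develop dementia within 3 years.
mci_rate = 0.46
general_rate = 0.03
relative_risk = mci_rate / general_rate
print(f"Relative risk of dementia within 3 years: {relative_risk:.1f}x")
# → roughly 15x
```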

The latest review paper, published in the American Journal of Psychiatry, analysed data from 62 separate studies, following a total of 15,950 people diagnosed with MCI. The study found that among people with MCI, those with diabetes were 65% more likely to progress to dementia and those with psychiatric symptoms were more than twice as likely to develop the condition.

"There are strong links between mental and physical health, so keeping your body healthy can also help to keep your brain working properly," explains lead author Dr Claudia Cooper (UCL Psychiatry). "Lifestyle changes to improve diet and mood might help people with MCI to avoid dementia, and bring many other health benefits. This doesn't necessarily mean that addressing diabetes, psychiatric symptoms and diet will reduce an individual's risk, but our review provides the best evidence to date about what might help."

The Alzheimer's Society charity recommends that people stay socially and physically active to help prevent dementia. Their guidelines also suggest eating a diet high in fruit and vegetables and low in meat and saturated fats, such as the Mediterranean diet.

"Some damage is already done in those with MCI but these results give a good idea about what it makes sense to target to reduce the chance of dementia," says senior author Professor Gill Livingston (UCL Psychiatry). "Randomised controlled trials are now needed."

Professor Alan Thompson, Dean of the UCL Faculty of Brain Sciences, says: "This impressive systematic review and meta-analysis from the Faculty of Brain Sciences' Division of Psychiatry underlines two important messages: firstly, the impact of medical and psychiatric co-morbidities in individuals with mild cognitive impairment, and secondly, the importance and therapeutic potential of early intervention in the prevention of dementia. Confirming these findings and incorporating appropriate preventative strategies could play an important part in lessening the ever-increasing societal burden of dementia in our ageing population."




Ancient and modern cities aren't so different

Despite notable differences in appearance and governance, ancient human settlements function in much the same way as modern cities, according to new findings by researchers at the Santa Fe Institute and the University of Colorado Boulder.

Previous research has shown that as modern cities grow in population, so do their efficiencies and productivity. A city’s population outpaces its development of urban infrastructure, for example, and its production of goods and services outpaces its population. What's more, these patterns exhibit a surprising degree of mathematical regularity and predictability, a phenomenon called "urban scaling."
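The "urban scaling" regularity described above is conventionally written as a power law, Y = c·N^β, where N is a city's population, Y is some output, and β is the scaling exponent (β > 1, superlinear, for socioeconomic outputs; β < 1, sublinear, for infrastructure). A minimal sketch of recovering β by least squares on log-transformed data, using invented city figures rather than anything from the study:

```python
import math

# Hypothetical illustration of urban scaling Y = c * N**beta.
# The "city" populations below are invented; outputs are generated
# from a known exponent so the fit can be checked.
populations = [1e5, 5e5, 1e6, 5e6, 1e7]
beta_true, c_true = 1.15, 0.02
outputs = [c_true * n ** beta_true for n in populations]

# Ordinary least squares on log N vs. log Y recovers the exponent.
x = [math.log(n) for n in populations]
y = [math.log(v) for v in outputs]
k = len(x)
mx, my = sum(x) / k, sum(y) / k
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
        / sum((a - mx) ** 2 for a in x))
print(f"fitted scaling exponent beta = {beta:.2f}")  # → 1.15
```

A β above 1 is what "production of goods and services outpaces its population" means quantitatively: doubling population more than doubles output.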

But has this always been the case?

SFI Professor Luis Bettencourt researches urban dynamics as a lead investigator of SFI's Cities, Scaling, and Sustainability research program. When he gave a talk in 2013 on urban scaling theory, Scott Ortman, now an assistant professor in the Department of Anthropology at CU Boulder and a former Institute Omidyar Fellow, noted that the trends Bettencourt described were not particular to modern times. Their discussion prompted a research project on the effects of city size through history.

To test their ideas, the team examined archaeological data from the Basin of Mexico (what is now Mexico City and nearby regions). In the 1960s — before Mexico City’s population exploded — surveyors examined all its ancient settlements, spanning 2000 years and four cultural eras in pre-contact Mesoamerica.

Using this data, the research team analyzed the dimensions of hundreds of ancient temples and thousands of ancient houses to estimate populations and densities, size and construction rates of monuments and buildings, and intensity of site use.

Their results indicate that the bigger the ancient settlement, the more productive it was.

“It was shocking and unbelievable,” says Ortman. “We were raised on a steady diet telling us that, thanks to capitalism, industrialization, and democracy, the modern world is radically different from worlds of the past. What we found here is that the fundamental drivers of robust socioeconomic patterns in modern cities precede all that.”

Bettencourt adds: “Our results suggest that the general ingredients of productivity and population density in human societies run much deeper and have everything to do with the challenges and opportunities of organizing human social networks.”

Though excited by the results, the researchers see the discovery as just one step in a long process. The team plans to examine settlement patterns from ancient sites in Peru, China, and Europe and study the factors that lead urban systems to emerge, grow, or collapse.



Greenland is melting: The past might tell what the future holds

A team of scientists led by Danish geologist Nicolaj Krog Larsen has quantified how the Greenland Ice Sheet reacted to a warm period 8,000-5,000 years ago, when temperatures were 2-4 degrees C warmer than at present. Their results have just been published in the scientific journal Geology, and are important as we are rapidly closing in on similar temperatures.

While the world is preparing for a rising global sea level, a group of scientists led by Dr. Nicolaj Krog Larsen of Aarhus University in Denmark and Professor Kurt Kjær of the Natural History Museum of Denmark ventured to Greenland to investigate how fast the Greenland Ice Sheet reacted to past warming.

With hard work and high spirits, the scientists spent six summers coring lakes in the ice-free land surrounding the ice sheet. The lakes act as a valuable archive because they store glacial meltwater sediments during periods when the ice advanced. That makes it possible to study and precisely date periods when the ice was smaller than at present.

"It has been hard work getting all these lake cores home, but it has definitely been worth the effort. Finally we are able to describe the ice sheet's response to earlier warm periods," says Dr. Nicolaj Krog Larsen of Aarhus University, Denmark.

Evidence has disappeared

The size of the Greenland Ice Sheet has varied since the Ice Age ended 11,500 years ago, and scientists have long sought to investigate its response to the warmest period, 8,000-5,000 years ago, when temperatures were 2-4 °C warmer than at present.

"The glaciers always leave evidence about their presence in the landscape. So far the problem has just been that the evidence is removed by new glacial advances. That is why it is unique that we are now able to quantify the mass loss during past warming by combining the lake sediment records with state-of-the-art modelling," says Professor Kurt Kjær, Natural History Museum of Denmark.

16 cm of global sea-level rise from Greenland

Their results show that the ice had its smallest extent precisely during the warming 8,000-5,000 years ago. With that knowledge in hand, the team was able to review all available ice sheet models and choose the ones that best reproduced the reality of the past warming.

The best models show that during this period the ice sheet was losing mass at a rate of 100 gigatons per year for several thousand years, and delivered the equivalent of 16 cm of global sea-level rise when temperatures were 2-4 °C warmer. For comparison, the mass loss over the last 25 years has varied between 0 and 400 gigatons per year, and the Arctic is expected to warm by 2-7 °C by the year 2100.
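As a rough cross-check, the quoted mass-loss rate can be converted into sea-level terms using the commonly cited figure of about 362 gigatons of ice per millimetre of global sea-level rise. That conversion factor is an outside assumption of this sketch, not a number from the article:

```python
# Commonly cited conversion: ~362 Gt of ice ≈ 1 mm of global sea-level
# rise (an assumption of this sketch, not from the article).
GT_PER_MM = 362.0

rate_gt_per_yr = 100.0                       # mass-loss rate quoted above
rate_mm_per_yr = rate_gt_per_yr / GT_PER_MM  # annual sea-level contribution
total_mm = 160.0                             # 16 cm quoted total
total_gt = total_mm * GT_PER_MM              # ice mass behind that total

print(f"100 Gt/yr   ~ {rate_mm_per_yr:.2f} mm of sea level per year")
print(f"16 cm total ~ {total_gt:,.0f} Gt of ice")
```

So a sustained 100 Gt/yr corresponds to roughly a quarter of a millimetre of sea-level rise per year, and the 16 cm total implies a cumulative loss on the order of 58,000 Gt.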



Sunday

Key indicator for successful treatment of infertile couples

Couples have choices in infertility treatments. A recent finding by Marlene Goldman, MS, ScD of the Geisel School of Medicine at Dartmouth and colleagues, published in Fertility and Sterility, gives doctors and couples a new tool to determine which technique may be more effective for their situation.

"As a woman approaches menopause, her level of follicle stimulating hormone (FSH) rises," explained Goldman. "A higher FSH level is a key indicator that the woman may not be as fertile as necessary to conceive using certain common methods of infertility treatment."

The study determined whether FSH and estrogen levels at the upper limits of normal, as measured on day three of the menstrual cycle, could predict treatment success as measured by live birth rates. The essential question was: should women with higher levels of FSH and estrogen be "fast-tracked" to in vitro fertilization (IVF), bypassing the conventional treatment trajectory?

Goldman and collaborators recorded no live births in the group with FSH and estrogen at the upper limits of normal, yet when the couples later pursued IVF, 33% were able to have babies.

"Some women express a preference to begin treatment for infertility with controlled ovarian hyper-stimulation (COH), whether by pill or injection, along with intrauterine insemination (IUI)," said Goldman. "When counseling women with day-three testing for FSH or estrogen at the upper limits of normal, it may be helpful for them to know that COH-IUI has not been successful in others with similar levels. Fortunately, IVF is a successful treatment for many women and if we can 'fast-track' them to IVF, bypassing COH-IUI, treatments will be quicker and may be less expensive."

Insurance companies may use FSH levels to determine if they will continue payments for future treatment cycles in women with high levels.

The next steps for Goldman include probing what makes IVF so successful and how to keep the success rate while reducing costs.

Marlene Goldman, MS, ScD, is Professor of Obstetrics & Gynecology, and of Community & Family Medicine at Dartmouth's Geisel School of Medicine. Her work in cancer is facilitated by Dartmouth's Norris Cotton Cancer Center in Lebanon, NH. She is the Vice-chair for Research in the Department of Obstetrics & Gynecology at Dartmouth-Hitchcock Medical Center. This work was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, grants R01 HD38561 and R01 HD44547. Her collaborators included Richard Reindollar, MD (PI); Daniel J. Kaser, MD (first author); June L. Fung, PhD, all from Dartmouth; and Michael M. Alper, MD, from Boston IVF.




Mars exploration: NASA's MAVEN spacecraft completes first deep dip campaign

This image shows an artist's concept of NASA's Mars Atmosphere and Volatile Evolution (MAVEN) mission.
NASA's Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft has completed the first of five deep-dip maneuvers designed to gather measurements closer to the lower end of the Martian upper atmosphere.

"During normal science mapping, we make measurements between an altitude of about 150 km and 6,200 km (93 miles and 3,853 miles) above the surface," said Bruce Jakosky, MAVEN principal investigator at the University of Colorado's Laboratory for Atmospheric and Space Physics in Boulder. "During the deep-dip campaigns, we lower the lowest altitude in the orbit, known as periapsis, to about 125 km (78 miles) which allows us to take measurements throughout the entire upper atmosphere."

The 25 km (16 miles) altitude difference may not seem like much, but it allows scientists to make measurements down to the top of the lower atmosphere. At these lower altitudes, the atmospheric densities are more than ten times what they are at 150 km (93 miles).
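The density contrast quoted above puts a rough upper bound on the atmospheric scale height near MAVEN's periapsis. Assuming an isothermal exponential atmosphere, ρ(z) = ρ₀·exp(−z/H), which is a simplifying assumption of this sketch rather than a claim from the mission:

```python
import math

# A density ratio of at least 10x over a 25 km altitude drop implies
# an exponential scale height H of at most 25 / ln(10) km, under the
# isothermal assumption rho(z) = rho0 * exp(-z / H).
delta_z_km = 150.0 - 125.0   # altitude difference quoted above
density_ratio = 10.0         # "more than ten times", per the article
H_max = delta_z_km / math.log(density_ratio)
print(f"implied scale height H <= {H_max:.1f} km")  # → about 10.9 km
```

A scale height around 10 km is the kind of value one expects for the Martian upper atmosphere, so the quoted figures are internally consistent.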

"We are interested in the connections that run from the lower atmosphere to the upper atmosphere and then to escape to space," said Jakosky. "We are measuring all of the relevant regions and the connections between them."

The first deep dip campaign ran from Feb. 10 to 18. The first three days of this campaign were used to lower the periapsis. Each of the five campaigns lasts for five days allowing the spacecraft to observe for roughly 20 orbits. Since the planet rotates under the spacecraft, the 20 orbits allow sampling of different longitudes spaced around the planet, providing close to global coverage.

This month's deep-dip maneuvers began when team engineers fired the rocket motors in three separate burns to lower the periapsis. Rather than perform one big burn, which risked taking the spacecraft too deep into the atmosphere, they "walked" it down gently in several smaller steps.

"Although we changed the altitude of the spacecraft, we actually aimed at a certain atmospheric density," said Jakosky. "We wanted to go as deep as we can without putting the spacecraft or instruments at risk."

Even though the atmosphere at these altitudes is very tenuous, it is thick enough to cause noticeable drag on the spacecraft. Flying at too high an atmospheric density would cause excessive drag and frictional heating that could damage the spacecraft and its instruments.

At the end of the campaign, two maneuvers were conducted to return MAVEN to normal science operation altitudes. Science data returned from the deep dip will be analyzed over the coming weeks. The science team will combine the results with what the spacecraft has seen during its regular mapping to get a better picture of the entire atmosphere and of the processes affecting it.

One of the major goals of the MAVEN mission is to understand how gas from the atmosphere escapes to space, and how this has affected the planet's climate history through time. In being lost to space, gas is removed from the top of the upper atmosphere. But it is the thicker lower atmosphere that controls the climate. MAVEN is studying the entire region from the top of the upper atmosphere all the way down to the lower atmosphere so that the connections between these regions can be understood.


Brain's iconic seat of speech goes silent when we actually talk

New findings will better help map out the brain's speech regions.
For 150 years, the iconic Broca's area of the brain has been recognized as the command center for human speech, including vocalization. Now, scientists at UC Berkeley and Johns Hopkins University in Maryland are challenging this long-held assumption with new evidence that Broca's area actually switches off when we talk out loud.

The findings, reported in the journal Proceedings of the National Academy of Sciences, provide a more complex picture of the frontal brain regions involved in speech production than previously thought. The discovery has major implications for the diagnosis and treatment of stroke, epilepsy and brain injuries that result in language impairments.

"Every year millions of people suffer from stroke, some of which can lead to severe impairments in perceiving and producing language when critical brain areas are damaged," said study lead author Adeen Flinker, a postdoctoral researcher at New York University who conducted the study as a UC Berkeley Ph.D. student. "Our results could help us advance language mapping during neurosurgery as well as the assessment of language impairments."

Flinker said that neuroscientists traditionally organized the brain's language center into two main regions: one for perceiving speech and one for producing speech.

"That belief drives how we map out language during neurosurgery and classify language impairments," he said. "This new finding helps us move towards a less dichotomous view where Broca's area is not a center for speech production, but rather a critical area for integrating and coordinating information across other brain regions."

In the 1860s, French physician Pierre Paul Broca pinpointed this prefrontal brain region as the seat of speech. Broca's area has since ranked among the brain's most closely examined language regions in cognitive psychology. People with Broca's aphasia are characterized as having suffered damage to the brain's frontal lobe and tend to speak in short, stilted phrases that often omit short connecting words such as "the" and "and."

Specifically, Flinker and fellow researchers have found that Broca's area -- which is located in the frontal cortex above and behind the left eye -- engages with the brain's temporal cortex, which organizes sensory input, and later the motor cortex, as we process language and plan which sounds and movements of the mouth to use, and in what order. However, the study found, it disengages when we actually start to utter word sequences.

"Broca's area shuts down during the actual delivery of speech, but it may remain active during conversation as part of planning future words and full sentences," Flinker said.

The study tracked electrical signals emitted from the brains of seven hospitalized epilepsy patients as they repeated spoken and written words aloud. Researchers followed that brain activity -- using event-related causality technology -- from the auditory cortex, where the patients processed the words they heard, to Broca's area, where they prepared to articulate the words to repeat, to the motor cortex, where they finally spoke the words out loud.

In addition to Flinker, other co-authors and researchers on the study are Robert Knight and Avgusta Shestyuk at the Helen Wills Neuroscience Institute at UC Berkeley, Nina Dronkers at the Center for Aphasia and Related Disorders at the Veterans Affairs Northern California Health Care System, and Anna Korzeniewska, Piotr Franaszczuk and Nathan Crone at Johns Hopkins School of Medicine.



Hubble gets best view of a circumstellar debris disk distorted by a planet

The photo at the bottom is the most detailed picture to date of a large, edge-on, gas-and-dust disk encircling the 20-million-year-old star Beta Pictoris. The new visible-light Hubble image traces the disk to within about 650 million miles of the star (inside the radius of Saturn's orbit around the Sun). When comparing the latest images to Hubble images taken in 1997 (top), astronomers find that the disk's dust distribution has barely changed over 15 years, despite the fact that the entire structure is orbiting the star like a carousel. The Hubble Space Telescope photo has been artificially colored to bring out detail in the disk's structure.
Astronomers have used NASA's Hubble Space Telescope to take the most detailed picture to date of a large, edge-on, gas-and-dust disk encircling the 20-million-year-old star Beta Pictoris.

Beta Pictoris remains the only directly imaged debris disk that has a giant planet (discovered in 2009). Because the orbital period is comparatively short (estimated to be between 18 and 22 years), astronomers can see large motion in just a few years. This allows scientists to study how the Beta Pictoris disk is distorted by the presence of a massive planet embedded within the disk.

The new visible-light Hubble image traces the disk to within about 650 million miles of the star, which is inside the radius of Saturn's orbit around the Sun.

"Some computer simulations predicted a complicated structure for the inner disk due to the gravitational pull by the short-period giant planet. The new images reveal the inner disk and confirm the predicted structures. This finding validates models, which will help us to deduce the presence of other exoplanets in other disks," said Daniel Apai of the University of Arizona. The gas-giant planet in the Beta Pictoris system was directly imaged in infrared light by the European Southern Observatory's Very Large Telescope six years ago.

When comparing the latest Hubble images to Hubble images taken in 1997, astronomers find that the disk's dust distribution has barely changed over 15 years despite the fact that the entire structure is orbiting the star like a carousel. This means the disk's structure is smoothly continuous in the direction of its rotation on the timescale, roughly, of the accompanying planet's orbital period.

In 1984 Beta Pictoris was the very first star discovered to host a bright disk of light-scattering circumstellar dust and debris. Ever since then Beta Pictoris has been an object of intensive scrutiny with Hubble and with ground-based telescopes. Hubble spectroscopic observations in 1991 found evidence for extrasolar comets frequently falling into the star.

The disk is easily seen because it is tilted edge-on and is especially bright due to a very large amount of starlight-scattering dust. What's more, Beta Pictoris is closer to Earth (63 light-years) than most of the other known disk systems.

Though nearly all of the approximately two dozen known light-scattering circumstellar disks have been viewed by Hubble to date, Beta Pictoris is the first and best example of what a young planetary system looks like, researchers say.

One thing astronomers have recently learned about circumstellar debris disks is that their structure, and amount of dust, is incredibly diverse and may be related to the locations and masses of planets in those systems. "The Beta Pictoris disk is the prototype for circumstellar debris systems, but it may not be a good archetype," said co-author Glenn Schneider of the University of Arizona.

For one thing the Beta Pictoris disk is exceptionally dusty. This may be due to recent major collisions among unseen planetary-sized and asteroid-sized bodies embedded within it. In particular, a bright lobe of dust and gas on the southwestern side of the disk may be the result of the pulverization of a Mars-sized body in a giant collision.

Both the 1997 and 2012 images were taken in visible light with Hubble's Space Telescope Imaging Spectrograph in its coronagraphic imaging mode. A coronagraph blocks out the glare of the central star so that the disk can be seen.


For the first time, spacecraft catch solar shockwave in the act: 'Ultrarelativistic, killer electrons' made in 60 seconds

Earth's magnetosphere is depicted with the high-energy particles of the Van Allen radiation belts (shown in red) and various processes responsible for accelerating these particles to relativistic energies indicated. The effects of an interplanetary shock penetrate deep into this system, energizing electrons to ultra-relativistic energies in a matter of seconds.
On Oct. 8, 2013, an explosion on the sun's surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth's magnetic field, setting off a magnetized sound pulse around the planet.

NASA's Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth's magnetic field, captured the effects of the solar shockwave just before and after it struck.

Now scientists at MIT's Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes' data, and observed a sudden and dramatic effect in the shockwave's aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth's radiation belts, accelerating certain particles to ultrahigh energies.

"These are very lightweight particles, but they are ultrarelativistic, killer electrons -- electrons that can go right through a satellite," says John Foster, associate director of MIT's Haystack Observatory. "These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it's exciting: We see something that, in terms of the radiation belt, is really quick."

The findings represent the first time the effects of a solar shockwave on Earth's radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

Catching a shockwave in the act

Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes' mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons -- particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.
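The quoted speed and circulation period can be turned into a quick geometric sanity check. The Earth-radius value used below is a standard reference figure, not a number from the article:

```python
import math

# Geometry implied by the figures quoted above: electrons drifting
# around the Earth at 1,000 km/s with a five-minute circulation period.
speed_km_s = 1_000.0
period_s = 5 * 60
circumference_km = speed_km_s * period_s         # 300,000 km
radius_km = circumference_km / (2 * math.pi)
earth_radii = radius_km / 6_371.0                # mean Earth radius, km
print(f"implied drift-orbit radius: {radius_km:,.0f} km "
      f"(~{earth_radii:.1f} Earth radii)")
```

Taken at face value, the quoted numbers imply a drift orbit of roughly 7.5 Earth radii; both figures in the article are clearly order-of-magnitude characterizations of the outer radiation belt rather than precise orbital parameters.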

The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth's magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave's aftermath.

Dealing a "sledgehammer blow"

Foster and his colleagues analyzed the probes' data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck "a sledgehammer blow" to the protective barrier of the Earth's magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse -- a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.

Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles' velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed "drift resonant," and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

Foster says solar shockwaves can impact Earth's radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

"This was a relatively small shock. We know they can be much, much bigger," Foster says. "Interactions between solar activity and Earth's magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics."


Friday

First glimpse of a chemical bond being born

This illustration shows atoms forming a tentative bond, a moment captured for the first time in experiments with an X-ray laser at SLAC National Accelerator Laboratory. The reactants are a carbon monoxide molecule, left, made of a carbon atom (black) and an oxygen atom (red), and a single atom of oxygen, just to the right of it. They are attached to the surface of a ruthenium catalyst, which holds them close to each other so they can react more easily. When hit with an optical laser pulse, the reactants vibrate and bump into each other, and the carbon atom forms a transitional bond with the lone oxygen, center. The resulting carbon dioxide molecule detaches and floats away, upper right. The Linac Coherent Light Source (LCLS) X-ray laser probed the reaction as it proceeded and allowed the movie to be created.
Scientists have used an X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory to get the first glimpse of the transition state where two atoms begin to form a weak bond on the way to becoming a molecule.

This fundamental advance, reported Feb. 12 in Science Express and long thought impossible, will have a profound impact on the understanding of how chemical reactions take place and on efforts to design reactions that generate energy, create new products and fertilize crops more efficiently.

"This is the very core of all chemistry. It's what we consider a Holy Grail, because it controls chemical reactivity," said Anders Nilsson, a professor at the SLAC/Stanford SUNCAT Center for Interface Science and Catalysis and at Stockholm University who led the research. "But because so few molecules inhabit this transition state at any given moment, no one thought we'd ever be able to see it."

Bright, Fast Laser Pulses Achieve the Impossible

The experiments took place at SLAC's Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. Its brilliant, strobe-like X-ray laser pulses are short enough to illuminate atoms and molecules and fast enough to watch chemical reactions unfold in a way never possible before.

Researchers used LCLS to study the same reaction that neutralizes carbon monoxide (CO) from car exhaust in a catalytic converter. The reaction takes place on the surface of a catalyst, which grabs CO and oxygen atoms and holds them next to each other so they pair up more easily to form carbon dioxide.

In the SLAC experiments, researchers attached CO and oxygen atoms to the surface of a ruthenium catalyst and got reactions going with a pulse from an optical laser. The pulse heated the catalyst to 2,000 kelvins -- more than 3,000 degrees Fahrenheit -- and set the attached chemicals vibrating, greatly increasing the chance that they would knock into each other and connect.

The team was able to observe this process with X-ray laser pulses from LCLS, which detected changes in the arrangement of the atoms' electrons -- subtle signs of bond formation -- that occurred in mere femtoseconds, or quadrillionths of a second.

"First the oxygen atoms get activated, and a little later the carbon monoxide gets activated," Nilsson said. "They start to vibrate, move around a little bit. Then, after about a trillionth of a second, they start to collide and form these transition states."

'Rolling Marbles Uphill'

The researchers were surprised to see so many of the reactants enter the transition state -- and equally surprised to discover that only a small fraction of them go on to form stable carbon dioxide. The rest break apart again.

"It's as if you are rolling marbles up a hill, and most of the marbles that make it to the top roll back down again," Nilsson said. "What we are seeing is that many attempts are made, but very few reactions continue to the final product. We have a lot to do to understand in detail what we have seen here."

Theory played a key role in the experiments, allowing the team to predict what would happen and get a good idea of what to look for. "This is a super-interesting avenue for theoretical chemists. It's going to open up a completely new field," said report co-author Frank Abild-Pedersen of SLAC and SUNCAT.

A team led by Associate Professor Henrik Öström at Stockholm University did initial studies of how to trigger the reactions with the optical laser. Theoretical spectra were computed under the leadership of Stockholm Professor Lars G.M. Pettersson, a longtime collaborator with Nilsson.

Preliminary experiments at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL), another DOE Office of Science User Facility, also proved crucial. Led by SSRL's Hirohito Ogasawara and SUNCAT's Jerry LaRue, they measured the characteristics of the chemical reactants with an intense X-ray beam so researchers would be sure to identify everything correctly at the LCLS, where beam time is much more scarce. "Without SSRL this would not have worked," Nilsson said.

The team is already starting to measure transition states in other catalytic reactions that generate chemicals important to industry.

"This is extremely important, as it provides insight into the scientific basis for rules that allow us to design new catalysts," said SUNCAT Director and co-author Jens Nørskov.


Application of laser microprobe technology to Apollo samples refines lunar impact history

This is a photomicrograph of a petrographic thin section of a piece of a coherent, crystalline impact melt breccia collected from landslide material at the base of the South Massif, Apollo 17 (sample 73217, 84). Different mineral and lithic clasts, as well as impact melt phases, are evident. Determining the ages of different melt components in such a complex rock requires carefully focused analyses within the context of spatial and petrographic information such as this. In their article published in the Feb. 12 issue of Science Advances, Mercer et al. used the laser microprobe 40Ar/39Ar technique to investigate age relationships of three of the distinct generations of impact melt shown in this image.
It's been more than 40 years since astronauts returned the last Apollo samples from the moon, and since then those samples have undergone some of the most extensive and comprehensive analysis of any geological collection. A team led by ASU researchers has now refined the timeline of meteorite impacts on the moon through a pioneering application of laser microprobe technology to Apollo 17 samples.

Impact cratering is the most ubiquitous geologic process affecting the solid surfaces of planetary bodies in the solar system. The moon's scarred surface serves as a record of meteorite bombardment that spans much of solar system history. Developing an absolute chronology of lunar impact events is of particular interest because the moon is an important proxy for understanding the early bombardment history of Earth, which has been largely erased by plate tectonics and erosion, and because we can use the lunar impact record to infer the ages of other cratered surfaces in the inner solar system.

Researchers in ASU's Group 18 Laboratories, headed by Professor Kip Hodges, used an ultraviolet laser microprobe attached to a high-sensitivity mass spectrometer to analyze argon isotopes in samples returned by Apollo 17. While the laser microprobe 40Ar/39Ar technique has been applied to a large number of problems in terrestrial geochronology, including studies of texturally complex samples, this is the first time it has been applied to samples from the Apollo archive.

The samples analyzed by the ASU team are known as lunar impact melt breccias -- mash-ups of glass, rock and crystal fragments that were created by impact events on the moon's surface.

When a meteor strikes another planetary body, the impact produces very large amounts of energy, some of which goes into shock heating and melting the target rocks. These extreme conditions can 'restart the clock' for some mineral-isotopic chronometers, particularly for material melted during impact. As a result, the absolute ages of lunar craters are primarily determined through isotope geochronology of components of the target rocks that were shocked and heated to the point of melting, and which have since solidified.

However, lunar rocks may have experienced multiple impact events over the course of billions of years of bombardment, potentially complicating attempts to date samples and relate the results to the ages of particular impact structures.

Conventional wisdom holds that the largest impact basins on the moon were responsible for generating the vast majority of impact melts, and therefore that nearly all of the samples dated must be related to the formation of those basins.

While it is true that enormous quantities of impact melt are generated by basin-scale impact events, recent images taken by the Lunar Reconnaissance Orbiter Camera confirm that even small craters with diameters on the order of 100 meters can generate impact melts. The team's findings have important implications for this particular observation. The results are published in the inaugural issue of the American Association for the Advancement of Science's newest journal, Science Advances, on Feb. 12.

"One of the samples we analyzed, 77115, records evidence for only one impact event, which may or may not be related to a basin-forming impact event. In contrast, we found that the other sample, 73217, preserves evidence for at least three impact events occurring over several hundred million years, not all of which can be related to basin-scale impacts," says Cameron Mercer, lead author of the paper and a graduate student in ASU's School of Earth and Space Exploration.

Sample 77115, collected by astronauts Gene Cernan and Harrison Schmitt at Station 7 during their third and final moonwalk, records a single melt-forming event about 3.83 billion years ago. Sample 73217, retrieved at Station 3 during the astronauts' second moonwalk, preserves evidence for at least three distinct impact melt-forming events occurring between 3.81 billion years ago and 3.27 billion years ago. The findings suggest that a single small sample can preserve multiple generations of melt products created by impact events over the course of billions of years.

"Our results emphasize the need for care in how we analyze samples in the context of impact dating, particularly for those samples that appear to have complex, polygenetic origins. This applies to both the samples that we currently have in our lunar and meteoritic collections, as well as samples that we recover during future human and robotic space exploration missions in the inner solar system," says Mercer.


Magnitude of plastic waste going into the ocean calculated: 8 million metric tons of plastic enter the oceans per year

The 192 countries with a coast bordering the Atlantic, Pacific and Indian oceans, Mediterranean and Black seas produced a total of 2.5 billion metric tons of solid waste. Of that, 275 million metric tons was plastic, and an estimated 8 million metric tons of mismanaged plastic waste entered the ocean in 2010.
A plastic grocery bag cartwheels down the beach until a gust of wind spins it into the ocean. In 192 coastal countries, this scenario plays out over and over again as discarded beverage bottles, food wrappers, toys and other bits of plastic make their way from estuaries, seashores and uncontrolled landfills to settle in the world's seas.

How much mismanaged plastic waste is making its way from land to ocean has been a decades-long guessing game. Now, the University of Georgia's Jenna Jambeck and her colleagues in the National Center for Ecological Analysis and Synthesis working group have put a number on the global problem.

Their study, reported in the Feb. 13 edition of the journal Science, found between 4.8 and 12.7 million metric tons of plastic entered the ocean in 2010 from people living within 50 kilometers of the coastline. That year, a total of 275 million metric tons of plastic waste was generated in those 192 coastal countries.

Jambeck, an assistant professor of environmental engineering in the UGA College of Engineering and the study's lead author, explains the amount of plastic moving from land to ocean each year using 8 million metric tons as the midpoint: "Eight million metric tons is equivalent to finding five grocery bags full of plastic on every foot of coastline in the 192 countries we examined."

To determine the amount of plastic going into the ocean, Jambeck "started it off beautifully with a very grand model of all sources of marine debris," said study co-author Roland Geyer, an associate professor with the University of California, Santa Barbara's Bren School of Environmental Science & Management, who teamed with Jambeck and others to develop the estimates.

They began by looking at all debris entering the ocean from land, sea and other pathways. Their goal was to develop models for each of these sources. After gathering rough estimates, "it fairly quickly emerged that the mismanaged waste and solid waste dispersed was the biggest contributor of all of them," he said. From there, they focused on plastic.

"For the first time, we're estimating the amount of plastic that enters the oceans in a given year," said study co-author Kara Lavender Law, a research professor at the Massachusetts-based Sea Education Association. "Nobody has had a good sense of the size of that problem until now."

The framework the researchers developed isn't limited to calculating plastic inputs into the ocean.

"Jenna created a framework to analyze solid waste streams in countries around the world that can easily be adapted by anyone who is interested," she said. "Plus, it can be used to generate possible solution strategies."

Plastic pollution in the ocean was first reported in the scientific literature in the early 1970s. In the 40 years since, there were no rigorous estimates of the amount and origin of plastic debris making its way into the marine environment until Jambeck's current study.

Part of the issue is that plastic is a relatively new problem coupled with a relatively new waste solution. Plastic first appeared on the consumer market in the 1930s and '40s. Waste management didn't start developing its current infrastructure in the U.S., Europe and parts of Asia until the mid-1970s. Prior to that time, trash was dumped in unstructured landfills--Jambeck has vivid memories of growing up in rural Minnesota, dropping her family's garbage off at a small dump and watching bears wander through furniture, tires and debris as they looked for food.

"It is incredible how far we have come in environmental engineering, advancing recycling and waste management systems to protect human health and the environment, in a relatively short amount of time," she said. "However, these protections are unfortunately not available equally throughout the world."

Some of the 192 countries included in the model have no formal waste management systems, Jambeck said. Solid waste management is typically one of the last urban environmental engineering infrastructure components to be addressed during a country's development. Clean water and sewage treatment often come first.

"The human impact from not having clean drinking water is acute, with sewage treatment often coming next," she said. "Those first two needs are addressed before solid waste, because waste doesn't seem to have any immediate threat to humans. And then solid waste piles up in streets and yards and it's the thing that gets forgotten for a while."

As the gross national income increases in these countries, so does the use of plastic. In 2013, the most recent year for which numbers are available, global plastic resin production reached 299 million tons, a 647 percent increase over numbers recorded in 1975. Plastic resin is used to make many one-use items like wrappers, beverage bottles and plastic bags.
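The production figures above imply a 1975 baseline that can be recovered with simple arithmetic (a back-of-the-envelope check using only the numbers quoted in the article, not a figure from the study itself):

```python
production_2013 = 299.0   # million metric tons of plastic resin (from the article)
percent_increase = 647.0  # stated increase over the 1975 figure

# If 299 Mt represents a 647% increase, the implied 1975 baseline is:
baseline_1975 = production_2013 / (1 + percent_increase / 100)
print(round(baseline_1975, 1))  # → 40.0 (million metric tons)
```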

With the mass increase in plastic production, the idea that waste can be contained in a few-acre landfill or dealt with later is no longer viable. That was the mindset before the onslaught of plastic, when most people piled their waste--glass, food scraps, broken pottery--on a corner of their land or burned or buried it. Now, the average American generates about 5 pounds of trash per day with 13% of that being plastic.

But knowing how much plastic is going into the ocean is just one part of the puzzle, Jambeck said. With between 4.8 and 12.7 million metric tons going in, researchers like Law are only finding between 6,350 and 245,000 metric tons floating on the ocean's surface.
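The gap between plastic entering the ocean and plastic found floating can be bounded using the ranges quoted above (an illustrative calculation from the article's numbers, not one from the paper):

```python
input_low, input_high = 4.8e6, 12.7e6         # metric tons entering the ocean, 2010
afloat_low, afloat_high = 6_350.0, 245_000.0  # metric tons observed floating at the surface

# Even the most generous pairing accounts for only a few percent of the input
min_share = afloat_low / input_high
max_share = afloat_high / input_low
print(f"{min_share:.2%} to {max_share:.2%}")  # → 0.05% to 5.10%
```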

"This paper gives us a sense of just how much we're missing," Law said, "how much we need to find in the ocean to get to the total. Right now, we're mainly collecting numbers on plastic that floats. There is a lot of plastic sitting on the bottom of the ocean and on beaches worldwide."

Jambeck forecasts that the cumulative impact to the oceans will equal 155 million metric tons by 2025. The planet is not predicted to reach global "peak waste" before 2100, according to World Bank calculations.

"We're being overwhelmed by our waste," she said. "But our framework allows us to also examine mitigation strategies like improving global solid waste management and reducing plastic in the waste stream. Potential solutions will need to coordinate local and global efforts."


Warming pushes Western U.S. toward driest period in 1,000 years: Unprecedented risk of drought in 21st century

Soil moisture 30 cm below ground projected through 2100 for high emissions scenario RCP 8.5. The soil moisture data are standardized to the Palmer Drought Severity Index and are deviations from the 20th century average.
During the second half of the 21st century, the U.S. Southwest and Great Plains will face persistent drought worse than anything seen in times ancient or modern, with the drying conditions "driven primarily" by human-induced global warming, a new study predicts.

The research says the drying would surpass in severity any of the decades-long "megadroughts" that occurred much earlier during the past 1,000 years -- one of which has been tied by some researchers to the decline of the Anasazi or Ancient Pueblo Peoples in the Colorado Plateau in the late 13th century. Many studies have already predicted that the Southwest could dry due to global warming, but this is the first to say that such drying could exceed the worst conditions of the distant past. The impacts today would be devastating, given the region's much larger population and use of resources.

"We are the first to do this kind of quantitative comparison between the projections and the distant past, and the story is a bit bleak," said Jason E. Smerdon, a co-author and climate scientist at the Lamont-Doherty Earth Observatory, part of the Earth Institute at Columbia University. "Even when selecting for the worst megadrought-dominated period, the 21st century projections make the megadroughts seem like quaint walks through the Garden of Eden."

"The surprising thing to us was really how consistent the response was over these regions, nearly regardless of what model we used or what soil moisture metric we looked at," said lead author Benjamin I. Cook of the NASA Goddard Institute for Space Studies and the Lamont-Doherty Earth Observatory. "It all showed this really, really significant drying."

The new study, "Unprecedented 21st-Century Drought Risk in the American Southwest and Central Plains," will be featured in the inaugural edition of the new online journal Science Advances, produced by the American Association for the Advancement of Science, which also publishes the leading journal Science.

Today, 11 of the past 14 years have been drought years in much of the American West, including California, Nevada, New Mexico and Arizona and across the Southern Plains to Texas and Oklahoma, according to the U.S. Drought Monitor, a collaboration of U.S. government agencies.

The current drought directly affects more than 64 million people in the Southwest and Southern Plains, according to NASA, and many more are indirectly affected because of the impacts on agricultural regions.

Shrinking water supplies have forced western states to impose water use restrictions; aquifers are being drawn down to unsustainable levels, and major surface reservoirs such as Lake Mead and Lake Powell are at historically low levels. This winter's snowpack in the Sierras, a major water source for Los Angeles and other cities, is less than a quarter of what authorities call a "normal" level, according to a February report from the Los Angeles Department of Water and Power. California water officials last year cut off the flow of water from the northern part of the state to the south, forcing farmers in the Central Valley to leave hundreds of thousands of acres unplanted.

"Changes in precipitation, temperature and drought, and the consequences it has for our society -- which is critically dependent on our freshwater resources for food, electricity and industry -- are likely to be the most immediate climate impacts we experience as a result of greenhouse gas emissions," said Kevin Anchukaitis, a climate researcher at the Woods Hole Oceanographic Institution. Anchukaitis said the findings "require us to think rather immediately about how we could and would adapt."

Much of our knowledge about past droughts comes from extensive study of tree rings conducted by Lamont-Doherty scientist Edward Cook (Benjamin's father) and others, who in 2009 created the North American Drought Atlas. The atlas recreates the history of drought over the previous 2,005 years, based on hundreds of tree-ring chronologies, gleaned in turn from tens of thousands of tree samples across the United States, Mexico and parts of Canada.

For the current study, researchers used data from the atlas to represent past climate, and applied three different measures for drought -- two soil moisture measurements at varying depths, and a version of the Palmer Drought Severity Index, which gauges precipitation against evaporation and transpiration -- the net input of water into the land. While some have questioned how accurately the Palmer drought index truly reflects soil moisture, the researchers found it matched well with other measures, and that it "provides a bridge between the [climate] models and drought in observations," Cook said.
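The figure accompanying this study describes soil moisture "standardized to the Palmer Drought Severity Index" as deviations from a 20th-century average. A minimal sketch of that kind of standardization (a simple z-score against a baseline period; the study's actual scaling may differ):

```python
from statistics import mean, stdev

def standardized_anomaly(series, baseline):
    # Deviation from the baseline-period mean, scaled by the baseline
    # standard deviation -- a PDSI-like standardized index (sketch only).
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in series]

# Toy values: a baseline period averaging 4.0 with standard deviation 1.0
print(standardized_anomaly([2.0, 5.0], [3.0, 4.0, 5.0]))  # → [-2.0, 1.0]
```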

The researchers applied 17 different climate models to analyze the future impact of rising average temperatures on the regions. And, they compared two different global warming scenarios -- one with "business as usual," projecting a continued rise in emissions of the greenhouse gases that contribute to global warming; and a second scenario in which emissions are moderated.

By most of those measures, they came to the same conclusions.

"The results … are extremely unfavorable for the continuation of agricultural and water resource management as they are currently practiced in the Great Plains and southwestern United States," said David Stahle, professor in the Department of Geosciences at the University of Arkansas and director of the Tree-Ring Laboratory there. Stahle was not involved in the study, though he worked on the North American Drought Atlas.

Smerdon said he and his colleagues are confident in their results. The effects of CO2 on higher average temperature and the subsequent connection to drying in the Southwest and Great Plains emerge as a "strong signal" across the majority of the models, regardless of the drought metrics that are used, he said. And, he added, they are consistent with many previous studies.

Anchukaitis said the paper "provides an elegant and convincing connection" between reconstructions of past climate and the models pointing to the risk of future drought.

Toby R. Ault of Cornell University is a co-author of the study. Funding was provided by the NASA Modeling, Analysis and Prediction Program, NASA Strategic Science, and the U.S. National Science Foundation.


Cosmology: First stars were born much later than thought

New maps from ESA's Planck satellite uncover the 'polarised' light from the early Universe across the entire sky, revealing that the first stars formed much later than previously thought.

The history of our Universe is a 13.8 billion-year tale that scientists endeavour to read by studying the planets, asteroids, comets and other objects in our Solar System, and gathering light emitted by distant stars, galaxies and the matter spread between them.

A major source of information used to piece together this story is the Cosmic Microwave Background, or CMB, the fossil light resulting from a time when the Universe was hot and dense, only 380,000 years after the Big Bang.

Thanks to the expansion of the Universe, we see this light today covering the whole sky at microwave wavelengths.

Between 2009 and 2013, Planck surveyed the sky to study this ancient light in unprecedented detail. Tiny differences in the background's temperature trace regions of slightly different density in the early cosmos, representing the seeds of all future structure, the stars and galaxies of today.

Scientists from the Planck collaboration have published the results from the analysis of these data in a large number of scientific papers over the past two years, confirming the standard cosmological picture of our Universe with ever greater accuracy.

"But there is more: the CMB carries additional clues about our cosmic history that are encoded in its 'polarisation'," explains Jan Tauber, ESA's Planck project scientist.

"Planck has measured this signal for the first time at high resolution over the entire sky, producing the unique maps released today."

Light is polarised when it vibrates in a preferred direction, something that may arise as a result of photons -- the particles of light -- bouncing off other particles. This is exactly what happened when the CMB originated in the early Universe.

Initially, photons were trapped in a hot, dense soup of particles that, by the time the Universe was a few seconds old, consisted mainly of electrons, protons and neutrinos. Owing to the high density, electrons and photons collided with one another so frequently that light could not travel any significant distance before bumping into another electron, making the early Universe extremely 'foggy'.

Slowly but surely, as the cosmos expanded and cooled, photons and the other particles grew farther apart, and collisions became less frequent.

This had two consequences: electrons and protons could finally combine and form neutral atoms without them being torn apart again by an incoming photon, and photons had enough room to travel, being no longer trapped in the cosmic fog.

Once freed from the fog, the light was set on its cosmic journey that would take it all the way to the present day, where telescopes like Planck detect it as the CMB. But the light also retains a memory of its last encounter with the electrons, captured in its polarisation.

"The polarisation of the CMB also shows minuscule fluctuations from one place to another across the sky: like the temperature fluctuations, these reflect the state of the cosmos at the time when light and matter parted company," says François Bouchet of the Institut d'Astrophysique de Paris, France.

"This provides a powerful tool to estimate in a new and independent way parameters such as the age of the Universe, its rate of expansion and its essential composition of normal matter, dark matter and dark energy."

Planck's polarisation data confirm the details of the standard cosmological picture determined from its measurement of the CMB temperature fluctuations, but add an important new answer to a fundamental question: when were the first stars born?

"After the CMB was released, the Universe was still very different from the one we live in today, and it took a long time until the first stars were able to form," explains Marco Bersanelli of Università degli Studi di Milano, Italy.

"Planck's observations of the CMB polarisation now tell us that these 'Dark Ages' ended some 550 million years after the Big Bang -- more than 100 million years later than previously thought.

"While these 100 million years may seem negligible compared to the Universe's age of almost 14 billion years, they make a significant difference when it comes to the formation of the first stars."

The Dark Ages ended as the first stars began to shine. And as their light interacted with gas in the Universe, more and more of the atoms were turned back into their constituent particles: electrons and protons.

This key phase in the history of the cosmos is known as the 'epoch of reionisation'.

The newly liberated electrons were once again able to collide with the light from the CMB, albeit much less frequently now that the Universe had significantly expanded. Nevertheless, just as they had 380 000 years after the Big Bang, these encounters between electrons and photons left a tell-tale imprint on the polarisation of the CMB.

"From our measurements of the most distant galaxies and quasars, we know that the process of reionisation was complete by the time that the Universe was about 900 million years old," says George Efstathiou of the University of Cambridge, UK.

"But, at the moment, it is only with the CMB data that we can learn when this process began."

Planck's new results are critical, because previous studies of the CMB polarisation seemed to point towards an earlier dawn of the first stars, placing the beginning of reionisation about 450 million years after the Big Bang.

This posed a problem. Very deep images of the sky from the NASA-ESA Hubble Space Telescope have provided a census of the earliest known galaxies in the Universe, which started forming perhaps 300-400 million years after the Big Bang.

However, these would not have been powerful enough to end the Dark Ages within 450 million years.

"In that case, we would have needed additional, more exotic sources of energy to explain the history of reionisation," says Professor Efstathiou.

The new evidence from Planck significantly reduces the problem, indicating that reionisation started later than previously believed, and that the earliest stars and galaxies alone might have been enough to drive it.

This later end of the Dark Ages also implies that it might be easier to detect the very first generation of galaxies with the next generation of observatories, including the James Webb Space Telescope.

But the first stars are definitely not the limit. With the new Planck data released today, scientists are also studying the polarisation of foreground emission from gas and dust in the Milky Way to analyse the structure of the Galactic magnetic field.

The data have also enabled new important insights into the early cosmos and its components, including the intriguing dark matter and the elusive neutrinos, as described in papers also released today.

The Planck data have delved into the even earlier history of the cosmos, all the way to inflation -- the brief era of accelerated expansion that the Universe underwent when it was a tiny fraction of a second old. As the ultimate probe of this epoch, astronomers are looking for a signature of gravitational waves triggered by inflation and later imprinted on the polarisation of the CMB.

No direct detection of this signal has yet been achieved, as reported last week. However, when combining the newest all-sky Planck data with those latest results, the limits on the amount of primordial gravitational waves are pushed even further down to achieve the best upper limits yet.

"These are only a few highlights from the scrutiny of Planck's observations of the CMB polarisation, which is revealing the sky and the Universe in a brand new way," says Jan Tauber.

"This is an incredibly rich data set and the harvest of discoveries has just begun."

Series of publications: http://www.cosmos.esa.int/web/planck/publications


Dogs know that smile on your face

This is the experimental set-up used to test whether dogs can discriminate emotional expressions of human faces.
Dogs can tell the difference between happy and angry human faces, according to a new study in the Cell Press journal Current Biology on February 12. The discovery represents the first solid evidence that an animal other than humans can discriminate between emotional expressions in another species, the researchers say.

"We think the dogs in our study could have solved the task only by applying their knowledge of emotional expressions in humans to the unfamiliar pictures we presented to them," says Corsin Müller of the University of Veterinary Medicine Vienna.

Previous attempts had been made to test whether dogs could discriminate between human emotional expressions, but none of them had been completely convincing. In the new study, the researchers trained dogs to discriminate between images of the same person making either a happy or an angry face. In every case, the dogs were shown only the upper or the lower half of the face. After training on 15 picture pairs, the dogs' discriminatory abilities were tested in four types of trials:

(1) the same half of the faces as in the training but of novel faces,
(2) the other half of the faces used in training,
(3) the other half of novel faces, and
(4) the left half of the faces used in training.

The dogs were able to select the angry or happy face more often than would be expected by random chance in every case, the study found. The findings show that not only could the dogs learn to identify facial expressions, but they were also able to transfer what they learned in training to new cues.
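The article does not give the trial counts or statistics used, but "more often than would be expected by random chance" is typically assessed with a one-sided binomial test against a 50% chance level. A minimal sketch with hypothetical numbers (the 14-of-20 figure is made up for illustration):

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    # One-sided P(X >= successes) under the chance hypothesis:
    # sum the binomial probabilities of the observed count or better.
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical example: a dog picks the correct expression in 14 of 20 trials
print(round(binomial_p_value(14, 20), 3))  # → 0.058
```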

"Our study demonstrates that dogs can distinguish angry and happy expressions in humans, they can tell that these two expressions have different meanings, and they can do this not only for people they know well, but even for faces they have never seen before," says Ludwig Huber, senior author and head of the group at the University of Veterinary Medicine Vienna's Messerli Research Institute.

What exactly those different meanings are for the dogs is hard to say, he adds, "but it appears likely to us that the dogs associate a smiling face with a positive meaning and an angry facial expression with a negative meaning." Müller and Huber report that the dogs were slower to learn to associate an angry face with a reward, suggesting that they already had an idea based on prior experience that it's best to stay away from people when they look angry.

The researchers will continue to explore the role of experience in the dogs' abilities to recognize human emotions. They also plan to study how dogs themselves express emotions and how their emotions are influenced by the emotions of their owners or other humans.

"We expect to gain important insights into the extraordinary bond between humans and one of their favorite pets, and into the emotional lives of animals in general," Müller says.


Sunday

Scientists predict Earth-like planets around most stars

Planetary scientists have calculated that there are hundreds of billions of Earth-like planets in our galaxy which might support life.

The new research, led by PhD student Tim Bovaird and Associate Professor Charley Lineweaver from The Australian National University (ANU), made the finding by applying a 200-year-old idea to the thousands of exo-planets discovered by the Kepler space telescope.

They found the typical star has about two planets in the so-called Goldilocks zone, the distance from the star where liquid water, crucial for life, can exist.

"The ingredients for life are plentiful, and we now know that habitable environments are plentiful," said Associate Professor Lineweaver, from the ANU Research School of Astronomy and Astrophysics and the Research School of Earth Sciences.

"However, the universe is not teeming with aliens with human-like intelligence that can build radio telescopes and space ships. Otherwise we would have seen or heard from them.

"It could be that there is some other bottleneck for the emergence of life that we haven't worked out yet. Or intelligent civilisations evolve, but then self-destruct."

The Kepler space telescope is biased towards seeing planets that orbit very close to their stars, where it is too hot for liquid water, but the team extrapolated from Kepler's results using the theory that was once used to predict the existence of Uranus.

"We used the Titius-Bode relation and Kepler data to predict the positions of planets that Kepler is unable to see," Associate Professor Lineweaver said.
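The classic Titius-Bode relation, dating to the 1760s, predicts the semi-major axis of the n-th orbit as a = 0.4 + 0.3 × 2^n AU. The study applied a generalised version fitted to each Kepler system, but the original solar-system form illustrates the idea of predicting an unseen planet's position:

```python
def titius_bode(n):
    """Classic Titius-Bode relation: predicted semi-major axis in AU.
    Pass None for the innermost orbit (Mercury), then n = 0, 1, 2, ..."""
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# Predicted vs. actual semi-major axes (AU). The relation famously
# anticipated Uranus: n = 6 gives 19.6 AU against an actual ~19.2 AU.
orbits = [(None, "Mercury", 0.39), (0, "Venus", 0.72), (1, "Earth", 1.00),
          (2, "Mars", 1.52), (3, "Ceres", 2.77), (4, "Jupiter", 5.20),
          (5, "Saturn", 9.58), (6, "Uranus", 19.19)]
for n, name, actual in orbits:
    print(f"{name:8s} predicted {titius_bode(n):5.1f} AU, actual {actual:5.2f} AU")
```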



Wednesday

Add nature, art and religion to life's best anti-inflammatories

The awe we feel when we're in nature may help lower levels of pro-inflammatory proteins, a new study suggests.
Taking in such spine-tingling wonders as the Grand Canyon, Sistine Chapel ceiling or Schubert's "Ave Maria" may give a boost to the body's defense system, according to new research from UC Berkeley.

Researchers have linked positive emotions -- especially the awe we feel when touched by the beauty of nature, art and spirituality -- with lower levels of pro-inflammatory cytokines, which are proteins that signal the immune system to work harder.

"Our findings demonstrate that positive emotions are associated with the markers of good health," said Jennifer Stellar, a postdoctoral researcher at the University of Toronto and lead author of the study, which she conducted while at UC Berkeley.

While cytokines are necessary for herding cells to the body's battlegrounds to fight infection, disease and trauma, sustained high levels of cytokines are associated with poorer health and such disorders as type-2 diabetes, heart disease, arthritis and even Alzheimer's disease and clinical depression.

It has long been established that a healthy diet and lots of sleep and exercise bolster the body's defenses against physical and mental illnesses. But the Berkeley study, whose findings were just published in the journal Emotion, is one of the first to look at the role of positive emotions in that arsenal.

"That awe, wonder and beauty promote healthier levels of cytokines suggests that the things we do to experience these emotions -- a walk in nature, losing oneself in music, beholding art -- has a direct influence upon health and life expectancy," said UC Berkeley psychologist Dacher Keltner, a co-author of the study.

In two separate experiments, more than 200 young adults reported on a given day the extent to which they had experienced such positive emotions as amusement, awe, compassion, contentment, joy, love and pride. Samples of gum and cheek tissue, known as oral mucosal transudate, taken that same day showed that those who experienced more of these positive emotions, especially awe, wonder and amazement, had the lowest levels of the cytokine, Interleukin 6, a marker of inflammation.

In addition to autoimmune diseases, elevated cytokines have been tied to depression. One recent study found that depressed patients had higher levels of the pro-inflammatory cytokine known as TNF-alpha than their non-depressed counterparts. It is believed that by signaling the brain to produce inflammatory molecules, cytokines can block key hormones and neurotransmitters -- such as serotonin and dopamine -- that control moods, appetite, sleep and memory.

As to why awe would be such a potent predictor of reduced pro-inflammatory cytokines, Stellar said this latest study posits that "awe is associated with curiosity and a desire to explore, suggesting antithetical behavioral responses to those found during inflammation, where individuals typically withdraw from others in their environment."

As for which came first -- the low cytokines or the positive feelings -- Stellar said she can't say for sure: "It is possible that having lower cytokines makes people feel more positive emotions, or that the relationship is bidirectional."


Scientists discover organism that hasn't evolved in more than 2 billion years

This is a section of a 1.8 billion-year-old fossil-bearing rock.
An international team of scientists has discovered the greatest absence of evolution ever reported -- a type of deep-sea microorganism that appears not to have evolved over more than 2 billion years. But the researchers say that the organisms' lack of evolution actually supports Charles Darwin's theory of evolution.

The findings are published online by the Proceedings of the National Academy of Sciences.

The scientists examined 1.8 billion-year-old fossils of sulfur bacteria, microorganisms too small to see with the unaided eye, preserved in rocks from Western Australia's coastal waters. Using cutting-edge technology, they found that the bacteria look the same as bacteria of the same region from 2.3 billion years ago -- and that both sets of ancient bacteria are indistinguishable from modern sulfur bacteria found in mud off the coast of Chile.

"It seems astounding that life has not evolved for more than 2 billion years -- nearly half the history of Earth," said J. William Schopf, a UCLA professor of earth, planetary and space sciences in the UCLA College who was the study's lead author. "Given that evolution is a fact, this lack of evolution needs to be explained."

Charles Darwin's writings on evolution focused much more on species that had changed over time than on those that hadn't. So how do scientists explain a species living for so long without evolving?

"The rule of biology is not to evolve unless the physical or biological environment changes, which is consistent with Darwin," said Schopf, who also is director of UCLA's Center for the Study of Evolution and the Origin of Life. The environment in which these microorganisms live has remained essentially unchanged for 3 billion years, he said.

"These microorganisms are well-adapted to their simple, very stable physical and biological environment," he said. "If they were in an environment that did not change but they nevertheless evolved, that would have shown that our understanding of Darwinian evolution was seriously flawed."

Schopf said the findings therefore provide further scientific support for Darwin's work. "It fits perfectly with his ideas," he said.

The fossils Schopf analyzed date back to a substantial rise in Earth's oxygen levels known as the Great Oxidation Event, which scientists believe occurred between 2.2 billion and 2.4 billion years ago. The event also produced a dramatic increase in sulfate and nitrate -- the only nutrients the microorganisms would have needed to survive in their seawater mud environment -- which the scientists say enabled the bacteria to thrive and multiply.

Schopf used several techniques to analyze the fossils, including Raman spectroscopy -- which enables scientists to look inside rocks to determine their composition and chemistry -- and confocal laser scanning microscopy -- which renders fossils in 3-D. He pioneered the use of both techniques for analyzing microscopic fossils preserved inside ancient rocks.



Magnetic sense for humans? Electronic skin with magneto-sensory system enables 'sixth sense'

The new magnetic sensors are light enough (three grams per square meter) to float on a soap bubble.
Scientists from Germany and Japan have developed a new magnetic sensor, which is thin, robust and pliable enough to be smoothly adapted to human skin, even to the most flexible part of the human palm. The achievement suggests it may be possible to equip humans with magnetic sense.

Magnetoception is a sense that allows bacteria, insects and even vertebrates like birds and sharks to detect magnetic fields for orientation and navigation. Humans, however, are unable to perceive magnetic fields naturally. Dr. Denys Makarov and his team have developed an electronic skin with a magneto-sensory system that equips the recipient with a "sixth sense" able to perceive the presence of static or dynamic magnetic fields. These novel magneto-electronics are less than two micrometers thick and weigh only three grams per square meter; they can even float on a soap bubble.

The new magnetic sensors withstand extreme bending to radii of less than three micrometers, and survive being crumpled like a piece of paper without sacrificing sensor performance. On elastic supports such as a rubber band, they can be stretched to more than 270 percent of their length for over 1,000 cycles without fatigue. These versatile features are imparted to the magnetoelectronic elements by their ultra-thin and flexible, yet robust, polymeric support.

"We have demonstrated an on-skin touch-less human-machine interaction platform, motion and displacement sensorics applicable for soft robots or functional medical implants as well as magnetic functionalities for electronics on the skin," says Michael Melzer, the PhD student of the ERC group led by Denys Makarov concentrating on the realization of flexible and stretchable magnetoelectronics. "These ultrathin magnetic sensors with extraordinary mechanical robustness are ideally suited to be wearable, yet unobtrusive and imperceptible for orientation and manipulation aids," adds Prof. Oliver G. Schmidt, who is the director of the Institute for Integrative Nanosciences at the IFW Dresden.

This work was carried out at the Leibniz Institute for Solid State and Materials Research (IFW Dresden) and the TU Chemnitz in close collaboration with partners at the University of Tokyo and Osaka University in Japan.



New technique doubles the distance of optical fiber communications

UCL researchers have demonstrated a new way to process fibre optic signals that could double the distance at which data travels error-free through transatlantic submarine cables.

The new method has the potential to reduce the costs of long-distance optical fibre communications as signals wouldn't need to be electronically boosted on their journey, which is important when the cables are buried underground or at the bottom of the ocean.

As the technique can correct transmitted data that are corrupted or distorted on the journey, it could also help to increase the useful capacity of fibres. This is done right at the end of the link, at the receiver, without introducing new components within the link itself. Increasing capacity this way matters for three reasons: optical fibres carry 99% of all data; demand, driven by growing use of the internet, is rising faster than the fibres' current capacity can match; and changing the receivers is far cheaper and easier than re-laying cables.

To cope with this increased demand, more information is being sent using the existing fibre infrastructure with different frequencies of light creating the data signals. The large number of light signals being sent can interact with each other and distort, causing the data to be received with errors.

The study, published today in Scientific Reports and sponsored by the EPSRC, describes a new way of improving the transmission distance by undoing the interactions that occur between different optical channels as they travel side by side over an optical cable.

Study author Dr Robert Maher (UCL Electronic & Electrical Engineering), said: "By eliminating the interactions between the optical channels, we are able to double the distance signals can be transmitted error-free, from 3190km to 5890km, which is the largest increase ever reported for this system architecture. The challenge is to devise a technique to simultaneously capture a group of optical channels, known as a super-channel, with a single receiver. This allows us to undo the distortion by sending the data channels back on a virtual digital journey at the same time."
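The "virtual digital journey" Maher describes is, in essence, digital back-propagation: the receiver numerically re-runs a model of the fibre in reverse to undo dispersion and nonlinear distortion. A toy single-channel sketch of the idea, using a split-step simulation with illustrative parameters (not the UCL team's 16QAM super-channel receiver), looks like this:

```python
import numpy as np

# Toy split-step model of fibre propagation (dispersion plus Kerr
# nonlinearity), followed by receiver-side digital back-propagation:
# re-running the link in reverse with negated coefficients.
# All parameters are illustrative, not those of the UCL system.
n, steps = 256, 50
beta2, gamma, dz = -0.5, 0.02, 1.0      # dispersion, nonlinearity, step size
omega = 2 * np.pi * np.fft.fftfreq(n)   # angular-frequency grid

def step(field, b2, g, h):
    """One symmetric split-step: half dispersion, nonlinear phase, half dispersion."""
    half_disp = np.exp(0.5j * b2 * omega ** 2 * h / 2)
    field = np.fft.ifft(half_disp * np.fft.fft(field))
    field = field * np.exp(1j * g * np.abs(field) ** 2 * h)
    return np.fft.ifft(half_disp * np.fft.fft(field))

rng = np.random.default_rng(0)
tx = np.exp(2j * np.pi * rng.random(n))  # unit-amplitude, random-phase signal

rx = tx
for _ in range(steps):                   # forward propagation (the link)
    rx = step(rx, beta2, gamma, dz)

rec = rx
for _ in range(steps):                   # back-propagation (at the receiver)
    rec = step(rec, -beta2, -gamma, dz)

print("residual error:", np.max(np.abs(rec - tx)))  # near zero: distortion undone
```

Because each symmetric split-step with negated coefficients is the exact inverse of the forward step, the received field is restored to the transmitted one up to floating-point error.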

The researchers used a '16QAM super-channel' made of a set of frequencies which could be coded using amplitude, phase and frequency to create a high-capacity optical signal. The super-channel was then detected using a high-speed super-receiver and new signal processing techniques developed by the team enabled the reception of all the channels together and without error. The researchers will now test their new method on denser super-channels commonly used in digital cable TV (64QAM), cable modems (256QAM) and Ethernet connections (1024QAM).
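Those format names encode the constellation density: an M-QAM signal carries log2(M) bits per symbol, so moving from 16QAM to 1024QAM raises the per-symbol payload from 4 to 10 bits. A minimal sketch of a square QAM constellation (a generic textbook construction, not the team's receiver design):

```python
import numpy as np

def qam_constellation(m):
    """Square M-QAM constellation: a sqrt(M)-by-sqrt(M) grid of complex
    symbols, each carrying log2(M) bits."""
    k = int(round(np.sqrt(m)))
    levels = np.arange(-(k - 1), k, 2)   # e.g. [-3, -1, 1, 3] for 16QAM
    return (levels[:, None] + 1j * levels[None, :]).ravel()

for m in (16, 64, 256, 1024):
    pts = qam_constellation(m)
    print(f"{m:>4}-QAM: {len(pts)} symbols, {int(np.log2(m))} bits per symbol")
```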

Study author Professor Polina Bayvel (Electronic & Electrical Engineering) who is Professor of Optical Communications and Networks and Director of UNLOC, said: "We're excited to report such an important finding that will improve fibre optic communications. Our method greatly improves the efficiency of transmission of data -- almost doubling the transmission distances that can be achieved, with the potential to make significant savings over current state-of-the art commercial systems. One of the biggest global challenges we face is how to maintain communications with demand for the Internet booming -- overcoming the capacity limits of optical fibres cables is a large part of solving that problem."



Computer chips: Engineers use disorder to control light on the nanoscale

Artist's depiction of light traveling through a photonic crystal superlattice, where holes have been randomly patterned. The result is a more narrow beam of light.
A breakthrough by a team of researchers from UCLA, Columbia University and other institutions could lead to the more precise transfer of information in computer chips, as well as new types of optical materials for light emission and lasers.

The researchers were able to control light at tiny lengths around 500 nanometers -- smaller than the light's own wavelength -- by using random crystal lattice structures to counteract light diffraction. The discovery could begin a new phase in laser collimation -- the science of keeping lasers precise and narrow instead of spreading out.

The study's principal investigator was Chee Wei Wong, associate professor of electrical engineering at the UCLA Henry Samueli School of Engineering and Applied Science.

Think of shining a flashlight against a wall. As the light moves from the flashlight and approaches the wall, it spreads out, a phenomenon called diffraction. The farther away the light source is held from the wall, the more the beam diffracts before it reaches the wall.

The same phenomenon also happens on a scale so small that distances are measured in nanometers -- a unit equal to one-billionth of a meter. For example, light could be used to carry information in computer chips and optical fibers. But when diffraction occurs, the transfer of data isn't as clean or precise as it could be.
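The scale of the problem can be estimated with the single-slit rule of thumb that light of wavelength lambda leaving an aperture of width D spreads with sin(theta) ~ lambda/D. At telecom-band wavelengths (1550 nm is assumed here purely for illustration), a 500-nanometer structure is smaller than the wavelength itself, so an unguided beam would spread into essentially all angles:

```python
import math

WAVELENGTH = 1.55e-6  # 1550 nm telecom-band infrared, assumed for illustration

# Single-slit estimate: light leaving an aperture of width D spreads with
# sin(theta) ~ lambda / D. Once D shrinks below the wavelength, the ratio
# exceeds 1 and an unguided beam spreads into essentially all angles.
for aperture in (1e-2, 1e-4, 500e-9):  # flashlight-scale, pinhole, nanostructure
    ratio = WAVELENGTH / aperture
    theta = math.degrees(math.asin(min(ratio, 1.0)))
    print(f"aperture {aperture:.0e} m -> divergence half-angle ~ {theta:.4f} deg")
```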

Technology that prevents diffraction and more precisely controls the light used to transfer data could therefore lead to advances in optical communications, enabling optical signal processing that overcomes the physical limitations of current electronics, and could help engineers create improved optical fibers for use in biomedicine.

To control light on the nanoscale, the researchers used a photonic crystal superlattice, a lattice structure made of crystals that allows light through. The lattice was a disorderly pattern, with thousands of nanoscale heptagonal, square and triangular holes. These holes, each smaller than the wavelength of the light traveling through the structure, serve as guideposts for a beam of light.

Engineers had understood previously that uniformly patterned holes can control the spatial diffraction somewhat. But the researchers found in the new study that the structures with the most disorderly patterns were best able to trap and collimate the beam into a narrow path, and that the structure worked over a broad part of the infrared spectrum.

The study's lead author was Pin-Chun Hsieh, who was advised by Wong during his doctoral studies at Columbia University's Fu Foundation School of Engineering and Applied Science.

The effect of disorder, known as Anderson localization, was first proposed in 1958 by Nobel laureate Philip Anderson. It is the physical phenomenon that explains the conductance of electrons and waves in condensed matter physics.

The new study was the first to examine transverse Anderson localization in a chip-scale photonic crystal media. It was published online today by Nature Physics.

"This study allows us to validate the theory of Anderson localization in chip-scale photonics, through engineered randomness in an otherwise periodic structure," Wong said. "What Pin-Chun has observed provides a new path in controlling light propagation at the wavelength scale, that is, delivering structure arising out of randomness."

Hsieh, who also is chairman and majority owner of Taiwan-based Quantumstone Research, said the findings are completely counterintuitive because one might think that disorder in the structures would lead the light to spread out more. "This effect, based on intuition gained from electronic systems, where introduced impurities can turn an insulator into a semiconductor, shows unequivocally that controlling disorder can arrest transverse transport, and really reduce the spreading of light."

The numerical simulation was performed at University College London, and the sample fabrication was carried out at the Brookhaven National Laboratory in New York and at National Cheng Kung University in Taiwan.

The research was supported primarily by a grant from the U.S. Office of Naval Research. Additional support was provided by the National Science Foundation, the Department of Energy and the government of the United Kingdom. Hsieh is supported by a scholarship from Taiwan's Department of Education.
