
68 percent of New England and Mid-Atlantic beaches eroding

An assessment of coastal change over the past 150 years has found 68 percent of beaches in the New England and Mid-Atlantic region are eroding, according to a U.S. Geological Survey report released today.
Scientists studied more than 650 miles of the New England and Mid-Atlantic coasts and found the average rate of coastal change – taking into account beaches that are both eroding and prograding — was negative 1.6 feet per year. Of those beaches eroding, the most extreme case exceeded 60 feet per year.
The past 25 to 30 years saw a small reduction in the percentage of beaches eroding – dropping to 60 percent, possibly as a result of beach restoration activities such as adding sand to beaches.
“This report provides invaluable objective data to help scientists and managers better understand natural changes to and human impacts on the New England and Mid-Atlantic coasts,” said Anne Castle, Assistant Secretary of the Interior for Water and Science. “The information gathered can inform decisions about future land use, transportation corridors, and restoration projects.”
Beaches change in response to a variety of factors, including changes in the amount of available sand, storms, sea-level rise and human activities. How much a beach is eroding or prograding in any given location is due to some combination of these factors, which vary from place to place.
The Mid-Atlantic coast – from Long Island, N.Y., to the Virginia-North Carolina border – is eroding at higher average rates than the New England coast. The difference in the type of coastline, with sandy areas being more vulnerable to erosion than areas with a greater concentration of rocky coasts, was the primary factor.
The researchers found that, although coastal change is highly variable, the majority of the coast is eroding throughout both regions, indicating erosion hazards are widespread.
“There is increasing need for this kind of comprehensive assessment in all coastal environments to guide managed response to sea-level rise,” said Dr. Cheryl Hapke of the USGS, lead author of the new report. “It is very difficult to predict what may happen in the future without a solid understanding of what has happened in the past.”
The researchers used historical data sources such as maps and aerial photographs, as well as modern data like lidar, or “light detection and ranging,” to measure shoreline change at more than 21,000 locations.
This analysis of past and present trends of shoreline movement is designed to allow for future repeatable analyses of shoreline movement, coastal erosion, and land loss. The results of the study provide a baseline for coastal change information that can be used to inform a wide variety of coastal management decisions, Hapke said.
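As a simplified illustration of this kind of rate calculation (not the USGS workflow itself), the Python sketch below fits a straight line to hypothetical shoreline positions at two transects and averages the resulting rates; negative values indicate erosion, and all of the numbers are invented for demonstration.
```python
# Illustrative only: per-transect shoreline change rate as the least-squares
# slope of shoreline position (feet, relative to a baseline) versus survey year,
# then a regional average. Negative rates indicate erosion.
from statistics import mean

def change_rate_ft_per_yr(years, positions):
    """Least-squares slope of shoreline position (ft) against time (yr)."""
    y_mean, p_mean = mean(years), mean(positions)
    num = sum((y - y_mean) * (p - p_mean) for y, p in zip(years, positions))
    den = sum((y - y_mean) ** 2 for y in years)
    return num / den

# Hypothetical transects: survey years and shoreline positions in feet
transects = [
    ([1870, 1930, 1980, 2000], [0.0, -95.0, -170.0, -205.0]),  # eroding transect
    ([1870, 1930, 1980, 2000], [0.0, 12.0, 30.0, 41.0]),       # prograding transect
]

rates = [change_rate_ft_per_yr(yrs, pos) for yrs, pos in transects]
eroding_share = sum(r < 0 for r in rates) / len(rates)

print(f"mean rate: {mean(rates):+.1f} ft/yr, eroding: {eroding_share:.0%}")
```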
The report, titled “National Assessment of Shoreline Change: Historical Shoreline Change along the New England and Mid-Atlantic Coasts,” is the fifth report produced as part of the USGS’s National Assessment of Shoreline Change project. An accompanying report that provides the geographic information system (GIS) data used to conduct the coastal change analysis is being released simultaneously.
Note: This story has been adapted from a news release issued by the United States Geological Survey

Scientists delve into ‘hotspot’ volcanoes along Pacific Ocean Seamount Trail

Like a string of underwater pearls, the Louisville Seamount Trail is strung across the Pacific. – IODP
Nearly half a mile of rock retrieved from beneath the seafloor is yielding new clues about how underwater volcanoes are created and whether the hotspots that led to their formation have moved over time.

Geoscientists have just completed an expedition to a string of underwater volcanoes, or seamounts, in the Pacific Ocean known as the Louisville Seamount Trail.

There they collected samples of sediments, basalt lava flows and other volcanic eruption materials to piece together the history of this ancient trail of volcanoes.

The expedition was part of the Integrated Ocean Drilling Program (IODP).

“Finding out whether hotspots in Earth’s mantle are stationary or not will lead to new knowledge about the basic workings of our planet,” says Rodey Batiza, section head for marine geosciences in the National Science Foundation’s (NSF) Division of Ocean Sciences.

Tens of thousands of seamounts exist in the Pacific Ocean. Expedition scientists probed a handful of the most important of these underwater volcanoes.

“We sampled ancient lava flows, and a fossilized algal reef,” says Anthony Koppers of Oregon State University. “The samples will be used to study the construction and evolution of individual volcanoes.”

Koppers led the expedition aboard the scientific research vessel JOIDES Resolution, along with co-chief scientist Toshitsugu Yamazaki from the Geological Survey of Japan at the National Institute of Advanced Industrial Science and Technology.

IODP is supported by NSF and Japan’s Ministry of Education, Culture, Sports, Science and Technology.

Over the last two months, scientists drilled 1,113 meters (3,651 feet) into the seafloor to recover 806 meters (2,644 feet) of volcanic rock.

The samples were retrieved from six sites at five seamounts ranging in age from 50 to 80 million years old.

“The sample recovery during this expedition was truly exceptional. I believe we broke the record for drilling igneous rock with a rotary core barrel,” says Yamazaki.

Igneous rock is rock formed through the cooling and solidification of magma or lava, while a rotary core barrel is a type of drilling tool used for penetrating hard rocks.

Trails of volcanoes found in the middle of tectonic plates, such as the Hawaii-Emperor and Louisville Seamount Trails, are believed to form from hotspots–plumes of hot material found deep within the Earth that supply a steady stream of heated rock.

As a tectonic plate drifts over a hotspot, new volcanoes are formed and old ones become extinct. Over time, a trail of volcanoes is formed. The Louisville Seamount Trail is some 4,300 kilometers (about 2,600 miles) long.

“Submarine volcanic trails like the Louisville Seamount Trail are unique because they record the direction and speed at which tectonic plates move,” says Koppers.

Scientists use these volcanoes to study the motion of tectonic plates, comparing the ages of the volcanoes against their location over time to calculate the rate at which a plate moved over a hotspot.
These calculations assume the hotspot stays in the same place.
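
The fixed-hotspot arithmetic behind such estimates can be sketched in a few lines of Python. The separation used below is a made-up example; only the 50- and 80-million-year ages come from the article, and the drilled seamounts do not necessarily span that distance.

```python
# Back-of-the-envelope version of the fixed-hotspot calculation: if the hotspot
# is stationary, plate speed ~ distance between two volcanoes along the trail
# divided by their age difference.

def plate_speed_cm_per_yr(separation_km: float, age_gap_myr: float) -> float:
    """Average plate speed over a hotspot, assuming the hotspot is fixed."""
    km_per_myr = separation_km / age_gap_myr       # km per million years
    return km_per_myr * 1e5 / 1e6                  # convert to cm per year

# e.g. two seamounts 2,000 km apart whose lavas differ in age by 30 million years
print(plate_speed_cm_per_yr(2000.0, 80.0 - 50.0))  # ~6.7 cm/yr
```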

“The challenge,” says Koppers, “is that no one knows if hotspots are truly stationary or if they somehow wander over time. If they wander, then our calculations of plate direction and speed need to be re-evaluated.”

“More importantly,” he says, “the results of this expedition will give us a more accurate picture of the dynamic nature of the interior of the Earth on a planetary scale.”

Recent studies in Hawaii have shown that the Hawaii hotspot may have moved as much as 15 degrees latitude (about 1,600 kilometers or 1,000 miles) over a period of 30 million years.

“We want to know if the Louisville hotspot moved at the same time and in the same direction as the Hawaiian hotspot. Our models suggest that it’s the opposite, but we won’t really know until we analyze the samples from this expedition,” says Yamazaki.

In addition to the volcanic rock, the scientists also recovered sedimentary rocks that preserve shells and an ancient algal reef, typical of living conditions in a very shallow marine environment.

These ancient materials show that the Louisville seamounts were once an archipelago of volcanic islands.

“We were really surprised to find only a thin layer of sediments on the tops of the seamounts, and only very few indications for the eruption of lava flows above sea level,” says Koppers.

The IODP Louisville Seamount Trail Expedition wasn’t solely focused on geology.

More than 60 samples from five seamounts were obtained for microbiology research.

Exploration of microbial communities under the seafloor, known as the “subseafloor biosphere,” is a rapidly developing field of research.

Using the Louisville samples, microbiologists will study both living microbial residents and those that were abundant over a large area, but now occupy only a few small areas.

They will examine population differences in microbes in the volcanic rock and overlying sediments, and in different kinds of lava flows.

They will also look for population patterns at various depths in the seafloor and compare them with seamounts of varying ages.

Samples from the Louisville Seamount Trail expedition will be analyzed to determine their age, composition and magnetic properties.

The information will be pieced together like a puzzle to create a story of the eruption history of the Louisville volcanoes.

It will then be compared to that of the Hawaiian volcanoes to determine whether hotspots are on the move.
The IODP is an international research program dedicated to advancing scientific understanding of the Earth through drilling, coring and monitoring the subseafloor.
Note: This story has been adapted from a news release issued by the National Science Foundation

Ground-based lasers vie with satellites to map Earth’s magnetic field

To measure the Earth’s magnetic field, an orange laser beam is directed at a layer of sodium 90 kilometers above the Earth. The beam is pulsed at a rate determined by the local magnetic field in order to excite spin polarization of the sodium atoms. The fluorescent emission from the sodium, which depends on the spin polarization, is detected by a ground-based telescope and analyzed to determine the strength of the magnetic field. – Dmitry Budker lab/UC Berkeley
Mapping the Earth’s magnetic field – to find oil, track storms or probe the planet’s interior – typically requires expensive satellites.
University of California, Berkeley, physicists have now come up with a much cheaper way to measure the Earth’s magnetic field using only a ground-based laser.
The method involves exciting sodium atoms in a layer 90 kilometers above the surface and measuring the light they give off.

“Normally, the laser makes the sodium atom fluoresce,” said Dmitry Budker, UC Berkeley professor of physics. “But if you modulate the laser light, when the modulation frequency matches the spin precession of the sodium atoms, the brightness of the spot changes.”

Because the local magnetic field determines the frequency at which the atoms precess, this allows someone with a ground-based laser to map the magnetic field anywhere on Earth.

Budker and three current and former members of his laboratory, as well as colleagues with the European Southern Observatory (ESO), lay out their technique in a paper appearing online this week in the journal Proceedings of the National Academy of Sciences.

Various satellites, ranging from the Geostationary Operational Environmental Satellites, or GOES, to an upcoming European mission called SWARM, carry instruments to measure the Earth’s magnetic field, providing data to companies searching for oil or minerals, climatologists tracking currents in the atmosphere and oceans, geophysicists studying the planet’s interior and scientists tracking space weather.

Ground-based measurements, however, can avoid several problems associated with satellites, Budker said. Because these spacecraft are moving at high speed, it’s not always possible to tell whether a fluctuation in the magnetic field strength is real or a result of the spacecraft having moved to a new location. Also, metal and electronic instruments aboard the craft can affect magnetic field measurements.

“A ground-based remote sensing system allows you to measure when and where you want and avoids problems of spatial and temporal dependence caused by satellite movement,” he said. “Initially, this is going to be competitive with the best satellite measurements, but it could be improved drastically.”

Laser guide stars

The idea was sparked by a discussion Budker had with a colleague about the lasers used by many modern telescopes to remove the twinkle from stars caused by atmospheric disturbance. That technique, called laser guide star adaptive optics, employs lasers to excite sodium atoms deposited in the upper atmosphere by meteorites. Once excited, the atoms fluoresce, emitting light that mimics a real star.

Telescopes with such a laser guide star, including the Very Large Telescope in Chile and the Keck telescopes in Hawaii, adjust their “rubber mirrors” to cancel the laser guide star’s jiggle, and thus remove the jiggle for all nearby stars.

It is well known that these sodium atoms are affected by the Earth’s magnetic field. Budker, who specializes in extremely precise magnetic-field measurements, realized that you could easily determine the local magnetic field by exciting the atoms with a pulsed or modulated laser of the type used in guide stars. The method is based on the fact that the electron spin of each sodium atom precesses like a top in the presence of a magnetic field. Hitting the atom with light pulses at just the right frequency will cause the electrons to flip, affecting the way the atoms interact with light.
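
A minimal sketch of that relationship, assuming an approximate gyromagnetic ratio for ground-state sodium of about 7 Hz per nanotesla (a standard figure supplied here for illustration, not given in the article), looks like this:

```python
# The sodium spins precess at the Larmor frequency f = gamma * B, so finding the
# modulation frequency that changes the fluorescence gives the local field.
# GAMMA is an approximate value for ground-state sodium, used only for illustration.

GAMMA_HZ_PER_NT = 7.0  # approximate gyromagnetic ratio, Hz per nanotesla

def field_from_resonance(resonance_hz: float) -> float:
    """Infer magnetic field strength (nT) from the resonant modulation frequency."""
    return resonance_hz / GAMMA_HZ_PER_NT

def resonance_from_field(field_nt: float) -> float:
    """Expected resonant modulation frequency (Hz) for a given field (nT)."""
    return GAMMA_HZ_PER_NT * field_nt

# Earth's field is roughly 50,000 nT at mid-latitudes, so the resonance sits
# near a few hundred kilohertz:
print(resonance_from_field(50_000))   # ~350,000 Hz
print(field_from_resonance(350_000))  # ~50,000 nT
```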

“It suddenly struck me that what we do in my lab with atomic magnetometers we can do with atoms freely floating in the sky,” he said.

Budker’s former post-doctoral fellow James Higbie – now an assistant professor of physics and astronomy at Bucknell University – conducted laboratory measurements and computer simulations confirming that the effects of a modulated laser could be detected from the ground by a small telescope. He was assisted by Simon M. Rochester, who received his Ph.D. in physics from UC Berkeley last year, and current post-doctoral fellow Brian Patton.

Portable laser magnetometers

In practice, a 20- to 50-watt laser small enough to load on a truck and tuned to the orange sodium line (589-nanometer wavelength) would shine polarized light into the 10-kilometer-thick sodium layer in the mesosphere, about 90 kilometers overhead. The frequency at which the laser light is modulated or pulsed would be varied slightly around the spin-precession frequency of the sodium atoms to stimulate a spin flip.

The decrease or increase in brightness when the modulation is tuned to a “sweet spot” determined by the magnitude of the magnetic field could be as much as 10 percent of the typical fluorescence, Budker said. The spot itself would be too faint to see with the naked eye, but the brightness change could easily be measured by a small telescope.

“This is such a simple idea, I thought somebody must have thought of it before,” Budker said.

He was right. William Happer, a physicist who pioneered spin-polarized spectroscopy and the sodium laser guide stars, had thought of the idea, but had never published it.

“I was very, very happy to hear that, because I felt there may be a flaw in the idea, or that it had already been published,” Budker said.

While Budker’s lab continues its studies of how spin-polarized sodium atoms emit and absorb light, Budker’s co-authors Ronald Holzlöhner and Domenico Bonaccini Calia of the ESO in Garching, Germany, are building a 20-watt modulated laser for the Very Large Telescope in Chile that can be used to test the theory.
Note: This story has been adapted from a news release issued by the University of California – Berkeley

Researchers map out ice sheets shrinking during Ice Age

These maps show the rate at which the ice sheet over the British Isles melted during the last Ice Age. The ka on the images is short for thousand years and BP is ‘before present,’ so 27 ka BP is the map of the ice sheet at 27,000 years ago. – University of Sheffield
A set of maps created by the University of Sheffield has illustrated, for the first time, how our last British ice sheet shrank during the Ice Age.

Led by Professor Chris Clark from the University’s Department of Geography, a team of experts developed the maps to understand what effect the current shrinking of ice sheets in parts of the Antarctic and Greenland will have on the speed of sea level rise.

The unique maps record the pattern and speed of shrinkage of the large ice sheet that covered the British Isles during the last Ice Age, approximately 20,000 years ago. The sheet, which subsumed most of Britain, Ireland and the North Sea, had an ice volume sufficient to raise global sea level by around 2.5 metres when it melted.
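
The 2.5-metre figure follows from simple arithmetic: the meltwater volume equals the sea-level rise multiplied by the ocean’s surface area. The short sketch below uses standard approximate values for ocean area and ice density that are supplied here for illustration, not taken from the article.

```python
# Rough arithmetic behind the "2.5 metres of sea level" figure.
OCEAN_AREA_KM2 = 3.6e8      # ~3.6 x 10^8 km^2 of ocean surface (approximate)
ICE_DENSITY_RATIO = 0.92    # ice is roughly 92% as dense as water (approximate)

sea_level_rise_m = 2.5
water_volume_km3 = (sea_level_rise_m / 1000.0) * OCEAN_AREA_KM2
ice_volume_km3 = water_volume_km3 / ICE_DENSITY_RATIO

print(f"meltwater: ~{water_volume_km3:,.0f} km^3")  # ~900,000 km^3
print(f"ice sheet: ~{ice_volume_km3:,.0f} km^3")    # ~980,000 km^3
```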

Using the maps, researchers will be able to understand the mechanisms and rate of change of ice sheet retreat, allowing them to make predictions for our polar regions, whose ice sheets appear to be melting as a result of temperature increases in the air and oceans.

The maps are based on new information on glacial landforms, such as moraines and drumlins, which were discovered using new technology such as remote sensing data that is able to image the land surface and seafloor at unprecedented resolutions. Experts combined this new information with that from fieldwork, some of it dating back to the nineteenth century, to produce the final maps of retreat.

It is also possible to use the maps to reveal exactly when land became exposed from beneath the ice and was available for colonization and use by plants, animals and humans. This provides the opportunity for viewers to pinpoint when their town/region emerged.

Professor Chris Clark, from the University of Sheffield’s Department of Geography, said: “It took us over 10 years to gather all the information in order to produce these maps, and we are delighted with the results. It is great to be able to visualize the ice sheet and notice that retreat speeds up and slows down, and it is vital of course that we learn exactly why. With such understanding we will be able to better predict ice losses in Greenland and Antarctica.

“In our next phase of work we hope to really tighten up on the timing and rates of retreat in more detail, by dropping tethered corers from a ship to extract seafloor sediments that can be radiocarbon dated.”
Note: This story has been adapted from a news release issued by the University of Sheffield

New model for how Nevada gold deposits formed may help in gold exploration

Barrick Gold Corporation’s large open pit at its Goldstrike Mine on the Carlin Trend. The mine has Carlin-type gold deposits, the formation of which has been newly modeled by University of Nevada researchers. – Photo by John Muntean, University of Nevada, Reno and its public service department, the Nevada Bureau of Mines and Geology.
A team of University of Nevada, Reno and University of Nevada, Las Vegas researchers have devised a new model for how Nevada’s gold deposits formed, which may help in exploration efforts for new gold deposits.

The deposits, known as Carlin-type gold deposits, are characterized by extremely fine-grained nanometer-sized particles of gold adhered to pyrite over large areas that can extend to great depths. More gold has been mined from Carlin-type deposits in Nevada in the last 50 years – more than $200 billion worth at today’s gold prices – than was ever mined during the California gold rush of the 1800s.

This current Nevada gold boom started in 1961 with the discovery of the Carlin gold mine, near the town of Carlin, at a spot where the early westward-moving prospectors missed the gold because it was too fine-grained to be readily seen. Since the 1960s, geologists have found clusters of these “Carlin-type” deposits throughout northern Nevada.
They constitute, after South Africa, the second largest concentration of gold on Earth. Despite their importance, geologists have argued for decades about how they formed.
“Carlin-type deposits are unique to Nevada in that they represent a perfect storm of Nevada’s ideal geology – a tectonic trigger and magmatic processes, resulting in extremely efficient transport and deposition of gold,” said John Muntean, a research economic geologist with the Nevada Bureau of Mines and Geology at the University of Nevada, Reno and previously an industry geologist who explored for gold in Nevada for many years.
“Understanding how these deposits formed is important because most of the deposits that cropped out at the surface have likely been found. Exploration is increasingly targeting deeper deposits. Such risky deep exploration requires expensive drilling.
“Our model for the formation of Carlin-type deposits may not directly result in new discoveries, but models for gold deposit formation play an important role in how companies explore by mitigating risk. Knowing how certain types of gold deposits form allows one to be more predictive by evaluating whether ore-forming processes operated in the right geologic settings. This could lead to identification of potential new areas of discovery.”
Muntean collaborated with researchers from the University of Nevada, Las Vegas: Jean Cline, a professor of geology at UNLV and a leading authority on Carlin-type gold deposits; Adam Simon, an assistant professor of geoscience who provided new experimental data and his expertise on the interplay between magmas and ore deposits; and Tony Longo, a post-doctoral fellow who carried out detailed microanalyses of the ore minerals.
The team combined decades of previous studies by research and industry geologists with new data of their own to reach their conclusions, which were written about in the Jan. 23 early online issue of Nature Geoscience magazine and will appear in the February printed edition. The team relates formation of the gold deposits to a change in plate tectonics and a major magma event about 40 million years ago. It is the most complete explanation for Carlin-type gold deposits to date.
“Our model won’t be the final word on Carlin-type deposits,” Muntean said. “We hope it spurs new research in Nevada, especially by people who may not necessarily be ore deposit geologists.”
Note: This story has been adapted from a news release issued by the University of Nevada, Reno

A clearer picture of how rivers and deltas develop

This is a schematic model of a river-coast system. – Geleynse et al
By adding information about the subsoil to an existing sedimentation and erosion model, researchers at Delft University of Technology (TU Delft, The Netherlands) have obtained a clearer picture of how rivers and deltas develop over time. A better understanding of the interaction between the subsoil and flow processes in a river-delta system can play a key role in civil engineering (delta management), but also in geology (especially in the work of reservoir geologists). Nathanaël Geleynse et al. recently published their findings in the journals Geophysical Research Letters and Earth and Planetary Science Letters.
Model
Many factors are involved in how a river behaves and the creation of a river delta. Firstly, of course, there is the river itself. What kind of material does it transport to the delta? Does this material consist of small particles (clay) or larger particles (sand)? But other important factors include the extent of the tidal differences at the coast and the height of the waves whipped up by the wind. In this study, researchers at TU Delft are working together with Deltares and making use of the institute’s computer models (Delft3D software). These models already take a large number of variables into account. Geleynse et al. have now supplemented them with information on the subsoil. It transpires that this variable also exerts a significant influence on how the river behaves and the closely related process of delta formation.
Room for the River

The extra dimension that Geleynse et al. have added to the model is important to delta management, among other things. If – as the Delta Commission recommends – we should be creating “Room for the River”, it is important to know what a river will do with that space. Nathanaël Geleynse explains: “Existing data do not enable us to give ready-made answers to specific management questions … nature is not so easily tamed … but they do offer plausible explanations for the patterns and shapes we see on the surface. The flow system carries the signature of the subsoil, something we were relatively unaware of until now. Our model provides ample scope for further development and for studying various scenarios in the current structure.”

Geological information
River management is all about short-term and possible future scenarios. But the model developed by Geleynse et al. also offers greater insight into how a river/delta has developed over thousands of years. What might the subsoil have looked like and – a key factor for the oil industry – where might you expect to find oil reserves and what might their geometrical characteristics be? In combination with data from a limited number of core samples and other local measurements, the model can give a more detailed picture of the area in question than was possible until now.
The link between the creation of the delta and the structure of the delta subsoil is also of interest to engineers who wish to build there. Hundreds of millions of people across the globe live in deltas and these urban deltas are only expected to grow in the decades to come.
 
Note: This story has been adapted from a news release issued by the Delft University of Technology

Researcher says the next large central US earthquake may not be in New Madrid

Liu on the site of May 2008 Wenchuan earthquake in the Sichuan province of China, where more than 90,000 people died. Credit: Image courtesy of University of Missouri-Columbia

This December marks the bicentennial of the New Madrid earthquakes of 1811-12, which are the biggest earthquakes known to have occurred in the central U.S.

Now, based on the earthquake record in China, a University of Missouri researcher says that mid-continent earthquakes tend to move among fault systems, so the next big earthquake in the central U.S. may actually occur someplace else other than along the New Madrid faults.

Mian Liu, professor of geological sciences in the College of Arts and Science at MU, examined records from China, where earthquakes have been recorded and described for the past 2,000 years.

Surprisingly, he found that during this time period big earthquakes have never occurred twice in the same place.

“In North China, where large earthquakes occur relatively frequently, not a single one repeated on the same fault segment in the past two thousand years,” Liu said. “So we need to look at the ‘big picture’ of interacting faults, rather than focusing only on the faults where large earthquakes occurred in the recent past.”

Mid-continent earthquakes, such as the ones that occurred along the New Madrid faults, occur on a complicated system of interacting faults spread throughout a large region.

A large earthquake on one fault can increase the stress on other faults, making some of them more likely to have a major earthquake. The major faults may stay dormant for thousands of years and then wake up to have a short period of activity.

Along with co-authors Seth Stein, a professor of earth and planetary sciences at Northwestern University, and Hui Wang, a Chinese Earthquake Administration researcher, Liu believes this discovery will provide valuable information about the patterns of earthquakes in the central and eastern United States, northwestern Europe, and Australia. The results have been published in the journal Lithosphere.

“The New Madrid faults in the central U.S., for example, had three to four large events during 1811-12, and perhaps a few more in the past thousand years. This led scientists to believe that more were on the way,” Stein said. “However, high-precision Global Positioning System (GPS) measurements in the past two decades have found no significant strain in the New Madrid area.

“The China results imply that the major earthquakes at New Madrid may be ending, as the pressure will eventually shift to another fault.”

While this study shows that mid-continent earthquakes seem to be more random than previously thought, the researchers believe it actually helps them better understand these seismic events.

“The rates of earthquake energy released on the major fault zones in North China are complementary,” Wang said. “Increasing seismic energy release on one fault zone was accompanied by decreasing energy on the others. This means that the fault zones are coupled mechanically.”

Studying fault coupling with GPS measurements, earthquake history, and computer simulation will allow the scientists to better understand the mysterious mid-continent earthquakes.

“What we’ve discovered about mid-continent earthquakes won’t make forecasting them any easier, but it should help,” Liu said.

Note: This story has been adapted from a news release issued by the University of Missouri-Columbia

Cave reveals Southwest’s abrupt climate swings during Ice Age

Sarah Truebe, a geosciences doctoral student at the University of Arizona, checks on an experiment that measures how fast cave formations grow in Arizona’s Cave of the Bells. – Copyright 2010 Stella Cousins.
Ice Age climate records from an Arizona stalagmite link the Southwest’s winter precipitation to temperatures in the North Atlantic, according to new research.
 
The finding is the first to document that the abrupt changes in Ice Age climate known from Greenland also occurred in the southwestern U.S., said co-author Julia E. Cole of the University of Arizona in Tucson.
“It’s a new picture of the climate in the Southwest during the last Ice Age,” said Cole, a UA professor of geosciences. “When it was cold in Greenland, it was wet here, and when it was warm in Greenland, it was dry here.”

The researchers tapped into the natural climate archives recorded in a stalagmite from a limestone cave in southern Arizona. Stalagmites grow up from cave floors.
The stalagmite yielded an almost continuous, century-by-century climate record spanning 55,000 to 11,000 years ago. During that time ice sheets covered much of North America, and the Southwest was cooler and wetter than it is now.

Cole and her colleagues found the Southwest flip-flopped between wet and dry periods during the period studied.

Each climate regime lasted from a few hundred years to more than one thousand years, she said. In many cases, the transition from wet to dry or vice versa took less than 200 years.

“These changes are part of a global pattern of abrupt changes that were first documented in Greenland ice cores,” she said. “No one had documented those changes in the Southwest before.”

Scientists suggest that changes in the northern Atlantic Ocean’s circulation drove the changes in Greenland’s Ice Age climate, Cole said. “Those changes resulted in atmospheric changes that pushed around the Southwest’s climate.”
 
She added that observations from the 20th and 21st centuries link modern-day alterations in the North Atlantic’s temperature with changes in the storm track that controls the Southwest’s winter precipitation.

“Also, changes in the storm track are the kinds of changes we expect to see in a warming world,” she said. “When you warm the North Atlantic, you move the storm track north.”
The team’s paper, “Moisture Variability in the Southwestern U.S. Linked to Abrupt Glacial Climate Change,” is scheduled for publication in the February issue of Nature Geoscience.

Cole’s UA co-authors are Jennifer D. M. Wagner, J. Warren Beck, P. Jonathan Patchett and Heidi R. Barnett. Co-author Gideon M. Henderson is from the University of Oxford, U.K.
 
Cole became interested in studying cave formations as natural climate archives about 10 years ago. At the suggestion of some local cave specialists, she and her students began working in the Cave of the Bells, an active limestone cave in the Santa Rita Mountains.

In such a cave, mineral-rich water percolates through the soil into the cave below and onto its floor. As the water loses carbon dioxide, the mineral known as calcium carbonate is left behind. As the calcium carbonate accumulates in the same spot on the cave floor over thousands of years, it forms a stalagmite.

The researchers chose the particular stalagmite for study because it was deep enough in the cave that the humidity was always high, an important condition for preservation of climate records, Cole said. Following established cave conservation protocols, the researchers removed the formation, which was less than 18 inches tall.

For laboratory analyses, first author Wagner took a core about one inch in diameter from the center of the stalagmite. The scientists then returned the formation to the cave, glued it back into its previous location with special epoxy and capped it with a limestone plug.

To read the climate record preserved in the stalagmite, Wagner sliced the core lengthwise several times for several different analyses.

On one slice, she shaved more than 1,200 hair-thin, 100-micron samples and measured which types of oxygen atoms each one contained.

A rare form of oxygen, oxygen-18, is more common in the calcium carbonate deposited during dry years. By seeing how much oxygen-18 was present in each layer, the scientists could reconstruct the region’s pattern of wet and dry climate.

To assign dates to each wet and dry period, Wagner used another slice of the core for an analysis called uranium-thorium dating.

The radioactive element uranium is present in minute amounts in the water dripping onto a stalagmite. The uranium then becomes part of the formation. Uranium decays into the element thorium at a steady and known rate, so its decay rate can be used to construct a timeline of a stalagmite’s growth.
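
A simplified, textbook form of that clock can be sketched as follows. It assumes no initial thorium and uranium isotopes in equilibrium, which is not necessarily the authors’ full dating procedure, and the half-life used is an approximate standard value not quoted in the article.

```python
# Simplified uranium-thorium age: the 230Th/234U activity ratio grows toward 1
# as R(t) = 1 - exp(-lambda * t), so t = -ln(1 - R) / lambda.
import math

TH230_HALF_LIFE_YR = 75_600.0                   # approximate 230Th half-life, years
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR   # decay constant, 1/yr

def age_from_activity_ratio(ratio_230_234: float) -> float:
    """Age in years from a measured 230Th/234U activity ratio (0 < R < 1)."""
    return -math.log(1.0 - ratio_230_234) / LAMBDA_230

# A layer with an activity ratio of 0.3 would be roughly 39,000 years old:
print(f"{age_from_activity_ratio(0.3):,.0f} years")
```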

By matching the stalagmite’s growth timeline with the sequence of wet and dry periods revealed by the oxygen analyses, the researchers could tell in century-by-century detail when the Southwest was wet and when it was dry.

“This work shows the promise of caves for providing climate records for the Southwest. It’s a new kind of climate record for this region,” Cole said.

She and her colleagues are now expanding their efforts by sampling other cave formations in the region.
 
Note: This story has been adapted from a news release issued by the University of Arizona

Earth’s hot past: Prologue to future climate?

If carbon dioxide emissions continue on their current trajectory, Earth may someday return to an ancient, hotter climate when the Antarctic ice sheet didn’t exist. – NOAA
The magnitude of climate change during Earth’s deep past suggests that future temperatures may eventually rise far more than projected if society continues its pace of emitting greenhouse gases, a new analysis concludes.
 
The study, by National Center for Atmospheric Research (NCAR) scientist Jeffrey Kiehl, will appear as a “Perspectives” article in this week’s issue of the journal Science.
The work was funded by the National Science Foundation (NSF), NCAR’s sponsor.

Building on recent research, the study examines the relationship between global temperatures and high levels of carbon dioxide in the atmosphere tens of millions of years ago.

It warns that, if carbon dioxide emissions continue at their current rate through the end of this century, atmospheric concentrations of the greenhouse gas will reach levels that existed about 30 million to 100 million years ago.

Global temperatures then averaged about 29 degrees Fahrenheit (16 degrees Celsius) above pre-industrial levels.

Kiehl said that global temperatures may take centuries or millennia to fully adjust in response to the higher carbon dioxide levels.

According to the study, which draws on recent computer model studies of geochemical processes, elevated levels of carbon dioxide may remain in the atmosphere for tens of thousands of years.

The study also indicates that the planet’s climate system, over long periods of times, may be at least twice as sensitive to carbon dioxide as currently projected by computer models, which have generally focused on shorter-term warming trends.

This is largely because even sophisticated computer models have not yet been able to incorporate critical processes, such as the loss of ice sheets, that take place over centuries or millennia and amplify the initial warming effects of carbon dioxide.

“If we don’t start seriously working toward a reduction of carbon emissions, we are putting our planet on a trajectory that the human species has never experienced,” says Kiehl, a climate scientist who specializes in studying global climate in Earth’s geologic past.

“We will have committed human civilization to living in a different world for multiple generations.”

The Perspectives article pulls together several recent studies that look at various aspects of the climate system, while adding a mathematical approach by Kiehl to estimate average global temperatures in the distant past.

Its analysis of the climate system’s response to elevated levels of carbon dioxide is supported by previous studies that Kiehl cites.

“This research shows that squaring the evidence of environmental change in the geologic record with mathematical models of future climate is crucial,” says David Verardo, Director of NSF’s Paleoclimate Program. “Perhaps Shakespeare’s words that ‘what’s past is prologue’ also apply to climate.”

Kiehl focused on a fundamental question: when was the last time Earth’s atmosphere contained as much carbon dioxide as it may by the end of this century?

If society continues its current pace of increasing the burning of fossil fuels, atmospheric levels of carbon dioxide are expected to reach about 900 to 1,000 parts per million by the end of this century.
 
That compares with current levels of about 390 parts per million, and pre-industrial levels of about 280 parts per million.

Since carbon dioxide is a greenhouse gas that traps heat in Earth’s atmosphere, it is critical for regulating Earth’s climate.

Without carbon dioxide, the planet would freeze over.
 
But as atmospheric levels of the gas rise, which has happened at times in the geologic past, global temperatures increase dramatically and additional greenhouse gases, such as water vapor and methane, enter the atmosphere through processes related to evaporation and thawing.

This leads to further heating.

Kiehl drew on recently published research that, by analyzing molecular structures in fossilized organic materials, showed that carbon dioxide levels likely reached 900 to 1,000 parts per million about 35 million years ago.

At that time, temperatures worldwide were substantially warmer than at present, especially in polar regions–even though the Sun’s energy output was slightly weaker.

The high levels of carbon dioxide in the ancient atmosphere kept the tropics at about 9-18 F (5-10 C) above present-day temperatures.

The polar regions were some 27-36 F (15-20 C) above present-day temperatures.

Kiehl applied mathematical formulas to calculate that Earth’s average annual temperature 30 to 40 million years ago was about 88 F (31 C)–substantially higher than the pre-industrial average temperature of about 59 F (15 C).

The study also found that carbon dioxide may have two times or more an effect on global temperatures than currently projected by computer models of global climate.

The world’s leading computer models generally project that a doubling of carbon dioxide in the atmosphere would have a heating impact in the range of 0.5 to 1.0 degrees Celsius per watt per square meter. (The unit is a measure of the sensitivity of Earth’s climate to changes in greenhouse gases.)

However, the published data show that the comparable impact of carbon dioxide 35 million years ago amounted to about 2 degrees Celsius per watt per square meter.
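
A rough worked example shows why a sensitivity of 2 degrees Celsius per watt per square meter implies roughly double the projected warming. It uses the standard logarithmic approximation for carbon dioxide forcing (about 3.7 watts per square meter for a doubling), a textbook value supplied here for illustration rather than taken from the article.

```python
# Multiply the forcing of a CO2 doubling by the sensitivity parameter
# (degrees C per W/m^2) to get the implied long-term warming.
import math

def co2_forcing_w_m2(c_ppm: float, c0_ppm: float) -> float:
    """Approximate radiative forcing (W/m^2) of raising CO2 from c0 to c."""
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling_forcing = co2_forcing_w_m2(560.0, 280.0)   # ~3.7 W/m^2

for sensitivity in (0.5, 1.0, 2.0):                 # degrees C per (W/m^2)
    warming = sensitivity * doubling_forcing
    print(f"{sensitivity:.1f} C/(W/m^2) -> ~{warming:.1f} C per CO2 doubling")
```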

Computer models successfully capture the short-term effects of increasing carbon dioxide in the atmosphere.

But the record from Earth’s geologic past also encompasses longer-term effects, which accounts for the discrepancy in findings.

The eventual melting of ice sheets, for example, leads to additional heating because exposed dark surfaces of land or water absorb more heat than ice sheets.

“This analysis shows that on longer time scales, our planet may be much more sensitive to greenhouse gases than we thought,” Kiehl says.

Climate scientists are currently adding more sophisticated depictions of ice sheets and other factors to computer models.

As these improvements come on-line, Kiehl believes that the computer models and the paleoclimate record will be in closer agreement, showing that the impacts of carbon dioxide on climate over time will likely be far more substantial than recent research has indicated.

Because carbon dioxide is being pumped into the atmosphere at a rate that has never been experienced, Kiehl could not estimate how long it would take for the planet to fully heat up.
However, a rapid warm-up would make it especially difficult for societies and ecosystems to adapt, he says.

If emissions continue on their current trajectory, “the human species and global ecosystems will be placed in a climate state never before experienced in human history,” the paper states.
 
 
Note: This story has been adapted from a news release issued by the National Science Foundation

Hydrocarbons in the deep Earth?

The oil and gas that fuels our homes and cars started out as living organisms that died, were compressed, and heated under heavy layers of sediments in the Earth’s crust. Scientists have debated for years whether some of these hydrocarbons could also have been created deeper in the Earth and formed without organic matter. Now, for the first time, scientists have found that ethane and heavier hydrocarbons can be synthesized under the pressure-temperature conditions of the upper mantle – the layer of Earth under the crust and on top of the core. The research was conducted by scientists at the Carnegie Institution’s Geophysical Laboratory, with colleagues from Russia and Sweden, and is published in the July 26 advance online issue of Nature Geoscience.
Methane (CH4) is the main constituent of natural gas, while ethane (C2H6) is used as a petrochemical feedstock. Both of these hydrocarbons, and others associated with fuel, are called saturated hydrocarbons because they have simple, single bonds and are saturated with hydrogen. Using a diamond anvil cell and a laser heat source, the scientists first subjected methane to pressures exceeding 20 thousand times the atmospheric pressure at sea level and temperatures ranging from 1,300°F to over 2,240°F. These conditions mimic those found 40 to 95 miles deep inside the Earth. The methane reacted and formed ethane, propane, butane, molecular hydrogen, and graphite. The scientists then subjected ethane to the same conditions and it produced methane. The transformations suggest heavier hydrocarbons could exist deep down. The reversibility implies that the synthesis of saturated hydrocarbons is thermodynamically controlled and does not require organic matter.

The scientists ruled out the possibility that catalysts used as part of the experimental apparatus were at work, but they acknowledge that catalysts could be involved in the deep Earth with its mix of compounds.

“We were intrigued by previous experiments and theoretical predictions,” remarked Carnegie’s Alexander Goncharov, a coauthor. “Experiments reported some years ago subjected methane to high pressures and temperatures and found that heavier hydrocarbons formed from methane under very similar pressure and temperature conditions. However, the molecules could not be identified and a distribution was likely. We overcame this problem with our improved laser-heating technique where we could cook larger volumes more uniformly. And we found that methane can be produced from ethane.”

 
The hydrocarbon products did not change for many hours, but the tell-tale chemical signatures began to fade after a few days.

Professor Kutcherov, a coauthor, put the finding into context: “The notion that hydrocarbons generated in the mantle migrate into the Earth’s crust and contribute to oil-and-gas reservoirs was promoted in Russia and Ukraine many years ago. The synthesis and stability of the compounds studied here as well as heavier hydrocarbons over the full range of conditions within the Earth’s mantle now need to be explored. In addition, the extent to which this ‘reduced’ carbon survives migration into the crust needs to be established (e.g., without being oxidized to CO2). These and related questions demonstrate the need for a new experimental and theoretical program to study the fate of carbon in the deep Earth.”
 
Note: This story has been adapted from a news release issued by the Carnegie Institution

Mountain glacier melt to contribute 12 centimeters to world sea-level increases by 2100

A huge piece of ice breaking off the 80m high Glaciar Perito Moreno, El Calafate, Argentina
Melt off from small mountain glaciers and ice caps will contribute about 12 centimetres to world sea-level increases by 2100, according to UBC research published this week in Nature Geoscience.
The largest contributors to projected global sea-level increases are glaciers in Arctic Canada, Alaska and landmass-bound glaciers in the Antarctic. Glaciers in the European Alps, New Zealand, the Caucasus, Western Canada and the Western United States – though small absolute contributors to global sea-level increases – are projected to lose more than 50 per cent of their current ice volume.

The study modelled volume loss and melt off from 120,000 mountain glaciers and ice caps, and is one of the first to provide detailed projections by region. Currently, melt from smaller mountain glaciers and ice caps is responsible for a disproportionally large portion of sea level increases, even though they contain less than one per cent of all water on Earth bound in glacier ice.

“There is a lot of focus on the large ice sheets but very few global scale studies quantifying how much melt to expect from these smaller glaciers that make up about 40 percent of the entire sea-level rise that we observe right now,” says Valentina Radic, a postdoctoral researcher with the Department of Earth and Ocean Sciences and lead author of the study.
Increases in sea levels caused by the melting of the Greenland and Antarctic ice sheets, and the thermal expansion of water, are excluded from the results.
Radic and colleague Regine Hock at the University of Alaska, Fairbanks, modeled future glacier melt based on temperature and precipitation projections from 10 global climate models used by the Intergovernmental Panel on Climate Change.
 

“While the overall sea level increase projections in our study are on par with IPCC studies, our results are more detailed and regionally resolved,” says Radic. “This allows us to get a better picture of projected regional ice volume change and potential impacts on local water supplies, and changes in glacier size distribution.”

Global projections of sea level rises from mountain glacier and ice cap melt from the IPCC range between seven and 17 centimetres by the end of 2100. Radic’s projections are only slightly higher, in the range of seven to 18 centimetres.

Radic’s projections don’t include glacier calving–the production of icebergs. Calving of tide-water glaciers may account for 30 per cent to 40 per cent of their total mass loss.

“Incorporating calving into the models of glacier mass changes on regional and global scale is still a challenge and a major task for future work,” says Radic.

However, the new projections include detailed projection of melt off from small glaciers surrounding the Greenland and Antarctic ice sheets, which have so far been excluded from, or only estimated in, global assessments.
 
Note: This story has been adapted from a news release issued by the University of British Columbia

Sulphur proves important in the formation of gold mines

 
Collaborating with an international research team, an economic geologist from The University of Western Ontario has discovered how gold-rich magma is produced, unveiling an all-important step in the formation of gold mines.
The findings were published in the December issue of Nature Geoscience.
Robert Linnen, the Robert Hodder Chair in Economic Geology in Western’s Department of Earth Sciences, conducts research near Kirkland Lake, Ontario, and says the results of the study could lead to a breakthrough in choosing geographic targets for gold exploration and making exploration more successful.

Noble metals, like gold, are transported by magma from deep within the mantle (below the surface) of the Earth to the shallow crust (the surface), where they form deposits. Through a series of experiments, Linnen and his colleagues from the University of Hannover (Germany), the University of Potsdam (Germany) and Laurentian University found that gold-rich magma can be generated in mantle also containing high amounts of sulphur.

“Sulphur wasn’t recognized as being that important, but we found it actually enhances gold solubility and solubility is a very important step in forming a gold deposit,” explains Linnen. “In some cases, we were detecting eight times the amount of gold if sulphur was also present.”
Citing the World Gold Council, Linnen says the best estimates available suggest the total volume of gold mined up to the end of 2009 was approximately 165,600 tonnes. Approximately 65 per cent of that total has been mined since 1950.
“All the easy stuff has been found,” offers Linnen. “So when you project to the future, we’re going to have to come up with different ways, different technologies and different philosophies for finding more resources because the demand for resources is ever-increasing.”
Note: This story has been adapted from a news release issued by the University of Western Ontario

Widespread ancient ocean ‘dead zones’ challenged early life

The oceans became oxygen-rich as they are today about 600 million years ago, during Earth’s Late Ediacaran Period. Until recently, most scientists believed that for the preceding four billion years the ancient oceans had been relatively oxygen-poor.
Now biogeochemists at the University of California-Riverside (UCR) have found evidence that the oceans went back to being “anoxic,” or oxygen-poor, around 499 million years ago, soon after the first appearance of animals on the planet.
They remained anoxic for two to four million years.
The researchers suggest that such anoxic conditions may have been commonplace over a much broader interval of time.

“This work is important at many levels, from the steady growth of atmospheric oxygen in the last 600 million years, to the potential impact of oxygen level fluctuations on early evolution and diversification of life,” said Enriqueta Barrera, program director in the National Science Foundation (NSF)’s Division of Earth Sciences, which funded the research.

The researchers argue that such fluctuations in the oceans’ oxygen levels are the most likely explanation for what drove the explosive diversification of life forms and rapid evolutionary turnover that marked the Cambrian Period some 540 to 488 million years ago.
They report in this week’s issue of the journal Nature that the transition from a generally oxygen-rich ocean during the Cambrian to the fully oxygenated ocean we have today was not a simple turn of the switch, as has been widely accepted until now.
“Our research shows that the ocean fluctuated between oxygenation states 499 million years ago,” said Timothy Lyons, a UCR biogeochemist and co-author of the paper.
“Such fluctuations played a major, perhaps dominant, role in shaping the early evolution of animals on the planet by driving extinction and clearing the way for new organisms to take their place.”
Oxygen is necessary for animal survival, but not for the many bacteria that thrive in, and even require, oxygen-free conditions.

Understanding how the environment changed over the course of Earth’s history can give scientists clues to how life evolved and flourished during the critical, very early stages of animal evolution.

“Life and the environment in which it lives are intimately linked,” said Benjamin Gill, the first author of the paper, a biogeochemist at UCR, and currently a postdoctoral researcher at Harvard University.

When the ocean’s oxygenation states changed rapidly in Earth’s history, some organisms were not able to cope.
Oceanic oxygen affects cycles of other biologically important elements such as iron, phosphorus and nitrogen.
“Disruption of these cycles is another way to drive biological crises,” Gill said. “A switch to an oxygen-poor state of the ocean can cause major extinction of species.”
The researchers are now working to find an explanation for why the oceans became oxygen-poor about 499 million years ago.
“We have the ‘effect,’ but not the ‘cause,’” said Gill.
“The oxygen-poor state persisted likely until the enhanced burial of organic matter, originally derived from oxygen-producing photosynthesis, resulted in the accumulation of more oxygen in the atmosphere and ocean.
“As a kind of negative feedback, the abundant burial of organic material facilitated by anoxia may have bounced the ocean to a more oxygen-rich state.”
Understanding past events in Earth’s distant history can help refine our view of changes happening on the planet now, said Gill.
“Today, some sections of the world’s oceans are becoming oxygen-poor–the Chesapeake Bay (surrounded by Maryland and Virginia) and the so-called ‘dead zone’ in the Gulf of Mexico are just two examples,” he said.
“We know the Earth went through similar scenarios in the past. Understanding the ancient causes and consequences can provide essential clues to what the future has in store for our oceans.”

The team examined the carbon, sulfur and molybdenum contents of rocks they collected from localities in the United States, Sweden, and Australia.

Combined, these analyses allowed the scientists to infer the amount of oxygen present in the ocean at the time the limestones and shales were deposited.
By looking at successive rock layers, they were able to compile the biogeochemical history of the ocean.
Note: This story has been adapted from a news release issued by the National Science Foundation

Hot stuff: Magma at shallow depth under Hawaii

Ohio State University researchers have found a new way to gauge the depth of the magma chamber that forms the Hawaiian Island volcanic chain, and determined that the magma lies much closer to the surface than previously thought.
The finding could help scientists predict when Hawaiian volcanoes are going to erupt. It also suggests that Hawaii holds great potential for thermal energy.
Julie Ditkof, an honors undergraduate student in earth sciences at Ohio State, described the study at the American Geophysical Union Meeting in San Francisco on Tuesday, December 14.
For her honors thesis, Ditkof took a technique that her advisor Michael Barton, professor of earth sciences, developed to study magma in Iceland, and applied it to Hawaii.
She discovered that magma lies an average of 3 to 4 kilometers (about 1.9 to 2.5 miles) beneath the surface of Hawaii.

“Hawaii was already unique among volcanic systems, because it has such an extensive plumbing system, and the magma that erupts has a unique and variable chemical composition,” Ditkof explained. “Now we know the chamber is at a shallow depth not seen anywhere else in the world.”

For example, Barton determined that magma chambers beneath Iceland lie at an average depth of 20 kilometers.
While that means the crust beneath Hawaii is much thinner than the crust beneath Iceland, Hawaiians have nothing to fear.
“The crust in Hawaii has been solidifying from eruptions for more than 300,000 years now. The crust doesn’t get consumed by the magma chamber. It floats on top,” Ditkof explained.
The results could help settle two scientific debates, however.
Researchers have wondered whether more than one magma chamber was responsible for the varying chemical compositions, even though seismological studies indicated only one chamber was present.
Meanwhile, those same seismological studies pegged the depth as shallow, while petrologic studies – studies of rock composition – pegged it deeper.
There has never been a way to prove who was right, until now.
“We suspected that the depth was actually shallow, but we wanted to confirm or deny all those other studies with hard data,” Barton said.
He and Ditkof determined that there is one large magma chamber just beneath the entire island chain that feeds the Hawaiian volcanoes through many different conduits.
They came to this conclusion after Ditkof analyzed the chemical composition of nearly 1,000 magma samples. From the ratio of some elements to others – aluminum to calcium, for example, or calcium to magnesium – she was able to calculate the pressure at which the magma had crystallized.
For his studies of Iceland, Barton created a methodology for converting those pressure calculations to depth. When Ditkof applied that methodology, she obtained an average depth of 3 to 4 kilometers.
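As a rough illustration of the pressure-to-depth step only (this is not Barton’s actual calibration), a crystallization pressure can be converted to depth by assuming a simple lithostatic column of rock, with density and pressure values chosen here purely for illustration:

```python
# Minimal sketch: convert a crystallization pressure to depth, assuming a
# lithostatic column (depth = P / (rho * g)). The density and the example
# pressure are illustrative assumptions, not the calibration used by
# Barton and Ditkof.

RHO_CRUST = 2900.0   # kg/m^3, assumed average crustal density
G = 9.81             # m/s^2

def pressure_to_depth_km(pressure_mpa: float) -> float:
    """Convert crystallization pressure (MPa) to depth (km)."""
    pressure_pa = pressure_mpa * 1.0e6
    depth_m = pressure_pa / (RHO_CRUST * G)
    return depth_m / 1000.0

# Under these assumptions, ~100 MPa corresponds to roughly 3.5 km depth,
# in the same range as the 3 to 4 km average reported for Hawaii.
print(round(pressure_to_depth_km(100.0), 1))
```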
Researchers could use this technique to regularly monitor pressures inside the chamber and make more precise estimates of when eruptions are going to occur.
Barton said that, ultimately, the finding might be more important in terms of energy.
“Hawaii has huge geothermal resources that haven’t been tapped fully,” he said, and quickly added that scientists would have to determine whether tapping that energy was practical – or safe.
“You’d have to drill some test bore holes. That’s dangerous on an active volcano, because then the lava could flow down and wipe out your drilling rig.”
Note: This story has been adapted from a news release issued by the Ohio State University

Ancient raindrops reveal a wave of mountains sent south by sinking Farallon plate

Fifty million years ago, mountains began popping up in southern British Columbia. Over the next 22 million years, a wave of mountain building swept (geologically speaking) down western North America as far south as Mexico and as far east as Nebraska, according to Stanford geochemists. Their findings help put to rest the idea that the mountains mostly developed from a vast, Tibet-like plateau that rose up across most of the western U.S. roughly simultaneously and then subsequently collapsed and eroded into what we see today.
The data providing the insight into the mountains – so popularly renowned for durability – came from one of the most ephemeral of sources: raindrops. Or more specifically, the isotopic residue – fingerprints, effectively – of ancient precipitation that rained down upon the American west between 65 and 28 million years ago.
Atoms of the same element but with different numbers of neutrons in their nuclei are called isotopes. More neutrons make for a heavier atom, and as a cloud rises, the water molecules that contain the heavier isotopes of hydrogen and oxygen tend to fall first. By measuring the ratio of heavy to light isotopes in the long-ago rainwater, researchers can infer the elevation of the land when the raindrops fell.
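A schematic of that inference (the study’s actual reconstruction is more involved): a shift in the oxygen-isotope composition of ancient rainwater can be converted into an elevation change using an assumed isotopic “lapse rate.” Both the lapse rate and the example shift below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: estimate an elevation change from a shift in the
# oxygen-isotope composition (delta-18O) of ancient precipitation.
# The lapse rate is an assumed, illustrative value; published work uses
# more sophisticated isotope-elevation relationships.

ASSUMED_LAPSE_RATE = -2.8  # per-mil change in delta-18O per km of elevation gain

def elevation_change_km(delta18o_shift_permil: float) -> float:
    """Estimate elevation change (km) from a delta-18O shift (per mil)."""
    return delta18o_shift_permil / ASSUMED_LAPSE_RATE

# Example: a -5 per-mil shift would imply roughly 1.8 km of uplift
# under this assumed lapse rate.
print(round(elevation_change_km(-5.0), 1))
```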
 
The water becomes incorporated into clays and carbonate minerals on the surface, or in volcanic glass, which are then preserved for the ages in the sediments.
 
Hari Mix, a PhD candidate in Environmental Earth System Science at Stanford, worked with the analyses of about 2,800 samples – several hundred that he and his colleagues collected, the rest from published studies – and used the isotopic ratios to calculate the composition of the ancient rain. Most of the samples were from carbonate deposits in ancient soils and lake sediments, taken from dozens of basins around the western U.S.
 
Using the elevation trends revealed in the data, Mix was able to decipher the history of the mountains. “Where we got a huge jump in isotopic ratios, we interpret that as a big uplift,” he said.

“We saw a major isotopic shift at around 49 million years ago, in southwest Montana,” he said. “And another one at 39 mya, in northern Nevada” as the uplift moved southward. Previous work by the group of Page Chamberlain, Mix’s advisor at Stanford, had found evidence for these shifts in data from two basins, but Mix’s work with the larger data set demonstrated that the pattern of uplift held across the entire western U.S.
 
The uplift is generally agreed to have begun when the Farallon plate, a tectonic plate that was being shoved under the North American plate, slowly began peeling away from the underside of the continent.

“The peeling plate looked sort of like a tongue curling down,” said Chamberlain, a professor in environmental Earth system science.
 
As hot material from the underlying mantle flowed into the gap between the peeling plates, the heat and buoyancy of the material caused the overlying land to rise in elevation. The peeling tongue continued to fall off, and hot mantle continued to flow in behind it, sending a slow-motion wave of mountain-building coursing southward.
“We knew that the Farallon plate fell away, but the geometry of how that happened and the topographic response to it is what has been debated,” Mix said.
 
Mix and Chamberlain estimate that the topographic wave would have been at least one to two kilometers higher than the landscape it rolled across and would have produced mountains with elevations up to a little over 4 kilometers (about 14,000 feet), comparable to the elevations existing today.
 
Mix said their isotopic data corresponds well with other types of evidence that have been documented.

“There was a big north to south sweep of volcanism through the western U.S. at the exact same time,” he said.
There was also a simultaneous extension of the Earth’s crust, which results when the crust is heated from below, as it would have been by the flow of hot magma under the North American plate.

“The pattern of topographic uplift we found matches what has been documented by other people in terms of the volcanology and extension,” Mix said.

“Those three things together, those patterns, all point to something going on with the Farallon plate as being responsible for the construction of the western mountain ranges, the Cordillera.”
Chamberlain said that while there was certainly elevated ground, it was not like Tibet.
“It was not an average elevation of 15,000 feet. It was something much more subdued,” he said.

“The main implication of this work is that it was not a plateau that collapsed, but rather something that happened in the mantle, that was causing this mountain growth,” Chamberlain said.
 
Note: This story has been adapted from a news release issued by Stanford University

First measurement of magnetic field in Earth’s core

A University of California, Berkeley, geophysicist has made the first-ever measurement of the strength of the magnetic field inside Earth’s core, 1,800 miles underground.

The magnetic field strength is 25 Gauss, or 50 times stronger than the magnetic field at the surface that makes compass needles align north-south. Though this number is in the middle of the range geophysicists predict, it puts constraints on the identity of the heat sources in the core that keep the internal dynamo running to maintain this magnetic field.
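For readers unfamiliar with the units, Gauss and Tesla differ by a fixed factor, and the quoted ratio can be checked directly. The 0.5 Gauss surface value below is simply the approximation implied by “50 times stronger,” used here for illustration:

```python
# Quick unit and ratio check: 1 Gauss = 1e-4 Tesla. The ~0.5 Gauss surface
# field is the approximate value implied by "50 times stronger".

core_field_gauss = 25.0
surface_field_gauss = 0.5  # approximate surface field strength

print(core_field_gauss / 1e4)                   # 0.0025 T, i.e. 2.5 millitesla
print(core_field_gauss / surface_field_gauss)   # 50.0, matching the quoted ratio
```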


“This is the first really good number we’ve had based on observations, not inference,” said author Bruce A. Buffett, professor of earth and planetary science at UC Berkeley. “The result is not controversial, but it does rule out a very weak magnetic field and argues against a very strong field.”

The results are published in the Dec. 16 issue of the journal Nature.
 
A strong magnetic field inside the outer core means there is a lot of convection and thus a lot of heat being produced, which scientists would need to account for, Buffett said. The presumed sources of energy are the residual heat from 4 billion years ago when the planet was hot and molten, release of gravitational energy as heavy elements sink to the bottom of the liquid core, and radioactive decay of long-lived elements such as potassium, uranium and thorium.
 
A weak field – 5 Gauss, for example – would imply that little heat is being supplied by radioactive decay, while a strong field, on the order of 100 Gauss, would imply a large contribution from radioactive decay.

“A measurement of the magnetic field tells us what the energy requirements are and what the sources of heat are,” Buffett said.
 
About 60 percent of the power generated inside the earth likely comes from the exclusion of light elements from the solid inner core as it freezes and grows, he said. This constantly builds up crud in the outer core.
 
The Earth’s magnetic field is produced in the outer two-thirds of the planet’s iron/nickel core. This outer core, about 1,400 miles thick, is liquid, while the inner core is a frozen iron and nickel wrecking ball with a radius of about 800 miles – roughly the size of the moon. The core is surrounded by a hot, gooey mantle and a rigid surface crust.
 
The cooling Earth originally captured its magnetic field from the planetary disk in which the solar system formed. That field would have disappeared within 10,000 years if not for the planet’s internal dynamo, which regenerates the field thanks to heat produced inside the planet. The heat makes the liquid outer core boil, or “convect,” and as the conducting metals rise and then sink through the existing magnetic field, they create electrical currents that maintain the magnetic field. This roiling dynamo produces a slowly shifting magnetic field at the surface.

“You get changes in the surface magnetic field that look a lot like gyres and flows in the oceans and the atmosphere, but these are being driven by fluid flow in the outer core,” Buffett said.
 
Buffett is a theoretician who uses observations to improve computer models of the earth’s internal dynamo. Now at work on a second generation model, he admits that a lack of information about conditions in the earth’s interior has been a big hindrance to making accurate models.
 
 
He realized, however, that the tug of the moon on the tilt of the earth’s spin axis could provide information about the magnetic field inside. This tug would make the inner core precess – that is, make the spin axis slowly rotate in the opposite direction – which would produce magnetic changes in the outer core that damp the precession. Radio observations of distant quasars – extremely bright, active galaxies – provide very precise measurements of the changes in the earth’s rotation axis needed to calculate this damping.

“The moon is continually forcing the rotation axis of the core to precess, and we’re looking at the response of the fluid outer core to the precession of the inner core,” he said.
 
By calculating the effect of the moon on the spinning inner core, Buffett discovered that the precession makes the slightly out-of-round inner core generate shear waves in the liquid outer core. These waves of molten iron and nickel move within a tight cone only 30 to 40 meters thick, interacting with the magnetic field to produce an electric current that heats the liquid. This serves to damp the precession of the rotation axis. The damping causes the precession to lag behind the moon as it orbits the earth. A measurement of the lag allowed Buffett to calculate the magnitude of the damping and thus of the magnetic field inside the outer core.
 
Buffett noted that the calculated field – 25 Gauss – is an average over the entire outer core. The field is expected to vary with position.
“I still find it remarkable that we can look to distant quasars to get insights into the deep interior of our planet,” Buffett said.
 
Note: This story has been adapted from a news release issued by the University of California – Berkeley

New way found of monitoring volcanic ash cloud

The eruption of the Icelandic volcano Eyjafjallajökull in April this year resulted in a giant ash cloud, which – at one point covering most of Europe – brought international aviation to a temporary standstill and caused travel chaos for tens of thousands of travelers.
New research, to be published today, Friday 10 December, in IOP Publishing’s Environmental Research Letters, shows that lightning could be used as part of an integrated approach to estimate volcanic plume properties.
The scientists found that during many of the periods of significant volcanic activity, the ash plume was sufficiently electrified to generate lightning, which was measured by the UK Met Office’s long range lightning location network (ATDnet), operating in the Very Low Frequency radio spectrum.
The measurements suggest a general correlation between lightning frequency and plume height, and the method has the advantage that lightning can be detected many thousands of kilometers away, by day or night and in all weather conditions.

As the researchers write, “When a plume becomes sufficiently electrified to produce lightning, the rate of lightning generation provides a method of remotely monitoring the plume height, offering clear benefits to the volcanic monitoring community.”
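A minimal sketch of the kind of relationship the authors describe, assuming one already has time-aligned measurements of lightning rate and plume height (no observational values are supplied here; the function names and inputs are hypothetical):

```python
# Minimal sketch: quantify the correlation between lightning rate and plume
# height, given two already-measured, time-aligned series. The data must come
# from observations (e.g. ATDnet lightning counts and radar plume heights);
# none are fabricated here.

import numpy as np

def lightning_height_correlation(lightning_rate_per_hr, plume_height_km):
    """Return the Pearson correlation between lightning rate and plume height."""
    x = np.asarray(lightning_rate_per_hr, dtype=float)
    y = np.asarray(plume_height_km, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```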
 
Note: This story has been adapted from a news release issued by the Institute of Physics

Using chaos to model geophysical phenomena

Geophysical phenomena such as the dynamics of the atmosphere and ocean circulation are typically modeled mathematically by tracking the motion of air or water particles. These mathematical models define velocity fields that, given (i) a position in three-dimensional space and (ii) a time instant, provide a speed and direction for a particle at that position and time instant.
“Geophysical phenomena are still not fully understood, especially in turbulent regimes,” explains Gary Froyland at the School of Mathematics and Statistics and the Australian Research Council Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) at the University of New South Wales in Australia.

“Nevertheless, it is very important that scientists can quantify the ‘transport’ properties of these geophysical systems: Put very simply, how does a packet of air or water get from A to B, and how large are these packets? An example of one of these packets is the Antarctic polar vortex, a rotating mass of air in the stratosphere above Antarctica that traps chemicals such as ozone and chlorofluorocarbons (CFCs), exacerbating the effect of the CFCs on the ozone hole,” Froyland says.

 
In the American Institute of Physics’ journal CHAOS, Froyland and his research team, including colleague Adam Monahan from the School of Earth and Ocean Sciences at the University of Victoria in Canada, describe how they developed the first direct approach for identifying these packets, called “coherent sets” due to their nondispersive properties.
This technique is based on so-called “transfer operators,” which represent a complete description of the ensemble evolution of the fluid. The transfer operator approach is very simple to implement, they say, requiring only singular vector computations of a matrix of transitions induced by the dynamics.
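A highly simplified, Ulam-style sketch of that idea (not the authors’ production code): count how tracked particles move between grid boxes over a time interval, normalize the counts into a transition matrix, and use its singular vectors to flag candidate coherent sets. The box indexing and trajectory inputs below are assumptions for illustration.

```python
# Simplified sketch of the transfer-operator approach: build a box-to-box
# transition matrix from particle positions at two times, then use singular
# vectors of that matrix to partition boxes into candidate coherent sets.

import numpy as np

def transition_matrix(start_boxes, end_boxes, n_boxes):
    """Row-stochastic matrix P[i, j]: fraction of particles moving from box i to box j."""
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(start_boxes, end_boxes):
        P[i, j] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid dividing by zero for empty boxes
    return P / row_sums

def coherent_set_labels(P):
    """Split boxes into two candidate coherent sets using the second left singular vector."""
    U, s, Vt = np.linalg.svd(P)
    return U[:, 1] > 0.0  # the sign pattern partitions the boxes
```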
 
When tested using European Centre for Medium Range Weather Forecasting (ECMWF) data, they found that their new methodology was significantly better than existing technologies for identifying the location and transport properties of the vortex.
 
The transfer operator methodology has myriad applications in atmospheric science and physical oceanography to discover the main transport pathways in the atmosphere and oceans, and to quantify the transport. “As atmosphere-ocean models continue to increase in resolution with improved computing power, the analysis and understanding of these models with techniques such as transfer operators must be undertaken beyond pure simulation,” says Froyland.
 
Their next application will be the Agulhas rings off the South African coast, because the rings are responsible for a significant amount of transport of warm water and salt between the Indian and Atlantic Oceans.
 
Note: This story has been adapted from a news release issued by the American Institute of Physics

New research shows rivers cut deep notches in the Alps’ broad glacial valleys

For years, geologists have argued about the processes that formed steep inner gorges in the broad glacial valleys of the Swiss Alps.
 
The U-shaped valleys were created by slow-moving glaciers that behaved something like road graders, eroding the bedrock over hundreds or thousands of years. When the glaciers receded, rivers carved V-shaped notches, or inner gorges, into the floors of the glacial valleys. But scientists disagreed about whether those notches were erased by subsequent glaciers and then formed all over again as the second round of glaciers receded.
New research led by a University of Washington scientist indicates that the notches endure, at least in part, from one glacial episode to the next. The glaciers appear to fill the gorges with ice and rock, protecting them from being scoured away as the glaciers move.
When the glaciers receded, the resulting rivers returned to the gorges and easily cleared out the debris deposited there, said David Montgomery, a UW professor of Earth and space sciences.

“The alpine inner gorges appear to lay low and endure glacial attack. They are topographic survivors,” Montgomery said.

“The answer is not so simple that the glaciers always win. The river valleys can hide under the glaciers and when the glaciers melt the rivers can go back to work.”
 
Montgomery is lead author of a paper describing the research, published online Dec. 5 in Nature Geoscience. Co-author is Oliver Korup of the University of Potsdam in Germany, who did the work while with the Swiss Federal Research Institutes in Davos, Switzerland.
 
The researchers used topographic data taken from laser-based (LIDAR) measurements to determine that, if the gorges were erased with each glacial episode, the rivers would have had to erode the bedrock from one-third to three-quarters of an inch per year since the last glacial period to get gorges as deep as they are today.
“That is screamingly fast. It’s really too fast for the processes,” Montgomery said. Such erosion rates would exceed those in all areas of the world except the most tectonically active regions, the researchers said, and the rivers would have had to maintain those rates for 1,000 years.
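The flavor of that back-of-the-envelope argument, using assumed (not published) numbers for gorge depth and time since deglaciation, is sketched below; the paper itself works from LIDAR-derived depths.

```python
# Rough sketch of the incision-rate argument: how fast would a river have to
# cut to carve the whole gorge since the last glaciers receded? The gorge
# depth and elapsed time are illustrative assumptions only.

MM_PER_INCH = 25.4

def required_rate_inches_per_year(gorge_depth_m: float, years_since_deglaciation: float) -> float:
    """Incision rate needed to cut the full gorge depth since deglaciation."""
    rate_mm_per_yr = gorge_depth_m * 1000.0 / years_since_deglaciation
    return rate_mm_per_yr / MM_PER_INCH

# e.g. an assumed 150 m-deep gorge cut in ~15,000 years would need about
# 0.4 inches of bedrock erosion per year.
print(round(required_rate_inches_per_year(150.0, 15000.0), 2))
```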
 
Montgomery and Korup found other telltale evidence, sediment from much higher elevations and older than the last glacial deposits, at the bottom of the river gorges. That material likely was pushed into the gorges as glaciers moved down the valleys, indicating the gorges formed before the last glaciers.

“That means the glaciers aren’t cutting down the bedrock as fast as the rivers do. If the glaciers were keeping up, each time they’d be able to erase the notch left by the river,” Montgomery said.
 
“They’re locked in this dance, working together to tear the mountains down.”
The work raises questions about how common the preservation of gorges might be in other mountainous regions of the world.

“It shows that inner gorges can persist, and so the question is, ‘How typical is that?’ I don’t think every inner gorge in the world survives multiple glaciations like that, but the Swiss Alps are a classic case. That’s where mountain glaciation was first discovered.”
 
Note: This story has been adapted from a news release issued by the University of Washington

SCEC’s ‘M8’ earthquake simulation breaks computational records, promises better quake models

A multi-disciplinary team of researchers has presented the world’s most advanced earthquake shaking simulation at the Supercomputing 2010 (SC10) conference held this week in New Orleans. The research was selected as a finalist for the Gordon Bell prize, awarded at the annual conference for outstanding achievement in high-performance computing applications.

The “M8” simulation models the shaking produced by a magnitude 8.0 earthquake on the southern San Andreas Fault over a larger area, and in greater detail, than was previously possible. Perhaps most importantly, the development of the M8 simulation advances the state of the art in the speed and efficiency with which such calculations can be performed.

The Southern California Earthquake Center (SCEC) at the University of Southern California (USC) was the lead coordinator in the project. San Diego Supercomputer Center (SDSC) researchers provided the high-performance computing and scientific visualization expertise for the simulation. Scientific details of the earthquake were developed by scientists at San Diego State University (SDSU). Ohio State University (OSU) researchers were also part of the collaborative effort to improve the efficiency of the software involved.
While this specific earthquake has a low probability of occurrence, the improvements in technology required to produce this simulation will now allow scientists to simulate other, more likely earthquake scenarios in much less time than previously required. Because such simulations are the most important and widespread applications of high-performance computing for seismic hazard estimation currently in use, the SCEC team has focused on optimizing the technologies and codes needed to create them.
The M8 simulation was funded through a number of National Science Foundation (NSF) grants and was performed using supercomputer resources including NSF’s Kraken supercomputer at the National Institute for Computational Sciences (NICS) and the Department of Energy (DOE) Jaguar supercomputer at the National Center for Computational Sciences. The SCEC M8 simulation represents the latest in earthquake science and in computations at the petascale level, which refers to supercomputers capable of more than one quadrillion floating point operations (calculations) per second.
“Petascale simulations such as this one are needed to understand the rupture and wave dynamics of the largest earthquakes, at shaking frequencies required to engineer safe structures,” said Thomas Jordan, director of SCEC and Principal Investigator for the project. Previous simulations were useful only for modeling how tall structures will behave in earthquakes, but the new simulation can be used to understand how a broader range of buildings will respond.
“The scientific results of this massive simulation are very interesting, and its level of detail has allowed us to observe things that we were not able to see in the past,” said Kim Olsen, professor of geological sciences at SDSU and lead seismologist of the study.
However, given the massive number of calculations required, only the most advanced supercomputers are capable of producing such simulations in a reasonable time period. “This M8 simulation represents a milestone calculation, a breakthrough in seismology both in terms of computational size and scalability,” said Yifeng Cui, a computational scientist at SDSC. “It’s also the largest and most detailed simulation of a major earthquake ever performed in terms of floating point operations, and opens up new territory for earthquake science and engineering with the goal of reducing the potential for loss of life and property.”
Specifically, the M8 simulation is the largest in terms of the duration of the shaking modeled (six minutes) and the geographical area covered – a rectangular volume approximately 500 miles (810 km) long by 250 miles (405 km) wide by 50 miles (85 km) deep. The team’s latest research also set a new record in the number of computer processor cores used, with 223,074 cores sustaining a performance of 220 trillion calculations per second for 24 hours on the Jaguar Cray XT5 supercomputer at the Oak Ridge National Laboratory (ORNL) in Tennessee.
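For scale, the sustained rate and run time quoted above can be multiplied out into a total operation count; this is a simple arithmetic check, not a figure taken from the paper:

```python
# Arithmetic check on the quoted figures: 220 trillion operations per second
# sustained for 24 hours.

sustained_ops_per_sec = 220e12
seconds_per_day = 24 * 3600

total_ops = sustained_ops_per_sec * seconds_per_day
print(f"{total_ops:.2e}")  # about 1.9e19 floating point operations over the 24-hour run
```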
“We have come a long way in just six years, doubling the seismic frequencies modeled by our simulations every two to three years, from 0.5 Hertz (or cycles per second) in the TeraShake simulations, to 1.0 Hertz in the ShakeOut simulations, and now to 2.0 Hertz in this latest project,” said Phil Maechling, SCEC’s associate director for Information Technology.
In terms of earthquake science, these simulations can be used to study how earthquake waves travel through structures in the earth’s crust and to improve three-dimensional models of such structures.
“Based on our calculations, we are finding that deep sedimentary basins, such as those in the Los Angeles area, are getting larger shaking than is predicted by the standard methods,” Jordan said. “By improving the predictions, making them more realistic, we can help engineers make new buildings safer.” The simulations are also useful in developing better seismic hazard policies and for improving scenarios used in emergency planning.
Note: This story has been adapted from a news release issued by the University of Southern California
