
Yellowstone Spawned Twin Super-Eruptions that Altered Global Climate

Yellowstone National Park

A new geological record of the Yellowstone supervolcano’s last catastrophic eruption is rewriting the story of what happened 630,000 years ago and how it affected Earth’s climate. This eruption formed the vast Yellowstone caldera observed today, the second largest on Earth.

Two layers of volcanic ash bearing the unique chemical fingerprint of Yellowstone’s most recent super-eruption have been found in seafloor sediments in the Santa Barbara Basin, off the coast of Southern California. These layers of ash, or tephra, are sandwiched among sediments that contain a remarkably detailed record of ocean and climate change. Together, the ash and sediments reveal that the last eruption was not a single event, but two closely spaced eruptions that tapped the brakes on a natural global-warming trend that eventually led the planet out of a major ice age.

“We discovered here that there are two ash-forming super-eruptions 170 years apart and each cooled the ocean by about 3 degrees Celsius,” said UC Santa Barbara geologist Jim Kennett, who will be presenting a poster about the work on Wednesday, 25 Oct., at the annual meeting of the Geological Society of America in Seattle. The resolution needed to detect the separate eruptions and their climate effects comes from several special conditions in the Santa Barbara Basin, Kennett said.

One condition is the steady supply of sediment to the basin from land — about one millimeter per year. Then there is the highly productive ocean in the area, fed by upwelling nutrients from the deep ocean. This produced abundant tiny shells of foraminifera that sank to the seafloor where they were buried and preserved in the sediment. These shells contain temperature-dependent oxygen isotopes that reveal the sea surface temperatures in which they lived.
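The oxygen-isotope thermometry mentioned above can be sketched numerically. The calibration below is a classic quadratic paleotemperature equation in the Epstein/Shackleton style; the coefficients and the assumed seawater value are illustrative, not the ones used in this study.

```python
def calcite_temperature_c(d18o_calcite, d18o_water=0.0):
    """Estimate the calcification temperature (deg C) recorded by a
    foraminiferal shell from its oxygen-isotope composition (per mil).
    Classic quadratic paleotemperature form; coefficients are
    illustrative and vary between published calibrations."""
    d = d18o_calcite - d18o_water
    return 16.9 - 4.38 * d + 0.10 * d ** 2
```

Lower shell δ18O implies a warmer estimate, so a downcore step toward higher δ18O values marks a cooling like the roughly 3 °C drops described here.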

But none of this would be much use, said Kennett, if it were not for the fact that oxygen levels at the seafloor in the basin are so low as to preclude the burrowing marine animals that would otherwise mix the sediments and degrade the details of the climate record. As a result, Kennett and his colleagues can resolve climate changes at decadal resolution.

By comparing the volcanic ash record with the foraminifera climate record, it’s quite clear, he said, that both of these eruptions caused separate volcanic winters — which is when ash and volcanic sulfur dioxide emissions reduce the amount of sunlight reaching Earth’s surface and cause temporary cooling. These cooling events occurred at an especially sensitive time, when the global climate was warming out of an ice age and was easily disrupted by such events.

Kennett and colleagues discovered that the onset of the global cooling events was abrupt and coincided precisely with the timing of the supervolcanic eruptions, the first such observation of its kind.

But each time, the cooling lasted longer than it should have, according to simple climate models, he said. “We see planetary cooling of sufficient magnitude and duration that there had to be other feedbacks involved.” These feedbacks might include increased sunlight-reflecting sea ice and snow cover or a change in ocean circulation that would cool the planet for a longer time.

“It was a fickle, but fortunate time,” Kennett said of the timing of the eruptions. “If these eruptions had happened during another climate state we may not have detected the climatic consequences because the cooling episodes would not have lasted so long.”

Note: The above post is reprinted from materials provided by Geological Society of America.

6,000-year-old skull could be from the world’s earliest known tsunami victim

This is the cranium of a person who lived in what’s now Papua New Guinea, 6,000 years ago. Credit: Arthur Durband

Tsunamis spell calamity. These giant waves, caused by earthquakes, volcanic eruptions, and underwater landslides, are some of the deadliest natural disasters known; the 2004 tsunami in the Indian Ocean killed over 230,000 people, a higher death toll than any fire or hurricane. Scientists studying the effects of tsunamis have now shed light on what could be the earliest record of a person killed in a tsunami: someone who lived 6,000 years ago in what’s now Papua New Guinea in the southwest Pacific. Their skull was found in geological sediments having the distinctive hallmarks of ancient tsunami activity. This means, scientists posit in a new paper in PLOS ONE, that this skull could be from the earliest known tsunami victim.

“If we are right about how this person had died thousands of years ago, we have dramatic proof that living by the sea isn’t always a life of beautiful golden sunsets and great surfing conditions,” says John Terrell, Regenstein Curator of Pacific Anthropology at The Field Museum and one of the study’s authors. “Maybe this individual can help us as scientists to convince skeptics today that all of us on earth must take climate change and rising sea levels seriously as the threats they truly are.”

The skull in question was found in 1929, buried in the ground near the small town of Aitape on the northern coast of Papua New Guinea, about 500 miles north of Australia. Terrell has been doing archaeological and anthropological research in this coastal region of New Guinea, the second largest island in the world, since 1990. The new PLOS ONE study is a continuation of that work, contributed to by the University of New South Wales, l’Université de Bourgogne-Franche-Comté, the University of Notre Dame, the University of Auckland, New Zealand’s National Institute of Water and Atmospheric Research, the University of Papua New Guinea, the Papua New Guinea National Museum and Art Gallery, and The Field Museum. As a member of this international team, Terrell says he has long wondered what to make of this tantalizing human find.

“The skull has always been of great archaeological interest because it is one of the few early skeletal remains from the area,” says Mark Golitko of the University of Notre Dame and The Field Museum. “It was originally thought that the skull belonged to Homo erectus until the deposits were more reliably radiocarbon dated to about 5,000 to 6,000 years. Back then, sea levels were higher and the area would have been just behind the shoreline.”

In 2014 Golitko and others went back to the exact place where this skull had been found to look for new clues about what killed this individual. “We have now been able to confirm what we have long suspected,” says James Goff at the University of New South Wales in Australia, the report’s first author. “The geological similarities between the sediments at the place where the skull was found and sediments laid down during the 1998 tsunami that hit this same coastline have made us realise that human populations in this area have been affected by these massive inundations for thousands of years.”

“Given the evidence we have in hand, we are more convinced than before that this person was either violently killed by a tsunami, or had their grave ripped open by one — leading to their head but not the rest of their body being naturally reburied where it then remained undiscovered in the ground for some 6,000 or so years,” explains Goff.

“It is easy to be fooled by the great beauty of the Sepik coast of Papua New Guinea into thinking that surely this part of the world must be as close to paradise-on-earth as anybody could want. This person’s skull is witness to the fact that here as elsewhere natural disasters can suddenly and unexpectedly turn the world upside down,” says Terrell.

Reference:
James Goff, Mark Golitko, Ethan Cochrane, Darren Curnoe, Shaun Williams, John Terrell. Reassessing the environmental context of the Aitape Skull – The oldest tsunami victim in the world? PLOS ONE, 2017; 12 (10): e0185248 DOI: 10.1371/journal.pone.0185248

Note: The above post is reprinted from materials provided by Field Museum.

World’s oldest and most complex trees

This is an illustrative transverse plane through the small trunk, showing the three naturally-fractured parts. Credit: Xu and Berry, 2017

The first trees to have ever grown on Earth were also the most complex, new research has revealed.

Fossils from a 374-million-year-old tree found in north-west China have revealed an interconnected web of woody strands within the trunk of the tree that is much more intricate than that of the trees we see around us today.

The strands, known as xylem, are responsible for conducting water from a tree’s roots to its branches and leaves. In the most familiar trees the xylem forms a single cylinder to which new growth is added in rings year by year just under the bark. In other trees, notably palms, xylem is formed in strands embedded in softer tissues throughout the trunk.

Writing in the journal Proceedings of the National Academy of Sciences, the scientists have shown that the earliest trees, belonging to a group known as the cladoxylopsids, had their xylem dispersed in strands in the outer 5 cm of the tree trunk only, whilst the middle of the trunk was completely hollow.

The narrow strands were arranged in an organised fashion and were interconnected to each other like a finely tuned network of water pipes.

The team, which includes researchers from Cardiff University, Nanjing Institute of Geology and Palaeontology, and State University of New York, also show that the development of these strands allowed the tree’s overall growth.

Rather than the tree laying down one growth ring under the bark every year, each of the hundreds of individual strands were growing their own rings, like a large collection of mini trees.

As the strands got bigger, and the volume of soft tissues between the strands increased, the diameter of the tree trunk expanded. The new discovery shows conclusively that the connections between each of the strands would split apart in a curiously controlled and self-repairing way to accommodate the growth.

At the very bottom of the tree there was also a peculiar mechanism at play — as the tree’s diameter expanded the woody strands rolled out from the side of the trunk at the base of the tree, forming the characteristic flat base and bulbous shape synonymous with the cladoxylopsids.

Co-author of the study Dr Chris Berry, from Cardiff University’s School of Earth and Ocean Sciences, said: “There is no other tree that I know of in the history of Earth that has ever done anything as complicated as this. The tree simultaneously ripped its skeleton apart and collapsed under its own weight while staying alive and growing upwards and outwards to become the dominant plant of its day.

“By studying these extremely rare fossils, we’ve gained an unprecedented insight into the anatomy of our earliest trees and the complex growth mechanisms that they employed.

“This raises a provoking question: why are the very oldest trees the most complicated?”

Dr Berry has been studying cladoxylopsids for nearly 30 years, uncovering fragmentary fossils from all over the world. He previously helped uncover a fossil forest in Gilboa, New York, where cladoxylopsid trees grew over 385 million years ago.

Yet Dr Berry was amazed when a colleague uncovered a massive, well-preserved fossil of a cladoxylopsid tree trunk in Xinjiang, north-west China.

“Previous examples of these trees have filled with sand when fossilised, offering only tantalising clues about their anatomy. The fossilised trunk obtained from Xinjiang was huge and perfectly preserved in glassy silica as a result of volcanic sediments, allowing us to observe every single cell of the plant,” Dr Berry continued.

The overall aim of Dr Berry’s research is to understand how much carbon these trees were capable of capturing from the atmosphere and how this affected Earth’s climate.

Reference:
Hong-He Xu, Christopher M. Berry, William E. Stein, Yi Wang, Peng Tang, Qiang Fu. Unique growth strategy in the Earth’s first trees revealed in silicified fossil trunks from China. Proceedings of the National Academy of Sciences, 2017; 201708241 DOI: 10.1073/pnas.1708241114

Note: The above post is reprinted from materials provided by Cardiff University.

Ice sheets may melt rapidly in response to distant volcanoes

Sediments deposited by ice sheet meltwater provide clues about ancient climates, as well as the future effects of global warming. Credit: Francesco Muschitiello

Volcanic eruptions have been known to cool the global climate, but they can also exacerbate the melting of ice sheets, according to a paper published in Nature Communications.

Researchers who analyzed ice cores and meltwater deposits found that ancient eruptions caused immediate and significant melting of the ice sheet that covered much of northern Europe at the end of the last ice age, some 12,000 to 13,000 years ago.

“Over a time span of 1,000 years, we found that volcanic eruptions generally correspond with enhanced ice sheet melting within a year or so,” says lead author Francesco Muschitiello, who completed the research as a postdoctoral fellow at Columbia University’s Lamont-Doherty Earth Observatory.

These weren’t volcanoes erupting on or near the ice sheet; in some cases they were located a thousand miles away. The eruptions heaved huge clouds of ash into the sky, and when the ash fell on the ice sheet, its darker color made the ice absorb more solar heat than usual.

“We know that if you have darker ice, you decrease the reflectance and it melts more quickly. It’s basic science,” says Muschitiello. “But no one so far has been able to demonstrate this direct link between volcanism and ice melting when it comes to ancient climates.”
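The back-of-envelope energy balance behind this effect is easy to sketch. All of the numbers below (albedo drop, insolation, season length) are illustrative assumptions; the study itself relied on full model simulations.

```python
RHO_ICE = 917.0    # density of ice, kg/m^3
L_FUSION = 3.34e5  # latent heat of fusion of ice, J/kg

def extra_melt_m(albedo_drop, insolation_w_m2=250.0, season_days=90):
    """Extra ice melted (meters) over one melt season when ash lowers
    the surface albedo, so the surface absorbs an additional
    albedo_drop * insolation of shortwave energy.
    Crude single-layer energy balance, illustrative values only."""
    extra_energy_j_m2 = albedo_drop * insolation_w_m2 * season_days * 86400.0
    return extra_energy_j_m2 / (RHO_ICE * L_FUSION)
```

With these assumed numbers, an albedo drop of 0.1 melts roughly an extra 0.6 m of ice in a season, the same order of magnitude as the 20 cm to 1 m range the team’s simulations produced.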

The discovery comes from the cross-sections of deposits, called glacial varves, most of which had been collected in the 1980s and 1990s. Varves are the layered sediments that form when meltwater below an ice sheet routes large amounts of debris into lakes near the sheet’s edge. Like the rings of a tree, the layers of a glacial varve tell the story of each year’s conditions; a thicker layer indicates more melting, since there would have been a higher volume of water to carry the sediment.

The team also compared the varves to cores from the Greenland ice sheet, whose layers contain a record of ancient atmospheric conditions. Testing of those layers for sulfates revealed which years experienced explosive volcanic eruptions, which tend to release large amounts of ash. Matching up the ice layers with varve layers from the same time periods, the team found that years with explosive volcanic activity corresponded to thicker varve layers, indicating more melting of the northern European ice sheet.

Muschitiello and his colleagues studied a period ranging from 13,200 to 12,000 years ago, when the last ice age was transitioning into today’s warm climate. They focused specifically on volcanic eruptions in the northern high latitudes — events similar to the 2010 eruptions of Iceland’s Eyjafjallajökull volcano. Although that eruption was relatively minor, its large ash cloud shut down air traffic across most of Europe for about a week.

How much melting could an eruption like that cause? “It’s difficult to put an exact number to it,” says glaciologist and coauthor James Lea from the University of Liverpool. “It depends on many factors.” Running thousands of model simulations, the team found that the amount of melting depends on the individual eruption, which season it occurs in, the snowpack conditions at the time, and the elevation of the ice sheet. “Change any one of these and you would get different amounts of melt,” says Lea. In the worst scenarios, the model predicted that ash deposition would remove between 20 centimeters and almost one meter of ice from the surface of the highest parts of the ice sheet.

The model results should be taken with a pinch of salt, Muschitiello cautions, due to uncertainties about past conditions. However, because the team simulated a very broad range of potential conditions, he’s confident that the ice sheet’s real response lies somewhere within their range.

Michael Sigl, a paleoclimatologist from the Paul Scherrer Institute in Switzerland who wasn’t involved in the new study, says the hypothesis that ash particles might counteract the cooling effects of volcanic eruptions is intriguing. But, he said, “coincidences in the timing of rapid ice-sheet melting events and eruption dates do not automatically imply causation, and there may be other scenarios that could be consistent with the presented data.” Sigl’s own work has found a link between eruption-induced ozone depletion and deglaciation in the Southern Hemisphere. Nevertheless, he says, the new study shows that more work is needed to understand the effects of aerosol emissions from volcanic eruptions.

The preliminary results suggest that “present day ice sheets are potentially very vulnerable to volcanic eruptions,” says Muschitiello. They also point to a possible hole in the climate models that scientists use to make predictions about the future: Models currently don’t simulate the ice sheets’ response to changes in particulate deposition from the atmosphere in an interactive way.

Another intriguing implication stems from previous research suggesting that melting ice sheets and glaciers could increase the frequency of volcanic eruptions in glaciated areas by lightening the load on Earth’s crust, allowing underlying magma to rise. If the link between volcanism and ice sheet melting is confirmed, it could indicate the presence of a so-called “positive feedback loop,” in which eruptions exacerbate melting, more melting causes more eruptions, and so on.

Muschitiello says the study “can give us hints about the mechanisms at play when you’re expecting rapid climate change.”

Reference:
Francesco Muschitiello, Francesco S. R. Pausata, James M. Lea, Douglas W. F. Mair, Barbara Wohlfarth. Enhanced ice sheet melting driven by volcanic eruptions during the last deglaciation. Nature Communications, 2017; 8 (1) DOI: 10.1038/s41467-017-01273-1

Note: The above post is reprinted from materials provided by The Earth Institute at Columbia University.

Raton Basin earthquakes linked to oil and gas fluid injections

Representative Image: An oil platform

A rash of earthquakes in southern Colorado and northern New Mexico recorded between 2008 and 2010 was likely due to fluids pumped deep underground during oil and gas wastewater disposal, says a new University of Colorado Boulder study.

The study, which took place in the 2,200-square-mile Raton Basin along the central Colorado-northern New Mexico border, found more than 1,800 earthquakes up to magnitude 4.3 during that period, linking most to wastewater injection well activity. Such wells are used to pump water back into the ground after it has been extracted during the collection of methane gas from subterranean coal beds.

One key piece of the new study was the use of hydrogeological modeling of pore pressure in what is called the “basement rock” of the Raton Basin – rock several miles deep that underlies the oldest stratified layers. Pore pressure is the fluid pressure within rock fractures and rock pores.
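How injection can raise pore pressure far from a well is often illustrated with the textbook one-dimensional diffusion solution. The hydraulic diffusivity value and the geometry here are illustrative assumptions for a sketch, not the study’s 3-D hydrogeological model.

```python
import math

def pore_pressure_rise(delta_p_pa, distance_m, time_s, diffusivity_m2_s=0.1):
    """Pressure rise (Pa) at a given distance and time after a step
    pressure increase delta_p is applied at an injection boundary,
    from the 1-D diffusion solution:
        p(x, t) = delta_p * erfc(x / (2 * sqrt(D * t)))
    D is the hydraulic diffusivity; the default is a placeholder."""
    return delta_p_pa * math.erfc(
        distance_m / (2.0 * math.sqrt(diffusivity_m2_s * time_s)))
```

The key behavior: at a fixed distance, the pressure change keeps growing as injection continues, which is why decades of disposal can eventually push deep basement faults past a triggering threshold.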

While two previous studies have linked earthquakes in the Raton Basin to wastewater injection wells, this is the first to show that elevated pore pressures deep underground are well above earthquake-triggering thresholds, said CU Boulder doctoral student Jenny Nakai, lead study author. The northern edges of the Raton Basin border Trinidad, Colorado, and Raton, New Mexico.

“We have shown for the first time a plausible causative mechanism for these earthquakes,” said Nakai of the Department of Geological Sciences. “The spatial patterns of seismicity we observed are reflected in the distribution of wastewater injection and our modeled pore pressure change.”

A paper on the study was published in the Journal of Geophysical Research: Solid Earth. Co-authors on the study include CU Boulder Professors Anne Sheehan and Shemin Ge of geological sciences, former CU Boulder doctoral student Matthew Weingarten, now a postdoctoral fellow at Stanford University, and Professor Susan Bilek of the New Mexico Institute of Mining and Technology in Socorro.

The Raton Basin earthquakes between 2008 and 2010 were measured by seismometers from the EarthScope USArray Transportable Array, a program funded by the National Science Foundation (NSF) to measure earthquakes and map Earth’s interior across the country. The team also used seismic data from the Colorado Rockies Experiment and Seismic Transects (CREST), also funded by NSF.

As part of the research, the team simulated in 3-D a 12-mile-long fault gleaned from seismicity data in the Vermejo Park region of the Raton Basin. The seismicity patterns also suggest a second, smaller fault in the Raton Basin that was active from 2008 to 2010.

Nakai said the research team did not look at the relationship between the Raton Basin earthquakes and hydraulic fracturing, or fracking.

The new study also showed the number of earthquakes in the Raton Basin correlates with the cumulative volume of wastewater injected in wells up to about 9 miles away from the individual earthquakes. There are 28 “Class II” wastewater disposal wells – wells that are used to dispose of waste fluids associated with oil and natural gas production – in the Raton Basin, and at least 200 million barrels of wastewater have been injected underground there by the oil and gas industry since 1994.

“Basement rock is typically more brittle and fractured than the rock layers above it,” said Sheehan, also a fellow at CU’s Cooperative Institute for Research in Environmental Sciences. “When pore pressure increases in basement rock, it can cause earthquakes.”

There is still a lot to learn about the Raton Basin earthquakes, said the CU Boulder researchers. While the oil and gas industry has monitored seismic activity with seismometers in the Raton Basin for years and mapped some sub-surface faults, such data are not made available to researchers or the public.

The earthquake patterns in the Raton Basin are similar to other U.S. regions that have shown “induced seismicity” likely caused by wastewater injection wells, said Nakai. Previous studies involving CU Boulder showed that injection wells likely caused earthquakes near Greeley, Colorado, in Oklahoma and in the mid-continent region of the United States in recent years.

Note: The above post is reprinted from materials provided by University of Colorado at Boulder.

Anticipating aftershocks

(a) Aftershock nucleation rates following a magnitude 7 earthquake on the Mojave section of the San Andreas fault based on 2×105 UCERF3‐ETAS simulations. (Inset) Magnitude Frequency Distribution for ruptures with some part inside the dashed box defining the greater Los Angeles area. (b) Same as (a), but for an M 7.1 mainshock on the Hayward fault; inset graph pertains to the dashed box defining the San Francisco Bay area. Credit: Sean Cunningham, TACC

Southern California has the highest earthquake risk of any region in the U.S., but exactly how risky and where the greatest risks lie remains an open question.

Earthquakes occur infrequently and depend on complex geological factors deep underground, making them hard to reliably predict in advance. For that reason, forecasting earthquakes means relying on massive computer models and multifaceted simulations, which recreate the rock physics and regional geology and require big supercomputers to execute.

In June 2017, a team of researchers from the U.S. Geological Survey and the Southern California Earthquake Center (SCEC) released a major paper in Seismological Research Letters that summarized the scientific and hazard results of one of the world’s biggest and most well-known earthquake simulation projects: The Uniform California Earthquake Rupture Forecast (UCERF3).

The results relied on computations performed on the original Stampede supercomputer at the Texas Advanced Computing Center, resources at the University of Southern California Center for High-Performance Computing, as well as the newly deployed Stampede2 supercomputer, to which the research team had early access. (Stampede1 and Stampede2 are supported by grants from the National Science Foundation.)

“High-performance computing on TACC’s Stampede system, and during the early user period of Stampede2, allowed us to create what is, by all measures, the most advanced earthquake forecast in the world,” said Thomas H. Jordan, director of the Southern California Earthquake Center and one of the lead authors on the paper.

The new forecast is the first fault-based model to provide self-consistent rupture probabilities from the very short-term — over a period of less than an hour — to the very long term — up to more than a century. It is also the first model capable of evaluating the short-term hazards that result from multi-event sequences of complex faulting.

To derive the model, the researchers ran 250,000 rupture scenarios of the state of California, vastly more than in the previous model, which simulated 8,000 ruptures.

Among its novel findings, the researchers’ simulations showed that in the week following a magnitude 7.0 earthquake, the likelihood of another magnitude 7.0 quake would be up to 300 times greater than the week beforehand. This scenario of ‘cascading’ ruptures was demonstrated in the 2002 magnitude 7.9 Denali, Alaska, and the 2016 magnitude 7.8 Kaikoura, New Zealand earthquakes, according to David Jacobson and Ross Stein of Temblor.

The dramatic increase in the likelihood of powerful aftershocks is due to the inclusion of a new class of models that assess short-term changes in seismic hazard based on what is known about earthquake clustering and aftershock excitations. These factors have never been used in a comprehensive, statewide model like this one.
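Earthquake-clustering models of this kind build on the modified Omori law, which describes how aftershock rates decay with time after a mainshock. The parameter values below are illustrative placeholders; UCERF3-ETAS fits such parameters per region and per sequence.

```python
def omori_rate(t_days, k=100.0, c=0.05, p=1.1):
    """Modified Omori (Omori-Utsu) law: expected aftershock rate in
    events per day, t_days after a mainshock. k scales productivity,
    c smooths the rate near t = 0, and p controls the decay speed.
    Parameter values here are illustrative, not fitted."""
    return k / (c + t_days) ** p
```

The rapid early decay is why short-term hazard immediately after a large quake can sit far above the background rate, then fall off within weeks.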

The current model also takes into account the likelihood of ruptures jumping from one fault to a nearby one, which has been observed in California’s highly interconnected fault system.

Based on these and other new factors, the new model increases the likelihood of powerful aftershocks but downgrades the predicted frequency of earthquakes between magnitude 6.5 and 7.0, which did not match historical records.

Importantly, UCERF3 can be updated with observed seismicity — real-time data based on earthquakes in action — to capture the static or dynamic triggering effects that play out during a particular sequence of events. The framework is adaptable to many other continental fault systems, and the short-term component might be applicable to the forecasting of minor earthquakes and tremors that are caused by human activity.

The impact of such an improved model goes beyond the fundamental scientific improvement it represents. It has the potential to impact building codes, insurance rates, and the state’s response to a powerful earthquake.

Said Jordan, “The U.S. Geological Survey has included UCERF3 as the California component of the National Seismic Hazard Model, and the model is being evaluated for use in operational earthquake forecasting on timescales from hours to decades.”

ESTIMATING THE COST TO REBUILD

In addition to forecasting the likelihood of an earthquake, models like UCERF3 help predict the associated costs of earthquakes in the region. In recent months, the researchers used UCERF3 and Stampede2 to create a prototype operational loss model, which they described in a paper posted online to Earthquake Spectra in August.

The model estimates the statewide financial losses to the region (the costs to repair buildings and other damages) caused by an earthquake and its aftershocks. The risk metric is based on a vulnerability function and the total replacement cost of asset types in a given census tract.

The model found that the expected loss per year, when averaged over many years, would be $4.0 billion statewide. More importantly, the model was able to quantify how expected losses change with time due to recent seismic activity. For example, the expected losses in the year following a magnitude 7.1 mainshock spike to $24 billion due to potentially damaging aftershocks, a factor of six greater than during “normal” times.

Being able to quantify such fluctuations will enable financial institutions, such as earthquake insurance providers, to adjust their business decisions accordingly.

“It’s all about providing tools that will help make society more resilient to damaging earthquake sequences,” says Ned Field of the USGS, another lead author of the two studies.

Though there’s a great deal of uncertainty in both the seismicity and the loss estimates, the model is an important step toward quantifying earthquake risk and potential devastation in the region, thereby helping decision-makers determine whether and how to respond.

Reference:
A Synoptic View of the Third Uniform California Earthquake Rupture Forecast (UCERF3). DOI: 10.1785/0220170045

Note: The above post is reprinted from materials provided by University of Texas at Austin, Texas Advanced Computing Center.

How do we know the age of the Earth?

The Earth is 4.565 billion years old, give or take a few million years. How do scientists know that? Since there’s no “established in” plaque stuck in a cliff somewhere, geologists deduced the age of the Earth thanks to a handful of radioactive elements.

With radiometric dating, scientists can put an age on really old rocks — and even good old Mother Earth. For the 30th anniversary of National Chemistry Week, this edition of Reactions describes how scientists date rocks.
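The arithmetic behind radiometric dating is compact: if a parent isotope decays into a stable daughter, the accumulated daughter-to-parent ratio fixes the age. The sketch below assumes the simplest closed-system case with no daughter present at formation; real geochronology corrects for both using isochrons.

```python
import math

def radiometric_age_years(daughter_to_parent, half_life_years):
    """Age from the measured daughter/parent isotope ratio, assuming a
    closed system and no daughter present at formation:
        t = ln(1 + D/P) / lam, where lam = ln(2) / half-life."""
    decay_constant = math.log(2.0) / half_life_years
    return math.log(1.0 + daughter_to_parent) / decay_constant
```

For uranium-238 (half-life about 4.468 billion years) decaying to lead-206, a daughter-to-parent ratio near 1.03 yields an age near 4.56 billion years, consistent with the figure quoted above.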


The American Chemical Society, the world’s largest scientific society, is a not-for-profit organization chartered by the U.S. Congress. ACS is a global leader in providing access to chemistry-related information and research through its multiple databases, peer-reviewed journals and scientific conferences. ACS does not conduct research, but publishes and publicizes peer-reviewed scientific studies. Its main offices are in Washington, D.C., and Columbus, Ohio.

Diamonds deliver insights into the chemistry of the deep Earth’s interior

Inclusion within a diamond (black arrow, microscopic image). Credit: A. Schreiber, GFZ

Nitrogen is one of the most enigmatic elements in the Earth system. No matter where in the world scientists take measurements, in the atmosphere or in solid rock, they encounter the “missing nitrogen” problem: compared with other planets, Earth appears to hold far too little nitrogen. The scientists Felix Kaminsky, KM Diamond Exploration, Canada, and Richard Wirth, GFZ section Chemistry and Physics of Earth Materials, have now identified a “witness” from the deep that may unravel the mystery.

At 78 percent, nitrogen is the main component of Earth’s air, and it is also a key component of life. A comparison with other planets, however, suggests that Earth should hold a much larger amount. According to recent estimates, up to 90 percent of the expected nitrogen is missing. Where did it go? Existing hypotheses assume that large amounts of nitrogen may have been degassed during the formation of Earth or following a meteor impact. Another hypothesis assumes that large amounts of nitrogen reside in the Earth’s interior, in the mantle or core. Since instruments cannot reach those depths, these have so far been no more than assumptions.

Diamonds from northwestern Brazil now provide the crucial clue. At Rio Soriso, volcanic vents called kimberlite pipes broke through the Earth’s crust and thereby transported diamonds up to the surface. Felix Kaminsky and Richard Wirth have now precisely investigated the molecular composition of inclusions within the diamonds and published their results in the scientific journal American Mineralogist. Wirth: “Diamonds are formed under high pressure and high temperatures within the Earth’s mantle and are transported to the Earth’s surface by volcanic activity. The chemical composition of diamonds and of the inclusions within them is therefore a reflection of the composition of the Earth’s interior.”

Diamonds from kimberlite pipes also occur elsewhere on Earth, for example in South Africa, Siberia and the Canadian Shield. The diamonds of Rio Soriso, however, are especially rich in inclusions. They formed in the lowermost layers of the lower mantle and thereby offer a rare glimpse into the deep Earth. At the GFZ, Wirth examined the inclusions using several electron-microscopy methods.

Wirth: "Unlike diamonds from other deposits on Earth, the inclusions in the Rio Soriso diamonds contain large amounts of nitrogen. For the first time, we were able to detect iron nitrides and carbonitrides, chemical compounds of iron and carbon with nitrogen, within diamond inclusions." This provides unambiguous proof of the existence of nitrogen in the Earth's lower mantle and core. The scientists believe that iron nitrides and carbonitrides are characteristic compounds of the core-mantle boundary. Wirth: "The compounds were probably transported by liquid metal from the core into the lowermost layers of the lower mantle." The search for the Earth system's "missing nitrogen" appears to have come to an end. (ak)

Reference:
Kaminsky, F., Wirth, R., 2017. Nitrides and carbonitrides from the lowermost mantle and their importance in the search for Earth’s “lost” nitrogen. American Mineralogist 102, 1667-1676. DOI: 10.2138/am-2017-6101

Note: The above post is reprinted from materials provided by GFZ German Research Centre for Geosciences.

New magma pathways after giant lateral volcano collapses

Giant lateral volcano collapses affect the deep pathways of magma. This process can be seen at Fogo Volcano, Cabo Verde. Credit: GFZ/Walter

Giant lateral collapses are huge landslides on the flanks of a volcano. They are rather common events in the evolution of a large volcanic edifice, often with dramatic consequences such as tsunamis and explosive eruptions. These catastrophic events also interact with the volcano's magmatic activity, as new research in Nature Communications suggests. Giant lateral collapses may change the style of volcanism and the chemistry of magma and, as the new study by GFZ scientists reveals, may also divert the deep pathways of magma. New volcanic centres may then form elsewhere, which the scientists explain by studying the stress-field changes associated with the collapse.

In the study, entitled "The effect of giant lateral collapses on magma pathways and the location of volcanism" and authored by F. Maccaferri, N. Richter and T. Walter of GFZ section 2.1 (Physics of Earthquakes and Volcanoes), the propagation paths of magmatic intrusions beneath a volcanic edifice were simulated with a mathematical model. The computer simulations revealed that the mechanical effect of a large lateral collapse on the Earth's crust can deflect deep magmatic intrusions, favouring the formation of a new eruptive centre within the collapse embayment. This result was quantitatively validated against observations at Fogo Volcano, Cabo Verde.

A broader view of other regions reveals that this shift of volcanism after giant lateral collapses is rather common, as observed at several of the Canary Islands, Hawaii, Stromboli and elsewhere. The study has implications in particular for our understanding of the long-term evolution of intraplate volcanic ocean islands, and it sheds light on the interacting processes at work during the growth and collapse of volcanic edifices.

Reference:
Francesco Maccaferri, Nicole Richter, Thomas R. Walter. The effect of giant lateral collapses on magma pathways and the location of volcanism. Nature Communications, 2017; 8 (1) DOI: 10.1038/s41467-017-01256-2

Note: The above post is reprinted from materials provided by GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre.

Zircon as Earth’s timekeeper: Are we reading the clock right?

Cathodoluminescence image from a scanning electron microscope of a typical igneous zircon crystal from samples studied by the QUT research team, revealing growth rings of the zircon. Yellow circles enclose ablation sites by a laser from which isotopic data is measured to determine the age of zircon growth. The analytical spots here show this zircon had two main growth periods approximately 20 million years apart in different magmas. Credit: QUT

Zircon crystals in igneous rocks must be carefully examined and not relied upon solely to predict future volcanic eruptions and other tectonic events, QUT researchers have shown.

  • Zircon is a robust mineral and a timekeeper of Earth history
  • Distinguishing the origins of zircon crystals, their individual chemistry and properties is not straightforward
  • Misinterpreting data from zircon crystals could skew timescales for geological events such as volcanic eruptions by millions of years
  • This has implications for understanding volcanic hazards and the future risks they pose

The researchers’ findings have been published in Earth-Science Reviews. The paper, Use and abuse of zircon-based thermometers: A critical review and a recommended approach to identify antecrystic zircons, also proposes an efficient and integrated approach to assist in identifying zircons and evaluating zircon components sourced from older rocks.

Associate Professor Scott Bryan, from QUT’s Science and Engineering Faculty, said the researchers had “gone back to basic science” and reassessed large data sets of analyses of igneous rocks in Queensland and from around the world, to show that wrong assumptions can be made about zircon crystals.

Igneous rocks are formed by the cooling of magma (molten rock) which makes its way to Earth’s surface, often leading to volcanic eruptions.

“One of the assumptions being made is that the composition of the zircons and the rocks in which they have formed give an accurate record of the magmas and conditions at which the zircons and magmas formed,” Associate Professor Bryan said.

“From this, we then estimate the age of the event that caused them to form.

“But some zircon crystals may not be related to their host rocks at all. They may have come from the source of the magma deep in the Earth’s crust or they may have been picked up by the magma on its way to the surface.

“If you don’t distinguish between the types of crystals then you get a big variation in the age of the event which formed the rocks, potentially millions of years, as well as developing incorrect views on the conditions needed to make magmas.

“It is critical to get the timescales of magmatism correct, so we can understand how long it might take for reservoirs of magma to build up and erupt.”

This is particularly relevant to ‘supervolcanoes’ which do not always have pools of magma sitting beneath them, Associate Professor Bryan said.

There are more than 20 supervolcanoes on Earth, including Yellowstone in the US and Taupo in New Zealand.

“Determining accurately what zircon is telling us is fundamental to understanding Earth’s history, defining major events such as mass extinctions, and how we understand global plate tectonics,” he said.

“We need to understand the past, and read the geological clocks correctly, to accurately predict the future and to mitigate future hazards.”

Reference:
C. Siégel, S.E. Bryan, C.M. Allen, D.A. Gust. Use and abuse of zircon-based thermometers: A critical review and a recommended approach to identify antecrystic zircons. Earth-Science Reviews, 2018; 176: 87 DOI: 10.1016/j.earscirev.2017.08.011

Note: The above post is reprinted from materials provided by Queensland University of Technology.

Mongolian microfossils point to the rise of animals on Earth

This is an image of assorted microfossils from the Ediacaran Khesen Formation, Mongolia. Each fossil is on the order of 200 microns maximum dimension. Credit: Yale University

A Yale-led research team has discovered a cache of embryo-like microfossils in northern Mongolia that may shed light on questions about the long-ago shift from microbes to animals on Earth.

Called the Khesen Formation, the site is one of the most significant for early Earth fossils since the discovery of the Doushantuo Formation in southern China nearly 20 years ago. The Doushantuo Formation is 600 million years old; the Khesen Formation is younger, at about 540 million years old.

“Understanding how and when animals evolved has proved very difficult for paleontologists. The discovery of an exceptionally well-preserved fossil assemblage with animal embryo-like fossils gives us a new window onto a critical transition in life’s history,” said Yale graduate student Ross Anderson, first author of a study in the journal Geology.

The new cache of fossils represents eight genera and about 17 species, comprising tens to hundreds of individuals. Many of them are spiny microfossils called acritarchs, which are roughly 100 microns in size — about one-third the thickness of a fingernail.

The Khesen Formation is located to the west of Lake Khuvsgul in northern Mongolia. “This site was of particular interest to us because it had the right type of rocks — phosphorites — that had preserved similar organisms in China,” Anderson said.

The discovery may help scientists confirm a much earlier date for the existence of Earth ecosystems with animals, rather than just microbes. For two decades, researchers have debated the findings at the Doushantuo Formation, with no resolution. If confirmed as animals, these microfossils would represent the oldest animals to be preserved in the geological record.

The other authors of the study are Derek Briggs, Yale’s G. Evelyn Hutchinson Professor of Geology and Geophysics and curator at the Yale Peabody Museum of Natural History; Sean McMahon, a postdoctoral fellow in the Briggs lab; Francis Macdonald of Harvard; and David Jones of Amherst College.

The researchers said the Khesen Formation should provide scientists with additional information for years to come.

“This study is only the tip of the iceberg, as most of the fossils derive from only two samples,” Anderson said. Since the original discovery, the Yale team has worked with Harvard and the Mongolian University of Science and Technology to sample several additional sites within the formation.

Reference:
Ross P. Anderson, Francis A. Macdonald, David S. Jones, Sean McMahon, Derek E.G. Briggs. Doushantuo-type microfossils from latest Ediacaran phosphorites of northern Mongolia. Geology, 2017; DOI: 10.1130/G39576.1

Note: The above post is reprinted from materials provided by Yale University. Original written by Jim Shelton.

50 simulations of the ‘Really Big One’ show how a 9.0 Cascadia earthquake could play out

Simulation parameters for the scenario that generated the least shaking in the Seattle area. Credit: Erin Wirth/University of Washington/USGS

One of the worst nightmares for many Pacific Northwest residents is a huge earthquake along the offshore Cascadia Subduction Zone, which would unleash damaging and likely deadly shaking in coastal Washington, Oregon, British Columbia and northern California.

The last time this happened was in 1700, before seismic instruments were around to record the event. So what will happen when it ruptures next is largely unknown.

A University of Washington research project, to be presented Oct. 24 at the Geological Society of America’s annual meeting in Seattle, simulates 50 different ways that a magnitude-9.0 earthquake on the Cascadia subduction zone could unfold.

“There had been just a handful of detailed simulations of a magnitude-9 Cascadia earthquake, and it was hard to know if they were showing the full range,” said Erin Wirth, who led the project as a UW postdoctoral researcher in Earth and space sciences. “With just a few simulations you didn’t know if you were seeing a best-case, a worst-case or an average scenario. This project has really allowed us to be more confident in saying that we’re seeing the full range of possibilities.”

Off the Oregon and Washington coast, the Juan de Fuca oceanic plate is slowly moving under the North American plate. Geological clues show that it last jolted and unleashed a major earthquake in 1700, and that it does so roughly once every 500 years. It could happen any day.

Wirth’s project ran simulations using different combinations for three key factors: the epicenter of the earthquake; how far inland the earthquake will rupture; and which sections of the fault will generate the strongest shaking.

Results show that the intensity of shaking can be less for Seattle if the epicenter is fairly close to beneath the city. From that starting point, seismic waves will radiate away from Seattle, sending the biggest shakes in the direction of travel of the rupture.

“Surprisingly, Seattle experiences less severe shaking if the epicenter is located just beneath the tip of northwest Washington,” Wirth said. “The reason is because the rupture is propagating away from Seattle, so it’s most affecting sites offshore. But when the epicenter is located pretty far offshore, the rupture travels inland and all of that strong ground shaking piles up on its way to Seattle, to make the shaking in Seattle much stronger.”

The research effort began by establishing which factors most influence the pattern of ground shaking during a Cascadia earthquake. One, of course, is the epicenter, or more specifically the “hypocenter,” which locates the earthquake’s starting point in three-dimensional space.

Another factor they found to be important is how far inland the fault slips. A magnitude-9.0 earthquake would likely give way along the whole north-south extent of the subduction zone, but it’s not well known how far east the shake-producing area would extend, approaching the area beneath major cities such as Seattle and Portland.

The third factor is a new idea relating to a subduction zone’s stickiness. Earthquake researchers have become aware of the importance of “sticky points,” or areas between the plates that can catch and generate more shaking. This is still an area of current research, but comparisons of different seismic stations during the 2010 Chile earthquake and the 2011 Tohoku earthquake show that some parts of the fault released more strong shaking than others.

Wirth simulated a magnitude-9.0 earthquake, about the middle of the range of estimates for the magnitude of the 1700 earthquake. Her 50 simulations used variables spanning realistic values for the depth of the slip, and had randomly placed hypocenters and sticky points. The high-resolution simulations were run on supercomputers at the Pacific Northwest National Laboratory and the University of Texas, Austin.
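The ensemble idea, drawing many scenarios by varying a few key parameters, can be illustrated with a toy sketch. Every parameter name and range below is an invented placeholder, not a value from the UW/USGS study:

```python
import random

# Toy sketch of drawing an ensemble of rupture scenarios. The parameter
# names and ranges are illustrative placeholders only.
random.seed(0)  # reproducible ensemble

def draw_scenario():
    return {
        "hypocenter_km_north": random.uniform(0, 1000),  # along-strike start point
        "downdip_limit_km": random.uniform(20, 40),      # how far inland slip extends
        "n_sticky_patches": random.randint(1, 4),        # high-shaking "sticky points"
    }

scenarios = [draw_scenario() for _ in range(50)]
print(len(scenarios))  # 50 distinct parameter combinations
```

Each dictionary would then seed one full waveform simulation; spanning the parameter space this way is what lets the team claim they are "seeing the full range of possibilities."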

Overall, the results confirm that coastal areas would be hardest hit, and locations in sediment-filled basins like downtown Seattle would shake more than hard, rocky mountaintops. But within that general framework, the picture can vary a lot; depending on the scenario, the intensity of shaking can vary by a factor of 10. But none of the pictures is rosy.

“We are finding large amplification of ground shaking by the Seattle basin,” said collaborator Art Frankel, a U.S. Geological Survey seismologist and affiliate faculty member at the UW. “The average duration of strong shaking in Seattle is about 100 seconds, about four times as long as from the 2001 Nisqually earthquake.”

The research was done as part of the M9 Project, a National Science Foundation-funded effort to figure out what a magnitude-9 earthquake might look like in the Pacific Northwest and how people can prepare. Two publications are being reviewed by the USGS, and engineers are already using the simulation results to assess how tall buildings in Seattle might respond to the predicted pattern of shaking.

As a new employee of the USGS, Wirth will now use geological clues to narrow down the possible earthquake scenarios.

“We’ve identified what parameters we think are important,” Wirth said. “I think there’s a future in using geologic evidence to constrain these parameters, and maybe improve our estimate of seismic hazard in the Pacific Northwest.”

Note: The above post is reprinted from materials provided by University of Washington. Original written by Hannah Hickey.

What is an Earthquake?


What is an earthquake?

An earthquake is the shaking of the surface of the Earth, resulting from the sudden release of energy in the Earth’s lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to toss people around and destroy whole cities. The seismicity or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.

At the Earth’s surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.

In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake’s point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.

What causes earthquakes and where do they happen?

The earth has four major layers: the inner core, outer core, mantle and crust. The crust and the top of the mantle make up a thin skin on the surface of our planet. But this skin is not all in one piece – it is made up of many pieces like a puzzle covering the surface of the earth.  Not only that, but these puzzle pieces keep slowly moving around, sliding past one another and bumping into each other. We call these puzzle pieces tectonic plates, and the edges of the plates are called the plate boundaries. The plate boundaries are made up of many faults, and most of the earthquakes around the world occur on these faults. Since the edges of the plates are rough, they get stuck while the rest of the plate keeps moving. Finally, when the plate has moved far enough, the edges unstick on one of the faults and there is an earthquake.

Why does the earth shake when there is an earthquake?

While the edges of faults are stuck together, and the rest of the block is moving, the energy that would normally cause the blocks to slide past one another is being stored up. When the force of the moving blocks finally overcomes the friction of the jagged edges of the fault and it unsticks, all that stored up energy is released. The energy radiates outward from the fault in all directions in the form of seismic waves like ripples on a pond. The seismic waves shake the earth as they move through it, and when the waves reach the earth’s surface, they shake the ground and anything on it, like our houses and us! (see P&S Wave inset)

How are earthquakes recorded?

Earthquakes are recorded by instruments called seismographs. The recording they make is called a seismogram. The seismograph has a base that sets firmly in the ground, and a heavy weight that hangs free. When an earthquake causes the ground to shake, the base of the seismograph shakes too, but the hanging weight does not. Instead the spring or string that it is hanging from absorbs all the movement. The difference in position between the shaking part of the seismograph and the motionless part is what is recorded.

How do scientists measure the size of earthquakes?

The size of an earthquake depends on the size of the fault and the amount of slip on the fault, but that’s not something scientists can simply measure with a measuring tape since faults are many kilometers deep beneath the earth’s surface. So how do they measure an earthquake? They use the seismogram recordings made on the seismographs at the surface of the earth to determine how large the earthquake was. A short wiggly line that doesn’t wiggle very much means a small earthquake, and a long wiggly line that wiggles a lot means a large earthquake. The length of the wiggle depends on the size of the fault, and the size of the wiggle depends on the amount of slip.

The size of the earthquake is called its magnitude. There is one magnitude for each earthquake. Scientists also talk about the intensity of shaking from an earthquake, and this varies depending on where you are during the earthquake.

How can scientists tell where the earthquake happened?

Seismograms come in handy for locating earthquakes too, and being able to see the P wave and the S wave is important. You learned how P & S waves each shake the ground in different ways as they travel through it. P waves are also faster than S waves, and this fact is what allows us to tell where an earthquake was. To understand how this works, let’s compare P and S waves to lightning and thunder. Light travels faster than sound, so during a thunderstorm you will first see the lightning and then you will hear the thunder. If you are close to the lightning, the thunder will boom right after the lightning, but if you are far away from the lightning, you can count several seconds before you hear the thunder. The further you are from the storm, the longer it will take between the lightning and the thunder.

P waves are like the lightning, and S waves are like the thunder. The P waves travel faster and shake the ground where you are first. Then the S waves follow and shake the ground also. If you are close to the earthquake, the P and S wave will come one right after the other, but if you are far away, there will be more time between the two. By looking at the amount of time between the P and S wave on a seismogram recorded on a seismograph, scientists can tell how far away the earthquake was from that location. However, they can’t tell in what direction from the seismograph the earthquake was, only how far away it was. If they draw a circle on a map around the station where the radius of the circle is the determined distance to the earthquake, they know the earthquake lies somewhere on the circle. But where?
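The arithmetic behind the S-minus-P lag is simple enough to sketch. A minimal Python example, assuming illustrative average crustal wave speeds (real values vary with rock type and depth):

```python
# Estimate the distance to an earthquake from the S-minus-P arrival-time lag.
# The wave speeds below are illustrative crustal averages, not measured values.
VP_KM_S = 6.0  # assumed P-wave speed (km/s)
VS_KM_S = 3.5  # assumed S-wave speed (km/s)

def distance_from_sp_lag(sp_lag_seconds: float) -> float:
    """Distance (km) implied by the delay between P and S arrivals."""
    # Each kilometre of travel adds (1/vs - 1/vp) seconds to the S-P lag.
    return sp_lag_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

print(round(distance_from_sp_lag(10.0), 1))  # ~84 km for a 10-second lag
```

The longer the lag, the farther the quake, exactly like counting seconds between lightning and thunder.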

Scientists then use a method called triangulation to determine exactly where the earthquake was (figure 6). It is called triangulation because a triangle has three sides, and it takes three seismographs to locate an earthquake. If you draw a circle on a map around three different seismographs where the radius of each is the distance from that station to the earthquake, the intersection of those three circles is the epicenter!
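The triangulation step can likewise be sketched in a few lines. Subtracting one circle equation from another cancels the squared unknowns, so three stations yield a small linear system; the station coordinates and quake location below are invented for illustration:

```python
import numpy as np

def locate_epicenter(stations, distances):
    """Trilaterate an epicenter from three stations (x, y) and their
    distances to the quake. Subtracting pairs of circle equations
    cancels the x^2 and y^2 terms, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Made-up station coordinates (km) and a quake placed at (30, 40).
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_epicenter = np.array([30.0, 40.0])
dists = [np.hypot(*(true_epicenter - s)) for s in np.array(stations)]
print(locate_epicenter(stations, dists))  # recovers [30. 40.]
```

In practice more than three stations are used and the system is solved by least squares, which averages out measurement noise.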

Can scientists predict earthquakes?

No, and it is unlikely they will ever be able to predict them. Scientists have tried many different ways of predicting earthquakes, but none have been successful. On any particular fault, scientists know there will be another earthquake sometime in the future, but they have no way of telling when it will happen.

 

Effects of earthquakes

Shaking and ground rupture

Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.

Specific local geological, geomorphological, and geostructural features can induce high levels of shaking at the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of seismic motion from hard deep soils to soft surface soils, and to the focusing of seismic energy caused by the typical geometry of the deposits.

Ground rupture is a visible breaking and displacement of the Earth’s surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure.

Landslides and avalanches

Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.

Fires

Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.

Soil liquefaction

Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.

Tsunami

Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water – including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600-800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
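The depth dependence follows from the shallow-water wave relation, speed ≈ √(g × depth). A short, idealized sketch (real tsunami speeds also depend on the bathymetry along the path):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed sqrt(g * depth), converted from m/s to km/h."""
    return math.sqrt(G * depth_m) * 3.6

# Over a 4,000 m deep ocean basin the wave moves at jet-airliner speed:
print(round(tsunami_speed_kmh(4000.0)))  # ~713 km/h
```

This is why a tsunami slows, and its waves pile up in height, as it enters shallow coastal water.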

Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter magnitude scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis, such as the 2011 Japan tsunami, are caused by earthquakes of magnitude 7.5 or more.

Floods

A flood is an overflow of water that reaches land. Floods usually occur when the volume of water within a body of water, such as a river or lake, exceeds its total capacity, so that some of the water flows or sits outside the normal perimeter of the body. Floods may also be secondary effects of earthquakes, if dams are damaged. Earthquakes may trigger landslides that dam rivers; when such a dam later collapses, it causes flooding.

The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.


Reference:
Wikipedia: Earthquake
USGS: The Science of Earthquakes
British Geological Survey: What is an earthquake?
Geoscience Australia: What is an Earthquake?

Machine learning used to predict earthquakes in a lab setting

Haiti Earthquake. Credit: United Nations Development Programme

A group of researchers from the UK and the US have used machine learning techniques to successfully predict earthquakes. Although their work was performed in a laboratory setting, the experiment closely mimics real-life conditions, and the results could be used to predict the timing of a real earthquake.

The team, from the University of Cambridge, Los Alamos National Laboratory and Boston University, identified a hidden signal leading up to earthquakes, and used this ‘fingerprint’ to train a machine learning algorithm to predict future earthquakes. Their results, which could also be applied to avalanches, landslides and more, are reported in the journal Geophysical Research Letters.

For geoscientists, predicting the timing and magnitude of an earthquake is a fundamental goal. Generally speaking, pinpointing where an earthquake will occur is fairly straightforward: if an earthquake has struck a particular place before, the chances are it will strike there again. The questions that have challenged scientists for decades are how to pinpoint when an earthquake will occur, and how severe it will be. Over the past 15 years, advances in instrument precision have been made, but a reliable earthquake prediction technique has not yet been developed.

As part of a project searching for ways to use machine learning techniques to make gallium nitride (GaN) LEDs more efficient, the study’s first author, Bertrand Rouet-Leduc, who was then a PhD student at Cambridge, moved to Los Alamos National Laboratory in New Mexico to start a collaboration on machine learning in materials science between Cambridge University and Los Alamos. From there the team started helping the Los Alamos Geophysics group on machine learning questions.

The team at Los Alamos, led by Paul Johnson, studies the interactions among earthquakes, precursor quakes (often very small earth movements) and faults, with the hope of developing a method to predict earthquakes. Using a lab-based system that mimics real earthquakes, the researchers used machine learning techniques to analyse the acoustic signals coming from the ‘fault’ as it moved and search for patterns.

The laboratory apparatus uses steel blocks to closely mimic the physical forces at work in a real earthquake, and also records the seismic signals and sounds that are emitted. Machine learning is then used to find the relationship between the acoustic signal coming from the fault and how close it is to failing.

The machine learning algorithm was able to identify a particular pattern in the sound, previously dismissed as mere noise, that occurs long before an earthquake. The characteristics of this sound pattern can be used to give a precise estimate (within a few percent) of the stress on the fault (that is, how much force it is under) and of the time remaining before failure, an estimate that becomes more and more precise as failure approaches. The team now thinks that this sound pattern is a direct measure of the elastic energy in the system at a given time.
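As a rough illustration of the idea, statistical features of a signal predicting time-to-failure, here is a toy sketch on synthetic data. The signal, the feature, and the plain least-squares model are all stand-ins; the actual study used laboratory acoustic emissions and more sophisticated machine-learning regressors:

```python
import numpy as np

# Toy stand-in for the lab approach: map a statistical feature of the
# acoustic signal (here, the standard deviation of each measurement
# window) to the time remaining before failure. The synthetic signal
# is invented for illustration only.
rng = np.random.default_rng(42)

n_windows = 200
time_to_failure = np.linspace(100.0, 0.0, n_windows)  # seconds until slip
# Assume acoustic amplitude grows steadily as the lab fault nears failure.
true_std = 1.0 + 0.05 * (100.0 - time_to_failure)
window_std = np.array([rng.normal(0.0, s, 500).std() for s in true_std])

# "Train" a one-feature linear model on the first 150 windows...
a, b = np.polyfit(window_std[:150], time_to_failure[:150], 1)
# ...then predict time-to-failure on the held-out final 50 windows.
pred = a * window_std[150:] + b
mean_abs_err = np.abs(pred - time_to_failure[150:]).mean()
print(f"mean prediction error: {mean_abs_err:.1f} s")
```

Even this crude fit tracks the countdown to failure, which is the essence of the result: the signal's statistics carry timing information that was previously written off as noise.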

“This is the first time that machine learning has been used to analyse acoustic data to predict when an earthquake will occur, long before it does, so that plenty of warning time can be given – it’s incredible what machine learning can do,” said co-author Professor Sir Colin Humphreys of Cambridge’s Department of Materials Science & Metallurgy, whose main area of research is energy-efficient and cost-effective LEDs. Humphreys was Rouet-Leduc’s supervisor when he was a PhD student at Cambridge.

“Machine learning enables the analysis of datasets too large to handle manually and looks at data in an unbiased way that enables discoveries to be made,” said Rouet-Leduc.

Although the researchers caution that there are multiple differences between a lab-based experiment and a real earthquake, they hope to progressively scale up their approach by applying it to real systems which most resemble their lab system. One such site is in California along the San Andreas Fault, where characteristic small repeating earthquakes are similar to those in the lab-based earthquake simulator. Progress is also being made on the Cascadia fault in the Pacific Northwest of the United States and British Columbia, Canada, where repeating slow earthquakes that occur over weeks or months are also very similar to laboratory earthquakes.

“We’re at a point where huge advances in instrumentation, machine learning, faster computers and our ability to handle massive data sets could bring about huge advances in earthquake science,” said Rouet-Leduc.

Reference:
Bertrand Rouet-Leduc et al, Machine Learning Predicts Laboratory Earthquakes, Geophysical Research Letters (2017). DOI: 10.1002/2017GL074677

Note: The above post is reprinted from materials provided by University of Cambridge.

Plume-subduction interaction forms large auriferous provinces

Lithospheric-scale processes involved in the precursor stage of formation of the Deseado Massif auriferous province. Stage A: plume activity during Early Jurassic related to the initial stages of Gondwana break-up induces metasomatic Au enrichment in the overlying SCLM and coeval partial melting. The inset shows the transfer of Au to the enriched domains and partial melting processes responsible for the early magmatic stages of the CA-SLIP. Stage B: onset of the subduction zone at the western margin of Gondwana provides fluids capable of scavenging Au from formerly enriched domains and generates calc-alkaline magmatism represented by the middle-late magmatic stages of the CA-SLIP that hosts the Au deposits. The inset shows the process of partial melting of enriched domains and Au transport to crustal levels; some portions of enriched lithosphere remain unmodified

Gold enrichment at the crustal or mantle source has been proposed as a key ingredient in the production of giant gold deposits and districts. However, the lithospheric-scale processes controlling gold endowment in a given metallogenic province remain unclear.

Here we provide the first direct evidence of native gold in the mantle beneath the Deseado Massif in Patagonia that links an enriched mantle source to the occurrence of a large auriferous province in the overlying crust. A precursor stage of mantle refertilisation by plume-derived melts generated a gold-rich mantle source during the Early Jurassic.

The interplay of this enriched mantle domain and subduction-related fluids released during the Middle-Late Jurassic resulted in optimal conditions to produce the ore-forming magmas that generated the gold deposits. Our study highlights that refertilisation of the subcontinental lithospheric mantle is a key factor in forming large metallogenic provinces in the Earth’s crust, thus providing an alternative view to current crust-related enrichment models.

The traditional notion of Au endowment in a given metallogenic province is that Au accumulates by highly efficient magmatic-hydrothermal enrichment processes operating in a chemically ‘average’ crust. However, more recent views point to anomalously enriched source regions and/or melts that are critical for the formation of Au provinces at a lithospheric scale. Within this perspective, Au-rich melts/fluids might originate from a mid or lower crust reservoir and later migrate through favourable structural zones to shallower crustal levels where the Au deposits form. Alternatively, the subcontinental lithospheric mantle (SCLM) may also play a role as a source of metal-rich magmas.

This model involves deep-seated Au-rich magmas that may infiltrate the edges of buoyant and rigid domains in the SCLM producing transient Au storage zones. Upon melting, the ascending magma scavenges the Au as it migrates towards the uppermost overlying crust. Discontinuities between buoyant and rigid domains in the SCLM provide the channelways for the uprising of Au-rich fluids or melts from the convecting underlying mantle, and when connected to the overlying crust by trans-lithospheric faults, a large Au deposit or well-endowed auriferous province can be formed. Thus, the generation of Au deposits in the crust may result from the conjunction in time and space of three essential factors: an upper mantle or lower crustal source region particularly enriched in Au, a transient remobilisation event and favourable lithospheric-scale plumbing structures.

The giant Ladolam Au deposit in Papua New Guinea gives a good single-deposit case example of this mechanism, since deep trans-lithospheric faults connect the crustal Au deposit directly with the mantle source, and similar Os isotopic compositions are exhibited by Au ores and metal-enriched peridotite of the underlying mantle. Despite this evidence, the genetic relation between a pre-enriched mantle source and the occurrence of gold provinces in the upper crust remains controversial, since limited evidence is available at a broader regional scale.

More detail >>

Reference:
Plume-subduction interaction forms large auriferous provinces. Santiago Tassara, José M. González-Jiménez, Martin Reich, Manuel E. Schilling, Diego Morata, Graham Begg, Edward Saunders, William L. Griffin, Suzanne Y. O’Reilly, Michel Grégoire, Fernando Barra & Alexandre Corgne. DOI:10.1038/s41467-017-00821-z

What is Plate Tectonics?

Plate Tectonics
The layer of the Earth we live on is broken into a dozen or so rigid slabs (called tectonic plates by geologists) that are moving relative to one another. Credit: USGS

Plate tectonics is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of the Earth’s lithosphere, since tectonic processes began on Earth between 3 and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s.

The lithosphere, which is the rigid outermost shell of a planet (the crust and upper mantle), is broken into tectonic plates. The Earth’s lithosphere is composed of seven or eight major plates (depending on how they are defined) and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries (or faults). The relative movement of the plates typically ranges from zero to 100 mm annually.

How do these massive slabs of solid rock float despite their tremendous weight?

The answer lies in the composition of the rocks. Continental crust is composed of granitic rocks which are made up of relatively lightweight minerals such as quartz and feldspar. By contrast, oceanic crust is composed of basaltic rocks, which are much denser and heavier. The variations in plate thickness are nature’s way of partly compensating for the imbalance in the weight and density of the two types of crust. Because continental rocks are much lighter, the crust under the continents is much thicker (as much as 100 km) whereas the crust under the oceans is generally only about 5 km thick. Like icebergs, only the tips of which are visible above water, continents have deep “roots” to support their elevations.
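The iceberg analogy can be made quantitative with the classic Airy model of isostasy: a light crustal column floats in denser mantle, so topography standing above its surroundings must be balanced by a proportionally deeper crustal root. The sketch below uses common textbook density values, which are illustrative assumptions rather than figures from this article.

```python
# Airy isostasy: a crustal column "floats" on the denser mantle, so a thicker,
# lighter continental column stands higher and extends deeper as a "root".
# Densities are common textbook approximations (illustrative, not from the article).
RHO_CONTINENT = 2700.0   # kg/m^3, granitic continental crust
RHO_MANTLE = 3300.0      # kg/m^3, upper mantle

def root_depth(elevation_km, rho_crust=RHO_CONTINENT, rho_mantle=RHO_MANTLE):
    """Depth of the compensating crustal root beneath topography of a given
    height, from the Airy balance rho_crust * h = (rho_mantle - rho_crust) * r."""
    return elevation_km * rho_crust / (rho_mantle - rho_crust)

# With these densities, 5 km of topography requires a root of
# 5 * 2700 / 600 = 22.5 km:
print(f"Root under 5 km of topography: {root_depth(5.0):.1f} km")
```

That factor of four to five between relief and root depth is why continents with only a few kilometres of elevation can carry crust approaching the thicknesses quoted above.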

How were oceanic plate boundaries mapped?

Most of the boundaries between individual plates cannot be seen, because they are hidden beneath the oceans. Yet oceanic plate boundaries can be mapped accurately from outer space by measurements from GEOSAT satellites. Earthquake and volcanic activity is concentrated near these boundaries. Tectonic plates probably developed very early in the Earth’s 4.6-billion-year history, and they have been drifting about on the surface ever since, like slow-moving bumper cars repeatedly clustering together and then separating.

Types of plate boundaries

Transform boundaries

Transform boundary

Transform boundaries (Conservative) occur where two lithospheric plates slide, or perhaps more accurately grind, past each other along transform faults, where plates are neither created nor destroyed. The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). Transform faults commonly offset segments of a spreading center. Strong earthquakes can occur along these faults. The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion.

Divergent boundaries

Divergent boundary

Divergent boundaries (Constructive) occur where two plates slide apart from each other. At zones of ocean-to-ocean rifting, divergent boundaries form by seafloor spreading, allowing a new ocean basin to form. As the oceanic plate splits, a ridge forms at the spreading center, the ocean basin expands, and the plate area increases, producing many small volcanoes and/or shallow earthquakes. At zones of continent-to-continent rifting, divergent boundaries may cause a new ocean basin to form as the continent splits and spreads, the central rift collapses, and the ocean fills the basin. Active mid-ocean ridges (e.g., the Mid-Atlantic Ridge and the East Pacific Rise) and zones of continent-to-continent rifting (such as Africa’s East African Rift and Valley, and the Red Sea) are examples of divergent boundaries.

Convergent boundaries

Convergent boundary

A convergent boundary, also known as a destructive plate boundary, is a region of active deformation where two or more tectonic plates or fragments of the lithosphere move toward one another and collide. This is in contrast to a constructive plate boundary (also known as a mid-ocean ridge or spreading center). As a result of pressure, friction, and plate material melting in the mantle, earthquakes and volcanoes are common near destructive boundaries, where subduction zones or areas of continental collision (depending on the nature of the plates involved) occur. The subducting plate in a subduction zone is normally oceanic crust, and moves beneath the other plate, which can be made of either oceanic or continental crust. During collisions between two continental plates, large mountain ranges such as the Himalayas are formed. In other regions, a divergent boundary or transform faults may be present.


Reference:
Wikipedia: Plate tectonics
USGS: What is a tectonic plate?
Wikipedia: Transform fault
Wikipedia: Divergent boundary
Wikipedia: Convergent boundary

New tyrannosaur fossil is most complete found in Southwestern US

Toe bones, the upper jaw and snout of the fossilized remains of a tyrannosaur skeleton
Toe bones, the upper jaw and snout of the fossilized remains of a tyrannosaur skeleton found in Grand Staircase-Escalante National Monument. The skeleton is the most complete of its kind found in the Southwest United States. Credit: Mark Johnston/NHMU

A remarkable new fossilized skeleton of a tyrannosaur discovered in the Bureau of Land Management’s Grand Staircase-Escalante National Monument (GSENM) in southern Utah was airlifted by helicopter Sunday, Oct 15, from a remote field site, and delivered to the Natural History Museum of Utah where it will be uncovered, prepared, and studied. The fossil is approximately 76 million years old and is most likely an individual of the species Teratophoneus curriei, one of Utah’s ferocious tyrannosaurs that walked western North America between 66 and 90 million years ago during the Late Cretaceous Period.

“With at least 75 percent of its bones preserved, this is the most complete skeleton of a tyrannosaur ever discovered in the southwestern US,” said Dr. Randall Irmis, curator of paleontology at the Museum and associate professor in the Department of Geology and Geophysics at the University of Utah. “We are eager to get a closer look at this fossil to learn more about the southern tyrannosaur’s anatomy, biology, and evolution.”

GSENM Paleontologist Dr. Alan Titus discovered the fossil in July 2015 in the Kaiparowits Formation, part of the central plateau region of the monument. Particularly notable is that the fossil includes a nearly complete skull. Scientists hypothesize that this tyrannosaur was buried either in a river channel or by a flooding event on the floodplain, keeping the skeleton intact.

“The monument is a complex mix of topography — from high desert to badlands — and most of the surface area is exposed rock, making it rich grounds for new discoveries,” said Titus. “And we’re not just finding dinosaurs, but also crocodiles, turtles, mammals, amphibians, fish, invertebrates, and plant fossils — remains of a unique ecosystem not found anywhere else in the world.”

Although many tyrannosaur fossils have been found over the last one hundred years in the northern Great Plains region of the northern US and Canada, until relatively recently, little was known about them in the southern US. This discovery, and the resulting research, will continue to cement the monument as a key place for understanding the group’s southern history, which appears to have followed a different path than that of their northern counterparts.

This southern tyrannosaur fossil is thought to be a sub-adult individual, 12-15 years old, 17-20 feet long, and with a relatively short head, unlike the typically longer-snouted look of northern tyrannosaurs.

Collecting such fossils from the monument can be unusually challenging. “Many areas are so remote that often we need to have supplies dropped in and the crew hikes in,” said Irmis. For this particular field site, Museum and monument crews back-packed in, carrying all of the supplies they needed to excavate the fossil, such as plaster, water and tools to work at the site for several weeks. The crews conducted a three-week excavation in early May 2017, and continued work during the past two weeks until the specimen was ready to be airlifted out.

Irmis said that, with the help of dedicated volunteers, it took approximately 2,000 to 3,000 person-hours to excavate the site, and he estimates at least 10,000 hours of work remain to prepare the specimen for research. “Without our volunteer team members, we wouldn’t be able to accomplish this work. We absolutely rely on them throughout the entire process,” said Irmis.

Irmis says that this new fossil find is extremely significant. Whether it is a new species or an individual of Teratophoneus, the new research will provide important context as to how this animal lived. “We’ll look at the size of this new fossil, its growth pattern and biology, reconstruct muscles to see how the animal moved, how fast it could run, and how it fed with its jaws. The possibilities are endless and exciting,” said Irmis.

During the past 20 years, crews from the Natural History Museum of Utah and GSENM have unearthed more than a dozen new species of dinosaurs in GSENM, with several additional species awaiting formal scientific description. Some of the finds include another tyrannosaur named Lythronax, and a variety of other plant-eating dinosaurs — among them duck-billed hadrosaurs, armored ankylosaurs, dome-headed pachycephalosaurs, and a number of horned dinosaurs, such as Utahceratops, Kosmoceratops, Nasutoceratops, and Machairoceratops. Other fossil discoveries include fossil plants, insect traces, snails, clams, fishes, amphibians, lizards, turtles, crocodiles, and mammals. Together, this diverse bounty of fossils offers one of the most comprehensive glimpses into a Mesozoic ecosystem. Remarkably, virtually all of the dinosaur species found in GSENM appear to be unique to this area, and are not found anywhere else on Earth.

Note: The above post is reprinted from materials provided by University of Utah.

Drilling into the mysteries of seismic activity

Drill ship Fugro Synergy
Drill ship Fugro Synergy. Credit: CCotteril@ECORD_IODP

An international expedition aims to better understand seismic activity through samples collected from one of the most geologically active areas in Europe.

More than 30 scientists, including Dr Richard Collier from the University of Leeds, will be participating in an expedition which will analyse data gathered from a tear in the ocean floor – the Corinth Rift.

The rift is caused by one of the Earth’s tectonic plates being ripped apart causing such geological hazards as earthquakes.

The overall aim of the project is to gain insight into the rifting process by collecting sediment cores and compiling data from the samples on their geological history, composition, age and structure.

The research vessel, DV Fugro Synergy, will launch in late October to collect the cores at three different locations with drilling going to a depth of 750 metres below the seabed.

Dr Collier, from the School of Earth and Environment at Leeds said: “The Corinth Rift provides a unique laboratory in one of the most seismically active areas in Europe. It is a relatively young tectonic feature having only formed in the last five million years. It is an ideal location to learn more about early rift development and how tectonics affect the landscape.

“The cores will also allow us to determine the relative impacts of sea level change and climate change through time on the transfer of sediment from the surrounding landscape to the basin floor.

“The opportunity to quantify these competing controls on rift sedimentation for the first time makes this project particularly exciting. By increasing our understanding of this particular rift, we may be better able to predict seismic hazards in other areas and inform the hunt for sediment bodies in other parts of the world that might contain hydrocarbons.”

Researchers have been working in the Gulf of Corinth region for many decades – examining sediments and active fault traces exposed on land and using marine geophysics to image the basin and its structure below the seafloor. But there is very little information about the age of the sediments and of the environment of the rift in the last one to two million years.

The core samples collected and analysed by the team will help answer such questions as: What are the implications for earthquake activity in a developing rift? How does the rift actually evolve and grow and on what timescale? How did the activity on faults change with time? How does the landscape respond to tectonic and climatic changes? And what was the climate and the environment of the rift basin in the last one to two million years?

Co-chief scientist of the expedition, Professor Lisa McNeill from the University of Southampton, said: “By drilling, we hope to find this last piece of the jigsaw puzzle. It will help us to unravel the sequence of events as the rift has evolved and, importantly, how fast the faults, which regularly generate damaging earthquakes, are slipping.”

The 33 scientists involved in the expedition are from Australia, Brazil, China, France, Germany, Greece, India, Norway, Spain, the United States, and the United Kingdom and cover a range of different geoscience disciplines.

Nine of them will sail onboard the drill ship Fugro Synergy from October to December of this year. After the offshore phase in the Gulf of Corinth the entire team, including Dr Collier, will meet for the first time at the IODP Bremen Core Repository (BCR), located at MARUM – Center for Marine Environmental Sciences at the University of Bremen, Germany. There they will spend a month splitting, analysing and sampling the cores and reviewing the data collected.

Note: The above post is reprinted from materials provided by University of Leeds.

Database eyes human role in earthquakes

Seismogram
Seismogram being recorded by a seismograph at the Weston Observatory in Massachusetts, USA. Credit: Wikipedia

A new database showcasing hundreds of examples of human-triggered earthquakes should shake up policy-makers, regulators and industry executives looking to mitigate these unacceptable hazards caused by our own actions, according to a Western Earth Sciences professor.

“More and more, we are recognizing how many earthquakes are actually human-induced,” said Gail Atkinson, Industrial Research Chair in Hazards from Induced Seismicity at Western.

“Researchers at the U.S. Geological Survey are now raising the possibility many of the large, well-known earthquakes in California that happened over the 1930s-50s – like the Long Beach Earthquake (in 1933) or the Kern County Earthquake (in 1952), which was a magnitude of 7.5 – may have been induced by oil-production in southern California at the time,” she explained.

Atkinson’s research group is studying this phenomenon of human-triggered earthquakes – or, induced seismicity – in western Canada, with a particular focus in Alberta. Her team has found evidence showing a significant increase in the number of earthquakes in the last five years or so in the active region. More than half of those appear to be related to hydraulic fracturing.

These findings are included in the new Human-Induced Earthquake Database – or HiQuake – which contains 728 examples of earthquakes (or sequences of earthquakes) that may have been set off by humans over the past 149 years.

While her team has uncovered evidence linking hydraulic fracturing to an increase in earthquakes, research also suggests a link between earthquakes and wastewater disposal in Alberta.

“There’s only a relatively small fraction of earthquakes that are purely tectonic or natural – so most of the seismicity we see in western Alberta and eastern British Columbia appears to be related to the oil and gas industry,” Atkinson noted.

“And that’s been raising a whole host of new issues in terms of how we should be planning and regulating hydraulic fracturing and oil and gas activity so we’re not causing unacceptable hazards – from seismic activity in particular – and ensuring we don’t conduct fracturing operations close to major infrastructure, such as major dams or critical facilities that we don’t want to damage.”

With all these findings of human-induced seismicity emerging, and a new encyclopedic database storing the instances, researchers have been trying to bridge the gap between science and public policy in order to mitigate damage caused by human-triggered earthquakes, she continued.

“We’ve been trying to translate that knowledge into suggested guidelines, for example, for exclusion zones around critical infrastructures. We’ve suggested there shouldn’t be any hydraulic fracturing within 5 km of major dams or critical infrastructure,” Atkinson said.

“That’s the beginning. We’re working with regulators and policy-makers to try to get those ideas out there. The ideas are gaining traction. With some of the larger players – oil companies, Canadian associations for petroleum producers, and so on – if we can get them to start building that kind of thinking into best practices, that might actually be more achievable than regulation, which seems difficult to enforce. We’ve certainly started a dialogue; we have people talking. But how to translate findings into concrete policy, that is going to take time.”

Having something like HiQuake compile all documented instances of human-triggered earthquakes in one place makes it easier for researchers when they try to conduct studies establishing links between factors, Atkinson continued, adding this establishes the possibility of, at the very least, mitigating damage caused by such events.

“Unlike with natural earthquake hazards, we can do something about this. That’s what really motivates us. Whereas, with natural hazards, you can’t do anything about it, other than be prepared. You can’t stop an earthquake from happening; you can’t predict where it might happen. Similarly, with other natural disasters like hurricanes, you can be prepared, but you can’t stop it.

“This is something within our power to control. We really do have an opportunity here to make sure we don’t cause a major environmental disaster through actions we’ve taken that we didn’t need to take.”

Note: The above post is reprinted from materials provided by University of Western Ontario.

New research proves that birds and flying reptiles were friends, not foes

Credit: Jeff Kubina/Flickr

New Macquarie University research, published in the journal Proceedings of the Royal Society B, has shown that birds and pterosaurs did, in fact, co-exist peacefully for millions of years, as opposed to the long-held belief that birds competitively displaced pterosaurs.

It had previously been suggested that birds and pterosaurs competed with each other during the Cretaceous, a period more than 65 million years ago, and that this led to pterosaurs evolving larger body sizes to avoid competition with the smaller birds. However, after comparing jaw sizes, limb proportions and other functional characteristics not explored in previous studies, lead author Dr Nicholas Chan says this is not the case.

The research used morphospaces, a way of mapping the forms of organisms, and found distinct ecological separation between the two groups based on size, features of the wings and legs, and feeding adaptations. In other words, this suggests that the two were not long-term competitors. Had the two been in direct competition, birds would be expected to have driven pterosaurs to evolve into larger species in order to avoid competing for resources.

“Any competition between the two groups was likely localised over relatively short periods of time,” says Dr Chan, from the Department of Biological Sciences at Macquarie University.

“While previous research only compared the limb bones of the two groups, our research compared jaw lengths and wing and leg proportions in order to determine functionally-equivalent traits, and found that there was very little ecomorphological overlap between the two.”

“The difference in the species functional morphology means that both groups co-existed without ongoing competition. Birds had shorter mid-wings, longer metatarsals, and shorter jaws. So they likely flew, walked, and fed differently from pterosaurs.”

Reference:
Nicholas R. Chan. Morphospaces of functionally analogous traits show ecological separation between birds and pterosaurs, Proceedings of the Royal Society B: Biological Sciences (2017). DOI: 10.1098/rspb.2017.1556

Note: The above post is reprinted from materials provided by Macquarie University.

Related Articles