28 Projects, page 1 of 3
- Project, 2011 - 2015. Funder: UKRI. Project Code: NE/I009906/1. Funder Contribution: 625,765 GBP. Partners: University of Southampton, Willis Limited, University of Ottawa, UKCIP, EA, AUSTRALIAN NATIONAL UNIVERSITY
The vulnerability of extensive near-coastal habitation, infrastructure, and trade makes global sea-level rise a major concern for society. The UK coastline, for example, has ~£150 billion of assets at risk from coastal flooding, of which £75 billion is in London alone. Consequently, most nations have developed and implemented protection plans, which commonly use ranges of sea-level rise estimates from global warming scenarios such as those published by the IPCC, supplemented by worst-case values from limited geological studies. UKCP09 provides the most up-to-date guidance on UK sea-level rise scenarios and includes a low-probability, high-impact range for maximum UK sea-level rise for use in contingency planning and in considerations regarding the limits to potential adaptation (the H++ scenario). UKCP09 emphasises that the H++ scenario is unlikely for the next century, but it does introduce significant concerns when planning for longer-term future sea-level rise. Currently, the range for H++ is set to 0.9-1.9 m of rise by the end of the 21st century. This range of uncertainty is large (with vast planning and financial implications), and - more critically - it has no robust statistical basis. It is important, therefore, to better understand the processes controlling the maximum sea-level rise estimate for the future on these time-scales. This forms the overarching motivation for the consortium project proposed here. iGlass is a broad-ranging interdisciplinary project that will integrate field data and modelling in order to study the response of ice volume/sea level to different climate states during the last five interglacials, which include times with significantly higher sea level than the present. This will identify the likelihood of reduced ice cover over Greenland and West Antarctica, an important constraint on future sea-level projections. 
A key outcome will be to place sound limits on the likely ice-volume contribution to maximum sea-level rise estimates for the future. Our project is guided by three key questions: Q1. What do palaeo-sea level positions reveal about the global ice-volume/sea-level changes during a range of different interglacial climate states? Q2. What were the rates of sea-level rise in past interglacials, and to what extent are these relevant for future change, given the different climate forcing? Q3. Under a range of given (IPCC) climate projection scenarios, what are the projected limits to maximum sea-level rise over the next few centuries when accounting for ice-sheet contributions? The research will directly inform decision-making processes regarding flood risk management in the UK and abroad. In this respect, the project benefits from the close co-operation with scientists and practitioners in the UK Environment Agency, UKCIP, the UK insurance industry, as well as the wider global academic and user communities.
- Project, 2011 - 2017. Funder: UKRI. Project Code: NE/I027282/1. Funder Contribution: 612,995 GBP. Partners: University of Bristol, DFO, University of Wisconsin–Oshkosh, University of Waterloo (Canada)
Methane is a powerful long-lived greenhouse gas that is second only to carbon dioxide in its radiative forcing potential. Understanding the Earth's methane cycle at regional scales is a necessary step for evaluating the effectiveness of methane emission reduction schemes, detecting changes in biological sources and sinks of methane that are influenced by climate, and predicting and perhaps mitigating future methane emissions. The growth rate of atmospheric methane has slowed since the 1990s but it continues to show considerable year-to-year variability that cannot be adequately explained. Some of the variability is caused by the influence of weather on systems in which methane is produced biologically. When an anomalous increase in atmospheric methane is detected in the northern hemisphere that links to warm weather conditions, wetlands and peatlands are typically thought to be the cause. However, small lakes and ponds are commonly overlooked as potential major sources of methane emissions. Lakes historically have been regarded as minor emitters of methane because diffusive fluxes during summer months are negligible. This notion has persisted until recently even though measurements beginning in the 1990s have consistently shown that significant amounts of methane are emitted from northern lakes during spring and autumn. In winter the ice cover isolates lake water from the atmosphere and the water column becomes oxygen-poor and stratified. Methane production increases in bottom sediment and the gas spreads through the water column, with some methane-rich bubbles rising upwards and becoming trapped in the ice cover as it thickens downward in late winter. In spring, when the ice melts, the gas is released. Through changes in temperature and the influence of wind the lake water column mixes and deeper accumulations of methane are lost to the atmosphere. In summer the water column stratifies again and methane accumulates once more in the bottom sediments. 
When the water column becomes thermally unstable in the autumn and eventually overturns, the deep methane is once again released, although a greater proportion of it appears to be consumed by bacteria in the autumn. Lakes differ in the chemistry of their water as well as the geometry of their basins. Thus it is difficult to be certain that all lakes will behave in this way, but for many it seems likely. The proposed study will measure the build-up of methane in lakes during spring and autumn across a range of ecological zones in North America. The focus will be on spring build-up and emissions because that gas is the least likely to be influenced by methane-consuming bacteria. However, detailed measurements of methane emissions will also be made in the autumn at a subset of lakes. The measurements will then be scaled to a regional level using remote sensing data, providing a 'bottom-up' estimate of spring and autumn methane fluxes. Those results will be compared to a 'top-down' estimate determined using a Met Office dispersion model that back-calculates the path of air masses for which the concentration of atmospheric methane has been measured at global monitoring stations, in order to determine how much methane had to be added to the air during its passage through a region. Comparing estimates by these two approaches will provide independent assessments of the potential impact of seasonal methane fluxes from northern lakes. In addition, the stable isotopes of carbon and hydrogen in methane and of hydrogen in water will be measured to evaluate their potential use as tracers for uniquely identifying methane released by lakes at different latitudes. If successful, the proposed study has the potential to yield a step-change in our perception of the methane cycle by demonstrating conclusively that a second major weather-sensitive source of biological methane contributes to year-to-year shifts in the growth rate of atmospheric methane.
- Project, 2011 - 2014. Funder: UKRI. Project Code: NE/H024301/1. Funder Contribution: 716,274 GBP. Partners: UU, University of Ottawa, University of Maine, TCD, Geological Survey of Ireland
Relative sea level (RSL) change reflects the interplay between a large number of variables operating at scales from global to local. Changes in RSL around the British Isles (BI) since the height of the last glaciation (ca. 24 000 years ago) are dominated by two key variables: (i) the rise of ocean levels caused by climate warming and the melting of land-based ice; and (ii) the vertical adjustment of the Earth's surface due to the redistribution of this mass (unloading of formerly glaciated regions and loading of the ocean basins and margins). As a consequence, RSL histories vary considerably across the region once covered by the British-Irish Ice Sheet (BIIS). The variable RSL history means that the BI is a globally important location for studying the interactions between land, ice and the ocean during the profound and rapid changes that followed the last glacial maximum. The BI RSL record is an important yardstick for testing global models of land-ice-ocean interactions and this in turn is important for understanding future climate and sea level scenarios. At present, the observational record of RSL change in the British Isles is limited to shallow water areas because of accessibility, and only the later part of the RSL curve is well studied. In Northern Britain, where the land has been rising most, RSL indicators are close to or above present sea level and the RSL record is most complete. In southern locations, where uplift has been less, sea level was below the present for long periods of time but there is very little data on RSL position. There are varying levels of agreement between models and existing field data and we cannot be certain of model projections of former low sea levels. Getting the models right is important for understanding the whole global pattern of land-ice-ocean interactions in the past and into the future. 
To gather the missing data and thus improve the utility of the British RSL curves for testing earth-ice-ocean models, we will employ a specialised, interdisciplinary approach that brings together a unique multidisciplinary team of experts. We have carefully selected sites where evidence of former sea levels is definitely preserved and we will use existing seabed geological data in British and Irish archives to plan our investigations. The first step is marine geophysical profiling of submerged seabed sediments and mapping of surface geomorphological features on the seabed. These features include the (usually) erosional surface (unconformity) produced by the rise in sea level, and surface geomorphological features that indicate former shorelines (submerged beaches, barriers and deltas). These allow us to identify the position (but not the age) of lower than present sea levels. The second step is to use this stratigraphic and geomorphological information to identify sites where we will take cores to acquire sediments and organic material from low sea-level deposits. We will analyse the sediments and fossil content of the cores to find material that can be closely related to former sea levels and radiocarbon dated. The third step in our approach is to extend the observed RSL curves using our new data and compare this to model predictions of RSL. We can then modify the parameters in the model to obtain better agreement with observations and thus better understand the earth-ice-ocean interactions. These data are also important for understanding the palaeogeography of the British Isles. Our data will allow a first order reconstruction of former coastlines, based upon the modern bathymetry, for different time periods during the deglaciation. This is of particular importance to the presence or absence of potential landbridges that might have enabled immigration to Ireland of humans and animals. 
They will also allow us to identify former land surfaces on the seabed. The palaeogeography is crucial to understanding the evolving oceanographic circulation of the Irish Sea.
- Project, 2011 - 2015. Funder: UKRI. Project Code: NE/I012915/1. Funder Contribution: 401,388 GBP. Partners: University of Turku, LANDCARE RESEARCH, University of Exeter, University of Victoria, Lehigh University, GTK, University of Hawaiʻi Sea Grant, University of Quebec
Future climate change is one of the most challenging issues facing humankind and an enormous research effort is directed at attempting to construct realistic projections of 21st century climate based on underlying assumptions about greenhouse gas emissions. Climate models now include many of the components of the earth system that influence climate over a range of timescales. Understanding and quantifying earth system processes is vital to projections of future climate change because many processes provide 'feedbacks' to climate change, either reinforcing upward trends in greenhouse gas concentrations and temperature (positive feedbacks) or sometimes damping them (negative feedbacks). One key feedback loop is formed by the global carbon cycle, part of which is the terrestrial carbon cycle. As carbon dioxide concentrations and temperatures rise, carbon sequestration by plants increases but at the same time, increasing temperatures lead to increased decay of dead plant material in soils. Carbon cycle models suggest that the balance between these two effects will lead to a strong positive feedback, but there is a very large uncertainty associated with this finding and this process represents one of the biggest unknowns in future climate change projections. In order to reduce these uncertainties, models need to be validated against data such as records for the past millennium. Furthermore, it is extremely important to make sure that the models are providing a realistic representation of the global carbon cycle and include all its major component parts. Current models exclude any consideration of the reaction of peatlands to climate change, even though these ecosystems contain almost as much carbon as the global atmosphere and are potentially sensitive to climate variability. 
On the one hand, increased warmth may increase respiration and decay of peat and on the other hand, even quite small increases in productivity may compensate for this or even exceed it in high latitude peatlands. A further complication is that peatlands emit quite large quantities of methane, another powerful greenhouse gas. Our proposed project aims to assess the contribution of peatlands to the global carbon cycle over the past 1000 years by linking together climate data and climate model output with models that simulate the distribution and growth of peatlands on a global scale. The models will also estimate changes in methane emissions from peatlands. In particular, we will test the hypotheses that warmth leads to lower rates of carbon accumulation and that this means that globally, peatlands will sequester less carbon in future than they do now. We will also test whether future climate changes lead to a positive or negative feedback from peatland methane emissions. To determine how well our models can simulate the peatland-climate links, we will test the model output for the last millennium against fossil data of peat growth rates and hydrological changes (related to methane emissions). To do this, we will assemble a large database of published information but also new data acquired in collaboration with partners from other research organisations around the world who are involved in collecting information and samples that we can make use of once we undertake some additional dating and analyses. Once the model has been evaluated against the last millennium data, we will make projections of the future changes in the global carbon cycle that may occur as a result of future climate change. This will provide a strong basis for making a decision on the need to incorporate peatland dynamics into the next generation of climate models. Ultimately we expect this to reduce uncertainty in future climate change predictions.
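The balance described above, with warming boosting both plant productivity and peat decay, can be caricatured with a toy equilibrium model. This is a minimal sketch, not the project's model, and every parameter value below is invented for illustration:

```python
# Toy illustration of the productivity-versus-decay balance: warming
# increases carbon input (NPP) gently but increases decay faster, so
# the equilibrium carbon stock falls. All parameters are invented.
def equilibrium_carbon(temp, npp0=1.0, k0=0.01, q10_npp=1.1, q10_decay=2.0):
    """Equilibrium carbon stock where input (NPP) balances decay k * stock.
    Q10 factors give the fractional change per 10 degrees of warming."""
    npp = npp0 * q10_npp ** (temp / 10.0)   # productivity rises gently
    k = k0 * q10_decay ** (temp / 10.0)     # decay rate rises faster
    return npp / k                          # stock where input == k * stock

c_now = equilibrium_carbon(0.0)
c_warm = equilibrium_carbon(4.0)
print(c_warm < c_now)  # True: with these numbers decay wins
```

With decay more temperature-sensitive than productivity the feedback is positive; reversing the two Q10 values flips the sign, which is exactly the uncertainty the project addresses.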
- Project, 2011 - 2015. Funder: UKRI. Project Code: EP/J003247/1. Funder Contribution: 359,554 GBP. Partners: Maplesoft, University of Bath
Connectedness, as in "can we get there from here", is a fundamental concept, both in actual space and in various abstract spaces. Consider a long ladder in a right-angled corridor: can it get round the corner? Calling it a corridor implies that it is connected in actual three-dimensional space. But if we consider the space of configurations of the ladder, this is determined by the position and orientation of the ladder, and the `corridor' is now the requirement that no part of the ladder run into the walls - it is not sufficient that the ends of the ladder be clear of the walls. If the ladder is too long, it may have two feasible positions, one in each arm of the corridor, but there may be no possible way to get from one to the other. In this case we say that the configuration space of the ladder is not connected: we can't get the ladder there from here, even though we can get each end (taken separately, which is physically impossible) from here to there. Connectedness in configuration space is therefore the key to motion planning. These are problems human beings (especially furniture movers, or people trying to park cars in confined spaces) solve intuitively, but find very hard to explain. Note that the ladder is rigid and three-dimensional, hence its position is determined by the coordinates of three points on it, so configuration space is nine-dimensional. Connectedness in mathematical spaces is also important. The square root of 4 can be either 2 or -2: we have to decide which. Similarly, the square root of 9 can be 3 or -3. But, if 4 is connected to 9 in our problem space (whatever that is), we can't make these choices independently: our choice has to be consistent along the path from 4 to 9. When it is impossible to make such decisions totally consistently, we have what mathematicians call a `branch cut' - the classic example being the International Date Line, because it is impossible to assign `day' consistently round a globe. 
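The square-root consistency problem above can be seen numerically. Python's `cmath` uses the principal branch, whose cut runs along the negative real axis, so values chosen just above and just below the cut disagree. A minimal illustration, not part of the project's software:

```python
import cmath

# Principal square root: the branch cut lies along the negative real
# axis. Approaching -4 from just above vs. just below the cut gives
# values near +2j and -2j: the choice of root cannot be continued
# consistently across the cut.
above = cmath.sqrt(complex(-4.0, 1e-12))   # just above the cut
below = cmath.sqrt(complex(-4.0, -1e-12))  # just below the cut
print(above)  # approximately 2j
print(below)  # approximately -2j

# Along a path avoiding the cut, the choice sqrt(4) = 2 continues
# consistently to sqrt(9) = 3.
print(cmath.sqrt(4), cmath.sqrt(9))
```

The jump from roughly +2j to -2j across the cut is the discontinuity that, on a globe, corresponds to the International Date Line.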
In previous work, we have shown that several mathematical paradoxes reduce to connectedness questions in an appropriate space divided by the relevant branch cuts. This is an area of mathematics which is notoriously difficult to get right by hand, and mathematicians, and software packages, often have internal inconsistencies when it comes to branch cuts. The standard computational approach to connectedness, which has been suggested in motion planning since the early 1980s, is via a technique called cylindrical algebraic decomposition. This has historically been computed via a "bottom-up" approach: we first analyse one direction, say the x-axis, decomposing it into all the critical points and intermediate regions necessary, then we take each (x,y)-cylinder above each critical point or region, and decompose it, then each (x,y,z) above each of these regions, and so on. Not only does this sound tedious, but it is inevitably tedious - the investigators and others have shown that the problem is extremely difficult (doubly exponential in the number of dimensions). Much of the time, notably in motion planning, we are not actually interested in the lower-dimensional components, since they would correspond to a motion with no degrees of freedom, rather like tightrope-walking. Recent Canadian developments have shown an alternative way of computing such decompositions via so-called triangular decompositions, and a 2010 paper (Moreno Maza in Canada + Davenport) has shown that the highest-dimensional components of a triangular decomposition can be computed in singly-exponential time. This therefore opens up the prospect, which we propose to investigate, of computing the highest-dimensional components of a cylindrical decomposition in singly-exponential time, which would be a major breakthrough in computational geometry.
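The first, one-dimensional step of the "bottom-up" construction described above can be sketched in a few lines: decompose the x-axis into critical points (here, real roots of a polynomial) and the open intervals between them. This illustrative function covers only that base phase, under the assumption that roots stand in for the critical points; real systems such as QEPCAD then lift these cells through each further variable:

```python
import numpy as np

def decompose_axis(coeffs):
    """Split the real line into 0-dimensional cells (the real roots of
    the polynomial given by coeffs) and 1-dimensional cells (the open
    intervals between them). Only the first step of a cylindrical
    algebraic decomposition; the lifting phases are omitted."""
    roots = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)
    if not roots:
        return [("interval", -np.inf, np.inf)]
    cells = [("interval", -np.inf, roots[0])]
    for i, r in enumerate(roots):
        cells.append(("point", r, r))
        upper = roots[i + 1] if i + 1 < len(roots) else np.inf
        cells.append(("interval", r, upper))
    return cells

# x^2 - 1 has critical points at -1 and 1, giving five cells.
cells = decompose_axis([1, 0, -1])
print(len(cells))  # 5
```

The 0-dimensional "point" cells are exactly the lower-dimensional components that, as the text notes, motion planning can often discard: they correspond to motions with no degrees of freedom.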
- Project, 2011 - 2012. Funder: UKRI. Project Code: NE/I015647/1. Funder Contribution: 53,059 GBP. Partners: DRDC, NOC
In December 1995 a group of marine scientists and technologists met to define the scope of a thematic programme proposal that would demonstrate the utility of Autonomous Underwater Vehicles (AUVs) for ocean science. The aim of the programme was to tackle questions that could only be answered using the unique features of such vehicles. The 'Autosub Science Missions' programme funded the development of the Autosub1, Autosub2 and Autosub3 autonomous underwater vehicles. Today, this vision is pursued in the Oceans2025 science programme, in which NERC is currently funding the development of Autosub6000 and Autosub Long Range - the aim here is to go deeper and longer. Such platforms will help the U.K. maintain its position as one of the world's leaders in ocean science. Estimating AUV reliability is of paramount importance for deployments in hazardous and complex environments. Reliability is the probability that a system will perform its specified function over a given period of time under defined environmental conditions. AUV reliability is influenced by many factors: human errors play a critical role, but other challenges arise from severe operational conditions, from AUV components that were not designed to operate in such conditions, and from vehicle components that were not designed to work together. As a result, we cannot ignore the fact that measuring AUV reliability must be based on experts' subjective risk assessments. If we are going to use experts' risk assessments we must follow a formal process. A formal elicitation process will enable transparency and repeatability of the assessment. To tackle this problem, the Underwater Systems Laboratory (USL) created a risk and reliability management process tailored to AUV operations (RMP-AUV). This project aims to validate existing methods and to develop new, more detailed methods for estimating AUV operational risk. 
The new risk models will quantify the effects of fault or incident mitigation on estimates of risk of loss and on risk of non-delivery of data. The aim is to derive new models to measure the reliability growth of AUVs. The new methods will be based on Bayesian statistics. This is a mathematical method in which the prior belief in a proposition (in our case, a risk estimate) is updated according to the likelihood of new observations. Through collaboration with AUV manufacturer International Submarine Engineering (ISE) and Defence Research and Development Canada (DRDC) we have a rare, time-limited opportunity to use an extensive data set on the faults and incidents with an ISE Explorer AUV. Furthermore, our partners are eager to co-develop, test and apply risk mitigation tracking and modelling methods within their high impact project in support of Canada's UNCLOS Article 76 submission. The models would be tested with reliability data already gathered, with tracking of faults from the April 2010 Arctic campaign, engineering rework, and a 2011 Arctic expedition.
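The Bayesian updating idea can be sketched with a conjugate Beta-Binomial model: an elicited prior belief about the per-mission failure probability is revised as mission outcomes are observed. The prior and the mission counts below are invented for illustration and are not drawn from the ISE Explorer dataset:

```python
# Conjugate Beta-Binomial sketch of Bayesian reliability updating.
# An expert prior Beta(alpha, beta) over the per-mission failure
# probability is updated with observed failures and successes.
def update_failure_belief(alpha, beta, failures, successes):
    """Posterior Beta parameters after observing mission outcomes."""
    return alpha + failures, beta + successes

def mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Invented expert prior: roughly 1 failure in 10 missions, Beta(1, 9).
a, b = 1.0, 9.0
# An invented campaign: 20 missions, 1 fault.
a, b = update_failure_belief(a, b, failures=1, successes=19)
print(round(mean(a, b), 3))  # posterior mean failure probability
```

As campaigns accumulate, the data increasingly dominate the elicited prior, which is one way reliability growth can be tracked over successive expeditions.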
- Project, 2011 - 2017. Funder: UKRI. Project Code: NE/I028017/1. Funder Contribution: 817,613 GBP. Partners: University of Leeds, University of Manitoba, Massachusetts Institute of Technology, USA, KOERI
The Earth's surface is broken into numerous tectonic plates, which are continually moving. The movement of the plates relative to each other is the source for most earthquake activity on Earth, which is typically focussed into narrow fault zones where the plates collide, pull apart, or slide past each other. Within the fault zones the deformation in the upper 10-15 km of the Earth's crust is localised onto narrow fault planes. Earthquakes occur when the stresses on the fault planes caused by plate motions overcome frictional resistance, and these represent a significant hazard for communities living in fault zones - in the first decade of the 21st century alone, earthquakes killed 700,000 people. In strike-slip fault zones, where plates slide past each other, earthquakes typically only break the upper crust. We know that the lower crust (deeper than 10-15 km) must be deforming continuously, because we can measure how the ground surface deforms between earthquakes. But because rock samples or other direct measurements cannot easily be obtained from these depths, we have a poor understanding of how the lower crust behaves and influences the loading of stresses in the upper crust to cause major earthquakes. We propose an inter-disciplinary project with the aim of understanding the earthquake loading cycle (how stresses build through plate motions and are released in earthquakes) along a major European fault, the North Anatolian Fault Zone (NAFZ) in Turkey. The NAFZ is a strike-slip fault comparable in length and slip rate to the San Andreas Fault in California. It crosses a densely populated region of northern Turkey and constitutes a major seismic hazard - over 1000 km of the fault ruptured during 12 large earthquakes in the 20th century. The western end of the NAFZ ruptured in two major earthquakes in 1999 at Izmit on 17 August and Düzce, 87 days later, killing more than 30,000 people. 
A seismic gap remains south of Istanbul, an urban centre of more than 10 million people, where there is a ~60% chance of significant shaking within the next few decades (Parsons et al. 2000). We aim to measure the properties of the fault in the lower crust to set constraints on the earthquake loading cycle along the NAFZ. The project involves (i) a novel high-resolution seismic experiment aimed at resolving the fault zone structure at depth, (ii) geological analysis of an exhumed fault zone representative of the mid to lower crust under the fault, and (iii) analysis of satellite measurements of surface displacement. The results from these studies will be used to build computational models of the earthquake loading cycle. In this project we aim to explain how the movements of the tectonic plates interact with the fault zone and how this is affected by the lower crustal structure. This will ultimately contribute to better assessment of the seismic hazard associated with large fault zones. The resulting synthesis of the geophysical and geological data together with geodynamical modelling will guide future investigations for other major strike-slip fault zones.
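The loading cycle described above, stress accumulating steadily from plate motion and released suddenly when frictional strength is exceeded, can be caricatured with a stick-slip (elastic rebound) toy model. All numbers below are invented for illustration and are not NAFZ parameters:

```python
# Toy stick-slip model of the earthquake loading cycle: stress grows
# linearly with steady tectonic loading; when it reaches the frictional
# strength, an "earthquake" drops the stress and the cycle restarts.
# All values are illustrative, not calibrated to any real fault.
def loading_cycle(years, loading_rate=1.0, strength=100.0, drop=90.0):
    stress, quakes = 0.0, []
    for year in range(years):
        stress += loading_rate          # steady loading by plate motion
        if stress >= strength:          # frictional resistance overcome
            stress -= drop              # sudden stress release
            quakes.append(year)
    return quakes

events = loading_cycle(300)
print(events)  # [99, 189, 279]: first event after 100 years, then every 90
```

Even this caricature shows why the recurrence interval depends on loading rate and fault strength, the quantities the project's lower-crust measurements are designed to constrain.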
- Project, 2011 - 2012. Funder: UKRI. Project Code: NE/I016481/1. Funder Contribution: 52,767 GBP. Partners: University of Leicester, ROM
The aims of this project are simple. By rotting velvet worms (onychophorans) under controlled conditions we will generate the data required to start correctly interpreting the fossil record of lobopodians. Accurate placement of lobopodians in the Tree of Life has the potential to resolve a major evolutionary problem: the origin of the arthropods. Arthropods are arguably the most successful animals on Earth: more diverse and abundant than any other group, they are important and familiar to everyone. Yet the identity of the arthropods' nearest living relatives, and the details of arthropod origins and early evolution remain unclear. In contrast to arthropods, onychophorans are both obscure and enigmatic. With their fat legs and body annulations they resemble a conga-line of overweight Michelin-men. A recent popular account of animal relationships noted that 'no group has prompted more zoological debate' (Tudge 2000, The Variety of Life) - exactly where onychophorans sit in the Tree of Life remains controversial. Surprisingly, answering the question of onychophoran relationships holds the key to unlocking the evolutionary emergence of the arthropods, and this is where fossil lobopodians have a major role to play. These extinct, soft-bodied organisms (almost all of Cambrian age) share a number of important anatomical features with onychophorans, but recent evolutionary analyses suggest that fossil lobopodians include the ancestors of arthropods, of onychophorans, and of panarthropods (the larger group to which both onychophorans & arthropods belong). Consequently, finding the correct places for fossil lobopodians in the Tree of Life has the potential to reveal the sequence in which important characteristics of arthropods and onychophorans were acquired. If lobopodian branches do fill the gap between living onychophorans and arthropods, we may be able to resolve relationships between the major arthropod branches. 
This potential can only be realised with correct placement of lobopodians, and this requires new information about how they decayed. Much of the current disagreement over the placement of lobopodians arises because we don't understand how the process of decay affected their bodies prior to fossilization. Studies of other organisms show that decay rapidly alters the appearance of important anatomical features. As soft tissues rot and collapse the shape and juxtaposition of body parts - crucial criteria for anatomical comparison - change significantly. Other features rot away completely. We need new data so that these changes, which will have affected all fossil lobopodians to some degree, can be taken into account when interpreting their anatomy. We will employ a new approach to the experimental study of how animals decay, recently developed in our lab. We will rot onychophorans under controlled lab conditions and carefully record their important anatomical features (many of which they share with fossil lobopodians) at timed intervals as they decompose. From this we will determine the rate and sequence of decay of features; when and how their juxtaposition, shape and appearance change. This will allow us to establish criteria for the recognition of decay-transformed features in fossil lobopodians and reassess the anatomy and evolutionary relationships of these controversial animals (including exceptionally well-preserved new material). It will also allow us to further test a hypothesis developed from our ongoing decay experiments: that the decay of evolutionarily important anatomical features of soft bodied animals is not random - features that are most useful for recognizing evolutionary relationships are the most likely to decay rapidly. If this pattern is widespread it is an important yet previously unrecognised bias in reconstructing the evolutionary relationships of fossils.
- Project, 2011 - 2012. Funder: UKRI. Project Code: NE/I016686/1. Funder Contribution: 46,916 GBP. Partners: ITRES Research Ltd, NERC British Antarctic Survey, University of Twente
Geological maps are a primary source of information for understanding much about an area's potential (e.g. mineral resources, engineering/construction suitability) through to anticipating and mitigating natural events (e.g. landslides, earthquakes). Geological maps exist for almost the entire planet and some maps (e.g. British Isles) have been continually refined and updated over the last 150 years. The Polar and highly mountainous regions of the world pose major logistical problems to gain access to certain areas, such that the geology of some regions remains poorly understood or completely unknown. The Antarctic Peninsula is one example where the glaciated terrain and mountainous relief have prevented access to field geologists. Over 50 years of geological mapping on the Antarctic Peninsula has led to a good understanding of its geological history and its links to the Andes and the supercontinent, Gondwana, of which Antarctica formed a part. However, some very large areas (100s km2) still remain poorly known or unexplored. The geological evolution of the Antarctic Peninsula can only be fully understood with a more complete knowledge of the rock types present. Although there is no substitute for fieldwork, gathering data from aircraft-mounted instruments or satellites offers the potential of providing geologists with a first-order method of remotely identifying rock types. Geologists working on the Antarctic Peninsula already make use of aeromagnetic and aerogravity data to help understand the sub-ice geology, and a recent study has used satellite data for identifying minerals using reflectance data. This, however, proved to have limitations, as comparatively few of the major rock-forming minerals display diagnostic absorption features. In contrast, almost all rock-forming minerals display diagnostic spectral emission features in the thermal infrared region, which has the potential to be a valuable tool for distinguishing igneous and sedimentary rocks. 
Thermal data from satellites are available, but their limited spectral bands cannot yield the resolution required to differentiate between minerals. Funding through the Foreign and Commonwealth Office, UK has already been secured (Biological Sciences, BAS) for a survey to assess vegetation type and extent at sites on Adelaide Island on the Antarctic Peninsula. The survey will be conducted using an instrument owned by a Canadian research company (ITRES); such an instrument is not currently available to NERC. This Canadian-owned, 64-band thermal imaging instrument (Thermal Airborne Spectrographic Imager: TASI) is capable of generating high spatial and spectral resolution thermal emission data. It can be fitted to a British Antarctic Survey Twin Otter aircraft and is able to generate very high quality data that can map the type and extent of vegetation at several sites along the Antarctic Peninsula. The instrument can also be used in conjunction with other survey flying to optimise time and resources. Funding is sought here to use the same vegetation survey dataset, but to investigate its potential to identify different minerals and rock types. If funding is secured, a ground-based spectral study would be carried out in conjunction with the airborne survey to calibrate the data. This study would take place in an area where the geology is well described and understood, such that a proof of concept could be established before extending the techniques to areas where geological understanding is poor or absent. If successful, the intention would be to extend the work into other polar regions or highly mountainous, difficult-to-access regions and to develop the techniques further.
- Project, 2011 - 2013. Funder: UKRI. Project Code: EP/I017984/1. Funder Contribution: 96,760 GBP. Partners: UBC, University of Warwick
Imperfectly observed evolving systems arise throughout the human world. Weather forecasting, modelling stock prices, transcribing music and interpreting human speech automatically are just a few of the situations in which imperfect observations of a system evolving in time are all that is available, while the underlying system is what we are really interested in: given satellite observations and sparse localised measurements, we'd like to accurately characterise the weather now and predict future weather; given measurements of pitch at discrete times, we'd like a computer to be able to produce a meaningful description of what was being said at the time. Surprisingly, it's possible to model a great number of these problems using a common framework, known as a state space model (or hidden Markov model). Inferring the likely value of the unobserved process from a sequence of observations, as those observations become available, is in principle reasonably straightforward, but it requires the evaluation of integrals which cannot be solved analytically and which are too complex to deal with accurately via simple numerical methods. Simulation-based techniques have been developed to address these problems and are now the most powerful collection of tools for estimating the current state of the unobserved process given all of the observations received so far. Much effort has been dedicated in recent years to designing algorithms that efficiently describe, in a similar way, the likely path of the unobserved process from the beginning of the observation sequence up to the current time.
This problem is much harder, as each observation we receive tells us a little more about the likely history of the process, and continually updating this ever-longer list of locations in an efficient way is far from simple. The methods proposed here will attempt to extend simulation-based statistical techniques in a new direction which is particularly well suited to characterising the whole path of the unobserved process and not just its terminal value. Two different strategies based around the same premise - that sometimes several smaller simulations can, in a particular sense, outperform a single larger simulation for the same computational cost - will be investigated. The techniques developed will be studied both theoretically and empirically. In addition to developing and analysing new computational techniques, the project will provide software libraries which simplify the use of these methods in real problems (hopefully to the extent that scientists who are expert in particular application domains will be able to apply the techniques directly to their own problems). The research could be considered successful if: 1/ it leads to new methods for performing inference in state space models; 2/ these methods can be implemented with less application-specific tuning than existing methods require, or make more efficient use of computational resources; 3/ these methods are sufficiently powerful to allow the use of more complex models than are currently practical; 4/ the methods are adopted by practitioners in at least some of the many areas in which they might be usefully employed. The long-term benefits could include more realistic assessment of risk in financial systems, more reliable tracking and prediction of meteorological phenomena, and improved technological products wherever there is a need to dynamically incorporate knowledge arising from measurements as they become available.
There will be particular advantages in settings in which the full path of the imperfectly observed underlying process is of interest but there is scope for improvement even when this is not the case.
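The state space framework described above can be illustrated with a minimal bootstrap particle filter, the simplest of the simulation-based techniques mentioned. This is a generic textbook sketch, not the project's own methods; the model, noise levels and particle count are arbitrary illustrative choices. Hidden states follow a noisy autoregression, and weighted random samples ('particles') track the filtering distribution as each observation arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy linear-Gaussian state space model:
#   x_t = 0.9 * x_{t-1} + process noise,   y_t = x_t + observation noise
T = 100
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.0, 0.5)
    y[t] = x[t] + rng.normal(0.0, 1.0)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 1000
particles = rng.normal(0.0, 1.0, N)
estimates = []
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)  # propagate through the dynamics
    logw = -0.5 * (y[t] - particles) ** 2                  # Gaussian log-likelihood (sigma = 1)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                           # normalised importance weights
    estimates.append(np.sum(w * particles))                # filtered mean estimate of x_t
    particles = particles[rng.choice(N, N, p=w)]           # multinomial resampling

estimates = np.array(estimates)
rmse = np.sqrt(np.mean((estimates - x) ** 2))
print(rmse)
```

With the correct model, the filtered estimates should track the hidden states more closely than the raw observations (whose error has standard deviation 1). Path estimation, the project's focus, is harder: naively storing each particle's full history degenerates as resampling repeatedly prunes the ancestry.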
- Project, 2011 - 2015. Funder: UKRI. Project Code: NE/I009906/1. Funder Contribution: 625,765 GBP. Partners: University of Southampton, Willis Limited, University of Ottawa, UKCIP, EA, AUSTRALIAN NATIONAL UNIVERSITY
The vulnerability of extensive near-coastal habitation, infrastructure, and trade makes global sea-level rise a major concern for society. The UK coastline, for example, has ~£150 billion of assets at risk from coastal flooding, of which £75 billion is in London alone. Consequently, most nations have developed and implemented protection plans, which commonly use ranges of sea-level rise estimates from global warming scenarios such as those published by the IPCC, supplemented by worst-case values from limited geological studies. UKCP09 provides the most up-to-date guidance on UK sea-level rise scenarios and includes a low-probability, high-impact range for maximum UK sea-level rise (the H++ scenario) for use in contingency planning and in considerations regarding the limits to potential adaptation. UKCP09 emphasises that the H++ scenario is unlikely for the next century, but it does introduce significant concerns when planning for longer-term future sea-level rise. Currently, the range for H++ is set at 0.9-1.9 m of rise by the end of the 21st century. This range of uncertainty is large (with vast planning and financial implications), and - more critically - it has no robust statistical basis. It is important, therefore, to better understand the processes controlling the maximum sea-level rise estimate for the future on these time-scales. This forms the overarching motivation for the consortium project proposed here. iGlass is a broad-ranging interdisciplinary project that will integrate field data and modelling in order to study the response of ice volume/sea level to different climate states during the last five interglacials, which include times with significantly higher sea level than the present. This will identify the likelihood of reduced ice cover over Greenland and West Antarctica, an important constraint on future sea-level projections.
A key outcome will be to place sound limits on the likely ice-volume contribution to maximum sea-level rise estimates for the future. Our project is guided by three key questions: Q1. What do palaeo-sea level positions reveal about the global ice-volume/sea-level changes during a range of different interglacial climate states? Q2. What were the rates of sea-level rise in past interglacials, and to what extent are these relevant for future change, given the different climate forcing? Q3. Under a range of given (IPCC) climate projection scenarios, what are the projected limits to maximum sea-level rise over the next few centuries when accounting for ice-sheet contributions? The research will directly inform decision-making processes regarding flood risk management in the UK and abroad. In this respect, the project benefits from the close co-operation with scientists and practitioners in the UK Environment Agency, UKCIP, the UK insurance industry, as well as the wider global academic and user communities.
- Project, 2011 - 2017. Funder: UKRI. Project Code: NE/I027282/1. Funder Contribution: 612,995 GBP. Partners: University of Bristol, DFO, University of Wisconsin–Oshkosh, University of Waterloo (Canada)
Methane is a powerful long-lived greenhouse gas that is second only to carbon dioxide in its radiative forcing potential. Understanding the Earth's methane cycle at regional scales is a necessary step for evaluating the effectiveness of methane emission reduction schemes, detecting changes in biological sources and sinks of methane that are influenced by climate, and predicting and perhaps mitigating future methane emissions. The growth rate of atmospheric methane has slowed since the 1990s, but it continues to show considerable year-to-year variability that cannot be adequately explained. Some of the variability is caused by the influence of weather on systems in which methane is produced biologically. When an anomalous increase in atmospheric methane linked to warm weather conditions is detected in the northern hemisphere, wetlands and peatlands are typically thought to be the cause. However, small lakes and ponds are commonly overlooked as potential major sources of methane emissions. Lakes historically have been regarded as minor emitters of methane because diffusive fluxes during summer months are negligible. This notion has persisted until recently, even though measurements beginning in the 1990s have consistently shown that significant amounts of methane are emitted from northern lakes during spring and autumn. In winter the ice cover isolates lake water from the atmosphere, and the water column becomes oxygen-poor and stratified. Methane production increases in the bottom sediment, and the gas spreads through the water column, with some methane-rich bubbles rising upwards and becoming trapped in the ice cover as it thickens downward in late winter. In spring, when the ice melts, the gas is released. Through changes in temperature and the influence of wind, the lake water column mixes and deeper accumulations of methane are lost to the atmosphere. In summer the water column stratifies again and methane accumulates once more in the bottom sediments.
When the water column becomes thermally unstable in the autumn and eventually overturns, the deep methane is once again released, although a greater proportion of it appears to be consumed by bacteria at this time of year. Lakes differ in the chemistry of their water as well as in the geometry of their basins, so it is difficult to be certain that all lakes will behave in this way, but for many it seems likely. The proposed study will measure the build-up of methane in lakes during spring and autumn across a range of ecological zones in North America. The focus will be on spring build-up and emissions because that gas is the least likely to be influenced by methane-consuming bacteria. However, detailed measurements of methane emissions will also be made in the autumn at a subset of lakes. The measurements will then be scaled to a regional level using remote sensing data, providing a 'bottom-up' estimate of spring and autumn methane fluxes. Those results will be compared to a 'top-down' estimate determined using a Met Office dispersion model that back-calculates the paths of air masses for which the concentration of atmospheric methane has been measured at global monitoring stations, in order to determine how much methane had to be added to the air during its passage through a region. Comparing estimates by these two approaches will provide independent assessments of the potential impact of seasonal methane fluxes from northern lakes. In addition, the light and heavy isotopes of carbon and hydrogen in methane and water will be measured to evaluate their potential use as tracers for uniquely identifying methane released by lakes at different latitudes. If successful, the proposed study has the potential to yield a step-change in our perception of the methane cycle by demonstrating conclusively that a second major weather-sensitive source of biological methane contributes to year-to-year shifts in the growth rate of atmospheric methane.
- Project, 2011 - 2014. Funder: UKRI. Project Code: NE/H024301/1. Funder Contribution: 716,274 GBP. Partners: UU, University of Ottawa, University of Maine, TCD, Geological Survey of Ireland
Relative sea level (RSL) change reflects the interplay between a large number of variables operating at scales from global to local. Changes in RSL around the British Isles (BI) since the height of the last glaciation (ca. 24,000 years ago) are dominated by two key variables: (i) the rise of ocean levels caused by climate warming and the melting of land-based ice; and (ii) the vertical adjustment of the Earth's surface due to the redistribution of this mass (unloading of formerly glaciated regions and loading of the ocean basins and margins). As a consequence, RSL histories vary considerably across the region once covered by the British-Irish Ice Sheet (BIIS). This variable RSL history means that the BI is a globally important location for studying the interactions between land, ice and the ocean during the profound and rapid changes that followed the last glacial maximum. The BI RSL record is an important yardstick for testing global models of land-ice-ocean interactions, and this in turn is important for understanding future climate and sea-level scenarios. At present, the observational record of RSL change in the British Isles is limited to shallow-water areas because of accessibility, and only the later part of the RSL curve is well studied. In northern Britain, where the land has been rising most, RSL indicators are close to or above present sea level and the RSL record is most complete. In southern locations, where uplift has been less, sea level was below present for long periods of time, but there are very few data on RSL position. There are varying levels of agreement between models and existing field data, and we cannot be certain of model projections of former low sea levels. Getting the models right is important for understanding the whole global pattern of land-ice-ocean interactions in the past and into the future.
To gather the missing data and thus improve the utility of the British RSL curves for testing earth-ice-ocean models, we will employ a specialised approach that brings together a unique multidisciplinary team of experts. We have carefully selected sites where evidence of former sea levels is definitely preserved, and we will use existing seabed geological data in British and Irish archives to plan our investigations. The first step is marine geophysical profiling of submerged seabed sediments and mapping of surface geomorphological features on the seabed. These features include the (usually) erosional surface (unconformity) produced by the rise in sea level, and surface geomorphological features that indicate former shorelines (submerged beaches, barriers and deltas). These allow us to identify the position (but not the age) of lower-than-present sea levels. The second step is to use this stratigraphic and geomorphological information to identify sites where we will take cores to acquire sediments and organic material from low sea-level deposits. We will analyse the sediments and fossil content of the cores to find material that can be closely related to former sea levels and radiocarbon dated. The third step in our approach is to extend the observed RSL curves using our new data and to compare these to model predictions of RSL. We can then modify the parameters in the model to obtain better agreement with observations and thus better understand the earth-ice-ocean interactions. These data are also important for understanding the palaeogeography of the British Isles. Our data will allow a first-order reconstruction of former coastlines, based upon the modern bathymetry, for different time periods during the deglaciation. This is of particular importance to the presence or absence of potential landbridges that might have enabled immigration of humans and animals to Ireland.
They will also allow us to identify former land surfaces on the seabed. The palaeogeography is crucial to understanding the evolving oceanographic circulation of the Irish Sea.
- Project, 2011 - 2015. Funder: UKRI. Project Code: NE/I012915/1. Funder Contribution: 401,388 GBP. Partners: University of Turku, LANDCARE RESEARCH, University of Exeter, University of Victoria, Lehigh University, GTK, University of Hawaiʻi Sea Grant, University of Quebec
Future climate change is one of the most challenging issues facing humankind and an enormous research effort is directed at attempting to construct realistic projections of 21st century climate based on underlying assumptions about greenhouse gas emissions. Climate models now include many of the components of the earth system that influence climate over a range of timescales. Understanding and quantifying earth system processes is vital to projections of future climate change because many processes provide 'feedbacks' to climate change, either reinforcing upward trends in greenhouse gas concentrations and temperature (positive feedbacks) or sometimes damping them (negative feedbacks). One key feedback loop is formed by the global carbon cycle, part of which is the terrestrial carbon cycle. As carbon dioxide concentrations and temperatures rise, carbon sequestration by plants increases but at the same time, increasing temperatures lead to increased decay of dead plant material in soils. Carbon cycle models suggest that the balance between these two effects will lead to a strong positive feedback, but there is a very large uncertainty associated with this finding and this process represents one of the biggest unknowns in future climate change projections. In order to reduce these uncertainties, models need to be validated against data such as records for the past millennium. Furthermore, it is extremely important to make sure that the models are providing a realistic representation of the global carbon cycle and include all its major component parts. Current models exclude any consideration of the reaction of peatlands to climate change, even though these ecosystems contain almost as much carbon as the global atmosphere and are potentially sensitive to climate variability. 
On the one hand, increased warmth may increase respiration and decay of peat; on the other hand, even quite small increases in productivity may compensate for this or even exceed it in high-latitude peatlands. A further complication is that peatlands emit quite large quantities of methane, another powerful greenhouse gas. Our proposed project aims to assess the contribution of peatlands to the global carbon cycle over the past 1000 years by linking together climate data and climate model output with models that simulate the distribution and growth of peatlands on a global scale. The models will also estimate changes in methane emissions from peatlands. In particular, we will test the hypotheses that warmth leads to lower rates of carbon accumulation and that, globally, peatlands will therefore sequester less carbon in future than they do now. We will also test whether future climate changes lead to a positive or negative feedback from peatland methane emissions. To determine how well our models can simulate the peatland-climate links, we will test the model output for the last millennium against fossil data on peat growth rates and hydrological changes (related to methane emissions). To do this, we will assemble a large database of published information as well as new data acquired in collaboration with partners from other research organisations around the world who are involved in collecting information and samples that we can make use of once we undertake some additional dating and analyses. Once the model has been evaluated against the last-millennium data, we will make projections of the future changes in the global carbon cycle that may occur as a result of future climate change. This will provide a strong basis for deciding whether peatland dynamics need to be incorporated into the next generation of climate models. Ultimately we expect this to reduce uncertainty in future climate change predictions.
- Project, 2011 - 2015. Funder: UKRI. Project Code: EP/J003247/1. Funder Contribution: 359,554 GBP. Partners: Maplesoft, University of Bath
Connectedness, as in "can we get there from here", is a fundamental concept, both in actual space and in various abstract spaces. Consider a long ladder in a right-angled corridor: can it get round the corner? Calling it a corridor implies that it is connected in actual three-dimensional space. But if we consider the space of configurations of the ladder, this is determined by the position and orientation of the ladder, and the 'corridor' is now the requirement that no part of the ladder run into the walls - it is not sufficient that the ends of the ladder be clear of the walls. If the ladder is too long, it may have two feasible positions, one in each arm of the corridor, but there may be no possible way to get from one to the other. In this case we say that the configuration space of the ladder is not connected: we can't get the ladder there from here, even though we can get each end (taken separately, which is physically impossible) from here to there. Connectedness in configuration space is therefore the key to motion planning. These are problems human beings (especially furniture movers, or people trying to park cars in confined spaces) solve intuitively, but find very hard to explain. Note that the ladder is rigid and three-dimensional, hence its position is determined by the coordinates of three points on it, so configuration space is nine-dimensional. Connectedness in mathematical spaces is also important. The square root of 4 can be either 2 or -2: we have to decide which. Similarly, the square root of 9 can be 3 or -3. But, if 4 is connected to 9 in our problem space (whatever that is), we can't make these choices independently: our choice has to be consistent along the path from 4 to 9. When it is impossible to make such decisions totally consistently, we have what mathematicians call a 'branch cut' - the classic example being the International Date Line, because it is impossible to assign 'day' consistently round a globe.
In previous work, we have shown that several mathematical paradoxes reduce to connectedness questions in an appropriate space divided by the relevant branch cuts. This is an area of mathematics which is notoriously difficult to get right by hand, and both mathematicians and software packages often harbour internal inconsistencies when it comes to branch cuts. The standard computational approach to connectedness, which has been suggested in motion planning since the early 1980s, is via a technique called cylindrical algebraic decomposition. This has historically been computed via a "bottom-up" approach: we first analyse one direction, say the x-axis, decomposing it into all the critical points and intermediate regions necessary; then we take each (x,y)-cylinder above each critical point or region and decompose it; then each (x,y,z)-cylinder above each of these regions, and so on. Not only does this sound tedious, it is inevitably tedious - the investigators and others have shown that the problem is extremely difficult (doubly exponential in the number of dimensions). Much of the time, notably in motion planning, we are not actually interested in the lower-dimensional components, since they would correspond to a motion with no degrees of freedom, rather like tightrope-walking. Recent Canadian developments have shown an alternative way of computing such decompositions via so-called triangular decompositions, and a 2010 paper (Moreno Maza in Canada + Davenport) has shown that the highest-dimensional components of a triangular decomposition can be computed in singly-exponential time. This therefore opens up the prospect, which we propose to investigate, of computing the highest-dimensional components of a cylindrical decomposition in singly-exponential time, which would be a major breakthrough in computational geometry.
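The branch-cut inconsistency described above can be observed numerically. The principal complex square root places its branch cut along the negative real axis, so two nearly identical inputs on opposite sides of the cut yield square roots that differ by a sign: no consistent choice of root exists along a path crossing the cut. A small illustrative sketch, independent of the project's own software:

```python
import cmath

# Approach -4 from just above and just below the negative real axis.
# The principal square root of -4 is 2j in the limit from above, but
# -2j in the limit from below: the branch cut forces a discontinuity.
above = cmath.sqrt(complex(-4, 1e-9))   # just above the cut: ~ +2j
below = cmath.sqrt(complex(-4, -1e-9))  # just below the cut: ~ -2j
print(above, below)

# The two results differ by a sign even though the inputs are nearly equal.
jump = abs(above - below)  # ~ 4.0
```

This is exactly why the choice between 2 and -2 for the square root of 4 cannot be made independently of the choice for 9: consistency must hold along every path that avoids the cut.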
- Project, 2011 - 2012. Funder: UKRI. Project Code: NE/I015647/1. Funder Contribution: 53,059 GBP. Partners: DRDC, NOC
In December 1995 a group of marine scientists and technologists met to define the scope of a thematic programme proposal that would demonstrate the utility of Autonomous Underwater Vehicles (AUVs) for ocean science. The aim of the programme was to tackle questions that could only be answered using the unique features of such vehicles. The 'Autosub Science Missions' programme funded the development of the Autosub1, Autosub2 and Autosub3 autonomous underwater vehicles. Today, this vision is pursued in the Oceans2025 science programme, in which NERC is currently funding the development of Autosub6000 and Autosub Long Range - the aim here is to go deeper and longer. Such platforms will help the U.K. maintain its position as one of the world's leaders in ocean science. Estimating AUV reliability is of paramount importance for deployments in hazardous and complex environments. Reliability is the probability that a system will perform its specified function over a given period of time under defined environmental conditions. AUV reliability is influenced by many factors: human error plays a critical role, but other challenges arise from severe operational conditions, from the fact that some AUV components were not designed to operate in such conditions, and from the fact that some of the vehicle's components were not designed to work together. As a result, we cannot ignore the fact that measuring AUV reliability must be based on experts' subjective risk assessments. If we are going to use experts' risk assessments, we must follow a formal process: a formal elicitation process enables transparency and repeatability of the assessment. To tackle this problem, the Underwater Systems Laboratory (USL) created a risk and reliability management process tailored to AUV operations (RMP-AUV). This project aims to validate existing methods and to develop new, more detailed methods for estimating AUV operational risk.
The new risk models will quantify the effects of fault or incident mitigation on estimates of the risk of loss and of the risk of non-delivery of data. The aim is to derive new models to measure the reliability growth of AUVs. The new methods will be based on Bayesian statistics, a mathematical approach in which the prior belief in a proposition (in our case, a risk estimate) is updated according to the likelihood of that proposition given a new observation. Through collaboration with AUV manufacturer International Submarine Engineering (ISE) and Defence Research and Development Canada (DRDC) we have a rare, time-limited opportunity to use an extensive data set on the faults and incidents of an ISE Explorer AUV. Furthermore, our partners are eager to co-develop, test and apply risk mitigation tracking and modelling methods within their high-impact project in support of Canada's UNCLOS Article 76 submission. The models would be tested with reliability data already gathered, with tracking of faults from the April 2010 Arctic campaign, engineering rework, and a 2011 Arctic expedition.
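The Bayesian updating described above can be illustrated with the standard Beta-Binomial model. This is a generic textbook sketch, not the RMP-AUV process itself, and the prior and mission counts are invented for illustration: an expert-elicited Beta prior on the per-mission fault probability is updated by simple counting, thanks to conjugacy.

```python
from fractions import Fraction

def update(a, b, missions, faults):
    """Posterior Beta(a, b) parameters after `missions` trials with `faults` failures.

    Beta-Binomial conjugacy makes the update a simple count:
    successes add to b, failures add to a.
    """
    return a + faults, b + (missions - faults)

# Hypothetical expert prior: roughly 1 fault expected in 10 missions -> Beta(1, 9).
a, b = 1, 9

# Hypothetical field campaign: 20 missions with 1 mission-ending fault.
a, b = update(a, b, 20, 1)

# Posterior mean fault probability: a / (a + b) = 2 / 30 = 1/15,
# pulled below the prior mean of 1/10 by the fault-free missions observed.
posterior_mean_fault_rate = Fraction(a, a + b)
print(posterior_mean_fault_rate)
```

The same machinery supports reliability-growth tracking: after each campaign the posterior becomes the prior for the next, so the risk estimate tightens as operational evidence accumulates, which is the spirit of the models proposed here.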
- Project, 2011 - 2017. Funder: UKRI. Project Code: NE/I028017/1. Funder Contribution: 817,613 GBP. Partners: University of Leeds, University of Manitoba, Massachusetts Institute of Technology, USA, KOERI
The Earth's surface is broken into numerous tectonic plates, which are continually moving. The movement of the plates relative to each other is the source of most earthquake activity on Earth, which is typically focussed into narrow fault zones where the plates collide, pull apart, or slide past each other. Within these fault zones the deformation in the upper 10-15 km of the Earth's crust is localised onto narrow fault planes. Earthquakes occur when the stresses on the fault planes caused by plate motions overcome frictional resistance, and they represent a significant hazard for communities living in fault zones - in the first decade of the 21st century alone, earthquakes killed 700,000 people. In strike-slip fault zones, where plates slide past each other, earthquakes typically only break the upper crust. We know that the lower crust (deeper than 10-15 km) must be deforming continuously, because we can measure how the ground surface deforms between earthquakes. But because rock samples or other direct measurements cannot easily be obtained from these depths, we have a poor understanding of how the lower crust behaves and influences the loading of stresses in the upper crust to cause major earthquakes. We propose an interdisciplinary project with the aim of understanding the earthquake loading cycle (how stresses build through plate motions and are released in earthquakes) along a major European fault, the North Anatolian Fault Zone (NAFZ) in Turkey. The NAFZ is a strike-slip fault comparable in length and slip rate to the San Andreas Fault in California. It crosses a densely populated region of northern Turkey and constitutes a major seismic hazard - over 1000 km of the fault ruptured during 12 large earthquakes in the 20th century. The western end of the NAFZ ruptured in two major earthquakes in 1999, at Izmit on 17 August and at Düzce 87 days later, killing more than 30,000 people.
A seismic gap remains south of Istanbul, an urban centre of more than 10 million people, where there is a ~60% chance of significant shaking within the next few decades (Parsons et al. 2000). We aim to measure the properties of the fault in the lower crust to set constraints on the earthquake loading cycle along the NAFZ. The project involves (i) a novel high-resolution seismic experiment aimed at resolving the fault zone structure at depth, (ii) geological analysis of an exhumed fault zone representative of the mid to lower crust under the fault, and (iii) analysis of satellite measurements of surface displacement. The results from these studies will be used to build computational models of the earthquake loading cycle. In this project we aim to explain how the movements of the tectonic plates interact with the fault zone and how this is affected by the lower crustal structure. This will ultimately contribute to better assessment of the seismic hazard associated with large fault zones. The resulting synthesis of the geophysical and geological data together with geodynamical modelling will guide future investigations of other major strike-slip fault zones.
- Project, 2011 - 2012. Funder: UKRI. Project Code: NE/I016481/1. Funder Contribution: 52,767 GBP. Partners: University of Leicester, ROM
The aims of this project are simple. By rotting velvet worms (onychophorans) under controlled conditions we will generate the data required to start correctly interpreting the fossil record of lobopodians. Accurate placement of lobopodians in the Tree of Life has the potential to resolve a major evolutionary problem: the origin of the arthropods. Arthropods are arguably the most successful animals on Earth: more diverse and abundant than any other group, they are important and familiar to everyone. Yet the identity of the arthropods' nearest living relatives, and the details of arthropod origins and early evolution remain unclear. In contrast to arthropods, onychophorans are both obscure and enigmatic. With their fat legs and body annulations they resemble a conga-line of overweight Michelin-men. A recent popular account of animal relationships noted that 'no group has prompted more zoological debate' (Tudge 2000, The Variety of Life) - exactly where onychophorans sit in the Tree of Life remains controversial. Surprisingly, answering the question of onychophoran relationships holds the key to unlocking the evolutionary emergence of the arthropods, and this is where fossil lobopodians have a major role to play. These extinct, soft-bodied organisms (almost all of Cambrian age) share a number of important anatomical features with onychophorans, but recent evolutionary analyses suggest that fossil lobopodians include the ancestors of arthropods, of onychophorans, and of panarthropods (the larger group to which both onychophorans & arthropods belong). Consequently, finding the correct places for fossil lobopodians in the Tree of Life has the potential to reveal the sequence in which important characteristics of arthropods and onychophorans were acquired. If lobopodian branches do fill the gap between living onychophorans and arthropods, we may be able to resolve relationships between the major arthropod branches. 
This potential can only be realised with correct placement of lobopodians, and this requires new information about how they decayed. Much of the current disagreement over the placement of lobopodians arises because we don't understand how the process of decay affected their bodies prior to fossilization. Studies of other organisms show that decay rapidly alters the appearance of important anatomical features. As soft tissues rot and collapse, the shape and juxtaposition of body parts - crucial criteria for anatomical comparison - change significantly. Other features rot away completely. We need new data so that these changes, which will have affected all fossil lobopodians to some degree, can be taken into account when interpreting their anatomy. We will employ a new approach to the experimental study of how animals decay, recently developed in our lab. We will rot onychophorans under controlled lab conditions and carefully record their important anatomical features (many of which they share with fossil lobopodians) at timed intervals as they decompose. From this we will determine the rate and sequence of decay of features: when and how their juxtaposition, shape and appearance change. This will allow us to establish criteria for the recognition of decay-transformed features in fossil lobopodians and to reassess the anatomy and evolutionary relationships of these controversial animals (including exceptionally well-preserved new material). It will also allow us to further test a hypothesis developed from our ongoing decay experiments: that the decay of evolutionarily important anatomical features of soft-bodied animals is not random - the features that are most useful for recognizing evolutionary relationships are the most likely to decay rapidly. If this pattern is widespread, it is an important yet previously unrecognised bias in reconstructing the evolutionary relationships of fossils.
- Project . 2011 - 2012. Funder: UKRI. Project Code: NE/I016686/1. Funder Contribution: 46,916 GBP. Partners: ITRES Research Ltd, NERC British Antarctic Survey, University of Twente
Geological maps are a primary source of information for understanding much about an area's potential (e.g. mineral resources, engineering/construction suitability) through to anticipating and mitigating natural events (e.g. landslides, earthquakes). Geological maps exist for almost the entire planet, and some maps (e.g. British Isles) have been continually refined and updated over the last 150 years. The polar and highly mountainous regions of the world, however, pose major logistical problems of access, such that the geology of some regions remains poorly understood or completely unknown. The Antarctic Peninsula is one example where the glaciated terrain and mountainous relief have prevented access by field geologists. Over 50 years of geological mapping on the Antarctic Peninsula has led to a good understanding of its geological history and its links to the Andes and to the supercontinent Gondwana, of which Antarctica formed a part. However, some very large areas (100s km2) still remain poorly known or unexplored. The geological evolution of the Antarctic Peninsula can only be fully understood with a more complete knowledge of the rock types present. Although there is no substitute for fieldwork, gathering data from aircraft-mounted instruments or satellites offers geologists a first-order method of remotely identifying rock types. Geologists working on the Antarctic Peninsula already make use of aeromagnetic and aerogravity data to help understand the sub-ice geology, and a recent study has used satellite reflectance data to identify minerals. This, however, proved to have limitations, as comparatively few of the major rock-forming minerals display diagnostic absorption features. In contrast, almost all rock-forming minerals display diagnostic spectral emission features in the thermal infrared region, which therefore has the potential to be a valuable tool for distinguishing igneous and sedimentary rocks. 
Thermal data from satellites are available, but with too few spectral bands to yield the resolution required to differentiate between minerals. Funding through the UK Foreign and Commonwealth Office has already been secured (Biological Sciences, BAS) for a survey to assess vegetation type and extent at sites on Adelaide Island on the Antarctic Peninsula. The survey will be conducted using an instrument owned by a Canadian research company (ITRES); such an instrument is not currently available to NERC. The Canadian-owned, 64-band thermal imaging instrument (Thermal Airborne Spectrographic Imager: TASI) is capable of generating high spatial and spectral resolution thermal emission data. It can be fitted to a British Antarctic Survey Twin Otter aircraft and is able to generate very high quality data that can map the type and extent of vegetation at several sites along the Antarctic Peninsula. The instrument can also be used in conjunction with other survey flying to optimise time and resources. Funding is sought here to use the same vegetation survey dataset, but to investigate its potential to identify different minerals and rock types. If funding is secured, a ground-based spectral study would be carried out in conjunction with the airborne survey to calibrate the data. This study would be conducted in an area where the geology is well described and understood, such that a proof of concept could be established before extending the techniques to areas where geological understanding is poor or absent. If successful, the intention would be to extend the work into other polar regions, or into highly mountainous, difficult-to-access regions, and to develop the techniques further.
- Project . 2011 - 2013. Funder: UKRI. Project Code: EP/I017984/1. Funder Contribution: 96,760 GBP. Partners: UBC, University of Warwick
Imperfectly observed evolving systems arise throughout the human world. Weather forecasting, modelling stock prices, transcribing music and interpreting human speech automatically are just a few of the situations in which imperfect observations of a system that evolves in time are all that is available, while the underlying system itself is what interests us: given satellite observations and sparse localised measurements, we'd like to accurately characterise the weather now and predict future weather; given measurements of pitch at discrete times, we'd like a computer to produce a meaningful description of what was being said at the time.
Surprisingly, a great number of these problems can be modelled using a common framework, known as a state space model (or hidden Markov model). Inferring the likely value of the unobserved process from a sequence of observations, as those observations become available, is in principle reasonably straightforward, but it requires the evaluation of integrals which cannot be solved analytically and which are too complex to handle accurately via simple numerical methods. Simulation-based techniques have been developed to address these problems and are now the most powerful collection of tools for estimating the current state of the unobserved process given all of the observations received so far. Much effort has been dedicated in recent years to designing algorithms that efficiently describe, in a similar way, the likely path of the unobserved process from the beginning of the observation sequence up to the current time. 
This problem is much harder, as each observation we receive tells us a little more about the likely history of the process, and continually updating this ever-longer list of locations in an efficient way is far from simple.
The methods proposed here will attempt to extend simulation-based statistical techniques in a new direction which is particularly well suited to characterising the whole path of the unobserved process, not just its terminal value. Two different strategies based around the same premise - that sometimes several smaller simulations can, in a particular sense, outperform a single larger simulation for the same computational cost - will be investigated. The techniques developed will be studied both theoretically and empirically.
In addition to developing and analysing new computational techniques, the project will provide software libraries which simplify the use of these methods in real problems (hopefully to the extent that scientists who are expert in particular application domains will be able to apply the techniques directly to their own problems).
The research could be considered successful if:
1/ It leads to new methods for performing inference in state space models.
2/ These methods can be implemented with less application-specific tuning than existing methods require, or make more efficient use of computational resources.
3/ These methods are sufficiently powerful to allow the use of more complex models than are currently practical.
4/ The methods are adopted by practitioners in at least some of the many areas in which these techniques might be usefully employed.
The long-term benefits could include more realistic assessment of risk in financial systems, more reliable tracking and prediction of meteorological phenomena, and improved technological products wherever there is a need to dynamically incorporate knowledge arising from measurements as they become available. 
There will be particular advantages in settings in which the full path of the imperfectly observed underlying process is of interest but there is scope for improvement even when this is not the case.
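The simulation-based filtering this abstract describes is typified by the bootstrap particle filter. The sketch below is illustrative only, not the project's own method: it applies a minimal particle filter to a toy linear-Gaussian state space model, with all parameters (the autoregressive coefficient, noise levels, particle count) chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state space model (illustrative parameters):
#   hidden state:  x_t = PHI * x_{t-1} + process noise
#   observation:   y_t = x_t + observation noise
PHI, Q_STD, R_STD, T, N = 0.9, 0.5, 1.0, 50, 500

# Simulate a hidden path and noisy observations of it.
x = np.zeros(T)
for t in range(1, T):
    x[t] = PHI * x[t - 1] + Q_STD * rng.standard_normal()
y = x + R_STD * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    # Propagate each particle through the state dynamics.
    particles = PHI * particles + Q_STD * rng.standard_normal(N)
    # Weight particles by the (Gaussian) likelihood of the new observation.
    logw = -0.5 * ((y[t] - particles) / R_STD) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Filtered mean: weighted average approximates E[x_t | y_1..y_t].
    estimates[t] = np.sum(w * particles)
    # Multinomial resampling keeps the particle set well conditioned.
    particles = particles[rng.choice(N, size=N, p=w)]

print(np.sqrt(np.mean((estimates - x) ** 2)))  # RMSE of the filtered mean
```

With 500 particles the filtered mean tracks the hidden path noticeably better than the raw observations do; the smoothing problem discussed in the abstract - revising the estimate of the *whole* path as new data arrive - is the substantially harder extension of this scheme.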