21 Projects, page 1 of 3
- Project (2009-2011). Funder: UKRI. Project Code: NE/F021399/1. Funder Contribution: 222,230 GBP. Partners: University of Edinburgh, Newcastle University, University of Bristol, Utrecht University, University of London, University of Alberta
This project will quantify the effect of surface-generated melt-water fluctuations on ice motion at the margin of the Greenland Ice Sheet (GrIS). More specifically, it will provide data that will enable ice-sheet modellers to improve their predictions of the future contribution of the GrIS to sea level rise in response to a warming world. Achieving this aim requires a dedicated field campaign to the GrIS to investigate seasonal ice flow dynamics and runoff processes along flow-parallel transects extending from the ice sheet margin to the equilibrium line altitude (ELA) at both tidewater and land-terminating glaciers. The greatest store of fresh water in the northern hemisphere - equivalent to 7 m of eustatic sea level rise - is held within the GrIS, and yet its present and future contribution to sea level is poorly constrained (IPCC, 2007). Recent observations suggest that mass loss near the margin of the GrIS is accelerating through a combination of increased surface melting (e.g. Steffen et al., 2004) and dynamic thinning (e.g. Rignot and Kanagaratnam, 2006). However, the key processes controlling dynamic thinning have yet to be identified (Alley et al., 2005) and, in consequence, are not incorporated in the ice-sheet models that form the basis of the IPCC sea level projections. This in part reflects the fact that the satellite data that have revealed the widespread speed-up of glaciers cannot be acquired at the temporal resolution needed to resolve the causal mechanisms. Our present understanding of GrIS mass balance is especially complicated by uncertainties in the sensitivity of ice-marginal dynamics to changes in melt-water-induced lubrication resulting from penetration of supraglacial melt-waters to the glacier bed (Zwally et al., 2002).
Recent observations on the GrIS (Shepherd et al., in review) reveal, over a five-day period in July, a strong and direct coupling between surface hydrology and dynamics, in which diurnal fluctuations in velocity of >100% occur and maximum daily velocities scale with temperature. Such observations confirm the need to acquire hydrological and dynamic data at high temporal (sub-hourly) and spatial resolution throughout the year to parameterise the coupling between ice melting and flow. This project will collect data at the necessary resolution to quantify the relationship between melt-water production and ice sheet dynamics, thereby enabling ice-sheet modellers to improve predictions of the GrIS's response to climate change. We will conduct ground-based experiments along two flow-parallel transects at the western margin of the GrIS in adjacent land- and marine-terminating drainage basins to address the following objectives:
1. Is there a temporal and spatial pattern to any hydrology-dynamics link associated with the seasonal evolution of the supraglacial drainage system (including supraglacial lakes)?
2. Over what area does surface-generated meltwater penetrate to the base of the ice sheet?
3. Is there a relationship between the volume of meltwater input at the glacier surface and the magnitude of the dynamic response?
4. Do tidewater and land-terminating glaciers behave differently during the course of a melt season?
Field campaigns will be undertaken during 2008 and 2009 to determine:
1) the rate, extent and duration of melt;
2) the temporal and spatial variations in water volumes stored in and released from supraglacial lakes and delivered to freely draining moulins;
3) the seasonal, diurnal and hourly variations in ice dynamics; and
4) the variations in proglacial discharge and water chemistry (at Russell Glacier).
As a result of our work, it will be possible to determine whether ice dynamics at the margin of the GrIS are significantly affected by lubrication of the glacier bed following the drainage of surface-derived meltwaters. Our results will be delivered to ice-sheet modellers to help them constrain predictions for the future of the GrIS.
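The melt quantification described above is often summarised, in ice-sheet modelling, with a simple positive degree-day (PDD) scheme that relates surface melt to air temperature. A minimal sketch (not this project's own method; the degree-day factor is an illustrative value):

```python
# Positive degree-day (PDD) melt model: a standard first-order way to
# relate surface melt to air temperature. The degree-day factor below
# is illustrative, not a value from this project.

def pdd_melt(daily_temps_c, ddf_mm_per_degday=8.0):
    """Total melt (mm w.e.) from a sequence of daily mean temperatures (deg C)."""
    positive_degree_days = sum(max(t, 0.0) for t in daily_temps_c)
    return ddf_mm_per_degday * positive_degree_days

# A week of hypothetical ice-margin temperatures:
temps = [-2.0, 1.5, 3.0, 4.5, 2.0, -1.0, 0.5]
melt = pdd_melt(temps)  # 8.0 * 11.5 positive degree-days = 92.0 mm
```

In practice the degree-day factor varies with surface type and is calibrated against field observations of the kind this project sets out to collect.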
- Project (2008-2011). Funder: UKRI. Project Code: EP/F042728/1. Funder Contribution: 224,957 GBP. Partners: McGill University, University of Oxford, UvA
I aim to develop high-level structures for reasoning about the knowledge of agents in a multi-agent system where agents communicate and, as a result, update their information. All of us take part in such situations when communicating through the internet, surfing the web, bidding in auctions, or buying on financial markets. Reasoning about knowledge acquisition in these situations becomes more challenging when some agents are not honest: they cheat and lie in their actions, and as a result other agents acquire wrong information. The current models of these situations are low-level: they require specifying untidy details and hide the high-level structure of information flow between the agents. This makes modelling a hard task and proving properties of the model an involved and complicated problem. The complexity of reasoning in these situations raises the question: ``Which structures are required to reason about knowledge acquisition?'', in other words, ``What are the foundational structures of knowledge acquisition?''. High-level methods provide us with a minimal unifying structure that benefits from partiality of information: we do not need to specify all the details of the situations we are modelling. They also bring out the conceptual structure of information and update, hide the untidy details, and tidy up the proofs. My plan is to (1) study the foundational structures that govern knowledge acquisition as a result of information flow between the agents, and then develop a unifying framework to formally express these structures in a logical syntax with a comprehensive semantics. I aim to use known mathematical structures, such as algebra, coalgebra and topology, for the semantics. The syntactic theory will be a rule-based proof-theoretic calculus that helps us prove properties about knowledge acquisition in a programmatic, algorithmic manner. (2) Apply this framework to reason about security properties of multi-agent protocols.
Examples of these protocols are communication protocols between a client and a bank for online banking. We want to make sure that such a protocol is secure, that is, the client's information remains secret throughout the transaction. Because of the potentially unlimited computational abilities of the intruder, these protocols become very complex and verifying their security becomes a challenging task. It is exactly here that our high level setting becomes a necessity, that is, in formal analysis of these protocols and in proving their security properties. The semantic structures that I aim to use have also been used to model the logic of Quantum Mechanics. So my model will be flexible enough to accommodate quantum situations. These situations are important for security protocols because they benefit from additional non-local capabilities of Quantum Mechanics, which guarantee better safety properties. I aim to apply the knowledge acquisition framework to Quantum protocols and prove their sharing and secrecy properties. On the same track, similar semantic structures have been used for information retrieval from the web. I aim to exploit these models and study their relationship to my framework. (3) Write a computer program to implement the axiomatic semantic structure and produce a software package. This software will help us automatically verify properties of multi-agent protocols, such as the security protocols mentioned above.
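The knowledge-update structures described above are conventionally formalised with Kripke models and public announcements (the standard semantics of dynamic epistemic logic). A toy sketch under that standard semantics; the two-world scenario and all names are hypothetical, not the project's framework:

```python
# A minimal Kripke-model sketch of knowledge update by public announcement,
# illustrating the kind of "information flow" structure discussed above.

def knows(worlds, access, agent, actual, prop):
    """Agent knows prop at `actual` iff prop holds in every world the agent
    considers possible from `actual`."""
    return all(prop(w) for w in worlds if (agent, actual, w) in access)

def announce(worlds, access, prop):
    """Public announcement of prop: delete the worlds where prop is false,
    and restrict the accessibility relation to the surviving worlds."""
    kept = {w for w in worlds if prop(w)}
    return kept, {(a, u, v) for (a, u, v) in access if u in kept and v in kept}

# Two worlds: in w1 a secret bit is 1, in w0 it is 0. Agent A cannot
# tell them apart, so initially A does not know the bit's value.
worlds = {"w0", "w1"}
access = {("A", u, v) for u in worlds for v in worlds}
bit_is_1 = lambda w: w == "w1"

before = knows(worlds, access, "A", "w1", bit_is_1)    # A does not know
worlds2, access2 = announce(worlds, access, bit_is_1)  # announce "bit is 1"
after = knows(worlds2, access2, "A", "w1", bit_is_1)   # now A knows
```

The "untidy details" the abstract mentions show up even here: the model enumerates worlds explicitly, which is exactly what a higher-level algebraic treatment would abstract away.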
- Project (2007-2011). Funder: UKRI. Project Code: NE/E004016/1. Funder Contribution: 453,995 GBP. Partners: NERC British Antarctic Survey, Free University of Brussels (VUB), ODU, NVE, University of Alberta, University of Bristol, Montana State University System
Carbon is one of the essential elements required for life to exist, alongside energy and liquid water. In contrast to other parts of the Earth's biosphere, cycling of carbon compounds beneath glaciers and ice sheets is poorly understood, since these environments were until recently believed to be devoid of life. Significant populations of micro-organisms have recently been found beneath ice masses (Sharp et al., 1999; Skidmore et al., 2000; Foght et al., 2004). Evidence shows that, as in other watery environments on Earth, these sub-ice microbes are able to process a variety of carbon forms over a range of conditions, producing greenhouse gases such as CO2 and CH4 (Skidmore et al., 2000). Almost nothing is known about 1) the range of carbon compounds available to microbes beneath ice, 2) the degree to which they can be used as food by microbes, and 3) the rates of utilisation and the full spectrum of products (e.g. gases). This information is important for understanding the global carbon cycle on Earth. The fate of the large amounts of organic carbon overridden during the advance of the glaciers over the boreal forest during the last ice age (Van Campo et al., 1993), for example, is unknown and is likely to depend fundamentally on microbial processes in sub-ice environments. Current models of Earth's global carbon cycle assume this carbon is 'lost' from the Earth's system (Adams et al., 1990; Van Campo et al., 1993; Francois et al., 1999). The possibility that it is used by subglacial microbes and converted to CO2 and CH4 has not been considered. This may have potential for explaining variations in Earth's atmospheric greenhouse gas composition over the last 2 million years. Sub-glacial environments lacking a modern carbon supply (e.g. trees, microbial cells) may represent ideal model systems for icy habitats on other planetary bodies (e.g. Mars and Jupiter's moons; Clifford, 1987; Pathare et al., 1998; Kivelson et al., 2000), and may be used to help determine whether life is possible in these more extreme systems.
- Project (2006-2011). Funder: UKRI. Project Code: EP/D073944/1. Funder Contribution: 563,957 GBP. Partners: University of Bristol, RWDI, NATIONAL RESEARCH COUNCIL OF CANADA
As more slender and more adventurous structures, such as cable-stayed bridges, are constructed, they become increasingly susceptible to large amplitude vibrations, particularly due to aerodynamic loading. Wind-induced vibrations of bridge decks, cables, towers, lamp columns and overhead electricity cables are indeed very common. These can lead to unacceptably large movements, direct structural failure, or dangerous long-term fatigue damage of structural components. Complex interactions between the wind and the structure, and also between different components of the structure (e.g. cables and bridge deck), can lead to vibration problems, so for proper understanding of the behaviour both aerodynamic and structural effects need to be considered. Whilst some of the mechanisms of wind loading of structures are reasonably well understood, others are not, and many instances of vibrations, particularly of cables, are not well explained. Recent work has developed a generalised method for analysing 'galloping' vibrations. These are caused by changes in wind forces on a structure when it starts to move, which actually tend to increase the motion. For typical bridge cables (or other structures of similar size) in moderately strong winds, a particular change in the wind flow around the cable occurs, known as the drag crisis. This changes the forces on the cable and causes a special case of galloping-type vibrations, which the new method of analysis is able to predict for the first time. Comparisons of these calculations with wind tunnel test results on inclined cylinders have confirmed that the basic method does work, but there is a need to consider additional effects, such as wind turbulence, torsional motion of the structure and a more accurate account of the changes in the aerodynamic forces as the structure moves.
It is proposed to develop the approach to include these effects, using further wind tunnel data, to eventually create a unified framework for wind loading analysis of any real structure for galloping, together with the other aerodynamic mechanisms of buffeting (due to wind turbulence) and flutter. Meanwhile, interactions between vibrations of structural components can cause serious effects. For example, very small vibrations of a bridge deck can cause very large vibrations of the cables supporting it, through the mechanism of 'parametric excitation'. Even more surprisingly, in other instances, localised cable vibrations can lead to vibrations of the whole structure. Research under another grant is already considering these effects for very simplified structures, but it is proposed to extend the analysis to realistic full structures. Also, cables are often tied together to try to prevent vibrations of individual cables, but they can then all vibrate together as a network. This project therefore aims to analyse full cable networks, to understand how their vibrations can be limited. Finally, it is proposed to bring together the above two main areas, to include both aerodynamic and structural dynamic interactions in the analysis of slender structures. For example, because of the interactions, the wind loads on relatively small elements, such as cables, can have surprisingly large effects on the overall dynamic response of large structures. At present this is generally ignored, but the joint approach will address this issue. Also, in some instances, only a combined view of the phenomena may be able to explain the behaviour observed on full-scale structures in practice. The holistic view of the wind loading and structural behaviour should provide tools to help avoid undesirable and potentially dangerous effects of vibrations of slender structures in the future.
Based on the analysis, this could be achieved by modifying the shape of the elements to change the wind loads, or introducing dampers to absorb enough vibration energy.
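The galloping mechanism described above is classically captured by the Den Hartog criterion: a cross-section can gallop when its lift-curve slope is negative enough to outweigh drag, making the net aerodynamic damping negative. A minimal sketch with illustrative coefficients (not values from this project's wind tunnel tests):

```python
# Den Hartog galloping criterion: a section is prone to galloping when
# dCl/dalpha + Cd < 0, i.e. motion-induced force changes feed energy
# into the vibration. Coefficients below are illustrative only.

def den_hartog_unstable(dCl_dalpha, Cd):
    """True if the Den Hartog criterion predicts galloping instability."""
    return dCl_dalpha + Cd < 0.0

# A section with a steeply falling lift curve (e.g. an iced cable shape):
gallops = den_hartog_unstable(dCl_dalpha=-1.8, Cd=1.0)   # unstable
# A plain circular cylinder has zero lift slope by symmetry:
stable = den_hartog_unstable(dCl_dalpha=0.0, Cd=1.2)     # stable
```

The project's generalised method goes beyond this quasi-steady criterion, which is precisely why drag-crisis effects, turbulence and torsional motion need the fuller treatment described above.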
- Project (2008-2011). Funder: UKRI. Project Code: BB/E020372/1. Funder Contribution: 520,983 GBP. Partners: Imperial College London, McGill University
Recent advances in biological technology enable multiple measurements of complex systems, from the cell to the whole organism. However, these technologies generate massive amounts of data, and it is a major task to process these robustly and efficiently. The aim of our multidisciplinary project is to devise methods to combine and analyse different data measurements arising from experiments in modern biology that will ultimately aid in the understanding of the causes of common diseases, and lead to the development of new treatments. It is now possible to investigate how complex organisms function by measuring in great detail the chemical composition of, for example, a sample of blood or urine, and also to measure how that composition changes over time, or in reaction to different treatments or experimental conditions. Perhaps most importantly, it is also possible to compare the composition across different groups that may or may not have a particular disease, and to use this comparison to understand how treatments might be developed. This exciting prospect can only be achieved, however, if the experimental data are collected and analysed as accurately as possible. This is the principal goal of our research. We will focus on so-called 'metabolic' analysis using two specific types of technology (known by the initials NMR and MS) that allow us to measure the amounts of a large number of different chemicals (or metabolites) present in the samples of blood or other body fluids being analysed. Metabolites are small molecules, present in all organisms, which are essential to the functioning of their living cells.
NMR and MS are both extremely sophisticated measurement procedures that each produce a large amount of data (spectra). Although the measurements from the two technologies contain some information on the same metabolites, most of the information from the two sources is not identical, and an important statistical modelling task involves combining data from them in the most sensible fashion. We will separate this task into two components: first, the mathematical modelling of the NMR and MS metabolite spectra, and secondly the combination of the data across the two measurement systems. Both components require major input from both the biologists and the statisticians involved in our research programme. The statistical analysis of the large amounts of data generated by NMR and MS technologies is an extremely challenging task. Some methods for data analysis do already exist, but they do not use all the information at hand. An important advantage of our approach is that we will use physico-chemical information already available about typical metabolites to direct how we build our models and carry out our analysis. Such physico-chemical 'prior' information has only rarely been used in the analysis of metabolite data, but we feel that it provides an important guide as to how analysis should proceed. Thus we will adopt a Bayesian statistical approach that combines data and prior information in a principled fashion. However, despite being scientifically attractive, this modelling approach needs advanced computing methods so that the analysis can be implemented, and a major component of our research will be to implement the most efficient computational strategies. Understanding and modelling the content of NMR and MS metabolite spectra is a complicated task that requires both highly specialised chemical knowledge and state-of-the-art statistical techniques.
The novelty of our project is that by using a Bayesian analysis framework we are able to harness and incorporate such specialist information. Our multidisciplinary research team that combines expertise in modelling, statistics, chemical biology and bioinformatics will ensure the success of our research programme and facilitate the dissemination of its results to a wide community.
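The core Bayesian idea of combining prior information with two data sources can be illustrated with the simplest conjugate case: precision-weighted averaging of normal measurements. The numbers are hypothetical stand-ins for NMR- and MS-derived readings of one metabolite, not the project's actual model:

```python
# Conjugate normal Bayesian update: the posterior mean is a
# precision-weighted average of the prior and the measurements.
# All values below are hypothetical illustrations.

def posterior_normal(prior_mean, prior_var, measurements):
    """Return (posterior mean, posterior variance) given a normal prior
    and a list of (value, noise variance) measurements."""
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for value, noise_var in measurements:
        precision += 1.0 / noise_var
        weighted += value / noise_var
    return weighted / precision, 1.0 / precision

# A vague prior, then an "NMR-like" and a sharper "MS-like" reading:
mean, var = posterior_normal(prior_mean=0.0, prior_var=100.0,
                             measurements=[(10.0, 4.0), (12.0, 1.0)])
```

The posterior pulls towards the more precise measurement and is tighter than either source alone; the project's actual spectral models are far richer, but rest on the same principled combination of prior and data.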
- Project (2007-2011). Funder: UKRI. Project Code: PP/E001947/1. Funder Contribution: 286,718 GBP. Partners: Lancaster University, UoC
The Earth possesses a magnetic field which is approximately dipolar in shape - very similar to the magnetic field produced by a simple bar magnet. Magnetic field lines emerge from the planet at one magnetic pole and extend out of the atmosphere and many thousands of kilometres into space, before returning to the magnetic pole in the opposite hemisphere. Rather than being a vacuum, the region of space that these field lines pass through is filled with plasma - an electrically conducting gas made up of charged particles. Most of these particles originate in the Earth's atmosphere, having been produced by ultraviolet sunlight which ionises gases in the high-altitude atmosphere. The Sun also possesses a strong magnetic field. As nuclear processes generate energy in the solar interior, the outer layer of the solar atmosphere expands outwards through the solar system (forming the solar wind), and carries with it remnants of the Sun's magnetic field (the interplanetary magnetic field). When the solar wind and interplanetary magnetic field arrive at the Earth, they collide with the Earth's magnetic field and are diverted around the planet. The cavity carved out of the solar wind by the Earth's magnetic field is called the magnetosphere. Inside the magnetosphere, the plasma and magnetic field originate mainly from the Earth. Outside of the magnetosphere, they originate from the Sun. At the boundary between the interplanetary and terrestrial magnetic fields on the dayside of the Earth, the field lines sometimes orient themselves in opposite directions. When this happens, the field lines can merge or 'reconnect' across the boundary. In other words, closed magnetic field lines that start and finish at the Earth's surface in opposite hemispheres can be opened, so that one end stays fixed to the Earth while the other extends outwards into the solar wind.
Since the solar wind is constantly streaming away from the Sun, the newly opened magnetic field line is dragged and stretched away from the Earth. Therefore, because of the process of magnetic reconnection at the dayside boundary, the Earth's dipolar magnetic field is stretched out on the planet's nightside to form a long magnetic tail that points away from the Sun. If the Earth's magnetic field were continuously being peeled away and dragged into the tail, eventually there would be no field left on the dayside of the planet. However, a process in the tail periodically acts to reduce the amount of open magnetic field in the tail and return closed field to the dayside - this process is also magnetic reconnection. By reconnecting two open magnetic field lines, a closed magnetic field is produced (like tying together the two loose ends of a piece of elastic). However, the resulting closed field is highly stretched and, just like a stretched elastic band, it contracts back towards the Earth, catapulting some of the magnetospheric plasma Earthward. The reconnection process in the tail is not steady. Generally, magnetic field builds up in the tail until some critical point is reached. Somehow, reconnection is triggered and stretched magnetic field is removed from the tail and returned to the Earth. The period when tail field is building up is known as the substorm growth phase, while the explosive release of energy in the tail associated with reconnection and the closure of open field lines is known as the substorm expansion phase. However, the processes that trigger the expansion phase (i.e. the mechanisms that trigger the catapult) remain unclear - this is one of the biggest uncertainties in solar-terrestrial physics. This investigation will use measurements from instruments on spacecraft located in the tail, together with observations made from the Earth, in order to determine the triggering mechanism of magnetospheric substorms.
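For scale, the "approximately dipolar" field described in this abstract weakens with the cube of geocentric distance, which is why the stretched tail field is so much weaker than the surface field. A minimal sketch using a commonly quoted equatorial surface field of roughly 31,000 nT (an illustrative round number, not a value from this project):

```python
# Equatorial field strength of an idealised dipole: B(r) = B0 / r^3,
# with r in Earth radii. B0 below is a rough textbook value.

B0_NT = 31_000.0  # approximate equatorial surface field, nanotesla

def dipole_field_nT(r_earth_radii):
    """Equatorial dipole field magnitude at r Earth radii (nT)."""
    return B0_NT / r_earth_radii ** 3

surface = dipole_field_nT(1.0)   # ~31,000 nT at the surface
geo = dipole_field_nT(6.6)       # ~100 nT near geostationary orbit
```

In the real magnetotail, tens of Earth radii downstream, the measured field departs strongly from this dipole fall-off, which is exactly the stretching the abstract describes.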
- Project (2011). Funder: UKRI. Project Code: BB/J003980/1. Funder Contribution: 1,710 GBP. Partners: McMaster University, Mount Sinai Hospital Toronto, JIC
- Project (2009-2011). Funder: UKRI. Project Code: EP/H023836/1. Funder Contribution: 195,938 GBP. Partners: University of Salford, McMaster University
The aim of silicon photonics is nothing less than the complete convergence of optics and electronics. In the first instance this endeavour was aimed at overcoming the limitations imposed by nature on the transport of information using electrons. However, the work has already thrown up more general optical technologies which can be miniaturised onto silicon chips. In fact, engineers are slowly building a whole optics toolbox on silicon, including detectors, modulators and spectrometers. The international consortium assembled for the current work has already made significant progress in providing the long-sought-after on-chip light source. The feasibility studies proposed here are aimed at building on the existing expertise found in the consortium and elsewhere to apply these technologies to the optical detection and manipulation of single biomolecules, in a way that can be miniaturised, giving devices with such functionalities on a silicon chip. The impact of the work will be enhanced by the fact that the approaches used are compatible with those used during the manufacture of standard silicon chips, and that the end products can be mass produced (at costs measured in cents per unit) for personalised health care applications in every home, doctor's surgery or pharmacy; for the detection of low-level atmospheric or water-borne pollutants; or for counter-terrorism and military applications.
- Project (2008-2011). Funder: UKRI. Project Code: EP/E059430/1. Funder Contribution: 312,723 GBP. Partners: Petrobank Energy and Resources Ltd, University of Bath
Heavy crude oil and bitumen are a vast, largely unexploited hydrocarbon resource, with barely 1% produced so far, compared with more than 50% of conventional light oil (like the North Sea). More than 80% of this heavy, unconventional oil lies in the Western hemisphere, whereas more than 80% of conventional light oil lies in the Eastern hemisphere (mainly in the Middle East). Over the next 10-30 years, geopolitical factors, and also the emerging strength of Asian countries, especially India and China, will create increasing tensions and uncertainty with regard to the availability and supply of crude oil. Alongside gas, nuclear and renewables, crude oil will continue to be an important part of the UK's 'energy mix' for decades to come. How will the crude oil we need for industry and transportation be obtained, and will it be as secure as it was from the North Sea? The huge Athabasca Oil Sands deposits in Canada (1.5 trillion barrels) provide an opportunity for the UK to secure access to a long-term, stable supply. The first step towards this was the development of a new technology, THAI ('Toe-to-Heel Air Injection'), to produce Oil Sands bitumen and heavy oil. It was discovered by the Improved Oil Recovery group at the University of Bath in the 1990s, and is currently being field tested at Christina Lake, Alberta, Canada. In 1998, in collaboration with the Petroleum Recovery Institute (PRI), Calgary, Canada, the Bath group discovered another process based on THAI, called CAPRI. The THAI-CAPRI processes have the potential to convert bitumen and heavy crude into virtually a light crude oil, of almost paraffin-like consistency, at a fraction of the cost of conventional surface processing.
A surface upgrading plant has recently been proposed for the UK, at a cost of $2-3 billion. The advantage of CAPRI is that it creates a catalytic reactor in the petroleum reservoir, by 'sleeving' a layer of catalyst around the 500-100 m long horizontal production well, inside the reservoir. The high pressure and temperature in the reservoir enable thermal cracking and hydroconversion reactions to take place, so that only light, converted oil is produced at the surface. Apart from the cost of the catalyst, which can be a standard refinery catalyst, the CAPRI reactor is virtually free! All that is needed is to inject compressed air, in order to propagate a combustion front in a 'toe-to-heel' manner along the horizontal production well. In collaboration with the University of Birmingham, the project will investigate the effectiveness of a range of catalysts for use in the CAPRI process. The University of Birmingham team, led by Dr. Joe Wood, will investigate the long-term survivability of the catalysts, which is critical for the operation of CAPRI: once the catalyst is emplaced around the horizontal well, it will be expensive to recover or replace. Previous 3D combustion cell experiments conducted by the Bath team only allowed catalyst operating periods of a few hours, whereas, in practice, the catalyst will need to survive, and remain active, for days or weeks. The Bath team will undertake detailed studies to characterise the internal pore structure of the catalysts used in the experiments, to obtain fundamental information on catalyst deactivation, which can be related to the process conditions and oil composition. They will also develop a detailed numerical model of the CAPRI reactor. This will provide a tool to explore the 'fine details' of the THAI-CAPRI process, which will aid in the selection and optimisation of the most suitable catalysts. The model will be incorporated into a larger model using the STARS reservoir simulator.
Preliminary reservoir simulations will be made to explore the potential operating conditions for CAPRI at field scale. On a commercial scale, the THAI-CAPRI process could turn the oil resource in the Athabasca Oil Sands into the world's biggest, exceeding that of the Middle East.
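The in-reservoir thermal cracking that CAPRI exploits is conventionally described by first-order Arrhenius kinetics, which makes clear why the temperature behind the combustion front matters so much. A sketch with hypothetical kinetic parameters (not measured THAI-CAPRI values):

```python
# Arrhenius rate law for first-order thermal cracking:
# k = A * exp(-Ea / (R*T)). The pre-exponential factor A and activation
# energy Ea below are hypothetical illustrations, not fitted values.
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_k(A, Ea, T_kelvin):
    """First-order rate constant at temperature T (per second)."""
    return A * math.exp(-Ea / (R * T_kelvin))

# The same illustrative kinetics at ~500 C versus ~400 C, roughly the
# contrast between near-front and downstream reservoir temperatures:
k_hot = arrhenius_k(A=1.0e9, Ea=1.5e5, T_kelvin=773.0)
k_cool = arrhenius_k(A=1.0e9, Ea=1.5e5, T_kelvin=673.0)
ratio = k_hot / k_cool  # cracking is dramatically faster at the higher T
```

This exponential temperature sensitivity is one reason the catalyst's position relative to the propagating combustion front, and its long-term survivability there, are central to the studies described above.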
- Project (2011). Funder: UKRI. Project Code: NE/I005978/1. Funder Contribution: 297,319 GBP. Partners: University of Birmingham, University of London, UEA, University of St Andrews, AUSTRALIAN NATIONAL UNIVERSITY, McGill University, Nanjing Institute of Geology & Palaeonto, NERC British Geological Survey
The Earth is a truly remarkable planet. In addition to the physical processes driving plate tectonics, climate and ocean-atmosphere exchange, it supports an extraordinary diversity of living organisms, from microbes to mammals and everything in between. This wasn't always the case, however, and it is clear that both the planet and its biosphere have evolved - indeed, co-evolved - over deep time. In the past two billion years, by far the most fundamental shift in this co-evolutionary process occurred during the Neoproterozoic (1000 to 542 million years ago), a planetary revolution that culminated in the modern Earth system. The Neoproterozoic begins with a biosphere populated almost exclusively by microbes, and ends in the midst of its greatest-ever evolutionary radiation - including the diverse macroscopic and biomineralising organisms that define the modern biosphere. At the same time, it witnessed the greatest climatic and biogeochemical perturbations that the planet has ever experienced, alongside major palaeogeographic reconfigurations and a deep ocean becoming oxygenated for the first time. There is no question that these phenomena are broadly interlinked, but the tangle of causes, consequences and co-evolutionary feedbacks has yet to be convincingly teased apart. In order to reconstruct the Neoproterozoic revolution, we propose a multidisciplinary programme of research that will capture its evolving geochemical and biological signatures in unprecedented detail. Most significantly, these collated data will be assessed and modelled in the context of a co-evolving Earth system, whereby developments in one compartment potentially facilitate and escalate those in another, sometimes to the extent of deriving entirely novel phenomena and co-evolutionary opportunities.
Our approach will be guided by three general hypotheses, testable against accruing data and theory:
H1) that the enhanced weathering associated with land-dwelling eukaryotes was initiated in the early Neoproterozoic, leading to major environmental change, including extreme glaciations and stepwise increase(s) in atmospheric oxygen concentration;
H2) that major environmental changes in the mid Neoproterozoic triggered the emergence of animals; and
H3) that the late Neoproterozoic-Cambrian radiations of animals and biomineralisation were themselves responsible for much of the accompanying biogeochemical perturbation.
Primary data for this project will be assembled from field studies of key geological sections in the UK and North China, along with contributed sample sets from Namibia, Spitsbergen and various archived collections. Together, these offer close to comprehensive coverage of the Neoproterozoic - not least, spectacular new surfaces of Ediacaran macrofossils from Charnwood Forest. Collected samples will be analysed to assess associated weathering and climate (Sr, C, O and S isotopes), oceanic redox conditions (Fe speciation and trace metals), nutrient dynamics (P speciation and trace metals) and biological constituents (microfossils, macrofossils and biomarker molecules). These data will be integrated and interrogated through the development of heuristic, spatial and evolutionary models. Beyond its integrative approach, the strength of this proposal lies in the diversity of the contributing researchers. Alongside our own expertise in biogeochemistry, palaeobiology and Earth system modelling, we are very pleased to have attracted world-class project partners in Neoproterozoic stratigraphy, geochronology and biomarker analysis. Further insight will come from our contingent of two PDRAs and three PhD students working across the range of topics, linked via a schedule of regular team meetings.
Taken together, we anticipate a fundamentally improved understanding of the Neoproterozoic Earth system and the co-evolutionary interplay between the biosphere and planet.
Recent observations on the GrIS (Shepherd et al, in review) reveal, over a five-day period in July, a strong and direct coupling between surface hydrology and dynamics, where diurnal fluctuations in velocity of >100% occur and where maximum daily velocities scale with temperature. Such observations confirm the need to acquire hydrological and dynamic data at high temporal (sub-hourly) and spatial resolution throughout the year to parameterise the coupling between ice melting and flow. This project will collect data at the necessary resolution to quantify the relationship between melt-water production and ice-sheet dynamics, thereby enabling ice-sheet modellers to improve predictions of the GrIS's response to climate change. We will conduct ground-based experiments along two flow-parallel transects at the western margin of the GrIS, in adjacent land- and marine-terminating drainage basins, to address the following questions: 1. Is there a temporal and spatial pattern to any hydrology-dynamics link associated with the seasonal evolution of the supraglacial drainage system (including supraglacial lakes)? 2. Over what area does surface-generated meltwater penetrate to the base of the ice sheet? 3. Is there a relationship between the volume of meltwater input at the glacier surface and the magnitude of the dynamic response? 4. Do tidewater and land-terminating glaciers behave differently during the course of a melt season? Field campaigns will be undertaken during 2008 and 2009 to determine: 1) the rate, extent and duration of melt; 2) the temporal and spatial variations in water volumes stored in and released from supraglacial lakes and delivered to freely draining moulins; 3) the seasonal, diurnal and hourly variations in ice dynamics; and 4) the variations in proglacial discharge and water chemistry (at Russell Glacier). 
As a result of our work, it will be possible to determine whether ice dynamics at the margin of the GrIS are significantly affected by lubrication of the glacier bed following the drainage of surface-derived meltwaters. Our results will be delivered to ice-sheet modellers to help them constrain predictions for the future of the GrIS.
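The coupling between melt and flow that the project sets out to parameterise starts from estimates of surface melt. A positive degree-day (PDD) model is one standard way glaciologists derive melt from air temperature; the sketch below is a generic illustration only, and the degree-day factor and temperatures are assumed values, not project data.

```python
# Minimal positive degree-day (PDD) melt sketch: melt is taken as
# proportional to the sum of above-freezing daily mean temperatures.
# The degree-day factor below is an assumed illustrative value.

def pdd_melt(daily_mean_temps_c, ddf_mm_per_degday=8.0):
    """Melt (mm water equivalent) from daily mean temperatures via a PDD sum."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0)  # positive degree-days
    return ddf_mm_per_degday * pdd

# A warm week near the ice margin (hypothetical temperatures, deg C):
melt = pdd_melt([2.0, 4.5, 6.0, 5.5, 3.0, -1.0, 0.5])
print(round(melt, 1))  # 172.0 (mm w.e.)
```

In practice such a melt series would be compared against the sub-hourly velocity record to test whether dynamic response scales with melt input.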
- Project . 2008 - 2011Funder: UKRI Project Code: EP/F042728/1Funder Contribution: 224,957 GBPPartners: McGill University, University of Oxford, UvA
I aim to develop high-level structures for reasoning about the knowledge of agents in a multi-agent system where agents communicate and, as a result, update their information. All of us take part in such situations when communicating through the internet, surfing the web, bidding in auctions, or buying on financial markets. Reasoning about knowledge acquisition in these situations becomes more challenging when some agents are not honest: they cheat and lie in their actions, and as a result other agents acquire wrong information. The current models of these situations are low level: they require specifying untidy details and hide the high-level structure of information flow between the agents. This makes modelling a hard task and proving properties of the model an involved and complicated problem. The complexity of reasoning in these situations raises the question: 'Which structures are required to reason about knowledge acquisition?', in other words, 'What are the foundational structures of knowledge acquisition?'. High-level methods provide us with a minimal unifying structure that benefits from partiality of information: we do not need to specify all the details of the situations we are modelling. They also bring out the conceptual structure of information and update, hide the untidy details, and tidy up the proofs. My plan is to: (1) Study the foundational structures that govern knowledge acquisition as a result of information flow between agents, and then develop a unifying framework to formally express these structures in a logical syntax with a comprehensive semantics. I aim to use known mathematical structures, such as algebra, coalgebra and topology, for the semantics. The syntactic theory will be a rule-based proof-theoretic calculus that helps us prove properties about knowledge acquisition in a systematic, algorithmic manner. (2) Apply this framework to reason about security properties of multi-agent protocols. 
Examples of these protocols are communication protocols between a client and a bank for online banking. We want to make sure that such a protocol is secure, that is, that the client's information remains secret throughout the transaction. Because of the potentially unlimited computational abilities of the intruder, these protocols become very complex and verifying their security becomes a challenging task. It is exactly here that our high-level setting becomes a necessity, that is, in the formal analysis of these protocols and in proving their security properties. The semantic structures that I aim to use have also been used to model the logic of Quantum Mechanics. So my model will be flexible enough to accommodate quantum situations. These situations are important for security protocols because they benefit from the additional non-local capabilities of Quantum Mechanics, which guarantee better safety properties. I aim to apply the knowledge acquisition framework to quantum protocols and prove their sharing and secrecy properties. On the same track, similar semantic structures have been used for information retrieval from the web. I aim to exploit these models and study their relationship to my framework. (3) Write a computer program to implement the axiomatic semantic structure and produce a software package. This software will help us automatically verify properties of multi-agent protocols, such as the security protocols mentioned above.
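The kind of information update described above can be illustrated with the public-announcement update familiar from dynamic epistemic logic: announcing a fact removes the possible worlds where it is false, which can make an agent's knowledge grow. The sketch below is a generic illustration, not the algebraic/coalgebraic framework proposed here; the world contents and fact names are invented.

```python
# Toy epistemic update in the style of public announcement logic.
# Worlds are dicts of facts; an agent "knows" a fact iff it holds in
# every world the agent still considers possible. This is a generic
# illustration, not this proposal's own calculus.

worlds = [
    {"pin_is_1234": True},   # world where the fact holds
    {"pin_is_1234": False},  # world where it does not
]

def knows(fact, possible_worlds):
    """An agent knows `fact` iff it holds in all worlds it considers possible."""
    return all(w[fact] for w in possible_worlds)

def announce(fact, possible_worlds):
    """Public announcement of `fact`: discard the worlds where it fails."""
    return [w for w in possible_worlds if w[fact]]

print(knows("pin_is_1234", worlds))            # False: both worlds possible
worlds = announce("pin_is_1234", worlds)
print(knows("pin_is_1234", worlds))            # True after the announcement
```

Dishonest agents complicate exactly this picture: a lying announcement deletes the wrong worlds, so hearers come to "know" falsehoods, which is why richer update structures are needed.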
- Project . 2007 - 2011Funder: UKRI Project Code: NE/E004016/1Funder Contribution: 453,995 GBPPartners: NERC British Antarctic Survey, Free University of Brussels (VUB), ODU, NVE, University of Alberta, University of Bristol, Montana State University System
Carbon is one of the essential elements required for life to exist, alongside energy and liquid water. In contrast to other parts of the Earth's biosphere, cycling of carbon compounds beneath glaciers and ice sheets is poorly understood, since until recently these environments were believed to be devoid of life. Significant populations of micro-organisms have recently been found beneath ice masses (Sharp et al., 1999; Skidmore et al., 2000; Foght et al., 2004). Evidence shows that, as in other watery environments on Earth, these sub-ice microbes are able to process a variety of carbon forms over a range of conditions, producing greenhouse gases such as CO2 and CH4 (Skidmore et al., 2000). Almost nothing is known about 1) the range of carbon compounds available to microbes beneath ice, 2) the degree to which they can be used as food by microbes and 3) the rates of utilisation and the full spectrum of products (e.g. gases). This information is important for understanding the global carbon cycle on Earth. The fate of the large amounts of organic carbon overrun during the advance of the glaciers over the boreal forest during the last ice age (Van Campo et al., 1993), for example, is unknown and is likely to depend fundamentally on microbial processes in sub-ice environments. Current models of Earth's global carbon cycle assume this carbon is 'lost' from the Earth's system (Adams et al., 1990; Van Campo et al., 1993; Francois et al., 1999). The possibility that it is used by subglacial microbes and converted to CO2 and CH4 has not been considered. This may help explain variations in Earth's atmospheric greenhouse gas composition over the last 2 million years. Sub-glacial environments lacking a modern carbon supply (e.g. trees, microbial cells) may represent ideal model systems for icy habitats on other planetary bodies (e.g. Mars and Jupiter's moons; Clifford, 1987; Pathare et al., 1998; Kivelson et al., 2000), and may be used to help determine whether life is possible in these more extreme systems.
- Project . 2006 - 2011Funder: UKRI Project Code: EP/D073944/1Funder Contribution: 563,957 GBPPartners: University of Bristol, RWDI, NATIONAL RESEARCH COUNCIL OF CANADA
As more slender and more adventurous structures, such as cable-stayed bridges, are constructed, they become increasingly susceptible to large-amplitude vibrations, particularly due to aerodynamic loading. Wind-induced vibrations of bridge decks, cables, towers, lamp columns and overhead electricity cables are indeed very common. This can lead to unacceptably large movements, direct structural failure, or dangerous long-term fatigue damage of structural components. Complex interactions between the wind and the structure, and also between different components of the structure (e.g. cables and bridge deck), can lead to vibration problems, so for a proper understanding of the behaviour both aerodynamic and structural effects need to be considered. Whilst some of the mechanisms of wind loading of structures are reasonably well understood, others are not, and many instances of vibrations, particularly of cables, are not well explained. Recent work has developed a generalised method for analysing 'galloping' vibrations. These are caused by changes in wind forces on a structure when it starts to move, which actually tend to increase the motion. For typical bridge cables (or other similar-sized structures) in moderately strong winds, a particular change in the wind flow around the cable occurs, known as the drag crisis. This changes the forces on the cable and causes a special case of galloping-type vibrations, which the new method of analysis is able to predict for the first time. Comparisons of these calculations with wind tunnel test results on inclined cylinders have confirmed that the basic method does work, but there is a need to consider additional effects, such as wind turbulence, torsional motion of the structure and a more accurate account of the changes in the aerodynamic forces as the structure moves. 
It is proposed to develop the approach to include these effects, using further wind tunnel data, to eventually create a unified framework for wind loading analysis of any real structure, covering galloping together with the other aerodynamic mechanisms, buffeting (due to wind turbulence) and flutter. Meanwhile, interactions between vibrations of structural components can cause serious effects. For example, very small vibrations of a bridge deck can cause very large vibrations of the cables supporting it, through the mechanism of 'parametric excitation'. Even more surprisingly, in other instances, localised cable vibrations can lead to vibrations of the whole structure. Research under another grant is already considering these effects for very simplified structures, but it is proposed to extend the analysis to realistic full structures. Also, cables are often tied together to try to prevent vibrations of individual cables, but they can then all vibrate together as a network. This project therefore aims to analyse full cable networks, to understand how their vibrations can be limited. Finally, it is proposed to bring together the above two main areas, to include both aerodynamic and structural dynamic interactions in the analysis of slender structures. For example, because of the interactions, the wind loads on relatively small elements, such as cables, can have surprisingly large effects on the overall dynamic response of large structures. At present this is generally ignored, but the joint approach will address this issue. Also, in some instances, only a combined view of the phenomena may be able to explain the behaviour observed on full-scale structures in practice. The holistic view of the wind loading and structural behaviour should provide tools to help avoid undesirable and potentially dangerous effects of vibrations of slender structures in the future. 
Based on the analysis, this could be achieved by modifying the shape of the elements to change the wind loads, or introducing dampers to absorb enough vibration energy.
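The galloping instability discussed above is classically assessed with Den Hartog's quasi-steady criterion: the transverse aerodynamic damping becomes negative, and galloping becomes possible, when the lift-curve slope plus the drag coefficient is negative. The sketch below uses assumed illustrative coefficients, not results from this project.

```python
# Den Hartog's quasi-steady galloping criterion: instability is
# possible when dCL/dalpha + CD < 0 (negative aerodynamic damping).
# Coefficient values below are assumed for illustration only.

def galloping_prone(dCL_dalpha, CD):
    """True if the Den Hartog criterion indicates possible galloping."""
    return dCL_dalpha + CD < 0

# A plain circular cylinder in smooth flow (lift slope ~0) is stable:
print(galloping_prone(0.0, 1.2))    # False
# An iced or inclined cable can develop a strongly negative lift slope:
print(galloping_prone(-2.5, 1.0))   # True
```

The generalised method described above extends exactly this kind of criterion to cases, such as the drag crisis on inclined cables, where the force coefficients themselves change with wind speed.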
- Project . 2008 - 2011Funder: UKRI Project Code: BB/E020372/1Funder Contribution: 520,983 GBPPartners: Imperial College London, McGill University
Recent advances in biological technology enable multiple types of measurement to be made on complex systems, from the cell to the whole organism. However, these technologies generate massive amounts of data, and it is a major task to process these robustly and efficiently. The aim of our multidisciplinary project is to devise methods to combine and analyse the different data measurements arising from experiments in modern biology, which will ultimately aid the understanding of the causes of common diseases and lead to the development of new treatments. It is now possible to investigate how complex organisms function by measuring in great detail the chemical composition of, for example, a sample of blood or urine, and also to measure how that composition changes over time, or in reaction to different treatments or experimental conditions. Perhaps most importantly, it is also possible to compare the composition across different groups that may or may not have a particular disease, and to use this comparison to understand how treatments might be developed. This exciting prospect can only be achieved, however, if the experimental data are collected and analysed as accurately as possible. This is the principal goal of our research. We will focus on so-called 'metabolic' analysis using two specific types of technology (known by the initials NMR and MS) that allow us to measure the amounts of a large number of different chemicals (or metabolites) present in the samples of blood or other body fluids being analysed. Metabolites are small molecules, present in all organisms, which are essential to the functioning of their living cells. 
NMR and MS are both extremely sophisticated measurement procedures that each produce a large amount of data (spectra), but although the measurements from the two technologies contain some information on the same metabolites, most of the information from the two sources is not identical, and an important statistical modelling task involves combining data from them in the most sensible fashion. We will separate this task into two components; first, the mathematical modelling of the NMR and MS metabolite spectra, and secondly the combination of the data across the two measurement systems. Both components require major input from both biologists and statisticians involved in our research programme. The statistical analysis of the large amounts of data generated by NMR and MS technologies is an extremely challenging task. Some methods for data analysis do already exist, but they do not use all the information at hand. An important advantage of our approach is that we will use physico-chemical information already available about typical metabolites to direct how we build our models and carry out our analysis. Such physico-chemical 'prior' information has been only rarely used in the analysis of metabolite data, but we feel that it provides an important guide as to how analysis should proceed. Thus we will adopt a Bayesian statistical approach that combines data and prior information in a principled fashion. However, despite being scientifically attractive, this modelling approach needs advanced computing methods so that the analysis can be implemented, and a major component of the research we will carry out will be to implement the most efficient computational strategies. Understanding and modelling the content of NMR and MS metabolite spectra is a complicated task that requires both highly specialized chemical knowledge and state of the art statistical techniques. 
The novelty of our project is that by using a Bayesian analysis framework we are able to harness and incorporate such specialist information. Our multidisciplinary research team that combines expertise in modelling, statistics, chemical biology and bioinformatics will ensure the success of our research programme and facilitate the dissemination of its results to a wide community.
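The Bayesian principle described above, combining prior information with data in a principled fashion, can be sketched with the simplest conjugate case: a normal mean with known variance, where the posterior weights the prior and the data by their precisions. All numbers here are hypothetical and purely illustrative of the prior-plus-data idea, not of the project's actual spectral models.

```python
# Minimal conjugate Bayesian update for a normal mean with known
# variance: the posterior blends prior information and data, each
# weighted by its precision (1/variance). Values are hypothetical.

def posterior_normal_mean(prior_mean, prior_var, data, data_var):
    """Posterior (mean, variance) for a normal mean under a normal prior."""
    n = len(data)
    prec = 1.0 / prior_var + n / data_var            # posterior precision
    mean = (prior_mean / prior_var + sum(data) / data_var) / prec
    return mean, 1.0 / prec

# Prior centred on a literature value; three made-up peak intensities:
m, v = posterior_normal_mean(10.0, 4.0, [12.0, 11.5, 12.5], 1.0)
print(round(m, 2), round(v, 3))  # 11.85 0.308
```

With three precise observations, the posterior mean has moved most of the way from the prior (10.0) towards the data average (12.0), which is the behaviour that makes physico-chemical prior information useful without letting it dominate the measurements.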
- Project . 2007 - 2011Funder: UKRI Project Code: PP/E001947/1Funder Contribution: 286,718 GBPPartners: Lancaster University, UoC
The Earth possesses a magnetic field which is approximately dipolar in shape - very similar to the magnetic field produced by a simple bar magnet. Magnetic field lines emerge from the planet at one magnetic pole and extend out of the atmosphere and many thousands of kilometres into space, before returning to the magnetic pole in the opposite hemisphere. Rather than being a vacuum, the region of space that these field lines pass through is filled with plasma - an electrically conducting gas made up of charged particles. Most of these particles originate in the Earth's atmosphere, having been produced by ultraviolet sunlight which ionises gases in the high-altitude atmosphere. The Sun also possesses a strong magnetic field. As nuclear processes generate energy in the solar interior, the outer layer of the solar atmosphere expands outwards through the solar system (forming the solar wind), and carries with it remnants of the Sun's magnetic field (the interplanetary magnetic field). When the solar wind and interplanetary magnetic field arrive at the Earth, they collide with the Earth's magnetic field and are diverted around the planet. The cavity carved out of the solar wind by the Earth's magnetic field is called the magnetosphere. Inside the magnetosphere the plasma and magnetic field originate mainly from the Earth. Outside of the magnetosphere, they originate from the Sun. At the boundary between the interplanetary and terrestrial magnetic fields on the dayside of the Earth, the field lines sometimes orient themselves in opposite directions. When this happens, the field lines can merge or 'reconnect' across the boundary. In other words, closed magnetic field lines that start and finish at the Earth's surface in opposite hemispheres can be opened so that one end stays fixed to the Earth while the other extends outwards into the solar wind. 
Since the solar wind is constantly streaming away from the Sun, the newly-opened magnetic field line is dragged and stretched away from the Earth. Therefore, because of the process of magnetic reconnection at the dayside boundary, the Earth's dipolar magnetic field is stretched out on the planet's nightside to form a long magnetic tail that points away from the Sun. If the Earth's magnetic field were continuously being peeled away and dragged into the tail, eventually there would be no field left on the dayside of the planet. However, a process in the tail periodically acts to reduce the amount of open magnetic field in the tail and return closed field to the dayside - this process is magnetic reconnection. By reconnecting two open magnetic field lines a closed magnetic field is produced (like tying together the two loose ends of a piece of elastic). However, the resulting closed field is highly stretched and, just like a stretched elastic band, it contracts back towards the Earth, catapulting some of the magnetospheric plasma Earthward. The reconnection process in the tail is not steady. Generally, magnetic field builds up in the tail until some critical point is reached. Somehow, reconnection is triggered and stretched magnetic field is removed from the tail and returned to the Earth. The period when tail field is building is known as the substorm growth phase, while the explosive release of energy in the tail associated with reconnection and the closure of open field lines is known as the substorm expansion phase. However, the processes that cause the triggering of the expansion phase (i.e. the mechanisms that trigger the catapult) remain unclear - it is one of the biggest uncertainties in solar-terrestrial physics. This investigation will use measurements from instruments on spacecraft located in the tail, and observations made from the Earth, in order to determine the triggering mechanism of magnetospheric substorms.
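The 'approximately dipolar' field described above has a simple radial dependence: on the magnetic equator, the field magnitude falls off as the cube of geocentric distance. A minimal sketch, taking ~31,000 nT as an assumed equatorial surface value (a commonly quoted reference figure, used here for illustration only):

```python
# Equatorial field of a magnetic dipole: B(r) = B0 * (RE / r)**3,
# with distance r in Earth radii. B0 = 31,000 nT is an assumed
# illustrative surface value, not a measurement from this project.

def dipole_equatorial_b(r_in_earth_radii, b0_nt=31000.0):
    """Equatorial dipole field magnitude (nT) at r Earth radii."""
    return b0_nt / r_in_earth_radii ** 3

print(round(dipole_equatorial_b(1.0)))   # 31000 (surface)
print(round(dipole_equatorial_b(6.6)))   # 108 (near geostationary orbit)
```

This rapid fall-off is why the weak, easily stretched field of the distant tail behaves so differently from the strong field near the planet.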
- Project . 2011 - 2011Funder: UKRI Project Code: BB/J003980/1Funder Contribution: 1,710 GBPPartners: McMaster University, Mount Sinai Hospital Toronto, JIC
Canada
- Project . 2009 - 2011Funder: UKRI Project Code: EP/H023836/1Funder Contribution: 195,938 GBPPartners: University of Salford, McMaster University
The aim of silicon photonics is nothing less than the complete convergence of optics and electronics. In the first instance this endeavour was aimed at overcoming the limitations imposed by nature on the transport of information using electrons. However, the work has already thrown up more general optical technologies which can be miniaturised onto silicon chips. In fact, engineers are slowly building a whole optics toolbox on silicon, including detectors, modulators and spectrometers. The international consortium assembled for the current work has already made significant progress in providing the long-sought-after on-chip light source. The feasibility studies proposed here are aimed at building on the existing expertise found in the consortium and elsewhere to apply these technologies to the optical detection and manipulation of single biomolecules, in a way that can be miniaturised, giving devices that have such functionalities on a silicon chip. The impact of the work will be enhanced by the fact that the approaches used are compatible with those used during the manufacture of standard silicon chips, and that the end products can be mass produced (at costs measured in cents per unit) for personalised health-care applications in every home, doctor's surgery or pharmacy; for the detection of low-level atmospheric or water-borne pollutants; or for counter-terrorism/military applications.
- Project . 2008 - 2011Funder: UKRI Project Code: EP/E059430/1Funder Contribution: 312,723 GBPPartners: Petrobank Energy and Resources Ltd, University of Bath
Heavy crude oil and bitumen are a vast, largely unexploited hydrocarbon resource, with barely 1% produced so far, compared with more than 50% of conventional light oil (like the North Sea). More than 80% of this heavy, unconventional oil lies in the Western hemisphere, whereas more than 80% of conventional light oil lies in the Eastern hemisphere (mainly in the Middle East). Over the next 10-30 years, geopolitical factors, and also the emerging strength of Asian countries, especially India and China, will create increasing tensions and uncertainty with regard to the availability and supply of crude oil. Alongside gas, nuclear and renewables, crude oil will continue to be an important part of the UK's 'energy mix' for decades to come. How will the crude oil we need for industry and transportation be obtained, and will it be as secure as it was from the North Sea? The huge Athabasca Oil Sands deposits in Canada (1.5 trillion barrels) provide an opportunity for the UK to secure access to a long-term, stable supply. The first step towards this was the development of a new technology, THAI ('Toe-to-Heel Air Injection'), to produce Oil Sands bitumen and heavy oil. It was discovered by the Improved Oil Recovery group at the University of Bath in the 1990s, and is currently being field tested at Christina Lake, Alberta, Canada. In 1998, in collaboration with the Petroleum Recovery Institute (PRI), Calgary, Canada, the Bath group discovered another process, based on THAI, called CAPRI. The THAI-CAPRI processes have the potential to convert bitumen and heavy crude into virtually a light crude oil, of almost paraffin-like consistency, at a fraction of the cost of conventional surface processing. 
A surface upgrading plant has recently been proposed for the UK, at a cost of $2-3 billion. The advantage of CAPRI is that it creates a catalytic reactor in the petroleum reservoir, by 'sleeving' a layer of catalyst around the 500-100 m long horizontal production well, inside the reservoir. The high pressure and temperature in the reservoir enable thermal cracking and hydroconversion reactions to take place, so that only light, converted oil is produced at the surface. Apart from the cost of the catalyst, which can be a standard refinery catalyst, the CAPRI reactor is virtually free! All that is needed is to inject compressed air, in order to propagate a combustion front in a 'toe-to-heel' manner along the horizontal production well. In collaboration with the University of Birmingham, the project will investigate the effectiveness of a range of catalysts for use in the CAPRI process. The University of Birmingham team, led by Dr. Joe Wood, will investigate the long-term survivability of the catalysts, which is critical for the operation of CAPRI. Once the catalyst is emplaced around the horizontal well, it will be expensive to recover or replace. Previous 3D combustion cell experiments conducted by the Bath team only allowed catalyst operating periods of a few hours, whereas, in practice, the catalyst will need to survive, and remain active, for days or weeks. The Bath team will undertake detailed studies to characterise the internal pore structure of the catalysts used in the experiments, to obtain fundamental information on catalyst deactivation, which can be related to the process conditions and oil composition. They will also develop a detailed numerical model of the CAPRI reactor. This will provide a tool to explore the 'fine details' of the THAI-CAPRI process, which will aid in the selection/optimisation of the most suitable catalysts. The model will be incorporated into a larger model using the STARS reservoir simulator. 
Preliminary reservoir simulations will be made to explore the potential operating conditions for CAPRI at field scale. On a commercial scale, the THAI-CAPRI process could translate the oil resource in the Athabasca Oil Sands into the world's biggest, exceeding that of the Middle East.
- Project . 2011 - 2011Funder: UKRI Project Code: NE/I005978/1Funder Contribution: 297,319 GBPPartners: University of Birmingham, University of London, UEA, University of St Andrews, AUSTRALIAN NATIONAL UNIVERSITY, McGill University, Nanjing Institute of Geology & Palaeonto, NERC British Geological Survey
The Earth is a truly remarkable planet. In addition to the physical processes driving plate tectonics, climate and ocean-atmospheric exchange, it supports an extraordinary diversity of living organisms, from microbes to mammals and everything in between. Such wasn't always the case, however, and it is clear that both the planet and its biosphere have evolved - indeed, co-evolved - over deep time. In the past two billion years, by far the most fundamental shift in this co-evolutionary process occurred during the Neoproterozoic (1000 to 542 million years ago), a planetary revolution that culminated in the modern Earth system. The Neoproterozoic begins with a biosphere populated almost exclusively by microbes, and ends in the midst of its greatest ever evolutionary radiation - including the diverse macroscopic and biomineralizing organisms that define the modern biosphere. At the same time, it witnessed the greatest climatic and biogeochemical perturbations that the planet has ever experienced, alongside major palaeogeographic reconfigurations and a deep ocean that was becoming oxygenated for the first time. There is no question that these phenomena are broadly interlinked, but the tangle of causes, consequences and co-evolutionary feedbacks has yet to be convincingly teased apart. In order to reconstruct the Neoproterozoic revolution, we propose a multidisciplinary programme of research that will capture its evolving geochemical and biological signatures in unprecedented detail. Most significantly, these collated data will be assessed and modelled in the context of a co-evolving Earth system, whereby developments in one compartment potentially facilitate and escalate those in another, sometimes to the extent of deriving entirely novel phenomena and co-evolutionary opportunities. 
Our approach will be guided by three general hypotheses, testable against accruing data and theory: H1) that the enhanced weathering associated with land-dwelling eukaryotes was initiated in the early Neoproterozoic leading to major environmental change, including extreme glaciations and stepwise increase(s) in atmospheric oxygen concentration; H2) that major environmental changes in the mid Neoproterozoic triggered the emergence of animals; and H3) that the late Neoproterozoic-Cambrian radiations of animals and biomineralization were themselves responsible for much of the accompanying biogeochemical perturbation. Primary data for this project will be assembled from field studies of key geological sections in the UK and North China, along with contributed sample sets from Namibia, Spitsbergen and various archived collections. Together, these offer close to comprehensive coverage of the Neoproterozoic - not least, spectacular new surfaces of Ediacaran macrofossils from Charnwood Forest. Collected samples will be analysed to assess associated weathering and climate (Sr, C, O and S isotopes), oceanic redox conditions (Fe speciation and trace metals), nutrient dynamics (P speciation and trace metals) and biological constituents (microfossils, macrofossils and biomarker molecules). These data will be integrated and interrogated through the development of heuristic, spatial and evolutionary models. Beyond its integrative approach, the strength of this proposal lies in the diversity of the contributing researchers. Alongside our own expertise in biogeochemistry, palaeobiology and Earth system modelling, we are very pleased to have attracted world-class project partners in Neoproterozoic stratigraphy, geochronology and biomarker analysis. Further insight will come from our contingent of two PDRAs and three PhD students working across the range of topics and linked via a schedule of regular team meetings. 
Taken together, we anticipate a fundamentally improved understanding of the Neoproterozoic Earth system and the co-evolutionary interplay between the biosphere and planet.