4 Projects, page 1 of 1
- Project, 2008-2011. Funder: UKRI. Project Code: EP/F042728/1. Funder Contribution: 224,957 GBP. Partners: McGill University, University of Oxford, UvA
I aim to develop high level structures for reasoning about the knowledge of agents in a multi-agent system where agents communicate and, as a result, update their information. All of us take part in such situations when communicating through the internet, surfing the web, bidding in auctions, or buying on financial markets. Reasoning about knowledge acquisition in these situations becomes more challenging when some agents are dishonest: they cheat and lie in their actions, and as a result other agents acquire wrong information. The current models of these situations are low level: they require specifying untidy details and hide the high level structure of information flow between the agents. This makes modeling a hard task and proving properties of the model an involved and complicated problem. The complexity of reasoning in these situations raises the question: ``Which structures are required to reason about knowledge acquisition?'', in other words, ``What are the foundational structures of knowledge acquisition?''. High level methods provide us with a minimal unifying structure that benefits from the partiality of information: we do not need to specify all the details of the situations we are modeling. They also bring out the conceptual structure of information and update, hide the untidy details, and tidy up the proofs. My plan is to (1) study the foundational structures that govern knowledge acquisition as a result of information flow between the agents, and then develop a unifying framework to express these structures formally in a logical syntax with a comprehensive semantics. I aim to use known mathematical structures, such as algebra, coalgebra and topology, for the semantics. The syntactic theory will be a rule-based proof-theoretic calculus that helps us prove properties of knowledge acquisition in a programmatic, algorithmic manner. (2) Apply this framework to reason about security properties of multi-agent protocols.
Examples of these protocols are the communication protocols between a client and a bank in online banking. We want to make sure that such a protocol is secure, that is, that the client's information remains secret throughout the transaction. Because of the potentially unlimited computational abilities of the intruder, these protocols become very complex and verifying their security becomes a challenging task. It is exactly here that our high level setting becomes a necessity: in the formal analysis of these protocols and in proving their security properties. The semantic structures that I aim to use have also been used to model the logic of Quantum Mechanics, so my model will be flexible enough to accommodate quantum situations. These situations are important for security protocols because they benefit from the additional non-local capabilities of Quantum Mechanics, which guarantee stronger security properties. I aim to apply the knowledge acquisition framework to quantum protocols and prove their sharing and secrecy properties. Along the same lines, similar semantic structures have been used for information retrieval from the web; I aim to exploit these models and study their relationship to my framework. (3) Write a computer program that implements the axiomatic semantic structure and produce a software package. This software will help us automatically verify properties of multi-agent protocols, such as the security protocols mentioned above.
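The kind of epistemic update the project reasons about can be illustrated with a deliberately simple possible-worlds (Kripke-style) sketch: an agent knows a fact when it holds in every world the agent considers possible, and a public announcement discards the worlds where the fact fails. The model, agent name, and facts below are invented for illustration and are not the framework proposed in this project.

```python
def knows(agent, fact, worlds, access, actual):
    """Agent knows `fact` iff it holds in every world the agent
    considers possible from the actual world."""
    return all(fact in worlds[w] for w in access[agent][actual])

def announce(fact, worlds, access):
    """Public announcement: discard worlds where `fact` fails and
    restrict each agent's accessibility to the surviving worlds."""
    keep = {w for w, facts in worlds.items() if fact in facts}
    new_worlds = {w: worlds[w] for w in keep}
    new_access = {a: {w: rel[w] & keep for w in keep}
                  for a, rel in access.items()}
    return new_worlds, new_access

# Two worlds: in w1 a coin shows heads, in w2 it does not.
worlds = {"w1": {"heads"}, "w2": set()}
# Alice cannot distinguish the two worlds; the actual world is w1.
access = {"alice": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}}

print(knows("alice", "heads", worlds, access, "w1"))   # False
worlds, access = announce("heads", worlds, access)
print(knows("alice", "heads", worlds, access, "w1"))   # True
```

After the announcement only w1 survives, so Alice's uncertainty is resolved; richer models would add dishonest announcements, where agents may end up "knowing" false things.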
- Project, 2008-2011. Funder: UKRI. Project Code: BB/E020372/1. Funder Contribution: 520,983 GBP. Partners: Imperial College London, McGill University
Recent advances in biological technology enable multiple kinds of measurement of complex systems, from the cell to the whole organism. However, these technologies generate massive amounts of data, and it is a major task to process them robustly and efficiently. The aim of our multidisciplinary project is to devise methods to combine and analyze the different data measurements arising from experiments in modern biology, methods that will ultimately aid in the understanding of the causes of common diseases and lead to the development of new treatments. It is now possible to investigate how complex organisms function by measuring in great detail the chemical composition of, for example, a sample of blood or urine, and also to measure how that composition changes over time, or in reaction to different treatments or experimental conditions. Perhaps most importantly, it is also possible to compare the composition across different groups that may or may not have a particular disease, and to use this comparison to understand how treatments might be developed. This exciting prospect can only be achieved, however, if the experimental data are collected and analyzed as accurately as possible. This is the principal goal of our research. We will focus on so-called 'metabolic' analysis using two specific types of technology (known by the initials NMR and MS) that allow us to measure the amounts of a large number of different chemicals (or metabolites) present in the samples of blood or other body fluids being analyzed. Metabolites are small molecules present in all organisms which are essential to the functioning of their living cells.
NMR and MS are both extremely sophisticated measurement procedures that each produce a large amount of data (spectra). Although the measurements from the two technologies contain some information on the same metabolites, most of the information from the two sources is not identical, and an important statistical modelling task involves combining data from them in the most sensible fashion. We will separate this task into two components: first, the mathematical modelling of the NMR and MS metabolite spectra, and second, the combination of the data across the two measurement systems. Both components require major input from both the biologists and the statisticians involved in our research programme. The statistical analysis of the large amounts of data generated by NMR and MS technologies is an extremely challenging task. Some methods for data analysis do already exist, but they do not use all the information at hand. An important advantage of our approach is that we will use physico-chemical information already available about typical metabolites to direct how we build our models and carry out our analysis. Such physico-chemical 'prior' information has only rarely been used in the analysis of metabolite data, but we feel that it provides an important guide as to how the analysis should proceed. We will therefore adopt a Bayesian statistical approach that combines data and prior information in a principled fashion. However, despite being scientifically attractive, this modelling approach needs advanced computing methods so that the analysis can be implemented, and a major component of our research will be to implement the most efficient computational strategies. Understanding and modelling the content of NMR and MS metabolite spectra is a complicated task that requires both highly specialized chemical knowledge and state-of-the-art statistical techniques.
The novelty of our project is that, by using a Bayesian analysis framework, we are able to harness and incorporate such specialist information. Our multidisciplinary research team, which combines expertise in modelling, statistics, chemical biology and bioinformatics, will ensure the success of our research programme and facilitate the dissemination of its results to a wide community.
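The Bayesian principle of combining prior information with data can be shown in a deliberately simplified form: a Gaussian prior on one metabolite's concentration is combined with two noisy measurements standing in for NMR and MS readings, assuming Gaussian errors with known variances. All numbers are invented for illustration; the project's actual spectral models are far richer.

```python
def posterior(prior_mean, prior_var, measurements):
    """Conjugate normal update with known variances: precisions
    (inverse variances) add, and the posterior mean is the
    precision-weighted average of prior and data."""
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for value, var in measurements:
        precision += 1.0 / var
        weighted += value / var
    return weighted / precision, 1.0 / precision

# Prior: concentration ~ N(10, 4); "NMR" reads 12 (var 1), "MS" reads 11 (var 2).
mean, var = posterior(10.0, 4.0, [(12.0, 1.0), (11.0, 2.0)])
print(round(mean, 3), round(var, 3))  # → 11.429 0.571
```

The more precise measurement pulls the posterior harder, and every observation shrinks the posterior variance, which is the sense in which prior and data are combined "in a principled fashion".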
- Project, 2008-2011. Funder: UKRI. Project Code: EP/E059430/1. Funder Contribution: 312,723 GBP. Partners: Petrobank Energy and Resources Ltd, University of Bath
Heavy crude oil and bitumen are a vast, largely unexploited hydrocarbon resource, with barely 1% produced so far, compared with more than 50% of conventional light oil (like the North Sea). More than 80% of this heavy, unconventional oil lies in the Western hemisphere, whereas more than 80% of conventional light oil lies in the Eastern hemisphere (mainly in the Middle East). Over the next 10-30 years, geopolitical factors, and also the emerging strength of Asian countries, especially India and China, will create increasing tensions and uncertainty with regard to the availability and supply of crude oil. Alongside gas, nuclear and renewables, crude oil will continue to be an important part of the UK's 'energy mix' for decades to come. How will the crude oil we need for industry and transportation be obtained, and will it be as secure as it was from the North Sea? The huge Athabasca Oil Sands deposits in Canada (1.5 trillion barrels) provide an opportunity for the UK to secure access to a long-term, stable supply. The first step towards this was the development of a new technology, THAI ('Toe-to-Heel Air Injection'), to produce Oil Sands bitumen and heavy oil. It was discovered by the Improved Oil Recovery group at the University of Bath in the 1990s, and is currently being field tested at Christina Lake, Alberta, Canada. In 1998, in collaboration with the Petroleum Recovery Institute (PRI), Calgary, Canada, the Bath group discovered another process, based on THAI, called CAPRI. The THAI-CAPRI processes have the potential to convert bitumen and heavy crude into a virtually light crude oil, of almost paraffin-like consistency, at a fraction of the cost of conventional surface processing.
A surface upgrading plant has recently been proposed for the UK, at a cost of $2-3 billion. The advantage of CAPRI is that it creates a catalytic reactor in the petroleum reservoir, by 'sleeving' a layer of catalyst around the 500-1000 m long horizontal production well, inside the reservoir. The high pressure and temperature in the reservoir enable thermal cracking and hydroconversion reactions to take place, so that only light, converted oil is produced at the surface. Apart from the cost of the catalyst, which can be a standard refinery catalyst, the CAPRI reactor is virtually free! All that is needed is to inject compressed air, in order to propagate a combustion front in a 'toe-to-heel' manner along the horizontal production well. In collaboration with the University of Birmingham, the project will investigate the effectiveness of a range of catalysts for use in the CAPRI process. The University of Birmingham team, led by Dr. Joe Wood, will investigate the long-term survivability of the catalysts, which is critical for the operation of CAPRI: once the catalyst is emplaced around the horizontal well, it will be expensive to recover or replace. Previous 3D combustion cell experiments conducted by the Bath team only allowed catalyst operating periods of a few hours, whereas, in practice, the catalyst will need to survive and remain active for days or weeks. The Bath team will undertake detailed studies to characterise the internal pore structure of the catalysts used in the experiments, to obtain fundamental information on catalyst deactivation, which can be related to the process conditions and oil composition. They will also develop a detailed numerical model of the CAPRI reactor. This will provide a tool to explore the 'fine details' of the THAI-CAPRI process, which will aid in the selection and optimisation of the most suitable catalysts. The model will be incorporated into a larger model using the STARS reservoir simulator.
Preliminary reservoir simulations will be made to explore the potential operating conditions for CAPRI at field scale. On a commercial scale, the THAI-CAPRI process could make the oil resource in the Athabasca Oil Sands the world's biggest, exceeding that of the Middle East.
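The gap between lab runs of hours and field operation of weeks can be sketched with one standard textbook ingredient of such a reactor model: first-order catalyst deactivation, da/dt = -kd·a. The rate constant below is invented for illustration, not a measured CAPRI value.

```python
import math

def activity(kd_per_day, t_days):
    """Fraction of initial catalyst activity remaining after t_days,
    assuming first-order deactivation: da/dt = -kd * a, so
    a(t) = exp(-kd * t)."""
    return math.exp(-kd_per_day * t_days)

# Contrast a combustion-cell run of a few hours with field time scales,
# using an illustrative kd of 0.1 per day:
print(f"after 4 hours: {activity(0.1, 4 / 24):.3f}")  # ~0.983
print(f"after 4 weeks: {activity(0.1, 28):.3f}")      # ~0.061
```

With this (invented) rate constant a few-hour experiment sees almost no activity loss, while over field-relevant weeks most activity is gone, which is why the project's deactivation studies and longer-duration modelling matter.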
- Project, 2008-2011. Funder: UKRI. Project Code: EP/G014124/1. Funder Contribution: 283,718 GBP. Partners: University of Birmingham, NRC
The ability to control the evolution of a reaction is a long-standing goal of chemistry. One approach is to use the electric field provided by a laser pulse as the guide. Recent work has focused on shaping and timing the pulse so that the field interacts with the molecules in a particular way, influencing the energy flow through the molecule and thus, eventually, the course of a reaction. The optimal pulse shape is found using a feedback loop: a signal related to the desired outcome is monitored, and a computer algorithm changes the pulse shape during repeated cycles of the experiment until the signal is maximised. This optimal control scheme has proved able to control a wide range of chemical systems, but the complicated pulse shapes provide little insight into the procedure, and the experiments have a black-box nature. A different, very appealing, approach to control through a laser field is to use the field to change the shape of the potential energy surface over which the reaction proceeds. This can be achieved using a strong pulse which induces Stark shifting of the surface. By careful timing of a pulse of the appropriate strength, it has been shown that it is possible to control the products of IBr dissociation by effectively changing the barrier height to the different possible channels. The project aims to investigate theoretically this potentially general approach to laser control. The results should start to build up a picture of how the complicated potential energy surfaces of small molecules are altered by interaction with the field. This will help in the development of experiments and in our understanding of how molecules behave in a light field.
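The barrier-reshaping idea can be sketched numerically: for a strong non-resonant field, the dynamic Stark effect lowers the potential by roughly alpha(x)·E²/4, where alpha is the polarizability. The model potential, polarizability profile, and field strengths below are invented for illustration only, not parameters of IBr.

```python
import math

def barrier_top(field):
    """Height of a model 1D barrier V(x) = exp(-x^2) under a
    non-resonant field, with effective potential
    V_eff(x) = V(x) - alpha(x) * field^2 / 4,
    using an illustrative polarizability alpha(x) = 1 + 0.5*exp(-x^2)."""
    xs = [i * 0.01 for i in range(-300, 301)]
    v_eff = [math.exp(-x * x)
             - 0.25 * (1.0 + 0.5 * math.exp(-x * x)) * field ** 2
             for x in xs]
    return max(v_eff)

# Increasing field strength pulls the barrier top down:
for field in (0.0, 0.5, 1.0):
    print(f"E = {field:.1f}: barrier top = {barrier_top(field):.3f}")
# → E = 0.0: 1.000, E = 0.5: 0.906, E = 1.0: 0.625
```

Because alpha is larger near the barrier in this toy model, the shift is channel-dependent, which is the mechanism by which a well-timed pulse can steer dissociation into one product channel over another.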