Understanding the processes capable of driving parts of the ocean from oxic (oxygen-containing) to either anoxic (no oxygen) or euxinic (no oxygen and containing toxic sulfide) conditions is critical for identifying possible scenarios for the future ocean on a warmer Earth. The mid-Cretaceous is well known as a time of high global temperatures, and linked to this was the widespread deposition of black shales (organic-rich sediments commonly deposited during ocean anoxic events). The detailed chemical nature of these black shale units can potentially document a remarkable record of the driving factors and response of the Earth System to rapid climate change, and they have therefore become a focal point for extreme climate research. Recent short time-scale (millennial and shorter) sediment records have demonstrated that the Cretaceous ocean was characterized by repeated rapid changes between oxic and anoxic depositional conditions, which were particularly extreme during black shale deposition. Understanding these short-term cycles is essential for improving our understanding of how predicted future rapid swings in climate may affect ocean chemistry, with subsequent impacts on ecosystem stability and climate feedback mechanisms. While most studies have focussed on making a simple distinction between oxic and anoxic conditions, this project will instead offer a measure of different levels of oxygen depletion, by applying a number of well-established and novel redox indicators. The focus will be to use isotope tracers (Mo, Cr, S) to assess the spatial extent of marine oxygen depletion and to recognise relationships between local, regional, and global mechanisms, since it is the local and regional effects of global warming that are predicted to cause major uncertainties in the future.
The proposed research will develop high-resolution geochemical records by combining novel isotope tracers with well-established, cutting-edge analytical techniques. Specific expected outcomes of the proposal include: 1. A better understanding of Cr isotope systematics during chemical transformations. This is essential for applying the Cr isotope system as a robust redox proxy. The approach includes a series of experimental measurements of Cr isotope fractionations upon sorption to and co-precipitation with different Fe (oxyhydr)oxides, under oxic, anoxic (non-sulphidic) and sulphidic conditions. 2. High-resolution records of the specific nature of oxygen depletion (e.g. oxic versus anoxic versus euxinic) across several well-defined, short time-scale cycles, using multiple geochemical techniques. 3. Insights into short-term changes in the local and global redox state of the ocean by pairing high-resolution Mo and Cr isotope analyses. 4. An evaluation of the mechanisms involved in driving changes in ocean redox levels, utilizing detailed analyses of C-S-Fe systematics and geochemical and isotopic data. The investigation of redox cycles and short-term redox transitions, where existing data highlight differences in the timing and phase relationships of geochemical features, will reveal which pathways could have led to the observed redox changes.
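The isotope-fractionation measurements described in outcome 1 are commonly interpreted with a closed-system Rayleigh model. The sketch below illustrates that standard relationship in Python; the fractionation factor used is an illustrative placeholder, not a value measured by this project, and whether sorption to Fe (oxyhydr)oxides follows Rayleigh behaviour is exactly the kind of question the proposed experiments would test.

```python
import math

def rayleigh_delta(delta0, epsilon, f):
    """Approximate Rayleigh fractionation: d53Cr (per mil) of the
    residual dissolved Cr pool after a fraction (1 - f) has been
    removed, e.g. by sorption to an Fe (oxyhydr)oxide phase.
    delta0:  initial d53Cr of the pool (per mil)
    epsilon: fractionation factor (per mil); negative if the removed
             phase is isotopically light (illustrative value below)
    f:       fraction of Cr remaining in solution, 0 < f <= 1
    """
    return delta0 + epsilon * math.log(f)

# If removal preferentially takes light Cr (epsilon = -1.0 per mil,
# an assumed illustrative value), the residual pool grows heavier:
print(rayleigh_delta(0.0, -1.0, 0.5))  # ~ +0.69 per mil at 50% remaining
```

Paired with measured d53Cr profiles, this kind of relation is what lets the residual-pool composition be inverted for the extent of Cr removal, and hence for redox conditions.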
The overall aim of the proposed PhD is to develop and conduct the first closed-loop demonstration of process control in upstream bioprocessing. Doing so will involve, but is not limited to, the following: (1) Develop a consistent framework to build and formalise model development. (2) Define the measures of success of a process plant model (with regard to upstream biopharmaceutical process control). (3) Apply process data to produce a process plant model. The resulting model should then be validated against an independent set of plant data (e.g. from GSK). If the available plant data are deemed unsuitable or incomplete, the design-of-experiments paradigm should be used to capture data for a range of process conditions and operating modes. (4) Review the measurement technology that can be implemented towards automation of upstream biopharmaceutical processes, justifying whether current technology will meet the needs of this study. (5) Develop an online process control methodology that can be applied to control upstream biopharmaceutical processes, in particular one suited to controlling mAb glycosylation. (6) Validate and implement this process control methodology on an existing online process.
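At its simplest, the closed-loop control of step (5) can be pictured as a discrete feedback loop of the classic PID form. The sketch below is a generic Python illustration only: the gains, setpoint, controlled variable (dissolved oxygen) and the toy one-line plant model are all assumed for illustration, and the project's actual controller for quantities such as mAb glycosylation would be considerably more involved.

```python
class PIDController:
    """Minimal discrete PID controller: a sketch of closed-loop
    feedback, not the project's actual control methodology."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd  # illustrative gains
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        """Return a control action from the latest online measurement."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical example: steering dissolved oxygen toward a 40% setpoint
# in a toy first-order plant (all numbers assumed for illustration).
pid = PIDController(kp=0.5, ki=0.1, kd=0.05, setpoint=40.0, dt=1.0)
do_level = 30.0
for _ in range(50):
    do_level += 0.2 * pid.update(do_level)
```

The loop structure (measure, compare to setpoint, actuate) is the essence of the "closed-loop demonstration" the PhD aims for, whatever control law is ultimately chosen.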
As we move into an era where climate models produce tens of petabytes of data, how do we turn that ocean of data into scientific insight? Conducting new analyses of climate simulations is a core mechanism for developing understanding of the climate system. As computers become larger and the models behind these simulations become ever more sophisticated, the ability of scientists to work effectively with the data is increasingly frustrated. CMIP5 was estimated to produce 3.3 petabytes of data (1000 state-of-the-art hard drives) and CMIP6 has a projected data volume of 18 petabytes. Key countries including the UK, the US and Germany are currently rebuilding their climate model software on the basis of more sophisticated numerics. This will produce more accurate simulations, but also data sets which are more complex to process correctly. However, scientific advances strongly depend on diversity of effort: it is essential that small groups of scientists in diverse institutions can test innovative ideas against climate model data sets. As the data volume increases and the numerics become more complex, it becomes ever more essential that small groups of scientists and students can compute new derived quantities. A climate statistic is a mathematical statement, which a climate scientist can typically express in a few lines of mathematics. The current approach to the evaluation of this statement is for a scientist to spend weeks or months developing a bespoke script and tuning it to the separate data structure of each climate model to which it is to be applied. This is labour-intensive and requires reworking for each new statistic and each new model. Most critically, there is no effective mechanism for users of the results to verify that the statistic is correctly evaluated. Furthermore, this approach typically requires the data to be downloaded by each research group, an increasingly infeasible task.
The missing link in this process is the ability to take the mathematical statement of the statistic and automatically and efficiently evaluate it correctly in the light of the discrete data representation of each model. The student on this project will make a major contribution to the solution of this problem by producing a system which generates climate data query software from the high-level mathematical specification of the diagnostic to be calculated. They will leverage the existing Firedrake project (http://firedrakeproject.org) to automatically derive mathematically correct parallel algorithms. The resulting system will be: Efficient: rather than spending months on coding, climate scientists will be able to move directly from formulating the question to studying the outputs. Model portable: the same mathematical statement can be run on different models. This is essential for reliable and trustable intercomparisons. Verifiably correct: the statistics will be correctly calculated from the underlying numerics; this will be testable through extensive test suites, and the scientist will be able to publish the actual mathematical code in their papers, so the provenance of their results is established and testable. Distributed: statistics can be calculated and processed where the data is archived, without downloading huge data sets. If individual scientists are to continue to do innovative work with climate model data on which the users of climate science can rely, solving the problems this project addresses is essential.
We put forward the CENTRAL HYPOTHESIS that a significant fraction of the observed oceanic variability is intrinsic, that is, driven by the internal dynamics of the ocean rather than by variations in the external forcing. Some of this variability is likely to be explained in terms of the transient linear modes of the climatological mean circulation, but most of it --- and this is our SECOND HYPOTHESIS --- is likely to be driven and controlled by the transient mesoscale eddies that constitute the synoptic variability of the ocean and strongly interact with the large-scale circulation. The RESEARCH STRATEGY of this Project consists of several steps. First, we will employ a cutting-edge, idealized numerical model on the UK's national supercomputer HECToR and compute a set of pioneering solutions with explicitly and very well resolved eddies and with explicitly simulated large-scale low-frequency variability. All solutions will be computed in a systematic way, from physically simpler to more comprehensive, and the focus of the proposed modelling will be on the dynamical realism of the eddies and on the underlying fundamental physical processes. Our preliminary results demonstrate the presence of robust and significant intrinsic large-scale low-frequency variability that interacts with and is driven by the transient mesoscale eddies. Second, all solutions will be systematically analysed, and the outcome of this analysis will provide the basis for building a theory. At this point we will be guided, perhaps only initially, by existing theoretical ideas. A very useful and efficient NETWORK OF COLLABORATIONS will connect this Project with research groups engaged in observations, modelling, and understanding of oceanic large-scale low-frequency variability and the underlying eddy effects. The INTELLECTUAL MERIT of this Project is in addressing the poorly understood intrinsic variability of the ocean and the corresponding roles of mesoscale eddies.
The BROADER IMPACT is in terms of understanding global climate variability, with the ultimate goal of achieving more accurate predictions of global climate change. The BROADER CONTEXT is that the underlying nonlinear mechanisms are likely to be pertinent to other parts of the global ocean.