
University of Bristol

Country: United Kingdom

4,573 Projects, page 1 of 915
  • Funder: UKRI Project Code: BB/I014063/1
    Funder Contribution: 294,752 GBP

    Proteins are biological molecules constructed from linear chains of amino acids that adopt complex 3D structures determined by their amino acid sequence. Each protein typically has a unique structure that is indelibly linked to the function it performs in nature. Enzymes are proteins that catalyze the chemical reactions that occur in the cell; examples include those that facilitate the capture and storage of chemical energy from respiration and photosynthesis. The design of new artificial proteins and enzymes remains one of the great challenges in biochemistry, testing our fundamental understanding of proteins as materials. Unlocking the exceptionally powerful array of chemistries that natural enzymes perform promises routes to new drugs, therapies and sources of renewable green energy.

    Most attempts to construct new enzymes have focussed on modifying natural proteins and enzymes to introduce new catalytic function, with modest success. The problems associated with redesigning natural proteins stem from the layers of complexity that natural selection builds into a protein's complicated 3D structure. This complexity frustrates the functional deconstruction of naturally evolved proteins and enzymes, rendering their redesign intrinsically difficult. We believe that such complexity is not a necessary feature of proteins and enzymes, and our method for avoiding it is to work with proteins that have been untouched by natural selection. These simple proteins, neoproteins, are small, robust protein scaffolds with generic amino acid sequences that serve as templates onto which natural protein functions can be added. Non-protein components of certain proteins and enzymes, such as the heme molecule of hemoglobin, can be effectively supported in neoproteins, and the various functions these molecules perform in natural proteins can then be exploited.

    An example of this method in action is the creation of a heme-binding neoprotein capable of reversibly binding oxygen, a function common to myoglobin, hemoglobin and the recently discovered neuroglobin. Functional elements are added step by step, and the requirements for forming such a protein are surprisingly few. Because E. coli produces the artificial protein in large quantities, the oxygen-binding neoprotein is exceptionally cheap to produce and easy to alter through standard molecular biology techniques. Since the oxygen-bound state of heme proteins is a prerequisite for a multitude of catalytic processes in natural proteins, we plan to take inspiration from nature to develop these proteins further into artificial enzymes. We have developed the oxygen-binding neoprotein to include hemes rigidly attached to the protein backbone. This alleviates the heme loss that afflicted previous designs and allows unprecedented control over neoprotein properties and function. Since natural oxygen-dependent catalysis requires that oxygen be 'activated' by the controlled addition of electrons, we will explore this reaction in our oxygen-binding neoproteins, gaining valuable information about the generation and stability of intermediates capable of powerful oxygenic catalysis. Ultimately, we plan to combine the oxygen-binding and electron-delivery functions into either a single protein or a combination of associated protein subunits with discrete functions.
    Much as modular furniture design combines smaller, functionally independent subunits such as legs, drawers and shelves, assembling them to particular specifications, we think an analogous approach can be applied to the construction of new proteins and enzymes whose functions are dictated by the designer. An advantage of this approach is that reproducing enzyme and protein function in artificial proteins yields a deep, fundamental understanding of the workings of their natural counterparts.
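
    The step-by-step, modular philosophy described above lends itself to a compositional picture. The sketch below is a toy illustration of that idea only, not a real design tool; the scaffold, cofactor and function names are hypothetical assumptions.

```python
# A toy sketch of the "modular" design philosophy described above.
# Not a real design tool: scaffold, cofactor and function names are
# hypothetical, chosen only to mirror the abstract's description.
from dataclasses import dataclass, field

@dataclass
class Neoprotein:
    scaffold: str                                # generic, evolution-free template
    cofactors: list = field(default_factory=list)
    functions: list = field(default_factory=list)

    def add_cofactor(self, name, covalent=False):
        """Attach a non-protein component, e.g. a heme; covalent=True
        models the rigid attachment that prevents cofactor loss."""
        self.cofactors.append((name, covalent))
        return self

    def add_function(self, name):
        """Record a natural protein function grafted onto the scaffold."""
        self.functions.append(name)
        return self

# Functional elements added one at a time, as in the abstract.
design = (Neoprotein(scaffold="generic helical bundle")   # hypothetical
          .add_cofactor("heme", covalent=True)
          .add_function("reversible O2 binding")
          .add_function("electron delivery"))
print(design)
```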

  • Funder: UKRI Project Code: 1806689

    The aim of this research project is to develop methodology for data-efficient and interpretable modelling from multiple views. The focus will be on nonparametric Bayesian methods, such as models based on Gaussian processes. The project's goal is to develop machine learning methods that are applicable to a large range of different scenarios. We will initially focus on applications that use motion-capture data. Our motivation for this is twofold. First, human motion is readily interpretable by humans, making it easy to attach "meaning" to results; this lets us evaluate our first goal, creating interpretable models. Second, we wish to be data efficient. Motion-capture data is expensive to collect and requires specialist equipment, so we will have to learn from small amounts of data, which requires us to use it in its most efficient form. This also leads to the multi-view motivation: given several people performing the same action, we can treat each performance as a view of the same underlying concept, directly expanding our dataset compared with learning from only one person.

    Although motivated by motion-capture data, multi-view learning is applicable to a large range of scenarios. Virtually everything can be seen from more than one perspective: a car driving down the street seen by two people from different angles, or just by the two eyes of a single person; another perspective on the car is its sound, or even its smell. A face can be seen from the perspectives of ten different individuals, or in twenty different lighting conditions. Perspectives are not limited to physical objects: consider the action of walking carried out by different individuals or groups of people, or a song performed by different musicians. Multi-view learning exploits these connections in the data to uncover latent concepts that explain each of the views in a joint manner. This allows us to understand in what respects the different views are similar and where they differ, as well as to conduct inference between them.

    A central aspect of any learning system is the ability to interrogate a model. What has been learnt? How likely is the model? What is the certainty of its predictions? Besides understanding being the essence of many applications, such as analysis or diagnosis, it is vital for trusting the result of learning, the assumptions made and the predictions, as well as for the guidance and comparison useful for further development. Over the last decade we have seen machine learning successfully applied to a large range of new domains. Many of its successes come from the availability of large datasets. In many ways, large amounts of data have reduced the demands on learning systems, as they can be allowed to be less abstract. However, for many applications large datasets are not (and most likely will not be) available, which means we need to use the information in the data as efficiently as possible. We will therefore work on data-efficient models that use principled uncertainty propagation to reduce the need for large datasets.
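
    As a minimal illustration of the Gaussian-process machinery mentioned above (a sketch, not the project's actual models), the snippet below implements standard GP regression with a squared-exponential kernel in plain NumPy, showing how the posterior supplies a predictive variance alongside the mean; all parameter values are illustrative.

```python
# Minimal GP regression sketch: posterior mean and variance at test
# points, illustrating "principled uncertainty propagation".
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy 1-D example: sparse, expensive-to-collect observations (analogous
# to motion-capture data) still yield honest predictive error bars.
x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.sin(x)
x_new = np.linspace(0.0, 5.0, 50)
mu, var = gp_posterior(x, y, x_new)
print(mu[:3], np.sqrt(var[:3]))  # predictive mean and standard deviation
```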

  • Funder: UKRI Project Code: BB/S013199/1
    Funder Contribution: 417,909 GBP

    The human brain contains approximately 80 billion nerve cells (neurons). When you learn something new, which of those neurons will be involved in storing that information? Are all neurons equal, or are some more likely than others to store memories? Will a memory always involve the same neurons? Or will the memory trace (known as an 'engram') change over time, allowing you to file memories appropriately, remembering the important information and updating the engram with new knowledge? These questions are all very challenging to answer. But if we do not answer them, we can never understand how the brain works, or how to treat the memory disorders associated with illnesses such as dementia, depression and schizophrenia.

    Over the past few years, technology has advanced to the point at which we can, at least in mice, "capture" the groups of neurons involved in learning a specific memory. These neurons are known as 'engram neurons', and were first discovered in a part of the brain called the hippocampus, which acts as a central indexing system for memory files in all mammals. We can activate engram neurons (to trigger recall of the memory) or silence them (to delete a memory). These methods have transformed our understanding of memory mechanisms over the past two years, but they are very new and evolving rapidly, as is our ability to measure brain activity from hundreds of neurons simultaneously.

    In this 3-year series of experiments, we will combine capture of engram neurons with recording of activity from large populations of neurons in the mouse hippocampus and connected brain regions, to decipher the algorithms engram neurons use to learn, process and remember new information. This project would not be possible without the international team of neuroscientists involved, which spans the UK and Japan and unites complementary expertise in genetics, psychology, computational analyses and electrical engineering. We will also measure, for the first time, the activity of engram neurons during sleep. We have known for 2,000 years that sleep supports healthy memory, but we still do not know precisely how; now that we have discovered engram neurons, the answers may be within reach. Given that your entire personality and worldview are products of the many memory engrams stored by your brain, neuroscience of this type remains essential if we are to understand the fundamental biology of life and disease.
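
    As a rough illustration of the kind of population analysis involved (a toy sketch, not the project's pipeline), the snippet below flags a candidate "engram-like" ensemble as the neurons whose firing rate during learning rises significantly above baseline; the simulated data, thresholds and variable names are all illustrative assumptions.

```python
# Toy ensemble detection: simulate spike rasters, then flag neurons
# recruited during learning by z-scoring their rate change against the
# binomial variability of their baseline rate.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 200, 1000
baseline = rng.random((n_neurons, n_bins)) < 0.02       # spontaneous spiking
learning = baseline.copy()
ensemble = rng.choice(n_neurons, 15, replace=False)     # ground-truth engram
learning[ensemble] |= rng.random((15, n_bins)) < 0.08   # extra learning spikes

rate_base = baseline.mean(axis=1)                       # per-neuron rates
rate_learn = learning.mean(axis=1)
se = np.sqrt(rate_base * (1 - rate_base) / n_bins)      # baseline std. error
z = (rate_learn - rate_base) / np.maximum(se, 1e-9)
candidates = np.where(z > 3.0)[0]    # neurons recruited during learning
print(sorted(candidates))            # should largely recover `ensemble`
```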

  • Funder: EC Project Code: 740222
    Overall Budget: 183,455 EUR
    Funder Contribution: 183,455 EUR

    Understanding plant responses to rising atmospheric CO2 is of major interest given the need to select and develop new crop cultivars that will perform better in a changing global climate. By regulating the exchange of water and CO2 between the interior of the leaf and the atmosphere, stomata (the pores at the leaf surface) play a major role in CO2-mediated processes. Stomatal aperture is constantly adjusted, in response to internal cues and external environmental signals, through turgor changes in the two guard cells that surround the pore. Plants respond to elevated CO2 levels by reducing stomatal aperture, and recent advances have shed light on the intracellular machinery responsible for this response. However, little is understood about the variability in stomatal responses to CO2, both among and within species. This project will decipher the main factors responsible for variation in stomatal responses to CO2. It will specifically address the roles of ABA signalling, photosynthesis and stomatal morphology in the response of stomatal conductance (gs) to CO2, and how this impacts plant performance. These will be investigated from the cellular level up to the whole plant, using the model plant Arabidopsis thaliana and major European crops. Genetic diversity will be used in different forms (mutants, association studies, inter-specific comparison) to identify the causal factors of variation. The knowledge gained about gs responses will be integrated into predictive models of water-use efficiency to scale up to whole-plant performance.

    Views: 30 · Downloads: 27 (OpenAIRE usage counts)
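
    The scale-up from gs to water-use efficiency can be illustrated with the widely used Ball-Berry model of stomatal conductance, gs = g0 + g1 * A * h / Cs, together with intrinsic water-use efficiency defined as A/gs. The sketch below is a minimal illustration with illustrative parameter values, not the project's predictive model.

```python
# Ball-Berry stomatal conductance plus intrinsic water-use efficiency
# (WUE = A / gs). Parameter values are illustrative, not the project's.
def ball_berry_gs(A, rel_humidity, Cs, g0=0.01, g1=9.0):
    """Stomatal conductance (mol m-2 s-1) from net assimilation A
    (umol m-2 s-1), leaf-surface relative humidity (0-1) and
    leaf-surface CO2 concentration Cs (umol mol-1)."""
    return g0 + g1 * A * rel_humidity / Cs

A = 20.0                              # hold assimilation fixed for clarity
for Cs in (400.0, 600.0, 800.0):      # rising atmospheric CO2
    gs = ball_berry_gs(A, 0.65, Cs)
    # gs falls and intrinsic WUE rises as CO2 increases, mirroring the
    # reduced stomatal aperture at elevated CO2 described in the abstract
    print(f"Cs={Cs:.0f} ppm  gs={gs:.3f} mol m-2 s-1  WUE={A/gs:.1f}")
```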
  • Funder: UKRI Project Code: ST/V003089/1
    Funder Contribution: 41,687 GBP

    The LHCb Upgrade II proposal addresses the goal, set out in the European Strategy for Particle Physics, of fully exploiting the physics potential of the High Luminosity LHC. The experiment is designed to exploit the high production rate of heavy-flavoured (beauty and charm) particles in LHC collisions, enabling precision searches for physics beyond the Standard Model through loop processes. LHCb Upgrade II builds on the existing infrastructure of the LHCb experiment, with upgrades to all subdetector systems and new elements that will provide the capability to resolve particles from different collisions in the same bunch crossing through the use of precision timing information. Data will be recorded at much higher rates than currently possible, leading to a broad and exciting physics programme, unmatched by any other experiment.

    This proposal is for a 3-year R&D project within the UK to develop specific, strategic technologies that will enable the physics goals of LHCb Upgrade II to be achieved. This will allow us to develop links with UK industry and to continue leading the international collaboration through preparation of the LHCb Upgrade II framework TDR as well as subdetector-specific TDRs. The UK groups have particular interests in addressing challenges in vertexing and tracking, charged-hadron identification and data processing by developing strategic technologies, and the project is structured into work packages accordingly. We plan for this initial project to be followed by further stages for prototyping and construction of the subdetectors, and eventually operation of the experiment and analysis of the data. Requests for funding of these later stages will of course be made separately.
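
    To illustrate why precision timing can resolve particles from different collisions in the same bunch crossing (a toy sketch, not LHCb software): two collisions occurring at nearly the same position but tens of picoseconds apart in time can be disentangled by associating each track with the nearest vertex in time, provided track timestamps have few-picosecond resolution. The numbers below are illustrative assumptions.

```python
# Toy pileup separation by time: two collisions ~20 ps apart, tracks
# timestamped with 5 ps resolution, assigned to the nearest vertex in time.
import numpy as np

rng = np.random.default_rng(1)
t_vertices = np.array([0.0, 20e-12])             # vertex times in seconds
track_vertex = rng.integers(0, 2, size=40)       # true origin of each track
t_tracks = t_vertices[track_vertex] + rng.normal(0.0, 5e-12, 40)

# Assign each track to the nearest vertex in time
assigned = np.argmin(np.abs(t_tracks[:, None] - t_vertices[None, :]), axis=1)
accuracy = (assigned == track_vertex).mean()
print(f"correctly associated tracks: {accuracy:.0%}")  # typically > 95%
```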
