The following results are related to Canada. More results are available on OpenAIRE - Explore.
16 Projects, page 1 of 2

  • Funder: UKRI Project Code: EP/K008781/1
    Funder Contribution: 347,135 GBP
    Partners: NRCan, SolarMetrics, STFC - Laboratories, University of Leicester

    Efficient air traffic management depends on reliable communications between aircraft and the air traffic control centres. However, there is a lack of ground infrastructure in the Arctic to support communications via the standard VHF links (and over the Arctic Ocean such links are impossible), and communication via geostationary satellites is not possible above about 82 degrees latitude because of the curvature of the Earth. Thus, for high-latitude flights it is necessary to use high frequency (HF) radio for communication. HF radio relies on reflections from the ionosphere to achieve long distance communication round the curve of the Earth. Unfortunately, the high latitude ionosphere is affected by space weather disturbances that can disrupt communications. These disturbances originate with events on the Sun, such as solar flares and coronal mass ejections, that send out particles that are guided by the Earth's magnetic field into the regions around the poles. During such events HF radio communication can be severely disrupted and aircraft are forced to use longer low-latitude routes, with consequent increases in flight time, fuel consumption and cost. Often, the necessity to land and refuel on these longer routes further increases the fuel consumption. The work described in this proposal cannot prevent the space weather disturbances and their effects on radio communication, but by developing a detailed understanding of the phenomena and using this to provide space weather information services, the disruption to flight operations can be minimised. The occurrence of ionospheric disturbances and disruption of radio communication follows the 11-year cycle in solar activity. During the last peak in solar activity a number of events caused disruption of trans-Atlantic air routes. Disruptions to radio communications in recent years have been less frequent as we were at the low phase of the solar cycle. However, in the next few years there will be an upswing in solar activity that will produce a consequent increase in radio communication problems. The increased use of trans-polar routes and the requirement to handle greater traffic density on trans-Atlantic routes both mean that maintaining reliable high latitude communications will be even more important in the future.
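    The ~82-degree limit quoted above for geostationary coverage follows from simple geometry. As an illustrative check (not part of the proposal), the sketch below computes the elevation angle of a geostationary satellite for a ground station at a given latitude, assuming a spherical Earth and a station at the sub-satellite longitude; the satellite reaches the horizon near 81 degrees of latitude and sits below it by 82 degrees.

        import math

        R_E = 6371.0      # mean Earth radius, km (assumed value)
        R_GEO = 42164.0   # geostationary orbital radius, km

        def elevation_deg(lat_deg, dlon_deg=0.0):
            """Elevation of a geostationary satellite seen from a ground station at
            latitude lat_deg, with longitude offset dlon_deg from the sub-satellite
            point (spherical-Earth approximation)."""
            lat, dlon = math.radians(lat_deg), math.radians(dlon_deg)
            cos_gamma = math.cos(lat) * math.cos(dlon)   # central angle to sub-satellite point
            gamma = math.acos(cos_gamma)
            return math.degrees(math.atan2(cos_gamma - R_E / R_GEO, math.sin(gamma)))

        # Latitude at which the satellite (same longitude) sits exactly on the horizon:
        print(math.degrees(math.acos(R_E / R_GEO)))  # ~81.3 degrees
        print(elevation_deg(75.0))                   # a few degrees above the horizon
        print(elevation_deg(82.0))                   # negative: below the horizon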

  • Funder: UKRI Project Code: EP/K036033/1
    Funder Contribution: 236,177 GBP
    Partners: Scottish and Southern Energy SSE plc, PTRC, UKCCS Research Centre, University of Edinburgh

    Carbon capture and storage (CCS) has emerged as a promising means of lowering CO2 emissions from fossil fuel combustion. However, concerns about the possibility of harmful CO2 leakage are slowing the widespread adoption of the technology. Research to date has failed to identify a cheap and effective means of unambiguously identifying leakage of injected CO2, or a viable means of identifying its ownership. This means that in the event of a leak from a storage site that multiple operators have injected into, it is impossible to determine whose CO2 is leaking. The ongoing debate regarding leakage and how to detect it has been frequently documented in the popular press and scientific publications. This has contributed to public confusion and fear, particularly close to proposed storage sites, causing the cancellation of several large storage projects such as that at Barendrecht in the Netherlands. One means to reduce public fears over CCS is to demonstrate a simple method which is able to reliably detect the leakage of CO2 from a storage site and determine the ownership of that CO2. Measurements of noble gases (helium, neon, argon, krypton and xenon) and the ratios of light and heavy stable isotopes of carbon and oxygen in natural CO2 fields have shown how CO2 is naturally stored over millions of years. Noble gases have also proved to be effective at identifying the natural leakage of CO2 above a CO2 reservoir in Arizona and an oil field in Wyoming, and in ruling out the alleged leakage of CO2 from the Weyburn storage site in Canada. Recent research has shown that amounts of krypton are enhanced relative to those of argon and helium in CO2 captured from a nitrate fertiliser plant in Brazil. This enrichment is due to the greater solubility of the heavier noble gases, so they are more readily dissolved into the solvent used for capture. This fingerprint has been shown to act as an effective means of tracking CO2 injected into Brazilian and USA oil fields to increase oil production. Similar enrichments in heavy noble gases, along with high helium concentrations, are well documented in coals, coal-bed methane and in organic-rich oil and gas source rocks. As noble gases are unreactive, these enrichments will not be affected by burning the gas or coal in a power station and hence will be passed on to the flue gases. Samples of CO2 obtained from an oxyfuel pilot CO2 capture plant at Lacq in France, which contain helium and krypton enrichments well above atmospheric values, confirm this. Despite identification of these distinctive fingerprints, no study has yet investigated if there is a correlation between them and different CO2 capture technologies or the fossil fuel being burnt. We propose to measure the carbon and oxygen stable isotope and noble gas fingerprint in captured CO2 from post-, pre- and oxyfuel pilot capture plants. We will find out if unique fingerprints arise from the capture technology used or fuel being burnt. We will determine if these fingerprints are distinctive enough to track the CO2 once it is injected underground without the need to add expensive artificial tracers. We will investigate if they are sufficient to distinguish ownership of multiple CO2 streams injected into the same storage site and if they can provide an early warning of unplanned CO2 movement out of the storage site. To do this we will determine the fingerprint of CO2 captured from the Boundary Dam Power Plant prior to its injection into the Aquistore saline aquifer storage site in Saskatchewan, Canada.
By comparing this to the fingerprint of the CO2 produced from the Aquistore monitoring well, some 100m from the injection well, we will be able to see if the fingerprint is retained after the CO2 has moved through the saline aquifer. This will show if this technique can be used to track the movement of CO2 in future engineered storage sites, particularly offshore saline aquifers which will be used for future UK large volume CO2 storage.

  • Funder: UKRI Project Code: EP/L001942/1
    Funder Contribution: 254,532 GBP
    Partners: UoC, Newcastle University

    Corrosion of metals affects multiple industries and poses major risks to the environment and human safety, and is estimated to cause economic losses in excess of £2.5 trillion worldwide (around 6% of global GDP). Microbiologically-influenced corrosion (MIC) is believed to play a major role in this, but precise estimates are prevented by our limited understanding of MIC-related processes. In the oil and gas sector, biocorrosion is usually linked to the problem of "souring" caused by sulfate-reducing bacteria (SRB) that produce corrosive hydrogen sulfide in subsurface reservoirs and topsides facilities. To combat souring, reservoir engineers have begun turning to nitrate injection as a green biotechnology whereby sulfide removal can be catalysed by diverse sulfide-oxidising nitrate-reducing bacteria (soNRB). However, this promising technology is threatened by reports that soNRB could enhance localized corrosion through incomplete oxidation of sulfide to corrosive sulfur intermediates. It is likely that soNRB are corrosive under certain circumstances; end products of soNRB metabolism vary depending on prevailing levels of sulfide (i.e., from the SRB-catalysed reservoir souring) and nitrate (i.e., the engineered "nitrate dose" introduced to combat souring). Furthermore, soNRB corrosion will depend on the specific physiological features of the particular strains present, which vary from field to field, but usually include members of the Epsilonproteobacteria - the most frequently detected bacterial phylum in 16S rRNA genomic surveys of medium temperature oil fields. A new era of biological knowledge is dawning with the advent of inexpensive, high-throughput nucleic acid sequencing technologies that can now be applied to microbial genomics. New high-throughput sequencing platforms are allowing unprecedented levels of interrogation of microbial communities at the DNA (genomic) and RNA (transcriptomic) levels. Engineering biology aims to harness the power of this biological "-omics" revolution by bringing these powerful tools to bear on industrial problems like biocorrosion. This project will combine genomics and transcriptomics with process measurements of soNRB metabolism and real-time corrosion monitoring via linear polarization resistance. By measuring all of these variables in experimental oil field microcosms, and scaling up to pan-industry oil field screening, a predictive understanding of corrosion linked to nitrogen and sulfur biotransformations will emerge, putting new diagnostic genomics assays in the hands of petroleum engineers. The oil industry needs green technologies like nitrate injection. This research will develop new approaches that will safeguard this promising technology by allowing nitrate-associated biocorrosion potential to be assessed in advance, so that nitrate injection's continued successful application can be based on informed risk assessments.
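    The abstract refers to real-time corrosion monitoring via linear polarization resistance (LPR). As background only (not the project's method), the sketch below shows the standard Stern-Geary / ASTM G102 conversion from a measured polarization resistance to a corrosion rate; the Tafel slopes and material constants are illustrative assumptions for iron, not values from this project.

        def corrosion_rate_mm_per_year(Rp_ohm_cm2, beta_a=0.12, beta_c=0.12,
                                       equiv_weight=27.92, density=7.87):
            """Rp: polarization resistance (ohm*cm^2); beta_a, beta_c: Tafel slopes
            (V/decade); equivalent weight (g/equivalent) and density (g/cm^3) are
            textbook values for iron. All numbers here are illustrative."""
            B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))      # Stern-Geary coefficient, V
            i_corr_uA_cm2 = B / Rp_ohm_cm2 * 1e6                     # corrosion current density, uA/cm^2
            return 3.27e-3 * i_corr_uA_cm2 * equiv_weight / density  # ASTM G102 conversion

        print(corrosion_rate_mm_per_year(5000.0))   # ~0.06 mm/yr for these assumed values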

  • Funder: UKRI Project Code: EP/K020404/1
    Funder Contribution: 585,535 GBP
    Partners: Thornhill Research Inc, GlaxoSmithKline, University of California, Berkeley, Cardiff University, University of Toronto, GE Healthcare

    Diseases of the brain, including neurological conditions such as epilepsy, multiple sclerosis and dementia, and common psychiatric conditions such as depression and schizophrenia, have considerable personal, social and economic costs for the sufferers and their carers. Improving the tools at our disposal for quantifying brain function would help with diagnosis, choosing the right treatment for the patient and developing new, more effective, treatments. This proposal aims to develop a reliable non-invasive brain imaging method using magnetic resonance imaging (MRI) that maps, across the whole human brain with a spatial resolution of a few millimetres, the amount of oxygen that the brain is consuming. The rate of oxygen consumption, known as CMRO2, reflects neural activity and can change through disease processes. It provides a marker of disease- and treatment-related alterations in brain activity. Our proposed method would also map the functional characteristics of brain blood vessels, whose health is crucial for the supply of oxygen and nutrients to the brain. Until recently, it has only been possible to quantitatively map the human brain's metabolic energy use through positron emission tomography (PET), which relies on radioactive tracers. The application of such measurements is limited because, in order to minimise radiation doses, they cannot be repeated many times in the same patients or healthy volunteers. This hampers the repeated study of disease or treatment progression and the study of normal brain development and aging. Our proposed method would avoid the use of ionizing radiation, would be cheaper than PET and more widely available, and would expand the applications of quantified CMRO2 mapping to more centres, leading to improved treatment targeting and potential healthcare cost savings. We have performed some initial tests that show our proposed method to be feasible. It relies on simultaneously mapping the flow of blood to each part of the brain and the oxygenation of the blood leaving each part of the brain. Necessary for the measurement is the modulation of brain blood flow and oxygen levels, achieved by asking volunteers to breathe air enriched with carbon dioxide and oxygen. These procedures involve the volunteer wearing a face-mask but are safe and well tolerated. Our proposed method should yield additional information describing cerebrovascular properties compared to other recently proposed methods. This means that it would require fewer assumptions that may not be valid in the diseased brain, giving our approach a wider scope of application and offering potentially richer clinical information. This proposal optimises our method to ensure it is efficient and reliable for widespread research and, eventually, clinical use. We propose a close collaboration between physicists developing the neuroimaging methodology and clinical academic researchers who will help us to demonstrate its clinical feasibility in two common neurological diseases, epilepsy and multiple sclerosis (MS). About 70% of the project will be methodological development to optimise our image acquisition and data analysis strategy to yield accurate and repeatable measurements within about 10 minutes of scanning. The remaining 30% of the project will validate the method in groups of epilepsy and MS patients who volunteer to help us with our research. Validation will be performed by comparison with PET, the current 'gold standard.'
The project will develop and benefit from partnerships with academic and industrial researchers in the UK and internationally. In particular, the work has good potential for application in the drug development industry, a strong industrial sector in the UK, for the development of new and effective compounds to treat psychiatric and neurological disorders. This project would help maintain the UK at the forefront internationally of neuroimaging research, a position it has long held and from which it has benefitted.
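    The measurement principle described above combines maps of cerebral blood flow with maps of venous oxygenation. As a hedged illustration of the underlying Fick-principle arithmetic (textbook values, not the project's calibrated model):

        def cmro2(cbf_ml_per_100g_min, arterial_o2_content_ml_per_ml, oef):
            """Fick principle: oxygen consumption = blood flow x oxygen extracted
            from that blood. Returns CMRO2 in ml O2 per 100 g of tissue per minute."""
            return cbf_ml_per_100g_min * arterial_o2_content_ml_per_ml * oef

        # Typical grey-matter values: CBF ~50 ml/100g/min, arterial O2 content ~0.19 ml/ml,
        # oxygen extraction fraction ~0.4  ->  CMRO2 of roughly 3.8 ml O2/100g/min.
        print(cmro2(50.0, 0.19, 0.40))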

  • Funder: UKRI Project Code: EP/K037161/1
    Funder Contribution: 489,871 GBP
    Partners: Airbus, LR IMEA, University of Southampton, Bombardier Inc

    Noise and vibration are important performance aspects in many mechanical systems. High noise and vibration levels can be detrimental to structures (e.g. causing damage) and to the human operators (e.g. causing fatigue or injury). Thus, it is important to be able to understand how structures vibrate and emit noise, i.e., their vibroacoustic behaviour. Traditionally, engineers would try to describe the vibroacoustics using analytical methods. However, these are only possible for very simple structures. Structures that engineers confront in the aerospace, railway or maritime sectors are often made of composite panels that are connected together using complicated structural joints. The analysis of the vibroacoustics of such complex built-up structures cannot be performed analytically. Over the years, researchers have developed numerical techniques to solve this problem. Element-based methods (such as the finite element method) are well-developed and well-established methods with many commercial/in-house codes that can be used. However, aerospace, railway and maritime structures are relatively large. For example, a typical railway car can be modelled using the finite element method up to 500 Hz. Above this frequency, the size of the finite element model becomes too large and impractical, and the associated computational cost becomes prohibitive. However, the audio frequency range is 20 Hz-20 kHz. At high frequency (above 10 kHz), the railway car can be modelled using energy-based methods such as the statistical energy analysis method. Energy-based statistical methods are valuable, but less well-established than element-based methods. The railway car example points to a frequency gap, indeed a mid-frequency gap, where neither element-based nor energy-based methods can be used. I am proposing to use wave methods to bridge the mid-frequency gap and to further strengthen energy methods. Waves provide a unifying, intuitive approach to vibroacoustics. The computational cost of a wave model is substantially lower (especially when compared to that of a full finite element model), and the wave properties of structures can be obtained by post-processing the finite element model of a small segment of an arbitrarily large structure. Thus, the goal of this programme is to develop a wave-based toolbox for modelling the vibroacoustics of complex built-up structures. Industrial examples from the aerospace, railway and maritime sectors will be used to demonstrate the efficiency and effectiveness of the developed methods.
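    A rough estimate shows why the element-based approach becomes impractical as frequency rises: a finite element mesh typically needs around six elements per structural wavelength, and the bending wavelength in a panel shrinks with frequency, so the element count grows roughly in proportion to frequency. The sketch below uses assumed values for a 3 mm steel plate; it is a back-of-envelope illustration, not a figure from the proposal.

        import math

        # Assumed plate properties (3 mm steel), purely illustrative.
        E, nu, rho, h = 210e9, 0.3, 7850.0, 0.003
        D = E * h**3 / (12 * (1 - nu**2))            # bending stiffness, N*m

        def bending_wavelength(f_hz):
            omega = 2 * math.pi * f_hz
            k_b = (omega**2 * rho * h / D) ** 0.25   # thin-plate bending wavenumber, rad/m
            return 2 * math.pi / k_b                 # wavelength, m

        def elements_needed(f_hz, panel_area_m2=50.0, elems_per_wavelength=6):
            size = bending_wavelength(f_hz) / elems_per_wavelength
            return panel_area_m2 / size**2

        for f in (500, 2000, 10000):
            print(f, round(bending_wavelength(f), 3), int(elements_needed(f)))
        # The element count scales linearly with frequency, so a model that is
        # manageable at a few hundred Hz becomes enormous in the kHz range.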

  • Funder: UKRI Project Code: EP/K034383/1
    Funder Contribution: 2,246,110 GBP
    Partners: University of Rome, AIM, Abdus Salam ICTP, University of Waterloo (Canada), University of Warwick

    L-functions and modular forms are fundamental mathematical objects that encode much of our knowledge of contemporary number theory. They form part of a web of interconnected objects, the understanding of which in the most basic cases lies at the foundations of much of modern mathematics. A spectacular example is Wiles' proof of Fermat's Last Theorem, which was an application of a fundamental "modularity" link between L-functions, modular forms and elliptic curves. This project will greatly extend and generalize such connections, both theoretically and computationally. The research vision inspiring our programme can be summarised as: "Breaking the boundaries of classical L-functions and modular forms, and exploring their applications to 21st-century mathematics, physics, and computer science". Our guiding goal is to push forward both theoretical and algorithmic developments, in order to develop L-functions and modular forms far beyond current capabilities. This programme will systematically develop an extensive catalogue of number theoretic objects, and will make this information available through an integrated online resource that will become an indispensable tool for the world's research community. L-functions are to pure mathematics what fundamental particles are to physics: their interactions reveal fundamental truths. To continue the analogy, computers are to number theorists what colliders are to particle physicists. Aside from their established role as empirical "testers" for conjectures and theories, experiments can often throw up quite unexpected phenomena which go on to reshape modern theory. Our programme will establish a major database and encyclopedia of knowledge about L-functions and related objects, which will play a role analogous to that of the LHC for the scientists at CERN. Both are at the threshold of tantalising glimpses into completely uncharted territories: higher degree L-functions for us and the Higgs boson for them. Theoretical and computational work on higher degree L-functions has only started to make substantial progress in the past few years. There do not currently exist efficient methods to work with these, and rigorous computations with them are not yet possible. Neither is there yet an explicit description of all the ways in which degree 3 L-functions can arise. We will address these facets in our research programme: both algorithmic development and theoretical classification. As well as having theoretical applications to modularity relationships as in Wiles' proof, detailed knowledge of L-functions has more far-reaching implications. Collections of L-functions have statistical properties which first arose in theoretical physics. This surprising connection, which has witnessed substantial developments led by researchers in Bristol, has fundamental predictive power in number theory; the synergy will be vastly extended in this programme. In another strand, number theory plays an increasingly vital role in computing and communications, as evidenced by its striking applications to both cryptography and coding theory. The Riemann Hypothesis (one of the Clay Mathematics Million Dollar Millennium Problems) concerns the distribution of prime numbers, and the correctness of the best algorithms for testing large prime numbers depends on the truth of a generalised version of this 150-year-old unsolved problem. These are algorithms which are used by public-key cryptosystems that everyone who uses the Internet relies on daily, and that underpin our digital economy.
Our programme involves the creation of a huge amount of data about a wide range of modular forms and L-functions, which will far surpass in range and depth anything computed before in this area. This in turn will be used to analyse some of the most famous outstanding problems in mathematics, including the Riemann Hypothesis and another Clay problem, the Birch and Swinnerton-Dyer conjecture.
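    The primality-testing remark above refers to deterministic variants of the Miller test: assuming the Generalised Riemann Hypothesis, it suffices to check all bases up to 2*(ln n)^2 (Bach's bound). The sketch below is a minimal illustration of that GRH-conditional test; a production implementation would use a vetted library routine instead.

        import math

        def passes_miller_round(n, a):
            """Does odd n > 2 pass the strong pseudoprime test to base a?"""
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            x = pow(a, d, n)
            if x in (1, n - 1):
                return True
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    return True
            return False

        def is_prime_assuming_grh(n):
            """Deterministic Miller test, correct for all n *if* the Generalised
            Riemann Hypothesis holds (smallest witness < 2*(ln n)^2)."""
            if n < 2:
                return False
            for p in (2, 3, 5, 7):
                if n % p == 0:
                    return n == p
            limit = min(n - 2, int(2 * math.log(n) ** 2))
            return all(passes_miller_round(n, a) for a in range(2, limit + 1))

        print(is_prime_assuming_grh(2**61 - 1))   # True: a Mersenne prime
        print(is_prime_assuming_grh(3215031751))  # False: strong pseudoprime to bases 2, 3, 5, 7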

  • Funder: UKRI Project Code: EP/H00324X/2
    Funder Contribution: 275,478 GBP
    Partners: University of Cambridge, University of Warwick, UBC, National High Magnetic Field Laboratory, ANL, EWU

    Resistance is futile: lightbulbs and heaters aside, the majority of electronic components are at their most efficient when their electrical resistance is minimized. In the present climate, with energy sustainability regularly topping the international agenda, reducing the power lost in conducting devices or transmission lines is of worldwide importance. Research into the nature of novel conducting materials is hence vital to secure the global energy future. Superconductivity, the phenomenon of zero electrical resistance which occurs below a critical temperature in certain materials, remains inadequately explained. At present, these critical temperatures are typically very low, less than 140 Kelvin (-133 Celsius), but a more complete understanding of what causes the superconducting state to form could result in the design of materials that display superconductivity at the enhanced temperatures required for mass technological exploitation. Unfortunately, it is the very materials which are most likely to lead us to this end, the so-called unconventional superconductors, that are the least understood. In such materials, the superconducting state appears to be in competition with at least two other phases of matter: magnetism and normal, metallic conductivity. A delicate balance governs which is the dominant phase at low temperatures: the ground-state. By making slight adjustments to the composition of the materials or by applying moderate pressures, certain interactions between the electrons in the compound can be strengthened at the expense of others, causing the balance to tip in favour of a particular ground-state. The technicalities of how to do this are relatively well-known. What remains to be explained is why it happens, what it is that occurs at the vital tipping point where the superconductivity wins out over the magnetic or the metallic phases - in short, exactly what stabilizes the unconventional superconducting state? It is this question that the proposed project seeks to answer. I will use magnetic fields to explore the ground-states exhibited by three families of unconventional superconductor: the famous cuprate superconductors (whose discovery in the 1980s revolutionized the field of superconductivity and which remain the record-holders for the highest critical temperature); some recently discovered superconductors based on the most magnetic of atoms - iron (the discovery of these new materials in the spring of 2008 came as somewhat of a surprise, magnetism often being thought of as competing with superconductivity); and a family of materials based on superconducting layers of organic molecules. I propose to measure the strength of the interactions that are responsible for the magnetic and electronic properties of these materials as the systems are pushed, using applied pressure, through the tipping point at which the superconductivity becomes dominant. In particular, the electronic interactions in layered materials like those considered here can only be reliably and completely determined via a technique known as angle-dependent magnetoresistance. This technique remains to be applied to most unconventional superconductors, particularly at elevated pressures, most likely because it is experimentally challenging and familiar only to a handful of researchers. However, the rewards of performing such experiments are a far greater insight into what changes in interactions occur at the very edge of the superconducting state.
    Chasing the mechanism responsible for stabilizing unconventional superconductivity is an ambitious aim, and many traditional experimental techniques have proved inadequate. It is becoming clear, in the light of recent advances in the field, that the route to success lies in subjecting high-quality samples to the most extreme probes available: a combination of high magnetic fields and high applied pressures.

  • Funder: UKRI Project Code: EP/K030558/1
    Funder Contribution: 724,429 GBP
    Partners: Queen's University Canada, University of Otago, Durham University

    Our research involves the theoretical and experimental investigation of quantum many-body dynamics in systems of ultra-cold atoms, with a view to developing next-generation rotational sensors, and to developing tools for, and improving our general understanding of, interacting many-body systems far from equilibrium. The central idea is based on using ultra-cold atoms with bosonic spin statistics, in contrast to, e.g., electrons orbiting an atomic nucleus, where two electrons with the same spin cannot occupy exactly the same energy level or orbital (fermionic spin statistics). This means that at sufficiently low temperatures a dilute atomic gas composed of such bosonic atoms undergoes a particular kind of phase transition. A phase transition is a sudden, qualitative change of state, like an ordinary gas condensing to a liquid state as the temperature is lowered. The state of matter reached in the case of very dilute, low-temperature bosonic atoms is called a Bose-Einstein condensate. This can be seen as the atomic/matter equivalent of a laser: a coherent, intense source of atoms, with consequent advantages to measurement science or metrology (which in the case of light are limited by the minimum wavelength for the light to be visible and controlled by conventional optics). Atom-atom interactions are, unfortunately, typically problematical, and tend to counteract the advantages of a coherent atomic source. We will build upon a proposal (suggested by one of the investigators) where the issues associated with atom-atom interactions appear to be largely avoided due to an astutely chosen experimental geometry. In the process of investigating this proposed system as well as a number of closely related issues, we will deepen our understanding of nonequilibrium dynamics (due, for example, to the crucial importance of avoiding such things as flow instabilities in any functioning rotational sensor), and develop broadly applicable theoretical tools accounting for the influence and production of complicated many-body effects. As such, our research falls within the EPSRC Physics Grand Challenges "Emergence and Physics Far From Equilibrium" (motivated by the fact that "dramatic collective behaviour can emerge unexpectedly in large complicated systems" and "This fundamental work will be driven by the ever-present possibility that emergent states may provide the foundations for the technologies of the future") and "Quantum Physics for New Quantum Technologies" (motivated by "Next generation quantum technologies will rely on our understanding and exploitation of coherence and entanglement" and "Success requires a deeper understanding of quantum physics and a broad ranging development of the enabling tools and technologies"). Ultracold atoms are an ideal configuration in which to investigate dynamics far from equilibrium, due to a very high degree of flexibility in their experimental configurations (varying the experimental geometry, strength of interaction, and even whether the interactions are attractive or repulsive, by appropriate combinations of magnetic, laser and microwave fields), and atomic, molecular and optical (AMO) physics systems have a superlative record in terms of precision measurement, most notably in the form of atomic clocks, which, for example, underpin the functioning of the global positioning system (GPS).
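    For readers unfamiliar with the transition mentioned above, the ideal-gas estimate of the Bose-Einstein condensation temperature, T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3), indicates the temperatures involved. The sketch below evaluates it for illustrative numbers (rubidium-87 at a typical experimental density); this is the textbook formula, not a calculation from the proposal.

        import math

        hbar = 1.054571817e-34   # J*s
        k_B = 1.380649e-23       # J/K
        zeta_3_2 = 2.612         # Riemann zeta(3/2)

        def bec_critical_temperature(mass_kg, density_per_m3):
            """Ideal-gas BEC transition temperature for bosons of the given mass
            at the given number density (interactions neglected)."""
            return (2 * math.pi * hbar**2 / (mass_kg * k_B)) * (density_per_m3 / zeta_3_2) ** (2 / 3)

        m_rb87 = 86.909 * 1.66054e-27                   # kg, rubidium-87 (assumed species)
        print(bec_critical_temperature(m_rb87, 1e19))   # ~9e-8 K, i.e. tens of nanokelvin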

  • Funder: UKRI Project Code: EP/L002787/1
    Funder Contribution: 98,503 GBP
    Partners: Weizmann Institute of Science, University of Ottawa, University of Warwick

    The project is devoted to basic research in pure mathematics. It is based on the well-studied interplay between the theory of electrical networks (seen as abstract mathematical tools) and the theory of random walks on graphs. Four specific topics within this framework are addressed by the project:
    -- The Poisson boundary for random walks on graphs and groups. We follow an active tradition in group theory, triggered by Kesten, where results about groups are obtained indirectly by considering a random walk on the group and relating its behaviour, or the structure of a boundary associated to it, to the algebraic properties of the group. The project benefits from a recent strong result of the applicant providing a criterion for the Poisson boundary, as well as a novel idea of associating a random finite graph rather than a random walk with a group, exploiting the recent theory of graphons by Lovász et al.
    -- Discrete conformal uniformization in the sense of Benjamini & Schramm. We seek to strengthen a new result of the applicant, related to the above, that answered a question of Benjamini & Schramm. Such a strengthening will provide new results on the Poisson boundary.
    -- The relationship between the cover time and the cover cost in extremal and random finite graphs. The cover time of a graph is an important concept in mathematics and computer science, and is even studied by physicists, but it is very hard to compute or even approximate. Using the concept of cover cost that the applicant introduced, we seek to simplify the approximation of the cover time for many classes of graphs by breaking it down into two steps: showing that it is close to the cover cost, and computing the (provably more tractable) cover cost.
    These topics lie in different areas of mathematics, all of which have seen a lot of research activity in recent years. They are interlinked by the general theme of electrical networks, random walks, and their interplay, and share further finer interconnections. The project aims to contribute by producing new results individually for each sub-topic as well as by establishing or strengthening connections between them. The project's results will be of interest to several research communities, including Graph Theory, Probability, (discrete) Potential Theory and Group Theory.
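    The cover time discussed in the third topic is the expected number of steps a simple random walk needs to visit every vertex of a graph. It is hard to compute exactly, but easy to estimate by simulation; the sketch below is a minimal Monte Carlo illustration on a cycle graph (it does not implement the project's cover-cost approach).

        import random

        def estimate_cover_time(neighbours, start=0, trials=2000):
            """Monte Carlo estimate of the cover time: the expected number of steps a
            simple random walk from `start` needs to visit every vertex. `neighbours`
            maps each vertex to a list of its neighbours."""
            n, total = len(neighbours), 0
            for _ in range(trials):
                seen, v, steps = {start}, start, 0
                while len(seen) < n:
                    v = random.choice(neighbours[v])
                    seen.add(v)
                    steps += 1
                total += steps
            return total / trials

        # Cycle on n vertices: the cover time is known to be n*(n-1)/2.
        n = 20
        cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
        print(estimate_cover_time(cycle))   # should be close to 20*19/2 = 190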

  • Funder: UKRI Project Code: EP/K033085/1 (2013 - 2018)
    Funder Contribution: 1,122,320 GBP
    Partners: Toshiba Corporation, University of Bristol, IMEC, TU/e, University of Toronto, University of Rome I (La Sapienza), Cornell University, Defence Science & Tech Lab DSTL, EHU, XMOS Ltd...

    Quantum information science and technologies offer a completely new and powerful approach to processing and transmitting information by combining two of the great scientific discoveries of the 20th century - quantum mechanics and information theory. By encoding information in quantum systems, quantum information processing promises huge computation power, while quantum communications is already in its first stages of commercialisation, and offers the ultimate in information security. However, for quantum technologies to have as big an impact on science, technology and society as anticipated, a practical scalable integration platform is required where all the key components can be integrated to a single micro-chip technology, very much akin to the development of the first microelectronic integrated circuits. Of the various approaches to realising quantum technologies, single particles of light (photons) are particularly appealing due to their low-noise properties and ease of manipulation at the single qubit level. It is possible to harness the quantum mechanical properties of single photons, taking advantage of strange quantum properties such as superposition and entanglement to provide new ways to encode, process and transmit information. Quantum photonics promises to be a truly disruptive technology in information processing, communications and sensing, and for deepening our understanding of fundamental quantum physics and quantum information science. However, current approaches are limited to simple optical circuits with low photon numbers, inefficient detectors and no clear routes to scalability. For quantum optic information science to go beyond current limitations, and for quantum applications to have a significant real-world impact, there is a clear and urgent need to develop a fully integrated quantum photonic technology platform to realise large and complex quantum circuits capable of generating, manipulating and detecting large photon-number states. This Fellowship will enable the PI and his research team to develop such a technology platform, based on silicon photonics. Drawing from the advanced fabrication technologies developed for the silicon microelectronics industry, state of the art silicon quantum photonic devices will enable compact, large-scale and complex quantum circuits, experiments and applications. This technology platform will overcome the current 8-photon barrier in a scalable way, enable circuits of unprecedented complexity, and will be used to address important fundamental questions, develop new approaches to quantum communications, enhance the performance of quantum sensing, provide a platform for new routes to quantum simulations, and achieve computational complexities that can challenge the limits of conventional computing. This multidisciplinary research programme will bring together engineers, physicists and industrial partners to tackle these scientific and technological challenges.

Advanced search in
Projects
arrow_drop_down
Searching FieldsTerms
Any field
arrow_drop_down
includes
arrow_drop_down
The following results are related to Canada. Are you interested to view more results? Visit OpenAIRE - Explore.
16 Projects, page 1 of 2
  • Funder: UKRI Project Code: EP/K008781/1
    Funder Contribution: 347,135 GBP
    Partners: NRCan, SolarMetrics, STFC - Laboratories, University of Leicester

    Efficient air traffic management depends on reliable communications between aircraft and the air traffic control centres. However there is a lack of ground infrastructure in the Arctic to support communications via the standard VHF links (and over the Arctic Ocean such links are impossible) and communication via geostationary satellites is not possible above about 82 degrees latitude because of the curvature of the Earth. Thus for the high latitude flights it is necessary to use high frequency (HF) radio for communication. HF radio relies on reflections from the ionosphere to achieve long distance communication round the curve of the Earth. Unfortunately the high latitude ionosphere is affected by space weather disturbances that can disrupt communications. These disturbances originate with events on the Sun such as solar flares and coronal mass ejections that send out particles that are guided by the Earth's magnetic field into the regions around the poles. During such events HF radio communication can be severely disrupted and aircraft are forced to use longer low latitude routes with consequent increased flight time, fuel consumption and cost. Often, the necessity to land and refuel for these longer routes further increases the fuel consumption. The work described in this proposal cannot prevent the space weather disturbances and their effects on radio communication, but by developing a detailed understanding of the phenomena and using this to provide space weather information services the disruption to flight operations can be minimised. The occurrence of ionospheric disturbances and disruption of radio communication follows the 11-year cycle in solar activity. During the last peak in solar activity a number of events caused disruption of trans-Atlantic air routes. Disruptions to radio communications in recent years have been less frequent as we were at the low phase of the solar cycle. However, in the next few years there will be an upswing in solar activity that will produce a consequent increase in radio communications problems. The increased use of trans-polar routes and the requirement to handle greater traffic density on trans-Atlantic routes both mean that maintaining reliable high latitude communications will be even more important in the future.

  • Funder: UKRI Project Code: EP/K036033/1
    Funder Contribution: 236,177 GBP
    Partners: Scottish and Southern Energy SSE plc, PTRC, UKCCS Research Centre, University of Edinburgh

    Carbon capture and storage (CCS) has emerged as a promising means of lowering CO2 emissions from fossil fuel combustion. However, concerns about the possibility of harmful CO2 leakage are contributing to slow widespread adoption of the technology. Research to date has failed to identify a cheap and effective means of unambiguously identifying leakage of CO2 injected, or a viable means of identifying ownership of it. This means that in the event of a leak from a storage site that multiple operators have injected into, it is impossible to determine whose CO2 is leaking. The on-going debate regarding leakage and how to detect it has been frequently documented in the popular press and scientific publications. This has contributed to public confusion and fear, particularly close to proposed storage sites, causing the cancellation of several large storage projects such as that at Barendrecht in the Netherlands. One means to reduce public fears over CCS is to demonstrate a simple method which is able to reliably detect the leakage of CO2 from a storage site and determine the ownership of that CO2. Measurements of noble gases (helium, neon, argon, krypton and xenon) and the ratios of light and heavy stable isotopes of carbon and oxygen in natural CO2 fields have shown how CO2 is naturally stored over millions of years. Noble gases have also proved to be effective at identifying the natural leakage of CO2 above a CO2 reservoir in Arizona and an oil field in Wyoming and in ruling out the alleged leakage of CO2 from the Weyburn storage site in Canada. Recent research has shown amounts of krypton are enhanced relative to those of argon and helium in CO2 captured from a nitrate fertiliser plant in Brazil. This enrichment is due to the greater solubility of the heavier noble gases, so they are more readily dissolved into the solvent used for capture. This fingerprint has been shown to act as an effective means of tracking CO2 injected into Brazilian and USA oil fields to increase oil production. Similar enrichments in heavy noble gases, along with high helium concentrations are well documented in coals, coal-bed methane and in organic rich oil and gas source rocks. As noble gases are unreactive, these enrichments will not be affected by burning the gas or coal in a power station and hence will be passed onto the flue gases. Samples of CO2 obtained from an oxyfuel pilot CO2 capture plant at Lacq in France which contain helium and krypton enrichments well above atmospheric values confirm this. Despite identification of these distinctive fingerprints, no study has yet investigated if there is a correlation between them and different CO2 capture technologies or the fossil fuel being burnt. We propose to measure the carbon and oxygen stable isotope and noble gas fingerprint in captured CO2 from post, pre and oxyfuel pilot capture plants. We will find out if unique fingerprints arise from the capture technology used or fuel being burnt. We will determine if these fingerprints are distinctive enough to track the CO2 once it is injected underground without the need of adding expense artificial tracers. We will investigate if they are sufficient to distinguish ownership of multiple CO2 streams injected into the same storage site and if they can provide an early warning of unplanned CO2 movement out of the storage site. To do this we will determine the fingerprint of CO2 captured from the Boundary Dam Power Plant prior to its injection into the Aquistore saline aquifer storage site in Saskatechwan, Canada. 
By comparing this to the fingerprint of the CO2 produced from the Aquistore monitoring well, some 100m from the injection well, we will be able to see if the fingerprint is retained after the CO2 has moved through the saline aquifer. This will show if this technique can be used to track the movement of CO2 in future engineered storage sites, particularly offshore saline aquifers which will be used for future UK large volume CO2 storage.

  • Funder: UKRI Project Code: EP/L001942/1
    Funder Contribution: 254,532 GBP
    Partners: UoC, Newcastle University

    Corrosion of metals affects multiple industries and poses major risks to the environment and human safety, and is estimated to cause economic losses in excess of £2.5 trillion worldwide (around 6% of global GDP). Microbiologically-influenced corrosion (MIC) is believed to play a major role in this, but precise estimates are prevented by our limited understanding of MIC-related processes. In the oil and gas sector biocorrosion is usually linked to the problem of "souring" caused by sulfate-reducing bacteria (SRB) that produce corrosive hydrogen sulfide in subsurface reservoirs and topsides facilities. To combat souring, reservoir engineers have begun turning to nitrate injection as a green biotechnology whereby sulfide removal can be catalysed by diverse sulfide-oxidising nitrate-reducing bacteria (soNRB). However, this promising technology is threatened by reports that soNRB could enhance localized corrosion through incomplete oxidation of sulfide to corrosive sulfur intermediates. It is likely that soNRB are corrosive under certain circumstances; end products of soNRB metabolism vary depending prevailing levels of sulfide (i.e., from the SRB-catalyzed reservoir souring) and nitrate (i.e., the engineering "nitrate dose" introduced to combat souring). Furthermore soNRB corrosion will depend on the specific physiological features of the particular strains present, which vary from field to field, but usually include members of the Epsilonproteobacteria - the most frequently detected bacterial phylum in 16S rRNA genomic surveys of medium temperature oil fields. A new era of biological knowledge is dawning with the advent of inexpensive, high throughput nucleic acid sequencing technologies that can now be applied to microbial genomics. New high throughput sequencing platforms are allowing unprecedented levels of interrogation of microbial communities at the DNA (genomic) and RNA (transcriptomic) levels. Engineering biology aims to harness the power of this biological "-omics" revolution by bringing these powerful tools to bear on industrial problems like biocorrosion. This project will combine genomics and transcriptomics with process measurements of soNRB metabolism and real time corrosion monitoring via linear polarization resistance. By measuring all of these variables in experimental oil field microcosms, and scaling-up to pan-industry oil field screening, a predictive understanding of corrosion linked to nitrogen and sulfur biotransformations will emerge, putting new diagnostic genomics assays in the hands of petroleum engineers. The oil industry needs green technologies like nitrate injection. This research will develop new approaches that will safeguard this promising technology by allowing nitrate-associated biocorrosion potential to be assessed in advance. This will enhance nitrate injection's ongoing successful application to be based on informed risk assessments.

  • Funder: UKRI Project Code: EP/K020404/1
    Funder Contribution: 585,535 GBP
    Partners: Thornhill Research Inc, GlaxoSmithKline, University of California, Berkely, Cardiff University, University of Toronto, GE Healthcare

    Diseases of the brain including neurological conditions, such as epilepsy, multiple sclerosis and dementia, and common psychiatric conditions such as depression and schizophrenia, have considerable personal, social and economic costs for the sufferers and their carers. Improving the tools at our disposal for quantifying brain function would help with diagnosis, choosing the right treatment for the patient and developing new, more effective, treatments. This proposal aims to develop a reliable non-invasive brain imaging method using magnetic resonance imaging (MRI) that maps, across the whole human brain with a spatial resolution of a few millimetres, the amount of oxygen that the brain is consuming. The rate of oxygen consumption, known as CMRO2, reflects neural activity and can change through disease processes. It provides a marker of disease and treatment related alterations in brain activity. Our proposed method would also map the functional characteristics of brain blood vessels whose health is crucial for the supply of oxygen and nutrients to the brain. Until recently, it has only been possible to quantitatively map the human brain's metabolic energy use through positron emission tomography (PET), which relies on radioactive tracers. The application of such measurements is limited, as in order to minimise radiation doses, it cannot be applied many times in the same patients or healthy volunteers. This hampers the repeated study of disease or treatment progression and the study of normal brain development and aging. Our proposed method would avoid the use of ionizing radiation, would be cheaper than PET and more widely available, and would expand the applications of quantified CMRO2 mapping to more centres, leading to improved treatment targeting and potential healthcare cost savings. We have performed some initial tests that show our proposed method to be feasible. It relies on mapping simultaneously the flow of blood to each part of the brain and the oxygenation of the blood leaving each part of the brain. Necessary for the measurement is the modulation of brain blood flow and oxygen levels, achieved by asking volunteers to breathe air enriched with carbon dioxide and oxygen. These procedures involve the volunteer wearing a face-mask but are safe and well tolerated. Our proposed method should yield additional information describing cerebrovascular properties compared to other recently-proposed methods. This means that it would require fewer assumptions which may be not be invalid in the diseased brain, giving our approach a wider scope of application and offering potentially richer clinical information. This proposal optimises our method to ensure it is efficient and reliable for widespread research and eventually clinical use. We propose a close collaboration between physicists developing the neuroimaging methodology and clinical academic researchers who will help us to demonstrate its clinical feasibility in two common neurological diseases, epilepsy and multiple sclerosis (MS). About 70% of the project will be methodological development to optimise our image acquisition and data analysis strategy to yield accurate and repeatable measurements within about 10 minutes of scanning. The remaining 30% of the project will validate the method in groups of epilepsy and MS patients who volunteer to help us with our research. Validation will be performed by comparison with PET, the current 'gold standard.' 
The project will develop and benefit from partnerships with academic and industrial researchers in the UK and internationally. In particular, the work has good potential for application in the drug development industry, a strong industrial sector in the UK, for the development of new and effective compounds to treat psychiatric and neurological disorders. This project would help maintain the UK at the forefront internationally of neuroimaging research, a position it has long held and from which it has benefitted.

  • Funder: UKRI Project Code: EP/K037161/1
    Funder Contribution: 489,871 GBP
    Partners: Airbus, LR IMEA, University of Southampton, Bombardier Inc

    Noise and vibration are important performance aspects in many mechanical systems. High noise and vibration levels can be detrimental to structures (e.g. causing damage) and to the human operators (e.g. causing fatigue or injury). Thus, it is important to be able to understand how structures vibrate and emit noise, i.e., their vibroacoustic behavior. Traditionally, engineers would try to describe the vibroacoustics using analytical methods. However, these are only possible for very simple structures. Structures that engineers confront in the aerospace, railway or maritime sectors are often made of composite panels that are connected together using complicated structural joints. The analysis of the vibroacoustics of such complex built-up structures cannot be performed analytically. Over the years, researchers have developed numerical techniques to solve this problem. Element-based methods (such as the finite element method) are well-developed and well-established methods with many commercial/in-house codes that can be used. However, aerospace, railway and maritime structures are relatively large. For example, a typical railway car can be modelled using the finite element method up to 500 Hz. Above this frequency, the size of the finite element model becomes too large, impractical and the associated computational cost becomes prohibitive. However, the audio frequency range is 20 Hz-20 kHz. At high frequency (above 10 kHz), the railway car can be modelled using energy-based methods such as the statistical energy analysis method. Energy-based statistical methods are valuable, but less well-established than element-based methods. The railway car example points to a frequency gap, indeed a mid-frequency gap, where neither element-based nor energy-based methods can be used. I am proposing to use wave methods to bridge the mid-frequency gap and to further strengthen energy methods. Waves provide a unifying, intuitive approach to vibroacoustics. The computational cost of a wave model is substantially small (especially when compared to a full finite element model), and the wave properties of structures can be obtained by post processing the finite element model of a small segment of an arbitrarily large structure. Thus, the goal of this programme is to develop a wave-based toolbox for modelling the vibroacoustics complex built-up structures. Industrial examples from the aerospace, railway and maritime sectors will be used to demonstrate the efficiency and effectiveness of the developed methods.

  • Funder: UKRI Project Code: EP/K034383/1
    Funder Contribution: 2,246,110 GBP
    Partners: University of Rome, AIM, Abdus Salam ICTP, University of Waterloo (Canada), University of Warwick

    L-functions and modular forms are fundamental mathematical objects that encode much of our knowledge of contemporary number theory. They form part of a web of interconnected objects, the understanding of which in the most basic cases lies at the foundations of much of modern mathematics. A spectacular example is Wiles' proof of Fermat's Last Theorem, which was an application of a fundamental "modularity" link between L-functions, modular forms and elliptic curves. This project will greatly extend and generalize such connections, both theoretically and computationally. The research vision inspiring our programme can be summarised as: "Breaking the boundaries of classical L-functions and modular forms, and exploring their applications to 21st-century mathematics, physics, and computer science". Our guiding goal is to push forward both theoretical and algorithmic developments, in order to develop L-functions and modular forms far beyond current capabilities. This programme will systematically develop an extensive catalogue of number theoretic objects, and will make this information available through an integrated online resource that will become an indispensable tool for the world's research community. L-functions are to pure mathematics what fundamental particles are to physics: their interaction reveal fundamental truths. To continue the analogy, computers are to number theorists what colliders are to particle physicists. Aside from their established role as empirical "testers" for conjectures and theories, experiments can often throw up quite unexpected phenomena which go on to reshape modern theory. Our programme will establish a major database and encyclopedia of knowledge about L-functions and related objects, which will play a role analogous to that of the LHC for the scientists at CERN. Both are at the threshold of tantalising glimpses into completely uncharted territories: higher degree L-functions for us and the Higgs boson for them. Theoretical and computational work on higher degree L-functions has only started to make substantial progress in the past few years. There do not currently exist efficient methods to work with these, and rigorous computations with them are not yet possible. Neither is there yet an explicit description of all ways in which degree 3 L-functions can arise. We will address these facets in our research programme: both algorithmic development and theoretical classification. As well as having theoretical applications to modularity relationships as in Wiles' proof, detailed knowledge of L-functions has more far-reaching implications. Collections of L-functions have statistical properties which first arose in theoretical physics. This surprising connection, which has witnessed substantial developments led by researchers in Bristol, has fundamental predictive power in number theory; the synergy will be vastly extended in this programme. In another strand, number theory plays an increasingly vital role in computing and communications, as evidenced by its striking applications to both cryptography and coding theory. The Riemann Hypothesis (one of the Clay Mathematics Million Dollar Millennium Problems) concerns the distribution of prime numbers, and the correctness of the best algorithms for testing large prime numbers depend on the truth of a generalised version of this 150-year-old unsolved problem. These are algorithms which are used by public-key cryptosystems that everyone who uses the Internet relies on daily, and that underpin our digital economy. 
Our programme involves the creation of a huge amount of data about a wide range of modular forms and L-functions, which will far surpass in range and depth anything computed before in this area. This in turn will be used to analyse some of the most famous outstanding problems in mathematics, including the Riemann Hypothesis and another Clay problem, the Birch and Swinnerton-Dyer conjecture.

  • Funder: UKRI Project Code: EP/H00324X/2
    Funder Contribution: 275,478 GBP
    Partners: University of Cambridge, University of Warwick, UBC, National High Magnetic Field Laboratory, ANL, EWU

    Resistance is futile: lightbulbs and heaters aside, the majority of electronic components are at their most efficient when their electrical resistance is minimized. In the present climate, with energy sustainability regularly topping the international agenda, reducing the power lost in conducting devices or transmission lines is of worldwide importance. Research into the nature of novel conducting materials is hence vital to securing the global energy future.
    Superconductivity, the phenomenon of zero electrical resistance which occurs below a critical temperature in certain materials, remains inadequately explained. At present, these critical temperatures are typically very low, less than 140 Kelvin (-133 Celsius), but a more complete understanding of what causes the superconducting state to form could result in the design of materials that display superconductivity at the enhanced temperatures required for mass technological exploitation. Unfortunately, it is the very materials most likely to lead us to this end, the so-called unconventional superconductors, that are the least understood. In such materials, the superconducting state appears to be in competition with at least two other phases of matter: magnetism and normal, metallic conductivity. A delicate balance governs which is the dominant phase at low temperatures: the ground state. By making slight adjustments to the composition of the materials, or by applying moderate pressures, certain interactions between the electrons in the compound can be strengthened at the expense of others, causing the balance to tip in favour of a particular ground state. The technicalities of how to do this are relatively well known. What remains to be explained is why it happens: what occurs at the vital tipping point where superconductivity wins out over the magnetic or metallic phases - in short, exactly what stabilizes the unconventional superconducting state? It is this question that the proposed project seeks to answer. I will use magnetic fields to explore the ground states exhibited by three families of unconventional superconductor: the famous cuprate superconductors (whose discovery in the 1980s revolutionized the field of superconductivity and which remain the record-holders for the highest critical temperature); some recently discovered superconductors based on the most magnetic of atoms, iron (the discovery of these new materials in the spring of 2008 came as something of a surprise, magnetism often being thought of as competing with superconductivity); and a family of materials based on superconducting layers of organic molecules. I propose to measure the strength of the interactions responsible for the magnetic and electronic properties of these materials as the systems are pushed, using applied pressure, through the tipping point at which superconductivity becomes dominant. In particular, the electronic interactions in layered materials like those considered here can only be reliably and completely determined via a technique known as angle-dependent magnetoresistance. This technique has yet to be applied to most unconventional superconductors, particularly at elevated pressures, most likely because it is experimentally challenging and familiar only to a handful of researchers. However, the reward for performing such experiments is far greater insight into the changes in interactions that occur at the very edge of the superconducting state.
    Chasing the mechanism responsible for stabilizing unconventional superconductivity is an ambitious aim, and many traditional experimental techniques have proved inadequate. It is becoming clear, in the light of recent advances in the field, that the route to success lies in subjecting high-quality samples to the most extreme probes available: a combination of high magnetic fields and high applied pressures.
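    For orientation, a minimal sketch of the kind of information angle-dependent magnetoresistance provides, using the textbook relation for quasi-two-dimensional metals rather than any formula quoted from this proposal: the resistance measured between the conducting layers oscillates as the sample is tilted in a magnetic field, with maxima at angles fixed by the size of the Fermi surface. The symbols below (interlayer spacing c, in-plane Fermi wavevector k_F, tilt angle theta) are standard notation assumed for this sketch.

    ```latex
    % A minimal sketch of the standard quasi-2D angle-dependent magnetoresistance
    % (Yamaji) condition; illustrative textbook physics, not taken from the proposal.
    % Interlayer-resistance maxima occur at tilt angles \theta_n satisfying
    \[
      c \, k_F \tan\theta_n = \pi\!\left(n - \tfrac{1}{4}\right), \qquad n = 1, 2, 3, \dots
    \]
    % where c is the interlayer spacing and k_F the in-plane Fermi wavevector, so
    % the angular spacing of the maxima gives a direct measure of the Fermi-surface
    % geometry as pressure tunes the material through the tipping point.
    ```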

  • Funder: UKRI Project Code: EP/K030558/1
    Funder Contribution: 724,429 GBP
    Partners: Queen's University Canada, University of Otago, Durham University

    Our research involves the theoretical and experimental investigation of quantum many-body dynamics in systems of ultra-cold atoms, with a view to developing next-generation rotational sensors, and to developing tools for, and improving our general understanding of, interacting many-body systems far from equilibrium. The central idea is based on using ultra-cold atoms with bosonic spin statistics, in contrast to, e.g., electrons orbiting an atomic nucleus, where two electrons with the same spin cannot occupy exactly the same energy level or orbital (fermionic spin statistics). This means that at sufficiently low temperatures a dilute atomic gas composed of such bosonic atoms undergoes a particular kind of phase transition. A phase transition is a sudden, qualitative change of state, like an ordinary gas condensing to a liquid as the temperature is lowered. The state of matter reached in the case of a very dilute, low-temperature gas of bosonic atoms is called a Bose-Einstein condensate. This can be seen as the atomic (matter-wave) equivalent of a laser: a coherent, intense source of atoms, with consequent advantages for measurement science or metrology (which in the case of light is limited by the minimum wavelength for the light to be visible to, and controlled by, conventional optics). Atom-atom interactions are, unfortunately, typically problematic, and tend to counteract the advantages of a coherent atomic source. We will build upon a proposal (suggested by one of the investigators) in which the issues associated with atom-atom interactions appear to be largely avoided thanks to an astutely chosen experimental geometry. In the process of investigating this proposed system, as well as a number of closely related issues, we will deepen our understanding of nonequilibrium dynamics (due, for example, to the crucial importance of avoiding such things as flow instabilities in any functioning rotational sensor), and develop broadly applicable theoretical tools accounting for the influence and production of complicated many-body effects. As such, our research falls within the EPSRC Physics Grand Challenges "Emergence and Physics Far From Equilibrium" (motivated by the fact that "dramatic collective behaviour can emerge unexpectedly in large complicated systems" and that "This fundamental work will be driven by the ever-present possibility that emergent states may provide the foundations for the technologies of the future") and "Quantum Physics for New Quantum Technologies" (motivated by "Next generation quantum technologies will rely on our understanding and exploitation of coherence and entanglement" and "Success requires a deeper understanding of quantum physics and a broad ranging development of the enabling tools and technologies"). Ultracold atoms are an ideal setting in which to investigate dynamics far from equilibrium, owing to the very high degree of flexibility in their experimental configurations (varying the experimental geometry, the strength of the interactions, and even whether the interactions are attractive or repulsive, by appropriate combinations of magnetic, laser and microwave fields), and atomic, molecular and optical (AMO) physics systems have a superlative record in precision measurement, most notably in the form of atomic clocks, which, for example, underpin the functioning of the global positioning system (GPS).
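    For context, a minimal sketch of the textbook condensation criterion for a uniform, non-interacting Bose gas follows; it is standard statistical mechanics rather than a result of this project, and in the trapped, interacting gases used in real experiments the transition temperature is modified.

    ```latex
    % A minimal sketch, assuming a uniform ideal (non-interacting) Bose gas of
    % number density n and atomic mass m; textbook result, not from the proposal.
    % Bose-Einstein condensation sets in below the critical temperature
    \[
      k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
      \qquad \zeta(3/2) \approx 2.612,
    \]
    % which for dilute alkali gases lies in the nanokelvin-to-microkelvin range;
    % interactions and the trapping geometry shift this value in practice.
    ```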

  • Funder: UKRI Project Code: EP/L002787/1
    Funder Contribution: 98,503 GBP
    Partners: Weizmann Institute of Science, University of Ottawa, University of Warwick

    The project is devoted to basic research in pure mathematics. It is based on the well-studied interplay between the theory of electrical networks (seen as abstract mathematical tools) and the theory of random walks on graphs. Four specific topics within this framework are addressed by the project:
    -- The Poisson boundary for random walks on graphs and groups. We follow an active tradition in group theory, triggered by Kesten, in which results about groups are obtained indirectly by considering a random walk on the group and relating its behaviour, or the structure of a boundary associated to it, to the algebraic properties of the group. The project benefits from a recent strong result of the applicant providing a criterion for the Poisson boundary, as well as a novel idea of associating a random finite graph, rather than a random walk, with a group, exploiting the recent theory of graphons by Lovasz et al.
    -- Discrete conformal uniformization in the sense of Benjamini & Schramm. We seek to strengthen a new result of the applicant, related to the above, that answered a question of Benjamini & Schramm. Such a strengthening will provide new results on the Poisson boundary.
    -- The relationship between the cover time and the cover cost in extremal and random finite graphs. The cover time of a graph is an important concept in mathematics and computer science, and is even studied by physicists, but it is very hard to compute or even to approximate. Using the concept of cover cost that the applicant introduced, we seek to simplify the approximation of the cover time for many classes of graphs by breaking it down into two steps: showing that it is close to the cover cost, and computing the (provably more tractable) cover cost.
    These topics lie in different areas of mathematics, all of which have seen a great deal of research activity in recent years. They are interlinked by the general theme of electrical networks, random walks and their interplay, and share further, finer interconnections. The project aims to contribute by producing new results individually for each sub-topic, as well as by establishing or strengthening the connections between them. The project's results will be of interest to several research communities, including Graph Theory, Probability, (discrete) Potential Theory and Group Theory.
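    As a concrete example of the network/random-walk dictionary on which the project builds (a classical identity, not one of the project's own results): when every edge of a graph is treated as a unit resistor, the expected round-trip time of a random walk between two vertices is proportional to the effective resistance between them.

    ```latex
    % A minimal sketch of the classical commute-time identity (Chandra et al.),
    % illustrating the electrical-network/random-walk interplay; not a result of
    % this project. For a connected graph G with m edges and vertices u, v,
    \[
      \mathbb{E}\big[\,C_{uv}\,\big] \;=\; H(u,v) + H(v,u) \;=\; 2m\,R_{\mathrm{eff}}(u,v),
    \]
    % where H(u,v) is the expected hitting time from u to v and R_eff(u,v) is the
    % effective resistance between u and v when each edge is a unit resistor.
    ```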

  • Project 2013 - 2018
    Funder: UKRI Project Code: EP/K033085/1
    Funder Contribution: 1,122,320 GBP
    Partners: Toshiba Corporation, University of Bristol, IMEC, TU/e, University of Toronto, University of Rome I (La Sapienza), Cornell University, Defence Science & Tech Lab DSTL, EHU, XMOS Ltd...

    Quantum information science and technologies offer a completely new and powerful approach to processing and transmitting information by combining two of the great scientific discoveries of the 20th century - quantum mechanics and information theory. By encoding information in quantum systems, quantum information processing promises huge computational power, while quantum communications is already in its first stages of commercialisation and offers the ultimate in information security. However, for quantum technologies to have as big an impact on science, technology and society as anticipated, a practical, scalable integration platform is required where all the key components can be integrated into a single micro-chip technology, very much akin to the development of the first microelectronic integrated circuits. Of the various approaches to realising quantum technologies, single particles of light (photons) are particularly appealing due to their low-noise properties and ease of manipulation at the single-qubit level. It is possible to harness the quantum mechanical properties of single photons, taking advantage of strange quantum properties such as superposition and entanglement to provide new ways to encode, process and transmit information. Quantum photonics promises to be a truly disruptive technology in information processing, communications and sensing, and for deepening our understanding of fundamental quantum physics and quantum information science. However, current approaches are limited to simple optical circuits with low photon numbers, inefficient detectors and no clear routes to scalability. For quantum optical information science to go beyond current limitations, and for quantum applications to have a significant real-world impact, there is a clear and urgent need to develop a fully integrated quantum photonic technology platform to realise large and complex quantum circuits capable of generating, manipulating and detecting large photon-number states. This Fellowship will enable the PI and his research team to develop such a technology platform, based on silicon photonics. Drawing on the advanced fabrication technologies developed for the silicon microelectronics industry, state-of-the-art silicon quantum photonic devices will enable compact, large-scale and complex quantum circuits, experiments and applications. This technology platform will overcome the current 8-photon barrier in a scalable way, enable circuits of unprecedented complexity, and will be used to address important fundamental questions, develop new approaches to quantum communications, enhance the performance of quantum sensing, provide a platform for new routes to quantum simulations, and achieve computational complexities that can challenge the limits of conventional computing. This multidisciplinary research programme will bring together engineers, physicists and industrial partners to tackle these scientific and technological challenges.
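    As a minimal illustration of what "encoding information in quantum systems" means here (standard textbook states, not the specific circuits or encodings proposed in the fellowship), a single photonic qubit, e.g. one photon shared between two waveguide paths, and a two-photon entangled state can be written as follows.

    ```latex
    % A minimal sketch using textbook notation; generic states only, not the
    % particular circuits developed in this fellowship.
    % A single path-encoded photonic qubit (one photon in a superposition of two
    % waveguide modes labelled |0> and |1>):
    \[
      |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
    \]
    % and a maximally entangled two-photon (Bell) state, the basic resource behind
    % the quantum communication and sensing applications mentioned above:
    \[
      |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle|0\rangle + |1\rangle|1\rangle\big).
    \]
    ```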