
UniGe

University of Genoa
Country: Italy
9 Projects, page 1 of 2
  • Funder: UKRI Project Code: EP/X010740/1
    Funder Contribution: 348,065 GBP

    Inverse problems are concerned with reconstructing the causes of a physical phenomenon from given observational data. They have wide applications across science and engineering, for example in medical imaging, signal processing, and machine learning. Iterative methods are a particularly powerful paradigm for solving a wide variety of inverse problems. The problem is often posed by defining an objective function that combines a data-fidelity term with assumptions about the sought quantity, and this objective is then minimised through an iterative process. Mathematics has played a critical role in analysing inverse problems and the corresponding algorithms.

    Recent advances in data acquisition and precision have resulted in datasets of increasing size for a vast number of problems, including computed and positron emission tomography. This increase in data size poses significant computational challenges for traditional reconstruction methods, which typically require the use of all the observational data in each iteration. Stochastic iterative methods address this computational bottleneck by using only a small subset of the observations in each iteration. The resulting methods are highly scalable and have been successfully deployed in a wide range of problems. However, the use of stochastic methods has thus far been limited by restrictive geometric assumptions, requiring Hilbert or Euclidean spaces.

    The proposed fellowship aims to address these issues by developing stochastic gradient methods for solving inverse problems posed in Banach spaces. The use of non-Hilbert spaces is gaining increased attention within the inverse problems and machine learning communities. Banach spaces offer much richer geometric structures and are a natural domain for many problems in partial differential equations and medical tomography. Moreover, Banach-space norms are advantageous for preserving important properties such as sparsity. This fellowship will introduce modern optimisation methods into classical Banach space theory, and its successful completion will create novel research opportunities for inverse problems and machine learning.
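
    To make the idea concrete, the following is a minimal sketch (not the fellowship's algorithm) of a stochastic gradient method for a toy linear inverse problem in Euclidean space: each iteration touches a single randomly chosen observation rather than the full dataset. All problem sizes, step sizes and data below are invented for illustration; extending such iterations beyond Hilbert/Euclidean geometry to Banach spaces is the generalisation the fellowship targets.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy linear inverse problem: recover x_true from noisy observations b = A x_true + noise.
        m, n = 2000, 50
        A = rng.standard_normal((m, n))
        x_true = rng.standard_normal(n)
        b = A @ x_true + 0.01 * rng.standard_normal(m)

        # Stochastic gradient descent on f(x) = (1/2m) * ||A x - b||^2,
        # using only one observation (one row of A) per iteration.
        x = np.zeros(n)
        step = 1e-3
        for k in range(20000):
            i = rng.integers(m)                    # pick one observation at random
            grad_i = (A[i] @ x - b[i]) * A[i]      # gradient of the i-th data-fit term
            x -= step * grad_i

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))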

  • Funder: UKRI Project Code: NE/S013431/1
    Funder Contribution: 42,396 GBP

    Vibrio cholerae is a marine bacterium that lives and feeds on the surfaces of tiny microscopic animals called zooplankton in the upper oceans. Zooplankton are dispersed by ocean currents, so Vibrio cholerae can spread to different regions via this route, potentially spreading infections. Vibrio numbers increase when sea surface temperatures rise and zooplankton are at high abundance. In humans, V. cholerae causes cholera, a diarrhoeal disease, along with skin infections, meningitis and septicaemia, when contaminated seafood is consumed or through bathing in contaminated waters. V. cholerae can also live in fresh or brackish water and so can infect people drinking contaminated freshwater too. There are many different variant forms of V. cholerae. In warmer climates, the epidemic O1 and O139 variants are endemic; these cause severe gastrointestinal disease leading to fatalities. However, a multitude of non-severe variants exist in temperate northern oceanic regions, such as the UK and Canada, and these have a more favourable outcome. Non-severe types can nevertheless evolve to become pathogenic, so it is important to monitor strain types to better predict and provide early warning of potential infectious events. Genetic methods are the best way to measure and track the multitude of ever-changing Vibrio cholerae strains, and several databases exist mapping the global distribution of different strains, including that of the European Union Reference Laboratory (EURL) hosted by CEFAS, the UK government laboratory that tests for food and water safety in UK waters.

    A recent outbreak of V. cholerae occurred in April 2018 on Vancouver Island, British Columbia, on the northwest coast of Canada, with four people suffering cholera infection after consuming fish eggs. This is a rare occurrence in temperate oceanic waters. The event happened soon after an unusual marine heatwave in this region between 2014 and 2016, and we are interested in determining whether the higher sea surface temperatures altered zooplankton communities in a way that enabled pathogenic V. cholerae to thrive. Such events may also happen in UK waters, as the English Channel and North Sea are the fastest-warming waters surrounding the UK. The waters surrounding BC, Canada, are regularly sampled by the Continuous Plankton Recorder (CPR) survey, which also records zooplankton species, and additionally by the Department of Fisheries and Oceans (DFO) in Canada, which has captured water samples very near the site of infection. Although CPR samples are preserved in formalin, which makes genetic detection difficult, we have nevertheless been able to detect and quantify variants of Vibrio from CPR samples.

    We propose a pilot study to concentrate Vibrio cholerae in these samples using Whole Genome Enrichment, allowing all variants of this bacterium to be detected by high-throughput sequencing. This will allow us to detect the infectious types and, by comparing them with strains from the EURL, find out where they came from, whether the strains started out as infectious, whether they are found elsewhere (such as UK waters), and the route they travelled to end up on Vancouver Island. We will also determine the extent to which increased sea surface temperatures allow human-infectious Vibrio cholerae to increase and persist in local waters and in wider oceanic regions.

    As zooplankton act as hosts to Vibrio cholerae, we will determine whether certain zooplankton species or groups harbour this pathogen and facilitate its dispersal and persistence in oceanic waters. This will allow us to work out whether these human-infectious Vibrio cholerae strains are a transient or persistent threat, and the environmental conditions in which they thrive. We will transmit this information to local governmental monitoring agencies so that they can set up an early warning system should this bacterium be found again.
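
    Purely as an illustration of the strain-comparison step (not the project's bioinformatics pipeline), the toy sketch below assigns a sequenced fragment to the closest of two invented reference strains by comparing k-mer sets; a real analysis would work with whole-genome data and curated reference collections such as the EURL database.

        # Toy illustration: assign a sequenced fragment to its closest reference strain
        # using Jaccard similarity of k-mer sets. All sequences and names are invented.
        def kmers(seq, k=4):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        references = {                     # hypothetical reference strains
            "O1-like": "ATGCGTACGTTAGCGTACGATCGTACGTTAGC",
            "non-toxigenic": "ATGCGTTTTGTAGCGTACGAACGTACGTTTGC",
        }
        sample = "ATGCGTACGTTAGCGTACGATCGTACGTTAGG"   # hypothetical read from a CPR sample

        scores = {name: jaccard(kmers(sample), kmers(seq)) for name, seq in references.items()}
        best = max(scores, key=scores.get)
        print(scores, "-> closest reference:", best)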

  • Funder: UKRI Project Code: EP/V036777/1
    Funder Contribution: 1,357,110 GBP

    This project brings together unique expertise in computational and experimental fluid dynamics, model reduction and artificial intelligence to identify solutions for the management of people and spaces during the current pandemic and after lockdown. A new interactive tool is proposed that evaluates the risk of infection in the indoor environment from droplets and aerosols generated when breathing, talking, coughing and sneezing. This capability will become more critical as winter approaches and building ventilation needs to be limited for comfort reasons. The fluid-dynamic behaviour of droplets and aerosols, the effect of face masks, and other parameters such as room volume, ventilation and number of occupants are considered. A datahub capable of storing, curating and managing heterogeneous data from sources internal and external to the project will be created. A synergistic experimental and numerical approach will be undertaken, complementing the existing literature and data from other EPSRC-funded projects and providing suitable datasets with adequate resolution in time and space for all the relevant features. To support the experiments and numerical simulations, reduced-order models capable of interpolating and extrapolating the scenarios collected in the database will be used. This will permit the estimation of droplet and aerosol concentrations and distributions in unseen scenarios at low computational cost, in near real-time. A state-of-the-art AI-based framework incorporating descriptive, predictive and prescriptive techniques will extract knowledge from the data, drive the decision-making process, and provide a near-real-time assessment of risk levels.
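
    As an illustration of the reduced-order-model idea (a sketch with synthetic data, not the project's models), the snippet below builds a proper-orthogonal-decomposition basis from stored "snapshot" fields and then approximates a new field with only a handful of coefficients, which is what makes near-real-time evaluation feasible.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic snapshot matrix: each column is a stored field from one simulated
        # scenario (here, random combinations of a few underlying modes plus noise).
        n_points, n_snapshots = 500, 40
        modes_true = rng.standard_normal((n_points, 3))
        snapshots = modes_true @ rng.standard_normal((3, n_snapshots)) \
                    + 0.01 * rng.standard_normal((n_points, n_snapshots))

        # Proper orthogonal decomposition: keep the r dominant left singular vectors.
        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        r = 3
        basis = U[:, :r]                             # reduced basis

        # A "new" field (a scenario not in the database) is approximated by projecting
        # onto the reduced basis: r numbers stand in for the full n_points values.
        new_field = modes_true @ rng.standard_normal(3)
        reduced_coords = basis.T @ new_field
        reconstruction = basis @ reduced_coords

        rel_err = np.linalg.norm(reconstruction - new_field) / np.linalg.norm(new_field)
        print(f"reduced dimension: {r}, relative reconstruction error: {rel_err:.3e}")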

    Usage counts: 49 views, 207 downloads
  • Funder: CHIST-ERA Project Code: CHIST-ERA-17-ORMR-004

    Humans excel when dealing with everyday objects and manipulation tasks, learning new skills, and adapting to different or complex environments. This is a basic skill for our survival as well as a key feature of our world of artefacts and human-made devices. Our expert ability to use our hands results from a lifetime of learning, both by observing other skilled humans and by discovering first hand how to handle objects ourselves. Unfortunately, today's robotic hands are still unable to achieve such a high level of dexterity in comparison with humans, nor are systems entirely able to understand their own potential. For robots to truly operate in a human world and fulfil expectations as intelligent assistants, they must be able to manipulate a wide variety of unknown objects by mastering their capabilities of strength, finesse and subtlety. To achieve such dexterity with robotic hands, cognitive capacity is needed to deal with uncertainties in the real world and to generalise previously learned skills to new objects and tasks. Furthermore, we assert that the complexity of programming must be greatly reduced and robot autonomy must become much more natural.

    The InDex project aims to understand how humans perform in-hand object manipulation and to replicate the observed skilled movements with dexterous artificial hands, merging the concepts of reinforcement and transfer learning to generalise in-hand skills across multiple objects and tasks. In addition, an abstraction and representation of previous knowledge will be fundamental for reproducing learned skills on different hardware. Learning will use data across multiple modalities, which will be collected, annotated and assembled into a large dataset. The data and our methods will be shared with the wider research community to allow testing against benchmarks and reproduction of results. More concretely, the core objectives are: (i) to build a multi-modal artificial perception architecture that extracts data on object manipulation by humans; (ii) to create a multi-modal dataset of in-hand manipulation tasks such as regrasping, reorienting and fine repositioning; (iii) to develop an advanced object modelling and recognition system, including the characterisation of object affordances and grasping properties, in order to encapsulate both explicit information and possible implicit object usages; (iv) to autonomously learn and precisely imitate human strategies in handling tasks; and (v) to build a bridge between observation and execution, allowing deployment that is independent of the robot architecture.
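
    To give a flavour of the learning machinery (a generic policy-gradient sketch on an invented one-dimensional task, not the InDex architecture), the snippet below uses REINFORCE to learn a corrective rotation for a toy reorientation problem. In a transfer-learning setting, a parameter learned on one object would serve as the starting point when a new object or task is encountered.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy, single-step "in-hand reorientation" task: the state is the orientation
        # error e of a grasped object, the action a is a corrective rotation sampled
        # from a Gaussian policy with mean theta * e, and the reward is -(e + a)^2.
        # REINFORCE should learn theta close to -1, i.e. a correction that cancels
        # the error.
        theta, sigma, alpha, batch = 0.0, 0.2, 0.1, 100

        for update in range(300):
            e = rng.uniform(-1.0, 1.0, size=batch)               # initial errors
            a = theta * e + sigma * rng.standard_normal(batch)   # sampled actions
            reward = -(e + a) ** 2
            score = (a - theta * e) * e / sigma ** 2             # d/dtheta log pi(a|e)
            theta += alpha * np.mean(score * reward)             # policy-gradient step

        print("learned correction gain:", round(float(theta), 3))   # close to -1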

    Usage counts: 69 views, 68 downloads
  • Funder: UKRI Project Code: EP/R019622/1
    Funder Contribution: 100,987 GBP

    This project concerns computational mathematics and logic. The aim is to improve the ability of computers to perform "Quantifier Elimination" (QE). We say a logical statement is "quantified" if it is preceded by a qualification such as "for all" or "there exists". Here is an example of a quantified statement: "there exist two distinct values of x such that ax^2 + bx + c = 0". While the statement is mathematically precise, the implications are unclear: what restrictions does this statement of existence force upon us? QE corresponds to replacing a quantified statement by an equivalent unquantified one. In this case we may replace the statement by "b^2 - 4ac > 0", which (for a not equal to 0) is the condition for the equation to have two distinct solutions. You may have recognised this equivalence from GCSE mathematics, when studying the quadratic equation. The important point here is that the latter statement can actually be derived automatically by a computer from the former, using a QE procedure. QE is not subject to the numerical rounding errors of most computations. Solutions are not in the form of a numerical answer but an algebraic description which offers insight into the structure of the problem at hand. In the example above, QE shows us not what the solutions to a particular quadratic equation are, but how in general the number of solutions depends on the coefficients a, b, and c.

    QE has numerous applications throughout engineering and the sciences. An example from biology is the determination of medically important values of parameters in a biological network, while another from economics is identifying which hypotheses in economic theories are compatible, and for what values of the variables. In both cases QE can theoretically help, but in practice the size of the statements means state-of-the-art procedures run out of computer time or memory. The extensive development of QE procedures means they offer many options and choices about how they are run. These decisions can greatly affect how long QE takes, rendering an intractable problem easy and vice versa. Making the right choice is a critical but understudied problem, and is the focus of this project. At the moment QE procedures make such choices either under the direct supervision of a human or based on crude human-made heuristics (rules of thumb based on intuition and experience but with limited scientific basis). The purpose of this project is to replace these with machine learning techniques. Machine Learning (ML) is an overarching term for tools that allow computers to make decisions that are not explicitly programmed, usually involving the statistical analysis of large quantities of data. ML is quite at odds with the field of Symbolic Computation, which studies QE, as the latter prizes exact correctness and so shuns probabilistic tools, making its application here very novel. We are able to combine these different worlds because every choice that ML will make still produces a correct and exact answer (but with different computational costs).

    The project follows pilot studies undertaken by the PI, which experimented with one ML technique and found that it improved upon existing heuristics for two particular decisions in a QE algorithm. We will build on this by working with the spectrum of leading ML tools to identify the optimal techniques for application in Symbolic Computation. We will demonstrate their use both for low-level algorithm decisions and for choices between different theories and implementations.

    Although focused on QE, we will also demonstrate ML as a new route to optimisation in Computer Algebra more broadly, and the work encompasses Project Partners and events to maximise this impact. Finally, the project will deliver an improved QE procedure that makes use of ML automatically, without user input. This will be produced in the commercial Computer Algebra software Maple in collaboration with industrial Project Partner Maplesoft.
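
    As a sketch of the machine-learning side (with invented features and labels, not the project's data or pipeline), the snippet below trains a small decision tree to choose between two hypothetical QE strategies from simple features of the input formula; whichever strategy is chosen, the QE result itself remains exact.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(4)

        # Synthetic training set: each row holds simple features of a quantified
        # formula (number of variables, maximum degree, number of polynomials), and
        # the label records which of two QE strategies (e.g. variable orderings) was
        # faster on that problem. All values here are invented for illustration.
        n = 400
        features = np.column_stack([
            rng.integers(2, 8, n),      # number of variables
            rng.integers(1, 6, n),      # maximum total degree
            rng.integers(1, 20, n),     # number of polynomials
        ])
        # Invented rule standing in for measured timings: strategy 1 wins on
        # high-degree problems, strategy 0 otherwise (plus a little noise).
        labels = (features[:, 1] + rng.normal(0, 0.5, n) > 3).astype(int)

        model = DecisionTreeClassifier(max_depth=3).fit(features[:300], labels[:300])
        print("held-out accuracy:", model.score(features[300:], labels[300:]))

        # At QE time, the learned model picks a strategy from the formula's features
        # before any expensive computation starts; either choice is still exact.
        print("chosen strategy:", model.predict([[3, 5, 7]])[0])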
