The following results are related to Canada. Interested in more results? Visit OpenAIRE - Explore.
6 Projects, page 1 of 1

  • Canada
  • 2017-2021
  • CHIST-ERA

  • Funder: CHIST-ERA Project Code: M2CR
    Partners: Computer Vision Center, Université de Montréal / LISA, Université du Mans / LIUM

    Communication is one of the necessary conditions for developing intelligence in living beings. Humans use several modalities to exchange information: speech, written text (both in many languages), gestures, images, and many more. There is evidence that human learning is more effective when several modalities are used. A large body of research aims to make computers process these modalities and, ultimately, understand human language. These modalities have, however, generally been addressed independently or at most in pairs. Yet merging information from multiple modalities is best done at the highest levels of abstraction, which deep learning models are trained to capture. The M2CR project aims to develop a revolutionary approach that combines all these modalities and their respective tasks in one unified architecture, based on deep neural networks and including both a discriminant and a generative component through multiple levels of representation. Our system will jointly learn from resources in several modalities, including but not limited to text in several languages (European languages, Chinese and Arabic), speech and images. In doing so, the system will learn one common semantic representation of the underlying information, both at a channel-specific level and at a higher, channel-independent level. Pushing these ideas to large scale, e.g. by training on very large corpora, the M2CR project has the ambition to advance the state of the art in human language understanding (HLU). M2CR will address all major tasks in HLU with one unified architecture: speech understanding and translation, multilingual image retrieval and description, etc. The M2CR project will collect existing multimodal and multilingual corpora, extend them as needed, and make them freely available to the community. M2CR will also define shared tasks to set up a common evaluation framework and ease research for other institutions beyond the partners of this consortium.
All developed software and tools will be open-source. In this way, we hope to help advance the field of human language understanding.
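The core idea of a common semantic representation across modalities can be sketched as modality-specific encoders projecting into one shared space, trained so that matched pairs score higher than mismatched ones. The following is an illustrative toy sketch (random projections, an InfoNCE-style contrastive loss), not the M2CR architecture; all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Project modality-specific features into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy dimensions: 'text' features (64-d) and 'image' features (128-d)
# both map into a 32-d channel-independent space.
W_text = rng.normal(size=(64, 32))
W_image = rng.normal(size=(128, 32))

text_feats = rng.normal(size=(8, 64))    # batch of 8 paired examples
image_feats = rng.normal(size=(8, 128))

z_text = encode(text_feats, W_text)
z_image = encode(image_feats, W_image)

# Contrastive objective: matched (text, image) pairs sit on the diagonal
# of the similarity matrix and should outscore the off-diagonal mismatches.
logits = z_text @ z_image.T / 0.07       # temperature-scaled cosine similarities
labels = np.arange(len(logits))
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_softmax[labels, labels].mean()
print(f"contrastive loss: {float(loss):.3f}")
```

Training the projection matrices to minimize this loss is what pulls the channel-specific representations toward a shared semantic space.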

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-005
    Partners: Concordia University, Université du Québec à Montréal, SU, LABORATOIRE INFORMATIQUE IMAGE INTERACTION, university of La Rochelle

    Since the emergence of Cloud Computing and the associated Over-The-Top (OTT) value-added service providers more than a decade ago, the architecture of the communication infrastructure − namely the Internet and the (mobile) telecommunication infrastructure − has kept improving, with computing, caching and networking services becoming more tightly coupled. OTTs are moving from being purely cloud-based to being more distributed and residing close to the edge, a concept known as “Fog Computing”. Network operators and telecom vendors advertise the “Mobile Edge Computing (MEC)” capabilities they may offer within their 5G Radio-Access and Core Networks. Lately, the GAFAM companies (Google, Apple, Facebook, Amazon and Microsoft) have come into play as well, offering what are known as Smart Speakers (Amazon Echo, Apple HomePod and Google Home), which can also serve as IoT hubs with “Mist/Skin Computing” capabilities. While these have an important influence on the underlying network performance, such computing paradigms are still loosely coupled with each other and with the underlying communication and data storage infrastructures, even in the forthcoming 5G systems. It is expected that a tight coupling of computing platforms with the networking infrastructure will be required in post-5G networks, so that a large number of distributed and heterogeneous devices belonging to different stakeholders can communicate and cooperate with each other in order to execute services or store data in exchange for a reward. This is what we call here the smart collaborative computing, caching and networking paradigm. 
The objective of the SCORING project is to develop and analyse this new paradigm by targeting research challenges split across five strata:
  • Computing stratum: proactive placement of computing services, taking into account user mobility as well as per-node battery status and computing load.
  • Storage stratum: proactive placement of stores and optimal caching of contents/functions, taking into account the joint networking and computing constraints.
  • Software stratum: efficient management of micro-services in such a multi-tenant distributed realm, exploiting Information-Centric Networking principles to support both name and compute-function resolution.
  • Networking stratum: enforcement of dynamic routing policies, using Software Defined Networking (SDN), to satisfy distributed end-user computation requirements and their Quality of Experience (QoE).
  • Resource management stratum: design of new network-economic models to support service offering in an optimal way, considering the multi-stakeholder nature of the collaborative computing, caching and networking paradigm proposed in this project.
Smartness will come from using adequate mathematical tools in combination for the design of each of the five strata: machine learning (proactive placement problems), multi-objective optimization, graph theory and complex networks (information-centric design of content and micro-service caching) and game theory (network-economic models). Demonstrating the feasibility of the proposed strata on a realistic, integrated testbed as well as on an integrated simulation platform (based on available open-source network-simulation toolkits) will be one of the main goals of the project. The testbed will be built by exploiting different virtualization (VM/container) technologies to deploy compute and storage functions within a genuine networking architecture. 
Last but not least, all building blocks forming the realistic, integrated testbed, on the one hand, and the integrated simulation platform, on the other, will be made available to the research community at the end of the project as open-source software.
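The computing-stratum challenge (proactive service placement under capacity and latency constraints) can be illustrated with a deliberately simple greedy heuristic. This is a hypothetical sketch, not SCORING's method: real solvers would also model user mobility, battery state, and multi-objective trade-offs, and all node/service names here are invented.

```python
def place_services(services, nodes):
    """Greedy placement: assign each service (largest load first) to the
    feasible node with the lowest latency. Illustrative heuristic only."""
    placement = {}
    capacity = {n: spec["capacity"] for n, spec in nodes.items()}
    for name, load in sorted(services.items(), key=lambda s: -s[1]):
        feasible = [n for n in nodes if capacity[n] >= load]
        if not feasible:
            raise RuntimeError(f"no node can host {name}")
        best = min(feasible, key=lambda n: nodes[n]["latency_ms"])
        placement[name] = best
        capacity[best] -= load
    return placement

# Hypothetical topology: two edge nodes (low latency, small capacity)
# and a distant cloud (large capacity, high latency).
nodes = {
    "edge-a": {"capacity": 4, "latency_ms": 5},
    "edge-b": {"capacity": 8, "latency_ms": 12},
    "cloud":  {"capacity": 100, "latency_ms": 40},
}
services = {"video-analytics": 6, "sensor-agg": 2, "cache": 3}
print(place_services(services, nodes))
# The heavy service lands on edge-b (edge-a is too small), the rest fill the edge.
```

Even this toy version shows why the problem is non-trivial: placement order and capacity fragmentation interact, which is where the project's optimization and learning tools come in.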

  • Funder: CHIST-ERA Project Code: IGLU
    Partners: Inria Bordeaux Sud-Ouest / Flowers Team, University of Mons / Numediart Research Institute, University of Zaragoza, Université de Sherbrooke, Université de Lille 1, KTH Royal Institute of Technology

    Language is an ability that develops in young children through joint interaction with their caretakers and their physical environment. At this level, human language understanding can be described as interpreting and expressing semantic concepts (e.g. objects, actions and relations) through what can be perceived (or inferred) from the current context in the environment. Previous work in the field of artificial intelligence has failed to address the acquisition of such perceptually-grounded knowledge in virtual agents (avatars), mainly because of the lack of physical embodiment (the ability to interact physically) and of dialogue and communication skills (the ability to interact verbally). We believe that robotic agents are more appropriate for this task, and that interaction is such an important aspect of human language learning and understanding that pragmatic knowledge (identifying or conveying intention) must be present to complement semantic knowledge. Through a developmental approach in which knowledge grows in complexity, driven by multimodal experience and language interaction with a human, we propose an agent that will incorporate models of dialogue, human emotions and intentions as part of its decision-making process. This will enable anticipation and reaction based not only on its internal state (own goals and intentions, perception of the environment), but also on the perceived state and intentions of the human interactant. This will be made possible by developing advanced machine learning methods (combining developmental, deep and reinforcement learning) to handle large-scale multimodal inputs, besides leveraging the state-of-the-art technological components of a language-based dialogue system available within the consortium. Evaluations of learned skills and knowledge will be performed using an integrated architecture in a culinary use case, and novel databases enabling research in grounded human language understanding will be released. 
IGLU will gather an interdisciplinary consortium of committed and experienced researchers in machine learning, neuroscience and cognitive science, developmental robotics, speech and language technologies, and multimodal/multimedia signal processing. We expect key impacts on the development of more interactive and adaptable systems sharing our environment in everyday life.

  • Funder: CHIST-ERA Project Code: CHIST-ERA-17-ORMR-007
    Partners: Université Laval, University of Birmingham, UniPi, CTU

    In this project, the team of researchers will address the problem of autonomous robotic grasping of objects in challenging scenes. We consider two industrially and economically important open challenges which require advanced vision-guided grasping. 1) “Bin-picking” for manufacturing, where components must be grasped from a random, self-occluding heap inside a bin or box. Parts may have known models, but will only be partially visible in the heap and may have complex shapes. Shiny/reflective metal parts make 3D vision difficult, and the bin walls impose difficult reach-to-grasp and visibility constraints. 2) Waste materials handling, which may involve hazardous (e.g. nuclear) waste, or materials for recycling in the circular economy. Here the robot has no prior models of object shapes, and grasped materials may also be deformable (e.g. contaminated gloves, hoses). The proposed project comprises two parallel thrusts: perception (visual and tactile) and action (planning and control for grasping/manipulation). However, perception and action are tightly coupled, and this project will build on recent advances in “active perception” and “simultaneous perception and manipulation” (SPAM). In the first thrust, we will exploit recent advances in 3D sensor technology and develop perception algorithms that are robust in challenging environments, e.g. handling shiny (metallic) or transparent (glass/perspex) objects, self-occluding heaps, known objects which may be deformable or fragmented, and unknown objects which lack any pre-existing models. In the second thrust, autonomous grasp planners will be developed with respect to the visual features perceived by the algorithms developed in the first thrust. Grasps must be planned to be secure, but must also provide affordances that facilitate post-grasp manipulative actions, and afford collision-free reach-to-grasp trajectories. 
Perceptual noise and uncertainty will be overcome in two ways: using computationally adaptive algorithms and mechanically adaptive underactuated hands. An object initially grasped by an accessible feature may need to be re-grasped (for example, a tool that is not initially graspable by its handle). We will develop re-grasping strategies that exploit object properties learned during the initial grasp or manipulative actions. Overarching themes in the project are: methods that are generalisable across platforms; reproducibility of results; and the transfer of data. Therefore, the methods proposed in the two thrusts will be tested for reproducibility by implementing them in the different partners’ laboratories, using both similar and different hardware. Large amounts of data will be collected throughout these tests, and published online as a set of international benchmark vision and robotics challenges, curated by Université Laval once the project is completed.
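The planning criterion described above (grasps that are secure, afford post-grasp manipulation, and remain reachable) can be caricatured as scoring candidate grasps on weighted criteria. This is a hypothetical illustration only; the project's planners are far richer, and every name, weight, and score below is invented.

```python
def rank_grasps(candidates, w_security=0.6, w_clearance=0.4):
    """Rank candidate grasps by a weighted sum of grasp security and
    reach-to-grasp clearance. Toy stand-in for a real grasp planner."""
    def score(g):
        return w_security * g["security"] + w_clearance * g["clearance"]
    return sorted(candidates, key=score, reverse=True)

# Two hypothetical candidates on a tool in a heap: the handle is the most
# secure feature but is occluded; the rim is accessible but less secure.
candidates = [
    {"id": "handle", "security": 0.9, "clearance": 0.2},
    {"id": "rim",    "security": 0.6, "clearance": 0.9},
]
best = rank_grasps(candidates)[0]
print(best["id"])  # the accessible feature wins under these weights
```

The outcome mirrors the re-grasping scenario in the text: the planner first grasps the accessible feature, then a re-grasp strategy can recover the handle once the object is free of the heap.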

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-SDCDN-003
    Partners: Queen’s University Belfast, NTUA, ÉTS, UCL, INRIA

    The potential offered by the abundance of sensors, actuators and communications in the IoT era is hindered by the limited computational capacity of local nodes, making the distribution of computing in time and space a necessity. Several key challenges need to be addressed in order to optimally and jointly exploit the network, computing, and storage resources, while guaranteeing feasibility for time-critical and mission-critical tasks. Our research takes up these challenges by dynamically distributing resources when demand is rapidly time-varying. We first propose an analytical dynamical model of the resources, offered workload, and networking environment that incorporates phenomena encountered in wireless communications, mobile edge computing data centres, and network topologies. We also propose a new set of estimators for the time-varying workload and resource profiles that continuously update the model parameters. Building on this framework, we aim to develop novel resource allocation mechanisms that explicitly take into account service differentiation and context-awareness and, most importantly, provide formal guarantees for well-defined QoS/QoE metrics. Our research also goes well beyond the state of the art in the design of control algorithms for cyber-physical systems (CPS), by incorporating resource allocation mechanisms into the decision strategy itself. We propose a new generation of controllers, driven by a co-design philosophy for both network and computing resource utilization. This paradigm has the potential to cause a quantum leap in crucial fields of engineering, e.g., Industry 4.0, collaborative robotics, logistics, multi-agent systems, etc. To achieve these breakthroughs, we utilize and combine tools from automata and graph theory, machine learning, modern control theory and network theory, fields where the consortium has internationally leading expertise. 
Although researchers from Computer and Network Science, Control Engineering and Applied Mathematics have proposed various approaches to tackle the above challenges, our research constitutes the first truly holistic, multidisciplinary approach that combines and extends recent, albeit fragmented, results from all the aforementioned fields, thus bridging the gap between the efforts of different communities. Our developed theory will be extensively tested on the available experimental testbed infrastructures of the participating entities. The efficiency of the overall proposed framework will be tested and evaluated in three complex use cases involving mobile autonomous agents in IoT environments: (i) distributed remote path planning of a group of mobile robots with complex specifications, (ii) rapid deployment of mobile agents for distributed computing purposes in disaster scenarios, and (iii) mobility-aware resource allocation for crowded areas with pre-defined performance indicators to meet.
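The "estimators that continuously update the model parameters" idea can be sketched with the simplest recursive estimator there is: an exponentially weighted update that tracks a time-varying workload. This is illustrative only; the project's estimators are necessarily more elaborate, and the forgetting factor and demand trace below are assumptions.

```python
class WorkloadEstimator:
    """Recursive (exponentially weighted) estimator of a time-varying
    workload profile. Each observation nudges the estimate toward the
    new value; alpha trades tracking speed against noise rejection."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # forgetting factor in (0, 1]
        self.estimate = None

    def update(self, observation):
        if self.estimate is None:
            self.estimate = float(observation)
        else:
            self.estimate += self.alpha * (observation - self.estimate)
        return self.estimate

est = WorkloadEstimator(alpha=0.5)
for demand in [10, 10, 30, 30, 30]:   # workload jumps at the third step
    tracked = est.update(demand)
print(round(tracked, 2))               # estimate converging toward 30
```

A resource allocator built on top of such an estimator can re-provision proactively as the estimate drifts, rather than reacting only after queues build up.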

  • Funder: CHIST-ERA Project Code: CHIST-ERA-18-ACAI-005
    Partners: IM2NP, C2N, IBM Research Zurich Laboratory, Université de Sherbrooke

    Edge computing (EC) and the development of portable devices such as cell phones, autonomous robots or health-tracking systems represent one of the big challenges for artificial intelligence (AI) deployment. These hardware systems have very tight constraints on energy consumption and computing power that today’s AI strategies cannot cope with. While high-power GPUs are well suited to the deep neural network implementations that should strongly benefit AI development, ultra-low-power and robust computing with limited resources is needed for EC applications. To this end, we propose to explore the hardware implementation of small-scale neural networks with limited complexity that could satisfy EC requirements. Notably, spiking neural networks (SNNs) present a real opportunity here, since they can combine low-power operation with non-trivial computing functions, as biological neural networks do. In fact, SNNs of moderate size can reproduce important aspects that are not considered in state-of-the-art machine learning approaches: i) non-linear dynamical regimes (e.g. synchronized or critical dynamics, attractor dynamics, sequences of spikes) that might explain basic mechanisms in perception, and ii) the fast computing that occurs in the brain even though neurons are slow. The UNICO project proposes to address the hardware implementation of such SNNs by integrating, in dedicated hardware, the key ingredients at work in them. We can anticipate that the physical implementation of such highly parallel systems will encounter strong limitations with conventional technologies. A real breakthrough for Information and Communication Technologies would be to capitalize on emerging nanotechnologies to implement these SNNs efficiently on ultra-low-power hardware. 
Here, state-of-the-art analog resistive memory technologies, or memristive devices, will be developed and integrated into the Back End Of Line of CMOS to implement analog SNNs. By gathering competences from materials science, device engineering, neuromorphic engineering and machine learning, we will explore how such SNNs can be deployed on various computing tasks of interest for EC applications. The expected innovations at both the hardware and computing levels could benefit a wide range of AI applications in the future.
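The basic SNN building block referred to above can be sketched as a discrete-time leaky integrate-and-fire (LIF) neuron: the membrane potential leaks each step, integrates input, and emits a spike (then resets) when it crosses threshold. This is a minimal software sketch of the generic LIF model, not UNICO's memristive implementation; all parameters are illustrative.

```python
def lif_simulate(input_current, v_th=1.0, leak=0.9, v_reset=0.0):
    """Simulate a discrete-time leaky integrate-and-fire neuron.
    Returns the spike train (1 = spike) for the given input sequence."""
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input current
        if v >= v_th:
            spikes.append(1)      # threshold crossed: emit a spike...
            v = v_reset           # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Constant sub-threshold drive: the potential builds up over a few steps
# and the neuron fires periodically.
print(lif_simulate([0.4] * 10))
```

In the memristive setting, the leak and integration would be realized by analog device physics rather than arithmetic, which is where the ultra-low-power gains are expected.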
