Datacenter traffic is experiencing double-digit growth, challenging the scalability of current network architectures. The emerging concept of disaggregation exacerbates bandwidth and latency demands, while new cloud business opportunities call for reliable inter-datacenter networking. PROJECT will develop an end-to-end solution extending from the datacenter architecture and optical subsystem design to the overlaying control plane and application interfaces. The PROJECT hybrid electronic-optical network architecture scales linearly with the number of datacenter hosts, offers Ethernet granularity, and saves up to 94% in power and 30% in cost. It consolidates compute and storage networks over a single Ethernet optical TDMA network. Low latency, hardware-level dynamic reconfigurability and quasi-deterministic QoS are supported in view of disaggregated datacenter deployment scenarios. A fully functional control-plane overlay will be developed, comprising an SDN controller along with its interfaces. The southbound interface abstracts the physical-layer infrastructure and allows dynamic hardware-level network reconfiguration. The northbound interface links the SDN controller to application requirements through an Application Programming Interface. PROJECT's innovative control plane enables Application-Defined Networking and merges hardware and software virtualization over the hybrid optical infrastructure. It also integrates SDN modules and functions for inter-datacenter connectivity, enabling dynamic bandwidth allocation based on the needs of migrating VMs as well as on existing Service Level Agreements, for transparent networking across telecom and datacenter operators' domains. Fully functional network subsystems will be prototyped: a 400Gb/s hybrid Top-of-Rack switch, a 50Gb/s electronic-optical smart Network Interface Card and a fast optical pod switch. The PROJECT concept will be demonstrated both in the lab and in its operational environment, covering intra- and inter-datacenter scenarios.
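The application-driven bandwidth allocation described above can be illustrated with a toy northbound computation: given per-flow demands (e.g. from migrating VMs), split a TDMA frame's slots in proportion to demand, capped by link capacity. This is a minimal sketch under assumed numbers; the function name, frame model, and parameters are hypothetical illustrations, not PROJECT's actual API.

```python
# Toy sketch of application-defined bandwidth allocation over a TDMA frame.
# All names and figures here are illustrative assumptions, not PROJECT's design;
# only the 400Gb/s ToR link rate comes from the abstract.

def allocate_slots(demands_gbps, frame_slots=100, link_gbps=400):
    """Split a TDMA frame's slots among flows in proportion to their demand,
    scaled down uniformly when total demand exceeds the link capacity."""
    total = sum(demands_gbps.values())
    scale = min(1.0, link_gbps / total) if total else 0.0
    alloc = {}
    for flow, demand in demands_gbps.items():
        granted_gbps = demand * scale
        alloc[flow] = round(frame_slots * granted_gbps / link_gbps)
    return alloc

# Two migrating VMs and a storage flow oversubscribe a 400Gb/s ToR uplink,
# so each receives half of its request.
print(allocate_slots({"vm-migration-a": 200, "vm-migration-b": 200, "storage": 400}))
```

In a real SDN deployment, the northbound API would receive such demands from applications, and the southbound interface would push the resulting slot map to the optical hardware.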
MIKELANGELO is a project that aims to disrupt the core technologies underlying Cloud computing, enabling broader uptake of Cloud computing, HPC in the Cloud and Big Data technologies under one umbrella. The vision of MIKELANGELO is to improve the responsiveness, agility and security of the virtual infrastructure through packaged applications, using the lean guest operating system OSv and the superfast hypervisor SuperKVM. In short, the work will concentrate on improving virtual I/O in KVM, drawing on additional virtio expertise, integrated with the lightweight operating system OSv and with enhanced security. The HPC-in-the-Cloud focus will be provided through the involvement of a large HPC centre with both the ability and the business need to move its HPC services to the cloud. The consortium consists of hand-picked experts (e.g., the original creator of KVM, Avi Kivity) who together aim to remove one of the last performance hurdles in virtualisation: I/O. Other layers of inefficiency are addressed through OSv, a thin guest operating system, with everything packaged under OpenStack or OpenNebula. This approach will allow the MIKELANGELO stack to run on heterogeneous infrastructures with high responsiveness, agility and improved security. The target audience is primarily SMEs (e.g. simulation-dependent SMEs). Finally, the use cases have clear owners, directly contributing to exploitation.
The cloud computing industry has grown massively over the last decade, and with that new areas of application have arisen. Some areas require specialized hardware that needs to be placed close to the user. User requirements such as ultra-low latency, security and location awareness are becoming increasingly common, for example in Smart Cities, industrial automation and data analytics. Modern cloud applications have also become more complex, as they usually run on a distributed computer system, split into components that must run with high availability. Unifying such diverse systems into centrally controlled compute clusters and providing sophisticated scheduling decisions across them are two major challenges in this field. Scheduling decisions for a cluster consisting of cloud and edge nodes must consider unique characteristics such as variability in node and network capacity. The common solution for orchestrating large clusters is Kubernetes; however, it is designed for reliable, homogeneous clusters. Many applications and extensions are available for Kubernetes; unfortunately, none of them optimizes for both performance and energy, or addresses data and job locality. In DECICE, we develop an open and portable cloud management framework for automatic and adaptive optimization of applications by mapping jobs to the most suitable resources in a heterogeneous system landscape. Using holistic monitoring, we construct a digital twin that mirrors the original system. An AI-scheduler makes decisions on job and data placement and reschedules jobs to adjust to system changes. A virtual training environment generates test data for training ML models and for exploring what-if scenarios. The portable framework is integrated into the Kubernetes ecosystem and validated using relevant use cases on real-world heterogeneous systems.
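The kind of placement decision described above, weighing energy cost, data locality and network capacity across heterogeneous cloud and edge nodes, can be sketched as a simple node-scoring step. The node model, weights and scoring formula below are illustrative assumptions, not DECICE's actual scheduler design.

```python
# Minimal sketch of a locality- and energy-aware placement score, loosely in
# the spirit of an AI-scheduler for mixed cloud/edge clusters. All weights,
# fields and thresholds are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float        # cores currently available
    watts_per_core: float  # marginal energy cost of one more busy core
    has_data: bool         # is the job's input data already on this node?
    link_mbps: float       # current network capacity toward the node

def score(node, cpu_request, w_energy=0.5, w_locality=0.3, w_net=0.2):
    """Return a placement score (higher is better); -1 marks an infeasible node."""
    if node.free_cpu < cpu_request:
        return -1.0                                 # filter: job does not fit
    energy = 1.0 / (1.0 + node.watts_per_core)      # cheaper energy scores higher
    locality = 1.0 if node.has_data else 0.0        # prefer not moving data
    net = min(node.link_mbps / 1000.0, 1.0)         # saturate at 1 Gb/s
    return w_energy * energy + w_locality * locality + w_net * net

def pick_node(nodes, cpu_request):
    """Choose the best feasible node, or None if the job fits nowhere."""
    best = max(nodes, key=lambda n: score(n, cpu_request))
    return best.name if score(best, cpu_request) >= 0 else None
```

A production scheduler would plug such logic into Kubernetes' extension points and feed the node state from live monitoring (or, as in DECICE, from a digital twin) rather than static fields.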
FAIRCORE4EOSC focuses on the development and realisation of EOSC-Core components supporting a FAIR EOSC, addressing gaps identified in the SRIA. Leveraging existing technologies and services, the project will develop nine new EOSC-Core components aimed at improving the discoverability and interoperability of a growing volume of research outputs. FAIRCORE4EOSC will also contribute to the EOSC Interoperability Framework by establishing new guidelines for the new EOSC-Core components. The new components will be crucial in supporting the FAIR research life cycle. Five user-centric case studies (climate change, social sciences and humanities, mathematics, national research information systems, and research data management communities) will drive the development and testing of the new components, ensuring they are tailored to user needs (co-design). All the selected case studies share challenges common to many other stakeholder groups: research communities at European and national level have datasets that currently cannot be found in the EOSC; they use Digital Object Identifiers (DOIs) but lack PIDs for different levels of aggregation; and they use community-specific services to manage metadata, which makes cross-discipline reuse and interoperability complex. The user stories and best practices drawn from the case studies will be used to foster uptake of the new components beyond the project partners. The 22 complementary partners of the FAIRCORE4EOSC consortium have long-standing experience in the provision and development of research data services, persistent identifiers, metadata and semantic registries, and services and tools to archive and reference research software. The partners have also contributed significantly to the EOSC SRIA and are active members of the EOSC Association Task Forces (TFs), giving the project unique insight and capacity to boost the development of the Web of FAIR Data and Related Services.