Extreme-scale HPC systems pose significant challenges for developers aiming to build applications that efficiently utilise all available resources. In particular, the development of such applications involves the complex and labour-intensive tasks of managing parallel control flow, data dependencies and underlying hardware resources, each of which constitutes a challenging problem on its own. The AllScale environment, the focus of this project, will provide a novel, sophisticated approach that decouples the specification of parallelism from the associated management activities during program execution. Its foundation is a parallel programming model based on nested recursive parallelism, which opens up the potential for a variety of compiler- and runtime-based techniques that extend the capabilities of the resulting applications. These include (i) the automated porting of applications from small- to extreme-scale architectures; (ii) flexible tuning of program execution to satisfy trade-offs among multiple objectives, including execution time, energy and resource usage; (iii) the management of hardware resources and associated parameters (e.g. clock speed); (iv) the integration of resilience measures to compensate for isolated hardware failures; and (v) online performance monitoring and analysis. All of these services will be provided in an application-independent, reusable fashion by a combination of sophisticated, modular and customisable compiler and runtime solutions. AllScale will boost the development productivity, portability, and runtime, energy and resource efficiency of parallel applications targeting small- to extreme-scale parallel systems by leveraging the inherent advantages of nested recursive parallelism, and will be validated with applications from fluid dynamics, environmental hazard and space weather simulations provided by SME, industry and scientific partners.
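To make the programming model concrete: nested recursive parallelism is the divide-and-conquer pattern in which each recursive call may itself run in parallel and spawn further parallel calls. The following is a minimal, language-agnostic sketch of that pattern in Python; the function name `parallel_sum` and the cutoff value are illustrative and not part of the AllScale API, which targets C++.

```python
import threading

def parallel_sum(data, lo, hi, cutoff=1024):
    """Sum data[lo:hi] using nested recursive parallelism."""
    # Base case: the range is small enough to compute sequentially.
    if hi - lo <= cutoff:
        return sum(data[lo:hi])
    mid = (lo + hi) // 2
    left_result = []
    # Recurse on the left half in a new thread; each recursion level
    # may spawn further threads, which is what makes the parallelism nested.
    t = threading.Thread(
        target=lambda: left_result.append(parallel_sum(data, lo, mid, cutoff)))
    t.start()
    right = parallel_sum(data, mid, hi, cutoff)  # right half in this thread
    t.join()
    return left_result[0] + right
```

Because the whole computation is expressed as one recursive structure, a compiler and runtime can freely decide where to cut off parallel recursion, which is the property the techniques (i)–(v) above exploit.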
The widespread use of sensor and IoT devices is generating huge volumes of time series data in industries such as finance, energy, manufacturing and medicine. Industries use these data for monitoring, but their main potential is still untapped. Existing techniques and software for time series management are neither scalable nor sophisticated enough to manage such volumes of data or to provide adequate forecasting, prediction and diagnostics. MORE will create a platform that addresses the technical challenges of time series and stream management, focusing on the renewable energy sources (RES) industry. MORE’s platform will introduce an architecture that combines edge and cloud computing so as to guarantee responsiveness and deliver sophisticated analytics simultaneously. This architecture will be combined with time series summarization techniques, or, as we more accurately term them in MORE, modelling techniques for sensor data. A model is any compressed representation (e.g. a linear function) that allows the original data points of a time series to be reconstructed within a known error bound (possibly zero). This approach has synergies with edge computing, since summarization can be performed at the edge, reducing the load across the whole data processing pipeline. MORE will introduce advanced analytics tools for prediction, forecasting and diagnostics based on two technological directions: machine learning, and pattern extraction with emphasis on motifs, the state of the art for time series. MORE will adapt these techniques to work directly on models of the data, enabling them to scale beyond the state of the art. The ability to ingest huge volumes of data will have an important impact on the accuracy of the prediction and diagnostics models.
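As a minimal sketch of error-bounded modelling (not MORE’s actual techniques, which are more general), the following Python function performs a classic greedy piecewise-constant summarization in the spirit of "poor man's compression" with a midrange value: it emits segments `(start, end, value)` such that every original point in the segment can be reconstructed within a declared error bound `eps`. All names are illustrative.

```python
def summarize_midrange(values, eps):
    """Greedy piecewise-constant model of a time series.

    Returns segments (start, end, value) such that every original point
    in positions [start, end] is within eps of the stored value.
    """
    segments = []
    start, lo, hi = 0, values[0], values[0]
    for i in range(1, len(values)):
        v = values[i]
        nlo, nhi = min(lo, v), max(hi, v)
        # If the running range exceeds 2*eps, no single value can cover
        # all points within eps: close the segment with its midrange.
        if nhi - nlo > 2 * eps:
            segments.append((start, i - 1, (lo + hi) / 2))
            start, lo, hi = i, v, v
        else:
            lo, hi = nlo, nhi
    segments.append((start, len(values) - 1, (lo + hi) / 2))
    return segments
```

Each segment replaces many raw points with one value plus its extent, which is exactly the kind of compressed representation that can be computed at the edge before shipping data to the cloud.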
In this proposal, we address the transparency and explainability of AI using approaches inspired by control theory. Notably, we consider comprehensive and flexible certification of properties of AI pipelines, certain closed loops, and more complicated interconnections. At one extreme, one could consider risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could consider nuanced, post hoc communication of the exact trade-offs involved in AI pipeline choices and their effects on industrial and bias outcomes. Both extremes offer little scope for optimizing the pipeline and are inflexible in explaining its fairness-related qualities. Seeking a middle ground, we propose a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference and post-processing steps with certain properties. Furthermore, we present an extensive programme in the explainability of fairness-related qualities, seeking to inform both developers and users thoroughly about the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly. Three use cases (in human resources automation, financial technology and advertising) will be used to assess the effectiveness of our approaches.
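As a minimal sketch of the "hard constraint on a bias measure" idea, not the proposal's actual certification machinery, the following computes one common bias measure (statistical parity difference between groups) and uses it as a certification gate with a declared threshold. The function names and the 0.1 threshold are illustrative assumptions.

```python
def statistical_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(group_preds) / len(group_preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def certify(preds, groups, threshold=0.1):
    """A priori gate: a pipeline stage passes only if the bias measure
    stays below the declared threshold."""
    gap = statistical_parity_gap(preds, groups)
    return gap <= threshold, gap
```

In a modular composition, each stage (pre-processing, training, inference, post-processing) would carry such a certified property, and the certificate of the whole pipeline would follow from composing the per-stage guarantees.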
As occupant behaviour can be considered one of the main drivers of the performance gap, TOPAs will focus on reducing the gap from an operational perspective, thereby supporting Post Occupancy Evaluation. Quantifying the performance gap is non-trivial: the gap depends on time and contextual factors, and each building exhibits its own particular gap. The delivery of energy efficiency projects through energy performance contracts and ESCOs is widely seen as a way of addressing the sub-optimal post-installation performance of energy efficiency technologies. Since this model is very attractive from many perspectives and is identified as a central route to delivering energy efficiency gains in the EPBD, methods and models for the accurate measurement and verification of energy savings are essential to the growth of the ESCO market. The energy audit process is generally carried out over a fixed duration at a specific point in time. A key outcome is the identification and root-cause analysis of energy inefficiencies, on the basis of which a plan is put in place to minimise them. This can be very effective at reducing a building's energy consumption. However, in practice it can be difficult to identify all issues (in some cases because of conflicting system-level goals), and the persistence of savings can be poor, so inefficiencies re-appear. Continuous energy auditing turns this one-off process into a constant rolling cycle in which a detailed overview of building performance is always available, making it possible to refine the energy management plan. TOPAs adopts the principle of continuous performance auditing and considers not only energy use but also how buildings are used and their climatic state, thus providing a holistic performance audit process through supporting tools and methodologies that minimise the gap between predicted and actual energy use.
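The rolling-cycle idea can be sketched as follows. This is an illustrative toy, not the TOPAs methodology: it assumes paired series of predicted and metered consumption per period, computes the relative performance gap for each period, and flags periods where the gap persistently exceeds a tolerance over a rolling window, which is when a one-off audit would typically have missed a re-appearing inefficiency. All names and thresholds are assumptions.

```python
def performance_gap(predicted, actual):
    """Relative performance gap: positive when actual use exceeds prediction."""
    return (actual - predicted) / predicted

def continuous_audit(predicted_series, actual_series, tolerance=0.10, window=3):
    """Flag period indices where the gap exceeds the tolerance for an
    entire rolling window, mimicking a continuous audit cycle."""
    gaps = [performance_gap(p, a)
            for p, a in zip(predicted_series, actual_series)]
    flags = []
    for i in range(window - 1, len(gaps)):
        # Persistent exceedance: every period in the window is over tolerance.
        if all(g > tolerance for g in gaps[i - window + 1:i + 1]):
            flags.append(i)
    return flags
```

A one-off audit corresponds to evaluating a single window once; the continuous version re-evaluates every period, so savings that decay over time show up as new flags rather than going unnoticed.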