This project looks at Qur’an translations as a central medium through which Muslims across the globe today approach their faith. Since the early 20th century, Qur’an translations have been produced in nearly all languages read by Muslims, by a variety of individual and institutional actors operating across nation-state borders. GloQur aims to elucidate three major transnational dimensions of the burgeoning field of Qur’an translation and their interdependence. First, it will examine transnational governmental and non-governmental actors in the field, as well as the translations they produce. Second, it strives to transcend the simple dichotomy between Arabic and ‘vernacular’ languages by analysing, from a historical perspective, the complex centre-periphery structures created by the spread of European languages such as English, French and Russian. Third, we will study the negotiation and reconstruction of a shared exegetical heritage in various linguistic, social and ideological settings. We will examine the conditions in which translations were and still are commissioned and produced, the literary history and ideological backdrop of translations, the translators’ decisions as they become manifest in the texts, and the use of those texts by local audiences. By studying the role of Qur’an translations in specific Muslim communities, as well as their use in social media, we seek to shed light on the linguistic, cultural and religious significance attributed to them and on the processes through which specific translations are elevated to a position of authority. GloQur will thus bridge the gap between philological, historical and anthropological approaches to modern and contemporary Muslim engagement with the Qur’an. By developing an analytical framework for understanding the translation of a sacred text as a transnational religious, social and political practice, the project will break new ground in understanding the global dynamics of contemporary Islam.
Deep neural networks (DNNs) have led to dramatic improvements in the state of the art for many important classification problems, such as object recognition from images or speech recognition from audio data. However, DNNs are also notoriously dependent on the tuning of their hyperparameters. Since manual tuning is time-consuming and requires expert knowledge, recent years have seen the rise of Bayesian optimization methods for automating this task. While these methods have had substantial successes, their treatment of DNN performance as a black box poses fundamental limitations, leaving manual tuning more effective for large and computationally expensive datasets: humans can (1) exploit prior knowledge and extrapolate performance from data subsets, (2) monitor the DNN's internal weight optimization by stochastic gradient descent over time, and (3) reactively change hyperparameters at runtime. We therefore propose to model DNN performance beyond the black-box level and to use these models to develop, for the first time: 1. Next-generation Bayesian optimization methods that exploit data-driven priors to optimize performance orders of magnitude faster than currently possible; 2. Graybox Bayesian optimization methods that have access to -- and exploit -- performance and state information of algorithm runs over time; and 3. Hyperparameter control strategies that learn across different datasets to adapt hyperparameters reactively to the characteristics of any given situation. DNNs enter our project in two ways. First, in all our methods we will use (Bayesian) DNNs to model and exploit the large amounts of performance data we will collect on various datasets. Second, our application goal is to optimize and control DNN hyperparameters far better than human experts and to obtain: 4. Computationally inexpensive auto-tuned deep neural networks, even for large datasets, enabling the widespread use of deep learning by non-experts.
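The graybox idea in point 2, observing the performance of runs over time and reallocating budget accordingly, can be illustrated with a minimal successive-halving sketch. Everything here is an illustrative assumption rather than the project's actual method: the synthetic `learning_curve` stands in for real DNN validation accuracy, and the hyperparameter is a single learning rate.

```python
import random

def learning_curve(lr, epochs):
    # Synthetic stand-in for validation accuracy after `epochs` epochs of
    # training with learning rate `lr` (illustrative, not real DNN data).
    best_lr = 0.1
    ceiling = 1.0 - abs(lr - best_lr)      # better lr -> higher ceiling
    return ceiling * (1.0 - 0.5 ** epochs)  # accuracy improves over epochs

def successive_halving(candidates, min_epochs=1, eta=2, rounds=3):
    """Graybox-style search: train all configs a little, observe partial
    performance, and keep only the most promising fraction each round."""
    configs = list(candidates)
    epochs = min_epochs
    for _ in range(rounds):
        scored = sorted(configs,
                        key=lambda lr: learning_curve(lr, epochs),
                        reverse=True)
        configs = scored[:max(1, len(scored) // eta)]  # prune weak runs early
        epochs *= eta                                  # more budget for survivors
    return configs[0]

random.seed(0)
candidates = [random.uniform(0.001, 0.5) for _ in range(8)]
best = successive_halving(candidates)
```

Because weak configurations are discarded after only a few epochs, most of the training budget goes to promising runs, which is the efficiency gain a black-box optimizer, forced to train every configuration to completion, cannot achieve.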
AppSAM will unlock the synthetic capability of S-adenosylmethionine (SAM)-dependent methyltransferases and radical SAM enzymes for application in environmentally friendly and fully sustainable reactions. The biotechnological application of these enzymes will provide access to chemo-, regio- and stereoselective methylations and alkylations, as well as to a wide range of complex rearrangement reactions that are currently not possible through traditional approaches. Methylation reactions are of particular interest due to their importance in epigenetics, cancer metabolism and the development of novel pharmaceuticals. As chemical methylation methods often involve toxic compounds and rarely exhibit the desired selectivity and specificity, there is an urgent need for new, environmentally friendly methodologies. The proposed project will meet these demands by providing modular in vitro and in vivo systems that can be tailored to specific applications. In the first phase of AppSAM, efficient in vitro SAM-regeneration systems will be developed for use with methyltransferases as well as radical SAM enzymes. To achieve this aim, enzymes from different biosynthetic pathways will be combined in multi-enzyme cascades; methods from enzyme and reaction engineering will be used for optimisation. The second phase of AppSAM will address application on a preparative scale. This will include the isolation of pure product from the in vitro systems, reactions using immobilised enzymes, and extracts from in vivo production. In addition to E. coli, the methylotrophic bacterium Methylobacterium extorquens AM1 will be used as a host for the in vivo systems. M. extorquens can use C1 building blocks such as methanol as the sole carbon source, thereby initiating the biotechnological methylation process from a green source material and making the process fully sustainable, as well as compatible with an envisaged “methanol economy”.
Machine learning has become a key technology for modern data-driven industrial applications. This success is built on recent research advances in artificial intelligence, and more specifically on key advances in machine learning. Unfortunately, the performance of many machine learning methods is very sensitive to a myriad of design decisions and therefore requires substantial machine learning expertise, which is often rare; this makes the technology inaccessible to small and medium-sized companies that cannot afford their own team of machine learning experts. My ERC grant BeyondBlackbox on automated machine learning (AutoML) addresses this problem from a research perspective. In it, my team and I developed methods that systematically and efficiently adapt and tune machine learning pipelines, and implemented them in a research prototype. This research prototype, in principle, gives ML novices easy and affordable access to the most advanced ML methods, automatically customized for the user's own data; with it, my team and I have won several competitions, including against up to 130 teams of human ML experts. The potential economic impact is substantial, since AutoML technology saves computational resources and human time and therefore reduces the cost of creating value from ML. In this POC project, my team and I will transform our existing research prototype into a professional prototype, perform a technical validation, conduct market research, and build up business contacts to evaluate the prototype in an industrial setting. Furthermore, we will develop a sustainable business model and assess ways of commercializing the advances made in my ERC grant in order to bring them to market.