Partner: armasuisse Partner contact: Alain Mermoud EPFL laboratory: Distributed Information Systems Laboratory (LSIR) EPFL contact: Prof. Karl Aberer, Angelika Romanou The objective of the TMM project is to identify, at an early stage, the risks associated with new technologies and to develop solutions to ward off such threats. It also aims to assess existing products and applications to pinpoint vulnerabilities. In that process, artificial intelligence and machine learning will play an important part. The main goal of this project is to automatically identify the technology offerings of Swiss companies, especially in the cyber security domain. This also includes identifying key stakeholders in these companies, possible patents, and published scientific papers.
Topics • Machine Learning
Partner: Microsoft Partner contact: Dimitrios Dimitriadis, Emre Kıcıman, Robert Sim, Shruti Tople EPFL laboratory: Data Science Lab (dlab) EPFL contact: Prof. Robert West, Valentin Hartmann, Maxime Peyrard As machine learning (ML) models are becoming more complex, there has been a growing interest in making use of decentrally generated data (e.g., from smartphones) and in pooling data from many actors. At the same time, however, privacy concerns about organizations collecting data have risen. As an additional challenge, decentrally generated data is often highly heterogeneous, thus breaking assumptions needed by standard ML models. Here, we propose to “kill two birds with one stone” by developing Invariant Federated Learning, a framework for training ML models without directly collecting data, while not only being robust to, but even benefiting from, heterogeneous data.
Topics • Machine Learning
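The core idea behind federated learning, on which the project builds, is that clients train locally and share only model updates, never raw data. The sketch below illustrates this with a toy linear-regression task across heterogeneous clients (different dataset sizes and feature scales); it is a minimal FedAvg-style illustration, not the project's Invariant Federated Learning method, and all names and parameters are assumptions for the example.

```python
import numpy as np

def local_update(w, X, y, lr=0.05, steps=10):
    # Each client runs a few gradient steps on its own data;
    # the raw data never leaves the client.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(updates, sizes):
    # The server aggregates client models, weighted by local dataset size.
    total = sum(sizes)
    return sum(u * (n / total) for u, n in zip(updates, sizes))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w_global = np.zeros(2)

# Heterogeneous clients: different sizes and feature scales,
# mimicking decentrally generated data.
clients = []
for n, scale in [(50, 1.0), (200, 3.0), (20, 0.5)]:
    X = rng.normal(scale=scale, size=(n, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

for _ in range(30):
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(updates, [len(y) for _, y in clients])

print(np.round(w_global, 2))
```

In plain FedAvg, heterogeneity like the differing feature scales above is a known source of bias; making the trained model robust to (or even benefit from) such heterogeneity is precisely the gap the proposed framework targets.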
Partner: Microsoft Partner contact: Adrien Ghosn, Marios Kogias EPFL laboratory: Data Center Systems Laboratory (DCSL) , HexHive Laboratory EPFL contact: Prof. Edouard Bugnion, Prof. Mathias Payer Confidential computing is an increasingly popular means of driving wider Cloud adoption. By offering confidential virtual machines and enclaves, Cloud service providers can now host organizations, such as banks and hospitals, that must abide by stringent legal requirements regarding their clients' data confidentiality. Unfortunately, confidential computing solutions depend on bleeding-edge emerging hardware that (1) takes a long time to roll out at Cloud scale and (2), as a recent technology, is subject to frequent changes and potential security vulnerabilities. This proposal leverages existing commodity hardware, combined with new programming-language and formal-methods techniques, to identify how to provide similar or even stronger confidentiality and integrity guarantees than existing confidential hardware.
Topics • Privacy Protection & Cryptography
Partner: ICRC, funded by HAC Partner contact: TBD EPFL laboratory: Decentralized Distributed Systems Laboratory (DEDIS) EPFL contact: Prof. Bryan Ford To serve the 80 million forcibly-displaced people around the globe, direct cash assistance is gaining acceptance. ICRC’s beneficiaries often do not have, or do not want, the ATM cards or mobile wallets normally used to spend or withdraw cash digitally, because issuers would subject them to privacy-invasive identity verification and potential screening against sanctions and counterterrorism watchlists. On top of that, existing solutions increase the risk of data leaks or surveillance induced by the many third parties having access to the data generated in the transactions. The proposed research focuses on the identity, account, and wallet management challenges in the design of a humanitarian cryptocurrency or token intended to address the above problems.
Partner: ICRC, CHUV Partner contact: Mary-Anne Hartley EPFL laboratory: Machine Learning and Optimization Laboratory (MLO), intelligent Global Health Research group EPFL contact: Prof. Martin Jaggi, Mary-Anne Hartley Point-of-Care Ultrasound (PoCUS) is a powerfully versatile and virtually consumable-free clinical tool for the diagnosis and management of a range of diseases. While the promise of this tool in resource-limited settings may seem obvious, its implementation is limited by inter-user bias, requiring specific training and standardisation. This makes PoCUS a good candidate for computer-aided interpretation support. Our study proposes the development of a PoCUS training program adapted to resource-limited settings and the particular needs of the ICRC.
Partner: armasuisse Partner contact: Albert Blarer EPFL laboratory: Chair of Biostatistics EPFL contact: Prof. Mats J. Stensrud In this report we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we reference these case studies and other examples to clarify ideas and provide examples of how researchers are approaching topics using clear causal thinking.
Topics • Machine Learning
Partner: ICRC, funded by HAC Partner contact: Fabrice Lauper EPFL laboratory: Distributed Information Systems Laboratory (LSIR) EPFL contact: Prof. Karl Aberer, Rebekah Overdorf In this project, we are working with the ICRC to develop technical methods to combat social media-based attacks against humanitarian organizations. We are uncovering how the phenomenon of weaponizing information impacts humanitarian organizations and developing methods to detect and prevent such attacks, primarily via natural language processing and machine learning methods.
Partner: Cyber-Defence Campus (armasuisse) Partner contact: Ljiljana Dolamic EPFL laboratory: Signal Processing Laboratory (LTS4) EPFL contact: Prof. Pascal Frossard, Sahar Sadrizadeh Recently, deep neural networks have been applied in many different domains due to their impressive performance. However, it has been shown that these models are highly vulnerable to adversarial examples: inputs that differ only slightly from the original but mislead the target model into generating wrong outputs. Various methods have been proposed to craft such examples for image data, but they are not readily applicable to Natural Language Processing (NLP). In this project, we aim to propose methods for generating adversarial examples against NLP models, such as neural machine translation models, in different languages. Moreover, through adversarial attacks, we aim to analyze the vulnerability and interpretability of these models.
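Why image-domain attacks do not transfer directly to NLP: text is discrete, so instead of adding continuous noise, an attacker must swap characters or words while preserving meaning. The toy sketch below (not the project's method; the classifier, vocabulary, and substitution table are all invented for illustration) shows a greedy character-level attack that replaces letters with look-alike characters, pushing a word out of a toy bag-of-words model's vocabulary and flipping its prediction.

```python
# Toy bag-of-words "sentiment" model: hand-set word weights stand in
# for a trained NLP model (purely hypothetical).
WEIGHTS = {"good": 2.0, "great": 2.5, "poor": -1.5, "bad": -2.0}
SUBS = {"o": "0", "e": "3", "a": "@"}  # visually similar characters

def score(tokens):
    # Out-of-vocabulary tokens contribute nothing to the score.
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def predict(tokens):
    return "positive" if score(tokens) > 0 else "negative"

def char_attack(tokens):
    """Greedily try one look-alike character swap per word, keeping a
    swap only if it pushes the score toward 'negative'."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        for ch, rep in SUBS.items():
            if ch in tok:
                cand = tokens[:i] + [tok.replace(ch, rep, 1)] + tokens[i + 1:]
                if score(cand) < score(tokens):
                    tokens = cand
                    break
    return tokens

orig = "the food was good but the service was poor".split()
adv = char_attack(orig)
print(" ".join(adv))
print(predict(orig), "->", predict(adv))  # positive -> negative
```

A single near-invisible change ("good" becomes "g0od") flips the prediction, because the perturbed word falls out of the model's vocabulary; real attacks on neural machine translation use the same discrete search idea against far richer models.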
Partner: armasuisse Partner contact: Gérôme Bovet EPFL laboratory: Signal Processing Laboratory (LTS4) EPFL contact: Prof. Pascal Frossard State-of-the-art modulation recognition architectures use deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, posing serious questions in terms of safety, security, and performance guarantees at large. One of the best ways to make a model robust is adversarial training, in which the model is fine-tuned on these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
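The adversarial training loop described above can be sketched in a few lines: at each step, craft a perturbation that increases the loss on the current input, then fit the model on the perturbed input instead. The example below uses FGSM perturbations on a toy logistic-regression classifier; it is a minimal illustration of the standard technique, not the project's improved solution, and the data and hyperparameters are invented for the demo. The inner per-example attack also shows why the method is computationally costly.

```python
import numpy as np

def fgsm(w, x, y, eps):
    # Fast Gradient Sign Method: move the input in the direction that
    # most increases the loss, bounded by eps per feature.
    p = 1 / (1 + np.exp(-(x @ w)))
    grad_x = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = fgsm(w, x, y, eps)    # craft a perturbation on the fly
            p = 1 / (1 + np.exp(-(x_adv @ w)))
            w -= lr * (p - y) * x_adv     # update on the adversarial input
    return w

# Two well-separated Gaussian classes as stand-in "signal" data.
rng = np.random.default_rng(1)
X_pos = rng.normal(size=(200, 2)) + np.array([1.5, 1.5])
X_neg = rng.normal(size=(200, 2)) - np.array([1.5, 1.5])
X = np.vstack([X_pos, X_neg])
Y = np.concatenate([np.ones(200), np.zeros(200)])

w = adversarial_train(X, Y)
acc = np.mean(((X @ w) > 0) == Y)
print(round(acc, 2))
```

Note the cost structure: every training example triggers an extra gradient computation for the attack, and a single eps-ball of additive noise says nothing about other corruption types, which is exactly the pair of drawbacks the project sets out to address.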